The research in Chaps. 4–10 focused on the qualitative analysis of complex neural networks with delays. It is well known that the qualitative analysis of nonlinear dynamical systems is the foundation for controlling such systems. Therefore, in this chapter the controller design problem is studied for a class of stochastic Cohen-Grossberg neural networks with mode-dependent mixed time delays and Markovian switching, with the aim of stabilizing the neural dynamical networks. The contents of this chapter are based on the research results in [1].

11.1 Introduction

In recent decades, neural networks have been successfully applied in various fields such as optimization, image processing, and associative memory design. In such applications, it is important to know the stability properties of the designed neural network; these properties include asymptotic stability and exponential stability. However, time delays inevitably exist in neural networks for various reasons [2]. The existence of time delays may lead to complex dynamic behaviors such as oscillation, divergence, chaos, instability, or other poor performance of the neural networks. Since neural networks usually have a spatial extent, there is a distribution of propagation delays over a period of time. In these circumstances, the signal propagation is not instantaneous and cannot be modeled with discrete-time delays [3]. A more appropriate way is to incorporate both discrete and continuously distributed time delays into the neural network model [2, 4]. Stability analysis for neural networks with delays has attracted increasing interest in recent years; see, for example, [5–21] and the references therein.

On the other hand, the stabilization issue has been an important focus of research in the control field, and several feedback stabilizing control design approaches have been proposed (see [7, 22–25]). Some interesting results [6, 26–35] on the stabilization of a wide range of different types of neural networks have been reported in the literature. For a class of discrete-time dynamic neural networks, reference [29] proposes two methods, namely the gradient projection and the minimum distance projection, to investigate stabilization. For a class of dynamic neural network systems with unknown nonlinearities, a global robust stabilizing controller is developed in [6] via Lyapunov stability and inverse optimality. For a class of linearly coupled stochastic neural networks, some results are derived in [31] on the design of the minimum number of controllers for pinning stabilization, which are expressed in terms of strict linear matrix inequalities (LMIs). For a class of neutral neural networks with varying delays, a novel criterion for global stabilization is obtained in [28] using Razumikhin’s method. For a class of so-called standard neural network models with time delays, a few stabilization criteria are presented in [30] based on the Lyapunov–Krasovskii stability theory and the LMI approach. For a class of impulsive high-order Hopfield-type neural networks with time-varying delays, some stabilization criteria are reported in [26] by employing the Lyapunov–Razumikhin technique. Very recently, for a class of neural networks with various activation functions and time-varying continuously distributed delays, LMI-based delay-dependent conditions for global exponential stabilization have been obtained in [27]. Despite good progress on the stability analysis of delayed neural networks with various activation functions [36–38], the stabilization issue has not been fully explored in the existing studies.

Although the stabilization problem for several kinds of neural networks with or without time delays has been investigated, no results have been reported on the stabilization of stochastic Cohen-Grossberg neural networks with both Markovian jumping parameters and mixed mode-dependent time delays. As is well known, mode-dependent time delays are of practical significance, since the signal may switch between different modes and also propagate in a distributed way during a certain time period in the presence of a number of parallel pathways [24]. The purpose of this chapter is to deal with the control problem for a class of stochastic neural networks with mode-dependent delays [1]. By introducing a new Lyapunov–Krasovskii functional that accounts for the mode-dependent mixed delays, stochastic analysis is conducted to derive delay-dependent criteria for the exponential stabilization problem. The feedback stabilizing controller is designed so that the closed-loop system satisfies an exponential stability constraint. The stabilization criteria are obtained in terms of LMIs, and hence the control gain matrix is easily determined numerically with MATLAB's LMI Control Toolbox. Three numerical examples are given to demonstrate the feasibility of the delay-dependent stabilization criteria.
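Although the numerical work in this chapter uses MATLAB's LMI Control Toolbox, the same feasibility-plus-certificate workflow can be illustrated with any semidefinite programming front end. The following minimal Python/CVXPY sketch is a substitute for the toolbox, with a toy Lyapunov LMI \(A^TP+PA<0,\ P>0\) standing in for the much larger inequalities of this chapter: declare the matrix variables, enforce strict inequalities through a small margin, solve the feasibility problem, and read off the certificate.

```python
import cvxpy as cp
import numpy as np

# Toy stand-in for the chapter's LMIs: find P > 0 with A^T P + P A < 0.
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])        # assumed stable test matrix
P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6                         # small margin enforcing strict inequalities
constraints = [P >> eps * np.eye(2),
               A.T @ P + P @ A << -eps * np.eye(2)]
problem = cp.Problem(cp.Minimize(0), constraints)   # pure feasibility problem
problem.solve()
print(problem.status, P.value)     # 'optimal' together with a certificate P
```

In the same way, once the LMIs (11.4)–(11.11) below are declared, the solver either returns the decision matrices or reports infeasibility.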

Throughout this chapter, the shorthand \(\mathrm{col}\{M_1,M_2,\ldots ,M_l\}\) denotes a column matrix with the matrices \(M_1,M_2,\ldots ,M_l.\) \(\left( \varOmega ,\mathcal {F},\{\mathcal {F}_t\}_{t\ge 0},\mathcal {P}\right) \) denotes a complete probability space with a filtration \(\{\mathcal {F}_t\}_{t\ge 0}\) satisfying the usual conditions, i.e., the filtration is right continuous and contains all \(\mathcal {P}\)-null sets. \(\mathcal {L}^p_{\mathcal {F}_0}([-h,0],\mathbb {R}^n)\) denotes the family of all \(\mathcal {F}_0\)-measurable \(\mathbb {C}\left( [-h,0];\mathbb {R}^n\right) \)-valued random variables \(\xi =\{\xi (\theta ):-h\le \theta \le 0\}\) such that \(\sup _{-h\le \theta \le 0}\mathbb {E}|\xi (\theta )|^p<\infty ,\) where \(\mathbb {E}\{\cdot \}\) stands for the mathematical expectation operator with respect to the given probability measure \(\mathcal {P}.\)

11.2 Problem Formulation and Preliminaries

We consider the following stochastic neural network with both feedback control law and Markovian jumping parameters described by

$$\begin{aligned} \mathrm{d}x(t)=&-\alpha (x(t),\eta _t)\bigg [\beta (x(t),\eta _t)-A(\eta _t)f(x(t))\nonumber \\&-B(\eta _t)f(x(t-\tau (t,\eta _t)))\nonumber \\&-C(\eta _t)\int ^t_{t-\upsilon (t,\eta _t)}g(x(s))\mathrm{d}s-D(\eta _t)u(t,\eta _t)\bigg ]\mathrm{d}t\nonumber \\&+\bigg [E_1(\eta _t)x(t)+E_2(\eta _t)x(t-\tau (t,\eta _t))\nonumber \\&+E_3(\eta _t)f(x(t))+E_4(\eta _t)f(x(t-\tau (t,\eta _t)))\nonumber \\&+E_5(\eta _t)\int ^t_{t-\upsilon (t,\eta _t)}g(x(s))\mathrm{d}s\bigg ]\mathrm{d}\omega (t),\end{aligned}$$
(11.1)

where \(x(t)=[x_1(t),\ldots ,x_n(t)]^T\) denotes the neuron state at time t, \(u(t)\in L_2([0,s),\mathbb {R}^m),\ \forall s> 0,\) is the control input vector of the neural network, \(\alpha (x(t),\eta _t)=\mathrm{diag}\{\alpha _1(x_1(t),\eta _t),\ldots ,\alpha _n(x_n(t),\eta _t)\}\) denotes the amplification function, \(\beta (x(t),\eta _t)=\mathrm{diag}\{\beta _1(x_1(t),\eta _t),\ldots ,\beta _n(x_n(t),\eta _t)\}\) denotes an appropriately behaved function such that the solution of the model given in (11.1) remains bounded, and \(f(x(t))=[f_1(x_1(t)),\ldots ,f_n(x_n(t))]^T,\) \(g(x(s))=[g_1(x_1(s)),\ldots ,g_n(x_n(s))]^T\) denote the activation functions, with \(f(x(t-\tau (t,\eta _t)))= \left[ f_1(x_1(t-\tau (t,\eta _t))),\ldots ,f_n\right. \left. (x_n(t-\tau (t,\eta _t)))\right] ^T.\) The quantities \(0\le \tau (t,\eta _t)\le \bar{\tau }(\eta _t)\le \bar{\tau }\) and \(0\le \upsilon (t,\eta _t)\le \bar{\upsilon }(\eta _t)\le \bar{\upsilon }\) are bounded and unknown delays. The matrices \(A(\eta _t),B(\eta _t),C(\eta _t)\in \mathbb {R}^{n\times n}\) and \(D(\eta _t)\in \mathbb {R}^{n\times m}\) are the connection weight matrix, the discretely delayed connection weight matrix, the distributively delayed connection weight matrix, and the control input weight matrix, respectively. Each \(E_j(\eta _t)\ (j=1,2,\ldots ,5)\) is a known real constant matrix with appropriate dimensions, and \(\omega (t)\) is a one-dimensional Brownian motion defined on the complete probability space \(\left( \varOmega ,\mathcal {F},\{\mathcal {F}_t\}_{t\ge 0},\mathcal {P}\right) \) with \(\mathbb {E}\{\mathrm{d}\omega (t)\}=0,\ \mathbb {E}\{[\mathrm{d}\omega (t)]^2\}=\mathrm{d}t.\) \(\{\eta _t=\eta (t),t\ge 0\}\) is a homogeneous, finite-state Markovian process with right-continuous trajectories, taking values in the finite set \(\wp =\{1,2,\ldots ,N\}\) on the given probability space \(\left( \varOmega ,\mathcal {F},\{\mathcal {F}_t\}_{t\ge 0},\mathcal {P}\right) \) with initial mode \(\eta _0\). It is assumed that the initial condition of neural network (11.1) has the form \(x(t)=\varphi (t)\) for \(t\in [-\varpi ,0],\) where \(\varphi (t)=\left[ \varphi _1(t),\ldots ,\varphi _n(t)\right] ^T\), each function \(\varphi _j(t)\ (j=1,2,\ldots ,n)\) is continuous, and \(\varpi =\max \{\bar{\tau },\bar{\upsilon }\}.\) Let \(\aleph =[\pi _{\textit{ij}}]_{i,j\in \wp }\) denote the transition rate matrix, with transition probabilities given by

$$P(\eta _{t+\delta }=j\vert \eta _t=i)=\left\{ \begin{array}{cc}\ \pi _{\textit{ij}}\delta +o(\delta ),&{}i\ne j,\\ 1+\pi _{\textit{ii}}\delta +o(\delta ),&{}i=j,\end{array}\right. $$

where \(\delta >0,\ \lim _{\delta \rightarrow 0^+}{\frac{o(\delta )}{\delta }}=0\) and \(\pi _{\textit{ij}}\) is the transition rate from mode i to mode j satisfying \(\pi _{\textit{ij}}\ge 0\) for \(i\ne j\) with \(\pi _{\textit{ii}}=-\sum ^N_{j=1,i\ne j} \pi _{\textit{ij}},i,j\in \wp \).
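The mode process \(\eta _t\) can be sampled directly from \(\aleph \): mode i is held for an exponentially distributed time with rate \(-\pi _{\textit{ii}}\), after which the chain jumps to mode \(j\ne i\) with probability \(\pi _{\textit{ij}}/(-\pi _{\textit{ii}})\). The following is a minimal Python sketch of this construction; the horizon, seed, and the two-mode rate matrix (borrowed from Example 11.14 below) are illustrative assumptions.

```python
import numpy as np

def simulate_markov_chain(Pi, T, eta0, rng):
    """Sample a right-continuous trajectory of the mode process eta_t on [0, T]
    from a transition rate matrix Pi (with pi_ii = -sum_{j != i} pi_ij)."""
    times, modes = [0.0], [eta0]
    t, i = 0.0, eta0
    while t < T:
        rate = -Pi[i, i]                      # total exit rate of mode i
        if rate <= 0:                         # absorbing mode: stay forever
            break
        t += rng.exponential(1.0 / rate)      # holding time ~ Exp(rate)
        probs = Pi[i].copy(); probs[i] = 0.0  # jump distribution over j != i
        i = int(rng.choice(len(probs), p=probs / rate))
        times.append(t); modes.append(i)
    return np.array(times), np.array(modes)

rng = np.random.default_rng(0)
Pi = np.array([[-0.8, 0.8],
               [0.3, -0.3]])                  # rate matrix of Example 11.14
jump_times, mode_seq = simulate_markov_chain(Pi, T=10.0, eta0=0, rng=rng)
```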

For convenience, each possible value of \(\eta _t\) is denoted by \(i(i\in \wp )\) in the sequel. Then we have

$$\begin{aligned}&\alpha _i(x(t))=\alpha (x(t),\eta _t),\ \beta _i(x(t))=\beta (x(t),\eta _t),\\&A_i=A(\eta _t),\ B_i=B(\eta _t),\ C_i=C(\eta _t),\\&D_i=D(\eta _t),\ \tau _i(t)=\tau (t,\eta _t),\ \upsilon _i(t)=\upsilon (t,\eta _t),\\&E_{\textit{li}}=E_l(\eta _t),\ l=1,\ldots ,5. \end{aligned}$$

In the sequel, we shall need the following definitions, assumptions, and lemmas.

Definition 11.1

([24, 27])  Given \(r>0\) and any initial condition \(\varphi \in \mathcal {L}^2_{\mathcal {F}_0}([-\varpi ,0],\mathbb {R}^n)\) with \(u(t,\eta _t)=0,\) the zero solution of system (11.1) is said to be r-exponentially stable in the mean square if there exists a positive scalar M such that any solution \(x(t,\varphi )\) of the system satisfies the following inequality:

$$\begin{aligned} \mathbb {E}{||x(t,\varphi )||}^2 \le M\sup _{-\varpi \le s\le 0}\mathbb {E}{||\varphi (s)||}^2 e^{-2r t},\ \ \forall \ t\ge 0. \end{aligned}$$

Definition 11.2

([24, 27])  Given \(r>0\), the system (11.1) is said to be r-exponentially stabilizable in the mean square if there is a feedback control law \(u(t,\eta _t)=\overline{U}(\eta _t)x(t)\) such that the following closed-loop system

$$\begin{aligned} \mathrm{d}x(t)=&-\alpha (x(t),\eta _t)\bigg [\beta (x(t),\eta _t)-A(\eta _t)f(x(t))\\&-B(\eta _t)f(x(t-\tau (t,\eta _t)))\\&-C(\eta _t)\int ^t_{t-\upsilon (t,\eta _t)}g(x(s))\mathrm{d}s-D(\eta _t)\overline{U}(\eta _t)x(t)\bigg ]\mathrm{d}t\\&+\bigg [E_1(\eta _t)x(t)+E_2(\eta _t)x(t-\tau (t,\eta _t))\\&+E_3(\eta _t)f(x(t))+E_4(\eta _t)f(x(t-\tau (t,\eta _t)))\\&+E_5(\eta _t)\int ^t_{t-\upsilon (t,\eta _t)}g(x(s))\mathrm{d}s\bigg ]\mathrm{d}\omega (t),\\ x(t)=\,&\varphi (t),\ \ \ t\in [-\varpi ,0], \end{aligned}$$

is r-exponentially stable in the mean square.

Assumption 11.3

([8])  Each \(\alpha _{\textit{ji}}(\cdot )\) is a continuous function and satisfies \(\bar{\alpha }_{\textit{ji}}\ge \alpha _{\textit{ji}}(\cdot )\ge \underline{\alpha }_{\textit{ji}}>0,\ j=1,2,\ldots ,n,i=1,2,\ldots ,N.\)

Here, we denote \(\underline{\alpha }_i=\mathrm{min}_{1\le j\le n}\{\underline{\alpha }_{\textit{ji}}\},\) \(\bar{\alpha }_i=\mathrm{max}_{1\le j\le n}\{\bar{\alpha }_{\textit{ji}}\}\) for simplicity.

Assumption 11.4

Each function \(\beta _{\textit{ji}}(\cdot )\) is locally Lipschitz continuous, \(\beta _{\textit{ji}}(0)=0\) and there exist constants \(\bar{\beta }_{\textit{ji}}>\underline{\beta }_{\textit{ji}}\ge 0\) such that

$$\begin{aligned}&\underline{\beta }_{\textit{ji}}s^2\le \beta _{\textit{ji}}(s)s\le \bar{\beta }_{\textit{ji}}s^2, \end{aligned}$$

for any \(s\in \mathbb {R},\ j=1,2,\ldots ,n,i=1,2,\ldots ,N.\)

For simplicity, we denote \({\varPi }_i=\mathrm{diag}\{\bar{\beta }_{1i},\ldots ,\bar{\beta }_{\textit{ni}}\},\) \(\varGamma _i=\mathrm{diag}\{\underline{\beta }_{1i},\ldots ,\underline{\beta }_{\textit{ni}}\}.\)

Assumption 11.5

For \(j=1,2,\ldots ,n,\) \(f_j(0)=g_j(0)=0.\) Furthermore, there exist constants \(\varrho ^-_{j}, \varrho ^+_{j}, \psi ^-_{j}, \psi ^+_{j}\) such that \(\varrho ^-_{j}<\varrho ^+_{j}, \psi ^-_{j}<\psi ^+_{j}\) and

$$\begin{aligned}&\varrho ^-_{j}\le {\frac{f_j(s)}{s}}\le \varrho ^+_{j},\ \ \psi ^-_{j}\le {\frac{g_j(s)}{s}}\le \psi ^+_{j}, \end{aligned}$$

for any \(s\in \mathbb {R},\ j=1,2,\ldots ,n.\)

Remark 11.6

As pointed out in [24], the constants \(\varrho ^-_{j},\varrho ^+_{j}, \psi ^-_{j},\psi ^+_{j}\) in Assumption 11.5 are allowed to be positive, negative, or zero. The previously used Lipschitz conditions are thus special cases of Assumption 11.5; hence, the activation functions considered here are more general than those earlier forms.

For notational simplicity, we denote

$$\begin{aligned} \bar{\varSigma }=\,&\mathrm{diag}\left\{ \varrho ^+_{1},\varrho ^+_{2},\ldots ,\varrho ^+_{n}\right\} ,\\ {\varSigma }=\,&\mathrm{diag}\left\{ \varrho ^-_{1},\varrho ^-_{2},\ldots ,\varrho ^-_{n}\right\} ,\\ {F_1}=\,&\mathrm{diag}\left\{ \varrho ^-_{1}\varrho ^+_{1},\varrho ^-_{2}\varrho ^+_{2},\ldots ,\varrho ^-_{n}\varrho ^+_{n}\right\} ,\\ {F_2}=\,&\mathrm{diag}\left\{ \frac{\varrho ^-_{1}+\varrho ^+_{1}}{2},\frac{\varrho ^-_{2}+\varrho ^+_{2}}{2},\ldots , \frac{\varrho ^-_{n}+\varrho ^+_{n}}{2}\right\} ,\\ {F_3}=\,&\mathrm{diag}\left\{ \psi ^-_{1}\psi ^+_{1},\psi ^-_{2}\psi ^+_{2},\ldots ,\psi ^-_{n}\psi ^+_{n}\right\} ,\\ {F_4}=\,&\mathrm{diag}\left\{ \frac{\psi ^-_{1}+\psi ^+_{1}}{2},\frac{\psi ^-_{2}+\psi ^+_{2}}{2},\ldots ,\frac{\psi ^-_{n}+\psi ^+_{n}}{2}\right\} . \end{aligned}$$

Lemma 11.7

(Jensen integral inequality, see [39]) For any constant matrix \(M > 0\), any scalars a and b with \(a < b\), and any vector function \(\chi (t):[a,b]\rightarrow \mathbb {R}^n\) such that the integrals concerned are well defined, the following inequality holds:

$$\begin{aligned} \bigg <\int ^b_{a}\chi (s)\mathrm{d}s,M\int ^b_{a}\chi (s)\mathrm{d}s\bigg >\le (b-a)\int ^b_{a}\chi (s)^TM\chi (s)\mathrm{d}s, \end{aligned}$$

where \(\Big <A,B\Big >=A^TB\) denotes the inner product.
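As a quick numerical sanity check of Lemma 11.7, both sides can be compared for a concrete vector function; the following sketch uses an assumed \(\chi (t)\) and a random diagonal \(M>0\), with the integrals approximated by the trapezoidal rule.

```python
import numpy as np

rng = np.random.default_rng(1)
n, a, b = 2, 0.0, 1.0
M = np.diag(rng.uniform(0.5, 2.0, n))        # a positive definite diagonal M
ts = np.linspace(a, b, 2001)
chi = np.stack([np.sin(3 * ts), np.cos(ts) ** 2], axis=1)  # sample chi(t)
I = np.trapz(chi, ts, axis=0)                # \int_a^b chi(s) ds, componentwise
lhs = I @ M @ I                              # <\int chi, M \int chi>
rhs = (b - a) * np.trapz(np.einsum('ti,ij,tj->t', chi, M, chi), ts)
assert lhs <= rhs + 1e-9                     # Jensen integral inequality
```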

Lemma 11.8

Assume that \(\nu ,\mu ,\underline{\vartheta },\bar{\vartheta }\) are real scalars such that \(\nu \le 1,\ \nu +\mu \le 4,\) and \(\underline{\vartheta }<\bar{\vartheta }.\) Let \(\vartheta :\mathbb {R}\rightarrow (\underline{\vartheta },\bar{\vartheta })\) be a real function. Then for any nonnegative scalars a and b, the following inequality holds:

$$\begin{aligned}&-\frac{ a}{\vartheta (t)-\underline{\vartheta }}-\frac{ b}{\bar{\vartheta }-\vartheta (t)}\nonumber \\&\le \frac{1}{\bar{\vartheta }-\underline{\vartheta }}\max \{-\nu a-\mu b,-\mu a-\nu b\}. \end{aligned}$$
(11.2)

Proof

Without loss of generality, we assume that \(\nu \le \mu .\) First consider the case that \( a\le b.\) It is easy to see that \(\max \{-\nu a-\mu b,-\mu a-\nu b\}=-\mu a-\nu b.\) Therefore, we have

$$\begin{aligned}&\left( \vartheta (t)-\underline{\vartheta }\right) \left( \bar{\vartheta }-\vartheta (t)\right) (-\mu a-\nu b)\\&+\left( \bar{\vartheta }-\underline{\vartheta }\right) \left[ \left( \bar{\vartheta }-\vartheta (t)\right) a +\left( \vartheta (t)-\underline{\vartheta }\right) b\right] \\ =&\left( \bar{\vartheta }-\vartheta (t)\right) \left[ \bar{\vartheta }+(\mu -1)\underline{\vartheta }-\mu \vartheta (t)\right] a\\&+\left( \vartheta (t)-\underline{\vartheta }\right) \left[ (1-\nu )\left( \bar{\vartheta }-\vartheta (t)\right) +\left( \vartheta (t) -\underline{\vartheta }\right) \right] b\\ \ge&\left\{ \left( \bar{\vartheta }-\vartheta (t)\right) \left[ \bar{\vartheta }+(\mu -1)\underline{\vartheta } -\mu \vartheta (t)\right] \right. \\&\left. +\left( \vartheta (t)-\underline{\vartheta }\right) \left[ (1-\nu )\left( \bar{\vartheta }-\vartheta (t)\right) +\left( \vartheta (t) -\underline{\vartheta }\right) \right] \right\} a\\ =\,&\frac{a}{4}\left[ (\nu +\mu )\left( 2\vartheta (t)-\underline{\vartheta }-\bar{\vartheta }\right) ^2 +(4-\nu -\mu )\left( \bar{\vartheta }-\underline{\vartheta }\right) ^2\right] \\ \ge \,&0. \end{aligned}$$

That is

$$\begin{aligned} \frac{1}{\bar{\vartheta }-\underline{\vartheta }}&\max \{-\nu a-\mu b,-\mu a-\nu b\}\\ =&\frac{1}{\bar{\vartheta }-\underline{\vartheta }}\left( -\mu a-\nu b\right) \\ \ge&-\frac{ a}{\vartheta (t) -\underline{\vartheta }}-\frac{ b}{\bar{\vartheta }-\vartheta (t)}. \end{aligned}$$

Similarly, we can conclude that inequality (11.2) also holds for \( a>b\). This completes the proof of Lemma 11.8.

Remark 11.9

If we set \(\nu =1,\ \mu =3,\) then Lemma 11.8 reduces to Lemma 3 of [40]. Thus, based on Lemma 11.8, less conservative conditions for the exponential stabilization problem can be obtained.
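Lemma 11.8 is likewise easy to spot-check numerically; the sketch below uses \(\nu =1,\ \mu =3\) (the values in Remark 11.9) with assumed bounds \(\underline{\vartheta }=0.5,\ \bar{\vartheta }=2\) and random nonnegative a, b.

```python
import numpy as np

rng = np.random.default_rng(2)
nu, mu = 1.0, 3.0          # satisfies nu <= 1 and nu + mu <= 4
lo, hi = 0.5, 2.0          # underline-vartheta and bar-vartheta
for _ in range(100_000):
    a, b = rng.uniform(0.0, 10.0, 2)
    th = rng.uniform(lo, hi)                 # a value of vartheta(t)
    lhs = -a / (th - lo) - b / (hi - th)
    rhs = max(-nu * a - mu * b, -mu * a - nu * b) / (hi - lo)
    assert lhs <= rhs + 1e-9                 # inequality (11.2)
```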

11.3 Stabilization Result

As is well known, It\(\hat{\mathrm{o}}\)’s formula plays an important role in the stability analysis of stochastic systems, and we cite some related results here [41]. Consider a general stochastic system

$$\begin{aligned} \mathrm{d}x(t)=f(x(t),t,\eta _t)\mathrm{d}t+g(x(t),t,\eta _t)\mathrm{d}\omega (t) \end{aligned}$$
(11.3)

on \(t\ge t_0\) with initial value \(x(t_0)=x_0\in \mathbb {R}^n,\) where \(f:\mathbb {R}^n\times \mathbb {R}^+\times \wp \rightarrow \mathbb {R}^n\) and \(g:\mathbb {R}^n\times \mathbb {R}^+\times \wp \rightarrow \mathbb {R}^{n\times m}.\) Let \(\mathbb {C}^{2,1}\big (\mathbb {R}^n\times \mathbb {R}^+\times \wp ,\mathbb {R}^+\big )\) denote the family of all nonnegative functions V(x, t, i) on \(\mathbb {R}^n\times \mathbb {R}^+\times \wp \) which are continuously differentiable in t and twice differentiable in x. Let \(\pounds \) be the weak infinitesimal generator of the random process \(\{x(t),\eta (t)\}_{t\ge 0}\) along the system (11.3) (see [24, 42, 43]), i.e.,

$$\begin{aligned} \pounds V(x_t,t,i):=\lim _{\delta \rightarrow 0^+}\frac{1}{\delta }\sup&\Big [\mathbb {E}\big \{V(x_{t+\delta },t+\delta ,\eta (t+\delta ))\big |x(t),\\&\ \ \ \ \eta (t)=i\big \}-V(x_t,t,\eta (t)=i)\Big ], \end{aligned}$$

then, by the generalized It\(\hat{\mathrm{o}}\)’s formula, one can get

$$\begin{aligned} \mathbb {E}V(x,t,i)=\mathbb {E}V(x_0,t_0,i)+\mathbb {E}\int ^t_{t_0}\pounds V(x(s),s,i)\mathrm{d}s. \end{aligned}$$

Theorem 11.10

Given \(r>0\). For any given scalars \(\bar{\tau }_i>0,\) \(\bar{\upsilon }_i>0,\ \upsilon '_i<1,\) consider the system (11.1) satisfying Assumptions 11.3–11.5 with \(\dot{\tau }_i(t)\le \tau '_i,\ \dot{\upsilon }_i(t)\le \upsilon '_i.\) The system (11.1) is globally r-exponentially stabilizable if there exist symmetric positive definite matrices \(P_i\in \mathbb {R}^{n\times n},\) symmetric nonnegative definite matrices \(Q_{\textit{ji}},R_i,M_i,S_{l},Z_i\ (j=1,\ldots ,4,\ l=1,\ldots ,9),\) positive diagonal matrices \(G_i,U_i,T_i,W_i,H,K,\) and real matrices \(X_i\) satisfying the following inequalities \((i=1,\ldots ,N)\)

$$\begin{aligned}&\sum ^N_{j=1}\pi _{\textit{ij}}Q_{\textit{lj}}<S_l,\ \ l=1,2,3,4, \end{aligned}$$
(11.4)
$$\begin{aligned}&\sum ^N_{j=1}\pi _{\textit{ij}}R_{j}<S_5,\end{aligned}$$
(11.5)
$$\begin{aligned}&\sum ^N_{j=1}\pi _{\textit{ij}}Z_j<S_6,\end{aligned}$$
(11.6)
$$\begin{aligned}&\sum ^N_{j=1}\pi _{\textit{ij}}\bar{\upsilon }_{j}R_j<S_7,\end{aligned}$$
(11.7)
$$\begin{aligned}&\sum ^N_{j=1}\pi _{\textit{ij}}\bar{\tau }_{j}Z_{j}<S_8,\end{aligned}$$
(11.8)
$$\begin{aligned}&\sum ^N_{j=1}\pi _{\textit{ij}}\bar{\tau }_jQ_{4j}<S_9,\end{aligned}$$
(11.9)
$$\begin{aligned}&\left[ \begin{array}{cc} \varOmega _i+\widetilde{\varOmega }_i&{}\mathcal {E}^T\\ \mathcal {E}&{}\overline{Z}_i\end{array}\right] <0, \end{aligned}$$
(11.10)
$$\begin{aligned}&\left[ \begin{array}{cc} \varOmega _i+\widehat{\varOmega }_i&{}\mathcal {E}^T\\ \mathcal {E}&{}\overline{Z}_i\end{array}\right] <0, \end{aligned}$$
(11.11)

where

$$\begin{aligned} \varOmega _i=&\left[ \begin{array}{cccc} \varOmega _{1i}&{}\varOmega _{2i}&{}\varOmega _{4i}&{}\varOmega _{7i}\\ *&{}\varOmega _{3i}&{}\varOmega _{5i}&{}0\\ *&{}*&{}\varOmega _{6i}&{}\varOmega _{8i}\\ *&{}*&{}*&{}\varOmega _{9i}\end{array}\right] ,\\ \widetilde{\varOmega }_i=&-\frac{2}{\bar{\tau }_i}\mathbb {I}^TQ_{4i}\mathbb {I},\ \ \widehat{\varOmega }_i=-\frac{2}{\bar{\tau }_i}\mathfrak {I}^TQ_{4i}\mathfrak {I},\\ \mathcal {E}=\,&[\ E_{1i}\ \ E_{2i}\ \ E_{3i}\ \ E_{4i}\ \ 0\ \ E_{5i}\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ ],\\ \overline{Z}_i=\,&\frac{\bar{\tau }^2}{2}S_{6}+\bar{\tau }S_8+\bar{\tau }_iZ_i+\widetilde{Z}_i,\\ \widetilde{Z}_i=\,&\underline{\alpha }^{-1}_i[P_i+H(\varPi _i-\varGamma _i)+K(\bar{\varSigma }-\varSigma )], \end{aligned}$$

with

$$\begin{aligned} \varOmega _{1i}=&\left[ \begin{array}{cccccc} \varOmega _{11i}&{}\varOmega _{12i}&{}\varOmega _{13i}&{}\varOmega _{14i}&{}\varOmega _{15i}&{}\varOmega _{16i}\\ *&{}\varOmega _{22i}&{}0&{}\varOmega _{24i}&{}0&{}0\\ *&{}*&{}\varOmega _{33i}&{}\varOmega _{34i}&{}0&{}\varOmega _{36i}\\ *&{}*&{}*&{}\varOmega _{44i}&{}0&{}0\\ *&{}*&{}*&{}*&{}\varOmega _{55i}&{}0\\ *&{}*&{}*&{}*&{}*&{}\varOmega _{66i}\\ \end{array}\right] ,\\ \varOmega _{2i}=&\left[ \ 0\ \ 0\ \ A_i\ \ B_i\ \ 0\ \ C_i\ \right] ^TG_i^T, \end{aligned}$$
$$\begin{aligned} \varOmega _{3i}=&-2\bar{\alpha }^{-2}_iG_i+\underline{\alpha }_i^{-2}\Big [\frac{1}{2}\bar{\tau }^2S_4+\bar{\tau }_iQ_{4i} +\bar{\tau }S_9\Big ],\\ \varOmega _{4i}=&\left[ \begin{array}{cccc} \varOmega _{18i}&{}0&{}\varOmega _{1{\textit{ai}}}\\ 0&{}\varOmega _{29i}&{}0\\ \varOmega _{38i}&{}0&{}\varOmega _{3{\textit{ai}}}\\ 0&{}0&{}\varOmega _{4{\textit{ai}}}\\ 0&{}0&{}0\\ 0&{}0&{}\varOmega _{6{\textit{ai}}}\end{array}\right] ,\ \ \varOmega _{7i}=\left[ \begin{array}{cc} \varOmega _{1{\textit{bi}}}&{}0\\ \varOmega _{2{\textit{bi}}}&{}\varOmega _{2{\textit{ci}}}\\ 0&{}0\\ 0&{}0\\ 0&{}0\\ 0&{}0\end{array}\right] ,\\ \varOmega _{5i}=\,&G_i\left[ \ D_iD^T_i\ \ \ \ 0\ \ \ -I\ \right] ,\\ \varOmega _{6i}=&\left[ \begin{array}{ccc} \varOmega _{88i}&{}0&{}\varOmega _{8{\textit{ai}}}\\ *&{}\varOmega _{99i}&{}0\\ *&{}*&{}\varOmega _{\textit{aai}}\end{array}\right] ,\ \ \varOmega _{8i}=\left[ \begin{array}{cc} 0&{}0\\ 0&{}\varOmega _{9{\textit{ci}}}\\ 0&{}0\end{array}\right] ,\\ \varOmega _{9i}=\,&\mathrm{diag}\{\varOmega _{\textit{bbi}},\ \varOmega _{\textit{cci}}\},\\ \mathbb {I}=&\left[ \begin{array}{cccccccccccc}0&-I&0&0&0&0&0&0&I&0&0&I\end{array}\right] ,\\ \mathfrak {I}=&\left[ \begin{array}{cccccccccccc}-I&I&0&0&0&0&0&0&0&0&I&0\end{array}\right] , \end{aligned}$$

and

$$\begin{aligned} \varOmega _{11i}=&-2P_i\varGamma _i+Q_{1i}+Q_{3i}-\frac{1}{\bar{\tau }_i}Q_{4i}\\&+\sum ^N_{j=1}\pi _{\textit{ij}}\rho ^{-1}_{\textit{ij}}P_j+\bar{\tau }(S_1+S_3)-U_iF_1-W_iF_3\\&+\sum ^N_{j=1}\bar{\pi }_{\textit{ij}}\underline{\alpha }^{-1}_{i}\left[ P_i+2H(\varPi _i-\varGamma _i)+2K(\bar{\varSigma }-\varSigma )\right] ,\\ \varOmega _{12i}=\,&\frac{1}{\bar{\tau }_i}Q_{4i},\\ \varOmega _{22i}=&-(1-\tau '_i)Q_{1i}+\sum ^N_{j=1}\bar{\pi }_{\textit{ij}}\bar{\tau }_{j}Q_{1j}-\frac{2}{\bar{\tau }_i}Q_{4i}-T_iF_1,\\ \varOmega _{13i}=\,&P_iA_i+U_iF_2-\varGamma _i{\textit{HA}}_i-\varSigma {\textit{KA}}_i,\\ \varOmega _{33i}=\,&Q_{2i}-U_i+{\textit{KA}}_i+A^T_iK+\bar{\tau }S_2,\\ \varOmega _{14i}=\,&P_iB_i-\varGamma _i{\textit{HB}}_i-\varSigma {\textit{KB}}_i,\\ \varOmega _{24i}=&-T_iF_2,\ \ \varOmega _{34i}={\textit{KB}}_i,\\ \varOmega _{44i}=&-T_i-(1-\tau '_i)Q_{2i}+\sum ^N_{j=1}\bar{\pi }_{\textit{ij}}\bar{\tau }_{j}Q_{2j},\\ \varOmega _{15i}=\,&W_iF_4,\ \varOmega _{55i}=-W_i+\bar{\upsilon }_iR_i+\frac{\bar{\upsilon }^2}{2}S_5+\bar{\upsilon }S_{7},\\ \varOmega _{16i}=&P_iC_i-\varGamma _i{\textit{HC}}_i-\varSigma {\textit{KC}}_i,\\ \varOmega _{36i}=\,&{\textit{KC}}_i,\ \ \varOmega _{66i}=-\frac{1-\upsilon '_i}{\bar{\upsilon }_i}R_i,\\ \varOmega _{18i}=\,&(P_i-\varSigma K+\varGamma _i H)D_iD^T_i+\bar{M}_i^T,\\ \varOmega _{38i}=\,&{\textit{KD}}_iD^T_i,\ \varOmega _{88i}=-2M_i,\ \varOmega _{29i}=\frac{1}{\bar{\tau }_i}Q_{4i},\\ \varOmega _{99i}=&-\frac{1}{\bar{\tau }_i}Q_{4i}-Q_{3i}+\sum ^N_{j=1}\bar{\pi }_{\textit{ij}}\bar{\tau }_{j}Q_{3j},\\ \varOmega _{1{\textit{ai}}}=\,&\varGamma _iH+\varSigma K,\ \ \varOmega _{3{\textit{ai}}}=A^T_iH-K,\\ \varOmega _{4{\textit{ai}}}=\,&B^T_iH,\ \varOmega _{6{\textit{ai}}}=C^T_iH,\ \varOmega _{8{\textit{ai}}}=D_iD^T_iH,\\ \varOmega _{\textit{aai}}=&-2H,\ \ \varOmega _{1{\textit{bi}}}=\frac{1}{\bar{\tau }_i}Q_{4i},\\ \varOmega _{2{\textit{bi}}}=&-\frac{1}{\bar{\tau }_i}Q_{4i},\ \ \varOmega _{\textit{bbi}}=-\frac{1}{\bar{\tau }_i}Q_{4i}-Z_i, \end{aligned}$$
$$\begin{aligned} \varOmega _{2{\textit{ci}}}=\,&\frac{1}{\bar{\tau }_i}Q_{4i},\ \ \varOmega _{9{\textit{ci}}}=-\frac{1}{\bar{\tau }_i}Q_{4i},\ \ \varOmega _{\textit{cci}}=-\frac{1}{\bar{\tau }_i}Q_{4i}-Z_i, \end{aligned}$$

and \(\bar{\pi }_{\textit{ij}}=\max \{\pi _{\textit{ij}},0\},\ \bar{M}_i=M_iX_i,\)

$$\begin{aligned} \rho _{\textit{ij}}=\left\{ \begin{array}{cc}\bar{\alpha }_i,&{}\ j=i\\ \underline{\alpha }_i,&{}\ j\ne i\end{array}\right. . \end{aligned}$$

Furthermore, the feedback stabilizing control law is defined by \(u_i(t)=D^T_iX_ix(t).\)

Proof

From Assumption 11.3, we know that the amplification function \(\alpha _i(x(t))\) is nonlinear and satisfies \(\alpha _i(x(t))\alpha _i(x(t))\le \bar{\alpha }^2_iI.\) Following the approach of [15], pre- and postmultiplying the left-hand sides of inequalities (11.10) and (11.11) by \(\mathrm{diag}\{I\ \ I\ \ I\ \ I\ \ I\ \ I\ \ \alpha _i(x(t))\ \ I\ \ I\ \ I\ \ I\ \ I\}\) yields

$$\begin{aligned}&\left[ \begin{array}{cc} \overline{\varOmega }_i+\widetilde{\varOmega }_i&{}\mathcal {E}^T\\ \mathcal {E}&{}\overline{Z}_i\end{array}\right] <0,\end{aligned}$$
(11.12)
$$\begin{aligned}&\left[ \begin{array}{cc} \overline{\varOmega }_i+\widehat{\varOmega }_i&{}\mathcal {E}^T\\ \mathcal {E}&{}\overline{Z}_i\end{array}\right] <0, \end{aligned}$$
(11.13)

where

$$\begin{aligned}&\overline{\varOmega }_i\doteq \left[ \begin{array}{ccc} \varOmega _{1i}&{}\overline{\varOmega }_{2i}&{}\varOmega _{4i}\\ *&{}\overline{\varOmega }_{3i}&{}\overline{\varOmega }_{5i}\\ *&{}*&{}\varOmega _{6i}\end{array}\right] , \end{aligned}$$

with

$$\begin{aligned} \overline{\varOmega }_{2i}=&\left[ \ 0\ \ 0\ \ A_i\ \ B_i\ \ 0\ \ C_i\ \right] ^T\alpha _i(x(t))G_i^T,\\ \overline{\varOmega }_{3i}=\,&\bar{\tau }_iQ_{4i}-(G_i+G^T_i)+\frac{\bar{\tau }^2}{2}S_{4}+\bar{\tau }S_{9},\\ \overline{\varOmega }_{5i}=\,&G_i\alpha _i(x(t))\left[ \ D_iD^T_i\ \ \ \ 0\ \ \ -I\ \ \ 0\ \ 0\ \right] . \end{aligned}$$

For any \(j=1,2,\ldots ,n\), from Assumption 11.5 we obtain that

$$\begin{aligned}&\left( f_j(x_j(t))-\varrho ^+_jx_j(t)\right) \left( f_j(x_j(t))-\varrho ^-_jx_j(t)\right) \le 0,\\&\left( g_j(x_j(t))-\psi ^+_jx_j(t)\right) \left( g_j(x_j(t))-\psi ^-_jx_j(t)\right) \le 0. \end{aligned}$$

Therefore, the following matrix inequalities hold for any positive diagonal matrices \(U_i,T_i,W_i\),

$$\begin{aligned}&\left\langle \left[ \begin{array}{cc} x(t)\\ f(x(t))\end{array}\right] ,\left[ \begin{array}{cc} -U_iF_1&{}U_iF_2\\ U_iF_2&{}-U_i \end{array}\right] \left[ \begin{array}{cc} x(t)\\ f(x(t))\end{array}\right] \right\rangle \ge 0,\end{aligned}$$
(11.14)
$$\begin{aligned}&\left\langle \left[ \begin{array}{cc} x(t-\tau _i(t))\\ f(x(t-\tau _i(t)))\end{array}\right] ,\left[ \begin{array}{cc} -T_iF_1&{}T_iF_2\\ T_iF_2&{}-T_i \end{array}\right] \left[ \begin{array}{cc} x(t-\tau _i(t))\\ f(x(t-\tau _i(t)))\end{array}\right] \right\rangle \ge 0,\end{aligned}$$
(11.15)
$$\begin{aligned}&\left\langle \left[ \begin{array}{cc} x(t)\\ g(x(t))\end{array}\right] ,\left[ \begin{array}{cc} -W_iF_3&{}W_iF_4\\ W_iF_4&{}-W_i \end{array}\right] \left[ \begin{array}{cc} x(t)\\ g(x(t))\end{array}\right] \right\rangle \ge 0. \end{aligned}$$
(11.16)

Denoting

$$\begin{aligned} \iota _i(t)=&-\beta _i(x(t))+A_if(x(t))+B_if(x(t-\tau _i(t)))\\&+C_i\int ^t_{t-\upsilon _i(t)}g(x(s))\mathrm{d}s+D_iu_i(t), \end{aligned}$$
$$\begin{aligned} \vartheta _i(t)=\,&\alpha _i(x(t))\iota _i(t),\\ \sigma _i(t)=\,&E_{1i}x(t)+E_{2i}x(t-\tau _i(t))+E_{3i}f(x(t))\\&+E_{4i}f(x(t-\tau _i(t)))+E_{5i}\int ^t_{t-\upsilon _i(t)}g(x(s))\mathrm{d}s, \end{aligned}$$

then system (11.1) can be rewritten as

$$\begin{aligned} \mathrm{d}x(t)=\vartheta _i(t)\mathrm{d}t+\sigma _i(t)\mathrm{d}\omega (t). \end{aligned}$$
(11.17)

Define the following Lyapunov–Krasovskii functional:

$$\begin{aligned} V(x_t,t,i)=&\sum ^6_{l=1}V_{\textit{li}}(x_t,t), \end{aligned}$$
(11.18)

where

$$\begin{aligned} V_{1i}(x_t,t)=&\sum ^n_{j=1}2p_{\textit{ji}}\int ^{x_j(t)}_0\frac{s}{\alpha _{\textit{ji}}(s)}\mathrm{d}s\\&+\sum ^n_{j=1}2h_j\int ^{x_j(t)}_0\frac{\beta _{\textit{ji}}(s)-\underline{\beta }_{\textit{ji}}s}{\alpha _{\textit{ji}}(s)}\mathrm{d}s\\&+\sum ^n_{j=1}2k_j\int ^{x_j(t)}_0\frac{f_j(s)-\varrho ^-_js}{\alpha _{\textit{ji}}(s)}\mathrm{d}s,\\ V_{2i}(x_t,t)=&\int ^t_{t-\tau _i(t)}\left\langle x(s),Q_{1i}x(s)\right\rangle \mathrm{d}s\\&+\int ^t_{t-\tau _i(t)}\left\langle f(x(s)),Q_{2i}f(x(s))\right\rangle \mathrm{d}s\\&+\int ^t_{t-\bar{\tau }_i}\left\langle x(s),Q_{3i}x(s)\right\rangle \mathrm{d}s, \end{aligned}$$
$$\begin{aligned} V_{3i}(x_t,t)=&\int ^0_{-\bar{\tau }_i}\int ^t_{t+\theta }\left\langle \vartheta _i(s),Q_{4i}\vartheta _i(s)\right\rangle \mathrm{d}s\mathrm{d}\theta \\&+\int ^0_{-\upsilon _i(t)}\int ^t_{t+\theta }\left\langle g(x(s)),R_ig(x(s))\right\rangle \mathrm{d}s\mathrm{d}\theta \\&+\int ^0_{-\bar{\tau }_i}\int ^t_{t+\theta }\left\langle \sigma _i(s),Z_i\sigma _i(s)\right\rangle \mathrm{d}s\mathrm{d}\theta , \end{aligned}$$
$$\begin{aligned} V_{4i}(x_t,t)=&\int ^0_{-\bar{\tau }}\int ^t_{t+\theta }\big \{\left\langle x(s),(S_1+S_3)x(s)\right\rangle \\&\ \ \ \ \ \ \ \ \ \ \ \ \ +\left\langle f(x(s)),S_2f(x(s))\right\rangle \big \}\mathrm{d}s\mathrm{d}\theta ,\\ V_{5i}(x_t,t)=&\int ^0_{-\bar{\tau }}\int ^0_\theta \int ^t_{t+\lambda }\left\langle \vartheta _i(s),S_4\vartheta _i(s)\right\rangle \mathrm{d}s\mathrm{d}\lambda \mathrm{d}\theta \\&+\int ^0_{-\bar{\upsilon }}\int ^0_\theta \int ^t_{t+\lambda }\left\langle g(x(s)),S_5g(x(s))\right\rangle \mathrm{d}s\mathrm{d}\lambda \mathrm{d}\theta \\&+\int ^0_{-\bar{\tau }}\int ^0_\theta \int ^t_{t+\lambda }\left\langle \sigma _i(s),S_6\sigma _i(s)\right\rangle \mathrm{d}s\mathrm{d}\lambda \mathrm{d}\theta ,\\ V_{6i}(x_t,t)=&\int ^0_{-\bar{\upsilon }}\int ^t_{t+\theta }\left\langle g(x(s)),S_7g(x(s))\right\rangle \mathrm{d}s\mathrm{d}\theta \\&+\int ^0_{-\bar{\tau }}\int ^t_{t+\theta }\big \{\left\langle \sigma _i(s),S_8\sigma _i(s)\right\rangle \\&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\left\langle \vartheta _i(s),S_9\vartheta _i(s)\right\rangle \big \}\mathrm{d}s\mathrm{d}\theta , \end{aligned}$$

with \(P_i=\mathrm{diag}\{p_{1i},p_{2i},\ldots ,p_{\textit{ni}}\},\ H=\mathrm{diag}\{h_{1},h_{2},\ldots ,h_{n}\},\) \(K=\mathrm{diag}\{k_{1},k_{2},\ldots ,k_{n}\}.\)

For any \(\eta (t)=i\in \wp ,\) it can be shown that

$$\begin{aligned} \pounds&\left\{ \sum ^n_{j=1}2p_{\textit{ji}}\int ^{x_j(t)}_0\frac{s}{\alpha _{\textit{ji}}(s)}\mathrm{d}s\right\} \\ =&\lim _{\varDelta \rightarrow 0^+}\frac{1}{\varDelta }\mathbb {E}\left\{ \sum ^n_{j=1}2\left( \sum ^N_{l=1}[\pi _{\textit{il}}\varDelta +o(\varDelta )]p_{\textit{jl}}+p_{\textit{ji}}\right) \right. \\&\left. \times \int ^{x_{j}(t+\varDelta )}_0\frac{s}{\sum ^N_{l=1}[\pi _{\textit{il}}\varDelta +o(\varDelta )]\alpha _{\textit{jl}}(s)+\alpha _{\textit{ji}}(s)}\mathrm{d}s\right. \\&\left. -\sum ^n_{j=1}2p_{\textit{ji}}\int ^{x_j(t)}_0\frac{s}{\alpha _{\textit{ji}}(s)}\mathrm{d}s\right\} \\ =&\sum ^N_{l=1}\pi _{\textit{il}}\sum ^n_{j=1}2p_{\textit{jl}}\int ^{x_j(t)}_0\frac{s}{\alpha _{\textit{ji}}(s)}\mathrm{d}s\\&+\lim _{\varDelta \rightarrow 0^+}\frac{1}{\varDelta }\mathbb {E}\left\{ \sum ^n_{j=1}2p_{\textit{ji}}\left[ -\int ^{x_j(t)}_0\frac{s}{\alpha _{\textit{ji}}(s)}\mathrm{d}s\right. \right. \\&\left. \left. +\int ^{x_{j}(t+\varDelta )}_0 \frac{s}{\sum ^N_{l=1}[\pi _{\textit{il}}\varDelta +o(\varDelta )]\alpha _{\textit{jl}}(s)+\alpha _{\textit{ji}}(s)}\mathrm{d}s\right] \right\} \end{aligned}$$
$$\begin{aligned} =&\sum ^N_{l=1}\pi _{\textit{il}}\sum ^n_{j=1}2\int ^{x_j(t)}_0\frac{s\left[ p_{\textit{jl}}\alpha _{\textit{ji}}(s)-p_{\textit{ji}}\alpha _{\textit{jl}}(s)\right] }{\alpha ^2_{\textit{ji}}(s)}\mathrm{d}s\nonumber \\&+2\left\langle \iota _i(t),P_ix(t)\right\rangle +\mathrm{trace}\left\langle \sigma _i(t),\alpha ^{-1}_i(x(t))P_i\sigma _i(t)\right\rangle ,\end{aligned}$$
(11.19)
$$\begin{aligned} \pounds&\left\{ \sum ^n_{j=1}2h_j\int ^{x_j(t)}_0\frac{\beta _{\textit{ji}}(s)-\underline{\beta }_{\textit{ji}}s}{\alpha _{\textit{ji}}(s)}\mathrm{d}s\right\} \nonumber \\ =&\lim _{\varDelta \rightarrow 0^+}\frac{1}{\varDelta }\mathbb {E}\left\{ \sum ^n_{j=1}2h_j\left[ -\int ^{x_j(t)}_0\frac{\beta _{\textit{ji}}(s)-\underline{\beta }_{\textit{ji}}s}{\alpha _{\textit{ji}}(s)}\mathrm{d}s\right. \right. \nonumber \\&+\int ^{x_{j}(t+\varDelta )}_0\frac{\sum ^N_{l=1}[\pi _{\textit{il}}\varDelta +o(\varDelta )]\beta _{\textit{jl}}(s)+\beta _{\textit{ji}}(s)}{\sum ^N_{l=1}[\pi _{\textit{il}}\varDelta +o(\varDelta )]\alpha _{\textit{jl}}(s)+\alpha _{\textit{ji}}(s)}\mathrm{d}s\nonumber \\&\left. \left. -\int ^{x_{j}(t+\varDelta )}_0\frac{\sum ^N_{l=1}[\pi _{\textit{il}}\varDelta +o(\varDelta )]\underline{\beta }_{\textit{jl}}s +\underline{\beta }_{\textit{ji}}s}{\sum ^N_{l=1}[\pi _{\textit{il}}\varDelta +o(\varDelta )]\alpha _{\textit{jl}}(s)+\alpha _{\textit{ji}}(s)}\mathrm{d}s\right] \right\} \nonumber \\ =&\lim _{\varDelta \rightarrow 0^+}\frac{1}{\varDelta }\mathbb {E}\left\{ \sum ^n_{j=1}2h_j\bigg [-\int ^{x_j(t)}_0\frac{\beta _{\textit{ji}}(s)-\underline{\beta }_{\textit{ji}}s}{\alpha _{\textit{ji}}(s)}\mathrm{d}s\right. \nonumber \\&+\int ^{x_{j}(t+\varDelta )}_0\frac{\sum ^N_{l=1}[\pi _{\textit{il}}\varDelta +o(\varDelta )]\beta _{\textit{jl}}(s)}{\sum ^N_{l=1}[\pi _{\textit{il}}\varDelta +o(\varDelta )]\alpha _{\textit{jl}}(s)+\alpha _{\textit{ji}}(s)}\mathrm{d}s\nonumber \\&-\int ^{x_{j}(t+\varDelta )}_0\frac{\sum ^N_{l=1}[\pi _{\textit{il}}\varDelta +o(\varDelta )]\underline{\beta }_{\textit{jl}}s}{\sum ^N_{l=1}[\pi _{\textit{il}}\varDelta +o(\varDelta )]\alpha _{\textit{jl}}(s)+\alpha _{\textit{ji}}(s)}\mathrm{d}s\nonumber \\&\left. \left. +\int ^{x_{j}(t+\varDelta )}_0\frac{\beta _{\textit{ji}}(s)-\underline{\beta }_{\textit{ji}}s}{\sum ^N_{l=1}[\pi _{\textit{il}}\varDelta +o(\varDelta )]\alpha _{\textit{jl}}(s) +\alpha _{\textit{ji}}(s)}\mathrm{d}s\right] \right\} \nonumber \\ =&\sum ^N_{l=1}\pi _{\textit{il}}\sum ^n_{j=1}2h_j\int ^{x_j(t)}_0\frac{\big [\beta _{\textit{ji}}(s) -\underline{\beta }_{\textit{ji}}s\big ]\left[ \alpha _{\textit{ji}}(s)-\alpha _{\textit{jl}}(s)\right] }{\alpha ^2_{\textit{ji}}(s)}\mathrm{d}s\nonumber \\&+2\left\langle \iota _i(t),H\left( \beta _i(x(t))-\varGamma _ix(t)\right) \right\rangle \nonumber \\&+\mathrm{trace}\left\langle \sigma _i(t),\alpha ^{-1}_i(x(t))H(\varPi _i-\varGamma _i)\sigma _i(t)\right\rangle ,\end{aligned}$$
(11.20)
$$\begin{aligned} \pounds&\left\{ \sum ^n_{j=1}2k_j\int ^{x_j(t)}_0\frac{f_j(s)-\varrho ^-_js}{\alpha _{\textit{ji}}(s)}\mathrm{d}s\right\} \nonumber \\ =&\sum ^N_{l=1}\pi _{\textit{il}}\sum ^n_{j=1}2k_j\int ^{x_j(t)}_0\frac{\big [f_j(s)-\varrho ^-_js\big ] \left[ \alpha _{\textit{ji}}(s)-\alpha _{\textit{jl}}(s)\right] }{\alpha ^2_{\textit{ji}}(s)}\mathrm{d}s\nonumber \\&+2\left\langle \iota _i(t),K\left( f(x(t))-\varSigma x(t)\right) \right\rangle \nonumber \\&+\mathrm{trace}\left\langle \sigma _i(t),\alpha ^{-1}_i(x(t))K(\bar{\varSigma }-\varSigma )\sigma _i(t)\right\rangle , \end{aligned}$$
(11.21)

where \(\alpha ^{-1}_i(x(t))=\mathrm{diag}\left\{ \alpha ^{-1}_{1i}(x_1(t)),\ \ldots ,\ \alpha ^{-1}_{\textit{ni}}(x_n(t))\right\} .\)

According to the definition of \(\rho _{\textit{il}}\) and Assumptions 11.3–11.5, we have that

$$\begin{aligned}&\sum ^N_{l=1}\pi _{\textit{il}}\sum ^n_{j=1}2p_{\textit{jl}}\int ^{x_j(t)}_0\frac{s}{\alpha _{\textit{ji}}(s)}\mathrm{d}s \le \bigg <x(t),\sum ^N_{l=1}\pi _{\textit{il}}\rho ^{-1}_{\textit{il}}P_lx(t)\bigg >,\end{aligned}$$
(11.22)
$$\begin{aligned}&-\sum ^N_{l=1}\pi _{\textit{il}}\sum ^n_{j=1}2p_{\textit{ji}}\int ^{x_j(t)}_0\frac{s\alpha _{\textit{jl}}(s)}{\alpha ^2_{\textit{ji}}(s)}\mathrm{d}s \nonumber \\ {}&\le -\pi _{\textit{ii}}\sum ^n_{j=1}2p_{\textit{ji}}\int ^{x_j(t)}_0\frac{s}{\alpha _{\textit{ji}}(s)}\mathrm{d}s\nonumber \\&\le \bigg <x(t),\sum ^N_{l=1}\bar{\pi }_{\textit{il}}\underline{\alpha }^{-1}_{i}P_ix(t)\bigg >,\end{aligned}$$
(11.23)
$$\begin{aligned}&\sum ^N_{l=1}\pi _{\textit{il}}\sum ^n_{j=1}2h_j\int ^{x_j(t)}_0\frac{\beta _{\textit{ji}}(s)-\underline{\beta }_{\textit{ji}}s}{\alpha _{\textit{ji}}(s)}\mathrm{d}s\nonumber \\&\le \sum ^N_{l=1}\bar{\pi }_{\textit{il}}\sum ^n_{j=1}2h_j\int ^{x_j(t)}_0\frac{\beta _{\textit{ji}}(s)-\underline{\beta }_{\textit{ji}}s}{\alpha _{\textit{ji}}(s)}\mathrm{d}s\nonumber \\&\le \bigg <x(t),H\sum ^N_{l=1}\bar{\pi }_{\textit{il}}\underline{\alpha }^{-1}_{i}(\varPi _i-\varGamma _i)x(t)\bigg >,\end{aligned}$$
(11.24)
$$\begin{aligned}&-\sum ^N_{l=1}\pi _{\textit{il}}\sum ^n_{j=1}2h_j\int ^{x_j(t)}_0\frac{\big [\beta _{\textit{ji}}(s)-\underline{\beta }_{\textit{ji}}s\big ] \alpha _{\textit{jl}}(s)}{\alpha ^2_{\textit{ji}}(s)}\mathrm{d}s\nonumber \\&\le -\pi _{\textit{ii}}\sum ^n_{j=1}2h_j\int ^{x_j(t)}_0\frac{\beta _{\textit{ji}}(s)-\underline{\beta }_{\textit{ji}}s}{\alpha _{\textit{ji}}(s)}\mathrm{d}s\nonumber \\&\le \bigg <x(t),H\sum ^N_{l=1}\bar{\pi }_{\textit{il}}\underline{\alpha }^{-1}_{i}(\varPi _i-\varGamma _i)x(t)\bigg >,\end{aligned}$$
(11.25)
$$\begin{aligned}&\sum ^N_{l=1}\pi _{\textit{il}}\sum ^n_{j=1}2k_j\int ^{x_j(t)}_0\frac{f_j(s)-\varrho ^-_js}{\alpha _{\textit{ji}}(s)}\mathrm{d}s\nonumber \\&\le \sum ^N_{l=1}\bar{\pi }_{\textit{il}}\sum ^n_{j=1}2k_j\int ^{x_j(t)}_0\frac{f_j(s)-\varrho ^-_js}{\alpha _{\textit{ji}}(s)}\mathrm{d}s\nonumber \\&\le \bigg <x(t),K\sum ^N_{l=1}\bar{\pi }_{\textit{il}}\underline{\alpha }^{-1}_{i}(\bar{\varSigma }-\varSigma )x(t)\bigg >,\end{aligned}$$
(11.26)
$$\begin{aligned}&-\sum ^N_{l=1}\pi _{\textit{il}}\sum ^n_{j=1}2k_j\int ^{x_j(t)}_0\frac{\big [f_j(s)-\varrho ^-_js\big ]\alpha _{\textit{jl}}(s)}{\alpha ^2_{\textit{ji}}(s)}\mathrm{d}s\nonumber \\&\le -\pi _{\textit{ii}}\sum ^n_{j=1}2k_j\int ^{x_j(t)}_0\frac{f_j(s)-\varrho ^-_js}{\alpha _{\textit{ji}}(s)}\mathrm{d}s\nonumber \\&\le \bigg <x(t),K\sum ^N_{l=1}\bar{\pi }_{\textit{il}}\underline{\alpha }^{-1}_{i}(\bar{\varSigma }-\varSigma )x(t)\bigg >. \end{aligned}$$
(11.27)

Using the well-known It\(\hat{\mathrm{o}}\)’s differential formula [41, 44], we obtain

$$\begin{aligned} \pounds V_{1i}(x_t,t)\le \,&2\big <\iota _i(t),\ P_ix(t)+H\big [\beta _i(x(t)) -\varGamma _ix(t)\big ]+K\left( f(x(t))-\varSigma x(t)\right) \big >\nonumber \\&+\mathrm{trace}\big <\sigma _i(t),\ \alpha ^{-1}_i(x(t))\big [P_i \left. +H(\varPi _i-\varGamma _i)+K(\bar{\varSigma }-\varSigma )\big ]\sigma _i(t)\right\rangle \nonumber \\&+\sum ^N_{l=1}\pi _{\textit{il}}\rho ^{-1}_{\textit{il}}\big <x(t),P_lx(t)\big >\nonumber \\&+\sum ^N_{l=1}\bar{\pi }_{\textit{il}}\left\langle x(t),\ \underline{\alpha }^{-1}_{i}\big [P_i+2H\right. \left. \times (\varPi _i-\varGamma _i)+2K(\bar{\varSigma }-\varSigma )\big ]x(t)\right\rangle ,\end{aligned}$$
(11.28)
$$\begin{aligned} \pounds V_{2i}(x_t,t)=&\left\langle x(t),Q_{1i}x(t)\right\rangle +\left\langle f(x(t)),Q_{2i}f(x(t))\right\rangle \nonumber \\&-(1-\dot{\tau }_i(t))\big \{\left\langle x(t-\tau _i(t)),Q_{1i}x(t-\tau _i(t))\right\rangle \nonumber \\&+\left\langle f(x(t-\tau _i(t))),Q_{2i}f(x(t-\tau _i(t)))\right\rangle \big \}\nonumber \\&+\left\langle x(t),Q_{3i}x(t)\right\rangle -\left\langle x(t-\bar{\tau }_i),Q_{3i}x(t-\bar{\tau }_i)\right\rangle \nonumber \\&+\sum ^N_{j=1}\pi _{\textit{ij}}\bigg [\int ^t_{t-\tau _i(t)}\left\langle x(s),Q_{1j}x(s)\right\rangle \mathrm{d}s\nonumber \\&+\int ^t_{t-\tau _i(t)}\left\langle f(x(s)),Q_{2j}f(x(s))\right\rangle \mathrm{d}s +\int ^t_{t-\bar{\tau }_i}\left\langle x(s),Q_{3j}x(s)\right\rangle \mathrm{d}s\bigg ]\nonumber \\&+\sum ^N_{j=1}\pi _{\textit{ij}}\tau _j(t)\big [\left\langle x(t-\tau _i(t)),Q_{1i}x(t-\tau _i(t))\right\rangle \nonumber \\&+\left\langle f(x(t-\tau _i(t))),Q_{2i}f(x(t-\tau _i(t)))\right\rangle \big ]\nonumber \\&+\sum ^N_{j=1}\pi _{\textit{ij}}\bar{\tau }_{j}\left\langle x(t-\bar{\tau }_{i}),Q_{3i}x(t-\bar{\tau }_{i})\right\rangle , \end{aligned}$$
(11.29)
$$\begin{aligned} \pounds V_{3i}(x_t,t)=\,&\bar{\tau }_i\left\langle \vartheta _i(t),Q_{4i}\vartheta _i(t)\right\rangle -\int ^t_{t-\bar{\tau }_i}\left\langle \vartheta _i(s),Q_{4i}\vartheta _i(s)\right\rangle \mathrm{d}s\nonumber \\&+\upsilon _i(t)\left\langle g(x(t)),R_ig(x(t))\right\rangle -\int ^t_{t-\upsilon _i(t)}\left\langle g(x(t)),R_ig(x(t))\right\rangle \mathrm{d}s\nonumber \\&+\bar{\tau }_i\left\langle \sigma _i(t),Z_i\sigma _i(t)\right\rangle -\int ^t_{t-\bar{\tau }_i}\left\langle \sigma _i(t),Z_i\sigma _i(t)\right\rangle \mathrm{d}s\nonumber \\&+\sum ^N_{j=1}\pi _{\textit{ij}}\left[ \int ^0_{-\bar{\tau }_i}\int ^t_{t+\theta }\left\langle \vartheta _i(s),Q_{4j}\vartheta _j(s)\right\rangle \mathrm{d}s\mathrm{d}\theta \right. \nonumber \\&+\int ^0_{-\upsilon _i(t)}\int ^t_{t+\theta }\left\langle g(x(s)),R_{j}g(x(s))\right\rangle \mathrm{d}s\mathrm{d}\theta \nonumber \\&\left. +\int ^0_{-\bar{\tau }_i}\int ^t_{t+\theta }\left\langle \sigma _i(s),Z_{j}\sigma _j(s)\right\rangle \mathrm{d}s\mathrm{d}\theta \right] \nonumber \\&+\sum ^N_{j=1}\pi _{\textit{ij}}\left[ \bar{\tau }_{j}\int ^t_{t-\bar{\tau }_i}\left\langle \vartheta _i(s),Q_{4i}\vartheta _i(s)\right\rangle \mathrm{d}s\right. \nonumber \\&+\upsilon _j(t)\int ^t_{t-\upsilon _i(t)}\left\langle g(x(s)),R_jg(x(s))\right\rangle \mathrm{d}s\nonumber \\&\left. +\,\bar{\tau }_{j}\int ^t_{t-\bar{\tau }_i}\left\langle \sigma _i(s),Z_j\sigma _i(s)\right\rangle \mathrm{d}s\right] , \end{aligned}$$
(11.30)
$$\begin{aligned} \pounds V_{4i}(x_t,t)=\,&\bar{\tau }\left\langle x(t),(S_1+S_3)x(t)\right\rangle +\bar{\tau }\left\langle f(x(t)),S_2f(x(t))\right\rangle \nonumber \\&-\int ^t_{t-\bar{\tau }}\left\{ \left\langle x(s),(S_1+S_3)x(s)\right\rangle \right. \left. +\left\langle f(x(s)),S_2f(x(s))\right\rangle \right\} \mathrm{d}s, \end{aligned}$$
(11.31)
$$\begin{aligned} \pounds V_{5i}(x_t,t)=\,&\frac{\bar{\tau }^2}{2}\big \{\left\langle \vartheta _i(t),S_4\vartheta _i(t)\right\rangle +\left\langle \sigma _i(t),S_6\sigma _i(t)\right\rangle \big \}\nonumber \\ {}&-\int ^0_{-\bar{\tau }}\int ^t_{t+\theta }\left\langle \vartheta _i(s),S_4\vartheta _i(s)\right\rangle \mathrm{d}s\mathrm{d}\theta \nonumber \\&+\frac{\bar{\upsilon }^2}{2}\left\langle g(x(t)),S_5g(x(t))\right\rangle -\int ^0_{-\bar{\upsilon }}\int ^t_{t+\theta }\left\langle g(x(s)),S_5g(x(s))\right\rangle \mathrm{d}s\mathrm{d}\theta \nonumber \\&-\int ^0_{-\bar{\tau }}\int ^t_{t+\theta }\left\langle \sigma _i(s),S_6\sigma _i(s)\right\rangle \mathrm{d}s\mathrm{d}\theta , \end{aligned}$$
(11.32)
$$\begin{aligned} \pounds V_{6i}(x_t,t)=\,&\bar{\upsilon }\left\langle g(x(t)),S_7g(x(t))\right\rangle -\int ^t_{t-\bar{\upsilon }}\left\langle g(x(s)),S_7g(x(s))\right\rangle \mathrm{d}s\nonumber \\&+\bar{\tau }\left\langle \sigma _i(t),S_8\sigma _i(t)\right\rangle +\bar{\tau }\left\langle \vartheta _i(t),S_9\vartheta _i(t)\right\rangle \nonumber \\&-\int ^t_{t-\bar{\tau }}\left\langle \sigma _i(s),S_8\sigma _i(s)\right\rangle \mathrm{d}s -\int ^t_{t-\bar{\tau }}\left\langle \vartheta _i(s),S_9\vartheta _i(s)\right\rangle \mathrm{d}s. \end{aligned}$$
(11.33)

Based on Assumption 11.4, we obtain that

$$\begin{aligned} -x^T(t)P_i\beta _i(x(t))\le -x^T(t)P_i\varGamma _ix(t). \end{aligned}$$
(11.34)

From Lemma 11.7, it follows that

$$\begin{aligned}&-\int ^t_{t-\upsilon _i(t)}\left\langle g(x(s)),R_ig(x(s))\right\rangle \mathrm{d}s\nonumber \\&\le -\frac{1}{\bar{\upsilon }_i}\bigg <\int ^t_{t-\upsilon _i(t)}g(x(s))\mathrm{d}s,R_i\int ^t_{t-\upsilon _i(t)}g(x(s))\mathrm{d}s\bigg >. \end{aligned}$$
(11.35)

For simplicity, we denote

$$\begin{aligned}&\varsigma _{1i}(t)=\int ^{t-\tau _i(t)}_{t-\bar{\tau }}\vartheta _i(s)\mathrm{d}s,\ \ \varsigma _{2i}(t)=\int ^{t}_{t-\tau _i(t)}\vartheta _i(s)\mathrm{d}s. \end{aligned}$$

When \(0<\tau _i(t)<\bar{\tau }_i,\) from Lemma 11.8 with \(\nu =1,\mu =3,\) one can obtain that

$$\begin{aligned}&-\int ^t_{t-\bar{\tau }_i}\left\langle \vartheta _i(s),Q_{4i}\vartheta _i(s)\right\rangle \mathrm{d}s\nonumber \\ =&-\int ^{t-\tau _i(t)}_{t-\bar{\tau }_i}\left\langle \vartheta _i(s),Q_{4i}\vartheta _i(s)\right\rangle \mathrm{d}s\nonumber \\&-\int ^t_{t-\tau _i(t)}\left\langle \vartheta _i(s),Q_{4i}\vartheta _i(s)\right\rangle \mathrm{d}s\nonumber \\ \le&-\frac{1}{\bar{\tau }_i-\tau _i(t)}\left\langle \varsigma _{1i}(t),Q_{4i}\varsigma _{1i}(t)\right\rangle -\frac{1}{\tau _i(t)} \left\langle \varsigma _{2i}(t),Q_{4i}\varsigma _{2i}(t)\right\rangle \nonumber \\ \le \,&\frac{1}{\bar{\tau }_i}\max \big \{-\left\langle \varsigma _{1i}(t),Q_{4i}\varsigma _{1i}(t)\right\rangle -3\left\langle \varsigma _{2i}(t),Q_{4i}\varsigma _{2i}(t)\right\rangle ,\nonumber \\&\ \ \ \ \ \ -3\left\langle \varsigma _{1i}(t),Q_{4i}\varsigma _{1i}(t)\right\rangle -\left\langle \varsigma _{2i}(t),Q_{4i}\varsigma _{2i}(t)\right\rangle \big \}. \end{aligned}$$
(11.36)

Obviously, from Lemma 11.7, inequality (11.36) also holds when \(\tau _i(t)=0\) or \(\tau _i(t)=\bar{\tau }_i.\) Therefore, inequality (11.36) holds for any t with \(0\le \tau _i(t)\le \bar{\tau }_i.\)

On the other hand, by the Leibniz–Newton formula, we get

$$\begin{aligned} x(t)-x(t-\tau _i&(t))-\int ^t_{t-\tau _i(t)}\vartheta _i(s)\mathrm{d}s -\int ^t_{t-\tau _i(t)}\sigma _i(s)\mathrm{d}\omega (s)=0,\end{aligned}$$
(11.37)
$$\begin{aligned} x(t-\tau _i(t))-x&(t-\bar{\tau }_i)-\int ^{t-\tau _i(t)}_{t-\bar{\tau }_i}\vartheta _i(s)\mathrm{d}s -\int ^{t-\tau _i(t)}_{t-\bar{\tau }_i}\sigma _i(s)\mathrm{d}\omega (s)=0. \end{aligned}$$
(11.38)

It is easy to see that the following equality holds for any positive diagonal matrix \(G_i\) with compatible dimensions:

$$\begin{aligned} 0=-2\left\langle G_i\vartheta _i(t),\vartheta _i(t)-\alpha _i(x(t))\iota _i(t)\right\rangle . \end{aligned}$$
(11.39)

Since the feedback stabilizing control law is defined by \(u_i(t)=D^T_iX_ix(t),\) if we denote \(y_i(t)=X_ix(t),\) then for any symmetric nonnegative definite matrices \(M_i\) we have

$$\begin{aligned} 0=-2\left\langle M_iy_i(t),y_i(t)-X_ix(t)\right\rangle \ (i=1,2,\ldots ,N). \end{aligned}$$
(11.40)

Note that the following equality holds:

$$\begin{aligned} -\int ^t_{t-\bar{\tau }_i}\left\langle \sigma _i(s),Z_i\sigma _i(s)\right\rangle \mathrm{d}s=&-\int ^{t-\tau _i(t)}_{t-\bar{\tau }_i}\left\langle \sigma _i(s),Z_i\sigma _i(s)\right\rangle \mathrm{d}s\nonumber \\&-\int ^t_{t-\tau _i(t)}\left\langle \sigma _i(s),Z_i\sigma _i(s)\right\rangle \mathrm{d}s. \end{aligned}$$
(11.41)

From [14], we have

$$\begin{aligned}&\mathbb {E}\bigg \{\int ^t_{t-\tau _i(t)}\left\langle \sigma _i(s),Z_i\sigma _i(s)\right\rangle \mathrm{d}s\bigg \}\nonumber \\ =\,&\mathbb {E}\bigg <\int ^t_{t-\tau _i(t)}\sigma _i(s)\mathrm{d}\omega (s),Z_i\int ^t_{t-\tau _i(t)}\sigma _i(s)\mathrm{d}\omega (s)\bigg >,\end{aligned}$$
(11.42)
$$\begin{aligned}&\mathbb {E}\bigg \{\int ^{t-\tau _i(t)}_{t-\bar{\tau }_i}\left\langle \sigma _i(s),Z_i\sigma _i(s)\right\rangle \mathrm{d}s\bigg \}\nonumber \\ =\,&\mathbb {E}\bigg <\int ^{t-\tau _i(t)}_{t-\bar{\tau }_i}\sigma _i(s)\mathrm{d}\omega (s),Z_i\int ^{t-\tau _i(t)}_{t-\bar{\tau }_i}\sigma _i(s)\mathrm{d}\omega (s)\bigg >. \end{aligned}$$
(11.43)

By (11.4)–(11.9) and (11.14)–(11.43), we obtain

$$\begin{aligned}&\frac{\mathrm{d}\mathbb {E}[V(x(t),t,i)]}{\mathrm{d}t}\nonumber \\ {}&\le \mathbb {E}\max \Big \{\big <\zeta _i(t),\big (\overline{\varOmega }_i+\widetilde{\varOmega }_i +\mathcal {E}^T\overline{Z}_i\mathcal {E}\big )\zeta _i(t)\big >, \big <\zeta _i(t),\big (\overline{\varOmega }_i+\widehat{\varOmega }_i+\mathcal {E}^T\overline{Z}_i\mathcal {E}\big )\zeta _i(t)\big >\Big \}, \end{aligned}$$
(11.44)

where

$$\begin{aligned} \zeta _i(t)=\mathrm{col}\bigg \{&x(t)\ \ \ x(t-\tau _i(t))\ \ \ f(x(t))\\&f(x(t-\tau _i(t)))\ \ \ g(x(t))\ \ \ \int ^t_{t-\upsilon _i(t)}g(x(s))\mathrm{d}s\\&\vartheta _i(t)\ \ \ y_i(t)\ \ \ x(t-\bar{\tau }_i)\ \ \ \beta _i(x(t))\\&\int ^t_{t-\tau _i(t)}\sigma _i(s)\mathrm{d}\omega (s)\ \ \ \int ^{t-\tau _i(t)}_{t-\bar{\tau }_i}\sigma _i(s)\mathrm{d}\omega (s)\bigg \}. \end{aligned}$$

Next, we prove that the closed-loop system is exponentially stable in the mean square.

For convenience, we define

$$\lambda _p=\min _{i\in \wp }\{\lambda _{\min }(P_i)\},$$
$$\lambda _M=\min _{i\in \wp }\{\lambda _{\min }(-\overline{\varOmega }_i-\widetilde{\varOmega }_i-\mathcal {E}^T\overline{Z}_i\mathcal {E}),\ \lambda _{\min }(-\overline{\varOmega }_i-\widehat{\varOmega }_i-\mathcal {E}^T\overline{Z}_i\mathcal {E})\}.$$

From (11.12), (11.13), and the well-known Schur complement, it can easily be seen that \(\lambda _M>0.\) Furthermore, from (11.44) we have that

$$\begin{aligned} \frac{\mathrm{d}\mathbb {E}[V(x(t),t,i)]}{\mathrm{d}t}\le -\lambda _M\mathbb {E}||\zeta _i(t)||^2\le -\lambda _M\mathbb {E}||x(t)||^2. \end{aligned}$$
(11.45)

Similar to [45], from (11.18) and the definition of \(\vartheta _i(t)\), there exist positive scalars \(\varepsilon _1\) and \(\varepsilon _2\) such that

$$\mathbb {E}[V(x(t),t,i)]\le \varepsilon _1\mathbb {E}|| x(t)||^2+\varepsilon _2\mathbb {E}\int ^t_{t-\bar{\tau }_i}|| x(s)||^2\mathrm{d}s.$$

To prove the mean square exponential stability, we modify the Lyapunov function candidate (11.18) as \(\bar{V}(x(t),t,i)=e^{\textit{rt}}V(x(t),t,i),\) where r is chosen such that \(r(\varepsilon _1+\bar{\tau }\varepsilon _2e^{r \bar{\tau }})\le \lambda _M\).

Then, we have

$$\mathbb {E}[\bar{V}(x(t),t,i)]\ge \lambda _p\mathbb {E}|| x(t)||^2.$$

Furthermore, by Dynkin’s formula [14], for any \(\eta (t)=i\in \wp ,\ t>0,\) we obtain that

$$\begin{aligned} \mathbb {E}[\bar{V}(x(t),t,i)]=\,&\mathbb {E}[\bar{V}(x(0),0,\eta (0))]+\mathbb {E}\int ^t_0e^{\textit{rs}} \left[ rV(x(s),s,i)+\pounds V(x(s),s,i)\right] \mathrm{d}s\\ \le \,&(\varepsilon _1+\bar{\tau }\varepsilon _2)\sup _{-\varpi \le s \le 0}\mathbb {E}|| x(s)||^2 +r\varepsilon _1 \int ^t_0e^{\textit{rs}}\mathbb {E}|| x(s)||^2\mathrm{d}s\\&+r\varepsilon _2 \mathbb {E}\int ^t_0e^{\textit{rs}}\int ^s_{s-\bar{\tau }}|| x(\theta )||^2\mathrm{d}\theta \mathrm{d}s -\lambda _M\int ^t_0e^{\textit{rs}}\mathbb {E}|| x(s)||^2\mathrm{d}s. \end{aligned}$$

By changing the order of integration, we get

$$\begin{aligned}&\int ^t_0e^{\textit{rs}}\int ^s_{s-\bar{\tau }}|| x(\theta )||^2\mathrm{d}\theta \mathrm{d}s\\ \le&\int ^0_{-\bar{\tau }}\int ^{\theta +\bar{\tau }}_{0}e^{\textit{rs}}|| x(\theta )||^2\mathrm{d}s\mathrm{d}\theta +\int ^t_0\int ^{\theta +\bar{\tau }}_{\theta }e^{\textit{rs}}|| x(\theta )||^2\mathrm{d}s\mathrm{d}\theta \\ \le&\int ^0_{-\bar{\tau }}(\theta +\bar{\tau })e^{r(\theta +\bar{\tau })}|| x(\theta )||^2\mathrm{d}\theta +\bar{\tau }\int ^t_0e^{r(\theta +\bar{\tau })}|| x(\theta )||^2\mathrm{d}\theta \\ \le \,&\bar{\tau }e^{r\bar{\tau }}\bigg \{\sup _{-\varpi \le s\le 0}|| x(s)||^2+\int ^t_0e^{r\theta }|| x(\theta )||^2\mathrm{d}\theta \bigg \}. \end{aligned}$$

Therefore we have

$$\mathbb {E}|| x(t)||^2\le \epsilon e^{-{\textit{rt}}}\sup _{-\varpi \le s\le 0}\mathbb {E}|| x(s)||^2,$$

or

$$\limsup _{t\rightarrow \infty }\frac{1}{t}\log (\mathbb {E}|| x(t)||^2)\le -r,$$

where \(\epsilon =\lambda ^{-1}_p(\varepsilon _1+\bar{\tau }\varepsilon _2+r\bar{\tau }^2\varepsilon _2e^{r\bar{\tau }}).\)

Consequently, the closed-loop system is exponentially stable in the mean square, i.e., system (11.1) is r-exponentially stabilizable in the mean square. This completes the proof.
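In practice, once the scalars \(\varepsilon _1,\varepsilon _2\), and \(\lambda _M\) are available, the proof's condition \(r(\varepsilon _1+\bar{\tau }\varepsilon _2e^{r\bar{\tau }})\le \lambda _M\) determines the admissible decay rate. Since the left-hand side is continuous and strictly increasing in \(r\ge 0\), the largest admissible r can be found by bisection; a minimal sketch with assumed numeric inputs:

```python
import numpy as np

def max_decay_rate(eps1, eps2, tau_bar, lam_M, tol=1e-10):
    """Largest r with r*(eps1 + tau_bar*eps2*exp(r*tau_bar)) <= lam_M.
    The left-hand side is increasing in r >= 0, so bisection applies."""
    g = lambda r: r * (eps1 + tau_bar * eps2 * np.exp(r * tau_bar)) - lam_M
    lo, hi = 0.0, 1.0
    while g(hi) < 0:                 # grow the bracket until g changes sign
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return lo

r_max = max_decay_rate(eps1=1.0, eps2=0.5, tau_bar=0.4, lam_M=2.0)  # assumed data
```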

Remark 11.11

The Lyapunov functional (11.18) of this chapter fully uses the information about the amplification functions and the mode-dependent time-varying delays, whereas [15, 20] only use the information about the delays when constructing their Lyapunov functionals. Therefore, the Lyapunov functional here is more general than those in [15, 20], and the stability criteria in this chapter may be less conservative.

Remark 11.12

When the time-varying delay \(\tau _i(t)\) is not differentiable or its derivative bound is unknown, the result in Theorem 11.10 is no longer applicable. In this case, by setting \(Q_{1i}=Q_{2i}=0\) in Theorem 11.10, one can obtain a mean-square exponential stabilization result for system (11.1).
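Once the LMIs of Theorem 11.10 are feasible, the controller is read off from the solver output through the substitution \(\bar{M}_i=M_iX_i\): one recovers \(X_i=M_i^{-1}\bar{M}_i\) and forms the gain of \(u_i(t)=D^T_iX_ix(t).\) A minimal sketch, with hypothetical matrices standing in for actual solver output:

```python
import numpy as np

def feedback_gain(M_i, Mbar_i, D_i):
    """Recover X_i from Mbar_i = M_i X_i and form the row gain of
    the state-feedback law u_i(t) = D_i^T X_i x(t)."""
    X_i = np.linalg.solve(M_i, Mbar_i)       # X_i = M_i^{-1} Mbar_i
    return D_i.T @ X_i

# Hypothetical two-neuron, single-input solver output (placeholders only):
M_i    = np.array([[2.0, 0.1], [0.1, 3.0]])
Mbar_i = np.array([[-1.0, 0.5], [0.2, -0.8]])
D_i    = np.array([[1.0], [0.5]])
K_i = feedback_gain(M_i, Mbar_i, D_i)        # 1 x 2 feedback gain
```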

If there are no stochastic disturbances, that is, \(E_j(\eta _t)=0\ (j=1,\ldots ,5)\), then the neural network (11.1) simplifies to

$$\begin{aligned} \dot{x}(t)=&-\alpha (x(t),\eta _t)\bigg [\beta (x(t),\eta _t) -A(\eta _t)f(x(t))-B(\eta _t)f(x(t-\tau (t,\eta _t)))\nonumber \\&-C(\eta _t)\int ^t_{t-\upsilon (t,\eta _t)}g(x(s))\mathrm{d}s-D(\eta _t)u(t,\eta _t)\bigg ]. \end{aligned}$$
(11.46)

For system (11.46), by setting \(Z_i=S_6=S_8=0\) in Theorem 11.10 and deleting \(\int ^t_{t-\tau _i(t)}\sigma _i(s)\mathrm{d}\omega (s)\) and \(\int ^{t-\tau _i(t)}_{t-\bar{\tau }_i}\sigma _i(s)\mathrm{d}\omega (s)\) from \(\zeta _i(t)\), we obtain the following exponential stabilization result.

Corollary 11.13

Given \(r>0\). For any given scalars \(\bar{\tau }_i>0,\) \(\bar{\upsilon }_i>0,\ \upsilon '_i<1,\) consider the system (11.46) satisfying Assumptions 11.3–11.5 with \(\dot{\tau }_i(t)\le \tau '_i,\ \dot{\upsilon }_i(t)\le \upsilon '_i.\) The system (11.46) is globally r-exponentially stabilizable if there exist symmetric positive definite matrices \(P_i\in \mathbb {R}^{n\times n},\) symmetric nonnegative definite matrices \(Q_{\textit{li}}\ (l=1,\ldots ,4)\) and \(R_i,M_i,S_{l}\ (l=1,\ldots ,5,7,9),\) positive diagonal matrices \(G_i,U_i,T_i,W_i,H,K,\) and real matrices \(X_i\) such that (11.4), (11.5), (11.7), (11.9) and the following inequalities hold,

$$\begin{aligned}&\left[ \begin{array}{cc} \underline{\varOmega }_i+\check{\varOmega }_i&{}\mathfrak {E}^T\\ \mathfrak {E}&{}\widetilde{Z}_i\end{array}\right] <0,\end{aligned}$$
(11.47)
$$\begin{aligned}&\left[ \begin{array}{cc} \underline{\varOmega }_i+\grave{\varOmega }_i&{}\mathfrak {E}^T\\ \mathfrak {E}&{}\widetilde{Z}_i\end{array}\right] <0, \end{aligned}$$
(11.48)

where

$$\begin{aligned} \underline{\varOmega }_i=&\left[ \begin{array}{ccc} \varOmega _{1i}&{}\varOmega _{2i}&{}\varOmega _{4i}\\ *&{}\varOmega _{3i}&{}\varOmega _{5i}\\ *&{}*&{}\varOmega _{6i}\end{array}\right] ,\\ \check{\varOmega }_i=&-\frac{2}{\bar{\tau }_i}\mathcal {I}^TQ_{4i}\mathcal {I},\ \ \grave{\varOmega }_i=-\frac{2}{\bar{\tau }_i}\mathbb {J}^TQ_{4i}\mathbb {J},\\ \mathcal {I}=&\left[ \begin{array}{cccccccccc}0&-I&0&0&0&0&0&0&I&0\end{array}\right] ,\\ \mathbb {J}=&\left[ \begin{array}{cccccccccc}-I&I&0&0&0&0&0&0&0&0\end{array}\right] ,\\ \mathfrak {E}=\,&[\ E_{1i}\ \ E_{2i}\ \ E_{3i}\ \ E_{4i}\ \ 0\ \ E_{5i}\ \ 0\ \ 0\ \ 0\ \ 0\ ], \end{aligned}$$

\(i=1,\ldots ,N,\) and other parameters are defined in Theorem 11.10. Furthermore, the feedback stabilizing control law is defined by \(u_i(t)=D^T_iX_ix(t).\)

11.4 Illustrative Examples

In this section, we provide three numerical examples to demonstrate the feasibility of our delay-dependent stabilization criteria.

Example 11.14

Consider system (11.1) with \(N=2,\)

$$\begin{aligned} \alpha _{\textit{ji}}(x_j(t))=\,&0.4\sin (x_j(t))+0.8,\\ \beta _{\textit{ji}}(x_j(t))=\,&7.5x_j(t)+0.5\sin (x_j(t)),\\ f_j(x_j(t))=\,&g_j(x_j(t))=\tanh (x_j(t)),\ \ j=1,2,\\ \tau _i(t)=\,&0.2\sin (t)+0.2,\\ \upsilon _i(t)=\,&0.3\sin (t)+0.3,\ \ i=1,2, \end{aligned}$$

and

$$\begin{aligned} A_1=&\left[ \begin{array}{cc} 1&{} -0.01\\ 0.1 &{}1.2\end{array}\right] ,\ \ A_2=\left[ \begin{array}{cc} 1.1&{} -0.01\\ 0.1 &{}1.2\end{array}\right] ,\\ B_1=&\left[ \begin{array}{cc} 5.2&{} 1.2\\ 1.12&{} 2.3\end{array}\right] ,\ \ B_2=\left[ \begin{array}{cc} 5.3&{} 1.1\\ 1.11&{} 2.3\end{array}\right] ,\\ C_1=&\left[ \begin{array}{cc} 1.2&{} 0.11\\ 0.1 &{}1.22\end{array}\right] ,\ \ C_2=\left[ \begin{array}{cc} 1.1&{} 0.12\\ 0.1 &{}1.22\end{array}\right] ,\\ D_1=\,&D_2=0,\ \ E_{11}=E_{21}=0.5I,\\ E_{12}=\,&E_{22}=0.4I,\ \ E_{l1}=E_{l2}=0,\ l=3,4,5;\\ \aleph =&\left[ \begin{array}{cc} -0.8&{}0.8\\ 0.3&{}-0.3\end{array}\right] . \end{aligned}$$

For this system without an external controller, Fig. 11.1a shows the time response of \(x_1(t)\) and \(x_2(t)\).

Fig. 11.1 a Time response of \(x_1(t)\) and \(x_2(t)\) without external controller in Example 11.14. b Time response of \(x_1(t)\) and \(x_2(t)\) with external controller \(u_1(t),u_2(t)\) in Example 11.14

However, if we set

$$D_1=\left[ \begin{array}{cc}4 \\ 2.1\end{array}\right] ,\ D_2=\left[ \begin{array}{cc}4 \\ 2\end{array}\right] ,$$

it is easy to see that Assumptions 11.3–11.5 are satisfied with \(\underline{\alpha }_i=0.4,\ \bar{\alpha }_i=1.2,\ \varPi _i=8I,\ \varGamma _i=7I,\) \(\bar{\varSigma }=I,\ \varSigma =F_1=F_3=0,\ F_2=F_4=0.5I,\) and \(\bar{\tau }=\bar{\tau }_i=0.4,\ \bar{\upsilon }=\bar{\upsilon }_i=0.6,\ i=1,2.\) Using the MATLAB LMI Toolbox, the LMIs (11.4)–(11.11) are found to be feasible, and the feedback control law is

$$\begin{aligned}&u_1(t)=[\ -15.9876\ \ \ 28.4673\ ]x(t),\\&u_2(t)=[\ -9.8136\ \ \ 17.5622\ ]x(t). \end{aligned}$$

The simulated solution is shown in Fig. 11.1b for \(t\in [-0.65,200]\). It is clear that both \(x_1(t)\) and \(x_2(t)\) converge exponentially to zero.
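Trajectories such as those in Fig. 11.1b can be reproduced by direct simulation of the closed-loop system (11.1). The sketch below is a plain Euler–Maruyama discretization for Example 11.14; the step size, constant initial function, Bernoulli approximation of the mode switching over each step, and the Riemann-sum treatment of the distributed-delay integral are our assumptions rather than the chapter's procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

# Example 11.14 data, listed per mode i = 0, 1.
A = [np.array([[1.0, -0.01], [0.1, 1.2]]), np.array([[1.1, -0.01], [0.1, 1.2]])]
B = [np.array([[5.2, 1.2], [1.12, 2.3]]), np.array([[5.3, 1.1], [1.11, 2.3]])]
C = [np.array([[1.2, 0.11], [0.1, 1.22]]), np.array([[1.1, 0.12], [0.1, 1.22]])]
D = [np.array([[4.0], [2.1]]), np.array([[4.0], [2.0]])]
E1 = [0.5 * np.eye(2), 0.4 * np.eye(2)]
E2 = E1                                   # E_{2i} = E_{1i} in this example
U = [np.array([[-15.9876, 28.4673]]), np.array([[-9.8136, 17.5622]])]
Pi = np.array([[-0.8, 0.8], [0.3, -0.3]])

alpha = lambda x: 0.4 * np.sin(x) + 0.8
beta = lambda x: 7.5 * x + 0.5 * np.sin(x)
f = g = np.tanh
tau = lambda t: 0.2 * np.sin(t) + 0.2
ups = lambda t: 0.3 * np.sin(t) + 0.3

dt, T = 1e-3, 20.0                        # step size and horizon (assumed)
n_hist = int(0.65 / dt)                   # history covering varpi = 0.6
xs = [np.array([1.0, -1.0])] * n_hist     # constant initial function (assumed)
i, t = 0, 0.0
while t < T:
    x = xs[-1]
    xd = xs[-1 - int(tau(t) / dt)]        # x(t - tau_i(t))
    m = int(ups(t) / dt)
    dist = sum((g(xs[-1 - k]) for k in range(1, m + 1)), np.zeros(2)) * dt
    u = U[i] @ x                          # feedback law from the LMI solution
    drift = -alpha(x) * (beta(x) - A[i] @ f(x) - B[i] @ f(xd)
                         - C[i] @ dist - D[i] @ u)
    diff = E1[i] @ x + E2[i] @ xd         # E_{3i} = E_{4i} = E_{5i} = 0 here
    x_new = x + drift * dt + diff * rng.normal(0.0, np.sqrt(dt))
    if rng.random() < -Pi[i, i] * dt:     # mode switch over [t, t + dt)
        i = 1 - i
    xs.append(x_new)
    t += dt
```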

Example 11.15

Consider system (11.46) with \(N=2,\)

$$\begin{aligned} B_1=&\left[ \begin{array}{cc} 6.2&{} 1.2\\ 1.12 &{}0.3\end{array}\right] ,\ \ B_2=\left[ \begin{array}{cc} 6.3 &{}1.1\\ 1.11&{} 0.3\end{array}\right] , \end{aligned}$$

and other parameters are defined in Example 11.14.

For this system without an external controller, Fig. 11.2a shows the time response of \(x_1(t)\) and \(x_2(t)\).

Fig. 11.2 a Time response of \(x_1(t)\) and \(x_2(t)\) without external controller in Example 11.15. b Time response of \(x_1(t)\) and \(x_2(t)\) with external controller \(u_1(t),u_2(t)\) in Example 11.15

However, if we set \(D_1=D_2=[\ 4\ \ 0\ ]^T,\) it is easy to see that Assumptions 11.3–11.5 are satisfied. Using the MATLAB LMI Toolbox, the LMIs (11.4), (11.5), (11.7), (11.9), (11.47) and (11.48) are found to be feasible, and the feedback control law is

$$\begin{aligned}&u_1(t)=[\ -0.9144\ \ \ -1.20177\ ]x(t),\\&u_2(t)=[\ -1.0149\ \ \ -0.1481\ ]x(t). \end{aligned}$$

The simulated solution is shown in Fig. 11.2b for \(t\in [-0.65,200]\). It is clear that both \(x_1(t)\) and \(x_2(t)\) converge exponentially to zero.

Example 11.16

Consider system (11.46) with \(N=1,\)

$$\begin{aligned} \alpha _{j1}(x_j(t))=\,&1,\ \beta _{j1}(x_j(t))=8x_j(t),\\ f_j(x_j(t))=\,&g_j(x_j(t))=\tanh (x_j(t)),\ \ j=1,2,\\ \tau _1(t)=\,&8.5,\ \ \ \upsilon _1(t)=2.5, \end{aligned}$$

and

$$\begin{aligned} A_1=&\left[ \begin{array}{cc} 1&{} -0.01\\ 0.1 &{}1.2\end{array}\right] ,\ \ B_1=\left[ \begin{array}{cc} 5.2&{} 1.2\\ 1.12&{} 2.3\end{array}\right] ,\\ C_1=&\left[ \begin{array}{cc} 1.2&{} 0.11\\ 0.1 &{}1.22\end{array}\right] ,\ \ D_1=\left[ \begin{array}{c} -1.2\\ 0.2\end{array}\right] . \end{aligned}$$

For this system, Assumptions 11.3–11.5 are satisfied with \(\underline{\alpha }_i=\bar{\alpha }_i=1,\ \varPi _i=\varGamma _i=8I,\) \(\bar{\varSigma }=I,\ \varSigma =F_1=F_3=0,\ F_2=F_4=0.5I,\) and \(\bar{\tau }=\bar{\tau }_1=8.5,\ \bar{\upsilon }=\bar{\upsilon }_1=2.5.\) It is easy to verify that Theorem 1 of [27] admits no feasible solution. However, using the MATLAB LMI Toolbox, the LMIs (11.4), (11.5), (11.7), (11.9), (11.47) and (11.48) are feasible with the following matrices:

$$\begin{aligned} P_1&=\left[ \begin{array}{cc} 72.4939 &{} -13.8747\\ -13.8747 &{} 103.5930\end{array}\right] ,\\ Q_{11}&=\left[ \begin{array}{cc}16.7250 &{} -26.0105\\ -26.0105 &{} 88.6304 \end{array}\right] ,\\ Q_{21}&=\left[ \begin{array}{cc}644.0687 &{} 178.7808\\ 178.7808 &{} 234.9008 \end{array}\right] ,\\ Q_{31}&=\left[ \begin{array}{cc} 14.2369 &{} -26.9039\\ -26.9039 &{} 81.3660 \end{array}\right] ,\\ Q_{41}&=\left[ \begin{array}{cc} 0.3839 &{} -0.0816\\ -0.0816 &{} 0.6447 \end{array}\right] ,\\ R_1&=\left[ \begin{array}{cc} 175.2573 &{} -2.4592\\ -2.4592&{} 269.3507 \end{array}\right] ,\\ M_1&=\left[ \begin{array}{cc} 95.6377 &{} -2.9849\\ -2.9849 &{} 75.6384 \end{array}\right] ,\\ \bar{M}_1&=\left[ \begin{array}{cc} -334.3276&{} 58.5190\\ 58.5190 &{} -17.4669 \end{array}\right] ,\\ T_1&=\mathrm{diag}\{12.6571,\ 49.6002\},\\ U_1&=\mathrm{diag}\{125.4140,\ 170.7878 \}, \end{aligned}$$
$$\begin{aligned} W_1&=\mathrm{diag}\{ 10.2844,\ 37.0834\},\\ G_1&=\mathrm{diag}\{9.2136,\ 15.5713 \},\\ H&=\mathrm{diag}\{11.6382,\ 15.4633 \},\\ K&=\mathrm{diag}\{ 5.1548,\ 8.9735 \}, \end{aligned}$$

and accordingly the feedback control is

$$\begin{aligned}&u_1(t)=[\ 0.3810\ \ \ 0.2053\ ]x(t). \end{aligned}$$

Example 11.16 shows that the results obtained here are less conservative than those in [27]. Hence, the proposed method improves upon the existing ones.

11.5 Summary

In this chapter, the problem of designing a feedback control law to exponentially stabilize a class of stochastic Cohen-Grossberg neural networks with both Markovian jumping parameters and mixed mode-dependent time delays has been studied. The mixed time delays consist of both discrete and distributed delays. Using a new Lyapunov–Krasovskii functional that accounts for the mode-dependent mixed delays, a new delay-dependent condition for global exponential stabilization has been established in terms of linear matrix inequalities. Once these LMIs are feasible, all the controller parameters can easily be computed, and the design of a stabilizing controller is accomplished.