1 Introduction

Over the last decades, much interest has focused on complex dynamical networks (CDNs) because of their importance in real-world applications. A CDN is composed of a set of interconnected nodes, each a basic unit with its own dynamics. Applications of CDNs are found in various fields, such as biology, physics, and technology [40, 44]. In engineering, CDNs appear as communication networks, artificial neural networks, electric power grids, multi-agent systems for cooperative applications, etc. [16, 20, 34, 45]. CDNs can exhibit many dynamical behaviors, such as synchronization [2, 21, 47], multi-synchronization [26], and group synchronization [18]. Network synchronization has been observed in various physical and man-made networks, such as fireflies flashing in unison, message routing in the Internet, and the dynamics of a power grid [2]. Synchronization means that the state vectors of all the nodes in a CDN converge to a unified behavior, i.e., a common state vector. For example, in the synchronization problem of two coupled chaotic neurons, synchronization means that the two neurons always generate spikes almost simultaneously [39]. There are many types of synchronization, such as exponential synchronization [1, 48], finite-time synchronization [33], and asymptotic synchronization [14, 23, 27]. The synchronization problem is a kind of stability problem that will be addressed in the next section. Exponential stability is a stronger notion than asymptotic stability, since it additionally guarantees a convergence rate; for this reason, it has received much attention from researchers [3, 8].

There are many studies on the synchronization problem of CDNs under different challenges. Over the last few decades, synchronization of CDNs has come to characterize several natural phenomena well. It is worth mentioning that synchronization of CDNs has many useful applications in image processing [6], secure communication [5, 49], power systems [9], and multi-agent systems [30, 31, 51]. Time delays are an unavoidable issue in practical systems, due to the limited speed of signal propagation or traffic congestion, and may cause instability or oscillation [22]; hence the necessity of studying time delays. A wide variety of methods have been used to study time delays in CDNs [19, 58]. Coupling time delays, which exist in most CDNs, are caused by the exchange of data among the nodes of a CDN [17, 24, 42, 54]. In addition, in many practical systems there is a finite set of modes, and the network parameters switch among these modes. The switching may be driven by two kinds of factors [57]: arbitrary factors [29] or stochastic factors [11, 41, 50, 53]. Stochastic factors include, for example, abrupt natural variations [12, 38] and random component or communication-topology failures [10]. Stochastic switching with the memoryless property is usually modeled by a Markov chain. Systems subject to such stochastic switching are called Markovian jump systems, a special case of hybrid systems [4]. Wang et al. [43] investigated the mean-square exponential synchronization problem for Markovian jump CDNs (MJCDNs) under feedback control, and derived sufficient conditions in terms of linear matrix inequalities (LMIs). Global exponential synchronization of a network with Markovian jumping parameters and time-varying delay has been discussed in [50]. 
Li and Yue [25] investigated the synchronization problem of MJCDNs with mixed time delays, and delay-dependent synchronization criteria were derived using LMIs. In the Markov model of mode switching, a transition rate (TR) matrix specifies the behavior of the system at all times. It is worth pointing out that the above-mentioned references assume that the transition probabilities of the jumping process are completely known. In fact, the study of Markovian jump systems with partly unknown TRs (PUTRs) is crucial, because the TR information is not completely known in most practical cases. For example, the load changes of DC motors in various servomechanisms are random and uncertain; hence, obtaining completely known TRs (CKTRs) is impossible. PUTRs are, in practice, a kind of uncertainty on the TRs which, unlike other types of uncertainty, requires no bound or structure information. Some papers are devoted to the synchronization problem of MJCDNs with PUTRs [55, 59]. Zhou et al. [59] studied the exponential synchronization problem for an array of Markovian jump-delayed complex networks with PUTRs under a new randomly occurring event-triggered control strategy. The exponential synchronization problem of Markovian jump genetic oscillators with PUTRs has been reported in [55]. In traditional Markovian jump systems, typically known as homogeneous Markovian jump systems, the TRs are assumed constant over time [15, 56]. It is quite idealistic to assume accurate and complete TR information, for it is both difficult and costly to measure the TRs accurately [11]. Therefore, recent studies assume that the TRs of a Markov process are uncertain. Another class of uncertain TRs is time-varying TRs: a Markov process with time-varying TRs is called a non-homogeneous Markov process. 
One physical example of a non-homogeneous Markovian jump system is an unmanned aerial vehicle (UAV), where the switching of the system matrices with airspeed is often modeled as a Markov process. Because of weather changes, such as variations in wind speed, the transition probabilities among these multiple airspeeds are not fixed and vary with time; thus, this practical problem can be modeled by a non-homogeneous Markovian structure [52]. In non-homogeneous Markovian jump systems, the TRs change instantaneously, so measuring the TR values is time-consuming and practically impossible; as a result, treating the process as piecewise-homogeneous is a practical solution. The TRs of a piecewise-homogeneous Markov process are time-varying, but invariant during each interval. In fact, a piecewise-homogeneous Markovian jump system involves two Markovian signals: (1) a low-level Markov chain, with time-varying TRs, that governs switching among the different dynamics and topologies, and (2) a high-level Markov chain, with constant TRs, that governs switching among the TR matrices of the low-level signal. A two-level Markovian structure is thus well suited to modeling time-varying TRs; in other words, the assumption of time-varying TRs is less conservative than that of time-invariant TRs.

Most articles on the synchronization problem of CDNs with Markovian jump parameters have considered the homogeneous Markovian structure, which is far from real systems [7, 36, 37]. To the best of the authors’ knowledge, there is only one study of the exponential synchronization problem of CDNs with a piecewise-homogeneous Markovian structure [28]. Li et al. [28] discussed the synchronization problem of an MJCDN with piecewise-homogeneous TRs for a simple model: only one of the matrices (the outer coupling matrix) is mode-dependent and driven by the piecewise-homogeneous Markov process, all other parameters in [28] are constant, and the time-varying coupling delay in [28] is not mode-dependent. In this study, we consider a piecewise-homogeneous MJCDN model with more complicated dynamics for each coupled node, covering a larger class of MJCDNs. The contributions of the present paper compared with existing studies are as follows:

  • All system matrices in this paper are piecewise-homogeneous Markovian mode-dependent. To the best of the authors’ knowledge, the synchronization problem for such an MJCDN with piecewise-constant TRs has not been sufficiently investigated. We adopt this assumption to model more realistic conditions of physical systems.

  • The time-varying coupling delay is assumed to change according to the piecewise-homogeneous Markovian modes, i.e., the coupling time delay depends on both levels of the two-level Markovian signal, a case which has rarely been studied.

  • The exponential synchronization problem of piecewise-homogeneous Markovian jump-delayed complex networks with PUTRs has seldom been investigated and remains challenging. The PUTR assumption is more general than that of completely known TRs.

The contributions of this paper have not been offered in other studies and are original. Moreover, our setting is more involved than those of [28, 32] because of the mode-dependent time-varying coupling delay and the uncertain Markovian jump process, which includes piecewise-homogeneous and partly unknown transition rates. We aim to investigate the synchronization problem for a new class of MJCDNs with mode-dependent coefficient matrices and mode-dependent time-varying coupling delay. It is worth mentioning that mode-dependent time delays are present in many real networks, such as networked control systems and sensor/actuator networks [12].

The organization of this paper is as follows. The problem formulation and some preliminary results are presented in Sect. 2. The main results for the synchronization problem of the piecewise-homogeneous MJCDN under CKTRs are reported in Theorems 1 and 2 in Sect. 3. In Sect. 3.1, we extend the synchronization problem to the case of PUTRs in Theorems 3 and 4. A numerical example demonstrates the effectiveness of the obtained results in both cases. Finally, the conclusion is given. To achieve synchronization of the model considered in this paper, a proper controller is designed for each node in the network based on the Lyapunov–Krasovskii functional method and LMIs.

Notations The notations used in this paper are standard. \({\mathbb {R}}^n\) refers to the n-dimensional Euclidean space, and the set of real \({n\times m}\) matrices is denoted by \({\mathbb {R}}^{n\times m}\). For symmetric real matrices \({\mathbf {A}}\) and \({\mathbf {B}}\), \({\mathbf {A}}>{\mathbf {B}}\) (\({\mathbf {A}}\ge {\mathbf {B}}\)) denotes that \({\mathbf {A}}-{\mathbf {B}}\) is positive definite (positive semidefinite). The matrix \(\mathbf {I_n}\) represents the identity matrix of order n, and \({\mathbf {0}}\) denotes the zero matrix of suitable dimension. The superscript “T” represents the transpose, and \(\mathrm{{diag}}\{ .\}\) is a block-diagonal matrix. \(\left\| . \right\| \) is the Euclidean norm in \({\mathbb {R}}^p\). \({{\mathbb {E}}}\{ x\}\) (\({{\mathbb {E}}}\{ x\mathrm{{|}}y\}\)) denotes the expectation of the stochastic variable x (conditional on y). \(({\mathbf{L}} \otimes {\mathbf{K}})\) is the \({mp \times nq}\) matrix given by the Kronecker product of \({\mathbf{L}} \in {{\mathbb {R}}^{m \times n}}\) and \({\mathbf{K}}\in {{\mathbb {R}}^{p \times q}}\). Symmetric terms in a symmetric matrix are denoted by \(*\). \({\lambda _{\hbox {max}}(.)}\) and \({\lambda _{\hbox {min}}(.)}\) denote the largest and smallest eigenvalues of a given matrix.

2 Problem Formulation and Preliminaries

Consider the following CDN with piecewise-homogeneous Markovian jump structure, in which the dynamical equation of each node, the coupling time delay, and the coefficient matrices are assumed to be mode-dependent:

$$\begin{aligned} \begin{aligned}&{\dot{{\mathbf {x}}}_b}(t) ={{\mathbf {A}}}({r_t}) {{{\mathbf {x}}}_b}(t) + {{\mathbf {B}}}({r_t}) {{\mathbf {f}}}({{{\mathbf {x}}}_b}(t)) + \sum \limits _{d = 1}^N {{g_{bd}}({r_t})} {{\varvec{\Gamma }} ({r_t})} {{{\mathbf {x}}}_d}(t - \tau ^{{r_t},{\sigma _t}}(t)) + {{{\mathbf {u}}}_b}(t) \\&\quad b = 1,2, \ldots ,N, \end{aligned} \end{aligned}$$
(1)

where \({{\mathbf{x}}_b}(t) = {[{x_{b1}}(t),{x_{b2}}(t), \ldots ,{x_{bn}}(t)]^T}\in {{\mathbb {R}}^n}\) denotes the state vector of the bth node at time t, and \({\mathbf{f}}({{\mathbf{x}}_b}(t))\) and \({\mathbf {u}}_b(t)\) are the continuous nonlinear vector function and the control input of node b, respectively. \({\mathbf{A}}({r_t}),{\mathbf{B}}({r_t}) \in {{\mathbb {R}}^{n \times n}}\) are the mode-dependent coefficient matrices. \({{\varvec{\Gamma }} ({r_t})} = {[{\gamma _{bd}({r_t})}]_{n \times n}}\) denotes the inner coupling between the elements of each node, and \({{\mathbf {G}}}({r_t}) = {[{g_{bd}}({r_t})]_{N \times N}}\) denotes the outer coupling between the nodes of the whole network and represents its topological structure. If there is a connection between node b and node d, then \({g_{bd}}({r_t}) = {g_{db}}({r_t}) \ne 0 \), \(b \ne d\); otherwise, \({g_{bd}}({r_t}) = {g_{db}}({r_t}) = 0\). The sum of each row of \({\mathbf{G}}({r_t})\) is assumed to be zero, i.e., \(\sum \nolimits _{d = 1,b \ne d}^N {{g_{bd}}({r_t})} = - {g_{bb}}({r_t})\), \(b = 1,2, \ldots ,N\). The function \({{\tau }^{{r_t},{\sigma _t}}}(t)\) represents the mode-dependent time-varying coupling delay satisfying:

$$\begin{aligned} {{{\underline{\tau }}}} \le {{\tau }^{{r_t},{\sigma _t}}}(t) \le {{{\bar{\tau }}}}, \quad {{{\dot{\tau }}}^{{r_t},{\sigma _t}}}(t) \le \mu , \end{aligned}$$
(2)

where \({{{\underline{\tau }}} \ge 0}\), \({{{\bar{\tau }}}\ge 0}\), and \(\mu \ge 0\) are known constants. The process \(\{r_t,t \ge 0\}\) is a continuous-time piecewise-homogeneous Markov process taking values in the finite set \({\mathcal {W}} = \{ 1,2, \ldots ,w\}\) that characterizes the switching between different modes. The probability function of the process \(\{r_t,t \ge 0\}\), with TR matrix \({{\varPi }^{{\sigma _{t + {{\varDelta } t}}}}} = [\pi _{ij}^{{\sigma _{t + {{\varDelta } t}}}}]_{w \times w}\), is defined by

$$\begin{aligned} \Pr \{ {r_{t + {{\varDelta } t}}} = j\mathrm{{|}}{\mathrm{{r}}_t} = i\} = \left\{ {\begin{array}{*{20}{l}} {\pi _{ij}^{{\sigma _{t + {\varDelta } t}}}{{\varDelta } t} + o({\varDelta } t),\;}&{} \quad {i \ne j,}\\ {1 + \pi _{ii}^{{\sigma _{t + {{\varDelta } t}}}}{{\varDelta } t} + o({{\varDelta } t}),}&{} \quad {i = j,} \end{array}} \right. \end{aligned}$$
(3)

where \({{\varDelta } t}>0\), \(\lim \limits _{{{\varDelta } t} \rightarrow 0} o({{\varDelta } t})/{{\varDelta } t} = 0\), and \(\pi _{ij}^{{\sigma _{t + {{\varDelta } t}}}} \ge 0\) for \(i \ne j\) denotes the TR from mode i at time t to mode j at time \(t+{{\varDelta } t}\), subject to condition (4). The time-varying TR matrix \({{\varPi }^{{\sigma _{t + {{\varDelta } t}}}}}\) is given in (5).

$$\begin{aligned} \pi _{ii}^{\sigma _{t + {{\varDelta } t}}}= & {} - \sum \limits _{j = 1,i \ne j}^w {{\pi _{ij}^{\sigma _{t + {{\varDelta } t}}}}}, \end{aligned}$$
(4)
$$\begin{aligned} {{\varPi }^{{\sigma _{t + {{\varDelta } t}}}}}= & {} \begin{bmatrix} \pi _{11}^{{\sigma _{t + {{\varDelta } t}}}} &{}\quad \pi _{12}^{\sigma _{t + {{\varDelta } t}}} &{}\quad \dots &{}\quad \pi _{1w}^{{\sigma _{t + {{\varDelta } t}}}} \\ \pi _{21}^{{\sigma _{t + {{\varDelta } t}}}} &{}\quad \pi _{22}^{{\sigma _{t + {{\varDelta } t}}}} &{}\quad \dots &{}\quad \pi _{2w}^{\sigma _{t + {{\varDelta } t}}} \\ \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad \vdots \\ \pi _{w1}^{{\sigma _{t + {{\varDelta } t}}}} &{}\quad \pi _{w2}^{{\sigma _{t + {{\varDelta } t}}}} &{}\quad \dots &{}\quad \pi _{ww}^{{\sigma _{t + {{\varDelta } t}}}} \end{bmatrix}. \end{aligned}$$
(5)
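Condition (4) makes every row of (5) sum to zero, so the first-order approximation \(P({{\varDelta } t}) \approx {\mathbf {I}} + {{\varPi }}\,{{\varDelta } t}\) implied by (3) is a stochastic matrix. A small numerical sketch (NumPy, with a hypothetical two-mode TR matrix) illustrates this:

```python
import numpy as np

# Hypothetical w = 2 TR matrix: off-diagonal rates chosen freely,
# diagonal entries then fixed by condition (4).
Pi = np.array([[0.0, 0.3],
               [0.5, 0.0]])
np.fill_diagonal(Pi, -Pi.sum(axis=1))

dt = 1e-3
P = np.eye(2) + Pi * dt   # first-order transition probabilities, cf. (3)

assert np.allclose(Pi.sum(axis=1), 0.0)  # rows of (5) sum to zero
assert np.allclose(P.sum(axis=1), 1.0)   # valid probability rows
assert np.all(P >= 0.0)
```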

Similar to the first Markov process, \(\{ {\sigma _t},t \ge 0\}\) is a continuous-time Markov process taking values in the finite set \({\mathcal {V}} = \{ 1,2,\ldots ,v\}\). This process is homogeneous, i.e., time-invariant. Its probability function, with TR matrix \({\varLambda } = [\rho _{mn}]_{v \times v}\), is given by:

$$\begin{aligned} \Pr \{ \sigma _{t + {{\varDelta } t}} = n\mathrm{{|}}\sigma _{t} = m\} = \left\{ {\begin{array}{*{20}{l}} {{\rho _{mn}}{{\varDelta } t} + o({{\varDelta } t}),}&{} \quad {m \ne n,}\\ {1 + {\rho _{mm}}{{\varDelta } t} + o({{\varDelta } t}),\;}&{} \quad {m = n,} \end{array}} \right. \end{aligned}$$
(6)

where \({{\varDelta } t}>0\), \(\lim \limits _{{{\varDelta } t} \rightarrow 0} o({{\varDelta } t})/{{\varDelta } t} = 0\), and \({\rho _{mn} \ge 0}\) for \(m \ne n\) denotes the TR from mode m at time t to mode n at time \(t+{{\varDelta } t}\) with the following condition

$$\begin{aligned} {\rho _{mm}} = - \sum \limits _{n = 1,m \ne n}^v {{\rho _{mn}}} , \end{aligned}$$
(7)

and the TR matrix \({\varLambda }\) is given below

$$\begin{aligned} {{\varLambda }}= \begin{bmatrix} \rho _{11} &{}\quad \rho _{12} &{}\quad \dots &{}\quad \rho _{1v} \\ \rho _{21} &{}\quad \rho _{22} &{}\quad \dots &{}\quad \rho _{2v} \\ \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad \vdots \\ \rho _{v1} &{}\quad \rho _{v2}&{}\quad \dots &{}\quad \rho _{vv} \end{bmatrix}. \end{aligned}$$
(8)
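To make the two-level structure concrete, the following sketch (an Euler-discretized simulation with hypothetical rates, for illustration only) lets the high-level chain \(\sigma _t\) with TR matrix \({\varLambda }\) select which TR matrix \({{\varPi }^{\sigma }}\) currently drives the low-level chain \(r_t\):

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1e-3

# Hypothetical example with v = 2 high-level and w = 2 low-level modes.
Lam = np.array([[-0.4, 0.4],
                [0.6, -0.6]])                 # constant high-level TRs, cf. (8)
Pis = [np.array([[-1.0, 1.0], [2.0, -2.0]]),  # Pi^1: low-level TRs in regime 1
       np.array([[-3.0, 3.0], [0.5, -0.5]])]  # Pi^2: low-level TRs in regime 2

def step(mode, Q):
    """One Euler step of a continuous-time Markov chain with generator Q."""
    probs = np.eye(Q.shape[0])[mode] + Q[mode] * dt  # row of I + Q*dt, cf. (3)
    return int(rng.choice(Q.shape[0], p=probs))

sigma, r = 0, 0
for _ in range(10_000):
    sigma = step(sigma, Lam)     # high-level signal: selects the TR matrix
    r = step(r, Pis[sigma])      # low-level signal: piecewise-homogeneous

assert sigma in (0, 1) and r in (0, 1)
```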

Moreover, in this paper, we assume that the transition probabilities for two-level Markovian signals are partly unknown, i.e., some elements in the TR matrices \({{\varPi }^{{\sigma _{t + {{\varDelta } t}}}}}\) for the Markov process \(\{r_t,t \ge 0\}\) and the TR matrix \({\varLambda }\) of the Markov process \(\{ {\sigma _t},t \ge 0\}\) are unknown. For example, (5) and (8) are in the form:

$$\begin{aligned} {{\varPi }^{{\sigma _{t + {{\varDelta } t}}}}}= & {} \begin{bmatrix} \pi _{11}^{{\sigma _{t + {{\varDelta } t}}}} &{}\quad \pi _{12}^{\sigma _{t + {{\varDelta } t}}} &{}\quad \dots &{}\quad ? \\ ? &{}\quad ? &{}\quad \dots &{}\quad \pi _{2w}^{\sigma _{t + {{\varDelta } t}}} \\ \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad \vdots \\ \pi _{w1}^{{\sigma _{t + {{\varDelta } t}}}} &{}\quad ? &{}\quad \dots &{}\quad \pi _{ww} ^{{\sigma _{t + {{\varDelta } t}}}} \end{bmatrix}, \\ {{\varLambda }}= & {} \begin{bmatrix} \rho _{11} &{}\quad \rho _{12} &{}\quad \dots &{}\quad ? \\ ? &{}\quad ? &{}\quad \dots &{}\quad \rho _{2v} \\ \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad \vdots \\ \rho _{v1} &{}\quad ? &{}\quad \dots &{}\quad \rho _{vv} \end{bmatrix}, \end{aligned}$$

where \(``?''\) represents an unknown TR value. To handle this situation, the following index-set partitions are considered for the two Markov processes \(\{r_t,t \ge 0\}\) and \(\{ {\sigma _t},t \ge 0\}\),

$$\begin{aligned} {{\mathcal {W}}^i}= & {} {{\mathcal {W}}^i_{k}}\cup {{\mathcal {W}}^i_{uk}} =\left\{ {\begin{array}{*{20}{l}} {{\mathcal {W}}^i_{k}}=\{j:{ \pi _{ij}^{\sigma _{t + {{\varDelta } t}}}} \, \hbox {is known}\},\\ {{\mathcal {W}}^i_{uk}}=\{j:{ \pi _{ij}^{\sigma _{t + {{\varDelta } t}}}} \, \hbox {is unknown}\}.\\ \end{array}} \right. \end{aligned}$$
(9)
$$\begin{aligned} {{\mathcal {V}}^m}= & {} {{\mathcal {V}}^m_{k}}\cup {{\mathcal {V}}^m_{uk}} =\left\{ {\begin{array}{*{20}{l}} {{\mathcal {V}}^m_{k}}=\{n:{ \rho _{mn}} \, \hbox {is known}\},\\ {{\mathcal {V}}^m_{uk}}=\{n:{ \rho _{mn}} \, \hbox {is unknown}\}.\\ \end{array}} \right. \end{aligned}$$
(10)
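The index-set partition in (9)–(10) can be represented directly in code, e.g. by storing each unknown “?” entry as NaN (a hypothetical 3-mode TR matrix; the snippet only illustrates the bookkeeping, not condition (4)):

```python
import numpy as np

# Hypothetical w = 3 TR matrix; "?" entries are stored as NaN.
Pi = np.array([[-0.8,    0.8, np.nan],
               [np.nan, -1.2,    1.2],
               [0.5,  np.nan, np.nan]])

# Known / unknown index sets W_k^i and W_uk^i of (9), per row i.
W_k  = {i: [j for j in range(3) if not np.isnan(Pi[i, j])] for i in range(3)}
W_uk = {i: [j for j in range(3) if np.isnan(Pi[i, j])] for i in range(3)}

assert W_k[0] == [0, 1] and W_uk[0] == [2]
for i in range(3):  # each row index set splits into known and unknown parts
    assert sorted(W_k[i] + W_uk[i]) == [0, 1, 2]
```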

Remark 2.1

If all TR values are available and accessible, then \({{\mathcal {W}}^i}={{\mathcal {W}}^i_{k}}, {{\mathcal {V}}^m}= {{\mathcal {V}}^m_{k}}\) and \({{\mathcal {W}}^i_{uk}=\emptyset }, {{\mathcal {V}}^m_{uk}}=\emptyset \). Conversely, if \({{\mathcal {W}}^i}= {{\mathcal {W}}^i_{uk}}, {{\mathcal {V}}^m}={{\mathcal {V}}^m_{uk}}\) and \({{\mathcal {W}}^i_{k}=\emptyset }, {{\mathcal {V}}^m_{k}}=\emptyset \), then all TR values are inaccessible.

Assumption 1

The function \({\mathbf{f}}:{{\mathbb {R}}^n} \rightarrow {{\mathbb {R}}^n}\) in system (1) has a sector-bounded property which satisfies the following condition [46]

$$\begin{aligned} {[{\mathbf{f}}({\mathbf{x}}) - {\mathbf{f}}({\mathbf{y}}) - {\mathbf{U}}({\mathbf{x}} - {\mathbf{y}})]^T}[{\mathbf{f}}({\mathbf{x}}) - {\mathbf{f}}({\mathbf{y}}) - {\mathbf{V}}({\mathbf{x}} - {\mathbf{y}})] \le 0, \end{aligned}$$
(11)

where \({\mathbf{x}},{\mathbf{y}} \in {{\mathbb {R}}^n}\) and \({\mathbf {U}}\), \({\mathbf {V}}\) are known constant matrices with appropriate dimensions. Note that this property is more general than the Lipschitz property.
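As a concrete instance of (11), \(f(x)=\tanh (x)\) (a common activation nonlinearity, used here purely for illustration) satisfies the sector condition with \({\mathbf {U}}={\mathbf {0}}\) and \({\mathbf {V}}={\mathbf {I}}\), since every slope of \(\tanh \) lies in \([0,1]\). A randomized NumPy check:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
U, V = np.zeros((n, n)), np.eye(n)  # sector bounds [0, I] for tanh
f = np.tanh

for _ in range(1000):
    x, y = rng.normal(size=n), rng.normal(size=n)
    df, d = f(x) - f(y), x - y
    # Left-hand side of the sector condition (11); must be non-positive.
    lhs = (df - U @ d) @ (df - V @ d)
    assert lhs <= 1e-12
```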

This paper aims at synchronizing all the nodes in the network to an isolated node with state vector \({\mathbf{s}}(t) \in {{\mathbb {R}}^n}\). Let \({{\mathbf{e}}_b}(t) = {{\mathbf{x}}_b}(t) - {\mathbf{s}}(t)\) denote the synchronization error, where the state trajectory of the isolated (uncoupled) node is

$$\begin{aligned} {\dot{\mathbf{s}}}(t) ={\mathbf{A}}({r_t}) {{\mathbf{s}}}(t) + {\mathbf{B}}({r_t}) {\mathbf{f}}({{\mathbf{s}}}(t)), \end{aligned}$$
(12)

where \( {{\mathbf{s}}}(t)\) is a particular solution of system (12). It is assumed that \({{\mathbf{s}}}(.)\) and \({{\mathbf{x}}_b}(t)\), \(b={1,2,\ldots ,N}\), are well defined for \(t \ge {{-{\bar{\tau }}}}\). Then, the synchronization problem of the piecewise-homogeneous MJCDN (1) can be converted into the stabilization problem of the following error dynamic system:

$$\begin{aligned}&{\dot{{\mathbf {e}}}_b}(t)={{\mathbf {A}}}({r_t}) {{{\mathbf {e}}}_b}(t) + {{\mathbf {B}}}({r_t}) {{\mathbf {g}}}({{{\mathbf {e}}}_b}(t)) + \sum \limits _{d = 1}^N {{g_{bd}}({r_t})} {{\varvec{\Gamma }} ({r_t}) }{{{\mathbf {e}}}_d}(t - \tau ^{{r_t},{\sigma _t}}(t)) + {{{\mathbf {u}}}_b}(t), \nonumber \\&\qquad b = 1,2, \ldots ,N, \end{aligned}$$
(13)

where \({{\mathbf {g}}}({{{\mathbf {e}}}_b}(t)) = {{\mathbf {f}}} ({{{\mathbf {x}}}_b}(t)) - {{\mathbf {f}}}({{\mathbf {s}}}(t))\).

The controller structure is assumed as

$$\begin{aligned} {{\mathbf{u}}_b}(t) = {{\mathbf{K}}_b}({r_t},{\sigma _t}){{\mathbf{e}}_b}(t), \quad b = 1,\ldots ,N, \end{aligned}$$
(14)

where \({{\mathbf{K}}_b}({r_t},{\sigma _t}) \in {{\mathbb {R}}^{n \times n}}\) is the controller gain matrix to be determined for each mode pair, i.e., for each node in the network, \(w \times v\) gain matrices are designed. By substituting (14) into (13), one can obtain the error dynamic system as

$$\begin{aligned} {\dot{\mathbf{e}}}(t) ={\bar{\mathbf{A}}}({r_t}) {{\mathbf{e}}}(t) + {\bar{\mathbf{B}}}({r_t}) {\bar{\mathbf{g}}}({{\mathbf{e}}}(t)) +{\bar{\mathbf{G}}}({r_t}){\mathbf{e}}(t - {\tau ^{{r_t},{\sigma _t}}}(t)) + {\mathbf{K}}({r_t},{\sigma _t}){\mathbf{e}}(t), \end{aligned}$$
(15)

where \({\mathbf{e}}(t) = {[{\mathbf{e}}_1^T(t),{\mathbf{e}}_2^T(t), \ldots ,{\mathbf{e}}_N^T(t)]^T}\), \({\bar{\mathbf{g}}}({\mathbf{e}}(t)) = {[{{\mathbf{g}}^T}({{\mathbf{e}}_1}(t)),{{\mathbf{g}}^T}({{\mathbf{e}}_2}(t)), \ldots ,{{\mathbf{g}}^T}({{\mathbf{e}}_N}(t))]^T}\), \({\bar{\mathbf{A}}}({r_t})={\mathbf {I}}_N \otimes {\mathbf{A}}({r_t})\), \({\bar{\mathbf{B}}}({r_t})={\mathbf {I}}_N \otimes {\mathbf{B}}({r_t})\), \({\bar{\mathbf{G}}({r_t})}={\mathbf{G}}({r_t}) \otimes {{\varvec{\Gamma }}}({r_t})\) and

$$\begin{aligned} {\mathbf{K}}({r_t},{\sigma _t}) = \mathrm{{diag}}\{ {{\mathbf{K}}_1}({r_t},{\sigma _t}),{{\mathbf{K}}_2}({r_t},{\sigma _t}),\ldots ,{{\mathbf{K}}_N}({r_t},{\sigma _t})\}. \end{aligned}$$
(16)
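The stacked matrices in (15)–(16) are ordinary Kronecker products together with a block-diagonal gain, so they can be assembled directly. A dimensional sanity check with hypothetical sizes n = 2, N = 3 (NumPy; identical per-node gains are used for brevity, whereas (16) allows distinct \({\mathbf{K}}_b\)):

```python
import numpy as np

rng = np.random.default_rng(2)
n, N = 2, 3
A, B, Gamma = (rng.normal(size=(n, n)) for _ in range(3))
G = rng.normal(size=(N, N))
G = G + G.T                                        # symmetric outer coupling
np.fill_diagonal(G, G.diagonal() - G.sum(axis=1))  # enforce zero row sums

A_bar = np.kron(np.eye(N), A)     # I_N (x) A(r_t)
B_bar = np.kron(np.eye(N), B)     # I_N (x) B(r_t)
G_bar = np.kron(G, Gamma)         # G(r_t) (x) Gamma(r_t)
K = np.kron(np.eye(N), rng.normal(size=(n, n)))  # simplified gain, cf. (16)

assert A_bar.shape == B_bar.shape == G_bar.shape == (n * N, n * N)
assert np.allclose(G.sum(axis=1), 0.0)
# Block (b, d) of G_bar is g_bd * Gamma, matching the coupling term in (13):
assert np.allclose(G_bar[:n, n:2 * n], G[0, 1] * Gamma)
```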

Definition 1

The MJCDN (1) is said to be exponentially mean-square synchronized if the error system (15) is exponentially mean-square stable, that is, if there exist scalars \(\alpha >0\) (decay rate) and \(\beta >0\) (decay coefficient) such that:

$$\begin{aligned} {\mathbb {E}} \{ \left\| {{\mathbf{e}}(t)} \right\| {}^2\} \le \beta {e^{ - \alpha t}}\mathop {\sup }\limits _{- { {{\bar{\tau }}}} \le \theta \le 0} \{ {\left\| {{\mathbf{e}}(\theta )} \right\| ^2},{\left\| {{\dot{\mathbf{e}}}(\theta )} \right\| ^2}\}, {\forall t>0}. \end{aligned}$$
(17)

The following lemmas are utilized throughout the paper.

Lemma 1

(Jensen inequality [13]). For any matrix \({{\varvec{\Delta }}} \ge 0\), scalars \(\eta _1\), \(\eta _2\) (\(\eta _2 > \eta _1\)), and a vector function \({{\varphi }}:[{\eta _1},{\eta _2}] \rightarrow {{\mathbb {R}}^n}\) such that the integrations concerned are well defined, the following holds:

$$\begin{aligned} ({\eta _2} - {\eta _1})\int _{{\eta _1}}^{{\eta _2}} {{{{\varphi }}^T}(\alpha ){{\varvec{\Delta }} {\varphi }}(\alpha )\mathrm{d}\alpha } \ge {\left[ {\int _{{\eta _1}}^{{\eta _2}} {{{\varphi }}(\alpha )\mathrm{d}\alpha } } \right] ^T}{{\varvec{\Delta }}}\left[ {\int _{{\eta _1}}^{{\eta _2}} {{{\varphi }}(\alpha )\mathrm{d}\alpha } } \right] . \end{aligned}$$
(18)
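Inequality (18) is easy to sanity-check numerically in the scalar case; the sketch below uses an arbitrary test function and trapezoidal quadrature (illustrative only):

```python
import numpy as np

def integ(y, x):  # trapezoidal rule, kept explicit for portability
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

eta1, eta2 = 0.0, 2.0
a = np.linspace(eta1, eta2, 2001)
phi = np.sin(3 * a) + a        # arbitrary scalar test function (n = 1)
Delta = 1.7                    # any Delta >= 0

lhs = (eta2 - eta1) * integ(phi * Delta * phi, a)
I = integ(phi, a)
rhs = I * Delta * I

assert lhs >= rhs - 1e-9       # Jensen inequality (18)
```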

Lemma 2

[35] For any matrix \(\left[ {\begin{array}{*{20}{c}} {\mathbf{M}}&{}{\mathbf{S}}\\ *&{}{\mathbf{M}} \end{array}} \right] \; \ge 0\), scalars \(\tau \ge 0\) and \(\tau (t)\) with \(0 < \tau (t) \le \tau \), and a vector function \({\dot{\mathbf{z}}}(t + .):[ - \tau ,0] \rightarrow {{\mathbb {R}}^n}\) such that the integrations concerned are well defined, the following holds

$$\begin{aligned} - \tau \int _{t - \tau }^t {{{{\dot{\mathbf{z}}}}^T}(\alpha ){\mathbf{M}}} {\dot{\mathbf{z}}}(\alpha )\mathrm{d}\alpha \le {{{\omega }}^T}(t){{\varvec{\Omega }} {\omega }}(t), \end{aligned}$$
(19)

where \({{\omega }}(t) = {[{{\mathbf{x}}^T}(t),{{\mathbf{x}}^T}(t - \tau (t)),{{\mathbf{x}}^T}(t - \tau )]^T}\),

$$\begin{aligned} {{\varvec{\Omega }}} = \left[ {\begin{array}{*{20}{c}} { - {\mathbf{M}}}&{}{{\mathbf{M}} - {\mathbf{S}}}&{}{\mathbf{S}}\\ *&{}{ - 2{\mathbf{M}} + {\mathbf{S}} + {{\mathbf{S}}^T}}&{}{ - {\mathbf{S}} + {\mathbf{M}}}\\ *&{}*&{}{ - {\mathbf{M}}} \end{array}} \right] . \end{aligned}$$
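Lemma 2 can likewise be checked numerically in the scalar case; with \(M=1\) and \(S=0.5\) the block condition holds (\(|S| \le M\)), and a test trajectory \(z(t)=\sin t\) gives (illustrative sketch):

```python
import numpy as np

def integ(y, x):  # trapezoidal rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

M, S = 1.0, 0.5                # [[M, S], [*, M]] >= 0 since |S| <= M
t, tau, tau_t = 1.0, 0.5, 0.3  # 0 < tau(t) <= tau
z, zdot = np.sin, np.cos       # test trajectory z(t) = sin t

a = np.linspace(t - tau, t, 4001)
lhs = -tau * integ(M * zdot(a) ** 2, a)      # left side of (19)

omega = np.array([z(t), z(t - tau_t), z(t - tau)])
Omega = np.array([[-M,     M - S,          S],
                  [M - S, -2 * M + 2 * S,  M - S],
                  [S,      M - S,         -M]])
rhs = float(omega @ Omega @ omega)           # right side of (19)

assert lhs <= rhs + 1e-9
```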

Remark 2.2

In order to avoid notational complexity throughout the paper, we set \(\{ {r_t}=i\}\) and \(\{ {\sigma _t}=m\}\) and label the matrices as \({\bar{\mathbf{A}}}_{i}\), \({{\bar{\mathbf{B}}}_{i}}\), \({{\bar{\mathbf{G}}}_{i}}\), and \( {\mathbf{K}}_{i,m}\).

The exponential synchronization problem for (1) is addressed in Theorem 1 for \({{\mathcal {W}}^i_{k}}={{\mathcal {W}}^i}, {{\mathcal {V}}^m_{k}}={{\mathcal {V}}^m}\) in Sect. 3 and in Theorem 3 for (\({{\mathcal {W}}^i_{uk}}\subset {{\mathcal {W}}^i}, {{\mathcal {V}}^m_{uk}}\subset {{\mathcal {V}}^m}\)) in Sect. 3.1. The design of the desired state-feedback controller based on the conditions of Theorem 1 is presented in Theorem 2; likewise, the controllers based on the conditions of Theorem 3 are designed in Theorem 4. For brevity, we denote \({\bar{\pi }}= \max \limits _{1 \le i \le w}\{-\pi _{ii}\}\) and \({\bar{\rho }}= \max \limits _{1 \le {m} \le v}\{-\rho _{mm}\}\).

3 Main Results

In this section, the exponential synchronization problem for (1) and the design of a proper controller are investigated in Theorems 1 and 2 for the case \({{\mathcal {W}}^i_{k}} ={{\mathcal {W}}^i}, {{\mathcal {V}}^m_{k}}= {{\mathcal {V}}^m}\).

Theorem 1

For given mode-dependent controller gain matrices \({\mathbf{K}}({r_t},{\sigma _t})\) and positive scalars \(\alpha \), \({\bar{\tau }}\), \({\underline{\tau }}\), \(\mu \), the error system (15) is exponentially mean-square stable under \({{\mathcal {W}}^i_{k}}={{\mathcal {W}}^i}, {{\mathcal {V}}^m_{k}}={{\mathcal {V}}^m}\), if there exist symmetric positive definite matrices \({\mathbf {P}}_{i,m}\), \({\mathbf {Q}}\), \({\mathbf {Z}}\), a matrix \({\mathbf {S}}\), and scalars \(\lambda _{i,m}> 0\) such that, for all \(i,j \in {\mathcal {W}}\) and \(m,n \in {\mathcal {V}}\) (here \({\mathcal {W}}^i_{uk}=\emptyset \) and \({\mathcal {V}}^m_{uk}=\emptyset \)), the following conditions hold

$$\begin{aligned}&{\varvec{\Phi }} = \left[ {\begin{array}{*{20}{c}} {{{{\varvec{\Xi }}}_{11}}}&{}0&{}{{{{\varvec{\Xi }}}_{13}}}&{}0&{}{{{{\varvec{\Xi }}}_{15}}}&{}{{{{\varvec{\Xi }}}_{16}}} \\ *&{}{{{{\varvec{\Xi }}}_{22}}}&{}{{{{\varvec{\Xi }}}_{23}}}&{}{{{{\varvec{\Xi }}}_{24}}}&{}0&{}0 \\ *&{}*&{}{{{{\varvec{\Xi }}}_{33}}}&{}{{{{\varvec{\Xi }}}_{34}}}&{}0&{} {\varvec{\Xi }}_{36} \\ *&{}*&{}*&{}{{{{\varvec{\Xi }}}_{44}}}&{}0&{}0 \\ *&{}*&{}*&{}*&{}{{{{\varvec{\Xi }}}_{55}}}&{}{{{{\varvec{\Xi }}}_{56}}} \\ *&{}*&{}*&{}*&{}*&{}{ - {{\mathbf {Z}}}} \end{array}} \right] < 0, \end{aligned}$$
(20)
$$\begin{aligned}&\left[ {\begin{array}{*{20}{c}} {e^{ - 2\alpha {\bar{\tau }} }}{\mathbf{Z}}&{}{\mathbf{S}}\\ *&{}{e^{ - 2\alpha {\bar{\tau }} }}{\mathbf{Z}} \end{array}} \right] > 0, \end{aligned}$$
(21)

where \({{{\varvec{\Xi }}}_{11}} = 2\alpha {{\mathbf{P}}_{i,m}} + {{\mathbf{P}}_{i,m}}{{\mathbf{K}}_{i,m}} + {\mathbf{K}}_{i,m}^T{{\mathbf{P}}_{i,m}}+ {{\mathbf{P}}_{i,m}}{{\bar{\mathbf{A}}}_{i}} + {\bar{\mathbf{A}}}_{i}^T{{\mathbf{P}}_{i,m}} + {{\mathbf{Q}}} - {\lambda _{i,m}}{\bar{\mathbf{U}}}+({{\bar{\pi }}+ {\bar{\rho }}})({{\bar{\tau }}}-{{\underline{\tau }}}) {{\mathbf{Q}}} + \sum \nolimits _{n \in {\mathcal {V}}} {{\rho _{mn}}{{\mathbf{P}}_{i,n}} + \sum \nolimits _{j \in {\mathcal {W}}} {\pi _{ij}^m{{\mathbf{P}}_{j,m}}} }\),

$$\begin{aligned}&{{\varvec{\Xi }}_{13}} = {{\mathbf{P}}_{i,m}}{{\bar{\mathbf{G}}}_i}, \\&{{{\varvec{\Xi }}}_{15}} = {{{\mathbf {P}}}_{i,m}}{\bar{{\mathbf {B}}}_i}- {\lambda _{i,m}}{\bar{\mathbf{V}}}, \\&{{{\varvec{\Xi }}}_{16}} = ({\bar{\mathbf{A}}_{i}}+{{{\mathbf {K}}}_{i,m}})^T{{{\varvec{\Upsilon }}}},\\&{{{\varvec{\Xi }}}_{22}} = -{e^{ - 2\alpha {\bar{\tau }} }}{\mathbf{Z}},\\&{{{\varvec{\Xi }}}_{23}} = - {{{\mathbf {S}}}} +{e^{ - 2\alpha {\bar{\tau }} }}{\mathbf{Z}},\\&{{{\varvec{\Xi }}}_{24}} = {{{\mathbf {S}}}},\\&{{{\varvec{\Xi }}}_{33}} = - (1 - \mu ){e^{ - 2\alpha \nu }} {{{\mathbf {Q}}}} - 2{e^{ - 2\alpha {\bar{\tau }} }}{\mathbf{Z}} + {{{\mathbf {S}}}} + {{\mathbf {S}}}^T,\\&{{{\varvec{\Xi }}}_{34}} = - {{{\mathbf {S}}}} + {e^{ - 2\alpha {\bar{\tau }} }}{\mathbf{Z}},\\&{\varvec{\Xi }}_{36} = {{\bar{{{\mathbf {G}}}_i}}^T}{{\varvec{\Upsilon }}},\\&{{\varvec{\Xi }}_{44}} = -{e^{ - 2\alpha {\bar{\tau }} }}{\mathbf{Z}},\\&{{{\varvec{\Xi }}}_{55}} =- {\lambda _{i,m}}{{\mathbf {I}}},\\&{\varvec{\Xi }}_{56} = {{\bar{{{\mathbf {B}}}_i}}^T}{{\varvec{\Upsilon }}},\\&{\varvec{\Upsilon }} =( {{\bar{\tau }}}-{{\underline{\tau }}}){{{\mathbf {Z}}}}, \\&{\bar{{\mathbf {U}}}} = \frac{{{{({{{\mathbf {I}}}_N} \otimes {{\mathbf {U}}})}^T}({{{\mathbf {I}}}_N} \otimes {{\mathbf {V}}})}}{2} + \frac{{{{({{{\mathbf {I}}}_N} \otimes {{\mathbf {V}}})}^T}({{{\mathbf {I}}}_N} \otimes {{\mathbf {U}}})}}{2}, {\bar{{\mathbf {V}}}} = - \frac{{{{({{{\mathbf {I}}}_N} \otimes {{\mathbf {U}}})}^T} + {{({{{\mathbf {I}}}_N} \otimes {{\mathbf {V}}})}^T}}}{2}, \\&\nu = \left\{ {\begin{array}{*{20}{l}} {{\bar{\tau }},}&{}{\mu < 1,} \\ {{\underline{\tau }},}&{}{\mu \geqslant 1.} \end{array}} \right. \end{aligned}$$

Proof

Construct the following Lyapunov–Krasovskii functional candidate

$$\begin{aligned} V({{{\mathbf {e}}}_t},{r_t},{\sigma _t}) = \sum \limits _{k = 1}^4 {{V_k}} ({{{\mathbf {e}}}_t},{r_t},{\sigma _t}), \end{aligned}$$
(22)

where

$$\begin{aligned} {V_1}({{{\mathbf {e}}}_t},{r_t},{\sigma _t}) ={}&{e^{2\alpha t}}{{{\mathbf {e}}}^T}(t){{{\mathbf {P}}}_{{r_t},{\sigma _t}}}{{\mathbf {e}}}(t), \\ {V_2}({{{\mathbf {e}}}_t},{r_t},{\sigma _t}) ={}&\int _{t -{ \tau ^{{r_t},{\sigma _t}}}(t)}^t {{e^{2\alpha s}}{{{\mathbf {e}}}^T}(s){{{\mathbf {Q}}}}{{\mathbf {e}}}(s)\hbox {d}s} , \\ {V_3}({{{\mathbf {e}}}_t},{r_t},{\sigma _t}) = {}&( {{\bar{\pi }}}+{{\bar{\rho }}} )\int _{{\underline{\tau }}}^{{\bar{\tau }}} {\int _{t-s }^t {{e^{2\alpha \theta }}{{{{\mathbf {e}}}}^T}(\theta ){{{\mathbf {Q}}}}{{{\mathbf { e}}}}(\theta )\hbox {d}\theta \hbox {d}s } } , \\ {V_4}({{{\mathbf {e}}}_t},{r_t},{\sigma _t}) ={}&({{\bar{\tau }}}-{{\underline{\tau }}} )\int _{{\underline{\tau }}}^{{\bar{\tau }}} {\int _{t - s }^t {{e^{2\alpha \theta }}}{{\dot{{\mathbf {e}}}}^T(\theta ){{{\mathbf {Z}}}}{\dot{{\mathbf {e}}}}(\theta )\hbox {d}\theta \hbox {d}s } } . \end{aligned}$$

The infinitesimal generator \({\mathcal {L}}\) of the Markov process acting on \(V({{{\mathbf {e}}}_t},{r_t},{\sigma _t})\) is described as:

$$\begin{aligned} \begin{aligned}&{\mathcal {L}}V({{{\mathbf {e}}}_t},{r_t},{\sigma _t}) \\&\quad = \mathop {\lim }\limits _{h \rightarrow 0} \frac{1}{h}\{ \mathrm{{\mathbb {E}}}\{ V({{{\mathbf {e}}}_{t + h}},{r_{t + h}},{\sigma _{t + h}})\left| {{{{\mathbf {e}}}_t}} \right. ,{r_t} = i,{\sigma _t} = m\} - V({{{\mathbf {e}}}_t},{r_t} = i,{\sigma _t} = m)\}. \end{aligned}\nonumber \\ \end{aligned}$$
(23)

Then, by applying the total probability law [59], the above equality can be expressed as

$$\begin{aligned} {\mathcal {L}}V({{{\mathbf {e}}}_t},{r_t},{\sigma _t}) = \sum \limits _{n \in {\mathcal {V}}} {{\rho _{mn}}} V({{{\mathbf {e}}}_t},i,n) + \sum \limits _{j \in {\mathcal {W}}} {\pi _{ij}^m} V({{{\mathbf {e}}}_t},j,m) + \dot{V}({{{\mathbf {e}}}_t},i,m). \end{aligned}$$
(24)
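The structure of (24) can be sanity-checked numerically on a toy two-level mode space. All transition rates below are illustrative, not taken from the paper; the point is only that, because every transition-rate row sums to zero, the generator annihilates any function that is constant over the modes.

```python
# Sketch: generator of a piecewise-homogeneous Markov chain acting on a
# mode-dependent function V(i, m), as in Eq. (24). All rates are illustrative.

# High-level chain sigma_t on V = {0, 1}: rates rho[m][n], rows sum to zero.
rho = [[-0.3, 0.3],
       [0.5, -0.5]]

# Low-level chain r_t on W = {0, 1}: rates pi[m][i][j] depend on the current
# high-level mode m (piecewise homogeneity), rows sum to zero.
pi = [[[-1.0, 1.0], [2.0, -2.0]],   # rates while sigma_t = 0
      [[-0.4, 0.4], [0.7, -0.7]]]  # rates while sigma_t = 1

def generator(V, i, m):
    """L V(i, m) = sum_n rho[m][n] V(i, n) + sum_j pi[m][i][j] V(j, m)."""
    return (sum(rho[m][n] * V(i, n) for n in range(2))
            + sum(pi[m][i][j] * V(j, m) for j in range(2)))

# A mode-constant function is annihilated: each rate row sums to zero.
const = lambda i, m: 7.0
print([generator(const, i, m) for i in range(2) for m in range(2)])
```

Only the mode-dependence of $V$ contributes to the two sums in (24); the $\dot{V}$ term accounts for the flow of $\mathbf{e}_t$ between jumps.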

Hence, for each \((i,m) \in {\mathcal {W}} \times {\mathcal {V}}\), we obtain:

$$\begin{aligned} {\mathcal {L}}{V_1}({{{\mathbf {e}}}_t},{r_t},{\sigma _t}) =&2\alpha {e^{2\alpha t}}{{{\mathbf {e}}}^T}(t){{{\mathbf {P}}}_{i,m}}{{\mathbf {e}}}(t) + 2{e^{2\alpha t}}{{{\mathbf {e}}}^T}(t){{{\mathbf {P}}}_{i,m}} \dot{{\mathbf {e}}}(t) \nonumber \\&+ {e^{2\alpha t}}{{{\mathbf {e}}}^T}(t)\left[ \sum \limits _{n \in {\mathcal {V}}} {{\rho _{mn}}{{{\mathbf {P}}}_{i,n}} + \sum \limits _{j \in {\mathcal {W}}} {\pi _{ij}^m} } {{{\mathbf {P}}}_{j,m}}\right] {{\mathbf {e}}}(t), \end{aligned}$$
(25)
$$\begin{aligned} {\mathcal {L}}{V_2}({{{\mathbf {e}}}_t},{r_t},{\sigma _t}) \leqslant {}&{e^{2\alpha t}}{{{\mathbf {e}}}^T}(t){{{\mathbf {Q}}}}{{\mathbf {e}}}(t) - (1 - \mu ){e^{2\alpha (t - {\tau }^{i,m}(t) )}} \nonumber \\&{{{\mathbf {e}}}^T}(t - {\tau }^{i,m} (t)){{{\mathbf {Q}}}}{{\mathbf {e}}}(t - {\tau }^{i,m} (t)) \nonumber \\&+\sum \limits _{n \in {\mathcal {V}}} {\rho _{mn}} \int _{t - {\tau }^{i,n} (t)}^t {{e^{2\alpha s}}{{{\mathbf {e}}}^T}(s){{{\mathbf {Q}}}}{{\mathbf {e}}}(s)\hbox {d}s} \nonumber \\&+ \sum \limits _{j \in {\mathcal {W}} } {{\pi _{ij}^m} \int _{t - {\tau }^{j,m} (t)}^t {e^{2\alpha s}}{{\mathbf {e}}}^T (s){{{\mathbf {Q}}}}} {{\mathbf {e}}}(s)\hbox {d}s, \end{aligned}$$
(26)
$$\begin{aligned} {\mathcal {L}}{V_3}({{{\mathbf {e}}}_t},{r_t},{\sigma _t}) = {}&{e^{2\alpha t}}{{\bar{\pi }}}({{\bar{\tau }}}-{{\underline{\tau }}}){{\mathbf {e}}}^T (t){{\mathbf {Q}}}{{\mathbf {e}}}(t)-{{\bar{\pi }}} \int _{{\underline{\tau }} }^{{\bar{\tau }}}{{e^{2\alpha (t-s)}}}{{\mathbf {e}}}^T (t-s){{\mathbf {Q}}}{{\mathbf {e}}} (t-s)\hbox {d}s \nonumber \\&+ {e^{2\alpha t}}{{\bar{\rho }}}({{\bar{\tau }}}-{{\underline{\tau }}}){{\mathbf {e}}}^T (t){{\mathbf {Q}}}{{\mathbf {e}}}(t)-{{\bar{\rho }}} \int _{{\underline{\tau }} }^{{\bar{\tau }}}{{e^{2\alpha (t-s)}}}{{\mathbf {e}}}^T (t-s){{\mathbf {Q}}}{{\mathbf {e}}} (t-s)\hbox {d}s \nonumber \\&+({\bar{\pi }}+{\bar{\rho }}) \sum \limits _{n \in {\mathcal {V}}}{\rho _{mn}} \int _{{\underline{\tau }}}^{{\bar{\tau }}} {\int _{t - s }^t {{e^{2\alpha \theta }}{{{\mathbf {e}}}^T}(\theta ){{{\mathbf {Q}}}}{{\mathbf {e}}}(\theta )\hbox {d}\theta \hbox {d}s } } \nonumber \\&+({\bar{\pi }}+{\bar{\rho }}) \sum \limits _{j \in {\mathcal {W}} } {\pi _{ij}^m} \int _{{\underline{\tau }}}^{{\bar{\tau }}} {\int _{t - s }^t {{e^{2\alpha \theta }}{{{\mathbf {e}}}^T}(\theta ){{{\mathbf {Q}}}}{{\mathbf {e}}}(\theta )\hbox {d}\theta \hbox {d}s } } \end{aligned}$$
(27)
$$\begin{aligned} {\mathcal {L}}{V_4}({{{\mathbf {e}}}_t},{r_t},{\sigma _t}) ={}&({{\bar{\tau }}}{-}{{\underline{\tau }}})^2{e^{2\alpha t}}{{\dot{{\mathbf {e}}}}^T}(t){{\mathbf {Z}}}{{\dot{{\mathbf {e}}}}}(t){-} ({{\bar{\tau }}}-{{\underline{\tau }}})\int _{{{\underline{\tau }}}}^{{\bar{\tau }}} {{e^{2\alpha (t-s)}}{{\dot{{\mathbf {e}}}}^T}(t-s){\mathbf {Z}}{{\dot{{\mathbf {e}}}}}(t-s)\hbox {d}s } \nonumber \\&+({{\bar{\tau }}}-{{\underline{\tau }}} ) \sum \limits _{n \in {\mathcal {V}}}{\rho _{mn}} \int _{{\underline{\tau }}}^{{\bar{\tau }}} {\int _{t - s }^t {{e^{2\alpha \theta }}{{\dot{{\mathbf {e}}}}^T}(\theta ){{{\mathbf {Z}}}}{{\dot{{\mathbf {e}}}}}(\theta )\hbox {d}\theta \hbox {d}s }} \nonumber \\&+({{\bar{\tau }}}-{{\underline{\tau }}} ) \sum \limits _{j \in {\mathcal {W}} } {\pi _{ij}^m} \int _{{\underline{\tau }}}^{{\bar{\tau }}} {\int _{t - s }^t {{e^{2\alpha \theta }}{{\dot{{\mathbf {e}}}}^T}(\theta ){{{\mathbf {Z}}}}{{\dot{{\mathbf {e}}}}}(\theta )\hbox {d}\theta \hbox {d}s } }. \end{aligned}$$
(28)

It is straightforward to obtain:

$$\begin{aligned} {{\bar{\pi }}} \int _{{\underline{\tau }} }^{{\bar{\tau }}}{{e^{2\alpha (t-s)}}}{{\mathbf {e}}}^T (t-s){{\mathbf {Q}}}{{\mathbf {e}}} (t-s)\hbox {d}s = {{\bar{\pi }}} \int _{t-{{\bar{\tau }} } }^{t-{{\underline{\tau }}}}{{e^{2\alpha (s)}}}{{\mathbf {e}}}^T (s){{\mathbf {Q}}}{{\mathbf {e}}} (s)\hbox {d}s, \end{aligned}$$
(29)
$$\begin{aligned} {{\bar{\rho }}} \int _{{\underline{\tau }} }^{{\bar{\tau }}}{{e^{2\alpha (t-s)}}}{{\mathbf {e}}}^T (t-s){{\mathbf {Q}}}{{\mathbf {e}}} (t-s)\hbox {d}s = {{\bar{\rho }}} \int _{t-{{\bar{\tau }} } }^{t-{{\underline{\tau }}}}{{e^{2\alpha (s)}}}{{\mathbf {e}}}^T (s){{\mathbf {Q}}}{{\mathbf {e}}} (s)\hbox {d}s. \end{aligned}$$
(30)
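The substitution behind (29)–(30) is the change of variables \(s \mapsto t-s\). As a quick numeric illustration with a crude midpoint Riemann sum (the integrand and the values of \(\alpha\), \(\underline{\tau }\), \(\bar{\tau }\), \(t\) below are arbitrary stand-ins, not values from the paper):

```python
import math

# Riemann-sum check of the change of variables used in (29)-(30):
# int_{tau_l}^{tau_u} e^{2a(t-s)} q(t-s) ds = int_{t-tau_u}^{t-tau_l} e^{2as} q(s) ds.
a, tau_l, tau_u, t = 0.2, 0.1, 0.8, 3.0
q = lambda s: math.sin(s) ** 2          # scalar stand-in for e^T(s) Q e(s)

def riemann(f, lo, hi, n=50000):
    h = (hi - lo) / n
    return sum(f(lo + (k + 0.5) * h) for k in range(n)) * h

lhs = riemann(lambda s: math.exp(2 * a * (t - s)) * q(t - s), tau_l, tau_u)
rhs = riemann(lambda s: math.exp(2 * a * s) * q(s), t - tau_u, t - tau_l)
print(abs(lhs - rhs))   # ~0: the two integrals coincide
```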

Also

$$\begin{aligned} \begin{aligned}&{\sum \limits _{n\in {\mathcal {V}}}}{{\rho _{mn}}}\int _{t - {\tau }^{i,n} (t)}^t {{e^{2\alpha s}}{{{\mathbf {e}}}^T}(s){{{\mathbf {Q}}}}{{\mathbf {e}}}(s)\hbox {d}s} +{\sum \limits _{j\in {\mathcal {W}}}}{{\pi _{ij}^m}}\int _{t - {\tau }^{j,m} (t)}^t {{e^{2\alpha s}}{{{\mathbf {e}}}^T}(s){{{\mathbf {Q}}}}{{\mathbf {e}}}(s)\hbox {d}s}\\&\quad = {\sum \limits _{n\in {\mathcal {V}},n \ne m}}{{\rho _{mn}}}\int _{t - {\tau }^{i,n} (t)}^t {{e^{2\alpha s}}{{{\mathbf {e}}}^T}(s){{{\mathbf {Q}}}}{{\mathbf {e}}}(s)\hbox {d}s} + {{\rho _{mm}}}\int _{t - {\tau }^{i,m} (t)}^t {{e^{2\alpha s}}{{{\mathbf {e}}}^T}(s){{{\mathbf {Q}}}}{{\mathbf {e}}}(s)\hbox {d}s}\\&\qquad + {\sum \limits _{j\in {\mathcal {W}},j \ne i}}{{\pi _{ij}^m}}\int _{t - {\tau }^{j,m} (t)}^t {{e^{2\alpha s}}{{{\mathbf {e}}}^T}(s){{{\mathbf {Q}}}}{{\mathbf {e}}}(s)\hbox {d}s} + {{\pi _{ii}^m}}\int _{t - {\tau }^{i,m} (t)}^t {{e^{2\alpha s}}{{{\mathbf {e}}}^T}(s){{{\mathbf {Q}}}}{{\mathbf {e}}}(s)\hbox {d}s} \\&\quad \leqslant {\sum \limits _{n\in {\mathcal {V}},n \ne m}}{{\rho _{mn}}}\int _{t - {{\bar{\tau }}}}^t {{e^{2\alpha s}}{{{\mathbf {e}}}^T}(s){{{\mathbf {Q}}}}{{\mathbf {e}}}(s)\hbox {d}s} + {{\rho _{mm}}}\int _{t - {{\underline{\tau }}}}^t {{e^{2\alpha s}}{{{\mathbf {e}}}^T}(s){{{\mathbf {Q}}}}{{\mathbf {e}}}(s)\hbox {d}s}\\&\qquad + {\sum \limits _{j\in {\mathcal {W}},j \ne i}}{{\pi _{ij}^m}}\int _{t - {{\bar{\tau }}}}^t {e^{2\alpha s}}{{{\mathbf {e}}}^T}(s){{{\mathbf {Q}}}}{{\mathbf {e}}}(s)\hbox {d}s + {{\pi _{ii}^m}}\int _{t - {{\underline{\tau }}}}^t {{e^{2\alpha s}}{{{\mathbf {e}}}^T}(s){{{\mathbf {Q}}}}{{\mathbf {e}}}(s)\hbox {d}s} \\&\quad \leqslant {{{\bar{\rho }}}}\int _{t - {{\bar{\tau }}}}^{t-{{\underline{\tau }}}} {{e^{2\alpha s}}{{{\mathbf {e}}}^T}(s){{{\mathbf {Q}}}}{{\mathbf {e}}}(s)\hbox {d}s}+ {{{\bar{\pi }}}}\int _{t - {{\bar{\tau }}}}^{t-{{\underline{\tau }}}} {{e^{2\alpha s}}{{{\mathbf {e}}}^T}(s){{{\mathbf {Q}}}}{{\mathbf {e}}}(s)\hbox {d}s}. \end{aligned}\nonumber \\ \end{aligned}$$
(31)

Based on Lemma 2 and (21), the following integral in (28) will be:

$$\begin{aligned}&- ({{\bar{\tau }}}-{{\underline{\tau }}}) \int _{{{\underline{\tau }}} }^{{{\bar{\tau }}}} {{e^{2\alpha (t-s)}}{{\dot{{\mathbf {e}}}}^T}(t-s){{{\mathbf {Z}}}}\dot{{\mathbf {e}}}(t-s)\hbox {d}s} = - ({{\bar{\tau }}}-{{\underline{\tau }}}) \int _{t -{{\bar{\tau }}} }^{t-{{\underline{\tau }}}} {{e^{2\alpha s}}{{\dot{{\mathbf {e}}}}^T}(s){{{\mathbf {Z}}}}\dot{{\mathbf {e}}}(s)\hbox {d}s}\nonumber \\&\quad \leqslant - ({{\bar{\tau }}}-{{\underline{\tau }}}) {e^{2\alpha t}}\int _{t -{{\bar{\tau }}} }^{t-{{\underline{\tau }}}} {{e^{ - 2\alpha {{\bar{\tau }}} }}{{\dot{{\mathbf {e}}}}^T}(s){{{\mathbf {Z}}}}\dot{{\mathbf {e}}}(s)\hbox {d}s} \leqslant {e^{2\alpha t}}{{{\omega }}^T}(t)\Omega {{\omega }}(t), \end{aligned}$$
(32)

where \({{\omega }}(t) = {[{{{\mathbf {e}}}^T}(t-{{\bar{\tau }}}), {{{\mathbf {e}}}^T} (t-\tau ^{i,m}(t)),{{{\mathbf {e}}}^T}(t - {{\underline{\tau }}} )]^T}\),

$$\begin{aligned} {{\varvec{\Omega }}} = \left[ {\begin{array}{*{20}{c}} { - {e^{ - 2\alpha {\bar{\tau }} }}{\mathbf{Z}}}&{}{{e^{ - 2\alpha {\bar{\tau }} }}{\mathbf{Z}} - {{{\mathbf {S}}}}}&{}{{{{\mathbf {S}}}}} \\ *&{}{-2 {e^{ - 2\alpha {\bar{\tau }} }}{\mathbf{Z}} + {{{\mathbf {S}}}} + {{{\mathbf {S}}}}^T}&{}{ - {{{\mathbf {S}}}} + {e^{ - 2\alpha {\bar{\tau }} }}{\mathbf{Z}}} \\ *&{}*&{}{ - {e^{ - 2\alpha {\bar{\tau }} }}{\mathbf{Z}}} \end{array}} \right] . \end{aligned}$$
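In its scalar Jensen form (setting the free matrix \(\mathbf{S}\) aside), the integral bound used in (32) reduces to the Cauchy–Schwarz inequality \((b-a)\int _a^b f^2(s)\hbox {d}s \geqslant \big (\int _a^b f(s)\hbox {d}s\big )^2\). A quick numerical illustration with an arbitrary function and interval:

```python
import math

# Scalar Jensen part of the integral inequality behind Lemma 2 / (32):
# (b - a) * int_a^b f(s)^2 ds >= ( int_a^b f(s) ds )^2.
a, b = 0.5, 2.0
f = lambda s: math.exp(-s) * math.cos(3 * s)   # stand-in for a component of e_dot

def riemann(g, lo, hi, n=100000):
    h = (hi - lo) / n
    return sum(g(lo + (k + 0.5) * h) for k in range(n)) * h

lhs = (b - a) * riemann(lambda s: f(s) ** 2, a, b)
rhs = riemann(f, a, b) ** 2
print(lhs >= rhs)   # True
```

The matrix \({\varvec{\Omega }}\) above refines this bound with the free matrix \(\mathbf{S}\), which ties the two subintervals \([t-\bar{\tau }, t-\tau ^{i,m}(t)]\) and \([t-\tau ^{i,m}(t), t-\underline{\tau }]\) together.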

For any scalars \({\lambda _{i,m}} > 0\), \((i \in {\mathcal {W}},m \in {\mathcal {V}})\), (11) is rewritten as

$$\begin{aligned} {\lambda _{i,m}}{\left[ {\begin{array}{*{20}{c}} {{{{\mathbf {e}}}_b}(t)} \\ {{{\mathbf {g}}}({{{\mathbf {e}}}_b}(t))} \end{array}} \right] ^T}{{\psi }}\left[ {\begin{array}{*{20}{c}} {{{{\mathbf {e}}}_b}(t)} \\ {{{\mathbf {g}}}({{{\mathbf {e}}}_b}(t))} \end{array}} \right] \leqslant 0, \end{aligned}$$
(33)

where

$$\begin{aligned} {{\psi }} = \left[ {\begin{array}{*{20}{c}} {\frac{{{{{\mathbf {U}}}^T}{{\mathbf {V}}} + {{{\mathbf {V}}}^T}{{\mathbf {U}}}}}{2}}&{}{ - \frac{{{{{\mathbf {U}}}^T} + {{{\mathbf {V}}}^T}}}{2}} \\ *&{}{{\mathbf {I}}} \end{array}} \right] . \end{aligned}$$

Moreover, the above inequality can be written in vector-matrix form as follows:

$$\begin{aligned} {{\varvec{\Theta }}}(t) = {e^{2\alpha t}}{\lambda _{i,m}}{\left[ {\begin{array}{*{20}{c}} {{{\mathbf {e}}}(t)} \\ {\bar{{\mathbf {g}}}({{\mathbf {e}}}(t))} \end{array}} \right] ^T}\left[ {\begin{array}{*{20}{c}} {\bar{{\mathbf {U}}}}&{}{\bar{{\mathbf {V}}}} \\ *&{}{{\mathbf {I}}} \end{array}} \right] \left[ {\begin{array}{*{20}{c}} {{{\mathbf {e}}}(t)} \\ {\bar{{\mathbf {g}}}({{\mathbf {e}}}(t))} \end{array}} \right] \leqslant 0. \end{aligned}$$
(34)
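Condition (33) is the usual sector constraint: if each component of \(\mathbf{g}\) lies in the sector \([u,v]\), the quadratic form with \(\psi \) equals \((g(e)-ue)(g(e)-ve) \leqslant 0\) componentwise. A scalar illustration with \(g=\tanh \), which lies in the sector \([0,1]\) (the sector bounds here are illustrative, not the paper's \(\mathbf{U}\), \(\mathbf{V}\)):

```python
import math

# Scalar instance of the sector condition (33): with sector bounds u, v the
# quadratic form equals (g(e) - u*e) * (g(e) - v*e), which is <= 0 whenever
# g lies in the sector [u, v]. g = tanh lies in the sector [0, 1].
u, v = 0.0, 1.0

def quad_form(e):
    g = math.tanh(e)
    # [e, g]^T psi [e, g] with psi = [[u*v, -(u+v)/2], [-(u+v)/2, 1]]
    return u * v * e * e - (u + v) * e * g + g * g

samples = [x / 10.0 for x in range(-50, 51)]
print(max(quad_form(e) for e in samples))   # 0 at e = 0, negative elsewhere
```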

Also, based on Eqs. (4) and (7), for the integrals in (27) and (28) we have:

$$\begin{aligned} \sum \limits _{n \in {\mathcal {V}}}{{\rho _{mn}}} \int _{{\underline{\tau }} }^{{\bar{\tau }}} {\int _{t -s }^{t} {{e^{2\alpha \theta }}{{{\mathbf {e}}}^T}(\theta ){{\mathbf {Q}}}{{\mathbf {e}}}(\theta )\hbox {d}\theta \hbox {d}s } } ={}&0, \end{aligned}$$
(35)
$$\begin{aligned} \sum \limits _{j \in {\mathcal {W}}}{{\pi _{ij}^m}} \int _{{\underline{\tau }} }^{{\bar{\tau }}} {\int _{t -s }^{t} {{e^{2\alpha \theta }}{{{\mathbf {e}}}^T}(\theta ){{\mathbf {Q}}}{{\mathbf {e}}}(\theta )\hbox {d}\theta \hbox {d}s } } ={}&0, \end{aligned}$$
(36)
$$\begin{aligned} \sum \limits _{n \in {\mathcal {V}}}{{\rho _{mn}}} \int _{{\underline{\tau }} }^{{\bar{\tau }}} {\int _{t -s }^{t} {{e^{2\alpha \theta }}{{\dot{{\mathbf {e}}}}^T}(\theta ){{\mathbf {Z}}}{{\dot{{\mathbf {e}}}}}(\theta )\hbox {d}\theta \hbox {d}s } } ={}&0, \end{aligned}$$
(37)
$$\begin{aligned} \sum \limits _{j \in {\mathcal {W}}}{{\pi _{ij}^m}} \int _{{\underline{\tau }} }^{{\bar{\tau }}} {\int _{t -s }^{t} {{e^{2\alpha \theta }}{{\dot{{\mathbf {e}}}}^T}(\theta ){{\mathbf {Z}}}{{\dot{{\mathbf {e}}}}}(\theta )\hbox {d}\theta \hbox {d}s } } ={}&0. \end{aligned}$$
(38)

Hence, substituting (35)–(38) and (15) into (25)–(28), the generator of (22) is bounded as

$$\begin{aligned} \begin{aligned} {{\mathcal {L}}{V}}({{{\mathbf {e}}}_t},{r_t},{\sigma _t})&\leqslant \sum \limits _{k = 1}^4 {{\mathcal {L}}{V_k}} ({{{\mathbf {e}}}_t},{r_t},{\sigma _t}) - {{\varvec{\Theta }}}(t) \\&\leqslant 2\alpha {e^{2\alpha t}}{{{\mathbf {e}}}^T}(t){{{\mathbf {P}}}_{i,m}}{{\mathbf {e}}}(t) + 2{e^{2\alpha t}}{{{\mathbf {e}}}^T}(t){{{\mathbf {P}}}_{i,m}}\dot{{\mathbf {e}}}(t) \\&\quad + {e^{2\alpha t}}{{{\mathbf {e}}}^T}(t)\left[ \sum \limits _{n \in {\mathcal {V}}} {{\rho _{mn}}{{{\mathbf {P}}}_{i,n}} + \sum \limits _{j \in {\mathcal {W}}} {\pi _{ij}^m} } {{{\mathbf {P}}}_{j,m}}\right] {{\mathbf {e}}}(t) \\&\quad + {e^{2\alpha t}}{{{\mathbf {e}}}^T}(t){{{\mathbf {Q}}}}{{\mathbf {e}}}(t) - (1 - \mu ){e^{2\alpha (t {-} {\nu } )}}{{{\mathbf {e}}}^T}(t {-} {\tau }^{i,m} (t)){{{\mathbf {Q}}}}{{\mathbf {e}}}(t {-} {\tau }^{i,m} (t)) \\&\quad +{{{\bar{\rho }}}}\int _{t - {{\bar{\tau }}}}^{t-{{\underline{\tau }}}} {{e^{2\alpha s}}{{{\mathbf {e}}}^T}(s){{{\mathbf {Q}}}}{{\mathbf {e}}}(s)\hbox {d}s}+ {{{\bar{\pi }}}}\int _{t - {{\bar{\tau }}}}^{t-{{\underline{\tau }}}} {{e^{2\alpha s}}{{{\mathbf {e}}}^T}(s){{{\mathbf {Q}}}}{{\mathbf {e}}}(s)\hbox {d}s}\\&\quad + {e^{2\alpha t}}{{\bar{\pi }}}({{\bar{\tau }}}-{{\underline{\tau }}}){{\mathbf {e}}}^T (t){{\mathbf {Q}}}{{\mathbf {e}}}(t)-{{\bar{\pi }}} \int _{t-{{\bar{\tau }}} }^{t-{{\underline{\tau }}}}{{e^{2\alpha (s)}}}{{\mathbf {e}}}^T (s){{\mathbf {Q}}}{{\mathbf {e}}} (s)\hbox {d}s\\&\quad + {e^{2\alpha t}}{{\bar{\rho }}}({{\bar{\tau }}}-{{\underline{\tau }}}){{\mathbf {e}}}^T (t){{\mathbf {Q}}}{{\mathbf {e}}}(t)-{{\bar{\rho }}} \int _{t-{{\bar{\tau }} }}^{t-{{\underline{\tau }}}}{{e^{2\alpha (s)}}}{{\mathbf {e}}}^T (s){{\mathbf {Q}}}{{\mathbf {e}}} (s)\hbox {d}s \\&\quad + ({{\bar{\tau }}}-{{\underline{\tau }}})^2{e^{2\alpha t}}{{\dot{{\mathbf {e}}}}^T}(t){{\mathbf {Z}}}{{\dot{{\mathbf {e}}}}}(t)+ {e^{2\alpha t}}{{{\omega }}^T}(t){{\varvec{\Omega }} {\omega }}(t) \\&\quad - {e^{2\alpha t}}{\lambda _{i,m}}{\left[ {\begin{array}{*{20}{c}} {{{\mathbf {e}}}(t)} \\ {\bar{{\mathbf {g}}}({{\mathbf {e}}}(t))} \end{array}} \right] ^T}\left[ {\begin{array}{*{20}{c}} {\bar{{\mathbf {U}}}}&{}{\bar{{\mathbf {V}}}} \\ *&{}{{\mathbf {I}}} \end{array}} \right] \left[ {\begin{array}{*{20}{c}} {{{\mathbf {e}}}(t)} \\ {\bar{{\mathbf {g}}}({{\mathbf {e}}}(t))} \end{array}} \right] .\\ \end{aligned} \end{aligned}$$
(39)

Then, it follows:

$$\begin{aligned} {\mathcal {L}}V({{{\mathbf {e}}}_t},{r_t},{\sigma _t}) \leqslant {e^{2\alpha t}} {{\xi }}^T(t) \mathfrak {R}{{\xi }}(t), \end{aligned}$$

where \( {{\xi }}(t) = {[\;\;{{{\mathbf {e}}}^T}(t),{{{\mathbf {e}}}^T}(t - {{\bar{\tau }}}),{{{\mathbf {e}}}^T}(t - {{\tau }^{i,m}(t)} ),{{{\mathbf {e}}}^T}(t - {{\underline{\tau }}}),{\bar{{\mathbf {g}}}^T}({{\mathbf {e}}}(t))}]^T\) and

$$\begin{aligned} \mathfrak {R}= & {} \left[ {\begin{array}{*{20}{c}} {{{{\varvec{\Xi }}}_{11}}}&{}0&{}{{{{\varvec{\Xi }}}_{13}}}&{}0&{}{{{{\varvec{\Xi }}}_{15}}} \\ *&{}{{{{\varvec{\Xi }}}_{22}}}&{}{{{{\varvec{\Xi }}}_{23}}}&{}{{{{\varvec{\Xi }}}_{24}}}&{}0 \\ *&{}*&{}{{{{\varvec{\Xi }}}_{33}}}&{}{{{{\varvec{\Xi }}}_{34}}}&{}0 \\ *&{}*&{}*&{}{{{{\varvec{\Xi }}}_{44}}}&{}0 \\ *&{}*&{}*&{}*&{}{{{{\varvec{\Xi }}}_{55}}} \end{array}} \right] \nonumber \\&+ {\left[ {\begin{array}{*{20}{c}} {({\bar{\tau }} - {\underline{\tau }})({{{{\mathbf {K}}}_{i,m}}}+{\bar{{\mathbf {A}}}_i})^T} \\ 0 \\ {({\bar{\tau }} - {\underline{\tau }}){\bar{{\mathbf {G}}}_i}^T} \\ 0 \\ {({\bar{\tau }} - {\underline{\tau }}){\bar{{\mathbf {B}}}_i}^T} \end{array}} \right] }{\mathbf {Z}}\left[ {\begin{array}{*{20}{c}} {({\bar{\tau }} - {\underline{\tau }})({{{{\mathbf {K}}}_{i,m}}}+{\bar{{\mathbf {A}}}_i})^T} \\ 0 \\ {({\bar{\tau }} - {\underline{\tau }}){\bar{{\mathbf {G}}}_i}^T} \\ 0 \\ {({\bar{\tau }} - {\underline{\tau }}){\bar{{\mathbf {B}}}_i}^T} \end{array}} \right] ^T. \end{aligned}$$
(40)

By the Schur complement, \(\mathfrak {R} < 0\) in (40) is equivalent to \({\varvec{\Phi }} < 0\) in (20). If \({\varvec{\Phi }} < 0\), then

$$\begin{aligned} {\mathcal {L}}V({{{\mathbf {e}}}_t},{r_t},{\sigma _t}) < 0. \end{aligned}$$
(41)

Therefore, by Dynkin's formula, it follows that

$$\begin{aligned} {\lambda _{\min }}({{{\mathbf {P}}}_{i,m}}){e^{2\alpha t}}\mathrm{E}\{ {\left\| {{{\mathbf {e}}}\left. {(t)} \right\| } \right. ^2}\} \leqslant \mathrm{E}\{ V({{{\mathbf {e}}}_t},{r_t},{\sigma _t})\} \leqslant \mathrm{E}\{ V({{{\mathbf {e}}}_0},{r_0},{\sigma _0})\}, \end{aligned}$$
(42)

and from the definition of \( V({{{\mathbf {e}}}_0},{r_0},{\sigma _0})\), we obtain

$$\begin{aligned} \mathrm{E}\;\{ V({{{\mathbf {e}}}_0},{r_0},{\sigma _0})\}= & {} \sum \limits _{k = 1}^4 {\mathrm{E}\;\{ {V_k}({{{\mathbf {e}}}_0},{r_0},{\sigma _0})\} } \nonumber \\\leqslant & {} {\lambda _{\max }}({{{\mathbf {P}}}_{i,m}}){\left\| {\left. {{{\mathbf {e}}}(0)} \right\| } \right. ^2} + \;{\lambda _{\max }}({{{\mathbf {Q}}}})\;\mathop {\sup \;\;}\limits _{-{{\bar{\tau }}} \leqslant \theta \leqslant 0} {\left\| {\left. {{{\mathbf {e}}}(\theta )} \right\| } \right. ^2}\int _{-{{\bar{\tau }}} }^{0} {{e^{2\alpha x}}\hbox {d}x}\nonumber \\&+{({{{\bar{\pi }}}+{{\bar{\rho }}}})} \;{\lambda _{\max }}({{{\mathbf {Q}}}})\;\mathop {\sup \;\;}\limits _{-{ {\bar{\tau }}} \leqslant \theta \leqslant 0} {\left\| {\left. {{{\mathbf {e}}}(\theta )} \right\| } \right. ^2} \int _{{\underline{\tau }} }^{{\bar{\tau }}}\int _{-s }^{0} {{e^{2\alpha x}}\hbox {d}x \hbox {d}s }\nonumber \\&+ \;{({{\bar{\tau }}}-{{\underline{\tau }}})}{\lambda _{\max }}({{{\mathbf {Z}}}})\;\mathop {\sup \;\;}\limits _{- {{\bar{\tau }}} \leqslant \theta \leqslant 0} {\left\| {\left. {\dot{{\mathbf {e}}}(\theta )} \right\| } \right. ^2} \int _{{\underline{\tau }} }^{{\bar{\tau }}}\int _{-s }^{0} {{e^{2\alpha x}}\hbox {d}x \hbox {d}s }\nonumber \\\leqslant & {} a\mathop {\sup \;\;}\limits _{-{{\bar{\tau }}} \leqslant \theta \leqslant 0} {\left\| {\left. {{{\mathbf {e}}}(\theta )} \right\| } \right. ^2} + b\mathop {\sup \;\;}\limits _{-{ {\bar{\tau }}} \leqslant \theta \leqslant 0} {\left\| {\left. {\dot{{\mathbf {e}}}(\theta )} \right\| } \right. ^2} , \end{aligned}$$
(43)

where

$$\begin{aligned} a&= {\lambda _{\max }}({{{\mathbf {P}}}_{i,m}}) \\&\quad +\left[ \frac{1-{e^{-2\alpha {\bar{\tau }}}}}{2 {\alpha }}+ {({{{\bar{\pi }}}+{{\bar{\rho }}}})} \frac{2 \alpha {\bar{\tau }} -2 \alpha {\underline{\tau }}+{e^{-2\alpha {\bar{\tau }}}}-{e^{-2\alpha {\underline{\tau }}}}}{4 {\alpha }^2}\right] \;{\lambda _{\max }}({{{\mathbf {Q}}}}), \\ b&= {({{\bar{\tau }}}-{{\underline{\tau }}})} \frac{2 \alpha {\bar{\tau }} -2 \alpha {\underline{\tau }}+{e^{-2\alpha {\bar{\tau }}}}-{e^{-2\alpha {\underline{\tau }}}}}{4 {\alpha }^2}\;{\lambda _{\max }}({{{\mathbf {Z}}}}). \end{aligned}$$
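The closed-form constants \(a\) and \(b\) come from two elementary integrals; both antiderivatives can be confirmed numerically (the values of \(\alpha \), \(\underline{\tau }\), \(\bar{\tau }\) below are arbitrary positive stand-ins):

```python
import math

# Check the two integral evaluations behind the constants a and b:
#   int_{-tau_u}^0 e^{2 alpha x} dx = (1 - e^{-2 alpha tau_u}) / (2 alpha)
#   int_{tau_l}^{tau_u} int_{-s}^0 e^{2 alpha x} dx ds
#     = (2 alpha tau_u - 2 alpha tau_l + e^{-2 alpha tau_u} - e^{-2 alpha tau_l}) / (4 alpha^2)
alpha, tau_l, tau_u = 0.3, 0.2, 0.9

def riemann(f, lo, hi, n=20000):
    h = (hi - lo) / n
    return sum(f(lo + (k + 0.5) * h) for k in range(n)) * h

single = riemann(lambda x: math.exp(2 * alpha * x), -tau_u, 0.0)
closed1 = (1 - math.exp(-2 * alpha * tau_u)) / (2 * alpha)

double = riemann(lambda s: riemann(lambda x: math.exp(2 * alpha * x), -s, 0.0, 1000),
                 tau_l, tau_u, 1000)
closed2 = (2 * alpha * tau_u - 2 * alpha * tau_l
           + math.exp(-2 * alpha * tau_u) - math.exp(-2 * alpha * tau_l)) / (4 * alpha ** 2)

print(abs(single - closed1), abs(double - closed2))   # both ~0
```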

Therefore, (42) is rewritten as

$$\begin{aligned} \mathrm{E}\{ {\left\| {{{\mathbf {e}}}(t)} \right\| ^2}\} \leqslant \frac{{a + b }}{{{\lambda _{\min }}({{{\mathbf {P}}}_{i,m}})}}{e^{ - 2\alpha t}}\mathop {\sup \;\;}\limits _{- { {\bar{\tau }}} \leqslant \theta \leqslant 0} \{ {\left\| {\left. {{{\mathbf {e}}}(\theta )} \right\| } \right. ^2},{\left\| {\left. {\dot{{\mathbf {e}}}(\theta )} \right\| } \right. ^2}\}. \end{aligned}$$
(44)

According to Definition 1, it is proved that the error system (15) is exponentially mean-square stable, which means that exponential synchronization of the piecewise-homogeneous MJCDN (1) is achieved. \(\square \)

Remark 3.1

If the coupling time delay in (1) is not switching, the Lyapunov–Krasovskii functional candidate (22) can be reduced to the following one:

$$\begin{aligned} V({{{\mathbf {e}}}_t},{r_t},{\sigma _t}) = \sum \limits _{k = 1}^3 {{V_k}} ({{{\mathbf {e}}}_t},{r_t},{\sigma _t}), \end{aligned}$$
(45)

where

$$\begin{aligned} {V_1}({{{\mathbf {e}}}_t},{r_t},{\sigma _t}) ={}&{e^{2\alpha t}}{{{\mathbf {e}}}^T}(t){{{\mathbf {P}}}_{{r_t},{\sigma _t}}}{{\mathbf {e}}}(t), \\ {V_2}({{{\mathbf {e}}}_t},{r_t},{\sigma _t}) ={}&\int _{t -{ \tau }(t)}^t {{e^{2\alpha s}}{{{\mathbf {e}}}^T}(s){{{\mathbf {Q}}}}{{\mathbf {e}}}(s)\hbox {d}s} , \\ {V_3}({{{\mathbf {e}}}_t},{r_t},{\sigma _t}) ={}&{\tau }\int _{- \tau }^{0} {\int _{t + \theta }^t {{e^{2\alpha s}}{{\dot{{\mathbf {e}}}}^T}(s){{{\mathbf {Z}}}}{{\dot{{\mathbf {e}}}}}(s)\hbox {d}s\hbox {d}\theta } } . \end{aligned}$$

Remark 3.2

The matrix \({\mathbf{Z}}\) in \(V_4({{{\mathbf {e}}}_t},{r_t},{\sigma _t})\) can be a mode-dependent one \({\mathbf{Z}}_{i,m}\); therefore, \(V_4({{{\mathbf {e}}}_t},{r_t},{\sigma _t})\) is defined as follows:

$$\begin{aligned} \begin{aligned} {V_4}({{{\mathbf {e}}}_t},{r_t},{\sigma _t})&=({{\bar{\tau }}}-{{\underline{\tau }}} )\int _{{\underline{\tau }}}^{{\bar{\tau }}} {\int _{t - s }^t {{e^{2\alpha \theta }}{{\dot{{\mathbf {e}}}}^T}(\theta ){{{\mathbf {Z}}}_{i,m}}{{\dot{{\mathbf {e}}}}}(\theta )\hbox {d}\theta \hbox {d}s }}\\&\quad +({{\bar{\tau }}}-{{\underline{\tau }}} ) \int _{ {\underline{\tau }} }^{{\bar{\tau }}} {\int _{- \theta }^0 {\int _{t + \beta }^t {{e^{2\alpha s}}{{\dot{{\mathbf {e}}}}^T}(s){{{\mathbf {L}}}}\dot{{\mathbf {e}}}(s)\hbox {d}s\hbox {d}\beta \hbox {d}\theta } } }, \end{aligned} \end{aligned}$$

where the matrix \({\mathbf{L}}\) is symmetric positive definite. If \({\mathbf {Z}}_{i,m}\) is taken to be mode-dependent, a triple integral with the mode-independent matrix \({\mathbf {L}}\) must be added to \({V_4}({{{\mathbf {e}}}_t},{r_t},{\sigma _t})\). Besides, the matrix \({\mathbf{S}}\) introduced in Lemma 2 can also be assumed to be a mode-dependent matrix \({\mathbf{S}}_{i,m}\).

It is noteworthy that taking the Lyapunov–Krasovskii matrices to be mode-dependent increases the number of decision variables, and as a result, the LMI feasibility problem can be solved under less conservative conditions. Despite this advantage, the larger number of decision variables leads to heavier computation, which is a disadvantage.

The next theorem proposes some sufficient conditions to achieve the feedback controller gain for each node in the piecewise-homogeneous MJCDN (1) with CKTRs.

Theorem 2

Under \({{\mathcal {W}}^i_{k}}={{\mathcal {W}}^i}, {{\mathcal {V}}^m_{k}}={{\mathcal {V}}^m}\), for given positive scalars \(\alpha \), \({\bar{\tau }}\), \({\underline{\tau }}\), \(\mu \), the piecewise-homogeneous MJCDN (1) is exponentially synchronized in the mean-square, if there exist symmetric positive definite matrices \({{{\mathbf {P}}}_{i,m}} = {\text {diag}}\{{{\mathbf {P}}}_{i,m}^1,{{\mathbf {P}}}_{i,m}^2, \ldots ,{{\mathbf {P}}}_{i,m}^N\}\), \({\mathbf {Q}}\), \({\mathbf {Z}}\), \({{{\mathbf {X}}}_{i,m}} = {\text {diag}}\{{{\mathbf {X}}}_{i,m}^1, {{\mathbf {X}}}_{i,m}^2,\ldots ,{{\mathbf {X}}}_{i,m}^N\}\), any matrix \({\mathbf {S}}\), and scalars \(\lambda _{i,m}> 0\) for any \(i\in {\mathcal {W}}\), \(m \in {\mathcal {V}}\), such that (21) and the following LMI hold:

$$\begin{aligned} \left[ {\begin{array}{*{20}{c}} {\breve{\varvec{\Xi }}}_{11}&{}0&{}{{{{\varvec{\Xi }}}_{13}}}&{}0&{}{{{{\varvec{\Xi }}}_{15}}}&{}{\breve{{\varvec{\Xi }}}_{16}}\\ *&{}{{{{\varvec{\Xi }}}_{22}}}&{}{{{{\varvec{\Xi }}}_{23}}}&{}{{{{\varvec{\Xi }}}_{24}}}&{}0&{}0 \\ *&{}*&{}{{{{\varvec{\Xi }}}_{33}}}&{}{{{{\varvec{\Xi }}}_{34}}}&{}0&{}{\breve{{\varvec{\Xi }}}_{36}} \\ *&{}*&{}*&{}{{{{\varvec{\Xi }}}_{44}}}&{}0&{}0 \\ *&{}*&{}*&{}*&{}{{{{\varvec{\Xi }}}_{55}}}&{}{{{{\breve{{\varvec{\Xi }}}}_{56}}}}\\ *&{}*&{}*&{}*&{}*&{}{{\mathbf {Z}} - 2{{{\mathbf {P}}}_{i,m}}} \end{array}} \right] < 0, \end{aligned}$$
(46)

where:

$$\begin{aligned} {\breve{{\varvec{\Xi }}}}_{11}= & {} 2\alpha {{\mathbf{P}}_{i,m}} + {{{\mathbf {X}}}_{i,m}} + {{{\mathbf {X}}}_{i,m}}^T+ {{\mathbf{P}}_{i,m}}{{\bar{\mathbf{A}}}_{i}} + {\bar{\mathbf{A}}}_{i}^T{{\mathbf{P}}_{i,m}} \\&+ {{\mathbf{Q}}} - {\lambda _{i,m}}{\bar{\mathbf{U}}}+({{\bar{\pi }}}+{{\bar{\rho }}})({{\bar{\tau }}}-{{\underline{\tau }}}) {{\mathbf{Q}}} + \sum \limits _{n \in {\mathcal {V}}} {{\rho _{mn}}{{\mathbf{P}}_{i,n}} + \sum \limits _{j \in {\mathcal {W}}} {\pi _{ij}^m{{\mathbf{P}}_{j,m}}} },\\ {\breve{{\varvec{\Xi }}}}_{16}= & {} ({\bar{\tau }} - {\underline{\tau }}){{{\bar{\mathbf{A}}}_i}^T}{{{\mathbf {P}}}_{i,m}}+({\bar{\tau }} - {\underline{\tau }}){{{\mathbf {X}}}_{i,m}}^T,\\ {\breve{{\varvec{\Xi }}}}_{36}= & {} ({\bar{\tau }} - {\underline{\tau }}){{\bar{{\mathbf {G}}}_i}^T}{{{\mathbf {P}}}_{i,m}},\\ {\breve{{\varvec{\Xi }}}}_{56}= & {} ({\bar{\tau }} - {\underline{\tau }}){{\bar{{\mathbf {B}}}_i}^T}{{\mathbf {P}}}_{i,m}, \end{aligned}$$

and the other parameters are determined in Theorem 1. In addition, the controller gain matrix is given in the form of:

$$\begin{aligned} {{{\mathbf {K}}}_{i,m}} = {{\mathbf {P}}}_{i,m}^{ - 1}{{{\mathbf {X}}}_{i,m}}. \end{aligned}$$
(47)

Proof

Pre- and post-multiplying both sides of (20) by \({{{\mathbf {L}}}^T} = {\text {diag}}\{ {{\mathbf {I}}},{{\mathbf {I}}},{{\mathbf {I}}},{{\mathbf {I}}}, {{\mathbf {I}}},{{{\mathbf {P}}}_{i,m}}{{\mathbf {Z}}^{ - 1}}\}\) and \({\mathbf {L}}\), respectively, and substituting \({{{\mathbf {X}}}_{i,m}} = {{{\mathbf {P}}}_{i,m}}{{{\mathbf {K}}}_{i,m}}\), yields

$$\begin{aligned} \left[ {\begin{array}{*{20}{c}} {\breve{\varvec{\Xi }}}_{11}&{}0&{}{{{{\varvec{\Xi }}}_{13}}}&{}0&{}{{{{\varvec{\Xi }}}_{15}}}&{}{\breve{\varvec{\Xi }}}_{16} \\ *&{}{{{{\varvec{\Xi }}}_{22}}}&{}{{{{\varvec{\Xi }}}_{23}}}&{}{{{{\varvec{\Xi }}}_{24}}}&{}0&{}0 \\ *&{}*&{}{{{{\varvec{\Xi }}}_{33}}}&{}{{{{\varvec{\Xi }}}_{34}}}&{}0&{}{\breve{{\varvec{\Xi }}}}_{36} \\ *&{}*&{}*&{}{{{{\varvec{\Xi }}}_{44}}}&{}0&{}0 \\ *&{}*&{}*&{}*&{}{{{{\varvec{\Xi }}}_{55}}}&{}{\breve{{\varvec{\Xi }}}}_{56} \\ *&{}*&{}*&{}*&{}*&{}{ - {{{\mathbf {P}}}_{i,m}}{{\mathbf {Z}}^{ - 1}}{{{\mathbf {P}}}_{i,m}}} \end{array}} \right] < 0. \end{aligned}$$
(48)

It is obvious that (48) is not an LMI, due to the nonlinear term \( - {{{\mathbf {P}}}_{i,m}}{{\mathbf {Z}}^{ - 1}}{{{\mathbf {P}}}_{i,m}}\); hence, this inequality should be converted into an LMI. Noting that \({\mathbf {Z}} > 0 \) and that \({{{{\mathbf {P}}}_{i,m}}{{\mathbf {Z}}^{ - 1}}{{{\mathbf {P}}}_{i,m}}-{{{\mathbf {P}}}_{i,m}}{{\mathbf {Z}}^{ - 1}}{{{\mathbf {P}}}_{i,m}} = 0 \geqslant 0}\), by the Schur complement we can get

$$\begin{aligned} \left[ {\begin{array}{*{20}{c}} {{{{\mathbf {P}}}_{i,m}}{{\mathbf {Z}}^{ - 1}}{{{\mathbf {P}}}_{i,m}}}&{}{ - {{{\mathbf {P}}}_{i,m}}} \\ *&{}{{\mathbf {Z}}} \end{array}} \right] \geqslant 0, \end{aligned}$$
(49)

then, pre- and post-multiplying (49) by \([{{\mathbf {I}}}\;\;{{\mathbf {I}}}]\) and its transpose (see [28]), it can be proved that:

$$\begin{aligned} - {{{\mathbf {P}}}_{i,m}}{{\mathbf {Z}}^{ - 1}}{{{\mathbf {P}}}_{i,m}} \leqslant {\mathbf {Z}} - 2{{{\mathbf {P}}}_{i,m}}. \end{aligned}$$
(50)
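In the scalar case, (50) reads \(-p^2/z \leqslant z - 2p\), i.e. \((p-z)^2/z \geqslant 0\), and the matrix in (49) has determinant exactly zero, so it is positive semidefinite. A small numeric check (all values are arbitrary positives):

```python
# Scalar check of the bound (50): -p * z^{-1} * p <= z - 2p for all p, z > 0,
# since z - 2p + p^2/z = (p - z)^2 / z >= 0. Also checks that the 2x2 matrix
# in (49), [[p^2/z, -p], [-p, z]], is PSD: its determinant is zero and its
# trace is positive, so its eigenvalues are {0, trace}.
pairs = [(0.5, 2.0), (3.0, 3.0), (10.0, 0.1), (1e-3, 7.0)]

for p, z in pairs:
    slack = z - 2 * p + p * p / z          # = (p - z)^2 / z >= 0
    assert slack >= -1e-12
    det = (p * p / z) * z - (-p) * (-p)    # determinant of the matrix in (49)
    trace = p * p / z + z
    assert abs(det) < 1e-9 and trace > 0

print("bound (50) verified on all samples")
```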

Hence, if (46) holds, then (48) holds, and this completes the proof. \(\square \)

3.1 Development of the Markovian Jump Problem with PUTRs (\({{\mathcal {W}}^i_{k}} \subset {{\mathcal {W}}^i}\) and \({{{\mathcal {V}}^m_{k}} \subset {{\mathcal {V}}^m}}\))

In this section, we extend the synchronization problem presented in Sect. 3 to the synchronization of piecewise-homogeneous MJCDNs with PUTRs (\({{\mathcal {W}}^i_{k}} \subset {{\mathcal {W}}^i}\) and \({{{\mathcal {V}}^m_{k}} \subset {{\mathcal {V}}^m}}\)). The TRs determine the dynamical behavior of Markovian jump systems in the jumping process. In practice, obtaining the exact values of the TR matrices is impossible, or at least challenging. To broaden the analysis and design of Markovian jump systems, uncertain TRs are therefore assumed for the Markov chains. One description of uncertain TRs is PUTRs, in which every element is either completely known or completely unknown. Hence, it is important to address PUTRs, which require no knowledge of the unknown elements.

Theorem 3

For a given controller gain matrix \({\mathbf{K}}({r_t},{\sigma _t})\) and positive scalars \(\alpha \), \({\bar{\tau }}\), \({\underline{\tau }}\), \(\mu \), the error system (15) is exponentially mean-square stable under \({{\mathcal {W}}^i_{k}} \subset {{\mathcal {W}}^i}\) and \({{{\mathcal {V}}^m_{k}} \subset {{\mathcal {V}}^m}}\), if there exist symmetric positive definite matrices \({\mathbf {P}}_{i,m}\), \({\mathbf {Q}}\), \({\mathbf {Z}}\), any matrices \({\mathbf {S}}\), \({{\mathbf {E}}}_{i,m}\), \({{\mathbf {H}}}_{i,m}\), and scalars \(\lambda _{i,m}> 0\), such that for any \(i,j \in {\mathcal {W}}^i\) and \(m,n \in {\mathcal {V}}^m\), (21) and the following LMIs hold:

(51)
$$\begin{aligned}&\left\{ {\begin{array}{*{20}{l}} {{\mathbf{P}}_{j,m}}-{{\mathbf{E}}_{i,m}} \leqslant 0,&{} {j \in {{\mathcal {W}}}_{uk}^i} &{}{j \ne i,}\\ {{\mathbf{P}}_{j,m}}-{{\mathbf{E}}_{i,m}} \geqslant 0,&{} {j \in {{\mathcal {W}}}_{uk}^i} &{}{j=i,}\\ \end{array}} \right. \end{aligned}$$
(52)
$$\begin{aligned}&\left\{ {\begin{array}{*{20}{l}} {{\mathbf{P}}_{i,n}}-{{\mathbf{H}}_{i,m}} \leqslant 0,&{} {n \in {{\mathcal {V}}}_{uk}^m} &{}{n \ne m,}\\ {{\mathbf{P}}_{i,n}}-{{\mathbf{H}}_{i,m}} \geqslant 0,&{} {n \in {{\mathcal {V}}}_{uk}^m} &{}{n=m,}\\ \end{array}} \right. \end{aligned}$$
(53)

where , and the other parameters follow the same definitions in Theorem 1.

Proof

Consider the same Lyapunov functional as that in Theorem 1. It is obvious that (25) is expressed as follows:

$$\begin{aligned} \begin{aligned} {\mathcal {L}}{V_1}({{{\mathbf {e}}}_t},{r_t},{\sigma _t}) =&2\alpha {e^{2\alpha t}}{{{\mathbf {e}}}^T}(t){{{\mathbf {P}}}_{i,m}}{{\mathbf {e}}}(t) + 2{e^{2\alpha t}}{{{\mathbf {e}}}^T}(t){{{\mathbf {P}}}_{i,m}}\dot{{\mathbf {e}}}(t) \\&+ {e^{2\alpha t}}{{{\mathbf {e}}}^T}(t)\left[ \sum \limits _{n \in {\mathcal {V}}} {{\rho _{mn}}{{{\mathbf {P}}}_{i,n}} + \sum \limits _{j \in {\mathcal {W}}} {\pi _{ij}^m} } {{{\mathbf {P}}}_{j,m}}\right] {{\mathbf {e}}}(t)\\ =&2\alpha {e^{2\alpha t}}{{{\mathbf {e}}}^T}(t){{{\mathbf {P}}}_{i,m}}{{\mathbf {e}}}(t) + 2{e^{2\alpha t}}{{{\mathbf {e}}}^T}(t){{{\mathbf {P}}}_{i,m}}\dot{{\mathbf {e}}}(t) \\&+ {e^{2\alpha t}}{{{\mathbf {e}}}^T}(t)\left[ \sum \limits _{n \in {{\mathcal {V}}}_k^m} {{\rho _{mn}}{{{\mathbf {P}}}_{i,n}} + \sum \limits _{j \in {{\mathcal {W}}}_k^i} {\pi _{ij}^m} } {{{\mathbf {P}}}_{j,m}}\right] {{\mathbf {e}}}(t)\\&+ {e^{2\alpha t}}{{{\mathbf {e}}}^T}(t)\left[ \sum \limits _{n \in {{\mathcal {V}}}_{uk}^m} {{\rho _{mn}}{{{\mathbf {P}}}_{i,n}} + \sum \limits _{j \in {{\mathcal {W}}}_{uk}^i} {\pi _{ij}^m} } {{{\mathbf {P}}}_{j,m}}\right] {{\mathbf {e}}}(t), \end{aligned} \end{aligned}$$
(54)

According to Eqs. (4) and (7), for arbitrary mode-dependent matrices \({{\mathbf {E}}}_{i,m}\), \({{\mathbf {H}}}_{i,m}\), the following zero equalities hold:

$$\begin{aligned} \begin{aligned} {{{\mathbf {e}}}^T}(t)\left[ \sum \limits _{n \in {{\mathcal {V}}}} {\rho _{mn}}{{{\mathbf {H}}}_{i,m}}\right] {{\mathbf {e}}}(t) ={{{\mathbf {e}}}^T}(t)\left[ \sum \limits _{n \in {{\mathcal {V}}_k^m}} {\rho _{mn}}{{{\mathbf {H}}}_{i,m}}+\sum \limits _{n \in {{\mathcal {V}}_{uk}^m}} {\rho _{mn}}{{{\mathbf {H}}}_{i,m}}\right] {{\mathbf {e}}}(t)=0,\\ {{{\mathbf {e}}}^T}(t)\left[ \sum \limits _{j \in {{\mathcal {W}}}} {\pi _{ij}^m}{{{\mathbf {E}}}_{i,m}}\right] {{\mathbf {e}}}(t)= {{{\mathbf {e}}}^T}(t)\left[ \sum \limits _{j \in {{\mathcal {W}}}_k^i} {\pi _{ij}^m} {{{\mathbf {E}}}_{i,m}}+\sum \limits _{j \in {{\mathcal {W}}}_{uk}^i} {\pi _{ij}^m} {{{\mathbf {E}}}_{i,m}}\right] {{\mathbf {e}}}(t)=0, \end{aligned} \end{aligned}$$
(55)
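Equalities (55) hold because each transition-rate row sums to zero, so any free matrix \({{\mathbf {H}}}_{i,m}\) (or \({{\mathbf {E}}}_{i,m}\)) can be inserted without changing the generator, regardless of how the row is split into known and unknown entries. A scalar numeric sketch with illustrative rates:

```python
# Sketch of the free-matrix zero equalities (55): since a transition-rate row
# sums to zero, sum_n rho[n] * h equals zero for ANY free weight h, no matter
# which entries of the row are treated as known. The row and the free scalar h
# (standing in for H_{i,m}) are illustrative.
rho_row = [-0.9, 0.4, 0.3, 0.2]         # one row of the TR matrix, sums to 0
h = 123.456                              # arbitrary free weight

known = {1, 2}                           # indices treated as known (PUTR split)
unknown = {0, 3}

total = sum(rho_row[n] * h for n in known) + sum(rho_row[n] * h for n in unknown)
print(total)   # 0 up to rounding: the split contributes nothing
```

This is exactly what allows the unknown-TR terms in (54) to be absorbed into the sign conditions (52)–(53).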

With regard to (26)–(38), (54), and (55), it is obvious that

$$\begin{aligned} {\mathcal {L}}V({{{\mathbf {e}}}_t},{r_t},{\sigma _t})\leqslant & {} {e^{2\alpha t}}{{{\xi }}^T(t) }\mathfrak {R}^* { {{\xi }}(t) } \\&+{e^{2\alpha t}}{{{\mathbf {e}}}^T}(t)\left[ \sum \limits _{n \in {{\mathcal {V}}}_{uk}^m} {{\rho _{mn}}({{{\mathbf {P}}}_{i,n}}{-}{{{\mathbf {H}}}_{i,m}}) {+} \sum \limits _{j \in {{\mathcal {W}}}_{uk}^i} {\pi _{ij}^m} } ({{{\mathbf {P}}}_{j,m}}{-}{{{\mathbf {E}}}_{i,m}})\right] {{\mathbf {e}}}(t), \end{aligned}$$

where

(56)

Due to \(\pi _{ii}^{\sigma _{t + {{\varDelta } t}}} = - \sum \nolimits _{j = 1,i \ne j}^w {{\pi _{ij}^{\sigma _{t + {{\varDelta } t}}}}}\), \({\rho _{mm}} = - \sum \nolimits _{n = 1,m \ne n}^v {{\rho _{mn}}}\), and the Schur complement, inequality (41) holds if the LMIs (51)–(53) and (21) are satisfied. \(\square \)

In Theorem 4, a desired Markovian mode-dependent controller is designed for every node in the piecewise-homogeneous MJCDN (1) with PUTRs.

Theorem 4

For given scalars \(\alpha \), \({\bar{\tau }}\), \({\underline{\tau }}\) and \(\mu \), the piecewise-homogeneous MJCDN (1) with PUTRs (\({{\mathcal {W}}^i_{k}} \subset {{\mathcal {W}}^i}\) and \({{{\mathcal {V}}^m_{k}} \subset {{\mathcal {V}}^m}}\)) based on the resulted conditions in Theorem 3 is exponentially synchronized in the mean-square by the feedback controller of the form (14), if there exist symmetric positive definite matrices \({{{\mathbf {P}}}_{i,m}} = {\text {diag}}\{ {{\mathbf {P}}}_{i,m}^1,{{\mathbf {P}}}_{i,m}^2,\ldots ,{{\mathbf {P}}}_{i,m}^N\}\), \({\mathbf {Q}}\), \({\mathbf {Z}}\), \({{{\mathbf {X}}}_{i,m}} = {\text {diag}}\{ {{\mathbf {X}}}_{i,m}^1,{{\mathbf {X}}}_{i,m}^2,\cdots ,{{\mathbf {X}}}_{i,m}^N\}\), any matrices \( {\mathbf {S}}\), \({{\mathbf {E}}}_{i,m}\), \({{\mathbf {H}}}_{i,m}\) and scalars \(\lambda _{i,m}> 0\) for any \(i \in {\mathcal {W}}\), \(m \in {\mathcal {V}}\), such that (21), (52), (53) and the following LMI hold:

(57)

where

and the other parameters are defined as in Theorem 1 in Sect. 3. In addition, the controller gain matrices are given by:

$$\begin{aligned} {{{\mathbf {K}}}_{i,m}} = {{\mathbf {P}}}_{i,m}^{ - 1}{{{\mathbf {X}}}_{i,m}}. \end{aligned}$$
(58)

Proof

Pre- and post-multiplying both sides of (51) by \({{{\mathbf {L}}}^T} = {\text {diag}}\{ {{\mathbf {I}}},{{\mathbf {I}}},{{\mathbf {I}}}, {{\mathbf {I}}},{{\mathbf {I}}}, {{{\mathbf {P}}}_{i,m}}{{\mathbf {Z}}^{ - 1}}\}\) and \({\mathbf {L}}\), respectively, and substituting \({{{\mathbf {X}}}_{i,m}} = {{{\mathbf {P}}}_{i,m}}{{{\mathbf {K}}}_{i,m}}\) yields

(59)

Similar to Theorem 2 in Sect. 3, based on (50), it can be shown that (59) is equivalent to (57). This completes the proof. \(\square \)
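Numerically, once the LMI variables \({{\mathbf {P}}}_{i,m}\) and \({{\mathbf {X}}}_{i,m}\) have been computed, recovering the gains (58) amounts to one linear solve per mode pair. A minimal sketch with placeholder matrices (these are not the solver output of the example; the names only mirror the notation above):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
P = A @ A.T + 3.0 * np.eye(3)     # placeholder SPD block P_{i,m}
X = rng.standard_normal((3, 3))   # placeholder LMI variable X_{i,m}

# K_{i,m} = P_{i,m}^{-1} X_{i,m}; prefer solve() over forming the inverse
K = np.linalg.solve(P, X)
assert np.allclose(P @ K, X)
```

Using a linear solve instead of an explicit inverse is both cheaper and better conditioned.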

Remark 3.3

For a given application, Theorem 2 in Sect. 3 and Theorem 4 in Sect. 3.1 give the controller gain matrices \({{{\mathbf {K}}}_{i,m}} \in {{\mathbb {R}}^{n \times n}}\) for every mode \(i \in {\mathcal {W}}\) and \(m \in {\mathcal {V}}\). In practice, the current states of the stochastic variables \(r_t\) and \(\sigma _t\) must be known in order to select the proper controller gain matrix \({{\mathbf{K}}_k}({r_t},{\sigma _t})\), \(k=1,2,\ldots ,N\), for the controller (14). Thus, a mode observer is needed to identify the current mode of the system for implementation.
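The gain selection described in Remark 3.3 can be sketched as a lookup keyed by the observed mode pair \((r_t,\sigma _t)\). The gain table and the sign convention \({\mathbf {u}}_b = {\mathbf {K}}_b(r_t,\sigma _t)\,{\mathbf {e}}_b\) below are illustrative assumptions, with placeholder identity gains standing in for the LMI solutions:

```python
import numpy as np

# Hypothetical gain table: gains[(i, m)] holds K_b(i, m), b = 1..N,
# computed offline from the LMIs (identity placeholders here).
N, n = 3, 3
gains = {(i, m): [np.eye(n) for _ in range(N)]
         for i in (1, 2) for m in (1, 2, 3)}

def control(e, r_t, sigma_t):
    # Select the gain set for the observed mode pair and form u_b = K_b e_b.
    K = gains[(r_t, sigma_t)]
    return [K[b] @ e[b] for b in range(N)]
```

A mode observer would supply `r_t` and `sigma_t` at each sampling instant; the lookup itself is O(1).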

Remark 3.4

Non-homogeneous Markovian jump models are more general than homogeneous models and apply to more practical situations. In this paper, if one sets \({\mathcal {V}} = \{ 1\}\), the piecewise-homogeneous Markovian problem reduces to a homogeneous one.

4 Numerical Example

In this section, we present an example to demonstrate the usefulness of the proposed results. Consider the piecewise-homogeneous MJCDN (1) with three coupled Chua’s circuits and \({\mathcal {W}} = \{ 1,2\}\) for the low-level signal \(\{r_t, t \geqslant 0\}\). The mode-dependent coefficient matrices for each \(i \in {{\mathcal {W}}}\) are given as [60]:

$$\begin{aligned}&{{\mathbf {A}}_1}= \left[ {\begin{array}{ccc} - 9\left( \frac{2}{7}\right) &{} 9&{} 0 \\ 1 &{} -1 &{} 1 \\ 0 &{} -14.286 &{} 0\\ \end{array} } \right] , \quad {{\mathbf {A}}_2}= \left[ {\begin{array}{ccc} -9.1 \left( \frac{2}{7}\right) &{} 9.2&{} 0 \\ 1.1 &{} -0.9 &{} 1.1 \\ 0 &{} -14.286 &{} 0.1\\ \end{array} } \right] , \\&{{\mathbf {B}}_1}= \left[ {\begin{array}{ccc} 1 &{} 0&{} 0 \\ 0 &{} 1 &{} 0\\ 0 &{} 0 &{} 1\\ \end{array} } \right] , \quad {{\mathbf {B}}_2}= \left[ {\begin{array}{ccc} 0.9 &{} 0&{} 0\\ 0 &{} 0.9 &{} 0\\ 0 &{} 0 &{} 0.9\\ \end{array} } \right] , \end{aligned}$$

and the nonlinear function vector is

$$\begin{aligned} {\mathbf{f}}({{{{\mathbf {x}}}}_{b}}(t))= \begin{bmatrix} -a(h({{\mathbf {x}}}_{b1}(t)))\\ 0\\ 0 \end{bmatrix}, \end{aligned}$$

where \(h({{\mathbf {x}}}_{b1}(t))=-0.5(m_1-m_0) (|{{\mathbf {x}}}_{b1}(t)+ 1|-|{{\mathbf {x}}}_{b1}(t)-1|)\), \(a=9\), \(m_0=\frac{-1}{7}\), and \(m_1=\frac{2}{7}\).

The following two matrices satisfy the sector-bounded condition in (7):

$$\begin{aligned} {\mathbf{U}}= \left[ {\begin{array}{ccc} {-9\left( \frac{3}{7}\right) } &{} 0&{} 0 \\ 0 &{} 0 &{} 0\\ 0 &{} 0&{} 0\\ \end{array} } \right] , {\mathbf{V}}= \left[ {\begin{array}{ccc} 9\left( \frac{3}{7}\right) &{} 0&{} 0 \\ 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 0\\ \end{array} } \right] , \end{aligned}$$
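To see that \({\mathbf {U}}\) and \({\mathbf {V}}\) indeed bound the Chua nonlinearity, one can sample the sector inequality \([{\mathbf {f}}({\mathbf {x}})-{\mathbf {f}}({\mathbf {y}})-{\mathbf {U}}({\mathbf {x}}-{\mathbf {y}})]^T[{\mathbf {f}}({\mathbf {x}})-{\mathbf {f}}({\mathbf {y}})-{\mathbf {V}}({\mathbf {x}}-{\mathbf {y}})] \leqslant 0\) numerically; a minimal sketch, assuming this standard form of condition (7):

```python
import numpy as np

a, m0, m1 = 9.0, -1.0 / 7.0, 2.0 / 7.0

def h(x1):
    # Piecewise-linear Chua characteristic: -0.5 (m1 - m0)(|x1+1| - |x1-1|)
    return -0.5 * (m1 - m0) * (abs(x1 + 1.0) - abs(x1 - 1.0))

def f(x):
    # Nonlinear node vector f(x) = [-a h(x1), 0, 0]^T
    return np.array([-a * h(x[0]), 0.0, 0.0])

U = np.diag([-9.0 * 3.0 / 7.0, 0.0, 0.0])
V = np.diag([ 9.0 * 3.0 / 7.0, 0.0, 0.0])

# Sample the sector-bounded inequality over random point pairs
rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.uniform(-3, 3, 3), rng.uniform(-3, 3, 3)
    d, e = f(x) - f(y), x - y
    assert (d - U @ e) @ (d - V @ e) <= 1e-9
```

The slope of \(-a\,h(\cdot)\) lies in \([0, 27/7]\), so the product above is nonpositive for every pair of points.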

The chaotic behavior of the isolated node (12) is shown in Fig. 1.

Fig. 1

The chaotic behavior (double scroll attractor) of the isolated node (12)

Moreover, the outer and inner coupling matrices are given, respectively, as

$$\begin{aligned} {{{\mathbf {G}}}_1}= & {} \left[ {\begin{array}{ccc} -1 &{} 0&{} 1 \\ 0 &{} -1 &{} 1\\ 1 &{} 1 &{} -2\\ \end{array} } \right] , \quad {{{\mathbf {G}}}_2}= \left[ {\begin{array}{ccc} -1 &{} 1 &{}0 \\ 1 &{} -2 &{} 1\\ 0 &{} 1 &{} -1\\ \end{array} } \right] , \\ {{{\varvec{\Gamma }}}_1}= & {} \left[ {\begin{array}{ccc} 1 &{} 0&{} 0 \\ 0 &{} 1 &{} 0\\ 0&{} 0 &{} 1\\ \end{array} } \right] , \quad {{{\varvec{\Gamma }}}_2}= \left[ {\begin{array}{ccc} 0.9 &{} 0 &{} 0 \\ 0 &{} 0.9 &{} 0\\ 0 &{} 0 &{} 0.9\\ \end{array} } \right] . \end{aligned}$$
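The outer coupling matrices above are diffusive: each row sums to zero and the off-diagonal entries are nonnegative, as is standard for CDN coupling. This can be verified directly (the topologies in this example also happen to be symmetric):

```python
import numpy as np

G1 = np.array([[-1.0,  0.0,  1.0],
               [ 0.0, -1.0,  1.0],
               [ 1.0,  1.0, -2.0]])
G2 = np.array([[-1.0,  1.0,  0.0],
               [ 1.0, -2.0,  1.0],
               [ 0.0,  1.0, -1.0]])

for G in (G1, G2):
    # Diffusive (Laplacian-like) coupling: zero row sums, nonnegative off-diagonals
    assert np.allclose(G.sum(axis=1), 0.0)
    assert (G - np.diag(np.diag(G)) >= 0).all()
    # Undirected topology in this example
    assert np.allclose(G, G.T)
```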

We consider \({\mathcal {V}} = \{ 1,2,3\}\) for the high-level Markov process \(\{{\sigma _t},t \ge 0\}\). The piecewise-homogeneous transition rate matrices of the low-level Markovian signal \(\{r_t,t \ge 0\}\) for each mode in \({\mathcal {V}}\) are given as

$$\begin{aligned} {{\varPi }^{1}}= \left[ {\begin{array}{cc} -4.5 &{} 4.5 \\ 3.75 &{} -3.75 \\ \end{array} } \right] , {{\varPi }^{2}}= \left[ {\begin{array}{cc} -2.75 &{} 2.75 \\ 3 &{} -3 \\ \end{array} } \right] , {{\varPi }^{3}}= \left[ {\begin{array}{cc} -4 &{} 4 \\ 1.5 &{} -1.5 \\ \end{array} } \right] . \end{aligned}$$

Furthermore, the homogeneous transition rate matrix that governs the transitions among the three transition rate matrices above is as follows:

$$\begin{aligned} {{\varLambda } }= \left[ {\begin{array}{ccc} -4.5 &{} 2 &{} 2.5 \\ 4.5 &{} -7.5 &{} 3 \\ 1 &{} 2 &{} -3 \\ \end{array} } \right] . \end{aligned}$$
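Both the low-level matrices \({{\varPi }^{m}}\) and the high-level matrix \({{\varLambda }}\) must be valid generator matrices, i.e., have nonnegative off-diagonal entries and zero row sums (\(\pi _{ii} = -\sum _{j \ne i}\pi _{ij}\)). A quick numerical check of the matrices above:

```python
import numpy as np

Pi = [np.array([[-4.5,  4.5 ], [3.75, -3.75]]),
      np.array([[-2.75, 2.75], [3.0,  -3.0 ]]),
      np.array([[-4.0,  4.0 ], [1.5,  -1.5 ]])]
Lam = np.array([[-4.5,  2.0,  2.5],
                [ 4.5, -7.5,  3.0],
                [ 1.0,  2.0, -3.0]])

for Q in Pi + [Lam]:
    # Generator properties: zero row sums and nonnegative off-diagonals
    assert np.allclose(Q.sum(axis=1), 0.0)
    assert (Q - np.diag(np.diag(Q)) >= 0).all()
```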

The mode-dependent coupling delays are assumed to lie in the range \({{\tau }^{r(t),\sigma (t)}(t)}\in {[0.35, 0.95]}\); that is, the lower and upper bounds of the mode-dependent time-varying coupling delays are 0.35 and 0.95, respectively. The time-varying delays \({{\tau }^{r(t),\sigma (t)}(t)}\) for the different mode pairs of the two Markov processes \(\{r_t,t \ge 0\}\) and \(\{\sigma _t,t \ge 0\}\) are chosen as:

Fig. 2

Switching signal of Markov process \({r_t}\) with two modes

Fig. 3

Switching signal of Markov process \({\sigma _t}\) with three modes

\({\tau ^{(1,1)}}(t)=0.9+0.05 \hbox {sin}(2t)\), \({\tau ^{(1,2)}}(t)=0.6+0.05 \hbox {sin}(10t)\), \({\tau ^{(1,3)}}(t)=0.5+0.05 \hbox {sin}(10t)\), \({\tau ^{(2,1)}}(t)=0.4+0.05 \hbox {sin}(10t)\), \({\tau ^{(2,2)}}(t)=0.9+0.05 \hbox {sin}(4t)\), \({\tau ^{(2,3)}}(t)=0.7+0.05 \hbox {sin}(3t)\). Hence, the maximum derivative of the mode-dependent time delays over all modes is \({\mu }=0.5\).
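As a quick sanity check, the stated bounds \([{\underline{\tau }},{\bar{\tau }}] = [0.35, 0.95]\) and \(\mu =0.5\) can be verified by sampling the six delay functions (the dictionary layout below is purely illustrative):

```python
import numpy as np

# Each delay has the form tau^{(i,m)}(t) = c + 0.05 sin(w t); (c, w) from the example
delays = {(1, 1): (0.9, 2), (1, 2): (0.6, 10), (1, 3): (0.5, 10),
          (2, 1): (0.4, 10), (2, 2): (0.9, 4), (2, 3): (0.7, 3)}

t = np.linspace(0.0, 20.0, 200001)
lo = min((c + 0.05 * np.sin(w * t)).min() for c, w in delays.values())
hi = max((c + 0.05 * np.sin(w * t)).max() for c, w in delays.values())
mu = max(0.05 * w for _, w in delays.values())  # max |d tau/dt| = 0.05 w

assert 0.35 - 1e-6 <= lo and hi <= 0.95 + 1e-6
assert abs(mu - 0.5) < 1e-12
```

The extreme values come from \({\tau ^{(2,1)}}\) (lower bound, 0.35) and \({\tau ^{(1,1)}}\)/\({\tau ^{(2,2)}}\) (upper bound, 0.95), while \(\mu \) is set by the fastest oscillation \(w=10\).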

The initial state values for the isolated node are assumed as \({\mathbf {s}}(0)=[-0.2,-0.3,0.2]^T\). The nodes of the network have initial values as \({\mathbf {x}}_1(0)=[-3,6,0]^T\), \({\mathbf {x}}_2(0)=[4,2,0]^T\), and \({\mathbf {x}}_3(0)=[-7,4,0]^T\).

Fig. 4

Synchronization error curves for the MJCDN with CKTRs without control inputs

Both switching signals are shown in Figs. 2 and 3 for the piecewise-homogeneous process \(\{r_t,t \ge 0\}\) and the homogeneous process \(\{ {\sigma _t},t \ge 0\}\), respectively. Figure 4 shows the synchronization error vectors without applying feedback control inputs to the MJCDN. As shown in this figure, the coupled nodes \({\mathbf{x}}_b(t), b=1,2,3,\) in the MJCDN cannot synchronize to the isolated node \({\mathbf{s}}(t)\), and the error vectors cannot converge to zero without control inputs. Therefore, using the MATLAB LMI Control Toolbox to solve the LMIs in Theorem 2, with \(\alpha =0.2\), the following control gain matrices are obtained for every mode \(i \in {\mathcal {W}}\), \(m \in {\mathcal {V}}\):

$$\begin{aligned} {{{{\mathbf {K}}}_1(1,1)}}= & {} \left[ {\begin{array}{ccc} -3.6790 &{} -3.9227 &{} 3.1832\\ -1.5643 &{} -5.1979 &{} 0.4403 \\ 0.3433 &{} 4.7444 &{} -10.8701\\ \end{array} } \right] , \\ {{{{\mathbf {K}}}_2(1,1)} }= & {} \left[ {\begin{array}{ccc} -4.4444 &{}-4.1096 &{} 2.4536\\ -1.6375 &{} -5.7992 &{} 0.9429 \\ 0.4535 &{} 5.1842 &{} -10.0863\\ \end{array} } \right] , \\ \small {{{\mathbf {K}}}_3(1,1) }= & {} \left[ {\begin{array}{ccc} -5.9065 &{}-4.3833 &{} 4.3432\\ -1.9136 &{} -6.3552 &{} 1.0762\\ 2.4059 &{} 5.7676 &{} -13.9862\\ \end{array} } \right] , \\ {{{\mathbf {K}}}_1(1,2) }= & {} \left[ {\begin{array}{ccc} -3.6294 &{}-3.6364 &{} 2.8790\\ -1.5505 &{} -5.2094 &{} 0.4226\\ 0.2217 &{} 4.2123 &{} -10.4858 \\ \end{array} } \right] , \\ {{{\mathbf {K}}}_2(1,2) }= & {} \left[ {\begin{array}{ccc} -4.3385 &{}-3.8600 &{} 2.2542 \\ -1.5978 &{} -5.7715 &{} 0.8808 \\ 0.2863 &{} 4.7596 &{} -9.8460 \\ \end{array} } \right] , \\ {{{\mathbf {K}}}_3(1,2) }= & {} \left[ {\begin{array}{ccc} -5.8116 &{}-4.1318 &{}3.8807\\ -2.0145 &{}-6.5288 &{} 1.2312\\ 2.1206 &{} 5.4274 &{}-13.4479\\ \end{array} } \right] , \\ {{{\mathbf {K}}}_1(1,3) }= & {} \left[ {\begin{array}{ccc} -3.6781 &{}-3.9075 &{} 3.1227 \\ -1.5694 &{} -5.2085 &{} 0.4582 \\ 0.3390 &{} 4.7200 &{}-10.7723\\ \end{array} } \right] , \\ {{{\mathbf {K}}}_2(1,3)}= & {} \left[ {\begin{array}{ccc} -4.3303 &{}-3.9027 &{} 2.3613\\ -1.6060 &{} -5.7642 &{} 0.8007 \\ 0.2747 &{} 4.8012 &{}-10.0067\\ \end{array} } \right] , \\ {{{\mathbf {K}}}_3(1,3) }= & {} \left[ {\begin{array}{ccc} -6.0284 &{} -4.4509 &{} 4.3318\\ -1.9728 &{} -6.4720 &{} 1.2269\\ 2.4871 &{} 5.9449 &{} -14.0133\\ \end{array} } \right] , \end{aligned}$$
$$\begin{aligned} {{{\mathbf {K}}}_1(2,1)}= & {} \left[ {\begin{array}{ccc} -3.5609 &{}-4.1214 &{} 3.3153 \\ -1.6595 &{} -5.2557 &{} 0.3649\\ 0.2357 &{} 4.7800 &{} -11.0066\\ \end{array} } \right] , \\ {{{\mathbf {K}}}_2(2,1) }= & {} \left[ {\begin{array}{ccc} -5.7939 &{}-4.5212 &{} 4.5676\\ -2.0106 &{} -6.4195 &{} 0.9747\\ 2.3922 &{} 5.6920 &{} -14.2497\\ \end{array} } \right] , \\ {{{\mathbf {K}}}_3(2,1)}= & {} \left[ {\begin{array}{ccc} -4.4795 &{} -4.4162 &{} 2.6136\\ -1.7253 &{} -5.8853 &{} 0.9266\\ 0.5053 &{} 5.3763 &{} -10.2824\\ \end{array} } \right] , \\ {{{\mathbf {K}}}_1(2,2) }= & {} \left[ {\begin{array}{ccc} -3.5036 &{} -3.6947 &{} 2.8562\\ -1.6431 &{} -5.2867 &{} 0.3441\\ 0.0799 &{} 3.9827 &{} -10.4417\\ \end{array} } \right] , \\ {{{\mathbf {K}}}_2(2,2)}= & {} \left[ {\begin{array}{ccc} -5.6711 &{}-4.1987 &{} 4.0765 \\ -2.1203 &{} -6.6270 &{} 1.0820\\ 2.0635 &{} 5.2456 &{} -13.7828 \\ \end{array} } \right] , \\ {{{\mathbf {K}}}_3(2,2) }= & {} \left[ {\begin{array}{ccc} -4.3433 &{} -3.9735 &{} 2.1847\\ -1.6963 &{} -5.9008 &{} 0.8646\\ 0.2176 &{} 4.6226 &{} -9.8033 \\ \end{array} } \right] , \\ {{{\mathbf {K}}}_1(2,3)}= & {} \left[ {\begin{array}{ccc} -3.6476 &{}-4.4165 &{} 3.6559\\ -1.6632 &{} -5.2294 &{} 0.3806 \\ 0.4185 &{} 5.3213 &{} -11.4421\\ \end{array} } \right] , \\ {{{\mathbf {K}}}_2(2,3) }= & {} \left[ {\begin{array}{ccc} -5.9849 &{} -4.5821 &{} 4.6465 \\ -2.0104 &{} -6.5054 &{} 1.1002\\ 2.5946 &{} 5.7701 &{} -14.2461\\ \end{array} } \right] , \\ {{{\mathbf {K}}}_3(2,3)}= & {} \left[ {\begin{array}{ccc} -4.5147 &{}-4.6625 &{} 3.0244\\ -1.6842 &{}-5.7856 &{} 0.8118\\ 0.6928 &{} 5.7594 &{} -10.7844\\ \end{array} } \right] . \end{aligned}$$

By applying the control law (14), Fig. 5 shows that the synchronization errors \({\mathbf{e}}_{b}(t)\), \(b=1,2,3,\) between the state vectors of the coupled nodes \({\mathbf{x}}_b(t)\) and the isolated node \({\mathbf{s}}(t)\) converge to zero. This means the coupled nodes’ trajectories \({\mathbf{x}}_b(t),b=1,2,3,\) in the MJCDN synchronize to the isolated node’s trajectory \({\mathbf{s}}(t)\). Synchronization between the state vectors of the isolated node \({\mathbf{s}}(t)\) and the coupled nodes \({\mathbf{x}}_b(t)\), \(b=1,2,3,\) is shown in Fig. 6. The corresponding control inputs are shown in Fig. 7.

Fig. 5

Synchronization error vectors for the MJCDN with CKTRs by applying control inputs

The maximum upper bound \({\bar{\tau }}\) of the mode-dependent delays with \(\mu =0.8\) for different decay rate values \(\alpha \) is presented in Table 1.

Fig. 6

Synchronization curves for the states of the isolated node \({\mathbf{s}}(t)\) and nodes \({\mathbf{x}}_b(t), b=1,2,3,\) with CKTRs

Fig. 7

Synchronizing control effort for MJCDN with CKTRs

4.1 Case 2

The transition rate matrices \({{\varPi }^{m}},m \in {\mathcal {V}}\) for the Markov process \(\{{r_t},t \ge 0\}\) are assumed to be partially unknown as follows:

$$\begin{aligned} {{\varPi }^{1}}= \left[ {\begin{array}{cc} -4.5 &{} ? \\ 3.75 &{} ? \\ \end{array} } \right] , {{\varPi }^{2}}= \left[ {\begin{array}{cc} -2.75 &{} ? \\ 3 &{} ? \\ \end{array} } \right] , {{\varPi }^{3}}= \left[ {\begin{array}{cc} -4 &{} ? \\ ?&{} -1.5 \\ \end{array} } \right] . \end{aligned}$$
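For implementation, the known and unknown index sets \({{\mathcal {W}}}_k^i\) and \({{\mathcal {W}}}_{uk}^i\) used in Theorem 4 can be read off from the “?” entries row by row; a small sketch, with `None` marking an unknown rate (this representation is an assumption for illustration):

```python
# Pi^3 above: row 1 = [-4, ?], row 2 = [?, -1.5]
Pi3 = [[-4.0, None], [None, -1.5]]

def split_known(row):
    # Returns (known indices W_k^i, unknown indices W_uk^i), 1-based
    known = [j + 1 for j, p in enumerate(row) if p is not None]
    unknown = [j + 1 for j, p in enumerate(row) if p is None]
    return known, unknown

assert split_known(Pi3[0]) == ([1], [2])
assert split_known(Pi3[1]) == ([2], [1])
```

The same routine applies unchanged to the rows of \({{\varLambda }}\) for the sets \({{\mathcal {V}}}_k^m\) and \({{\mathcal {V}}}_{uk}^m\).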

Furthermore, the transition rate matrix \({{\varLambda } }\) for the Markov process \(\{{\sigma _t},t \ge 0\}\), which governs the transitions among the three transition rate matrices \({{\varPi }^{m}}\), is assumed to be partially unknown as follows:

$$\begin{aligned} {{\varLambda } }= \left[ {\begin{array}{ccc} -4.5 &{} ? &{} ? \\ 4.5 &{} ? &{} ? \\ ? &{} ? &{} -3 \\ \end{array} } \right] . \end{aligned}$$

The mode-dependent coupling delays for each mode, \({\mu }\), and \(\alpha \) are the same as in Case 1. By applying Theorem 4, the controller gain matrices are obtained for every mode \(i \in {\mathcal {W}}\), \(m \in {\mathcal {V}}\) as follows:

Table 1 Upper bound of \({\bar{\tau }}\) for different decay rates \(\alpha \) for the MJCDN with CKTRs
$$\begin{aligned} {{{\mathbf {K}}}_1(1,1) }= & {} \left[ {\begin{array}{ccc} -3.6351 &{}-4.0588 &{} 5.1986 \\ -1.4400 &{} -4.8888 &{} 0.0167\\ 0.3935 &{} 4.8255 &{} -14.5795\\ \end{array} } \right] , \\ {{{\mathbf {K}}}_2(1,1)}= & {} \left[ {\begin{array}{ccc} -4.0682 &{}-4.1440 &{} 4.1266\\ -1.4627 &{}-5.2585 &{} 0.2021\\ 0.4370 &{} 5.0715 &{} -12.8495\\ \end{array} } \right] , \\ {{{\mathbf {K}}}_3(1,1) }= & {} \left[ {\begin{array}{ccc} -6.6923 &{}-4.3425 &{} 6.2891\\ -1.6801 &{} -6.2606 &{} 0.5781\\ 3.6292 &{} 5.5371 &{} -17.7698\\ \end{array} } \right] , \\ {{{\mathbf {K}}}_1(1,2) }= & {} \left[ {\begin{array}{ccc} -3.2823 &{} -3.7438 &{} 3.8714\\ -1.4778 &{}-4.8870 &{} 0.1354\\ -0.0548 &{} 4.3216 &{} -12.1789\\ \end{array} } \right] , \\ {{{\mathbf {K}}}_2(1,2) }= & {} \left[ {\begin{array}{ccc} -3.6625 &{} -3.9703 &{} 3.4596\\ -1.4758 &{} -5.1698 &{} 0.2770 \\ 0.0104 &{} 4.8290 &{}-11.5950 \\ \end{array} } \right] , \\ {{{\mathbf {K}}}_3(1,2) }= & {} \left[ {\begin{array}{ccc} -6.2736 &{}-3.7785 &{} 3.7739\\ -2.0228 &{} -6.7492 &{} 1.4102\\ 2.3884 &{} 4.8414 &{}-13.7926\\ \end{array} } \right] , \\ {{{\mathbf {K}}}_1(1,3) }= & {} \left[ {\begin{array}{ccc} -3.4923 &{}-3.7945 &{} 4.4204\\ -1.5031 &{}-4.8542 &{} 0.0583\\ 0.1966 &{} 4.3313 &{} -13.2768 \\ \end{array} } \right] , \\ {{{\mathbf {K}}}_2(1,3) }= & {} \left[ {\begin{array}{ccc} -3.8030 &{}-3.9015 &{} 3.8141 \\ -1.5125 &{} -5.1606 &{} 0.1523 \\ 0.0944 &{} 4.6029 &{}-12.4175\\ \end{array} } \right] , \\ {{{\mathbf {K}}}_3(1,3) }= & {} \left[ {\begin{array}{ccc} -6.3803 &{}-3.9322 &{} 5.1425\\ -1.8046 &{}-6.3753 &{} 0.6973\\ 2.9944 &{} 4.7975 &{} -15.9958\\ \end{array} } \right] , \\ {{{\mathbf {K}}}_1(2,1) }= & {} \left[ {\begin{array}{ccc} -3.0461 &{}-4.0107 &{} 4.5650\\ -1.5764 &{}-4.7861 &{} -0.0653 \\ -0.2424 &{} 4.3861 &{} -13.1994\\ \end{array} } \right] , \\ {{{\mathbf {K}}}_2(2,1) }= & {} \left[ {\begin{array}{ccc} -5.7884 &{} -4.0359 &{} 5.5255 \\ -1.7810 &{}-6.0015 &{} 0.4448 \\ 2.8500 &{} 
4.6411 &{}-15.8984 \\ \end{array} } \right] , \\ {{{\mathbf {K}}}_3(2,1) }= & {} \left[ {\begin{array}{ccc} -3.5895 &{} -4.3816 &{} 3.8369\\ -1.6501 &{}-5.1573 &{} 0.1582\\ -0.0759 &{} 5.1972 &{} -12.1997\\ \end{array} } \right] , \\ {{{\mathbf {K}}}_1(2,2) }= & {} \left[ {\begin{array}{ccc} -2.9748 &{}-3.7490 &{} 3.2561\\ -1.5993 &{}-4.8534 &{} 0.1454 \\ -0.2584 &{} 4.0070 &{} -10.8479\\ \end{array} } \right] , \\ {{{\mathbf {K}}}_2(2,2) }= & {} \left[ {\begin{array}{ccc} -5.2685 &{}-4.0308 &{} 4.3107 \\ -2.0300 &{} -6.0090 &{} 0.9332\\ 2.0830 &{} 4.9250 &{} -13.8611 \\ \end{array} } \right] , \\ {{{\mathbf {K}}}_3(2,2) }= & {} \left[ {\begin{array}{ccc} -3.5425 &{} -3.9906 &{} 2.5397\\ -1.6533 &{} -5.2553 &{} 0.4935\\ -0.1209 &{} 4.6142 &{} -9.8890\\ \end{array} } \right] , \\ {{{\mathbf {K}}}_1(2,3) }= & {} \left[ {\begin{array}{ccc} 0.5418 &{}-5.3881 &{} 1.0838 \\ -2.5936 &{} -1.1275 &{} 2.2269\\ -0.3304 &{} 7.2218 &{} -2.9148\\ \end{array} } \right] , \\ {{{\mathbf {K}}}_2(2,3) }= & {} \left[ {\begin{array}{ccc} -0.1870 &{}-4.6602 &{}0.9004 \\ -2.7801 &{} -1.8520 &{} 2.6192 \\ -0.0889 &{} 5.9990 &{} -3.4278\\ \end{array} } \right] , \\ {{{\mathbf {K}}}_3(2,3) }= & {} \left[ {\begin{array}{ccc} 0.4431 &{}-5.2470 &{}0.8518\\ -2.8843 &{}-1.1466 &{}3.0154\\ -0.1051 &{} 6.9320 &{} -2.6494\\ \end{array} } \right] . \end{aligned}$$
Fig. 8

Synchronization error curves for the MJCDN with PUTRs without control inputs

Fig. 9

Synchronizing error vectors for MJCDN with PUTRs by applying control inputs

Fig. 10

Synchronization curves for the states of the isolated node \({\mathbf{s}}(t)\) and node \({\mathbf{x}}_b(t), b=1,2,3,\) with PUTRs

Fig. 11

Synchronizing control effort for MJCDN with PUTRs

The initial state values for the isolated node and the coupled nodes are assumed to be the same as in Case 1. Without applying feedback control inputs to the coupled nodes, the error vectors \({\mathbf{e}}_b(t), b=1,2,3,\) are shown in Fig. 8, where the trajectories of the coupled nodes in the MJCDN do not converge to the trajectory of the isolated node. By applying the control inputs (14) to each coupled node in the MJCDN with PUTRs, the synchronization errors \({\mathbf{e}}_b(t), b=1,2,3,\) are shown in Fig. 9. From Fig. 10, we can observe that the state trajectories of the coupled nodes converge to the state trajectory of the isolated node when the gain matrices are designed properly and applied to each coupled node. The control inputs applied to each coupled node are shown in Fig. 11.

5 Conclusion

The present study has investigated exponential stochastic synchronization for the piecewise-homogeneous MJCDN (1). In the proposed model, all coefficient matrices and the time-varying coupling delay are mode-dependent. The piecewise-homogeneous Markovian structure can capture abrupt changes and thereby reduces the conservatism of traditional Markovian jump systems. Assuming mode-dependent coefficient matrices and time delay inevitably increases the degrees of freedom of the synchronization problem, but yields a more realistic model of practical applications. Delay-dependent sufficient criteria were first derived to guarantee the synchronization of the piecewise-homogeneous MJCDN. Synchronization mode-dependent controllers were then designed in terms of LMIs based on the derived results. A numerical example with two cases has been given to demonstrate the effectiveness and usefulness of the proposed method. Future research will consider the Wirtinger-based integral inequality approach and the improved reciprocally convex technique, as well as semi-Markovian jump systems.