1 Introduction

Over the past few decades, rapid progress has been made in the study of neural networks, which have wide applications in various areas such as reconstruction of moving images, signal processing, pattern recognition and optimization problems. Since time delays cannot be avoided and often cause instability of neural networks, much of the recent effort has focused on deriving various types of stability conditions for delayed neural networks. Thus, delayed neural networks have been widely studied by many authors and a variety of results have been derived in [1–11]. Prior to such analysis, a fundamental task is to determine the equilibrium point and to analyse the stability of the equilibrium point of the constructed neural networks. Recently, many efforts have been devoted to studying the stability of time-delay systems [1–60].

However, most existing results on Markov jump systems assume that the transition probabilities are completely known. This idealized assumption inevitably limits the applicability of the classical Markovian jump process theory. In fact, obtaining complete knowledge of the transition probabilities is difficult and the associated cost is likely to be high. It is therefore significant and necessary to study more general Markov jump processes with incomplete transition descriptions [20–30]. Recently, the fault detection problem for a class of Markov jump linear systems with partially known transition probabilities was investigated in [23–28]. The proposed systems are more general, since they relax the traditional assumption in Markov jump systems that all the transition probabilities must be completely known.

Additionally, passivity and passification problems have received considerable attention in recent years. Passivity theory was first proposed in circuit analysis [31] and was then applied to various other systems, including high-order nonlinear systems and electrical networks [32]. It can be widely used in stability analysis, observer design, signal processing, fuzzy control, sliding mode control, networked control, and so on. Over the past few years, increasing attention has been devoted to this topic and a number of important results have been reported (see [33–40]). Recently, the passivity of linear systems with delays and the passivity of delayed neural networks have been studied by using suitable Lyapunov–Krasovskii functionals [36, 40].

Recently, various control approaches have been adopted to stabilize unstable systems. Approaches such as impulsive control, \(H_\infty\) control, finite-time control, intermittent control and sampled-data control have been adopted by many authors [43–48]. Sampled-data control drives a continuous-time system using data sampled at discrete instants by computers, sensors, filters and network communication. Hence, it is preferable to use digital controllers rather than analogue circuits; this markedly reduces the amount of transmitted data and improves control efficiency. Consequently, sampled-data control has shown increasing advantages over other control approaches [49–51]. The authors of [52, 53] considered non-uniform sampled-data control for stochastic passivity and passification of delayed Markov jump genetic regulatory networks with the help of the Lyapunov technique and the LMI framework. Although the importance of sampled-data control and the passivity property has been widely recognized, to date, the sampled-data passivity and passification problems for delayed neural networks with Markov jump parameters have not yet been addressed and remain open.

Motivated by the above discussion, this paper investigates the design of non-uniform sampled-data control for Markov jump delayed neural networks in the framework of passivity theory. In contrast to existing results, a new Markov jump delayed neural network model is set up to describe the non-uniform sampling by using the input delay approach. Some new passivity conditions are presented in the form of linear matrix inequalities (LMIs) by using the Lyapunov–Krasovskii functional method. Based on the obtained conditions, a mode-dependent passification controller design procedure is further given to guarantee the required performance of the resulting closed-loop Markov jump delayed neural networks.

The contributions of this paper are summarized as follows:

  1.

    Motivated by the works [32, 34, 35], a new framework of passivity and passification for delayed neural networks with Markov jump parameters is established for the first time in our work.

  2.

    New techniques are exploited to achieve less conservatism than Jensen’s inequality and to take fully into account the relationship between the terms in the Leibniz–Newton formula within the framework of linear matrix inequalities (LMIs).

  3.

    Finally, numerical examples are given to show the effectiveness and the reduced conservatism of the obtained passivity criteria.

Notations: Throughout the paper, \({\mathscr {R}}^{n}\) denotes the n dimensional Euclidean space, and \({\mathscr {R}}^{m\times n}\) is the set of all \(m\times n\) real matrices. The notation * represents the entries implied by symmetry. For symmetric matrices \({\mathscr {X}}\) and \({\mathscr {Y}}\), the notation \({\mathscr {X}}\ge {\mathscr {Y}}\) means that \({\mathscr {X}}-{\mathscr {Y}}\) is positive semi-definite; \({\mathscr {M}}^T\) denotes the transpose of the matrix \({\mathscr {M}}\); I denotes the identity matrix with appropriate dimensions. \({\mathcal {E}}\) denotes the expectation operator with respect to some probability measure \(\mathcal {P}\). Let \((\mathcal {Z}, {\mathcal {F}},{\mathcal {P}})\) be a complete probability space with an increasing family \(({\mathcal {F}}_t)_{t>0}\) of \(\sigma\)-algebras \(({\mathcal {F}}_t)_{t>0} \subset {\mathcal {F}}\), where \({\mathcal {Z}}\) is the sample space, \({\mathcal {F}}\) is the \(\sigma\)-algebra of subsets of the sample space and \({\mathcal {P}}\) is the probability measure on \({\mathcal {F}}\). Let \(||f||_2=\,\sqrt{\int _0^\infty ||f||^2{\text {d}}t}\) for \(f(t) \in L_2[0,\infty )\), where \(\Vert f\Vert\) refers to the Euclidean norm of the function f(t) at time t and \(L_2[0,\infty )\) is the space of square integrable vector functions on \([0,\infty )\). Let \(\{\varrho _t, t\ge 0\}\) be a right-continuous Markov chain on the probability space \(({\mathcal {Z}}, {\mathcal {F}},{\mathcal {P}})\) taking values in a finite state space \({\mathcal {N}}=\,\{1,2,\ldots ,m\}\) with generator \(\Pi =\,\{\pi _{pq}\}\) given by

$$\begin{aligned} P\{\varrho _{t+\Delta }=\,q|\varrho _t=\,p\}=\,\left\{ \begin{array}{ll} \pi _{pq}\Delta +o(\Delta ), \quad p\ne q \\ 1+\pi _{pp}\Delta +o(\Delta ),\quad p=\,q. \end{array} \right. \end{aligned}$$
(1)

Here \(\Delta >0,\ \lim _{\Delta \rightarrow 0^{+}}\frac{o(\Delta )}{\Delta }=\,0\), and \(\pi _{pq}\ge 0\) is the transition rate from p to q if \(q\ne p\), while \(\pi _{pp}=\,-\sum _{q=\,1,q\ne p}^m\pi _{pq}\) for each mode p. Note that if \(\pi _{pp}=\,0\) for some \(p \in {\mathcal {N}}\), then the pth mode is called a “terminal mode” [54]. Furthermore, the transition rate matrix \(\Pi\) is denoted as follows:

$$\begin{aligned} \Pi =\,\left[ \begin{array}{cccccc} \pi _{11}&{} \pi _{12} &{} \ldots &{} \pi _{1m} \\ \pi _{21}&{} \pi _{22} &{} \ldots &{} \pi _{2m} \\ \vdots &{} \vdots &{} {\ddots} &{} \vdots \\ \pi _{m1}&{} \pi _{m2} &{} \ldots &{} \pi _{mm} \end{array}\right] . \end{aligned}$$

Furthermore, the transition rates or probabilities of the jumping process are assumed to be completely known.
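To make the role of the generator \(\Pi\) concrete, the following short Python sketch (purely illustrative, not part of the original derivation) simulates a right-continuous Markov chain whose small-interval transition probabilities follow (1). The two-mode rate matrix is the one used later in Example 4.1; the step size and horizon are arbitrary choices.

```python
import numpy as np

# Illustrative sketch: simulate a right-continuous Markov chain {rho_t} on
# {1,...,m} with a given transition-rate (generator) matrix Pi, cf. (1).
Pi = np.array([[-0.8,  0.8],
               [ 0.4, -0.4]])          # rate matrix of Example 4.1

def simulate_modes(Pi, t_end=10.0, dt=1e-3, mode0=0, seed=None):
    """Euler-type simulation: P{jump p->q in [t, t+dt)} ~ pi_pq * dt."""
    rng = np.random.default_rng(seed)
    m = Pi.shape[0]
    times = np.arange(0.0, t_end, dt)
    modes = np.empty(len(times), dtype=int)
    mode = mode0
    for k, _ in enumerate(times):
        modes[k] = mode
        # one-step transition probabilities: row of I + Pi*dt, as in (1)
        probs = np.eye(m)[mode] + Pi[mode] * dt
        mode = rng.choice(m, p=probs)
    return times, modes

t, modes = simulate_modes(Pi, t_end=20.0)
print("fraction of time spent in mode 1:", np.mean(modes == 0))
```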

2 Problem statement and preliminaries

Consider the Markov jump delayed neural networks defined on a complete probability space \(({\mathcal {Z}}, {\mathcal {F}}, {\mathcal {P}})\), which can be described by the following equations:

$$\begin{aligned} {\dot{\theta }}(t)=\,&-{\mathscr {C}}(\varrho _t)\theta (t)+{\mathscr {W}}_1(\varrho _t)g(\theta (t))+{\mathscr {W}}_2(\varrho _t) g(\theta (t-\tau (t)))+{\mathscr {H}}(\varrho _t)w(t) +u(t),\nonumber \\ {\theta }(t)=\,&\phi (t),\ t\in [-\tau ,0] \end{aligned}$$
(2)

where \(\theta (t)=\,[\theta _1(t),\theta _2(t),\ldots ,\theta _n(t)]^T\in R^n\) is the neuron state vector. The nonlinear function \(g(\theta (t))=\,[g_1(\theta _1(t)),g_2(\theta _2(t)),\ldots ,g_n(\theta _n(t))]^T\in R^n\) denotes the neuron activation function, \(\tau (t)\) is the time-varying delay, and \(\phi (t)\in R^n\) is a vector-valued initial condition function. The control inputs \(u(t)=\,[u_1(t),u_2(t),\ldots ,u_n(t)]^T\in R^n\) satisfy \(u_k(t)\ (k=\,1,2,\ldots ,n)\in L_2[0,\infty )\). \(w(t)=\,[w_1(t),w_2(t),\ldots ,w_n(t)]^T\in R^n\) denotes the external disturbance satisfying \(w_k(t)\ (k=\,1,2,\ldots ,n)\in L_2[0,\infty )\). \({\mathscr {C}}(\varrho _t)=\,{\text {diag}}\{c_1,\ldots ,c_n\}>0,\ {\mathscr {W}}_1(\varrho _t),\ {\mathscr {W}}_2(\varrho _t),\ {\mathscr {H}}(\varrho _t)\) are interconnection weight matrices with appropriate dimensions for a fixed mode \(\varrho _t\), and the system mode is assumed to be available at each time instant.

For notational convenience, we denote the Markov process \(\{\varrho _t, t\ge 0\}\) by the index i; then the Markov jump delayed network system (2) can be rewritten in the following form:

$$\begin{aligned} {\dot{\theta }}(t)=\,&-{\mathscr {C}}_i\theta (t)+{\mathscr {W}}_{1i}g(\theta (t))+{\mathscr {W}}_{2i} g(\theta (t-\tau (t)))+{\mathscr {H}}_iw(t)+u(t). \end{aligned}$$
(3)

Now, we consider a mode-dependent sampled-data controller of the form

$$\begin{aligned} u(t)=\,u_{d}(t_k)=\,{\mathcal {K}}_i\theta (t_{k})=\, {\mathcal {K}}_i\theta (t-h(t)),\ t_{k}\le t<t_{k+1}\ (k=\, 0, 1,\dots ), \end{aligned}$$
(4)

where \(u_{d}\) denotes the discrete-time control signal generated at the sampling instants and \({\mathcal {K}}_i\) are the feedback controller gains to be determined later. The time-varying delay \(0\le h(t)=\,t-t_k\) is a piecewise linear function with derivative \({\dot{h}}(t)=\,1\) for \(t\ne t_{k}\), where \(t_k\) denotes the sampling instant and the control signal is held constant over the interval \(t_k\le t<t_{k+1}\), \(k\in {\mathbb {N}}\).

Under the control law, the system (3) can be rewritten as follows:

$$\begin{aligned} {\dot{\theta }}(t)=\,-{\mathscr {C}}_i\theta (t)+{\mathscr {W}}_{1i}g(\theta (t))+{\mathscr {W}}_{2i} g(\theta (t-\tau (t)))+{\mathcal {K}}_i\theta (t-h(t))+{\mathscr {H}}_iw(t). \end{aligned}$$
(5)

Remark 2.1

The investigation of Markov jump delayed neural networks has received much attention because of their theoretical significance and potential applications. It is worth mentioning that most existing works mainly concentrate on the stability problem of delayed neural networks and have achieved remarkable results [20–30]. In this paper, we analyse a non-uniform sampled-data control scheme to tackle the passivity and passification problems of Markov jump delayed neural networks.

Remark 2.2

The input delay approach has been a successful strategy to handle the non-uniform sampling case, which is more involved but more practical in applications. By introducing the notion of a virtual delay, the considered sampled-data control system can be transformed into a continuous-time system with time-varying delays, which is the key idea of this paper.
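As a rough illustration of this idea (a sketch under assumed, hypothetical non-uniform sampling instants, not taken from the paper), the snippet below shows how the virtual delay \(h(t)=\,t-t_k\) behaves: it grows with slope one between samples and resets to zero at every sampling instant, so the sampled-data law (4) becomes delayed state feedback whose delay is bounded by the largest sampling interval.

```python
import numpy as np

# Illustrative sketch of the input delay approach: the zero-order-hold input
# u(t) = K x(t_k), t_k <= t < t_{k+1}, is rewritten as delayed state feedback
# u(t) = K x(t - h(t)) with the virtual delay h(t) = t - t_k.
t_samples = np.array([0.0, 0.07, 0.15, 0.26, 0.30, 0.42])   # hypothetical t_k

def h(t, t_samples):
    """Virtual delay h(t) = t - t_k, where t_k is the latest sample before t."""
    k = np.searchsorted(t_samples, t, side="right") - 1
    return t - t_samples[k]

for t in [0.05, 0.10, 0.25, 0.29, 0.35]:
    print(f"t = {t:.2f}, h(t) = {h(t, t_samples):.2f}")
# h(t) grows with slope 1 between samples and resets to 0 at each t_k, so
# 0 <= h(t) < max_k (t_{k+1} - t_k) =: h, the bound used in Theorem 3.1.
```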

Before proceeding, the following assumptions, definitions and lemmas are employed throughout our paper.

Assumption 1

The time-varying delays satisfy

$$\begin{aligned} 0\le \tau (t)\le \tau ,\quad {\dot{\tau }}(t)\le \mu <1,\ \forall t \end{aligned}$$
(6)

where \(\tau\) and \(\mu\) are known constants.

Assumption 2

[55] The nonlinear function \(g_i(e)\ (i=\,1,2,\ldots ,n)\) satisfies the sector condition

$$\begin{aligned} 0\le \frac{g_i(e)}{e} \le l_i,\quad e\ne 0\ \Rightarrow \ g_i(e)[g_i(e)-l_ie]\le 0, \end{aligned}$$
(7)

where \(g(0)=\,0\), the \(l_i\) are known real scalars, and we denote \({\mathcal {L}}=\,{\text {diag}}\{l_1,l_2,\ldots ,l_n\}\).
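As a quick numerical illustration (not from the paper), the standard activation \(g(e)=\tanh (e)\) satisfies the sector condition (7) with \(l=1\). A minimal check is sketched below.

```python
import numpy as np

# Minimal numeric check (illustrative): g(e) = tanh(e) satisfies the sector
# condition (7) with l = 1, i.e. g(e)[g(e) - l*e] <= 0 for all e.
l = 1.0
e = np.linspace(-10, 10, 2001)
g = np.tanh(e)
assert np.all(g * (g - l * e) <= 1e-12), "sector condition violated"
print("max of g(e)[g(e) - l e]:", np.max(g * (g - l * e)))  # ~ 0, attained at e = 0
```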

Definition 2.3

[52] The Markovian jump system (2) is said to be stochastically passive if there exists a positive scalar \(\gamma\) such that, \(\forall \ t_p\ge 0\)

$$\begin{aligned} 2{\mathscr {E}}\bigg \{\int \limits _{0}^{t_p}z^T(t)\varphi (t){\text {d}}t\bigg \}\ge -\gamma {\mathscr {E}}\bigg \{\int \limits _{0}^{t_p}\varphi ^T(t)\varphi (t){\text {d}}t\bigg \} \end{aligned}$$
(8)

holds under the zero initial condition, where \(z(t)=\,\theta (t)\) is the output and \(\varphi (t)=\,w(t)\) is the disturbance input.

Definition 2.4

[52] Let \(V(\theta (t),\varrho _t,t>0)\) be a stochastic positive functional; its weak infinitesimal operator \({\mathscr {L}}\) is defined as

$$\begin{aligned}&{\mathscr {L}}V(\theta (t),\varrho _t,t>0)\nonumber \\&\quad =\,\lim \limits _{\Delta \rightarrow 0^{+}}\frac{1}{\Delta }{\mathcal {E}}_{\varrho _t}\bigg \{\big (V(\theta (t+\Delta ), \varrho _{(t+\Delta )},t>0)\big \arrowvert \theta (t),\varrho _t\big )-V(\theta (t),\varrho _t,t>0)\bigg \}. \end{aligned}$$
(9)

Lemma 2.5

[56] For any constant positive matrix \({\mathscr {T}}\in {\mathcal {R}}^{n\times n}\), scalar \(0\le \tau (t)\le \tau\), and vector function \({{\dot{\theta }}}(t):[-\tau ,0]\rightarrow {\mathcal {R}}^{n}\), it holds that

$$\begin{aligned}&-\tau \int _{t-\tau }^{t}{{\dot{\theta }}}^T(s){\mathscr {T}}{{\dot{\theta }}}(s){\text {d}}s\le \left[ \begin{array}{ccc} \theta (t) \\ \theta (t-\tau (t)) \\ \theta (t-\tau ) \end{array}\right] ^T \left[ \begin{array}{ccc} -{\mathscr {T}} &{} {\mathscr {T}} &{} 0 \\ *&{} -2{\mathscr {T}} &{} {\mathscr {T}} \\ *&{} *&{} -{\mathscr {T}} \end{array}\right] \left[ \begin{array}{ccc} \theta (t) \\ \theta (t-\tau (t)) \\ \theta (t-\tau ) \end{array}\right] . \end{aligned}$$
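A simple scalar sanity check of Lemma 2.5 (purely illustrative, with a hypothetical trajectory \(\theta (s)=\sin (s)\), \({\mathscr {T}}=1\), \(\tau =1\) and \(\tau (t)=0.4\)) can be run as follows; the right-hand side of the lemma reduces in the scalar case to \(-{\mathscr {T}}[\theta (t)-\theta (t-\tau (t))]^2-{\mathscr {T}}[\theta (t-\tau (t))-\theta (t-\tau )]^2\).

```python
import numpy as np
from scipy.integrate import quad

# Illustrative scalar sanity check of Lemma 2.5 with a hypothetical trajectory.
T_mat, tau, tau_t, t = 1.0, 1.0, 0.4, 2.0
theta, dtheta = np.sin, np.cos            # theta(s) = sin(s), thetadot(s) = cos(s)

lhs = -tau * quad(lambda s: dtheta(s) ** 2 * T_mat, t - tau, t)[0]
rhs = (-(theta(t) - theta(t - tau_t)) ** 2 * T_mat
       - (theta(t - tau_t) - theta(t - tau)) ** 2 * T_mat)
print(lhs, "<=", rhs, ":", lhs <= rhs)    # the inequality of Lemma 2.5 holds
```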

Lemma 2.6

[57] If there exist symmetric positive-definite matrix \({\mathcal {X}}_{33}>0\) and arbitrary matrices \({\mathcal {X}}_{11}\), \({\mathcal {X}}_{12}\), \({\mathcal {X}}_{13}\), \({\mathcal {X}}_{22}\) and \({\mathcal {X}}_{23}\) such that \([{\mathcal {X}}_{ij}]_{3\times 3} \ge 0\), then we obtain

$$-\int\limits_{t-h(t)}^{t}\dot \theta^T(s)\mathcal{X}_{33}\dot\theta(s){\text {d}}s \leq \int\limits_{t-h(t)}^{t} \varpi^T(t)\left[\begin{array}{ccc}\mathcal{X}_{11} & \mathcal{X}_{12} & \mathcal{X}_{13}\\ * & \mathcal{X}_{22} & \mathcal{X}_{23}\\ * & * & 0 \end{array}\right]\varpi(t){\text {d}}s,\\$$

where \(\varpi (t)=\,[\theta ^T(t)\quad \theta ^T(t-h(t))\quad {\dot{\theta }}^T(s)]^T.\)

3 Main results

In this section, the criteria on the passivity and passification of the system (5) are derived. The following theorem presents a sufficient condition on the stochastic passivity of the system (5).

Theorem 3.1

Suppose Assumption 2 holds. For given scalars \(\tau ,\ h, \ \gamma >0\), and \(\mu\), the system (5) is stochastically passive if there exist matrices \({\mathscr {P}}_i>0\ (i=\,1,2,\ldots ,m),\) \({\mathscr {Q}}>0,\ {\mathscr {R}}>0,\ {\mathcal {X}}>0,\ {\mathscr {T}}>0\), positive semi-definite matrices \([{\mathscr {Q}}_{ij}]_{3 \times 3}\ge 0\), a diagonal matrix \(\varGamma >0\), and any matrices \({\mathscr {J}},\ {\mathscr {G}}_i\) with appropriate dimensions such that the following LMI holds:

$$\begin{aligned} \Xi =\,\left[ \begin{array}{ccccccccc} (1,1) &{} {\mathscr {P}}_i-{\mathscr {J}}-({{\mathscr {J}}}{{\mathscr {C}}}_i)^T &{} [{\mathscr {Q}}-{\mathscr {Q}}_{33}]/\tau &{} (1,4) &{} {\mathscr {J}}{\mathscr {W}}_{1i}+\varGamma ^T{\mathcal {L}}^T &{} {\mathscr {J}}{\mathscr {W}}_{2i} &{} {\mathscr {G}}_i+{\mathscr {R}}/h &{} 0 &{} {{\mathscr {J}}}{{\mathscr {H}}}_i-{\mathcal {I}} \\ *&{} \tau {\mathscr {Q}}+h{\mathscr {R}}-2{\mathscr {J}} &{} 0 &{} 0 &{} {\mathscr {J}}{\mathscr {W}}_{1i} &{} {\mathscr {J}}{\mathscr {W}}_{2i} &{} {\mathscr {G}}_i &{} 0 &{} {{\mathscr {J}}}{{\mathscr {H}}}_i \\ *&{} *&{} (3,3) &{} [{\mathscr {Q}}-{\mathscr {Q}}_{33}]/\tau &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ *&{} *&{} *&{} (4,4) &{} 0 &{} 0 &{} 0 &{}0 &{} 0 \\ *&{} *&{}*&{} *&{} {\mathcal {X}}-2\varGamma &{} 0 &{} 0 &{} 0 &{}0 \\ *&{} *&{} *&{} *&{} *&{} -(1-\mu ){\mathcal {X}}&{} 0 &{} 0 &{} 0 \\ *&{} *&{} *&{} *&{} *&{}*&{} -2{\mathscr {R}}/h &{} {\mathscr {R}}/h &{} 0 \\ *&{} *&{} *&{} *&{} *&{} *&{} *&{}-{\mathscr {R}}/h &{} 0 \\ *&{} *&{} *&{} *&{} *&{} *&{} *&{}*&{} -\gamma {\mathcal {I}} \end{array} \right] <0, \end{aligned}$$
(10)

where

$$\begin{aligned} (1,1)=\,&-[{\mathscr {Q}}-{\mathscr {Q}}_{33}]/\tau +\tau {\mathscr {Q}}_{11}+{\mathscr {Q}}^T_{13}+{\mathscr {Q}}_{13} +{\mathscr {T}}-{\mathscr {R}}/h-2{{\mathscr {J}}}{{\mathscr {C}}}_i+\sum \limits _{j=\,1}^{N}\pi _{ij}{\mathscr {P}}_j,\\ (1,4)=\,&\tau {\mathscr {Q}}_{12}-{\mathscr {Q}}_{13}+{\mathscr {Q}}^T_{23},\ (3,3)=\,-2[{\mathscr {Q}}-{\mathscr {Q}}_{33}]/\tau -(1-\mu ){\mathscr {T}},\\ (4,4)=\,&-[{\mathscr {Q}}-{\mathscr {Q}}_{33}]/\tau +\tau {\mathscr {Q}}_{22}-{\mathscr {Q}}_{23}-{\mathscr {Q}}^T_{23}. \end{aligned}$$

Moreover, the mode-dependent gain matrices are given by \({\mathcal {K}}_i=\,{\mathscr {J}}^{-1}{\mathscr {G}}_i.\)

Proof

We consider the following LKF candidate as

$$\begin{aligned} V(\theta (t),t,i)=\,\sum \limits _{\alpha =\,1}^{3}V_\alpha (\theta (t),t,i), \end{aligned}$$
(11)

where

$$\begin{aligned} V_1(\theta (t),t,i)=\,&\theta ^T(t){\mathscr {P}}_i\theta (t),\\ V_2(\theta (t),t,i)=\,&\int _{t-\tau (t)}^{t}\theta ^T(s) {\mathscr {T}} \theta (s)ds+\int _{t-\tau (t)}^{t}g^T(\theta (s)) {\mathcal {X}} g(\theta (s))ds,\\ V_3(\theta (t),t,i)=\,&\int _{-\tau }^{0}\int _{t+\xi }^{t}{{\dot{\theta }}}^T(s){\mathscr {Q}}{{\dot{\theta }}}(s)ds d\xi +\int _{-h}^{0}\int _{t+\xi }^{t}{{\dot{\theta }}}^T(s){\mathscr {R}}{{\dot{\theta }}}(s)ds d\xi. \end{aligned}$$

For each mode i, direct calculation gives

$$\begin{aligned} {\mathscr {L}} V_1(\theta (t),t,i)=\,&2\theta ^T(t){\mathscr {P}}_i{{\dot{\theta }}}(t) +\sum \limits _{j=\,1}^{N}\pi _{ij}\theta ^T(t){\mathscr {P}}_{j}\theta (t), \end{aligned}$$
(12)
$$\begin{aligned} {\mathscr {L}} V_2(\theta (t),t,i)\le&\theta ^T(t){\mathscr {T}}\theta (t)+g^T(\theta (t)){\mathcal {X}}g(\theta (t)) -(1-\mu )[\theta ^T(t-\tau (t)){\mathscr {T}}\theta (t-\tau (t))\nonumber \\&\quad +g^T(\theta (t-\tau (t))){\mathcal {X}}g(\theta (t-\tau (t)))], \end{aligned}$$
(13)
$$\begin{aligned} {\mathscr {L}} V_3(\theta (t),t,i)=\,&{{\dot{\theta }}}^T(t)[\tau {\mathscr {Q}}+h{\mathscr {R}}]{{\dot{\theta }}}(t) -\int _{t-\tau }^{t}{{\dot{\theta }}}^T(s){\mathscr {Q}}{{\dot{\theta }}}(s)ds -\int _{t-h}^{t}{{\dot{\theta }}}^T(s){\mathscr {R}}{{\dot{\theta }}}(s)ds,\nonumber \\ =\,&{{\dot{\theta }}}^T(t)[\tau {\mathscr {Q}}+h{\mathscr {R}}]{{\dot{\theta }}}(t) -\int _{t-\tau }^{t}{{\dot{\theta }}}^T(s)[{\mathscr {Q}}-{\mathscr {Q}}_{33}]{{\dot{\theta }}}(s)ds\nonumber \\&\quad -\int _{t-\tau }^{t}{{\dot{\theta }}}^T(s){\mathscr {Q}}_{33}{{\dot{\theta }}}(s)ds -\int _{t-h}^{t}{{\dot{\theta }}}^T(s){\mathscr {R}}{{\dot{\theta }}}(s)ds. \end{aligned}$$
(14)

Applying Lemma 2.6 and the Leibniz–Newton formula to the integral term \(-\int _{t-\tau }^{t}{{\dot{\theta }}}^T(s){\mathscr {Q}}_{33}{{\dot{\theta }}}(s)ds\), the following inequality holds:

$$\begin{aligned} -\int _{t-\tau }^{t}{{\dot{\theta }}}^T(s){\mathscr {Q}}_{33}{{\dot{\theta }}}(s)ds\le&\int _{t-\tau }^{t} \left[ \begin{array}{ccc} \theta (t) \\ \theta (t-\tau ) \\ {{\dot{\theta }}}(s) \end{array}\right] ^T \left[ \begin{array}{ccc} {\mathscr {Q}}_{11} &{} {\mathscr {Q}}_{12} &{} {\mathscr {Q}}_{13} \\ *&{} {\mathscr {Q}}_{22} &{} {\mathscr {Q}}_{23} \\ *&{} *&{} 0 \end{array}\right] \left[ \begin{array}{ccc} \theta (t) \\ \theta (t-\tau ) \\ {{\dot{\theta }}}(s) \end{array}\right] ds,\nonumber \\ \le&\theta ^T(t)\tau {\mathscr {Q}}_{11}\theta (t)+2\theta ^T(t)\tau {\mathscr {Q}}_{12}\theta (t-\tau )+\theta ^T(t-\tau ) \tau {\mathscr {Q}}_{22}\theta (t-\tau )\nonumber \\ {}&+2\theta ^T(t){\mathscr {Q}}^T_{13}\int _{t-\tau }^{t}{{\dot{\theta }}}(s)ds+2\theta ^T(t-\tau ){\mathscr {Q}}^T_{23}\int _{t-\tau }^{t}{{\dot{\theta }}}(s)ds,\nonumber \\ =\,&\theta ^T(t)\big [\tau {\mathscr {Q}}_{11}+{\mathscr {Q}}^T_{13}+{\mathscr {Q}}_{13}\big ]\theta (t) +2\theta ^T(t)\big [\tau {\mathscr {Q}}_{12}-{\mathscr {Q}}_{13}+{\mathscr {Q}}^T_{23}\big ]\theta (t-\tau )\nonumber \\&\quad +\theta ^T(t-\tau )\big [\tau {\mathscr {Q}}_{22}-{\mathscr {Q}}_{23}-{\mathscr {Q}}^T_{23}\big ]\theta (t-\tau ). \end{aligned}$$
(15)

According to Lemma 2.5, the remaining integral terms in (14) can be bounded as

$$\begin{aligned} -\int _{t-\tau }^{t}&{{\dot{\theta }}}^T(s)[{\mathscr {Q}}-{\mathscr {Q}}_{33}]{{\dot{\theta }}}(s)ds\nonumber \\ {}&\le 1/\tau \left[ \begin{array}{ccc} \theta (t) \\ \theta (t-\tau (t)) \\ \theta (t-\tau ) \end{array}\right] ^T \left[ \begin{array}{ccc} -[{\mathscr {Q}}-{\mathscr {Q}}_{33}] &{} [{\mathscr {Q}}-{\mathscr {Q}}_{33}] &{} 0 \\ *&{} -2[{\mathscr {Q}}-{\mathscr {Q}}_{33}] &{} [{\mathscr {Q}}-{\mathscr {Q}}_{33}] \\ *&{} *&{} -[{\mathscr {Q}}-{\mathscr {Q}}_{33}] \end{array}\right] \left[ \begin{array}{ccc} \theta (t) \\ \theta (t-\tau (t)) \\ \theta (t-\tau ) \end{array}\right] , \end{aligned}$$
(16)
$$\begin{aligned}&-\int _{t-h}^{t}{{\dot{\theta }}}^T(s){\mathscr {R}}{{\dot{\theta }}}(s)ds\le 1/h\left[ \begin{array}{ccc} \theta (t) \\ \theta (t-h(t)) \\ \theta (t-h) \end{array}\right] ^T \left[ \begin{array}{ccc} -{\mathscr {R}} &{} {\mathscr {R}} &{} 0 \\ *&{} -2{\mathscr {R}} &{} {\mathscr {R}} \\ *&{} *&{} -{\mathscr {R}} \end{array}\right] \left[ \begin{array}{ccc} \theta (t) \\ \theta (t-h(t)) \\ \theta (t-h) \end{array}\right] . \end{aligned}$$
(17)

Moreover, by Assumption 2, for any diagonal matrix \(\varGamma >0\) of appropriate dimensions, we have

$$\begin{aligned} -2g^T(\theta (t))\varGamma [g(\theta (t))-{\mathcal {L}}\theta (t)]\ge 0. \end{aligned}$$
(18)

In addition,

$$\begin{aligned} 0=\,&2[\theta ^T(t)+{{\dot{\theta }}}^T(t)]{\mathscr {J}}[-{{\dot{\theta }}}(t)-{\mathscr {C}}_i\theta (t)+{\mathscr {W}}_{1i}g(\theta (t))+{\mathscr {W}}_{2i} g(\theta (t-\tau (t)))+{\mathcal {K}}_i\theta (t-h(t))+{\mathscr {H}}_iw(t)],\nonumber \\ =\,&-2\theta ^T(t){\mathscr {J}}{{\dot{\theta }}}(t) -2\theta ^T(t){{\mathscr {J}}}{{\mathscr {C}}}_i\theta (t) +2\theta ^T(t){{\mathscr {J}}}{{\mathscr {W}}}_{1i}g(\theta (t)) +2\theta ^T(t){{\mathscr {J}}}{{\mathscr {W}}}_{2i}g(\theta (t-\tau (t))) +2\theta ^T(t){{\mathscr {J}}}{\mathcal {K}}_{i}\theta (t-h(t)) +2\theta ^T(t){{\mathscr {J}}}{{\mathscr {H}}}_{i}w(t)\nonumber \\ {}&-2{{\dot{\theta }}}^T(t){\mathscr {J}}{{\dot{\theta }}}(t) -2{{\dot{\theta }}}^T(t){{\mathscr {J}}}{{\mathscr {C}}}_i\theta (t) +2{{\dot{\theta }}}^T(t){{\mathscr {J}}}{{\mathscr {W}}}_{1i}g(\theta (t)) +2{{\dot{\theta }}}^T(t){{\mathscr {J}}}{\mathscr {W}}_{2i}g(\theta (t-\tau (t))) +2{{\dot{\theta }}}^T(t){\mathscr {J}}{\mathcal {K}}_{i}\theta (t-h(t)) +2{{\dot{\theta }}}^T(t){\mathscr {J}}{\mathscr {H}}_{i}w(t). \end{aligned}$$
(19)

Recalling that the ultimate objective is to establish the passivity property (8), we introduce the following performance index:

$$\begin{aligned} {\mathcal {J}}(t_p):=\,{\mathscr {E}}\bigg \{\int \limits _{0}^{t_p}\big [-2z^T(t)\varphi (t)-\gamma \varphi ^T(t)\varphi (t)\big ]{\text {d}}t\bigg \}, \end{aligned}$$
(20)

so that the passivity condition (8) holds if and only if \({\mathcal {J}}(t_p)\le 0\) for all \(t_p\ge 0\). Combining (12)–(20), it can be obtained that

$$\begin{aligned} {\mathscr {E}}\bigg \{\mathscr {L} V(\theta (t),t,i)-2z^T(t)\varphi (t)-\gamma \varphi ^T(t)\varphi (t)\bigg \}\le {\mathscr {E}}\bigg \{\eta ^T(t) \Xi \eta (t)\bigg \}<0, \end{aligned}$$
(21)

where

$$\begin{aligned} \eta ^T(t)=\,\bigg [\theta ^T(t)\ {{\dot{\theta }}}^T(t) \ \theta ^T(t-\tau (t)) \ \theta ^T(t-\tau )\ g^T(\theta (t)) \ g^T(\theta (t-\tau (t))) \ \theta ^T(t-h(t)) \ \theta ^T(t-h)\ w^T(t)\bigg ]. \end{aligned}$$

Integrating both sides of (21) with respect to t over the time period from 0 to \(t_p\), we have

$$\begin{aligned} 2{\mathscr {E}}\bigg \{\int \limits _{0}^{t_p}z^T(t)\varphi (t){\text {d}}t\bigg \}&\ge {\mathscr {E}}\bigg \{\int \limits _{0}^{t_p}({\mathscr {L}} V(\theta (t),t,i)-\gamma \varphi ^T(t)\varphi (t)){\text {d}}t\bigg \}\nonumber \\&\ge -\gamma {\mathscr {E}}\bigg \{\int \limits _{0}^{t_p}\varphi ^T(t)\varphi (t){\text {d}}t\bigg \}. \end{aligned}$$
(22)

Therefore, the resulting Markov jump delayed neural networks (5) are stochastically passive by Definition 2.3, which completes the proof.\(\square\)

Remark 3.2

It should be noted that Theorem 3.1 provides a passivity criterion for Markovian jump delayed neural networks with time-varying delay. The results are expressed in the framework of LMIs, which can be easily verified by existing powerful tools, such as the LMI toolbox of MATLAB. Moreover, if (10) is feasible, it follows from \(\Xi <0\) that \({\mathscr {J}}\) is nonsingular, and thus, the desired state feedback gains \({\mathcal {K}}_i\) can be readily obtained.
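For readers without the MATLAB LMI toolbox, LMI feasibility problems of the type in (10) can also be checked with open-source convex-optimization packages. The sketch below is only a simplified illustration, not the condition (10) itself: it verifies a Lyapunov-type LMI for one mode of Example 4.1 with the delay, sampling and jump terms dropped. The full condition is assembled from the same ingredients (matrix decision variables, a block LMI constraint and a solver call).

```python
import cvxpy as cp
import numpy as np

# Simplified illustrative sketch: find P > 0 with A^T P + P A < 0, where
# A = -C_1 from Example 4.1 (delay, sampling and jump terms omitted).
A = -np.diag([5.0, 5.0])
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)   # pure feasibility problem
prob.solve(solver=cp.SCS)
print("status:", prob.status)
print("P =\n", P.value)
```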

Remark 3.3

It is worth mentioning here that a less conservative criterion is obtained by using Lemma 2.5 for the integral term \(-\int _{t-\tau }^{t}{{\dot{\theta }}}^T(s)[{\mathscr {Q}}-{\mathscr {Q}}_{33}]{{\dot{\theta }}}(s){\text {d}}s\), which results in

$$\begin{aligned} -\int _{t-\tau }^{t}&{{\dot{\theta }}}^T(s)[{\mathscr {Q}}-{\mathscr {Q}}_{33}]{{\dot{\theta }}}(s){\text {d}}s\nonumber \\ {}&\le 1/\tau \left[ \begin{array}{ccc} \theta (t) \\ \theta (t-\tau (t)) \\ \theta (t-\tau ) \end{array}\right] ^T \left[ \begin{array}{ccc} -[{\mathscr {Q}}-{\mathscr {Q}}_{33}] &{} [{\mathscr {Q}}-{\mathscr {Q}}_{33}] &{} 0 \\ *&{} -2[{\mathscr {Q}}-{\mathscr {Q}}_{33}] &{} [{\mathscr {Q}}-{\mathscr {Q}}_{33}] \\ *&{} *&{} -[{\mathscr {Q}}-{\mathscr {Q}}_{33}] \end{array}\right] \left[ \begin{array}{ccc} \theta (t) \\ \theta (t-\tau (t)) \\ \theta (t-\tau ) \end{array}\right] . \end{aligned}$$
(23)

Furthermore, if we consider the delayed neural networks in (2) without Markovian jumping parameters and without external disturbances, that is

$$\begin{aligned} {\dot{\theta }}(t)=\,-{\mathscr {C}}\theta (t)+{\mathscr {W}}_1g(\theta (t))+{\mathscr {W}}_2g(\theta (t-\tau (t))). \end{aligned}$$
(24)

Then, following a similar idea to that in the proof of Theorem 3.1, we obtain Corollary 3.4.

Corollary 3.4

Under Assumption 2, for given scalars \(\tau\) and \(\mu\), the system (24) is asymptotically stable if there exist matrices \({\mathscr {P}}>0,\ {\mathscr {Q}}>0,\ {\mathcal {X}}>0,\ {\mathscr {T}}>0\), positive semi-definite matrices \([{\mathscr {Q}}_{ij}]_{3 \times 3}\ge 0\), a diagonal matrix \(\varGamma >0\), and any matrix \({\mathscr {J}}\) with appropriate dimensions such that the following LMI holds:

$$\begin{aligned} \Xi =\,\left[ \begin{array}{ccccccccc} (1,1) &{} {\mathscr {P}}-{\mathscr {J}}-({\mathscr {J}}{\mathscr {C}})^T &{} [{\mathscr {Q}}-{\mathscr {Q}}_{33}]/\tau &{} (1,4) &{} {\mathscr {J}}{\mathscr {W}}_{1}+\varGamma ^T{\mathcal {L}}^T &{} {\mathscr {J}}{\mathscr {W}}_{2} \\ *&{} \tau {\mathscr {Q}}-2{\mathscr {J}} &{} 0 &{} 0 &{} {\mathscr {J}}{\mathscr {W}}_{1} &{} {\mathscr {J}}{\mathscr {W}}_{2} \\ *&{} *&{} (3,3) &{} [{\mathscr {Q}}-{\mathscr {Q}}_{33}]/\tau &{} 0 &{} 0 \\ *&{} *&{} *&{} (4,4) &{} 0 &{} 0 \\ *&{} *&{}*&{} *&{} {\mathcal {X}}-2\varGamma &{} 0 \\ *&{} *&{} *&{} *&{} *&{} -(1-\mu ){\mathcal {X}} \end{array} \right] <0, \end{aligned}$$
(25)

where

$$\begin{aligned} (1,1)=\,&-[{\mathscr {Q}}-{\mathscr {Q}}_{33}]/\tau +\tau {\mathscr {Q}}_{11}+{\mathscr {Q}}^T_{13}+{\mathscr {Q}}_{13}+{\mathscr {T}} -2{\mathscr {J}}{\mathscr {C}},\ (1,4)=\,\tau {\mathscr {Q}}_{12}-{\mathscr {Q}}_{13}+{\mathscr {Q}}^T_{23},\\ (3,3)=\,&-2[{\mathscr {Q}}-{\mathscr {Q}}_{33}]/\tau -(1-\mu ){\mathscr {T}},\ (4,4)=\,-[{\mathscr {Q}}-{\mathscr {Q}}_{33}]/\tau +\tau {\mathscr {Q}}_{22}-{\mathscr {Q}}_{23}-{\mathscr {Q}}^T_{23}. \end{aligned}$$

Proof

We consider the LKF candidate as

$$\begin{aligned} V(\theta (t),t)=\,\sum \limits _{\alpha =\,1}^{3}V_\alpha (\theta (t),t), \end{aligned}$$
(26)

where

$$\begin{aligned} V_1(\theta (t),t)=\,&\theta ^T(t){\mathscr {P}}\theta (t),\\ V_2(\theta (t),t)=\,&\int _{t-\tau (t)}^{t}\theta ^T(s) {\mathscr {T}} \theta (s){\text {d}}s+\int _{t-\tau (t)}^{t}g^T(\theta (s)) \mathcal {X} g(\theta (s)){\text {d}}s,\\ V_3(\theta (t),t)=\,&\int _{-\tau }^{0}\int _{t+\xi }^{t}{{\dot{\theta }}}^T(s){\mathscr {Q}}{{\dot{\theta }}}(s){\text {d}}s {\text {d}}\xi . \end{aligned}$$

The derivation is similar to that in Theorem 3.1, so it is omitted here.\(\square\)

Remark 3.5

In Theorem 3.1 and Corollary 3.4, LMI-based conditions are presented to guarantee the stochastic passivity and the asymptotic stability, respectively, of the delayed neural networks. The established criterion is mode-dependent on the Markovian jump parameters. By setting \({\tilde{\tau }}=\,\frac{1}{\tau }\) and \({\tilde{h}}=\,\frac{1}{h}\), maximizing \(\tau\) and h can be recast as minimization problems; with a fixed value of \(\mu\), the optimal values can be obtained in Theorem 3.1 and Corollary 3.4 through the following optimization procedures:

$$\begin{aligned} \left\{ \begin{array}{ll} {\text {Maximize}} \ \tau , \ h \\ {\text {Subject \,to\, Theorem}} \ (3.1) \end{array} \right. \end{aligned}$$
(27)

and

$$\begin{aligned} \left\{ \begin{array}{ll} {\text {Maximize}} \ \tau , \\ {\text {Subject\, to\, Corollary}} (3.4). \end{array} \right. \end{aligned}$$
(28)

Problems (27) and (28) are convex optimization problems whose solutions can be obtained efficiently by using the MATLAB LMI toolbox.
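In practice, the optimizations (27) and (28) are typically carried out by a line search or bisection on the delay bound, re-solving the LMI feasibility problem at each trial value. The following sketch is illustrative only and assumes a hypothetical callback lmi_feasible(tau) that assembles and solves the LMI of Theorem 3.1 (or Corollary 3.4) for a given bound and reports feasibility.

```python
# Illustrative sketch: bisection for the maximum admissible delay bound.
# `lmi_feasible` is a hypothetical callback returning True when a solver
# reports that the LMI of Theorem 3.1 / Corollary 3.4 is feasible for tau.
def max_delay_bound(lmi_feasible, tau_lo=0.0, tau_hi=10.0, tol=1e-3):
    """Largest tau in [tau_lo, tau_hi] for which the LMI stays feasible."""
    if not lmi_feasible(tau_lo):
        raise ValueError("infeasible even for the smallest delay bound")
    while tau_hi - tau_lo > tol:
        mid = 0.5 * (tau_lo + tau_hi)
        if lmi_feasible(mid):
            tau_lo = mid      # feasible: try a larger bound
        else:
            tau_hi = mid      # infeasible: shrink the bound
    return tau_lo
```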

Remark 3.6

It is very interesting to note that, in this paper, the reduced conservatism comes primarily from the construction of a suitable Lyapunov–Krasovskii functional and the use of bounding techniques for the integral terms. Secondly, a new integral technique is given to take fully into account the relationship between the terms in the Leibniz–Newton formula within the framework of linear matrix inequalities (LMIs), which reduces the conservatism of our results.

Remark 3.7

Our approach is based on the Lyapunov–Krasovskii functional method combined with slack variables and an improved calculation technique. Comparing the numbers of variables required in Theorem 3.1 and Corollary 3.4 of this paper with those of the results in the literature, we can see that our results are less conservative. The comparison results are summarized in Tables 1 and 2, from which we can conclude that our approach is less conservative.

Remark 3.8

The reduced conservatism of Theorem 3.1 and Corollary 3.4 benefits from introducing some free weighting matrices to express the relationship among the system matrices, and neither the model transformation approach nor any bounding technique is needed to estimate the inner product of the involved cross terms. It can be easily seen that the approach of this paper gives better results.

4 Numerical examples

In this section, numerical examples are provided to illustrate the potential benefits and effectiveness of the developed method for delayed neural networks.

Example 4.1

Consider the Markov jump delayed neural networks (3) with two modes (\(i=\,1,2\)) and the following parameters:

$$\begin{aligned} {\mathscr {C}}_1&=\, \left[ \begin{array}{cc} 5 &{} 0 \\ 0 &{} 5 \end{array}\right] ,\ {\mathscr {W}}_{11}=\, \left[ \begin{array}{cc} -0.5 &{} -1.3 \\ 0.42 &{} 0.35 \end{array}\right] ,\ {\mathscr {W}}_{21}=\, \left[ \begin{array}{cc} 0.3 &{} 0.2 \\ 1.1 &{} 1.2 \end{array}\right] ,\ {\mathscr {H}}_1=\, \left[ \begin{array}{cc} 1.5 &{} 0 \\ 0 &{} 2 \end{array}\right] ,\\ {\mathscr {C}}_2&=\, \left[ \begin{array}{cc} 4 &{} 0 \\ 0 &{} 5 \end{array}\right] ,\ {\mathscr {W}}_{12}=\, \left[ \begin{array}{cc} -0.1 &{} 0.3 \\ -0.22 &{} 0.25 \end{array}\right] ,\ {\mathscr {W}}_{22}=\, \left[ \begin{array}{cc} 1.3 &{} 1.2 \\ 1.1 &{} 1.2 \end{array}\right] ,\ {\mathscr {H}}_2=\, \left[ \begin{array}{cc} 0.5 &{} 0 \\ 0 &{} 1 \end{array}\right] . \end{aligned}$$

The transition rate matrix with two operation modes is given as

$$\begin{aligned} \Pi =\, \left[ \begin{array}{cc} -0.8 &{} 0.8 \\ 0.4 &{} -0.4 \end{array}\right] . \end{aligned}$$

Additionally, we take \({\mathcal {L}}=\,{\mathcal {I}},\ \gamma =\,1.4570,\ \tau =\,0.9\) and \(\mu =\,0.8\). Let the sampling time points be \(t_k=\,0.1 k,\ k=\,1,2,\ldots ,\) so that the sampling period is \(h=\,0.1\). By using the MATLAB LMI control toolbox to solve the LMIs in Theorem 3.1, we obtain the following feasible solutions:

$$\begin{aligned} {\mathscr {P}}_1&=\, \left[ \begin{array}{cc} 1.6333 &{} -0.4740\\ -0.4740 &{} 1.1048 \end{array}\right] ,\ {\mathscr {P}}_2=\, \left[ \begin{array}{cc} 0.6284 &{} -0.0823\\ -0.0823 &{} 0.5828 \end{array}\right] ,\\ {\mathscr {Q}}&=\,10^{-6}\times \left[ \begin{array}{cc} 0.3062 &{} -0.2175\\ -0.2175 &{} 0.1933 \end{array}\right] ,\\ {\mathscr {R}}&=\, \left[ \begin{array}{cc} 0.6726 &{} -0.2531\\ -0.2531 &{} 0.3631 \end{array}\right] ,\ {\mathcal {X}}=\, \left[ \begin{array}{cc} 0.5672 &{} 0.2733\\ 0.2733 &{} 0.7202 \end{array}\right] ,\ {\mathscr {T}}=\,10^{-5}\times \left[ \begin{array}{cc} 0.7557 &{} -0.4801\\ -0.4801 &{} 0.3216 \end{array}\right] \\ {\mathscr {J}}&=\, \left[ \begin{array}{cc} 0.1110 &{} -0.0302\\ -0.0302 &{} 0.1405 \end{array}\right] ,\ {\mathscr {G}}_1=\,\left[ \begin{array}{cc} -0.7894 &{} 0.3290\\ 0.3290 &{} -0.3963 \end{array}\right] ,\ {\mathscr {G}}_2=\, \left[ \begin{array}{cc} -0.1353 &{} -0.0460\\ -0.0460 &{} -0.2057 \end{array}\right] . \end{aligned}$$

The controller gains are computed as

$$\begin{aligned} {\mathcal {K}}_1=\,{\mathscr {J}}^{-1}{\mathscr {G}}_1=\, \left[ \begin{array}{cc} -6.8801 &{} 2.3335\\ 0.8619 &{} -2.3190 \end{array}\right] ,\ {\mathcal {K}}_2=\,{\mathscr {J}}^{-1}{\mathscr {G}}_2=\, \left[ \begin{array}{cc} -1.5121 &{} -1.0954\\ -0.9963 &{} -2.7444 \end{array}\right] . \end{aligned}$$

This implies that all conditions in Theorem 3.1 are fulfilled. By Theorem 3.1, system (5) is stochastically passive under the given sampled-data control. It is found that the proposed conditions of Theorem 3.1 are feasible, and the minimum passivity performance \(\gamma\) for different \(\tau ,\ \mu\) is given in Table 1.
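As a quick numerical cross-check (illustrative only), the reported gain \({\mathcal {K}}_1\) can be reproduced from the feasible matrices \({\mathscr {J}}\) and \({\mathscr {G}}_1\) above via \({\mathcal {K}}_1=\,{\mathscr {J}}^{-1}{\mathscr {G}}_1\):

```python
import numpy as np

# Illustrative cross-check of the gain construction K_1 = J^{-1} G_1
# using the feasible matrices reported above.
J  = np.array([[ 0.1110, -0.0302],
               [-0.0302,  0.1405]])
G1 = np.array([[-0.7894,  0.3290],
               [ 0.3290, -0.3963]])

K1 = np.linalg.solve(J, G1)        # = J^{-1} G_1
print("K1 =\n", np.round(K1, 4))   # ~ [[-6.88, 2.33], [0.86, -2.32]]
```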

Given the time-varying delays within the corresponding ranges and the system mode evolutions, by applying the obtained controller, the state responses of the resulting closed-loop Markov jump delayed neural networks (5) can be observed as shown in Figs. 1 and 2 for the given initial condition \(\theta (t)=\,[-0.1,0.1]^T\). It is clear that the designed controller is feasible and ensures the passivity of the resulting closed-loop system (5) despite the disturbances and the time-varying delays.

Table 1 Minimum passivity performance \(\gamma\) for various \(\tau ,\ \mu\) in Example 4.1
Fig. 1

State trajectories of the system in Example 4.1

Fig. 2

Modes evolution i in Example 4.1

Table 2 The maximum allowable delay bounds (MADBs) of \(\tau\) for various \(\mu\) in Example 4.2

Example 4.2

Consider the delayed neural networks (24) with the following parameters:

$$\begin{aligned} {\mathscr{C}}= \left[ \begin{array}{cc} 2 &{} 0 \\ 0 &{} 2 \end{array}\right] ,\ {\mathscr {W}}_1= \left[ \begin{array}{cc} 1 &{} 1 \\ -1 &{} -1 \end{array}\right] ,\ {\mathscr {W}}_{2}= \left[ \begin{array}{cc} 0.88 &{} 1 \\ 1 &{} 1 \end{array}\right] ,\ {\mathcal {L}}={\text {diag}}\{0.4,0.8\}. \end{aligned}$$

It can be verified that the LMI (25) is feasible. For different values of \(\mu\), Table 2 gives the allowable upper bounds \(\tau\) of the time-varying delay. It is clear that the delay-dependent stability result proposed in Corollary 3.4 provides less conservative results than those obtained in [6–8, 10].

Example 4.3

Consider the delayed neural networks (24) with the following parameters:

$$\begin{aligned} {\mathscr {C}}= \left[ \begin{array}{cc} 1.5 &{} 0 \\ 0 &{} 0.7 \end{array}\right] ,\ {\mathscr {W}}_1= \left[ \begin{array}{cc} 0.0503 &{} 0.0454 \\ 0.0987 &{} 0.2075 \end{array}\right] ,\ {\mathscr {W}}_{2}= \left[ \begin{array}{cc} 0.2381 &{} 0.9320 \\ 0.0388 &{} 0.5062 \end{array}\right] ,\ {\mathcal {L}}={\text {diag}}\{0.3,0.8\}. \end{aligned}$$

To compare our results with the existing results in [2, 4, 5], we give different values of \(\mu\). The comparison results are stated in Table 3. From Table 3, it is observed that the results in this paper are less conservative than those in [2, 4, 5].

Table 3 Maximum allowable delay bounds (MADBs) of \(\tau\) for various \(\mu\) in Example 4.3

Example 4.4

Originally, neural networks embody the characteristics of real biological neurons that are connected or functionally related in a nervous system. On the other hand, neural networks can represent not only biological neurons but also other practical systems. One of them is the quadruple-tank process, which is presented in Fig. 3. The quadruple-tank process consists of four interconnected water tanks and two pumps. The inputs are the voltages \((\nu _1\) and \(\nu _2)\) to the two pumps, and the outputs are the water levels of Tanks 1 and 2. As shown in Fig. 3, the quadruple-tank process can be expressed clearly using a neural network model. The works [58–60] proposed the state-space equation of the quadruple-tank process and designed the state feedback controller as follows:

$$\begin{aligned} \dot{x}(t)=\,A_0x(t)+A_1x(t-\tau _1)+B_0u(t-\tau _2)+B_1u(t-\tau _3), \end{aligned}$$
(29)

where

$$\begin{aligned} A_0&= \left[ \begin{array}{cccc} -0.0021 &{} 0 &{} 0 &{} 0 \\ 0 &{} -0.0021 &{} 0 &{} 0 \\ 0 &{} 0 &{} -0.0424 &{} 0 \\ 0 &{} 0 &{} 0 &{} -0.0424 \end{array}\right] ,\ A_1= \left[ \begin{array}{cccc} 0 &{} 0 &{} 0.0424 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0.0424 \\ 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 \end{array}\right] ,\\ B_0&= \left[ \begin{array}{cccc} 0.1113 \gamma _1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0.1042 \gamma _2 &{} 0 &{} 0 \end{array}\right] ,\ B_1= \left[ \begin{array}{cccc} 0 &{} 0 &{} 0 &{} 0.1113 (1-\gamma _1) \\ 0 &{} 0 &{} 0.1042 (1-\gamma _2) &{} 0 \end{array}\right] ,\\ \gamma _1&=0.333,\ \gamma _2=0.307,\ u(t)=Kx(t),\\ K&= \left[ \begin{array}{cccc} -0.1609 &{} -0.1765 &{} -0.0795 &{} -0.2073 \\ -0.1977 &{} -0.1579 &{} -0.2288 &{} -0.0772 \end{array}\right] . \end{aligned}$$

For simplicity, it was assumed that \(\tau _1=\tau _2=0\), and \(\tau _3=\tau (t)\).

Here, the control input u(t) represents the amount of water supplied by the pumps. Therefore, u(t) has a threshold value due to the limited cross-section of the hose and the capacity of the pumps, and it is natural to consider u(t) as a nonlinear function as follows:

$$\begin{aligned} u(t)=Kf(x(t)), \end{aligned}$$

\(f(x(t))=[f_1(x_1(t)),\ldots ,f_4(x_4(t))],\ f_i(x_i(t))=0.1(|x_i(t)+1|-|x_i(t)-1|),\ i=1,\ldots ,4.\)

The quadruple-tank process (29) can be rewritten in the form of system (2) with \(\varrho _t=1\) as follows:

$$\begin{aligned} {\dot{\theta }}(t)=\,-{\mathscr {C}}(\varrho _t)\theta (t)+{\mathscr {W}}_1(\varrho _t) g(\theta (t))+{\mathscr {W}}_2(\varrho _t) g(\theta (t-\tau (t))), \end{aligned}$$
(30)

where \({\mathscr {C}}=\,-A_0-A_1,\ {\mathscr {W}}_1=\,B^T_0K,\ {\mathscr {W}}_2=\,B^T_1K,\ g(\cdot )=\,f(\cdot )\) and \(w(t)=\,0\). In addition, \({\mathcal {L}}=\,0.1I,\ \tau =\,0.1\) and \(\mu =\,0.5\) can be obtained from the above system parameters. Figure 4 shows that the state trajectories of the system converge to the zero equilibrium point with the initial state \([-0.3, 0.2, 0.5, -0.4]^T\); hence, it is found that the dynamical behaviour of the quadruple-tank process system (30) is asymptotically stable.
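For completeness, the following sketch (illustrative) assembles the parameters of system (30) from the quadruple-tank data in (29) according to the relations \({\mathscr {C}}=\,-A_0-A_1\), \({\mathscr {W}}_1=\,B^T_0K\) and \({\mathscr {W}}_2=\,B^T_1K\) stated above:

```python
import numpy as np

# Illustrative sketch: parameters of system (30) from the quadruple-tank
# data (29) via C = -A0 - A1, W1 = B0^T K, W2 = B1^T K.
A0 = np.diag([-0.0021, -0.0021, -0.0424, -0.0424])
A1 = np.zeros((4, 4)); A1[0, 2] = A1[1, 3] = 0.0424
B0 = np.array([[0.1113 * 0.333, 0, 0, 0],
               [0, 0.1042 * 0.307, 0, 0]])
B1 = np.array([[0, 0, 0, 0.1113 * (1 - 0.333)],
               [0, 0, 0.1042 * (1 - 0.307), 0]])
K  = np.array([[-0.1609, -0.1765, -0.0795, -0.2073],
               [-0.1977, -0.1579, -0.2288, -0.0772]])

C  = -A0 - A1
W1 = B0.T @ K
W2 = B1.T @ K
print("C =\n", C)
print("W1 =\n", np.round(W1, 4))
print("W2 =\n", np.round(W2, 4))
```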

In the quadruple-tank process, the water levels of the two lower tanks are the only accessible and usable information, whereas the water levels of the two upper tanks are not. Even if the water in the upper tanks overflows, no action can be taken. Therefore, there is a strong need to know the water levels of the upper tanks, and designing a controller for the considered system is one way to obtain this information. Until now, research on the quadruple-tank process has focused on designing sampled-data controllers. Therefore, in the sense of practical applications, it is worthwhile to apply the proposed method to the quadruple-tank process.

When the sampling interval is \(h=\,0.01\), the control gain matrix calculated by Theorem 3.1 can be expressed as

$$\begin{aligned} {\mathcal {K}}=\,\left[ \begin{array}{cccc} -23.6565 &{} -0.1534 &{} -0.0248 &{} 0.0039\\ -0.1534 &{} -23.6712 &{} -0.0055 &{} -0.0248\\ -0.0248 &{} -0.0055 &{} -23.6179 &{} -0.1509\\ 0.0039 &{} -0.0248 &{} -0.1509 &{} -23.6330 \end{array}\right] \end{aligned}$$
Fig. 3

Schematic representation of the quadruple-tank process. Source: From [59]

Fig. 4

State trajectories of the system (30) in Example 4.4

5 Conclusion

In this paper, we have established a novel framework for the passivity and passification of delayed neural networks with Markov jump parameters via non-uniform sampled-data control. By constructing a suitable Lyapunov–Krasovskii functional and using the Newton–Leibniz formula, delay-dependent passivity performance conditions have been derived in the form of LMIs. A mode-dependent controller design method has been presented to guarantee that the resulting Markov jump delayed neural networks are passive. Numerical examples have been given to show the effectiveness of the proposed method.