1 Introduction

Time delays are encountered in many dynamic systems, and they are a source of performance degradation and even instability. For this reason, the stability analysis of time-delayed systems has been one of the most active theoretical issues of the past few decades, and there are many stability results and many examples of practical applications [12, 16, 17].

Let us consider the time-delayed linear system described by

$$\begin{aligned} \left\{ \begin{array}{l} \dot{x} (t) = Ax(t)+ A_d x(t-d(t)),\\ x(\theta )= \psi (\theta ), ~\theta \in [-h,0]\end{array}\right. \end{aligned}$$
(1)

where \(x(t)\in {\mathbb {R}}^n, A, A_d \in {\mathbb {R}}^{n \times n}, \psi (\theta )\) is an initial condition, and d(t) is a time-varying delay satisfying

$$\begin{aligned} 0\le d(t) \le h,~\mu _1 \le \dot{d} (t) \le \mu _2 \le 1 \end{aligned}$$
(2)

where \(h, \mu _1 , \mu _2\) are scalars. The Lyapunov-Krasovskii functional (LKF) is a powerful tool for deriving stability criteria for systems with time-varying delays, and most recent results have adopted it. The LKF approach consists of two steps: the first is a suitable choice of an LKF, and the second is to find a less conservative upper bound on its time-derivative in the form of an LMI by using various inequalities.
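To fix ideas, the following minimal Python sketch simulates system (1) by forward Euler with a history buffer. It is an illustration only: the matrices are those of Example 1 in Section 4, while the sinusoidal delay profile, step size, and initial condition are assumptions chosen so that (2) holds with \(-\mu _1 = \mu _2 = \mu \).

```python
import numpy as np

# Illustrative sketch (assumed data: Example 1 matrices, sinusoidal delay):
# simulate system (1) by forward Euler with a history buffer.
A  = np.array([[-2.0, 0.0], [0.0, -0.9]])
Ad = np.array([[-1.0, 0.0], [-1.0, -1.0]])
h, mu, dt, T = 2.0, 0.1, 1e-2, 30.0

n_hist = int(h / dt)                 # number of stored past samples
x = np.ones(2)                       # psi(theta) = [1, 1]^T on [-h, 0]
hist = [x.copy()] * (n_hist + 1)     # hist[k] approximates x(t - k*dt)

for k in range(int(T / dt)):
    t = k * dt
    d = h / 2 + (h / 2) * np.sin(2 * mu * t / h)    # 0 <= d(t) <= h, |d'(t)| <= mu
    x_del = hist[min(int(round(d / dt)), n_hist)]   # x(t - d(t)), nearest sample
    x = x + dt * (A @ x + Ad @ x_del)               # Euler step of (1)
    hist.insert(0, x.copy())
    hist.pop()

print("|x(T)| =", np.linalg.norm(x))  # decays toward 0 for this stable pair (h, mu)
```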

In terms of choosing an appropriate LKF, a simple type of LKF was introduced first [15], and it was subsequently enriched with terms containing more information on the time delay, the system, and cross-terms of variables: the augmented LKF [1], the multiple-integral LKF [2], the matrix-refined-functional LKF [7], the delay-product LKF [8], and the delay-partitioning LKF [9]. As for integral inequalities bounding the quadratic integral terms, Jensen's inequality was the first powerful one [15]; it was later sharpened to the Wirtinger-based integral inequality [3], the free-matrix-based integral inequality [4], and the Bessel-Legendre integral inequality [10]. Except for the free-matrix-based integral inequality, all of the above require an additional inequality, such as the reciprocally convex inequality [7], to obtain a bound in the form of an LMI.

Recently, in order to obtain less conservative stability results, there have been attempts to use a quadratic matrix inequality with a constraint (i.e., \(d^2 (t) M_2 + d(t) M_1 + M_0 <0, \forall d(t) \in [0,h]\)) rather than an affine form (i.e., \(d(t) M_1 + M_0 <0 , \forall d(t) \in [0,h]\)). A sufficient condition for converting the former into an LMI was presented in [5], and after that, two different types of necessary and sufficient conditions were presented [12,13,14,15].

In this paper, we propose another form of LMI condition guaranteeing the negativity of a constrained quadratic matrix inequality. By using it, the constrained quadratic forms, namely the upper bound of the time-derivative of the LKF as well as the constrained quadratic form of the reciprocally convex inequality, are expressed in the form of LMIs. The usefulness of our results is shown by two well-known examples.

2 Preliminaries

The following lemmas will be used to prove the main result. First, Lemma 1 is the well-known second-order Bessel-Legendre inequality [10].

Lemma 1

Let \(0< R=R^T \in {\mathbb {R}}^{n \times n}\), then we have

$$\begin{aligned} -\int _{a}^b {\dot{x}}^T (s) R \dot{x} (s) ds \le - {1 \over b-a} \varOmega ^T \mathrm{diag} \left\{ R, 3R, 5R \right\} \varOmega \end{aligned}$$

where \(\varOmega =\mathrm{col}\{\varOmega _1 , \varOmega _2 , \varOmega _3 \}\) with \( \varOmega _1 = x(b)-x(a), \varOmega _2 = x(b)+x(a) - {2 \over b-a} \int _a^b x(s)ds, \varOmega _3 = \varOmega _1 + {6 \over b-a}\int _a^b x(s) ds - {12 \over (b-a)^2} \int _a^b (s-a) x(s)ds .\)
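As a quick numerical illustration of Lemma 1 (not part of the proof), the following Python snippet checks the inequality for the scalar case \(n=1\) with the assumed data \(R=1\), \(x(s)=\sin s\), and \([a,b]=[0,2]\):

```python
import numpy as np
from scipy.integrate import quad

# Numerical spot-check of Lemma 1 (illustration, assumed scalar data):
# n = 1, R = 1, x(s) = sin(s), [a, b] = [0, 2].
a, b, R = 0.0, 2.0, 1.0
x, dx = np.sin, np.cos

lhs = -quad(lambda s: dx(s) * R * dx(s), a, b)[0]       # -int xdot R xdot ds

ix  = quad(x, a, b)[0]                                  # int x(s) ds
isx = quad(lambda s: (s - a) * x(s), a, b)[0]           # int (s-a) x(s) ds
w1 = x(b) - x(a)
w2 = x(b) + x(a) - 2.0 / (b - a) * ix
w3 = w1 + 6.0 / (b - a) * ix - 12.0 / (b - a) ** 2 * isx
rhs = -(R * w1**2 + 3 * R * w2**2 + 5 * R * w3**2) / (b - a)

print(lhs <= rhs)   # True: the inequality of Lemma 1 holds for this trajectory
```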

The following Lemma 2 is an extension of the extended reciprocally convex lemma in [6].

Lemma 2

Let \( {\tilde{R}}, X_1 , X_2 , Z_1 , Z_2 \in {\mathbb {R}}^{N \times N}\) be symmetric matrices with \(0< {\tilde{R}}\), and let \(Y_0, Y_1, Y_2 \in {\mathbb {R}}^{N \times N}\) be square matrices. If

$$\begin{aligned} \alpha ^2 \left[ \begin{array}{cc}-X_2&{}-Y_2\\ \star &{}Z_2\end{array}\right] +\alpha \left[ \begin{array}{cc} {\tilde{R}} - X_1&{}-Y_1 \\ \star &{}- {\tilde{R}}-Z_2+Z_1\end{array}\right] \nonumber \\ + \left[ \begin{array}{cc}- {\tilde{R}}&{}-Y_0\\ \star &{}-Z_1\end{array}\right] <0,~ \forall \alpha \in [0,1], \end{aligned}$$
(3)

then the following inequality holds \(\forall \alpha \in (0,1)\)

$$\begin{aligned}&-\left[ \begin{array}{cc} {1 \over \alpha } {\tilde{R}}&{}0\\ 0&{} {1\over 1-\alpha } {\tilde{R}}\end{array}\right] < \left[ \begin{array}{cc}-2{\tilde{R}} +X_1&{}Y_0\\ \star &{}-{\tilde{R}}\end{array}\right] \nonumber \\&\quad +\alpha \left[ \begin{array}{cc} {\tilde{R}}+X_2 - X_1&{}Y_1\\ \star &{} -{\tilde{R}} +Z_1\end{array}\right] + \alpha ^2 \left[ \begin{array}{cc} -X_2 &{}Y_2 \\ \star &{}Z_2 \end{array}\right] \end{aligned}$$
(4)

where \(\beta = 1-\alpha \).

Proof

First, note that the following inequality is equivalent to (3)

$$\begin{aligned} -\left[ \begin{array}{cc} \beta {\tilde{R}}&{}0\\ 0&{} \alpha {\tilde{R}}\end{array}\right] < \left[ \begin{array}{cc}\alpha (X_1 + \alpha X_2 )&{}Y_0 + \alpha Y_1 + \alpha ^2 Y_2\\ \star &{} \beta (Z_1 + \alpha Z_2 )\end{array}\right] . \end{aligned}$$

Next, pre- and post-multiply by the block-diagonal matrix \( \mathrm{diag} \left\{ \sqrt{ \beta \over \alpha } I_N , \sqrt{\alpha \over \beta } I_N \right\} \) to get

$$\begin{aligned}&-\left[ \begin{array}{cc} { {\beta }^2 \over \alpha } {\tilde{R}}&{}0\\ 0&{} {\alpha ^2 \over \beta } {\tilde{R}}\end{array}\right] < \left[ \begin{array}{cc} \beta (X_1 + \alpha X_2 )&{}Y_0 + \alpha Y_1 + \alpha ^2 Y_2\\ \star &{}\alpha (Z_1 + \alpha Z_2 )\end{array}\right] . \end{aligned}$$

Finally, by using the relations \( - { {\beta }^2 \over \alpha } \tilde{R} = - { 1 \over \alpha } {\tilde{R}} + (2-\alpha ) {\tilde{R}}\) and \(-{\alpha ^2 \over \beta } {\tilde{R}} = - {1 \over \beta } {\tilde{R}} + (1+\alpha ) {\tilde{R}}\) , we can easily get (4). This completes the proof. \(\square \)
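The two relations used in this last step can be spot-checked symbolically; the following short sympy script (an illustration only) verifies both scalar identities with \(\beta = 1-\alpha \):

```python
import sympy as sp

# Symbolic spot-check of the two relations used in the proof of Lemma 2,
# with beta = 1 - alpha and 0 < alpha < 1.
alpha = sp.symbols('alpha', positive=True)
beta = 1 - alpha

lhs1 = -beta**2 / alpha      # claimed equal to -1/alpha + (2 - alpha)
lhs2 = -alpha**2 / beta      # claimed equal to -1/beta + (1 + alpha)
print(sp.simplify(lhs1 - (-1 / alpha + 2 - alpha)))   # 0
print(sp.simplify(lhs2 - (-1 / beta + 1 + alpha)))    # 0
```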

The following Lemma 3 gives a necessary and sufficient condition for the negativity of a second-order matrix-valued polynomial on a closed interval.

Lemma 3

Let \( A_2 , A_1 , A_0 \in {\mathbb {R}}^{n \times n}\) be symmetric matrices, and let \(B ,M \in {\mathbb {R}}^{n \times n}\) be square matrices with \(M^T + M >0\). Then the following holds

$$\begin{aligned}&z^2 A_2 + z [A_1 + B^T + B ] + A_0 < 0, \forall z \in [0,h] \end{aligned}$$
(5)
$$\begin{aligned}\Leftrightarrow & {} \varGamma _1 = \left[ \begin{array}{cc}A_0 &{}{1 \over 2} A_1 + B +hM\\ {1 \over 2} A_1 + B^T +hM^T&{} A_2 -(M^T + M)\end{array}\right] < 0 \end{aligned}$$
(6)

Proof

Applying the relation

$$\begin{aligned} 0\le z \le h \Leftrightarrow z(z-h) \le 0, \end{aligned}$$
(7)

and the S-procedure in turn, we get

$$\begin{aligned} (5)\Leftrightarrow & {} \xi ^T \left\{ z^2 A_2 + z [A_1 + B^T + B ] + A_0 \right\} \xi< 0, \\&\qquad \qquad \qquad \qquad \forall \xi \ne 0, \forall z \in [0,h] \\\Leftrightarrow & {} \xi ^T \left\{ z^2 A_2 + z [A_1 + B^T + B ] + A_0 \right\} \xi< 0, \\&\qquad \qquad \forall \xi \ne 0,~\mathrm{whenever} ~z(z-h) \le 0 \\\Leftrightarrow & {} \xi ^T \left\{ z^2 A_2 + z [A_1 + B^T + B ] + A_0 \right\} \xi \\&\qquad - \xi ^T (M^T + M) \xi z(z-h)< 0, \forall \xi \ne 0\\\Leftrightarrow & {} {\hat{\xi }}^T \left[ \begin{array}{cc}A_0 &{}{1\over 2}A_1 + B+h M \\ {1\over 2}A_1 + B^T +h M^T &{} A_2 -(M+M^T )\end{array}\right] {\hat{\xi }} < 0, \\&\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \forall {\hat{\xi }} \ne 0, \\\Leftrightarrow & {} (6) \end{aligned}$$

where \( {\hat{\xi }} ^T = \left[ \begin{array}{cc} \xi ^T&z \xi ^T \end{array}\right] \). This completes the proof. \(\square \)
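As an illustration of Lemma 3, the following Python snippet checks both sides of the equivalence on assumed scalar data (\(n=1\), so \(B+B^T=2B\)), where \(M=0.2\) is one admissible choice:

```python
import numpy as np

# Illustrative check of Lemma 3 with assumed scalar data (n = 1):
# p(z) = z^2*A2 + z*(A1 + 2*B) + A0 on [0, h], and the LMI (6) with M = 0.2.
h = 2.0
A2, A1, A0, B, M = -0.2, 0.5, -1.0, 0.0, 0.2   # M + M^T = 0.4 > 0

# Condition (5): p(z) < 0 on a fine grid of [0, h]
z = np.linspace(0.0, h, 201)
print(np.all(z**2 * A2 + z * (A1 + 2 * B) + A0 < 0))      # True

# Condition (6): Gamma_1 is negative definite for this choice of M
Gamma1 = np.array([[A0,                   0.5 * A1 + B + h * M],
                   [0.5 * A1 + B + h * M, A2 - 2 * M          ]])
print(np.all(np.linalg.eigvalsh(Gamma1) < 0))             # True
```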

Remark 1

For comparison, we recall two previous results having different forms:

  1. (I)

The result in [13, 14]:

$$\begin{aligned} (5) \Leftrightarrow \varGamma _2 = \left[ \begin{array}{cc}A_0 &{}{1 \over 2} (A_1 + B +B^T ) -{h \over 2}(G-D)\\ \star &{} A_2 -D\end{array}\right] < 0 \end{aligned}$$

    where \(D=D^T >0 , G=-G^T\).

  2. (II)

    The result in [15]:

$$\begin{aligned} (5) \Leftrightarrow \varGamma _3= \left[ \begin{array}{cc}A_0 &{}{1 \over 2} (A_1 + B +B^T )+ hM \\ \star &{} A_2 -(M^T +M)\end{array}\right] < 0 \end{aligned}$$

where M is a square matrix of appropriate dimensions with \(M+M^T > 0\).

As we can see above, \(\varGamma _1 (i,i)= \varGamma _2 (i,i) = \varGamma _3 (i,i)\) for \(i=1,2\), with \(D=M+M^T\). Also, \(\varGamma _3(1,2)=\varGamma _1(1,2)+ {1\over 2} (B^T - B)\) and, taking \(M=M^T={1\over 2}D\) in \(\varGamma _1\), \(\varGamma _2(1,2)=\varGamma _1(1,2)+ {1\over 2} (B^T - B)- {h \over 2}G\), where \({1\over 2}(B^T - B)\) and \({h\over 2}G\) are skew-symmetric matrices. Hence the three conditions differ only in the skew-symmetric part of the off-diagonal block.

3 Main result

Now we give the main result guaranteeing the stability of the system (1) under the constraints in (2).

Theorem 1

Let \(P_0 , P_1\in {\mathbb {R}}^{7n \times 7n}, S_1 , S_2 \in {\mathbb {R}}^{5n \times 5n}, Q_1,\) \( Q_2, X_1 , X_2, Z_1 , Z_2 \in {\mathbb {R}}^{3n \times 3n}, R\in {\mathbb {R}}^{n \times n} \) be symmetric matrices, and let \(Y_0, Y_1 , Y_2 , M_0 , M_1 , M_2 \in {\mathbb {R}}^{3n \times 3n}\) be square matrices. If the following LMIs are satisfied

$$\begin{aligned}&P_0 , P_0 + hP_1, S_1 , S_2 , Q_1 , Q_2, R >0, \end{aligned}$$
(8)
$$\begin{aligned}&M_i^T + M_i >0, i=0,1,2, \end{aligned}$$
(9)
$$\begin{aligned}&\left[ \begin{array}{cc} {\hat{A}}_0&{} {1\over 2}{\hat{A}}_1 + {\hat{B}} + M_0\\ \star &{} {\hat{A}}_2 -(M_0^T+M_0)\end{array}\right] < 0, \end{aligned}$$
(10)
$$\begin{aligned}&\left[ \begin{array}{cc} A_0(\mu _1)&{} {1\over 2}A_1 +B(\mu _1)+ h M_1\\ \star &{} A_2(\mu _1)-(M_1^T+M_1)\end{array}\right] < 0, \end{aligned}$$
(11)
$$\begin{aligned}&\left[ \begin{array}{cc} A_0(\mu _2)&{} {1\over 2}A_1 +B(\mu _2)+ h M_2\\ \star &{} A_2(\mu _2)-(M_2^T+M_2)\end{array}\right] < 0, \end{aligned}$$
(12)

then the time-delayed linear system (1) with constraint (2) is asymptotically stable. Here

$$\begin{aligned}&{\hat{A}}_0 = \left[ \begin{array}{cc}- {\tilde{R}}&{}-Y_0\\ \star &{}-Z_1\end{array}\right] ,~ {\hat{A}}_1 = \left[ \begin{array}{cc} {\tilde{R}} - X_1&{}0\\ \star &{}- {\tilde{R}}-Z_2+Z_1\end{array}\right] , \\&{\hat{B}}= \left[ \begin{array}{cc}0&{}-Y_1\\ 0&{}0\end{array}\right] , ~~{\hat{A}}_2 = \left[ \begin{array}{cc}-X_2&{}-Y_2\\ \star &{}Z_2\end{array}\right] ,\\&A_0 (\dot{d} (t)) =\dot{d} (t) E_1^T P_1 E_1 + He \{E_1^T P_0 E_3\} \\&~+ \dot{d} (t) E_4^T S_1 E_4 + He\{E_4^T S_1 E_5\} -\dot{d} (t) E_7^T S_2 E_7 \\&~+ He\{E_7^T S_2 E_8\} +E_9^T Q_1 E_9 -(1-\dot{d} (t)) E_{10}^T Q_1 E_{10} \\&~+ He\{E_{12}^T Q_1 E_{15}\}+(1-\dot{d} (t)) E_{10}^T Q_2 E_{10} \\&~ -E_{16}^T Q_2 E_{16} + (1-\dot{d}(t)) He\{E_{18}^T Q_2 E_{21}\} \\&~+h^2 A_c^T R A_c +E_a^T (-2 {\tilde{R}} + X_1 ) E_a + He \{E_a^T Y_0 E_b \}\\&~ -E_b^T {\tilde{R}} E_b ,\\&A_1 = {1\over h}\left[ E_a^T ({\tilde{R}} + X_2 - X_1 ) E_a +E_b^T (-{\tilde{R}} + Z_1 )E_b \right] , \\&B (\dot{d} (t)) =\dot{d} (t) E_2^T P_1 E_1 + E_2^T P_0 E_3 + E_1^T P_1 E_3 \\&~+ E_4^T S_1 E_6 - E_7^T S_2 E_6 -(1-\dot{d} (t)) E_{10}^T Q_1 E_{11} \\&~+ E_{13}^T Q_1 E_{15}+ E_{16}^T Q_2 E_{17} +(1-\dot{d} (t)) E_{19}^T Q_2 E_{21} \\&~+ {1\over h} E_a^T Y_1 E_b, \\&A_2 (\dot{d} (t)) =\dot{d} (t) E_2^T P_1 E_2 + He\{ E_2^T P_1 E_3 \} \\&~-(1-\dot{d} (t)) E_{11}^T Q_1 E_{11} + He\{E_{14}^T Q_1 E_{15} \} \\&~ - E_{17}^T Q_2 E_{17}+(1-\dot{d} (t)) He\{E_{20}^T Q_2 E_{21}\}\\&~+{1\over h^2}\left[ - E_a^T X_2 E_a +He \{E_a^T Y_2 E_b\} +E_b^T Z_2 E_b \right] , \end{aligned}$$

where the vectors and matrices used above are defined as

$$\begin{aligned}&{\tilde{R}}= \mathrm{diag}\{R,3R,5R\}, ~ A_c = Ae_1 + A_d e_2, \\&e_i = [0_{n \times (i-1)n}~~I_{n \times n}~~0_{n \times (9-i)n}], i=1,2,\cdots ,9,\\&e_0=0_{n \times 9n},~{\tilde{e}}_2 = (1-\dot{d} (t)) e_2 ,~{\tilde{e}}_4 = (1-\dot{d} (t)) e_4 ,\\&{\tilde{e}}_6 = (1-\dot{d} (t)) e_6, \\&E_1 = \mathrm{col}\left\{ e_1, e_2, e_3, e_0 , e_0, he_7, he_9 \right\} ,\\&E_2 = \mathrm{col}\left\{ e_0,e_0,e_0 , e_6, e_8, -e_7 , -e_9 \right\} ,\\&E_3 = \mathrm{col} \{A_c, {\tilde{e}}_4, e_5, e_1 -{\tilde{e}}_2, e_1 - {\tilde{e}}_6 -\dot{d} (t) e_8 , {\tilde{e}}_2 - e_3 , \\&\qquad \qquad \qquad {\tilde{e}}_2 -e_7 + \dot{d} (t) e_9 \},\\&E_4 = \mathrm{col}\left\{ e_1 , e_2, e_3, e_6 , e_8 \right\} ,\\&E_5 = \mathrm{col}\left\{ e_0 , e_0 , e_0 , e_1 -{\tilde{e}}_2 -\dot{d} (t) e_6 , e_1 - {\tilde{e}}_6 - 2\dot{d} (t) e_8 \right\} ,\\&E_6 = \mathrm{col}\left\{ A_c , {\tilde{e}}_4, e_5 , e_0 , e_0 \right\} ,\\&E_7 = \mathrm{col}\left\{ e_1 ,e_2, e_3, e_7 , e_9 \right\} ,\\&E_8 = \mathrm{col}\left\{ hA_c , h{\tilde{e}}_4 , he_5, {\tilde{e}}_2 - e_3 + \dot{d} (t) e_7 , {\tilde{e}}_2 - e_7 + 2\dot{d}(t) e_9 \right\} ,\\&E_9 = \mathrm{col}\left\{ e_0,e_1,A_c \right\} ,~E_{10} = \mathrm{col}\left\{ e_0,e_2,e_4 \right\} ,\\&E_{11} = \mathrm{col}\left\{ e_6,e_0, e_0 \right\} ,~E_{12} = \mathrm{col}\left\{ e_0,e_0,e_1 - e_2 \right\} ,\\&E_{13} = \mathrm{col}\left\{ e_0,e_6,e_0 \right\} ,~E_{14} = \mathrm{col}\left\{ e_8,e_0,e_0 \right\} ,\\&E_{15} = \mathrm{col}\left\{ e_1,e_0,e_0 \right\} ,~E_{16} = \mathrm{col}\left\{ he_7,e_3,e_5 \right\} ,\\&E_{17} = \mathrm{col}\left\{ e_7,e_0, e_0 \right\} ,~E_{18} = \mathrm{col}\left\{ h^2 e_9, he_7 , e_2 - e_3 \right\} ,\\&E_{19} = \mathrm{col}\left\{ -2h e_9, -e_7 , e_0 \right\} ,\\&E_{20} = \mathrm{col}\left\{ e_9,e_0,e_0 \right\} ,~E_{21} = \mathrm{col}\left\{ e_2,e_0,e_0 \right\} ,\\&E_{a} = \mathrm{col}\left\{ e_1 -e_2 , e_1 + e_2 - 2 e_6 , e_1 - e_2 + 6 e_6 -12 e_8 \right\} ,\\&E_{b} = \mathrm{col}\left\{ e_2 -e_3 , e_2 + e_3 - 2 e_7 , e_2 - e_3 + 6 e_7 -12 e_9 \right\} . \end{aligned}$$

Proof

Let us consider a quadratic functional

$$\begin{aligned} V(x_t)= & {} \eta _1^T (t) [P_0 + d(t) P_1 ] \eta _1 (t) +d(t) \eta _2^T (t) S_1 \eta _2 (t) \nonumber \\&+ h_d (t) \eta _3^T (t) S_2 \eta _3 (t) +\int _{t_d}^t w_1^T (t,s) Q_1 w_1(t,s) ds \nonumber \\&+\int _{t_h}^{t_d} w_2^T (t,s) Q_2 w_2(t,s) ds \nonumber \\&+h\int _{t_h}^{t} (h-t+s) \dot{x}^T (s) R \dot{x} (s) ds \end{aligned}$$
(13)

where \(t_d = t-d(t),~ t_h = t-h,~ h_d (t) = h-d(t)\), and

$$\begin{aligned}\left\{ \begin{array}{l} \eta _0 (t) = \mathrm{col}\left\{ x(t),x(t_d),x(t_h) \right\} , \\ \eta _1 (t)= \mathrm{col}\left\{ \eta _0 (t), d(t)u_1 (t), d(t)u_2 (t), h_d(t)v_1(t), h_d(t)v_2(t) \right\} , \\ \eta _2 (t)= \mathrm{col}\left\{ \eta _0 (t), u_1 (t), u_2 (t) \right\} , \\ \eta _3 (t) = \mathrm{col}\left\{ \eta _0 (t) , v_1(t) ,v_2 (t)\right\} , \\ w_1(t,s)= \mathrm{col}\left\{ \int _s^t x(r)dr , x(s), \dot{x} (s) \right\} ,\\ w_2(t,s)= \mathrm{col}\left\{ \int _s^{t_d}x(r)dr, x(s) , \dot{x} (s) \right\} , \end{array}\right. \end{aligned}$$

with \(u_1 (t) = {1 \over d(t)}\int _{t_d}^t x(s)ds,~ u_2 (t) = {1 \over d^2 (t)}\int _{t_d}^t (s-t_d ) x(s)ds,~ v_1 (t) = {1 \over h_d (t)}\int _{t_h}^{t_d} x(s)ds,~ v_2 (t) = {1 \over h_d^2 (t)}\int _{t_h}^{t_d} (s-t_h ) x(s)ds\), and the augmented vector \(\xi _t = \mathrm{col}\{ x(t), x(t_d ), x(t_h ), \dot{x} (t_d ), \dot{x} (t_h ), u_1 (t), v_1 (t), u_2 (t), v_2 (t)\}\), so that \(e_i \xi _t\) selects the \(i\)-th block of \(\xi _t\).

Then, from (8), \(V(x_t)\) in (13) is a valid LKF candidate. Computing its time-derivative along the trajectories of (1) gives

$$\begin{aligned}&\dot{V} (x_t ) = \dot{d} (t) \eta _1^T (t) P_1 \eta _1 (t) +2 \eta _1^T (t) [P_0 +d(t)P_1] {\dot{\eta }}_1 (t) \nonumber \\&~+\dot{d} (t) \eta _2^T (t) S_1 \eta _2 (t) +2 \eta _2^T (t) S_1 [d (t) {\dot{\eta }}_2 (t)] \nonumber \\&~-\dot{d} (t) \eta _3^T (t) S_2 \eta _3 (t) + 2 \eta _3^T (t) S_2 [h_d(t) {\dot{\eta }}_3 (t) ]\nonumber \\&~+ w_1^T (t,t) Q_1 w_1(t,t) -(1-\dot{d} (t)) w_1^T (t,t_d) Q_1 w_1 (t,t_d) \nonumber \\&~+ 2\int _{t_d}^t w_1^T(t,s) Q_1 {\partial \over \partial t} w_1 (t,s)ds \nonumber \\&~+(1-\dot{d} (t)) w_2^T (t,t_d) Q_2 w_2(t,t_d) - w_2^T (t,t_h) Q_2 w_2 (t,t_h) \nonumber \\&~+ 2\int _{t_h}^{t_d} w_2^T(t,s) Q_2 {\partial \over \partial t} w_2 (t,s)ds\nonumber \\&~+ h^2 \dot{x}^T (t) R \dot{x} (t) + v_a (x_t ) \nonumber \\&=\xi _t^T \left\{ \dot{d} (t) (E_1+d(t)E_2)^T P_1(E_1 + d(t)E_2 ) \right. \nonumber \\&~+ 2(E_1+d(t)E_2 )^T (P_0 +d(t)P_1 )E_3\nonumber \\&~+\dot{d} (t)E_4^T S_1 E_4 + 2E_4^T S_1 (E_5+d(t)E_6 ) -\dot{d} (t) E_7^T S_2 E_7 \nonumber \\&~+ 2 E_7^T S_2 (E_8 -d(t) E_6 ) + E_9^T Q_1 E_9 \nonumber \\&~-(1-\dot{d}(t)) [E_{10}+d(t)E_{11}]^T Q_1 [E_{10}+d(t)E_{11}] \nonumber \\&~+2[E_{12}+d(t)E_{13}+d^2(t)E_{14}]^T Q_1 E_{15} \nonumber \\&~+ (1-\dot{d} (t))E_{10}^T Q_2 E_{10} \nonumber \\&~- [E_{16}-d(t)E_{17}]^T Q_2 [E_{16}-d(t)E_{17}] \nonumber \\&~\left. +2(1-\dot{d}(t))[E_{18}+d(t)E_{19}+d^2 (t)E_{20}]^T Q_2 E_{21} \right\} \xi _t \nonumber \\&~ + v_a (x_t ) \end{aligned}$$
(14)

where \(v_a (x_t ) = -h \int _{t-h}^t \dot{x}^T (s) R \dot{x} (s) ds\) .

Applying Lemma 1 to \(v_a (x_t )\) on each of the intervals \([t_d , t]\) and \([t_h , t_d ]\), with \(\alpha = {d(t) \over h}\), gives

$$\begin{aligned}&v_a (x_t ) \le - {1\over \alpha } \xi _t^T E_a^T {\tilde{R}} E_a \xi _t - {1\over 1-\alpha } \xi _t^T E_b^T {\tilde{R}} E_b \xi _t \end{aligned}$$
(15)

From Lemma 3 (applied with h replaced by 1), (9)–(10) is equivalent to (3), which means that (4) can be used under (9)–(10). Applying this fact, with \(\alpha = {d(t) \over h} \in [0,1]\), to (15) gives

$$\begin{aligned}&v_a (x_t ) \le \xi _t^T {\left[ \begin{array}{c}E_a\\ E_b \end{array}\right] }^T \left\{ \left[ \begin{array}{cc}-2{\tilde{R}} +X_1&{}Y_0\\ \star &{}-{\tilde{R}}\end{array}\right] \right. \nonumber \\&\qquad \qquad \qquad +{d(t) \over h} \left[ \begin{array}{cc} {\tilde{R}}+X_2 - X_1&{}Y_1\\ \star &{} -{\tilde{R}} +Z_1\end{array}\right] \nonumber \\&\qquad \qquad \qquad + \left. \left( d(t) \over h \right) ^2 \left[ \begin{array}{cc} -X_2 &{}Y_2 \\ \star &{}Z_2 \end{array}\right] \right\} \left[ \begin{array}{c}E_a\\ E_b \end{array}\right] \xi _t . \end{aligned}$$
(16)

Combining (14) and (16), we get

$$\begin{aligned} \dot{V} (x_t)\le & {} \xi _t^T \Big \{ A_0 ( \dot{d} (t)) + d(t) [A_1 + B(\dot{d}(t)) + B^T (\dot{d}(t)) ] \\&\qquad + d^2 (t) A_2 (\dot{d}(t)) \Big \} \xi _t \\=: & {} \xi _t^T \Big \{ \varOmega [d(t), \dot{d} (t) ] \Big \} \xi _t , \end{aligned}$$

where \( \varOmega [d(t), \dot{d} (t)] = A_0 ( \dot{d} (t)) + d(t) [A_1 + B(\dot{d}(t)) + B^T (\dot{d}(t)) ] + d^2 (t) A_2 (\dot{d}(t)) \) is quadratic in the time delay \(d(t) \in [0,h]\) and affine in its time-derivative \( \dot{d} (t)\).

Finally, since \(\varOmega [d(t), \dot{d} (t)]\) is affine in \(\dot{d} (t)\), its negativity for all \(\dot{d} (t) \in [\mu _1 , \mu _2 ]\) follows from its negativity at the two endpoints \(\mu _1\) and \(\mu _2\), which is guaranteed by (11) and (12) through Lemma 3. Hence

$$\begin{aligned} (8)-(12) \Rightarrow \varOmega [d(t), \dot{d} (t) ]<0,~\mathrm{under}~ (2), \end{aligned}$$

and consequently,

$$\begin{aligned}(8)-(12) \Rightarrow \dot{V}(x_t) < 0, \forall \xi _t \ne 0,~\mathrm{under}~ (2), \end{aligned}$$

which establishes the asymptotic stability of the time-delayed linear system (1) under the constraints in (2). This completes the proof. \(\square \)

4 Numerical Examples

To show the usefulness of our result, we give two well-known examples (see [3, 5, 13,14,15]) with various values of \(\mu \), where \(-\mu _1 = \mu _2 = \mu \).

Example 1

Let us consider the time-delayed system with

$$\begin{aligned} A=\left[ \begin{array}{cc}-2&{}0\\ 0&{}-0.9\end{array}\right] ,~A_d =\left[ \begin{array}{cc}-1&{}0\\ -1&{}-1\end{array}\right] . \end{aligned}$$
(17)

The following Table 1 shows the comparative results.

Table 1 The allowable maximal bound of delay
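For reference, the entries of Tables 1 and 2 are typically obtained by bisection on h over the feasibility of the LMIs (8)–(12). The sketch below illustrates this procedure for (17); `theorem1_feasible` is a hypothetical placeholder for an SDP feasibility call implementing Theorem 1, not an implementation provided by this paper.

```python
import numpy as np

# Hypothetical oracle: returns True iff the LMIs (8)-(12) of Theorem 1 are
# feasible for the pair (h, mu), with mu1 = -mu and mu2 = mu.  Its body
# would assemble the matrices of Theorem 1 and call an SDP solver.
def theorem1_feasible(A, Ad, h, mu):
    raise NotImplementedError   # placeholder only, not implemented here

def max_delay_bound(A, Ad, mu, h_lo=0.0, h_hi=10.0, tol=1e-3):
    # Bisection invariant: LMIs feasible at h_lo, infeasible at h_hi.
    while h_hi - h_lo > tol:
        h_mid = 0.5 * (h_lo + h_hi)
        if theorem1_feasible(A, Ad, h_mid, mu):
            h_lo = h_mid
        else:
            h_hi = h_mid
    return h_lo

A  = np.array([[-2.0, 0.0], [0.0, -0.9]])   # system (17)
Ad = np.array([[-1.0, 0.0], [-1.0, -1.0]])
# max_delay_bound(A, Ad, mu=0.1)  # would reproduce the corresponding Table 1 entry
```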

Example 2

Let us consider the time-delayed system with

$$\begin{aligned} A=\left[ \begin{array}{cc}0&{}1\\ -1&{}-2\end{array}\right] ,~A_d =\left[ \begin{array}{cc}0&{}0\\ -1&{}1\end{array}\right] . \end{aligned}$$
(18)

The following Table 2 shows the comparative results.

Table 2 The allowable maximal bound of delay

As we can see in Tables 1 and 2 above, our result improves the allowable delay bounds.

Finally, the number of decision variables needed to compute the allowable maximal delay bound is given in Table 3.

Table 3 The number of variables (\(an^2 + bn\))

As expected, the results based on the constrained quadratic inequality ([13,14,15] and this paper) require a larger number of variables than [5], which uses an affine inequality.

5 Conclusion

In this paper, the stability of time-delayed linear systems has been considered. First, a reciprocally convex inequality in the form of a constrained quadratic matrix inequality has been derived. Second, an equivalent transformation of the constrained quadratic matrix inequality into an LMI has been derived. Third, the upper bound of the time-derivative of the LKF, which takes the constrained quadratic matrix form, has been obtained and converted into an LMI using the derived equivalent transformation. Finally, the usefulness of our result has been shown through two well-known examples.