1 Introduction

In this article, we consider a class of time-delay nonlinear systems described by

$$\begin{aligned} {\left\{ \begin{array}{ll} \dot{x}_i(t)\,=x_{i+1}(t)+ f_i(t,x(t),x(t-\tau (t))),\\ ~i=1,\ldots ,n-1, \\ \dot{x}_n(t) =u(t)+ f_n(t,x(t),x(t-\tau (t))),\\ y(t)~\,=\theta (t)x_{1}(t), \end{array}\right. } \end{aligned}$$
(1)

where \(x(t)=[x_{1}(t),\ldots ,x_{n}(t)]^{T}\in {\mathbb {R}}^{n}\), \(u(t)\triangleq x_{n+1}(t)\in {\mathbb {R}}\), \(y(t)\in {\mathbb {R}}\) are the state, the input and the output, respectively; the unknown continuous function \(\theta (\cdot )\) denotes the time-varying measurement error; \(x(t-\tau (t))=[x_{1}(t-\tau (t)),\ldots ,x_{n}(t-\tau (t))]^T\), and the unknown time-varying delay \(\tau (t)\) satisfies \(0\le \tau (t) \le {\widetilde{\tau }}, {\dot{\tau }}(t)\le {\overline{\tau }}<1\) with \({\widetilde{\tau }}\) and \({\overline{\tau }}\) being known nonnegative constants; \(f_{1}(\cdot ),\ldots ,f_n(\cdot )\), known as the nonlinearities of system (1), are unknown continuous functions. The initial condition is \(x(\Xi )=\zeta _{0}(\Xi )\) for any \(\Xi \in [-{\widetilde{\tau }},0]\), where \(\zeta _{0}(\cdot )\) is a specified continuous function.

When \(\tau (t)\equiv 0\), system (1) is referred to as a class of feedback nonlinear systems whose control design has attracted considerable attention since the 1990s, in which the celebrated backstepping method [1, 2] emerged to provide a systematic methodology for constructing the desired controller. Soon afterward, the adding-a-power-integrator method [3] was proposed to overcome the obstacles of backstepping; that is, the precise cancelation in the design procedure is replaced by domination to deal with strong nonlinearities. Thanks to these two methods, a series of interesting results have been obtained; see [4,5,6,7,8,9,10,11,12,13,14,15,16] and the references therein. When \(\tau (t)\not \equiv 0\), system (1) is called a class of time-varying delayed nonlinear systems. Time delay widely exists in various control systems and sometimes benefits the control, such as in the damping and stabilization of ordinary differential equations. More often, however, time delay is viewed as an undesirable factor that tends to destabilize control systems by deteriorating their performance. Thus, the stability analysis and control design of time-delay systems have gained a great deal of attention, and a series of research results have been achieved, such as [17,18,19,20,21,22,23,24,25] and the references therein.

It is necessary to point out that uncertainties/unknowns (such as an unknown growth rate and an unknown output) also exist in practical systems, and these factors make control design difficult. Hence, it is of much importance to investigate their effects on control design and the strategies for handling them. Since the nonlinearities are restricted by uncertainties, designers tend to impose assumptions on the nonlinearities. Generally speaking, there are two types of growth rates. The first is the linear growth rate; that is, the nonlinearities are bounded by a positive constant multiplied by a linear sum of unmeasurable state variables. Whether or not the constant is known, the system is linear in nature, so combining the construction of observers with controllers applicable to nonlinear systems has accelerated the development of some new control approaches. For example, by a feedback domination method, [26] explicitly constructed a linear output compensator ensuring the global exponential stability of the closed-loop system. Furthermore, the hypothesis in [26] was extended to time-delay nonlinear systems and stochastic ones, based on an appropriate choice of Lyapunov function, in [27] and [28], respectively. The second is the nonlinear growth rate; that is, the nonlinearities are bounded by a positive constant multiplied by a nonlinear sum of unmeasurable state variables. In such a situation, traditional linear observers are inapplicable, so a class of so-called homogeneous observers was proposed for the first time in the celebrated paper [29] to reconstruct the unmeasurable states. As a generalized investigation, [30,31,32] addressed the global stabilization of inherently nonlinear systems via homogeneous domination in terms of a new observer/controller construction. In brief, the existing results [3,4,5, 17, 18, 20,21,22,23,24,25] required precise knowledge of the output function.

Recently, researchers began to investigate the case where an unknown output function exists in the system. By requiring the continuous differentiability of the (possibly unknown) output function, [33] solved the output feedback stabilization problem, and [34] further considered the effect of time delay for the system with an unknown growth rate. The latest references [9, 35,36,37,38,39,40] removed this restriction in light of a modified construction of a homogeneous observer. Specifically, Chen et al. [37] proposed a linear-like non-differentiable output function for the first time; that is, the output has the form \(y(t) = \theta (t)x_1(t)\) with \(\theta (t)\) unknown and not necessarily differentiable, and they obtained a global output feedback stabilizer by applying the double-domination approach, whose central strategy is that two (double) gains are used to dominate the unknown \(f_i\)'s and the unknown \(\theta (t)\), respectively. Soon afterward, this approach was extended to the stabilization of stochastic nonlinear systems in [41] and the disturbance attenuation of feedforward nonlinear systems in [9], respectively. However, to the best of the authors' knowledge, in the case where the time-varying output function is continuous but not differentiable, the output feedback regulation of system (1), whose nonlinearities are bounded by the unmeasured states multiplied by an unknown constant and a polynomial of the output, has remained unsolved until now.

To solve the aforementioned regulation problem, we have to face two types of uncertainties composed of an unknown constant and unmeasurable state variables, together with the polynomial output growth rate, and three difficulties come to mind. First, can the traditional adaptive technique compensate for the unknown constant in the nonlinearities? Second, is there an appropriate observer reconstructing the unmeasurable states in the presence of the time-varying delay? Third, how can we deal with the polynomial of the output? In this paper, we overcome the second difficulty by filtering the information on the nonlinearities in the construction of the observer, and conquer the first and third difficulties simultaneously by introducing a dynamic gain and modifying the double-domination approach in [37,38,39,40,41]. Specifically, this paper proposes a linear-like time-varying output feedback controller with a dynamic gain to guarantee that the states of the resulting closed-loop system converge to zero, and the theoretical analysis is conducted based on the construction of two integral Lyapunov functions.

Finally, we highlight the main contributions of this paper as follows: (i) This paper is the first to investigate the output feedback regulation of time-delay nonlinear systems with an unknown continuous output function and an unknown growth rate, where the double-domination approach is extended to provide a unified control design instead of traditional inductive procedures. (ii) The assumption imposed on the nonlinearities is further relaxed; that is, the growth rate can be unknown, and its influence is studied by a new scaling transformation with the upper bound of a new dynamic gain. (iii) A constant gain is provided to handle the continuous measurement error, and skillful transformations are introduced to find appropriate integral Lyapunov functions.

2 Design of output feedback controller

2.1 Problem formulation and preliminaries

We adopt the following notations throughout this paper. For a real vector \(x(t)=[x_1(t),\ldots ,x_n(t)]^{ T}\in {\mathbb {R}}^n\), the norm is defined by \(\Vert x\Vert =(\sum _{i=1}^{n}x_i^2)^{\frac{1}{2}}\). For a real matrix \(A=(a_{ij})_{m\times n}\), \(A^T\) is the transpose of A; \(\Vert A\Vert \triangleq (\lambda _{\max }(A^TA))^{\frac{1}{2}}\), where \(\lambda _{\max }(A^TA)\) denotes the largest eigenvalue of the matrix \(A^TA\). The arguments of functions are sometimes simplified; for instance, a function \(f_i(t,x(t),x(t-\tau (t)))\) is denoted by \(f_i(\cdot )\) or \(f_i\).

The aim of this article is to design an output feedback controller, based on an appropriate observer, such that the states of system (1) converge to zero. To this end, the following assumptions are needed.

Assumption 1

There is a sufficiently small parameter \({\bar{\theta }}\) satisfying \(|1-\theta (t)|\le {\bar{\theta }}<1\).

Assumption 2

For each \(i=1,\,\ldots ,\,n\), there exist an unknown constant \(c\ge 0\) and a known constant \(p>0\) such that

$$\begin{aligned} |f_i(\cdot )|\le & {} c(1+|y(t)|^p)\nonumber \\&\times \big (\sum _{j=1}^i|x_{j}(t)|+\sum _{j=1}^i|x_{j}(t-\tau (t))|\big ). \end{aligned}$$
(2)

Assumption 1 depicts the allowable range of the time-varying measurement error \(\theta (t)\); it is standard and has been used in the existing results [9, 37,38,39,40,41]. Of course, the inequality in Assumption 1 explicitly excludes the case of \(y(t)\equiv 0\), and this implies that system (1) is completely observable. More explanations on Assumption 1 can be found in [37, 41]. In what follows, we illustrate how Assumption 2 enlarges the class of systems to be investigated.

Remark 1

It is seen from (2) that the nonlinearities \(f_i(\cdot )\) satisfy a linear growth condition in the unmeasured states, multiplied by the unknown growth rate c and the output polynomial function \(1+|y(t)|^p\). The reasonability of Assumption 2 can be explained from two aspects. (i) System (1) satisfying Assumption 2 covers a wide variety of nonlinear systems in the literature. For example, in the absence of the time delay, Assumption 2 reduces to Assumption 1.1 in [5]. Neglecting the term \(1+|y(t)|^p\), Assumption 2 reduces to Assumption 2.1 in [19, 34]. (ii) We emphasize that the term \(1+|y(t)|^p\) is not stringent, since it can bound many functions encountered in practice. For instance, any polynomial or globally Lipschitz function can be bounded by \(c(1+|y(t)|^p)\). Considering \(b_my(t)^m+b_{m-1}y(t)^{m-1}+\cdots +b_1y(t)+b_0\) with \(m,b_0,\ldots ,b_m\) being known constants, one has

$$\begin{aligned}&b_my(t)^m+b_{m-1}y(t)^{m-1}+\cdots +b_1y(t)+b_0\\&\quad \le (|b_0|+\cdots +|b_m|)(1+|y(t)|^m)\\&\quad \triangleq c(1+|y(t)|^p). \end{aligned}$$

Moreover, the example of \(|y(t)|^4+|y(t)|^3+\ln (2+y(t)^2)+y(t)\cos (y(t)) \le 6(1+|y(t)|^4)\) with \(c=6\) and \(p=4\) also shows that any function satisfying a polynomial growth restriction can be bounded by \(c(1+|y(t)|^p)\). In short, Assumption 2 implies that the nonlinearities can stray far from zero for a while, but must not tend to infinity too fast as time increases.
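As a quick numerical illustration (not part of the formal development), the bound above with \(c=6\) and \(p=4\) can be spot-checked on a grid of sample points; the grid itself is an arbitrary choice:

```python
import math

# Spot-check |y|^4 + |y|^3 + ln(2 + y^2) + y*cos(y) <= c*(1 + |y|^p)
# for c = 6, p = 4, as claimed in Remark 1.
c, p = 6, 4

def lhs(y):
    return abs(y)**4 + abs(y)**3 + math.log(2 + y**2) + y*math.cos(y)

def rhs(y):
    return c*(1 + abs(y)**p)

# check over a grid of sample points; a sanity check, not a proof
ys = [i/10 for i in range(-500, 501)]
assert all(lhs(y) <= rhs(y) for y in ys)
```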

Finally, we provide two lemmas that will be used to prove the core results of this paper.

Lemma 1

[37] If \(m\ge 1\) is a constant, then for any \(x_{i}\in {\mathbb {R}},~i=1,\ldots ,n\), there is

$$\begin{aligned} (|x_1|+\cdots +|x_n|)^m\le n^{m-1}(|x_1|^m+\cdots +|x_n|^m). \end{aligned}$$
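Lemma 1 is a standard power-mean inequality; a quick numerical spot-check over random samples (a sanity check, not a proof) can be sketched as:

```python
import random

# Spot-check Lemma 1:
# (|x1| + ... + |xn|)^m <= n^(m-1) * (|x1|^m + ... + |xn|^m) for m >= 1.
random.seed(0)
for _ in range(1000):
    n = random.randint(1, 6)
    m = random.uniform(1, 5)
    x = [random.uniform(-10, 10) for _ in range(n)]
    lhs = sum(abs(v) for v in x)**m
    rhs = n**(m - 1) * sum(abs(v)**m for v in x)
    assert lhs <= rhs * (1 + 1e-12)  # small tolerance for floating point
```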

Lemma 2

[14] If \(a>0\), \(b>0\) are constants, and \(\pi (x,y)\) is a known function, then for any \(x,y\in {\mathbb {R}}\), there holds

$$\begin{aligned}&|\pi (x,y)x^a y^b| \le \gamma (x,y)|x|^{a+b}\\&\quad +\,\frac{b}{a+b}\big (\frac{a}{(a+b)\gamma (x,y)}\big )^{\frac{a}{b}} |\pi (x,y)|^\frac{a+b}{b}|y|^{a+b}, \end{aligned}$$

where \(\gamma (x, y)>0\).
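Lemma 2 is a weighted Young-type inequality. For constant \(\pi \) and \(\gamma \), it can likewise be spot-checked numerically (a sketch over random samples, not a proof):

```python
import random

# Spot-check Lemma 2:
# |pi * x^a * y^b| <= gamma*|x|^(a+b)
#   + (b/(a+b)) * (a/((a+b)*gamma))^(a/b) * |pi|^((a+b)/b) * |y|^(a+b)
random.seed(1)
for _ in range(1000):
    a, b = random.uniform(0.2, 3), random.uniform(0.2, 3)
    gamma = random.uniform(0.1, 5)   # gamma(x, y) > 0, here taken constant
    pi = random.uniform(-5, 5)       # pi(x, y), here taken constant
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    lhs = abs(pi) * abs(x)**a * abs(y)**b
    rhs = (gamma * abs(x)**(a + b)
           + (b/(a + b)) * (a/((a + b)*gamma))**(a/b)
             * abs(pi)**((a + b)/b) * abs(y)**(a + b))
    assert lhs <= rhs * (1 + 1e-9)  # small tolerance for floating point
```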

2.2 Observer design

For system (1), one constructs the observer/compensator with a time-varying gain as follows:

$$\begin{aligned} {\left\{ \begin{array}{ll} \dot{{\hat{x}}}_{i}(t) = \hat{x}_{i+1}(t)-a_{i}r^{i}(t)\hat{x}_{1}(t), \quad i=1,\ldots ,n-1,\\ \dot{{\hat{x}}}_{n}(t) = u(t)-a_{n}r^n(t)\hat{x}_{1}(t), \end{array}\right. }\nonumber \\ \end{aligned}$$
(3)

where \({\hat{x}}_1(t),\ldots ,{\hat{x}}_n(t)\) are the state variables of the observer; \(r(t)\ge 1\) is a monotonically increasing function and satisfies the equation

$$\begin{aligned}&\dot{r}(t) = r(t)\max \left\{ -\frac{L\rho }{16\sigma }r(t)+\frac{\varphi (y(t))}{\sigma },\right. \nonumber \\&\quad \left. \Big (r^{-\omega }(t)\frac{y(t)}{1-{\bar{\theta }}}\Big )^2 +\sum _{i=1}^n\Big (\frac{{{\hat{x}}}_i(t)}{r^{i-1+\omega }(t)L^{i-1}}\Big )^2 \right\} , \end{aligned}$$
(4)

with \(r(t)\equiv 1\) for \(t\in [-{\tilde{\tau }}, 0]\). It should be noticed that the positive constants \(a_1,\ldots ,a_n,\sigma ,L,\rho \) and the continuous function \(\varphi (y)\) will be determined in the analysis, and \(\omega \) is an appropriate positive constant which will be assigned in the design procedure.

Remark 2

We stress the new elements of the observer (3). (i) The terms “\(a_i(y(t)-{\hat{x}}_1(t))\)” or “\(a_ir^i(t)(y(t)-{\hat{x}}_1(t))\)” in [1, 2, 29, 42] are replaced by “\(a_{i}r^{i}(t)\hat{x}_{1}\).” One benefit is that, by removing the output from the observer, one avoids dealing directly with the unknown time-varying measurement error \(\theta (t)\). Another benefit lies in the introduction of the dynamic gain r(t). Different from the existing results [26, 36, 42], the theoretical deductions will show that the expression of \(\dot{r}(t)\) is composed of two parts: one part is used to dominate the output polynomial function \(1+|y(t)|^p\), and the other part suppresses the effect of the unknown constant in the nonlinearities. (ii) The full-order observer (3) looks like the so-called filter used in [1, 4], because it filters the unknown nonlinearities and the time-varying delay. However, the nonzero initial value and the presence of the dynamic gain r(t) render it more general than that in [1, 4].

Next, the estimation error is defined by

$$\begin{aligned} \varepsilon _{i}=\frac{x_{i}-{\hat{x}}_{i}}{r^{i-1+\omega }(t)}, \quad i=1,\ldots ,n. \end{aligned}$$
(5)

It follows from (1), (3) and (5) that

$$\begin{aligned} {\dot{\varepsilon }}_{i}&=r(t)\varepsilon _{i+1}-r(t)a_{i}\varepsilon _{1} -\frac{\dot{r}(t)}{r(t)}(\omega +i-1)\varepsilon _{i} \nonumber \\&\quad +\frac{r(t)}{r^\omega (t)}a_{i}x_{1}+\frac{f_{i}}{r^{i-1+\omega }(t)}, \quad i=1,\ldots ,n, \end{aligned}$$
(6)

where \(\varepsilon _{n+1}\triangleq 0\). Equation (6) can be expressed in the compact form:

$$\begin{aligned} {{\dot{\varepsilon }}}= & {} r(t)A\varepsilon -\frac{\dot{r}(t)}{r(t)}(\omega I+D)\varepsilon +\frac{r(t)}{r^\omega (t)}Mx_{1}(t)\nonumber \\&+f(t,r(t),x,x(t-\tau (t))), \end{aligned}$$
(7)

where

$$\begin{aligned}&A=\left[ \begin{array}{cccc} -a_1 &{} \quad 1 &{} \quad \cdots &{} \quad 0 \\ \vdots &{} \quad \vdots &{} \quad \ddots &{} \quad \vdots \\ -a_{n-1} &{} \quad 0 &{} \quad \cdots &{} \quad 1 \\ -a_n &{} \quad 0 &{} \quad \cdots &{} \quad 0 \end{array} \right] ,\\&D=\left[ \begin{array}{cccc} 0 &{} \quad &{} \quad &{} \\ &{} \quad 1 &{} \quad &{} \\ &{}\quad &{} \quad \ddots &{} \\ &{} \quad &{} \quad &{} \quad n-1\\ \end{array} \right] ,\quad \varepsilon (t)=\left[ \begin{array}{c} \varepsilon _1 \\ \varepsilon _2 \\ \vdots \\ \varepsilon _n \\ \end{array} \right] ,\\&~M=\left[ \begin{array}{c} a_1 \\ a_2 \\ \vdots \\ a_n \\ \end{array} \right] ,~f=\left[ \begin{array}{c} \frac{f_{1}}{r^\omega (t)} \\ \frac{f_2}{r^{\omega +1}(t)} \\ \vdots \\ \frac{f_{n}}{r^{n-1+\omega }(t)} \\ \end{array} \right] . \end{aligned}$$

Choose the positive constants \(a_1,\ldots ,a_n\) to guarantee that there is a symmetric and positive definite matrix P such that

$$\begin{aligned} A^TP+PA\le - I,~-\omega P\le DP+PD. \end{aligned}$$
(8)
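For a concrete choice of the gains, the first inequality in (8) can be illustrated numerically. The sketch below (for \(n=3\), with the illustrative gains \(a_1=3,a_2=3,a_3=1\), which place all eigenvalues of A at \(-1\)) solves the Lyapunov equation \(A^TP+PA=-I\) directly:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# For n = 3, choose a1, a2, a3 so that s^3 + a1 s^2 + a2 s + a3 = (s + 1)^3,
# making A Hurwitz, then solve A^T P + P A = -I for P.
a1, a2, a3 = 3.0, 3.0, 1.0
A = np.array([[-a1, 1.0, 0.0],
              [-a2, 0.0, 1.0],
              [-a3, 0.0, 0.0]])

P = solve_continuous_lyapunov(A.T, -np.eye(3))  # solves A^T P + P A = -I

# P is symmetric positive definite and satisfies the first inequality in (8)
assert np.allclose(P, P.T)
assert np.all(np.linalg.eigvalsh(P) > 0)
assert np.allclose(A.T @ P + P @ A, -np.eye(3))
```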

The proof is given as follows. Lemma 1 in [42] holds for order n; that is, for any constant \(a>0\), there exist a constant \(d_0>0\) and a positive definite and symmetric matrix \({\bar{P}}\) satisfying

$$\begin{aligned} A^T{\bar{P}}+{\bar{P}}A\le -d_0{\bar{P}},~-a{\bar{P}}\le {\bar{P}}D+D{\bar{P}}, \end{aligned}$$
(9)

where the matrices A and D are defined as in this paper. Letting \(\lambda _1>0\) denote the smallest eigenvalue of the matrix \({\bar{P}}\), it follows from (9) that

$$\begin{aligned} A^T{\bar{P}}+{\bar{P}}A\le -d_0{\bar{P}}\le -d_0\lambda _1 I, \end{aligned}$$

which can be rewritten as

$$\begin{aligned} A^T \Big (\frac{{\bar{P}}}{d_0\lambda _1}\Big )+\Big (\frac{\bar{P}}{d_0\lambda _1}\Big )A\le -I, \end{aligned}$$

so the first inequality in (8) is obtained by letting \(P=\frac{{\bar{P}}}{d_0\lambda _1}\). The second inequality in (8) is deduced directly by multiplying both sides of the second inequality in (9) by \(\frac{1}{d_0\lambda _1}\) and letting \(\omega =a,P=\frac{{\bar{P}}}{d_0\lambda _1}\). To proceed with the analysis of the estimation error, one chooses

$$\begin{aligned} V_{1} = \varepsilon ^{T}P\varepsilon +\sum ^{n}_{i=1}\frac{c }{1-{\overline{\tau }}} {\int _{t-\tau (t)}^{t}\frac{x_{i}^{2}(s)}{r^{2i-2+2\omega }(s)} \mathrm{d}s}. \end{aligned}$$
(10)

Since \(1-{{\dot{\tau }}}\ge 1-{{\overline{\tau }}}\), and r(t) is monotonically increasing, the time derivative of \(V_1\) along the trajectories of (7) is given as

$$\begin{aligned} {\dot{V}}_{1}= & {} r(t)\varepsilon ^{T}(PA+A^{T}P)\varepsilon \nonumber \\&-\frac{\dot{r}(t)}{r(t)}\varepsilon ^{T}(DP+PD+2\omega P)\varepsilon \nonumber \\&+2\varepsilon ^{T}Pf +2\varepsilon ^{T}P\frac{r(t)}{r^\omega (t)}Mx_{1}\nonumber \\&+\sum ^{n}_{i=1}\frac{c }{1-{\overline{\tau }}}\frac{x_{i}^{2}}{r^{2i-2+2\omega }(t)}\nonumber \\&-\sum ^{n}_{i=1}\frac{c }{1-{\overline{\tau }}} \frac{x_{i}^{2}(t-\tau (t))}{r^{2i-2+2\omega }(t-\tau (t))}(1-{{\dot{\tau }}}(t))\nonumber \\\le & {} -r(t)\Vert \varepsilon \Vert ^{2}-\omega \frac{\dot{r}(t)}{r(t)}\varepsilon ^TP\varepsilon +2\varepsilon ^{T}Pf\nonumber \\&+{\frac{2r(t)}{r^\omega (t)}}\varepsilon ^{T}{} \textit{PM}x_{1}+\sum ^{n}_{i=1} \frac{c x_{i}^{2}}{ (1-{\overline{\tau }})r^{2i-2+2\omega }(t)} \nonumber \\&-\sum ^{n}_{i=1}\frac{c x_{i}^{2}(t-\tau (t))}{r^{2i-2+2\omega }(t)}. \end{aligned}$$
(11)

In what follows, one needs to deal with the indefinite terms on the right-hand side of (11). To begin with, by Lemma 2 one can get

$$\begin{aligned} {\frac{2r(t)}{r^\omega (t)}}\varepsilon ^{T}{} \textit{PM}x_{1} \le \frac{r(t)}{4}\Vert \varepsilon \Vert ^2+\frac{4r(t)}{ r^{2\omega }(t)}\Vert PM\Vert ^2x_1^2. \end{aligned}$$
(12)

On the other hand, it follows from Assumption 2 and \(r(t)\ge 1\) that

$$\begin{aligned} \Vert f\Vert\le & {} \sum ^{n}_{i=1}\frac{|f_{i}|}{r^{i-1+\omega }(t)}\\\le & {} c\sum ^{n}_{i=1}\frac{1+|y|^p}{r^{i-1+\omega }(t)} \sum ^{i}_{j=1}\Big (|x_{j}|+|x_{j}(t-\tau (t))|\Big ) \\\le & {} c(1+|y|^p)\sum ^{n}_{i=1}\frac{n-i+1}{r^{i-1+\omega }(t)}\\&\times \Big (|x_{i}|+|x_{i}(t-\tau (t))|\Big ). \end{aligned}$$

Lemma 2 gives rise to

$$\begin{aligned} 2\varepsilon ^{T}Pf\le & {} 2c(1+|y|^p)\Vert \varepsilon \Vert \cdot {\Vert P\Vert }\nonumber \\&\sum ^{n}_{i=1} \frac{(n-i+1)(|x_{i}|+|x_{i}(t-\tau (t))|)}{r^{i-1+\omega }(t)}\nonumber \\\le & {} cn(n+1)\Vert P\Vert \Big (1+\frac{2n+1}{6}\Vert P\Vert \Big )\nonumber \\&\times (1+|y|^p)^2 \Vert \varepsilon \Vert ^2\nonumber \\&+c\Vert P\Vert \sum ^{n}_{i=1}\frac{(n-i+1)x^2_{i}}{2r^{2i-2+2\omega }(t)}\nonumber \\&+c\sum ^{n}_{i=1}\frac{x^2_{i}(t-\tau (t))}{r^{2i-2+2\omega }(t)}. \end{aligned}$$
(13)

Based on the inequalities (11)–(13), one can get

$$\begin{aligned} \dot{V}_{1}\le & {} {-\frac{ r(t)}{2}}\Vert \varepsilon \Vert ^{2} +\frac{r(t)}{4}\Vert \varepsilon \Vert ^{2} -\omega \frac{\dot{r}(t)}{r(t)}\varepsilon ^T P\varepsilon \nonumber \\&+{\frac{4r(t)}{r^{2\omega }(t)}}\Vert P\Vert ^2\Vert M\Vert ^2x_{1}^2\nonumber \\&+\sum ^{n}_{i=1}{\frac{c x_{i}^2}{(1-{\bar{\tau }})r^{2i-2+2\omega }(t)}}\nonumber \\&+\,cn(n+1)\Vert P\Vert \Big (1+\frac{2n+1}{6}\Vert P\Vert \Big )\nonumber \\&\times \,(1+|y|^p)^2\Vert \varepsilon \Vert ^{2}\nonumber \\&+\,c\Vert P\Vert \sum ^{n}_{i=1}{\frac{(n-i+1)x_{i}^2}{2r^{2i-2+2\omega }(t)}}\nonumber \\\le & {} {-\frac{r(t)}{4}}\Vert \varepsilon \Vert ^{2} -\omega \lambda _1\frac{\dot{r}(t)}{r(t)}\Vert \varepsilon \Vert ^2 +\sum ^{n}_{i=1}\frac{k_{i+2}(c)x_{i}^2}{r^{2i-2+2\omega }(t)}\nonumber \\&+\,\frac{k_{2}r(t)}{r^{2\omega }(t)}x_{1}^2 +k_{1}(c)(1+|y|^p)^2\Vert \varepsilon \Vert ^2, \end{aligned}$$
(14)

where the unknown constant \(k_1(c)>0\), the known constant \(k_2>0\) and the unknown constants \(k_{i+2}(c)>0\), \(i=1,\ldots ,n\), are defined by

$$\begin{aligned} {\left\{ \begin{array}{ll} k_{1}(c)=cn(n+1)\Vert P\Vert \Big (1+\dfrac{2n+1}{6}\Vert P\Vert \Big ),\\ k_{2}=4\Vert P\Vert ^2\Vert M\Vert ^2,\\ k_{i+2}(c)=\dfrac{c\Vert P\Vert (n-i+1)}{2}+\dfrac{c}{1-{{\overline{\tau }}}}. \end{array}\right. } \end{aligned}$$
(15)

2.3 Control design

We focus on the following system:

$$\begin{aligned} {\left\{ \begin{array}{ll} \dot{x}_{1} = x_{2}+f_{1}(\cdot ),\\ \dot{{\hat{x}}}_{i} = {\hat{x}}_{i+1}+r^i(t)a_i(r^\omega (t)\varepsilon _{1}-x_{1}), \quad i=2,\ldots ,n-1,\\ \dot{{\hat{x}}}_{n} = u+r^n(t)a_{n}(r^\omega (t)\varepsilon _{1}-x_{1}), \end{array}\right. }\nonumber \\ \end{aligned}$$
(16)

where \(f_1\) is defined in (1). To obtain the desired controller, the following transformations are introduced:

$$\begin{aligned}&\xi _{1}=\frac{x_{1}}{r^\omega (t)}, \quad \xi _{i}=\frac{{\hat{x}}_{i}}{r^{i-1+\omega }(t)L^{i-1}},\quad i=2,\ldots ,n,\nonumber \\&v=\frac{u}{r^{n+\omega }(t)L^n}, \end{aligned}$$
(17)

where \(L\ge 1\) is a constant gain to be determined later. It can be deduced from (16) and (17) that

$$\begin{aligned} {\left\{ \begin{array}{ll} {{\dot{\xi }}}_{1} = Lr(t)\xi _{2}-\omega \dfrac{\dot{r}(t)}{r(t)}\xi _{1}+r(t)\varepsilon _{2} +\frac{f_{1}}{r^\omega (t)},\\ {{\dot{\xi }}}_{i} = Lr(t)\xi _{i+1}-(\omega +i-1)\dfrac{\dot{r}(t)}{r(t)}\xi _{i} +\dfrac{r(t)a_{i}}{L^{i-1}}\varepsilon _{1} -\dfrac{r(t)a_{i}}{L^{i-1}}\xi _{1},\quad \\ i=2,\ldots ,n-1,\\ {{\dot{\xi }}}_{n} = Lr(t)v-(\omega +n-1)\dfrac{\dot{r}(t)}{r(t)}\xi _{n} +\dfrac{r(t)a_{n}}{L^{n-1}}\varepsilon _{1} -\dfrac{r(t)a_{n}}{L^{n-1}}\xi _{1}. \end{array}\right. }\nonumber \\ \end{aligned}$$
(18)
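The first equation of (18) can be verified symbolically from the transformation (17) and the recovery of \(x_2\) in (29); a sketch (the symbol names are illustrative):

```python
import sympy as sp

t = sp.symbols('t')
omega, L = sp.symbols('omega L', positive=True)
r = sp.Function('r', positive=True)(t)
x1, eps2, xi2, f1 = [sp.Function(s)(t) for s in ('x1', 'eps2', 'xi2', 'f1')]

# transformation (17): xi1 = x1 / r^omega
xi1 = x1 / r**omega

# plant equation x1' = x2 + f1, with x2 recovered from (29):
# x2 = r^(1+omega) * (eps2 + L*xi2)
x1dot = r**(1 + omega)*(eps2 + L*xi2) + f1

# differentiate xi1 and substitute the plant dynamics
lhs = sp.diff(xi1, t).subs(sp.Derivative(x1, t), x1dot)

# claimed first equation of (18)
rhs = L*r*xi2 - omega*sp.Derivative(r, t)/r*xi1 + r*eps2 + f1/r**omega

assert sp.simplify(lhs - rhs) == 0
```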

Now, one can design the control as follows:

$$\begin{aligned} v=-\frac{b_{1}}{r^\omega (t)}y-b_{2}\xi _{2}- \cdots -b_{n-1}\xi _{n-1}-b_{n}\xi _{n}, \end{aligned}$$
(19)

where the positive constants \(b_1,\ldots ,b_n\) will be determined later. Using (19) in (18), we arrive at the compact form of (18):

$$\begin{aligned} {{\dot{\xi }}}= & {} r(t)LB\xi -(\omega I+D) \frac{\dot{r}(t)}{r(t)}\xi \nonumber \\&+\frac{r(t)}{L}H_{3}(\varepsilon _{1}-\xi _{1}) +r(t) H_{2}\varepsilon _{2}\nonumber \\&+r(t)LH_{1}b_{1}(1-\theta (t))\xi _{1}\nonumber \\&+H_{4}(t,r(t),x,x(t-\tau )), \end{aligned}$$
(20)

where

$$\begin{aligned}&B=\left[ \begin{array}{cccc} 0 &{} \quad 1 &{} \quad \cdots &{} \quad 0 \\ \vdots &{} \quad \vdots &{} \quad \ddots &{} \quad \vdots \\ 0 &{} \quad 0 &{}\quad \cdots &{} \quad 1 \\ -b_{1} &{} \quad -b_2 &{}\quad \cdots &{} \quad -b_n \end{array} \right] ,~\xi =\left[ \begin{array}{c} \xi _1 \\ \xi _2 \\ \vdots \\ \xi _n \\ \end{array} \right] ,\\&H_1=\left[ \begin{array}{c} 0 \\ 0 \\ \vdots \\ 1 \\ \end{array} \right] ,~ H_2=\left[ \begin{array}{c} 1\\ 0 \\ \vdots \\ 0 \\ \end{array} \right] ,~H_3=\left[ \begin{array}{c} 0 \\ a_2 \\ \frac{a_3}{L}\\ \vdots \\ \frac{a_{n}}{L^{n-2}} \\ \end{array} \right] ,\\&~H_4(\cdot )=\left[ \begin{array}{c} \frac{f_{1}}{r^\omega (t)}\\ 0 \\ \vdots \\ 0 \\ \end{array} \right] , \end{aligned}$$

where the initial condition is \(\xi (t)=\xi (0)\) for any \(t\in [-{\widetilde{\tau }},0]\). One can select the positive constants \(b_1,\ldots ,b_n\) to guarantee that there is a symmetric and positive definite matrix Q such that

$$\begin{aligned} B^TQ+QB\le -I,~-\omega Q\le DQ+QD. \end{aligned}$$
(21)

In fact, Lemma 1 in [42] shows again that, for any constant \(a>0\) and the same B, D as in this paper, there exist a constant \(d_0>0\) and a positive definite and symmetric matrix \(\bar{Q}\) satisfying

$$\begin{aligned} B^T{\bar{Q}}+{\bar{Q}}B\le -d_0{\bar{Q}},~-a{\bar{Q}}\le {\bar{Q}}D+D{\bar{Q}}. \end{aligned}$$
(22)

If we denote the smallest eigenvalue of the matrix \({\bar{Q}}\) by \(\lambda _2>0\), then (22) gives rise to

$$\begin{aligned} B^T{\bar{Q}}+{\bar{Q}}B\le -d_0{\bar{Q}}\le -d_0\lambda _2 I, \end{aligned}$$

and a simple calculation shows

$$\begin{aligned} B^T \Big (\frac{{\bar{Q}}}{d_0\lambda _2}\Big )+\Big (\frac{\bar{Q}}{d_0\lambda _2}\Big )B\le -I, \end{aligned}$$

so the first inequality in (21) is obtained by letting \(Q=\frac{{\bar{Q}}}{d_0\lambda _2}\). The second inequality in (21) is deduced directly by multiplying both sides of the second inequality in (22) by \(\frac{1}{d_0\lambda _2}\) and letting \(\omega =a,Q=\frac{{\bar{Q}}}{d_0\lambda _2}\).

We construct

$$\begin{aligned} V_{2} =\xi ^{T}Q\xi +\frac{c }{1-{\overline{\tau }}} {\int _{t-\tau (t)}^{t}\frac{x_{1}^{2}(s)}{r^{2\omega }(s)} \mathrm{d}s}, \end{aligned}$$
(23)

whose time derivative along the trajectories of (20) is given as follows:

$$\begin{aligned} {\dot{V}}_{2}\le & {} 2\xi ^{T}Q \Big (r(t)LB\xi -(\omega I+D)\frac{\dot{r}(t)}{r(t)}\xi \nonumber \\&+\,r(t) H_{2}\varepsilon _{2}+H_{4}\nonumber \\&+\,r(t)LH_{1}b_{1}(1-\theta (t))\xi _{1} +\frac{r(t)}{L}H_{3}(\varepsilon _{1}-\xi _{1})\Big )\nonumber \\&\,+\frac{cx_{1}^{2}}{(1-{\overline{\tau }})r^{2\omega }(t)} -\frac{cx_{1}^{2}(t-\tau (t))}{r^{2\omega }(t)}, \end{aligned}$$
(24)

where the monotonicity of r(t) is used. Similarly, we need to cope with the indefinite terms on the right-hand side of (24). To begin with, one can deduce from (21), \(\dot{r}(t)\ge 0\) and \(|\xi _1|\le \Vert \xi \Vert \) that

$$\begin{aligned}&2\xi ^{T}Q \big (r(t)LB\xi -(\omega I+D) \frac{\dot{r}(t)}{r(t)}\xi \nonumber \\&\qquad +\,r(t)LH_{1}b_{1}(1-\theta (t))\xi _{1}\big )\nonumber \\&\quad \le r(t)L\xi ^T(QB+B^TQ)\xi \nonumber \\&\quad -\frac{\dot{r}(t)}{r(t)}\xi ^T(2\omega Q+DQ+QD)\xi \nonumber \\&\qquad +2r(t)Lb_1\Vert \xi \Vert \cdot \Vert Q\Vert \cdot \Vert H_1\Vert \cdot |1-\theta (t)|\cdot |\xi _1|\nonumber \\&\quad \le -r(t)L\Big (1-2b_{1}|1-\theta (t)| \cdot \Vert Q\Vert \Big )\Vert \xi \Vert ^2\nonumber \\&\qquad -\,\omega \lambda _2\frac{\dot{r}(t)}{r(t)}\Vert \xi \Vert ^2. \end{aligned}$$
(25)

Secondly, applying Assumption 2 and Lemma 2, one has

$$\begin{aligned}&2\xi ^{T}QH_{4} \le 2\Vert \xi \Vert \cdot \Vert Q\Vert \cdot \Vert H_4\Vert \nonumber \\&\quad \le 2\Vert \xi \Vert \cdot \Vert Q\Vert \cdot \frac{1}{r^{\omega }(t)}\nonumber \\&\qquad \times \Big (c(1+|y|^p) \big (|x_{1}|+|x_{1}(t-\tau (t))|\big )\Big )\nonumber \\&\quad \le 2c\Vert Q\Vert (1+|y|^p)^2\Vert \xi \Vert ^2\nonumber \\&\qquad +\,c\Vert Q\Vert ^2(1+|y|^p)^2\Vert \xi \Vert ^2\nonumber \\&\qquad +\frac{cx_{1}^{2}(t-\tau (t))}{r^{2\omega }(t)}. \end{aligned}$$
(26)

Thirdly, with \(\Vert H_2\Vert =1\) and \(\Vert H_3\Vert \le (\sum \nolimits ^{n}_{i=2}a_{i}^2)^\frac{1}{2}\triangleq \delta \) in mind, it is not hard to obtain

$$\begin{aligned}&2 \xi ^{T}Q r(t) H_{2}\varepsilon _{2} +2\xi ^{T}Q\frac{r(t)}{L}H_{3}(\varepsilon _{1}-\xi _{1})\nonumber \\&\quad \le \frac{r(t)}{8}\Vert \varepsilon \Vert ^2+16r(t)\Vert Q\Vert ^2\Vert \xi \Vert ^2\nonumber \\&\qquad +\,\frac{16r(t)\delta ^2}{L^2}\Vert Q\Vert ^2\Vert \xi \Vert ^2\nonumber \\&\qquad +\,2\delta \Vert Q\Vert \frac{r(t)}{L}\Vert \xi \Vert ^2. \end{aligned}$$
(27)

Finally, substituting (25)–(27) in (24), one has

$$\begin{aligned} \dot{V}_{2}\le & {} -r(t) L\Big (1-2b_{1}|1-\theta (t)|\cdot \Vert Q\Vert \Big )\Vert \xi \Vert ^2\nonumber \\&-\,\omega \lambda _2\frac{\dot{r}(t)}{r(t)}\Vert \xi \Vert ^2\nonumber \\&+\,\frac{r(t)}{8}\Vert \varepsilon \Vert ^2 +16r(t)\Vert Q\Vert ^2\Vert \xi \Vert ^2\nonumber \\&+\,\frac{16r(t)\delta ^2}{L^2}\Vert Q\Vert ^2\Vert \xi \Vert ^2\nonumber \\&+\,2\delta \Vert Q\Vert \frac{r(t)}{L}\Vert \xi \Vert ^2 +2c\Vert Q\Vert (1+|y|^p)^2\Vert \xi \Vert ^2\nonumber \\&+\,c\Vert Q\Vert ^2(1+|y|^p)^2\Vert \xi \Vert ^2 +\frac{c}{1-{{\overline{\tau }}}}\Vert \xi \Vert ^2\nonumber \\\le & {} -r(t) L\Big (1-2b_{1}|1-\theta (t)|\cdot \Vert Q\Vert \Big )\Vert \xi \Vert ^2\nonumber \\&-\,\omega \lambda _2\frac{\dot{r}(t)}{r(t)}\Vert \xi \Vert ^2\nonumber \\&+\,\frac{r(t)}{8}\Vert \varepsilon \Vert ^2 +r(t)L\Big (\frac{2\delta (1+8\delta \Vert Q\Vert )\Vert Q\Vert +16\Vert Q\Vert ^2}{L}\nonumber \\&+\,\frac{c(1+|y|^p)^2\big (2+\Vert Q\Vert \big )\Vert Q\Vert }{r(t)} +\frac{c}{(1-{{\overline{\tau }}})r(t)}\Big )\Vert \xi \Vert ^2\nonumber \\= & {} -r(t) L\Big (1-2b_{1}|1-\theta (t)|\cdot \Vert Q\Vert \Big )\Vert \xi \Vert ^2\nonumber \\&-\,\omega \lambda _2\frac{\dot{r}(t)}{r(t)}\Vert \xi \Vert ^2\nonumber \\&+\,\frac{r(t)}{8}\Vert \varepsilon \Vert ^2 +r(t)L\Big (\frac{{\bar{k}}_{1}}{L}+\frac{{{\bar{k}}}_{2}(c,y)}{r(t)}\Big )\Vert \xi \Vert ^2, \end{aligned}$$
(28)

where the positive constant \({{\bar{k}}}_{1}\) and the positive function \({{\bar{k}}}_{2}(c,y)\) are defined as

$$\begin{aligned} {\left\{ \begin{array}{ll} {{\bar{k}}}_{1} = 2\delta (1+8\delta \Vert Q\Vert )\Vert Q\Vert +16\Vert Q\Vert ^2,\\ {{\bar{k}}}_{2}(c,y) =c(1+|y|^p)^2\big (2+\Vert Q\Vert \big )\Vert Q\Vert +\dfrac{c}{1-{{\overline{\tau }}}}. \end{array}\right. } \end{aligned}$$

In what follows, we further refine the estimate of the time derivative of \(V_1\). Actually, (5) and (17) imply

$$\begin{aligned}&x_{1}=r^\omega (t)\xi _{1},~ x_{i}=r^{i-1+\omega }(t)\varepsilon _{i}+r^{i-1+\omega }(t)L^{i-1}\xi _{i},\nonumber \\&\quad i=2,\ldots ,n. \end{aligned}$$
(29)

By Lemma 1 and \(|\varepsilon _i|\le \Vert \varepsilon \Vert ,|\xi _i|\le \Vert \xi \Vert \) for \(i=2,\ldots ,n\), one gets

$$\begin{aligned}&\frac{1}{r^{2i-2+2\omega }(t)}x_{i}^2 \le 2\Vert \varepsilon \Vert ^2+2L^{2(i-1)}\Vert \xi \Vert ^2,\nonumber \\&\quad i=2,3,\ldots ,n. \end{aligned}$$
(30)

Using (29) and (30) in (14), we can deduce

$$\begin{aligned} \dot{V}_1\le & {} {-\frac{r(t)}{4}}\Vert \varepsilon \Vert ^{2} -\omega \lambda _1\frac{\dot{r}(t)}{r(t)}\Vert \varepsilon \Vert ^2\nonumber \\&+\,k_{1}(c)(1+|y|^p)^2\Vert \varepsilon \Vert ^2\nonumber \\&+\,r(t)k_2\xi _1^2 +k_3(c)\xi _1^2\nonumber \\&+\,\sum ^{n}_{i=2}k_{i+2}(c)(2\Vert \varepsilon \Vert ^2+2L^{2(i-1)}\Vert \xi \Vert ^2)\nonumber \\\le & {} {-\frac{r(t)}{4}}\Vert \varepsilon \Vert ^{2} -\omega \lambda _1\frac{\dot{r}(t)}{r(t)}\Vert \varepsilon \Vert ^2\nonumber \\&+\,\Big (k_{1}(c)(1+|y|^p)^2 +2\sum ^{n}_{i=2}k_{i+2}(c)\Big )\Vert \varepsilon \Vert ^2\nonumber \\&+\,\Big (r(t)k_{2}+L\Big (k_3(c) + 2\sum ^{n}_{i=2}k_{i+2}(c)L^{2i-3}\Big )\Big )\nonumber \\&\quad \Vert \xi \Vert ^2. \end{aligned}$$
(31)

Defining the positive functions \({\tilde{k}}_1\) and \({\tilde{k}}_2\) as

$$\begin{aligned} {\left\{ \begin{array}{ll} {\tilde{k}}_1(c,y)=k_{1}(c)(1+|y|^p)^2 +2\sum \limits ^{n}_{i=2}k_{i+2}(c),\\ {\tilde{k}}_2(c)=k_3(c) +2\sum \limits ^{n}_{i=2}L^{2i-3}k_{i+2}(c), \end{array}\right. } \end{aligned}$$
(32)

one can express (31) as

$$\begin{aligned} \dot{V}_{1}\le & {} {-\frac{r(t)}{4}}\Vert \varepsilon \Vert ^{2} -\omega \lambda _1\frac{\dot{r}(t)}{r(t)}\Vert \varepsilon \Vert ^2+{\tilde{k}}_{1}(c,y)\Vert \varepsilon \Vert ^2\nonumber \\&+\Big (r(t)k_{2}+L{{\tilde{k}}}_{2}(c)\Big )\Vert \xi \Vert ^2. \end{aligned}$$
(33)

3 Main results

Now, we state the main results of this paper.

Theorem 1

Under Assumptions 12, there exists a continuous output feedback controller

$$\begin{aligned} u(t)= & {} -r^n(t)L^nb_1y(t)-r^{n-1}(t)L^{n-1}b_2{\hat{x}}_2(t) -\cdots \nonumber \\&-\,r^2(t)L^2b_{n-1}{\hat{x}}_{n-1}(t)-r(t)Lb_n{\hat{x}}_n(t), \end{aligned}$$
(34)
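Controller (34) is exactly the unscaled form of (19) under the transformations (17); this can be checked symbolically for, say, \(n=3\) (a sketch with illustrative symbol names):

```python
import sympy as sp

n = 3  # illustrative system order
r, L, omega, y = sp.symbols('r L omega y', positive=True)
b1, b2, b3 = sp.symbols('b1 b2 b3')
xhat2, xhat3 = sp.symbols('xhat2 xhat3')

# transformations (17): xi_i = xhat_i / (r^(i-1+omega) * L^(i-1)), i = 2, 3
xi2 = xhat2 / (r**(1 + omega)*L)
xi3 = xhat3 / (r**(2 + omega)*L**2)

# scaled control (19)
v = -b1*y/r**omega - b2*xi2 - b3*xi3

# unscaling from (17): u = r^(n+omega) * L^n * v
u = sp.expand(r**(n + omega) * L**n * v)

# claimed closed form (34) for n = 3
u34 = -r**3*L**3*b1*y - r**2*L**2*b2*xhat2 - r*L*b3*xhat3

assert sp.simplify(u - u34) == 0
```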

such that the state \([x(t),{\hat{x}}(t),r(t)]^T\) of the closed-loop system composed of (1), (3), (4) and (34) is globally uniformly bounded and \([x(t),{\hat{x}}(t)]^T\) converges to the origin for any initial conditions, where \({\hat{x}}(t)=[{\hat{x}}_1(t),\ldots ,{\hat{x}}_n(t)]^T\), and \(b_1,\ldots , b_n\) and L are specified constants.

Proof

The proof consists of two parts.

Part I: Determination of design parameters. It must be pointed out that the sensitivity error \({\bar{\theta }}\) is specified through the inequality \({\bar{\theta }}<\min \{1,\frac{1}{2b_{1}\Vert Q\Vert }\}\); that is,

$$\begin{aligned} 1-2b_{1}|1-\theta (t)|\cdot \Vert Q\Vert \ge 1 -2b_{1}{\bar{\theta }}\Vert Q\Vert \triangleq \rho , \end{aligned}$$
(35)

where \(0<\rho <1\) is a known positive constant. Choose the function \( V=V_1+V_2 \), whose time derivative can be obtained by means of (28), (33) and (35):

$$\begin{aligned} \dot{V}\le & {} - \Big (\rho +\omega \lambda _2\dfrac{\dot{r}(t)}{r^2(t)L} -\frac{ k_{2}+{\bar{k}}_{1}}{L} -\frac{{\tilde{k}}_{2}(c) +{{\bar{k}}}_{2}(c,y)}{r(t)}\Big )\nonumber \\&\quad \cdot \, r(t)L\Vert \xi \Vert ^2 -r(t)\nonumber \\&\quad \times \Big (\frac{1}{8} +\omega \lambda _1\frac{\dot{r}(t)}{r^2(t)} -\frac{\tilde{k}_{1}(c,y)}{r(t)}\Big )\Vert \varepsilon \Vert ^2 . \end{aligned}$$
(36)

To determine the design parameter L, we let \(\rho -\frac{ k_{2}+{\bar{k}}_{1}}{L}\ge \frac{1}{8}\rho \), and obtain \(L\ge \max \{1,\frac{8(k_{2}+{\bar{k}}_1)}{7\rho }\}\). Then, it is straightforward to rewrite (36) as

$$\begin{aligned}&\dot{V} \le -r(t)\Big (\frac{\rho }{8} +\omega \lambda _1\frac{\dot{r}(t)}{r^2(t)L} -\frac{{\tilde{k}}_{1}(c,y)}{r(t)}\Big )\Vert \varepsilon \Vert ^2\nonumber \\&\quad -\,r(t)L\Big (\frac{\rho }{8} +\omega \lambda _2\frac{\dot{r}(t)}{r^2(t)L} -\frac{{\tilde{k}}_{2}(c) +{\bar{k}}_{2}(c,y)}{r(t)}\Big )\nonumber \\&\qquad \Vert \xi \Vert ^2. \end{aligned}$$
(37)

With Lemma 2 and the specified L and \(\omega \) in mind, there hold

$$\begin{aligned} {\tilde{k}}_1= & {} cn(n+1)\Vert P\Vert (1+\frac{2n+1}{6}\Vert P\Vert )(1+|y|^p)^2\nonumber \\&+2\sum _{i=2}^n\Big (\frac{1}{2}c(n-i+1)\Vert P\Vert +\frac{c}{1-{\bar{\tau }}}\Big )\nonumber \\\le & {} \frac{1}{2}c^2+\frac{1}{2}g_1(y), \end{aligned}$$
(38)

and

$$\begin{aligned}&{\tilde{k}}_2(c)+{\bar{k}}_2(c,y)\nonumber \\&\quad = c(1+|y|^p)^2 (2+\Vert Q\Vert )\Vert Q\Vert +\frac{2c}{1-{\bar{\tau }}}+\frac{1}{2}cn\Vert P\Vert \nonumber \\&\qquad +\,\frac{2c}{1-{\bar{\tau }}} \sum _{i=2}^nL^{2i-3}+\sum _{i=2}^nL^{2i-3}c(n-i+1)\Vert P\Vert \nonumber \\&\quad \le \frac{1}{2}c^2+\frac{1}{2}g_2(y), \end{aligned}$$
(39)

where the continuous polynomial functions are \(g_1(y)=\Big (n(n+1)\Vert P\Vert (1+\frac{2n+1}{6}\Vert P\Vert )(1+|y|^p)^2+\sum _{i=2}^n ((n-i+1)\Vert P\Vert +\frac{2}{1-{\bar{\tau }}})\Big )^2\) and \(g_2(y)=\Big ((1+|y|^p)^2(2+\Vert Q\Vert )\Vert Q\Vert +\frac{2}{1-{\bar{\tau }}} +2\sum _{i=2}^nL^{2i-3}(\frac{1}{2}(n-i+1)\Vert P\Vert +\frac{1}{1-{\bar{\tau }}})+\frac{1}{2}n\Vert P\Vert \Big )^2\); both are of order 4p in y. Therefore, (37) can be rewritten as

$$\begin{aligned} \dot{V}\le & {} -r(t)\Big (\frac{\rho }{8} +\omega \lambda _1\frac{\dot{r}(t)}{r^2(t)L} -\frac{c^2+g_1(y)}{2r(t)}\Big )\Vert \varepsilon \Vert ^2\nonumber \\&-\,r(t)L\Big (\frac{\rho }{8} +\omega \lambda _2\frac{\dot{r}(t)}{r^2(t)L} -\frac{c^2+g_2(y)}{2r(t)}\Big )\nonumber \\&\quad \Vert \xi \Vert ^2. \end{aligned}$$
(40)

Let \(\sigma =\min \{\omega \lambda _1, \omega \lambda _2\}\) and \(\varphi (y)=\frac{L}{2}g_1(y)+\frac{L}{2} g_2(y)\), and specify \(\omega \) through the inequality \(4p\omega <1\). With \(\varphi (y),\sigma ,L,\rho \) assigned as above, substituting (4) into (40) yields

$$\begin{aligned} \dot{V}\le & {} -r(t)\Big (\frac{\rho }{16}-\frac{c^2}{2r(t)}\Big )\Vert \varepsilon (t)\Vert ^2\nonumber \\&-\,r(t)\Big (\frac{\rho }{16}-\frac{c^2}{2r(t)}\Big )\Vert \xi (t)\Vert ^2. \end{aligned}$$
(41)

At this point, the actual control u(t) can be explicitly constructed by (34).

Part II: Stability analysis. Now, it is time to consider the closed-loop system composed of (1), (3) and (34). By the existence and continuation of solutions, the closed-loop system state \(X(t)\triangleq [\varepsilon (t), \xi (t), r(t)]^{\mathrm T}\) can be defined on \([-{\tilde{\tau }},T_{f})\), where \(T_{f}>0\) may be finite or \(+\infty \). The remainder of the proof is divided into five steps, with the conclusion of each step stated at its beginning.

Step 1: r(t) is bounded on \([-{\tilde{\tau }},T_f)\). This is proved by contradiction. Suppose that there is \(T_1\in (0, T_f)\) such that \(\lim _{t\rightarrow T_1}r(t)=+\infty \). By this and \(\dot{r}(t)\ge 0\), there must be a finite time \(T_0\in (0,T_1)\) such that

$$\begin{aligned} r(t) \ge \frac{16c^2}{\rho }, \forall t\in [T_0, T_1). \end{aligned}$$
(42)

Then, it follows from (41) that

$$\begin{aligned} \dot{V}(t)\le & {} -\frac{\rho r(t)}{32}\Vert \varepsilon (t)\Vert ^2\nonumber \\&-\frac{\rho r(t)}{32}\Vert \xi (t)\Vert ^2 \le 0, \quad \forall t\in [T_0, T_1). \end{aligned}$$
(43)

Therefore, V(t) is nonincreasing and bounded on \([T_0,T_1)\), and hence \(\Vert \xi (t)\Vert \) is bounded as well. By (42) and (43), one can get

$$\begin{aligned}&\frac{c^2}{2}\int _{T_0}^{T_1}\Big (\Vert \xi (t)\Vert ^2+\Vert \varepsilon (t)\Vert ^2\Big )\mathrm{d}t\nonumber \\&\quad \le \int _{T_0}^{T_1}\Big (\frac{\rho r(t)}{32}\Vert \varepsilon (t)\Vert ^2 +\frac{\rho r(t)}{32}\Vert \xi (t)\Vert ^2\Big )\mathrm{d}t\nonumber \\&\quad \le V(T_0)<+\infty . \end{aligned}$$
(44)

Thus, \(\int _{T_0}^{T_1}\Vert \xi (t)\Vert ^2\mathrm{d}t\) and \(\int _{T_0}^{T_1}\Vert \varepsilon (t)\Vert ^2\mathrm{d}t\) are bounded. With this in hand, we derive from (1) and the expression of \(\varphi (y)\) that there are positive numbers \(M_1\) and \(M_2\) such that

$$\begin{aligned}&|y(t)|\le |\theta (t)|\cdot |x_1(t)|\le (1+{\bar{\theta }})|\xi _1(t)|r^\omega (t)\nonumber \\&\quad \le M_1r^\omega (t), \end{aligned}$$
(45)
$$\begin{aligned}&|\varphi (y)| \le M_2(1+|y(t)|^p)^4, ~\forall t\in [T_0, T_1). \end{aligned}$$
(46)

In view of the known number p provided in Assumption 2 and \(4p\omega <1\), it is easy to deduce from Lemma 2 that

$$\begin{aligned} \varphi (y(t))\le & {} 8M_2(1+y^{4p}(t))\nonumber \\\le & {} 8M_2+8M_2M_1^{4p}r^{4p\omega }(t)\nonumber \\\le & {} \frac{\rho L}{32}r(t)+M_3, \end{aligned}$$
(47)

where

$$\begin{aligned} M_3= & {} 8M_2+(1-4p\omega ) \\&\times \Big (8M_2M_1^{4p}\Big (\frac{128p\omega }{\rho L}\Big )^{4p\omega }\Big ) ^{\frac{1}{1-4p\omega }}>0. \end{aligned}$$
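The constant \(M_3\) above is exactly the constant produced by Young's inequality when the term \(8M_2M_1^{4p}r^{4p\omega }(t)\) is bounded by \(\frac{\rho L}{32}r(t)\) plus a constant. A quick numerical sanity check of (47) is possible; the parameter values below are purely illustrative, not from the paper:

```python
import numpy as np

# Illustrative parameters (not from the paper); 4*p*omega must be < 1.
p, omega = 1.0, 0.2          # q = 4*p*omega = 0.8 < 1
rho, L = 0.5, 2.0
M1, M2 = 1.0, 1.0

q = 4 * p * omega
a = 8 * M2 * M1 ** (4 * p)   # coefficient of r**q in (47)
eps = rho * L / 32           # coefficient of r in the bound

# M3 as defined after (47): 8*M2 plus the Young's-inequality remainder.
M3 = 8 * M2 + (1 - q) * (a * (128 * p * omega / (rho * L)) ** q) ** (1 / (1 - q))

# Check 8*M2 + a*r**q <= eps*r + M3 on a wide grid of r > 0.
r = np.logspace(-6, 13, 2000)
lhs = 8 * M2 + a * r ** q
rhs = eps * r + M3
assert np.all(lhs <= rhs * (1 + 1e-9) + 1e-9)
```

Equality holds at \(r=(aq/\varepsilon )^{1/(1-q)}\); everywhere else the bound is strict, which the grid check reflects.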

In terms of (4), one needs to consider two cases.

If \(\dot{r}(t)=-\frac{L\rho }{16\sigma }r^2(t)+\frac{\varphi (y(t))}{\sigma }r(t)\), then (47) yields

$$\begin{aligned} \dot{r}(t) \le -\frac{\rho L}{32\sigma }r^2(t)+\frac{M_3}{\sigma }r(t) \triangleq -l_1r^2(t)+l_2r(t).\nonumber \\ \end{aligned}$$
(48)

A direct calculation shows that

$$\begin{aligned} r(t) \le \frac{l_2e^{l_2t}}{l_2-l_1+l_1e^{l_2t}} \le \lim _{t\rightarrow +\infty }\frac{l_2e^{l_2t}}{l_2-l_1+l_1e^{l_2t}} = \frac{l_2}{l_1}; \end{aligned}$$

that is, r(t) is bounded on \([T_0, T_1)\), which contradicts \(\lim _{t\rightarrow T_1}r(t)=+\infty \).

If \(\dot{r}(t)= r(t)\big (r^{-\omega }(t)\frac{y(t)}{1-{\tiny {\bar{\theta }}}}\big )^2 +r(t)\sum _{i=1}^n\Big (\frac{{{\hat{x}}}_i(t)}{r^{i-1+\omega }(t)L^{i-1}}\Big )^2\), then, since (5) and (17) show \(\frac{{\hat{x}}_1(t)}{r^{\omega }(t)}=\xi _1(t)-\varepsilon _1(t)\), it is not hard to deduce from (44) and \(1-{\bar{\theta }}\le \theta (t)\le 1+{\bar{\theta }}\) that

$$\begin{aligned} +\infty= & {} r(T_1)-r(T_0)=\int _{T_0}^{T_1}\dot{r}(t) \mathrm{d}t\\= & {} \int _{T_0}^{T_1}\Big (r(t)\Big (\frac{\theta (t)}{1-{\bar{\theta }}}\Big )^2\xi _{1}^2(t) +r(t)(\xi _1(t)-\varepsilon _1(t))^2\\&+\,r(t)\sum _{i=2}^n{\xi _i}^2(t) \Big )\mathrm{d}t\\\le & {} \Big (2+\big (\frac{1+{\bar{\theta }}}{1-{\bar{\theta }}}\big )^2\Big )\\&\times \int _{T_0}^{T_1}\Big (r(t)\Vert \xi (t)\Vert ^2 +r(t)\Vert \varepsilon (t)\Vert ^2\Big )\mathrm{d}t\\= & {} \frac{32}{\rho }\Big (2+\big (\frac{1+{\bar{\theta }}}{1-{\bar{\theta }}}\big )^2\Big ) \int _{T_0}^{T_1}\\&\times \Big (\frac{\rho r(t)}{32}\Vert \varepsilon (t)\Vert ^2 +\frac{\rho r(t)}{32}\Vert \xi (t)\Vert ^2\Big )\mathrm{d}t\\\le & {} \frac{32}{\rho }\Big (2+\big (\frac{1+{\bar{\theta }}}{1-{\bar{\theta }}}\big )^2\Big )V(T_0)<+\infty . \end{aligned}$$

This leads to a contradiction.

Combining the above two cases, one concludes that r(t) is bounded on \([-{\tilde{\tau }},T_f)\).
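The first case rests on the logistic-type comparison: a solution of \(\dot{r}=-l_1r^2+l_2r\) with \(r(0)=1\) equals \(l_2e^{l_2t}/(l_2-l_1+l_1e^{l_2t})\) and never exceeds \(l_2/l_1\). This can be confirmed numerically; the constants \(l_1,l_2\) below are illustrative stand-ins for \(\frac{\rho L}{32\sigma }\) and \(\frac{M_3}{\sigma }\):

```python
import math

# Illustrative constants; in the proof l1 = rho*L/(32*sigma), l2 = M3/sigma.
l1, l2 = 0.5, 2.0
r, dt, T = 1.0, 1e-3, 10.0           # r(0) = 1, as in the closed-form bound

t = 0.0
max_err = 0.0
while t < T:
    r += dt * (-l1 * r**2 + l2 * r)  # forward-Euler step of the comparison ODE
    t += dt
    closed = l2 * math.exp(l2 * t) / (l2 - l1 + l1 * math.exp(l2 * t))
    max_err = max(max_err, abs(r - closed))
    assert r <= l2 / l1 + 1e-6       # r(t) never exceeds the equilibrium l2/l1

assert max_err < 0.05                # Euler solution tracks the closed form
```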

Step 2: \(\int _{0}^{T_f}\Vert \xi (t)\Vert ^2\mathrm{d}t\) is bounded. Since r(t) is bounded, nondecreasing and continuous on \([-{\tilde{\tau }}, T_f)\), \(r(T_f)\triangleq \lim _{t\rightarrow T_f}r(t)\) is finite. By \(1-{\bar{\theta }}\le \theta \le 1+{\bar{\theta }}\), (17) and (4), one can get

$$\begin{aligned}&\int _{0}^{T_f}\Vert \xi (t)\Vert ^2\mathrm{d}t \le \int _{0}^{T_f}\Big (r(t)\Big (\frac{\theta (t)}{1-{\bar{\theta }}}\Big )^2\xi _{1}^2(t)\nonumber \\&\qquad +\,r(t)\sum _{i=2}^n{\xi _i}^2(t)\Big )\mathrm{d}t\nonumber \\&\quad \le \int _{0}^{T_f}\Big (r(t)\Big (\frac{\theta (t)}{1-{\bar{\theta }}}\Big )^2\xi _{1}^2(t) +r(t)\sum _{i=1}^n{\xi _i}^2(t)\Big )\mathrm{d}t\nonumber \\&\quad \le \int _{0}^{T_f}\dot{r}(t)\mathrm{d}t = r(T_f)-r(0) <+\infty ; \end{aligned}$$
(49)

that is, \(\int _{0}^{T_f}\Vert \xi (t)\Vert ^2\mathrm{d}t\) is bounded.

Step 3: \(\Vert \varepsilon (t)\Vert \) is bounded on \([-{\tilde{\tau }},T_f)\). The boundedness of r(t) implies that there exists a large enough constant \({\bar{r}}\) such that \(r(t)\le {\bar{r}}\) for all \(t\in [-{\tilde{\tau }},T_f)\). Define

$$\begin{aligned} \eta _i(t)=\frac{x_i(t)-{\hat{x}}_i(t)}{ \bar{r}^{i-1+\omega }r^{i-1+\omega }(t)},~i=1,2,\dots ,n. \end{aligned}$$
(50)

By (1), (3) and (50), we have for \(i=1,\ldots ,n\)

$$\begin{aligned} {{\dot{\eta }}}_i(t)= & {} \frac{x_{i+1}(t)+f_i(\cdot )-{\hat{x}}_{i+1}(t) +a_ir^i(t){\hat{x}}_1(t)}{{{\bar{r}}}^{i-1+\omega }r^{i-1+\omega }(t)}\\&-(i-1+\omega )\frac{(x_i(t)-{\hat{x}}_i(t))\dot{r}(t)}{{\bar{r}}^{i-1+\omega }r^{i+\omega }(t)}\\= & {} \frac{x_{i+1}(t) -{\hat{x}}_{i+1}(t)}{{\bar{r}}^{i+\omega } r^{i+\omega }(t)}{\bar{r}}r(t) +\frac{f_i(\cdot )}{{{\bar{r}}}^{i-1+\omega }r^{i-1+\omega }(t)}\\&+\frac{a_ir^i(t){\hat{x}}_1(t)}{{{\bar{r}}}^{i-1+\omega }r^{i-1+\omega }(t)} -(i-1+\omega )\frac{\dot{r}(t)}{r(t)}\eta _i(t)\\= & {} \frac{x_{i+1}(t)-{\hat{x}}_{i+1}(t)}{{\bar{r}}^{i+\omega } r^{i+\omega }(t)}{\bar{r}}r(t) -\frac{a_i(x_1(t)-{\hat{x}}_1(t))}{{\bar{r}} ^\omega r^\omega (t)}{\bar{r}} r(t)\\&+\frac{a_i(x_1(t)-{\hat{x}}_1(t))}{{\bar{r}} ^\omega r^\omega (t)}{\bar{r}}r(t) +\frac{f_i(\cdot )}{{\bar{r}}^{i-1+\omega }r^{i-1+\omega }(t)}\\&+\frac{a_ir^i(t){\hat{x}}_1(t)}{{\bar{r}}^{i-1+\omega }{r^{i-1+\omega }(t)}} -(i-1+\omega )\frac{\dot{r}(t)}{r(t)}\eta _i(t)\\= & {} {\bar{r}}r(t)\eta _{i+1}(t) -{\bar{r}}r(t)a_i\eta _1(t) +{\bar{r}}r(t)a_i\eta _1(t)\\&+\frac{f_i(\cdot )}{{\bar{r}}^{i-1+\omega } r^{i-1+\omega }(t)} -\frac{a_ir^i(t)(x_1(t)-{\hat{x}}_1(t))}{{\bar{r}}^{i-1+\omega }r^{i-1+\omega }(t)}\\&+\frac{a_ir^i(t)x_1(t)}{{\bar{r}}^{i-1+\omega } r^{i-1+\omega }(t)} -(i-1+\omega )\frac{\dot{r}(t)}{r(t)}\eta _i(t), \end{aligned}$$

where \(\eta _{n+1}(t)\triangleq 0\), and the foregoing equation can be rewritten in a compact form:

$$\begin{aligned} {{\dot{\eta }}}(t)= & {} {\bar{r}}r(t)A\eta (t) +{\bar{r}}r(t)M\eta _1(t)+Z_1(r(t))Mx_1(t)\nonumber \\&-r(t)Z_2M\eta _1(t)+{\bar{f}}(t,x(t),x(t-\tau ))\nonumber \\&-(\omega I+D)\frac{\dot{r}(t)}{r(t)}\eta (t), \end{aligned}$$
(51)

where

$$\begin{aligned} Z_1= & {} \left[ \begin{array}{cccc} \frac{r(t)}{{\bar{r}}^\omega r^\omega (t)}~ &{} ~ &{} ~ &{} \\ ~ &{} \frac{r(t)}{{\bar{r}}^{\omega +1} r^\omega (t)}~ &{} ~ &{} \\ ~ &{} ~ &{} ~\ddots &{} \\ ~ &{} ~ &{} ~&{} \frac{r(t)}{{\bar{r}}^{n-1+\omega } r^\omega (t)}\\ \end{array} \right] ,\\ \eta= & {} \left[ \begin{array}{c} \eta _1 \\ \eta _2 \\ \vdots \\ \eta _n \\ \end{array} \right] ,~ Z_2=\left[ \begin{array}{cccc} 1~ &{} ~ &{} ~ &{} \\ ~ &{} \frac{1}{\bar{r}}~ &{} ~ &{} \\ ~ &{} ~ &{} ~\ddots &{} \\ ~ &{} ~ &{} ~&{} \frac{1}{\bar{r}^{n-1}}\\ \end{array} \right] ,\\ ~~{\bar{f}}= & {} \left[ \begin{array}{c} \dfrac{f_1}{{\bar{r}}^{\omega }r^{\omega }(t)} \\ \dfrac{f_2}{{\bar{r}}^{1+\omega }r^{1+\omega }(t)} \\ \vdots \\ \frac{f_n}{{\bar{r}}^{n-1+\omega }r^{n-1+\omega }(t)} \\ \end{array} \right] , \end{aligned}$$

and the definitions of A and M come from (7). One can calculate

$$\begin{aligned} \Vert Z_1\Vert= & {} \max \Big \{\frac{r(t)}{{\bar{r}}^\omega r^\omega (t)}, \frac{r(t)}{{\bar{r}}^{\omega +1}r^{\omega }(t)}, \ldots ,\frac{r(t)}{{\bar{r}}^{n-1+\omega }r^{\omega }(t)}\Big \}\nonumber \\\le & {} \frac{{\bar{r}}}{{\bar{r}}^{\omega }},\nonumber \\ \Vert Z_2\Vert= & {} \max \Big \{1,\frac{1}{{\bar{r}}},\ldots , \frac{1}{\bar{r}^{n-1}} \Big \} \le 1. \end{aligned}$$
(52)
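Since \(Z_1\) and \(Z_2\) are diagonal, their spectral norms are simply their largest diagonal entries, and the bounds in (52) follow from \(1\le r(t)\le {\bar{r}}\) and \(0<\omega <1\). A small numerical spot check (with arbitrary sample values) illustrates this:

```python
import numpy as np

# Arbitrary sample values satisfying 1 <= r <= rbar and 0 < omega < 1.
n, omega, rbar, r = 4, 0.2, 3.0, 2.0

# Diagonal entries of Z1 and Z2 as defined after (51).
z1 = np.diag([r / (rbar ** (i + omega) * r ** omega) for i in range(n)])
z2 = np.diag([1.0 / rbar ** i for i in range(n)])

# The spectral norms respect the bounds in (52).
assert np.linalg.norm(z1, 2) <= rbar / rbar ** omega + 1e-12
assert np.linalg.norm(z2, 2) <= 1.0 + 1e-12
```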

Next, choosing

$$\begin{aligned} V_3= & {} \eta ^TP\eta +\sum ^{n}_{i=1}\frac{c }{1-{\overline{\tau }}}\nonumber \\&{\int _{t-\tau (t)}^{t}\frac{x_{i}^{2}(s)}{\bar{r}^{2i-2+2\omega }r^{2i-2+2\omega }(s)} \mathrm{d}s} \end{aligned}$$
(53)

and following the deduction of (14), one can deduce from (51) and (52) that

$$\begin{aligned} \dot{V}_3\le & {} {\bar{r}}r(t)\eta ^T(t)(PA+A^TP)\eta (t) +2{\bar{r}}r(t)\eta ^T(t)PM\eta _1(t)\nonumber \\&+2\eta ^T(t)PZ_1(t)Mx_1(t)-2r(t)\eta ^T(t)PZ_2M\eta _1(t)\nonumber \\&+2\eta ^T(t)P{\bar{f}}-\frac{\dot{r}(t)}{r(t)}\eta ^T(t)(2\omega P+DP+PD)\eta (t)\nonumber \\&+\sum ^{n}_{i=1}\frac{cx_{i}^{2}(t)}{(1-{\overline{\tau }})\bar{r}^{2i-2+2\omega }r^{2i-2+2\omega }(t)}\nonumber \\&- \sum ^{n}_{i=1}\frac{c x_{i}^{2}(t-\tau (t))}{\bar{r}^{2i-2+2\omega }r^{2i-2+2\omega }(t)}\nonumber \\\le & {} -{\bar{r}}r(t)\Vert \eta (t)\Vert ^2+2{\bar{r}} r(t)\Vert \eta (t)\Vert \cdot \Vert PM\Vert \cdot |\eta _1(t)|\nonumber \\&+2\frac{{\bar{r}}}{\bar{r}^\omega }\Vert \eta (t)\Vert \cdot \Vert PM\Vert \cdot |x_1(t)|\nonumber \\&+2r(t)\Vert \eta (t)\Vert \cdot \Vert PM\Vert \cdot |\eta _1(t)|\nonumber \\&+2\Vert \eta (t)\Vert \cdot \Vert P\Vert \cdot \Vert {\bar{f}}\Vert -\omega \lambda _1\frac{\dot{r}(t)}{r(t)}\Vert \eta (t)\Vert ^2\nonumber \\&+\sum ^{n}_{i=1} \frac{cx_{i}^{2}(t)}{(1-{\overline{\tau }}) {\bar{r}}^{2i-2+2\omega }r^{2i-2+2\omega }(t)}\nonumber \\&-\sum ^{n}_{i=1}\frac{c x_{i}^{2}(t-\tau (t))}{{\bar{r}}^{2i-2+2\omega }r^{2i-2+2\omega }(t)}, \end{aligned}$$
(54)

Next, we have

$$\begin{aligned}&2\frac{{\bar{r}}}{{\bar{r}}^\omega }\Vert \eta (t)\Vert \Vert P\Vert \Vert M\Vert |x_1(t)|\nonumber \\&\quad \le \frac{{\bar{r}}r(t)}{4}\Vert \eta (t)\Vert ^2 +\frac{4{\bar{r}}}{\bar{r}^{2\omega }}\Vert P\Vert ^2\Vert M\Vert ^2x_1^2(t), \end{aligned}$$
(55)

Then, similar to (13), we can get

$$\begin{aligned}&2\Vert \eta (t)\Vert \Vert P\Vert \Vert {\bar{f}}\Vert \\&\quad \le 2c\Vert \eta (t)\Vert \cdot {\Vert P\Vert }\\&\quad \sum ^{n}_{i=1}\frac{n-i+1}{{\bar{r}}^{i-1+\omega } r^{i-1+\omega }(t)}\big ((1+|y(t)|^p)|x_{i}(t)|\\&\qquad +\,(1+|y(t)|^p)|x_{i}(t-\tau (t))|\big )\\&\quad \le cn(n+1)\Vert P\Vert \Big (1+\frac{2n+1}{6}\Vert P\Vert \Big )\nonumber \\&\quad \Vert \eta (t)\Vert ^2 (1+|y(t)|^p)^2\\&\qquad +\,c\Vert P\Vert \sum ^{n}_{i=1}\frac{(n-i+1)x^2_{i}(t)}{2{\bar{r}}^{2i-2+2\omega }r^{2i-2+2\omega }(t)}\\&\qquad +\,c\sum ^{n}_{i=1}\frac{x^2_{i}(t-\tau (t))}{\bar{r}^{2i-2+2\omega }r^{2i-2+2\omega }(t)}. \end{aligned}$$

On the other hand, by Lemma 2, (5), (52), \(|\eta _1(t)|\le |\varepsilon _1(t)|\) and \(r(t)\le {\bar{r}}\) we have

$$\begin{aligned}&2{\bar{r}}r(t)\Vert \eta (t)\Vert \cdot \Vert PM\Vert \cdot |\eta _1(t)|\nonumber \\&\qquad +\,2r(t)\Vert \eta (t)\Vert \cdot \Vert PM\Vert \cdot |\eta _1(t)|\nonumber \\&\quad \le \frac{{\bar{r}}r(t)}{4}\Vert \eta (t)\Vert ^2 +4{\bar{r}}r(t)\Vert P\Vert ^2\Vert M\Vert ^2\varepsilon _1^2(t)\nonumber \\&\qquad +\,\frac{{\bar{r}}r(t)}{4}\Vert \eta (t)\Vert ^2 +4{\bar{r}}r(t)\Vert P\Vert ^2\Vert M\Vert ^2\varepsilon _1^2(t)\nonumber \\&\quad \le \frac{{\bar{r}}r(t)}{2}\Vert \eta (t)\Vert ^2 +8{{\bar{r}}}^2\Vert P\Vert ^2\Vert M\Vert ^2\varepsilon _1^2(t). \end{aligned}$$
(56)

Based on the inequalities (54)–(56), one can get

$$\begin{aligned} \dot{V}_3(t)\le & {} -\frac{{\bar{r}}r(t)}{4}\Vert \eta (t)\Vert ^2 +k_1(c)(1+|y(t)|^p)^2\Vert \eta (t)\Vert ^2\nonumber \\&+\frac{k_2{\bar{r}}}{{\bar{r}}^{2\omega }}x_1^2(t) +2k_2{\bar{r}}^2\varepsilon _1^2(t) -\omega \lambda _1\frac{\dot{r}(t)}{r(t)}\Vert \eta (t)\Vert ^2\nonumber \\&+\sum _{i=1}^n\frac{k_{i+2}(c)x_i^2(t)}{\bar{r}^{2i-2+2\omega }r^{2i-2+2\omega }(t)}, \end{aligned}$$
(57)

where the definitions of \(k_i\)’s can be found in (15). By (17), (50) and \(r(t)\le {\bar{r}}\), we have for \(i=2,\ldots ,n\)

$$\begin{aligned} x_i^2(t)= & {} \big ({\bar{r}}^{i-1+\omega }r^{i-1+\omega }(t)\eta _i(t) +r^{i-1+\omega }(t)L^{i-1}\xi _i(t)\big )^2\\\le & {} 2{\bar{r}}^{2i-2+2\omega }r^{2i-2+2\omega }(t) \big (\eta _i^2(t)+L^{2i-2}\xi _i^2(t)\big ), \end{aligned}$$

then it is easy to get

$$\begin{aligned} \frac{x_i^2(t)}{{\bar{r}}^{2i-2+2\omega }r^{2i-2+2\omega }(t)} \le 2\Vert \eta (t)\Vert ^2+2L^{2i-2}\Vert \xi (t)\Vert ^2.\nonumber \\ \end{aligned}$$
(58)
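Inequality (58) combines \((a+b)^2\le 2a^2+2b^2\) with \(1\le r(t)\le {\bar{r}}\); it can be spot-checked numerically on random data (the values of \(n,\omega ,L,{\bar{r}}\) below are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
n, omega, L, rbar = 4, 0.2, 2.0, 3.0   # illustrative values, rbar >= 1

for _ in range(100):
    r = rng.uniform(1.0, rbar)          # 1 <= r(t) <= rbar
    eta = rng.normal(size=n)
    xi = rng.normal(size=n)
    for i in range(2, n + 1):
        m = i - 1 + omega
        # x_i reconstructed from eta_i and xi_i as in the displayed identity
        x_i = rbar**m * r**m * eta[i - 1] + r**m * L ** (i - 1) * xi[i - 1]
        lhs = x_i**2 / (rbar ** (2 * m) * r ** (2 * m))
        rhs = 2 * eta @ eta + 2 * L ** (2 * i - 2) * (xi @ xi)
        assert lhs <= rhs + 1e-9        # inequality (58)
```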

Following the deduction of (33), the inequality (57) can be further expressed by

$$\begin{aligned} \dot{V}_3(t)\le & {} -\Big (\frac{\bar{r}r(t)}{4}+\omega \lambda _1\frac{\dot{r}(t)}{r(t)} -k_1(c)(1+|y(t)|^p)^2\Big )\Vert \eta (t)\Vert ^2\nonumber \\&+\,k_2{\bar{r}}\Vert \xi (t)\Vert ^2+2k_2{{\bar{r}}}^2{\varepsilon }_1^2(t) +k_3(c)\frac{{x_1}^2(t)}{{{\bar{r}}}^{2\omega }r^{2\omega }(t)}\nonumber \\&+\,\sum _{i=2}^{n}k_{i+2}(c)\Big (2\Vert \eta (t)\Vert ^2+2L^{2i-2}\Vert \xi (t)\Vert ^2\Big )\nonumber \\\le & {} -\Big (\frac{{\bar{r}}r(t)}{4}+\omega \lambda _1\frac{\dot{r}(t)}{r(t)} -k_1(c)(1+|y(t)|^p)^2\Big )\Vert \eta (t)\Vert ^2\nonumber \\&+\,k_2{\bar{r}}\Vert \xi (t)\Vert ^2+2k_2{{\bar{r}}}^2{\varepsilon }_1^2(t) +2\sum _{i=2}^{n}k_{i+2}(c)\Vert \eta (t)\Vert ^2\nonumber \\&+\,\Big (k_3(c)+2\sum _{i=2}^{n}k_{i+2}(c)L^{2i-2}\Big )\Vert \xi (t)\Vert ^2. \end{aligned}$$
(59)


Notably, following (38), one can get

$$\begin{aligned} k_1(c)(1+|y(t)|^p)^2\le \frac{1}{2}c^2+\frac{1}{2}g_1(y). \end{aligned}$$
(60)

Then, we can deduce from (4), (32), (59) and (60) that

$$\begin{aligned} \dot{V}_3(t)\le & {} -r\big (\frac{\bar{r}}{4}-\frac{\rho }{16}-\frac{c^2}{2r}\big )\Vert \eta \Vert ^2 +k_2{\bar{r}}\Vert \xi (t)\Vert ^2\nonumber \\&+\,2k_2{{\bar{r}}}^2{\varepsilon _1}^2(t) +2\sum _{i=2}^{n}k_{i+2}(c)\Vert \eta (t)\Vert ^2\nonumber \\&+\,\Big (k_3(c)+2\sum _{i=2}^{n}k_{i+2}(c)L^{2i-2}\Big )\Vert \xi (t)\Vert ^2\nonumber \\\le & {} -\frac{3}{16}{\bar{r}}\Vert \eta \Vert ^2 +\Big (\frac{1}{2}c^2 +2\sum _{i=2}^{n}k_{i+2}(c)\Big )\Vert \eta (t)\Vert ^2\nonumber \\&+\,k_2{\bar{r}}\Vert \xi (t)\Vert ^2 +2k_2{{\bar{r}}}^2{\varepsilon _1}^2(t) +L{\tilde{k}}_2(c)\Vert \xi (t)\Vert ^2.\nonumber \\ \end{aligned}$$
(61)

Since \({\bar{r}}\) can be taken sufficiently large, one can choose \({\bar{r}}\) such that \(8(\frac{1}{2}c^2+2\sum _{i=2}^{n}k_{i+2}(c))\le {\bar{r}}\). By (4), (5), \(0<{\bar{\theta }}<1\) and \(1-{\bar{\theta }}\le \theta (t)\le 1+{\bar{\theta }}\), there is

$$\begin{aligned} \varepsilon _1^2(t)\le & {} r(t)\frac{(x_1(t)-{\hat{x}}_1(t))^2}{r^{2\omega }(t)} \le 2r(t)\Big (\frac{x_1^2(t)}{r^{2\omega }(t)}+\frac{{\hat{x}}_1^2(t)}{r^{2\omega }(t)} \Big )\nonumber \\\le & {} \frac{2r(t)y^2(t)}{(1-{\bar{\theta }})^2r^{2\omega }(t)} +\frac{2r(t){\hat{x}}_1^2(t)}{r^{2\omega }(t)} \le 2\dot{r}(t). \end{aligned}$$
(62)

Using (62) in (61), one has

$$\begin{aligned} \dot{V}_3(t)\le & {} -\frac{{\bar{r}}}{16}\Vert \eta (t)\Vert ^2 +(L{\tilde{k}}_2(c)+{\bar{r}}k_2)\Vert \xi (t)\Vert ^2 \nonumber \\&+\,4{{\bar{r}}}^2k_2\dot{r}(t). \end{aligned}$$
(63)

Integrating both sides of (63) from 0 to \(t~(t< T_f)\) renders

$$\begin{aligned}&V_3(t)-V_3(0)\nonumber \\&\quad \le -\frac{{\bar{r}}}{16}\int _{0}^{t}\Vert \eta (s)\Vert ^2\mathrm{d}s\nonumber \\&\qquad +\,(L{\tilde{k}}_2(c)+{\bar{r}}k_2)\int _{0}^{t}\Vert \xi (s)\Vert ^2\mathrm{d}s\nonumber \\&\qquad +\,4{{\bar{r}}}^2 k_2(r(t)-r(0)). \end{aligned}$$
(64)

From (64), the boundedness of r(t) and \(\int _{0}^{T_f}\Vert \xi (t)\Vert ^2\mathrm{d}t\) implies that \(V_3(t)\) and \(\int _{0}^{t}\Vert \eta (s)\Vert ^2\mathrm{d}s\) are bounded on \([0,T_f)\). Furthermore, \(\int _{0}^{t}\Vert \varepsilon (s)\Vert ^2\mathrm{d}s\) is bounded. The definition of \(V_3\) shows \(\eta ^T(t)P\eta (t)\le V_3(t)\), so \(\Vert \eta (t)\Vert \) is bounded on \([0, T_f)\). Now, it follows from the relationship \(\varepsilon _i(t)={\bar{r}}^{i-1+\omega }\eta _i(t)\) that \(\Vert \varepsilon (t)\Vert \) is bounded for all \(t\in [-{\tilde{\tau }},T_f)\). Finally, it follows from (41) that

$$\begin{aligned} V(t)\le V(0)+{\bar{r}}\Big (\frac{\rho }{16}+\frac{c^2}{2}\Big ) \int _0^{t}\big (\Vert \varepsilon (s)\Vert ^2+\Vert \xi (s)\Vert ^2\big )\mathrm{d}s, \end{aligned}$$

from which we conclude that V(t) is bounded, and so is \(\Vert \xi (t)\Vert \).

Step 4: \(T_{f}=+\infty \). If \(T_{f}\) were finite, then \(T_{f}\) would be an escape time, which means that at least one component of X(t) would tend to \(\infty \) as \(t\rightarrow T_{f}\). However, the continuity of \(\varepsilon _i(t)\), \(\xi _i(t)\) and r(t) guarantees the boundedness of X(t) at \(t=T_{f}\), because X(t) is bounded on \([-{\tilde{\tau }},T_{f})\). This is a contradiction. Therefore, \(T_{f}=+\infty \).

Step 5: \(\lim _{t\rightarrow +\infty }x_i(t)=\lim _{t\rightarrow +\infty }{\hat{x}}_i(t)=0, i=1,\ldots ,n\). By the preceding steps, the boundedness of r(t), \(\Vert \varepsilon (t)\Vert \) and \(\Vert \xi (t)\Vert \) implies that \(x_i(t)\) and \({\hat{x}}_i(t)\) are bounded on \([-{\tilde{\tau }},+\infty )\), which further implies the boundedness of \(f_i(\cdot )\). With this in mind, (20) and (51) show that \({{\dot{\xi }}}(t)\) and \({{\dot{\eta }}}(t)\) are bounded. In terms of the boundedness of \(\int _{0}^{+\infty }\Vert \xi (t)\Vert ^2\mathrm{d}t\), \(\Vert \xi (t)\Vert \), \(\int _{0}^{+\infty }\Vert \eta (t)\Vert ^2\mathrm{d}t\) and \(\Vert \eta (t)\Vert \), we infer from Lemma A.6 in [1] that \(\lim _{t\rightarrow +\infty }\xi _i(t)=\lim _{t\rightarrow +\infty }\eta _i(t)=0\). In consequence, it is easy to prove \(\lim _{t\rightarrow +\infty }x_i(t) =\lim _{t\rightarrow +\infty }{\hat{x}}_i(t)=0\). This completes the proof. \(\square \)

Remark 3

It is crucial to find a delicate time-varying gain r(t) that grows fast enough to dominate all possible uncertainties. For this reason, \(\dot{r}(t)\) consists of two parts, which makes the choice of the dynamic gain r(t) in this paper superior to those in the existing results, such as [10, 18, 19, 34, 36, 42]. (i) There is no unknown growth rate in [18, 42]; that is, c in (2) is known, so the first part in (4) suffices to dominate the output-dependent uncertainties. (ii) If c in (2) is unknown, then the methods in [19, 34, 36] obtain the expression of \(\dot{r}(t)\) by compensating for c, which coincides with the second part in (4) to some extent; hence, the expression in this paper has a more general form than those in [19, 34, 36]. In addition, the two time-varying gains in [10] are simplified to one here, which may decrease the control effort.

Remark 4

Substantial obstacles will inevitably be met due to the existence of the dynamic gain r(t) and the unknown growth rate c. (i) Remark 3 shows that it is not easy to prove the boundedness of r(t). (ii) It is intricate to prove the boundedness of \(\Vert \varepsilon (t)\Vert \). To this end, we introduce a new scaling transformation with the upper bound of r(t) and carefully tease out the relationship between \(\xi (t)\) and \(\dot{r}(t)\). (iii) Without estimating the unknown constant c directly by the traditional adaptive technique, we can simplify the structure of the output feedback controller at the expense of a more complicated stability analysis.

4 Simulation examples

Example 1

To show the effectiveness of the proposed control strategy, we consider the following nonlinear system with time-varying delay:

$$\begin{aligned} {\left\{ \begin{array}{ll} {\dot{x}}_{1}(t)=x_{2}(t)+x_1(t-\tau (t))(1+y(t)),\\ {\dot{x}}_{2}(t)=u(t)+x_{2}(t-\tau (t))(1+y(t))\sin (6x_{1}(t)),\\ y=\theta (t)x_{1}, \end{array}\right. }\nonumber \\ \end{aligned}$$
(65)

where \(\theta (t)\) will be specified differently in the cases below. It is easy to verify that Assumption 1 is satisfied with \(p=c=1\). If one chooses \(\omega =\frac{1}{6}\), \(a_1=5\), \(a_2=1\), \(b_1=1\), \(b_2=\frac{1}{2}, L=1\), then the observer can be expressed by

$$\begin{aligned} \dot{{\hat{x}}}_{1}(t)={\hat{x}}_{2}(t)-5r(t){\hat{x}}_1(t),~ \dot{{\hat{x}}}_{2}(t)=u(t)-r^2(t){\hat{x}}_1(t), \end{aligned}$$

and the time-varying gain can be selected as

$$\begin{aligned} \dot{r}(t)= & {} \max \Big \{4r^{\frac{2}{3}}(t){y}^2(t)+r^{\frac{2}{3}}(t){{\hat{x}}_1}^2(t) +r^{-\frac{4}{3}}(t){{\hat{x}}_2}^2(t),\\&-3r^2(t)+(5+5(1+|y(t)|)^4)r(t)\Big \}. \end{aligned}$$

Now, the output feedback controller is constructed as \(u(t)=-y(t)r^2(t)-0.5r(t){\hat{x}}_2(t)\).
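A minimal forward-Euler sketch of this closed loop can be written with a history buffer for the delayed states. The code below is our own illustration (not the authors' simulation code): it uses \(\theta (t)=0.3+0.4|\sin 5t|\) and \(\tau (t)=1+0.8\sin t\) as in case (iii), a constant initial history, and a step size chosen by trial; it only checks that the trajectories stay finite and that r(t) is nondecreasing, which is guaranteed since the first branch of \(\dot{r}(t)\) is nonnegative.

```python
import math

dt, T = 5e-5, 1.0
steps = int(T / dt)
tau_max = 1.8                                  # upper bound of tau(t) on [0, T]
hist = int(tau_max / dt) + 1

x1 = [1.0] * hist                              # constant history for x1
x2 = [1.0] * hist                              # constant history for x2
xh1, xh2, r = -1.0, -1.0, 1.0                  # observer states and dynamic gain

for k in range(steps):
    t = k * dt
    theta = 0.3 + 0.4 * abs(math.sin(5 * t))   # time-varying measurement error
    y = theta * x1[-1]
    tau = 1 + 0.8 * math.sin(t)
    d = min(len(x1) - 1, int(tau / dt))        # steps back to the delayed sample
    x1d, x2d = x1[-1 - d], x2[-1 - d]

    u = -y * r**2 - 0.5 * r * xh2              # output feedback controller
    rdot = max(4 * r ** (2 / 3) * y**2 + r ** (2 / 3) * xh1**2
               + r ** (-4 / 3) * xh2**2,
               -3 * r**2 + (5 + 5 * (1 + abs(y)) ** 4) * r)

    nx1 = x1[-1] + dt * (x2[-1] + x1d * (1 + y))
    nx2 = x2[-1] + dt * (u + x2d * (1 + y) * math.sin(6 * x1[-1]))
    nxh1 = xh1 + dt * (xh2 - 5 * r * xh1)      # observer
    nxh2 = xh2 + dt * (u - r**2 * xh1)
    nr = r + dt * rdot

    assert all(map(math.isfinite, (nx1, nx2, nxh1, nxh2, nr)))
    assert nr >= r                             # the dynamic gain never decreases
    x1.append(nx1)
    x2.append(nx2)
    xh1, xh2, r = nxh1, nxh2, nr
```

A fixed-step Euler scheme with nearest-sample delay lookup is the crudest possible integrator for a delay differential equation; it is adequate here only as a qualitative check.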

Fig. 1: The trajectories of the states \(x_1\) and \(\hat{x}_1\)

Fig. 2: The trajectories of the states \(x_2\) and \(\hat{x}_2\)

Fig. 3: The trajectories of dynamic gain \(r\)

Fig. 4: The trajectories of control \(u\)

Fig. 5: The trajectories of the states \(x_1\) and \(\hat{x}_1\)

Fig. 6: The trajectories of the states \(x_2\) and \(\hat{x}_2\)

Fig. 7: The trajectories of dynamic gain \(r\)

Fig. 8: The trajectories of control \(u\)

In the simulation, we choose the initial values as \([x_1(\Xi ),{{\hat{x}}}_1(\Xi ),x_2(\Xi ), {{\hat{x}}}_2(\Xi )]^T=[1,-1,1,-1]^T\), \(r(\Xi )=1\) for all \(\Xi \in [-0.1,0]\). Figures 1, 2, 3, 4, 5, 6, 7 and 8 demonstrate the effectiveness of the control scheme and the correctness of the theoretical results. (i) \([x(t),{\hat{x}}(t),r(t)]^T\) is uniformly bounded, and \([x(t),{\hat{x}}(t)]^T\) converges to zero. (ii) Under the same time-varying delay \(\tau (t)=1+0.8\sin t\), Figs. 1, 2, 3 and 4 show the effect of the maximal time derivative of the continuous function \(\theta (t)\); in particular, a larger time derivative leads to more drastic oscillation of the control u(t). (iii) Under the same continuous function \(\theta (t)=0.3+0.4|\sin 5t|\), Figs. 5, 6, 7 and 8 exhibit the effect of different time-varying delays whose upper bound is greater or less than 1; it can be seen that a larger delay slows the convergence of the states and increases the control effort.

Fig. 9: The trajectories of the states \(x_1\) and \(\hat{x}_1\)

Fig. 10: The trajectories of the states \(x_2\) and \(\hat{x}_2\)

Fig. 11: The trajectory of dynamic gain \(r\)

Fig. 12: The trajectory of control \(u\)

Example 2

To further show the effectiveness of the proposed control scheme in practice, we consider a two-stage chemical reactor system [43] as follows:

$$\begin{aligned} {\left\{ \begin{array}{ll} {\dot{x}}_{1}(t)=\frac{1-R_\beta }{V_\alpha }x_{2}(t)-\frac{1}{C_\alpha }x_1(t)-K_\alpha x_1(t),\\ {\dot{x}}_{2}(t)=\frac{E}{V_\beta }u(t)-\frac{1}{C_\beta }x_2(t)-K_\beta x_2(t)\\ \qquad \qquad + \frac{R_\alpha }{V_\beta }x_1(t-\tau (t)) +\frac{R_\beta }{V_\beta }x_2(t-\tau (t)),\\ y(t)=\theta (t)x_{1}(t), \end{array}\right. } \end{aligned}$$
(66)

where \(x_1\) and \(x_2\) are the compositions; u and y are the input and output, respectively; \(R_\alpha \) and \(R_\beta \) are the recycle flow rates; \(C_\alpha \) and \(C_\beta \) are the reactor residence times; E is the feed rate; \(V_\alpha \) and \(V_\beta \) are the reactor volumes; \(K_\alpha \) and \(K_\beta \) are the reaction constants. For simulation purposes, set \(R_\alpha =R_\beta =0.5\), \(K_\alpha =K_\beta =0.5\), \(V_\alpha =V_\beta =0.5\), \(C_\alpha =C_\beta =2\), \(E=1\) and \(\theta (t)=0.3+0.4|\cos (5t)|\). Then, the system (66) is changed into

$$\begin{aligned} {\left\{ \begin{array}{ll} {\dot{x}}_{1}(t)=x_2(t)-x_1(t),\\ {\dot{x}}_{2}(t)=2u(t)-x_2(t)+x_1(t-\tau (t))\\ \qquad \quad \quad +x_2(t-\tau (t)),\\ y(t)=\theta (t)x_{1}(t). \end{array}\right. } \end{aligned}$$
(67)
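The reduction from (66) to (67) follows by substituting the chosen parameter values into each coefficient; a few lines confirm the arithmetic (using the values listed above):

```python
# Parameter values chosen above for the two-stage chemical reactor (66).
R_a = R_b = 0.5      # recycle flow rates
K_a = K_b = 0.5      # reaction constants
V_a = V_b = 0.5      # reactor volumes
C_a = C_b = 2.0      # reactor residence times
E = 1.0              # feed rate

# Coefficients of (67): x1' = x2 - x1, x2' = 2u - x2 + x1(t-tau) + x2(t-tau)
assert (1 - R_b) / V_a == 1.0                  # coefficient of x2 in x1'
assert -(1 / C_a + K_a) == -1.0                # coefficient of x1 in x1'
assert E / V_b == 2.0                          # coefficient of u in x2'
assert -(1 / C_b + K_b) == -1.0                # coefficient of x2 in x2'
assert R_a / V_b == 1.0 and R_b / V_b == 1.0   # delayed-term coefficients
```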

It is easy to verify that Assumption 1 is satisfied with \(p=c=1\). Now, choosing \(\omega =\frac{1}{8}\), \(a_1=4\), \(a_2=2\), \(b_1=\frac{1}{2}\), \(b_2=1, L=1\), the observer and the output feedback controller can be constructed as follows:

$$\begin{aligned} {\left\{ \begin{array}{ll} \dot{{\hat{x}}}_{1}(t)={\hat{x}}_{2}(t)-4r(t){\hat{x}}_1(t),\\ \dot{{\hat{x}}}_{2}(t)=u(t)-2r^2(t){\hat{x}}_1(t),\\ u(t)=-\frac{1}{2}y(t)r^2(t)-r(t){\hat{x}}_2(t), \end{array}\right. } \end{aligned}$$
(68)

where the time-varying gain can be selected as

$$\begin{aligned} {\dot{r}}(t)= & {} \max \Big \{2r^{\frac{3}{4}}(t){y}^2(t)+r^{\frac{3}{4}}(t){{\hat{x}}_1}^2(t) +r^{-\frac{5}{4}}(t){{\hat{x}}_2}^2(t),\\&-2r^2(t)+(4+4(1+|y(t)|)^4)r(t)\Big \}. \end{aligned}$$

In the simulation, we choose the time-varying delay as \(\tau (t)=1+\sin (0.8t)\). Then, Figs. 9, 10, 11 and 12 show the state responses of the closed-loop system consisting of (67) and (68) with the initial conditions \([x_1(\Xi ),{{\hat{x}}}_1(\Xi ),x_2(\Xi ),{{\hat{x}}}_2(\Xi )]^T=[2,-1,5,-5]^T\) and \(r(\Xi )=1\) for all \(\Xi \in [-0.1,0]\). It can be observed that the control and all the states converge to the origin, and that the time-varying gain is uniformly bounded.

5 Conclusions

In this paper, we have found a feasible dynamic gain and a valid integral Lyapunov candidate function for output feedback stabilization of nonlinear systems with time-varying delay, time-varying measurement error and an unknown growth rate. Two gains are introduced to explicitly construct a state observer as well as an output feedback control law: the dynamic gain dominates the unknown nonlinear terms, while the constant gain dominates the unknown measurement error. Finally, output feedback regulation of the investigated system is realized in a unifying framework.