Abstract
This paper concerns the polynomial-logarithmic stability and stabilization of time-varying control systems. We present sufficient Lyapunov-like conditions guaranteeing this polynomial-logarithmic stability with applications to several linear and nonlinear control systems.
1 Introduction
In control theory, a central concern is to describe how the solution of a control system converges to (or reaches) the equilibrium point in the long run. The well-known rapid exponential stabilization is one useful property that characterizes the qualitative behavior of the static closed-loop system. This property merely follows from the fact that the linearization matrix around the equilibrium point is Hurwitz [8, 35]. A natural question then arises: what can be done when this property does not hold? Clearly, this criterion usually fails in the nonlinear case; e.g. the scalar system \(\dot{x}=-x^3u^2\) cannot be exponentially stabilized by regular state feedback laws; instead, with \(u(x)=\sqrt{|x|}\), we easily get the polynomial approximation \(|x(t)|\asymp _{+\infty }c/t^{1/3}.\) More precisely, the closed-loop system is polynomially stable. Hence, this paper discusses further responses to the above question and proposes other alternatives to the exponential decay rate.
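The polynomial decay in the motivating example can be checked numerically. The sketch below is our illustration (not part of the original argument): it integrates the closed-loop system \(\dot{x}=-x^3u^2\) with \(u(x)=\sqrt{|x|}\), i.e. \(\dot{x}=-|x|\,x^{3}\), whose exact solution for \(x(0)>0\) is \(x(t)=(x(0)^{-3}+3t)^{-1/3}\asymp (3t)^{-1/3}\):

```python
import math

def rk4(f, x0, t0, t1, n):
    """Fixed-step Runge-Kutta 4 for the scalar ODE x' = f(t, x)."""
    h = (t1 - t0) / n
    t, x = t0, x0
    for _ in range(n):
        k1 = f(t, x)
        k2 = f(t + h/2, x + h*k1/2)
        k3 = f(t + h/2, x + h*k2/2)
        k4 = f(t + h, x + h*k3)
        x += h * (k1 + 2*k2 + 2*k3 + k4) / 6
        t += h
    return x

# closed loop: xdot = -x^3 u^2 with u(x) = sqrt(|x|), i.e. xdot = -|x| x^3
closed_loop = lambda t, x: -abs(x) * x**3

x0, T = 1.0, 1000.0
xT = rk4(closed_loop, x0, 0.0, T, 200000)
# for x0 > 0 the exact solution is x(t) = (x0^{-3} + 3t)^{-1/3}
exact = (x0**-3 + 3*T) ** (-1.0/3.0)
print(xT, exact)
```

The ratio \(x(T)\,(3T)^{1/3}\) is close to 1 for large \(T\), confirming the \(t^{-1/3}\) rate.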
In a series of previous papers [19, 20, 22, 23, 36], we developed results on the polynomial stability and stabilization of dynamical systems, with applications to several control systems for which stabilizing feedbacks (resp. stabilizing output feedbacks) are constructed. A characterization of this polynomial stability by means of Lyapunov functions is proved in [23], while such a characterization for homogeneous systems, with optimal decay rate, is obtained in [20]. The decay rate estimate in that setting is the inverse of the degree of homogeneity of the system, a strong result for homogeneous systems. This result opened the door to the construction of homogeneous observers followed by output feedbacks with optimal decay rate.
In this paper, we continue our progress in this area and examine the possibility of “blending” polynomial and logarithmic stability. For a general control system in \({\mathbb {R}}^n\times \mathbb R^m,\) of the form \(\dot{y}=f(y,\,u)\) with \(f(0,\,0)=0\), logarithmic stabilization means the construction of a suitable time-varying feedback law \(u(t,\,y)\) (Hölderian, in general, with respect to the state) such that each solution y(t) of the closed-loop system \(\dot{y}=f(y,\,u(t,\,y))\) decreases like \(\displaystyle \frac{c}{t^{\alpha }\ln ^{\beta }(t)} \) where \(\alpha ,\,\beta >0\) are the decay rates. Indeed, we have remarked that, contrary to polynomial stability, which holds for both time-invariant and time-varying systems, polynomial-logarithmic stability usually concerns the class of time-varying systems. In particular, for many applied nonlinear systems there is part of the state that it is not useful to stabilize: it is sufficient to know that this part converges; this is partial asymptotic stabilization [14, 15]. We show in this article that logarithmic stabilization by a static state feedback law of the form u(x) is insufficient to solve some problems of partial asymptotic stabilization.
In this context, we develop sufficient integral-like criteria characterizing this polynomial-logarithmic stability. The main idea is that this coupled polynomial-logarithmic stability property is obtained via a time-varying Lyapunov function admitting the estimate
with \(\theta >1,\) and f(.) is a positive measurable function satisfying the integral constraint
where \(\alpha ,\,\beta \) and k are nonnegative constants. The function \(t\mapsto f(t)\) in the above inequality plays the role of an excitation function that drives the system to reach the equilibrium point asymptotically like the inverse of \(t^{\alpha }\ln ^{\beta }(t).\) In this context, further general results are presented, which permit us to obtain either polynomial stability or logarithmic stability.
It has been noted in several previous works [20, 22, 23] that, for smooth systems (i.e. Lipschitz systems that may not be linearizable at the origin), polynomial stabilization can be ensured by regular continuous state feedback laws, and even by smooth ones if some parameters in the state feedbacks are suitably chosen. This regularity of the feedbacks is preserved for logarithmic stabilization. We present examples of control systems showing how one can build stabilizing feedback laws leading to polynomial-logarithmic decay estimates. The first example concerns the double integrator, for which we construct a time-varying feedback law stabilizing this system in the logarithmic sense. This preliminary result can be generalized to cascade systems; in particular, the backstepping technique [8] is extended to this type of stability and helps us construct a time-varying feedback law stabilizing polynomially-logarithmically the toy model of controllable quadratic systems [9]. For this class of systems, Coron proved in [9] that solutions of controllable quadratic systems decrease like c/t; our result therefore improves those obtained in [9] by increasing the rate of convergence.
Further examples, such as bilinear control systems in \({\mathbb {R}}^n\), the Brockett integrator, and weak stability in the partial sense, are treated. For instance, for bilinear control systems in \({\mathbb {R}}^n\), we improve the decay rate obtained by Quin in [24]. One of the strong points of the treated examples is that we keep the same assumptions proposed by the authors cited above while improving the convergence speed of these systems by constructing new and regular (even smooth, if the parameter \(\beta \) is a fixed integer) feedbacks.
Finally, we observe that this mixed stability is not frequently studied in the literature on dynamical systems or dynamical control systems, although a few works in the infinite-dimensional setting concern polynomial stability, even with optimal decay rate [4, 12].
This paper is outlined as follows: the next section deals with the notion of logarithmic stability and stabilization of time-varying systems/time-varying control systems. The main results are presented in Sects. 2 and 3 where several Lyapunov criteria are proved. In Sect. 4, we illustrate our contribution by examples of control systems where time-varying stabilizing feedback laws in logarithmic stability sense are built. In Sect. 5, we present links between the controllability of the linearized system and the logarithmic stabilizability. Finally, the conclusion is given in Sect. 6.
2 Logarithmic Stability: Main Results
In this paper, we adopt the following notation: ||.|| (resp. \(\langle ,\,\rangle \)) denotes the Euclidean norm (resp. the inner product) on \({\mathbb {R}}^n\), \(\text {L}^1([0,\,+\infty ))\) is the Lebesgue space with norm \(||.||_{1}\), and \('\) is the symbol of transposition. The set \({\mathbb {Q}}_{odd}^{+}=\{r\in {\mathbb {Q}}^{*}_{+}: r=\frac{p}{q},\,\text {where}\,\, p\, \text {and}\,\, q \,\,\text {are odd nonnegative integers}\},\) \({\mathbb {Q}}^{+}_{even}\) is the set of nonnegative rational numbers m of the form \(m=\frac{p}{q}\) where \(p\in 2{\mathbb {N}}\) and q is an odd integer, and \(\asymp \) denotes asymptotic equivalence.
A function \(\alpha :{\mathbb {R}}_{+}\rightarrow {\mathbb {R}}_{+}\) is said to be a \({\mathcal {K}}\)-function if \(\alpha \) is continuous and strictly increasing with \(\alpha (0)=0.\) If, in addition, \(\alpha \) is unbounded, then it is a \({\mathcal {K}}_{\infty }\)-function. A function \(\beta :{\mathbb {R}}_{+}\times {\mathbb {R}}_{+}\rightarrow {\mathbb {R}}_{+}\) belongs to class \({\mathcal {K}}{\mathcal {L}}\) if for every fixed \(t\geqslant 0,\,\beta (.,\,t)\in {\mathcal {K}}_{\infty }\) and for each fixed \(s\in {\mathbb {R}}_{+}, \beta (s,\,t)\rightarrow 0\) as \(t\rightarrow +\infty .\)
Before presenting the notion of polynomial-logarithmic stability, we introduce the following preliminary results.
2.1 Preliminary Results: Logarithmic Feedback Stabilization
One of our concerns is to understand how solutions of dynamical systems of the form \(\dot{x}(t)=X(x(t)),\,X(0)=0\) tend to the equilibrium point. In his textbook [29], E. D. Sontag, for example, proposes the following definition: the equilibrium point 0 is globally asymptotically stable (GAS) if
Consider, for example, the scalar control system in \({\mathbb {R}}\times {\mathbb {R}}.\)
Our objective is to construct two types of feedback laws.
-
A static stabilizing feedback law of the form u(x) leading to a logarithmic stability estimate for the closed-loop system (1), i.e. there exist nonnegative constants \(c,\,\alpha \) such that
$$\begin{aligned} |x(t)|{\asymp _{+\infty }} \displaystyle \frac{c}{\ln ^{\alpha } (t)}. \end{aligned}$$In this case, we say that the closed-loop system \(\dot{x}(t)=u(x)\) is logarithmically stable, or that the system (1) is logarithmically stabilizable.
-
A time-varying stabilizing feedback of the form \(u(t,\,x)\) leading to a polynomial-logarithmic stability estimate for the closed-loop system (1), i.e. there exist nonnegative constants \(c,\,\alpha ,\,\beta \) such that
$$\begin{aligned} |x(t)|{\asymp _{+\infty }} \displaystyle \frac{c}{t^{\alpha }\,\ln ^{\beta } (t)}. \end{aligned}$$
As a solution for the first case, we consider the static feedback of the form
Clearly this feedback u is continuous (even \(\mathcal C^1\)) and logarithmically stabilizes the scalar system (1).
Indeed, the Lyapunov stability follows by considering the Lyapunov function
whose time derivative along the system (1) is given, for \(x\not =0,\) by
For \(V\ne 0,\) consider the change of variable \(z:=\displaystyle \frac{1}{V};\) then z(t) satisfies the following ODE
whose solution is given by
which implies
and
This result can be generalized as follows.
Theorem 1
Consider the dynamical system
Assume that there exist a \({\mathcal {C}}^1\) function \(V:\mathbb {R}^n\rightarrow {\mathbb {R}}\) and positive constants \(c_1,\,c_2,\,r_1\) and \(r_2\) such that:
-
(1)
for every \(x\in {\mathbb {R}}^n,\) the Lyapunov function V satisfies
$$\begin{aligned} c_1\,||x||^{r_1}\leqslant V(x)\leqslant c_2\,||x||^{r_2}, \end{aligned}$$ -
(2)
there exists a nonnegative constant c such that for \(V\not =0,\) we have along the system (3)
$$\begin{aligned} {\dot{V}}(x)\leqslant -c\,\exp \left( -\frac{1}{V}\right) \,V^{2}(x). \end{aligned}$$Then, \(0\in {\mathbb {R}}^n\) is globally logarithmically stable. More precisely,
$$\begin{aligned} ||x(t)||{\asymp }_{+\infty } \frac{c}{\ln ^{\frac{1}{r_1}}(t)}, \end{aligned}$$(4)
where c regroups all positive constants.
Proof
Using the comparison principle [26], we solve the ODE \(\displaystyle {\dot{E}}(t)=-c\,\exp (-\frac{1}{E})\,E^{2}(t)\) with \(E(0)>0.\) The uniqueness result yields \({E}(t)>0,\,\,\forall \,t\geqslant 0.\) The estimate (4) then follows from the change of variable \(z=\displaystyle \frac{1}{E}\) and the solution of the ODE:
\(\square \)
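The comparison ODE in this proof can be solved in closed form: with \(z=1/E\) one gets \(\dot z=c\,e^{-z}\), hence \(e^{z(t)}=e^{1/E_0}+ct\) and \(E(t)=1/\ln \big (e^{1/E_0}+ct\big )\), which exhibits the \(1/\ln t\) decay of the Lyapunov function. A quick numerical check of this formula (our sketch, with illustrative parameters \(c=E_0=1\)):

```python
import math

def rk4(f, x0, t0, t1, n):
    """Fixed-step RK4 for the scalar ODE x' = f(t, x)."""
    h = (t1 - t0) / n
    t, x = t0, x0
    for _ in range(n):
        k1 = f(t, x)
        k2 = f(t + h/2, x + h*k1/2)
        k3 = f(t + h/2, x + h*k2/2)
        k4 = f(t + h, x + h*k3)
        x += h * (k1 + 2*k2 + 2*k3 + k4) / 6
        t += h
    return x

c, E0, T = 1.0, 1.0, 100.0
# comparison ODE from the proof of Theorem 1: E' = -c exp(-1/E) E^2
Edot = lambda t, E: -c * math.exp(-1.0/E) * E**2
ET = rk4(Edot, E0, 0.0, T, 100000)
closed = 1.0 / math.log(math.exp(1.0/E0) + c*T)   # E(t) = 1/ln(e^{1/E0} + ct)
print(ET, closed)
```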
Another situation of interest arises when we ask how to build a static feedback law u such that the solution of the closed-loop system (1) satisfies the estimate
As a solution of this question, we can take
It is clear that u is continuous at \(x=0\), even, u is \(\mathcal C^1\) on \({\mathbb {R}}.\)
Indeed, we have for \(x\not =0,\)
and
Consider the candidate Lyapunov function
whose time derivative along the closed-loop system is given, for \(V\not =0,\) by
For \(z:=\displaystyle \frac{1}{V},\,\,V\ne 0\), we get
The solution of (5) is given by
and therefore
which implies the asymptotic estimation
Analogous to the previous result, we have:
Proposition 1
Consider the dynamical system (3). Assume that there exist a \({\mathcal {C}}^1\) function \(V:{\mathbb {R}}^n\rightarrow {\mathbb {R}}\) and positive constants \(c_1,\,c_2,\,r_1\) and \(r_2\) such that:
-
(1)
for every \(x\in {\mathbb {R}}^n,\) the Lyapunov function V satisfies
$$\begin{aligned} c_1\,||x||^{r_1}\leqslant V(x)\leqslant c_2\,||x||^{r_2}, \end{aligned}$$ -
(2)
there exists a nonnegative constant c such that for \(V\not =0\), we have
$$\begin{aligned} {\dot{V}}(x)\leqslant -c\,\exp \left( -\left( \frac{1}{V}+ \exp \left( \frac{1}{V}\right) \right) \right) \,V^2. \end{aligned}$$Then, \(0\in {\mathbb {R}}^n\) is globally logarithmically stable (i. e. \(||x(t)||\asymp _{+\infty } \frac{c}{\ln ^{\frac{1}{r_1}}\left( \ln (t)\right) }\)).
A pathological case:
Consider the system in \({\mathbb {R}}^2\times {\mathbb {R}}\):
Clearly this system cannot be stabilized by a regular static state feedback of the form u(x); this is due to Brockett's necessary condition [5]. Usually, to get around the Brockett obstruction, a partial asymptotic stabilization by static feedback laws is sought [16] (i.e. construction of an adequate state feedback law u(x) such that in closed loop we have \(x(t)\rightarrow 0\), and y(t) converges to some value, not necessarily zero); for more details on this point, see Example 5 below.
Hence, a logarithmic stabilizing feedback is not sufficient to study partial asymptotic stabilization, because the integral \(\int _{e}^{+\infty }\frac{dt}{\ln ^{\alpha }(t)}=+\infty .\)
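The divergence claim is elementary: since \(\ln \) is increasing, \(\int _{e}^{T}\frac{dt}{\ln ^{\alpha }(t)}\geqslant \frac{T-e}{\ln ^{\alpha }(T)}\rightarrow +\infty .\) A numerical illustration of this growth (our sketch, with the illustrative choice \(\alpha =2\)):

```python
import math

def slow_integral(T, alpha=2.0, n=200000):
    """Midpoint rule for int_e^T dt / ln(t)^alpha, via the substitution t = e^s."""
    a, b = 1.0, math.log(T)              # s ranges over [1, ln T]
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        s = a + (i + 0.5) * h
        total += math.exp(s) / s**alpha  # dt = e^s ds and ln(t) = s
    return total * h

# the partial integrals grow without bound as T increases
vals = [slow_integral(T) for T in (1e2, 1e4, 1e6)]
lower = (1e6 - math.e) / math.log(1e6)**2   # crude lower bound (T - e)/ln^2(T) at T = 1e6
print(vals, lower)
```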
To overcome this problem, a time-varying feedback law of the form \(u(t,\,(x,\,y))\) leading to polynomial-logarithmic stability estimate of the state x as follows
is required. The choice of \(\alpha \) or \(\beta \) leads to the convergence of the state y.
3 Time-Varying System and Polynomial-Logarithmic Stability
Consider the continuous time-varying system
and we assume that
Definition 1
The nonlinear time-varying system (6) is said to be locally polynomially-logarithmically stable (P-L stable for short) if the following properties are satisfied:
-
The origin \(x=0\) of the system (6) is Lyapunov stable, i. e. \(\forall \varepsilon>0,\,\,\forall \,t_0 \geqslant 0,\,\exists \eta =\eta (\varepsilon ,\,t_0)>0:\) \(||x(t_0)||<\eta \Longrightarrow ||x(t)||<\varepsilon \,\,\forall t\geqslant t_0\).
-
There exist positive numbers \(\alpha ,\,\beta ,\) and a function \( M(x(t_0))\) of the initial conditions such that if
$$\begin{aligned}{} & {} \big (\exists \,\, r>0,\,\forall \,t_0 \geqslant 0,\, ||x(t_0)||\leqslant r\big )\nonumber \\{} & {} \quad \Longrightarrow \displaystyle \left( ||x(t)||\leqslant \frac{ M(x(t_0))}{t^{\alpha }\,\ln ^{\beta } (t)},\,\forall \,t\geqslant t_0+s,\,\forall \,s>0 \right) . \end{aligned}$$(7)
Remark 1
-
(1)
The second condition (7) means that each solution x(t) of (6) converges asymptotically to the trivial equilibrium point 0, and therefore reflects the attractiveness condition.
-
(2)
Obviously, if we take \(\alpha =0\) in Definition 1, then the time-varying system is said to be logarithmically stable; more precisely \(||x(t)||{\asymp _{+\infty }} \displaystyle \frac{c}{\ln ^{\beta } (t)},\) while the case \(\beta =0\) corresponds to polynomial stability, i.e. \(||x(t)||\asymp _{ +\infty } \displaystyle \frac{c}{t^{\alpha }}.\)
Definition 2
The control system
where \(x\in {\mathbb {R}}^{n} \) is the state, \(u\in {\mathbb {R}}^m\) the control, and \(Y\in C^{0}({\mathbb {R}}^{n+m},\,{\mathbb {R}}^{n}),\) is said to be locally (resp. globally) polynomially-logarithmically stabilizable by means of continuous time-varying feedback laws if there exists \(u\in {\mathcal {C}}^{0}([0,\,+\infty )\times \mathbb R^n,\,{\mathbb {R}}^m)\) such that
and \(0\in {\mathbb {R}}^n\) is locally (resp. globally) P-L stable for the closed-loop system \(\dot{x}=Y(x,\,u(t,\,x)).\)
Our first main result of this subsection is the following.
Theorem 2
Consider the nonlinear time-varying system (6). Assume that there exist a \({\mathcal {C}}^1\) function \(V:[t_0,\,+\infty [\times {\mathbb {R}}^n\rightarrow {\mathbb {R}}\) and positive constants \(c_1,\,c_2,\,r_1\) and \(r_2\) such that:
-
(1)
for every \(x\in {\mathbb {R}}^n,\) the Lyapunov function V satisfies
$$\begin{aligned} c_1\,||x||^{r_1}\leqslant V(t,\,x)\leqslant c_2\,||x||^{r_2}, \end{aligned}$$(8) -
(2)
there exists a continuous and nonnegative function \(f:[t_0,\,+\infty [\rightarrow (0,\,+\infty )\) with the “growth condition”: \(\displaystyle \int _{t_0}^{t} f(s) ds\geqslant k\,t^{\alpha } (\ln (t+1)- \ln ({t_0}+1))^{\beta }\) where \(\alpha ,\,\beta \) and k are nonnegative constants such that
$$\begin{aligned} {\dot{V}}(t,\,x)\leqslant -c\,f(t)\,V^{\gamma }(t,\,x),\,\,\,\gamma>1\,\,{and}\,\,\,c>0. \end{aligned}$$Then, \(0\in {\mathbb {R}}^n\) is globally polynomially-logarithmically stable.
Proof
Clearly, under the assumptions of the theorem, the considered system is Lyapunov stable; this follows from conditions 1 and 2 of the theorem. The logarithmic stability follows from the comparison principle [26, Lemma, p. 110]. Indeed, consider the differential equation with initial condition \(E_0=E(t_0).\)
The Cauchy problem (9) admits a unique maximal solution on \([t_0,\,+\infty )\); hence if \(E_0=0\), then \(E(t)=0\,\,\forall \,t\geqslant t_0.\) Moreover, if \(E_0>0\) then \(E(t)>0\,\,\forall \,t\geqslant t_0\); in this case, we set \(F(t)=\frac{1}{E^{\gamma -1}(t)}.\) Clearly F satisfies on \([t_0,\,+\infty )\) the differential equation \(\dot{F}=c(\gamma -1)f(t)\), which leads to the following solution:
We consider the differential inequality
and let \(V_0:=V(x_0)\) be such that \(V_0\leqslant E_0;\) we obtain by the comparison lemma
Now, by using the assumption \(\displaystyle \int _{t_0}^{t} f(s) ds\ge k\,t^{\alpha }\,\ln ^{\beta }\big (\frac{t+1}{t_0+1}\big ),\) then from (10) we get
Denote \(b=E_0^{1-\gamma }\) and \(a=(\gamma -1)\,c\,k.\) Then, from (8), the solution can be estimated by
\(\square \)
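The proof of Theorem 2 yields the explicit comparison solution \(E(t)=\big (E_0^{1-\gamma }+c(\gamma -1)\int _{t_0}^{t}f(s)\,ds\big )^{-\frac{1}{\gamma -1}}\). The sketch below checks this numerically for the excitation \(f(t)=(1+t)^{\alpha -1}\big (\alpha \ln ^{\beta }(1+t)+\beta \ln ^{\beta -1}(1+t)\big )\) used later in the examples, for which \(\int _0^t f=(1+t)^{\alpha }\ln ^{\beta }(1+t)\); the parameter values \(\alpha =1,\,\beta =\gamma =2,\,c=E_0=1\) are our illustrative choices:

```python
import math

def rk4(f, x0, t0, t1, n):
    """Fixed-step RK4 for the scalar ODE x' = f(t, x)."""
    h = (t1 - t0) / n
    t, x = t0, x0
    for _ in range(n):
        k1 = f(t, x)
        k2 = f(t + h/2, x + h*k1/2)
        k3 = f(t + h/2, x + h*k2/2)
        k4 = f(t + h, x + h*k3)
        x += h * (k1 + 2*k2 + 2*k3 + k4) / 6
        t += h
    return x

alpha, beta, gamma, c, E0, T = 1.0, 2.0, 2.0, 1.0, 1.0, 100.0
# excitation with \int_0^t f(s) ds = (1+t)^alpha ln^beta(1+t)
f = lambda t: (1+t)**(alpha-1) * (alpha * math.log(1+t)**beta
                                  + beta * math.log(1+t)**(beta-1))
Edot = lambda t, E: -c * f(t) * E**gamma
ET = rk4(Edot, E0, 0.0, T, 100000)
# comparison solution: E = (E0^{1-gamma} + c(gamma-1) \int f)^{-1/(gamma-1)}
closed = (E0**(1-gamma)
          + c*(gamma-1) * (1+T)**alpha * math.log(1+T)**beta) ** (-1.0/(gamma-1))
print(ET, closed)
```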
Sometimes we need only a logarithmic stability estimate; the next proposition provides sufficient conditions to get this stability estimate for the system (6). Without loss of generality, we consider \(t_0=0\) and \(x_0=x(0)\) for the rest of the paper.
Proposition 2
Consider the nonlinear time-varying system (6). Assume that there exist a \({\mathcal {C}}^1\) function \(V:[0,\,+\infty [\times \mathbb {R}^n\rightarrow {\mathbb {R}}\) and positive constants \(c_1,\,c_2,\,r_1\) and \(r_2\) such that:
-
(1)
for every \(x\in {\mathbb {R}}^n,\) the Lyapunov function V satisfies
$$\begin{aligned} c_1\,||x||^{r_1}\leqslant V(t,\,x)\leqslant c_2\,||x||^{r_2}, \end{aligned}$$ -
(2)
the Lyapunov function V satisfies the inequality
$$\begin{aligned} \dot{V}\leqslant -\frac{c}{1+t}V^{1+\gamma },\,\gamma >0, \end{aligned}$$then, (6) is globally logarithmically stable. More precisely, the state x satisfies the estimate
$$\begin{aligned} ||x(t)||\asymp _{+\infty } \frac{c}{\ln ^{\frac{1}{\gamma \,r_1}}(t)}. \end{aligned}$$
Proof
The time derivative of V along the system (6) satisfies the constraint \(\dot{V}\leqslant -c f(t)V^{1+\gamma }\) with \(f(t)=\frac{1}{1+t}\), for which \(\displaystyle \int _{0}^{t}\frac{ds}{1+s}=\ln (1+t).\) Then Theorem 2 allows us to conclude. \(\square \)
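With \(f(t)=\frac{1}{1+t}\) the comparison equation integrates exactly: \(\dot E=-\frac{c}{1+t}E^{1+\gamma }\) gives \(E(t)=\big (E_0^{-\gamma }+c\gamma \ln (1+t)\big )^{-1/\gamma }\asymp c'\,\ln ^{-1/\gamma }(t)\), matching the rate in Proposition 2. A numerical check (our sketch, with the illustrative choice \(\gamma =2,\,c=E_0=1\)):

```python
import math

def rk4(f, x0, t0, t1, n):
    """Fixed-step RK4 for the scalar ODE x' = f(t, x)."""
    h = (t1 - t0) / n
    t, x = t0, x0
    for _ in range(n):
        k1 = f(t, x)
        k2 = f(t + h/2, x + h*k1/2)
        k3 = f(t + h/2, x + h*k2/2)
        k4 = f(t + h, x + h*k3)
        x += h * (k1 + 2*k2 + 2*k3 + k4) / 6
        t += h
    return x

gamma, c, E0, T = 2.0, 1.0, 1.0, 1000.0
Edot = lambda t, E: -c * E**(1+gamma) / (1+t)      # condition (2) with equality
ET = rk4(Edot, E0, 0.0, T, 200000)
closed = (E0**(-gamma) + c*gamma*math.log(1+T)) ** (-1.0/gamma)
print(ET, closed)
```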
Remark 2
If we replace the condition (2) by the following
where \(f:[0,\,+\infty )\rightarrow (0,\,+\infty )\) a measurable function such that
Then
The next proposition deals with logarithmic stability by using a comparison estimate for a scalar system.
Proposition 3
Consider the system (6), and assume that there exist a \({\mathcal {C}}^1\) function \(V:[0,\,+\infty )\times \mathbb R^n\rightarrow {\mathbb {R}}\) and positive constants \(c_1,\,c_2,\,r_1,\,r_2\) such that:
(i) For every \(x\in {\mathbb {R}}^n\), the Lyapunov function V satisfies
(ii) there exists a continuous function \(r:{\mathbb {R}}^2\rightarrow {\mathbb {R}}\) such that along the system (6) we have
(iii) the scalar system \(\dot{z}=-r(t,\,z)\) is logarithmically stable.
Then, system (6) is globally logarithmically stable.
Proof
Since the scalar system \(\dot{z}=-r(t,\,z)\) is logarithmically stable, there exists \(\eta >0\) such that if \(|z(0)|<\eta \) then \(|z(t)|\leqslant \frac{c}{\ln ^{\beta } (t+1)}\) for \(t>0\) large enough, where \(\beta >0\) and the constant c depends on the initial condition z(0). Therefore, from condition (ii), we get for \(V(x(0))<\eta \) that \(V(x(t))\leqslant \frac{c}{\ln ^{\beta } (t+1)},\) and by condition (i), we obtain for t large enough
where c regroups all positive constants depending on the initial conditions. \(\square \)
The results of the above theorems can be summarized in the following proposition.
Proposition 4
Consider the system (6), and assume that there exist a \({\mathcal {C}}^1\) function \(V:[0,\,+\infty )\times \mathbb R^n\rightarrow {\mathbb {R}}\) and functions \(\alpha _1\) and \(\alpha _2\) of class \({\mathcal {K}}_{\infty }\) such that:
-
(i)
for every \(x\in {\mathbb {R}}^n\), the Lyapunov function V satisfies
$$\begin{aligned} \alpha _1(||x||)\leqslant V(t,\,x)\leqslant \alpha _2(||x||), \end{aligned}$$ -
(ii)
the Lyapunov function V satisfies the inequality
$$\begin{aligned} \dot{V}\leqslant -cf(t)V^{1+\gamma },\,c,\,\gamma >0, \end{aligned}$$
and \(f:[0,\,+\infty )\rightarrow (0,\,+\infty )\) is a measurable function such that
Then, \(0\in {\mathbb {R}}^n\) is globally polynomially-logarithmically stable.
The next result addresses the case \(\gamma =1\) of Theorem 2.
Theorem 3
Consider the system (6), and assume that there exist a \({\mathcal {C}}^1\) function \(V:[0,\,+\infty )\times \mathbb R^n\rightarrow {\mathbb {R}}\) and functions \(\alpha _1\) and \(\alpha _2\) of class \({\mathcal {K}}_{\infty }\) such that:
-
(i)
for every \(x\in {\mathbb {R}}^n\), the Lyapunov function V satisfies
$$\begin{aligned} \alpha _1(||x||)\leqslant V(t,\,x)\leqslant \alpha _2(||x||), \end{aligned}$$ -
(ii)
the Lyapunov function V satisfies the inequality
$$\begin{aligned} \dot{V}\leqslant -c\,\varphi (t)V,\,\,{where}\,\,c>0 \end{aligned}$$(14) and \(\varphi :[0,\,+\infty )\rightarrow (0,\,+\infty )\) is a measurable function such that
$$\begin{aligned} \int _{0}^{t}\varphi (s)ds\geqslant \ln (\ln ^{\beta }(t+1+\varepsilon )), \,{where}\,\,0<\varepsilon <e-1,\,\,{and}\,\,\beta >0. \end{aligned}$$Then, \(0\in {\mathbb {R}}^n\) is globally logarithmically stable.
Proof
We denote by \(V_0:=V(0,\,x(0))\not =0\). Then, from (14), the function V satisfies the inequalities
Then, by the monotonicity of the \(\ln \) function, we get
which together with the condition (ii) implies that \(0\in \mathbb R^n\) is globally logarithmically stable. \(\square \)
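The role of \(\varepsilon \) in Theorem 3 is to ensure \(\ln \ln (t+1+\varepsilon )\) is defined at \(t=0\) while \(\ln (1+\varepsilon )<1\), so the growth condition holds from \(t=0\). Taking equality-type excitation \(\varphi (t)=\frac{\beta }{(t+1+\varepsilon )\ln (t+1+\varepsilon )}\) gives \(\int _0^t\varphi =\beta \big (\ln \ln (t+1+\varepsilon )-\ln \ln (1+\varepsilon )\big )\geqslant \ln \big (\ln ^{\beta }(t+1+\varepsilon )\big )\), and the linear comparison equation \(\dot V=-c\varphi (t)V\) then solves to \(V(t)=V_0\big (\ln (1+\varepsilon )/\ln (t+1+\varepsilon )\big )^{c\beta }\), a logarithmic decay. A numerical check (our sketch, with illustrative \(\varepsilon =0.5,\,\beta =2,\,c=V_0=1\)):

```python
import math

def rk4(f, x0, t0, t1, n):
    """Fixed-step RK4 for the scalar ODE x' = f(t, x)."""
    h = (t1 - t0) / n
    t, x = t0, x0
    for _ in range(n):
        k1 = f(t, x)
        k2 = f(t + h/2, x + h*k1/2)
        k3 = f(t + h/2, x + h*k2/2)
        k4 = f(t + h, x + h*k3)
        x += h * (k1 + 2*k2 + 2*k3 + k4) / 6
        t += h
    return x

eps, beta, c, V0, T = 0.5, 2.0, 1.0, 1.0, 1000.0
phi = lambda t: beta / ((t + 1 + eps) * math.log(t + 1 + eps))
Vdot = lambda t, V: -c * phi(t) * V                 # linear case gamma = 1
VT = rk4(Vdot, V0, 0.0, T, 100000)
closed = V0 * (math.log(1 + eps) / math.log(T + 1 + eps)) ** (c * beta)
print(VT, closed)
```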
The next result deals with logarithmic stability in finite time: the target system is finite-time stable and, before the settling time, the solution follows a logarithmic curve.
Theorem 4
Consider the system (6), and assume that there exist a \({\mathcal {C}}^1\) function \(V:[0,\,+\infty )\times \mathbb R^n\rightarrow {\mathbb {R}}\) and functions \(\alpha _1\) and \(\alpha _2\) of class \({\mathcal {K}}_{\infty }\) such that:
-
(i)
for every \(x\in {\mathbb {R}}^n\), the Lyapunov function V satisfies
$$\begin{aligned} \alpha _1(||x||)\leqslant V(t,\,x)\leqslant \alpha _2(||x||), \end{aligned}$$ -
(ii)
the Lyapunov function V satisfies the inequality
$$\begin{aligned} \dot{V}\leqslant -c\varphi (t)V^{a},\,0<a<1 \end{aligned}$$and \(\varphi :[0,\,+\infty )\rightarrow (0,\,+\infty )\) is a continuous function such that
$$\begin{aligned} \int _{0}^{t}\varphi (s)ds\geqslant \ln ^{\beta }(t+1),\,{where}\,\,\,\beta >0. \end{aligned}$$Then, \(0\in {\mathbb {R}}^n\) is globally logarithmically stable in finite-time.
Proof
Consider the ODE
the solution of (15) is given by
Then
Clearly
Now, consider \(V_0\) such that \(V_0\leqslant E_0\); then we conclude by the comparison principle that
Hence, \(V(t)=0\) for every \(t\geqslant T^{**}=\exp \left( \left( \frac{V_{0}^{1-a}}{c(1-a)}\right) ^{\frac{1}{\beta }}\right) -1.\) \(\square \)
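The settling-time formula can be verified numerically. With \(\varphi (t)=\frac{\beta \ln ^{\beta -1}(1+t)}{1+t}\) (so that \(\int _0^t\varphi =\ln ^{\beta }(1+t)\)), the comparison solution is \(E(t)=\big (E_0^{1-a}-c(1-a)\ln ^{\beta }(1+t)\big )^{\frac{1}{1-a}}\) until it vanishes at \(T^{**}\). Our sketch uses the illustrative values \(a=1/2,\,\beta =2,\,c=E_0=1\), for which \(T^{**}=e^{\sqrt{2}}-1\approx 3.11\):

```python
import math

a, beta, c, E0 = 0.5, 2.0, 1.0, 1.0
phi = lambda t: beta * math.log(1+t)**(beta-1) / (1+t)

def Edot(t, E):
    return -c * phi(t) * max(E, 0.0)**a   # clamp: E^a is not defined for E < 0

def rk4_clamped(f, x0, t0, t1, n):
    """RK4 that clamps the state at 0, since solutions stay nonnegative."""
    h = (t1 - t0) / n
    t, x = t0, x0
    for _ in range(n):
        k1 = f(t, x)
        k2 = f(t + h/2, x + h*k1/2)
        k3 = f(t + h/2, x + h*k2/2)
        k4 = f(t + h, x + h*k3)
        x = max(x + h * (k1 + 2*k2 + 2*k3 + k4) / 6, 0.0)
        t += h
    return x

# predicted settling time from Theorem 4
Tstar = math.exp((E0**(1-a) / (c*(1-a))) ** (1.0/beta)) - 1
E_before = rk4_clamped(Edot, E0, 0.0, 2.0, 20000)
exact_before = (1 - 0.5 * math.log(3.0)**2) ** 2   # closed form at t = 2 < T**
E_after = rk4_clamped(Edot, E0, 0.0, 4.0, 40000)   # 4 > T**, so E should be ~0
print(Tstar, E_before, exact_before, E_after)
```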
For example, the scalar system
is logarithmically finite-time stable (here we chose \(\beta =2\)).
4 Applications to Control Systems
4.1 Some Academic Examples
Now, we illustrate our Theorems by further examples.
Example 1: the scalar system. Consider in \({\mathbb {R}}\) the control system \(\dot{x}=u\); we will construct a time-varying stabilizing feedback \(u(t,\,x)\) that renders the closed-loop system globally logarithmically stable. For \(k>1\) with \(k\in {\mathbb {Q}}^{+}_{odd}\), consider the Lyapunov function \(V(x)=\frac{1}{2k}x^{2k}\). Taking the time derivative of V along the system, we get \(\dot{V}=x^{2k-1}u.\) Now, choosing \(u(t,\,x)=-f(t)x^{2p+1}\) where \(p\in {\mathbb {Q}}^{+}_{odd}\) and f is a nonnegative continuous function such that \(\int _{0}^{t}f(s)ds=\ln ^{\beta } (t+1)\) with \(\beta >1\) (i.e. \(f(t)=\displaystyle \frac{\beta \,\ln ^{\beta -1}(1+t)}{1+t}\)), we get
By Theorem 2, we have the asymptotic estimation
which means again that
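For \(p=1\) (so \(u=-f(t)x^{3}\)) the closed loop integrates exactly: \(x(t)^{-2}=x_0^{-2}+2\ln ^{\beta }(1+t)\), hence \(|x(t)|\asymp _{+\infty }\frac{c}{\ln ^{\beta /2}(t)}\), in agreement with the estimate of Example 1. A numerical check (our sketch, with the illustrative choice \(\beta =2\)):

```python
import math

def rk4(f, x0, t0, t1, n):
    """Fixed-step RK4 for the scalar ODE x' = f(t, x)."""
    h = (t1 - t0) / n
    t, x = t0, x0
    for _ in range(n):
        k1 = f(t, x)
        k2 = f(t + h/2, x + h*k1/2)
        k3 = f(t + h/2, x + h*k2/2)
        k4 = f(t + h, x + h*k3)
        x += h * (k1 + 2*k2 + 2*k3 + k4) / 6
        t += h
    return x

beta, x0, T = 2.0, 1.0, 1000.0
f = lambda t: beta * math.log(1+t)**(beta-1) / (1+t)   # \int_0^t f = ln^beta(1+t)
xdot = lambda t, x: -f(t) * x**3                       # p = 1: u = -f(t) x^{2p+1}
xT = rk4(xdot, x0, 0.0, T, 200000)
closed = (x0**-2 + 2 * math.log(1+T)**beta) ** -0.5    # exact closed-loop solution
print(xT, closed)
```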
Example 2: the double integrator example:
Due to its importance in several mechanical control systems, the double integrator has attracted the attention of many authors; see [1, 2, 11, 21, 23] and the references therein. For this example, a regular, polynomial-logarithmic time-varying stabilizing feedback law for the double integrator is constructed.
Proposition 5
Let p and k be two nonnegative odd rational numbers such that \(p>1\) and \(k>1/2.\) Let f be a regular function such that \(f:[0,\,+\infty )\rightarrow (0,\,+\infty )\) and
Then the double integrator
is polynomially-logarithmically stable under the family of time-varying feedbacks
Proof
Let k and p be as in the proposition, and let f be a nonnegative \({\mathcal {C}}^{\kappa }\) function satisfying the constraint \(\displaystyle \int \nolimits _0^{t} f(s) ds\geqslant (1+t)^{\alpha }\ln ^{\beta } (1+t).\) More precisely, the regular function f satisfies the inequality \(f(t)\geqslant (1+t)^{\alpha -1}\big (\alpha \,\ln ^{\beta } (t+1)+\beta \,\ln ^{\beta -1}(1+t)\big ).\) Consider the candidate Lyapunov function V of the form
the time derivative of V along the system (16) is given by
then, under the feedback u given in (17), we get in closed-loop
Since \(f>0\), then from (18) we have
therefore, we get
and
Under the integral condition on f, Theorem 2 clearly allows us to conclude the polynomial-logarithmic stabilization of the double integrator \(\ddot{x}=u\) (here \(y=\dot{x}\)). \(\square \)
Example 3: the quadratic control system. The quadratic system that we consider here is presented in [9, Section 5] and takes the form
where the state is \((x,\,y)\in {\mathbb {R}}\times {\mathbb {R}}^n\) and the control is \((u,\,v)\in {\mathbb {R}}\times {\mathbb {R}}^m\), with \(A\in \mathcal M_{n}({\mathbb {R}})\) and \(B\in {\mathcal {M}}_{n\times m}({\mathbb {R}}).\) As Coron explains in [9], this system models physical systems such as the Euler equation of incompressible fluids. We preserve Coron's assumptions and assume that the pair \((A,\,B)\) is controllable; the main objective is the construction of time-varying feedback laws u and v making the closed-loop system (19) logarithmically-polynomially stable, thereby improving the stabilization result obtained by Coron [9, Theorem 5.2]. The first step is to start with the reduced system
Consider the auxiliary linear system
Since the pair \((A,\,B)\) is controllable, we can choose \(v:=Ky,\,K\in {\mathcal {M}}_{m\times n}({\mathbb {R}}) \) such that \(A+BK\) is Hurwitz; this means that the closed-loop system \(\dot{y}=(A+BK)y\) is exponentially stable. Hence, there exists an optimal rate \(\omega _0\) such that the solution of this closed-loop system satisfies the estimate \(||y(t)||\leqslant c\,||y(0)||e^{-\omega _0\,t}.\) This inequality is obtained with a suitable candidate Lyapunov function V (see for example [35]) whose time derivative satisfies the estimate
Now, taking the time derivative of V along the reduced system (20), we get \(\dot{V}_{\mid _{(20)}}=u\dot{V}.\) By taking \(u\geqslant 0\), for example, \(u(t,\,y)=f(t)V^{\gamma }\), where \(\gamma >1\) and \(f \in {\mathcal {C}}^1([0,\,+\infty ),\,(0,\,+\infty ))\) is chosen such that \(\displaystyle \int \nolimits _{0}^{t}f(s)ds\geqslant (1+t)^{\alpha }\ln ^{\beta } (1+t)\), with \(\alpha>0,\,\beta >2,\) (for example, we can choose \(f(t)=(1+t)^{\alpha -1}\left( \alpha \ln ^{\beta }(1+t)+\beta \ln ^{\beta -1}(1+t)\right) \) ), we get
this leads, by Theorem 2, to the polynomial-logarithmic stability of the closed-loop system (20).
Step 2. For the augmented system, we consider the candidate Lyapunov function W defined by
where \(k>1\) is an integer that will be selected later. Taking the time derivative of W along the system (19), we get, with the new variable \(z:=x-f(t)V^{\gamma }(y),\)
since \(x=z+f(t)V^{\gamma }(y)\), then we get
Now, it is possible to select the time-varying feedback law
where \(\gamma \) and k are two nonnegative rational numbers such that \(\frac{\gamma }{k}\in {\mathbb {Q}}_{odd}^{+}\), and then we get
From (22), it is not hard to see that W satisfies the differential inequality
The conclusion of the polynomial-logarithmic stability follows then from Theorem 2.
The above construction can be extended to general cascade systems as follows: consider the control system
where \(x\in {\mathbb {R}}^n\) is the state, \(u\in {\mathbb {R}}\) the control and \(f\in {\mathcal {C}}^{1}({\mathbb {R}}^n\times {\mathbb {R}},\,\mathbb R^n)\). Assume there exist a \({\mathcal {C}}^1\) time-varying stabilizing feedback law \({\bar{u}}(t,\,x)\), a \({\mathcal {C}}^1-\) Lyapunov function V and a \({\mathcal {C}}^{1}-\) function \(\varphi :[0,\,+\infty )\rightarrow (0,\,+\infty )\) such that along the closed-loop system \(\dot{x}=f(x,\,{\bar{u}}(t,\,x))\) we have
This means that the closed-loop system (24) is polynomially-logarithmically stable.
Now consider the augmented system
where the state is \((x,\,y)\in {\mathbb {R}}^n\times {\mathbb {R}}\) and \(u\in {\mathbb {R}}\) is the control. Consider the Lyapunov function W of the form
The time derivative of W along (26) can be estimated by
Since \(f\in \mathcal C^1(\mathbb R^n\times \mathbb R,\,\mathbb R^n)\), by Taylor expansion there exists a continuous map \(G\in \mathcal C^0(\mathbb R^n\times \mathbb R\times \mathbb R,\,\mathcal L(\mathbb R,\,\mathbb R^n)) \) such that
therefore, the estimate (28) becomes
Now, it is possible to select the time-varying feedback u as follows
From (25), we have \(\dot{V}_{\mid _{(24)}}\leqslant -\varphi (t)V^{1+\gamma }\) where \(\displaystyle \int _{0}^{t}\varphi (s)ds\geqslant (t+1)^{\alpha }\ln ^{\beta }(1+t).\) Then, the above estimate becomes
It is not hard to see that
and with the assumption on \(\varphi \), we conclude that the system (26) is polynomially-logarithmically stable under the feedback (30).
Hence, we have proved the following result.
Proposition 6
Consider the \({\mathcal {C}}^1-\) control system in \({\mathbb {R}}^n\times {\mathbb {R}}^m\)
Assume that there exist a \({\mathcal {C}}^1\) time-varying stabilizing feedback law \({\bar{u}}(t,\,x)\), a \({\mathcal {C}}^1\) Lyapunov function V and a \({\mathcal {C}}^1\) function \(\varphi :[0,\,+\infty )\rightarrow (0,\,+\infty )\) such that along the closed-loop system \(\dot{x}=f(x,\,{\bar{u}}(t,\,x))\) we have
Then, the augmented system
is polynomially-logarithmically stabilizable by a \({\mathcal {C}}^{0}\) feedback law.
Remark 3
For the case when \(u\in {\mathbb {R}}^m,\) we adopt the same proof by considering in (27) the candidate Lyapunov function \(W=\displaystyle \frac{1}{k}V^k+\frac{1}{2}||y-{\bar{u}}(t,\,x)||^2.\)
Example 4: the bilinear control system
Consider the finite-dimensional bilinear control system \(\dot{x}=Ax+uBx,\) where \(x\in {\mathbb {R}}^n\) is the state, \(u\in {\mathbb {R}}\) the control, and \(A,\,B\) are two square matrices in \({\mathbb {R}}^{n\times n}.\) A well-known result on such systems is due to Quin [24], where a quadratic and optimal stabilizing feedback law is established; the solution of the closed-loop system is asymptotically equivalent to \(\displaystyle \frac{1}{\sqrt{t}}.\) In this section, we adopt Quin's [24] assumptions, which are:
The assumption \((H_1)\) expresses the dissipativity of the matrix A, while the second one, \((H_2)\), means that QB is invertible since \(\det (e^{At})=e^{tr(At)}>0.\)
We denote by \(\lambda _{\min }(Q)\) (resp. \(\lambda _{\max }(Q)\)) the smallest (resp. the largest) eigenvalue of the matrix Q.
The question that arises is how to construct a time-varying stabilizing feedback law \(u(t,\,x)\) such that the solution of the closed-loop system obeys the estimate
Lemma 1
There exists a nonnegative constant \(k_0\) such that
Proof
The proof follows from \((H_2)\) and a homogeneity argument. Clearly the continuous function \(h: x\mapsto |\langle x,\,QB\,x\rangle | \) is homogeneous of degree 2 with respect to the standard dilation. On the compact unit sphere \({\mathcal {S}}\), and based on assumption \((H_2)\), there exists \(\eta >0\) such that \(|\langle x,\,QB\,x\rangle |\geqslant \eta \). Now, for \(x\not =0,\) we have \(\frac{x}{||x||}\in {\mathcal {S}}\), and the conclusion follows from the homogeneity of h. \(\square \)
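As a numerical sanity check of Lemma 1 (a sketch for illustration, using the matrices \(Q=2I_2\) and \(B=\mathrm {diag}(2,\,1)\) from the plane example below, for which \(\langle x,\,QB\,x\rangle =4x_1^2+2x_2^2\) and hence \(k_0=2\) works), one can sample random points and verify the homogeneous bound \(|\langle x,\,QB\,x\rangle |\geqslant k_0||x||^2\):

```python
import random

# QB = 2*I2 @ diag(2,1) = diag(4,2); by homogeneity of degree 2, the minimum of
# |<x, QB x>| over the unit sphere (here k0 = 2) controls all of R^2 \ {0}.
QB = [[4.0, 0.0], [0.0, 2.0]]
k0 = 2.0

def quad(x):                          # <x, QB x>
    return sum(QB[i][j] * x[i] * x[j] for i in range(2) for j in range(2))

random.seed(0)
ok = all(
    abs(quad(x)) >= k0 * (x[0] ** 2 + x[1] ** 2) - 1e-12
    for x in ((random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(10000))
)
print(ok)
```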
Then, we have the following.
Proposition 7
Let f be a nonnegative continuous function such that
Then, the bilinear control system \(\dot{x}=Ax+uBx\) is P-L stabilizable under the time-varying feedback u of the form
Proof
Taking the time derivative of \(V=\frac{1}{2}\langle x,\,Qx\rangle \) along the closed-loop system, then by the assumption \((H_1),\) we have \(\dot{V}=-cf(t)(\langle x,\,QB\,x\rangle )^{2+m},\) which implies by Lemma 1 that \(\dot{V}\leqslant -cf(t)||x||^{2+m}\). Since \(\frac{\lambda _{min}(Q)}{2}||x||^2\leqslant V(x)\leqslant \frac{\lambda _{max}(Q)}{2}||x||^2;\) the two inequalities together imply
where c collects all the constants. With the integral condition on f, and by Theorem 2, we conclude the polynomial-logarithmic stability of the closed-loop system. \(\square \)
For example, consider the following plane control system [24]
where \(y:=(y_1,\,y_2)\in {\mathbb {R}}^2\) is the state and \(u\in {\mathbb {R}}\) is the control. In this example we have the form \(\dot{y}=Ay+uBy\) with \(A=\left( \begin{array}{cc} 0 &{}\quad 1 \\ -1 &{}\quad 0\\ \end{array} \right) ,\,\text {and}\,\, B=\left( \begin{array}{cc} 2&{}\quad 0 \\ 0 &{}\quad 1 \\ \end{array} \right) . \) The assumptions \((H_1)\) and \((H_2)\) are satisfied with \(Q=q\,I_2,\,q>0.\) Clearly, for \(t\geqslant 0\) the excitation \(f(t)=2(1+t)\big (\ln ^2(1+t)+\ln (1+t)\big )\) satisfies condition (32); hence the scalar time-varying feedback \(u(t,\,y)=-16(1+t)\left( \ln ^2(1+t)+\ln (1+t)\right) (2y_{1}^{2}+y_{2}^2)^3\) polynomially-logarithmically stabilizes system (34) (here \(q=m=2\)).
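As a numerical sanity check (a sketch, not part of the proof), the closed-loop plane system above can be integrated directly; since A is skew-symmetric, \(V(y)=||y||^2\) satisfies \(\dot{V}=2u(t,\,y)(2y_1^2+y_2^2)\leqslant 0\) along trajectories, so V should be nonincreasing numerically:

```python
import math

def u_fb(t, y):                   # the time-varying feedback from the plane example (q = m = 2)
    L = math.log(1.0 + t)
    return -16.0 * (1.0 + t) * (L * L + L) * (2.0 * y[0] ** 2 + y[1] ** 2) ** 3

def rhs(t, y):                    # closed loop y' = Ay + uBy, A = [[0,1],[-1,0]], B = diag(2,1)
    u = u_fb(t, y)
    return (y[1] + 2.0 * u * y[0], -y[0] + u * y[1])

def rk4_step(t, y, h):            # one classical Runge-Kutta step
    k1 = rhs(t, y)
    k2 = rhs(t + h / 2, tuple(y[i] + h * k1[i] / 2 for i in range(2)))
    k3 = rhs(t + h / 2, tuple(y[i] + h * k2[i] / 2 for i in range(2)))
    k4 = rhs(t + h, tuple(y[i] + h * k3[i] for i in range(2)))
    return tuple(y[i] + h * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6 for i in range(2))

y = (0.5, 0.5)                    # hypothetical initial condition
h, N = 1e-3, 30000                # integrate over [0, 30]
Vs = [y[0] ** 2 + y[1] ** 2]      # V(y) = ||y||^2 (with Q = 2*I2, V = <y, Qy>/2)
for k in range(N):
    y = rk4_step(k * h, y, h)
    Vs.append(y[0] ** 2 + y[1] ** 2)
print(Vs[0], Vs[-1])
```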
Example 5: Brockett's integrator. In this part we turn to Brockett's control system [5], which takes the form
This example has a physical meaning; it describes the motion of the unicycle, where \(x_1\) denotes the orientation and \((x_2,\,x_3)\) the coordinates of the midpoint between the back wheels. The control \(u_1\) is the drive command and \(u_2\) is the steering control [28, 29]. It is known that system (35) is globally controllable but does not satisfy the Brockett necessary condition [5] (and therefore cannot be asymptotically stabilized by means of stationary feedback laws). In the early 1980s, Sontag and Sussmann [30] proved that any controllable nonlinear scalar system can be locally (resp. globally) asymptotically stabilized by means of time-varying static feedback laws, and Samson [28] proved that (35) is globally stabilizable by means of time-varying static feedback laws. It is worth noting that the literature is rich in papers and books on time-varying systems; see, for example but not limited to, [6, 8, 10, 11, 26, 27, 31] and references therein.
In [16, 22], we have constructed Hölder stabilizing feedback laws making the closed-loop system two-partially polynomially stable in the following sense: \((x_2,\,x_3)\) is polynomially stable and \(x_1\) converges to a value which depends on the initial conditions. This problem of partial asymptotic stabilizability has been extensively studied in the articles [14,15,16,17, 20, 22, 23] and references therein.
The partial stability of time-varying systems is studied in the literature [13, 32,33,34], but nothing was said about partial stability in the logarithmic sense. Hence, before presenting the logarithmic stabilizing feedback laws for system (35), we define partial-logarithmic stability. Consider finite-dimensional dynamical systems of the form
where \(X=(X_1,\,X_2)\) is a continuous vector field defined on \([0,\,+\infty )\times {{\mathbb {R}}}^p\times {{\mathbb {R}}}^{n-p}\), \( x:=(x_1,\,x_2)\in {{\mathbb {R}}}^{p}\times {{\mathbb {R}}}^{n-p}\), and p is an integer such that \(0<p\leqslant n\). We assume that \((0,\,x_2)\) is a partial equilibrium point of (36), which means
and \(x(0):=(x_{1}(0),\,x_{2}(0))\) is the initial condition.
Definition 3
The system (36) is said to be \(p-\) partially polynomially logarithmically stable if the following properties are satisfied.
- The origin \((0,\,0)\) of the system (36) is Lyapunov stable, i.e. \(\forall \varepsilon>0,\,\,\exists \eta >0:\,\,\,(||(x_1(0),\,x_2(0))||<\eta )\Rightarrow ( ||(x_1(t),\,x_2(t))||< \varepsilon \,\,\,\forall t\geqslant 0\)).
- There exist positive numbers \(\alpha ,\,\beta ,\) and M(x(0)) such that
$$\begin{aligned} (\exists \,r>0: ||(x_1(0),\,x_2(0))||\leqslant r)\Rightarrow \left\{ \begin{array}{lll} ||x_1(t)||\leqslant \frac{ M(x(0))}{t^{\alpha }\,\ln ^{\beta } (t)},\,\forall \,t\geqslant t_0>0.\\ \displaystyle \lim _{t\rightarrow +\infty }x_2(t)=a, \end{array} \right. \end{aligned}$$where a(x(0)) is a constant vector depending on initial conditions.
For example, the control system in \({\mathbb {R}}^2\times {\mathbb {R}}:\) \(\dot{x}=u^3,\,\dot{y}(t)=|u|\) is not completely stabilizable by stationary or time-varying feedback laws, because the state \(t\mapsto y(t)\) is nondecreasing. Then, if we choose \(y(0)>0\), we get \(y(t)\geqslant y(0)>0\), and therefore the equilibrium point \((0,\,0)\) is not attractive. For this reason, the stabilization is treated in the partial sense by means of a smooth time-varying feedback law \(u(t,\,x).\) From the analysis of Example 1, we can take \(u(t,\,x)=-x^{\frac{2p+1}{3}}f^{\frac{1}{3}}(t)\), where f is such that \(\int _{0}^{t}f(s)ds= \ln ^{\beta }(1+t) \), \((2p+1)\in 3Q_{odd}^{+}\), and \(\beta >1.\)
From Example 1, we have \(|x(t)|\asymp _{+\infty } \frac{c}{\ln ^{\frac{\beta }{2p}}(t)}\), and then \(|u(t,\,x)|=\beta \,|x(t)|^{2p+1}\frac{\ln ^{\beta -1}(1+t)}{1+t}\asymp _{+\infty } \frac{c}{t\,\ln ^{\alpha }(t)}\), where \(\alpha =\frac{\beta }{2p}+1>1\) and \(c>0.\) Hence, by Bertrand's criterion, clearly \(\int _{1}^{+\infty }\frac{c}{t\,\ln ^{\alpha }(t)}dt<\infty ,\) and therefore the velocity \(\dot{y} \in \) L\(^{1}([0,\,+\infty ))\), which means that the state \(t\mapsto y(t)\) converges to a constant value \(a(x(0),\,y(0))\), not necessarily zero.
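Bertrand's criterion as used here reduces to a closed form: substituting \(s=\ln t\) turns \(\int _{e}^{+\infty }\frac{dt}{t\,\ln ^{\alpha }(t)}\) into \(\int _{1}^{+\infty }s^{-\alpha }ds=\frac{1}{\alpha -1}\) for \(\alpha >1\). A small numerical cross-check (with a hypothetical exponent):

```python
a = 2.5                                   # hypothetical exponent, a > 1

# After substituting s = ln t, the Bertrand integral int_e^T dt/(t ln^a t)
# equals int_1^{ln T} s^(-a) ds = (1 - (ln T)^(1-a)) / (a - 1) -> 1/(a-1).
def closed_form(S):
    return (1.0 - S ** (1.0 - a)) / (a - 1.0)

def midpoint(S, n=200000):                # composite midpoint rule on [1, S]
    h = (S - 1.0) / n
    return h * sum((1.0 + (j + 0.5) * h) ** (-a) for j in range(n))

S = 1000.0                                # i.e. T = e^1000, far into the tail
approx = midpoint(S)
print(approx, closed_form(S), 1.0 / (a - 1.0))
```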
The next result deals with a sufficient condition for \(p-\) partial stability in the polynomial-logarithmic sense.
Proposition 8
Let us consider the system (36). Let us assume that there exists a \({\mathcal {C}}^1-\) candidate Lyapunov function \(V:[0,\,+\infty )\times {\mathbb {R}}^{p}\times \mathbb R^{n-p}\rightarrow [0,\,+\infty )\) satisfying:
(1) \(\exists \,k_1,\,k_2\in {\mathcal {K}}\) such that for every \(x=(x_1,\,x_2)\in {\mathbb {R}}^{p}\times {\mathbb {R}}^{n-p},\)
$$\begin{aligned} k_1(||x_1||)\leqslant V(t,\,x)\leqslant k_2(||x||), \end{aligned}$$(37)
(2) there exists a measurable function \(\varphi :[0,\,+\infty )\rightarrow (0,\,+\infty )\) with \(\displaystyle \int _{0}^{t}\varphi (s)ds\geqslant (1+t)^{\alpha }\ln ^{\beta }(1+t)\) for some constants \(\alpha >0\) and \(\beta >1\), such that the time derivative of V along (36) satisfies, for every \(x=(x_1,\,x_2)\in {\mathbb {R}}^{p}\times {\mathbb {R}}^{n-p},\)
$$\begin{aligned} \dot{V}\leqslant -c \varphi (t)\,k_2^{1+\gamma }(||x||),\,\,\text {where}\,\,c,\,\gamma >0, \end{aligned}$$
(3) there exists \(\eta >0\) such that (\(||(x_1(0),\,x_2(0))||<\eta \)) \(\Rightarrow \) \(\int _{0}^{+\infty }||X_2(t,\,x_1,\,x_2)||dt<+\infty .\)
Then, system (36) is \(p-\) partially polynomially-logarithmically stable.
Proof
Conditions (1) and (2) imply that \(0\in {\mathbb {R}}^{p}\times \mathbb R^{n-p}\) is Lyapunov stable for the system (36) (see for instance [29]).
By combining conditions (1) and (2), we obtain the existence of a function \(\varphi \) satisfying the integral constraint and of constants \(c,\,\gamma >0\) such that along (36) we have
which, together with Theorem 2, implies that for t large enough
From the right-hand side of (37), we conclude that \(x_1=0\in {\mathbb {R}}^{p}\) is polynomially-logarithmically stable.
Convergence of the part \(x_2\) follows from condition (3) of the proposition. \(\square \)
Next, we concentrate our efforts on the construction of new time-varying feedback laws \(u_1(t,\,x)\) and \(u_2(t,\,x)\) such that, in closed loop, the partial state \((x_2,\,x_3)\) is polynomially-logarithmically stable and \(x_1\) converges to a value not necessarily zero.
For this, we take the change of variables [29] \(z_1=x_1,\,z_2=x_2,\,z_3=x_3-x_1x_2,\) and we get the system
Proposition 9
Let \(\varphi \) be a \({\mathcal {C}}^1-\) nonnegative function satisfying the constraint
Let k and p be two odd rational numbers such that (\(0<p<\frac{\alpha }{4}\)) or (\(p=\frac{\alpha }{4}\) and \(0<p<\frac{\beta }{4}\)).
Then, Brockett's system (38) is two-partially polynomially-logarithmically stable; more precisely, \((z_2,\,z_3)=(0,\,0)\) is P-L stable and \(z_1\) converges, under the following time-varying feedback laws
Proof
Taking the candidate Lyapunov function
and calculating its time derivative along the closed-loop system (38), we get
A straightforward computation leads to
From Theorem 2, we conclude that \(V(t,\,z(t))\) satisfies, for t large enough, the estimate
this means that, in closed loop, \((z_2,\,z_3)=(0,\,0)\) is polynomially-logarithmically stable. To study the convergence of the state \(z_1\), we study the behavior of the map \(t\mapsto |z_3(t)|\) for large time.
By construction of V, we have \(|z_3(t)|\leqslant (2k V)^{\frac{1}{2k}}.\) So, for t large enough we get
Then
Clearly, for (\(\alpha >4p\)) or (\(\alpha =4p\) and \(\beta >4p\)), the Bertrand integral \(\displaystyle \int \nolimits _{e}^{+\infty }\frac{dt}{t^{\frac{\alpha }{4p}}(\ln t)^{\frac{\beta }{4p}}}\) converges, and therefore the state \(z_1\) converges, since we can write the solution \(z_1\) in the integral representation \(z_1(t)=z_1(e)+\int \nolimits _{e}^{t}z_3(s)ds\) and \(\lim _{t\rightarrow +\infty }\int \nolimits _{e}^{t}z_3(s)ds=\int \nolimits _{e}^{+\infty }z_3(s)ds<\infty .\) \(\square \)
5 Links Between Controllability and Logarithmic Stabilizability
In what follows, we ask whether controllability of the linearized system around the equilibrium point leads to local polynomial-logarithmic stabilizability of the system. The link between the controllability of the linearized system and local stabilizability by means of static state feedback laws is settled by Kalman theory; see, for example, [8, 35]. Several extensions to local finite-time stabilizability were given in [3, 7, 18, 25] and references therein. In this section, it seems an attractive idea to study the link between the controllability of the linearized system and logarithmic stabilizability.
For this, we consider the \({\mathcal {C}}^1-\) control system \(\dot{x}=f(x,\,u),\,f(0,\,0)=0.\) Let \(A=\frac{\partial f}{\partial x}(0,\,0)\in {\mathcal {M}}_{n}(\mathbb R)\) and \(B=\frac{\partial f}{\partial u}(0,\,0)\in \mathcal M_{nm}({\mathbb {R}}),\) and take the expansion around the equilibrium point \((0,\,0).\)
Then, we have the following result.
Theorem 5
If the linear system \(\dot{x}= Ax+Bu\) is controllable, then the nonlinear control system (43) is locally polynomially-logarithmically stabilizable.
The proof of Theorem 5 is not difficult: we use the Brunovsky transformation to write the system \(\dot{x}=Ax+Bu\) in the form of a collection of m chains of integrators, and the task then reduces to stabilizing the chains one by one. The detailed proof is similar to the case of the double integrator, which extends to \(n-\) integrators by backstepping techniques (see the analysis of Example 3).
Now, we deal with some cases that do not require the Brunovsky decomposition. For this, we consider the control system \(\dot{x}=f(x,\,u),\,f(0,\,0)=0\) with the expansion (43).
Let \(r>1\) be an odd rational, and denote \(x^r:=(x_1^r,\,x_2^r,...,\,x_n^r)'\in {\mathbb {R}}^n\). Let K be an \(m\times n\) matrix such that \(A+BK\) is Hurwitz. Then there exists a real symmetric positive definite matrix P satisfying the Lyapunov equation [26, 29]
where I is the identity matrix. Hence, we have the following result.
Proposition 10
Let \(r\in (1,\,+\infty )\) be an odd rational. Assume that there exists a matrix \(C\in {\mathcal {M}}_{m,n}\) such that the product BC is symmetric positive definite. Then system (43) is locally polynomially-logarithmically stabilizable by the time-varying feedback law given by
where \(\varphi \) is a nonnegative continuous function such that \(\displaystyle \int _{0}^{t}\varphi (s)ds=(1+t)^{\alpha }\ln ^{\beta }(1+t), \,\text {with}\,\,\alpha>0\,\,\text {and}\,\,\beta >1.\)
Proof
Under the above considerations, the linear approximation in closed loop becomes
Since \(A+BK\) is Hurwitz, then the system \(\dot{x}=(A+BK)x\) is asymptotically stable. Moreover, the auxiliary system
is polynomially-logarithmically stable. Indeed, let V be the candidate Lyapunov function defined by
From (44), the time derivative of V with respect to (46) can be estimated as follows
where \(\lambda _{min}(BC)\) denotes the smallest eigenvalue of the matrix BC. Note that \(\lambda _{min}(BC)>0\). Hence, by a simple calculation, we get
Thus, combining (47) and (48), we obtain
where \(c:=\alpha \,\lambda _{min}(BC)2^{\frac{r+1}{2}}>0,\) and \(\displaystyle \frac{{r}+1}{2}>1 (\Longrightarrow \frac{{r}+1}{2}=\mu +1,\,\mu >0).\)
Since the matrix \(A+BK\) is Hurwitz, we have \(\omega _0:=\inf \{Re(\lambda ): \lambda \in sp(A+BK)\}<0.\) Computing the time derivative of V along the closed-loop system (45), we get
With the assumption on \(\varphi \), we conclude, by Theorem 2, that (45) is polynomially-logarithmically stable. Now, we have
This means that system (43) is locally polynomially-logarithmically stable. \(\square \)
5.1 Time-Varying Logarithmic Stabilizing Feedbacks for Linear Control System
In this part, we present another point of view on the construction of time-varying stabilizing feedbacks for linear control systems of the form
without recourse to the Brunovsky decomposition and backstepping techniques for cascade structures, and without restrictions as in the above case.
Proposition 11
Consider system (49), and assume that:
(1) there exists a time-varying matrix K(t) such that \(A+BK(t)\) is Hurwitz,
(2) there exists a bounded, symmetric, continuously differentiable and positive definite matrix P such that
$$\begin{aligned} \dot{P}(t)+P(t)(A+BK(t))+(A+BK(t))'P(t)\leqslant -\phi (t)Q \end{aligned}$$(50)
where:
- the matrix Q(t) is continuous, symmetric and positive definite, and there exists \(c>0\) such that \(Q(t)\geqslant c\,I_{n},\,\,\forall \,t\geqslant 0\);
- the function \(\phi :[0,\,+\infty )\rightarrow (0,\,+\infty )\) is continuous and satisfies
$$\begin{aligned} \int _{0}^{t}\phi (s)ds\geqslant \ln (\ln ^{\beta }(1+t+\varepsilon )), \,\text {where}\,\,0<\varepsilon <e-1\,\,\text {and}\,\,\beta >0.\qquad \end{aligned}$$(51)
Then the feedback \(u(t,\,x)=K(t)x\) logarithmically stabilizes the linear system (49).
Proof
Taking the candidate Lyapunov function V of the form [26]
Since P(t) is bounded and positive definite, there exist positive constants \(c_1\) and \(c_2\) such that
We calculate the time derivative of V along the closed-loop system \(\dot{x}=(A+BK(t))x\) and get
Combining (52) and (53), we get
which together with Theorem 3 implies that the closed-loop system \(\dot{x}=(A+BK(t))x\) is globally logarithmically stable. \(\square \)
Example. Consider the controllable system in \({\mathbb {R}}^2\times {\mathbb {R}}\)
where \(A=\left( \begin{array}{cc} 0 &{}\quad 1 \\ 0 &{}\quad 0 \\ \end{array} \right) \), and \(B=\left( \begin{array}{c} 0 \\ 1 \\ \end{array} \right) .\) Let K(t) be the matrix defined by \(K(t)=\left( \begin{array}{cc} k_1(t) &{}\quad k_2(t)\\ \end{array} \right) ,\)
such that \(A+BK(t)=\left( \begin{array}{cc} 0 &{}\quad 1 \\ k_1(t) &{}\quad k_2(t) \\ \end{array} \right) \) is Hurwitz, which means in this case \(k_i(t)<0\) for \(i=1,\,2.\) The coefficients \(k_i(t)\) will be defined later as functions of the excitation function \(\phi \) and the coefficients of the matrix P.
We choose \(P(t)=a(t)M\), where \(t\mapsto a(t)\) is a \({\mathcal {C}}^1\) function, to be selected below, such that \(0<m_2\leqslant a(t)\leqslant m_1\) and \(\dot{a}(t)<0\). The matrix M is symmetric positive definite and given by \(M=\left( \begin{array}{cc} 2 &{}\quad 1 \\ 1 &{}\quad 2 \\ \end{array} \right) \), and we take \(Q(t)=2 I_2\) for every \(t\geqslant 0.\)
In this case, after some calculations, assumption (50) becomes
By the comparison principle [26], we keep only the equality, and we get the system
From the first and third equations of (56) we have \(k_1(t)=1+2k_2(t)\). In this case, we get
From the second equation of (57), the function a satisfies \(\dot{a}+(4+5k_2)a=0\) which means
Since \(k_2(t)<0\) \(\,\,\forall \,t\geqslant 0,\) then
In this case, we can select the function a(t) by taking, for example,
Clearly \(e^{-4t}+1\geqslant 2\,e^{-4t}\), with \(1\leqslant a(t)\leqslant 2\) and \(\dot{a}(t)<0\).
Moreover, we have
Since
which is clearly a decreasing function over \([0,\,+\infty )\). Hence, it is easy to see that, for \(\varepsilon =1\) and \(0<\beta <2\ln 2, \) we have \(k_2(t)<0\,\,\forall \,t\geqslant 0\) and therefore \(k_1(t)<0 \,\,\forall \,t\geqslant 0.\)
Hence the feedback
logarithmically stabilizes the closed-loop system (54).
Remark 4
1. For the scalar system \(\dot{x}=u\), which is globally controllable, the time-varying feedback law \(u(t,\,x)=-\phi (t)x\) logarithmically stabilizes \(\dot{x}=u\) (i.e. \(K(t)=-\phi (t),\,\,\forall \,t\geqslant 0\)). Indeed, in this case we have \(A=0,\,B=1,\) and the scalars \(P(t)=\displaystyle \frac{1}{2}\) and \(Q(t)=1\) for all \(t\geqslant 0\) clearly solve (50).
2. We can replace conditions (50) and (51) by the following:
- there exist \(t_0>0\) and a symmetric, continuously differentiable and positive definite matrix P such that \(c_2I\leqslant P(t)\leqslant c_1I\) for every \(t\geqslant t_0\) and
$$\begin{aligned} \dot{P}(t)+P(t)(A+BK(t))+(A+BK(t))'P(t)\leqslant -\varphi (t)Q,\,\forall \,t\geqslant t_0; \end{aligned}$$
- the matrix Q is continuous, symmetric and positive definite with \(Q(t)\geqslant c\,I,\,\forall \,t\geqslant t_0\), and \(\phi :[t_0,\,+\infty )\rightarrow (0,\,+\infty )\) is a continuous function such that
$$\begin{aligned} \int _{t_0}^{t}\phi (s)ds=\ln \left( \ln ^{\beta }\left( \frac{t}{t_0}\right) \right) ,\,\beta >0, \end{aligned}$$
and the feedback \(u(t,\,x)=K(t)x\) stabilizes logarithmically the linear system.
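Remark 4 (item 1) can be checked in closed form: choosing \(\phi (t)=\beta /\big ((1+t+\varepsilon )\ln (1+t+\varepsilon )\big )\), whose integral \(\beta \big (\ln \ln (1+t+\varepsilon )-\ln \ln (1+\varepsilon )\big )\) dominates \(\ln (\ln ^{\beta }(1+t+\varepsilon ))\) when \(0<\varepsilon <e-1\), the solution of \(\dot{x}=-\phi (t)x\) is exactly \(x(t)=x(0)\big (\ln (1+\varepsilon )/\ln (1+t+\varepsilon )\big )^{\beta }\), a purely logarithmic decay. A sketch with hypothetical parameters, comparing RK4 integration with this formula:

```python
import math

beta, eps = 1.5, 1.0              # hypothetical parameters: beta > 0 and 0 < eps < e - 1
x0 = 2.0

def phi(t):                       # phi(t) = d/dt [ beta * ln(ln(1 + t + eps)) ]
    u = 1.0 + t + eps
    return beta / (u * math.log(u))

def x_exact(t):                   # exact closed-loop solution of x' = -phi(t) x
    return x0 * (math.log(1.0 + eps) / math.log(1.0 + t + eps)) ** beta

# classical RK4 integration of the closed loop over [0, 100]
h, N = 1e-3, 100000
x = x0
for k in range(N):
    t = k * h
    k1 = -phi(t) * x
    k2 = -phi(t + h / 2) * (x + h * k1 / 2)
    k3 = -phi(t + h / 2) * (x + h * k2 / 2)
    k4 = -phi(t + h) * (x + h * k3)
    x += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

print(x, x_exact(N * h))          # logarithmic decay: x(t) behaves like c / ln^beta(t)
```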
6 Conclusion
In control theory, one of the main issues is whether the solutions of a closed-loop dynamical system converge to the equilibrium point and, if so, how the solutions approach it. In this context, this paper proposed some practical key ideas, partially solving this problem by developing polynomial-logarithmic stability.
Therefore, in this framework, some new sufficient conditions guaranteeing polynomial-logarithmic stability are presented. These conditions are expressed in terms of Lyapunov functions excited by temporal functions satisfying an integral constraint, and they reflect the way in which trajectories of dynamical systems converge towards equilibrium. Several toy models of control systems are given for which time-varying feedback laws are constructed and lead to this polynomial-logarithmic stability for the closed-loop systems. The advantage of these feedbacks is that they are more regular (they even belong to the class \(C^{\infty }([0,\,+\infty )\times {\mathbb {R}}^n)\) when \(\beta \geqslant 2\) is an integer) and that they resolve nonlinear situations where linear feedbacks cannot work. Finally, we have proved that the controllability of the linearized system implies logarithmic stabilizability by a time-varying linear feedback law.
In future work, two problems can be investigated: the first is the robustness of this polynomial-logarithmic stability with respect to an unknown perturbation, and the second concerns the converse Lyapunov theorem for this polynomial-logarithmic stability.
Data Availability
No datasets were generated or analysed during the current study.
Notes
If this second condition holds for all \(r>0\), then \(0\in {\mathbb {R}}^n\) is said to be globally P-L stable for \(\dot{x}=X(t,\,x).\)
References
Bernuau, E., Perruquetti, W., Efimov, D., Moulay, E.: Robust finite-time output feedback stabilisation of the double integrator. Int. J. Control 88(3), 451–460 (2015)
Bhat, S.P., Bernstein, D.S.: Continuous finite-time stabilization of the translational and rotational double integrators. IEEE Trans. Autom. Control 43(5), 678–682 (1998)
Bhat, S.P., Bernstein, D.S.: Geometric homogeneity with applications to finite-time stability. Math. Control Signals Syst. 17, 101–127 (2005)
Borichev, A.: Optimal polynomial decay of functions and operator semigroups. Math. Ann. 347, 455–478 (2010)
Brockett, R.W.: Asymptotic stability and feedback stabilization. Differ. Geom. Control Theory 27, 181–191 (1983)
Chen, G., Yang, Y.: New stability conditions for a class of linear time-varying systems. Automatica 41, 342–347 (2016)
Coron, J.-M.: Relations entre commandabilité et stabilisations non linéaires. In Nonlinear partial differential equations and their applications. Collège de France Seminar, Vol. XI (Paris, 1989–1991), volume 299 of Pitman Res. Notes Math. Ser., pp 68–86. Longman Sci. Tech., Harlow, (1994)
Coron, J.-M.: Control and Nonlinearity, vol. 136. Mathematical Surveys and Monographs, USA (2007)
Coron, J.-M.: Phantom tracking method, homogeneity and rapid stabilization. Math. Control Related Fields 3(3), 303–322 (2013)
Damak, H., Taieb, N.H., Hammami, M.A.: On input-to-state practical h-stability of nonlinear time-varying systems. Mediterr. J. Math. 19, 249 (2022)
d’Andréa Novel, B., Coron, J.-M., Perruquetti, W.: Small-time stabilization of homogeneous cascaded systems with application to the unicycle and the slider examples. SIAM J. Control Optim. 58(5), 2997–3018 (2020)
Ghader, M., Nasser, R., Wehbe, A.: Optimal polynomial stability of a string with locally distributed Kelvin-Voigt damping and nonsmooth coefficient at the interface. Math. Meth. Appl. Sci. 44(2), 2096–2110 (2020)
Hamzaoui, A., Taieb, N.H., Hammami, M.A.: Practical partial stability of time-varying systems. Discrete Contin. Dyn. Syst.-B 27(7), 3585–3603 (2022)
Jammazi, C.: Backstepping and partial asymptotic stabilization. Applications to partial attitude control. Int. J. Control Autom. Syst. 6(6), 859–872 (2008)
Jammazi, C.: Finite-time partial stabilizability of chained systems. C. R. Acad. Sci. Paris Ser. I 346(17–18), 975–980 (2008)
Jammazi, C.: A discussion on the Hölder and robust finite-time partial stabilizability of Brockett’s integrator. ESAIM Control Optim. Calculus Variations 18(2), 360–382 (2012)
Jammazi, C.: Continuous and discontinuous homogeneous feedbacks finite-time partially stabilizing controllable multi-chained systems. SIAM Contr. Optim. 52(1), 520–544 (2014)
Jammazi, C., Abichou, A.: Controllability of linearized systems implies local finite-time stabilizability: applications to finite-time attitude control. IMA J. Math. Control Info. 35(1), 249–277 (2018)
Jammazi, C., Ben Ahmed, I., Boutayeb, M.: Polynomial stabilization of some control systems by smooth feedbacks. In Proceedings of the European Control Conference (ECC), pp. 746–751, Limassol, Cyprus, (2018)
Jammazi, C., Boutayeb, M., Bouameid, G.: On the global polynomial stabilization and observation with optimal decay rate. Chaos, Solitons and Fractals 153(111447), 1–11 (2021)
Jammazi, C., Boutayeb, M., Saidi, K.: On the fixed-time extinction based nonlinear control and systems decomposition: applications to bilinear systems. Chaos, Solitons and Fractals 174(113893), 1–12 (2023)
Jammazi, C., Zaghdoudi, M.: On the rational stability of autonomous dynamical systems. Applications to chained systems. Appl. Math. Comput. 219, 10158–10171 (2013)
Jammazi, C., Zaghdoudi, M., Boutayeb, M.: On the global polynomial stabilization of nonlinear dynamical systems. Nonlinear Anal. Real World Appl. 46, 29–46 (2019)
Quinn, J.P.: Stabilization of bilinear systems by quadratic feedback controls. J. Math. Anal. Appl. 75, 66–80 (1980)
Kawski, M.: Homogeneous stabilizing feedback laws. Control Theory Adv. Technol. 6(4), 497–516 (1990)
Khalil, H.K.: Nonlinear Systems, 3rd edn. Prentice Hall, USA (2002)
Morin, P., Samson, C.: Control of nonlinear chained systems. From the Routh-Hurwitz stability criterion to time–varying exponential stabilizers. IEEE Trans. Autom. Cont. 45, 141–146 (2000)
Samson, C.: Velocity and torque feedback control of a nonholonomic cart. In Proceedings of International Workshop on Nonlinear and Adaptive Control, volume 162, chapter Advanced Robot Control, pp. 125–151. Springer-Verlag, (1991)
Sontag, E.D.: Mathematical Control Theory: Deterministic Finite Dimensional Systems. Text in Applied Mathematics, 6th edn. Springer-Verlag, UK (1998)
Sontag, E.D., Sussmann, H.J.: Remarks on continuous feedback. IEEE CDC, Albuquerque 2, 916–921 (1980)
Taieb, N.H.: Stability analysis for time-varying nonlinear systems. Int. J. Control 95, 1497–1506 (2022)
Vorotnikov, V.I.: Partial Stability and Control. Birkhäuser, Boston (1998)
Vorotnikov, V.I.: Partial stability, stabilization and control: some recent results. IFAC, pp. 1–12, (2002)
Vorotnikov, V.I.: Partial stability and control: the state-of-the art and development. Autom. Remote Control 66(4), 511–561 (2005)
Zabczyk, J.: Mathematical control theory. Birkhäuser, Boston (1992)
Zaghdoudi, M., Jammazi, C.: On the partial rational stabilizability of nonlinear systems by optimal control: examples. IFAC PapersOnLine 50(1), 4051–4056 (2017)
Acknowledgements
The authors would like to thank the anonymous reviewers for valuable suggestions to improve the paper.
Contributions
The first author presented the problem, with the analysis of the theoretical results and the illustration of the main results by examples from control theory (Examples 3, 4 and 5). The third author presented some criticisms of some proofs, improving their quality; his intervention is also based on reasonable hypotheses in Examples 3 and 4. The second author is responsible for presenting some examples of control systems (Examples 1 and 2 and the last example in Section 4.1) and participated in the bibliography and the writing of the article.
Ethics declarations
Conflict of interest
The authors declare no competing interests.
Cite this article
Jammazi, C., Bouamaied, G. & Boutayeb, M. On the Logarithmic Stability Estimates of Non-autonomous Systems: Applications to Control Systems. Qual. Theory Dyn. Syst. 23, 186 (2024). https://doi.org/10.1007/s12346-024-01040-w