
11.1 Introduction

In recent years, adaptive control of nonlinear systems with unknown control gain sign has received a great deal of attention [14]. The Nussbaum function was first proposed in Ref. [1] for the control of a class of linear time-invariant systems with an unknown control gain coefficient, and it has since been used to handle the adaptive control of nonlinear systems with unknown control gain. Backstepping is an important method for constructing nonlinear adaptive controllers recursively, and it removed several restrictions that limited early adaptive designs, such as the matching condition and the growth condition. An adaptive control design was investigated in Ref. [2], where a modified Lyapunov function was used to remove the possible controller singularity problem. Applying the universal approximation property of fuzzy logic systems (FLS) and the properties of the Nussbaum function, an adaptive control scheme was developed for a class of MIMO nonlinear systems with unknown control gain in Ref. [3]. Using the dynamic surface control method and the properties of the Nussbaum function, two adaptive neural network control schemes were proposed for a class of pure-feedback nonlinear systems with dead-zone in Ref. [4].

Unmodeled dynamics exist widely in actual systems; they influence the stability of nonlinear systems and limit the achievable performance. Two robust adaptive control schemes were proposed for systems with unmodeled dynamics in Refs. [5, 6]. On this basis, using the universal approximation property of neural networks, a robust adaptive approach was developed for a class of pure-feedback nonlinear systems in Ref. [7]. Applying the small-gain approach together with output feedback, a novel adaptive control design was investigated in Ref. [8]. By introducing an available dynamic signal to dominate the unmodeled dynamics, a fuzzy adaptive control approach was developed for a class of nonlinear systems in Ref. [9]. Based on the universal approximation of neural networks, an adaptive neural dynamic surface control (DSC) scheme was proposed for a class of pure-feedback nonlinear systems in Ref. [10]; it relaxed the assumptions on the system and used the DSC technique to handle nonlinear systems containing unmodeled dynamics. A new fuzzy adaptive control approach was developed for a class of nonlinear systems with unknown virtual control gains and unmodeled dynamics in Ref. [11].

On the basis of Refs. [4, 10, 11], a novel adaptive neural network control scheme is developed in this paper for a class of strict-feedback nonlinear systems. The main contributions are as follows: (1) the plant discussed in Ref. [11] is extended to more general strict-feedback nonlinear systems, and the assumption on the dynamic disturbances is relaxed; furthermore, tracking performance is established by constructing appropriate unknown continuous functions; (2) completely unknown virtual control gain functions are dealt with using the properties of the Nussbaum function, whereas only completely unknown virtual control gain constants are discussed in Ref. [11]; (3) apart from \( \bar{\gamma }(|x_{1} |) \ge \gamma (|x_{1} |) , \) no further restriction is imposed on the class \( k_{\infty } \) function \( \bar{\gamma }(|x_{1} |) . \)

11.2 Problem Formulation and Preliminaries

Consider a class of strict-feedback systems with unmodeled dynamics in the following form:

$$ \left\{ {\begin{array}{l} {\dot{z} = q(z,x)} \\ {\dot{x}_{i} = f_{i} (\bar{x}_{i} ) + g_{i} (\bar{x}_{i} )x_{i + 1} + \Updelta_{i} (x,z,t)} \\ {\dot{x}_{n} = f_{n} (\bar{x}_{n} ) + g_{n} (\bar{x}_{n} )u + \Updelta_{n} (x,z,t)} \\ {y = x_{1}} \\ \end{array} } \right. $$
(11.1)

where \( i = 1,2, \ldots ,n - 1 , \) \( z \in R^{{r_{0} }} \) is the unmodeled dynamics, \( \bar{x}_{i} = [x_{1} ,x_{2} , \ldots ,x_{i} ]^{T} \in R^{i} , \) \( \bar{x}_{n} = [x_{1} ,x_{2} , \ldots ,x_{n} ]^{T} \in R^{n} \) is the state vector, \( u \in R \) is the system input, \( y \in R \) is the system output, \( f_{i} (\bar{x}_{i} ) \) and \( g_{i} (\bar{x}_{i} ) \) are unknown continuous functions, \( \Updelta_{i} (x,z,t),i = 1,2, \ldots ,n \) are nonlinear dynamic disturbances, and \( \Updelta_{i} (x,z,t) \) and \( q(z,x) \) are uncertain Lipschitz functions.

The control objective is to design an adaptive control law \( u \) for system (11.1) such that the output \( y \) follows the specified desired trajectory \( y_{d} . \)

Assumption 1

The unknown dynamic disturbance \( \Updelta_{i} (z,x,t),i = 1,2, \ldots ,n \) satisfies:

$$ |\Updelta_{i} (z,x,t)| \le \phi_{i1} (||\bar{x}_{i} ||) + \phi_{i2} (||z||) $$
(11.2)

where \( \phi_{i1} ( \cdot ) \) and \( \phi_{i2} ( \cdot ) \) are unknown non-negative continuous functions, \( \phi_{i2} ( \cdot ) \) is non-decreasing, and \( || \cdot || \) is the Euclidean norm.

Assumption 2

The unmodeled dynamics are exponentially input-to-state practically stable (exp-ISpS); that is, the system \( \dot{z} = q(z,x) \) has an exp-ISpS Lyapunov function \( V(z) \) satisfying:

$$ \alpha_{1} (||z||) \le V(z) \le \alpha_{2} (||z||) $$
(11.3)
$$ \frac{\partial V(z)}{\partial z}q(z,x) \le - cV(z) + \gamma (|x_{1} |) + d $$
(11.4)

where \( \alpha_{1} ( \cdot ) , \) \( \alpha_{2} ( \cdot ) \) and \( \gamma ( \cdot ) \) are class \( k_{\infty } \) functions, and \( c \) and \( d \) are known positive constants. Moreover, \( \gamma ( \cdot ) \) is a known function.

Assumption 3

The desired trajectory vector \( \bar{x}_{id} \) is continuous and available, and \( ||\bar{x}_{nd} || \in L_{\infty } , \) i.e., \( \sum\nolimits_{i = 0}^{n} {[y_{d}^{(i)} ]^{2} } \le B_{0} ,\forall t > 0 , \) where \( \bar{x}_{id} = [y_{d} ,\dot{y}_{d} , \ldots ,y_{d}^{(i)} ]^{T} , \) \( i = 1, \ldots ,n , \) and \( B_{0} \) is a positive constant.

Assumption 4

The sign of control gain \( g_{i} ( \cdot ) \) is unknown. Moreover, there exist positive constants \( g_{\hbox{min} } \) and \( g_{\hbox{max} } \) such that \( 0 < g_{\hbox{min} } \le |g_{i} ( \cdot )| \le g_{\hbox{max} } ,1 \le i \le n . \)

Lemma 1 [6]

If \( V \) is an exp-ISpS Lyapunov function for the control system \( \dot{z} = q(z,x) , \) i.e., Eqs. (11.3) and (11.4) hold, then for any constant \( \bar{c} \in (0,c) , \) any initial instant \( t_{0} > 0 , \) any initial condition \( z_{0} = z(t_{0} ) \) and \( v_{0} > 0 , \) and any function \( \bar{\gamma }( \cdot ) \) such that \( \bar{\gamma }(|x_{1} |) \ge \gamma (|x_{1} |) , \) there exist a finite \( T_{0} = V(z_{0} )v_{0}^{ - 1} e^{{(c - \bar{c})t_{0} }} (c - \bar{c})^{ - 1} \ge 0 , \) a nonnegative function \( D(t_{0} ,t) \) defined for all \( t \ge t_{0} \) satisfying \( D(t_{0} ,t) = 0 \) for \( t \ge t_{0} + T_{0} \) and \( V(z) \le v(t) + D(t_{0} ,t) , \) and an available signal \( v > 0 \) described by

$$ \dot{v} = - \bar{c}v + \bar{\gamma }(|x_{1} |) + d,\;v(t_{0} ) = v_{0} $$
(11.5)

Without loss of generality, we choose \( \bar{\gamma }(|x_{1} |) = \gamma (|x_{1} |) . \)
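As a numerical illustration (not part of the design itself), the dynamic signal (11.5) can be generated by simple forward integration. The sketch below uses an explicit Euler step; the values of \( \bar{c} , \) \( d , \) \( \bar{\gamma }( \cdot ) , \) \( v_{0} \) and the step size are arbitrary placeholders.

```python
import numpy as np

def dynamic_signal(x1_traj, t, c_bar=0.5, d=0.1, v0=1.0,
                   gamma_bar=lambda s: s**2):
    """Forward-Euler integration of the available dynamic signal
    dv/dt = -c_bar * v + gamma_bar(|x1|) + d,  v(t0) = v0   (Eq. 11.5).
    x1_traj and t are arrays of equal length."""
    v = np.empty_like(t)
    v[0] = v0
    for k in range(1, len(t)):
        dt = t[k] - t[k - 1]
        v[k] = v[k - 1] + dt * (-c_bar * v[k - 1]
                                + gamma_bar(abs(x1_traj[k - 1])) + d)
    return v

# example: x1(t) = sin(t) on [0, 10]
t = np.linspace(0.0, 10.0, 1001)
v = dynamic_signal(np.sin(t), t)
print(v[-1])   # the signal remains positive and bounded
```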

Lemma 2 [12]

For any real-valued continuous function \( f\left( {x,y} \right) \) where \( x \in R^{m} \) and \( y \in R^{n} , \) there are smooth scalar-valued functions \( \varphi \left( x \right) \ge 0 \) and \( \vartheta \left( y \right) \ge 0 \) such that \( |f\left( {x,y} \right)| \le \varphi (x) + \vartheta (y) . \)

The Nussbaum gain technique is introduced in this paper in order to deal with the unknown sign of the control gain. A function \( N(\zeta ) \) is called a Nussbaum-type function if it has the following properties [1]:

$$ \mathop {\lim \sup }\limits_{s \to \infty } s^{ - 1} \int_{0}^{s} {N(\zeta )} d\zeta = + \infty \;{\text{and}}\;\mathop {\lim \inf }\limits_{s \to \infty } s^{ - 1} \int_{0}^{s} {N(\zeta )} d\zeta = - \infty $$
(11.6)

Commonly used Nussbaum functions include \( \zeta^{2} \cos (\zeta ) , \) \( \zeta^{2} \sin (\zeta ) \) and \( \exp (\zeta^{2} )\cos ((\pi /2)\zeta ) . \) The function \( N(\zeta ) = e^{{\zeta^{2} }} \cos ((\pi /2)\zeta ) \) is used throughout this paper.
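For intuition, the averaged integral \( s^{ - 1} \int_{0}^{s} {N(\zeta )} d\zeta \) of the chosen Nussbaum function can be evaluated numerically; the sketch below (with arbitrary sample points and a plain trapezoid rule) illustrates the alternation between large positive and large negative values expressed by (11.6).

```python
import numpy as np

def nussbaum(zeta):
    """Nussbaum-type gain N(zeta) = exp(zeta^2) * cos(pi/2 * zeta)."""
    return np.exp(zeta**2) * np.cos(0.5 * np.pi * zeta)

for s in (2.0, 4.0, 6.0, 8.0):
    zeta = np.linspace(0.0, s, 200001)     # fine grid for the trapezoid rule
    vals = nussbaum(zeta)
    dz = zeta[1] - zeta[0]
    integral = dz * (0.5 * vals[0] + vals[1:-1].sum() + 0.5 * vals[-1])
    print(f"s = {s:3.1f},  averaged integral = {integral / s: .3e}")
# the averaged integral alternates in sign and grows in magnitude,
# consistent with the limsup/liminf conditions in (11.6)
```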

Lemma 3 [13]

Let \( V( \cdot ) \) and \( \zeta ( \cdot ) \) be smooth functions defined on \( [0,t_{f} ) \) with \( V(t) \ge 0 , \) \( \forall t \in [0,t_{f} ) , \) and let \( N( \cdot ) \) be an even smooth Nussbaum-type function. If the following inequality holds:

$$ V(t) \le c_{0} + e^{{ - c_{1} t}} \int_{0}^{t} {g(x(\tau ))N(\zeta )\dot{\zeta }} e^{{c_{1} \tau }} d\tau + e^{{ - c_{1} t}} \int_{0}^{t} {\dot{\zeta }} e^{{c_{1} \tau }} d\tau ,\forall t \in [0,t_{f} ) $$
(11.7)

where \( c_{0} \) is a suitable constant, \( c_{1} \) is a positive constant, and \( g(x(t)) \) is a time-varying parameter that takes values in the unknown closed interval \( I: = [l^{ - } ,l^{ + } ] \) with \( 0 \notin I , \) then \( V(t) , \) \( \zeta (t) \) and \( \int_{0}^{t} {g(x(\tau ))N(\zeta )\dot{\zeta }} d\tau \) must be bounded on \( [0,t_{f} ) . \)

Lemma 4 [14]

For any given positive constant \( t_{f} > 0 , \) if the solution of the resulting closed-loop system is bounded on the interval \( t \in [0,t_{f} ) , \) then \( t_{f} = \infty . \)

Let \( W_{i}^{*T} \psi_{i} (\xi_{i} ) \) be the approximation of the radial basis function neural network on a given compact set \( \Upomega_{{\xi_{i} }} \subset R^{q} \) to the unknown continuous function \( h_{i} (\xi_{i} ) , \) i.e., \( h_{i} (\xi_{i} ) = W_{i}^{*T} \psi_{i} (\xi_{i} ) + w_{i} (\xi_{i} ) , \) where \( \xi_{i} \in \Upomega_{{\xi_{i} }} \subset R^{q} \) is the input vector of the neural network; \( W_{i}^{*} \in R^{{l_{i} }} \) is the ideal weight vector for a sufficiently large integer \( l_{i} , \) which denotes the number of neural network nodes and satisfies \( l_{i} > 1 ; \) and \( \psi_{i} (\xi_{i} ) = [\rho_{i1} (\xi_{i} ), \ldots ,\rho_{{il_{i} }} (\xi_{i} )]^{T} \in R^{{l_{i} }} \) is the basis function vector, with \( \rho_{ij} (\xi_{i} ) \) chosen as the commonly used Gaussian functions, which have the form:

$$ \rho_{ij} (\xi_{i} ) = \exp [ - (\xi_{i} - \varsigma_{ij} )^{T} (\xi_{i} - \varsigma_{ij} )\phi_{ij}^{ - 2} ], \, 1 \le j \le l_{i} ,1 \le i \le n $$
(11.8)

where \( \varsigma_{ij} = [\varsigma_{ij1} ,\varsigma_{ij2} , \ldots ,\varsigma_{ijq} ]^{T} \) is the center of the receptive field and \( \phi_{ij} \) is the width of the Gaussian function. The unknown ideal weight vector is defined as follows:

$$ W_{i}^{*} = \arg \mathop {\hbox{min} }\limits_{{W_{i} \in R^{{l_{i} }} }} [\mathop {\sup }\limits_{{\xi_{i} \in \Upomega_{{\xi_{i} }} }} |W_{i}^{T} \psi_{i} (\xi_{i} ) - h_{i} (\xi_{i} )|] $$
(11.9)

Here \( w_{i} (\xi_{i} ) \) is the approximation error, which satisfies \( |w_{i} (\xi_{i} )| \le \varepsilon_{i} , \) where \( \varepsilon_{i} > 0 \) is an unknown constant.
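A minimal sketch of the Gaussian basis (11.8) and of an approximator of the form \( W^{T} \psi (\xi ) \) is given below. The centers, widths and target function are arbitrary illustrative choices, and the weights are obtained by a simple least-squares fit on a grid, which merely stands in for the ideal weights defined by (11.9).

```python
import numpy as np

def rbf_basis(xi, centers, widths):
    """Gaussian basis vector psi(xi) as in (11.8):
    rho_j(xi) = exp(-||xi - c_j||^2 / width_j^2)."""
    diffs = xi[None, :] - centers            # shape (l, q)
    return np.exp(-np.sum(diffs**2, axis=1) / widths**2)

# illustrative one-dimensional example (q = 1, l = 9 nodes)
centers = np.linspace(-2.0, 2.0, 9)[:, None]
widths = 0.8 * np.ones(9)
h = lambda x: x * np.sin(2.0 * x)            # example "unknown" continuous function

# least-squares fit of the weights on a training grid (stand-in for (11.9))
xs = np.linspace(-2.0, 2.0, 200)
Psi = np.array([rbf_basis(np.array([x]), centers, widths) for x in xs])
W, *_ = np.linalg.lstsq(Psi, h(xs), rcond=None)

# approximation error on the compact set [-2, 2]
err = np.max(np.abs(Psi @ W - h(xs)))
print(f"max approximation error on the grid: {err:.3e}")
```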

11.3 Control System Design and Stability Analysis

Based on backstepping, an adaptive neural control scheme is proposed in this section. The design consists of \( n \) steps and is based on the following change of coordinates: \( s_{1} = x_{1} - y_{d} , \) \( s_{2} = x_{2} - \alpha_{1} , \) \( \ldots ,s_{n} = x_{n} - \alpha_{n - 1} , \) where \( \alpha_{i} ,i = 1, \ldots ,n - 1 , \) are the virtual control inputs obtained in the following design. For convenience, define the Lyapunov function candidates as follows:

$$ V_{{s_{1} }} = \int_{0}^{{s_{1} }} | g_{1}^{ - 1} (\sigma + y_{d} )|\sigma d\sigma + \lambda_{0}^{ - 1} v $$
(11.10)
$$ V_{{s_{j} }} = \int_{0}^{{s_{j} }} | g_{j}^{ - 1} (\bar{x}_{j - 1} ,\sigma + \alpha_{j - 1} )|\sigma d\sigma ,2 \le j \le n $$
(11.11)
$$ V_{i} = V_{{s_{i} }} + 0.5\gamma_{i}^{ - 1} \tilde{\theta }_{i}^{2} $$
(11.12)

where \( \tilde{\theta }_{i} = \hat{\theta }_{i} - \theta_{i} , \) \( \hat{\theta }_{i} \) is the estimate of \( \theta_{i} \) at time \( t , \) \( \theta_{i} = ||W_{i}^{*} ||^{2} , \) and \( \gamma_{i} > 0 \) is a design constant, \( i = 1, \ldots ,n . \)

The virtual control laws and the adaptive laws are employed as follows (\( i = 1, \cdots ,n \)):

$$ \alpha_{i} = N(\zeta_{i} )[k_{i} s_{i} + 0.5a_{i}^{ - 2} s_{i} \hat{\theta }_{i} ||\psi_{i} (\xi_{i} )||^{2} ] $$
(11.13)
$$ \dot{\zeta }_{i} = k_{i} s_{i}^{2} + 0.5a_{i}^{ - 2} s_{i}^{2} \hat{\theta }_{i} ||\psi_{i} (\xi_{i} )||^{2} $$
(11.14)
$$ \dot{\hat{\theta }}_{i} = \gamma_{i} [0.5a_{i}^{ - 2} s_{i}^{2} ||\psi_{i} (\xi_{i} )||^{2} - \sigma_{i} \hat{\theta }_{i} ] $$
(11.15)

where \( k_{i} \) is a design constant, \( a_{i} , \) \( \gamma_{i} \) and \( \sigma_{i} \) are strictly positive constants.
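Since (11.13)–(11.15) are purely algebraic in \( s_{i} , \) \( \zeta_{i} , \) \( \hat{\theta }_{i} \) and \( \psi_{i} (\xi_{i} ) , \) they can be transcribed directly. The following sketch is one such transcription; the numerical values of \( k_{i} , \) \( a_{i} , \) \( \gamma_{i} \) and \( \sigma_{i} \) are placeholders, not values prescribed by the design.

```python
import numpy as np

def nussbaum(zeta):
    return np.exp(zeta**2) * np.cos(0.5 * np.pi * zeta)

def control_and_update(s_i, zeta_i, theta_hat_i, psi_i,
                       k_i=2.0, a_i=1.0, gamma_i=1.0, sigma_i=0.1):
    """One evaluation of the i-th design step:
    alpha_i   from (11.13),
    zeta_dot  from (11.14),
    theta_dot from (11.15)."""
    psi_norm_sq = float(np.dot(psi_i, psi_i))
    common = k_i * s_i + 0.5 * s_i * theta_hat_i * psi_norm_sq / a_i**2
    alpha_i = nussbaum(zeta_i) * common                       # (11.13)
    zeta_dot = s_i * common                                   # (11.14) = s_i * [k_i s_i + ...]
    theta_dot = gamma_i * (0.5 * s_i**2 * psi_norm_sq / a_i**2
                           - sigma_i * theta_hat_i)           # (11.15)
    return alpha_i, zeta_dot, theta_dot

# example call with arbitrary arguments
alpha, dzeta, dtheta = control_and_update(0.3, 0.1, 0.5, np.array([0.2, 0.7, 0.1]))
print(alpha, dzeta, dtheta)
```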

For the sake of clarity and convenience, let

$$ B_{1} = |g_{1} (x_{1} )|,\;F_{1} (\sigma ,y_{d} ) = |g_{1}^{ - 1} (\sigma + y_{d} )| $$
(11.16)
$$ B_{i} = |g_{i} (\bar{x}_{i} )|,\;F_{i} (\sigma ,\alpha_{i - 1} ) = |g_{i}^{ - 1} (\bar{x}_{i - 1} ,\sigma + \alpha_{i - 1} )| $$
(11.17)
$$ G_{i,j} (\sigma ,\alpha_{i - 1} ) = \partial |g_{i}^{ - 1} (\bar{x}_{i - 1} ,\sigma + \alpha_{i - 1} )|/\partial x_{j} $$
(11.18)
$$ K_{i} (t) = g_{i} (\bar{x}_{i} )B_{i}^{ - 1} $$
(11.19)

where \( j = 1, \cdots ,i - 1 \) and \( i = 2, \cdots ,n \) in (11.17) and (11.18), while \( K_{i} (t) \) in (11.19) is defined for \( i = 1, \cdots ,n. \)

Step 1: According to the mean value theorem for integrals, there exists \( \lambda_{1} \in (0,1) \) such that \( \int_{0}^{{s_{1} }} {F_{1} (\sigma ,y_{d} )\sigma } d\sigma = 0.5s_{1}^{2} F_{1} (\lambda_{1} s_{1} ,y_{d} ) . \) Since \( 0 < g_{\hbox{min} } \le |g_{1} ( \cdot )| \le g_{\hbox{max} } , \) we have \( 0.5s_{1}^{2} g_{\hbox{max} }^{ - 1} \le \int_{0}^{{s_{1} }} {F_{1} (\sigma ,y_{d} )\sigma } d\sigma \le 0.5s_{1}^{2} g_{\hbox{min} }^{ - 1} , \) so this integral is positive definite with respect to \( s_{1} . \) Differentiating \( s_{1} \) with respect to \( t , \) we obtain

$$ \dot{s}_{1} = \dot{x}_{1} - \dot{y}_{d} = f_{1} (\bar{x}_{1} ) + g_{1} (\bar{x}_{1} )x_{2} + \Updelta_{1} (x,z,t) - \dot{y}_{d} $$
(11.20)

The time derivative of \( V_{{s_{1} }} \) is:

$$ \dot{V}_{{s_{1} }} = B_{1}^{ - 1} s_{1} \dot{s}_{1} + \dot{y}_{d} \left[ {B_{1}^{ - 1} s_{1} - \int_{0}^{{s_{1} }} {F_{1} (\sigma ,y_{d} )d\sigma } } \right] + \lambda_{0}^{ - 1} \bar{\gamma }(|x_{1} |) + d\lambda_{0}^{ - 1} - \bar{c}\lambda_{0}^{ - 1} v $$
(11.21)

According to Assumption 1, using Young’s inequality, we obtain

$$ |s_{1} \Updelta_{1} (x,z,t)| \le |s_{1} |\phi_{11} (||\bar{x}_{1} ||) + |s_{1} |\phi_{12} (||z||) $$
(11.22)
$$ B_{1}^{ - 1} |s_{1} |\phi_{11} (||\bar{x}_{1} ||) \le s_{1}^{2} \phi_{11}^{2} (||\bar{x}_{1} ||)\varepsilon_{1}^{ - 2} B_{1}^{ - 2} + 0.25\varepsilon_{1}^{2} $$
(11.23)

Since \( \alpha_{1} ( \cdot ) \) is a class \( k_{\infty } \) function, its inverse \( \alpha_{1}^{ - 1} ( \cdot ) \) is also a strictly increasing function. Noting Assumption 2 and Lemma 1, we have

$$ ||z|| \le \alpha_{1}^{ - 1} (\nu (t) + D(t_{0} ,t)) $$
(11.24)
$$ \phi_{ 12} (||z||) \le \phi_{ 12} \circ \alpha_{1}^{ - 1} (\nu (t) + D(t_{0} ,t)) $$
(11.25)

where \( \phi_{ 12} \circ \alpha_{1}^{ - 1} ( \cdot ) = \phi_{ 12} (\alpha_{1}^{ - 1} ( \cdot )) . \) Noticing that \( \phi_{ 12} \circ \alpha_{1}^{ - 1} ( \cdot ) \) is a non-negative smooth function, and using Lemma 2, we have

$$ |s_{1} |\phi_{12} (||z||) \le |s_{1} |\phi_{12} \circ \alpha_{1}^{ - 1} (\nu (t) + D(t_{0} ,t)) \le |s_{1} |\varphi_{1} \left( {v(t)} \right) + |s_{1} |\vartheta_{1} (D(t_{0} ,t)) $$
(11.26)

Similar to inequality (11.23), from Young's inequality, we obtain

$$ B_{1}^{ - 1} |s_{1} |\varphi_{1} \left( {v(t)} \right) \le s_{1}^{2} \varphi_{1}^{2} (v(t))\varepsilon_{\varphi 1}^{ - 2} B_{1}^{ - 2} + 0.25\varepsilon_{\varphi 1}^{2} $$
(11.27)
$$ B_{ 1}^{ - 1} |s_{ 1} |\vartheta_{ 1} (D(t_{0} ,t)) \le s_{1}^{2} B_{1}^{ - 2} + 0.25\vartheta_{1}^{2} (D(t_{0} ,t)) $$
(11.28)

From Lemma 1, \( D(t_{0} ,t) \) becomes zero when \( t \ge t_{0} + T_{0} . \) Since \( D(t_{0} ,t) \) and \( \vartheta_{i} ( \cdot ) \) are smooth and bounded, we assume that \( \vartheta_{i}^{2} (D(t_{0} ,t)) \le \vartheta_{i}^{*} , \) \( i = 1,2, \ldots ,n . \) From Young's inequality, we obtain

$$ K_{1} (t)s_{1} s_{2} \le s_{1}^{2} + 0.25s_{2}^{2} $$
(11.29)
$$ s_{1} w_{1} (\xi_{1} ) \le s_{1}^{2} + 0.25w_{1}^{2} (\xi_{1} ) \le s_{1}^{2} + 0.25w_{1}^{*2} $$
(11.30)

Substituting (11.23), (11.27) and (11.28) into (11.21), we have

$$ \begin{aligned} \dot{V}_{{s_{1} }} & \le K_{1} (t)s_{1} x_{2} + s_{1} h_{1} (\xi_{1} ) + \left[ {1 - s_{1}^{2} /\varepsilon_{{\bar{\gamma }}}^{2} } \right]\lambda_{0}^{ - 1} \bar{\gamma }(|x_{1} |) \\ & \quad - \bar{c}\lambda_{0}^{ - 1} v + d\lambda_{0}^{ - 1} + 0.25\varepsilon_{1}^{2} + 0.25\varepsilon_{\varphi 1}^{2} + 0.25\vartheta_{1}^{*} \\ \end{aligned} $$
(11.31)

where
$$ \begin{aligned} h_{1} (\xi_{1} ) &= B_{1}^{ - 1} f_{1} (\bar{x}_{1} ) + s_{1} \varphi_{1}^{2} (v(t))B_{1}^{ - 2} \varepsilon_{\varphi 1}^{ - 2} + s_{1} \phi_{11}^{2} (||\bar{x}_{1} ||)B_{1}^{ - 2} \varepsilon_{1}^{ - 2} + s_{1} B_{1}^{ - 2} \\ &\quad - \dot{y}_{d} s_{1}^{ - 1} \int_{0}^{{s_{1} }} {F_{1} (\sigma ,y_{d} )} d\sigma + \lambda_{0}^{ - 1} s_{1} \bar{\gamma }(|x_{1} |)\varepsilon_{{\bar{\gamma }}}^{ - 2} . \\ \end{aligned} $$

In the above inequality, \( \varepsilon_{{\bar{\gamma }}} \) is a positive design constant. Substituting (11.29) and (11.30) into (11.31), we have:

$$ \begin{aligned} \dot{V}_{{s_{1} }} &\le K_{1} (t)s_{1} \alpha_{1} + 2s_{1}^{2} + 0.5s_{1}^{2} \theta_{1} ||\psi_{1} (\xi_{1} )||^{2} /a_{1}^{2} + 0.25s_{2}^{2} + 0.25w_{1}^{*2} + 0.5a_{1}^{2} \\ &\quad - \bar{c}\lambda_{0}^{ - 1} v + d\lambda_{0}^{ - 1} + 0.25\varepsilon_{1}^{2} + 0.25\varepsilon_{\varphi 1}^{2} + 0.25\vartheta_{1}^{*} + \left[ {1 - s_{1}^{2} \varepsilon_{{\bar{\gamma }}}^{ - 2} } \right]\lambda_{0}^{ - 1} \bar{\gamma }(|x_{1} |) \end{aligned} $$
(11.32)

Substituting (11.13) and (11.14) into (11.32), we obtain:

$$ \begin{aligned} \dot{V}_{{s_{1} }} &\le [K_{1} (t)N(\zeta_{1} ) + 1]\dot{\zeta }_{1} - (k_{1} - 2)s_{1}^{2} - 0.5s_{1}^{2} \tilde{\theta }_{1} ||\psi_{1} (\xi_{1} )||^{2} /a_{1}^{2} - \bar{c}\lambda_{0}^{ - 1} v + 0.25s_{2}^{2} \\ &\quad + 0.25\varepsilon_{1}^{2} + 0.25\varepsilon_{\varphi 1}^{2} + 0.5a_{1}^{2} + 0.25\vartheta_{1}^{*} + 0.25w_{1}^{*2} + d\lambda_{0}^{ - 1} + \left[ {1 - s_{1}^{2} \varepsilon_{{\bar{\gamma }}}^{ - 2} } \right]\lambda_{0}^{ - 1} \bar{\gamma }(|x_{1} |) \\ \end{aligned} $$
(11.33)

Differentiating \( V_{1} \) in (11.12) with respect to time \( t \) and substituting (11.15) and (11.33), we have

$$ \begin{aligned} \dot{V}_{1} &\le [K_{1} (t)N(\zeta_{1} ) + 1]\dot{\zeta }_{1} - (k_{1} - 2)s_{1}^{2} - 0.5\sigma_{1} \tilde{\theta }_{1}^{2} + 0.25s_{2}^{2} - \bar{c}\lambda_{0}^{ - 1} v + 0.25w_{1}^{*2} + 0.5a_{1}^{2} \\ &\quad + d\lambda_{0}^{ - 1} + 0.25\varepsilon_{1}^{2} + 0.25\varepsilon_{\varphi 1}^{2} + 0.25\vartheta_{1}^{*} + 0.5\sigma_{1} \theta_{1}^{2} + \left[ {1 - s_{1}^{2} \varepsilon_{{\bar{\gamma }}}^{ - 2} } \right]\lambda_{0}^{ - 1} \bar{\gamma }(|x_{1} |). \\ \end{aligned} $$
(11.34)

Define \( c_{11} = \hbox{min} \left\{ {2(k_{1} - 2),\gamma_{1} \sigma_{1} ,\bar{c}} \right\} \) and \( c_{12} = 0.5\sigma_{1} \theta_{1}^{2} + 0.25w_{1}^{*2} + 0.5a_{1}^{2} + d\lambda_{0}^{ - 1} + 0.25\varepsilon_{1}^{2} + 0.25\varepsilon_{\varphi 1}^{2} + 0.25\vartheta_{1}^{*} . \) From inequality (11.34), we obtain:

$$ \dot{V}_{1} \le [K_{1} (t)N(\zeta_{1} ) + 1]\dot{\zeta }_{1} - c_{11} V_{1} + c_{12} + 0.25s_{2}^{2} + \left[ {1 - s_{1}^{2} \varepsilon_{{\bar{\gamma }}}^{ - 2} } \right]\lambda_{0}^{ - 1} \bar{\gamma }(|x_{1} |) $$
(11.35)

Multiplying (11.35) by \( e^{{c_{11} t}} , \) it becomes:

$$ \begin{aligned} d(V_{1} (t)e^{{c_{11} t}} )/dt &\le c_{12} e^{{c_{11} t}} + [K_{1} (t)N(\zeta_{1} ) + 1]\dot{\zeta }_{1} e^{{c_{11} t}} \hfill \\ &\quad + 0.25s_{2}^{2} e^{{c_{11} t}} + \left[ {1 - s_{1}^{2} \varepsilon_{{\bar{\gamma }}}^{ - 2} } \right]\lambda_{0}^{ - 1} \bar{\gamma }(|x_{1} |)e^{{c_{11} t}} \\ \end{aligned} $$
(11.36)

Integrating (11.36) over \( [0,t] , \) we have:

$$ \begin{aligned} V_{1} (t) &\le c_{13} + V_{1} (0) + \int_{0}^{t} {([K_{1} (\tau )N(\zeta_{1} ) + 1]\dot{\zeta }_{1} e^{{c_{11} (\tau - t)}} )d\tau } \\ &\quad + 0.25e^{{ - c_{11} t}} \int_{0}^{t} {s_{2}^{2} } e^{{c_{11} \tau }} d\tau + e^{{ - c_{11} t}} \int_{0}^{t} {\left[ {1 - s_{1}^{2} \varepsilon_{{\bar{\gamma }}}^{ - 2} } \right]\lambda_{0}^{ - 1} \bar{\gamma }(|x_{1} |)} e^{{c_{11} \tau }} d\tau \\ \end{aligned} $$
(11.37)

where \( c_{13} = c_{11}^{ - 1} c_{12} . \) Note that

$$ e^{{ - c_{11} t}} \int_{0}^{t} {s_{2}^{2} } e^{{c_{11} \tau }} d\tau \le c_{11}^{ - 1} \mathop {\sup }\limits_{\tau \in [0,t]} [s_{2}^{2} (\tau )] $$
(11.38)
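This bound is obtained by pulling the supremum of \( s_{2}^{2} \) out of the integral and evaluating the remaining exponential integral:

$$ e^{{ - c_{11} t}} \int_{0}^{t} {s_{2}^{2} (\tau )} e^{{c_{11} \tau }} d\tau \le \mathop {\sup }\limits_{\tau \in [0,t]} [s_{2}^{2} (\tau )]e^{{ - c_{11} t}} \int_{0}^{t} {e^{{c_{11} \tau }} } d\tau = \mathop {\sup }\limits_{\tau \in [0,t]} [s_{2}^{2} (\tau )]c_{11}^{ - 1} (1 - e^{{ - c_{11} t}} ) \le c_{11}^{ - 1} \mathop {\sup }\limits_{\tau \in [0,t]} [s_{2}^{2} (\tau )]. $$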

Therefore, if \( s_{2}^{2} \) can be regulated to be bounded, it follows from inequality (11.38) that the extra term \( 0.25e^{{ - c_{11} t}} \int_{0}^{t} {s_{2}^{2} } e^{{c_{11} \tau }} d\tau \) is also bounded. The effect of this term will be handled in the following steps, and the remaining extra term will be discussed at the end of the paper.

Step \( i \) (\( 2 \le i \le n - 1 \)): The time derivative of \( s_{i} \) is:

$$ \dot{s}_{i} = f_{i} (\bar{x}_{i} ) + g_{i} (\bar{x}_{i} )x_{i + 1} + \Updelta_{i} (x,z,t) - \dot{\alpha }_{i - 1} $$
(11.39)

Because \( \alpha_{i - 1} \) is a function of \( \bar{x}_{i - 1} ,\zeta_{1} , \ldots ,\zeta_{i - 1} ,\bar{x}_{(i - 1)d} ,\hat{\theta }_{1} , \ldots ,\hat{\theta }_{i - 1} \) and \( v , \) \( \dot{\alpha }_{i - 1} \) can be expressed as \( \dot{\alpha }_{i - 1} = \sum\limits_{j = 1}^{i - 1} {\frac{{\partial \alpha_{i - 1} }}{{\partial x_{j} }}} \left[ {f_{j} (\bar{x}_{j} ) + g_{j} (\bar{x}_{j} )x_{j + 1} + \Updelta_{j} (x,z,t)} \right] + \omega_{i - 1} (t) , \) where

$$ \omega_{i - 1} (t) = \sum\limits_{j = 1}^{i - 1} {\frac{{\partial \alpha_{i - 1} }}{{\partial \zeta_{j} }}} \dot{\zeta }_{j} + \frac{{\partial \alpha_{i - 1} }}{{\partial \bar{x}_{(i - 1)d}^{T} }}\dot{\bar{x}}_{(i - 1)d} + \sum\limits_{j = 1}^{i - 1} {\frac{{\partial \alpha_{i - 1} }}{{\partial \hat{\theta }_{j} }}} \dot{\hat{\theta }}_{j} + \frac{{\partial \alpha_{i - 1} }}{\partial v}\dot{v} $$
(11.40)

The time derivative of \( V_{{s_{i} }} \) is

$$ \begin{aligned} \dot{V}_{{s_{i} }} &= B_{i}^{ - 1} s_{i} \dot{s}_{i} + \dot{\alpha }_{i - 1} [B_{i}^{ - 1} s_{i} - \int_{0}^{{s_{i} }} {F_{i} (\sigma ,\alpha_{i - 1} )d\sigma } ] \hfill \\ &\quad + \sum\limits_{j = 1}^{i - 1} {\int_{0}^{{s_{i} }} {\sigma G_{i,j} (\sigma ,\alpha_{i - 1} )(f_{j} (\bar{x}_{j} ) + g_{j} (\bar{x}_{j} )x_{j + 1} + \Updelta_{j} (x,z,t))d\sigma } } \\ \end{aligned} $$
(11.41)

Using Assumption 1, we have:

$$ \begin{aligned} & |s_{i} |\left| {\Updelta_{i} (x,z,t) - \sum\limits_{j = 1}^{i - 1} {\frac{{\partial \alpha_{i - 1} }}{{\partial x_{j} }}\Updelta_{j} (x,z,t)} } \right| \\ &\quad + |s_{i} |\left| {\sum\limits_{j = 1}^{i - 1} {\int_{0}^{1} {\eta G_{i,j} (\eta s_{i} ,\alpha_{i - 1} )\Updelta_{j} (x,z,t)} d\eta } } \right| \le |s_{i} |[\bar{\phi }_{i1} (||\bar{x}_{i} ||) + \bar{\phi }_{i2} (||z||)] \\ \end{aligned} $$
(11.42)

where \( \bar{\phi }_{i1} (||\bar{x}_{i} ||) \ge \phi_{i1} + \sum\limits_{j = 1}^{i - 1} {\left| {\frac{{\partial \alpha_{i - 1} }}{{\partial x_{j} }}} \right|\phi_{j1} } \) and \( \bar{\phi }_{i2} (||z||) \ge \phi_{i2} + \sum\limits_{j = 1}^{i - 1} {\left| {\frac{{\partial \alpha_{i - 1} }}{{\partial x_{j} }}} \right|\phi_{j2} } . \)

Similar to step 1, and according to Lemma 2, we have

$$ |s_{i} |\phi_{i2} (||z||) \le |s_{i} |\varphi_{i} \left( {v(t)} \right) + |s_{i} |\vartheta_{i} (D(t_{0} ,t)) $$
(11.43)

Furthermore, applying Young’s inequality, we have:

$$ B_{i}^{ - 1} |s_{i} |\phi_{i1} (||\bar{x}_{i} ||) \le B_{i}^{ - 2} \varepsilon_{i}^{ - 2} s_{i}^{2} \phi_{i1}^{2} (||\bar{x}_{i} ||) + 0.25\varepsilon_{i}^{ 2} $$
(11.44)
$$ B_{i}^{ - 1} |s_{i} |\varphi_{i} \left( {v(t)} \right) \le B_{i}^{ - 2} \varepsilon_{\varphi i}^{ - 2} s_{i}^{2} \varphi_{i}^{2} (v(t)) + 0.25\varepsilon_{\varphi i}^{2} $$
(11.45)
$$ B_{i}^{ - 1} |s_{i} |\vartheta_{i} (D(t_{0} ,t)) \le B_{i}^{ - 2} s_{i}^{2} + 0.25\vartheta_{i}^{2} (D(t_{0} ,t)) $$
(11.46)
$$ K_{i} (t)s_{i} s_{i + 1} \le s_{i}^{2} + 0.25s_{i + 1}^{2} $$
(11.47)
$$ s_{i} w_{i} (\xi_{i} ) \le s_{i}^{2} + 0.25w_{i}^{2} (\xi_{i} ) \le s_{i}^{2} + 0.25w_{i}^{*2} $$
(11.48)

where \( i = 1, \ldots ,n. \) From Lemma 1, we note that \( D(t_{0} ,t) \) becomes zero when \( t \ge t_{0} + T_{0} . \) Since \( D(t_{0} ,t) \) and \( \vartheta_{i} ( \cdot ) \) are smooth and bounded, there exists an unknown positive constant \( \vartheta_{i}^{*} \) such that \( \vartheta_{i}^{2} (D(t_{0} ,t)) \le \vartheta_{i}^{*} . \) Let

$$ \begin{aligned} h_{i} (\xi_{i} ) &= B_{i}^{ - 1} f_{i} (\bar{x}_{i} ) + B_{i}^{ - 2} \varepsilon_{i}^{ - 2} s_{i} \phi_{i1}^{2} (||\bar{x}_{i} ||) + B_{i}^{ - 2} s_{i} \varepsilon_{\varphi i}^{ - 2} \varphi_{i}^{2} (v(t)) + B_{i}^{ - 2} s_{i} \\ &\quad + s_{i} \sum\limits_{j = 1}^{i - 1} {\int_{0}^{1} {\eta G_{i,j} (\eta s_{i} ,\alpha_{i - 1} )(f_{j} (\bar{x}_{j} ) + g_{j} (\bar{x}_{j} )x_{j + 1} )d\eta } } - s_{i}^{ - 1} \dot{\alpha }_{i - 1} \int_{0}^{{s_{i} }} {F_{i} (\sigma ,\alpha_{i - 1} )} d\sigma \\ \end{aligned} $$
(11.49)

where \( \xi_{i} = [\bar{x}_{i}^{T} ,\alpha_{i - 1} ,\frac{{\partial \alpha_{i - 1} }}{{\partial x_{1} }},\frac{{\partial \alpha_{i - 1} }}{{\partial x_{2} }}, \ldots ,\frac{{\partial \alpha_{i - 1} }}{{\partial x_{i - 1} }},\omega_{i - 1} ,\;v]^{T} \in \Upomega_{{s_{i} }} \subset R^{2i + 2} . \) Substituting (11.42)–(11.46) into (11.41), we obtain

$$ \dot{V}_{{s_{i} }} \le K_{i} (t)s_{i} x_{i + 1} + s_{i} h_{i} (\xi_{i} ) + 0.25\varepsilon_{i}^{2} + 0.25\varepsilon_{\varphi i}^{2} + 0.25\vartheta_{i}^{*} $$
(11.50)

Similar to the discussion in the step 1, substituting (11.47) and (11.48) into (11.50), we have

$$ \begin{aligned} \dot{V}_{i} &\le [K_{i} (t)N(\zeta_{i} ) + 1]\dot{\zeta }_{i} - (k_{i} - 2)s_{i}^{2} - 0.5\sigma_{i} \tilde{\theta }_{i}^{2} + 0.5\sigma_{i} \theta_{i}^{2} \\ &\quad + 0.25s_{i + 1}^{2} + 0.25w_{i}^{*2} + 0.5a_{i}^{2} + 0.25\varepsilon_{i}^{2} + 0.25\varepsilon_{\varphi i}^{2} + 0.25\vartheta_{i}^{*} \\ \end{aligned} $$
(11.51)

Define \( c_{i1} = \hbox{min} \left\{ {2(k_{i} - 2),\gamma_{i} \sigma_{i} } \right\} \) and \( c_{i2} = 0.5\sigma_{i} \theta_{i}^{2} + 0.25w_{i}^{*2} + 0.5a_{i}^{2} + 0.25\varepsilon_{i}^{2} + 0.25\varepsilon_{\varphi i}^{2} + 0.25\vartheta_{i}^{*} . \) From the above inequality, we have

$$ \dot{V}_{i} \le [K_{i} (t)N(\zeta_{i} ) + 1]\dot{\zeta }_{i} - c_{i1} V_{i} + c_{i2} + 0.25s_{i + 1}^{2} $$
(11.52)

Multiplying (11.52) by \( e^{{c_{i1} t}} , \) we obtain

$$ d(V_{i} (t)e^{{c_{i1} t}} )/dt \le c_{i2} e^{{c_{i1} t}} + [K_{i} (t)N(\zeta_{i} ) + 1]\dot{\zeta }_{i} e^{{c_{i1} t}} + 0.25s_{i + 1}^{2} e^{{c_{i1} t}} $$
(11.53)

Integrating (11.53) over \( [0,t] , \) we have

$$ V_{i} (t) \le c_{i3} + V_{i} (0) + \int_{0}^{t} {([K_{i} (\tau )N(\zeta_{i} ) + 1]\dot{\zeta }_{i} e^{{c_{i1} (\tau - t)}} )d\tau } + 0.25c_{i1}^{ - 1} \mathop {\sup }\limits_{\tau \in [0,t]} [s_{i + 1}^{2} (\tau )] $$
(11.54)

where \( c_{i3} = c_{i1}^{ - 1} c_{i2} . \)

Step \( n \): The time derivative of \( s_{n} \) is

$$ \dot{s}_{n} = f_{n} (\bar{x}_{n} ) + g_{n} (\bar{x}_{n} )u + \Updelta_{n} (x,z,t) - \dot{\alpha }_{n - 1} $$
(11.55)

where \( \alpha_{n - 1} \) is a function of \( \bar{x}_{n - 1} ,\;\zeta_{1} , \ldots ,\zeta_{n - 1} ,\;\bar{x}_{(n - 1)d} ,\;\hat{\theta }_{1} , \ldots ,\hat{\theta }_{n - 1} \) and \( v , \) so \( \dot{\alpha }_{n - 1} \) can be expressed as \( \dot{\alpha }_{n - 1} = \sum\limits_{j = 1}^{n - 1} {\frac{{\partial \alpha_{n - 1} }}{{\partial x_{j} }}} \left[ {f_{j} (\bar{x}_{j} ) + g_{j} (\bar{x}_{j} )x_{j + 1} + \Updelta_{j} (x,z,t)} \right] + \omega_{n - 1} (t) , \) where

$$ \omega_{n - 1} (t) = \sum\limits_{j = 1}^{n - 1} {\frac{{\partial \alpha_{n - 1} }}{{\partial \zeta_{j} }}} \dot{\zeta }_{j} + \frac{{\partial \alpha_{n - 1} }}{{\partial \bar{x}_{(n - 1)d}^{T} }}\dot{\bar{x}}_{(n - 1)d} + \sum\limits_{j = 1}^{n - 1} {\frac{{\partial \alpha_{n - 1} }}{{\partial \hat{\theta }_{j} }}} \dot{\hat{\theta }}_{j} + \frac{{\partial \alpha_{n - 1} }}{\partial v}\dot{v} $$
(11.56)

Differentiating \( V_{{s_{n} }} \) with respect to time \( t , \) we obtain

$$ \begin{aligned} \dot{V}_{{s_{n} }} &= B_{n}^{ - 1} s_{n} \dot{s}_{n} + \dot{\alpha }_{n - 1} \left[ {B_{n}^{ - 1} s_{n} - \int_{0}^{{s_{n} }} {F_{n} (\sigma ,\alpha_{n - 1} )d\sigma } } \right] \\ &\quad + \sum\limits_{j = 1}^{n - 1} {\int_{0}^{{s_{n} }} {\sigma G_{n,j} (\sigma ,\alpha_{n - 1} )(f_{j} (\bar{x}_{j} ) + g_{j} (\bar{x}_{j} )x_{j + 1} + \Updelta_{j} (x,z,t))d\sigma } } \end{aligned} $$
(11.57)

Similar to step \( i , \) let

$$ \begin{aligned} h_{n} (\xi_{n} ) &= B_{n}^{ - 1} f_{n} (\bar{x}_{n} ) + B_{n}^{ - 2} \varepsilon_{n}^{ - 2} s_{n} \phi_{n1}^{2} (||\bar{x}_{n} ||) + B_{n}^{ - 2} \varepsilon_{\varphi n}^{ - 2} s_{n} \varphi_{n}^{2} (v(t)) + B_{n}^{ - 2} s_{n} \\ &\quad + s_{n} \sum\limits_{j = 1}^{n - 1} {\int_{0}^{1} {\eta G_{n,j} (\eta s_{n} ,\alpha_{n - 1} )(f_{j} (\bar{x}_{j} ) + g_{j} (\bar{x}_{j} )x_{j + 1} )d\eta } } - s_{n}^{ - 1} \dot{\alpha }_{n - 1} \int_{0}^{{s_{n} }} {F_{n} (\sigma ,\alpha_{n - 1} )} d\sigma \\ \end{aligned} $$
(11.58)

where \( \xi_{n} = [\bar{x}_{n}^{T} ,\alpha_{n - 1} ,\frac{{\partial \alpha_{n - 1} }}{{\partial x_{1} }},\frac{{\partial \alpha_{n - 1} }}{{\partial x_{2} }}, \cdots ,\frac{{\partial \alpha_{n - 1} }}{{\partial x_{n - 1} }},\omega_{n - 1} ,v]^{T} \in \Upomega_{{s_{n} }} \subset R^{2n + 2} . \)

Substituting (11.42)–(11.46) into (11.57) yields

$$ \dot{V}_{{s_{n} }} \le K_{n} (t)s_{n} u + s_{n} h_{n} (\xi_{n} ) + 0.25\varepsilon_{n}^{2} + 0.25\varepsilon_{\varphi n}^{2} + 0.25\vartheta_{n}^{*} $$
(11.59)

Similar to the discussion in step \( i , \) substituting (11.47) and (11.48) into (11.59), we have

$$ \begin{aligned} \dot{V}_{n} &\le [K_{n} (t)N(\zeta_{n} ) + 1]\dot{\zeta }_{n} - (k_{n} - 1)s_{n}^{2} - 0.5\sigma_{n} \tilde{\theta }_{n}^{2} + 0.5\sigma_{n} \theta_{n}^{2} \\ &\quad+ 0. 25w_{n}^{*2} + 0.5a_{n}^{2} + 0.25\varepsilon_{n}^{2} + 0.25\varepsilon_{\varphi n}^{2} + 0.25\vartheta_{n}^{*} \\ \end{aligned} $$
(11.60)

Define \( c_{n1} = \hbox{min} \left\{ {2(k_{n} - 1),\gamma_{n} \sigma_{n} } \right\} \) and \( c_{n2} = 0.5\sigma_{n} \theta_{n}^{2} + 0.25w_{n}^{*2} + 0.5a_{n}^{2} + 0.25\varepsilon_{n}^{2} + 0.25\varepsilon_{\varphi n}^{2} + 0.25\vartheta_{n}^{*} . \) The above inequality can be rewritten as

$$ \dot{V}_{n} \le [K_{n} (t)N(\zeta_{n} ) + 1]\dot{\zeta }_{n} - c_{n1} V_{n} + c_{n2} $$
(11.61)

Multiplying (11.61) by \( e^{{c_{n1} t}} , \) we obtain

$$ d(V_{n} (t)e^{{c_{n1} t}} )/dt \le c_{n2} e^{{c_{n1} t}} + [K_{n} (t)N(\zeta_{n} ) + 1]\dot{\zeta }_{n} e^{{c_{n1} t}} $$
(11.62)

Integrating (11.62) over \( [0,t] , \) we have:

$$ V_{n} (t) \le c_{n3} + V_{n} (0) + \int_{0}^{t} {([K_{n} (\tau )N(\zeta_{n} ) + 1]\dot{\zeta }_{n} e^{{c_{n1} (\tau - t)}} )d\tau } $$
(11.63)

where \( c_{n3} = c_{n1}^{{{ - }1}} c_{n2} . \)

Theorem 1

Consider the closed-loop system consisting of the plant (11.1) under Assumptions 1–4, the control law given by (11.13) with \( i = n \) (i.e., \( u = \alpha_{n} \)), and the adaptation laws (11.14)–(11.15). Then, for bounded initial conditions, the following properties hold:

(1) All signals in the closed-loop system are semi-globally uniformly ultimately bounded.

(2) The vector \( \xi_{i} \) stays in the compact set \( \Upomega_{{\xi_{i} }} \subset R^{2i + 2} , \) specified as

$$ \begin{aligned} \Upomega_{{\xi_{i} }} &= \left\{ {\xi_{i} |s_{j}^{2} \le 2g_{\hbox{max} } \mu_{j} ,||\tilde{\theta }_{j} ||^{2} \le 2\gamma_{j} \mu_{j} ,j = 1, \cdots ,i,\;v \le \lambda_{0} \mu_{1} ,s_{i + 1}^{2} \le 2g_{\hbox{max} } \mu_{i + 1} } \right\} \\ \Upomega_{{\xi_{n} }} &= \left\{ {\xi_{n} |s_{j}^{2} \le 2g_{\hbox{max} } \mu_{j} ,||\tilde{\theta }_{j} ||^{2} \le 2\gamma_{j} \mu_{j} ,\bar{x}_{jd} \in \Upomega_{jd} ,j = 1, \cdots ,n,\;v \le \lambda_{0} \mu_{1} } \right\}. \\ \end{aligned} $$

Proof

The proof follows along the same lines as the discussion in Ref. [4] and is omitted here.
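To make the design procedure concrete, the following sketch simulates the scheme for a first-order instance of (11.1), in which case \( u \) is given directly by (11.13) with \( i = n = 1 . \) The plant functions, the unmodeled dynamics, all design constants, and the simplified choice of the network input \( \xi_{1} \) (taken here to be \( x_{1} \) only, whereas the full design also feeds in \( v \) and the desired trajectory) are illustrative assumptions and are not taken from this paper. The regulation case \( y_{d} \equiv 0 \) is used to keep the example short; the same structure applies to tracking.

```python
import numpy as np

def nussbaum(zeta):
    return np.exp(zeta**2) * np.cos(0.5 * np.pi * zeta)

def rbf(x, centers, width):
    """Gaussian basis vector evaluated at the scalar network input x."""
    return np.exp(-(x - centers)**2 / width**2)

# --- illustrative first-order plant with unmodeled dynamics (assumptions) ---
f1     = lambda x: 0.5 * x**2                    # unknown smooth function
g1     = lambda x: -(1.0 + 0.2 * np.sin(x))      # unknown gain with unknown (negative) sign
q      = lambda z, x: -2.0 * z + 0.5 * x**2      # exp-ISpS unmodeled dynamics
Delta1 = lambda z, x: 0.3 * z * np.cos(x)        # dynamic disturbance
y_d = 0.0                                        # constant desired output (regulation)

# --- design constants (placeholders) and network parameters -----------------
k1, a1, gamma1, sigma1 = 4.0, 1.0, 2.0, 0.05
c_bar, d_const = 0.5, 0.1
centers, width = np.linspace(-3.0, 3.0, 11), 1.0

dt, T = 1e-3, 10.0
x, z, v, zeta, theta_hat = 0.5, 0.2, 1.0, 0.0, 0.0

for step in range(int(round(T / dt))):
    s1 = x - y_d
    psi = rbf(x, centers, width)        # simplified network input xi_1 = x (assumption)
    psi_sq = float(psi @ psi)
    common = k1 * s1 + 0.5 * s1 * theta_hat * psi_sq / a1**2
    u = nussbaum(zeta) * common                              # (11.13) with n = 1, u = alpha_1
    # forward-Euler updates of the plant, the unmodeled dynamics, the dynamic
    # signal (11.5), the Nussbaum argument (11.14) and the adaptation law (11.15)
    x += dt * (f1(x) + g1(x) * u + Delta1(z, x))
    z += dt * q(z, x)
    v += dt * (-c_bar * v + 0.5 * x**2 + d_const)
    zeta += dt * s1 * common
    theta_hat += dt * gamma1 * (0.5 * s1**2 * psi_sq / a1**2 - sigma1 * theta_hat)

print(f"|y - y_d| after {T:.0f} s: {abs(x - y_d):.4f},  zeta = {zeta:.3f}")
```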

11.4 Conclusion

Based on the backstepping design and the properties of the Nussbaum function, an adaptive neural control scheme is proposed for a class of strict-feedback nonlinear systems with unmodeled dynamics. An available dynamic signal is introduced to dominate the unmodeled dynamics, and the unknown control direction and the unknown control gain functions are dealt with using the properties of the Nussbaum function. The controller singularity problem, which may be caused by the time-varying gain functions, is avoided by using integral Lyapunov functions, and the treatment of the unmodeled dynamics is simplified. Theoretical analysis shows that the developed controller guarantees that all the signals involved are semi-globally uniformly ultimately bounded.