1 Introduction

Over the past decades, electromechanical servo systems, such as robotic manipulators and motor servo systems, have been widely studied [13]. As popular automatic equipment, robotic manipulators have attracted considerable attention and have been widely applied in industrial automation [4, 5]. Since joint flexibility may degrade the practical control precision, it should be taken into account in both the modeling and the control of manipulators, in which torsional elasticity is introduced [6].

So far, many significant studies have addressed the issue of joint flexibility to achieve better tracking performance. In [6], an adaptive state feedback controller for flexible-joint manipulators is presented using the backstepping technique. However, when some states are immeasurable, such methods cannot be applied directly. In [7], an extended state observer-based control scheme is developed for the trajectory tracking of flexible-joint robotic manipulators with a partially known model. Nevertheless, the control of flexible-joint robotic manipulators with immeasurable states and model uncertainties remains a challenging task.

In this paper, an adaptive neural output feedback control scheme is presented for the tracking control of flexible-joint manipulators with immeasurable states and model uncertainties. The unknown states are estimated using a Luenberger state observer, and the output feedback controller is designed based on the estimated states and dynamic surface control technique. Moreover, the model uncertainties of the system are approximated by employing radial basis function (RBF) neural networks. The position tracking error is proven to converge to a small neighborhood around zero via the Lyapunov synthesis.

2 System Description

As shown in Fig. 1, the considered single-link robotic manipulator system taken from [6] is expressed as

Fig. 1 Schematic of flexible-joint manipulator

$$ \left\{ {\begin{array}{*{20}l} {I{\ddot q}_{1} + K(q_{1} - q_{2} ) + MgL\,\sin (q_{1} ) = 0} \hfill \\ {J{\ddot q}_{2} - K(q_{1} - q_{2} ) = u} \hfill \\ \end{array} } \right. $$
(1)

where \( q_{1} \) and \( q_{2} \) are the angles of the link and motor, respectively, g is the acceleration of gravity, I is the link inertia, J is the inertia of the motor, K is the spring stiffness, M and L are the mass and length of link, respectively, and u is the input torque.

For the purpose of simplifying the controller design, define \( x_{1} = q_{1} \), \( x_{2} = \dot{q}_{1} = \dot{x}_{1} \), \( x_{3} = q_{2} \), and \( x_{4} = \dot{q}_{2} = \dot{x}_{3} \) and the system (1) can be transformed into

$$ \left\{ {\begin{array}{*{20}l} {\dot{x}_{1} = x_{2} + f_{1} (X)} \hfill \\ {\dot{x}_{2} = x_{3} + f_{2} (X)} \hfill \\ {\dot{x}_{3} = x_{4} + f_{3} (X)} \hfill \\ {\dot{x}_{4} = \frac{1}{J}u + f_{4} (X)} \hfill \\ {y = x_{1} } \hfill \\ \end{array} } \right. $$
(2)

where \( X = \left[ {x_{1} ,x_{2} ,x_{3} ,x_{4} } \right]^{T} \in \,R^{4} \), \( f_{1} (X) = 0 \), \( f_{2} (X) = - \frac{MgL}{I}\sin \,x_{1} - \frac{K}{I}\left( {x_{1} - x_{3} + \frac{I}{K}x_{3} } \right) \), \( f_{3} (X) = 0 \), and \( f_{4} (X) = \frac{K}{J}(x_{1} - x_{3} ) \). The output y is measurable directly, while the states \( x_{i} \,(i = 2,3,4) \) are immeasurable.
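For concreteness, the transformed model (2) can be sketched numerically as follows; the parameter values \( MgL = 5 \), \( I = 1 \), \( J = 1 \), and \( K = 40 \) are taken from the simulation section, and the function names are illustrative only.

```python
import numpy as np

# Parameter values follow the simulation section (assumed here for illustration).
MgL, I, J, K = 5.0, 1.0, 1.0, 40.0

def f(X):
    """Model uncertainties f_1..f_4 of (2)."""
    x1, x2, x3, x4 = X
    f1 = 0.0
    f2 = -MgL / I * np.sin(x1) - K / I * (x1 - x3 + I / K * x3)
    f3 = 0.0
    f4 = K / J * (x1 - x3)
    return np.array([f1, f2, f3, f4])

def plant(X, u):
    """Right-hand side of (2): xdot_i = x_{i+1} + f_i (i = 1..3), xdot_4 = u/J + f_4."""
    x1, x2, x3, x4 = X
    fx = f(X)
    return np.array([x2 + fx[0], x3 + fx[1], x4 + fx[2], u / J + fx[3]])
```

One can check that `plant` reproduces the original dynamics (1): the second component equals \( -\frac{MgL}{I}\sin x_{1} - \frac{K}{I}(x_{1} - x_{3}) \), as obtained by eliminating \( x_{3} \) from \( f_{2} \).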

Rewrite (2) in the following form:

$$ \dot{X} = AX + F(X) + Bu + Ly $$
(3)

where \( A = \left[ {\begin{array}{*{20}c} { - L_{3} } & {I_{3} } \\ { - l_{4} } & {0_{1 \times 3} } \\ \end{array} } \right] \), \( L_{3} = \left[ {l_{1} ,l_{2} ,l_{3} } \right]^{T} \), \( L = \left[ {l_{1} ,l_{2} ,l_{3} ,l_{4} } \right]^{T} \), \( B = \left[ {0,0,0,\frac{1}{J}} \right]^{T} \), and \( F(X) = \left[ {f_{1} (X),\,f_{2} (X),\,f_{3} (X),\,f_{4} (X)} \right]^{T} \).

The parameters \( l_{i} \,(i = 1,2,3,4) \) are chosen such that the matrix A is Hurwitz, and the control objective is to design the controller u such that the output y follows the desired position trajectory \( y_{r} \).

3 Observer Design

Consider the following observer that estimates the immeasurable state variables X in (3)

$$ \dot{\hat{X}} = A\hat{X} + F(\hat{X}) + Bu + Ly $$
(4)

where \( \hat{X} = \left[ {\hat{x}_{1} ,\hat{x}_{2} ,\hat{x}_{3} ,\hat{x}_{4} } \right]^{T} \) and \( \hat{x}_{i} \) is the estimate of \( x_{i} \),

$$ \begin{array}{*{20}l} {F(\hat{X}) = \left[ {f_{1} (\hat{X}),f_{2} (\hat{X}),f_{3} (\hat{X}),f_{4} (\hat{X})} \right]^{T} ,} \hfill & {f_{1} (\hat{X}) = 0,} \hfill \\ { f_{2} (\hat{X}) = - \frac{MgL}{I}\sin \hat{x}_{1} - \frac{K}{I}\left( {\hat{x}_{1} - \hat{x}_{3} + \frac{I}{K}\hat{x}_{3} } \right),} \hfill & {f_{3} (\hat{X}) = 0,} \hfill \\ {f_{4} (\hat{X}) = l_{4} (x_{1} - \hat{x}_{1} ) + \frac{K}{J}\left( {\hat{x}_{1} - \hat{x}_{3} } \right).} \hfill & {} \hfill \\ \end{array} $$
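A discrete-time sketch of the observer (4) is given below; the Euler step size, the structure of A (with \( -l_{i} \) in its first column), and the reuse of the simulation gains \( L = [12, 40, -6, -32]^{T} \) are assumptions for illustration.

```python
import numpy as np

# Observer gains from the simulation section (assumed here for illustration).
l = np.array([12.0, 40.0, -6.0, -32.0])
J = 1.0

# Matrix A of (3) with -l_i in the first column (reconstructed structure).
A = np.array([[-l[0], 1, 0, 0],
              [-l[1], 0, 1, 0],
              [-l[2], 0, 0, 1],
              [-l[3], 0, 0, 0]])
B = np.array([0.0, 0.0, 0.0, 1.0 / J])

def observer_step(x_hat, u, y, F_hat, dt=1e-3):
    """One Euler step of xhat_dot = A xhat + F(xhat) + B u + L y, per (4)."""
    return x_hat + dt * (A @ x_hat + F_hat(x_hat) + B * u + l * y)
```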

Let \( e_{i} = x_{i} - \hat{x}_{i} (i = 1,\,2,\,3,\,4) \) be the observer error. From (3) and (4), the observer-error equation can be expressed as follows:

$$ \dot{E} = AE + F({\text{X}}) - {\text{F}}(\hat{X}), $$
(5)

where \( {\text{E}} = [e_{1} ,e_{2} ,e_{3} ,e_{4} ]^{T} \).

Construct the following Lyapunov function candidate

$$ V_{e} = E^{T} PE $$
(6)

where \( P = P^{T} > 0 \).

In view of (5), the time derivative of \( V_{e} \) is

$$ \dot{V}_{e} = E^{T} (A^{T} P + PA)E + 2E^{T} P\left[ {F({\text{X}}) - {\text{F}}\left( {\hat{X}} \right)} \right]. $$
(7)

According to the mean-value theorem, the term \( 2E^{T} P\left[ {F({\text{X}}) - {\text{F}}(\hat{X})} \right] \) can be transformed into

$$ 2E^{T} P\left[ {{\text{F}}(X) - {\text{F}}(\hat{X})} \right] = E^{T} \left[ {P\frac{\partial F}{\partial x} + \left( {\frac{\partial F}{\partial x}} \right)^{T} P} \right]E $$
(8)

where \( \frac{\partial F}{\partial x} \) is a Jacobian matrix whose element in the ith row and jth column is \( \frac{{\partial f_{i} }}{{\partial x_{j} }} \).

Substituting (8) into (7) and choosing the matrix A (via the gains \( l_{i} \)) such that \( PA + A^{T} P + P\frac{\partial F}{\partial x} + \left( {\frac{\partial F}{\partial x}} \right)^{T} P + I < 0 \), we have

$$ \dot{V}_{e} \, \le \,E^{T} \left[ {PA + A^{T} P + P\frac{\partial F}{\partial x} + \left( {\frac{\partial F}{\partial x}} \right)^{T} P + I} \right]E < 0. $$
(9)

4 Controller Design and Stability Analysis

In this section, an adaptive neural control scheme is developed for flexible-joint robotic manipulators (2) based on the observer (4) and dynamic surface control technique. The whole control process includes the following steps.

Step 1:

Define the tracking error \( s_{1} = y - y_{r} \); differentiating \( s_{1} \) yields

$$ \dot{s}_{1} = \dot{y} - \dot{y}_{r} = \hat{x}_{2} + e_{2} - \dot{y}_{r} . $$
(10)

Consider the Lyapunov function candidate \( V_{1} = \frac{1}{2}s_{1}^{2} \), whose time derivative is

$$ \dot{V}_{1} = \dot{s}_{1} s_{1} = s_{1} (\hat{x}_{2} - \dot{y}_{r} ) + s_{1} e_{2} . $$
(11)

Applying Young’s inequality \( 2{\text{ab}}\, \le \,a^{2} + b^{2} \), we can obtain the following inequality

$$ s_{1} e_{2} \le \frac{1}{2}s_{1}^{2} + \frac{1}{2}e_{2}^{2} \le \frac{1}{2}s_{1}^{2} + \frac{1}{2}E^{T} E. $$
(12)

Regarding \( \hat{x}_{2} \) as a virtual control input, we choose the intermediate control \( \alpha_{1} \) as follows:

$$ \alpha_{1} = - (c_{1} + 0.5)s_{1} + \dot{y}_{r} $$
(13)

where \( c_{1} \) is a positive design parameter.

To avoid the problem of “explosion of complexity” in the backstepping design of [6], we introduce a state variable \( z_{2} \) and let \( \alpha_{1} \) pass through a first-order filter with time constant \( \tau_{2} > 0 \) as follows:

$$ \tau_{2} \dot{z}_{2} + z_{2} = \alpha_{1} ,\;z_{2} (0) = \alpha_{1} (0). $$
(14)
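A minimal discrete-time sketch of the filter (14), with the Euler step size assumed for illustration:

```python
# One Euler step of the first-order filter tau * z_dot + z = alpha,
# initialized at z(0) = alpha(0) as in (14).
def filter_step(z, alpha, tau, dt=1e-3):
    """Advance z by dt along tau * z_dot = alpha - z."""
    return z + dt * (alpha - z) / tau
```

Applied repeatedly with a constant input, the filter output converges to that input at a rate set by the time constant.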

Define the output error of the first-order filter as

$$ \chi_{2} = z_{2} - \alpha_{1} $$
(15)

From (11) to (15), we can obtain

$$ \dot{V}_{1} \le s_{1} s_{2} + s_{1} \chi_{2} - c_{1} s_{1}^{2} + \frac{1}{2}E^{T} E $$
(16)

where \( s_{2} = \hat{x}_{2} - z_{2} \).

Step 2:

The time derivative of \( s_{2} \) is

$$ \dot{s}_{2} = \dot{\hat{x}}_{2} - \dot{z}_{2} = \hat{x}_{3} + l_{2} \left( {x_{1} - \hat{x}_{1} } \right) - \dot{z}_{2} + \hat{f}_{2} $$
(17)

where the uncertainty \( \hat{f}_{2} \) is approximated by the following neural network

$$ \hat{f}_{2} = W^{*T} \varphi (Z) + \varepsilon_{2} $$
(18)

where \( Z = \left[ {\hat{x}_{1} ,\hat{x}_{3} } \right]^{T} \in \,R^{2} \) is the input vector, \( W^{*} = \left[ {w_{1} , \ldots ,w_{L} } \right]^{T} \in \,R^{L} \) is the ideal weight vector with node number \( L > 1 \), and \( \varepsilon_{2} \) is the bounded approximation error satisfying \( \left| {\varepsilon_{2} } \right|\, \le \,\upvarepsilon_{N2}^{*} \) with \( \upvarepsilon_{N2}^{*} \) being a positive constant. The basis vector is \( \varphi (Z) = \left[ {\varphi_{1} (Z), \ldots ,\varphi_{L} (Z)} \right]^{T} \), where \( \varphi_{j} (Z) \) is commonly chosen as the Gaussian function, which is in the following form

$$ \varphi_{j} (Z) = \exp \left( { - \frac{{\left\| {Z - c_{j} } \right\|^{2} }}{{b_{j}^{2} }}} \right),\quad j = 1,2, \ldots ,L $$
(19)

where \( c_{j} = \left[ {c_{j1} ,c_{j2} , \ldots ,c_{jn} } \right]^{T} \) (with \( n > 1 \) the NN input dimension) and \( b_{j} \) are the center of the receptive field and the width of the Gaussian function, respectively.
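The basis vector (19) can be sketched as follows; the centers evenly spaced over \( [-3, 3] \) and width \( b_{j} = 0.5 \) follow the simulation section, while sharing the same center grid across both input dimensions is an assumption.

```python
import numpy as np

# Center grid and width from the simulation section (assumed for both inputs).
centers = np.linspace(-3, 3, 7)   # 7 nodes: [-3, -2, -1, 0, 1, 2, 3]
b = 0.5

def phi(Z):
    """phi_j(Z) = exp(-||Z - c_j||^2 / b^2), j = 1..L, for Z in R^2."""
    Z = np.asarray(Z, dtype=float)
    C = np.stack([centers, centers])           # column j is center c_j in R^2
    d2 = np.sum((Z[:, None] - C) ** 2, axis=0)  # squared distance to each center
    return np.exp(-d2 / b ** 2)
```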

Construct the following Lyapunov function:

$$ V_{2} = \frac{1}{2}s_{2}^{2} + \frac{1}{2}\chi_{2}^{2} + \frac{1}{{2\gamma_{2} }}\tilde{W}^{T} \tilde{W} + \frac{1}{{2\eta_{2} }}\tilde{\varepsilon }_{N2}^{2} $$
(20)

where \( \gamma_{2} \) and \( \eta_{2} \) are positive parameters, \( \tilde{W} = W^{*} - \hat{W} \) and \( \tilde{\varepsilon }_{N2} =\upvarepsilon_{N2}^{*} - \hat{\varepsilon }_{N2} \), \( \hat{W} \) and \( \hat{\varepsilon }_{N2} \) are the estimations of \( W^{*} \) and \( \upvarepsilon_{N2}^{*} \), respectively.

In view of (14) and (15), we can obtain

$$ \dot{\chi }_{2} = \dot{z}_{2} - \dot{\alpha }_{1} = - \frac{{\chi_{2} }}{{\tau_{2} }} + B_{2} \left( {s_{1} ,s_{2} ,\chi_{2} ,y_{r} ,\dot{y}_{r} ,\ddot{y}_{r} } \right) $$
(21)

where the continuous function \( B_{2} \) satisfies \( \left| {B_{2} } \right| \le M_{2} \) with \( M_{2} \) being a positive constant.

Set the virtual input \( \alpha_{2} \) as

$$ \alpha_{2} = - l_{2} e_{1} - \hat{W}^{T} \varphi (Z) + \dot{z}_{2} - c_{2} s_{2} - \hat{\varepsilon }_{N2} \tanh \left( {\frac{{s_{2} }}{{\delta_{2} }}} \right) $$
(22)

where \( c_{2} \) and \( \delta_{2} \) are positive parameters.

Similarly, introduce a new state variable \( z_{3} \) and let \( \alpha_{2} \) pass through a first-order filter with time constant \( \tau_{3} > 0 \) as follows:

$$ \tau_{3} \dot{z}_{3} + z_{3} = \alpha_{2}, \quad z_{3}(0)=\alpha_{2}(0). $$
(23)

Define the output error of the first-order filter as

$$ \chi_{3} = z_{3} - \alpha_{2} . $$
(24)

In view of (17)–(24), the time derivative of \( V_{2} \) satisfies

$$ \begin{aligned} & \dot{V}_{2} \le s_{2} \left( {s_{3} + \chi_{3} } \right) - c_{2} s_{2}^{2} - \frac{{\chi_{2}^{2} }}{{\tau_{2} }} + \frac{{\sigma_{2} }}{{\gamma_{2} }}\tilde{W}^{T} \hat{W} + \chi_{2} B_{2} + \varepsilon_{N2}^{*} \left[ {\left| {s_{2} } \right| - s_{2} \tanh \left( {\frac{{s_{2} }}{{\delta_{2} }}} \right)} \right] \\ & \quad + \varepsilon_{N2}^{*} s_{2} \tanh \left( {\frac{{s_{2} }}{{\delta_{2} }}} \right) - \hat{\varepsilon }_{N2} s_{2} \tanh \frac{{s_{2} }}{{\delta_{2} }} - \frac{1}{{\eta_{2} }}\tilde{\varepsilon }_{N2} \dot{\hat{\varepsilon }}_{N2} \\ \end{aligned} $$
(25)

where \( s_{3} = \hat{x}_{3} - z_{3} \).

The adaptive laws of \( \hat{W} \) and \( \hat{\varepsilon }_{N2} \) are given as

$$ \left\{ {\begin{array}{*{20}l} {\dot{\hat{W}}\, = \,r_{2} s_{2} \varphi (Z) - \sigma_{2} \hat{W}} \hfill \\ {\dot{\hat{\varepsilon }}_{N2} \, = \,\eta_{2} s_{2} \tanh \left( {\frac{{s_{2} }}{{\delta_{2} }}} \right)} \hfill \\ \end{array} } \right. $$
(26)

where \( r_{2} \), \( \sigma_{2} , \) and \( \delta_{2} \) are positive parameters.
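An Euler-discretized sketch of the adaptive laws (26); \( r_{2} = 0.2 \), \( \sigma_{2} = 0.01 \), and \( \delta_{2} = 0.3 \) follow the simulation section, while \( \eta_{2} \) and the step size are assumed here.

```python
import numpy as np

# r2, sigma2, delta2 from the simulation section; eta2 is an assumed value.
r2, eta2, sigma2, delta2 = 0.2, 1.0, 0.01, 0.3

def adapt_step(W_hat, eps_hat, s2, phi_Z, dt=1e-3):
    """One Euler step of the weight and error-bound estimators in (26)."""
    W_hat = W_hat + dt * (r2 * s2 * phi_Z - sigma2 * W_hat)
    eps_hat = eps_hat + dt * eta2 * s2 * np.tanh(s2 / delta2)
    return W_hat, eps_hat
```

Note the \( -\sigma_{2}\hat{W} \) term acts as sigma-modification leakage: with zero tracking error the weights decay slowly toward zero rather than drifting.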

Then, use the facts

$$ 0 \le \left| x \right| - x\,\tanh \left( {\frac{x}{\delta }} \right) \le 0.2785\delta $$
(27)

and

$$ \frac{{\sigma_{2} }}{{\gamma_{2} }}\tilde{W}^{T} \hat{W} = \frac{{\sigma_{2} }}{{\gamma_{2} }}\tilde{W}^{T} \left( {W^{*} - \tilde{W}} \right) \le - \frac{{\sigma_{2} \left\| {\tilde{W}} \right\|^{2} }}{{2\gamma_{2} }} + \frac{{\sigma_{2} \left\| {W^{*} } \right\|^{2} }}{{2\gamma_{2} }} $$
(28)
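The inequality (27) can be spot-checked numerically on a grid, e.g. for \( \delta = 0.3 \):

```python
import numpy as np

# Spot-check of (27): 0 <= |x| - x*tanh(x/delta) <= 0.2785*delta on a grid.
delta = 0.3
x = np.linspace(-5, 5, 2001)
gap = np.abs(x) - x * np.tanh(x / delta)
print(gap.min() >= 0 and gap.max() <= 0.2785 * delta)
```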

Substituting (26)–(28) into (25), we have

$$ \dot{V}_{2} \le s_{2} \left( {s_{3} + \chi_{3} } \right) - c_{2} s_{2}^{2} - \frac{{\chi_{2}^{2} }}{{\tau_{2} }} + \chi_{2} B_{2} + 0.2785\delta_{2} \varepsilon_{N2}^{*} + \frac{{\sigma_{2} \left\| {W^{*} } \right\|^{2} }}{{2\gamma_{2} }}. $$
(29)

Step 3:

Following the similar procedures of the Step 2, consider the following Lyapunov function candidate

$$ V_{3} = \frac{1}{2}s_{3}^{2} \, + \,\frac{1}{2}\chi_{3}^{2} $$
(30)

and set the virtual input \( \alpha_{3} \) as

$$ \alpha_{3} = - l_{3} e_{1} \, + \,\dot{z}_{3} - c_{3} s_{3} $$
(31)

where \( c_{3} \) is a positive parameter.

Then, we can obtain

$$ \dot{V}_{3} \le s_{3} (s_{4} + \chi_{4} ) - c_{3} s_{3}^{2} \, - \,\frac{{\chi_{3}^{2} }}{{\tau_{3} }}\, + \,\chi_{3} B_{3} $$
(32)

where \( s_{4} = \hat{x}_{4} - z_{4} \), \( \chi_{4} = z_{4} - \alpha_{3} \) with \( z_{4} \) obtained by passing \( \alpha_{3} \) through a first-order filter with time constant \( \tau_{4} > 0 \), and the function \( B_{3} \) satisfies \( \left| {B_{3} } \right| \le M_{3} \) with \( M_{3} \) being a positive constant.

Step 4:

Similarly, the following neural network is utilized to approximate the nonlinear uncertainty \( \hat{f}_{4} \)

$$ \hat{f}_{4} = \theta^{*T} \varphi (Z) + \varepsilon_{4} $$
(33)

where \( \theta^{*} = \left[ {\theta_{1} , \ldots ,\theta_{L} } \right]^{T} \in R^{L} \) is the ideal weight vector and \( \varepsilon_{4} \) is the bounded approximation error satisfying \( \left| {\varepsilon_{4} } \right| \le\upvarepsilon_{N4}^{*} \) with \( \upvarepsilon_{N4}^{*} \) being a positive constant.

Construct the following Lyapunov function candidate

$$ V_{4} = \frac{1}{2}s_{4}^{2} + \frac{1}{2}\chi_{4}^{2} + \frac{1}{{2\gamma_{4} }}\tilde{\theta }^{T} \tilde{\theta } + \frac{1}{{2\eta_{4} }}\tilde{\varepsilon }_{N4}^{2} $$
(34)

where \( \gamma_{4} \) and \( \eta_{4} \) are positive parameters, \( \tilde{\theta } = \theta^{*} - \hat{\theta } \) and \( \tilde{\varepsilon }_{N4} =\upvarepsilon_{N4}^{*} - \hat{\varepsilon }_{N4} \), \( \hat{\theta } \) and \( \hat{\varepsilon }_{N4} \) are the estimations of \( \theta^{*} \) and \( \upvarepsilon_{N4}^{*} \), respectively.

The control law u is designed as

$$ u = J\left[ { - l_{4} e_{1} - \hat{\theta }^{T} \varphi \left( Z \right) + \dot{z}_{4} - c_{4} s_{4} - \hat{\varepsilon }_{N4} \tanh \left( {\frac{{s_{4} }}{{\delta_{4} }}} \right)} \right] $$
(35)

where \( c_{4} \) and \( \delta_{4} \) are positive constants, and the adaptive laws of \( \hat{\theta } \) and \( \hat{\varepsilon }_{N4} \) are

$$ \left\{ {\begin{array}{*{20}l} {\dot{\hat{\theta }} = r_{4} s_{4} \varphi (Z) - \sigma_{4} \hat{\theta }} \hfill \\ {\dot{\hat{\varepsilon }}_{N4} = \eta_{4} s_{4} \tanh \left( {\frac{{s_{4} }}{{\delta_{4} }}} \right)} \hfill \\ \end{array} } \right. $$
(36)

where \( r_{4} \) and \( \sigma_{4} \) are positive constants.
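The actual control computation in Step 4 can be sketched as follows, assuming the sign convention under which the \( -c_{4}s_{4} \) term appears in \( \dot{s}_{4} \); \( J = 1 \), \( c_{4} = 9 \), and \( l_{4} = -32 \) follow the simulation section, while \( \delta_{4} = 0.3 \) is an assumed value.

```python
import numpy as np

# J, c4, l4 follow the simulation section; delta4 is an assumed value.
J, c4, delta4, l4 = 1.0, 9.0, 0.3, -32.0

def control(e1, theta_hat, phi_Z, z4_dot, s4, eps_hat):
    """u = J * [-l4*e1 - theta_hat^T phi(Z) + z4_dot - c4*s4 - eps_hat*tanh(s4/delta4)]."""
    return J * (-l4 * e1 - theta_hat @ phi_Z + z4_dot
                - c4 * s4 - eps_hat * np.tanh(s4 / delta4))
```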

Differentiating (34) yields

$$ \dot{V}_{4} \le - c_{4} s_{4}^{2} - \frac{{\chi_{4}^{2} }}{{\tau_{4} }} + \chi_{4} B_{4} + 0.2785\delta_{4} \varepsilon_{N4}^{*} + \frac{{\sigma_{4} \left\| {\theta^{*} } \right\|^{2} }}{{2\gamma_{4} }} $$
(37)

where the time constant \( \tau_{4} > 0 \) and the function \( B_{4} \) satisfies \( \left| {B_{4} } \right| \le M_{4} \) with \( M_{4} \) being a positive constant.

Finally, construct the following Lyapunov function candidate

$$ {\text{V}} = V_{e} + V_{1} + V_{2} + V_{3} + V_{4} $$
(38)

According to (9), (16), (29), (32), and (37), differentiating (38) with respect to time gives

$$ \dot{V} \le - \mathop \sum \limits_{k = 1}^{4} \alpha_{0} s_{k}^{2} + 6\eta + 0.2785\left( {\delta_{2} \varepsilon_{N2}^{*} + \delta_{4} \varepsilon_{N4}^{*} } \right) + \frac{{\sigma_{2} \left\| {W^{*} } \right\|^{2} }}{{2\gamma_{2} }} + \frac{{\sigma_{4} \left\| {\theta^{*} } \right\|^{2} }}{{2\gamma_{4} }} $$
(39)

where \( \alpha_{0} \) and \( \eta \) are positive constants.

Consequently, all the signals of the closed-loop system are semiglobally uniformly bounded, and the output tracking error \( s_{1} \) converges to a small neighborhood around zero on the condition that the parameters are chosen properly.

5 Simulation Results

In this section, simulation examples are carried out to demonstrate the feasibility and effectiveness of the proposed scheme. The sinusoidal wave \( y_{r} = \sin t \) is adopted as the desired reference signal.

The initial conditions of the system are \( \left[ {x_{1} ,x_{2} ,x_{3} ,x_{4} } \right]^{T} = \left[ {0,0,0,0} \right]^{T} \), \( \left[ {\hat{x}_{1} ,\hat{x}_{2} ,\hat{x}_{3} ,\hat{x}_{4} } \right]^{T} = \left[ {0,0.5,0.5,0.5} \right]^{T} \), \( \hat{W}(0) = \hat{\theta }(0) = \left[ {1,1,1,1,1,1,1} \right]^{T} , \) and \( \hat{\varepsilon }_{N2} (0) = \hat{\varepsilon }_{N4} (0) = 0.01 \). The observer gain is \( L = \left[ {12,40, - 6, - 32} \right]^{T} \). The control parameters are chosen as \( c_{1} = 4.5 \), \( c_{2} = 3.5 \), \( c_{3} = 2.5 \), and \( c_{4} = 9 \). The constants of the adaptive laws are \( r_{2} = 0.2 \), \( r_{4} = 0.2 \), \( \delta_{2} = 0.3 \), and \( \sigma_{2} = \sigma_{4} = 0.01 \). The time constants are \( \tau_{1} = 0.01 \), \( \tau_{2} = 3 \), and \( \tau_{3} = 2 \). The RBF NN parameters are \( c_{ij} = \left[ { - 3, - 2, - 1,0,1,2,3} \right] \) and \( b_{j} = 0.5 \). The system parameters are \( MgL = 5 \), \( I = 1 \), \( J = 1 \), and \( K = 40 \). The simulation results are shown in Figs. 2, 3, 4, 5, 6, and 7.

Fig. 2 State \( x_{1} \) and its estimation \( \hat{x}_{1} \)

Fig. 3 State \( x_{2} \) and its estimation \( \hat{x}_{2} \)

Fig. 4 State \( x_{3} \) and its estimation \( \hat{x}_{3} \)

Fig. 5 State \( x_{4} \) and its estimation \( \hat{x}_{4} \)

Fig. 6 Position tracking performance

Fig. 7 Position tracking error

Figures 2, 3, 4, and 5 show the estimation performance for the states \( x_{i} \,(i = 1,2,3,4) \) and the corresponding errors \( e_{i} \), respectively. All the estimated states track the actual states with small observation errors. The tracking trajectory and tracking error are shown in Figs. 6 and 7, respectively. Obviously, the tracking error converges to a small neighborhood around zero. From the aforementioned figures, it is clear that the proposed adaptive neural output feedback control scheme achieves good tracking performance for the system (2) with immeasurable states.

6 Conclusion

In this paper, an adaptive neural output feedback control scheme has been developed for single-link flexible-joint robotic manipulator systems with immeasurable states. The immeasurable states are estimated by constructing a Luenberger state observer, and the model uncertainties are approximated by RBF neural networks. Based on the dynamic surface control technique, the adaptive controller is designed to avoid the problem of “explosion of complexity.” Simulation results illustrate that the proposed method achieves good tracking performance.