Abstract
This chapter focuses on the motion control of robotic rigid manipulators. In other words, this chapter does not treat the motion control of mobile robots, flexible manipulators, or manipulators with elastic joints. The main challenges in the motion control problem of rigid manipulators are the complexity of their dynamics and uncertainty. The former results from nonlinearity and coupling in the robot manipulators. The latter is twofold: structured and unstructured. Structured uncertainty means imprecise knowledge of the dynamic parameters and will be touched upon in this chapter, whereas unstructured uncertainty results from joint and link flexibility, actuator dynamics, friction, sensor noise, and unknown environment dynamics, and will be treated in other chapters.
In this chapter, we begin with an introduction to the motion control of robot manipulators from a fundamental viewpoint, followed by a survey and brief review of the relevant advanced materials. Specifically, the dynamic model and useful properties of robot manipulators are recalled in Sect. 8.1. The joint space and operational space control approaches, two different viewpoints on the control of robot manipulators, are compared in Sect. 8.2. Independent joint control and proportional–integral–derivative (PID) control, widely adopted in the field of industrial robots, are presented in Sects. 8.3 and 8.4, respectively. Tracking control, based on feedback linearization, is introduced in Sect. 8.5. The computed-torque control and its variants are described in Sect. 8.6. Adaptive control is introduced in Sect. 8.7 to solve the problem of structured uncertainty, whereas the optimality and robustness issues are covered in Sect. 8.8. To compute suitable set-point signals as input values for these motion controllers, Sect. 8.9 introduces reference trajectory planning concepts. Since most controllers of robot manipulators are implemented on microprocessors, the issues of digital implementation are discussed in Sect. 8.10. Finally, learning control, one popular approach to intelligent control, is illustrated in Sect. 8.11.
Keywords
- Motion Controller
- Computed Torque Controller
- Operational Space Control
- Learning Control
- Robotic Manipulators
1 Introduction to Motion Control
The dynamical model of robot manipulators will be recalled in this section. Furthermore, important properties of this dynamical model, which are useful in controller design, will then be addressed. Finally, different control tasks of the robot manipulators will be defined.
1.1 Dynamical Model
For motion control, the dynamical model of rigid robot manipulators is conveniently described by Lagrange dynamics. Let the robot manipulator have n links and let the n-vector q of joint variables be $q = [q_1, \ldots, q_n]^T$. The dynamic model of the robot manipulator is then described by Lagrange's equation [8.1, 8.2, 8.3, 8.4, 8.5, 8.6]

$H(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + g(q) = \tau$ ,   (8.1)

where $H(q)$ is the $n \times n$ inertia matrix, $C(q,\dot{q})\,\dot{q}$ is the n-vector of Coriolis and centrifugal forces, $g(q)$ is the n-vector of gravity forces, and τ is the n-vector of joint control inputs to be designed. Friction and disturbance inputs have been neglected here.
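As a concrete illustration of this model, the following minimal sketch simulates a one-link arm of the form above; all numerical parameters (mass, length) are hypothetical. With exact gravity compensation as the input torque and zero initial velocity, the arm should remain at rest:

```python
import numpy as np

# One-link specialization of H(q) qdd + C(q, qd) qd + g(q) = tau
# (hypothetical parameters; the inertia and gravity terms are scalars).
m, l, grav = 1.0, 0.5, 9.81       # assumed link mass, length, gravity

def H(q):      return m * l**2                   # inertia
def C(q, qd):  return 0.0                        # no Coriolis term for n = 1
def g(q):      return m * grav * l * np.sin(q)   # gravity torque

def step(q, qd, tau, dt=1e-3):
    """One forward-Euler integration step of the rigid-body dynamics."""
    qdd = (tau - C(q, qd) * qd - g(q)) / H(q)
    return q + dt * qd, qd + dt * qdd

# With tau = g(q) (exact gravity compensation) and zero initial velocity,
# the arm stays at rest at any configuration.
q, qd = 0.7, 0.0
for _ in range(1000):
    q, qd = step(q, qd, tau=g(q))
print(abs(q - 0.7) < 1e-9)  # True: stays at rest under exact compensation
```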
Remark 8.1
Other contributions to the dynamic description of the robot manipulators may include the dynamics of the actuators, joint and link flexibility, friction, noise, and disturbances. Without loss of generality, the case of the rigid robot manipulators is stressed here.
The control schemes that we will introduce in this chapter are based on some important properties of the dynamical model of robot manipulators. Before giving a detailed introduction to these different schemes, let us first give a list of those properties.
Property 8.1
The inertia matrix $H(q)$ is a symmetric positive-definite matrix, which can be bounded as

$\lambda_h I \le H(q) \le \lambda_H I$ ,   (8.2)

where $\lambda_h$ and $\lambda_H$ denote positive constants.
Property 8.2
The matrix $N(q,\dot{q}) = \dot{H}(q) - 2C(q,\dot{q})$ is skew-symmetric for a particular choice of $C(q,\dot{q})$ (which is always possible), i. e.,

$z^T N(q,\dot{q})\, z = 0$   (8.3)

for any n-vector z.
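Property 8.2 can be checked numerically. The sketch below uses a point-mass two-link planar arm (an assumed model, with hypothetical parameters) and verifies that $\dot{H} - 2C$ is skew-symmetric at a random state when C is built from the Christoffel symbols:

```python
import numpy as np

# Assumed point-mass two-link planar arm (hypothetical parameters).
m1, m2, l1, l2 = 1.0, 1.0, 0.4, 0.3

def H(q):
    c2 = np.cos(q[1])
    h12 = m2 * l2**2 + m2 * l1 * l2 * c2
    return np.array([[(m1 + m2)*l1**2 + m2*l2**2 + 2*m2*l1*l2*c2, h12],
                     [h12, m2 * l2**2]])

def C(q, qd):
    # Coriolis/centrifugal matrix from the Christoffel symbols.
    h = m2 * l1 * l2 * np.sin(q[1])
    return np.array([[-h * qd[1], -h * (qd[0] + qd[1])],
                     [ h * qd[0], 0.0]])

def Hdot(q, qd, eps=1e-7):
    # Numerical time derivative of H along the motion: dH/dt = (dH/dq) qd.
    return (H(q + eps * qd) - H(q)) / eps

rng = np.random.default_rng(0)
q, qd = rng.standard_normal(2), rng.standard_normal(2)
N = Hdot(q, qd) - 2 * C(q, qd)
print(np.allclose(N, -N.T, atol=1e-5))  # True: N is skew-symmetric
```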
Property 8.3
The $n \times n$ matrix $C(q,\dot{q})$ satisfies

$\|C(q,\dot{q})\| \le c_0 \|\dot{q}\|$   (8.4)

for some bounded constant $c_0$.
Property 8.4
The gravity force/torque vector satisfies

$\|g(q)\| \le g_0$   (8.5)

for some bounded constant $g_0$.
Property 8.5
The equation of motion is linear in the inertia parameters. In other words, there are a constant l-vector a and an $n \times l$ regressor matrix $Y(q,\dot{q},\ddot{q})$ such that

$H(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + g(q) = Y(q,\dot{q},\ddot{q})\,a$ .   (8.6)

The vector a is composed of the link masses, moments of inertia, and link lengths in various combinations.
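For a one-link arm, Property 8.5 can be written down explicitly; the following sketch (with hypothetical parameters) verifies that the model torque equals the regressor–parameter product $Y(q,\ddot{q})\,a$:

```python
import numpy as np

# One-link arm (assumed point mass at the link end, hypothetical values):
# H*qdd + g(q) = Y(q, qdd) @ a, with Y = [qdd, sin(q)] and
# a = [m*l^2, m*grav*l].
m, l, grav = 1.2, 0.7, 9.81
a = np.array([m * l**2, m * grav * l])

def torque(q, qdd):              # left-hand side of the equation of motion
    return m * l**2 * qdd + m * grav * l * np.sin(q)

def Y(q, qdd):                   # regressor: the model is linear in a
    return np.array([qdd, np.sin(q)])

q, qdd = 0.3, -2.0
print(np.isclose(Y(q, qdd) @ a, torque(q, qdd)))  # True
```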
Property 8.6
The mapping $\tau \rightarrow \dot{q}$ is passive; i. e., there exists $\beta \ge 0$ such that

$\int_0^t \dot{q}^T(\varsigma)\,\tau(\varsigma)\, d\varsigma \ge -\beta , \quad \forall\, t < \infty$ .   (8.7)
Remarks 8.1
-
Properties 8.3 and 8.4 are very useful since they allow us to establish upper bounds on the nonlinear terms in the dynamical model. As we will see further, several control schemes require knowledge of such upper bounds.
-
In Property 8.5, the parameter vector a comprises several variables in various combinations. The dimensionality of the parameter space is not unique, and the search over the parameter space is an important problem.
-
In this section, we assume that the robot manipulator is fully actuated, which indicates that there is an independent control input for each degree of freedom (DOF). In contrast, robot manipulators with joint or link flexibility are no longer fully actuated, and their control problems are in general more difficult.
1.2 Control Tasks
It is instructive for comparative purposes to classify control objectives into the following two classes:
-
Trajectory tracking is aimed at following a time-varying joint reference trajectory specified within the manipulator workspace. In general, this desired trajectory is assumed to comply with the actuators’ capacity. In other words, the joint velocity and acceleration associated with the desired trajectory should not violate, respectively, the velocity and acceleration limit of the manipulator. In practice, the capacity of actuators is set by torque limits, which result in bounds on the acceleration that are complex and state dependent.
-
Regulation is sometimes also called point-to-point control. A fixed configuration in the joint space is specified; the objective is to bring the joint variables to, and keep them at, the desired position in spite of torque disturbances and independently of the initial conditions. The transient behavior and overshoot are, in general, not guaranteed.
The selection of the controller may depend on the type of task to be performed. For example, tasks requiring the manipulator only to move from one position to another, without significant precision requirements during the motion between the two points, can be solved by regulators, whereas tasks such as welding and painting require tracking controllers.
Remarks 8.2
-
The regulation problem may be seen as a special case of the tracking problem (for which the desired joint velocity and acceleration are zero).
-
The task specification above is given in the joint space and results in joint space control, which is the main content of this chapter. Sometimes, the task specification of the robot manipulators in terms of the desired trajectory of the end-effector (e. g., control with eye-in-hand) is carried out in the task space and gives rise to the operational space control, which will be introduced in Sect. 8.2.
1.3 Summary
In this section, we introduced the dynamical model of the robot manipulators and important properties of this dynamical model. Finally, we defined different control tasks of the robot manipulators.
2 Joint Space Versus Operational Space Control
In a motion control problem, the manipulator moves to a position to pick up an object, transports that object to another location, and deposits it. Such a task is an integral part of any higher-level manipulation tasks such as painting or spot-welding.
Tasks are usually specified in the task space in terms of a desired trajectory of the end-effector, while control actions are performed in the joint space to achieve the desired goals. This fact naturally leads to two kinds of general control methods, namely joint space control and operational space control (task space control) schemes.
2.1 Joint Space Control
The main goal of the joint space control is to design a feedback controller such that the joint coordinates track the desired motion as closely as possible. To this end, consider the equations of motion (8.1) of an n-DOF manipulator expressed in the joint space [8.2, 8.4]. In this case, the control of robot manipulators is naturally achieved in the joint space, since the control inputs are the joint torques. Nevertheless, the user specifies a motion in terms of end-effector coordinates, and thus it is necessary to understand the following strategy.
Figure 8.1 shows the basic outline of the joint space control methods. Firstly, the desired motion, which is described in terms of end-effector coordinates, is converted to a corresponding joint trajectory using the inverse kinematics of the manipulator. Then the feedback controller determines the joint torque necessary to move the manipulator along the desired trajectory specified in joint coordinates starting from measurements of the current joint states [8.1, 8.4, 8.7, 8.8].
Since it is always assumed that the desired task is given in terms of the time sequence of the joint motion, joint space control schemes are quite adequate in situations where manipulator tasks can be accurately preplanned and little or no online trajectory adjustments are necessary [8.1, 8.4, 8.7, 8.9]. Typically, inverse kinematics is performed for some intermediate task points, and the joint trajectory is interpolated using the intermediate joint solutions. Although the command trajectory consists of straight-line motions in end-effector coordinates between interpolation points, the resulting joint motion consists of curvilinear segments that match the desired end-effector trajectory at the interpolation points.
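The interpolation step described above can be sketched as follows: a cubic polynomial in joint space joining two IK solutions, here with zero boundary velocities (the zero-velocity boundary condition is an assumed choice of profile):

```python
import numpy as np

def cubic_join(q0, q1, T):
    """Coefficients of q(t) = a0 + a1 t + a2 t^2 + a3 t^3 on [0, T]
    with q(0) = q0, q(T) = q1, and zero velocity at both ends."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    a0, a1 = q0, np.zeros_like(q0)
    a2 = 3 * (q1 - q0) / T**2
    a3 = -2 * (q1 - q0) / T**3
    return a0, a1, a2, a3

def eval_cubic(coeffs, t):
    a0, a1, a2, a3 = coeffs
    q  = a0 + a1*t + a2*t**2 + a3*t**3
    qd = a1 + 2*a2*t + 3*a3*t**2
    return q, qd

# Hypothetical IK solutions for two task points, joined over 2 s.
coeffs = cubic_join([0.0, 0.5], [1.0, -0.5], T=2.0)
q_mid, _ = eval_cubic(coeffs, 1.0)
print(q_mid)  # midpoint of each joint: q0 + (q1 - q0)/2
```

The cubic matches the end-effector trajectory at the interpolation points while keeping the joint motion smooth in between, as described in the text.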
In fact, joint space control includes simple proportional–derivative (PD) control, PID control, inverse dynamics control, Lyapunov-based control, and passivity-based control, as explained in the following sections.
2.2 Operational Space Control
In more complicated and less certain environments, end-effector motion may be subject to online modifications in order to accommodate unexpected events or to respond to sensor inputs. There are a variety of tasks in manufacturing where these types of control problems arise. In particular, such control is essential when controlling the interaction between the manipulator and the environment is of concern.
Since the desired task is often specified in the operational space and requires precise control of the end-effector motion, joint space control schemes are not suitable in these situations. This motivated a different approach, which develops control schemes based directly on the dynamics expressed in the operational space [8.10, 8.11].
Let us suppose that the Jacobian matrix, denoted by $J(q)$, transforms the joint velocity ($\dot{q}$) to the task velocity ($\dot{x}$) according to

$\dot{x} = J(q)\,\dot{q}$ .   (8.8)

Furthermore, assume that it is invertible. Then, the operational space dynamics is expressed as follows

$\Lambda(q)\,\ddot{x} + \Gamma(q,\dot{q})\,\dot{x} + \eta(q) = f_c$ ,   (8.9)

where $f_c$ denotes the command forces in the operational space; the pseudo-inertia matrix is defined by

$\Lambda(q) = J^{-T}(q)\,H(q)\,J^{-1}(q)$ ,   (8.10)

and $\Gamma(q,\dot{q})$ and $\eta(q)$ are given by

$\Gamma(q,\dot{q}) = J^{-T}(q)\,C(q,\dot{q})\,J^{-1}(q) - \Lambda(q)\,\dot{J}(q)\,J^{-1}(q)$ ,
$\eta(q) = J^{-T}(q)\,g(q)$ .   (8.11)
The task space variables are usually reconstructed from the joint space variables, via the kinematic mappings. In fact, it is quite rare to have sensors to directly measure end-effector positions and velocities. Also, it is worth remarking that an analytical Jacobian is utilized since the control schemes operate directly on task space quantities, i. e., the end-effector position and orientation.
The main goal of the operational space control is to design a feedback controller that allows execution of an end-effector motion that tracks the desired end-effector motion as closely as possible. To this end, consider the equations of motion (8.9) of the manipulator expressed in the operational space. For this case, Fig. 8.2 shows a schematic diagram of the operational space control methods. There are several advantages to such an approach because operational space controllers employ a feedback loop that directly minimizes task errors. Inverse kinematics need not be calculated explicitly, since the control algorithm embeds the velocity-level forward kinematics (8.8), as shown in the figure. Now, motion between points can be a straight-line segment in the task space.
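The pseudo-inertia of (8.10) can be computed directly from H and J. The sketch below uses an assumed point-mass two-link planar arm (hypothetical parameters) and checks that, away from singularities, $\Lambda$ is symmetric positive definite, like H itself:

```python
import numpy as np

# Assumed point-mass two-link planar arm (hypothetical parameters).
m1, m2, l1, l2 = 1.0, 1.0, 0.4, 0.3

def H(q):
    c2 = np.cos(q[1])
    h12 = m2 * l2**2 + m2 * l1 * l2 * c2
    return np.array([[(m1 + m2)*l1**2 + m2*l2**2 + 2*m2*l1*l2*c2, h12],
                     [h12, m2 * l2**2]])

def J(q):
    # End-effector Jacobian of the planar arm.
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1*s1 - l2*s12, -l2*s12],
                     [ l1*c1 + l2*c12,  l2*c12]])

q = np.array([0.4, 0.8])                 # nonsingular configuration
Ji = np.linalg.inv(J(q))
Lam = Ji.T @ H(q) @ Ji                   # pseudo-inertia J^{-T} H J^{-1}
print(np.allclose(Lam, Lam.T), np.all(np.linalg.eigvalsh(Lam) > 0))
```

Since $\Lambda$ is a congruence transformation of the positive-definite H, it stays positive definite wherever J is invertible; near a singularity its entries blow up, which is the phenomenon noted later for operational space inverse dynamics.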
3 Independent-Joint Control
By independent-joint control (i. e., decentralized control), we mean that the control input of each joint depends only on the measurement of the corresponding joint displacement and velocity. Due to its simple structure, this kind of control scheme offers many advantages. For example, by using independent-joint control, communication among different joints is unnecessary. Moreover, since the computational load of the controllers may be reduced, only low-cost hardware is required in actual implementations. Finally, independent-joint control has the feature of scalability, since the controllers on all the joints have the same formulation. In this section, two kinds of independent-joint control design will be introduced: one based on the dynamical model of each joint (i. e., the single-joint model) and the other based on the analysis of the overall dynamical model (i. e., the multijoint model) of the robot manipulator.
3.1 Controller Design Based on the Single-Joint Model
The simplest independent-joint control strategy is to control each joint axis as a single-input single-output (SISO) system. Coupling effects among joints due to the varying configuration during motion are treated as disturbance inputs. Without loss of generality, the actuator is taken as a rotary electric direct-current (DC) motor. Hence, the block diagram of the control scheme of joint i can be represented in the domain of the complex variable s as shown in Fig. 8.3. In this scheme, θ is the angular variable of the motor, J is the effective inertia viewed from the motor side, $R_a$ is the armature resistance (auto-inductance being neglected), and $k_t$ and $k_v$ are, respectively, the torque and motor constants. Furthermore, $G_v$ denotes the voltage gain of the power amplifier, so that the reference input is the input voltage $V_c$ of the amplifier instead of the armature voltage $V_a$. It has also been assumed that $F_m \ll k_v k_t / R_a$, i. e., the mechanical (viscous) friction coefficient has been neglected with respect to the electrical coefficient. Now the input–output transfer function of the motor can be written as

$M(s) = \frac{k_m}{s\,(1 + sT_m)}$ ,

where $k_m = G_v / k_v$ and $T_m = R_a J / (k_v k_t)$ are, respectively, the voltage-to-velocity gain and the time constant of the motor.
To guide selection of the control structure, start by noticing that an effective rejection of the disturbance d on the output θ is ensured by
-
1.
A large value of the amplifier gain before the point of intervention of the disturbance
-
2.
The presence of an integral action in the controller, so as to cancel the effect of the gravitational component on the output at steady state (i. e., constant θ).
In this case, as shown in Fig. 8.4, the types of control action with position and velocity feedback are characterized by [8.4]

$C_P(s) = K_P , \quad C_V(s) = K_V\,\frac{1 + sT_V}{s}$ ,

where $C_P(s)$ and $C_V(s)$ correspond to the position and velocity control actions, respectively. It is worth noting that the inner control action $C_V(s)$ is in the form of proportional–integral (PI) control to yield zero error in the steady state under a constant disturbance d. Furthermore, $k_{TP}$ and $k_{TV}$ are both transducer constants, and the amplifier gain $G_v$ has been embedded in the gain of the inner controller. From the scheme of Fig. 8.4, the transfer function of the forward path is

$P(s) = \frac{k_m K_P K_V (1 + sT_V)}{s^2 (1 + sT_m)}$ ,

while that of the return path is

$H(s) = k_{TP}\left(1 + s\,\frac{k_{TV}}{K_P k_{TP}}\right)$ .

The zero of the controller at $s = -1/T_V$ can be chosen so as to cancel the effect of the real pole of the motor at $s = -1/T_m$. Then, by setting $T_V = T_m$, the poles of the closed-loop system move on the root locus as a function of the loop gain $k_m K_V k_{TV}$. By increasing the position feedback gain $K_P$, it is possible to confine the closed-loop poles to a region of the complex plane with large absolute values of the real part. Then, the actual location can be established by a suitable choice of $K_V$.
The closed-loop input–output transfer function is

$\frac{\theta(s)}{\theta_r(s)} = \frac{\dfrac{1}{k_{TP}}}{1 + \dfrac{s\,k_{TV}}{K_P k_{TP}} + \dfrac{s^2}{k_m K_P K_V k_{TP}}}$ ,

which can be compared with the typical transfer function of a second-order system

$W(s) = \frac{k}{1 + \dfrac{2\zeta s}{\omega_n} + \dfrac{s^2}{\omega_n^2}}$ .

It can be recognized that, with a suitable choice of the gains, it is possible to obtain any value of natural frequency $\omega_n$ and damping ratio ζ. Hence, if $\omega_n$ and ζ are given as design specifications, the following relations can be found

$K_V k_{TV} = \frac{2\zeta\omega_n}{k_m} , \qquad K_P k_{TP} K_V = \frac{\omega_n^2}{k_m}$ .
For given transducer constants $k_{TP}$ and $k_{TV}$, $K_V$ and $K_P$ will be chosen to satisfy the two equations above, respectively. On the other hand, the closed-loop disturbance/output function is

$\frac{\theta(s)}{D(s)} = -\frac{\dfrac{s R_a}{k_t k_v}}{(1 + sT_m)\left(s^2 + s\,k_m K_V k_{TV} + k_m K_P K_V k_{TP}\right)}$ ,

which shows that the disturbance rejection factor $X_R(s) = K_P k_{TP}\,C_V(s)$ is fixed, provided that $K_V$ and $K_P$ have been chosen via the approach above. Concerning the disturbance dynamics, the zero at the origin introduced by the PI controller, the real pole at $s = -1/T_m$, and the pair of complex poles with real part $-\zeta\omega_n$ should be kept in mind. In this case, an estimate $T_R$ of the output recovery time needed by the control system to recover from the effect of a disturbance on the joint position can be evaluated by analyzing the modes of the transfer function above. Such an estimate can reasonably be expressed as $T_R = \max\{T_m, 1/(\zeta\omega_n)\}$.
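The gain selection above can be sketched numerically. The following snippet assumes the standard relations $k_m K_V k_{TV} = 2\zeta\omega_n$ and $k_m K_P K_V k_{TP} = \omega_n^2$; all motor and transducer constants are hypothetical:

```python
# Solve the two design relations for KV and then KP
# (all numeric values below are assumed for illustration).
k_m, kTP, kTV = 2.0, 1.0, 0.5    # motor gain, transducer constants
wn, zeta = 20.0, 1.0             # design specs: natural freq., damping

KV = 2 * zeta * wn / (k_m * kTV)   # from the velocity-loop relation
KP = wn**2 / (k_m * KV * kTP)      # from the position-loop relation

print(KV, KP)  # 40.0 5.0
```

Picking a larger $\omega_n$ increases both gains, which is consistent with the root-locus argument: higher $K_P$ pushes the closed-loop poles further left in the complex plane.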
3.2 Controller Design Based on the Multijoint Model
In recent years, independent-joint control schemes based on the complete dynamic model of the robot manipulators (i. e., a multijoint model) have been proposed. For example, following the approach of computed-torque-like control, [8.12] dealt with the regulation task for horizontal motions, and [8.13] and [8.14] handled the tracking task for arbitrary smooth trajectories. Since the overall dynamic model is considered, the effects of coupling among joints are handled. These schemes will be introduced in detail in Sect. 8.6.
3.3 Summary
In this section, we have presented two independent-joint control schemes: one is based on the single-joint model and the other is based on the multijoint model. The former focuses on the dynamics of a single joint and regards the interaction among joints as a disturbance. This control scheme is simple but may not be suitable for high-speed tracking. Hence, we introduce the latter, which considers the overall dynamical model of robot manipulators such that the interaction among joints can be handled.
3.3.1 Further Reading
There are different types of feedback applied in the independent-joint control based on the single-joint model (such as pure position feedback or position, velocity, and acceleration feedback). A complete discussion is given in [8.4]. When the joint control servos are required to track reference trajectories with high speeds and accelerations, the tracking capabilities of the above schemes are inevitably degraded. A possible remedy is to adopt decentralized feedforward compensation to reduce tracking errors [8.4, 8.5].
4 PID Control
Traditionally, control design in robot manipulators can be understood as the simple fact of tuning of a PD or PID compensator at the level of each motor driving the manipulator joints [8.1]. Fundamentally, a PD controller is a position and velocity feedback that has good closed-loop properties when applied to a double integrator system.
The PID control has a long history since Ziegler and Nichols’ PID tuning rules were published in 1942 [8.15]. Actually, the strong point of PID control lies in its simplicity and clear physical meaning. Simple control is preferable to complex control, at least in industry, if the performance enhancement obtained by using complex control is not significant enough. The physical meanings of PID control [8.16] are as follows:
-
P-control means the present effort, driving the present state toward the desired state
-
I-control means the accumulated effort, exploiting the information of past states (the integrated error)
-
D-control means the predictive effort, reflecting the trend of future states.
4.1 PD Control for Regulation
A simple design method for manipulator control is to utilize a linear control scheme based on the linearization of the system about an operating point. An example of this method is PD control with a gravity compensation scheme [8.17, 8.18]. Gravity compensation acts as a bias correction, compensating only for the amount of forces that create overshooting and an asymmetric transient behavior. Formally, it has the following form

$\tau = K_P\,\tilde{q} - K_V\,\dot{q} + g(q)$ ,

where $\tilde{q} = q_d - q$, and $K_P$ and $K_V$ are positive-definite gain matrices. This controller is very useful for set-point regulation, i. e., $q_d = \mathrm{const}$ [8.18, 8.7]. When this controller is applied to (8.1), the closed-loop equation becomes

$H(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + K_V\,\dot{q} - K_P\,\tilde{q} = 0$ ,   (8.20)

and the equilibrium point is $(\tilde{q}, \dot{q}) = (0, 0)$. Now, the stability achieved by PD control with gravity compensation can be analyzed according to the closed-loop dynamics (8.20). Consider the positive-definite function

$V = \frac{1}{2}\,\dot{q}^T H(q)\,\dot{q} + \frac{1}{2}\,\tilde{q}^T K_P\,\tilde{q}$ .

Then, the derivative of this function becomes negative semidefinite by using Property 8.2 in Sect. 8.1, i. e.,

$\dot{V} = -\dot{q}^T K_V\,\dot{q} \le -\lambda_{\min}(K_V)\,\|\dot{q}\|^2$ ,   (8.21)

where $\lambda_{\min}(K_V)$ means the smallest eigenvalue of $K_V$. By invoking the Lyapunov stability theory and LaSalle's theorem [8.1], it can be shown that the regulation error will converge asymptotically to zero, while its higher-order derivatives remain bounded. This controller requires knowledge of the gravity components (structure and parameters), though it is simple.
Now, consider simple PD control without gravity compensation

$\tau = K_P\,\tilde{q} - K_V\,\dot{q}$ ;

then the closed-loop dynamic equation becomes

$H(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + g(q) + K_V\,\dot{q} - K_P\,\tilde{q} = 0$ .   (8.23)

Consider the positive-definite function

$V = \frac{1}{2}\,\dot{q}^T H(q)\,\dot{q} + \frac{1}{2}\,\tilde{q}^T K_P\,\tilde{q} + U(q) + U_0$ ,

where $U(q)$ denotes the potential energy, with the relation $\partial U(q)/\partial q = g(q)$, and $U_0$ is a suitable constant. Taking the time derivative of V along the closed-loop dynamics (8.23) gives the same result (8.21) as in the previous case with gravity compensation. Hence the control system is stable in the sense of Lyapunov, but one cannot conclude from LaSalle's theorem [8.1] that the regulation error converges to zero. Actually, the system precision (the size of the regulation error vector) will depend on the size of the gain matrix $K_P$ in the following form

$\|\tilde{q}\| \le \frac{g_0}{\lambda_{\min}(K_P)}$ ,

where $g_0$ is the constant in Property 8.4 in Sect. 8.1. Hence, the regulation error can be reduced arbitrarily by increasing $K_P$; nevertheless, measurement noise and other unmodeled dynamics, such as actuator friction, will limit the use of high gains in practice.
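The steady-state error bound above can be observed in simulation. The following sketch applies PD control without gravity compensation to a one-link arm (hypothetical parameters) and checks that the residual error stays below $g_0/\lambda_{\min}(K_P)$:

```python
import numpy as np

# One-link arm (assumed parameters); PD regulation without gravity
# compensation leaves a steady-state error bounded by g0 / KP.
m, l, grav = 1.0, 0.5, 9.81
g0 = m * grav * l                  # gravity bound of Property 8.4 here
KP, KV = 50.0, 10.0
q_des = 1.0

q, qd, dt = 0.0, 0.0, 1e-3
for _ in range(20000):             # simulate 20 s; enough to settle
    tau = KP * (q_des - q) - KV * qd
    qdd = (tau - m * grav * l * np.sin(q)) / (m * l**2)
    q, qd = q + dt * qd, qd + dt * qdd

err = abs(q_des - q)
print(err <= g0 / KP)  # True: regulation error within the bound
```

Doubling KP roughly halves the residual error, matching the statement that the error can be reduced arbitrarily by increasing the proportional gain.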
4.2 PID Control for Regulation
An integral action may be added to the previous PD control in order to deal with the gravity forces, which to some extent can be considered as a constant disturbance (from the local point of view). The PID regulation controller can be written in the following general form

$\tau = K_P\,\tilde{q} - K_V\,\dot{q} + K_I \int_0^t h(\tilde{q}(\varsigma))\, d\varsigma$ ,

where $K_I$ is a positive-definite gain matrix, and:
-
If $h(\tilde{q}) = \tilde{q}$, we have (linear) PID control.
-
If a filtered-velocity term is added to the integrand, we have PI²D control.
-
If $h(\tilde{q})$ is a suitable nonlinear function of $\tilde{q}$, we have PD + nonlinear integral control.
Global asymptotic stability (GAS) by PID control was proved in [8.12] for robotic motion control systems including external disturbances, such as Coulomb friction. Also, Tomei proved the GAS of PD control in [8.19] by using an adaptation for the gravity term. On the other hand, Ortega et al. showed in [8.20] that PI²D control could yield semiglobal asymptotic stability (SGAS) in the presence of gravity and bounded external disturbances. Also, Angeli proved in [8.21] that PD control could achieve input–output-to-state stability (IOSS) for robotic systems. Also, Ramirez et al. proved SGAS (under some conditions on the PID gains) in [8.22]. Also, Kelly proved in [8.23] that PD plus nonlinear integral control could achieve GAS under gravity.
Actually, a large integral action in PID control can cause instability of the motion control system. In order to avoid this, the integral gain should be bounded by an upper limit of the following form [8.1]

$\lambda_{\max}(K_I) < \frac{\lambda_{\min}(K_P)\,\lambda_{\min}(K_V)}{\lambda_H}$ ,

where $\lambda_H$ is the constant in Property 8.1 in Sect. 8.1, and $\lambda_{\min}(\cdot)$ and $\lambda_{\max}(\cdot)$ denote the smallest and largest eigenvalues of a matrix, respectively. This relation implicitly gives guidelines for gain selection. Also, PID control has generated a great variety of PID-plus-something controllers, e. g., PID plus friction compensator, PID plus gravity compensator, and PID plus disturbance observer.
4.3 PID Gain Tuning
The PID control can be utilized for trajectory tracking as well as set-point regulation. True tracking control will be treated after Sect. 8.5. In this section, a simple but useful PID gain-tuning method is introduced for practical use. The general PID controller can be written in the following general form

$\tau = K_V\,\dot{e} + K_P\,e + K_I \int_0^t e(\varsigma)\, d\varsigma$ ,   (8.25)

or, in another form,

$\tau = K\left(\dot{e} + 2\Lambda e + \Lambda^2 \int_0^t e(\varsigma)\, d\varsigma\right)$ ,

with $K_V = K$, $K_P = 2K\Lambda$, and $K_I = K\Lambda^2$, where $e = q_d - q$ denotes the tracking error.
In a fundamental stability analysis of tracking control systems, Qu and Dorsey proved in [8.24] that PD control could satisfy uniform ultimate boundedness (UUB). Also, Berghuis and Nijmeijer proposed output-feedback PD control, which satisfies semiglobal uniform ultimate boundedness (SGUUB), in [8.25] under gravity and a bounded disturbance. Recently, Choi et al. suggested inverse optimal PID control [8.26], assuring extended-disturbance input-to-state stability (ISS). Actually, if a PID controller (8.25) is repeatedly applied to the same set point or desired trajectory, then the maximum error $\max_{0 \le t \le t_f}\|e(t)\|$, where $t_f$ denotes the final execution time of a given task, scales with a power of the gain-scaling parameter γ: proportionally to $\gamma^2$ for small k and proportionally to γ for large k. This relation can be utilized to tune the gains of a PID controller and is referred to as the compound tuning rule [8.16]. The compound tuning rule implicitly includes the following simple tuning rules (γ-tuning method):
-
Square tuning: $\max_t \|e(t)\| \propto \gamma^2$, for a small k
-
Linear tuning: $\max_t \|e(t)\| \propto \gamma$, for a large k.
For example, suppose we select positive constant diagonal gain matrices scaled by the single parameter γ, while keeping the gain ratios fixed. For small k, the maximum error will be reduced by $\gamma^2$ according to the square tuning rule if we reduce the value γ. For large k, the maximum error will be reduced proportionally to γ according to the linear tuning rule. This means that we can tune the PID controller using only one parameter γ when the other gain parameters are fixed [8.16]. Although these rules are very useful in tuning the control performance, they can be utilized only in repetitive experiments for the same set point or desired trajectory, because the tuning rules consist of proportional relations.
4.4 Automatic Tuning
For simplicity, define the composite error to be

$s = \dot{e} + 2\Lambda e + \Lambda^2 \int_0^t e(\varsigma)\, d\varsigma$ ,

where Λ is a positive-definite diagonal matrix.
Now a simple auto-tuning PID control is suggested by choosing one tuning-gain matrix $K = \mathrm{diag}\{K_i\}$, as shown in the following control form

$\tau = K s$ ,

and its automatic tuning rule as follows

$\dot{K}_i = \alpha_i\, s_i^2$ ,   (8.27)

where $\alpha_i$ implies an update-gain parameter for the i-th control joint.
For practical use of the PID control, a target performance region, denoted by Ω, is specified in advance,

$\Omega = \{\, s : \|s\| \le \bar{s} \,\}$ ,   (8.28)

in order to maintain the composite error within the region Ω. Moreover, since the auto-tuning rule (8.27) has a decentralized form, we suggest the following decentralized criterion for auto-tuning

$|s_i| \ge \frac{\bar{s}}{\sqrt{n}}$ ,

where n is the number of joint coordinates. As soon as the composite error leaves the target region (8.28) and the criterion above is met, the auto-tuning rule is implemented to assist the achievement of the target performance. On the contrary, if the composite error stays in the non-tuning region, namely $|s_i| < \bar{s}/\sqrt{n}$, then the auto-tuning process stops. In this case, we expect the gain updated by the auto-tuning rule (8.27) to be larger than the gain matrix able to achieve the target performance Ω. As a matter of fact, the auto-tuning rule plays a nonlinear damping role in the auto-tuning region.
4.4.1 Matlab Example (Multimedia)
A simple automatic tuning example for a one-link manipulator control system is shown in the multimedia source to aid the reader's understanding.
4.4.2 Further Reading
The PID-type controllers were designed to solve the regulation control problem. They have the advantage of requiring knowledge of neither the model structure nor the model parameters. Also, the stability achieved by PID-type controllers was presented in this section. A range of books and papers [8.1, 8.15, 8.16, 8.22, 8.27, 8.28] are available to the robotics audience, detailing the individual tuning methods used in PID control and their concrete proofs.
5 Tracking Control
While independent PID controls are adequate in most set-point regulation problems, there are many tasks that require effective trajectory tracking capabilities such as plasma-welding, laser-cutting or high-speed operations in the presence of obstacles. In this case, employing local schemes requires moving slowly through a number of intermediate set points, thus considerably delaying the completion of the task. Therefore, to improve the trajectory tracking performance, controllers should take account of the manipulator dynamic model via a computed-torque-like technique.
The tracking control problem in the joint or task space consists of following a given time-varying desired trajectory $q_d(t)$ or $x_d(t)$ and its successive derivatives $\dot{q}_d(t)$ or $\dot{x}_d(t)$ and $\ddot{q}_d(t)$ or $\ddot{x}_d(t)$, which describe the desired velocity and acceleration, respectively. To obtain successful performance, significant effort has been devoted to the development of model-based control strategies [8.1, 8.2, 8.7]. Among the control approaches reported in the literature, typical methods include inverse dynamics control, the feedback linearization technique, and the passivity-based control method.
5.1 Inverse Dynamics Control
Though inverse dynamics control has a theoretical background, such as the theory of feedback linearization discussed later, its starting point is mechanical engineering intuition based on cancelling the nonlinear terms and decoupling the dynamics of each link. Inverse dynamics control in joint space has the form

$\tau = H(q)\,v + C(q,\dot{q})\,\dot{q} + g(q)$ ,

which, applied to (8.1), yields a set of n decoupled linear systems, $\ddot{q} = v$, where v is an auxiliary control input to be designed. Typical choices for v are

$v = \ddot{q}_d + K_V(\dot{q}_d - \dot{q}) + K_P(q_d - q)$ ,   (8.30)

or, with an integral component,

$v = \ddot{q}_d + K_V(\dot{q}_d - \dot{q}) + K_P(q_d - q) + K_I \int_0^t (q_d - q)\, d\varsigma$ ,   (8.31)

leading, with $e = q_d - q$, to the error dynamics equation

$\ddot{e} + K_V\,\dot{e} + K_P\,e = 0$

for the auxiliary control input (8.30), and

$\dddot{e} + K_V\,\ddot{e} + K_P\,\dot{e} + K_I\,e = 0$

if the auxiliary control input (8.31) is used. Both error dynamics are exponentially stable by a suitable choice of the gain matrices $K_V$ and $K_P$ (and $K_I$).
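A minimal numerical sketch of joint-space inverse dynamics control on a one-link arm (hypothetical parameters and desired trajectory) shows the tracking error decaying as predicted by the linear error dynamics:

```python
import numpy as np

# One-link arm (assumed parameters): tau = H*v + g(q) with
# v = ddq_d + KV*(dq_d - dq) + KP*(q_d - q), giving the stable
# linear error dynamics e'' + KV e' + KP e = 0.
m, l, grav = 1.0, 0.5, 9.81
H = m * l**2
def g(q): return m * grav * l * np.sin(q)

KP, KV = 100.0, 20.0                 # critically damped: KV = 2*sqrt(KP)
q_des   = lambda t: np.sin(t)        # hypothetical desired trajectory
dq_des  = lambda t: np.cos(t)
ddq_des = lambda t: -np.sin(t)

q, dq, dt = 0.5, 0.0, 1e-3           # start off the desired trajectory
for i in range(10000):               # simulate 10 s with forward Euler
    t = i * dt
    e, de = q_des(t) - q, dq_des(t) - dq
    v = ddq_des(t) + KV * de + KP * e    # auxiliary input
    tau = H * v + g(q)                   # model-based cancellation
    ddq = (tau - g(q)) / H               # plant dynamics
    q, dq = q + dt * dq, dq + dt * ddq

print(abs(q_des(10.0) - q) < 1e-2)  # True: tracking error has decayed
```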
Alternatively, inverse dynamics control can be described in the operational space. Consider the operational space dynamics (8.9). If the following inverse dynamics control is used in the operational space,

$f_c = \Lambda(q)\left[\ddot{x}_d + K_V(\dot{x}_d - \dot{x}) + K_P(x_d - x)\right] + \Gamma(q,\dot{q})\,\dot{x} + \eta(q)$ ,

where $\tilde{x} = x_d - x$, the resulting error dynamics is

$\ddot{\tilde{x}} + K_V\,\dot{\tilde{x}} + K_P\,\tilde{x} = 0$ ,

and it is also exponentially stable. One apparent advantage of using this controller is that $K_P$ and $K_V$ can be selected with a clear physical meaning in the operational space. However, as can be seen in (8.10), $\Lambda(q)$ becomes very large when the robot approaches singular configurations [8.8]. This means that large forces in some directions are needed to move the arm.
5.2 Feedback Linearization
This approach generalizes the concept of inverse dynamics of rigid manipulators. The basic idea of feedback linearization is to construct a transformation as a so-called inner-loop control, which exactly linearizes the nonlinear system after a suitable state space change of coordinates. One can then design a second stage or outer-loop control in the new coordinates to satisfy the traditional control design specifications such as tracking, disturbance rejection, etc. [8.29, 8.5]. The full power of the feedback linearization scheme for manipulator control becomes apparent if one includes in the dynamic description of the manipulator the transmission dynamics, such as the elasticity resulting from shaft windup, gear elasticity, etc. [8.5].
In recent years, an impressive volume of literature has emerged in the area of differential-geometric methods for nonlinear systems. Most of the results in this area are intended to give abstract coordinate-free descriptions of various geometric properties of nonlinear systems and as such are difficult for the non-mathematician to follow. It is our intention here to give only the basic idea of the feedback linearization scheme and to introduce a simple version of this technique that finds an immediate application to the manipulator control problem. The reader is referred to [8.30] for a comprehensive treatment of the feedback linearization technique using differential-geometric methods.
Let us now develop a simple approach to the determination of linear state-space representations of the manipulator dynamics (8.1) by considering a general sort of output
where is a general predetermined function of the joint coordinate and is a general predetermined time function. The control objective will be to select the joint torque inputs τ in order to make the output go to zero.
The choice of and is based on the control purpose. For example, if and , the desired joint space trajectory we would like the manipulator to follow, then is the joint space tracking error. Forcing to zero in this case would cause the joint variables to track their desired values , resulting in a manipulator trajectory-following problem. As another example, could represent the operational space tracking error . Then, controlling to zero would result in trajectory following directly in operational space where the desired motion is usually specified.
To determine a linear state-variable model for manipulator controller design, let us simply differentiate the output twice to obtain
where we defined a transformation matrix of the form
Given the output , it is straightforward to compute the transformation associated with . In the special case where represents the operational space velocity error, then denotes the Jacobian matrix .
According to (8.1),
with the nonlinear terms represented by
Then (8.35) yields
Define the control input function
Now we may define a state by and write the manipulator dynamics as
This is a linear state-space system of the form
driven by the control input u. Due to the special form of A and B, this system is called the Brunovsky canonical form and it is always controllable from .
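The controllability claim can be checked numerically: for the Brunovsky form with state x = (e, ė), the controllability matrix [B, AB, …] has full rank 2n for any number of joints n (n = 3 below is arbitrary):

```python
import numpy as np

# Sketch: the Brunovsky canonical form xdot = A x + B u obtained after
# feedback linearization, with x = (e, edot). A quick numerical check
# confirms that the pair (A, B) is controllable: rank [B, AB, ...] = 2n.

n = 3  # number of joints (arbitrary, for illustration)
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [np.zeros((n, n)), np.zeros((n, n))]])
B = np.vstack([np.zeros((n, n)), np.eye(n)])

# Controllability matrix [B, AB, ..., A^(2n-1) B]; rank 2n means controllable.
C = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(2 * n)])
rank = np.linalg.matrix_rank(C)
print(rank)  # 2n = 6
```

Since A² = 0 for this form, [B, AB] alone already spans the state space, which is why controllability holds for any n.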
Since (8.40) defines a linearizing transformation for the manipulator dynamic equation, one may invert this transformation to obtain the joint torque
where denotes the Moore–Penrose inverse of the transformation matrix .
In the special case , and if we select so that (8.41) is stable by the PD feedback , then and the control input torque defined by (8.43) makes the manipulator move in such a way that goes to zero. In this case, the feedback linearizing control and the inverse dynamics control become the same.
5.3 Passivity-Based Control
This method explicitly uses the passivity properties of the Lagrangian system [8.31, 8.32]. In comparison with the inverse dynamics method, passivity-based controllers are expected to have better robustness properties because they do not rely on the exact cancellation of the manipulator nonlinearities. The passivity-based control input is given by
With (8.44), we obtain the following closed-loop system
where . Let us choose a Lyapunov function as follows
Since the above function is positive definite, the closed-loop system has a unique equilibrium at the origin, i. e., . Moreover, V can be bounded by
The time derivative of V gives
where . Since Q is positive definite and quadratic in y, it can also be bounded by
Then, from the bound of the Lyapunov function V, we get
which finally yields
It has been shown that the value of α affects the tracking result dramatically [8.33]. The manipulator tends to vibrate for small values of α. Larger values of α allow better tracking performance and protect sq from being spoiled by the velocity measurement noise when the position error is small. In [8.34], it was suggested that
be used for quadratic optimization.
5.4 Summary
In this section, we have reviewed some of the model-based motion control methods proposed to date. Under some control approaches, the closed-loop system has either asymptotic stability or globally exponential stability. However, such ideal performance cannot be obtained in practical implementation because factors such as sampling rate, measurement noise, disturbances, and unmodeled dynamics will limit the achievable gain and the performance of the control algorithms [8.33, 8.35, 8.36].
6 Computed-Torque Control
Through the years many kinds of robot control schemes have been proposed. Most of these can be considered as special cases of the class of computed-torque control (Fig. 8.5), which is the technique of applying feedback linearization to nonlinear systems in general [8.37, 8.38]. In this section, computed-torque control will first be introduced, followed by its variant, so-called computed-torque-like control.
6.1 Computed-Torque Control
Consider the control input (8.29)
which is also known as computed-torque control; it consists of an inner nonlinear compensation loop and an outer loop with an exogenous control signal v. Substituting this control law into the dynamical model of the robot manipulator, it follows that
It is important to note that this control input converts a complicated nonlinear controller design problem into a simple design problem for a linear system consisting of n decoupled subsystems. One approach to the outer-loop control v is proportional–derivative (PD) feedback, as in (8.30)
in which case the overall control input becomes
and the resulting linear error dynamics are
According to linear system theory, convergence of the tracking error to zero is guaranteed [8.29, 8.39].
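As a minimal sketch of computed-torque control with a PD outer loop, the following simulates a hypothetical 1-DOF pendulum (all model values and gains are illustrative, not taken from the text); since the inner loop cancels the nonlinearity exactly, the tracking error obeys the linear error dynamics and decays:

```python
import numpy as np

# Sketch of computed-torque control tau = H(q) v + g(q) with the outer-loop
# PD law v = qdd_d + KV*edot + KP*e, on a hypothetical 1-DOF pendulum
# (H = m l^2, no Coriolis term, g = m g0 l sin q). Illustrative values only.

m, l, g0 = 1.0, 1.0, 9.81
H = lambda q: m * l**2
grav = lambda q: m * g0 * l * np.sin(q)

KP, KV = 100.0, 20.0          # critically damped error dynamics (poles at -10)
qd = lambda t: np.sin(t)      # desired trajectory
qd_dot = lambda t: np.cos(t)
qd_ddot = lambda t: -np.sin(t)

dt, q, dq = 1e-3, 0.5, 0.0    # start with a 0.5 rad tracking error
for i in range(int(5.0 / dt)):  # 5 s of simulated motion
    t = i * dt
    e, edot = qd(t) - q, qd_dot(t) - dq
    v = qd_ddot(t) + KV * edot + KP * e      # outer-loop PD control
    tau = H(q) * v + grav(q)                 # inner nonlinear compensation
    ddq = (tau - grav(q)) / H(q)             # plant dynamics
    dq += ddq * dt
    q += dq * dt

final_error = abs(qd(5.0) - q)
print(final_error)  # tracking error decays essentially to zero
```

Because the model used in the controller matches the plant exactly here, the closed loop reduces to the decoupled linear error dynamics; with imperfect parameters the cancellation would be inexact, which motivates the computed-torque-like schemes below.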
Remark 8.2
One usually lets KV and KP be diagonal positive-definite gain matrices (i. e., , ) to guarantee the stability of the error system. However, the format of the foregoing control never leads to independent joint control because the outer-loop multiplier and the full nonlinear compensation term in the inner loop scramble all joint signals among different control channels.
6.2 Computed-Torque-Like Control
It is worth noting that the implementation of computed-torque control requires that the parameters of the dynamical model be accurately known and that the control input be computed in real time. In order to avoid these problems, several variations of this control scheme have been proposed, for example, computed-torque-like control. An entire class of computed-torque-like controllers can be obtained by modifying the computed-torque control as
where represents the computed or nominal value and indicates that the theoretically exact feedback linearization cannot be achieved in practice due to the uncertainty in the systems. The overall control scheme is depicted in Fig. 8.6.
6.2.1 Computed-Torque-Like Control with Variable-Structure Compensation
Since there is parametric uncertainty, compensation is required in the outer-loop design to achieve trajectory tracking. The following shows a computed-torque-like control with variable-structure compensation
where the variable-structure compensation is devised as
where , , P is a symmetric positive-definite matrix satisfying
with A being defined as
Q being any appropriate symmetric positive-definite matrix,
where α and β are positive constants such that for all and , respectively, K is the matrix defined as , being a positive constant such that for all , and the function ϕ being defined as
Convergence of the tracking error to zero can be shown using the Lyapunov function
following the stability analysis in [8.40, 8.5].
Remarks 8.3
-
By Property 8.1 in Sect. 8.1, there exist positive constants and such that for all . If we choose
(8.63) it can be shown that
(8.64) which indicates that there is always at least one choice of for some α < 1.
-
Due to the discontinuity in , the chattering phenomenon may occur when the control scheme is applied. It is worth noting that chattering is often undesirable since the high-frequency component in the control can excite unmodeled dynamic effects (such as joint flexibility) [8.29, 8.38, 8.6]. In order to avoid chattering, the variable-structure compensation can be modified to become smooth, i. e.,
(8.65) where ε is a positive constant and is used as the boundary layer. Following this modification, convergence of the tracking errors to a residual set can be ensured, and the size of this residual set can be made smaller by the use of a smaller value of ε.
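A sketch of the boundary-layer smoothing idea (the gain ρ, the layer width ε, and the function names below are hypothetical, for illustration only):

```python
import numpy as np

# Sketch: smoothing a discontinuous variable-structure term to avoid
# chattering. Outside the boundary layer ||y|| > eps the smoothed term
# equals the discontinuous one, rho * y / ||y||; inside, it is interpolated
# linearly. rho and eps are illustrative values, not taken from the text.

def vs_compensation(y, rho):
    """Discontinuous unit-vector compensation rho * y / ||y||."""
    n = np.linalg.norm(y)
    return rho * y / n if n > 0 else np.zeros_like(y)

def vs_compensation_smooth(y, rho, eps):
    """Boundary-layer version: linear inside ||y|| <= eps."""
    n = np.linalg.norm(y)
    return rho * y / n if n > eps else rho * y / eps

rho, eps = 2.0, 0.01
y_far = np.array([0.3, -0.4])    # ||y|| = 0.5, outside the boundary layer
y_near = np.array([1e-4, 0.0])   # deep inside the boundary layer

# Outside the layer both versions agree; inside, the smooth one is small and
# continuous instead of switching with full magnitude rho.
print(vs_compensation(y_far, rho), vs_compensation_smooth(y_far, rho, eps))
print(np.linalg.norm(vs_compensation(y_near, rho)))              # = rho
print(np.linalg.norm(vs_compensation_smooth(y_near, rho, eps)))  # << rho
```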
6.2.2 Computed-Torque-Like Control with Independent-Joint Compensation
Obviously, the previous compensation scheme is centralized, which implies that the online computation load is heavy and high-cost hardware is required for practical implementation. In order to solve this problem, a scheme with independent-joint compensation is introduced below. In this scheme, a computed-torque-like control is designed with estimates as
Then, we use the outer-loop v as
where the positive constants KP and KV are selected to be sufficiently large, and the i-th component of is
In this compensation, , , and are positive constants. Furthermore, following the properties of robot manipulators, we have
for some suitable positive constants β1, β2, and β3, where and
Finally, εi, , is the variable length of the boundary layer, satisfying
It is worth pointing out that the term w in this control scheme is devised as the desired compensation rather than feedback. Furthermore, this scheme is in a form of independent-joint control and hence has the advantages introduced before. The convergence of the tracking error to zero can be shown using the Lyapunov function
whose time derivative along the trajectory of the closed-loop systems follows
with α being some positive constant, if sufficiently large KP and γ are used. The detailed analysis of stability, which requires Properties 8.3 and 8.4, can be found in [8.13].
Remark 8.3
Similar to computed-torque-like control with variable-structure compensation, we can consider the nonzero boundary layer as
Following this modification, tracking errors converge to a residual set. The size of this residual set can be made smaller by the use of a smaller value of ε (i. e., smaller αi).
For the task of point-to-point control, one PD controller with gravity compensation is designed with the estimates
with being the gravity term of the dynamical model of the robot manipulators. Then, we use the outer-loop v as
such that the control input becomes
This scheme is much simpler to implement than the exact computed-torque controller. The convergence of the tracking error to zero can be shown using the Lyapunov function
whose time derivative along the solution trajectories of the closed-loop system is
The detailed analysis of stability is given in [8.12]. It is necessary to note that this result is for the case of regulation, rather than for the case of tracking, since the underlying theoretical basis, which relies on LaSalle's lemma, requires that the system be autonomous (time invariant) [8.38, 8.41, 8.42].
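The regulation behavior of PD control with gravity compensation can be sketched on a hypothetical 1-DOF pendulum (gains and model values below are illustrative):

```python
import numpy as np

# Sketch: PD control with gravity compensation for a set-point (regulation)
# task, tau = KP (q_d - q) - KV qdot + g(q), on a hypothetical 1-DOF
# pendulum. The set point q_d is constant, which is what makes the
# closed-loop system autonomous and LaSalle's lemma applicable.

m, l, g0 = 1.0, 1.0, 9.81
grav = lambda q: m * g0 * l * np.sin(q)

q_d = np.pi / 4           # constant desired joint position
KP, KV = 50.0, 15.0       # illustrative gains

dt, q, dq = 1e-3, 0.0, 0.0
for _ in range(int(10.0 / dt)):
    tau = KP * (q_d - q) - KV * dq + grav(q)   # PD + gravity compensation
    ddq = (tau - grav(q)) / (m * l**2)         # pendulum dynamics
    dq += ddq * dt
    q += dq * dt

print(abs(q_d - q))  # converges to (numerically) zero
```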
Remark 8.4
If we neglect gravity in the dynamical model of the robot manipulators, then the gravity estimation can be omitted here, i. e., , so that the control law becomes
which leads to pure PD control. The gain matrices, KP and KV, can be selected to be diagonal so that this PD control is in a form of independent-joint control developed based on the multijoint dynamic model.
6.3 Summary
In this section, we have presented two control schemes: the computed-torque and computed-torque-like control. The former transforms a multi-input multi-output (MIMO) nonlinear robotic system into a very simple decoupled linear closed-loop system whose control design is a well-established problem. Since the practical implementation of the former control requires preknowledge of all the manipulator parameters and its payload, which may not be realistic, the latter was introduced to relax the foregoing requirement and still to achieve the objective of tracking subject to the system's uncertainty.

6.3.1 Further Reading
The PD control with different feedforward compensation for tracking control is investigated in [8.43]. An adaptive control scheme based on PD control is presented in [8.19].
7 Adaptive Control
An adaptive controller differs from an ordinary controller in that the controller parameters are time varying, and there is a mechanism for adjusting these parameters online based on some signals in the closed-loop system. By the use of such a control scheme, the control goal can be achieved even if there is parametric uncertainty in the plant. In this section, we will introduce several adaptive control schemes to deal with the case of imperfect knowledge of the dynamical parameters of robot manipulators. The control performance of these adaptive control schemes, including adaptive computed-torque control, adaptive inertia-related control, adaptive control based on passivity, and adaptive control with desired compensation, is basically derived from Property 8.5. Finally, the condition of persistent excitation, which is important for parameter convergence, will be addressed.
7.1 Adaptive Computed-Torque Control
The computed-torque control scheme is appealing, since it allows the designer to transform a MIMO highly coupled nonlinear system into a very simple decoupled linear system, whose control design is a well-established problem. However, this method of feedback linearization relies on perfect knowledge of the system parameters, and failure to have this will cause erroneous parametric estimates, resulting in a mismatch term in the closed-loop model of the error system. That term can be interpreted as a nonlinear perturbation acting at the input of the closed-loop system. In order to solve the problem due to parametric uncertainty, we instead consider the inverse dynamics method with parameter estimates of
where , , have the same functional form as H, C, . From Property 8.5 of the dynamics model, we have
where , called the regressor, is a known function matrix and a is the vector that summarizes all the estimated parameters. Substituting this control input τ into the manipulator dynamics gives the following closed-loop error model
where . In order to acquire an appropriate adaptive law, we first assume that the acceleration term is measurable, and that the estimated inertia matrix is never singular. Now, for convenience, the error equation is rewritten as
with ,
The adaptive law is considered as
where is an positive-definite constant matrix, and P is a symmetric positive-definite constant matrix satisfying
with Q being a symmetric positive-definite constant matrix with coherent dimension. In this adaptive law, we made two assumptions:
-
The joint acceleration is measurable, and
-
The bounded range of the unknown parameter is available.
The first assumption is to ensure that the regressor is known a priori, whereas the second assumption is to allow one to keep the estimate nonsingular by restricting the estimated parameter to lie within a range about the true parameter value.
Convergence of the tracking error and maintaining boundedness of all internal signals can actually be guaranteed by Lyapunov stability theory with the Lyapunov function
Detailed stability analysis is given in [8.2].
Remark 8.5
For practical and theoretical reasons, the first assumption above is hardly acceptable. In most cases, it is not easy to obtain an accurate measure of acceleration; the robustness of the above adaptive control scheme with respect to such a disturbance has to be established. Moreover, from a pure theoretical viewpoint, measuring means that not only do we need the whole system state vector, but we also need its derivative.
7.2 Adaptive Inertia-Related Control
Another adaptive control scheme is now introduced. This proposed scheme does not require the measurement of the manipulator’s acceleration nor does it require inversion of the estimated inertia matrix. Hence, the drawbacks of the adaptive computed-torque control scheme are avoided. Let us consider the control input
where the auxiliary signals v and s are defined as and , with being an positive-definite matrix. Following Property 8.5 of the dynamic model, we have
where is an matrix of known time functions. The formulation above is the same type of the parameter separation that was used in the formulation of the adaptive computed-torque control. Note that is independent of the joint acceleration. Similar to the formulation above, we also have
Substituting the control input into the equation of motion, it follows that
Since , the previous result can be rewritten as
where . The adaptive law is considered as
The convergence of the tracking error to zero with boundedness on all internal signals can be shown through Lyapunov stability theory using the following Lyapunov-like function
whose time derivative along the trajectories of the closed-loop systems can be found to be
The detailed stability analysis is given in [8.32].
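The structure of this scheme can be sketched for a hypothetical 1-DOF pendulum with unknown inertia and gravity parameters; note that the regressor uses only position and velocity signals, never the joint acceleration. The gains, adaptation matrix, and true parameter values are illustrative, and the sign conventions follow one common formulation rather than the chapter's (elided) notation:

```python
import numpy as np

# Sketch of an inertia-related (Slotine-Li style) adaptive law for a
# hypothetical 1-DOF pendulum  a1*qddot + a2*sin(q) = tau  with unknown
# parameters a = (a1, a2). Conventions: e = q - qd, s = edot + lam*e,
# qddot_r = qdd_d - lam*edot, tau = Y a_hat - K s, a_hat_dot = -Gamma Y^T s.

a_true = np.array([1.5, 4.0])    # true (a1, a2), unknown to the controller
a_hat = np.array([0.5, 0.5])     # initial estimates (illustrative)
lam, K = 5.0, 10.0               # filter and feedback gains
Gamma = np.diag([2.0, 2.0])      # adaptation gain

qd = lambda t: np.sin(t)         # persistently exciting reference
dt, q, dq = 1e-3, 0.5, 0.0
for i in range(int(20.0 / dt)):
    t = i * dt
    e, edot = q - qd(t), dq - np.cos(t)       # tracking errors
    s = edot + lam * e                         # filtered error
    ddq_r = -np.sin(t) - lam * edot            # reference acceleration
    Y = np.array([ddq_r, np.sin(q)])           # acceleration-free regressor
    tau = Y @ a_hat - K * s                    # control law
    a_hat = a_hat - Gamma @ Y * s * dt         # adaptive law
    ddq = (tau - a_true[1] * np.sin(q)) / a_true[0]   # true plant
    dq += ddq * dt
    q += dq * dt

print(abs(q - qd(20.0)))  # tracking error becomes small
```

Neither the joint acceleration nor the inverse of the estimated inertia appears anywhere in the controller, which is precisely the advantage over the adaptive computed-torque scheme.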
Remark 8.6
-
The restrictions for adaptive computed-torque control formerly seen have been removed here.
-
The term introduces a PD-type linear stabilizing control action to the error system model.
-
The estimated parameters converge to the true parameters provided the reference trajectory satisfies the condition of persistency of excitation,
for all t0, where α1, α2, and t are all positive constants.
7.3 Adaptive Control Based on Passivity
Taking a physics viewpoint of control, we see that the concept of passivity has become popular for the development of adaptive control schemes. Here, we illustrate how the concept of passivity can be used to design a class of adaptive control laws for robot manipulators. First, we define an auxiliary filtered tracking error signal r as
where
and s is the Laplace transform variable. The matrix is chosen such that is a strictly proper, stable transfer function matrix. As in the preceding schemes, the adaptive control strategy has close ties to the ability to separate the known functions from the unknown constant parameters. We use the expression given above to define
where Z is a known regression matrix and φ is a vector of unknown system parameters in the adaptive context. It is important to note that the above can be arranged such that Z and r do not depend on the measurement of the joint acceleration . The adaptive control scheme given here is called the passivity approach because the mapping of is constructed to be a passive mapping. That is, we develop an adaptive law such that
is satisfied for all time and for some positive scalar constant β. For this class of adaptive controllers, the control input is given by
Detailed analysis of stability is given in [8.44].
Remarks 8.4
-
If is selected such that has a relative degree of one, Z and r will not depend on .
-
Many types of control schemes can be generated from the adaptive passive control approach by selecting different transfer function matrices in the definition of r.
-
Note that, by defining such that , we have the control input
with
The adaptive law may be chosen as
to satisfy the condition of passive mapping. This indicates that adaptive inertia-related control can be viewed as a special case of adaptive passive control.
7.4 Adaptive Control with Desired Compensation
In order to implement the adaptive control scheme, one needs to calculate the elements of in real time. However, this procedure may be excessively time consuming since it involves computations with highly nonlinear functions of joint position and velocity. Consequently, the real-time implementation of such a scheme is rather difficult. To overcome this difficulty, the adaptive control with desired compensation was proposed and is discussed here. In other words, the variables q, , and are replaced with the desired ones, namely, qd, , and . Since the desired quantities are known in advance, all their corresponding calculations can be performed offline, which renders real-time implementation more plausible. Let us consider the control input
where the positive constants ka, kp, and kn are sufficiently large, and the auxiliary signal s is defined as . The adaptive law is considered as
It is worth noting that the desired compensation is adopted in both the control and adaptive laws such that the computational load can be drastically reduced. For the sake of this analysis, we note that
where ζ1, ζ2, ζ3, and ζ4 are positive constants. In order to achieve trajectory tracking, it is required that
(i. e., the gains ka, kp, and kv must be sufficiently large). The convergence of the tracking error to zero with boundedness on all internal signals can be proved through application of Lyapunov stability theory with the following Lyapunov-like function
whose time derivative along the trajectories of the closed-loop system can be derived as
where
Detailed stability analysis can be found in [8.45].
7.5 Summary
Since the computed-torque control suffers from parametric uncertainty, a variety of adaptive control schemes have been proposed. Firstly, we have presented an adaptive control scheme based on computed-torque control. Then, in order to overcome the mentioned drawbacks such as the measurability of the joint acceleration and the invertibility of the estimated inertia matrix, we presented an alternative adaptive control scheme that is free of these drawbacks. Recently, to incorporate a physics viewpoint into control, adaptive passivity-based control has become popular, and hence is introduced and discussed here. Finally, to reduce the computational load of the adaptive schemes, we presented an adaptive control with desired compensation.
7.5.1 Further Reading
A computationally very fast scheme dealing with adaptive control of rigid manipulators was presented in [8.46]. The stability analysis was completed by assuming that the joint dynamics are decoupled, i. e., that each joint is considered as an independent second-order linear system. A decentralized high-order adaptive variable-structure control is discussed in [8.47]; the proposed scheme makes both the position and velocity tracking errors of robot manipulators globally converge to zero asymptotically while keeping all signals in the closed-loop system bounded, without knowledge of the manipulator parameters. Other pioneering works in the field can be found, for example, in [8.48, 8.49]; although none of the fundamental dynamic model properties are used, the complete dynamics are taken into account, but the control input is discontinuous and may lead to chattering. Positive definiteness of the inertia matrix is explicitly used in [8.50], although it was assumed that some time-varying quantities remain constant during the adaptation. It is interesting to note that all of these schemes were based on the concept of model reference adaptive control (MRAC) developed in [8.51] for linear systems. Therefore, they are conceptually very different from the truly nonlinear schemes presented in this section.

A passivity-based modified version of the least-squares estimation scheme, which guarantees closed-loop stability, has been proposed in [8.52] and [8.53]. Other schemes can be found in [8.54], where no use is made of the skew-symmetry property, and in [8.55], where the recursive Newton–Euler formulation is used instead of the Lagrange formulation to derive the manipulator dynamics, thus simplifying the computation to facilitate practical implementation.
Even though adaptive control provides a solution to the problem of parametric uncertainty, the robustness of adaptive controllers remains a topic of great interest in the field. Indeed, measurement noise or unmodeled dynamics (e. g., flexibility) may result in unbounded closed-loop signals. In particular, the estimated parameters may diverge; this is a well-known phenomenon in adaptive control and is called parameter drift. Solutions inspired from the adaptive control of linear systems have been studied [8.56, 8.57], where a modified estimation ensures boundedness of the estimates. In [8.58], the controller in [8.32] is modified to enhance robustness.
8 Optimal and Robust Control
Given a nonlinear system, such as a robotic manipulator, one can develop many stabilizing controls [8.29, 8.41]. In other words, the stability of the control system does not determine a unique controller. It is natural that one seeks an optimal controller among the many stable ones. However, the design of an optimal controller is possible only provided that rather exact information on the target system is available, such as an exact system model [8.34, 8.59]. In the presence of discrepancies between the real system and its mathematical model, a designed optimal controller is no longer optimal, and may even turn out to be unstable in the actual system. Generally speaking, the optimal control design framework is not the best one to deal with system uncertainty. To handle system uncertainty from the control design stage, a robust control design framework is necessary [8.60]. One of the main objectives of robust control is to keep the controlled system stable even in the presence of uncertainties in the mathematical model, unmodeled dynamics, and the like.
Let us consider an affine nonlinear system described by a nonlinear time-varying differential equation in the state
where is the control input, and is the disturbance. Without disturbances or unmodeled dynamics, the system simplifies to
In fact, there are many ways of describing a nonlinear system, depending on the objective of control [8.1, 8.16, 8.21, 8.23, 8.34, 8.54].
8.1 Quadratic Optimal Control
Every optimal controller is based on its own cost function [8.61, 8.62]. One can define a cost function as [8.63, 8.64]
such that , , and . Then, we have
The quadratic optimal control for the system (8.104) is found by solving, for a first-order differentiable positive-definite function , the Hamilton–Jacobi–Bellman (HJB) equation [8.34, 8.59], where and . Then the quadratic optimal control is defined by
Note that the HJB equation is a nonlinear second-order partial differential equation in .
Unlike the aforementioned optimal control problem, the so-called inverse quadratic optimal control problem is to find a set of and for which the HJB equation has a solution . Then the inverse quadratic optimal control is defined by (8.105).
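For the special case of linear dynamics, e.g., the per-joint double integrator obtained after feedback linearization, the HJB equation admits a quadratic solution V(x) = xᵀPx and the quadratic optimal control reduces to LQR, which can be computed numerically (the weights Q and R below are illustrative):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Sketch: for linear double-integrator error dynamics (one joint of the
# Brunovsky form), V(x) = x^T P x solves the HJB equation exactly and the
# quadratic optimal control is the LQR law u = -R^-1 B^T P x.
# Q and R are illustrative weights, not taken from the text.

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)   # algebraic Riccati equation
K = np.linalg.inv(R) @ B.T @ P         # optimal state-feedback gain

# The closed-loop matrix A - B K must be Hurwitz (eigenvalues in the LHP).
eigs = np.linalg.eigvals(A - B @ K)
print(K)
print(eigs.real)  # all negative
```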
8.2 Nonlinear Control
When disturbances are not negligible, one can deal with their effect such that
where γ > 0 specifies the L2-gain of the closed-loop system from the disturbance input w to the cost variable z. This is called the L2-gain attenuation requirement [8.63, 8.64, 8.65]. One systematic way to design an optimal and robust control is given by the nonlinear optimal control. Let γ > 0 be given; by solving the following equation,
the control is defined by
The partial differential inequality (8.107) is called the Hamilton–Jacobi–Isaacs (HJI) inequality. One can then define the inverse nonlinear optimal control problem, which finds a set of and such that the L2-gain requirement is achieved for a prescribed L2-gain γ [8.66].

Two things deserve further comment. The first is that the L2-gain requirement is only valid for disturbance signals w whose L2-norm is bounded. The second is that the optimal control is not uniquely defined; hence, one can choose a quadratic optimal control among the many optimal controllers. Precisely speaking, the control (8.108) should be called a suboptimal control, since the desired L2-gain is prescribed a priori. A true optimal control would find the minimal value of γ such that the L2-gain requirement is achieved.
8.3 Passivity-Based Design of Nonlinear Control
There are many methodologies for the design of optimal and/or robust controls. Among these, passivity-based controls can take full advantage of the properties described above [8.31]. They consist of two parts: one part compensates for the reference motion while preserving the passivity of the system, and the other achieves stability, robustness, and/or optimality [8.66, 8.67].
Let us suppose that the dynamic parameters are identified as , , and , whose counterparts are , , and , respectively. Then, passivity-based control generates the following tracking control laws
where is the reference acceleration defined by
where and . Two parameters are involved in generating the reference acceleration. Sometimes the following alternative method can be adopted
This reduces the order of the closed-loop system because the state is sufficient for the system description, while the definition of (8.110) requires the state .
In Fig. 8.7, the closed-loop dynamics under the control is given by
where
If and , , , then . Otherwise, the disturbance is defined as
where , , and . It is of particular interest that the system (8.111) defines a passive mapping between and .
According to the manner in which the auxiliary control input u is specified, passivity-based control can achieve stability, robustness, and/or optimality (Fig. 8.7).
8.4 A Solution to Inverse Nonlinear Control
Let us define the auxiliary control input by the reference-error feedback
where α > 1 is arbitrary. Then, the control provides the inverse nonlinear optimality.
Theorem 8.1 Inverse Nonlinear Optimality [8.66]
Let the reference acceleration generation gain matrices KV and KP satisfy
Then for a given γ > 0, the reference error feedback
satisfies the L2-gain attenuation requirement for
provided that
Given γ, one can set for α > 1. This yields .
When the inertia matrix is identified as a diagonal constant matrix such as , one should set . In addition, one can set . Then this results in a decoupled PID control of the form
for α > 1, which can be rewritten as
This leads to a PID control with the desired acceleration feedforward [8.68] given by
where
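A numerical sketch of such a decoupled PID control with desired-acceleration feedforward (the plant, the constant inertia estimate, the disturbance, and all gains below are hypothetical, for illustration only):

```python
import numpy as np

# Sketch: decoupled PID control with desired-acceleration feedforward,
# tau = H_bar * qdd_d + KV*edot + KP*e + KI*integral(e), applied per joint.
# H_bar, the gains, the plant inertia, and the disturbance are illustrative.

H_bar = 1.2                       # constant (diagonal) inertia estimate
KP, KV, KI = 400.0, 40.0, 200.0   # illustrative PID gains

qd = lambda t: np.sin(t)
qd_dot = lambda t: np.cos(t)
qd_ddot = lambda t: -np.sin(t)

dt, q, dq, ei = 1e-3, 0.3, 0.0, 0.0
for i in range(int(10.0 / dt)):
    t = i * dt
    e, edot = qd(t) - q, qd_dot(t) - dq
    ei += e * dt                                      # integral of the error
    tau = H_bar * qd_ddot(t) + KV * edot + KP * e + KI * ei
    # Hypothetical 1-DOF plant with slightly different inertia (1.0 vs. the
    # estimate 1.2) and a constant disturbance of 0.5; the integral action
    # absorbs the steady offset.
    ddq = (tau - 0.5) / 1.0
    dq += ddq * dt
    q += dq * dt

print(abs(qd(10.0) - q))  # small residual tracking error
```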
9 Trajectory Generation and Planning
This section deals with the problem of reference trajectory generation, that is, the computation of desired position, velocity, acceleration, and/or force/torque signals that are used as input values for the robot motion controllers introduced in Sects. 8.3–8.8.
9.1 Geometric Paths and Trajectories
9.1.1 Path Planning
A path is a geometric representation of a plan to move from a start to a target pose. The task of planning is to find a collision-free path among a collection of static and dynamic obstacles. Path planning can also include the consideration of dynamic constraints such as workspace boundaries, maximum velocities, maximum accelerations, and maximum jerks. We distinguish between online and offline path planning algorithms. Offline planned paths are static and calculated prior to execution. Online methods require algorithms that meet real-time constraints (i. e., algorithms that do not exceed a determinable worst-case computation time) to enable path (re-)calculations and/or adaptations during the robot motions in order to react to and interact with dynamic environments. This means that a robot moves along a path that has not necessarily been computed completely, and which may change during the movement. Details about path planning concepts are described in Chap. 7, in Parts D and E, and specifically in Chap. 47.
9.1.2 Trajectory Planning
A trajectory is more than a path: it also includes velocities, accelerations, and/or jerks along a path ( ). A common method is computing trajectories for a priori specified paths, which fulfill a certain criterion (e. g., minimum execution time). We distinguish between online and offline trajectory planning methods. An offline calculated trajectory cannot be influenced during its execution, while online trajectory planning methods can (re-)calculate and/or adapt the robot's motion behavior during the movement. The reasons for this (re-)calculation and/or adaptation can vary: improvement of accuracy, better utilization of the currently available dynamics, reaction to and interaction with a dynamic environment, or reaction to (sensor) events. Besides the distinction between online and offline methods, we can further distinguish between (1) one-dimensional (1-D) and multi-dimensional trajectories and (2) single-point and multi-point trajectories. Multi-point trajectories typically relate to a path.

9.2 Joint Space and Operational Space Trajectories
Depending on the control state space, trajectory generators provide set points for tracking controllers in joint space or in operational space. In either space, a trajectory can be represented in several different ways: cubic splines, quintic splines, higher-order splines, harmonics (sinusoidal functions), exponential functions, Fourier series, and more.
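As an example of one such representation, the following sketch computes the coefficients of a quintic polynomial segment for a single joint from six boundary conditions (the boundary values and the duration are illustrative):

```python
import numpy as np

# Sketch: a quintic (degree-5) polynomial segment for one joint, with
# prescribed position, velocity, and acceleration at both ends. Solving the
# 6x6 linear system for the coefficients is one standard joint space
# trajectory representation. Boundary values below are illustrative.

def quintic_coeffs(t_f, q0, v0, a0, qf, vf, af):
    """Coefficients c of q(t) = sum_k c_k t^k meeting 6 boundary conditions."""
    M = np.array([
        [1, 0,   0,      0,        0,         0],           # q(0)
        [0, 1,   0,      0,        0,         0],           # qdot(0)
        [0, 0,   2,      0,        0,         0],           # qddot(0)
        [1, t_f, t_f**2, t_f**3,   t_f**4,    t_f**5],      # q(t_f)
        [0, 1,   2*t_f,  3*t_f**2, 4*t_f**3,  5*t_f**4],    # qdot(t_f)
        [0, 0,   2,      6*t_f,    12*t_f**2, 20*t_f**3],   # qddot(t_f)
    ], dtype=float)
    return np.linalg.solve(M, np.array([q0, v0, a0, qf, vf, af], float))

# Rest-to-rest motion from 0 to 1 rad in 2 s.
c = quintic_coeffs(2.0, q0=0.0, v0=0.0, a0=0.0, qf=1.0, vf=0.0, af=0.0)
q_mid = np.polynomial.polynomial.polyval(1.0, c)   # position at the midpoint
print(q_mid)  # 0.5, by symmetry of the rest-to-rest motion
```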
9.2.1 Joint Space Trajectories
Considering the torque control input (8.29) and the PD controller (8.30) or PID controller (8.31), respectively, the task of a trajectory generator in joint space coordinates is to compute the desired joint position qd and its first and second time derivatives. These three signals constitute the reference trajectory and are used as input values for the tracking controller.
During nominal operation, the joint torques required to execute a trajectory should not exceed the joint force/torque limits (8.122).
9.2.2 Operational Space Trajectories
Similarly to (8.29), we can also consider trajectories represented by the desired pose xd and its first and second time derivatives for an operational space controller (8.9) with a PD control law.
With the transformation into joint space using the inverse of (8.8), the operational space trajectory generator must ensure that the limits given in (8.122) are not violated. It is the responsibility of the path planner (Chap. 7) to ensure that all points along the trajectory lie within the robot workspace and that the start and goal poses can be reached in the same joint configuration. It is the responsibility of the trajectory planner to ensure that joint torque and velocity constraints are not violated, even in the presence of kinematic singularities.
9.3 Trajectory Representations
9.3.1 Mathematical Representations
Functions for (8.121) and (8.123) can be represented in several ways that are described here.
9.3.1.1 Polynomial Trajectories
One of the simplest ways to represent a robot trajectory is a polynomial function of degree m for each joint
so that the joint trajectory qd(t) (or xd(t) in operational space, respectively) can be composed. In the simplest case, cubic polynomials are used, which, however, leads to non-steady acceleration signals with infinite jerks. Quintic and higher-order polynomials allow for steady acceleration signals as well as arbitrary position, velocity, and acceleration vectors at the beginning and at the end of the trajectory. To determine the coefficients of (8.124), the execution time ttrgt, at which the target state will be reached, needs to be known. The left part of Fig. 8.8a shows a quintic trajectory for three DOFs. To connect a trajectory segment to preceding and succeeding segments, the following six constraints need to be satisfied for all n joints
A unique closed-form solution can be computed, so that all polynomial coefficients can be determined for one trajectory segment.
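As a minimal sketch of such a quintic segment, consider the rest-to-rest special case (zero boundary velocities and accelerations — an assumption made here for brevity, not required by the general formulation). The six constraints then reduce to a well-known closed form:

```python
# Rest-to-rest quintic: q(t) = q0 + (qT - q0) * (10 s^3 - 15 s^4 + 6 s^5),
# with s = t / t_trgt. It satisfies q(0) = q0, q(t_trgt) = qT and zero
# velocity and acceleration at both segment boundaries.

def quintic(q0: float, qT: float, t_trgt: float):
    """Return position, velocity, and acceleration profiles for one joint."""
    d = qT - q0

    def q(t):
        s = t / t_trgt
        return q0 + d * (10 * s**3 - 15 * s**4 + 6 * s**5)

    def qdot(t):
        s = t / t_trgt
        return d * (30 * s**2 - 60 * s**3 + 30 * s**4) / t_trgt

    def qddot(t):
        s = t / t_trgt
        return d * (60 * s - 180 * s**2 + 120 * s**3) / t_trgt**2

    return q, qdot, qddot
```

For arbitrary boundary states, the six coefficients are instead obtained by solving the 6x6 linear system given by the constraints above.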
9.3.1.2 Piecewise Polynomials
Polynomials of different degrees can be concatenated to represent a trajectory between an initial state and a target state. For instance, the classical double-S velocity profile [8.69], with trapezoidal acceleration and deceleration profiles and a cruise-velocity segment in the middle, consists of seven polynomials of degrees 3-2-3-1-3-2-3 (m in (8.124)). The right part of Fig. 8.8 shows a trajectory represented by piecewise polynomials. To compute time-optimal trajectories under purely kinematic constraints (e.g., maximum velocity, acceleration, and jerk), piecewise polynomials are used, because they always allow one signal to be at its kinematic limit (Fig. 8.8b and [8.70]).
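A sketch of the piecewise idea, using the simpler trapezoidal velocity profile (polynomial degrees 2-1-2) rather than the full jerk-limited double-S, and assuming the distance is long enough that the cruise velocity is actually reached:

```python
def trapezoidal(q0, qT, v_max, a_max):
    """Piecewise profile: constant acceleration, cruise, constant deceleration.
    Simplifying assumptions: qT > q0 and v_max is actually reached."""
    d = qT - q0
    t_a = v_max / a_max               # duration of each ramp
    t_cruise = d / v_max - t_a        # from d = v_max * (t_a + t_cruise)
    T = 2 * t_a + t_cruise            # total duration

    def q(t):
        if t < t_a:                   # acceleration phase (degree 2)
            return q0 + 0.5 * a_max * t**2
        if t < t_a + t_cruise:        # cruise phase (degree 1)
            return q0 + 0.5 * a_max * t_a**2 + v_max * (t - t_a)
        td = T - t                    # deceleration phase (degree 2)
        return qT - 0.5 * a_max * td**2

    return q, T
```

During the ramps the acceleration signal sits at its limit a_max, and during the cruise segment the velocity sits at v_max — the "one signal at its kinematic limit" property mentioned above.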
9.3.1.3 Trigonometric Trajectories
Similarly to (8.124), trigonometric functions can be used to represent harmonic, cycloidal, and elliptic trajectories [8.71, 8.72]. A simple example is a harmonic trajectory for one joint i.
While all derivatives of trigonometric functions are continuous within a segment, they might be discontinuous at the segment boundaries t0 and ttrgt.
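A common harmonic trajectory can be sketched as follows (the specific profile q(t) = q0 + (qT − q0)/2 · (1 − cos(πt/T)) is one standard choice, not necessarily the one used in the original figure); note how the acceleration jumps to a nonzero value at the boundaries:

```python
import math

def harmonic(q0, qT, T):
    """Harmonic trajectory q(t) = q0 + (qT - q0)/2 * (1 - cos(pi*t/T)).
    The velocity vanishes at both ends, but the acceleration is nonzero at
    t = 0 and t = T -- the boundary discontinuity noted in the text."""
    d = qT - q0
    q = lambda t: q0 + 0.5 * d * (1 - math.cos(math.pi * t / T))
    qdot = lambda t: 0.5 * d * math.pi / T * math.sin(math.pi * t / T)
    qddot = lambda t: 0.5 * d * (math.pi / T) ** 2 * math.cos(math.pi * t / T)
    return q, qdot, qddot
```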
9.3.1.4 Other Representations
Exponential trajectories and Fourier series expansions [8.72] are particularly suited to minimizing the natural vibrations of robot mechanisms that are excited by reference trajectories.
9.3.2 Trajectories and Paths
Trajectories and paths are often tightly coupled; this section draws the connection to Chap. 7 and Parts D and E.
9.3.2.1 Trajectories Along Predefined Paths
A path in joint space can be described by a function q(s) with s in [0, 1], where the start configuration of the path is q(0) and the target configuration is q(1). To move a robot along the path, an appropriate function s(t) needs to be computed that does not violate any of the kinematic and dynamic constraints [8.73, 8.74, 8.75]. If a path is given in operational space, x(s) can be mapped to q(s) (Sect. 8.2).
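The time-scaling idea can be sketched with a deliberately crude scheme: a constant path speed ṡ chosen so that a joint velocity limit is respected via the chain rule q̇ = (dq/ds)·ṡ. Real planners such as [8.73, 8.74, 8.75] vary ṡ along the path; the function name and sampling below are illustrative only:

```python
def constant_time_scaling(dq_ds, v_max, n_samples=101):
    """Pick the largest constant sdot with |dq/ds * sdot| <= v_max for all
    sampled s in [0, 1]; dq_ds maps s to the path derivative of one joint.
    Returns sdot and the resulting duration T = 1 / sdot."""
    max_slope = max(abs(dq_ds(i / (n_samples - 1))) for i in range(n_samples))
    sdot = v_max / max_slope           # worst-case slope dictates the speed
    return sdot, 1.0 / sdot
```

For example, a straight joint-space segment q(s) = 2s with a velocity limit of 1 yields ṡ = 0.5 and a duration of 2.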
9.3.2.2 Multi-Dimensional Trajectories
Instead of using a one-dimensional function s(t) to parameterize a path segment, trajectories can also be described by individual functions for each DOF i to represent qd(t) or xd(t), respectively. To connect two arbitrary states, the signals for each individual degree of freedom need to be time-synchronized [8.70], so that all DOFs reach their target state of motion at the very same instant. Such trajectories may also be phase-synchronized [8.76], so that the trajectories of all DOFs are derived from one master DOF and only scaled by a factor to achieve homothety [8.77]. The two trajectories in Fig. 8.8 are time-synchronized but not phase-synchronized.
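Time-synchronization can be sketched as follows: compute each DOF's minimum segment duration separately, then stretch every DOF to the duration of the slowest one. The helper name and the uniform-stretch assumption are ours:

```python
def synchronize(durations):
    """Given the minimum feasible duration of each DOF's segment, return the
    common duration T (the slowest DOF) and the per-DOF time-scaling factors.
    Stretching a segment by factor a scales its velocities by 1/a and its
    accelerations by 1/a^2 (cf. dynamic scaling of trajectories [8.83])."""
    T = max(durations)
    return T, [T / d for d in durations]
```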
9.3.2.3 Multi-Point Trajectories
If, instead of an entirely defined geometric path or a motion to a single way point, an entire series of geometric way points is given, a trajectory that connects all way points in a given state space needs to be computed. Trajectory segments between two way points can be represented with any of the above-mentioned representations as long as the position signal and its derivatives up to an appropriate order are continuous (at least C1 continuous). Splines, B-splines, or Bézier splines are used to generate either a reference trajectory or a geometric path, which can then be parameterized with a function s(t).
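One simple way to obtain the required C1 continuity between way points is cubic Hermite interpolation with interior velocities chosen by a finite-difference heuristic. This is a minimal illustrative scheme, not one of the spline methods cited above (which can provide higher-order continuity):

```python
def hermite_multipoint(times, points):
    """C1 multi-point trajectory for one DOF: cubic Hermite segments between
    way points, with interior way-point velocities set by central finite
    differences and rest (zero velocity) at both ends."""
    n = len(points)
    v = [0.0] * n
    for i in range(1, n - 1):
        v[i] = (points[i + 1] - points[i - 1]) / (times[i + 1] - times[i - 1])

    def q(t):
        i = max(j for j in range(n - 1) if times[j] <= t)   # active segment
        h = times[i + 1] - times[i]
        s = (t - times[i]) / h
        h00 = 2 * s**3 - 3 * s**2 + 1    # cubic Hermite basis functions
        h10 = s**3 - 2 * s**2 + s
        h01 = -2 * s**3 + 3 * s**2
        h11 = s**3 - s**2
        return (h00 * points[i] + h10 * h * v[i]
                + h01 * points[i + 1] + h11 * h * v[i + 1])

    return q
```

Position and velocity are continuous across the way points by construction; acceleration generally is not, which is why higher-order splines are preferred when C2 continuity is needed.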
9.4 Trajectory Planning Algorithms
This section provides an overview of online and offline trajectory planning concepts.
9.4.1 Constraints
Constraints for trajectory planners can be manifold:
- Kinematic: maximum velocities, accelerations, jerks, etc., and workspace limits
- Dynamic: maximum joint or actuator forces and/or torques
- Geometric: no collisions with static and dynamic objects in the workspace
- Temporal: reaching a state within a given time interval or at a given time.
These and other constraints may have to be taken into account at the same time. Depending on the robot and the task, additional optimization criteria may be considered (e. g., time-optimality, minimum-jerk, maximum distance to workspace boundaries, minimum energy).
9.4.2 Offline Trajectory Planning
Kahn and Roth [8.78] applied optimal linear control theory to achieve a near-time-optimal solution for linearized manipulators. The resulting trajectories are jerk-limited and lead to smaller trajectory-following errors and to less excitation of the structural natural frequencies of the system.
The work of Brady [8.79] introduced several techniques for trajectory planning in joint space; in parallel, Paul [8.80] and Taylor [8.81] published works on the planning of trajectories in Cartesian space. Lin et al. [8.82] published another purely kinematic approach, as did Castain and Paul [8.69].
Hollerbach [8.83] first introduced the consideration of the nonlinear inverse robot dynamics for the generation of manipulator trajectories.
During the middle of the 1980s, three groups developed techniques for time-optimal trajectory planning for arbitrarily specified paths: Bobrow [8.73], Shin and McKay [8.74], and Pfeiffer and Johanni [8.75]. Trajectories are composed of three curves: the maximum acceleration curve, the maximum velocity curve, and the maximum deceleration curve. The proposed algorithms find the intersection points of these three curves.
These algorithms have become the foundation for many follow-up works: Kyriakopoulos and Saridis [8.84] added minimum-jerk criteria; Slotine and Yang abandoned the computationally expensive calculation of the maximum velocity curve [8.85]; Shiller and Lu added methods for handling dynamic singularities [8.86]; Fiorini and Shiller extended the algorithm to known dynamic environments with moving obstacles [8.87].
9.4.3 Online Trajectory Planning
An online modification of a planned trajectory may have several reasons: (i) the trajectory is adapted in order to improve the accuracy with respect to a path specified beforehand; (ii) the robotic system reacts to sensor signals and/or events that cannot be predicted beforehand, because the robot acts in a (partly) unknown and dynamic environment.
9.4.3.1 Improving Path Accuracy
All previously described offline trajectory planning methods assume a dynamic model that describes the behavior of the real robot exactly. In practice, this is often not the case: some robot parameters are only estimated, some dynamic effects remain unmodeled, and system parameters may change during operation. If this is the case, the resulting robot motion is no longer time-optimal and/or the maximum actuator forces and/or torques are exceeded, which leads to an undesired difference between the specified and the executed path.
Dahl and Nielsen [8.88] extended [8.73, 8.74, 8.75] by adapting the acceleration along the path, so that the underlying trajectory-following controller is adapted depending on the current state of motion. The approaches of Cao et al. [8.89, 8.90] use cubic splines to generate smooth paths in joint space with time-optimal trajectories. Constantinescu and Croft [8.91] suggest a further improvement to the approach of [8.86], with the objective of limiting the derivative of the actuator forces/torques. Macfarlane and Croft extended this approach further by adding jerk limitations to quintic splines (Fig. 8.8) [8.92].
9.4.4 Sensor-Based Trajectory Adaptation
The previous section presented an overview of online trajectory generation methods for improving path accuracy, while this one focuses on the online consideration of sensor signals, for instance, for the purpose of collision avoidance or for switching between controllers or control gains.
In 1988, Andersson presented an online trajectory planner for a ping-pong-playing PUMA 260 manipulator that computes parameterized quintic polynomials [8.93, 8.94]. Based on [8.73, 8.74, 8.75], Lloyd and Hayward proposed a technique to transition between two different trajectory segments [8.95] using a transition window [8.81]. Ahn et al. introduced a method to connect two arbitrary motion states online, which does not take kinematic or dynamic constraints into account [8.96]. Broquère et al., Haschke et al., and Kröger extended this approach to multi-dimensional trajectories, so that kinematic constraints are taken into account [8.70, 8.97, 8.98].
9.4.5 Further Reading
Overviews of the domain of robot reference trajectory generation can be found in the textbooks of Biagiotti and Melchiorri [8.72], Craig [8.99], Fu et al. [8.100], and Spong et al. [8.101].
10 Digital Implementation
Most controllers introduced in the previous sections are digitally implemented on microprocessors. In this section, basic but essential practical issues related to their computer implementation are discussed. When the controller is implemented on a computer control system, the analog inputs are read and the outputs are set with a certain sampling period. This is a drawback compared to analog implementations, since sampling introduces time delays into the control loop. Figure 8.9 shows the overall block diagram of the control system with a boxed digital implementation part. When a digital computer is used to implement a control law, it is convenient to divide the coding sequence of the interrupt routine into four process routines, as shown in Fig. 8.10. Reading the input signals from the sensors and writing the control signals to the digital-to-analog (D/A) converters, synchronized at the correct frequency, is very important; therefore, these processes are located in the first routine. After saving counter values and writing out the D/A values, which were already calculated one step before, the next routine produces the reference values. The control routines with filters follow and produce scalar or vector control outputs. Finally, a user interface for checking parameter values is provided, which is used for tuning and debugging.
10.1 Z-Transform for Motion Control Implementation
Continuous-time systems are transformed into discrete-time systems by using the Z-transform. A discrete-time model describes the behavior of a physical process at the sampling points, although the physical process itself is still a continuous-time system. The Laplace transform is used for the analysis of control systems in the s-domain, and in most cases the design of controllers and filters is done using tools in the s-domain. In order to realize those results in program code, understanding the Z-transform is essential. All controllers and filters designed in the s-domain can easily be translated to program code through the Z-transform, because the result has the form of a difference equation over sampled sequences.
A PID controller is used as an example. In transfer function form, this controller has the basic structure
There are several methods for the transformation from the frequency domain to the discrete domain. To preserve stability, the backward Euler and Tustin algorithms are often used. Although the Tustin algorithm is known to be more accurate, the backward Euler algorithm is utilized in the following procedure.
After substituting the backward Euler equation, s = (1 − z−1)/T with sampling period T, into (8.126),
the following discrete form is produced
where
Sometimes a differentiator s in the PID controller makes the implementation infeasible when the measurement noise is severe. One can remedy the controller (8.126) by attaching a lowpass filter with filter time constant σ
Again, substituting the backward Euler equation into (8.128) produces
where
in which the filter time constant σ is determined from a cutoff frequency fc [Hz] for removing noise, e.g., σ = 1/(2πfc).
10.2 Digital Control for Coding
The inverse Z-transform produces a difference equation suitable for digital control, and the difference equation can be directly converted into control program code. The inverse Z-transform of Y(z) is yk, and since z−1 implies a delay of one sample time, the inverse Z-transform of z−1Y(z) is yk−1 and that of z−2Y(z) is yk−2.
Now, the PID controller expressed by (8.127) is rearranged using the difference equation
For practical use, the PID controller can be directly coded in the program as follows
where
in which pk is the present position, vk is the present velocity, the subscript desired denotes the reference to be followed, and the subscript c denotes the coded form for digital control. Now let us obtain the difference between the present control output and the previous one
Comparing the parameters in (8.130) and (8.131), one obtains
which shows that there is a relation between the designed and coded forms of the gains
As the sampling frequency of the same system is increased, the coded KV gain should be increased and the coded KI gain should be decreased. Using this method, the designed controller can be coded for a digital signal processor (DSP) or microprocessor. However, sufficient analysis and simulation of the control algorithms should be performed beforehand to obtain successful control system performance. In addition, the PID controller with lowpass filter (8.129) can be implemented as
Using the same procedure, one arrives at a similar control program code for digital control.
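The backward-Euler discrete PID with a lowpass-filtered derivative can be sketched as follows. The substitution s → (1 − z−1)/T gives the recursions below; the gains and cutoff frequency are illustrative placeholders, not values from the text. Note how T multiplies the accumulated integral and divides the derivative difference, which is exactly the designed-versus-coded gain relation discussed above:

```python
import math

class DiscretePID:
    """PID with backward-Euler discretization, s -> (1 - z^-1)/T, and a
    first-order lowpass (time constant sigma) on the derivative term."""

    def __init__(self, kp, ki, kv, T, fc):
        self.kp, self.ki, self.kv, self.T = kp, ki, kv, T
        self.sigma = 1.0 / (2.0 * math.pi * fc)   # sigma from cutoff fc [Hz]
        self.i_term = 0.0
        self.d_term = 0.0
        self.e_prev = 0.0

    def update(self, e):
        # backward-Euler integrator: I_k = I_{k-1} + T * e_k
        self.i_term += self.T * e
        # filtered backward-Euler differentiator, from kv*s/(sigma*s + 1):
        # D_k = (sigma * D_{k-1} + kv * (e_k - e_{k-1})) / (sigma + T)
        self.d_term = (self.sigma * self.d_term
                       + self.kv * (e - self.e_prev)) / (self.sigma + self.T)
        self.e_prev = e
        return self.kp * e + self.ki * self.i_term + self.d_term
```

Called once per sampling period with the current error, this update plays the role of the third process routine of Fig. 8.10.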
10.2.1 PID Control Experiment (Multimedia)
As the gains change, the performance variations of the PID controller implemented in the digital control system are shown in the multimedia source to aid the reader's understanding.
11 Learning Control
Since many robotic applications, such as pick-and-place operations, painting, and circuit-board assembly, involve repetitive motions, it is natural to consider using the data gathered in previous cycles to try to improve the performance of the manipulator in subsequent cycles. This is the basic idea of repetitive control or learning control. Consider the robot model given in Sect. 8.1 and suppose that one is given a desired joint trajectory qd(t) on a finite time interval [0, T]. The reference trajectory qd is used in repeated trials of the manipulator, assuming either that the trajectory is periodic, qd(t + T) = qd(t), (repetitive control) or that the robot is reinitialized to lie on the desired trajectory at the beginning of each trial (learning control). Hereafter, we use the term learning control to mean either repetitive or learning control.
11.1 Pure P-Type Learning Control
Let τk be the input torque during the k-th cycle, which produces an output qk. Now, let us consider the following set of assumptions:
-
Assumption 1: Every trial ends at a fixed time of duration T.
-
Assumption 2: Repetition of the initial setting is satisfied.
-
Assumption 3: Invariance of the system dynamics is ensured throughout the repeated trials.
-
Assumption 4: Every output qk can be measured and thus the error signal qd − qk can be utilized in the construction of the next input τk+1.
-
Assumption 5: The dynamics of the robot manipulators is invertible.
The learning control problem is to determine a recursive learning law L
where ek = qd − qk, such that ek → 0 as k → ∞ in some suitably defined function norm. The initial control input τ0 can be any control input that produces a stable output, such as PD control. Such learning control schemes are attractive because accurate models of the dynamics need not be known a priori.
Several approaches have been used to generate a suitable learning law L and to prove convergence of the output error. A pure P-type learning law is one of the form
and is given this name because the correction term for the input torque at each iteration is proportional to the error ek = qd − qk. Now let τd be defined by the computed-torque control, i.e.,
One should recall that this function actually does not need to be computed; it is sufficient to know that it exists. Considering the P-type learning control law, we have
where Δτk = τd − τk, so that
provided there exist positive constants λ and β such that
for all k. It then follows from the inequality above that qk converges to qd in the norm sense as k → ∞. A detailed stability analysis of this control scheme is given in [8.102, 8.103].
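The P-type iteration can be illustrated on a deliberately simplified plant: a memoryless toy system q = b·u + d stands in for the robot dynamics so that the per-trial contraction is transparent. All names and numbers below are illustrative; a real manipulator requires the invertibility and initial-condition assumptions stated above:

```python
import math

def p_type_ilc(phi, n_iters=20, n_steps=50, b=2.0):
    """Pure P-type learning: u_{k+1}(t) = u_k(t) + phi * e_k(t).
    Toy memoryless plant q(t) = b*u(t) + d(t) with an unknown repeating
    disturbance d; the error contracts by |1 - phi*b| every trial."""
    qd = [math.sin(2 * math.pi * j / n_steps) for j in range(n_steps)]
    d = [0.3] * n_steps                                  # repeating disturbance
    u = [0.0] * n_steps                                  # initial input, trial 0
    errors = []
    for _ in range(n_iters):
        q = [b * u[j] + d[j] for j in range(n_steps)]    # execute one trial
        e = [qd[j] - q[j] for j in range(n_steps)]       # trial error
        errors.append(max(abs(x) for x in e))
        u = [u[j] + phi * e[j] for j in range(n_steps)]  # P-type update
    return errors

errs = p_type_ilc(phi=0.25)   # |1 - phi*b| = 0.5: error halves per trial
```

No model of d is used anywhere; the learning law recovers the compensating input purely from the repeated error signal.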
11.2 P-Type Learning Control with a Forgetting Factor
Although pure P-type learning control achieves the desired goal, several strict assumptions may not be valid in actual implementations; for example, there may be an initial setting error. Furthermore, there may be small but nonrepeatable fluctuations of the dynamics. Finally, there may exist a (bounded) measurement noise such that
Thus the learning control scheme may fail. In order to enhance the robustness of P-type learning control, a forgetting factor is introduced in the form
The original idea of using a forgetting factor in learning control originated with [8.104].
It has been rigorously proven that P-type learning control with a forgetting factor guarantees convergence of the trajectories to a neighborhood of the desired ones. Moreover, if the content of a long-term memory is refreshed after every k trials, then the trajectories converge to an ε-neighborhood of the desired control goal. The size of ε depends on the magnitude of the initial setting error, the nonrepeatable fluctuations of the dynamics, and the measurement noise. For a detailed stability investigation, please refer to [8.105, 8.106].
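The forgetting-factor law can be sketched the same way, on a toy memoryless plant q = b·u + d with bounded measurement noise. The update u_{k+1} = (1 − α)u_k + α·u_0 + φ·e_k is one standard form of the forgetting-factor law; all numbers are purely illustrative:

```python
import math, random

def p_type_ilc_forgetting(phi, alpha, n_iters=60, n_steps=50, b=2.0, noise=0.05):
    """P-type learning with forgetting factor alpha:
        u_{k+1} = (1 - alpha) * u_k + alpha * u_0 + phi * e_k
    With bounded measurement noise, the error no longer converges to zero but
    settles into a neighborhood whose size grows with alpha and the noise."""
    random.seed(0)
    qd = [math.sin(2 * math.pi * j / n_steps) for j in range(n_steps)]
    d = [0.3] * n_steps                                   # repeating disturbance
    u0 = [0.0] * n_steps                                  # long-term memory
    u = list(u0)
    errors = []
    for _ in range(n_iters):
        q = [b * u[j] + d[j] for j in range(n_steps)]     # execute one trial
        e_meas = [qd[j] - q[j] + random.uniform(-noise, noise)
                  for j in range(n_steps)]                # noisy measured error
        errors.append(max(abs(qd[j] - q[j]) for j in range(n_steps)))
        u = [(1 - alpha) * u[j] + alpha * u0[j] + phi * e_meas[j]
             for j in range(n_steps)]                     # forgetting update
    return errors
```

Running this with, e.g., phi=0.25 and alpha=0.1 shows the error shrinking rapidly and then hovering in a nonzero band — the ε-neighborhood behavior described above — instead of being driven to zero as in the noise-free pure P-type case.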
11.3 Summary
By applying learning control, the performance of repetitive tasks (such as painting or pick-and-place operations) is improved by utilizing data gathered in previous cycles. In this section, two learning control schemes were introduced. First, pure P-type learning control and its robustness problem were described. Then, P-type learning control with a forgetting factor was presented, which enhances the robustness of learning control.
11.3.1 Further Reading
Rigorous and refined explorations of learning control were first discussed independently in [8.2, 8.12].
Abbreviations
- 1-D: one-dimensional
- D/A: digital-to-analog
- DC: direct current
- DOF: degree of freedom
- DSP: digital signal processor
- GAS: global asymptotic stability
- HJB: Hamilton–Jacobi–Bellman
- HJI: Hamilton–Jacobi–Isaac
- IOSS: input-output-to-state stability
- ISS: input-to-state stability
- MIMO: multiple-input–multiple-output
- MRAC: model reference adaptive control
- PD: proportional–derivative
- PID: proportional–integral–derivative
- PI: proportional–integral
- SGAS: semiglobal asymptotic stability
- SGUUB: semiglobal uniform ultimate boundedness
- SISO: single-input–single-output
- UUB: uniform ultimate boundedness
References
C. Canudas de Wit, B. Siciliano, G. Bastin: Theory of Robot Control (Springer, London 1996)
J.J. Craig: Adaptive Control of Mechanical Manipulators, Ph.D. Thesis (UMI Dissertation Information Service, Ann Arbor 1986)
R.J. Schilling: Fundamentals of Robotics: Analysis and Control (Prentice Hall, Englewood Cliffs 1989)
L. Sciavicco, B. Siciliano: Modeling and Control of Robot Manipulator (McGraw-Hill, New York 1996)
M.W. Spong, M. Vidyasagar: Robot Dynamics and Control (Wiley, New York 1989)
M.W. Spong, F.L. Lewis, C.T. Abdallah (Eds.): Robot Control (IEEE, New York 1989)
C.H. An, C.G. Atkeson, J.M. Hollerbach: Model–Based Control of a Robot Manipulator (MIT Press, Cambridge, 1988)
R.M. Murray, Z. Li, S.S. Sastry: A Mathematical Introduction to Robotic Manipulation (CRC, Boca Raton 1994)
T. Yoshikawa: Foundations of Robotics (MIT Press, Cambridge 1990)
O. Khatib: A unified approach for motion and force control of robot manipulators: The operational space formulation, IEEE J. Robotics Autom. 3(1), 43–53 (1987)
J.Y.S. Luh, M.W. Walker, R.P.C. Paul: Resolved-acceleration control of mechanical manipulators, IEEE Trans. Autom. Control 25(3), 468–474 (1980)
S. Arimoto, F. Miyazaki: Stability and robustness of PID feedback control for robot manipulators of sensory capability. In: Robotics Research, ed. by M. Brady, R. Paul (MIT Press, Cambridge 1984) pp. 783–799
L.C. Fu: Robust adaptive decentralized control of robot manipulators, IEEE Trans. Autom. Control 37(1), 106–110 (1992)
H. Seraji: Decentralized adaptive control of manipulators: Theory, simulation, and experimentation, IEEE Trans. Robotics Autom. 5(2), 183–201 (1989)
J.G. Ziegler, N.B. Nichols: Optimum settings for automatic controllers, Trans. ASME 64, 759–768 (1942)
Y. Choi, W.K. Chung: PID Trajectory Tracking Control for Mechanical Systems, Lecture Notes in Control and Information Sciences, Vol. 289 (Springer, New York 2004)
R. Kelly: PD control with desired gravity compensation of robot manipulators: A review, Int. J. Robotics Res. 16(5), 660–672 (1997)
M. Takegaki, S. Arimoto: A new feedback method for dynamic control of manipulators, Trans. ASME J. Dyn. Syst. Meas, Control 103, 119–125 (1981)
P. Tomei: Adaptive PD controller for robot manipulators, IEEE Trans. Robotics Autom. 7(4), 565–570 (1991)
R. Ortega, A. Loria, R. Kelly: A semi-globally stable output feedback PI${}^{2}$ D regulator for robot manipulators, IEEE Trans. Autom. Control 40(8), 1432–1436 (1995)
D. Angeli: Input-to-State stability of PD-controlled robotic systems, Automatica 35, 1285–1290 (1999)
J.A. Ramirez, I. Cervantes, R. Kelly: PID regulation of robot manipulators: Stability and performance, Syst. Control Lett. 41, 73–83 (2000)
R. Kelly: Global positioning of robot manipulators via PD control plus a class of nonlinear integral actions, IEEE Trans. Autom. Control 43(7), 934–937 (1998)
Z. Qu, J. Dorsey: Robust tracking control of robots by a linear feedback law, IEEE Trans. Autom. Control 36(9), 1081–1084 (1991)
H. Berghuis, H. Nijmeijer: Robust control of robots via linear estimated state feedback, IEEE Trans. Autom. Control 39(10), 2159–2162 (1994)
Y. Choi, W.K. Chung, I.H. Suh: Performance and $\mathcal{H}_{\infty}$ optimality of PID trajectory tracking controller for Lagrangian systems, IEEE Trans. Robotics Autom. 17(6), 857–869 (2001)
K. Aström, T. Hagglund: PID Controllers: Theory, Design, and Tuning (Instrument Society of America, Research Triangle Park 1995)
C.C. Yu: Autotuning of PID Controllers: Relay Feedback Approach (Springer, London 1999)
F.L. Lewis, C.T. Abdallah, D.M. Dawson: Control of Robot Manipulators (Macmillan, New York 1993)
A. Isidori: Nonlinear Control Systems: An Introduction, Lecture Notes in Control and Information Sciences, Vol. 72 (Springer, New York 1985)
H. Berghuis, H. Nijmeijer: A passivity approach to controller–observer design for robots, IEEE Trans. Robotics Autom. 9, 740–754 (1993)
J.J. Slotine, W. Li: On the adaptive control of robot manipulators, Int. J. Robotics Res. 6(3), 49–59 (1987)
G. Liu, A.A. Goldenberg: Comparative study of robust saturation–based control of robot manipulators: analysis and experiments, Int. J. Robotics Res. 15(5), 473–491 (1996)
D.M. Dawson, M. Grabbe, F.L. Lewis: Optimal control of a modified computed–torque controller for a robot manipulator, Int. J. Robotics Autom. 6(3), 161–165 (1991)
D.M. Dawson, Z. Qu, J. Duffie: Robust tracking control for robot manipulators: Theory, simulation and implementation, Robotica 11, 201–208 (1993)
A. Jaritz, M.W. Spong: An experimental comparison of robust control algorithms on a direct drive manipulator, IEEE Trans. Control Syst. Technol. 4(6), 627–640 (1996)
A. Isidori: Nonlinear Control Systems, 3rd edn. (Springer, New York 1995)
J.J. Slotine, W. Li: Applied Nonlinear Control (Prentice Hall, Englewood Cliffs 1991)
W.J. Rugh: Linear System Theory, 2nd edn. (Prentice Hall, Upper Saddle River 1996)
M.W. Spong, M. Vidyasagar: Robust microprocessor control of robot manipulators, Automatica 23(3), 373–379 (1987)
H.K. Khalil: Nonlinear Systems, 3rd edn. (Prentice Hall, Upper Saddle River 2002)
M. Vidyasagar: Nonlinear Systems Analysis, 2nd edn. (Prentice Hall, Englewood Cliffs 1993)
J.T. Wen: A unified perspective on robot control: The energy Lyapunov function approach, Int. J. Adapt. Control Signal Process. 4, 487–500 (1990)
R. Ortega, M.W. Spong: Adaptive motion control of rigid robots: A tutorial, Automatica 25(6), 877–888 (1989)
N. Sadegh, R. Horowitz: Stability and robustness analysis of a class of adaptive controllers for robotic manipulators, Int. J. Robotics Res. 9(3), 74–92 (1990)
S. Dubowsky, D.T. DesForges: The application of model-reference adaptive control to robotic manipulators, ASME J. Dyn. Syst. Meas. Control 37(1), 106–110 (1992)
S.H. Hsu, L.C. Fu: A fully adaptive decentralized control of robot manipulators, Automatica 42, 1761–1767 (2008)
A. Balestrino, G. de Maria, L. Sciavicco: An adaptive model following control for robotic manipulators, ASME J. Dyn. Syst. Meas. Control 105, 143–151 (1983)
S. Nicosia, P. Tomei: Model reference adaptive control algorithms for industrial robots, Automatica 20, 635–644 (1984)
R. Horowitz, M. Tomizuka: An adaptive control scheme for mechanical manipulators-Compensation of nonlinearity and decoupling control, ASME J. Dyn. Syst. Meas. Control 108, 127–135 (1986)
I.D. Landau: Adaptive Control: The Model Reference Approach (Dekker, New York 1979)
R. Lozano, C. Canudas de Wit: Passivity based adaptive control for mechanical manipulators using LS type estimation, IEEE Trans. Autom. Control 35(12), 1363–1365 (1990)
B. Brogliato, I.D. Landau, R. Lozano: Passive least squares type estimation algorithm for direct adaptive control, Int. J. Adapt. Control Signal Process. 6, 35–44 (1992)
R. Johansson: Adaptive control of robot manipulator motion, IEEE Trans. Robotics Autom. 6(4), 483–490 (1990)
M.W. Walker: Adaptive control of manipulators containing closed kinematic loops, IEEE Trans. Robotics Autom. 6(1), 10–19 (1990)
J.S. Reed, P.A. Ioannou: Instability analysis and robust adaptive control of robotic manipulators, IEEE Trans. Autom. Control 5(3), 74–92 (1989)
G. Tao: On robust adaptive control of robot manipulators, Automatica 28(4), 803–807 (1992)
H. Berghuis, R. Ogata, H. Nijmeijer: A robust adaptive controller for robot manipulators, Proc. IEEE Int. Conf. Robotics Autom. (ICRA) (1992) pp. 1876–1881
R. Johansson: Quadratic optimization of motion coordination and control, IEEE Trans. Autom. Control 35(11), 1197–1208 (1990)
Z. Qu, D.M. Dawson: Robust Tracking Control of Robot Manipulators (IEEE, Piscataway 1996)
P. Dorato, C. Abdallah, V. Cerone: Linear-Quadratic Control (Prentice Hall, Upper Saddle River 1995)
A. Locatelli: Optimal Control: An Introduction (Birkhäuser, Basel 2001)
A. Isidori: Feedback control of nonlinear systems, Int. J. Robust Nonlin. Control 2, 291–311 (1992)
A.J. van der Schaft: Nonlinear state space $\mathcal{H}_{\infty}$ control theory. In: Essays on Control: Perspective in Theory and its Applications, ed. by H.L. Trentelman, J.C. Willems (Birkhäuser, Basel 1993) pp. 153–190
A.J. van der Schaft: $L_2$-gain analysis of nonlinear systems and nonlinear state feedback $\mathcal{H}_{\infty}$ control, IEEE Trans. Autom. Control 37(6), 770–784 (1992)
J. Park, W.K. Chung, Y. Youm: Analytic nonlinear $\mathcal{H}_{\infty}$ inverse-optimal control for Euler–Lagrange system, IEEE Trans. Robotics Autom. 16(6), 847–854 (2000)
B.S. Chen, T.S. Lee, J.H. Feng: A nonlinear $\mathcal{H}_{\infty}$ control design in robotics systems under parametric perturbation and external disturbance, Int. J. Control 59(12), 439–461 (1994)
J. Park, W.K. Chung: Design of a robust $\mathcal{H}_{\infty}$ PID control for industrial manipulators, ASME J. Dyn. Syst. Meas. Control 122(4), 803–812 (2000)
R.H. Castain, R.P. Paul: An on-line dynamic trajectory generator, Int. J. Robotics Res. 3(1), 68–72 (1984)
T. Kröger: On-Line Trajectory Generation in Robotic Systems, Springer Tracts in Advanced Robotics, Vol. 58 (Springer, Berlin, Heidelberg 2010)
D. Simon, C. Isik: A trigonometric trajectory generator for robotic arms, Int. J. Control 57(3), 505–517 (1993)
L. Biagiotti, C. Melchiorri: Trajectory Planning for Automatic Machines and Robots (Springer, Berlin, Heidelberg 2008)
J.E. Bobrow: Optimal robot path planning using the minimum-time criterion, IEEE J. Robotics Autom. 4(4), 443–450 (1988)
K.G. Shin, N.D. McKay: Minimum-time control of robotic manipulators with geometric path constraints, IEEE Trans. Autom. Control 30(5), 531–541 (1985)
F. Pfeiffer, R. Johanni: A concept for manipulator trajectory planning, Proc. Int. IEEE Conf. Robotics Autom. (ICRA) (1986) pp. 1399–1405
W. Khalil, E. Dombre: Trajectory generation. In: Modeling, Identification and Control of Robots, ed. by W. Khalil, E. Dombre (Butterworth-Heinemann, Oxford 2004)
A.I. Kostrikin, Y.I. Manin: Linear Algebra and Geometry (Gordon and Breach Sci. Publ., Amsterdam 1997)
M.E. Kahn, B. Roth: The near-minimum-time control of open-loop articulated kinematic chains, ASME J. Dyn. Syst. Meas. Control 93, 164–172 (1971)
M. Brady: Trajectory planning. In: Robot Motion: Planning and Control, ed. by M. Brady, J.M. Hollerbach, T.L. Johnson, T. Lozano-Pérez, M.T. Mason (MIT Press, Cambridge 1982)
R.P.C. Paul: Manipulator cartesian path control. In: Robot Motion: Planning and Control, ed. by M. Brady, J.M. Hollerbach, T.L. Johnson, T. Lozano-Pérez, M.T. Mason (MIT Press, Cambridge 1982)
R.H. Taylor: Planning and execution of straight-line manipulator trajectories. In: Robot Motion: Planning and Control, ed. by M. Brady, J.M. Hollerbach, T.L. Johnson, T. Lozano-Pérez, M.T. Mason (MIT Press, Cambridge 1982)
C.-S. Lin, P.-R. Chang, J.Y.S. Luh: Formulation and optimization of cubic polynomial joint trajectories for industrial robots, IEEE Trans. Autom. Control 28(12), 1066–1074 (1983)
J.M. Hollerbach: Dynamic scaling of manipulator trajectories, ASME J. Dyn. Syst. Meas. Control 106(1), 102–106 (1984)
K.J. Kyriakopoulos, G.N. Saridis: Minimum jerk path generation, Proc. IEEE Int. Conf. Robotics Autom. (ICRA) (1988) pp. 364–369
J.-J.E. Slotine, H.S. Yang: Improving the efficiency of time-optimal path-following algorithms, IEEE Trans. Robotics Autom. 5(1), 118–124 (1989)
Z. Shiller, H.-H. Lu: Computation of path constrained time optimal motions with dynamic singularities, ASME J. Dyn. Syst. Meas. Control 114(1), 34–40 (1992)
P. Fiorini, Z. Shiller: Time optimal trajectory planning in dynamic environments, Proc. IEEE Int. Conf. Robotics Autom. (ICRA) (1996) pp. 1553–1558
O. Dahl, L. Nielsen: Torque limited path following by on-line trajectory time scaling, Proc. IEEE Int. Conf. Robotics Autom. (ICRA) (1989) pp. 1122–1128
B. Cao, G.I. Dodds, G.W. Irwin: Time-optimal and smooth constrained path planning for robot manipulators, Proc. IEEE Int. Conf. Robotics Autom. (ICRA) (1994) pp. 1853–1858
B. Cao, G.I. Dodds, G.W. Irwin: A practical approach to near time-optimal inspection-task-sequence planning for two cooperative industrial robot arms, Int. J. Robotics Res. 17(8), 858–867 (1998)
D. Constantinescu, E.A. Croft: Smooth and time-optimal trajectory planning for industrial manipulators along specified paths, J. Robotics Syst. 17(5), 233–249 (2000)
S. Macfarlane, E.A. Croft: Jerk-bounded manipulator trajectory planning: Design for real-time applications, IEEE Trans. Robotics Autom. 19(1), 42–52 (2003)
R.L. Andersson: A Robot Ping-Pong Player: Experiment in Real-Time Intelligent Control (MIT Press, Cambridge 1988)
R.L. Andersson: Aggressive trajectory generator for a robot ping-pong player, IEEE Control Syst. Mag. 9(2), 15–21 (1989)
J. Lloyd, V. Hayward: Trajectory generation for sensor-driven and time-varying tasks, Int. J. Robotics Res. 12(4), 380–393 (1993)
K. Ahn, W.K. Chung, Y. Youm: Arbitrary states polynomial-like trajectory (ASPOT) generation, Proc. IEEE 30th Annu. Conf. Ind. Electron. Soc. (2004) pp. 123–128
X. Broquère, D. Sidobre, I. Herrera-Aguilar: Soft motion trajectory planner for service manipulator robot, Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst. (IROS) (2008) pp. 2808–2813
R. Haschke, E. Weitnauer, H. Ritter: On-line planning of time-optimal, jerk-limited trajectories, Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst. (IROS) (2008) pp. 3248–3253
J.J. Craig: Introduction to Robotics: Mechanics and Control (Prentice Hall, Upper Saddle River 2003)
K.S. Fu, R.C. Gonzalez, C.S.G. Lee: Robotics: Control, Sensing, Vision and Intelligence (McGraw-Hill, New York 1988)
M.W. Spong, S.A. Hutchinson, M. Vidyasagar: Robot Modeling and Control (Wiley, New York 2006)
S. Arimoto: Mathematical theory of learning with application to robot control. In: Adaptive and Learning Control, ed. by K.S. Narendra (Plenum, New York 1986) pp. 379–388
S. Kawamura, F. Miyazaki, S. Arimoto: Realization of robot motion based on a learning method, IEEE Trans. Syst. Man Cybern. 18(1), 126–134 (1988)
G. Heinzinger, D. Fenwick, B. Paden, F. Miyazaki: Robust learning control, Proc. IEEE Int. Conf. Decis. Control (1989)
S. Arimoto: Robustness of learning control for robot manipulators, Proc. IEEE Int. Conf. Decis. Control (1990) pp. 1523–1528
S. Arimoto, T. Naniwa, H. Suzuki: Selective learning with a forgetting factor for robotic motion control, Proc. IEEE Int. Conf. Decis. Control (1991) pp. 728–733
Video-References
- Gain change of the PID controller, available from http://handbookofrobotics.org/view-chapter/08/videodetails/25
- Safe human-robot cooperation, available from http://handbookofrobotics.org/view-chapter/08/videodetails/757
- Virtual whiskers – Highly responsive robot collision avoidance, available from http://handbookofrobotics.org/view-chapter/08/videodetails/758
- JediBot – Experiments in human-robot sword-fighting, available from http://handbookofrobotics.org/view-chapter/08/videodetails/759
- Different jerk limits of robot arm trajectories, available from http://handbookofrobotics.org/view-chapter/08/videodetails/760
- Sensor-based online trajectory generation, available from http://handbookofrobotics.org/view-chapter/08/videodetails/761
Copyright information
© 2016 Springer-Verlag Berlin Heidelberg
Cite this chapter
Chung, W.K., Fu, L.-C., Kröger, T. (2016). Motion Control. In: Siciliano, B., Khatib, O. (eds) Springer Handbook of Robotics. Springer Handbooks. Springer, Cham. https://doi.org/10.1007/978-3-319-32552-1_8
DOI: https://doi.org/10.1007/978-3-319-32552-1_8
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-32550-7
Online ISBN: 978-3-319-32552-1
eBook Packages: Engineering