
12.1 Introduction to Robotics: Robot Types and Applications

Through good design practices and thorough consideration of detail, engineers have succeeded in applying robotic systems to a wide variety of industrial, manufacturing, space, domestic or household, social, and medical situations where the environment is structured or predictable. On a practical level, robots are distinguished from other electromechanical motion equipment by their dexterous manipulation capability, in that robots can work, position, and move tools and other objects with far greater dexterity than other machines found in the factory. Process robot systems are functional components with grippers, end-effectors, sensors, and process equipment organized to perform a controlled sequence of tasks to execute a process; they require sophisticated control systems. The combined effects of kinematic structure, axis drive mechanism design, and real-time motion control determine the major manipulation performance characteristics such as reach and dexterity, payload, quickness, and precision. For Cartesian robots, the range of motion of the first three axes describes the reachable workspace. Some robots have unusable spaces such as dead zones, singular poses, and wrist-wrap poses inside the boundaries of their reach. Usually motion tests, simulations, or other analyses are used to verify reach and dexterity for each application (Lewis et al. 1999).

All common commercial industrial robots are serial link manipulators with no more than six kinematically coupled axes of motion. By convention, the axes of motion are numbered in sequence as they are encountered from the base out to the wrist. The first three axes account for the spatial positioning motion of the robot; their configuration determines the shape of the space through which the robot can be positioned. Any subsequent axes in the kinematic chain provide rotational motions to orient the end of the robot arm and are referred to as wrist axes. In principle, a robot axis can produce one of two primary types of motion in its robot arm: revolute (rotational) or prismatic (translational). It is often useful to classify robots according to the orientation and type of their first three axes. As the robot arm has only three degrees of freedom, there exists a limited number of possible combinations, resulting altogether in 36 different structures of robot arms. The most common commercial robot configurations are: articulated robots (robotic arms), spherical, Selective Compliance Assembly Robot Arm (SCARA), cylindrical, Cartesian/gantry (as shown in Fig. 12.1), and parallel robots. Cartesian coordinate robots use orthogonal prismatic axes, usually referred to as X, Y, and Z, to translate their end-effector or payload through their rectangular workspace. One, two, or three revolute wrist axes may be added for orientation. Gantry robots are the most common Cartesian style. Commercial models of spherical and cylindrical robots were originally very common and popular in machine tending and material handling applications (Lewis et al. 1999; Bajd et al. 2010).

Fig. 12.1 Different types of robot arms: (RRR) all three joints of the rotational type; (RRT) two joints rotational and one translational; (RTT) one rotational and two translational degrees of freedom (Bajd et al. 2010)

Robots are also characterized by the type of actuators employed. Typically manipulators have hydraulic or electric actuation. In some cases, pneumatic actuators are used. A number of successful manipulator designs have emerged, each with a different arrangement of joints and links. Some “elbow” designs, such as the PUMA robots and the SPAR Remote Manipulator System, have a fairly anthropomorphic structure, with revolute joints arranged into “shoulder,” “elbow,” and “wrist” sections. A mix of revolute and prismatic joints has been adopted in the Stanford Manipulator and the SCARA types of arms. Other arms, such as those produced by IBM, feature prismatic joints for the “shoulder,” with a spherical wrist attached. In this case, the prismatic joints are essentially used as positioning devices, with the wrist used for fine motions (Lewis et al. 1999).

The largest number of industrial robot manipulators is found in the car industry (Bajd et al. 2010), where they are mainly used for welding. Another important application of industrial robots is moving objects from point to point, as in palletizing. Industrial robots are frequently used in aggressive or dangerous environments, such as spray painting. The demand for robot manipulators in the industrial assembly of component parts into functional systems is progressively increasing, and interest in robot manipulators in medicine is rapidly increasing as well. They can be found in surgical applications (Boonvisut and Cavusoglu 2013; Keung et al. 2013), drug delivery (Zhou et al. 2013), and in rehabilitation for training of a paralyzed extremity after stroke (Freeman et al. 2012). A special case of robot manipulators is the tele-manipulator: a robot controlled by a human operator, used in dangerous environments or distant places (Bolopion and Régnier 2013).

Wheeled mobile robots are suited to smooth ground and can effectively carry vision and other sensors for assessing distance or contact with objects in the environment. Biologically inspired legged mobile robots usually have six legs and are used on uneven terrain, as in the forestry robot, which is also capable of cutting trees. Another important class is service robotics, where robots are used to help people (predominantly aging populations) in daily activities. The most innovative examples are humanoid robots capable of biped locomotion (Lewis et al. 1999); tripedal, quadrupedal, and hexapod robots are other increasingly important legged robots. Flying robots (Nonami et al. 2010) and underwater robots (Javier et al. 2013) are broadly used for observation of distant terrains or for ocean studies.

Finally, humans have sought to establish a new dimension of human-robot communication, interaction, and collaboration. Sophisticated robotic toys are appreciated by children. Interesting experiments can be found in the arts, where robots dance (Augugliaro et al. 2013), play musical instruments (Cicconet et al. 2013), and even paint (Lewis et al. 1999). Strategy development for robot soccer competitions has reached highly advanced stages (Wang et al. 2013).

12.2 Robot Kinematic Modelling

Robot arm kinematics is the science that deals with the analytical study of the motion geometry of the robot manipulator with respect to a fixed reference coordinate system, without regard to the forces or moments causing the movements. In the case of Forward Kinematics (FK), the inputs are the joint angle vectors and the link parameters, and the output is the orientation and position of the tool or gripper. When the joint angles that represent a given robot configuration are computed from the position and orientation of the end-effector, the scheme is called Inverse Kinematics (IK) (Hegde 2008). The representation of FK and IK is shown in Fig. 12.2.

Fig. 12.2 Forward and inverse kinematics representation

In a serial open-loop robot manipulator, each link is connected to no more than two others via joints. Each pair of a link and a joint gives a single degree of freedom (DOF), so a serial manipulator with n joints provides n degrees of freedom (n DOF). In general, each link k is connected at its two ends to the previous link (k − 1) and the next link (k + 1), forming two joints at the ends of the connections (as shown in Fig. 12.3), where O n is the joint center. The kinematic analysis of an n-link manipulator can be carried out using the Denavit–Hartenberg convention for finding the link parameters (Hegde 2008; Spong et al. 2006):

  1. The distance (d i )

  2. The angle (θ i )

  3. The length (a i )

  4. The twist angle (α i )

Fig. 12.3 Serial manipulator end-effector frame transformation to the base frame

After coordinate frames have been assigned to all robot links according to the Denavit–Hartenberg convention, it is possible to establish the relation between the current frame (i) and the next frame (i + 1) by the following transformations in sequence:

  • Rotate about Z i by an angle θ i

  • Translate along Z i by a distance d i

  • Translate along the rotated \( {X}_i={X}_{i+1} \) through the length a i

  • Rotate about X i by the twist angle α i

These steps give a product of four homogeneous transformations relating the coordinate frames of the two serially connected links (the current frame (i) and the next frame (i + 1)). The resulting matrix is known as the arm matrix (A) (Hegde 2008).

$$ {A}_{i+1}^i=T\left(z,\theta \right)\,{T}_{\mathrm{trans}}\left(0,0,d\right)\,{T}_{\mathrm{trans}}\left(a,0,0\right)\,T\left(x,\alpha \right) $$
(12.1)
$$ =T\left(z,\theta \right)\,{T}_{\mathrm{trans}}\left(a,0,d\right)\,T\left(x,\alpha \right). $$
(12.2)

From Equations (12.1) and (12.2) the following matrix may be obtained:

$$ {A}_{i+1}^i=\left[\begin{array}{cccc} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right]\left[\begin{array}{cccc} 1 & 0 & 0 & a \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & d \\ 0 & 0 & 0 & 1 \end{array}\right]\left[\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha & 0 \\ 0 & \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 0 & 1 \end{array}\right]=\left[\begin{array}{cccc} \cos\theta & -\sin\theta\cos\alpha & \sin\theta\sin\alpha & a\cos\theta \\ \sin\theta & \cos\theta\cos\alpha & -\cos\theta\sin\alpha & a\sin\theta \\ 0 & \sin\alpha & \cos\alpha & d \\ 0 & 0 & 0 & 1 \end{array}\right]. $$
(12.3)

The coordinate frame at the end-effector of the manipulator is related to the base reference frame by the T matrix, expressed in terms of the (A) matrices; for a six degrees of freedom (6 DOF) robot, for example:

$$ {T}_6^0={A}_1^0\ {A}_2^1\ {A}_3^2\ {A}_4^3\ {A}_5^4\ {A}_6^5. $$
(12.4)
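To make this composition concrete, the following Python sketch (assuming NumPy; the DH parameter values are illustrative placeholders, not those of any particular robot) builds the arm matrix of Eq. (12.3) for each joint and chains the transformations as in Eq. (12.4):

```python
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """Arm matrix A of Eq. (12.3) between two consecutive link frames."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_table):
    """Chain the A matrices from base to wrist, as in Eq. (12.4)."""
    T = np.eye(4)
    for theta, d, a, alpha in dh_table:
        T = T @ dh_matrix(theta, d, a, alpha)
    return T

# Placeholder 6-DOF table of (theta, d, a, alpha) rows.
dh_table = [
    (0.1, 0.30, 0.00,  np.pi / 2),
    (0.2, 0.00, 0.40,  0.0),
    (0.3, 0.00, 0.10,  np.pi / 2),
    (0.0, 0.40, 0.00, -np.pi / 2),
    (0.1, 0.00, 0.00,  np.pi / 2),
    (0.0, 0.10, 0.00,  0.0),
]
T_0_6 = forward_kinematics(dh_table)  # pose of frame 6 in the base frame
```

The same helper applies to any serial arm once its DH parameter table is known.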

12.3 Robotic Dynamics: Modelling and Formulations

Dynamics is the science of describing the motion of massive bodies upon the application of forces and moments. The motion can be considered as an evolution of the position and orientation and their time derivatives. In robotics, the dynamic equation of motion for manipulators is utilized to set up the fundamental equations for control (Jazar 2007). The dynamic motion of the manipulator arm in a robotic system is produced by the torques generated by the actuators. Dynamic modelling of the robotic system is concerned with the relationship between these input torques and the time evolution of the robot arm configuration, i.e., with the derivation of the equations of motion of the manipulator as a function of the forces and moments acting on it. The dynamic modelling of a robot manipulator thus consists of finding the mapping between the forces exerted on the structures and the joint positions, velocities, and accelerations (Canudas et al. 1996).

Beni and Hackwood (1985) note that a dynamic analysis of a manipulator is useful for the following purposes:

  1. It determines the joint forces and torques required to produce specified end-effector motions (the inverse dynamic problem).

  2. It produces a mathematical model which simulates the motion of the manipulator under various loading conditions (the direct dynamic problem) and/or control schemes.

  3. It provides a dynamic model for use in the control of the actual manipulator.

Equations of motion for the manipulator can be obtained by forming the Euler–Lagrange equations on the basis of Lagrange's energy function. The resulting differential equations describe the motion in terms of the joint variables and parameters of the manipulator.

Let K and V be the total kinetic energy and potential energy stored in the dynamic system. The Lagrangian is defined as (Spong et al. 2006; Min et al. 1992; Yamamoto 1992):

$$ L\left(q,\overset{.}{q}\right)=K\left(q,\overset{.}{q}\right)-V(q). $$
(12.5)

Using the Lagrangian equations of motion as obtained by Yamamoto (1992):

$$ \frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L}{\partial \overset{.}{q}}-\frac{\partial L}{\partial q}={Q}_i,\kern1em i=1,\dots, n, $$
(12.6)

where Q i is the generalized force corresponding to the generalized coordinate q i . The kinetic energy and potential energy of link i are given by (Spong et al. 2006):

$$ {K}_i=\frac{1}{2}\,\mathrm{trace}\left[{\displaystyle \sum_{j=1}^i}{\displaystyle \sum_{k=1}^i}\frac{\partial {T}_i}{\partial {q}_j}{J}_i\frac{\partial {T}_i^{\mathrm{T}}}{\partial {q}_k}\ {\overset{.}{q}}_j{\overset{.}{q}}_k\right], $$
(12.7)
$$ {V}_i=-{m}_i{g}^{\mathrm{T}}{T}_i{\overline{r}}^{(i)}. $$
(12.8)

The Lagrangian motion equations for the nth link manipulators can be represented as a second-order nonlinear differential equation (Spong et al. 2006):

$$ {\displaystyle \sum_{j=1}^n}{B}_{ij}{\ddot{q}}_j+{\displaystyle \sum_{j=1}^n}{\displaystyle \sum_{k=1}^n}{C}_{ijk}{\overset{.}{q}}_j{\overset{.}{q}}_k+{g}_i={Q}_i,\kern1.25em i=1,\dots, n, $$
(12.9)

where

$$ {B}_{ij}={\displaystyle \sum_{k= \max\;\left(i,j\right)}^n}\;\mathrm{trace}\;\left[\frac{\partial {T}_k}{\partial {q}_i}{J}_k\frac{\partial {T}_k^{\mathrm{T}}}{\partial {q}_j}\right], $$
(12.10)
$$ {C}_{ijk}={\displaystyle \sum_{h= \max\;\left(i,j,k\right)}^n}\;\mathrm{trace}\;\left[\frac{\partial {T}_h}{\partial {q}_i}{J}_h\frac{\partial^2{T}_h^{\mathrm{T}}}{\partial {q}_j\partial {q}_k}\right], $$
(12.11)
$$ {g}_i={\displaystyle \sum_{k=i}^n}{m}_k{g}^{\mathrm{T}}\frac{\partial {T}_k}{\partial {q}_i}{\overline{r}}^{(k)}. $$
(12.12)

Equation (12.9) can be written as a set of second-order vector differential equations:

$$ \mathbf{B}\left(\mathbf{q}\right)\ddot{\mathbf{q}}+\mathbf{C}\left(\mathbf{q},\overset{.}{\mathbf{q}}\right)\overset{.}{\mathbf{q}}+\mathbf{g}\left(\mathbf{q}\right)=\mathbf{Q}, $$
(12.13)

where B(q) is the symmetric inertia matrix, \( \mathbf{C}\left(\mathbf{q},\overset{.}{\mathbf{q}}\right) \) is the matrix of Coriolis and centrifugal effects, the vector g(q) denotes the gravity terms, and Q is the generalized force vector.
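As a concrete instance of Eq. (12.13), the sketch below evaluates B(q), C(q, q̇)q̇, and g(q) for the standard two-link planar arm, using the closed-form terms found in Spong et al. (2006); the masses, lengths, and inertias are assumed example values:

```python
import numpy as np

# Assumed parameters for an illustrative two-link planar arm.
m1, m2 = 1.0, 0.8        # link masses [kg]
l1 = 0.5                 # length of link 1 [m]
lc1, lc2 = 0.25, 0.2     # distances to link centers of mass [m]
I1, I2 = 0.02, 0.01      # link inertias about the centers of mass [kg m^2]
g0 = 9.81                # gravitational acceleration [m/s^2]

def dynamics_terms(q, dq):
    """Return B(q), C(q, dq)*dq, and g(q) of Eq. (12.13) for the arm."""
    c2, s2 = np.cos(q[1]), np.sin(q[1])
    B = np.array([
        [m1*lc1**2 + I1 + m2*(l1**2 + lc2**2 + 2*l1*lc2*c2) + I2,
         m2*(lc2**2 + l1*lc2*c2) + I2],
        [m2*(lc2**2 + l1*lc2*c2) + I2,
         m2*lc2**2 + I2],
    ])
    h = -m2 * l1 * lc2 * s2                      # Coriolis/centrifugal factor
    Cdq = np.array([h*dq[1]*dq[0] + h*(dq[0] + dq[1])*dq[1],
                    -h*dq[0]**2])
    g = np.array([(m1*lc1 + m2*l1)*g0*np.cos(q[0]) + m2*lc2*g0*np.cos(q[0]+q[1]),
                  m2*lc2*g0*np.cos(q[0]+q[1])])
    return B, Cdq, g

# Torques required for a desired acceleration (inverse dynamics).
q, dq, ddq = np.array([0.3, 0.5]), np.array([0.1, -0.2]), np.zeros(2)
B, Cdq, g = dynamics_terms(q, dq)
Q = B @ ddq + Cdq + g
```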

12.4 Path and Trajectory Planning in Robotics

A path is a sequence of robot configurations in a particular order, without regard to the timing of these configurations; path planning is the planning of the whole way from the start point to the goal point, including stopping at defined path points. A trajectory is a path on which a time law is specified to move the robot from the starting point to the goal point. In trajectory planning methodologies, the task is to attain a specific target from an initial starting point, avoid obstacles, and stay within robot capabilities. Trajectories can be planned either in joint space, where the time evolution of the joint angles is specified directly, or in Cartesian space, specifying the position and orientation of the end-effector frame. In joint-space trajectory planning, the joint values that satisfy the desired robot configurations are calculated and used by the controller to drive the joints to the new positions. The joint-space approach is simpler and faster than the Cartesian-space approach because it avoids inverse kinematics calculations; however, obstacle avoidance is difficult in joint space because the end-effector pose is not directly controlled. Planning in Cartesian space allows the geometric constraints of the robot workspace to be satisfied directly, but then inverse kinematics must be solved (Niku 2001).

12.4.1 Third-Order Polynomial Trajectory Planning

In this planning method, the problem is to find a trajectory that connects the initial and final configurations. The final joint angles for the desired position and orientation are found using the inverse kinematic equations. Consider one of the joints, which at the beginning of the motion segment, at time t i, is at θ i and has to move to a new value θ f at time t f. Third-order polynomials can be used to plan the trajectory, such that the velocities at the beginning and the end of the motion are zero or other known values (Niku 2001; Spong et al. 2006):

$$ \theta (t)={c}_0+{c}_1t+{c}_2{t}^2+{c}_3{t}^3, $$

where the initial and final conditions are as follows:

$$ \theta \left({t}_{\mathrm{i}}\right)={\theta}_{\mathrm{i}};\kern1em \theta \left({t}_{\mathrm{f}}\right)={\theta}_{\mathrm{f}};\kern1em \overset{.}{\theta}\left({t}_{\mathrm{i}}\right)=0;\kern1em \overset{.}{\theta}\left({t}_{\mathrm{f}}\right)=0. $$

Substituting the boundary conditions into the polynomial and its first derivative gives:

$$ \theta \left({t}_{\mathrm{i}}\right)={c}_0={\theta}_{\mathrm{i}};\kern1em \theta \left({t}_{\mathrm{f}}\right)={c}_0+{c}_1{t}_{\mathrm{f}}+{c}_2{t}_{\mathrm{f}}^2+{c}_3{t}_{\mathrm{f}}^3;\kern1em \overset{.}{\theta}\left({t}_{\mathrm{i}}\right)={c}_1=0;\kern1em \overset{.}{\theta}\left({t}_{\mathrm{f}}\right)={c}_1+2{c}_2{t}_{\mathrm{f}}+3{c}_3{t}_{\mathrm{f}}^2=0. $$

Solving these four equations simultaneously, we get the necessary values for the constants as follows (Niku 2001):

$$ \theta (t)={c}_0+{c}_1t+{c}_2{t}^2+{c}_3{t}^3; $$
$$ \overset{.}{\theta }(t)={c}_1+2{c}_2t+3{c}_3{t}^2. $$
$$ \mathrm{A}\mathrm{t}\;t=0\kern0.75em \left(t={t}_{\mathrm{i}}=\mathrm{Zero}\right)\kern1em \to \kern1em \theta (0)={c}_0={\theta}_{\mathrm{i}};\kern1em \overset{.}{\theta }(0)={c}_1=0.\kern0.75em \left(\mathrm{initial}\;\mathrm{velocity}=\mathrm{Zero}\right). $$
$$ \mathrm{A}\mathrm{t}\;t={t}_{\mathrm{f}}:\kern1em \to \kern1em \theta \left({t}_{\mathrm{f}}\right) = {\theta}_{\mathrm{f}}={\theta}_{\mathrm{i}}+{c}_2{t}_{\mathrm{f}}^2+{c}_3{t}_{\mathrm{f}}^3\kern1em \to \kern1.25em {c}_2{t}_{\mathrm{f}}^2+{c}_3{t}_{\mathrm{f}}^3={\theta}_f-{\theta}_{\mathrm{i}}. $$
(12.14)
$$ \overset{.}{\theta}\left({t}_{\mathrm{f}}\right) = 2{c}_2{t}_{\mathrm{f}}+3{c}_3{t}_{\mathrm{f}}^2=0\kern0.75em \left(\mathrm{Zero}\;\mathrm{velocity}\;\mathrm{at}\;\mathrm{the}\;\mathrm{final}\;\mathrm{time}\kern0.37em {t}_{\mathrm{f}}\right) $$
$$ \to\ 2{c}_2{t}_{\mathrm{f}}+3{c}_3{t}_{\mathrm{f}}^2=0. $$
(12.15)

Solving Eqs. (12.14) and (12.15) gives:

$$ {c}_3=\frac{-2\left({\theta}_{\mathrm{f}}-{\theta}_{\mathrm{i}}\right)}{t_{\mathrm{f}}^3};\kern1em {c}_2=\frac{3\left({\theta}_{\mathrm{f}}-{\theta}_{\mathrm{i}}\right)}{t_{\mathrm{f}}^2}. $$
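A minimal Python sketch of this computation (assuming NumPy; the start angle, end angle, and duration are example values) evaluates the coefficients and the resulting profiles:

```python
import numpy as np

def cubic_coeffs(theta_i, theta_f, t_f):
    """Cubic coefficients for rest-to-rest motion (zero boundary velocities)."""
    c0, c1 = theta_i, 0.0
    c2 = 3.0 * (theta_f - theta_i) / t_f**2
    c3 = -2.0 * (theta_f - theta_i) / t_f**3
    return c0, c1, c2, c3

# Example: move a joint from 20 deg to 80 deg in 2 s.
c0, c1, c2, c3 = cubic_coeffs(np.deg2rad(20), np.deg2rad(80), 2.0)
t = np.linspace(0.0, 2.0, 101)
theta = c0 + c1*t + c2*t**2 + c3*t**3     # position
dtheta = c1 + 2*c2*t + 3*c3*t**2          # velocity
ddtheta = 2*c2 + 6*c3*t                   # acceleration
```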

As an example, Fig. 12.4 shows the joint positions, velocities, and accelerations resulting from a third-order polynomial.

Fig. 12.4 Joint positions, velocities, and accelerations using a third-order polynomial

12.4.2 Linear Segments with Parabolic Blends

This method is a joint-space trajectory planning technique with a trapezoidal velocity profile, in which the joints run at constant speed between the initial and final configurations. To create a smooth path, the desired trajectory is designed in three parts: a linear segment, with a parabolic blend region (a quadratic polynomial part) added at the beginning and at the end of the motion segment to create continuous position and velocity, as shown in Fig. 12.5. The linear segment is the trajectory segment between points A and B. Assuming that the initial and final positions are θ i and θ f at times \( {t}_{\mathrm{i}}=0 \) and t f and that the parabolic segments are symmetrically blended with the linear section at blending times t b and \( {t}_{\mathrm{f}}-{t}_{\mathrm{b}} \), we can write (Niku 2001; Spong et al. 2006):

Fig. 12.5 Joint positions, velocities, and accelerations using linear segments with parabolic blends

$$ {\theta}_{\mathrm{i}}=\theta \left(t=0\right)=\theta (0)=\mathrm{initial}\;\mathrm{position},\kern1em {\theta}_{\mathrm{f}}=\theta \left(t={t}_{\mathrm{f}}\right)=\theta \left({t}_{\mathrm{f}}\right)=\mathrm{final}\;\mathrm{position}. $$
$$ {t}_{\mathrm{i}}=0\left(\mathrm{starting}\;\mathrm{position}\right),\kern1em {t}_{\mathrm{f}}=\mathrm{final}\;\mathrm{time}\kern0.37em \left(\mathrm{ending}\kern0.37em \mathrm{time}\right). $$

For the first parabolic segment:

$$ \theta (t)={c}_0+{c}_1t+\frac{1}{2}{c}_2{t}^2; $$
$$ \overset{.}{\theta }(t)={c}_1+{c}_2t; $$
$$ \ddot{\theta}(t)={c}_2. $$

The acceleration is constant for the parabolic sections, yielding a continuous velocity at the common points (called knot points) A and B. Substituting the boundary conditions into the parabolic equation segment yields (Niku 2001):

$$ \mathrm{A}\mathrm{t}\kern0.5em t=0\kern1em \to \kern1em \theta (0)={c}_0\kern1em \to \kern1em {c}_0 = {\theta}_i; $$
$$ \overset{.}{\theta }(0)={c}_1=0\ \left(\mathrm{starting}\kern0.37em \mathrm{velocity}=0\right); $$
$$ \ddot{\theta}(0)={c}_2\kern1.25em \to \kern1.25em {c}_2=\ddot{\theta}. $$

Substituting the initial conditions gives parabolic segments in the form:

$$ \theta (t)={\theta}_i+\frac{1}{2}{c}_2{t}^2; $$
$$ \overset{.}{\theta }(t)={c}_2t; $$
$$ \ddot{\theta}(t)={c}_2. $$

For the linear segment the velocity is constant and can be chosen based on the physical capabilities of the actuators. Substituting zero initial velocity, a constant known joint velocity ω in the linear portion, and zero final velocity, the joint positions and velocities at points A, B, and the final point can be found as follows (Niku 2001):

The general linear equation is,

$$ \frac{y}{x}=\frac{y_2-{y}_1}{x_2-{x}_1}\kern1.25em \to \kern1em \frac{\theta }{t}=\frac{\theta_{\mathrm{B}}-{\theta}_{\mathrm{A}}}{t_{\mathrm{f}}-{t}_{\mathrm{b}}-{t}_{\mathrm{b}}}; $$
$$ \frac{\theta }{t}=\omega \kern1.25em \to \kern1em \omega =\frac{\theta_{\mathrm{B}}-{\theta}_{\mathrm{A}}}{t_{\mathrm{f}}-2{t}_{\mathrm{b}}} $$
$$ \to {\theta}_{\mathrm{B}}={\theta}_{\mathrm{A}}+\omega \left({t}_{\mathrm{f}}-2{t}_{\mathrm{b}}\right). $$
At \( t={t}_{\mathrm{b}} \), point A is both the end of the first parabolic segment and the start of the linear segment, so the value of θ A can be found from the end point of the first parabolic segment:

$$ {\theta}_{\mathrm{A}}={\theta}_{\mathrm{i}}+\frac{1}{2}{c}_2{t}_{\mathrm{b}}^2; $$
$$ {\overset{.}{\theta}}_{\mathrm{A}}={c}_2{t}_{\mathrm{b}}=\omega\;\left(\mathrm{constant}\;\mathrm{velocity}\;\mathrm{at}\;\mathrm{the}\;\mathrm{linear}\;\mathrm{segment}\right). $$

The necessary blending time t b can be found as follows:

$$ {\theta}_{\mathrm{f}}={\theta}_{\mathrm{B}}+\left({\theta}_{\mathrm{A}}-{\theta}_{\mathrm{i}}\right); $$
$$ {\theta}_{\mathrm{f}}={\theta}_{\mathrm{A}}+\omega \left({t}_{\mathrm{f}}-2{t}_{\mathrm{b}}\right)+{\theta}_{\mathrm{A}}-{\theta}_{\mathrm{i}}; $$
(12.16)
$$ {\theta}_{\mathrm{A}}={\theta}_{\mathrm{i}}+\frac{1}{2}{c}_2{t}_{\mathrm{b}}^2; $$
(12.17)
$$ {\theta}_{\mathrm{f}}=2\left({\theta}_{\mathrm{i}}+\frac{1}{2}{c}_2{t}_{\mathrm{b}}^2\right)-{\theta}_{\mathrm{i}}+\omega \left({t}_{\mathrm{f}}-2{t}_{\mathrm{b}}\right); $$
$$ {\theta}_{\mathrm{f}}={\theta}_{\mathrm{i}}+{c}_2{t}_{\mathrm{b}}^2+\omega \left({t}_{\mathrm{f}}-2{t}_{\mathrm{b}}\right); $$
$$ {c}_2=\frac{\omega }{t_{\mathrm{b}}} $$
(12.18)
$$ \to {\theta}_{\mathrm{f}}={\theta}_{\mathrm{i}}+\left(\frac{\omega }{t_{\mathrm{b}}}\right){t}_{\mathrm{b}}^2+\omega \left({t}_{\mathrm{f}}-2{t}_{\mathrm{b}}\right); $$
$$ {\theta}_{\mathrm{f}}={\theta}_{\mathrm{i}}+\omega {t}_{\mathrm{b}}+\omega {t}_{\mathrm{f}}-2\omega {t}_{\mathrm{b}}. $$

The blending time is then:

$$ {t}_{\mathrm{b}}=\frac{\theta_{\mathrm{i}}-{\theta}_{\mathrm{f}}+\omega {t}_{\mathrm{f}}}{\omega }. $$

The time t b cannot be larger than half of the total time t f; at \( {t}_{\mathrm{b}}={t}_{\mathrm{f}}/2 \) the profile consists of a parabolic speedup followed immediately by a parabolic slowdown, with no linear segment, corresponding to the maximum velocity (Niku 2001):

$$ {\omega}_{\max }=2\left({\theta}_{\mathrm{f}}-{\theta}_{\mathrm{i}}\right)/{t}_{\mathrm{f}}. $$

The final parabolic segment is symmetrical with the initial parabola, but with a negative acceleration, and thus can be expressed as follows (Niku 2001):

$$ \theta (t)={\theta}_{\mathrm{f}}-\frac{1}{2}{c}_2{\left({t}_{\mathrm{f}}-t\right)}^2,\kern1em \mathrm{where}\kern0.75em {c}_2=\frac{\omega }{t_{\mathrm{b}}}, $$
$$ \to \left\{\begin{array}{l}\theta (t)={\theta}_{\mathrm{f}}-\frac{\omega }{2{t}_{\mathrm{b}}}{\left({t}_{\mathrm{f}}-t\right)}^2,\hfill \\ {}\overset{.}{\theta }(t)=\frac{\omega }{t_{\mathrm{b}}}\left({t}_{\mathrm{f}}-t\right),\hfill \\ {}\ddot{\theta}(t)=-\frac{\omega }{t_{\mathrm{b}}}.\hfill \end{array}\right. $$
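The whole profile can be evaluated piecewise; the sketch below (assuming NumPy; θ i, θ f, t f, and ω are example values respecting the bound on ω above) returns the joint position at a given time:

```python
import numpy as np

def lspb(theta_i, theta_f, t_f, omega, t):
    """Position at time t for a trapezoidal-velocity (LSPB) profile.

    omega is the chosen constant velocity of the linear segment; it must
    satisfy (theta_f - theta_i)/t_f < omega <= 2*(theta_f - theta_i)/t_f.
    """
    t_b = (theta_i - theta_f + omega * t_f) / omega   # blending time
    c2 = omega / t_b                                   # blend acceleration
    if t < t_b:                                        # initial parabola
        return theta_i + 0.5 * c2 * t**2
    elif t <= t_f - t_b:                               # linear segment
        theta_A = theta_i + 0.5 * c2 * t_b**2
        return theta_A + omega * (t - t_b)
    else:                                              # final parabola
        return theta_f - 0.5 * c2 * (t_f - t)**2

ts = np.linspace(0.0, 2.0, 9)
path = [lspb(0.0, 1.0, 2.0, omega=0.75, t=t) for t in ts]
```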

12.5 Classical Control Synthesis and Design

The problem of robot control can be described as a computation of the forces or torques that must be generated by the actuators in order to successfully accomplish the robot task. The robot task can be presented either as the accomplishment of the motions in a free space, where position control is performed, or in contact with the environment, where control of the contact force is required (Bajd et al. 2010).

Control is used to move the robot with respect to the environment as well as to articulate sensor heads, arms, grippers, tools, and implements. Several techniques can be employed for controlling a robot, as in Samad (2001). The choice of the control method depends on the robot task.

A general robot control system consists of the following components: path planning, inverse kinematics, and a closed loop containing the controller, the actuators, the robot mechanism, and the sensors, as shown in Fig. 12.6. The input to the control system is the desired pose of the robot end-effector, which can be obtained by using trajectory interpolation methods. The variable x r represents the reference pose of the robot end-effector. The vector x describes the actual pose of the robot end-effector; in general it comprises six variables. Three of them define the position of the robot end point, while the other three determine the orientation of the robot end-effector. Accordingly,

Fig. 12.6 A common robot control system (Bajd et al. 2010)

$$ \mathbf{x}={\left[\begin{array}{cccccc} x & y & z & \varphi & \vartheta & \psi \end{array}\right]}^{\mathrm{T}}. $$

The orientation is determined by the angle φ around the z axis (roll), the angle ϑ around the y axis (pitch), and the angle ψ around the x axis (yaw). The internal coordinates q r represent the desired joint positions, i.e., the angle ϑ for a rotational joint and the distance d for a translational joint. The desired internal coordinates are compared to the actual internal coordinates in the robot control system. Based on the positional error \( \tilde{\boldsymbol{q}} \), the control system output u is calculated. The actuators provide the forces or torques necessary for the required robot motion, and the robot motion is measured by the sensors.

12.5.1 PD Position Control

For position control of a robot, a Proportional-Derivative (PD) controller is commonly designed. In robot control this closed loop is implemented separately for each degree of freedom. The control method is based on calculating the positional error and determining control parameters that enable its reduction or suppression. Since the positional error is reduced for each joint separately, as many controllers are developed as there are degrees of freedom. Because a robot manipulator has several degrees of freedom, the error \( \tilde{\boldsymbol{q}}={\boldsymbol{q}}_{\mathbf{r}}-\boldsymbol{q} \) is stated as a vector, and the proportional term is the positional error \( \tilde{\boldsymbol{q}} \) multiplied by K p, a diagonal matrix of the gains of all joint controllers. The calculated control input drives the robot toward reduction of the positional error. In addition, to ensure safe and stable robot action, a velocity feedback term with a negative sign is included; it brings damping into the system and is formed by the actual joint velocities \( \overset{.}{\boldsymbol{q}} \) multiplied by a diagonal matrix of velocity gains K d . The overall control law is obtained by combining the positional and velocity terms in the following form:

$$ \boldsymbol{u}={\boldsymbol{K}}_{\boldsymbol{p}}\left({\boldsymbol{q}}_{\boldsymbol{r}}-\boldsymbol{q}\right)+{\boldsymbol{K}}_{\boldsymbol{d}}\left(\dot{{\boldsymbol{q}}_{\boldsymbol{r}}}-\overset{.}{\boldsymbol{q}}\right), $$
(12.19)

where \( \dot{{\boldsymbol{q}}_{\boldsymbol{r}}}-\overset{.}{\boldsymbol{q}} \) is the velocity error \( \dot{\tilde{\boldsymbol{q}}} \). In Eq. (12.19), the reference velocity signal is included in the PD law in order to avoid unnecessarily high damping in the fastest part of the trajectory. The synthesis of the PD position controller involves the determination of the matrices K p and K d . The K p gains must be high for a fast response; by proper choice of the K d gains, a fast response without overshoot is obtained. Figure 12.7 illustrates the PD position control configuration.
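A minimal sketch of the control law of Eq. (12.19) for a hypothetical 3-DOF arm follows; the gain values are illustrative only, not tuned for any real robot:

```python
import numpy as np

Kp = np.diag([100.0, 100.0, 80.0])   # illustrative proportional gains
Kd = np.diag([20.0, 20.0, 15.0])     # illustrative velocity gains

def pd_control(q_r, dq_r, q, dq):
    """PD position control law of Eq. (12.19)."""
    return Kp @ (q_r - q) + Kd @ (dq_r - dq)

u = pd_control(np.array([0.5, -0.2, 0.1]), np.zeros(3),
               np.array([0.45, -0.25, 0.05]), np.zeros(3))
```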

Fig. 12.7 PD position control (Bajd et al. 2010)

12.5.2 PD Control of Position with Gravity Compensation

The robot mechanism is under the influence of inertial, Coriolis, centripetal, and gravitational forces. In addition, viscous friction, which is proportional to the joint velocity, will be considered here. Consequently, the robot dynamics of Eq. (12.13) can be rewritten as follows:

$$ \boldsymbol{B}\left(\boldsymbol{q}\right)\ddot{\boldsymbol{q}}+\boldsymbol{C}\left(\boldsymbol{q},\overset{.}{\boldsymbol{q}}\right)\overset{.}{\boldsymbol{q}}+{\boldsymbol{F}}_{\boldsymbol{v}}\overset{.}{\boldsymbol{q}}+\boldsymbol{g}\left(\boldsymbol{q}\right)=\boldsymbol{\tau}, $$
(12.20)

where τ is the vector of torques in the robot joints, B, C, and g are defined in Sect. 12.3, and F v is a diagonal matrix of the joint friction coefficients (Bajd et al. 2010).

In quasi-static conditions, when the robot is moving slowly, it can be assumed that accelerations and velocities are negligible, \( \ddot{\boldsymbol{q}}\approx 0 \) and \( \overset{.}{\boldsymbol{q}}\approx 0 \). Accordingly, the robot dynamic model simplifies to

$$ \boldsymbol{\tau} \approx \boldsymbol{g}\left(\boldsymbol{q}\right). $$
(12.21)

The control makes use of a model of the gravitational effects \( \hat{\mathbf{g}}\left(\mathbf{q}\right) \) (the circumflex denotes the robot model), which is an acceptable approximation of the actual gravitational forces g(q). The control algorithm shown in Fig. 12.8 can be written as follows:

Fig. 12.8 PD control with gravity compensation (Bajd et al. 2010)

$$ \boldsymbol{u}={\boldsymbol{K}}_{\boldsymbol{p}}\left({\boldsymbol{q}}_{\boldsymbol{r}}-\boldsymbol{q}\right)-{\boldsymbol{K}}_{\boldsymbol{d}}\overset{.}{\boldsymbol{q}}+\hat{\boldsymbol{g}}\left(\boldsymbol{q}\right). $$
(12.22)
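Under the same assumptions as the PD sketch above, Eq. (12.22) merely adds the modelled gravity vector to the proportional and damping terms; g_hat may be supplied, for example, by the gravity term of the two-link sketch in Sect. 12.3:

```python
import numpy as np

Kp = np.diag([100.0, 100.0])   # illustrative gains
Kd = np.diag([20.0, 20.0])

def pd_gravity_control(q_r, q, dq, g_hat):
    """PD control with gravity compensation, Eq. (12.22); g_hat(q) is the
    modelled gravity vector (the circumflexed model of the text)."""
    return Kp @ (q_r - q) - Kd @ dq + g_hat(q)
```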

By introducing gravity compensation, the errors in trajectory tracking are significantly reduced. In addition, this control method can be extended to operate in external coordinates, starting from the positional error of the robot end-effector, which is calculated as:

$$ \tilde{\mathbf{x}}={\mathbf{x}}_{\mathbf{r}}-\mathbf{x}={\mathbf{x}}_{\mathbf{r}}-\mathbf{k}\left(\mathbf{q}\right), $$
(12.23)

where x r is the reference pose of the robot end-effector and k(.) represents the equations of direct kinematics.

The velocity of the robot end point is calculated with the help of the Jacobian matrix from the joint velocities. The equation describing the PD controller is:

$$ \mathbf{f}={\mathbf{K}}_{\mathbf{p}}\tilde{\mathbf{x}}-{\mathbf{K}}_{\mathbf{d}}\overset{.}{\mathbf{x}}. $$
(12.24)

The negative sign of the velocity term introduces damping into the system. The joint torques are calculated from the force f, acting at the tip of the robot, with the help of the transposed Jacobian matrix and by adding the component compensating gravity. The control algorithm is written as follows:

$$ \mathbf{u}={\mathbf{J}}^{\mathbf{T}}\left(\mathbf{q}\right)\mathbf{f}+\hat{\mathbf{g}}\left(\mathbf{q}\right). $$
(12.25)

The complete control scheme is shown in Fig. 12.9.

Fig. 12.9 PD control with gravity compensation in external coordinates (Bajd et al. 2010)

12.5.3 Control of the Robot Based on Inverse Dynamics

This control scheme can be derived from the robot dynamic model described by Eq. (12.20). Assume that the torques τ generated by the motors are equal to the control outputs u. Rewriting Eq. (12.20) gives the direct robot dynamic model, which describes the robot motion under the influence of the given joint torques. Accordingly, the acceleration \( \ddot{\boldsymbol{q}} \) can be expressed as follows (Bajd et al. 2010):

$$ \ddot{\boldsymbol{q}}={\boldsymbol{B}}^{-1}\left(\boldsymbol{q}\right)\left(\boldsymbol{u}-\boldsymbol{n}\left(\boldsymbol{q},\overset{.}{\boldsymbol{q}}\right)\right), $$
(12.26)

where \( \boldsymbol{n}\left(\boldsymbol{q},\overset{.}{\boldsymbol{q}}\right) \) comprises all dynamic components except the inertial component, i.e.,

$$ \mathbf{n}\left(\mathbf{q},\overset{.}{\mathbf{q}}\right)=\mathbf{C}\left(\mathbf{q},\overset{.}{\mathbf{q}}\right)\overset{.}{\mathbf{q}}+{\mathbf{F}}_{\mathbf{v}}\overset{.}{\mathbf{q}}+\mathbf{g}\left(\mathbf{q}\right). $$
(12.27)

Assume that the robot dynamic model is known. The inertia matrix \( \hat{\mathbf{B}}\left(\mathbf{q}\right) \) is an approximation of the real B(q), while \( \hat{\mathbf{n}}\left(\mathbf{q},\overset{.}{\mathbf{q}}\right) \) represents an approximation of \( \mathbf{n}\left(\mathbf{q},\overset{.}{\mathbf{q}}\right) \), as follows:

$$ \hat{\mathbf{n}}\ \left(\mathbf{q},\overset{.}{\mathbf{q}}\right)=\hat{\mathbf{C}}\left(\mathbf{q},\overset{.}{\mathbf{q}}\right)\overset{.}{\mathbf{q}}+{\hat{\mathbf{F}}}_{\mathbf{v}}\overset{.}{\mathbf{q}}+\hat{\mathbf{g}}\left(\mathbf{q}\right). $$
(12.28)

The controller output u is based on inverse dynamics as in the following equation:

$$ \mathbf{u}=\hat{\mathbf{B}}\left(\mathbf{q}\right)\mathbf{y}+\hat{\mathbf{n}}\left(\mathbf{q},\overset{.}{\mathbf{q}}\right), $$
(12.29)

where the approximate inverse dynamic model of the robot was used. The system, combining Eqs. (12.26) and (12.29), is shown in Fig. 12.10.

Fig. 12.10 Linearization of the control system by implementing the inverse dynamic model (Bajd et al. 2010)

By simple substitution we can write the vector y, which has the character of an acceleration:

$$ \mathbf{y}={\ddot{\mathbf{q}}}_{\mathbf{r}}+{\mathbf{K}}_{\mathbf{p}}\left({\mathbf{q}}_{\mathbf{r}}-\mathbf{q}\right)+{\mathbf{K}}_{\mathbf{d}}\left({\overset{.}{\mathbf{q}}}_{\mathbf{r}}-\overset{.}{\mathbf{q}}\right). $$
(12.30)

It consists of the reference acceleration \( {\ddot{\boldsymbol{q}}}_{\boldsymbol{r}} \) and two contributions which depend on the errors of position and velocity. These two signals suppress the error arising from imperfectly modelled dynamics. The complete control scheme is shown in Fig. 12.11. By considering Eq. (12.30) and the equality \( \mathbf{y}=\ddot{\boldsymbol{q}} \), the differential equation describing the robot dynamics can be written as follows:

Fig. 12.11 Control of the robot based on inverse dynamics (Bajd et al. 2010)

$$ \ddot{\tilde{\mathbf{q}}}+{\mathbf{K}}_{\mathbf{d}}\overset{.}{\tilde{\mathbf{q}}}+{\mathbf{K}}_{\mathbf{p}}\tilde{\mathbf{q}}=0, $$
(12.31)

where the acceleration error \( \ddot{\tilde{\boldsymbol{q}}}={\ddot{\boldsymbol{q}}}_{\boldsymbol{r}}-\ddot{\boldsymbol{q}} \) was introduced. The differential equation (12.31) describes the time dependence of the control error as it approaches zero. The dynamics of the response is determined by the gains K p and K d .
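A sketch combining Eq. (12.30) with Eq. (12.29) into a single computed-torque step is given below; B_hat and n_hat stand for the modelled terms (for example, those of the two-link sketch in Sect. 12.3), and the gains are illustrative:

```python
import numpy as np

Kp = np.diag([100.0, 100.0])   # illustrative gains
Kd = np.diag([20.0, 20.0])

def computed_torque(q_r, dq_r, ddq_r, q, dq, B_hat, n_hat):
    """Eq. (12.30) for the acceleration-like signal y, then Eq. (12.29)
    for the control output u. B_hat(q) and n_hat(q, dq) are the modelled
    inertia matrix and the remaining dynamic terms."""
    y = ddq_r + Kp @ (q_r - q) + Kd @ (dq_r - dq)
    return B_hat(q) @ y + n_hat(q, dq)
```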

In a similar way to this derivation of the error dynamics in internal coordinates, an analogous equation can be written for the error of the end-effector pose. Accordingly, the acceleration \( \ddot{\mathbf{x}} \) of the robot end-effector can be expressed as follows:

$$ \ddot{\tilde{\mathbf{x}}}+{\mathbf{K}}_{\mathbf{d}}\overset{.}{\tilde{\mathbf{x}}}+{\mathbf{K}}_{\mathbf{p}}\tilde{\mathbf{x}}=0\kern1.25em \Rightarrow \kern1em \ddot{\mathbf{x}}={\ddot{\mathbf{x}}}_{\mathbf{r}}+{\mathbf{K}}_{\mathbf{d}}\overset{.}{\tilde{\mathbf{x}}}+{\mathbf{K}}_{\mathbf{p}}\tilde{\mathbf{x}}. $$
(12.32)

Taking into account the equality \( \mathbf{y}=\ddot{\mathbf{q}} \)

$$ \mathbf{y}={\mathbf{J}}^{-1}\left(\mathbf{q}\right)\left(\ddot{\mathbf{x}}-\overset{.}{\mathbf{J}}\left(\mathbf{q},\overset{.}{\mathbf{q}}\right)\overset{.}{\mathbf{q}}\right). $$
(12.33)

Substituting \( \ddot{\mathbf{x}} \) in Eq. (12.33) with expression (12.32), the control algorithm based on inverse dynamics in the external coordinates is obtained as follows:

$$ \mathbf{y}={\mathbf{J}}^{-1}\left(\mathbf{q}\right)\left({\ddot{\mathbf{x}}}_{\boldsymbol{r}}+{\boldsymbol{K}}_{\boldsymbol{d}}\overset{.}{\tilde{\mathbf{x}}}+{\mathbf{K}}_{\mathbf{p}}\tilde{\mathbf{x}}-\overset{.}{\mathbf{J}}\left(\mathbf{q},\overset{.}{\mathbf{q}}\right)\overset{.}{\mathbf{q}}\right). $$
(12.34)

The control scheme encompassing the linearization of the system based on inverse dynamics (12.29) and the closed-loop control (12.34) is shown in Fig. 12.12.

Fig. 12.12 Robot control based on inverse dynamics in external coordinates (Bajd et al. 2010)

12.5.4 Control Based on the Transposed Jacobian Matrix

This control method is based on the relation connecting the forces acting at the robot end-effector with the joint torques. The relation is defined by the use of the transposed Jacobian matrix:

$$ \boldsymbol{\uptau} ={\mathbf{J}}^{\mathbf{T}}\left(\mathbf{q}\right)\mathbf{f}, $$
(12.35)

where the vector τ represents the joint torques and f is the force at the robot endpoint. The aim is to control the pose of the robot end-effector, where its desired pose is defined by the vector x r and the actual pose is given by the vector x. Robots are usually provided with sensors that measure the joint variables, so the pose of the robot end-effector must be determined using the direct kinematic model x = k(q), where q indicates the vector of joint variables and x indicates the vector of task variables, usually three position coordinates and three Euler angles. The positional error of the robot end-effector, \( \tilde{\mathbf{x}}={\mathbf{x}}_{\mathbf{r}}-\mathbf{x} \), must be reduced to zero. A simple proportional control system with the gain matrix K p (Bajd et al. 2010) is introduced:

$$ \mathbf{f}={\mathbf{K}}_{\mathbf{p}}\tilde{\mathbf{x}}. $$
(12.36)

As the robot displacement can only be produced by the motors in the joints, the variables controlling the motors must be calculated from the force f. This calculation is performed using the transposed Jacobian matrix in Eq. (12.35).

$$ \mathbf{u}={\mathbf{J}}^{\mathbf{T}}\left(\mathbf{q}\right)\mathbf{f}. $$
(12.37)

The vector u represents the desired joint torques. The control method based on the transposed Jacobian matrix is shown in Fig. 12.13.
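A minimal sketch of Eqs. (12.36) and (12.37); here J is the manipulator Jacobian evaluated at the current joint position, and the Jacobian and gain values are assumed examples:

```python
import numpy as np

Kp = np.diag([50.0, 50.0])   # assumed task-space gain matrix

def transposed_jacobian_control(x_r, x, J):
    """Eqs. (12.36) and (12.37): f = Kp (x_r - x), then u = J^T f."""
    f = Kp @ (x_r - x)       # virtual force at the end-effector
    return J.T @ f           # joint torques realizing that force

J = np.array([[-0.35, -0.15],    # illustrative planar Jacobian
              [ 0.55,  0.25]])
u = transposed_jacobian_control(np.array([0.40, 0.30]),
                                np.array([0.38, 0.33]), J)
```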

Fig. 12.13 Control based on the transposed Jacobian matrix (Bajd et al. 2010)

12.5.5 Control Based on the Inverse Jacobian Matrix

In this method, the control is based on the relation between the joint velocities and the velocities of the robot end point, which is known as the Jacobian matrix (Bajd et al. 2010).

$$ \overset{.}{\mathbf{x}}=\mathbf{J}\left(\mathbf{q}\right)\overset{.}{\mathbf{q}}. $$
(12.38)

For small displacements, the relation between changes of the internal coordinates and changes of the pose of the robot end point can be expressed as follows:

$$ \mathbf{d}\mathbf{x}=\mathbf{J}\left(\mathbf{q}\right)\mathbf{d}\mathbf{q}. $$
(12.39)

For a small error in the pose, the positional error in the internal coordinates can be calculated by inverting relation (12.39):

$$ \tilde{\mathbf{q}}={\mathbf{J}}^{-1}\left(\mathbf{q}\right)\tilde{\mathbf{x}}. $$
(12.40)

In this way, the control method is translated to the known method of robot control in the internal coordinates. The control method, based on the inverse Jacobian matrix, is shown in Fig. 12.14.
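A sketch of Eq. (12.40) for a square, non-singular Jacobian; the mapping is computed as a linear solve (a redundant arm would require a pseudo-inverse instead):

```python
import numpy as np

def joint_space_error(x_err, J):
    """Eq. (12.40): map a small end-effector pose error to an internal
    (joint-space) error via the inverse Jacobian."""
    return np.linalg.solve(J, x_err)   # equivalent to J^(-1) @ x_err

# Illustrative 2-DOF planar example: J evaluated at some configuration.
J = np.array([[-0.35, -0.15],
              [ 0.55,  0.25]])
q_err = joint_space_error(np.array([0.01, -0.02]), J)
```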

Fig. 12.14 Control based on the inverse Jacobian matrix (Bajd et al. 2010)

12.5.6 Control of the Contact Force

The robot force control method is based on control of the robot using inverse dynamics. A contact force f appears in the inverse dynamic model due to the interaction of the robot with the environment. As the forces acting at the robot end-effector are transformed into the joint torques by the use of the transposed Jacobian matrix, we can write the robot dynamic model in the following form (Bajd et al. 2010)

$$ \mathbf{B}\left(\mathbf{q}\right)\ddot{\mathbf{q}}+\mathbf{C}\left(\mathbf{q},\overset{.}{\mathbf{q}}\right)\overset{.}{\mathbf{q}}+{\mathbf{F}}_{\mathbf{v}}\overset{.}{\mathbf{q}}+\mathbf{g}\left(\mathbf{q}\right)=\boldsymbol{\uptau} -{\mathbf{J}}^{\mathbf{T}}\left(\mathbf{q}\right)\mathbf{f}. $$
(12.41)

It can be seen that the force f acts through the transposed Jacobian matrix in a similar way as the joint torques, i.e., it tries to produce robot motion.

12.5.6.1 Linearization of a Robot System Through Inverse Dynamics

Let us denote the control output, representing the desired actuation torques in the robot joints, by the vector u. The direct dynamic model is then (Bajd et al. 2010):

$$ \ddot{\mathbf{q}} = {\mathbf{B}}^{-1}\left(\mathbf{q}\right)\left(\mathbf{u}-\mathbf{n}\left(\mathbf{q},\overset{.}{\mathbf{q}}\right)-{\mathbf{J}}^{\mathbf{T}}\left(\mathbf{q}\right)\mathbf{f}\right). $$
(12.42)

Equation (12.42) describes the response of the robot system to the control input u. Taking into account the initial velocity value, the actual velocity of the robot motion is obtained by integrating the acceleration; taking into account the initial position, the actual positions in the robot joints are calculated by integrating the velocity. The described model is represented by the block Robot in Fig. 12.15. The system is linearized by including the inverse dynamic model in the closed loop:

Fig. 12.15 Linearization of the control system by implementing the inverse dynamic model and the measured contact force (Bajd et al. 2010)

$$ \mathbf{u}=\hat{\mathbf{B}}\left(\mathbf{q}\right)\mathbf{y}+\hat{\mathbf{n}}\left(\mathbf{q},\overset{.}{\mathbf{q}}\right)+{\mathbf{J}}^{\mathbf{T}}\left(\mathbf{q}\right)\mathbf{f}. $$
(12.43)

The use of circumflex denotes the estimated parameters of the robot system.

12.5.6.2 Force Control

Based on the linearized model of the control system, force control is translated into control of the pose of the end-effector. If the robot is required to increase the force exerted on the environment, the robot end-effector must be displaced in the direction of the action of the force. The following control law from Bajd et al. (2010) can be used:

$$ \mathbf{y}={\mathbf{J}}^{-1}\left(\mathbf{q}\right)\left({\ddot{\mathbf{x}}}_{\mathbf{r}}+{\mathbf{K}}_{\mathbf{d}}\overset{.}{\tilde{\mathbf{x}}}+{\mathbf{K}}_{\mathbf{p}}\tilde{\mathbf{x}}-\overset{.}{\mathbf{J}}\left(\mathbf{q},\overset{.}{\mathbf{q}}\right)\overset{.}{\mathbf{q}}\right). $$
(12.44)

Accordingly, the control of the robot end-effector (including the linearization) while taking into account the contact force can be determined. This is summarized in Fig. 12.16.

Fig. 12.16 Robot control based on inverse dynamics in external coordinates including the contact force (Bajd et al. 2010)

Generally, for appropriate handling of interactions between robot and environment, it is necessary to consider force control strategies, either in an indirect way by means of a suitable use of position control laws or in a direct way by means of true force control laws (Samad 2001).

12.6 Robot Vision and Visual Servoing

Vision technology gives robots intelligent eyes. Using these eyes, robots can recognize the position of objects in space and adjust their working steps accordingly. The benefits of sophisticated vision technology include savings, improved quality, reliability, safety, and productivity. Robot vision is used for part identification and navigation. Vision applications generally deal with finding a part and orienting it for robotic handling or inspection before an application is performed. Sometimes vision-guided robots can replace multiple mechanical tools with a single robot station.

12.6.1 Robot Vision

Recognizing the geometry of the robot workspace from a digital image (Fig. 12.17) is the main task of robot vision; it is solved by finding the relation between the coordinates of a point in the two-dimensional (2D) image and the coordinates of that point in the real three-dimensional (3D) robot environment. The basic equations of optics determine the position of a point in the image plane with respect to the corresponding point in 3D space. We will therefore find the geometrical relation between the coordinates of the point P(x c, y c, z c) in space and the coordinates of the point p(u, v) in the image.

Fig. 12.17 Perspective projection (Bajd et al. 2010)

When studying robot geometry and kinematics, we attach a coordinate frame to each rigid robot segment or to objects manipulated by the robot; the camera itself also represents a rigid body, so a coordinate frame describing the pose of the camera should be assigned to it. The z c axis of the camera frame is directed along the optical axis, while the origin of the frame is positioned at the center of projection. We use a right-handed frame in which the x c axis is parallel to the rows of the imaging sensor and the y c axis is parallel to its columns. The image plane of the camera is placed behind the center of projection. The focal length f c is the distance between the image plane and the center of projection; it has a negative value, as the image plane intercepts the negative z c axis. It is therefore convenient to work with an equivalent image plane placed at a positive z c value (Fig. 12.18). The equivalent image plane and the real image plane are symmetrical with respect to the origin of the camera frame. The origin of the image frame is placed at the intersection of the optical axis with the image plane, with its x and y axes parallel to the x c and y c axes of the camera frame. The camera thus carries two coordinate frames: the camera frame and the image frame. Let the point P be expressed in the camera frame, while the point p represents its projection onto the image plane. In the case shown, the point P is located in the y c, z c plane of the camera frame (Bajd et al. 2010):

Fig. 12.18 Equivalent image plane (Bajd et al. 2010)

$$ P=\left[\begin{array}{c}\hfill 0\hfill \\ {}\hfill {y}_{\mathrm{c}}\hfill \\ {}\hfill {z}_{\mathrm{c}}\hfill \end{array}\right],\kern1em p=\left[\begin{array}{c}\hfill 0\hfill \\ {}\hfill y\hfill \end{array}\right]. $$
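A sketch of this projection onto the equivalent image plane follows; obtaining pixel coordinates (u, v) would additionally require scaling by the pixel size and shifting by the principal point, and the focal length and point are example values:

```python
import numpy as np

def project(P_c, f_c):
    """Perspective projection of a point expressed in the camera frame
    onto the equivalent image plane at distance f_c."""
    x_c, y_c, z_c = P_c
    return np.array([f_c * x_c / z_c, f_c * y_c / z_c])

# Point lying in the y_c-z_c plane, as in the text (values illustrative).
p = project(np.array([0.0, 0.2, 1.5]), f_c=0.008)  # approx. [0.0, 0.00107]
```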

A well-known case study for using vision technology in robotics is flying robots such as quadcopters (Carloni et al. 2013). These robots have gained increasing interest in research. To navigate safely, flying robots need the ability to localize themselves autonomously using their onboard sensors. Potential applications of such systems include use as a flying camera, for example to record sports movies or to inspect bridges from the air, as well as surveillance tasks and applications in agriculture. The main idea of the developed flying robot system is described in Fig. 12.19.

Fig. 12.19 Flying robot system with vision technology (Carloni et al. 2013)

12.6.2 Robot Control Using Visual Servoing Technique

The task is to control the pose of the robot's end-effector using visual information (features) extracted from the image. The pose is represented by a six-element vector encoding position and orientation in 3D space. The camera may be fixed, or mounted on the robot's end-effector, in which case there exists a constant relationship between the pose of the camera and the pose of the end-effector. The image of the target is a function of the relative pose between the camera and the target; the distance between the camera and the target is frequently referred to as depth or range. The relationship between these poses, and the relevant coordinate frames, are shown in Fig. 12.20.

Fig. 12.20 Relevant coordinate frames: world, end-effector, camera, and target

The camera contains a lens, which forms a 2D projection of the scene on the image plane where the sensor is located. This projection causes direct depth information to be lost, and each point on the image plane corresponds to a ray in 3D space. Some additional information is needed to determine the 3D coordinate corresponding to an image plane point. This information may come from multiple views or knowledge of the geometric relationship between several feature points on the target (Corke 1994).

Robots typically have six degrees of freedom (DOF), allowing the end-effector to achieve, within limits, any pose in 3D space. Visual servoing systems may control six or fewer DOF. Motion so as to keep one point in the scene, the interest point, at the same location in the image plane is referred to as fixation. Animals use fixation to direct the high-resolution fovea of the eye toward regions of interest in the scene. In humans, this low-level, unconscious, fixation motion is controlled by the brain’s medulla region using visual feedback from the retina. Fixation may be achieved by controlling the pan/tilt angles of the camera like a human eye or by moving the camera in a plane normal to the optical axis. High performance fixation control is an important component of many active vision strategies. Keeping the target centered in the field of view has a number of advantages that include:

  • Eliminating motion blur since the target is not moving with respect to the camera

  • Reducing the effect of geometric distortion in the lens by keeping the optical axis pointed at the target

Visual servoing can be classified into position-based visual servoing (Fig. 12.21) and image-based visual servoing. In position-based control (PBVS), the geometric model of the target object is used in conjunction with visual features extracted from the image to estimate the pose of the target with respect to the camera frame, and the control law is computed by reducing the error in pose space. In this way (Fig. 12.21), the estimation problem involved in computing the object location can be studied separately from the problem of calculating the feedback signal required by the control algorithm (Corke 1994; Miura 2004).

Fig. 12.21 Position-based visual servo (PBVS) structure

In image-based servoing (IBVS), the last step is omitted, and servoing is done on the basis of image features directly. The structures referred to as dynamic look and move make use of joint feedback, whereas the PBVS and IBVS structures use no joint position information at all. The image-based approach (as shown in Fig. 12.22) may reduce computational delay, eliminate the necessity for image interpretation, and eliminate errors in sensor modelling and camera calibration. However, it does present a significant challenge to controller design since the plant is nonlinear and highly coupled.
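As a sketch of the image-based approach, the code below uses the standard interaction (image) Jacobian for point features, a formulation from the visual servoing literature (see, e.g., Corke 1994) rather than one given explicitly in this chapter, to compute a camera velocity command from the feature errors; the focal length, depths, and gain are assumed values:

```python
import numpy as np

def interaction_matrix(u, v, Z, f):
    """Interaction (image Jacobian) matrix of one point feature at image
    coordinates (u, v), depth Z, and focal length f."""
    return np.array([
        [-f / Z,    0.0, u / Z,  u * v / f, -(f**2 + u**2) / f,  v],
        [   0.0, -f / Z, v / Z, (f**2 + v**2) / f, -u * v / f,  -u],
    ])

def ibvs_camera_velocity(features, desired, depths, f, gain=0.5):
    """Proportional IBVS law: v_cam = -gain * pinv(L) * (s - s_desired)."""
    L = np.vstack([interaction_matrix(u, v, Z, f)
                   for (u, v), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error   # [vx, vy, vz, wx, wy, wz]

# Example with three point features (all values illustrative).
features = [(0.02, 0.01), (-0.015, 0.02), (0.01, -0.02)]
desired = [(0.0, 0.0), (-0.02, 0.015), (0.015, -0.015)]
v_cam = ibvs_camera_velocity(features, desired,
                             depths=[1.0, 1.2, 0.9], f=0.008)
```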

Fig. 12.22 Image-based visual servo (IBVS) structure

12.7 Conclusion

This chapter sheds light on the essentials of a robotic system. First, we presented types and applications of robots, emphasizing the kinematic modelling of a robot manipulator. Then, we formulated the dynamics of a robot manipulator. Third-order polynomial trajectory planning was illustrated, as well as linear segments with parabolic blends. Suggestions for some practical control structures were given: PD position control in different configurations can be realized, and control based on inverse dynamics is widely used for robots. To exploit the relation between the joint velocities and the velocities of the robot end point, control based on the transposed (or inverse) Jacobian matrix can be applied. Force control methods find many areas of application as well. Finally, vision technology for robotic systems was illustrated. The study of robotics and its mathematics has great application in sustainability for the future.