1 Introduction

Talk of a robot, and the image that immediately comes to mind is that of a humanoid catering to daily chores! Possibly the basis of this image is the popular conceptualization of robots largely through science fiction. Over the ages, many science fiction writers imagined human-like mechanical beings with tremendous physical and intellectual capabilities. Starting with Czech playwright Karel Capek, these machines were mostly referred to by science fiction writers and moviemakers as robots. This is in spite of the fact that even the most sophisticated robots of today are nowhere near what their imaginations created!

Robots and robotics are driven by the urge to create or synthesize machines that can replicate human functions and capabilities. This involves the use of mechanisms, sensors, and actuators, together with advanced levels of computer programming. A multitude of fundamental concepts spanning several areas of engineering is involved. This chapter presents a basic understanding of a robot in the current perspective. Further, current trends and future directions are presented.

2 History of Robotics

Progress in science, engineering, and technology has influenced the growth of robotics. Growth and development in robotics are interwoven with breakthroughs and significant advancements in science and engineering. A complete account of this evolution is neither intended nor possible. The focus of this section is to present a brief history of robotics, with particular focus on the dreams and dreamers who ushered in one of the greatest accomplishments in the history of mankind.

2.1 Early Automatons and Automatic Mechanisms

People have long imagined machines endowed with human abilities: automatons that could move on their own and mechanical devices that could perform rational reasoning. Human-like machines are described in many stories and are pictured in sculptures, paintings, and drawings. Not only have automatons been a fascination in myths and legends across cultures, but mechanized implements were conceived as early as the fourth century BC. The Greek mathematician Archytas of Tarentum conceived a mechanical bird! Aristotle imagined the abolition of slavery through a surge in automatons and wrote in The Politics:

For suppose that every tool we had could perform its task, either at our bidding or itself perceiving the need… shuttles in a loom could fly to and fro and a plucker play a lyre of their own accord, then master craftsmen would have no need of servants nor masters of slaves.

Aristotle

The Politics [1]

Around 1495, Leonardo da Vinci (1452–1519) sketched designs for a humanoid robot in the form of a medieval knight. It was supposed to be able to not only move its arms and head but also open its jaw! It is not clear whether Leonardo da Vinci or his contemporaries tried to build the humanoid. However, the seeds of an arduous journey toward an artificial, intelligent, mechanized human were sown. Thus began the story of dreams and dreamers that drives the urge to look beyond the mundane and design automatons that no one had imagined before.

In 1651, Thomas Hobbes (1588–1679), the patriarch of artificial intelligence, published Leviathan, his book about the social contract and the ideal state. Hobbes propounded that it might be possible to build an artificial animal and stated:

For seeing life is but a motion of limbs, the beginning whereof is in some principal part within, why may we not say that all automata (engines that move themselves by springs and wheels as doth a watch) have an artificial life? For what is the heart, but a spring; and the nerves, but so many strings; and the joints, but so many wheels, giving motion to the whole body…

Thomas Hobbes

Leviathan [2]

This led to a number of automatons and automatic mechanisms being built in and around 1700. These automatons were capable of drawing and flying, and a few could act and play music. The French inventor and engineer Jacques de Vaucanson (1709–1782) created some of the most sophisticated automatons that moved in startlingly lifelike ways. His masterpiece, The Digesting Duck, was built in 1738. The mechanical duck could quack, flap its wings, paddle, drink water, and eat and digest grain!

2.2 Robots in Science Fiction

Some 17 years before Joseph Capek wrote the short story Opilec describing automats, Frank Baum in 1900 invented one of the literary world’s most beloved robots in The Wonderful Wizard of Oz: the Tin Woodsman, a mechanical man in search of a heart! In 1921, Joseph’s brother Karel Capek introduced the word robot in his play R.U.R. (Rossum’s Universal Robots). The play centers on a mad scientist who tries to usurp the powers of God by proving that man has acquired the technology and intelligence to create life. The artificial human is a magnificent worker and a tireless laborer who does not complain! The term robot is derived from the Czech word “robota,” which means servitude. Prior to Rossum’s Universal Robots, the term automaton was used.

Science fiction writer Isaac Asimov made robots popular through a collection of short stories published between 1938 and 1942. Asimov introduced the word “robotics.” In his story Runaround, Asimov describes robotics as the study of robots and enumerates three “laws of robotics”—the fundamental principles on which he defines his robots and their interaction within the worlds he creates. Thereafter, robots have been a part of literary and cinematic fiction.

More often than not, robots are shown to take over the world and be the bad guys rather than man’s friend! Most significant of these are the images of The Terminator (1984) and its numerous sequels. There are, however, instances of robots being portrayed as more than a simple killer. These include The Day the Earth Stood Still (1951 and 2008); Robbie in Forbidden Planet (1956); 2001: A Space Odyssey (1968); and A.I. Artificial Intelligence (2001).

2.3 Significant Moments in Robot History

Robots in real life are far from what science fiction writers have created in their imagined worlds. Nevertheless, robots rudimentarily similar to those envisioned in R.U.R. have become a reality! Today robots are routinely used to substitute for manual labor in an effort to minimize human error and injury. Robots are also seen as a way to increase production efficiency.

The transition from science fiction to reality has its genesis in a chance meeting in 1956 between entrepreneur Joseph Engelberger and inventor George Devol. Engelberger was excited about Devol’s latest invention, a Programmed Article Transfer device, which led to the formation of Unimation Inc. and the building of the Unimate—the very first industrial robot. General Motors was the first to introduce the Unimate to assist in automobile production. Starting with the Unimate on the assembly line way back in 1961, the application of robots in the automobile industry was explosive! Indeed, General Motors’ use of the Unimate in a die-casting plant remains the most significant moment in the history of robots and robotics [3].

Even though Devol and Engelberger were the most successful in transforming wishful thinking into an industrial revolution, they may not have been the first to create such machines [4]. Way back in 1948, William Grey Walter constructed some of the first electronic autonomous robots. Grey Walter conducted experiments on mobile, autonomous robots; he was interested in whether robots could model brain functions. His three-wheeled machine was crude by today’s standards but a marvel of its day. It had only basic analog circuits but could even recharge its own batteries.

One of the most significant moments in robotics came when robots moved off the factory floor and made inroads into our living spaces! Pioneering work in this area was done at the Stanford Research Institute between 1966 and 1972, where Shakey, a general-purpose mobile robot that was smart and could reason about its own actions, was developed. Life Magazine referred to Shakey as the first electronic man. This definitely has been a significant moment in the history of robotics.

3 What Is a Robot?

Even though robots have penetrated into a number of domains and have taken various forms and functions, basic understanding of robots remains the same. A robot can be seen as a physical agent that manipulates the physical world; the physical agent is artificial rather than natural. In order to manipulate the physical world, a robot needs to work in a Sense–Plan–Act cycle. Each of the above, in turn, demands different capabilities from the robot. “Sense” or sensing requires the robot to take in information about its environment; a robot has to “Plan,” i.e., use that information to make a decision, and finally, a robot needs moving parts to carry out commands or “Act.”
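To make the cycle concrete, here is a minimal sketch in Python of one way a Sense–Plan–Act loop might be organized. It is illustrative only: read_distance_sensor and set_wheel_speeds are hypothetical placeholders standing in for a real robot’s sensor and actuator interfaces.

```python
# A minimal Sense-Plan-Act loop (illustrative sketch only).
# read_distance_sensor() and set_wheel_speeds() are hypothetical
# placeholders for a real robot's sensor and actuator interfaces.

def read_distance_sensor():
    # Sense: return the distance to the nearest obstacle in metres.
    return 1.5  # placeholder value

def set_wheel_speeds(left, right):
    # Act: command the two wheel motors (placeholder).
    print(f"wheel speeds: left={left:.2f}, right={right:.2f}")

def plan(distance):
    # Plan: a trivial decision rule -- turn away if an obstacle is near,
    # otherwise drive straight ahead.
    if distance < 0.5:
        return (-0.2, 0.2)   # rotate in place
    return (0.5, 0.5)        # go forward

for _ in range(10):                    # run a few cycles of the loop
    d = read_distance_sensor()         # Sense
    left, right = plan(d)              # Plan
    set_wheel_speeds(left, right)      # Act
```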

3.1 RIA Definition

The Robotics Institute of America (RIA) defines a robot as follows:

A robot is a reprogrammable, multifunctional manipulator designed to move material, parts, tools or specialized devices through variable programmed motions for the performance of a variety of tasks. The robot is automatically operating equipment, adaptable to complex conditions of the environment in which it operates, by means of reprogramming, managing to prolong, amplify and replace one or more human functions in its interactions with the environment.

Robotics Institute of America [4]

This definition emphasizes the expected characteristics of a robot in terms of its functionality. Robots started their foray into the human space as an important component of modern automobile manufacturing plants. Even today, industrial robots form the single largest group of robots. Nevertheless, robots are now being used almost everywhere! However, robots differ depending on the end use. The hand of a robot, or the “end-effector,” may be a specialized tool, such as a spot welder or spray gun, or a more general-purpose gripper. This ensures “multi-functionality.” A robot needs to be “reprogrammable,” i.e., its operation is defined through computer commands, with robotic actions based on tasks or objectives. The robot needs to be capable of undertaking a “variety of tasks.”

3.2 JIRA Classification

The Japanese Industrial Robot Association (JIRA) has a more detailed classification of the varieties of robots. JIRA classifies robots into the following categories: (a) manipulators, (b) numerically controlled, (c) sensate, (d) adaptive, (e) smart, and (f) intelligent mechatronic systems.

  1. Manipulators—A manipulator is physically anchored to its workplace, usually with an entire chain of controllable joints, enabling placement of the end-effector in any position within the workspace. Manipulators are by far the most common type of industrial robot. Manipulators are categorized as (a) manual—machines slaved to a human operator; (b) sequential—devices that perform a series of tasks in the same sequence every time they are activated; and (c) programmable—devices whose operation is defined through computer commands based on tasks or objectives. Note that Japan calls certain machines robots that would best be termed “factory machines” in other parts of the world. Of the three categories of manipulator highlighted here, only a programmable manipulator would qualify as a robot outside of Japan.

  2. Numerically controlled—These are industrial automation systems usually referred to as numerical control (NC) machines. These robots are a form of programmable automation instructed to perform tasks through information on sequences and positions in the form of alphanumeric data. The data represent relative positions between a tool or other processing element, often referred to as the work-head, and the work-part, i.e., the object being processed. The operating principle of numerically controlled automation is to control the sequence of motion of the work-head relative to the work-part, leading to the required machining. Three important components merge to create a numerical control system: (a) the part program, (b) the machine control unit, and (c) the processing equipment. The part program refers to the detailed set of commands (in the form of alphanumeric data) to be followed by the processing equipment. The machine control unit is usually a microcomputer that stores and executes the program. This is accomplished by converting each command into actions by the processing equipment; operation is sequential, with one command being processed at a time (a simple sketch illustrating this sequential execution follows the list).

  3. Sensate—The word sensate is derived from the Latin sensatus, ‘having senses,’ and refers to perceiving or being perceived by the senses. Therefore, robots that incorporate sensor feedback into their circuitry could be referred to as sensate. However, by “sensate robots” we usually mean embodied machines with the unique capability to sense human body language, thus enabling these machines to better comprehend and respond to their human companions in a natural way. The family of robots that incorporate touch sensors, proximity sensors, vision systems, and so forth, predominantly for human–machine societal interaction, is referred to as sensate robots.

  4. Adaptive—Advances in sensor technology coupled with artificial intelligence have infused new directions into robotics. Robots are slowly transforming from single-purpose, preprogrammed machines to multi-purpose, adaptive workers. Artificial intelligence is increasingly being used to make robots figure out their own actions for a given goal, instead of being given explicit instructions. Robots that can change the way they function in response to their environment are termed adaptive robots.

  5. Smart—Robots that are considered to possess artificial intelligence leading to cognitive capabilities are smart!

  6. Intelligent Mechatronic System—Tetsuro Mori from the Yaskawa Electric Corporation coined the term “Mechatronics”—a fancy word for the intersection of mechanical, electrical, and computer control systems [5]. Mechatronics refers to the embedment of smart devices into systems already in place, leading to intelligent mechatronic systems.
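As mentioned under the numerically controlled category above, a machine control unit executes a part program one command at a time. The following Python sketch illustrates this sequential execution; the command format (e.g., "MOVE X10 Y5") is invented purely for illustration and does not correspond to any real NC programming language.

```python
# Illustrative sketch of a machine control unit executing a part program
# one command at a time. The command format is hypothetical.

part_program = [
    "MOVE X10 Y5",    # position the work-head relative to the work-part
    "DRILL Z-3",      # drive the tool into the work-part
    "MOVE X20 Y5",
    "DRILL Z-3",
]

position = {"X": 0.0, "Y": 0.0, "Z": 0.0}

for command in part_program:           # sequential execution, one command at a time
    fields = command.split()
    action, args = fields[0], fields[1:]
    for arg in args:                   # each argument is an axis letter plus a value
        position[arg[0]] = float(arg[1:])
    print(action, position)            # the processing equipment would act here
```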

4 Aspects of Robotics

Driven by science fiction and the proliferation of humanoids therein, robotics has always been concerned with the design and synthesis of certain human functions by the use of mechanisms coupled with sensors, actuators, and an information processing unit, usually a microcontroller or a computer. This is a huge challenge and demands a multitude of concepts from a number of classical areas of study. There are different aspects of robotics research, each carried out by experts in various fields. Nonetheless, synergy between a number of areas including mechanics, control, and cognition plays an important role.

The field of robotics can be partitioned at a high level of abstraction into four major areas: (a) mechanical manipulation, (b) locomotion, (c) computer vision, and (d) artificial intelligence. The discussion in this section is broadly set along these lines.

4.1 Joints and Links

Robotic manipulators comprise nearly rigid links, which are connected by joints that allow relative motion of neighboring links. A serial manipulator is a set of bodies connected in a chain by joints. These bodies are referred to as links. Joints form a connection between a neighboring pair of links, each link maintaining a fixed relationship between the joints at its ends. Even though the serial manipulator is often compared with the human arm, unlike the joints in the human arm, the joints in a robotic manipulator are restricted to one degree of freedom. This is to simplify the mechanics, kinematics, and control of the manipulator.

Two types of joints are commonly found in robots: (a) revolute joints and (b) prismatic joints. A revolute or rotary joint is a one-degree-of-freedom kinematic pair used in mechanisms; its displacement is referred to as the joint angle. Manipulators at times also contain sliding or prismatic joints, where the relative displacement between links is a translation. The usual convention is to number the immobile base of a serial manipulator as link 0. Links are thereafter numbered, with the first moving link as link 1, and so on, till one reaches the free end of the arm, which is link n [6].

Degrees of freedom (DoF) are defined as the number of independent movements an object can have in 3-D space. A rigid body free in space can have six independent movements—three translations and three rotations—leading to six DoFs. These six independent variables completely specify the position and orientation of the object in 3-D space. Consequently, in order to position an end-effector arbitrarily in 3-D space, a manipulator with six DoF is required—three joints for position and three joints for orientation of the end-effector. Such manipulators are referred to as spatial manipulators. Serial manipulators are the simplest of the industrial robots. Every link in a serial manipulator except the last has a joint at each end. The last link has only one joint, located at the end closest to the base, referred to as the proximal end. The end-effector is at the free end, i.e., the end farthest away from the base, referred to as the distal end. The end-effector could be a gripper, a welding torch, an electromagnet, or another device depending on the intended application [6].
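As an illustration of these six independent variables, the following Python sketch (assuming numpy) assembles three translations and three rotations into a single 4 × 4 homogeneous transform that completely specifies the pose of a rigid body. The roll–pitch–yaw parameterization used here is only one of several possible conventions.

```python
import numpy as np

def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def pose(x, y, z, roll, pitch, yaw):
    # Six numbers (three translations, three rotations) fully specify
    # the position and orientation of a rigid body in 3-D space.
    T = np.eye(4)
    T[:3, :3] = rot_z(yaw) @ rot_y(pitch) @ rot_x(roll)
    T[:3, 3] = [x, y, z]
    return T

print(pose(0.3, 0.1, 0.5, 0.0, 0.0, np.pi / 2))
```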

Not all robots are this simple; some have parallelogram linkages or other closed kinematic structures. Parallel manipulators have one or more loops. None of the links is designated as first or last! Consequently, there is no natural choice of end-effector or output link; the output link must be chosen. More often than not, the number of joints exceeds the degrees of freedom, with several joints not being actuated [7].

4.2 Drives, Actuators, and Sensors

Actuators are akin to the muscles of a robot and play a vital role in the “Act” stage of the Sense–Plan–Act cycle. Sensors measure different stimuli that the robot can effectively use. Actuators and sensors, together with a feedback control system, are the most basic requirements for the operation of a robot.

4.2.1 Drives and Actuators

Actuators convert energy to mechanical form; any device that accomplishes this conversion is an actuator. Different types of actuators are available; the choice of one over the other is driven by a number of factors including force, torque, speed of operation, accuracy, precision, and power consumption. Based on the three different modes of energy transfer, actuators are classified as (a) electrical, (b) pneumatic fluid power, and (c) hydraulic fluid power.

Electrical Actuation: Electrical motors form the major chunk of electrical actuators. Nevertheless, there are other types of electrical actuators such as solenoids. Typical electrical motor types include (a) DC motors, (b) AC motors, and (c) stepper motors.

Small DC motors have stator magnetic poles produced by a permanent magnet. For large motors, the magnetic field is produced via a stator winding. Voltage supplied to the armature in turn controls the speed of the DC motor. For motors with stator windings, speed control is also possible through varying the current to the stator. AC motors are driven by an alternating current supply to the stator windings. The frequency of the input signal determines the speed of AC motors. Consequently, speed control is done through variation of the signal frequency. Stepper motors possess the ability to rotate a specific number of revolutions or fractions of a revolution. This in turn ensures specific angular displacement.
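As a small worked example of the specific angular displacement offered by stepper motors, the Python sketch below computes the step count for a desired rotation. The 1.8° step angle (200 steps per revolution) is a commonly quoted value assumed here purely for illustration.

```python
# Illustrative calculation of the step count needed for a desired
# angular displacement of a stepper motor. The 1.8-degree step angle
# (200 steps per revolution) is an assumed, typical value.

STEP_ANGLE_DEG = 1.8

def steps_for_angle(angle_deg):
    # Number of whole steps that best approximates the requested angle.
    return round(angle_deg / STEP_ANGLE_DEG)

for target in (90.0, 45.0, 1.0):
    n = steps_for_angle(target)
    print(f"{target:6.1f} deg -> {n} steps -> actual {n * STEP_ANGLE_DEG:.1f} deg")
```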

Pneumatic Power Actuation: Pneumatic actuation is possibly the most widely used in the manufacturing industry, for example in automated assembly, including jig and robot end-effector operation. Pressurized air is used as the power transfer medium. Pneumatic cylinder, a closed cylindrical barrel with a piston, is the primary type of energy transfer device for a linear motion. The piston is made to move by applying the pressurized air at one of two ports found at the ends of the cylinder. Limited rotation rotary actuators that provide back and forth rotation through a fixed angle are the other type of pneumatic energy transfer device.

Hydraulic Power Actuation: Hydraulic actuators use oil instead of air and give a very high power output. Hydraulic actuators are available as linear or rotary actuators. Linear actuators are akin to pneumatic actuators. Rotary actuators are either of limited rotation or of continuous rotation. The continuous rotation hydraulic actuators are referred to as hydraulic motors.

Motion Convertors: Mechanical power transmission systems required to convert actuator outputs to the type of motion required by the system are referred to as motion convertors. For example, convertors are required for speed reduction or for conversion from rotary to linear motion.

4.2.2 Sensors

A sensor is a transducer that converts a physical stimulus from one form into another, more useful form for measuring the stimulus. Sensors are a very critical part of any robot, whether autonomous or tele-operated. There was a time when the “sensors” of an experimenter’s robot were rudimentary: just a few whiskers connected to microswitches to sense walls and other obstacles. When the robot banged into a wall or obstacle, the switches were tripped and its simple logic steered it in another direction. With the advent of microcontrollers, a new era of sensors began. We now have numerous sensors: active and passive IR sensors; sound and voice sensors; ultrasonic range sensors; positional encoders on arm joints, head, and wheels; compasses, navigational and GPS sensors; active and passive light and laser sensors; bumper switches; sensors to detect acceleration, turning, tilt, odor, magnetic fields, ionizing radiation, temperature, tactile contact, force, torque, RF, and UV; and visual sensors such as CCD cameras.

Sensors are the robot’s contact with the outside world or its own inner workings. Any robotic system has two distinct categories of sensors for measuring internal and environmental parameters referred to as proprioceptors and exteroceptors, respectively [8].

Proprioceptors: Sensors that measure the kinematic and dynamic parameters of the robot are referred to as proprioceptors. Proprioceptive sensors are responsible for controlling internal status and monitoring self-maintenance.

Position transducers such as potentiometers, synchros and resolvers, encoders, and rotary variable differential transformers (RVDTs) are the common rotary joint proprioception sensors. Tachometer transducers are used for the measurement of angular velocity. Traditional DC motor tachometers are being replaced by digital tachometers using magnetic pickup sensors, as the latter are compact and suitable for embedment within robotic manipulators. Acceleration sensors measure the force that produces the acceleration of a known mass. Acceleration transducers based on stress–strain gage, piezoelectric, capacitive, and inductive principles are available, as are micromechanical accelerometers in which the force is estimated by measuring the strain in elastic cantilever beams formed from silicon dioxide. Flexibility of a robot’s mechanical structure is also estimated by strain gages mounted on the manipulator’s links. Further, joint shaft torques can be estimated using strain gages mounted on the shafts.

Exteroceptors: Sensors that sense the environment in order to estimate the location/position as well as force interaction of the robot with the environment are referred to as exteroceptors. Exteroceptors are broadly categorized into: (a) contact sensors, (b) range sensors, and (c) vision sensors.

Contact Sensors: Contact sensors detect physical contact using mechanical switches that trigger when contact is made. Contact sensors are typically used to detect contact between mating parts and/or to measure the interaction forces and torques during part-mating operations using the robot manipulator. In addition to triggering on physical contact, sensors that continuously sense contact forces over an area are also available; this type of contact sensor is termed a tactile sensor. It is worthwhile to mention that tactile sensing is far more complex than contact sensing. Tactile sensing is based on conductive elastomer, strain gage, piezoelectric, capacitive, and optoelectronic technologies. In terms of their operating principles, tactile sensors can be seen as either force-sensitive or displacement-sensitive. Conductive elastomer, strain gage, and piezoelectric sensors are examples of force-sensitive tactile sensors that measure the contact forces. Displacement-sensitive tactile sensors include optoelectronic and capacitive sensors that measure the mechanical deformation of an elastic overlay.

Range Sensors: Range sensors estimate the distance to objects in their operating area. The most common uses of range sensors include robot navigation, obstacle avoidance, and recovery of the third dimension for monocular vision. Range sensors are based on either the principle of time of flight or triangulation. Range can be estimated from the time elapsed between the transmission and return of a pulse in time-of-flight sensors such as laser range finders and sonar. Estimation of range can also be accomplished by the principle of triangulation, i.e., detecting a given point on the object surface from two different points of view at a known distance from each other.
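Both range-estimation principles can be summarized in a few lines of Python; the numerical values used below (speed of sound, baseline, bearing angles) are illustrative assumptions.

```python
import math

def range_time_of_flight(round_trip_time_s, wave_speed_m_per_s):
    # The pulse travels out and back, so the range is half the product
    # of the wave speed and the elapsed time.
    return 0.5 * wave_speed_m_per_s * round_trip_time_s

# Sonar example: speed of sound about 343 m/s, echo returns after 5.8 ms.
print(range_time_of_flight(5.8e-3, 343.0))   # roughly 1 m

def range_triangulation(baseline_m, angle_left_rad, angle_right_rad):
    # Two viewpoints a known baseline apart observe the same point; the
    # perpendicular distance follows from simple triangle geometry.
    return baseline_m / (1.0 / math.tan(angle_left_rad) +
                         1.0 / math.tan(angle_right_rad))

print(range_triangulation(0.2, math.radians(80), math.radians(80)))
```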

Vision Sensors: Robot vision is one of the most complex yet versatile sensing processes. Robotic vision is an involved process that includes extracting, characterizing, and interpreting information from images in order to identify or describe objects in the environment. A camera converts the visual information to electrical signals, which are in turn sampled and quantized, yielding a digital image. CCD image sensors, being lightweight, compact, and robust, are preferred over conventional tube-type sensors. Vision is largely dependent on illumination; to avoid many of the problems associated with robotic vision, controlled illumination is preferred, with structured lighting being widely used. Light stripes, grids, or other patterns are projected onto the scene, and the shape of the projected patterns on different objects offers valuable cues. This allows 3-D object parameters to be recovered from a 2-D image.

The digital image produced by a vision sensor is an array of pixel intensities and needs to be processed to yield an explicit and meaningful description. Digital image processing involves preprocessing, segmentation, description, recognition, and interpretation. Preprocessing usually deals with noise reduction and detail enhancement. Objects are extracted from the scene using segmentation algorithms such as edge detection or region growing. Recognition involves classifying the objects in the feature space. Finally, interpretation assigns a meaning to the ensemble of recognized objects.
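A minimal sketch of such a pipeline is given below, using OpenCV (version 4 API assumed) purely for illustration; the input image name and area threshold are hypothetical, and the "recognition" step is deliberately naive.

```python
import cv2

# Illustrative vision pipeline: preprocessing, segmentation, description,
# and a placeholder recognition step. "parts.png" is a hypothetical image.

image = cv2.imread("parts.png", cv2.IMREAD_GRAYSCALE)
if image is None:
    raise SystemExit("parts.png not found")

# Preprocessing: reduce noise with a Gaussian blur.
smoothed = cv2.GaussianBlur(image, (5, 5), 0)

# Segmentation: extract object outlines with an edge detector.
edges = cv2.Canny(smoothed, 50, 150)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Description and (naive) recognition: classify each region by its area.
for contour in contours:
    area = cv2.contourArea(contour)
    label = "large part" if area > 500 else "small part"
    print(label, area)
```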

4.3 Kinematics and Dynamics

Robot kinematics is fundamental to describing an end-effector’s position and orientation as well as the motion of all the joints. Dynamic modeling lies at the heart of analyzing and synthesizing the dynamic behavior of a robot. In this section, a brief introduction to the kinematics and dynamics of manipulators is presented.

4.3.1 Kinematics

Kinematics is the study of position, velocity, acceleration, and other higher order derivatives of the position variables without considering forces causing these effects. Kinematics of manipulators involves study of the geometrical and time-based properties of motion of the manipulator.

Denavit–Hartenberg Notation: The Denavit–Hartenberg (D-H) notation is used for the kinematic description of a robot. For such a description, the pose of each link in the manipulator relative to the pose of the preceding link must be described; in general, this would require six parameters in space. However, by imposing a pair of constraints on the placement of the coordinate frames, the D-H formalism describes the spatial relation between coordinate frames attached to successive links using a set of only four parameters. For a pair of consecutive links, the four parameters describe the links and the joint between them. Each link is represented by two parameters: (a) the link length ai and (b) the link twist angle αi, which is the twist angle between the axes of the adjacent joints i and i − 1. Joints are also described by two parameters: (a) the link offset di, which is the distance from one link to the next along the axis of joint i, and (b) the joint angle θi, which is the rotation of one link with respect to the next about the joint axis [9]. Even though the choice of coordinate frames may not be unique, the D-H assignments always lead to the same expression for the pose of the end-effector with respect to the base [10].

Forward Kinematics: Forward kinematics is the problem of computing the position and orientation of the end-effector given the set of joint angles. Forward kinematics can be thought of as a mapping of the manipulator position from a joint space description into a Cartesian space description.
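A minimal sketch in Python (assuming numpy) shows how the four D-H parameters of each link yield a homogeneous transform, and how chaining these transforms from base to end-effector solves the forward kinematics. The two-link planar arm and its D-H table are illustrative assumptions, not taken from any particular robot.

```python
import numpy as np

def dh_transform(a, alpha, d, theta):
    # Homogeneous transform between successive link frames from the four
    # Denavit-Hartenberg parameters (standard convention).
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_rows):
    # Forward kinematics: chain the link transforms from base to end-effector.
    T = np.eye(4)
    for a, alpha, d, theta in dh_rows:
        T = T @ dh_transform(a, alpha, d, theta)
    return T

# Illustrative planar arm: two revolute joints, both links 1 m long.
joint_angles = (np.pi / 4, np.pi / 6)
dh_table = [(1.0, 0.0, 0.0, joint_angles[0]),
            (1.0, 0.0, 0.0, joint_angles[1])]
T = forward_kinematics(dh_table)
print(T[:3, 3])   # position of the end-effector with respect to the base
```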

Inverse Kinematics: Inverse kinematics is the problem of computing all possible sets of joint angles that attain a given position and orientation. For any practical use of the manipulator, such as a pick-and-place or line-following operation, inverse kinematics is the fundamental problem to be solved. Inverse kinematics can be thought of as a mapping from the end-effector pose—position and orientation—in 3-D Cartesian space to the manipulator’s internal joint space.

Although we humans solve the inverse kinematics problem routinely in our interaction with the physical world, it is not that simple to compute the joint angles from the Cartesian positions; forward kinematics is considerably simpler than inverse kinematics. The difficulty is compounded by the fact that the kinematic equations are nonlinear, so a solution may not be easy to obtain, or at times even possible, in closed form. The existence of multiple solutions, or the nonexistence of a solution, are questions that need to be addressed. This in turn affects the workspace, the volume of space in which the end-effector of the manipulator can attain any desired position and orientation.
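For the simplest case of a planar arm with two revolute joints, the inverse kinematics can be written in closed form, and it already exhibits the issues mentioned above: two solutions for reachable points and none for points outside the workspace. A sketch, with illustrative link lengths:

```python
import math

def planar_2r_ik(x, y, l1=1.0, l2=1.0):
    # Closed-form inverse kinematics of a planar two-revolute-joint arm.
    # Returns the elbow-down and elbow-up solutions, illustrating that
    # multiple solutions (or none at all) may exist.
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if abs(c2) > 1.0:
        return []                        # target outside the workspace
    solutions = []
    for s2 in (math.sqrt(1.0 - c2 * c2), -math.sqrt(1.0 - c2 * c2)):
        theta2 = math.atan2(s2, c2)
        theta1 = math.atan2(y, x) - math.atan2(l2 * s2, l1 + l2 * c2)
        solutions.append((theta1, theta2))
    return solutions

print(planar_2r_ik(1.0, 1.0))   # two joint-angle solutions
print(planar_2r_ik(3.0, 0.0))   # unreachable: empty list
```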

Velocities and Singularities: Apart from static positioning problems, kinematic analysis may involve manipulators in motion. A matrix called the Jacobian of the manipulator is defined to undertake velocity analysis. The Jacobian specifies a mapping from velocities in joint space to velocities in Cartesian space. Points at which this mapping is not invertible are called singularities. Knowledge of singularities is important as they lead to problems with motions of the manipulator arm in their neighborhood.
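Continuing the planar two-joint example, the sketch below forms its Jacobian and evaluates the determinant; the determinant vanishing at θ2 = 0 or π marks the singular configurations in which the velocity mapping is not invertible. The link lengths are again illustrative.

```python
import numpy as np

def planar_2r_jacobian(theta1, theta2, l1=1.0, l2=1.0):
    # Jacobian mapping joint velocities to end-effector (x, y) velocities
    # for the planar two-revolute-joint arm used above.
    s1, c1 = np.sin(theta1), np.cos(theta1)
    s12, c12 = np.sin(theta1 + theta2), np.cos(theta1 + theta2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

for theta2 in (np.pi / 2, 0.0):
    J = planar_2r_jacobian(0.3, theta2)
    # det(J) = l1 * l2 * sin(theta2): the mapping loses rank (a singularity)
    # when the arm is fully stretched out or folded back (theta2 = 0 or pi).
    print(f"theta2={theta2:.2f}  det(J)={np.linalg.det(J):.3f}")
```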

4.3.2 Dynamics

Motion is caused by forces acting on a body. Robot dynamics is devoted to studying the forces required at the joints to cause motion of the end-effector. To be more precise, dynamics includes kinematics as well; the other part of it is called kinetics. The robot mechanism is modeled as a rigid-body system; consequently, robot dynamics involves the application of rigid-body dynamics to robots. A manipulator at rest is accelerated, made to move at a constant end-effector velocity, and thereafter decelerated and stopped. The joint actuators accomplish this feat through a complex set of joint torques. The actuator torque depends not only on the path through which the end-effector moves but also on the mass properties of the links and the payload.

Forward Dynamics: Forward dynamics involves finding end-effector motion for known joint torques/forces. An important use of dynamic equations involving forward dynamics is in simulation. Dynamic equations can be reformulated to express motion descriptors such as acceleration as a function of actuator torque. This makes it possible to simulate the motion of a manipulator under application of a set of actuator torques.

Inverse Dynamics: Finding joint torques/forces for given joint motions and end-effector moment/force is the problem of inverse dynamics. For a desired path of the end-effector, using the dynamic equations of motion of a manipulator, actuator torque can be estimated. This in turn can be used to control a manipulator.

Trajectory Generation: Path and trajectory describe the motion of an end-effector through space. The locus of points which the manipulator has to follow is the path; a path further qualified with the specification of a timing law is referred to as a trajectory. Often, a path is described not only by a desired destination but also by some intermediate locations, or via points, through which the manipulator must pass. Curves such as splines are used to approximate a function that passes through a set of via points. Trajectory generation involves computing the motion function of each joint of a manipulator so that the overall motion appears coordinated. Further, to force the end-effector to follow a desired path through space, motion in Cartesian space is converted to an equivalent set of joint motions.
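A common choice for joining via points is a cubic polynomial segment with zero velocity at both ends. The Python sketch below generates such a trajectory for a single joint; the via points, segment duration, and sampling are illustrative assumptions.

```python
def cubic_segment(q0, qf, T, steps=5):
    # Cubic polynomial joint trajectory between two via points with zero
    # velocity at both ends: q(t) = q0 + 3*d*(t/T)^2 - 2*d*(t/T)^3.
    d = qf - q0
    return [q0 + d * (3 * (t / T) ** 2 - 2 * (t / T) ** 3)
            for t in (i * T / steps for i in range(steps + 1))]

# A joint moving through via points 0 -> 0.8 -> 0.5 rad, 2 s per segment.
via_points = [0.0, 0.8, 0.5]
trajectory = []
for q0, qf in zip(via_points, via_points[1:]):
    trajectory.extend(cubic_segment(q0, qf, T=2.0))
print([round(q, 3) for q in trajectory])
```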

4.4 Planning and Control

Planning and control are fundamental components of robot systems. Motion planning is particularly important for autonomous robots. Over the years, considerable advances have been made in planning and control. Here, the traditional control paradigms are highlighted, with pointers to more advanced schemes.

Forces or torques are usually supplied to actuators to drive the manipulators. Inverse dynamics computes the required torques that will cause the desired motion. Even though the problem of dynamics forms a basis of a framework for control of a manipulator, it in itself does not suffice.

Position Control: In linear position control, control algorithms are based on linear approximations to the dynamics of the manipulator. In nonlinear position control, control algorithms based on the complete nonlinear dynamics of the manipulator are used; these perform better than those based on linear approximations.
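A minimal sketch of a linear position control law for a single joint is given below; the proportional and derivative gains and the unit-inertia joint model are illustrative assumptions, not a tuned controller for any real manipulator.

```python
def pd_joint_control(q_desired, q, q_dot, kp=20.0, kd=5.0):
    # Linear PD position control law for one joint: the commanded torque is
    # proportional to the position error and damped by the joint velocity.
    return kp * (q_desired - q) - kd * q_dot

# Simulate a unit-inertia joint under this control law (illustrative only).
q, q_dot, dt = 0.0, 0.0, 0.01
for _ in range(500):
    tau = pd_joint_control(1.0, q, q_dot)
    q_dot += tau * dt        # unit inertia: acceleration equals torque
    q += q_dot * dt
print(round(q, 3))           # settles near the desired 1.0 rad
```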

Force Control: For many real-world applications, a robot needs to control contact forces between the end-effector and the object being manipulated.

Hybrid Control: Mixed or hybrid control involves some directions of the manipulator end-effector being controlled by a position control law and remaining directions controlled by a force control law.

4.5 Artificial Intelligence

For a robot to be able to perform at par with a human, only having a humanoid structure would not suffice. The robot needs to have the capability of “rational” decision making. Artificial embodiment of such intelligence in a robot is very involved and almost impossible. Nevertheless, intelligent robots form a vibrant and extremely exciting area within robotics.

The human sensory system assists human intelligence. Intelligent robots are likewise equipped with a myriad of sensors, particularly for knowledge of the external world. In line with the human use of visual and qualitative information for everyday commonsense reasoning, a robot in an unstructured environment (one that is not known a priori) makes use of “qualitative spatial reasoning” and “robotic vision.” Interpretation of a scene and learning from vision are possibly the most difficult phases in the whole pipeline of visual image processing. They require intelligence, which in turn demands a huge volume of knowledge. With the maturing of learning techniques through evolving paradigms such as deep learning, artificial intelligence is expected to enhance the decision-making capabilities of robots.

5 Applications of Robotics

Starting from General Motors’ use of Unimate, the first industrial robot, in its assembly line, robots have come a long way. They are increasingly being applied in scenarios as diverse as space exploration, medical surgery, rescue, and rehabilitation. The predominant use still remains in the field of automation in manufacturing. Nevertheless, there has been an upsurge of robot activity in close interaction with humans in applications as challenging as health service!

5.1 Robotics and Automation

Robotic mechanisms have been at the forefront of mechanization. There is, however, a subtle difference between mechanization and automation. Automation replaces the worker with intelligent control systems, thereby contributing to increases in productivity, speed, and repeatability. Automation using robotic technologies that exploit advances in computing, particularly machine learning and artificial intelligence, has been the trend. Improved flexibility is the hallmark that robotic technology brings to manufacturing processes [11] and invariably leads to an improvement in productivity.

Robotics and automation drive not only the manufacturing sector but also numerous other sectors such as construction, transportation, health care and support, and security. For developed countries, particularly the USA, Japan, Germany, and a few others, automation and robotics have become an integral part of life. As the cost of human workers goes up, China is slowly exploring automation and robotics. In the near future, China is expected to overtake Japan as the largest operator of industrial robots [12].

5.1.1 Robotics in Manufacturing

Even though the first use of robots in manufacturing was in situations dangerous to a human user, the motivation for using robots in manufacturing is driven more by economics than by societal concern. Robotic manufacturing is preferred for high-precision repetitive tasks; such tasks are better performed by a robot than by a human, who is susceptible to error. Industrial robots, being programmable, can be seen as advanced automation systems, with control executed by an onboard computer. Further, with advances in sensing and control as well as computer vision, an industrial robot need not be restricted to environments completely known a priori. Traditionally, a stand-alone industrial robot is used for automation in manufacturing. More recently, another form of robotics in manufacturing has been evolving through collaborative robots. Collaborative robots have a human-in-the-loop, i.e., they work in close coordination with a human user. This makes it possible for humans to still have a role in automated manufacturing. Since its inception in 2014, collaborative robotics has been poised to emerge as a key player in the automotive and electronics industries [12, 13].

Whether a traditional industrial robot or an adaptive collaborative robot, in an automated production line in manufacturing, its major roles include: (a) material handling, (b) welding, (c) assembly operations, (d) dispensing, and to a lesser extent (e) processing.

Robotic Material Handling: Material handling is possibly the most common application for industrial robots worldwide. Material-handling robots automate some of the most tedious, dull, and unsafe tasks in a production line, thus enhancing efficiency. Material handling encompasses part selection, transferring, packing, palletizing, loading, and unloading. Machine feeding or disengaging is also considered as material handling that can be automated using an industrial robot.

Robotic Welding: Robotic welding is the use of an industrial robot that completely automates a welding process by both performing the weld and handling the part. Spot welding and arc welding, mainly used by the automotive industry, are done through robots. As a robotic application, robotic welding is relatively new, introduced only in the late 1970s [14].

Assembling Operations: Assembly operations include fixing, press-fitting, inserting, and disassembling. For each assembly robot, end-of-arm tooling is customized to cater to the specific requirements. The use of robots for assembly operations has undergone a paradigm shift: a traditional industrial robot is seldom used for assembly; robotic assembly with sophisticated end-of-arm tooling is more common. This shift can be attributed to progress in sensor-based technologies, particularly force sensing and machine vision [15].

Robot Dispensing Operations: Robots are the most suited for accurate repetitive processes. Therefore, industrial robots are widely used for dispensing jobs such as painting, gluing, applying adhesive sealant, spraying, and filling.

Processing: Industrial robots are very rarely used for material processing. This is primarily due to the availability of automated machines built specifically for these applications. Nevertheless, industrial robots are used for applications such as mechanical, laser, and water jet cutting.

5.1.2 Space Robotics

Like the dream of a mechanical man, outer space has also been a great fascination. Design and development of machines that could tolerate the extremes of yet-unseen space environments and take forward space exploration have been a dream for science fiction writers and space explorers alike. Space robotics is an area concerned with the design of machines capable of surviving in such extreme environments and performing maintenance and exploration tasks. Space robotics is a conglomeration of multiple disciplines grounded on well-founded knowledge of robotics, computer science, and space engineering. Having started in the 1950s, space exploration today involves not only manned missions but also robotic missions with a range of mobility or locomotion systems.

Space robots can be either orbital robots or planetary robots. Robots conceived and designed for repairing satellites or assembling large space telescopes are usually deployed in orbits around planetary bodies and are referred to as orbital robots. Planetary robots are deployed on planetary bodies. These robots, such as the Mars Explorer, undertake survey and observation, examination, and extraction on extraterrestrial surfaces. Planetary robots are also engaged in preparing for subsequent human arrival on the planetary surface [16].

The four key issues in space robotics are (a) mobility, (b) manipulation, (c) time delay, and (d) extreme environments. Mobility and autonomy are essential for a space robot. The degree of mobility (i.e., locomotion capability) and the manipulation capability are dictated by the application, i.e., whether the robot is designed to be orbital or planetary. Autonomy is decided by the nature of the mission, particularly the distance the robot is expected to travel from Earth, and could range from tele-operation to fully autonomous operation. Tele-operation is usually effected through a master–slave control methodology, wherein the slave robot exactly replicates the movements of the master. Time delay is a particular challenge for manipulation in space robotics: the traditional master–slave approach would not work because of the considerable round-trip time between the master and the slave. Space manipulators are therefore controlled from the earth station by way of supervisory control, which involves human commands only for motion; contact forces are managed by the remote-site electronics.

The extent of autonomy incorporated into a space robot leads to three distinct classes of space robots: (a) robotic agents, (b) robotic assistants, and (c) robotic explorers. The one with the least autonomy is referred to as a robotic agent; it is in fact a human proxy performing various tasks via tele-operation. Fully autonomous space robots working in close coordination with human astronauts are referred to as robotic assistants. Fully autonomous robots capable of exploring unknown territory in space are robotic explorers. The largest proponent of space robotics, the National Aeronautics and Space Administration (NASA), has to its credit a series of successful planetary rover missions. Possibly the most successful of these have been NASA’s Mars rovers—Sojourner, Opportunity, and Curiosity. The Japanese Hayabusa robotic mission to the near-Earth asteroid Itokawa and the European Space Agency’s Rosetta lander Philae, which touched down on the surface of a comet, are other noteworthy applications. The majority of current space robots are in fact robotic agents. Increasingly challenging space missions necessitate a higher level of autonomy, spurring an evolution toward robotic assistants and robotic explorers [17].

5.1.3 Service Robots

Until recently, there was no strict technical definition of the term “service robot.” The International Federation of Robotics (IFR) and the United Nations Economic Commission for Europe (UNECE) worked out a service robot definition and classification scheme, resulting in ISO Standard 8373, effective since 2012.

A service robot is a robot that performs useful tasks for humans or equipment excluding industrial automation application. A personal service robot or a service robot for personal use is a service robot used for a non-commercial task, usually by lay persons. Examples are domestic servant robot, automated wheelchair, and personal mobility assist robot. A professional service robot or a service robot for professional use is a service robot used for a commercial task, usually operated by a properly trained operator. Examples are cleaning robot for public places, delivery robot in offices or hospitals, fire-fighting robot, rehabilitation robot and surgery robot in hospitals.

Professional and Personal Service Robots [18]

The possible applications of robots outside of industrial automation to assist humans in everyday chores are innumerable. Nevertheless, service robots at present are classified into three categories: (a) industrial service robots, (b) domestic service robots, and (c) scientific service robots.

  1. Industrial service robots: Robots are used in industry beyond industrial robots for automation. These assignments include routine tasks such as examination of welds, inspection of pipelines, and other services such as construction and demolition.

  2. Domestic service robots: Domestic robots are substitutes for workers in people’s homes. Domestic robots address domestic chores including cleaning floors, mowing the lawn, and window cleaning. Domestic robots are aimed at substituting for the butler! Further, robots for the assistance of the elderly and physically challenged, such as robotized wheelchairs, personal aids, and assistive devices for carrying out activities of daily living, are classified as personal/domestic service robots.

  3. Scientific service robots: Robotic systems, either autonomous or collaborative, are capable of performing a variety of repetitive functions required in research. For example, robots are regularly used for gene sampling and sequencing. Autonomous scientific robots are designed to perform difficult or impossible tasks, for example the Woods Hole Sentry for deep-sea missions and the Mars rovers.

5.2 Medical Robotics

First used in 1985 for brain biopsy, robots are now regularly used in various medical areas including laparoscopy, neurosurgery, and orthopedic surgery [19]. Medical robotics is one of the fastest growing sectors within the medical devices industry. This unprecedented growth and acceptance of robotics within medical disciplines is largely due to advances in the areas of medical imaging and technological improvements in the area of actuators and control.

Robotic surgery and rehabilitation robotics have emerged as the largest uses of medical robotics. Nevertheless, newer and novel uses for medical robots are regularly reported. Radiosurgery involves focused beams of ionizing radiation directed at a particular area; CyberKnife, a robotic radiosurgery system, directs the beam at the tumor from various orientations for a better outcome. There also exist medical robot systems for disaster response and battlefield medicine.

5.2.1 Robotic Surgery

Surgical robots have made the greatest impact in medical robotics. Starting with ROBODOC [20] for orthopedic surgery, robots have made inroads into operating theaters in a big way. This is primarily because of the possibility to position, orient, and perform accurate motions of the scalpel in more ways than possible by a human expert. Robots are increasingly being used for a variety of malignant and benign conditions in urology, neurosurgery, and even cardiac operations demanding high order of precision.

Two pathbreaking surgical robots exploiting research on tele-manipulation and miniature tendon-driven wrists are the Zeus by Computer Motion [21] and the da Vinci from Intuitive Surgical [22]. Both feature a master–slave architecture, with a surgeon master console and a patient-side slave manipulator. The slave manipulator had three arms—two for tissue manipulation and the third for positioning the endoscopic camera. The Zeus system, however, is no longer in production. Today, Intuitive Surgical’s da Vinci system is undoubtedly the most widely accepted surgical robot in the world.

Following the success and popularity of the da Vinci surgical robot, a number of research laboratories and companies intensified the quest for surgical robots. Amadeus, a four-armed laparoscopic surgical robot system, is being developed as a direct competitor to Intuitive Surgical’s da Vinci system; it uses snakelike multi-articulating arms expected to provide improved maneuverability. Improved MR-compatible robots are being investigated, and microrobotic and nano-robotic platforms are emerging. These would further improve surgical outcomes [23].

5.2.2 Rehabilitation and Assistive Robotics

Rehabilitation robotics refers to the application of robotic technologies in therapeutic procedures for augmenting rehabilitation. Rehabilitation robotics is primarily concerned with the restoration of sensorimotor functions for persons with impairment due to various disabling conditions such as stroke or brain and orthopedic trauma. Assistive robotics encompasses robotic assistive systems for persons with disability, including the design and development of personal aids and prosthetics for independently carrying out activities of daily living. For retention of residual skills, the most advanced rehabilitation and assistive robots employ techniques to adapt their quantum of assistance based on the response and reaction of the patient.

Rehabilitation robotics has proved itself to be hugely effective, especially in motor impairments due to stroke. The MIT-MANUS, a robotic arm, is possibly the most widely used rehabilitation robot [24]. Other rehabilitation robotic devices include exoskeletons and improved treadmills. Another example of rehabilitation robot is the Hipbot used for people with limited mobility because of hip injury [25]. Even though rehabilitation using these robotic devices has been predominantly for restoration of motor impairment due to stroke, it is equally applicable to individuals with cerebral palsy or other disabling conditions.

Advances in machine learning and signal processing are enabling a paradigm shift in medical rehabilitation and assistive systems, particularly through bio-signals and brain imaging. This has brought in a newer and challenging role for robotics—robotic systems used as natural extensions of the human body—for the restoration of neuromotor functions and motor capabilities. These include neurobionic hands and active legs. The use of robotics is not limited to motor rehabilitation but has also been effective for recovery from cognitive deficits [26].

5.3 Entertainment Robotics

Robots have not been confined to utilitarian use alone. There is a huge segment of the robotic industry engaged in select areas of culture and entertainment. These robots are designed and developed for the sole purpose of the subjective pleasure of the humans they serve, and they vary in form and function, ranging from simple toy robots such as a pet dog to complex humanoids exploiting artificial intelligence. Entertainment robotics has come a long way. Starting with Sony’s AIBO, a pet dog, and its humanoid companion QRIO, we now have Pepper, an emotional humanoid. Designed and developed by Aldebaran Robotics, Pepper is a four-foot humanoid (on wheels) with cute features and natural communication skills. The best part is that Pepper gives you all the attention you crave! Pepper not only offers advice but also prattles on and on, making small talk. Even though we have not yet been able to have a slave robot in its truest sense, we are definitely one step closer to the type of artificial intelligence fantasized in science fiction movies.

Entertainment robotics also includes robotic technology used to create sophisticated narrative environments through preprogrammed movement patterns, such as those used in Disneyland or Cadbury World. Entertainment robots are also popular in trade shows and museums. Promotional robots that attract show attendees toward particular products or services, or guide them to particular booths, are an established technology. Among such guides, the Intel Museum’s AI-driven interactive robot ARTI is possibly one of the most well known; ARTI can recognize faces, understand speech, and update museum guests on the history of the museum and its founders [27].

Perhaps one of the most successful endeavors of entertainment robotics is the RoboCup challenge. RoboCup is an international competition of robotic soccer with the ambitious official motto: “By the year 2050, develop a team of fully autonomous humanoid robots that can win against the human world soccer champions.” Robot soccer is possibly one of the toughest tests for building autonomous robots: the robots need active perception and real-time sensor-based planning. Apart from being a source of entertainment, the games have driven unprecedented advances in robotics research [28].

6 Current Trends

Starting from a modest industrial robot in the early 1960s, robotics has achieved brilliant successes in manufacturing as well as in personal and service robots. Robots have proliferated throughout society: in entertainment, health care, rehabilitation, and other applications. In this section, we survey the current trends toward social robotics and biomimetic robots, and the emergence of cloud robotics.

6.1 Social Robotics

A social robot is “a physically embodied, autonomous agent that communicates and interacts with humans on an emotional level” [29]. Interaction with humans or other physically embodied agents usually follows well-laid-down social behavior associated with the role of the social robot. Personal and sociable robotics is a relatively new entrant: only by the early 2000s were robots seen as teammates, and researchers started to focus on human-interactive robots, leading to the birth of social robotics.

With the requirement of close human interaction, the physical form of the social robot is of fundamental importance. Most social robots are humanoids. Japanese companies focus on humanoids, driven by the conviction that human embodiment can ensure better human–robot communication. This is based “on the human innate ability to recognize humans and prefer human interaction” [30].

6.2 Biomimetic Robots

In the late 1980s, Brooks proposed a robot control architecture that couples sensory information to action selection without a symbolic representation of the external world [31]. He termed it the subsumption architecture. Following the subsumption architecture, there was a surge of embodiment-inspired robotics. One nonrepresentational approach to artificial intelligence was the area of “Artificial Life,” which involved arriving at a myriad of complex behaviors from simple rules. However, the major focus has been on biomimetics—the mimicry of a biological process or system. Biomimicry involves mimicking not only the geometry but also the functionality [32].

Living species have adapted over time to their environment, leading to an optimal design; it is but natural to copy that design! Attempts to copy nature are not new: based on anatomical studies of birds, Leonardo da Vinci made detailed illustrations of a flying machine. Researchers are now focusing on biological inspiration to design and develop flying, crawling, and swimming automatons. These are biomimetic robots—robots built on principles extracted from biological systems.

The recent surge in biomimetic robotics lies in the availability of (a) huge amounts of biological process data and (b) low-cost, power-efficient computational resources.

6.3 Cloud Robotics

Networking of machines in manufacturing automation systems is not new! More than 30 years ago, General Motors explored the idea of a network of machines [33]. By the late 1990s, sporadic usage of Web interfaces to control and tele-operate robots led to a field of robotics christened Networked Robotics [34]. Simultaneously, within the realm of Internet services, the concept of the “cloud” took shape. The cloud is “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable resources (e.g., servers, storage, networks, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction” [35]. Networked robot and automation systems are greatly enhanced by the cloud.

Exploiting the computational and storage resources offered by the cloud and taking advantage of rapid progress in communications, a new area of robotics is slowly emerging. Christened cloud robotics, it minimizes “onboard” computation, bringing down the associated overheads and the dependence on custom middleware. The cloud also makes it possible to store and handle large datasets, facilitating machine learning. The RoboEarth and RoboBrain databases are laudable efforts in this direction. The RoboBrain project “learns from publicly available Internet resources, computer simulations, and real-life robot trials” [36]. The RoboEarth project “stores data related to objects and maps for applications ranging from object recognition to mobile navigation to grasping and manipulation” [37].
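As a conceptual sketch of this onboard/cloud split (the function names and recognition result below are hypothetical stand-ins, not the RoboEarth or RoboBrain APIs, and the “cloud” is simulated in-process so the example stays self-contained), only a thin preprocessing step remains on the robot while the heavy perception workload is delegated to a shared remote service:

    # Conceptual Python sketch of cloud robotics task splitting (hypothetical names).

    def onboard_preprocess(raw_image):
        # Cheap work stays on the robot: downsample before sending to the cloud.
        return raw_image[::4]

    def cloud_recognize(compressed_image):
        # Stand-in for a remote object-recognition service backed by a large
        # shared dataset; in a real system this would be a network call.
        return {"label": "coffee_mug", "confidence": 0.87}

    def grasp_plan(recognition):
        # The robot acts on the small result returned by the cloud.
        if recognition["confidence"] > 0.8:
            return "grasp " + recognition["label"]
        return "ask_for_help"

    raw = list(range(1000))                       # placeholder for camera data
    print(grasp_plan(cloud_recognize(onboard_preprocess(raw))))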

7 Future Directions

It is hard to say what the future holds for robotics. With the rapid pace of technological growth within robotics as well as in allied fields such as machine learning and artificial intelligence, the field is poised for even greater influence on our lives. In this section, we review three areas of research that are going to make a considerable impact on how robotics evolves in the near future.

7.1 Embodied Cognition

Until now, most of the AI within robotics has stayed true to the symbolic tradition. The stages in the Sense–Plan–Act cycle were clearly separated: sensing produces a set of symbols corresponding to features of the state of the external world; these symbols, together with other symbolic knowledge, are used by the planner to choose an appropriate action. The flow of information is unidirectional, with the processing at each stage clearly demarcated. It was Brooks who, with his subsumption architecture, proposed a radical departure from this view. Recent findings within the psychology and neuroscience communities emphasize the embodiment of intelligence: intelligence, including cognitive functions such as decision making, perception, and language, is grounded in our physical presence rather than in abstract symbolic models [38]. Further, grasping and walking dynamics have been explored as embodied, non-symbolic intelligence.
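The classical pipeline described above can be caricatured in a few lines (the sensor values, symbols, and actions are hypothetical); each stage hands a symbolic result to the next, and information flows strictly one way:

    # Caricature of the classical Sense-Plan-Act cycle (hypothetical symbols/actions).

    def sense(world):
        # Sensing: turn raw state into symbols describing the external world.
        return {"door_open": world["door_angle"] > 30,
                "battery_low": world["battery"] < 0.2}

    def plan(symbols):
        # Planning: choose an action from the symbols plus other symbolic knowledge.
        if symbols["battery_low"]:
            return "go_to_charger"
        if symbols["door_open"]:
            return "drive_through_door"
        return "wait"

    def act(action):
        # Acting: execute the chosen action.
        print("executing:", action)

    act(plan(sense({"door_angle": 45, "battery": 0.9})))   # sense -> plan -> act

Embodied approaches reject exactly this one-way, symbol-mediated flow.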

Rather than sensor-driven smart robots, paradigms of embodied cognition could come to define the design of robots. Recent models from the cognitive sciences and neuropsychology could shape human-interactive robotics. A new dream is being woven—models of embodied cognition from different fields driving robotic development in those fields. For example, cognitive insights into an industrial expert could drive industrial robotics, with the robot learning from the human expert and leading to a novel human–robot apprenticeship [39]. Such developments in embodied cognition, together with the growing embodiment of AI and advances in human-interactive robotics, would usher in a paradigm shift toward embodied cognition models for autonomous interactive robots.

7.2 Robotics and Internet of Things

The Internet of Things (IoT) connects different entities (living or non-living) using different but interoperable communication protocols. IoT envisages “to connect everything and everyone everywhere to everything and everyone else” [40]. Robots are slowly becoming ubiquitous! It is only natural to expect a robot to be connected as a thing and to establish connections with other things over the Internet, leading to a robot-incorporated IoT. This could take the form either of an Internet of Robots or of IoT-aided robotics applications. IoT technology would manage and magnify the existing sensing, computing, and communication capabilities of modern robots, enabling them to execute complex and coordinated operations. IoT would bring in several players to complement the work primarily undertaken by a robot. The sensing, planning, and intelligent computational goals of robots would be better achieved through a dense IoT network [41].
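To make the idea concrete, the toy sketch below uses an in-process publish/subscribe broker as a stand-in for a real IoT messaging protocol such as MQTT (all topic names and devices are hypothetical); the robot publishes its own state as one “thing” among many and reacts to readings published by other things:

    # Toy in-process publish/subscribe broker standing in for an IoT protocol.
    from collections import defaultdict

    class Broker:
        def __init__(self):
            self.subscribers = defaultdict(list)

        def subscribe(self, topic, callback):
            self.subscribers[topic].append(callback)

        def publish(self, topic, payload):
            for callback in self.subscribers[topic]:
                callback(payload)

    broker = Broker()

    # The mobile robot reacts to readings published by a shop-floor sensor...
    def on_temperature(payload):
        if payload["celsius"] > 60:
            print("robot: rerouting around hot cell")

    broker.subscribe("floor/cell3/temperature", on_temperature)

    # ...and publishes its own pose for other things to consume.
    broker.publish("robot/unit7/pose", {"x": 1.2, "y": 3.4})
    broker.publish("floor/cell3/temperature", {"celsius": 82})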

Smart objects within technological environments, such as shop floors in manufacturing, would enable more detailed information to be captured and processed. Pervasive and ubiquitous smart objects integrated with a network of robots would lead to the development of a number of novel services and applications.

The major challenges that need to be met for interaction among varied devices over a dense IoT network include (a) a secure and stable communication paradigm and (b) a unique yet sharable data representation. IoT would lay the foundation for full decentralization of control [41]. The technology is in place for both IoT and the networking of robots; a careful investigation of requirements, and of the means to put the two together, is all that is required for IoT-enhanced robotics applications.

7.3 Robotics Through Synthetic Biology

Self-replicating machines, machines that are capable of producing a detached, functional copy of themselves, have been a long-cherished dream. John von Neumann was the first to put forward a detailed scientific proposal for a non-biological self-replicating system [42]. There has been work on the self-reproduction of physical machines [43], and the construction of arbitrarily complex self-reproducing machines has been demonstrated [44]. Work on self-assembly and self-replication has received a fillip with the emergence of synthetic biology. Synthetic biology can broadly be described as the “design and construction of novel artificial biological pathways, organisms or devices, or the redesign of existing natural biological systems” [45]. Synthetic biology has made it possible to generate “diagnostic tools that improve the care of patients with infectious diseases, as well as devices that oscillate, creep, and play tic-tac-toe” [46].

The design of “custom-specified” cells has become possible because of rapid advances in synthetic biology. Such cells are preferred in microscale systems because (a) they are self-contained systems and (b) they have a natural ability to respond to environmental cues. For example, synthetic gene networks have been designed to create cell-based memory units, data loggers, counters, edge detectors, and multi-input logic circuits, as well as analog processing functions such as filtering and timing [47]. Exploiting progress in synthetic biology to advance microrobotics has significant potential [48]. A case in point is Cyberplasm, a microscale robot developed as a synthetic biology project. Robotics through synthetic biology is achieved by “interfacing multiple cellular devices together, connecting to an electronic brain and in effect creating a multi-cellular bio-hybrid microrobot” [49].

8 Conclusion

A brief overview of the history, trends, and future directions of robotics has been presented in this chapter. The developments in robotics, starting from its conceptualization in science fiction to the current interest in IoT-based robotics, have been discussed. Many important developments have been highlighted, while many more have been left out in view of space limitations. Robotics based on evolutionary computation, swarm robotics, and other ideas driven by progress in computing has not been covered. Discussion of specialized robots, such as those for process industries and robots primarily composed of easily deformable matter, is not included. Further, the trend toward the creation of human-like robots, such as Nadine, the robot receptionist [50], is not highlighted. Nevertheless, a clear and concise picture of the interdisciplinary field of robotics has been presented. It is clear that the field of robotics requires the collaborative efforts of various disciplines of engineering.

  • True and False Questions

  1. Isaac Asimov coined the term robot.
  2. Unimate was the first industrial robot.
  3. D-H notation is used for kinematic description of a robot.
  4. Surgical robots were first used for brain biopsies.
  5. MIT-MANUS is an entertainment robot.
  6. ARTI is a well-known entertainment robot.
  7. One of the most popular surgical robots is da Vinci.
  8. Biomimicry involves mimicking only the geometry.
  9. Networked robots are greatly enhanced by the cloud.
  10. IoT would achieve sensing needs of robots better.

(Keys: 1-F, 2-T, 3-T, 4-T, 5-F, 6-T, 7-T, 8-F, 9-T, 10-T)