
1 Overview and Terminology

Telerobotics is perhaps one of the earliest aspects and manifestations of robotics. Literally meaning robotics at a distance, it is generally understood to refer to robotics with a human operator in control or human-in-the-loop. Any high-level, planning, or cognitive decisions are made by the human user, while the robot is responsible for their mechanical implementation. In essence, the brain is removed or distant from the body.

Herein the term tele, which is derived from the Greek and means distant, is generalized to imply a barrier between the user and the environment. This barrier is overcome by remote-controlling a robot at the environment, as indicated in Fig. 43.1. Besides distance, barriers may be imposed by hazardous environments or scaling to very large or small environments. All barriers prevent the user from physically reaching or directly interacting with the environment.

Fig. 43.1 Overview of a telerobotic system (after [43.1], adapted from [43.2])

While the physical separation may be very small, with the human operator and the robot sometimes occupying the same room, telerobotic systems are often at least conceptually split into two sites. The local site encompasses the human operator and all elements necessary to support the system’s connection with the user, which could be joysticks, monitors, keyboards, or other input/output devices. The remote site contains the robot, supporting sensors and control elements, and the environment to be manipulated.

To support its operation, telerobotics integrates many areas of robotics. At the remote site, to operate the robot and execute the human’s commands, the system may control the motion and/or forces of the robot. We refer to Chaps. 7 and 8 for detailed descriptions of these areas. Also, sensors are invaluable (Chap. 5), including force sensors (Chap. 28) and others (Part C). Meanwhile, at the local site information is often displayed haptically (Chap. 41).

A recent addition to telerobotics is the use of computer networks to transmit information between the sites. The ubiquity of network access is allowing remote control from anywhere on demand. Chapter 44 discusses some of these developments, detailing network infrastructure and focusing on visual, often web-based, user interfaces, avoiding the need for specialized mechanical I/O (input/output) devices. Computer networks also allow new multilateral telerobotic architectures. For example, multiple users may share a single robot or a single user may control multiple robots (Sect. 43.5.2). Unfortunately, computer networks often see transmission delays and can introduce nondeterministic effects such as variable delay times and data losses. These effects can easily destabilize force feedback loops and require particular countermeasures (Sects. 43.4.4–43.4.6).

We should also point out the relation between telerobotics and human exoskeletons, as described in Chap. 69. Exoskeletons are also controlled by a human operator, leaving all planning and high-level challenges to the user, and their control systems share many aspects with telerobotics. However, the two sites are physically combined in an exoskeleton as the user directly touches and interacts with the robot. In this chapter, we will disallow any such direct mechanical connection.

The inclusion of the human operator makes telerobotics very attractive for handling unknown and unstructured environments. Applications are plentiful (Part F) and range from space robotics (Chap. 55) and hazardous environments (Chap. 58) to search and rescue (Chap. 60), medical systems (Chap. 63), and rehabilitation (Chap. 64).

Before proceeding, we define some basic terminology. Indeed many other terms are used nearly synonymously with telerobotics, in particular teleoperation and telemanipulation. Telerobotics is the most common, emphasizing a human’s (remote) control of a robot. Teleoperation stresses the task-level operations, while telemanipulation highlights object-level manipulation.

Within telerobotics, a spectrum of control architectures is used. Direct control or manual control falls at one extreme, indicating that the user controls the motion of the robot directly and without any automated help. At the other extreme, supervisory control implies that the user's commands and feedback occur at a very high level and that the robot requires substantial intelligence or autonomy to fulfill its function. Between the two extremes lie a variety of shared control architectures, where some degree of autonomy or automated help is available to assist the user.

In practice, many systems involve at least some level of direct control and accept the user’s motion commands via a joystick or similar device in the user interface. The joystick is an instrumented mechanism and can itself be viewed as a robot. The local and remote robots are called master and slave respectively, while the system is referred to as a master–slave system. To provide direct control, the slave robot is programmed to follow the motions of the master robot, which is positioned by the user. It is not uncommon for the master robot (joystick) to be a kinematic replica of the slave, providing an intuitive interface.

Some master–slave systems provide force feedback, such that the master robot not only measures motions but also displays forces to the user. The user interface becomes fully bidirectional and such telerobotic systems are often called bilateral. The human–master interactions are a form of human–robot interaction (Chap. 69). The field of haptics (Chap. 41) also discusses bidirectional user interfaces, involving both motion and force, though more commonly to interface the user with virtual instead of remote environments. We should note that both motion and force may become the input or output to/from the user, depending on the system architecture.

Finally, telepresence is often discussed as an ultimate goal of master–slave systems and telerobotics in general. It promises the user not only the ability to manipulate the remote environment, but also to perceive the environment as if encountered directly. The human operator is provided with enough feedback and sensations to feel present at the remote site. This combines the haptic modality with other modalities serving the human senses of vision, hearing, or even smell and taste. The accompanying videos show some early and recent results aiming for this telepresence. We focus our descriptions on the haptic channel, which is created by the robotic hardware and its control systems. The master–slave system becomes the medium through which the user interacts with the remote environment, and ideally the user is fooled into forgetting about the medium itself. If this is achieved, we say that the master–slave system is transparent.

While bilateral master–slave systems have held the biggest promise for telepresence and intuitive operations, they have also posed some of the largest stability and control problems. Especially considering force feedback from sensors at the remote site, these systems close multiple interwoven feedback loops and have to deal with large uncertainties in the environment. They have received heavy research attention and will therefore be a repeated focus in some of our following discussions.

2 Telerobotic Systems and Applications

Telerobotic systems, like most robotic devices, are typically designed for specific tasks and according to explicit requirements. As such, many unique systems have evolved, of which we present an overview for different applications. We begin with a short historical perspective, then describe different applications with various robot designs and user interfaces.

2.1 Historical Perspective

Teleoperation enjoys a rich history and dates back to nuclear research by Raymond C. Goertz in the 1940s and 1950s. In particular, he created systems for humans to handle radioactive material from behind shielded walls. The first systems were electrical, controlled by an array of on–off switches to activate various motors and move various axes [43.3]. Without any feel, these manipulators were slow and somewhat awkward to operate, leading Goertz to build pairs of mechanically linked master–slave robots [43.3, 43.4]. Connected by gears, linkages, and cables, these systems allowed the operator to use natural hand motions and transmitted forces and vibrations through the connecting structure (Fig. 43.2). Unfortunately they limited the distance between the operator and environment and required the use of kinematically identical devices. Goertz quickly recognized the value of electrically coupled manipulators and laid the foundations of modern telerobotics and bilateral force-reflecting positional servos [43.5].

Fig. 43.2 Raymond C. Goertz used electrical and mechanical teleoperators in the early 1950s to handle radioactive material (courtesy Argonne National Labs)

At the beginning of the 1960s the effects of time delay on teleoperation started to become a topic of research [43.6, 43.7]. To cope with this problem the concept of supervisory control was introduced [43.2] and inspired the next years of development. In the late 1980s and early 1990s theoretical control came into play with Lyapunov-based analysis and network theory [43.10, 43.11, 43.12, 43.13, 43.8, 43.9]. Using these new methods, bilateral control of telerobotic systems became the vital research area it is today (Sect. 43.4). The growth of the Internet and its use as a communication medium has further fueled this trend, adding the challenges of nondeterministic time delay.

On the hardware side, the Central Research Laboratory model M2 of 1982 was the first telerobotic system to realize force feedback while separating master and slave electronics. It was developed together with the Oak Ridge National Laboratory and was used for some time for a wide range of demonstration tasks, including military, space, and nuclear applications. The National Aeronautics and Space Administration (NASA) tested the M2 system to simulate the ACCESS space truss assembly with excellent results (Fig. 43.3). The advanced servomanipulator (ASM) was developed from the M2 to improve the remote maintainability of manipulators and was intended as a foundation for telerobotic systems [43.14].

Fig. 43.3 The telerobotic system CRL Model M2 is used to verify the assembly of space truss structures (1982) (courtesy Oak Ridge National Laboratory)

Also driven by nuclear applications, bilateral servomanipulators for teleoperation were developed in France at the Commissariat à l'Énergie Atomique (CEA) by Vertut and Coiffet [43.15]. With the MA 23 they demonstrated telerobotic operation including computer-assisted functionalities to improve the operator's performance [43.16]. The assistance included software jigs and fixtures or virtual walls and restrictions [43.17] (Sect. 43.3.2).

For space applications a dual-arm force-reflecting telerobotic system was developed by Bejczy at the Jet Propulsion Laboratory (JPL) [43.18]. This system was the first to use kinematically and dynamically different master and slave robots. It required control in the Cartesian space coordinates of the operator's hand and the slave robot's tool. Figure 43.4 shows the master control station with its two back-drivable hand controllers. The system was used to simulate teleoperation in space.

Fig. 43.4 JPL ATOP control station (early 1980s) (JPL No. 19902Ac, courtesy NASA/JPL-CALTECH)

In 1993 the first telerobotic system was flown in space with the German Spacelab Mission D2 on board the Space Shuttle Columbia. The robot technology experiment (ROTEX), shown in Fig. 43.5, demonstrated remote control of a space robot by means of local sensory feedback, predictive displays, and teleoperation [43.19]. In this experiment the round trip delay was 6–7 s, such that it was not feasible to include force feedback into the control loop.

Fig. 43.5 ROTEX, the first remotely controlled robot in space (1993): telerobot in space and ground operator station (courtesy German Aerospace Center, DLR)

Throughout the 1980s and 1990s, as nuclear power activities began to decline, interests expanded into new areas including medicine and undersea operations. Efforts were accelerated by the availability of increasing computer power as well as the introduction of novel hand controllers, e. g., the PHANToM device [43.20], popularized by haptic applications in virtual reality (Chap. 41).

Simultaneously, surgery was seeing the trend toward minimally invasive techniques, highlighted by the first laparoscopic cholecystectomy (removal of the gallbladder) in 1987. Several groups saw the potential for telerobotics and pursued telesurgical systems. Most noteworthy are the Telepresence Surgery System developed at the Stanford Research Institute (now SRI International) in 1987 [43.21], the Laparoscopic Assistant Robotic System (LARS) created at the IBM Watson Research Center [43.22], the teleoperated surgical instrument Falcon designed at the Massachusetts Institute of Technology (MIT) [43.23], and the Robot Assisted Microsurgery (RAMS) workstation developed at JPL [43.24].

In 1995 Intuitive Surgical Inc. was founded to leverage several of these concepts, leading to the da Vinci telesurgical system [43.25] and its introduction to market in 1999. Meanwhile Computer Motion started with a voice-controlled robot moving an endoscopic camera [43.26] and extended those capabilities into the ZEUS system. In 2001 a surgeon in New York (USA) used a ZEUS system to perform the first transatlantic telesurgical laparoscopic cholecystectomy on a patient located in Strasbourg (France) [43.27], as depicted in Fig. 43.6. The system did not include force feedback, so the surgeon had to rely on visual feedback only.

Fig. 43.6 Operation Lindbergh, the first transcontinental telerobotic surgery (2001) (courtesy M. Ghodoussi)

In this historical perspective we have referenced only systems that may be seen as milestones in the history of telerobotics. Many other systems that have added value to the research field unfortunately could not be mentioned here.

2.2 Applications

Telerobotic systems have been motivated by issues of human safety in hazardous environments (e. g., nuclear or chemical plants), the high cost of reaching remote environments (e. g., space), scale (e. g., power amplification or position scaling in micromanipulation or minimally invasive surgery), and many others. Not surprisingly, after their beginning in nuclear research, telerobotic systems have evolved to many fields of application. Nearly everywhere a robot is used, telerobotic systems can be found. The following are some of the more exciting uses.

In minimally invasive surgery, telerobots allow procedures to be performed through small incisions, reducing the trauma to the patient compared to traditional surgery [43.28]. The da Vinci system, made by Intuitive Surgical Inc. [43.25] and shown in Fig. 43.7, is the only commercially available device at present. Other efforts have included Computer Motion [43.26] and EndoVia Medical [43.29] on the commercial side, as well as the University of Washington [43.30], Johns Hopkins University [43.31], the German Aerospace Center [43.32], and many others.

Fig. 43.7 Intuitive Surgical Inc. makes the da Vinci telerobotic system, which is used in minimally invasive surgery (courtesy 2008 Intuitive Surgical, Inc.)

Protecting the operator from having to reach into a hazardous environment, telerobotic systems are widely used in the nuclear and chemical industries. Some systems have been developed for the maintenance of high-voltage electrical power lines, which can be safely repaired without service interruption by a human operator using a telerobotic system. Disarming of explosives is another important task. Many systems, like the telerob explosive ordnance disposal and observation robot (tEODor) shown in Fig. 43.8 or the PackBot made by iRobot [43.33], are used by police and military to disarm mines and other explosives. Similar vehicles are remotely controlled for search and rescue in disaster zones [43.34].

Fig. 43.8 tEODor, a telerobotic system for disarming of explosives (courtesy telerob Gesellschaft für Fernhantierungstechnik mbH, Ostfildern, Germany)

Space robotics is a classic application in which distance is the dominating barrier, as discussed in Chap. 55. The NASA rovers on Mars are a famous example. Due to the time delay of several minutes, the rovers are commanded using supervisory control: the human operator defines the goal of a movement and the rover achieves it by local autonomy, using sensory feedback directly [43.35].

In orbital robotics the German technology experiment ROKVISS (robot component verification on the International Space Station (ISS)) was the most advanced telerobotic system [43.36]. Launched in 2004 and operational through 2010, it was installed outside the Russian module of the International Space Station. It validated advanced robot components in the slave system, including torque sensors and stereo video cameras, under real space conditions. Using a direct communication link between the space station and the operator station at DLR (German Aerospace Center), the time delay was kept at about 20 ms, allowing a bilateral control architecture with high-fidelity force feedback to the operator [43.37] (Fig. 43.9). This technology is leading toward robotic servicing satellites and Robonauts, which can be controlled remotely from the ground to help real astronauts during extravehicular activities (EVA) or to perform repair and maintenance tasks [43.38].

Fig. 43.9 ROKVISS, a telerobotic system providing stereo vision and haptic feedback to the ground operator (courtesy German Aerospace Center, DLR)

3 Control Architectures

Compared to plain robotic systems, in which a robot executes a motion or other program without further consultation of a user or operator, telerobotic systems provide information to and require commands from the user. Their control architectures can be described by the style and level of this connection, as shown in Fig. 43.10. Organized in a spectrum, the three main categories are:

  • Direct control

  • Shared control

  • Supervisory control.

In practice, however, control architectures often include parts of all strategies.

Fig. 43.10 Different concepts for telerobotic control architectures

Direct control implies no intelligence or autonomy in the system, so that all slave motion is directly controlled by the user via the master interface. This may incorporate sensory feedback to the user in a bilateral configuration. If the slave motion is controlled by a combination of direct user commands and local sensory feedback or autonomy, the architecture is denoted as shared control. It is similarly shared if user feedback is augmented by virtual reality or other automatic aids. In supervisory control, user commands and feedback occur at a higher level. The connection is looser and the slave has to rely on stronger local autonomy to refine and execute tasks. The following explains the architectures in reverse order, leading to a detailed treatment of direct and bilateral control in Sect. 43.3.3, which introduces the basic ideas for Sect. 43.4.

3.1 Supervisory Control

Supervisory control, introduced by Ferrell and Sheridan in 1967 [43.2], is derived from the analogy of supervising a human subordinate staff member. The supervisor gives high-level directives to, and receives summary information from, in this case, the robot. Sheridan describes this approach in comparison with manual and automatic robot control [43.39]:

Human operators are intermittently programming and continually receiving information from a computer that itself closes an autonomous control loop through artificial effectors and sensors.

In general, supervisory control techniques allow more and more autonomy and intelligence to shift to the robot system. Today simple autonomous control loops may be closed at the remote site, with only state and model information being transmitted to the operator site. The operator supervises the telerobotic system closely and decides exactly how to act and what to do. A specific implementation of supervisory control is the telesensor programming approach, which is presented hereafter; the accompanying video material shows another implementation of supervisory control for space operation.

3.1.1 Telesensor Programming

Developed for space applications with large communication delays, the telesensor programming (TSP) approach has been characterized as a task-level-oriented programming technique and sensor-based teaching by showing [43.40, 43.41]. In essence, operators interact with a complex simulation of the robot and remote environment, in which they can test and adjust tasks. The tasks, consisting of robot and environment signals and configuration parameters, are then uploaded to the remote site. The approach presumes that the sensor systems provide sufficient information about the actual environment so that the tasks can be executed autonomously. Specifications and high-level planning remain the responsibility of the human operator.

Figure 43.11 shows the structure of a TSP implementation, consisting of two control loops working in parallel. One loop controls the real (remote) system, which contains internal feedback for local autonomy. The other loop establishes a simulation environment which is structurally equivalent to the real system, with a few exceptions. Most importantly, any signal delay which may result from communication to the remote system, in particular in space applications, is not duplicated in the simulation. This makes the simulation predictive with respect to the real system. A second exception is the display of internal variables in the simulation, which cannot be observed (measured) in the real system. This gives the operator or task planner more insight into what is happening or may happen in the system in response to commands. Communication between the two loops occurs via a common model data base which delivers a priori knowledge for execution on the remote system and a posteriori knowledge for model updating in the simulated world.

Fig. 43.11 The concept of telesensor programming as demonstrated during the ROTEX mission

Unique tools are necessary to implement the functionality required for such a telerobotic control system. First a sophisticated simulation system has to be provided to emulate the real robot system. This includes the simulation of sensory perception within the real environment. Also, the operator needs an efficient interface to set up task descriptions, to configure the task control parameters, to decide what kind of sensors and control algorithms should be used, and to debug an entire job execution phase.

For telerobotic systems with large time delays of a few seconds or more, e. g., in space and undersea applications, such a sensor-based task-directed programming approach has advantages. It is not feasible for human operators to handle the robot movements directly under delayed visual feedback. Only a predictive simulation allows the operator to telemanipulate the remote system [43.42]. In addition, the use of force reflecting hand controllers to feed back force signals from the simulated predicted world can improve the operator’s performance [43.43]. Finally, an interactive supervisory user interface makes it possible to configure the environmental and control parameters.

3.2 Shared Control

Shared control tries to combine the basic reliability and sense of presence achievable by direct control with the smarts and possible safety guarantees of autonomous control [43.44, 43.45]. This may occur in various forms. For example, the slave robot may need to correct motion commands, regulate subsets of joints or subtasks, or overlay additional commands.

With large communication delays, a human operator may only be able to specify gross path commands, which the slave must fine-tune with local sensory information [43.46]. We may also want the slave to assume control of subtasks, such as maintaining a grasp over long periods of time [43.47]. In surgical applications, shared control has been proposed to compensate for beating-heart movements (Fig. 43.12): the sensed heart motion is overlaid on the user commands, so the surgeon can operate on a virtually stabilized patient [43.48].

Fig. 43.12 An example of the shared control concept in telerobotic surgery

A special application of shared control is the use of virtual fixtures [43.49, 43.50, 43.51]. Virtual elements, such as virtual surfaces, virtual velocity fields, guide tubes, or other appropriate objects, are superimposed on the visual and/or haptic scene for the user. These fixtures can help the operator perform tasks by limiting movement into restricted regions and/or influencing movement along desired paths. Control is thus shared at the master site, taking advantage of prior knowledge of the system or task to modify the user's commands and/or to combine them with autonomously generated signals.

Capitalizing on the accuracy of robotic systems while sharing control with the operator, telerobotic systems with virtual fixtures can achieve safer, faster and more intuitive operation. Abbott et al. describe the benefits by comparison to the common physical fixture of a ruler [43.50]:

A straight line drawn by a human with the help of a ruler is drawn faster and straighter than a line drawn freehand. Similarly, a [master] robot can apply forces or positions to a human operator to help him or her draw a straight line.

Based on the nature of the master robot and its controller, the virtual fixtures may apply corrective forces or constrain positions. In both cases, and in contrast to physical fixtures, the level and type of assistance can be programmed and varied.
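As a concrete illustration, the minimal sketch below (in Python, with illustrative parameter values and function names that are not taken from the cited works) implements a guidance-type virtual fixture: motion along a preferred line passes through unchanged, while the perpendicular component is either attenuated or penalized by a corrective force, depending on whether the master is position- or force-commanded.

```python
import numpy as np

def guidance_fixture_velocity(v_cmd, line_dir, compliance=0.2):
    """Attenuate motion perpendicular to a guide line (guidance-type fixture).

    v_cmd      : commanded master velocity (3-vector)
    line_dir   : direction of the preferred motion (need not be unit length)
    compliance : 0 -> hard fixture (motion only along the line),
                 1 -> no fixture (commands pass through unchanged)
    """
    d = line_dir / np.linalg.norm(line_dir)
    v_along = np.outer(d, d) @ v_cmd      # component along the guide
    v_perp = v_cmd - v_along              # component fighting the fixture
    return v_along + compliance * v_perp

def fixture_force(x_master, line_point, line_dir, stiffness=200.0):
    """Corrective force pulling the master back toward the guide line."""
    d = line_dir / np.linalg.norm(line_dir)
    r = x_master - line_point
    perp = r - d * (d @ r)                # shortest vector from the line to the tip
    return -stiffness * perp

# Example: guide line along x; the fixture suppresses drift in y and z.
v = guidance_fixture_velocity(np.array([0.10, 0.05, 0.0]), np.array([1.0, 0.0, 0.0]))
f = fixture_force(np.array([0.20, 0.01, 0.0]), np.zeros(3), np.array([1.0, 0.0, 0.0]))
```

Setting the compliance to zero yields a hard fixture, while a value of one disables the assistance entirely, mirroring the programmable level and type of assistance described above.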

3.3 Direct and Bilateral Teleoperation

To avoid difficulties in creating local autonomy, most telerobotic systems include some form of direct control: they allow the operator to specify the robot's motions. This may involve commanding position, velocity, or acceleration. We begin our discussion with the latter two options, which are generally implemented unilaterally, without force feedback to the user. We then focus on position control, which is better suited to bilateral operation. We will assume a master–slave system, i. e., the user is holding a joystick or master mechanism serving as an input device.

3.3.1 Unilateral Acceleration or Rate Control

For underwater, airborne, or space applications, a slave robot may be a vehicle actuated by thrusters. Direct control thus requires the user to power the thrusters, which in turn accelerate the vehicle. For other applications, the user may be required to command the rate or velocity of the vehicle or slave robot. In both scenarios, the input device is commonly a joystick, often spring centered, where the acceleration or rate commands are proportional to the joystick displacement. For six-degree-of-freedom (DOF) applications, i. e., when the slave needs to be controlled in translation and orientation, a six-dimensional (6-D) space mouse can be used. Alternatively, two joysticks may separately command translation and orientation.

Acceleration and rate control are very attractive when the master and slave robots are fundamentally different, for example if the slave robot can reach an effectively unbounded workspace. Unfortunately, basic implementations can require considerable effort from the operator to reach and hold a given target location. As expected, users can position a system more accurately under rate control than under acceleration control [43.52]: acceleration control requires users to regulate a second-order system, versus a first-order system for rate control. Assuming the slave has local position feedback available, a control system is often incorporated locally, such that the user may specify position commands and is relieved of the dynamic control problem. We refer to Sect. 43.5.1 for some emerging developments in bilateral control of mobile robots.
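The sketch below illustrates a plain rate-control mapping of this kind (a Python sketch; the gain, deadband, and joystick normalization are illustrative assumptions, not values from the text). A spring-centered joystick deflection is turned into a velocity command, which is then integrated into a position setpoint for the slave's local position controller.

```python
import numpy as np

def rate_command(deflection, gain=0.5, deadband=0.05):
    """Map a spring-centered joystick deflection (normalized to [-1, 1] per axis)
    to a slave velocity command, ignoring small deflections around center."""
    mag = np.linalg.norm(deflection)
    if mag < deadband:
        return np.zeros_like(deflection)
    scaled = (mag - deadband) / (1.0 - deadband)   # rescale the usable range
    return gain * scaled * deflection / mag        # command along the deflection

# Integrate the rate command into a position setpoint for the slave's local
# position controller, relieving the user of the dynamic control problem.
dt = 0.001                                         # 1 kHz teleoperation loop
x_slave_des = np.zeros(3)
for _ in range(1000):                              # 1 s with a constant deflection
    v_des = rate_command(np.array([0.3, 0.0, -0.1]))
    x_slave_des = x_slave_des + v_des * dt
```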

3.3.2 Position Control and Kinematic Coupling

Assuming that the slave is under position control, we can consider a kinematic coupling between master and slave, i. e., a mapping between master and slave positions. In particular, we must remember that the master mechanism moves in the master workspace, while the slave robot moves in the slave workspace. The mapping connects these two spaces, which are nearly always somewhat different.

3.3.2.1 Clutching and Offsets

Before discussing how the two robots are coupled, we must understand that they are not always coupled. For example, before the system is turned on, master and slave robots may, for whatever reason, be placed in some initial position/configuration. We have three options of how to engage the system:

  1. First autonomously move one or both robots so they come to the same position,

  2. Wait until someone (the user) externally moves one robot to match the location of the other, or

  3. Connect the two robots with some offset.

Once connected, most systems also allow a temporary disconnection between the two sites. The reason is twofold: to allow the user to rest without affecting the slave state and to allow a shift between the two robots. The latter is most important if the workspaces of the two robots do not perfectly overlap. This is much like picking up your mouse off your mouse pad to reposition it without moving the cursor. In telerobotics the process is called clutching or sometimes indexing. If clutching is allowed, or both robots are not constrained to start at the same location, the system must allow for offsets between the two robots.

When clutched or disconnected, most systems hold the slave at rest or allow it to float in response to environment forces. It is also possible for the slave to retain its preclutching momentum and continue moving, similar to kinetic scrolling popularized in smartphones [43.53].

3.3.2.2 Kinematically Similar Mechanisms

The simplest scenario involves a master and slave mechanism that are kinematically equivalent if not entirely identical. In this case, the two robots can be connected at a joint level. With $q$ denoting joint values and subscripts m referring to the master, s to the slave, offset to a shared offset, and d to a desired value, we can write

$q_{sd} = q_m + q_{offset}, \qquad q_{md} = q_s - q_{offset}.$
(43.1)

At the instant the two robots are to be connected or reconnected, the offset is computed as

$q_{offset} = q_s - q_m.$
(43.2)

Most kinematically similar master–slave systems have the same workspace at both sites and do not allow clutching. By construction the offset is then always zero.

Depending on the controller architecture, the joint velocities may be similarly related, taking derivatives of (43.1). An offset in velocities is not necessary.
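A minimal sketch of this joint-level coupling, including clutching and the offset bookkeeping of (43.1) and (43.2), might look as follows (Python; the class and method names are illustrative, not from the text).

```python
import numpy as np

class JointCoupling:
    """Joint-level master-slave coupling with clutching, following (43.1)-(43.2)."""

    def __init__(self):
        self.q_offset = None
        self.engaged = False

    def engage(self, q_m, q_s):
        """(Re)connect the two robots, absorbing their current mismatch (43.2)."""
        self.q_offset = q_s - q_m
        self.engaged = True

    def clutch(self):
        """Temporarily disconnect, e.g., to let the user re-center the master."""
        self.engaged = False

    def desired(self, q_m, q_s):
        """Desired joint values for the slave and master while engaged (43.1)."""
        if not self.engaged:
            return q_s, q_m                 # hold current positions when clutched
        return q_m + self.q_offset, q_s - self.q_offset

# Engage with an initial mismatch, then track the user's motion.
coupling = JointCoupling()
coupling.engage(q_m=np.zeros(6), q_s=np.array([0.1, 0, 0, 0, 0, 0]))
q_sd, q_md = coupling.desired(np.full(6, 0.2), np.array([0.3, 0, 0, 0, 0, 0]))
```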

3.3.2.3 Kinematically Dissimilar Mechanisms

In many cases, the master and slave robots differ. Consider that the master is connected to the human user and thus should be designed accordingly. Meanwhile the slave works in some environment and may have a very different joint configuration and different number of joints. As a result, connecting the robots joint by joint may not be feasible or appropriate.

Instead kinematically dissimilar robots are commonly connected at their tips. If x is a robot’s tip position, we have

$x_{sd} = x_m + x_{offset}, \qquad x_{md} = x_s - x_{offset}.$
(43.3)

If orientations are also connected, with R describing a rotation matrix, we have

$R_{sd} = R_m R_{offset}, \qquad R_{md} = R_s R_{offset}^T,$
(43.4)

where the orientational offset is defined as slave relative to master

$R_{offset} = R_m^T R_s.$
(43.5)

Again velocities and angular velocities may be connected if needed and do not require offsets.

Finally note that most telerobotic systems use a video camera at the remote site and a monitor at the local site. To make the connection appear natural, the slave position and orientation should be measured relative to the camera, while the master position and orientation should be measured relative to the user’s view.

3.3.2.4 Scaling and Workspace Mapping

Kinematically dissimilar master–slave robots are commonly also of different size. This means not only do they require clutching to fully map one workspace to another, but they often necessitate motion scaling. And so (43.3) becomes

$x_{sd} = \mu\, x_m + x_{offset}, \qquad x_{md} = (x_s - x_{offset}) / \mu.$
(43.6)

The orientation, however, typically should not be scaled. The scale $\mu$ may be set either to map the two workspaces onto each other as well as possible or to provide the most comfort to the user.
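The sketch below ties together the tip-level coupling of (43.3)–(43.6): it computes the translation and rotation offsets at (re)connection and then generates scaled slave setpoints (Python; the scale value and function names are illustrative assumptions).

```python
import numpy as np

def engage_offsets(x_m, R_m, x_s, R_s, mu=0.25):
    """Translation and rotation offsets at (re)connection, cf. (43.5) and (43.6)."""
    x_offset = x_s - mu * x_m            # absorbs the current tip mismatch
    R_offset = R_m.T @ R_s               # slave orientation relative to master
    return x_offset, R_offset

def slave_setpoint(x_m, R_m, x_offset, R_offset, mu=0.25):
    """Scaled, offset tip command for the slave, cf. (43.4) and (43.6)."""
    x_sd = mu * x_m + x_offset           # translation scaled by mu
    R_sd = R_m @ R_offset                # orientation passed through unscaled
    return x_sd, R_sd

# Example: a 4 cm master motion maps to a 1 cm slave motion (mu = 0.25).
R_I = np.eye(3)
x_off, R_off = engage_offsets(np.zeros(3), R_I, np.array([0.5, 0.0, 0.2]), R_I)
x_sd, R_sd = slave_setpoint(np.array([0.04, 0.0, 0.0]), R_I, x_off, R_off)
```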

If force feedback is provided, as described below, an equivalent force scale may be desired. This will prevent distortion of the remote environmental conditions, such as stiffness or damping, by the scaling. In addition to the motion and force scalings, it is also possible to directly achieve power scaling between master and slave systems [43.51].

Beyond linear scaling, several research efforts have created nonlinear or time-varying mappings, which deform the workspaces. These may effectively change the scale in the proximity of objects [43.54] or drift the offset to best utilize the master workspace [43.55].

3.3.2.5 Local Position and Advanced Control

By construction we are now assuming that the slave follows a position command. This necessitates a local slave controller to regulate its position. In particular for kinematically dissimilar mechanisms, this will be a Cartesian tip position controller (Chap. 7).

If the slave robot has redundancies or possesses a large number of DOFs, these may be controlled either automatically to optimize some criterion or manually with additional user commands. Indeed some emerging applications are coordinating multiple users to control such complex systems. This is particularly relevant when the kinematic dissimilarity becomes extreme and has received considerable research attention. We refer here to Chap. 11 and especially Sect. 43.5 for appropriate techniques and new developments.

4 Bilateral Control and Force Feedback

In pursuit of telepresence and to increase task performance, many master–slave systems incorporate force feedback. That is, the slave robot doubles as a sensor and the master functions as a display device, so that the system provides both forward and feedback pathways from the user to the environment and back. Figure 43.13 depicts the common architecture viewed as a chain of elements from the user to the environment.

Fig. 43.13 A typical bilateral teleoperator can be viewed as a chain of elements reaching from user to environment (CPU – central processing unit)

The bilateral nature of this setup makes the control architecture particularly challenging: multiple feedback loops form, and even without environment contact or user intervention, the two robots create an internal closed loop. The communication between the two sites often inserts delays into the system and into this loop, so that stability can become a challenging issue [43.56].

To present force information without stability problems, it is possible to use alternate displays, such as audio or tactile devices [43.57]. Meanwhile, the combination of vibrotactile methods with explicit force feedback can increase high-frequency sensations and provide benefits to the user [43.58]. Tactile shape sensing and display also extends the force information presented to the user [43.59].

In the following we discuss explicit force feedback. We first examine the basic architectures before discussing stability and some advanced techniques.

4.1 Position/Force Control

Two basic architectures couple the master and slave robots: position–position and position–force. We assume that the robot tips are to be connected by the equations of Sect. 43.3.3 and give the control laws for translation. Control of orientation or joint motions follows equivalent patterns.

4.1.1 Position–Position Architecture

In the simplest case, both robots are instructed to track each other. Both sites implement a tracking controller, often a proportional–derivative (PD) controller, to fulfill these commands,

$F_m = -K_m (x_m - x_{md}) - B_m (\dot{x}_m - \dot{x}_{md}), \qquad F_s = -K_s (x_s - x_{sd}) - B_s (\dot{x}_s - \dot{x}_{sd}).$
(43.7)

If the position and velocity gains are the same ($K_m = K_s = K$, $B_m = B_s = B$), then the two forces are the same and the system effectively provides force feedback. This may also be interpreted as a spring and damper between the tips of the two robots, as illustrated in Fig. 43.14. If the two robots are substantially different and require different position and velocity gains, some suitable force, position, or power scalings, as explained in Sect. 43.3.3, may be utilized.

Fig. 43.14 A position–position architecture effectively creates a spring and damper between the two robots

Note we have assumed the slave is under impedance control and back-drivable. If the slave is admittance controlled, i. e., it accepts position commands directly, the second part of (43.7) is unnecessary.

Also note that by construction the user feels the slave’s controller forces, which include forces associated with the spring–damper and slave inertia in addition to environment forces. Indeed while moving without contact, the user will feel the inertial and other dynamic forces needed to move the slave. Furthermore, if the slave is not back-drivable, i. e., does not easily move under environment forces, the environment force may be entirely hidden from the user. Naturally this defeats the purpose of force feedback. In these cases, a local force control system may be used to render the slave back-drivable. Alternatively, a position–force architecture may be selected.
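To make the coupling of (43.7) concrete, the following minimal single-axis simulation (Python; the masses, gains, and force profile are illustrative assumptions) connects an impedance-type master and a back-drivable slave with equal gains and lets the user feel the slave's inertia through the virtual spring–damper.

```python
import numpy as np

def pp_forces(x_m, xd_m, x_s, xd_s, x_offset=0.0, K=150.0, B=2.0):
    """Position-position coupling (43.7) with equal gains: a virtual
    spring-damper stretched between the two robot tips."""
    F_m = -K * (x_m - (x_s - x_offset)) - B * (xd_m - xd_s)
    F_s = -K * (x_s - (x_m + x_offset)) - B * (xd_s - xd_m)
    return F_m, F_s

# Single-axis simulation: the user pushes the master for 1 s and lets go.
m_m, m_s, dt = 0.1, 0.5, 0.001            # master/slave masses [kg], step [s]
x_m = xd_m = x_s = xd_s = 0.0
for k in range(2000):
    F_human = 1.0 if k < 1000 else 0.0    # user force on the master [N]
    F_m, F_s = pp_forces(x_m, xd_m, x_s, xd_s)
    xd_m += (F_human + F_m) / m_m * dt    # semi-implicit Euler integration
    x_m += xd_m * dt
    xd_s += F_s / m_s * dt                # slave moves only through the coupling
    x_s += xd_s * dt
```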

4.1.2 Position–Force Architecture

In the above position–position architecture, the user was effectively presented with the slave's controller force. While this is very stable, it also means the user feels the friction and inertia of the slave robot, which the controller is actively driving to overcome. In many scenarios this is undesirable. To avoid the issue, position–force architectures place a force sensor at the tip of the slave robot and feed back the force measured there. That is, the system is controlled by

$F_m = F_{sensor}, \qquad F_s = -K_s (x_s - x_{sd}) - B_s (\dot{x}_s - \dot{x}_{sd}).$
(43.8)

This allows the user to feel only the external forces acting between the slave and the environment and presents a clearer sense of the environment. However, this architecture is less stable: the control loop passes from master motion to slave motion to environment forces and back to master forces. There may be some lag in the slave's motion tracking, not to mention any delay in the communications. Meanwhile the loop gain can be very high: a small motion command can turn into a large force if the slave is pressing against a stiff environment. In combination, stability may be compromised in stiff contact, and many systems exhibit contact instability in these cases.

4.2 Passivity and Stability

The two basic architectures presented in Sect. 43.4.1 clearly illustrate one of the basic tradeoffs and challenges of force feedback: stability versus performance. Stability issues arise because any model of the system depends on the environment as well as the user. Both of these elements are difficult to capture and, if we assume we want to explore unknown environments, impossible to predict. This makes a stability analysis very difficult. A common tool that avoids some of these issues is the concept of passivity. Although passivity provides only a sufficient (not a necessary) condition for stability, it incorporates the environmental uncertainty very well.

Passivity is an intuitive tool that examines the energy flows in a system and makes stability assertions if energy is dissipated instead of generated. Three rules are of importance here. First, a system is passive if and only if it cannot produce energy; that is, the output energy from the system is limited by the initial and accumulated energy in the system. Second, two passive systems can be combined to form a new passive system. Third, the feedback connection of two passive systems is stable.

In the case of telerobotics, we generally assume that the slave robot will only interact with passive environments, that is, that the environments do not contain active motors or the like. Without the human operator, stability can therefore be assured if the system is also passive, without needing an explicit environment model.

On the master side the operator closes a loop and has to be considered in the stability analysis. In general, the master robot will be held by the user's hand and arm. A variety of models and parameters describe the human arm dynamics, mainly in the form of a mass–damper–spring system. In [43.60] we find a summary of the model parameters used by different authors. For an impedance-controlled haptic interface, found in many systems, the operator adds some physical damping, and a light touch ($F_{human} \approx 0$) is often one of the more challenging scenarios [43.61, 43.62]. As such, some analyses ignore the human operator entirely. But even in general, humans seem to achieve stable interactions with all passive systems, and passivity is usually considered sufficient for stable human–telerobot interaction.

To apply passivity, we take the system originally depicted in Fig. 43.13 and describe it as two-port elements in Fig. 43.15. Each port describes the energy flow between subsystems, where power is the product of a pair of collocated physical variables. We can choose a sign convention, for example declaring positive power to flow toward the remote environment. Then at the first boundary, the positive power flow is the product of the master velocity $\dot{x}_m$ and the applied (human) force $F_{human}$,

$P_{left} = \dot{x}_m^T F_{human}.$
(43.9)

Meanwhile at the last boundary, the positive power flow is the product of the slave velocity $\dot{x}_s$ and the environment force $F_{env}$ (which ultimately opposes the human force),

$P_{right} = \dot{x}_s^T F_{env}.$
(43.10)

Therefore the entire telerobotic system is passive if

$\int_0^t P_{input}\, \mathrm{d}t = \int_0^t (P_{left} - P_{right})\, \mathrm{d}t = \int_0^t \left( \dot{x}_m^T F_{human} - \dot{x}_s^T F_{env} \right) \mathrm{d}t > -E_{store}(0).$
(43.11)

Obviously, the ideal teleoperator ($\dot{x}_m = \dot{x}_s$, $F_{human} = F_{env}$) is passive. These definitions can also be generalized to six-dimensional twists and wrenches.
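In practice, condition (43.11) can be checked numerically on recorded velocity and force signals. The sketch below (Python; the function and variable names are illustrative) integrates the net power input and returns the running energy balance, which should never dip below the negative of the initially stored energy.

```python
import numpy as np

def energy_balance(xd_m, F_h, xd_s, F_e, dt, E_store0=0.0):
    """Running net energy input of (43.11); passivity requires this balance
    to remain non-negative at all times.

    xd_m, F_h : sampled master velocities and human forces, shape (N, dof)
    xd_s, F_e : sampled slave velocities and environment forces
    """
    p_in = np.einsum('ij,ij->i', xd_m, F_h) - np.einsum('ij,ij->i', xd_s, F_e)
    return np.cumsum(p_in) * dt + E_store0

# Synthetic check for the ideal teleoperator (xd_m = xd_s, F_h = F_e),
# which is passive by construction: the balance never goes negative.
t = np.arange(0.0, 1.0, 0.001)[:, None]
v = np.sin(2 * np.pi * t)
f = 0.5 * v
assert energy_balance(v, f, v, f, dt=0.001).min() >= 0.0
```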

Fig. 43.15 A teleoperator can be analyzed as a chain of two-port elements connecting the one-port operator to the one-port environment

To simplify the analysis, we can examine the passivity of each two-port element or subsystem and then deduce the overall passivity. The master and slave robots are mechanical elements and hence passive. The controllers of a position–position architecture mimic a spring and damper, which are also passive elements. So without delay and ignoring discretization, quantization, and other unmodelled effects, a simple position–position architecture is passive. The following sections will address some of these limitations and thus create controllers that are passive under more circumstances.

While powerful for handling uncertainty, passivity can be overly conservative. Many controllers are overdamped if every subsystem is passive. In contrast, the combination of an active and a passive subsystem may be passive and stable and show less dissipation. This is particularly true for the cascaded arrangement of two-port elements in the telerobotic system of Fig. 43.15. From network theory, the Llewellyn criterion specifies when a possibly active two-port connected with any passive one-port becomes passive. Such a two-port is then labeled unconditionally stable, as it will be stable in connection with any two passive one-ports. The Llewellyn criterion may hence be used as a more general stability test for telerobotic systems or components [43.63].

Passive controllers are also limited in that they cannot hide the dynamics of the slave robot. In the above position–position architecture, the user will feel the forces associated with the slave inertia. In contrast, the position–force architecture hides the slave inertia and friction from the user. As such, when the user inserts kinetic energy into the master without feeling any resistance, the system itself must create and inject the kinetic energy for the slave. This violates passivity and provides another insight into why the architecture suffers from potential stability problems.

4.3 Transparency and Multichannel Feedback

Both basic architectures can be captured by the general teleoperator control system described by Lawrence [43.13], and later expanded by Hashtrudi-Zaad and Salcudean [43.63] and shown in Fig. 43.16. Ideally a system will equalize both the operator and environment forces as well as the master and slave motions. Therefore, it is desirable to measure both force and velocity (from which position may be integrated or vice versa) at both sites. With this complete information, the slave may, for example, start moving as soon as the user applies a force to the master, even before the master itself has moved.

Fig. 43.16 In general, a controller will use both position and force information from both master and slave robot (after [43.63], adapted from [43.13])

Following these concepts derived in [43.13], we can examine the relationships between velocity and force in the form of impedances and admittances. Note that we do this in a single degree of freedom, assuming that all degrees of freedom may be treated independently. The environment will exhibit some impedance $Z_e(s)$ that is not known in advance and relates the environment force to the slave's velocity,

$F_e(s) = Z_e(s)\, v_s(s).$
(43.12)

If we describe the teleoperator as a whole as a two-port with a hybrid matrix formulation,

$\begin{pmatrix} F_h(s) \\ v_m(s) \end{pmatrix} = \begin{pmatrix} H_{11}(s) & H_{12}(s) \\ H_{21}(s) & H_{22}(s) \end{pmatrix} \begin{pmatrix} v_s(s) \\ -F_e(s) \end{pmatrix},$
(43.13)

then the user will perceive the impedance

$Z_{to}(s) = \frac{F_h(s)}{v_m(s)} = (H_{11} - H_{12} Z_e)(H_{21} - H_{22} Z_e)^{-1}.$
(43.14)

Transparency describes how close the user’s perceived impedance comes to recreating the true environment impedance.
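As a quick numeric illustration of (43.12)–(43.14), the sketch below evaluates the impedance transmitted to the user at a single frequency (Python; the example hybrid matrix and environment values are illustrative assumptions under the sign convention of (43.13)).

```python
import numpy as np

def transmitted_impedance(H, Z_e):
    """Single-DOF version of (43.14): the impedance perceived by the user,
    Z_to = (H11 - H12*Ze) / (H21 - H22*Ze), evaluated at one frequency."""
    H11, H12, H21, H22 = H
    return (H11 - H12 * Z_e) / (H21 - H22 * Z_e)

# Under the sign convention of (43.13), F_h = F_e and v_m = v_s corresponds
# to the hybrid matrix (0, -1, 1, 0): the user then feels exactly Z_e.
w = 2.0 * np.pi * 5.0                    # evaluate at 5 Hz, s = jw
Z_env = 500.0 / (1j * w) + 2.0           # stiff, lightly damped environment
Z_felt = transmitted_impedance((0.0, -1.0, 1.0, 0.0), Z_env)
assert np.isclose(Z_felt, Z_env)
```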

For a detailed treatment of passivity in telerobotics, impedance and admittance interpretations and designs, and transparency, we refer to some of the seminal works in [43.11, 43.12, 43.13, 43.63, 43.64, 43.65, 43.66].

4.4 Time Delay and Scattering Theory

When delays occur in the communications between the local and remote site, even position–position architectures can suffer from serious instabilities [43.67, 43.68]. This can be traced to the communications block in Fig. 43.15, where the power entering the left side and exiting the right side do not add up. Rather energy may be generated inside the block, which feeds the instability [43.9].

Several approaches to operate under delay have been studied [43.69], in particular shared compliant control [43.70] and the addition of local force loops [43.71]. The use of the Internet for communication, adding variability to the delay, is also an area of interest [43.72, 43.73]. This further evokes issues of data reduction [43.74].

Here we note that natural wave phenomena are bilateral passive elements that tolerate delay. If the control system is described in the frequency domain and scattering matrices are used in place of impedance and admittance matrices, the system can tolerate delays [43.75]. Scattering matrices relate the sum of velocity and force to their difference, so that passivity becomes a condition on the system gain, which is unaffected by the delay.

4.5 Wave Variables

Building on the realization that delayed communications can be active and that wave phenomena circumvent the issue, wave variables provide an encoding scheme that is tolerant of delay [43.76]. Consider the power flowing through the system and separate it into forward-moving and returning parts,

$P = \dot{x}^T F = \tfrac{1}{2} u^T u - \tfrac{1}{2} v^T v = P_{forward} - P_{return},$
(43.15)

where the forward and returning power are by construction nonnegative. This leads to the definition of the wave variables

$u = \frac{b \dot{x} + F}{\sqrt{2b}}, \qquad v = \frac{b \dot{x} - F}{\sqrt{2b}},$
(43.16)

where u is the forward-moving and v the returning wave.

If velocity and force signals are encoded into wave variables, transmitted across the delay, and decoded at the far site, the system remains passive regardless of delay. In fact, in the wave domain, passivity corresponds to a wave gain of less than or equal to unity. No requirements are placed on phase and so lag does not destroy stability.
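A minimal encode/decode pair for (43.15) and (43.16) is sketched below (Python; the test values are arbitrary). The assertions verify the power split $\dot{x} F = \tfrac{1}{2}u^2 - \tfrac{1}{2}v^2$ that underlies the passivity argument and the invertibility of the encoding.

```python
import numpy as np

def encode_wave(xd, F, b=1.0):
    """Wave encoding (43.16): velocity and force -> forward/returning waves."""
    u = (b * xd + F) / np.sqrt(2.0 * b)
    v = (b * xd - F) / np.sqrt(2.0 * b)
    return u, v

def decode_wave(u, v, b=1.0):
    """Inverse map: recover velocity and force from the wave pair."""
    xd = (u + v) / np.sqrt(2.0 * b)
    F = np.sqrt(b / 2.0) * (u - v)
    return xd, F

# The power split of (43.15) holds for any wave impedance b.
xd, F, b = 0.3, 4.0, 20.0
u, v = encode_wave(xd, F, b)
assert np.isclose(xd * F, 0.5 * u**2 - 0.5 * v**2)
assert np.allclose(decode_wave(u, v, b), (xd, F))
```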

The wave impedance $b$ relates velocity to force and provides a tuning knob for the operator. Large values of $b$ increase the force feedback levels at the cost of feeling high inertial forces. Small values of $b$ lower any unwanted sensations, making it easy to move quickly, but also lower the desirable environment forces. Ideally the operator would lower $b$ when there is no risk of contact and raise $b$ when contact is imminent. The concept of scattering has also been extended to a coordinate-free context [43.77].

Recent developments are incorporating both position–position and position–force architectures within the wave framework, so that the resulting systems are stable with any environment and any delay, yet maintain the feedback of high-frequency forces that help the operator identify what is happening at the remote site [43.78]. To improve performance and assist the operator, especially across the Internet, predictors may also be incorporated [43.79].

4.6 Teleoperation with Lossy Communication

The Internet provides an affordable and ubiquitously accessible communication medium. However, it can also introduce nondeterministic effects and lossy connections due to time-varying delays, packet losses, and packet reordering. How to achieve passive bilateral teleoperation over such lossy communication networks has been an active research topic in telerobotics.

Many telerobotic systems have relied on the position-position architecture (43.7) with extra damping injection. This leads to the following proportional-derivative (PD) control law, with a simple structure and explicit position feedback,

$F_m = -B_d \dot{x}_m(t) - B [\dot{x}_m(t) - \dot{x}_s(t - \tau_1)] - K [x_m(t) - x_s(t - \tau_1)],$
$F_s = -B_d \dot{x}_s(t) - B [\dot{x}_s(t) - \dot{x}_m(t - \tau_2)] - K [x_s(t) - x_m(t - \tau_2)],$
(43.17)

where $B_d$, $K$, $B$ are the stabilizing absolute damping and the PD control gains, and $\tau_1, \tau_2 \geq 0$ are the communication delays from slave to master and from master to slave, respectively.

The use of this controller across lossy communication was nevertheless hampered (or at least limited) by the lack of theoretical guarantees for its passivity and stability. Anecdotal experience suggested that with sufficiently large damping and small delays and PD gains, the closed-loop system remains stable. This observation was justified in [43.80] for the case of constant delay: the PD-like controller (43.17) is passive if the following condition is met,

$B_d > \frac{\bar{\tau}_1 + \bar{\tau}_2}{2} K,$
(43.18)

where $(\bar{\tau}_1 + \bar{\tau}_2)/2$ is the upper bound of the round-trip delay, which can typically be estimated easily. Without human or environment forcing, the master and slave positions will also converge to each other. This result was extended in [43.81] to the case of time-varying delay. Passivity of the PD-like controller (43.17) is guaranteed if

$B_d > \frac{\sqrt{\bar{\tau}_1^2 + \bar{\tau}_2^2}}{2} K \quad \text{and} \quad |\dot{\tau}_i(t)| < 1,$
(43.19)

where the second condition means that the delay $\tau_i(t)$ neither grows nor decreases faster than time $t$. We refer to [43.81] for cases with asymmetric controller gains and to [43.82, 43.83] for extensions to general digital lossy communication networks.
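Assuming the damping bounds as reconstructed in (43.18) and (43.19), a designer can compute the minimum absolute damping for given gains and delay estimates as sketched below (Python; the numeric values are illustrative, and the time-varying bound follows the reconstruction above).

```python
import math

def min_damping(K, tau1_max, tau2_max, time_varying=False):
    """Smallest absolute damping B_d satisfying (43.18) or (43.19).

    K                  : proportional gain of the delayed PD coupling
    tau1_max, tau2_max : upper bounds on the two one-way delays [s]
    time_varying       : use the time-varying-delay bound (43.19)
    """
    if time_varying:
        return 0.5 * K * math.sqrt(tau1_max**2 + tau2_max**2)
    return 0.5 * K * (tau1_max + tau2_max)

# Example: K = 100 N/m with delay bounds of 80 ms and 70 ms.
B_d_const = min_damping(100.0, 0.08, 0.07)                   # 7.5 N s/m
B_d_var = min_damping(100.0, 0.08, 0.07, time_varying=True)
```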

With a fixed structure, the PD controller (43.17) has to be tuned to the worst-case conditions according to (43.18) and (43.19). It may thus exhibit drastic performance degradation over severely variable communications. To overcome this fixed-structure limitation, several passivity-enforcing flexible control techniques have recently been proposed.

The technique of the passivity observer/passivity controller (PO/PC) was originally devised for haptic device control [43.84] and has been extended to bilateral teleoperation over digital networks with time-varying delay [43.85, 43.86, 43.87]. Each PO performs real-time bookkeeping of the energy flows at the master and slave sites. The PC is activated to dissipate energy whenever a passivity violation is detected. The passive set-position modulation (PSPM) framework was proposed in [43.88], where the desired set-position signal received from a general digital lossy communication network is modulated in real time to stay as close to the original set-position signal as possible, yet only to the extent permitted by the energy available in the system. Passification of a desired control force utilizing the device's physical damping was addressed in [43.89] under the name of the energy bounding algorithm (EBA). The idea of modulating a control signal or action under a passivity constraint was also adopted in the two-layer approach [43.90], where the transparency layer is designed for best performance, while the passivity layer superimposes constraints to enforce passivity.

Overall, the PO/PC approach may be considered corrective (i. e., it first detects a violation of passivity and then applies a passifying action), whereas the PSPM, EBA, and two-layer approaches are preventive (i. e., they modulate or bound the control action to prevent passivity violations). Interestingly, all of the PO/PC, PSPM, EBA, and two-layer approaches share some common characteristics: 1) they transmit energy packets along with other information over the communication network to replenish energy levels and enable useful work; and 2) each approach activates its passifying action only when necessary and can thus significantly improve control performance compared to fixed-structure teleoperation controllers, while robustly enforcing passivity against a variety of communication imperfections.
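To illustrate the corrective flavor of these schemes, the sketch below follows the spirit of the PO/PC idea (Python; the class, method names, and thresholds are illustrative, and the sign convention is an assumption: the power the controller absorbs from the device is counted as minus the commanded force times the device velocity). The observer integrates the port power; only when the balance goes negative does the controller return a damping gain that dissipates the surplus.

```python
class PassivityObserverController:
    """Sketch of a corrective passivity-observer/passivity-controller element
    wrapped around a (possibly active) controller force.

    Convention: the power the controller absorbs from the device is taken as
    -F_cmd * velocity, so a passive controller keeps the running sum
    non-negative. When the sum would go negative, a damping gain is returned
    that dissipates exactly the surplus in the current sample."""

    def __init__(self, dt):
        self.dt = dt
        self.energy = 0.0                       # passivity observer state

    def damping_gain(self, F_cmd, velocity):
        self.energy += -F_cmd * velocity * self.dt
        if self.energy >= 0.0 or abs(velocity) < 1e-9:
            return 0.0                          # still passive: no action
        alpha = -self.energy / (self.dt * velocity**2)
        self.energy = 0.0                       # the added damping repays the debt
        return alpha

# Usage inside the servo loop, with F_cmd from any architecture, e.g. (43.8):
#   alpha = po_pc.damping_gain(F_cmd, xd_m)
#   F_applied = F_cmd - alpha * xd_m            # passified master force
```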

4.7 Port-Based Approaches

From Sect. 43.4.2 we know that energy flows provide a great description of telerobotic systems that physically interact with unknown environments and a human user. Indeed all physical interaction dynamics are fundamentally bound to energy exchanges. Passivity is a well suited analysis tool and assures stability if the system energy is bounded. All possible instabilities can be traced to unsupervised energy injections via the actuators. In Sect. 43.4.6 we also saw methods that explicitly monitor energy flows. Here we describe port-based approaches designed explicitly around energy exchanges. This can be advantageous to handle nonlinear system dynamics, discretization, time-varying delays, and other issues.

The concept of power ports and energy flows (Sect. 43.4.2) allows a precise analytical formulation of physical systems. This approach, which stems from concepts used in bond graphs, enables a purely energy-based analysis of complex nonlinear physical systems [43.91]. It has also provided new perspectives on modeling and robot control [43.92].

To handle sampling issues and prevent instabilities due to limited sampling rates, consider an energy-consistent discretization [43.93]. Rather than measuring power flow at discrete intervals, consider the energy flow that occurs continuously over the entire sampling interval. An exact measure of the energy transfer between sample time kT and the following sample time ( k + 1 ) T is

$\Delta E_k = \int_{kT}^{(k+1)T} \tau(t)\, \dot{q}(t)\, \mathrm{d}t.$
(43.20)

Assume an electric motor with an ideal current amplifier and without any commutation effects is generating the torque as commanded by a zero-order-hold (ZOH) digital-to-analog converter. Further assume the position sensor is collocated and synchronized to the ZOH transitions. The torque may then be considered constant during the interval, so that

$\Delta E_k = \int_{kT}^{(k+1)T} \tau_{k,k+1}\, \dot{q}(t)\, \mathrm{d}t.$
(43.21)

Taking the constant torque out of the integral and evaluating it, we obtain

$\Delta E_k = \tau_{k,k+1}\,\left[\,q(k+1) - q(k)\,\right]$    (43.22)

This simple result has far-reaching consequences for interfacing the digital and physical worlds and can be computed easily even if the sampling time varies. Some of the assumptions may be relaxed with suitable adaptation. Using this more precise measurement of energy flow leads to more consistent energy bookkeeping.
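For illustration, the bookkeeping implied by (43.20)–(43.22) needs only the ZOH torque of the past interval and two consecutive position samples. The sketch below is a minimal version under the assumptions stated above (ideal ZOH torque, collocated and synchronized position sensing); the variable names are ours.

```python
# Energy-consistent measurement of the energy delivered by a ZOH torque
# during one sampling interval, following (43.22): the exact energy is the
# (constant) torque times the measured position increment, independent of
# the actual interval length. Names are illustrative.

class EnergyBookkeeper:
    def __init__(self, q0):
        self.q_prev = q0        # position at the previous sampling instant
        self.delivered = 0.0    # total energy delivered to the device [J]

    def update(self, tau_prev, q_now):
        # tau_prev: torque held by the ZOH over the interval just ended
        # q_now:    position sampled at the end of that interval
        dE = tau_prev * (q_now - self.q_prev)   # (43.22)
        self.delivered += dE
        self.q_prev = q_now
        return dE
```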

As shown in Fig. 43.17, we can now track energy in the digital world, associating individual available-energy reservoirs Hm and Hs with the master and slave controllers. The communication channel transmits both data and energy packets (EPs), where an EP contains only information about an energy quantum. An EP is only sent if the transmitter has sufficient available energy, which is then decreased by that quantum. An arriving EP injects its quantum into the receiver's available energy. In this way the total virtual energy in the system can never increase. If the communication protocol allows an EP to be lost, this only removes energy, similar to dissipation. This process is independent of any constant or variable delay.

Fig. 43.17 Energy balance of the telemanipulation chain (after [43.90])
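A minimal sketch of such energy bookkeeping is given below: each controller keeps a local energy reservoir, an energy packet is emitted only if the reservoir can cover it, and a received packet simply refills the reservoir. The packet format and method names are our own; the actual protocol of [43.90] differs in detail.

```python
# Minimal sketch of an energy reservoir exchanging energy packets (EPs).
# An EP carries only an energy quantum; it is sent only if the local
# reservoir can pay for it, so the total virtual energy never increases
# even under delays or packet loss. Names are illustrative.

class EnergyReservoir:
    def __init__(self, initial_energy=0.0):
        self.level = initial_energy   # locally available energy [J]

    def try_send_packet(self, quantum):
        """Return an EP (here simply a float) or None if energy is lacking."""
        if quantum <= self.level:
            self.level -= quantum     # energy leaves with the packet
            return quantum
        return None                   # not enough energy: no packet sent

    def receive_packet(self, quantum):
        self.level += quantum         # an arriving EP refills the reservoir

    def withdraw(self, amount):
        """Pay for a control action; only grant what is actually available."""
        granted = min(amount, self.level)
        self.level -= granted
        return granted
```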

The port-based paradigm allows any nonlinear control algorithm, but any applied control force has an energetic consequence and is allowed if and only if the associated available energy level is sufficiently high. Following the classification of Sect. 43.4.6, this paradigm is preventive: it prevents energy generation rather than dissipating unexpected energy after it appears.

Strategies are needed to manage the exchange of energy between master and slave. A simple protocol introduced in [43.90] continually transmits EPs carrying a percentage of the locally stored energy. It can be shown that this results in an equal distribution of energy between the master and slave controllers. If necessary, a small damper can also be superimposed at the master side to extract energy from the human operator as needed.
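Building on the reservoir sketch above, this balancing protocol can be written as a per-cycle rule in which each side sends a fixed fraction of its current energy level. The fraction beta below is a tuning parameter of our example, not a value prescribed in [43.90].

```python
# One communication cycle of the simple balancing protocol: each side emits
# an EP carrying a fraction beta of its locally stored energy. Repeated over
# many cycles, this equalizes the master- and slave-side energy levels.

def balance_step(master_tank, slave_tank, beta=0.1):
    ep_m = master_tank.try_send_packet(beta * master_tank.level)
    ep_s = slave_tank.try_send_packet(beta * slave_tank.level)
    # In a real system these packets travel over a possibly delayed or lossy
    # network; here they are delivered immediately for illustration.
    if ep_m is not None:
        slave_tank.receive_packet(ep_m)
    if ep_s is not None:
        master_tank.receive_packet(ep_s)
```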

With the port-based paradigm, the data communication (related to transparency) and the energy communication (related to passivity) are split: controllers may be nonlinear, energy and data transmission may be independent, and there is essentially no restriction on the kind of controller that can be implemented. This separation of energy and data gives the two-layer approach [43.90] its name.

5 Emerging Applications of Telerobotics

Historically, telerobotics research has focused on the conventional setup with two fixed-base robotic manipulators serving as the master and slave devices. Recently, there have been substantial efforts to extend telerobotic theories and frameworks to more unconventional scenarios. Here we summarize some recent results on these emerging applications. The summary is by no means exhaustive and, consistent with the rest of the chapter, focuses on control aspects and on providing a stable bilateral user interface.

5.1 Telerobotics for Mobile Robots

Mobile robots are useful slave devices if the task covers a large spatial area; flying robots, in particular, can operate in three-dimensional space without being bound to the ground. For mobile robot teleoperation, force feedback may be used to convey proprioceptive information of the slave robot (e. g., its velocity) or haptic feedback of virtual (or real) objects in the remote environment.

A key difference of mobile and flying robot teleoperation compared to the conventional setup is kinematic dissimilarity [43.94]: the workspace of the master device is bounded, while the workspace of the slave robot is unbounded. This suggests coupling the master position to the slave velocity, as in the rate control described in Sect. 43.3.3. A direct coupling between master position and slave velocity, however, cannot be addressed by the standard passivity framework (Sect. 43.4.2), because the master position and the slave velocity possess different relative degrees with respect to the torque. One way to circumvent this difficulty is to utilize the so-called r-variable

$r := \dot{q} + \lambda q$    (43.23)

That is, by utilizing inverse dynamics similar to adaptive control design [43.95, 43.96], or by injecting PD-type local state feedback with the redefined output r [43.97] (Fig. 43.18), it is possible to render the master device passive with this r-variable replacing the velocity $\dot{q}$ in its original passivity relation (43.9). This implies that we can couple the r-variable and the slave's velocity passively, just like standard passivity-based controllers, while the r-variable retains the master position information.

Fig. 43.18 Feedback r-passivation by using PD-type state feedback
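As a small numerical illustration of the coupling idea (not the full controller of [43.97]), the r-variable can be formed from the master state and coupled to the delayed slave velocity through a simple proportional law; the gains and names below are placeholders.

```python
# Illustrative r-variable coupling for master-position / slave-velocity
# teleoperation, cf. (43.23). lam and k_couple are placeholder gains.

def r_variable(q_master, qdot_master, lam):
    return qdot_master + lam * q_master           # r := qdot + lambda * q

def coupling_torques(q_m, qdot_m, v_slave_delayed, lam=1.0, k_couple=5.0):
    r = r_variable(q_m, qdot_m, lam)
    # Velocity-type coupling between r (master side) and the delayed slave
    # velocity: drives r and the slave velocity toward each other.
    tau_master = -k_couple * (r - v_slave_delayed)
    v_cmd_slave = r                               # slave velocity set-point
    return tau_master, v_cmd_slave
```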

We can also achieve passive mobile robot teleoperation with time-varying delays by extending the port-based ideas of Sect. 43.4.7 with the concept of a virtual vehicle [43.98]. This virtual vehicle is a simulated slave system, evolving in a gravitation-free field and equipped with a finite energy tank. Commands from the master may only transfer energy from the tank to accelerate the vehicle or return energy to the tank by decelerating the vehicle. This creates a closed energy system for the slave. A passive behavior can then be achieved with a viscoelastic connection between the virtual and real vehicles.
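A minimal sketch of the virtual-vehicle idea, under our own simplifying assumptions (one-dimensional point mass, explicit Euler integration), is given below: a commanded force accelerates the vehicle only if the tank can supply the corresponding kinetic-energy increase, while decelerating commands refund energy to the tank.

```python
# Sketch of a 1-D virtual vehicle with a finite energy tank. A commanded
# force may accelerate the vehicle only if the tank can cover the kinetic
# energy increase; decelerating the vehicle returns energy to the tank.

class VirtualVehicle:
    def __init__(self, mass, tank_energy, dt):
        self.m, self.v, self.dt = mass, 0.0, dt
        self.tank = tank_energy                  # available energy [J]

    def apply_command(self, force):
        v_new = self.v + force / self.m * self.dt
        dE_kin = 0.5 * self.m * (v_new**2 - self.v**2)
        if dE_kin > self.tank:
            # Not enough energy: reject the accelerating command this step.
            return self.v
        self.tank -= dE_kin                      # negative dE_kin refills the tank
        self.v = v_new
        return self.v
```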

Mobile telerobotics often uses nonholonomically constrained wheeled mobile robots (WMRs) with pure-rolling wheels. For slave WMRs, it is often possible (with some low-level control as stated below) to split the motions (i. e., velocity directions) into those requiring a position-velocity coupling as stated above (e. g., the forward velocity of the WMR) and those that may be controlled by a standard position-position coupling (e. g., the rotation angle of the WMR), as explained in Sects. 43.4.1 and 43.4.6.

Other mobile robots, particularly quadrotor or ducted-fan type aerial robots (Chap. 36) or thrust-propelled autonomous underwater vehicles (AUVs, Chap. 25), are underactuated, with fewer control inputs than degrees of freedom, and also defy standard teleoperation techniques. To address the underactuation, abstraction of the slave robot has been utilized [43.98, 43.99]: the human user telecontrols a fully actuated virtual system, assuming that its motion adequately describes that of the real slave robot, while a low-level tracking controller drives the underactuated slave mobile robot to tightly follow this virtual system. This abstraction leads to a hierarchical control design composed of: 1) a high-level teleoperation control layer between the virtual vehicle and the master device; and 2) a low-level tracking control layer, into which the issue of the slave's underactuation is confined.

For the teleoperation layer, we may then use the conventional teleoperation techniques explained in Sect. 43.4 with the (r, v)-coupling as explained above. We may also use the recently proposed control techniques explained in Sect. 43.4.6, which promise sharper haptic feedback with guaranteed stability against lossy communication. Presenting both the slave robot's proprioceptive information and the presence of surrounding obstacles via the same haptic feedback channel, however, typically results in a perceptual ambiguity [43.100]; a further perceptual modality (e. g., vision) is usually necessary to resolve it. In some applications, the slave mobile robots are required to move around in, but never directly contact, the physical environment. In these cases, virtual forces can be generated for any obstacles, and the slave robot interacts only with this precisely known virtual force field. It may then be sufficient to enforce slave stability rather than slave passivity [43.99], since a less conservative controller will likely give better system performance.
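For the obstacle-avoidance case just mentioned, one common choice (ours, for illustration, not a method prescribed by the cited work) is a repulsive potential that is active only within an influence radius; the resulting virtual force can be rendered to the operator or applied to the virtual vehicle.

```python
# Illustrative repulsive virtual force for a point robot near known 2-D
# obstacles: active only within an influence distance d0 and growing as the
# robot approaches an obstacle. Gains and potential shape are our choice.

import math

def repulsive_force(p_robot, obstacles, d0=1.0, gain=1.0):
    fx, fy = 0.0, 0.0
    for (ox, oy) in obstacles:
        dx, dy = p_robot[0] - ox, p_robot[1] - oy
        d = math.hypot(dx, dy)
        if 1e-9 < d < d0:
            # Classical repulsive-potential gradient magnitude, directed
            # away from the obstacle.
            mag = gain * (1.0 / d - 1.0 / d0) / (d * d)
            fx += mag * dx / d
            fy += mag * dy / d
    return fx, fy
```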

5.2 Multilateral Telerobotics

Many practical telerobotic tasks require dexterous, complicated motions with many degrees of freedom, e. g., in surgical training, rehabilitation, or exploration. For such tasks, we may utilize a team of multiple cooperating slave robots or a single slave robot possessing many degrees of freedom. The complexity involved in both cases may require multiple human operators to adequately control and coordinate all degrees of freedom. Following [43.102], we thus consider the following scenarios:

  1. Single-master multiple-slave (SMMS) systems
  2. Multiple-master multiple-slave (MMMS) systems
  3. Multiple-master single-slave (MMSS) systems
  4. Single-master single-slave (SMSS) systems, which constitute the conventional telerobotic setup.

Here we introduce some recent results applicable to SMMS and MMSS systems.

It is common for single-master multiple-slave (SMMS) systems to autonomously control simple subtasks among the slaves, e. g., maintaining a grasp, maintaining connectivity, or avoiding collisions (Fig. 43.19). In particular, in [43.101, 43.104, 43.105], passive decomposition [43.106] is utilized to decompose the dynamics of the multiple slave robots into their shape system, describing the inter-slave formation, and their locked system, abstracting their collective motion and centroid behavior. The locked system can be telecontrolled by a single human user, while an autonomous controller regulates the shape system to maintain the cooperative grasp of an object among the slaves.

Fig. 43.19 SMMS telerobotic control of multiple slave robots (after [43.101])
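For two slave robots with masses m1 and m2, the locked/shape split takes a particularly simple form. The sketch below conveys the idea only; it is not the general configuration-dependent decomposition of [43.106].

```python
# Two-robot illustration of the locked/shape split used in passive
# decomposition: the locked system captures the collective (centroid-like)
# motion, the shape system the inter-robot formation.

def locked_and_shape_velocity(m1, v1, m2, v2):
    v_locked = (m1 * v1 + m2 * v2) / (m1 + m2)   # telecontrolled by the user
    v_shape = v1 - v2                            # regulated autonomously
    return v_locked, v_shape

def recombine(m1, m2, v_locked, v_shape):
    # Inverse map: recover individual robot velocities from the two systems.
    v1 = v_locked + (m2 / (m1 + m2)) * v_shape
    v2 = v_locked - (m1 / (m1 + m2)) * v_shape
    return v1, v2
```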

Another interesting development in SMMS telerobotics is its combination with frameworks from multiagent cooperative control (i. e., consensus, flocking, synchronization, and other behaviors). For instance, in [43.103] a single human user directly telecontrols a single leader agent among the slaves, while the behavior of the other slaves is dictated by a leader-follower information graph (Fig. 43.20). A time-varying graph topology with arbitrary split/join operations among the slaves allows for reconfiguration, e. g., for navigation in cluttered environments, and the concept of virtual energy tanks, along with port-Hamiltonian modeling and the port-based approach (Sect. 43.4.7), is utilized to enforce passivity and stability of the total system.

Fig. 43.20 SMMS telerobotic control architecture with possibly time-varying leader-follower information topology (after [43.103])

A distributed SMMS approach was also proposed in [43.99] to enable a single human user to telecontrol some of the slave robots. The inter-slave behaviors are encoded via a distributed artificial potential constructed on a time-invariant undirected information graph (Fig. 43.21). By using kinematic virtual systems to abstract the slave robots, this work combines master passivity with slave stability and guarantees no collisions among the slave robots or with obstacles, as well as no separations among the slave robots.

Fig. 43.21 SMMS control architecture for multiple unmanned aerial vehicles (UAVs; after [43.99]), where a single user telecontrols some of the virtual systems (via u1t, u4t) while telesensing the state of some of the real UAVs (i. e., y2, y3), which are low-level controlled to follow their respective virtual systems

A control approach for MMSS telerobotic systems was proposed in [43.107], where two human users telecontrol distinct frames on a single large-DOF slave robot. The velocity space of the slave is decomposed according to the two commanded motions, resolving conflicts between them and respecting the constraints imposed on them, with priority given to the primary user's commands (Fig. 43.22). The kinematics-level prioritized velocity commands are then realized by dynamics-level adaptive control for all master and slave robots.

Fig. 43.22 Trilateral teleoperation (after [43.107]), where the primary master controls the end-effector frame P, while the secondary master controls task-space frame A, B, or C: if A is chosen, the primary and secondary tasks are nonconflicting and unconstrained; if B is chosen, they are nonconflicting, yet the secondary task is constrained; if C is chosen, they are conflicting, and thus prioritized nullspace control is necessary between the primary and secondary tasks

A different shared trilateral MMSS teleoperation framework was developed in [43.108], where two human users teleoperate the same point of a single slave robot. Their control authority is adjusted with a dominance factor α ∈ [0, 1], set according to the task objective (e. g., α = 1 for training or α = 0 for evaluation). The MMSS system is optimized for a measure of transparency, and Llewellyn's criterion is applied to the equivalent two-port system (with the environment impedance Ze embedded) to establish unconditional stability.
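A minimal sketch of the dominance-factor idea, at the kinematic level only and under our own simplifications (it is not the optimized design of [43.108]): the slave reference is a convex combination of the two master commands, and the environment force is shared between the operators according to the same factor.

```python
# Kinematic sketch of dual-master shared control with dominance factor
# alpha in [0, 1]: alpha = 1 gives full authority to master 1 (e.g., the
# trainer), alpha = 0 to master 2. This is an illustrative simplification.

def shared_command(x_master1, x_master2, alpha):
    # Slave reference as a convex combination of the two master commands.
    return alpha * x_master1 + (1.0 - alpha) * x_master2

def feedback_split(f_environment, alpha):
    # Each operator feels the environment force scaled by his or her
    # dominance; other weightings are possible.
    return alpha * f_environment, (1.0 - alpha) * f_environment
```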

These multilateral approaches and the mobile telerobotic approaches above promise to significantly expand the utility and application horizons of conventional telerobotics. Significant technical challenges and research questions remain, for example:

  1. How to assign and determine the roles of multiple coordinating human users?
  2. How to systematically split control tasks between teleoperation and autonomous control?
  3. What are the most suitable performance measures, which are likely different from the conventional metrics?
  4. What is the best form of human interface (or haptic feedback) [43.100] for mobile and multilateral telerobotics, and how should the interface be complemented by other modalities (e. g., visual feedback)?

6 Conclusions and Further Reading

Despite its age, telerobotics remains an exciting and vibrant area of robotics. In many ways, it forms a platform that can exploit advances in robotic technologies while simultaneously leveraging the proven skills and capabilities of human users. Compare this, for example, with the development of the automobile and its relation to the driver. As cars gradually become more sophisticated, with added electronic stability control and navigation systems, they become safer and more useful to their operators rather than replacing them. Similarly, telerobotics serves as a pathway for gradual progress and, as such, is perhaps best suited to fulfill robotics' long-held promise of improving human life. It is already seeing use in the challenging area of search and rescue, and with the recent development and commercialization of telerobotic surgery systems, it is impacting the lives of tens of thousands of patients in a profound fashion and extending the reach of robotics into our world.

For further reading in the area of supervisory control, we refer to Sheridan [43.39]; though published in 1992, it remains the most complete discussion of the topic. Unfortunately, few other books are devoted to or even fully discuss telerobotics. Many recent advances, including methods, experiments, applications, and developments, are collected in [43.109]. Beyond this, in the areas of bilateral and shared control, as well as for the various applications, we can only refer to the citations provided. Finally, in addition to the standard robotics journals, we note in particular Presence: Teleoperators and Virtual Environments, published by the MIT Press, which focuses on technologies with a human operator, combined with virtual-reality applications.