This chapter discusses robotics technology for space missions. First, a general definition of a robot and an overview of the historical development of space robots are provided. Then technical details of orbital space robots, planetary robots, and telerobotics are given in the subsequent sections.

The term ‘robot’ comes from the word ‘robota’, which means serf labor or hard work in the Slavic languages (Czech, Slovak and Polish). It was largely introduced to the public by the Czech writer Karel Čapek (1890–1938) in his play R.U.R. (Rossum’s Universal Robots), which premiered in 1920. In this play, the robots are described as artificial creatures, or androids, which can be mistaken for humans.

Today, the word robot is used for an intelligent machine or artificial agent that can interact with its environment or with a human in a coordinated manner. Although humanoids, or human-looking robots, have attracted public attention, the typical robots used in industry are automated or programmable handling devices that do not necessarily look like humans. Indeed, many such industrial robots work successfully on the mass-production lines of factories, conducting repetitive tasks such as welding or assembling motor vehicles. However, the majority of research efforts now involve robots that can work outside the factory: in offices, homes and hospitals, in outdoor fields or outer space (space robots, the focus of this chapter), or even in inner space (medical robots, which can work inside the human body). Robotics is a discipline of system integration that draws on knowledge from many different subject areas, including mechanics, electronics, computer technology, and bioengineering, along with various topics in the human sciences, such as anthropology and sociology.

Autonomy is a key issue in robotics, and at a primitive level, any non-crewed spacecraft that is under automated sequence control may be referred to as a robotic satellite. However, when the term space robot is used it implies a more capable mechanical system that can facilitate manipulation, assembly, or service tasks in orbit as an assistant to astronauts, or can extend the areas and abilities of exploration on remote planets as a surrogate for human explorers.

The key issues in space robotics are characterized as follows:

  • Manipulation—Although manipulation is a basic technology in robotics, the microgravity of the orbital environment requires special attention to the motion dynamics of the manipulator arms and the objects being handled. The reaction dynamics that affect the base body, impact dynamics when the robotic hand contacts an object to be handled, and vibration dynamics due to structural flexibility are included in this issue. Technical details of the manipulator control in the microgravity environment are elaborated in Sect. 19.2.

  • Mobility—Locomotion is particularly important in exploration robots (rovers) that travel on the surface of a moon or planet. These surfaces are natural and rough, and thus challenging to traverse. Sensing and perception, traction mechanics, and vehicle dynamics, control and navigation are all mobile robotics technologies that must be demonstrated in a natural untouched environment. Technical details of the surface mobility systems are elaborated in Sect. 19.3.

  • Teleoperation and Autonomy—There is a non-negligible time delay between a robotic system in space and a human operator in an operation room on Earth. In early orbital robotics demonstrations, this latency was typically a few seconds, but it can be several tens of minutes, or even hours, for planetary missions. Telerobotics technology is therefore indispensable in space exploration, and the introduction of autonomy is a reasonable consequence. Technical details of the telerobotics are elaborated in Sect. 19.4.

  • Extreme Environments—In addition to the microgravity environment, which affects the motion dynamics of a robot, there are many other challenging issues related to extreme space environments that must be solved to enable practical engineering applications. Such issues include extremely high or low temperatures, high vacuum or high pressure, corrosive atmospheres, ionizing radiation, and very fine dust; they are discussed in detail in Chap. 3.

  • Versatility—This is the ultimate goal when designing and developing a robot, and is especially highlighted in space applications. Due to the nature of space missions, once launched into space, a robot must perform all of its tasks by itself using its own resources. A space robot, therefore, should be adaptable to the extreme space environments mentioned above and possess the versatility to handle many different situations and scenarios, including contingent ones that arise unexpectedly.

19.1 Overview of the Historical Development of Space Robots

19.1.1 Orbital Space Robots

The first robotic manipulator arm used in the orbital environment was the Space Shuttle Remote Manipulator System (SRMS). It was successfully demonstrated in the STS-2 mission in 1981. This success opened a new era of orbital robotics and inspired numerous mission concepts.

A long-term goal that has been discussed extensively since the early 1980s is the application of a robotic free-flyer, or free-flying space robot, to the rescue and servicing of malfunctioning spacecraft (for example, the ARAMIS report [1]). In later years, crewed service missions were conducted for the capture-repair-deploy procedure of a malfunctioning satellite (Intelsat 603 by STS-49, for example) and for the maintenance of the Hubble Space Telescope (STS-61, -82, -103, -109 and -125). In each of these examples, the Space Shuttle, a crewed spacecraft with dedicated maneuverability, was used. In contrast, non-crewed servicing missions have not yet become operational. Although there have been several demonstration flights, such as ETS-VII and Orbital Express, the practical technologies for non-crewed satellite servicing missions await the outcomes of future challenges.

19.1.1.1 Space Shuttle Remote Manipulator System

On-board the Space Shuttle, the Shuttle Remote Manipulator System (SRMS), or Canadarm, was a mechanical arm that handled payloads from the payload bay of the Space Shuttle orbiter. It could also grapple a free-flying payload and maneuver it into the payload bay. The SRMS was first used on mission STS-2, launched in 1981, and was used more than 100 times during subsequent missions, performing payload deployment and retrieval as well as assisting in human extravehicular activities (EVA), or space walks. Servicing and maintenance missions to the Hubble Space Telescope and construction tasks for the International Space Station were also carried out through the cooperative use of the SRMS and human EVAs.

The SRMS arm was 15 m long and had six degrees of freedom (DOF), comprising shoulder yaw and pitch joints, an elbow pitch joint, and wrist pitch, yaw, and roll joints. Attached to the end of the arm was a special gripper system called the Standard End Effector (SEE), designed to grapple a pole-like grapple fixture (GF) attached to the payload. By attaching a foothold at the end point, the arm could also serve as a mobile platform for an astronaut’s EVA, see Fig. 19.1.

Fig. 19.1 Space Shuttle Remote Manipulator System (SRMS) used as a platform for an astronaut’s extravehicular activity in the Shuttle cargo bay. Image NASA/CSA

19.1.1.2 International Space Station Mounted Robot Manipulator Systems

The International Space Station (ISS) is the largest international space project to date, with 15 countries making significant cooperative contributions. The ISS is an outpost for the human presence in space, as well as a flying laboratory with substantial facilities for science and engineering research. Several robotic systems facilitate the various activities on the station.

The Space Station Remote Manipulator System (SSRMS), or Canadarm2, see Fig. 19.2, is an extended version of the SRMS for use on the ISS. Launched in 2001 by STS-100 (ISS assembly flight 6A), the SSRMS has played a key role in the construction and maintenance of the ISS, both by assisting astronauts during EVAs and by working together with the SRMS, for example to hand over payloads from the Shuttle to the station. For extended capability, the SSRMS was designed as a symmetric seven-DOF arm with offset joints, enabling it to be folded in half in the stored configuration and providing kinematic redundancy in operation. Its total length is 17.6 m when fully extended. Latching End Effectors are attached at both ends, through which power, data, and video can be transmitted to and from the arm. The SSRMS can relocate itself using an inchworm-like movement, alternately grappling the Power Data Grapple Fixtures (PDGF) that are installed all over the station’s exterior surfaces to provide power, data, and video, as well as footholds. As a further mobility aid allowing the SSRMS to cover wider areas of the ISS, the Mobile Base System (MBS) was added in 2002 by STS-111 (ISS assembly flight UF-2). The MBS provides lateral mobility as it traverses the rails on the main trusses.

Fig. 19.2 Space Station Remote Manipulator System (SSRMS) grapples the Japan Aerospace Exploration Agency (JAXA) H-II Transfer Vehicle (HTV) prior to berthing it to the station. Image NASA

The Special Purpose Dexterous Manipulator (SPDM), or Dextre, which was attached to the end of the SSRMS in 2008 by STS-123 (ISS assembly flight 1J/A), is a capable mini-arm system that facilitates the delicate assembly tasks otherwise handled by astronauts during EVAs. The SPDM is a dual-arm manipulator system in which each manipulator has seven DOFs and is mounted on a one-DOF body joint. Each arm has a special tool mechanism dedicated to the handling of standardized orbital replacement units (ORU) [2].

The Japan Aerospace Exploration Agency (JAXA) also provided orbital assets, including a robotic manipulator system, for the ISS. The Japanese Experiment Module (JEM), also known by the nickname Kibo, is composed of a pressurized module, an exposed facility, an experiment logistics module, and a remote manipulator system (JEMRMS), see Fig. 19.3. These modules were developed by JAXA and successfully incorporated into the ISS by STS-123, -124 and -127 in 2008–2009.

Fig. 19.3 The Japan Aerospace Exploration Agency (JAXA) module Kibo in orbit; other modules of the International Space Station have been removed through image manipulation. Image Creative Commons

The JEMRMS comprises two components: the main arm, which is a 9.9-m-long, six-DOF arm, and the small fine arm, which is a 1.9-m-long, six-DOF arm. Unlike the SSRMS, the main arm does not have self-relocation capability. Since its installation, the arm has been used to handle and relocate components for the experiments and observations performed in the exposed facility.

19.1.1.3 ROTEX and ROKVISS

The robot technology experiment ROTEX, developed by the German Aerospace Center (DLR), is one of the historical milestones of robotics technology in space [3]. A multisensory robotic arm was flown on the Space Shuttle Columbia (STS-55) in 1993. Although the robot was confined to a work cell on the Shuttle, several key technologies were successfully tested, including those for a multisensory gripper, teleoperation from the ground and by the astronauts, shared autonomy, and time-delay compensation by the use of a predictive graphic display.

DLR also developed a two-joint manipulator system called ROKVISS, which was installed on the exterior of the Russian Service Module of the ISS in January 2005. The aim of ROKVISS was the in-flight verification of highly integrated modular lightweight robotic joints, as well as that of control technology, such as high-level system autonomy and force feedback-based teleoperation. The teleoperation experiments were conducted from the ground station via a direct radio link [4]. After 6 years of experiments in space, the ROKVISS flight hardware was brought back to Earth by a Soyuz return capsule.

19.1.1.4 ETS-VII (‘Orihime’ and ‘Hikoboshi’) and Orbital Express

The Japanese Engineering Test Satellite VII (ETS-VII) was another historical milestone in the development of robotics technology in space, particularly in the area of satellite servicing. ETS-VII was developed and launched by the National Space Development Agency of Japan (NASDA, currently JAXA) in November 1997. Numerous experiments were successfully conducted using a 2-m-long, six-DOF manipulator arm mounted on its carrier satellite.

The mission objective of ETS-VII was to test free-flying robotics technology and to demonstrate its utility in orbital operation and servicing tasks. The mission consisted of two subtasks: autonomous rendezvous/docking (RVD) and a number of robot experiments (RBT). For the RVD experiments, the spacecraft separated into two sub-satellites in orbit, one called ‘Orihime’, which behaved as a target, and the other called ‘Hikoboshi’, which acted as a chaser. The robot experiments included: (1) teleoperation from the ground with a time delay of 5–7 s; (2) robotic servicing task demonstrations, such as orbital replacement unit (ORU) exchange, fuel transfer between the satellite and the ORU, and deployment of space structures; (3) dynamically coordinated control between the manipulator reaction and the satellite attitude response; and (4) the capture and berthing of a target satellite. All of these were conducted successfully [5, 6].

Ten years after ETS-VII, a similar orbital demonstration was conducted under the Orbital Express Space Operations Architecture program by the Defense Advanced Research Projects Agency (DARPA) in the United States. The system consisted of the Autonomous Space Transport Robotic Operations (ASTRO) vehicle, developed by Boeing Integrated Defense Systems, and a prototype modular next-generation serviceable satellite, NextSat, developed by Ball Aerospace. The ASTRO vehicle was equipped with a robotic arm to perform satellite capture and ORU exchange operations. After its launch in March 2007, various mission scenarios were successfully conducted, including visual inspection, fuel transfer, ORU exchange, fly-around, rendezvous, docking and satellite capture. The free-flying capture was conducted autonomously using vision-based feedback [7].

19.1.1.5 Robonaut

Robonaut is a dexterous humanoid robot designed and built at NASA’s Johnson Space Center in the United States. Building machines that can assist humans in working in and exploring space is a key challenge. The Robonauts were designed to accomplish dexterous manipulation tasks using sophisticated human-like hands with tendon-driven fingers possessing multiple DOFs. The goal was to achieve dexterity that exceeds that of a suited astronaut. The advantage of a human-like robot is that the same workspace and tools designed for crewed space missions can be used. This not only improves efficiency, but also removes the need for specialized tools or interfaces for performing robotic operations.

Work on the first Robonaut began in 1997, and the first model, Robonaut 1 (R1), came out in 2002. Through 2006, R1 performed numerous experiments in a variety of laboratory and field test environments, proving that the concept of a robotic assistant was valid. The second-generation Robonaut 2 (R2) was revealed in 2010, see Fig. 19.4. It is more technologically advanced than R1 and was delivered to the ISS by STS-133 in February 2011, becoming the first humanoid robot in orbit on-board the ISS [8].

Fig. 19.4 Robonaut 2. Image NASA

The Robonaut is a human-torso-like robot with a total of 42 DOFs. Each arm has 7 DOFs, and each hand has 12 DOFs in its fingers. All the arm actuators are mounted within the arm itself. The torso contains 38 PowerPC processors. In total, there are more than 350 sensors, which are used for force/torque-control-based dexterous manipulation as well as for safety behaviors.

Although R2’s primary role on the space station is at present limited to experiments inside the Destiny laboratory, the future enhancement plan includes the incorporation of a lower body to allow it to move around the station’s interior. In addition, future upgrades could enable it to move outside to help astronauts with EVA tasks or to perform repairs on the exterior of the station. Combined with a surface mobility system such as legs or wheels, R2 could serve as a human-like manipulation system for future exploration missions on the Moon or Mars.

Orbital space robots will be able to assist humans in space by constructing and maintaining space modules and structures. Robotic manipulators have already played essential roles in orbital operations. Moreover, satellite servicing missions are crucial for preventing the growth of space debris. The concept of servicing robots, or free-flying robots, has been discussed for many years, but so far there have been only a limited number of validation flights in orbit. Further technological developments are expected to realize free-flying robots for servicing, rescue, or capture-and-removal missions involving spacecraft in orbit.

19.1.2 Planetary Robots

19.1.2.1 Apollo ‘Moon Buggy’ and Lunokhod

The research on lunar surface mobility systems, which represents the roots of today’s exploration rovers, began in the 1960s with an initiative in the United States to develop a crewed roving vehicle (the ‘Moon buggy’, see Fig. 19.5) for the Apollo program, along with a teleoperated rover called Lunokhod in the Soviet Union. Both the Apollo rovers (Apollo 15–17 in 1971–1972) and the Lunokhod rovers (Lunokhod 1 in 1970 and Lunokhod 2 in 1973) were successfully operated on the Moon [9].

Fig. 19.5 Astronaut Eugene A. Cernan, mission commander, makes a short checkout of the Lunar Roving Vehicle (LRV) during the early part of the first Apollo 17 extravehicular activity at the Taurus-Littrow landing site on December 11, 1972. Image NASA

There were numerous engineering design issues that had to be overcome to make vehicles work in this extraterrestrial environment, with its high radiation, vacuum, severe temperatures, and irregular terrain covered with regolith and dust. This was particularly true for the Lunokhod rovers, each of which had a mass of 840 kg and eight wheels supported by a dedicated suspension mechanism; they traveled 10.5 km (Lunokhod 1) and 37 km (Lunokhod 2) over the lunar terrain via television-image-based teleoperation from the ground station. To keep the rovers warm during the long lunar nights, a polonium-210 radioactive heat source was successfully used.

19.1.2.2 Mars Landers: From Viking to Phoenix

Following the success of the lunar programs, the exploration target shifted to Mars. In 1976, two Viking landers (Viking 1 and Viking 2) developed by NASA landed on the surface of Mars. Each had a simple robotic arm to collect surface soil samples and place them into on-board containers for in situ analysis. After the Viking mission, multiple missions to Mars were planned and actually launched, but it took about 30 years until the next successful lander mission. The Mars Phoenix lander, which successfully landed in a polar region of Mars in 2008, had a much more sophisticated robotic arm. This arm was operated to dig trenches in the Martian regolith and to acquire (scoop) dry and icy soil samples and deliver them to the in situ analyzers. It was also able to insert a sensor probe into the soil, and to position sensors and cameras at various locations near the lander.

Meanwhile, the Soviet Union also developed multiple missions to Mars, including orbiters, landers and rovers. In 1971, the Mars 2 and 3 missions successfully arrived in Martian orbit and attempted soft landings of their landing modules, each of which included a miniature rover; Mars 2 crashed on the surface, and Mars 3 lost communication soon after landing. In 1988, two lander missions to Phobos, a moon of Mars, were launched in the Soviet Phobos program; Phobos 1 suffered a terminal failure en route to Mars, while Phobos 2 attained Mars orbit and returned 38 images of Phobos with a resolution of up to 40 m, but contact was lost prior to the deployment of its planned Phobos lander. Later, Russia also developed the Mars-96 mission, which included an orbiter, lander and penetrator, but it failed at launch. A further landing and rover mission was planned, and the associated technology, including a rover testbed called Marskhod, was developed, but it was never launched.

19.1.2.3 Mars Rovers: Pathfinder, MER and MSL

Autonomous or semi-autonomous robotic vehicles are considered indispensable technology for planetary exploration. As a precursor mission for mobile robotics technology on a remote planet, the Mars Pathfinder mission deployed a micro-rover called Sojourner in 1997, see Fig. 19.6. The Sojourner rover traversed the rocky Martian surface in the close vicinity of the landing site, autonomously avoiding obstacles [10]. Based on this successful technology demonstration, NASA developed larger, more capable twins for the Mars Exploration Rover (MER) mission, see Fig. 19.7, both of which were launched in 2003. The MER-A rover (Spirit) landed in Gusev crater on January 4, 2004, and the MER-B rover (Opportunity) landed on Meridiani Planum, on the opposite side of Mars from Spirit, on January 25, 2004.

Fig. 19.6 In Spacecraft Assembly and Encapsulation Facility-2 (SAEF-2), Jet Propulsion Laboratory workers close up the metal ‘petals’ of the Mars Pathfinder lander. The small Sojourner rover is visible on one of the three petals. Image NASA

Fig. 19.7 Artist’s rendering of a Mars Exploration Rover. Image Maas Digital LLC for Cornell University and NASA/JPL

Both Pathfinder and the MER rovers introduced new technologies. First, for the landing, a combination of an aerodynamic parachute and a unique airbag system was developed. Compared to a conventional lander, which uses a powered descent and soft landing, the airbag system greatly reduces the mass of the landing module and its fuel, although it sacrifices precision landing: the lander bounces on the surface several times before it finally settles at some position.

Second, to achieve rough-terrain mobility, these rovers use six independently driven wheels connected by a unique suspension arrangement called the rocker-bogie system. The term ‘rocker’ comes from the design of the differential that keeps the rover body balanced, enabling it to ‘rock’ depending on the positions of the wheels. The term ‘bogie’, on the other hand, comes from old railroad systems and refers to a train undercarriage with six wheels that can swivel to follow the curve of a track. The axles of the six wheels are connected by a passive linkage mechanism, with no need for springs, dampers, or other active elements. Thanks to this mechanism, the rover can climb over a rock obstacle that is larger than the diameter of a wheel. The six-wheel, rocker-bogie suspension design was also adopted for NASA’s next rover (Curiosity) in the Mars Science Laboratory mission, which landed on Mars in 2012.

The MER rovers Spirit and Opportunity have an on-board manipulator arm for scientific operations. Several instruments attached at the tip of this arm can be placed directly against a rock or soil target of interest. For example, using a rock abrasion tool, the surface of a rock can be ground away, after which the interior of the rock can be observed in detail using a microscopic camera and an alpha-particle X-ray spectrometer. A stereo pair of high-resolution color CCD cameras is also mounted at the top of the Pancam Mast Assembly, which can rotate a full 360° to obtain a panoramic view of the Martian landscape. The stereoscopic measurements are used for mapping the surrounding environment and as a vision-based odometry system for rover navigation [11].

The Sojourner rover weighs about 10.5 kg and is approximately the size of a microwave oven; the Spirit and Opportunity rovers weigh about 175 kg and are the size of golf carts; and the Curiosity rover weighs about 900 kg and is the size of a car. Sojourner was actively operational for almost 3 months and traveled approximately 100 m in total. The mission of the Spirit rover was terminated in May 2011 after more than 7 years of operation on the surface; its total traveling distance was 7.73 km. The Opportunity rover, on the other hand, remained operational through 2012 and into 2013, with a cumulative distance traveled of more than 30 km [12].

The Curiosity rover landed in Gale Crater on Mars on 6 August 2012, see Fig. 19.8. Because it is much heavier than Sojourner, Spirit and Opportunity, and a much more precise landing was demanded, it used an innovative soft-landing system that combined parachute descent, powered descent, and finally a ‘sky crane’ that lowered the rover to the surface on a tether. Despite its great complexity, the landing was successful, almost at the center of the elliptical target area of about 6 km by 20 km [13].

Fig. 19.8 A self-portrait by NASA’s Curiosity rover in Gale Crater, a full-color mosaic stitched together from 55 high-resolution images captured with the Mars Hand Lens Imager (MAHLI) on 31 October 2012. Image NASA/JPL-Caltech/Malin Space Science Systems

19.1.2.4 Robotic Probes to Minor Celestial Bodies

In our solar system there are numerous minor celestial bodies, such as asteroids, comets, and the moons of the major planets, and the investigation of these bodies is also valuable for science. When comet Halley returned to the vicinity of the Sun (perihelion) in 1986, multiple space probes, including the European spacecraft Giotto, were launched to conduct detailed observations of the structure of the comet nucleus and the mechanism of coma and tail formation. As for asteroids, the first successful mission to rendezvous with and observe one over the long term was NEAR Shoemaker, which was launched in 1996 and arrived at the asteroid 433 Eros in 2000. Scientific observation continued until the craft finally touched down on the surface of Eros in 2001. Other minor-body missions include Deep Space 1 (NASA, launched 1998), Stardust (NASA, launched 1999), Contour (NASA, launched 2002 but failed), Rosetta (ESA, launched 2004), Deep Impact (NASA, launched 2005), Dawn (NASA, launched 2007) and Hayabusa (ISAS/JAXA, launched 2003).

Hayabusa was a mission to visit a near-Earth asteroid, acquire sample material from its surface, and return it to Earth for detailed analysis. It was developed by the Japanese Institute of Space and Astronautical Science (ISAS), which later became a part of JAXA. The probe was launched in May 2003 from the Uchinoura Space Center, Japan, and its re-entry capsule returned safely to the Woomera Desert of Australia in June 2010, successfully delivering dust-like soil samples of the target asteroid 25143 Itokawa.

Sample return is the method of bringing material back from space instead of taking analysis equipment all the way to space. It is the most difficult, and the ultimate, probing method: the material can be analyzed with far greater precision using the latest technology on Earth, even if the specimen is very small.

To achieve the Hayabusa sample-return mission, the following three innovative technologies were developed. The first was an ion engine (electric propulsion system). Hayabusa was equipped with four sets of newly developed cathode-less, microwave-discharge ion engines for the round trip to the target. A single engine had a nominal thrust of 8 mN with a specific impulse of 3,000 s. The ion propulsion system worked effectively throughout the 7-year deep-space mission. The total accumulated operating time reached almost 40,000 h over all four ion engines, which consumed 47 kg of xenon propellant and provided a total \( \Delta V \) of 2,200 m/s [14].
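As a rough consistency check of these figures (not part of the text; the spacecraft mass below is an assumed round number), the quoted thrust, operating time, propellant mass and \( \Delta V \) can be related through the impulse and mass-flow relations:

```python
# Sanity check of the quoted Hayabusa propulsion figures.
# The spacecraft mass is an ASSUMED round value, not given in the text.
thrust = 8e-3     # N, nominal thrust of a single engine (from the text)
isp = 3000.0      # s, specific impulse (from the text)
hours = 40000.0   # h, accumulated operation over all four engines (from the text)
mass = 510.0      # kg, assumed approximate spacecraft wet mass
g0 = 9.80665      # m/s^2, standard gravity

t = hours * 3600.0               # single-engine-equivalent thrusting time, s
dv = thrust * t / mass           # delta-V from total impulse, m/s
xenon = thrust / (isp * g0) * t  # propellant consumed at this thrust level, kg

print(f"delta-V ~ {dv:.0f} m/s  (text: 2,200 m/s)")
print(f"xenon   ~ {xenon:.0f} kg   (text: 47 kg)")
```

With these assumptions the estimates come out near 2,260 m/s and 39 kg, within roughly 20 % of the quoted values, which is as close as such a constant-mass, single-engine-equivalent approximation can be expected to get.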

The second innovative technology was an autonomous optical navigation system for conducting a rendezvous maneuver with Itokawa and a touch-down operation on a specific location on the surface of this tiny object (535 × 294 × 209 m), located at a distance of 300,000,000 km from Earth, where a round-trip communication takes approximately 33 min (2,000 s) [15].

The third technology involved material sampling in a microgravity field. The gravitational acceleration on Itokawa’s surface is estimated to be about 1/100,000 of that on Earth. Such weak gravity requires far less fuel for landing and liftoff maneuvers than on major planets or the Moon, but it also makes it difficult to remain in place on the surface to acquire samples. Therefore, a ‘touch-and-go’ type of sample acquisition system was developed [16].

The Rosetta mission was launched in 2004 to the comet 67P/Churyumov-Gerasimenko, targeting an encounter in 2014. The mission objective is to travel to and land upon the surface of the comet to study its nucleus. The lander is equipped with a specially designed anchoring system and a drilling mechanism to sample the comet’s surface materials and conduct in situ analysis.

Robots that can land and travel on lunar or planetary surfaces have contributed greatly to our knowledge of the solar system. Wheeled mobile vehicles, or robot rovers, have been successful on the natural, rough surface terrains of the Moon and Mars. Minor celestial bodies, such as asteroids and comets, have also been visited by many space probes. Minor bodies are characterized by very weak gravity fields, which makes the approach and landing maneuvers relatively easy but surface locomotion difficult.

19.2 Modeling and Control of Orbital Space Robots

Orbital robots are similar to terrestrial robots in that they are machines composed of multiple links jointed together to form arm-like structures, called manipulators, which are capable of performing a variety of tasks with specialized end-effectors and tools. The joints of the manipulator(s) are usually designed as single-degree-of-freedom (DOF) rotational joints driven by the appropriate actuators.

From the perspective of modeling and controlling orbital robots, it is appropriate to distinguish between extra-vehicular and intra-vehicular orbital robots. On the one hand, representative examples of extra-vehicular robots are SRMS/SSRMS/JEMRMS and ETS-VII/Orbital Express; on the other hand, an example of an intra-vehicular robot is ROTEX. Extra-vehicular robots may pose more challenging modeling and control problems than intra-vehicular robots, because the latter resemble terrestrial robots to a higher degree. Indeed, large-workspace manipulators such as the SRMS on the Space Shuttle and the SSRMS/JEMRMS on the International Space Station are known to exhibit structural vibrations due to the specific design constraints imposed mainly on their mass [18]. Modeling such a robot as a flexible-link [19] and/or flexible-joint [20] manipulator and employing the respective methods of control is crucial for minimizing the vibrations [21–24].

Further, and as noted in Sect. 19.1, smaller manipulators can be attached to the end-links of the SSRMS and the JEMRMS (SPDM/Dextre and the Small Fine Arm, respectively), thus forming a ‘macro-mini’ manipulator structure. This leads to further challenges in terms of modeling and robot control. The motions of the mini-manipulator(s) may induce structural vibrations in the large arm, the joints of which remain locked during mini-manipulator operations. In this case, a flexible-base manipulator model would be appropriate. Hence, a controller must be designed that minimizes the reactions imposed on the flexible base from the mini-manipulator motions, and/or damps the excited vibrations (i.e. active damping via the mini-manipulator) [25].

Another class of extra-vehicular orbital robots are free-flying robots, e.g. the Space Shuttle with the SRMS, ETS-VII or Orbital Express, which comprise a manipulator arm mounted on a satellite base. The base can attain any position and orientation depending on the forces and moments acting on it. The maneuvering capability of the satellite base can be achieved in the conventional way, i.e. using jet thrusters and the attitude control system (ACS). Similar to flexible-base robots, the forces and moments acting on the satellite base will also include undesirable reactions when the manipulator arm is set into motion. From the viewpoint of a conventional ACS, these forces are to be regarded as disturbances. However, such disturbances may not always be accommodated by the ACS, e.g. when manipulator accelerations are too high or when payloads are large and unknown. One possibility for dealing with this problem is to deactivate the ACS and let the base float freely during manipulator operation [26]. However, as was the case with ETS-VII and Orbital Express, which were teleoperated from a remote site (Earth), precise orientation of the satellite base is required for communication. Hence, a special controller design must be realized to minimize the manipulator reactions [27].

This section considers the modeling and control problems of free-floating space robots and of ‘macro-mini’ structures modeled as flexible-base manipulators. Modeling issues are discussed in the first five subsections: the underlying kinematic and dynamic equations; the system linear and angular momenta; two modeling approaches for free-floating robots (Sect. 19.2.3); the Reaction Null Space, which is useful for disturbance minimization; and alternative dynamics formulations covering ignorable coordinates, contact dynamics and the extension to multi-arm robots (Sect. 19.2.5). The last five subsections are devoted to basic control methods: end-link trajectory tracking control; point-to-point motion and non-holonomic path planning for free-floating robots; vibration suppression control for flexible-base robots; end-link impacts and impedance control; and post-impact momentum redistribution control for free-floating robots.

19.2.1 Kinematic and Dynamic Equations

Assume that the orbital robot is made of rigid-body links connected via \( n \) single-DOF joints. The joint coordinates will be denoted by \( \mathbf{\theta}\in \Re^{n} \). The system can then be described with \( 6 + n \) generalized coordinates \( q = \left( {{\mathcal{X}} ,\mathbf{\theta}} \right) \), where \( {\mathcal{X}} \in SE\left( 3 \right) \) denotes the position/orientation of the satellite base w.r.t. an appropriately chosen inertial coordinate frame (usually assumed to be orbit-fixed).

First, the equation of motion for a free-flying space robot comprising a serial-link manipulator arm mounted on a satellite base is introduced (cf. Fig. 19.9). The equation is conveniently represented in the following block-matrix form

Fig. 19.9 Model of a free-floating orbital space robot

$$ \left[ {\begin{array}{*{20}c} {\mathbf{M}_{b} } & {\mathbf{M}_{bm} } \\ {\mathbf{M}_{bm}^{T} } & {\mathbf{M}_{m} } \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {{\dot{\mathcal{V}}}_{b} } \\ {{\ddot{\mathbf{\theta}}}} \\ \end{array} } \right]\; + \;\left[ {\begin{array}{*{20}c} {C_{b} } \\ {{\mathbf{c}}_{m} } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {{\mathcal{F}}_{b} } \\\mathbf{\tau}\\ \end{array} } \right]\; + \;\left[ {\begin{array}{*{20}c} {{}^{b}\mathbf{T}_{e}^{T} } \\ {\mathbf{J}_{m}^{T} } \\ \end{array} } \right]{\mathcal{F}}_{e} $$
(19.1)

where

\( \mathbf{M}_{m} \in \Re^{n \times n} \): fixed-base manipulator link inertia matrix

\( \mathbf{M}_{b} \in \Re^{6 \times 6} \): system articulated-body inertia matrix

\( \mathbf{M}_{bm} \in \Re^{6 \times n} \): coupling inertia matrix

\( \mathbf{c}_{m} \in \Re^{n} \): fixed-base manipulator link Coriolis and centrifugal forces

\( C_{b} \in \Re^{6} \): Coriolis and centrifugal forces on the system articulated body

\( \mathbf{\tau} \in \Re^{n} \): manipulator joint torque vector

\( {\mathcal{V}}_{b} \in \Re^{6} \): spatial velocity of the base

\( {\mathcal{F}}_{b} ,\,{\mathcal{F}}_{e} \in \Re^{6} \): spatial forces on the base and the end-link, respectively

\( {}^{b}\mathbf{T}_{e} \in \Re^{6 \times 6} \): spatial coordinate transform

\( \mathbf{J}_{m} \in \Re^{6 \times n} \): fixed-base manipulator Jacobian matrix
The lower-case bold characters denote vectors; the upper-case bold characters denote matrices; and spatial quantities, such as rigid-body spatial velocities and spatial forces, are denoted by calligraphic symbols, e.g. \( {\mathcal{V}}_{O} , {\mathcal{F}}_{O} \in \Re^{6} \), respectively. The convention for spatial vectors composed of 3D quantities is a linear component followed by an angular component, e.g. \( {\mathcal{V}}_{O} = \left[ \mathbf{v}_{O}^{T} \;\; \boldsymbol{\omega}^{T} \right]^{T} \) and \( {\mathcal{F}}_{O} = \left[ \mathbf{f}^{T} \;\; \mathbf{n}_{O}^{T} \right]^{T} \), where \( \mathbf{v}, \boldsymbol{\omega}, \mathbf{f}, \mathbf{n} \) denote the 3D vectors of linear velocity, angular velocity, force and moment, respectively. Spatial transforms are represented as

$$ {}^{k}{\mathbf{T}}_{l} = \left[ {\begin{array}{*{20}c} {{}^{k}{\mathbf{R}}_{l} } & { - {}^{k}{\mathbf{R}}_{l} {}^{k}{\mathbf{r}}_{l}^{ \times } } \\ 0 & {{}^{k}{\mathbf{R}}_{l} } \\ \end{array} } \right] $$
(19.2)

with \( {}^{k}\mathbf{R}_{l} \in \Re^{3 \times 3} \) denoting the orientation of coordinate frame \( \{ l\} \) with respect to \( \{ k\} \), and \( {}^{k}\mathbf{r}_{l}^{ \times } \in \Re^{3 \times 3} \) denoting the skew-symmetric operator associated with the vector \( {}^{k}\mathbf{r}_{l} \in \Re^{3} \) that expresses the position of \( \{ l\} \) with respect to \( \{ k\} \).
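As a minimal sketch of how the spatial transform of Eq. 19.2 can be assembled numerically (using NumPy; the function names are illustrative):

```python
import numpy as np

def skew(r):
    """Skew-symmetric operator r^x, such that skew(r) @ a == np.cross(r, a)."""
    return np.array([[0.0, -r[2], r[1]],
                     [r[2], 0.0, -r[0]],
                     [-r[1], r[0], 0.0]])

def spatial_transform(R_kl, r_kl):
    """6x6 spatial transform kT_l of Eq. 19.2, built from the rotation R_kl
    of frame {l} w.r.t. {k} and the position r_kl of {l} w.r.t. {k}."""
    T = np.zeros((6, 6))
    T[:3, :3] = R_kl
    T[:3, 3:] = -R_kl @ skew(r_kl)
    T[3:, 3:] = R_kl
    return T
```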

The upper part of Eq. 19.1 describes the system articulated-body dynamics. The coordinates are those of the satellite base, but the inertial properties are those of the entire system, hence the term ‘articulated body’ [28]. The lower part of Eq. 19.1 describes the dynamics of the manipulator. Because base coordinates are used, the quantities \( \mathbf{M}_{m} \), \( \mathbf{c}_{m} \) and \( \mathbf{J}_{m} \) are those of the respective fixed-base manipulator. Furthermore, the entire equation comprises the intercoupled inertial and nonlinear generalized forces on the left-hand side, and the external and/or driving forces on the right-hand side.

For the case of a flexible-base space robot (cf. Fig. 19.10), two additional terms are added on the left-hand side to account for the base spatial damping and stiffness: they are expressed via diagonal matrices \( \mathbf{D}_{b} ,{\kern 1pt} {\kern 1pt} \mathbf{K}_{b} \in \Re^{6 \times 6} \) with elements \( d_{bk} \) and \( k_{bk} ,k = 1,2, \ldots ,6 \), respectively

Fig. 19.10 Model of a flexible-base manipulator system

$$ \begin{aligned} \left[ {\begin{array}{*{20}c} {{\mathbf{M}}_{b} } & {\mathbf {M}_{bm} } \\ {\mathbf {M}_{bm}^{T} } & {\mathbf {M}_{m} } \\ \end{array} } \right] & \left[ {\begin{array}{*{20}c} {{\dot{\mathcal{V}}}_{b} } \\ {{\ddot{\mathbf{\theta}}}} \\ \end{array} } \right] + \left[ {\begin{array}{*{20}c} {C_{b} } \\ {\mathbf{c}_{m} } \\ \end{array} } \right] + \left[ {\begin{array}{*{20}c} {\mathbf{D}_{b} } & {\mathbf{0}} \\ {\mathbf{0}} & {\mathbf{0}} \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {{\mathcal{V}}_{b} } \\ {\dot{\mathbf{\theta }}} \\ \end{array} } \right] + \left[ {\begin{array}{*{20}c} {\mathbf{K}_{b} } & {\mathbf{0}} \\ {\mathbf{0}} & {\mathbf{0}} \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {{\Delta }{\mathcal{X}}_{b} } \\ {{\Delta }\mathbf{\theta}} \\ \end{array} } \right] \\ & = \left[ {\begin{array}{*{20}c} {\mathbf{0}} \\\mathbf{\tau}\\ \end{array} } \right] + \left[ {\begin{array}{*{20}c} {{}^{b}\mathbf{T}_{e}^{T} } \\ {\mathbf{J}_{m}^{T} } \end{array} } \right]{\mathcal{F}}_{e} \end{aligned} $$
(19.3)

The base external/driving force \( {\mathcal{F}}_{b} \) was set to zero.

The kinematic equation for the velocity is given as

$$ {\mathcal{V}}_{e} = \mathbf{T}_{eb} {\mathcal{V}}_{b} + \mathbf{J}_{m} ({\mathbf{\theta}}) {\dot{\mathbf{\theta}}} $$
(19.4)

where \( {\mathcal{V}}_{e} \) is the spatial velocity of the end-link. The first component on the right-hand side represents the base motion, and the second component represents the manipulator motion with respect to the base.

19.2.2 Linear and Angular Momenta

The spatial momentum of a free-floating robot consists of two elements: a linear and an angular one. The angular momentum component is written with respect to the center of mass (CoM) of the articulated body

$$ {\mathcal{L}}_{c} \equiv \left[ {\begin{array}{*{20}c} \mathbf{p} \\ {\mathbf{l}_{c} } \\ \end{array} } \right] = \mathbf{M}_{c} {\mathcal{V}}_{c} . $$
(19.5)

The linear element is \( \mathbf{p} = \sum\nolimits_{i = 0}^{n} m_{i} \dot{\mathbf{r}}_{i} = m_{t} \dot{\mathbf{r}}_{c} \) and the angular element is \( \mathbf{l}_{c} = \sum\nolimits_{i = 0}^{n} \left( \mathbf{I}_{i} \boldsymbol{\omega}_{i} + m_{i} \mathbf{r}_{i} \times \dot{\mathbf{r}}_{i} \right) \), where \( \mathbf{I}_{i}, m_{i}, \mathbf{r}_{i}, \boldsymbol{\omega}_{i} \) represent the link \( i \) inertia matrix, mass, CoM position and angular velocity, respectively, all expressed in inertial coordinates. In addition, \( m_{t} \) denotes the mass of the articulated-body system, and \( \mathbf{r}_{c} \) and \( {\mathcal{V}}_{c} \) denote its CoM position and spatial velocity, respectively. The matrix \( \mathbf{M}_{c} \) is a block-diagonal matrix with upper block \( m_{t} \mathbf{U} \) and lower block \( \mathbf{I}_{c} \equiv \sum\nolimits_{i = 0}^{n} \left( \mathbf{I}_{i} - m_{i} \mathbf{r}_{ci}^{ \times } \mathbf{r}_{ci}^{ \times } \right) \).
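The momentum sums translate directly into code; the following sketch (NumPy; argument names are illustrative) evaluates the two elements of Eq. 19.5 from the individual link states:

```python
import numpy as np

def spatial_momentum(masses, inertias, r, r_dot, omega):
    """Linear momentum p and angular momentum l_c of Eq. 19.5, summed over
    the base (index 0) and the n links; all inputs in inertial coordinates."""
    p = sum(m * rd for m, rd in zip(masses, r_dot))
    l_c = sum(I @ w + m * np.cross(ri, rd)
              for m, I, ri, rd, w in zip(masses, inertias, r, r_dot, omega))
    return p, l_c
```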

Redefining spatial momentum with respect to the base gives

$$ {\mathcal{L}}_{b} = \left[ {\begin{array}{*{20}c} {\mathbf{p}} \\ {{\mathbf{r}}_{bc} \times {\mathbf{p}} + {\mathbf{l}}_{c} } \\ \end{array} } \right] $$
(19.6)

where \( \mathbf{r}_{bc} \) denotes the position of the articulated-body CoM with respect to the base frame. This representation has the advantage that the familiar fixed-base manipulator inertial properties can be applied, and it can be related to the equation of motion Eq. 19.1 as follows. Extracting from Eq. 19.1 the part that concerns the system articulated-body dynamics yields

$$ \mathbf{M}_{b} {\dot{\mathcal{V}}}_{b} + \mathbf{M}_{bm} {\ddot{\mathbf{\theta }}} + C_{b} = {\mathcal{F}}_{qs} $$
(19.7)

where \( {\mathcal{F}}_{qs} = {\mathcal{F}}_{b} + \mathbf{T}_{eb}^{T} {\mathcal{F}}_{e} \) denotes the quasistatic forces. The dynamic equilibrium of the articulated-body system can then be expressed as \( {\mathcal{F}}_{d} - {\mathcal{F}}_{qs} = {\mathbf{0}} \), where the dynamic force is obtained as the time derivative \( {\mathcal{F}}_{d} = \frac{d}{dt}{\mathcal{L}}_{b} \). In the absence of quasistatic forces (\( {\mathcal{F}}_{qs} = {\mathbf{0}} \)), i.e. when the base is unactuated and no external forces act on the end-link, the articulated-body dynamics Eq. 19.7 can be integrated to give

$$ \mathbf{M}_{b} {\mathcal{V}}_{b} + \mathbf{M}_{bm} {\dot{\mathbf{\theta }}} = {\bar{\mathcal{L}}}_{b} $$
(19.8)

where \( {\bar{\mathcal{L}}}_{b} \) is the integration constant. The first component on the left-hand side, \( \mathbf{M}_{b} {\mathcal{V}}_{b} \), is the articulated-body momentum due to the base motion. The second component, \( \mathbf{M}_{bm} \dot{\mathbf{\theta }} \), is due to the manipulator motion. It plays an important role in path planning and control as will be shown below. The component is called coupling momentum [29] and will be denoted as \( {\mathcal{L}}_{bm} \). It gives rise to a spatial force imposed on the base via manipulator motion

$$ {\mathcal{F}}_{bm} \equiv {\mathbf{M}_{bm} \ddot{\mathbf{\theta }}\, + \,\dot{\mathbf{M}}_{bm} \dot{\mathbf{\theta }}}. $$
(19.9)

\( {\mathcal{F}}_{bm} \) will henceforth be referred to as the imposed force. The articulated-body dynamics of a free-flying space robot can then be represented in a form familiar from Newtonian mechanics:

$$ \mathbf{M}_{b} {\dot{\mathcal{V}}}_{b} = -{\mathcal{F}}_{bm} $$
(19.10)

which was obtained from Eq. 19.1 under the assumption of no external forces, and the approximation of \( C_{b} \approx \dot{\mathbf{M}}_{bm} \dot{\mathbf{\theta }} \) [29].

Looking further at the integrability of the momentum equation, the linear part is integrable, whereas the angular part is not. The latter therefore represents a non-holonomic constraint, implying that the orientation of the base cannot be expressed as a function of the current manipulator joint angles; rather, it depends on the history of the joint angle vector.
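The path dependence implied by the non-holonomic constraint can be illustrated numerically. The sketch below uses a toy planar model (not from the text: one base rotation, two joints, and an invented configuration-dependent coupling inertia); driving the joints through two different paths with identical endpoints leaves the base at different orientations:

```python
import numpy as np

M_b = 10.0  # toy base moment of inertia about the out-of-plane axis

def M_bm(theta):
    """Invented configuration-dependent coupling inertia row [m1, m2]."""
    return np.array([1.0 + 0.5 * np.cos(theta[1]), 0.7])

def base_drift(path, steps=10000):
    """Integrate phi_dot = -M_bm(theta) theta_dot / M_b along a joint path,
    where `path` maps s in [0, 1] to the joint vector theta(s)."""
    phi, ds = 0.0, 1.0 / steps
    for i in range(steps):
        s = i * ds
        phi += -M_bm(path(s)).dot(path(s + ds) - path(s)) / M_b
    return phi

ramp = lambda x: min(max(x, 0.0), 1.0)
path_a = lambda s: np.array([ramp(2 * s), ramp(2 * s - 1.0)])  # joint 1 first
path_b = lambda s: np.array([ramp(2 * s - 1.0), ramp(2 * s)])  # joint 2 first

print(base_drift(path_a), base_drift(path_b))  # ~ -0.220 vs ~ -0.197 rad
```

Both paths end at the same joint configuration \( (1, 1) \) rad, yet the base orientation differs, so no function of the joint angles alone can describe the base attitude.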

The articulated-body dynamics of a flexible-base robot have the same form as in Eq. 19.7, with the addition of quasistatic forces

$$ {\mathcal{F}}_{qs} = \mathbf{T}_{eb}^{T} {\mathcal{F}}_{e} - \mathbf{D}_{b} {\mathcal{V}}_{b} - \mathbf{K}_{b}\Delta {\mathcal{X}}_{b} . $$
(19.11)

Even with no external force (\( {\mathcal{F}}_{e} = {\mathbf{0}} \)), the quasistatic forces will be non-zero, e.g. when the base is displaced from its equilibrium position by the manipulator reaction. Hence, momentum conservation does not necessarily hold in this case. The articulated-body dynamics can be rewritten in the classical mass-damper-spring form via the imposed-force notation introduced above

$$ \mathbf{M}_{b} {\dot{\mathcal{V}}}_{b} + \mathbf{D}_{b} {\mathcal{V}}_{b} + \mathbf{K}_{b} {\Delta}{\mathcal{X}}_{b} = -{\mathcal{F}}_{bm} . $$
(19.12)

19.2.3 Virtual Manipulator and Generalized Jacobian

A free-flying robot with an unactuated base obeys the law of momentum conservation. This is a special case: the dynamics are simplified and, additionally, velocity-based relations play a predominant role. However, inertial properties are involved in these relations, in contrast with the case of fixed-base terrestrial robots. Because the base is unactuated, it moves in reaction to manipulator motions. This diminishes the motion ability of the end-link and the workspace of the manipulator when compared to the same manipulator mounted on a fixed base.

There are two convenient concepts for dealing with such velocity-level models: the Virtual Manipulator [30] and the Generalized Jacobian [31]. The Virtual Manipulator is a massless kinematic chain fixed at the ‘virtual ground’, a point that does not move (under zero initial momentum) in inertial space. This point is the CoM of the articulated-body system. The joint arrangement of the Virtual Manipulator matches that of the real manipulator and its joint axes are parallel to the respective real axes, while its link lengths depend on the inertial properties. With this construction, the degraded end-link motion ability due to the base motion can be accounted for.

Another convenient notation for velocity-level relations is the Generalized Jacobian. Spatial momentum conservation, as in Eq. 19.8, can be used as a constraint with respect to the manipulator motion. From Eq. 19.8, the base velocity is obtained as

$$ {\mathcal{V}}_{b} = {\bar{\mathcal{V}}}_{b} - \mathbf{M}_{b}^{ - 1} \mathbf{M}_{bm} \dot{\mathbf{\theta }} $$
(19.13)

where \( {\bar{\mathcal{V}}}_{b} = \mathbf{M}_{b}^{ - 1} {\bar{\mathcal{L}}}_{b} \) is obtained from the initial spatial momentum, and the second component is attributed to the coupling momentum induced by the manipulator motion. Inserting \( {\mathcal{V}}_{b} \) into Eq. 19.4, the constrained end-link velocity becomes

$$ {\mathcal{V}}_{e} = {\bar{\mathcal{V}}}_{e} + {\hat{\mathbf{J}}} {\dot{\mathbf{\theta}}} $$
(19.14)

where \( {\bar{\mathcal{V}}}_{e} = \mathbf{T}_{eb} {\bar{\mathcal{V}}}_{b} \). The matrix

$$ \hat{\mathbf{J}} \equiv \mathbf{J}_{m} - \mathbf{T}_{eb} \mathbf{M}_{b}^{ - 1} \mathbf{M}_{bm} $$

is called the Generalized Jacobian.
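Numerically, the Generalized Jacobian is a one-line correction to the fixed-base Jacobian. The sketch below evaluates it, and the end-link velocity of Eq. 19.14 under zero initial momentum, for illustrative toy matrices:

```python
import numpy as np

def generalized_jacobian(J_m, T_eb, M_b, M_bm):
    """J_hat = J_m - T_eb M_b^{-1} M_bm, with matrices as defined in the text."""
    return J_m - T_eb @ np.linalg.solve(M_b, M_bm)

rng = np.random.default_rng(0)
n = 7                                  # toy 7-DOF arm
J_m = rng.standard_normal((6, n))      # fixed-base manipulator Jacobian
T_eb = np.eye(6)                       # base-to-end-link spatial transform
A = rng.standard_normal((6, 6))
M_b = A @ A.T + 6.0 * np.eye(6)        # positive-definite articulated-body inertia
M_bm = rng.standard_normal((6, n))     # coupling inertia matrix

J_hat = generalized_jacobian(J_m, T_eb, M_b, M_bm)
theta_dot = rng.standard_normal(n)
V_e = J_hat @ theta_dot                # Eq. 19.14 with zero initial momentum
```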

19.2.4 The Reaction Null Space

As shown previously, the motion of the base in reaction to manipulator motion diminishes the end-link motion ability and the effective workspace. One possibility to mitigate this is to use custom path planning and control methods for manipulator motions that minimize the reaction at the base. In fact, it is straightforward to predict the existence of reactionless motions: manipulator motions that guarantee full dynamical decoupling between the base and the manipulator. This condition is expressed simply as \( {\mathcal{F}}_{bm} = {\mathbf{0}} \).

When the system articulated-body dynamics Eq. 19.7 of an orbital space robot with an unactuated base, zero initial base velocity (\( {\bar{\mathcal{V}}}_{b} = {\mathbf{0}} \)), and zero external forces (\( {\mathcal{F}}_{qs} = {\mathbf{0}} \)) is considered with Eqs. 19.10 or 19.12, the following relation results

$$ {\mathcal{F}}_{bm} = {\mathbf{M}_{bm} {\ddot{\mathbf{\theta }}} + {\dot{\mathbf{M}}}_{bm} {\dot{\mathbf{\theta }}}} = {\mathbf{0}} $$
(19.15)

where the nonlinear force \( C_{b} \) in Eq. 19.7 was approximated as in Eq. 19.10. This equation can be integrated once to obtain the momentum equation

$$ \mathbf{M}_{bm} \dot{\mathbf{\theta }} = {\bar{\mathcal{L}}}_{bm} $$
(19.16)

where \( {\bar{\mathcal{L}}}_{bm} \) denotes the (constant) coupling momentum. This is a linear equation in the joint velocities, and its solution type depends on the number of manipulator joints \( n \): the equation is determined if \( n = 6 \), and under-determined otherwise (\( n > 6 \)). In the latter case, the joint velocity vector derived from the above equation is

$$ \dot{\mathbf{\theta }} = \mathbf{M}_{bm}^{ + } {\bar{\mathcal{L}}}_{bm} + \mathbf{P}_{{M_{bm} }} \dot{\mathbf{\theta }}_{a} $$
(19.17)

where \( ( \circ )^{ + } \) is the Moore–Penrose generalized inverse, \( \mathbf{P}_{( \circ )} \) is a null-space projector and \( ( \circ )_{a} \) is an arbitrary vector [29]. The two components on the r.h.s. are orthogonal, implying that any joint velocity from the null space of the coupling inertia matrix will not change the momentum of the base. These types of manipulator motions are termed reactionless and are obtained by varying the arbitrary velocity vector \( \dot{\mathbf{\theta }}_{a} \). The null space itself is termed the Reaction Null Space (RNS) [29] and is useful for motion analysis, path planning and reactionless motion control.

The set of reactionless motions depends on the rank of the RNS projector: \( {\text{rank}}\,\mathbf{P}_{{M_{bm} }} = n - 6 \). With a seven-DOF manipulator, for example, the set is just one-dimensional, implying that reactionless motions are possible only along the integral curves of the above differential equation. In general, it is desirable to have a larger set of such paths. One possibility is to increase the number of manipulator joints (i.e. the DOFs). Another is to redefine the RNS with respect to only some of the base coordinates. From a practical viewpoint, the orientation of the base is the most important factor; hence, the RNS can be redefined with respect to the angular variables only. In that case, the rank of the RNS projector increases to \( n - 3 \). An example is shown in Sect. 19.2.10.
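A sketch of Eq. 19.17 in the zero-coupling-momentum case, again with toy matrices: the RNS projector is built from the Moore–Penrose pseudoinverse, and any joint velocity passed through it imparts no momentum to the base.

```python
import numpy as np

def rns_projector(M_bm):
    """Reaction Null Space projector P = I - M_bm^+ M_bm of Eq. 19.17."""
    return np.eye(M_bm.shape[1]) - np.linalg.pinv(M_bm) @ M_bm

rng = np.random.default_rng(1)
M_bm = rng.standard_normal((6, 9))       # toy coupling inertia, n = 9 > 6
P = rns_projector(M_bm)                  # rank n - 6 = 3

theta_dot = P @ rng.standard_normal(9)   # reactionless joint velocity
print(np.linalg.norm(M_bm @ theta_dot))  # ~1e-15: zero coupling momentum
```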

19.2.5 Other Representations of System Dynamics

19.2.5.1 Ignorable Coordinates

From analytical mechanics it is known that conserved quantities in the equations of motion yield ignorable, or cyclic, coordinates. In the case of free-floating robot dynamics, these are the coordinates of the base. This property was already used when deriving the Generalized Jacobian in Eq. 19.14 from the kinematic and momentum equations. The ignorable coordinates can be removed in a similar way from the dynamic equation Eq. 19.1, which leads to a representation in reduced form

$$ \hat{\mathbf{M}}_{m} {\ddot{\mathbf{\theta}}} + \hat{\mathbf{c}}_{m} = \hat{\mathbf{\tau}} + {\hat{\mathbf{J}}}^{T} {\mathcal{F}}_{e} $$
(19.18)

where \( \hat{\mathbf{M}}_{m} = \mathbf{M}_{m} - \mathbf{M}_{bm}^{T} \mathbf{M}_{b}^{ - 1} \mathbf{M}_{bm} \), \( \hat{\mathbf{c}}_{m} = \mathbf{c}_{m} - \mathbf{M}_{bm}^{T} \mathbf{M}_{b}^{ - 1} C_{b} \) and \( \hat{\mathbf{\tau}} = \mathbf{\tau} - \mathbf{M}_{bm}^{T} \mathbf{M}_{b}^{ - 1} {\mathcal{F}}_{b} \) [32]. The dimension of the equation is reduced to \( n \), the same as for a fixed-base manipulator.
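The hatted quantities are plain Schur-complement-style eliminations of the base coordinates; a sketch (illustrative names, matrices as defined above):

```python
import numpy as np

def reduced_dynamics_terms(M_m, M_b, M_bm, c_m, C_b, tau, F_b):
    """Hatted quantities of Eq. 19.18, obtained by eliminating the
    (ignorable) base coordinates from Eq. 19.1."""
    Mb_inv = np.linalg.inv(M_b)
    M_m_hat = M_m - M_bm.T @ Mb_inv @ M_bm  # reduced inertia matrix
    c_m_hat = c_m - M_bm.T @ Mb_inv @ C_b   # reduced nonlinear forces
    tau_hat = tau - M_bm.T @ Mb_inv @ F_b   # reduced generalized forces
    return M_m_hat, c_m_hat, tau_hat
```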

Furthermore, system dynamics can be represented in terms of quasi-coordinates by using the articulated-body quasi-coordinates \( {\mathcal{V}}_{c} \) instead of the base coordinates \( {\mathcal{X}}_{b} \). The articulated-body dynamics are derived via time differentiation of spatial momentum in Eq. 19.5

$$ \mathbf{M}_{c}{\dot{\mathcal{V}}}_{c} + C_{c} = \mathbf{T}_{ec}^{T}{\mathcal{F}}_{e} $$
(19.19)

where \( C_{c} \) denotes the non-linear forces. Combining this with the reduced dynamics in Eq. 19.18, the total dynamics in decoupled form is as follows

$$\left[ {\begin{array}{*{20}c} \mathbf{M}_{c} & \mathbf{0} \\ \mathbf{0} & {\hat{\mathbf{M}}_{m} } \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {\dot{\mathcal{V}}}_{c} \\ \ddot{\mathbf{\theta}} \\ \end{array} } \right]\, +\, \left[ {\begin{array}{*{20}c} {\mathbf{C}}_{c} \\ {\hat{\mathbf{c}}}_{m} \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} \mathbf{0} \\ {\hat{\mathbf{\tau}}} \\ \end{array} } \right] + \left[ {\begin{array}{*{20}c} {\mathbf{T}}_{ec}^{T} \\ {\hat{\mathbf{J}}^{T} } \\ \end{array} } \right]{\mathcal{F}}_{e} .$$
(19.20)

19.2.5.2 End-Link Contact Dynamics

During manipulation and other tasks, the end-link may establish contact with an object. Spatial contact forces will thereby be generated and subsequently propagated via the end-link to the rest of the robot links. Hence, it is crucial to model the end-link contact dynamics.

The spatial velocity of the end-link was represented in Eq. 19.4, which uses the base coordinates as an intermediate frame. Because these are ignorable coordinates, the relation can be rewritten via the articulated-body quasi-coordinates as

$$ {\mathcal{V}}_{e} = \mathbf{T}_{ec} {\mathcal{V}}_{c} + {\hat{\mathbf{J}}}{\dot{\mathbf{\theta}}} $$
(19.21)

To obtain the dynamic relations, the respective acceleration will be used

$$ {\dot{\mathcal{V}}}_{e} = \mathbf{T}_{ec} {\dot{\mathcal{V}}}_{c} + {\hat{\mathbf{J}}}{\ddot{\mathbf{\theta}}} + \dot{\mathbf{T}}_{ec} {\mathcal{V}}_{c} + \dot{\hat{\mathbf{J}}} {\dot{\mathbf{\theta}}} $$
(19.22)

The quasi-coordinate acceleration \({\dot{\mathcal{V}}}_{c}\) and the joint acceleration \( \ddot{\mathbf{\theta}} \) can be obtained from the articulated-body dynamics in Eq. 19.19 and from the reduced form of dynamics in Eq. 19.18, respectively. In contact scenarios, two cases are usually considered: free manipulator joints (\( \tau = {\mathbf{0}} \)) and locked manipulator joints (\( \dot{\mathbf{\theta }} = {\mathbf{0}} \)) [33]. The end-link contact dynamics can then be represented as

$$ \dot{\mathcal{V}}_{e} = \mathbf{M}_{*}^{ - 1} {\mathcal{F}}_{e} + {\mathcal{A}}_{*} $$
(19.23)

where \( {\mathcal{A}}_{*} \) denotes the non-linear, velocity-dependent end-link acceleration and

$$ \mathbf{M}_{*}^{ - 1} = \mathbf{T}_{ec} \mathbf{M}_{c}^{ - 1} \mathbf{T}_{ec}^{T} + \kappa {\hat{\mathbf{J}}\hat{\mathbf{M}}}_{m}^{ - 1} \hat{\mathbf{J}}^{T} $$
(19.24)

represents the mobility tensor, such that \( \kappa = 1 \) in the free-joint case and \( \kappa = 0 \) in the locked-joint case.
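Both contact cases of Eq. 19.24 can be evaluated with a single switch; a sketch with the quantities defined above (names illustrative):

```python
import numpy as np

def inverse_mobility(T_ec, M_c, J_hat, M_m_hat, joints_free):
    """Inverse mobility tensor of Eq. 19.24: kappa = 1 for free manipulator
    joints (tau = 0), kappa = 0 for locked joints (theta_dot = 0)."""
    kappa = 1.0 if joints_free else 0.0
    base_part = T_ec @ np.linalg.inv(M_c) @ T_ec.T
    joint_part = J_hat @ np.linalg.inv(M_m_hat) @ J_hat.T
    return base_part + kappa * joint_part
```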

19.2.5.3 Extension to Multi-Arm Orbital Robots

When a free-flying space robot has \( l \) manipulator arms mounted on its base, the manipulators form a tree-like structure. Each manipulator arm has \( n_{k} \) joints, \( k = 1,2, \ldots ,l \), giving a total of \( n = \sum\nolimits_{k = 1}^{l} n_{k} \) joints. External forces may act on the base as well as on one or more of the end-links. The dynamic equation Eq. 19.1 then becomes

$$ \left[ {\begin{array}{*{20}c} {\mathbf{M}_{b} } & {\mathbf{M}_{bm} } \\ {\mathbf{M}_{bm}^{T} } & {\mathbf{M}_{m} } \end{array} } \right]\left[ {\begin{array}{*{20}c} {{\dot{\mathcal{V}}}_{b} } \\ {\ddot{\mathbf{\theta }}} \end{array} } \right]+ \left[ {\begin{array}{*{20}c} {C_{b} } \\ {\mathbf{c}_{m} } \end{array} } \right] = \left[ {\begin{array}{*{20}c} {{\mathcal{F}}_{b} } \\ \tau \end{array} } \right] + \left[ {\begin{array}{*{20}c} {\mathbf{T}_{eb}^{T} } \\ {\mathbf{J}_{m}^{T} } \end{array} } \right]{\mathcal{F}}_{e} $$
(19.25)

where \(\mathbf{\theta}= \left[ \mathbf{\theta}_{1}^{T}\,\mathbf{\theta}_{2}^{T} \ldots \mathbf{\theta}_{l}^{T} \right]^{T} \), \( {\mathbf{\tau}} = \left[ {\mathbf{\tau}}_{1}^{T}\,{\mathbf{\tau}}_{2}^{T} \ldots {\mathbf{\tau}}_{l}^{T} \right]^{T} \in \Re^{n} \), and \( {\mathcal{F}}_{e} = \left[ {\mathcal{F}}_{{e_{1} }}^{T}\,{\mathcal{F}}_{{e_{2} }}^{T} \ldots {\mathcal{F}}_{{e_{l} }}^{T} \right]^{T} \in \Re^{6l} \). The Jacobian \( \mathbf{J}_{m} \in \Re^{6l \times n} \) is block-diagonal, with each block \( \mathbf{J}_{{m_{k} }} \in \Re^{{6 \times n_{k} }} \) being the fixed-base manipulator Jacobian of the \( k \)-th arm; \( {\mathcal{F}}_{{e_{k} }} \) is the spatial force acting at its end-link, and \( \mathbf{T}_{eb}^{T} \in \Re^{6 \times 6l} \) is composed of the matrices \( \mathbf{T}_{{e_{k} b}}^{T} \) [34–37].

The kinematic equations, on the other hand, can be written as

$$ {\mathcal{V}}_{{e_{k} }} = {\bar{\mathcal{V}}}_{{e_{k} }} + \hat{\mathbf{J}}_{k} \dot{\mathbf{\theta }}_{k} , \quad k = 1,2, \ldots ,l $$
(19.26)

where \( {\mathcal{V}}_{{e_{k} }} \) is the spatial velocity of the \( k \)-th end-link, \( {\bar{\mathcal{V}}}_{{e_{k} }} = \mathbf{T}_{{e_{k} b}} {\bar{\mathcal{V}}}_{b} \) results from the initial spatial momentum, and the matrices \( \hat{\mathbf{J}}_{k} \equiv \hat{\mathbf{J}}_{{m_{k} }} - \mathbf{T}_{{e_{k} b}} \mathbf{M}_{b}^{ - 1} \mathbf{M}_{{bm_{k} }} \) are the Generalized Jacobians [34].

The dynamic equation Eq. 19.3 can be recast in a similar fashion when the flexible-base space robot includes more than one manipulator.

19.2.6 Velocity-Based End-Link Trajectory Tracking Control

Velocity-based control is used in teleoperation mode, as explained in Sect. 19.4. In contrast, velocity-based end-link trajectory tracking is used in an autonomous mode of operation to accomplish precise motion tasks such as approaching specific parts of hardware equipment. The end-link path is planned, for example, to avoid undesirable interference with other parts of the equipment. Typically, feedback control is employed in workspace coordinates based on the manipulator's inverse Jacobian [38]. Orbital robots can be directly controlled with such methods when the end-link trajectory is designed with respect to the base coordinate frame. For trajectories specified in inertial (orbit-fixed) coordinates (e.g. during satellite capture, satellite repair or payload transfer), the base deflection due to reactions should be taken into account. For the case of an unactuated base, the feedback controller can be designed using the Generalized Jacobian formulation from the previous section [31]. The manipulator joint velocities to be used as control inputs for the velocity-level feedback controller are

$$ \dot{\mathbf{\theta }} = \hat{\mathbf{J}}^{ - 1} \left( {\mathbf{K}_{p} \left( {{\mathcal{X}}_{e}^{d} - {\mathcal{X}}_{e} } \right) + {\mathcal{V}}_{e}^{d} } \right) $$
(19.27)

where \( {\mathcal{X}}_{e}^{d} \) and \( {\mathcal{V}}_{e}^{d} \) denote the desired end-link spatial position and velocity along the given inertial trajectory and \( \mathbf{K}_{p} \) is a feedback gain matrix. The actual end-link position \( {\mathcal{X}}_{e} \) is obtained as the sum of two components: the inertial base position, obtained via appropriate measurements, and the end-link position w.r.t. the base, obtained via the direct kinematics relations for fixed-base robots based on manipulator joint position measurements [38].
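
As an illustration, the control law of Eq. 19.27 amounts to a few lines of linear algebra. The sketch below is hypothetical and treats the end-link position error as a plain 6-vector (a real implementation would handle the orientation error more carefully):

```python
import numpy as np

def tracking_joint_rates(J_hat, X_e_d, X_e, V_e_d, K_p):
    """Velocity-level tracking law of Eq. 19.27 (illustrative sketch).

    J_hat      : 6x6 Generalized Jacobian (square, non-singular case assumed)
    X_e_d, X_e : desired and measured end-link spatial positions (6-vectors)
    V_e_d      : desired end-link spatial velocity along the trajectory
    K_p        : 6x6 feedback gain matrix
    """
    # Solve J_hat @ theta_dot = K_p (X_e_d - X_e) + V_e_d rather than
    # forming the inverse of the Jacobian explicitly.
    return np.linalg.solve(J_hat, K_p @ (X_e_d - X_e) + V_e_d)
```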

19.2.7 Point-to-Point Motion and Nonholonomic Path Planning

Point-to-point (PTP) motion control ensures precise positioning of the end-link at a desired spatial position in inertial space, or the attainment of a desired manipulator configuration; the motion trajectory itself is of little interest in this case [38]. The folding and unfolding of the manipulator arm to/from the stowed position is usually carried out via PTP motion control in joint-space coordinates. Alternatively, tasks that require end-link positioning with respect to some equipment can be performed either in base coordinates, when the equipment is fixed to the base, or in inertial coordinates, when the equipment is fixed to another body. In the latter case, PTP motion control can be realized via the Generalized Jacobian feedback control equation

$$ \dot{\mathbf{\theta }} = \hat{\mathbf{J}}^{ - 1} \left( {\mathbf{K}_{p} \left( {{\mathcal{X}}_{e}^{d} - {\mathcal{X}}_{e} } \right) - \mathbf{K}_{d} \dot{{\mathcal{X}} }_{e} } \right). $$
(19.28)

The base may thereby freely change its state.

In particular, in the case of a free-flying robot with an unactuated base, the system exhibits non-holonomic behavior owing to the nonintegrability of the constraint on the spacecraft attitude. Nevertheless, it is possible to control the base attitude during PTP operations, e.g. via a bidirectional path planning method [39, 40].

19.2.8 Vibration Suppression Control

For flexible-base space robots, vibrations of the base may degrade end-link task performance. It is possible to suppress these vibrations via manipulator motion, using the inertial coupling between the base and the manipulator [25]. This becomes apparent when analyzing the articulated-body dynamics expressed from Eqs. 19.7 and 19.11 as

$$\mathbf{M}_{b} {\dot{\mathcal{V}}}_{b} + \mathbf{M}_{bm} {\ddot{\mathbf{\theta }}} + C_{b} = \mathbf{T}_{eb}^{T} {\mathcal{F}}_{e} - \mathbf{D}_{b} {\mathcal{V}}_{b} - \mathbf{K}_{b}\Delta {\mathcal{X}}_{b} . $$
(19.29)

Additional damping can be injected into the above dynamics via a control joint acceleration [29]

$$ {\ddot{\mathbf{\theta }}} = \mathbf{M}_{bm}^{ + } \left( {\mathbf{D}_{bc} {\mathcal{V}}_{b} - C_{b} } \right) $$
(19.30)

where \( \mathbf{D}_{bc} \) denotes a matrix of additional damping gains. Substituting this control into the dynamics above shows that, in the absence of external forces (\( {\mathcal{F}}_{e} = {\mathbf{0}} \)), the following closed-loop dynamics are obtained

$$ \mathbf{M}_{b}\dot{\mathcal{V}}_{b} + \left( \mathbf{D}_{bc} + \mathbf{D}_{b} \right){\mathcal{V}}_{b} + \mathbf{K}_{b}\Delta {\mathcal{X}}_{b} = \mathbf{0}. $$
(19.31)
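
A hypothetical sketch of the damping-injection law of Eq. 19.30, using a Moore-Penrose pseudoinverse for \( \mathbf{M}_{bm}^{+} \):

```python
import numpy as np

def damping_joint_accel(M_bm, D_bc, V_b, C_b):
    """Vibration-damping joint acceleration of Eq. 19.30 (sketch).

    M_bm : 6xn base/manipulator coupling inertia
    D_bc : 6x6 additional-damping gain matrix
    V_b  : base spatial velocity (6-vector)
    C_b  : base non-linear force term (6-vector)
    """
    # M_bm^+ (D_bc V_b - C_b); the pseudoinverse handles n != 6
    return np.linalg.pinv(M_bm) @ (D_bc @ V_b - C_b)
```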

19.2.9 End-Link Impacts and Impedance Control

A task of utmost importance for orbital space robots is the retrieval of floating bodies, e.g. malfunctioning satellites or space debris. Because it is usually assumed that the target object lacks a dedicated grapple fixture, special care is needed when establishing the initial contact and when selecting the post-contact tracking control method for the robot arm, so that the target is not pushed away during the operation.

The inertial properties during the initial impact depend on the end-link contact dynamics, as described in Sect. 19.2.5. The end-link approach direction, specified via a unit vector \( \mathbf{n} \), is assumed to be known. Therefore, the inertial properties can be described in terms of a scalar: the effective mass \( m_{*} \) for an impact along \( \mathbf{n} \). This mass can be obtained from the mobility tensor \( \mathbf{M}_{*}^{-1} \) in Eq. 19.24 as follows

$$ m_{*} = \frac{\lVert \mathbf{f}_{e} \rVert}{\mathbf{n}^{T} \dot{\mathbf{v}}_{e}} = \frac{1}{\mathbf{n}^{T} \mathbf{M}_{ff}^{-1} \mathbf{n}} $$
(19.32)

where \( \mathbf{f}_{e} \) and \( \dot{\mathbf{v}}_{e} \) denote the linear parts of the spatial end-link force and acceleration, respectively, and \( \mathbf{M}_{ff}^{-1} \) is the upper-left \( 3 \times 3 \) block sub-matrix of the mobility tensor. Because the tensor is manipulator-configuration dependent for a given approach direction \( \mathbf{n} \), the effective mass can be varied by changing the configuration at impact.
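
Computationally, Eq. 19.32 is a one-line projection of the mobility tensor onto the approach direction; the sketch below (hypothetical names) could be evaluated over candidate impact configurations to select one with a low effective mass:

```python
import numpy as np

def effective_mass(M_star_inv, n):
    """Effective mass along the approach direction n (Eq. 19.32, sketch).

    M_star_inv : 6x6 mobility tensor from Eq. 19.24
    n          : unit 3-vector approach direction
    """
    M_ff_inv = M_star_inv[:3, :3]  # upper-left (linear) 3x3 block
    return 1.0 / (n @ M_ff_inv @ n)
```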

However, the effective-mass variation achievable via the manipulator configuration is limited [41]. A broader range can be achieved with the help of mechanical-impedance control, a method suggested in [42] for fixed-base manipulator end-link control during contact tasks. The end-link dynamics are thereby specified via the equation

$$ \mathbf{M}_{e} {\dot{\mathcal {V}}}_{e} + \mathbf{D}_{e} {\mathcal {V}}_{e} + \mathbf{K}_{e}{\Delta }{\mathcal{X}}_{e} = {\mathcal{F}}_{e} $$
(19.33)

where \( \mathbf{M}_{e} \), \( \mathbf{D}_{e} \) and \( \mathbf{K}_{e} \) are desired mechanical-impedance related spatial transforms for inertia, damping and stiffness, respectively. These quantities determine the end-link behavior during contact. To ensure the above end-link dynamics, the following joint control torque is applied

$$ \hat{\mathbf{\tau}} = \left( \hat{\mathbf{M}}_{m} \hat{\mathbf{J}}^{-1} \mathbf{M}_{e}^{-1} - \hat{\mathbf{J}}^{T} \right){\mathcal{F}}_{e} + \hat{\mathbf{c}}_{m} - \hat{\mathbf{M}}_{m} \hat{\mathbf{J}}^{-1} \left[ \mathbf{M}_{e}^{-1} \left( \mathbf{D}_{e} {\mathcal{V}}_{e} + \mathbf{K}_{e} \Delta {\mathcal{X}}_{e} \right) + \dot{\hat{\mathbf{J}}}\,\dot{\mathbf{\theta}} \right]. $$
(19.34)

This equation was obtained using the reduced form of the dynamics in Eq. 19.18 and the kinematic relation in Eq. 19.21. Unfortunately, the equation is quite complex, and a high control bandwidth would be required to realize the desired end-link behavior [41].
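
For illustration, Eq. 19.34 can be transcribed directly, assuming a square, non-singular Generalized Jacobian and a measured contact force; all names below are hypothetical:

```python
import numpy as np

def impedance_joint_torque(M_m_hat, J_hat, J_hat_dot, c_m_hat,
                           M_e, D_e, K_e, F_e, V_e, dX_e, theta_dot):
    """Joint torque of Eq. 19.34 realizing the end-link impedance of
    Eq. 19.33 (illustrative sketch)."""
    J_inv = np.linalg.inv(J_hat)    # square, non-singular case assumed
    M_e_inv = np.linalg.inv(M_e)
    # Force feedforward term plus the non-linear compensation
    force_term = (M_m_hat @ J_inv @ M_e_inv - J_hat.T) @ F_e + c_m_hat
    # Motion-dependent damping/stiffness term of the target impedance
    motion_term = M_m_hat @ J_inv @ (
        M_e_inv @ (D_e @ V_e + K_e @ dX_e) + J_hat_dot @ theta_dot)
    return force_term - motion_term
```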

A formulation for impedance control of multi-arm free-floating robots can be found in [43].

19.2.10 Post-Impact Control for Momentum Redistribution

The momentum transferred to the articulated body after the impact with the target may lead to a significant base translation or rotation. Rotation can be especially harmful and is highly undesirable. It is possible to employ the manipulator arm to accommodate a portion of the momentum transferred to the space robot via the impact, thus minimizing the initial post-impact base momentum [44]. The accommodated momentum can then be transferred to the base and mitigated thereafter with the assistance of a reaction or momentum wheel control subsystem. This requires proper post-impact momentum redistribution control: the underlying equations are derived as follows. Focusing on the base rotation, the system dynamics are to be rewritten using only base angular velocity quasi-coordinates. First, the translational coordinates of the base are eliminated from the momentum equation. The angular momentum with respect to the base’s CoM can be written as

$$ \mathbf{l}_{b} = \tilde{\mathbf{M}}_{\omega } {\mathbf{\omega}} + \tilde{\mathbf{M}}_{\omega m} \dot{\mathbf{\theta}} + \tilde{\mathbf{M}}_{\omega \phi } \dot{\mathbf{\phi}} $$
(19.35)

where \( \tilde{\mathbf{M}}_{\omega } = \mathbf{M}_{\omega } + m_{t} \mathbf{R}_{bc}^{ \times } \mathbf{R}_{bc}^{ \times } \) and \( \tilde{\mathbf{M}}_{\omega m} = \mathbf{M}_{\omega m} + \mathbf{R}_{bc}^{ \times } \mathbf{M}_{vm} \). These block matrices are derived from the articulated-body system inertia matrix \( \mathbf{M}_{b} = \left[ {\begin{array}{*{20}c} {\mathbf{M}_{v} } & {\mathbf{M}_{v\omega } } \\ {\mathbf{M}_{v\omega }^{T} } & {\mathbf{M}_{\omega } } \\ \end{array} } \right] \) and the coupling inertia matrix \( \mathbf{M}_{bm} = \left[ {\begin{array}{*{20}c} {\mathbf{M}_{vm}^{T} } & {\mathbf{M}_{\omega m}^{T} } \\ \end{array} } \right]^{T} \), respectively. Detailed expressions for the sub-matrices can be found in [32]. \( \tilde{\mathbf{M}}_{\omega \phi } \dot{\mathbf{\phi}} \) represents the angular momentum component due to the momentum wheels, and \( \dot{\mathbf{\phi}} \) denotes the respective quasi-coordinates. The tilde operator modifies the respective matrix in such a way that the linear motion of the base is implicitly accounted for.

Angular momentum is conserved during the post-impact phase. Hence, the manipulator control joint rates can be derived as

$$ \dot{\mathbf{\theta }} = \tilde{\mathbf{M}}_{\omega m}^{ + } \left( {\bar{\mathbf{l}}_{b} - \tilde{\mathbf{M}}_{\omega } {\mathbf{\omega}}^{d} - \tilde{\mathbf{M}}_{{\omega {{\phi}}}} {\dot{\mathbf{\phi }}}} \right) + \mathbf{P}_{{\tilde{M}_{\omega m} }} \dot{\mathbf{\theta }}_{a}^{d} $$
(19.36)

where \( \bar{\mathbf{l}}_{b} \) denotes the conserved angular momentum. The articulated-body momentum component \( \tilde{\mathbf{M}}_{\omega } {\mathbf{\omega}}^{d} \) and the reaction null space (RNS) component \( \mathbf{P}_{{\tilde{M}_{\omega m} }} \dot{\mathbf{\theta }}_{a}^{d} \) can be used to minimize the base rotation and the joint motion, respectively, using the damping controls \( {\mathbf{\omega}}^{d} = - \mathbf{K}_{\omega } {\mathbf{\omega}} \) and \( \dot{\mathbf{\theta }}_{a}^{d} = - \mathbf{K}_{\theta } \dot{\mathbf{\theta }} \), where \( \mathbf{K}_{\omega } \) and \( \mathbf{K}_{\theta } \) are damping gain matrices [44]. Other control designs are also possible, see e.g. [45].
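
A sketch of Eq. 19.36 follows, assuming the conventional null-space projector \( \mathbf{I} - \tilde{\mathbf{M}}_{\omega m}^{+} \tilde{\mathbf{M}}_{\omega m} \) for the RNS component (an assumption on our part; see [44] for the exact formulation):

```python
import numpy as np

def redistribution_joint_rates(M_wm, M_w, M_wphi, l_bar,
                               omega, phi_dot, theta_dot, K_w, K_th):
    """Post-impact joint rates of Eq. 19.36 (illustrative sketch).

    M_wm  : 3xn coupling inertia (tilde matrix of the text)
    M_w   : 3x3 base angular inertia; M_wphi : 3xw wheel coupling
    l_bar : conserved angular momentum (3-vector)
    """
    omega_d = -K_w @ omega          # base-rotation damping control
    theta_a_d = -K_th @ theta_dot   # joint-motion damping (RNS component)
    M_wm_pinv = np.linalg.pinv(M_wm)
    P = np.eye(M_wm.shape[1]) - M_wm_pinv @ M_wm  # null-space projector
    return M_wm_pinv @ (l_bar - M_w @ omega_d - M_wphi @ phi_dot) \
        + P @ theta_a_d
```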

19.3 Modeling and Control of Planetary Robots

Planetary exploration programs have pursued extensive scientific missions dedicated to understanding the geological and climatological characteristics of planetary bodies, as well as searching for extraterrestrial life such as microorganisms. A robotic probe deployed on a target body plays an important role in achieving these scientific missions; in particular, a probe with surface mobility (a rover) can get close to specific points of interest and greatly enrich the scientific return of the mission.

A fundamental requirement for a rover is the capability of traversing the rough terrain of a planetary body. It also needs to endure a harsh environment: extremely high/low temperatures and/or strong cosmic radiation. A power management scheme for the rover differs from that used for an orbiting (or interplanetary) spacecraft. This is because the power spent by the mobility system significantly varies according to the terrain conditions (sandy, rocky, or sloped terrain) in which the rover travels. The power generated by the solar array panels depends on the solar elevation angle (varied by the local time and latitude of the rover’s location) and the orbital longitude of the planetary body.Footnote 3 The rover should also employ autonomous/semi-autonomous guidance, navigation, and control (GN&C) to travel to a designated location. These technical issues for each subsystem of the planetary rover are summarized in Table 19.1.

Table 19.1 Technical requirements for rover subsystems

From a robotics point of view, this section primarily focuses on the research and development of robotic mobility and GN&C subsystems, and introduces actual applications/implementations of this technology. General descriptions for the other subsystems, including the power, telecommunications, and environmental durability, are presented in other chapters.Footnote 4

The surface mobility system of the rover is indispensable for traversing rough and deformable terrain. Vehicle-terrain interaction mechanics are therefore fundamental to the following aspects

  • Design—suspension configuration, vehicle dimensions, and actuator specifications.

  • Mobility evaluation—slope traversability, obstacle crossing, and power required for the mobility.

  • Navigation and control—localization, path planning, and traction control.

The surface terrain of the Moon or a planet such as Mars is covered with fine-grained soil (regolith), boulders, rocks, or stones. Because of such challenging terrain, the rover must be aware of mobility hazards such as rolling over on a sloped surface, immobilizing wheel slip on loose sand, and colliding with obstacles such as rocks. In particular, the Mars Exploration Rovers (MER), Spirit and Opportunity, have proven that wheel slip is a critical hindrance to exploration missions. Resolving these rover mobility issues requires well-defined mechanics for wheel-terrain interaction and an analytical approach for evaluating rover mobility performance.

The discussion of rover mobility in this section is divided into two issues: the kinematics/dynamics, and the wheel-terrain interaction mechanics. Section 19.3.1 presents the kinematics and dynamics of a planetary rover, which can be used for evaluating mobility performance on rough terrain. The wheel-terrain interaction is addressed in Sect. 19.3.2 with a brief review of wheel-terrain interaction research and an introduction to a terramechanics-based analytical model.

The latency in communication owing to the long distance between Earth and a target planet renders the real-time direct teleoperation of a rover infeasible. An operator cannot immediately maneuver the rover when it encounters an obstacle or other contingencies. In addition, the rover cannot obtain prior knowledge of the physical characteristics of an environment. Thus, it needs to consider the environment as it encounters it and make decisions by itself. The GN&C subsystem is designed for these tasks as the autonomous brain of the rover. Section 19.3.3 describes research related to the GN&C, including the sensory system for terrain mapping, localization technique, and path planning.

19.3.1 Kinematics and Dynamics of Mobile Robots

The kinematics and dynamics of a planetary rover are the primary considerations for its mobility analysis. Whereas the kinematics of indoor mobile robots on smooth, flat surfaces have been thoroughly developed [46–48], the challenge of mobility analysis for a rover lies in accounting for a rough terrain profile. The motion of the rover becomes relatively complicated because of the dynamic interaction of the wheels with deformable terrain (i.e., wheel slip). Kinematic modeling of mobile robots on rough terrain has been reported in [49–51].

There has also been extensive research regarding the dynamics of planetary rovers: a rover simulator called ROAMS used for the NASA Mars rovers [52], a dynamic simulation tool used for ExoMars [53], and multibody system simulations for rovers on deformable terrain [54, 55].

In this section, the kinematic modeling of an articulated rover on rough terrain is introduced, focusing on the inverse kinematics problem and on kinematic constraints including wheel/vehicle slip. A dynamic model for the rover is also described.

19.3.1.1 Kinematic Analysis

The kinematics of the rover are basically used for navigation and motion control to achieve appropriate maneuvers on rough terrain. Kinematics also play a significant role from the design perspective: a kinematic model may be used to evaluate the joint configuration, the link lengths (between joints), and the wheelbase or tread dimensions. In this subsection, an inverse kinematics problem is introduced that can be used to evaluate the kinematic validity and static stability of the rover on rough terrain. A six-wheeled rover with a rocker-bogie suspension [56] is assumed for the kinematic analysis; this configuration was used for the Sojourner, MER (Spirit and Opportunity), and Curiosity rovers [57, 58]. This subsection also addresses a kinematic constraint model for a four-wheeled rover experiencing wheel/vehicle slip. This model can be used to derive the steering maneuver that achieves the desired motion control.

As seen in Fig. 19.11, assuming rover position \( \mathbf{p}_{c} \) and heading \( \varPsi \) with respect to a terrain given as a height map \( z(x, y) \), the kinematic loop closure equations can be written as follows [59]

$$ \begin{aligned} z_{rr} & = z_{lr} + l_{1} \cos \varTheta (\sin \theta_{1r} - \sin \theta_{1l} ) + w\sin \varTheta \\ z_{rr} & = z_{lm} + \cos \varTheta (l_{1} \sin \theta_{1r} - l_{2} \sin \theta_{1l} - l_{3} \sin \theta_{2l} ) + w\sin \varTheta \\ z_{rr} & = z_{lf} + \cos \varTheta (l_{1} \sin \theta_{1r} - l_{2} \sin \theta_{1l} - l_{4} \sin \theta_{2l} ) + w\sin \varTheta \\ z_{rr} & = z_{rm} + \cos \varTheta (l_{1} \sin \theta_{1r} - l_{2} \sin \theta_{1l} - l_{3} \sin \theta_{2r} ) \\ z_{rr} & = z_{rf} + \cos \varTheta (l_{1} \sin \theta_{1r} - l_{2} \sin \theta_{1l} - l_{4} \sin \theta_{2r} ) \\ \end{aligned} $$
(19.37)

where \( z_{ij} (i = \{ r, l\} ,\,j = \{ r, m, f\} ) \) refers to the \( z \) component of \( \mathbf{p}_{ij} \), with index \( i \) referring to the right and left side, and index \( j \) referring to the rear, middle, and front wheels.

Fig. 19.11

Kinematic description of six-wheeled rover with a rocker-bogie suspension

Inputs for this equation are a terrain elevation map, the position \( \mathbf{p}_{c} \) of the rover center, and the rover heading. These inputs reduce the number of unknown parameters that must be determined by solving the equations. The inverse kinematics problem with multiple contact points on the terrain requires the simultaneous solution of multiple nonlinear equations. Newton's method can be applied to solve such equations.
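
The sketch below shows a generic Newton iteration with a finite-difference Jacobian, of the kind that could solve Eq. 19.37 once the five residuals are packed into a vector-valued function; the parameterization of the unknowns is model-specific and is left as an assumption:

```python
import numpy as np

def newton_solve(residual, x0, tol=1e-10, max_iter=50, eps=1e-7):
    """Newton's method with a finite-difference Jacobian (sketch)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        # Build the Jacobian one column at a time by forward differences.
        J = np.column_stack([(residual(x + eps * e) - r) / eps
                             for e in np.eye(len(x))])
        x = x - np.linalg.solve(J, r)
    return x
```

Here residual(x) would return the left-minus-right sides of the five loop-closure equations for a stacked vector of the unknown suspension angles and contact heights, with a nominal configuration as the initial guess x0.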

The kinematics of the rover are also used for motion control, such as the steering maneuvers needed to follow a specified traveling path. As mentioned in Sect. 19.3.1, wheel/vehicle slip is a critical issue for the rover; therefore, the kinematic model for motion control should include such effects. The rest of this subsection describes a kinematic model with wheel/vehicle slip [60].

A 2D kinematic model of a four-wheeled vehicle, which includes the slip angle of the vehicle \( \beta_{0} \) and the lateral wheel slippage \( \beta_{i} \), is shown in Fig. 19.12. In this model, each wheel has a steering angle \( \delta_{i} \), where the subscript \( i \) denotes the wheel ID (\( i = 1, \ldots ,4 \) in this case). The position and orientation of the centroid of the vehicle are defined as (\( x_{0} \), \( y_{0} \), \( \theta_{0} \)), and (\( x_{i} \), \( y_{i} \)) gives the position of each wheel. The dimensions of the rover are defined by \( l_{f} \), \( l_{r} \), \( d_{R} \), and \( d_{L} \). For this model, the following assumptions are made: (1) the distance between wheels is constant, (2) the steering axle of each wheel is perpendicular to the terrain surface, and (3) the vehicle does not contain any flexible parts.

Fig. 19.12

Kinematic model of four-wheeled rover with wheel/vehicle slips

The non-holonomic constraints with the lateral slips of the wheel and vehicle are defined by the following equations

$$ \begin{array}{*{20}l} {\dot{x}_{0} \sin \phi_{0} - \dot{y}_{0} \cos \phi_{0} = 0} \\ {\dot{x}_{i} \sin \phi_{i} - \dot{y}_{i} \cos \phi_{i} = 0} \\ \end{array} $$
(19.38)

where \( \phi_{0} = \theta_{0} + \beta_{0} \), and \( \phi_{i} = \theta_{0} + \delta_{i} + \beta_{i} \). The geometric constraints between the centroid of the vehicle and each wheel are written as

$$ \left. {\begin{array}{*{20}c} {x_{1} = x_{0} + l_{f} \cos \theta_{0} - d_{L} \sin \theta_{0} } \\ {x_{2} = x_{0} - l_{r} \cos \theta_{0} - d_{L} \sin \theta_{0} } \\ {x_{3} = x_{0} - l_{r} \cos \theta_{0} + d_{R} \sin \theta_{0} } \\ {x_{4} = x_{0} + l_{f} \cos \theta_{0} + d_{R} \sin \theta_{0} } \end{array} } \right\} \to x_{i} = x_{0} + X_{i} $$
(19.39)
$$ \left. {\begin{array}{*{20}c} {y_{1} = y_{0} + l_{f} \sin \theta_{0} + d_{L} \cos \theta_{0} } \\ {y_{2} = y_{0} - l_{r} \sin \theta_{0} + d_{L} \cos \theta_{0} } \\ {y_{3} = y_{0} - l_{r} \sin \theta_{0} - d_{R} \cos \theta_{0} } \\ {y_{4} = y_{0} + l_{f} \sin \theta_{0} - d_{R} \cos \theta_{0} } \end{array} } \right\} \to y_{i} = y_{0} + Y_{i} . $$
(19.40)

Given the desired heading angle \( \theta_{0} = \theta_{d} \) and desired linear velocity \( v_{d} \), the desired steering maneuver (i.e. the steering angle \( \delta_{i} \)) is derived as follows: first, transform Eq. 19.38 into

$$ \delta_{di} = \mathop {\tan }\nolimits^{ - 1} \left( {\dot{y}_{i} /\dot{x}_{i} } \right) - \theta_{d} - \beta_{i} $$
(19.41)

and then, substitute Eqs. 19.39 and 19.40 into Eq. 19.41. The desired steering angle is determined as follows

$$ \delta_{di} = \mathop {\tan }\nolimits^{ - 1} \left( {\frac{{v_{d} \sin \theta_{d} - \dot{Y}_{i} (\dot{\theta }_{d} )}}{{v_{d} \cos \theta_{d} - \dot{X}_{i} (\dot{\theta }_{d} )}}} \right) - \theta_{d} - \beta_{i} . $$
(19.42)

The desired velocity \( v_{d} \) and heading angle are derived from a path-following control strategy such as the pure-pursuit algorithm [61], or path-following control with slip compensation [60, 62].
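
A sketch of Eq. 19.42 follows; note that atan2 is used in place of the plain arctangent to keep the correct quadrant, and the slip angle \( \beta_{i} \) is assumed to be estimated elsewhere:

```python
import numpy as np

def desired_steering_angle(v_d, theta_d, X_i_dot, Y_i_dot, beta_i):
    """Desired steering angle of wheel i (Eq. 19.42, sketch).

    X_i_dot, Y_i_dot : time derivatives of the geometric offsets in
                       Eqs. 19.39-19.40, evaluated at the desired
                       heading rate
    beta_i           : estimated lateral slip angle of wheel i
    """
    num = v_d * np.sin(theta_d) - Y_i_dot
    den = v_d * np.cos(theta_d) - X_i_dot
    return np.arctan2(num, den) - theta_d - beta_i
```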

19.3.1.2 Dynamic Analysis

The motion profile of the entire rover can be numerically evaluated using a dynamic model. Despite the slow traveling velocity of a rover,Footnote 5 its motion often exhibits dynamic behavior because of rough terrain such as bumpy, sloped, or rocky surfaces. A schematic illustration of the dynamic model of a six-wheeled rover with a rocker-bogie suspension is shown in Fig. 19.13. The dynamics of the rover are modeled as an articulated multibody system as follows [63]

$$ \mathbf{H}\left[ {\begin{array}{*{20}c} {\dot{\mathcal{V}}}_{b} \\ {\ddot{\mathbf{q}}} \\ \end{array} } \right] + \mathbf{C} + \mathbf{G} = \left[ {\begin{array}{*{20}c} {\mathcal{F}}_{b} \\ {\mathbf{\tau}} \\ \end{array} } \right] + \mathbf{J}^{T} {\mathcal{F}}_{e} $$
(19.43)

where \( \mathbf{H} \) represents the inertia matrix of the multibody system, \( \mathbf{C} \) is the velocity-dependent term, \( \mathbf{G} \) is the gravity term, \( {\mathcal{V}}_{b} \) comprises the translational and angular velocities of the vehicle body, \( \mathbf{q} \) is the vector of joint angles (such as wheel rotation and steering angles), \( {\mathcal{F}}_{b} \) comprises the forces and moments at the centroid of the vehicle body, \( {\mathbf{\tau}} \) are the torques acting at each joint (driving/steering torques), \( \mathbf{J} \) is the Jacobian matrix, and \( {\mathcal{F}}_{e} \) consists of the external forces and moments acting at the centroid of each wheel, namely \( f_{ij} (i = \{ r, l\} , j = \{ r, m, f\} ) \). The external (contact) forces and torques on each wheel can be calculated based on a wheel-terrain contact model, as described in the next section. The dynamics of the rover for given traveling and steering conditions are numerically obtained by successively solving Eq. 19.43.

Fig. 19.13

Rover dynamics model

19.3.2 Wheel-Terrain Interaction Mechanics

The study of the mechanical properties of terrain and of the terrain's response to an off-road vehicle belongs to the field of terramechanics,Footnote 6 in which the analysis of the interaction between a wheel/track and soil has been the primary focus.

In classical terramechanics, Bekker, an originator of the field, derived the well-known pressure-sinkage equation and also formulated the shear stress as a function of soil deformation (displacement) [65, 66]. His work greatly contributed to the design and development of the Lunar Roving Vehicle used on the Apollo 15–17 missions to the Moon. Wong developed a comprehensive procedure for predicting the performance of both driven and towed wheels [67–69]. The procedure calculates the wheel mechanics by applying a stress distribution model beneath the wheel.

Terramechanics can be divided into three methods [70, 71]: (1) an analytical method, (2) an empirical method, and (3) a numerical method.

The analytical method considers a physical model for vehicle-terrain interactions based on a theoretical analysis, with experimental results for model validation. The empirical method uses a practical measurement of soil strength with a specialized apparatus, such as the cone index (CI) [67], which is often used for an in situ prediction of vehicle traversability. The numerical method includes the finite-element and discrete-element methods, which simulate soil deformation and vehicle-terrain interaction behavior computationally [72–74].

The wheel-terrain model can be used for the design of rover mobility systems: the terramechanics model supports a feasible wheel/track design because it makes it possible to maximize traction performance for off-road locomotion under specific constraints [75, 76]. Additionally, the mobility performance of the rover (i.e., its traversability on sloped or deformable terrain) can be numerically/experimentally analyzed based on the wheel model [77, 78]. This mobility prediction and evaluation technique is also valuable for mobility system design [79], as well as for actual rover operation when determining rover maneuvers. Some recent works have reported dynamic simulation tools combined with the terramechanics wheel model (e.g., the NASA Mars rovers [52, 80] and ExoMars [53, 55]).

This section focuses on the analytical method and introduces a typical interaction model of a rigid wheel on deformable terrain.

19.3.2.1 Terramechanics-Based Wheel-Terrain Model: Analytical Method

In the analytical method, the basic principle of a wheel traction model is to consider the stress distribution over the wheel-terrain contact region, which usually depends on wheel slip. Integrating the stress over the contact region yields the wheel traction forces, such as the drawbar pull, side force, and resistance torque.

A contact model for a rigid wheel on deformable terrain is schematically shown in Fig. 19.14. A classical terramechanics model defines the wheel-terrain contact forces, including the drawbar pull \( F_{x} \), vertical force \( F_{z} \), and resistance torque \( T \), as the following equations [68]

$$ F_{x} = rb\int\limits_{{\theta_{r} }}^{{\theta_{f} }} {\left\{ {\tau_{x} (\theta )\cos \theta - \sigma (\theta )\sin \theta } \right\}} d\theta $$
(19.44)
$$ F_{z} = rb\int\limits_{{\theta_{r} }}^{{\theta_{f} }} {\left\{ {\tau_{x} (\theta )\sin \theta + \sigma (\theta )\cos \theta } \right\}} d\theta $$
(19.45)
$$ T_{x} = r^{2} b\int\limits_{{\theta_{r} }}^{{\theta_{f} }} {\tau_{x} } (\theta )d\theta $$
(19.46)

where \( b \) represents the wheel width, \( \sigma (\theta ) \) is the normal stress beneath the wheel, and \( \tau_{x} (\theta ) \) are the shear stresses in the longitudinal direction of the wheel. The contact point of the wheel is determined by the entry angle \( \theta_{f} \) and the exit angle \( \theta_{r} \).

Fig. 19.14

Wheel-terrain contact model

The side force (i.e., the force in the lateral direction) of the wheel appears when the wheel steers or traverses sloped terrain. The side force \( F_{y} \) can be modeled as the summation of two forces generated at the wheel: the force \( F_{u} \) attributable to the shearing motion beneath the wheel and the force \( F_{s} \) generated by the bulldozing motion on the side face of the wheel [63]

$$ F_{y} = F_{u} + F_{s} = \int\limits_{{\theta_{r} }}^{{\theta_{f} }} {rb\tau_{y} (\theta )\,d\theta } + \int\limits_{{\theta_{r} }}^{{\theta_{f} }} {R_{b} \left\{ {r - z(\theta )\cos \theta } \right\}\,d\theta } $$
(19.47)

where \( \tau_{y} (\theta ) \) are the shear stresses in the lateral direction of the wheel and \( R_{b} \) is modeled as a reaction resistance generated by the bulldozing phenomenon on a side wall of the wheel. \( R_{b} \) is a function of the wheel sinkage \( z \).

In these equations, the normal stress \( \sigma (\theta ) \) and the shear stresses \( \tau_{x} (\theta ) \) and \( \tau_{y} (\theta ) \) are defined as functions of the soil parameters, the wheel contact angle, and the wheel dimensions. Details of the stress models can be found in other research [63, 68, 69, 81, 82].
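
Once stress models \( \sigma(\theta) \) and \( \tau_{x}(\theta) \) are chosen, Eqs. 19.44-19.46 reduce to one-dimensional integrals over the contact angle. A hypothetical numerical sketch:

```python
import numpy as np

def wheel_forces(r, b, theta_r, theta_f, sigma, tau_x, n=200):
    """Drawbar pull F_x, vertical force F_z and resistance torque T of
    Eqs. 19.44-19.46 by trapezoidal integration (illustrative sketch).

    sigma, tau_x : callables returning normal / longitudinal shear
                   stress at a contact angle theta (model-specific,
                   see e.g. [68])
    """
    th = np.linspace(theta_r, theta_f, n)
    s, t = sigma(th), tau_x(th)
    F_x = r * b * np.trapz(t * np.cos(th) - s * np.sin(th), th)
    F_z = r * b * np.trapz(t * np.sin(th) + s * np.cos(th), th)
    T = r ** 2 * b * np.trapz(t, th)
    return F_x, F_z, T
```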

19.3.2.2 Experimental Validation

The wheel traction model described above needs to be validated through multiple experimental tests with varied parameters such as soil conditions or wheel traveling profiles. A single-wheel test bed (Fig. 19.15) is commonly used for model validation. The test bed primarily consists of a carriage section and a wheel section. The carriage velocity is controlled relative to the wheel velocity, which realizes a given wheel slip (or traction load), while wheel traction forces, wheel sinkage, and other parameters are measured. The experimental data are then compared with the values obtained from the numerical simulation of the wheel traction model.

Fig. 19.15

Single-wheel test beds for experimental validation of terramechanics models. a Single-wheel test bed at MIT [83]. b Single-track test bed at JAXA [75]. c Single-wheel test bed at DLR [76]. d Single-wheel test bed at Tohoku University [63]

The primary focus of the classical terramechanics models has been the application to large, heavy vehicles (i.e., vehicles weighing hundreds or thousands of kilograms). Therefore, when exploiting the classical models for analyzing lunar/planetary rover test beds (usually small and lightweight), several assumptions of the classical models no longer hold,Footnote 7 which may cause inaccurate calculation of wheel traction performance.Footnote 8, Footnote 9 Some researchers have treated the errors attributable to these violated assumptions as modeling errors or as uncertainty in the parameters used for the calculation. Recently, several approaches that update/improve the classical terramechanics models have been successfully applied to relatively lightweight vehicles. For example, a direct measurement device for the normal stress distribution has been reported [84], a wheel-diameter-dependent pressure-sinkage model has been proposed [85], and an improved approach for the calculation of the shear deformation modulus has been studied [86].

19.3.2.3 Soil Parameter Identification and its Uncertainty Analysis

The wheel-terrain interaction model described in the previous section assumes that the physical properties of the soil are known. These properties must be measured in situ by on-board robotic sensor systems [87], but their values vary stochastically with location, resulting in large uncertainties.

Several researchers have addressed soil parameter identification: for example, an online terrain parameter estimator that uses a linear least-squares method to compute the values of cohesion and internal friction angle with simplified classical terramechanics equations [88], and the application of the Newton–Raphson method to a modified nonlinear wheel-terrain interaction model, which can identify unknown parameters such as the pressure-sinkage coefficient, internal friction angle, and shear deformation modulus [89].
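
As a much-simplified stand-in for the estimator of [88], the cohesion and internal friction angle can be fitted by linear least squares from paired normal/shear stress samples via the Mohr-Coulomb relation \( \tau = c + \sigma \tan\phi \); the sketch below is illustrative only:

```python
import numpy as np

def estimate_soil_params(sigma_samples, tau_samples):
    """Least-squares fit of cohesion c and friction angle phi from
    paired normal/shear stress samples (Mohr-Coulomb, sketch)."""
    sigma = np.asarray(sigma_samples, dtype=float)
    tau = np.asarray(tau_samples, dtype=float)
    # Regress tau on [1, sigma]; the intercept is c, the slope tan(phi).
    A = np.column_stack([np.ones_like(sigma), sigma])
    (c, tan_phi), *_ = np.linalg.lstsq(A, tau, rcond=None)
    return c, np.arctan(tan_phi)
```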

The parameters identified by these approaches remain subject to uncertainty. Some recent works have attempted to predict rover mobility even under such uncertain conditions: for example, a learning-based approach for slip prediction used in the traversability analysis of a rover [90], and a statistical method for mobility prediction that explicitly considers terrain uncertainty and achieves a computationally efficient prediction of rover dynamics [91].

19.3.3 Guidance, Navigation, and Control

Planetary rovers need to traverse the surface of a target body with little knowledge of the terrain, such as the physical properties of the soil or the geometrical features of the terrain. Space probes and orbiters around the target body may be able to provide a global terrain map with relatively good accuracy.Footnote 10 The terrain map available from the orbiter is often useful for determining a ‘global’ destination; however, it is not feasible to refer to the map in real-time while the rover travels through intermediate waypoints toward the global destination. Therefore, the rover is required to perceive the local terrain environment and to plan a feasible path to traverse rough terrain. This section introduces the research and development dedicated to terrain mapping, rover localization, and path planning; these are key techniques for the GN&C systems of the rover.

19.3.3.1 Terrain Mapping

Once a rover is deployed on a planetary body, it must first measure terrain features (terrain mapping). 3D information from the terrain map can be exploited to assess obstacle size, slope angle, or terrain roughness so that the rover can plan the path to travel on the map. In addition, an augmented map of the terrain environment can be generated from consecutive maps.

Stereo vision (i.e., visual information taken by a stereoscopic camera mounted on the rover) is a typical technique for obtaining a 3D terrain map [92, 96]. An example of the stereo vision results from one of the MERs (Spirit) is shown in Fig. 19.16 [94]. The bottom image of the figure shows an elevation plot of the scene taken from the stereo cameras.

Fig. 19.16

MER (Spirit) hazcams stereo imagery results (from [94])

Progress in radiation-hardened flight CPUs for space probes over the last few decades has accelerated on-board stereo vision processing, but stereo camera-based terrain mapping is still a time-consuming task for the low-power CPU of a rover because the stereo images must be correlated with one another by stereo matching [94]. Also, the visual information provided by the camera varies with the intensity of sunlight.

Another technique for terrain mapping is the use of a laser range-finder (LRF) or laser imaging detection and ranging (LIDAR) sensor, which determines the distance from a laser emitter to an object based on the time-of-flight principle. There has been extensive research and development in which the LIDAR technique was used in robotics for sensing the environment and classifying the terrain [97, 98]. In particular, the Defense Advanced Research Projects Agency (DARPA) Grand Challenge and Urban Challenge programs accelerated the development of LIDAR and its implementation on robotic mobile vehicles [99, 100]. Figure 19.17 presents an example of LIDAR-based terrain mapping.

Fig. 19.17

LIDAR-based terrain mapping result (from [126])

Although a space-hardened LIDAR was used for the rendezvous and docking of the Space Shuttle with the International Space Station [101, 102], as of 2012 no actual rover has been equipped with LIDAR. Several research and development efforts have been reported that introduce LIDAR techniques and applications for rovers [103–105].

The LRF can measure 3D distances from the sensor to objects, providing a 'point cloud' of the scene without additional processing (cf. camera-based mapping, which needs stereo matching for 3D mapping). A drawback of the LIDAR sensor is that its scanning mechanism, including the actuators and their movable parts, may be less durable under launch vibrations and/or landing shocks. Alternatively, a solid-state LIDAR sensor, the 3D flash LIDAR imaging system, is being developed; it can capture the 3D depth and intensity of a scene in real time. The flash LIDAR consists of CMOS-based avalanche photodiode detectors, each pixel of which measures the range and intensity of the laser light returned from the scene. The flash LIDAR therefore acts like a 2D image-plus-depth camera and achieves relatively fast capture of the terrain without any movable parts or actuators.

19.3.3.2 Localization

A rover needs to measure and update its position and orientation as it travels on the map obtained. Accurate measurement of position and orientation is challenging because globally aided navigation schemes, such as the Global Positioning System (GPS) or a heading reference relative to a global magnetic field, are not available on planetary bodies.

Internal state sensors such as an inertial measurement unit (IMU) and wheel encoders are often used to estimate the position/pose of the rover by dead reckoning. A sophisticated estimation method such as a Kalman filter may be applied to reduce measurement noise. A pose estimation method using stereo imagery, with learning from previous examples of traversing similar terrain, has been proposed [106, 107]. The MERs have exploited Sun sensing with their cameras for occasional heading updates [108].

Odometry using wheel encoders is a traditional approach to measuring the distance traveled; however, it may not be reliable if the rover travels on loose, sandy terrain in which the wheels slip, resulting in incorrect calculations of the distance traveled from the wheel rotations. The errors accumulate over time and degrade the accuracy of the position estimate. To resolve this drawback, image-based odometry, termed visual odometry, has been widely applied to planetary rovers [108–110]. Visual odometry estimates the traveling velocity of the vehicle using the optical flow vectors between time-consecutive images taken by on-board camera(s). Integrating the velocity estimates with IMU readouts or stereo images for pose estimation provides an accurate estimate of the six degrees of freedom of the rover's motion. The visual odometry system of the MER was used for more than 14 %Footnote 11 of its first 10.7 km of travel [110].

For the MERs, a bundle adjustment technique was implemented to update and correct the rover localization. The technique uses stereo image pairs and manually selected tie-points on the images to create a geometric configuration of the images. The images accumulated day by day propagate through the entire image network and determine the global position of the rover on the map [108].

19.3.3.3 Path Planning

The latency in communicating between Earth and the rover on a planetary body often impedes direct teleoperation; therefore, the rover must possess a high degree of autonomous mobility for traversing unknown rough terrain. One primary task for such autonomy is to find a feasible path on the map generated by the on-board sensors and to avoid mobility hazards.

Substantial work has been dedicated to the path/motion planning of mobile robots, such as the A* and D* methods [111], the potential field approach [112], the probabilistic roadmap technique [113], and the rapidly exploring random tree (RRT) algorithm [114]. Randomized approaches to kinodynamic motion planning [115] have been reported to be efficient tools for path generation, with RRTs proving to be a highly effective framework. A heuristically biased expansion for generating efficient paths that satisfy dynamic constraints has also been developed [116]. Explicit modeling of a robot's closed-loop controller in the planning method, which results in trackable paths, has also been studied [117].

Accounting for robotic mobility in path planning is important under field conditions, in which terrain inclination, roughness, and mechanical properties can significantly degrade a rover's mobility. Path generation techniques that consider robotic mobility have therefore been investigated. For example, a trajectory generation method for rough terrain that accounts for predictable vehicle dynamics has been proposed [118]. A planning algorithm with model-based evaluations, which include the uncertainties of terrain measurement and rover localization, has been developed [59]. In addition, a terrain traversability index based on fuzzy logic for mobile robot navigation has been introduced [119], and its terrain traversability map has been used for the path planning of planetary rovers [120]. An explicit consideration of the dynamic mobility of a rover in path planning, with an energy-based evaluation of candidate paths, has also been proposed [121], see Fig. 19.18.

Fig. 19.18

Path planning and evaluation simulation (from [121])

The MERs perform autonomous navigation with hazard avoidance based on a local path planner called GESTALT (grid-based estimation of surface traversability applied to local terrain, see Fig. 19.19) [93, 122]. The local terrain map created from the on-board stereo camera pair is a grid-based map, with each grid cell containing a goodness value indicating the terrain traversability. Several candidate trajectories, including forward and backward arcs and two-point turns, are then evaluated. The trajectory with the best goodness value is chosen, and the rover executes it over a predetermined distance. The flight software of the MER has been upgraded to manage conflicting votes between hazard avoidance and waypoint selection, achieving simultaneous local and global path planning with the Field D* algorithm [123–125].
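
In the spirit of GESTALT's voting scheme (and not the actual flight software), candidate arcs can be scored against a goodness grid as in the following hypothetical sketch:

```python
import numpy as np

def best_arc(goodness, arcs, cell_size):
    """Pick the candidate arc with the highest summed goodness value
    (illustrative sketch; arcs are assumed to lie inside the map).

    goodness : 2D grid of per-cell traversability scores
    arcs     : list of candidate trajectories, each a (k, 2) array of
               (x, y) points in map coordinates
    """
    def score(arc):
        idx = np.floor(arc / cell_size).astype(int)
        return goodness[idx[:, 1], idx[:, 0]].sum()  # row = y, col = x
    return max(arcs, key=score)
```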

Fig. 19.19
figure 19

Illustration of terrain assessment and path selection. Red cells indicate unsafe areas around the large rock, yellow cells indicate traversable but rougher areas around the smaller rock, and green cells indicate safe and flat areas (from [122])

19.4 Telerobotics

Telerobotics is a technology developed for the remote control of space robots. Its primary purpose is to handle the communication time delays that occur during teleoperation from the ground of a robot in orbit or on the Moon. A communication time delay of 4–7 s usually occurs in such teleoperation; this is the inherent time lag of the communications equipment used for transmitting telemetry data. In teleoperation between the Earth and Mars, for example, the time delay is as much as several minutes, depending largely on the distance.Footnote 12 This forces the operator to adopt a move-and-wait strategy in executing remote tasks: the operator has to await and check the response to each command sent. Accordingly, an extended communication time delay reduces efficiency and increases waiting time [127].

Meanwhile, a hierarchical structure can be seen in the tasks shown in Fig. 19.20. A higher-level (complex) task comprises multiple lower-level (simpler) tasks, and this pyramid structure relates to the level of autonomy: upper-level tasks require a higher level of autonomy. From the perspective of the operator-robot relationship, higher-level commands reduce the command frequency, and consequently the checking frequency and the overall waiting time. Accordingly, higher-level autonomy eases the adverse effects of the communication time delay. This is the basic concept of telerobotics and a standard framework for space teleoperation.

Fig. 19.20

Task level and command level

Conversely, direct teleoperation by means of a joystick is a typical example of the use of lower-level commands. Generally, such systems are significantly affected by the communication time delay. Nevertheless, joystick systems remain one of the key framework elements of teleoperation, including space teleoperation, since short-distance teleoperation from a cabin is not subject to serious communication time delays. Direct teleoperation is thus part of telerobotics.

19.4.1 Direct Teleoperation

Direct teleoperation utilizes continuous low-level commands, e.g. the position or velocity of the end-effector, and includes two control approaches. In unilateral control, the operator commands the position or velocity of the end-effector, but the motions of the remote robot are not signaled back to the operator except as visual information. The joystick is the most popular input device for unilateral control. Meanwhile, a master–slave manipulator system is utilized for bilateral control. The master arm is an input device, while the slave arm is the remote manipulator. The master arm can display both the motion and the force of the slave arm. The force information is very useful for undertaking skillful tasks. However, bilateral control is not part of mainstream space teleoperation, because it is significantly affected by the communication time delay. To date, only a few advanced experiments on bilateral control between the ground and robots in orbit have been performed.

19.4.2 Unilateral Control

Rate control is the most popular approach to unilateral control when teleoperating a robot with a joystick. The SRMS (Shuttle Remote Manipulator System) also employs rate control with joysticks in the cabin. In the SRMS, two joysticks, the Translational Hand Controller (THC) and the Rotational Hand Controller (RHC), are used for translational and rotational motions, respectively, as shown in Fig. 19.21. A 6-axis joystick, e.g. the SpaceMouse by 3Dconnexion, is also available on the ground, but the combination of THC and RHC has become the standard input device for space applications due to its long history. Astronauts in particular prefer this combination, because they have trained extensively with these devices. Both the JEMRMS (Japanese Experiment Module Remote Manipulator System) and Canadarm2 also employ these two joysticks.

Fig. 19.21

Astronaut Leroy Chiao, expedition 10 commander and NASA ISS science officer, works with the controls of the Canadarm2, or space station remote manipulator system (SSRMS) in the Destiny laboratory of the International Space Station (18 October 2004). Image NASA

As noted earlier, communication time delay is a critical issue in space teleoperation from the ground. The predictive display was introduced in [128] to address this. A predictive display indicates the future position of the manipulator by computer graphics, whereupon the operator can teleoperate the remote manipulator as if there were no time delay. Accordingly, the predictive display improves operational efficiency even for low-level commands, as the operator can send them continuously instead of command-by-command. This reduces the required checking frequency and mitigates the adverse effects of the communication time delay.

There have been very few attempts at direct teleoperation from the ground involving real space robots in orbit. ROTEX (Robot Technology Experiment), developed by DLR, achieved the first direct teleoperation from the ground [129]. A 6-axis Space Ball was employed as the input device, and a precise simulator on a ground-based workstation predicted the robot motion and the environment to compensate for the communication time delay. The simulator included both geometrical and dynamic models and could predict the motions of a floating object. The ETS-VII (Engineering Test Satellite No. 7), developed by NASDA (currently JAXA), also achieved direct teleoperation from the ground with joysticks and rate control [130].

In practice, rate commands are integrated on the ground, and the results are sent in the form of positional commands to the remote robot in orbit; this protects the robot's motion if the communications link is broken.
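
A minimal sketch of this ground-side practice, with hypothetical names: joystick rates are accumulated into a position command, and the last integrated value is simply what the robot holds if the link drops:

```python
import numpy as np

class RateCommandIntegrator:
    """Ground-side integration of rate commands into position
    commands (illustrative sketch)."""

    def __init__(self, x0):
        self.x = np.asarray(x0, dtype=float)

    def step(self, rate_cmd, dt):
        # Integrate the joystick rate command over one control period.
        self.x = self.x + np.asarray(rate_cmd, dtype=float) * dt
        return self.x.copy()  # positional command sent to the robot
```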

19.4.3 Bilateral Control

Bilateral control is achieved by a master–slave manipulator system. Initially, master and slave arms with the same structure and DOF were employed. Currently, however, a master arm with a different structure is often used, because the motion of the end-effector is the critical issue. It should be noted that if the slave arm has redundant motion, an additional approach is required to operate the redundant joints from a structurally different master arm. Through the master–slave manipulator, the operator can sense both the motion and the force at the remote site. Although the slave arm exerts the force commanded by the operator, the communication time delay can make bilateral control impossible. In response, [131] introduced a scattering transformation approach that ensures system stability. However, the master arm must then have a heavier operational feeling to ensure stability, given the extended communication time delay. In practice, the acceptable limit for communication time delays is less than one second, which means that bilateral control cannot be used for Earth-based teleoperation of robots in orbit, which entails a communication time delay of several seconds.

A few attempts at master–slave control of a real orbital robot have been made. ETS-VII carried out experiments with a master arm [132], in which bilateral control was achieved locally by means of a virtual model on the ground. The reference position, based on the reference force exerted by the operator, was sent to the slave arm, which executed the reference force by compliance control. The remote environment must be known in such a process. Furthermore, real bilateral control in a large loop that included the ground and the orbit was also executed on ETS-VII [133]. The operator could feel the remote force with a communication time delay of almost 7 s, but it was difficult to apply the approach to practical tasks, as mentioned above. Meanwhile, ROKVISS (RObotics Component Verification on ISS), developed by DLR, also achieved bilateral control [134]. In this project, a round-trip delay of 10–20 ms was achieved, because the operator site on the ground was directly connected to the ISS, making reasonable bilateral control possible.

19.4.4 Supervisory Control

Supervisory control is a concept proposed by Sheridan that includes not only telerobotics but also various semi-autonomous systems [135, 136]. The term 'supervisory control' has a longer history than 'telerobotics' and establishes a framework for the relationship between humans and semi-autonomous systems. Basically, humans issue higher-level commands and monitor the results as supervisors, while semi-autonomous systems execute the commands as subordinates. Similar relationships can be found not only in space robots but also in various other systems. Figure 19.22 shows a typical example of supervisory control in a space robot system. The robot achieves semi-autonomous functions with local loops based on various sensors. At the control site, the operator sends commands via a computer-assisted Human Interface (HI). The Human Interactive Computer (HIC) includes a model of the remote environment and an expert advisory system based on prior information. The HIC also interacts with the operator through sensors and actuators. Autonomy on the HI side is therefore also important. In ROTEX, a multisensory gripper incorporating various sensors was a key technology for achieving good performance. Intelligent sensory feedback capabilities compensate for errors that the predictive graphic simulator cannot handle.

Fig. 19.22

An example of supervisory control

19.4.5 Relationship Between Humans and Systems

Ensuring a reasonable relationship between humans and systems depends on both the application and the current level of technology. The first question that must be asked is whether humans should always maintain positions superior to systems. Supervisory control clearly depicts humans acting as supervisors and making the final decisions. Shared control and traded control, however, are different frameworks that afford flatter relationships. Humans perform the tasks to which they are best suited, and robots do likewise, together accomplishing difficult tasks that could not be achieved without such assistance. In shared control, a task is simultaneously shared between a human and a robot. For example, the human controls the trajectory of the end-effector in grasping a glass full of water, while the robot keeps the water from spilling. Task sharing is the key feature of shared control. Conversely, in traded control, humans and robots work in turn, which means the tasks are divided by time. For example, a human first decides on the path plan, whereupon the robot checks for possible collisions with obstacles. Alternation timing is a major aspect of traded control.

The relationship between humans and robots is a subject of debate not only in space robotics, but also in the fields of human factors (in the U.S.) and ergonomics (in Europe). Human factors research started with the analysis of airplane accidents that occurred during World War II. Currently, both terms are used with the same meaning. These fields have demonstrated their value in enhancing safety.

In Germany, the 30-min rule for nuclear power plants is well known. In an emergency, the system should handle all trouble during the first 30 min. In other words, the human operator should not intervene in the operation during this period, but should instead gather information and prepare the best solution. This protects against human errors caused by panic and is made possible by the slow dynamics of nuclear power plants. It is noteworthy that during the first 30 min the system adopts a stance superordinate to that of the humans.

Conversely, in shared control, there is the potential for the actions of humans to conflict with those of robots. The operator should recognize what is happening in the system; otherwise, a serious accident may occur. Regardless of the circumstances, the relationship between humans and systems should be designed to avoid human errors. In space robotics, serious failure is unacceptable due to the cost involved, while the safety of astronauts is paramount. The scope of activities in space is expanding to include work in orbit, on the Moon, on Mars, and beyond. More critical work will be necessary, which will require the establishment of a proper relationship between humans and systems.

19.4.6 Human Interface

The operator teleoperates a remote robot via a human interface, which should be intuitive and easily understandable. A wire-frame graphic model may be superimposed on a real video image to provide a predictive display, as in [137]. Conversely, in [138] the real video image is mapped into the 3D graphic model as texture to aid understanding of the camera posture. It is also important to display otherwise invisible information, for which a multi-modal interface, including voice, is a key technology. For Robonaut-2, a novel interface was developed in which the motions of the operator are captured by a motion tracking system and a head-mounted display is employed to enhance presence; see Fig. 19.23. Robonaut-2 directly follows the human motions but includes an indexing function because of the difference in size between the operator and the robot. This indexing allows each motion to be connected and disconnected with an offset, so the operator can intuitively teleoperate Robonaut-2. The interface of Robonaut-2 targets telepresence.

Fig. 19.23

Human interface for Robonaut 2

19.4.7 Telerobotics with a Rover

Rovers have also been managed under the concept of telerobotics and supervisory control, whereby the operator plays a crucial role. There are three key differences compared with space telemanipulation

  1. The workplace is far from Earth.

  2. The rover operates in an unknown environment.

  3. The rover collects explorative information and sends it to Earth.

It is unreasonable to send continuous low-level commands to a rover on Mars, as the communication time delay can be several minutes. This increases the value of autonomous capabilities. Moreover, it is impossible to provide a remote environment model in advance, meaning that more advanced supervisory control is required. The key technology is simultaneous localization and mapping (SLAM), whereby localization and mapping are performed using a laser range-finder or stereo camera.

The main purpose of a rover is exploration, which requires high-level decision making. The rover supplies useful information to the scientists involved in the project, satisfying their requirements. The exploration of Mars by rovers started with Sojourner, which was followed by Spirit, then Opportunity (which remained operational for an unexpectedly long time), and then Curiosity. Curiosity is significantly larger than its predecessors and can travel greater distances, showing that the level of autonomy is rapidly improving.