
1 Historical Developments and Advances of Orbital Robotic Systems

Key issues in space robots and systems are characterized as follows:

Manipulation:

– Although manipulation is a basic technology in robotics, microgravity in the orbital environment requires special attention to the motion dynamics of manipulator arms and the objects being handled. Reaction dynamics that affect the base body, impact dynamics when the robotic hand contacts an object to be handled, and vibration dynamics due to structural flexibility are included in this issue.

Mobility:

– The ability for locomotion is particularly important in exploration robots (rovers) that travel on the surface of a remote planet. These surfaces are natural and rough, and thus challenging to traverse. Sensing and perception, traction mechanics, vehicle dynamics, control, and navigation: all of these mobile robotics technologies must be demonstrated in a natural, untouched environment.

Teleoperation and Autonomy:

– There is a significant time delay between a robotic system at a work site and a human operator in an operation room on Earth. In earlier orbital robotics demonstrations, the latency was typically 5 s, but it can be several tens of minutes, or even hours, for planetary missions. Telerobotics technology is therefore an indispensable ingredient of space robotics, and the introduction of autonomy is a reasonable consequence.

Extreme Environments:

– In addition to the microgravity environment that affects manipulator dynamics and the natural, rough terrain that affects surface mobility, a number of issues related to extreme space environments are challenging and must be solved to enable practical engineering applications. Such issues include extremely high or low temperatures, high vacuum or high pressure, corrosive atmospheres, ionizing radiation, and very fine dust.

The first robotic manipulator arm used in the orbital environment was the shuttle remote manipulator system (SRMS). It was successfully demonstrated on the STS-2 mission in 1981 and remained operational until the end of the shuttle era. This success opened a new era of orbital robotics and inspired a number of mission concepts in the research community. One ultimate goal that has been discussed intensively since the early 1980s is the rescue and servicing of malfunctioning spacecraft by a robotic free-flyer or free-flying space robot (e.g., the ARAMIS report [55.1], Fig. 55.1). In later years, manned service missions were conducted for the capture-repair-deploy procedure of malfunctioning satellites (Intelsat 603 by STS-49, for example) and for the maintenance of the Hubble space telescope (STS-61, 82, 103, and 109). In all of these examples, the Space Shuttle, a manned spacecraft with dedicated maneuverability, was used. However, unmanned servicing missions have not yet become operational. Although there were several demonstration flights, such as ETS-VII and Orbital Express (elaborated later), practical technologies for unmanned satellite servicing missions await solutions to future challenges.

Fig. 55.1 A conceptual design of a telerobotic servicer (after [55.1])

1.1 Robotic Arms for Assistance of Human Space Flight

1.1.1 Space Shuttle Remote Manipulator System

Onboard the space shuttle, the SRMS, or Canadarm, was a mechanical arm that maneuvered a payload from the payload bay of the space shuttle orbiter to its deployment position and then released it [55.2]. It could also grapple a free-flying payload, maneuver it to the payload bay of the orbiter, and berth it back in the orbiter. The SRMS was first used on the second Space Shuttle mission, STS-2, launched in 1981. Since then, it was used more than 100 times during space shuttle flight missions, performing payload deployment and berthing tasks as well as assisting human extravehicular activities (EVAs). Servicing and maintenance missions to the Hubble space telescope and construction tasks for the International Space Station (ISS) were also successfully carried out by the cooperative use of the SRMS with human EVAs.

As depicted in Fig. 55.2, the SRMS arm was 15 m long and had 6 degrees of freedom (DOF), comprising shoulder yaw and pitch joints, an elbow pitch joint, and wrist pitch, yaw, and roll joints. Attached to the end of the arm was a special gripper system called the standard end effector (SEE), which was designed to grapple a pole-like grapple fixture (GF) attached to the payload.

Fig. 55.2 Space Shuttle remote manipulator system (SRMS) (after [55.2])

By attaching a foothold at the end point, the arm could serve as a mobile platform for an astronaut’s EVAs (Fig. 55.3).

Fig. 55.3 Space shuttle remote manipulator system (SRMS) used as a platform for an astronaut's extravehicular activity in the shuttle cargo bay

After the Space Shuttle COLUMBIA accident during STS-107, NASA outfitted the SRMS with the orbiter boom sensor system – a boom containing instruments to inspect the exterior of the shuttle for damage to the thermal protection system [55.3].

1.1.2 ISS Mounted Manipulator Systems

The ISS is the largest international technology project, with 15 countries making significant cooperative contributions. The ISS is an outpost of human presence in space, as well as a flying laboratory with substantial facilities for science and engineering research. In order to facilitate various activities on the station, there are several robotic systems, some of which are already operational, while others are ready for launch.

The space station remote manipulator system (SSRMS), or Canadarm 2 (Fig. 55.4), was the next generation of the SRMS, developed for use on the ISS [55.4]. Launched in 2001 during STS-100 (ISS assembly flight 6A), the SSRMS has played a key role in the construction and maintenance of the ISS, both by assisting astronauts during EVAs and by working in cooperation with the shuttle's SRMS to hand payloads over from the shuttle to the SSRMS. The arm is 17.6 m long when fully extended and has 7-DOF. Latching end effectors, through which power, data, and video can be transmitted to and from the arm, are attached to both ends. The SSRMS is self-relocatable using an inchworm-like movement with alternate grappling of power data grapple fixtures (PDGFs), which are installed over the station's exterior surfaces to provide power, data, and video, as well as a foothold.

Fig. 55.4 Space station remote manipulator system (SSRMS) (after [55.4])

As another mobility aid allowing the SSRMS to cover wider areas of the ISS, the mobile base system (MBS) was added in 2002 by STS-111 (ISS assembly flight UF-2). The MBS provides lateral mobility as it traverses the rails on the main trusses [55.5].

The special purpose dexterous manipulator (SPDM), or Dextre, which is attached at the end of the SSRMS, is a capable mini-arm system intended to facilitate the delicate assembly tasks currently handled by astronauts during EVAs. The SPDM is a dual-arm manipulator system, in which each 3 m long manipulator has 7-DOF and is mounted on a one-degree-of-freedom body joint. Each arm has a special tool mechanism dedicated to the handling of standardized orbital replacement units (ORUs). The arms are teleoperated from a robotic workstation (RWS) inside the space station [55.6].

The European Space Agency (ESA) will also provide a robotic manipulator system for the ISS, the European robotic arm (ERA), which will be used mainly to work on the Russian segments of the station [55.7]. The arm is 11.3 m long and has 7-DOF. Its basic configuration and functionality are similar to those of the SSRMS [55.8].

In Japan, the Japanese experiment module remote manipulator system (JEMRMS), shown in Fig. 55.5, was developed by the Japan Aerospace Exploration Agency (JAXA) [55.9, 55.10, 55.11]. The arm was launched with the STS-124 mission in 2008 and is now operational on the Japanese module of the ISS. The JEMRMS comprises two components: the main arm, a 9.9 m long, six-degree-of-freedom arm, and the small fine arm, a 1.9 m long, six-degree-of-freedom arm.

Fig. 55.5 (a) Japan Experiment Module (JEM) on the ISS and (b) the JEMRMS manipulator system

Unlike the SSRMS or the ERA, the main arm does not have a self-relocation capability, but it can be fitted with the small fine arm, with which the JEMRMS forms a serial 12-degree-of-freedom macro–micro manipulator system. After installation, the arm is used to handle and relocate components for the experiments and observations on the exposed facility.

1.2 Future-Oriented Space Robot Experiments

1.2.1 ROTEX

The Robot Technology Experiment (ROTEX), developed by the German Aerospace Agency (DLR), is one of the important milestones of robotics technology in space [55.12], as it demonstrated the first (remote) ground control of a space robot with up to 6 s round-trip signal delay using geostationary relay satellites. A multisensory robotic arm was flown on Space Shuttle COLUMBIA (STS-55) in 1993. Although the robot worked inside a work cell on the shuttle, several key technologies, such as a multisensory gripper, teleoperation from the ground and by the astronauts, shared autonomy, and time-delay compensation by use of a predictive graphic display, were successfully tested (Fig. 55.6).

Fig. 55.6 ROTEX manipulator arm onboard the Spacelab D2 mission, the first remotely controlled space robot

Presumably the most spectacular experiment in ROTEX was the fully automatic grasping of a small free-floating cube with flattened edges by the ground computers, which evaluated the stereo images from the robot gripper, estimated the motion, predicted ahead by the above-mentioned 6 s, and sent up the commands for grasping.

1.2.2 ETS-VII

Engineering Test Satellite VII (ETS-VII), shown in Fig. 55.7, is another milestone in the development of robotics technology in space, particularly in the area of satellite servicing. ETS-VII was an unmanned spacecraft developed and launched by the National Space Development Agency of Japan (NASDA, currently JAXA) in November 1997. A number of experiments were successfully conducted using a 2 m long, six-degree-of-freedom manipulator arm mounted on its carrier satellite.

Fig. 55.7 Japanese Engineering Test Satellite ETS-VII

The mission objective of ETS-VII was to test free-flying robotics technology and to demonstrate its utility in unmanned orbital operation and servicing tasks. The mission consisted of two subtasks: autonomous rendezvous/docking (RVD) and a number of robot experiments (RBT). The robot experiments included: (1) teleoperation from the ground with a large time delay; (2) robotic servicing task demonstrations, such as ORU exchange and deployment of a space structure; (3) dynamically coordinated control between the manipulator reaction and the satellite attitude response; and (4) capture and berthing of a cooperative target satellite [55.13].

The communication time delay due to radio propagation (at the speed of light) is relatively small, for example, 0.25 s for a round trip to geostationary Earth orbit (GEO). However, to achieve global communication coverage for low Earth orbit (LEO) operations, the signals are transmitted via multiple nodes, including data relay satellites located in GEO and ground stations. This makes the transmission distance longer, and additional delays are added at each node. As a result, the cumulative delay amounts to several seconds, actually 5–7 s in the case of the ETS-VII mission, more or less the same as in the ROTEX experiment. However, it is important to state that these delays have been unnecessarily long because the optimal communication infrastructure was (and still is) missing. Figure 55.8 shows that, for a low-orbit robot satellite (with a typical 1.5 h orbital period), half of the orbit (i.e., around 45 min) would have communication from the ground station via one relay satellite, yielding approximately a 600 ms round-trip delay (including computational delays). For a robot in geostationary orbit having permanent communication with the ground station, the round-trip delay would be only 300 ms.

Fig. 55.8 Minimal round-trip signal propagation delay: around 0.6 s to robots in LEO and around 0.3 s to robots in GEO

In ETS-VII, opportunities for academic experiments were also opened to Japanese universities and European institutions (e.g., DLR and ESA), and important flight data were obtained that validate the concepts and theories for free-flying space robots [55.14, 55.15]. As an example, DLR performed experiments aimed at demonstrating robot control methods for ORU exchange tasks and dedicated satellite attitude control via swimming or waving motions of the robot arm.

1.2.3 Ranger

Ranger is a teleoperated space robot developed at the University of Maryland Space Systems Laboratory [55.16]. Ranger consists of two seven-degree-of-freedom manipulators with interchangeable end effectors to perform such tasks as the changeout of orbital replacement units (ORUs) in orbit. Also discussed was the changeout of the electronics controller unit (ECU) of the Hubble space telescope, which previously required human EVA. A number of tests and demonstrations for servicing missions have been conducted at the University of Maryland Neutral Buoyancy Facility (Fig. 55.9). Originally designed for a free-flying flight experiment, Ranger was redesigned for a shuttle flight experiment, but ultimately was not manifested on a shuttle flight.

Fig. 55.9 Neutral buoyancy test of the Ranger telerobotic shuttle experiment

1.2.4 Orbital Express

The Orbital Express Space Operations Architecture program was a DARPA program developed to validate the technical feasibility of robotic on-orbit refueling and reconfiguration of satellites, as well as autonomous rendezvous, docking, and manipulator berthing [55.17]. The system consists of the autonomous space transport robotic operations (ASTRO) vehicle, developed by Boeing Integrated Defense Systems, and a prototype modular next-generation serviceable satellite, NextSat, developed by Ball Aerospace. The ASTRO vehicle is equipped with a robotic arm to perform satellite capture and ORU exchange operations (Fig. 55.10).

Fig. 55.10 Orbital Express flight mission configuration

After its launch in March 2007, various mission scenarios were conducted. These scenarios included: (1) visual inspection, fuel transfer, and ORU exchange on NextSat using ASTRO's manipulator arm while the two spacecraft were mated; (2) separation of NextSat from ASTRO, orbital maneuvers by ASTRO, and fly-around, rendezvous, and docking with NextSat; and (3) capture of NextSat using ASTRO's manipulator arm.

These scenarios were successfully completed by July 2007 using ASTRO's onboard autonomy, supported by onboard cameras and an advanced video guidance system.

1.2.5 ROKVISS and Delayed Teleoperation

As a precursor to the planned German orbital servicing demonstration mission DEOS (see below), the German Aerospace Agency (DLR) developed and flew a 0.5 m long, two-degree-of-freedom manipulator arm with a dedicated test bench, called robotic components verification on the ISS (ROKVISS) (Fig. 55.11) [55.18].

Fig. 55.11 ROKVISS – 6 years on the ISS

Using the 7–8 min contact window during each ISS overflight and a dedicated real-time space link with a round-trip delay as small as 20 ms, ROKVISS was the first high-fidelity telepresence system in the history of space flight to allow force-feedback teleoperation. A large amount of data documenting the evolution of the electrical and mechanical properties of the robot (such as sensor accuracy, friction, and motor parameters) was collected during the six-year mission.

ROKVISS was launched by an unmanned Russian Progress transport vehicle in 2004 and installed on the outer platform of the Russian segment of the station in early 2005.

After six years, the system was still fully functional, even though it had undergone only low-cost qualification, i.e., most of the electronic components were off-the-shelf, and the system as a whole had been qualified by radiation, vibration, and temperature tests. The long-term verification of these joints for outer space was a second main goal of ROKVISS. Toward the end of the mission, ROKVISS was teleoperated from the private home of the project leader using standard internet communication to DLR's professional ground station. The round-trip delay then rose to around 400 ms, still allowing telepresence with force reflection. At the end of 2011, ROKVISS was brought back to Earth for the completion of verification and validation tests.

Although the number of joints was small, challenging telepresence experiments were conducted, in which human operators on the ground teleoperated the arm using a force-feedback joystick and stereo vision images. The ROKVISS arm was equipped with joint torque sensors, allowing full impedance control and advanced bilateral control techniques. A secondary goal of the experiment was the space qualification of the joint drives, which are the key components of DLR's torque-controlled lightweight robots [55.19].

No electrical or mechanical damage was visible; the main observation was that joint friction had nearly doubled in vacuum on the ISS but returned to its original value once the arm was back on Earth. In the framework of ROKVISS and the accompanying investigations, important basic clarifications of the general telepresence potentials and prerequisites could be achieved.

Already in the early days of space telerobotics, as pushed forward by JPL, there were estimates that humans might be able to master signal delays of up to 1 s in the visual system (i.e., looking at delayed images) and up to 500 ms in the haptic system, from the proprioceptive point of view.

An interesting, fairly recent result [55.20, 55.21] proved the feasibility of force-reflecting teleoperation with communication delays as high as 650 ms. Using the so-called time-domain passivity control approach, the mechanical energy of the system is observed and controlled in real time, such that the system remains passive for any given communication channel characteristics, including varying time delays and packet loss. These results were verified in a unique telepresence experiment, in which two torque-controlled lightweight arms located in Oberpfaffenhofen (Germany) were connected through a real relayed communication link that used the geostationary satellite ARTEMIS, ground station antennas located in Garching (Germany), and a data mirror in Redu (Belgium) [55.22].
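The core of the approach can be sketched in a few lines. The following is a minimal one-port illustration, assuming a simple sign convention and a series passivity controller; the names are illustrative and the flight implementations in [55.20, 55.21] are considerably more elaborate (two ports, varying delay, packet loss).

```python
# Minimal one-port sketch of time-domain passivity control: a passivity
# observer (PO) integrates the energy flowing into the port, and a
# passivity controller (PC) injects a variable damper only when that
# energy would go negative (i.e., the port would turn active).

def make_passive_port(dt):
    energy = 0.0  # accumulated energy flowing into the port [J]

    def step(force, velocity):
        nonlocal energy
        energy += force * velocity * dt          # PO: E(k) = E(k-1) + f*v*dt
        if energy < 0.0 and abs(velocity) > 1e-9:
            alpha = -energy / (dt * velocity**2) # PC: damping that exactly
            force += alpha * velocity            # dissipates the deficit
            energy = 0.0                         # port is passive again
        return force

    return step
```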

Recently, telepresence experiments with a fairly complex multidegree-of-freedom (DOF) system have been performed [55.23]. The system consisted of DLR's Space JUSTIN humanoid robot and a human–machine interface based on two torque-controlled lightweight arms acting as a force-reflecting hand controller (haptic device) (Fig. 55.12). Complex tasks that required high levels of dexterity, such as (un)screwing with a screwdriver and soldering, were performed with time delays of up to 500 ms.

Fig. 55.12 Force-reflecting telepresence demos at DLR

Fig. 55.13 DEOS – orbital servicing experiment with client (left) and servicer (right)

It is therefore concluded that teleoperation with force feedback is possible anywhere in Earth orbit.

1.2.6 The DEOS Demonstration Project

The German space robotics demonstration mission DEOS, based on the ROKVISS torque-controlled joint technology, aims at demonstrating the maturity and availability of the key technologies needed for on-orbit servicing. DEOS comprises two satellites, a servicer and a client (Fig. 55.13). Its main goal is to find, approach, and capture an uncontrolled and uncooperative satellite in LEO. After a successful capture, typical repair and maintenance tasks are planned to be demonstrated before the servicing satellite enters the atmosphere for a controlled descent together with the captured satellite. Project developments are currently being pursued to advance the technology readiness level in view of future missions. Developments also include adaptations to active debris removal missions under study, such as e.Deorbit (ESA), for the deorbiting of the ENVISAT satellite.

1.2.7 Robonaut and JUSTIN

Humanoid (human-like) robots have been developed to conduct human-compatible dexterous tasks. NASA’s Robonaut and DLR’s JUSTIN are such representative examples. These robots are elaborated in Sect. 55.4.2.

2 Historical Developments and Advances of Surface Robotic Systems

Research on surface exploration rovers began in the mid-1960s, with an initiative (that never flew) for an unmanned rover for the Surveyor lunar landers and a manned rover (Moon buggy) for the human landers in the United States. In the same period, research and development began on a teleoperated rover named Lunokhod in the Soviet Union. Both the manned Apollo rover and the unmanned Lunokhod rover were successfully demonstrated on the Moon in the early 1970s [55.24]. In the 1990s, the exploration target expanded to Mars, and in 1997, the Mars Pathfinder mission successfully deployed a microrover named Sojourner that safely traversed the rocky field adjacent to the landing site by autonomously avoiding obstacles [55.25, 55.26]. Following this success, autonomous robotic vehicles are today considered indispensable technology for planetary exploration. The twin Mars Exploration Rovers, Spirit and Opportunity, were launched in 2003 and have had remarkable success in terms of remaining operational in the harsh environment of Mars for over three years. Each has traveled more than 5000 m and has made significant scientific discoveries using on-board instruments [55.27, 55.28].

2.1 Teleoperated Rovers

The first remotely operated robotic space surface vehicle was Lunokhod (Fig. 55.14) [55.24]. Lunokhod 1 landed on the Moon on November 17, 1970 as a payload of the lander Luna-17, and Lunokhod 2 landed on the Moon on January 16, 1973. Both were 8-wheeled skid-steered vehicles with a mass of about 840 kg, in which almost all components were housed in a pressurized, bathtub-shaped thermal enclosure with a lid that closed over the tub, allowing the vehicle to survive the deep cold (100 K) of the long lunar nights using only the heat emitted by small pellets of radioisotope. On the inside of the lid were solar arrays, which recharged the batteries during the day as required to maintain operation of the vehicle. Lunokhod 1 operated for 322 Earth days, traversing over 10.5 km during that period, and returned over 20000 TV images, 200 high-resolution panoramas, and the results of more than 500 soil penetrometer tests and 25 soil analyses using its x-ray fluorescence spectrometer. Lunokhod 2 operated for about 4 months and traversed more than 37 km, with the mission officially terminated on June 4, 1973. It has been reported that Lunokhod 2 was lost prematurely when it began sliding down a crater slope and hasty commands were sent in response, which ultimately caused the end of the mission.

Fig. 55.14 Lunokhod

Each of the eight wheels on the Lunokhod vehicles was 0.51 m in diameter and 0.2 m wide, giving an effective ground pressure of less than 5 kPa based on an assumed sinkage of 3 cm. Each wheel had a brush-type DC motor, a planetary gear reduction, a brake, and a disengagement mechanism allowing it to free-wheel in the event of a problem with the motor or gears. The mobility commands to the vehicle included two speeds forward or backward, braking, and turning to the right or the left, either while moving or in place.

The vehicles had both gyroscope and accelerometer-based tilt sensors which could automatically stop the vehicle in the event of excessive tilt of the chassis. Typical mobility commands specified a time duration over which the motors would run, and then stop. Precision turning commands specified the angle through which the vehicle should turn. These commands were terminated when the specified turn angle had been reached according to the heading gyroscope. Odometry was determined by a ninth small wheel which was unpowered and lightly loaded and used only to determine over-the-ground distance. There was an on-board current overload system, and motor currents, pitch and roll measurements, distance traversed, and many component temperatures were telemetered to the ground operators.

The Lunokhod crew consisted of a driver, a navigator, a lead engineer, an operator for steering the pencil-beam communication antenna, and a crew commander. The driver viewed a monoscopic television image from the vehicle and gave the appropriate commands (turn, proceed, stop, or back up) along with their associated parametric values in terms of duration or angle. The navigator viewed displays of telemetry from the vehicle's course gyroscope, gyrovertical sensor, and odometer, and was responsible for calculating the trajectory of the vehicle and laying out the route to be followed. Thus, the driver was responsible for vehicle stability about its center of mass, and the navigator was responsible for the trajectory of that center of mass. The lead engineer (assisted by many specialists as required) was responsible for assessing the health of the on-board systems. The lead engineer provided both routine updates on energy supply, thermal conditions, etc., as well as possible emergency alerts such as extreme motor currents or chassis tilt. The pencil-beam antenna operator oversaw the functioning of an independent ground-based closed-loop control system that servoed the antenna to always point at Earth, independent of the vehicle motion. The crew commander supervised the implementation and execution of the overall plan, gave any detailed commands for making actual contact with the surface (e.g., by the penetrometer), and could also override any command to the vehicle, as he viewed the same information as the driver.

The entire driving system was tested extensively prior to the Lunokhod 1 mission at a lunodrome with simulated lunar terrain, which proved to be more challenging than that actually encountered during the Lunokhod 1 mission. Despite this, the operators of Lunokhod 1 reported encountering a dangerous situation (unforeseen entrance into a crater, rolling onto a rock, etc.) slightly more often than once per kilometer. This was attributed to inadequate driving experience, the modest quality of the television images, and the poor illumination conditions on the Moon. The driving direction was often selected primarily to give the best images; even so, the operators reported fictitious dangers caused by varying illumination conditions. In the first three months (lunar days) of operation, the vehicle traversed 5224 m in 49 h of driving using 1695 driving commands, including about 500 turns. Sixteen signals were sent for protection against excessive tilt during that time; approximately 140 craters were traversed at maximum slope angles of 30°.

With the approximately 2.6 s speed-of-light delay, the operators stated that control experience confirmed the desirability of movement in a starting-stopping regime, with mandatory stopping every few meters. The soil properties were found to differ substantially even in terrain sectors not very distant from one another. The soil penetrometer determined that the upper layer of regolith varied from the stiffest measurement, where the penetrometer required about 16 kg (Earth weight) of force to penetrate about 26 mm, to the weakest measurement, where only 3 kg of Earth weight caused a penetration of about 39 mm. The cone penetrometer had a base diameter of 50 mm and a cone height of 44 mm. Thus, the upper layer of regolith had a rate of increase of load-bearing strength ranging from about 400 kPa/m for the weakest soil to about 3 MPa/m for the stiffest soil. Crater walls and the immediate ejecta blanket around craters generally exhibited the weakest soil. Below 5–10 cm of penetration depth, the regolith generally became rapidly stiffer. The mean value of wheel slippage for the first three lunar days was about 10%. On horizontal terrain, the slippage ranged from zero to 15%, depending on the surface irregularities and ground inhomogeneity. On crater slopes, the slip increased to 20–30%. The specific resistance of the Lunokhod wheels was generally in the range of 0.05–0.25, while the specific free traction (the ratio of traction to weight) was in the range of 0.2–0.41. The crater distribution in the area explored by Lunokhod 1 was found to be closely approximated by the formula $N(D) = A D^{-\delta}$, where $N(D)$ is the number of craters larger than diameter $D$ meters per hectare of lunar surface, $A$ is a scale factor found to be about 250, and $\delta$ is the distribution exponent, found to be about 1.4 [55.24].
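Since the fit is a simple power law, it is easy to evaluate; the short sketch below (Python) tabulates the quoted model for a few crater diameters.

```python
# Evaluating the Lunokhod 1 crater-distribution fit quoted above:
# N(D) = A * D**(-delta) craters larger than D meters, per hectare.
A, delta = 250.0, 1.4

def craters_per_hectare(D):
    return A * D ** (-delta)

for D in (0.5, 1.0, 2.0, 5.0):
    print(f"craters larger than {D} m: {craters_per_hectare(D):6.0f} per hectare")
# 0.5 m: ~660, 1 m: 250, 2 m: ~95, 5 m: ~26 per hectare
```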

2.2 Autonomous Rovers

In the mid-1960s, research began on a lunar rover at the US Jet Propulsion Laboratory (JPL) in Pasadena, California, when it was proposed to put a small rover on the Surveyor lunar landers. These landers were led by JPL (based on a system-level contract to Hughes), and were designed to land softly on the Moon to establish the safety of such landing prior to the Apollo landers with humans aboard. At the time it was speculated (notably by T. Gold) that the Moon might be covered in a thick layer of soft dust that would swallow any lander. In 1963, JPL issued a contract to build a small rover concept prototype in support of the Surveyor program to the General Motors Defense Research Laboratories in Goleta, California. That GM facility had recently hired Bekker, who was considered the father of off-road locomotion, having written several seminal textbooks on the subject, and having introduced many of the key concepts relating soil properties to off-road vehicle performance that are still used today [55.29, 55.30] (Sect. 55.3.12).

Bekker and his team proposed an articulated 6-wheeled vehicle based on a novel three-cab configuration with an axial spring-steel suspension. This vehicle exhibited remarkable mobility, being able to climb vertical steps up to 3 wheel radii high and to cross crevasses 3 wheel radii wide. Notable people working with Bekker were Ferenc Pavlics, who went on to lead the development of the mobility system for the Apollo lunar rover (under contract from Boeing), and Fred Jindra, who developed the underlying equations describing the mobility of the 6-wheeled articulated vehicle that were later used by Don Bickler in conceiving the rocker-bogie chassis used on Sojourner and the Mars Exploration Rovers. Bekker and his team proposed the 6-wheeled articulated vehicle after experimenting with many types of vehicles, including multitracked vehicles, screw-type vehicles (for fine powdered terrain), and others. The 6-wheeled vehicle demonstrated superior performance in scale-model testing in both soft and rocky terrain.

They built and delivered two vehicles that were about 2 m long with approximately 0.5 m wheel diameters. Those vehicles were used in testing throughout the 1960s and early 1970s to conduct simulated operations to determine how such vehicles could actually be used on the Moon. One key issue was that the speed-of-light round trip from the Moon (about 3 s) precluded direct driving of the vehicle. Perhaps most annoying was the fact that, during vehicle motion, the highly directional radio antenna used to communicate with Earth would lose its pointing, and so communications would briefly be lost. This meant that operators driving the rover would be confronted with a series of still images, instead of a stream of moving images. It was quickly realized that much of an operator’s situation awareness and depth perception needed to drive a vehicle with a monocular camera comes from motion. It was very difficult to drive from frozen monocular images. A crude form of stereo was incorporated where the camera mast was raised and lowered slightly and the operator could switch back and forth between the two views.

Following the successful landing of several of the Surveyor spacecraft, and the discovery that all landing sites seemed to have relatively firm soil, it was concluded that the Surveyor lunar rover was not needed. As a result, the prototype was used for research into the early 1970s, and subsequently restored for use again in research in the 1980s, becoming the first vehicle to be outfitted with waypoint navigation of the sort later used on the Sojourner and MER missions.

About the time Viking was conceived and developed, JPL began the 1984 Mars rover effort. (1984 was an energetically favorable launch opportunity from Earth to Mars, and the next likely major mission opportunity after Viking.) Two testbed vehicles were developed, a software prototype and a hardware prototype. The software prototype had a Stanford arm, designed by Vic Scheinman (who went on to design the Unimation PUMA arm and many other famous early robotic devices). This was the only 1.5-scale Stanford arm ever built. Lewis and Bejczy became well known in robotics for solving the kinematics of this arm, one of, if not the, first full kinematic solutions done in robotics up to that time, e.g., [55.31]. A stereo pan-tilt head was implemented and equipped with the first solid-state cameras to become available. A number of very important works were published at the 1977 International Joint Conference on Artificial Intelligence, e.g., [55.32, 55.33]. The first hand-eye-locomotion coordination was done with this vehicle, where a rock was designated in a stereo image and the vehicle maneuvered autonomously to a point where the arm could reach out and pick up the rock. One of the first demonstrations of pin-in-hole insertion and other dexterous manipulations was also done with this system in the 1970s.

The hardware prototype was built using elastic loop wheels made by Lockheed [55.34]. The vehicle was battery powered and controlled via a handheld RC unit of the type used by hobbyists.

In late 1982, JPL had a contract with the US Army to study the use of robotic vehicles in support of the US Army. During this study, Brian Wilcox at JPL proposed a technique to reduce the need for a real-time video link or high-bandwidth communication channel between the vehicle and the operator. This technique (which became known as computer-aided remote driving, or CARD) [55.35] required the transmission of a single stereo image from the vehicle to the operator, so that the operator could designate waypoints in that image using a 3-D cursor. By using a single stereo image instead of a continuous stream of monocular images, the amount of information that needed to be transmitted by the vehicle was reduced by orders of magnitude. JPL first demonstrated CARD on the resurrected Surveyor lunar rover vehicle (SLRV), which had been painted baby-blue and so became known as the Blue Rover (Fig. 55.15a), and later on a modified Humvee. During field tests in the Mojave desert in 1988, CARD was demonstrated on the Humvee with path designations of 100 m per stereo image and a time-to-designate of only a few seconds per path.

Fig. 55.15a,b Six-wheel articulated-body rovers developed by JPL: (a) SLRV and (b) Robby

As the CARD work was ongoing, an internally funded effort at JPL demonstrated a concept called semiautonomous navigation (SAN). This concept involved humans on Earth designating global paths using maps of the sort that could be developed from orbiter imagery, and then having the vehicle autonomously refine and execute a local path that avoids hazards. The moderate success of that effort led to a NASA-funded effort and the development of a new vehicle, called Robby (Fig. 55.15b). Robby was a larger vehicle that could support the on-board computing and power needed for untethered operation. (The SLRV had been tethered to a VAX 11/750 minicomputer over a 1500-foot tether during arroyo field testing of CARD.) For the first time (in 1990), an autonomous vehicle had made a traverse through an obstacle field faster than a rover on Mars could have done using human path designation on Earth.

However, Robby had a severe public relations problem: it was perceived as too large. Of course, none of the computers or power systems had been miniaturized or lightweighted; it was composed entirely of the lowest-cost components that could do the job. However, because it was the size of a large automobile, observers and NASA management got the impression that future rovers would be car-sized or even truck-sized vehicles. This was compounded by the Mars rover sample return (MRSR) study done by JPL in the late 1980s, which suggested a mass for the rover of 882 kg. An independent study of the MRSR study by Science Applications International Corporation (SAIC) estimated the overall cost at $13B. When word of this outrageous price tag filtered around NASA Headquarters and into the Congressional staff, MRSR was summarily killed. Robby died along with it. At about the same time, NASA funded Carnegie Mellon University to develop Ambler (Fig. 55.16), a large walking robot that was able to autonomously choose safe footfall locations, also as a testbed Mars rover [55.36, 55.37]. Ambler had a similar public-relations problem, being about the same mass as Robby; the NASA management community was very skeptical that such large systems could affordably be flown to Mars. Both Robby and Ambler carried all power and computing systems on board, which at that time were not sufficiently miniaturized to make autonomous rovers credible for actual flight missions. Moore's law was not only causing computing technology to become miniaturized at a high rate, but the energy required per computing instruction was also dropping rapidly. This meant that early systems devoted most of their power to computing rather than to motive power. Later systems, such as the Mars exploration rovers, have a more nearly equal balance between power for mobility and power for computation. Future systems will presumably devote the majority of their power to mobility as opposed to computation.

Fig. 55.16 Ambler

Soon thereafter, the Mars Environmental Survey (MESUR) mission set was proposed as a lower-cost alternative to a sample return mission. The MESUR Pathfinder mission was proposed as a first test of what was envisioned as a network of 16–20 surface stations providing global coverage of Mars. A small rover was proposed to the Mars Science Working Group [55.38, 55.39]. A very short-term development effort culminated in a demonstration in July 1992 of a 4 kg rover that could move to directed points on the surface near a lander using stereo designation of waypoints in a 3-D display of frozen images taken from a lander mast camera pair (Fig. 55.17). This demonstration was sufficiently successful that a similar rover was manifested for the Mars Pathfinder mission. The Pathfinder rover (Fig. 55.18) was later named Sojourner and became the first autonomous vehicle to traverse the surface of another planet, using a hazard detection and avoidance system to move safely between waypoints through a rockfield [55.25, 55.26]. The hazard detection system avoided obstacles and was also used to position the vehicle accurately in front of rocks. Sojourner operated successfully for 83 Mars days (until the failure of the lander, which was acting as a communications relay between the rover and Earth). Sojourner examined approximately a dozen rock and soil samples with its alpha-proton-x-ray spectrometer, which gives the elemental composition of the rocks and soil. The success of Sojourner led directly to the decision to build the twin Mars exploration rovers launched in 2003. Both Sojourner and the subsequent Mars exploration rovers Spirit and Opportunity, as well as the Mars Science Laboratory Curiosity, use waypoint designation in stereo images by the human operator, together with autonomous hazard detection and avoidance to keep the rovers safe should they wander off the designated path.

Fig. 55.17 Rocky 4

Fig. 55.18 The Pathfinder rover, Sojourner

During the 1992–1993 summer season in Antarctica, the Dante I robot, built by Carnegie Mellon University and funded by NASA, attempted to rappel into the caldera of the active volcano Mt. Erebus. Dante was a walking robot and represented the first serious attempt to make a robot rappel down a grade too steep to traverse using purely frictional contact. Unfortunately, the extreme cold (even in summer), compounded by human error, caused a kink in the fiber-optic umbilical to snag going through an eyelet, breaking the high-bandwidth fiber communications on which the system depended. The fiber could not be repaired in the field, and so the mission was aborted. Undaunted, in the summer of 1994, Dante II (Fig. 55.19) made a successful rappel into the caldera of Mt. Spurr in Alaska, exploring the active vents on the crater floor in a way that would be unsafe in the extreme if done by human explorers. The Dante robot series demonstrated that rappelling, especially when combined with legged locomotion, allows robots to explore extremely hazardous sites in ways that humans cannot.

Fig. 55.19 Dante II at Mt. Spurr in Alaska

In 1984, NASA started the Telerobotics Research program [55.40, 55.41]. This program demonstrated various aspects of on-orbit assembly, maintenance, and servicing. Some highlights of this activity were the automated tracking and grappling of a free-spinning satellite (suspended with a counterweight and gimbal for realistic reactions under external forces), connection of a flight-like fluid coupler, and many busy box functions such as door opening, threaded fastener mating and demating, use of power tools, dual-arm manipulation of a simulated hatch cover and flexible thermal blanket, etc., by various control approaches ranging from force-reflecting teleoperation to fully autonomous sequences. This activity ended in about 1990.

2.3 Research Systems

There have been many mobile robots built by government, university, and industrial groups whose objective was to develop new technologies for planetary surface exploration, or to excite students and young engineers about the possibilities in that area. Carnegie Mellon University developed the Ambler, Dante, Nomad, Hyperion, Zoe, and Icebreaker robot series. The Jet Propulsion Laboratory (JPL), Draper Labs, MIT, Sandia National Lab, and Martin-Marietta (later Lockheed-Martin) each built more than one planetary surface robot testbed. The Marsokhod chassis built by VNII Transmach of St. Petersburg, Russia was used by research groups there and also in Toulouse (LAAS and CNRS) [55.42, 55.43], as well as by the NASA Ames Research Center and the McDonnell Douglas Corporation (later part of the Boeing Company) in the US.

These research platforms have been used for two basic avenues of research. One avenue is to perfect safe driving techniques on planetary surfaces despite the speed-of-light latencies inherent in robotic exploration of the planets. This includes the waypoint navigation technology developed at JPL in the 1980s, where frozen stereo images are used to plan a possibly lengthy series of waypoints or activity sites, which are then executed with various sorts of reflexive hazard avoidance or safing techniques, as used on the Sojourner rover on Mars in 1997. The other is to develop higher-level autonomy for improved science data return or mission robustness. Technologies in this latter category include mission planners that attempt to optimize routes and activity sequences based on time, limits to peak power, total energy, expected temperature, illumination angles, availability of communications, and other factors. Automated classification of possible science targets based on clustering of spectral data, figure-ground segmentation of rocks, and other approaches have been attempted with some success. At the time of this writing, some of these technologies have been uploaded to the twin Mars exploration rovers Spirit and Opportunity (Fig. 55.20) [55.27], including automated detection of temporary events of scientific interest such as dust devils and clouds [55.44]. The Spirit rover was lost when it broke through a surface crust into a small crater filled with fine dust in late 2009, with attempts to free it continuing until mid-2010. The Mars Science Laboratory Curiosity landed in August 2012 and seeks to explore Mt. Sharp in the center of Gale Crater.

Fig. 55.20 The Mars exploration rovers, Spirit and Opportunity, with a manipulator arm in front

ESA, with a number of contractors (e.g., ASTRIUM and DLR), has been working for several years on a 6-wheeled Mars rover, ExoMars, that is supposed to fly after 2018, as well as on a rover for the next lunar lander (NLL), but it is not yet clear when and in which configuration these missions might be realized.

2.4 Sensing and Perception

In the 1980s, most planetary rover sensing research was based on laser ranging or stereo vision. Stereo vision was too computationally intensive for early low-power, radiation-hardened processors, so the Sojourner Mars rover used a simple form of laser ranging to determine which areas were safe to traverse. Between the launch of Sojourner (1996) and the Mars exploration rovers (2003), sufficient progress had been made in radiation-hardened flight processors that stereo vision could be used for hazard detection on MER [55.45], with much of the development conducted in experiments with the Rocky-7 rover (Fig. 55.21). This allowed much larger numbers of range points to be incorporated into the hazard-detection algorithm (thousands of points, instead of the 20 discrete range points used by Sojourner). Perception of hazards on Sojourner was based on simple computations of average slope and roughness over the 4 × 5 array of range measurements, as well as the maximum height differences.
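As a concrete illustration of such a geometric check, the following sketch (Python/NumPy) fits a plane to a small array of range points and tests average slope, residual roughness, and step height. The threshold values and function names are illustrative assumptions, not flight parameters.

```python
import numpy as np

# Hedged sketch of a Sojourner-style geometric hazard test over a small
# set of range points (e.g., the 20 points of a 4 x 5 grid).
def is_traversable(points, max_slope_deg=15.0, max_rough_m=0.03, max_step_m=0.15):
    """points: (N, 3) array of terrain points (x, y, z) in meters."""
    # Least-squares plane fit z = a*x + b*y + c.
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeff, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    a, b, _ = coeff
    slope_deg = np.degrees(np.arctan(np.hypot(a, b)))   # average slope
    roughness = (points[:, 2] - A @ coeff).std()        # residual roughness
    step = np.ptp(points[:, 2])                         # max height difference
    return (slope_deg < max_slope_deg and
            roughness < max_rough_m and
            step < max_step_m)
```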

Fig. 55.21 Rocky 7

The two MER rovers and the MSL rover use a more sophisticated evaluation of the rover's safety along a large number of candidate arcs emanating from its current location. Many other algorithms for the perception of terrain hazards have been used with reasonable success by various organizations. Today, it is probably fair to say that the unsolved problems lie not in the area of geometric hazards (i.e., hazards that can be evaluated completely based on accurate knowledge of the shape of the terrain) but rather in the area of nongeometric hazards (i.e., hazards where uncertainties in the load-bearing or frictional properties of the terrain determine the safety of a proposed traverse). Accurate estimation of the load-bearing or friction properties of terrain by remote sensing is a very challenging task that will not be completely solved anytime soon, if ever.

2.5 Estimation

Most estimation for planetary surface exploration relates to the internal state of the robot, or to its position, pose, and kinematic configuration with respect to the environment. Internal state sensors, such as encoders on any active or passive articulations of the vehicle, are used along with a kinematic model and inertial sensors such as accelerometers and gyroscopes to estimate the pose of the vehicle in inertial space. Perceptual algorithms, such as surface reconstruction from the clouds of range points produced by stereo vision, put terrain geometry estimates into this same representation. Heading in inertial space is generally the most difficult quantity to estimate reliably, due to the lack of navigation aids such as the global positioning system or of any easily measured heading reference such as a global magnetic field. Integration of rate-gyro data is used to maintain local attitude during motion, while accurate estimation of the direction of the rotation axis of Mars is possible through long integrations of 3-axis rate-gyro data while the vehicle is stopped. Similar approaches are probably not feasible on the Moon because of its slow rotation rate. Imaging of the solar disk or of constellations of stars at precisely known times can be combined with stored models of the rotation of the planet to allow accurate estimation of the complete pose of the vehicle in inertial space. Kalman filtering or related techniques are generally employed to reduce the effects of measurement noise.
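The static rotation-axis technique can be summarized in a few lines. The sketch below (Python/NumPy) is a hedged illustration of the idea only; the sample counts, noise assumptions, and function names are not from any flight system.

```python
import numpy as np

# Static 'gyrocompassing' sketch: with the vehicle stopped, a long average
# of 3-axis rate-gyro data ideally leaves only the planet's rotation
# vector, whose direction in the body frame gives a heading reference.

MARS_RATE = 7.09e-5   # Mars sidereal rotation rate [rad/s]
MOON_RATE = 2.66e-6   # lunar rotation rate [rad/s], roughly 27x smaller

def rotation_axis_in_body(gyro_samples):
    """gyro_samples: (N, 3) rates [rad/s] logged while stationary."""
    mean_rate = gyro_samples.mean(axis=0)     # zero-mean noise averages out
    return mean_rate / np.linalg.norm(mean_rate)

# The averaged gyro noise must fall below the planet's rotation rate,
# which is why long integration times are needed on Mars and why the
# Moon's much slower rotation makes the same trick impractical there.
```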

2.6 Manipulators for In-Situ Science

The Mars exploration rovers were the first planetary exploration vehicles to have general-purpose manipulators. (Lunokhod had a single-purpose soil penetrometer, and Sojourner had a single-degree-of-freedom device to place an alpha-proton-x-ray spectrometer in direct contact with the terrain.) The MER arms each have 5 degrees of freedom and a reach of over 1 m. Accurate gravity-sag models of the lightweight arm allow the precise position to be predicted in advance of any command to deploy an instrument, and contact sensors allow the arm to stop before any excessive forces build up in the relatively flexible arm. Future arms for planetary surface operations, especially for any proposed assembly, maintenance, or servicing tasks as part of the proposed lunar outpost, will require force sensing to protect the stronger but much more rigid arms from damage, as well as to allow controlled forces to be applied to the terrain or workpieces. Of course, there is a huge body of knowledge associated with industrial robot arms and undersea robotics (e.g., for the offshore oil industry), but such arms are generally very heavy, fast, and stiff compared with credible systems for planetary surface use. Delicate force control has rarely been applied in industrial settings. Space hardware is necessarily very lightweight, and so both the arms and the workpieces will need well-resolved force sensing and control to prevent damage to one or both. Because of severe limits on both mass and power, and to avoid unnecessary risk, space manipulation tends to be slow. Historically, this means the gear ratio between each motor and the corresponding output shaft is very large, making the use of motor current as an estimator of output torque very problematic. Other low-mass and robust means for accurately sensing applied forces in the space environment are needed.
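A small numeric sketch makes the current-sensing problem concrete. All values below (torque constant, gear ratio, efficiency) are illustrative assumptions, not parameters of any flight arm.

```python
# Why motor current is a poor output-torque estimator through a large
# gear reduction, in rough numbers.
K_T = 0.02    # assumed motor torque constant [Nm/A]
N = 1000.0    # assumed gear ratio, typical of slow space mechanisms
ETA = 0.7     # assumed gear efficiency

def torque_estimate(current_a):
    """Ideal model: tau_out ~= N * eta * K_T * i."""
    return N * ETA * K_T * current_a

# A 1 mA current-measurement error already maps to ~0.014 Nm of apparent
# output torque, and the unmodeled friction hidden inside eta (which
# varies with temperature, load, and wear) can exceed the torque being
# estimated - hence the call above for direct force/torque sensing.
print(f"{torque_estimate(0.001):.3f} Nm per 1 mA of current error")
```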

3 Mathematical Modeling

Broadly speaking, both on-orbit manipulators and surface mobile robots can be considered articulated-body systems with a moving base. The point that clearly distinguishes them from other ground-based robots, such as industrial manipulators, is the existence of this moving base.

3.1 Space Robot as an Articulated Body System

The robotic systems discussed in this chapter comprise one or multiple articulated limbs mounted on a base body that has a dynamic coupling with these limbs. Typical styles of such moving base systems are categorized into the following groups [55.46].

3.1.1 Free-Floating Manipulator Systems

A space free-flyer that has one or more manipulator arms, as illustrated schematically in Fig. 55.22a, is a typical example of this group. When operating the manipulator arm(s), the position and orientation of the base spacecraft fluctuate due to the manipulator reaction. The kinetic momentum of the system is conserved if no external forces or moments are applied, and the conservation law governs the reaction dynamics of this system. Coordination, or isolation, between the base and manipulator dynamics is the key to advanced motion control.

Fig. 55.22a–d Four basic types of moving-base robots: (a) free-floating manipulator system, (b) macro–micro manipulator system, (c) flexible-structure-based manipulator system, and (d) surface locomotive robot system

3.1.2 Macro–Micro Manipulator Systems

A robotic system that comprises a relatively small arm (micro-arm) for fine manipulation mounted on a relatively large arm (macro-arm) for coarse positioning is called a macro–micro manipulator system. The SSRMS (Canadarm 2) and the SPDM (Dextre) system, as well as the JEMRMS on the Japanese module of the ISS, are good examples. Here, the connecting interface at the end point of the macro-arm, or the root of the micro-arm, is modeled as the base body (Fig. 55.22b). A free-flying space robot may be treated in this group when its base body is actively controlled by actuators that produce external forces and moments, such as gas-jet thrusters. In this case, these actuators can be modeled as a virtual macro-arm [55.47].

3.1.3 Flexible-Based Manipulator Systems

If the macro arm behaves as a passive flexible (elastic) structure in a macro–micro manipulator system, the system is considered to be a flexible-based manipulator (Fig. 55.22c). Such a situation can be observed in the operation of the ISS, when the SSRMS is servo or brake locked after its coarse operation. Here, the issue is that the base body (the root of the micro arm, or the end of the macro arm, according to the definition above) is subject to vibrations that will be excited by the reaction of the micro arm.

3.1.4 Mobile Robots with Articulated Limbs

Mobile robots for surface locomotion have the same structure in terms of the dynamics equation as the above groups. This group includes wheeled vehicles, walking (articulated limb) robots, and their hybrids. In a wheeled vehicle, suspension mechanisms, if any, are also modeled as articulated limb systems. The forces and moments yielded by contact with the ground, or planetary surface, govern the motion of the system (Fig. 55.22d).

3.2 Equations for Free-Floating Manipulator Systems

Let us first consider a free-floating system with one or more manipulator arms mounted on a base spacecraft. The pioneering work on mathematical models of this type of space manipulator system was conducted in the late 1980s and early 1990s and is collected in [55.48], published in 1993. In this section, the models that are widely accepted today are introduced.

The base body, termed link 0, floats in inertial space without any external forces or moments. At the end point(s) of the arm(s), external forces/moments may be applied. For such a system, the equation of motion is expressed as follows:

$$\begin{bmatrix} H_{\text{b}} & H_{\text{bm}} \\ H_{\text{bm}}^{T} & H_{\text{m}} \end{bmatrix} \begin{bmatrix} \ddot{x}_{\text{b}} \\ \ddot{\phi} \end{bmatrix} + \begin{bmatrix} c_{\text{b}} \\ c_{\text{m}} \end{bmatrix} = \begin{bmatrix} 0 \\ \tau \end{bmatrix} + \begin{bmatrix} J_{\text{b}}^{T} \\ J_{\text{m}}^{T} \end{bmatrix} F_{\text{h}}$$
(55.1)

The kinematic relationship among $x_{\text{h}}$, $x_{\text{b}}$, and $\phi$ is expressed using Jacobian matrices as

$$\dot{x}_{\text{h}} = J_{\text{m}} \dot{\phi} + J_{\text{b}} \dot{x}_{\text{b}},$$
(55.2)
$$\ddot{x}_{\text{h}} = J_{\text{m}} \ddot{\phi} + \dot{J}_{\text{m}} \dot{\phi} + J_{\text{b}} \ddot{x}_{\text{b}} + \dot{J}_{\text{b}} \dot{x}_{\text{b}}.$$
(55.3)

where the notation is as follows:

  1. $x_{\text{b}} \in \mathbb{R}^{6}$: position/orientation of the base
  2. $\phi \in \mathbb{R}^{n}$: joint angles of the arm
  3. $x_{\text{h}} \in \mathbb{R}^{6k}$: position/orientation of the end point(s)
  4. $\tau \in \mathbb{R}^{n}$: joint torques of the arm
  5. $F_{\text{h}} \in \mathbb{R}^{6k}$: external forces/moments at the end point(s)
  6. $n$: total number of joints
  7. $k$: number of manipulator arms,

and $H_{\text{b}}$, $H_{\text{m}}$, and $H_{\text{bm}}$ are the inertia matrices of the base body, the manipulator arm, and the coupling between the base and the arm, respectively, while $c_{\text{b}}$ and $c_{\text{m}}$ are the nonlinear Coriolis and centrifugal forces.

For a free-floating manipulator in orbit, the gravity forces exerted on the system can be neglected, so the nonlinear term becomes

$$c_{\text{b}} = \dot{H}_{\text{b}} \dot{x}_{\text{b}} + \dot{H}_{\text{bm}} \dot{\phi}.$$

Integrating the upper set of equations in (55.1) with respect to time, we obtain the total momentum of the system as

$$L = \int J_{\text{b}}^{T} F_{\text{h}} \, \mathrm{d}t = H_{\text{b}} \dot{x}_{\text{b}} + H_{\text{bm}} \dot{\phi}.$$
(55.4)

For the case in which reaction wheels are mounted on the base body, they are included as additional manipulator arms.

3.3 Generalized Jacobian and Inertia Matrices

From (55.2) and (55.4), the coordinates of the manipulator base $\dot{x}_{\text{b}}$, which are passive and unactuated, can be eliminated as follows:

$$\dot{x}_{\text{h}} = \hat{J} \dot{\phi} + \dot{x}_{\text{h0}},$$
(55.5)

where

$$\hat{J} = J_{\text{m}} - J_{\text{b}} H_{\text{b}}^{-1} H_{\text{bm}},$$
(55.6)

and

$$\dot{x}_{\text{h0}} = J_{\text{b}} H_{\text{b}}^{-1} L.$$
(55.7)

Since $H_{\text{b}}$ is the inertia tensor of a single rigid body (the manipulator base), it is always positive definite, and so its inverse exists.

The matrix $\hat{J}$ was first introduced in [55.49, 55.50] and is referred to as the generalized Jacobian matrix. In its original definition, it was assumed that no external forces/moments act on the system. If $F_h = 0$, the term $\dot{x}_{h0}$ becomes constant and, in particular, if the system has zero initial momentum, $\dot{x}_{h0} = 0$ and (55.5) becomes simple. However, note that zero or constant momentum is not a necessary condition in the derivation of (55.6).

Using this matrix, the manipulator hand can be operated under resolved motion-rate control or resolved acceleration control in inertial space. Thanks to the generalized Jacobian, the hand is not disturbed by the reactive base motion that occurs during the operation.

From the upper and lower sets of equations in (55.1), $\ddot{x}_b$ can be eliminated to obtain the following expression

$$\hat{H} \ddot{\phi} + \hat{c} = \tau + \hat{J}^T F_h ,$$
(55.8)

where

$$\hat{H} = H_m - H_{bm}^T H_b^{-1} H_{bm} .$$
(55.9)

The matrix H ^ is known as the generalized inertia matrix for space manipulators [55.48]. This matrix represents the inertia property of the system in the joint space and can be mapped onto the Cartesian space using the generalized Jacobian matrix

$$\hat{G} = \hat{J} \hat{H}^{-1} \hat{J}^T .$$
(55.10)

The matrix $\hat{G}$ is referred to as the inverted inertia tensor for space manipulators and is useful in the discussion of impact dynamics when a space manipulator collides with or captures a floating target in orbit [55.51].

The generalized Jacobian matrix (GJM) is a useful concept, with which the manipulator end point can perform positioning or trajectory tracking control with a simple control algorithm, regardless of the attitude deviation during the operation.

A simplified laboratory demonstration was carried out using a two-dimensional free-floating test bed called EFFORTS [55.52]. To simulate the motion in a microgravity environment, a robot model was floated on a thin film of pressurized air on a horizontal plate, so that frictionless motion with momentum conservation was achieved.

Figure 55.23 depicts the test bed and a typical experimental result. For Fig. 55.23b, the control command was given to the floating robot by

$$\dot{\phi} = \hat{J}^{-1} \dot{x}_d ,$$
(55.11)

where $\dot{x}_d$ is the desired velocity of the manipulator end point, the value of which was given and updated by on-line measurement of the end point position $x_h$ and the target position $x_t$, as follows

$$\dot{x}_d = \frac{x_t - x_h}{\Delta t} ,$$
(55.12)

where $\Delta t$ is the time interval of the on-line control loop. The desired end point velocity was simply resolved into joint velocities by (55.11).
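As an illustration, the loop given by (55.11) and (55.12) can be written in a few lines of Python. This is a minimal sketch, assuming the configuration-dependent matrices $J_m$, $J_b$, $H_b$, and $H_{bm}$ are supplied by an external dynamics model; the function names here are hypothetical, not from any specific library.

```python
import numpy as np

def generalized_jacobian(J_m, J_b, H_b, H_bm):
    """Generalized Jacobian J^ = J_m - J_b H_b^{-1} H_bm, per (55.6)."""
    return J_m - J_b @ np.linalg.solve(H_b, H_bm)

def gjm_control_step(x_h, x_t, dt, J_m, J_b, H_b, H_bm):
    """One cycle of the on-line loop: the desired hand velocity (55.12)
    is resolved into joint rates through the GJM (55.11)."""
    x_dot_d = (x_t - x_h) / dt                  # (55.12)
    J_hat = generalized_jacobian(J_m, J_b, H_b, H_bm)
    return np.linalg.pinv(J_hat) @ x_dot_d      # (55.11); pinv for non-square J^
```

Because the GJM already accounts for the base reaction, no explicit base-attitude feedback is needed in this loop.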

Fig. 55.23a,b
figure 23

Laboratory test bed for a free-floating space robot: (a) The EFFORTS test bed, (b) a target capture result

The result clearly shows that the manipulator end point properly reached the target, although the robot base rotated considerably due to the manipulator reaction. Note that, since the target was stationary in this example, the resulting motion trace was a straight line. Thanks to the on-line control, however, the manipulator was also able to track and reach a moving target with the same control law.

The validity and effectiveness of GJM-based manipulator control were also demonstrated in orbit by the Japanese ETS-VII mission [55.14].

3.4 Linear and Angular Momenta

The integral of the upper set of equations in (55.1) gives a momentum equation, as shown in (55.4), which is composed of the linear and angular momenta. The linear part is expressed as

$$\breve{H}_b v_b + \breve{H}_{bm} \dot{\phi} = P ,$$
(55.13)

where $v_b$ is the linear velocity of the base, $P$ is the initial linear momentum, and the inertia matrices marked with $(\breve{~})$ are the corresponding components for the linear momentum [55.48]. When the linear momentum is further integrated, the result verifies the principle that the mass centroid of the entire system either remains stationary or translates with a constant velocity.

The angular momentum equation, however, does not have a second integral, and therefore provides a first-order nonholonomic constraint [55.53]. The equation can be expressed as

$$\tilde{H}_b \omega_b + \tilde{H}_{bm} \dot{\phi} = L ,$$
(55.14)

where $\omega_b$ is the angular velocity of the base, $L$ is the initial angular momentum, and the inertia matrices marked with $(\tilde{~})$ are the corresponding components for the angular momentum [55.48]; $\tilde{H}_{bm} \dot{\phi}$ represents the angular momentum generated by the manipulator motion.

Equation (55.14) can be solved for $\omega_b$ with zero initial angular momentum

$$\omega_b = -\tilde{H}_b^{-1} \tilde{H}_{bm} \dot{\phi} .$$
(55.15)

This expression describes the resulting disturbance motion of the base when there is joint motion $\dot{\phi}$ in the manipulator arm.

There are a number of points worth discussing when analyzing this equation. The magnitudes and directions of the maximum and minimum disturbances can be obtained from the singular value decomposition of the matrix $(-\tilde{H}_b^{-1} \tilde{H}_{bm})$ and displayed over the workspace; such a map is called a disturbance map [55.54, 55.55]. Equation (55.15) is also used for feed-forward compensation in coordinated manipulator/base control [55.56, 55.57].
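A minimal sketch of how (55.15) and its singular value decomposition might be evaluated numerically is given below; the matrices $\tilde{H}_b$ and $\tilde{H}_{bm}$ are assumed to be provided by a dynamics model at the current arm configuration.

```python
import numpy as np

def base_disturbance(Hb_t, Hbm_t, phi_dot):
    """Base angular velocity induced by joint rates, per (55.15)."""
    return -np.linalg.solve(Hb_t, Hbm_t @ phi_dot)

def disturbance_extremes(Hb_t, Hbm_t):
    """SVD of -Hb_t^{-1} Hbm_t: the largest and smallest singular values
    bound the attitude disturbance per unit joint velocity, as plotted
    in a disturbance map."""
    A = -np.linalg.solve(Hb_t, Hbm_t)
    U, S, Vt = np.linalg.svd(A)
    return S.max(), S.min()
```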

3.5 Virtual Manipulator

The concept of the virtual manipulator (VM) is an augmented kinematic representation that considers the base motion due to reaction forces or moments. The model is based on the fact that the mass centroid of the entire system does not move in a free-floating system without external forces [55.58]. The mobility of the end point of the arm is decreased by the base motion. In the VM representation, such mobility degradation is expressed by virtually shrinking the length of the real arm according to the mass properties. Note that the VM considers only linear momentum conservation. If the differential expression of the VM is obtained using a Jacobian matrix, the Jacobian is not a conventional kinematic Jacobian, but rather a version of the generalized Jacobian defined by the combination of the kinematic equation (55.2) and the linear momentum equation (55.13).

3.6 Dynamic Singularity

Dynamic singularities are singular configurations in which the manipulator end point loses mobility in some inertial direction [55.59]. They are not found in earth-based manipulators, but occur in free-floating space manipulator systems due to the coupling dynamics between the arm and the base. Dynamic singularities coincide with the singularities of the generalized Jacobian matrix determined by (55.6). Just as the singular value decomposition (SVD) of a manipulator Jacobian matrix provides manipulability analysis, the SVD of the generalized Jacobian matrix yields the manipulability analysis of a free-floating space manipulator [55.60]. Figure 55.24 compares the manipulability distribution of a 2-DOF ground-based manipulator and a 2-DOF floating manipulator, showing the degradation of manipulability in the space arm due to the dynamic coupling.

Fig. 55.24a,b
figure 24

Manipulability distribution in the work space (after [55.60])

3.7 Reaction Null-Space (RNS)

From a practical point of view, any change in the base attitude is undesirable. As such, manipulator motion planning methods that minimize the base attitude disturbance have been investigated extensively. Analysis of the angular momentum equation reveals that the ultimate goal of achieving zero disturbance is possible.

The following is the angular momentum equation (55.14) with zero initial angular momentum, $L = 0$, and zero attitude disturbance, $\omega_b = 0$

$$\tilde{H}_{bm} \dot{\phi} = 0 .$$
(55.16)

This equation yields the following null-space solution

$$\dot{\phi} = \left( I - \tilde{H}_{bm}^{+} \tilde{H}_{bm} \right) \dot{\zeta} .$$
(55.17)

The joint motion given by (55.17) is guaranteed not to disturb the base attitude. Here, the vector $\dot{\zeta} \in \mathbb{R}^n$ is arbitrary, and the null-space of the inertia matrix $\tilde{H}_{bm} \in \mathbb{R}^{3 \times n}$ is called the reaction null-space (RNS) [55.61].

The DOF of $\dot{\zeta}$ is $n - 3$. For example, if a manipulator arm mounted on a free-floating space robot has 6-DOF, i.e., $n = 6$, then 3-DOF remain in the reaction null-space. These DOFs can be specified by introducing additional motion criteria, such as end point positioning of the arm. Manipulator operation that produces no reaction in the base is called reactionless manipulation [55.62].
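The null-space projection in (55.17) is straightforward to compute with a pseudo-inverse. The following is a minimal sketch, assuming the $3 \times n$ coupling inertia matrix is available from the dynamics model; the arbitrary vector may come from a secondary task such as end point positioning.

```python
import numpy as np

def reactionless_rates(Hbm_t, zeta_dot):
    """Joint rates in the reaction null-space, per (55.17).
    Hbm_t: 3xn coupling inertia matrix; zeta_dot: arbitrary n-vector,
    e.g., from a secondary end-point positioning task."""
    n = Hbm_t.shape[1]
    P_rns = np.eye(n) - np.linalg.pinv(Hbm_t) @ Hbm_t
    return P_rns @ zeta_dot   # satisfies Hbm_t @ phi_dot = 0, per (55.16)
```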

The validity and effectiveness of RNS-based reactionless manipulation were also demonstrated in orbit by the Japanese ETS-VII mission [55.14].

3.8 Motion Planning Issues

This subsection addresses the generation of feasible trajectories for a free-flying robot executing typical point-to-point or grasping tasks. The subject falls under the problem domain of motion planning, with the aim of satisfying motion constraints that generally cannot be satisfied by local methods alone (feedback control and model predictive control). A trajectory resulting from the motion planning is fed to a tracking controller, which accounts for any modeling errors, to accomplish the task in question. This approach also aims at providing autonomous skills for supporting a human ground operator.

A typical task of interest here is a point-to-point maneuver of a robot manipulator mounted on a servicer satellite, to bring its end-effector to some desired inertial position and orientation. This task may require actuation of the servicer or may, if possible, preferably be executed in free-floating mode, to avoid issues related to the use of the on-board thrusters, such as fuel expenditure. In this context, to minimize the attitude change of the servicer resulting from the manipulator motion, a notable fundamental result for a point-to-point maneuver, derived from nonlinear optimization theory, is the V-maneuver [55.63]. Intuitively, the attitude change is minimized by making the robot first move radially inward, toward the system center of mass, before turning and then moving radially outward to reach the new desired final position.

A second task that has received much attention is the grasping of a freely tumbling target, such as a defunct satellite. Currently, a number of space programs worldwide are addressing this task, such as the preparation of the DEOS demonstration mission of the DLR (Sect. 55.1.2.6) for grasping a small satellite in LEO, the e.Deorbit study of ESA for deorbiting the defunct ENVISAT satellite, and PHOENIX of DARPA for grasping a geostationary satellite in the graveyard orbit. Different approaches can be found in the literature to tackle this problem, including [55.64, 55.65] for feedback control, [55.66] for model predictive control, [55.67] for optimal control in Cartesian space, and [55.68] for nonlinear optimization, to mention some.

Due to the nonholonomic nature of the dynamics of a free-floating robot (Sect. 55.3.4), in order to satisfy all relevant motion constraints, the motion planning problems above can only be solved through numerical integration. Note, in fact, that the final system configuration for a given final end-effector position in inertial space is a function of the whole path taken by the robot throughout the motion.

Principally, the motion planning tasks above may be formulated as an optimal control problem of the type

$$\min_{t_f,\, \phi(t)} \Gamma \left( \phi(t), \tau(t), t_f \right) ,$$
(55.18)

subject to

$$g \left( t_f, \phi(t) \right) = 0 ,$$
(55.19)
$$h \left( t_f, \phi(t) \right) \leq 0 ,$$
(55.20)

for $0 \leq t \leq t_f$, where $t_f$ is the final time; $\phi(t)$ is the vector of joint positions; $\tau(t)$ is the vector of joint torques; $\Gamma$ is a predefined cost function; $g$ are equality constraints, including, for example, the state transition equations; and $h$ are inequality constraints, for example, the joint box constraints on position, velocity, and torque, or collision avoidance constraints. Other motion constraints may include inequality constraints on the end-effector forces during contact, or other operational constraints.

This is an infinite-dimensional problem over the given time interval, which generally cannot be solved in closed form. Direct shooting optimization methods lend themselves well to solving such problems iteratively [55.69]; for example, the independent DOFs (in the case of a free-floating system, the robot joint angles) are parameterized in time as

$$\phi = \phi(t, p) ,$$
(55.21)

with $p \in \mathbb{R}^n$ for $n$ optimization parameters, where $\phi$ may be a polynomial function, a B-spline function, or any other parameterization. This way, the problem becomes that of finding a suitable value of the parameters $p$ that satisfies all motion constraints (feasibility), and perhaps also minimizes a cost function (optimality), such as the time-to-collision [55.70], the end time [55.67], or the mechanical energy [55.68]. The problem is thus transformed into a finite-dimensional problem and can be solved as a nonlinear programming problem (NLP) with classical numerical iterative methods such as sequential quadratic programming (SQP), as sketched below.
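The sketch below illustrates the direct-shooting idea with a cubic polynomial parameterization of (55.21) and SciPy's SLSQP solver. It is a minimal example under stated assumptions: the callback gjm(phi), returning the generalized Jacobian at a given configuration, is a hypothetical placeholder for the user's dynamics model, and the smoothness cost merely stands in for a problem-specific $\Gamma$.

```python
import numpy as np
from scipy.optimize import minimize

def phi_of_t(t, p, t_f):
    """(55.21): one cubic polynomial per joint; p has shape (n_joints, 4)."""
    u = t / t_f
    return p[:, 0] + p[:, 1] * u + p[:, 2] * u**2 + p[:, 3] * u**3

def end_effector_pose(p, t_f, gjm, n_steps=100):
    """Integrate x_dot_h = J^ phi_dot over [0, t_f], per (55.22)."""
    dt = t_f / n_steps
    x = np.zeros(6)
    for i in range(n_steps):
        t = i * dt
        phi = phi_of_t(t, p, t_f)
        phi_dot = (phi_of_t(t + dt, p, t_f) - phi) / dt
        x += gjm(phi) @ phi_dot * dt
    return x

def plan(p0, t_f, gjm, x_des):
    """Search the parameters p so that the integrated pose meets x_des."""
    n = p0.shape[0]
    cost = lambda q: np.sum(q.reshape(n, 4)[:, 1:] ** 2)   # smoothness surrogate
    cons = {'type': 'eq',
            'fun': lambda q: end_effector_pose(q.reshape(n, 4), t_f, gjm) - x_des}
    return minimize(cost, p0.ravel(), constraints=[cons], method='SLSQP')
```

Joint box constraints and collision avoidance terms of the type (55.20) would enter this NLP as additional inequality constraints.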

For example, for a Cartesian point-to-point problem, we have the following supplementary equality constraints at the end time

$$X_e = \int_0^{t_f} \hat{J} \dot{\phi}(t, p) \, \mathrm{d}t = X_e^{\mathrm{des}} ,$$
(55.22)

where $X_e \in \mathbb{R}^{\mathrm{DOF}_e}$ denotes the end-effector pose. Note that the constraint itself has $\mathrm{DOF} + 3 - \mathrm{DOF}_e$ solutions for the system configuration (for DOF robot joints); however, these all imply a specific base body attitude as well as a specific robot configuration. Solving the problem of reaching a specific system configuration is a hard nonholonomic control problem, which is avoided by solving the constraint through the integral above with an adequate parameter set $p$.

Extensions of this formulation are necessary to treat the grasping task. The latter can ideally be separated into three phases: approach, tracking, and stabilization. The first comprises a point-to-point maneuver, however with a nonzero end velocity. The second comprises a tracking maneuver, in which the Cartesian motion of the robot end-effector is dictated by the tumbling motion and the geometry of the target as the end-effector follows the grasping point and homes in on it, to finally grasp it. Note that this phase aims at minimizing the impact between the end-effector and the target. The third phase involves a robot joint velocity decay maneuver, once the target is grasped. This formulation results in a multiple-phase problem, in which the boundaries between the phases introduce supplementary motion constraints in the motion planning problem.

From a methodological point of view, note that there is no simple measure to determine if and when the grasping point on the target will be reachable from the current robot configuration (Fig. 55.25) and whether the trajectory which derives from a local control law will be feasible at all times (accounting for the motion constraints listed above). It is also necessary to provide information on the time synchronization between the motion of the grasping point on the target and that of the robot. Furthermore, the nonlinear nature of the robot kinematics needs to be exploited to favor a successful grasp. These considerations speak in favor of the use of a reference trajectory, which is computed off-line by means of a global search, based on a motion prediction of the target and on a geometric model of the same target and of the space robot [55.68].

Fig. 55.25
figure 25

Orbital scenario: servicer satellite with 7-DOF manipulator and target satellite with solar panels. Coordinate system of a predefined grasping point on target ring shown

It is also well known that optimization methods suffer from convergence issues (arising from local minima), if a judicious initial guess is not available. It is for this reason that a look-up table approach is necessary [55.68] in order to provide a sufficiently high probability of convergence for a given grasping task.

The necessity of a look-up table also arises from the long computation times of the optimization process, which result from the aforementioned necessity to integrate the equations of motion for each optimization iteration. This time is generally reduced if a solution close to the sought one is given as a starting point. An attempt to eliminate the necessity of integrating the equations of motion was made in [55.71], where a differentially flat representation of a free-floating robot was sought. In such a formulation, there are as many differential equations as there are independent state variables, or flat variables, and as such, any parameterization of the flat variables is a solution of the equations of motion. Such a representation was found for the case in which the robot has three joints and the center of mass of its load lies on the rotation axis of the last joint.

It is also of interest to make some considerations on practical technological issues that can influence one or another future research direction. With regard to the minimization of the servicer attitude change during robotic operations, it is worth realizing that this is generally only an issue for maintaining the communication link from low Earth orbit with a geostationary satellite, for which a high pointing accuracy is required (for a communication link from low Earth orbit to ground, an omnidirectional antenna is sufficient instead). Therefore, rather than limiting the robot workspace to its reaction null-space to avoid any attitude change at all, in which case the resulting robot movements would generally be confined to the extent of becoming of little practical use, a simple technological solution is possible through the implementation of a gimbal joint for the communication antenna on the servicer.

Another simple technological solution to a theoretically involved problem is that of controlling the attitude of the servicer by means of adequate closed-loop maneuvers of the robot. As is well known, reaction wheels on a satellite achieve the same result, by the same principle of momentum transfer, but with a far simpler control law than that necessary for a robot. It is then important to realize that, although reaction wheels have so far been far too small for any useful robotic application, larger reaction wheels are currently being developed, for example, in the context of the BIROS mission of the DLR, which is planned to fly in 2014. Although these will still not be able to fully compensate the typical robot dynamic coupling terms (if the servicer attitude is to be fully stabilized), they will be able to perform significant attitude slew maneuvers of the servicer quickly, without having to resort to the use of the robot or of the thrusters. Furthermore, they will be useful in reducing the robot singularities, owing to the enhanced actuation of the system.

3.9 Equations for Flexible-Based Manipulator Systems

Next, let us consider a flexible-based manipulator system in which a single or multiple manipulator arm(s) are supported by a flexible beam or a spring and damper (visco-elastic) system. For such a system, the equation of motion is obtained using the following variables:

  1. $x_b \in \mathbb{R}^6$: position/orientation of the base

  2. $\phi \in \mathbb{R}^n$: joint angles of the arm

  3. $x_h \in \mathbb{R}^{6k}$: position/orientation of the end point(s)

  4. $F_b \in \mathbb{R}^6$: forces/moments that deflect the flexible base

  5. $\tau \in \mathbb{R}^n$: joint torques of the arm

  6. $F_h \in \mathbb{R}^{6k}$: external forces/moments on the end point(s).

$$\begin{bmatrix} H_b & H_{bm} \\ H_{bm}^T & H_m \end{bmatrix} \begin{bmatrix} \ddot{x}_b \\ \ddot{\phi} \end{bmatrix} + \begin{bmatrix} c_b \\ c_m \end{bmatrix} = \begin{bmatrix} F_b \\ \tau \end{bmatrix} + \begin{bmatrix} J_b^T \\ J_m^T \end{bmatrix} F_h ,$$
(55.23)
$$\dot{x}_h = J_m \dot{\phi} + J_b \dot{x}_b ,$$
(55.24)
$$\ddot{x}_h = J_m \ddot{\phi} + \dot{J}_m \dot{\phi} + J_b \ddot{x}_b + \dot{J}_b \dot{x}_b .$$
(55.25)

Here, with the gravitational force g in Cartesian space, the term cb is generally expressed as

$$c_b = f(x_b, \phi, \dot{x}_b, \dot{\phi}) + g(x_b, \phi) .$$
(55.26)

The difference from the equation of a free-floating manipulator system (55.1) is the existence of the base constraint force $F_b$. Let $D_b$ and $K_b$ be the matrices representing the damping and spring factors, respectively, of the flexible base. The constraint forces and moments $F_b$ are then expressed as

$$F_b = -D_b \dot{x}_b - K_b \Delta x_b .$$
(55.27)

Since the base is constrained, the total momentum is not conserved, and examining the total system momentum may not be meaningful. However, it is important to consider the partial momentum $L_m$ of the manipulator arm

$$L_m = H_{bm} \dot{\phi} ,$$
(55.28)

which is termed the coupling momentum [55.72]. Its time derivative describes the forces and moments $F_m$ produced by the dynamic reaction from the manipulator arm onto the base

$$F_m = H_{bm} \ddot{\phi} + \dot{H}_{bm} \dot{\phi} .$$
(55.29)

Using $F_b$ and $F_m$, the upper set of equations in (55.23) is rearranged as

$$H_b \ddot{x}_b + D_b \dot{x}_b + K_b \Delta x_b = -g - F_m + J_b^T F_h .$$
(55.30)

Equation (55.23) or (55.30) is a familiar expression for flexible-base manipulators [55.73, 55.74].

3.10 Advanced Control for Flexible Structure Based Manipulators

In this subsection, advanced control issues for a macro–micro space manipulator are discussed. The SPDM attached to the SSRMS (Fig. 55.4) and the JEMRMS (Fig. 55.5) are examples. Operation modes for this class of space manipulators include coarse motion by the macro (long-reach) component and fine manipulation by the micro component. Normally, these two control modes are executed exclusively: while one component is active, the other is servo (or brake) locked. Thus, during the operation of the micro component, the macro component behaves simply as a passive base.

Due to the flexible nature of a long-reach space arm, the macro part is subject to vibrations. These vibrations can be excited during coarse positioning and may persist for a long time after each operation. In fine manipulation, the macro arm behaves as a passive flexible structure, but vibrations can then be excited by reactions from the motion of the micro arm. These motions degrade the control accuracy and operational performance of the system. In practice, the booms are usually sufficiently stiff; flexibility comes mainly from the low stiffness of the joints and gear trains. Moreover, the lightweight structure and microgravity environment make the system prone to vibrations, and the surrounding vacuum, lacking air viscosity, provides little damping to the structure.

Conventionally, the vibration issue has been managed for SRMS and SSRMS by the operational skill of well-trained astronauts and by limiting the maximum operational velocity according to the inertia of the handling object. However, if an advanced controller is introduced on the ISS, the training time for astronauts will be reduced and the operational speed can be increased.

Here, the following two subtasks are considered in dealing with this issue:

  • Suppression of the vibrations of the flexible base

  • End point control in the presence of vibrations.

To suppress the vibrations of the macro arm (flexible base), the coupled dynamics is effectively used. Such control is called coupled vibration suppression control [55.75]. The coupled dynamics is a solution space of the micro arm motion with maximum coupling with the vibration dynamics of the macro arm. Note that this solution space is perpendicular to the reaction null-space introduced in Sect. 55.3.7. Since the spaces are orthogonal, the coupled vibration suppression control and reactionless manipulation can be superimposed without any mutual interference.

The motion command of the micro arm to suppress the vibrations is determined with feedback of the linear and angular velocity of the end point of the macro arm, $\dot{x}_b$

$$\ddot{\phi} = H_{bm}^{+} H_b G_b \dot{x}_b ,$$
(55.31)

where $(\cdot)^{+}$ denotes the right pseudo-inverse, and $G_b$ is a positive definite gain matrix.

If written in the form of a joint torque input, the vibration control law is expressed as follows

$$\tau = H_m H_{bm}^{+} G_b \dot{x}_b .$$
(55.32)

In the presence of redundancy in the micro arm, (55.31) can be extended to control with a null-space component

$$\ddot{\phi}_m = H_{bm}^{+} \left( H_b G_b \dot{x}_b - \dot{H}_{bm} \dot{\phi}_m \right) + P_{\mathrm{RNS}} \zeta ,$$
(55.33)

where $\zeta$ is an arbitrary $n$-DOF vector and $P_{\mathrm{RNS}} = (E - H_{bm}^{+} H_{bm})$ is a projector onto the null space of the coupling inertia matrix $H_{bm}$. When the micro arm is operated using (55.33), the closed-loop system is expressed as

$$H_b \ddot{x}_b + H_b G_b \dot{x}_b + K_b \Delta x_b = F_b + J_b^T F_h .$$
(55.34)

Equation (55.34) represents a second-order damped vibration system. With no force input, i.e., $F_b = F_h = 0$, the vibrations converge to zero with a proper choice of the gain matrix $G_b$.

For the determination of vector ζ, feedback control to reduce the positioning error of the micro arm end point is considered. The error vector is defined as

$$\tilde{x}_h = x_{hd} - x_h .$$
(55.35)

After some derivation, the control law for the joint torque of the micro arm is obtained in the following form [55.75, 55.76]

$$\tau = \left( (J_m^T)^{+} H_m P_{\mathrm{RNS}} \right)^{+} \left( K_p \tilde{x}_h - K_d J_m \dot{\phi} \right) - G_m \dot{\phi} ,$$
(55.36)

where $K_p$, $K_d$, and $G_m$ are positive definite gain matrices.
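A compact sketch of the two control laws is given below. All matrices are assumed to come from the dynamics model at the current state, and the gains are tuning parameters chosen positive definite; this is an illustration of (55.32) and (55.36), not flight code.

```python
import numpy as np

def vibration_suppression_torque(H_m, H_bm, G_b, xb_dot):
    """Coupled vibration suppression, per (55.32)."""
    return H_m @ np.linalg.pinv(H_bm) @ G_b @ xb_dot

def end_point_torque(J_m, H_m, H_bm, K_p, K_d, G_m, x_err, phi_dot):
    """Null-space end point controller, per (55.36)."""
    n = H_bm.shape[1]
    P_rns = np.eye(n) - np.linalg.pinv(H_bm) @ H_bm   # reaction null-space projector
    A = np.linalg.pinv(J_m.T) @ H_m @ P_rns
    return np.linalg.pinv(A) @ (K_p @ x_err - K_d @ J_m @ phi_dot) - G_m @ phi_dot
```

Because the two torques act in orthogonal solution spaces, as noted above, they can be superimposed without mutual interference.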

Figure 55.26 shows a block diagram for the control system described by (55.32) and (55.36).

Fig. 55.26
figure 26

Block diagram for simultaneous vibration suppression and manipulator end point control for a flexible-structure-mounted manipulator system

As a simplified demonstration, a planar system with a four-joint redundant manipulator arm atop a flexible beam is considered. Figure 55.27 shows the vibration amplitude of the flexible beam after an impulsive external force. The graph labeled w/o vs depicts the vibrations of the beam without any manipulator control but with natural damping. The graph labeled with vs depicts the case in which vibration suppression control given by (55.32) is applied, where the vibrations are damped quickly.

Fig. 55.27
figure 27

Vibrations of the flexible base

In addition, Fig. 55.28 shows the end point motion of the manipulator arm during the control. The graph labeled w/o RNS is the case of using (55.32), where the base vibrations were successfully suppressed but the position of the manipulator end point was deflected by this suppression behavior. The graph labeled with RNS depicts the case in which both vibration suppression control given by (55.32) and the end point control given by (55.36) are applied simultaneously. This last case shows that the vibrations were damped successfully and that the positioning error of the manipulator end point converged to zero. This is a result of the redundancy of the arm.

Fig. 55.28
figure 28

Positioning error of the manipulator end tip

Here, note that the proposed control method requires precise information on dynamic characteristics, such as the inertia parameters of the arms and the handled payload, if any. To achieve more practical applications, the proposed method must be extended to schemes for parameter identification [55.77] and adaptive control [55.78], with which the convergence of the control is guaranteed even with imprecise a priori knowledge of the dynamic parameters [55.76].

3.11 Contact Dynamics and Impedance Control

The capture and retrieval operation of a floating and tumbling target, such as a malfunctioning satellite, by a manipulator arm mounted on a servicing robot (called a chaser) can be decomposed into the following three phases:

  1. Approaching phase (before contact with the target)

  2. Contact/impact phase (at the moment of contact)

  3. Post-contact phase (after contact or grasping).

If the contact is impulsive, the first and third phases are separated by the impulsive phenomenon of the second phase. An understanding of this impulsive phenomenon is indispensable when designing a comprehensive capture control scheme. In this subsection, the formulation of impact dynamics is first considered, and impedance control (which is useful to minimize the impact forces and prolong the contact duration) is then discussed.

Let us consider a chain of rigid links composed of $n+1$ bodies freely floating in inertial space. As discussed in Sect. 55.3.3, the equation of motion for this type of system becomes (55.8). Here, the impulsive contact force is assumed to be applied at the manipulator end tip and is expressed as $F_h = (f_h^T, n_h^T)^T \in \mathbb{R}^6$. This impulsive force also yields a change in the system momenta $(P_g^T, L_g^T)^T \in \mathbb{R}^6$, expressed as

$$\begin{bmatrix} \dot{P}_g \\ \dot{L}_g \end{bmatrix} = \begin{bmatrix} wE & 0 \\ 0 & I_g \end{bmatrix} \begin{bmatrix} \dot{v}_g \\ \dot{\omega}_g \end{bmatrix} + \begin{bmatrix} 0 \\ \omega_g \times I_g \omega_g \end{bmatrix} = \begin{bmatrix} E & 0 \\ \tilde{r}_{gh} & E \end{bmatrix} \begin{bmatrix} f_h \\ n_h \end{bmatrix} ,$$
(55.37)

where $E$ is a $3 \times 3$ identity matrix, $w$ is the total mass of the system, and the symbols with the suffix $g$ indicate the corresponding values observed around the mass centroid of the $(n+1)$-link system.

In the above equations, (55.8) describes an internal joint motion (termed local motion) of the system, whereas (55.37) describes the overall motion (termed global motion) about the centroid of the system. As a result of force input Fh, the floating chain induces both the local motion around its articulated joints and the global translation and rotation with respect to the centroid.

From (55.8) and (55.37), the acceleration of the manipulator end tip $\alpha_h$ can be expressed in the inertial frame as

$$\alpha_h = G^{*} F_h + d^{*} ,$$
(55.38)

where

$$G^{*} = \hat{J} \hat{H}^{-1} \hat{J}^T + R_h M^{-1} R_h^T ,$$
(55.39)
$$R_h = \begin{bmatrix} E & -\tilde{r}_{gh} \\ 0 & E \end{bmatrix} , \qquad M = \begin{bmatrix} wE & 0 \\ 0 & I_g \end{bmatrix} ,$$
(55.40)

and $d^{*}$ is a velocity-dependent term.

Equations (55.38)–(55.40) are expressions for the motion of the hand (the point at which the collision occurs) induced by the impact force $F_h$, where the matrix $G^{*}$, the augmented version of (55.10), represents the dynamic characteristics of this system.

Further augmentation for the inverted inertia matrix has been discussed for the case in which the contact duration is not considered to be infinitesimal [55.51].

Now let us assume the case in which two free-floating chains, A and B, with dynamic characteristics $G_A^{*}$ and $G_B^{*}$ collide with each other at their respective hands (end points), and an impact force $F_h$ is induced by this collision.

The equations of motion at the instant of collision are

$$G_A^{*} F_h = \begin{bmatrix} \dot{v}_{hA} \\ \dot{\omega}_{hA} \end{bmatrix} - d_A^{*}$$
(55.41)

for the chain A, and

$$G_B^{*} (-F_h) = \begin{bmatrix} \dot{v}_{hB} \\ \dot{\omega}_{hB} \end{bmatrix} - d_B^{*}$$
(55.42)

for the chain B, where the subscripts A and B indicate the label of the chain.

Assuming that $G_A^{*}$ and $G_B^{*}$ remain constant during the infinitesimal contact duration and that the velocity-dependent terms $d_A^{*}$ and $d_B^{*}$ are small and negligible, integration of (55.41) and (55.42) yields

$$G_A^{*} \int_t^{t+\delta t} F_h \, \mathrm{d}t = \begin{bmatrix} v_{hA} \\ \omega_{hA} \end{bmatrix}' - \begin{bmatrix} v_{hA} \\ \omega_{hA} \end{bmatrix} ,$$
(55.43)
$$G_B^{*} \int_t^{t+\delta t} (-F_h) \, \mathrm{d}t = \begin{bmatrix} v_{hB} \\ \omega_{hB} \end{bmatrix}' - \begin{bmatrix} v_{hB} \\ \omega_{hB} \end{bmatrix} ,$$
(55.44)

where $(\cdot)'$ indicates the velocity after the collision. The integral of $F_h$,

$$\bar{F}_h = \lim_{\delta t \to 0} \int_t^{t+\delta t} F_h \, \mathrm{d}t ,$$
(55.45)

represents the impulse (force–time product) acting on both chains. Provided that the total momenta of the two systems are strictly conserved before and after the collision, we obtain the following expression from (55.43) and (55.44)

$$\left( G_A^{*} + G_B^{*} \right) \bar{F}_h = \begin{bmatrix} v_{hA} \\ \omega_{hA} \end{bmatrix}' - \begin{bmatrix} v_{hB} \\ \omega_{hB} \end{bmatrix}' + \begin{bmatrix} v_{hB} \\ \omega_{hB} \end{bmatrix} - \begin{bmatrix} v_{hA} \\ \omega_{hA} \end{bmatrix} .$$
(55.46)

In general collision analysis, the coefficient of restitution (elasticity factor) associated with the relative velocities before and after the collision is often employed [55.79]. If we accept this restitution coefficient for 6-DOF linear and angular velocities, the relationship between relative velocities before and after contact becomes

$$\begin{bmatrix} v_{hA} \\ \omega_{hA} \end{bmatrix}' - \begin{bmatrix} v_{hB} \\ \omega_{hB} \end{bmatrix}' = \epsilon \left( \begin{bmatrix} v_{hB} \\ \omega_{hB} \end{bmatrix} - \begin{bmatrix} v_{hA} \\ \omega_{hA} \end{bmatrix} \right) ,$$
(55.47)

where

$$\epsilon \equiv \mathrm{diag}(e_1, \ldots, e_6) , \quad 0 \leq e_i \leq 1 ,$$
(55.48)

is the restitution coefficient matrix.

Substituting (55.47) into (55.46), the impulse induced by this collision can be expressed using only the relative velocity of the two points before the contact

$$\bar{F}_h = \left( E_6 + \epsilon \right) G_\Sigma^{*\,-1} V_{hAB} ,$$
(55.49)

where

$$G_\Sigma^{*} = G_A^{*} + G_B^{*} ,$$
(55.50)
$$V_{hAB} = \begin{bmatrix} v_{hB} \\ \omega_{hB} \end{bmatrix} - \begin{bmatrix} v_{hA} \\ \omega_{hA} \end{bmatrix} .$$
(55.51)

Using the introduced notation, the magnitude of the impulse is expressed as

$$\left\| \bar{F}_h \right\| = \sqrt{ V_{hAB}^T \left( E_6 + \epsilon \right)^T G_\Sigma^{*\,-T} G_\Sigma^{*\,-1} \left( E_6 + \epsilon \right) V_{hAB} } ,$$
(55.52)

and the velocity after collision becomes

$$\begin{bmatrix} v_{hA} \\ \omega_{hA} \end{bmatrix}' = \left( G_A^{*\,-1} + G_B^{*\,-1} \right)^{-1} \left[ \left( E_6 + \epsilon \right) G_B^{*\,-1} \begin{bmatrix} v_{hB} \\ \omega_{hB} \end{bmatrix} + \left( G_A^{*\,-1} - \epsilon\, G_B^{*\,-1} \right) \begin{bmatrix} v_{hA} \\ \omega_{hA} \end{bmatrix} \right] ,$$
(55.53)

where suffixes A and B are interchangeable.

These expressions can be considered an extension of the impact theory for a two-point-mass system to articulated-body systems.
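A minimal numerical sketch of (55.49)–(55.53) is given below, assuming the $6 \times 6$ inverted inertia matrices $G_A^{*}$ and $G_B^{*}$ have already been evaluated at the contact configuration.

```python
import numpy as np

def collision_impulse(G_A, G_B, v_A, v_B, eps):
    """Impulse on the two chains, per (55.49)-(55.51).
    v_A, v_B: 6-vectors of hand linear/angular velocity before contact.
    eps: 6x6 restitution matrix, per (55.48)."""
    V = v_B - v_A                                          # (55.51)
    return (np.eye(6) + eps) @ np.linalg.solve(G_A + G_B, V)

def post_impact_velocity(G_A, G_B, v_A, v_B, eps):
    """Hand velocity of chain A after impact, per (55.53)."""
    GA_inv, GB_inv = np.linalg.inv(G_A), np.linalg.inv(G_B)
    rhs = (np.eye(6) + eps) @ GB_inv @ v_B + (GA_inv - eps @ GB_inv) @ v_A
    return np.linalg.solve(GA_inv + GB_inv, rhs)
```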

Impedance control is a concept by which the manipulator end tip can be controlled so as to exhibit desired mechanical impedance characteristics. Such control is useful to alter the dynamic characteristics of the arm during the contact phase. In a special case, the desired impedance of the manipulator end tip (hand) may be tuned to achieve impedance matching with the colliding target object, so that the hand can easily maintain stable contact with the target [55.80].

Let Md, Dd, and Kd be the matrices for the desired impedance properties of inertia, viscosity, and stiffness, respectively, measured at the manipulator end point. The equation of motion for the desired system is then expressed as

$$M_d \ddot{x}_h + D_d \Delta \dot{x}_h + K_d \Delta x_h = F_h .$$
(55.54)

From (55.8) and (55.54), the impedance control law for a free-floating manipulator system is obtained as [55.81]

$$\tau_h = H^{*} \hat{J}^{-1} \left\{ M_d^{-1} \left( D_d \Delta \dot{x}_h + K_d \Delta x_h - F_h \right) - \dot{\hat{J}} \dot{\phi} - \ddot{x}_{gh} \right\} - \hat{J}^T F_h + c^{*} .$$
(55.55)

The usefulness of the impedance control in free-flying space robots has been discussed in [55.81, 55.82, 55.83].
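As a simple illustration of the concept, the target impedance (55.54) can be resolved into a commanded hand acceleration, which a computed-torque layer such as (55.55) then maps to joint torques. The following is a sketch, assuming $\Delta x_h$ and $\Delta \dot{x}_h$ are measured deviations from the desired hand trajectory.

```python
import numpy as np

def impedance_hand_acceleration(M_d, D_d, K_d, dx, dx_dot, F_h):
    """Commanded hand acceleration from the target impedance (55.54):
    M_d x_dd_h = F_h - D_d dx_dot - K_d dx."""
    return np.linalg.solve(M_d, F_h - D_d @ dx_dot - K_d @ dx)
```

Choosing $M_d$, $D_d$, and $K_d$ to match the impedance of the target object softens the contact and helps the hand maintain stable contact, as noted above.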

3.12 Dynamics of Mobile Robots

The equation of motion for a mobile robot with multiple articulated limbs, such as that shown in Fig. 55.29, is given by the following equation

$$\begin{bmatrix} H_b & H_{bm1} & \cdots & H_{bmk} \\ H_{bm1}^T & H_{m11} & \cdots & H_{m1k} \\ \vdots & \vdots & \ddots & \vdots \\ H_{bmk}^T & H_{m1k}^T & \cdots & H_{mkk} \end{bmatrix} \begin{bmatrix} \ddot{x}_b \\ \ddot{\phi}_1 \\ \vdots \\ \ddot{\phi}_k \end{bmatrix} + \begin{bmatrix} c_b \\ c_{m1} \\ \vdots \\ c_{mk} \end{bmatrix} = \begin{bmatrix} F_b \\ \tau_1 \\ \vdots \\ \tau_k \end{bmatrix} + \begin{bmatrix} J_b^T F_{ex} \\ J_{m1}^T F_{ex1} \\ \vdots \\ J_{mk}^T F_{exk} \end{bmatrix} ,$$
(55.56)

where the symbols have the following meanings:

$k$: number of limbs

$x_b \in \mathbb{R}^6$: position/orientation of the base

$\phi = (\phi_1^T, \ldots, \phi_k^T)^T \in \mathbb{R}^n$: articulated joint angles

$x_{ex} = (x_{ex1}^T, \ldots, x_{exk}^T)^T \in \mathbb{R}^{6k}$: position/orientation of the end points

$F_b \in \mathbb{R}^6$: forces/moments applied directly on the base

$\tau = (\tau_1^T, \ldots, \tau_k^T)^T \in \mathbb{R}^n$: articulated joint torques

$F_{ex} = (F_{ex1}^T, \ldots, F_{exk}^T)^T \in \mathbb{R}^{6k}$: external forces/moments on the end points.

Fig. 55.29
figure 29

Schematic model of a mobile robot

Note that for Fig. 55.29, $F_{exi} = (f_{wi}^T, n_{wi}^T)^T$.

Comparing the above equations with (55.1), no difference is observed in the mathematical structure. The gravity force on the vehicle main body and the configuration-dependent gravity terms of the articulated bodies are included in $c_b$ and $c_{mi}$, respectively. In practice, however, one substantial difference is the existence of ground contact forces/moments at the end point of each limb. Unlike the capture of a floating target discussed in Sect. 55.3.11, the contact is not considered impulsive, but instead continues for a nonnegligible period of time. In such cases, a well-accepted approach is to explicitly evaluate the contact forces/moments $F_{ex}$ according to the virtual penetration of the end point into the contacted object or the ground surface [55.79]; a minimal sketch follows.
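The following sketch illustrates such a penalty-type (virtual penetration) contact model: the normal force grows with penetration depth and penetration rate. The stiffness and damping constants here are illustrative placeholders that would, in practice, be matched to the compliance of the soil or structure.

```python
import numpy as np

def penalty_contact_force(depth, depth_rate, normal, k_c=1.0e5, d_c=1.0e3):
    """Spring-damper normal contact force from virtual penetration.
    depth: penetration depth (m); depth_rate: its time derivative (m/s);
    normal: unit surface normal (3-vector)."""
    if depth <= 0.0:
        return np.zeros(3)                    # no contact
    f_n = k_c * depth + d_c * depth_rate      # penalty force magnitude
    return max(f_n, 0.0) * normal             # push only, never pull
```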

In cases in which each limb has a wheel at its end, a wheel traction model, rather than the point penetration model, is adopted to evaluate $F_{ex}$. For planetary exploration missions, rovers (mobile robots) are expected to travel over natural rough terrain. A number of studies have examined the modeling of tire traction forces on loose soil, called regolith (soil with no organic component) [55.84, 55.85, 55.86, 55.87, 55.88, 55.89, 55.90, 55.91, 55.92, 55.93, 55.94, 55.95, 55.96, 55.97, 55.98]. In particular, these studies draw on the branch of soil mechanics called terramechanics to understand the tractive forces generated by wheels.

In the following subsection, models for wheel traction mechanics are summarized.

3.13 Wheel Traction Mechanics

Terramechanics is the study of soil properties, specifically the interaction of wheeled, legged, or tracked vehicles with various surfaces. For the modeling of wheel traction forces and the analysis of vehicle mobility, the textbooks by Bekker [55.29, 55.30] and Wong [55.99] are good references. Although these books were written in the 1960s and 1970s, their basic formulae are frequently cited by researchers even today [55.88]. In this subsection, the models for a rigid wheel on loose soil are summarized.

3.13.1 Slip Ratio and Slip Angle

Slips are generally observed when a rover travels on loose soil. In particular, slips in the lateral direction are observed during steering or slope-traversing maneuvers.

The slip in the longitudinal direction is measured by the slip ratio $s$, which is defined as a function of the longitudinal traveling velocity $v_x$ of the wheel and the circumferential velocity $r\omega$ of the wheel ($r$ is the wheel radius and $\omega$ is the angular velocity of the wheel)

$$s = \begin{cases} \dfrac{r\omega - v_x}{r\omega} & \text{if } |r\omega| > |v_x| \text{: driving} \\[2mm] \dfrac{r\omega - v_x}{v_x} & \text{if } |r\omega| < |v_x| \text{: braking} \end{cases}$$
(55.57)

The slip ratio takes a value from −1 to 1.

The slip in the lateral direction, on the other hand, is measured by the slip angle $\beta$, which is defined in terms of $v_x$ and the lateral traveling velocity $v_y$ as

$$\beta = \tan^{-1} \frac{v_y}{v_x} .$$
(55.58)

Note that the above definitions (55.57) and (55.58) have traditionally been used as standards in the vehicle community. However, planetary rovers in challenging terrain, such as Spirit and Opportunity on Mars, have experienced cases in which the rover slips backward while attempting to drive uphill, or travels faster than the wheel's circumferential velocity when driving downhill. In these cases, the slip ratio can exceed the range from −1 to 1. Also, while traversing side slopes, the case may arise in which $v_y > 0$ but $v_x$ is nearly 0, making the definition (55.58) nearly singular. These definitions therefore need further consideration for rovers on very loose terrain; a guarded implementation is sketched below.
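The sketch below implements (55.57) and (55.58) with simple guards for the near-singular cases noted above; the thresholds are illustrative assumptions.

```python
import numpy as np

def slip_ratio(v_x, omega, r, eps=1e-6):
    """Longitudinal slip, per (55.57)."""
    r_omega = r * omega
    if abs(r_omega) < eps and abs(v_x) < eps:
        return 0.0                              # at rest: conventionally zero
    if abs(r_omega) > abs(v_x):                 # driving
        return (r_omega - v_x) / r_omega
    return (r_omega - v_x) / v_x                # braking

def slip_angle(v_x, v_y):
    """Lateral slip, per (55.58); atan2 avoids the v_x = 0 singularity."""
    return np.arctan2(v_y, v_x)
```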

3.13.2 Wheel–Soil Contact Angle

Figure 55.30 depicts a schematic model of a rigid wheel contacting loose soil. In the figure, the angle from the surface normal to the point at which the wheel initially makes contact with the soil ($\angle$AOB) is defined as the entry angle. The angle from the surface normal to the point at which the wheel departs from the soil ($\angle$BOC in Fig. 55.30) is the exit angle. The wheel contact region on loose soil extends from the entry angle to the exit angle.

Fig. 55.30
figure 30

Wheel contact angles

The entry angle $\theta_f$ is geometrically described in terms of the wheel sinkage $h$ as

$$\theta_f = \cos^{-1} \left( 1 - \frac{h}{r} \right) .$$
(55.59)

The exit angle $\theta_r$ is described using the wheel sinkage ratio $\lambda$, which denotes the ratio between the forward and rear sinkage of the wheel

$$\theta_r = \cos^{-1} \left( 1 - \frac{\lambda h}{r} \right) .$$
(55.60)

The value of $\lambda$ depends on the soil characteristics, the wheel surface pattern, and the slip ratio. It becomes smaller than 1.0 when soil compaction occurs, but can be greater than 1.0 when soil is dug up by the wheel and transported to the rear region of the wheel.

3.13.3 Wheel Sinkage

The wheel sinkage consists of static and dynamic components. The static sinkage depends on the vertical load on the wheel, while the dynamic sinkage is caused by the rotation of the wheel.

According to the equation formulated by Bekker [55.29], the static stress $p(h)$ generated under a flat plate of sinkage $h$ and width $b$ is calculated as

$$p(h) = \left( \frac{k_c}{b} + k_\phi \right) h^n ,$$
(55.61)

where $k_c$ and $k_\phi$ are the pressure-sinkage moduli, and $n$ is the sinkage exponent. Applying (55.61) to the wheel, as shown in Fig. 55.31, the static sinkage is evaluated as follows.

Fig. 55.31
figure 31

Static sinkage

First, the wheel sinkage $h(\theta)$ at an arbitrary wheel angle $\theta$ is geometrically given by

$$h(\theta) = r \left( \cos\theta - \cos\theta_s \right) ,$$
(55.62)

where $\theta_s$ is the static contact angle. Then, substituting (55.62) into (55.61) yields

$$p(\theta) = r^n \left( \frac{k_c}{b} + k_\phi \right) \left( \cos\theta - \cos\theta_s \right)^n .$$
(55.63)

The wheel eventually sinks into the soil until the stress from the soil balances the vertical load W on the wheel.

$$W = \int_{-\theta_s}^{\theta_s} p(\theta)\, b r \cos\theta \, \mathrm{d}\theta = r^{n+1} \left( k_c + k_\phi b \right) \int_{-\theta_s}^{\theta_s} \left( \cos\theta - \cos\theta_s \right)^n \cos\theta \, \mathrm{d}\theta .$$
(55.64)

Using this equation, the static contact angle $\theta_s$ is evaluated for a given $W$. In practice, (55.64) does not yield a closed-form solution for $\theta_s$, although $\theta_s$ can be evaluated numerically.

Finally, the static sinkage hs is obtained by substituting θs into the following equation

$$h_s = r \left( 1 - \cos\theta_s \right) .$$
(55.65)
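Since (55.64) has no closed-form solution, $\theta_s$ is typically found by a one-dimensional root search. The sketch below uses SciPy's quad and brentq, under the assumption that the load $W$ lies within the reaction the soil can produce for $\theta_s < \pi/2$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def soil_reaction(theta_s, r, b, k_c, k_phi, n):
    """Right-hand side of (55.64) for a trial static contact angle."""
    integrand = lambda th: (np.cos(th) - np.cos(theta_s)) ** n * np.cos(th)
    val, _ = quad(integrand, -theta_s, theta_s)
    return r ** (n + 1) * (k_c + k_phi * b) * val

def static_sinkage(W, r, b, k_c, k_phi, n):
    """Solve (55.64) for theta_s, then evaluate (55.65)."""
    f = lambda th: soil_reaction(th, r, b, k_c, k_phi, n) - W
    theta_s = brentq(f, 1e-6, np.pi / 2 - 1e-6)   # reaction grows with theta_s
    return r * (1.0 - np.cos(theta_s)), theta_s
```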

However, as illustrated in Fig. 55.32, the dynamic sinkage is a complicated function of the slip ratio of the wheel, the wheel surface pattern, and the soil characteristics. Although it is difficult to obtain an analytical form for the dynamic sinkage, it is again possible to evaluate it numerically, using the condition $W = F_z$, where $F_z$ is the normal force given by (55.76), which will be presented later. The force $F_z$ increases with the wheel sinkage because the area of the contact patch increases accordingly.

Fig. 55.32
figure 32

Dynamic sinkage

3.13.4 Stress Distribution Under the Wheel

Based on terramechanics models, the stress distribution under the rotating wheel can be modeled as shown in Fig. 55.33.

Fig. 55.33
figure 33

Stress distribution model under a wheel

The normal stress $\sigma(\theta)$ is determined by the following equations [55.87, 55.88]

$$\sigma(\theta) = r^n \left( \frac{k_c}{b} + k_\phi \right) \left( \cos\theta - \cos\theta_f \right)^n \quad \text{for } \theta_m \leq \theta < \theta_f ,$$
$$\sigma(\theta) = r^n \left( \frac{k_c}{b} + k_\phi \right) \left\{ \cos \left[ \theta_f - \frac{\theta - \theta_r}{\theta_m - \theta_r} \left( \theta_f - \theta_m \right) \right] - \cos\theta_f \right\}^n \quad \text{for } \theta_r < \theta \leq \theta_m .$$
(55.66)

Note that the above equations are based on Bekker's formula, as given in (55.61), and become equivalent to the Wong–Reece model for normal stress [55.100] when $n = 1$. Also note that, by linearizing this distribution, Iagnemma et al. [55.84, 55.88] developed a Kalman-filter-based method to estimate the soil parameters.

The term $\theta_m$ is the wheel angle at which the normal stress is maximized

$$\theta_m = \left( a_0 + a_1 s \right) \theta_f ,$$
(55.67)

where $a_0$ and $a_1$ are parameters that depend on the wheel–soil interaction. Their values are generally assumed to be $a_0 \approx 0.4$ and $0 \leq a_1 \leq 0.3$ [55.100].

The maximum terrain shear stress is a function of the terrain cohesion $c$ and the internal friction angle $\phi$, and can be computed from Coulomb's equation

$$\tau_{\max}(\theta) = c + \sigma_{\max}(\theta) \tan\phi .$$
(55.68)

Based on the above equation, the shear stresses under the rotating wheel, $\tau_x(\theta)$ and $\tau_y(\theta)$, are written as follows [55.101]

$$\tau_x(\theta) = \left[ c + \sigma(\theta) \tan\phi \right] \left( 1 - e^{-j_x(\theta)/k_x} \right) ,$$
(55.69)
$$\tau_y(\theta) = \left[ c + \sigma(\theta) \tan\phi \right] \left( 1 - e^{-j_y(\theta)/k_y} \right) ,$$
(55.70)

where $k_x$ and $k_y$ are the shear deformation moduli in each direction. In addition, $j_x$ and $j_y$, the soil deformations in each direction, can be formulated as functions of the wheel angle $\theta$ with the slip ratio and the slip angle, respectively [55.100, 55.89]

$$j_x(\theta) = r \left[ \theta_f - \theta - (1 - s) \left( \sin\theta_f - \sin\theta \right) \right] ,$$
(55.71)
$$j_y(\theta) = r (1 - s) \left( \theta_f - \theta \right) \tan\beta .$$
(55.72)

3.13.5 Drawbar Pull: Fx

Using the normal stress $\sigma(\theta)$ and the shear stress in the $x$ direction $\tau_x(\theta)$, the drawbar pull $F_x$, which is the net traction force exerted by the soil on the wheel, is calculated as the integral from the entry angle $\theta_f$ to the exit angle $\theta_r$ [55.100]

$$F_x = r b \int_{\theta_r}^{\theta_f} \left[ \tau_x(\theta) \cos\theta - \sigma(\theta) \sin\theta \right] \mathrm{d}\theta .$$
(55.73)

3.13.6 Side Force: Fy

The side force $F_y$ appears in the lateral direction of the wheel when the vehicle makes steering maneuvers or traverses a side slope. The side force is decomposed into two components [55.89]

$$F_y = F_u + F_s ,$$
where $F_u$ is the force produced by the shear stress in the $y$ direction $\tau_y(\theta)$ underneath the wheel, and $F_s$ is the reaction force generated by the bulldozing phenomenon on the side face of the wheel. The above equation can be rewritten as

$$F_y = \int_{\theta_r}^{\theta_f} \Big\{ \underbrace{r b \, \tau_y(\theta)}_{F_u} + \underbrace{R_b \left[ r - h(\theta) \cos\theta \right]}_{F_s} \Big\} \, \mathrm{d}\theta .$$
(55.74)

Here, Hegedus's bulldozing resistance estimation [55.102] is employed to evaluate the side face force $F_s$. As shown in Fig. 55.34, a bulldozing resistance $R_b$ is generated on a unit-width blade when the blade moves toward the soil. According to Hegedus's theory, the bulldozed area is defined by a destructive phase that is modeled by a planar surface. In the case of a horizontally placed wheel, the angle of approach $\alpha$ should be zero; $R_b$ can then be calculated as a function of the wheel sinkage $h(\theta)$

$$R_b(h) = D_1 \, c \, h(\theta) + D_2 \, \frac{\rho_d h^2(\theta)}{2} ,$$
(55.75)

where

$$D_1(X_c, \phi) = \cot X_c + \tan \left( X_c + \phi \right) , \qquad D_2(X_c, \phi) = \cot X_c + \frac{\cot^2 X_c}{\cot \phi} .$$
Fig. 55.34
figure 34

Estimation model of the bulldozing resistance

In the above equations, $\rho_d$ denotes the soil density. Based on Bekker's theory [55.29], the destructive angle $X_c$ can be approximated as

$$X_c = 45^{\circ} - \frac{\phi}{2} .$$

3.13.7 Normal Force: Fz

The normal force $F_z$ is obtained in the same manner as (55.73) [55.100]

$$F_z = r b \int_{\theta_r}^{\theta_f} \left[ \tau_x(\theta) \sin\theta + \sigma(\theta) \cos\theta \right] \mathrm{d}\theta ,$$
(55.76)

which should balance the normal load of the wheel in a static condition.

Motion dynamics simulation of a vehicle traveling over loose soil can be performed by plugging the forces $F_x$, $F_y$, and $F_z$ obtained from the above equations into the equation of motion (55.56). A numerical sketch of the longitudinal force integrals is given below.
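The sketch evaluates the drawbar pull (55.73) and normal force (55.76) by numerical quadrature of the stress distributions (55.66), (55.69), and (55.71). The soil parameters and contact angles are assumed given, and $\theta_m > \theta_r$ is assumed so that the rear-region interpolation in (55.66) is well defined; the default values of $a_0$ and $a_1$ are illustrative.

```python
import numpy as np
from scipy.integrate import quad

def wheel_forces(s, r, b, k_c, k_phi, n, c, phi, k_x, th_f, th_r,
                 a0=0.4, a1=0.15):
    """Drawbar pull F_x (55.73) and normal force F_z (55.76)."""
    th_m = (a0 + a1 * s) * th_f                       # (55.67)

    def sigma(th):                                    # normal stress (55.66)
        if th >= th_m:                                # front region
            cos_term = np.cos(th)
        else:                                         # rear region
            cos_term = np.cos(th_f - (th - th_r) / (th_m - th_r) * (th_f - th_m))
        return r**n * (k_c / b + k_phi) * max(cos_term - np.cos(th_f), 0.0)**n

    def tau_x(th):                                    # shear stress (55.69), (55.71)
        j_x = r * (th_f - th - (1.0 - s) * (np.sin(th_f) - np.sin(th)))
        return (c + sigma(th) * np.tan(phi)) * (1.0 - np.exp(-j_x / k_x))

    F_x, _ = quad(lambda th: r * b * (tau_x(th) * np.cos(th) - sigma(th) * np.sin(th)),
                  th_r, th_f)
    F_z, _ = quad(lambda th: r * b * (tau_x(th) * np.sin(th) + sigma(th) * np.cos(th)),
                  th_r, th_f)
    return F_x, F_z
```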

A better understanding of the soil–wheel contact and traction mechanics is important in order to improve the navigation and control behavior of exploration rovers, in terms of minimization of wheel slippage, for example. Reducing the wheel slippage will increase the power efficiency of surface locomotion, decrease the errors in path tracking maneuvers, and decrease the risks of wheel spinning and sinking, which can cause immobilization of the vehicle.

One key in realizing such advanced control of slippage minimization is determining how to properly estimate the slip ratios and slip angles in real time using on-board sensors. The slip ratio is determined by the ratio between the wheel spinning velocity and the traveling velocity of the vehicle, but proper sensing of the velocity of the vehicle is usually difficult. One simple solution is to use a free wheel specialized for traveling velocity measurement. Another solution is to employ inertial sensors, which are however usually subject to noise and drift.

An alternative and promising possibility is visual odometry, which is based on optical flow or feature tracking in a sequence of optical images. This technique has been applied to the Mars exploration rovers Spirit and Opportunity in their long-range navigation and verified to be very useful. In particular, an algorithm based on feature detection and tracking with a stereo pair of cameras provides reliable results with good accuracy for the estimation of driving distance as well as wheel slippage [55.28].

4 Future Directions of Orbital and Surface Robotic Systems

4.1 Robotic Maintenance and Service Missions

For many years, we have sent satellites and other systems into space without caring too much about what might happen at the end of their life cycle. Recently, awareness has grown of the dramatic increase in space debris and the danger of a fatal chain reaction of collisions. Generally speaking, space debris removal may become a prerequisite for future spaceflight. Space systems above approximately 600 km flight altitude are not currently accessible to astronauts by means of present transport systems and are therefore excluded from any kind of human removal, repair, or maintenance.

In contrast, satellites equipped with robotic arms or humanoid robonauts may be remotely controlled, or merely supervised, from Earth in any orbit, including the geostationary one. In the future, they should be able to support astronauts during routine and maintenance work on space stations, capture uncontrollably tumbling satellites, prolong satellite lifetimes by repair or refueling, and deorbit or relocate satellites if necessary. Efficient telerobotic and telepresence technologies allow the appropriate level of robot autonomy to be selected flexibly within shared-autonomy frameworks between the ground operator and the space robot.

The telepresence technologies mentioned above ensure that, through real-time feedback of stereo images and force/torque information, the operator on the ground gets the feeling of actually working at the remote site. High-quality telepresence requires low round-trip communication delays. The challenge here is twofold: (a) to provide a technically feasible communication infrastructure, and (b) to apply optimized delay-compensating telepresence technologies, which yield satisfactory haptic feedback for delays of up to 650 ms. For large robots mounted on a carrier satellite, the dynamic interactions, including the physical contacts when grasping a target, have to be mastered (Sect. 55.3). Autonomous skills are needed to support a human ground operator in performing the risky task of grasping and stabilizing noncooperative tumbling targets (satellites or space debris).

Due to the high cost of space validation missions, simulation capabilities are crucial for the development and verification of on-orbit servicing systems. This applies both to the required hardware simulation facilities, including sensors and illumination effects, and to dynamics modeling techniques and software tools. Various hardware simulators that use industrial robots to simulate chaser and target satellite motion and the dynamic interaction with the space robot have been built at DLR (Fig. 55.35) and elsewhere.

Fig. 55.35
figure 35

OOS-SIM, a hardware-in-the-loop simulator in DLR

Robotic maintenance and service missions for space infrastructure have been a long-term dream of the space robotics community since their conceptual designs were first published in the ARAMIS report in the early 1980s (Fig. 55.1) [55.1].

ROTEX, ETS-VII, Ranger, and ASTRO, which were introduced in earlier sections, are technological developments toward this goal, but robotic maintenance and service missions have not yet become routinely operational (a good comparative study of orbital robotic missions is provided by [55.103]). The Hubble space telescope (HST) is a huge space telescope designed to be serviced in orbit, but it has been visited by the Space Shuttle and serviced (components exchanged and faults fixed) only by human EVA. After the COLUMBIA accident in 2003, NASA seriously considered the possibility of robotic maintenance of the HST, investigating available technologies and selecting a prime contractor for the mission development. Figure 55.36 depicts one possible configuration for the robotic rescue mission. Ultimately, it was decided to perform this last servicing mission with human astronauts. Maintenance of the HST involves tasks that are too complicated to be done by a robot, because the HST was designed for human-based maintenance and not specifically for robots.

Fig. 55.36
figure 36

A conceptual drawing for robotic rescue of Hubble space telescope

Robonaut, described in the following subsection, is therefore considered an interesting option for conducting practical maintenance and service missions, owing to its compatibility with equipment designed for humans and its level of dexterity similar to that of human astronauts.

4.2 Robonaut and JUSTIN

Robonaut is a humanoid robot designed by the Robot Systems Technology Branch at NASA Johnson Space Center in a collaborative effort with DARPA. The Robonaut project seeks to develop and demonstrate a robotic system that can function as an EVA astronaut equivalent. Robonaut jumps generations ahead by eliminating robotic scars (e.g., special robotic grapples and targets), but it still keeps the human operator in the control loop through its telepresence control system. Robonaut is designed to be used for EVA tasks (extra-vehicular activities, or space walks), i.e., those which were not specifically designed for robots.

A key challenge is to build machines that can help humans work and explore in space. Working side by side with humans, or going where the risks are too great for people, machines like Robonaut will expand capabilities for construction and discovery. Over the past five decades, space flight hardware has been designed for human servicing. Space walks are planned for most of the assembly missions of the ISS, and they are a key contingency for resolving on-orbit failures. To maintain compatibility with existing EVA tools and equipment, a humanoid shape and an assumed level of human performance (at least that of a human in a space suit) are required for this robotic surrogate.

The manipulator and dexterous hand have been developed with a substantial investment in mechatronics design. The arm structure has avionics elements embedded within each link, reducing cabling and noise interference. Robonaut has been designed with a biologically inspired approach. For example, it uses a chordate neurological system for data management, bringing all feedback to a central nervous system, where even low-level servo control is performed. This biologically inspired approach extends to left–right computational symmetry, sensor and power duality, and kinematic redundancy, enabling learning and optimization in mechanical, electrical, and software forms.

Robonaut has a broad mix of sensors including thermal, position, tactile, force and torque instrumentation, with over 150 sensors per arm. The control system for Robonaut includes an onboard, real-time CPU with miniature data acquisition and power management. Off-board guidance is delivered with human supervision using a telepresence control station with human tracking.

Robonaut 2 (Fig. 55.37), the latest generation of the Robonaut family, was launched to the ISS aboard Space Shuttle Discovery on the STS-133 mission in February 2011. It is the first humanoid robot in space, and although its initial job is to demonstrate its capabilities inside the space station, the goal is that, through upgrades and advancements, it will one day venture outside the station to help spacewalkers make repairs or additions to the station. Robonaut 2 is a dexterous, anthropomorphic robotic torso with significant technical improvements over its predecessor that make it a far more valuable tool for astronauts. Upgrades include increased force sensing, greater range of motion, higher bandwidth, and improved dexterity. Robonaut 2's integrated mechatronic design results in a more compact and robust distributed control system with a fraction of the wiring of the original Robonaut.

Fig. 55.37a–c
figure 37

NASA’s Robonaut family: (a) Robonaut 2, (b) Zero-G Leg for surface inspection of ISS, (c) Centaur with a surface mobility system

Robonaut 2, also called R2, completed many firsts during its first two years on the ISS. During its initial checkout, it used American sign language to say Hello World. R2 illustrated its unique control system design, which permits it to work directly with astronauts, by shaking hands with ISS Commander Dan Burbank (Fig. 55.38). More recently, it has been using standard crew tools to measure airflow and demonstrating its ability to perform autonomous inventory scans. As part of gaining experience that will be useful once R2 starts working on the outside of the space station, the on-board crew have successfully demonstrated teleoperation. Using a variety of sensors that track human hand, arm, and neck motion, astronaut Tom Marshburn, while also onboard the station, became the first person to remotely control R2 and have it catch a free-flying object inside the ISS.

Fig. 55.38a,b
figure 38

Robonaut 2 onboard ISS: (a) measuring airflow, (b) shaking hands with ISS Commander Dan Burbank

One potential application of Robonaut technology is regular monitoring and contingency maintenance work on the human-habitable modules of the space station. Figure 55.37b depicts such an application, where Robonaut crawls along the surface of the station module using handrails originally designed for human EVA.

The application of the humanoid robot is not limited to orbital tasks. Figure 55.37c depicts the idea of combining the humanoid torso with a surface mobility system, which could be useful for robotic planetary exploration.

DLR's anthropomorphic JUSTIN is based on high-fidelity joint-torque-controlled lightweight-robot technology and adjustable whole-body compliance in Cartesian space. JUSTIN on its mobile platform (Fig. 55.39) has actuated, torque-controlled joints. With JUSTIN's upper body, the new delay-compensating technologies have been verified, using copies of JUSTIN's lightweight arms as force-reflecting hand-controllers, for delays of slightly more than 700 ms.

Fig. 55.39a,b
figure 39

DLR’ JUSTIN, wheeled version (a) and legged version (b)

The European Space Agency (ESA), too, is pushing forward robonaut-type concepts, e.g., by testing the dexterous four-finger DEXHAND developed under contract by DLR (Fig. 55.40). In spring 2012, the DEXHAND successfully passed its acceptance test and was delivered to ESA.

Fig. 55.40
figure 40

DEXHAND

4.3 Aerial Platforms

There are three planetary candidates for aerial robotic systems: Venus, Mars, and Titan (a moon of Saturn) [55.104, 55.105]. Venus has a very dense but hot atmosphere (460 °C and 65 kg/m³ at the surface), and so can easily float relatively heavy payloads. Mars has a very thin and cold atmosphere (somewhat variable, but often −100 °C and 0.02 kg/m³). Titan has an atmosphere even colder than that of Mars (about 100 K) but about 50% denser than Earth's. Thus, very different vehicles have been envisioned for the three candidate mission targets. On Venus, buoyant devices are generally considered, especially those that can continuously or periodically rise high enough to reach moderate temperatures where conventional electronics can survive. One candidate approach is to use a phase-change fluid as part of the buoyant system: the fluid condenses in the cool upper atmosphere and is trapped in a pressure vessel, causing a loss of buoyancy and allowing the vehicle to descend, possibly all the way to the surface. After a brief stay, and before the heat flux into the interior of the device destroys the sensitive equipment, a valve is opened so that the phase-change fluid can evaporate, increasing the buoyancy and allowing the craft to ascend back to the cool upper atmosphere. After a suitable period of heat rejection in this cool zone, the process can be repeated, perhaps indefinitely. The density of the Venus atmosphere is sufficiently high that powered dirigibles can be used, so that buoyant vehicles can use propulsion and steering to reach particular locations in the atmosphere or on the surface [55.106].
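To make the buoyancy toggle concrete, the following sketch checks the sign of net lift in the two phases. The surface density comes from the figures above and the gravity value is the well-known Venus constant; the envelope volumes and masses are invented purely for illustration.

# Illustration of why condensing the phase-change fluid reverses the sign
# of net buoyancy. Envelope volumes and masses are rough assumptions.

G_VENUS = 8.87            # Venus surface gravity [m/s^2]
RHO_SURFACE = 65.0        # atmospheric density at the surface [kg/m^3]

def net_lift(rho_atm, envelope_volume_m3, fluid_mass_kg, dry_mass_kg):
    """Net upward force [N]: displaced-atmosphere weight minus vehicle weight."""
    buoyancy = rho_atm * envelope_volume_m3 * G_VENUS
    weight = (dry_mass_kg + fluid_mass_kg) * G_VENUS
    return buoyancy - weight

# Vapor phase: fluid fills the envelope; large displaced volume -> ascends.
print(net_lift(RHO_SURFACE, 10.0, 20.0, 100.0) > 0)   # True: positive lift

# Condensed phase: fluid trapped in the pressure vessel, envelope collapses;
# displaced volume shrinks -> net lift goes negative and the craft descends.
print(net_lift(RHO_SURFACE, 1.0, 20.0, 100.0) > 0)    # False: vehicle sinks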

In contrast, the Mars atmosphere is too thin for powered dirigibles to work (at least with the power-to-weight ratio of any current propulsion technology). Balloon aerobots could be deployed in the Mars atmosphere and could ascend and descend, but probably could not be steered precisely to specific locations, at least not by use of a propulsion system. Polar balloons could circumnavigate either pole many times, while equatorial balloons could make one partial circuit around the planet until they impact the Tharsis Bulge, a north–south string of high-altitude volcanoes that represents an essentially impenetrable barrier to any equatorial balloon with a reasonable payload. Because of these problems with lighter-than-air vehicles in the thin Mars atmosphere, there has been considerable study of airplanes for exploring Mars. Aircraft can be designed to have reasonable lift-to-drag ratios in the Mars atmosphere, so that their performance is not too different from that of airplanes on Earth. Most often considered are gliders that deploy directly from an aeroshell entering the Mars atmosphere at hypersonic velocity and then glide hundreds to a thousand kilometers before impact. One common mission concept is to fly down the great Valles Marineris canyon, taking high-resolution imagery and spectrometry of the canyon walls. Powered aircraft have also been considered, including ones that land and regenerate their propellant (e. g., using solar power and atmospheric CO2) so as to be able to make multiple flights.
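The hundreds-to-a-thousand-kilometer figure can be checked with a back-of-envelope glide estimate: range is roughly the lift-to-drag ratio times the total energy height (altitude plus kinetic energy expressed as height). The L/D, release altitude, and release speed below are plausible assumptions, not values from any flown or proposed mission.

# Crude glide-range estimate for a Mars glider released from an aeroshell,
# assuming a constant lift-to-drag ratio and full conversion of potential
# and kinetic energy into range. All inputs are illustrative assumptions.
G_MARS = 3.71     # Mars surface gravity [m/s^2]

def glide_range_km(lift_to_drag, altitude_km, speed_ms):
    """Range ~ L/D * (altitude + v^2 / 2g), the total 'energy height'."""
    energy_height_m = altitude_km * 1e3 + speed_ms ** 2 / (2.0 * G_MARS)
    return lift_to_drag * energy_height_m / 1e3

# Assumed: L/D of 10, release at 30 km altitude after slowing to 800 m/s.
print(round(glide_range_km(10.0, 30.0, 800.0)))  # ~1163 km: the scale
# of 'hundreds to a thousand kilometers' quoted in the text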

On Titan, as on Venus, buoyant devices are generally considered more attractive than surface vehicles (although helicopters have been proposed). Also as on Venus, the atmosphere of Titan contains many obscuring particles and aerosols, so that high-resolution imaging over a broad spectrum is only possible by getting close to the surface. This makes balloons or powered dirigibles very attractive. On Venus, the extreme surface temperature makes it challenging to operate a surface vehicle for any extended duration. On Titan, there is a significant risk that some sort of hydrocarbon goo exists on the surface that might foul any surface vehicle. Thus, both Titan and Venus are considered especially attractive targets for the use of aerobots, especially in the form of powered dirigibles. Navigation of such aerobots would presumably be accomplished primarily by sensing the terrain and navigating relative to any landmarks that can be discerned. When these vehicles operate in the upper atmosphere, they can augment their position knowledge by means of sun or star tracking (as referenced to the local vertical); deeper in the atmosphere, this may not be possible. One key issue is whether direct communications to Earth are envisioned, or relay via satellite. If there is a satellite in orbit, it can provide considerable radio-navigation assistance and relatively frequent communications when the aerobot is on the side away from the Earth (both Venus and Titan spin very slowly). But a satellite relay is expensive, so the least expensive options require that the dirigible carry a large high-gain antenna (usually presumed to be inside the gas bag). Radio-based servo pointing at the Earth then provides precise navigation information (again along with precise measurements of the local vertical). However, when the aerobot goes out of sight beyond the limb of the planet, it may spend days or weeks out of communication with the Earth. This is probably the situation calling for the highest degree of autonomy of any that has been envisioned in robotic planetary exploration of the solar system.

4.4 Mobility Concepts and Subsurface Platforms

For high mobility on the Moon, planets, and asteroids, there is still no final answer as to which technology is optimal. Although multilegged crawlers (e. g., DLR's six-legged version shown in Fig. 55.41) seem to be the best alternative for investigating steep craters, four-wheeled rovers can climb up and down remarkably steep slopes. Perhaps wheel–leg combinations, as realized in JPL's ATHLETE (Fig. 55.42) or in DLR's conceptual design (Fig. 55.43), will turn out to be the optimal solution.

Fig. 55.41
figure 41

DLR’s six-legged crawler

Fig. 55.42
figure 42

JPL’s ATHLETE

Fig. 55.43
figure 43

Modular rover concept

Precise autonomous landing based on visual data is a prerequisite for such exploration and is closely related to the Future Space Systems program.

DLR's main interest, however, is fast locomotion through local autonomy (including collision avoidance and real-time path planning), thereby circumventing the problem of long signal delays, which range from about 3 s (Moon) to 15–30 min (Mars). Stereo cameras with field-programmable gate array (FPGA) processor chips are capable of modeling the environment in 3-D in real time, using, e. g., the so-called semiglobal matching (SGM). With this, the goal of moving at up to 10 km per hour now seems realizable.
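As a rough illustration of the SGM idea, the sketch below computes a dense disparity map from a rectified stereo pair using OpenCV's semiglobal-matching variant (StereoSGBM). Flight systems run comparable algorithms on FPGAs; the file names and parameter values here are generic placeholders, not DLR's actual settings.

# Hedged sketch: dense disparity via semiglobal block matching in OpenCV.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # assumed rectified pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,    # disparity search range; must be divisible by 16
    blockSize=5,           # matching window size
    P1=8 * 5 * 5,          # smoothness penalty for small disparity changes
    P2=32 * 5 * 5,         # larger penalty for big disparity jumps
)

# OpenCV returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype("float32") / 16.0

# Depth then follows from the rectified geometry, depth = f * B / disparity,
# with focal length f [px] and baseline B [m] taken from camera calibration.

The semiglobal smoothness penalties (P1, P2) are what distinguish SGM from plain block matching: they aggregate matching costs along multiple image paths, which makes the method both dense and regular enough to map efficiently onto FPGA pipelines.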

Other modes of mobility may be superior where gravity is, e. g., 10000 times smaller than on Earth, as is the case on some asteroids. For the Japanese Hayabusa 2 mission, for example, DLR has developed a shoe-box-sized hopping lander that uses just a small eccentric motor to generate moderate hopping motions over a few hundred meters without reaching the asteroid's fairly low escape velocity.
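How low that escape velocity is can be checked directly. The mass and radius below are rough assumed values for a sub-kilometer asteroid of the class Hayabusa 2 targets; they are not quoted from the text.

# Quick check of the 'fairly low escape velocity' claim for a small asteroid.
import math

G = 6.674e-11          # gravitational constant [m^3 kg^-1 s^-2]
MASS = 4.5e11          # asteroid mass [kg] (assumed)
RADIUS = 450.0         # asteroid radius [m] (assumed)

v_escape = math.sqrt(2.0 * G * MASS / RADIUS)
print(f"escape velocity ~ {v_escape:.2f} m/s")   # ~0.37 m/s

# A hop only has to stay well below this bound: even a push of a few cm/s
# from an eccentric motor yields long ballistic arcs without escaping.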

Subsurface exploration of planetary bodies holds great promise: it is believed that a liquid-water aquifer may exist at significant depths on Mars, and perhaps an under-ice ocean on Europa and Ganymede; these probably represent the best possible locations within the solar system to look for extant (as opposed to extinct) extraterrestrial life. Also, in the lunar polar dark craters there is some evidence of the existence of water ice or other volatiles, and perhaps a layered geologic record of impacts in the Earth–Moon system exists in these cold traps. Even access to a depth of a few meters holds the promise of reaching pristine scientific samples that have not been exposed to thermal cycling or ionizing radiation [55.107].

The prevailing wisdom has been that traditional sorts of drilling rigs are required to access deep underground, involving drill towers, multisegmented drill strings, large robotic systems to serve the function of a terrestrial drilling crew, and large power systems. Also, terrestrial drilling is usually done using large amounts of fluids (water, air, or mud) to flush away cuttings and to cool and lubricate the cutter. The NASA Mars Technology Program has funded contractors that demonstrated reaching 10 m of depth in a realistic setting with segmented drill strings without the use of fluids. While this is much less than needed to reach the putative liquid water, it is much more than is reachable by previous techniques [55.108].

Other approaches have been proposed such as Moles or Inchworms that could be relatively self-contained and yet might reach great depths without the mass and complexity of a large drill tower and segmented drill string. A key issue is that it appears that the needed energy cannot be stored on-board such self-contained drills, at least if it is stored as chemical energy. This is because drilling through terrain requires that some of the chemical bonds that hold the terrain together be broken, and so if the energy of chemical bonds is used to provide that power, then a given volume of chemical energy storage can only advance some fixed ratio of its length into the terrain, where the ratio is determined by the efficiency in taking bond energies of one sort to break bonds of a different sort. Based on these considerations, it appears unlikely that a completely self-contained subsurface vehicle could advance more than perhaps a hundred times its own length. Unless nuclear power sources are considered (and they have been), this requires some sort of tether to the surface to provide a nearly unlimited source of energy. Another problem for subsurface vehicles is that rock tends to expand when it is pulverized (in a process called comminution). Nonporous rock typically expands in volume by a few tens of percent when excavated, which means that fully self-contained subsurface vehicles have a severe conservation of volume problem. In principle the rock can be compressed back into its original volume, but this generally requires pressures much greater than the compressive strength of the original rock. The energy required to do this is much larger than the energy required to excavate the rock in the first place, and would become the dominant use of energy in an already energy-intensive effort.
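The advance-ratio argument can be put in numbers. For a drill of constant cross-section, the excavated column length per unit length of on-board energy storage is independent of the drill diameter, so the ratio of stored energy density to excavation energy (times a conversion efficiency) bounds the total advance. Every figure below is an order-of-magnitude assumption chosen for illustration.

# Rough version of the advance-ratio argument for self-contained drills.
E_BATTERY = 2.5e9      # stored chemical energy density [J/m^3] (~700 Wh/L, assumed)
E_EXCAVATE = 1.0e8     # specific energy to excavate rock [J/m^3] (assumed)
EFFICIENCY = 0.5       # fraction of stored energy reaching the cutting face (assumed)

# advance = efficiency * (E_BATTERY / E_EXCAVATE) * storage_length,
# so the achievable advance per unit storage length is simply:
advance_ratio = EFFICIENCY * E_BATTERY / E_EXCAVATE
print(f"advance ~ {advance_ratio:.0f}x the storage length")   # ~12x

Under these assumptions the drill advances only about a dozen times the length of its energy store, comfortably below the roughly hundredfold upper bound cited above, which is why tethered power delivery or nuclear sources enter the trade space.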

As a result, it is generally assumed that any subsurface vehicle must keep some access tunnel open to the surface so that the excess volume of cuttings can be transported out. If this tunnel is available, then it seems that a means for getting power from the surface is also available, so that self-contained nuclear power is not needed. Subsurface vehicles with diameters as small as one or a few centimeters have been proposed that could potentially reach great depths within the mass and power constraints of feasible planetary robotic exploration missions.

5 Conclusions and Further Reading

Space robotics as a field is still in its infancy. The speed-of-light delays inherent in remote space operations make the master–slave teleoperation approach that has proven so useful in the undersea and nuclear industries problematic. Space robotics lacks the highly repetitive operations in a tightly structured environment that characterize industrial robotics. Hardware handled by space robots is very delicate and expensive. These three considerations explain why relatively few space robots have been flown, why they have operated very slowly, and why only a small variety of tasks has been attempted. Nonetheless, the potential rewards of space robotics are great: exploring the solar system, creating vast space telescopes that may unlock the secrets of the universe, and enabling viable space industries all seem to require major use of space robots. The scale of the solar system is not so great (a few light-hours) that human intelligence cannot always supplement even the most remote space robot that becomes confused or stuck. Indeed, for the Moon (with only a few seconds of time delay), it seems that hazard avoidance and reliable closure of force-feedback loops are all that is required to make a highly useful robotic system. For Mars (with tens of minutes of time delay), along with hazard avoidance and force-loop closure, it seems that robust anomaly detection (with modest reflexive safing procedures) and perhaps scientific-novelty detection are probably all that is needed. High levels of autonomy are enhancing but not enabling for work in the inner solar system, and become more and more desirable for robots sent farther into the outer solar system.

For further reading, the following materials are suggested [55.109, 55.110, 55.111, 55.112, 55.113, 55.114].