Abstract
Using robots to explore venues that are beyond human reach has been a longstanding aspiration of scientists and expeditionists alike. The deep sea exemplifies such an uncharted environment that is currently inaccessible to humans. Ocean One (O\({_2}\)) is an anthropomorphic underwater robot, designed to operate in deep aquatic conditions and equipped with an array of sensor modalities. Central to the O\({_2}\) concept is a human interface that connects the robot and human operator through haptics and vision. In this paper, we focus on O\({_2}\)’s control architecture and show how it enables an avatar-like synergy between the robot and human pilot. We establish functional autonomy by resolving kinematic and actuation redundancy, allowing the pilot to control O\({_2}\) in a lower-dimensional space. We illustrate O\({_2}\)’s hierarchical whole-body control tasks, including manipulation and posture tasks, feed-forward compensation, and constraint handling. We also describe how to coordinate the dynamics of the body and arms to achieve superior performance in contact, and demonstrate O\({_2}\)’s capabilities in simulation, in pool experiments, and in its archeological maiden mission to the ‘Lune’, a French naval vessel that sank to 91 m depth in 1664 in the Mediterranean Sea.
1 Introduction
Over the past decades, advances in automation have enabled robots to replace humans in performing manual labor [1, 2]. This was possible because the manufacturing floor is tightly structured and tasks are highly repetitive. The next evolution for robots is to proxy for humans in unstructured, inhospitable environments and to advance the boundaries of human exploration by entering places that are currently inaccessible. The deep sea exemplifies an environment that is inhospitable and largely inaccessible to humans. The field of ROVs has recently brought major advancements to underwater robots that can navigate, observe and map [3,4,5], and the need for underwater operations has led to the development of submersible manipulators [6, 7]. The O\({_2}\) concept offers the capability to perform operations typical for human divers by synthesizing a humanoid robot that is functionally autonomous with a human pilot, who provides higher-level cognitive abilities, perception and decision making. In this paper, we focus on O\({_2}\)’s control architecture. We illustrate how the robot acquires functional autonomy in coordinating manipulation tasks with posture and constraints in a hierarchical manner. Subsequently we establish an avatar-like synergy by interfacing the human pilot with the robot through vision and bimanual haptic devices. We demonstrate O\({_2}\)’s capabilities in simulation, experiments at the pool and eventually its maiden deployment, where it explored a French naval vessel that sank to \(91\,\)m depth in the Mediterranean Sea, and retrieved archeological artifacts (Fig. 1).
2 Robot
Ocean One is a humanoid underwater robot of approximately adult-human dimensions and \(200\,\)kg overall weight. Its body is conceived in an anthropomorphic shape with shoulders, two arms and a head. Each arm has 7 DoF with electrically driven, torque-controlled joints adapted from the original Meka arm design. The arms are fitted with series elastic actuators that provide torque feedback to enhance compliant motion, force control, and safety. In order to withstand the pressure at oceanic depths, the arms are oil-filled and positively pressurized by spring-loaded compensators. Each hand has three fingers with three phalanges per finger that are driven with a tendon, which attaches to the distal finger phalanx, passes through the medial and proximal phalanges and loops around an axis driven by the single actuator [8]. The head contains a pair of high-definition cameras with global shutters. Pan and tilt actuation at the neck is currently being implemented. Another camera is attached on O\({_2}\)’s chest, offering a wide-angle perspective on the surroundings in front and below. More details on hardware components can be found in [9].
3 Pilot Interface
The O\({_2}\) concept encompasses not only the underwater robot itself but a distributed system of hardware and software components. The surface station connects the human pilot to a set of interfaces: haptic devices, GUI, live vision, and world display. These interfaces play different roles in different modes of operation. In Avatar-mode, the haptic devices and live vision are central, while the GUI and world display are more prominent in autonomous modes (Fig. 2).
4 Modeling
O\({_2}\) is modeled as a tree-like structure, with two arms branching out from the body. We model its kinematics using generalized coordinates, with 6 virtual DoF for the body and 7 DoF for each arm. Each link is a rigid body with associated mass properties. This leads us to a multi-body dynamic system represented by twenty-dimensional equations of motion

$$\begin{aligned} A(q)\,\ddot{q} + b(q,\dot{q}) + g(q) + h = \varGamma \end{aligned}$$
(1)

where q is the vector of generalized coordinates, A(q) is the kinetic energy matrix, \(b(q,\dot{q})\) is the vector of centrifugal and Coriolis forces, g(q) the gravitational forces, h the hydrodynamic contribution and \(\varGamma \) is the vector of generalized forces. We extract the mass properties of the body and arms from the CAD models and experimental inspection.
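The structure of these equations of motion can be sketched numerically. The snippet below assembles the twenty-dimensional inverse dynamics for a desired acceleration; all numeric values are hypothetical placeholders, not O\({_2}\)’s identified parameters:

```python
import numpy as np

N_DOF = 20  # 6 virtual body DoF + 2 arms x 7 DoF

def inverse_dynamics(A, b, g, h, ddq):
    """Generalized forces from the equations of motion:
    Gamma = A(q) ddq + b(q, dq) + g(q) + h."""
    return A @ ddq + b + g + h

# Hypothetical numeric stand-ins for the model terms.
A = 5.0 * np.eye(N_DOF)        # kinetic energy matrix (placeholder)
b = np.zeros(N_DOF)            # centrifugal and Coriolis forces
g = np.zeros(N_DOF)
g[2] = -9.81 * 200.0           # gravity acting on the vertical body DoF
h = np.zeros(N_DOF)            # hydrodynamic contribution
ddq = np.zeros(N_DOF)          # desired generalized accelerations

Gamma = inverse_dynamics(A, b, g, h, ddq)  # at rest, only gravity remains
```

At rest with zero desired acceleration, the required generalized force reduces to the gravitational term alone.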
5 Control
O\({_2}\) is a force-controlled robot, which allows us to exploit the robot’s dynamics at a global level, coordinating the slow dynamics of the body with the fast dynamics of the arms in order to achieve superior performance in motion and force control.
5.1 Whole Body Architecture
The objective behind O\({_2}\)’s control architecture is to enable a connection to the human interface in which a small set of control inputs suffices to pilot the entire robot, while achieving a high degree of autonomy already at the controller level. Because the interaction between robot and environment happens primarily through physical contact, we directly control the two hands while the body autonomously follows them. Beyond this, the controller monitors the pilot, the robot and its environment, and overrides actions that would lead to constraint violations, such as collisions with obstacles or joint limits. The remaining null-space is used to optimize arm and body posture for a given task. This behavior is realized by the control law

$$\begin{aligned} \varGamma = J_{\mathrm {c}}^\intercal F_{\mathrm {c}} + J_{t|\mathrm {c}}^\intercal F_{t|\mathrm {c}} + J_{p|t}^\intercal F_{p|t} + J_{\mathrm {h}}^\intercal F_{\mathrm {h}} \end{aligned}$$
(2)

a hierarchical architecture composed of prioritized tasks. The four additive terms in (2) are associated with the contributions of Constraints, Manipulation Task, Posture and feed-forward compensation, respectively. The controller coordinates these tasks in a prioritized manner. The notation t|c, for instance, reads Task t consistent with Task c. In case tasks are not simultaneously feasible, a lower-priority task will only be executed to the extent that it does not interfere with a higher-priority task. For instance, a new position goal at the hands might lead the body to collide with a rock. In such a case the constraint task engages and makes the body evade the obstacle while still performing the task (Fig. 3).
For each task t, we specify an operational space \(\vartheta _t\) and a control force \(F_t\). The task Jacobian \(J_t\) establishes a dual velocity-force mapping between the generalized coordinate space q and the operational space \(\vartheta _t\) with

$$\begin{aligned} \dot{\vartheta }_t = J_t \, \dot{q}, \qquad \varGamma _t = J_t^\intercal F_t \end{aligned}$$
To prevent a lower-priority task from interfering with a higher-priority task, we filter Jacobians and control forces through null-space projections. Details on this implementation can be seen in [10].
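As a minimal sketch of this null-space filtering, the snippet below uses a plain kinematic pseudoinverse in place of the dynamically consistent inverse described in [10]; the toy Jacobians are illustrative, not taken from O\({_2}\):

```python
import numpy as np

def nullspace_projector(J):
    """N = I - J^+ J maps joint velocities into the null space of task J."""
    return np.eye(J.shape[1]) - np.linalg.pinv(J) @ J

# Toy 3-DoF example: a higher-priority constraint locks the first DoF.
J_c = np.array([[1.0, 0.0, 0.0]])   # constraint Jacobian
J_t = np.array([[1.0, 1.0, 0.0]])   # lower-priority task Jacobian

N_c = nullspace_projector(J_c)
J_t_given_c = J_t @ N_c             # J_{t|c}: task t consistent with c

# The filtered task can no longer act through the constrained DoF.
```

After filtering, the lower-priority task retains only the components of its Jacobian that lie in the constraint’s null space, which is exactly the prioritization behavior described above.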
5.2 Manipulation Task
The central element of the controller is the Manipulation Task, which directly programs the hands. It is the only task that takes direct control inputs. These inputs are specified in six-dimensional operational-space coordinates for each arm, with velocities

$$\begin{aligned} \dot{\vartheta } = \begin{bmatrix} v \\ \omega \end{bmatrix} \end{aligned}$$

where v and \(\omega \) represent the linear and angular velocities of O\({_2}\)’s hands. In this space we implement unified motion and force control laws expressed in \(F_t\) [11]. This abstraction also allows us to specify the manipulation task in a way that is agnostic of the robot’s morphology.
5.3 Posture Task
After specifying the twelve-dimensional manipulation task, there are (in general) 8 uncontrolled DoF left. This remaining null-space (Fig. 4) is occupied by the posture task, which consists of two sub-tasks: Body Posture and Arm Posture. Body Posture positions the body relative to the hands. Its goal is threefold: align the body’s coronal plane with the horizontal plane; align its longitudinal axis with the horizontal perpendicular to the axis connecting the two operational points; and keep a constant linear offset to the mid-point of the hands. This sub-task occupies 6 DoF.
Finally, there is 1 DoF of null-space left in each of the arms, which is controlled by the Arm Posture sub-task.
In order to merge the two sub-tasks into a combined posture task, we stack the Jacobians \(J_{p,\, \mathrm {Body} } \) and \(J_{p,\, \mathrm {Arms} }\) and obtain

$$\begin{aligned} J_p = \begin{bmatrix} J_{p,\, \mathrm {Body}} \\ J_{p,\, \mathrm {Arms}} \end{bmatrix} \end{aligned}$$
5.4 Constraints Task
To prevent the robot from damaging itself or obstacles in the environment, we insert a constraint task to which we assign the highest priority. These constraints consist of joint-limit avoidance, self-collision avoidance and obstacle avoidance. Any action arising from the manipulation and posture tasks that would violate these constraints is filtered directly in the control loop. All three constraints rely on artificial potential fields [12] and on efficient capsule-based distance computation. An example is given in Sect. 7.3.
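A sketch of the repulsive component of such a potential field [12]; the gain \(\eta \) and activation distance \(\rho _0\) are illustrative values, and the distance and its gradient would come from the capsule-based distance computation:

```python
import numpy as np

def repulsive_force(d, grad_d, rho_0=0.5, eta=1.0):
    """Classic repulsive potential-field force.

    d:      smallest collider-obstacle distance (from capsule computation)
    grad_d: unit vector along which d increases (away from the obstacle)
    The force is zero outside the activation distance rho_0 and grows
    without bound as d -> 0."""
    if d >= rho_0:
        return np.zeros(3)
    return eta * (1.0 / d - 1.0 / rho_0) / d**2 * grad_d

away = np.array([0.0, 0.0, 1.0])
far = repulsive_force(0.6, away)    # outside rho_0: constraint inactive
near = repulsive_force(0.1, away)   # inside rho_0: pushes away
```

The force activates only inside \(\rho _0\) and stiffens rapidly as the distance shrinks, which is what lets the constraint stay dormant until a collision threatens.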
5.5 Hydrodynamic Feed-Forward Compensation
O\({_2}\) experiences hydrodynamic effects when operating in an underwater environment, captured by the term h in (1). In order to compensate for these forces, we add a feed-forward term \(( J_\mathrm {h}^\intercal F_\mathrm {h} )\) to the controller in (2). Because this computation is part of the controller, we must rely on models that are executable in real-time. For this purpose, we model O\({_2}\)’s body, upper arms, lower arms, and hands as rigid links. For each link, we compute two forces: Viscous Damping and Buoyancy. Buoyancy is computed using each link’s volume and center of buoyancy extracted from the CAD files. For viscous damping, we use the standard quadratic-drag model of a cylinder. We assume a local coordinate system in each link, originating at the center with \(\hat{x}\) along the cylinder axis; \(C_x\), \(C_y\) and \(C_z\) are constant parameters, and \(\bar{m}\), \(\bar{L}\) and \(\bar{r}\) are cylinder parameters. The combined hydrodynamic force on each link is the sum of the two contributions,

$$\begin{aligned} F_{\mathrm h} = F_{\mathrm {Buoyancy}} + F_{\mathrm {Damping}} \end{aligned}$$

With the associated Jacobians we can now compute the total hydrodynamic compensation

$$\begin{aligned} \varGamma _{\mathrm h} = \sum _i J_{\mathrm h,i}^\intercal F_{\mathrm h,i} \end{aligned}$$
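The per-link bookkeeping can be sketched as follows. Drag coefficients, areas and volumes are illustrative, and the damping term uses a generic quadratic-drag form rather than O\({_2}\)’s calibrated cylinder model:

```python
import numpy as np

RHO = 1000.0  # water density [kg/m^3]
G = 9.81      # gravitational acceleration [m/s^2]

def link_hydro_force(volume, drag_coeff, area, v):
    """Buoyancy plus generic quadratic viscous damping for one link."""
    buoyancy = np.array([0.0, 0.0, RHO * G * volume])
    damping = -0.5 * RHO * drag_coeff * area * np.linalg.norm(v) * v
    return buoyancy + damping

def total_compensation(jacobians, forces):
    """Gamma_h = sum_i J_i^T F_i, mapped into generalized coordinates."""
    return sum(J.T @ F for J, F in zip(jacobians, forces))

# A stationary 10-liter link experiences pure buoyancy.
F = link_hydro_force(volume=0.01, drag_coeff=1.0, area=0.05, v=np.zeros(3))

# With a single identity Jacobian, the compensation equals the link force.
Gamma_h = total_compensation([np.eye(3)], [F])
```

In the real controller, each link’s Jacobian maps its hydrodynamic force back into the twenty-dimensional generalized-coordinate space before summation.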
5.6 Thruster/Body Control
Ocean One has eight thrusters: four horizontal thrusters arranged in a diamond shape to control yaw and planar translation, and four vertical thrusters arranged in a square shape to control roll, pitch and vertical translation. This redundancy allows for holonomic actuation and full maneuverability in case of a thruster failure. The mapping from the eight-dimensional thruster force vector T to the six-dimensional generalized force vector \(\varGamma \) acting on the body is given by \(\varGamma = J_\mathrm {Thruster}^\intercal T\), more specifically

$$\begin{aligned} \varGamma = \begin{bmatrix} J_1^\intercal n_1&\cdots&J_8^\intercal n_8 \end{bmatrix} T \end{aligned}$$
(16)

where \(J_i^\intercal \) is the Jacobian of the \(i\mathrm {th}\) thruster and \(n_i\) its associated unit thrust vector. The robot is controlled in terms of generalized coordinates; thus the inversion of (16) is needed. Because \(J_\mathrm {Thruster}^\intercal \) is a wide matrix, it is not immediately invertible; two more constraints, arising from the elimination of internal forces and moments, are required. Inverting the resulting system of equations is equivalent to solving the minimum-norm optimization problem

$$\begin{aligned} \min _{T} \; \Vert T \Vert ^2 \quad \mathrm {s.t.} \quad J_\mathrm {Thruster}^\intercal \, T = \varGamma \end{aligned}$$
The thrusters are limited by their force capacity \(T_\mathrm {max}\). This limit is imposed by proportionally scaling back all thruster forces until none exceeds this limit.
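A compact sketch of this allocation: the Moore-Penrose pseudoinverse yields the minimum-norm thruster forces, followed by the proportional scaling described above. The 2-thruster system is a toy example, not O\({_2}\)’s actual thruster geometry:

```python
import numpy as np

def allocate_thrust(J_thruster_T, Gamma, T_max):
    """Minimum-norm thruster forces realizing the generalized force Gamma.

    J_thruster_T: wide matrix (6 x 8 on the real robot) mapping thruster
    forces to generalized body forces. If any thruster exceeds its capacity
    T_max, all forces are scaled back proportionally until none does."""
    T = np.linalg.pinv(J_thruster_T) @ Gamma
    peak = np.max(np.abs(T))
    if peak > T_max:
        T *= T_max / peak
    return T

# Toy example: two parallel thrusters sharing a single surge force.
J_T = np.array([[1.0, 1.0]])
T = allocate_thrust(J_T, np.array([10.0]), T_max=4.0)  # saturated case
```

Note that in the saturated case the commanded generalized force is not fully realized; proportional scaling trades magnitude for preserving the force direction.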
6 Teleoperation and Haptic Interaction
The pilot and robot are directly coupled at their hands via bimanual haptic devices. The robot mimics the pilot’s movements, and the pilot receives force feedback perceived through the robot’s 6D force sensors at the wrists. We refer to this mode of collaboration as Avatar-mode. The pilot is stationary at the console while the robot navigates through space in a holonomic manner. To achieve this mapping, we superimpose position and rate control to compute the goal position and orientation of the two hands. For this purpose we introduce an intermediary coordinate frame referred to as the Manipulation-Frame or Mframe. This frame is responsible for the rate contribution: it drifts in translation and yaw in proportion to the sum and difference of the haptic devices’ linear positions. The position contributions are hand translations and orientations, directly mapped from the haptic devices’ positions and orientations (Fig. 5).
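The superposition of position and rate control can be illustrated in one translational dimension; the rate gain and time step below are hypothetical, not the values used on O\({_2}\):

```python
def update_goal(mframe_x, handle_x, k_rate=0.5, dt=0.01):
    """1-D sketch of superimposed position and rate control.

    Rate contribution: the Mframe drifts while the handle is deflected.
    Position contribution: the deflection maps directly onto the goal."""
    mframe_x += k_rate * handle_x * dt   # frame drifts at deflection-rate
    goal_x = mframe_x + handle_x         # direct position mapping on top
    return mframe_x, goal_x

# Holding a constant deflection makes the hand goal drift steadily forward,
# letting the stationary pilot cover arbitrarily large distances.
x, goal = 0.0, 0.0
for _ in range(100):
    x, goal = update_goal(x, handle_x=0.2)
```

This is the mechanism that lets a pilot with a desk-sized workspace command a robot moving through open water: small sustained deflections integrate into large displacements, while fine motions map through directly.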
These goals are then forwarded to and enforced by the Manipulation Task. O\({_2}\)’s contact forces are measured by force sensors located at the wrists. The raw signals are passed through filters in order to eliminate high-frequency noise while maintaining haptic transparency. The haptic devices not only reflect the filtered contact forces but are also actively controlled. This allows the pilot to perform guided motions, which simplifies the teleoperation task by reducing its dimensionality. For instance, certain fetching tasks only require 1 active DoF in orientation, and a docking maneuver only requires 1 linear DoF (Fig. 6).
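The filtering stage can be sketched as a simple first-order low-pass; the smoothing factor is illustrative, as the cutoff actually used on O\({_2}\) is not stated here:

```python
class LowPass:
    """First-order low-pass filter for one wrist force channel.

    alpha near 1 tracks the raw signal closely; smaller alpha smooths
    harder at the cost of haptic transparency (added lag)."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha
        self.y = 0.0

    def step(self, x):
        self.y += self.alpha * (x - self.y)
        return self.y

f = LowPass()
out = [f.step(1.0) for _ in range(50)]  # response to a unit force step
```

Choosing the smoothing factor is a direct trade-off between noise rejection and the transparency the pilot feels at the haptic devices.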
7 Simulation
Simulation played an integral part throughout the development of O\({_2}\). Most importantly, it enabled us to develop and test O\({_2}\)’s software stack prior to fabrication and deployment on the physical robot. It also informed mechanical design choices and allowed us to train the pilots and rehearse entire missions. SAI2 is a real-time interactive simulation environment comprising libraries for the simulation of multi-body dynamic systems and for contact and collision resolution. In addition, we utilized the Chai3D [13] libraries for haptic and visual rendering.
7.1 Step Response in Operational Space
O\({_2}\)’s kinematic structure can be decomposed into three parts: the body, referred to as the macro-manipulator, with 6 DoF, and two arms, referred to as mini-manipulators, with 7 DoF each. This is a valid decomposition because the minis have full range in operational space and the macro has at least 1 DoF [14]. The serial combination of macro and mini offers two advantages. First, the effective inertias of the macro-mini combination are upper bounded by the effective inertias of the mini-manipulator alone. Second, the dynamic performance of the macro-mini can be made comparable to that of the mini.
This behavior is illustrated in Fig. 7. A square-wave position goal is applied to both hands in the forward direction, while the lateral and vertical position goals remain constant. The step response of the body alone (bottom) shows slow dynamics with large overshoot, oscillation and a long convergence time. This behavior is due to the body’s large inertia and its comparatively weak actuation, limited by the thrusters’ force capacity. The step responses of the hands’ combined macro-mini dynamics display a fast response with short rise time and near-critically damped convergence. Hence, the fast, lightweight arms compensate for the slower body, and the overall response is fast and accurate.
7.2 Docking Maneuver with Force Control
To illustrate the advantages of whole-body coordination in force control mode, we perform a docking maneuver, where both hands apply and maintain a force normal to a given plane. The maneuver initiates at close proximity to the contact surface, where force control is activated in the forward-direction and position control is maintained in the orthogonal sub-space. Figure 8 shows the results comparing force control with and without whole-body coordination. We see that whole-body coordination leads to superior transitional and steady state behavior. The spikes during transition are greatly reduced, convergence is faster, and steady state errors are smaller.
7.3 Obstacle Avoidance
We simulate a scenario in which O\({_2}\) manipulates a container while avoiding local obstacles. To do this, we enclose O\({_2}\) in five (green) collider capsules and the obstacle in a (red) collision capsule (Fig. 9). In every servo loop, we monitor the distances between colliders and obstacles. If a distance falls below the specified activation distance \(\rho _0\), the constraint task is activated and an artificial potential field is applied to avoid the collision. In the given scenario, we program O\({_2}\) to unscrew the container’s lid by specifying circle-segment trajectories at its hands. Without obstacle avoidance, O\({_2}\)’s body would sweep through the obstacle during this motion. The smallest distance between the obstacle and O\({_2}\)’s body is rendered by the red line segment between the blue and red spheres. Instead of colliding with the obstacle, the artificial potential field leads the body to glide over the barrel while the trajectory of the hands remains unaltered. The comparison between active and inactive obstacle avoidance is given in Fig. 9.
8 Deployment and Experimental Results
After O\({_2}\)’s hardware components were assembled, we deployed the robot at shallow depth at the Stanford Aquatic Center. We experimentally tuned parameters for buoyancy compensation and validated the kinematic and dynamic models as well as the sensors and communication protocols. The pool also offered the first opportunity to practice piloting the robot during navigation, grasping, and docking operations. In Fig. 10 (left) we compare the body’s dynamic responses of the simulated and physical robot by applying sinusoidal trajectories at 0.05 Hz with \(45^\circ \) amplitude in yaw and 0.3 m in depth. The responses align well, with the exception of some coupling that is likely due to unmodeled hydrodynamic contributions. In Fig. 10 (right) we compare the hands’ responses in operational-space position control by applying sinusoidal trajectories at 0.1 Hz in all three Cartesian directions. Again, we observe good alignment between simulation and physical robot, with additional coupling. The physical robot exhibits slightly decreased amplitudes, likely a result of under-estimated hydrodynamics and friction in the arm joints.
O\({_2}\)’s maiden mission took place at an archeological site in the Mediterranean Sea near the coast of Toulon, France. The Lune is a two-decked, fifty-four-gun French naval vessel of Louis XIV’s that sank in 1664 with nearly a thousand men on board to 91 m of depth, where it was discovered in 1993 by Nautile, a submarine of the French Oceanographic Institute. The mission was executed from the Andre Malraux, a research vessel operated by DRASSM [15]. After initial tests at 15 m depth in collaboration with a human diver, O\({_2}\) descended for a 4 h mission to the Lune, where it explored the site, fetched a vase and deposited it in a collection box that was subsequently floated to the surface.
9 Conclusion
In this paper, we focused on O\({_2}\)’s control architecture. We illustrated the hierarchical implementation of whole-body control and showed how to create an immersive interface with a human pilot that enables an avatar-like collaboration. We demonstrated the system’s capabilities in simulated whole-body control, force-controlled docking maneuvers, and a manipulation task involving autonomous obstacle avoidance. We validated the dynamic models and controller with experiments in the pool and finally established O\({_2}\)’s effectiveness with its deployment on its maiden mission in the Mediterranean Sea (Fig. 11).
References
Hägele, M., Nilsson, K., Pires, J.N., Bischoff, R.: Industrial robotics. In: Springer Handbook of Robotics, pp. 1385–1422. Springer (2016)
Groover, M.P.: Automation, Production Systems, and Computer-Integrated Manufacturing. Prentice Hall Press (2007)
Dudek, G., Giguere, P., Prahacs, C., Saunderson, S., Sattar, J., Torres-Mendez, L.-A., Jenkin, M., German, A., Hogue, A., Ripsman A., et al.: Aqua: an amphibious autonomous robot. Computer, 40(1) (2007)
Vasilescu, I., Varshavskaya, P., Kotay, K., Rus, D.: Autonomous modular optical underwater robot (amour) design, prototype and feasibility study. In: Proceedings of the 2005 IEEE International Conference on Robotics and Automation, ICRA 2005. pp. 1603–1609. IEEE (2005)
Kimball, P.W., Rock, S.M.: Mapping of translating, rotating icebergs with an autonomous underwater vehicle. IEEE J. Oceanic Eng 40(1), 196–208 (2015)
Schilling Robotics, LLC: Homepage. http://www.schilling.com. Accessed 14 Dec 2011
ECA Group: Homepage. http://www.ecagroup.com/en/solutions/arm-5e-micro. Accessed 18 Sept 2016
Stuart, H., Wang, S., Gardineer, B., Christensen, D.I., Aukes, D.M., Cutkosky, M.R.: A compliant underactuated hand with suction flow for underwater mobile manipulation. In: ICRA 2014, pp. 6691–6697 (2014)
Khatib, O., Yeh, X., Brantner, G., Soe, B., Kim, B., Ganguly, S., Stuart, H., Wang, S., Cutkosky, M., Edsinger, A., et al.: Ocean one: a robotic avatar for oceanic discovery. IEEE Robot. Autom. Mag. 23(4), 20–29 (2016)
Khatib, O., Sentis, L., Park, J., Warren, J.: Whole-body dynamic behavior and control of human-like robots. Int. J. Humanoid Robot. 1(01), 29–43 (2004)
Khatib, O.: A unified approach for motion and force control of robot manipulators: the operational space formulation. IEEE J. Robot. Autom. 3(1), 43–53 (1987)
Khatib, O.: Real-time obstacle avoidance for manipulators and mobile robots. In: Autonomous Robot Vehicles, pp. 396–404. Springer (1986)
Conti, F., Barbagli, F., Balaniuk, R., Halg, M., Lu, C., Morris, D., Sentis, L., Warren, J., Khatib, O., Salisbury, K.: The CHAI libraries. In: Proceedings of Eurohaptics 2003, pp. 496–500. Dublin, Ireland (2003)
Khatib, O.: Inertial properties in robotic manipulation: an object-level framework. Int. J. Robot. Res. 14(1), 19–36 (1995)
L’Hour, M.: The french department of underwater archaeology: a brief overview. Eur. J. Archaeol. 15(2), 275–284 (2012)
© 2018 Springer International Publishing AG
Brantner, G., Khatib, O. (2018). Controlling Ocean One. In: Hutter, M., Siegwart, R. (eds.) Field and Service Robotics. Springer Proceedings in Advanced Robotics, vol. 5. Springer, Cham. https://doi.org/10.1007/978-3-319-67361-5_1
Print ISBN: 978-3-319-67360-8. Online ISBN: 978-3-319-67361-5