Abstract
Nowadays, there is an increasing demand for underwater intervention systems around the world in several application domains. The commercially available systems fall short of what is demanded in many respects, justifying the need for more autonomous, cheap and easy-to-use solutions for underwater intervention missions. The chapter begins with a review of the most important research projects that have been able to demonstrate results in sea conditions. Then, the expertise and know-how developed in the context of our research group over the last years is presented. Perhaps the main achieved result, from the methodological point of view, is a three-layer general system architecture based on the Robot Operating System (ROS), which allows an underwater vehicle to perform intervention missions with a high degree of autonomy, independently of the targeted scenario. Moreover, the use of an underwater simulator as a 3D simulation tool for benchmarking and Human Robot Interaction (HRI) is also discussed. In summary, a methodology has been developed for experimental validation, independently of the specific underwater intervention problem to solve. It consists of using the simulator as a prior step before moving to any of the testbeds used for experimental validation. The reliability and feasibility of this methodology have been demonstrated for intervention missions in sea trial conditions.
1 Introduction
The need for intervention in underwater environments has increased significantly in recent years. A large number of applications in marine environments need intervention capabilities. Potential areas include maintenance interventions in permanent observatories and offshore scenarios, and search and recovery of objects of interest for different application domains such as biology, geology, fishery, or marine rescue, just to name a few.
Nowadays, these kinds of tasks are usually solved with “work class” ROVs (i.e. Remotely Operated Vehicles) that are launched from support vessels and remotely operated by expert pilots through an umbilical communications cable and complex control interfaces. These solutions present several drawbacks. Firstly, ROVs are normally large and heavy vehicles that need significant logistics for their transportation and handling. Secondly, the complex user interfaces and control methods require skilled pilots. These two facts significantly increase the cost of the applications. Moreover, the need for an umbilical cable introduces additional control problems and range limitations. Finally, the fatigue and high stress that operators of remotely operated systems normally suffer constitutes another serious drawback.
All these issues justify the need for more autonomous, cheap and easy-to-use solutions for underwater intervention missions. With this aim, looking for higher autonomy levels in underwater intervention missions, a new concept, named “Autonomous Underwater Vehicle for Intervention” (I-AUV hereinafter), was born during the 90s. It is worth mentioning that this is a very new technology and, according to Gilmour et al. [18], the technology for light intervention systems is still immature but very promising: I-AUVs are currently at level 3 out of 9 (9 meaning routinely used) of the development cycle necessary to adopt this technology in the oil and gas industry, and are expected to reach up to level 7 by the end of 2018.
However, progress on this new technology has been slow. In fact, only a very few I-AUV prototypes have been tested to date in real underwater scenarios. Among the reasons are the complexity of the required mechatronics (e.g. the vehicle, the hand-arm system, all kinds of sensors, etc.), very hard communication problems, and the intelligent control architectures needed, not to mention the hostile environment inherent to underwater operation (e.g. poor visibility, currents, pressure increasing with depth, etc.).
After the pioneering works during the 90s (OTTER [51], ODIN [8] and UNION [40]), significant advances in this direction arrived during the last decade, when the first simple autonomous operations at sea were demonstrated. A dexterous subsea robot hand incorporating force and slip contact sensing, using fluid-filled tentacles for fingers, was developed in the mid 90s in the context of the AMADEUS project (Advanced MAnipulator for DEep Underwater Sampling) [21]. Fixed-base manipulation (opening/closing a valve) was demonstrated in ALIVE [14], free-floating object manipulation was demonstrated in SAUVIM [25], and object search and recovery was demonstrated in TRIDENT [47]. In summary, to the best of the authors' knowledge, only recent projects like SWIMMER [13], ALIVE, SAUVIM, RAUVI [45], and TRIDENT have been able to demonstrate their performance in sea trials. It is noticeable that currently the only ongoing European project aiming to demonstrate performance in sea trials is the PANDORA project [17]. A summary of the most relevant finished international projects related to underwater intervention is given in Table 1.
Bearing in mind the aforementioned context, a three-year Spanish Coordinated Project, named TRITON (Multisensory Based Underwater Intervention through Cooperative Marine Robots) was launched in 2012. The TRITON marine robotics research project is focused on the development of autonomous intervention technologies really close to the real needs of the final user and, as such, it can facilitate the potential technological transfer of its results. This research project includes three sub-projects:
- COMAROB: “Cooperative Robotics”, under the responsibility of Universitat de Girona (UdG)
- VISUAL2: “Multisensorial Perception”, under the responsibility of Universitat de les Illes Balears (UIB)
- GRASPER: “Autonomous Manipulation”, under the responsibility of Universitat Jaume-I (UJI)
The project proposes two scenarios as a proof of concept to demonstrate the developed capabilities: (1) the search and recovery of an object of interest (e.g. a “black-box mockup” of a crashed airplane), and (2) the intervention on an underwater panel in a permanent observatory. In the area of search and recovery, previous projects like SAUVIM, RAUVI and, more recently, TRIDENT have become milestone projects, progressively decreasing the operational costs and increasing the degree of autonomy. With respect to the intervention on an underwater panel, the ALIVE project demonstrated the capability of an underwater vehicle to dock autonomously with a ROV-friendly panel by using hydraulic grabs. Nevertheless, unlike in TRITON, a very simple automata-based manipulation strategy was used to open/close a valve. Finally, it is worth mentioning that currently only PANDORA has some similarities with TRITON: a learning solution for autonomous robot valve turning, using Extended Kalman Filtering and Fuzzy Logic to learn manipulation trajectories via kinaesthetic teaching, was recently proposed there [1, 6].
The work presented in this chapter mainly concerns GRASPER, focusing on one of its recent achievements: endowing an I-AUV with the ability to manipulate an underwater observatory panel in an autonomous way.
Moreover, the use of an underwater simulator as a 3D simulation tool for benchmarking and Human Robot Interaction (HRI) is also presented. Several definitions of the term benchmark have been proposed in the literature. In this chapter, the one stated in [11] is adopted, in the sense that a benchmark adds numerical evaluation of results (performance metrics) as a key element, with repeatability, independence, and unambiguity as its main aspects. This objective, numerical evaluation will allow a fair comparison of algorithms from different origins.
According to our methodology, we always test the algorithms first in simulation and then in real conditions of increasing complexity: water tank, pool, harbour, shallow water, etc. The results presented here concern simulations and water tank conditions, while we are currently working on the challenge of testing the manipulation capabilities in the sea.
2 Underwater Intervention Mission Planning
For a better understanding of the required mission planning issues, the specific context of the TRITON (2012–15) project will be used. The main goal of TRITON is the use of autonomous vehicles for the execution of complex underwater intervention tasks. The project is focused on the use of several vehicles (an ASC, Autonomous Surface Craft, and an I-AUV) running in a coordinated manner during the execution of a mission, and on the improvement of the manipulation capabilities required for intervention (i.e. opening/closing a valve, plugging/unplugging a connector, etc.).
The mission scenario that we are currently working on (panel intervention in the context of underwater observatories), to be carried out autonomously, is structured into five phases (see Fig. 1):
1. DIVE: Both vehicles are sequentially deployed from the support boat. Then, the I-AUV dives a few meters until establishing an acoustic communication link with the surface vehicle. Next, the I-AUV descends to the bottom, while the ASC describes circles on the surface to better localize and geo-reference the underwater robot. At the bottom, the I-AUV performs station keeping while being geo-referenced by the ASC, which forwards an absolute position fix.

2. TRANSIT: The vehicle uses cooperative navigation with the surface craft until reaching the acoustic coverage area of the panel-mounted transponder. Then, the vehicle uses a transceiver to interrogate the transponder mounted on the permanent observatory. Using its dead reckoning navigation system combined with range-only navigation techniques, the vehicle estimates the position of the observatory and transits towards it.

3. APPROACH: When the vehicle reaches the surroundings of the observatory, it establishes visual contact and identifies the AUV-friendly intervention panel where it should dock. To achieve the robust and accurate navigation required for docking, the vehicle switches to real-time vision-based navigation relative to the panel.

4. DOCKING: Real-time vision-based localization techniques are used to visually guide the vehicle during the docking. Three non-actuated mechanical bars are used for docking to the panel using passive accommodation techniques. When the I-AUV docks, it becomes rigidly attached to the panel.

5. INTERVENTION: Once the vehicle is rigidly attached to the panel, the manipulation operation takes place. As a proof of concept, two demonstrative applications have been designed with increasing complexity: (1) opening/closing a valve, and (2) plugging/unplugging a connector.
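The five phases above can be sketched as a simple sequential state machine. The phase names follow the mission description, while the transition conditions are illustrative assumptions of ours, not the project's actual Petri-net-based Mission Control System:

```python
from enum import Enum, auto

class Phase(Enum):
    DIVE = auto()
    TRANSIT = auto()
    APPROACH = auto()
    DOCKING = auto()
    INTERVENTION = auto()
    DONE = auto()

# Hypothetical completion conditions for each phase; the real MCS encodes
# the execution flow as a Petri net (see Sect. 2.1).
TRANSITIONS = {
    Phase.DIVE: ("bottom_reached", Phase.TRANSIT),
    Phase.TRANSIT: ("panel_transponder_in_range", Phase.APPROACH),
    Phase.APPROACH: ("panel_visually_identified", Phase.DOCKING),
    Phase.DOCKING: ("rigidly_attached", Phase.INTERVENTION),
    Phase.INTERVENTION: ("valve_and_connector_done", Phase.DONE),
}

def next_phase(phase, status):
    """Advance to the next phase only when the current one has completed."""
    condition, target = TRANSITIONS[phase]
    return target if status.get(condition, False) else phase
```

A supervisor loop would call `next_phase` with the latest sensor-derived status flags until `Phase.DONE` is reached.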
2.1 The System Architecture
The high-level structure of the architecture implemented for the current intervention system can be observed in Fig. 2. Obviously, general mission planning considerations are out of the scope of this work and, in the following, only “grasping and manipulation” aspects will be taken into account. The figure represents the real mechatronics used in the TRITON project: the Girona500 I-AUV [39], equipped with the Light-Weight ARM5E 4 DOF underwater robotic arm [15], and the SPARUS AUV [24], used as the surface vehicle in the mission.
The whole I-AUV control architecture is composed of two initially independent architectures: the underwater vehicle architecture and the manipulator architecture. Concerning the manipulator architecture, the reactive actions are performed in the low-level control layer, which communicates with the real or simulated I-AUV via an abstraction interface. The control layer also includes control strategies like station keeping or free floating to help in the manipulation actions. The station keeping approach keeps the position and orientation of the vehicle to facilitate the intervention; a combination of vision and inertial measurement systems is used for this purpose. With this approach, it is possible to use the arm degrees of freedom to perform the desired manipulation [36]. The free floating approach uses all the available degrees of freedom, both from the vehicle and the arm, to increase the total amount of space configurations for a required task. In the TRIDENT project, a strategy based on the prioritization of equality- and inequality-type tasks, combined with Dynamic Programming techniques, was used to coordinately control the motion of the I-AUV [7]. In [49], real intervention experiments in sea conditions are described, in which task priorities and a dynamic programming based approach are used for underwater floating manipulation.
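The task-priority idea mentioned above can be sketched as follows: a secondary task velocity is projected into the null space of the primary task, so the secondary task never disturbs the primary one. The Jacobians and task velocities below are illustrative placeholders, and this two-level equality-task sketch omits the inequality tasks and dynamic programming of the actual TRIDENT controller:

```python
import numpy as np

def task_priority(J1, x1_dot, J2, x2_dot):
    """Resolve joint velocities for two prioritized equality tasks.

    J1, J2: task Jacobians (rows = task dims, cols = vehicle+arm DOFs).
    x1_dot, x2_dot: desired task-space velocities.
    """
    J1p = np.linalg.pinv(J1)
    q1 = J1p @ x1_dot                        # satisfy the primary task
    N1 = np.eye(J1.shape[1]) - J1p @ J1      # null-space projector of task 1
    # Secondary task resolved only within the remaining redundancy:
    q2 = np.linalg.pinv(J2 @ N1) @ (x2_dot - J2 @ q1)
    return q1 + N1 @ q2
```

With compatible tasks, both task velocities are achieved exactly; with conflicting tasks, the primary one always wins.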
The whole mission is supervised at a higher level by a Mission Control System (MCS), implemented using the Petri net formalism [36].
The Robot Operating System (ROS) [38], is used to integrate the heterogeneous computing hardware and software of all the system components, to allow for easy integration of additional mission-specific components, and to record all sensor input in a suitable playback format for simulation purposes.
The mission control system is the part of the control architecture in charge of defining the task execution flow to fulfill a mission. Each task can be executed by means of some manipulator action. The mission programmer must define how these actions/primitives are executed to fulfill each task and how the tasks are combined to fulfill the whole mission. The MCS was developed to be as generic as possible and allows easy tailoring to different control architectures (refer to [36] for further details).
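To illustrate the Petri net formalism underlying the MCS, a minimal executor can be sketched as below. The net, place names, and transition names are hypothetical examples, not the actual TRITON mission nets:

```python
class PetriNet:
    """Minimal place/transition Petri net with integer token markings."""

    def __init__(self, marking):
        self.marking = dict(marking)   # place name -> token count
        self.transitions = {}          # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (list(inputs), list(outputs))

    def enabled(self, name):
        """A transition is enabled when every input place holds a token."""
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        """Consume one token per input place, produce one per output place."""
        if not self.enabled(name):
            raise RuntimeError(f"transition {name!r} not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Hypothetical task flow: once docked, an action primitive opens the valve.
net = PetriNet({"docked": 1})
net.add_transition("open_valve", inputs=["docked"], outputs=["valve_opened"])
net.fire("open_valve")
```

In the real MCS, firing a transition would trigger the corresponding manipulator action primitive rather than just moving tokens.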
2.2 Planning Grasping and Manipulation for Intervention Missions
Planning a grasp is generally known to be a difficult problem due to the large search space resulting from all possible hand configurations, grasp types and object properties that occur in regular environments. The dominant approach to this problem has been the model-based paradigm, in which the object shape, contacts, and forces are modelled according to physical laws. Research has then focused on grasp analysis (the study of the physical properties of a given grasp) and grasp synthesis (the computation of grasps that meet certain desirable properties) [48]. Unfortunately, these approaches have failed to deliver practical implementations, mainly because they rely on assumptions that are difficult to satisfy in complex and uncertain environments.
The current trend is to incorporate sensor information for grasp planning and synthesis, such as vision [9, 10, 19, 30, 46] or range sensors [41]. In this line, several approaches have also adopted machine learning techniques to determine the relevant features that indicate a successful grasp [10, 20, 29, 44]. Others make use of human demonstrations for learning grasp tasks [12]. Most of these approaches commonly consider grasps as a fixed number of contact locations with no regard to hand geometry [4, 48]. Some recent work includes kinematic constraints of the hand in order to prune the search space [5, 27, 28]. Alternatively, the so-called knowledge-based approach tries to simplify the grasp planning problem by reasoning on a more symbolic level. Objects are often described using shape primitives [22, 50], grasp prototypes are defined in terms of purposeful hand preshapes [27, 28], and the planning and selection of grasps is made according to programmed decision rules [3]. Recently, the knowledge-based approach has been combined with vision-force-tactile feedback and task-related features that improve the robot performance in real scenarios [35].
Regarding autonomous manipulation in underwater environments, very little research has been carried out. The first fully autonomous intervention at sea was demonstrated by the ALIVE project, where a hovering-capable AUV was able to home in on a subsea intervention panel using an imaging sonar and then dock into it with hydraulic grabs using visual feedback. Once attached to the panel, a very simple manipulation strategy (fixed base manipulation) was used to open/close a valve. The first object manipulation from a floating vehicle (I-AUV) was achieved in 2009 within the SAUVIM project, which demonstrated the capability of searching for an object whose position was roughly known a priori. The object was endowed with artificial landmarks, and the robot autonomously located it and hooked it with a recovery device while hovering.
Recently, the first multipurpose object search and recovery strategy was demonstrated in the TRIDENT project in 2012. First, the object was searched using a down-looking camera and photo-mosaicing techniques. Next, it was demonstrated how to autonomously “hook” the object in a water tank [36]. The experiment was repeated in a harbour environment using a 4 DOF arm [33] and later with a 7 DOF arm endowed with a 3 fingered hand [43, 47].
In summary, grasping and manipulation remain open research problems, and the situation becomes drastically worse in underwater scenarios. In the shallow water context, new complexities arise that increase the difficulty of controlling grasping and manipulation actions with agility. Under these very hostile conditions, only a few robot systems are endowed with semi-autonomous manipulation capabilities, mainly focused on specialized operations requiring a reasonably structured environment, like those devoted to the offshore industries.
For further bibliography related to the motion control of I-AUVs and their manipulation systems, refer to [2], which addresses the main control aspects in underwater manipulation tasks, and [26], which provides an extensive tract on sensory-based autonomous manipulation for intervention tasks in unstructured environments.
3 UWSim: A 3D Simulation Tool for Benchmarking and HRI
UWSimFootnote 1 [34] is a software tool for visualization and simulation of underwater robotic missions (see Fig. 3). The software is able to visualize underwater virtual scenarios that can be configured using standard modeling software and can be connected to external control programs through Robot Operating System (ROS) [38] interfaces. UWSim is currently used in different ongoing projects funded by the European Commission (MORPH [16] and PANDORA [17]) to perform HIL (Hardware In the Loop) experiments and to reproduce real missions from captured logs. UWSim is not only useful for software validation, but also for defining benchmarking mechanisms inside the simulator, so that control and vision algorithms can be easily compared in common scenarios. UWSim is also used as a Graphical User Interface (GUI) providing the Human Robot Interaction (HRI) required to specify a task.
3.1 The Benchmarking Module for UWSim
A benchmarking module is available to be used with UWSim [37]. Like UWSim, this module uses ROS to interface with external software. The ROS interface allows the external program under evaluation to communicate both with the simulator (sending commands to carry out a task) and with the benchmarking module (sending the results or the data needed for evaluation).
Benchmarks are defined in XML (eXtensible Markup Language) files. Each file defines which measures are going to be used and how they will be evaluated. This allows the creation of standard benchmarks, defined in a document, to evaluate different aspects of underwater robotic algorithms, making it possible to compare algorithms from different origins. Each of these benchmarks is associated with one or more UWSim scene configuration files, the results of the benchmark thus depending on the predefined scene. The whole process is depicted in Fig. 4. Detailed information on how to set up and execute a benchmark in UWSim can be found in our previous work [37].
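As an illustration, a benchmark definition could look like the following fragment. The element and attribute names here are hypothetical and do not reproduce the exact UWSim schema; refer to [37] for the real configuration format:

```xml
<benchmark>
  <!-- Measure comparing tracked positions against simulator ground truth -->
  <measure name="panel_tracking_error" type="euclideanNorm">
    <topic>/tracker/corners</topic>
    <groundTruth>intervention_panel</groundTruth>
  </measure>
  <!-- Scene updater that degrades visibility over time -->
  <sceneUpdater type="sceneFogUpdater" step="0.05" interval="1.0"/>
  <!-- Benchmark starts on a service call from the evaluated program -->
  <trigger type="service" name="/benchmark/start"/>
</benchmark>
```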
3.2 A User Interface for UWSim to Provide HRI
Traditionally, Remotely Operated Vehicles (ROVs), which are commercially available to carry out all kinds of intervention missions, are teleoperated by an expert user by means of a specific Graphical User Interface (GUI) (thus providing the necessary Human Robot Interaction, HRI) thanks to the tethered cable which connects the robot to the oceanographic vessel. The main drawback of these systems, apart from the degree of expertise required of the pilots, concerns the cognitive fatigue inherent to master-slave control architectures. The evolution of this kind of robot is the Intervention Autonomous Underwater Vehicle (I-AUV). These robots can perform some tasks autonomously, but the presence of the operator in the programming phase is still required. Most of the GUIs used with these robots rely on their own programming language and tend to be complex, with many windows displaying information. Hence, this kind of GUI is very suitable for expert users but very difficult to use for non-expert users.
From our previous work and the know-how developed in the context of the aforementioned RAUVI and TRIDENT projects, a GUI is being developed by following a twofold strategy: (1) to guarantee the “intelligence” of the system and a good system performance, including the user in the control loop, and (2) not to require user intervention in a continuous way as in ROVs, but just when it is strictly necessary. Although we assume that the user has a minimum level of abilities related to the mission to be carried out and the robot to be used, the GUI is oriented to non-expert users. In order to include full 3D support at all stages of the mission, the GUI is being integrated with the UWSim simulator. This allows us to perform realistic simulations and take advantage of visual aspects like 3D representation, Virtual and Augmented Reality (VR and AR), and general good system performance. In order to integrate the GUI in the whole project architecture, ROS is being used as middleware.
The GUI adapts its design and the information shown to the user depending on the intervention to perform. Thus, when the user selects a “panel intervention” type of mission, the scenario configuration and the intervention panel CAD/VRMLFootnote 2 files are loaded. Then, the user is able to navigate through the scenario looking for the target, gets all the panel details, and selects among the predefined actions: plugging in a cable or operating a valve. Nevertheless, some modifications to these predefined actions can be made by using a specific menu.
Once the intervention is defined, it can be tested in the simulator or can be downloaded to the robot through the ROS communication module. In Fig. 5, the GUI integrated with UWSim (named QtUWSim) shows the panel to configure the scene environment.
Moreover, a 3D interface with a VR and AR layer is being developed, focusing on human hand interaction (using a hand tracker device) and vision (using a Head-Mounted Display, HMD), allowing the user to interact with the scene with the support of interactive markers. An “interactive marker” is a marker that can be applied to an object in a 3D scene and allows the user to interact with it. Depending on the type of interactive marker, the user can perform either translations or rotations of the object along or about one of the spatial axes. When the user selects the “grasp specification 3D” option, the end effector of the I-AUV, defined in a URDFFootnote 3 file, is loaded into the 3D scene. This end effector, which is surrounded by 6 interactive markers (3 translational and 3 rotational), can be a hand, a hook or a jaw. The user moves these interactive markers to indicate the end-effector position and orientation needed to reach the target. These movements are currently done by the user with the mouse/trackpad, but a ROS package is being developed to allow the use of a hand tracker device. This will allow the user to interact with the GUI more fluently and naturally (see Fig. 6).
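The effect of dragging one of the six interactive markers on the end-effector goal pose can be sketched as below. The function names and the choice of updating the pose in its local frame are our own illustrative assumptions, not the actual QtUWSim implementation:

```python
import numpy as np

def axis_rotation(axis, angle):
    """Right-handed rotation matrix about a coordinate axis (0=x, 1=y, 2=z)."""
    c, s = np.cos(angle), np.sin(angle)
    i, j = {0: (1, 2), 1: (2, 0), 2: (0, 1)}[axis]
    R = np.eye(3)
    R[i, i] = c; R[j, j] = c; R[i, j] = -s; R[j, i] = s
    return R

def apply_marker_drag(pose, kind, axis, delta):
    """Update a 4x4 homogeneous end-effector goal pose from one marker drag.

    kind: "translate" (delta in meters) or "rotate" (delta in radians),
    matching the 3 translational and 3 rotational markers.
    """
    T = np.eye(4)
    if kind == "translate":
        T[axis, 3] = delta
    else:
        T[:3, :3] = axis_rotation(axis, delta)
    return pose @ T   # compose in the end effector's local frame
```

Each mouse (or, in the future, hand-tracker) drag event would call `apply_marker_drag` once per affected axis.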
The use of a Head-Mounted Display (HMD) benefits the user by immersing him/her in a more realistic environment. One of the current developments is to adapt this kind of device in order to get the most benefit from the VR and AR layer. Furthermore, if the HMD is endowed with sensors, these could be used to move the camera point of view, adding more realism to the scene.
4 The Roadmap for Experimental Validation
Following the know-how generated through our recent projects (i.e. RAUVI, TRIDENT, TRITON), a methodology has been developed for experimental validation, independently of the specific underwater intervention problem to solve.
As can be seen in Fig. 7, four generic steps define the experimental validation roadmap (red blocks); the figure highlights their instantiation for two different underwater intervention missions: (1) the search and recovery problem (under the RAUVI and TRIDENT projects, yellow blocks) and (2) manipulation on a panel (under the TRITON project, green blocks).
This methodology has proven very successful independently of the mechatronic system and the testbed used for experimental validation. As can be observed in Fig. 7 (red blocks), the idea is to start out with performance tests on the simulator (UWSim block), where the mechatronics, sensors, and scenario have been modelled in advance.
After succeeding under different current and visibility conditions, the next step is the intervention trial, without the vehicle but with the real hand-arm system, sensors, and real devices to manipulate, in the water tank available at UJI (Water Tank block). An iterative process between simulation and real tests follows here, until complete success under water tank conditions.
Later, the complete system integration, now including the vehicle, and real performance tests are carried out in the pool available at UdG (Pool block). This is the last step before the sea trials (Harbour block). Obviously, the iterative process between UWSim and real tests is always kept running until success is achieved.
5 Simulation Results
By using the aforementioned benchmarking module for the UWSim simulator, we are able to set up many configurable options. Algorithms can be tested to their limits to learn under which conditions they work and what results can be obtained with them. This way, resources can be optimized to provide the best results in each situation. In the following sections, two different benchmarks for UWSim are explained, followed by the experimental results. The first one is a visibility benchmark, where a visual tracking algorithm is evaluated under different visibility conditions. The second one is a position error benchmark, where a pattern recognition algorithm is evaluated to determine whether it can be used to estimate the end-effector position of a robot manipulator arm, depending on the distance from the camera to the visual marker.
5.1 Benchmarking: Visibility Tracker
Below is an example of benchmarking done with UWSim. In this case, the goal is to evaluate how underwater fog affects a visual ESM tracking algorithm [23], as done in our previous work with a black-box mockup [37]. Here we use the TRITON scenario, which includes the underwater panel and the Girona500 I-AUV, equipped with the Light-Weight ARM5E arm and a camera. We now consider that the vehicle has already docked to the panel; even so, we are still interested in keeping track of the panel with the camera, as the intervention requires manipulating the valve and connector installed on it. Thus, with the aid of this benchmark, we evaluate how well the algorithm is able to keep track of the intervention panel while visibility conditions change.
The configuration files for the scene and the benchmark are the same as those used for the black-box recovery example mentioned before [37]. They include the measure definitions needed to evaluate the performance of the tracking. Since the tracking algorithm returns the position of a four-corner object, an “euclideanNorm” measure is used, which measures the distance between the position returned by the tracking software and the real position in the simulator.
This measure is divided into two parts to obtain more information: on the one hand, the distance between the actual corners and the ones returned by the tracking algorithm; on the other hand, the distance from the centroid of the simulated object to the one estimated through vision.
For the final result, these two measurements are added, so that the lower the result, the smaller the object recognition error. In addition to these measurements, the scene updater “sceneFogUpdater” is configured to vary the underwater scene visibility over time.
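The combined measure described above amounts to something like the following sketch; the function name and the (4, 2) pixel-coordinate array shape for a four-corner target are our assumptions:

```python
import numpy as np

def tracking_error(tracked_corners, true_corners):
    """Combined error: summed per-corner distance plus centroid distance.

    Both arguments: four (x, y) pixel positions of the tracked quadrilateral.
    Lower is better; 0 means a perfect match with the simulator ground truth.
    """
    tracked = np.asarray(tracked_corners, dtype=float)  # shape (4, 2)
    true = np.asarray(true_corners, dtype=float)
    corner_err = np.linalg.norm(tracked - true, axis=1).sum()
    centroid_err = np.linalg.norm(tracked.mean(axis=0) - true.mean(axis=0))
    return corner_err + centroid_err
```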
Finally, some triggers have been set up to make the evaluation task easier. The benchmark module will wait for a service call made by the tracking algorithm, and it will end when there are no more “sceneFogUpdater” iterations. The measurements will always be active, taking as valid the last value received on the ROS topic published by the vision system.
Once the simulator and the benchmark are configured, a service call must be added to the tracking algorithm when it starts, and the estimated position of the box must be sent to the benchmark module. As shown in Fig. 8, the tracking algorithm is able to find the manipulation panel while the fog increases during the benchmark, until it is finally lost completely when the visibility is very poor.
Once the benchmark is complete, the module stores the results in a text file in table format. This file can be processed later with any statistical or graphical tool. For this case study, the results can be seen in Fig. 9. It can be observed that the tracking error is very small throughout most of the experiment, below 5 pixels. When the fog level increases above 1.7, the error of the tracking algorithm increases drastically.
As we can see on the graph, the benchmarking module offers results for every measure, allowing the user to analyze the performance of the algorithm. In this case, the corner information is completely lost at a fog factor of 1.65, while the centroid estimate is still near the objective. We can therefore conclude that, for fog factors greater than 1.6, the tracker is not precise enough for manipulation, although it still roughly knows where the target is.
According to the results provided, the vision system is reliable for fog levels below 1.6. Figure 10 shows a comparison between these levels of fog in UWSim simulator screenshots. The fog level is a value ranging from 0 to infinity and defines the visibility in the water depending on the distance. Visibility is a value between 0 and 1, where 0 represents perfect visibility of the object and 1 represents no visibility at all. The visibility therefore depends on the water fog level and on the distance d to the object, as represented by the following formula:

visibility = 1 − e^(−(fog · d)²)
In Fig. 11, different values have been used to plot the relationship between visibility and the distance to the object. As can be seen, visibility drastically worsens with relatively small values of fog as the distance to the object increases. At a fog value of 1.60 (represented by a cyan line, which was the operating limit of the tracking software in this experiment), there is virtually no visibility for a distance greater than 1 m.
In Fig. 12, the distance to the object has been set to 0.9 m, which is actually the distance between the camera and the panel used in the benchmark, and the figure represents visibility with respect to the fog factor. The value of visibility for a fog factor of 1.6 is depicted with a horizontal line. Thus, the tracking algorithm is able to find an object when the degree of visibility is below 0.878, which is almost the same result as the one obtained using the same tracker in a different environment in our previous work [37].
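Assuming the exponential-squared (exp2) fog model commonly used by OpenSceneGraph-based renderers such as UWSim's, the visibility value can be computed directly; under this assumption the numbers are consistent with the threshold reported above (visibility(1.6, 0.9) ≈ 0.874 vs. the reported 0.878):

```python
import math

def visibility(fog, distance):
    """Visibility as defined above: 0 = perfect visibility, 1 = none.

    Assumes the exp2 fog attenuation model, where the fraction of light
    surviving over `distance` is exp(-(fog * distance)**2).
    """
    return 1.0 - math.exp(-(fog * distance) ** 2)
```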
5.2 Benchmarking: End Effector Position Error
In this section, a position error benchmark is defined to evaluate whether a pattern recognition algorithm can estimate the end effector position of a robot manipulator arm, compared to a kinematic solution, and thus whether the algorithm can be used for manipulation purposes. The results of the experiment will allow choosing the best way to estimate the end effector position when performing a manipulation.
The pattern recognition algorithm estimates the position of a marker placed on the gripper of the Light-Weight ARM5E robotic arm. Using this method, errors that affect the kinematics of the arm, such as a bad initialization or a miscalibration of the joints, can be avoided.
The marker is detected using the ARToolkit library (a software library for building Augmented Reality applications) that, among other features, provides methods for detecting and localizing the position and orientation of a marker. The arm moves within the camera field of view and the position error of the end effector is measured by the two proposed systems.
The first approach, direct kinematics, estimates the end effector position numerically using the known joint transforms from the base of the arm to the end effector. The advantage of this method is that it does not depend on the cameras, so it is immune to poor visibility. Its main drawback is that errors arise from bad initialization and miscalibration of the joints, which depend on self-positioning sensors. To reproduce the behaviour of the real arm, small offsets were applied to each joint to simulate these errors.
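The way a joint offset propagates to the end effector can be illustrated with a simple planar serial arm (a hypothetical 3-link example, not the Light-Weight ARM5E model):

```python
import math

def fk_planar(thetas, lengths):
    """Forward kinematics of a planar serial arm: (x, y) position of the
    end effector from joint angles (rad) and link lengths (m)."""
    x = y = acc = 0.0
    for theta, length in zip(thetas, lengths):
        acc += theta
        x += length * math.cos(acc)
        y += length * math.sin(acc)
    return x, y

def ee_error(thetas, lengths, joint, offset):
    """End-effector displacement caused by a calibration offset on one joint."""
    x0, y0 = fk_planar(thetas, lengths)
    bad = list(thetas)
    bad[joint] += offset
    x1, y1 = fk_planar(bad, lengths)
    return math.hypot(x1 - x0, y1 - y0)

thetas = [0.3, -0.4, 0.2]   # nominal joint angles (illustrative values)
lengths = [0.5, 0.4, 0.3]   # link lengths (illustrative values)
# The same 0.02 rad miscalibration hurts far more on the base joint than
# on the wrist, because the whole downstream chain rotates with it.
print(ee_error(thetas, lengths, 0, 0.02))  # base joint
print(ee_error(thetas, lengths, 2, 0.02))  # distal joint
```

This reproduces the effect discussed below: offsets on joints far from the end effector in the kinematic chain dominate the kinematic error.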
The second method uses the pattern recognition algorithm to find the marker placed on the hand; a fixed transform from this position to the end effector is then applied. In this case, low visibility can be an important disadvantage, but since most intervention missions already require vision systems to find the targets to manipulate, this approach references the target and the manipulator from the same origin, avoiding arm-camera calibration.
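The marker-based estimate is a composition of two homogeneous transforms: the marker pose returned by the detector and the fixed marker-to-end-effector transform. A minimal sketch (all numeric values are made up for illustration):

```python
import numpy as np

def transform(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Pose of the marker in the camera frame, as a detector such as
# ARToolkit would return it (illustrative values).
T_cam_marker = transform(np.eye(3), [0.1, -0.05, 0.9])

# Fixed, pre-measured transform from the marker to the end effector
# (assumption of this sketch: the marker is rigidly mounted on the gripper).
T_marker_ee = transform(np.eye(3), [0.0, 0.0, 0.04])

# End-effector pose expressed in the camera frame: the same frame in which
# the target is detected, so no arm-camera calibration is needed.
T_cam_ee = T_cam_marker @ T_marker_ee
print(T_cam_ee[:3, 3])
```

Because both target and end effector end up expressed in the camera frame, their relative pose is available directly for the manipulation controller.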
As can be seen in Fig. 13, the marker error is significantly smaller than the kinematic error. This is caused by small errors in the joints; those farthest from the end effector in the kinematic chain produce the largest kinematic errors. The marker approach avoids these kinematic-chain errors, reducing the error to 0.003–0.01 m, which is accurate enough to manipulate the panel.
Another interesting result is that kinematic errors decrease when the target is far from the camera, while the marker detection error increases and becomes unstable. The increase in the marker position error arises because even small errors in camera space produce appreciable position errors at larger depths; higher resolution cameras could mitigate this. The instability is probably caused by lighting effects such as shadows and reflections.
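The depth dependence of the marker error follows directly from the pinhole camera model, where a detection error of du pixels maps to a lateral error of du·Z/f metres (the focal length below is a hypothetical value, not that of the real camera):

```python
def position_error(pixel_error, depth, focal_px):
    """Lateral position error (m) produced by a pixel-level detection error,
    under a pinhole camera model: dx = du * Z / f."""
    return pixel_error * depth / focal_px

# With a hypothetical focal length of 800 px, a 2-pixel detection error
# doubles when the depth doubles: 2.5 mm at 1 m, 5 mm at 2 m.
print(position_error(2, 1.0, 800))
print(position_error(2, 2.0, 800))
```

This linear growth with depth explains why the marker estimate degrades as the target moves away, and why higher resolution (larger f in pixels) would reduce the error.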
To sum up, marker estimation outperforms kinematics, although kinematic errors depend on the sensors of each arm and may be smaller for other arms. In the particular context of the TRITON project, a hybrid solution was adopted, switching between the two methods depending on marker visibility, because the kinematic errors alone were too high to achieve robust manipulation.
6 Real Scenarios Results
Once the algorithms to perform the proposed interventions (opening and closing a valve, and plugging and unplugging a hot-stab connector) have been tested in simulated scenarios, the action moves to the real ones, according to our roadmap.
6.1 Intervention on a Panel in Water Tank Conditions
The first real scenario in our roadmap is the water tank at UJI (see Fig. 14), where an intervention panel mockup is installed inside the tank. Here we test the manipulation actions with the real hand-arm mechatronics and sensors (i.e. the Light-Weight ARM5E equipped with sensors); the complete AUV system is not yet used at this stage.
The detailed steps for the proposed dual operation (opening and closing a valve, and plugging and unplugging a connector) are shown in the flow chart of Fig. 15. The first step is the system initialization, in this case of the arm. Once the system is completely initialized, the manipulation execution plan starts. In order to reach the positions needed to manipulate the objects correctly, waypoints are defined relative to the position of the object (see Fig. 16, where the frames are represented in a virtual visualization of the scene). To reach each waypoint, the system calculates the Cartesian distance between the end-effector and the waypoint and commands Cartesian velocities that drive the end-effector towards it. Then, depending on the intervention to perform (valve or connector), a series of steps are followed (refer to [32] for more details). Video sequences of the two interventions can be seen on-line.Footnote 4
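The waypoint-reaching scheme can be sketched as a proportional Cartesian velocity controller (gain, saturation and tolerance values below are illustrative, not those of the real controller):

```python
import math

def step_towards(p_ee, p_wp, gain=1.0, v_max=0.1, dt=0.05):
    """One control step: command a Cartesian velocity proportional to the
    distance to the waypoint, saturated at v_max, and integrate it over dt.
    Returns the new end-effector position and the pre-step distance."""
    d = [w - e for w, e in zip(p_wp, p_ee)]
    dist = math.sqrt(sum(c * c for c in d))
    if dist < 1e-9:
        return p_ee, 0.0
    speed = min(gain * dist, v_max)
    return [e + speed * c / dist * dt for e, c in zip(p_ee, d)], dist

# Drive the end effector through a sequence of waypoints until each is
# reached within a Cartesian tolerance.
p = [0.0, 0.0, 0.0]
for waypoint in ([0.2, 0.0, 0.1], [0.25, 0.05, 0.12]):
    dist = float("inf")
    while dist > 0.005:
        p, dist = step_towards(p, waypoint)
print(p)
```

Saturating the commanded speed keeps the motion slow near obstacles regardless of how far the next waypoint is, while the proportional term gives a smooth deceleration on approach.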
Recently, after two years of work (TRITON is a three-year project), this envisioned concept of intervention on a panel became a real system performing a successful intervention in pool conditions (see Fig. 17). The whole real system, once the mechatronics integration is complete, includes the Girona500 AUV with the docking devices assembled, the hand-arm system (the Light-Weight ARM5E) with its sensors, and the panel mockup (see Fig. 18). The intervention mission begins after the vehicle is rigidly attached to the panel following an autonomous docking [31]. The manipulation experiment takes place in an environment similar to the one described above, but in a more challenging scenario with respect to visibility. The details of this intervention are out of the scope of this chapter.
6.2 Ongoing Research on Autonomous Manipulation with Visibility Constraints
In the subsea context, the quality of the images captured by the camera mounted on an autonomous robot can be strongly affected by the degree of water turbidity. In unfavorable circumstances, the distance at which the camera remains usable (i.e. the range of visibility) is the parameter required to make proper use of it. Moreover, when the image captured by the camera contains no objects near the robot, it is not possible to determine whether there are really no objects near the vehicle or whether water turbidity prevents seeing them.
To obtain a metric for the maximum distance at which the camera is effective at each instant, a calibration experiment has been developed. Two high intensity LEDs (one red and one white), placed at a fixed distance from the camera, have been used. To reach this fixed distance, the diodes can be placed in the submarine's robotic arm, which is then moved until the LEDs reach the calibration location. In addition, a calibration image has been prepared, positioned at a distance of 1 m from the camera and lit by the autonomous robot's built-in lights.
To muddy the water, a special dye for decorative painting has been used: a powder containing particles of different sizes. The water in the container in which the experiment was carried out was thus progressively blurred, without absolute measurements of turbidity. For each concentration of dye, in the absence of ambient light, the vehicle's built-in lights were activated to illuminate the test image and a screenshot of the captured image was taken. Then, with the lights turned off, the red and white LEDs were independently activated, taking a screenshot of each.
These images are the reference for calibrating the degree of visibility of the focus-camera set under each particular turbidity condition. The LED halos are then binarized with different thresholds for increasing water turbidity (see Fig. 19). The aspect of each LED makes it possible to determine the degree of visibility at 1 m of distance, which can be used as a starting point for estimating the maximum distance at which some degree of visibility remains.
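The binarization step can be sketched as follows, using a synthetic Gaussian spot as a stand-in for a real screenshot of the calibration LED (the halo shape, attenuation factor and threshold are illustrative assumptions):

```python
import numpy as np

def halo_pixels(image, threshold):
    """Binarize a grayscale image and count the pixels above the threshold;
    as turbidity grows, the LED halo shrinks and the count drops."""
    return int(np.count_nonzero(image > threshold))

# Synthetic LED halo: a 2-D Gaussian intensity spot on a 100x100 image.
y, x = np.mgrid[-50:50, -50:50]
clear = np.exp(-(x**2 + y**2) / (2 * 15.0**2))
turbid = 0.5 * clear  # turbidity attenuates the halo (illustrative factor)

for img in (clear, turbid):
    print(halo_pixels(img, 0.25))
```

Comparing the above-threshold pixel counts of the halo across turbidity levels gives a simple, monotone proxy for the degree of visibility at the calibration distance.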
7 Conclusions and Further Work
The field of underwater manipulation for intervention missions is an active research topic that still has many challenges to overcome. The most important research projects in this field that have demonstrated results in sea conditions are still far from what would be desirable for a fully autonomous intervention underwater vehicle.
Nevertheless, the expertise and know-how developed in the context of our research group in recent years, in projects like RAUVI (09-11), TRIDENT (10-13), or TRITON (currently active), has resulted in a general system architecture that allows an underwater vehicle to perform intervention missions in different real scenarios with a high degree of autonomy. The results obtained in TRITON, and in particular in the GRASPER subproject, in the field of autonomous underwater manipulation, represent the cutting edge of research in this area.
The use of UWSim as a 3D simulation tool for benchmarking and Human Robot Interaction (HRI) has also been presented. The simulator has proven to be a useful tool in our roadmap, a methodology developed for experimental validation in which we first perform benchmarking and Hardware in the Loop (HIL) simulations before moving to the real testbeds, independently of the specific underwater intervention problem to solve. UWSim is also used as a Graphical User Interface (GUI), providing the HRI required to specify a task.
The benchmarking capabilities of UWSim allow the design of specific experiments on autonomous underwater intervention. More specifically, the simulator allows integrating, in a single platform, the data acquired from the sensors during a real submarine intervention and defining a dataset, so that further experiments can work on the same scenario and build a better understanding of the results of previous experiments.
The usefulness of UWSim has been recently proven, as it is currently used in different ongoing projects funded by the European Commission (MORPH and PANDORA). Moreover, it is available to the scientific community as a live open source project,Footnote 5 and is also included as a moduleFootnote 6 within the ROS platform.
The underwater operation results on a permanent observatory panel in water tank conditions have also been presented, as a prior step to the next experiments that will take place in real sea conditions. The experiment consisted of opening and closing a valve and plugging and unplugging a connector. The operation used the proposed general system architecture, which allows an underwater vehicle to perform intervention missions in different real scenarios with a high degree of autonomy.
As future lines, it is worth mentioning that cooperative research with the University of Coimbra (Portugal) is now underway to explore other paradigms for improving manipulation, such as those based on "learning by demonstration" [42]. Experimental validation is being carried out on UWSim with the aid of complementary modules that allow user interaction for the learning process (see Fig. 20). We expect to incorporate these learning capabilities into the proposed system architecture, to be used in future interventions.
Notes
- 1. Available on-line: http://www.irs.uji.es/uwsim.
- 2. Computer Aided Design/Virtual Reality Modelling Software.
- 3. Unified Robot Description Format.
- 4. Valve and connector autonomous intervention: (1) (side) http://youtu.be/6pYBL-6Tw4c, (2) (top) http://youtu.be/_WkQYtcLsMU.
- 5. Available on-line: http://www.irs.uji.es/uwsim.
- 6. Available on-line: http://wiki.ros.org/uwsim.
References
Ahmadzadeh S, Kormushev P, Caldwell D (2013) Autonomous robotic valve turning: a hierarchical learning approach. In: 2013 IEEE international conference on robotics and automation (ICRA), pp 4629–4634. doi:10.1109/ICRA.2013.6631235
Antonelli G (2014) Underwater robots. Springer tracts in advanced robotics, vol 96. Springer, Heidelberg
Bekey G, Liu H, Tomovic R, Karplus W (1993) Knowledge-based control of grasping in robot hands using heuristics from human motor skills. IEEE Trans Robot Autom 9(6):709–722. doi:10.1109/70.265915
Bicchi A, Kumar V (2000) Robotic grasping and contact: a review. In: IEEE international conference on robotics and automation, ICRA’00, vol 1, pp 348–353. doi:10.1109/ROBOT.2000.844081
Borst C, Fischer M, Haidacher S, Liu H, Hirzinger G (2003) DLR hand II: experiments and experience with an anthropomorphic hand. In: IEEE international conference on robotics and automation, ICRA’03, vol 1, pp 702–707. doi:10.1109/ROBOT.2003.1241676
Carrera A, Ahmadzadeh SR, Ajoudani A, Kormushev P, Carreras M, Caldwell DG (2012) Towards autonomous robotic valve turning. J Cybern Inf Technol (CIT) 12(3):17–26
Casalino G, Zereik E, Simetti E, Torelli S, Sperinde A, Turetta A (2012) Agility for underwater floating manipulation: task & subsystem priority based control strategy. In: 2012 IEEE/RSJ international conference on intelligent robots and systems (IROS), pp 1772–1779. doi:10.1109/IROS.2012.6386127
Choi S, Takashige GY, Yuh J (1994) Experimental study on an underwater robotic vehicle: ODIN. In: Proceedings of 1994 symposium on autonomous underwater vehicle technology, AUV’94, pp 79–84. doi:10.1109/AUV.1994.518610
Cipolla R, Hollinghurst N (1997) Visually guided grasping in unstructured environments. Robot Auton Syst 19(3–4):337–346. doi:10.1016/S0921-8890(96)00060-7
Coelho J, Piater J, Grupen R (2001) Developing haptic and visual perceptual categories for reaching and grasping with a humanoid robot. Robot Auton Syst 37(2–3):195–218. doi:10.1016/S0921-8890(01)00158-0
Dillmann R (2004) KA 1.10 Benchmarks for Robotics Research. Technical report
Ekvall S, Kragic D (2004) Interactive grasp learning based on human demonstration. In: 2004 IEEE international conference on robotics and automation, ICRA’04, vol 4, pp 3519–3524. doi:10.1109/ROBOT.2004.1308798
Evans J, Keller K, Smith J, Marty P, Rigaud O (2001) Docking techniques and evaluation trials of the SWIMMER AUV: an autonomous deployment AUV for work-class ROVs. In: OCEANS 2001, pp 520–528
Evans J, Redmond P, Plakas C, Hamilton K, Lane D (2003) Autonomous docking for Intervention-AUVs using sonar and video-based real-time 3D pose estimation. In: OCEANS, vol 4, pp 2201–2210. doi:10.1109/OCEANS.2003.178243
Fernández J, Prats M, Sanz P, García J, Marín R, Robinson M, Ribas D, Ridao P (2013) Grasping for the seabed: developing a new underwater robot arm for shallow-water intervention. IEEE Robot Autom Mag 4(20):121–130. doi:10.1109/MRA.2013.2248307
FP7-MORPH: Marine Robotic System of Self-Organizing, Logically Linked Physical Nodes (MORPH). http://morph-project.eu/
FP7-PANDORA: Persistent Autonomy through learNing, aDaptation, Observation and Re-plAnning (PANDORA). http://persistentautonomy.com/
Gilmour B, Niccum G, O’Donnell T (2012) Field resident AUV systems: Chevron’s long-term goal for AUV development. In: 2012 IEEE/OES autonomous underwater vehicles (AUV), pp 1–5. doi:10.1109/AUV.2012.6380718
Hauck A, Ruttinger J, Sorg M, Farber G (1999) Visual determination of 3D grasping points on unknown objects with a binocular camera system. In: Proceedings of 1999 IEEE/RSJ international conference on intelligent robots and systems, IROS’99, vol 1, pp 272–278. doi:10.1109/IROS.1999.813016
Kamon I, Flash T, Edelman S (1998) Learning visually guided grasping: a test case in sensorimotor learning. IEEE Trans Syst, Man Cybern Part A 28(3):266–276
Lane D, Davies J, Casalino G, Bartolini G, Cannata G, Veruggio G, Canals M, Smith C, O’Brien D, Pickett M, Robinson G, Jones D, Scott E, Ferrara A, Angelleti D, Coccoli M, Bono R, Virgili P, Pallas R, Gracia E (1997) AMADEUS: advanced manipulation for deep underwater sampling. IEEE Robot Autom Mag 4(4):34–45. doi:10.1109/100.637804
Liu H, Iberall T, Bekey G (1989) The multi-dimensional quality of task requirements for dextrous robot hand control. In: IEEE international conference on robotics and automation (ICRA’89), pp 452–457
Malis E (2004) Improving vision-based control using efficient second-order minimization techniques. In: 2004 IEEE international conference on robotics and automation, ICRA’04, vol 2, pp 1843–1848. doi:10.1109/ROBOT.2004.1308092
Mallios A, Ridao P, Carreras M, Hernandez E (2011) Navigating and mapping with the SPARUS AUV in a natural and unstructured underwater environment. In: OCEANS 2011, Waikoloa, Hawaii, pp 1–7
Marani G, Choi SK, Yuh J (2009) Underwater autonomous manipulation for intervention missions AUVs. Ocean Eng 36(1):15–23. doi:10.1016/j.oceaneng.2008.08.007
Marani G, Yuh J (2014) Introduction to autonomous manipulation—case study with an underwater robot, SAUVIM. Springer tracts in advanced robotics, vol 102. Springer, Berlin
Miller AT, Knoop S, Christensen HI, Allen PK (2003) Automatic grasp planning using shape primitives. In: Proceedings of the IEEE international conference on robotics and automation (ICRA’03), Taipei, Taiwan, pp 1824–1829
Morales A, Asfour T, Azad P, Knoop S, Dillmann R (2006) Integrated grasp planning and visual object localization for a humanoid robot with five-fingered hands. In: IEEE/RSJ international conference on intelligent robots and systems, Beijing, China, pp 5663–5668
Morales A, Chinellato E, Fagg A, del Pobil A (2004) Experimental prediction of the performance of grasp tasks from visual features. Int J Humanoid Robot 10(1):671–691
Morales A, Recatalá G, Sanz P, del Pobil A (2001) Heuristic vision-based computation of planar antipodal grasps on unknown objects. In: IEEE international conference on robotics and automation (ICRA), vol 1, pp 583–588. doi:10.1109/ROBOT.2001.932613
Palomeras N, Ribas D, Vallicrosa G, Ridao P, Carreras M (2014) Autonomous I-AUV docking for fixed-base manipulation. In: 19th world congress of the international federation of automatic control (IFAC), pp 12160–12165. doi:10.3182/20140824-6-ZA-1003.01878
Peñalver A, Pérez J, Fernández JJ, Sales J, Sanz PJ, García JC, Fornas D, Marín R (2014) Autonomous intervention on an underwater panel mockup by using visually-guided manipulation techniques. In: 19th world congress of the international federation of automatic control (IFAC), pp 5151–5156. doi:10.3182/20140824-6-ZA-1003.02545
Prats M, Garcia J, Wirth S, Ribas D, Sanz P, Ridao P, Gracias N, Oliver G (2012) Multipurpose autonomous underwater intervention: a systems integration perspective. In: 2012 20th mediterranean conference on control automation (MED), pp 1379–1384. doi:10.1109/MED.2012.6265831
Prats M, Pérez J, Fernández J, Sanz P (2012) An open source tool for simulation and supervision of underwater intervention missions. In: 2012 IEEE/RSJ international conference on intelligent robots and systems (IROS), pp 2577–2582. doi:10.1109/IROS.2012.6385788
Prats M, del Pobil AP, Sanz PJ (2013) Robot physical interaction through the combination of vision, tactile and force feedback. Applications to assistive robotics, Springer tracts in advanced robotics, vol 84. Springer, Berlin
Prats M, Ribas D, Palomeras N, García J, Nannen V, Wirth S, Fernández J, Beltrán J, Campos R, Ridao P, Sanz P, Oliver G, Carreras M, Gracias N, Marín R, Ortiz A (2012) Reconfigurable AUV for intervention missions: a case study on underwater object recovery. Intell Serv Robot 5(1):19–31. doi:10.1007/s11370-011-0101-z
Pérez J, Sales J, Prats M, Martí JV, Fornas D, Marín R, Sanz PJ (2013) The underwater simulator UWSim: benchmarking capabilities on autonomous grasping. In: 11th international conference on informatics in control, automation and robotics (ICINCO)
Quigley M, Conley K, Gerkey BP, Faust J, Foote T, Leibs J, Wheeler R, Ng AY (2009) ROS: an open-source robot operating system. In: ICRA workshop on open source software
Ribas D, Palomeras N, Ridao P, Carreras M (2012) Girona 500 AUV, from survey to intervention. IEEE/ASME Trans Mechatron (Focused Section on Marine Mechatronic Systems) 17(1):46–53
Rigaud V, Coste-Maniere E, Aldon M, Probert P, Perrier M, Rives P, Simon D, Lang D, Kiener J, Casal A, Amar J, Dauchez P, Chantler M (1998) Union: underwater intelligent operation and navigation. IEEE Robot Autom Mag 5(1):25–35. doi:10.1109/100.667323
Rusu R, Holzbach A, Diankov R, Bradski G, Beetz M (2009) Perception for mobile manipulation and grasping using active stereo. In: 9th IEEE-RAS international conference on humanoid robots, humanoids 2009, pp 632–638. doi:10.1109/ICHR.2009.5379597
Sales J, Santos L, Sanz PJ, Dias J, García JC (2013) Increasing the autonomy levels for underwater intervention missions by using learning and probabilistic techniques. In: First Iberian robotics conference (ROBOT 2013), Madrid, Spain
Sanz PJ, Marín R, Sales J, Oliver G, Ridao P (2012) Recent advances in underwater robotics for intervention missions. Soller harbor experiments, Low-cost books
Sanz PJ, Marín R, Sánchez JS (2005) Including efficient object recognition capabilities in online robots: from a statistical to a neural-network classifier. IEEE Trans Syst, Man, Cybern Part C: Appl Rev 35(1):87–96. doi:10.1109/TSMCC.2004.840055
Sanz PJ, Prats M, Ridao P, Ribas D, Oliver G, Ortiz A (2010) Recent progress in the RAUVI project: a reconfigurable autonomous underwater vehicle for intervention. In: 52nd international symposium ELMAR-2010, Zadar, Croatia, pp 471–474
Sanz PJ, Requena A, Iñesta JM, del Pobil AP (2005) Grasping the not-so-obvious: vision-based object handling for industrial applications. IEEE Robot Autom Mag 12(3):44–52. doi:10.1109/MRA.2005.1511868
Sanz PJ, Ridao P, Oliver G, Casalino G, Petillot Y, Silvestre C, Melchiorri C, Turetta A (2013) TRIDENT: an European project targeted to increase the autonomy levels for underwater intervention missions. In: OCEANS’13 MTS/IEEE conference, San Diego
Shimoga KB (1996) Robot grasp synthesis algorithms: a survey. Int J Robot Res 15(3):230–266. doi:10.1177/027836499601500302
Simetti E, Casalino G, Torelli S, Sperinde A, Turetta A (2013) Experimental results on task priority and dynamic programming based approach to underwater floating manipulation. In: OCEANS—Bergen, 2013 MTS/IEEE, pp 1–7. doi:10.1109/OCEANS-Bergen.2013.6608016
Stansfield S (1991) Robotic grasping of unknown objects: a knowledge-based approach. Int J Robot Res 10(4):314–326. doi:10.1177/027836499101000402
Wang H, Rock S, Lee M (1995) Experiments in automatic retrieval of underwater objects with an AUV. In: OCEANS’95. Proceedings of conference on MTS/IEEE. Challenges of our changing global environment, vol 1, pp 366–373. doi:10.1109/OCEANS.1995.526796
Acknowledgments
This research was partly supported by the Spanish Ministry of Research and Innovation under grant DPI2011-27977-C03 (TRITON Project) and by Foundation Caixa Castelló-Bancaixa and Universitat Jaume I under grant PI 1B2011-17.
Copyright information
© 2015 Springer International Publishing Switzerland
Pérez, J. et al. (2015). Robotic Manipulation Within the Underwater Mission Planning Context. In: Carbone, G., Gomez-Bravo, F. (eds) Motion and Operation Planning of Robotic Systems. Mechanisms and Machine Science, vol 29. Springer, Cham. https://doi.org/10.1007/978-3-319-14705-5_17
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-14704-8
Online ISBN: 978-3-319-14705-5