1 Introduction

The bridge inspection sector has a large variety of procedures to ensure the safety of its facilities and personnel while tightly controlling costs. These procedures involve extensive inspections, many of which must be performed at height and with sensors that need to be in contact with the surfaces being inspected. Contact inspection is traditionally performed by technicians who access the specific inspection points using man-lifts, cranes, scaffolds, or rope-access techniques. In addition, only box-type bridges allow internal inspection, since a person can get inside them; even so, these bridges must also be inspected from the outside.

There is strong demand among bridge operators for alternatives to manual inspection. Sensors deployed at selected locations on the bridge, as in (Akhondi et al. 2010) and (Savazzi et al. 2013), provide point measurements at a high rate, which is suitable for intensive monitoring of a few small critical structures but is not ideal for covering a full bridge. The use of aerial robots is very promising for reducing the cost and time required to perform complete inspections at height on bridges. However, two main technological challenges must be solved. The first is the autonomous navigation of aerial vehicles without GNSS, which will allow the automation of general visual inspections using drones. The second is the use of drones for contact inspection, applied to detailed measurements at specific spots.

2 Aerial Visual Inspection Robot

The AERO-CAM RPAS is the aerial platform used for the visual inspection of the viaduct. The UAV is based on a DJI Matrice 600 Pro customized for the purpose of the mission. The current version of the UAV is shown in Fig. 1, which presents the CAD model of the drone and its final implementation. As can be seen, the UAV has a six-motor configuration, so the aircraft is robust against the failure of one motor thanks to the redundancy this configuration provides.

Fig. 1.
figure 1

AERO-CAM aerial robot

The aerial robot has been designed to be as small and light as possible. Moreover, the flight time has been optimized, maximizing endurance to achieve the required flight times with the needed payload, which in this case consists of the camera and the localization sensors. AERO-CAM operates BVLOS (Beyond Visual Line of Sight), and the camera is integrated on top of the aircraft to allow photos to be taken of the underside of the viaduct deck, as well as to obtain cleaner pictures of the horizontal surfaces of the pillars and any other walls. The electronics of the UAV are protected from dust by an electronic cover. Because loss of GNSS signal is expected under the viaduct or bridge, AERO-CAM integrates a lidar underneath the airframe, between the two legs of the landing skid.

Regarding the hardware architecture for the integration of the camera in charge of the visual inspection, two Wi-Fi antennas are integrated onboard the aerial robot. These antennas provide the communication link between the main computer, located at the bottom of the aircraft, and the computer of the camera system. The schematic of the hardware integration of the camera and the localization sensors is shown in Fig. 2.

Fig. 2.
figure 2

Aerial robot hardware integration relative to the main added components

3 GNSS-Free Aerial Robot Navigation

To perform GNSS-free localization and navigation, the drone is equipped with an Ouster lidar sensor, model OS0 with 128 channels. As mentioned, this lidar is used to localize the drone's movements in flight, but it is also used to locate the drone's reference system with respect to the global inspection reference system. To complement the lidar, the drone has a 9-axis IMU that works at 400 Hz. In addition, the system includes a laser altimeter that allows more precise landings. The processing of all the sensor readings is carried out on an onboard computer, an Intel NUC i7.

To locate the aerial robot (or drone) in the same coordinate system as the one used for the location of the detected defects on the bridge, a 3D map must first be created using a total station. This map is a point cloud that defines the coordinate origin for the entire inspection system. To create this map, operators must ensure that the ENU convention is followed (x = east, y = north, z = up). The map can be reused in future inspections of the viaduct.

The drone system has its own localization and navigation algorithm (\({{\varvec{T}}}_{LD}\)) with its origin at the take-off point (\({\varvec{L}}\)). Since this location may vary, the complete system requires a second localization system that establishes the 3D transformation between the initial drone pose and the global reference system (\({{\varvec{T}}}_{GL}\)), expressed in ENU coordinates at the origin of the map created by the total station (\({\varvec{G}}\)). These transforms are visualized in the following figure, where \({\varvec{i}}\) is an arbitrary time instant (Fig. 3):

Fig. 3.
figure 3

Full transformation system.
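For clarity, the chain of transforms can be sketched with 4 × 4 homogeneous matrices: the drone pose in the global ENU map at instant \({\varvec{i}}\) is obtained by composing \({{\varvec{T}}}_{GL}\) with \({{\varvec{T}}}_{LD}\). This is only an illustrative sketch; the numeric values below are placeholders.

```python
import numpy as np

def compose(T_a_b: np.ndarray, T_b_c: np.ndarray) -> np.ndarray:
    """Chain two 4x4 homogeneous transforms: frame a <- b and frame b <- c."""
    return T_a_b @ T_b_c

# T_GL: pose of the take-off frame L expressed in the global map frame G
# T_LD: pose of the drone D expressed in the take-off frame L
#       (output of the onboard relative localization)
T_GL = np.eye(4)                    # placeholder: identity until estimated
T_LD = np.eye(4)
T_LD[:3, 3] = [2.0, 0.5, 1.2]       # e.g. the drone 2 m east of the take-off point

# Drone pose in the global (ENU) map frame at time instant i
T_GD = compose(T_GL, T_LD)
print(T_GD[:3, 3])                  # drone position in ENU coordinates
```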

Therefore, the system has two localization processes that are described below:

  • A. Global localization system

The function of the global localization system is to find the transform \({{\varvec{T}}}_{GL}\), which establishes the connection between the global reference system of the 3D map of the viaduct and the drone localization system. Finding this transform is crucial, as it allows the drone to safely navigate to areas the inspector has selected as interesting to inspect without maintaining the same take-off position between flights. This is especially beneficial because it removes the dependence on the total station beyond the creation of the initial 3D map. In addition, since the viaduct may be in an inaccessible area, the take-off position may not be replicable between flights or even between inspections on different days.

The transform \({{\varvec{T}}}_{GL}\) is fixed and will only vary during the flight if another transform with better accuracy has been obtained. Therefore, this global localization system is designed to calculate the transform at the initial instant, just before the drone takes off. During the flight, this system can continue working by calculating the transform between the drone's current position and the base map (\({{\varvec{T}}}_{GD}\)), so that the stored transform is updated whenever its accuracy improves. This case can be visualized in the following figure (Fig. 4).

Fig. 4.
figure 4

In-air global localization transformation system.
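A minimal sketch of this in-flight refinement follows, assuming the registration quality is summarized by a scalar fitness score; this criterion is hypothetical, since the paper does not specify how the accuracy of the two estimates is compared.

```python
import numpy as np

def refine_T_GL(T_GD_est: np.ndarray, T_LD: np.ndarray,
                T_GL_current: np.ndarray,
                new_fitness: float, current_fitness: float) -> np.ndarray:
    """Keep the in-flight estimate only if its registration fitness is better.

    T_GD_est : drone pose in the global map from an in-flight registration
    T_LD     : drone pose w.r.t. the take-off point at the same instant
    The fitness-based acceptance test is an assumption for illustration.
    """
    if new_fitness > current_fitness:
        # T_GD = T_GL @ T_LD  =>  T_GL = T_GD @ inv(T_LD)
        return T_GD_est @ np.linalg.inv(T_LD)
    return T_GL_current
```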

This localization system does not have to be executed on the onboard computer; it can be executed on a ground computer, since the calculation does not need to be instantaneous. In this case, the onboard computer sends the data to the ground computer, which performs the calculations and sends the result back to the drone.

To find the correspondence between the prior 3D map generated by the total station and the data from the onboard sensors, we are working on an algorithm that makes use of the geometric characteristics of the point clouds. Specifically, the algorithm extracts FPFH (Fast Point Feature Histogram) features from both clouds. These features encode the geometric properties of the k nearest neighbors of a given point by generalizing the mean curvature around that point into a multidimensional histogram of values. Among their advantages, these features are invariant to position and tolerate a certain level of noise. After this feature-extraction step, the algorithm applies RANSAC (Random Sample Consensus) to find a first approximate alignment between the two inputs. The result is then corrected using the assumptions of the problem, and ICP (Iterative Closest Point) is applied to refine it. The following figure shows a block diagram of the main steps of the algorithm (Fig. 5):

Fig. 5.
figure 5

Point clouds matching diagram.
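As an illustrative sketch of this coarse-to-fine pipeline (not the project's actual implementation), the same steps can be reproduced with the open-source Open3D library. The voxel size and thresholds below are placeholders, the API is version-dependent, and the problem-specific correction step mentioned above is omitted.

```python
import open3d as o3d

def register_scan_to_map(scan, map_cloud, voxel=0.3):
    """Coarse-to-fine alignment of an onboard lidar scan to the total-station map."""
    def preprocess(pcd):
        down = pcd.voxel_down_sample(voxel)
        down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
        return down, fpfh

    src, src_fpfh = preprocess(scan)
    tgt, tgt_fpfh = preprocess(map_cloud)

    # Coarse alignment: RANSAC over FPFH feature correspondences
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src, tgt, src_fpfh, tgt_fpfh, mutual_filter=True,
        max_correspondence_distance=1.5 * voxel,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        ransac_n=3,
        checkers=[o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(1.5 * voxel)],
        criteria=o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

    # Fine alignment: point-to-plane ICP seeded with the RANSAC result
    fine = o3d.pipelines.registration.registration_icp(
        src, tgt, voxel, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return fine.transformation  # 4x4 estimate (T_GL before take-off, T_GD in flight)
```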

  • B. Relative localization system

The function of the relative localization system is to find the transform \({{\varvec{T}}}_{LD}\), which describes the motion of the drone from its take-off point. This take-off point is a flat area near the viaduct, assumed to be level with respect to the horizon so that the drone can take off safely. This localization is performed only with the onboard sensors and does not require any prior data, only the current sensor readings. Of course, it is desirable for this localization to be as accurate as possible and, above all, to minimize drift over time as much as possible, to avoid a significant divergence between reality and the pose in which the drone believes it is (Fig. 6).

Fig. 6.
figure 6

Relative localization transformation system.

The relative localization system makes use of the lidar and the 9-axis IMU to calculate the drone's pose at each instant. The algorithm operates at high frequency in real time, updating the pose at the same frequency as the IMU, which in the case of AERO-CAM is 400 Hz. This algorithm is executed entirely onboard the drone, on the Intel NUC. Despite running in real time, it has the highest processing load among the programs executed by the drone.

The relative localization system is of vital importance to the system, as it is used to provide localization feedback to the drone control algorithm so it can ensure a stable flight by navigating autonomously to the desired target points.

The localization algorithm is based on LIO-SAM (Shan et al. 2020), and the following general architecture, adapted to AERO-CAM, is proposed (Fig. 7):

Fig. 7.
figure 7

LIDAR+IMU localization general architecture.

This architecture establishes a tightly coupled fusion between the lidar and the IMU, building a factor graph in which the measurements made by the sensors are integrated to build and optimize the map. The IMU pre-integration is based on (Forster et al. 2016). An important clarification is that the magnetometer is not used when integrating the IMU measurements; only the angular velocities and linear accelerations are used. Since double integration of an IMU leads to large drift, the architecture proposes short-term integration of the IMU, correcting its bias thanks to lower-frequency localization in the built map using the information from the lidar point cloud. To process everything in real time, the algorithm discards lidar readings if they are not sufficiently displaced with respect to the previous reading considered (lidar keyframes). In this way, a lot of redundant information that would increase the computational load is discarded. Between lidar keyframes, the IMU readings are integrated, converging in a node of the graph that represents the localization state at that instant. The following diagram, simplified from (Shan et al. 2020), shows a schematic of the processing of the sensor readings (Fig. 8):

Fig. 8.
figure 8

Localization factor graph example.
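A minimal sketch of the lidar keyframe gating described above is given below, assuming poses are available as 4 × 4 homogeneous matrices; the displacement thresholds are illustrative, not the values used on AERO-CAM.

```python
import numpy as np

def is_keyframe(T_prev_key: np.ndarray, T_current: np.ndarray,
                min_translation=1.0, min_rotation_deg=15.0) -> bool:
    """Accept a new lidar scan as a keyframe only if the drone has moved enough.

    T_prev_key, T_current : 4x4 poses of the previous keyframe and the current scan.
    """
    delta = np.linalg.inv(T_prev_key) @ T_current
    translation = np.linalg.norm(delta[:3, 3])
    # Rotation angle recovered from the trace of the 3x3 rotation block
    cos_angle = np.clip((np.trace(delta[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    rotation_deg = np.degrees(np.arccos(cos_angle))
    return translation > min_translation or rotation_deg > min_rotation_deg
```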

As already mentioned, the result of all this processing is the localization of the drone without GPS at a high frequency (400 Hz), which serves the control algorithm to operate the AERO-CAM.

4 Aerial Contact Inspection Vehicle

The design and development of the contact drone, AeroX, was based on requirements driven by the needs of inspection-sector end users, in order to provide successful solutions to the industry. The operation of the robotic solution should be robust under nominal working conditions at height, including potentially strong wind gusts. Besides, the proposed aerial robotic system should integrate with the current maintenance operations of many industries and should be easily operated by the personnel currently involved in bridge inspections, who often have little training in robotics. The main requirements considered in the design of AeroX can be summarized as follows:

  • It should keep a sensor in steady physical contact with a point on the surface where a measurement is going to be taken.

  • It should mount a variety of physical contact sensors.

  • The aerial platform should have fast reactivity and controllability in order to fly close to obstacles with wind gusts.

  • It should be easy to operate for the personnel currently involved in bridge inspection, with little training.

  • Its operation should be easily adaptable to the specific inspection procedures currently used in bridge inspections.

  • The robotic system should be very robust and reliable for everyday operations in industrial settings.

The proposed aerial robotic system—the AeroX robot—has three main components, see Fig. 9:

Fig. 9.
figure 9

The AeroX robot keeping physical contact during a validation experiment

  • The aerial platform. Its design enables applying the contact forces required for physical inspection on the surface. The aerial platform is endowed with tilted rotors, which increase its stability and its reactivity to compensate perturbations.

  • The robotic contact device. It is responsible for providing the capability of steady contact with surfaces. It is a mechatronic device with six DoF and has the robotic end-effector at its end. Due to its efficient design, surface contact forces are transmitted to the centre of mass (CoM) of the aerial robot which enables their efficient and effective compensation.

  • The robotic end-effector. Located at the end of the robotic contact device, it is endowed with wheels for moving on the surface under inspection. It integrates the sensors to be used for inspection and also additional sensors to facilitate operation. A variety of robotic end-effectors can be mounted on the robotic contact device. An end-effector with three wheels, an ultrasonic emitter and a sensor, and a camera is presented in this paper.

The operation of AeroX has two main modes. In the free-flight mode, the pilot guides the aerial robot (manually or in an assisted way) to the element to be inspected and moves the robotic contact device to the selected inspection point until the robotic end-effector is in contact with the surface. As soon as contact has been made, the contact-flight mode starts: the pilot activates the fully autonomous stabilization mode, which keeps the aerial vehicle steady w.r.t. the surface contact point with uninterrupted contact, using only the measurements from the robot's internal sensors. The inspector teleoperates the movement of the wheels of the robotic end-effector on the surface. As a result, when the inspector moves the wheels of the end-effector, the surface contact point changes and the aerial vehicle moves to keep its position steady w.r.t. the surface contact point. Hence, the aerial robot follows the end-effector commanded by the inspector.

5 Main AeroX Components

This section summarizes the main components of AeroX, including the aerial platform, the robotic contact device, and the robotic end-effector. For a better understanding of the overall aerial robot, the overall system architecture is shown in Fig. 10.

Fig. 10.
figure 10

AeroX UAV hardware architecture

  • A. Robotic Contact Device

One of the main components of the proposed aerial robotic system is its novel robotic contact device, whose patent was granted at the beginning of 2018 (Trujillo et al. 2018). Due to its mechanical design and its integration into the aerial platform, all the surface contact forces are transmitted to the CoM of the aerial robot, which simplifies the stabilization and control of the aerial vehicle. With the robotic contact device, the aerial robot is capable of absorbing perturbations, keeping a robust and stable contact perpendicular to the contacted surface. The main characteristics of the robotic contact device are:

  • Its mechanical design and integration with the aerial platform enable inspecting surfaces with different positions and orientations such as vertical, horizontal and inclined surfaces, see pictures from outdoor experiments in Fig. 11.

Fig. 11.
figure 11

AeroX in contact inspection of surfaces with different positions and orientations in different outdoor experiments and with different end-effectors: inspecting pipes (bottom-left and bottom-right), horizontal ceilings (top-right) and vertical surfaces (top-left).

  • The robotic contact device transmits all forces directly to the CoM of the aerial vehicle, simplifying its stabilization and control.

  • All the joints of the robotic contact device are equipped with shock absorbers, which increase perturbation rejection and surface compliance during contact.

  • The joints of the robotic contact device are equipped with internal sensors. During contact-flight their measurements are used to estimate and control the relative position of the aerial robot w.r.t. the contact point using only its internal sensors and without any external positioning system or sensor (Fig. 12).

Fig. 12.
figure 12

Drawing of AeroX. Its size is 170 × 230 cm.

  • B. The Aerial Platform

The controllability and reactivity needed to fly close to obstacles and withstand wind gusts, the manoeuvrability and agility to fly in constrained industrial environments, and the robustness required for everyday operation have been the main drivers in the design of the aerial platform.

The proposed aerial vehicle has eight 2 kW motors (KDE5215XF-220), providing a maximum take-off weight of 25 kg. The motors are positioned such that the robotic contact device can rotate around the vehicle CoM without colliding with the propellers. Each motor-propeller set is alternately tilted 30° around its motor boom. The tilt of the motors is fixed and cannot be controlled; the actuation variables that can be controlled are the eight desired motor forces. Tilted rotors enable independent control of the six DoF of the vehicle, adding the capability to control lateral linear forces and providing full control of the aerial platform during hovering, which greatly improves the aircraft controllability.
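As a hedged illustration of how thrusts on alternately tilted rotors can produce a full 6-DoF wrench, the sketch below builds an allocation matrix from an assumed octorotor geometry and solves for the motor thrusts by least squares. The arm length, alternating 30° tilt direction, spin pattern and drag-to-thrust ratio are illustrative values, not the AeroX parameters.

```python
import numpy as np

def allocation_matrix(n_rotors=8, arm_length=0.6, tilt_deg=30.0, drag_coeff=0.02):
    """Build the 6x8 map from individual rotor thrusts to the body wrench."""
    A = np.zeros((6, n_rotors))
    for i in range(n_rotors):
        a = 2.0 * np.pi * i / n_rotors
        r = arm_length * np.array([np.cos(a), np.sin(a), 0.0])    # rotor position
        boom = np.array([np.cos(a), np.sin(a), 0.0])              # tilt axis (boom)
        tilt = np.radians(tilt_deg) * (1 if i % 2 == 0 else -1)   # alternating tilt
        # Rodrigues rotation of the nominal thrust axis [0, 0, 1] about the boom
        z = np.array([0.0, 0.0, 1.0])
        u = z * np.cos(tilt) + np.cross(boom, z) * np.sin(tilt)
        spin = 1 if i % 2 == 0 else -1                            # CW/CCW drag torque
        A[:3, i] = u                                              # force per unit thrust
        A[3:, i] = np.cross(r, u) + spin * drag_coeff * u         # torque per unit thrust
    return A

# Least-squares thrust allocation for a desired 6-DoF wrench [Fx, Fy, Fz, Mx, My, Mz]
A = allocation_matrix()
wrench = np.array([0.0, 5.0, 180.0, 0.0, 0.0, 0.0])   # e.g. lateral push plus weight
thrusts = np.linalg.pinv(A) @ wrench                   # one entry per motor
```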

Furthermore, as demonstrated in (Michieletto et al. 2018), six tilted propellers are sufficient to be able to control the six DoF of the vehicle and ensure control of four DoF in case one motor fails.

The adopted eight-motor configuration was selected to overcome the failure of one motor while keeping full six-DoF controllability of the robot. This makes this aerial vehicle the first of its class, a feature that is an important step towards its future industrialization.

The aerial platform design and the integration of its hardware elements have been carefully implemented to distribute the weight so that the CoM of the robot remains at the geometric centre of the aircraft. Besides, the robotic contact device is integrated such that the surface contact forces are transmitted to the CoM of the aerial vehicle, which improves stability and controllability during contact-flight mode.

  • C. The robotic end-effector

The end-effector is the device connected to the robotic contact device; it contains the ultrasonic sensor used to accurately measure the depth of cracks, see Fig. 13. The following requirements for the end-effector were identified in coordination with the end users and with the ultrasonic sensor developers:

Fig. 13.
figure 13

General view of the AeroX end effector.

  • The end-effector must be able to move along the surface of the bridge. The main reason for this requirement is that it is very difficult to position the ultrasonic sensor from the air exactly at the place where the end user wants to perform the measurement. Even in an autonomous mode, the navigation precision is not sufficient.

  • The end-effector should incorporate one or two cameras in order to allow the ground operator to know exactly where the ultrasonic inspection is going to be performed.

  • The end-effector must include all its components, in order to facilitate installation on the robotic arm and also to allow exchanging the end-effector if required.

  • The end-effector should be light and small enough to be carried by the AeroX UAV.

  • The end-effector should be designed so the ultrasonic sensor can be operated remotely from the ground.

The end-effector main components can be seen in Fig. 14. They are:

Fig. 14.
figure 14

End effector description

  • Motion subsystem: a set of three omnidirectional wheels is used. These wheels give the end-effector full motion capability on the surface, such as moving in any direction in the plane or rotating around a single point at any time, and they facilitate positioning the sensor at the precise place where the measurement is required. The subsystem also includes the electronics and communications required to operate it from the ground (a kinematic sketch of the wheel commands is given after this list).

  • Sensor subsystem: the sensor subsystem is composed of the ultrasonic sensor, the electronics required to process the information and the actuators required to move the ultrasonic probes (and apply a specific contact force towards the surface).

  • Video cameras: two cameras are installed in the system. One camera is integrated in the end-effector and makes it possible to know, with the required precision, where the ultrasonic measurement is being performed. This camera is positioned so that the crack is seen in the centre of the image, i.e. midway between both probes. A second camera is used to obtain a wider view and an idea of the position of the end-effector. This second camera is actually installed on the robotic arm, in order to cover enough of the infrastructure surface and increase the situational awareness of the ground operator.
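As referenced in the motion-subsystem item above, a minimal kinematic sketch of how a commanded planar motion of the end-effector could map to the three omnidirectional wheel speeds is given below. The wheel geometry (radius, distance from the tool centre, placement angles) is assumed for illustration and is not the actual AeroX end-effector geometry.

```python
import numpy as np

def wheel_speeds(vx, vy, omega, wheel_radius=0.03, base_radius=0.08,
                 wheel_angles_deg=(0.0, 120.0, 240.0)):
    """Map a commanded end-effector motion on the surface plane (vx, vy in m/s,
    omega in rad/s) to the angular speed of each of the three omni wheels."""
    speeds = []
    for ang in np.radians(wheel_angles_deg):
        # Linear speed of the wheel contact point along its rolling direction
        v = -np.sin(ang) * vx + np.cos(ang) * vy + base_radius * omega
        speeds.append(v / wheel_radius)          # convert to wheel angular speed
    return np.array(speeds)

# e.g. a teleoperation command: slide 5 cm/s towards the crack, no rotation
print(wheel_speeds(vx=0.05, vy=0.0, omega=0.0))
```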

This specific end-effector, designed for applications on concrete surfaces such as bridges, has been developed; the result is shown in Fig. 15. The end-effector has three omnidirectional motorized wheels distributed so that the operator can move the tool in every direction, which is essential for its correct positioning with respect to the crack, since the tool has to be aligned with the crack centre. Additionally, two micro-motors drive the up-and-down mechanism that places the ultrasonic transducers against the concrete. Additional sensors have been integrated on the end-effector to be used at the same time as the UT transducers, such as a crack-width measurement system, as shown in Fig. 15. The piezoelectric emitter was mounted on one of the two micro-motors of the end-effector, and the acousto-optical microsensor was attached to it using a rigid plastic support. In this way, both devices could be brought into contact with the surface of the concrete by the end-effector during the measurements. This design can be seen in more detail in Fig. 15.

Fig. 15.
figure 15

Images of the integration experiments of the end-effector for concrete crack characterization. The left images show different perspectives of the end-effector. The top-right image shows the operator view from the end-effector internal camera aligned with a crack while taking a measurement. The bottom-right image shows the integration of a crack-width measurement sensor, which can be used at the same time as the ultrasonic transducers for crack depth.

6 Experimental Validation

Several experiments were conducted to validate the UAV platforms and the associated technologies (GNSS-free navigation and contact inspection). Afterwards, the system was tested on a real viaduct and on a bridge in operation in Spain, where the general visual inspection, the contact device, and the end-effector for depth measurements in concrete were tested.

  • A. Validation for Visual Inspections in Viaducts

A number of experiments were performed first to validate both the GNSS-free navigation solution and the general visual inspection procedure using the AERO-CAM platform. These experiments took place at a viaduct. A total of 360 images were obtained with the AERO-CAM platform. The flights were performed under the viaduct, in the space between two pillars. The captured photos contain visual information of both pillars and the viaduct deck (see Fig. 16).

All images are in JPG format and were taken at a distance intended to be safe for the integrity of the AERO-CAM while still meeting the 1 mm per pixel density requirement (see Fig. 17).

Fig. 16.
figure 16

Two of the inspected areas

Fig. 17.
figure 17

Example of images of the Álora viaduct dataset
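The 1 mm-per-pixel requirement mentioned above directly constrains the stand-off distance of the camera. As a rough sketch using the pinhole-camera ground-sampling-distance relation, the maximum distance can be estimated as below; the focal length and pixel pitch are hypothetical values, since the paper does not give the AERO-CAM camera parameters.

```python
def max_standoff_distance(gsd_m=0.001, focal_length_mm=35.0, pixel_pitch_um=4.5):
    """Largest camera-to-surface distance that still meets a ground sampling
    distance (GSD) requirement, from the pinhole model:
        GSD = pixel_pitch * distance / focal_length
    Focal length and pixel pitch here are assumed, not the AERO-CAM camera's.
    """
    return gsd_m * (focal_length_mm * 1e-3) / (pixel_pitch_um * 1e-6)

print(f"{max_standoff_distance():.1f} m")   # about 7.8 m for these example values
```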

  • B. Validation for Bridges using Ultrasonic Sensors for Concrete

The second set of experiments was performed at a bridge in order to test the capability of contacting a surface of the bridge, moving along it, and taking ultrasonic measurements to obtain the depth of cracks. Unfortunately, there were no visible cracks at the selected bridge, and we had to limit the measurements to the concrete surface velocity.

The surface velocity measurement is the first step that has to be performed before the crack measurement. It provides the transmission velocity of the ultrasonic waves through the concrete. If a crack depth is to be measured, this value is needed beforehand and is typically taken near the crack. Afterwards, the crack depth is obtained from the difference between the measured transit times. This also means that if the system is able to take a surface velocity measurement, it validates the crack-depth capability, because exactly the same tools and procedure are used.
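The paper does not state the exact formulation used; as a sketch, under the common indirect (surface) arrangement with both transducers placed symmetrically at a distance \(x\) from the crack, the surface transit time \(t_s\) measured over the same span \(2x\) on sound concrete gives the pulse velocity \(V = 2x/t_s\), and the wave crossing the crack is diffracted around its tip, so its transit time \(t_c\) yields the depth

\[ h \;=\; \sqrt{\left(\frac{V\,t_c}{2}\right)^{2} - x^{2}} \;=\; x\sqrt{\left(\frac{t_c}{t_s}\right)^{2} - 1}. \]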

The operator had two views for controlling the end-effector: a general view used for moving while avoiding obstacles, and a close view of the sensors used to check their coupling and the condition of the working surface. Using these views, the operator navigated from the initial contact point to the one defined in the operation plan. Once at that point, the operator checked the surface with the close-view camera, moved the sensors against the surface, and proceeded with the measurement of the surface velocity.

After two measurements on the vertical surface of the pillar, the pilot detached from the contact, and the ground operator commanded a change of configuration of the robotic arm to inspect above the UAV, in order to touch the first beam of the first span.

After making contact with the beam of the bridge, the operator moved the end-effector along it to reach the planned point. Even though the end-effector is easy to control, this operation had to be performed with caution due to the proximity of the wheels to the edges of the beam. Nevertheless, with just a few minutes of training, an operator with no previous experience can control the inspection system. After reaching the target point, the operator took another surface velocity measurement and concluded the operation, giving the pilot the end-of-mission instruction. The pilot commanded detachment from the beam surface while the robotic arm returned to its standard front configuration, and the aircraft proceeded to land.

7 Conclusions

A novel solution for complete bridge inspection has been designed, and its validation is presented in this paper. The system is formed by two drones: one for general visual inspection and a second one for specific, detailed inspections using ultrasonic contact measurements. The first is an aerial robot that can obtain pictures of the overall bridge fully autonomously thanks to a GNSS-free navigation system. The second is the AeroX drone platform, a novel solution for the inspection of difficult-to-access areas. AeroX can perform contact inspection thanks to its robotic contact device, which is equipped with an end-effector. Both drones were validated, and results and videos of the validation experiments have been presented.

In conclusion, a new opportunity for the inspection of bridges and viaducts has arisen, allowing complete inspection operations on bridges in less time, reducing costs, improving the quality of the inspection, and increasing the safety of operators.