1 Introduction

Infrastructures are exposed to different loading conditions: recurrent ones due to vehicular traffic and extraordinary ones caused by earthquakes, wind, and heavy rain. The induced stresses may cause structural deterioration and damage, which can even lead to catastrophic collapses with significant socio-economic losses [1]. Therefore, reaching or increasing a level of automation for the inspection and maintenance of infrastructure remains an active research topic. In recent years, the classical activities commonly conducted by human inspectors through visual quality control for damage assessment have been undergoing a deep renovation driven by tools made available by Information and Communication Technologies (ICT). For example, current visual inspection, which relies heavily on an inspector's subjective or empirical knowledge and can therefore lead to false evaluations [2], can be enhanced by robot-assisted operations.

Usually, the actions performed by inspectors require a long time to examine large areas, which may be difficult to access. Inspection can also be performed with specialized equipment such as large under-bridge units, large trucks, special elevating platforms, or scaffolding. These solutions are in most cases expensive and entail high logistical effort as well as high personnel costs for the specially trained machine operators. Such units can even interfere with the operational conditions of structures and infrastructure. Alternatively, specially trained staff, such as industrial climbers, can access the site for inspection, but they can rarely evaluate the influence that detected damage may have on the structure. Therefore, they can only take photos or videos of the concerned part of the structure, which must then be analyzed by civil engineers after data recording.

Meanwhile, the results obtained by more affordable visual inspection campaigns can be organized in a Bridge Management System (BMS), as in the case of DOMUS (in Italian, Diagnostica Opere d'arte Manutenzione Unificata Standard), which is currently used by the Italian National Railways Company (RFI) [3] to evaluate bridge condition and to manage maintenance. In addition, several Non Destructive Testing/Non Destructive Evaluation (NDT/NDE) technologies for structure and infrastructure inspection currently accompany visual inspection, such as radiographic testing, liquid penetrant tests, and active infrared thermography (e.g. [4, 5]). In particular, the latter references report that sensors such as a camera and/or an infrared (IR) camera are placed at a given distance from the surface to be analyzed. Additionally, sensors can be mounted at the tip of a robotic manipulator (or on UAVs or mobile robots), and their position and orientation can be controlled to ensure a given relative pose with respect to the observed object [6,7,8].

Recent works address the automation of inspection and maintenance tasks based on robotic systems [9, 10]. Existing automatic or robotic systems, based on ground or aerial solutions, have been proposed for the inspection of dangerous or hard-to-access sites, but at the present state of the art, human-based procedures have not yet been completely replaced. Examples of ground systems used for inspection are wheeled robots [11] and legged robots [12], but an efficient type of locomotion is the hybrid one, which combines the advantages of both, as discussed in [13, 14]. For the inspection of vertical surfaces, wall-climbing robots have been developed using magnetic devices [15] or vacuum suction techniques [16].

Recently, unmanned aerial vehicles (UAVs) have shown great advantages in inspection applications, offering capabilities such as extended flight time and high stability [17]. Remote-controlled UAVs equipped with high-definition photo and video cameras can be used to acquire high-quality inspection data.

Mobile or hybrid solutions with wheels and legs have been developed for automatic inspection [18, 19]. Although robotic systems for inspection, together with newer measurement techniques, can significantly enhance infrastructure inspections, their development level is still much lower than in other areas and needs to be improved. Indeed, automated inspection promises to decrease costs and to increase inspection speed, accuracy, and safety [20].

The integration of robotics, automation, and information and communication technologies (ICT) allows the creation of useful tools able to generate very reliable models, which are helpful in decision-making processes [21].

Most infrastructure and civil structures are made of concrete, steel, and masonry, which are prone to cracks due to creep, shrinkage, and corrosion of reinforcements. Crack information (e.g., the number of cracks and crack width and length) represents a current structural health indicator, which can be used for proper maintenance to improve structural safety [22,23,24,25]. Very often, different damages in buildings and bridges (e.g., cracking, spalling, deformation, or collapse-induced debris) can be captured using a commercial digital camera. Therefore, damage detection by visual inspection can be assisted by image analysis and processing. In this regard, two main tasks must be tackled, namely object recognition, and damage detection and quantification.

Standard techniques for object recognition can be based on Haar wavelets [26]. Another approach aims at finding correspondences between two images of the same scene [27]. Image processing methods can be classified as based on color information, textural information, or a combination of the two to segment and extract regions of interest in images. Texture- and color-based segmentation are the primary modes of segmentation employed for image analysis. While both approaches have important applications in image processing, color-based methods have been researched more extensively. Texture may be considered an innate property of surfaces, and texture-based techniques are particularly relevant when the regions of interest are more separable from the background by their texture than by their color.

Color detection techniques can be used for the detection and classification of local image structures (i.e. edges, corners, and T-junctions) when the discriminant is the color. Color is important for many applications in image processing and computer vision, such as image segmentation, image matching, object recognition, and visual tracking [28]. Color detection techniques allow fast processing and are highly robust to geometric variations of the object pattern and viewing direction [29]. One possible choice for the color space is RGB, which is widely used in many applications, as presented in [30]. In [31] a comparative analysis among different color spaces is proposed to evaluate the performance of color image segmentation using an automatic object image extraction technique. The study revealed that segmentation based on the RGB color space provided the best results compared to the other color spaces for the considered set of images. In [32] an image processing technique for rapid and automated crack detection based on the RGB color space was proposed. The comparative study among different color spaces in [33] does not show significant differences: all adopted color spaces actually provide a meaningful segmentation. Although RGB is not the best solution when great chromatic and intensity variability is present in the images, as in face recognition, for other applications involving crack detection and automatic or semi-automatic object image extraction, the RGB color space is one of the most used [31,32,33,34,35,36].
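As a minimal illustration of the color-space choice discussed above, the same reddish tone can be isolated either directly in RGB or after conversion to HSV using Python's standard colorsys module. The threshold values below are purely illustrative and are not taken from any of the cited studies:

```python
import colorsys

def reddish_rgb(r, g, b):
    """Naive RGB rule for a reddish tone (illustrative thresholds only)."""
    return r > 120 and r - g > 40 and r - b > 40

def reddish_hsv(r, g, b):
    """Same intent expressed in HSV: hue near 0 (red) and enough saturation."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return (h < 0.08 or h > 0.92) and s > 0.3

# A rust-like pixel passes both rules; a gray pixel passes neither
print(reddish_rgb(180, 80, 50), reddish_hsv(180, 80, 50))  # True True
print(reddish_rgb(120, 120, 120), reddish_hsv(120, 120, 120))  # False False
```

Working directly in RGB keeps the rule simple and fast, at the price of a stronger sensitivity to brightness changes, which is the trade-off the comparative studies above discuss.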

Recently, an algorithm based on Convolutional Neural Networks was considered in [40] to detect concrete cracks without calculating defect features [41, 42]. Furthermore, a modified Fusion Convolutional Neural Network architecture handling the multilevel convolutional features of sub-images was proposed and employed in [43] for crack identification in steel box girders containing complicated disturbing backgrounds and handwriting.

In this context, the present work proposes a procedure that starts from the use of aerial and ground robotics and permits defect recognition for railway bridges, together with the consequent description and evaluation of defect extension. The aim is to identify defects such as efflorescence, corrosion, paint gaps, loss of thickness, and moss and/or plants, which are the typical defects considered in DOMUS. Unlike the crack identification task, which is addressed mainly by Convolutional Neural Network algorithms and requires a suitable object distance and focal length [43], in this work the defects to be detected are not characterized by a specific shape, they are wider (they occupy larger areas of the structural elements), and the object distance is greater. For these reasons, the procedure proposed here, based on digital image processing (DIP), is suitable for the recognition of wide and extensive defects. In particular, a color detection technique is used, which allows different types of defects to be detected by associating a different range of colors with each defect. Therefore, the classification is based not on the features of the defect but on its color. The main drawback of this approach is its dependence on lighting conditions, although in some cases this dependence is not significant.

In the first part of the paper, robotic operations are reviewed and, on this basis, a precise procedure is described for the automatic acquisition of images and database storage, highlighting the interrelation with the BMS adopted as framework. The second part of the paper deals with a procedure for defect evaluation based on DIP. The methodology proposed here follows research endeavours that aim to empower visual inspection and optical imaging with quantitative image analysis, in order to achieve automatic or semi-automatic damage detection [44, 45]. The features of the newly developed software are discussed. A validation of the work is described, based on a robotic acquisition campaign using a UAV on railway bridges belonging to the Italian National network.

2 Defect evaluation procedure

A commonly adopted and recognized methodology for visual inspection, according to a computer- or ICT-based approach, relies on the concept of the so-called Engineer-Imaging-Computing (EIC) unit [46], which consists of an expert civil engineer with his equipment in the working area. Each EIC unit consists of two basic elements, namely the engineer, who is the expert in damage assessment, and a digital camera. The standard method thus heavily relies on human activities, possibly performed with the aid of a device for image capturing. This section illustrates a new proposed framework to acquire data and to evaluate damage extension on bridge elements through automation. The main activities can be outlined as follows, specifying for each the enhancement reachable by automating the process:

  1.

    Image or video capturing along pre-defined paths: a mobile device, either a ground or an aerial system carrying the data acquisition system, is used for the image and video capturing phase. For bridge inspection, regular digital cameras are considered, for which a basic requirement is that they can be tethered wirelessly to the mobile computing device. Alternatively, data is transferred to a PC after recording for further elaboration.

  2.

    Contextual interface: the DOMUS BMS has been considered to describe both structural elements and damages in the infrastructure. The path followed by the robots is driven by the information present in the database, which describes the elements of the inspected bridge.

  3.

    Experimental calibration: evaluation of the image analysis accuracy through a comparison between the information coming from the image processing and the on-site set-up.

  4.

    Defect analysis: defect recognition and defect extension evaluation by image processing. The former is pursued by processing the information contained in the database together with that obtained by image processing, while the latter is performed with the developed algorithm described later in this paper.

The flow chart in Fig. 1 schematically describes the proposed procedure, also showing its outcomes: fast damage evaluation, defect evolution, defect severity, and maintenance planning. In the block diagram, the horizontal arrows indicate that the procedure can be reiterated based on the information obtained in a specific phase.

Fig. 1

Novel framework of the defect evaluation for image-based bridge inspection

More specifically, Phase 1, dealing with image or video capturing, can assist and partially replace traditional visual inspection performed by expert personnel. Research in this direction has driven the development of vehicle-mounted imaging.

Different technological solutions for Phase 1 are considered in the proposed procedure to determine optimized features for the robotized inspection based on the information contained in the BMS.

Solutions for Phase 1 can be semi-automated with the use of ground or aerial systems tele-operated by expert users. The main characteristics of the robotic agents are summarized in Table 1. This level of automation allows faster and safer operations because it permits the personnel to work by remote control, avoiding direct on-site inspection. Since a robotic agent performs the image and video capturing, the overall cost, risks, and danger are drastically reduced.

Table 1 Main characteristic of the robotic agents usable in Phase 1 of the proposed procedure

Figure 2a shows a scheme of the use and access of robotic and automatic solutions that can be adopted in the proposed procedure for monitoring and survey, possibly in combination with dynamic acceleration measurements of the deck vibration [30]. According to the type of structure, different robotic systems can be used, as shown in Fig. 2a, namely ground (mobile, walking, and hybrid robots), UAVs, and climbing/serpentine robots.

Fig. 2

Bridge image acquisition: a automatic/robotic systems, b path planning for an automatic/robotic access

All the types of robotic solutions displayed in Fig. 2a can be used for semi-autonomous or autonomous surveys, taking into account the paths shown in Fig. 2b and their advantages and drawbacks, as reported in Table 1. Different levels of automation can be pursued for each phase detailed in Fig. 1. In particular, in Phase 1 the highest possible level of automation is fully autonomous access to the structure/infrastructure using Simultaneous Localization and Mapping (SLAM, [47]) techniques, performing inspections along the possible paths reported in Table 2, as shown in the scheme of Fig. 2b. A lower level of automation can be pursued with a tele-operated survey; in this case the inspection path can still be planned in advance to increase the level of automation in Phase 2. Semi-autonomous inspection can then be performed under the operator's supervision.

Table 2 Planning and access path for automatic inspection

Videos and pictures can be taken automatically, at a given rate, or at instants defined by the operator.

The second phase deals with the creation of a contextual interface; automation in Phase 2 relies on the use of a database. Among the large number of possible solutions and examples for creating a database of both structural elements and damage or cracks in the infrastructure, in this context we consider the DOMUS Bridge Management System (BMS) as presented in [3]. This management tool is currently used by the Italian National Railways Company (RFI), which is working on the possibility of realizing automated inspections of bridges. The procedure itself contains a good level of automation because thresholds are defined for a large number of components and defects. Therefore, the proposed procedure is inspired by the possibility of realizing a whole set of actions for the inspection and maintenance of bridges relying on automation. In particular, DOMUS adopts a procedure for bridge condition assessment by visual inspection.

The main modules used in the procedure are bridge inventory, computer-aided visual inspection, automated defect catalogue, and priority ranking procedure. At the current state, the process is completely manual, i.e. no automatic procedure is performed.

A probabilistic model used to calibrate the condition evaluation algorithm has been developed to determine different levels of deficiency for each class of bridge structure belonging to the managed stock. The procedure allows comparison and relative ranking of deficiency conditions across different types of bridge structures.

The first module, the bridge inventory, consists of a complex and exhaustive database, including sufficient data to describe any bridge in the managed stock.

The software links the seven database sections to the second main module, which is devoted to providing automated digital assistance to visual inspections. The second module incorporates a strict, coded procedure that ensures homogeneous visual inspections over time and across different bridge types. This module has been used in the proposed procedure to produce all the information necessary to operate with robots, according to the previously described solutions, to run the inspections. Indeed, the procedure adopted in DOMUS is prone to automation, because the role of the inspector can easily be assisted by robots and expert systems. Systems, components, and subcomponents of the bridge are known and described in DOMUS. During the robotized inspections, this information can be used to assign the observed damage to the affected subcomponents, determining an a-priori optimized and fast path for the robot. Additionally, in the following section, a procedure is proposed to evaluate the extent coefficient values for a series of defects based on image processing.

Furthermore, the procedure of assigning a defect to specifically identifiable components or subcomponents allows the automatic assignment of coefficient values associated with deficiency type and structural importance. Indeed, three relevant coefficients define the importance (B), intensity (K2), and extension (K3) of the defect. All parameters can assume four values belonging to four classes, which grow with the dangerousness of the defect. For example, Table 3 reports the thresholds on the ratio between the defected area and the total examined area (E) used to evaluate the K3 parameter.

Table 3 Value of K3 as a function of E (ratio between the defected area and the total examined area in DOMUS)

It is worth highlighting that E is given as a percentage and, in current practice, is evaluated by the inspectors.

For these reasons, the distribution of the K3 parameter will be very coarse without pointing out which defect really needs attention. In this sense, the desirable improvements to DOMUS based on the proposed procedure could be the following: (1) better evaluation of the E parameter describing the extension, (2) the possibility of reaching areas not accessible by a human inspector, (3) optimization and speed-up of decision-making, and finally (4) automation of the procedure.
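The four-class structure of the K3 parameter can be expressed as a simple lookup on E. The thresholds below are hypothetical placeholders introduced only for illustration; the actual class boundaries are those prescribed by DOMUS and reported in Table 3:

```python
def k3_class(e_percent, thresholds=(5.0, 20.0, 50.0)):
    """Map the extension ratio E (in %) to a K3 class from 1 to 4.

    The thresholds here are hypothetical, for illustration only; the
    real boundaries are the DOMUS values reported in Table 3.
    """
    k3 = 1
    for t in thresholds:
        if e_percent > t:
            k3 += 1  # each exceeded threshold moves E into the next class
    return k3

print(k3_class(2.0), k3_class(30.0), k3_class(80.0))  # 1 3 4
```

Whatever the exact boundaries, the coarseness of a four-class mapping is what motivates improvement (1) above: a finer, automatically measured E.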

In this context, once the first phase is partially or completely automated, its outcome is a series of pictures ordered or preordered according to the type of access described in Fig. 2 and Tables 1 and 2, and automatically related to the inventory module of DOMUS.

More specifically, the preordering of the pictures allows a fast classification of the pictures to be assessed. After image processing, filling the defect database can be highly automated with reference to the intensity (K2) and extension (K3) of the defect. It is worth noting that the preordering of the structural components in the database must be the same as that of the image capturing. Phase 2 can be highly, but not completely, automated because it needs human supervision. Apart from this limitation, it gives a high benefit because it drastically reduces the time needed for its completion.

The third phase deals with experimental calibration. To extract the required information, the acquired images must possess specific characteristics. A relevant feature is the camera sensor resolution, which determines the number of pixels representing the defect. Given an infrastructure or building containing a defect, the greater the number of pixels representing the scene, the better the extraction of the inherent defect information. Using a single camera, the perspective distortion of the scene can introduce a relevant error in defect recognition. If the defect is located on a flat wall, it is very advantageous to keep the camera sensor in a plane parallel to the one containing the defect, to minimize the perspective deformation. Even if image enhancement can be performed, environmental factors influence picture quality. Indeed, weather conditions, the camera position with respect to the sun, external shadows, and casual interferences/disturbances can produce different images even of the same scene. Once these parameters have been set at the beginning of the survey, they do not change. It is worth mentioning that the use of a UAV drastically reduces the duration of the survey, providing the best conditions to maintain the same environmental conditions and hence the same parameters, e.g. luminosity and saturation. For these reasons, Phase 3 is the most difficult to automate.
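The resolution requirement mentioned above can be made concrete with the standard pinhole-camera relation: for a fronto-parallel surface, the footprint of one pixel on the object is approximately the stand-off distance times the pixel pitch divided by the focal length. The numbers below are illustrative and are not the specifications of the camera used in the campaign:

```python
def pixel_footprint_mm(distance_m, focal_mm, pixel_pitch_um):
    """Approximate size on the object plane covered by one pixel
    (pinhole model, camera plane parallel to the defect plane)."""
    return distance_m * 1000.0 * (pixel_pitch_um / 1000.0) / focal_mm

# Illustrative values: 10 m stand-off, 35 mm lens, 4.5 um pixel pitch
gsd = pixel_footprint_mm(10.0, 35.0, 4.5)  # ~1.29 mm per pixel
# A defect patch 50 mm wide then spans about 50 / gsd ~ 39 pixels
```

Doubling the stand-off distance doubles the footprint and halves the pixel count available to describe the defect, which is why wide-area defects tolerate larger object distances than fine cracks.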

The last phase deals with the defect analysis, which requires both the recognition of the observed object with respect to the background and the identification of the specific defect within the observed object.

In the last two decades, several techniques have been developed to recognize objects in digital images. The fourth phase allows a good level of automation: the developed software application, described in detail in the next sub-section, identifies and quantifies a defect in a given image.

2.1 The DEEP software-based image processing technique for defect extension evaluation

The software named DEEP (DEfect detection by Enhanced image Processing), developed by the authors in a Visual Basic environment, evaluates the defect extension through image processing. In particular, it uses color detection techniques based on RGB color code processing, implementing the following rules:

$$\left\{ \begin{gathered} R_{{\min}} < R < R_{{\max}} \hfill \\ G_{{\min}} < G < G_{{\max}} \hfill \\ B_{{\min}} < B < B_{{\max}} \hfill \\ \end{gathered} \right.$$
(1)
$$RG_{{\min}} < \left| {R - G} \right| < RG_{{\max}}$$
(2)
$$GB_{{\min}} < \left| {G - B} \right| < GB_{{\max}}$$
(3)

where R, G, and B stand for the red, green, and blue color components to which the human retina is sensitive. These components, defined by integers, are mixed together to form a unique color value. Each color integer value ranges from 0 to 255.

The defect is identified by the pixels whose color values satisfy Eqs. (1)–(3), which depend on 10 parameters: Rmin, Rmax, Gmin, Gmax, Bmin, Bmax, RGmin, RGmax, GBmin, GBmax.

The selection of the optimal parameters is performed by extensive training on acquired images containing the studied defect. It has been observed that, after training, the parameters associated with a certain defect remain valid to detect that specific damage in new images.

Furthermore, the role played by the constraints used to extract the number of pixels associated with a faulty portion of the examined component can be described as follows.

Equation (1) sets the possible ranges for the RGB colors independently of each other. It selects the pixels whose RGB triplets satisfy the three inequalities simultaneously. Equations (2) and (3) introduce admissible ranges for two chromatic distances (|R−G| and |G−B|). These added constraints select only the pixels that preserve their chromatic distance even when their absolute color changes. It is worth highlighting that the limit values defining the ranges in Eqs. (2) and (3) must be consistent with the maximum and minimum distances implied by the limit values chosen in Eq. (1). Choosing a very large interval, i.e. very loose limit values, makes the constraint vanish. Conversely, accurately selected limit values RGmin,max and GBmin,max couple the inequalities (1)–(3), allowing the evaluation of the defect extension cleaned of the brightness effect of the selected picture.
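A sketch of the bounded rule of Eqs. (1)–(3) in Python may help fix ideas. The parameter values below are placeholders for a reddish defect tone, not the values actually trained for DEEP (which is written in Visual Basic):

```python
def satisfies_b_rgb(r, g, b, p):
    """True if an RGB pixel satisfies the bounded rules of Eqs. (1)-(3)."""
    in_ranges = (p["Rmin"] < r < p["Rmax"] and   # Eq. (1): independent
                 p["Gmin"] < g < p["Gmax"] and   # per-channel ranges
                 p["Bmin"] < b < p["Bmax"])
    in_rg = p["RGmin"] < abs(r - g) < p["RGmax"]  # Eq. (2): |R - G| distance
    in_gb = p["GBmin"] < abs(g - b) < p["GBmax"]  # Eq. (3): |G - B| distance
    return in_ranges and in_rg and in_gb

# Placeholder parameters for a reddish tone (illustrative only)
params = {"Rmin": 100, "Rmax": 220, "Gmin": 40, "Gmax": 140,
          "Bmin": 20, "Bmax": 120, "RGmin": 30, "RGmax": 150,
          "GBmin": 10, "GBmax": 120}

image = [[(180, 80, 50), (90, 90, 90)]]  # one rust-like pixel, one gray pixel
mask = [[satisfies_b_rgb(r, g, b, params) for (r, g, b) in row] for row in image]
print(mask)  # [[True, False]]
```

Choosing very loose bounds in Eqs. (2) and (3) effectively reduces the rule to the classical per-channel test of Eq. (1) alone.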

The addressed procedure has been implemented in DEEP, a new software developed by the authors and presented here. In this framework, DEEP has been used for two main purposes:

  1.

    Detection of structural objects (e.g. bridge structure or components);

  2.

    Detection of defects (e.g. corrosion or paint stripping, efflorescence, vegetation, etc.).

A general overview of the software operation is shown in the flow chart reported in Fig. 3.

Fig. 3

Flow chart that highlights the main characteristics of the DEEP software

Three toolbox packages can be highlighted:

  1.

    Management commands;

  2.

    Pre-processing transformations;

  3.

    Processing procedure.

When the software is started, the menu permits the activation of the classical commands needed to manage an image, such as open and show/hide. Subsequently, the toolbox can be activated to carry out some pre-processing transformations, such as gray scale, brightness, and contrast. These two command areas are used to obtain the pre-processed image.

Two processing commands are defined in DEEP, namely the Classical RGB Method (C-RGB) and the Bounded RGB Method (B-RGB).

In the first one, a segmentation using the classical RGB method is implemented (Eq. 1). This tool selects the pixels associated with a specific range of values for each fundamental color R, G, and B.

Moreover, the width of these intervals can differ for each of the three channels. An example of this procedure is shown in Fig. 4a, where the target was the recognition of the structural elements in the picture. Looking at the processed image, it is evident that not all the pixels representing the structural elements have been retained, especially those near the borders of the image.

Fig. 4

Result coming from Classical RGB Method (a) and Bounded RGB Method (b)

The B-RGB command searches for the pixels whose RGB color satisfies Eqs. (1)–(3). The procedure is called the bounded RGB method because each pixel has to fulfill several constraints, as illustrated in Fig. 4b.

The processed image shows an improvement compared to that obtained with the previous analysis (C-RGB). Indeed, almost all structural elements have been entirely selected, especially the external and vertical ones.

Consequently, the information obtained with this command is more refined and supports the defect extension evaluation through the following parameters: the number of pixels of the structure without defects, the number of pixels of the defected area, the ratio between these two values, and the evaluation of the defected area. Finally, it is worth noting that the recognition of the elements/defects in a single image takes less than a minute on a standard laptop.
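The output parameters listed above can be sketched as simple bookkeeping over two boolean masks. The function below is an illustration, not DEEP's actual code, and `pixel_area_mm2` is a hypothetical per-pixel footprint assumed to come from the calibration phase:

```python
def extension_metrics(structure_mask, defect_mask, pixel_area_mm2):
    """Pixel counts, extension ratio E (%), and defected area, given two
    boolean masks: structure without defect, and defect."""
    n_structure = sum(sum(row) for row in structure_mask)  # sound-element pixels
    n_defect = sum(sum(row) for row in defect_mask)        # defect pixels
    # E: defected area over total examined area, as in the Table 3 definition
    e_percent = 100.0 * n_defect / (n_structure + n_defect)
    defect_area_mm2 = n_defect * pixel_area_mm2
    return n_structure, n_defect, e_percent, defect_area_mm2

structure = [[True, True], [True, False]]  # 3 sound pixels
defect = [[False, False], [False, True]]   # 1 defect pixel
print(extension_metrics(structure, defect, 2.0))  # (3, 1, 25.0, 2.0)
```

Once E is computed this way, assigning the K3 class reduces to comparing E against the DOMUS thresholds of Table 3.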

3 Procedure validation

In this section, the results obtained by image processing during an inspection campaign of several railway bridges belonging to the Italian National Network are illustrated and analyzed. In particular, five samples have been considered: one steel bridge (A) and four masonry bridges (B, C, D, E). For each bridge, about two hundred pictures were collected; therefore, the method was applied massively to a large number of images. A parameter sensitivity analysis is also reported in the last part of the paper to discuss the effect of selecting the extreme values of the distance ranges in Eqs. (2) and (3).

The first is a railway steel bridge, while the masonry ones are arch bridges. In all cases, according to the proposed procedure, data acquisition was performed using the Aibot X6, a flying hexacopter specifically designed for demanding tasks in surveying, industrial inspection, agriculture, and forestry. Equipped with a high level of on-board intelligence, this UAV can reach any target and independently create high-resolution images and videos. It is also suited for autonomous flight mode. A notable feature of the Aibot X6 is the possibility to mount various kinds of sensors, such as hyper- and multi-spectral sensors, infrared and thermal sensors, and sensors for other industry-specific missions. With the data captured by the Aibot X6 UAV and the software solutions of Aibotix and Hexagon, orthophotos, 3D models, and high-density point clouds can be generated with great accuracy. Data analysis is carried out using the procedure described in Sect. 2, following a predetermined sequence for each bridge so as to obtain a preordered sequence of structural elements, as recorded in DOMUS. Although autonomous flight was possible, at this stage the flight was tele-operated along a logical path following the sequence of structural elements, which facilitates the storing of the pictures. The path follows the bridge deck along a predefined direction of viability, starting from the end at the shorter distance from Rome: the deck first and the piers afterwards.

It has been experimentally verified that following a predetermined sequence for the bridge inspection reduces the duration of the survey by at least 15% and reduces the time needed to order the images for processing by more than 30%. The data is stored and further processed off-line.

For the case studies, only a UAV was used. A ground mobile robot could also be used, applying the described procedure. The integration of aerial and ground mobile systems is not addressed in this paper and is ongoing work.

In case (A), steel truss beams have been considered, while the bridge abutments and arches are the main elements under observation for cases (B–E). As explained in the previous sections, the image processing of the whole set of pictures has been performed using the bounded RGB method (B-RGB). The 10 parameters needed to define Eqs. (1), (2), and (3) have been set so as to allow the automated recognition of both structure and defects.

Table 4 contains the values used to identify the structure, paint absence, efflorescence, and vegetation, which are usually found in old masonry abutments. The RGB parameter values applied for structure recognition are similar for both steel and masonry bridges, owing to the close color shades of the two types of structure.

Table 4 Chosen parameters for element and defect recognition

Conversely, the values for defect identification are quite different in the analyzed cases. Indeed, it is understandable that paint stripping leaves a surface with a color tone close to red, while efflorescence and vegetation are close to white and green, respectively. Even if there is no specific rule to follow for the choice of the RGB values, considerations based on visualization and on the results obtained for different combinations of the parameters are presented in the following sections. After this analysis, further steps can be automated.

Figure 5a, c, and e show the original images, while Fig. 5b, d, and f show the processed ones. Three different targets have been pursued in these analyses: in the first case (Fig. 5a, b) only element recognition, in the second both element and defect, and in the third only the defect. It is worth noting that, in all processed images, the green color refers to the element and the red one to the defect. This is only a choice of the authors to better identify and highlight the corresponding areas, and it has no relation to the RGB values.

Fig. 5
figure 5

Steel bridge A: a, c and e original and b, d, and f processed images: structure and defect recognition

In particular, Fig. 5d shows that paint absence can also be captured on structural elements outside the foreground, as demonstrated by the red area detected on the diagonal brace in the background.

Instead, the third processing concerns only one structural element, i.e. the longitudinal and horizontal beams outlined by the blue lines in Fig. 5e. In this case, the method can also be used to investigate the evolution of a local material degradation.

Figure 6 shows the results of the defect analysis conducted on the masonry retaining wall of the abutment and on the pier of the arch for a masonry bridge. Also in this case, Fig. 6a and d show the original images. The aim of the image processing is to demonstrate the ability of the procedure to recognize two typical masonry defects that occur simultaneously: efflorescence and vegetation. The exact area to be investigated is indicated by a blue contour (Fig. 6a, d). Figure 6b, e aim to identify the efflorescence. In the literature, this physical effect is attributed to the migration of salts to the porous masonry surface, where they form a coating. Usually, these occurrences are transient events, especially at the end of the construction phase, but they can become chronic due to the surrounding terrain behind the porous material or to polluted rain. This damage is primarily aesthetic, but it can also induce degradation in the mortar and bricks. For example, in Fig. 6a a wide zone affected by efflorescence is clearly visible (lower left), while other smaller patches are scattered especially at the top of the masonry. These areas are correctly captured by the results in Fig. 6b. For the same image, the vegetation is widespread over the whole surface, and also in this case there is a correspondence between the defect and the highlighted pixels (Fig. 6c). It is interesting to note that in masonry bridge C (Fig. 6d) the efflorescence defect is located between two areas affected by vegetation.

Fig. 6
figure 6

Elements and defects of masonry bridge B and C: a, d original images, b, e, efflorescence, c, f vegetation

The results in Fig. 6e and f confirm the ability of the procedure to rapidly recognize the corresponding defect extension. Other results on the effectiveness of the proposed procedure are illustrated in Fig. 7. In particular, the lateral surfaces of two masonry arch bridges have been analyzed. In the original images, placed in the left column (Fig. 7a, c, e), efflorescence appears in the mortar between the bricks.

Fig. 7
figure 7

Masonry bridge D: a–d original images and efflorescence defect. Masonry bridge E: e, f original image and vegetation defect

The defect highlights the shape of the bricks and is clearly identified in the corresponding processed images (Fig. 7b, d). In the left part of Fig. 7c and d, which depict the masonry arch, a portion of the element is hidden by leaves in the foreground, blurred due to their distance from the observed object. However, the element for which the defect is evaluated is not affected by this disturbance. In Fig. 7e and f, the vegetation in the brick joints is precisely evaluated.

Figure 8 reports a quantitative evaluation of the defect extension conducted through the newly developed software DEEP. For each analyzed image previously presented, the percentage of surface covered by the defect has been calculated with respect to the relative structural element. This percentage has been measured through the ratio between the number of pixels associated with the defect and the number of pixels associated with the selected structural element. For this reason, the upper part of the table in each corresponding figure reports the number of pixels for the structure (S) and for the defect (D). Figure 8a summarizes the results obtained for the steel bridge, while Fig. 8b and c refer to the masonry ones. In the first case, the evaluation yields a low portion of area affected by paint absence, which determines the percentage of defective area.
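The pixel-ratio computation described above amounts to a one-line formula. A minimal sketch follows; the pixel counts are invented for illustration and are not taken from Fig. 8:

```python
def defect_percentage(n_structure_px, n_defect_px):
    """Defect extension D/S as a percentage of the structural element's area."""
    if n_structure_px == 0:
        raise ValueError("structural element mask is empty")
    return 100.0 * n_defect_px / n_structure_px

# Illustrative pixel counts (not those reported in Fig. 8)
ratio = defect_percentage(200_000, 9_000)
print(ratio)  # -> 4.5
```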

Fig. 8
figure 8

Presence of the defect in percentage for the processed images. S and D indicate the number of pixels for structure and defect, respectively

In contrast, Fig. 8b shows that a relatively large portion of the considered element is affected by a vegetation defect.

The result agrees with a rough visual estimation by the inspector, who can look directly at the original image (Fig. 6d). Instead, the results displayed in Fig. 8c show that, when comparing two different portions of the same bridge, an exact comparison of defect extension can be made only through the proposed procedure, which permits determining, in a situation difficult for the inspector to analyze, which portion of the bridge is more affected by the efflorescence defect. The developed tool permits a fast evaluation of the defect extension based on the RGB method. The procedure appears suitable for the aim of the entire process and for its capacity to be automated in the processing of a large number of images. The proposed DEEP software can also be used for other types of defects characterized by an evident superficial color change, by properly setting the parameter ranges.

Other specific techniques should be used for cracks and spalling, as recently presented in [48]. Notwithstanding the possibility of further enhancing the process, the presented results provide reasonable information while reducing time and cost, increasing the observation coverage and, consequently, the safety related to otherwise unknown and unobserved defects.

3.1 Robustness evaluation

In this section, the robustness of the image-processing procedure is evaluated through a series of analyses conducted on the evaluation data.

Figure 9 shows a comparison between the results obtained in the defect extension evaluation using the RGB bounded method (Fig. 9b, c) and a CAD-aided procedure (i.e. by tracing the edges of the structure and defect in computer-aided design, CAD, environment from the original image, as provided in Fig. 9d and e).

Fig. 9
figure 9

Comparison between the results obtained through the RGB bounded method and a CAD-aided one: a original image, b structure and c defect detection in RGB bounded method; CAD-aided procedure: d structure and e defect

Figure 9a shows the original image processed by the two methods. It is worth noting that this image gives the lateral view of a steel railway bridge and can only be acquired using the aforementioned agents (robots or drones), because the viewpoint cannot be reached by a human operator.

The following observations can be drawn:

  1. both methods depend on the image resolution;

  2. regarding the area related to the elements, a manual selection provides more accurate results, while the RGB bounded one is faster;

  3. the differences between the two procedures can be considered minimal (indeed, the ratio of the defected area to the structure is 5.66% and 4.42% for the RGB bounded method and the manual one, respectively);

  4. the RGB bounded method provides a greater percentage value because a small number of pixels belonging to other objects highlighted in the background have been associated with the defect.

A sensitivity analysis is performed by considering reasonable parameter variations around the optimal values related to Eqs. (1)–(3), obtained by processing a large number of images and defects, as reported in Table 4. In particular, the following variables have been defined to perform the sensitivity analysis:

$$\Delta_{1} = R_{{\max}} - R_{{\min}}$$
(4)
$$\Delta_{2} = G_{{\max}} - G_{{\min}}$$
(5)
$$\Delta_{3} = B_{{\max}} - B_{{\min}}$$
(6)
$$\Delta_{4} = RG_{{\max}} - RG_{{\min}}$$
(7)
$$\Delta_{5} = GB_{{\max}} - GB_{{\min}}$$
(8)

The defect percentage D/S has been evaluated by varying one of the range variables defined by Eqs. (4)–(8) while keeping the others fixed at the values reported in Table 4. Figure 10 reports the results of this analysis for the Paint Absence defect. Figure 10a shows the variation of the first variable ∆1, corresponding to the interval of the red color in Eq. (1). The label “opt” indicates the optimal value of ∆1 for the Paint Absence defect (from Table 4, ∆1 = 130), while labels “A” and “B” represent the varied values ∆1 = 120 and ∆1 = 140, respectively.
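A one-parameter sweep of this kind can be scripted directly. The following pure-Python sketch varies only Δ1 (Eq. (4)) while holding the other four ranges fixed; the pixel list, the structure pixel count, and all range values are illustrative assumptions, not the Table 4 settings:

```python
def defect_ratio(pixels, n_structure, params):
    """D/S (%) for one parameter set; pixels is a list of (R, G, B) tuples."""
    (rmin, rmax), (gmin, gmax), (bmin, bmax), (rgmin, rgmax), (gbmin, gbmax) = params
    d = sum(
        1 for r, g, b in pixels
        if rmin <= r <= rmax and gmin <= g <= gmax and bmin <= b <= bmax
        and rgmin <= r - g <= rgmax and gbmin <= g - b <= gbmax
    )
    return 100.0 * d / n_structure

def sweep_delta1(pixels, n_structure, base, r_min, widths):
    """Vary only the width Delta1 = Rmax - Rmin (Eq. (4)), other ranges fixed."""
    results = {}
    for d1 in widths:  # e.g. widths = [120, 130, 140] for points A, opt, B
        params = [(r_min, r_min + d1)] + list(base)  # base: G, B, R-G, G-B ranges
        results[d1] = defect_ratio(pixels, n_structure, params)
    return results

# Illustrative data: three candidate defect pixels against a 50-pixel element
pixels = [(130, 50, 40), (150, 60, 50), (245, 80, 60)]
base = [(30, 120), (20, 110), (40, 170), (-30, 60)]  # fixed G, B, R-G, G-B ranges
results = sweep_delta1(pixels, 50, base, 120, [120, 130, 140])
print(results)  # the widest R range admits the most pixels
```

Sweeping one Δ at a time while freezing the others reproduces the one-factor-at-a-time design of the analysis: each curve in Fig. 10 corresponds to one such sweep.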

Fig. 10
figure 10

Sensitivity analysis of the Paint Absence defect conducted by varying the limits of the intervals in Eqs. (1)–(3). The images below correspond to the ranges defined at points opt, A and B

The other figures start from three different values of the range ∆1, giving rise to three different paths (solid, dotted and dash-dotted lines).

The analysis leads to the following observations:

  1. from the first three figures, it can be observed that among the RGB parameters, the one to which the extraction of the analyzed defect is most sensitive is the red color;

  2. even with different choices of the range ∆1 (from 120, point A, to 140, point B), large changes in the evaluated damage percentage are not observed. Indeed, looking at the reasonable minimum and maximum values to be considered, corresponding to the triangle and square symbols respectively, the difference in the obtained defect extension is about 1%. This observation is supported by the processed images reported in the lower row of Fig. 10: the paths corresponding to the ranges of points A, opt and B are very similar to each other in Fig. 10a–e;

  3. when the range of the R parameter is very large (point B), the damage percentages assume the highest values, whereas the narrow range (point A) yields the lowest;

  4. finally, in Fig. 10e three maximum points can be observed, one for each path. The optimum of the ∆5 parameter is independent of the other parameters, a feature that highlights the importance of introducing the inequalities expressed by Eqs. (2)–(3). The results indicate that the color pattern associated with the defect has its own structure.

4 Conclusions

This research has focused on the definition of a suitable procedure for bridge inspection, assisted by color-based image processing of data acquired through an Unmanned Aerial Vehicle (UAV). In particular, the procedures and the proposed software have been developed according to the bridge management system currently used for the Italian National Railway Network. Steel and masonry bridges have been used as illustrative real case studies. The robotics- and computer-aided procedure enables quantitative evaluation of defect extension by analyzing data contained in digital images taken of pre-classified structural elements. A color-based algorithm has been used for damage detection and quantification. A software tool named DEEP has been proposed to identify and quantify superficial defects such as paint absence, efflorescence, and vegetation on structural elements. For these kinds of defects, which are wide, extensive, and not characterized by a specific shape, the color detection technique is effective. The proposed procedure has been validated through comparison with a CAD-aided evaluation of the extension of specific superficial defects, demonstrating the significant reduction in human time achieved by the proposed procedure in analyzing images. Furthermore, the computer-assisted treatment of images taken by robots makes it possible to precisely quantify defect extensions in areas that the human eye cannot even access.