Abstract
Robot welding is a basic but indispensable technology for many industries in modern manufacturing. However, many welding parameters affect welding quality. During the real welding process, welding defects are inevitably generated, and these affect the structural strengths and comprehensive performances of different welding products. Therefore, an accurate welding defect recognition algorithm is necessary for automatic robot welding to assess the effects of defects on structural properties and system maintenance. Much work has been devoted to welding defect recognition. It can be mainly divided into two categories: feature-based and deep learning-based methods. The detection performances of feature-based methods rely on effective image features and strong classifiers. However, faced with weak-textured and weak-contrast welding images, achieving strong image feature expression remains challenging. Deep learning-based methods can provide end-to-end detection schemes for welding robots. Nevertheless, an effective deep network model relies on a large amount of training data that is not easily collected during real manufacturing. To address the above issues regarding defect detection, a novel welding defect recognition algorithm based on multi-feature fusion is proposed for accurate defect detection in X-ray images. To improve network training, an effective data augmentation process is proposed to construct the dataset. Combined with transfer learning, the multi-scale features of welding images are acquired for effective feature expression with the pre-trained AlexNet network. On this basis, a welding defect recognition algorithm that fuses support vector machine (SVM) classifiers with Dempster–Shafer (DS) evidence theory is proposed for multi-scale defect detection. Experiments show that the proposed method achieves a better recognition performance in terms of detecting welding defects than those of other related recognition algorithms.
1 Introduction
Currently, intelligent manufacturing is an important extension of manufacturing automation, and many countries have proposed different future policies to improve the efficiency and autonomy of modern manufacturing; these include “Industry 4.0” by Germany and “Intelligent manufacturing 2025” by China. Robot welding is a typical representative of intelligent manufacturing that has broad applications in many areas. However, robot welding is a relatively complex manufacturing process due to many factors, such as welding voltage, welding current, welding speed, and welding gun height. Welding defects that affect welding quality inevitably appear in welding workpieces. Different welding defects have different impacts on the structural strengths and comprehensive performances of welding objects. Therefore, an effective and accurate defect recognition system is a key element of intelligent welding robots that can effectively help with the assessment of structural properties and system maintenance [1].
For welding defect detection, a suitable sensor system is the key component of the detection system. Until now, different sensors, such as vision sensors [2], infrared sensors [3], ultrasonic sensors [1], and X-ray sensors [4], have been applied in industrial inspection applications. Compared with other sensors, nondestructive X-ray detection sensors can effectively acquire internal structures and defects, and this can help to evaluate the effects of defects on the structural strengths and the comprehensive performances of different objects. Duan et al. proposed an automatic inspection method for detecting welding defects with X-ray images [5]. Roy et al. proposed a welding defect identification method for friction stir welding by using X-ray micro-CT scans [6]. Inspired by these works, X-ray inspection is adopted in this paper for welding defect recognition.
Based on X-ray inspection, and faced with weak-textured and weak-contrast welding images, a novel welding defect recognition algorithm based on multi-feature fusion is proposed to assist with assessing structural properties and performing system maintenance. It is evaluated and verified on a public dataset (GDXray) through a comprehensive experimental analysis and comparison. The main contributions of this paper can be summarized as follows: (1) To address the training issue of recognition networks, an effective data augmentation algorithm is proposed to enlarge and construct the dataset. (2) Combined with transfer learning, with the pre-trained AlexNet network model, a novel feature extraction method is proposed for multi-scale feature extraction of X-ray welding images to acquire abstract and effective image features. (3) To ensure the detection precision of the proposed approach, based on multi-feature fusion, a welding defect recognition algorithm that fuses an SVM classifier with DS evidence theory is proposed to realize accurate defect detection.
The rest of this paper is organized as follows. Section 2 gives the detailed related work. Section 3 shows the system framework of the proposed method. Section 4 describes the data augmentation algorithm for welding images. Section 5 describes the feature extraction methods used. Section 6 explains the proposed defect recognition algorithm. Section 7 presents detailed experiments and discussions. Finally, the conclusions and future prospects of this paper are described.
2 Related Work
To improve the recognition efficiency and precision of detection methods, a considerable amount of literature has been published on welding defect recognition. These studies can mainly be divided into three categories: image-based methods [7], feature-based methods [8] and deep learning-based methods [9].
2.1 Image-Based Methods
Image-based methods are conventional image analysis methods for different detection tasks based on the principles of image morphology [10].
Due to the good robustness and high precision of laser structured light, Chu et al. proposed an automatic post-welding quality detection method [11]. Laser structured light acted as the robot sensor to acquire the 3D profile of weld beads. On this basis, the detailed parameters of the weld beads and welding defects were extracted. To improve the measurement efficiency of laser structured light, some optimized laser structured light sensors have been designed for welding robots. Zhang et al. proposed a weld bead inspection method based on cross-structured light [12]. Jia et al. proposed an inspection method for weld beads based on grid laser structured light sensors [13]. However, structured light sensors are local sensors, and they can only acquire limited measurement data for each measurement. To address the above issues, some researchers have proposed different inspection methods based on passive light vision that can acquire additional measurement information and large measurement ranges. Combined with monocular vision, Du et al. proposed an inspection method for weld beads based on the shape from shading (SFS) algorithm [14]. Chen et al. developed a defect detection algorithm based on X-ray welding images [15]. Combined with optimized image smoothing and an information fusion method, Du et al. proposed a real-time defect inspection method based on X-ray welding images [16].
For welding defect recognition, image-based methods always involve many processing steps, such as image filtering, edge analysis, and image postprocessing. However, a complex welding environment has a certain effect on the robustness of such algorithms. Therefore, based on a priori knowledge, image-based methods are mainly designed for specific objects or application scenes.
2.2 Feature-Based Methods
Due to their good detection performance on small-scale samples, fusion with feature vectors, and different classifiers, many researchers have proposed different feature-based recognition methods for detecting welding defects [17].
In our previous work, a welding defect inspection algorithm was proposed based on a SVM classifier [18]. Combined with monocular vision, the 3D profiles of weld beads were acquired based on the SFS algorithm. On this basis, a defect recognition algorithm was proposed based on the 3D curvature features and SVM classifier. Kasban et al. proposed a new welding defect detection approach based on radiography images [19]. The discrete wavelet transform (DWT), discrete cosine transform (DCT), and discrete sine transform (DST) were proposed for effective feature extraction, and an artificial neural network (ANN) was built for defect detection. Duan et al. proposed an automatic welding defect detection method based on X-ray welding images [5]. It could be mainly divided into three steps: defect extraction, detection and recognition. Defect extraction was used to detect potential defects. On this basis, defect detection and recognition were solved by the adaptive cascade boosting (AdaBoost) classifier. Das et al. proposed a welding quality evaluation method based on an ANN model [20]. The wavelet packet transformation was proposed for the feature extraction of friction stir welding. The ANN model was built for accurate quality evaluation.
Feature-based detection methods provide a fast and accurate detection scheme for different small-scale samples. However, the detection performance relies on effective feature selection and design. Realizing strong image feature expression in complex welding environments, with varying backgrounds and materials, remains challenging.
2.3 Deep Learning-Based Methods
With the strong support of hardware platforms and big data, deep learning methods have been greatly developed that can process raw data well and provide an end-to-end detection scheme. Based on the strong feature expression abilities of deep learning models, many researchers have sought to apply deep learning methods in welding robots to realize intelligent detection schemes [21,22,23].
Combined with a three-way image acquisition system, Zhang et al. proposed an online defect detection method based on a convolutional neural network (CNN) [24]. Based on transfer learning, Sassi et al. proposed a quality control and assessment method for the inspection of welding defects [25]. To realize the inspection of small-scale weld beads in complex welding environments, Yang et al. proposed a weld bead location method based on a deep convolutional neural network (DCNN) [26]. Günther et al. proposed a representation and prediction method for laser welding [27]. A deep auto-encoding neural network was proposed for the feature expression of welding images. The temporal-difference learning algorithm was adopted for automatic predictions about the welding process. Combined with the SqueezeNet-based CNN model, Yang et al. proposed a machine vision-based surface defect detection method with multi-scale and channel-compressed features [28]. Inspired by feature fusion, Gao et al. proposed a vision-based defect recognition method [29]. The Gaussian pyramid was proposed to generate multiscale images of defects. On this basis, a pretrained VGG16 CNN model was applied to multiscale images to learn strong image features, and these outputs were fused to improve the recognition precision of the model. By incorporating fusion with a CNN model and a multilayer perceptron (MLP), Makantasis et al. proposed a fully automated tunnel assessment method [30]. Combined with a deep semantic segmentation network, Zou et al. proposed an automatic crack detection and location method [31]. Gong et al. proposed a defect detection method for aeronautical composite materials with a deep transfer learning model that can be well applied to inclusion defect detection from X-ray images [32].
Although deep learning achieves good detection performances in many application scenarios, it mainly relies on labeled data for network training purposes. The manual annotation of large datasets of welding images is a time-consuming and laborious task. Furthermore, for real welding production, it is not easy to collect many samples under different welding situations for model training. More importantly, when faced with a complex welding process, different welding parameters cause different welding defects. Unbalanced samples of welding defects will also affect the detection performances of deep learning models.
3 System Framework
For different welding objects, X-ray detection can acquire internal defect information, which is the basis for accurately assessing structural properties and performing system maintenance. To effectively help with the quantitative assessment of welding defects with respect to the comprehensive performances of welding objects, combined with X-ray detection, a welding defect recognition system is set up in this paper, as shown in Fig. 1.
Nevertheless, X-ray welding images present some unique characteristics that bring certain challenges for accurate welding defect recognition.
(1) Due to the materials of welding workpieces, X-ray welding images exhibit weak-textured and weak-contrast features, which affect the accuracy of feature expression.
(2) The welding defects are small-scale samples, and it is not easy to collect sufficient training samples for model training.
To address the above issues regarding welding defect recognition, a novel welding defect recognition algorithm is proposed, and some key links are needed to ensure the performance of the algorithm on X-ray welding images as follows.
(1) Data Acquisition: In addition to an X-ray detection system, a suitable dataset needs to be set up, and this is the basis of the welding defect recognition system.
(2) Feature Expression: Faced with weak-textured and weak-contrast X-ray welding images, the effective feature expression of welding images is the core of the defect recognition system.
(3) Defect Recognition: To ensure the detection performance of the proposed method on small-scale welding defects, a suitable recognition model is also a key component of the defect recognition system.
4 Data Augmentation
4.1 Dataset
GDXray is a public X-ray image set built for research and educational purposes. It includes a subset of welding images (Welds) collected by the BAM Federal Institute for Materials Research and Testing, Berlin, Germany [33]; this subset is composed of 78 welding images with a length of 4K pixels. Figure 2 shows some samples of X-ray welding images.
As shown in Fig. 2, the welding images present weak-textured and weak-contrast characteristics, and cracks, blow holes or solids appear randomly in the X-ray welding images.
On the basis of the image set, a variety of different welding defects exist in the welding images. Here, two main flaws are considered in this paper for welding defect recognition [34]: cracks and blow holes (or solids), as shown in Fig. 3.
4.2 Image Preprocessing
X-ray welding images are inevitably affected by image noise during the process of image collection. To improve the image quality, image preprocessing is proposed to reduce the noise.
The valuable information contained in 2D X-ray images is mainly concentrated in the low-frequency part, whereas the image noise belongs to high-frequency signals. On this basis, a Gaussian low-pass filter is applied to the 2D X-ray images for image filtering.
However, the Gaussian low-pass filter cannot completely remove the image noise, and some large-amplitude noise remains. The median filter provides a good means of removing this large noise while effectively retaining image details. Therefore, by combining the Gaussian low-pass filter and the median filter, a preprocessing method for 2D X-ray images is proposed.
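The two-stage preprocessing can be sketched with plain numpy; the filter parameters here (sigma = 1, radius = 2, 3×3 median window) are illustrative assumptions, as the paper does not report its settings:

```python
import numpy as np

def gaussian_lowpass(img, sigma=1.0, radius=2):
    """Separable Gaussian low-pass filter: suppresses high-frequency noise."""
    x = np.arange(-radius, radius + 1, dtype=float)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    padded = np.pad(img.astype(float), radius, mode="edge")
    # Convolve rows, then columns (the Gaussian kernel is separable).
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, rows)

def median_filter(img, size=3):
    """Median filter: removes residual impulse noise while keeping edges."""
    r = size // 2
    padded = np.pad(img, r, mode="edge")
    h, w = np.asarray(img).shape
    windows = np.lib.stride_tricks.sliding_window_view(padded, (size, size))
    return np.median(windows.reshape(h, w, -1), axis=2)

def preprocess(img):
    """Gaussian low-pass followed by median filtering, as in the described pipeline."""
    return median_filter(gaussian_lowpass(img))
```

Both filters replicate edge pixels when padding so the output keeps the input resolution.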
4.3 Random Cropping
The Welds subset in GDXray only includes 78 X-ray welding images that cannot be directly applied to model training and testing. Due to the 4K length, random cropping is used to process the raw images to obtain many image patches for constructing the dataset.
As shown in Fig. 3, image patches of different sizes, such as 320×320 and 240×240, are acquired from the X-ray welding images to construct the dataset. The set includes three types of samples: cracks, blow holes or solids, and no defects. Figure 4 shows some samples of different patches.
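A minimal sketch of the random-cropping step, assuming square patches sampled uniformly from the long weld image (the exact sampling policy and patch counts are assumptions, not values fixed by the paper):

```python
import numpy as np

def random_crops(image, patch_size, n_patches, rng=None):
    """Randomly crop square patches (e.g. 320x320 or 240x240) from a long
    X-ray weld image to enlarge the dataset."""
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    patches = []
    for _ in range(n_patches):
        top = int(rng.integers(0, h - patch_size + 1))
        left = int(rng.integers(0, w - patch_size + 1))
        patches.append(image[top:top + patch_size, left:left + patch_size])
    return patches
```

Each patch is then labeled (crack, blow hole/solid, or no defect) before being added to the training or test set.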
5 Feature Extraction
For the task of recognizing welding defects, the weak-textured and weak-contrast X-ray images bring certain challenges to feature expression for welding images. Furthermore, multi-scale samples also have some impact on defect recognition. To ensure the recognition precision of the proposed approach for welding defects, an effective feature expression method is a core component of the defect recognition algorithm.
5.1 Transfer Learning
For feature expression and description, many researchers have proposed different handcrafted features, such as histograms of oriented gradients (HOGs) [35] and local binary patterns (LBPs) [36]. They can be used to extract low-level image features, such as edges and textures. For weak-textured and weak-contrast X-ray welding images, these handcrafted features have certain limitations with respect to high-precision defect detection.
Transfer learning is a typical representative of multi-task learning models that can transfer the learned information from the source domain to the target domain. It does not need a large training dataset regarding the target domain, and this enables researchers to avoid a large amount of data collection and annotation work. Through model training on a large-scale image dataset, transfer learning provides a good detection scheme for a small-scale dataset. To acquire strong features from X-ray welding images, combined with transfer learning, a pre-trained CNN network is adopted to act as a feature extractor for X-ray welding images.
The AlexNet network is a typical CNN model that achieves good detection performance on ImageNet [37]. Figure 5 shows the detailed structure of the AlexNet network. For welding image defect recognition, an AlexNet network pre-trained on ImageNet is proposed for the feature expression of welding images.
Specifically, as shown in Fig. 5, due to the 3 input channels of the pre-trained AlexNet network, the gray-scale images are converted to RGB images to serve the feature extraction of X-ray welding images.
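A minimal sketch of this channel conversion: the single-channel patch is replicated into three identical channels, and a 227×227 target size is assumed to match AlexNet's usual input (the resize method, nearest-neighbour here, is an assumption; the paper does not specify it):

```python
import numpy as np

def gray_to_rgb(gray, size=227):
    """Resize a single-channel X-ray patch (nearest-neighbour) and replicate
    it into the 3 input channels the pre-trained AlexNet expects."""
    h, w = gray.shape
    rows = np.arange(size) * h // size   # source row for each output row
    cols = np.arange(size) * w // size   # source column for each output column
    resized = gray[rows][:, cols]
    return np.stack([resized] * 3, axis=-1)  # shape (size, size, 3)
```

Because all three channels carry the same intensities, the conversion adds no information; it only makes the grayscale patch compatible with the pre-trained network's input layer.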
5.2 Feature Selection
Generally, a single image feature has limited feature expression ability, and this results in certain limitations with respect to recognition tasks with complex samples. The pre-trained AlexNet network can generate many different feature maps for the given images. Faced with these feature maps, a suitable feature selection scheme is a core part of the proposed defect recognition algorithm.
For the pre-trained CNN network model, different network layers can acquire different feature maps with different spatial resolutions. To effectively demonstrate the feature expression abilities of different network layers, with typical samples, such as blow holes or solids, cracks and no defects, Fig. 6 shows the feature maps generated from different network layers.
As shown in Fig. 6, for the experimental samples of different categories, there are obvious differences between these feature maps, and these could help with defect recognition for welding images. Furthermore, the shallow network layers acquire lower-level feature maps with higher spatial resolution, which are suitable for small-scale objects. As the number of network layers increases, the network acquires higher-level and more abstract feature maps that are suitable for large-scale object detection or recognition.
However, as the number of network layers increases, the features of details or micro defects are lost, which affects the detection precision on multi-scale samples. For welding defects (see Fig. 4), there are large gaps in image scale between different samples, and some micro defects also exist in the welding images. Therefore, a single feature map from a specific network layer cannot meet the detection demands of defect recognition.
To ensure the detection precision of the recognition network with respect to multi-scale welding defects, multi-feature fusion is proposed to enhance its detection performance. Here, combined with the pre-trained AlexNet, the feature maps from different network layers are fused for high-precision defect detection.
6 Defect Recognition
On the basis of data augmentation, this section focuses on the defect recognition algorithm for X-ray welding image patches. To effectively solve the recognition issue regarding small-scale welding defects, a novel welding defect recognition algorithm is proposed based on the shallow learning method. Therefore, detailed descriptions of feature fusion and defect classification are provided in this section.
6.1 Defect Classification
For accurate defect recognition in welding images, faced with small-scale samples, an effective classifier is also a key part of the whole recognition system. To date, different classifiers have been proposed for different recognition tasks related to small-scale samples, such as ANNs [38], AdaBoost [39], K-nearest neighbors (KNN) [40], and SVMs [41]. Based on its excellent classification performances on small-scale samples, nonlinear problems, and high-dimensional spaces, the SVM classifier is proposed for accurate defect recognition, as shown in Fig. 7.
The image features are fed into the SVM classifier as its input. To solve high-dimensional and linearly non-separable sample classification, an inner product (kernel) function is applied in the SVM classifier to perform a nonlinear mapping transformation. Due to its good nonlinear mapping capability, the radial basis function (RBF) is chosen as the kernel function, as shown in Eq. 1:

\(K(x_i, x_j) = \exp\left(-g \left\| x_i - x_j \right\|^2\right)\)   (1)

where g denotes the parameter of the RBF kernel. The optimal parameters of the SVM classifier are found by a grid search strategy with cross-validation.
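The RBF inner product can be sketched in a few lines; this is a generic numpy implementation of the standard kernel, not code from the paper:

```python
import numpy as np

def rbf_kernel(x1, x2, g=0.5):
    """RBF inner-product function: K(x1, x2) = exp(-g * ||x1 - x2||^2)."""
    diff = np.asarray(x1, dtype=float) - np.asarray(x2, dtype=float)
    return float(np.exp(-g * np.dot(diff, diff)))
```

The parameter g controls how quickly similarity decays with distance; the grid search mentioned above would sweep g (and the SVM penalty term) over a logarithmic grid and keep the values with the best cross-validation score.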
6.2 Feature Fusion
The image feature maps of the pre-trained AlexNet model have different feature lengths. To effectively fuse different features, DS evidence theory is proposed for multi-feature fusion. It is a typical fusion method that can realize the fusion of multiple subjects, such as multichannel sensor data and multiple classifiers. The flow chart of multi-feature fusion is shown in Fig. 8.
The image features from different network layers of the pre-trained AlexNet model are fed into separate SVM classifiers to obtain prediction probabilities for the X-ray welding images. These prediction probabilities are then combined by the DS evidence theory module as follows:

\(m(W) = \frac{1}{1-K} \sum_{A_1 \cap A_2 \cap A_3 = W} m_1(A_1)\, m_2(A_2)\, m_3(A_3)\), where \(K = \sum_{A_1 \cap A_2 \cap A_3 = \emptyset} m_1(A_1)\, m_2(A_2)\, m_3(A_3)\)

where m is the fused output probability, W is the status of the welding image, and \(A_i (i=1,2,3)\) are the outputs of the different SVM classifiers.
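When every classifier assigns its mass only to the singleton hypotheses (crack, blow hole/solid, no defect), Dempster's combination rule reduces to a normalised product of the per-classifier masses. A minimal sketch under that assumption, with each SVM providing a probability vector over the three classes:

```python
def ds_fuse(prob_lists):
    """Dempster's rule for singleton hypotheses: the combined mass of each
    class is the normalised product of the per-classifier probabilities."""
    n_classes = len(prob_lists[0])
    fused = [1.0] * n_classes
    for probs in prob_lists:          # one probability vector per SVM
        for k in range(n_classes):
            fused[k] *= probs[k]
    total = sum(fused)                # equals 1 - K, the non-conflicting mass
    return [m / total for m in fused]
```

If the three classifiers agree, the fused distribution becomes sharper than any single one, which is the effect the fusion step exploits; strongly conflicting classifiers shrink the normaliser instead.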
To better illustrate the flow chart of the proposed method, Algorithm 1 shows the pseudocode of the whole training process.
7 Experiments and Discussions
To verify the effectiveness and superiority of the proposed method, this section tests the model performance through a comprehensive experimental analysis and comparison.
First, the detailed experimental configuration is described. Second, the effectiveness of different feature expression methods is verified. Third, different multi-feature fusion experiments are carried out on the constructed welding image dataset. Finally, the superiority of the proposed method is tested through an experimental comparison with other advanced methods.
7.1 Experimental Configuration
The Welds subset of GDXray is divided into two parts with a ratio of 55:45 (training set and test set). For model validation purposes, these two sets are disjoint.
On this basis, due to the 4K length, to enlarge and construct the dataset, some image patches are acquired by random cropping. They are labeled with different values for algorithm verification. Furthermore, the numbers of samples belonging to different categories are almost the same to avoid the issue of imbalanced data. Detailed information about the dataset is shown in Table 1.
The proposed defect detection method includes multiple SVM classifiers for feature fusion. Five-fold cross-validation is utilized for these SVM classifiers to ensure the reliability of the experimental results.
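The five-fold split can be sketched as a shuffled index partition; this is a generic implementation, not the paper's exact code:

```python
import random

def kfold_indices(n_samples, k=5, seed=0):
    """Shuffle sample indices and split them into k disjoint folds; each fold
    serves once as the validation set while the remaining folds train the SVM."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    splits = []
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        splits.append((train, val))
    return splits
```

Averaging the validation scores over the k splits gives a more reliable estimate than a single train/validation partition, which matters for the small welding-defect dataset.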
7.2 Feature Extraction
Different convolution layers of the AlexNet model have different feature expression abilities, and this leads to different detection precisions. On the basis of the dataset, as shown in Fig. 5, the feature expression abilities of typical network layers are tested. The detailed experimental results on welding images are shown in Table 2.
Table 2 shows that the different network layers result in different identification precision rates due to their different feature expression abilities. For the shallow network layer, lower-level feature maps yield relatively lower identification precision and vice versa.
Furthermore, to demonstrate the feature expression performance of transfer learning, common handcrafted features, such as HOG and LBP, are also set as comparison methods. Based on the welding defect dataset, the classification results of different handcrafted features are shown in Table 3.
As shown in Table 3, transfer learning can achieve excellent recognition performance on X-ray welding images compared with handcrafted features. It can be seen that the pre-trained deep network models have stronger feature expression abilities than the handcrafted features through network training on the large-scale image set, so they can acquire more effective image features.
7.3 Feature Fusion
To ensure the recognition performance of the proposed method with respect to welding defects, multi-feature fusion is used for accurate defect detection. Combined with a pre-trained AlexNet network, to solve the classification problem for multi-scale defect samples, low, middle and deep feature maps are fused together to improve the detection precision for welding defects.
For the AlexNet network, different network layers have different feature expression abilities, and these have certain effects on feature fusion. Here, different combinations of network layers are tested. Furthermore, to verify the fusion performance of the network, a common feature fusion method, score-level fusion [42], is set as a comparative method. Table 4 shows the detailed classification results of the different fusion methods.
As shown in Table 4, compared with the single features, feature fusion further improves the recognition performance for welding defects. Because the \(Conv_3\) layer acquires more effective features than the \(Conv_2\) layer, the fusion of the \(Conv_3\), \(Pooling_5\) and \(FC_2\) layers yields a higher classification precision. The score-level fusion method achieves a result similar to that of the proposed method; nevertheless, the proposed method based on DS evidence theory attains a higher recognition precision.
To better show the recognition performance of the proposed method, the confusion matrix is given for further experimental analysis, as shown in Fig. 9.
According to the confusion matrix in Fig. 9, the proposed recognition method shows a relatively poor precision on crack defects. Some micro crack defects resemble the normal samples, and parts of large crack defects produce image features similar to those of blow holes or solids. These factors affect the recognition precision for crack defects. On the whole, the proposed recognition method acquires a better recognition performance on X-ray welding images than single features or other fusion methods.
7.4 Comparison with Other Detection Methods
To better show the superiority of the proposed method, some advanced pre-trained CNN network models, such as VGG16 [43], GoogleNet [44], MobilenetV2 [45], ResNet18 [46] and InceptionV3 [47], are set as feature extraction comparison methods.
As shown in Table 5, transfer learning can achieve higher classification accuracy for welding defects than handcrafted features due to its stronger feature expression ability.
Table 5 also indicates that the different pre-trained CNN network models have different feature expression abilities, leading to different classification accuracies for welding defects. Compared with other pre-trained CNN network models, the proposed fusion method results in a higher classification accuracy, indicating a better detection performance on the X-ray welding images.
7.5 Time Analysis
For different defect recognition methods, running time is also an important model evaluation indicator. Therefore, the running times of various methods are measured and discussed in this section. For a typical defect recognition system, the core steps involve data loading time, model loading time, feature extraction time and recognition time. Here, the individual models and the proposed fusion method are tested separately, and the related experiments are carried out on an Intel i7-7700HQ CPU with 16 GB of memory. The detailed experimental results are shown in Fig. 10.
As shown in Fig. 10, the proposed fusion method requires 2.35 s in total for defect recognition, which at first appears too slow for fast defect detection. However, in a deployed defect recognition system, model training and pre-loading are performed offline, so the model loading time can be excluded from the evaluation. The remaining running time is only 0.42 s, which is faster than the times of the other pre-trained network models, as shown in Table 6. Furthermore, this running time could be further reduced with high-performance hardware, such as an Nvidia Graphics Processing Unit (GPU) or a Field-Programmable Gate Array (FPGA).
Through the above experiments and analysis, the proposed fusion method not only achieves higher detection precision on X-ray welding images than the other methods but also runs faster. Therefore, the proposed method provides a good detection scheme for detection issues related to small-scale samples.
8 Conclusion
Faced with weak-textured and weak-contrast X-ray welding images, and inspired by multi-feature fusion, a novel defect recognition method based on transfer learning and DS evidence theory is proposed for accurate defect recognition to assist with the assessment of structural properties and system maintenance. Combined with transfer learning, to solve classification problems for multi-scale samples, multi-scale features are extracted with the pre-trained AlexNet network for effective feature expression. The recognition model is established based on the SVM classifier and DS evidence theory to predict welding defects in X-ray welding images online. It is evaluated and verified on a public dataset (GDXray), and a comprehensive experimental analysis and comparison shows that it achieves a better recognition performance than existing methods.
In the future, we will continue this work and perform further research to improve the recognition precision of our approach with respect to welding defects.
References
Stavridis, J., Papacharalampopoulos, A., Stavropoulos, P.: Quality assessment in laser welding: a critical review. Int. J. Adv. Manuf. Technol. 94(5–8), 1825–1847 (2018)
Tao, X., Wang, Z., Zhang, Z., Zhang, D., Xu, D., Gong, X., Zhang, L.: Wire defect recognition of spring-wire socket using multitask convolutional neural networks. IEEE Trans. Compon. Packag. Manuf. Technol. 8(4), 689–698 (2018)
Alfaro, S.C., Franco, F.D.: Exploring infrared sensoring for real time welding defects monitoring in GTAW. Sensors 10(6), 5962–5974 (2010)
Jain, D.K., et al.: An evaluation of deep learning based object detection strategies for threat object detection in baggage security imagery. Patt. Recogn. Lett. 120, 112–119 (2019)
Duan, F., Yin, S., Song, P., Zhang, W., Zhu, C., Yokoi, H.: Automatic welding defect detection of X-ray images by using cascade AdaBoost with penalty term. IEEE Access 7, 125929–125938 (2019)
Roy, R.B., Ghosh, A., Bhattacharyya, S., Mahto, R.P., Kumari, K., Pal, S.K., Pal, S.: Weld defect identification in friction stir welding through optimized wavelet transformation of signals and validation through X-ray micro-CT scan. Int. J. Adv. Manuf. Technol. 99(1–4), 623–633 (2018)
Zhan, X., Zhang, D., Yu, H., Chen, J., Li, H., Wei, Y.: Research on X-ray image processing technology for laser welded joints of aluminum alloy. Int. J. Adv. Manuf. Technol. 99(1–4), 683–694 (2018)
Fan, J., Jing, F., Fang, Z., Tan, M.: Automatic recognition system of welding seam type based on SVM method. Int. J. Adv. Manuf. Technol. 92(1–4), 989–999 (2017)
Fioravanti, C.C.B., Centeno, T.M., Da Silva, M.R.D.B., et al.: A deep artificial immune system to detect weld defects in DWDI radiographic images of petroleum pipes. IEEE Access 7, 180947–180964 (2019)
Liu, Y.-K., Zhang, Y.-M.: Supervised learning of human welder behaviors for intelligent robotic welding. IEEE Trans. Autom. Sci. Eng. 14(3), 1532–1541 (2015)
Li, Y., Li, Y.F., Wang, Q.L., Xu, D., Tan, M.: Measurement and defect detection of the weld bead based on online vision inspection. IEEE Trans. Instrum. Meas. 59(7), 1841–1849 (2009)
Zhang, L., Ye, Q., Yang, W., Jiao, J.: Weld line detection and tracking via spatial-temporal cascaded hidden Markov models and cross structured light. IEEE Trans. Instrum. Meas. 63(4), 742–753 (2013)
Jia, N., Li, Z., Ren, J., Wang, Y., Yang, L.: A 3D reconstruction method based on grid laser and gray scale photo for visual inspection of welds. Opt. Laser Technol. 119, 105648 (2019)
Quanying, D., Shanben, C., Tao, L.: Inspection of weld shape based on the shape from shading. Int. J. Adv. Manuf. Technol. 27(7–8), 667–671 (2006)
Chen, B., Fang, Z., Xia, Y., Zhang, L., Huang, Y., Wang, L.: Accurate defect detection via sparsity reconstruction for weld radiographs. NDT & E Int. 94, 62–69 (2018)
Du, D., Cai, G.-r., Tian, Y., Hou, R.-s., Wang, L.: Automatic inspection of weld defects with X-ray real-time imaging. In: Robotic Welding, Intelligence and Automation. Springer, pp. 359–366 (2007)
Cui, K., Jing, X.: Research on prediction model of geotechnical parameters based on BP neural network. Neural Comput. Appl. 31(12), 8205–8215 (2019)
Yang, L., Li, E., Long, T., Fan, J., Mao, Y., Fang, Z., Liang, Z.: A welding quality detection method for arc welding robot based on 3D reconstruction with SFS algorithm. Int. J. Adv. Manuf. Technol. 94(1–4), 1209–1220 (2018)
Kasban, H., Zahran, O., Arafa, H., El-Kordy, M., Elaraby, S.M., Abd El-Samie, F.: Welding defect detection from radiography images with a cepstral approach. NDT & E Int. 44(2), 226–231 (2011)
Das, B., Pal, S., Bag, S.: Weld quality prediction in friction stir welding using wavelet analysis. Int. J. Adv. Manuf. Technol. 89(1–4), 711–725 (2017)
Chen, F.-C., Jahanshahi, M.R.: NB-CNN: deep learning-based crack detection using convolutional neural network and Naïve Bayes data fusion. IEEE Trans. Ind. Electron. 65(5), 4392–4400 (2017)
Ren, R., Hung, T., Tan, K.C.: A generic deep-learning-based approach for automated surface inspection. IEEE Trans. Cybern. 48(3), 929–940 (2017)
Yang, Y., Yang, R., Pan, L., Ma, J., Zhu, Y., Diao, T., Zhang, L.: A lightweight deep learning algorithm for inspection of laser welding defects on safety vent of power battery. Comput. Ind. 123, 103306 (2020)
Zhang, Z., Wen, G., Chen, S.: Weld image deep learning-based on-line defects detection using convolutional neural networks for Al alloy in robotic arc welding. J. Manuf. Process. 45, 208–216 (2019)
Sassi, P., Tripicchio, P., Avizzano, C.A.: A smart monitoring system for automatic welding defect detection. IEEE Trans. Ind. Electron. 66(12), 9641–9650 (2019)
Yang, L., Liu, Y., Peng, J.: An automatic detection and identification method of welded joints based on deep neural network. IEEE Access 7, 164952–164961 (2019)
Günther, J., Pilarski, P.M., Helfrich, G., Shen, H., Diepold, K.: Intelligent laser welding through representation, prediction, and control learning: an architecture with deep neural networks and reinforcement learning. Mechatronics 34, 1–11 (2016)
Yang, J., Fu, G., Zhu, W., Cao, Y., Cao, Y., Yang, M.Y.: A deep learning-based surface defect inspection system using multi-scale and channel-compressed features. IEEE Trans. Instrum. Meas. (2020)
Gao, Y., Gao, L., Li, X., Wang, X.V.: A multi-level information fusion-based deep learning method for vision-based defect recognition. IEEE Trans. Instrum. Meas. (2019)
Makantasis, K., Protopapadakis, E., Doulamis, A., Doulamis, N., Loupos, C.: Deep convolutional neural networks for efficient vision based tunnel inspection. In: Proceedings of IEEE International Conference on Intelligent Computer Communication and Processing (ICCP). IEEE, pp. 335–342 (2015)
Zou, Q., Zhang, Z., Li, Q., Qi, X., Wang, Q., Wang, S.: DeepCrack: learning hierarchical convolutional features for crack detection. IEEE Trans. Image Process. 28(3), 1498–1512 (2018)
Gong, Y., Shao, H., Luo, J., Li, Z.: A deep transfer learning model for inclusion defect detection of aeronautics composite materials. Compos. Struct. 252, 112681 (2020)
Mery, D., Riffo, V., Zscherpel, U., Mondragón, G., Lillo, I., Zuccar, I., Lobel, H., Carrasco, M.: GDXray: the database of X-ray images for nondestructive testing. J. Nondestruct. Eval. 34(4), 42 (2015)
Liu, B., Zhang, X., Gao, Z., Chen, L.: Weld defect images classification with VGG16-based neural network. In: Proceedings of International Forum on Digital TV and Wireless Multimedia Communications. Springer, pp. 215–223 (2017)
Dalal, N., Triggs, B.: Histograms of oriented gradients for human detection. In: Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1. IEEE, pp. 886–893 (2005)
Ojala, T., Pietikainen, M., Maenpaa, T.: Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Patt. Anal. Mach. Intell. 24(7), 971–987 (2002)
Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems, pp. 1097–1105 (2012)
Zapata, J., Vilar, R., Ruiz, R.: Automatic inspection system of welding radiographic images based on ANN under a regularisation process. J. Nondestruct. Eval. 31(1), 34–45 (2012)
Liu, J., Fan, Z., Olsen, S.I., Christensen, K.H., Kristensen, J.K.: Boosting active contours for weld pool visual tracking in automatic arc welding. IEEE Trans. Autom. Sci. Eng. 14(2), 1096–1108 (2015)
Wang, G., Liao, T.W.: Automatic identification of different types of welding defects in radiographic images. NDT & E Int. 35(8), 519–528 (2002)
You, D., Gao, X., Katayama, S.: WPD-PCA-based laser welding process monitoring and defects diagnosis by using FNN and SVM. IEEE Trans. Ind. Electron. 62(1), 628–636 (2014)
Kuang, H., Chen, L., Gu, F., Chen, J., Chan, L., Yan, H.: Combining region-of-interest extraction and image enhancement for nighttime vehicle detection. IEEE Intell. Syst. 31(3), 57–65 (2016)
Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–9 (2015)
Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., Chen, L.-C.: MobileNetV2: inverted residuals and linear bottlenecks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4510–4520 (2018)
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778 (2016)
Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2818–2826 (2016)
Acknowledgements
The authors wish to thank the anonymous reviewers for their valuable comments and suggestions.
This work was supported by the National Natural Science Foundation of China (No. 62003309), the National Key Research & Development Project of China (2020YFB1313701), the Science & Technology Research Project in Henan Province of China (No. 202102210098), and the Outstanding Foreign Scientist Support Project in Henan Province of China (No. GZS2019008).
Cite this article
Yang, L., Fan, J., Huo, B. et al. Inspection of Welding Defect Based on Multi-feature Fusion and a Convolutional Network. J Nondestruct Eval 40, 90 (2021). https://doi.org/10.1007/s10921-021-00823-4