Abstract
Most welding manufacturing in heavy industries, such as shipbuilding and construction, is conducted in unstructured workspaces, meaning that the production environment is irregular, changeable, and unmodeled. In this case, changeable workpiece position/shape and environmental background/illumination must be carefully considered. Owing to these complicated characteristics, welding currently relies on manual operation, resulting in high cost, low efficiency, and inconsistent quality. This study proposes a portable robotic welding system and a novel seam-tracking method. Compared with existing methods, it can track more general and complex spatial weld seams. First, the tracking pose of the robot is modeled using a proposed dual-sequence tracking strategy. On this basis, the working parameters can be adjusted to avoid robot-workpiece collision around workpiece corners during tracking. By associating the forward direction of the welding torch with the viewpoint direction of the camera, the problem that weld seam feature points are prone to being lost during tracking, which conventional methods cannot solve, is addressed. Because the system deployment location is not fixed, point cloud registration is adopted to globally locate the multi-segment weld seams in the workpiece. Various experiments on single and multiple weld seams under different environmental conditions show that even when the robot was deployed in different positions, it could reach the initial points of the seams smoothly and accurately track along them.
1 Introduction
Welding plays an irreplaceable role in modern manufacturing, especially in heavy industries, as a basic fabrication method for joining materials, usually metals or thermoplastics. It is well known that up to 70% of the world’s steel production is used to manufacture welded products, structures, and equipment. Therefore, welding remains one of the leading technological processes in the global economy [1]. It is employed across numerous industrial sectors, such as shipbuilding, railway construction, machine manufacturing, bridge construction, the petroleum industry, and aerospace engineering. In many cases, welding is the only possible and most effective method for building unassembled structures with approximately optimal shapes.
However, most welding work in unstructured environments relies on manual labor. According to statistics, more than 90% of welding work is still done manually, and this proportion is even higher in heavy industries [2]. Manual welding production has inherent drawbacks. For instance, human work is costly, and workers may suffer from health problems owing to the unavoidable intense heat radiation, smoke, and arcs during welding. More importantly, human work cannot consistently guarantee welding quality, which may influence the reliability and safety of products, especially public transport products such as ships, high-speed railways, and bridges. Unfortunately, unlike structured production lines in traditional workshops, it is challenging to conduct automatic welding in heavy industries because the working environment is often unstructured, in two respects: (i) Workpiece diversity. Unlike an assembly line, where everything is fixed, various structural parts with different shapes and sizes in heavy industry are assembled on-site. This usually results in a significant deviation between the actual product size and the ideal model design, making it difficult to apply traditional offline programming or teach-and-playback modes for robotic processing. (ii) Workplace variability. Much welding work is conducted outdoors, requiring robotic welding systems to have excellent mobility, which is difficult to achieve using conventional cumbersome industrial robots.
The first issue above drives research on weld seam-tracking technology, but there are currently few methods capable of fully autonomous welding of multiple seams in unstructured environments. On the one hand, almost all existing methods require certain prior information, such as the curve equation [3] or a nominal path [4,5,6,7,8], which is extremely difficult to satisfy in an unstructured environment. For instance, a weld seam trajectory recognition method based on laser scanning displacement sensing was proposed for the automated guidance of a welding torch in skip welding of spatially intermittent weld seams [8]. This method requires scanning all seams before welding, which is inefficient and even impossible in some special environments, such as the autonomous welding of certain nuclear facilities and large outdoor structures. In 2020, Zou et al. [3] presented a tracking method for complex spatially curved weld seams, but it requires prior knowledge of the mathematical curve of the seams and thus has major limitations. On the other hand, most current studies only demonstrate the tracking of single-segment linear seams or single planar curved weld seams using various sensors. A curved weld seam is characterized by an irregular shape, and its width and angle may change frequently, which is undoubtedly challenging for seam tracking [9]. The sensors utilized for automatic robotic welding are generally based on arc and optical sensing [10]. In the early stage, before 2010, arc sensing was studied to track single-segment continuous seams [11,12,13]. Commercial arc sensors are available on the market, such as those from IGM and the POWER-TRAC from SERVO-ROBOT. Vision-based sensors are considered the most promising because of their non-contact nature, high precision, fast detection, and strong adaptability [9, 14]. Passive vision has been adopted to track straight weld seams [5, 15] and curved seams [16, 17].
In [18,19,20], stereo passive vision systems with multiple optical paths were designed to realize straight line weld seam tracking. Active vision is currently popular for seam tracking. Based on this new technology, tracking of a straight weld seam [7, 21,22,23] and a single-section curved weld seam [24,25,26] have been studied. Liu et al. [27] applied a composite sensor system including an RGB-D camera and a laser vision sensor to detect weld seams automatically.
The key to the tracking of multi-segment weld seams lies in identifying the initial point of each seam, especially for welding tasks where the workplace needs to be changed in an unstructured environment. Earlier studies [28,29,30] used RGB cameras to extract the initial points of weld seams, whereas recent studies [26, 31,32,33] have revealed the potential of laser vision sensors in searching for initial points. However, in contrast to the initial point identification problem of single-segment weld seams, the initial point identification of multi-segment weld seams also involves the pose estimation of the entire workpiece. Kim et al. [34] proposed extracting all detectable weld seams from multi-view RGB images and then filtering and merging them in the corresponding RGB-D images to detect multiple weld seams in workpieces. However, they did not consider the influence of environmental factors on object edge extraction and indiscriminately regarded the detected edge lines as weld seams. The problem of initial point identification for multi-segment weld seams still needs to be researched.
In unstructured environments, welding carriages have good application prospects for automatic welding technology. They are convenient to deploy because of their small size. Such systems, including representative commercial products, are usually driven by portable rails, such as the LIZARD from ITM [35], or by wheels, such as the products of [36, 37]. These products use manual assistance and internal motor encoders to locate weld seams and complete the automatic welding of simple trajectories such as straight lines or circular arcs, similar to [38]. Mao et al. [13] studied the automatic welding of circular weld seams using arc sensors to guide the trolleys. Because an arc sensor is unable to identify the initial point of a weld seam, it is unsuitable for the autonomous welding of multi-segment seams. Wang et al. [39] utilized vision sensors for real-time seam identification. Laser vision sensors were adopted in [40,41,42] to make the welding carriage of a wall-climbing welding robot more intelligent. However, one drawback of this platform is that it cannot weld complex spatially curved weld seams because the end-effector has only two or three degrees of freedom.
This motivates us to complement multiple-seam tracking with a 6-DOF cooperative manipulator. This portable platform can complete the welding of curved seams and achieve the effect of “placing it wherever you need it and carrying it away after welding.” The main contributions of this work are as follows: (1) a technique that calibrates the laser-vision sensor with a common chequerboard calibration board; (2) a fully autonomous tracking technique for spatial curved weld seams; (3) a novel method to estimate the real-time pose of a weld seam using three feature points on a laser stripe; and (4) the initial point identification of multiple seams with point cloud registration.
The remainder of this paper is organized as follows: Sect. 2 introduces the design of the portable robotic welding system and an overview of the weld tracking algorithm; Sect. 2.3 describes the designed laser vision sensing system and its calibration method; Sect. 3 presents the tracking method for complex curved weld seams and the global positioning method for multiple welds; Sect. 4 illustrates the laboratory tests and results; Sect. 5 concludes this work.
2 System description
2.1 System overview
The proposed system for mobile robotic welding in unstructured environments is shown in Fig. 1. It mainly consists of a cooperative manipulator (Elfin E05), a robot controller, a vision system, a welding power supply (Panasonic YD-500GR5), and a host computer (Intel NUC11). The cooperative robot carries the welding torch under motion control signals from the controller. The host computer processes the information captured by the vision sensor and sends commands to the robot controller. The sensor is installed near the end of the cooperative robot. The system can be placed on various mobile platforms, such as AGVs, or easily carried by workers to different worksites. As shown in Fig. 1b, the mass of the cooperative robot with the end-effector is merely 23 kg. The vision system contains two sensors: an RGB-D camera (Realsense D435i) to locate the workpiece globally, and a customized laser vision sensor to accurately detect the position and orientation of the weld seam.
2.2 Weld seam tracking algorithm
The weld seam tracking algorithm of our system is divided into three modules: seam global identification, seam local detection, and abnormal monitoring. The entire process is illustrated in Fig. 2.
1. Seam global identification: Workpiece geometry is used to determine the spatial distribution of each weld seam in the robot base coordinate frame by extracting the global characteristics of each weld seam from the CAD model of the workpiece. Subsequently, offline attitude planning of the end-effector and accurate positioning of the weld seam can be performed.

2. Seam local detection: Seam local detection primarily uses an image-processing algorithm to extract the feature points of the weld seam. In particular, geometric morphological processing is used to obtain the positions of the seam feature points.

3. Abnormal monitoring: This module monitors two safety indicators: the vision system state \(Flag_{cam}\) and the robot state \(Flag_{R}\). When the camera stops working (detected by \(Flag_{cam}\)) or the motion position of the cooperative manipulator deviates significantly from the expected position (detected by \(Flag_{R}\)), the robotic system stops in a timely manner.
2.3 Laser-vision sensing system
2.3.1 Mathematical model of 3D reconstruction
As illustrated in Fig. 3b, the laser-vision sensor obtains the 3D coordinates of any point on the laser stripe from its pixel position \(p_{img}(u,v)\) in the image coordinate frame. Here, the image distortion is first corrected, and the laser light plane equation is then introduced. The lens distortion model [43, 44] can be expressed as:
where \((\widetilde{u},\widetilde{v})\) is the distorted pixel in an image captured by the camera, and the corrected pixel is denoted as (u, v). \(K_{1}\), \(K_{2}\), and \(K_{3}\) are the radial distortion coefficients, and \(P_{1}\) and \(P_{2}\) are the tangential distortion coefficients. These parameters can be obtained by camera calibration, and the distorted image can be corrected according to Eq. (1). As shown in Fig. 3, for a particular point \(P_c=[X_c, Y_c, Z_c]^\mathrm {T}\) on the laser stripe in the camera coordinate frame, according to the linear imaging model, the corresponding relationship between the pixel coordinate system and camera coordinate system is:
where \(M_I\) represents the camera intrinsic matrix. The plane equation of the laser light plane in the camera coordinate system is assumed to be \(AX_{c}+BY_{c}+CZ_{c}+D=0\).
Combining Eqs. (2) and (3), the transformation between the camera coordinate system and the pixel coordinate system for any point on the laser stripe is:
We can then obtain:
Thus, we can obtain \(P_c = [X_c, Y_c, Z_c]^\mathrm {T}\) on the laser stripe from the pixel point \(p_{img}(u,v)\) of the image using Eq. (5). To guide the robot to work, \(P_c\) from the camera coordinate still needs to be converted to the robot base coordinate frame:
where \(^{t}T_{c}\) is the pose of the camera coordinate system with respect to the robot tool coordinate system and can be obtained by hand-eye calibration. \(^{B}T_{t}\) is the pose and position of the robot tool coordinate system in the robot base coordinate system and can be read from the robot controller in real time.
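The back-projection chain defined by Eqs. (2), (3), and (5) can be sketched in code. This is a minimal illustration: the intrinsic matrix and plane coefficients below are placeholders, not the calibrated values of the actual sensor.

```python
import numpy as np

def pixel_to_camera(u, v, M_I, plane):
    """Back-project a laser-stripe pixel (u, v) into camera coordinates.
    M_I: 3x3 intrinsic matrix; plane: (A, B, C, D) with A*Xc+B*Yc+C*Zc+D=0."""
    fx, fy = M_I[0, 0], M_I[1, 1]
    cx, cy = M_I[0, 2], M_I[1, 2]
    A, B, C, D = plane
    # Normalized ray through the pixel: Xc = Zc*x, Yc = Zc*y
    x, y = (u - cx) / fx, (v - cy) / fy
    Zc = -D / (A * x + B * y + C)   # intersect the ray with the laser plane
    return np.array([Zc * x, Zc * y, Zc])
```

The depth is fixed by the plane constraint, which is why a single camera plus a known light plane suffices for 3D reconstruction of the stripe.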
2.3.2 Calibration of laser-vision sensor
This subsection introduces the calibration of the coefficients A, B, C, and D in Eq. (3). Although there have been many studies on the calibration of laser-vision sensors, the primary remaining problem is the reliance on complex calibration artifacts and excessive model parameters. For instance, Idrobo-Piz et al. [45] proposed a calibration model using a camera, a lens, two lasers, and a special calibration board to calibrate the position parameters of the laser. This approach is difficult to implement because there are too many model parameters, and the calibration board is a customized product that is difficult to obtain. In a similar study [46], a special serrated stereo target was used to calibrate the hand-eye relationship between the laser vision sensor and the robot. To avoid tedious calibration, Xiao et al. [47] adopted a planar calibration board with a special pattern to implement automatic calibration. However, this type of calibration board is also a specially customized product. In contrast, a common chequerboard calibration board is used for our laser-vision sensor. Assume that the coordinates of a point P on the laser stripe in the coordinate system of the calibration plate are \(P_w = [X_{w}, Y_{w}, Z_{w}]^{T}\), and that R and t are the external parameters of the camera relative to the calibration board; then:
Then, Eq. (7) becomes:
where \(t=[t_x,t_y,t_z]^T\). As shown in Fig. 4, because all points on the laser stripe are located on the plane of the calibration plate, that is, \(Z_{w}=0\), then we have:
Combining this with Eq. (2), we can obtain:
Any point \(P_c\) in the camera coordinate system can be obtained from \(p_{img}\) via Eqs. (2) and (10). We can then keep the camera posture still, change the posture of the calibration board, and repeat the steps shown in Fig. 4 to collect multiple images. After converting all collected points to the camera coordinate system, the singular value decomposition (SVD) method is used to fit these spatial points to a plane; the plane equation coefficients in Eq. (3) are thus finally obtained.
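The SVD plane-fitting step above can be sketched as a minimal least-squares fit (the function name is ours; the normal is the right singular vector of the smallest singular value of the centered point matrix):

```python
import numpy as np

def fit_plane_svd(points):
    """Fit A*X + B*Y + C*Z + D = 0 to an Nx3 point array via SVD."""
    centroid = points.mean(axis=0)
    # The plane normal is the singular vector of the smallest singular value
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    D = -normal @ centroid
    return (*normal, D)
```

The returned coefficients correspond directly to A, B, C, and D in Eq. (3).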
3 Seam tracking methods
There are two main cases of seam tracking in an unstructured environment: single-weld seam tracking and multiple-weld seam tracking. In Subsect. 3.1, the pose modeling and tracking method for a particular weld seam are discussed in detail. On this basis, in Subsect. 3.2, taking ship structural part welding as an example, the continuous tracking method of multi-segment weld seams using 3D vision to guide the robot to transition between different seams is presented.
3.1 Single-weld seam tracking
3.1.1 Seam trajectory generation
Weld joints can be categorized into several types; it is common to classify them into butt joints, lap joints, and T joints [48]. These three typical types of weld joints can be represented by different combinations of points, which are regarded as the features of the weld seam in this paper. Once these feature points are obtained from the images, the position and orientation of the weld seam are calculated according to the type of weld joint. In this study, the features of the weld joint are defined as three points, as shown in Fig. 5.
The first step in single-seam welding involves extracting the seam features via image processing. The seam shape can be obtained via 3D reconstruction of the feature points on the laser stripe of the image (see \(p_i\), \(p_{i,l}\), and \(p_{i,r}\) in Fig. 5a–c).
We first extract the center line (red line in Fig. 5a) of the laser stripe at the sub-pixel level. We then find the intersection point \(p_i\) of the two lines and select \(p_{i,l}\) and \(p_{i,r}\) on the center line at equal distances from \(p_i\). The feature point \(p_i\) represents the position of the weld seam path point, and the two auxiliary points on both sides are used to calculate the direction of the seam. We save \(p_i\) of every captured frame in a dynamic buffer during the tracking process. From {\(p_{1}\), \(p_{2}\), \(p_{3}\), ..., \(p_{i}\), ...} in the dynamic buffer, the corresponding target points on the weld seam {\(P_1\), \(P_2\), \(P_3\), ..., \(P_i\), ...} in the robot base coordinate system can be computed using Eqs. (5) and (6), and the robot is driven sequentially through each target point.
When the robot moves along the weld seam, the time steps of the robot motion and the camera capture are not synchronized because the moving speed and capture frequency differ. If the feature points detected by the camera are sent directly to the robot, significant errors occur. For example, if the robot moves at 100 mm/s while the sensor captures at 2 frames per second, the interval between two images is approximately 50 mm, which is sparse given that a continuous laser stripe is extracted from each image. Conversely, when the robot moves at a much lower speed and the camera captures at a higher rate, the feature points obtained from the camera are dense with little difference between them, producing a large number of redundant feature points.
To filter out the redundant points, every subsequent point \(P_{i+1}\) should satisfy \(d = \Vert P_{i+1}-P_{i}\Vert \ge d_{thres}\), where d denotes the Euclidean distance between \(P_{i+1}\) and \(P_i\). Typically, \(d_{thres}\) is set to \(1\sim 2\) mm. The algorithm only processes qualified points, as described by down-sampling in Fig. 5c.
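The distance-threshold down-sampling described above can be sketched as follows (a minimal version; the function name and default threshold are illustrative):

```python
import numpy as np

def downsample_path(points, d_thres=1.5):
    """Keep a path point only if it lies at least d_thres (mm)
    from the last kept point, discarding redundant dense samples."""
    kept = [points[0]]
    for p in points[1:]:
        if np.linalg.norm(p - kept[-1]) >= d_thres:
            kept.append(p)
    return np.array(kept)
```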
3.1.2 Tracking posture computation
After obtaining the seam path point sequence, the posture of the weld seam \([X_{p_i}, Y_{p_i}, Z_{p_i}]\) is obtained by curve fitting. For a certain point \(P_{i}\) in the sequence, the adjacent 2m points are fitted using a polynomial as follows:
Afterwards, the tangent vector \(\tau _{i}=(1,\frac{\partial y}{\partial x},\frac{\partial z}{\partial x})=(1,n_{i,y},n_{i,z})^{T}\) of \(P_{i}\) can be calculated. As shown in Fig. 6a–b, the plane \(\pi _{i,1}\) with normal vector \(\tau _{i}\) is defined as the posture plane of \(P_{i}\). To ensure that the welding torch is always located at the center of the seam, the unit vectors \({n}_{i,l}\) (from \(p_i\) to \(p_{i,l}\)) and \({n}_{i,r}\) (from \(p_i\) to \(p_{i,r}\)) on both sides of the laser stripe are projected onto plane \(\pi _{i,1}\) to obtain \({\acute{n}}_{i,l}\) and \({\acute{n}}_{i,r}\), respectively:
where \(proj_{{\tau }_i}({n}_{i,l})=\left( \frac{{n}_{i,l}\cdot {\tau }_i}{\Vert {\tau }_i\Vert ^{2}}\right) {\tau }_i\) represents the projection vector of \({n}_{i,l}\) on \({\tau }_i\), and \({\acute{n}}_{i,r}\) is obtained analogously. The vector \({Z}_{p_i}\) is the direction of the angular bisector between \({\acute{n}}_{i,l}\) and \({\acute{n}}_{i,r}\).
To define the forward direction of tracking, the plane \(\pi _{i,2}\) is defined as the plane passing through point \(P_{i}\) with \({Z}_{p_i}\) as its normal vector. To ensure that the feature point of the weld seam is always within the field of view of the camera during tracking, the line \(P_{i+1}O_{c}\) is defined between feature point \(P_{i+1}\) and the origin of the camera coordinate system \(O_c\). The intersection of the extended line \(P_{i+1}O_{c}\) and the plane \(\pi _{i,2}\) is \(M_{i}\). The unit vector from the current weld feature point \(P_{i}\) to \(M_{i}\) is defined as the forward direction vector \({X}_{p_i}\):
Another direction vector \(Y_{p_i}\) can be obtained via the right-hand rule, that is, \(Y_{p_i}=Z_{p_i}\times X_{p_i}\). Through the above steps, a local posture coordinate system \({^{B}T}^{i}=[X_{p_i},Y_{p_i},Z_{p_i}]^T\) can be constructed, representing the attitude at each point \(P_i\). Compared with traditional methods [3, 4], the advantage of this tracking posture calculation is that it can better handle complex large-curvature weld seams. In particular, the robot moves to \(P_{i+1}\) while pointing at \(M_i\) rather than \(P_{i+1}\), so that the laser-vision sensor does not lose track of the curved seam, as shown in Fig. 7.
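The frame construction above can be sketched as follows. This is a simplified illustration: for brevity, the forward axis X is taken along the tangent, whereas the method above aims X at the look-ahead point \(M_i\); the function name is ours.

```python
import numpy as np

def seam_frame(tau, n_l, n_r):
    """Local posture frame at a path point P_i.
    Z: bisector of the stripe side vectors projected into the posture
    plane (normal tau); X: simplified here to the tangent direction;
    Y = Z x X by the right-hand rule."""
    tau = tau / np.linalg.norm(tau)
    proj = lambda n: n - (n @ tau) * tau   # project onto plane pi_{i,1}
    nl, nr = proj(n_l), proj(n_r)
    nl, nr = nl / np.linalg.norm(nl), nr / np.linalg.norm(nr)
    Z = nl + nr                            # angular bisector direction
    Z = Z / np.linalg.norm(Z)
    X = tau
    Y = np.cross(Z, X)
    return X, Y, Z
```

For a symmetric V-groove the bisector Z points straight out of the groove, which keeps the torch centered in the seam.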
3.1.3 Dual-queue tracking strategy
In actual welding, the welding pose has a significant impact on the mechanical properties of welded joints [49]. The computation of the robot's nominal tracking posture has been given above. However, in many manufacturing scenarios, the welding torch is not required to be in contact with (or perpendicular to) the welding surface. This requires the robot to adjust its posture appropriately during welding. Three main parameters can be tuned: the welding pitch d, travel angle \(\alpha\), and working angle \(\beta\) [50, 51]. As shown in Fig. 8, for each weld seam path point \(P_{i}\), its local attitude is assumed to be \({^{B}T}^{i}\). Then, we have:
Where
where \({^{B}\breve{T}}^{i}\) is the expected pose of the welding torch at \(P_i\), and \(\alpha\) and \(\beta\) are the angular offsets between the tracking coordinate system and the local coordinate system at \(P_i\). By regulating \(\mathcal {T}\), the tracking sequence corresponding to each point in the path point sequence can be generated, as shown in Fig. 8.
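The pose-offset operator \(\mathcal {T}\) is not spelled out here, but one plausible realization, under the assumption that the travel and working angles rotate about the local Y and X axes and the pitch d retracts along the local Z axis, can be sketched as:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def adjusted_pose(T_i, d, alpha, beta):
    """Offset the local seam pose T_i (4x4 homogeneous matrix) by travel
    angle alpha, working angle beta, and welding pitch d. This is one
    plausible form of the operator, not the paper's exact definition."""
    offs = np.eye(4)
    offs[:3, :3] = rot_y(alpha) @ rot_x(beta)
    offs[2, 3] = -d   # pull the torch tip back along the local z-axis
    return T_i @ offs
```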
We used a position-based visual servo (PBVS) mechanism to control the movement of the robot. The robot keeps moving towards the target point \(Q_i\) in the tracking sequence; once it reaches the point, the point is deleted from memory, and the robot moves towards the next target point \(Q_{i+1}\) until the entire tracking process is completed. During this process, the welding torch must constantly change its position and attitude. To adapt the control to varying intervals between points in the tracking sequence, a novel and simple approach for robot motion control is proposed.
Assuming that the robot moves in the velocity mode, the motion from point \(Q_i\) to \(Q_{i+1}\) can be decomposed into translation \(T_{\nu }=[t_x,t_y,t_z]^T\) and rotation \(R_{\nu }=[R_x,R_y,R_z]^T\), where \(t_x\), \(t_y\), and \(t_z\) are the translation velocities in the X, Y, and Z directions of the tool coordinate system, respectively, and \(R_x\), \(R_y\), and \(R_z\) are the rotational velocities in the corresponding directions.
Let the translation vector between the current point \(Q_i\) and the next point \(Q_{i+1}\) be:
Suppose the time required for the robot to move from \(Q_{i}\) to \(Q_{i+1}\) is t, then the speed of position movement is
where \(\delta =1/t\) is the speed adjustment of the position translation. The translation speed of the welding torch can be set during the tracking process by adjusting \(\delta\).
To ensure that the welding torch completes its posture transformation exactly when it reaches \(Q_{i+1}\), the posture transformation speed is set consistent with the translation speed \(\delta\). Let the poses of \(Q_{i}\) and \(Q_{i+1}\) in the robot base coordinate system be \(R_{i}\) and \(R_{i+1}\), respectively, and the transformation matrix between the two be \(\Delta R = R_{i}^{-1}R_{i+1}\).
By converting \(\Delta R\) to RPY angle form, we can obtain
Let \(\Delta r= \left[ \widetilde{R}_{x}, \widetilde{R}_{y}, \widetilde{R}_{z}\right]\) and its normalized vector be \(\Delta \widetilde{r}\); then, the final speed of pose transformation for the robot is \(R_{\nu }=\delta \Delta \widetilde{r}\).
Finally, the calculated speeds are transmitted to the robot to realize real-time changes in the robot pose. Although the path is tracked through discrete points, consistent overall motion can be maintained by normalizing the motion vectors.
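The velocity-command computation described above can be sketched as follows; this is a minimal version that assumes a ZYX (roll-pitch-yaw) convention for the RPY extraction, and the function name is ours:

```python
import numpy as np

def velocity_command(Q_i, Q_ip1, R_i, R_ip1, delta):
    """Speed commands for moving from Q_i to Q_{i+1} in velocity mode.
    delta = 1/t scales the translation so the segment takes time t;
    the rotation speed is the normalized RPY increment scaled by delta."""
    T_v = delta * (Q_ip1 - Q_i)      # translational speed [tx, ty, tz]
    dR = R_i.T @ R_ip1               # relative rotation, Delta R
    # RPY angles from dR (ZYX convention assumed)
    rx = np.arctan2(dR[2, 1], dR[2, 2])
    ry = np.arctan2(-dR[2, 0], np.hypot(dR[2, 1], dR[2, 2]))
    rz = np.arctan2(dR[1, 0], dR[0, 0])
    dr = np.array([rx, ry, rz])
    n = np.linalg.norm(dr)
    R_v = delta * dr / n if n > 1e-12 else np.zeros(3)
    return T_v, R_v
```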
3.2 Continuous tracking of multi-weld seams
Because our welding system needs to be deployed in various workplaces and the posture of each placement is not fixed, it is necessary to guide the robot to the initial point of the weld seam. As introduced in Sect. 1, in an unstructured workspace, the initial position of every seam must be located for multi-seam welding. Fortunately, the CAD model of the workpiece provides reliable prior information. Using a 3D camera and the CAD model, the iterative closest point (ICP) algorithm [52] is applied to obtain the pose of the workpiece \(^{CAD}T_{C}\) in the camera coordinate system.
3.2.1 Global working parameters
As shown in Fig. 9a–b, the robot may collide with the workpiece near a corner. To accurately locate the initial point of the seam, we define the direction vector \(\widetilde{\Gamma }\) to guide the robot in scanning the seam. Several parameters are defined for multi-seam welding in the coordinate system of the CAD model {M} and the coordinate system of the robot base {B}, as shown in Table 1.
3.2.2 Global positioning of multiple welds
Point cloud registration is used to position the workpiece, as illustrated in Fig. 10. First, we sample points from the CAD model (Fig. 10b) to form a high-density point cloud, as shown in Fig. 10c. Then, we downsample the high-density point cloud to a sparse point cloud, as shown in Fig. 10d.
The point cloud from the RGB-D camera is obtained after deploying the robot. Owing to the laser-speckle imaging mechanism of the camera, there may be a large amount of point cloud clutter in the field of view, as shown in Fig. 10e. Point cloud filtering is applied to remove outliers and noise with the help of the filter module of the Point Cloud Library (PCL) [53], as illustrated in Fig. 10f. Correspondingly, we downsample the captured point cloud to a sparse cloud, as shown in Fig. 10g. Finally, the ICP algorithm is employed to match the two point clouds in Fig. 10d and g, as shown in Fig. 10h.
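The actual system uses PCL for registration; the core of point-to-point ICP can be sketched in a minimal, self-contained form (brute-force nearest-neighbour matching plus the SVD/Kabsch rigid alignment; function names are ours):

```python
import numpy as np

def icp_step(src, dst):
    """One ICP iteration: match each src point to its nearest dst point,
    then solve the optimal rigid alignment via SVD (Kabsch)."""
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    mu_s, mu_d = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ S @ U.T
    t = mu_d - R @ mu_s
    return R, t

def icp(src, dst, iters=30):
    """Iterate alignment; returns the accumulated rotation and translation."""
    R_tot, t_tot = np.eye(3), np.zeros(3)
    for _ in range(iters):
        R, t = icp_step(src, dst)
        src = src @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot
```

Like all local ICP variants, this converges only from a reasonable initial guess, which is why the captured cloud is first filtered and downsampled.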
3.2.3 Initial posture for different docking locations
For the portable system, the placement position is not necessarily the same each time; it is therefore necessary to set a proper posture fixed relative to the workpiece to ensure that the robot can image correctly when it reaches the initial point of the weld seam. For any seam, after each placement of the robot, the following formula holds
where superscript B, c, M, and t represent the base coordinate system \(\{B\}\), camera coordinate system \(\{c\}\), CAD model coordinate system \(\{M\}\), and tool coordinate system \(\{t\}\), respectively; \(^tR_{c}\) is the attitude of the camera coordinate system in the tool coordinate system, which can be obtained by hand-eye calibration; \(^BR_{t}\) can be queried from the robot controller; and \(^{M}{R}_{c}\) can be obtained by point cloud registration using a 3D camera.
To determine an appropriate posture, we assign the posture of the robot at the initial point in advance such that the laser stripe line is approximately in the middle of the image. We denote this pose of the robot as \(^B\mathfrak {R}_{t}\) and substitute it into Eq. (22) to compose the pose \(^{M}\mathfrak {R}_{B}\). From Eq. (23), we can obtain the required parameter, denoted as \(^{M}\mathfrak {R}_{t}\). Because this attitude is that of the end-effector relative to the CAD coordinate system, it is independent of the deployment location of the robot. For any placement,
where superscript i represents the deployment time. In this manner, the relative attitude of the end-effector to the workpiece can be kept unchanged by simply sending \(^{B}R_{t}^{i}\) to the robot controller after each deployment of the robot.
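The attitude chain behind Eqs. (22)–(24) can be sketched as follows; the function names are ours, and the chain \(^{M}R_{c} = {^{M}R_{B}}\,{^{B}R_{t}}\,{^{t}R_{c}}\) is assumed:

```python
import numpy as np

def deployment_base_in_model(R_M_c, R_B_t, R_t_c):
    """^M R_B for the current deployment, from the ICP registration R_M_c,
    the current tool attitude R_B_t, and the hand-eye attitude R_t_c."""
    return R_M_c @ np.linalg.inv(R_B_t @ R_t_c)

def tool_attitude_for_deployment(R_M_B_i, R_M_t):
    """^B R_t^i that keeps the taught tool attitude ^M R_t fixed
    relative to the workpiece for deployment i."""
    return np.linalg.inv(R_M_B_i) @ R_M_t
```

Because the taught attitude lives in the CAD frame, the second function reproduces the same view of the seam regardless of where the system is placed.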
3.2.4 Positioning of initial point
The automatic guidance to the initial point of the weld seam determines the success of welding. However, research on positioning the initial welding point remains problematic [54], and few studies have been reported. In this work, we use the point cloud registration method to roughly locate multiple weld seams and guide the robot to move to a point near the initial point. The laser-vision sensor then scans to locate the initial point accurately. As illustrated in Table 1, assuming that the initial point of a certain weld seam in the CAD model is \(\widetilde{x}_{0}\), after applying the transformation in Eq. (6), we obtain the corresponding \(x_{0}\). Directing the robot to the initial point \(x_{0}\) may lead to collisions due to errors in point cloud registration. Therefore, we construct a virtual ball with \(x_{0}\) as the center and r as the radius so that the actual position of the welding torch falls on the sphere, as shown in Fig. 11a. Assuming that the initial alignment point on the sphere is \(x_{m,0}\), it can be determined by:
where O is the origin of {B}, and \(^B\widehat{Z}_t\) is the z-axis of \(^{B}R_{t}^{i}\) in Eq. (24).
After reaching \(x_{m,0}\), the weld seam is scanned along the pre-defined direction \(\widetilde{\Gamma }\), and the initial point of the weld seam is accurately approached by examining the shape of the laser line. As shown in Fig. 11c, we use the length of the left laser stripe sideline as the judgment criterion. As the welding torch gradually approaches the initial point of the seam, the length of the sideline gradually increases. When its length exceeds a certain threshold, the initial point is considered to have been reached.
4 Experiments
Three types of experiments were carried out to verify the feasibility of the proposed seam-tracking method. The first experiment verified the accuracy of our laser-vision sensor; the second confirmed the effect of single-weld seam tracking; and the last tested the performance of multi-weld seam tracking, considering changeable elements in unstructured environments. In the single-weld seam tracking test, instead of choosing a real torch with a welding wire as the end-effector, we adopted a displacement sensor with a rigid pen as the experimental torch; see Figs. 13a and 15a. Compared with the deformable welding wire, this made it easier to observe subtle collisions and interference between the end-effector and the workpiece.
After testing our tracking method for a single weld, we used a ship structure as a test artifact to verify the feasibility of the continuous tracking of multiple weld seams. The difference between multi-seam and single-seam tracking is that multi-seam tracking requires the global positioning of the artifact. In this section, the system performance in terms of global positioning in an unstructured environment is also verified. The experimental platform for the case of multi-seam tracking is shown in Fig. 17.
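The least-squares core of such a point cloud registration (the per-iteration alignment step of ICP in the sense of Besl and McKay) can be sketched as follows; this is a generic Kabsch/SVD alignment of already-corresponded point sets, not the paper's exact registration pipeline:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid alignment of corresponded (N, 3) point
    sets: find the rotation R and translation t minimising
    ||(src @ R.T + t) - dst||. This is the closed-form step that ICP
    iterates after re-estimating correspondences."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                             # proper rotation, det = +1
    t = cd - R @ cs
    return R, t

# recover a known pose from a synthetic "scan" of a CAD point set
rng = np.random.default_rng(0)
model = rng.uniform(-1.0, 1.0, size=(50, 3))
th = np.deg2rad(30.0)
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
t_true = np.array([0.5, -0.2, 0.8])
scan = model @ R_true.T + t_true
R_est, t_est = rigid_align(model, scan)
```

With noise-free correspondences the true pose is recovered exactly; in practice the iterative correspondence search and outlier handling dominate the robustness of the registration.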
4.1 Precision evaluation
Based on the mathematical model in Eq. (5) in Sect. 2.3, we can obtain the three-dimensional coordinate \(P_i=[X_c,Y_c,Z_c]^T\) of a point in the camera coordinate system from the two-dimensional pixel coordinate \(p_i=[u,v]^T\) in the image coordinate system. As shown in Fig. 12a, we kept the laser-vision sensor stationary and placed the calibration plate at an appropriate position so that the edge of the chequerboard grid coincided with the laser stripe. We evaluated the accuracy of our model by measuring the sides of the squares; because the side-length error of each square is very small, its declared value can be used as the theoretical value for the measurement. For a certain grid in a row, the coordinates of the left and right corner points are \(^c\text{P}_{L}=[x_{L},y_{L},z_{L}]^T\) and \(^c\text{P}_{R}=[x_{R},y_{R},z_{R}]^T\), respectively. We define the measurement error as:
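Assuming the error is the absolute deviation of the measured 3-D corner distance from the plate's nominal side length, the definition can be sketched as:

```python
import numpy as np

def square_measurement_error(p_left, p_right, nominal_side):
    """Absolute deviation of the measured 3-D distance between the two
    reconstructed corner points of one chequerboard square from the
    plate's declared (nominal) side length, in mm."""
    measured = np.linalg.norm(np.asarray(p_left) - np.asarray(p_right))
    return abs(measured - nominal_side)

# hypothetical reconstructed corners of a nominally 10 mm square
err = square_measurement_error([0.0, 0.0, 0.0], [10.05, 0.0, 0.0], 10.0)
```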
We measured all 324 squares on the calibration board and calculated their measurement errors, as shown in Fig. 12b. The results show that the average measurement error is \(\widetilde{E}\) = 0.0991 mm, and the overall deviation is within the range \(0.0152 \sim 0.1659\) mm using our calibration method in Sect. 2.3.2. To further verify the 3D reconstruction capability of the designed sensor, we performed a 3D reconstruction of all pixel points on the entire laser stripe. As shown in Fig. 12c, the robot carried the sensor to scan a straight weld seam. The result, shown in Fig. 12d, clearly demonstrates that the reconstructed data points reflect the 3D shape of the weld seam.
4.2 Single-weld seam tracking
4.2.1 Seam tracking of a complex space curve
To verify the tracking method proposed in this work, a special part with a large curvature was selected for the experiment. As shown in Fig. 13a, when the initial point was given, the robot first moved toward it. Once it reached the initial point, the robot end-effector started an autonomous movement along the curved weld seam until it reached the given endpoint. A portion of the snapshots taken during the tracking process is shown in Fig. 13c. The entire tracking process did not require any manual intervention or prior information such as the curve equation of the weld seam, CAD model, or pre-taught trajectory. While the robot was tracking along the weld seam, we recorded the position of the robot end-effector in the backend program of the host computer and projected these position points onto the XOY plane of the robot base coordinate frame, as shown in Fig. 13b. It can be observed that the motion trajectory of the robot matches the profile of the part to be welded.
The forward direction is vital for tracking complex curved weld seams. To confirm the tracking stability, we recorded the pixel coordinates of all the feature points, as shown in Fig. 14a. The size of the image frame is \(1920\times 960\) pixels, and the red anchor is located at its center (960, 480). This shows that our method kept the feature points at the image center throughout the entire tracking process. For comparison, as illustrated in Fig. 7, we set the forward direction to the tangent direction of the weld seam; the result is shown in Fig. 14b–c. Noticeably, after a certain tracking time, the weld feature points escaped from the visible range of the image, which ultimately caused the tracking to fail.
4.2.2 Accuracy evaluation in single-weld seam tracking
The basic principle of accuracy evaluation for weld seam tracking is to compare the moving trajectory of the welding torch with the actual weld seam curve. However, it is difficult to obtain the true coordinates of the points on the weld seam precisely. For this reason, most existing studies sample the weld seam points using the teaching method to obtain the true coordinates of multiple trajectory points [3, 4]. To compare the two kinds of trajectories more accurately, we first calibrated the position and pose between the workpiece coordinate system and the robot base coordinate system, in order to convert the point coordinates of the weld seam to the robot base frame. The established workpiece coordinate system is shown in Fig. 15a. The coordinates of each point on the weld seam were extracted from the CAD model of the workpiece, and the actual trajectory of the weld seam was obtained via coordinate transformation. Because we strictly controlled the manufacturing precision of the workpiece, we could ensure the consistency between the actual weld seam and the trajectory in the CAD model. Assuming that a point on the weld seam in the workpiece coordinate system is expressed as \(P_{i}\), it can be converted to the base coordinate system by the following equation:
By applying Eq. (27) to all the points on the weld seam, the trajectory of the actual weld in the robot base coordinate system can be obtained. Similarly, the normal attitude of each point can be calculated using the CAD model and used as the theoretical value.
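Applying a calibrated homogeneous transform to the CAD seam points, as in Eq. (27), can be sketched as follows; the transform and seam point values are illustrative:

```python
import numpy as np

def seam_to_base(points_w, T_bw):
    """Convert seam points from the workpiece frame to the robot base
    frame with a calibrated 4x4 homogeneous transform T_bw.
    points_w: (N, 3) array of CAD seam points."""
    homo = np.hstack([points_w, np.ones((len(points_w), 1))])  # homogeneous
    return (T_bw @ homo.T).T[:, :3]

# illustrative transform: identity rotation, 500/0/100 mm translation
T_bw = np.eye(4)
T_bw[:3, 3] = [500.0, 0.0, 100.0]
seam_b = seam_to_base(np.array([[10.0, 20.0, 30.0]]), T_bw)
```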
The accuracy evaluation experiments were divided into four groups, as shown in Fig. 15b. When the working parameters were separately regulated, the end-effector moved in the expected posture throughout the entire process, as shown in Fig. 15a, demonstrating that the proposed dual-sequence tracking strategy is effective. For accuracy evaluation, we compared the recorded robot trajectory with the actual weld seam trajectory. After tracking with the normal attitude, we plotted the motion trajectory of the robot and the actual weld seam, as shown in Fig. 16a; evidently, the two trajectories are highly coincident. Fig. 16b shows that the tracking pose remained close to the theoretical pose during the tracking process. After adding a welding pitch angle, Fig. 16b shows that the robot motion trajectory is almost parallel to the actual weld seam trajectory and fits it well. For any point \(\widetilde{P_{i}}(x, y, z)\) on the motion trajectory of the end-effector, the nearest point \(P_{i}(x, y, z)\) in the Euclidean sense is taken as its corresponding point on the weld seam, and the Euclidean distance \(e_{i}=\Vert \widetilde{P_{i}}-P_{i}\Vert _2\) between the two points is regarded as the trajectory tracking error. The error statistics for the four groups of experiments are shown in Table 2, from which it can be quantitatively observed that the proposed method maintains the mean error within 0.6 mm. Furthermore, the tracking test with the normal attitude has the highest accuracy, and adjusting the working parameters does not significantly influence the tracking accuracy.
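The nearest-point error metric \(e_i\) can be sketched with a brute-force pairwise search (for dense trajectories a k-d tree would be preferable):

```python
import numpy as np

def tracking_errors(traj, seam):
    """For each recorded end-effector point, the Euclidean distance to
    its nearest sample on the reference seam, e_i = ||P~_i - P_i||_2.
    Brute-force O(N*M) search; fine for a few thousand points."""
    d = np.linalg.norm(traj[:, None, :] - seam[None, :, :], axis=2)
    return d.min(axis=1)

# straight reference seam along x; trajectory offset by 0.5 mm in y
seam = np.column_stack([np.arange(0.0, 11.0), np.zeros(11), np.zeros(11)])
traj = seam + np.array([0.0, 0.5, 0.0])
errs = tracking_errors(traj, seam)     # each error is 0.5 mm
```

Note that this metric measures only the lateral deviation from the seam curve, not the progress along it, which matches how the tracking error is defined above.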
4.3 Multi-weld seam tracking
It is known that illumination is the main factor affecting image quality and, in turn, the positioning performance. We therefore tested the global positioning performance of the system under different illumination conditions in an unstructured environment, that is, indoors and outdoors on a sunny summer day between 13:00 and 18:00. An adjustable light source was used to generate different lighting conditions in the indoor environment. An easy-to-measure assessment point on the structural part was used as a reference point in place of the initial point, as shown in Fig. 17. As illustrated in Fig. 18, to evaluate the global positioning performance of the portable system, we placed the workpiece at different positions and verified the accuracy of reaching the specified point and the retention of the end pose.
As shown in Fig. 18, even at different placement positions, the posture of the robot end-effector when reaching the assessment point is largely the same, which verifies the feasibility of our initial point positioning method.
Under a given lighting condition, after the robot end-effector reached the assessment point in the desired posture, a vernier caliper was used to measure the distance between the end of the torch and the reference point; this distance is regarded as the global positioning error. We then maintained the lighting condition, changed the placement position of the robot, and measured the positioning result again, as shown in Fig. 19. This experiment was repeated 20 times, and the positioning errors were recorded along with the illumination conditions. We then moved the system to an outdoor environment and performed the same operation. As shown in Fig. 19, the first group of data is the statistical result of the 20 positioning errors under 315 lx illumination, and the results for 1789–81400 lx illumination are from the outdoor environment test. Even under strong sunlight, the positioning error stays in the range of 10–25 mm, indicating that even if the robot is placed at different sites, our method can guide it to the vicinity of the initial point of the weld seam. We also conducted a multi-seam tracking test on an experimental structure in an outdoor environment. The results show that the robot could track the multi-segment weld seam smoothly without human intervention. The recorded tracking trajectory is shown in Fig. 20; the welding trajectory of the robot fits the actual trajectory very well.
Overall, for spatial weld seams, the proposed algorithm offers better flexibility than the traditional teaching-playback working mode. Because of the light weight of the collaborative manipulator, the system is easy to deploy. Through human-machine collaboration, the imaging pose of the 3D camera can easily be adjusted to globally position the multiple weld seams in the workpiece, even when the system is placed at a different position. We first performed experimental tests on a single weld seam under various conditions and then autonomous welding tests on multiple seams, and both obtained satisfactory results.
5 Discussion
Although this work has made certain progress, some technical limitations remain and need to be considered in future research. The following four aspects are discussed.
1. Application scenario: Although a 6-DOF manipulator was used in the experimental setup, the proposed tracking method could also be applied to a welding cart with fewer DOFs. One potential application is mounting a 6-DOF collaborative manipulator on a mobile platform for more flexible welding tasks in unstructured environments.
2. Noise interference: Welding arc light can interfere with feature point recognition. Because the effectiveness of the tracking method depends on the real-time detection accuracy of weld seams, a large amount of noise such as arc light and spatter during welding may affect feature point recognition. This problem needs to be solved in future studies.
3. Joint adaptability: In this work, the position and orientation are calculated from three feature points, and we experimentally verified the lap joint, the butt joint with a V-groove, and the T joint. However, different types of weld joints produce different laser stripe appearances. Whether other types of weld seams, such as U-shaped weld joints or discontinuous weld seams whose laser stripes break at the weld seam edges in images, can be handled in the same way is a subject for further study.
4. Control unpredictability: In this study, a cubic polynomial was used to fit the path points and generate the forward direction of the weld seam. The fitting order must be adjusted based on the actual situation; an empirical suggestion is that the 3rd or 4th order is sufficient for most tasks, whereas the 5th order is prone to the Runge phenomenon, which may cause unstable robot motion. In addition, the parameter that controls the number of fitting points should be set within the range \(\left[ 70,100 \right]\); smaller values are also prone to unstable motion.
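The polynomial-fitting step described in item 4 can be sketched as follows; the index-based parameterisation and the use of the fitted tangent at the newest point as the forward direction are assumptions based on the description above:

```python
import numpy as np

def seam_forward_direction(path_pts, order=3):
    """Fit each coordinate of the recent path points with a polynomial
    (3rd order by default, per the empirical suggestion above) over an
    index parameter, and return the unit tangent at the newest point
    as the seam forward direction."""
    s = np.arange(len(path_pts), dtype=float)   # path-point index parameter
    tangent = np.array([
        np.polyval(np.polyder(np.polyfit(s, path_pts[:, k], order)), s[-1])
        for k in range(3)
    ])
    return tangent / np.linalg.norm(tangent)

# 80 path points (within the suggested 70-100 range) on a straight seam
pts = np.column_stack([np.linspace(0.0, 79.0, 80),
                       np.zeros(80), np.zeros(80)])
direction = seam_forward_direction(pts)         # approximately [1, 0, 0]
```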
6 Conclusion
This study presented an autonomous tracking method for multi-segment spatial weld seams, and the experiments revealed good tracking performance. On the basis of the visual guidance method, a dual-sequence tracking mechanism was proposed, which effectively solves the problem that traditional methods cannot flexibly adjust important welding parameters such as the working and travel angles. Autonomous tracking of multi-segment weld seams is realized using point cloud registration for global workpiece positioning. By building a portable automatic welding system around a collaborative robot, our research better addresses autonomous tracking of spatial welds in unstructured environments and facilitates fully autonomous robotic welding.
Data availability
Not applicable.
Code availability
Not applicable.
References
Nikolov M (2014) Trends in development of weld overlaying during the 21st century. Acta Technologica Agriculturae 17(2):35–38
Xu Y, Yu H, Zhong J, Lin T, Chen S (2012) Real-time seam tracking control technology during welding robot GTAW process based on passive vision sensor. J Mater Process Technol 212(8):1654–1662
Zou Y, Chen J, Wei X (2020) Research on a real-time pose estimation method for a seam tracking system. Opt Lasers Eng 127:105947
De Graaf M, Aarts R, Jonker B, Meijer J (2010) Real-time seam tracking for robotic laser welding using trajectory-based control. Control Eng Pract 18(8):944–953
Xu Y, Lv N, Fang G, Du S, Zhao W, Ye Z, Chen S (2017) Welding seam tracking in robotic gas metal arc welding. J Mater Process Technol 248:18–30
Moon HS, Ko SH, Kim JC (2009) Automatic seam tracking in pipeline welding with narrow groove. Int J Adv Manuf Technol 41(3–4):234–241
Regaard B, Kaierle S, Poprawe R (2009) Seam-tracking for high precision laser welding applications–methods, restrictions and enhanced concepts. J Laser Appl 21(4):183–195
Li G, Hong Y, Gao J, Hong B, Li X (2020) Welding seam trajectory recognition for automated skip welding guidance of a spatially intermittent welding seam based on laser vision sensor. Sensors 20(13):3657
Xu Y, Wang Z (2021) Visual sensing technologies in robotic welding: recent research developments and future interests. Sens Actuators A Phys p 112551
Rout A, Deepak B, Biswal B (2019) Advances in weld seam tracking techniques for robotic welding: a review. Robot Comput Integr Manuf 56:12–37
Lee GY, Oh MS, Kim SB (2003) Development of a high speed rotating arc sensor system for tracking complicate curved fillet welding lines. Int J Precis Eng Manuf 4(6):20–28
Liu W, Li L, Hong Y, Yue J (2017) Linear mathematical model for seam tracking with an arc sensor in P-GMAW processes. Sensors 17(3):591
Mao Z, Pan J, Zhang H (2010) Mobile welding robot system based on rotating arc sensor applied for large fillet welding seam tracking. In: 2010 Sixth International Conference on Natural Computation, IEEE, vol 1, pp 394–397
Ushio M, Mao W (1994) Sensors for arc welding: advantages and limitations. Trans JWRI 23(2):135–141
Xue K, Wang Z, Shen J, Hu S, Zhen Y, Liu J, Wu D, Yang H (2021) Robotic seam tracking system based on vision sensing and human-machine interaction for multi-pass mag welding. J Manuf Process 63:48–59
Dinham M, Fang G (2013) Autonomous weld seam identification and localisation using eye-in-hand stereo vision for robotic arc welding. Robot Comput Integr Manuf 29(5):288–301
Shen H, Lin T, Chen S, Li L (2010) Real-time seam tracking technology of welding robot with visual sensing. J Intell Robot Syst 59(3):283–298
Nilsen M, Sikström F, Christiansson AK, Ancona A (2017) Vision and spectroscopic sensing for joint tracing in narrow gap laser butt welding. Opt Laser Technol 96:107–116
Zhang Z, Wen G, Chen S (2019) Weld image deep learning-based on-line defects detection using convolutional neural networks for Al alloy in robotic arc welding. J Manuf Process 45:208–216
Bračun D, Sluga A (2015) Stereo vision based measuring system for online welding path inspection. J Mater Process Technol 223:328–336
Li X, Li X, Ge SS, Khyam MO, Luo C (2017) Automatic welding seam tracking and identification. IEEE Trans Ind Electron 64(9):7261–7271
Yang L, Liu Y, Peng J, Liang Z (2020) A novel system for off-line 3D seam extraction and path planning based on point cloud segmentation for arc welding robot. Robot Comput Integr Manuf 64:101929
Fang Z, Xu D, Tan M (2010) A vision-based self-tuning fuzzy controller for fillet weld seam tracking. IEEE/ASME Trans Mechatron 16(3):540–550
Zou Y, Chen T, Chen X, Li J (2022) Robotic seam tracking system combining convolution filter and deep reinforcement learning. Mech Syst Signal Process 165:108372
Xu P, Tang X, Yao S (2008) Application of circular laser vision sensor (CLVS) on welded seam tracking. J Mater Process Technol 205(1–3):404–410
Fan J, Deng S, Jing F, Zhou C, Yang L, Long T, Tan M (2019) An initial point alignment and seam-tracking system for narrow weld. IEEE Trans Ind Inf 16(2):877–886
Liu C, Wang H, Huang Y, Rong Y, Meng J, Li G, Zhang G (2022) Welding seam recognition and tracking for a novel mobile welding robot based on multi-layer sensing strategy. Meas Sci Technol 33(5):055109
Yan Z, Xu D, Li Y (2008) A visual servoing system for the torch alignment to initial welding position. In: International Conference on Intelligent Robotics and Applications, Springer, pp 697–706
Wei S, Wang J, Lin T, Chen S (2012) Application of image morphology in detecting and extracting the initial welding position. Journal of Shanghai Jiaotong University (Science) 17(3):323–326
Chen XZ, Chen SB (2010) The autonomous detection and guiding of start welding position for arc welding robot. Ind Robot 37(1):70–78
Ma Y, Fan J, Deng S, Luo Y, Ma X, Jing F, Tan M (2021) Efficient and accurate start point guiding and seam tracking method for curve weld based on structure light. IEEE Trans Instrum Meas 70:1–10
Wang N, Shi X, Zhang X (2017) Recognition of initial welding position based on structured-light for arc welding robot. In: International Conference on Intelligent Robotics and Applications, Springer, pp 564–575
Zhang L, Xu Y, Du S, Zhao W, Hou Z, Chen S (2018) Point cloud based three-dimensional reconstruction and identification of initial welding position. Transactions on Intelligent Welding Manufacturing. Springer, Singapore, pp 61–77
Kim J, Lee J, Chung M, Shin YG (2021) Multiple weld seam extraction from RGB-depth images for automatic robotic welding via point cloud registration. Multimed Tools Appl 80(6):9703–9719
Industrial Tool & Machinery Sales. Lizard welding carriage. https://www.industrialtool.com.au/category/822-products-by-category, Accessed 10 Nov 2021
Koike Aronson Inc. Wel-Handy Multi Next. https://www.koike.com/wel-handy-multi-next, Accessed 10 Nov 2021
Feng X, Gao L, Tian W, Wei R, Wang Z, Chen Y (2020) Application of wall climbing welding robot in automatic welding of island spherical tank. J Coast Res 107(SI):1–4
Gao Y, Zhang H, Ye Y (2011) Back-stepping and neural network control of a mobile robot for curved weld seam tracking. Procedia Engineering 15:38–44
Wang Z, Zhang K, Chen Y, Luo Z, Zheng J (2017) A real-time weld line detection for derusting wall-climbing robot using dual cameras. J Manuf Process 27:76–86
Zhang L, Sun J, Yin G, Zhao J, Han Q (2015) A cross structured light sensor and stripe segmentation method for visual tracking of a wall climbing robot. Sensors 15(6):13725–13751
Kermorgant O (2018) A magnetic climbing robot to perform autonomous welding in the shipbuilding industry. Robot Comput Integr Manuf 53:178–186
Zhang L, Ke W, Ye Q, Jiao J (2014) A novel laser vision sensor for weld line detection on wall-climbing robot. Opt Laser Technol 60:69–79
Corke P (2017) Robotics, vision and control: fundamental algorithms in MATLAB, 2nd edn. Springer Tracts in Advanced Robotics, vol 118. Springer
Yu S, Hong J, Zhang T, Yang Z, Guan Y (2020) A self-correction based algorithm for single-shot camera calibration. In: International Conference on Intelligent Robotics and Applications, Springer, pp 442–455
Idrobo-Pizo GA, Motta JMS, Sampaio RC (2019) A calibration method for a laser triangulation scanner mounted on a robot arm for surface mapping. Sensors 19(8):1783
Zou Y, Chen X (2018) Hand-eye calibration of arc welding robot and laser vision sensor through semidefinite programming. Robot Comput Integr Manuf 45(5):597–610
Xiao R, Xu Y, Hou Z, Chen C, Chen S (2021) An automatic calibration algorithm for laser vision sensor in robotic autonomous welding system. J Intell Manuf pp 1–14
Hicks J (1997) Chapter 5 - Weld preparations. In: Hicks J (ed) Welded joint design, 2nd edn. Woodhead Publishing, Cambridge, England, pp 29–38
Li J, Li H, Wei H, Gao Y (2016) Effect of torch position and angle on welding quality and welding process stability in pulse on pulse MIG welding-brazing of aluminum alloy to stainless steel. Int J Adv Manuf Technol 84(1–4):705–716
Teeravarunyou S, Poopatb B (2009) Computer based welding training system. International Journal of Industrial Engineering 16(2):116–125
Shultz EF, Cole EG, Smith CB, Zinn MR, Ferrier NJ, Pfefferkorn FE (2010) Effect of compliance and travel angle on friction stir welding with gaps. J Manuf Sci Eng 132(4)
Besl PJ, Mckay HD (1992) A method for registration of 3-D shapes. IEEE Trans Pattern Anal Mach Intell 14(2):239–256
Rusu RB, Cousins S (2011) 3D is here: Point Cloud Library (PCL). In: IEEE International Conference on Robotics and Automation (ICRA), IEEE, Shanghai, China
Fang Z, Xu D, Tan M (2013) Vision-based initial weld point positioning using the geometric relationship between two seams. Int J Adv Manuf Technol 66(9–12):1535–1543
Funding
This work was supported in part by the Research and Development Programs in Key Areas of Guangdong Province (Grant Nos. 2020B090928002 and 2019B090915001), the Natural Science Foundation of China (Grant No. 51905105), the Natural Science Foundation of Guangdong Province (Grant No. 2020A1515011262), and the State Key Laboratory of Precision Electronic Manufacturing Technology and Equipment.
Author information
Authors and Affiliations
Contributions
Shuangfei Yu: conceptualization, investigation, methodology, and writing original draft preparation. Yisheng Guan: project leader, validation. Zhi Yang: partial methodology, experimental assistance. Chutian Liu: experimental assistance. Jiacheng Hu: experimental assistance. Jie Hong: original draft preparation. Haifei Zhu: original draft preparation. Tao Zhang: original draft preparation and validation.
Corresponding authors
Ethics declarations
Ethics approval
Not applicable.
Consent for publication
All authors have read and agreed to the published version of the manuscript.
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Yu, S., Guan, Y., Yang, Z. et al. Multiseam tracking with a portable robotic welding system in unstructured environments. Int J Adv Manuf Technol 122, 2077–2094 (2022). https://doi.org/10.1007/s00170-022-10019-3