1 Introduction

Welding plays an irreplaceable role in modern manufacturing, especially in heavy industries, as a basic fabrication method for joining materials, usually metals or thermoplastics. It is well known that up to 70% of the world’s steel production is used to manufacture welded products, structures, and equipment; welding therefore remains one of the leading technological processes in the global economy [1]. It is employed in numerous industrial sectors, such as shipbuilding, railway construction, machine manufacturing, bridge construction, the petroleum industry, and aerospace engineering. In many cases, welding is the only feasible and most effective method for building non-detachable structures with near-optimal shapes.

However, most welding work in unstructured environments relies on manual labor. According to statistics, more than 90% of welding work is still done manually, and the proportion is even higher in heavy industries [2]. Manual welding production has inherent drawbacks. For instance, human labor is costly, and workers may suffer from health problems owing to the unavoidable intense heat radiation, smoke, and arcs during welding. More importantly, manual work cannot consistently guarantee welding quality, which may affect the reliability and safety of products, especially public-transport products such as ships, high-speed railways, and bridges. Unfortunately, unlike structured production lines in traditional workshops, it is challenging to conduct automatic welding in heavy industries because the working environment is often unstructured, for two main reasons: (i) Workpiece diversity. Unlike an assembly line, where everything is fixed, structural parts with various shapes and sizes in heavy industry are assembled on-site. This usually results in a significant deviation between the actual product geometry and the ideal design model, making it difficult to apply traditional offline programming or the teach-and-playback mode for robotic processing. (ii) Workplace variability. Much welding work is conducted outdoors, requiring robotic welding systems to have excellent mobility, which is difficult to achieve with conventional, cumbersome industrial robots.

The first issue above drives research on weld seam-tracking technology, but few existing methods are capable of fully autonomous welding of multiple seams in unstructured environments. On the one hand, almost all existing methods require certain prior information, such as the curve equation [3] or a nominal path [4,5,6,7,8], which is extremely difficult to provide in an unstructured environment. For instance, a weld seam trajectory recognition method based on laser scanning displacement sensing was proposed for the automated guidance of a welding torch in skip welding of a spatially intermittent weld seam [8]. This method requires scanning all seams before welding, which is inefficient and even impossible in some special environments, such as the autonomous welding of certain nuclear facilities and large outdoor structures. In 2020, Zou et al. [3] presented a tracking method for complex spatially curved weld seams, but it requires prior knowledge of the mathematical curve of the seams and therefore has major limitations. On the other hand, most current studies only demonstrate the tracking of single-segment linear seams or single planar curved weld seams using various sensors. A curved weld seam is characterized by an irregular shape, and its width and angle may change frequently, which is undoubtedly challenging for seam tracking [9]. The sensors utilized for automatic robotic welding are generally based on arc and optical sensing [10]. In the early stage, before 2010, arc sensing was studied to track single-segment continuous seams [11,12,13]. Commercial arc sensors are available on the market, such as those from IGM and the POWER-TRAC™ sensor from SERVO-ROBOT. Vision-based sensors are considered the most promising because of their non-contact nature, high precision, fast detection, and strong adaptability [9, 14]. Passive vision has been adopted to track straight weld seams [5, 15] and curved seams [16, 17]. In [18,19,20], stereo passive vision systems with multiple optical paths were designed to realize straight-line weld seam tracking. Active vision is currently popular for seam tracking; based on it, tracking of straight weld seams [7, 21,22,23] and single-section curved weld seams [24,25,26] has been studied. Liu et al. [27] applied a composite sensor system comprising an RGB-D camera and a laser vision sensor to detect weld seams automatically.

The key to tracking multi-segment weld seams lies in identifying the initial point of each seam, especially for welding tasks where the worksite changes in an unstructured environment. Earlier studies [28,29,30] used RGB cameras to extract the initial points of weld seams, whereas recent studies [26, 31,32,33] have revealed the potential of laser vision sensors in searching for initial points. However, in contrast to the initial point identification problem for single-segment weld seams, initial point identification for multi-segment weld seams also involves pose estimation of the entire workpiece. Kim et al. [34] proposed extracting all detectable weld seams from multi-view RGB images and then filtering and merging them in the corresponding RGB-D images to detect multiple weld seams in workpieces. However, they did not consider the influence of environmental factors on object edge extraction and indiscriminately regarded the detected edge lines as weld seams. Initial point identification for multi-segment weld seams therefore remains an open research problem.

In unstructured environments, welding carriages have good application prospects for automatic welding. They are convenient to deploy because of their small size. This type of system, including representative commercial products, is usually driven along portable rails, such as the LIZARD from ITM [35], or on wheels, such as the products in [36, 37]. These products use manual assistance and internal motor encoders to locate weld seams and complete the automatic welding of simple trajectories such as straight lines or circular arcs, similar to [38]. Mao et al. [13] studied the automatic welding of circular weld seams using arc sensors to guide the carriage. Because an arc sensor cannot identify the initial point of a weld seam, it is unsuitable for the autonomous welding of multi-segment seams. Wang et al. [39] utilized vision sensors for real-time seam identification. Laser vision sensors were adopted in [40,41,42] to make the welding carriage of a wall-climbing welding robot more intelligent. However, one drawback of this type of platform is that it cannot weld complex spatially curved weld seams because the end-effector has only two or three degrees of freedom.

This motivates us to complement multiple-seam tracking with a 6-DOF cooperative manipulator. The resulting portable platform can weld curved seams and achieves the effect of “placing it wherever you need it and carrying it away after welding.” The main contributions of this work are as follows: (1) a technique that calibrates the laser-vision sensor with a common chequerboard calibration board; (2) a fully autonomous tracking technique for spatially curved weld seams; (3) a novel method to estimate the real-time pose of a weld seam using three feature points on a laser stripe; and (4) initial point identification of multiple seams via point cloud registration.

The remainder of this paper is organized as follows: Sect. 2 introduces the design of the portable robotic welding system, an overview of the weld-tracking algorithm, and the designed laser-vision sensing system with its calibration method; Sect. 3 presents the tracking method for complex curved weld seams and the global positioning method for multiple welds; Sect. 4 illustrates the laboratory tests and results; Sect. 5 discusses the remaining limitations; Sect. 6 concludes this work.

2 System description

2.1 System overview

The proposed system for mobile robotic welding in unstructured environments is shown in Fig. 1. It mainly consists of a cooperative manipulator (Elfin E05), a robot controller, a vision system, a welding power supply (Panasonic YD-500GR5), and a host computer (Intel NUC11). The cooperative robot carries the welding torch under motion control signals from the controller. The host computer processes the information captured by the vision sensors and sends commands to the robot controller. The sensors are installed near the end of the cooperative robot. The system can be placed on various mobile platforms such as AGVs or easily carried by workers to different worksites. As shown in Fig. 1b, the mass of the cooperative robot with the end-effector is merely 23 kg. The vision system contains two sensors: an RGB-D camera (Realsense D435i) to locate the workpiece globally, and a customized laser-vision sensor to accurately detect the position and orientation of the weld seam.

Fig. 1
figure 1

Overview of the portable robotic welding system. a System components; b portability demonstration

2.2 Weld seam tracking algorithm

The weld seam tracking algorithm of our system is divided into three modules: global seam identification, local seam detection, and abnormal monitoring. The entire process is illustrated in Fig. 2.

  1.

    Global seam identification: The workpiece geometry is used to determine the spatial distribution of each weld seam in the robot base coordinate frame by extracting the global characteristics of each seam from the CAD model of the workpiece. Subsequently, offline attitude planning of the end-effector and accurate positioning of the weld seam can be performed.

  2.

    Local seam detection: Local seam detection primarily uses an image-processing algorithm for feature point extraction of the weld seam. In particular, geometric morphological processing is used to obtain the positions of the seam feature points.

  3.

    Abnormal monitoring: This module monitors two safety indicators: the vision system state \(Flag_{cam}\) and the robot state \(Flag_{R}\). When the camera stops working (detected by \(Flag_{cam}\)) or the position of the cooperative manipulator deviates significantly from the expected position (detected by \(Flag_{R}\)), the robotic system stops in a timely manner. A minimal sketch of this check is given after the list.
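
The following is a minimal sketch of the abnormal-monitoring check; the function name, the 2 mm tolerance, and the way the two flags are represented are illustrative assumptions rather than the implementation used in the system.

```python
import numpy as np

def safe_to_continue(flag_cam_ok, expected_position, actual_position, tol_mm=2.0):
    """Sketch of the abnormal-monitoring module: Flag_cam is represented by flag_cam_ok,
    and Flag_R trips when the manipulator deviates from its expected position by more
    than tol_mm (the tolerance value is an assumption)."""
    deviation = np.linalg.norm(np.asarray(expected_position, dtype=float)
                               - np.asarray(actual_position, dtype=float))
    return flag_cam_ok and deviation <= tol_mm  # if False, a stop command is issued
```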

Fig. 2
figure 2

Weld seam tracking algorithm. The workflow of the algorithm contains two main processes: weld seam global identification and local detection

2.3 Laser-vision sensing system

2.3.1 Mathematical model of 3D reconstruction

As illustrated in Fig. 3b, the laser-vision sensor obtains the 3D coordinates of any point on the laser stripe from its pixel position \(p_{img}(u,v)\) in the image coordinate frame. The image distortion is corrected first, and then the laser light plane equation is introduced. The lens distortion model [43, 44] can be expressed as:

$$\begin{aligned} \left( \begin{array}{c} u \\ v \end{array} \right) = \left( \begin{array}{c} \widetilde{u}+\widetilde{u}(K_{1}r^{2}+K_{2}r^{4}+K_{3}r^{6})+2P_{1}\widetilde{u}\widetilde{v}+P_{2}(r^{2}+2\widetilde{u}^{2})\\ \widetilde{v}+\widetilde{v}(K_{1}r^{2}+K_{2}r^{4}+K_{3}r^{6})+P_{1}(r^{2}+2\widetilde{v}^{2})+2P_{2}\widetilde{u}\widetilde{v} \end{array} \right) \end{aligned}$$
(1)

where \((\widetilde{u},\widetilde{v})\) is the distorted pixel in an image captured by the camera, and the corrected pixel is denoted as \((u, v)\). \(K_{1}\), \(K_{2}\), and \(K_{3}\) are the radial distortion coefficients, and \(P_{1}\) and \(P_{2}\) are the tangential distortion coefficients. These parameters can be obtained by camera calibration, and the distorted image can be corrected according to Eq. (1). As shown in Fig. 3, for a particular point \(P_c=[X_c, Y_c, Z_c]^\mathrm {T}\) on the laser stripe in the camera coordinate frame, according to the linear imaging model, the relationship between the pixel coordinate system and the camera coordinate system is:

$$\begin{aligned} \left[ \begin{array}{c} u \\ v\\ 1 \end{array} \right] = \left[ \begin{array}{ccc} f_{x} &{} 0 &{} u_{0}\\ 0 &{} f_{y} &{} v_{0}\\ 0 &{} 0 &{} 1 \end{array} \right] \left[ \begin{array}{c} X_{c}/Z_{c} \\ Y_{c}/Z_{c} \\ 1 \end{array} \right] =M_I\left[ \begin{array}{c} X_{c}/Z_{c} \\ Y_{c}/Z_{c} \\ 1 \end{array} \right] \end{aligned}$$
(2)

where \(M_I\) represents the camera intrinsics. Assuming that the plane equation of the laser light plane in the camera coordinate system is:

$$\begin{aligned} AX_{c}+BY_{c}+CZ_{c}+D=0 \end{aligned}$$
(3)

Combining Eqs. (2) and (3), the transformation between the camera coordinate system and the pixel coordinate system for any point on the laser stripe is:

$$\begin{aligned} \left[ \begin{array}{ccc} f_{x} &{} 0 &{} u_{0}-u\\ 0 &{} f_{y} &{} v_{0}-v\\ -A &{} -B &{} -C \end{array} \right] \left[ \begin{array}{c} X_{c} \\ Y_{c} \\ Z_{c} \end{array} \right] =\left[ \begin{array}{c} 0 \\ 0 \\ D \end{array} \right] \end{aligned}$$
(4)

We can then obtain:

$$\begin{aligned} {\left\{ \begin{array}{ll} X_{c}=-\dfrac{Df_{y}(u-u_{0})}{Cf_{y}f_{x}+Af_{y}(u-u_{0})+Bf_{x}(v-v_{0})} \\ Y_{c}=-\dfrac{Df_{x}(v-v_{0})}{Cf_{y}f_{x}+Af_{y}(u-u_{0})+Bf_{x}(v-v_{0})} \\ Z_{c}=-\dfrac{Df_{x}f_{y}}{Cf_{y}f_{x}+Af_{y}(u-u_{0})+Bf_{x}(v-v_{0})} \end{array}\right. } \end{aligned}$$
(5)
Fig. 3
figure 3

Laser-vision sensor: diagram of the mathematical model. a Sensor configuration; b imaging model: {c} is the camera coordinate frame, {B} is the base coordinate frame of the robot, and {o} is the pixel coordinate frame

Thus, we can obtain \(P_c = [X_c, Y_c, Z_c]^\mathrm {T}\) on the laser stripe from the pixel point \(p_{img}(u,v)\) of the image using Eq. (5). To guide the robot, \(P_c\) in the camera coordinate frame still needs to be converted to the robot base coordinate frame:

$$\begin{aligned} ^BP=^{B}T_t\cdot ^{t}T_c \cdot P_c \end{aligned}$$
(6)

where \(^{t}T_{c}\) is the pose of the camera coordinate system with respect to the robot tool coordinate system and can be obtained by hand-eye calibration. \(^{B}T_{t}\) is the pose and position of the robot tool coordinate system in the robot base coordinate system and can be read from the robot controller in real time.
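
To make the reconstruction chain concrete, the following sketch evaluates Eqs. (5) and (6) for a single corrected pixel; the function name and argument layout are illustrative, and the pixel is assumed to have already been undistorted via Eq. (1).

```python
import numpy as np

def pixel_to_base(u, v, fx, fy, u0, v0, plane, T_B_t, T_t_c):
    """Recover the camera-frame point on the laser plane from a corrected pixel (u, v)
    via Eq. (5), then map it to the robot base frame via Eq. (6).
    plane = (A, B, C, D); T_B_t and T_t_c are 4x4 homogeneous transforms."""
    A, B, C, D = plane
    den = C * fx * fy + A * fy * (u - u0) + B * fx * (v - v0)
    Xc = -D * fy * (u - u0) / den
    Yc = -D * fx * (v - v0) / den
    Zc = -D * fx * fy / den
    P_c = np.array([Xc, Yc, Zc, 1.0])     # homogeneous camera-frame point
    return (T_B_t @ T_t_c @ P_c)[:3]      # ^B P in the robot base frame
```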

2.3.2 Calibration of laser-vision sensor

This subsection introduces the calibration of the coefficients A, B, C, and D in Eq. (3). Although there have been many studies on the calibration of laser-vision sensors, the main remaining problem is the need for complex calibration artifacts and excessive model parameters. For instance, Idrobo-Piz et al. [45] proposed a calibration model using a camera, a lens, two lasers, and a special calibration board to calibrate the position parameters of the laser. This approach is difficult to implement because there are too many model parameters, and the calibration board is a customized product that is difficult to obtain. In a similar study [46], a particular serrated stereo target was used to calibrate the hand-eye relationship between the laser vision sensor and the robot. To avoid tedious calibration, Xiao et al. [47] adopted a planar calibration board with a special pattern to implement automatic calibration; however, this type of calibration board is also a specially customized product. In contrast, our laser-vision sensor only requires the most common chequerboard calibration board. Assume that the coordinates of a point P on the laser stripe in the coordinate system of the calibration board are \(P_w = [X_{w}, Y_{w}, Z_{w}]^{T}\), and that R and t are the extrinsic parameters of the camera relative to the calibration board; then:

$$\begin{aligned} P_c = R \cdot P_w+t \end{aligned}$$
(7)

Then, Eq. (7) becomes:

$$\begin{aligned} P_w=R^{-1}(P_c-t)=\left[ \begin{array}{ccc} \acute{r_{11}} &{} \acute{r_{12}} &{} \acute{r_{13}}\\ \acute{r_{21}} &{} \acute{r_{22}} &{} \acute{r_{23}}\\ \acute{r_{31}} &{} \acute{r_{32}} &{} \acute{r_{33}}\\ \end{array} \right] \left[ \begin{array}{c} X_{c}-t_{x} \\ Y_{c}-t_{y}\\ Z_{c}-t_{z}\\ \end{array} \right] \end{aligned}$$
(8)

where \(t=[t_x,t_y,t_z]^T\). As shown in Fig. 4, because all points on the laser stripe are located on the plane of the calibration board, that is, \(Z_{w}=0\), we have:

$$\begin{aligned} \acute{r_{31}}(X_{c}-t_{x})+\acute{r_{32}}(Y_{c}-t_{y})+\acute{r_{33}}(Z_{c}-t_{z})=0 \end{aligned}$$
(9)

Combining this with Eq. (2), we obtain:

$$\begin{aligned} Z_{c}=\frac{\acute{r_{31}}t_{x}+\acute{r_{32}}t_{y}+\acute{r_{33}}t_{z}}{\acute{r_{31}}(u-u_{0})/f_{x}+\acute{r_{32}}(v-v_{0})/f_{y}+\acute{r_{33}}} \end{aligned}$$
(10)

Any point \(P_c\) in the camera coordinate system can thus be obtained from \(p_{img}\) via Eqs. (2) and (10). We then keep the camera still, change the pose of the calibration board, and repeat the steps shown in Fig. 4 to collect multiple images. After converting all collected stripe points to the camera coordinate system, singular value decomposition (SVD) is used to fit these spatial points to a plane, and the coefficients of the plane equation in Eq. (3) are finally obtained.
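
A minimal sketch of this final plane-fitting step is shown below, assuming the stripe points from all calibration poses have already been converted to the camera frame; the function name is illustrative.

```python
import numpy as np

def fit_laser_plane(points):
    """Fit A*x + B*y + C*z + D = 0 (Eq. (3)) to camera-frame stripe points using SVD."""
    P = np.asarray(points, dtype=float)          # N x 3 array of [Xc, Yc, Zc]
    centroid = P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P - centroid)
    n = Vt[-1]                                   # normal = direction of least variance
    A, B, C = n
    D = -float(n @ centroid)
    return A, B, C, D
```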

Fig. 4
figure 4

Laser-vision sensor calibration method. a An image of the calibration board is first taken by the camera to obtain the relative pose between the calibration board and the camera. b Keeping the position and posture of the camera as in (a) and switching on the laser projector, an image of the laser stripe is taken. c All points \(p_{img}\) on the laser stripe are transformed to points \(P_c\) in the camera coordinate system. Steps (a) and (b) are then repeated to obtain multiple lines (in yellow) in the camera coordinate system. Finally, the points are fitted to a spatial plane

Fig. 5
figure 5

Weld seam feature point definition and extraction. a Laser stripe and its feature points on the fillet weld seam; b feature points on the lap weld seam; c feature points on the butt weld seam; d flowchart of weld seam point generation from feature points

3 Seam tracking methods

There are two main cases of seam tracking in an unstructured environment: single-weld seam tracking and multiple-weld seam tracking. Subsection 3.1 discusses the pose modeling and tracking method for a single weld seam in detail. On this basis, Subsect. 3.2, taking the welding of ship structural parts as an example, presents the continuous tracking method for multi-segment weld seams, in which 3D vision guides the robot to transition between different seams.

3.1 Single-weld seam tracking

3.1.1 Seam trajectory generation

Weld joints can be categorized into several types; the most commonly used are the butt joint, lap joint, and T joint [48]. These three typical joint types can be represented by different combinations of points, which are regarded as the features of the weld seam in this paper. Once these feature points are obtained from the images, the position and orientation of the weld seam are calculated according to the type of weld joint. In this study, the features of a weld joint are defined as three points, as shown in Fig. 5.

The first step in single-seam welding involves extracting the seam features via image processing. The seam shape can be obtained via 3D reconstruction of the feature points on the laser stripe of the image (see \(p_i\), \(p_{i,l}\), and \(p_{i,r}\) in Fig. 5a–c).

We first extract the center line (red line in Fig. 5a) of the laser stripe at the sub-pixel level. We then find the intersection point \(p_i\) of the two lines and select \(p_{i,l}\) and \(p_{i,r}\) on the center line, equidistant from \(p_i\). The feature point \(p_i\) represents the position of the weld seam path point, and the two auxiliary points on both sides are used to compute the direction of the seam. We save the \(p_i\) of every captured frame in a dynamic buffer during the tracking process. From {\(p_{1}\), \(p_{2}\), \(p_{3}\), ..., \(p_{i}\), ...} in the dynamic buffer, the corresponding target points on the weld seam {\(P_1\), \(P_2\), \(P_3\), ..., \(P_i\), ...} in the robot base coordinate system can be computed using Eqs. (5) and (6), and the robot is driven sequentially through each target point.

When the robot moves along the weld seam, the moving speed of the robot and the capture frequency of the camera are not synchronized. If the feature points detected by the camera are sent directly to the robot, significant errors will occur. For example, if the robot moves at 100 mm/s while the sensor captures 2 frames per second, the interval between two images is approximately 50 mm, which is too sparse even though a continuous laser stripe is extracted from each image. Conversely, when the robot moves much more slowly and the camera captures at a higher rate, the feature points obtained from successive frames are densely spaced with little difference between them, which produces a large number of redundant feature points.

To filter out the redundant points, every subsequent point \(P_{i+1}\) should satisfy:

$$\begin{aligned} d = \Vert P_{i+1}-P_{i}\Vert \ge d_{thres} \end{aligned}$$
(11)

where d denotes the Euclidean distance between \(P_{i+1}\) and \(P_i\). Typically, \(d_{thres}\) is set to 1–2 mm. The algorithm only processes the qualified points, as described by the down-sampling step in Fig. 5d.
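
A minimal sketch of this redundancy filter is given below, assuming the path points have already been transformed to the base frame; the threshold value and function name are illustrative.

```python
import numpy as np

def downsample_path(points, d_thres=1.5):
    """Keep a new seam point only if it lies at least d_thres (mm) from the last
    accepted point (Eq. (11))."""
    kept = []
    for P in points:
        P = np.asarray(P, dtype=float)
        if not kept or np.linalg.norm(P - kept[-1]) >= d_thres:
            kept.append(P)
    return kept
```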

Fig. 6
figure 6

Tracking posture computation. a Finding the tangent vector \(\tau _i\) of the ith path point \(P_i\); b constructing the path point tangent plane \(\pi _{2i}\) from \(Z_p\); c constructing the normal vector \(Z_p\) from the feature points \(p_{0}\), \(p_{1}\), and \(p_{2}\); d calculating the forward direction \(X_p\) of the welding torch

Fig. 7
figure 7

Forward direction comparison of the normal tracking attitude with conventional methods [3, 4]. The traditional method defines the tangential direction \(\tau _i\) of the path point as the forward direction of the torch, which easily causes the feature point to escape from the camera’s field of view. a Forward direction defined by this method. b Forward direction defined by traditional methods

3.1.2 Tracking posture computation

After obtaining the seam path point sequence, the posture \([X_{p_i}, Y_{p_i}, Z_{p_i}]\) of the weld seam is obtained by curve fitting. For a certain point \(P_{i}\) in the sequence, its 2m adjacent points are fitted with a cubic polynomial as follows:

$$\begin{aligned} {\left\{ \begin{array}{ll} X_i= t\\ Y_i= A_{1}t^{3}+B_{1}t^{2}+C_{1}t+D_{1}\\ Z_i=A_{2}t^{3}+B_{2}t^{2}+C_{2}t+D_{2} \end{array}\right. } \end{aligned}$$
(12)

Afterwards, the tangent vector \(\tau _{i}=(1,\frac{\partial y}{\partial x},\frac{\partial z}{\partial x})^{T}=(1,n_{i,y},n_{i,z})^{T}\) of \(P_{i}\) can be calculated. As shown in Fig. 6a–b, the plane \(\pi _{1i}\) with normal vector \(\tau _{i}\) is defined as the posture plane of \(P_{i}\). To ensure that the welding torch is always located at the center of the seam, the unit vectors \({n}_{i,l}\) (from \(p_i\) to \(p_{i,l}\)) and \({n}_{i,r}\) (from \(p_i\) to \(p_{i,r}\)) on both sides of the laser stripe are projected onto plane \(\pi _{1i}\) to obtain \({\acute{n}}_{i,l}\) and \({\acute{n}}_{i,r}\), respectively:

$$\begin{aligned} {\left\{ \begin{array}{ll} {\acute{n}}_{i,l}= {n}_{i,l}-proj_{{\tau }_i}({n}_{i,l})\\ {\acute{n}}_{i,r}={n}_{i,r}-proj_{{\tau }_i}({n}_{i,r})\\ {Z}_{p_i}={\acute{n}}_{i,l}+{\acute{n}}_{i,r} \end{array}\right. } \end{aligned}$$
(13)

where \(proj_{{\tau }_i}({n}_{i,l})\) represents the projection vector of \({n}_{i,l}\) on \({\tau }_i\):

$$\begin{aligned} proj_{{\tau }_i}({n}_{i,l})=\frac{{\tau }_i\cdot {n}_{i,l}}{{\Vert {\tau }_i\Vert }^2}{\tau }_i \end{aligned}$$
(14)

and \(proj_{{\tau }_i}({n}_{i,r})\) is computed analogously. The vector \({Z}_{p_i}\) is the direction of the angular bisector between \({\acute{n}}_{i,l}\) and \({\acute{n}}_{i,r}\).

To define the forward direction of tracking, the plane \(\pi _{2i}\) is defined as the plane passing through point \(P_{i}\) with \({Z}_{p_i}\) as its normal vector. To ensure that the feature point of the weld seam remains within the field of view of the camera during tracking, the line \(P_{i+1}O_{c}\) is defined between feature point \(P_{i+1}\) and the origin of the camera coordinate system \(O_c\). The intersection of the extended line \(P_{i+1}O_{c}\) with the plane \(\pi _{2i}\) is \(M_{i}\). The unit vector from the current weld feature point \(P_{i}\) to \(M_{i}\) is defined as the forward direction vector \({X}_{p_i}\):

$$\begin{aligned} {X}_{p_i}=\frac{M_{i}-P_{i}}{\Vert M_{i}-P_{i}\Vert } \end{aligned}$$
(15)

Another direction vector \(Y_{p_i}\) is obtained via the right-hand rule, that is, \(Y_{p_i}=Z_{p_i}\times X_{p_i}\). Through the above steps, a local posture coordinate system \({^{B}T}^{i}=[X_{p_i},Y_{p_i},Z_{p_i}]\) can be constructed, representing the attitude at each point \(P_i\). Compared with traditional methods [3, 4], the advantage of this posture calculation is that it can better handle complex, large-curvature weld seams. In particular, the robot moves to \(P_{i+1}\) while pointing toward \(M_i\) rather than \(P_{i+1}\), so that the laser-vision sensor does not lose track of the curved seam, as shown in Fig. 7.
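
The following numpy sketch assembles the local frame \([X_{p_i},Y_{p_i},Z_{p_i}]\) from Eqs. (13)–(15); the function name and argument list are assumptions, and all inputs are taken to be 3-vectors already expressed in the robot base frame.

```python
import numpy as np

def tracking_posture(P_i, P_next, p_l, p_r, O_c, tau):
    """Local tracking frame at path point P_i.
    P_next: next path point P_{i+1}; p_l, p_r: side feature points of the stripe;
    O_c: camera origin in the base frame; tau: fitted tangent at P_i."""
    tau = tau / np.linalg.norm(tau)
    n_l = (p_l - P_i) / np.linalg.norm(p_l - P_i)
    n_r = (p_r - P_i) / np.linalg.norm(p_r - P_i)
    # Project the side vectors onto the plane normal to tau (Eq. (13))
    n_l_p = n_l - np.dot(tau, n_l) * tau
    n_r_p = n_r - np.dot(tau, n_r) * tau
    Z_p = n_l_p + n_r_p
    Z_p /= np.linalg.norm(Z_p)            # normalized here so it can serve as a frame axis
    # Intersect the extended line through P_{i+1} and O_c with the plane pi_{2i}
    d = P_next - O_c
    s = np.dot(Z_p, P_i - O_c) / np.dot(Z_p, d)
    M_i = O_c + s * d
    X_p = (M_i - P_i) / np.linalg.norm(M_i - P_i)   # forward direction (Eq. (15))
    Y_p = np.cross(Z_p, X_p)                        # right-hand rule
    return np.column_stack([X_p, Y_p, Z_p])         # axes as columns of the local frame
```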

3.1.3 Dual-queue tracking strategy

In actual welding, the welding pose also has a significant impact on the mechanical properties of welded joints [49]. The computation of the normal tracking posture has been given above. However, in many manufacturing scenarios, the welding torch is not required to be in contact with (or perpendicular to) the welding surface; this requires the robot to adjust its posture appropriately during welding. Three main parameters can be tuned: the welding pitch d, the travel angle \(\alpha\), and the working angle \(\beta\) [50, 51]. As shown in Fig. 8, for each weld seam path point \(P_{i}\), let its local attitude be \({^{B}T}^{i}\). Then:

$$\begin{aligned} {^{B}\breve{T}}^{i} = {^{B}T}^{i} \cdot \begin{bmatrix}R_{Set} &{} t_{Set} \\ \varvec{0} &{} 1 \end{bmatrix} ={^{B}T}^{i}\mathcal {T} \end{aligned}$$
(16)

where

$$\begin{aligned} \begin{aligned} R_{Set}&=R_{Z}(0)R_{Y}(\beta )R_{X}(\alpha )\\&=\begin{bmatrix}1 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} 1 \end{bmatrix} \begin{bmatrix} \cos \beta &{} 0 &{} \sin \beta \\ 0 &{} 1 &{} 0 \\ -\sin \beta &{} 0 &{} \cos \beta \end{bmatrix} \begin{bmatrix} 1 &{} 0 &{} 0 \\ 0 &{} \cos \alpha &{} -\sin \alpha \\ 0 &{} \sin \alpha &{} \cos \alpha \end{bmatrix} \end{aligned} \end{aligned}$$
(17)

where \({^{B}\breve{T}}^{i}\) is the expected pose of the welding torch at \(P_i\), and \(\alpha\) and \(\beta\) are the angular offsets between the tracking coordinate system and the local coordinate system of \(P_i\). By regulating \(\mathcal {T}\), the tracking sequence corresponding to each point in the path point sequence can be generated, as shown in Fig. 8.
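
A sketch of the offset transform \(\mathcal {T}\) in Eqs. (16)–(17) is given below; placing the welding-pitch offset d along the local Z axis is an assumption, since the direction of \(t_{Set}\) is not spelled out above.

```python
import numpy as np

def pose_offset(alpha, beta, d):
    """Offset transform T of Eq. (16): travel angle alpha (about X), working angle beta
    (about Y), welding pitch d (assumed to act along the local Z axis)."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(alpha), -np.sin(alpha)],
                   [0, np.sin(alpha),  np.cos(alpha)]])
    Ry = np.array([[ np.cos(beta), 0, np.sin(beta)],
                   [0, 1, 0],
                   [-np.sin(beta), 0, np.cos(beta)]])
    T = np.eye(4)
    T[:3, :3] = Ry @ Rx                  # R_Set = R_Z(0) R_Y(beta) R_X(alpha), Eq. (17)
    T[:3, 3] = np.array([0.0, 0.0, d])   # t_Set; sign depends on the torch frame convention
    return T

# Expected torch pose at P_i (Eq. (16)): T_expected = T_local @ pose_offset(alpha, beta, d)
```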

Fig. 8
figure 8

Dual-sequence tracking strategy for adjusting working parameters

We use a position-based visual servo (PBVS) scheme to control the movement of the robot. The robot keeps moving toward the target point \(Q_i\) in the tracking sequence; once it reaches the point, the point is deleted from memory, and the robot moves toward the next target point \(Q_{i+1}\) until the entire tracking process is completed. During this process, the welding torch must constantly change its position and attitude. To adapt the robot control to different intervals between points in the tracking sequence, a novel and simple approach to robot motion control is proposed.

Assuming that the robot moves in velocity mode, the motion from point \(Q_i\) to \(Q_{i+1}\) can be decomposed into a translation \(T_{\nu }=[t_x,t_y,t_z]^T\) and a rotation \(R_{\nu }=[R_x,R_y,R_z]^T\), where \(t_x\), \(t_y\), and \(t_z\) are the translational velocities along the X, Y, and Z directions of the tool coordinate system, respectively, and \(R_x\), \(R_y\), and \(R_z\) are the rotational velocities about the corresponding axes.

Let the unit translation direction from the current point \(Q_i\) to the next point \(Q_{i+1}\) be:

$$\begin{aligned} \Delta d = \frac{Q_{i+1}-Q_{i}}{\Vert Q_{i+1}-Q_{i}\Vert } = \left[ \Delta t_{x}, \Delta t_{y}, \Delta t_{z} \right] \end{aligned}$$
(18)

Suppose the time required for the robot to move from \(Q_{i}\) to \(Q_{i+1}\) is t; then, the translational velocity is

$$\begin{aligned} T_{\nu }= \frac{\Delta d}{t} = \left[ \frac{\Delta t_{x}}{t}, \dfrac{\Delta t_{y}}{t}, \dfrac{\Delta t_{z}}{t} \right] = \Delta d \delta \end{aligned}$$
(19)

where \(\delta =1/t\) is the speed adjustment of the position translation. The translation speed of the welding torch can be set during the tracking process by adjusting \(\delta\).

To ensure that the welding torch completes its posture transformation exactly when it reaches \(Q_{i+1}\), the posture transformation speed is set consistent with the translation speed factor \(\delta\). Let the attitudes of \(Q_{i}\) and \(Q_{i+1}\) in the robot base coordinate system be \(R_{i}\) and \(R_{i+1}\), respectively, and let the transformation matrix between the two be \(\Delta R\); then

$$\begin{aligned} R_{i+1}=\Delta R R_{i} \qquad \Rightarrow \qquad \Delta R=R_{i+1} R_{i}^{-1}=\begin{bmatrix}r_{11} &{} r_{12} &{} r_{13} \\ r_{21}&{} r_{22} &{} r_{23} \\ r_{31} &{}r_{32} &{} r_{33} \end{bmatrix} \end{aligned}$$
(20)

By converting \(\Delta R\) to RPY angle form, we can obtain

$$\begin{aligned} {\left\{ \begin{array}{ll} \widetilde{R}_{x}= \mathrm {Atan2}(r_{32},r_{33}) \\ \widetilde{R}_{y}= \mathrm {Atan2}(-r_{31},\sqrt{r_{11}^{2}+r_{21}^{2}}) \\ \widetilde{R}_{z}= \mathrm {Atan2}(r_{21},r_{11}) \end{array}\right. } \end{aligned}$$
(21)

Let \(\Delta r= \left[ \widetilde{R}_{x}, \widetilde{R}_{y}, \widetilde{R}_{z}\right]\) and its normalized vector be \(\Delta \widetilde{r}\); then, the final angular velocity command for the robot is \(R_{\nu }=\delta \Delta \widetilde{r}\).

Finally, the calculated velocities are transmitted to the robot to realize real-time changes in the robot pose. Although the path consists of discrete points, consistent overall motion behavior is maintained by normalizing the motion vectors.
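
The velocity-command computation of Eqs. (18)–(21) can be sketched as follows, assuming SciPy is available for the rotation-matrix-to-RPY conversion; the function name and the use of 4×4 poses as inputs are illustrative.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def velocity_command(T_cur, T_next, delta):
    """Velocity-mode command from Q_i to Q_{i+1} (both 4x4 base-frame poses).
    delta plays the role of 1/t and scales translation and rotation alike."""
    # Unit translation direction (Eq. (18)) scaled by delta (Eq. (19))
    dp = T_next[:3, 3] - T_cur[:3, 3]
    T_v = delta * dp / np.linalg.norm(dp)
    # Relative rotation Delta R = R_{i+1} R_i^{-1} (Eq. (20)) converted to RPY (Eq. (21))
    dR = T_next[:3, :3] @ T_cur[:3, :3].T
    rpy = Rotation.from_matrix(dR).as_euler('xyz')   # [roll, pitch, yaw]
    norm = np.linalg.norm(rpy)
    R_v = delta * rpy / norm if norm > 1e-9 else np.zeros(3)
    return T_v, R_v
```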

3.2 Continuous tracking of multi-weld seams

Because our welding system needs to be deployed at various worksites and the placement pose is not fixed, it is necessary to guide the robot to the initial point of each weld seam. As introduced in Sect. 1, in an unstructured workspace and for multi-seam welding, the initial position of every seam must be located. Fortunately, the CAD model of the workpiece provides reliable prior information. Using the CAD model and a 3D camera, the iterative closest point (ICP) algorithm [52] is applied to obtain the pose of the workpiece \(^{CAD}T_{C}\) in the camera coordinate system.

Table 1 Multiple-seam welding: global parameters
Fig. 9
figure 9

Dexterous welding of multi-segment seams by adjusting working parameters \(\alpha\), \(\beta\), and d. a A working posture; b collision near the corner along the working posture (a); c adjusting the posture parameters \(\alpha\), \(\beta\) midway; d completing welding near the corner without collision

3.2.1 Global working parameters

As shown in Fig. 9a–b, the robot may collide with the workpiece near a corner. To accurately locate the initial point of a seam, we define a direction vector \(\widetilde{\Gamma }\) to guide the robot in scanning the seam. The parameters used for multi-seam welding, defined in the coordinate system of the CAD model {M} and the robot base coordinate system {B}, are listed in Table 1.

3.2.2 Global positioning of multiple welds

Point cloud registration is used to position the workpiece, as illustrated in Fig. 10. First, we sample points from the CAD model (Fig. 10c) to form a high-density point cloud, as shown in Fig. 10d. Then, we downsample the high-density point cloud to a sparse point cloud, as shown in Fig. 10e.

Fig. 10
figure 10

Multi-seam positioning via point cloud registration. a Ship watertight patch plate assembly on site; b an experimental component in this paper; c corresponding CAD model; d high-density point cloud generated; e sparse point cloud obtained by voxel grid filtering; f point cloud image of workpiece, cluttered by other objects, captured by RGB-D camera; g clutter is filtered by setting the range along the Z and X directions; h sparse point cloud filtered by voxel grid filter and radius outlier filter; i point cloud registration of (e) and (h)

The point cloud from the RGB-D camera is obtained after deploying the robot. Owing to the laser-speckle imaging mechanism of the camera, there may be a large amount of point cloud clutter in the field of view, as shown in Fig. 10f. Point cloud filtering is applied to remove outliers and noise with the help of the filter module of the Point Cloud Library (PCL) [53], as illustrated in Fig. 10g. Correspondingly, we downsample the captured point cloud to a sparse cloud, as shown in Fig. 10h. Finally, the iterative closest point (ICP) algorithm is employed to match the two point clouds of Fig. 10e and h, as shown in Fig. 10i.
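
A sketch of this pipeline is given below using Open3D in place of the PCL modules cited above; the file names, voxel size, and distance thresholds are illustrative assumptions.

```python
import numpy as np
import open3d as o3d  # assumption: Open3D stands in for the PCL-based pipeline

def locate_workpiece(cad_cloud_file, captured_cloud_file, voxel=5.0, init=np.eye(4)):
    """Downsample the CAD-sampled cloud and the captured cloud, filter clutter,
    and register them with ICP (cf. Fig. 10)."""
    cad = o3d.io.read_point_cloud(cad_cloud_file)
    cam = o3d.io.read_point_cloud(captured_cloud_file)
    cad_ds = cad.voxel_down_sample(voxel_size=voxel)
    cam_ds = cam.voxel_down_sample(voxel_size=voxel)
    cam_ds, _ = cam_ds.remove_radius_outlier(nb_points=16, radius=3 * voxel)  # speckle clutter
    result = o3d.pipelines.registration.registration_icp(
        cam_ds, cad_ds, 2 * voxel, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation  # maps camera-frame points into the CAD model frame
```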

3.2.3 Initial posture for different docking locations

For the portable system, the placement position is not necessarily the same each time; it is therefore necessary to define a proper posture fixed with respect to the workpiece to ensure that the robot can image correctly when it reaches the initial point of the weld seam. For any seam, after each placement of the robot, the following relations hold:

$$\begin{aligned} {^M\mathrm{R}_{B}}= {^M\mathrm{R}_c}\cdot {^{t}R_{c}^\mathrm{T}} \cdot ^{B}R_{t}^\mathrm {T} \end{aligned}$$
(22)
$$\begin{aligned} {^M{R_t}} = {^M\mathrm{R}_B}\cdot {^{B}R_{t}} \end{aligned}$$
(23)

where B, c, M, and t denote the base coordinate system \(\{B\}\), camera coordinate system \(\{c\}\), CAD model coordinate system \(\{M\}\), and tool coordinate system \(\{t\}\), respectively; \(^tR_{c}\) is the attitude of the camera coordinate system in the tool coordinate system, obtained by hand-eye calibration; \(^BR_{t}\) can be queried from the robot controller; and \(^{M}{R}_{c}\) is obtained by point cloud registration using the 3D camera.

To determine an appropriate posture, we assign the posture of the robot at the initial point in advance such that the laser stripe is approximately in the middle of the image. We denote this robot attitude as \(^B\mathfrak {R}_{t}\) and substitute it into Eq. (22) to compute the attitude \(^{M}\mathfrak {R}_{B}\). From Eq. (23), we then obtain the required attitude, denoted \(^{M}\mathfrak {R}_{t}\). Because this attitude describes the end-effector relative to the CAD coordinate system, it is independent of the deployment location of the robot. For any placement,

$$\begin{aligned} {^{B}\mathrm {R}_{t}^{i}}=({^{M}\mathrm {R}_{B}^{i}})^{\mathrm {T}}\cdot {^{M}\mathfrak {R}_{t}} \end{aligned}$$
(24)

where superscript i denotes the i-th deployment. In this manner, the relative attitude of the end-effector with respect to the workpiece can be kept unchanged by simply sending \(^{B}\mathrm {R}_{t}^{i}\) to the robot controller after each deployment of the robot.
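
The attitude bookkeeping of Eqs. (22)–(24) reduces to a few matrix products; the sketch below uses 3×3 rotation matrices as numpy arrays, and the function and variable names are assumptions.

```python
import numpy as np

def cad_frame_tool_attitude(R_M_c, R_t_c, R_B_t_taught):
    """Eqs. (22)-(23): from the registered camera attitude R_M_c, the hand-eye attitude
    R_t_c, and the manually assigned tool attitude R_B_t_taught of one placement,
    compute the placement-independent tool attitude in the CAD model frame."""
    R_M_B = R_M_c @ R_t_c.T @ R_B_t_taught.T   # Eq. (22)
    return R_M_B @ R_B_t_taught                # Eq. (23): ^M R_t

def tool_attitude_for_placement(R_M_t, R_M_B_i):
    """Eq. (24): tool attitude in the base frame for the i-th deployment."""
    return R_M_B_i.T @ R_M_t
```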

Fig. 11
figure 11

Accurate alignment of the initial point. a Generating \(x_0\) via ICP-based positioning of the actual initial point, and driving the robot to \(x_{m,0}\). b Scanning the initial point via laser-vision sensor and driving the robot along the defined direction vector \(\widetilde{\Gamma }\) on both sides of \(x_0\). c Relocating the position of the initial point by observing the shape of the laser stripe: 1–scenario far from the initial point; 2–scenario approaching the initial point; 3–scenario reaching the initial point

3.2.4 Positioning of initial point

The automatic guidance to the initial point of the weld seam determines the success of welding. However, positioning the initial welding point remains problematic [54], and few studies have been reported. In this work, we use point cloud registration to roughly locate the multiple weld seams and guide the robot to a point near the initial point; a laser-vision sensor then scans the seam to locate the initial point accurately. As listed in Table 1, assuming that the initial point of a certain weld seam in the CAD model is \(\widetilde{x}_{0}\), applying the transformation in Eq. (6) yields the corresponding \(x_{0}\). Driving the robot directly to \(x_{0}\) may lead to collisions because of point cloud registration error. Therefore, we construct a virtual sphere with \(x_{0}\) as its center and r as its radius so that the actual position of the welding torch falls on the sphere, as shown in Fig. 11a. Denoting the initial alignment point on the sphere as \(x_{m,0}\), it is determined by:

$$\begin{aligned} \overrightarrow{Ox_{m,0}} = \overrightarrow{Ox_0} - r\cdot ^B\widehat{Z}_t \end{aligned}$$
(25)

where O is the origin of {B}, and \(^B\widehat{Z}_t\) is the z-axis of \(^{B}\mathrm {R}_{t}^{i}\) in Eq. (24).

After reaching \(x_{m,0}\), the weld seam is scanned along the pre-defined direction \(\widetilde{\Gamma }\), and the initial point of the weld seam is accurately approached by monitoring the shape of the laser stripe. As shown in Fig. 11c, the length of the left segment of the laser stripe is used as the judgment criterion: as the welding torch gradually approaches the initial point of the seam, the length of this segment gradually increases, and when it exceeds a certain threshold, the initial point is considered to have been reached.
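
A small sketch of the approach-point construction of Eq. (25) follows; the default radius value and function name are illustrative.

```python
import numpy as np

def approach_point(x0, R_B_t, r=20.0):
    """Back the torch off from the registered initial point x0 along the tool Z axis
    by the sphere radius r (mm) to absorb point cloud registration error (Eq. (25))."""
    z_t = R_B_t[:, 2]        # ^B Z_t: z-axis of the tool attitude from Eq. (24)
    return np.asarray(x0, dtype=float) - r * z_t
```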

4 Experiments

Three types of experiments were carried out to verify the feasibility of the proposed seam-tracking method. The first experiment verified the accuracy of our laser-vision sensor; the second confirmed the effect of single-weld seam tracking; and the last tested the performance of multi-weld seam tracking while considering changeable elements in unstructured environments. In the single-weld seam tracking tests, instead of a real torch with a welding wire as the end-effector, we adopted a displacement sensor with a rigid pen as the experimental torch (see Figs. 13a and 15a). Compared with a deformable welding wire, this made it easier to observe subtle collisions and interference between the end-effector and the workpiece.

After testing our tracking method for a single weld, we used a ship structure as a test artifact to verify the feasibility of the continuous tracking of multiple weld seams. The difference between multi-seam and single-seam tracking is that multi-seam tracking requires the global positioning of the artifact. In this section, the system performance in terms of global positioning in an unstructured environment is also verified. The experimental platform for the case of multi-seam tracking is shown in Fig. 17.

Fig. 12
figure 12

Precision evaluation of the 3D reconstructed model. a The measuring precision of the calibration method is verified by the square length in the calibration plate; b deviation range in all measurements; c a straight weld seam to be scanned; d weld seam point cloud obtained by 3D reconstruction after scanning

4.1 Precision evaluation

Based on the mathematical model in Eq. (5) in Subsect. 2.3, we can obtain the three-dimensional coordinates \(P_i=[X_c,Y_c,Z_c]^T\) of a point in the camera coordinate system from its two-dimensional pixel coordinates \(p_i=[u,v]^T\) in the image coordinate system. As shown in Fig. 12a, we kept the laser-vision sensor stationary and placed the calibration board at an appropriate position so that one edge of the chequerboard grid coincided with the laser stripe. We evaluated the accuracy of our model by measuring the sides of the squares. The side-length error of each square is very small, so its declared value (7 mm) can be used as the ground truth for the measurement. For a certain square in a row, let the coordinates of its left and right corner points be \(^{c}P_{L}=[x_{L},y_{L},z_{L}]^T\) and \(^{c}P_{R}=[x_{R},y_{R},z_{R}]^T\), respectively. We define the measurement error as:

$$\begin{aligned} e_{i}=\Vert P_{L}-P_{R}\Vert _2-7=\sqrt{{({x_{L}}-{x_{R}})}^2 +{({y_{L}}-{y_{R}})}^2+{({z_{L}}-{z_{R}})}^2} -7 \end{aligned}$$
(26)

We measured all 324 squares on the calibration board and computed their measurement errors, as shown in Fig. 12b. The results show that the average measurement error is \(\widetilde{E}\) = 0.0991 mm, and the overall deviation range can be controlled within \(0.0152 \sim 0.1659\) mm using the calibration method in Sect. 2.3.2. To further verify the 3D reconstruction capability of the designed sensor, we performed a 3D reconstruction of all pixel points on the entire laser stripe. As shown in Fig. 12c, the robot carried the sensor to scan a straight weld seam. The result, shown in Fig. 12d, clearly indicates that the reconstructed data points reflect the 3D shape of the weld seam.

Fig. 13
figure 13

Tracking a large curved complex weld seam without prior information. a Experimental scene; b tracking trajectory of the robot end-effector; c snapshots during the tracking process

4.2 Single-weld seam tracking

4.2.1 Seam tracking of a complex space curve

To verify the tracking method proposed in this work, a special part with a large curvature was selected for the experiment. As shown in Fig. 13a, when the initial point was given, the robot first moved toward it. Once it reached the initial point, the robot end-effector started an autonomous movement along the curved weld seam until it reached the given endpoint. A portion of the snapshots taken during the tracking process is shown in Fig. 13c. The entire tracking process did not require any manual intervention or prior information such as the curve equation of the weld seam, CAD model, or pre-taught trajectory. While the robot was tracking along the weld seam, we recorded the position of the robot end-effector in the backend program of the host computer and projected these position points onto the XOY plane of the robot base coordinate frame, as shown in Fig. 13b. It can be observed that the motion trajectory of the robot matches the profile of the part to be welded.

Fig. 14
figure 14

Stability demonstration of curved seam tracking. a Feature point trajectories on the image by our method. The color bar reflects the index of the frame where the feature points are located. b Feature point trajectories on the image by traditional methods [3, 4]. c Snapshots of the motion process of feature points by traditional methods of (b)

The forward direction is vital for tracking complex curved weld seams. To confirm the tracking stability, we recorded the pixel coordinates of all the feature points, as shown in Fig. 14a. The size of the image frame is \(1920\times 960\) pixels, and the red anchor is located at its center (960, 480). This shows that the feature points were maintained at the image center throughout the entire tracking process by our method. As illustrated in Fig. 7, for comparison, we set the forward direction to the tangent direction of the weld seam, and the result is shown in Fig. 14b–c. Noticeably, after a certain tracking time, the weld feature points escaped from the visible range of the image, which ultimately caused the tracking to fail.

Fig. 15
figure 15

Tracking accuracy tests under different working parameters. a The spatially curved weld seam to be tracked; b four types of tests with different working parameters; c motion effects of the four experiments. The tracking parameters are regulated as follows: 1–normal attitude, 2–welding pitch \(d=15\) mm, 3–travel angle \(\alpha = 15 ^{\circ }\), 4–working angle \(\beta = 15^{\circ }\)

4.2.2 Accuracy evaluation in single-weld seam tracking

The basic principle of accuracy evaluation for weld seam tracking is to compare the motion trajectory of the welding torch with the actual weld seam curve. However, it is difficult to precisely obtain the true coordinates of points on the weld seam. For this reason, most existing studies sample the weld seam points by teaching to obtain the true coordinates of multiple trajectory points [3, 4]. To compare the two trajectories more accurately, we first calibrated the pose between the workpiece coordinate system and the robot base coordinate system so that the weld seam points can be converted to the robot base coordinate frame. The established workpiece coordinate system is shown in Fig. 15a. The coordinates of each point on the weld seam were extracted from the CAD model of the workpiece, and the actual weld seam trajectory was obtained via coordinate transformation. Because we strictly controlled the manufacturing precision of the workpiece, the actual weld seam was consistent with the trajectory in the CAD model. A point on the weld seam expressed in the workpiece coordinate system as \(^{w}P_{i}\) can be converted to the base coordinate system by:

$$\begin{aligned} ^{b}P_{i}= ^{b}T_{w} \cdot ^{w}P_{i} \end{aligned}$$
(27)

By applying Eq. (27) to all the points on the weld seam, the trajectory of the actual weld in the robot base coordinate system can be obtained. Similarly, the normal attitude of each point can be calculated using the CAD model and used as the theoretical value.

Fig. 16
figure 16

Trajectory comparison for accuracy evaluation. a Robot motion trajectory and weld seam trajectory by the normal tracking attitude. b Tracking pose by the normal attitude. c Motion trajectory by regulating the welding pitch to \(d=15\) mm

The accuracy evaluation experiments were divided into four groups, as shown in Fig. 15b. When the working parameters were separately regulated, the end-effector moved in the expected posture throughout the entire process, as shown in Fig. 15c, which demonstrates that the proposed dual-sequence tracking strategy is effective. For accuracy evaluation, we compared the recorded robot trajectory with the actual weld seam trajectory. After tracking with the normal attitude, the motion trajectory of the robot and the actual weld seam are plotted in Fig. 16a; evidently, the two trajectories are highly coincident. Figure 16b shows that the tracking pose remained close to the theoretical pose during the tracking process. With the welding pitch added, Fig. 16c shows that the robot motion trajectory is almost parallel to the actual weld seam trajectory and fits it well. For any point \(\widetilde{P_{i}}(x, y, z)\) on the motion trajectory of the end-effector, the nearest point \(P_{i}(x, y, z)\) in the Euclidean sense is taken as its corresponding point on the weld seam, and the Euclidean distance \(e_{i}=\Vert \widetilde{P_{i}}-P_{i}\Vert _2\) between the two points is regarded as the trajectory tracking error. The error statistics for the four groups of experiments are listed in Table 2. It can be seen quantitatively that the proposed method keeps the mean error within 0.6 mm. Furthermore, the tracking test with the normal attitude has the highest accuracy, and adjusting the working parameters does not have a significant influence on the tracking accuracy.
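
The nearest-point error metric used here can be computed as sketched below, assuming the recorded end-effector trajectory and the reference seam trajectory are available as N×3 arrays; the use of a k-d tree is an implementation choice, not taken from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def tracking_errors(robot_traj, seam_points):
    """For every recorded end-effector point, the error e_i is the distance to the
    nearest point of the reference weld seam trajectory (both in the base frame)."""
    tree = cKDTree(np.asarray(seam_points, dtype=float))
    e, _ = tree.query(np.asarray(robot_traj, dtype=float))
    return float(e.mean()), float(e.max())
```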

Table 2 Trajectory error of experiments in Fig. 15b
Fig. 17
figure 17

Experimental setup for multi-seam tracking: 1–host computer, 2–teaching pendant, 3–touch screen, 4–robot controller, 5–background interference, 6–fluorescent lamp, 7–workpiece, 8–laser-vision sensor, 9–welding torch, 10–RGB-D camera. The red dashed lines denote the weld seams to be tracked

Fig. 18
figure 18

Global positioning experiments under different placements. a and b are different placement poses. The bottom e and g are corresponding point clouds obtained by the RGB-D camera. c and d are the corresponding posture when reaching the assessment point. f and h are the global positioning results for the placements of (b) and (d), respectively

4.3 Multi-weld seam tracking

It is known that illumination is the main factor affecting image quality and, in turn, the positioning performance. We therefore tested the global positioning performance of the system in unstructured environments with different illumination conditions, namely indoor and outdoor environments on a sunny summer day between 13:00 and 18:00. An adjustable light source was used to generate different lighting conditions in the indoor environment. An easily measurable assessment point on the structural part was used as a reference point in place of the initial point for testing, as shown in Fig. 17. As illustrated in Fig. 18, to evaluate the global positioning performance of the portable system, we positioned the workpiece at different placements and verified the accuracy of reaching a specified point and the retention of the end pose.

Fig. 19
figure 19

Global positioning performance under different illumination conditions

As shown in Fig. 18, even at different placement positions, the posture of the robot end-effector when reaching the assessment point is largely the same, which verifies the feasibility of our initial point positioning method.

Fig. 20
figure 20

Motion trajectory of the robot end-effector for tracking multiple weld seams. In the experiment, we set \(r=20, \widetilde{\Gamma }=[0,-100,0]\)

Under a given lighting condition, after the robot end-effector reached the assessment point in the desired posture, a vernier caliper was used to measure the distance between the end of the torch and the reference point; this distance is regarded as the global positioning error. We then maintained the lighting condition, changed the placement position of the robot, and measured the positioning result, as shown in Fig. 19. This experiment was repeated 20 times, and the positioning errors were recorded along with the illumination conditions. We then moved the system to an outdoor environment and performed the same procedure. As shown in Fig. 19, the first group of data is the statistical result of 20 positioning errors under 315 LX illumination, and the results for 1789−81400 LX illumination are from the outdoor tests. It can be seen that even under strong sunlight, the positioning error is within 10−25 mm, indicating that even when the robot is placed at different sites, our method can guide it to the vicinity of the initial point of the weld seam. We also conducted a multi-seam tracking test on an experimental structure in an outdoor environment. The results show that the robot could track the multi-segment weld seam smoothly without human intervention. The recorded tracking trajectory is shown in Fig. 20; the welding trajectory of the robot fits the actual trajectory very well.

Overall, for spatial weld seams, the proposed algorithm offers better flexibility than the traditional teach-and-playback working mode. Because of the light weight of the collaborative manipulator, the system is easy to deploy. Through human-machine collaboration, the imaging posture of the 3D camera can easily be adjusted to globally position the multiple weld seams of a workpiece, even when it is placed at a different position. We first performed experimental tests on a single weld seam under various conditions and then performed autonomous welding tests on multiple seams; both yielded satisfactory results.

5 Discussion

Although certain progress has been made in this work, some technical limitations remain and need to be considered in future research. Four aspects are discussed below.

  1.

    Application scenario: Although a 6-DOF manipulator was used in the experimental setup, the proposed tracking method could also be applied to a welding cart with fewer DOFs. One potential application is mounting a 6-DOF cooperative manipulator on a mobile platform for more flexible welding tasks in unstructured environments.

  2.

    Noise interference: Welding arc light can potentially interfere with feature point recognition. Because the effectiveness of the tracking method depends on the real-time detection accuracy of the weld seam, a large amount of noise such as arc light and spatter during welding may affect feature point recognition. This problem needs to be addressed in future studies.

  3.

    Joint adaptability: In this work, the position and orientation are calculated using three feature points, which we verified experimentally for the lap joint, butt joint with V-groove, and T joint. However, different types of weld joints produce different laser stripe appearances. Whether other types of weld seams, such as U-shaped weld joints or discontinuous weld seams whose laser stripes are broken at the weld seam edges, can also be handled in this way is a subject for further study.

  4.

    Control unpredictability: In this study, a cubic polynomial was used to fit the path points and generate the direction of the weld seam. The fitting order must be adjusted to the actual situation; empirically, a 3rd- or 4th-order fit is sufficient for most tasks, whereas a 5th-order fit is prone to the Runge phenomenon, which may cause unstable robot motion. In addition, the parameter that controls the number of fitting points should be set within \(\left[ 70,100 \right]\); smaller values are also prone to causing unstable motion.

6 Conclusion

This study presented an autonomous tracking method for multi-segment spatial weld seams, and the experiments revealed good tracking performance. Based on visual guidance, a dual-sequence tracking mechanism was proposed, which effectively solves the problem that traditional methods cannot flexibly adjust important welding parameters such as the working and travel angles. Autonomous tracking of multi-segment weld seams is realized by using point cloud registration for global workpiece positioning. By building the system around a collaborative robot to form a portable automatic welding platform, our research better addresses the problem of autonomous tracking of spatial weld seams in unstructured environments and facilitates fully autonomous robotic welding.