1 Introduction

Plasma arc additive manufacturing (PAAM) is a promising additive manufacturing process with high resource efficiency and low device cost compared with conventional machining processes. A wide range of materials, including titanium alloys, steels, and aluminum alloys, can be used in PAAM processes [1, 2]. PAAM is characterized by a high deposition rate for large-scale components. During the deposition of mid-complexity components, the thermal masses of different parts of the component change with the geometry, which affects the thermal behavior of the weld pool. The weld pool size therefore typically does not remain constant during deposition, potentially resulting in defects in the components [3,4,5]. Some researchers have attempted to use path optimization methods to compensate for the deposition defects caused by thermal mass changes. For example, a high-quality T-crossing component can be deposited by increasing the deposition distance to the intersecting area [6, 7]. Li et al. [8] proposed a new path strategy called end lateral extension (ELE) for additive manufacturing, while Davis and Shin [9] increased the wire feeding speed when depositing around an intersection to improve the deposition quality. However, these methods adjust certain parameters offline rather than maintaining a stable weld pool size in real time. Real-time measurement of the weld pool size is highly beneficial because this information can be used as a feedback signal for deposition control. Consequently, the weld pool size can be stabilized, thereby improving the geometric accuracy of the deposited components and preventing potential defects.

Studies have been previously conducted on the monitoring of the weld pool size. Researchers have used additional light sources such as LED lights [10] or laser dot arrays [11] to illuminate the molten area and obtain the weld pool size by analyzing the reflected images. Dual cameras have also been used to obtain the pool width [12], and an infrared camera has been used to obtain the temperature distribution on the metal surface [13, 14]. The light emitted from the plasma beam has a high intensity over the entire waveband; therefore, it is difficult for the camera to record the LED light. Laser dot array systems or binocular vision systems require multiple cameras, which means that errors accumulate when each camera is calibrated. An infrared camera requires an estimate of the reflectivity of the weld pool surface, which changes with the pool temperature during deposition. Because of these factors, existing methods potentially cause measurement errors in weld pool monitoring, resulting in a low feedback control accuracy. Other researchers have focused on simpler and more applicable monitoring systems. Setting a pixel value threshold for image segmentation can assist in the extraction of the boundary of a steel weld pool [15,16,17]; however, this is not applicable when the contrast around the boundary is not evident [18]. A numerical simulation method was used to predict the size of a titanium alloy weld pool in laser welding [19] and powder bed fusion [20]. The weld pool size was controlled through calculations using physical models [21]. However, various types of materials are employed in additive manufacturing using plasma arcs, and these materials have different reflection and radiation characteristics during deposition, which limits the applicability of existing pool size measurement methods.

For the deposition of mid-complexity components, the weld pool width is limited by the widths of the previously-deposited layers, as in the T-crossing component shown in Fig. 1b. In this case, the weld pool size is primarily influenced by the pool length. Therefore, a novel method that is suitable for different types of materials is proposed in this study for the real-time measurement of the weld pool length in plasma arc additive manufacturing. Applying this method, the length of the weld pool was monitored and compared with reference data to determine the wire feeding speed that produced steady deposition and to ensure the proper component size during the plasma arc additive manufacturing process. The rest of this study is structured as follows. Section 2 describes the visual monitoring system, which comprises an endoscope used to capture close images of the weld pool at a high resolution. Section 3 introduces the algorithms for processing both the brightness and brightness gradients of the image pixels, which are used to detect the boundary between the solid and liquid metals at the rear of the weld pool. The visual system is then calibrated in Sect. 4 to realize the conversion between image and world coordinates. In Sect. 5, the measurement of the weld pool length based on the detected feature points is presented. Finally, the algorithms are integrated into software and tested by depositing different materials on the intersection components in Sect. 6. A comparison of the results revealed the optimal algorithm for extracting the length of the weld pool for different materials.

Fig. 1

Schematic diagram of the PAAM a monitoring system structure, b top view and front section of the weld pool

2 System overview

A schematic of the main components of the system is shown in Fig. 1a. The system includes a plasma arc torch, an endoscope, an XIRIS camera, and a metal blocking plate. The samples were sealed in a glovebox and moved using a 3-axis computer numerical control (CNC) system. The plasma power source, wire feeder, and computer were placed outside the glovebox and connected to the torch and camera by cables. The glovebox was filled with argon gas for global protection to prevent the oxidation of the deposited metal. When deposition began, the CNC motion system moved the torch along the substrate, and an image of the weld pool was collected using the endoscope and camera. The images were then transmitted to a computer, where the weld pool geometry was detected, and the processed results were displayed on the monitoring screen.

The main technical specifications of the XIRIS camera are listed in Table 1. The dynamic range of the chosen camera was sufficiently wide to record the entire plasma weld pool and prevent over- or under-exposure in the image. The emission spectrum of the plasma-arc light has several peaks within the wavelength range of 300–600 nm [22], whereas the selected camera has a high spectral response within 400–650 nm. Therefore, a hard-coated bandpass interference filter with a center wavelength of 550 nm and a bandwidth of 10 nm was used to block extraneous light and record the image of the weld pool. To obtain a better image of the weld pool, a special vision system that includes a light-blocking plate was used. The endoscope was connected coaxially to the camera and changed the light path by 90°, allowing the camera to be placed beside the substrate with a clear view of the back of the weld pool at a vertical capturing angle. This protected the camera from damage caused by the heat generated during deposition. To prevent the plasma arc light from affecting the image quality, a blocking plate was fixed between the endoscope and the torch. By adding the blocking plate and adjusting the position of the endoscope, the effect of the specular reflection of the arc light was reduced, thereby allowing the boundary of the weld pool to be determined, as shown in Fig. 2. The original, blocked titanium, and blocked steel images are shown in Figs. 2a–c, respectively. Figure 2d shows the changes in the brightness of the pixels along the weld pool centerline in Figs. 2a–c. Figure 2a records the reflection of the arc light in the middle of and behind the pool, which hinders the recognition of the boundary in the image. By adding the blocking plate, the influence of the reflected light is prevented, as shown in Fig. 2b. In addition, Fig. 2d shows that the curvature of the curve at the boundary position in Fig. 2b is larger than that in Fig. 2a. Consequently, the change in brightness at the boundary is larger and more easily detected when the blocking plate is used. Figure 2c shows an extension of the application of the plate to steel, and the result shows a more evident boundary than that of the titanium alloy.

Table 1 Main technical specifications of the selected camera
Fig. 2

Image improvement with the blocking plate a original deposition image, b titanium alloy deposition with blocking plate, c steel deposition with blocking plate, d pixel brightness changes along the centerline of the weld pool in a–c

3 Image processing method

3.1 Problem analysis

To achieve the best detection performance while minimizing the computational cost, the optimal extraction algorithm must be obtained for different materials. For certain metals, such as steel, the boundary appears clearly in the image, indicating that the pixel brightness changes abruptly from the weld pool center to the edge, as shown in Fig. 2c. This is due to the rough metal surface formed after the liquid solidifies. For other metals, such as titanium alloys, the surface brightness slowly decreases from the weld pool to the solid metal without forming a clear brightness contrast, as shown in Fig. 2b. A series of tests was conducted to obtain images of titanium alloy deposition under different torch speeds, currents, and wire feeding speeds, as shown in Fig. 3. These images show that although the weld pool maintains its shape, its size changes; in addition, the brightness of the pixels along the pool length direction varies following a certain trend. Therefore, the weld pool boundary detection method proposed in this study includes brightness-based and gradient-based algorithms, corresponding to metals with different surface smoothness during solidification (steel and titanium alloy, respectively). This process is illustrated in Fig. 4.

Fig. 3

a–c Changing torch speed with the same current of 180 A and wire feeding speed of 1 800 mm/min, d–f changing current with the same torch speed of 4.5 mm/s and wire feeding speed of 1 800 mm/min, g–i changing wire feeding speed with the same current of 180 A and torch speed of 4.5 mm/s

Fig. 4

Flow chart of the image processing method for weld pool monitoring

3.2 Feature point detection

3.2.1 Determination of the ROI and CDL

Image processing was used to detect the boundary of the weld pool in the images captured by the monitoring system described above. The boundaries of both the steel and titanium weld pools appear parabolic in the images; therefore, the intersection between the central detection line (CDL) and the boundary can be used as the feature point for determining the size of the weld pool. Image processing starts with the determination of the region of interest (ROI) and the CDL. Subsequently, the pixels on line segments parallel to the CDL, called detection lines (DLs), are extracted for data processing. The useful information is contained only in the circular area of the image captured by the endoscope; the image is therefore cropped to a rectangular ROI to reduce the computational cost. Finally, the image coordinate system is set up within the image, as shown in Fig. 5.

Fig. 5

Setting of the ROI, the CDL and the DL in the image coordinate

The upper left corner was taken as the origin \(o\) of the coordinate system. The line along the width was taken as the \(x\) axis, whereas that along the height was taken as the \(y\) axis. Two points in the image were selected in advance and defined as the upper left corner \({P}_{\mathrm{R}1}\) and lower right corner \({P}_{\mathrm{R}3}\) of the ROI rectangle. The four corners of the ROI were defined as \({P}_{\mathrm{R}1}\left({x}_{\mathrm{R}1},{y}_{\mathrm{R}1}\right)\), \({P}_{\mathrm{R}2}\left({x}_{\mathrm{R}2},{y}_{\mathrm{R}1}\right)\), \({P}_{\mathrm{R}3}\left({x}_{\mathrm{R}2},{y}_{\mathrm{R}2}\right)\) and \({P}_{\mathrm{R}4}\left({x}_{\mathrm{R}1},{y}_{\mathrm{R}2}\right)\). It should be noted that the ROI should include only the circular area in the image, as indicated by the red rectangle in Fig. 5. Moreover, the boundary profile is parabolic, which means that on the centerline of the deposition layer, the boundary is farthest from the front of the weld pool. Therefore, a CDL can be drawn along the deposition layer to detect the feature point, i.e., the farthest position of the boundary. Because the positions of the camera, endoscope, blocking plate, and torch were relatively fixed, the position of the CDL could be determined in advance and fixed along the deposition layer. Two points, \({P}_{\mathrm{c}1}\left({x}_{\mathrm{c}1},{y}_{\mathrm{c}1}\right)\) and \({P}_{\mathrm{c}2}\left({x}_{\mathrm{c}2},{y}_{\mathrm{c}2}\right)\), were then selected within the ROI and used as the start and end points of the CDL, respectively. The CDL should be as close as possible to the centerline of the deposition layer, as shown by the blue line segment in Fig. 5. The expression of the CDL in image coordinates is

$$y={k}_{\mathrm{c}}x+{b}_{\mathrm{c}}=\frac{{y}_{\mathrm{c}1}-{y}_{\mathrm{c}2}}{{x}_{\mathrm{c}1}-{x}_{\mathrm{c}2}}x+\frac{{y}_{\mathrm{c}2}{x}_{\mathrm{c}1}-{y}_{\mathrm{c}1}{x}_{\mathrm{c}2}}{{x}_{\mathrm{c}1}-{x}_{\mathrm{c}2}},$$
(1)

where \({k}_{\mathrm{c}}\) is the slope of the CDL and \({b}_{\mathrm{c}}\) is the bias. Consequently, one feature point can be obtained from the CDL. To avoid the low accuracy caused by single-point extraction errors, additional DLs were drawn to obtain more feature points. This is achieved by drawing \(2N\) line segments parallel to the CDL, equally distributed on both sides of it with \(N\) on each side, as indicated by the green line segments in Fig. 5. The distance between adjacent detection lines was fixed at \(d\). The endpoints of these line segments were obtained from the CDL endpoints by adding the increments \(\mathrm{d}x\) and \(\mathrm{d}y\). For the nth DL, the starting point \({P}_{\mathrm{L}1}^{n}\) and ending point \({P}_{\mathrm{L}2}^{n}\) can be expressed as

$${P}_{\mathrm{L}1}^{n}=\left({x}_{\mathrm{c}1}+n\cdot \mathrm{d}x,{y}_{\mathrm{c}1}+n\cdot \mathrm{d}y\right),$$
(2)
$${P}_{\mathrm{L}2}^{n}=\left({x}_{\mathrm{c}2}+n\cdot \mathrm{d}x,{y}_{\mathrm{c}2}+n\cdot \mathrm{d}y\right),$$
(3)

where \(n\in \left[-N,N\right]\) and {\({P}_{\mathrm{L}1}^{n},{P}_{\mathrm{L}2}^{n}\)} are the endpoints of the nth detection line. When \(n<0\), the detection line is on the left side of the CDL; when \(n>0\), it is on the right side; and \(n=0\) refers to the CDL itself. The coordinates of the endpoints of each DL can be obtained as shown below.

As shown in Fig. 6, the distance between the CDL and the nth DL is \(nd\), and the corresponding coordinate increments are \(n\cdot \mathrm{d}x\) and \(n\cdot \mathrm{d}y\), respectively. By drawing dashed lines parallel to the \(x\) and \(y\) axes, two similar triangles, \(\Delta {P}_{\mathrm{c}1}{{O}_{1}P}_{\mathrm{L}1}^{n}\) and \(\Delta {P}_{\mathrm{c}2}{O}_{2}{P}_{\mathrm{c}1},\) can be formed. For similar triangles, the ratios of corresponding sides are equal.

Fig. 6

Geometric relationship between CDL and DL

$$\frac{n\cdot \mathrm{d}x}{n\cdot \mathrm{d}y}=\frac{{y}_{\mathrm{c}1}-{y}_{\mathrm{c}2}}{{x}_{\mathrm{c}1}-{x}_{\mathrm{c}2}}={k}_{\mathrm{c}}.$$
(4)

According to the Pythagorean Theorem, the following equation holds

$$ \left( {n \cdot {\text{d}}x} \right)^{2} + \left( {n \cdot {\text{d}}y} \right)^{2} = \left( {n \cdot d} \right)^{2} . $$
(5)

By combining Eqs. (4) and (5), the increment in the endpoints can be obtained as follows

$$\left\{\begin{array}{c}\text{d}x=\frac{{k}_{\mathrm{c}}\cdot d}{\sqrt{{{k}_{\mathrm{c}}}^{2}+1}},\\ \text{d}y=\frac{d}{\sqrt{{{k}_{\mathrm{c}}}^{2}+1}}.\end{array}\right.$$
(6)

Consequently, when \(n\in \left[-N,N\right]\), the general expression for the nth DL is

$$ y = k_{{\text{c}}} x + b_{{\text{c}}} + n \cdot {\text{d}}y. $$
(7)
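As a concrete illustration, Eqs. (1)–(3) and (6) can be combined into a short routine that generates all DL endpoints from the two CDL points. The following is a minimal Python sketch; the function and variable names are illustrative and not taken from the authors' software.

```python
import math

def detection_line_endpoints(p_c1, p_c2, d, N):
    """Generate the endpoints of the 2N+1 detection lines
    (the CDL itself plus N parallel lines on each side)."""
    (x1, y1), (x2, y2) = p_c1, p_c2
    k_c = (y1 - y2) / (x1 - x2)             # CDL slope, Eq. (1)
    dx = k_c * d / math.sqrt(k_c**2 + 1)    # endpoint increments, Eq. (6)
    dy = d / math.sqrt(k_c**2 + 1)
    lines = {}
    for n in range(-N, N + 1):              # n = 0 is the CDL itself
        lines[n] = ((x1 + n * dx, y1 + n * dy),   # Eq. (2)
                    (x2 + n * dx, y2 + n * dy))   # Eq. (3)
    return lines
```

For example, with a CDL from (0, 0) to (4, 3) (slope 0.75) and a spacing of \(d=1\), the first DL on one side starts at (0.6, 0.8).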

3.2.2 Pixel extraction from the DL

Each DL intersects the boundary at a point that can be considered a feature point. To obtain the feature point from the DL, it is necessary to first extract the pixels falling on the DL. Because the pixels are discrete rather than continuous points, the pixels whose distance from the DL was less than \({L}_{\text{d}}\) were defined as points falling on the DL. Therefore, by traversing all the pixels in the ROI, the pixels falling on the DL can be extracted. Taking \(P(x,y)\) as a pixel point at a certain position \((x,y)\) within the ROI, \({P}_\text{DL}^{n}\) is a set containing all the pixel points that fall on the nth DL.

$${P}_{\mathrm{DL}}^{n}=\left\{P(x,y)|\frac{\left|{k}_{\mathrm{c}}x+{b}_{\mathrm{c}}+n\cdot \mathrm{d}y-y\right|}{\sqrt{{{k}_{\mathrm{c}}}^{2}+1}}<{L}_{\mathrm{d}}\right\},$$
(8)

where \(x\in [{x}_{\mathrm{R}1},{x}_{\mathrm{R}2}]\), \(y\in [{y}_{\mathrm{R}1},{y}_{\mathrm{R}2}]\), and \(n\in \left[-N,N\right]\). There are \((2N+1)\) such sets \({P}_{\mathrm{DL}}^{n}\), one for each DL. Consequently, the values of the pixel points from every \({P}_{\mathrm{DL}}^{n}\) can be processed to obtain the feature points.
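Eq. (8) amounts to a point-to-line distance test over the ROI. A minimal sketch follows (with hypothetical names); a practical implementation would vectorize this with NumPy or sample pixels directly along each line rather than scanning the whole ROI.

```python
import math

def pixels_on_dl(width, height, k_c, b_c, n, dy, L_d):
    """Collect the ROI pixels whose perpendicular distance to the nth
    detection line y = k_c*x + b_c + n*dy is below L_d, per Eq. (8).
    width and height describe the ROI size in pixels."""
    denom = math.sqrt(k_c**2 + 1)
    pts = []
    for y in range(height):
        for x in range(width):
            if abs(k_c * x + b_c + n * dy - y) / denom < L_d:
                pts.append((x, y))
    return pts
```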

3.2.3 Brightness-based algorithm

An image of the deposited steel is shown in Fig. 7a. The liquid and solid metals were found to reflect and radiate different amounts of light from their surfaces, resulting in a change in brightness at the boundary in the image.

Fig. 7

Feature point detection with the brightness-based algorithm a image of the steel deposition, b smoothed brightness curve obtained via the brightness-based algorithm

In this section, a brightness-based algorithm is introduced to detect the steel pool boundary. According to the method described in Sect. 3.2.1, a group of DLs was determined along the deposited layer, and the pixel brightness was obtained. One DL was selected as an example for further examination, as indicated by the blue line in Fig. 7a. The brightness of the pixels in \({P}_{\mathrm{DL}}^{n}\) can be represented by \(\left\{{B}^{n}\left(1\right),{B}^{n}\left(2\right),\cdots ,{B}^{n}\left({M}_{n}\right)\right\},\) where \({M}_{n}\) is the number of pixel points in \({P}_{\mathrm{DL}}^{n}\). For \(k\in \{1,2,\cdots ,{M}_{n}\}\), \({B}^{n}(k)\) is a discrete function that describes the change in brightness along the nth DL. Because \({B}^{n}(k)\) contains signal noise produced by the camera, a smoothing function, \({f}_{\mathrm{s}}\left(x\right),\) is used to reduce it. The smoothing function is defined as follows

$${f}_{\mathrm{s}}(A\left(k\right),L)=\left\{\begin{array}{ll}\frac{1}{2a+1}\sum_{i=k-a}^{k+a}A(i),& a<k\le L-a,\\ \frac{1}{2k-1}\sum_{i=1}^{2k-1}A(i),& k\le a,\\ \frac{1}{2(L-k)+1}\sum_{i=2k-L}^{L}A(i),& k>L-a,\end{array}\right.$$
(9)

where \(A(i)\) is a discrete array containing \(L\) elements and \((2a+1)\) is the number of adjacent values used for averaging. Taking \({B}_\text{a}^{n}\left(k\right)={f}_{\mathrm{s}}\left({B}^{n}\left(k\right),{M}_{n}\right),\) the curve of the filtered brightness function is as shown in Fig. 7b, where the first peak from the back to the front represents the change in brightness at the boundary. This implies that the peak position can be regarded as a feature point. Therefore, a peak search algorithm was proposed; its flowchart is shown in Fig. 8. An array \(W=\left\{W\left(1\right), W\left(2\right), \cdots ,W\left({w}_{\mathrm{d}}\right)\right\}\) of width \({w}_\text{d}\) slides over \({B}_{\mathrm{a}}^{n}\left(k\right)\) with an index \(j\). In every iteration, \(W\) stores the \({B}_{\mathrm{a}}^{n}\left(k\right)\) values from \(j-{w}_{\mathrm{d}}\) to \(j\). Initially, \(j\) is equal to \({M}_{n}\), which means that \(W\) slides from the end to the beginning of \({B}_{\mathrm{a}}^{n}\left(k\right)\). In each iteration, the maximum value of \(W\), \(\mathrm{Max}\left\{W\right\}=W\left(i\right),\) was obtained, where \(i\) is the position of the maximum within \(W\); \({B}_{\mathrm{max}}\) denotes the global maximum over all iterations. When \(\mathrm{Max}\left\{W\right\}\) in an iteration was larger than \({B}_{\mathrm{max}}\), a new peak was found; \({B}_{\mathrm{max}}\) was then updated to \(\mathrm{Max}\left\{W\right\}\) and \(j\) to \(j-{w}_{\mathrm{d}}+i\) before starting the next iteration. The iterations stopped when \({B}_{\mathrm{max}}\) no longer changed, indicating that the peak position had been found, and the final \(j\) was used as the output. Consequently, the pixel point corresponding to the jth element in \({B}_{\mathrm{a}}^{n}\left(k\right)\) was taken as the feature point.
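The smoothing of Eq. (9) and the peak search of Fig. 8 can be sketched as follows. This is a simplified re-implementation with our own naming, not the authors' code; the window bookkeeping follows the description above, and the output index is 1-based to match the text.

```python
def smooth(A, a):
    """Moving average in the spirit of Eq. (9): half-width a,
    with symmetric windows that shrink near the array ends."""
    L = len(A)
    out = []
    for k in range(L):
        r = min(a, k, L - 1 - k)           # largest symmetric half-width
        out.append(sum(A[k - r:k + r + 1]) / (2 * r + 1))
    return out

def find_first_peak(B_a, w_d):
    """Sliding-window peak search (Fig. 8): a window of width w_d slides
    from the end of the smoothed brightness curve toward the front and
    repeatedly jumps to the window maximum until it stops growing."""
    j = len(B_a)                           # 1-based index of window end
    B_max = float("-inf")
    while True:
        lo = max(j - w_d, 0)
        window = B_a[lo:j]
        i = max(range(len(window)), key=window.__getitem__)
        if window[i] > B_max:              # a higher peak was found
            B_max = window[i]
            j = lo + i + 1                 # move j onto the maximum
        else:
            return j                       # B_max stopped changing
```

For instance, `find_first_peak([1, 2, 3, 7, 5, 4, 3, 2, 1], 3)` returns 4, the 1-based position of the peak value 7, which is the first peak reached when sliding from the back.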

Fig. 8

Flow charts of the peak-search algorithm

3.2.4 Gradient-based algorithm

For titanium alloys, the brightness of the weld pool is similar to that of the solid material. Figure 9a shows an example of the deposition of Ti6Al4V, and Fig. 9b shows its brightness curve. No peak can be observed at the feature point position in the brightness curve along the DL. Instead, the brightness remains at a high level in the molten area before slowly decreasing at the boundary position. This implies that the brightness-based algorithm does not perform well for titanium alloys.

Fig. 9

Feature point detection with the gradient-based algorithm a ROI of the titanium alloy deposition image, b smoothed brightness curve obtained via the brightness-based algorithm

Fig. 10

Smoothed gradient curve obtained via the gradient-based algorithm

In this section, a gradient-based algorithm is proposed for detecting the pool boundaries of titanium alloys. First, the pixel gradient \({G}^{n}\left(k\right)\) was defined to describe the changes in the slope of the brightness

$$ G^{n} \left( k \right) = B_{{\text{a}}}^{n} \left( {k + b} \right) - B_{{\text{a}}}^{n} \left( k \right), $$
(10)

where \(b\) is the interval between the two pixels and \(k\in [1,{M}_{n}-b]\). Additionally, \({M}_{n}-b\) is the number of elements in \({G}^{n}\left(k\right)\), and \({G}_{\mathrm{a}}^{n}\left(k\right)={f}_{\mathrm{s}}({G}^{n}\left(k\right),{M}_{n}-b)\) is used to smooth the gradient. The smoothed curve is shown in Fig. 10.

The first trough from the back to the front of the gradient curve corresponds to the position where the brightness changes sharply. Therefore, the position of the trough can be used to locate the feature points. To this end, a trough search algorithm was proposed, similar to the peak search algorithm described in Sect. 3.2.3. Its flowchart is shown in Fig. 11. For the trough search algorithm, a \({w}_{\mathrm{d}}\)-wide window \(W\) with an index \(j\) slides over \({G}_{\mathrm{a}}^{n}\left(k\right)\). In every iteration, \(W\) stores the \({G}_{\mathrm{a}}^{n}\) values from \(j-{w}_{\mathrm{d}}\) to \(j\). Initially, \(j\) is equal to \({M}_{n}-b\). Moreover, \(\mathrm{Min}\left\{W\right\}=W(i)\) represents the minimum value in each iteration, and \({G}_{\mathrm{min}}\) is the global minimum of the gradient. When \(\mathrm{Min}\left\{W\right\}\) in an iteration was smaller than \({G}_{\mathrm{min}}\), \({G}_{\mathrm{min}}\) was updated to \(\mathrm{Min}\left\{W\right\}\) and \(j\) to \(j-{w}_{\mathrm{d}}+i\) before starting the next iteration. The iterations stopped when \({G}_{\mathrm{min}}\) no longer changed, and the final \(j\) was used as the output. Consequently, the pixel point corresponding to the jth element in \({G}_{\mathrm{a}}^{n}\left(k\right)\) was taken as a feature point.
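Because the trough search mirrors the peak search with the maximum replaced by the minimum, Eq. (10) and the procedure above can be sketched as follows (names are illustrative; indices in the output are 1-based to match the text).

```python
def gradient(B_a, b):
    """Pixel gradient along a DL, Eq. (10): G(k) = B_a(k+b) - B_a(k)."""
    return [B_a[k + b] - B_a[k] for k in range(len(B_a) - b)]

def find_first_trough(G_a, w_d):
    """Sliding-window trough search (Fig. 11): the mirror image of the
    peak search, tracking the window minimum instead of the maximum."""
    j = len(G_a)                           # 1-based index of window end
    G_min = float("inf")
    while True:
        lo = max(j - w_d, 0)
        window = G_a[lo:j]
        i = min(range(len(window)), key=window.__getitem__)
        if window[i] < G_min:              # a deeper trough was found
            G_min = window[i]
            j = lo + i + 1                 # move j onto the minimum
        else:
            return j                       # G_min stopped changing
```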

Fig. 11

Flow charts of the trough-search algorithm

4 Calibration

The feature points were detected in the image coordinates; therefore, they have to be converted into world coordinates to obtain the real size of the weld pool [23]. For camera-based monitoring systems, the inherent characteristics of the lens and photosensitive element are described using the internal parameters that determine the distortion produced in the captured image. In addition, the relative position of the camera with respect to the weld pool is described using the external parameters that show the rotation and translation of the images. Consequently, a conversion formula can be used to describe the mapping relationship between the three-dimensional world coordinates and two-dimensional image coordinates

$$ \left[ {\begin{array}{*{20}c} x \\ y \\ 1 \\ \end{array} } \right] = s\left[ {\begin{array}{*{20}c} {f_{x} } & 0 & {c_{x} } \\ 0 & {f_{y} } & {c_{y} } \\ 0 & 0 & 1 \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {{\varvec{r}}_{1} } & {{\varvec{r}}_{2} } & { {\varvec{r}}_{3} } & {\varvec{t}} \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} X \\ Y \\ Z \\ 1 \\ \end{array} } \right], $$
(11)

where \(x\) and \(y\) are the coordinates of the point in the image. Additionally, \(s\) is the scale factor, \({f}_{x}\) and \({f}_{y}\) are the focal lengths along the two optical axes, and \({c}_{x}\) and \({c}_{y}\) are the displacements of the principal point away from the axes; together, these form the 3 × 3 internal parameter matrix. \({{\varvec{r}}}_{1}\), \({{\varvec{r}}}_{2}\), and \({{\varvec{r}}}_{3}\) are 3 × 1 vectors representing the rotational transformation, and \({\varvec{t}}\) is a 3 × 1 vector representing the translation; \({{\varvec{r}}}_{1}\), \({{\varvec{r}}}_{2}\), \({{\varvec{r}}}_{3}\), and \({\varvec{t}}\) form the 3 × 4 external parameter matrix. \(X\), \(Y\), and \(Z\) denote the world coordinates of the feature points. In this study, the length of the weld pool was considered as the distance between the feature point and the front of the weld pool. This implies that the size of the weld pool can be measured on a plane parallel to the layer surface; therefore, only two-dimensional world coordinates were considered. The conversion formula can be simplified by setting \(Z=0\)

$$\left[\begin{array}{c}x\\ y\\ 1\end{array}\right]=s\left[\begin{array}{ccc}{f}_{x}& 0& {c}_{x}\\ 0& {f}_{y}& {c}_{y}\\ 0& 0& 1\end{array}\right]\left[\begin{array}{ccc}{{\varvec{r}}}_{1}& {{\varvec{r}}}_{2}& {\varvec{t}}\end{array}\right]\left[\begin{array}{c}X\\ Y\\ 1\end{array}\right]={\varvec{H}}\left[\begin{array}{c}X\\ Y\\ 1\end{array}\right],$$
(12)

where \({\varvec{H}}\) is a 3 × 3 matrix (called the homography matrix) that combines all the parameters of the model. In this study, the method described in Ref. [24] was used for calibration. As shown in Fig. 12, a chessboard was placed on the deposition plane for calibration, and the corner points of the squares in the chessboard image were selected. Multiple corner points were averaged to reduce the influence of noise, and their coordinates were substituted into the formula to obtain a system of equations. Finally, \({\varvec{H}}\) was solved as follows

$$ {\varvec{H}} = \left[ {\begin{array}{*{20}c} {0.87} & {0.49} & {347.18} \\ { - 0.69} & {0.76} & {518.95} \\ { - 4.80\!\times\!10^{-5}} & {2.07\!\times\!10^{-4}} & 1 \\ \end{array} } \right]. $$
(13)
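For reference, a homography such as Eq. (13) can be estimated from chessboard-corner correspondences with a standard direct linear transform (DLT). The sketch below illustrates the general idea only; it is not necessarily the exact procedure of Ref. [24].

```python
import numpy as np

def estimate_homography(world_pts, image_pts):
    """Estimate the 3x3 homography H mapping world (X, Y) to image
    (x, y) from four or more point correspondences via the DLT."""
    A = []
    for (X, Y), (x, y) in zip(world_pts, image_pts):
        A.append([X, Y, 1, 0, 0, 0, -x * X, -x * Y, -x])
        A.append([0, 0, 0, X, Y, 1, -y * X, -y * Y, -y])
    # H (up to scale) is the right singular vector associated with
    # the smallest singular value of A
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]                     # normalize so that H[2,2] = 1
```

In practice, more than four averaged corners are used, and the least-squares nature of the SVD suppresses the corner-selection noise mentioned above.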
Fig. 12

Square corners are chosen from the chessboard for calibration

5 Weld pool length measurement

To measure the length of the weld pool, the positions of the feature points and front of the weld pool were obtained. Owing to the limitations of the blocking plate and gas shield, only a part of the weld pool could be recorded in the images. To solve this problem, the weld pool length, represented as \({P}_{\mathrm{L}}\), was divided into two portions using a reference line. The reference line is defined as a line segment that passes through the CDL starting point, is orthogonal to the CDL, and is fixed in the image coordinates. As shown in Fig. 13a, the first portion is the distance between the feature point and reference line, represented as \({P}_{\mathrm{L}1}\). The second portion is the distance between the reference line and front of the weld pool, represented as \({P}_{\mathrm{L}2}\).

Fig. 13

Schematic diagram of the weld pool length a two portions \({P}_{\mathrm{L}1}\) and \({P}_{\mathrm{L}2}\) of the weld pool length \({P}_{\mathrm{L}}\), b manual measurement of \({P}_{\mathrm{L}2}\)

To obtain \({P}_{\mathrm{L}1}\), the coordinates of the detected feature points and reference line were converted from image coordinates to world coordinates based on the conversion formula obtained during calibration

$$\left[\begin{array}{c}{X}_{n}\\ {Y}_{n}\\ 1\end{array}\right]={{\varvec{H}}}^{-1}\left[\begin{array}{c}{x}_{n}\\ {y}_{n}\\ 1\end{array}\right],$$
(14)

where \({X}_{n}\) and \({Y}_{n}\) are the world coordinates of the detected feature points; \({x}_{n}\) and \({y}_{n}\) are the image coordinates; and \({{\varvec{H}}}^{-1}\) is the inverse of \({\varvec{H}}\). The positions of the detected points and reference line before and after conversion are shown in Figs. 14 and 15, respectively. The reference line was converted into world coordinates with the formula \(Y={k}_{\mathrm{r}}X+{b}_{\mathrm{r}}\), where \({k}_{\mathrm{r}}\) and \({b}_{\mathrm{r}}\) are obtained from the definition of the reference line. Certain feature points may be incorrectly detected. To reduce the effect of these incorrect points on the measurement accuracy, an outlier elimination method was used [25]. First, the mean and standard deviation of \({Y}_{{n}}\) were obtained as follows

Fig. 14

Position of the detected feature points and the reference line in the image coordinate a steel, b titanium alloy

Fig. 15

Position of the detected feature points and the reference line in the world coordinate

$${\mu }_{y}=\frac{1}{2N+1}\sum_{n=-N}^{N}{Y}_{n},$$
(15)
$${\sigma }_{y}=\sqrt{\frac{1}{2N+1}\sum_{n=-N}^{N}{{(Y}_{n}-{\mu }_{y})}^{2}}.$$
(16)

Subsequently, the points whose \(Y\) coordinate lies within \([{\mu }_{y}-3{\sigma }_{y},{\mu }_{y}+3{\sigma }_{y}]\) were retained, and the other points were removed as outliers. The means of the \(X\) and \(Y\) coordinates of the remaining feature points were then obtained and denoted as \({X}_{\mathrm{m}}\) and \({Y}_{\mathrm{m}}\), respectively. Consequently, \({P}_{\mathrm{L}1}\) is equal to the distance between the point \(({X}_{\mathrm{m}}\),\({Y}_{\mathrm{m}})\) and the reference line.

$${P}_{\mathrm{L}1}=\frac{\left|{k}_{\mathrm{r}}{X}_{\mathrm{m}}+{b}_{\mathrm{r}}-{Y}_{\mathrm{m}}\right|}{\sqrt{{{k}_{\mathrm{r}}}^{2}+1}}.$$
(17)
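Eqs. (14)–(17) together form a short pipeline: coordinate conversion, 3σ outlier rejection, and a point-to-line distance. A minimal NumPy sketch is given below (with our own names, and with the homogeneous scale normalized explicitly, which the compact form of Eq. (14) leaves implicit).

```python
import numpy as np

def pool_length_p1(feature_pts_img, H, k_r, b_r):
    """Map detected feature points to world coordinates with the inverse
    homography (Eq. 14), reject 3-sigma outliers in Y (Eqs. 15-16), and
    return the distance from the mean point to the reference line
    Y = k_r*X + b_r (Eq. 17)."""
    H_inv = np.linalg.inv(H)
    pts = np.array([[x, y, 1.0] for x, y in feature_pts_img]).T
    world = H_inv @ pts                    # Eq. (14)
    world = world / world[2]               # normalize homogeneous scale
    X, Y = world[0], world[1]
    mu, sigma = Y.mean(), Y.std()          # Eqs. (15), (16)
    keep = np.abs(Y - mu) <= 3 * sigma     # 3-sigma outlier rejection
    X_m, Y_m = X[keep].mean(), Y[keep].mean()
    return abs(k_r * X_m + b_r - Y_m) / np.hypot(k_r, 1.0)   # Eq. (17)
```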

The reference line was fixed because the camera and torch positions were fixed. Although the weld pool front position changed slightly with different parameters, it stabilized under a specific set of fixed parameters, as listed in Table 2. The \({P}_{\mathrm{L}2}\) values were manually measured using the method shown in Fig. 13b. For the various groups of tests, the shared settings were a current of 180 A and a wire feeding speed of 1 800 mm/min. Table 2 shows that the position of the weld pool front was stable when a given set of parameters was used for deposition. Therefore, \({P}_{\mathrm{L}2}\) can be obtained beforehand through trials. The complete weld pool length can then be obtained as follows

Table 2 Value of \({P}_{\mathrm{L}2}\) under different parameters (mm)
$${P}_{\mathrm{L}}={P}_{\mathrm{L}1}+{P}_{\mathrm{L}2}.$$
(18)

6 Test and verification

Tests were conducted to verify the performance of the proposed weld pool length detection algorithm. The substrates were machined to form an intersection on which metal layers were deposited, as shown in Figs. 16a and b. Two materials were tested: Ti6Al4V and S355 steel. The deposition parameters of the two materials are listed in Table 3. The proposed monitoring method was integrated into software to detect the weld pool length in real time during deposition. The interface allowed the parameters to be set and displayed the value of the detected length. The raw image and detection results were displayed on screen for monitoring purposes.

Fig. 16

Verification results with intersection a front view of the intersection component, b top view of the intersection component, c weld pool length change of Ti6Al4V alloy, d weld pool length change of S355 steel

Table 3 Parameters of the deposition test

The data for the different depositions are shown in Figs. 16c and d. During the deposition of the titanium alloy and steel, the software tracked the change in the weld pool at a frequency of approximately 2–3 Hz. After a transient at the start of deposition, the pool length stabilized. When passing through the intersection, the weld pool length decreased rapidly before slowly recovering. The change in the detected data was synchronized with the change in the actual weld pool. Consequently, the proposed weld pool detection algorithm effectively tracked the change in the size of the weld pool during the intersection deposition process, and its feasibility and reliability were verified for both Ti6Al4V alloy and S355 steel.

7 Conclusions

This study proposes a novel method for monitoring the change in the length of the weld pool caused by thermal mass changes in a component. This method can be used to ensure the geometric accuracy of the deposited component by stably controlling the size of the weld pool. It includes the establishment of a special imaging system, the rapid extraction of feature points on the boundary of the weld pool, outlier rejection for accuracy improvement, the measurement of the weld pool length based on calibration, and the analysis of the optimal algorithm for different materials. The main conclusions are as follows.

(i) An endoscope and a shielding plate were added to the monitoring system to obtain a vertical viewing angle and prevent arc interference, which improved the image quality of the weld pool boundary.

(ii) A gradient-based algorithm was proposed to detect the boundary of a titanium alloy weld pool, which solved the monitoring problem caused by the low brightness contrast on the pool surface.

(iii) Through comparison, the optimal extraction algorithms for S355 steel and Ti6Al4V alloy were determined.

(iv) The pixel extraction method based on the detection line was used to reduce the computational cost of the entire algorithm, which was beneficial for realizing real-time monitoring.

However, the proposed method requires some parameters to be set in advance, and more materials need to be tested. Therefore, future work will focus on the following: (i) automatic setting of the software parameters based on signals from the manipulation system, and (ii) additional tests using other materials commonly used in plasma arc additive manufacturing.