Dynamic Modeling of Weld Bead Geometry Features in Thick Plate GMAW Based on Machine Vision and Learning

Weld bead geometry features (WBGFs) such as the bead width, height, area, and center of gravity are common factors in welding quality control. Effective modeling of these WBGFs contributes to timely decision making on welding process parameters to improve welding quality and raise the level of automation. In this work, a dynamic modeling method for WBGFs is presented based on machine vision and learning in multipass gas metal arc welding (GMAW) with typical joints. A laser vision sensing system is used to detect weld seam profiles (WSPs) during the GMAW process. A novel WSP extraction method is proposed using the scale-invariant feature transform (SIFT) and machine learning. The feature points of the extracted WSP, namely the boundary points of the weld beads, are identified with slope mutation detection under number supervision. In order to stabilize the modeling process, a fault detection and diagnosis method is implemented with cubic exponential smoothing, and the diagnostic accuracy is within 1.50 pixels. A linear interpolation method is presented to achieve sub-pixel discrimination of the weld bead before modeling the WBGFs. With the effective feature points and the extracted WSP, a scheme for modeling the area, center of gravity, and all-position width and height of the weld bead is presented. Experimental results show that the proposed method adapts to the variable features of the weld beads in thick plate GMAW with T-joints and butt/lap joints. This work provides further evidence for controlling weld formation in thick plate GMAW in real time.


Introduction
The gas metal arc welding (GMAW) process has become a popular method for joining thick steel plates and is used in the shipbuilding industry, wire arc additive manufacturing [1], pipe manufacturing [2,3], etc. Automatic and intelligent welding technologies, such as automatic seam tracking [4] and weld formation monitoring, have been widely applied to the thick plate GMAW process to enhance welding quality and efficiency [5]. In thick plate GMAW, online monitoring of weld formation for each pass is necessary for weldment quality control and welding process parameter adjustment. In this work, a WSP extraction method is proposed using the scale-invariant feature transform (SIFT) and machine learning algorithms to strengthen robustness compared with previous methods [36,37]. This extraction method adapts to variable WSPs. The feature points of the extracted WSP are effectively identified through slope mutation detection and number supervision. In order to stabilize the seam tracking and the subsequent modeling process of the WBGFs, a fault detection and diagnosis process is applied to the feature point identification process via the cubic exponential smoothing method. A scheme for modeling the WBGFs is proposed using the diagnosed feature points and the extracted WSP. With the proposed method, the area, center of gravity, and all-position weld bead width (APWBW) and height (APWBH) of the variable weld bead are modeled in real time for different passes. Various experimental results show the effectiveness of the proposed method, which contributes to timely decision making on welding process parameters to improve welding quality and raise the level of automation.
This work consists of WSP extraction, feature point identification of the WSP, fault detection and diagnosis, modeling WBGFs, experimental results, discussion, and conclusions.

WSP Extraction with SIFT and Machine Learning
In order to monitor the WBGFs, especially the center of gravity of the weld bead, simultaneous imaging of the welding torch and the WSP is achieved through a special combination of dimmer glass and a filter. Experimental results show that when the central wavelength of the filter is about 660 nm, the half bandwidth is 20 nm, and the transmittance of the dimmer glass is about 0.02%, the welding torch can be marked with the welding wire. Figure 1a shows the typical working state of the sensor, and Figure 1b gives the imaging effect of the laser stripe and the wire. The direction detection of the welding torch contributes to weld formation control. Only the WSP extraction process is given below, with WSP extraction on T-joints used as an example.

WSPs change with the weld beads during the multipass GMAW process. However, the image captured before arc starting (reference image) has a laser stripe similar in shape, position, and intensity to the one captured after the welding process begins (raw image). Thus, the WSP extraction method is proposed based on SIFT and machine learning, as shown in Figure 2, in which the SIFT algorithm effectively matches the characteristics between the two images.
This method includes two image processing paths. The first uses the Gabor filter and local thresholds to highlight the WSP against the background and to simplify the background data, respectively. The second locates the WSP among the interference data based on SIFT and the nearest neighbor clustering algorithm.
The first image processing path is as follows. The region of interest (ROI) is normally used to reduce the influence of the interference data on WSP extraction and the computational burden [38]. There is a complete arc region in the image (Figure 3a), and it has the maximum intensity. Global binarization is first conducted with the maximum intensity (Figure 3b). Then, the region below the bottom boundary of the arc region is taken as the ROI in this work (Figure 3c).
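The ROI step above can be sketched as follows: pixels at the global maximum intensity mark the saturated arc region, and the ROI is everything below its bottom boundary. This is a minimal illustration, assuming a plain list-of-rows gray image; the function name and data layout are illustrative, not the paper's implementation.

```python
def select_roi(image):
    """Return the rows below the arc region (the ROI in this sketch)."""
    peak = max(max(row) for row in image)          # arc region has the maximum intensity
    # global binarization with the maximum intensity: rows containing arc pixels
    arc_rows = [r for r, row in enumerate(image) if any(p == peak for p in row)]
    bottom = max(arc_rows)                          # bottom boundary of the arc region
    return image[bottom + 1:]                       # region below the arc is the ROI

image = [
    [10, 12, 11],
    [255, 255, 20],   # saturated arc region
    [30, 31, 29],     # laser stripe region -> ROI
    [28, 27, 26],
]
roi = select_roi(image)
```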

Gabor Filtering
Gabor filtering is a classic orientation feature detection method, mostly used in image and video processing [32]. The orientation features of the WSP are salient, and most of them do not change much during the multipass welding process except for the part covered by the weld bead. A great number of tests show that three specific filtering angles of −12°, 15°, and 90° can effectively highlight the WSP against the arc background. The orientation feature map is calculated as a weighted combination of the Gabor filtering results, where F_o is the computed orientation feature map and G represents the Gabor filtering result. The weight 0.5 accounts for the relatively low intensity of the groove region (Figure 3a).
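A pure-Python sketch of the real part of a Gabor kernel at the three reported orientations. The kernel parameters (size, sigma, wavelength, aspect ratio, phase) are illustrative assumptions, not the paper's values; in practice the kernel would be convolved with the ROI to produce G.

```python
import math

def gabor_kernel(theta_deg, ksize=9, sigma=2.0, lambd=4.0, gamma=0.5, psi=0.0):
    """Real part of a Gabor kernel oriented at theta_deg (illustrative parameters)."""
    theta = math.radians(theta_deg)
    half = ksize // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)   # rotate coordinates
            yr = -x * math.sin(theta) + y * math.cos(theta)
            g = math.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
            row.append(g * math.cos(2 * math.pi * xr / lambd + psi))
        kernel.append(row)
    return kernel

# The three orientations reported to highlight the WSP
kernels = {deg: gabor_kernel(deg) for deg in (-12, 15, 90)}
```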

Local Thresholding
The local thresholds are defined as
where i and j represent the row and column of the image, respectively, and LT_i is the ith local threshold, used for the region ranging from the (i − 2)th to the (i + 2)th rows when the columns are limited from j − 2 to j + 2.
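The paper's exact threshold formula is not reproduced above, but the where-clause describes a threshold computed over a 5 × 5 neighbourhood. A minimal sketch, assuming the threshold is the neighbourhood mean (one plausible choice, not necessarily the authors'):

```python
def local_threshold(fmap, i, j):
    """Mean of the 5x5 neighbourhood of (i, j), clipped at the image border.

    fmap is the orientation feature map as a list of rows; a pixel would be
    kept if its value exceeds this local threshold.
    """
    rows = range(max(i - 2, 0), min(i + 3, len(fmap)))
    cols = range(max(j - 2, 0), min(j + 3, len(fmap[0])))
    vals = [fmap[r][c] for r in rows for c in cols]
    return sum(vals) / len(vals)
```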

WSP Location Using SIFT
SIFT is a method for extracting distinctive invariant features from two target images. It can be used to perform reliable matching between different views of an object or scene and is invariant to image scale and rotation [39]. In this work, the reference image and the raw image are the inputs of the SIFT algorithm, and the output is a set of matching points. The ratio of vector angles from the nearest to the second nearest neighbor strongly influences the number of matching points used to locate the WSP. Tests show that this number increases with the ratio from 0.9 to 0.99 (Figure 4). However, the higher the ratio, the more fake matching points appear. The ratio is set to 0.95 in this work; although some fake matching points remain, they do not affect the extraction result under the proposed scheme.
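The nearest/second-nearest ratio test can be sketched as below. For simplicity this sketch compares descriptors by Euclidean distance rather than the vector-angle ratio used in the text, and the function and variable names are illustrative; the behaviour is the same in spirit: an ambiguous match (two neighbors at similar distance) is rejected.

```python
import math

def match_with_ratio_test(desc_a, desc_b, ratio=0.95):
    """Keep a match only when nearest/second-nearest ratio is below `ratio`.

    desc_a, desc_b: lists of descriptor vectors (plain float lists).
    Returns (index_in_a, index_in_b) pairs that pass the test (0.95 here,
    as in this work).
    """
    matches = []
    for ia, da in enumerate(desc_a):
        # distances from da to every descriptor in desc_b, nearest first
        dists = sorted((math.dist(da, db), ib) for ib, db in enumerate(desc_b))
        (d1, ib), (d2, _) = dists[0], dists[1]
        if d2 == 0 or d1 / d2 < ratio:   # unambiguous nearest neighbour
            matches.append((ia, ib))
    return matches
```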
In the second processing path, the orientation feature map is first binarized via the local thresholds. Then, nearest neighbor clustering is used to mark the segments of the WSP. Third, the cluster nearest to the matching points is taken as a segment of the WSP. The WSP extraction process is illustrated in Figure 5, where the two kinds of arrows represent the two processing paths. The method proposed here adapts to variable WSPs with higher robustness, and it is a valuable reference for visual information acquisition in visual-sensing-based automatic welding processes.

Feature Point Identification
The feature points in this work are the boundary points of the weld bead and the groove. They are usually identified with least squares fitting [36], the Hough transform [38], search algorithms [40], slope detection [41], etc. The challenges in identifying all feature points are the variable WSPs and the unknown number of feature points during the multipass GMAW process. In order to overcome these adverse factors and adapt to possibly imperfect WSP extraction results (some interference data points may remain), this work presents an effective feature point identification method, as shown in Figure 6.

The extracted WSP is first thinned (averaged in the vertical direction) before calculating the slopes. Then, the linear WSP is interpolated with the least squares method. The slopes fluctuate easily because of the remaining interference data points and the distorted data points of the WSP. The slope calculation is defined such that n is the number of data points involved in calculating the ith slope and k represents the slope vector. Since k still fluctuates abnormally, a one-dimensional linear filter of size 1 × 9 is used, without loss of generality, to further smooth k. Piecewise polynomial fitting is then used to approach the actual variation characteristics of k, where Q_i (i = 1, 2) is the fitting slope vector, a_ij (i = 1, 2) are two sets of coefficients, and k_hi is each half of k. Monotone interval acquisition is implemented with (5), where MI represents the monotone intervals, and mi_bi and mi_ei are respectively the start and end slopes in MI (Figure 7). The length of each monotone interval is then computed, and the lengths are sorted in ascending order.
After the number (e.g., P) of the feature points has been supervised (designated), the first P monotone intervals of MI are selected. The center of each selected monotone interval indicates the position of a feature point.
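The number-supervised selection above can be sketched as follows. This is a simplified stand-in for Equations (5)–(6): it splits the smoothed slope vector into maximal monotone runs and, assuming the longest runs correspond to the slope mutations at bead boundaries, returns the center index of the P longest runs. The exact ordering and interval definitions of the paper may differ.

```python
def feature_points(k, P):
    """Return center indices of the P longest monotone intervals of slope vector k."""
    runs, start, direction = [], 0, 0
    for i in range(1, len(k)):
        d = (k[i] > k[i - 1]) - (k[i] < k[i - 1])   # sign of the local slope change
        if direction == 0:
            direction = d                            # first non-flat direction seen
        elif d != 0 and d != direction:
            runs.append((start, i - 1))              # monotone run ends here
            start, direction = i - 1, d
    runs.append((start, len(k) - 1))
    # pick the P longest monotone intervals (assumed to mark boundary transitions)
    runs.sort(key=lambda r: r[1] - r[0], reverse=True)
    return sorted((b + e) // 2 for b, e in runs[:P])
```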
The effectiveness of the proposed feature point identification method is validated using continuous sampling images, as shown in Figure 8. The experimental results show that the feature points of 95.6% of the images can be identified precisely within an error of 3 pixels. The feature points deviate from the actual positions when there is interference such as spatter or welding slag near these positions (images in the red rectangles in Figure 8).
An error correction mechanism is necessary to predict/optimize the feature point when ineffective identification results occur. Multithreading processing is used to predict/optimize all feature points that are utilized to implement the measurement of WBGFs.


Cubic Exponential Smoothing for Stabilizing Feature Point Identification Process
The cubic exponential smoothing method is typically used to stabilize time-serial variables by predicting the real state over the next several sampling periods. This method is effective when the time-serial variable fluctuates around a linear trend. The identified feature points have this characteristic as the welding torch moves from the start to the end welding position, so they can be predicted with the cubic exponential smoothing method whenever an ineffective feature point identification result is diagnosed. The state of the target feature point is iterated as

S_t^(1) = αx_t + (1 − α)S_{t−1}^(1),  S_t^(2) = αS_t^(1) + (1 − α)S_{t−1}^(2),  S_t^(3) = αS_t^(2) + (1 − α)S_{t−1}^(3),

where α is the smoothing coefficient and x_t is the coordinate of the target feature point in the y-/x-direction in the images (the two directions are assumed independent). The coordinate of the target feature point is predicted/optimized as

x̂_{t+T} = A_t + B_t T + (C_t/2) T²,

where T is the sampling period, and the coefficients A, B, and C are defined as

A_t = 3S_t^(1) − 3S_t^(2) + S_t^(3),
B_t = [α/(2(1 − α)²)] [(6 − 5α)S_t^(1) − 2(5 − 4α)S_t^(2) + (4 − 3α)S_t^(3)],
C_t = [α/(1 − α)]² (S_t^(1) − 2S_t^(2) + S_t^(3)),

where x_t is initialized with the coordinate of the corresponding feature point of the reference image, namely the designated tracking position. The smoothing process is triggered when the designated tracking position deviates from its last position by more than 3 pixels. One-step prediction is used in this work. The typical prediction process is given in Algorithm 1.
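The prediction step can be sketched with Brown's triple (cubic) exponential smoothing, using the standard textbook coefficients A, B, C; the smoothing constant and initialization here are illustrative choices, not the paper's tuned values.

```python
def brown_cubic_predict(xs, alpha=0.5, T=1):
    """One-step-ahead prediction of a tracked coordinate series xs
    with Brown's triple (cubic) exponential smoothing."""
    s1 = s2 = s3 = xs[0]                 # initialize with the first (reference) coordinate
    for x in xs:
        s1 = alpha * x + (1 - alpha) * s1
        s2 = alpha * s1 + (1 - alpha) * s2
        s3 = alpha * s2 + (1 - alpha) * s3
    a = 3 * s1 - 3 * s2 + s3
    b = (alpha / (2 * (1 - alpha) ** 2)) * (
        (6 - 5 * alpha) * s1 - 2 * (5 - 4 * alpha) * s2 + (4 - 3 * alpha) * s3)
    c = (alpha / (1 - alpha)) ** 2 * (s1 - 2 * s2 + s3)
    return a + b * T + 0.5 * c * T ** 2
```

On a steady series the prediction reproduces the level, and on a linear trend it converges to the next value, which is exactly the behaviour needed to replace an outlier tracking position.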
The designated tracking position in the images within the red rectangles in Figure 8 is optimized with the cubic exponential smoothing method, as shown in Figure 9. The optimized data in the y-/x-directions are given in Figure 10. Through this diagnostic process, which overcomes random interference, the accuracy of identifying the feature points is within 1.50 pixels.

Algorithm 1 Feature point optimization process using cubic exponential smoothing, where x_{t+1} is the coordinate of the current identified feature point.
Experimental results show that the cubic exponential smoothing method enhances the accuracy of feature point identification. It can be used to restrain abnormal fluctuation of the tracking position in visual-sensing-based automated GMAW. This process stabilizes seam tracking as well as the subsequent WBGF modeling.

Sub Pixel Discrimination of WBGFs
In order to improve modeling accuracy, linear interpolation is applied for sub-pixel discrimination of the weld bead before modeling, where A_i and A_{i+1} are the coordinates of two adjacent pixels in the region of the weld bead, and O_i are the coordinates of the interpolation points. Nine points are linearly interpolated between two adjacent pixels in this work.
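A minimal sketch of this step: insert nine evenly spaced points between two adjacent profile pixels, giving 0.1-pixel discrimination along the profile. Function and parameter names are illustrative.

```python
def subpixel_points(p, q, n=9):
    """Return n linearly interpolated points strictly between pixels p and q
    (n = 9 in this work, i.e. 0.1-pixel spacing)."""
    (x1, y1), (x2, y2) = p, q
    return [(x1 + (x2 - x1) * t / (n + 1),
             y1 + (y2 - y1) * t / (n + 1))
            for t in range(1, n + 1)]
```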

Modeling Process of WBGFs
The reference profile (Figure 11) comes from the reference image. The two profiles are aligned by overlapping a common feature point (e.g., the leftmost feature point). The bead region is determined after the feature points of the two input WSPs are designated. The region consists of the lower and upper curve lines: the upper boundary is first determined with the corresponding feature points; the start point of the lower boundary is then the nearest point to the left of the upper boundary, and the end point of the lower boundary is the nearest point to the right of the upper boundary. Two gaps remain between the two boundaries, so two lines are defined with the two-point form of straight-line equations (Figure 12) to connect them, where x_1i, y_1i, x_2i, and y_2i (i = 1, 2) are the coordinates of the two sets of endpoints of the curve lines. Thus, the weld bead in the images forms a closed region.

Figure 11. Scheme of modeling weld bead geometry features (WBGFs).
where N is the order and is initialized to 10. In order to implement the all-position measurement, two ranges of the weld bead, LR_j and UD_j, are first determined in the y- and x-directions, respectively.
For ∀a_j ∈ [min(LR_j), max(LR_j)], there exist two intersection points between the line x = a_j and the closed weld bead region; their ordinates are A_j1 and A_j2 in the y-direction. In addition, for ∀b_j ∈ [min(UD_j), max(UD_j)], there also exist two intersection points between the line y = b_j and the bead region; their abscissas are O_j1 and O_j2 in the x-direction (Figure 13). Thus, the APWBW is modeled with (12), the APWBH with (13), and the area is acquired with (14).
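The all-position measurement can be sketched as follows on a rasterized closed bead region. Width here is the horizontal extent along each line y = b, height is the vertical extent along each line x = a, and the area is a pixel count; the mapping of these quantities onto Equations (12)–(14) and the exact definitions in the paper may differ.

```python
def all_position_geometry(region):
    """All-position width/height and area of a closed bead region.

    region: set of integer (x, y) pixel coordinates inside the closed bead.
    Returns (width-per-row, height-per-column, area): the extents between the
    two boundary intersections along each grid line, and a pixel-count area.
    """
    xs = {x for x, _ in region}
    ys = {y for _, y in region}
    width = {b: max(x for x, y in region if y == b)
                - min(x for x, y in region if y == b) for b in ys}
    height = {a: max(y for x, y in region if x == a)
                 - min(y for x, y in region if x == a) for a in xs}
    return width, height, len(region)
```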
Figure 13. Diagrammatic sketch of determining the APWBW and APWBH.
In addition, the zero- and first-order moments of the binary image are used to calculate the coordinates of the center of gravity of the weld bead, where v(i, j) is the gray value at point (i, j), and x_c and y_c are the corresponding coordinates of the center of gravity. The center of gravity is marked with "+" in the images in the subsequent modeling experiments.
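The moment-based centroid is the standard computation x_c = M10/M00, y_c = M01/M00; a minimal sketch on a list-of-rows image (names illustrative):

```python
def center_of_gravity(img):
    """Centroid from the zero- and first-order image moments.

    img: list of rows of gray values v(i, j), with i the row and j the column.
    Returns (x_c, y_c) = (M10/M00, M01/M00).
    """
    m00 = m10 = m01 = 0
    for i, row in enumerate(img):
        for j, v in enumerate(row):
            m00 += v            # zero-order moment (total mass)
            m10 += j * v        # first-order moment in x (column index)
            m01 += i * v        # first-order moment in y (row index)
    return m10 / m00, m01 / m00
```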

Experimental Results
The welding system is shown in Figure 14. Modeling processes with T-joints of 30 mm and 50 mm thickness are first used to show the effectiveness of the proposed method (Figures 15-18). APWBW modeling is conducted from top to bottom, and APWBH modeling is carried out from left to right. The feature point optimization process based on the cubic exponential smoothing method enhances the modeling accuracy (Figures 16 and 18).

The robustness of the modeling method proposed in this work is further investigated with the variable WSP by changing the welding current during the same welding process (Figures 17 and 18).
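The fault detection and diagnosis step that stabilizes the feature points can be sketched as below. The paper does not give the exact smoothing formulation, so this sketch assumes Brown's one-parameter cubic (triple) exponential smoothing; the class name, the smoothing constant, and the substitute-on-fault policy are illustrative assumptions, while the 1.50-pixel threshold follows the accuracy reported in this work.

```python
class CubicSmoothingMonitor:
    """Fault detection/diagnosis for a feature-point coordinate stream
    using Brown's cubic (triple) exponential smoothing.

    If a new observation deviates from the one-step-ahead prediction by
    more than `threshold` pixels, it is treated as a faulty
    identification and replaced by the prediction (diagnosis)."""

    def __init__(self, alpha=0.6, threshold=1.5):
        self.alpha = alpha
        self.threshold = threshold
        self.s1 = self.s2 = self.s3 = None   # smoothing states

    def _predict(self, m=1):
        a, s1, s2, s3 = self.alpha, self.s1, self.s2, self.s3
        A = 3 * s1 - 3 * s2 + s3
        B = a / (2 * (1 - a) ** 2) * (
            (6 - 5 * a) * s1 - 2 * (5 - 4 * a) * s2 + (4 - 3 * a) * s3)
        C = a ** 2 / (1 - a) ** 2 * (s1 - 2 * s2 + s3)
        return A + B * m + 0.5 * C * m ** 2  # quadratic-trend forecast

    def update(self, y):
        if self.s1 is None:                  # initialize all states
            self.s1 = self.s2 = self.s3 = y
            return y
        pred = self._predict()
        if abs(y - pred) > self.threshold:   # fault detected
            y = pred                         # diagnose: use prediction
        a = self.alpha
        self.s1 = a * y + (1 - a) * self.s1
        self.s2 = a * self.s1 + (1 - a) * self.s2
        self.s3 = a * self.s2 + (1 - a) * self.s3
        return y
```

Feeding the monitor one boundary-point coordinate per frame keeps a sudden arc-light or spatter outlier from propagating into the WBGF models, while normal slow drift of the profile passes through unchanged.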
Two modeling processes with a butt joint of 30 mm thickness are conducted to further show the effectiveness of the proposed method here (Figures 19 and 20). These experimental results show that the proposed method in this work still meets the modeling requirement in butt-joint multipass GMAW regarding these typical WBGFs. These modeling experiments show that this method can be applied to the real-time modeling of the WBGFs with typical joints and thin or thick steel plates.

Discussion
This paper presented an effective WBGF modeling method for the multipass GMAW process with T-joints and butt joints based on machine vision and learning. This method can stably acquire the area, center of gravity, APWBW, and APWBH of the weld bead in real time. This study expanded the traditional feature measurement of irregular weld beads. It contributes to weld formation control and planning by providing more evidence to optimize the welding process parameters. In addition, the improvements on WBGF modeling in this work include an attempt at real-time modeling, a stable modeling process, and adaptability to variable modeling objects.
This work proposed an "all-position" concept. The all-position width and height of the weld bead contain more useful information for optimizing the welding process parameters compared with [24,25]. In addition, the imaging of the welding wire is worthy of research attention, because it can provide direct evidence to control the posture of the welding torch.
The modeling processes show that the modeling results are most sensitive to the one-dimensional filtering size (from 1 × 3 to 1 × 29). It is necessary to develop an adaptive mechanism to approach the appropriate setting, which is the first aspect to be improved in future study. Polynomial fitting is currently a popular method to model such objects; however, it suffers from over-fitting. Although this work uses piecewise fitting to represent the upper and lower boundaries of the weld bead respectively, over-fitting still happens. Therefore, a mechanism monitoring the maximum error between the last several data points of the actual boundary and the fitting result should be built to optimize the order of the polynomial function. The relationship between the visual features of the weld bead and the welding process parameters will be investigated in the next study.
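The order-selection mechanism suggested above can be sketched as follows. This is a hypothetical helper, not part of the paper's implementation: the function name, the tail length, and the pixel tolerance are illustrative assumptions; only the idea of monitoring the maximum error over the last several boundary points comes from the text.

```python
import numpy as np

def fit_boundary(x, y, max_order=8, tail=5, tol=1.0):
    """Choose a polynomial order for a bead-boundary fit by monitoring
    the maximum error (in pixels) over the last `tail` data points.

    Returns the coefficients of the lowest-order polynomial whose tail
    error is within `tol`; falls back to order `max_order` otherwise."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    coeffs = None
    for order in range(1, max_order + 1):
        coeffs = np.polyfit(x, y, order)
        # maximum deviation over the last few boundary points
        err = np.max(np.abs(np.polyval(coeffs, x[-tail:]) - y[-tail:]))
        if err <= tol:
            break                        # lowest adequate order found
    return coeffs
```

Stopping at the lowest adequate order is what limits over-fitting: a high-order fit may pass through every sample yet oscillate between them, whereas the tail-error check accepts the simplest curve that still tracks the boundary end.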

Conclusions
This work implemented real-time modeling on the area, center of gravity, and all-position width and height of the weld bead in thick plate gas metal arc welding with T-joints and butt joints based on visual sensing. Some conclusions about visual information acquisition, fault detection and diagnosis, and the weld bead geometry feature modeling method are given as follows.
(1) The proposed feature point identification method, combined with the weld seam profile extraction method, adapts to various weld seam profiles. It provides a valuable reference for visual information acquisition in visual-sensing-based automated welding.
(2) The proposed fault detection and diagnosis of feature point identification based on the cubic exponential smoothing method shows that this optimization process enhances the identification accuracy to within 1.50 pixels. This method shows its potential application value for improving tracking accuracy and welding quality.
(3) The proposed modeling method can obtain the area, center of gravity, and all-position width and height of the weld bead in real time in gas metal arc welding with typical joints. This modeling method provides more effective evidence for weld formation control and planning, particularly during the multipass arc welding process.