Article

Real-Time Seam Extraction Using Laser Vision Sensing: Hybrid Approach with Dynamic ROI and Optimized RANSAC

1 School of Computer Science and Engineering, Wuhan Institute of Technology, Wuhan 430205, China
2 Hubei Engineering Research Center for Intelligent Production Line Systems, Wuhan Institute of Technology, Wuhan 430205, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(11), 3268; https://doi.org/10.3390/s25113268
Submission received: 10 April 2025 / Revised: 16 May 2025 / Accepted: 19 May 2025 / Published: 22 May 2025
(This article belongs to the Topic Innovation, Communication and Engineering)

Abstract
Laser vision sensors for weld seam extraction face critical challenges due to arc light and spatter interference in welding environments. This paper presents a real-time weld seam extraction method. The proposed framework enhances robustness through the sequential processing of historical frame data. First, an initial noise-free laser stripe image of the weld seam is acquired prior to arc ignition, from which the laser stripe region and slope characteristics are extracted. Subsequently, during welding, a dynamic region of interest (ROI) is generated for the current frame based on the preceding frame, effectively suppressing spatter and arc interference. Within the ROI, adaptive Otsu thresholding segmentation and morphological filtering are applied to isolate the laser stripe. An optimized RANSAC algorithm, incorporating slope constraints derived from historical frames, is then employed to achieve robust laser stripe fitting. The geometric center coordinates of the weld seam are derived through the rigorous analysis of the optimized laser stripe profile. Experimental results from various types of weld seam extraction validated the accuracy and real-time performance of the proposed method.

1. Introduction

In modern industrial manufacturing, welding is a crucial method for joining and processing materials, extensively used in the aerospace, shipbuilding, automotive, and construction industries [1,2,3,4]. The intense arc radiation, high-decibel noise, and toxic fumes generated during welding endanger operators and severely constrain the quality consistency and operational efficiency of manual welding. With the advancement of robotic control technology, automated welding has become an inevitable trend in industrial manufacturing. Currently, the mainstream technical approaches can be categorized into two methods: the first is the teaching-and-playback approach, which simply replicates predefined trajectories. When there are deviations in the workpiece positioning or environmental parameters [5,6], weld seam displacement can occur, resulting in compromised welding quality. The second method involves intelligent welding systems that utilize multi-modal sensing technology. These systems achieve environmental perception and dynamic trajectory compensation using sensor arrays and adaptive control algorithms, and they now represent the core focus of research on automated welding. Different sensing systems, such as visual sensing, acoustic sensing, ultrasonic sensing, and arc sensing [1,5,7,8,9], can be used in this method. The successful implementation of this framework relies on the high-precision, real-time identification of weld seam characteristics. Laser vision sensing technology, with its non-contact measurements, high accuracy, and real-time detection capabilities [5,10,11], has become an essential tool for weld seam detection. However, the complexity and variability of industrial environments, unstable lighting conditions, potential oil stains on workpiece surfaces [12], and disturbances from arc light and spatter during welding often degrade the quality of weld images, posing significant challenges to accurate weld seam extraction.
Currently, methods for weld seam extraction can be categorized into two main approaches: traditional digital image processing techniques and deep learning-based intelligent detection methods [13]. Johan, N. F. et al. [14] achieved the precise extraction of weld seam feature points through three crucial stages: laser extraction, broken-line fitting, and pixel location determination. Wu et al. [15] employed median filtering and the Otsu algorithm for image preprocessing and binarization, combined with an improved Hough transform algorithm to detect the weld seam positions. Muhammad et al. [16] employed median filtering, color processing, and blob analysis to extract weld seam feature points. Li et al. [17] developed various laser stripe templates and similarity evaluation functions, using template matching to extract weld seams in noisy environments. Shao et al. [18] utilized static ROIs for extracting narrow-gap butt welds, but these static ROIs lacked flexibility and were inadequate for complex welding environments. Yu et al. [19] designed a weld tracking algorithm that combines morphological extraction and kernel correlation filtering, achieving the accurate tracking of multiple weld seams.
In recent years, the applications of deep learning in computer vision have attracted significant attention, including YOLOv5, Faster R-CNN, DeepLabV3, etc. YOLOv5 (You Only Look Once version 5) is a state-of-the-art, single-stage object detection model renowned for its speed, accuracy, and ease of deployment. Building on the YOLO architecture, it employs a CSPDarknet53 backbone for feature extraction and PANet for multi-scale feature fusion, enabling the efficient detection of objects across varying sizes. Li et al. [20] employed Mask R-CNN for the instance segmentation of weld seams and integrated Hough transform-based image processing to achieve the high-precision extraction of weld seam trajectories. Gao et al. [21] introduced the RepVGG network and a Normalized Attention Module (NAM) to optimize YOLOv5, enhancing the detection speed for weld seam feature points and enabling accurate extraction with complex backgrounds. Mobaraki, M. et al. [22] implemented an automatic tracking system for fillet welds between pipes and flanges during Gas Metal Arc Welding (GMAW) using a network architecture based on ResNet 101 and the Stacked Hourglass model, which reduced welding defects and the need for rework, saving significant manufacturing costs. Kang et al. [23] replaced the backbone network of DeepLabV3+ with MobileNetV2 and introduced a DenseASPP structure alongside an attention mechanism, specifically focused on laser stripe edge extraction, thereby obtaining clearer laser stripe images and reducing noise interference. Lin et al. [24] proposed a hybrid methodology that initially localizes the welding torch through a YOLOv5-based object detection algorithm. Subsequently, an adaptive ROI-driven image processing algorithm is employed to extract the weld seam centerline, fulfilling the real-time performance and precision requirements of the K-TIG seam tracking system. These studies highlight the robust capabilities of deep learning in complex scenarios.
However, deep learning-based methods require extensive amounts of annotated data and offline training, raising the barrier to user adoption. Coupled with high computational complexity, this means that these approaches struggle to meet the stringent real-time demands in welding applications.
To address these challenges, this paper proposes a laser vision-based weld seam extraction method that combines a dynamic ROI and an optimized random sample consensus (RANSAC) algorithm. The method begins by automatically acquiring the laser stripe region using a dynamic ROI. Next, adaptive threshold segmentation is applied to extract the laser stripe, followed by morphological filtering and medial axis transformation for refinement. Finally, an optimized RANSAC algorithm is employed to fit the extracted laser stripe, enabling the precise extraction of weld seam feature points. The main contributions of this study include the following:
  • A Dynamic ROI Transmission Mechanism: An adaptive ROI generation strategy based on the geometric features of laser stripes from historical frames is proposed, which intelligently suppresses noise regions through inter-frame spatial correlation analysis.
  • Slope-Constrained RANSAC Optimization: The conventional RANSAC algorithm is enhanced by integrating historical slope constraints, where slope threshold restrictions are imposed on random sampling processes to accelerate iterative convergence.
  • A Sequential Processing Architecture: A recursive workflow incorporating “pre-arc baseline initialization → dynamic ROI updating → constrained fitting” is established, thereby achieving high-efficiency, real-time tracking throughout the entire welding process cycle.
The structure of this paper is organized as follows: Section 2 elaborates on the proposed methodology. Section 3 details the experimental design and implementation. Section 4 provides a discussion of the experimental results. Finally, Section 5 concludes the paper and outlines potential directions for future research.

2. Methodology

2.1. System Architecture and Data Acquisition

The system primarily consists of a welding robot, laser vision system, and computer, as shown in Figure 1. The laser vision system is composed of a filter, line laser, and industrial CCD camera, all of which are encapsulated in a protective shell. The shell and welding torch are mounted on the end of the robot arm, and the filter is installed in front of the CCD camera to effectively filter the arc light generated during welding, reducing the interference from arc light and spatter during image processing.
The laser emitted by the line laser illuminates the surface of the welding workpiece, forming a laser stripe at the point of intersection. The industrial CCD camera captures the weld images with laser stripes. The captured images are transmitted to the computer via TCP/IP communication. The computer then runs the proposed algorithm to extract the weld seam.

2.2. Proposed Algorithm

During real-time weld seam tracking, the weld trajectory and vision sensor data are acquired continuously. Due to the minimal inter-frame variation between consecutive frames, the recognition results from the previous frame can be leveraged to optimize the processing of the current frame. The detailed process is illustrated in Figure 2.

2.2.1. Dynamic ROI

In weld seam imagery, laser stripe feature regions typically occupy only a small portion of the original image space. Employing a full-frame image processing strategy not only significantly increases the computational load but also degrades the feature extraction accuracy due to arc light interference and spatter noise in highly dynamic welding environments. We propose a temporally correlated dynamic region of interest (ROI) generation method. By analyzing the spatial continuity of weld trajectories and the temporal continuity of image acquisition, the approach constructs the ROI model for the current frame using the laser stripe feature point set from the preceding frame.
Prior to torch ignition at the welding start point, a pristine weld seam image is acquired as the initial reference frame. The subsequent processing of this initial frame extracts the laser stripe profile, mathematically represented as the discrete point set $P^{(0)} = \{ p_i^{(0)}(u, v) \mid i \in [1, N] \}$, where $N$ is the total number of contour pixel points. Through laser vision sensing, the captured weld seam profile can be reduced to three critical geometric feature points [19] along the laser stripe, denoted as $\{ p_l^{(t)}, p_o^{(t)}, p_r^{(t)} \}$, where $t$ represents the frame index, with the determination method explicitly defined as follows:
$$
\begin{aligned}
p_l^{(t)} &= p_{i^*}^{(t)}, \quad i^* = \arg\min_{i} \, p_i^{(t)}(v), \quad i = 1, 2, \dots, N \\
p_o^{(t)} &= p_{i^*}^{(t)}, \quad i^* = \arg\max_{i} \, d_i^{(t)}, \quad i = 1, 2, \dots, N \\
p_r^{(t)} &= p_{i^*}^{(t)}, \quad i^* = \arg\max_{i} \, p_i^{(t)}(v), \quad i = 1, 2, \dots, N
\end{aligned}
$$
As shown in Equation (1), the distance parameter $d_i^{(t)}$ is defined as the perpendicular distance from an arbitrary point $p_i^{(t)}$ within the discrete laser stripe contour point set $P^{(t)} = \{ p_i^{(t)}(u, v) \}$, extracted from the $t$th frame weld seam image, to the segment determined by the boundary feature points $p_l^{(t)}$ and $p_r^{(t)}$:
$$
d_i^{(t)} = \frac{\left| A^{(t)} p_i^{(t)}(u) + B^{(t)} p_i^{(t)}(v) + C^{(t)} \right|}{\sqrt{\left( A^{(t)} \right)^2 + \left( B^{(t)} \right)^2}}
$$
where $A^{(t)}$, $B^{(t)}$, and $C^{(t)}$ are the coefficients of the linear equation corresponding to the line containing segment $p_l^{(t)} p_r^{(t)}$. The equation of this line is given as follows:
$$
A^{(t)} u + B^{(t)} v + C^{(t)} = 0
$$
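The feature-point definitions above reduce to two coordinate extrema plus a farthest-point-from-line search. A minimal sketch follows; the `(N, 2)` array input format and the function name are illustrative assumptions, not from the paper:

```python
import numpy as np

def extract_feature_points(points):
    """Locate the three geometric feature points of a laser stripe profile.

    `points` is an (N, 2) array of (u, v) contour pixels (an assumed input
    format). p_l / p_r are the points with the minimum / maximum
    v-coordinate, and p_o is the contour point farthest from the line
    through p_l and p_r, mirroring the arg-min/arg-max definitions above.
    """
    pts = np.asarray(points, dtype=float)
    p_l = pts[np.argmin(pts[:, 1])]              # boundary point: min v
    p_r = pts[np.argmax(pts[:, 1])]              # boundary point: max v

    # Coefficients of the line A*u + B*v + C = 0 through p_l and p_r
    A = p_r[1] - p_l[1]
    B = p_l[0] - p_r[0]
    C = -(A * p_l[0] + B * p_l[1])

    # Perpendicular distance of every contour point to that line
    d = np.abs(A * pts[:, 0] + B * pts[:, 1] + C) / np.hypot(A, B)
    p_o = pts[np.argmax(d)]                      # seam corner: max distance
    return p_l, p_o, p_r
```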
Upon the determination of the three feature points $\{ p_l^{(t)}, p_o^{(t)}, p_r^{(t)} \}$, a polygonal confinement region encompassing the laser stripe can be algorithmically constructed, geometrically parameterized by the discrete point set $\{ P_k^{(t)} \}_{k=1}^{6}$, as illustrated in Figure 3.
Assuming that the minimum separation distance between the polygonal boundary and the laser centerline is $2n$, the vertices of the polygonal confinement region are defined as follows:
$$
\begin{aligned}
P_1^{(t)} &= \left( p_l^{(t)}(u) + n, \; p_l^{(t)}(v) \right), &
P_2^{(t)} &= \left( p_o^{(t)}(u) + n, \; p_o^{(t)}(v) \right), &
P_3^{(t)} &= \left( p_r^{(t)}(u) + n, \; p_r^{(t)}(v) \right), \\
P_4^{(t)} &= \left( p_r^{(t)}(u) - n, \; p_r^{(t)}(v) \right), &
P_5^{(t)} &= \left( p_o^{(t)}(u) - n, \; p_o^{(t)}(v) \right), &
P_6^{(t)} &= \left( p_l^{(t)}(u) - n, \; p_l^{(t)}(v) \right)
\end{aligned}
$$
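The vertex definitions above are simple $\pm n$ offsets of the three feature points along the $u$-axis, which can be sketched as follows (function and argument names are illustrative):

```python
import numpy as np

def roi_polygon(p_l, p_o, p_r, n):
    """Build the six-vertex ROI polygon enclosing the laser stripe.

    Each feature point is offset by +/- n along the u-axis, matching the
    vertex definitions above; vertices P1..P6 trace the polygon boundary.
    """
    (ul, vl), (uo, vo), (ur, vr) = p_l, p_o, p_r
    return np.array([
        (ul + n, vl), (uo + n, vo), (ur + n, vr),   # P1, P2, P3 (+n side)
        (ur - n, vr), (uo - n, vo), (ul - n, vl),   # P4, P5, P6 (-n side)
    ], dtype=float)
```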
Given the continuous image acquisition at 30 fps, the time interval between two consecutive frames is extremely short, leading to minimal changes between adjacent frames. Based on this observation, the ROI in the weld image for the current frame can be approximated as an irregular polygonal region from the previous frame. The core principle of the dynamic ROI is demonstrated in Algorithm 1 below.
Algorithm 1 Core principle of dynamic ROI
Require: Initial frame I_0 without noise
Require: Subsequent frames {I_1, I_2, ..., I_n}
Ensure: Real-time laser stripe tracking
     Initialization:
  1: Extract initial ROI from I_0:
  2:     ROI_prev ← ProcessFrame(I_0)
     Iterative Processing:
  3: for each frame I_t at time t ≥ 1 do
  4:     Apply previous ROI:
  5:         Cropped_t ← I_t ∩ ROI_prev
  6:     Extract new stripe region:
  7:         Stripe_t ← DetectLaser(Cropped_t)
  8:     Propagate ROI to next frame:
  9:         ROI_prev ← CalculateNewROI(Stripe_t)
 10: end for
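Algorithm 1's recursive workflow can be expressed as a generic loop. The three callables below are assumed hooks standing in for the ProcessFrame, DetectLaser, and CalculateNewROI stages, not implementations from the paper:

```python
def track_stripes(frames, initial_roi, crop, detect_laser, calc_roi):
    """Recursive workflow of Algorithm 1: the ROI computed from frame t-1
    constrains the stripe search on frame t.

    `crop`, `detect_laser`, and `calc_roi` are injected hooks standing in
    for the paper's cropping, DetectLaser, and CalculateNewROI steps.
    """
    roi_prev = initial_roi
    stripes = []
    for frame in frames:
        cropped = crop(frame, roi_prev)      # apply previous frame's ROI
        stripe = detect_laser(cropped)       # extract new stripe region
        roi_prev = calc_roi(stripe)          # propagate ROI to next frame
        stripes.append(stripe)
    return stripes
```

With real images, `crop` would mask out all pixels outside the polygonal ROI of Section 2.2.1 before segmentation.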

2.2.2. Weld Seam Segmentation

The implementation of weld seam segmentation primarily involves three key steps: First, an adaptive threshold segmentation method is applied to binarize the weld seam image. Subsequently, morphological filtering is employed to eliminate isolated noise regions in the binary image. Finally, the laser stripe skeleton is extracted from the processed weld seam image to delineate the precise geometric profile.

Step 1: Thresholding

Accurate threshold determination is critical for achieving precise weld seam extraction in automated visual inspection systems. Traditional segmentation methods, such as fixed thresholding, often prove inadequate in welding applications due to dynamic challenges, including material heterogeneity across workpieces, intermittent arc glare interference, and a nonuniform spatter distribution. To address these limitations, this study implemented Otsu's method [25], an adaptive thresholding technique that statistically optimizes inter-class separability. Let the candidate segmentation threshold be the grayscale value $T$, which divides the weld image into two classes: the foreground and the background. The probabilities of the foreground and background classes are denoted as $p_1(T)$ and $p_2(T)$, respectively, while their corresponding mean grayscale values are represented by $\mu_1(T)$ and $\mu_2(T)$. The inter-class variance $\sigma_b^2(T)$ is then formulated as
$$
\sigma_b^2(T) = p_1(T) \, p_2(T) \left[ \mu_1(T) - \mu_2(T) \right]^2
$$
The optimal threshold $T^*$ is determined by exhaustively maximizing $\sigma_b^2(T)$ across all possible $T$ values:
$$
T^* = \arg\max_{0 \le T \le L-1} \sigma_b^2(T)
$$
where $L$ is the number of grayscale levels.
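A from-scratch sketch of the exhaustive Otsu search described above; in practice a library call such as OpenCV's `cv2.threshold` with the `THRESH_OTSU` flag would typically be used instead:

```python
import numpy as np

def otsu_threshold(gray):
    """Exhaustive Otsu search maximizing the inter-class variance
    sigma_b^2(T) = p1(T) * p2(T) * (mu1(T) - mu2(T))^2.

    `gray` is a uint8 image array; returns the optimal threshold T*.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    levels = np.arange(256)
    for t in range(256):
        p1 = prob[:t + 1].sum()              # foreground class probability
        p2 = 1.0 - p1                        # background class probability
        if p1 == 0.0 or p2 == 0.0:
            continue
        mu1 = (levels[:t + 1] * prob[:t + 1]).sum() / p1   # class means
        mu2 = (levels[t + 1:] * prob[t + 1:]).sum() / p2
        var_b = p1 * p2 * (mu1 - mu2) ** 2   # inter-class variance
        if var_b > best_var:
            best_var, best_t = var_b, t
    return best_t
```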

Step 2: Morphological Filtering

Due to the presence of spattering during the welding process, binary images of the weld seam often contain small isolated noise regions that require further filtering. We can introduce an area constraint to remove isolated regions with an area smaller than a specified threshold.
$$
R_{\text{keep}} = \bigcup_{k} \left\{ R_k \;\middle|\; \operatorname{Area}(R_k) \ge T_{\text{area}} \right\}
$$
where $R_{\text{keep}}$ is the set of retained regions, $R_k$ is the $k$th connected component (candidate region), $T_{\text{area}}$ is the area threshold (in pixels), and $\bigcup$ is the union operator.
Simultaneously, by leveraging the slender nature of laser stripes, we can apply aspect ratio-based filtering constraints to further reduce the image noise while preserving the integrity of the laser stripes. Common binary image filtering methods include morphological filtering and spatial domain filtering. However, spatial domain filtering may blur the image edges while removing noise, which could affect subsequent analysis and processing. Therefore, we employed morphological filtering. Specifically, morphological opening not only effectively eliminates small noise points, protrusions, and burrs in the binary image but also preserves the integrity of the laser stripes well. The morphological opening operation is defined as follows:
$$
A \circ B = (A \ominus B) \oplus B
$$
where $A$ is the input image, $B$ is the structuring element, $\ominus$ denotes the erosion operation, and $\oplus$ denotes the dilation operation in morphology.
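A pure-NumPy sketch of the opening operation with a square structuring element, for illustration only; production code would more likely call `cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)`, and the area constraint above would use connected-component statistics:

```python
import numpy as np

def binary_opening(img, k=3):
    """Morphological opening (erosion then dilation) with a k x k square
    structuring element, implementing A o B = (A erode B) dilate B."""
    pad = k // 2

    def shift_stack(a, reduce_fn):
        # Stack all k*k shifted views and reduce them pixel-wise:
        # np.all gives erosion, np.any gives dilation.
        p = np.pad(a, pad, constant_values=False)
        views = [p[du:du + a.shape[0], dv:dv + a.shape[1]]
                 for du in range(k) for dv in range(k)]
        return reduce_fn(np.stack(views), axis=0)

    eroded = shift_stack(img.astype(bool), np.all)   # A erode B
    return shift_stack(eroded, np.any)               # ... dilate B
```

The opening removes specks smaller than the structuring element while restoring the extent of larger, stripe-like regions.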

Step 3: Laser Stripe Centerline Extraction

Since laser stripes exhibit finite width distributions, the precise extraction of their centerlines is essential for accurately acquiring the weld seam feature points. The mainstream methods include the grayscale centroid method [26], Steger’s algorithm [27], and the medial axis transform (MAT) [28]. However, the grayscale centroid method demonstrates susceptibility to noise interference that may induce centerline deviations, while Steger’s algorithm incurs significantly higher computational complexity despite its superior precision. Consequently, this study adopted the MAT to achieve laser stripe centerline extraction with enhanced computational efficiency.
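As a lightweight stand-in for the MAT (for which `skimage.morphology.medial_axis` is a common off-the-shelf choice), a column-wise midpoint extractor illustrates the goal of reducing the stripe to a one-pixel-wide centerline. This simplification is ours, not the paper's method:

```python
import numpy as np

def stripe_centerline(binary):
    """Column-wise midpoint centerline of a binary stripe image.

    A deliberately simplified stand-in for the medial axis transform;
    returns (column, center_row) pairs for every column that contains
    stripe pixels.
    """
    centers = []
    for c in range(binary.shape[1]):
        rows = np.flatnonzero(binary[:, c])
        if rows.size:
            # Midpoint of the first and last stripe pixel in this column
            centers.append((c, 0.5 * (rows[0] + rows[-1])))
    return centers
```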

2.3. Optimized RANSAC Algorithm

Due to effects from noise interference, the fitting process of the extracted laser stripe centerlines becomes necessary to construct high-precision weld seam models. The RANSAC algorithm [29], which effectively eliminates outliers during this procedure through an iterative mechanism, is illustrated in Figure 4.
According to the operational requirements of the RANSAC algorithm, weld seam model estimation requires a minimum sample size of 2. During the iterative hypothesis generation phase, two points are randomly selected from the point set to construct the initial laser stripe model.
$$
a x + b y + c = 0
$$
As shown in Figure 4b, the iterative procedure comprises three key steps:
  • Step 1: The computation of the Euclidean distance from all data points to the model M in each iteration.
    $$
    d_i = \frac{| a x_i + b y_i + c |}{\sqrt{a^2 + b^2}}
    $$
    where $(x_i, y_i)$ are the coordinates of the $i$th data point in the RANSAC algorithm.
  • Step 2: The classification of inliers and outliers using a preset distance threshold, w. The parameter w is determined using the laser stripe width extracted from the initial reference frame. The discrimination criteria are as follows:
    $$
    p_i =
    \begin{cases}
    \text{Inlier}, & d_i \le w \\
    \text{Outlier}, & d_i > w
    \end{cases}
    $$
  • Step 3: The selection of the model with the maximum number of inliers as the optimal laser stripe solution.
Although the RANSAC algorithm demonstrates satisfactory precision and robustness, its application to real-time weld seam extraction is constrained by two critical limitations:
(1)
The stochastic sampling mechanism introduces inherent randomness and a trial-and-error nature, which necessitates excessive iterations and significantly degrades the computational efficiency;
(2)
The predetermined fixed iteration count fails to dynamically adapt to varying inlier ratios, leading to a suboptimal trade-off between the processing speed and estimation accuracy.
To address the first limitation, we propose a slope-constrained sampling strategy based on temporal coherence. During continuous welding processes, adjacent laser stripes (frames $t$ and $t-1$) exhibit minimal variation in their slope characteristics within the weld zone. This allows for the implementation of a temporal constraint:
$$
\left| \theta_t^{(i)} - \theta_{t-1} \right| \le \theta_{\text{th}}
$$
where $\theta_t^{(i)}$ denotes the slope of the $i$th sampled candidate in frame $t$, $\theta_{t-1}$ is the validated slope from frame $t-1$, and $\theta_{\text{th}}$ is the slope threshold used to validate candidate samples. This constraint mitigates the aimlessness of random sampling while reducing the algorithm's computational complexity.
To overcome the second limitation, a convergence criterion is activated when the improvement margin of the current optimal model falls below a predefined threshold, allowing for the early termination of redundant computational cycles. The optimized RANSAC algorithm is formally described in Algorithm 2.
Algorithm 2 Improved slope-constrained RANSAC algorithm with early termination
Require: Current frame laser stripe point set P, previous slope m_prev,
         max iterations K, distance threshold τ,
         slope tolerance δ, consecutive patience N
Ensure: Optimal line model (a*, b*), inlier set I*
  1: Initialize best inliers: I* ← ∅
  2: Initialize optimal model: (a*, b*) ← (0, 0)
  3: Iteration counter: k ← 0
  4: No-improvement counter: no_improve ← 0
  5: while k < K and no_improve < N do
  6:     k ← k + 1
  7:     repeat
  8:         Randomly select two distinct points p1(x1, y1), p2(x2, y2) ∈ P
  9:         Compute slope: m_curr ← (y2 − y1) / (x2 − x1 + ε)
 10:     until |m_curr − m_prev| < δ
 11:     Construct line model: a ← m_curr, b ← y1 − a·x1
 12:     Temporary inlier set: I ← ∅
 13:     for each point p(x, y) ∈ P do
 14:         Distance: d ← |a·x − y + b| / √(a² + 1)
 15:         if d < τ then
 16:             I ← I ∪ {p}
 17:         end if
 18:     end for
 19:     if |I| > |I*| then
 20:         I* ← I
 21:         (a*, b*) ← (a, b)
 22:         no_improve ← 0
 23:     else
 24:         no_improve ← no_improve + 1
 25:     end if
 26: end while
 27: return (a*, b*), I*
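A runnable sketch of Algorithm 2; the parameter defaults and the bounded rejection-sampling loop are illustrative choices, not values from the paper:

```python
import random

def slope_constrained_ransac(points, m_prev, max_iter=1000, tau=2.0,
                             delta=0.5, patience=50, eps=1e-9, seed=0):
    """Slope-constrained RANSAC with early termination (Algorithm 2 sketch).

    Candidate point pairs whose slope deviates from the previous frame's
    validated slope `m_prev` by more than `delta` are rejected before the
    consensus step, and the loop stops once `patience` consecutive
    iterations bring no improvement.
    """
    rng = random.Random(seed)
    best_inliers, best_model = [], (0.0, 0.0)
    no_improve, k = 0, 0
    while k < max_iter and no_improve < patience:
        k += 1
        for _ in range(100):                     # bounded rejection sampling
            (x1, y1), (x2, y2) = rng.sample(points, 2)
            m = (y2 - y1) / (x2 - x1 + eps)
            if abs(m - m_prev) < delta:          # slope constraint
                break
        else:                                    # no admissible pair found
            no_improve += 1
            continue
        a, b = m, y1 - m * x1                    # line model y = a*x + b
        inliers = [p for p in points
                   if abs(a * p[0] - p[1] + b) / (a * a + 1) ** 0.5 < tau]
        if len(inliers) > len(best_inliers):
            best_inliers, best_model = inliers, (a, b)
            no_improve = 0                       # reset early-stop counter
        else:
            no_improve += 1
    return best_model, best_inliers
```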

2.4. Weld Seam Feature Point Estimation

As illustrated in Figure 3, the laser stripe is partitioned into two distinct regions, $p_l^{(t)} p_o^{(t)}$ and $p_o^{(t)} p_r^{(t)}$, using $p_o^{(t)}$ as the demarcation point. By applying the optimized RANSAC algorithm within each partitioned region, two linear equations are derived as follows:
$$
\begin{cases}
a_1 x + b_1 y + c_1 = 0 \\
a_2 x + b_2 y + c_2 = 0
\end{cases}
$$
The intersection point $X$, which serves as the weld seam feature point, is calculated as follows:
$$
X = -A^{-1} C
$$
where $A = \begin{bmatrix} a_1 & b_1 \\ a_2 & b_2 \end{bmatrix}$ and $C = \begin{bmatrix} c_1 \\ c_2 \end{bmatrix}$.
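Solving the resulting 2x2 system $AX = -C$ for the intersection is a one-liner with NumPy (the helper name is assumed):

```python
import numpy as np

def seam_feature_point(line1, line2):
    """Intersect the two fitted stripe lines a_i*x + b_i*y + c_i = 0 by
    solving A X = -C, giving the weld seam feature point X = (x, y)."""
    (a1, b1, c1), (a2, b2, c2) = line1, line2
    A = np.array([[a1, b1], [a2, b2]], dtype=float)
    C = np.array([c1, c2], dtype=float)
    return np.linalg.solve(A, -C)   # X = -A^{-1} C
```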

2.5. Coordinate Transformation

For robotic welding trajectory generation, the laser image coordinates need to be transformed into robot base coordinates through three critical stages:
  • Camera Calibration: Obtains the intrinsic matrix K and removes lens distortion.
  • Laser Plane Calibration: Establishes the mapping from the 2D laser plane coordinates to the 3D camera space.
  • Hand–Eye Calibration: Solves the transformation matrix $T_E^C$ between the robot tool-end coordinate system $\{E\}$ and the camera coordinate system $\{C\}$.
Following the transformation described in the preceding steps, the spatial coordinates of the weld seam in the robot base frame { B } can be derived through the following relationship:
$$
\begin{bmatrix} x_B \\ y_B \\ z_B \\ 1 \end{bmatrix}
= T_B^E \cdot T_E^C \cdot
\begin{bmatrix} Z_C \, K^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} \\ 1 \end{bmatrix}
$$
where $Z_C$ is the depth ($Z$-coordinate) of the weld seam feature point; $K^{-1}$ denotes the inverse of the camera intrinsic matrix; $(u, v)$ are the pixel coordinates of the weld seam feature point; and $(x_B, y_B, z_B)$ are the spatial coordinates of the weld seam feature point in the robot base frame.
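The full pixel-to-base-frame chain can be sketched as follows, assuming the three calibration stages have produced $K$, $T_E^C$, and $T_B^E$ as NumPy matrices (names mirror the text; the helper itself is hypothetical):

```python
import numpy as np

def pixel_to_base(u, v, Zc, K, T_EC, T_BE):
    """Transform a pixel (u, v) with known depth Z_C into robot base
    coordinates, chaining the hand-eye transform T_E^C and the robot
    forward-kinematics transform T_B^E (both 4x4 homogeneous matrices).
    """
    # Back-project the pixel into the camera frame: Z_C * K^{-1} [u, v, 1]^T
    p_cam = Zc * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    p_cam_h = np.append(p_cam, 1.0)               # homogeneous coordinates
    return (T_BE @ T_EC @ p_cam_h)[:3]            # (x_B, y_B, z_B)
```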

3. Experiments

3.1. Experimental Setup

The experimental platform, as illustrated in Figure 5, consisted of a laser vision sensor mounted at the end effector of a 6-DOF collaborative robot. The laser vision sensor captured the weld images with laser stripes. The captured images were transmitted to the computer via TCP/IP communication. The computer then ran the proposed algorithm to extract the weld seam.
In this study, a 450 nm line laser was selected because the arc intensity in the neighborhood of the 450 nm band is relatively weak [10]. The experimental equipment parameters are summarized in Table 1.

3.2. Weld Seam Extraction Experiment

To evaluate the effectiveness of the proposed method, experimental trials were conducted with three representative weld configurations: fillet welds, butt welds, and lap joints. The welding parameters applied in the experiment are shown in Table 2.
A total of 172 fillet weld images, 157 lap joint weld images, and 131 butt joint weld images were acquired through the experimental process. The proposed algorithm achieved robust weld seam extraction across all experimental datasets. Representative experimental results are presented in Figure 6, Figure 7 and Figure 8. The red lines represent the algorithmically extracted laser stripes, while the red cross markers denote the identified weld seam locations. The right panel provides a magnified view of the extracted weld seam shown in the left image. As demonstrated in Figure 6, Figure 7 and Figure 8, the proposed algorithm achieved the high-precision extraction of fillet weld, butt weld, and lap weld features. These results confirm the method’s effectiveness in cross-type weld feature extraction across diverse joint configurations.

3.3. Ablation Experiment

3.3.1. Impact of Dynamic ROI

The precise determination of the ROI serves as a critical preprocessing step in weld seam image extraction, where its accuracy fundamentally dictates the effectiveness of subsequent feature recognition. To validate the superiority of the dynamic ROI method, an image dataset comprising 172 fillet weld samples, 157 lap joint weld samples, and 131 butt weld samples was established. For each weld category, a dual-group controlled experiment was conducted: for the experimental group, we employed the proposed dynamic ROI extraction method, while for the control group, we utilized the conventional static ROI approach. During experimentation, both the processing time and extraction status (success/failure) were recorded for each sample. The average processing time [15,24,30] and extraction accuracy [13,19,30] for each algorithm across identical weld types were quantified as the performance metrics. The definition of the extraction accuracy was as follows:
$$
\text{Accuracy} = \frac{N_{\text{correct}}}{N_{\text{total}}} \times 100\%
$$
where $N_{\text{correct}}$ denotes the number of images with correctly extracted welds, and $N_{\text{total}}$ represents the total number of test samples. Comprehensive comparative results are presented in Table 3.

3.3.2. Impact of Improved RANSAC Algorithm

To evaluate the impact of the improvements to the RANSAC algorithm on weld seam recognition, we conducted a comparative analysis using the execution time as the primary metric. The experimental dataset comprised 172 fillet weld images, 157 lap joint weld images, and 131 butt weld images, ensuring the comprehensive coverage of common welding scenarios. The results are presented in Table 4.

3.4. Comparative Experiments

To evaluate the advantages of the proposed method, a comparative analysis was conducted with the approaches documented in [15,17]. Considering the prevalence of fillet welds in practical welding applications [12], the comparative experiments were specifically designed for this weld type. The experimental outcomes are demonstrated in Figure 9. The red lines represent the algorithmically extracted laser stripes, while the green cross markers denote the identified weld seam locations.

4. Discussion

4.1. Interference Mechanism of Spatter and Surface Reflections

In laser vision-based seam tracking systems, welding spatter and metal surface reflections degrade the laser stripe extraction accuracy through the following mechanisms:
  • Spatter Interference: The dynamic dispersion of molten metal spatter induces the transient local occlusion and geometric distortion of the laser stripe, which may be misinterpreted as topological continuity features, leading to fragmented centerline reconstruction. Moreover, the high-velocity trajectories of spatter particles create spatio-temporal coupling interference with the laser stripe, significantly increasing the complexity of optical feature separation.
  • Reflection Interference: Multi-order reflection artifacts generated by highly reflective metal surfaces produce competing optical signals, causing path ambiguity in stripe centerline extraction. Concurrently, photon saturation effects in specular reflection regions reduce the contrast threshold of valid signals, inducing subpixel-level edge resolution degradation.

4.2. Analysis of Dynamic ROI

As is quantitatively demonstrated in Table 3, the dynamic ROI method exhibited significant improvements over static ROI approaches in terms of both its extraction accuracy (averaging a 5.26% enhancement) and processing efficiency (averaging a 45.87% reduction in the computational time). This methodology focuses computational resources on localized laser stripe regions, thereby achieving dual technical benefits:
  • The suppression of the arc-induced noise and spatter interference inherent to welding environments.
  • A real-time processing capability at 30 fps.
The static ROI method employs a larger, predefined region to ensure the continuous coverage of the laser stripe throughout the welding process, while the dynamic ROI method adaptively tracks and confines the analysis to the immediate laser stripe area. Consequently, using a dynamic ROI achieves higher precision and improved computational efficiency relative to a static ROI, owing to its adaptive localization capability.

4.3. Analysis of Improved RANSAC Algorithm

As shown in Table 4, the experimental results demonstrate a significant increase in the convergence speed achieved by our enhanced RANSAC implementation. This improvement primarily stems from the introduced slope constraint mechanism, which effectively optimizes the random sampling process in the traditional RANSAC algorithm by reducing unnecessary iterations using prior geometric knowledge. Notably, the improved algorithm achieved 63% faster processing for fillet welds (21.26 ms → 7.97 ms) and maintained a speed increase of over 50%.

4.4. Analysis of Comparative Experiments

A quantitative evaluation of the computational efficiency and extraction accuracy was performed, with the statistical results being presented in Table 5.
Table 5 demonstrates that the proposed method achieved optimal performance in terms of both detection accuracy and computational efficiency. The limitations of the method proposed in [15] stem primarily from two aspects: (1) the absence of an ROI mechanism necessitated full-image preprocessing and weld detection using an improved Hough transform, resulting in an elevated computational load (21.55 ms); and (2) insufficient adaptability to noise interference factors such as welding spatter, smoke, and arc light led to significant accuracy degradation (84.88%). Although the method proposed in [17] enhanced noise immunity through Kalman filter-based laser stripe tracking (97.09%), it remained constrained by its rectangular ROI design and template matching strategy. In contrast, the proposed method employed an irregular polygonal ROI that further reduced the number of processed pixels compared to the rectangular ROI in [17], thereby achieving good real-time detection performance (7.97 ms) while maintaining high precision.

5. Conclusions

The accuracy of weld seam extraction is significantly challenged by complex welding environments and interference factors such as arc light and spatter generated during the process. To address these issues, this study proposed a laser vision-based weld seam extraction method incorporating a dynamic ROI mechanism and an improved RANSAC algorithm. The method dynamically generated an ROI for the current frame based on prior frame information, effectively suppressing noise interference while improving computational efficiency. During image preprocessing, adaptive threshold segmentation was employed to extract the laser stripe. The RANSAC algorithm was then optimized by refining the sampling strategy and dynamically adjusting the iteration count, enabling subpixel-level precision in feature point extraction. Weld seam recognition experiments and comparative studies demonstrated the robust performance of the proposed method under complex working conditions, confirming its practical applicability. However, the current approach does not account for laser stripe interference caused by specular reflections on workpiece surfaces; addressing this limitation will be prioritized in future research to enhance segmentation robustness in such scenarios.

Author Contributions

Conceptualization, G.C.; methodology, G.C. and Y.Z.; software, G.C.; validation, G.C. and Y.A.; formal analysis, G.C. and W.X.; investigation, G.C. and Y.A.; resources, Y.Z. and B.Y.; data curation, G.C. and W.X.; writing—original draft preparation, G.C.; writing—review and editing, G.C., Y.Z., B.Y., and W.X.; visualization, Y.A.; supervision, B.Y. and W.X.; project administration, Y.Z., B.Y., and W.X.; funding acquisition, B.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by The Key Laboratory for Crop Production and Smart Agriculture of Yunnan Province under Grant 2024ZHNY12.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Fang, W.; Xu, X.; Tian, X. A vision-based method for narrow weld trajectory recognition of arc welding robots. Int. J. Adv. Manuf. Technol. 2022, 121, 8039–8050. [Google Scholar] [CrossRef]
  2. Lei, T.; Rong, Y.; Wang, H.; Huang, Y.; Li, M. A review of vision-aided robotic welding. Comput. Ind. 2020, 123, 103326. [Google Scholar] [CrossRef]
  3. Wu, Q.; Li, Z.; Gao, C.; Biao, W.; Shen, G. Research on Welding Guidance System of Intelligent Perception for Steel Weldment. IEEE Sens. J. 2023, 23, 5220–5231. [Google Scholar] [CrossRef]
  4. Jia, X.; Luo, J.; Li, K.; Wang, C.; Li, Z.; Wang, M.; Jiang, Z.; Veiko, V.P.; Duan, J.A. Ultrafast laser welding of transparent materials: From principles to applications. Int. J. Extrem. Manuf. 2025, 7, 032001. [Google Scholar] [CrossRef]
  5. Fan, J.; Jing, F.; Yang, L.; Long, T.; Tan, M. A precise seam tracking method for narrow butt seams based on structured light vision sensor. Opt. Laser Technol. 2019, 109, 616–626. [Google Scholar] [CrossRef]
  6. Zou, Y.; Wang, Y.; Zhou, W.; Chen, X. Real-time seam tracking control system based on line laser visions. Opt. Laser Technol. 2018, 103, 182–192. [Google Scholar] [CrossRef]
  7. Wang, B.; Hu, S.J.; Sun, L.; Freiheit, T. Intelligent welding system technologies: State-of-the-art review and perspectives. J. Manuf. Syst. 2020, 56, 373–391. [Google Scholar] [CrossRef]
  8. Rout, A.; Deepak, B.B.V.L.; Biswal, B.B. Advances in weld seam tracking techniques for robotic welding: A review. Rob. Comput. Integr. Manuf. 2019, 56, 12–37. [Google Scholar] [CrossRef]
  9. Zhang, G.; Zhang, Y.; Tuo, S.; Hou, Z.; Yang, W.; Xu, Z.; Wu, Y.; Yuan, H.; Shin, K. A novel seam tracking technique with a four-step method and experimental investigation of robotic welding oriented to complex welding seam. Sensors 2021, 21, 3067. [Google Scholar] [CrossRef]
  10. Banafian, N.; Fesharakifard, R.; Menhaj, M.B. Precise seam tracking in robotic welding by an improved image processing approach. Int. J. Adv. Manuf. Technol. 2021, 114, 251–270. [Google Scholar] [CrossRef]
  11. Xu, F.; Hou, Z.; Xiao, R.; Xu, Y.; Wang, Q.; Zhang, H. A novel welding path generation method for robotic multi-layer multi-pass welding based on weld seam feature point. Measurement 2023, 216, 112910. [Google Scholar] [CrossRef]
  12. Dinham, M.; Fang, G. Detection of fillet weld joints using an adaptive line growing algorithm for robotic arc welding. Rob. Comput. Integr. Manuf. 2014, 30, 229–243. [Google Scholar] [CrossRef]
  13. Deng, L.; Lei, T.; Wu, C.; Liu, Y.; Cao, S.; Zhao, S. A weld seam feature real-time extraction method of three typical welds based on target detection. Measurement 2023, 207, 112424. [Google Scholar] [CrossRef]
  14. Johan, N.F.; Mohd Shah, H.N.; Sulaiman, M.; Naji, O.A.A.M.; Arshad, M.A. Weld seam feature point extraction using laser and vision sensor. Int. J. Adv. Manuf. Technol. 2023, 127, 5155–5170. [Google Scholar] [CrossRef]
  15. Wu, Q.Q.; Lee, J.P.; Park, M.H.; Jin, B.J.; Kim, D.H.; Park, C.K.; Kim, I.S. A study on the modified Hough algorithm for image processing in weld seam tracking. J. Mech. Sci. Technol. 2015, 29, 4859–4865. [Google Scholar] [CrossRef]
  16. Muhammad, J.; Altun, H.; Abo-Serie, E. A robust butt welding seam finding technique for intelligent robotic welding system using active laser vision. Int. J. Adv. Manuf. Technol. 2016, 94, 13–29. [Google Scholar] [CrossRef]
  17. Li, X.; Li, X.; Ge, S.S.; Khyam, M.O.; Luo, C. Automatic welding seam tracking and identification. IEEE Trans. Ind. Electron. 2017, 64, 7261–7271. [Google Scholar] [CrossRef]
  18. Shao, W.J.; Huang, Y.; Zhang, Y. A novel weld seam detection method for space weld seam of narrow butt joint in laser welding. Opt. Laser Technol. 2018, 99, 39–51. [Google Scholar] [CrossRef]
  19. Yu, S.; Guan, Y.; Hu, J.; Hong, J.; Zhu, H.; Zhang, T. Unified seam tracking algorithm via three-point weld representation for autonomous robotic welding. Eng. Appl. Artif. Intell. 2024, 128, 107535. [Google Scholar] [CrossRef]
  20. Li, J.; Li, B.; Dong, L.; Wang, X.; Tian, M. Weld seam identification and tracking of inspection robot based on deep learning network. Drones 2022, 6, 216. [Google Scholar] [CrossRef]
  21. Gao, A.; Fan, Z.; Li, A.; Le, Q.; Wu, D.; Du, F. YOLO-Weld: A Modified YOLOv5-Based Weld Feature Detection Network for Extreme Weld Noise. Sensors 2023, 23, 5640. [Google Scholar] [CrossRef] [PubMed]
  22. Mobaraki, M.; Ahani, S.; Gonzalez, R.; Yi, K.M.; Van Heusden, K.; Dumont, G.A. Vision-based seam tracking for GMAW fillet welding based on keypoint detection deep learning model. J. Manuf. Process. 2024, 117, 315–328. [Google Scholar] [CrossRef]
  23. Kang, S.; Qiang, H.; Yang, J.; Liu, K.; Qian, W.; Li, W.; Pan, Y. Research on a Feature Point Detection Algorithm for Weld Images Based on Deep Learning. Electronics 2024, 13, 4117. [Google Scholar] [CrossRef]
  24. Lin, Z.; Shi, Y.; Wang, Z.; Li, B.; Chen, Y. Intelligent seam tracking of an ultranarrow gap during K-TIG welding: A hybrid CNN and adaptive ROI operation algorithm. IEEE Trans. Instrum. Meas. 2022, 72, 1–14. [Google Scholar] [CrossRef]
  25. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
  26. Li, Y.; Zhou, J.; Huang, F.; Liu, L. Sub-pixel extraction of laser stripe center using an improved gray-gravity method. Sensors 2017, 17, 814. [Google Scholar] [CrossRef]
  27. Steger, C. An unbiased detector of curvilinear structures. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 113–125. [Google Scholar] [CrossRef]
  28. Jiang, C.; Li, W.L.; Wu, A.; Yu, W.Y. A novel centerline extraction algorithm for a laser stripe applied for turbine blade inspection. Meas. Sci. Technol. 2020, 31, 095403. [Google Scholar] [CrossRef]
  29. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
  30. Li, W.; Mei, F.; Hu, Z.; Gao, X.; Yu, H.; Housein, A.A.; Wei, C. Multiple weld seam laser vision recognition method based on the IPCE algorithm. Opt. Laser Technol. 2022, 155, 108388. [Google Scholar] [CrossRef]
Figure 1. System architecture composition.
Figure 2. Weld seam extraction process.
Figure 3. Dynamic ROI. u and v denote the pixel coordinate system; {p_l(t), p_o(t), p_r(t)} denotes the three critical feature points of the laser stripe; {P_k(t)}, k = 1, …, 6, denotes the vertices of the dynamic ROI.
Figure 4. The principle of the RANSAC algorithm: (a) raw data distribution; (b) iterative optimization; (c) final model selection.
Figure 5. Experimental platform.
Figure 6. Fillet weld feature extraction results.
Figure 7. Butt weld feature extraction results.
Figure 8. Lap weld feature extraction results.
Figure 9. Feature extraction comparisons: (a) the proposed method successfully identified the actual weld seam characteristics; (b) the method from [15] failed to extract features from noise-contaminated images; (c) the method from [17] achieved effective feature extraction.
Table 1. Technical specifications of experimental setup.
System Component | Specification
Line laser | Wavelength: 450 nm; optical power: 80 mW; laser stripe width: 1 mm
Bandpass filter | (450 ± 20) nm
Imaging device | CCD camera (2592 × 1944 pixels) @ 30 fps
Computing unit | Intel Core i7-11800H @ 2.3 GHz, 16 GB RAM
Robot | Fairino 6-DOF robotic arm; repeatability: ±0.02 mm
Hand–eye calibration accuracy | 1.2 mm
Table 2. Welding parameters for experimental configurations.
Parameter | Value
Welding process | GMAW
Welding materials | Q235 steel (8 mm thick)
Welding current | 117 A
Welding voltage | 22 V
Feed speed | 3 mm/s
Welding speed | 5 mm/s
Gas flow rate | 18 L/min
Wire diameter | 1.0 mm
Shielding gas | CO2
Table 3. Performance comparison between dynamic ROI and static ROI.
Method | Fillet Weld: Time (ms) / Acc. (%) | Butt Weld: Time (ms) / Acc. (%) | Lap Weld: Time (ms) / Acc. (%)
Dynamic ROI | 7.97 / 98.84 | 5.31 / 96.18 | 6.26 / 97.45
Static ROI | 12.40 / 91.28 | 10.20 / 92.37 | 13.59 / 94.27
Table 4. Execution time comparison between original and improved RANSAC algorithms.
Weld Type | Fillet Weld | Butt Weld | Lap Weld
Original RANSAC | 21.26 ms | 10.79 ms | 12.63 ms
Improved RANSAC | 7.97 ms | 5.31 ms | 6.26 ms
Table 5. Performance comparison of welding seam detection methods.
Method | Proposed | Wu et al. [15] | Li et al. [17]
Time (ms) | 7.97 | 21.55 | 12.28
Accuracy (%) | 98.84 | 84.88 | 97.09
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Chen, G.; Zhang, Y.; Ai, Y.; Yu, B.; Xu, W. Real-Time Seam Extraction Using Laser Vision Sensing: Hybrid Approach with Dynamic ROI and Optimized RANSAC. Sensors 2025, 25, 3268. https://doi.org/10.3390/s25113268

