Article

Intelligent Recognition of Weld Seams on Heat Exchanger Plates and Generation of Welding Trajectories

Department of Mechanical Design Engineering, Hanyang University, Ansan 15588, Republic of Korea
* Author to whom correspondence should be addressed.
These authors contributed equally to this work and should be considered co-first authors.
Machines 2025, 13(11), 992; https://doi.org/10.3390/machines13110992
Submission received: 15 September 2025 / Revised: 21 October 2025 / Accepted: 25 October 2025 / Published: 29 October 2025
(This article belongs to the Section Advanced Manufacturing)

Abstract

The large-format, long-distance welding of heat exchanger plates is widely used in shipbuilding, oil and gas, power and metallurgical equipment, rail transportation, and other fields. To address issues such as low automation and information silos in actual welding production, this paper proposes an intelligent weld seam identification and trajectory generation method, accurately achieving coordinate generation for large-format, long-distance heat exchanger plate welding. The method investigates a camera calibration model based on coordinate transformation, preprocesses collected weld seam images, develops an edge approximation algorithm using median filters for denoising, and proposes a two-stage fusion strategy of “deep learning localization + optimized operator refinement” for intelligent edge identification. This strategy uses deep learning object detection for fast, robust coarse localization of weld regions, combined with optimized operators for high-precision, efficient pixel-level edge extraction. Finally, a weld trajectory coordinate generation program based on the Hough transform algorithm is developed, enabling the rapid automatic welding of plates by welding robots. Experiments demonstrate the accurate identification of heat exchanger plate welds with an error of only 0.33%, meeting welding requirements. The method integrates identification and welding information quickly: overall welding efficiency improved by more than 100%, with strong real-time performance, broad compatibility, and low hardware requirements.

1. Introduction

With the continuous development of the manufacturing industry, the scale of the heat exchanger manufacturing sector has been expanding. The primary function of heat exchangers is to maintain specific temperatures for media during technological processes, while also serving as important devices for improving energy utilization efficiency. Due to their advantages of high heat transfer efficiency, precise temperature control, ease of cleaning, and long service life, they are widely used in industrial sectors including petrochemicals, power and metallurgy, food and pharmaceuticals, shipbuilding, machinery, and central heating [1,2,3,4,5,6]. The petrochemical industry is the main application field for heat exchangers, accounting for approximately 30% of the market share. This is because nearly all processes in petrochemical production require heating, cooling, or condensation, leading to sustained demand for heat exchangers. The power and metallurgical industries account for about 17% of the market share for heat exchangers. The shipbuilding industry, which mainly uses central coolers and other heat exchange equipment, accounts for about 9% of the market share. The machinery industry, which widely employs oil coolers and intercoolers in automobiles, construction machinery, and agricultural machinery, accounts for about 8% of the market share. Additionally, demand for heat exchangers continues to grow in industrial fields such as central heating, food, and pharmaceuticals.
In recent years, with the rapid development of heat exchanger manufacturing technology, the requirements for welding technology have become increasingly demanding. Welding is an important manufacturing process technology that joins metals or other thermoplastic materials together through heating, high temperatures, or high pressure to achieve solid-state bonding, playing a critical role in industrial production. Traditional welding methods primarily rely on manual operation by workers, often in harsh environments that can pose certain hazards to human health. Furthermore, manual welding is prone to issues such as poor welding quality and low efficiency, making it difficult to meet enterprises’ demands for high precision and stability in welding operations [7,8,9,10].
During the welding process, clamping is a crucial step that not only maintains the stability of the workpieces being welded but also ensures precision and quality throughout the welding operation. Welding clamping technology has been developed over many years and is widely applied. Addressing the issue of weld clamping in fully welded plate heat exchangers in enterprises, reference [11] analyzed the clamping actions for these exchangers and proposed a three-axis synchronous rapid clamping method along with a design scheme for welding clamping equipment. The study examined the workflow of key clamping components in the welding clamping device, performed 3D modeling and simulation of the heat exchanger welding clamping equipment using SolidWorks software (SolidWorks 2022), and conducted mechanical analysis of the top plate section of the clamping device using Workbench software to optimize the structure. The developed welding clamping device exhibits high moving speed, effective clamping performance, and a high degree of automation, significantly improving worker productivity and thereby solving issues such as large edge gaps in plate assemblies, low welding efficiency, and high production costs during the plate welding process.
Since the 1960s, with the continuous advancement of robotic technology, welding robots have gradually emerged as a replacement for manual labor and have been widely adopted across various industries such as automotive manufacturing, aerospace, electronics manufacturing, and building structures. Most welding robots operate using a teach-and-playback method, where operators guide the manipulator to specific positions, automatically record these positional data, and subsequently repeat the operations based on the stored information. This teach-and-playback mode provides welding robots with high repeatability and ease of operation, enabling them to perform welding tasks in complex environments. This approach enhances production efficiency and quality stability while reducing labor costs and workload [12,13,14,15]. Subsequently, driven by Industry 4.0, welding robots have progressively evolved toward intelligence and flexibility, with machine vision technology also achieving further development. The integration of vision systems into welding robots enables the monitoring and autonomous control of the welding process, thereby replacing manual teaching. This allows robots to independently acquire positional and shape information, executing various welding tasks flexibly and autonomously. Compared to traditional welding robots, vision-equipped welding robots offer a higher degree of automation. The application of this technology provides greater convenience to the industrial manufacturing sector, while reducing the need for manual intervention, minimizing human-induced errors, and improving production line efficiency and stability [16,17,18,19,20,21].
In traditional robotic welding systems, it is usually necessary to use a teach pendant to control the welding torch along the weld path while setting welding parameters, initiating and extinguishing the arc, and controlling the shielding gas. Additionally, it is necessary to use the robot controller for simulation [22]. These processes can easily lead to torch positioning errors, thereby affecting welding quality. By integrating deep learning technology with vision-based automated welding technology, these issues can be effectively addressed. This approach is of great significance for enhancing the level of automation and intelligence of automated welding robots to meet the needs of future industrial applications. Through the application of deep learning technology, welding robots can more accurately identify weld seams, automatically adjust welding parameters, and achieve higher-quality welds. Meanwhile, the use of machine vision technology enables welding robots to monitor the welding process in real time, promptly adjusting the torch position and welding parameters to improve welding accuracy and stability. This automated welding technology that combines deep learning and machine vision can not only improve production efficiency but also reduce production costs, enhance product quality, and provide powerful technical support for the intelligent transformation of manufacturing.
In recent years, robotic automated welding technology has been one of the hot topics in the field of advanced welding. Xi Wenming et al. [23] explored the transformation relationship between the robot coordinate system and the camera coordinate system, utilized vision technology to track weld seams, successfully determined spatial points on the weld seam, and applied this data to the robotic automated welding process. This research provided key technical support for achieving automatic weld seam tracking and autonomous robotic welding. Xu Hao et al. [24] proposed an improved preprocessing algorithm for obscured weld images, which can clearly identify the weld centerline. This algorithm introduced new ideas and methods to the field of weld image processing, providing valuable references and insights for technological advancement and development in related fields, with certain application prospects. Chen Weihua et al. [25] adopted a quintic polynomial transition method to study the joint trajectory of welding robots in space. This method ensured smooth operation of the robot at corners and reduced vibrations in the robotic arm. Dang Hongshe et al. [26] proposed a trajectory planning algorithm that integrates sinusoidal acceleration, linear interpolation, and quintic polynomial interpolation, addressing the issue of significant vibrations during task execution by the robot. This new algorithm smooths the velocity curve through a sinusoidal acceleration curve and optimizes the robot’s trajectory planning using linear interpolation and quintic polynomial interpolation, thereby enabling smoother motion during task execution and improving working accuracy and stability. You Yong et al. [27] introduced relevant machine vision technologies for welding robots and proposed an automated welding system solution based on line laser camera vision guidance. This system has basically achieved welding of straight lines or curves on different planes.
Dinham et al. [28] successfully achieved the accurate identification of filet welds using corner detection technology, particularly on workpieces with surface imperfections such as scratches or rust. This work addresses a gap in weld seam identification under challenging surface conditions and provides a reliable technical approach for welding quality control and automated welding system applications. Olaf C. et al. [29] designed a low-cost shape recognition system to program industrial robots for two-dimensional welding processes, enabling consistent welding of identical contours on 2D planes.
Rezaei et al. [30] developed an auxiliary ball-screw servo mechanism to support weld seam tracking in robotic welding systems, along with a decentralized control strategy based on adaptive sliding mode theory. This method facilitates precise welding maneuvers on fixtures and improves error compensation accuracy. Banafian et al. [31] introduced an improved edge detection algorithm for determining weld seam and molten pool parameters, which significantly enhanced detection speed and precision. However, the high computational cost prevents real-time processing when handling high-quality images. Charalampos Loukas et al. [32] conducted investigations into optical and vision-based sensor methods for weld position detection, improving the accuracy of welding path generation and reducing the number of required returns. Nilsen et al. [33] carried out research on multi-sensor-based weld tracking to enhance the predictability of joint strength.
Although existing studies have advanced weld target identification and welding technologies, several challenges persist in practical implementation. Specifically, the identification of large-format, long-distance plate welds in heat exchangers suffers from inadequate accuracy and low efficiency. Additional limitations include high hardware dependency, computationally intensive processes, large model sizes with poor real-time performance, and constraints related to the working range and cost-effectiveness of welding robotic arms. These issues collectively hinder the application of automated welding technology for large-format heat exchanger plates and limit its practical adoption in industrial production environments.
This paper aims to address the technical bottlenecks in teach-based programming by equipping welding robots with vision systems and developing intelligent recognition and trajectory generation software, achieving precise weld seam identification and seamless integration with welding robots.
The main contributions are summarized as follows:
(1)
A camera calibration model based on coordinate transformation was constructed, enabling camera calibration and image correction, thereby providing data support for establishing positional transformations for welding robots.
(2)
An intelligent edge recognition method for weld seam images integrating deep learning and optimized operators was proposed, significantly reducing computational load and improving processing efficiency. This method achieves accurate identification of heat exchanger plate welds with minimal error, meeting welding precision requirements.
(3)
A weld trajectory coordinate detection and generation program based on the Hough transform algorithm was developed, addressing issues such as low efficiency in robot teach-based welding and information silos between recognition and welding systems. This enables high-real-time, high-quality automated weld identification for large-format, long-distance heat exchanger plates.
The structure of this paper is organized as follows: Section 1 provides a review of relevant literature; Section 2 introduces the intelligent identification method for plate weld seams and welding trajectory generation; Section 3 analyzes and investigates camera calibration methods; Section 4 explores a deep learning-based approach for plate weld seam identification; Section 5 addresses the generation of weld seam coordinates; Section 6 presents the experimental results; and Section 7 summarizes the research findings.

2. Presentation of the Method for Intelligent Identification of Plate Welds and Generation of Welding Trajectories

Our previous work proposed a three-axis synchronous rapid clamping method and designed a welding clamping device, solving the problem of weld clamping for plate heat exchangers in actual production [11]. For an enterprise to smoothly introduce a welding robot, the next problem to be solved is weld seam identification on the heat exchanger. In robotic automated welding, precise and efficient weld seam identification is the key to achieving efficient, high-quality welding. To address the challenge of weld path recognition, this section proposes an intelligent approach for detecting plate welds and generating the corresponding welding trajectories, as illustrated in Figure 1, thereby establishing a technical foundation for integrating robotic welding systems in industrial settings.

3. Analysis of Camera Calibration Methods

This study adopted an Eye-to-Hand vision system, in which the camera was stationarily mounted and operated independently of the welding robot. Accordingly, the primary aim of the calibration process in this section is to define the transformational relationship between the world coordinate system and the fixed camera coordinate system. Camera imaging involves a sequence of coordinate transformations: initially, the global world coordinates of an object are converted into the local camera coordinate system. The camera’s intrinsic parameter matrix is then applied to project this local position onto the image coordinate system, yielding the object’s projected image location. Finally, based on image resolution and pixel density, the image coordinates are transformed into the pixel coordinate system. This process is illustrated in Figure 2.
Figure 3 illustrates the camera imaging model, which comprises both intrinsic and extrinsic parameters. The accurate calibration of these parameters is essential to ensure image precision and reliability [34].
WCS denotes the world coordinate system, CCS refers to the camera coordinate system, IPCS indicates the image coordinate system, and ICS represents the pixel coordinate system. These coordinate systems are interlinked through spatial transformation relationships, typically established using a combination of geometric operations including rotation, translation, and scaling. Defining these transformations accurately is essential for maintaining precision and consistency during data conversion and processing across different coordinate frameworks.
(1)
Conversion of the world coordinate system to the camera coordinate system
The transformation from the world coordinate system (WCS) to the camera coordinate system (CCS) is generally accomplished using rigid-body motions, including translation and rotation. In this conversion process, the translation operation aligns the origins of the two coordinate systems, while the rotation operation adjusts the orientation relationship between the coordinate systems to keep them consistent in space, ensuring the accurate positioning of objects in different coordinate systems, as shown in Figure 4.
In the figure, T represents the translation vector and R the rotation matrix.
From this, the homogeneous coordinates of point P in the camera coordinate system are derived:
$$\begin{bmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{1}$$
In the formula, R is an orthonormal rotation matrix whose elements are expressed in terms of the Euler angles $\alpha$, $\beta$, $\theta$ about the $X$, $Y$, $Z$ axes. R can then be written as follows:
$$R = R_x R_y R_z = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{bmatrix} \begin{bmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{bmatrix} \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \tag{2}$$
T is a 3 × 1 translation vector, which can be expressed as follows:
$$T = \begin{bmatrix} T_x & T_y & T_z \end{bmatrix}^{\mathsf{T}} \tag{3}$$
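As a concrete illustration of Equations (1)–(3), the following NumPy sketch assembles the rotation matrix from Euler angles, stacks it with the translation vector into a homogeneous transform, and maps a world point into the camera frame. The angle values and the test point are arbitrary placeholders, not values from this study.

```python
# Illustrative sketch of Eqs. (1)-(3): world -> camera rigid-body transform.
import numpy as np

def rotation_from_euler(alpha, beta, theta):
    """Compose R = Rx(alpha) @ Ry(beta) @ Rz(theta) as in Eq. (2)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    ct, st = np.cos(theta), np.sin(theta)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[ct, -st, 0], [st, ct, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

R = rotation_from_euler(0.1, -0.05, 0.02)     # placeholder Euler angles (rad)
T = np.array([141.6184, 82.5740, 317.6636])   # translation vector, Eq. (3)

# Homogeneous transform of Eq. (1)
M = np.eye(4)
M[:3, :3], M[:3, 3] = R, T

P_w = np.array([100.0, 50.0, 0.0, 1.0])       # world point (homogeneous)
P_c = M @ P_w                                 # camera-frame coordinates
```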
(2)
Conversion of the camera coordinate system to the image coordinate system
The conversion from CCS to IPCS is achieved through perspective projection, a process that maps points in three-dimensional space onto the two-dimensional image plane. According to the principle of pinhole imaging, the IPCS actually lies on the opposite side of the CCS, forming an inverted image. However, for ease of understanding and calculation, in practical applications the IPCS is usually reflected to the same side as the CCS [35], as shown in Figure 5.
Based on the triangle similarity theorem, the form of the homogeneous coordinate matrix is expressed as follows:
$$Z_C \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{bmatrix} \tag{4}$$
In addition, the construction problem of the camera itself can cause various nonlinear distortions during the imaging process. Radial distortion arises primarily from deviations in the lens shape from an ideal spherical profile, leading to positional discrepancies between pixels near the center and those at the edges of the image. Tangential distortion, on the other hand, occurs when the lens is not perfectly parallel to the image sensor plane [36]. These distortion types are visualized in Figure 6. To eliminate the impact of these distortions on image quality, distortion correction techniques are used during the coordinate transformation process to convert ideal image coordinates to actual image coordinates, thereby obtaining more accurate image information.
In the figure,
$Q(x, y)$ denotes the ideal image coordinates;
$Q_d(x', y')$ denotes the actual image coordinates;
$d_r$ represents the radial error;
$d_t$ represents the tangential error.
Let the distortion coefficient of the camera be $D$ and $f(D)$ be the correction function; then:
$$\begin{bmatrix} x' \\ y' \end{bmatrix} = f(D) \begin{bmatrix} x \\ y \end{bmatrix} \tag{5}$$
After substituting the distortion parameters, the formula is as follows:
$$\begin{cases} x' = x\left(1 + k_1 r^2 + k_2 r^4 + k_3 r^6\right) + 2 p_1 x y + p_2\left(r^2 + 2 x^2\right) \\ y' = y\left(1 + k_1 r^2 + k_2 r^4 + k_3 r^6\right) + p_1\left(r^2 + 2 y^2\right) + 2 p_2 x y \end{cases} \tag{6}$$
In the formula,
The coefficients k1, k2, and k3 correspond to radial distortion;
The terms p1 and p2 account for tangential distortion components;
The variable r denotes the Euclidean distance from any image point to the optical center.
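The distortion model of Equation (6) can be sketched directly in NumPy, as below. The snippet assumes normalized image coordinates and uses the $k_1$, $k_2$ values from Table 1; the remaining coefficients are taken from Equation (12) as printed, so their signs should be treated as illustrative.

```python
# Sketch of the distortion model in Eq. (6) (Brown-Conrady form, as also
# adopted by OpenCV); (x, y) are assumed to be normalized image coordinates.
import numpy as np

def distort(x, y, k1, k2, k3, p1, p2):
    r2 = x * x + y * y                              # squared radius r^2
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3  # radial term of Eq. (6)
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d

# Coefficients per Table 1 / Eq. (12); signs of k3, p1, p2 as printed
x_d, y_d = distort(0.1, -0.2, k1=0.1802, k2=-0.2232, k3=0.4325,
                   p1=0.0018, p2=0.0049)
```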
(3)
Conversion of the image coordinate system to pixel coordinates
As shown in Figure 7, in the image coordinate system, the coordinates of the point are (x, y); in the pixel coordinate system, the coordinates of the point are (u, v).
Let the coordinates of the origin O1 of the image coordinate system in the pixel coordinate system be (u0, v0). Through coordinate transformation, derive the conversion relationship between them:
$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} a_x & 0 & u_0 \\ 0 & a_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \tag{7}$$
In the formula,
ax is the scale factor of the pixel coordinate u-axis;
ay is the scale factor of the pixel coordinate v-axis.
Combining the above transformations yields the expression for the conversion from world coordinates to pixel coordinates:
$$Z_C \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} a_x & 0 & u_0 \\ 0 & a_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ O_3^{\mathsf{T}} & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 & 0 \\ 0 & f_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ O_3^{\mathsf{T}} & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} = M_1 M_2 \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{8}$$
In the formula,
$(u, v)$ is the corrected pixel coordinate;
$f_x$, $f_y$ are the normalized focal lengths along the u-axis and v-axis;
$(u_0, v_0)$ is the position of the principal point;
$M_1$, $M_2$ are the intrinsic and extrinsic parameter matrices of the camera.
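The projection chain of Equation (8) can be evaluated with a few lines of NumPy, as sketched below. The rotation here is a placeholder identity matrix; in a real pipeline, R and T come from the extrinsic calibration reported in the next subsection.

```python
# Sketch of Eq. (8): projecting a world point to pixel coordinates through
# the intrinsic matrix M1 and the extrinsic matrix M2.
import numpy as np

M1 = np.array([[757.6382, 0.0, 263.7319, 0.0],
               [0.0, 760.3847, 270.5986, 0.0],
               [0.0, 0.0, 1.0, 0.0]])            # intrinsics, padded to 3 x 4

M2 = np.eye(4)                                   # extrinsics [R T; 0 1]
M2[:3, :3] = np.eye(3)                           # placeholder rotation R
M2[:3, 3] = [141.6184, 82.5740, 317.6636]        # translation T, Eq. (11)

P_w = np.array([100.0, 50.0, 0.0, 1.0])          # world point, homogeneous
p = M1 @ (M2 @ P_w)                              # equals Zc * [u, v, 1]^T
u, v = p[0] / p[2], p[1] / p[2]                  # divide out the depth Zc
```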
(4)
Camera calibration results
Considering the practical constraints of the industrial environment and after evaluating multiple calibration techniques [37], this study employs Zhang’s calibration approach. The resulting camera parameters are summarized in Table 1.
The intrinsic parameter matrix $M_1$ of the camera is then given by Equation (9).
$$M_1 = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 757.6382 & 0 & 263.7319 \\ 0 & 760.3847 & 270.5986 \\ 0 & 0 & 1 \end{bmatrix} \tag{9}$$
The calibrated extrinsic rotation matrix R is given by Equation (10).
$$R = \begin{bmatrix} 0.9940 & 0.0306 & 0.1049 \\ 0.0197 & 0.9944 & 0.1040 \\ 0.1075 & 0.1013 & 0.9890 \end{bmatrix} \tag{10}$$
The translation vector T is as shown in Equation (11).
$$T = \begin{bmatrix} 141.6184 & 82.5740 & 317.6636 \end{bmatrix}^{\mathsf{T}} \tag{11}$$
The distortion coefficient matrix Q of the camera is as shown in Equation (12).
$$Q = \begin{bmatrix} k_1 & k_2 & p_1 & p_2 & k_3 \end{bmatrix} = \begin{bmatrix} 0.1802 & -0.2232 & 0.0018 & 0.0049 & 0.4325 \end{bmatrix} \tag{12}$$
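For reference, a minimal Zhang-style calibration with OpenCV might look like the sketch below. The 9 × 6 inner-corner checkerboard geometry, the 25 mm square size, and the image folder are assumptions for illustration; the paper does not specify the calibration target used.

```python
# Minimal sketch of Zhang's calibration using OpenCV.
import glob
import cv2
import numpy as np

pattern = (9, 6)                                   # inner corners (cols, rows)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 25.0

obj_points, img_points = [], []
for path in glob.glob("calib/*.png"):              # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Returns reprojection error, intrinsic matrix M1, distortion coefficients
# [k1, k2, p1, p2, k3], and per-view rotations/translations (extrinsics).
err, M1, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
R, _ = cv2.Rodrigues(rvecs[0])                     # rotation matrix for view 0
```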

4. Research on Plate Weld Recognition Based on Deep Learning

4.1. Preprocessing of Plate Weld Images

To enhance system stability, weld images were captured before the welding robot performed the welding action, reducing the impact of disturbances such as arc light, electromagnetic interference, and spatter on camera imaging and equipment communication. After the camera calibration experiment was completed, weld images were captured with the camera, as shown in Figure 8a. To improve robustness in challenging industrial settings, the captured weld images underwent preprocessing. Typical image artifacts encountered include salt-and-pepper, multiplicative, and Gaussian noise. During experimentation, the original weld images were converted to grayscale and artificially corrupted with salt-and-pepper noise for analysis, as depicted in Figure 8b,c.

4.2. Plate Weld Seam Image Denoising Processing

The preprocessed weld seam images were then denoised. Common denoising methods include mean filtering and median filtering; both were implemented with standard image processing tools. The processing results are shown in Figure 9 and Figure 10, which compare the weld seam images before and after denoising.
As verified by experimental comparison, median filtering produced better results. In this experiment, the weld images were denoised using median filtering, and the results of the 3 × 3 median filter template were selected for the subsequent experiments. The median filtering process is implemented through the following steps: (1) A sliding window of dimensions m × n is utilized; (2) The window traverses the image, with its center aligned sequentially with each pixel; (3) Within each window position, the intensity values of the covered pixels are sorted in ascending order; (4) The median value of the sorted sequence is identified; (5) This median value is assigned to the corresponding pixel in the output image. The operation of the median filter is formally defined as $g(x, y) = \mathrm{med}\{ f(x - k, y - l),\ (k, l) \in W \}$, where $f(x, y)$ represents the original image, $g(x, y)$ denotes the filtered image, and $W$ is a two-dimensional template commonly implemented in sizes such as 3 × 3, 5 × 5, 7 × 7, or 9 × 9. An illustration of median filtering employing a 3 × 3 window is provided in Figure 11.
Once all positions with a full nine-pixel neighborhood have been processed, pixels at the image edges are found to have fewer than nine neighbors. To eliminate this edge noise, an edge approximation method is proposed: the value of an edge-position pixel is taken as the approximate value of the adjacent pixel whose median filtering has already been completed. The principle is shown in Figure 12.
The adoption of the edge approximation approach significantly suppresses edge artifacts in weld imagery, enhancing both the quality and definition of the source images while minimizing interfering elements for subsequent processing and analysis stages. These enhanced images serve as input to the deep learning model.
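A minimal sketch of the 3 × 3 median filter with border handling in the spirit of the edge approximation method is given below. Padding with the nearest pixel (NumPy's "edge" mode) is used here as an analogue of reusing adjacent filtered values, and the input filename is hypothetical.

```python
# Sketch of 3x3 median filtering, steps (1)-(5), with edge handling.
import cv2
import numpy as np

def median_filter_3x3(f):
    """Slide a 3x3 window, sort the nine covered values, keep the median.
    Borders are padded with the nearest pixel, mirroring the idea of
    approximating edge pixels from already-filtered neighbors."""
    padded = np.pad(f, 1, mode="edge")
    g = np.empty_like(f)
    for y in range(f.shape[0]):
        for x in range(f.shape[1]):
            g[y, x] = np.median(padded[y:y + 3, x:x + 3])
    return g

img = cv2.imread("weld_gray.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
denoised = median_filter_3x3(img)
# The OpenCV equivalent (with replicated borders) is a single call:
denoised_cv = cv2.medianBlur(img, 3)
```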

4.3. Plate Weld Recognition Based on Deep Learning

In image processing, typical edge detection operators comprise the Canny, Log, Sobel, Roberts, and Prewitt operators [38]. Custom code was developed using image processing libraries, and experimental validation was performed to compare their performance. The outcomes, illustrated in Figure 13, highlight the differential effects of each operator when applied to weld images.
Experiments show that many interfering factors and numerous discrete points remain, and the detected weld seam trajectories are complex. To reduce interference and identify welds more accurately and efficiently, this study proposes a two-stage fusion strategy of “deep learning localization + optimized operator refinement”. First, deep learning object detection is used to quickly and robustly locate the approximate area of the weld seam, generating a dynamic bounding box as an accurate ROI. Then, within this high-confidence ROI, the optimized operator is applied for high-precision, high-efficiency pixel-level weld seam edge extraction.
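The two-stage flow can be sketched as follows, assuming an ultralytics-style YOLO interface and a hypothetical weights file for the modified model; only the ROI handoff is shown here, with the refinement stage detailed in Section 4.3.2.

```python
# Sketch of the two-stage strategy: deep learning coarse localization
# followed by operator-based refinement inside the ROI.
import cv2
from ultralytics import YOLO

model = YOLO("shuffle_yolov8.pt")                 # hypothetical trained weights
img = cv2.imread("weld.png")                      # hypothetical weld image

# Stage 1: fast, robust coarse localization -> dynamic bounding box as ROI
result = model(img)[0]
x1, y1, x2, y2 = map(int, result.boxes.xyxy[0])   # first detected box
roi = cv2.cvtColor(img[y1:y2, x1:x2], cv2.COLOR_BGR2GRAY)

# Stage 2: the grayscale ROI is handed to the optimized Prewitt stage
# (Section 4.3.2) for pixel-level edge extraction.
```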

4.3.1. Weld Region Localization Based on Deep Learning

YOLO algorithms, underpinned by deep learning, are extensively employed for object detection across diverse applications owing to their efficient architecture and capability for real-time inference. To meet the high real-time and low resource consumption requirements of industrial deployment, this study proposes a lightweight and efficient Shuffle-YOLOv8 model based on YOLOv8; its architecture is shown in Figure 14.
The CSPDarknet network structure adopted by YOLOv8 strikes a good balance between accuracy and speed, but its parameter count and computational complexity are still high for deployment on embedded devices with limited computing resources. To further improve the model’s efficiency, we removed the original CSPDarknet structure from YOLOv8 and then used two ShuffleNet V2 modules to build the new feature extraction network, as shown in the Shuffle Backbone section in Figure 14. Among them, Shuffle Down and Shuffle Basic are, respectively, the downsampling module and the basic module in ShuffleNet V2, and the structural diagrams of the two modules are shown in Figure 15. The downsampling module is placed at the beginning of each network phase to halve the spatial resolution of the feature map and double the number of channels, achieving efficient compression of the spatial dimension. The base modules are stacked within each network stage for deep feature transformation and enhancement. Its unique Channel Split and Channel Shuffle operations facilitate the exchange of information between groups at an extremely low computational cost, significantly enhancing the model’s representational ability.
The SPPF module from the original model was retained during the design of the Shuffle Backbone. The module captures multi-scale features by applying multiple $k_i \times k_i$ max-pooling layers of different sizes in parallel and concatenating and fusing their outputs, as shown in Formula (13).
$$Y = \mathrm{Concat}\left[ \mathrm{MaxPool}(X, k_1),\ \mathrm{MaxPool}(X, k_2),\ \mathrm{MaxPool}(X, k_3),\ X \right] \tag{13}$$
Here, $k_1$, $k_2$, and $k_3$ are the increasing pooling kernel sizes. This operation significantly enlarges the receptive field of each feature point in the output feature map $Y$, providing rich context information, which is crucial for ensuring the accuracy of object detection, especially for large objects.
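A PyTorch sketch of the pooling-and-concatenation operation in Formula (13) is shown below. It follows the parallel form written in the formula; the kernel sizes (5, 9, 13) are illustrative assumptions, not values reported in this paper.

```python
# PyTorch sketch of the multi-scale pooling operation of Formula (13).
import torch
import torch.nn as nn

class SPPFBlock(nn.Module):
    def __init__(self, kernels=(5, 9, 13)):       # assumed kernel sizes
        super().__init__()
        # stride-1 pooling with "same" padding keeps the spatial size fixed
        self.pools = nn.ModuleList(
            nn.MaxPool2d(k, stride=1, padding=k // 2) for k in kernels)

    def forward(self, x):
        # Y = Concat[MaxPool(X,k1), MaxPool(X,k2), MaxPool(X,k3), X]
        return torch.cat([p(x) for p in self.pools] + [x], dim=1)

y = SPPFBlock()(torch.randn(1, 256, 20, 20))       # -> (1, 1024, 20, 20)
```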
Shuffle-YOLOv8 utilizes the Task Aligned Assigner framework to assign positive samples during loss computation and incorporates Distribution Focal Loss. For classification loss, it uses Cross-Entropy Loss, while for bounding box regression, it adopts DFL Loss + CIoU Loss [39]. The loss function for bounding box regression takes into account three essential geometric properties: the area of overlap, the distance between center points, and the aspect ratio [40]. CIoU [41] incorporates all three of these factors, with the calculation Formulas (14)–(16) as follows.
$$L_{CIoU} = L_{IoU} - \frac{\rho^2\left(b_g, b_p\right)}{c^2} - \alpha V \tag{14}$$
$$V = \frac{4}{\pi^2}\left( \arctan\frac{w_g}{h_g} - \arctan\frac{w_p}{h_p} \right)^2 \tag{15}$$
$$\alpha = \frac{V}{\left(1 - L_{IoU}\right) + V} \tag{16}$$
In the formulas, $\rho$ denotes the Euclidean distance between the centroids of the two bounding boxes; $b_g$ and $b_p$ represent the ground-truth and predicted bounding boxes; $c$ is a scale factor used to adjust for the effect of the center point distance term; $V$ quantifies the consistency in aspect ratio by evaluating the discrepancy between the predicted and ground-truth bounding boxes; $\alpha$ serves as a weighting coefficient that balances the contribution of the aspect ratio consistency term $V$.
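For concreteness, the sketch below computes the CIoU terms of Formulas (14)–(16) in the standard formulation of Zheng et al. [39]; boxes are given as (cx, cy, w, h) tensors, and the final line returns the conventional 1 − CIoU loss form.

```python
# Sketch of the CIoU terms of Eqs. (14)-(16) for axis-aligned boxes.
import math
import torch

def ciou_loss(pred, gt, eps=1e-7):
    # Corner coordinates from (cx, cy, w, h)
    p_x1, p_y1 = pred[0] - pred[2] / 2, pred[1] - pred[3] / 2
    p_x2, p_y2 = pred[0] + pred[2] / 2, pred[1] + pred[3] / 2
    g_x1, g_y1 = gt[0] - gt[2] / 2, gt[1] - gt[3] / 2
    g_x2, g_y2 = gt[0] + gt[2] / 2, gt[1] + gt[3] / 2

    # Intersection-over-union
    inter = (torch.min(p_x2, g_x2) - torch.max(p_x1, g_x1)).clamp(0) * \
            (torch.min(p_y2, g_y2) - torch.max(p_y1, g_y1)).clamp(0)
    union = pred[2] * pred[3] + gt[2] * gt[3] - inter
    iou = inter / (union + eps)

    rho2 = (pred[0] - gt[0])**2 + (pred[1] - gt[1])**2       # center distance
    c2 = (torch.max(p_x2, g_x2) - torch.min(p_x1, g_x1))**2 + \
         (torch.max(p_y2, g_y2) - torch.min(p_y1, g_y1))**2  # enclosing diag.
    v = (4 / math.pi**2) * (torch.atan(gt[2] / gt[3]) -
                            torch.atan(pred[2] / pred[3]))**2  # Eq. (15)
    alpha = v / (1 - iou + v + eps)                             # Eq. (16)
    return 1 - iou + rho2 / (c2 + eps) + alpha * v              # loss form

loss = ciou_loss(torch.tensor([50., 50., 20., 10.]),
                 torch.tensor([52., 49., 18., 12.]))
```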
Compared with the widely used YOLOv8 model, Shuffle-YOLOv8 reduces the network parameters from 11.2 M to 3.7 M, the floating-point operations from 28.6 GFLOPs to 9.5 GFLOPs, and the model size from 22.4 MB to 7.2 MB.

4.3.2. Precise Extraction of Weld Edges Within ROI Regions Based on Optimization Operators

Within the weld ROI localized by Shuffle-YOLOv8, precise edge extraction is achieved using the optimized Prewitt operator. The Prewitt operator convolves the grayscale image I with a horizontal convolution kernel and a vertical convolution kernel, as shown in Formulas (17) and (18).
$$G_x = \begin{bmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{bmatrix} * I \tag{17}$$
$$G_y = \begin{bmatrix} -1 & -1 & -1 \\ 0 & 0 & 0 \\ 1 & 1 & 1 \end{bmatrix} * I \tag{18}$$
where I is the grayscale image of the ROI region, and * represents the convolution operation.
The edge gradient amplitude G and direction θ are determined by Formulas (19) and (20), respectively.
$$G = \sqrt{G_x^2 + G_y^2} \tag{19}$$
$$\theta = \arctan\left( \frac{G_y}{G_x} \right) \tag{20}$$
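Applied with OpenCV, Formulas (17)–(20) reduce to two filter2D calls plus elementwise arithmetic, as in the sketch below; the input filename is hypothetical, and arctan2 is used in place of arctan to remain defined where $G_x = 0$.

```python
# Sketch of Eqs. (17)-(20): Prewitt convolution, gradient magnitude, direction.
import cv2
import numpy as np

roi = cv2.imread("roi_gray.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

Kx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], np.float32)  # Eq. (17)
Ky = np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]], np.float32)  # Eq. (18)

Gx = cv2.filter2D(roi, -1, Kx)
Gy = cv2.filter2D(roi, -1, Ky)
G = np.sqrt(Gx**2 + Gy**2)             # Eq. (19), gradient magnitude
theta = np.arctan2(Gy, Gx)             # Eq. (20), gradient direction
```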
To enhance detection accuracy, three key improvements are introduced in this method. First, based on the gray-scale distribution characteristics of the ROI region, the Otsu algorithm is used to dynamically calculate the optimal threshold for adaptive binarization, as shown in Formula (21).
$$T = \arg\max_{t}\ \sigma_b^2(t) \tag{21}$$
Here, $\sigma_b^2(t)$ represents the inter-class variance and $t$ is the candidate threshold.
Next, the direction weight system is constructed from the geometric features (width W, height H) of the bounding box output by Shuffle-YOLOv8, as shown in Formulas (22) and (23).
$$w_x = \frac{W}{W + H} \tag{22}$$
$$w_y = \frac{H}{W + H} \tag{23}$$
To enhance the main direction response of the weld seam, the gradient magnitudes are corrected to
$$G' = \sqrt{w_x G_x^2 + w_y G_y^2} \tag{24}$$
Finally, a morphological closing operation is performed on the binarized edge map, as shown in Formula (25).
$$E_{close} = \left( E \oplus B \right) \ominus B \tag{25}$$
where $E$ is the binary edge map, $B$ is the 3 × 3 rectangular structuring element, and $\oplus$ and $\ominus$ denote the dilation and erosion operations, respectively. This operation effectively fills tiny breaks along the edge and smooths the contour. The optimized algorithm combines the localization advantage of deep learning with the computational efficiency of the optimized operator, outputting continuous and precise edge features and laying the foundation for subsequent trajectory coordinate generation.
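Putting the three refinements together, a sketch of the optimized-operator stage might read as follows; the normalization of the weighted gradient map to 8-bit before Otsu thresholding is an implementation assumption.

```python
# Sketch of the refinements in Eqs. (21)-(25): Otsu thresholding, direction
# weighting from the detection box geometry, and morphological closing.
import cv2
import numpy as np

def refine_edges(roi, box_w, box_h):
    roi_f = roi.astype(np.float32)
    Kx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], np.float32)
    Ky = np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]], np.float32)
    Gx, Gy = cv2.filter2D(roi_f, -1, Kx), cv2.filter2D(roi_f, -1, Ky)

    # Direction weights from the bounding-box geometry, Eqs. (22)-(24)
    wx, wy = box_w / (box_w + box_h), box_h / (box_w + box_h)
    G = np.sqrt(wx * Gx**2 + wy * Gy**2)

    # Adaptive binarization with Otsu's threshold, Eq. (21)
    G8 = cv2.normalize(G, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, E = cv2.threshold(G8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Morphological closing (dilate, then erode) with a 3x3 element, Eq. (25)
    B = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    return cv2.morphologyEx(E, cv2.MORPH_CLOSE, B)
```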

5. Automatic Generation of Plate Weld Coordinates

5.1. Weld Trajectory Coordinate Generation Process

To achieve automatic generation of weld trajectories for heat exchanger plates, this study systematically executed the following coordinate generation procedure based on the aforementioned preprocessing and edge recognition results.
First, a 3 × 3 median filter algorithm was applied to the original weld image for denoising, effectively suppressing impulse noise while preserving edge features. The processing result is shown in Figure 16a. Subsequently, the Prewitt operator was employed for edge detection with its convolution kernel threshold set to 0.1, thereby extracting the preliminary contour of the weld region. The detection outcome is presented in Figure 16b.
Building upon this, a weld trajectory coordinate generation program based on Hough transform was developed. The program initially sets the image coordinate axis range and displays the grid, then performs Hough transform on the binarized image obtained from edge detection to acquire the transformation matrix in parameter space. For precise identification of linear weld features, the key parameters of Hough transform were configured as follows: accumulator distance resolution ρ = 1 pixel, angular resolution θ = 1°; peak detection threshold = 150; minimum length of valid line segments = 100 pixels; maximum allowable gap between connectable line segments = 10 pixels. Based on this threshold, the program selects significant peak points in the transformation matrix and calculates their corresponding line parameters. Finally, the HoughLines function completes line segment detection, and all identified weld trajectory segments are superimposed and plotted on the original image. The final detection result clearly displays the continuous weld path, as shown in Figure 16c.
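Since the stated parameters map directly onto OpenCV's probabilistic Hough routine (the variant that accepts a minimum segment length and maximum gap), the detection step can be sketched as follows; the edge-map filename is hypothetical.

```python
# Sketch of the Hough-based line detection with the parameters stated above.
import cv2
import numpy as np

edges = cv2.imread("weld_edges.png", cv2.IMREAD_GRAYSCALE)  # binary edge map

lines = cv2.HoughLinesP(edges,
                        rho=1,              # distance resolution: 1 pixel
                        theta=np.pi / 180,  # angular resolution: 1 degree
                        threshold=150,      # accumulator peak threshold
                        minLineLength=100,  # discard shorter segments
                        maxLineGap=10)      # bridge gaps up to 10 pixels

# Superimpose the detected weld segments on the image
overlay = cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR)
if lines is not None:
    for x1, y1, x2, y2 in lines.reshape(-1, 4):
        cv2.line(overlay, (x1, y1), (x2, y2), (0, 0, 255), 2)
```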
To further quantify the weld position, a curve fitting procedure was executed based on the discrete trajectory point coordinates obtained from Figure 16c. Using this series of coordinate points as input, a high-precision weld trajectory curve was derived through fitting algorithms, calculating the average vertical coordinate of the weld centerline as 39.35 mm. At this stage, the recognition system packages and transmits the generated coordinate data in the native instruction format supported by the welding robot, directly triggering the robot to initiate welding programming. The welding robot subsequently receives and parses this data, seamlessly executing automated welding tasks. This approach fundamentally eliminates data barriers between the recognition system and the robot system, avoiding information silos. Experimental verification confirms that this method significantly enhances the overall operational efficiency of robotic welding.
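The final fitting and handoff step can be sketched as below; the trajectory points are placeholder values in the spirit of Table 4, and the robot instruction layout is purely hypothetical, standing in for the robot's native format.

```python
# Sketch of trajectory fitting and packaging for the robot controller.
import numpy as np

# Trajectory points from the Hough stage (placeholder values, mm)
pts = np.array([[625.85, 39.20], [731.61, 39.23], [837.43, 39.20]])

coef = np.polyfit(pts[:, 0], pts[:, 1], deg=1)   # least-squares line fit
y_mean = pts[:, 1].mean()                        # mean vertical coordinate

# Hypothetical message layout standing in for the robot's native format
msg = ";".join(f"MOVL X={x:.2f} Y={y:.2f}" for x, y in pts)
```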

5.2. Parameter Sensitivity Analysis

To ensure the scientific basis and robustness of key parameter selection in the aforementioned weld trajectory coordinate generation algorithm, this study further conducted a sensitivity analysis of critical parameters in the Hough transform and median filter window size. By fixing other parameters while sequentially varying the target parameter, the mean absolute error (MAE) between the detected weld path and the ground truth was calculated to assess parameter sensitivity. The analysis results are shown in Figure 17.
Figure 17a demonstrates that the Hough transform peak threshold achieves the minimum error (0.13 mm) at 150. When the threshold falls below 120, erroneous interference segments are introduced due to noise peaks in the accumulator being misdetected, leading to a significant increase in error. Conversely, when the threshold exceeds 180, peaks corresponding to actual weld seams are filtered out, resulting in missed detections and increased error. The asymmetric error curve indicates that low threshold values have a more detrimental effect on performance than high threshold values.
The sensitivity analysis of minimum line length, as shown in Figure 17b, indicates that 100 pixels provides optimal performance. Lengths below 80 pixels produce overly fragmented detection results with numerous meaningless short segments, while lengths above 120 pixels cause the algorithm to miss shorter valid weld segments, compromising path integrity. The steeper error increase on the left side of the optimal value demonstrates that excessive fragmentation has a more detrimental impact on detection accuracy than missing shorter segments.
The maximum line gap sensitivity analysis presented in Figure 17c reveals that a 10-pixel gap achieves optimal connection performance. Gaps smaller than 5 pixels cause excessive segmentation of paths that should be connected, while gaps larger than 15 pixels prevent proper connection of broken paths. The asymmetric characteristic of the error curve indicates that insufficient gap tolerance (over-segmentation) is more problematic than excessive gap tolerance.
The comparison of median filter window sizes in Figure 17d confirms the superiority of the 3 × 3 window, which effectively suppresses noise while maximally preserving weld edge details. The 5 × 5 window provides better noise reduction but causes slight edge blurring, whereas the 7 × 7 window leads to significant loss of edge information and substantially increased error.

6. Experiments and Results Analysis

6.1. Dataset

This study utilized a custom dataset acquired through collaboration with Shandong Wangtai Technology Co., Ltd. (Zibo City, Shandong Province, China). Image acquisition was carried out with an industrial-grade camera, yielding 1000 raw images capturing weld seam regions on plate heat exchangers. To improve model generalization and robustness, augmentation techniques described in the previous image processing section were applied to the dataset. Initially, the original images were randomly split into two subsets at a 6:4 ratio. The first subset then underwent dedicated augmentation procedures. Upon completion, the first subset contained 3000 augmented images, while the second subset remained unchanged with 400 images, forming a combined dataset of 3400 samples. All images were then annotated using the LabelImg tool to demarcate weld regions, with resulting labels stored in .txt files. The augmented subset was further partitioned into 2600 images for training and 400 for validation, facilitating model training and hyperparameter adjustment. The remaining 400-image subset was reserved exclusively as a test set for evaluating model performance.

6.2. Training Configuration and Parameters

The experiments in this study were conducted on a Windows 11 operating system, with the specific software and hardware configurations detailed in Table 2. It should be noted that although an NVIDIA GeForce RTX 4060 Ti GPU was used for model training and validation, this was primarily for efficiency during the development phase. The proposed Shuffle-YOLOv8 model incorporates a lightweight ShuffleNetV2 backbone network, with significantly fewer parameters and computational complexity compared to the standard YOLOv8 model. This design demonstrates strong computational efficiency in practical industrial deployment scenarios, enabling compatibility with common edge computing devices and embedded platforms. All models were trained on the weld starting point dataset under identical hardware and software conditions. The experimental hyperparameter settings are listed in Table 3, where the AdamW optimizer and ReLU activation function were employed.
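Under the assumption of an ultralytics-style training interface and a hypothetical dataset configuration file, a training run with the Table 3 hyperparameters might be launched as follows.

```python
# Sketch of a training run using the Table 3 hyperparameters. The model
# definition file and dataset config path are hypothetical placeholders.
from ultralytics import YOLO

model = YOLO("shuffle_yolov8.yaml")     # custom model definition (assumed)
model.train(data="weld.yaml",           # dataset split of Section 6.1
            epochs=200, batch=8, imgsz=640,
            lr0=0.0006, momentum=0.9, optimizer="AdamW")
```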

6.3. Training Results

As shown in Figure 18, after 200 training epochs, the mAP@0.5 and the precision across all categories were 0.994 and 0.981, respectively, and the recall was 0.98. The extraction results for the weld area clamped by the fixture are shown in Figure 19.

6.4. Weld Area Edge Detection Experiment

Edge detection experiments were conducted in areas with high confidence ROI provided by deep learning, and the results are shown in Figure 20.
Experimental verification shows that the improved Prewitt operator yields better edge detection results. Compared with the other operators, it retains a more complete weld trajectory at the center of the fixture and is better suited to weld area images with heavy noise and variable scenes in the welding environment.

6.5. Experimental Study on Plate Weld Measurement

To validate the spatial accuracy of the extracted weld paths on the heat exchanger, laboratory measurements of the plate weld seam were carried out using a 3D stereo scanner integrated with welding fixture equipment. This experiment provides a practical foundation for subsequent research on the implementation of welding robotics in industrial environments. A view of the experimental setup is provided in Figure 21.
The complete model of the weld seam was scanned by a 3D scanner, as shown in Figure 22.
Ten points were selected from the middle position of the weld seam in Figure 22, and their positions were measured. The measurement results are shown in Table 4.
Based on the vertical coordinate data in Table 4, the average value is calculated as 39.22 mm. The programmatically calculated data are compared with the prototype experimental data using the obtained experimental point coordinates, as shown in Figure 23.
Figure 23 shows the comparison between the programmatically calculated data (represented by the solid orange line) and the prototype experimental data (represented by the solid green line). The horizontal axis represents the weld seam’s horizontal position coordinate, while the vertical axis represents the weld seam’s vertical position coordinate. The curve of the programmatically calculated data is consistently slightly higher than that of the prototype experimental data, yet both curves exhibit the same trend of variation. The deviation between all corresponding measurement points is controlled within 0.2 mm, demonstrating good consistency between the algorithm output and the actual measurement results. Although there is a minor discrepancy between the programmatically calculated average vertical coordinate of 39.35 mm and the experimentally measured average of 39.22 mm, the calculated mean error is only 0.33%, which fully meets the welding accuracy requirements. These results validate the feasibility of the Hough transform-based weld seam trajectory coordinate detection method.
The welding task was carried out by a six-axis industrial robot (model LH1500-B-6), which is compact, highly flexible, and runs smoothly. The robot consists of six links hinged in series and can perform processes such as welding, spraying, palletizing, and stamping. Integrated with the welding clamping equipment, experimental research on weld seam welding for heat exchanger plates was conducted in the laboratory. The experimental setup was prepared, coordinate transformation was performed on the obtained weld trajectory position coordinates, and the program for the LH1500-B-6 welding robot was developed. The welding experiment site is shown in Figure 24. During the welding process, the robot operated smoothly with a stable arc, and spatter was significantly reduced compared to conventional manual teach-based welding. This improvement is attributed to the precise trajectory coordinates, which enabled the torch posture and welding parameters to be maintained at their optimal states. As shown in Figure 25, the completed weld exhibits uniform and continuous formation without noticeable macro-defects such as undercut, overlap, or lack of penetration. This demonstrates that the welding trajectory generated by the proposed method is stable and accurate, laying a foundation for high-quality weld formation.
Through this experimental research on heat exchanger plate weld welding, the seamless application of machine vision and welding robots to the heat exchanger welding process was achieved. Recognition and welding information were fused quickly, welding efficiency increased by more than 100%, welding quality was high, welding cost was reduced, and the system showed strong real-time performance, broad compatibility, and low hardware requirements.

7. Conclusions

This paper addresses issues such as low automation and information silos in the welding of large-format, long-distance heat exchanger plates. It proposes an intelligent weld seam recognition and trajectory generation method that integrates deep learning with optimized operators, which has been successfully applied to a welding robot system. The main conclusions are as follows:
(1)
A camera calibration model based on coordinate transformation was constructed, achieving precise camera calibration and image correction, thereby providing data support for establishing the positional transformation of the welding robot. The calibration result showed an average reprojection error of only 0.14 pixels, laying a solid foundation for subsequent high-precision recognition.
(2)
A two-stage fusion strategy of “deep learning coarse localization + optimized operator fine detection” was proposed for the intelligent recognition of weld seam image edges. This method uses the lightweight Shuffle-YOLOv8 model to quickly and robustly locate the approximate area of the weld seam, and then applies an optimized Prewitt operator within this ROI for high-precision, high-efficiency pixel-level edge extraction. This strategy significantly reduces the computational load and improves processing efficiency. Experiments demonstrate that the extracted weld trajectory between heat exchanger plates is accurate, with an average error of only 0.33%, meeting the precision requirements for welding.
(3)
A weld trajectory coordinate detection and generation program based on the Hough transform algorithm was developed. This solves problems such as the low efficiency of robot teach-based welding and information silos between the recognition and welding systems. It enables high-real-time, high-quality automated weld seam identification and detection for large-format, long-distance heat exchanger plates, ultimately more than doubling the overall welding efficiency.
Despite achieving the expected results, this study has certain limitations. Firstly, the robustness of the current algorithm under extreme welding conditions, such as strong arc light and heavy smoke interference, requires further verification. Secondly, the validation in this study primarily focused on linear weld seams, and its adaptability to complex curved weld seams needs further in-depth exploration. Future work will focus on two main aspects: first, investigating stronger image preprocessing and anti-interference algorithms to enhance system stability in harsh environments; second, extending the application of this method to more types of weld seams, such as curved and lap joints.

Author Contributions

Conceptualization, M.H. and N.W.; Methodology, M.H.; Software, F.X.; Validation, F.X.; Formal analysis, N.W. and L.L.; Investigation, N.W.; Resources, X.Y.; Data curation, F.X.; Writing—original draft, F.X. and M.H.; Writing—review and editing, M.H.; Visualization, L.L.; Supervision, L.L. and X.Y.; Project administration, X.Y.; Funding acquisition, X.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wu, N.; Han, C.; Wang, J. Research Review and Application of Printed Circuit Heat Exchanger. Liaoning Chem. Ind. 2023, 52, 289–291. [Google Scholar] [CrossRef]
  2. Wang, H. Research on Vision Guidance System of Plate Heat Exchanger Feeding Robot Based on 3D Images. Master’s Thesis, China Jiliang University, Hangzhou, China, 2021. [Google Scholar]
  3. Sun, P. Research on Plate Depth Detection System of Plate Heat Exchanger. Master’s Thesis, Shenyang University of Technology, Shenyang, China, 2022. [Google Scholar]
  4. Liu, J. A Review of the Current Status and Future Trends of Heat Exchangers. China Equip. Eng. 2022, 21, 261–263. [Google Scholar]
  5. Xu, K.; Qin, K.; Wu, H.; Smith, R. A New Computer-Aided Optimization-Based Method for the Design of Single Multi-Pass Plate Heat Exchangers. Processes 2022, 10, 767. [Google Scholar] [CrossRef]
  6. Nahes, A.L.M.; Bagajewicz, M.J.; Costa, A.L.H. Simulation of Gasketed-Plate Heat Exchangers Using a Generalized Model with Variable Physical Properties. Appl. Therm. Eng. 2022, 217, 119197. [Google Scholar] [CrossRef]
  7. Wu, Y. Welding Manual: Welding Methods and Equipment; China Machine Press Co., Ltd.: Beijing, China, 2016. [Google Scholar]
  8. Acherjee, B. Laser Transmission Welding of Polymers—A Review on Process Fundamentals, Material Attributes, Weldability, and Welding Techniques. J. Manuf. Process 2020, 60, 227–246. [Google Scholar] [CrossRef]
  9. Liu, Y. Analysis of the Application of Automatic Welding Technology in Mechanical Processing. South. Agric. Mach. 2020, 23, 114–116. [Google Scholar]
  10. Orłowska, M.; Pixner, F.; Majchrowicz, K.; Enzinger, N.; Olejnik, L.; Lewandowska, M. Application of Electron Beam Welding Technique for Joining Ultrafine-Grained Aluminum Plates. Metall. Mater. Trans. A 2022, 53, 18–24. [Google Scholar] [CrossRef]
  11. Wang, N.; Yang, L.; Xue, D. Design and Experimental Study of Welding Clamping Device Forheat Exchanger Plates. Res. Mod. Manuf. Eng. 2023, 6, 1–5. [Google Scholar] [CrossRef]
  12. Yang, Y.; Shi, Y. The Development of Welding of Dissimilar Material—Aluminum and Steel. J. Chang. Univ. 2011, 2, 21–25. [Google Scholar]
  13. Zhang, S. Analysis of Impact of Mechanical Design and Manufacturing and Automation Industry under the Background of German Industry 4.0. Times Agric. Mach. 2016, 12, 8–10. [Google Scholar]
  14. Lei, H. Research on the Transformation Strategies of China’s Manufacturing Industry under the Wave of Industry 4.0. Sci. Technol. Innov. Her. 2017, 16, 125–127. [Google Scholar] [CrossRef]
  15. Tao, F.; Anwer, N.; Liu, A.; Wang, L.; Nee, A.Y.C.; Li, L.; Zhang, M. Digital Twin towards Smart Manufacturing and Industry 4.0. J. Manuf. Syst. 2021, 58, 1–2. [Google Scholar] [CrossRef]
  16. Lei, J. Research on 3D Profile Detection Methods and Key Technologies for High-End Hydraulic Components Based on Machine Vision; Anhui Jianzhu University: Hefei, China, 2021. [Google Scholar]
  17. Fan, H. Application of Machine Vision Technology in Industrial Inspection. Digit. Commun. World 2020, 12, 156–157. [Google Scholar] [CrossRef]
  18. Xu, H. Research on Machine Vision Calibration and Object Detection Tracking Methods & Application. Ph.D. Thesis, Hunan University, Changsha, China, 2011. [Google Scholar]
  19. Zhang, G.; Zhu, J. Application Status of Visual Imaging Technology for Intelligent Manufacturing Equipment. Mod. Comput. 2021, 27, 84–89. [Google Scholar] [CrossRef]
  20. Xiao, G.; Li, Y.; Xia, Q.; Cheng, X.; Chen, W. Research on the On-Line Dimensional Accuracy Measurement Method of Conical Spun Workpieces Based on Machine Vision Technology. Measurement 2019, 148, 106881. [Google Scholar] [CrossRef]
  21. AL-Karkhi, N.K.; Abbood, W.T.; Khalid, E.A.; Jameel Al-Tamimi, A.N.; Kudhair, A.A.; Abdullah, O.I. Intelligent Robotic Welding Based on a Computer Vision Technology Approach. Computers 2022, 11, 155. [Google Scholar] [CrossRef]
  22. Sun, Z. Research on Intelligent Manufacturing Technology of Robot Welding Based on Image Processing. Master’s Thesis, Shandong University of Technology, Zibo, China, 2020. [Google Scholar]
  23. Xi, W.; Zheng, M.; Yan, J. Industrial Robot Tracking Complex Seam by Vision. J. Southeast Univ. (Nat. Sci. Ed.) 2000, 30, 79–83. [Google Scholar]
  24. Xu, H.; Li, G.; Ma, P. Seam Image Recognition Preprocessing Based on Machine Vision. J. Guangxi Univ. (Nat. Sci. Ed.) 2017, 42, 1693–1700. [Google Scholar] [CrossRef]
  25. Chen, W.; Zhang, T.; Cui, M. Study of Robot Trajectory Based on Quintic Polynomial Transition. Coal Mine Mach. 2011, 32, 49–50. [Google Scholar] [CrossRef]
  26. Dang, H.; Zhang, M.; Hou, J. Research on Trajectory Planning Algorithms for Industrial Assembly Robots. Mod. Electron. Tech. 2019, 42, 63–67. [Google Scholar] [CrossRef]
  27. You, Y. Research on Welding Technology Based on Vision Guidance of Line Laser Camera. Master’s Thesis, Soochow University, Suzhou, China, 2021. [Google Scholar]
  28. Dinham, M.; Fang, G. Detection of Fillet Weld Joints Using an Adaptive Line Growing Algorithm for Robotic Arc Welding. Robot. Comput.-Integr. Manuf. 2014, 30, 229–243. [Google Scholar] [CrossRef]
  29. Ciszak, O.; Juszkiewicz, J.; Suszyński, M. Programming of Industrial Robots Using the Recognition of Geometric Signs in Flexible Welding Process. Symmetry 2020, 12, 1429. [Google Scholar] [CrossRef]
  30. Ebrahimpour, R.; Fesharakifard, R.; Rezaei, S.M. An Adaptive Approach to Compensate Seam Tracking Error in Robotic Welding Process by a Moving Fixture. Int. J. Adv. Robot. Syst. 2018, 15. [Google Scholar] [CrossRef]
  31. Banafian, N.; Fesharakifard, R.; Menhaj, M.B. Precise Seam Tracking in Robotic Welding by an Improved Image Processing Approach. Int. J. Adv. Manuf. Technol. 2021, 114, 251–270. [Google Scholar] [CrossRef]
  32. Charalampos Loukas, N.; Warner, V.; Jones, R.; MacLeod, C.N.; Vasilev, M.; Mohseni, E.; Dobie, G.; Sibson, J.; Pierce, S.G.; Gachagan, A. A Sensor Enabled Robotic Strategy for Automated Defect-Free Multi-Pass High-Integrity Welding. Mater. Des. 2022, 224, 111424. [Google Scholar] [CrossRef]
  33. Nilsen, M.; Sikström, F. Integrated Vision-Based Seam Tracking System for Robotic Laser Welding of Curved Closed Square Butt Joints. Int. J. Adv. Manuf. Technol. 2025, 137, 3387–3399. [Google Scholar] [CrossRef]
  34. Xu, P.; Tang, X.; Li, L. A Visual Sensing Robotic Seam Tracking System. J. Shanghai Jiaotong Univ. 2008, 42, 28–31. [Google Scholar] [CrossRef]
  35. Ge, D.; Yao, X.; Li, K. Calibration of Binocular Stereo-Vision System. Mech. Des. Manuf. 2010, 6, 188–189. [Google Scholar] [CrossRef]
  36. Hu, Z.; Wu, F. A Review on Some Active Vision Based Camera Calibration Techniques. Chin. J. Comput. 2002, 25, 1149–1156. [Google Scholar]
  37. Zhang, Z.; He, L.; Wu, Y.; Zhang, F. Camera Calibration Approach Using Polarized Light Based on High-Frequency Component Variance Weighted Entropy for Pose Measurement. Meas. Sci. Technol. 2023, 34, 115015. [Google Scholar] [CrossRef]
  38. Orhei, C.; Bogdan, V.; Bonchis, C.; Vasiu, R. Dilated Filters for Edge-Detection Algorithms. Appl. Sci. 2021, 11, 10716. [Google Scholar] [CrossRef]
  39. Zheng, Z.; Wang, P.; Liu, W.; Li, J.; Ye, R.; Ren, D. Distance-IoU Loss: Faster and Better Learning for Bounding Box Regression. Proc. AAAI Conf. Artif. Intell. 2020, 34, 12993–13000. [Google Scholar] [CrossRef]
  40. Zheng, Z.; Wang, P.; Ren, D.; Liu, W.; Ye, R.; Hu, Q.; Zuo, W. Enhancing Geometric Factors in Model Learning and Inference for Object Detection and Instance Segmentation. IEEE Trans. Cybern. 2022, 52, 8574–8586. [Google Scholar] [CrossRef] [PubMed]
  41. Xue, J.; Cheng, F.; Li, Y.; Song, Y.; Mao, T. Detection of Farmland Obstacles Based on an Improved YOLOv5s Algorithm by Using CIoU and Anchor Box Scale Clustering. Sensors 2022, 22, 1790. [Google Scholar] [CrossRef]
Figure 1. The workflow of weld trajectory detection method based on Hough transform.
Figure 2. Camera imaging process.
Figure 3. Camera imaging model.
Figure 4. Rigid body transformation.
Figure 5. Center perspective projection.
Figure 6. Lens distortion.
Figure 7. Image and pixel coordinate system.
Figure 8. Image preprocessing.
Figure 9. Mean filtering processing results.
Figure 10. Median filtering processing results.
Figure 11. Principle of median filter.
Figure 12. Principle of edge approximation method.
Figure 13. Edge detection results.
Figure 14. Shuffle-YOLOv8 model diagram.
Figure 15. Two modules in ShuffleNet V2.
Figure 16. Plate weld seam trajectory coordinate detection results.
Figure 17. Parameter sensitivity analysis.
Figure 18. Training results.
Figure 19. Extraction results of weld seam area.
Figure 20. ROI region edge detection results.
Figure 21. Three-dimensional stereoscopic scanner experimental site.
Figure 22. Complete model of weld seam.
Figure 23. Data comparison results.
Figure 24. Welding test site.
Figure 25. Welding effect.
Table 1. Camera internal parameters.

Parameter | Calibration Value (pixel)
Focal length (fx, fy) | 757.6382, 760.3847
Principal point position (u0, v0) | 263.7319, 270.5986
Pixel coordinate axis angle | 0
Distortion coefficients (k1, k2) | 0.1802, −0.2232
Average reprojection error | 0.14
Table 2. Software and hardware configuration.

Name | Model/Version
CPU | Intel i5-14600KF
GPU | NVIDIA GeForce RTX 4060 Ti
CUDA | 12.4
PyTorch | 2.4.1
PyCharm | 2024.1.2
Table 3. Experimental hyperparameter settings.

Parameter Name | Value
Training rounds | 200
Batch size | 8
Image size | 640
Initial learning rate | 0.0006
Momentum | 0.9
Table 4. Location coordinates of experimental points.

Point | Horizontal Coordinate/mm | Vertical Coordinate/mm
1 | 625.85 | 39.20
2 | 731.61 | 39.23
3 | 837.43 | 39.20
4 | 943.03 | 39.19
5 | 1049.07 | 39.20
6 | 1154.76 | 39.21
7 | 1260.96 | 39.20
8 | 1366.04 | 39.18
9 | 1471.07 | 39.20
10 | 1576.11 | 39.21

Average value of the vertical coordinate: 39.22 mm.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
