Article

Failure Detection of Laser Welding Seam for Electric Automotive Brake Joints Based on Image Feature Extraction

School of Mechanical and Automotive Engineering, Shanghai University of Engineering Science, No. 333, Longteng Road, Guangfulin Street, Songjiang District, Shanghai 201620, China
*
Author to whom correspondence should be addressed.
Machines 2025, 13(7), 616; https://doi.org/10.3390/machines13070616
Submission received: 15 June 2025 / Revised: 8 July 2025 / Accepted: 14 July 2025 / Published: 17 July 2025

Abstract

As a key component in the hydraulic brake system of automobiles, the brake joint directly affects the braking performance and driving safety of the vehicle. Therefore, improving the quality of brake joints is crucial. During processing, owing to the complexity of the welding material and welding process, the weld seam is prone to defects such as cracks, pores, undercutting, and incomplete fusion, which can weaken the joint and even lead to product failure. Traditional weld seam inspection methods fall into destructive testing and non-destructive testing; however, destructive testing has high costs and long cycles, and non-destructive methods such as radiographic and ultrasonic testing also suffer from high consumable costs, slow detection speed, or high requirements for operator experience. In response to these challenges, this article proposes a defect detection and classification method for laser welding seams of automotive brake joints based on machine vision inspection technology. Laser-welded automotive brake joints are subjected to weld defect detection and classification, and the image-processing algorithms are optimized to improve the accuracy of detection and failure analysis by exploiting the efficiency, low cost, flexibility, and automation advantages of machine vision. This article first analyzes the common types of weld defects in laser welding of automotive brake joints, including craters, holes, and nibbling, and explores the causes and characteristics of these defects. Then, an image-processing algorithm suitable for laser welds of automotive brake joints is studied, including pre-processing steps such as image smoothing, image enhancement, threshold segmentation, and morphological processing, to extract the feature parameters of weld defects. On this basis, a welding seam defect detection and classification system based on a cascade classifier and the AdaBoost algorithm was designed, and efficient recognition and classification of welding seam defects were achieved by training the cascade classifier. The results show that the system can accurately identify and distinguish crater, hole, and nibbling (undercut) defects in welds, with an average classification accuracy of over 90%. The detection and recognition rate of crater defects reaches 100%, the detection accuracy of nibbling defects is 92.6%, the overall missed detection rate is less than 3%, and both the missed detection rate and false detection rate for crater defects are 0%. The average detection time per image is 0.24 s, meeting the real-time requirements of industrial automation. Compared with infrared and ultrasonic detection methods, the proposed machine-vision-based detection system has significant advantages in detection speed, surface defect recognition accuracy, and industrial adaptability. This provides an efficient and accurate solution for laser welding defect detection of automotive brake joints.

1. Introduction

The electric automotive brake joint is a key component that connects the brake master cylinder and the brake caliper. It is located at both ends of the brake oil pipe and plays a crucial role in ensuring efficient and stable braking of the brake system. Laser welding and brazing are two common methods for welding automotive brake joints. However, brazing has many limitations, such as low joint strength, poor heat resistance, and electrochemical corrosion caused by differences in composition between the base material and the brazing filler, all of which weaken the corrosion resistance of the joint. In addition, brazing requires high assembly accuracy; otherwise, there is a risk of deformation. Given the shortcomings of brazing, laser welding is usually preferred for welding automotive brake joints. Laser welding has good adaptability and superior performance, but in actual operation, improper welding methods, errors in the operating process, or interference from the production environment may produce defects such as cracks, porosity, undercutting, and incomplete fusion in the weld seam. These defects reduce the strength of the joint and can even lead to product failure, thereby affecting the production efficiency and product quality of automotive brake joints. Therefore, it is crucial to inspect and screen the welding quality.
The common methods for weld seam detection are mainly divided into two categories: destructive testing and non-destructive testing. Destructive testing wastes materials, is costly, and has a long cycle; therefore, it is less commonly used. The non-destructive testing methods widely used in industry currently include radiographic testing and ultrasonic testing. Radiographic testing uses radiation to penetrate metal materials and image them on film to evaluate welding quality. This method has high detection accuracy, but it takes a long time to evaluate and verify the results, and the cost of using equipment such as X-ray film is high. At the same time, radiation is harmful to the human body, and protective measures need to be taken. Ultrasonic testing detects defects through the propagation and reflection of ultrasonic waves in the workpiece. The detection accuracy is usually above 80%, and the detection speed is fast (generally within 1 s). However, the detection effect is not good for workpieces with complex shapes, requiring high surface smoothness and the use of coupling agents to ensure acoustic coupling. For coarse-grained castings and welds, it is difficult to judge due to chaotic reflection waves. In addition, ultrasonic testing also requires experienced inspectors to operate and judge the results. In contrast, machine-vision-based methods for detecting weld defects in automotive joints have certain advantages in terms of cost, efficiency, and ease of operation, and have high engineering value and practical significance.
The mandatory demand for “zero defects” in laser welding in the automotive manufacturing industry is continuously driving the upgrading of inspection technology. In the early years, researchers developed laser vision sensors and applied them to laser welding, which not only provided a foundation for research on machine-vision laser weld detection but also pioneered the use of visual technology to detect laser welds [1,2]. Peng and Jin [3] used a CCD camera to capture laser welds on automotive manufacturing components and achieved weld quality inspection through visual image processing. Although the hardware conditions at the time limited detection accuracy, this was a bold attempt to apply machine vision to industrial laser weld inspection. Chen [4] proposed class activation mapping (CAM) to identify and determine defect areas and locate target objects, integrating CAM with existing pixel spatial gradient visualization techniques to achieve multi-directional detection of welds. Xu [5] developed a machine-vision detection system for different weld defects in the laser welding of gas pipelines, which attracted widespread attention from academia and industry. Zhang [6] designed a three-dimensional visual sensing system for measuring the surface profile of the laser welding melt pool: the system projects a dot-matrix laser pattern onto the molten pool, reconstructs the three-dimensional shape of the pool surface from the reflected image, and measures and controls the size of the welding molten pool online. Xu et al. [7] developed a laser weld seam detection system based on machine vision; the system measures the surface forming size of welds, detects appearance defects, and extracts the center stripe of the weld seam, with a feature detection algorithm used to achieve defect recognition. The measurement error of the system for the weld forming size is 0.08 mm. Huang [8] used data obtained from an image-processing module to adaptively adjust the position of the welding torch, achieving non-contact weld seam tracking, which is well suited to measuring the geometric features of welds, weld seam tracking, and 3D contour production. Chu and Wang [9] used the centroid method to extract the centerline of the structured light generated by laser vision sensors for automated size measurement and defect detection after welding. In summary, the early laser vision sensors applied to laser welding pioneered visual weld detection and laid the foundation for subsequent technologies. Related research has continued to deepen: class activation mapping (CAM) has been integrated with pixel-space gradient visualization to identify and determine weld defect areas, and Gaussian fitting has been used to extract the center of laser stripes and assess the roughness of surface weld defects. With the development of the field, various detection systems and techniques have emerged, such as a machine-vision-based defect detection system for gas pipeline laser welding, a 3D vision sensing system for measuring the molten pool surface contour, feature detection algorithms for measuring weld surface forming size and detecting appearance defects, non-contact weld seam tracking using image-processing data, and the centroid method for extracting structured-light centerlines for size measurement and defect detection.
These efforts focus on utilizing advanced visual technology and image-processing algorithms to achieve precise detection, recognition, and measurement of surface defects in laser welding seams, promoting the development of laser welding seam detection technology towards automation, intelligence, and high precision. Moreover, many countries have matured their technologies and achieved commercial production.
In 2019, Bacioiu et al. [10] utilized the latest machine learning research results to train a model to recognize the appearance features of the welding pool and its surrounding area, in order to achieve automatic classification of defects in the SS304 TIG welding process. In the same year, the same team adopted an adaptive cascaded AdaBoost algorithm for binary classification to extract real defects from a large number of candidate defects; by introducing a penalty term, the algorithm improves the true positive rate (TPR) of welding defect detection and realizes an automatic welding defect detection system. Sun et al. [11] used a Gaussian-mixture-model-based background subtraction method to extract feature regions of welding defects and achieved detection and classification of welding defects in thin-walled metal cans. He et al. [12] proposed a method that combines a top-down visual attention model and an exponentially weighted moving average (EWMA) control chart to extract the welding seam contour during intelligent robot welding of thick steel plates and perform fault detection and diagnosis (FDD). Dong and Sun [13] used multiple edge detection algorithms to process weld seam images in order to improve the accuracy of edge detection; by applying automatic threshold selection to image edges, a reasonable threshold can be obtained to improve the defect recognition accuracy of digital images of pipeline welds. Xia et al. [14] developed an advanced convolutional neural network (ResNet) to identify different welding states, including good welding, incomplete penetration, burn-through, and misalignment. In 2020, Pan et al. [15] proposed a transfer learning model to solve the problem of welding defect detection by adding a new fully connected layer (FC-128) and a Softmax classifier to the traditional MobileNet model. Wang and Shen [16] designed a specialized module to predict the distance field between the inner and outer boundaries of the welding area; this module restores the details lost through hierarchical, multi-layer convolution and solves the image segmentation problem in radiographic inspection of water-cooled tube welding areas. Tang et al. [17] used a pre-trained HMM to establish the relationship between keyhole geometry and welding quality defects, which effectively evaluated the quality of stainless steel 304 fiber laser welding. Zhu et al. [18] proposed a lightweight multi-scale attention semantic segmentation algorithm that can accurately detect laser welding defects on power battery safety valves. Wang et al. [19] combined artificial intelligence image recognition technology to develop an automatic detection algorithm based on Mask R-CNN, which can automatically detect and identify common defects in building steel structure welds with an optimal parameter configuration. Parameshwaran et al. [20] proposed a shape matching algorithm to achieve a reliable and accurate seam tracking process that is sensitive to groove shape. Ma et al. [21] utilized eight feature parameters extracted from weld seam feature points to monitor and classify welding defects generated during gas metal arc welding (GMAW) of galvanized steel. He et al. [22] used the Mask R-CNN model to extract features from welding seam images and adjusted the loss function to identify and locate welding seams under changes in the Mask R-CNN network structure. Deng et al. [23] utilized deep convolutional neural networks (CNNs) for feature extraction and introduced transfer learning (TL) techniques for industrial laser welding defect detection and image defect recognition on their datasets. Chen et al. [24] proposed an image-processing algorithm based on the Mask R-CNN model to accurately identify the centerline of the weld seam in welding images and detect weld seam gap deviation. Qin et al. [25] used a convolutional neural network-based object detection method (Faster R-CNN) to train on and recognize typical defects in TOFD (time of flight diffraction) welding seam images. Yang et al. [26] proposed a welding defect recognition algorithm that combines a support vector machine (SVM) classifier and Dempster–Shafer evidence theory to achieve effective feature representation for welding defect recognition. Chen et al. [27] applied a Faster R-CNN convolutional neural network to welding defect detection to obtain more accurate defect localization information and improve the performance of ultrasonic-spectrum welding defect detection.
In recent years, neural network technology has been widely applied in the field of welding defect detection to promote intelligent detection. Researchers have trained fully connected neural networks (FCNs) and convolutional neural networks (CNNs) to recognize welding pools and their surrounding appearance features, automatically classifying good and defective welds. A transfer learning model effectively solves the welding defect detection problem by adding a new fully connected layer (FC-128) and a Softmax classifier to the traditional MobileNet model. In addition, a region proposal network (RPN) incorporating tilt parameters adapts to complex environments and achieves accurate and rapid detection of welds. After training, the VGG16 CNN model has been used for real-time defect detection in narrow overlap welding. Deep convolutional neural networks combined with transfer learning (TL) have been used for industrial laser welding defect detection and image defect recognition. The Faster R-CNN convolutional neural network has been applied to welding defect detection to obtain more accurate defect localization and to improve ultrasonic-spectrum defect detection, and CNN-based object detection methods such as Faster R-CNN have been used to train on and recognize typical defects in TOFD weld seam images. These studies promote the development of welding defect detection towards automation, intelligence, and high precision, helping to improve welding quality.
In 2022, Buongiorno et al. [28] utilized traditional machine learning algorithms and deep learning architectures to process extracted features, and constructed a classification model for welding defect detection to achieve real-time detection of laser welding defects. Li et al. [29] utilized a Mask R-CNN deep learning model combined with image processing technology to achieve high-precision weld seam tracking. Liu et al. [30] used a pre-trained Faster R-CNN two-stage object detection algorithm to iteratively train and test the model parameters to achieve object detection in the weld area. Liu et al. [31] constructed a restoration and extraction network (REN) based on the CGAN principle to restore weld seam feature information in noise-contaminated weld seam images, and implemented a weld seam tracking system for robot multi-layer and multi-pass MAG welding. Li et al. [32] improved the Otsu algorithm by calculating the optimal center calculation threshold for each column pixel of the laser stripe within a finite length, improving the resistance to arc light and splash noise to achieve robot welding seam recognition based on line-structured laser (LSL) vision sensors. In April 2023, Hong et al. [33] applied Kalman filter denoising processing to achieve real-time quality monitoring of ultra-thin plate edge welding. Da Rocha et al. [34] explored various machine vision technologies, including image processing algorithms, sensors, and tracking methods, to improve the welding accuracy and reliability of machine vision systems. In recent years, research in the field of welding defect detection has been continuously advancing, with a focus on improving the accuracy and efficiency of detection. Researchers use traditional machine learning algorithms and deep learning architectures to process extracted features and construct classification models to achieve real-time detection of laser welding defects. In the same year, research combined Mask R-CNN deep learning model with image-processing technology to achieve high-precision weld seam tracking. In addition, the pre-trained Faster R-CNN two-stage object detection algorithm optimizes model parameters through iterative training and testing, thereby achieving object detection in the weld area. A restoration and extraction network (REN) based on the CGAN principle is constructed to restore feature information in weld seam images contaminated by noise, thereby realizing a weld seam tracking system during the welding process. There is also an improved Otsu algorithm that improves the resistance to arc light and splash noise by calculating the optimal center calculation threshold within a finite length of each column pixel of the laser stripe. Research has applied Kalman filter denoising processing to achieve real-time quality monitoring of ultra-thin plate edge welding [35]. Machine vision technology has been extensively explored in improving welding accuracy and reliability, including research on image-processing algorithms, sensors, and tracking methods. These works mainly revolve around visual technology, improving the accuracy and real-time performance of welding defect detection by constructing and optimizing models.
In this research context, this study focuses on visual defect detection for the laser welding seams of automotive brake joints. A visual inspection system was designed based on the characteristics of laser weld seam images. The system software algorithm meets the requirements of image processing, covering the pre-processing of weld seam images and the analysis of image defect features, and accurately extracting defect feature vectors. At the same time, high-performance cascaded classifiers are designed, classifier training is completed, and defect recognition and classification are achieved. Ultimately, the system can effectively detect and analyze laser welding defects in automotive brake joints, which has been experimentally verified and provides strong support for improving welding quality.
The specific research content and chapter arrangement are as follows. The Introduction presents the research background and significance of the subject, summarizes the main detection methods currently used for laser welds together with their advantages and disadvantages, and reviews the research status and application of machine vision weld defect detection methods at home and abroad. Section 2 introduces the common defect types of laser welds of automobile brake joints and describes the form and characteristics of these defects. Section 3 analyzes the collected weld images, studies the weld defect image-processing algorithm, designs a cascade classifier for the detection and classification of weld defects, and verifies its detection effect by experiments. Section 4 mainly introduces how the detected and identified defects are presented in real time, classified, and counted, and completes the analysis of the experimental results. Section 5 concludes the paper.

2. Welding Seam Defect Characteristics

2.1. Characteristics of Weld Defects

The quality defects encountered in welding engineering mainly fall into the following categories: defects located on the weld surface that are visible to the eye or with a low-power magnifying glass are called external welding defects, while defects that must be detected by destructive testing or by ultrasonic or radiographic testing are called internal defects. In most cases, however, common welding scars are caused by welding slag and spatter that are not cleaned up after welding. Figure 1 shows the braking system where the car brake is located, and Figure 2 shows the physical picture and welding seam of the car brake joint.
Automotive brake joints in the laser welding production process will be affected by a variety of factors, such as material, production technology, operator level, resulting in a variety of weld defects. The weld is prone to defects such as breakpoints, bumps, cracks, craters, holes, and nibbling, which have a serious impact on product quality. According to the actual production of automobile brake joints by manufacturers, the most common defects of welds are craters, holes, and nibbling.
(1)
Characteristics of Crater Defects
Craters are a special type of crack that occurs at the end of the welding process, before the welded joint is completed. They are often caused by improper filling of the arc pit before the arc is broken, forming a pit-crack welding defect. Crater defects of the automobile brake joint appear as concave, roughly circular defects on the weld surface with an irregular distribution. The main reason for their formation is that, during laser welding, the end of the weld or the joint of the weld is lower than the weld matrix [36]. Compared with the surrounding normal area, the crater area shows a significant change in depth and is larger and darker in color. Figure 3 shows the crater defect in the weld seam.
(2)
Characteristics of Holes Defects
Also known as wormholes, hole (porosity) defects occur when air or bubbles are trapped in the weld. The welding process usually produces gases such as hydrogen, carbon dioxide, and steam, and the trapped gas may be concentrated at specific locations or distributed throughout the weld. These bubbles weaken the welded metal, making it prone to fatigue and damage. Hole defects in the weld seam of the automobile brake joint appear as holes on the weld surface that are few in number and unevenly distributed. The main reason for their formation is that, during laser welding, the heat capacity of the weld pass is small and it solidifies quickly, so that air and debris in the weld cannot escape in time [36]. The holes appear in the image as circular spots that are darker than the background and have a smaller area. Figure 4 shows the hole defect in the weld seam.
(3)
Characteristics of Nibbling Defects
A nibbling (undercut) defect is an irregular, notch-shaped groove in the base metal. Such defects are produced by melting of the metal matrix away from the weld zone and are characterized by their length, depth, and sharpness; as a result, welded joints become more prone to fatigue. The nibbling defect of the automobile brake joint weld appears as a concave groove notch at the square sleeve. The main reason for its formation is that, during laser welding, the material at the weld boundary is not refilled with metal in time after melting [36]. Nibbling appears as a black patch in the image, darker at the edge, with varying size and irregular shape. Figure 5 shows the nibbling defect in the weld seam.
It can be seen that the common defects in the laser weld of the automobile brake joint differ in their characteristics and causes, and that they have a serious impact on the quality and performance of the joint. Therefore, subsequent research needs to further process and recognize the different types of defect images, so as to achieve efficient and accurate defect detection and classification and screen out unqualified automotive brake joints.

2.2. Weld Defect Identification and Detection Features

Image features refer to the distinguishing attributes or characteristics of images, which can be divided into natural features and artificial features. Natural features include shapes, colors, edges, and textures; the artificial features are obtained by computer processing, such as histogram features, gray edge features, and spectral features. Image feature vectors represent the position distribution information of image pixels and the change in gray value digitally, and a set of vectors represent the extracted image features. These feature vectors are one-dimensional vectors, and the system uses multi-dimensional feature vectors to identify and distinguish target objects, so image features directly affect the effect of subsequent image description, recognition, and classification.
(1)
Geometric Characteristics of Defects in Welding Seam
Geometric features refer to the appearance shape of a region after the image is segmented, and include position, direction, circumference, area, diameter, slope, curvature, and various form factors.
a. Area of Defects
The defect area $A_j$ is the sum of the number of pixels in the image region. Let the gray value of the pixel at position $(x, y)$ in the defect image be $f(x, y)$; for a binary image, $f(x, y)$ takes the values 0 and 1, and the area is calculated as follows:

$$A_j = \sum_{x} \sum_{y} f(x, y)$$
b. Length of Defects
The length of the region boundary is $L$, which can be determined by counting the pixels on the image boundary; it is calculated as follows:

$$L = y_{\mathrm{down}} - y_{\mathrm{up}}$$
c. Width of Defects
The boundary width of the region is $W$, calculated as follows:

$$W = x_{\mathrm{left}} - x_{\mathrm{right}}$$
(2)
Gray-Scale Characteristics of Defects in Welding Seam
Gray feature is a feature obtained by statistical analysis of the common gray value of local or global pixels in an image. It is found that the gray level of pixels changes obviously in the defect area of automobile brake joint, and different defects correspond to different gray values. In this study, the mean and variance of the gray value of the image are used to represent the change in the gray value, and the kurtosis is used to describe the distribution of the gray value to distinguish different defect types. The following will introduce in detail the three gray feature parameters of gray mean, variance, and kurtosis extracted from the gray histogram of the image.
Let $z_i$ ($i = 0, 1, 2, \ldots, L-1$) denote the gray levels that can appear in an $M \times N$ image, and let $n_K$ denote the number of times gray level $K$ appears in the image; the probability of gray level $K$ appearing is $P(K)$:

$$P(K) = \frac{n_K}{MN}$$
a. Gray-Scale Mean
The average gray level $\mu$, which reflects the overall gray level of the image, is calculated as follows:

$$\mu = \sum_{K=0}^{L-1} K \times P(K)$$
b. Gray-Scale Variance
The gray-scale variance $\sigma^2$, which reflects the dispersion of the gray values of individual pixels around the mean gray level of the image, is calculated as follows:

$$\sigma^2 = \sum_{K=0}^{L-1} (K - \mu)^2 \times P(K)$$
c. Gray-Scale Kurtosis
The kurtosis $F$, which reflects the steepness of the distribution of the image gray-level histogram around the gray-level mean, is calculated as follows:

$$F = \frac{1}{\sigma^2} \sum_{K=0}^{L-1} (K - \mu)^4 \times P(K)$$
(3)
Texture Characteristics of Defects in Welding Seam
Texture feature is one of the important features of image target region recognition. This feature is based on the size and direction of the gray change in pixels, and can describe the spatial characteristics of the internal structure of the object, such as smoothness and change rules. The defects of automobile brake joint are unevenly distributed, and the appearance structure of the nibbling is different from other defects. Therefore, texture features are used to present the spatial distribution and structural details of the weld image. Usually, texture feature parameters are calculated by the gray co-occurrence matrix method. This method is an analysis technique based on the second-order statistics of a gray image histogram, and describes the texture by studying the gray interaction between two pixels in the gray space of the image, so as to fully display the gray information of the image in the direction, distance, and range of change. Then, the formula of gray co-occurrence matrix is as follows:
$$P(i, j) = \#\left\{ \left( (x_1, y_1), (x_2, y_2) \right) \in S \;\middle|\; f(x_1, y_1) = i,\ f(x_2, y_2) = j \right\}$$
a. Entropy of Gray-Scale Images
Entropy $H$ reflects the orderliness of the gray-scale distribution of the image and the clarity or disorder of its texture [37]. In the gray-scale co-occurrence matrix, the pixel gray value of the image texture is proportional to the entropy value; if there is no texture in the image, the entropy is close to zero. It is calculated as follows:

$$H = -\sum_{i=0}^{L-1} \sum_{j=0}^{L-1} P_{ij} \lg P_{ij}$$
b. Contrast of Gray-Scale Images
Contrast $H_2$ reflects the clarity of the image and the depth of its texture grooves [37]. The deeper the grooves of the image texture, the greater the contrast; the shallower the grooves, the smaller the contrast and the more blurred the visual effect. It is calculated as follows:

$$H_2 = \sum_{i=0}^{L-1} \sum_{j=0}^{L-1} (i - j)^2 P_{ij}$$
c. Uniformity of Gray-Scale Images
Uniformity $H_3$ reflects the uniformity of the distribution of image pixels from the center to the edges [37]. The greater the uniformity, the more evenly the image pixels are distributed. It is calculated as follows:

$$H_3 = \sum_{i=0}^{L-1} \sum_{j=0}^{L-1} \frac{P_{ij}}{1 + |i - j|}$$
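As an illustration only (not the implementation used in this work), the following Python sketch computes the nine feature parameters defined above for one segmented defect region; the inputs `defect_mask` and `gray`, the single co-occurrence offset, and the use of scikit-image's `graycomatrix` are assumptions made for the example.

```python
# Illustrative sketch (assumed inputs): compute the nine defect feature parameters.
#   defect_mask : binary image (0/1) of the defect region (hypothetical name)
#   gray        : corresponding 8-bit gray-scale weld image (hypothetical name)
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def defect_features(defect_mask: np.ndarray, gray: np.ndarray) -> dict:
    ys, xs = np.nonzero(defect_mask)

    # geometric features
    area = int(defect_mask.sum())              # A_j: number of defect pixels
    length = int(ys.max() - ys.min())          # L: vertical extent (y_down - y_up)
    width = int(xs.max() - xs.min())           # W: horizontal extent

    # gray-scale features from the histogram of the defect pixels
    levels = 256
    hist, _ = np.histogram(gray[defect_mask > 0], bins=levels, range=(0, levels))
    p = hist / max(hist.sum(), 1)                                      # P(K)
    k = np.arange(levels)
    mean = float((k * p).sum())                                        # mu
    var = float((((k - mean) ** 2) * p).sum())                         # sigma^2
    kurt = float((((k - mean) ** 4) * p).sum() / var) if var else 0.0  # F

    # texture features from the gray co-occurrence matrix (distance 1, angle 0)
    glcm = graycomatrix(gray, distances=[1], angles=[0],
                        levels=levels, symmetric=True, normed=True)
    p_ij = glcm[:, :, 0, 0]
    nz = p_ij[p_ij > 0]
    entropy = float(-(nz * np.log10(nz)).sum())                    # H
    contrast = float(graycoprops(glcm, "contrast")[0, 0])          # H2
    homogeneity = float(graycoprops(glcm, "homogeneity")[0, 0])    # H3

    return {"area": area, "length": length, "width": width,
            "mean": mean, "variance": var, "kurtosis": kurt,
            "entropy": entropy, "contrast": contrast, "homogeneity": homogeneity}
```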

3. Weld Defect Detection Methods

Machine vision inspection technology is used to identify common defects in laser-welded automotive brake joint welds, such as craters, holes, and nibbling, and to achieve comprehensive inspection of the entire workpiece. In this process, image-processing algorithms for the laser welds of automotive brake joints are studied to detect and classify defect features. Through this method, the efficiency of welding quality inspection of automobile brake joints can be improved, and in turn the production efficiency of enterprises.
Image acquisition of the laser weld of the automobile brake joint is affected by the ambient lighting and camera performance. Image-smoothing technology is therefore used to remove noise from the image, and image contrast enhancement is used to process the region of interest to obtain a clearer image. To improve accuracy, edge detection is used to obtain the characteristics of the weld surface, and contour detection is used to obtain the surface defect profile.
Figure 6 shows a series of processing procedures for weld seam images. Firstly, the captured weld seam samples are preprocessed through ROI region extraction, image smoothing, image enhancement, threshold segmentation, and morphological processing to obtain the corresponding defect feature parameters. After normalizing the obtained defect parameters relative to the weld seam, the cascaded classifier is adapted to classify and screen the processed weld defect parameters, remove defect samples from the samples, and finally retain intact weld samples. The arrows in the figure represent the detection and processing flow of weld seam images. The dots in the figure represent the omission of a large number of classifiers. Figure 7 shows the original image of the weld seam of the car brake joint obtained.

3.1. Image Processing Flow

(1)
ROI Extraction
ROI (region of interest) is a technique for extracting regions of interest. In the process of ROI extraction, the target object is positioned as the region of interest, and the high fidelity of the content of the target object is ensured by removing background interference [38].
ROI extraction technology is used to reduce the redundancy of subsequent image data for the weld seam images of the left and right halves of the car brake joint. Due to the semi-circular shape of the weld detection area, the least squares method was used to obtain the center and radius of the inner and outer circles of the weld. By using the semi-circular ROI extraction algorithm, the weld seam and end-face regions in the left and right half circle weld seam images can be separated, thereby accurately extracting the weld seam region of the car brake joint. Figure 8 shows the weld seam image after extracting the ROI.
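The following Python sketch illustrates, under stated assumptions, one way to realize the semi-circular ROI extraction described above: an algebraic least-squares circle fit to sampled inner and outer weld-edge points, followed by masking of the half ring. The inputs `pts_inner`, `pts_outer`, and `img` are hypothetical, and the fitting details may differ from the system actually used.

```python
# Illustrative sketch (assumed inputs): least-squares circle fitting and
# semi-circular ROI masking for the weld region.
import numpy as np
import cv2

def fit_circle_lsq(pts):
    """Fit x^2 + y^2 + D*x + E*y + F = 0 to N x 2 points by least squares."""
    pts = np.asarray(pts, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx ** 2 + cy ** 2 - F)
    return (cx, cy), r

def semicircular_roi(img, pts_inner, pts_outer, left_half=True):
    (cx, cy), r_in = fit_circle_lsq(pts_inner)
    _, r_out = fit_circle_lsq(pts_outer)
    ring = np.zeros(img.shape[:2], np.uint8)
    cv2.circle(ring, (int(cx), int(cy)), int(r_out), 255, -1)  # outer disc
    cv2.circle(ring, (int(cx), int(cy)), int(r_in), 0, -1)     # remove inner disc
    half = np.zeros_like(ring)
    if left_half:
        half[:, :int(cx)] = 255   # keep the left half ring
    else:
        half[:, int(cx):] = 255   # keep the right half ring
    mask = cv2.bitwise_and(ring, half)
    return cv2.bitwise_and(img, img, mask=mask)
```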
(2)
Image-Smoothing Processing
On the premise of preserving the details and features of the original image, image filtering can suppress the noise generated during image imaging by smoothing processing.
In order to obtain the true feature information of the weld seam image, image filtering is used to remove noise from the extracted region of interest. Image filtering is divided into linear filtering and nonlinear filtering, and common image-filtering algorithms include median filtering, Gaussian filtering, and mean filtering.
As can be seen from Figure 9, the three filtered weld seam images were compared. Among the three types of filtering, median filtering gives the best result: it not only eliminates the salt-and-pepper noise in the weld seam image but also improves the image contrast, making the image surface smoother and more detailed. Meanwhile, the edge information of the weld seam image is not eliminated and still maintains good resolution and clarity. Therefore, the median filtering method was chosen to process the image.
In simple terms, median filtering scans a window containing an odd number of points around a target pixel and replaces the pixel value at that point with the median of all pixels in the window. If the window contains an even number of points, the average of the two middle values is used instead. Median filtering has a good filtering effect on impulse noise and can effectively preserve the edge and defect features of the image.
For a dataset $x_{i1}, x_{i2}, x_{i3}, \ldots, x_{in}$ sorted in order of magnitude, the median $y$ is obtained as follows:

$$y = \begin{cases} x_{i\left(\frac{n+1}{2}\right)}, & n \ \text{odd} \\ \dfrac{1}{2}\left[ x_{i\left(\frac{n}{2}\right)} + x_{i\left(\frac{n}{2}+1\right)} \right], & n \ \text{even} \end{cases}$$
For an input sequence $x$ with $\{x_i, i \in I\}$, the result of median filtering is the median of the values within the filter window; the filtered value $y$ is obtained as follows:

$$y = \mathrm{Med}\{x_i\} = \mathrm{Med}\{x_{i-u}, \ldots, x_i, \ldots, x_{i+u}\}$$
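A brief sketch of the smoothing comparison described above is given below; the file name and kernel sizes are illustrative only, not taken from the paper.

```python
# Illustrative comparison of the three filters on the extracted ROI image.
import cv2

roi = cv2.imread("weld_roi.png", cv2.IMREAD_GRAYSCALE)   # hypothetical ROI image

mean_f   = cv2.blur(roi, (3, 3))             # mean filtering
gauss_f  = cv2.GaussianBlur(roi, (3, 3), 0)  # Gaussian filtering
median_f = cv2.medianBlur(roi, 3)            # median filtering: each pixel replaced by
                                             # the median of its 3 x 3 neighbourhood
```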
(3)
Image Enhancement Processing
Due to various reasons, the weld seam image after smoothing still suffers from insufficient contrast and poorly distinguishable details. Therefore, gray-scale transformation is used to improve the quality of the weld seam image, expanding the original gray-scale range to a more suitable one. Nonlinear gray-scale transformations, such as exponential, power, and logarithmic transformation functions, can be used to enhance images. Linear gray-scale transformation, in contrast, offers efficiency, controllability, and structural fidelity: it can improve the contrast between defects and background at millisecond speed, providing low-noise, high-fidelity input for subsequent algorithms. Therefore, the linear gray-scale algorithm is chosen to process the smoothed weld seam image.
$$g(x, y) = c \cdot \log\left( f(x, y) + 1 \right), \qquad g(x, y) = b^{\,c\left[ f(x, y) - a \right]} - 1, \qquad g(x, y) = c \cdot f(x, y)^{\gamma}$$
In the linear gray-scale transformation $g(x, y) = a \cdot f(x, y) + b$, $a$ is the slope of the transformation and $b$ is the intercept of the transformation function on the $y$ axis. When $a > 1$, the contrast of the output image increases; when $a < 1$, the contrast decreases; when $a = 1$ and $b \neq 0$, the overall gray level of the image shifts with $b$; when $a < 0$, bright regions of the image become darker and dark regions become brighter. The linear gray-scale enhancement effect on the weld seam image is shown in Figure 10, and the highlighted area in the processed weld seam image is shown in Figure 11.
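A minimal sketch of the linear gray-scale transformation $g(x, y) = a \cdot f(x, y) + b$ is given below; the slope and intercept values and the input file name are illustrative, not taken from this work.

```python
# Illustrative linear gray-scale stretch g = a*f + b with clipping to [0, 255].
import cv2
import numpy as np

smoothed = cv2.imread("weld_median.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
a, b = 1.5, -30.0   # a > 1 raises contrast, negative b darkens the image slightly
enhanced = np.clip(a * smoothed.astype(np.float32) + b, 0, 255).astype(np.uint8)
```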
(4)
Threshold Segmentation Processing
When using machine vision detection technology to analyze laser weld images of automotive brake joints after image enhancement, there are problems such as overlapping gray distribution defect areas and complete areas, as well as overlapping gray values between defects and noise, which make defect extraction difficult. To solve this problem, the watershed algorithm was introduced, which can accurately partition weak boundary regions in the image, thereby obtaining connectivity and closed contours of the regions. By applying the watershed algorithm to the weld seam image after removing the highlight area, we can accurately segment the defect area.
The segmentation result of the weld seam image after removing the highlight area is shown in Figure 12 [39].
The calculation of the watershed is a process of iterative labeling, and the result of the watershed calculation is the image after the input image gradient calculation. The regional boundary point in the gradient image is the watershed, which represents the maximum value point of the input image and can reflect the edge information of the image. The formula for calculating watershed is as follows:
$$g(x, y) = \mathrm{grad}\, f(x, y) = \sqrt{\left[ f(x, y) - f(x-1, y) \right]^2 + \left[ f(x, y) - f(x, y-1) \right]^2}$$

where $f(x, y)$ denotes the input image, $\mathrm{grad}(\cdot)$ denotes the gradient operation, and $g(x, y)$ is the output image.
We continue to use the watershed algorithm to segment the weld seam image on the weld seam image after removing the highlight area, separating the defect area from the interference area. The first step is to sequentially scan the gradient layer of the weld seam image to obtain pixel points, and sort them in ascending order of their gradient values; the second step is to determine and label the pixels in the image in a first-in, first-out order based on the gray-scale values obtained from scanning. Finally, the pixels in the minimum gradient influence area are obtained, and the edge contour of the image area is drawn to form a watershed by calculating the minimum pixel points, thus achieving image watershed segmentation.
Figure 13 shows the weld image obtained after watershed segmentation. Figure 13a outlines the edge contour of each region of the weld image and accurately captures the connectivity and closed contours of the regions. In Figure 13b, different colors identify each closed area after segmentation. It can be seen that, after segmentation by the watershed algorithm, the weld is divided into several continuous closed areas. The edge characteristics of the weld images are well preserved after watershed segmentation, overlapping between some regions is resolved, and good conditions are provided for the subsequent analysis and extraction of defect-region features of weld images. Figure 14 shows the weld seam image after watershed threshold separation processing of the enhanced image.
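The following sketch shows a marker-based watershed segmentation in the spirit of the procedure above, using OpenCV's implementation; the marker construction via Otsu thresholding and a distance transform is an assumption made for the example and may differ from the exact gradient-based procedure used in this work.

```python
# Illustrative marker-based watershed; file name and marker construction are assumed.
import cv2
import numpy as np

enhanced = cv2.imread("weld_enhanced.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

kernel = np.ones((3, 3), np.uint8)
sure_bg = cv2.dilate(binary, kernel, iterations=2)            # certain background
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)    # certain foreground
sure_fg = sure_fg.astype(np.uint8)
unknown = cv2.subtract(sure_bg, sure_fg)                      # undecided pixels

_, markers = cv2.connectedComponents(sure_fg)                 # initial region labels
markers = markers + 1
markers[unknown == 255] = 0                                   # 0 = to be flooded
color = cv2.cvtColor(enhanced, cv2.COLOR_GRAY2BGR)
markers = cv2.watershed(color, markers)                       # ridge lines become -1
color[markers == -1] = (0, 0, 255)                            # draw region boundaries
```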
(5)
Morphological Processing
Morphological processing is performed on the segmented weld seam images, and the shapes corresponding to the main morphological structures of the weld defects are measured and extracted. Dilation and erosion operations are applied separately.
a. Dilation Processing
The dilation of the input image $f(x, y)$ by the structuring element $b(x, y)$ is defined as follows:

$$(f \oplus b)(x, y) = \max\left\{ f(x - s, y - t) + b(s, t) \;\middle|\; (x - s, y - t) \in D_f;\ (s, t) \in D_b \right\}$$

where $D_f$ and $D_b$ are the domains of definition of the input image $f(x, y)$ and the structuring element, respectively.
The image dilation operation translates each point of the image $f(x, y)$ in the opposite direction by $(s, t)$, adds the translated image to $b(s, t)$, and takes the maximum over all values of $(s, t)$. It is equivalent to reflecting the structuring element about its own reference point, using the reflected structuring element $\hat{b}$ as a template to move over and scan the image, adding the pixel values in the area covered by the template to the corresponding values of $\hat{b}$, and taking the maximum as the value of the current pixel.
In other words, the structuring element $b(x, y)$ is reflected about its origin to give $\hat{b}$; after reflection, $\hat{b}$ is translated across the input image and the dilation is computed at each position. Because of the numerical summation and maximization, the dilated image has larger values and is brighter than before dilation. Figure 15 shows the effect image after dilation morphology processing.
b. Erosion Processing
The erosion of the input image $f(x, y)$ by the structuring element $b(s, t)$ is defined as follows:

$$(f \ominus b)(x, y) = \min\left\{ f(x + s, y + t) - b(s, t) \;\middle|\; (x + s, y + t) \in D_f;\ (s, t) \in D_b \right\}$$

Similar to dilation, the structuring element scans across the input image, $b(s, t)$ is subtracted from the covered values of $f$, and the minimum of the results is taken as the value of the target pixel. Because of the numerical subtraction and minimization, the eroded image is darker. Figure 16 shows the effect image after erosion morphology processing.
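A short sketch of the dilation and erosion operations defined above, applied to a segmented binary weld image, is given below; the structuring-element shape and size are illustrative only.

```python
# Illustrative dilation and erosion on the segmented binary weld image.
import cv2

binary = cv2.imread("weld_segmented.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))        # structuring element b

dilated = cv2.dilate(binary, se)  # local maximum under b: regions grow, image brightens
eroded  = cv2.erode(binary, se)   # local minimum under b: regions shrink, image darkens
```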

3.2. Feature Extraction Normalization and Classifier

(1)
Feature Normalization
In the field of machine vision, the detection system realizes the recognition of defects by extracting the features of image defect content information and training these feature parameters. Image feature extraction refers to the process of obtaining data and information parameters that can represent or describe the target from the image.
After image segmentation, image samples are selected for the three kinds of weld defects (craters, holes, and nibbling), and the contour of each defect area is collected. Defect feature extraction is then carried out according to the nine feature parameters of the three feature categories above, and the feature vector parameters of each defect image sample are calculated using the formulas given earlier. Figure 17 shows the defects detected after this series of processing steps. Table 1 shows the extracted feature vector parameters.
At the same time, there are significant differences between different feature vector parameters within the same feature category, and also large differences between the same feature vector parameter across different defect categories. If these heterogeneous feature parameters were combined directly into feature vectors as classifier input, the information distribution would be uneven, which would affect the data analysis results: feature parameters with larger values would dominate the feature vector, while those with smaller values would easily be ignored, affecting the learning speed and classification accuracy of the subsequent classifiers. Therefore, in order to ensure the consistency and certainty of the feature data of each defect, this paper applies data normalization to the feature vector parameters so that the feature parameters have stronger comparability, thereby improving the learning and classification efficiency of the classifier. Maximum–minimum normalization is a commonly used data normalization method, which compresses or stretches the data series according to its maximum and minimum values, mapping the original data into the interval [0, 1]; it improves the accuracy of data processing and is a common data pre-processing method [40].
Let the minimum value of attribute $A$ be $\min_A$ and the maximum value be $\max_A$. Max–min normalization maps an original value $x$ of $A$ to a value $x'$ in the interval [0, 1]; to map into a general interval $[mi, mx]$, the normalized value is further rescaled. The normalization formulas are as follows:

$$x' = \frac{x - \min_A}{\max_A - \min_A}, \qquad x'' = x' \times (mx - mi) + mi$$
It can be seen from Table 2 that after the maximum and minimum standardization treatment, the characteristic parameters of each defect in the weld are in the same order of magnitude and the range of value changes is small.
It can be seen that the contrast between the feature parameters of various defect images is higher, and the topological structure of the feature vector in the feature space is not changed. It shows that the normalization of weld defect characteristic parameters guarantees the validity of data. At the same time, it also improves the operation speed for the subsequent identification of defect types of weld images by feature vector.
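A minimal sketch of the max–min normalization applied to the nine-dimensional defect feature vectors is given below; `features` is a hypothetical array of raw feature parameters.

```python
# Illustrative max-min normalization of an (n_samples x 9) feature matrix.
import numpy as np

def min_max_normalize(features: np.ndarray) -> np.ndarray:
    col_min = features.min(axis=0)                               # min_A per feature
    col_max = features.max(axis=0)                               # max_A per feature
    span = np.where(col_max > col_min, col_max - col_min, 1.0)   # avoid division by zero
    return (features - col_min) / span                           # each column mapped to [0, 1]
```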
(2)
Cascade Classifier
A laser welding seam defect recognition and classification method for automotive brake joints based on cascade classifiers and the AdaBoost algorithm is proposed. A cascaded classifier is a strong classifier structure that filters non-target regions layer by layer through multi-level concatenation, achieving efficient and accurate object detection. This classifier combines AdaBoost's ensemble learning capability with the fast rejection mechanism of a cascaded architecture. Each classifier layer is responsible for filtering out a large number of easily recognizable negative samples, allowing only suspicious areas to pass to the next layer. The deeper layers of the cascade are more complex and are designed to identify finer features.
A weak classifier is designed based on the extracted defect feature vectors, and AdaBoost’s ensemble process is used to repeatedly adjust sample weights, forcing the classifier to focus on “difficult samples” that are difficult to distinguish. Each iteration selects the optimal weak classifier and assigns weights, and then constructs a strong classifier based on a series of weak classifiers. Afterwards, a cascaded classifier is formed based on multiple strong classifiers to form a recognizer capable of detecting normal and defective laser weld samples of automotive brake joints. The first three layers of the cascaded classifier are mainly used to filter uniform backgrounds and untextured areas, quickly eliminating more than 90% of the unaffected areas. The middle three layers mainly detect obvious defects and perform rough classification of the main areas. The last three layers are used to identify subtle defects, perform high-precision discrimination, and ultimately detect and screen out defect detection results. Figure 18 shows the structure diagram of a cascaded classifier.
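As a conceptual illustration only, the following sketch shows how AdaBoost combines weak decision-stump classifiers over the nine normalized defect features into one strong classifier; the synthetic data and the scikit-learn implementation are assumptions made for the example, whereas the cascade in this work is trained with the OpenCV tools described below.

```python
# Conceptual illustration: AdaBoost builds a strong classifier from weak stumps.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
X = rng.random((300, 9))                     # stand-in for normalized feature vectors
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)    # stand-in labels: 1 = defect, 0 = normal

# the default weak learner is a depth-1 decision stump; boosting re-weights hard samples
strong = AdaBoostClassifier(n_estimators=50).fit(X, y)
print(strong.predict(X[:5]))
```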
a. Positive Sample Set
The positive sample is an image containing the tested object. Due to the presence of a large amount of background in the actual laser weld seam image of the car brake joint captured by the camera, the image samples in the training set are extracted from the weld seam using a semi-circular ROI extraction algorithm to obtain positive samples, which are then normalized to unify the extracted positive sample images to the same scale. This algorithm uses a normalized image of 40 pixels by 40 pixels as the positive training sample image. The capacity of the positive sample set is 80 images, and the aspect ratio of the rectangular boxes obtained by image capture is 2:1. The positive sample set of some weld seam images is shown in Figure 19.
b. Negative Sample Set
Negative samples are obtained by randomly cropping the background area in the weld seam image samples, using a normalized image of 40 pixels by 40 pixels. The aspect ratio of the rectangular boxes obtained by cropping negative samples is 2:1. The negative sample set has a capacity of 220 images, mainly including positioning points, square sleeves, convex platforms, and other objects in the facility environment. Figure 20 shows some negative samples of weld seam images.
c. Generate Descriptive Files
A CreateSamples command was used to generate a positive sample description file pos.txt from positive sample data and a negative sample description file neg.txt from negative sample data, which was easy to access at any time during training. In addition, the positive sample also generates a vector description file.
d. Training Cascade Classifiers
The opencv_createsamples and opencv_traincascade tools were used to train the classifier, producing a file for the program to call. It contains several important parameters, such as the number of stages, the training size, the maximum false alarm rate, and the feature type [41]. The number of stages is the number of cascaded strong classifiers, so the number of training stages is set to nine according to the number of existing feature vectors; according to the detection requirements, the maximum false alarm rate of each classifier stage is set to 0.5 and the minimum hit rate to 0.995, the maximum false alarm rate of the whole cascade classifier is set to 0.3%, and the corresponding information is recorded in the strong classifier structure. Finally, the cascade classifier is obtained by concatenating the trained strong classifiers.
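For illustration, the trained cascade (an XML file produced by opencv_traincascade; the file names below are hypothetical) can be loaded and applied to a weld image as follows.

```python
# Illustrative use of the trained cascade classifier for defect detection.
import cv2

cascade = cv2.CascadeClassifier("weld_defect_cascade.xml")   # output of opencv_traincascade
gray = cv2.imread("weld_test.png", cv2.IMREAD_GRAYSCALE)

# multi-scale sliding-window scan; 40 x 40 matches the training sample size used above
boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3, minSize=(40, 40))

vis = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
for (x, y, w, h) in boxes:
    cv2.rectangle(vis, (x, y), (x + w, y + h), (0, 0, 255), 2)  # frame each candidate defect
```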

4. Test Results and Analysis

The test samples in this paper are images of the three kinds of laser welding seam defects of automobile brake joints (craters, holes, and nibbling), and the detection effect of the cascade classifier on each defect type is presented. During testing, each image is detected separately. After the defects are detected, the original image and the defect area are superimposed, and the defects are framed with rectangular boxes so that they are displayed intuitively. It can be seen that, for each laser weld image, the defects of the weld can be accurately located and the results displayed.
(1)
The Influence of Training Frequency on the Results of Cascaded Classifiers
Two indexes, accuracy P and recall rate R are used to evaluate the algorithm performance of the cascade classifier. In order to analyze and verify the training effect of the cascade classifier, this paper conducts comparative experiments with different training times.
By adjusting the number of training iterations, the effectiveness of the cascaded classifier training is reflected in the detection rate and false alarm rate of the test results. Tests were conducted with 50, 500, and 1000 training iterations, with the entire training set used as the training sample, and the comparison results are shown in Table 3. According to Table 3, after a certain number of training iterations is reached, further training brings relatively little improvement in the detection results. Therefore, with a sufficient amount of training, the laser welding seam defect recognition algorithm for automotive brake joints based on the cascade classifier can effectively detect and recognize defects, with good accuracy and stability. At the same time, it can cope with automated industrial environments and meet industrial testing requirements.
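For reference, the two evaluation indexes can be computed from per-image detection counts as in the following sketch; the counts shown are illustrative examples only.

```python
# Illustrative computation of the two indexes from detection counts.
def precision_recall(tp: int, fp: int, fn: int) -> tuple:
    p = tp / (tp + fp) if (tp + fp) else 0.0   # P: fraction of reported defects that are real
    r = tp / (tp + fn) if (tp + fn) else 0.0   # R: fraction of real defects that are found
    return p, r

print(precision_recall(62, 3, 2))   # example counts -> (0.9538..., 0.96875)
```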
(2)
Detection Time
The real-time performance of the recognition algorithm has an important impact on the efficiency of the laser weld defect detection system of automobile brake joint, so it is necessary to analyze the speed of the recognition and classification algorithm of weld defect based on the cascade classifier. The cascade classifier algorithm is used to detect 20 test sample images, respectively, and the test result speed reflects the effect of the cascade classifier training.
Table 4 shows the time spent by the algorithm on the test sample pictures. It can be seen from the results that the time consumed for each sample picture differs, ranging from 0.17 s to 0.38 s, with an average of 0.24 s. Industrial automation requires a recognition speed below 0.3 s per image, so the cascade classifier in this paper meets the speed requirement of the vision sensor for the algorithm.
(3)
Test Results
For the defect detection and image classification of 200 laser welding seam images of automotive brake joints, the detection performance of the system is evaluated using indicators such as the detection rate, missed detection (false negative) rate, and false detection (false positive) rate. Table 5 shows the detection results of the test samples, and Table 6 shows the detection results for each defect category. Figure 21 shows the detection image of a sample, Figure 22 shows an image in which a normal sample is incorrectly detected as defective, and Figure 23 shows an image in which a defect sample is missed.
Based on the analysis in the tables above, the laser welding seam defect detection and classification system for the automobile brake joint can accurately identify and distinguish welding seam defects. The system generally has a high detection accuracy for crater defects but a relatively lower accuracy for nibbling (undercut) defects. The average classification accuracy of the system for the three types of weld defects in this article reaches over 90%, with a detection and recognition rate of 100% for crater defects. Because the characteristics of nibbling defects are less distinct, their detection and classification accuracy is generally not high; nevertheless, the recognition rate of the method in this paper still reaches 92.6%. The overall missed detection rate is less than 3%, and both the missed detection rate and false detection rate for crater defects are 0%. The main causes of false detections are insufficient clarity of the weld seam image, obstruction by dust during the welding process, or previously unseen defect types; the main causes of missed detections are specific textures or lighting conditions of the weld seam. In addition, the detection time for each image is less than 0.3 s. The experiments prove that the machine-vision-based laser welding seam defect detection and classification system for automotive brake joints designed here has high detection accuracy and efficiency and, in practical engineering, provides stable and reliable defect judgment for both defective and normal welding seam images.
From the effect of the number of training iterations on the cascade classifier, together with the detection time and test results, the following conclusions can be drawn. The machine-vision-based laser weld defect detection system for automotive brake joints developed in this study reaches performance saturation after about 500 training iterations. Using the cascade classifier algorithm, efficient detection with an average time of only 0.24 s per image is achieved, meeting the strict industrial cycle requirement of less than 0.3 s. The average classification accuracy for the three defect types (craters, holes, and nibbling) exceeds 90%, with a 100% recognition rate and zero missed or false detections for crater defects, and a 92.6% recognition rate for nibbling defects; the overall missed detection rate is less than 3%. By comparison, infrared detection takes 0.5–2 s per sample, recognizes surface craters at only about 85%, and misses about 10% of defects, while ultrasonic detection takes 1–5 s per sample, with an accuracy of 80–88% and a missed detection rate of 5–8%. The proposed system therefore improves detection speed by roughly four to eight times and, relying on non-contact optical acquisition and intelligent algorithms, avoids the thermal interference of infrared methods and the couplant contamination of ultrasonic methods, while significantly reducing hardware, operation, and maintenance costs. The cascade-classifier-based visual inspection system thus surpasses infrared and ultrasonic methods in efficiency, economy, and surface defect recognition accuracy, making it a preferred solution for online quality inspection of automotive component welds. In the future, multimodal sensing fusion could further overcome the bottleneck of internal defect detection and support comprehensive industrial quality inspection.

5. Conclusions

According to the quality requirements of an automotive parts manufacturer for electric automobile brake joints, three kinds of defects readily appear in the laser weld of the joints: craters, holes, and nibbling. Based on an analysis of related research at home and abroad, machine vision technology was used to study methods for detecting and classifying these weld defects. The system design was completed through image-processing operations and the construction of a cascaded classifier, and the image detection algorithm was validated against actual defect samples. The image defect detection and classification system applies image-processing operations to the collected images to segment the defect areas in the weld seam. A nine-dimensional weld defect feature vector was constructed for each defect area and the corresponding feature parameters were extracted; the extracted parameters were then normalized by their maximum and minimum values (min-max normalization) and used as inputs to the classifier. A cascaded classifier based on the AdaBoost algorithm was designed and trained on laser welding seam images of automotive brake joints, achieving defect detection and classification.
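As an illustration of this input pipeline, the sketch below applies min-max normalization, x' = (x − x_min)/(x_max − x_min), to a nine-dimensional feature matrix and trains an AdaBoost classifier on the normalized features. The file names are hypothetical, and scikit-learn's AdaBoostClassifier only approximates the boosted stages of the cascade described above rather than reproducing the exact implementation.

```python
# Sketch: min-max normalization of the 9-D defect feature vector + AdaBoost.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.preprocessing import MinMaxScaler

FEATURES = ["area", "length", "width", "mean_gray", "gray_variance",
            "kurtosis", "entropy", "contrast", "evenness"]

X = np.load("defect_features.npy")  # shape (n_samples, 9), hypothetical file
y = np.load("defect_labels.npy")    # 0 = crater, 1 = hole, 2 = nibbling, hypothetical
assert X.shape[1] == len(FEATURES)

# Column-wise x' = (x - x_min) / (x_max - x_min) over the training set
scaler = MinMaxScaler()
X_norm = scaler.fit_transform(X)

clf = AdaBoostClassifier(n_estimators=500, random_state=0)
clf.fit(X_norm, y)

# New defect regions must be scaled with the same min/max before prediction.
sample = scaler.transform(X[:1])
print("predicted class:", clf.predict(sample)[0])
```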
The feasibility of the laser welding seam defect detection and classification system for automotive brake joints was verified through experiments. Weld seam images captured on site were used for testing, and the experimental results show that the defect detection rate of the method reaches 97%, with an average detection time of 0.24 s per image. Compared with infrared and ultrasonic detection methods, the system offers significant advantages: a four- to eight-fold improvement in detection speed, non-contact and low-cost operation, higher surface defect recognition accuracy, and better industrial adaptability. Infrared detection relies on thermal diffusion imaging, with a detection time of 0.5–2 s, a surface crater recognition rate of only about 85%, and a missed detection rate of about 10%. Ultrasonic testing relies on high-frequency sound waves, with a detection time of 1–5 s, a nibbling defect recognition rate of about 70%, and a missed detection rate of 5–8%; it also requires couplants, resulting in higher equipment costs. These experimental results validate the effectiveness of the selected features and the performance of the AdaBoost algorithm in weld defect recognition, and the recognition results meet the design requirements. Moreover, the method is stable and can effectively improve detection efficiency.
In this research, machine vision technology was used to detect and classify laser weld defects in automotive brake joints. A laser weld defect detection system for automotive brake joints was designed, and the corresponding software algorithms were studied and deployed to achieve intelligent weld defect detection. However, the core limitation of the method is that it is only applicable to surface defect recognition (the nibbling recognition rate of 92.6% is lower than the 100% achieved for craters), cannot detect internal defects, and is susceptible to interference from strong reflections and oil contamination. Model performance also depends heavily on the quality and diversity of the training data; handling complex operating conditions and new defect types may require additional data collection and annotation. In future work, infrared thermal imaging and ultrasonic flaw detection could be integrated to enhance the detection of internal defects and achieve comprehensive inspection of surface and deep defects; image-processing algorithms based on dynamic lighting compensation could be developed to reduce the impact of environmental interference on detection accuracy; and, building on the detection results, defect cause analysis and predictive maintenance could be investigated further in combination with welding process parameters.

Author Contributions

Conceptualization, C.Y. and D.F.; methodology, C.Y.; software, H.Z.; validation, X.L., L.S. and D.F.; formal analysis, C.Y.; investigation, X.L.; resources, D.F.; data curation, L.S.; writing—original draft preparation, C.Y.; writing—review and editing, X.L.; visualization, H.Z.; supervision, L.S.; project administration, D.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author, D.F., upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Automobile brake braking system.
Figure 2. Automotive joints and welded seam. (a) Physical drawing of automobile brake joint. (b) Location of laser-welded brake joints.
Figure 3. Characteristics of crater defects.
Figure 4. Characteristics of hole defects.
Figure 5. Characteristics of nibbling defects.
Figure 6. The process of testing weld seam samples.
Figure 7. Camera-captured diagram of a car’s brake joints.
Figure 8. ROI-extracted weld map.
Figure 9. Comparison chart of filtering effect. (a) Original figure. (b) Averaging filter. (c) Gaussian filter. (d) Median filter.
Figure 10. Weld seam image enhancement.
Figure 11. Highlighted areas in the weld image.
Figure 12. After segmentation of the weld image into highlighted areas.
Figure 13. Schematic diagram of watershed division effect. (a) Pre-segmentation. (b) After segmentation.
Figure 14. Image of welding seam after watershed segmentation. (a) Boundary of the region after image segmentation. (b) Post-segmentation region of the image.
Figure 15. Expansion effect diagram. (a) Before expansion. (b) After expansion.
Figure 16. Corrosion effect diagram. (a) Before corrosion. (b) After corrosion.
Figure 17. Defective areas of the weld. (a) Characteristics of crater defects. (b) Characteristics of hole defects. (c) Characteristics of nibbling defects.
Figure 18. Cascade classifier structure diagram.
Figure 19. Positive sample set.
Figure 20. Negative sample set.
Figure 21. Defect image detection results. (a) Image of crater detection results. (b) Image of hole detection results. (c) Image of nibbling detection results. (d) Normal image detection results. (e) Normal image detection results.
Figure 22. Misidentified images. (a) Misidentified nibbling edge. (b) Misidentified nibbling edge. (c) Misidentified hole edge. (d) Misidentified hole edge.
Figure 23. Unrecognized images. (a) Nibbling edge not recognized. (b) Nibbling edge not recognized. (c) Hole edge not recognized. (d) Hole edge not recognized.
Table 1. Sample eigenvalue parameters for weld defects.
Feature Type | Feature Name | Craters | Holes | Nibbling
Geometric Feature | Area | 0.504 | 0.015 | 0.561
Geometric Feature | Lengths | 0.213 | 0.055 | 0.337
Geometric Feature | Width | 0.244 | 0.031 | 0.179
Gray-Scale Feature | Average Gray-Scale | 2.511 | 0.687 | 1.429
Gray-Scale Feature | Gray-Scale Variance | 209.185 | 123.421 | 167.113
Gray-Scale Feature | Kurtosis | 105.904 | 245.884 | 181.707
Texture Feature | Entropy | 0.933 | 1.988 | 1.785
Texture Feature | Contrast | 2.670 | 0.687 | 1.128
Texture Feature | Evenness | 0.108 | 0.158 | 0.826
Table 2. Parameter normalization of sample eigenvalues of weld defects.
Feature Type | Feature Name | Craters | Holes | Nibbling
Geometric Feature | Area | 0.172 | 0 | 0.052
Geometric Feature | Lengths | 0.004 | 0.002 | 0.007
Geometric Feature | Width | 0.006 | 0 | 0.002
Gray-Scale Feature | Average Gray-Scale | 0.010 | 0.002 | 0.009
Gray-Scale Feature | Gray-Scale Variance | 1 | 0.421 | 1
Gray-Scale Feature | Kurtosis | 0.596 | 1 | 0.539
Texture Feature | Entropy | 0.004 | 0.007 | 0.005
Texture Feature | Contrast | 0.012 | 0.002 | 0.006
Texture Feature | Evenness | 0 | 0.001 | 0.001
Table 3. The effect of the number of training sessions on the training of cascade classifiers.
Number of Training Sessions/Times | Correctness P/% | Recall Rate R/%
50 | 89.9 | 8.6
500 | 96.7 | 3.4
1000 | 96.7 | 3.1
Table 4. Detection time of weld defects using cascaded classifiers.
Sample Number | Recognition Time/s | Sample Number | Recognition Time/s
1 | 0.21 | 11 | 0.34
2 | 0.24 | 12 | 0.25
3 | 0.38 | 13 | 0.21
4 | 0.19 | 14 | 0.17
5 | 0.26 | 15 | 0.23
6 | 0.19 | 16 | 0.26
7 | 0.26 | 17 | 0.28
8 | 0.21 | 18 | 0.24
9 | 0.26 | 19 | 0.23
10 | 0.27 | 20 | 0.25
Table 5. Weld picture sample test results.
Sample | Number of Detections | Misdetection Number | Detection Rate/% | Leakage Rate/%
200 | 95 | 5 | 97.5 | 2.5
Table 6. Detection results of each defect in weld image samples.
Defect Type | Number of Defects | Number of Detections | Number of Missed Tests | Misdetection Number | Detection Rate/% | Leakage Rate/% | False Detection Rate/%
Craters | 38 | 38 | 0 | 0 | 100 | 0 | 0
Holes | 35 | 35 | 1 | 1 | 97.1 | 2.9 | 1
Nibbling | 27 | 26 | 2 | 1 | 92.6 | 7.4 | 1
Normal | 100 | 101 | 2 | 3 | 98 | 2 | 3
