Article

PCB Electronic Component Soldering Defect Detection Using YOLO11 Improved by Retention Block and Neck Structure

School of Mechanical Engineering, Anhui University of Technology, Ma’anshan 243002, China
* Authors to whom correspondence should be addressed.
Sensors 2025, 25(11), 3550; https://doi.org/10.3390/s25113550
Submission received: 5 May 2025 / Revised: 21 May 2025 / Accepted: 29 May 2025 / Published: 4 June 2025
(This article belongs to the Section Electronic Sensors)

Abstract

Printed circuit board (PCB) assembly, which relies on the soldering of surface mount electronic components, is one of the most important electronic assembly processes, and defect detection in this process is an essential part of industrial production. Traditional two-stage object detection models have a large number of parameters and long runtimes, whereas single-stage object detection algorithms run faster but offer lower detection accuracy. To address this problem, we modify the YOLO11n model. First, we use the Retention Block (RetBlock) to improve the C3K2 module in the backbone, creating the RetC3K2 module, which compensates for the limited, purely convolutional local receptive field of the original module. Second, the neck of the original network is fused with a Multi-Branch Auxiliary Feature Pyramid Network (MAFPN) structure to form a multi-branch auxiliary neck network, which enhances the model’s ability to fuse multi-scale features and delivers diverse gradient information to the output layer. Compared with the original network, the improved YOLO11n model improves mAP50 by 0.023 (2.5%) and mAP75 by 0.026 (2.8%), and detection precision is significantly improved, demonstrating the superiority of our proposed approach.

1. Introduction

With the rapid growth of the electronic information industry, printed circuit boards (PCBs) are used ever more widely in all types of electronic equipment. Because PCB assembly requires the mounting of many kinds of electronic components, surface mount technology (SMT) has become an important electronic assembly technology [1]. SMT replaces traditional through-hole (plug-in) technology and offers lower assembly volume and weight, high density, and high reliability. However, owing to instability in SMT soldering processes such as reflow or wave soldering, electronic component soldering can produce random defects, such as missing components, offsets, and tombstoning. If undetected, these defects greatly degrade PCB performance. Therefore, in actual production, surface solder defect detection for the electronic components of PCBs has become a key issue [2,3,4].
Early approaches to soldering defect detection for PCB electronic components included manual visual inspection and machine-vision-based automatic optical inspection (AOI). Manual visual inspection relies on inspectors visually examining the appearance of PCB electronic components after SMT soldering. Although simple, this method is time-consuming and laborious given the heavy inspection workload, and it is strongly limited by factors such as eyesight, inspection experience, and operator fatigue, yielding limited efficiency and accuracy [5]. An AOI system uses a high-resolution camera to capture images of the PCB surface and detects defects such as missing components, offsets, and tombstoning through image processing. It identifies defects mainly by comparing the shape, color, and other features of the captured image against a defect-free template image [6]. However, small differences in soldering color and shape between the captured image and the flawless template can cause the system to report spurious defects. In addition, AOI requires high-quality cameras and well-controlled lighting, which are costly, and the accuracy of its results depends on camera quality and environmental conditions.
Deep learning algorithms have become popular in recent years, and their application to defect detection has matured. At present, there are two main types of deep learning detection algorithms for PCB electronic component soldering defects: two-stage object detection algorithms based on candidate regions and single-stage object detection algorithms that localize and classify targets directly. Typical two-stage detectors include R-CNN [7], Faster R-CNN [8,9], and Mask R-CNN [10,11]. These algorithms first extract candidate regions from the sample image and then classify and regress those regions to filter out the target regions. The advantage is higher detection accuracy, but the computational load is larger and the model runs more slowly. Single-stage detection is dominated by the YOLO family [12], which localizes and classifies target objects directly without generating candidate regions. Its advantages are low computational cost and fast inference, but its detection accuracy is lower. For PCB electronic component soldering, where many component targets of different types must be detected against a complex background, the above methods do not perform well and are not well suited to industrial production. Therefore, in this paper we improve the YOLO11n model [13] so that it can be better applied to PCB electronic component soldering defect samples. We fuse the C3K2 module in the YOLO11n backbone with the Retention Block (RetBlock) [14], forming the RetC3K2 module, which is better suited to datasets with complex backgrounds. In addition, we fuse the original neck network of YOLO11n with the Multi-Branch Auxiliary FPN structure [15], forming a multi-branch auxiliary neck network that enhances sensitivity to small targets and reduces the missed detection rate. With the proposed improvements to YOLO11n, solder defects on surface mount components of varying sizes can all be detected within a single image, with higher detection accuracy than the original network model.
Our research contributions in this paper are mainly in the following areas:
  • Unlike earlier detection approaches that only detect soldering defects on a single PCB surface mount component, we use an improved model structure based on YOLO11n, for the first time, to detect soldering defects on a variety of surface mount components of different sizes and shapes within a single image.
  • We improve the C3K2 module in the YOLO11n backbone into the RetC3K2 module by combining the C3K2 module with RetBlock. This modification introduces a retention-based attention mechanism that strengthens global modeling ability, compensates for the limitations of pure convolution, and improves detection accuracy.
  • We improve the original YOLO11n neck into a multi-branch auxiliary neck network structure, so that shallow information is retained as an auxiliary branch feeding the deep network, which enhances the model’s multi-scale feature fusion capability and improves the accuracy of detecting defects on small targets.
This paper is organized as follows. Section 2 reviews the development of research on soldering defect detection for PCB electronic components. Section 3 describes the theoretical principles of our approach. Section 4 presents a comparative experimental analysis against previous approaches. Section 5 briefly summarizes the article.

2. Related Work

Defect detection for PCB electronic component soldering is a crucial process in the electronic assembly industry. Current mainstream defect detection approaches fall broadly into two categories: early traditional defect detection and recognition algorithms, and deep learning-based defect detection and recognition algorithms.
Early traditional defect detection algorithms primarily combine machine learning with digital image processing techniques [16]. Jiang et al. [17] proposed a novel method for PCB solder paste defect detection, using bionic color features to characterize the solder paste image and introducing an innovative sub-manifold learning approach. This method effectively identifies poor-quality solder paste while addressing limitations of the traditional methods, such as high cost and slow detection speed. Wu [18] achieved high accuracy in PCB defect detection by extracting color and geometric features of solder joints and then applying a random forest classifier to detect the defects. Luo et al. [19] proposed a novel multistep preprocessing method for MiniLED backlight PCB pad images to improve accuracy. Their approach incorporated threshold segmentation, fuzzy C-means clustering-based segmentation, and Canny edge detection, among other techniques. However, the threshold range for segmentation must be chosen precisely, and inappropriate thresholds can cause under-segmentation or over-segmentation. Zhu et al. [20] presented a detection method employing wavelet de-noising. Detection accuracy is further enhanced by wavelet de-noising and histogram equalization, achieving a defect detection rate of 100% and a recognition rate above 90%. Although these machine learning-based detection methods achieve satisfactory results, they exhibit several limitations. First, feature extraction for complex target samples is time-consuming and labor-intensive, and these methods lack generalization ability. Second, they are typically limited to classification and identification tasks, and additional postprocessing is required to localize defects.
Recently, deep learning-based detection approaches have been widely applied to PCB defect detection. Ding [21] enhanced the Faster R-CNN framework and proposed a specialized detection framework for small defects, significantly improving the efficiency of PCB defect detection for complex and diverse PCB defects; experimental results demonstrate that the method performs well on public datasets and generalizes well. Liu et al. [22] improved the Cascade Mask R-CNN defect detection framework by replacing its original backbone with a Swin Transformer to enhance defect feature extraction from samples, thereby increasing the detection accuracy of soldering defects on PCB surface mount components. Nevertheless, introducing a Swin Transformer incurs higher computational cost and longer training and inference times. Li [23] proposed a deep ensemble approach for PCB solder defect detection that combines YOLOv2 and Faster R-CNN to increase the detection rate and decrease the false alarm rate. Du et al. [24] improved the YOLOv5 network by introducing a bi-directional feature pyramid network (BiFPN) and a convolutional block attention module (CBAM) to enhance multi-scale feature fusion and improve both precision and real-time performance. Chen [25] developed a Transformer-YOLO detection approach, combining a Swin Transformer with YOLOv5 to optimize feature extraction and improve the accuracy and efficiency of PCB defect detection. Liu [26] presented the CSYOLOv8 model based on YOLOv8, which enhances detection accuracy by designing a composite backbone network that provides additional feature representation to strengthen feature expression capability. Zheng [27] proposed the FDDC-YOLOv10 network based on YOLOv10, incorporating a full-dimensional dynamic convolution (FDDC) module and a cross-channel enhanced attention (CECA) block; this design strengthens feature extraction and local interaction between channels, significantly improving the detection of small target defects. The main advantage of single-stage, YOLO-based detectors for PCB defect detection is their low computational cost and fast inference, but their detection accuracy still needs improvement.

3. Proposed Method

First, we expand the experimental dataset of PCB surface mount component soldering defects using a ControlNet-based [28] stable diffusion model [29]. We then employ the improved YOLO11n network to detect defects on the expanded dataset. The overall improved network still consists of a backbone, a neck, and a detection head. The C3K2 block in the backbone is upgraded to the RetC3K2 block using the Retention Block, and the original neck is improved into a multi-branch auxiliary neck network structure.

3.1. Expanding Dataset Using ControlNet-Based Stable Diffusion Models

Because the PCB surface mount component soldering defect dataset collected from actual production contains too few samples to support deep learning model training, it needs to be expanded. A common expansion method is the generative adversarial network (GAN) [30]. However, the samples in this dataset are not independent: each sample image contains more than one surface mount component target and mixes normal and defective soldering targets, so a GAN cannot be applied to expand this dataset. In this experiment, a ControlNet-based stable diffusion model is used instead; its structure is shown in Figure 1. First, Contrastive Language-Image Pre-training (CLIP) [31] is used to extract key cues from the input sample images. The textual cues are then encoded and injected into the latent space, where they are fused with the latent features to further control the model output. Higher-frequency cue words are selected and re-injected into the stable diffusion model to assist the Img2Img prompting. At the same time, ControlNet extracts key structural features from the input image and feeds them to the stable diffusion model for additional control over the output image. Ultimately, the stable diffusion model generates an expanded sample image that closely resembles the original input image in logical distribution and spatial layout: the two are visually very similar but differ considerably at the pixel level. As illustrated in Figure 2, Figure 2a and Figure 2b show the input sample image and the generated sample image, respectively, while Figure 2c shows the pixel-level differences between them. In addition, the generated image does not need to be re-labeled with an annotation tool; its label information can be shared with the original image, significantly reducing time and labor costs.
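As a rough illustration of this expansion pipeline, the following Python sketch runs a ControlNet-conditioned image-to-image pass with the Hugging Face diffusers library. The model identifiers, the Canny-edge conditioning, the prompt, and the strength value are illustrative assumptions rather than the exact configuration used in this work.

```python
# Hedged sketch: ControlNet-conditioned Img2Img expansion with the diffusers library.
# Model IDs, the Canny conditioning, the prompt, and the strength are assumptions.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

source = Image.open("pcb_sample.jpg").convert("RGB").resize((512, 512))

# ControlNet condition: Canny edges preserve the board layout and component positions.
gray = cv2.cvtColor(np.array(source), cv2.COLOR_RGB2GRAY)
edges = cv2.Canny(gray, 100, 200)
condition = Image.fromarray(np.stack([edges] * 3, axis=-1))

# A low-to-moderate strength keeps the spatial layout, so the original annotations
# can be shared with the generated image.
generated = pipe(
    prompt="printed circuit board with surface-mount components, industrial photo",
    image=source,
    control_image=condition,
    strength=0.4,
    num_inference_steps=30,
).images[0]
generated.save("pcb_sample_aug.jpg")
```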

3.2. Improved Overall Network Architecture of YOLO11n

The complete framework of the improved YOLO11n is illustrated in Figure 3. As the experimental results show, the YOLO11n detection network “https://docs.ultralytics.com/zh/models/yolo11” (accessed on 10 February 2025) performs only moderately on PCB surface mount component soldering defects: faced with many target types, varying scales, and complex backgrounds, it is insufficiently sensitive to small targets and prone to missed detections. To address these limitations, we improve the C3K2 module in the backbone by combining it with the Retention Block to form the RetC3K2 module, and we improve the original neck into a multi-branch auxiliary neck network.
The model detects defects as follows: the input image is first preprocessed (e.g., scaled and normalized), features are then extracted by the backbone network, and the extracted features are fed into the neck network for multi-scale feature fusion. Finally, the detection head outputs the final predictions for target localization and classification.

3.3. RetC3K2 Module Structure Theory

The C3K2 module is an important and efficient feature extraction component in the YOLO11n model. It improves on the design of the traditional C3 module by introducing multi-scale convolutional kernels and adopting a channel separation strategy, enabling the model to capture contextual information over a larger range and enhancing feature extraction in complex scenes and deep-level tasks. However, the original C3K2 module relies solely on stacked convolutions, which is computationally efficient and stable to train but inflexible: its branches are fixed to bottleneck layers, its receptive field is limited and cannot be adjusted dynamically, its locality constraint is strong, and it lacks global dependency modeling. Consequently, the original C3K2 module performs poorly on datasets such as PCB surface mount component soldering defects, which involve small targets, many target types, and complex, dense backgrounds. To address this, we introduce the Retention Block (RetBlock) into the C3K2 module, upgrading it to the RetC3K2 module. RetBlock adds dynamic retention-based attention on top of the local perception of traditional convolution; through the coordinated design of convolution and attention, local and global features are modeled jointly and the accuracy of small target detection is improved. RetBlock, whose structure is illustrated in Figure 4, first employs a depthwise separable positional convolution (DWConv) to inject local positional information, enhancing the network’s ability to recognize low-level feature details. Manhattan Self-Attention (MaSA) [14], as the core component of RetNet, then uses a dynamic spatial attention mechanism to strengthen the model’s ability to capture global context and improve small target detection. Finally, a feed-forward network (FFN) introduces a nonlinear transformation to further enhance feature representation.
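A minimal PyTorch sketch of this DWConv → attention → FFN arrangement is given below. For simplicity, ordinary multi-head self-attention stands in for MaSA and no Manhattan-distance decay mask is applied; the layer sizes are illustrative assumptions, not the RMT implementation.

```python
# Hedged sketch of a RetBlock-style unit: local positional DWConv, spatial
# self-attention (standing in for MaSA), and a feed-forward network.
# Channel sizes and the plain MultiheadAttention are illustrative assumptions.
import torch
import torch.nn as nn

class RetBlockSketch(nn.Module):
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # Depthwise convolution injects local positional information.
        self.dwconv = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.norm1 = nn.LayerNorm(channels)
        # Plain self-attention used here in place of Manhattan Self-Attention (MaSA).
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(channels)
        self.ffn = nn.Sequential(
            nn.Linear(channels, channels * 4), nn.GELU(), nn.Linear(channels * 4, channels)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        x = x + self.dwconv(x)                      # local position encoding
        tokens = x.flatten(2).transpose(1, 2)       # (B, H*W, C)
        t = self.norm1(tokens)
        tokens = tokens + self.attn(t, t, t, need_weights=False)[0]  # global context
        tokens = tokens + self.ffn(self.norm2(tokens))               # nonlinear transform
        return tokens.transpose(1, 2).reshape(b, c, h, w)

# Example: a 64-channel 40x40 feature map passes through with its shape unchanged.
out = RetBlockSketch(64)(torch.randn(1, 64, 40, 40))
```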
The structure of the RetC3K2 module, obtained by combining the C3K2 module with RetBlock, is also illustrated in Figure 4. First, the number of channels is adjusted by a convolution, and the result is split into two branches by a Split operation. One branch retains the original features as a residual baseline, while the other passes through serially connected RetC3K or RetBlock components, with the number of serial units n generally set to 2. The parameter C3K controls whether the branch uses RetC3K or RetBlock components, balancing computational cost and performance. When C3K is true, the branch passes through the RetC3K module, whose internal relative position encoding RelPos, based on the Manhattan distance, generates an attenuation mask; this lets the model adaptively learn to adjust weights according to the correlation between different positions, providing a form of spatial prior, and the integrated MaSA allows the model to better understand the spatial relationships between pixels. When C3K is false, the branch passes through the RetBlock module; in this case, the relative position encoding RelPos must be passed in from outside to preserve the representation of spatial relationships and the integrity of the information.
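The split-and-concatenate skeleton of RetC3K2 can likewise be sketched as follows, assuming the RetBlockSketch class from the previous snippet is in scope and standing in for the RetC3K/RetBlock units; the channel handling and final fusion are simplified assumptions rather than the exact module.

```python
# Hedged skeleton of the RetC3K2 wrapper: 1x1 conv, split into a residual branch and
# a branch of n serial retention units, then concatenate and fuse with a 1x1 conv.
# Assumes RetBlockSketch from the previous sketch; channel splits are assumptions.
import torch
import torch.nn as nn

class RetC3K2Sketch(nn.Module):
    def __init__(self, c_in: int, c_out: int, n: int = 2):
        super().__init__()
        c_hidden = c_out // 2
        self.cv1 = nn.Conv2d(c_in, 2 * c_hidden, 1)                       # adjust channels
        self.blocks = nn.Sequential(*[RetBlockSketch(c_hidden) for _ in range(n)])
        self.cv2 = nn.Conv2d(3 * c_hidden, c_out, 1)                      # fuse all branches

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a, b = self.cv1(x).chunk(2, dim=1)   # Split: residual branch a, working branch b
        y = self.blocks(b)                   # n serial RetBlock/RetC3K-style units
        return self.cv2(torch.cat([a, b, y], dim=1))

out = RetC3K2Sketch(64, 64)(torch.randn(1, 64, 40, 40))
```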
Compared with the original C3K2 module, RetC3K2 introduces RetBlock, which replaces the purely convolutional local receptive field with the MaSA attention mechanism. Combined with the attenuation mask generated by the relative position encoding RelPos, and unlike the fixed weights of the original convolutional kernels, this allows the model to adaptively learn the importance of different positions. The enhancement significantly improves the detection accuracy of target defects.

3.4. Multi-Branch Auxiliary Neck Network

The neck of the original YOLO11 primarily employs a Path Aggregation Network (PAN), which combines feature maps from different layers through up-sampling and concatenation and then fuses them with convolution modules. The C3K2 module introduced in this structure helps address the multi-scale feature fusion challenges of target detection. However, this approach treats all feature maps equally during fusion, without targeted enhancement of critical features; in complex scenes, low-quality features may be fused indiscriminately, increasing the false detection rate. Moreover, feature reuse is inefficient, and features lack an adaptive filtering mechanism when passing across layers, so shallow detail features (e.g., P3) are easily overwhelmed by high-level semantic features (e.g., P5) when propagated to deeper layers, reducing small target detection accuracy.
To address these issues, we integrate the original neck with the Multi-Branch Auxiliary FPN (MAFPN) structure and propose the multi-branch auxiliary neck network. The MAFPN consists primarily of Surface Auxiliary Fusion (SAF) and Advanced Auxiliary Fusion (AAF). The SAF structure is illustrated in Figure 5, where Pn−1, Pn, and Pn+1 denote feature maps of different resolutions; Pn is a feature layer of the backbone, and P′n and P″n denote the two paths of the MAFPN. We integrate the SAF structure into the two Concat modules (numbered 16 and 20, indicated by light red boxes in Figure 3). The fused SAF structure fuses deep-layer information with same-layer and higher-resolution shallow-layer features from the backbone through cross-layer skip connections, retaining shallow information as an auxiliary branch into the deeper network and thereby enhancing small target detection. The AAF structure is illustrated in Figure 6: it aggregates the shallow high-resolution layer P′n+1, the shallow low-resolution layer P′n−1, the corresponding shallow layer P′n, and the previous layer P″n−1, integrating them to produce the output P″n. We fuse the AAF structure into the Concat module (number 26, indicated by the blue box in Figure 3). The fused AAF structure can merge feature information from four different layers simultaneously, enabling the output layer to retain comprehensive multi-scale information and thereby improving the detection of medium-sized targets.
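The following sketch illustrates the kind of cross-scale fusion that SAF performs: a shallow high-resolution map is down-sampled, a deeper low-resolution map is up-sampled, and both are concatenated with the same-scale map before a 1×1 fusion convolution. The channel counts and the nearest-neighbor/strided-convolution alignment are illustrative assumptions, not MAFPN’s exact operations.

```python
# Hedged sketch of SAF-style fusion: align neighboring-scale feature maps to the
# current resolution, concatenate, and fuse with a 1x1 convolution.
# Channel counts and alignment choices are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SurfaceAuxFusionSketch(nn.Module):
    def __init__(self, c_shallow: int, c_same: int, c_deep: int, c_out: int):
        super().__init__()
        # Strided conv downsamples the shallow, high-resolution map to the current scale.
        self.down = nn.Conv2d(c_shallow, c_shallow, 3, stride=2, padding=1)
        self.fuse = nn.Conv2d(c_shallow + c_same + c_deep, c_out, 1)

    def forward(self, p_shallow, p_same, p_deep):
        up = F.interpolate(p_deep, size=p_same.shape[-2:], mode="nearest")  # deep -> up
        down = self.down(p_shallow)                                          # shallow -> down
        return self.fuse(torch.cat([down, p_same, up], dim=1))

# Example: P3 (80x80), P4 (40x40), and P5 (20x20) maps fused at the P4 scale.
p3 = torch.randn(1, 64, 80, 80)
p4 = torch.randn(1, 128, 40, 40)
p5 = torch.randn(1, 256, 20, 20)
fused = SurfaceAuxFusionSketch(64, 128, 256, 128)(p3, p4, p5)  # -> (1, 128, 40, 40)
```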
The multi-branch auxiliary neck network, formed by integrating the original neck with the MAFPN structure, relies on SAF to consolidate the output features of the backbone and the neck, preserving shallow information in the deeper network. At the same time, the deeper AAF integrates multi-scale feature information and delivers diverse gradient information to the output layer. This architecture significantly improves the detection accuracy of small target defects.

4. Results of the Experiment

4.1. Experimental Dataset and Related Parameter Settings

The dataset used in our experiments consists of PCB surface mount electronic component soldering defect images, 210 samples in total, divided into training, validation, and test sets in the proportion 8:1:1. It covers seven kinds of surface mount electronic components, shown schematically in Figure 7. Each SMT component type has corresponding defect types: capacitors and resistors can be missing, offset, or tombstoned; inductors can be missing or offset; and diodes, triodes, thermistors, and potentiometers can be missing. Examples are shown in Table 1 below. Together, the normal and defective classes yield 19 distinct labeled categories.
Our experiments were conducted on a Linux system equipped with an NVIDIA GeForce RTX 4060 Ti GPU. The software environment consisted of CUDA 12.4, PyTorch 2.5.1, and Python 3.9.21. We used the following training parameters: a learning rate of 0.001, a batch size of 2, and 300 training epochs, with the Stochastic Gradient Descent (SGD) optimizer.
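A training call matching these hyperparameters could look like the following sketch with the Ultralytics API; the custom model and dataset YAML names are hypothetical placeholders, the input size is an assumption, and the improved RetC3K2/MAFPN modules would have to be registered separately.

```python
# Hedged sketch of the reported training setup (SGD, lr 0.001, batch 2, 300 epochs)
# using the Ultralytics API. YAML names are hypothetical placeholders; the RetC3K2
# and MAFPN modules would need to be registered with Ultralytics separately.
from ultralytics import YOLO

model = YOLO("yolo11n-retc3k2-mafpn.yaml")   # hypothetical custom model definition
model.train(
    data="pcb_solder_defects.yaml",          # hypothetical dataset config (19 classes)
    epochs=300,
    batch=2,
    lr0=0.001,
    optimizer="SGD",
    imgsz=640,                               # assumed input size (not stated in the paper)
)
```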

4.2. Comparative Analysis of Experimental Results

The experiment is based on the improved YOLO11n defect detection algorithm. To demonstrate its superiority on the PCB surface mount component soldering dataset, we compare it against the two-stage detectors Mask R-CNN, Swin Transformer Mask R-CNN, and ConvNeXt-based Cascade Mask R-CNN, as well as against single-stage detectors including the original YOLO11n. The main accuracy metrics are precision (P), average recall (AR), and mean average precision (mAP) at various IoU thresholds, while the number of model parameters (Params) and Giga Floating-Point Operations (GFLOPs) are used to assess computational cost.
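For reference, precision, recall, and mAP follow the standard object detection definitions, where TP, FP, and FN denote true positives, false positives, and false negatives, N is the number of classes, and mAP50 and mAP75 evaluate AP at IoU thresholds of 0.5 and 0.75, respectively:

```latex
P = \frac{TP}{TP + FP}, \qquad
R = \frac{TP}{TP + FN}, \qquad
AP = \int_{0}^{1} P(R)\,\mathrm{d}R, \qquad
mAP = \frac{1}{N}\sum_{i=1}^{N} AP_i
```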
Table 2 presents the comparison between our improved YOLO11n defect detection algorithm and the two-stage detectors. Our proposed method does not reach the mAP and AR of the ConvNeXt-based Cascade Mask R-CNN, but its Params and GFLOPs are much smaller, so it places far lower demands on hardware and has much lower computational complexity.
Table 3 compares the improved YOLO11n defect detection algorithm with the original YOLO11n and with single-stage detectors such as YOLOv10n and YOLOv8n. The average accuracy of our improved YOLO11n is greatly improved over the original model: mAP50 rises from 0.927 to 0.950, an improvement of 0.023 (2.5%), and mAP75 rises from 0.924 to 0.950, an improvement of 0.026 (2.8%). Precision P and average recall AR also improve, by 0.66% and 0.67%, respectively. Params and GFLOPs are comparable to those of the original network, i.e., the computational cost is essentially unchanged. These quantitative results clearly demonstrate the superiority of the proposed improvements.
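The bracketed percentages are relative gains with respect to the original model’s scores, e.g.,

```latex
\frac{0.950 - 0.927}{0.927} \approx 2.5\%, \qquad
\frac{0.950 - 0.924}{0.924} \approx 2.8\%
```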
Table 4 presents ablation results comparing different module combinations in the improved YOLO11n architecture. YOLO11n combined with both the RetC3K2 module and the multi-branch auxiliary neck network shows a clear mAP improvement over either module alone, while Params and GFLOPs remain almost unchanged, i.e., the computational cost is about the same. These results validate the benefit of combining the two improvements.
Figure 8 shows some soldering defect detection results for PCB surface mount electronic components obtained with the improved YOLO11n model and the original YOLO11n model. Figure 8(a1–a3) show the original test images, Figure 8(b1–b3) show the detection results of the original YOLO11n model, and Figure 8(c1–c3) show the detection results of the improved YOLO11n model. The original YOLO11n model is clearly prone to false or missed detections against complex backgrounds with multiple detection objects: the offset inductor is misdetected as multiple targets in Figure 8(b1), the potentiometer in the upper-right corner of Figure 8(b2) is missed, and the potentiometer in the middle of Figure 8(b3) is missed because of the complex holes in the surrounding background. In contrast, our improved YOLO11n method effectively corrects these false and missed detections while maintaining high detection accuracy.

5. Conclusions

This paper focuses on detecting soldering defects in PCB surface mount electronic components. To improve on the detection accuracy of the original YOLO11n, we first upgrade the C3K2 module in the backbone to the RetC3K2 module using the Retention Block (RetBlock). This upgraded module addresses the limited receptive field of conventional convolutions through integration with MaSA attention. Furthermore, we incorporate the relative position encoding RelPos to generate an adaptive attenuation mask, replacing the original fixed-weight convolutional kernels; this lets the model dynamically learn the importance of different positions, greatly improving defect detection performance. We then develop a multi-branch auxiliary neck network by integrating the Multi-Branch Auxiliary FPN (MAFPN) structure. The enhanced architecture effectively integrates the output features of the backbone and the neck while preserving shallow information in the deep network, strengthening multi-scale feature fusion and delivering diversified gradient information to the output layer, which substantially improves the detection accuracy of small target defects.
The experimental results demonstrate the superiority of our improved YOLO11n model for soldering defect detection on PCB surface-mounted electronic components. Compared with the original YOLO11n network, our model achieves improvements of 0.023 (2.5%) in mAP50 and 0.026 (2.8%) in mAP75, along with markedly enhanced detection accuracy. The proposed method maintains both high accuracy and high detection throughput, offering a practical solution for defect detection in industry. Although the overall performance of the improved model has been enhanced, some deficiencies remain, and there is still room to improve detection accuracy and precision. Future work will explore more advanced model architectures to improve detection accuracy, expand the detectable target types, and enhance the model’s applicability in industrial settings.

Author Contributions

Conceptualization, Y.X. and H.W.; methodology, H.W.; software, Y.X.; validation, Y.X., H.W. and Y.L.; formal analysis, Y.X.; investigation, H.W.; resources, Y.L.; data curation, Y.X.; writing—original draft preparation, Y.X.; writing—review and editing, X.Z.; visualization, X.Z.; supervision, H.W.; project administration, H.W.; funding acquisition, H.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the Anhui Provincial Natural Science Foundation (2108085ME166), as well as by the Natural Science Research Project of Universities in Anhui Province (KJ2021A0408).

Data Availability Statement

The data that support the findings of this study are available from the corresponding author, H.W., upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bingyi, L.; Songtao, Q.; Gong, Z. The preparation and wettability of the Sn-9Zn-2.5Bi-1.5In solder paste for SMT process and high shear ball performance. Solder. Surf. Mt. Technol. 2025, 37, 163–172. [Google Scholar]
  2. Da Silva, H.G.; Amaral, T.G. Automatic Optical Inspection for Detecting Defective Solders on Printed Circuit Boards. In Proceedings of the 36th Annual Conference of IEEE Industrial Electronics Society, Glendale, AZ, USA, 7–10 November 2010; pp. 1087–1091. [Google Scholar]
  3. Cai, N.; Lin, J.; Ye, Q.; Wang, H.; Weng, S.; Ling, B.W. A New IC Solder Joint Inspection Method for an Automatic Optical Inspection System Based on an Improved Visual Background Extraction Algorithm. IEEE Trans. Compon. Packag. Manuf. Technol. 2016, 6, 161–172. [Google Scholar]
  4. Zhou, Y.; Yuan, M.; Zhang, J.; Ding, G.; Qin, S. Review of vision-based defect detection research and its perspectives for printed circuit board. J. Manuf. Syst. 2023, 70, 557–578. [Google Scholar] [CrossRef]
  5. Abd Al Rahman, M.; Mousavi, A. A Review and Analysis of Automatic Optical Inspection and Quality Monitoring Methods in Electronics Industry. IEEE Access 2020, 8, 183192–183271. [Google Scholar]
  6. Huang, N.; Feng, P. Motion Speed Detection Algorithm Based on the Control Platform of Automatic Optical Inspection Systems for PCB. Adv. Sci. Ind. Res. Cent. CMSMS 2018, 391–395. [Google Scholar] [CrossRef]
  7. Chen, Y.; Wang, J.; Wang, G. Intelligent Welding Defect Detection Model on Improved R-CNN. IETE J. Res. 2023, 69, 9235–9244. [Google Scholar] [CrossRef]
  8. Girshick, R. Fast R-CNN. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV 2015), Santiago, Chile, 11–18 December 2015; pp. 1440–1448. [Google Scholar]
  9. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef]
  10. He, K.; Gkioxari, G.; Dollar, P.; Girshick, R. Mask R-CNN. In Proceedings of the 2017 IEEE International Conference on Computer Vision: ICCV 2017, Venice, Italy, 22–29 October 2017; pp. 2970–3705. [Google Scholar]
  11. Song, H.C.; Knag, M.S.; Kimg, T.E. Object Detection based on Mask R-CNN from Infrared Camera. J. Digit. Contents Soc. 2018, 19, 1213–1218. [Google Scholar] [CrossRef]
  12. Redmon, J.; Divvala, K.S.; Girshick, B.R. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
  13. Ji, Y.; Zhang, D.; He, Y.; Zhao, J.; Duan, X.; Zhang, T. Improved YOLO11 Algorithm for Insulator Defect Detection in Power Distribution Lines. Electronics 2025, 14, 1201. [Google Scholar] [CrossRef]
  14. Fan, Q.; Huang, H.; Chen, M.; Liu, H.; He, R. RMT: Retentive Networks Meet Vision Transformers. In Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–22 June 2024; pp. 641–5651. [Google Scholar]
  15. Yang, Z.; Guan, Q.; Zhao, K.; Yang, J.; Xu, X. Multi-branch Auxiliary Fusion YOLO with Re-parameterization Heterogeneous Convolutional for Accurate Object Detection. In Pattern Recognition and Computer Vision; Lecture Notes in Computer Science: 15042 LNCS; Springer: Berlin/Heidelberg, Germany, 2024; pp. 492–505. [Google Scholar]
  16. Kumar, A.; Pang, H.K.G. Defect detection in textured materials using Gabor filters. IEEE Trans. Ind. Appl. 2002, 38, 425–440. [Google Scholar] [CrossRef]
  17. Jiang, J.; Cheng, J. Color Biological Features-Based Solder Paste Defects Detection and Classification on Printed Circuit Boards. IEEE Trans. Compon. Packag. Manuf. Technol. 2012, 2, 1536–1544. [Google Scholar] [CrossRef]
  18. Wu, H. Solder joint defect classification based on ensemble learning. Solder. Surf. Mt. Technol. 2017, 29, 164–170. [Google Scholar] [CrossRef]
  19. Luo, W.; Zou, X.; Chen, J.; Liang, T.; Ding, H.; Ni, Q. Machine vision-based mini led backlight pcb pad inspection system. Autom. Inform. Eng. 2022, 43, 20–2640. [Google Scholar]
  20. Zhu, J.; Wu, A.; Liu, X. Printed circuit board defect visual detection based on wavelet denoising. IOP Conf. Ser. Mater. Sci. Eng. 2018, 392, 062055. [Google Scholar] [CrossRef]
  21. Ding, R.; Dai, L.; Li, G.; Liu, H. TDD-net: A tiny defect detection network for printed circuit boards. CAAI Trans. Intell. Technol. 2019, 4, 110–116. [Google Scholar] [CrossRef]
  22. Liu, Y.; Wu, H.; Xu, Y.; Liu, X.; Yu, X. Automatic PCB Sample Generation and Defect Detection Based on ControlNet and Swin Transformer. Sensors 2024, 24, 3473. [Google Scholar] [CrossRef]
  23. Li, Y.T.; Kuo, P.; Guo, J.I. Automatic Industry PCB Board DIP Process Defect Detection with Deep Ensemble Method. In Proceedings of the 29th IEEE International Symposium on Industrial Electronics (ISIE), Delft, The Netherlands, 17–19 June 2020; pp. 453–459. [Google Scholar]
  24. Du, B.; Wan, F.; Lei, G.; Xu, L.; Xu, C.; Xiong, Y. YOLO-MBBi: PCB Surface Defect Detection Method Based on Enhanced YOLOv5. Electronics 2023, 12, 2821. [Google Scholar] [CrossRef]
  25. Chen, W.; Huang, Z.; Mu, Q.; Sun, Y. PCB Defect Detection Method Based on Transformer-YOLO. IEEE Access 2022, 10, 129480–129489. [Google Scholar] [CrossRef]
  26. Liu, C.J.; Zhang, M.; Run, H.; Wu, X.S. CSYOLO: A YOLOv8 PCB defect detection model integrating the main trunk network and dynamic snake convolution is proposed. J. Meas. Sci. Instrum. 2025, 1–12. Available online: http://kns.cnki.net/kcms/detail/14.1357.th.20250324.1743.002.html (accessed on 10 March 2025).
  27. Zheng, H.; Peng, J.; Yu, X.; Wu, M.; Huang, Q.; Chen, L. FDDC-YOLO: An efficient detection algorithm for dense small-target solder joint defects in PCB inspection. J. Real-Time Image Process. 2025, 22, 83. [Google Scholar] [CrossRef]
  28. Zhang, L.; Rao, A.; Agrawala, M. Adding Conditional Control to Text-to-Image Diffusion Models. In Proceedings of the CVF International Conference on Computer Vision: ICCV 2023, Paris, France, 1–6 October 2023; pp. 3813–3824. [Google Scholar]
  29. Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; Ommer, B. High-Resolution Image Synthesis with Latent Diffusion Models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 10674–10685. [Google Scholar]
  30. Goodfellow, J.I.; Pouget-Abadie, J.; Mirza, M. Generative Adversarial Networks. arXiv 2014, arXiv:1406.2661. [Google Scholar] [CrossRef]
  31. Radford, A.; Kim, J.W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. Learning Transferable Visual Models from Natural Language Supervision. In Proceedings of the International Conference on Machine Learning: ICML 2021, Online, 18–24 July 2021; Volume 139, pp. 8748–8763. [Google Scholar]
  32. Liu, Y.; Wu, H. Automatic Solder Defect Detection in Electronic Components Using Transformer Architecture. IEEE Trans. Compon. Packag. Manuf. Technol. 2024, 14, 166–175. [Google Scholar] [CrossRef]
  33. Xu, Y.; Wu, H.; Liu, Y.; Liu, X. Printed Circuit Board Sample Expansion and Automatic Defect Detection Based on Diffusion Models and ConvNeXt. Micromachines 2025, 16, 261. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The structure of ControlNet-based stable diffusion model.
Figure 2. Sample generation results: (a) original image; (b) generated image; (c) difference between the original image and the generated image.
Figure 3. Diagram of the overall network architecture of the improved YOLO11n.
Figure 4. RetBlock, RetC3K, and RetC3K2 module architecture schematics.
Figure 5. SAF structure schematic.
Figure 6. AAF structure schematic.
Figure 7. Images of mounted electronic components: (a) resistor; (b) thermistor; (c) inductor; (d) diode; (e) capacitor; (f) triode; and (g) potentiometer.
Figure 8. Partial image of defect detection results: (a1–a3) the original images for defect detection; (b1–b3) original YOLO11n model detection results; (c1–c3) the improved YOLO11n model detection results.
Table 1. PCB electronic component soldering defect schematic table (defect names and abbreviations; picture examples appear in the published version).

resistor missing (R-missing) | resistor shift (R-shift) | resistor tombstone (R-tombstone)
capacitor missing (C-missing) | capacitor shift (C-shift) | capacitor tombstone (C-tombstone)
inductor missing (L-missing) | inductor shift (L-shift) | thermistor missing (M-missing)
triode missing (Q-missing) | diode missing (D-missing) | potentiometer missing (RP-missing)
Table 2. Comparative analysis of the improved YOLO11n detection algorithm and two-stage detection algorithms.

Model | mAP | mAP50 | mAP75 | AR | Params/M | GFLOPs/G
Mask R-CNN [11] | 0.746 | 0.934 | 0.929 | 0.814 | 43.75 | 258.2
ST–Mask R-CNN [32] | 0.864 | 0.947 | 0.944 | 0.892 | 47.37 | 261.8
ConvNeXt Cascade Mask R-CNN [33] | 0.886 | 0.962 | 0.962 | 0.907 | 85.84 | 472.3
Our proposed method | 0.850 | 0.950 | 0.950 | 0.899 | 2.58 | 6.9
Table 3. Comparative analysis of the improved YOLO11n detection algorithm and single-stage detection algorithms.

Model | P | mAP50 | mAP75 | AR | Params/M | GFLOPs/G
YOLOv8n | 0.886 | 0.909 | 0.901 | 0.864 | 3.01 | 8.1
YOLOv10n | 0.863 | 0.881 | 0.881 | 0.835 | 2.27 | 6.5
YOLO11n | 0.898 | 0.927 | 0.924 | 0.893 | 2.59 | 6.3
Our proposed method | 0.904 | 0.950 | 0.950 | 0.899 | 2.58 | 6.9
Table 4. Comparative data table of ablation experiments with the improved YOLO11n.

Model | P | mAP50 | mAP75 | AR | Params/M | GFLOPs/G
YOLO11n | 0.898 | 0.927 | 0.924 | 0.893 | 2.59 | 6.3
YOLO11n + RetC3K2 | 0.892 | 0.937 | 0.935 | 0.881 | 2.47 | 6.2
YOLO11n + MAFPN Neck | 0.900 | 0.942 | 0.938 | 0.884 | 2.70 | 7.1
YOLO11n + RetC3K2 + MAFPN Neck | 0.904 | 0.950 | 0.950 | 0.899 | 2.58 | 6.9
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
