Detection Method of Stator Coating Quality of Flat Wire Motor Based on Improved YOLOv8s
Abstract
1. Introduction
- DSFI-HEAD Module: Traditional detection heads often suffer from feature loss or inaccurate localization when dealing with complex backgrounds and small targets. To address this challenge, the DSFI-HEAD module is proposed. It enhances feature fusion and the detection head’s representational capability, significantly improving detection accuracy, especially for small targets and in complex scenes;
- LFEG Module: Conventional algorithms may suffer from increased network complexity and parameter count when learning complex tasks, which hurts real-time performance and efficiency. To address this, the LFEG module is introduced; it reduces network parameters and complexity, optimizing the network structure and improving detection speed and efficiency. Its core idea is to use feature extractors to extract and fuse task-related features from multi-layer convolutional networks into joint feature maps carrying richer information. Aligning the tasks improves their synergy and thereby the accuracy and efficiency of the network.
- Based on YOLOv8s, DSFI-HEAD and LFEG are combined to construct the YOLOv8s-DFJA network, which meets the requirements of a lightweight network and real-time detection while maintaining accuracy.
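The joint-feature idea behind LFEG can be illustrated with a minimal PyTorch sketch. The stage channel counts, the 1 × 1 extractors, and the fusion layer below are assumptions for illustration only, not the authors’ exact design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointFeatureExtractor(nn.Module):
    """Hypothetical sketch of the LFEG idea: lightweight extractors pull
    task-related features from several convolutional stages and fuse them
    into one joint feature map. All layer sizes are assumptions."""
    def __init__(self, chans=(64, 128, 256), out_ch=128):
        super().__init__()
        # one 1x1 extractor per stage keeps the parameter count low
        self.extract = nn.ModuleList(nn.Conv2d(c, out_ch, 1) for c in chans)
        self.fuse = nn.Conv2d(out_ch * len(chans), out_ch, 1)

    def forward(self, feats):
        # upsample every extracted map to the highest resolution, then fuse
        target = feats[0].shape[-2:]
        aligned = [F.interpolate(e(f), size=target, mode="nearest")
                   for e, f in zip(self.extract, feats)]
        return self.fuse(torch.cat(aligned, dim=1))  # joint feature map
```

A usage example would pass three backbone stages of decreasing resolution and receive one fused map at the finest scale.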
2. Related Work
3. Proposed YOLOv8s-DFJA
3.1. DSFI-HEAD
- (1) Concat_features generates offset_and_mask of size (B, 3 × 3 × 3, W, H) through the spatial offset convolution layer. The first 18 channels (2 × 3 × 3, i.e., an (x, y) offset for each of the nine 3 × 3 sampling points) are extracted as the offset, and the remaining nine channels are used as the mask, which is normalized by a Sigmoid. In this way, offset_and_mask is divided into two parts, offset and mask, forming the input to the classification and regression task decomposition modules;
- (2) After adaptive average pooling, the size of Concat_features becomes (B, C, 1, 1). In the classification and regression task decomposition modules, a convolution reduces the feature to (B, 16, 1, 1); after ReLU activation and a second convolution, the size becomes (B, 2, 1, 1), yielding a weight of shape (B, 2). This weight is reshaped to (B, 1, 2, 1), multiplied by the convolution weight, and reshaped to (B, C/2, C). The input feature is reshaped to (B, C, W × H); after a torch.bmm operation its size becomes (B, C/2, W × H), which is then reshaped to (B, C/2, W, H). For the regression task, the feature information, offset, and mask are fed to the DCNv2 layer, which applies deformable convolution, normalization, and related operations, so the final regression feature has shape (B, C/2, W, H);
- (3) Concat_features, after Conv, ReLU, and Conv operations, is processed by a Sigmoid to obtain the classification probability. The output features of the classification task are multiplied pixel by pixel with this probability to obtain weighted classification features, which a 1 × 1 convolution layer transforms into the final classification prediction output; the regression features are likewise transformed into the required output dimension. Because the weights are shared across levels, the detection head alone cannot adapt to targets of different sizes; therefore, after the REG convolution, the feature information is passed to a scale layer for feature scaling so that targets of different sizes can be detected. The regression prediction output and the classification prediction output are concatenated along the channel dimension to form the final output.
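Step (1) can be sketched in PyTorch as follows. The module and layer names are hypothetical; only the channel arithmetic (18 offset channels plus 9 Sigmoid-normalized mask channels) follows the text:

```python
import torch
import torch.nn as nn

class OffsetMaskSplit(nn.Module):
    """Sketch of step (1): generate a 27-channel offset_and_mask tensor and
    split it into 18 offset channels (an (x, y) pair per 3x3 sampling point)
    and 9 mask channels normalized by Sigmoid. Names are illustrative."""
    def __init__(self, in_ch):
        super().__init__()
        # 3 * 3 * 3 = 27 output channels: 18 offsets + 9 masks
        self.spatial_conv = nn.Conv2d(in_ch, 27, kernel_size=3, padding=1)

    def forward(self, concat_features):
        offset_and_mask = self.spatial_conv(concat_features)  # (B, 27, H, W)
        offset = offset_and_mask[:, :18]                      # (B, 18, H, W)
        mask = torch.sigmoid(offset_and_mask[:, 18:])         # (B, 9, H, W)
        return offset, mask
```

In the full head, offset and mask would then condition a DCNv2 (modulated deformable convolution) layer, e.g. torchvision’s `ops.deform_conv2d`.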
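The dynamic-weight path of step (2) can be sketched as below, reproducing the shape sequence from the text ((B, C, 1, 1) → (B, 16, 1, 1) → (B, 2, 1, 1) → torch.bmm); the DCNv2 branch is omitted and the weight initialization is an assumption:

```python
import torch
import torch.nn as nn

class DynamicTaskDecomposition(nn.Module):
    """Sketch of step (2): per-image dynamic weights modulate a shared
    convolution weight, applied as a batched 1x1 convolution via torch.bmm.
    Assumes C is even; all names are illustrative."""
    def __init__(self, c):
        super().__init__()
        assert c % 2 == 0
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc1 = nn.Conv2d(c, 16, 1)
        self.fc2 = nn.Conv2d(16, 2, 1)
        # learnable "convolution weight" folded to (B, C/2, C) at run time
        self.conv_weight = nn.Parameter(torch.randn(1, c // 2, 2, c // 2) * 0.01)

    def forward(self, x):
        b, c, h, w = x.shape
        # (B, C, 1, 1) -> (B, 16, 1, 1) -> (B, 2, 1, 1) -> weight (B, 2)
        wgt = self.fc2(torch.relu(self.fc1(self.pool(x)))).flatten(1)
        wgt = torch.sigmoid(wgt).reshape(b, 1, 2, 1)          # (B, 1, 2, 1)
        # modulate the shared weight and fold it to (B, C/2, C)
        dyn = (wgt * self.conv_weight).reshape(b, c // 2, c)
        # apply as a per-image 1x1 convolution via batched matmul
        out = torch.bmm(dyn, x.reshape(b, c, h * w))          # (B, C/2, H*W)
        return out.reshape(b, c // 2, h, w)
```

The design choice here mirrors TOOD-style task decomposition: the pooled global context decides, per image, how the two task branches share the reduction weight.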
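Step (3) — probability-weighted classification, a learnable scale on the regression branch, and channel-wise concatenation — can be sketched as follows; the channel counts and module names are assumptions:

```python
import torch
import torch.nn as nn

class HeadOutput(nn.Module):
    """Sketch of step (3): a Conv-ReLU-Conv-Sigmoid branch produces a
    classification probability that reweights the classification features;
    the regression output is multiplied by a learnable per-level scale,
    then both predictions are concatenated along channels."""
    def __init__(self, c, num_classes, reg_ch=4):
        super().__init__()
        self.prob = nn.Sequential(
            nn.Conv2d(c, c, 1), nn.ReLU(), nn.Conv2d(c, 1, 1), nn.Sigmoid())
        self.cls_pred = nn.Conv2d(c, num_classes, 1)  # 1x1 prediction conv
        self.reg_pred = nn.Conv2d(c, reg_ch, 1)
        self.scale = nn.Parameter(torch.ones(1))      # per-level scale layer

    def forward(self, cls_feat, reg_feat):
        p = self.prob(cls_feat)                        # (B, 1, H, W)
        cls_out = self.cls_pred(cls_feat * p)          # weighted classification
        reg_out = self.reg_pred(reg_feat) * self.scale # scaled regression
        return torch.cat([reg_out, cls_out], dim=1)    # channel concat
```

One `HeadOutput` would sit on each pyramid level, each with its own scale parameter, which is how the shared-weight head can still adapt to different target sizes.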
3.2. LFEG
4. Experiment and Discussion
4.1. Experimental Condition
4.2. Ablation Experiment of Improved Algorithm
4.3. Comparison with Other Object Detection Algorithms
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
Name | Configuration |
---|---|
CPU | Intel Xeon(R) Silver 4210R CPU @ 2.40 GHz × 40 |
GPU | NVIDIA GeForce RTX 4090 × 2 |
Memory | 24 GB |
Model | Parameters (M) | GFLOPs | FPS |
---|---|---|---|
YOLOv8s | 10.64 | 28.6 | 72.9 |
YOLOv8s-DSFI-HEAD | 8.48 | 33.2 | 61.9 |
YOLOv8s-LFEG | 7.56 | 20.5 | 127.3 |
YOLOv8s-DFJA | 7.64 | 30.4 | 82.7 |
Method | Class | P (%) | R (%) | mAP@.5 (%) |
---|---|---|---|---|
YOLOv8s | All | 84.9 | 65.1 | 70.5 |
YOLOv8s-DSFI-HEAD | All | 87.0 | 70.8 | 75.9 |
YOLOv8s-LFEG | All | 86.1 | 64.2 | 68.7 |
YOLOv8s-DFJA | All | 86.0 | 73.2 | 76.9 |
YOLOv8s | Bared | 84.4 | 56.4 | 64.8 |
YOLOv8s-DSFI-HEAD | Bared | 85.2 | 57.5 | 65.0 |
YOLOv8s-LFEG | Bared | 79.7 | 56.4 | 61.5 |
YOLOv8s-DFJA | Bared | 83.7 | 58.9 | 66.5 |
YOLOv8s | Impurity | 72.8 | 39.4 | 47.7 |
YOLOv8s-DSFI-HEAD | Impurity | 78.3 | 55.7 | 63.8 |
YOLOv8s-LFEG | Impurity | 81.1 | 36.9 | 45.5 |
YOLOv8s-DFJA | Impurity | 77.4 | 61.6 | 65.0 |
YOLOv8s | Adhesion | 97.5 | 99.3 | 99.1 |
YOLOv8s-DSFI-HEAD | Adhesion | 97.4 | 99.3 | 99.1 |
YOLOv8s-LFEG | Adhesion | 97.4 | 99.2 | 99.0 |
YOLOv8s-DFJA | Adhesion | 98.0 | 99.3 | 99.1 |
Model | mAP@.5 (All) | mAP@.5:.95 (All) | Parameters (M) | GFLOPs | Weight_Size (MB) | FPS |
---|---|---|---|---|---|---|
YOLOv8s | 0.396 | 0.236 | 10.61 | 28.5 | 22.5 | 49.3 |
YOLOv8s-DSFI-HEAD | 0.409 | 0.248 | 8.48 | 33.0 | 18.0 | 46.3 |
YOLOv8s-LFEG | 0.382 | 0.226 | 7.53 | 20.3 | 16.3 | 67.2 |
YOLOv8s-DFJA | 0.414 | 0.249 | 7.63 | 30.4 | 16.4 | 58.8 |
Model | mAP@.5 (All) | mAP@.5:.95 (All) | Parameters (M) | GFLOPs | Weight_Size (MB) | FPS |
---|---|---|---|---|---|---|
YOLOv8s | 0.685 | 0.442 | 10.61 | 28.4 | 21.4 | 41.2 |
YOLOv8s-DSFI-HEAD | 0.746 | 0.459 | 8.47 | 33.0 | 17.1 | 44.2 |
YOLOv8s-LFEG | 0.673 | 0.422 | 7.51 | 20.9 | 15.5 | 69.4 |
YOLOv8s-DFJA | 0.751 | 0.460 | 7.63 | 30.4 | 15.6 | 50.8 |
Model | mAP@.5 (All) | mAP@.5:.95 (All) | mAP@.5 (Bared) | mAP@.5:.95 (Bared) | mAP@.5 (Impurity) | mAP@.5:.95 (Impurity) | mAP@.5 (Adhesion) | mAP@.5:.95 (Adhesion) | Parameters (M) | GFLOPs | Weight_Size (MB) | FPS |
---|---|---|---|---|---|---|---|---|---|---|---|---|
YOLOv3 | 0.716 | 0.448 | 0.671 | 0.309 | 0.488 | 0.245 | 0.989 | 0.791 | 98.86 | 282.2 | 198.2 | 102.3 |
YOLOv3-tiny | 0.545 | 0.362 | 0.383 | 0.169 | 0.262 | 0.111 | 0.990 | 0.804 | 11.57 | 18.9 | 23.3 | 337.3 |
YOLOv5s | 0.682 | 0.435 | 0.649 | 0.291 | 0.406 | 0.197 | 0.991 | 0.816 | 8.69 | 23.8 | 18.5 | 134.6 |
YOLOv6s | 0.682 | 0.439 | 0.624 | 0.292 | 0.435 | 0.209 | 0.991 | 0.817 | 15.54 | 44.0 | 32.9 | 140.3 |
YOLOv7 | 0.693 | 0.381 | 0.458 | 0.151 | 0.634 | 0.243 | 0.987 | 0.748 | 34.80 | 103.2 | 74.8 | 64.5 |
YOLOX-s | 0.755 | 0.453 | 0.640 | 0.281 | 0.636 | 0.281 | 0.991 | 0.796 | 8.94 | 26.76 | 71.8 | 61.3 |
YOLOv9s | 0.686 | 0.424 | 0.629 | 0.274 | 0.441 | 0.207 | 0.990 | 0.790 | 6.84 | 26.7 | 15.3 | 44.8 |
SSD | 0.664 | 0.403 | 0.588 | 0.246 | 0.413 | 0.179 | 0.990 | 0.782 | 23.5 | 230.6 | 96.1 | 59.8 |
YOLOv8s-DFJA | 0.769 | 0.463 | 0.665 | 0.285 | 0.650 | 0.285 | 0.991 | 0.819 | 7.64 | 30.4 | 15.7 | 82.7 |
Share and Cite
Wang, H.; Chen, G.; Rong, X.; Zhang, Y.; Song, L.; Shang, X. Detection Method of Stator Coating Quality of Flat Wire Motor Based on Improved YOLOv8s. Sensors 2024, 24, 5392. https://doi.org/10.3390/s24165392