Study on Novel Surface Defect Detection Methods for Aeroengine Turbine Blades Based on the LFD-YOLO Framework
Abstract
1. Introduction
- Proposal of an LFD-YOLO-based ATB surface defect detection method: a novel method for detecting surface defects on ATBs is proposed, employing the LFD-YOLO framework. This approach incorporates LDconv to adjust the size and shape of convolutional kernels dynamically, utilizes the deformable attention mechanism to learn minute defect features effectively, and includes the Focaler-CIoU module to optimize the bounding box loss function of the network. These innovations collectively provide precise and accurate detection of surface defects on ATBs.
- Construction of an ATB detection framework: a specialized ATB detection framework is developed to capture defect images, including cracks, nicks, dents, and burns. The collected dataset is used to train the LFD-YOLO network, ensuring robust performance in defect identification.
- Experimental validation: extensive experimental data demonstrate the effectiveness of the proposed method. The LFD-YOLO framework achieves a mean average precision (mAP0.5) of 96.2%, an F-measure of 96.7%, and an identification rate (Ir) of 98.8%. The system processes over 25 images per second, meeting the stringent requirements for accuracy and real-time performance in ATB surface defect detection.
2. Background Information
2.1. YOLO Series Network for ATB Surface Defect Detection
2.2. Other Common Networks Used for ATB Surface Defect Detection
3. Mechanism of ATB Surface Defect Detection Based on LFD-YOLO
3.1. Overall Framework for ATB Surface Defect Classification and Detection
3.2. Design of an LFD-YOLO Network for ATB Surface Defect Detection
3.2.1. Replacement of CBS by LDconv
Algorithm 1 Pseudocode for Initial Coordinate Generation for Convolution Kernels in a PyTorch-like Approach [38]
```python
import math
import torch

def get_p_o(num_param, dtype):
    # num_param: the kernel (parameter) size of LDConv
    # dtype: the data type
    # get a base integer to define coordinates
    base_int = round(math.sqrt(num_param))
    row_number = num_param // base_int
    mod_number = num_param % base_int
    # get the sampled coordinates of the regular kernel part
    p_o_x, p_o_y = torch.meshgrid(
        torch.arange(0, row_number),
        torch.arange(0, base_int),
        indexing="ij")
    # flatten the sampled coordinates of the regular kernel part
    p_o_x = torch.flatten(p_o_x)
    p_o_y = torch.flatten(p_o_y)
    # get the sampled coordinates of the irregular kernel part
    if mod_number > 0:
        mod_p_o_x, mod_p_o_y = torch.meshgrid(
            torch.arange(row_number, row_number + 1),
            torch.arange(0, mod_number),
            indexing="ij")
        mod_p_o_x = torch.flatten(mod_p_o_x)
        mod_p_o_y = torch.flatten(mod_p_o_y)
        p_o_x = torch.cat((p_o_x, mod_p_o_x))
        p_o_y = torch.cat((p_o_y, mod_p_o_y))
    # get the completed sampled coordinates
    p_o = torch.cat([p_o_x, p_o_y], 0)
    p_o = p_o.view(1, 2 * num_param, 1, 1).type(dtype)
    return p_o
```
3.2.2. Incorporation of the DAT Deformable Attention Mechanism
- (1)
- For an input feature map x with dimensions H × W × C, let the predefined down-sampling factor be r. The down-sampled reference grid for the original feature map can then be represented as follows (as indicated by the process labeled ① in Figure 7): H_G = H/r, W_G = W/r, with uniformly spaced reference points p on the H_G × W_G grid.
- (2)
- If the projection weight for the query is defined as Wq, the query vector q is obtained by linearly projecting the feature map x onto the query tokens, q = xWq. Then, through the offset generation subnetwork θ_offset (offset network), the deformation offsets are generated (as indicated by the process labeled ② in Figure 7): Δp = θ_offset(q).
- (3)
- The deformed feature map x̃ is sampled at the shifted reference points, x̃ = φ(x; p + Δp), where φ(·; ·) denotes bilinear sampling. The vectors k̃ and ṽ are then obtained by performing linear projections on x̃ in the directions of the key tokens and value tokens, with projection weights Wk and Wv, respectively (as indicated by the process labeled ④ in Figure 7): k̃ = x̃Wk, ṽ = x̃Wv.
- (4)
- The multi-head attention module integrates the outputs. Let the softmax function be denoted as σ(·), the head dimension of the multi-head attention be d_head, and the query, key, and value vectors for the m-th attention head be q^(m), k̃^(m), and ṽ^(m), respectively; let φ(B̂; R) denote the bilinear interpolation operation for the relative position bias. The output z^(m) of the m-th attention head can then be expressed as follows (as indicated by the process labeled ⑤ in Figure 7): z^(m) = σ(q^(m)(k̃^(m))⊤/√d_head + φ(B̂; R)) ṽ^(m).
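To make the data flow of steps ①–⑤ concrete, the following is a minimal, self-contained PyTorch sketch of deformable attention in the spirit of DAT [39]. The layer sizes, the offset-network architecture, and the omission of the relative position bias term are simplifying assumptions for illustration, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableAttentionSketch(nn.Module):
    """Illustrative deformable attention: reference grid -> offsets ->
    bilinear sampling -> deformed keys/values -> multi-head attention."""
    def __init__(self, dim=64, heads=4, r=2):
        super().__init__()
        self.heads, self.r = heads, r
        self.wq = nn.Linear(dim, dim, bias=False)  # query projection Wq
        self.wk = nn.Linear(dim, dim, bias=False)  # key projection Wk
        self.wv = nn.Linear(dim, dim, bias=False)  # value projection Wv
        self.wo = nn.Linear(dim, dim, bias=False)  # output projection
        # offset subnetwork: predicts a 2-D offset per down-sampled point
        self.offset_net = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1, groups=dim),
            nn.GELU(),
            nn.Conv2d(dim, 2, 1))

    def forward(self, x):                      # x: (B, C, H, W)
        B, C, H, W = x.shape
        Hg, Wg = H // self.r, W // self.r
        # (1) uniform reference grid on the down-sampled map, in [-1, 1]
        gy, gx = torch.meshgrid(torch.linspace(-1, 1, Hg),
                                torch.linspace(-1, 1, Wg), indexing="ij")
        ref = torch.stack((gx, gy), dim=-1).expand(B, Hg, Wg, 2)
        # (2) queries from x; offsets generated from the queries
        q = self.wq(x.flatten(2).transpose(1, 2))            # (B, HW, C)
        q_map = q.transpose(1, 2).reshape(B, C, H, W)
        off = self.offset_net(F.avg_pool2d(q_map, self.r))   # (B, 2, Hg, Wg)
        off = off.permute(0, 2, 3, 1).tanh() * (2.0 / max(Hg, Wg))
        # (3) bilinearly sample deformed features at ref + offset
        x_t = F.grid_sample(x, ref + off, align_corners=True)
        x_t = x_t.flatten(2).transpose(1, 2)                 # (B, HgWg, C)
        # (4) deformed keys and values
        k, v = self.wk(x_t), self.wv(x_t)
        # (5) multi-head attention over the sampled tokens
        d = C // self.heads
        split = lambda t: t.reshape(B, -1, self.heads, d).transpose(1, 2)
        qh, kh, vh = split(q), split(k), split(v)
        attn = torch.softmax(qh @ kh.transpose(-2, -1) / d ** 0.5, dim=-1)
        z = (attn @ vh).transpose(1, 2).reshape(B, H * W, C)
        return self.wo(z).transpose(1, 2).reshape(B, C, H, W)
```

The query is computed on the full-resolution map while keys and values come only from the H/r × W/r deformed sample points, which is what keeps the attention cost manageable for dense defect features.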
3.2.3. Optimization of the Focaler-IoU Bounding Box Loss Function
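Following the Focaler-IoU formulation cited here [40], the IoU is mapped linearly onto [0, 1] over an interval [d, u] so that training focuses on a chosen range of sample difficulty, and the reconstructed value is then folded into the CIoU loss. A minimal sketch follows; the thresholds d and u are illustrative defaults, not the values tuned in the paper.

```python
import torch

def focaler_iou(iou: torch.Tensor, d: float = 0.0, u: float = 0.95) -> torch.Tensor:
    # IoU^focaler = clip((IoU - d) / (u - d), 0, 1)
    return ((iou - d) / (u - d)).clamp(0.0, 1.0)

def focaler_ciou_loss(ciou_loss: torch.Tensor, iou: torch.Tensor,
                      d: float = 0.0, u: float = 0.95) -> torch.Tensor:
    # L_Focaler-CIoU = L_CIoU + IoU - IoU^focaler
    return ciou_loss + iou - focaler_iou(iou, d, u)
```

With d = 0 the mapping emphasizes easy samples as u shrinks; raising d instead shifts the focus toward hard, low-overlap boxes.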
4. Experimental Design and Results Analysis
4.1. Experimental Design
- (1)
- mAP0.5 is defined as the mean average precision at an IoU threshold of 0.5 [41]: mAP0.5 = (1/N) Σ_{i=1}^{N} AP_i, where N is the number of defect classes and AP_i is the area under the precision–recall curve of class i at IoU = 0.5.
- (2)
- F-measure is defined as the weighted harmonic mean of precision P and recall R [42]; with equal weighting (β = 1), F = 2PR/(P + R).
- (3)
- Ir denotes the recognition rate, defined as the proportion of defect instances correctly identified [41]: Ir = (N_correct/N_total) × 100%, where N_correct is the number of correctly identified defects and N_total is the total number of defects.
- (4)
- FPS represents the number of images detected by the network per second [13]: FPS = N_img/T, where N_img is the number of processed images and T is the total processing time in seconds.
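The four metrics above can be computed as follows. The harmonic-mean form of the F-measure and the plain correct/total counting rule for Ir are the standard definitions, which the paper's exact formulas may refine.

```python
def f_measure(precision: float, recall: float) -> float:
    # harmonic mean of precision and recall (beta = 1)
    return 2 * precision * recall / (precision + recall)

def map50(ap_per_class) -> float:
    # mean of the per-class average precisions at IoU threshold 0.5
    return sum(ap_per_class) / len(ap_per_class)

def identification_rate(n_correct: int, n_total: int) -> float:
    # Ir as a percentage of correctly identified defect instances
    return 100.0 * n_correct / n_total

def fps(n_images: int, elapsed_s: float) -> float:
    # images processed per second
    return n_images / elapsed_s
```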
4.2. Experimental Data Analysis of the LFD-YOLO Network Performance
4.2.1. Comparative Experimental Data on the Network Performance
- (1)
- The SSD network achieved an FPS of 35.6 frames per second (f·s−1), demonstrating good real-time performance. However, its mAP0.5 of 61.2% and F-measure of 60.9% indicate relatively low accuracy during ATB defect detection. Similarly, the YOLOv7-tiny network achieved an FPS of 47.8 f·s−1, showing excellent real-time performance, but its mAP0.5 of 77.1% and F-measure of 76.8% suggested suboptimal ATB defect detection accuracy.
- (2)
- The Faster R-CNN, Mask R-CNN, YOLOv3, YOLOv5, YOLOv7, YOLOv8, and YOLOv11 networks achieved mAP0.5 values of 73.7%, 76.3%, 77.5%, 86.8%, 89.1%, 89.7%, and 91.4%, respectively, and F-measure values of 74.1%, 78.2%, 76.7%, 86.9%, 88.7%, 90.3%, and 91.9%, respectively. These networks exhibited moderate to good real-time performance and reached a relatively high level of accuracy for ATB surface defect detection.
- (3)
- The LFD-YOLO network achieved an FPS of 25.4 f·s−1 and an mAP0.5 of 96.2%, demonstrating the highest accuracy for ATB defect detection while meeting real-time performance requirements, showcasing significant advantages.
4.2.2. Ablation Study
- (1)
- The YOLOv8 + LDconv network shows improvements in mAP0.5, F-measure, and Ir by 3.1%, 2.9%, and 5.1%, respectively (89.7% → 92.8%, 90.3% → 93.2%, 87.9% → 93.0%), while the FPS decreases by 1.9 f·s−1 (29.1 f·s−1 → 27.2 f·s−1). This indicates that replacing CBS with LDconv effectively improved the accuracy of ATB surface defect detection, with a minor impact on real-time performance.
- (2)
- The YOLOv8 + DAT network demonstrates improvements in mAP0.5, F-measure, and Ir by 4.8%, 4.9%, and 8.4%, respectively (89.7% → 94.5%, 90.3% → 95.2%, 87.9% → 96.3%), while the FPS decreases by 1.5 f·s−1 (29.1 f·s−1 → 27.6 f·s−1). This suggests that incorporating the DAT deformable attention mechanism significantly increases the accuracy of ATB surface defect detection with a minimal impact on real-time performance.
- (3)
- The YOLOv8 + Focaler-CIoU network achieves improvements in mAP0.5, F-measure, and Ir by 2.4%, 1.2%, and 3.8%, respectively (89.7% → 92.1%, 90.3% → 91.5%, 87.9% → 91.7%), while the FPS decreases by 1.0 f·s−1 (29.1 f·s−1 → 28.1 f·s−1). This trend indicates that optimizing the bounding box loss function with Focaler-IoU moderately improved the accuracy of ATB surface defect detection with negligible impact on real-time performance.
- (4)
- The proposed LFD-YOLO network shows improvements in mAP0.5, F-measure, and Ir by 6.5%, 6.4%, and 10.9%, respectively (89.7% → 96.2%, 90.3% → 96.7%, 87.9% → 98.8%), while the FPS decreases by 3.7 f·s−1 (29.1 f·s−1 → 25.4 f·s−1). This demonstrates that LFD-YOLO significantly enhanced the accuracy of ATB surface defect detection at a moderate cost to real-time performance.
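As an arithmetic cross-check, the overall gains of LFD-YOLO over the YOLOv8 baseline can be recomputed directly from the values reported in the ablation table:

```python
# metric values taken from the ablation table (YOLOv8 baseline vs. LFD-YOLO)
baseline = {"mAP0.5": 89.7, "F-measure": 90.3, "Ir": 87.9, "FPS": 29.1}
lfd_yolo = {"mAP0.5": 96.2, "F-measure": 96.7, "Ir": 98.8, "FPS": 25.4}

deltas = {k: round(lfd_yolo[k] - baseline[k], 1) for k in baseline}
print(deltas)  # {'mAP0.5': 6.5, 'F-measure': 6.4, 'Ir': 10.9, 'FPS': -3.7}
```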
4.3. Effectiveness in Practical Applications
- (1)
- The YOLOv8, YOLOv8 + LDconv, and YOLOv8 + Focaler-CIoU networks can only identify three types of defects: dents, nicks, and cracks, detecting 8, 11, and 12 instances, respectively. These networks exhibit missed detections, with burns not being detected. The cumulative accuracy rates for the four types of defects are 53.3%, 73.3%, and 80%, respectively.
- (2)
- The YOLOv8 + DAT network identifies all four types of defects (dents, nicks, cracks, and burns), detecting a total of 13 instances. Although it still misses some defects, the cumulative accuracy rate for the four types of defects is 86.7%.
- (3)
- The LFD-YOLO network also identifies all four types of defects (dents, nicks, cracks, and burns), detecting a total of 15 instances with no missed detections. The results align with manual inspections, achieving an accuracy rate of 100%.
4.4. Model Generalization Experiment
5. Conclusions and Future Work
- (1)
- This paper proposes an ATB surface defect detection method based on LFD-YOLO that meets the requirements of ATB surface defect detection and effectively addresses the challenges of low accuracy and inadequate efficiency. The method builds on YOLOv8, substituting LDconv for the CBS module, incorporating the DAT deformable attention mechanism, and optimizing the bounding box loss function with Focaler-IoU. In the backbone section of LFD-YOLO, the LDconv module replaces the original CBS module and the DAT deformable attention mechanism is added to improve the detection of tiny defects. In the neck section, the PAN–FPN structure is combined with the LDconv + DAT deformable attention mechanism to fuse multiscale features and improve the detection of the various types of ATB surface defects. The head section employs Focaler-CIoU, which optimizes the bounding box loss function and thereby improves bounding box regression accuracy.
- (2)
- The ablation experimental data confirm that replacing CBS with LDconv improves the network’s mAP0.5, F-measure, and Ir by 3.1%, 2.9%, and 5.1%, respectively, compared to YOLOv8, while the FPS decreases by 1.9 f·s−1. Incorporating the DAT deformable attention mechanism improves the network’s mAP0.5, F-measure, and Ir by 4.8%, 4.9%, and 8.4%, respectively, compared to YOLOv8, with the FPS decreasing by 1.5 f·s−1. Optimizing the bounding box loss function with Focaler-IoU increases the network’s mAP0.5, F-measure, and Ir by 2.4%, 1.2%, and 3.8%, respectively, compared to YOLOv8, while the FPS decreases by 1.0 f·s−1. The proposed LFD-YOLO network improved mAP0.5, F-measure, and Ir by 6.5%, 6.4%, and 10.9%, respectively, compared to YOLOv8, with the FPS decreasing by 3.7 f·s−1. Overall, the results indicate that LFD-YOLO significantly increases the accuracy of ATB surface defect detection, although with a moderate impact on real-time performance.
- (3)
- Comparative experiments and practical application results demonstrate that the proposed LFD-YOLO network outperforms SSD, Faster R-CNN, Mask R-CNN, YOLOv3, YOLOv5, YOLOv7, YOLOv7-tiny, and YOLOv8 in terms of accuracy for ATB surface defect detection. YOLOv8, YOLOv8 + LDconv, and YOLOv8 + Focaler-CIoU can only identify three types of defects (dents, nicks, and cracks), with some missed detections and burns not being detected. YOLOv8 + DAT can identify all four types of defects (dents, nicks, cracks, and burns), but still fails to detect some defects. In contrast, LFD-YOLO can identify all four types of defects—dents, nicks, cracks, and burns—with no missed detections. As network improvements are added incrementally, the recognition accuracy for ATB surface defects continues to increase. The LFD-YOLO network achieves optimal defect recognition accuracy, demonstrates satisfactory real-time performance, and meets the requirements for on-site detection.
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Zhao, B.; Xie, L.; Li, H.; Zhang, S.; Wang, B.; Li, C. Reliability Analysis of Aero-Engine Compressor Rotor System Considering Cruise Characteristics. IEEE Trans. Reliab. 2020, 69, 245–259. [Google Scholar] [CrossRef]
- Ma, P.; Xu, C.; Xiao, D. Robotic Ultrasonic Testing Technology for Aero-Engine Blades. Sensors 2023, 23, 3729. [Google Scholar] [CrossRef] [PubMed]
- Aust, J.; Pons, D. Taxonomy of Gas Turbine Blade Defects. Aerospace 2019, 6, 58. [Google Scholar] [CrossRef]
- Le, H.F.; Zhang, L.J.; Liu, Y.X. Surface Defect Detection of Industrial Parts Based on YOLOv5. IEEE Access 2022, 10, 130784–130794. [Google Scholar] [CrossRef]
- Ding, P.; Song, D.; Shen, J.; Zhao, X.; Jia, M. A novel graph structure data-driven crack damage identification for compressor blade based on vibro-acoustic signal. Struct. Health Monit.-Int. J. 2024, 23, 3046–3062. [Google Scholar] [CrossRef]
- Abdulrahman, Y.; Eltoum, M.A.M.; Ayyad, A.; Moyo, B.; Zweiri, Y. Aero-Engine Blade Defect Detection: A Systematic Review of Deep Learning Models. IEEE Access 2023, 11, 53048–53061. [Google Scholar] [CrossRef]
- Aust, J.; Shankland, S.; Pons, D.; Mukundan, R.; Mitrovic, A. Automated Defect Detection and Decision-Support in Gas Turbine Blade Inspection. Aerospace 2021, 8, 30. [Google Scholar] [CrossRef]
- Zhang, H.-B.; Zhang, C.-Y.; Cheng, D.-J.; Zhou, K.-L.; Sun, Z.-Y. Detection Transformer with Multi-Scale Fusion Attention Mechanism for Aero-Engine Turbine Blade Cast Defect Detection Considering Comprehensive Features. Sensors 2024, 24, 1663. [Google Scholar] [CrossRef]
- Song, M.; Zhang, Y. Aviation-engine blade surface anomaly detection based on the deformable neural network. Signal Image Video Process. 2025, 19, 87. [Google Scholar] [CrossRef]
- Huang, X.; Zhu, J.; Huo, Y. SSA-YOLO: An Improved YOLO for Hot-Rolled Strip Steel Surface Defect Detection. IEEE Trans. Instrum. Meas. 2024, 73, 1–17. [Google Scholar] [CrossRef]
- Cui, L.; Jiang, X.; Xu, M.; Li, W.; Lv, P.; Zhou, B. SDDNet: A Fast and Accurate Network for Surface Defect Detection. IEEE Trans. Instrum. Meas. 2021, 70, 1–13. [Google Scholar] [CrossRef]
- Usamentiaga, R.; Lema, D.G.; Pedrayes, O.D.; Garcia, D.F. Automated Surface Defect Detection in Metals: A Comparative Review of Object Detection and Semantic Segmentation Using Deep Learning. IEEE Trans. Ind. Appl. 2022, 58, 4203–4213. [Google Scholar] [CrossRef]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 779–788. [Google Scholar]
- Liu, G.; Yan, Y.; Meng, J. Study on the detection technology for inner-wall outer surface defects of the automotive ABS brake master cylinder based on BM-YOLOv8. Meas. Sci. Technol. 2024, 35, 055109. [Google Scholar] [CrossRef]
- Zhang, G.; Liu, G.; Zhong, F. Research on UAV Autonomous Recognition and Approach Method for Linear Target Splicing Sleeves Based on Deep Learning and Active Stereo Vision. Electronics 2024, 13, 4872. [Google Scholar] [CrossRef]
- Zhang, Y.; Guo, Z.; Wu, J.; Tian, Y.; Tang, H.; Guo, X. Real-Time Vehicle Detection Based on Improved YOLO v5. Sustainability 2022, 14, 12274. [Google Scholar] [CrossRef]
- Li, C.; Li, L.; Jiang, H.; Weng, K.; Geng, Y.; Li, L.; Ke, Z.; Li, Q.; Cheng, M.; Nie, W.; et al. YOLOv6: A single-stage object detection framework for industrial applications. arXiv 2022, arXiv:2209.02976. [Google Scholar]
- Wang, C.-Y.; Bochkovskiy, A.; Liao, H.-Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 7464–7475. [Google Scholar]
- Terven, J.; Córdova-Esparza, D.-M.; Romero-González, J.-A. A Comprehensive Review of YOLO Architectures in Computer Vision: From YOLOv1 to YOLOv8 and YOLO-NAS. Mach. Learn. Knowl. Extr. 2023, 5, 1680–1716. [Google Scholar] [CrossRef]
- Wang, C.-Y.; Yeh, J.; Liao, H.-Y.M. YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information. In Proceedings of the 18th European Conference on Computer Vision, Milan, Italy, 29 September–4 October 2024; pp. 1–21. [Google Scholar]
- Wang, A.; Chen, H.; Liu, L.; Chen, K.; Lin, Z.; Han, J.; Ding, G. YOLOv10: Real-Time End-to-End Object Detection. arXiv 2024, arXiv:2405.14458. [Google Scholar]
- Chen, Z.-H.; Juang, J.-C. YOLOv4 Object Detection Model for Nondestructive Radiographic Testing in Aviation Maintenance Tasks. Aiaa J. 2022, 60, 526–531. [Google Scholar] [CrossRef]
- Kannadaguli, P. YOLO v4 Based Human Detection System Using Aerial Thermal Imaging for UAV Based Surveillance Applications. In Proceedings of the 2020 International Conference on Decision Aid Sciences and Application (DASA), Sakheer, Bahrain, 8–9 November 2020; pp. 1213–1219. [Google Scholar]
- Li, X.; Wang, W.; Sun, L.; Hu, B.; Zhu, L.; Zhang, J. Deep learning-based defects detection of certain aero-engine blades and vanes with DDSC-YOLOv5s. Sci. Rep. 2022, 12, 13067. [Google Scholar] [CrossRef]
- Liao, D.; Cui, Z.; Zhang, X.; Li, J.; Li, W.; Zhu, Z.; Wu, N. Surface defect detection and classification of Si3N4 turbine blades based on convolutional neural network and YOLOv5. Adv. Mech. Eng. 2022, 14, 1–13. [Google Scholar] [CrossRef]
- Wang, D.; Xiao, H.; Huang, S. Automatic Defect Recognition and Localization for Aeroengine Turbine Blades Based on Deep Learning. Aerospace 2023, 10, 178. [Google Scholar] [CrossRef]
- Li, S.; Yu, J.; Wang, H. Damages Detection of Aeroengine Blades via Deep Learning Algorithms. IEEE Trans. Instrum. Meas. 2023, 72, 1–11. [Google Scholar] [CrossRef]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef]
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the 27th IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar]
- He, K.; Gkioxari, G.; Dollar, P.; Girshick, R. Mask R-CNN. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 386–397. [Google Scholar] [CrossRef]
- Ahmed, A.; Tangri, P.; Panda, A.; Ramani, D.; Karmakar, S. VFNet: A Convolutional Architecture for Accent Classification. In Proceedings of the 16th IEEE-India-Council International Conference (INDICON), Rajkot, India, 13–15 December 2019. [Google Scholar]
- Liu, Y.; Wu, D.; Liang, J.; Wang, H. Aeroengine Blade Surface Defect Detection System Based on Improved Faster RCNN. Int. J. Intell. Syst. 2023, 2023, 1992415. [Google Scholar] [CrossRef]
- Shang, H.; Sun, C.; Liu, J.; Chen, X.; Yan, R. Deep learning-based borescope image processing for aero-engine blade in-situ damage detection. Aerosp. Sci. Technol. 2022, 123, 107473. [Google Scholar] [CrossRef]
- Zhang, H.; Zu, K.; Lu, J.; Zou, Y.; Meng, D. EPSANet: An Efficient Pyramid Squeeze Attention Block on Convolutional Neural Network. In Proceedings of the 16th Asian Conference on Computer Vision (ACCV), Macao, China, 4–8 December 2022; pp. 541–557. [Google Scholar]
- Liu, P.; Yuan, X.; Han, Q.; Xing, B.; Hu, X.; Zhang, J. Micro-defect Varifocal Network: Channel attention and spatial feature fusion for turbine blade surface micro-defect detection. Eng. Appl. Artif. Intell. 2024, 133, 108075. [Google Scholar] [CrossRef]
- Aust, J.; Pons, D. Methodology for Evaluating Risk of Visual Inspection Tasks of Aircraft Engine Blades. Aerospace 2021, 8, 117. [Google Scholar] [CrossRef]
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
- Zhang, X.; Song, Y.; Song, T.; Yang, D.; Ye, Y.; Zhou, J.; Zhang, L. LDConv: Linear deformable convolution for improving convolutional neural networks. Image Vis. Comput. 2024, 149, 105190. [Google Scholar] [CrossRef]
- Xia, Z.; Pan, X.; Song, S.; Li, L.E.; Huang, G. Vision Transformer with Deformable Attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 4784–4793. [Google Scholar]
- Zhang, H.; Zhang, S. Focaler-IoU: More Focused Intersection over Union Loss. arXiv 2024, arXiv:2401.10525. [Google Scholar]
- Li, K.; Huang, Z.; Cheng, Y.-C.; Lee, C.-H. A maximal figure-of-merit learning approach to maximizing mean average precision with deep neural network based classifiers. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Florence, Italy, 4–9 May 2014. [Google Scholar]
- Nan, Y.; Chai, K.M.; Lee, W.S.; Chieu, H.L. Optimizing F-measure: A tale of two approaches. arXiv 2012, arXiv:1206.4625. [Google Scholar]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; pp. 21–37. [Google Scholar]
- Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
- Ma, L.; Zhao, L.; Wang, Z.; Zhang, J.; Chen, G. Detection and Counting of Small Target Apples under Complicated Environments by Using Improved YOLOv7-tiny. Agronomy 2023, 13, 1419. [Google Scholar] [CrossRef]
- Khanam, R.; Hussain, M. YOLOv11: An Overview of the Key Architectural Enhancements. arXiv 2024, arXiv:2410.17725. [Google Scholar]
- Zhang, Y.-F.; Ren, W.; Zhang, Z.; Jia, Z.; Wang, L.; Tan, T. Focal and efficient IOU loss for accurate bounding box regression. Neurocomputing 2022, 506, 146–157. [Google Scholar] [CrossRef]
- Tong, Z.; Chen, Y.; Xu, Z.; Yu, R. Wise-IoU: Bounding Box Regression Loss with Dynamic Focusing Mechanism. arXiv 2023, arXiv:2301.10051. [Google Scholar]
- Siliang, M.; Yong, X. MPDIoU: A Loss for Efficient and Accurate Bounding Box Regression. arXiv 2023, arXiv:2307.07662. [Google Scholar]
- Song, K.; Yan, Y. A noise robust method based on completed local binary patterns for hot-rolled steel strip surface defects. Appl. Surf. Sci. 2013, 285, 858–864. [Google Scholar] [CrossRef]
Model | Backbone | mAP0.5, % | F-Measure, % | FPS, (f·s−1) |
---|---|---|---|---|
SSD [43] | VGG-16 | 61.2 | 60.9 | 35.6 |
Faster R-CNN [28] | ResNet-50 | 73.7 | 74.1 | 16.2 |
Mask R-CNN [30] | ResNet-50 | 76.3 | 78.2 | 19.3 |
YOLOv3 [44] | DarkNet-53 | 77.5 | 76.7 | 27.2 |
YOLOv5 [16] | CSP-Darknet53 | 86.8 | 86.9 | 27.7 |
YOLOv7 [18] | DarkNet-53 + E-ELAN | 89.1 | 88.7 | 31.1 |
YOLOv7-tiny [45] | DarkNet-53 + E-ELAN | 77.1 | 76.8 | 47.8 |
YOLOv8 [19] | DarkNet53 + C2f | 89.7 | 90.3 | 29.1 |
YOLOv11 [46] | DarkNet53 + C3k2 | 91.4 | 91.9 | 30.7 |
LFD-YOLO | DarkNet53 + C2f + DAT | 96.2 | 96.7 | 25.4 |
Model | mAP0.5, % | F-Measure, % | Ir, % | FPS, (f·s−1) |
---|---|---|---|---|
YOLOv8 + LDconv + DAT | 93.0 | 92.4 | 92.3 | 27.0 |
YOLOv8 + LDconv + DAT + EIoU | 93.1 | 91.8 | 93.2 | 26.6 |
YOLOv8 + LDconv + DAT + WIoU | 93.5 | 92.4 | 92.7 | 26.3 |
YOLOv8 + LDconv + DAT + MPDIoU | 94.9 | 94.6 | 95.1 | 24.7 |
LFD-YOLO | 96.2 | 96.7 | 98.8 | 25.4 |
Model | mAP0.5, % | F-Measure, % | Ir, % | FPS, (f·s−1) |
---|---|---|---|---|
YOLOv8 + LDconv + Focaler-CIoU | 93.2 | 93.7 | 94.0 | 27.1 |
YOLOv8 + LDconv + Focaler-CIoU + DAT (backbone) | 94.7 | 94.9 | 95.2 | 26.9 |
YOLOv8 + LDconv + Focaler-CIoU + DAT (neck) | 95.1 | 94.5 | 96.0 | 26.4 |
LFD-YOLO | 96.2 | 96.7 | 98.8 | 25.4 |
Model | mAP0.5, % | F-Measure, % | Ir, % | FPS, (f·s−1) |
---|---|---|---|---|
YOLOv8 | 89.7 | 90.3 | 87.9 | 29.1 |
YOLOv8 + LDconv | 92.8 | 93.2 | 93.0 | 27.2 |
YOLOv8 + DAT | 94.5 | 95.2 | 96.3 | 27.6 |
YOLOv8 + Focaler-CIoU | 92.1 | 91.5 | 91.7 | 28.1 |
LFD-YOLO | 96.2 | 96.7 | 98.8 | 25.4 |
Model | Dents | Nicks | Cracks | Burns |
---|---|---|---|---|
YOLOv8 | 6 | 1 | 1 | 0 |
YOLOv8 + LDconv | 8 | 2 | 1 | 0 |
YOLOv8 + Focaler-CIoU | 9 | 2 | 1 | 0 |
YOLOv8 + DAT | 9 | 2 | 1 | 1 |
LFD-YOLO | 11 | 2 | 1 | 1 |
Dataset | mAP0.5, % | F-Measure, % | Ir, % |
---|---|---|---|
NEU-DET [50] | 92.2 | 93.7 | 91.9 |
ATB | 96.2 | 96.7 | 95.2 |
Share and Cite
Deng, W.; Liu, G.; Meng, J. Study on Novel Surface Defect Detection Methods for Aeroengine Turbine Blades Based on the LFD-YOLO Framework. Sensors 2025, 25, 2219. https://doi.org/10.3390/s25072219