TEB-YOLO: A Lightweight YOLOv5-Based Model for Bamboo Strip Defect Detection
Abstract
1. Introduction
- (1) A lightweight, task-specific feature extraction framework is proposed by integrating EfficientViT with an Efficient Local Attention (ELA) mechanism. Unlike previous works that apply ViT backbones to general detection tasks, this design is tailored to bamboo strip defects, where fine-grained local features are critical, enhancing representational power while keeping the model compact.
- (2) A BiFPN-based neck with embedded ELA is introduced to improve multi-scale feature fusion. Compared with conventional PANet, or even standard BiFPN usage, this configuration is optimized for defect types with subtle or overlapping appearances, improving robustness under complex backgrounds.
- (3) A modified loss function based on Efficient IoU (EIoU) is employed to enhance localization precision. Unlike CIoU, which struggles with aspect-ratio variation, EIoU directly penalizes width and height misalignment, resulting in faster convergence and better regression accuracy, especially on irregularly shaped bamboo defects.
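The EIoU penalty described in contribution (3) can be sketched in a few lines. The following is a minimal, illustrative pure-Python implementation for a single pair of axis-aligned boxes in `(x1, y1, x2, y2)` form; the function name and box convention are our own, not from the paper.

```python
def eiou_loss(box, gt, eps=1e-7):
    """Efficient IoU (EIoU) loss for two axis-aligned boxes (x1, y1, x2, y2).

    EIoU = 1 - IoU + a center-distance term plus separate width and height
    penalties, each normalized by the smallest enclosing box.
    """
    # intersection area
    ix1, iy1 = max(box[0], gt[0]), max(box[1], gt[1])
    ix2, iy2 = min(box[2], gt[2]), min(box[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    w1, h1 = box[2] - box[0], box[3] - box[1]
    w2, h2 = gt[2] - gt[0], gt[3] - gt[1]
    union = w1 * h1 + w2 * h2 - inter
    iou = inter / (union + eps)

    # smallest enclosing box and its squared diagonal
    cw = max(box[2], gt[2]) - min(box[0], gt[0])
    ch = max(box[3], gt[3]) - min(box[1], gt[1])
    c2 = cw ** 2 + ch ** 2 + eps

    # squared distance between box centers
    rho2 = ((box[0] + box[2]) - (gt[0] + gt[2])) ** 2 / 4 + \
           ((box[1] + box[3]) - (gt[1] + gt[3])) ** 2 / 4

    # unlike CIoU's coupled aspect-ratio term, width and height are
    # penalized directly, each against the enclosing box side
    return 1 - iou + rho2 / c2 + (w1 - w2) ** 2 / (cw ** 2 + eps) \
                               + (h1 - h2) ** 2 / (ch ** 2 + eps)
```

For identical boxes the loss vanishes; for disjoint boxes the distance and size terms keep a useful gradient even though IoU is zero, which is the property exploited here for irregularly shaped defects.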
2. Materials and Methods
2.1. Data Acquisition and Preparation
2.2. YOLOv5 Model Enhancements
2.2.1. Proposed Methods
2.2.2. Optimization of the Backbone Network
2.2.3. Attention Mechanism
2.2.4. Optimization of the Neck Network
2.2.5. Loss Function Optimization
3. Experiments and Analysis
3.1. Experimental Environment
3.2. Evaluation Metrics
3.3. Comparison Experiments of Different Module Combination
3.4. Ablation Studies
3.5. Improved Model Results
3.6. Comparison of Different Models
4. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Model | {C1, C2, C3} | {L1, L2, L3} | {H1, H2, H3} |
---|---|---|---|
EfficientViT-M0 | {64, 128, 192} | {1, 2, 3} | {4, 4, 4} |
EfficientViT-M1 | {128, 144, 192} | {1, 2, 3} | {2, 3, 3} |
EfficientViT-M2 | {128, 192, 224} | {1, 2, 3} | {4, 3, 2} |
EfficientViT-M3 | {128, 240, 320} | {1, 2, 3} | {4, 3, 4} |
EfficientViT-M4 | {128, 256, 384} | {1, 2, 3} | {4, 4, 4} |
EfficientViT-M5 | {192, 288, 384} | {1, 3, 4} | {3, 3, 4} |
Parameter Categories | Parameter Setting |
---|---|
Image size | 640 × 640 |
Optimizer | SGD |
Epochs | 300 |
Batch size | 8 |
Momentum | 0.937 |
Weight decay | 0.0005 |
Learning rate | 0.01 |
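The momentum and weight-decay settings in the table correspond to the standard SGD update used by YOLOv5. As a sketch of what these two hyperparameters do (the function below is illustrative, not the training code from the paper):

```python
def sgd_step(w, grad, v, lr=0.01, momentum=0.937, weight_decay=0.0005):
    """One SGD update with momentum and L2 weight decay, using the
    hyperparameter values from the table above (scalar sketch)."""
    g = grad + weight_decay * w   # L2 penalty folded into the gradient
    v = momentum * v - lr * g     # momentum buffer smooths the update
    return w + v, v               # updated weight and buffer
```

For example, one step from `w = 1.0` with `grad = 0.5` and a zero buffer moves the weight by `lr * (0.5 + 0.0005)`, while subsequent steps reuse roughly 94% of the previous update direction through the buffer.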
Models | Precision (%) | mAP@50 (%) | GFLOPs | Params (M) |
---|---|---|---|---|
YOLOv5 + CSPDarknet | 88.9 | 92.6 | 16.0 | 7.035 |
YOLOv5 + ShuffleNetV2 | 75.6 | 80.0 | 2.0 | 0.871 |
YOLOv5 + MobileNetV3 | 83.5 | 91.9 | 7.7 | 4.917 |
YOLOv5 + EfficientViT | 86.9 | 91.5 | 9.9 | 5.326 |
Models | Precision (%) | mAP@50 (%) | GFLOPs | Params (M) |
---|---|---|---|---|
YOLOv5s + EfficientViT | 86.9 | 91.5 | 9.9 | 5.326 |
YOLOv5s + EfficientViT + CA | 84.0 | 90.7 | 14.0 | 7.251 |
YOLOv5s + EfficientViT + CBAM | 85.0 | 90.9 | 7.8 | 4.296 |
YOLOv5s + EfficientViT + SE | 86.8 | 91.1 | 10.1 | 5.474 |
YOLOv5s + EfficientViT + ELA | 89.3 | 90.9 | 10.0 | 5.342 |
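The ELA variant compared above gates features with two directional attention maps obtained by strip pooling. The NumPy sketch below keeps only that pooling-and-gating structure; the published ELA additionally applies a 1-D convolution and group normalization to each pooled strip, which this simplified version omits.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ela_attention(x):
    """Simplified Efficient Local Attention on one feature map of shape
    (C, H, W): average-pool into a height strip and a width strip, turn
    each into a gate, and modulate the input (1-D conv + GroupNorm from
    the original ELA are omitted for brevity)."""
    strip_h = x.mean(axis=2)               # pool over width  -> (C, H)
    strip_w = x.mean(axis=1)               # pool over height -> (C, W)
    att_h = sigmoid(strip_h)[:, :, None]   # (C, H, 1) gate
    att_w = sigmoid(strip_w)[:, None, :]   # (C, 1, W) gate
    return x * att_h * att_w               # broadcast back to (C, H, W)
```

Because each gate varies along only one spatial axis, the mechanism captures long-range positional cues along rows and columns at negligible parameter cost, which is consistent with the near-identical GFLOPs and parameter counts in the table.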
Models | Precision (%) | mAP@50 (%) | GFLOPs | Params (M) |
---|---|---|---|---|
YOLOv5s + EfficientViT + ELA + PANet | 89.3 | 90.9 | 10.0 | 5.342 |
YOLOv5s + EfficientViT + ELA + GFPN | 85.3 | 91.8 | 12.7 | 6.946 |
YOLOv5s + EfficientViT + ELA + HSFPN | 88.5 | 91.0 | 21.0 | 4.274 |
YOLOv5s + EfficientViT + ELA + SlimNeck | 83.6 | 92.3 | 9.2 | 5.341 |
YOLOv5s + EfficientViT + ELA + BiFPN | 91.7 | 90.8 | 10.5 | 5.423 |
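The BiFPN neck that wins on precision above differs from plain concatenation in its fast normalized fusion: each incoming feature map gets a learnable scalar weight, clipped to be non-negative and normalized before summation. A minimal NumPy sketch (function name and interface are illustrative; input maps are assumed already resized to a common shape):

```python
import numpy as np

def fast_normalized_fusion(features, weights, eps=1e-4):
    """BiFPN-style weighted fusion: ReLU the learnable scalar weights,
    normalize them to sum to ~1, and blend the input maps so the fused
    map stays at the same scale as its inputs."""
    w = np.maximum(np.asarray(weights, dtype=float), 0.0)  # ReLU clip
    w = w / (w.sum() + eps)                                # fast normalization
    return sum(wi * f for wi, f in zip(w, features))
```

With equal weights this reduces to an average; during training the weights let the network learn how much each resolution contributes at every fusion node, which is the property exploited for defects with subtle or overlapping appearances.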
Model | EIoU | EfficientViT | ELA | BiFPN | Precision (%) | Recall (%) | mAP@50 (%) | GFLOPs | Params (M) | FPS (f/s) |
---|---|---|---|---|---|---|---|---|---|---|
YOLOv5s | × | × | × | × | 87.0 | 88.6 | 92.8 | 16.0 | 7.035 | 123.5 |
+EIoU | √ | × | × | × | 88.9 | 89.7 | 92.6 | 16.0 | 7.035 | 133.4 |
+EfficientViT | √ | √ | × | × | 86.9 | 87.1 | 91.5 | 9.9 | 5.326 | 71.1 |
+ELA | √ | √ | √ | × | 89.3 | 86.2 | 90.9 | 10.0 | 5.342 | 67.4 |
TEB-YOLO (all) | √ | √ | √ | √ | 91.7 | 85.6 | 90.8 | 10.5 | 5.423 | 67.3 |
Model | Mean Activation | Mean Activation |
---|---|---|
TEB-YOLO | 34.5001 | 1.47 |
YOLOv5s | 33.1852 | 1.31 |
Model | Precision (%) | Recall (%) | mAP@50 (%) | GFLOPs | Params (M) | FPS-GPU (f/s) | FPS-CPU (f/s) |
---|---|---|---|---|---|---|---|
YOLOv5n | 82.0 | 89.9 | 92.1 | 4.2 | 1.8 | 108.0 | 34.1 |
YOLOv5s | 87.0 | 88.6 | 92.8 | 16.0 | 7.035 | 123.5 | 18.3 |
YOLOv7 | 90.2 | 87.2 | 93.0 | 103.2 | 36.3 | 87.1 | 12.5 |
YOLOv8n | 89.9 | 91.2 | 95.9 | 8.1 | 3.0 | 84.7 | 11.2 |
YOLOv8s | 92.0 | 90.0 | 96.2 | 28.4 | 11.1 | 72.99 | 3.8 |
YOLOv9t | 89.2 | 91.3 | 95.6 | 10.7 | 2.61 | 34.6 | 6.9 |
YOLOv9s | 90.7 | 88.2 | 95.4 | 38.7 | 9.60 | 58.4 | 2.6 |
TEB-YOLO | 91.7 | 85.6 | 90.8 | 10.5 | 5.423 | 67.3 | 12.4 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Yang, X.; Ruan, C.; Yu, F.; Yang, R.; Guo, B.; Yang, J.; Gao, F.; He, L. TEB-YOLO: A Lightweight YOLOv5-Based Model for Bamboo Strip Defect Detection. Forests 2025, 16, 1219. https://doi.org/10.3390/f16081219