YOLOv11-RCDWD: A New Efficient Model for Detecting Maize Leaf Diseases Based on the Improved YOLOv11
Abstract
1. Introduction
- Replacing the C3k2 module in the backbone with the RepLKNet network, improving detection performance and efficiency through structural re-parameterization;
- Introducing the Convolutional Block Attention Module (CBAM) to enhance the model’s ability to capture multi-scale features. CBAM allows the model to focus on key features of diseased maize leaves, thereby improving feature extraction capability;
- Adopting the DynamicHead detection head to unify the scale-aware, space-aware, and task-aware aspects of object detection by integrating multiple attention mechanisms. This strategy significantly improves the representation ability of the detection head without increasing computational overhead;
- Optimizing the loss function with Wise-IoU (WIoU), an IoU-based loss with a dynamic focusing mechanism, to improve bounding box regression accuracy and enhance the model’s localization capability;
- Replacing the original fixed-ratio positive and negative sample allocation strategy with DynamicATSS to adjust the selection mechanism for positive and negative samples based on statistical information obtained during training. DynamicATSS improves the model’s generalization ability.
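To illustrate the re-parameterization idea behind RepLKNet in the first bullet: because convolution is linear in its kernel, a parallel large-kernel branch and small-kernel branch can be folded into a single kernel at inference time by zero-padding the small kernel and adding it to the large one. The following is a minimal NumPy sketch, not the paper's implementation; 5×5 and 3×3 kernels stand in for RepLKNet's much larger ones, and `conv2d_valid`/`merge_kernels` are illustrative helpers:

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive 'valid' 2-D cross-correlation, single channel."""
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def merge_kernels(k_large, k_small):
    """Zero-pad the small kernel to the large size and add the branches."""
    pad = (k_large.shape[0] - k_small.shape[0]) // 2
    return k_large + np.pad(k_small, pad)

rng = np.random.default_rng(0)
x = rng.standard_normal((12, 12))
k5 = rng.standard_normal((5, 5))
k3 = rng.standard_normal((3, 3))

# Two-branch output: the 3x3 branch is applied to the cropped input so
# both branches align on the 'valid' region of the 5x5 convolution.
two_branch = conv2d_valid(x, k5) + conv2d_valid(x[1:-1, 1:-1], k3)
merged = conv2d_valid(x, merge_kernels(k5, k3))
print(np.allclose(two_branch, merged))  # True
```

The merged single-kernel convolution reproduces the two-branch output exactly, which is why re-parameterization adds no inference cost.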
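The WIoU loss in the fourth bullet can be sketched in simplified form: WIoU v1 scales the plain IoU loss (1 − IoU) by a distance penalty computed from the box centers and the smallest enclosing box. The sketch below is a minimal single-box version under stated assumptions; the published loss also detaches the enclosing-box term from the gradient, and later versions add a non-monotonic focusing coefficient:

```python
import math

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def wiou_v1(pred, gt):
    """Simplified WIoU v1: (1 - IoU) scaled by a center-distance penalty."""
    pcx, pcy = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    gcx, gcy = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    # Size of the smallest enclosing box (treated as a constant,
    # i.e. detached from the gradient, in the original formulation)
    wg = max(pred[2], gt[2]) - min(pred[0], gt[0])
    hg = max(pred[3], gt[3]) - min(pred[1], gt[1])
    r = math.exp(((pcx - gcx) ** 2 + (pcy - gcy) ** 2) / (wg ** 2 + hg ** 2))
    return r * (1.0 - iou(pred, gt))

print(wiou_v1((0, 0, 2, 2), (0, 0, 2, 2)))  # 0.0 for a perfect match
```

The penalty factor grows with the center offset, so poorly localized boxes are weighted more heavily than in a plain IoU loss.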
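The adaptive assignment behind the last bullet can be sketched as one ATSS step: take the k anchors whose centers are nearest the ground-truth center, set the IoU threshold to the mean plus standard deviation of their IoUs, and keep candidates above it whose centers fall inside the box. This is a hedged single-image sketch; it ignores feature-pyramid levels, and DynamicATSS further adapts the statistics during training:

```python
import math
import statistics

def _iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def _center(b):
    return ((b[0] + b[2]) / 2, (b[1] + b[3]) / 2)

def atss_assign(gt, anchors, k=3):
    """Simplified ATSS-style positive-sample selection for one GT box."""
    gcx, gcy = _center(gt)
    # 1. take the k anchors whose centers are nearest the GT center
    dists = [math.hypot(_center(a)[0] - gcx, _center(a)[1] - gcy)
             for a in anchors]
    cand = sorted(range(len(anchors)), key=lambda i: dists[i])[:k]
    # 2. adaptive IoU threshold: mean + std over the candidates
    ious = [_iou(anchors[i], gt) for i in cand]
    thr = statistics.mean(ious) + statistics.pstdev(ious)
    # 3. keep candidates above the threshold whose center lies in the box
    return [i for i, v in zip(cand, ious)
            if v >= thr
            and gt[0] <= _center(anchors[i])[0] <= gt[2]
            and gt[1] <= _center(anchors[i])[1] <= gt[3]]

gt = (0, 0, 4, 4)
anchors = [(0, 0, 4, 4), (1, 1, 5, 5), (10, 10, 14, 14), (0, 0, 2, 2)]
print(atss_assign(gt, anchors, k=3))  # [0]
```

Because the threshold is derived from the candidates' own IoU statistics, the positive/negative split adapts per object instead of using a fixed ratio.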
2. Materials and Methods
2.1. Production of Datasets
2.2. Model Improvement
2.2.1. Improved YOLOv11 Network Model Construction
2.2.2. RepLKNet
2.2.3. CBAM Attention
2.2.4. DynamicHead
2.2.5. Wise-IoU
2.2.6. DynamicATSS
2.3. Model Training and Evaluation Metrics
2.3.1. Maize Leaf Algorithm Model Training Environment
2.3.2. Evaluation Metrics
3. Results
3.1. Comparative Experiments of Different Backbone Networks
3.2. Comparative Experiments of Various Attention Mechanisms
3.3. Ablation Experiments
3.4. Comparative Experiments of the Performance of Different Network Models
3.5. Visualization of Analysis Results
4. Discussion
5. Conclusions
- (1) The improved model, YOLOv11s-RCDWD, exhibited the best detection performance, with all accuracy metrics surpassing those of the comparative models: its precision reached 92.6%, recall 85.4%, and F1 score 88.9%;
- (2) In terms of mAP@0.5 and mAP@0.5~0.95, YOLOv11s-RCDWD achieved 90.2% and 72.5%, respectively, improvements of 4.9 and 9.0 percentage points over YOLOv11s, indicating stronger detection capabilities in complex scenarios;
- (3) The parameter count and computational load of YOLOv11s-RCDWD (9.41 M parameters, 19.3 GFLOPs) remained close to those of YOLOv11s (10.52 M parameters, 21.2 GFLOPs) and are significantly lower than those of more complex models such as YOLOv9c. Its memory footprint was 16.4 MB, a reduction from YOLOv11s’s 19.2 MB, demonstrating the improved model’s more efficient utilization of resources;
- (4) Different backbone networks exhibited significant performance variations in maize disease detection tasks. The YOLOv11 model with RepLKNet as the backbone (YOLOv11s-RepLKNet) demonstrated notable comprehensive advantages in four key aspects: detection accuracy, detection speed, model complexity, and resource usage, particularly excelling in detection accuracy. RepLKNet achieved the highest values in both precision and F1 score, reaching 89.1% and 84.2%, respectively;
- (5) A comparison of seven common attention mechanisms (CBAM, EC, EMA, GAM, SA, SimAM, and SK) showed that the CBAM module significantly outperformed the others in detection accuracy, with a precision of 90.5% and an F1 score of 84.1%.
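As a consistency check on the figures above, the F1 score is the harmonic mean of precision and recall, and the reported precision/recall pairs reproduce the reported F1 values:

```python
def f1_score(precision_pct, recall_pct):
    """Harmonic mean of precision and recall (inputs in percent)."""
    return 2 * precision_pct * recall_pct / (precision_pct + recall_pct)

# Reported values for YOLOv11s-RCDWD and the YOLOv11s baseline:
print(round(f1_score(92.6, 85.4), 1))  # 88.9
print(round(f1_score(87.7, 78.0), 1))  # 82.6
```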
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Ranum, P.; Peña Rosas, J.P.; Garcia Casal, M.N. Global maize production, utilization, and consumption. Ann. N. Y. Acad. Sci. 2014, 1312, 105–112. [Google Scholar] [CrossRef] [PubMed]
- Ren, L.; Li, C.; Yang, G.; Zhao, D.; Zhang, C.; Xu, B.; Feng, H.; Chen, Z.; Lin, Z.; Yang, H. The Detection of Maize Seedling Quality from UAV Images Based on Deep Learning and Voronoi Diagram Algorithms. Remote Sens. 2024, 16, 3548. [Google Scholar] [CrossRef]
- Savary, S.; Ficke, A.; Aubertot, J.; Hollier, C. Crop losses due to diseases and their implications for global food production losses and food security. Food Secur. 2012, 4, 519–537. [Google Scholar] [CrossRef]
- John, M.A.; Bankole, I.; Ajayi-Moses, O.; Ijila, T.; Jeje, T.; Lalit, P. Relevance of advanced plant disease detection techniques in disease and Pest Management for Ensuring Food Security and Their Implication: A review. Am. J. Plant Sci. 2023, 14, 1260–1295. [Google Scholar] [CrossRef]
- DeChant, C.; Wiesner-Hanks, T.; Chen, S.; Stewart, E.L.; Yosinski, J.; Gore, M.A.; Nelson, R.J.; Lipson, H. Automated identification of northern leaf blight-infected maize plants from field imagery using deep learning. Phytopathology 2017, 107, 1426–1432. [Google Scholar] [CrossRef] [PubMed]
- Setiawan, W.; Rochman, E.; Satoto, B.D.; Rachmad, A. Machine learning and deep learning for maize leaf disease classification: A review. J. Phys. Conf. Ser. 2022, 2406, 12019. [Google Scholar] [CrossRef]
- Jafar, A.; Bibi, N.; Naqvi, R.A.; Sadeghi-Niaraki, A.; Jeong, D. Revolutionizing agriculture with artificial intelligence: Plant disease detection methods, applications, and their limitations. Front. Plant Sci. 2024, 15, 1356260. [Google Scholar] [CrossRef]
- Abdullah, H.M.; Mohana, N.T.; Khan, B.M.; Ahmed, S.M.; Hossain, M.; Islam, K.S.; Redoy, M.H.; Ferdush, J.; Bhuiyan, M.; Hossain, M.M. Present and future scopes and challenges of plant pest and disease (P&D) monitoring: Remote sensing, image processing, and artificial intelligence perspectives. Remote Sens. Appl. Soc. Environ. 2023, 32, 100996. [Google Scholar]
- Shi, Y.; Duan, Z.; Qing, S.; Zhao, L.; Wang, F.; Yuwen, X. YOLOV9S-Pear: A lightweight YOLOV9S-Based improved model for young Red Pear Small-Target recognition. Agronomy 2024, 14, 2086. [Google Scholar] [CrossRef]
- Panigrahi, K.P.; Das, H.; Sahoo, A.K.; Moharana, S.C. Maize leaf disease detection and classification using machine learning algorithms. In Progress in Computing, Analytics and Networking—Proceedings of the ICCAN 2019, Bhubaneswar, India, 14–15 December 2019; Springer: Singapore, 2020; pp. 659–669. [Google Scholar]
- Paul, H.; Udayangani, H.; Umesha, K.; Lankasena, N.; Liyanage, C.; Thambugala, K. Maize leaf disease detection using convolutional neural network: A mobile application based on pre-trained VGG16 architecture. N. Z. J. Crop Hortic. Sci. 2024, 53, 367–383. [Google Scholar] [CrossRef]
- Reddy, J.; Niu, H.; Scott, J.L.L.; Bhandari, M.; Landivar, J.A.; Bednarz, C.W.; Duffield, N. Cotton Yield Prediction via UAV-Based Cotton Boll Image Segmentation Using YOLO Model and Segment Anything Model (SAM). Remote Sens. 2024, 16, 4346. [Google Scholar] [CrossRef]
- Song, Y.; Yang, L.; Li, S.; Yang, X.; Ma, C.; Huang, Y.; Hussain, A. Improved YOLOv8 Model for Phenotype Detection of Horticultural Seedling Growth Based on Digital Cousin. Agriculture 2024, 15, 28. [Google Scholar] [CrossRef]
- Ngugi, L.C.; Abelwahab, M.; Abo-Zahhad, M. Recent advances in image processing techniques for automated leaf pest and disease recognition–A review. Inf. Process. Agric. 2021, 8, 27–51. [Google Scholar] [CrossRef]
- Sharma, A.; Kumar, V.; Longchamps, L. Comparative performance of YOLOv8, YOLOv9, YOLOv10, YOLOv11 and Faster R-CNN models for detection of multiple weed species. Smart Agric. Technol. 2024, 9, 100648. [Google Scholar] [CrossRef]
- Zhang, Z.; Yang, Y.; Xu, X.; Liu, L.; Yue, J.; Ding, R.; Lu, Y.; Liu, J.; Qiao, H. GVC-YOLO: A Lightweight Real-Time Detection Method for Cotton Aphid-Damaged Leaves Based on Edge Computing. Remote Sens. 2024, 16, 3046. [Google Scholar] [CrossRef]
- Meng, Z.; Du, X.; Sapkota, R.; Ma, Z.; Cheng, H. YOLOv10-pose and YOLOv9-pose: Real-time strawberry stalk pose detection models. Comput. Ind. 2025, 165, 104231. [Google Scholar] [CrossRef]
- Fan, Y.; Zhang, S.; Feng, K.; Qian, K.; Wang, Y.; Qin, S. Strawberry maturity recognition algorithm combining dark channel enhancement and YOLOv5. Sensors 2022, 22, 419. [Google Scholar] [CrossRef]
- Wang, S.; Zhao, J.; Cai, Y.; Li, Y.; Qi, X.; Qiu, X.; Yao, X.; Tian, Y.; Zhu, Y.; Cao, W. A method for small-sized wheat seedlings detection: From annotation mode to model construction. Plant Methods 2024, 20, 15. [Google Scholar] [CrossRef]
- Li, T.; Zhang, L.; Lin, J. Precision agriculture with YOLO-Leaf: Advanced methods for detecting apple leaf diseases. Front. Plant Sci. 2024, 15, 1452502. [Google Scholar] [CrossRef]
- Lu, Z.; Han, B.; Dong, L.; Zhang, J. COTTON-YOLO: Enhancing Cotton Boll Detection and Counting in Complex Environmental Conditions Using an Advanced YOLO Model. Appl. Sci. 2024, 14, 6650. [Google Scholar] [CrossRef]
- Yang, S.; Yao, J.; Teng, G. Corn leaf spot disease recognition based on improved YOLOv8. Agriculture 2024, 14, 666. [Google Scholar] [CrossRef]
- Zhang, X.; Qiao, Y.; Meng, F.; Fan, C.; Zhang, M. Identification of maize leaf diseases using improved deep convolutional neural networks. IEEE Access 2018, 6, 30370–30377. [Google Scholar] [CrossRef]
- Liu, J.; He, C.; Jiang, Y.; Wang, M.; Ye, Z.; He, M. A High-Precision Identification Method for Maize Leaf Diseases and Pests Based on LFMNet under Complex Backgrounds. Plants 2024, 13, 1827. [Google Scholar] [CrossRef]
- Sun, J.; Yang, Y.; He, X.; Wu, X. Northern maize leaf blight detection under complex field environment based on deep learning. IEEE Access 2020, 8, 33679–33688. [Google Scholar] [CrossRef]
- Zhang, J.; Meng, Y.; Yu, X.; Bi, H.; Chen, Z.; Li, H.; Yang, R.; Tian, J. MBAB-YOLO: A modified lightweight architecture for real-time small target detection. IEEE Access 2023, 11, 78384–78401. [Google Scholar] [CrossRef]
- Hughes, D.; Salathé, M. An open access repository of images on plant health to enable the development of mobile disease diagnostics. arXiv 2015, arXiv:1511.08060. [Google Scholar]
- Moschidis, C.; Vrochidou, E.; Papakostas, G.A. Annotation tools for computer vision tasks. In Proceedings of the Seventeenth International Conference on Machine Vision (ICMV 2024), Edinburgh, UK, 10–13 October 2024; Volume 13517, pp. 372–379. [Google Scholar]
- Khanam, R.; Hussain, M. YOLOv11: An overview of the key architectural enhancements. arXiv 2024, arXiv:2410.17725. [Google Scholar]
- Ding, X.; Zhang, X.; Han, J.; Ding, G. Scaling up your kernels to 31 × 31: Revisiting large kernel design in CNNs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 11963–11975. [Google Scholar]
- Luo, W.; Li, Y.; Urtasun, R.; Zemel, R. Understanding the effective receptive field in deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2016, 29, 4905–4913. [Google Scholar]
- Woo, S.; Park, J.; Lee, J.; Kweon, I.S. CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
- Dai, X.; Chen, Y.; Xiao, B.; Chen, D.; Liu, M.; Yuan, L.; Zhang, L. Dynamic head: Unifying object detection heads with attentions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 7373–7382. [Google Scholar]
- Nowozin, S. Optimal decisions from probabilistic models: The intersection-over-union case. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 548–555. [Google Scholar]
- Tong, Z.; Chen, Y.; Xu, Z.; Yu, R. Wise-IoU: Bounding box regression loss with dynamic focusing mechanism. arXiv 2023, arXiv:2301.10051. [Google Scholar]
- Zhang, F.; Zhou, S.; Wang, Y.; Wang, X.; Hou, Y. Label assignment matters: A gaussian assignment strategy for tiny object detection. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–12. [Google Scholar] [CrossRef]
- Zhang, T.; Luo, B.; Sharda, A.; Wang, G. Dynamic label assignment for object detection by combining predicted ious and anchor ious. J. Imaging 2022, 8, 193. [Google Scholar] [CrossRef] [PubMed]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1137–1149. [Google Scholar] [CrossRef]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.; Berg, A.C. SSD: Single shot multibox detector. In Computer Vision–ECCV 2016—Proceedings of the 14th European Conference, Proceedings, Part I 14, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Cham, Switzerland, 2016; pp. 21–37. [Google Scholar]
- Ma, L.; Yu, Q.; Yu, H.; Zhang, J. Maize leaf disease identification based on yolov5n algorithm incorporating attention mechanism. Agronomy 2023, 13, 521. [Google Scholar] [CrossRef]
- Li, C.; Li, L.; Jiang, H.; Weng, K.; Geng, Y.; Li, L.; Ke, Z.; Li, Q.; Cheng, M.; Nie, W. YOLOv6: A single-stage object detection framework for industrial applications. arXiv 2022, arXiv:2209.02976. [Google Scholar]
- Wang, C.; Bochkovskiy, A.; Liao, H.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 7464–7475. [Google Scholar]
- Zhang, C.; Hu, Z.; Xu, L.; Zhao, Y. A YOLOv7 incorporating the Adan optimizer based corn pests identification method. Front. Plant Sci. 2023, 14, 1174556. [Google Scholar] [CrossRef]
- Wang, C.; Yeh, I.; Mark Liao, H. YOLOv9: Learning what you want to learn using programmable gradient information. In Computer Vision—ECCV 2024—Proceedings of the European Conference on Computer Vision, Milan, Italy, 29 September–4 October 2024; Springer: Cham, Switzerland, 2024; pp. 1–21. [Google Scholar]
- Gharat, K.; Jogi, H.; Gode, K.; Talele, K.; Kulkarni, S.; Kolekar, M.H. Enhanced Detection of Maize Leaf Blight in Dynamic Field Conditions Using Modified YOLOv9. In Proceedings of the 2024 IEEE Space, Aerospace and Defence Conference (SPACE), Bangalore, India, 22–23 July 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 140–143. [Google Scholar]
- Wang, A.; Chen, H.; Liu, L.; Chen, K.; Lin, Z.; Han, J. YOLOv10: Real-time end-to-end object detection. Adv. Neural Inf. Process. Syst. 2025, 37, 107984–108011. [Google Scholar]
Parameter | Value | Parameter | Value |
---|---|---|---|
Epochs | 300 | Optimizer | SGD |
Patience | 50 | Weight decay | 0.0005 |
Batch size | 8 | Momentum | 0.937 |
Image size | 640 | Warmup momentum | 0.8 |
Workers | 8 | lrf | 0.05 |
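The settings above map naturally onto an Ultralytics-style training configuration. The fragment below is a hypothetical sketch: the key names follow Ultralytics conventions rather than being quoted from the paper, and the initial learning rate, which the table does not list, is omitted.

```yaml
# Hypothetical Ultralytics-style training configuration mirroring
# the settings table above (key names assumed).
epochs: 300
patience: 50
batch: 8
imgsz: 640
workers: 8
optimizer: SGD
weight_decay: 0.0005
momentum: 0.937
warmup_momentum: 0.8
lrf: 0.05          # final learning-rate fraction
```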
Model | Precision/% | Recall/% | F1 Score/% | mAP@0.5/% | mAP@0.5~0.95/% | Detection Speed/ms | Parameter/M | GFLOPs | MemoryCost/MB |
---|---|---|---|---|---|---|---|---|---|
YOLOv11s-C2K3 | 87.7 | 78.0 | 82.6 | 85.3 | 63.5 | 1.3 | 10.52 | 21.2 | 19.2 |
YOLOv11s-CFNet | 87.3 | 79.8 | 83.4 | 85.6 | 63.9 | 3.2 | 8.89 | 22.7 | 18.1 |
YOLOv11s-FasterNet | 86.5 | 80.2 | 83.2 | 85.5 | 63.3 | 2.4 | 9.04 | 23.7 | 18.5 |
YOLOv11s-GhostNetV2 | 87.3 | 77.1 | 81.9 | 84.6 | 62.6 | 3.3 | 8.57 | 21.2 | 17.6 |
YOLOv11s-MobileViTBv3 | 83.2 | 80.9 | 82.0 | 85.8 | 63.5 | 4.4 | 10.60 | 34.8 | 21.6 |
YOLOv11s-RepLKNet | 89.1 | 79.9 | 84.2 | 87.1 | 66.7 | 3.2 | 9.85 | 25.2 | 20.2 |
Model | Precision/% | Recall/% | F1 Score/% | mAP@0.5/% | mAP@0.5~0.95/% | Detection Speed/ms | Parameter/M | GFLOPs | MemoryCost/MB |
---|---|---|---|---|---|---|---|---|---|
YOLOv11s | 87.7 | 78.0 | 82.6 | 85.3 | 63.5 | 1.3 | 10.52 | 21.2 | 19.2 |
YOLOv11s-CBAM | 90.5 | 79.4 | 84.1 | 86.2 | 64.6 | 1.6 | 8.49 | 20.6 | 17.3 |
YOLOv11s-EC | 87.7 | 78.0 | 82.6 | 85.3 | 63.5 | 4.2 | 9.41 | 21.3 | 19.2 |
YOLOv11s-EMA | 86.4 | 81.1 | 83.7 | 86.9 | 64.6 | 3.3 | 9.08 | 21.7 | 18.5 |
YOLOv11s-GAM | 88.7 | 76.6 | 82.2 | 84.1 | 62.4 | 4.6 | 11.91 | 23.3 | 24.2 |
YOLOv11s-SA | 84.8 | 81.4 | 83.1 | 86.6 | 64.5 | 4.1 | 8.36 | 20.2 | 17.2 |
YOLOv11s-SimAM | 85.5 | 78.8 | 82.0 | 86.0 | 63.7 | 3.7 | 8.42 | 20.5 | 17.5 |
RepLKNet | CBAM | DynamicHead | WIoU | DynamicATSS | Precision/% | Recall/% | F1 Score/% | mAP@0.5/% | mAP@0.5~0.95/% | Detection Speed/ms | Parameter/M | GFLOPs | MemoryCost/MB |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
- | - | - | - | - | 87.7 | 78.0 | 82.6 | 85.3 | 63.5 | 1.3 | 10.52 | 21.2 | 19.2 |
√ | - | - | - | - | 89.1 | 79.9 | 84.2 | 87.1 | 66.7 | 3.2 | 9.85 | 25.2 | 20.2 |
- | √ | - | - | - | 90.5 | 79.4 | 84.1 | 86.2 | 64.6 | 1.6 | 8.49 | 20.6 | 17.3 |
- | - | √ | - | - | 87.8 | 82.1 | 83.8 | 85.5 | 65.0 | 6.5 | 9.72 | 21.2 | 19.8 |
- | - | - | √ | - | 86.6 | 81.0 | 83.7 | 86.7 | 64.4 | 1.2 | 10.20 | 21.3 | 19.2 |
- | - | - | - | √ | 86.3 | 80.1 | 83.1 | 85.6 | 63.5 | 3.8 | 10.30 | 21.3 | 19.2 |
√ | √ | √ | √ | √ | 92.6 | 85.4 | 88.9 | 90.2 | 72.5 | 1.6 | 9.41 | 19.3 | 16.4 |
Model | Precision/% | Recall/% | F1 Score/% | mAP@0.5/% | mAP@0.5~0.95/% | Detection Speed/ms | Parameter/M | GFLOPs | MemoryCost/MB |
---|---|---|---|---|---|---|---|---|---|
Faster R-CNN | 82.6 | 77.2 | 80.0 | 80.2 | 60.1 | 23.0 | 28.86 | 48.6 | 42.3 |
SSD | 85.2 | 78.1 | 80.9 | 82.1 | 61.6 | 15.2 | 25.6 | 36.2 | 30.6 |
YOLOv5s | 85.9 | 78.9 | 82.3 | 84.3 | 61.0 | 3.2 | 7.81 | 18.7 | 16.0 |
YOLOv6s | 82.8 | 77.9 | 80.3 | 81.9 | 60.5 | 2.5 | 15.97 | 42.8 | 32.3 |
YOLOv7 | 87.7 | 79.6 | 83.5 | 86.0 | 63.1 | 3.9 | 9.82 | 23.4 | 20.0 |
YOLOv9s | 88.6 | 83.3 | 85.9 | 88.7 | 65.4 | 2.7 | 6.31 | 22.6 | 13.3 |
YOLOv9c | 87.6 | 83.3 | 85.4 | 88.0 | 67.0 | 4.2 | 21.35 | 84.0 | 43.3 |
YOLOv10s | 88.6 | 78.2 | 83.1 | 85.1 | 63.4 | 2.6 | 8.03 | 24.5 | 16.6 |
YOLOv11s | 87.7 | 78.0 | 82.6 | 85.3 | 63.5 | 1.3 | 10.52 | 21.2 | 19.2 |
YOLOv11s-RCDWD | 92.6 | 85.4 | 88.9 | 90.2 | 72.5 | 1.6 | 9.41 | 19.3 | 16.4 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
He, J.; Ren, Y.; Li, W.; Fu, W. YOLOv11-RCDWD: A New Efficient Model for Detecting Maize Leaf Diseases Based on the Improved YOLOv11. Appl. Sci. 2025, 15, 4535. https://doi.org/10.3390/app15084535