Rapid Identification of Mangrove Leaves Based on Improved YOLOv10 Model
Abstract
1. Introduction
2. Dataset and Preprocessing
2.1. Experimental Dataset
2.2. Unconstrained Real-World Leaf Dataset
3. Principle of the YOLOv10 Object Detection Algorithm
3.1. YOLOv10 Model Architecture
3.2. Model Improvement and Optimization
- Similar leaf morphology: Leaves from different species may exhibit highly similar shapes, textures, and colors;
- Multi-scale targets: Leaves of widely varying sizes may appear within the same image;
- Complex backgrounds: Mangrove environments contain interfering elements such as branches, water surfaces, and mud;
- Limited data: Mangrove leaf datasets are typically small and exhibit significant class imbalance.
3.2.1. Backbone
3.2.2. Neck
- Top-down path: High-level features (C5) are upsampled and concatenated with mid-level features (C4). The concatenated feature map undergoes a convolution operation to generate the fused feature map (P4);
- Bottom-up path: Mid-level features (P4) are downsampled and concatenated with low-level features (C3). The concatenated feature map is then convolved to produce the fused feature map (P3);
- Finally, the fused feature maps undergo weighted summation to generate the final multi-scale feature maps (P3, P4, P5); a minimal code sketch of this weighted-summation step is given below.
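The paper does not include reference code for this fusion step; the following is a minimal PyTorch sketch of a learnable weighted summation of two feature maps followed by a refinement convolution. The module name `WeightedFusion`, the channel count, and the tensor shapes are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation) of weighted fusion of two
# feature maps, assuming both inputs have been resized/projected to a common
# shape. Channel count (256) and module name are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedFusion(nn.Module):
    """Fuse two same-shaped feature maps with learnable, normalized weights."""
    def __init__(self, channels: int = 256, eps: float = 1e-4):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(2))   # one weight per input
        self.eps = eps
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.SiLU()

    def forward(self, x_a: torch.Tensor, x_b: torch.Tensor) -> torch.Tensor:
        w = F.relu(self.weights)                      # keep weights non-negative
        w = w / (w.sum() + self.eps)                  # normalize to sum to 1
        fused = w[0] * x_a + w[1] * x_b               # weighted summation
        return self.act(self.bn(self.conv(fused)))    # refine the fused map

# Example: fuse an upsampled high-level map with a mid-level map to form P4.
c4 = torch.randn(1, 256, 40, 40)                      # mid-level feature (C4)
c5 = torch.randn(1, 256, 20, 20)                      # high-level feature (C5)
p4 = WeightedFusion(256)(c4, F.interpolate(c5, scale_factor=2, mode="nearest"))
print(p4.shape)                                       # torch.Size([1, 256, 40, 40])
```

Normalizing the non-negative weights keeps each scale's contribution bounded, which is the usual way a weighted summation over feature maps is kept stable during training.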
3.2.3. Head
4. Experimental Results
4.1. Experimental Environment and Configuration
4.2. Evaluation Metrics
4.3. Results Analysis
4.3.1. Experimental Results of the Standard YOLOv10 Model
4.3.2. Experimental Results of the Improved YOLOv10 Model
4.3.3. Comparative Analysis of Various Algorithms
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Shao, X.; Feng, Q.; Shao, S.; Wang, J.; Wang, Y. Research progress in plant identification technology based on leaf images. J. Gansu Agric. Univ. 2010, 45, 156–160. [Google Scholar]
- Zhang, X.; Chen, J.; Zhuge, J.; Yu, L. Fast plant image recognition based on deep learning. J. East China Univ. Sci. Technol. 2018, 44, 887–895. [Google Scholar]
- Wu, Y.; Sun, X.; Ji, C.M.; Hu, N.J. Recognition of edible wild vegetable species based on deep learning. China Cucurbits Veg. 2024, 37, 57–66. [Google Scholar]
- Zhang, S.; Huai, Y. Plant leaf recognition using hierarchical convolutional deep learning system. J. Beijing For. Univ. 2016, 38, 108–115. [Google Scholar]
- Chen, X.; Wang, Y.; Xing, S.X. Plant Species Recognition Based on Wavelet and Variable Local Edge Patterns. Comput. Appl. Softw. 2018, 35, 230–235. [Google Scholar]
- Sladojevic, S.; Arsenovic, M.; Anderla, A.; Culibrk, D.; Stefanovic, D. Deep Neural Networks Based Recognition of Plant Diseases by Leaf Image Classification. Comput. Intell. Neurosci. 2016, 2016, 3289801. [Google Scholar] [CrossRef]
- Mohanty, S.P.; Hughes, D.P.; Salathe, M. Using deep learning for image-based plant disease detection. Front. Plant Sci. 2016, 7, 215232. [Google Scholar] [CrossRef]
- Geetharamani, G.; Pandian, A.J. Identification of plant leaf diseases using a nine-layer deep convolutional neural network. Comput. Electr. Eng. 2019, 76, 323–338. [Google Scholar] [CrossRef]
- Ferentinos, K.P. Deep learning models for plant disease detection and diagnosis. Comput. Electron. Agric. 2018, 145, 311–318. [Google Scholar] [CrossRef]
- Atila, Ü.; Uçar, M.; Akyol, K.; Uçar, E. Plant leaf disease classification using EfficientNet deep learning model. Ecol. Inform. 2021, 61, 101182. [Google Scholar] [CrossRef]
- Brahimi, M.; Boukhalfa, K.; Moussaoui, A. Deep Learning for Tomato Diseases: Classification and Symptoms Visualization. Appl. Artif. Intell. 2017, 31, 299–315. [Google Scholar] [CrossRef]
- Chen, Z.; Chen, B.; Huang, Y.; Zhou, Z. GE-YOLO for Weed Detection in Rice Paddy Fields. Appl. Sci. 2025, 15, 2823. [Google Scholar] [CrossRef]
- Zhou, Q.; Li, H.; Cai, Z.; Zhong, Y.; Zhong, F.; Lin, X.; Wang, L. YOLO-ACE: Enhancing YOLO with Augmented Contextual Efficiency for Precision Cotton Weed Detection. Sensors 2025, 25, 1635. [Google Scholar] [CrossRef] [PubMed]
- Han, Y.; Duan, B.; Guan, R.; Yang, G.; Zhen, Z. LUFFD-YOLO: A Lightweight Model for UAV Remote Sensing Forest Fire Detection Based on Attention Mechanism and Multi-Level Feature Fusion. Remote Sens. 2024, 16, 2177. [Google Scholar] [CrossRef]
- Wang, J.; Zhang, H.; Liu, Y.; Zhang, H.; Zheng, D. Tree-Level Chinese Fir Detection Using UAV RGB Imagery and YOLO-DCAM. Remote Sens. 2024, 16, 22. [Google Scholar] [CrossRef]
- Li, S.; Lideskog, H. Implementation of a System for Real-Time Detection and Localization of Terrain Objects on Harvested Forest Land. Forests 2021, 12, 1142. [Google Scholar] [CrossRef]
- Li, N.; Wu, Y.Y.; Liu, Y. Pedestrian attribute recognition algorithm based on multi-scale attention network. Laser Optoelectron. Prog. 2021, 58, 0410025. [Google Scholar]
- Zhao, J.L.; Zhang, X.Z.; Dong, H.Y. Defect detection in transmission line based on scale-invariant feature pyramid networks. Comput. Eng. Appl. 2022, 58, 289–296. [Google Scholar]
- Zhou, Y.; Liu, W.P.; Luo, Y.Q.; Zong, S.X. Small object detection for infected trees based on the deep learning method. Sci. Silvae Sin. 2021, 57, 98–107. [Google Scholar]
- Yao, Y.Q.; Cheng, G.; Xie, X.X.; Han, J.W. Optical remote sensing image object detection based on multi-resolution feature fusion. Natl. Remote Sens. Bull. 2021, 25, 1124–1137. [Google Scholar] [CrossRef]
- Wang, X.; Zhao, Q.; Jiang, P.; Zheng, Y.; Yuan, L.; Yuan, P. LDS-YOLO: A lightweight small object detection method for dead trees from shelter forest. Comput. Electron. Agric. 2022, 198, 107035. [Google Scholar] [CrossRef]
- Yuan, Q.; Zou, S.; Wang, H.; Luo, W.; Zheng, X.; Liu, L.; Meng, Z. A Lightweight Pine Wilt Disease Detection Method Based on Vision Transformer-Enhanced YOLO. Forests 2024, 15, 1050. [Google Scholar] [CrossRef]
- Zhao, J.; Zhang, X.; Yan, J.; Qiu, X.; Yao, X.; Tian, Y.; Zhu, Y.; Cao, W. A Wheat Spike Detection Method in UAV Images Based on Improved YOLOv5. Remote Sens. 2021, 13, 3095. [Google Scholar] [CrossRef]
- Ma, Y.; Liu, H.; Ling, C.; Zhao, F.; Jiang, Y.; Zhang, Y. Object Detection of Individual Mangrove Based on Improved YOLOv5. Laser Optoelectron. Prog. 2022, 59, 1828003. [Google Scholar]
- Wang, A.; Chen, H.; Liu, L.; Chen, K.; Lin, Z.; Han, J. YOLOv10: Real-time end-to-end object detection. arXiv 2024, arXiv:2405.14458. [Google Scholar]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
- He, K.M.; Zhang, X.Y.; Ren, S.Q.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 770–778. [Google Scholar] [CrossRef]
- Liu, S.; Qi, L.; Qin, H.; Shi, J.; Jia, J. Path aggregation network for instance segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 8759–8768. [Google Scholar] [CrossRef]
Configuration | Parameters |
---|---|
CPU | Intel® Xeon® Silver 4314 CPU @ 2.40 GHz |
GPU | GeForce RTX 4090 24 GB |
Memory | 40 GB |
Operating System | Windows 11 |
Development Environment | Python 3.11, PyTorch 1.11.1 |
Training Environment | CUDA 12.1, cuDNN 7.6.5 |
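The exact training commands are not listed in the paper; a plausible way to train a YOLOv10 baseline in the environment above uses the Ultralytics Python API, sketched here with an assumed dataset file `mangrove_leaves.yaml` and illustrative hyperparameters.

```python
# Minimal sketch (assumed workflow, not the authors' exact configuration) of
# training a YOLOv10 baseline on a custom leaf dataset with the Ultralytics API.
# "mangrove_leaves.yaml" and the hyperparameter values are illustrative placeholders.
from ultralytics import YOLO

model = YOLO("yolov10n.pt")          # pretrained YOLOv10-nano weights
model.train(
    data="mangrove_leaves.yaml",     # image paths + the 8 leaf class names
    epochs=200,                      # illustrative value
    imgsz=640,                       # input resolution
    batch=16,
    device=0,                        # the single RTX 4090 listed above
)
metrics = model.val()                # evaluate on the validation split
print(metrics.box.map50)             # mAP@0.5, as reported in the tables below
```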
Training results of the standard YOLOv10 model:
Parameter | Definition | Dynamic Range | Analysis |
---|---|---|---|
train/box_loss | Bounding Box Loss Trend | ~1.60 → 1.00 | Non-smooth curve trend |
train/cls_loss | Classification Loss Trend | ~1.40 → 0.80 | Non-smooth curve trend |
train/dfl_loss | Distribution Focal Loss (DFL) Trend | ~2.30 → 1.90 | Non-smooth curve trend |
metrics/precision(B) | Precision Trend | 0.57~0.72 | Non-convergence |
metrics/recall(B) | Recall Trend | 0.54~0.62 | Non-convergence |
val/box_loss | Bounding Box Loss Trend | ~4.75 → 4.35 | Unsatisfactory loss |
val/cls_loss | Classification Loss Trend | ~5.40 → 4.60 | High loss |
val/dfl_loss | Distribution Focal Loss (DFL) Trend | ~6.80 → 6.60 | Unsatisfactory loss |
metrics/mAP50(B) | mAP@0.5 Trend | ~6.80 → 6.60 | Insignificant change |
metrics/mAP50-95(B) | mAP@[0.5:0.95] Trend | ~0.27 → 0.30 | Insignificant change |
Training results of the improved YOLOv10 model (YOLOv10-MSDet):
Parameter | Definition | Dynamic Range | Analysis |
---|---|---|---|
train/box_loss | Bounding Box Loss Trend | ~2.00 → 1.20 | Smooth curve trend |
train/cls_loss | Classification Loss Trend | ~2.00 → 0.50 | Smooth curve trend |
train/dfl_loss | Distribution Focal Loss (DFL) Trend | ~1.20 → 0.90 | Smooth curve trend |
metrics/precision(B) | Precision Trend | 0.40~0.85 | Convergence |
metrics/recall(B) | Recall Trend | 0.65~0.85 | Convergence |
val/box_loss | Bounding Box Loss Trend | ~1.50 → 1.90 | Accuracy improvement |
val/cls_loss | Classification Loss Trend | ~2.50 → 1.00 | Improved precision |
val/dfl_loss | Distribution Focal Loss (DFL) Trend | ~0.98 → 1.10 | Significantly reduced |
metrics/mAP50(B) | mAP@0.5 Trend | ~0.30 → 0.90 | Significantly improved |
metrics/mAP50-95(B) | mAP@[0.5:0.95] Trend | ~0.20 → 0.50 | Significantly improved |
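The quantities in the two tables above (`train/box_loss`, `metrics/precision(B)`, `metrics/mAP50(B)`, and so on) correspond to the per-epoch columns that Ultralytics writes to `results.csv`; a minimal sketch of inspecting them, assuming the default run directory `runs/detect/train`, is shown below.

```python
# Minimal sketch (assumed file layout) of inspecting the per-epoch training log
# that Ultralytics writes to results.csv; the run directory name is illustrative.
import pandas as pd

df = pd.read_csv("runs/detect/train/results.csv")
df.columns = [c.strip() for c in df.columns]   # column names may carry padding

# The quantities tabulated above are columns of this file.
cols = ["train/box_loss", "train/cls_loss", "train/dfl_loss",
        "metrics/precision(B)", "metrics/recall(B)",
        "metrics/mAP50(B)", "metrics/mAP50-95(B)"]
print(df[cols].iloc[[0, -1]])                  # first vs. last epoch
print("best mAP@0.5:", df["metrics/mAP50(B)"].max())
```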
mAP@0.5 (%) of the different algorithms for each leaf species:
Leaf Species | YOLOv5s | YOLOX | YOLOv7-tiny | YOLOv8n | YOLOv10n | YOLOv10-MSDet |
---|---|---|---|---|---|---|
Sonneratia apetala | 85.6 | 88.3 | 87.1 | 90.1 | 87.3 | 91.3 |
Kandelia obovata | 87.6 | 85.4 | 80.6 | 81.5 | 89.8 | 88.7 |
Excoecaria agallocha | 90.2 | 87.0 | 88.7 | 88.0 | 90.3 | 94.0 |
Clerodendrum inerme | 92.9 | 94.2 | 92.6 | 94.8 | 94.6 | 98.4 |
Hibiscus tiliaceus | 85.6 | 83.3 | 80.5 | 83.6 | 87.2 | 88.8 |
Rhizophora stylosa | 90.8 | 90.4 | 85.6 | 91.1 | 90.7 | 91.3 |
Scaevola taccada | 84.6 | 84.0 | 85.8 | 83.8 | 88.6 | 90.2 |
Pongamia pinnata | 87.4 | 90.2 | 92.0 | 87.1 | 88.0 | 96.5 |
Model | Parameters (M) | Computation (GFLOPs) | Speed (FPS) | Training Time (Epoch/Hour) | mAP@0.5 |
---|---|---|---|---|---|
YOLOv5s | 7.2 | 16.5 | 142 | 3.8 | 87.6% |
YOLOX-s | 9.0 | 26.8 | 118 | 4.2 | 88.3% |
YOLOv7-tiny | 6.0 | 13.2 | 155 | 3.5 | 87.1% |
YOLOv8n | 3.2 | 8.1 | 198 | 2.6 | 89.5% |
YOLOv10n | 3.1 | 7.9 | 210 | 2.5 | 89.6% |
YOLOv10-MSDet | 3.4 | 8.3 | 195 | 2.7 | 92.4% |
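The table does not state how the parameter counts and inference speeds were measured; one common way to obtain comparable numbers, sketched under the assumption of a single CUDA GPU and 640 × 640 inputs, is to count the model's parameters directly and time repeated forward passes after a warm-up.

```python
# Minimal sketch (assumptions: a CUDA GPU, 640x640 inputs, and the warm-up/repeat
# counts shown here) of measuring parameter count and inference speed (FPS).
import time
import torch
from ultralytics import YOLO

model = YOLO("yolov10n.pt").model.eval().cuda()        # underlying nn.Module
params_m = sum(p.numel() for p in model.parameters()) / 1e6
print(f"parameters: {params_m:.1f} M")

x = torch.randn(1, 3, 640, 640, device="cuda")
with torch.no_grad():
    for _ in range(20):                                # warm-up iterations
        model(x)
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    for _ in range(100):                               # timed forward passes
        model(x)
    torch.cuda.synchronize()
fps = 100 / (time.perf_counter() - t0)
print(f"speed: {fps:.0f} FPS")
```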