Efficient Identification and Classification of Pear Varieties Based on Leaf Appearance with YOLOv10 Model
Abstract
1. Introduction
2. Materials and Methods
2.1. Image Acquisition and Dataset Building
2.2. YOLOv10
2.3. Selection of Other Network Models
2.4. SCConv Module
2.5. Evaluation Metrics
2.6. Experimental Environment
2.7. Statistical Analysis
3. Results and Discussion
3.1. Image and Label Datasets
3.2. Model Training
3.3. Comparative Analysis of Different Models on the Self-Made Dataset
3.4. Comparative Experiment on Leaf Recognition and Classification Performance of Different Pear Varieties
3.5. Improved YOLOv10 Model
4. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
Abbreviation | Definition |
---|---|
CIB | Compact Inverted Block |
Conv | Convolutional Layer |
C2f | Cross Stage Partial Fusion with 2 convolutions |
DW | Depthwise Convolution |
FFN | Feed-Forward Network |
PSA | Partial Self-Attention |
SPPF | Spatial Pyramid Pooling Fast |
SCDown | Spatial Channel Downsampling |
References
Species | Name | Origin | Number of Leaf Photos in the Dataset |
---|---|---|---|
P. communis | Bartlett | Britain | 520 |
 | Beurré Hardy | France | 524 |
 | Conference | Britain | 514 |
 | Dr. Jules Guyot | France | 500 |
 | Early Red Comice | Britain | 553 |
 | Harrow Sweet | Canada | 552 |
P. pyrifolia | Akiziki | Japan | 513 |
 | Cangxi Xueli | China | 500 |
 | Dayexue | China | 553 |
 | Hongshaobang | China | 504 |
 | Mandingxue | Korea | 566 |
 | Whangkeumbae | China | 564 |
P. bretschneideri | Chili | China | 510 |
 | Dangshan Suli | China | 515 |
 | Fojianxi | China | 515 |
 | Shuihongxiao | China | 500 |
 | Xuehuali | China | 500 |
 | Yali | China | 533 |
P. ussuriensis | Anli | China | 567 |
 | Huagai | China | 566 |
 | Jianbali | China | 542 |
 | Jingbaili | China | 575 |
 | Nanguoli | China | 544 |
 | Xiaoxiangshui | China | 573 |
P. sinkiangensis | Huachangba | China | 554 |
 | Korla Pear | China | 572 |
 | Seerkefu | China | 562 |
P. hybrid | Cuiguan | China | 509 |
 | Hongxiangsu | China | 546 |
 | Huangguan | China | 547 |
 | Jinfeng | China | 506 |
 | Xinli 7 | China | 543 |
 | Yuluxiang | China | 514 |
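The table covers 33 varieties across six Pyrus taxa. As a small sanity-check sketch (the per-variety image counts are copied verbatim from the table; nothing else is assumed), the per-species and overall totals can be tallied as follows:

```python
# Per-variety image counts, grouped by the Species column of the table above.
counts = {
    "P. communis": [520, 524, 514, 500, 553, 552],
    "P. pyrifolia": [513, 500, 553, 504, 566, 564],
    "P. bretschneideri": [510, 515, 515, 500, 500, 533],
    "P. ussuriensis": [567, 566, 542, 575, 544, 573],
    "P. sinkiangensis": [554, 572, 562],
    "P. hybrid": [509, 546, 547, 506, 543, 514],
}

for species, images in counts.items():
    print(f"{species}: {len(images)} varieties, {sum(images)} images")
print(f"Total: {sum(len(v) for v in counts.values())} varieties, "
      f"{sum(sum(v) for v in counts.values())} images")
```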
Experimental Environment | Details |
---|---|
Programming language | Python 3.9 |
Operating system | Windows 11 |
Deep learning framework | PyTorch 1.11.0 |
Development environment | CUDA 11.3 |
GPU | NVIDIA GeForce RTX 3090 |
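All training runs use the environment listed above. As a minimal, hedged sketch of how a YOLOv10 run can be launched under such a setup (this assumes the Ultralytics-style Python API rather than the authors' actual training script; the checkpoint name, dataset YAML, image size, and device index are illustrative):

```python
import torch
from ultralytics import YOLO  # assumes an Ultralytics build with YOLOv10 support

# Confirm the PyTorch/CUDA/GPU stack listed above is visible.
print(torch.__version__, torch.version.cuda, torch.cuda.is_available())

# Hypothetical checkpoint and dataset config -- replace with your own files.
model = YOLO("yolov10n.pt")
model.train(
    data="pear_leaves.yaml",  # hypothetical dataset definition (33 leaf classes)
    epochs=100,               # matches the 100 epochs used in the comparisons below
    imgsz=640,                # illustrative input resolution
    device=0,                 # first CUDA device, e.g. the RTX 3090
)
```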
Model | Epochs | Precision (%) | Recall (%) | mAP50 (%) | F1-Score |
---|---|---|---|---|---|
YOLOv10 | 100 | 99.60 ± 0.72 a | 99.40 ± 0.93 a | 99.50 ± 0.17 a | 0.99 |
YOLOv7 | 100 | 97.20 ± 3.00 b | 96.00 ± 4.06 bc | 98.90 ± 1.37 b | 0.97 |
ResNet50 | 100 | 96.79 ± 2.62 b | 96.76 ± 2.07 b | 98.53 ± 0.38 b | 0.96 |
VGG16 | 100 | 95.27 ± 2.82 c | 95.24 ± 2.73 c | 97.87 ± 0.54 c | 0.95 |
Swin Transformer | 100 | 97.92 ± 0.87 b | 94.85 ± 1.37 c | 98.64 ± 0.75 b | 0.97 |
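The F1-scores in the last column are the harmonic mean of precision and recall; the snippet below is a quick worked check using the YOLOv10 row as input, not an additional result:

```python
def f1_score(precision_pct: float, recall_pct: float) -> float:
    """Harmonic mean of precision and recall (both in percent), returned on a 0-1 scale."""
    return 2 * precision_pct * recall_pct / (precision_pct + recall_pct) / 100

# YOLOv10 row: P = 99.60 %, R = 99.40 %
print(round(f1_score(99.60, 99.40), 2))  # -> 0.99, as reported above
```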
Model | Parameters/M | FLOPs/G | Training Time |
---|---|---|---|
YOLOv10 | 2.59 | 8.5 | 11 h 49 m 37 s |
YOLOv7 | 35.64 | 105.7 | 32 h 31 m 30 s |
ResNet50 | 23.58 | 4.12 | 43 h 31 m 7 s |
VGG16 | 134.4 | 15.5 | 27 h 10 m 9 s |
Swin Transformer | 28.3 | 4.5 | 13 h 48 m 28 s |
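Parameter counts such as those above come directly from the model definition, while FLOPs additionally require profiling a forward pass. A hedged PyTorch illustration follows; the 33-class ResNet-50 is an assumption consistent with the 33 varieties in the dataset table, not code taken from the paper:

```python
import torchvision

# A ResNet-50 whose final fully connected layer has 33 outputs
# (one per pear variety); weights are randomly initialised here.
model = torchvision.models.resnet50(num_classes=33)

params_m = sum(p.numel() for p in model.parameters()) / 1e6
print(f"{params_m:.2f} M parameters")  # ≈ 23.58 M, matching the ResNet50 row above
```

For reference, the 23.58 M entry in the table is consistent with such a 33-class head, whereas the standard 1000-class ResNet-50 has roughly 25.6 M parameters.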
Leaf Variety | Precision (%) | Error Value (%) | Recall (%) | Error Value (%) | mAP50 (%) | Error Value (%) | mAP50-95 (%) | Error Value (%) |
---|---|---|---|---|---|---|---|---|
Anli | 99.1 | −0.47 | 99.4 | 0.04 | 99.5 | 0.04 | 79.6 | −13.42 |
Bartlett | 99.9 | 0.34 | 100 | 0.65 | 99.5 | 0.04 | 88.4 | −3.85 |
Cangxi Xueli | 100 | 0.44 | 99.6 | 0.24 | 99.5 | 0.04 | 81.8 | −11.03 |
Chili | 99.7 | 0.14 | 100 | 0.65 | 99.5 | 0.04 | 92.5 | 0.61 |
Cuiguan | 100 | 0.44 | 99.6 | 0.24 | 99.5 | 0.04 | 94.7 | 3.00 |
Dangshan Suli | 99.9 | 0.34 | 100 | 0.65 | 99.5 | 0.04 | 91.6 | −0.37 |
Dayexue | 99.7 | 0.14 | 99.4 | 0.04 | 99.5 | 0.04 | 94.6 | 2.89 |
Fojianxi | 99.9 | 0.34 | 100 | 0.65 | 99.5 | 0.04 | 89.9 | −2.22 |
Beurré Hardy | 99.4 | −0.16 | 98.9 | −0.46 | 99.5 | 0.04 | 86.4 | −6.03 |
Harrow Sweet | 99.8 | 0.24 | 100 | 0.65 | 99.5 | 0.04 | 82.2 | −10.59 |
Hongshaobang | 100 | 0.44 | 98.9 | −0.46 | 99.5 | 0.04 | 95.8 | 4.20 |
Hongxiangsu | 99.5 | −0.06 | 99.4 | 0.04 | 99.5 | 0.04 | 90.9 | −1.13 |
Huachangba | 99.9 | 0.34 | 100 | 0.65 | 99.5 | 0.04 | 92.5 | 0.61 |
Huagai | 99.8 | 0.24 | 99.4 | 0.04 | 99.5 | 0.04 | 94 | 2.24 |
Huangguan | 98.2 | −1.37 | 98 | −1.37 | 99.1 | −0.36 | 91.5 | −0.48 |
Whangkeumbae | 99.2 | −0.37 | 99.4 | 0.04 | 99.5 | 0.04 | 91.6 | −0.37 |
Jianbali | 99.2 | −0.37 | 100 | 0.65 | 99.5 | 0.04 | 96.6 | 5.07 |
Jinfeng | 98.7 | −0.87 | 100 | 0.65 | 99.5 | 0.04 | 95 | 3.33 |
Jingbaili | 99.9 | 0.34 | 99.4 | 0.04 | 99.5 | 0.04 | 89.9 | −2.22 |
Conference | 99.5 | −0.06 | 99.4 | 0.04 | 99.5 | 0.04 | 88.1 | −4.18 |
Korla Pear | 100 | 0.44 | 97.7 | −1.67 | 99.5 | 0.04 | 94.5 | 2.79 |
Mandingxue | 100 | 0.44 | 99.2 | −0.16 | 99.5 | 0.04 | 95.1 | 3.44 |
Nanguoli | 99.9 | 0.34 | 100 | 0.65 | 99.5 | 0.04 | 93.7 | 1.91 |
Akiziki | 100 | 0.44 | 100 | 0.65 | 99.5 | 0.04 | 99.1 | 7.79 |
Dr. Jules Guyot | 99.8 | 0.24 | 100 | 0.65 | 99.5 | 0.04 | 93 | 1.15 |
Seerkefu | 100 | 0.44 | 98.1 | −1.27 | 99.5 | 0.04 | 94.7 | 3.00 |
Shuihongxiao | 99.6 | 0.04 | 100 | 0.65 | 99.5 | 0.04 | 91.7 | −0.26 |
Xiaoxiangshui | 99.8 | 0.24 | 100 | 0.65 | 99.5 | 0.04 | 92.7 | 0.83 |
Xinli 7 | 99.9 | 0.34 | 99.4 | 0.04 | 99.5 | 0.04 | 97.2 | 5.72 |
Xuehuali | 100 | 0.44 | 98.9 | −0.46 | 99.5 | 0.04 | 92.4 | 0.50 |
Yali | 100 | 0.44 | 99.8 | 0.45 | 99.5 | 0.04 | 95.4 | 3.76 |
Yuluxiang | 96.4 | −3.18 | 95.5 | −3.88 | 98.6 | −0.87 | 93.4 | 1.59 |
Early Red Comice | 98.9 | −0.67 | 99.4 | 0.04 | 99.5 | 0.04 | 93.5 | 1.70 |
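The repeated "Error Value (%)" columns read most naturally as each variety's relative deviation from the corresponding column mean, expressed in percent; this is an inference from the numbers themselves rather than a stated definition, and the mean of roughly 91.9% used below is likewise inferred. A spot check against the mAP50-95 column:

```python
def relative_error(value: float, column_mean: float) -> float:
    """Relative deviation from the column mean, in percent."""
    return (value / column_mean - 1.0) * 100.0

column_mean = 91.94  # approximate mAP50-95 mean implied by the table
for variety, map50_95 in [("Anli", 79.6), ("Akiziki", 99.1), ("Xinli 7", 97.2)]:
    print(variety, round(relative_error(map50_95, column_mean), 2))
# -> -13.42, 7.79, 5.72, matching the Error Value entries above
```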