Detection and Maturity Classification of Dense Small Lychees Using an Improved Kolmogorov–Arnold Network–Transformer
Abstract
1. Introduction
- A KAN-enhanced Transformer model (GRN-KANformer) is developed for dense small lychee detection and maturity classification in the field.
- Multiple enhancements to the Transformer are proposed: a stackable, dynamic, lightweight GhostResNet is incorporated; a small object detection layer is added to the hybrid encoder, enabling feature extraction from dense small lychees; and an efficient channel attention mechanism is introduced for cross-level multi-scale fusion, reducing interference and strengthening lychee-specific feature extraction.
- An improved KAN module is incorporated into the detection head, enhancing detection capability without significant increases in model complexity.
- GRN-KANformer is validated on a public lychee dataset, where it outperforms multiple popular deep learning models.
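To make the channel-attention idea in the bullets above concrete, the following is a minimal NumPy sketch of efficient channel attention (ECA) in the style of ECA-Net: per-channel global average pooling, a single shared 1D convolution across the channel dimension with an adaptive kernel size (commonly k = |(log2 C + b)/γ| rounded to the nearest odd integer, with γ = 2, b = 1), and a sigmoid gate that rescales channels. The function names and the standalone NumPy formulation are illustrative only; the paper's MCFA module is a trained network layer, not this sketch.

```python
import numpy as np

def eca_kernel_size(channels: int, gamma: int = 2, b: int = 1) -> int:
    """Adaptive 1D kernel size: nearest odd integer to |(log2(C) + b) / gamma|."""
    t = int(abs((np.log2(channels) + b) / gamma))
    return t if t % 2 else t + 1

def eca(x: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Efficient channel attention on a feature map x of shape (B, C, H, W).

    weights: a (k,) 1D convolution kernel shared across all channels.
    Returns x rescaled per channel by a sigmoid gate.
    """
    _, c, _, _ = x.shape
    k = weights.shape[0]
    pad = k // 2
    # Global average pooling over spatial dims -> (B, C) channel descriptor
    y = x.mean(axis=(2, 3))
    # Shared 1D convolution across the channel dimension (zero-padded)
    yp = np.pad(y, ((0, 0), (pad, pad)))
    conv = np.stack(
        [(yp[:, i:i + k] * weights).sum(axis=1) for i in range(c)], axis=1
    )
    # Sigmoid gate, broadcast back over the spatial dimensions
    gate = 1.0 / (1.0 + np.exp(-conv))
    return x * gate[:, :, None, None]
```

With all-zero weights the gate is exactly 0.5 everywhere, which makes the rescaling easy to verify; in a real layer the kernel weights are learned. The appeal of ECA over, e.g., SE blocks is that the gate costs only k parameters regardless of channel count.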
2. Proposed Method
2.1. GhostResNet Block
2.1.1. GhostResNet (GRN)
2.1.2. Dimensionality Reduction Hyperparameter
2.2. Addition of the S2 Small Object Detection Layer
2.3. Multi-Layer Cross-Feature Fusion with Efficient Channel Attention
2.4. KAN-Enhanced Detection Head
3. Experimental Results
3.1. Dataset and Experimental Settings
3.2. Evaluation Metrics
3.3. Selecting the Dimensionality Reduction Hyperparameter
3.4. Ablation Experiments
3.5. Comparison with State-of-the-Art Deep Learning Models
4. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Chen, H.; Huang, H. China litchi industry: Development, achievements and problems. In Proceedings of the I International Symposium on Litchi and Longan, Guangzhou, China, 18–24 June 2000; Volume 558, pp. 31–39.
- Wang, C.; Han, Q.; Zhang, T.; Li, C.; Sun, X. Litchi picking points localization in natural environment based on the Litchi-YOSO model and branch morphology reconstruction algorithm. Comput. Electron. Agric. 2024, 226, 109473.
- Li, B.; Lu, H.; Wei, X.; Guan, S.; Zhang, Z.; Zhou, X.; Luo, Y. An Improved Rotating Box Detection Model for Litchi Detection in Natural Dense Orchards. Agronomy 2023, 14, 95.
- Li, T.; Cong, P.; Xu, Y.; Liang, J.; Wang, K.; Zhang, X. Target detection model for litchi picking in complex scenes. Digit. Eng. 2025, 5, 100032.
- Xie, Z.; Liu, W.; Li, Y.; Du, J.; Long, T.; Xu, H.; Long, Y.; Zhao, J. Enhanced litchi fruit detection and segmentation method integrating hyperspectral reconstruction and YOLOv8. Comput. Electron. Agric. 2025, 237, 110659.
- Jiao, Z.; Huang, K.; Jia, G.; Lei, H.; Cai, Y.; Zhong, Z. An effective litchi detection method based on edge devices in a complex scene. Biosyst. Eng. 2022, 222, 15–28.
- Diaz, R.; Faus, G.; Blasco, M.; Blasco, J.; Molto, E. The application of a fast algorithm for the classification of olives by machine vision. Food Res. Int. 2000, 33, 305–309.
- Bulanon, D.M.; Kataoka, T.; Ota, Y.; Hiroma, T. AE—Automation and emerging technologies: A segmentation algorithm for the automatic recognition of Fuji apples at harvest. Biosyst. Eng. 2002, 83, 405–412.
- Guo, A.; Xiao, D.; Zou, X. Computation model on image segmentation threshold of litchi cluster based on exploratory analysis. J. Fiber Bioeng. Inform. 2014, 7, 441–452.
- Xiong, J.; He, Z.; Lin, R.; Liu, Z.; Bu, R.; Yang, Z.; Peng, H.; Zou, X. Visual positioning technology of picking robots for dynamic litchi clusters with disturbance. Comput. Electron. Agric. 2018, 151, 226–237.
- Xiong, J.; Lin, R.; Liu, Z.; He, Z.; Tang, L.; Yang, Z.; Zou, X. The recognition of litchi clusters and the calculation of picking point in a nocturnal natural environment. Biosyst. Eng. 2018, 166, 44–57.
- Wang, C.; Tang, Y.; Zou, X.; Luo, L.; Chen, X. Recognition and matching of clustered mature litchi fruits using binocular charge-coupled device (CCD) color cameras. Sensors 2017, 17, 2564.
- Xiao, F.; Wang, H.; Xu, Y.; Zhang, R. Fruit detection and recognition based on deep learning for automatic harvesting: An overview and review. Agronomy 2023, 13, 1625.
- Zheng, Z.; Xiong, J.; Lin, H.; Han, Y.; Sun, B.; Xie, Z.; Yang, Z.; Wang, C. A method of green citrus detection in natural environments using a deep convolutional neural network. Front. Plant Sci. 2021, 12, 705737.
- Yang, X.; Zhao, W.; Wang, Y.; Yan, W.Q.; Li, Y. Lightweight and efficient deep learning models for fruit detection in orchards. Sci. Rep. 2024, 14, 26086.
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587.
- Girshick, R. Fast R-CNN. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1440–1448.
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. In Proceedings of the Advances in Neural Information Processing Systems, Annual Conference on Neural Information Processing Systems 2015, Montreal, QC, Canada, 7–12 December 2015; Volume 28.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1904–1916.
- Gao, F.; Fu, L.; Zhang, X.; Majeed, Y.; Li, R.; Karkee, M.; Zhang, Q. Multi-class fruit-on-plant detection for apple in SNAP system using Faster R-CNN. Comput. Electron. Agric. 2020, 176, 105634.
- Yu, Y.; Zhang, K.; Yang, L.; Zhang, D. Fruit detection for strawberry harvesting robot in non-structural environment based on Mask-RCNN. Comput. Electron. Agric. 2019, 163, 104846.
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. SSD: Single shot multibox detector. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; pp. 21–37.
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
- Wang, L.; Zhao, Y.; Xiong, Z.; Wang, S.; Li, Y.; Lan, Y. Fast and precise detection of litchi fruits for yield estimation based on the improved YOLOv5 model. Front. Plant Sci. 2022, 13, 965425.
- Liang, C.; Liu, D.; Ge, W.; Huang, W.; Lan, Y.; Long, Y. Detection of litchi fruit maturity states based on unmanned aerial vehicle remote sensing and improved YOLOv8 model. Front. Plant Sci. 2025, 16, 1568237.
- Xie, J.; Liu, J.; Chen, S.; Gao, Q.; Chen, Y.; Wu, J.; Gao, P.; Sun, D.; Wang, W.; Shen, J. Research on inferior litchi fruit detection in orchards based on YOLOv8n-BLS. Comput. Electron. Agric. 2025, 237, 110736.
- Peng, H.; Li, Z.; Zou, X.; Wang, H.; Xiong, J. Research on litchi image detection in orchard using UAV based on improved YOLOv5. Expert Syst. Appl. 2025, 263, 125828.
- Allmendinger, A.; Saltık, A.O.; Peteinatos, G.G.; Stein, A.; Gerhards, R. Assessing the capability of YOLO- and transformer-based object detectors for real-time weed detection. Precis. Agric. 2025, 26, 52.
- Cardellicchio, A.; Renò, V.; Cellini, F.; Summerer, S.; Petrozza, A.; Milella, A. Incremental learning with domain adaption for tomato plant phenotyping. Smart Agric. Technol. 2025, 12, 101324.
- Ragu, N.; Teo, J. Object detection and classification using few-shot learning in smart agriculture: A scoping mini review. Front. Sustain. Food Syst. 2023, 6, 1039299.
- James, J.A.; Manching, H.K.; Hulse-Kemp, A.M.; Beksi, W.J. Few-shot fruit segmentation via transfer learning. In Proceedings of the 2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, 13–17 May 2024; pp. 13618–13624.
- Shrawne, S.; Sawant, M.; Shingate, M.; Tambe, S.; Patil, S.; Sambhe, V. Fine-tuning RetinaNet for few-shot fruit detection. In Proceedings of the International Conference on Advanced Network Technologies and Intelligent Computing, Varanasi, India, 19–21 December 2024; pp. 124–143.
- Liu, Q.; Lv, J.; Zhang, C. MAE-YOLOv8-based small object detection of green crisp plum in real complex orchard environments. Comput. Electron. Agric. 2024, 226, 109458.
- Li, T.; Chen, Q.; Zhang, X.; Ding, S.; Wang, X.; Mu, J. PeachYOLO: A lightweight algorithm for peach detection in complex orchard environments. IEEE Access 2024, 12, 96220–96230.
- Zhang, N.; Cao, J. Robust Real-Time Blueberry Counting in Greenhouses Using Small-Object Detection and Mamba-Driven Multi-Step Trajectory Completion. Smart Agric. Technol. 2025, 12, 101402.
- Tu, S.; Pang, J.; Liu, H.; Zhuang, N.; Chen, Y.; Zheng, C.; Wan, H.; Xue, Y. Passion fruit detection and counting based on multiple scale Faster R-CNN using RGB-D images. Precis. Agric. 2020, 21, 1072–1091.
- Lu, Y.; Sun, M.; Guan, Y.; Lian, J.; Ji, Z.; Yin, X.; Jia, W. SOD head: A network for locating small fruits from top to bottom in layers of feature maps. Comput. Electron. Agric. 2023, 212, 108133.
- Lin, C.; Jiang, W.; Zhao, W.; Zou, L.; Xue, Z. DPD-YOLO: Dense pineapple fruit target detection algorithm in complex environments based on YOLOv8 combined with attention mechanism. Front. Plant Sci. 2025, 16, 1523552.
- Liu, Z.; Wang, Y.; Vaidya, S.; Ruehle, F.; Halverson, J.; Soljačić, M.; Hou, T.Y.; Tegmark, M. KAN: Kolmogorov-Arnold networks. arXiv 2024, arXiv:2404.19756.
- Somvanshi, S.; Javed, S.A.; Islam, M.M.; Pandit, D.; Das, S. A survey on Kolmogorov-Arnold network. ACM Comput. Surv. 2024, 58, 1–35.
- Xie, X.; Zheng, W.; Xiong, S.; Wan, T. MTAD-Kanformer: Multivariate time-series anomaly detection via KAN and transformer. Appl. Intell. 2025, 55, 796.
- Wang, Z.; Zainal, A.; Siraj, M.M.; Ghaleb, F.A.; Hao, X.; Han, S. An intrusion detection model based on Convolutional Kolmogorov-Arnold Networks. Sci. Rep. 2025, 15, 1917.
- Zhang, J.; Jin, Z.; Xia, Y.; Yuan, X.; Wang, Y.; Li, N.; Yu, Y.; Li, D. SS-KAN: Self-Supervised Kolmogorov-Arnold Networks for Limited Data Remote Sensing Semantic Segmentation. Neural Netw. 2025, 192, 107881.
- Zhan, S.; Su, J.; Liu, P.; Fu, Y.; Zhu, J. Object re-identification using Kolmogorov-Arnold attention networks. Math. Found. Comput. 2025, early access.
- Zhang, Q.; Xu, X.; Wang, Z.; Wen, Y. Defect detection of PCB-AoI dataset based on improved YOLOv10 algorithm. In Proceedings of the 4th International Conference on Computer, Artificial Intelligence and Control Engineering, Hefei, China, 10–12 January 2025; pp. 60–66.
- Zhao, Y.; Lv, W.; Xu, S.; Wei, J.; Wang, G.; Dang, Q.; Liu, Y.; Chen, J. DETRs beat YOLOs on real-time object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 16965–16974.
- Han, K.; Wang, Y.; Tian, Q.; Guo, J.; Xu, C.; Xu, C. GhostNet: More features from cheap operations. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 1580–1589.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861.
- Chai, S.; Wen, M.; Li, P.; Zeng, Z.; Tian, Y. DCFA-YOLO: A Dual-Channel Cross-Feature-Fusion Attention YOLO Network for Cherry Tomato Bunch Detection. Agriculture 2025, 15, 271.
- Yan, S.; Hou, W.; Rao, Y.; Jiang, D.; Jin, X.; Wang, T.; Wang, Y.; Liu, L.; Zhang, T.; Genis, A. Multi-scale cross-modal feature fusion and cost-sensitive loss function for differential detection of occluded bagging pears in practical orchards. Artif. Intell. Agric. 2025, 15, 573–589.
- Xu, D.; Wang, C.; Li, M.; Ge, X.; Zhang, J.; Wang, W.; Lv, C. Improving passion fruit yield estimation with multi-scale feature fusion and density-attention mechanisms in smart agriculture. Comput. Electron. Agric. 2025, 239, 110958.
- Wang, Q.; Wu, B.; Zhu, P.; Li, P.; Zuo, W.; Hu, Q. ECA-Net: Efficient channel attention for deep convolutional neural networks. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 11534–11542.
- Yang, X.; Wang, X. Kolmogorov-Arnold transformer. arXiv 2024, arXiv:2409.10594.
- Yi, W.; Zhang, Z.; Chai, S.; Liu, Y.; Xie, Z.; Huang, W.; Li, P.; Luo, Z.; Lu, D.; Tian, Y. A Benchmark Dataset for Lychee Detection and Maturity Classification for Robotic Harvesting. Available online: https://github.com/SeiriosLab/Lychee (accessed on 20 August 2025).
- Shorten, C.; Khoshgoftaar, T.M. A survey on image data augmentation for deep learning. J. Big Data 2019, 6, 1–48.
- Yang, Z.; Sinnott, R.O.; Bailey, J.; Ke, Q. A survey of automated data augmentation algorithms for deep learning-based image classification tasks. Knowl. Inf. Syst. 2023, 65, 2805–2861.
- Akbiyik, M.E. Data augmentation in training CNNs: Injecting noise to images. arXiv 2023, arXiv:2307.06855.
- Ma, J.; Hu, C.; Zhou, P.; Jin, F.; Wang, X.; Huang, H. Review of image augmentation used in deep learning-based material microscopic image segmentation. Appl. Sci. 2023, 13, 6478.
- Jocher, G.; Jing, Q.; Ayush, C. Ultralytics YOLO. Available online: https://github.com/ultralytics/ultralytics (accessed on 20 July 2025).
- Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 618–626.
- Jocher, G. YOLOv5 by Ultralytics, Version 7.0. Computer software, 2020. Available online: https://ultralytics.com (accessed on 29 July 2025).
- Sohan, M.; Sai Ram, T.; Rami Reddy, C.V. A review on YOLOv8 and its advancements. In Proceedings of the International Conference on Data Intelligence and Cognitive Informatics, Tirunelveli, India, 27–28 June 2023; pp. 529–545.
- Tian, Y.; Ye, Q.; Doermann, D. YOLOv12: Attention-centric real-time object detectors. arXiv 2025, arXiv:2502.12524.
- Duan, K.; Bai, S.; Xie, L.; Qi, H.; Huang, Q.; Tian, Q. CenterNet: Keypoint triplets for object detection. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 6569–6578.
- Tan, M.; Le, Q. EfficientNet: Rethinking model scaling for convolutional neural networks. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 6105–6114.
- Cai, Y.; Cui, B.; Deng, H.; Zeng, Z.; Wang, Q.; Lu, D.; Cui, Y.; Tian, Y. Cherry Tomato Detection for Harvesting Using Multimodal Perception and an Improved YOLOv7-Tiny Neural Network. Agronomy 2024, 14, 2320.
- Li, P.; Wen, M.; Zeng, Z.; Tian, Y. Cherry Tomato Bunch and Picking Point Detection for Robotic Harvesting Using an RGB-D Sensor and a StarBL-YOLO Network. Horticulturae 2025, 11, 949.
- Wu, X.; Tian, Y.; Zeng, Z. Leff-YOLO: A lightweight cherry tomato detection YOLOv8 network with enhanced feature fusion. In Proceedings of the International Conference on Intelligent Computing, Ningbo, China, 26–29 July 2025; pp. 474–488.
| Maturity | Metric | Ratio = 1 (Baseline) | Ratio = 0.125 | Ratio = 0.25 | Ratio = 0.5 |
|---|---|---|---|---|---|
| All | Precision | 0.760 | 0.812 | 0.817 | 0.801 |
| | Recall | 0.816 | 0.791 | 0.794 | 0.786 |
| | mAP50 | 0.823 | 0.805 | 0.821 | 0.801 |
| | mAP50–95 | 0.512 | 0.517 | 0.528 | 0.508 |
| | GFLOPs | 125.6 | 71.9 | 79.4 | 94.3 |
| | Params (M) | 41.941 | 22.489 | 24.873 | 29.641 |
| Unripe | Precision | 0.749 | 0.807 | 0.803 | 0.778 |
| | Recall | 0.968 | 0.966 | 0.957 | 0.969 |
| | mAP50 | 0.925 | 0.906 | 0.923 | 0.933 |
| | mAP50–95 | 0.598 | 0.607 | 0.618 | 0.618 |
| Semi_ripe | Precision | 0.809 | 0.838 | 0.847 | 0.839 |
| | Recall | 0.704 | 0.719 | 0.714 | 0.720 |
| | mAP50 | 0.762 | 0.768 | 0.771 | 0.752 |
| | mAP50–95 | 0.461 | 0.743 | 0.479 | 0.470 |
| Ripe | Precision | 0.723 | 0.791 | 0.800 | 0.786 |
| | Recall | 0.777 | 0.690 | 0.711 | 0.671 |
| | mAP50 | 0.782 | 0.743 | 0.768 | 0.720 |
| | mAP50–95 | 0.478 | 0.466 | 0.488 | 0.436 |
| Model | Precision | Recall | mAP50 | mAP50–95 | GFLOPs | Params (M) |
|---|---|---|---|---|---|---|
| Baseline | 0.760 | 0.816 | 0.823 | 0.512 | 125.6 | 41.940 |
| +S2 | 0.808 | 0.823 | 0.832 | 0.526 | 133.3 | 42.220 |
| +MCFA | 0.845 | 0.850 | 0.893 | 0.562 | 126.7 | 42.137 |
| +GRN0.25 | 0.827 | 0.864 | 0.881 | 0.548 | 79.4 | 24.873 |
| +KAN (Full) | 0.863 | 0.885 | 0.947 | 0.584 | 114.5 | 37.228 |
| Model | Precision | Recall | mAP50 | mAP50–95 | GFLOPs | Params (M) |
|---|---|---|---|---|---|---|
| YOLOv5n | 0.819 | 0.807 | 0.875 | 0.553 | 7.1 | 2.503 |
| YOLOv8n | 0.852 | 0.809 | 0.882 | 0.548 | 8.1 | 3.006 |
| YOLOv12n | 0.860 | 0.810 | 0.891 | 0.557 | 6.3 | 2.557 |
| Fast R-CNN | 0.724 | 0.792 | 0.803 | 0.506 | 161.074 | 41.358 |
| CenterNet | 0.810 | 0.846 | 0.861 | 0.538 | 150.215 | 32.116 |
| EfficientNet | 0.832 | 0.807 | 0.844 | 0.521 | 91.932 | 18.389 |
| DETR-Res50 | 0.760 | 0.816 | 0.823 | 0.512 | 125.6 | 41.940 |
| GRN-KANformer | 0.863 | 0.885 | 0.947 | 0.584 | 114.5 | 37.228 |
| Model | Precision | Recall | mAP50 | mAP50–95 | GFLOPs | Params (M) |
|---|---|---|---|---|---|---|
| YOLOv8s | 0.825 ± 0.022 | 0.814 ± 0.035 | 0.881 ± 0.003 | 0.556 ± 0.003 | 28.4 | 11.126 |
| YOLOv8m | 0.857 ± 0.018 | 0.801 ± 0.017 | 0.887 ± 0.004 | 0.559 ± 0.003 | 78.7 | 25.841 |
| YOLOv12s | 0.821 ± 0.004 | 0.846 ± 0.012 | 0.882 ± 0.003 | 0.558 ± 0.003 | 21.2 | 9.232 |
| YOLOv12m | 0.817 ± 0.032 | 0.850 ± 0.006 | 0.890 ± 0.004 | 0.557 ± 0.003 | 67.1 | 20.107 |
| YOLOv12l | 0.842 ± 0.010 | 0.818 ± 0.025 | 0.880 ± 0.003 | 0.548 ± 0.004 | 88.6 | 26.341 |
| GRN-KANformer | 0.867 ± 0.015 | 0.883 ± 0.002 | 0.925 ± 0.002 | 0.585 ± 0.003 | 114.5 | 37.228 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
MDPI and ACS Style: Zhang, Z.; Wang, Y.; Chai, S.; Tian, Y. Detection and Maturity Classification of Dense Small Lychees Using an Improved Kolmogorov–Arnold Network–Transformer. Plants 2025, 14, 3378. https://doi.org/10.3390/plants14213378
AMA Style: Zhang Z, Wang Y, Chai S, Tian Y. Detection and Maturity Classification of Dense Small Lychees Using an Improved Kolmogorov–Arnold Network–Transformer. Plants. 2025; 14(21):3378. https://doi.org/10.3390/plants14213378
Chicago/Turabian Style: Zhang, Zhenpeng, Yi Wang, Shanglei Chai, and Yibin Tian. 2025. "Detection and Maturity Classification of Dense Small Lychees Using an Improved Kolmogorov–Arnold Network–Transformer" Plants 14, no. 21: 3378. https://doi.org/10.3390/plants14213378
APA Style: Zhang, Z., Wang, Y., Chai, S., & Tian, Y. (2025). Detection and Maturity Classification of Dense Small Lychees Using an Improved Kolmogorov–Arnold Network–Transformer. Plants, 14(21), 3378. https://doi.org/10.3390/plants14213378