Training Strategy Optimization of a Tea Canopy Dataset for Variety Identification During the Harvest Period
Abstract
1. Introduction
2. Materials and Methods
2.1. Image Acquisition and Dataset Construction
2.2. Convolutional Neural Network
2.3. Transfer Learning
2.4. Training Process Optimization
2.5. Experimental Platform Configuration
2.6. Performance Evaluation Metrics
3. Results and Analysis
3.1. Performance of Different CNN-Based Models
3.2. Optimization of the Training Process
3.2.1. Optimizers
3.2.2. Input Image Size and Dataset Division Ratio
3.2.3. Training Parameters
3.3. Identification Performance Under Different Environmental Conditions
4. Discussion
4.1. Comparison with Methods Proposed in Other Studies
4.2. Comparison with Other Convolutional Neural Network Models
4.3. Limitations and Future Work
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Zhou, H.; Fu, H.; Wu, X.; Wu, B.; Dai, C. Discrimination of tea varieties based on FTIR spectroscopy and an adaptive improved possibilistic c-means clustering. J. Food Process. Preserv. 2020, 44, e14795.
- Wu, X.; He, F.; Wu, B.; Zeng, S.; He, C. Accurate classification of Chunmee tea grade using NIR spectroscopy and fuzzy maximum uncertainty linear discriminant analysis. Foods 2023, 12, 541.
- Pan, S.; Nie, Q.; Tai, H.; Song, X.; Tong, Y.; Zhang, L.; Wu, X.; Lin, Z.; Zhang, Y.; Ye, D.; et al. Tea and tea drinking: China’s outstanding contributions to the mankind. Chin. Med. 2022, 17, 27.
- Food and Agriculture Organization of the United Nations. Available online: https://www.fao.org (accessed on 15 July 2025).
- Zhang, Z.; Lu, Y.; Zhao, Y.; Pan, Q.; Jin, K.; Xu, G.; Hu, Y. TS-YOLO: An all-day and lightweight tea canopy shoots detection model. Agronomy 2023, 13, 1411.
- Luo, Y.; Wei, L.; Xu, L.; Zhang, Q.; Liu, J.; Cai, Q.; Zhang, W. Stereo-vision-based multi-crop harvesting edge detection for precise automatic steering of combine harvester. Biosyst. Eng. 2022, 215, 115–128.
- Andronie, M.; Lăzăroiu, G.; Karabolevski, O.; Ștefănescu, R.; Hurloiu, I.; Dijmărescu, A.; Dijmărescu, I. Remote big data management tools, sensing and computing technologies, and visual perception and environment mapping algorithms in the internet of robotic things. Electronics 2022, 12, 22.
- Zhang, Z.; Lu, Y.; Yang, M.; Wang, G.; Zhao, Y.; Hu, Y. Optimal training strategy for high-performance detection model of multi-cultivar tea shoots based on deep learning methods. Sci. Hortic. 2024, 328, 112949.
- Li, Y.; Yu, S.; Yang, S.; Ni, D.; Jiang, X.; Zhang, D.; Zhou, J.; Li, C.; Yu, Z. Study on taste quality formation and leaf conducting tissue changes in six types of tea during their manufacturing processes. Food Chem. X 2023, 18, 100731.
- Wong, M.; Sirisena, S.; Ng, K. Phytochemical profile of differently processed tea: A review. J. Food Sci. 2022, 87, 1925–1942.
- Ge, X.; Sun, J.; Lu, B.; Chen, Q.; Xun, W.; Jin, Y. Classification of oolong tea varieties based on hyperspectral imaging technology and BOSS-LightGBM model. J. Food Process Eng. 2019, 42, e13289.
- Li, X.; Wu, J.; Bai, T.; Wu, C.; He, Y.; Huang, J.; Li, X.; Shi, Z.; Hou, K. Variety classification and identification of jujube based on near-infrared spectroscopy and 1D-CNN. Comput. Electron. Agric. 2024, 223, 109122.
- Cao, Q.; Yang, G.; Wang, F.; Chen, L.; Xu, B.; Zhao, C.; Duan, D.; Jiang, P.; Xu, Z.; Yang, H. Discrimination of tea plant variety using in-situ multispectral imaging system and multi-feature analysis. Comput. Electron. Agric. 2022, 202, 107360.
- Saletnik, A.; Saletnik, B.; Puchalski, C. Raman method in identification of species and varieties, assessment of plant maturity and crop quality—A review. Molecules 2022, 27, 4454.
- Wang, J.; Gao, Z.; Zhang, Y.; Zhou, J.; Wu, J.; Li, P. Real-time detection and location of potted flowers based on a ZED camera and a YOLO V4-Tiny deep learning algorithm. Horticulturae 2021, 8, 21.
- Wang, W.; Xi, Y.; Gu, J.; Yang, Q.; Pan, Z.; Zhang, X.; Xu, G.; Zhou, M. YOLOV8-TEA: Recognition method of tender shoots of tea based on instance segmentation algorithm. Agronomy 2025, 15, 1318.
- Chen, X.; Xun, Y.; Li, W.; Zhang, J. Combining discriminant analysis and neural networks for corn variety identification. Comput. Electron. Agric. 2009, 71, S48–S53.
- Osako, Y.; Yamane, H.; Lin, S.; Chen, P.; Tao, R. Cultivar discrimination of litchi fruit images using deep learning. Sci. Hortic. 2020, 269, 109360.
- Wang, B.; Li, H.; You, J.; Chen, X.; Yuan, X.; Feng, X. Fusing deep learning features of triplet leaf image patterns to boost soybean cultivar identification. Comput. Electron. Agric. 2022, 197, 106914.
- Larese, M.; Granitto, P. Finding local leaf vein patterns for legume characterization and classification. Mach. Vis. Appl. 2015, 27, 709–720.
- Baldi, A.; Pandolfi, C.; Mancuso, S.; Lenzi, A. A leaf-based back propagation neural network for oleander (Nerium oleander L.) cultivar identification. Comput. Electron. Agric. 2017, 142, 515–520.
- Altuntas, Y.; Kocamaz, A.; Yeroglu, C. Identification of apricot varieties using leaf characteristics and KNN classifier. In Proceedings of the 2019 International Artificial Intelligence and Data Processing Symposium (IDAP), Malatya, Turkey, 21–22 September 2019; pp. 1–6.
- Hong, P.; Hai, T.; Lan, L.; Hoang, V.; Hai, V.; Nguyen, T. Comparative study on vision based rice seed varieties identification. In Proceedings of the 2015 Seventh International Conference on Knowledge and Systems Engineering (KSE), Ho Chi Minh City, Vietnam, 8–10 October 2015; pp. 377–382.
- Khosravi, H.; Saedi, S.I.; Rezaei, M. Real-time recognition of on-branch olive ripening stages by a deep convolutional neural network. Sci. Hortic. 2021, 287, 110252.
- Zhang, Z.; Lu, Y.; Peng, Y.; Yang, M.; Hu, Y. A lightweight and high-performance YOLOv5-based model for tea shoot detection in field conditions. Agronomy 2025, 15, 1122.
- Wu, M.; Liu, S.; Li, Z.; Ou, M.; Dai, S.; Dong, X.; Wang, X.; Jiang, L.; Jia, W. A review of intelligent orchard sprayer technologies: Perception, control, and system integration. Horticulturae 2025, 11, 668.
- Wang, Y.; Zhang, Z.; Jia, W.; Ou, M.; Dong, X.; Dai, S. A review of environmental sensing technologies for targeted spraying in orchards. Horticulturae 2025, 11, 551.
- Jiang, L.; Xu, B.; Husnain, N.; Wang, Q. Overview of agricultural machinery automation technology for sustainable agriculture. Agronomy 2025, 15, 1471.
- Zhou, X.; Chen, W.; Wei, X. Improved field obstacle detection algorithm based on YOLOv8. Agriculture 2024, 14, 2263.
- Ji, W.; Zhang, T.; Xu, B.; He, G. Apple recognition and picking sequence planning for harvesting robot in a complex environment. J. Agric. Eng. 2023, 55, 1549.
- Xu, Z.; Liu, J.; Wang, J.; Cai, L.; Jin, Y.; Zhao, S.; Xie, B. Realtime picking point decision algorithm of trellis grape for high-speed robotic cut-and-catch harvesting. Agronomy 2023, 13, 1618.
- Pan, B.; Liu, C.; Su, B.; Ju, Y.; Fan, X.; Zhang, Y.; Sun, L.; Fang, Y.; Jiang, J. Research on species identification of wild grape leaves based on deep learning. Sci. Hortic. 2024, 327, 112821.
- Zhu, X.; Chen, F.; Zheng, Y.; Li, Z.; Zhang, X. Identification of olive cultivars using bilinear networks and attention mechanisms. Trans. Chin. Soc. Agric. Eng. 2023, 39, 183–192.
- Zhang, R.; Yuan, Y.; Meng, X.; Liu, T.; Zhang, A.; Lei, H. A multitask model based on MobileNetV3 for fine-grained classification of jujube varieties. J. Food Meas. Charact. 2023, 17, 4305–4317.
- De Nart, D.; Gardiman, M.; Alba, V.; Tarricone, L.; Storchi, P.; Roccotelli, S.; Ammoniaci, M.; Tosi, V.; Perria, R.; Carraro, R. Vine variety identification through leaf image classification: A large-scale study on the robustness of five deep learning models. J. Agric. Sci. 2024, 162, 19–32.
- Quan, W.; Shi, Q.; Fan, Y.; Wang, Q.; Su, B. Few-shot learning for identifying wine grape varieties with limited data. Trans. Chin. Soc. Agric. Eng. 2025, 41, 211–219.
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
- Attri, I.; Awasthi, L.; Sharma, T.; Rathee, P. A review of deep learning techniques used in agriculture. Ecol. Inform. 2023, 77, 102217.
- You, J.; Li, D.; Wang, Z.; Chen, Q.; Ouyang, Q. Prediction and visualization of moisture content in Tencha drying processes by computer vision and deep learning. J. Sci. Food Agric. 2024, 104, 5486–5494.
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Ma, N.; Zhang, X.; Zheng, H.; Sun, J. ShuffleNet V2: Practical guidelines for efficient CNN architecture design. In Computer Vision–ECCV 2018; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2018; pp. 122–138.
- Zhao, S.; Peng, Y.; Liu, J.; Wu, S. Tomato leaf disease diagnosis based on improved convolution neural network by attention module. Agriculture 2021, 11, 651.
- Simhadri, C.; Kondaveeti, H. Automatic recognition of rice leaf diseases using transfer learning. Agronomy 2023, 13, 961.
- Deng, J.; Dong, W.; Socher, R.; Li, L.; Li, K.; Li, F. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009.
- Wen, X.; Zhou, M. Evolution and role of optimizers in training deep learning models. IEEE/CAA J. Autom. Sin. 2024, 11, 2039–2042.
- Ramos, L.; Casas, E.; Bendek, E.; Romero, C.; Rivas-Echeverría, F. Hyperparameter optimization of YOLOv8 for smoke and wildfire detection: Implications for agricultural and environmental safety. Artif. Intell. Agric. 2024, 12, 109–126.
- Lee, Y.; Patil, M.; Kim, J.; Seo, Y.; Ahn, D.; Kim, G. Hyperparameter optimization of apple leaf dataset for the disease recognition based on the YOLOv8. J. Agric. Food Res. 2025, 21, 101840.
- Selvaraju, R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017.
- Pawluszek-Filipiak, K.; Borkowski, A. On the importance of train–test split ratio of datasets in automatic landslide detection by supervised classification. Remote Sens. 2020, 12, 3054.
- Cao, Q.; Xu, Z.; Xu, B.; Yang, H.; Wang, F.; Chen, L.; Jiang, X.; Zhao, C.; Jiang, P.; Wu, Q.; et al. Leaf phenotypic difference analysis and variety recognition of tea cultivars based on multispectral imaging technology. Ind. Crops Prod. 2024, 220, 119230.
- Cao, Q.; Zhao, C.; Bai, B.; Cai, J.; Chen, L.; Wang, F.; Xu, B.; Duan, D.; Jiang, P.; Meng, X.; et al. Oolong tea cultivars categorization and germination period classification based on multispectral information. Front. Plant Sci. 2023, 14, 1251418.
- Sun, L.; Shen, J.; Mao, Y.; Li, X.; Fan, K.; Qian, W.; Wang, Y.; Bi, C.; Wang, H.; Xu, Y.; et al. Discrimination of tea varieties and bud sprouting phenology using UAV-based RGB and multispectral images. Int. J. Remote Sens. 2025, 46, 6214–6234.
- Liu, Z.; Zhou, T.; Fu, D.; Peng, H. Extraction of fresh tea leaf image features based on color and shape with application in tea plant variety identification. Jiangsu Agric. Sci. 2021, 49, 168–172.
- Sun, D.; Ding, Z.; Liu, J.; Liu, H.; Xie, J.; Wang, W. Classification method of multi-variety tea leaves based on improved SqueezeNet model. Trans. Chin. Soc. Agric. Mach. 2023, 54, 223–230.
- Zhang, Z.; Yang, M.; Pan, Q.; Jin, X.; Wang, G.; Zhao, Y.; Hu, Y. Identification of tea plant cultivars based on canopy images using deep learning methods. Sci. Hortic. 2024, 339, 113908.
- Ding, Y.; Huang, H.; Cui, H.; Wang, X.; Zhao, Y. A non-destructive method for identification of tea plant cultivars based on deep learning. Forests 2023, 14, 728.
- Wu, T.; Zhou, L.; Zhao, Y.; Qi, H.; Pu, Y.; Zhang, C.; Liu, Y. Applications of deep learning in tea quality monitoring: A review. Artif. Intell. Rev. 2025, 58, 342.
Configurations | Values |
---|---|
Optimizer | Adadelta, Adagrad, Adam, Adamax, AdamW, ASGD, NAdam, RAdam, SGD |
Image size | 224 × 224, 416 × 416, 608 × 608 |
Split ratio | 70:30, 80:20, 90:10 |
Learning rate | 0.001, 0.0001, 0.00001 |
Batch size | 4, 8, 16 |
Epochs | 20, 50, 80 |
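The sketch below (Python, purely illustrative and not the authors' exact protocol; all variable names are placeholders) enumerates the configuration levels listed in the table above. In practice the factors are screened group by group rather than as a full factorial sweep, since the complete grid would contain 2187 runs.

```python
# Illustrative enumeration of the training configurations listed above.
from itertools import product

optimizers   = ["Adadelta", "Adagrad", "Adam", "Adamax", "AdamW",
                "ASGD", "NAdam", "RAdam", "SGD"]
image_sizes  = [224, 416, 608]
split_ratios = ["70:30", "80:20", "90:10"]
learn_rates  = [1e-3, 1e-4, 1e-5]
batch_sizes  = [4, 8, 16]
epochs       = [20, 50, 80]

# Full factorial grid: 9 x 3 x 3 x 3 x 3 x 3 = 2187 combinations,
# which is why the factors are usually screened one group at a time.
grid = list(product(optimizers, image_sizes, split_ratios,
                    learn_rates, batch_sizes, epochs))
print(len(grid))  # 2187
```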
Models | Train Accuracy (%) | Val Accuracy (%) | Test Accuracy (%) | Weight Size (MB) | FLOPs (G) | Parameters (M) | FPS | Training Time (min) |
---|---|---|---|---|---|---|---|---|
VGG11 | 100.00 | 94.60 | 94.85 | 491.42 | 26.04 | 128.82 | 96.85 | 74.64 |
VGG13 | 99.95 | 95.93 | 96.52 | 492.13 | 38.86 | 129.00 | 81.31 | 97.87 |
VGG16 | 99.74 | 94.13 | 95.08 | 512.41 | 53.24 | 134.31 | 71.18 | 111.26 |
VGG19 | 99.93 | 93.84 | 94.24 | 532.69 | 67.61 | 139.63 | 48.05 | 125.30 |
GoogleNet | 100.00 | 97.25 | 96.14 | 38.19 | 5.21 | 5.61 | 53.62 | 87.43 |
ResNet18 | 99.98 | 96.88 | 96.59 | 42.73 | 6.29 | 11.18 | 82.50 | 48.45 |
ResNet34 | 99.98 | 96.40 | 95.98 | 81.35 | 12.69 | 21.29 | 65.19 | 65.68 |
ResNet50 | 100.00 | 96.78 | 97.12 | 90.07 | 14.25 | 23.53 | 57.87 | 79.42 |
ResNet101 | 100.00 | 96.21 | 95.76 | 162.81 | 27.12 | 42.52 | 35.65 | 130.06 |
ResNet152 | 99.83 | 96.50 | 95.98 | 222.76 | 40.01 | 58.17 | 23.87 | 179.23 |
ShuffleNetV2-0.5× | 99.95 | 92.61 | 92.50 | 1.49 | 0.15 | 0.35 | 61.54 | 77.20 |
ShuffleNetV2-1.0× | 99.98 | 96.59 | 95.76 | 5.00 | 0.52 | 1.26 | 61.73 | 77.38 |
ShuffleNetV2-1.5× | 100.00 | 96.40 | 95.61 | 9.70 | 1.05 | 2.49 | 62.00 | 76.80 |
ShuffleNetV2-2.0× | 100.00 | 97.35 | 97.05 | 20.72 | 2.06 | 5.37 | 61.51 | 75.10 |
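All of the backbones compared above are available pretrained on ImageNet through torchvision, so the transfer-learning setup outlined in Section 2.3 can be sketched as follows. This is a minimal sketch assuming a standard torchvision (>= 0.13) workflow, with NUM_CLASSES as a placeholder for the number of tea varieties; it is not the authors' released code. ResNet50 is shown because it posted the highest test accuracy (97.12%) in the table; swapping in another backbone only changes the constructor and the name of the final layer.

```python
# Minimal transfer-learning sketch (assumed torchvision >= 0.13 API).
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # placeholder: set to the actual number of tea varieties

# Load an ImageNet-pretrained ResNet50 and replace its classification head.
model = models.resnet50(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Whether to freeze the backbone or fine-tune all layers is a design choice
# not specified by this table; freezing is shown here only as an example.
for name, param in model.named_parameters():
    if not name.startswith("fc"):
        param.requires_grad = False
```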
Optimizer | Accuracy (%) | Precision (%) | Specificity (%) | F1-Score (%) | Training Time (min) |
---|---|---|---|---|---|
Adadelta | 85.61 | 86.49 | 98.56 | 84.97 | 78.31 |
Adagrad | 97.27 | 97.31 | 99.73 | 97.27 | 66.45 |
Adam | 97.27 | 97.33 | 99.73 | 97.27 | 75.24 |
Adamax | 98.03 | 98.06 | 99.80 | 98.03 | 79.49 |
AdamW | 97.12 | 97.15 | 99.71 | 97.12 | 79.42 |
ASGD | 96.14 | 96.20 | 99.61 | 96.14 | 67.07 |
NAdam | 97.42 | 97.52 | 99.74 | 97.43 | 80.24 |
RAdam | 97.05 | 97.08 | 99.70 | 97.05 | 86.57 |
SGD | 95.45 | 95.50 | 99.54 | 95.46 | 61.89 |
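All nine optimizers compared above ship with torch.optim, so switching between them amounts to changing a single constructor call. The factory below is an illustrative sketch (names and the default learning rate are examples, not the authors' configuration); with it, the comparison reduces to looping over the dictionary keys and retraining once per entry.

```python
# Illustrative optimizer factory for the nine candidates compared above.
import torch
from torch import optim

OPTIMIZERS = {
    "Adadelta": optim.Adadelta, "Adagrad": optim.Adagrad,
    "Adam": optim.Adam, "Adamax": optim.Adamax, "AdamW": optim.AdamW,
    "ASGD": optim.ASGD, "NAdam": optim.NAdam, "RAdam": optim.RAdam,
    "SGD": optim.SGD,
}

def build_optimizer(name: str, model: torch.nn.Module, lr: float = 1e-4):
    """Return a torch.optim optimizer chosen by name."""
    return OPTIMIZERS[name](model.parameters(), lr=lr)
```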
Image Size | Split Ratio | Accuracy (%) | Precision (%) | Specificity (%) | F1-Score (%) | FLOPs (G) | FPS |
---|---|---|---|---|---|---|---|
224 × 224 | 70:30 | 92.95 | 92.99 | 99.30 | 92.91 | 4.13 | 76.81 |
224 × 224 | 80:20 | 93.86 | 93.87 | 99.39 | 93.83 | 4.13 | 76.81 |
224 × 224 | 90:10 | 94.62 | 94.75 | 99.46 | 94.63 | 4.13 | 76.81 |
416 × 416 | 70:30 | 98.26 | 98.28 | 99.83 | 98.26 | 14.25 | 57.87 |
416 × 416 | 80:20 | 98.03 | 98.06 | 99.80 | 98.03 | 14.25 | 57.87 |
416 × 416 | 90:10 | 98.48 | 98.52 | 99.85 | 98.49 | 14.25 | 57.87 |
608 × 608 | 70:30 | 98.56 | 98.59 | 99.86 | 98.56 | 30.44 | 37.90 |
608 × 608 | 80:20 | 99.02 | 99.03 | 99.90 | 99.02 | 30.44 | 37.90 |
608 × 608 | 90:10 | 99.02 | 99.02 | 99.90 | 99.01 | 30.44 | 37.90 |
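The two factors varied in this table, input resolution and dataset split ratio, map directly onto a Resize transform and a random split of the image folder. The sketch below assumes a torchvision ImageFolder layout with a placeholder path and illustrates the idea rather than the authors' exact pipeline.

```python
# Illustrative resize + split setup for the factors screened above.
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

IMAGE_SIZE = 608        # one of 224, 416, 608
TRAIN_FRACTION = 0.8    # corresponds to a 70:30, 80:20 or 90:10 split

tfm = transforms.Compose([
    transforms.Resize((IMAGE_SIZE, IMAGE_SIZE)),
    transforms.ToTensor(),
])
full_set = datasets.ImageFolder("tea_canopy_dataset/", transform=tfm)  # placeholder path

n_train = int(TRAIN_FRACTION * len(full_set))
train_set, test_set = random_split(
    full_set, [n_train, len(full_set) - n_train],
    generator=torch.Generator().manual_seed(0))  # fixed seed for repeatability
```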
Learning Rate | Batch Size | Epochs | Accuracy (%) | Precision (%) | Specificity (%) | F1-Score (%) | Training Time (min) |
---|---|---|---|---|---|---|---|
0.001 | 4 | 20 | 96.89 | 96.96 | 99.69 | 96.89 | 49.70 |
0.001 | 4 | 50 | 97.88 | 97.90 | 99.79 | 97.88 | 124.18 |
0.001 | 4 | 80 | 97.12 | 97.17 | 99.71 | 97.11 | 198.90 |
0.001 | 8 | 20 | 97.27 | 97.32 | 99.73 | 97.28 | 43.11 |
0.001 | 8 | 50 | 97.35 | 97.42 | 99.74 | 97.36 | 107.83 |
0.001 | 8 | 80 | 97.65 | 97.69 | 99.77 | 97.65 | 173.33 |
0.001 | 16 | 20 | 98.41 | 98.42 | 99.84 | 98.41 | 43.20 |
0.001 | 16 | 50 | 98.48 | 98.50 | 99.85 | 98.49 | 107.98 |
0.001 | 16 | 80 | 98.64 | 98.65 | 99.86 | 98.64 | 172.13 |
0.0001 | 4 | 20 | 98.64 | 98.69 | 99.86 | 98.64 | 52.31 |
0.0001 | 4 | 50 | 99.02 | 99.03 | 99.90 | 99.02 | 126.44 |
0.0001 | 4 | 80 | 99.09 | 99.10 | 99.91 | 99.09 | 201.34 |
0.0001 | 8 | 20 | 99.32 | 99.33 | 99.93 | 99.32 | 43.76 |
0.0001 | 8 | 50 | 99.17 | 99.17 | 99.92 | 99.17 | 108.64 |
0.0001 | 8 | 80 | 98.86 | 98.87 | 99.89 | 98.86 | 172.93 |
0.0001 | 16 | 20 | 98.94 | 98.95 | 99.89 | 98.94 | 43.18 |
0.0001 | 16 | 50 | 98.79 | 98.80 | 99.88 | 98.79 | 107.86 |
0.0001 | 16 | 80 | 99.02 | 99.02 | 99.90 | 99.02 | 172.76 |
0.00001 | 4 | 20 | 98.33 | 98.35 | 99.83 | 98.34 | 50.02 |
0.00001 | 4 | 50 | 98.79 | 98.81 | 99.88 | 98.79 | 127.69 |
0.00001 | 4 | 80 | 98.56 | 98.60 | 99.86 | 98.56 | 208.72 |
0.00001 | 8 | 20 | 97.95 | 97.98 | 99.80 | 97.95 | 45.89 |
0.00001 | 8 | 50 | 98.64 | 98.65 | 99.87 | 98.64 | 114.24 |
0.00001 | 8 | 80 | 98.56 | 98.58 | 99.86 | 98.56 | 181.53 |
0.00001 | 16 | 20 | 97.50 | 97.52 | 99.75 | 97.50 | 42.99 |
0.00001 | 16 | 50 | 98.41 | 98.41 | 99.84 | 98.41 | 107.05 |
0.00001 | 16 | 80 | 98.41 | 98.45 | 99.84 | 98.41 | 177.28 |
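Tying the screened parameters together, a minimal training loop might look like the sketch below. It reuses the `model` and `train_set` placeholders from the earlier sketches, sets the best-scoring row of this table (learning rate 0.0001, batch size 8, 20 epochs), and uses Adamax because it led the optimizer comparison; it remains an assumption-laden illustration rather than the authors' implementation.

```python
# Minimal training-loop sketch with the screened parameters; `model` and
# `train_set` are the placeholders defined in the earlier sketches.
import torch
from torch import nn
from torch.utils.data import DataLoader

LR, BATCH_SIZE, EPOCHS = 1e-4, 8, 20   # best-scoring row in the table above

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

loader = DataLoader(train_set, batch_size=BATCH_SIZE, shuffle=True)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adamax(model.parameters(), lr=LR)

for epoch in range(EPOCHS):
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```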
Environments | Conditions | Accuracy (%) | Precision (%) | Specificity (%) | F1-Score (%) |
---|---|---|---|---|---|
Season | Spring | 99.40 | 99.40 | 99.94 | 99.39 |
Season | Summer | 99.24 | 99.25 | 99.92 | 99.24 |
Light conditions | Uniform illumination | 99.85 | 99.85 | 99.98 | 99.85 |
Light conditions | Uneven illumination | 98.79 | 98.81 | 99.88 | 98.79 |
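The four metrics reported throughout these tables are assumed to follow their standard definitions from the confusion-matrix counts of true positives (TP), false positives (FP), true negatives (TN) and false negatives (FN), averaged over the tea-variety classes:

```latex
\begin{aligned}
\text{Accuracy}    &= \frac{TP + TN}{TP + TN + FP + FN}, &\qquad
\text{Precision}   &= \frac{TP}{TP + FP},\\[4pt]
\text{Specificity} &= \frac{TN}{TN + FP}, &\qquad
\text{F1-score}    &= \frac{2\,\text{Precision}\cdot\text{Recall}}{\text{Precision} + \text{Recall}},
\quad \text{Recall} = \frac{TP}{TP + FN}.
\end{aligned}
```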
Existing Studies | Objects | Methods | Technical Details | Accuracy (%) |
---|---|---|---|---|
Cao et al. [52] | Leaf | Spectral analysis | Multispectral features + support vector machines | 91.56 |
Cao et al. [53] | Canopy | Spectral analysis | Multispectral features + support vector machines | 88.67 |
Sun et al. [54] | Canopy | Spectral analysis | Multispectral features + support vector machines | 92.63 |
Liu et al. [55] | Leaf | Image processing | Color and shape features + support vector machines | 89.50 |
Sun et al. [56] | Leaf | Image processing | Improved SqueezeNet | 90.50 |
Zhang et al. [57] | Canopy | Image processing | DenseNet201 | 97.81 |
Ours | Canopy | Image processing | Tea-ResNet | 99.32 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhang, Z.; Lu, Y.; Liu, P. Training Strategy Optimization of a Tea Canopy Dataset for Variety Identification During the Harvest Period. Agriculture 2025, 15, 2027. https://doi.org/10.3390/agriculture15192027