Hazelnut Yield Estimation: A Vision-Based Approach for Automated Counting of Hazelnut Female Flowers
Abstract
1. Introduction
1.1. Motivation and Novel Work
- A robust methodology that combines state-of-the-art object detection models with regression-based bias correction, requiring only two high-resolution images per plant and thus enabling practical large-scale application even by non-expert operators;
- An image-tiling strategy combined with a YOLO-based model to detect extremely small, low-contrast flowers (see the sketch after this list);
- A custom YOLO11x architecture augmented with a P2 detection layer that improves small-object detection by exploiting finer-grained spatial resolution;
- A comparative analysis of different object detection models to evaluate their effectiveness for this specific task;
- An evaluation of the proposed method in a real hazelnut orchard, comparing its predictions against manual counts performed in the field by experienced operators.
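To make the tiling strategy concrete, the sketch below shows one plausible way to run a YOLO-family detector over overlapping tiles of a high-resolution image and map detections back to full-image coordinates. This is a minimal illustration, not the paper's implementation: it assumes the Ultralytics API, the stock `yolo11x.pt` weights stand in for the custom YOLO11x-P2 model (not publicly reproduced here), and the tile size, overlap, and confidence threshold are illustrative values. Cross-tile duplicate suppression is omitted for brevity.

```python
# Minimal tiling-inference sketch. Tile size, overlap, and confidence
# threshold are illustrative assumptions, not the paper's settings.
import numpy as np
from ultralytics import YOLO


def detect_tiled(model, image, tile=1280, overlap=128, conf=0.25):
    """Run a YOLO model on overlapping tiles of a high-resolution image
    and map each detection back to full-image coordinates."""
    h, w = image.shape[:2]
    step = tile - overlap
    boxes = []
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            crop = image[y:y + tile, x:x + tile]
            res = model.predict(crop, conf=conf, verbose=False)[0]
            for x1, y1, x2, y2 in res.boxes.xyxy.cpu().numpy():
                # Offset tile-local coordinates to the full image frame.
                boxes.append([x1 + x, y1 + y, x2 + x, y2 + y])
    # NOTE: detections falling in overlap regions may be duplicated;
    # a cross-tile NMS pass would be needed in practice.
    return np.array(boxes)


model = YOLO("yolo11x.pt")  # stand-in weights for the custom P2 variant
```

Counting the rows of the returned array then gives a per-image flower count of the kind used for the side-level totals reported in Section 3.3.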
1.2. Organization of the Paper
2. Materials and Methods
Object Detection Model Training
3. Results and Discussion
3.1. Dataset
3.2. Model Training
3.3. Field Model Testing
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Kanyepe, J.; Chibaro, M.; Morima, M.; Moeti-Lysson, J. AI-Powered Agricultural Supply Chains: Applications, Challenges, and Opportunities. In Integrating Agriculture, Green Marketing Strategies, and Artificial Intelligence; IGI Global: Hershey, PA, USA, 2025; pp. 33–64.
- Joseph, J.E.; Rao, K.; Swai, E.; Whitbread, A.M.; Rötter, R.P. How beneficial are seasonal climate forecasts for climate risk management? An appraisal for crop production in Tanzania. Clim. Risk Manag. 2025, 47, 100686.
- Sun, J.; Tian, P.; Li, Z.; Wang, X.; Zhang, H.; Chen, J.; Qian, Y. Construction and Optimization of Integrated Yield Prediction Model Based on Phenotypic Characteristics of Rice Grown in Small-Scale Plantations. Agriculture 2025, 15, 181.
- Agarwal, N.; Choudhry, N.; Tripathi, K. A novel hybrid time series deep learning model for forecasting of cotton yield in India. Int. J. Inf. Technol. 2025, 17, 1745–1752.
- Fontana, M.; Somenzi, M.; Tesio, A. Cultivation, harvest and postharvest aspects that influence quality and organoleptic properties of hazelnut production and related final products. In Proceedings of the VIII International Congress on Hazelnut 1052, Temuco City, Chile, 19–22 March 2012; pp. 311–314.
- Bacchetta, L.; Rovira, M.; Tronci, C.; Aramini, M.; Drogoudi, P.; Silva, A.; Solar, A.; Avanzato, D.; Botta, R.; Valentini, N.; et al. A multidisciplinary approach to enhance the conservation and use of hazelnut Corylus avellana L. genetic resources. Genet. Resour. Crop Evol. 2015, 62, 649–663.
- Gasparri, A.; Ulivi, G.; Rossello, N.B.; Garone, E. The H2020 project Pantheon: Precision farming of hazelnut orchards. In Proceedings of the Convegno Automatica, Florence, Italy, 12–14 September 2018; p. 10.
- Germain, E. The reproduction of hazelnut (Corylus avellana L.): A review. In Proceedings of the III International Congress on Hazelnut 351, Alba, Italy, 14–18 September 1992; pp. 195–210.
- Pacchiarelli, A.; Lupo, M.; Ferrucci, A.; Giovanelli, F.; Priori, S.; Pica, A.L.; Silvestri, C.; Cristofori, V. Phenology, Yield and Nut Traits Evaluation of Twelve European Hazelnut Cultivars Grown in Central Italy. Forests 2024, 15, 833.
- Weiss, M.; Jacob, F.; Duveiller, G. Remote sensing for agricultural applications: A meta-review. Remote Sens. Environ. 2019, 236, 111402.
- Jinasena, K.; Sonnadara, U. A dynamic simulation model for tree development. In Proceedings of the Conference Proceedings—International Forum for Mathematical Modeling 2014, Karlstad, Sweden, 16–18 June 2014.
- Martinelli, A.; Fabiocchi, D.; Picchio, F.; Giberti, H.; Carnevale, M. Design of an Environment for Virtual Training Based on Digital Reconstruction: From Real Vegetation to Its Tactile Simulation. Designs 2025, 9, 32.
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
- Estrada, J.S.; Vasconez, J.P.; Fu, L.; Cheein, F.A. Deep Learning based flower detection and counting in highly populated images: A peach grove case study. J. Agric. Food Res. 2024, 15, 100930.
- Yi, X.; Chen, H.; Wu, P.; Wang, G.; Mo, L.; Wu, B.; Yi, Y.; Fu, X.; Qian, P. Light-FC-YOLO: A Lightweight Method for Flower Counting Based on Enhanced Feature Fusion with a New Efficient Detection Head. Agronomy 2024, 14, 1285.
- Wang, N.; Cao, H.; Huang, X.; Ding, M. Rapeseed flower counting method based on GhP2-YOLO and StrongSORT algorithm. Plants 2024, 13, 2388.
- Tan, C.; Sun, J.; Paterson, A.H.; Song, H.; Li, C. Three-view cotton flower counting through multi-object tracking and RGB-D imagery. Biosyst. Eng. 2024, 246, 233–247.
- Lin, J.; Li, J.; Ma, Z.; Li, C.; Huang, G.; Lu, H. A Framework for Single-Panicle Litchi Flower Counting by Regression with Multitask Learning. Plant Phenomics 2024, 6, 172.
- Rahim, U.F.; Mineno, H. Tomato flower detection and counting in greenhouses using faster region-based convolutional neural network. J. Image Graph. 2020, 8, 107–113.
- Li, J.; Li, Y.; Qiao, J.; Li, L.; Wang, X.; Yao, J.; Liao, G. Automatic counting of rapeseed inflorescences using deep learning method and UAV RGB imagery. Front. Plant Sci. 2023, 14, 1101143.
- Yu, G.; Cai, R.; Luo, Y.; Hou, M.; Deng, R. A-pruning: A lightweight pineapple flower counting network based on filter pruning. Complex Intell. Syst. 2024, 10, 2047–2066.
- Li, W.; Solihin, M.I.; Nugroho, H.A. RCA: YOLOv8-Based Surface Defects Detection on the Inner Wall of Cylindrical High-Precision Parts. Arab. J. Sci. Eng. 2024, 49, 12771–12789.
- Chen, Z.; Chen, G. STTSBI: A Fast Inference Framework for Small Object Detection in Ultra-High-Resolution Images. In Proceedings of the 2024 4th International Conference on Intelligent Technology and Embedded Systems (ICITES), Chengdu, China, 20–23 September 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 129–135.
- Akiba, T.; Sano, S.; Yanase, T.; Ohta, T.; Koyama, M. Optuna: A next-generation hyperparameter optimization framework. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, AK, USA, 4–8 August 2019; pp. 2623–2631.
- Nguyen, V. Bayesian optimization for accelerating hyper-parameter tuning. In Proceedings of the 2019 IEEE Second International Conference on Artificial Intelligence and Knowledge Engineering (AIKE), Sardinia, Italy, 3–5 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 302–305.
- Giulietti, N.; Revel, G.M.; Chiariotti, P. Automated vision-based concrete crack measurement system. Measurement 2025, 242, 115858.
- Giulietti, N.; Chiariotti, P.; Zanelli, F.; Debattisti, N.; Cigada, A. Combined Use of Infrared Imaging and Deep-Learning Techniques for Real-Time Temperature Measurement of Train Braking Components. IEEE Trans. Instrum. Meas. 2025, 74, 5009908.
- Schmidt, R.M.; Schneider, F.; Hennig, P. Descending through a crowded valley: Benchmarking deep learning optimizers. In Proceedings of the International Conference on Machine Learning, PMLR, Virtual, 18–24 July 2021; pp. 9367–9376.
- Varghese, R.; Sambath, M. YOLOv8: A Novel Object Detection Algorithm with Enhanced Performance and Robustness. In Proceedings of the 2024 International Conference on Advances in Data Engineering and Intelligent Computing Systems (ADICS), Chennai, India, 18–19 April 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–6.
- Wang, C.Y.; Yeh, I.H.; Mark Liao, H.Y. YOLOv9: Learning what you want to learn using programmable gradient information. In Proceedings of the European Conference on Computer Vision, Milan, Italy, 29 September–4 October 2024; pp. 1–21.
- Wang, A.; Chen, H.; Liu, L.; Chen, K.; Lin, Z.; Han, J.; Ding, G. YOLOv10: Real-time end-to-end object detection. arXiv 2024, arXiv:2405.14458.
- Khanam, R.; Hussain, M. YOLOv11: An overview of the key architectural enhancements. arXiv 2024, arXiv:2410.17725.
- Sapkota, R.; Karkee, M. Comparing YOLOv11 and YOLOv8 for instance segmentation of occluded and non-occluded immature green fruits in complex orchard environment. arXiv 2024, arXiv:2410.19869.
- Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature Pyramid Networks for Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 2117–2125.
- Liu, S.; Qi, L.; Qin, H.; Shi, J.; Jia, J. Path Aggregation Network for Instance Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 8759–8768.
Reference | Base Model | Technique | Results | Vision System | Target Plant
---|---|---|---|---|---
Wang et al. (2024) [16] | YOLOv8m | P2 head, Ghost modules | Precision: 86.1%; Recall: 84.4% | RGB camera | Rapeseed
Yi et al. (2024) [15] | YOLOv8s | Feature fusion | mAP@50: 87.0%; Recall: 81.1% | RGB camera | Not specified
Yu et al. (2024) [21] | YOLOv5 | Filter pruning, StrongSORT | mAP: 71.7%; Recall: 72.0% | RGB camera (mobile) | Pineapple
Rahim et al. (2020) [19] | Faster R-CNN | Region-based CNN, thresholding | Precision: 96.02%; Recall: 93.09% | RGB camera | Tomato
Lin et al. (2024) [18] | YOLACT++ | Multitask learning | AP@50: 84.8% | RGB camera | Litchi
Tan et al. (2024) [17] | YOLOv8x | Deep optical flow | Precision: 96.4%; R²: 0.92 | RGB-D | Cotton
Model | mAP50-95 [-] | Precision [-] | Recall [-] | Inference Time [ms] | Params [-]
---|---|---|---|---|---
yolo11x | 0.85 | 0.98 | 0.95 | 8.52 | 56,874,931
yolo11l | 0.79 | 0.96 | 0.93 | 4.62 | 25,311,251
yolo11m | 0.80 | 0.96 | 0.96 | 3.94 | 20,053,779
yolo11s | 0.82 | 0.95 | 0.94 | 2.02 | 9,428,179
yolo11n | 0.73 | 0.94 | 0.91 | 1.23 | 2,590,035
yolov10x | 0.78 | 0.96 | 0.92 | 8.95 | 31,656,806
yolov10b | 0.85 | 0.97 | 0.94 | 4.73 | 20,452,566
yolov10l | 0.84 | 0.97 | 0.96 | 5.83 | 25,766,870
yolov10m | 0.83 | 0.96 | 0.94 | 3.76 | 16,485,286
yolov10s | 0.84 | 0.96 | 0.95 | 2.40 | 8,067,126
yolov10n | 0.76 | 0.94 | 0.91 | 1.32 | 2,707,430
yolov9e | 0.81 | 0.95 | 0.94 | 10.98 | 58,145,683
yolov9c | 0.81 | 0.95 | 0.95 | 5.18 | 25,530,003
yolov9m | 0.80 | 0.96 | 0.94 | 4.18 | 20,159,043
yolov9s | 0.69 | 0.92 | 0.91 | 2.19 | 7,287,795
yolov9t | 0.79 | 0.94 | 0.92 | 1.55 | 2,005,603
yolov8x | 0.85 | 0.95 | 0.97 | 9.32 | 68,153,571
yolov8l | 0.85 | 0.96 | 0.92 | 5.86 | 43,630,611
yolov8m | 0.81 | 0.96 | 0.92 | 3.59 | 25,856,899
yolov8s | 0.79 | 0.94 | 0.93 | 1.59 | 11,135,987
yolov8n | 0.79 | 0.94 | 0.95 | 0.98 | 3,011,043
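A comparison of this kind can be reproduced with a short evaluation loop. The sketch below assumes the Ultralytics `val` API and a hypothetical dataset descriptor `hazelnut_flowers.yaml`; it reports the same quantities as the table (mAP50-95, precision, recall, per-image inference time, parameter count), though image size and other validation settings here are illustrative, not the paper's.

```python
# Sketch of a model-comparison loop over YOLO variants.
# "hazelnut_flowers.yaml" is a hypothetical dataset descriptor.
from ultralytics import YOLO

WEIGHTS = ["yolo11x.pt", "yolo11l.pt", "yolov10x.pt", "yolov9e.pt", "yolov8x.pt"]  # subset shown

for w in WEIGHTS:
    model = YOLO(w)
    metrics = model.val(data="hazelnut_flowers.yaml", imgsz=1280, verbose=False)
    n_params = sum(p.numel() for p in model.model.parameters())  # total parameter count
    print(f"{w}: mAP50-95={metrics.box.map:.2f} "
          f"P={metrics.box.mp:.2f} R={metrics.box.mr:.2f} "
          f"inference={metrics.speed['inference']:.2f} ms "
          f"params={n_params:,}")
```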
Tree ID | YOLO11x-P2 (side a) | YOLO11x-P2 (side b) | YOLO11x-P2 (Total) | Ground Truth | YOLO11x-P2 (Interp.)
---|---|---|---|---|---
1 | 54 | 96 | 150 | 184.50 | 202.46
2 | 75 | 147 | 222 | 165.50 | 137.14
3 | 47 | 152 | 199 | 285.75 | 309.36
4 | 76 | 183 | 259 | 518.50 | 499.39
5 | 163 | 0 | 163 | 339.00 | 331.13
6 | 102 | 157 | 336 | 647.75 | 651.82
7 | 89 | 85 | 175 | 242.25 | 251.95

MAE = 15.81; R² = 0.989
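As a sanity check, the summary metrics follow directly from the ground-truth and interpolated columns above; the snippet below recomputes them from the table values (MAE as the mean absolute error of the interpolated counts, R² as the coefficient of determination against the ground truth). No new data is introduced.

```python
# Recompute MAE and R² from the per-tree table values.
import numpy as np

ground_truth = np.array([184.50, 165.50, 285.75, 518.50, 339.00, 647.75, 242.25])
interpolated = np.array([202.46, 137.14, 309.36, 499.39, 331.13, 651.82, 251.95])

mae = np.mean(np.abs(interpolated - ground_truth))          # mean absolute error
ss_res = np.sum((ground_truth - interpolated) ** 2)         # residual sum of squares
ss_tot = np.sum((ground_truth - ground_truth.mean()) ** 2)  # total sum of squares
r2 = 1 - ss_res / ss_tot                                    # coefficient of determination

print(f"MAE = {mae:.2f}, R^2 = {r2:.3f}")  # MAE = 15.81, R^2 = 0.989
```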