A Coffee Plant Counting Method Based on Dual-Channel NMS and YOLOv9 Leveraging UAV Multispectral Imaging
Abstract
1. Introduction
2. Materials and Methods
2.1. Overview of the Research Area
2.2. Data and Preprocessing
2.2.1. Equipment
2.2.2. Data Collection
2.2.3. Data Preprocessing
- Mosaic images from areas with complex terrain and various land features were selected, including roads, buildings, macadamia trees, mango trees, and coffee trees of different ages;
- Different sample plot images for each phenological stage were selected, effectively avoiding the issue of having the same number of coffee plants across different phenological stages in the same experimental area.
- Images that were clear and free from flight disturbances;
- Images rich in terrain and land feature information, including various tree species such as mango, macadamia, and young coffee seedlings;
- Images where coffee tree canopies exhibited overlap.
2.2.4. Spectral Combinations
2.3. Methods
2.3.1. YOLOs
- Backbone: This is a convolutional neural network responsible for extracting key multi-scale features from the image. Shallow features, such as edges and textures, are extracted in the early stages of the network, while deeper layers capture high-level features like objects and semantics. YOLOv5 uses a variant of CSPNet (Cross Stage Partial Network) as its backbone [37], making the network lighter and optimizing inference speed. YOLOv7 employs an improved Efficient Layer Aggregation Network (ELAN) as its backbone [38], enhancing feature extraction through a more efficient layer aggregation mechanism. YOLOv8 uses a backbone similar to YOLOv5's but with an improved CSPLayer [39], thereby enhancing feature representation capabilities.
- Neck: Positioned between the backbone and head, the neck fuses and enhances features. It aggregates multi-scale features to detect objects of varying sizes, making it well suited to identifying coffee trees of different ages. YOLOv5's neck retains the Path Aggregation Network (PAN) structure of YOLOv4 [40], with lightweight improvements that speed up feature fusion. YOLOv7 strengthens multi-scale perception, maintaining a focus on small targets during feature aggregation, and strikes a better balance between efficiency and accuracy [41]. YOLOv8 employs the PANet structure, allowing both top–down and bottom–up feature fusion [42], which increases the receptive field and enriches semantic information.
- Head: The head is the final part of the network, responsible for predicting the object’s class, location, and confidence score based on the multi-scale features provided by the neck. YOLOv5 and YOLOv7 optimized the detection head to be more efficient while retaining high accuracy [43,44]. YOLOv8 introduced a new decoupled head structure that separates classification and regression tasks [45], improving the model’s flexibility and stability.
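The three-stage split above can be made concrete with a toy sketch. The snippet below is illustrative only (random arrays stand in for real convolutions; the stride-8/16/32 scales and 64-channel feature maps are assumptions, not taken from the paper) and shows how a decoupled head, as in YOLOv8, predicts class scores and box offsets from separate branches:

```python
import numpy as np

def backbone(img):
    # Stand-in for feature extraction: three feature maps at
    # strides 8, 16, and 32 (shallow -> deep), 64 channels each.
    h, w = img.shape[:2]
    return [np.random.rand(h // s, w // s, 64) for s in (8, 16, 32)]

def neck(feats):
    # Stand-in for PAN-style fusion; a real neck combines
    # top-down and bottom-up pathways across scales.
    return feats

def decoupled_head(feats, num_classes=2):
    # Decoupled head: separate classification and regression
    # branches per scale, as introduced in YOLOv8.
    preds = []
    for f in feats:
        gh, gw, _ = f.shape
        cls = np.random.rand(gh, gw, num_classes)  # class scores
        box = np.random.rand(gh, gw, 4)            # (x, y, w, h) offsets
        preds.append((cls, box))
    return preds

preds = decoupled_head(neck(backbone(np.zeros((640, 640, 3)))))
for cls, box in preds:
    print(cls.shape, box.shape)
```

For a 640 × 640 input this yields predictions on 80 × 80, 40 × 40, and 20 × 20 grids, which is why multi-scale fusion in the neck matters for coffee trees of different canopy sizes.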
2.3.2. Dual-Channel NMS
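The body of this subsection is not reproduced here, so the sketch below shows only the generic idea suggested by its title: pool the detections produced on two input channels (for instance the RGN and RGB composites evaluated in Section 3) and remove duplicates with standard greedy IoU-based NMS. The fusion rule, thresholds, and box format are assumptions for illustration; the authors' dual-channel procedure may differ.

```python
import numpy as np

def iou(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def nms(boxes, scores, thr=0.5):
    # Greedy NMS: keep the highest-scoring box, drop overlaps above thr.
    order = list(np.argsort(scores)[::-1])
    keep = []
    while order:
        i = order.pop(0)
        keep.append(int(i))
        order = [j for j in order if iou(boxes[i], boxes[j]) < thr]
    return keep

# Detections from two channels of the same scene (toy values).
boxes_rgn = np.array([[0, 0, 10, 10], [1, 1, 11, 11]], float)
scores_rgn = np.array([0.9, 0.6])
boxes_rgb = np.array([[0.5, 0.5, 10.5, 10.5], [50, 50, 60, 60]], float)
scores_rgb = np.array([0.8, 0.7])

# Dual-channel fusion (assumed rule): concatenate, then one NMS pass.
boxes = np.vstack([boxes_rgn, boxes_rgb])
scores = np.concatenate([scores_rgn, scores_rgb])
keep = nms(boxes, scores, thr=0.5)
print(keep)  # → [0, 3]: one surviving box per physical plant
```

Merging before suppression lets a plant missed in one channel but found in the other still be counted exactly once, which is consistent with the RGN + RGB rows in the counting results yielding the highest totals.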
2.3.3. Workflow
2.4. Metrics
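The results tables report P, R, mAP50, mAP50–95, and count accuracy. A minimal sketch assuming the standard formulas (count accuracy is inferred as correctly detected plants over detected plants, which matches the figures reported in Section 3.4):

```python
def precision(tp, fp):
    # Fraction of predicted plants that are real plants.
    return tp / (tp + fp)

def recall(tp, fn):
    # Fraction of real plants that were found.
    return tp / (tp + fn)

def count_accuracy(correct, detected):
    # Inferred definition: correctly detected / detected, in percent.
    return 100.0 * correct / detected

# Reproduces the RGN "All" row of the counting table:
# 781 of 809 detections correct.
print(round(count_accuracy(781, 809), 1))  # → 96.5
```

mAP50 averages per-class average precision at IoU ≥ 0.5; mAP50–95 averages it over IoU thresholds from 0.50 to 0.95 in steps of 0.05 (the COCO convention).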
3. Results
3.1. Models and Parameters
3.2. Optimal Model
3.3. Optimal Spectral Band Combination
3.4. Coffee Counting
4. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Gaspar, S.; Ramos, F. Caffeine: Consumption and Health Effects. In Encyclopedia of Food and Health, 1st ed.; Caballero, B., Finglas, P.M., Toldrá, F., Eds.; Academic Press: 2016; pp. 573–578.
- Chéron-Bessou, C.; Acosta-Alba, I.; Boissy, J.; Payen, S.; Rigal, C.; Setiawan, A.A.R.; Sevenster, M.; Tran, T.; Azapagic, A. Unravelling life cycle impacts of coffee: Why do results differ so much among studies? Sustain. Prod. Consum. 2024, 47, 251–266.
- Zhu, T. Research on the Current Situation and Development of China’s Coffee Market. Adv. Econ. Manag. Political Sci. 2023, 54, 197–202.
- China Industry Research Institute. Annual Research and Consultation Report of Panorama Survey and Investment Strategy on China Industry; China Industry Research Institute: Shenzhen, China, 2023; Report No. 1875749. (In Chinese)
- Yunnan Statistics Bureau. 2023 Yunnan Statistical Yearbook; Yunnan Statistics Bureau: Yunnan, China, 2023. (In Chinese)
- Li, W.; Zhao, G.; Yan, H.; Wang, C. A Research Report on Yunnan Specialty Coffee Production. Trop. Agric. Sci. 2024, 47, 31–40. (In Chinese)
- Alahmad, T.; Neményi, M.; Nyéki, A. Applying IoT Sensors and Big Data to Improve Precision Crop Production: A Review. Agronomy 2023, 13, 2603.
- Xu, D.; Chen, J.; Li, B.; Ma, J. Improving Lettuce Fresh Weight Estimation Accuracy through RGB-D Fusion. Agronomy 2023, 13, 2617.
- Zhang, Y.; Zhao, D.; Liu, H.; Huang, X.; Deng, J.; Jia, R.; He, X.; Tahir, M.N.; Lan, Y. Research hotspots and frontiers in agricultural multispectral technology: Bibliometrics and scientometrics analysis of the Web of Science. Front. Plant Sci. 2022, 13, 955340.
- Ivezić, A.; Trudić, B.; Stamenković, Z.; Kuzmanović, B.; Perić, S.; Ivošević, B.; Budēn, M.; Petrović, K. Drone-Related Agrotechnologies for Precise Plant Protection in Western Balkans: Applications, Possibilities, and Legal Framework Limitations. Agronomy 2023, 13, 2615.
- Sishodia, R.P.; Ray, R.L.; Singh, S.K. Applications of Remote Sensing in Precision Agriculture: A Review. Remote Sens. 2020, 12, 3136.
- Jiménez-Brenes, F.M.; López-Granados, F.; Torres-Sánchez, J.; Peña, J.M.; Ramírez, P.; Castillejo-González, I.L.; de Castro, A.I. Automatic UAV-based detection of Cynodon dactylon for site-specific vineyard management. PLoS ONE 2019, 14, e0218132.
- Osco, L.P.; de Arruda, M.d.S.; Marcato Junior, J.; da Silva, N.B.; Ramos, A.P.M.; Moryia, É.A.S.; Imai, N.N.; Pereira, D.R.; Creste, J.E.; Matsubara, E.T.; et al. A convolutional neural network approach for counting and geolocating citrus-trees in UAV multispectral imagery. ISPRS J. Photogramm. Remote Sens. 2020, 160, 97–106.
- Bai, X.; Liu, P.; Cao, Z.; Lu, H.; Xiong, H.; Yang, A.; Yao, J. Rice plant counting, locating, and sizing method based on high-throughput UAV RGB images. Plant Phenomics 2023, 5, 0020.
- Barata, R.; Ferraz, G.; Bento, N.; Soares, D.; Santana, L.; Marin, D.; Mattos, D.; Schwerz, F.; Rossi, G.; Conti, L.; et al. Evaluation of Coffee Plants Transplanted to an Area with Surface and Deep Liming Based on Multispectral Indices Acquired Using Unmanned Aerial Vehicles. Agronomy 2023, 13, 2623.
- Zeng, L.; Wardlow, B.D.; Xiang, D.; Hu, S.; Li, D. A review of vegetation phenological metrics extraction using time-series, multispectral satellite data. Remote Sens. Environ. 2020, 237, 111511.
- Boegh, E.; Soegaard, H.; Broge, N.; Hasager, C.; Jensen, N.; Schelde, K.; Thomsen, A. Airborne multispectral data for quantifying leaf area index, nitrogen concentration, and photosynthetic efficiency in agriculture. Remote Sens. Environ. 2002, 81, 179–193.
- Lin, H.; Tse, R.; Tang, S.K.; Qiang, Z.P.; Pau, G. The Positive Effect of Attention Module in Few-Shot Learning for Plant Disease Recognition. In Proceedings of the 2022 5th International Conference on Pattern Recognition and Artificial Intelligence (PRAI), Chengdu, China, 19–21 August 2022; IEEE: New York, NY, USA, 2022; pp. 114–120.
- Wang, X.; Zhang, C.; Qiang, Z.; Xu, W.; Fan, J. A New Forest Growing Stock Volume Estimation Model Based on AdaBoost and Random Forest Model. Forests 2024, 15, 260.
- Alkhaldi, N.A.; Alabdulathim, R.E. Optimizing Glaucoma Diagnosis with Deep Learning-Based Segmentation and Classification of Retinal Images. Appl. Sci. 2024, 14, 7795.
- Bouachir, W.; Ihou, K.E.; Gueziri, H.E.; Bouguila, N.; Belanger, N. Computer vision system for automatic counting of planting microsites using UAV imagery. IEEE Access 2019, 7, 82491–82500.
- Buzzy, M.; Thesma, V.; Davoodi, M.; Mohammadpour Velni, J. Real-Time Plant Leaf Counting Using Deep Object Detection Networks. Sensors 2020, 20, 6896.
- Zhang, S.; Chi, C.; Yao, Y.; Lei, Z.; Li, S.Z. Bridging the Gap Between Anchor-based and Anchor-free Detection via Adaptive Training Sample Selection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Virtual, 16–18 June 2020; pp. 9759–9768.
- Think Autonomous. Finally Understand Anchor Boxes in Object Detection (2D and 3D). Available online: https://www.thinkautonomous.ai/blog/anchor-boxes/ (accessed on 6 October 2024).
- Jiang, T.; Yu, Q.; Zhong, Y.; Shao, M. PlantSR: Super-Resolution Improves Object Detection in Plant Images. J. Imaging 2024, 10, 137.
- Lin, H.; Chen, Z.; Qiang, Z.; Tang, S.-K.; Liu, L.; Pau, G. Automated Counting of Tobacco Plants Using Multispectral UAV Data. Agronomy 2023, 13, 2861.
- Chandra, N.; Vaidya, H. Automated detection of landslide events from multi-source remote sensing imagery: Performance evaluation and analysis of YOLO algorithms. J. Earth Syst. Sci. 2024, 133, 1–17.
- Redmon, J.; Farhadi, A. YOLO9000: Better, Faster, Stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 278–282.
- Wang, N.; Cao, H.; Huang, X.; Ding, M. Rapeseed Flower Counting Method Based on GhP2-YOLO and StrongSORT Algorithm. Plants 2024, 13, 2388.
- Hastie, T.; Tibshirani, R.; Friedman, J.H. The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd ed.; Springer: New York, NY, USA, 2009; pp. 347–369.
- Feng, S.; Qian, H.; Wang, H.; Wang, W. Real-time object detection method based on YOLOv5 and efficient mobile network. J. Real-Time Image Process. 2024, 21, 56.
- Bai, Y.; Yu, J.; Yang, S.; Ning, J. An improved YOLO algorithm for detecting flowers and fruits on strawberry seedlings. Biosyst. Eng. 2024, 237, 1–12.
- Guan, H.; Deng, H.; Ma, X.; Zhang, T.; Zhang, Y.; Zhu, T.; Zhou, H.; Gu, Z.; Lu, Y. A corn canopy organs detection method based on improved DBi-YOLOv8 network. Eur. J. Agron. 2024, 154, 127076.
- Xu, D.; Xiong, H.; Liao, Y.; Wang, H.; Yuan, Z.; Yin, H. EMA-YOLO: A Novel Target-Detection Algorithm for Immature Yellow Peach Based on YOLOv8. Sensors 2024, 24, 3783.
- Wang, C.; Yeh, I.; Liao, H. YOLOv9: Learning what you want to learn using programmable gradient information. arXiv 2024, arXiv:2402.13616.
- Badgujar, C.; Poulose, A.; Gan, H. Agricultural object detection with You Only Look Once (YOLO) Algorithm: A bibliometric and systematic literature review. Comput. Electron. Agric. 2024, 223, 109090.
- Zhan, W.; Sun, C.; Wang, M.; She, J.; Zhang, Y.; Zhang, Z.; Sun, Y. An improved YOLOv5 real-time detection method for small objects captured by UAV. Soft Comput. 2022, 26, 361–373.
- Li, S.; Tao, T.; Zhang, Y.; Li, M.; Qu, H. YOLO v7-CS: A YOLO v7-based model for lightweight bayberry target detection count. Agronomy 2023, 13, 2952.
- Terven, J.; Córdova-Esparza, D.-M.; Romero-González, J.-A. A Comprehensive Review of YOLO Architectures in Computer Vision: From YOLOv1 to YOLOv8 and YOLO-NAS. Mach. Learn. Knowl. Extr. 2023, 5, 1680–1716.
- Wu, W.; Liu, H.; Li, L.; Long, Y.; Wang, X.; Wang, Z.; Chang, Y. Application of local fully Convolutional Neural Network combined with YOLOv5 algorithm in small target detection of remote sensing image. PLoS ONE 2021, 16, e0259283.
- Wu, D.; Jiang, S.; Zhao, E.; Liu, Y.; Zhu, H.; Wang, W.; Wang, R. Detection of Camellia oleifera fruit in complex scenes by using YOLOv7 and data augmentation. Appl. Sci. 2022, 12, 11318.
- Wang, G.; Chen, Y.; An, P.; Hong, H.; Hu, J.; Huang, T. UAV-YOLOv8: A small-object-detection model based on improved YOLOv8 for UAV aerial photography scenarios. Sensors 2023, 23, 7190.
- Ashraf, A.H.; Imran, M.; Qahtani, A.M.; Alsufyani, A.; Almutiry, O.; Mahmood, A.; Attique, M.; Habib, M. Weapons detection for security and video surveillance using CNN and YOLO-v5s. CMC-Comput. Mater. Contin. 2022, 70, 2761–2775.
- Zhao, L.; Zhu, M. MS-YOLOv7: YOLOv7 based on multi-scale for object detection on UAV aerial photography. Drones 2023, 7, 188.
- MMYOLO Contributors. YOLOv8 by MMYOLO. 2023. Available online: https://github.com/open-mmlab/mmyolo/tree/main/configs/yolov8 (accessed on 10 March 2024).
- Chien, C.T.; Ju, R.Y.; Chou, K.Y.; Chiang, J.S. YOLOv9 for fracture detection in pediatric wrist trauma X-ray images. Electron. Lett. 2024, 60, e13248.
- Neubeck, A.; Van Gool, L. Efficient Non-Maximum Suppression. In Proceedings of the 18th International Conference on Pattern Recognition (ICPR’06), Hong Kong, China, 20–24 August 2006.
- Zaghari, N.; Fathy, M.; Jameii, S.; Shahverdy, M. The improvement in obstacle detection in autonomous vehicles using YOLO non-maximum suppression fuzzy algorithm. J. Supercomput. 2021, 77, 13421–13446.
- Zhang, Y.F.; Ren, W.; Zhang, Z.; Jia, Z.; Wang, L.; Tan, T. Focal and efficient IOU loss for accurate bounding box regression. Neurocomputing 2022, 506, 146–157.
- Candiago, S.; Remondino, F.; De Giglio, M.; Dubbini, M.; Gattelli, M. Evaluating Multispectral Images and Vegetation Indices for Precision Farming Applications from UAV Images. Remote Sens. 2015, 7, 4026–4047.
- Santana, L.S.; Ferraz, G.A.e.S.; Santos, G.H.R.d.; Bento, N.L.; Faria, R.d.O. Identification and Counting of Coffee Trees Based on Convolutional Neural Network Applied to RGB Images Obtained by RPA. Sustainability 2023, 15, 820.
| Band Combinations | UAV Bands | Wavelength |
|---|---|---|
| G3 | Green, Green, Green | Green: 560 ± 16 nm; NIR: 860 ± 26 nm; Red: 650 ± 16 nm; Red edge: 730 ± 16 nm (shared by all multispectral combinations below) |
| N3 | NIR, NIR, NIR | |
| R3 | Red, Red, Red | |
| Re3 | Red edge, Red edge, Red edge | |
| RNRe | Red, NIR, Red edge | |
| NGRe | NIR, Green, Red edge | |
| RGN | Red, Green, NIR | |
| RGRe | Red, Green, Red edge | |
| RGB | Full-color (RGB) bands | Red: 620–750 nm; Green: 495–570 nm; Blue: 450–495 nm |
| Parameter | Value | Explanation |
|---|---|---|
| epoch | 1000 | Number of complete passes through the training data |
| batch | 16 | Size of the data subset processed in one pass |
| workers | 16 | Number of sub-processes used for data loading |
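These hyperparameters map directly onto a typical YOLO training call. The snippet below is a hedged sketch assuming an Ultralytics-style API; the weights file and dataset YAML names are placeholders, and the paper's actual YOLOv9 training script may differ:

```python
# Hyperparameters from the table above.
train_args = dict(
    epochs=1000,   # complete passes through the training data
    batch=16,      # images per optimization step
    workers=16,    # data-loading sub-processes
)

# Hypothetical invocation (commented out so the sketch stays
# self-contained; "yolov9c.pt" and "coffee.yaml" are placeholders):
# from ultralytics import YOLO
# YOLO("yolov9c.pt").train(data="coffee.yaml", **train_args)

print(train_args)
```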
| Model | P (%) | R (%) | mAP50 (%) | mAP50–95 (%) |
|---|---|---|---|---|
| YOLOv5l | 86.50 | 90.40 | 92.70 | 60.80 |
| YOLOv5m | 84.40 | 91.60 | 93.50 | 63.80 |
| YOLOv5n | 88.80 | 89.90 | 94.30 | 59.70 |
| YOLOv5s | 88.80 | 88.30 | 93.90 | 60.90 |
| YOLOv5x | 87.40 | 90.50 | 93.70 | 61.00 |
| YOLOv7 | 85.20 | 92.20 | 93.70 | 63.50 |
| YOLOv8l | 86.70 | 90.10 | 94.50 | 61.60 |
| YOLOv8m | 85.90 | 87.90 | 93.90 | 61.50 |
| YOLOv8n | 88.10 | 88.00 | 93.90 | 61.70 |
| YOLOv8s | 88.00 | 90.40 | 94.10 | 63.80 |
| YOLOv8x | 86.10 | 86.00 | 93.00 | 61.50 |
| YOLOv9 | 89.30 | 87.80 | 94.60 | 64.60 |
| Model | P (%) | R (%) | mAP50 (%) | mAP50–95 (%) |
|---|---|---|---|---|
| YOLOv5l | 86.20 | 90.20 | 92.30 | 58.30 |
| YOLOv5m | 84.60 | 91.30 | 93.40 | 59.90 |
| YOLOv5n | 87.90 | 88.90 | 93.00 | 54.30 |
| YOLOv5s | 88.30 | 88.80 | 93.80 | 57.70 |
| YOLOv5x | 87.50 | 90.50 | 93.80 | 58.70 |
| YOLOv7 | 85.30 | 92.30 | 94.20 | 60.80 |
| YOLOv8l | 87.00 | 90.40 | 94.80 | 57.80 |
| YOLOv8m | 86.20 | 87.40 | 93.50 | 58.60 |
| YOLOv8n | 88.10 | 88.40 | 93.90 | 58.00 |
| YOLOv8s | 87.60 | 90.80 | 93.60 | 61.10 |
| YOLOv8x | 85.70 | 85.50 | 93.10 | 58.70 |
| YOLOv9 | 88.90 | 89.10 | 94.80 | 60.80 |
| Band Combinations | Category | Detection P (%) | Detection R (%) | Detection mAP50 (%) | Detection mAP50–95 (%) | Segmentation P (%) | Segmentation R (%) | Segmentation mAP50 (%) | Segmentation mAP50–95 (%) |
|---|---|---|---|---|---|---|---|---|---|
| G3 | Mixed | 87.20 | 89.00 | 92.10 | 60.60 | 87.60 | 89.20 | 92.60 | 57.60 |
| N3 | Mixed | 87.60 | 85.00 | 90.60 | 59.80 | 88.10 | 85.50 | 91.20 | 57.20 |
| R3 | Mixed | 88.10 | 82.80 | 90.70 | 59.40 | 88.40 | 83.30 | 91.30 | 56.70 |
| Re3 | Mixed | 87.50 | 87.40 | 92.00 | 61.70 | 87.90 | 87.70 | 92.10 | 58.50 |
| RNRe | Mixed | 85.70 | 87.30 | 93.10 | 64.10 | 85.60 | 87.70 | 93.30 | 60.70 |
| NGRe | Mixed | 87.10 | 87.70 | 92.60 | 64.00 | 87.50 | 88.10 | 92.80 | 61.10 |
| RGN | Mixed | 89.10 | 88.20 | 94.70 | 65.30 | 89.20 | 88.30 | 94.40 | 61.40 |
| RGRe | Mixed | 87.20 | 87.60 | 93.40 | 64.40 | 86.70 | 88.90 | 92.90 | 60.50 |
| RGB | Mixed | 89.30 | 87.80 | 94.60 | 64.60 | 88.90 | 89.10 | 94.80 | 60.80 |

"Detection" columns refer to coffee detection; "Segmentation" columns refer to single-plant segmentation.
| Band Combinations | Category | Detection P (%) | Detection R (%) | Detection mAP50 (%) | Detection mAP50–95 (%) | Segmentation P (%) | Segmentation R (%) | Segmentation mAP50 (%) | Segmentation mAP50–95 (%) |
|---|---|---|---|---|---|---|---|---|---|
| G3 | Flower | 86.60 | 90.30 | 93.70 | 61.20 | 87.10 | 90.40 | 93.70 | 57.60 |
| N3 | Flower | 87.20 | 85.00 | 89.70 | 56.30 | 87.20 | 85.00 | 90.00 | 53.20 |
| R3 | Flower | 88.90 | 87.70 | 93.50 | 63.10 | 88.50 | 87.60 | 93.20 | 59.20 |
| Re3 | Flower | 86.70 | 87.10 | 91.80 | 58.90 | 87.20 | 87.50 | 91.90 | 55.30 |
| RNRe | Flower | 88.40 | 87.90 | 94.80 | 63.80 | 88.40 | 87.90 | 94.90 | 59.80 |
| NGRe | Flower | 86.40 | 90.10 | 93.80 | 63.00 | 86.60 | 90.40 | 93.40 | 59.90 |
| RGN | Flower | 89.30 | 88.20 | 95.50 | 65.60 | 89.30 | 88.20 | 94.90 | 61.40 |
| RGRe | Flower | 90.30 | 85.90 | 94.30 | 64.10 | 90.00 | 87.70 | 93.70 | 59.40 |
| RGB | Flower | 91.00 | 91.10 | 96.50 | 67.20 | 90.50 | 92.10 | 96.20 | 62.70 |

"Detection" columns refer to coffee detection; "Segmentation" columns refer to single-plant segmentation.
| Band Combinations | Category | Detection P (%) | Detection R (%) | Detection mAP50 (%) | Detection mAP50–95 (%) | Segmentation P (%) | Segmentation R (%) | Segmentation mAP50 (%) | Segmentation mAP50–95 (%) |
|---|---|---|---|---|---|---|---|---|---|
| G3 | Non-flowering | 87.80 | 87.70 | 90.60 | 60.00 | 88.00 | 87.90 | 91.50 | 57.70 |
| N3 | Non-flowering | 88.10 | 85.10 | 90.60 | 59.80 | 88.10 | 85.50 | 91.20 | 57.20 |
| R3 | Non-flowering | 87.30 | 77.90 | 87.80 | 55.70 | 88.30 | 79.00 | 89.30 | 54.30 |
| Re3 | Non-flowering | 88.30 | 87.60 | 92.20 | 64.40 | 88.60 | 87.90 | 92.30 | 61.60 |
| RNRe | Non-flowering | 82.90 | 86.70 | 91.40 | 64.40 | 82.80 | 87.40 | 91.60 | 61.50 |
| NGRe | Non-flowering | 87.70 | 85.20 | 91.50 | 65.00 | 88.30 | 85.90 | 92.10 | 62.40 |
| RGN | Non-flowering | 88.80 | 88.10 | 93.90 | 65.00 | 89.10 | 88.40 | 94.00 | 61.40 |
| RGRe | Non-flowering | 84.20 | 89.30 | 92.50 | 64.70 | 83.40 | 90.10 | 92.00 | 61.70 |
| RGB | Non-flowering | 87.70 | 84.40 | 92.70 | 62.00 | 87.40 | 86.10 | 93.30 | 58.90 |

"Detection" columns refer to coffee detection; "Segmentation" columns refer to single-plant segmentation.
| Band Combinations | Category | Detected Plants | Correctly Detected Plants | Count Accuracy (%) |
|---|---|---|---|---|
| RGN | All | 809 | 781 | 96.50 |
| RGB | All | 798 | 776 | 97.20 |
| RGN + RGB | All | 826 | 813 | 98.40 |
| RGN | Flower | 422 | 396 | 93.80 |
| RGB | Flower | 420 | 403 | 96.00 |
| RGN + RGB | Flower | 432 | 422 | 97.70 |
| RGN | Non-flowering | 387 | 385 | 99.40 |
| RGB | Non-flowering | 378 | 373 | 98.70 |
| RGN + RGB | Non-flowering | 394 | 391 | 99.20 |
Citation
Wang, X.; Zhang, C.; Qiang, Z.; Liu, C.; Wei, X.; Cheng, F. A Coffee Plant Counting Method Based on Dual-Channel NMS and YOLOv9 Leveraging UAV Multispectral Imaging. Remote Sens. 2024, 16, 3810. https://doi.org/10.3390/rs16203810