Detecting Cassava Plants under Different Field Conditions Using UAV-Based RGB Images and Deep Learning Models
Abstract
1. Introduction
2. Materials and Methods
2.1. Dataset
2.1.1. Experimental Setup
2.1.2. Image Acquisition
2.1.3. Image Annotation
2.1.4. Data Augmentation
2.2. Object Detection Model
2.2.1. Model Training and Validation
2.2.2. Performance Metrics
2.3. Model Deployment on NVIDIA Jetson AGX Orin
2.3.1. Jetson Orin Setup
2.3.2. Inference on Jetson Orin
2.4. Validation for Cassava Counting
3. Results
3.1. Captured UAV Images
3.2. Model Training Performance
3.2.1. Performance of the Models for Varying Image Sizes
3.2.2. Effect of Varying Batch Sizes on the Model Performance
3.3. Model Inference Performance
3.3.1. Detection Speed
3.3.2. Speed vs. Accuracy
3.3.3. Use Case on a Farm Field
3.4. Model Performance for Cassava Counting
3.4.1. Light Conditions
3.4.2. Growth Stages
3.4.3. Weed Density
4. Discussion
4.1. Captured UAV Images
4.2. Model Training Performance
4.2.1. Performance of the Models for Varying Image Sizes
4.2.2. Effect of Varying Batch Sizes on the Model Performance
4.3. Model Inference Performance
4.3.1. Detection Speed
4.3.2. Speed vs. Accuracy
4.3.3. Use Case on a Farm Field
4.4. Model Performance for Cassava Counting
4.4.1. Light Conditions
4.4.2. Growth Stages
4.4.3. Weed Density
4.5. Limitations
4.6. Future Work
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
| Image Size | Batch Size | Y5n Train Time (h) | Y5n Precision | Y5n Recall | Y5n mAP@0.5 | Y5n mAP@0.5:0.95 | Y5s Train Time (h) | Y5s Precision | Y5s Recall | Y5s mAP@0.5 | Y5s mAP@0.5:0.95 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 960 × 720 | 20 | 0.344 | 0.951 | 0.897 | 0.939 | 0.724 | 0.402 | 0.967 | 0.918 | 0.960 | 0.791 |
| 960 × 720 | 16 | 0.354 | 0.959 | 0.886 | 0.947 | 0.736 | 0.422 | 0.954 | 0.922 | 0.965 | 0.793 |
| 960 × 720 | 8 | 0.433 | 0.954 | 0.907 | 0.946 | 0.740 | 0.482 | 0.960 | 0.918 | 0.955 | 0.798 |
| 640 × 480 | 32 | 0.234 | 0.950 | 0.838 | 0.903 | 0.628 | 0.276 | 0.978 | 0.834 | 0.918 | 0.674 |
| 640 × 480 | 16 | 0.294 | 0.911 | 0.864 | 0.911 | 0.637 | 0.316 | 0.938 | 0.867 | 0.917 | 0.679 |
| 640 × 480 | 8 | 0.403 | 0.927 | 0.853 | 0.904 | 0.635 | 0.431 | 0.962 | 0.859 | 0.924 | 0.695 |
| 512 × 384 | 64 | 0.185 | 0.881 | 0.781 | 0.836 | 0.512 | 0.224 | 0.897 | 0.810 | 0.867 | 0.566 |
| 512 × 384 | 32 | 0.227 | 0.947 | 0.760 | 0.854 | 0.551 | 0.253 | 0.930 | 0.808 | 0.880 | 0.600 |
| 512 × 384 | 16 | 0.279 | 0.932 | 0.783 | 0.868 | 0.560 | 0.318 | 0.906 | 0.829 | 0.886 | 0.605 |
| 512 × 384 | 8 | 0.389 | 0.916 | 0.785 | 0.861 | 0.559 | 0.422 | 0.914 | 0.848 | 0.888 | 0.610 |
| 256 × 192 | 64 | 0.169 | 0.786 | 0.602 | 0.658 | 0.326 | 0.191 | 0.877 | 0.648 | 0.721 | 0.386 |
| 256 × 192 | 32 | 0.214 | 0.841 | 0.659 | 0.722 | 0.369 | 0.232 | 0.837 | 0.695 | 0.738 | 0.412 |
| 256 × 192 | 16 | 0.274 | 0.864 | 0.659 | 0.726 | 0.385 | 0.289 | 0.886 | 0.709 | 0.759 | 0.414 |
| 256 × 192 | 8 | 0.352 | 0.855 | 0.644 | 0.723 | 0.384 | 0.371 | 0.892 | 0.682 | 0.749 | 0.433 |
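The precision and recall columns above follow the standard object-detection definitions. A minimal sketch with made-up true-positive/false-positive/false-negative counts (illustrative only; the paper's values come from YOLOv5 validation, not from this code):

```python
# Precision and recall for a detector, from hypothetical counts.

def precision(tp: int, fp: int) -> float:
    """Fraction of predicted boxes that match a real plant."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Fraction of ground-truth plants that were detected."""
    return tp / (tp + fn)

# Example: 95 correct detections, 5 spurious boxes, 10 missed plants.
tp, fp, fn = 95, 5, 10
print(f"precision = {precision(tp, fp):.3f}")  # 0.950
print(f"recall    = {recall(tp, fn):.3f}")     # 0.905
```

mAP@0.5 averages precision over recall levels at an IoU threshold of 0.5, while mAP@0.5:0.95 further averages over IoU thresholds from 0.5 to 0.95 in steps of 0.05.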
| Model | Image Size | Speed (FPS) | mAP@0.5:0.95 | Weights File Size (MB) |
|---|---|---|---|---|
| Y5n | 960 × 720 | 38.26 | 0.724 | 4.0 |
| Y5n | 640 × 480 | 52.67 | 0.628 | 3.8 |
| Y5n | 512 × 384 | 54.14 | 0.512 | 3.7 |
| Y5n | 256 × 192 | 58.32 | 0.326 | 3.6 |
| Y5s | 960 × 720 | 34.79 | 0.791 | 14.3 |
| Y5s | 640 × 480 | 47.60 | 0.674 | 14.1 |
| Y5s | 512 × 384 | 50.18 | 0.566 | 14.0 |
| Y5s | 256 × 192 | 57.08 | 0.386 | 13.9 |
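One way to read the speed-vs-accuracy table is as a constrained choice: pick the fastest (model, image size) pair whose mAP@0.5:0.95 still clears an accuracy floor. The numbers below are copied from the table; the `fastest_above` helper is our own illustrative sketch, not part of the paper's pipeline:

```python
# Speed/accuracy entries from the table: (model, image size, FPS, mAP@0.5:0.95).
configs = [
    ("Y5n", "960x720", 38.26, 0.724),
    ("Y5n", "640x480", 52.67, 0.628),
    ("Y5n", "512x384", 54.14, 0.512),
    ("Y5n", "256x192", 58.32, 0.326),
    ("Y5s", "960x720", 34.79, 0.791),
    ("Y5s", "640x480", 47.60, 0.674),
    ("Y5s", "512x384", 50.18, 0.566),
    ("Y5s", "256x192", 57.08, 0.386),
]

def fastest_above(configs, min_map):
    """Fastest configuration meeting the accuracy floor, or None."""
    ok = [c for c in configs if c[3] >= min_map]
    return max(ok, key=lambda c: c[2]) if ok else None

# With a 0.60 mAP floor, Y5n at 640x480 (52.67 FPS) wins.
print(fastest_above(configs, 0.60))
```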
| Model | Image Size | Speed (FPS) | Best Possible Mission Duration (s) |
|---|---|---|---|
| Y5n | 960 × 720 | 38.26 | 78.31 |
| Y5n | 640 × 480 | 52.67 | 57.42 |
| Y5n | 512 × 384 | 54.14 | 55.85 |
| Y5n | 256 × 192 | 58.32 | 51.86 |
| Y5s | 960 × 720 | 34.79 | 86.93 |
| Y5s | 640 × 480 | 47.60 | 63.53 |
| Y5s | 512 × 384 | 50.18 | 60.27 |
| Y5s | 256 × 192 | 57.08 | 52.98 |
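If on-board inference must keep pace with the camera, the shortest feasible mission time scales inversely with detection speed: duration = frames to process / inference FPS. A sketch of that relation, assuming a hypothetical 3000-frame mission clip (the actual clip length is not stated in this excerpt):

```python
# Best-possible mission duration when every frame must be processed
# on-device: duration = total_frames / inference_FPS.

def mission_duration(total_frames: int, fps: float) -> float:
    """Seconds needed to run inference over every frame."""
    return total_frames / fps

frames = 3000  # hypothetical clip length, for illustration only
for model, fps in [("Y5n @ 640x480", 52.67), ("Y5s @ 640x480", 47.60)]:
    print(f"{model}: {mission_duration(frames, fps):.2f} s")
```

Faster configurations shorten the minimum mission time, which is why the lowest-resolution settings in the table allow the shortest missions despite their lower accuracy.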
| Metric | Y5s | Y5n |
|---|---|---|
| RMSE | 0.82 | 1.31 |
| R² | 0.9982 | 0.9949 |
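The counting metrics above, RMSE and R², compare predicted plant counts against manual ground-truth counts. A self-contained sketch with made-up counts (the paper's values come from its own validation plots, not from these numbers):

```python
import math

def rmse(pred, true):
    """Root-mean-square error between predicted and true counts."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true))

def r_squared(pred, true):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_t = sum(true) / len(true)
    ss_res = sum((t - p) ** 2 for p, t in zip(pred, true))
    ss_tot = sum((t - mean_t) ** 2 for t in true)
    return 1 - ss_res / ss_tot

# Hypothetical per-plot cassava counts: ground truth vs. model output.
true_counts = [20, 35, 50, 65, 80]
pred_counts = [21, 34, 50, 66, 79]
print(f"RMSE = {rmse(pred_counts, true_counts):.2f}")    # 0.89
print(f"R^2  = {r_squared(pred_counts, true_counts):.4f}")  # 0.9982
```

A lower RMSE and an R² closer to 1 both indicate counts that track the manual tallies more closely, which is the sense in which Y5s outperforms Y5n in the table.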
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Nnadozie, E.C.; Iloanusi, O.N.; Ani, O.A.; Yu, K. Detecting Cassava Plants under Different Field Conditions Using UAV-Based RGB Images and Deep Learning Models. Remote Sens. 2023, 15, 2322. https://doi.org/10.3390/rs15092322