Machine Learning for Precision Agriculture Using Imagery from Unmanned Aerial Vehicles (UAVs): A Survey
Abstract
1. Introduction
- The paper discusses a wide variety of precision agriculture problems that can be addressed using image data acquired from UAVs.
- The paper provides a technical discussion of the most recent papers using image data from UAVs to address agricultural problems.
- The paper evaluates the effectiveness of the various machine learning and deep learning techniques that use UAV image data to address agricultural problems.
- The paper points out some fruitful future research directions based on the work conducted to date.
2. Challenges in Agriculture
2.1. Plant Disease Detection and Diagnosis
2.2. Pest Detection and Control
2.3. Urban Vegetation Classification
2.4. Crop Yield Estimation
2.5. Over- and Under-Irrigation
2.6. Seed Quality and Germination
2.7. Soil Quality and Composition
2.8. Fertilizer Usage
2.9. Quality of the Crop Output
3. Survey Design
- The study must include a clear report on the performance of the models.
- The study must present an in-depth description of the model architecture.
- The study carries out detection/classification/segmentation tasks or a combination of these using UAV image datasets.
- The study is not indexed in a reputable database.
- The study does not propose any significant addition or change to previously existing deep learning or machine learning solutions in its domain.
- The study presents vague descriptions of the experimentation and classification results.
- The study reports irrelevant or unsatisfactory results.
- What data sources and image datasets were used in the paper?
- What type of preprocessing, data cleaning, and augmentation methods were utilized?
- What type of machine learning or deep learning architectures were used?
- What overall performance was achieved, and which metrics were used to report the performance?
- Which architectures and techniques performed best for a class of agricultural problems?
4. Background
4.1. Image Data from UAVs
4.2. Image Features Used in UAV Data
4.3. Vision Tasks Using UAV Data
4.4. Evaluation Metrics
- Accuracy, as shown in Equation (1), is a measure of an algorithm’s ability to make correct predictions. Accuracy is the ratio of the sum of true-positive (TP) and true-negative (TN) predictions to the algorithm’s total number of predictions, including false-positive (FP) and false-negative (FN) predictions: Accuracy = (TP + TN)/(TP + TN + FP + FN).
- Precision, as shown in Equation (2), is a measure of an algorithm’s ability to make correct positive predictions. Precision is the ratio of true-positive (TP) predictions to the sum of true-positive (TP) and false-positive (FP) predictions: Precision = TP/(TP + FP).
- Recall, as shown in Equation (3), measures an algorithm’s ability to identify positive samples. Recall is the ratio of true-positive (TP) predictions made by the algorithm to the sum of its true-positive (TP) and false-negative (FN) predictions: Recall = TP/(TP + FN).
- F1-score, as shown in Equation (4), is the harmonic mean of precision and recall; a high F1-score indicates that both precision and recall are high. F1-score is calculated as F1 = 2 × (Precision × Recall)/(Precision + Recall).
- Area under the curve (AUC) is the area under an ROC curve, which is a plot of an algorithm’s true-positive rate (TPR) (Equation (5)) against its false-positive rate (FPR) (Equation (6)). The true-positive rate is the ratio of positive samples the algorithm correctly classifies to the total number of actual positive samples: TPR = TP/(TP + FN). The false-positive rate is the ratio of the algorithm’s false-positive classifications to the total number of actual negative samples: FPR = FP/(FP + TN).
- Intersection over union (IoU), as shown in Equation (7), is an evaluation metric used to assess how accurate a detection algorithm’s output bounding boxes around an object of interest in an image (e.g., a weed) are compared to the ground truth boxes. IoU is the ratio of the intersection area between a bounding box and its associated ground-truth box to their area of union.
- Mean average precision (mAP), as shown in Equation (8), is used to assess the quality of object detection models. This metric requires finding a model’s average AP across its classes. The calculation of AP requires calculating a model’s precision and recall, followed by drawing its precision–recall curve, and finally, finding the area under the curve.
- Average residual, as shown in Equation (9), is used to assess how erroneous a model is. Average residual is the average difference between a model’s predictions and the ground-truth values.
- Root-mean-square error (RMSE), as shown in Equation (10), is used to assess an algorithm’s ability to produce numeric predictions that are close to ground-truth values. RMSE is the square root of the mean of the squared differences between the algorithm’s predictions and the associated ground-truth values.
- Mean absolute error (MAE), as shown in Equation (11), is an error metric used to assess how far off an algorithm’s numeric predictions are from ground-truth values. MAE is the mean of the absolute differences between predictions and ground-truth values.
- Frames per second (FPS) is a measure used to assess how fast a machine learning model is at analyzing and processing images.
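To make these definitions concrete, the following is a minimal Python/NumPy sketch of how the classification, detection, and regression metrics above can be computed from predictions and ground truth. It is illustrative only, not code from any surveyed paper, and the function and variable names are assumptions.

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for binary labels (1 = positive class)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return accuracy, precision, recall, f1

def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def rmse(y_true, y_pred):
    """Root-mean-square error between numeric predictions and ground truth."""
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def mae(y_true, y_pred):
    """Mean absolute error between numeric predictions and ground truth."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

# Example usage with toy predictions
print(classification_metrics([1, 0, 1, 1], [1, 0, 0, 1]))
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))   # ~0.143
print(rmse([2.0, 3.0], [2.5, 2.0]), mae([2.0, 3.0], [2.5, 2.0]))
```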
5. Survey Results
5.1. Traditional Machine Learning
5.1.1. Support Vector Machines (SVM)
5.1.2. K-Nearest Neighbors (KNN)
5.1.3. Decision Trees (DT) and Random Forests (RF)
5.2. Neural Networks and Deep Learning
5.2.1. Convolutional Neural Networks (CNN)
5.2.2. U-Net Architecture
5.2.3. Other Segmentation Models
5.2.4. You Only Look Once (YOLO)
5.2.5. Single-Shot Detector (SSD)
5.2.6. Region-Based Convolutional Neural Networks
5.2.7. Autoencoders
5.2.8. Transformers
5.2.9. Semi-Supervised Convolutional Neural Networks
5.2.10. Miscellaneous
6. Discussion and Future Work
6.1. Machine Learning Techniques
6.2. Best Techniques for Agricultural Problems
6.3. Future Work
7. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- World Food Programme. 2022 Global Report on Food Crises; World Food Programme: Rome, Italy, 2022; p. 277. [Google Scholar]
- Ayoub Shaikh, T.; Rasool, T.; Rasheed Lone, F. Towards leveraging the role of machine learning and artificial intelligence in precision agriculture and smart farming. Comput. Electron. Agric. 2022, 198, 107119. [Google Scholar] [CrossRef]
- Mylonas, I.; Stavrakoudis, D.; Katsantonis, D.; Korpetis, E. Chapter 1—Better farming practices to combat climate. In Climate Change and Food Security with Emphasis on Wheat; Academic Press: Cambridge, MA, USA, 2020; pp. 1–29. [Google Scholar] [CrossRef]
- Wolińska, A. Metagenomic Achievements in Microbial Diversity Determination in Croplands: A Review. In Microbial Diversity in the Genomic Era; Das, S., Dash, H.R., Eds.; Academic Press: Cambridge, MA, USA, 2019; pp. 15–35. ISBN 978-0-12-814849-5. [Google Scholar]
- Mohamed, Z.; Terano, R.; Sharifuddin, J.; Rezai, G. Determinants of Paddy Farmer’s Unsustainability Farm Practices. Agric. Agric. Sci. Procedia 2016, 9, 191–196. [Google Scholar] [CrossRef] [Green Version]
- Krishna, K.R. Push Button Agriculture: Robotics, Drones, Satellite-Guided Soil and Crop Management; CRC Press: Boca Raton, FL, USA, 2017. [Google Scholar]
- ISPA. Precision Ag Definition. International Society of Precision Agriculture. Available online: https://www.ispag.org/about/definition (accessed on 14 July 2022).
- Singh, P.; Pandey, P.C.; Petropoulos, G.P.; Pavlides, A.; Srivastava, P.K.; Koutsias, N.; Deng, K.A.K.; Bao, Y. 8—Hyperspectral remote sensing in precision agriculture: Present status, challenges, and future trends. In Hyperspectral Remote Sensing; Pandey, P.C., Srivastava, P.K., Balzter, H., Bhattacharya, B., Petropoulos, G.P., Eds.; Earth Observation; Elsevier: Amsterdam, The Netherlands, 2020; pp. 121–146. ISBN 978-0-08-102894-0. [Google Scholar]
- Cisternas, I.; Velásquez, I.; Caro, A.; Rodríguez, A. Systematic literature review of implementations of precision agriculture. Comput. Electron. Agric. 2020, 176, 105626. [Google Scholar] [CrossRef]
- Hosseiny, B.; Rastiveis, H.; Homayouni, S. An Automated Framework for Plant Detection Based on Deep Simulated Learning from Drone Imagery. Remote Sens. 2020, 12, 3521. [Google Scholar] [CrossRef]
- Aburasain, R.Y.; Edirisinghe, E.A.; Albatay, A. Palm Tree Detection in Drone Images Using Deep Convolutional Neural Networks: Investigating the Effective Use of YOLO V3. In Digital Interaction and Machine Intelligence, Proceedings of the MIDI’2020—8th Machine Intelligence and Digital Interaction Conference, Warsaw, Poland, 9–10 December 2020; Biele, C., Kacprzyk, J., Owsiński, J.W., Romanowski, A., Sikorski, M., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 21–36. [Google Scholar]
- Lu, Y.; Young, S. A survey of public datasets for computer vision tasks in precision agriculture. Comput. Electron. Agric. 2020, 178, 105760. [Google Scholar] [CrossRef]
- Shafi, U.; Mumtaz, R.; García-Nieto, J.; Hassan, S.A.; Zaidi, S.A.R.; Iqbal, N. Precision Agriculture Techniques and Practices: From Considerations to Applications. Sensors 2019, 19, 3796. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Kamilaris, A.; Prenafeta-Boldú, F.X. A review of the use of convolutional neural networks in agriculture. J. Agric. Sci. 2018, 156, 312–322. [Google Scholar] [CrossRef] [Green Version]
- Chengjuan Ren, D.-K.K.; Jeong, D. A Survey of Deep Learning in Agriculture: Techniques and Their Applications. J. Inf. Process. Syst. 2020, 16, 1015–1033. [Google Scholar] [CrossRef]
- Meshram, V.; Patil, K.; Meshram, V.; Hanchate, D.; Ramkteke, S.D. Machine learning in agriculture domain: A state-of-art survey. Artif. Intell. Life Sci. 2021, 1, 100010. [Google Scholar] [CrossRef]
- Shin, J.; Mahmud, M.S.; Rehman, T.U.; Ravichandran, P.; Heung, B.; Chang, Y.K. Trends and Prospect of Machine Vision Technology for Stresses and Diseases Detection in Precision Agriculture. AgriEngineering 2023, 5, 3. [Google Scholar] [CrossRef]
- Radoglou-Grammatikis, P.; Sarigiannidis, P.; Lagkas, T.; Moscholios, I. A compilation of UAV applications for precision agriculture. Comput. Netw. 2020, 172, 107148. [Google Scholar] [CrossRef]
- Aslan, M.F.; Durdu, A.; Sabanci, K.; Ropelewska, E.; Gültekin, S.S. A Comprehensive Survey of the Recent Studies with UAV for Precision Agriculture in Open Fields and Greenhouses. Appl. Sci. 2022, 12, 1047. [Google Scholar] [CrossRef]
- FAO. News Article: New Standards to Curb the Global Spread of Plant Pests and Diseases. Available online: https://www.fao.org/news/story/en/item/1187738/icode/ (accessed on 21 July 2022).
- Khakimov, A.; Salakhutdinov, I.; Omonlikov, A.; Utagnov, S. Traditional and Current-Prospective Methods of Agricultural Plant Diseases Detection: A Review. IOP Conf. Ser. Earth Environ. Sci. 2022, 951, 012002. [Google Scholar] [CrossRef]
- Fang, Y.; Ramasamy, R.P. Current and Prospective Methods for Plant Disease Detection. Biosensors 2015, 5, 537–561. [Google Scholar] [CrossRef] [Green Version]
- Ecological Understanding of Insects in Organic Farming Systems: How Insects Damage Plants. eOrganic. Available online: https://eorganic.org/node/3151 (accessed on 21 July 2022).
- Reedha, R.; Dericquebourg, E.; Canals, R.; Hafiane, A. Transformer Neural Network for Weed and Crop Classification of High Resolution UAV Images. Remote Sens. 2022, 14, 592. [Google Scholar] [CrossRef]
- Arun, R.A.; Umamaheswari, S.; Jain, A.V. Reduced U-Net Architecture for Classifying Crop and Weed Using Pixel-Wise Segmentation. In Proceedings of the 2020 IEEE International Conference for Innovation in Technology (INOCON), Bangluru, India, 6–8 November 2020; pp. 1–6. [Google Scholar]
- Sharma, A.; Kumar, V.; Shahzad, B.; Tanveer, M.; Sidhu, G.P.S.; Handa, N.; Kohli, S.K.; Yadav, P.; Bali, A.S.; Parihar, R.D.; et al. Worldwide pesticide usage and its impacts on ecosystem. SN Appl. Sci. 2019, 1, 1446. [Google Scholar] [CrossRef] [Green Version]
- Rai, M.; Ingle, A. Role of nanotechnology in agriculture with special reference to management of insect pests. Appl. Microbiol. Biotechnol. 2012, 94, 287–293. [Google Scholar] [CrossRef]
- Pest Control Efficiency in Agriculture—Futurcrop. Available online: https://www.futurcrop.com/en/es/blog/post/efficacy-of-plant-protection-products/ (accessed on 28 July 2022).
- Tigges, J.; Lakes, T.; Hostert, P. Urban vegetation classification: Benefits of multitemporal RapidEye satellite data. Remote Sens. Environ. 2013, 136, 66–75. [Google Scholar] [CrossRef]
- Weng, Q.; Quattrochi, D.A.; Carlson, T.N. Remote sensing of urban environments: Special issue. Remote Sens. Environ. 2021, 117, 1–2. [Google Scholar] [CrossRef]
- IowaAgLiteracy. Why Do They Do That?—Estimating Yields; Iowa Agriculture Literacy: West Des Moines, IA, USA, 2019. [Google Scholar]
- Horie, T.; Yajima, M.; Nakagawa, H. Yield forecasting. Agric. Syst. 1992, 40, 211–236. [Google Scholar] [CrossRef]
- Crop Yield. Investopedia. Available online: https://www.investopedia.com/terms/c/crop-yield.asp (accessed on 21 July 2022).
- Altalak, M.; Ammad uddin, M.; Alajmi, A.; Rizg, A. Smart Agriculture Applications Using Deep Learning Technologies: A Survey. Appl. Sci. 2022, 12, 5919. [Google Scholar] [CrossRef]
- Prange, R.K. Pre-harvest, harvest and post-harvest strategies for organic production of fruits and vegetables. Acta Hortic. 2012, 933, 43–50. [Google Scholar] [CrossRef]
- Mitchell, J.P.; Shennan, C.; Grattan, S.R.; May, D.M. Tomato Fruit Yields and Quality under Water Deficit and Salinity. J. Am. Soc. Hortic. Sci. 1991, 116, 215–221. [Google Scholar] [CrossRef] [Green Version]
- Atay, E.; Hucbourg, B.; Drevet, A.; Lauri, P.-E. Investigating effects of over-irrigation and deficit irrigation on yield and fruit quality in pink ladytm “rosy glow” apple. Acta Sci. Pol. Hortorum Cultus 2017, 16, 45–51. [Google Scholar] [CrossRef]
- Li, X.; Ba, Y.; Zhang, M.; Nong, M.; Yang, C.; Zhang, S. Sugarcane Nitrogen Concentration and Irrigation Level Prediction Based on UAV Multispectral Imagery. Sensors 2022, 22, 2711. [Google Scholar] [CrossRef] [PubMed]
- Tuan, P.A.; Sun, M.; Nguyen, T.-N.; Park, S.; Ayele, B.T. 1—Molecular mechanisms of seed germination. In Sprouted Grains; Feng, H., Nemzer, B., DeVries, J.W., Eds.; AACC International Press: Washington, DC, USA, 2019; pp. 1–24. ISBN 978-0-12-811525-1. [Google Scholar]
- El-Maarouf-Bouteau, H. The Seed and the Metabolism Regulation. Biology 2022, 11, 168. [Google Scholar] [CrossRef]
- Vidak, M.; Lazarević, B.; Javornik, T.; Šatović, Z.; Carović-Stanko, K. Seed Water Absorption, Germination, Emergence and Seedling Phenotypic Characterization of the Common Bean Landraces Differing in Seed Size and Color. Seeds 2022, 1, 27. [Google Scholar] [CrossRef]
- Lin, Y.; Chen, T.; Liu, S.; Cai, Y.; Shi, H.; Zheng, D.; Lan, Y.; Yue, X.; Zhang, L. Quick and accurate monitoring peanut seedlings emergence rate through UAV video and deep learning. Comput. Electron. Agric. 2022, 197, 106938. [Google Scholar] [CrossRef]
- Aden, S.; Bialas, J.; Champion, Z.; Levin, E.; McCarty, J.L. Low cost infrared and near infrared sensors for UAVS. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, XL-1, 1–7. [Google Scholar] [CrossRef] [Green Version]
- Arah, I.K.; Amaglo, H.; Kumah, E.K.; Ofori, H. Preharvest and Postharvest Factors Affecting the Quality and Shelf Life of Harvested Tomatoes: A Mini Review. Int. J. Agron. 2015, 2015, 478041. [Google Scholar] [CrossRef] [Green Version]
- What Methods Can Improve Crop Performance? Royal Society. Available online: https://royalsociety.org/topics-policy/projects/gm-plants/what-methods-other-than-genetic-improvement-can-improve-crop-performance/ (accessed on 29 July 2022).
- Takamatsu, T.; Kitagawa, Y.; Akimoto, K.; Iwanami, R.; Endo, Y.; Takashima, K.; Okubo, K.; Umezawa, M.; Kuwata, T.; Sato, D.; et al. Over 1000 nm Near-Infrared Multispectral Imaging System for Laparoscopic In Vivo Imaging. Sensors 2021, 21, 2649. [Google Scholar] [CrossRef]
- Sellami, A.; Tabbone, S. Deep neural networks-based relevant latent representation learning for hyperspectral image classification. Pattern Recognit. 2022, 121, 108224. [Google Scholar] [CrossRef]
- Multispectral Image—An Overview. ScienceDirect Topics. Available online: https://www.sciencedirect.com/topics/earth-and-planetary-sciences/multispectral-image (accessed on 13 December 2022).
- Seager, S.; Turner, E.L.; Schafer, J.; Ford, E.B. Vegetation’s Red Edge: A Possible Spectroscopic Biosignature of Extraterrestrial Plants. Astrobiology 2005, 5, 372–390. [Google Scholar] [CrossRef] [PubMed]
- Color-Infrared (CIR) Imagery. MN IT Services. Available online: https://www.mngeo.state.mn.us/chouse/airphoto/cir.html (accessed on 13 December 2022).
- Jang, G.; Kim, J.; Yu, J.-K.; Kim, H.-J.; Kim, Y.; Kim, D.-W.; Kim, K.-H.; Lee, C.W.; Chung, Y.S. Review: Cost-Effective Unmanned Aerial Vehicle (UAV) Platform for Field Plant Breeding Application. Remote Sens. 2020, 12, 998. [Google Scholar] [CrossRef] [Green Version]
- Tendolkar, A.; Choraria, A.; Manohara Pai, M.M.; Girisha, S.; Dsouza, G.; Adithya, K.S. Modified crop health monitoring and pesticide spraying system using NDVI and Semantic Segmentation: An AGROCOPTER based approach. In Proceedings of the 2021 IEEE International Conference on Autonomous Systems (ICAS), Montreal, QC, Canada, 11–13 August 2021; pp. 1–5. [Google Scholar]
- Júnior, P.C.P.; Monteiro, A.; Ribeiro, R.D.L.; Sobieranski, A.C.; Wangenheim, A.V. Comparison of Supervised Classifiers and Image Features for Crop Rows Segmentation on Aerial Images. Appl. Artif. Intell. 2020, 34, 271–291. [Google Scholar] [CrossRef]
- Lottes, P.; Khanna, R.; Pfeifer, J.; Siegwart, R.; Stachniss, C. UAV-based crop and weed classification for smart farming. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 3024–3031. [Google Scholar]
- Lakshmanan, V.; Görner, M.; Gillard, R. Practical Machine Learning for Computer Vision; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2021. [Google Scholar]
- Bouguettaya, A.; Zarzour, H.; Kechida, A.; Taberkit, A.M. Deep learning techniques to classify agricultural crops through UAV imagery: A review. Neural Comput. Appl. 2022, 34, 9511–9536. [Google Scholar] [CrossRef] [PubMed]
- Zaidi, S.S.A.; Ansari, M.S.; Aslam, A.; Kanwal, N.; Asghar, M.; Lee, B. A survey of modern deep learning based object detection models. Digit. Signal Process. 2022, 126, 103514. [Google Scholar] [CrossRef]
- Tian, D.; Han, Y.; Wang, B.; Guan, T.; Gu, H.; Wei, W. Review of object instance segmentation based on deep learning. JEI 2021, 31, 041205. [Google Scholar] [CrossRef]
- Zhang, J.; Xie, T.; Yang, C.; Song, H.; Jiang, Z.; Zhou, G.; Zhang, D.; Feng, H.; Xie, J. Segmenting Purple Rapeseed Leaves in the Field from UAV RGB Imagery Using Deep Learning as an Auxiliary Means for Nitrogen Stress Detection. Remote Sens. 2020, 12, 1403. [Google Scholar] [CrossRef]
- Liliane, T.N.; Charles, M.S. Factors Affecting Yield of Crops; IntechOpen: London, UK, 2020; ISBN 978-1-83881-223-2. [Google Scholar]
- Natividade, J.; Prado, J.; Marques, L. Low-cost multi-spectral vegetation classification using an Unmanned Aerial Vehicle. In Proceedings of the 2017 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), Coimbra, Portugal, 26–28 April 2017; pp. 336–342. [Google Scholar]
- Pérez-Ortiz, M.; Peña, J.M.; Gutiérrez, P.A.; Torres-Sánchez, J.; Hervás-Martínez, C.; López-Granados, F. A semi-supervised system for weed mapping in sunflower crops using unmanned aerial vehicles and a crop row detection method. Appl. Soft Comput. 2015, 37, 533–544. [Google Scholar] [CrossRef]
- Rodríguez-Garlito, E.C.; Paz-Gallardo, A. Efficiently Mapping Large Areas of Olive Trees Using Drones in Extremadura, Spain. IEEE J. Miniat. Air Space Syst. 2021, 2, 148–156. [Google Scholar] [CrossRef]
- Rocha, B.M.; da Silva Vieira, G.; Fonseca, A.U.; Pedrini, H.; de Sousa, N.M.; Soares, F. Evaluation and Detection of Gaps in Curved Sugarcane Planting Lines in Aerial Images. In Proceedings of the 2020 IEEE Canadian Conference on Electrical and Computer Engineering (CCECE), London, ON, Canada, 30 August–2 September 2020; pp. 1–4. [Google Scholar]
- Crimaldi, M.; Cristiano, V.; De Vivo, A.; Isernia, M.; Ivanov, P.; Sarghini, F. Neural Network Algorithms for Real Time Plant Diseases Detection Using UAVs. In Innovative Biosystems Engineering for Sustainable Agriculture, Forestry and Food Production, Proceedings of the International Mid-Term Conference 2019 of the Italian Association of Agricultural Engineering (AIIA), Matera, Italy, 12–13 September 2019; Coppola, A., Di Renzo, G.C., Altieri, G., D’Antonio, P., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 827–835. [Google Scholar]
- Milioto, A.; Lottes, P.; Stachniss, C. Real-Time Blob-Wise Sugar Beets vs Weeds Classification for Monitoring Fields Using Convolutional Neural Networks. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, IV-2/W3, 41–48. [Google Scholar] [CrossRef] [Green Version]
- Bah, M.D.; Dericquebourg, E.; Hafiane, A.; Canals, R. Deep Learning Based Classification System for Identifying Weeds Using High-Resolution UAV Imagery. In Intelligent Computing; Arai, K., Kapoor, S., Bhatia, R., Eds.; Advances in Intelligent Systems and Computing; Springer International Publishing: Cham, Switzerland, 2019; Volume 857, pp. 176–187. ISBN 978-3-030-01176-5. [Google Scholar]
- Reddy, S.R.G.; Varma, G.P.S.; Davuluri, R.L. Optimized convolutional neural network model for plant species identification from leaf images using computer vision. Int. J. Speech Technol. 2023, 26, 23–50. [Google Scholar] [CrossRef]
- Sembiring, A.; Away, Y.; Arnia, F.; Muharar, R. Development of Concise Convolutional Neural Network for Tomato Plant Disease Classification Based on Leaf Images. J. Phys. Conf. Ser. 2021, 1845, 012009. [Google Scholar] [CrossRef]
- Geetharamani, G.; Arun Pandian, J. Identification of plant leaf diseases using a nine-layer deep convolutional neural network. Comput. Electr. Eng. 2019, 76, 323–338. [Google Scholar]
- Karthik, R.; Hariharan, M.; Anand, S.; Mathikshara, P.; Johnson, A.; Menaka, R. Attention embedded residual CNN for disease detection in tomato leaves. Appl. Soft Comput. 2020, 86, 105933. [Google Scholar] [CrossRef]
- Mohanty, S.P. PlantVillage-Dataset. 19 May 2023. Available online: https://github.com/spMohanty/PlantVillage-Dataset (accessed on 30 April 2023).
- Nanni, L.; Manfè, A.; Maguolo, G.; Lumini, A.; Brahnam, S. High performing ensemble of convolutional neural networks for insect pest image detection. Ecol. Inform. 2022, 67, 101515. [Google Scholar] [CrossRef]
- Atila, Ü.; Uçar, M.; Akyol, K.; Uçar, E. Plant leaf disease classification using EfficientNet deep learning model. Ecol. Inform. 2021, 61, 101182. [Google Scholar] [CrossRef]
- Prasad, A.; Mehta, N.; Horak, M.; Bae, W.D. A two-step machine learning approach for crop disease detection: An application of GAN and UAV technology. Remote Sens. 2022, 14, 4765. [Google Scholar] [CrossRef]
- Albattah, W.; Javed, A.; Nawaz, M.; Masood, M.; Albahli, S. Artificial Intelligence-Based Drone System for Multiclass Plant Disease Detection Using an Improved Efficient Convolutional Neural Network. Front. Plant Sci. 2022, 13, 808380. [Google Scholar] [CrossRef]
- Mishra, S.; Sachan, R.; Rajpal, D. Deep Convolutional Neural Network based Detection System for Real-Time Corn Plant Disease Recognition. Procedia Comput. Sci. 2020, 167, 2003–2010. [Google Scholar] [CrossRef]
- Bah, M.D.; Hafiane, A.; Canals, R. Deep Learning with Unsupervised Data Labeling for Weed Detection in Line Crops in UAV Images. Remote Sens. 2018, 10, 1690. [Google Scholar] [CrossRef] [Green Version]
- Zheng, Y.; Sarigul, E.; Panicker, G.; Stott, D. Vineyard LAI and canopy coverage estimation with convolutional neural network models and drone pictures. In Sensing for Agriculture and Food Quality and Safety XIV, Proceedings of the SPIE Defense + Commercial Sensing, Orlando, FL, USA, 3 April–13 June 2022; SPIE: Bellingham, WA, USA, 2022; Volume 12120, pp. 29–38. [Google Scholar]
- Yang, R.; Lu, X.; Huang, J.; Zhou, J.; Jiao, J.; Liu, Y.; Liu, F.; Su, B.; Gu, P. A Multi-Source Data Fusion Decision-Making Method for Disease and Pest Detection of Grape Foliage Based on ShuffleNet V2. Remote Sens. 2021, 13, 5102. [Google Scholar] [CrossRef]
- Briechle, S.; Krzystek, P.; Vosselman, G. Classification of tree species and standing dead trees by fusing UAV-based LiDAR data and multispectral imagery in the 3D deep neural network pointnet++. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, V-2–2020, 203–210. [Google Scholar] [CrossRef]
- Aiger, D.; Allen, B.; Golovinskiy, A. Large-Scale 3D Scene Classification with Multi-View Volumetric CNN. arXiv 2017. [Google Scholar] [CrossRef]
- Weinstein, B.G.; Marconi, S.; Bohlman, S.; Zare, A.; White, E. Individual Tree-Crown Detection in RGB Imagery Using Semi-Supervised Deep Learning Neural Networks. Remote Sens. 2019, 11, 1309. [Google Scholar] [CrossRef] [Green Version]
- Silva, C.; Hudak, A.; Vierling, L.; Louise Loudermilk, E.; O’Brien, J.; Kevin Hiers, J.; Jack, S.; Gonzalez-Benecke, C.; Lee, H.; Falkowski, M.; et al. Imputation of Individual Longleaf Pine (Pinus palustris Mill.) Tree Attributes from Field and LiDAR Data. Can. J. Remote Sens. 2016, 42, 554–573. [Google Scholar] [CrossRef]
- Bah, M.D.; Hafiane, A.; Canals, R. CRowNet: Deep Network for Crop Row Detection in UAV Images. IEEE Access 2020, 8, 5189–5200. [Google Scholar] [CrossRef]
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv 2022. [Google Scholar] [CrossRef]
- Kitano, B.T.; Mendes, C.C.T.; Geus, A.R.; Oliveira, H.C.; Souza, J.R. Corn Plant Counting Using Deep Learning and UAV Images. IEEE Geosci. Remote Sens. Lett. 2019, 1–5. [Google Scholar] [CrossRef]
- Sreedhar, K. Enhancement of Images Using Morphological Transformations. IJCSIT 2012, 4, 33–50. [Google Scholar] [CrossRef]
- Lin, Z.; Guo, W. Sorghum Panicle Detection and Counting Using Unmanned Aerial System Images and Deep Learning. Front. Plant Sci. 2020, 11, 534853. [Google Scholar] [CrossRef] [PubMed]
- El Hoummaidi, L.; Larabi, A.; Alam, K. Using unmanned aerial systems and deep learning for agriculture mapping in Dubai. Heliyon 2021, 7, e08154. [Google Scholar] [CrossRef] [PubMed]
- Doha, R.; Al Hasan, M.; Anwar, S.; Rajendran, V. Deep Learning based Crop Row Detection with Online Domain Adaptation. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, Singapore, 14–18 August 2021; pp. 2773–2781. [Google Scholar]
- Zhang, T.; Yang, Z.; Xu, Z.; Li, J. Wheat Yellow Rust Severity Detection by Efficient DF-UNet and UAV Multispectral Imagery. IEEE Sens. J. 2022, 22, 9057–9068. [Google Scholar] [CrossRef]
- Tsuichihara, S.; Akita, S.; Ike, R.; Shigeta, M.; Takemura, H.; Natori, T.; Aikawa, N.; Shindo, K.; Ide, Y.; Tejima, S. Drone and GPS Sensors-Based Grassland Management Using Deep-Learning Image Segmentation. In Proceedings of the 2019 Third IEEE International Conference on Robotic Computing (IRC), Naples, Italy, 25–27 February 2019; pp. 608–611. [Google Scholar]
- Yang, M.-D.; Boubin, J.G.; Tsai, H.P.; Tseng, H.-H.; Hsu, Y.-C.; Stewart, C.C. Adaptive autonomous UAV scouting for rice lodging assessment using edge computing with deep learning EDANet. Comput. Electron. Agric. 2020, 179, 105817. [Google Scholar] [CrossRef]
- Weyler, J.; Magistri, F.; Seitz, P.; Behley, J.; Stachniss, C. In-Field Phenotyping Based on Crop Leaf and Plant Instance Segmentation. In Proceedings of the 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 3–8 January 2022; pp. 2968–2977. [Google Scholar]
- Guo, Y.; Zhang, J.; Yin, C.; Hu, X.; Zou, Y.; Xue, Z.; Wang, W. Plant Disease Identification Based on Deep Learning Algorithm in Smart Farming. Discret. Dyn. Nat. Soc. 2020, 2020, 2479172. [Google Scholar] [CrossRef]
- Getreuer, P. Chan-Vese Segmentation. Image Process. On Line 2012, 2, 214–224. [Google Scholar] [CrossRef]
- Torres-Sánchez, J.; Mesas-Carrascosa, F.J.; Jiménez-Brenes, F.M.; de Castro, A.I.; López-Granados, F. Early Detection of Broad-Leaved and Grass Weeds in Wide Row Crops Using Artificial Neural Networks and UAV Imagery. Agronomy 2021, 11, 749. [Google Scholar] [CrossRef]
- Zhang, X.; Li, N.; Ge, L.; Xia, X.; Ding, N. A Unified Model for Real-Time Crop Recognition and Stem Localization Exploiting Cross-Task Feature Fusion. In Proceedings of the 2020 IEEE International Conference on Real-time Computing and Robotics (RCAR), Asahikawa, Japan, 28–29 September 2020; pp. 327–332. [Google Scholar]
- Li, N.; Zhang, X.; Zhang, C.; Guo, H.; Sun, Z.; Wu, X. Real-Time Crop Recognition in Transplanted Fields with Prominent Weed Growth: A Visual-Attention-Based Approach. IEEE Access 2019, 7, 185310–185321. [Google Scholar] [CrossRef]
- Chen, C.-J.; Huang, Y.-Y.; Li, Y.-S.; Chen, Y.-C.; Chang, C.-Y.; Huang, Y.-M. Identification of Fruit Tree Pests With Deep Learning on Embedded Drone to Achieve Accurate Pesticide Spraying. IEEE Access 2021, 9, 21986–21997. [Google Scholar] [CrossRef]
- Qin, Z.; Wang, W.; Dammer, K.-H.; Guo, L.; Cao, Z. A Real-time Low-cost Artificial Intelligence System for Autonomous Spraying in Palm Plantations. arXiv 2021. [Google Scholar] [CrossRef]
- Parico, A.I.B.; Ahamed, T. An Aerial Weed Detection System for Green Onion Crops Using the You Only Look Once (YOLOv3) Deep Learning Algorithm. Eng. Agric. Environ. Food 2020, 13, 42–48. [Google Scholar] [CrossRef]
- Rui, C.; Youwei, G.; Huafei, Z.; Hongyu, J. A Comprehensive Approach for UAV Small Object Detection with Simulation-based Transfer Learning and Adaptive Fusion. arXiv 2021. [Google Scholar] [CrossRef]
- Parico, A.I.B.; Ahamed, T. Real Time Pear Fruit Detection and Counting Using YOLOv4 Models and Deep SORT. Sensors 2021, 21, 4803. [Google Scholar] [CrossRef] [PubMed]
- Jintasuttisak, T.; Edirisinghe, E.; Elbattay, A. Deep neural network based date palm tree detection in drone imagery. Comput. Electron. Agric. 2022, 192, 106560. [Google Scholar] [CrossRef]
- Tian, Y.; Yang, G.; Wang, Z.; Li, E.; Liang, Z. Detection of Apple Lesions in Orchards Based on Deep Learning Methods of CycleGAN and YOLOV3-Dense. J. Sens. 2019, 2019, 7630926. [Google Scholar] [CrossRef]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In Lecture Notes in Computer Science, Proceedings of the Computer Vision—ECCV 2016, Amsterdam, The Netherlands, 11–14 October 2016; Leibe, B., Matas, J., Sebe, N., Welling, M., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 21–37. [Google Scholar]
- Veeranampalayam Sivakumar, A.N.; Li, J.; Scott, S.; Psota, E.; Jhala, A.J.; Luck, J.D.; Shi, Y. Comparison of Object Detection and Patch-Based Classification Deep Learning Models on Mid- to Late-Season Weed Detection in UAV Imagery. Remote Sens. 2020, 12, 2136. [Google Scholar] [CrossRef]
- Ridho, M.F.; Irwan. Strawberry Fruit Quality Assessment for Harvesting Robot Using SSD Convolutional Neural Network. In Proceedings of the 2021 8th International Conference on Electrical Engineering, Computer Science and Informatics (EECSI), Semarang, Indonesia, 20–21 October 2021; pp. 157–162. [Google Scholar]
- Ammar, A.; Koubaa, A.; Benjdira, B. Deep-Learning-Based Automated Palm Tree Counting and Geolocation in Large Farms from Aerial Geotagged Images. Agronomy 2021, 11, 1458. [Google Scholar] [CrossRef]
- Su, W.-H.; Zhang, J.; Yang, C.; Page, R.; Szinyei, T.; Hirsch, C.D.; Steffenson, B.J. Automatic Evaluation of Wheat Resistance to Fusarium Head Blight Using Dual Mask-RCNN Deep Learning Frameworks in Computer Vision. Remote Sens. 2021, 13, 26. [Google Scholar] [CrossRef]
- Yang, M.D.; Tseng, H.H.; Hsu, Y.C.; Tseng, W.C. Real-time Crop Classification Using Edge Computing and Deep Learning. In Proceedings of the 2020 IEEE 17th Annual Consumer Communications Networking Conference (CCNC), Las Vegas, NV, USA, 10–13 January 2020; pp. 1–4. [Google Scholar]
- Menshchikov, A.; Shadrin, D.; Prutyanov, V.; Lopatkin, D.; Sosnin, S.; Tsykunov, E.; Iakovlev, E.; Somov, A. Real-Time Detection of Hogweed: UAV Platform Empowered by Deep Learning. IEEE Trans. Comput. 2021, 70, 1175–1188. [Google Scholar] [CrossRef]
- Weyler, J.; Quakernack, J.; Lottes, P.; Behley, J.; Stachniss, C. Joint Plant and Leaf Instance Segmentation on Field-Scale UAV Imagery. IEEE Robot. Autom. Lett. 2022, 7, 3787–3794. [Google Scholar] [CrossRef]
- Lottes, P.; Behley, J.; Chebrolu, N.; Milioto, A.; Stachniss, C. Robust joint stem detection and crop-weed classification using image sequences for plant-specific treatment in precision farming. J. Field Robot. 2020, 37, 20–34. [Google Scholar] [CrossRef]
- Su, D.; Qiao, Y.; Kong, H.; Sukkarieh, S. Real time detection of inter-row ryegrass in wheat farms using deep learning. Biosyst. Eng. 2021, 204, 198–211. [Google Scholar] [CrossRef]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA; 2017; Volume 30. [Google Scholar]
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv 2021. [Google Scholar] [CrossRef]
- Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; Zagoruyko, S. End-to-End Object Detection with Transformers. arXiv 2020. [Google Scholar] [CrossRef]
- Thai, H.-T.; Tran-Van, N.-Y.; Le, K.-H. Artificial Cognition for Early Leaf Disease Detection using Vision Transformers. In Proceedings of the 2021 International Conference on Advanced Technologies for Communications (ATC), Ho Chi Minh City, Vietnam, 14–16 October 2021; pp. 33–38. [Google Scholar]
- Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; Li, F.-F. ImageNet: A Large-Scale Hierarchical Image Database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; p. 8. [Google Scholar]
- Cassava Leaf Disease Classification. Available online: https://kaggle.com/competitions/cassava-leaf-disease-classification (accessed on 18 June 2022).
- Karila, K.; Alves Oliveira, R.; Ek, J.; Kaivosoja, J.; Koivumäki, N.; Korhonen, P.; Niemeläinen, O.; Nyholm, L.; Näsi, R.; Pölönen, I.; et al. Estimating Grass Sward Quality and Quantity Parameters Using Drone Remote Sensing with Deep Neural Networks. Remote Sens. 2022, 14, 2692. [Google Scholar] [CrossRef]
- Dersch, S.; Schottl, A.; Krzystek, P.; Heurich, M. Novel Single Tree Detection By Transformers Using Uav-Based Multispectral Imagery. ProQuest. Available online: https://www.proquest.com/openview/228f8f292353d30b26ebcdd38372d40d/1?pq-origsite=gscholar&cbl=2037674 (accessed on 15 June 2022).
- Chen, G.; Shang, Y. Transformer for Tree Counting in Aerial Images. Remote Sens. 2022, 14, 476. [Google Scholar] [CrossRef]
- Liu, W.; Salzmann, M.; Fua, P. Context-Aware Crowd Counting. arXiv 2019. [Google Scholar] [CrossRef]
- Zhang, X.; Han, L.; Sobeih, T.; Lappin, L.; Lee, M.; Howard, A.; Kisdi, A. The self-supervised spectral-spatial attention-based transformer network for automated, accurate prediction of crop nitrogen status from UAV imagery. arXiv 2022. [Google Scholar] [CrossRef]
- Coletta, L.F.S.; de Almeida, D.C.; Souza, J.R.; Manzione, R.L. Novelty detection in UAV images to identify emerging threats in eucalyptus crops. Comput. Electron. Agric. 2022, 196, 106901. [Google Scholar] [CrossRef]
- Bosilj, P.; Aptoula, E.; Duckett, T.; Cielniak, G. Transfer learning between crop types for semantic segmentation of crops versus weeds in precision agriculture. J. Field Robot. 2020, 37, 7–19. [Google Scholar] [CrossRef]
- Li, W.; Liu, C.; Yang, Y.; Awais, M.; Li, W.; Ying, P.; Ru, W.; Cheema, M.J.M. A UAV-aided prediction system of soil moisture content relying on thermal infrared remote sensing. Int. J. Environ. Sci. Technol. 2022, 19, 9587–9600. [Google Scholar] [CrossRef]
- Khan, S.; Tufail, M.; Khan, M.T.; Khan, Z.A.; Iqbal, J.; Alam, M. A novel semi-supervised framework for UAV based crop/weed classification. PLoS ONE 2021, 16, e0251008. [Google Scholar] [CrossRef] [PubMed]
- Alom, Z.; Taha, T.M.; Asari, V.K. Recurrent Residual Convolutional Neural Network based on U-Net (R2U-Net) for Medical Image Segmentation. arXiv 2018. [Google Scholar] [CrossRef]
- Fawakherji, M.; Potena, C.; Prevedello, I.; Pretto, A.; Bloisi, D.D.; Nardi, D. Data Augmentation Using GANs for Crop/Weed Segmentation in Precision Farming. In Proceedings of the 2020 IEEE Conference on Control Technology and Applications (CCTA), Montreal, QC, Canada, 24–26 August 2020; pp. 279–284. [Google Scholar]
- Fawakherji, M.; Potena, C.; Pretto, A.; Bloisi, D.D.; Nardi, D. Multi-Spectral Image Synthesis for Crop/Weed Segmentation in Precision Farming. Robot. Auton. Syst. 2021, 146, 103861. [Google Scholar] [CrossRef]
- Mazzia, V.; Khaliq, A.; Salvetti, F.; Chiaberge, M. Real-Time Apple Detection System Using Embedded Systems with Hardware Accelerators: An Edge AI Application. IEEE Access 2020, 8, 9102–9114. [Google Scholar] [CrossRef]
- Tay, Y.; Dehghani, M.; Bahri, D.; Metzler, D. Efficient Transformers: A Survey. ACM Comput. Surv. 2022, 55, 1–28. [Google Scholar] [CrossRef]
- Poli, M.; Massaroli, S.; Nguyen, E.; Fu, D.Y.; Dao, T.; Baccus, S.; Bengio, Y.; Ermon, S.; Ré, C. Hyena Hierarchy: Towards Larger Convolutional Language Models. arXiv 2023. [Google Scholar] [CrossRef]
- Senecal, J.J.; Sheppard, J.W.; Shaw, J.A. Efficient Convolutional Neural Networks for Multi-Spectral Image Classification. In Proceedings of the 2019 International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary, 14–19 July 2019; pp. 1–8. [Google Scholar]
- Yang, Z.; Li, J.; Shi, X.; Xu, Z. Dual flow transformer network for multispectral image segmentation of wheat yellow rust. In Proceedings of the International Conference on Computer, Artificial Intelligence, and Control Engineering (CAICE 2022), Zhuhai, China, 25–27 February 2022; SPIE: Bellingham, WA, USA, 2022; Volume 12288, pp. 119–125. [Google Scholar]
- Tao, C.; Meng, Y.; Li, J.; Yang, B.; Hu, F.; Li, Y.; Cui, C.; Zhang, W. MSNet: Multispectral semantic segmentation network for remote sensing images. GIScience Remote Sens. 2022, 59, 1177–1198. [Google Scholar] [CrossRef]
- López, M.; Alberto, J. The Use of Multispectral Images and Deep Learning Models for Agriculture: The Application on Agave. December 2022. Available online: https://repositorio.tec.mx/handle/11285/650159 (accessed on 12 March 2023).
- Victor, B.; He, Z.; Nibali, A. A systematic review of the use of Deep Learning in Satellite Imagery for Agriculture. arXiv 2022. [Google Scholar] [CrossRef]
- Sarigiannidis, P. Peach Tree Disease Detection Dataset. IEEE, 23 November 2022. Available online: https://ieee-dataport.org/documents/peach-tree-disease-detection-dataset (accessed on 12 March 2023).
Stage | Challenges |
---|---|
Preharvest | Disease detection and diagnosis, seed quality, fertilizer application, field segmentation, and urban vegetation classification |
Harvesting | Crops detection and classification, pest detection and control, crop yield estimation, tree counting, maturity level, and cropland extent |
Postharvest | Fruit grading, quality-retaining processes, storage environmental conditions, and chemical usage detection |
Vegetation Index | Spectrum | Equation |
---|---|---|
Excess green (ExG) | RGB | 2G − R − B |
Excess red (ExR) | RGB | 1.3R − G |
Color index of vegetation (CIVE) | RGB | 0.441R − 0.81G + 0.385B + 18.7874
Excess green minus red (ExGR) | RGB | 3G − 2.4R − B |
Normalized difference vegetation index (NDVI) | RGB + NIR | (NIR − R)/(NIR + R) |
Normalized difference red-edge index (NDRE) | RGB + NIR | (NIR − RE)/(NIR + RE) |
Ratio vegetation index (RVI) | RGB + NIR | NIR/R |
Perpendicular vegetation index (PVI) | RGB + NIR | √[(0.335NIR − 0.149R)² + (0.335R − 0.852NIR)²]
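As an illustration of how the vegetation indices in the table above can be derived from co-registered UAV band rasters, here is a minimal NumPy sketch; it is not taken from any surveyed paper, and the band names, normalization, and epsilon guard are assumptions.

```python
import numpy as np

def vegetation_indices(r, g, b, nir=None, red_edge=None, eps=1e-6):
    """Per-pixel vegetation indices from reflectance bands (arrays of floats in [0, 1])."""
    indices = {
        "ExG": 2 * g - r - b,                                # excess green
        "ExR": 1.3 * r - g,                                  # excess red (coefficient as in the table)
        "CIVE": 0.441 * r - 0.81 * g + 0.385 * b + 18.7874,  # color index of vegetation
        "ExGR": 3 * g - 2.4 * r - b,                         # excess green minus red
    }
    if nir is not None:
        indices["NDVI"] = (nir - r) / (nir + r + eps)        # normalized difference vegetation index
        indices["RVI"] = nir / (r + eps)                     # ratio vegetation index
        if red_edge is not None:
            indices["NDRE"] = (nir - red_edge) / (nir + red_edge + eps)
    return indices

# Example usage on a tiny synthetic 2x2 tile (values are made up)
r = np.array([[0.20, 0.30], [0.25, 0.40]])
g = np.array([[0.50, 0.40], [0.45, 0.30]])
b = np.array([[0.10, 0.20], [0.15, 0.20]])
nir = np.array([[0.70, 0.60], [0.65, 0.50]])
print(vegetation_indices(r, g, b, nir=nir)["NDVI"])
```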
SVM
Paper | Model/Architecture | Application | Approach | Comments | Best Results
---|---|---|---|---|---
Tendolkar et al. [52] | SVM | Pesticide/disease treatment | Dual-step based approach of pixel-wise NDVI calculation and semantic segmentation helps in overcoming NDVI issues | Model was not compared to any other | Precision: 85% Recall: 81% F1-score: 79% |
Natividade et al. [61] | SVM | Fertilization | Pattern recognition system allows for classification of images taken by low-cost cameras | Accuracy, precision, and recall values of model varied highly across datasets | Dataset #1 (1st configuration): [accuracy: 78%, precision: 93%, recall: 86%, accuracy: 72%] Dataset #2: [accuracy: 83%, precision: 97%, recall: 94%, accuracy: 73%] |
Pérez-Ortiz et al. [62] | SVM | Crop-row detection | Able to detect weeds outside and within crop rows; does not require a big training dataset | Segmentation process produced salt-and-pepper noise effect on images Training images were manually selected Model inference was influenced by training image selection | Mean average error (MAE): 12.68% |
César Pereira et al. [53] | LSVM | Crop-row detection | Model can be trained fast with a small training set | Image dataset was small and simple, containing images of sugarcane cultures only | Using RGB + EXG + GABOR filters: IOU: 0.788654 F1: 0.880129 |
KNN
Paper | Model/Architecture | Application | Strengths | Comments | Best Results
---|---|---|---|---|---
César Pereira et al. [53] | KNN3, KNN11 | Crop-row detection | Simplest algorithms to implement amongst implemented algorithms in the paper | Models did not achieve the best results in the paper | KNN3 and 11 with RGB + EXG + GABOR filters: [IOU: 0.76, F1: 0.86] |
Rodríguez-Garlito and Paz-Gallardo [63] | KNN | Crop-row detection/land-cover mapping | Uses an automatic window processing method that allows for the use of ML algorithms on large multispectral images | Model did not achieve the best results in the paper | Approximate values: AP: 0.955 Accuracy score: 0.918 |
Rocha et al. [64] | KNN | Crop-row detection | Best-performing classifier | Model could not perform sugarcane line detection and fault measurement on sugarcane fields of all growth stages | Relative error: 1.65% |
DT and RF
Paper | Model/Architecture | Application | Strengths | Comments | Best Results
---|---|---|---|---|---
Natividade et al. [61] | DT | Fertilization | Pattern recognition system allows for classification of images taken by low-cost cameras | Model did not outperform SVM on all chosen metrics | Dataset #2: [accuracy: 77%, precision: 87%, recall: 90%, accuracy: 79%] |
Lottes et al. [54] | RF | Pesticide/disease treatment | Model can detect plants relying only on its shape | Low precision and recall for detecting weeds under the “other weeds” class | Saltbush class recall: 95%, chamomile class recall: 87%, sugar beet class recall: 78%, recall of other weeds class: 45% Overall model accuracy for predicted objects: 86% |
CNN
Paper | Model/Architecture | Application | Strengths | Comments | Best Results
---|---|---|---|---|---
Crimaldi et al. [65] | Inception V3 | Pesticide/disease treatment | The identification time was 200 ms which is good for real-time applications | Low accuracy | Accuracy of 78.1% |
Milioto et al. [66] | CNN model fed with RGB + NIR camera images | Spatial segregation and segmentation | High accuracy for early growth stage | Low accuracy for the later growth stage | Early growth stage Accuracy: 97.3% Recall: 98% Later growth stage Accuracy: 89.2% Recall: 99% |
Bah et al. [67] | AlexNet | Pesticide/disease treatment | Fewer images with high resolution from a drone | Overlapping of the leaves between crops and weeds | Best precision was for the spinach dataset with 93% |
Reddy et al. [68] | Customized CNN | Spatial segregation and segmentation | The results had a high precision and recall | Large dataset | Precision of 99.5% for the leaf snap dataset; the Flavia, Swedish leaf, and UCI leaf datasets had a recall of 98% |
Sembiring et al. [69] | Customized CNN | Pesticide/disease treatment | Low training time compared to other models compared in the paper | Not the highest-performing model compared in the paper | Accuracy of 97.15% |
Geetharamani et al. [70] | Deep CNN | Pesticide/disease treatment | Can classify 38 distinct classes of healthy and diseased plants | Large dataset | Classification accuracy of 96.46% |
Karthik et al. [71] | Residual learning CNN with attention mechanism | Pesticide/disease treatment | Prominent level of accuracy and only 600 k parameters, which is lower than the other papers compared in this paper | Large dataset | Overall accuracy of 98% |
Nanni et al. [73] | Ensembles of CNNs based on different topologies (ResNet50, GoogleNet, ShuffleNet, MobileNetv2, and DenseNet201) | Pesticide/disease treatment | Using Adam helps in decreasing the learning rate of parameters whose gradient changes more frequently | IP102 is a large dataset | 95.52% on Deng and 73.46% on IP102 datasets |
Bah et al. [85] | CRowNet | Crop-row detection | Able to detect rows in images of several types of crops | Not a single CNN model | Accuracy: 93.58% IoU: 70%
Atila et al. [74] | EfficientNet | Pesticide/disease treatment | Reduces the calculations by the square of the kernel size | Did not have the lowest training time compared to the other models in the paper | Plant Village dataset Accuracy: 99.91% Precision: 98.42% Original and augmented datasets Accuracy: 99.97% Precision: 99.39% |
Prasad et al. [75] | EfficientDet | Pesticide/disease treatment | Scaling ability and FLOP reduction | Performed well for limited labeled datasets; however, the accuracy was still low | Identifier model average accuracy: 75.5% |
Albattah et al. [76] | EfficientNetV2-B4 | Pesticide/disease treatment | Highly reliable results with low time complexity | Large dataset | Precision: 99.63% Recall: 99.93% Accuracy: 99.99% F1: 99.78%
Mishra et al. [77] | Standard CNN | Pesticide/disease treatment | Can run on devices such as a Raspberry Pi, smartphones, and drones; works in real time with no internet connection | NCS recognition accuracy was not good and could be improved, according to the authors | Accuracy GPU: 98.40% NCS chip: 88.56%
Bah et al. [78] | ResNet18 | Spatial segregation and segmentation | Outperformed SVM and RF methods and uses unsupervised training dataset | Results of the ResNet18 are lower than SVM and RF in the spinach field | AUC: 91.7% on both supervised and unsupervised labelled data |
Zheng et al. [79] | Multiple CNN models including: CNN joint Model, Xception model, and ResNet50 model | Pesticide/disease treatment | Compares multiple models | The joint model had trouble with LAI estimation, and the vision transformer had trouble with percentage canopy cover estimation | Xception model: 0.28 CNN–ConvLSTM: 0.32 ResNet50: 0.41 |
Yang et al. [80] | ShuffleNet V2 | Pesticide/disease treatment | The total number of params was 3.785 M, which makes it portable and easy to apply | Not the lowest number of params when compared to the models in the paper | Accuracy MSI: 82.4% RGB: 93.41% TIRI: 68.26% |
Briechleet et al. [81] | PointNet++ | Spatial segregation and segmentation | Good score compared to the models mentioned in the paper | Not yet tested for practical use | Accuracy: 90.2% |
Aiger et al. [82] | CNN | Environmental conditions | Large-scale, robust, and high-accuracy | Low accuracy for 2D CNN | 96.3% accuracy |
U-Net
Paper | Model/Architecture | Application | Strengths | Comments | Best Results
---|---|---|---|---|---
Lin et al. [89] | U-Net | Tree/crop counting | Can detect overlapping sorghum panicles | The performance decreased with a lower number of training images (<500) | Accuracy: 95.5% RMSE: 2.5% |
Arun et al. [25] | Reduced U-Net | Spatial segregation and segmentation | Reduces the total number of parameters and results in a lower error rate | The comparison was made with models applied to problems not related to agriculture | Accuracy: 95.34% Error rate: 7.45%
Hoummaidi et al. [90] | U-Net | Land-cover mapping | Real-time and uses multispectral images | Tree obstruction and physical characteristics caused it to have errors; however, it could be improved using a better dataset | Accuracy: 89.7% Palm trees Detection rate: 96.03% Ghaf trees Detection rate: 94.54% |
Doha et al. [91] | U-Net | Crop-row detection | The method they used could refine the results of the U-Net to reduce errors, as well as do frame interpolation of the input video stream | Not enough results were given | Variance: 0.0083 |
Zhang et al. [92] | DF-U-Net | Pesticide/disease treatment | Reduced the computation load by more than half and had the highest accuracy among other models compared | Early-stage rust disease is difficult to recognize | F1: 94.13% OA: 96.93% Precision: 94.02% |
Tsuichihara et al. [93] | U-Net | Spatial segregation and segmentation | Reached about 80% accuracy with only 48 training images | Low accuracy, attributable to the small number of training images, since six colors had to be manually painted on each image | Accuracy: ~80%
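Several of the studies in the table above adapt the U-Net encoder-decoder with skip connections to UAV imagery. The following is a rough one-level PyTorch sketch of that pattern with assumed layer sizes; it is illustrative only and does not reproduce any specific surveyed model.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, as used at each U-Net level."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """One-level U-Net: encoder -> bottleneck -> decoder with a skip connection."""
    def __init__(self, in_channels=3, num_classes=2):
        super().__init__()
        self.enc = conv_block(in_channels, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(32, 64)
        self.up = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec = conv_block(64, 32)          # 64 = 32 (upsampled) + 32 (skip)
        self.head = nn.Conv2d(32, num_classes, kernel_size=1)

    def forward(self, x):
        e = self.enc(x)                         # full-resolution features
        b = self.bottleneck(self.pool(e))       # downsampled context
        d = self.up(b)                          # upsample back to input resolution
        d = self.dec(torch.cat([d, e], dim=1))  # fuse the skip connection
        return self.head(d)                     # per-pixel class logits

# Example: a 4-band (e.g., RGB + NIR) 128x128 UAV tile with three output classes
model = TinyUNet(in_channels=4, num_classes=3)
logits = model(torch.randn(1, 4, 128, 128))
print(logits.shape)  # torch.Size([1, 3, 128, 128])
```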
Other Segmentation Models
Paper | Model/Architecture | Application | Strengths | Comments | Best Results
---|---|---|---|---|---
Yang et al. [94] | EDANet | Spatial segregation and segmentation | Improved prior work on rice identification and lodging detection by 2.51% and 8.26%, respectively | Drone images taken from a greater height did not perform well; however, with the proposed method, reliable results could still be obtained | Rice identification accuracy: 95.28% Lodging accuracy: 86.17% (99.25% if lodging below 2.5% is neglected)
Weyler et al. [95] | ERFNet-based instance segmentation | Spatial segregation and segmentation | Data were gathered from real agricultural fields | Comparison was made on different datasets based on average precision scores using ERFNet | Crop leaf segmentation Average precision: 48.7% Average recall: 57.3% Crop segmentation Average precision: 60.4% Average recall: 68% |
Guo et al. [96] | Three-stage model with RPN, Chan-Vese algorithm, and a transfer learning model | Pesticide/disease treatment | Unsupervised; outperformed the traditional ResNet101, which had an accuracy of 42.5% | The Chan-Vese algorithm ran for a long time | Accuracy: 83.75%
Sanchez et al. [98] | MLP | Spatial segregation and segmentation | Evaluated in commercial fields and not under controlled conditions | The dataset was captured at noon to avoid shadows | Overall accuracy on two classes of crops: 80.09%
Zhang et al. [99] | UniStemNet | Spatial segregation and segmentation | Joint crop recognition and stem detection in real time; fast, processing each image within 6 ms | The scores of this model were not always the best, and the differences were small | Segmentation F1: 97.4% IoU: 94.5% Stem detection SDR: 97.8%
YOLO
Paper | Model/Architecture | Application | Strengths | Comments | Best Results
---|---|---|---|---|---
Chen et al. [101] | Tiny-YOLOv3 | Pesticide/disease treatment | Excellent results in terms of FPS and mAP; reduces pesticide use | High false-identification rate for adult T. papillosa | mAP score of 95.33%
Qin et al. [102] | Ag-YOLO (v3-tiny) | Pesticide/disease treatment | Tested YOLOv3 with multiple backbones and achieved optimal results in terms of FPS and power consumption | Uses an NCS2, which supports 16-bit floating-point values only | F1-score of 92.05%
Parico et al. [103] | YOLO-Weed (v3) | Pesticide/disease treatment | High speed and mAP score | Limitations in detecting small objects | mAP score of 93.81% F1 score of 94% |
Parico et al. [105] | YOLOv4 (multiple versions) | Tree/crop counting | Proved that YOLOv4-CSP has the lowest FPS with the highest mAP | Limitations in detecting small objects | AP score of 98% |
Jintasuttisak et al. [106] | YOLOv5m | Tree/crop counting | Compared different YOLO versions and proved that YOLOv5 with medium depth outperforms the rest even with overlapped trees | YOLOv5x scored a higher detection average due to the increased number of layers | mAP score of 92.34% |
Tian et al. [107] | YOLOv3 (modified) | Pesticide/diseases treatment | Tackles the lack of data by generating new images using CycleGAN | The model is weak without the images generated using CycleGAN | F1-score of 81.6% and IoU score of 91.7% |
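Single-shot detectors such as the YOLO variants above (and the SSD models in the next table) emit many overlapping candidate boxes per object; a confidence threshold followed by non-maximum suppression (NMS) keeps the best ones before metrics such as mAP are computed. Below is a minimal, generic Python sketch of greedy NMS; the thresholds and helper names are assumptions, not details from the surveyed papers.

```python
def box_iou(a, b):
    """Intersection over union for boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def non_max_suppression(boxes, scores, iou_threshold=0.5, score_threshold=0.25):
    """Greedy NMS: keep the highest-scoring box, drop heavily overlapping boxes, repeat."""
    order = sorted((i for i, s in enumerate(scores) if s >= score_threshold),
                   key=lambda i: scores[i], reverse=True)
    kept = []
    while order:
        best = order.pop(0)
        kept.append(best)
        order = [i for i in order if box_iou(boxes[best], boxes[i]) < iou_threshold]
    return kept

# Example: two overlapping detections of the same tree and one separate detection
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(non_max_suppression(boxes, scores))  # -> [0, 2]
```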
SSD
Paper | Model/Architecture | Application | Strengths | Comments | Best Results
---|---|---|---|---|---
Veeranampalayam Sivakumar et al. [109] | SSD with a feature extraction module made of an Inception v2 network and 4 convolutional layers | Spatial segregation and segmentation | Model is scale- and translation-invariant | Low optimal confidence threshold value of 0.1; failure to detect weeds at the borders of images | Precision: 0.66, recall: 0.68, F1-score: 0.67, mean IoU: 0.84, inference time: 0.21 s |
Ridho and Irwan [110] | SSD with MobileNet as a base for the feature extraction module | Seed quality and germination | Fast detection and image processing | Detection was not performed on a UAV; model did not yield the best accuracy in the paper | Accuracy: 90% |
Two-Stage Detectors
Paper | Model/Architecture | Application | Strengths | Comments | Best Results
---|---|---|---|---|---
Sivakumar et al. [109] | FRCNN | Spatial segregation and segmentation | The optimal confidence threshold of the SSD model was found to be much lower than that of the faster RCNN model | Inference time of SSD is better than that of FRCNN, but it can be improved at the cost of performance | 66% F1-score and 85% IoU. |
Ammar et al. [111] | FRCNN | Tree/crop counting | Large advantage in terms of speed | Very weak in detecting trees; outperformed by EfficientDet-D5 and YOLOv3 | 87.13% and 49.41% IoU for palm and other trees, respectively
Su et al. [112] | Mask-RCNN | Pesticide/disease treatment | Superior in comparison to CNN | Inference time was not taken into consideration | 98.81% accuracy |
Yang et al. [113] | FCN-AlexNet | Spatial segregation and segmentation | Provides good comparison between SegNet and AlexNet | Outperformed by SegNet | 88.48% recall rate |
Menshchikov et al. [114] | FCNN | Pesticide/diseases treatment | Proposed method is applicable in real-world scenario, and the use of RGB cameras is cheaper than multispectral cameras | Complex algorithms compared to the multispectral approach | ROC AUC in segmentation: 0.96 |
Hosseiny et al. [10] | Framework with a Faster R-CNN core and a ResNet101 backbone for object detection | Tree/crop counting | Results are good for an unsupervised method | Tested only on single-object detection; automatic crop-row estimation can fail due to dense plant distribution | Precision: 0.868 Recall: 0.849 F1: 0.855
Autoencoder
Paper | Model/Architecture | Application | Strengths | Comments | Best Results
---|---|---|---|---|---
Weyler et al. [115] | CNN/autoencoder | Spatial segregation and segmentation | Performed joint instance segmentation of crop plants and leaves using a two-step approach of detecting individual instances of plants and leaves followed by pixel-wise segmentation of the identified instances | Low segmentation precision for smaller plants; outperformed by Mask-RCNN | 0.94 for AP50 |
Lottes et al. [116] | FCN/autoencoder | Pesticide/disease treatment | Performed joint stem detection and crop–weed segmentation using an autoencoder with two task-specific decoders, one for stem detection and the other for pixel-wise semantic segmentation | Did not achieve the best mean recall across all tested datasets; also produced false stem detections in soil regions | Achieved mAP scores of 85.4%, 66.9%, 42.9%, and 50.1% for the Bonn, Stuttgart, Ancona, and Eschikon datasets, respectively, for stem detection, and 69.7%, 58.9%, 52.9%, and 44.2% for segmentation
Su et al. [117] | Autoencoder | Spatial segregation and segmentation | Utilized two position-aware encoder–decoder subnets in their DNN architecture to segment inter-row and intra-row ryegrass with higher segmentation accuracy | Low pixel-wise semantic segmentation accuracy for early-stage wheat | Mean accuracy and IoU scores of 96.22% and 64.21%, respectively.
Transformers
Paper | Model/Architecture | Application | Strengths | Comments | Best Results
---|---|---|---|---|---
Thai et al. [121] | ViT | Pesticide/diseases treatment | Proposed a smart solution powered by Internet of things (IoT) | The performance was not tested with the system attached to a drone | 90.3% F1-score in comparison to the best CNN score of 89.2% achieved by Resnet50 model |
Reedha et al. [24] | ViT | Spatial segregation and segmentation | The classification of crops and weed images using ViTs yielded the best prediction performance | Slightly outperformed existing CNN models | F1-scores of 99.4% and 99.2% were obtained from ViT-B16 and ViT-B32, respectively |
Karila et al. [124] | ViT | Crop yield estimation/seed quality and germination | The ViT RGB models performed the best on several types of datasets | VGG CNN models provided equally reliable results in most cases | Multiple results shown on several types of datasets |
Dersch et al. [125] | DETR | Spatial segregation and segmentation | DETR clearly outperformed YOLOv4 in mixed and deciduous plots | DETR failed to detect smaller trees, performing far worse than YOLOv4 in multiple cases | F1-scores of 86% and 71% in mixed and deciduous plots, respectively
Chen et al. [126] | DENT | Tree/crop counting | The model outperformed most of the state-of-the-art methods | CANNet achieved better results | A mean absolute error (MAE) of 10.7 and a root-mean-squared error (RMSE) of 13.7 |
Coletta et al. [129] | Active Learning | Pesticide/disease treatment | The model can classify unknown data | Did not test the performance of other classification models | An accuracy of 98% and a recall of 97% |
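The ViT-style models in the table above work by splitting each image into fixed-size patches, projecting each patch to an embedding, and feeding the resulting token sequence (plus a class token and positional embeddings) to a transformer encoder. The following PyTorch sketch shows only that patch-embedding step with ViT-B/16-like sizes; it is illustrative and is not code from the surveyed papers.

```python
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """Split an image into non-overlapping patches and project each to an embedding vector."""
    def __init__(self, img_size=224, patch_size=16, in_channels=3, embed_dim=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        # A strided convolution is equivalent to flattening each patch and applying a linear layer.
        self.proj = nn.Conv2d(in_channels, embed_dim, kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, self.num_patches + 1, embed_dim))

    def forward(self, x):
        tokens = self.proj(x).flatten(2).transpose(1, 2)         # (B, num_patches, embed_dim)
        cls = self.cls_token.expand(x.shape[0], -1, -1)          # prepend a classification token
        return torch.cat([cls, tokens], dim=1) + self.pos_embed  # add learned positional embeddings

# Example: a 224x224 RGB crop/weed image tile, ViT-B/16-style patching
embed = PatchEmbedding()
tokens = embed(torch.randn(2, 3, 224, 224))
print(tokens.shape)  # torch.Size([2, 197, 768]) -> 196 patches + 1 class token
```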
Problem | Type of Learning | Paper | Model/Architecture | Dataset | Best Results |
---|---|---|---|---|---|
Spatial segregation and segmentation | Supervised | Jintasuttisak et al. [106] | YOLOv5 | Date palm trees collected using a drone | mAP score of 92.34% |
Semi-supervised | Fawakherji et al. [118] | cGANs | 5400 RGB images of pears and strawberries, of which 20% were labeled | An IoU score of 83.1% on mixed data including both original and synthesized | |
Unsupervised | Bah et al. [78] | ResNet18 | UAV images of spinach and bean fields | AUC: 91.7% | |
Pesticide/diseases treatment | Supervised | Zhang et al. [73] | DF-U-Net | Yangling UAV images | F1: 94.13% Accuracy: 96.93% Precision: 94.02% |
Semi-supervised | Coletta et al. [129] | Active learning: SVM | UAV images collected from Eucalyptus plantations | An accuracy of 98% and a recall of 97% | |
Unsupervised | Khan et al. [117] | SGAN | UAV images collected from strawberry and pea fields | Accuracy ~90% | |
Fertilization | Supervised | Natividade et al. [61] | SVM | UAV images of vineyards and forests | Accuracy: 83%, precision: 97%, recall: 94% |
Unsupervised | Zhang et al. [115] | SSVT | UAV images of a wheat field | 96.5% on 384 × 384 pixel images | |
Crop-row detection | Supervised | Cesar Pereira et al. [47] | SVM | Manually collected RGB images | 88.01% F1-Score |
Semi-supervised | Pérez-Ortiz et al. [62] | SVM | UAV images collected from a sunflower plot | MAE: 12.68% | |
Tree/crop counting | Supervised | Ammar et al. [98] | FRCNN | Tree counting | 87.13% IoU on palms and 49.41% on other trees |
Semi-supervised | Chen et al. [113] | DENT | Yosemite tree dataset | 10.7 MAE score | |
Others | Supervised | Aiger et al. [82] | CNN | UAV images of various types of land cover | 96.3% accuracy |
[Table: coverage of agricultural problems (spatial segregation and segmentation; pesticides/diseases treatment; tree/crop counting; crop-row detection; fertilization; others) across supervised, semi-supervised, and unsupervised learning approaches; only the row and column labels are recoverable.]