Review

Evolution of Deep Learning Approaches in UAV-Based Crop Leaf Disease Detection: A Web of Science Review

1 Faculty of Agrobiotechnical Sciences Osijek, Josip Juraj Strossmayer University of Osijek, Vladimira Preloga 1, 31000 Osijek, Croatia
2 Layer d.o.o., Vukovarska Cesta 31, 31000 Osijek, Croatia
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(19), 10778; https://doi.org/10.3390/app151910778
Submission received: 14 September 2025 / Revised: 6 October 2025 / Accepted: 6 October 2025 / Published: 7 October 2025

Abstract

The integration of unmanned aerial vehicles (UAVs) and deep learning (DL) has significantly advanced crop disease detection by enabling scalable, high-resolution, and near real-time monitoring within precision agriculture. This systematic review analyzes peer-reviewed literature indexed in the Web of Science Core Collection as articles or proceeding papers through 2024. The main selection criterion was combining “unmanned aerial vehicle*” OR “UAV” OR “drone” with “deep learning”, “agriculture” and “leaf disease” OR “crop disease”. Results show a marked surge in publications after 2019, with China, the United States, and India leading research contributions. Multirotor UAVs equipped with RGB sensors are predominantly used due to their affordability and spatial resolution, while hyperspectral imaging is gaining traction for its enhanced spectral diagnostic capability. Convolutional neural networks (CNNs), along with emerging transformer-based and hybrid models, demonstrate high detection performance, often achieving F1-scores above 95%. However, critical challenges persist, including limited annotated datasets for rare diseases, high computational costs of hyperspectral data processing, and the absence of standardized evaluation frameworks. Addressing these issues will require the development of lightweight DL architectures optimized for edge computing, improved multimodal data fusion techniques, and the creation of publicly available, annotated benchmark datasets. Advancements in these areas are vital for translating current research into practical, scalable solutions that support sustainable and data-driven agricultural practices worldwide.

1. Introduction

Precise crop leaf disease detection is an essential element of modern precision agriculture, allowing efficient monitoring of plant growth and timely disease treatment [1]. Accurate leaf disease identification and monitoring also allow the assessment of plant health [2] and stress responses [3], which are key to maximizing yield and resource-use efficiency. In contrast, conventional manual observation for crop leaf disease detection is labor-intensive and prone to error [4]. Rapid development in remote sensing, computer vision, and machine learning has enabled leaf disease detection that is nondestructive, high-throughput, and real-time [5]. At the same time, climate change and population growth are intensifying global agricultural demands, making robust and scalable leaf disease detection methods increasingly essential for food and environmental security [6]. In addition, these technologies enable more effective use of agricultural inputs such as water, fertilizers, and pesticides, avoiding overapplication and reducing the environmental footprint of farming [7]. Within precision agriculture, advanced leaf disease detection methods not only help decrease farming costs but also counteract unfavorable impacts of agricultural practices on ecosystems, such as soil loss and water pollution [8,9]. As global agricultural systems strive to meet the dual imperative of food security and environmental protection, creating sustainable, climate-smart farming practices is essential for shaping the future of agriculture [10].
For decades, traditional remote sensing platforms were limited in their ability to collect real-time data with the high spatial and temporal resolution needed to monitor crops at the plant level [11]. UAVs address the inadequate spatial resolution of satellite imagery for crop leaf detection at the individual plant level [12,13]. Compared to satellite imagery with coarse spatial resolution [14], UAVs provide an adequate ground sampling distance (GSD) to identify individual plant leaves [15]. Early detection of plant health indicators, namely disease symptoms, nutrient deficiencies, and water stress, which are often indistinguishable in low-resolution satellite data, is one of the driving needs behind this capability [16]. UAVs also offer a high degree of flexibility, allowing data to be acquired on demand at a particular growth stage or under differing environmental conditions [17]. Thanks to the integration of advanced sensors, such as multispectral and hyperspectral cameras, UAVs generate rich datasets that support machine learning and computer vision algorithms for accurate leaf detection and provide a basis for phenotyping [2]. As a result of these properties, UAVs have become an indispensable tool for advancing precision agriculture research [18] and an innovative, inexpensive means of improving crop management [13], resource utilization, and agricultural productivity beyond what is possible with satellite imagery [19].
Conventional approaches to detecting crop diseases still frequently rely on manual scouting and satellite remote sensing, both of which have serious limitations [20]. Manual scouting is labor-intensive and subjective, and may not be practical for large fields or regions with limited labor supply [21]. Although they provide extensive coverage, satellite-based techniques generally lack the spatial resolution needed to identify early-stage disease symptoms at the leaf or canopy scale, while their acquisition frequency is limited by orbital periods and atmospheric scattering [22]. UAVs overcome these limitations by providing high-resolution, multi-temporal imagery combined with deep learning techniques that improve plant feature extraction and interpretation [23]. Agricultural environments are complex and variable, which often poses difficulties for traditional image processing techniques, for example due to leaf overlap [24] and changing atmospheric conditions [25]. The challenges of leaf detection, segmentation, and classification are best handled by deep learning models, in particular CNNs, because they learn robust and accurate hierarchical features directly from raw data [26,27]. When applied to UAV-captured imagery, deep learning models can process large amounts of high-resolution data to identify individual leaves [28], detect early signs of stress or disease, and monitor plant growth with little to no human intervention [29]. Furthermore, transfer learning and data augmentation allow deep learning techniques to be customized to specific crop and soil conditions, increasing adaptability and accuracy [30]. Advanced deep learning architectures combined with multispectral and hyperspectral UAV imagery further reduce the need for manual feature extraction and enhance the potential for real-time, scalable crop monitoring [31]. However, despite these advances, critical research gaps remain regarding generalization across different environments, computational efficiency, the selection of optimal imaging sensors, and data variability.
This study aims to analyze the latest trends in UAV and deep learning applications for crop disease analysis, evaluate the performance of different deep learning algorithms, and identify key challenges in current research, building on the knowledge of previous studies and reviews, such as Bouguettaya et al. [23], Kuswidiyanto et al. [32], and Shahi et al. [33]. As this is a novel topic subject to rapid developments in both the remote sensing aspect (UAVs) and the image processing aspect (deep learning), this review focuses on state-of-the-art advances from the past few years that were not covered by previous reviews. Existing research is mostly limited to particular crops, sensors, or algorithms, preventing generalization to different agricultural settings [23,32,33]. Moreover, previous reviews have predominantly treated UAV platforms and deep learning methods separately, without an integrated view of how the two technologies can be used together to detect crop leaf diseases. This review presents crop leaf disease studies based on UAVs and deep learning on a broad scale in Section 2, while the UAV and deep learning aspects are analyzed in depth in Section 3 and Section 4, respectively. Section 5 contains the main conclusions of the study, as well as remarks for future research in the field.

2. Crop Leaf Disease Studies Based on UAVs and Deep Learning Indexed in the Web of Science Core Collection

The analysis of crop leaf disease studies based on UAVs and deep learning was performed using the official Web of Science Core Collection database. Queries combined keywords related to the study topic with Boolean operators and searched all indexed fields. The search included all records indexed from the database inception through December 2024, the latest complete year at the time of analysis. The general query structure was (“unmanned aerial vehicle*” OR “UAV” OR “drone”) AND (“deep learning”) AND (“agriculture” OR “crop” OR “leaf disease” OR “crop disease”). All studies returned by these queries were included in the quantitative analysis, while the more detailed analysis of the UAV and deep learning aspects considered only documents indexed as “Article” or “Proceeding Paper” (the latter limited to full conference papers) in the Web of Science Core Collection. Short abstracts, editorial materials, book chapters, patents, and non-English papers were excluded from the study. For the evaluation of the cumulative number of studies per year, all indexed studies were considered regardless of their Journal Impact Factor in Web of Science.
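As a concrete illustration of how these search strings can be reproduced, the following minimal sketch assembles the queries described above; the Python helper and the ALL= (all fields) field tag are illustrative assumptions based on the stated query structure, not an official Web of Science interface.

```python
# Illustrative reconstruction of the Web of Science advanced-search strings used in
# this review; the ALL= (all fields) tag and exact grouping are assumptions based on
# the query structure described above, not an official API or validated syntax.
PLATFORM = '("unmanned aerial vehicle*" OR "UAV" OR "drone")'
METHOD = '("deep learning")'

TOPICS = {
    "agriculture (broad)": '("agriculture" OR "crop")',
    "leaf disease": '("leaf disease" OR "crop disease")',
}

for label, topic in TOPICS.items():
    query = f"ALL=({PLATFORM} AND {METHOD} AND {topic})"
    print(f"{label}: {query}")
```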

State of Crop Leaf Disease Detection Studies Based on UAVs and Deep Learning on a Global Scale

Figure 1 presents the cumulative number of studies per major agrotechnical segment based on UAVs and deep learning, alongside the broader scope of agricultural studies over time, categorized into six application areas: crop monitoring, weed detection, yield prediction, leaf disease detection, irrigation, and digital soil mapping. The query for broad agricultural studies combined “unmanned aerial vehicles” OR “UAV” OR “drone” with “deep learning” and “agriculture”. To analyze the six major agrotechnical application areas, an additional keyword was included in each query: “crop monitoring”, “leaf disease” OR “crop disease”, “weed detection”, “yield prediction”, “irrigation”, and “soil mapping”. The data reveal a sharp increase in studies combining UAVs and deep learning in agriculture from 2019 onward, averaging 24.3% year-on-year growth. Among the categories, crop monitoring traditionally had the highest number of studies, closely followed by leaf disease studies, with a total of 138 indexed studies up to 2024. The overall trend indicates that the integration of UAVs and deep learning in agriculture has gained significant research interest in recent years, as there were no indexed studies matching all set queries prior to 2017.
To put the state of UAV and deep learning applications in crop disease detection into perspective, both the geographical distribution of research activity and its consistency with global sustainability priorities were analyzed. Mapping research contributions by country (Figure 2) highlights regional strengths and gaps, showing where technological advances are concentrated and where they are still needed to build global agricultural resilience. This perspective not only identifies the countries leading scientific progress but also shows how ongoing work responds to urgent global problems such as food security, climate action, and sustainable land management. China and the USA are the leading contributors to overall UAV and deep learning studies in agriculture, with 488 studies (28.8% of total indexed studies) and 218 studies (12.9% of total indexed studies), respectively. Leaf disease research is concentrated primarily in China (30 studies), India (27 studies), and the USA (23 studies), but their lead over the following countries, including Saudi Arabia (11 studies), Brazil (10 studies), and France (9 studies), is notably smaller in relative terms. This suggests differing regional research priorities, with some countries focusing more on specific agricultural challenges such as leaf disease detection.
Figure 3 presents the distribution of studies based on UAVs and deep learning in agriculture in relation to the United Nations Sustainable Development Goals (SDGs), as indexed in the Web of Science Core Collection, demonstrating the wider societal and environmental applicability of this research. Five SDGs dominated the resulting studies: Zero Hunger (SDG 2), Good Health and Well-being (SDG 3), Sustainable Cities and Communities (SDG 11), Climate Action (SDG 13), and Life on Land (SDG 15). Life on Land (SDG 15) has the highest number of studies (475), followed closely by Zero Hunger (SDG 2) with 453 studies and Climate Action (SDG 13) with 435 studies, suggesting a strong research emphasis on environmental sustainability and food security.

3. Latest Developments in UAV Aspect of Analyzed Crop Leaf Disease Studies

3.1. Trends in UAV Aspect of Crop Leaf Disease Studies Indexed in Web of Science

The research trend from 2018 to 2024 indicates a significant increase in UAV-based vegetation studies, with a growing preference for multirotor drones over fixed-wing UAVs (Table 1), likely due to their maneuverability and suitability for small-scale agricultural monitoring [34]. Geographically, research has diversified beyond early adopters like the USA and China, with increasing contributions from Asia, Europe, South America, and Africa, suggesting a broader global interest in UAV-based remote sensing for agriculture and environmental monitoring. In early studies, leaf disease detection was often a secondary objective in research focused on tree detection [35,36,37]. Leaf detection was frequently restricted to binary classification into healthy/diseased plants [38] or crop/weed classes [39]. Although most of the studies in Table 1 address classification problems, no universal statistical metrics were used for accuracy assessment, making cross-comparison across studies difficult. A study by Mazzia et al. [40], for example, used deep learning in a satellite imagery refinement framework, which was assessed by ANOVA against the raw datasets without reporting overall accuracy (OA) or F1-score [41]. Moreover, Duarte-Carvajalino et al. [42] used deep learning-based regression for the quantitative prediction of potato leaf disease severity, with the coefficient of determination (R2), root mean square error (RMSE), and mean absolute error (MAE) used for accuracy assessment. UAVs were not used exclusively for aerial photogrammetric imaging aimed at creating orthomosaics, but also for indoor imaging at high oblique angles from a stationary position [43]. However, it is not certain that such an approach requires UAVs, as similar imagery could be acquired with standalone cameras at lower cost. A study by Shah et al. [44] successfully utilized crop leaf disease images produced by commercial UAVs equipped with an RGB camera alongside images from the PlantVillage leaf disease dataset [45].
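For clarity, the regression metrics mentioned above can be computed as in the following minimal sketch; the severity values are made-up placeholders and do not originate from Duarte-Carvajalino et al. [42].

```python
# Minimal sketch of R2, RMSE, and MAE for a severity-regression task;
# the values below are illustrative placeholders, not data from any cited study.
import numpy as np
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

y_true = np.array([0.10, 0.35, 0.60, 0.80])  # observed severity (fraction of affected leaf area)
y_pred = np.array([0.15, 0.30, 0.55, 0.85])  # hypothetical model predictions

r2 = r2_score(y_true, y_pred)
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
mae = mean_absolute_error(y_true, y_pred)
print(f"R2 = {r2:.3f}, RMSE = {rmse:.3f}, MAE = {mae:.3f}")
```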

3.2. Imaging Sensors in Crop Leaf Disease Studies Indexed in Web of Science

While RGB and multispectral sensors have been consistently used, there is a noticeable rise in the adoption of hyperspectral imaging in recent years (2022–2024), enabling more detailed spectral analysis of vegetation [50,52,59]. The use of vegetation indices has also expanded, with earlier studies primarily relying on NDVI, while more recent research incorporates multiple indices for enhanced analysis. Spatial resolution, quantified by GSD, generally remained stable over the analyzed period, with some studies achieving resolutions as fine as 0.5 cm [54], obtained by combining multirotor UAVs with RGB cameras. RGB cameras on multirotor UAVs were used notably more often in recent studies than multispectral and hyperspectral cameras or fixed-wing UAVs, as this combination ensured superior spatial resolution (GSD of 3 cm or finer) compared with other combinations (5–10 cm). While some studies based their methodology predominantly on multispectral or hyperspectral cameras, multirotor UAVs with RGB cameras were still used to supplement fieldwork by providing more reliable ground-truth data [59]. The observed dominance of RGB sensors and CNN-based models across many regions can be explained by their relatively low cost, widespread availability, and ease of integration into commercial UAV platforms. RGB cameras are standard on most off-the-shelf UAVs, making them accessible to research groups with limited budgets, particularly in developing countries. Moreover, CNNs are well documented and supported by widely available pre-trained weights, lowering the entry barrier for model development. In contrast, hyperspectral and multispectral systems require specialized, expensive sensors and generate high-dimensional data that demand advanced computing infrastructure and domain expertise. As a result, these technologies appear more frequently in studies from well-funded institutions in China, Europe, and the USA, where research programs can sustain the cost of equipment and computation. Ease of deployment further shapes technology adoption: multirotor UAVs dominate small-scale, high-resolution monitoring tasks due to their maneuverability and ability to capture fine spatial detail at low altitude, whereas fixed-wing platforms are more common in large-area monitoring but less suited to leaf-level disease detection.
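The GSD values discussed above follow the standard photogrammetric relation between flight altitude, focal length, sensor width, and image width; the sketch below illustrates the calculation with hypothetical camera and flight parameters.

```python
def ground_sampling_distance(altitude_m, focal_length_mm, sensor_width_mm, image_width_px):
    """Photogrammetric GSD in cm/pixel: (altitude * sensor width) / (focal length * image width)."""
    gsd_m = (altitude_m * sensor_width_mm / 1000.0) / (focal_length_mm / 1000.0 * image_width_px)
    return gsd_m * 100.0

# Hypothetical multirotor RGB setup: 13.2 mm wide sensor, 8.8 mm lens,
# 5472 px image width, flown at 30 m altitude -> roughly 0.8 cm/pixel.
print(f"GSD = {ground_sampling_distance(30, 8.8, 13.2, 5472):.2f} cm/pixel")
```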
Figure 4 presents trends in UAV-based agricultural research, particularly focusing on imaging sensor types (RGB, multispectral, and hyperspectral) and their representation in deep learning and crop leaf disease studies. The total number of UAV-based agricultural studies has been rising steadily since 2015, with multispectral sensors being the most commonly used, followed by RGB and hyperspectral sensors. However, the percentage of studies incorporating deep learning has fluctuated but remains highest for RGB sensors, likely because, as discussed above, they produce the finest spatial resolution of the three [61,62]. In contrast to overall UAV-based research in agriculture, studies utilizing deep learning and specifically addressing crop leaf diseases have risen rapidly since 2017 and mostly relied on RGB sensors. However, the share of multispectral sensors in crop leaf disease studies based on UAVs and deep learning has been rising since 2022, and a strong focus on their implementation in future studies is expected. The trend highlights an overall shift towards integrating deep learning with UAV imagery, although the adoption of advanced hyperspectral imaging remains limited compared to RGB and multispectral approaches [63].

3.3. UAV Platforms in Crop Leaf Disease Studies Indexed in Web of Science

Overall, multirotor platforms dominate the reviewed literature, mainly because they provide low-altitude hover capability and fine spatial resolution suited to leaf- and canopy-level inspection [64]. RGB cameras are the most frequent sensor choice, reflecting a trade-off between cost and achievable spatial detail, as they often produce the finest GSDs. Multispectral and hyperspectral sensors appear increasingly after 2021, typically paired with experiments that prioritize spectral discrimination, including severity estimation and early stress detection, over extremely fine spatial detail [65]. Finally, there is a clear practical trade-off: studies reporting the smallest GSDs or simple RGB setups typically emphasized detection/localization accuracy under favorable conditions, whereas multispectral/hyperspectral studies emphasized spectral sensitivity and quantitative severity estimation at the cost of heavier computation and more complex data handling.

4. Latest Developments in Deep Learning Aspect of Analyzed Crop Leaf Disease Studies

4.1. Deep Learning Algorithms in Crop Leaf Disease Studies Indexed in Web of Science

The integration of deep learning into plant disease recognition has significantly enhanced the precision, scalability, and automation of crop health monitoring systems. By utilizing the computational power of deep neural networks, deep learning enables the direct identification of disease symptoms from raw image data, thereby eliminating the need for handcrafted features or subjective visual assessments typical of traditional diagnostic methods [66,67]. This capability allows for the consistent and early recognition of complex and often subtle indicators of plant stress. One of the core strengths of deep learning lies in its ability to process and learn from high-dimensional, heterogeneous datasets. In the context of agricultural remote sensing, this includes the analysis of multispectral and hyperspectral imagery [31,68] collected through advanced imaging platforms, such as unmanned aerial vehicles. These datasets contain rich spectral information and detailed physiological cues that may serve as early warning signs of disease. Deep learning models are particularly adept at detecting these subtle patterns, enabling more proactive and precise crop management [69]. Recent advances in model architecture, including convolutional neural networks, vision transformers, and modern object detection frameworks like YOLO and Faster R-CNN, have significantly improved the accuracy and granularity of disease mapping [70]. These deep learning models can recognize and localize disease symptoms across various spatial scales, ranging from lesions on individual leaves to widespread infections observable across entire fields. The combination of deep learning with aerial imaging technologies provides an efficient solution for real-time, high-resolution monitoring of plant health [71]. This synergy enhances the frequency, scale, and quality of data acquisition, contributing to more informed and timely decision-making in agriculture. As a result, deep learning-based disease recognition systems are playing an increasingly vital role in precision farming, supporting higher yields, reduced input costs, and more sustainable agricultural practices.
A systematic examination of Web of Science-indexed literature highlights the evolving landscape of DL methodologies employed in UAV-based crop disease detection as shown in Table 2. The analysis reveals a clear progression, from foundational convolutional models to sophisticated detection and segmentation frameworks, and most recently to hybrid and transformer-based architectures designed for contextual awareness and real-time UAV deployment. Initially, from 2018 to 2020, CNNs were the predominant choice across numerous studies. Their capability to learn discriminative features from UAV-captured imagery made them highly effective for disease classification.
Early applications utilized both standard and custom CNN models. For instance, citrus disease detection in the USA using a basic CNN model achieved an F1-score of 96.24% [37], while a custom CNN was used in Colombia for potato crops [42]. During the same period, more advanced architectures such as Inception-v3, VGG-19, and ResNet demonstrated superior performance in various tasks, with a Brazilian soybean study reporting an OA of 99.04% [47]. As application needs evolved, a distinct methodological transition emerged, shifting from conventional classification toward object detection and semantic segmentation. This shift addressed the growing demand for spatially explicit disease identification. Algorithms such as YOLO, Faster R-CNN, and Mask R-CNN became increasingly prevalent due to their capacity to localize symptoms with bounding boxes or segmentation masks. In 2019, YOLOv3 was employed in a citrus detection task with an F1-score of 99.80% [35]. In subsequent years, detection frameworks were expanded to other crops, including sugarcane in Sri Lanka with a Mean Average Precision (mAP) of 79.00% [49] and banana in the DR Congo using YOLOv8 and Faster R-CNN with an F1-score of 98.00% [58]. Simultaneously, instance segmentation techniques have gained traction. Mask R-CNN, in particular, has been used effectively in Germany for analyzing sugar beet and cauliflower, achieving precision and recall values above 95.00% and 97.00%, respectively [51].
These approaches not only offer precise disease localization but also support field-scale disease mapping for precision interventions. To enable real-time inference on resource-constrained UAV platforms, recent studies have prioritized lightweight architectures such as MobileNet and EfficientNet, which maintain high prediction accuracy while preserving computational efficiency. EfficientNet-B3, for example, achieved an F1-score of 98.80% across tomato, potato, and pepper in a 2023 study from Pakistan [44]. Similarly, France-based research demonstrated that EfficientNet outperformed both ResNet and transformer-based ViT in spinach detection tasks with an F1-score of 99.40% [39]. MobileNet and MobileViT were also employed in a 2024 German study on wheat diseases, achieving OA and F1-scores of 89.06% and 88.95%, respectively [55], thus affirming the growing relevance of lightweight models in real-world UAV deployment. Furthermore, recent studies from 2023 to 2024 reveal a gradual emergence of hybrid and transformer-based DL architectures. These models integrate CNN backbones with attention mechanisms or spectral data fusion strategies to capture global and contextual features more effectively. For example, DeepLabV3+, HRNet, and Segformer were applied in maize and wheat disease quantification tasks, achieving high OA values above 91.00% [52,57]. Such architectures provide improved spatial coherence and robustness, particularly under variable environmental and phenological conditions, signaling a strategic shift toward tailored architectural innovation in precision agriculture.
Custom hybrid models are also becoming more common. A 2024 study in Germany employed a domain-specific hybrid model for sugar beet classification, obtaining an F1-score of 78.76% [53]. Similarly, spatiotemporal learning was addressed using 3D-CNNs in potato disease detection, capturing UAV image sequences for enhanced temporal representation and achieving 97.33% OA [50]. Another notable innovation includes MSA-CNN, a multispectral attention model that reached 94.11% OA in rubber tree plantations by leveraging spectral feature fusion [59]. Moreover, a new direction involves the use of generative techniques and super-resolution algorithms to enhance input quality before analysis. In a 2024 USA-based study, models such as MuLUT, LeRF, and REAL-ESRGAN were used to refine UAV images in maize detection tasks, allowing subsequent DL models to operate on higher-quality data [54]. These generative methods are particularly valuable in overcoming limitations posed by poor resolution, occlusion, or suboptimal flight conditions. Collectively, these algorithmic advancements from 2022 onward represent a significant leap in UAV-based plant disease detection. They reflect a broader movement toward robust, scalable, and context-aware solutions [72], addressing the multifaceted challenges of agricultural monitoring. The convergence of detection accuracy, computational efficiency, and architectural innovation is shaping the next generation of UAV-assisted precision agriculture systems.

4.2. Comparative Assessment of Major Deep Learning Approaches in Crop Leaf Disease Studies

CNNs remain foundational due to their ability to hierarchically extract spatial features from high-resolution UAV imagery. Their local receptive fields and weight sharing enable efficient feature learning from leaf textures, color variations, and disease lesions. Deeper CNNs such as VGG and Inception introduced more complex structures: VGG stacks small convolutional filters to capture fine-grained features [73], while Inception combines multi-scale filters to improve robustness against varying leaf sizes and shapes [74]. ResNet further advanced this by introducing residual connections, which mitigate vanishing gradients and allow very deep networks to converge, improving classification stability in complex agricultural scenes. Lightweight networks like MobileNet and EfficientNet were designed to address the computational constraints of UAV deployment [75]. MobileNet employs depthwise separable convolutions to drastically reduce parameters and inference cost, making it feasible for on-board analysis. EfficientNet scales depth, width, and resolution in a balanced way through compound scaling, which increases accuracy without proportional increases in computation. Both models are critical for real-time field applications where UAVs have limited processing power. More recently, attention-based architectures have been introduced to overcome the limitations of CNN local receptive fields [76]. Vision transformers (ViTs) and hybrid CNN–transformer models employ self-attention mechanisms to capture long-range dependencies and contextual information across entire fields, improving sensitivity to diffuse disease symptoms that may not be localized to a single leaf. Object detection and segmentation frameworks such as YOLO, Faster R-CNN, and Mask R-CNN incorporate both localization and classification [77]. Their innovation lies in region proposal networks (Faster R-CNN) or grid-based detection (YOLO), enabling detection of small lesions at the leaf level while maintaining efficiency. Similarly, segmentation networks like U-Net and DeepLabV3+ are optimized for pixel-level predictions, with skip connections or atrous convolutions that preserve spatial detail and capture multi-scale context, which is critical for mapping irregular disease patches across crop canopies. These architectural innovations collectively illustrate how deep learning methods evolved from generic classification toward scalable, context-aware, and resource-efficient models suitable for UAV platforms.
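To make the efficiency argument concrete, the sketch below shows a MobileNet-style depthwise separable convolution in PyTorch; it is an illustrative building block under assumed channel sizes, not the exact layer definition from any reviewed study.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable convolution: a per-channel 3x3 (depthwise) convolution
    followed by a 1x1 (pointwise) convolution, as popularized by MobileNet."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU6(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# A standard 3x3 convolution from 64 to 128 channels uses 64*128*9 = 73,728 weights,
# while the separable version uses 64*9 + 64*128 = 8,768, roughly 8x fewer.
out = DepthwiseSeparableConv(64, 128)(torch.randn(1, 64, 56, 56))
```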

4.3. Computational Efficiency of Major Deep Learning Approaches in Crop Leaf Disease Studies

Computational cost varies widely across standard CNNs: ResNet-50 contains roughly 25 million parameters and requires about 4 GFLOPs to process a 224 × 224 image, whereas VGG-16 exceeds 130 million parameters and around 15 GFLOPs. Their hierarchical convolutional design nonetheless makes them relatively efficient for UAV imagery, with inference times suitable for near real-time disease monitoring on edge devices or portable GPUs. In contrast, ViTs generally involve higher computational loads due to their quadratic self-attention mechanism; a base ViT model contains approximately 86 million parameters and requires roughly 17 GFLOPs per image, limiting its feasibility for onboard UAV inference without specialized accelerators. Lightweight variants such as MobileNet and EfficientNet drastically reduce parameter counts (about 3.5 M for MobileNetV2 and 5.3 M for EfficientNet-B0) and FLOPs (well under 1 GFLOP for MobileNet), enabling real-time inference directly on UAV processors. CNNs and lightweight hybrids remain the most practical solutions for UAV deployment [78], while transformer-based models provide enhanced accuracy in research settings but require further optimization.
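Such parameter budgets can be verified directly from reference implementations, as in the sketch below; the torchvision model constructors are assumed to be available, and FLOP estimation would additionally require a profiler such as fvcore or ptflops.

```python
# Count trainable parameters of the backbones discussed above using torchvision's
# reference implementations (illustrative check only; FLOPs need a separate profiler).
import torchvision.models as models

def count_params_millions(model):
    return sum(p.numel() for p in model.parameters()) / 1e6

for name, builder in [("ResNet-50", models.resnet50),
                      ("VGG-16", models.vgg16),
                      ("MobileNetV2", models.mobilenet_v2),
                      ("EfficientNet-B0", models.efficientnet_b0),
                      ("ViT-B/16", models.vit_b_16)]:
    print(f"{name}: {count_params_millions(builder(weights=None)):.1f} M parameters")
```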

4.4. Analysis of Statistical Metrics Used for Accuracy Assessment

An in-depth review of UAV-based crop disease detection studies reveals considerable variability in the use and reporting of performance metrics. This diversity stems from the wide array of model architectures, task objectives (e.g., classification, detection, segmentation, regression), crop types, and dataset characteristics. As DL models have advanced in precision agriculture, so too have the metrics employed to evaluate their effectiveness, albeit with notable inconsistencies across the literature. Traditionally, classification tasks have predominantly relied on OA as the principal performance indicator. In early studies, OA values typically exceeded 90.00%, indicating reliable performance even in foundational models. For example, a Brazilian study utilizing Inception-v3, VGG-19, and ResNet on soybean images achieved an OA of 99.04% [47]. Similarly, a CNN-based approach to citrus tree detection in the USA reported a high F1-score of 96.24% [37]. These metrics suggest that even standard CNNs were initially capable of producing satisfactory results on balanced, well-curated datasets.
However, as the field matured, researchers increasingly recognized the limitations of OA, particularly in the presence of class imbalance. This led to the broader adoption of F1-score, a harmonic mean of precision and recall that better captures performance on minority classes. F1-score has become especially important in scenarios involving rare or early-stage diseases, which are often underrepresented in UAV datasets. For example, a 2023 study employing EfficientNet-B3 for tomato, potato, and pepper disease classification achieved an F1-score of 98.80% [44], highlighting the utility of this metric for nuanced class distributions. For object detection tasks, mAP has emerged as the standard metric, reflecting both detection accuracy and localization quality. YOLO-based models, known for their real-time detection capabilities, frequently report mAP values. For instance, in a 2024 Malaysian study on melon crops, YOLOv8 achieved a mAP of 83.20% [43], while YOLOv5 and Faster R-CNN were used in a Moroccan study, yielding a combined mAP of 73.70% [60]. Although slightly lower than classification metrics, these mAP values indicate robust detection performance, particularly under real-world UAV imaging conditions characterized by variable lighting, occlusion, and background complexity. Beyond classification and detection, a few recent studies have shifted toward regression-based models aimed at quantifying disease severity rather than simply identifying its presence. These studies often report R2 as a measure of how well the model predicts continuous severity scores. A noteworthy example is a 2023 Chinese study on wheat disease, which employed DeepLabv3+, HRNet, and OCRNet, and reported an R2 of 0.875 [52]. While promising, this application area lacks standardized performance benchmarks, making it difficult to compare across studies or to integrate regression models into operational workflows. Another layer of complexity arises from inconsistencies in metric reporting. Some publications, particularly those involving novel or hybrid architectures, omit key metrics such as OA or F1-score. For example, the 2024 USA-based study on maize disease detection using MuLUT, LeRF, and REAL-ESRGAN [54] did not report accuracy-related figures, focusing instead on image enhancement techniques. Likewise, the RarefyNet-based vineyard analysis from Italy [40] lacked explicit performance metrics. These omissions hinder rigorous cross-comparison and meta-analysis efforts, underscoring the need for community-driven standards in reporting DL model outcomes.
Additionally, task-specific segmentation metrics such as the Dice coefficient have appeared in studies where pixel-level precision is vital. In a Brazilian study focusing on sugarcane segmentation using U-Net, LinkNet, and PSPNet, a Dice score of 0.721 was reported [56]. While lower than classification accuracies, this result is acceptable given the complexity of field-level image segmentation and the absence of dense labeling. Collectively, the evolving use of performance metrics reflects both the diversification of DL applications and the increasing complexity of UAV-based agricultural diagnostics. While metrics like OA, F1-score, and mAP remain central to performance evaluation, their applicability is context-dependent. Emerging areas such as severity quantification and spatiotemporal analysis demand new, task-appropriate metrics, possibly involving time-series consistency, explainability, or uncertainty quantification. To advance the field meaningfully, future studies should adopt standardized, transparent reporting practices, ideally including multiple complementary metrics that account for dataset imbalance, spatial resolution, and task type. Moreover, the use of metrics with predetermined value intervals, such as the R2 and normalized RMSE (NRMSE) for quantitative variables, as well as OA and F1-score for qualitative variables, enables more straightforward comparison of prediction accuracy across studies.
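The qualitative metrics discussed above can be reproduced with a few lines of code; the sketch below uses made-up labels purely to illustrate how OA, F1-score, and the Dice coefficient are computed.

```python
# Illustrative computation of OA, F1-score, and Dice on made-up binary labels
# (1 = diseased, 0 = healthy); not data from any cited study.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])

oa = accuracy_score(y_true, y_pred)   # overall accuracy
f1 = f1_score(y_true, y_pred)         # harmonic mean of precision and recall

def dice_coefficient(mask_true, mask_pred):
    """Dice = 2|A ∩ B| / (|A| + |B|), commonly used for pixel-level segmentation masks."""
    intersection = np.logical_and(mask_true, mask_pred).sum()
    return 2.0 * intersection / (mask_true.sum() + mask_pred.sum())

print(f"OA = {oa:.2f}, F1 = {f1:.2f}, Dice = {dice_coefficient(y_true, y_pred):.2f}")
```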

4.5. Advantages and Limitations of Deep Learning Approaches Based on Used Input Crop Leaf Disease Image Datasets

The increasing deployment of DL methods in UAV-based crop disease detection has underscored the importance of efficient learning strategies, particularly in scenarios constrained by limited annotated data. In this context, transfer learning has emerged as a dominant approach to enhance model performance while minimizing data and computational requirements. Specifically, the adaptation of pre-trained CNNs, originally trained on large-scale datasets such as ImageNet, has proven advantageous for accelerating convergence, mitigating overfitting, and improving generalizability across diverse agricultural contexts. A clear trend in the literature shows that pre-trained models consistently outperform custom-built architectures when applied to small or imbalanced agricultural datasets. For instance, VGG-16, one of the most widely used ImageNet-trained backbones, was employed in a study on banana disease detection across the Democratic Republic of Congo and Benin, yielding a high OA of 97.00% [38]. Similarly, ResNet variants were utilized in Brazilian soybean classification tasks and achieved OA values exceeding 99.00% [47]. These results highlight the capacity of generalized feature extractors trained on large-scale visual datasets to capture relevant representations, even in the agricultural domain, where intra-class variability and environmental noise are significant. Fine-tuning pre-trained networks to adapt them to crop-specific conditions has proven especially effective in cases involving underrepresented or region-specific crops. For example, EfficientNet-B3, when fine-tuned for disease detection in tomato, potato, and pepper crops in Pakistan, achieved an impressive F1-score of 98.8% [44], reflecting its robustness even in varied climatic and soil conditions. A similar strategy was adopted in a 2024 study on rubber tree disease in China, where a custom MSA-CNN architecture incorporated elements of transfer learning to achieve an OA of 94.11% [59]. These cases illustrate that transfer learning not only enhances generalization but also facilitates knowledge transfer across domains with shared visual features (e.g., leaf patterns, color discolorations, lesion shapes). Importantly, the choice of backbone architecture also plays a role in data efficiency.
Lightweight models such as MobileNet and MobileViT, designed for real-time, edge-level deployment, have also benefited from pre-training. In a recent German study involving wheat crop disease detection, these models, when fine-tuned, achieved competitive OA and F1-score values of 89.06% and 88.95%, respectively [55]. These findings are significant for field-level applications, where hardware constraints and limited network connectivity require efficient yet accurate solutions. Even in more advanced applications involving segmentation or quantification, transfer learning remains relevant. The use of DeepLabV3+ for disease severity estimation in wheat [52] and Segformer for maize disease segmentation [57] showcases how pre-trained transformer-based encoders can be fine-tuned to agricultural image datasets. Despite being designed for general-purpose semantic segmentation, these architectures yielded high performance when adapted to crop-specific tasks, with OA values above 91.00%. This highlights a promising direction for leveraging transformer-based pre-trained models, particularly as annotated agricultural datasets remain scarce. Nevertheless, while the benefits of transfer learning are widely acknowledged, several studies still deploy custom CNNs trained from scratch, often in contexts where the visual characteristics of the disease or imaging modality significantly diverge from those in ImageNet. For example, a 2022 potato disease detection study in China used a 3D-CNN without transfer learning, achieving OA of 97.33% [50]. Such models, while effective, demand more extensive datasets and tuning, making them less practical in data-scarce environments. These findings collectively emphasize that transfer learning is not merely a computational shortcut but a strategic necessity in agricultural deep learning applications [79]. Particularly in UAV-based plant pathology, where acquiring and annotating high-quality datasets is often hindered by logistical and economic constraints, transfer learning provides a scalable solution to bridge the gap between model complexity and data availability. By utilizing pre-trained backbones as feature extractors and selectively fine-tuning them on agricultural datasets, researchers have effectively reduced training times and computational overhead while maintaining high levels of accuracy and generalization.
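A typical transfer-learning setup of the kind described above freezes an ImageNet-pretrained backbone and fine-tunes only a newly attached classification head; the sketch below illustrates this with EfficientNet-B0 and a hypothetical four-class leaf disease task, not the configuration of any specific reviewed study.

```python
import torch.nn as nn
import torchvision.models as models

num_classes = 4  # hypothetical: healthy + three disease classes

# Load an ImageNet-pretrained backbone and freeze its feature extractor.
model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)
for param in model.parameters():
    param.requires_grad = False

# Replace the final classifier layer with a new trainable head for the target classes.
in_features = model.classifier[1].in_features
model.classifier[1] = nn.Linear(in_features, num_classes)

# Only the new head is updated during fine-tuning on the UAV leaf-disease dataset.
trainable_params = [p for p in model.parameters() if p.requires_grad]
```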
The period from 2022 to 2024 marks a distinct phase of technological evolution in UAV-based crop disease detection, characterized by the integration of advanced deep learning paradigms and multimodal data strategies. As conventional convolutional approaches reach maturity, researchers are increasingly exploring innovative architectures and data processing techniques to address complex field conditions, heterogeneous crop features, and the limitations of traditional 2D imaging. One of the most salient trends during this period is the proliferation of hybrid architectures, particularly those combining CNNs with transformer-based components. These hybrid systems aim to leverage the complementary strengths of CNNs, known for their local spatial sensitivity, and transformers, which excel in capturing long-range dependencies and global context. Similarly, MobileViT, a mobile-optimized hybrid model integrating transformer modules within a CNN framework, was utilized for wheat disease detection in 2024 by Alirezazadeh et al., achieving an F1-score of 88.95% and overall accuracy of 89.06% [55]. These models signify a paradigm shift toward context-aware feature extraction in resource-constrained environments. Complementing this trajectory, researchers have also incorporated multi-sensor fusion strategies. Notably, the MSA-CNN architecture (Multi-Scale Attention CNN) employed by Zeng et al. [59] demonstrated a novel integration of multispectral and hyperspectral UAV data for rubber tree disease classification. Achieving 94.11% overall accuracy, this model capitalized on both spatial and spectral richness, offering improved sensitivity to subtle disease-induced reflectance variations. This represents an important advancement in the development of DL architectures tailored not only to spatial structure but also to spectral diversity inherent in crop imaging. Another notable innovation is the application of 3D-CNNs, which incorporate spatiotemporal dynamics into model learning. Traditional 2D CNNs, while effective for static imagery, are limited in their ability to model sequential or volumetric patterns. In contrast, 3D-CNNs enable temporal feature extraction from UAV flight sequences, which is particularly useful for tracking disease progression over time. This approach was exemplified in a 2022 potato study by Shi et al. [50], where the fusion of 2D and 3D CNN models led to a 97.33% overall accuracy, offering a promising direction for time-series analysis in agricultural monitoring. Finally, the integration of generative models, specifically in the form of image enhancement tools, has begun to emerge in agricultural DL workflows. The study by Alves Nogueira et al. [54] introduced REAL-ESRGAN, a generative adversarial network-based method for super-resolution image reconstruction. Applied to maize UAV imagery, this approach improved input data quality by refining texture and structural details, potentially augmenting the performance of downstream detection models. Although REAL-ESRGAN was not used directly for classification, its incorporation signifies an increasing interest in pre-processing pipelines that amplify model input quality, particularly when low-resolution imagery or adverse flight conditions compromise data fidelity. Collectively, these innovations reflect a broadening of the UAV-based plant disease detection research agenda, moving beyond accuracy optimization toward more holistic considerations of data richness, temporal continuity, and deployment feasibility. 
The integration of attention mechanisms, spectral fusion, and generative enhancement illustrates a new phase of algorithmic creativity aligned with real-world agricultural constraints. As these methodologies continue to mature, they are expected to play a pivotal role in enabling autonomous, context-aware, and data-efficient disease monitoring systems within precision agriculture frameworks.
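As an illustration of the spatiotemporal modeling idea discussed above, the minimal 3D-CNN sketch below processes a short sequence of UAV frames; the layer sizes and input shapes are arbitrary assumptions and do not reproduce the architecture of Shi et al. [50].

```python
import torch
import torch.nn as nn

class Simple3DCNN(nn.Module):
    """Minimal 3D-CNN for a short image sequence: convolutions span time and space,
    so temporal disease progression can contribute to the learned features."""
    def __init__(self, num_classes=2, in_channels=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),   # pool spatially, keep all frames
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),               # global spatiotemporal pooling
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):  # x: (batch, channels, frames, height, width)
        return self.classifier(self.features(x).flatten(1))

# e.g., a batch of 4 sequences of 5 RGB frames at 64 x 64 pixels
logits = Simple3DCNN()(torch.randn(4, 3, 5, 64, 64))
```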

4.6. Potential of Deep Learning Algorithms in Practical Applications for Crop Leaf Disease Detection

While UAV-based DL systems for crop disease detection have achieved remarkable progress in recent years, several persistent limitations constrain their broader applicability and scalability in real-world agricultural settings. UAV-based crop disease detection approaches using deep learning share a set of challenges that are not limited to overfitting. Available annotated datasets are limited and usually class-imbalanced, which decreases model generalizability. Real-time applications on UAVs have high computational requirements, especially when hyperspectral images or transformer-based models are involved. Overfitting remains a key concern, but it must be addressed alongside these wider constraints, which form the practical limitations to implementation.
A principal limitation identified across multiple studies is the issue of data scarcity, particularly for rare or geographically specific crop diseases. This challenge is twofold: first, collecting high-quality annotated imagery for rare pathologies is labor-intensive and logistically complex; second, class imbalance can skew model learning, leading to reduced generalizability and robustness. This issue is exemplified in the 2024 study on bean disease detection by Slimani et al. [60], where the YOLOv5, YOLOv8, YOLO-NAS, and Faster R-CNN models achieved a modest mAP of 73.7%. The lower performance underscores the difficulty in detecting diseases with limited or noisy training examples and highlights the urgent need for more inclusive and representative datasets that capture the full spectrum of crop-pathogen interactions. Another critical barrier is the computational burden associated with advanced DL models, especially those utilizing multispectral or hyperspectral inputs. While such data types offer enhanced discriminatory power for subtle disease symptoms, they substantially increase model complexity and training times. Spatiotemporal processing, such as that used in 3D-CNNs, further increases memory demands, making it less practical for real-time UAV deployment or use in low-resource environments. One of the most critical yet largely under-researched problems is the computing requirements of algorithms in the context of UAV-based crop disease detection. Existing research falls into two categories: one in which data are processed on board the UAV (edge computing), and another in which data are sent to external computing facilities, such as ground stations or cloud servers. The benefit of onboard processing is real-time decision-making, which is particularly useful in time-sensitive applications such as early disease intervention. This strategy, however, requires lightweight models optimized for limited onboard resources and is constrained by power consumption and available hardware. By comparison, offboard processing enables computationally more expensive architectures, such as transformer-based or generative models, but it introduces latency and requires constant high-bandwidth communication.
These studies illustrate the trade-off between model precision and computational cost, prompting a need for optimization strategies such as model pruning, quantization, or knowledge distillation. A third key issue is the lack of standardization in performance evaluation metrics, which hinders cross-study comparisons and the synthesis of results across different domains, crops, or geographies. While OA remains a common metric for classification tasks, its utility diminishes in the presence of class imbalance. F1-score has emerged as a more informative measure in such cases (e.g., 98.8% for EfficientNet-B3 in the detection of tomato and potato diseases [44]), while mAP is the accepted standard for object detection tasks (e.g., 83.2% mAP for YOLOv8 in melon disease detection [43]). However, several studies still omit these critical metrics, such as the work by Mazzia et al. [40], which did not report OA or F1, thereby limiting reproducibility and comparative analysis. Additionally, the inclusion of statistical confidence intervals, confusion matrices (for classification tasks), and class-wise performance metrics could further improve the transparency and reproducibility of reported results. Looking forward, future research should focus on building open-access, multi-crop datasets that emphasize rare diseases and diverse environmental conditions. The use of synthetic data generation techniques, such as Generative Adversarial Networks (GANs), could also mitigate data scarcity by augmenting underrepresented classes. Moreover, the integration of edge computing paradigms with lightweight DL architectures such as MobileNet or MobileViT could reduce computational load and enable real-time analysis on UAVs, especially in remote or resource-limited areas.
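Two of the optimization strategies named above, pruning and quantization, can be prototyped with standard PyTorch tooling; the sketch below applies unstructured L1 pruning and post-training dynamic quantization to a MobileNetV2 classifier as an illustrative example, not a validated deployment recipe.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune
import torchvision.models as models

model = models.mobilenet_v2(weights=None)

# 1) Prune 30% of the smallest-magnitude weights in every convolutional layer.
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # make the pruning permanent

# 2) Post-training dynamic quantization of the fully connected layers to int8.
quantized_model = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
```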
To address data scarcity, collaborative initiatives to establish open-access, multi-crop benchmark datasets are critical. Projects modeled after ImageNet or PlantVillage could be expanded to UAV imagery, with standardized annotation protocols that facilitate cross-study comparability. Federated learning frameworks offer a complementary approach [80], allowing models to be trained across geographically distributed datasets without requiring raw data sharing, thereby respecting privacy and local data restrictions while improving model generalization. Synthetic data generation using GANs or physics-based crop simulators also represents a promising strategy for augmenting underrepresented disease classes.
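The federated learning idea mentioned above reduces, at its core, to averaging locally trained model weights without sharing raw imagery; the FedAvg-style sketch below illustrates only that averaging step, with client training and communication omitted and equal client weighting assumed.

```python
import copy
import torch

def federated_average(client_state_dicts):
    """FedAvg with equal client weighting: element-wise mean of client model weights."""
    avg = copy.deepcopy(client_state_dicts[0])
    for key in avg:
        stacked = torch.stack([sd[key].float() for sd in client_state_dicts])
        avg[key] = stacked.mean(dim=0).to(avg[key].dtype)
    return avg

# Hypothetical usage: the server aggregates weights from two sites and updates the
# global model without ever receiving their raw UAV imagery.
# global_model.load_state_dict(
#     federated_average([site_a_model.state_dict(), site_b_model.state_dict()]))
```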

5. Conclusions and Future Remarks

The integration of UAVs and DL has led to a paradigm shift in agricultural disease monitoring, offering unprecedented capabilities in spatial precision, temporal frequency, and analytic scalability. The maturation of these technologies has enabled early detection, classification, and quantification of plant diseases with high accuracy, marking significant progress toward more sustainable and data-driven agricultural practices. Recent advancements highlight the dominance of multirotor UAVs equipped primarily with RGB sensors due to their affordability and compatibility with common deep learning frameworks. At the same time, the adoption of hyperspectral and multispectral imaging systems has expanded, especially in recent years, offering enhanced spectral sensitivity that is critical for identifying subtle physiological changes associated with crop stress and disease. The combination of high-resolution imagery and robust DL algorithms has become central to modern crop monitoring systems. Reliable and reproducible UAV-based monitoring systems can minimize the use of chemical pesticides through prompt and accurate interventions, maximize the utilization of agricultural inputs, and reduce production costs. By offering objective and timely information, these technologies can help farmers make management decisions that maximize yields and reduce environmental impact.
On the computational front, CNNs have remained foundational due to their proven effectiveness across diverse agricultural scenarios. Progressively, these architectures have evolved to include lightweight and transformer-based models, as well as hybrid networks that integrate spectral and spatial information for improved generalization. These models have achieved high classification accuracy and detection precision, making them suitable for both large-scale agricultural monitoring and targeted disease assessment. Furthermore, the application of 3D convolutional models and generative approaches reflects the field’s move toward more sophisticated and holistic data modeling techniques. Despite these technological achievements, several persistent challenges remain. The limited availability of annotated datasets, particularly for rare diseases and less commonly studied crops, continues to constrain model development and evaluation. Class imbalance within existing datasets further complicates training processes, often resulting in biased performance outcomes. Addressing these issues requires coordinated efforts to develop comprehensive, publicly accessible datasets that reflect the diversity and complexity of real-world agricultural conditions. Computational demands also represent a critical barrier, particularly when deploying complex models on UAV platforms in the field. While high-end DL architectures offer superior performance, they often necessitate specialized hardware, which can hinder scalability and limit use in resource-constrained settings. In contrast, efficient models tailored for edge computing, such as those optimized through pruning, quantization, or model distillation, offer promising alternatives for real-time, on-device analysis.
Another major concern is the lack of standardized evaluation protocols across studies. The inconsistent use of accuracy metrics such as F1-score, overall accuracy, and mean average precision makes it difficult to compare performance across different approaches. The adoption of a unified benchmarking framework is essential for facilitating objective assessments and guiding methodological improvements. Looking ahead, future research should prioritize three interrelated objectives: improving data availability through synthetic generation and open-access initiatives, optimizing models for real-time deployment in diverse environments, and establishing clear performance standards to enhance comparability. Equally important is the collaboration between agricultural scientists and machine learning experts to ensure that developed models are interpretable, actionable, and aligned with the needs of end-users such as agronomists and domain experts.
This review has several limitations. Only studies published in English were included, which may introduce a language bias and exclude relevant research published in other languages. The analysis also relied exclusively on the Web of Science Core Collection; while this database is comprehensive, it does not capture all peer-reviewed literature, particularly conference proceedings and regional journals. The review covered publications up to the latest full year (2024), and while the most recent advances in 2025 were included in the analysis of specific aspects of the research topic, the cumulative numbers do not reflect research trends in the current year. These constraints may restrict the completeness of the trends identified, and future reviews could broaden database coverage, include multilingual sources, and update findings with the latest developments.
The systematic review of studies indexed in the Web of Science Core Collection showed that research output has grown rapidly since 2019, with multirotor UAVs and RGB sensors being the most widely used because they are less expensive and easier to deploy, whereas hyperspectral imaging and hybrid deep learning models are gaining traction in projects with larger budgets. Three main issues should be addressed in future research: (1) present knowledge remains fragmented, with most studies targeting particular crops, regions, or algorithms, which limits generalizability; (2) methodological inconsistency persists, as accuracy metrics and validation approaches are unevenly reported, making it difficult to compare performance across studies; and (3) research is unevenly distributed globally, with China, the USA, and India as major contributors, while countries facing food security challenges remain under-represented.

Author Contributions

Conceptualization, D.R. and P.R.; methodology, D.R. and P.R.; software, D.R.; validation, I.P. and M.J.; formal analysis, D.R. and P.R.; investigation, D.R. and P.R.; resources, D.R.; data curation, D.R.; writing—original draft preparation, D.R. and P.R.; writing—review and editing, D.R. and P.R.; visualization, D.R.; supervision, I.P. and M.J.; project administration, I.P.; funding acquisition, P.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original data presented in the study are openly available in the Web of Science Core Collection database at https://www.webofscience.com/wos/woscc/advanced-search (accessed on 5 September 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ngugi, H.N.; Akinyelu, A.A.; Ezugwu, A.E. Machine Learning and Deep Learning for Crop Disease Diagnosis: Performance Analysis and Review. Agronomy 2024, 14, 3001. [Google Scholar] [CrossRef]
  2. Terentev, A.; Dolzhenko, V.; Fedotov, A.; Eremenko, D. Current State of Hyperspectral Remote Sensing for Early Plant Disease Detection: A Review. Sensors 2022, 22, 757. [Google Scholar] [CrossRef] [PubMed]
  3. Wen, T.; Li, J.-H.; Wang, Q.; Gao, Y.-Y.; Hao, G.-F.; Song, B.-A. Thermal Imaging: The Digital Eye Facilitates High-Throughput Phenotyping Traits of Plant Growth and Stress Responses. Sci. Total Environ. 2023, 899, 165626. [Google Scholar] [CrossRef]
  4. Ding, W.; Abdel-Basset, M.; Alrashdi, I.; Hawash, H. Next Generation of Computer Vision for Plant Disease Monitoring in Precision Agriculture: A Contemporary Survey, Taxonomy, Experiments, and Future Direction. Inf. Sci. 2024, 665, 120338. [Google Scholar] [CrossRef]
  5. Wang, Y.M.; Ostendorf, B.; Gautam, D.; Habili, N.; Pagay, V. Plant Viral Disease Detection: From Molecular Diagnosis to Optical Sensing Technology—A Multidisciplinary Review. Remote Sens. 2022, 14, 1542. [Google Scholar] [CrossRef]
  6. Fu, X.; Jiang, D. Chapter 16–High-Throughput Phenotyping: The Latest Research Tool for Sustainable Crop Production under Global Climate Change Scenarios. In Sustainable Crop Productivity and Quality Under Climate Change; Liu, F., Li, X., Hogy, P., Jiang, D., Brestic, M., Liu, B., Eds.; Academic Press: Amsterdam, The Netherlands, 2022; pp. 313–381. ISBN 978-0-323-85449-8. [Google Scholar]
  7. Polymeni, S.; Skoutas, D.N.; Sarigiannidis, P.; Kormentzas, G.; Skianis, C. Smart Agriculture and Greenhouse Gas Emission Mitigation: A 6G-IoT Perspective. Electronics 2024, 13, 1480. [Google Scholar] [CrossRef]
  8. Laveglia, S.; Altieri, G.; Genovese, F.; Matera, A.; Di Renzo, G.C. Advances in Sustainable Crop Management: Integrating Precision Agriculture and Proximal Sensing. AgriEngineering 2024, 6, 3084–3120. [Google Scholar] [CrossRef]
  9. Yadav, A.; Yadav, K.; Ahmad, R.; Abd-Elsalam, K.A. Emerging Frontiers in Nanotechnology for Precision Agriculture: Advancements, Hurdles and Prospects. Agrochemicals 2023, 2, 220–256. [Google Scholar] [CrossRef]
  10. Khan, N.; Ray, R.L.; Sargani, G.R.; Ihtisham, M.; Khayyam, M.; Ismail, S. Current Progress and Future Prospects of Agriculture Technology: Gateway to Sustainable Agriculture. Sustainability 2021, 13, 4883. [Google Scholar] [CrossRef]
  11. Wang, D.; Cao, W.; Zhang, F.; Li, Z.; Xu, S.; Wu, X. A Review of Deep Learning in Multiscale Agricultural Sensing. Remote Sens. 2022, 14, 559. [Google Scholar] [CrossRef]
  12. Bendig, J.; Yu, K.; Aasen, H.; Bolten, A.; Bennertz, S.; Broscheit, J.; Gnyp, M.L.; Bareth, G. Combining UAV-Based Plant Height from Crop Surface Models, Visible, and near Infrared Vegetation Indices for Biomass Monitoring in Barley. Int. J. Appl. Earth Obs. Geoinf. 2015, 39, 79–87. [Google Scholar] [CrossRef]
  13. Radočaj, D.; Šiljeg, A.; Plaščak, I.; Marić, I.; Jurišić, M. A Micro-Scale Approach for Cropland Suitability Assessment of Permanent Crops Using Machine Learning and a Low-Cost UAV. Agronomy 2023, 13, 362. [Google Scholar] [CrossRef]
  14. Radočaj, D.; Gašparović, M.; Jurišić, M. Open Remote Sensing Data in Digital Soil Organic Carbon Mapping: A Review. Agriculture 2024, 14, 1005. [Google Scholar] [CrossRef]
  15. Wilke, N.; Siegmann, B.; Postma, J.A.; Muller, O.; Krieger, V.; Pude, R.; Rascher, U. Assessment of Plant Density for Barley and Wheat Using UAV Multispectral Imagery for High-Throughput Field Phenotyping. Comput. Electron. Agric. 2021, 189, 106380. [Google Scholar] [CrossRef]
  16. Maguluri, L.P.; Geetha, B.; Banerjee, S.; Srivastava, S.S.; Nageswaran, A.; Mudalkar, P.K.; Raj, G.B. Sustainable Agriculture and Climate Change: A Deep Learning Approach to Remote Sensing for Food Security Monitoring. Remote Sens. Earth Syst. Sci. 2024, 7, 709–721. [Google Scholar] [CrossRef]
  17. Mohsan, S.A.H.; Khan, M.A.; Noor, F.; Ullah, I.; Alsharif, M.H. Towards the Unmanned Aerial Vehicles (UAVs): A Comprehensive Review. Drones 2022, 6, 147. [Google Scholar] [CrossRef]
  18. Aslan, M.F.; Durdu, A.; Sabanci, K.; Ropelewska, E.; Gültekin, S.S. A Comprehensive Survey of the Recent Studies with UAV for Precision Agriculture in Open Fields and Greenhouses. Appl. Sci. 2022, 12, 1047. [Google Scholar] [CrossRef]
  19. Benincasa, P.; Antognelli, S.; Brunetti, L.; Fabbri, C.A.; Natale, A.; Sartoretti, V.; Modeo, G.; Guiducci, M.; Tei, F.; Vizzari, M. Reliability of NDVI Derived by High Resolution Satellite and UAV Compared to In-Field Methods for the Evaluation of Early Crop N Status and Grain Yield in Wheat. Exp. Agric. 2018, 54, 604–622. [Google Scholar] [CrossRef]
  20. Khanal, S.; Kc, K.; Fulton, J.P.; Shearer, S.; Ozkan, E. Remote Sensing in Agriculture—Accomplishments, Limitations, and Opportunities. Remote Sens. 2020, 12, 3783. [Google Scholar] [CrossRef]
  21. Kakarla, S.C.; Costa, L.; Ampatzidis, Y.; Zhang, Z. Applications of UAVs and Machine Learning in Agriculture. In Unmanned Aerial Systems in Precision Agriculture: Technological Progresses and Applications; Zhang, Z., Liu, H., Yang, C., Ampatzidis, Y., Zhou, J., Jiang, Y., Eds.; Springer Nature: Singapore, 2022; pp. 1–19. ISBN 978-981-19-2027-1. [Google Scholar]
  22. Mulla, D.J. Satellite Remote Sensing for Precision Agriculture. In Sensing Approaches for Precision Agriculture; Kerry, R., Escolà, A., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 19–57. ISBN 978-3-030-78431-7. [Google Scholar]
  23. Bouguettaya, A.; Zarzour, H.; Kechida, A.; Taberkit, A.M. Deep Learning Techniques to Classify Agricultural Crops through UAV Imagery: A Review. Neural Comput. Appl. 2022, 34, 9511–9536. [Google Scholar] [CrossRef]
  24. Kaushalya Madhavi, B.G.; Bhujel, A.; Kim, N.E.; Kim, H.T. Measurement of Overlapping Leaf Area of Ice Plants Using Digital Image Processing Technique. Agriculture 2022, 12, 1321. [Google Scholar] [CrossRef]
  25. Jurado, J.M.; López, A.; Pádua, L.; Sousa, J.J. Remote Sensing Image Fusion on 3D Scenarios: A Review of Applications for Agriculture and Forestry. Int. J. Appl. Earth Obs. Geoinf. 2022, 112, 102856. [Google Scholar] [CrossRef]
  26. Qadri, S.A.A.; Huang, N.-F.; Wani, T.M.; Bhat, S.A. Advances and Challenges in Computer Vision for Image-Based Plant Disease Detection: A Comprehensive Survey of Machine and Deep Learning Approaches. IEEE Trans. Autom. Sci. Eng. 2025, 22, 2639–2670. [Google Scholar] [CrossRef]
  27. Younesi, A.; Ansari, M.; Fazli, M.; Ejlali, A.; Shafique, M.; Henkel, J. A Comprehensive Survey of Convolutions in Deep Learning: Applications, Challenges, and Future Trends. IEEE Access 2024, 12, 41180–41218. [Google Scholar] [CrossRef]
  28. Retallack, A.; Finlayson, G.; Ostendorf, B.; Lewis, M. Using Deep Learning to Detect an Indicator Arid Shrub in Ultra-High-Resolution UAV Imagery. Ecol. Indic. 2022, 145, 109698. [Google Scholar] [CrossRef]
  29. Neupane, K.; Baysal-Gurel, F. Automatic Identification and Monitoring of Plant Diseases Using Unmanned Aerial Vehicles: A Review. Remote Sens. 2021, 13, 3841. [Google Scholar] [CrossRef]
  30. Tamayo-Vera, D.; Wang, X.; Mesbah, M. A Review of Machine Learning Techniques in Agroclimatic Studies. Agriculture 2024, 14, 481. [Google Scholar] [CrossRef]
  31. Khan, A.; Vibhute, A.D.; Mali, S.; Patil, C.H. A Systematic Review on Hyperspectral Imaging Technology with a Machine and Deep Learning Methodology for Agricultural Applications. Ecol. Inform. 2022, 69, 101678. [Google Scholar] [CrossRef]
  32. Kuswidiyanto, L.W.; Noh, H.-H.; Han, X. Plant Disease Diagnosis Using Deep Learning Based on Aerial Hyperspectral Images: A Review. Remote Sens. 2022, 14, 6031. [Google Scholar] [CrossRef]
  33. Shahi, T.B.; Xu, C.-Y.; Neupane, A.; Guo, W. Recent Advances in Crop Disease Detection Using UAV and Deep Learning Techniques. Remote Sens. 2023, 15, 2450. [Google Scholar] [CrossRef]
  34. Telli, K.; Kraa, O.; Himeur, Y.; Ouamane, A.; Boumehraz, M.; Atalla, S.; Mansoor, W. A Comprehensive Review of Recent Research Trends on Unmanned Aerial Vehicles (UAVs). Systems 2023, 11, 400. [Google Scholar] [CrossRef]
  35. Ampatzidis, Y.; Partel, V. UAV-Based High Throughput Phenotyping in Citrus Utilizing Multispectral Imaging and Artificial Intelligence. Remote Sens. 2019, 11, 410. [Google Scholar] [CrossRef]
  36. Kerkech, M.; Hafiane, A.; Canals, R. Vine Disease Detection in UAV Multispectral Images Using Optimized Image Registration and Deep Learning Segmentation Approach. Comput. Electron. Agric. 2020, 174, 105446. [Google Scholar] [CrossRef]
  37. Csillik, O.; Cherbini, J.; Johnson, R.; Lyons, A.; Kelly, M. Identification of Citrus Trees from Unmanned Aerial Vehicle Imagery Using Convolutional Neural Networks. Drones 2018, 2, 39. [Google Scholar] [CrossRef]
  38. Selvaraj, M.G.; Vergara, A.; Montenegro, F.; Ruiz, H.A.; Safari, N.; Raymaekers, D.; Ocimati, W.; Ntamwira, J.; Tits, L.; Omondi, A.B.; et al. Detection of Banana Plants and Their Major Diseases through Aerial Images and Machine Learning Methods: A Case Study in DR Congo and Republic of Benin. ISPRS J. Photogramm. Remote Sens. 2020, 169, 110–124. [Google Scholar] [CrossRef]
  39. Reedha, R.; Dericquebourg, E.; Canals, R.; Hafiane, A. Transformer Neural Network for Weed and Crop Classification of High Resolution UAV Images. Remote Sens. 2022, 14, 592. [Google Scholar] [CrossRef]
  40. Mazzia, V.; Comba, L.; Khaliq, A.; Chiaberge, M.; Gay, P. UAV and Machine Learning Based Refinement of a Satellite-Driven Vegetation Index for Precision Agriculture. Sensors 2020, 20, 2530. [Google Scholar] [CrossRef]
  41. Chicco, D.; Jurman, G. The Advantages of the Matthews Correlation Coefficient (MCC) over F1 Score and Accuracy in Binary Classification Evaluation. BMC Genom. 2020, 21, 6. [Google Scholar] [CrossRef]
  42. Duarte-Carvajalino, J.M.; Alzate, D.F.; Ramirez, A.A.; Santa-Sepulveda, J.D.; Fajardo-Rojas, A.E.; Soto-Suarez, M. Evaluating Late Blight Severity in Potato Crops Using Unmanned Aerial Vehicles and Machine Learning Algorithms. Remote Sens. 2018, 10, 1513. [Google Scholar] [CrossRef]
  43. Robi, S.N.A.M.; Ahmad, N.; Izhar, M.A.M.; Kaidi, H.M.; Noor, N.M. Utilizing UAV Data for Neural Network-Based Classification of Melon Leaf Diseases in Smart Agriculture. Int. J. Adv. Comput. Sci. Appl. IJACSA 2024, 15, 1212–1219. [Google Scholar] [CrossRef]
  44. Shah, S.A.; Lakho, G.M.; Keerio, H.A.; Sattar, M.N.; Hussain, G.; Mehdi, M.; Vistro, R.B.; Mahmoud, E.A.; Elansary, H.O. Application of Drone Surveillance for Advance Agriculture Monitoring by Android Application Using Convolution Neural Network. Agronomy 2023, 13, 1764. [Google Scholar] [CrossRef]
  45. PlantVillage-Dataset/Raw/Color at Master—spMohanty/PlantVillage-Dataset. Available online: https://github.com/spMohanty/PlantVillage-Dataset/tree/master/raw/color (accessed on 20 March 2025).
  46. Zhou, C.; Ye, H.; Hu, J.; Shi, X.; Hua, S.; Yue, J.; Xu, Z.; Yang, G. Automated Counting of Rice Panicle by Applying Deep Learning Model to Images from Unmanned Aerial Vehicle Platform. Sensors 2019, 19, 3106. [Google Scholar] [CrossRef]
  47. Tetila, E.C.; Machado, B.B.; Menezes, G.K.; Oliveira, A.D.S.; Alvarez, M.; Amorim, W.P.; de Souza Belete, N.A.; da Silva, G.G.; Pistori, H. Automatic Recognition of Soybean Leaf Diseases Using UAV Images and Deep Convolutional Neural Networks. IEEE Geosci. Remote Sens. Lett. 2020, 17, 903–907. [Google Scholar] [CrossRef]
  48. Su, J.; Yi, D.; Su, B.; Mi, Z.; Liu, C.; Hu, X.; Xu, X.; Guo, L.; Chen, W.-H. Aerial Visual Perception in Smart Farming: Field Study of Wheat Yellow Rust Monitoring. IEEE Trans. Ind. Inform. 2021, 17, 2242–2249. [Google Scholar] [CrossRef]
  49. Amarasingam, N.; Gonzalez, F.; Salgadoe, A.S.A.; Sandino, J.; Powell, K. Detection of White Leaf Disease in Sugarcane Crops Using UAV-Derived RGB Imagery with Existing Deep Learning Models. Remote Sens. 2022, 14, 6137. [Google Scholar] [CrossRef]
  50. Shi, Y.; Han, L.; Kleerekoper, A.; Chang, S.; Hu, T. Novel CropdocNet Model for Automated Potato Late Blight Disease Detection from Unmanned Aerial Vehicle-Based Hyperspectral Imagery. Remote Sens. 2022, 14, 396. [Google Scholar] [CrossRef]
  51. Günder, M.; Ispizua Yamati, F.R.; Kierdorf, J.; Roscher, R.; Mahlein, A.-K.; Bauckhage, C. Agricultural Plant Cataloging and Establishment of a Data Framework from UAV-Based Crop Images by Computer Vision. GigaScience 2022, 11, giac054. [Google Scholar] [CrossRef]
  52. Deng, J.; Zhang, X.; Yang, Z.; Zhou, C.; Wang, R.; Zhang, K.; Lv, X.; Yang, L.; Wang, Z.; Li, P.; et al. Pixel-Level Regression for UAV Hyperspectral Images: Deep Learning-Based Quantitative Inverse of Wheat Stripe Rust Disease Index. Comput. Electron. Agric. 2023, 215, 108434. [Google Scholar] [CrossRef]
  53. Noroozi, H.; Shah-Hosseini, R. Cercospora Leaf Spot Detection in Sugar Beets Using High Spatio-Temporal Unmanned Aerial Vehicle Imagery and Unsupervised Anomaly Detection Methods. J. Appl. Remote Sens. 2024, 18, 024506. [Google Scholar] [CrossRef]
  54. Alves Nogueira, E.; Moraes Rocha, B.; da Silva Vieira, G.; Ueslei da Fonseca, A.; Paula Felix, J.; Oliveira, A., Jr.; Soares, F. Enhancing Corn Image Resolution Captured by Unmanned Aerial Vehicles with the Aid of Deep Learning. IEEE Access 2024, 12, 149090–149098. [Google Scholar] [CrossRef]
  55. Alirezazadeh, P.; Schirrmann, M.; Stolzenburg, F. A Comparative Analysis of Deep Learning Methods for Weed Classification of High-Resolution UAV Images. J. Plant Dis. Prot. 2024, 131, 227–236. [Google Scholar] [CrossRef]
  56. Ribeiro, J.B.; da Silva, R.R.; Dias, J.D.; Escarpinati, M.C.; Backes, A.R. Automated Detection of Sugarcane Crop Lines from UAV Images Using Deep Learning. Inf. Process. Agric. 2024, 11, 385–396. [Google Scholar] [CrossRef]
  57. Martins, J.A.C.; Hisano Higuti, A.Y.; Pellegrin, A.O.; Juliano, R.S.; de Araújo, A.M.; Pellegrin, L.A.; Liesenberg, V.; Ramos, A.P.M.; Gonçalves, W.N.; Sant’Ana, D.A.; et al. Assessment of UAV-Based Deep Learning for Corn Crop Analysis in Midwest Brazil. Agriculture 2024, 14, 2029. [Google Scholar] [CrossRef]
  58. Mora, J.J.; Selvaraj, M.G.; Alvarez, C.I.; Safari, N.; Blomme, G. From Pixels to Plant Health: Accurate Detection of Banana Xanthomonas Wilt in Complex African Landscapes Using High-Resolution UAV Images and Deep Learning. Discov. Appl. Sci. 2024, 6, 377. [Google Scholar] [CrossRef]
  59. Zeng, T.; Wang, Y.; Yang, Y.; Liang, Q.; Fang, J.; Li, Y.; Zhang, H.; Fu, W.; Wang, J.; Zhang, X. Early Detection of Rubber Tree Powdery Mildew Using UAV-Based Hyperspectral Imagery and Deep Learning. Comput. Electron. Agric. 2024, 220, 108909. [Google Scholar] [CrossRef]
  60. Slimani, H.; Mhamdi, J.E.; Jilbab, A. Deep Learning Structure for Real-Time Crop Monitoring Based on Neural Architecture Search and UAV. Braz. Arch. Biol. Technol. 2024, 67, e24231141. [Google Scholar] [CrossRef]
  61. Poblete-Echeverría, C.; Olmedo, G.F.; Ingram, B.; Bardeen, M. Detection and Segmentation of Vine Canopy in Ultra-High Spatial Resolution RGB Imagery Obtained from Unmanned Aerial Vehicle (UAV): A Case Study in a Commercial Vineyard. Remote Sens. 2017, 9, 268. [Google Scholar] [CrossRef]
  62. Lee, J.; Sung, S. Evaluating Spatial Resolution for Quality Assurance of UAV Images. Spat. Inf. Res. 2016, 24, 141–154. [Google Scholar] [CrossRef]
  63. Zhang, J.; Su, R.; Fu, Q.; Ren, W.; Heide, F.; Nie, Y. A Survey on Computational Spectral Reconstruction Methods from RGB to Hyperspectral Imaging. Sci. Rep. 2022, 12, 11905. [Google Scholar] [CrossRef]
  64. Lan, Y. Overview of Precision Agriculture Aviation Technology. In Precision Agricultural Aviation Application Technology; Lan, Y., Ed.; Springer Nature: Cham, Switzerland, 2025; pp. 1–38. ISBN 978-3-031-89917-1. [Google Scholar]
  65. Logavitool, G.; Horanont, T.; Thapa, A.; Intarat, K.; Wuttiwong, K. Field-Scale Detection of Bacterial Leaf Blight in Rice Based on UAV Multispectral Imaging and Deep Learning Frameworks. PLoS ONE 2025, 20, e0314535. [Google Scholar] [CrossRef]
  66. Li, L.; Zhang, S.; Wang, B. Plant Disease Detection and Classification by Deep Learning—A Review. IEEE Access 2021, 9, 56683–56698. [Google Scholar] [CrossRef]
  67. Sladojevic, S.; Arsenovic, M.; Anderla, A.; Culibrk, D.; Stefanovic, D. Deep Neural Networks Based Recognition of Plant Diseases by Leaf Image Classification. Comput. Intell. Neurosci. 2016, 2016, 3289801. [Google Scholar] [CrossRef] [PubMed]
  68. Ang, K.L.-M.; Seng, J.K.P. Big Data and Machine Learning with Hyperspectral Information in Agriculture. IEEE Access 2021, 9, 36699–36718. [Google Scholar] [CrossRef]
  69. Elavarasan, D.; Vincent, P.M.D. Crop Yield Prediction Using Deep Reinforcement Learning Model for Sustainable Agrarian Applications. IEEE Access 2020, 8, 86886–86901. [Google Scholar] [CrossRef]
  70. Rodriguez-Conde, I.; Campos, C.; Fdez-Riverola, F. Optimized Convolutional Neural Network Architectures for Efficient On-Device Vision-Based Object Detection. Neural Comput. Appl. 2022, 34, 10469–10501. [Google Scholar] [CrossRef]
  71. Sultana, F.; Sufian, A.; Dutta, P. A Review of Object Detection Models Based on Convolutional Neural Network. In Intelligent Computing: Image Processing Based Applications; Mandal, J.K., Banerjee, S., Eds.; Springer: Singapore, 2020; pp. 1–16. ISBN 978-981-15-4288-6. [Google Scholar]
  72. Bouacida, I.; Farou, B.; Djakhdjakha, L.; Seridi, H.; Kurulay, M. Innovative Deep Learning Approach for Cross-Crop Plant Disease Detection: A Generalized Method for Identifying Unhealthy Leaves. Inf. Process. Agric. 2025, 12, 54–67. [Google Scholar] [CrossRef]
  73. Gálvez-Gutiérrez, A.I.; Afonso, F.; Martínez-Heredia, J.M. On the Usage of Deep Learning Techniques for Unmanned Aerial Vehicle-Based Citrus Crop Health Assessment. Remote Sens. 2025, 17, 2253. [Google Scholar] [CrossRef]
  74. Dhruthi, P.A.; Somashekhar, B.M.; Rani, N.S.; Lin, H. Advanced Plant Species Classification with Interclass Similarity Using a Fine-Grained Computer Vision Approach. In Proceedings of the 2024 International Conference on Advances in Modern Age Technologies for Health and Engineering Science (AMATHE), Shivamogga, India, 16–17 May 2024; pp. 1–9. [Google Scholar]
  75. Albanese, A.; Nardello, M.; Brunelli, D. Low-Power Deep Learning Edge Computing Platform for Resource Constrained Lightweight Compact UAVs. Sustain. Comput. Inform. Syst. 2022, 34, 100725. [Google Scholar] [CrossRef]
  76. Zhang, Z.; Song, W.; Wu, Q.; Sun, W.; Li, Q.; Jia, L. A Novel Local Enhanced Channel Self-Attention Based on Transformer for Industrial Remaining Useful Life Prediction. Eng. Appl. Artif. Intell. 2025, 141, 109815. [Google Scholar] [CrossRef]
  77. Das, A.; Nandi, A.; Deb, I. Recent Advances in Object Detection Based on YOLO-V4 and Faster RCNN. In Mathematical Modeling for Computer Applications; John Wiley & Sons, Ltd: Hoboken, NJ, USA, 2024; pp. 405–417. ISBN 978-1-394-24843-8. [Google Scholar]
  78. Albahli, S. AgriFusionNet: A Lightweight Deep Learning Model for Multisource Plant Disease Diagnosis. Agriculture 2025, 15, 1523. [Google Scholar] [CrossRef]
  79. Huang, Z.; Bai, X.; Gouda, M.; Hu, H.; Yang, N.; He, Y.; Feng, X. Transfer Learning for Plant Disease Detection Model Based on Low-Altitude UAV Remote Sensing. Precis. Agric. 2024, 26, 15. [Google Scholar] [CrossRef]
  80. Zhang, Y.; Lin, Z.; Chen, Z.; Fang, Z.; Chen, X.; Zhu, W.; Zhao, J.; Gao, Y. SatFed: A Resource-Efficient LEO-Satellite-Assisted Heterogeneous Federated Learning Framework. Engineering 2025, in press. [Google Scholar] [CrossRef]
Figure 1. Cumulative number of studies indexed in the Web of Science Core Collection on UAVs and deep learning in agriculture overall and in specific agrotechnical operations.
Figure 2. Geographical distribution of studies indexed in Web of Science Core Collection combining deep learning and UAVs in crop leaf disease research.
Figure 3. Distribution of studies on UAVs and deep learning in agriculture indexed in the Web of Science Core Collection, by related United Nations Sustainable Development Goal (SDG).
Figure 4. Trends in UAV-based agricultural and crop leaf disease research using deep learning, by imaging sensor type (RGB, multispectral, and hyperspectral).
Table 1. Analyzed UAV properties from selected articles combining “unmanned aerial vehicles” OR “UAV” OR “drone” with “deep learning”, “agriculture” and “leaf disease” OR “crop disease”.

| Publication Year | Country | Study Area | Imaging Sensors | Vegetation Indices | UAV Type (Multirotor/Fixed Wing) | GSD | Reference |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 2018 | USA | 64.6 ha | Multispectral (4 bands) | NDVI | Fixed wing (senseFly eBee) | 12 cm | [37] |
| 2018 | Colombia | / | Multispectral (4 bands) | NDVI | Multirotor (3DR IRIS+) | / | [42] |
| 2019 | China | / | RGB | / | Multirotor (DJI Matrice 600) | 2.51 cm | [46] |
| 2019 | USA | 5.7 ha | Multispectral (5 bands) | NDVI | Multirotor (DJI Matrice 600) | / | [35] |
| 2020 | France | 3.3 ha | Multispectral (4 bands) | NDVI | Multirotor (Scanopy) | 1 cm | [36] |
| 2020 | DR Congo and Benin | / | Multispectral (5 bands) | 13 indices | Multirotor (DJI Phantom 4 Multispectral) | 3.5–8.0 cm | [38] |
| 2020 | Brazil | / | RGB | / | Multirotor (DJI Phantom 3) | / | [47] |
| 2020 | Italy | 2.5 ha | Multispectral (4 bands) | NDVI | / | 5 cm | [40] |
| 2021 | China | / | Multispectral (5 bands) | OSAVI | Multirotor (DJI Matrice 100) | 1.3 cm | [48] |
| 2022 | France | 4 ha | RGB | / | Multirotor (EagleView Starfury) | / | [39] |
| 2022 | Sri Lanka | 0.75 ha | RGB | / | Multirotor (DJI Phantom 4) | 1.1 cm | [49] |
| 2022 | China | 0.25 ha | Hyperspectral (125 bands) | / | Multirotor (DJI S1000) | 2.5 cm | [50] |
| 2022 | Germany | 0.3 ha | RGB | NGRDI, GLI | Multirotor (DJI Matrice 600) | / | [51] |
| 2023 | Pakistan | / | RGB | / | Multirotor (DJI Mini 2) | / | [44] |
| 2023 | China | 1.36 ha | Hyperspectral (176 bands) | BI, DBSI | Multirotor (DJI Matrice 600) | / | [52] |
| 2024 | Germany | 0.3 ha | Multispectral (6 bands) | 9 indices | Multirotor (DJI Matrice 210) | / | [53] |
| 2024 | USA | / | RGB | / | Multirotor (DJI Matrice 100) | 0.5 cm | [54] |
| 2024 | Germany | 0.77 ha | RGB | / | Multirotor (OktopusXL) | / | [55] |
| 2024 | Brazil | / | RGB | / | Fixed wing (senseFly eBee) | 5.3 cm | [56] |
| 2024 | Brazil | / | RGB | / | Multirotor (DJI Mavic 2 Pro) | 3 cm | [57] |
| 2024 | DR Congo | / | Multispectral (5 bands) | 7 indices | Multirotor (DJI Phantom 4) | 6.5 cm | [58] |
| 2024 | China | 1.6 ha | Hyperspectral (150 bands) | 22 indices | Multirotor (DJI Matrice 600) | 10 cm | [59] |
| 2024 | Malaysia | / | RGB | / | Multirotor (DJI Mavic Air 2s) | / | [43] |
| 2024 | Morocco | / | RGB | / | Multirotor (DJI Mavic 3) | / | [60] |

“/” denotes information not reported; GSD: ground sampling distance. Manufacturers: senseFly (Lausanne, Switzerland), 3DR (Chula Vista, CA, USA), DJI (Shenzhen, China), Scanopy (Quincy, France), EagleView (Rochester, NY, USA).
Table 2. An overview of Web of Science Core Collection indexed research articles on UAV-based crop disease detection using deep learning, identified through a systematic search combining the terms “unmanned aerial vehicles” OR “UAV” OR “drone” with “deep learning”, “agriculture”, and “leaf disease” OR “crop disease”.

| Publication Year | Country | Crops | Deep Learning Algorithms | Maximum Accuracy Metrics | Reference |
| --- | --- | --- | --- | --- | --- |
| 2018 | USA | Citrus trees | CNN | F1-score = 96.24% | [37] |
| 2018 | Colombia | Potato | CNN (custom) | / | [42] |
| 2019 | China | Rice | CNN (AlexNet, VGG, Inception-v3, improved R-FCN) | OA = 91.67%, F1-score = 87.4% | [46] |
| 2019 | USA | Citrus trees | CNN (YOLO v3) | F1-score = 99.8% | [35] |
| 2020 | France | Vineyard | CNN (SegNet) | OA = 95.02%, F1-score = 97.66% | [36] |
| 2020 | DR Congo and Benin | Banana plants | CNN (VGG-16) | OA = 97% | [38] |
| 2020 | Brazil | Soybean | CNN (Inception-v3, ResNet, VGG-19) | OA = 99.04% | [47] |
| 2020 | Italy | Vineyard | CNN (RarefyNet) | / | [40] |
| 2021 | China | Wheat | CNN (U-Net) | F1-score = 92% | [48] |
| 2022 | France | Spinach | CNN (ViT, ResNet, EfficientNet) | F1-score = 99.4% | [39] |
| 2022 | Sri Lanka | Sugarcane | YOLOv5, YOLOR, DETR, Faster R-CNN | mAP = 79% | [49] |
| 2022 | China | Potato | Custom CNN, 3D-CNN | OA = 97.33% | [50] |
| 2022 | Germany | Sugar beet, cauliflower | Mask R-CNN | Precision > 95%, recall > 97% | [51] |
| 2023 | Pakistan | Tomato, potato, pepper | EfficientNet-B3 | F1-score = 98.8% | [44] |
| 2023 | China | Wheat | DeepLabv3+, HRNet, OCRNet, UNet | R² = 0.875 | [52] |
| 2024 | Germany | Sugar beet | Custom hybrid model | F1-score = 78.76% | [53] |
| 2024 | USA | Maize | MuLUT, LeRF, Real-ESRGAN | / | [54] |
| 2024 | Germany | Wheat | MobileNet, ResNet, MobileViT | OA = 89.06%, F1-score = 88.95% | [55] |
| 2024 | Brazil | Sugarcane | U-Net, LinkNet, PSPNet | Dice coefficient = 0.721 | [56] |
| 2024 | Brazil | Maize | FCN, DeepLabV3+, SegFormer | OA = 91.41% | [57] |
| 2024 | DR Congo | Banana | Faster R-CNN, YOLOv8 | F1-score = 98% | [58] |
| 2024 | China | Rubber tree | MSA-CNN | OA = 94.11% | [59] |
| 2024 | Malaysia | Melon | YOLOv8 | mAP = 83.2% | [43] |
| 2024 | Morocco | Beans | YOLOv5, YOLOv8, Faster R-CNN, YOLO-NAS | mAP = 73.7% | [60] |

“/” denotes metrics not reported; OA: overall accuracy; mAP: mean average precision.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
