Review

Integrating UAVs and Deep Learning for Plant Disease Detection: A Review of Techniques, Datasets, and Field Challenges with Examples from Cassava

by Wasiu Akande Ahmed 1,*, Olayinka Ademola Abiola 1, Dongkai Yang 1, Seyi Festus Olatoyinbo 2 and Guifei Jing 1

1 Regional Centre for Space Science and Technology Education in Asia and the Pacific (RCSSTEAP), Hangzhou International Innovation Institute, Beihang University, Hangzhou 311115, China
2 National Space Research and Development Agency (NASRDA), Obasanjo Space Centre, Abuja P.M.B. 437, Nigeria
* Author to whom correspondence should be addressed.
Horticulturae 2026, 12(1), 87; https://doi.org/10.3390/horticulturae12010087
Submission received: 22 November 2025 / Revised: 4 January 2026 / Accepted: 7 January 2026 / Published: 12 January 2026

Abstract

Cassava remains a critical food-security crop across Africa and Southeast Asia but is highly vulnerable to diseases such as cassava mosaic disease (CMD) and cassava brown streak disease (CBSD). Traditional diagnostic approaches are slow, labor-intensive, and inconsistent under field conditions. This review synthesizes current advances in combining unmanned aerial vehicles (UAVs) with deep learning (DL) to enable scalable, data-driven cassava disease detection. It examines UAV platforms, sensor technologies, flight protocols, image preprocessing pipelines, DL architectures, and existing datasets, and it evaluates how these components interact within UAV–DL disease-monitoring frameworks. The review also compares model performance across convolutional neural network-based and Transformer-based architectures, highlighting metrics such as accuracy, recall, F1-score, inference speed, and deployment feasibility. Persistent challenges—such as limited UAV-acquired datasets, annotation inconsistencies, geographic model bias, and inadequate real-time deployment—are identified and discussed. Finally, the paper proposes a structured research agenda including lightweight edge-deployable models, UAV-ready benchmarking protocols, and multimodal data fusion. This review provides a consolidated reference for researchers and practitioners seeking to develop practical and scalable cassava-disease detection systems.

1. Introduction

Cassava (Manihot esculenta) remains one of the most important food-security crops across Africa and Southeast Asia, ranking as the fourth largest global staple and serving over a billion people worldwide [1]. Its resilience to drought, ability to grow in low-fertility soils, and year-round cultivation make it indispensable to rural livelihoods and national food systems [1]. Africa contributes more than half of global cassava output, with Nigeria recognized as the world’s leading producer [1,2]. Yet, despite its agronomic advantages, cassava yields across Sub-Saharan Africa remain far below their biological potential due largely to destructive diseases. Cassava mosaic disease (CMD) and cassava brown streak disease (CBSD) account for some of the highest losses—often exceeding USD 2–3 billion annually [2,3,4,5]. Additional biotic stresses, including cassava bacterial blight (CBB), cassava green mite (CGM), and red mite damage (RMD), further reduce productivity and compromise root quality [6,7].
Traditional disease diagnosis methods depend on expert visual assessment, which is labor-intensive, subjective, and limited in scalability, particularly in heterogeneous or remote cassava-growing regions [8,9]. These challenges have accelerated the adoption of precision-agriculture technologies such as unmanned aerial vehicles (UAVs) and deep learning (DL), which offer a pathway to automated, timely, and high-resolution disease detection at field scale. UAVs enable rapid acquisition of detailed imagery at low altitudes, while DL architectures—ranging from convolutional neural networks (CNNs) to newer Transformer-based models—provide strong capabilities for extracting discriminative features from cassava leaves under diverse field conditions [4,10,11,12,13]. Their integration presents a promising solution for early-stage disease surveillance, allowing farmers and agricultural agencies to intervene before yield losses escalate.
Despite growing interest in cassava disease detection using machine learning and remote sensing, the current body of UAV-enabled deep learning research remains technically fragmented and methodologically inconsistent. Existing studies vary widely in UAV platform selection, sensor configuration, flight protocols, annotation granularity, and evaluation metrics, making cross-study comparison difficult and limiting reproducibility [6,7,14]. More critically, many reported models demonstrate strong performance only under narrowly defined experimental conditions, with limited evidence of generalization across cultivars, geographic regions, or imaging environments. Dataset-related issues—such as severe class imbalance, inconsistent labelling between whole-leaf and leaflet-level annotations [6,15], and geographically constrained data collection—further undermine the reliability of reported results and obscure the true operational readiness of UAV–DL systems for cassava pathology.
While some review papers have addressed cassava diseases, plant disease detection, or UAV-based agricultural monitoring more broadly (Table 1), none provide a cassava-specific synthesis that simultaneously interrogates UAV imaging strategies, DL architectures, dataset composition, and field-level deployment constraints. As a result, existing reviews tend to summarize techniques without critically examining how upstream design choices in data acquisition and annotation propagate into downstream model performance and transferability. This lack of integrative, cassava-focused analysis leaves unresolved questions regarding best practices for scalable UAV–DL deployment in real farming contexts.
Table 1. Overview of related review papers and the gaps addressed by the present study.
| Citation | Scope | Strengths | Limitations | Gap Filled by This Review |
|---|---|---|---|---|
| Ahmed et al. [16] | Compares ED-Swin (UAV) vs. Inception-v3 (leaf images) for cassava. | Clear model-level comparison. | Very narrow: only two models; no dataset or pipeline coverage. | Provides a broad synthesis of DL models, datasets, and UAV workflows. |
| Zhu et al. [17] | UAV + DL for general crop diseases. | Strong UAV–sensor overview; global trends. | Not cassava-specific; no cassava datasets or phenotype issues. | Gives a cassava-focused UAV–DL analysis tied to cassava disease traits. |
| Kouadio et al. [18] | UAV disease detection across many crops. | Large quantitative survey (103 papers). | Cassava hardly represented. | Offers a full UAV–DL review dedicated to cassava. |
| Chusyairi et al. [19] | UAV monitoring of cassava fertilization/irrigation. | Good vegetation-index coverage. | Not disease-focused; no pathogen context. | Focuses specifically on cassava disease symptoms and detection. |
| Vasavi et al. [20] | ML/DL for crop leaf disease classification. | Clear ML vs. DL comparison. | Static images; no UAV considerations; not cassava-specific. | Explains how UAV imaging + dataset issues affect cassava disease detection. |
Addressing this gap, the present review systematically consolidates UAV-enabled cassava disease studies and critically evaluates the technical interdependencies between platform selection, sensor characteristics, dataset design, and deep learning methodology. By synthesizing evidence across studies and highlighting recurring methodological limitations, this review aims to clarify current research bottlenecks and provide a structured foundation for developing robust, transferable, and operationally viable UAV–DL systems for cassava disease surveillance.
A structured review methodology was adopted to ensure comprehensive and transparent coverage of relevant studies. Literature was searched across Scopus, Web of Science, IEEE Xplore, and Google Scholar for the years 2010–2025 using combinations of keywords such as cassava disease, UAV imaging, deep learning, crop health detection, and remote sensing. Reference lists of key publications were also screened to identify additional studies. Articles were included if they focused on cassava disease detection or cassava plant-health assessment using UAV imagery, aerial sensing, or DL models, and if they reported measurable quantitative outcomes. Studies that used only laboratory leaf images, lacked imaging components, or were unrelated to ML/DL-based cassava analysis were excluded.
The initial search produced 254 documents, which were screened by title and abstract, reducing the list to 118. Full-text examination resulted in 52 studies that met the inclusion criteria. For each selected study, information regarding UAV platforms, sensor specifications, dataset characteristics, disease type, deep learning architecture, and performance metrics was extracted and synthesized. These data were consolidated into the comparative tables and thematic analyses presented in Section 4, Section 5, Section 6 and Section 7.
Artificial intelligence-based tools were used to assist in summarizing, reorganizing, and improving the readability and clarity of the manuscript. Generative AI tools were employed to assist in specific stages of this review’s synthesis and visualization processes. Google NotebookLM was used to extract structured information and performance metrics from the reviewed literature. Claude 3.5 Sonnet (Anthropic) and ChatGPT-4 (OpenAI) were utilized for processing extracted data, synthesizing comparative analyses, and generating Python code for figure visualization in Google Colaboratory (Python 3.10). The AI tools did not contribute to study design, literature selection criteria, or interpretation of findings.
All information presented in this review was derived from real, valid, and citable sources, and the authors take full responsibility for the accuracy, integrity, and originality of the work. Final intellectual synthesis, critical analysis, and all conclusions presented in this review remain entirely the work of the authors.
This paper is structured as follows: Section 2 summarizes major cassava diseases and their visual detectability; Section 3 reviews UAV platforms, sensors, and data-acquisition strategies; Section 4 synthesizes DL techniques applied to cassava disease recognition; Section 5 presents the UAV–DL pipeline and representative case studies; Section 6 compares existing studies; Section 7 discusses datasets, benchmarking challenges, and research gaps; and Section 8 concludes with actionable future directions.

2. Overview of Cassava Diseases and Detection Needs

Cassava is an essential staple for more than 800 million people globally, and its productivity is increasingly threatened by a range of diseases and pests that cause substantial yield losses and undermine food security in tropical regions [1,2]. A clear understanding of the major cassava diseases—their causal agents, symptom characteristics, and detectability through UAV imagery—is central to designing effective automated disease-monitoring frameworks.

2.1. Cassava Diseases

2.1.1. Cassava Mosaic Disease (CMD)

Cassava mosaic disease (CMD), caused by several cassava mosaic begomoviruses such as the East African cassava mosaic virus, is one of the most widespread and destructive cassava diseases across Africa [6,21]. CMD symptoms include mosaic, mottling, chlorosis, leaf deformation, and stunted growth, which make it visually distinguishable in red–green–blue (RGB) and multispectral UAV imagery [6,7,21]. Severe CMD infection can reduce photosynthetic efficiency and lead to yield losses exceeding 80%, contributing to estimated annual economic damage of up to USD 2.7 billion in Sub-Saharan Africa [4,22]. Modern DL architectures—ranging from CNNs to ED-Swin Transformers—have achieved strong performance in CMD recognition from aerial and field images [13].

2.1.2. Cassava Brown Streak Disease (CBSD)

Cassava brown streak disease (CBSD), caused by Cassava brown streak virus (CBSV) and Ugandan cassava brown streak virus (UCBSV), presents with yellowing veins, patchy chlorosis, chlorotic mottling, and characteristic root necrosis that severely degrades root quality [6,7]. Symptoms vary across cassava varieties, growing environments, and disease stages, making early visual detection more challenging than CMD [6,14]. Although UAV multispectral imaging can detect subtle changes in leaf reflectance, inconsistent symptom expression remains a major barrier to automated recognition. CBSD contributes substantially to annual losses, and alongside CMD accounts for more than USD 1 billion in yield decline across East and Central Africa [6].

2.1.3. Cassava Bacterial Blight (CBB)

Cassava bacterial blight (CBB), caused by Xanthomonas axonopodis pv. manihotis, is prominent in humid cassava-growing regions and produces symptoms such as water-soaked angular lesions, black blight, wilt, and dieback [13]. These features can be detected in high-resolution UAV imagery, although distinctiveness varies with environmental conditions. While comprehensive yield-loss estimates for CBB are limited, the disease remains a significant constraint to cassava vigor and productivity [22].

2.1.4. Cassava Green Mite (CGM)

Pest-induced disorders such as cassava green mite (CGM) and red mite damage (RMD) also impact cassava health. CGM produces small white scratch-like spots, chlorosis, and leaf contraction [7]; RMD causes reddish-brown spotting and leaf discoloration, particularly under severe infestations [6]. These symptoms overlap with CMD and CBSD, contributing to misclassification in DL models. Nevertheless, UAV-mounted RGB sensors paired with DL have achieved up to 96% accuracy in detecting RMD-related symptoms [6].

2.1.5. Other Cassava Diseases

Other notable diseases include brown leaf spot (BLS), caused by Mycosphaerella henningsii, which presents as brown lesions with yellow halos and is efficiently identified using DL models (≈98% reported accuracy), and cassava phytoplasma disease (CPD), which is associated with color-intensity changes detectable through UAV-supported geographical information system (GIS) mapping [6,23].
Figure 1 and Figure 2 summarize major cassava diseases, their causal agents, key symptoms, affected plant parts, and recommended management strategies. Together, these figures provide a compact overview of disease characteristics relevant to UAV imaging and deep learning-based detection workflows.
Figure 1. Radial mind map linking cassava diseases with their causes, symptoms, affected plant parts, and management options.
Figure 2. Visual representation of detectable cassava diseases: (a) cassava mosaic disease (CMD), (b) cassava green mite (CGM), (c) cassava bacterial blight (CBB), (d) cassava brown streak disease (CBSD) [7].

2.2. Why Use UAVs and DL for Smart Agriculture

Traditional field scouting methods are slow, subjective, and inconsistent due to symptom variability across cultivars, plant age, and environmental conditions [8,9]. UAVs overcome these constraints by enabling rapid, scalable field coverage and by acquiring high-resolution images at controlled altitudes, thereby standardizing visual inputs for DL models [10,12,24]. DL architectures, particularly CNNs, YOLO-based detectors, and Transformer models, further enable automated recognition of complex foliar patterns associated with viral, bacterial, and pest-induced cassava diseases [4,10,12,13].
UAV–DL systems thus allow early detection of disease outbreaks, objective and repeatable assessments across growing seasons, scalable monitoring beyond the capacity of manual surveys, and real-time diagnostics for timely interventions.
Figure 3 and Figure 4 illustrate cassava detectability patterns and disease-specific performance distributions across multiple DL studies. Table 2 summarizes diseases, symptoms, causal agents, and UAV detectability characteristics [7,25].
Figure 3. Real-time cassava detection using machine learning [24].
Table 2. Summary of major cassava diseases and unmanned aerial vehicle (UAV) detectable features.
| Disease | Causal Agent | Key Symptoms | UAV Detectability | Impact on Yield | Source |
|---|---|---|---|---|---|
| CMD 1 | Begomoviruses | Mosaic, chlorosis, twisted/stunted leaves | High—RGB and multispectral | 30–40% average loss; up to 97.3%; annual losses of USD 1.9–2.7 billion | [2,5] |
| CBSD 2 | CBSV 8, UCBSV 9 | Yellowing veins, root necrosis, patchy chlorosis | Moderate—multispectral; variable expression | Up to 70–75% loss in susceptible varieties; USD 100 million annual loss | [2,5] |
| CBB 3 | Xanthomonas axonopodis | Leaf blight, black spots, wilting | Moderate to High—RGB/HSI | Major; not precisely quantified | [2,26] |
| CGM 4 | Mononychellus tanajoa | Scratch-like white spots, leaf shrinkage | Moderate—RGB with image recognition | Up to 30% yield loss | [2,27] |
| RMD 5 | Oligonychus biharensis | Reddish-brown leaf spots, discoloration | High—strong visual cues | Serious but unquantified | |
| BLS 6 | Mycosphaerella henningsii | Brown circular spots, yellowing | High—DL achieves ~98% accuracy | Typically low yield loss | |
| CPD 7 | Phytoplasma | Color intensity variation | Moderate—GIS heat mapping | Serious; data limited | [28] |
1 Cassava mosaic disease; 2 cassava brown streak disease; 3 cassava bacterial blight; 4 cassava green mite; 5 red mite damage; 6 brown leaf spot; 7 cassava phytoplasma disease; 8 Cassava brown streak virus; 9 Ugandan cassava brown streak virus.
Figure 4. Comparative performance of deep learning models reported in detection studies of cassava diseases: cassava mosaic disease (CMD), cassava brown streak disease (CBSD), cassava bacterial blight (CBB), cassava green mite (CGM). Panels (a,b) show class-level distributions with healthy samples clearly separated from disease categories. Panel (c) presents the minimum, average, and maximum accuracies achieved across individual studies (data extracted from studies [6,7,29,30,31,32] and processed).

3. UAV Technologies for Agricultural Monitoring

UAVs, also known as drones, have become central tools in precision agriculture due to their ability to acquire rapid, high-resolution imagery across diverse field conditions [33,34,35]. Their value in plant disease monitoring derives from their low-altitude imaging capability, operational flexibility, and compatibility with a range of sensors used in plant health assessment. UAV platforms can be broadly categorized into fixed-wing, rotary-wing, and hybrid VTOL (Vertical Take-Off and Landing), each offering distinct operational characteristics relevant to disease detection tasks [33,35,36].

3.1. Sensor Technologies for UAV-Based Agricultural Monitoring

The effectiveness of UAV-assisted disease detection depends largely on the sensor used, as the spectral and spatial properties of the sensor determine the type of plant stress signatures that can be captured [37,38,39]. UAV-mounted sensors commonly include RGB, multispectral, hyperspectral, thermal, and LiDAR systems, each offering varying levels of diagnostic value for cassava diseases [38,40]. Table 3 provides a comparative summary of sensor types with their key properties. Figure 5 gives a visual illustration of the spectral bands of all camera sensor types.
Table 3. Comparative summary of unmanned aerial vehicle (UAV)-mounted sensor technologies for disease detection.
| Sensor Type | Spectral Range | Use Case | Advantages | Limitations | Suitability for Cassava Disease Detection | Sources |
|---|---|---|---|---|---|---|
| RGB Camera | Visible (Red, Green, Blue) | General crop monitoring, cassava leaf visual symptoms | Low cost, lightweight, widely available, high-resolution imaging | Cannot capture non-visible stress indicators; limited spectral data | High—effective for CMD and CGM where visual symptoms are clear | [24,35] |
| Multispectral | Visible + Near-Infrared (e.g., Red-edge) | Vegetation health indices (NDVI 10, GNDVI 11), early stress detection | Captures subtle changes in plant physiology; ideal for disease mapping | More expensive than RGB; fewer commercial models for very low-altitude UAVs | High—suitable for early-stage detection of CBSD and BLS | [41,42,43] |
| Hyperspectral | Dozens to hundreds of narrow bands | Early detection of physiological and biochemical stress | High spectral resolution; very sensitive to stress signals | Very high cost, complex processing, heavier payload | Medium—excellent potential but limited by cost and UAV payload limits | [44,45] |
| Thermal Camera | Infrared (surface temperature) | Water stress, fungal infections, plant stress diagnostics | Detects stress not visible in RGB; complements visual data | Low spatial resolution; affected by environment and requires calibration | Medium—indirect support; more useful in multi-sensor configurations | [46,47] |
| LiDAR | Laser-based 3D structure detection | Canopy structure, plant height, volume estimation | Provides 3D data, unaffected by light conditions | High cost; not specific to disease unless combined with spectral imaging | Complementary—3D structural tool for cassava | [43] |
10 Normalized Difference Vegetation Index; 11 Green Normalized Difference Vegetation Index.
Figure 5. Visual summary of different types of camera sensors [17]. RGB = red–green–blue.

3.1.1. Multispectral Cameras

Multispectral sensors capture data across specific visible and near-infrared bands that are sensitive to vegetation physiology [48,49,50,51,52]. They allow the computation of vegetation indices such as Normalized Difference Vegetation Index (NDVI), Green Normalized Difference Vegetation Index (GNDVI), and red-edge metrics that can reveal early-stage disease signals not detectable by visual inspection [51,53,54,55,56]. In cassava fields, multispectral imaging supports early stress detection related to CBSD, nutrient deficiency, or physiological decline. However, multispectral sensors are costlier than RGB cameras and often require more advanced post-processing expertise [49].
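As a minimal illustration of these indices, NDVI and GNDVI can be computed per pixel from band reflectances; the values below are hypothetical reflectances chosen for illustration, not measurements from any reviewed study, and a real multispectral raster would be processed array-wise rather than pixel by pixel.

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red) if (nir + red) != 0 else 0.0

def gndvi(nir: float, green: float) -> float:
    """Green NDVI: (NIR - Green) / (NIR + Green)."""
    return (nir - green) / (nir + green) if (nir + green) != 0 else 0.0

# Healthy canopy reflects strongly in NIR and absorbs red light, giving
# a high NDVI; stress that lowers NIR reflectance pulls the index down.
healthy = ndvi(nir=0.60, red=0.08)   # ~0.76 (illustrative values)
stressed = ndvi(nir=0.35, red=0.15)  # 0.40 (illustrative values)
```

The same pattern applies to GNDVI and red-edge metrics: each is a normalized band ratio, so per-pixel maps can be thresholded or fed to downstream classifiers as additional channels.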

3.1.2. RGB Cameras

RGB sensors, despite their spectral limitations, remain the most widely used for cassava disease detection due to affordability, ease of integration, and compatibility with DL-based visual classification [57]. RGB imagery from rotary-wing UAVs has proven effective in capturing visually distinct symptoms of CMD, CBSD, CBB, and CGM [13,24,57]. Studies employing UAVs such as the DJI Phantom 4 Pro V2.0 at altitudes of 2–5 m have demonstrated that high-resolution RGB imagery is sufficient for modern DL models—especially YOLO variants, MobileNet-based architectures, and ED-Swin Transformers—to perform accurate disease detection [13,24,41,58].

3.1.3. Other Sensors: Hyperspectral, Thermal and LiDAR

Hyperspectral sensors, capable of capturing hundreds of narrow spectral bands, provide exceptional sensitivity to biochemical plant stress but remain limited in cassava research due to high cost, heavy payload requirements, and complex data processing [17,59]. Thermal cameras provide valuable indicators of water stress or transpiration anomalies but suffer from low spatial resolution and environmental sensitivity [60,61,62,63]. LiDAR offers 3D canopy structure information useful for plant architecture or height assessment, though its contribution to disease detection is indirect and best used as a complementary modality [64,65].
In summary, for cassava disease detection, the optimal sensor strategy involves a trade-off between spectral precision and practical affordability. Multispectral cameras offer superior diagnostic capabilities for subtle or early-stage disease symptoms, making them highly valuable in research and precision monitoring scenarios. However, for direct, cost-effective, and field-deployable cassava disease detection, RGB cameras on rotary-wing UAVs, enhanced with DL models, provide a highly viable solution. This configuration supports real-time, high-accuracy disease mapping—particularly for visually expressive diseases like CMD—without the economic or technical barriers associated with more advanced sensors.

3.2. UAV Data Acquisition Protocols for Agricultural Disease Detection

High-quality data acquisition is essential for training reliable DL models and ensuring reproducible results. UAV-based data collection must therefore adhere to well-defined flight parameters including altitude, overlap, lighting conditions, and environmental stability.

3.2.1. Flight Altitude

Low-altitude flights (2–5 m above the canopy) are optimal for cassava disease detection, enabling the ultra-high-resolution imagery required to capture subtle foliar symptoms [7,13,17,66]. Depending on camera resolution, such altitudes yield ground sampling distances of 0.5–10 cm/pixel, sufficient for distinguishing visually similar symptoms such as mosaic mottling, chlorosis, and necrotic lesions [24,67].
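The altitude-to-resolution relationship follows the standard photogrammetric formula GSD = (sensor width x altitude) / (focal length x image width). A small sketch, using nominal 1-inch-sensor parameters assumed here to approximate a Phantom-class camera (13.2 mm sensor width, 8.8 mm focal length, 5472 px image width):

```python
def gsd_cm_per_px(sensor_width_mm: float, focal_length_mm: float,
                  altitude_m: float, image_width_px: int) -> float:
    """Ground sampling distance (cm/pixel) via the standard photogrammetric formula."""
    return (sensor_width_mm * altitude_m * 100.0) / (focal_length_mm * image_width_px)

# Assumed nominal 1-inch-sensor parameters (not from a specific reviewed study):
gsd_3m = gsd_cm_per_px(13.2, 8.8, 3.0, 5472)  # ~0.08 cm/px at 3 m
gsd_5m = gsd_cm_per_px(13.2, 8.8, 5.0, 5472)  # ~0.14 cm/px at 5 m
```

Under these assumptions, the 2–5 m altitudes reported above land comfortably below a 1 cm/px GSD, which is why leaflet-scale symptoms remain resolvable.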

3.2.2. Image Overlap

High overlap—typically 80% front and 60% side—is crucial for complete field coverage and robust image alignment [14,67,68,69,70]. These configurations reduce geometric distortion, minimize occlusion effects, and facilitate high-quality orthomosaic generation and multi-view consistency necessary for cassava monitoring.
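These overlap fractions translate directly into exposure and flight-line spacing: the distance between consecutive images is the ground footprint of the frame times (1 - overlap). A hedged sketch, with the frame size and GSD as illustrative assumptions rather than values from a cited study:

```python
def footprint_m(gsd_cm_px: float, pixels: int) -> float:
    """Ground footprint of one image dimension, in metres."""
    return gsd_cm_px * pixels / 100.0

def spacing_m(footprint: float, overlap: float) -> float:
    """Spacing between exposures (or flight lines) for a given overlap fraction."""
    return footprint * (1.0 - overlap)

# Assumed: 0.1 cm/px GSD, 5472 x 3648 px frame, 80% front / 60% side overlap.
along_track = spacing_m(footprint_m(0.1, 3648), 0.80)   # trigger spacing, ~0.73 m
across_track = spacing_m(footprint_m(0.1, 5472), 0.60)  # flight-line spacing, ~2.19 m
```

This makes the coverage cost of high overlap concrete: at 80% front overlap the camera must trigger every fraction of a metre, which is only practical at the low speeds of rotary-wing platforms.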

3.2.3. Time-of-Day Considerations

Consistent lighting significantly improves image quality, particularly for RGB sensors. Midday flights (11:00–13:00) minimize shadows and variability in illumination, producing more uniform inputs for DL models [59,71]. However, cassava-focused UAV studies have demonstrated acceptable results even under moderate cloud cover or non-uniform lighting when supported by strong preprocessing techniques [22,58].

3.2.4. Environmental Constraints

UAV performance is affected by wind, rain, and canopy interference from weeds or overlapping leaves [71,72]. Rotary-wing UAVs are better suited to managing these constraints because of their stable hovering and low-speed maneuverability. Automated, global positioning system (GPS)-guided flight paths are widely used to ensure consistent repeatability in longitudinal studies and across growing seasons [36].
Table 4 summarizes recommended flight parameters, drawing on values reported across plant studies: altitude ranges, overlap settings, ground sampling distance (GSD) requirements, sensor orientations, and environmental constraints.
Table 4. Typical unmanned aerial vehicle (UAV) flight parameters for plant disease detection.
| Parameter | Recommended Range/Setting | Purpose/Justification | Sources |
|---|---|---|---|
| Flight altitude | 2–5 m | Ensures ultra-high spatial resolution needed to detect subtle foliar symptoms | [24,73] |
| Overlap (front/side) | 80%/60% | Enables high-quality orthomosaics and accurate 3D reconstructions | [35,36,74] |
| Ground sampling distance (GSD) | ~0.5–10 cm/pixel | Determines image detail; lower GSD is ideal for distinguishing symptom-level features | [34,67,75] |
| Camera resolution | ≥12 MP (e.g., 4000 × 3000 pixels) | Higher resolution improves detection of small-scale anomalies | [24,41] |
| Sensor orientation | Nadir (direct downward) | Minimizes distortion and improves model training consistency | [41,73,76] |
| Time of day | 11:00–13:00 | Reduces shadowing and exposure variability | [34,77] |
| Environmental constraints | Clear sky, low wind (<10 km/h) | Improves image consistency; ensures stable flight | [41,67] |

3.3. Challenges of UAVs in Agricultural Disease Detection

UAVs enable timely, high-resolution crop monitoring at relatively low cost, offering clear advantages over manual field scouting and conventional satellite imagery for disease detection [17,18,45,67,78,79,80,81]. They support early diagnosis, targeted agrochemical application, and responsive crop management by delivering actionable insights at spatial resolutions suitable for detecting both localized and field-scale disease patterns [10,68,82,83].
Despite their advantages, UAV adoption is constrained by short battery life, weather sensitivity, regulatory restrictions, high sensor costs, and the need for specialized technical expertise [18,45,67,72,80,82,83,84,85,86]. Data processing demands—including storage, orthomosaicking, and ML pipeline preparation—remain significant barriers, especially in low-resource settings. Model transferability is also limited, as DL systems trained in one geographic region often do not generalize well to others due to cultivar differences and variable environmental conditions [4,21,24]. Addressing these constraints requires harmonized protocols, improved dataset diversity, and more adaptive ML models.

4. Deep Learning Techniques for Plant Disease Detection

DL has become central to automated plant disease detection, particularly when paired with UAV imagery for cassava pathology. DL enables high-capacity feature extraction, robust pattern recognition, and scalable deployment across diverse field conditions. This section reviews relevant architectures, dataset-preparation workflows, training strategies, and the strengths and limitations associated with cassava disease detection models.

4.1. Model Architectures

CNNs remain the dominant architecture for cassava disease detection due to their strong ability to extract hierarchical visual features directly from raw imagery [4,34,59,87]. Models such as AlexNet, VGG, GoogLeNet, ResNet, DenseNet, and Xception have been widely employed across plant disease datasets, including cassava, where they consistently outperform traditional ML methods [24,59,88,89,90]. Their success is driven by efficient feature extraction, compatibility with transfer learning, and suitability for RGB imagery commonly captured by UAVs.
Recent advancements include the adoption of Vision Transformers (ViTs) and Swin Transformers, whose self-attention mechanisms enhance global feature representation under complex field conditions involving variable lighting, occlusion, and clutter [34,39]. The ED-Swin Transformer, in particular, has demonstrated strong performance in cassava disease classification under heterogeneous backgrounds [13].
Object detection architectures such as YOLOv5n and YOLOv5s are increasingly used for UAV-based workflows because they simultaneously localize and classify diseased leaf regions, enabling real-time diagnosis during aerial scouting [24,58,75]. YOLOv5n offers high inference speed for resource-limited deployment, while YOLOv5s provides improved accuracy for detailed disease identification. Similarly, segmentation models such as U-Net and Mask R-CNN support pixel-level mapping of disease severity [17,34].
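Detector predictions such as those from YOLO variants are conventionally matched to annotated boxes via intersection-over-union (IoU), the standard localization metric behind detection mAP scores. A minimal, library-free sketch of the metric (box coordinates are hypothetical):

```python
def iou(box_a: tuple, box_b: tuple) -> float:
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A predicted lesion box vs. a ground-truth annotation (illustrative coordinates):
overlap_score = iou((0, 0, 10, 10), (5, 5, 15, 15))  # 25 / 175, ~0.14
```

A prediction is usually counted as a true positive only when IoU exceeds a threshold (0.5 is the common default), which is why annotation tightness directly shapes reported detector accuracy.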
The temporal progression of these architectures—from traditional CNNs (2017–2019), through enhanced CNN and hybrid models (2020–2022) [91], to Transformer-based approaches (2023–2025)—is summarized in Figure 6. This figure illustrates the shift toward models emphasizing deployment efficiency and robustness rather than merely maximizing accuracy.
Lightweight models such as MobileNetV3Small (as in CDDNet) and EfficientNet variants (B0, B3, and B4) further enhance deployability by reducing parameters and computational overhead while maintaining high accuracy [41,92,93,94].
Figure 6. Evolution of deep learning (DL) architectures for cassava disease detection over 9 years (2017–2025 inclusive) [4,6,13,14,15,30,31,32,65,90,91,95,96]. Numbers in brackets correspond to literature citations.

4.2. Dataset Preparation

Reliable cassava disease detection depends strongly on the quality of datasets used for model training, including source imagery, annotation practices, and data augmentation.

4.2.1. Data Collection

Image datasets are collected through field surveys, handheld cameras, or UAV platforms, depending on the study design [4,7,15,24]. UAV-based imagery offers high resolution and wide-area coverage but is more sensitive to lighting conditions, canopy height variability, and background clutter. RGB camera resolutions commonly range from 12 to 20 MP, and specialized multispectral sensors are used where early stress detection is required.

4.2.2. Annotation

Annotation is typically conducted manually by crop experts. Studies have used two main annotation strategies—whole-leaf labeling and leaflet-level cropping [6,15]. Results show that while leaflet-level datasets increase sample count, whole-leaf datasets sometimes yield higher classification accuracy for disease classes with strong global symptoms such as CMD and RMD [6]. Inconsistent annotation across studies remains a major cause of model performance variation [14,15,25,97].

4.2.3. Data Augmentation and Preprocessing

Due to class imbalance and limited expert-labelled images—especially for CBSD and CGM—augmentation is essential. Common techniques include flips, rotations, scaling, brightness adjustments, and Gaussian noise injection [7,24,34,75]. Approaches such as SMOTE have been employed to synthesize additional minority-class samples [4,7]. Preprocessing pipelines typically include resizing to standardized dimensions (e.g., 224 × 224, 300 × 300, or 640 × 640 pixels) and noise reduction for improved model learning [22,34,88,98]. Hyperspectral and multispectral imagery additionally requires radiometric and geometric correction.
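The augmentation and resizing operations listed above can be sketched compactly. The following numpy example is illustrative only (parameter values and function names are not taken from any reviewed study): it applies a random flip, a random 90-degree rotation, brightness jitter, and Gaussian noise injection, then resizes each tile to the standardized 224 × 224 input expected by many pretrained CNNs.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def augment(img: np.ndarray) -> np.ndarray:
    """Random flip/rotation plus photometric jitter for an HxWx3 image in [0, 1]."""
    if rng.random() < 0.5:
        img = img[:, ::-1, :]                      # horizontal flip
    img = np.rot90(img, k=rng.integers(0, 4))      # random 90-degree rotation
    img = img * rng.uniform(0.8, 1.2)              # brightness scaling
    img = img + rng.normal(0.0, 0.02, img.shape)   # Gaussian noise injection
    return np.clip(img, 0.0, 1.0)

def resize_nearest(img: np.ndarray, size: int = 224) -> np.ndarray:
    """Nearest-neighbour resize to a standardized size x size model input."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

tile = rng.random((300, 400, 3))   # stand-in for a UAV image tile
batch = [resize_nearest(augment(tile)) for _ in range(4)]
print(batch[0].shape)              # (224, 224, 3)
```

In practice, libraries such as Albumentations or torchvision implement these transforms (with interpolation options, per-channel jitter, and more); the sketch only isolates the operations themselves.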

4.3. Training Strategies

DL models for plant disease detection commonly use two training paradigms: training from scratch and transfer learning.
Training from scratch is feasible for large datasets or when domain-specific features require full representation, but it is computationally expensive and prone to overfitting on small datasets [7,34]. Transfer learning—leveraging pretrained weights (e.g., from ImageNet)—is the dominant approach because it reduces computational requirements and performs better with limited labelled data [6,15,17,25,88].
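The freeze-the-backbone idea behind transfer learning can be caricatured without any DL framework. In the sketch below, a fixed random projection stands in for ImageNet-pretrained convolutional features (an assumption made purely for illustration), and only a linear classifier head is fitted on a tiny synthetic two-class dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "backbone": a fixed random projection standing in for
# ImageNet-pretrained convolutional features (illustration only).
dim = 3 * 32 * 32
W_backbone = rng.normal(size=(dim, 64)) / np.sqrt(dim)

def features(x):
    """Frozen feature extractor: flatten, project, ReLU. Never updated."""
    return np.maximum(x.reshape(len(x), -1) @ W_backbone, 0.0)

# Tiny synthetic two-class set (e.g. healthy vs. CMD): 40 RGB 32x32 images.
x = rng.normal(size=(40, 3, 32, 32))
y = np.array([0, 1] * 20)
x[y == 1] += 0.5                        # make the classes separable

# Transfer learning in miniature: fit ONLY the classifier head.
f = features(x)
w, *_ = np.linalg.lstsq(f, 2.0 * y - 1.0, rcond=None)
acc = np.mean((f @ w > 0).astype(int) == y)
print(f"head-only training accuracy: {acc:.2f}")
```

Only the head's weights `w` are fitted; the backbone stays untouched, which is why the approach works with small labelled sets and modest compute. (Training accuracy is shown for brevity; held-out accuracy would of course be lower.)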
Training typically focuses on three major tasks: segmentation, classification and object detection.
Segmentation is used to delineate diseased regions within foliage using algorithms such as U-Net, Mask R-CNN, or classical methods such as k-means clustering [25,34].
Classification assigns each image to a disease category. CNN-based classifiers and Transformer-based classifiers have achieved high performance (>90% accuracy) on cassava datasets [7,25,34].
Finally, object detection localizes and classifies disease regions in images using detectors such as YOLOv5n, YOLOv5s, or Faster R-CNN [24,58,75]. This is particularly relevant for UAV workflows.
Loss functions such as cross-entropy, focal loss, and class-weighted losses help counter class imbalance and improve detection of minority diseases [4,7,99].
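These imbalance-aware losses are short to write down. Below is a binary-case numpy sketch following the standard focal-loss formulation; the gamma and alpha values are the commonly used defaults, not figures from the cited cassava studies:

```python
import numpy as np

def cross_entropy(p, y, eps=1e-7):
    """Standard binary cross-entropy; p = predicted probability of the positive class."""
    p = np.clip(p, eps, 1 - eps)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-7):
    """Focal loss: down-weights easy examples so rare classes dominate the gradient."""
    p = np.clip(p, eps, 1 - eps)
    pt = np.where(y == 1, p, 1 - p)            # probability assigned to the true class
    at = np.where(y == 1, alpha, 1 - alpha)    # class-balancing weight
    return -at * (1 - pt) ** gamma * np.log(pt)

p = np.array([0.9, 0.6, 0.1])   # model confidence for the positive class
y = np.array([1,   1,   1  ])   # all samples truly diseased

# The confidently correct sample (p = 0.9) is strongly down-weighted relative
# to the hard one (p = 0.1); plain cross-entropy keeps them on similar scales.
print(focal_loss(p, y) / cross_entropy(p, y))
```

With gamma = 0 and alpha = 0.5, focal loss reduces to (scaled) cross-entropy, which makes the down-weighting effect of gamma easy to verify.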

4.4. Strengths and Limitations

DL models consistently outperform traditional ML methods in cassava disease detection, achieving >93–99% accuracy in several studies [6,25,34,89,100]. Their ability to automatically learn hierarchical features eliminates the need for handcrafted descriptors, and many models exhibit strong robustness under field conditions involving lighting variability, occlusion, and background clutter [6,24]. Lightweight architectures (e.g., CDDNet, YOLOv5n) enable deployment on mobile and edge devices, supporting real-time in-field diagnosis [4,24].
However, several limitations remain critical. DL models require large annotated datasets, which are scarce in cassava research, especially for UAV-acquired imagery [7,34,101]. Model performance often drops when applied across different geographic regions, cultivars, or environmental conditions due to domain shift [4,21,24]. Training is computationally expensive for Transformer-based models, and many studies report challenges in early-stage disease detection where symptoms are visually subtle. The “black-box” nature of DL also limits interpretability, complicating agronomic decision-making [17].
Persistent dataset challenges—including heavy class imbalance, inconsistent annotation, and lack of standardized preprocessing workflows—continue to affect reproducibility and comparability across studies. These limitations highlight the need for unified benchmark protocols, diverse UAV-sourced datasets, and improved domain adaptation methods.

5. UAV–DL Integration Framework for Cassava Disease Detection

A complete UAV–deep learning (DL) pipeline for cassava disease detection consists of four major components: image acquisition, preprocessing and annotation, model training and inference, and deployment for field diagnosis. These components work together to produce a scalable and responsive system for monitoring disease expression across cassava fields. The main elements of this workflow are summarized in Table 5.
Figure 7 illustrates a typical end-to-end UAV–DL pipeline for cassava disease detection. Table 5 presents a summary of UAV-DL integration pathways across all reviewed studies. It is evident that only two studies utilized UAV-based data acquisition, with the majority relying on ground-based or laboratory imaging. Furthermore, real-time deployment capability remains limited, with only four studies achieving operational inference speeds suitable for field applications. This concentration of research on non-UAV methodologies, despite the proven advantages of aerial monitoring, highlights a significant gap between technological potential and practical implementation.
Table 5. A cassava-focused deep learning (DL) application pipeline, from data collection to deployment, spanning studies from 2017 to 2025.
Study | Data Source | Sensor | DL Model | Deployment | Real-Time | Category
Unmanned aerial vehicle-based studies (n = 2):
Nnadozie et al., 2023 [24] | 550 mm quadcopter | Red–green–blue (RGB) | YOLOv5n/s | Edge (Jetson) | Yes | UAV + real-time
Zhang et al., 2025 [13] | DJI Phantom 4 Pro | RGB (20 MP) | ED-Swin | Desktop graphics processing unit (GPU) | Potential | UAV + potential
Ground-based with real-time capability (n = 3):
Ramcharan et al., 2019 [15] | Handheld camera | RGB (20.2 MP) | Mobile convolutional neural network (CNN) | Smartphone | Yes | Ground + real-time
Mrisho et al., 2020 [14] | Smartphone | RGB | PlantVillage Nuru | Smartphone | Yes | Ground + real-time
Dosset et al., 2025 [4] | Lab/ground | Not specified | CDDNet | Edge (Jetson) | Yes | Ground + real-time
Ground-based with potential real-time (n = 4):
Sambasivam & Opiyo, 2021 [7] | Lab (Kaggle) | RGB | Custom convolutional neural network (CNN) | Mobile (proposed) | Potential | Ground + potential
Ramcharan et al., 2017 [6] | Handheld camera | RGB (20.2 MP) | InceptionV3 | Mobile (testing) | Potential | Ground + potential
Lilhore et al., 2022 [30] | Lab (Kaggle) | Not specified | Enhanced convolutional neural network (CNN) | Future target | Potential | Ground + potential
Sambasivam et al., 2024 [32] | Lab (Kaggle) | Not specified | DenseNet + EfficientNet | Future (Internet of Things) | Potential | Ground + potential
Ground-based without real-time (n = 7):
Akinpelu et al., 2025 [102] | Lab (Kaggle) | Not specified | Visual Geometry Group network (VGG16) | Smartphone | No | Ground + no real-time
Elliott et al., 2022 [103] | Lab (controlled) | RGB | Support vector machine (SVM) | Desktop | No | Ground + no real-time
Goyal & Gill, 2024 [95] | Lab (Kaggle) | Not specified | EfficientNetB3 | Not specified | No | Ground + no real-time
Shahriar et al., 2022 [96] | Lab (Kaggle) | RGB | Xception | Desktop | No | Ground + no real-time
Maryum et al., 2021 [104] | Lab (Kaggle) | RGB | EfficientNetB4 | Not specified | No | Ground + no real-time
Abayomi-Alli et al., 2021 [29] | Lab (Kaggle) | Not specified | MobileNetV2 | Mobile (future) | No | Ground + no real-time
Lokesh et al., 2024 [31] | Lab (GAN-augmented) | Not specified | CNN + VGG16 + ResNet | Portable (future) | No | Ground + no real-time

5.1. Image Acquisition

UAV-based image acquisition begins with the selection of suitable platforms and sensors that determine the resolution, spectral richness, and consistency of captured imagery. Rotary-wing UAVs—including quadcopters and hexacopters—are the most frequently used platforms because of their ability to hover, fly at low altitudes, and acquire detailed images of individual plants [24,36,66,81]. Fixed-wing UAVs offer wider coverage but lack the maneuverability required for close-range cassava disease imaging.
RGB sensors are the most common because they provide high-resolution visual imagery suitable for detecting foliar symptoms associated with CMD, CBSD, CBB, CGM, and RMD [17,24]. Multispectral cameras, such as MicaSense RedEdge or Parrot Sequoia, are used when vegetation-index analysis is required, particularly for early stress detection that may precede visible symptoms [49,51,53]. Hyperspectral sensors, although powerful, remain less common due to cost and payload limitations [59,105].
Optimal flight parameters typically include low altitudes (2–5 m), front and side overlap of 80% and 60%, and automated GPS-guided flight planning to ensure consistent coverage and reproducibility across surveys [24,106,107].
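These flight parameters translate directly into ground sampling distance (GSD) and waypoint spacing via the standard pinhole relation. The sketch below uses illustrative sensor values for a 1-inch, 20 MP camera (roughly the Phantom 4 Pro class); the numbers are assumptions for demonstration, not figures reported by the reviewed studies:

```python
def gsd_cm(sensor_width_mm, focal_length_mm, image_width_px, altitude_m):
    """Ground sampling distance (cm/pixel) from the standard pinhole relation."""
    return (sensor_width_mm * altitude_m * 100.0) / (focal_length_mm * image_width_px)

def waypoint_spacing_m(footprint_m, overlap):
    """Distance between exposures (or flight lines) for a given fractional overlap."""
    return footprint_m * (1.0 - overlap)

# Illustrative 1-inch-sensor camera flown at the low altitudes cited above.
alt = 5.0                                  # metres
gsd = gsd_cm(13.2, 8.8, 5472, alt)         # cm per pixel
footprint_w = gsd * 5472 / 100.0           # across-track footprint, metres
footprint_h = gsd * 3648 / 100.0           # along-track footprint, metres

print(f"GSD: {gsd:.2f} cm/px")
print(f"trigger every {waypoint_spacing_m(footprint_h, 0.80):.1f} m (80% front overlap)")
print(f"flight lines every {waypoint_spacing_m(footprint_w, 0.60):.1f} m (60% side overlap)")
```

At 5 m altitude this camera yields roughly 1.4 mm/pixel, fine enough to resolve individual leaflets, which is why low-altitude rotary-wing flights dominate close-range cassava imaging.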

5.2. Preprocessing and Annotation

Preprocessing prepares imagery for DL model development through steps such as radiometric correction, orthomosaic generation, geometric alignment, and tiling to isolate leaf regions [34,73,107,108]. This stage addresses issues such as shadowing, illumination variability, occlusion from overlapping leaves, and background clutter from soil or weeds [22,58].
Annotation involves the labelling of images by experts using bounding boxes, segmentation masks, or whole-leaf categorization, depending on the task [34,106,109]. Two annotation strategies are commonly used: whole-leaf labelling and leaflet-level labelling. Whole-leaf annotation is effective for diseases with global symptoms such as CMD and RMD, while leaflet-level datasets may improve detection of subtle symptoms commonly found in CBSD or BLS [6,15].
Augmentation techniques—such as rotation, flipping, brightness transformation, cropping, and noise injection—are widely adopted to increase dataset diversity and counter class imbalance [4,7,34,75]. Image resizing to standardized input dimensions ensures compatibility with pretrained architectures [22,34,88,98].

5.3. Model Training and Inference

Model training in cassava disease detection relies on CNN-based architectures and emerging Transformer-based models. Transfer learning using pretrained models such as InceptionV3, ResNet-50, EfficientNet, MobileNetV2/V3, and VGGNet improves performance on small datasets and reduces training cost [6,7,15,17,88]. Transformer models—including Swin and ED-Swin—offer strong global feature extraction beneficial under challenging field conditions [13].
Three categories of DL tasks are commonly implemented:
  • Classification: Assigning images to disease classes using CNNs or Transformer-based classifiers, with reported accuracy often exceeding 93% [7,25,34].
  • Object detection: Simultaneous localization and classification using detectors such as YOLOv5n or YOLOv5s for UAV imagery where diseased regions must be isolated from background clutter [24,58,75].
  • Segmentation: Pixel-level mapping using U-Net or Mask R-CNN to delineate diseased leaf regions [17,34].
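Whatever the detector, raw object-detection outputs are typically post-processed with confidence filtering and non-maximum suppression (NMS) so that overlapping detections of the same diseased region collapse to a single box. A minimal numpy version of the NMS step (a generic sketch, not the YOLOv5 implementation itself):

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all as [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring box; drop neighbours overlapping above iou_thresh."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        order = order[1:][iou(boxes[i], boxes[order[1:]]) < iou_thresh]
    return keep

# Two overlapping detections of one diseased region plus one distant detection.
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))   # → [0, 2]; the 0.8 box is suppressed as a duplicate
```

The IoU threshold trades duplicate suppression against merging genuinely adjacent lesions, a relevant tuning point when canopies overlap in UAV imagery.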
Loss functions such as cross-entropy, class-weighted loss, and focal loss are used to mitigate class imbalance and improve recall for minority diseases [4,7,99].

5.4. Edge Deployment and Real-Time Decision Support

Several recent models are deployable on smartphones, embedded processors, or UAV onboard systems. Lightweight architectures such as MobileNetV3Small, EfficientNet-B0, YOLOv5n, and CDDNet enable real-time inference with latency often below 0.03 s per frame [6,7,24,58]. Quantization and TFLite conversion reduce model size and support offline inference.
Edge deployment allows field-ready diagnosis without dependence on internet connectivity. Real-time output enables early intervention and more precise monitoring of disease progression across seasons.
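The size savings from quantization are easy to see in isolation. The numpy sketch below applies 8-bit affine quantization to a weight tensor; this is the basic idea behind TFLite post-training quantization (which applies it per tensor or per channel with considerably more machinery), so the figures here are illustrative rather than tool-specific:

```python
import numpy as np

def quantize_8bit(w):
    """Affine 8-bit quantization: w ≈ scale * (q - zero_point)."""
    scale = (w.max() - w.min()) / 255.0
    zero_point = np.round(-w.min() / scale).astype(np.int64)
    q = np.clip(np.round(w / scale + zero_point), 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the 8-bit representation."""
    return scale * (q.astype(np.float32) - zero_point)

rng = np.random.default_rng(1)
w = rng.normal(0, 0.05, size=(256, 128)).astype(np.float32)  # a float32 layer
q, s, z = quantize_8bit(w)

print(f"size: {w.nbytes} B -> {q.nbytes} B (4x smaller)")
print(f"max abs. reconstruction error: {np.abs(dequantize(q, s, z) - w).max():.5f}")
```

The 4x memory reduction (float32 to 8-bit) is what makes onboard and smartphone inference practical, at the cost of a bounded per-weight rounding error on the order of half the quantization step.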

5.5. Representative Case Studies

Several studies have demonstrated the utility of UAV and/or DL integration in cassava pathology. Table 6 presents a detailed analysis.
Table 6. Summary of studies that utilized unmanned aerial vehicle (UAV) and/or deep learning (DL) integration in cassava pathology.
Reference | DL Model/Approach | Platform/Deployment | Outcome/Accuracy | Highlights
Ramcharan et al. (2017) [6] | Inception v3 (transfer learning) | TensorFlow on smartphones (Tanzania) | 93% accuracy on 2756 field images | First field-deployable cassava model; multi-disease detection
Sambasivam & Opiyo (2021) [7] | Various models + SMOTE and focal loss | Kaggle dataset of 10,000 annotated images | Top models achieved >93% accuracy | Benchmark competition; focused on five disease classes
Mrisho et al. (2020) [14] | Mobile object detection (DL integration) | PlantVillage Nuru smartphone application (offline use) | 65–88% accuracy; outperformed farmers and extension agents | Performance improved with multi-leaf analysis
Nnadozie et al. (2023) [24] | YOLOv5n/YOLOv5s | NVIDIA Jetson AGX Orin (edge deployment) | YOLOv5s: higher accuracy; YOLOv5n: faster inference | Tested under variable growth stages and weed interference
Dosset et al. (2025) [4] | CDDNet (MobileNetV3Small + soft attention) | Lightweight model for real-time classification | 98.95% classification accuracy | High speed and compact; optimized for mobile deployment
Zhang et al. (2025) [13] | ED-Swin Transformer | Field imagery analysis | 98.56% F1-score; 94.32% accuracy | Addressed complex backgrounds and disease morphology
Emuoyibofarhe et al. (2019) [22] | CSVM, CGSVM | Field imagery analysis; 18,000-image dataset | 83.8% (CSVM); 61.6% (CGSVM) | Targeted CMD and CBD; manual data collection instead of UAV; classical ML

5.6. Evaluation Metrics and Real-Time Deployment Considerations

Evaluation metrics include accuracy, precision, recall, F1-score, intersection over union (IoU), and mean average precision (mAP), depending on whether the task is classification, detection, or segmentation [24,34,89,110,111]. Real-time deployment often prioritizes inference time and frames per second (FPS), with some lightweight models achieving 30–60 FPS in field tests [24,58].
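For reference, the classification metrics above reduce to a few lines over a confusion-matrix view. A toy binary (diseased vs. healthy) example with made-up predictions:

```python
import numpy as np

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])   # 1 = diseased, 0 = healthy
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])   # hypothetical model output

tp = np.sum((y_pred == 1) & (y_true == 1))     # diseased plants correctly flagged
fp = np.sum((y_pred == 1) & (y_true == 0))     # healthy plants wrongly flagged
fn = np.sum((y_pred == 0) & (y_true == 1))     # diseased plants missed
tn = np.sum((y_pred == 0) & (y_true == 0))     # healthy plants correctly cleared

accuracy  = (tp + tn) / len(y_true)
precision = tp / (tp + fp)                     # how many flagged plants are truly diseased
recall    = tp / (tp + fn)                     # how many diseased plants were caught
f1        = 2 * precision * recall / (precision + recall)

print(f"acc={accuracy:.2f} P={precision:.2f} R={recall:.2f} F1={f1:.2f}")
```

Under heavy class imbalance (e.g., CMD-dominant surveys), accuracy can stay high while recall on rare classes collapses, which is why Table 7 flags precision, recall, and F1-score as the more informative field metrics.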
Common constraints include illumination variability, canopy occlusion, limited UAV battery capacity, environmental interference, computational demands of Transformer models, and inconsistent evaluation protocols [4,21,24,112]. Standardized data-acquisition frameworks and harmonized benchmarking datasets are needed to improve cross-study comparability and operational deployment. Table 7 concisely summarizes the evaluation metrics used to evaluate different prediction models across multiple sources.
Table 7. Summary of evaluation metrics in cassava disease detection studies.
Metric | Purpose/Description | Typical Values (Cassava Studies) | Relevance to UAV–DL Applications
Accuracy | Overall % of correct predictions (true positives + true negatives) | 85–99% [16,30] | Good for initial assessment; may be misleading in imbalanced datasets
Precision | % of true positive predictions among all predicted positives | 74–93% (CMD and CBSD detection) | Helps reduce false positives in multi-disease UAV scans
Recall (sensitivity) | % of true positives identified out of all actual positives | 70–95% [16,32] | Critical for early-stage detection where missing diseased plants is costly
F1-score | Harmonic mean of precision and recall; balances both in one metric | 88–98.56% (ED-Swin [13], EfficientNet [32,94,95,104]) | Especially useful in imbalanced datasets (e.g., CMD-dominant images)
IoU (intersection over union) | Degree of overlap between predicted and actual bounding boxes | >0.5 (threshold for object detection) | Key metric for evaluating object detectors like YOLOv5n/s
mAP (mean average precision) | Averaged precision across all classes and IoU thresholds | >90% (YOLO [24]) | Comprehensive object detection score; standard for detection tasks
Inference time/FPS | Time taken per image or frames per second (real-time performance) | 0.016 s/image or >30 FPS (YOLOv5n) | Vital for edge deployment and UAV real-time inference
Model size/parameters | Number of parameters, affecting memory use and portability | <2M (YOLOv5n), ~3M (CDDNet) | Determines compatibility with smartphones and UAV onboard processors

6. Comparative Review of Existing Related Studies

6.1. Comparative Review of Existing Studies on UAV and/or DL Applications for Cassava Disease Detection

Table 8 summarizes the main findings, datasets, architectures, and performance levels reported across existing cassava disease detection studies. Clear differences emerge between UAV-based and non-UAV approaches.
Most early studies depend on handheld or crowdsourced images. These datasets are easy to collect but introduce variation in angle, lighting, and background, which reduces model stability—especially for diseases with mild or inconsistent symptoms such as CBSD and BLS. UAV-based imagery, by contrast, offers more uniform perspectives and larger field coverage, improving consistency and supporting stronger feature extraction.
Across studies, CNN models generally perform well for CMD and RMD because their symptoms are visually distinct. Transformer-based models show improved performance under complex backgrounds, especially when paired with UAV imagery. Detection-focused models (e.g., YOLO variants) add the advantage of locating diseased regions directly within the canopy, which is useful for operational field monitoring.
However, several limitations persist. Many datasets are imbalanced, with underrepresentation of CBSD, CBB, and BLS. Annotation practices differ across studies, reducing comparability. UAV datasets remain limited, and sample sizes are often small. Differences in preprocessing, augmentation, and validation strategies also contribute to inconsistent performance reporting.
Table 8. Comparative review of deep learning (DL) studies for cassava disease detection, color-coded by accuracy and unmanned aerial vehicle (UAV) use.
Author, Year | DL Method | UAV Type/Data Source | Dataset | Disease Target | Accuracy | Limitation
Sambasivam & Opiyo, 2021 [7] | CNNs + SMOTE, focal loss | Traditional field survey | 10,000 Uganda cassava images | CMD, CBSD, CGM, CRM, healthy | Over 93% | Small, imbalanced dataset; not UAV-acquired
Ramcharan et al., 2017 [6] | Inception v3 (transfer learning) | Handheld camera (Sony Cybershot) | 2756 full leaves; 15,000 leaflets | CMD, CBSD, BLS, RM, GMD | 93% (leaflets); 73–91% (full leaves) | Accuracy varies by input; limited generalization
Mrisho et al., 2020 [14] | Mobile CNN (PlantVillage Nuru) | Smartphone images | Ramcharan dataset | CMD, CBSD, CGM, healthy | 65–88% (based on leaf count) | Weak on subtle symptoms; low-light visibility
Emuoyibofarhe et al., 2019 [22] | Cubic SVM, Gaussian SVM | UAV noted as future option | Sparse info; Nigeria-based | CMD, CBB, healthy/unhealthy | 83.9% (CSVM); 61.6% (CGSVM) | Traditional ML; no UAV data; limited dataset
Dosset et al., 2025 [4] | CDDNet (MobileNetV3 + attention) | Designed for edge; UAV not specified | 27,053–58,807 merged images | CMD, CBSD, CGM, CRM, CBLS, CHL | 97–99% | Meteorological variation; UAV deployment untested
Nnadozie et al., 2023 [24] | YOLOv5n/s (object detection) | DJI Phantom 4 Pro V2.0 UAV | Custom dataset, Nigeria | Plant detection (precursor to disease mapping) | Moderate (YOLOv5s better, YOLOv5n faster) | Private data; trade-off between accuracy and speed
Zhang et al., 2025 [13] | ED-Swin Transformer | DJI Phantom 4 Pro V2.0 UAV | 54,353 images (China) | Blight, CBSD, CMD, mottle, healthy | 94.32% accuracy; 98.56% recall | Complex model; occlusion and lighting issues
Shahriar et al., 2022 [96] | Ensemble CNNs (Xception, etc.) | No UAV; sourced from Kaggle | 21,367 images | CMD, CBSD, CBB, CGM, healthy | 68–91% depending on model | Data from farmers; high GPU demand
Abayomi-Alli et al., 2021 [29] | Modified MobileNetV2 + augmentation | Makerere/NARO (field lab dataset) | 5656 labeled images | CMD, CBSD, CGM, CBB, healthy | 0.977–0.997 | No boost for extreme image degradation
Akinpelu et al., 2025 [102] | VGG16 (transfer learning) | Public Kaggle dataset | 5656 cassava images | CMD, CBSD, CGM, CBB, healthy | 88% (F1: 82%) | Needs regularization and tuning; convergence instability
Elliott et al., 2022 [103] | Few-shot SVM (segmentation) | Raspberry Pi box setup | 32 cassava leaves | CBB lesions (segmentation only) | No classification metric | Small dataset; no DL; lacks generalizability
Lokesh et al., 2024 [31] | Hybrid CNN + CycleGAN | GAN-augmented Kaggle dataset | 12,880 images | CMD, CBSD, CGM, CBB, healthy | 99.51% (hybrid) | High compute load; recommends real-time testing
Sambasivam et al., 2025 [32] | DenseNet169 + EfficientNetB0 (hybrid) | Kaggle dataset | ~36,000 images | CMD, CBSD, CGM, CBB, healthy | 89.94% (hybrid) | Poor transferability; dataset imbalance; high computational cost
Goyal et al., 2024 [95] | EfficientNetB3, Inception, KNN; ensemble and transfer learning | Kaggle cassava dataset | 5656 images (5 disease + healthy classes) | CMD, CBSD, CGM, CBB, healthy | 89.9% (EfficientNetB3); 77% (Inception); 62% (KNN) | CBB, CGM, and CBSD often misclassified; needs improved disambiguation for similar-symptom classes
Maryum et al., 2021 [104] | EfficientNetB4 + U-Net segmentation; transfer learning | Kaggle 2020 dataset (Uganda) | 21,397 images (segmented vs. raw) | CMD, CBSD, CGM, CBB, healthy | 89.1% (segmented); 81.4% (original) | CMD class imbalance; similar-disease confusion (CGM as CMD); CBSD mislabeled as healthy
Overall, Table 8 shows that while DL models achieve high accuracy, their reliability depends heavily on dataset quality, annotation consistency, and the imaging approach used. UAV-based pipelines generally provide more stable outputs but require more systematic data collection.

7. Dataset Availability and Benchmarking Issues in UAV–DL for Cassava Disease Detection

Table 9 and Figure 8 summarize the main datasets used across cassava disease detection studies, while Table 8 compares the performance of the corresponding DL approaches. Together, these summaries highlight the uneven landscape of dataset quality, size, disease coverage, and imaging conditions.
Table 9. Summary of datasets used in cassava disease detection studies.
Dataset Name | Origin/Authors | Content/Categories | Size | UAV-Based | Public Access | Reference/URL
Kaggle CVPR 2019 Dataset | AIcrowd, Makerere University, IITA | Five fine-grained cassava leaf diseases | 10,000 images | No | Public | Kaggle dataset
Cassava Image Dataset | Ramcharan et al. (2017), IITA Tanzania | Whole cassava leaves (CBSD, CMD, BLS, GMD, RMD, healthy) | 2756 images | No | Public | Ramcharan et al., 2017 [6]
Leaflet Cassava Dataset | Ramcharan et al. (derived) | Cropped individual leaflets (same categories as above) | 15,000 images | No | Public | Derived from Cassava Image Dataset [6]
Cassava Plant Disease Merged | Dosset et al. (2025) | CBSD, CMD, CGM, CRM, CBLS, healthy | 27,053 images | No | Not public | Dosset et al., 2025 [4]
Cassava Leaf Disease Combined | Dosset et al. (2025) | Cassava Image + other datasets merged (multi-source) | 58,807 images | No | Not public | Dosset et al., 2025 [4]
PlantVillage Dataset | Hughes & Salathé (2015) | Multi-crop diseases incl. cassava | 26,590 (multi-crop) | No | Public | GitHub repository [113]
iBean Leaf Dataset | AIR Lab, Makerere University (2020) | Bean disease (non-cassava, for model-transfer evaluation) | 1296 images | No | Public | https://github.com/AI-Lab-Makerere/ibean (accessed on 12 August 2025) [25]
Nnadozie et al. Dataset | Nnadozie et al. (2023), Nigeria | UAV RGB imagery for cassava plant detection | Size unspecified | Yes | Not public | In-house dataset, referenced in Nnadozie et al., 2023 [24]
Zhang et al. Dataset | Zhang et al. (2025), Guangxi, China | UAV + ground camera; 5 classes (bacterial blight, mosaic, mottle, CBSD, healthy) | 54,353 images | Yes | Not public | In-house, referenced in Zhang et al., 2025 [13]
Makerere–NARO Dataset | Abayomi-Alli et al. (2021), Makerere & National Crops Resources Research Institute | High- and low-quality cassava disease images; used for augmentation study | 5656 labelled images | No | Not public | Referenced in Abayomi-Alli et al., 2021 [29]
Elliott et al. Dataset | Elliott et al. (2022) | Time-lapse Raspberry Pi box images for CBB lesion segmentation | 32 leaves | No | Not public | Referenced in Elliott et al., 2022 [103]
Kaggle Combined Cassava Dataset | Sambasivam et al. (2025) | Five categories; used to benchmark hybrid DL models | ~36,000 images | No | Public | https://www.kaggle.com/competitions/cassava-leaf-disease-classification/data (accessed on 12 August 2025) [32]
Lokesh CycleGAN Dataset | Lokesh et al. (2024) | Generated using GANs for 5 disease categories; used to test hybrid CNNs | 12,880 (augmented) | No | Not public | Dataset described in Lokesh et al., 2024 [31]

7.1. Availability and Characteristics of Cassava Datasets

Current cassava datasets vary widely in scale and structure. Most available collections—such as the Kaggle dataset and the Ramcharan full-leaf and leaflet datasets—are handheld, publicly accessible, and widely used due to their simplicity and expert-labelled categories. However, they show strong class imbalance, with CMD and healthy samples dominating, and limited representation of CBSD, BLS, or CBB.
More recent merged datasets, including those by Dosset et al. [4], offer larger sample sizes but remain non-public and still lack UAV imagery. Genuine UAV-derived datasets are scarce; the few existing ones (e.g., Nnadozie et al. [24], Zhang et al. [13]) are not publicly released, limiting reproducibility and benchmarking.

7.2. Benchmarking Challenges

Table 8 makes clear that reported performance varies not only by model architecture but also by dataset characteristics. Models trained on handheld, visually diverse datasets often face lower recall for subtle symptom classes, whereas those trained on more uniform datasets may achieve higher accuracy but risk overfitting. The absence of a unified benchmarking protocol—covering training–test splits, augmentation strategies, annotation format, and class definitions—makes cross-study comparison difficult.
Differences in disease granularity (e.g., merging mite classes, differing CBSD severity labels) further contribute to inconsistent reporting. In addition, many studies do not evaluate performance under field-realistic or real-time conditions, leaving gaps between laboratory accuracy and operational reliability.

7.3. Implications for Model Reliability and Research Gaps

The combined insights from Table 8 and Table 9 highlight several important gaps. First, the lack of publicly available UAV datasets limits the development of models capable of canopy-level disease detection. Second, inconsistent annotation and labelling conventions reduce comparability between studies. Third, geographic concentration in East and West Africa restricts model transferability to other cassava-growing regions. Lastly, early-stage symptoms remain underrepresented, hindering early detection.
These gaps underline the need for harmonized datasets, unified evaluation practices, and better representation of field variability. Addressing them is essential for developing UAV–DL pipelines that are robust beyond controlled experimental conditions.

8. Conclusions and Future Directions

This review brought together recent work on UAV-enabled DL approaches for cassava disease detection, drawing on evidence from imaging strategies, model design, dataset composition, and evaluation practices. Across studies, UAV platforms combined with modern convolutional and Transformer-based models show clear potential for field-scale disease monitoring. At the same time, the comparative analyses presented in Table 8, along with the dataset summaries in Figure 8 and Table 9, indicate that most existing approaches are still some distance from consistent, real-world deployment.
The main limitations are not primarily algorithmic. Instead, they stem from weaknesses in the underlying data. Publicly available UAV datasets remain scarce, annotation practices vary widely between studies, and class imbalance is common, particularly for early-stage symptoms. Geographic coverage is often narrow, and this further limits model transferability. Together, these factors make it difficult to assess how well reported models would perform outside the specific conditions under which they were developed. Differences in preprocessing, augmentation, and validation strategies only add to this uncertainty, reinforcing the need for clearer and more consistent benchmarking practices.
Future research efforts should therefore prioritize the development of harmonized datasets that integrate UAV and ground-level imagery, adopt consistent and transparent labelling schemes, and capture disease expression across diverse agro-ecological contexts. In parallel, greater emphasis is needed on lightweight and energy-efficient model architectures capable of reliable edge-based inference under field conditions. The integration of multispectral and hyperspectral UAV sensors, combined with multimodal data fusion techniques, represents a particularly promising direction for detecting pre-symptomatic physiological stress that precedes visible disease manifestation.
In the longer term, coordinated UAV data-collection efforts across cassava-producing regions would address many of the limitations identified in this review. Shared flight parameters, common annotation guidelines, and open data practices would help produce datasets that better reflect the diversity of cassava-growing environments. Such efforts would support more meaningful benchmarking and help move UAV–deep learning systems beyond proof-of-concept studies toward tools that can realistically support disease surveillance and management in the field. By consolidating existing evidence and highlighting where current approaches fall short, this review aims to provide a clearer reference point for future research in UAV-based cassava disease detection.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 7. Simplified unmanned aerial vehicle (UAV)-deep learning (DL) integration framework for cassava disease detection.
Figure 8. Dataset characteristics matrix of cassava disease detection studies from 2017–2025: (a) accessibility distribution of datasets, (b) dataset-size–balance relationship, and (c) color-coded dataset imbalance ratio with blue representing datasets with acceptable imbalance ratio.