Journal of Imaging
  • Review
  • Open Access

5 February 2026

A Survey of Crop Disease Recognition Methods Based on Spectral and RGB Images

1 School of Computer Science and Technology, Xinjiang University, Urumqi 830046, China
2 Joint International Research Laboratory of Silk Road Multilingual Cognitive Computing, Xinjiang University, Urumqi 830046, China
3 Key Laboratory of Software Engineering, Xinjiang University, Urumqi 830091, China
4 Xinjiang Engineering Research Center of Big Data and Intelligent Software, School of Software, Xinjiang University, Urumqi 830046, China
This article belongs to the Special Issue AI-Driven Remote Sensing Image Processing and Pattern Recognition

Abstract

Major crops worldwide are affected by various diseases every year, leading to losses across regions. The primary methods for limiting crop disease losses are manual inspection and chemical control. However, traditional manual inspection is time-consuming, labor-intensive, and requires specialized knowledge, while the preemptive use of chemicals risks soil pollution and may cause irreversible damage. With advances in computer hardware, photographic technology, and artificial intelligence, crop disease recognition methods based on spectral and red–green–blue (RGB) images can recognize diseases without damaging the crops while offering high recognition accuracy and speed, largely resolving the problems associated with manual inspection and chemical control. This paper summarizes research on disease recognition methods based on spectral and RGB images, covering literature from 2020 through early 2025. Unlike previous surveys, it reviews recent advances involving emerging paradigms such as State Space Models (e.g., Mamba) and Generative AI in the context of crop disease recognition. In addition, it introduces public datasets and commonly used evaluation metrics for crop disease identification. Finally, the paper discusses potential issues and solutions encountered during research, including the use of diffusion models for data augmentation. We hope this survey helps readers understand current crop disease detection methods and their effectiveness, inspiring the development of more effective methods to assist farmers in identifying crop diseases.

1. Introduction

1.1. Overview

Agriculture is the cornerstone of national development and is crucial to the national economy. Global food supplies are currently under strain, and crops, as a strategic and foundational industry, often suffer significant losses in yield and quality due to climatic conditions, pests, and various pathogens [1]. Agriculture 4.0 has become a transformative trend in modern agriculture worldwide. Leveraging the Internet of Things, machine learning, drones, big data analytics, and artificial intelligence can benefit every stage of the agricultural production chain [2], including tasks such as disease recognition, yield estimation, and production management.
With advances in computer hardware, photographic techniques, and computing, computer vision methods based on spectral and RGB images have attracted increasing attention from researchers. Figure 1 displays the band information of RGB, multispectral, and hyperspectral images. These methods find wide application in agriculture, such as weed detection [3] and crop yield estimation [4]. They alleviate the subjectivity, workload, and time constraints associated with manual recognition and significantly enhance the efficiency and accuracy of related tasks.
Figure 1. (a) RGB bands, (b) multispectral bands, (c) hyperspectral bands.
Thousands of crop diseases have been identified [5], severely impacting global crop yields and causing significant economic losses every year. Preventing and controlling crop diseases through disease recognition methods is therefore crucial. However, traditional disease recognition typically relies on experts visually inspecting disease symptoms under microscopes or conducting physicochemical analyses, which demands substantial human and material resources as well as specialized knowledge and equipment. This approach fails to meet the accuracy and flexibility requirements of modern agriculture [6]. Moreover, lacking specialized knowledge, farmers often apply large quantities of chemicals to prevent crop diseases; while this can suppress disease occurrence, it may damage the soil environment and pose numerous health risks [7].
To tackle these challenges effectively, many researchers are combining spectral and RGB images of crop diseases with artificial intelligence technologies. This integration has increasingly made disease recognition methods based on spectral and RGB images a critical safeguard for crop yields, food security, and the health of animals and humans [8], elevating their significance in addressing real-world agricultural issues. The ongoing enhancement of crop disease recognition methods not only aids timely disease detection, the optimization of agricultural practices, and the prompt development of eradication strategies but also helps farmers who lack specialized knowledge minimize chemical use, ensuring the sustainable growth of crop farming.

1.2. Methodology and Contributions

This paper reviews existing crop disease recognition methods based on spectral and RGB images. To ensure comprehensiveness and reliability, we initially selected articles from academic resources such as Google Scholar, Elsevier, Scopus, Frontiers, IEEE Xplore, and Springer using keywords such as “plant/crop disease detection/recognition/classification”, “spectral image-based disease recognition”, and “RGB image-based disease recognition”. Articles from 2020 to 2025 were screened based on their relevance to this survey, the significance of the methods employed, the performance of the models, and the datasets used.
There are also several literature reviews related to crop disease identification technology.
For instance, Iqbal et al. [9] proposed a detailed classification system for citrus diseases, encompassing preprocessing, segmentation, feature extraction, feature selection, and classification, along with their challenges, advantages, and disadvantages, and delineated the gaps between methods introduced to enhance detection and classification accuracy. More recently, Upadhyay et al. [10], published in 2025, provided a comprehensive review of deep learning techniques in precision agriculture, highlighting the paradigm shift towards real-time detection models, Generative AI, and Foundation Models, which complements the scope of earlier reviews. Noon et al. [11] focused on the performance of Convolutional Neural Network (CNN) models in detecting diseases in vegetables, fruits, and miscellaneous plant species. Abade et al. [12] primarily reviewed CNN techniques for crop disease identification and classification, investigating significant contributions to the listed challenges and innovations for improving CNN performance; they also summarized available methods by model, dataset, and CNN type used for plant disease detection. Terentev et al. [13] analyzed the current state of early plant disease detection using hyperspectral remote sensing in four crops: oil palm, citrus, solanaceae, and wheat. They demonstrated the feasibility of early plant disease detection through hyperspectral remote sensing and proposed a systematic table for detecting plant diseases via hyperspectral remote sensing, including critical wavelength bands and sensor model information. Thakur et al. [14] briefly investigated the equipment used for data capture, capture conditions, dataset collection regions, and preprocessing techniques, and introduced various public datasets used by researchers in this field. Their article then discussed the impact of emerging deep learning trends, such as transfer learning, attention mechanisms, localization methods, and lightweight models, on plant disease identification; examined the suitability of attention-based convolutional neural networks for field plant disease identification; and elaborated on the plant diseases considered important in different countries. Jafar et al. [15] emphasized the most common diseases and infections in four vegetables (tomato, pepper, potato, and cucumber) along with their symptoms. They detailed the predetermined steps for predicting plant diseases using artificial intelligence, including image acquisition, preprocessing, segmentation, feature selection, and classification, and comprehensively reviewed machine learning-based and deep learning-based research on detecting diseases in these four crops, including the datasets used to evaluate these studies.
Despite existing reviews covering various methods and pointing out significant challenges, the approaches to crop disease identification still need to be revisited from different perspectives to assess the strengths and weaknesses of current methods. The foundation of disease identification research lies in data, and training models capable of accurately identifying diseases requires assembling credible datasets comprising various disease images. Therefore, unlike previous reviews, this paper focuses on the types of images frequently used for crop disease identification. It first reviews the available spectral and RGB public datasets for crop disease identification and the evaluation metrics used to assess identification models, helping readers better understand model performance and conduct related experiments. It then summarizes crop disease identification techniques based on spectral and RGB images and discusses some of their limitations, along with possible improvement strategies. Additionally, the paper discusses the differences among crop disease identification methods and the considerations for researching disease identification. Finally, it summarizes potential future research directions for disease identification based on spectral and RGB images, including the combination of spectral and RGB images and the generation of disease images using diffusion techniques. Table 1 summarizes the coverage and objectives of the existing reviews and of this review.
Table 1. Comparison between existing reviews and this review in the field of crop disease detection.
The remainder of this review is organized as follows: Section 2 summarizes the public datasets and model performance evaluation metrics used for crop disease identification. Section 3 introduces crop disease identification methods based on spectral images, mainly the commonly used traditional machine learning methods, along with their limitations and possible improvement strategies. Section 4 introduces crop disease identification methods based on RGB images, mainly the commonly used deep learning methods, along with their limitations and possible improvement strategies. Section 5 discusses the considerations for different crop disease identification methods. Section 6 summarizes and outlines future directions for crop disease identification technologies.

2. Public Datasets and Evaluation Metrics

2.1. Public Datasets

When developing efficient plant disease recognition methods, the availability of sufficient high-quality datasets is crucial. Some researchers have provided public datasets covering a wide range of plant diseases. Table 2 summarizes the publicly available spectral and RGB image datasets for disease recognition. These datasets include various disease types and can be used to train different disease recognition models. Figure 2 shows some examples from these datasets.
Table 2. Publicly available datasets. Abbreviations: RGB = red–green–blue images; Hyperspectral = hyperspectral images with tens to hundreds of continuous spectral bands; XDB = Extended Dataset for Plant Disease Detection; SWD = Strawberry Wilt Disease dataset; NLB = Northern Leaf Blight dataset; FGVC8 = Fine-Grained Visual Categorization 8 (Plant Pathology); LWDCD2020 = Leaf Wilt Crop Disease Dataset 2020; AFB = American Foulbrood dataset.
Figure 2. Sample images from various datasets.
  • PlantVillage—The PlantVillage dataset is widely used for crop disease classification tasks. It covers 38 crop disease categories across 14 plant species, with 54,305 images [16]. All images were obtained in controlled laboratory environments with simple backgrounds.
  • XDB—Researchers at Embrapa [17] introduced the Plant Disease Detection Database (PDDB), covering 171 diseases and other conditions across 21 plant species, with 2326 images as of October 2016. To support data-hungry technologies like deep learning, the dataset was expanded into XDB, where each image was subdivided according to specific criteria, increasing the total number of images to 46,513.
  • SWD—Nie et al. [18] constructed a dataset of 3531 images to evaluate and test their proposed method. The dataset includes four categories: healthy leaf, healthy petiole, Verticillium leaf, and Verticillium petiole. Each image indicates whether the strawberry plant is infected with Verticillium wilt disease.
  • NLB—NLB is an object detection dataset created by [19] specifically for northern corn leaf blight. It consists of 18,222 images collected from actual fields using smartphones, fixed cameras, and drones, with annotations of 105,735 bounding boxes for northern corn leaf blight lesions.
  • FGVC8—Thapa et al. [20] manually captured 3651 high-quality images of actual symptoms, showcasing various apple leaf diseases under different lighting, angles, surfaces, and noise. From these, they created a pilot dataset covering apple scab, cedar rust, and healthy leaves, released to the Kaggle community as FGVC7 for the Fine-Grained Visual Categorization challenge at CVPR 2020. FGVC8 significantly increased the number of apple leaf disease images over FGVC7 and added more fine-grained disease categories, totaling 18,632 images covering 12 categories.
  • LWDCD2020—LWDCD2020 comprises approximately 12,000 images covering nine wheat disease categories (loose smut, spot blotch, powdery mildew, leaf rust, Fusarium head blight, crown rot, black point, Karnal bunt, and wheat streak mosaic virus) and one healthy category [21]. The images have been preprocessed to a unified size. Almost all images in LWDCD2020 contain only one type of disease and exhibit complex backgrounds, varying acquisition conditions, features from different disease development stages, and similar features among different wheat diseases.
  • PlantDoc—The PlantDoc dataset consists of 2598 images covering 13 plant species and 17 disease categories [22]. It is a publicly available dataset used for object detection.
  • BARI-Sunflower—The BARI-Sunflower dataset was collected at the Bangladesh Agricultural Research Institute (BARI) Gazipur Demonstration Farm and comprises 467 original images of healthy and diseased sunflower leaves and flowers [23]. To meet the data demands of deep learning, augmentation techniques such as random rotation, scaling, and cropping were applied, yielding 470 images of downy mildew, 509 images of leaf scars, 398 images of gray mold, and 515 images of fresh leaves.
  • Katra-Twelve—Katra-Twelve is a public leaf image dataset provided by Shri Mata Vaishno Devi University, encompassing images of healthy and diseased leaves [24]. The dataset features 12 economically and environmentally beneficial plant species, covering 22 leaf types. There are 4503 images, with 2278 images of healthy leaves and 2225 images of diseased leaves.
  • AFB—The dataset by [25] comprises three sets of hyperspectral images of apple trees. The first set covers a 15-day monitoring period of seven apple trees infected with fire blight and six control plants. The second set includes time-series monitoring of three infected plants, seven plants subjected to water stress, and seven control plants. The third set was collected in an orchard and involves nine trees exhibiting fire blight symptoms and six control trees. All images of symptomatic plants provide pixel-level location information for the infected areas.

As indicated in Table 2, most public datasets are collected in controlled environments. To address the scarcity of diverse field data, recent studies have increasingly explored generative models for data augmentation. Han et al. [26] demonstrated that GAN-based synthesis can significantly improve disease detection robustness. Moreover, recent reviews highlight diffusion-based augmentation as a promising direction for enhancing generalization in real-world agricultural scenarios [27].

2.2. Evaluation Metrics

In recent years, multiple evaluation metrics have been introduced to assess the performance of disease identification models. Standard metrics for evaluating spectral image-based machine learning models include accuracy, precision, the Matthews correlation coefficient (MCC), and the kappa coefficient. Commonly used metrics for RGB image-based disease detection models include average precision (AP), mean average precision (mAP), F1 score, accuracy, and recall. Additionally, some studies estimate disease severity while identifying crop diseases; the accuracy of these estimates is assessed with metrics such as root mean square error (RMSE), the coefficient of determination (R²), and mean absolute error (MAE). Table 3 summarizes the evaluation metrics for recognition performance and severity estimation, and a small computational sketch follows it.
Table 3. Commonly used evaluation metrics in crop disease recognition.
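To make Table 3 concrete, the sketch below computes the classification and severity-estimation metrics named above with scikit-learn. The label arrays are hypothetical placeholders for illustration only, not data from any cited study.

```python
# Minimal sketch of the metrics in Table 3, computed with scikit-learn.
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, matthews_corrcoef, cohen_kappa_score,
                             mean_absolute_error, mean_squared_error, r2_score)

y_true = np.array([0, 1, 2, 1, 0, 2, 1])   # ground-truth disease classes (toy data)
y_pred = np.array([0, 1, 2, 0, 0, 2, 1])   # model predictions (toy data)

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred, average="macro"))
print("Recall   :", recall_score(y_true, y_pred, average="macro"))
print("F1 score :", f1_score(y_true, y_pred, average="macro"))
print("MCC      :", matthews_corrcoef(y_true, y_pred))
print("Kappa    :", cohen_kappa_score(y_true, y_pred))

# Severity estimation is a regression task, so RMSE, R^2, and MAE apply.
sev_true = np.array([0.10, 0.35, 0.60, 0.80])   # e.g., fraction of leaf area diseased
sev_pred = np.array([0.12, 0.30, 0.55, 0.85])
print("RMSE:", mean_squared_error(sev_true, sev_pred) ** 0.5)
print("R^2 :", r2_score(sev_true, sev_pred))
print("MAE :", mean_absolute_error(sev_true, sev_pred))
```

Macro averaging is used here so that rare disease classes weigh equally with common ones; per-class or weighted averaging may better match a given study's reporting convention.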

3. Crop Disease Recognition Methods Based on Spectral Images

Spectral imaging technology originated from the multispectral remote sensing of the 1970s, which typically captures 3 to 10 discrete bands, such as red, green, and blue, covering specific ranges within the visible and near-infrared spectrum but failing to provide continuous spectral information. Driven by the needs of Earth remote sensing, spectral imaging has continually evolved, leading researchers to develop hyperspectral sensors capable of capturing tens or even hundreds of continuous narrow bands from the visible to the infrared range, assigning each pixel of the spatial image its own spectral signature. Figure 1 illustrates the band information of these two types of spectral images. Thanks to these characteristics, researchers can analyze the chemical composition and inherent physical structure of crops, obtaining information that reflects their biological status, such as the Normalized Difference Vegetation Index (NDVI) [28]. Abbott et al. [29] were the first to apply spectral imaging to agriculture, achieving quality measurement of fruits and vegetables. This success attracted many researchers to apply spectral imaging to agriculture, and it gradually became one of the most commonly used image types for crop disease identification. Most existing research on spectral image-based disease detection has used machine learning algorithms and achieved excellent results. While traditional machine learning methods have long dominated spectral disease analysis, recent studies have increasingly applied deep learning architectures to hyperspectral data. Goyal et al. [30] employed hybrid deep learning models to extract spectral-spatial features for early disease detection. Similarly, Zhang et al. [31] utilized 3D-CNNs to improve classification accuracy for apple quarantine diseases. Furthermore, Chossegros et al. [32] demonstrated the effectiveness of deep learning in distinguishing multiple overlapping infections in wheat.
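As a concrete example of such a physiologically interpretable index, NDVI is computed per pixel as (NIR − Red) / (NIR + Red). The sketch below assumes a hypothetical hyperspectral cube and band indices; real band positions depend on the sensor.

```python
# Minimal NDVI sketch for a (height, width, bands) hyperspectral cube.
import numpy as np

def ndvi(cube: np.ndarray, red_band: int, nir_band: int) -> np.ndarray:
    """Per-pixel NDVI = (NIR - Red) / (NIR + Red)."""
    red = cube[:, :, red_band].astype(np.float64)
    nir = cube[:, :, nir_band].astype(np.float64)
    return (nir - red) / (nir + red + 1e-8)   # epsilon avoids division by zero

# Hypothetical 100x100 cube with 224 bands; band indices are placeholders.
cube = np.random.rand(100, 100, 224)
vi = ndvi(cube, red_band=60, nir_band=120)
stressed = vi < 0.4   # illustrative threshold flagging potentially unhealthy pixels
```

Healthy vegetation reflects strongly in the near-infrared and absorbs red light, so low NDVI values can flag pixels worth closer inspection, one reason spectral indices remain useful features for the machine learning methods reviewed below.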

3.1. Spectral Image-Based Traditional Machine Learning Crop Disease Recognition Methods

Machine learning is a branch of computer science that aims to enable computers to “learn” without being explicitly programmed [33]. Figure 3 shows the basic process of machine learning algorithms. The development of traditional machine learning has been a continuous process of evolution and expansion. Since the 1950s, machine learning has received growing attention as an essential branch of artificial intelligence, progressing from early research through the exploration of knowledge implantation and symbolic representation to a revival centered on multi-concept learning. The technology has steadily matured and gradually integrated with disciplines such as psychology, biology, mathematics, automation, and computer science, forming a solid theoretical foundation. With continuing technical advances and growing application demand, machine learning has been widely applied since the mid-1980s in fields such as finance, e-commerce, transportation, entertainment, and manufacturing; these applications have driven the development of related industries and further promoted the improvement and innovation of machine learning itself. Today, machine learning is widely applied in agriculture. Researchers provide rich suggestions and insights about crops to help machines learn crop characteristics, assisting farmers in reducing agricultural losses, achieving more efficient and precise agricultural management, and improving production quality [34].
Figure 3. The basic process of machine learning algorithms.
Since spectral images correlate with the biophysical and biochemical properties of crops, they can assist researchers in analyzing the chemical composition and inherent physical structure of crops, thereby enabling monitoring of crop health. When processing spectral image data, traditional machine learning methods primarily utilize the spectral information of images, placing greater emphasis on the spectral characteristics of each pixel, which can improve the accuracy of crop disease identification. An increasing number of researchers are therefore using various traditional machine learning methods to develop spectral image-based crop disease classification approaches. Figure 4 illustrates the basic process of such methods, and a minimal code sketch of this pipeline follows the figure.
Figure 4. The basic process of spectral image-based machine learning disease recognition methods.
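The sketch below illustrates the Figure 4 pipeline end to end: per-pixel spectra are preprocessed, reduced in dimensionality, and classified. The data shapes, labels, and the choice of standard normal variate (SNV) preprocessing are illustrative assumptions; real studies substitute calibrated reflectance spectra and the preprocessing suited to their sensor.

```python
# Minimal sketch of a spectral ML pipeline: preprocessing -> PCA -> SVM.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.svm import SVC

def snv(X):
    """Standard normal variate: normalize each spectrum to zero mean, unit std."""
    return (X - X.mean(axis=1, keepdims=True)) / (X.std(axis=1, keepdims=True) + 1e-8)

X = np.random.rand(600, 224)             # 600 spectra x 224 bands (placeholder data)
y = np.random.randint(0, 3, size=600)    # healthy / disease A / disease B (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(FunctionTransformer(snv),   # scatter correction
                      PCA(n_components=20),       # reduce 224 bands to 20 components
                      SVC(kernel="rbf", C=10))    # classifier from Table 4's family
model.fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))
```

Swapping the SVC for RF or KNN reproduces the other classifier families that dominate Table 4; the preprocessing and dimensionality-reduction steps typically matter as much as the classifier choice.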
Based on the selection criteria of this paper, 27 studies on spectral image-based machine learning disease recognition methods from 2020 to 2024 were selected. Table 4 lists these studies, including publication time, references, target diseases, camera models, wavelengths, machine learning algorithms (the best-performing one highlighted in bold), and the optimal recognition results.
Table 4. Spectral image-based traditional machine learning crop disease recognition methods.

3.2. Limitations of Spectral Image-Based Traditional Machine Learning Crop Disease Recognition Methods

Table 4 shows that machine learning algorithms such as SVM, Random Forest (RF), and K-Nearest Neighbors (KNN) can effectively utilize spectral imagery for crop disease classification and severity prediction. However, these methods currently face several limitations regarding theory, technology, and practical application.

3.2.1. Limitations of Data Processing and Traditional Machine Learning Methods

Figure 4 shows that when traditional machine learning methods are used for crop disease identification, data preprocessing is required to correct or restore the spectral images, eliminating anomalies caused by noise from changes in geometry and atmospheric conditions [62], as well as from the geometric parameters of sensors and plants. This preprocessing step facilitates the extraction of useful information. However, different types of spectral images, wavebands, or capture devices may require different preprocessing techniques, complicating data processing. After preprocessing, experienced experts must design feature extractors to convert the crop disease data into intermediate representations or feature vectors rich in disease characteristics, which traditional machine learning algorithms then use for classification.
Although manual feature extraction has achieved good results so far, it suffers from strong subjectivity and inefficiency and cannot support end-to-end applications. Establishing standard data preprocessing and feature extraction procedures is therefore of significant importance. These issues can also be addressed by drawing on the deep learning wave triggered by AlexNet [63]. Compared with traditional machine learning algorithms, deep learning algorithms are neural networks with deep structures, offering high performance, efficiency, and scalability. They can automatically extract image features, capture spatial relationships, and enhance the detection of disease areas, thereby mitigating the subjectivity of traditional machine learning pipelines and improving efficiency. Deep learning algorithms have demonstrated powerful capabilities in complex nonlinear modeling tasks. Many scholars have already combined spectral imagery with deep learning for crop disease identification [28,64,65,66,67,68]. They primarily use spectral information and resolution to create datasets and then incorporate new designs into existing networks to enhance the models' extraction and representation capabilities, enabling effective disease identification. However, these methods also have limitations: they are typically supervised, so large-scale spectral datasets are required, and the difficulty of collecting and labeling such datasets cannot be ignored. Generating spectral data using methods such as generative adversarial networks and diffusion models appears feasible; we discuss these methods in Section 6.
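To show what end-to-end spectral-spatial feature learning looks like in practice, the sketch below defines a minimal 3D-CNN for hyperspectral patches, in the spirit of the 3D-CNN studies cited above. The architecture, layer sizes, and input shapes are illustrative assumptions, not a reproduction of any cited model.

```python
# Minimal 3D-CNN sketch for hyperspectral patch classification (PyTorch).
import torch
import torch.nn as nn

class Spectral3DCNN(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            # Input: (batch, 1, bands, height, width); 3D kernels convolve
            # jointly over the spectral axis and the two spatial axes.
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.ReLU(),
            nn.MaxPool3d((2, 2, 2)),
            nn.Conv3d(8, 16, kernel_size=(5, 3, 3), padding=(2, 1, 1)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),   # global pooling over bands and space
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = Spectral3DCNN(n_classes=3)
patch = torch.randn(4, 1, 64, 9, 9)   # 4 patches, 64 bands, 9x9 spatial window
logits = model(patch)                  # -> (4, 3), no hand-crafted features needed
```

Because the 3D kernels span the band axis, the network learns spectral-spatial features directly from the cube, replacing the expert-designed feature extractors discussed above at the cost of needing far more labeled data.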

3.2.2. Limitations of Datasets and Single Spectral Modality

Datasets are crucial to ensuring the reliability of artificial intelligence model training. However, there is currently no unified, standardized spectral dataset for common crop diseases, so researchers lack a comparison standard when building datasets or conducting experiments, which in turn makes it difficult to determine the reliability of the dataset and the model [62]. An urgent task, therefore, is to create standard, open spectral datasets for crop diseases wherever possible. In addition, as shown in Table 4, researchers have selected different spectral bands for different crops. However, differences among spectral collection devices in resolution, bands, and wavelength range can cause some loss or inadequacy in the disease features captured in spectral images, posing challenges for the model to learn diverse disease features (Wei et al., Bu et al.). The fusion of multiple image formats can be used to complement one another and enrich the disease information contained in the images. Many technologies for hyperspectral and multispectral image fusion, as well as the fusion of spectral and RGB images, have already been applied in agriculture [69,70,71,72] with excellent results. Selecting appropriate fusion algorithms to fuse multiple crop disease images, enhancing disease features and improving detection accuracy, is therefore a noteworthy research direction.

3.2.3. Limitations of Experimental Analysis and External Influences

Currently, most spectral image-based disease detection methods do not focus on how spectral images reflect changes in the physicochemical indicators of infected crops; instead, they primarily explore the feasibility of the established datasets with the applied models. This largely overlooks the ability of spectral images to reflect crops' biophysical and biochemical characteristics. Without deeper analysis of this ability, no connection can be established between the rich features of spectral images and the internal physicochemical changes caused by crop diseases. Consequently, researchers may struggle to extract the feature information best suited to the model, leading to misjudgments about model performance. Additionally, agricultural research has inherent delays, and images captured in specific time scenarios may be affected by external environmental changes such as leaf occlusion and brightness variations, impacting the reproducibility and validity of experiments [13]. When collecting images, it is therefore essential to consider potential confounding factors comprehensively and reduce the impact of external factors on the model, thereby achieving precise detection of crop diseases in the field.

4. Disease Recognition Methods Based on Red–Green–Blue Images

Compared to the difficulty and high cost of acquiring spectral images in field conditions, RGB images, characterized by low cost, ease of acquisition, and simplicity of use [73], have become the more commonly used image type in agricultural computer vision. RGB images, which simulate human vision in the visible spectrum, are realistic color images that can intuitively reflect disease areas and characteristics. As early as 2001, Takakura et al. [74] utilized RGB images for non-destructive testing of plant health. Although RGB images provide only limited information from three bands compared to spectral images, there have been numerous studies on RGB image-based crop disease identification. When discussing the limitations of spectral image-based disease identification above, we noted the dependence of deep learning methods on large-scale datasets. Acquiring RGB images is simpler and more convenient than acquiring spectral images, so constructing large-scale disease datasets is less difficult. Deep learning methods are therefore more prevalent in research on RGB image-based crop disease detection.
It should be noted that traditional machine learning methods based on hand-crafted features, such as color, texture, and shape descriptors combined with classifiers (e.g., SVM and random forest), have also been explored in early RGB image-based disease recognition studies. However, due to their limited representation capacity and relatively weak generalization ability in complex field scenarios, such methods have become less dominant compared with deep learning approaches in recent studies.

4.1. Deep Learning Methods for Crop Disease Recognition Based on Red–Green–Blue Images

Deep learning is a branch of machine learning and a frontier area of artificial intelligence. It leverages multi-layered structures to learn abstract representations of data, thereby constructing computational models that can be iteratively trained. Since the 2010s, deep learning has achieved remarkable progress and been applied extensively across fields. In 2012, its outstanding performance in the ImageNet image recognition competition marked the maturity of the technology and the breadth of its applications. Deep learning subsequently demonstrated powerful capabilities in areas such as face recognition, image generation, natural language processing, medical diagnosis, financial analysis, artistic creation, and autonomous driving. The success of these applications is attributable not only to the continuous optimization and innovation of deep learning algorithms but also to the rapid development of big data, high-performance computing, and cloud computing. With continuing technical advances and increasingly diverse application scenarios, deep learning has become a significant force driving artificial intelligence and continues to bring revolutionary changes to human society.
Disease recognition methods based on deep learning primarily leverage classification, segmentation, and object detection techniques. Figure 5 shows the basic workflow of RGB image-based classification, segmentation, and object detection research. Classification methods aim to identify and classify crop diseases accurately, primarily by training models on the diseased regions of crop leaves in RGB images to recognize disease patterns, helping people determine whether diseases have occurred. Deep learning-based disease segmentation methods aim to segment lesion areas from the leaves, accurately assigning each pixel in the lesion area to a predefined category; this achieves pixel-level semantic understanding and helps people grasp the location, shape, and other properties of diseased areas. Deep learning-based disease detection methods help people identify the location and category of crop diseases, usually with high localization and classification accuracy.
Figure 5. The basic process of RGB image-based deep learning crop disease recognition methods.
Based on the selection criteria outlined in this paper, 45 studies from 2020 to 2025 on deep learning-based disease identification methods using RGB images were selected. Table 5 lists these studies, including publication time, references, detection objects, baselines, improvement methods, and results.
Table 5. Red–green–blue image-based deep learning crop disease recognition methods.

4.2. The Limitations of Red–Green–Blue Image-Based Deep Learning Crop Disease Recognition Methods

Table 5 shows that most studies use object detection methods, because classification and segmentation methods have certain limitations by comparison. For instance, classification methods typically accept narrow-scope images containing only one or a few centered objects, and they struggle to classify wide-scope images containing many objects [123]. Furthermore, such studies often emphasize distinguishing disease types but cannot fully meet the need for flexible, real-time detection of specific disease locations [8]. Segmentation methods, like detection methods, let people intuitively see the categories and locations of diseased leaves and can even reflect the shape of disease spots, but they suffer from difficult labeling and annotation [124]. Beyond these methodological limitations, deep learning-based disease identification using RGB images also faces the following challenges.

4.2.1. Lack of Crop Disease Data

Although RGB images are cost-effective to acquire and use, the amount of data collected may sometimes fail to meet the requirements of deep learning models, and researchers and practitioners face significant challenges in data collection, labeling, quality assurance, and cost control. Data augmentation, which increases the quantity and diversity of data through operations such as translation, rotation, and cropping on existing data without additional labeling, is a commonly used improvement strategy [125]. Researchers have combined multiple augmentation strategies to enhance data diversity and complexity, improve model robustness, and prevent overfitting [75,85,96,101]. However, over-reliance on data augmentation can also lead to overfitting and may introduce new noise into the dataset. Alternatively, transfer learning (as shown in Figure 6; a minimal fine-tuning sketch follows the figure) can address poor model performance caused by insufficient data [126]. When disease data is scarce, training on a larger dataset and then transferring to the crop diseases of interest [127,128,129,130,131] is a practical approach. However, transfer learning also faces challenges from differences in dataset distributions: if the source and target domains differ significantly, the effectiveness of model training suffers [132]. These issues can be mitigated through feature alignment, multitask learning, feature selection, or transformation. Additionally, data synthesis techniques can generate any number of diverse training samples from simulated scenarios for model training [110,132].
Figure 6. The basic process of transfer learning.
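The sketch below illustrates the Figure 6 workflow, combined with basic augmentation transforms: an ImageNet-pretrained CNN is fine-tuned on a small RGB disease dataset. The dataset path, class count, and hyperparameters are placeholders, and ResNet-18 stands in for whatever backbone a given study adopts.

```python
# Minimal transfer-learning sketch: fine-tune a pretrained CNN on disease images.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),        # augmentation: random crop/scale
    transforms.RandomHorizontalFlip(),        # augmentation: mirror images
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("data/disease_train", transform=train_tf)  # placeholder path
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in model.parameters():                  # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))  # new task head

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for images, labels in loader:                 # one illustrative epoch
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```

Freezing the backbone and training only the new head is the cheapest variant; when more disease data is available, unfreezing the later layers with a lower learning rate usually improves accuracy further.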

4.2.2. Lack of Publicly Available Standard Field Datasets

Although some public crop disease datasets have been proposed in recent years, most are laboratory-based rather than field-based, so models trained on them may perform poorly in field conditions. Currently, the disease recognition models developed by researchers are based either on partially public datasets or entirely on private datasets, most of which contain only a single crop species, and researchers rarely make these data public. Therefore, even for the same disease on the same crop, there is no unified standard for comparing the performance of models developed by different researchers. Images of common crop diseases must hence be captured in actual field conditions to construct benchmark datasets, providing research standards for the community.

4.2.3. The Problems of Interference from Complex Scenes and Difficulty in Identifying Small Lesion Sizes

In actual field scenarios, factors such as background interference, leaf occlusion, morphological differences, and scale variations may limit the model's ability to extract disease features, thereby restricting recognition accuracy [133]. Changes in weather and lighting may introduce redundant features that interfere with the learning of disease-leaf characteristics, and when drones with cameras are used for crop disease recognition, such dynamic background changes can make recognition even harder [14]. Additionally, infection time and disease severity vary among individual leaves in the field; during the early stages of infection, when symptoms are still small, recognition becomes more difficult.

Several improvements can address these issues. Firstly, hard example mining can be employed, which uses misdetected samples as new training samples to retrain the model, enhancing its ability to recognize “hard-to-detect” samples [134]; this strategy has been widely applied in person recognition, speech recognition, disease recognition, and other agricultural areas [135,136,137,138,139,140,141]. Secondly, contrastive learning (as shown in Figure 7; a minimal sketch of the objective follows the figure) can learn data representations by comparing the similarities and differences between samples [142], enhancing the accuracy and generalization of crop disease recognition. Contrastive learning methods commonly used in crop disease recognition include image-pair contrastive learning [143,144,145,146] and self-supervised contrastive learning [147,148,149]. Researchers have also mitigated complex background interference and small lesion sizes by improving loss functions [67,88,96,117]. Alternatively, attention mechanisms can be leveraged to improve the model's focus on occluded regions and leaves with small lesions, thereby enhancing recognition accuracy [75,96,98,106,150,151].

Building upon earlier CNN-based approaches, recent studies in 2024 and 2025 have increasingly adopted advanced Transformer-based and hybrid architectures to cope with complex field environments. Salman et al. [152] demonstrated that Vision Transformer-based models are particularly effective in in-the-wild scenarios by capturing long-range contextual dependencies, thereby improving the discrimination of disease symptoms from cluttered backgrounds. To further enhance robustness under varying illumination and background conditions, hybrid CNN–ViT frameworks have been proposed, combining local feature extraction with global contextual modeling [153]. In parallel, model interpretability has gained growing attention; explainable ensemble learning frameworks have been integrated to visualize the decision-making process, enhancing user trust and facilitating the distinction between true lesions and background noise [154].
Figure 7. The basic process of contrastive learning.
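The sketch below shows one common contrastive objective behind Figure 7, the NT-Xent (InfoNCE) loss used in SimCLR-style self-supervised learning. The embedding shapes are illustrative; a real pipeline would feed two augmented views of each leaf image through a shared encoder to produce `z1` and `z2`.

```python
# Minimal NT-Xent (InfoNCE) contrastive loss sketch (PyTorch).
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """z1, z2: (N, d) embeddings of two augmented views of the same N images."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)    # (2N, d), unit norm
    sim = z @ z.t() / temperature                         # pairwise cosine similarities
    n = z1.size(0)
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool), float("-inf"))  # drop self-pairs
    # The positive for row i is row i+n (and vice versa); all others are negatives.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

z1 = torch.randn(16, 128)   # embeddings of view 1 (e.g., random crop)
z2 = torch.randn(16, 128)   # embeddings of view 2 (e.g., color jitter)
loss = nt_xent(z1, z2)
```

By pulling two views of the same leaf together and pushing other leaves apart, the encoder learns background- and lighting-invariant representations, which is precisely why contrastive pretraining helps under the field interference described above.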

5. Discussion

A critical summary of existing research shows that both spectral image-based and RGB image-based crop disease recognition methods have achieved promising accuracy and reliability. However, it is worth discussing how researchers should choose between spectral image-based and RGB image-based methods according to their needs, and whether new models must be developed for different crops or for different diseases within the same crop. It is also necessary to discuss the applicability of traditional machine learning and deep learning methods under different image modalities and application scenarios.
Regarding the first question, the choice between spectral and RGB images essentially depends on which stage of disease occurrence the research targets. If the aim is to apply artificial intelligence before crop diseases spread on a large scale, spectral images are undoubtedly the better choice. In such early detection scenarios, traditional machine learning methods combined with spectral features and physically interpretable indices can still be effective, while deep learning methods are increasingly adopted when sufficient labeled data are available. Before diseases spread widely, many crop leaves or fruits may appear visually indistinguishable from their healthy state, both to the human eye and in photos captured by RGB devices, even though they are already affected internally. Thanks to the characteristics of spectral images, spectral changes in crops that appear healthy but are infected can be detected early, allowing infected crops to be identified and measures taken to suppress the disease. However, if experimental conditions are limited, for instance, when spectral capture equipment is unavailable, or when the goal is only to identify the specific disease once it has occurred, RGB images are the better option. RGB images are more accessible and can be collected in the field with common smartphones, and once a disease has occurred there are usually prominent characteristics on the crop surface, so RGB images suffice for recognition. For RGB images, deep learning methods dominate current research due to their strong feature learning capability, whereas traditional machine learning methods are more commonly reported in controlled environments or on small-scale datasets. In summary, if conditions permit and early detection is required, spectral images are preferable; otherwise, RGB images are sufficient.
Regarding the second question, researchers have developed different models for different diseases in different crops, for different diseases in the same crop, and even for the same disease in the same crop. This phenomenon should be viewed critically. In the first scenario, the leaves of different crops vary in size, shape, and other characteristics, and the target diseases differ; these significant differences may limit the ability of existing models to recognize unknown crop diseases. In the second scenario, significant differences in disease manifestation may likewise limit the ability of existing models to recognize the diseases; however, if the differences in disease characteristics are small, developing a new model may be unnecessary. For these first two scenarios, incremental learning can help a model gradually absorb information from new data without forgetting old knowledge, enabling effective recognition of both new and old data [155]. In the third scenario, a single excellent method may suffice, but scene changes can degrade recognition performance; scene adaptation technology is a good solution [156], helping the model maintain its recognition ability for the same category of targets across different scenarios.

6. Summary and Prospects

Crop diseases cause irretrievable economic losses to global agriculture. Traditional disease detection methods require a considerable workforce and material resources, and traditional control methods may cause environmental pollution and other problems. There is therefore a need for automated, economical, reliable, accurate, and rapid diagnostic systems for crop diseases. With the rapid development of artificial intelligence, an increasing number of researchers have applied AI methods to disease recognition and achieved promising results. Most studies on crop disease recognition utilize RGB, multispectral, or hyperspectral images. This review therefore provides a detailed overview of publicly available RGB and spectral datasets and of the primary research methods, which are non-destructive and of significant importance for the detection and prevention of crop diseases. It also summarizes some shortcomings of these methods and potential improvements. Research on crop disease detection is still developing; here, we discuss possible future trends and research directions.

6.1. Integrating Multi-Source Data for Crop Disease Identification

The work summarized in this article shows that most identification research has relied on a single modality for recognition. However, single-modality approaches have inherent limitations in certain scenarios, so combining images from two modalities can improve the efficiency of disease identification. There are two primary ways to integrate spectral images with RGB images for crop disease identification: space–ground integration and multimodal fusion.

6.1.1. Space–Ground Integration

The primary approach of space–ground integration involves using drones with spectral imaging devices to capture wide-range images during field patrols from high altitudes. Subsequently, disease recognition methods are employed to determine which farmland is affected by diseases, helping farmers pinpoint the locations of disease outbreaks. Once the locations are identified, farmers can visit the affected areas and capture RGB images using smartphones to obtain more detailed information about the diseases. Establishing and developing an integrated space–ground platform is a challenging but practical technical method. This approach ensures full-growth-cycle disease recognition for crops in actual field scenarios. Further, it enhances the accuracy of assessing the overall disease severity in farmland and identifying individual plant disease conditions.

6.1.2. Multimodal Fusion

Multimodal fusion refers to integrating data from two or more modalities for crop disease identification, thereby overcoming the limitations of single-modality approaches. Its potential has already been demonstrated in crop disease recognition [157,158]. Using multimodal fusion techniques, such as extracting features from spectral and RGB images separately and then applying a multi-level fusion model, can enhance model performance to varying degrees. The rich information in spectral images compensates for the limited information obtained from RGB images when diseases are mild, while RGB images supplement the visible-light information that spectral images lack. Combining multiple modalities allows the model to learn more comprehensive and sufficient phenotypic information, leading to superior performance. However, most current multi-source fusion methods focus more on the feasibility of the approach in downstream tasks. The key to such research lies in efficiently and accurately aligning and fusing data from different modalities, which is one of the potential research directions for the future; a minimal fusion sketch follows.
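To make the feature-level fusion idea concrete, the sketch below defines a minimal two-branch network: one encoder for a per-sample spectrum, one for an RGB patch, with their features concatenated before classification. All layer sizes and input shapes are illustrative assumptions, not a reproduction of any cited model.

```python
# Minimal two-branch multimodal fusion sketch (PyTorch).
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self, n_bands: int = 224, n_classes: int = 3):
        super().__init__()
        self.spectral = nn.Sequential(        # encodes a per-sample spectrum
            nn.Linear(n_bands, 128), nn.ReLU(), nn.Linear(128, 64))
        self.rgb = nn.Sequential(             # small CNN encoder for RGB patches
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 64))
        self.head = nn.Linear(64 + 64, n_classes)   # feature-level (late) fusion

    def forward(self, spec, img):
        fused = torch.cat([self.spectral(spec), self.rgb(img)], dim=1)
        return self.head(fused)

model = FusionNet()
spec = torch.randn(8, 224)           # batch of 8 spectra (placeholder)
img = torch.randn(8, 3, 64, 64)      # matching RGB patches (placeholder)
logits = model(spec, img)             # -> (8, 3)
```

This concatenation is the simplest fusion strategy; the alignment problem noted above arises because each spectrum and RGB patch must correspond to the same plant region, and more elaborate schemes replace the concatenation with attention-based cross-modal fusion.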

6.2. Constructing a High-Quality Dataset

Although most current spectral disease identification methods are based on traditional machine learning, deep learning represents the primary research direction for the future, so constructing high-quality datasets remains an inevitable challenge. While data augmentation and transfer learning have, to some extent, met the needs of disease identification, an increasing number of new data generation methods are worth exploring as artificial intelligence advances. Generative Adversarial Networks (GANs) are a good choice: they can be used for super-resolution reconstruction, improving the quality of collected data and potentially enhancing model performance [159,160], and they can also generate crop disease images to enrich datasets. The rapid development of diffusion models in recent years also cannot be ignored [161,162]; data generation with diffusion models has already been applied in agriculture [163,164], so utilizing diffusion models to generate crop disease data is also feasible.
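As a hedged illustration of diffusion-based augmentation, the sketch below uses the Hugging Face diffusers library to sample synthetic disease images from a text prompt. The model ID and prompt are illustrative choices; in practice, a model fine-tuned on real disease imagery (and expert review of the outputs) would be needed before the samples could responsibly augment a training set.

```python
# Illustrative diffusion-based augmentation sketch with the diffusers library.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # example public checkpoint
    torch_dtype=torch.float16,
).to("cuda")

prompt = "close-up photo of a tomato leaf with early blight lesions, field background"
images = pipe(prompt, num_images_per_prompt=4).images   # 4 synthetic samples
for i, im in enumerate(images):
    im.save(f"synthetic_blight_{i}.png")
```

Synthetic samples generated this way are only useful to the extent that they match the real disease distribution, which is why fine-tuning on genuine field imagery and careful validation remain essential.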

6.3. Conduct Research on Disease Identification Using Video Modality

Instead of having farmers conduct field inspections in person or using drones for patrols, installing cameras in the field for round-the-clock, all-weather disease identification may be a superior approach. Research on video-based disease identification methods is indispensable for such camera-based systems, and it may also drive the development of rain and fog removal technologies, enhancing the universal applicability of camera-based identification. Looking beyond specific sensing modalities, recent studies indicate a clear trend toward large-scale pre-trained and cross-modal learning frameworks for intelligent crop disease diagnosis. Recent reviews highlight that integrating deep representation learning with broader contextual information can significantly improve generalization under limited labeled data [27,165]. In addition, emerging cross-modal architectures, such as hybrid CNN–GNN models with attention mechanisms, demonstrate the potential of combining visual perception with structured agronomic knowledge for more interpretable disease diagnosis [166]. Together with advances in explainable and human-centered AI, these developments point toward more interactive and knowledge-aware crop protection systems in future precision agriculture.

Author Contributions

Conceptualization, H.Z.; methodology, H.Z.; formal analysis, H.W.; investigation, H.W.; resources, H.W.; writing—original draft preparation, H.Z.; writing—review and editing, Y.Q. and H.D.; visualization, H.W. and H.D.; supervision, Y.Q.; funding acquisition, Y.Q. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Excellent Youth Foundation of Xinjiang Uygur Autonomous Region of China (2023D01E01), Outstanding Young Talent Foundation of Xinjiang Uygur Autonomous Region of China (2023TSYCCX0043), Tianshan Innovation Team Program of Xinjiang Uygur Autonomous Region of China (2023D14012), Finance science and technology project of Xinjiang Uygur Autonomous Region (2023B01029-1), National Natural Science Foundation of China (62266043).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

Nomenclature

NN: Neural Network
GBRT: Gradient Boosting Regression Tree
LSVC: Linear Support Vector Classifier
NB: Naive Bayes
PLSR: Partial Least Squares Regression
GWO-ELM: Grey Wolf Optimization-Extreme Learning Machine
BPNN: Backpropagation Neural Network
PSO-BP: Particle Swarm Optimization-Backpropagation Neural Network
DT: Decision Trees
SVM: Support Vector Machine
PLS-DA: Partial Least Squares-Discriminant Analysis
KNN: K-Nearest Neighbors
LDA: Linear Discriminant Analysis
RF: Random Forest
LS-SVM: Least Squares-Support Vector Machine
GPR: Gaussian Process Regression
AdaBoost: Adaptive Boosting
LR: Logistic Regression
XGBoost: Extreme Gradient Boosting

References

  1. Legrand, N. War in Ukraine: The rational “wait-and-see” mode of global food markets. Appl. Econ. Perspect. Policy 2023, 45, 626–644.
  2. da Silveira, F.; Barbedo, J.G.A.; da Silva, S.L.C.; Amaral, F.G. Proposal for a framework to manage the barriers that hinder the development of agriculture 4.0 in the agricultural production chain. Comput. Electron. Agric. 2023, 214, 108281.
  3. Shao, Y.; Guan, X.; Xuan, G.; Gao, F.; Feng, W.; Gao, G.; Wang, Q.; Huang, X.; Li, J. GTCBS-YOLOv5s: A lightweight model for weed species identification in paddy fields. Comput. Electron. Agric. 2023, 215, 108461.
  4. Yang, R.; Zhou, J.; Lu, X.; Shen, J.; Chen, H.; Chen, M.; He, Y.; Liu, F. A robust rice yield estimation framework developed by grading modeling and normalized weight decision-making strategy using UAV imaging technology. Comput. Electron. Agric. 2023, 215, 108417.
  5. Dai, G.; Tian, Z.; Fan, J.; Sunil, C.; Dewi, C. DFN-PSAN: Multi-level deep information feature fusion extraction network for interpretable plant disease classification. Comput. Electron. Agric. 2024, 216, 108481.
  6. Mishra, S.; Volety, D.R.; Bohra, N.; Alfarhood, S.; Safran, M. A smart and sustainable framework for millet crop monitoring equipped with disease detection using enhanced predictive intelligence. Alex. Eng. J. 2023, 83, 298–306.
  7. Margni, M.; Rossier, D.; Crettaz, P.; Jolliet, O. Life cycle impact assessment of pesticides on human health and ecosystems. Agric. Ecosyst. Environ. 2002, 93, 379–392.
  8. Gao, C.; Guo, W.; Yang, C.; Gong, Z.; Yue, J.; Fu, Y.; Feng, H. A fast and lightweight detection model for wheat fusarium head blight spikes in natural environments. Comput. Electron. Agric. 2024, 216, 108484.
  9. Iqbal, Z.; Khan, M.A.; Sharif, M.; Shah, J.H.; ur Rehman, M.H.; Javed, K. An automated detection and classification of citrus plant diseases using image processing techniques: A review. Comput. Electron. Agric. 2018, 153, 12–32.
  10. Upadhyay, S.K.; Kumar, A. Deep learning and computer vision in plant disease detection: A comprehensive review of techniques, models, and trends in precision agriculture. Artif. Intell. Rev. 2025, 58, 1–55.
  11. Noon, S.K.; Amjad, M.; Qureshi, M.A.; Mannan, A. Use of deep learning techniques for identification of plant leaf stresses: A review. Sustain. Comput. Inform. Syst. 2020, 28, 100443.
  12. Abade, A.; Ferreira, P.A.; de Barros Vidal, F. Plant diseases recognition on images using convolutional neural networks: A systematic review. Comput. Electron. Agric. 2021, 185, 106125.
  13. Terentev, A.; Dolzhenko, V.; Fedotov, A.; Eremenko, D. Current state of hyperspectral remote sensing for early plant disease detection: A review. Sensors 2022, 22, 757.
  14. Thakur, P.S.; Khanna, P.; Sheorey, T.; Ojha, A. Trends in vision-based machine learning techniques for plant disease identification: A systematic review. Expert Syst. Appl. 2022, 208, 118117.
  15. Jafar, A.; Bibi, N.; Naqvi, R.A.; Sadeghi-Niaraki, A.; Jeong, D. Revolutionizing agriculture with artificial intelligence: Plant disease detection methods, applications, and their limitations. Front. Plant Sci. 2024, 15, 1356260.
  16. Hughes, D.; Salathé, M. An open access repository of images on plant health to enable the development of mobile disease diagnostics. arXiv 2015, arXiv:1511.08060.
  17. Barbedo, J.G.A.; Koenigkan, L.V.; Halfeld-Vieira, B.A.; Costa, R.V.; Nechet, K.L.; Godoy, C.V.; Junior, M.L.; Patricio, F.R.A.; Talamini, V.; Chitarra, L.G.; et al. Annotated plant pathology databases for image-based detection and recognition of diseases. IEEE Lat. Am. Trans. 2018, 16, 1749–1757.
  18. Nie, X.; Wang, L.; Ding, H.; Xu, M. Strawberry verticillium wilt detection network based on multi-task learning and attention. IEEE Access 2019, 7, 170003–170011.
  19. Sun, J.; Yang, Y.; He, X.; Wu, X. Northern maize leaf blight detection under complex field environment based on deep learning. IEEE Access 2020, 8, 33679–33688.
  20. Thapa, R.; Snavely, N.; Belongie, S.; Khan, A. The plant pathology 2020 challenge dataset to classify foliar disease of apples. arXiv 2020, arXiv:2004.11958.
  21. Goyal, L.; Sharma, C.M.; Singh, A.; Singh, P.K. Leaf and spike wheat disease detection & classification using an improved deep convolutional architecture. Inform. Med. Unlocked 2021, 25, 100642.
  22. Singh, D.; Jain, N.; Jain, P.; Kayal, P.; Kumawat, S.; Batra, N. PlantDoc: A dataset for visual plant disease detection. In Proceedings of the 7th ACM IKDD CoDS and 25th COMAD; Association for Computing Machinery: New York, NY, USA, 2020; pp. 249–253.
  23. Sara, U.; Rajbongshi, A.; Shakil, R.; Akter, B.; Sazzad, S.; Uddin, M.S. An extensive sunflower dataset representation for successful identification and classification of sunflower diseases. Data Brief 2022, 42, 108043.
  24. Chouhan, S.S.; Singh, U.P.; Kaul, A.; Jain, S. A data repository of leaf images: Practice towards plant conservation with plant pathology. In Proceedings of the 2019 4th International Conference on Information Systems and Computer Networks (ISCON); IEEE: New York, NY, USA, 2019; pp. 700–707.
  25. Gaci, B.; Abdelghafour, F.; Ryckewaert, M.; Mas-Garcia, S.; Louargant, M.; Verpont, F.; Laloum, Y.; Moronvalle, A.; Bendoula, R.; Roger, J.M. Visible–Near infrared hyperspectral dataset of healthy and infected apple tree leaves images for the monitoring of apple fire blight. Data Brief 2023, 50, 109532. [Google Scholar] [CrossRef] [PubMed]
  26. Han, G.; Asiedu, D.K.P.; Bennin, K.E. Plant disease detection with generative adversarial networks. Heliyon 2025, 11, e43002. [Google Scholar] [CrossRef]
  27. Shafay, M.; Hassan, T.; Owais, M.; Hussain, I.; Khawaja, S.G.; Seneviratne, L.; Werghi, N. Recent advances in plant disease detection: Challenges and opportunities. Plant Methods 2025, 21, 140. [Google Scholar] [CrossRef]
  28. Zhang, X.; Han, L.; Dong, Y.; Shi, Y.; Huang, W.; Han, L.; González-Moreno, P.; Ma, H.; Ye, H.; Sobeih, T. A deep learning-based approach for automated yellow rust disease detection from high-resolution hyperspectral UAV images. Remote Sens. 2019, 11, 1554. [Google Scholar] [CrossRef]
  29. Abbott, J.A. Quality measurement of fruits and vegetables. Postharvest Biol. Technol. 1999, 15, 207–225. [Google Scholar] [CrossRef]
  30. Goyal, S.B.; Malik, V.; Rajawat, A.S.; Khan, M.; Ikram, A.; Alabdullah, B.; Almjally, A. Smart intercropping system to detect leaf disease using hyperspectral imaging and hybrid deep learning for precision agriculture. Front. Plant Sci. 2025, 16, 1662251. [Google Scholar] [CrossRef]
  31. Zhang, H.; Ye, N.; Gong, J.; Xue, H.; Wang, P.; Jiao, B.; Qiao, X. Hyperspectral imaging-based deep learning method for apple quarantine disease detection. Foods 2025, 14, 3246. [Google Scholar] [CrossRef]
  32. Chossegros, M.; Hubbard, A.; Burt, M.; Harrison, R.J.; Nellist, C.F.; Grinberg, N.F. Hyperspectral image analysis for classification of multiple infections in wheat. Plant Methods 2025, 21, 144. [Google Scholar] [CrossRef]
  33. Bi, Q.; Goodman, K.E.; Kaminsky, J.; Lessler, J. What is machine learning? A primer for the epidemiologist. Am. J. Epidemiol. 2019, 188, 2222–2239. [Google Scholar] [CrossRef]
  34. Meshram, V.; Patil, K.; Meshram, V.; Hanchate, D.; Ramkteke, S. Machine learning in agriculture domain: A state-of-art survey. Artif. Intell. Life Sci. 2021, 1, 100010. [Google Scholar] [CrossRef]
  35. Ma, R.; Zhang, N.; Zhang, X.; Bai, T.; Yuan, X.; Bao, H.; He, D.; Sun, W.; He, Y. Cotton Verticillium wilt monitoring based on UAV multispectral-visible multi-source feature fusion. Comput. Electron. Agric. 2024, 217, 108628. [Google Scholar] [CrossRef]
  36. Xie, Y.; Plett, D.; Evans, M.; Garrard, T.; Butt, M.; Clarke, K.; Liu, H. Hyperspectral imaging detects biological stress of wheat for early diagnosis of crown rot disease. Comput. Electron. Agric. 2024, 217, 108571. [Google Scholar] [CrossRef]
  37. Mustafa, G.; Zheng, H.; Khan, I.H.; Zhu, J.; Yang, T.; Wang, A.; Xue, B.; He, C.; Jia, H.; Li, G.; et al. Enhancing fusarium head blight detection in wheat crops using hyperspectral indices and machine learning classifiers. Comput. Electron. Agric. 2024, 218, 108663. [Google Scholar] [CrossRef]
  38. Yang, M.; Kang, X.; Qiu, X.; Ma, L.; Ren, H.; Huang, C.; Zhang, Z.; Lv, X. Method for early diagnosis of verticillium wilt in cotton based on chlorophyll fluorescence and hyperspectral technology. Comput. Electron. Agric. 2024, 216, 108497. [Google Scholar] [CrossRef]
  39. Mustafa, G.; Zheng, H.; Li, W.; Yin, Y.; Wang, Y.; Zhou, M.; Liu, P.; Bilal, M.; Jia, H.; Li, G.; et al. Fusarium head blight monitoring in wheat ears using machine learning and multimodal data from asymptomatic to symptomatic periods. Front. Plant Sci. 2023, 13, 1102341. [Google Scholar] [CrossRef]
  40. Ren, K.; Dong, Y.; Huang, W.; Guo, A.; Jing, X. Monitoring of winter wheat stripe rust by collaborating canopy SIF with wavelet energy coefficients. Comput. Electron. Agric. 2023, 215, 108366. [Google Scholar] [CrossRef]
  41. Sun, H.; Song, X.; Guo, W.; Guo, M.; Mao, Y.; Yang, G.; Feng, H.; Zhang, J.; Feng, Z.; Wang, J.; et al. Potato late blight severity monitoring based on the relief-mRmR algorithm with dual-drone cooperation. Comput. Electron. Agric. 2023, 215, 108438. [Google Scholar] [CrossRef]
  42. Tang, Y.; Yang, J.; Zhuang, J.; Hou, C.; Miao, A.; Ren, J.; Huang, H.; Tan, Z.; Paliwal, J. Early detection of citrus anthracnose caused by Colletotrichum gloeosporioides using hyperspectral imaging. Comput. Electron. Agric. 2023, 214, 108348. [Google Scholar] [CrossRef]
  43. Zhang, J.; Jing, X.; Song, X.; Zhang, T.; Duan, W.; Su, J. Hyperspectral estimation of wheat stripe rust using fractional order differential equations and Gaussian process methods. Comput. Electron. Agric. 2023, 206, 107671. [Google Scholar] [CrossRef]
  44. Zhang, S.; Li, X.; Ba, Y.; Lyu, X.; Zhang, M.; Li, M. Banana fusarium wilt disease detection by supervised and unsupervised methods from UAV-based multispectral imagery. Remote Sens. 2022, 14, 1231. [Google Scholar] [CrossRef]
  45. Almoujahed, M.B.; Rangarajan, A.K.; Whetton, R.L.; Vincke, D.; Eylenbosch, D.; Vermeulen, P.; Mouazen, A.M. Detection of fusarium head blight in wheat under field conditions using a hyperspectral camera and machine learning. Comput. Electron. Agric. 2022, 203, 107456. [Google Scholar] [CrossRef]
  46. Jing, X.; Zou, Q.; Yan, J.; Dong, Y.; Li, B. Remote sensing monitoring of winter wheat stripe rust based on mRMR-XGBoost algorithm. Remote Sens. 2022, 14, 756. [Google Scholar] [CrossRef]
  47. Xiao, D.; Pan, Y.; Feng, J.; Yin, J.; Liu, Y.; He, L. Remote sensing detection algorithm for apple fire blight based on UAV multispectral image. Comput. Electron. Agric. 2022, 199, 107137. [Google Scholar] [CrossRef]
  48. Mustafa, G.; Zheng, H.; Khan, I.H.; Tian, L.; Jia, H.; Li, G.; Cheng, T.; Tian, Y.; Cao, W.; Zhu, Y.; et al. Hyperspectral reflectance proxies to diagnose in-field fusarium head blight in wheat with machine learning. Remote Sens. 2022, 14, 2784. [Google Scholar] [CrossRef]
  49. Xuan, G.; Li, Q.; Shao, Y.; Shi, Y. Early diagnosis and pathogenesis monitoring of wheat powdery mildew caused by blumeria graminis using hyperspectral imaging. Comput. Electron. Agric. 2022, 197, 106921. [Google Scholar] [CrossRef]
  50. Narmilan, A.; Gonzalez, F.; Salgadoe, A.S.A.; Powell, K. Detection of white leaf disease in sugarcane using machine learning techniques over UAV multispectral images. Drones 2022, 6, 230. [Google Scholar] [CrossRef]
  51. Pérez-Roncal, C.; Arazuri, S.; Lopez-Molina, C.; Jarén, C.; Santesteban, L.G.; López-Maestresalas, A. Exploring the potential of hyperspectral imaging to detect Esca disease complex in asymptomatic grapevine leaves. Comput. Electron. Agric. 2022, 196, 106863. [Google Scholar] [CrossRef]
  52. Tian, L.; Xue, B.; Wang, Z.; Li, D.; Yao, X.; Cao, Q.; Zhu, Y.; Cao, W.; Cheng, T. Spectroscopic detection of rice leaf blast infection from asymptomatic to mild stages with integrated machine learning and feature selection. Remote Sens. Environ. 2021, 257, 112350. [Google Scholar] [CrossRef]
  53. Khan, I.H.; Liu, H.; Li, W.; Cao, A.; Wang, X.; Liu, H.; Cheng, T.; Tian, Y.; Zhu, Y.; Cao, W.; et al. Early detection of powdery mildew disease and accurate quantification of its severity using hyperspectral images in wheat. Remote Sens. 2021, 13, 3612. [Google Scholar] [CrossRef]
  54. Rodriguez, J.; Lizarazo, I.; Prieto, F.; Angulo-Morales, V. Assessment of potato late blight from UAV-based multispectral imagery. Comput. Electron. Agric. 2021, 184, 106061. [Google Scholar] [CrossRef]
  55. Xiao, Y.; Dong, Y.; Huang, W.; Liu, L.; Ma, H. Wheat fusarium head blight detection using UAV-based spectral and texture features in optimal window size. Remote Sens. 2021, 13, 2437. [Google Scholar] [CrossRef]
  56. Gao, Z.; Khot, L.R.; Naidu, R.A.; Zhang, Q. Early detection of grapevine leafroll disease in a red-berried wine grape cultivar using hyperspectral imaging. Comput. Electron. Agric. 2020, 179, 105807. [Google Scholar] [CrossRef]
  57. Li, X.; Yang, C.; Huang, W.; Tang, J.; Tian, Y.; Zhang, Q. Identification of cotton root rot by multifeature selection from sentinel-2 images using random forest. Remote Sens. 2020, 12, 3504. [Google Scholar] [CrossRef]
  58. Ardila, C.E.C.; Ramirez, L.A.; Ortiz, F.A.P. Spectral analysis for the early detection of anthracnose in fruits of Sugar Mango (Mangifera indica). Comput. Electron. Agric. 2020, 173, 105357. [Google Scholar] [CrossRef]
  59. Lan, Y.; Huang, Z.; Deng, X.; Zhu, Z.; Huang, H.; Zheng, Z.; Lian, B.; Zeng, G.; Tong, Z. Comparison of machine learning methods for citrus greening detection on UAV multispectral images. Comput. Electron. Agric. 2020, 171, 105234. [Google Scholar] [CrossRef]
  60. Ye, H.; Huang, W.; Huang, S.; Cui, B.; Dong, Y.; Guo, A.; Ren, Y.; Jin, Y. Recognition of banana fusarium wilt based on UAV remote sensing. Remote Sens. 2020, 12, 938. [Google Scholar] [CrossRef]
  61. Van De Vijver, R.; Mertens, K.; Heungens, K.; Somers, B.; Nuyttens, D.; Borra-Serrano, I.; Lootens, P.; Roldán-Ruiz, I.; Vangeyte, J.; Saeys, W. In-field detection of Alternaria solani in potato crops using hyperspectral imaging. Comput. Electron. Agric. 2020, 168, 105106. [Google Scholar] [CrossRef]
  62. Khan, A.; Vibhute, A.D.; Mali, S.; Patil, C.H. A systematic review on hyperspectral imaging technology with a machine and deep learning methodology for agricultural applications. Ecol. Inform. 2022, 69, 101678. [Google Scholar] [CrossRef]
  63. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25. [Google Scholar] [CrossRef]
  64. Chen, F.; Zhang, Y.; Zhang, J.; Liu, L.; Wu, K. Rice false smut detection and prescription map generation in a complex planting environment, with mixed methods, based on near earth remote sensing. Remote Sens. 2022, 14, 945. [Google Scholar] [CrossRef]
  65. Shi, Y.; Han, L.; Kleerekoper, A.; Chang, S.; Hu, T. Novel cropdocnet model for automated potato late blight disease detection from unmanned aerial vehicle-based hyperspectral imagery. Remote Sens. 2022, 14, 396. [Google Scholar] [CrossRef]
  66. Kuswidiyanto, L.W.; Wang, P.; Noh, H.H.; Jung, H.Y.; Jung, D.H.; Han, X. Airborne hyperspectral imaging for early diagnosis of kimchi cabbage downy mildew using 3D-ResNet and leaf segmentation. Comput. Electron. Agric. 2023, 214, 108312. [Google Scholar] [CrossRef]
  67. Deng, J.; Zhang, X.; Yang, Z.; Zhou, C.; Wang, R.; Zhang, K.; Lv, X.; Yang, L.; Wang, Z.; Li, P.; et al. Pixel-level regression for UAV hyperspectral images: Deep learning-based quantitative inverse of wheat stripe rust disease index. Comput. Electron. Agric. 2023, 215, 108434. [Google Scholar] [CrossRef]
  68. Liu, Y.; Su, J.; Zheng, Z.; Liu, D.; Song, Y.; Fang, Y.; Yang, P.; Su, B. GLDCNet: A novel convolutional neural network for grapevine leafroll disease recognition using UAV-based imagery. Comput. Electron. Agric. 2024, 218, 108668. [Google Scholar] [CrossRef]
  69. Liu, Z.; Feng, Y.; Li, R.; Zhang, S.; Zhang, L.; Cui, G.; Ahmad, A.M.; Fu, L.; Cui, Y. Improved kiwifruit detection using VGG16 with RGB and NIR information fusion. In Proceedings of the 2019 ASABE Annual International Meeting, American Society of Agricultural and Biological Engineers, Boston, MA, USA, 7–10 July 2019; p. 1. [Google Scholar]
  70. Bu, Y.; Hu, J.; Chen, C.; Bai, S.; Chen, Z.; Hu, T.; Zhang, G.; Liu, N.; Cai, C.; Li, Y.; et al. ResNet incorporating the fusion data of RGB & hyperspectral images improves classification accuracy of vegetable soybean freshness. Sci. Rep. 2024, 14, 2568. [Google Scholar] [CrossRef]
  71. Qin, S.; Ding, Y.; Zhou, T.; Zhai, M.; Zhang, Z.; Fan, M.; Lv, X.; Zhang, Z.; Zhang, L. “Image-Spectral” fusion monitoring of small cotton samples nitrogen content based on improved deep forest. Comput. Electron. Agric. 2024, 221, 109002. [Google Scholar] [CrossRef]
  72. Tian, X.; Yao, J.; Yu, H.; Wang, W.; Huang, W. Early contamination warning of Aflatoxin B1 in stored maize based on the dynamic change of catalase activity and data fusion of hyperspectral images. Comput. Electron. Agric. 2024, 217, 108615. [Google Scholar] [CrossRef]
  73. Heidarian Dehkordi, R.; El Jarroudi, M.; Kouadio, L.; Meersmans, J.; Beyer, M. Monitoring wheat leaf rust and stripe rust in winter wheat using high-resolution UAV-based red-green-blue imagery. Remote Sens. 2020, 12, 3696. [Google Scholar] [CrossRef]
  74. Takakura, T.; Shimomachi, T.; Takemassa, T. Non-destructive detection of plant health. In Proceedings of the International Symposium on Design and Environmental Control of Tropical and Subtropical Greenhouses 578, Taichung, Taiwan, 15–18 April 2001; pp. 303–306. [Google Scholar]
  75. Bao, W.; Huang, C.; Hu, G.; Su, B.; Yang, X. Detection of Fusarium head blight in wheat using UAV remote sensing based on parallel channel space attention. Comput. Electron. Agric. 2024, 217, 108630. [Google Scholar] [CrossRef]
  76. Uddin, M.S.; Mazumder, M.K.A.; Prity, A.J.; Mridha, M.; Alfarhood, S.; Safran, M.; Che, D. Cauli-Det: Enhancing cauliflower disease detection with modified YOLOv8. Front. Plant Sci. 2024, 15, 1373590. [Google Scholar] [CrossRef] [PubMed]
  77. Liu, Y.; Yu, Q.; Geng, S. Real-time and lightweight detection of grape diseases based on Fusion Transformer YOLO. Front. Plant Sci. 2024, 15, 1269423. [Google Scholar] [CrossRef] [PubMed]
  78. Lu, J.; Lu, B.; Ma, W.; Sun, Y. EAIS-Former: An efficient and accurate image segmentation method for fruit leaf diseases. Comput. Electron. Agric. 2024, 218, 108739. [Google Scholar] [CrossRef]
  79. Mao, R.; Zhang, Y.; Wang, Z.; Hao, X.; Zhu, T.; Gao, S.; Hu, X. DAE-Mask: A novel deep-learning-based automatic detection model for in-field wheat diseases. Precis. Agric. 2024, 25, 785–810. [Google Scholar] [CrossRef]
  80. Li, W.; Yu, X.; Chen, C.; Gong, Q. Identification and localization of grape diseased leaf images captured by UAV based on CNN. Comput. Electron. Agric. 2023, 214, 108277. [Google Scholar] [CrossRef]
  81. Zheng, J.; Li, K.; Wu, W.; Ruan, H. RepDI: A light-weight CPU network for apple leaf disease identification. Comput. Electron. Agric. 2023, 212, 108122. [Google Scholar] [CrossRef]
  82. Li, G.; Jiao, L.; Chen, P.; Liu, K.; Wang, R.; Dong, S.; Kang, C. Spatial convolutional self-attention-based transformer module for strawberry disease identification under complex background. Comput. Electron. Agric. 2023, 212, 108121. [Google Scholar] [CrossRef]
  83. Zhang, D.; Huang, Y.; Wu, C.; Ma, M. Detecting tomato disease types and degrees using multi-branch and destruction learning. Comput. Electron. Agric. 2023, 213, 108244. [Google Scholar] [CrossRef]
  84. Zhang, Y.; Zhou, G.; Chen, A.; He, M.; Li, J.; Hu, Y. A precise apple leaf diseases detection using BCTNet under unconstrained environments. Comput. Electron. Agric. 2023, 212, 108132. [Google Scholar] [CrossRef]
  85. Bao, W.; Zhu, Z.; Hu, G.; Zhou, X.; Zhang, D.; Yang, X. UAV remote sensing detection of tea leaf blight based on DDMA-YOLO. Comput. Electron. Agric. 2023, 205, 107637. [Google Scholar] [CrossRef]
  86. He, J.; Liu, T.; Li, L.; Hu, Y.; Zhou, G. MFaster r-CNN for maize leaf diseases detection based on machine vision. Arab. J. Sci. Eng. 2023, 48, 1437–1449. [Google Scholar] [CrossRef]
  87. Zhu, S.; Ma, W.; Wang, J.; Yang, M.; Wang, Y.; Wang, C. EADD-YOLO: An efficient and accurate disease detector for apple leaf using improved lightweight YOLOv5. Front. Plant Sci. 2023, 14, 1120724. [Google Scholar] [CrossRef] [PubMed]
  88. Zhao, Y.; Yang, Y.; Xu, X.; Sun, C. Precision detection of crop diseases based on improved YOLOv5 model. Front. Plant Sci. 2023, 13, 1066835. [Google Scholar] [CrossRef] [PubMed]
  89. Lin, J.; Yu, D.; Pan, R.; Cai, J.; Liu, J.; Zhang, L.; Wen, X.; Peng, X.; Cernava, T.; Oufensou, S.; et al. Improved YOLOX-Tiny network for detection of tobacco brown spot disease. Front. Plant Sci. 2023, 14, 1135105. [Google Scholar] [CrossRef] [PubMed]
  90. Khan, F.; Zafar, N.; Tahir, M.N.; Aqib, M.; Waheed, H.; Haroon, Z. A mobile-based system for maize plant leaf disease detection and classification using deep learning. Front. Plant Sci. 2023, 14, 1079366. [Google Scholar] [CrossRef]
  91. Xue, Z.; Xu, R.; Bai, D.; Lin, H. YOLO-tea: A tea disease detection model improved by YOLOv5. Forests 2023, 14, 415. [Google Scholar] [CrossRef]
  92. Zhao, S.; Liu, J.; Wu, S. Multiple disease detection method for greenhouse-cultivated strawberry based on multiscale feature fusion Faster R_CNN. Comput. Electron. Agric. 2022, 199, 107176. [Google Scholar] [CrossRef]
  93. Ji, M.; Wu, Z. Automatic detection and severity analysis of grape black measles disease based on deep learning and fuzzy logic. Comput. Electron. Agric. 2022, 193, 106718. [Google Scholar] [CrossRef]
  94. Li, J.; Qiao, Y.; Liu, S.; Zhang, J.; Yang, Z.; Wang, M. An improved YOLOv5-based vegetable disease detection method. Comput. Electron. Agric. 2022, 202, 107345. [Google Scholar] [CrossRef]
  95. Li, D.; Ahmed, F.; Wu, N.; Sethi, A.I. Yolo-JD: A Deep Learning Network for jute diseases and pests detection from images. Plants 2022, 11, 937. [Google Scholar] [CrossRef]
  96. Zhang, Y.; Ma, B.; Hu, Y.; Li, C.; Li, Y. Accurate cotton diseases and pests detection in complex background based on an improved YOLOX model. Comput. Electron. Agric. 2022, 203, 107484. [Google Scholar] [CrossRef]
  97. Qiu, R.Z.; Chen, S.P.; Chi, M.X.; Wang, R.B.; Huang, T.; Fan, G.C.; Zhao, J.; Weng, Q.Y. An automatic identification system for citrus greening disease (Huanglongbing) using a YOLO convolutional neural network. Front. Plant Sci. 2022, 13, 1002606. [Google Scholar] [CrossRef] [PubMed]
  98. Qi, J.; Liu, X.; Liu, K.; Xu, F.; Guo, H.; Tian, X.; Li, M.; Bao, Z.; Li, Y. An improved YOLOv5 model based on visual attention mechanism: Application to recognition of tomato virus disease. Comput. Electron. Agric. 2022, 194, 106780. [Google Scholar] [CrossRef]
  99. Liu, S.; Qiao, Y.; Li, J.; Zhang, H.; Zhang, M.; Wang, M. An improved lightweight network for real-time detection of apple leaf diseases in natural scenes. Agronomy 2022, 12, 2363. [Google Scholar] [CrossRef]
  100. Kaur, P.; Harnal, S.; Gautam, V.; Singh, M.P.; Singh, S.P. An approach for characterization of infected area in tomato leaf disease based on deep learning and object detection technique. Eng. Appl. Artif. Intell. 2022, 115, 105210. [Google Scholar] [CrossRef]
  101. Roy, A.M.; Bose, R.; Bhaduri, J. A fast accurate fine-grain object detection model based on YOLOv4 deep neural network. Neural Comput. Appl. 2022, 34, 3895–3921. [Google Scholar] [CrossRef]
  102. Zhang, Z.; Qiao, Y.; Guo, Y.; He, D. Deep learning based automatic grape downy mildew detection. Front. Plant Sci. 2022, 13, 872107. [Google Scholar] [CrossRef]
  103. Xu, Y.; Chen, Q.; Kong, S.; Xing, L.; Wang, Q.; Cong, X.; Zhou, Y. Real-time object detection method of melon leaf diseases under complex background in greenhouse. J. Real-Time Image Process. 2022, 19, 985–995. [Google Scholar] [CrossRef]
  104. Li, Y.; Wang, J.; Wu, H.; Yu, Y.; Sun, H.; Zhang, H. Detection of powdery mildew on strawberry leaves based on DAC-YOLOv4 model. Comput. Electron. Agric. 2022, 202, 107418. [Google Scholar] [CrossRef]
  105. Li, S.; Li, K.; Qiao, Y.; Zhang, L. A multi-scale cucumber disease detection method in natural scenes based on YOLOv5. Comput. Electron. Agric. 2022, 202, 107363. [Google Scholar] [CrossRef]
  106. Fang, S.; Wang, Y.; Zhou, G.; Chen, A.; Cai, W.; Wang, Q.; Hu, Y.; Li, L. Multi-channel feature fusion networks with hard coordinate attention mechanism for maize disease identification under complex backgrounds. Comput. Electron. Agric. 2022, 203, 107486. [Google Scholar] [CrossRef]
  107. Wang, J.; Yu, L.; Yang, J.; Dong, H. DBA_SSD: A novel end-to-end object detection algorithm applied to plant disease detection. Information 2021, 12, 474. [Google Scholar] [CrossRef]
  108. Liu, C.; Zhu, H.; Guo, W.; Han, X.; Chen, C.; Wu, H. EFDet: An efficient detection method for cucumber disease under natural complex environments. Comput. Electron. Agric. 2021, 189, 106378. [Google Scholar] [CrossRef]
  109. Bao, W.; Yang, X.; Liang, D.; Hu, G.; Yang, X. Lightweight convolutional neural network model for field wheat ear disease identification. Comput. Electron. Agric. 2021, 189, 106367. [Google Scholar] [CrossRef]
  110. Zhang, K.; Wu, Q.; Chen, Y. Detecting soybean leaf disease from synthetic image using multi-feature fusion faster R-CNN. Comput. Electron. Agric. 2021, 183, 106064. [Google Scholar] [CrossRef]
  111. Sun, H.; Xu, H.; Liu, B.; He, D.; He, J.; Zhang, H.; Geng, N. MEAN-SSD: A novel real-time detector for apple leaf diseases using improved light-weight convolutional neural networks. Comput. Electron. Agric. 2021, 189, 106379. [Google Scholar] [CrossRef]
  112. Zhu, J.; Cheng, M.; Wang, Q.; Yuan, H.; Cai, Z. Grape leaf black rot detection based on super-resolution image enhancement and deep learning. Front. Plant Sci. 2021, 12, 695749. [Google Scholar] [CrossRef]
  113. Liu, J.; Wang, X. Tomato diseases and pests detection based on improved Yolo V3 convolutional neural network. Front. Plant Sci. 2020, 11, 521544. [Google Scholar] [CrossRef]
  114. Mi, Z.; Zhang, X.; Su, J.; Han, D.; Su, B. Wheat stripe rust grading by deep learning with attention mechanism and images from mobile devices. Front. Plant Sci. 2020, 11, 558126. [Google Scholar] [CrossRef]
  115. Chen, X.; Zhou, G.; Chen, A.; Yi, J.; Zhang, W.; Hu, Y. Identification of tomato leaf diseases based on combination of ABCK-BWTR and B-ARNet. Comput. Electron. Agric. 2020, 178, 105730. [Google Scholar] [CrossRef]
  116. Xie, X.; Ma, Y.; Liu, B.; He, J.; Li, S.; Wang, H. A deep-learning-based real-time detector for grape leaf diseases using improved convolutional neural networks. Front. Plant Sci. 2020, 11, 751. [Google Scholar] [CrossRef] [PubMed]
  117. Liu, J.; Wang, X. Early recognition of tomato gray leaf spot disease based on MobileNetv2-YOLOv3 model. Plant Methods 2020, 16, 1–16. [Google Scholar] [CrossRef] [PubMed]
  118. Zhang, Y.; Song, C.; Zhang, D. Deep learning-based object detection improvement for tomato disease. IEEE Access 2020, 8, 56607–56614. [Google Scholar] [CrossRef]
  119. Wang, A.; Chen, H.; Liu, L.; Chen, K.; Lin, Z.; Han, J.; Ding, G. YOLOv10: Real-Time End-to-End Object Detection. arXiv 2024, arXiv:2405.14458. [Google Scholar]
  120. Alanazi, R. A YOLOv10-based Approach for Banana Leaf Disease Detection. Eng. Technol. Appl. Sci. Res. 2025, 15, 23522–23526. [Google Scholar] [CrossRef]
  121. Murugavalli, S.; Gopi, R. Plant leaf disease detection using vision transformers for precision agriculture. Sci. Rep. 2025, 15, 22361. [Google Scholar] [CrossRef]
  122. Zhang, Z.; Li, X.; Xie, Y.; Chen, J. VMamba for Plant Leaf Disease Identification: Design and experiment. Front. Plant Sci. 2025, 16, 1515021. [Google Scholar] [CrossRef]
  123. Cap, Q.; Suwa, K.; Fujita, E.; Uga, H.; Kagiwada, S.; Iyatomi, H. An end-to-end practical plant disease diagnosis system for wide-angle cucumber images. Int. J. Eng. Technol. 2018, 7, 106–111. [Google Scholar] [CrossRef]
  124. Zhang, D.Y.; Luo, H.S.; Wang, D.Y.; Zhou, X.G.; Li, W.F.; Gu, C.Y.; Zhang, G.; He, F.M. Assessment of the levels of damage caused by Fusarium head blight in wheat using an improved YoloV5 method. Comput. Electron. Agric. 2022, 198, 107086. [Google Scholar] [CrossRef]
  125. Barbedo, J.G. Deep learning applied to plant pathology: The problem of data representativeness. Trop. Plant Pathol. 2022, 47, 85–94. [Google Scholar] [CrossRef]
  126. Pan, S.J.; Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 2009, 22, 1345–1359. [Google Scholar] [CrossRef]
  127. Too, E.C.; Yujian, L.; Njuki, S.; Yingchun, L. A comparative study of fine-tuning deep learning models for plant disease identification. Comput. Electron. Agric. 2019, 161, 272–279. [Google Scholar] [CrossRef]
  128. Feng, L.; Wu, B.; He, Y.; Zhang, C. Hyperspectral imaging combined with deep transfer learning for rice disease detection. Front. Plant Sci. 2021, 12, 693521. [Google Scholar] [CrossRef] [PubMed]
  129. Liu, G.; Peng, J.; El-Latif, A.A.A. Sk-mobilenet: A lightweight adaptive network based on complex deep transfer learning for plant disease recognition. Arab. J. Sci. Eng. 2023, 48, 1661–1675. [Google Scholar] [CrossRef]
  130. Jin, X.; Xiong, J.; Rao, Y.; Zhang, T.; Ba, W.; Gu, S.; Zhang, X.; Lu, J. TranNas-NirCR: A method for improving the diagnosis of asymptomatic wheat scab with transfer learning and neural architecture search. Comput. Electron. Agric. 2023, 213, 108271. [Google Scholar] [CrossRef]
  131. Wang, X.; Pan, T.; Qu, J.; Sun, Y.; Miao, L.; Zhao, Z.; Li, Y.; Zhang, Z.; Zhao, H.; Hu, Z.; et al. Diagnosis of soybean bacterial blight progress stage based on deep learning in the context of data-deficient. Comput. Electron. Agric. 2023, 212, 108170. [Google Scholar] [CrossRef]
  132. Jung, Y.; Byun, S.; Kim, B.; Amin, S.U.; Seo, S. Harnessing synthetic data for enhanced detection of Pine Wilt Disease: An image classification approach. Comput. Electron. Agric. 2024, 218, 108690. [Google Scholar] [CrossRef]
  133. Li, H.; Huang, L.; Ruan, C.; Huang, W.; Wang, C.; Zhao, J. A dual-branch neural network for crop disease recognition by integrating frequency domain and spatial domain information. Comput. Electron. Agric. 2024, 219, 108843. [Google Scholar] [CrossRef]
  134. Shrivastava, A.; Gupta, A.; Girshick, R. Training region-based object detectors with online hard example mining. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 761–769. [Google Scholar] [CrossRef]
  135. Chen, K.; Chen, Y.; Han, C.; Sang, N.; Gao, C. Hard sample mining makes person re-identification more efficient and accurate. Neurocomputing 2020, 382, 259–267. [Google Scholar] [CrossRef]
  136. Zhang, M.; Chen, Y.; Zhang, B.; Pang, K.; Lv, B. Recognition of pest based on faster rcnn. In Proceedings of the Signal and Information Processing, Networking and Computers: Proceedings of the 6th International Conference on Signal and Information Processing, Networking and Computers (ICSINC); Springer: Berlin/Heidelberg, Germany, 2020; pp. 62–69. [Google Scholar] [CrossRef]
  137. Kim, B.; Han, Y.K.; Park, J.H.; Lee, J. Improved vision-based detection of strawberry diseases using a deep neural network. Front. Plant Sci. 2021, 11, 559172. [Google Scholar] [CrossRef]
  138. Lin, P.; Zhang, H.; Zhao, F.; Wang, X.; Liu, H.; Chen, Y. Boosted Mask R-CNN algorithm for accurately detecting strawberry plant canopies in the fields from low-altitude drone images. Food Sci. Technol. 2022, 42, e95922. [Google Scholar] [CrossRef]
  139. Dai, F.; Wang, F.; Yang, D.; Lin, S.; Chen, X.; Lan, Y.; Deng, X. Detection method of citrus psyllids with field high-definition camera based on improved cascade region-based convolution neural networks. Front. Plant Sci. 2022, 12, 816272. [Google Scholar] [CrossRef] [PubMed]
  140. Wang, K.; Peng, Y.; Huang, H.; Hu, Y.; Li, S. Mining hard samples locally and globally for improved speech separation. In Proceedings of the ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); IEEE: New York, NY, USA, 2022; pp. 6037–6041. [Google Scholar] [CrossRef]
  141. Cap, Q.H.; Fukuda, A.; Kagiwada, S.; Uga, H.; Iwasaki, N.; Iyatomi, H. Towards robust plant disease diagnosis with hard-sample re-mining strategy. Comput. Electron. Agric. 2023, 215, 108375. [Google Scholar] [CrossRef]
  142. Tian, Y.; Sun, C.; Poole, B.; Krishnan, D.; Schmid, C.; Isola, P. What makes for good views for contrastive learning? Adv. Neural Inf. Process. Syst. 2020, 33, 6827–6839. [Google Scholar] [CrossRef]
  143. Figueroa-Mata, G.; Mata-Montero, E. Using a convolutional siamese network for image-based plant species identification with small datasets. Biomimetics 2020, 5, 8. [Google Scholar] [CrossRef]
  144. Shang, Z.; He, J.; Wang, D. Image recognition of plant diseases based on Siamese Networks. In Proceedings of the 2022 China Automation Congress (CAC); IEEE: New York, NY, USA, 2022; pp. 5328–5332. [Google Scholar] [CrossRef]
  145. Janarthan, S.; Thuseethan, S.; Rajasegarar, S.; Yearwood, J. P2OP—Plant Pathology on Palms: A deep learning-based mobile solution for in-field plant disease detection. Comput. Electron. Agric. 2022, 202, 107371. [Google Scholar] [CrossRef]
  146. Zhang, S.; Wang, D.; Yu, C. Apple leaf disease recognition method based on Siamese dilated Inception network with less training samples. Comput. Electron. Agric. 2023, 213, 108188. [Google Scholar] [CrossRef]
  147. Fang, U.; Li, J.; Lu, X.; Gao, L.; Ali, M.; Xiang, Y. Self-supervised cross-iterative clustering for unlabeled plant disease images. Neurocomputing 2021, 456, 36–48. [Google Scholar] [CrossRef]
  148. Liu, H.; Zhan, Y.; Xia, H.; Mao, Q.; Tan, Y. Self-supervised transformer-based pre-training method using latent semantic masking auto-encoder for pest and disease classification. Comput. Electron. Agric. 2022, 203, 107448. [Google Scholar] [CrossRef]
  149. Zhao, R.; Zhu, Y.; Li, Y. CLA: A self-supervised contrastive learning method for leaf disease identification with domain adaptation. Comput. Electron. Agric. 2023, 211, 107967. [Google Scholar] [CrossRef]
  150. Dai, Y.; Gieseke, F.; Oehmcke, S.; Wu, Y.; Barnard, K. Attentional feature fusion. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Virtual, 5–9 January 2021; pp. 3560–3569. [Google Scholar] [CrossRef]
  151. Zhang, M.; Xu, S.; Song, W.; He, Q.; Wei, Q. Lightweight underwater object detection based on yolo v4 and multi-scale attentional feature fusion. Remote Sens. 2021, 13, 4706. [Google Scholar] [CrossRef]
  152. Salman, Z.; Muhammad, A.; Han, D. Plant disease classification in the wild using vision transformer-based models. Front. Plant Sci. 2025, 16, 1522985. [Google Scholar] [CrossRef] [PubMed]
  153. Aboelenin, S.; Elbasheer, F.A.; Eltoukhy, M.M.; El-Hady, W.M.; Hosny, K.M. A hybrid framework for plant leaf disease detection and classification using convolutional neural networks and vision transformer. Complex Intell. Syst. 2025, 11, 142. [Google Scholar] [CrossRef]
  154. Rashid, M.R.A.; Korim, M.A.E.; Hasan, M.; Ali, M.S.; Islam, M.M.; Jabid, T.; Islam, M. An ensemble learning framework with explainable AI for interpretable leaf disease detection. Array 2025, 26, 100386. [Google Scholar] [CrossRef]
  155. Van de Ven, G.M.; Tuytelaars, T.; Tolias, A.S. Three types of incremental learning. Nat. Mach. Intell. 2022, 4, 1185–1197. [Google Scholar] [CrossRef] [PubMed]
  156. Zhang, Z.; Hoai, M. Object detection with self-supervised scene adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 21589–21599. [Google Scholar]
  157. Zhang, N.; Wu, H.; Zhu, H.; Deng, Y.; Han, X. Tomato disease classification and identification method based on multimodal fusion deep learning. Agriculture 2022, 12, 2014. [Google Scholar] [CrossRef]
  158. Cao, Y.; Chen, L.; Yuan, Y.; Sun, G. Cucumber disease recognition with small samples using image-text-label-based multi-modal language model. Comput. Electron. Agric. 2023, 211, 107993. [Google Scholar] [CrossRef]
  159. Lu, Y.; Chen, D.; Olaniyi, E.; Huang, Y. Generative adversarial networks (GANs) for image augmentation in agriculture: A systematic review. Comput. Electron. Agric. 2022, 200, 107208. [Google Scholar] [CrossRef]
  160. Qi, H.; Huang, Z.; Jin, B.; Tang, Q.; Jia, L.; Zhao, G.; Cao, D.; Sun, Z.; Zhang, C. SAM-GAN: An improved DCGAN for rice seed viability determination using near-infrared hyperspectral imaging. Comput. Electron. Agric. 2024, 216, 108473. [Google Scholar] [CrossRef]
  161. Abbas, A.; Jain, S.; Gour, M.; Vankudothu, S. Tomato plant disease detection using transfer learning with C-GAN synthetic images. Comput. Electron. Agric. 2021, 187, 106279. [Google Scholar] [CrossRef]
  162. Li, X.; Li, X.; Zhang, M.; Dong, Q.; Zhang, G.; Wang, Z.; Wei, P. SugarcaneGAN: A novel dataset generating approach for sugarcane leaf diseases based on lightweight hybrid CNN-Transformer network. Comput. Electron. Agric. 2024, 219, 108762. [Google Scholar] [CrossRef]
  163. Moreno, H.; Gómez, A.; Altares-López, S.; Ribeiro, A.; Andújar, D. Analysis of Stable Diffusion-derived fake weeds performance for training Convolutional Neural Networks. Comput. Electron. Agric. 2023, 214, 108324. [Google Scholar] [CrossRef]
  164. Chen, D.; Qi, X.; Zheng, Y.; Lu, Y.; Huang, Y.; Li, Z. Synthetic data augmentation by diffusion probabilistic models to enhance weed recognition. Comput. Electron. Agric. 2024, 216, 108517. [Google Scholar] [CrossRef]
  165. Unknown, A. Revolutionizing crop disease detection with computational deep learning: A comprehensive review. Environ. Monit. Assess. 2024, 196, 302. [Google Scholar] [CrossRef]
  166. Jahin, M.A.; Shahriar, S.; Mridha, M.F.; Hossen, M.J.; Dey, N. Soybean disease detection via interpretable hybrid CNN-GNN with cross-modal attention. arXiv 2025, arXiv:2503.01284. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
