Agriculture
  • Feature Paper
  • Editor’s Choice
  • Review
  • Open Access

22 December 2021

On Using Artificial Intelligence and the Internet of Things for Crop Disease Detection: A Contemporary Survey

Networking Embedded Systems and Telecommunications (NEST) Research Group, Engineering Research Laboratory (LRI), Department of Electrical Engineering, National Higher School of Electricity and Mechanics (ENSEM), Hassan II University of Casablanca, Casablanca 8118, Morocco
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Digital Innovations in Agriculture

Abstract

The agricultural sector remains a key contributor to the Moroccan economy, representing about 15% of gross domestic product (GDP). Disease attacks are constant threats to agriculture and cause heavy losses in the country’s economy. Therefore, early detection can mitigate the severity of diseases and protect crops. However, manual disease identification is both time-consuming and error-prone, and requires a thorough knowledge of plant pathogens. Instead, automated methods save both time and effort. This paper presents a contemporary overview of research undertaken over the past decade in the field of disease identification of different crops using machine learning, deep learning, image processing techniques, the Internet of Things, and hyperspectral image analysis. Additionally, a comparative study of several techniques applied to crop disease detection was carried out. Furthermore, this paper discusses the different challenges to be overcome and provides several suggestions to address them. Finally, this research provides a future perspective that promises to be a highly useful and valuable resource for researchers working in the field of crop disease detection.

1. Introduction

Agriculture is the mainstay of many countries. Due to population growth, the demand for food is steadily increasing. To satisfy this pressing need, it is necessary to increase agricultural productivity and protect crops. Nevertheless, crops are highly prone to different diseases due to the large number of pathogens present in their environment. Some of these disease pathogens are viruses, whereas others are fungi or bacteria []. Crop diseases can reduce productivity by 10% to 95% [], resulting in a significant decrease in the quantity and quality of agricultural production. Therefore, early identification of diseases is crucial to avoid huge losses and reduce the excessive use of pesticides, which can harm human health and the environment. In most cases, and especially in developing countries and on small farms, farmers identify crop diseases with the naked eye based on visual symptoms. This is a tedious task that requires expertise in plant pathology and excessive treatment time []. Moreover, if the field is attacked by a rare disease, farmers seek expert advice to obtain an accurate and efficient diagnosis, which obviously generates additional treatment costs []. Thus, this method of visual observation is neither practical nor feasible for large farms and may even yield erroneous predictions due to biased decisions []. The restrictions of the traditional approach have motivated researchers to develop technological proposals for the early identification of crop diseases in an accurate, fast, and reliable manner, to meet the increasing demands of consumers, and to alleviate the impact of chemical inputs on the environment and human health. In this regard, several methods [,,,] have been proposed to automate the process of disease detection. These methods for automatic recognition of crop diseases are divided into two groups, direct and indirect methods [].
Direct methods comprise molecular [] and serological techniques [,] that provide accurate and direct detection of the pathogens triggering the disease, although these techniques require a significant amount of time for the collection, processing, and analysis of the collected samples. By comparison, optical imaging techniques [,] are among the indirect methods that are able to identify diseases and predict the health of the crop through different parameters such as morphological change and transpiration rate. Fluorescence and hyperspectral imaging [] are some of the most widely used indirect methods for early disease identification. Although hyperspectral images are a valuable source of data and contain more information than ordinary photos [], hyperspectral devices are very expensive, bulky, and difficult to obtain for low-income farmers. Alternatively, other types of digital cameras are available at reasonable prices in electronics stores. As a result, most of the automatic identification processes considered so far are focused on visible-domain images, which enables the use of very accurate and fast algorithms. Hence, this review focuses on approaches based on image processing techniques and spectroscopy for automatic crop disease detection, drawing on algorithms from machine and deep learning, fuzzy logic, and transfer learning.
The main objectives of this paper are, first, to identify the major issues that have not yet been properly explored in previous studies on the automation of the disease recognition process; and, second, to highlight future directions that may help circumvent these challenges. The upcoming sections are structured in the following order. Section 2 provides an insight into the current state of the art in disease recognition. Then, Section 3 is devoted to the comparative study of the various techniques used, identifying their advantages and drawbacks, followed by Section 4, in which the results are discussed and analyzed. In Section 5, the gaps in the existing literature are addressed. These shortcomings constitute possible avenues to explore in future research, which is addressed in Section 6. Finally, the conclusion is drawn in Section 7.

2. Background

Manual identification of crop diseases is both fastidious and inaccurate, meaning it is only feasible in small farms []. In contrast, automatic disease detection is significantly more accurate and takes less time and labor []. As a result, numerous studies [,,,] have been conducted and are discussed in detail below. This section provides a review of different techniques applied in the identification of crop diseases, presents the taxonomy of various crop diseases, and describes the concept of image processing and machine learning. It also demonstrates the application of hyperspectral imagery, the Internet of Things, and deep and transfer learning in the field of disease recognition.

2.1. Taxonomy of Crop Diseases and Their Symptoms

The leaves of crops are highly prone to diseases, which are a natural phenomenon []. However, if corrective measures are not taken at the right time to stop the spread of a disease, it leads to a significant reduction in the quality and quantity of agricultural products []. Crops are affected by various pathogens [], such as viruses, bacteria, and fungi, as well as by nutrient deficiencies. The pathogens responsible for disease fall into two categories []: biotrophs, which thrive on living tissue, and saprophytes, which dwell on dead tissue. The symptoms of disease adversely affect the development and growth of crops and are easily visible. Leaf discoloration is usually the first symptom of disease in plants. In addition, the shape and texture of the leaves are highly useful in detecting various diseases. Therefore, different diseases, such as downy mildew, rust, and powdery mildew, can be detected by processing images of the leaves [,].
The following is a brief description of the three common types of plant diseases [] that are illustrated in Figure 1 and described in Table 1:
Figure 1. Different types of pathogens: viruses, fungi, and bacteria.
Table 1. Classification of some leaf diseases with their symptoms.
  • Virus diseases []: Among all plant diseases, those caused by viral infection are the most difficult to identify and diagnose; moreover, their symptoms are easily mistaken for signs of nutritional deficiency or injury, as there is no single indicator that can be constantly monitored. Whiteflies, leafhoppers, aphids, and cucumber beetles are regular vectors of virus diseases.
  • Fungal diseases []: Many foliar diseases, such as downy mildew, anthracnose, and powdery mildew, are caused by fungi. These diseases initially appear on the old, lower leaves as gray-green or water-soaked spots. As the fungus matures, the spots darken and fungal growth spreads over them.
  • Bacterial diseases []: Bacterial pathogens cause serious diseases in vegetables. They do not enter the vegetation directly, but rather through injuries or natural openings in the crop. Crop injuries result from various pathogens, insects, and agricultural implements during tasks such as picking and pruning.

2.2. Application of Machine Learning and Image Processing in Disease Identification

Foliar images are an excellent and rich source of data on plant pathology and morphological behavior; thus, these data must be thoroughly extracted and analyzed. Image processing plays [] a crucial role in the diagnosis and analysis of leaf diseases. The procedure adopted in this leaf disease identification process is illustrated in Figure 2, which outlines the different techniques employed to detect disease by means of image processing and artificial intelligence.
Figure 2. Different approaches for the identification of leaf diseases.
The primary step [] in identifying diseases is image acquisition. In most cases, images are fetched either from a digital camera or an imaging system. As raw images tend to contain noise, removing these impurities is required. The second step is therefore known as image pre-processing, and involves the removal of unwanted distortions, in addition to contrast enhancement, to clarify and brighten the image features. For example, a Gaussian function that creates a soft blur is commonly used to lessen the noise in the image. Subsequently, image segmentation [] is the third step, in which the image is separated from its background and the region of interest (ROI) is partitioned to emphasize the prominent features. The fourth step is feature extraction [], which unveils the information and details of an image. As a side note, the leaf features usually include shape, texture, and color, which are used to diagnose the crop. These chosen features form an input feature vector, which is then fed into the classifier; using this vector, it is possible to discriminate one class of objects from another. The final step is classification []. Note that the choice of a suitable classifier depends on the specific problem. The classifier’s aim is to recognize the images by sorting them into several predefined classes based on the feature vector obtained in the fourth step. For this purpose, the classification task comprises two phases, namely, training and testing. The training operation trains the classifier on a training dataset; thus, the larger the training set, the better the accuracy obtained. It should be noted that the result, which is the crop’s healthy or diseased state associated with the species name, must be delivered as swiftly as possible.
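The five-step pipeline described above can be sketched in a few lines of NumPy. This is a minimal, self-contained illustration, not a production system: the Gaussian blur, intensity threshold, summary statistics, and nearest-centroid rule stand in for the richer filters, segmentation schemes, and classifiers surveyed here, and all function names, thresholds, and centroid values are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """1-D Gaussian kernel used for the pre-processing blur (step 2)."""
    ax = np.arange(size) - size // 2
    k = np.exp(-0.5 * (ax / sigma) ** 2)
    return k / k.sum()

def blur(img, sigma=1.0):
    """Separable Gaussian blur: lessens noise before segmentation."""
    k = gaussian_kernel(sigma=sigma)
    img = np.apply_along_axis(lambda row: np.convolve(row, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda col: np.convolve(col, k, mode="same"), 0, img)

def segment(img, thresh=0.5):
    """Step 3: crude ROI segmentation by intensity thresholding."""
    return img > thresh

def extract_features(img, mask):
    """Step 4: summary statistics over the ROI as a feature vector."""
    roi = img[mask]
    return np.array([roi.mean(), roi.std()])

def classify(features, centroids):
    """Step 5: nearest-centroid assignment to a predefined class."""
    return min(centroids, key=lambda c: np.linalg.norm(features - centroids[c]))
```

Given a synthetic bright lesion on a dark background, the extracted feature vector lands near a "diseased" centroid learned in a hypothetical training phase, illustrating how the feature vector from step 4 drives the classification in step 5.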

2.3. Application of Deep and Transfer Learning in Disease Recognition

Over the past decade, deep learning [,,] and transfer learning [,] applications in agriculture have gained widespread success and yielded highly promising outcomes due to their capability to reliably learn and discern visual features. Numerous intriguing studies [,,,,,] have been published on the employment of these promising approaches for identifying diseases. Of particular note, the use of transfer learning is a trend that is becoming increasingly popular and is widely used by researchers [,]. Furthermore, transfer learning is not a single technique, but rather a family of fine-tuning techniques, which enables the development of highly accurate models on a more restrictive, specialized dataset, such as those for plant diseases. Mohanty et al. [] showed that the fine-tuning approach is far better than a CNN model trained from scratch. Another model is the neural network (NN), which is broadly employed and recommended to analyze hyperspectral data for the premature detection of diseases. Its basic mechanism was inspired by the human nervous system, and it possesses specific capabilities, such as learning and generalization, that aid in crop disease diagnosis. In contrast to other machine learning methods, it has a more accurate diagnostic capability because it is better able to combine training sets. Another similar comparative study was undertaken by Zhu et al. [], wherein back-propagation neural networks (BPNNs) were tested against the support vector machine (SVM), random forest (RF), linear discriminant analysis (LDA), extreme learning machine (ELM), LS-SVM, and partial least squares discriminant analysis (PLS-DA) for pre-symptomatic detection and classification of tobacco mosaic virus (TMV) disease with the use of hyperspectral imaging. Similarly, Zhu et al. [] studied the feasibility of hyperspectral imaging as a non-invasive technique for early detection of TMV disease with machine learning classifiers and a variable selection technique.
The results revealed that the back-propagation neural network (BPNN) model achieved 95% accuracy, whereas the chemometric models achieved an accuracy of 80%. It is worth mentioning that it is possible to implement pattern identification methods such as the random forest and support vector machine by utilizing a new pattern recognition technique named the artificial intelligent nose. Cui et al. [] provided a review of different invasive and non-invasive techniques, including their advantages and drawbacks, in which the authors noted that the smart nose is a non-invasive and fast method for plant disease diagnosis. In essence, neural networks ensure the highest quality and unaltered spectral information for hyperspectral data analysis. Research on ANNs spawned the concept of deep learning, which has recently become popular in farming applications. Deep learning has received increasing and widespread interest from many researchers, particularly since 2018, as shown in Table 2. Researchers have made remarkable progress in crop image classification; some of the most typical and representative models are the convolutional neural network (CNN), auto-encoder (AE), recurrent neural network (RNN), and restricted Boltzmann machine (RBM). Many fascinating studies have been published on deep learning for crop disease classification and detection. Among these works, that of Ma et al. [] presented a deep convolutional neural network (DCNN) model able to detect four types of cucumber disease. In a comparison with traditional methods, such as the support vector machine, naive Bayes, and AlexNet, the DCNN was capable of identifying the different cucumber diseases with a very high accuracy of up to 93.41%. Similarly, Tran et al. [] proposed a monitoring system to track tomato growth and maximize tomato yield. This system was able to classify nutritional deficiencies and diseases during growth.
Thus, agricultural experts can evaluate the symptoms based on the results to protect tomato crops. In a similar manner, to effectively monitor apple tree growth at each stage and estimate the yield, Tian et al. [] deployed a dense YOLOV3 model that utilizes data augmentation techniques to prevent overfitting. Their approach was found to be valid and applicable to apple orchards, even though their study involved fluctuating illumination, overlapping fruit, and complex backgrounds.
Table 2. A brief summary of different research on transfer and deep learning since 2018 for identifying crop diseases.

2.4. Hyperspectral Imaging Applied to Disease Recognition

The hyperspectral imagery method has been strongly developed during the past two decades [] and used to identify abiotic and biotic stresses in cultivated plants []. Hyperspectral imaging is a technique combining spectroscopy and imagery, making it possible to simultaneously obtain the spatial and spectral information of an object. Disease infection causes changes in plant biochemical and biophysical properties, such as transpiration rate, tissue structure, and water and pigment content. These changes can then alter plant spectral properties, intercellular space, and water content []; the hyperspectral system is able to capture these spectral features. Zhu et al. [,] conducted a similar study to detect the development of tomato spotted wilt virus (TSWV) infection in tobacco, in which the authors reported that hyperspectral reflectance was gathered in the visible and near-infrared range to distinguish between healthy and TSWV-infected tobacco leaves using statistical analysis methods; notably, TSWV presence was identified as early as 14 days post-inoculation (DPI). Moreover, Zhu et al. [] demonstrated that hyperspectral imaging is able to detect tobacco mosaic virus (TMV) infection before any symptoms appear, utilizing SPA for the selection of effective wavelengths (EWs) and, most significantly, for identifying various diseases. Due to the huge number of highly correlated spectral values in hyperspectral datasets, high dimensionality and multi-collinearity frequently appear in hyperspectral data [,]. Accordingly, the selection of EWs is crucial for hyperspectral analysis in order to lessen the computational complexity and increase the efficiency of using hyperspectral data. Thus, to address this multi-collinearity issue, a variety of approaches and methods have been presented, such as the successive projection algorithm (SPA) [,], partial least squares regression (PLSR) models [], and genetic algorithms (GAs).
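To illustrate why effective-wavelength selection matters, the sketch below implements a simple greedy, correlation-based band selector. This is not SPA, PLSR, or a GA; it merely demonstrates the shared underlying idea of discarding highly inter-correlated bands, and the function name and scoring rule are assumptions made for illustration.

```python
import numpy as np

def select_effective_wavelengths(spectra, k=5):
    """Greedy correlation-based band selection: seed with the highest-
    variance band, then repeatedly add the band least correlated (on
    average) with the bands already chosen.

    spectra: (n_samples, n_bands) array of reflectance values.
    """
    corr = np.abs(np.corrcoef(spectra.T))  # band-to-band |correlation|
    selected = [int(np.argmax(spectra.var(axis=0)))]
    while len(selected) < k:
        remaining = [b for b in range(spectra.shape[1]) if b not in selected]
        scores = [corr[b, selected].mean() for b in remaining]
        selected.append(remaining[int(np.argmin(scores))])
    return sorted(selected)
```

On data where several bands carry nearly the same signal, the selector keeps only one of them and prefers an independent band, which is exactly the redundancy reduction sought by EW selection.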

2.5. Application of IoT in the Field of Leaf Disease Recognition

The Internet of Things (IoT) has improved agricultural capabilities. IoT applications can help farmers at any time during their farming activities and keep them updated with the latest crop and weather information to remotely monitor their fields. By means of IoT applications [,,], farmers can make plans for the next season’s harvest. Furthermore, they can detect crop diseases at an early stage to curb the spread of disease and save their yield. Agricultural IoT applications clearly play a major role in increasing agricultural production and decreasing crop losses due to diseases. In this context, a large amount of research has been conducted to identify diseases, as shown in Table 3. Truong et al. [] devised an IoT-based system made up of various devices that delivers real-time environmental information and sends it to the cloud for storage. These environmental data are processed and scrutinized to predict weather conditions by means of an SVM algorithm deployed in the cloud in order to detect crop fungal diseases. In addition, better results have been achieved when the Internet of Things and image processing have been combined in the area of disease recognition. Krishna et al. [] implemented an IoT system featuring SMS alerts that enables automatic disease detection and pesticide spraying using the NodeMCU.
Table 3. Summary of the literature survey on Internet of Things systems.
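The kind of decision logic such an IoT node might run over its sensor stream can be sketched as follows. This is a deliberately simple rule-based illustration, not the cloud-hosted SVM used by Truong et al.; the `Reading` type, the averaging window, and the humidity/temperature thresholds are all illustrative assumptions rather than agronomic recommendations.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Reading:
    """One sample from a field node's temperature/humidity sensors."""
    temperature_c: float
    humidity_pct: float

def fungal_risk(window):
    """Flag conditions favourable to fungal disease over a sampling
    window: sustained high humidity at mild temperatures. A real node
    would raise an alert (e.g. an SMS) when this returns True, or
    forward the raw data to a cloud-side model instead."""
    avg_t = mean(r.temperature_c for r in window)
    avg_h = mean(r.humidity_pct for r in window)
    return avg_h > 85.0 and 15.0 <= avg_t <= 28.0
```

Averaging over a window rather than acting on single samples keeps the node from alerting on transient sensor spikes, at the cost of slower response.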

3. Comparison of Various Crop Disease Detection Techniques

The primary objective of this section is to provide an overview of research carried out during the past decade on identifying crop diseases. Table 4 outlines several methodologies adopted by researchers in the field of crop disease detection using machine learning, image processing, the Internet of Things, transfer learning, and deep learning techniques. It also indicates the limitations and gaps that need to be filled to help develop an automatic, efficient, accurate, and faster system in the future. Across the reviewed research, the authors conclude that deep learning provides accurate and highly promising results compared to other classification and detection methods. Additionally, the use of preprocessing techniques significantly improves segmentation accuracy. The k-means algorithm is the most widely and commonly used technique [,,,,] for segmenting diseased leaves and classifying crop diseases. In practice, no generalizable algorithm is able to solve all issues, so choosing a suitable learning algorithm for a specific problem is a crucial step for model efficiency. Note that the extracted texture features are the most relevant and most useful for representing the disease-affected regions in the images, and are then employed to train the support vector machine (SVM) and neural network (NN) classifiers. It is further emphasized that these texture features are statistical parameters that are automatically calculated by means of the gray-level co-occurrence matrix (GLCM) [,], as stated below:
Table 4. Related work in the area of crop disease identification.
1. ASM (Energy): The angular second moment, i.e., the sum of the squared elements of the GLCM:
$$\mathrm{Energy} = \sum_{i,j=0}^{N-1} P_{i,j}^{2}$$
2. Contrast: Denotes the sum of local intensity differences, where i ≠ j:
$$\mathrm{Contrast} = \sum_{i,j=0}^{N-1} P_{i,j}\,(i-j)^{2}$$
3. Entropy: The quantity of image information necessary for compression:
$$\mathrm{Entropy} = -\sum_{i,j=0}^{N-1} P_{i,j} \ln P_{i,j}$$
4. Correlation: Refers to the linear dependence of the gray levels of adjacent pixels:
$$\mathrm{Correlation} = \sum_{i,j=0}^{N-1} \frac{P_{i,j}\,(i-\mu)(j-\mu)}{\sigma^{2}}$$
where $P_{i,j}$ is the (i, j) element of the normalized symmetric GLCM and N denotes the number of gray levels. Here, $\sigma^{2}$ is the intensity variance of all pixels:
$$\sigma^{2} = \sum_{i,j=0}^{N-1} P_{i,j}\,(i-\mu)^{2}$$
and $\mu$ is the mean of the GLCM:
$$\mu = \sum_{i,j=0}^{N-1} i\,P_{i,j}$$
5. Homogeneity: Represents the homogeneity of gray-level pixel pairs, and is equal to 1 for a diagonal GLCM:
$$\mathrm{Homogeneity} = \sum_{i,j=0}^{N-1} \frac{P_{i,j}}{1+(i-j)^{2}}$$
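The texture features defined above can be computed directly from a normalized symmetric GLCM. The sketch below is a plain-NumPy illustration; the `glcm` helper and its single horizontal offset are simplifying assumptions (libraries such as scikit-image provide optimized, multi-offset equivalents).

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Normalized symmetric gray-level co-occurrence matrix P for one
    pixel offset (dx, dy). img must contain integer gray levels."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            i, j = img[y, x], img[y + dy, x + dx]
            P[i, j] += 1
            P[j, i] += 1  # symmetric counting
    return P / P.sum()

def glcm_features(P):
    """Energy, contrast, entropy, correlation, and homogeneity,
    mirroring the formulas above."""
    N = P.shape[0]
    i, j = np.indices((N, N))
    mu = (i * P).sum()
    var = ((i - mu) ** 2 * P).sum()
    eps = 1e-12  # guard against log(0)
    return {
        "energy": (P ** 2).sum(),
        "contrast": (P * (i - j) ** 2).sum(),
        "entropy": -(P * np.log(P + eps)).sum(),
        "correlation": (P * (i - mu) * (j - mu)).sum() / var,
        "homogeneity": (P / (1 + (i - j) ** 2)).sum(),
    }
```

For a 2-level checkerboard image, every horizontal neighbor pair is (0, 1) or (1, 0), so the contrast is maximal and the correlation is exactly −1, which is a handy sanity check on the implementation.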
Ultimately, the models for identifying and classifying crop diseases were evaluated by means of various metrics, which were specific to the model used in each study, such as sensitivity, precision (P), recall (R), quality measure (QM), and F1-score. The statistical evaluation measures used to analyze the quantitative performance of crop disease detection models with deep and transfer learning can be calculated as follows:
$$\mathrm{Precision} = \frac{TP}{TP + FP}$$
where Precision (P) is the fraction of true positives (TP) to the total amount of relevant results, that is, the sum of TP and false positives (FP). For multi-class classification problems, P is averaged across the classes.
$$\mathrm{Sensitivity} = \frac{TP}{TP + FN}$$
Sensitivity/Recall (R) is the fraction of TP to the total amount of TP and false negatives (FN). For multi-class classification problems, R is averaged across all classes.
$$\mathrm{Specificity} = \frac{TN}{TN + FP}$$
Specificity is the proportion of true negative (TN) samples to all healthy samples (true negatives and false positives). This measure is utilized to evaluate the performance of a proposed model in forecasting true negatives.
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$
Accuracy is the proportion of correctly classified samples to the total number of classified samples. This measure is employed to assess the overall performance of a suggested model.
$$\mathrm{F1\text{-}score} = \frac{2 \times \mathrm{Sensitivity} \times \mathrm{Precision}}{\mathrm{Sensitivity} + \mathrm{Precision}}$$
F1-score is the harmonic average of both precision and recall. For multi-class classification problems, F1 is averaged across all classes, where:
  • TP: represents the number of true positive image samples that are perfectly identified as infected.
  • FP: is the number of false-positive image samples that are incorrectly classified as infected.
  • TN: is the number of true-negative image samples that are correctly classified as healthy.
  • FN: is the number of false-negative image samples that are incorrectly identified as uninfected.
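These definitions translate directly into code. The following minimal sketch computes all five metrics from raw confusion-matrix counts; the function name is illustrative, and it assumes the binary (infected vs. healthy) case rather than the per-class averaging used for multi-class problems.

```python
def classification_metrics(tp, fp, tn, fn):
    """Binary-classification metrics from confusion-matrix counts,
    mirroring the formulas above (recall == sensitivity)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"precision": precision, "recall": recall,
            "specificity": specificity, "accuracy": accuracy, "f1": f1}
```

For example, a model that correctly flags 40 of 45 infected leaves while raising 10 false alarms on 55 healthy ones scores a precision of 0.80 and a recall of about 0.89, and the F1-score summarizes their trade-off in a single number.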
As revealed in previous studies [], accuracy was the most widely adopted metric, used in 72% of the articles reviewed, followed by the confusion matrix, then the precision, recall, and F1 measures. Some research has examined the root mean square error (RMSE), mean absolute error, R-squared, and mean squared error (MSE), among others. The authors note that it is difficult, if not impossible, to make a comparison across papers because different metrics were used for different tasks, models, datasets, and parameters; in addition, different crops and diseases were analyzed under different conditions. Furthermore, it is very important to examine whether the researchers tested their implementation on an identical dataset (e.g., by splitting the dataset into training and validation sets), or whether they used different datasets to test their solution.
  • The difference between machine learning and deep learning
The difference between machine and deep learning lies [], first, in the fact that machine learning algorithms deal with quantitative, structured data and, second, in that the operator is responsible for choosing the right algorithm and extracting the features that will influence the prediction. Deep learning algorithms deal with unstructured data, and the algorithm is trained to extract the influential features itself, as shown in Figure S1 in the Supplementary Materials. It should be noted that, compared to ML algorithms, deep learning algorithms demand a large amount of data and high computational power.

4. Discussion

In this paper, the authors reviewed many research articles and identified 129 studies eligible for systematic review using the PRISMA statement, as presented in Figure S2 in the Supplementary Materials. These studies involve methodologies in image processing, machine learning, and deep learning, particularly focused on the identification and classification of plant diseases. The study showed that the techniques most used in the literature are, in general, the support vector machine (SVM) [,,,], random forest (RF) [], artificial neural network (ANN) [], and convolutional neural network (CNN) [,,].
Additionally, many scientific contributions have focused on the prediction of major diseases affecting wheat, rice, and potatoes, such as powdery mildew [,], late blight [,], and blast [,]. The challenging aspect of this work is the evaluation and investigation of the computational efficiency of each study compared to other studies, because each paper applies different metrics to a variety of diseases in different crops. In addition, many techniques and preprocessing approaches are used to predict disease presence or severity. Accordingly, it is nearly impossible to generalize and compare different articles, because it is paramount to follow the same experimental conditions. Thus, the present comparison of the different approaches was strictly constrained, for example, by considering the types of crops on which the work was undertaken, in addition to the kinds of diseases considered during the work. Based on these constraints and the results obtained in related works, it is observed that deep learning-based models have outperformed classical approaches such as the random forest, support vector machine, and k-nearest neighbors classifiers, noting that the performance of these algorithms has been proven and validated using metrics such as accuracy, sensitivity, specificity, and F1-score.
Table 4 shows that several researchers applied spectral analysis using thermal and optical remote sensing images, in addition to multispectral and hyperspectral images. As shown by Duarte-Carvajalino et al. [], multispectral images were found to be relevant for the early-stage detection of disease, whereas hyperspectral images, which are the most widely used in the existing literature, can predict disease even before symptoms are visible to the naked eye. Note that this difference is due to the spectral resolution of the two technologies; compared to hyperspectral imaging, multispectral imaging offers lower data complexity []. However, hyperspectral image analysis has various limitations. Several authors have highlighted the high dimensionality of the data as one of the difficulties encountered. As pointed out by Mahlein et al. [], the high degree of interband correlation leads to information redundancy, generating convergence instability in multivariate prediction models. It is observed that the dataset used by most of the researchers is taken from PlantVillage and that, during image preprocessing, most researchers used histogram equalization to improve the contrast, and the median, Gaussian, and Gabor filters for denoising and image enhancement. Furthermore, for image segmentation, researchers have focused on the hue, using the k-means and fuzzy c-means algorithms to segment the images; this procedure enables extraction of the region of interest from the given image. From these regions, plant features such as texture, shape, and color have often been extracted using the gray-level co-occurrence matrix (GLCM), local binary patterns (LBPs), and histogram of oriented gradients (HOG). This is the most prominent step in the classification process.
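The k-means segmentation step mentioned above can be sketched in plain NumPy. This version clusters per-pixel feature vectors (e.g. hue or RGB triples); the farthest-first initialization is an illustrative choice for determinism, not the scheme of any particular paper.

```python
import numpy as np

def kmeans_segment(pixels, k=2, iters=20):
    """Plain k-means on per-pixel feature vectors, as commonly used to
    split diseased regions from healthy tissue.

    pixels: (n_pixels, n_features) array; returns (labels, centers).
    """
    pixels = np.asarray(pixels, dtype=float)
    # Farthest-first initialization: deterministic and well spread.
    centers = [pixels[0]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(pixels - c, axis=1) for c in centers], axis=0)
        centers.append(pixels[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        # Assign each pixel to its nearest center, then recompute centers.
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels, centers
```

Pixels belonging to a lesion typically form a color cluster distinct from healthy tissue, so thresholding on the resulting labels yields the diseased-region mask that downstream feature extraction operates on.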
The researchers used different classification algorithms based on machine learning, and deep and transfer learning, for the classification phase, such as the decision tree classifier (TC), random forest (RF), naive Bayes (NB), support vector machine (SVM), artificial neural network (ANN), probabilistic neural network (PNN), back-propagation neural network (BPNN), convolutional neural network (CNN), and InceptionV3.
The SVM and NN are mainly used in disease classification. The main advantage of NNs is that they can tolerate noise and are built from available data. The SVM, in turn, offers outstanding classification performance because it nonlinearly maps the input feature vector into a high-dimensional space where it can be easily separated. Nevertheless, SVMs are not suitable when the data are very noisy. Hence, when many redundant variables form the input vector, it is possible to use the principal component analysis (PCA) dimensionality reduction method, as done by Kadir et al. []. In addition, the convolutional neural network (CNN), faster R-CNN, Vgg16, and ResNet50 models have been used to fully automate the classification process. Moreover, a new approach employed by Turkoglu et al. [] is the extreme learning machine (ELM), which offers faster learning and better performance and generalization with lower computational cost. The advantages and disadvantages of the classifiers used in the literature are summarized in Table 5.
Table 5. Comparison of various classifiers.
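The PCA compression step applied before an SVM or NN classifier can be sketched via the singular value decomposition. This is an illustrative NumPy version, not the implementation used by Kadir et al.; the function name and return convention are assumptions.

```python
import numpy as np

def pca_reduce(X, n_components=2):
    """Project feature vectors onto the top principal components to
    strip redundant, highly correlated inputs before classification.

    X: (n_samples, n_features) array.
    Returns (scores, components): the reduced data and the
    (n_components, n_features) projection basis.
    """
    Xc = X - X.mean(axis=0)                       # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]                # top right-singular vectors
    return Xc @ components.T, components
```

When the input features are linear mixtures of a few latent factors, as is typical of redundant GLCM or spectral variables, a low-dimensional projection retains almost all of the variance and the classifier trains on far fewer, less noisy inputs.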
In this regard, a study was carried out by Ngugi et al. [] to compare the performance of 10 deep learning models using the PlantVillage dataset, namely AlexNet, ResNet-101, GoogleNet, DenseNet201, Vgg16, Inceptionv3, InceptionResNetv2, SqueezeNet, ShuffleNet, and MobileNets. We present the results obtained by the different architectures for all the performance measures in Table 6 below.
Table 6. The test set performance of 10 models considered in this comparative study.
According to Table 6, the DenseNet201 model is the most suitable because it requires less storage and achieves the best performance measures (accuracy = 0.9973, precision = 0.9958, recall = 0.9965, specificity = 0.9999, F1 score = 0.9961). However, it requires a longer learning time (82 h) than the InceptionV3 and ResNet-101 models, whose accuracies are only slightly lower than that of DenseNet201. Therefore, special care should be taken when choosing between these three architectures, as each model has certain advantages and limitations. By comparison, the small MobileNet, SqueezeNet, and ShuffleNet architectures are desirable in embedded and mobile applications where computing resources are limited, due to their short learning times and low storage requirements, while still achieving high accuracy.
In this regard, another comparative study of four machine learning algorithms (k-nearest neighbors, decision tree, naive Bayes, and logistic regression) was performed by Ahmed et al. [] to detect three rice plant diseases, using images taken from the same PlantVillage database.
As shown in Figure 3, the best accuracy (over 97% on the test dataset) was obtained by the decision tree algorithm after 10-fold cross-validation.
Figure 3. Comparison between machine learning algorithms.
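The k-fold cross-validation protocol used in such comparisons can be sketched in a few lines. The classifier below is a deliberately trivial majority-class stand-in; any of the four algorithms above could be plugged in through the `train_fn` parameter (both names are illustrative):

```python
import random
from statistics import mean

def k_fold_indices(n_samples, k=10, seed=42):
    """Split sample indices into k roughly equal, shuffled folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(X, y, train_fn, k=10):
    """Mean held-out accuracy of `train_fn` over k folds.

    train_fn(X_tr, y_tr) must return a predict(x) callable.
    """
    folds = k_fold_indices(len(X), k)
    scores = []
    for i, test_idx in enumerate(folds):
        train_idx = [j for f in folds[:i] + folds[i + 1:] for j in f]
        model = train_fn([X[j] for j in train_idx], [y[j] for j in train_idx])
        hits = sum(model(X[j]) == y[j] for j in test_idx)
        scores.append(hits / len(test_idx))
    return mean(scores)

# Toy run: a majority-class "classifier" on a dummy two-class dataset
X = list(range(50))
y = [0] * 30 + [1] * 20
majority = lambda X_tr, y_tr: (lambda x, c=max(set(y_tr), key=y_tr.count): c)
print(round(cross_validate(X, y, majority), 2))  # 0.6, the majority-class rate
```

Averaging over k held-out folds gives a less optimistic, lower-variance estimate of accuracy than a single train/test split.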
To briefly summarize this section, it is inferred that multispectral and hyperspectral imagery represents a valuable source of useful information for developing autonomous non-invasive systems to predict abiotic and biotic stresses in plants. Additionally, the integration of multiple data sources will strengthen and increase the stability and generalization capabilities of the algorithms. Furthermore, from the results obtained in the literature, it appears that the automatic extraction of leaf features performed by deep learning-based models is more relevant and efficient than the process of extracting these features using traditional approaches such as the grey level co-occurrence matrix (GLCM), area-based techniques (ABTs), and scale invariant feature transform (SIFT). However, it is noted that there is a lack of validation of the models used in real-world scenarios. Therefore, proper validation is necessary for the studies to have an accurate and general impact.

5. Unresolved Challenges in the Crop Disease Detection Field

The above section presents a wealth of promising research undertaken in the past few years in the area of crop foliar disease recognition and detection using a range of techniques. In the existing literature to date, numerous unresolved challenges remain to be addressed and overcome in order to derive robust and feasible crop disease detection systems that can operate accurately under various field conditions. The most prominent of these challenges are:

5.1. Insufficient Data

The major problem in the use of deep learning models for plant disease detection is the insufficiency of datasets in terms of both diversity and size [], because these models have extremely large data requirements. In the majority of cases, the identification of plant diseases has been performed under ideal and controlled conditions [], such as the presence of a single disease against a homogeneous background. In addition, environmental conditions are not considered; hence, the reported accuracy will be higher than that actually obtained in a practical application. Additionally, image labeling is a very laborious and tedious task. Due to these factors, the production of a reliable, efficient, and comprehensive dataset is extremely challenging. At present, there are six main ways to deal with the lack of data: data augmentation techniques, data sharing, citizen science, transfer learning, synthetic data, and few-shot learning.

5.2. Imbalanced Data

The most commonly used datasets for crop disease detection are cleaned or their unbalanced nature is ignored to fully concentrate on training algorithms and avoid being distracted by other problems. However, in real-world settings, the distribution across classes is skewed and unbalanced [], ranging from mildly biased to severely unbalanced. This poses a challenge for predictive modeling and may require specialized techniques, such as re-sampling techniques, because the machine learning algorithms typically employed for classification are built around the assumption of an equal number of examples for each class. As a result, some of the models have poor predictive performance, especially for the minority class which is more susceptible to misclassification than the majority class.
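Besides re-sampling, one simple counter-measure is to weight the loss by inverse class frequency, so that minority-class disease examples are not drowned out by the majority class. A minimal sketch, using the common n/(k·n_c) weighting convention (the label names are illustrative):

```python
from collections import Counter

def balanced_class_weights(labels):
    """Inverse-frequency class weights: w_c = n_samples / (n_classes * n_c).

    A minority class thus contributes proportionally more to the loss,
    compensating for a skewed class distribution.
    """
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * m) for c, m in counts.items()}

# 90 healthy leaves vs. only 10 diseased ones
labels = ["healthy"] * 90 + ["diseased"] * 10
weights = balanced_class_weights(labels)
print(weights)  # diseased examples are weighted 9x more than healthy ones
```

These weights can be passed to any weighted loss function; with them, misclassifying one diseased leaf costs as much as misclassifying nine healthy ones.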

5.3. Vanishing Gradient Problem

Hochreiter’s work [] identified the “vanishing gradient problem”, which arises during the training phase when neural networks are trained with back-propagation. Specifically, each weight of the network is updated in proportion to the partial derivative of the error function with respect to that weight. In some cases, however, this update effectively stops because the gradient becomes vanishingly small. As a result, gradient descent fails to converge to the optimum and training stalls completely [].
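The effect is easy to reproduce numerically: the derivative of the sigmoid never exceeds 0.25, so back-propagating through many sigmoid layers multiplies together factors of at most 0.25, and the gradient collapses toward zero even in the best case:

```python
import math

def sigmoid_derivative(x):
    s = 1.0 / (1.0 + math.exp(-x))
    return s * (1.0 - s)  # maximum value is 0.25, attained at x = 0

# Back-propagating through n sigmoid layers multiplies n such derivatives,
# so even in the most favorable case the gradient shrinks as 0.25 ** n.
grad = 1.0
for layer in range(30):
    grad *= sigmoid_derivative(0.0)  # 0.25, the largest possible factor
print(grad)  # ≈ 8.7e-19: effectively zero after 30 layers
```

This is one motivation for ReLU-style activations and residual connections (as in ResNet), whose gradients do not shrink multiplicatively in this way.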

5.4. Exploding Gradient Problem

The opposite of the vanishing gradient problem is the exploding gradient problem []. Here, the gradients become progressively larger as the back-propagation algorithm proceeds, leading to extremely large weight updates that cause gradient descent to diverge and the training to become unstable []. Thus, the model loses its ability to learn efficiently. In general, as back-propagation moves backward through the network, repeated multiplication of the layer gradients can make the overall gradient grow exponentially. As a result, the weight values can become so large that they overflow into a non-numeric value (NaN).
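A standard remedy, gradient clipping by global norm, caps the magnitude of the gradient while preserving its direction. A minimal sketch:

```python
import math

def clip_by_global_norm(grads, max_norm):
    """Rescale a gradient vector so its L2 norm does not exceed max_norm.

    If the norm is already within bounds, the gradients pass through
    unchanged; otherwise every component is scaled by the same factor,
    so only the step size shrinks, not the step direction.
    """
    norm = math.sqrt(sum(g * g for g in grads))
    if norm <= max_norm:
        return grads
    scale = max_norm / norm
    return [g * scale for g in grads]

exploded = [1e6, -3e6, 2e6]          # a gradient that has blown up
clipped = clip_by_global_norm(exploded, max_norm=5.0)
clipped_norm = math.sqrt(sum(g * g for g in clipped))
print(clipped_norm)  # ≈ 5.0: direction kept, magnitude bounded
```

Clipping before every weight update keeps individual steps bounded, which prevents the NaN overflow described above without otherwise changing the training procedure.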

5.5. Overfitting and Underfitting Problem

Learning models have a high risk of overfitting or underfitting the data during the training stage due to the large number of parameters involved, which are correlated in complex ways. Both situations reduce the model’s ability to perform well on test data. A learning algorithm is said to underfit when it is unable to capture the underlying trend in the data; this corresponds to high bias and low variance, and usually occurs when too few data are available to build an accurate model, or when a linear model is fitted to non-linear data. Conversely, a model is said to overfit when it learns the noise and inaccurate inputs in the training data in addition to the underlying pattern; it then fails to categorize unseen data correctly because of this excessive detail and noise. Overfitting corresponds to low bias and high variance, and occurs mainly in nonlinear, nonparametric approaches, as these learning algorithms have greater freedom to build an unrealistic model. Ideally, neither situation should arise, but both are generally difficult to eliminate. This problem was noted by Ahmad et al. [], whose model based on efficient convolutional neural networks tended to overfit during the first training epochs.
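Both failure modes can be demonstrated on synthetic data: a 1-nearest-neighbour regressor memorizes the noisy training points (zero training error, much larger validation error), while a constant, high-bias model is equally poor on both sets. The dataset below is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
x_tr = rng.uniform(0, 1, 30)
y_tr = np.sin(2 * np.pi * x_tr) + 0.3 * rng.normal(size=30)  # noisy trend
x_va = rng.uniform(0, 1, 30)
y_va = np.sin(2 * np.pi * x_va) + 0.3 * rng.normal(size=30)

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

def knn1(x_query):
    """1-nearest-neighbour regression: memorizes every training point."""
    return np.array([y_tr[np.argmin(np.abs(x_tr - xq))] for xq in x_query])

# Overfitting signature: zero training error, clearly larger validation error.
print("1-NN train MSE:", mse(knn1(x_tr), y_tr))           # 0.0 exactly
print("1-NN val MSE:  ", round(mse(knn1(x_va), y_va), 3))

# Underfitting signature: a constant model is equally poor on both sets.
c = y_tr.mean()
print("const train MSE:", round(mse(np.full_like(y_tr, c), y_tr), 3))
print("const val MSE:  ", round(mse(np.full_like(y_va, c), y_va), 3))
```

The gap between training and validation error is the practical diagnostic: a large gap indicates overfitting, while uniformly high errors on both sets indicate underfitting.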

5.6. Image Acquisition: Conditions of Image Capture (Lighting, Spatial Location, Wind and Camera)

Ideally, images should be captured under similar conditions. In practice, however, this may only be feasible in the laboratory, because capture conditions in the field are extremely difficult to control. Images may therefore present unpredictable characteristics, making disease identification a daunting task. Variable capture conditions have proven to be a challenging issue in measuring the severity of citrus leaf canker [] and in identifying citrus diseases []. In light of this, several endeavors have been undertaken to develop illumination-invariant methods []. Nevertheless, their success to date remains relatively modest.
  • Lighting Issue
Crops grow in natural environments that fluctuate greatly. Thus, images are impacted by numerous factors, such as wind, illumination, and other climatic conditions. Consequently, lighting issues are inevitable, and completely eliminating the variations is almost impossible. Nevertheless, certain endeavors have been made to mitigate them; e.g., Pourreza et al. [] developed a system to detect citrus Huanglongbing disease in real time using narrow-band imaging and a polarizing filter set. Specular lighting, the simultaneous presence of light and shadow, is the most difficult problem to deal with. The presence of specular lighting can be lessened by changing either the angle at which the image is taken or the leaf position, although this likely causes some degree of reflection. Furthermore, specular reflections and shadows were noted by Zhou et al. [] as the main source of error in monitoring Cercospora leaf spot on sugar beets; they occurred because the images were captured automatically, which makes lighting problems difficult to prevent.
  • Camera
The image resolution is one of the crucial factors that has a direct influence on the image features. A higher resolution enables the detection of small lesions and spores. Moreover, the device being used to capture the image also influences these features.

5.7. Image Preprocessing

During the preprocessing and storage of leaf images, more information is lost as the compression ratio is increased. This may not dramatically influence the analysis of large lesions, but may severely distort small symptoms. Therefore, compression should be kept to a minimum or even avoided, especially if the symptoms are tiny.

5.8. Image Segmentation and Symptom Discrimination

In general, symptoms do not have clear boundaries; they fade gradually into healthy tissue, making the distinction between healthy and diseased areas highly ambiguous. This clearly affects the accuracy of thresholding and of the extracted features. Since even manual, visual inspection cannot clearly determine symptom edges, any machine-based delineation will be prone to the same subjectivity. Notably, the issue of subjective delineation of affected regions was first addressed by Olmstead et al. [] and later by Moya et al. [], who emphasized that some form of external reference must be established for proper validation of disease identification methods. Without such a reference, Oberti et al. [] observed for leaf powdery mildew that the number of false negatives or positives in the discolored symptom zones was too high. In summary, few solutions have been suggested for this problem because the inconsistencies are intrinsic to the process. Furthermore, other difficulties are encountered when segmenting and locating regions of interest (ROIs):
- A leaf may overlap with another leaf or other parts of the plant, and leaves may be tilted or covered with dew or dust.
- Images with complex backgrounds can make the segmentation of ROIs where symptoms appear challenging and intricate.
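As a concrete baseline for such segmentation, Otsu’s method picks the global threshold that maximizes between-class variance. On a synthetic leaf patch with a dark lesion it recovers a sensible lesion mask, although, as noted above, diffuse symptom edges and complex backgrounds limit any global threshold. The image below is simulated:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the intensity threshold maximizing between-class variance.

    gray: 2-D array of uint8 pixel intensities (0-255). A global
    threshold is a common first step for separating lesion pixels
    from healthy tissue.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                 # class-0 probability up to t
    mu = np.cumsum(p * np.arange(256))   # cumulative mean
    mu_total = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[np.isnan(sigma_b)] = 0.0     # undefined where a class is empty
    return int(np.argmax(sigma_b))

# Synthetic patch: dark lesion pixels (~40) on a brighter background (~180)
rng = np.random.default_rng(0)
img = rng.normal(180, 10, (64, 64))
img[20:40, 20:40] = rng.normal(40, 10, (20, 20))
img = np.clip(img, 0, 255).astype(np.uint8)
t = otsu_threshold(img)
print(t)             # threshold falls in the gap between the two modes
mask = img < t       # candidate lesion mask
```

On real leaves, this works only when the lesion and tissue intensities form well-separated modes; gradual symptom boundaries smear the histogram and degrade the threshold, which is precisely the ambiguity discussed above.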

5.9. Feature Selection and Extraction

Although some plant species can be identified on the basis of leaf shape, other species have similar leaf shapes. Furthermore, symptoms do not necessarily arise in easily accessible zones; in practice, they are frequently found under the leaves or covered by other obstructions, and diseases can also appear on stems, fruits, or even flowers. Unfortunately, this problem has not attracted enough attention from researchers: the literature to date has mainly focused on disease detection on the upper leaf surface. Nevertheless, Fuentes et al. [] suggested using the Faster R-CNN network for detecting a number of tomato plant diseases at several locations on the plant.

5.10. Disease Classification

In many cases, the classifier used to identify plant diseases may not be able to distinguish between them; for example, when the symptoms presented by different diseases are visually very similar, as both Ahmad et al. [] and Wiwart et al. [] have stated. In addition, the difficulties listed below are highly relevant to measuring disease severity:
  • Differences in disease symptoms: According to the disease development stage, a specific disease can present very distinct characteristics in the symptoms’ shape, color, and size, causing serious identification problems. It should be noted that many different diseases can occur at the same time, making it extremely complex to distinguish between combinations of symptoms and individual symptoms. This problem was noted by Camargo et al. [] when handling symptoms produced by black streak disease on banana leaves, and Moya et al. [] when evaluating powdery mildew severity on squash leaves.
  • Diseases can occur simultaneously with many disorders, such as nutritional deficiencies, pests, and diseases: Typically, most techniques consider that there is only a single disease per image when, in reality, several other diseases can be present at the same time, in addition to other kinds of disorders, such as nutritional deficiencies and pests. These simultaneous symptoms can be either separate or physically combined, making disease identification a significant challenge. In this regard, Bock et al. [] observed the simultaneous presence of symptoms arising from different diseases and noted that this can lead to identification issues, and that more advances will be required to cope with this issue.
  • The symptoms’ similarity between different disorder types: Symptoms resulting from various disorders, such as diseases, phytotoxicity, presence of parasites, and nutritional deficiencies, can be visually similar. As a result, it can be extremely difficult to determine a symptom’s source with certitude, particularly if only the visible spectrum is used in the identification process. This forces methods to rely on tiny differences to discriminate between the symptoms. Numerous researchers have stated that some disorders have close similarities, leading to major issues of discrimination. In this regard, Ahmad et al. [] reported that symptoms resulting from Fusarium, Mosaic Potyvirus, Alternaria, and Phomopsis in soybean were very similar, and their classifier was unable to discern between them. This explains why the majority of studies conducted to date have chosen to tackle only diseases whose symptoms are quite dissimilar and, even then, their choices remain a significant challenge.

5.11. Other Challenges

Some other challenges facing automatic plant disease identification techniques cannot be categorized in the same way as those mentioned above. These challenges include reducing complexity, in addition to computational and memory demands [], because low-cost computers and cameras have very limited computational resources. At the same time, as image resolution increases, the computational demands also grow. Another major concern is the lack of properly labeled [] and sufficiently large datasets with high variability. This is notably the biggest hurdle when training recurrent neural network (RNN) models for plant disease detection, because collecting images in the field is not only laborious but also requires the guidance of agricultural experts for accurate annotation. Nevertheless, two free datasets exist []: PlantVillage and the Image Database of Plant Disease Symptoms (PDDB). Moreover, at present, no appropriate technology has been developed to automatically crop leaf images around the affected area. A further issue is that hyperspectral data contain more than one hundred adjacent spectral bands, which cannot be trained on directly with linear methods []. Furthermore, these bands in different spectral regions are highly redundant [,], complicating the extraction of information to train an artificial neural network (ANN).

6. Future Work and Possible Solutions to Ongoing Limitations

In the previous section, gaps in the existing literature were highlighted to orient future research in this area. Thus, future work should first aim at acquiring diverse, large datasets to further promote research in this direction. Moreover, it is highly desirable to develop compact convolutional neural network (CNN)-based models that can achieve higher accuracy and promote the use of these technologies on embedded platforms. Secondly, more emphasis should be placed in future research on developing reliable methods to remove backgrounds and on incorporating other forms of data, such as meteorological trends, disease occurrence history, and spatial location, to enhance the accuracy and reliability of disease identification systems. Additionally, disease recognition at different locations on plants and trees, such as the stems, blooms, and fruits, should receive greater attention from researchers due to its tremendous importance. One possible means to circumvent some of the limitations is to impose constraints that restrict variations in image capture conditions. However, even with very tight restrictions, many challenges will remain.
Some of the key challenges can be mitigated through the use of the most sophisticated approaches borrowed from the machine learning and computer vision fields. These include Markov random fields, mean shift, graph theory, and large margin nearest neighbor classification (LMNN), among other methods that have not yet been properly harnessed. In this regard, the proposed solutions to remedy the challenges presented above can be summarized as follows.

6.1. Data Augmentation Techniques

If the aim is to avoid overfitting and expand the size of the dataset without manually collecting new images, data augmentation techniques are a possible solution to any limited-data problem [,]. Data augmentation encompasses a collection of methods that improve the attributes and size of training datasets; DL models can therefore perform better when techniques such as rotation, Canny edge detection, shear, noise addition, shift, and flipping are exploited.
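Several of these transformations can be sketched directly with NumPy array operations; the leaf image below is random data standing in for a real sample:

```python
import numpy as np

def augment(image, rng):
    """Yield simple label-preserving variants of a leaf image (H, W, C).

    A sketch of common augmentations: flips, a random 90-degree
    rotation, additive Gaussian noise, and a horizontal shift.
    """
    yield np.fliplr(image)                            # horizontal flip
    yield np.flipud(image)                            # vertical flip
    yield np.rot90(image, k=rng.integers(1, 4))       # rotation by 90/180/270
    noisy = image + rng.normal(0, 5, image.shape)     # noise injection
    yield np.clip(noisy, 0, 255).astype(image.dtype)
    yield np.roll(image, shift=rng.integers(-8, 9), axis=1)  # shift

rng = np.random.default_rng(0)
leaf = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
variants = list(augment(leaf, rng))
print(len(variants))  # 5 new training samples from a single image
```

Because a rotated or flipped leaf still shows the same disease, each variant keeps its original label, multiplying the effective dataset size at negligible cost.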

6.2. Tackling Overfitting Problem

Overfitting is one of the fundamental problems encountered when using learning models, and has been attributed to sensitivity to the scale of the cross-entropy loss and the continuous updating of the gradients. Three classes of techniques exist to mitigate it. The first acts on the model parameters and architecture, and includes the most familiar approaches, such as batch normalization [], weight decay [], and dropout []. Weight decay is commonly used by default as a universal regularizer. The second class operates on the model inputs, for example via data augmentation and corruption. One cause of overfitting is the lack of training data, in which case the learned distribution does not exactly reflect the real distribution. The marginalized corrupted features (MCF) approach is a recent method to combat overfitting in supervised learning []; its main idea is to regularize models by training them on corrupted copies of the data, without raising the computational complexity. The final class works on the output of the model. A technique recently proposed by Pereyra et al. [], based on penalizing confident output distributions, has demonstrated a strong capacity to regularize CNN and RNN models. Hence, it would be judicious to explore these techniques in the field of crop disease detection.
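Of the first class, dropout is simple enough to sketch in full. In the standard “inverted” formulation, surviving activations are rescaled during training so that the expected activation is unchanged and no correction is needed at inference time:

```python
import numpy as np

def dropout(activations, p_drop, rng, training=True):
    """Inverted dropout: randomly zero units and rescale the survivors.

    Scaling by 1 / (1 - p_drop) keeps the expected activation unchanged,
    so the layer can be used as-is (identity) at inference time.
    """
    if not training or p_drop == 0.0:
        return activations
    keep = (rng.random(activations.shape) >= p_drop).astype(activations.dtype)
    return activations * keep / (1.0 - p_drop)

rng = np.random.default_rng(0)
h = np.ones((1000, 64))                 # a batch of unit activations
out = dropout(h, p_drop=0.5, rng=rng)
print(round(float(out.mean()), 2))      # ≈ 1.0: expectation preserved
```

Randomly silencing units forces the network to spread disease-discriminative information across many features instead of co-adapting a few, which is exactly the regularizing effect cited above.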

6.3. Few-Shot Learning

In cases where the dataset is extremely small, the techniques mentioned above may not be useful; for instance, when a classifier must be built with only one or two samples per class, and each sample is difficult to obtain. In such cases, innovative approaches are needed; one of these is few-shot learning (FSL) []. This is a relatively recent subfield of machine learning that still requires refinement and research. FSL allows the classification of new data when only a few training samples with supervised information are available. Building an FSL classifier is well suited to problems involving rare plant pathologies, for which images for the training set are lacking. Typically, two major approaches are used to solve one-shot or few-shot learning problems: the data-level approach and the parameter-level approach.
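The core intuition can be illustrated with a nearest-class-prototype classifier, the idea underlying prototypical networks for few-shot learning: each class is summarized by the mean of its few support embeddings, and a query is assigned to the closest prototype. The two “rare disease” classes and their embeddings below are synthetic:

```python
import numpy as np

def prototype_classify(support_x, support_y, query_x):
    """Nearest-class-prototype classification for few-shot problems.

    support_x: (n_support, d) embeddings with labels support_y;
    query_x: (n_query, d) embeddings to classify. Each class is
    represented by the mean (prototype) of its support embeddings.
    """
    classes = sorted(set(support_y))
    protos = np.stack([support_x[np.array(support_y) == c].mean(axis=0)
                       for c in classes])
    # Squared Euclidean distance from every query to every prototype
    d = ((query_x[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return [classes[i] for i in d.argmin(axis=1)]

# Two rare-disease classes with only 3 embedded support samples each
rng = np.random.default_rng(0)
sx = np.vstack([rng.normal(0, 0.3, (3, 8)), rng.normal(3, 0.3, (3, 8))])
sy = ["rust"] * 3 + ["blight"] * 3
queries = np.vstack([rng.normal(0, 0.3, (2, 8)), rng.normal(3, 0.3, (2, 8))])
print(prototype_classify(sx, sy, queries))  # ['rust', 'rust', 'blight', 'blight']
```

In a full FSL pipeline the embeddings would come from a learned feature extractor rather than raw vectors, but the classification step is exactly this prototype comparison.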

6.4. Transfer Learning

Recent research has revealed the extensive use of deep CNNs, which require a large quantity of data to perform effectively. The common challenge associated with such models is the lack of training data: collecting a large volume of data is an exhausting task for which no complete solution is currently available. Therefore, to address the fundamental problem of insufficient data, it is advisable to use transfer learning (TL) models, which are highly effective in such cases []. In simple terms, transfer learning is the process by which a model trained for one task is reused as the starting point for training a model for a new task; it attempts to transfer information from the source domain to the target domain. This learning process is illustrated in Figure 4.
Figure 4. Transfer learning process.
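The mechanics can be sketched as follows: a stand-in “pretrained” feature extractor is kept frozen (its weights are reused, never updated), and only a new classification head is trained on the small target-task dataset. All data, labels, and dimensions here are illustrative, and the labels are deliberately constructed to be predictable from the frozen representation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained feature extractor: W_frozen is reused from
# the source task and is never updated during target-task training.
W_frozen = rng.normal(size=(16, 8))
def extract_features(x):
    return np.tanh(x @ W_frozen)

# Small target-domain dataset (e.g., a few labeled disease images).
x = rng.normal(size=(40, 16))
feats = extract_features(x)
y = (feats[:, 0] > 0).astype(float)   # labels recoverable from the features

# Only the new classification head (a logistic regression) is trained.
w_head, b_head = np.zeros(8), 0.0
for _ in range(500):                  # plain gradient descent on log-loss
    p = 1.0 / (1.0 + np.exp(-(feats @ w_head + b_head)))
    w_head -= 0.5 * feats.T @ (p - y) / len(y)
    b_head -= 0.5 * float(np.mean(p - y))

acc = float(np.mean((p > 0.5) == (y == 1.0)))
print(acc)  # the adapted head fits the small target set
```

Because only the 9 head parameters are learned while the 128 frozen weights are reused, far fewer labeled target images are needed than for training the whole network from scratch.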
Forthcoming research can be devoted to automatically estimating the severity of detected diseases, and extended to attain higher accuracy and speed by developing hybrid approaches, such as combining genetic algorithms (GAs) with neural networks (NNs) to increase the disease recognition rate, or combining particle swarm optimization (PSO) with gradient search techniques to ensure much higher speed. In addition, advanced preprocessing techniques should be adopted to prevent noise from interfering with disease detection, and the training and test data should be partitioned using more principled techniques, such as stratified sampling, to create well-balanced partitions and avoid underfitting and overfitting. Optimizing feature vectors should also be contemplated to increase the disease recognition rate at these various stages, and recurrent neural network (RNN) models with long short-term memory should be used to capture the temporal dimension, which can subsequently be harnessed for plant growth estimation. Finally, a web application could be designed with a range of features, such as displaying the diseases identified in crops from leaf images taken with a smartphone camera; a discussion forum could also be developed for agronomists and farmers to discuss treatment and early preventive measures for the encountered diseases. Moreover, plant electrophysiology is a promising avenue for future research []: the electrical signal response produced in plants can be used for real-time disease detection. This approach is based on the fact that plants perceive their environment, and this perception is translated into electrical signals that essentially represent changes in their underlying physiological processes.
Under the influence of stress, the metabolic activities of plant tissues and cells are unstable, which is inevitably reflected in the plant’s physiological electrical properties. As a result, the extraction of substantial characteristics from the generated electrical signals, such as impedance, varying capacitance, and conductivity, would be a highly interesting research direction for the classification of diseases in plants and crops.

7. Conclusions

Crop diseases are one of the main challenges in the farming sector. Thus, there is a need to identify crop diseases at the earliest stage to lessen disease severity and to curb disease propagation on farms. Accordingly, prominent and advanced research has been conducted in recent years on several kinds of disease identification techniques, as presented in this work. The main difference between other surveys and the present paper is the thorough technical analysis of the individual papers and the approaches that have been applied to date, which provides a guideline and references to scientific communities. This paper also provides readers with insights into the automatic crop disease detection process and its key factors, namely the lack of sharp edges around symptoms; fluctuating imaging conditions; variable symptoms presented by diseases; similar symptoms presented by different disorders; and the concomitant presence of symptoms arising from various disorders. These issues have a significant impact on the effectiveness of both the image processing methods and the analytical tools introduced to date. From this survey, it is concluded that image preprocessing directly impacts the segmentation process. Moreover, the k-means clustering algorithm was found to be the most suitable technique for segmenting disease-affected leaves. In addition, convolutional neural network (CNN) models were revealed to be extremely powerful and proficient in locating visual patterns in images. Notably, the use of computer vision and artificial intelligence for crop diagnostics in the agricultural sector is still recent, meaning that numerous alternatives and opportunities remain to be explored, some of which may help mitigate the above-mentioned challenges. Additionally, with the increase in available computing power, previously demanding strategies can now be easily executed.
Thus, based on this in-depth study of the existing literature on the automatic detection of crop foliar diseases, in upcoming work the authors intend to develop an efficient, accurate, low-cost, and fast system capable of identifying crop diseases from foliar images. This identification system will be implemented in a mobile application that sends an alert to farmers as soon as a disease is detected, enabling them to intervene as early as possible.

Supplementary Materials

The following are available online at https://www.mdpi.com/article/10.3390/agriculture12010009/s1, Figure S1: Difference between traditional machine learning and deep learning; Figure S2: PRISMA 2020 flow diagram for new systematic reviews which included searchers of databases and registers only.

Author Contributions

Writing—original draft preparation, H.O. and M.S.; writing—review and editing, M.S. and M.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We are grateful to the reviewers for their work and valuable comments that helped us improve the quality of this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used throughout this manuscript.
AI: Artificial Intelligence
ML: Machine Learning
DL: Deep Learning
TL: Transfer Learning
CV: Computer Vision
IoT: Internet of Things
IP: Image Processing
WSN: Wireless Sensor Network
NN: Neural Network
ANN: Artificial Neural Network
CNN: Convolutional Neural Network
DNN: Deep Neural Network
RNN: Recurrent Neural Network
PNN: Probabilistic Neural Network
BBPN: Back-Propagation Neural Network
GOA: Grasshopper Optimization Algorithm
DAML: Deep Adversarial Metric Learning
KNN: K-Nearest Neighbor
SVM: Support Vector Machine
M-SVM: Multiclass Support Vector Machine
LS-SVM: Least Squares Support Vector Machine
RF: Random Forest
PDA: Penalized Discriminant Analysis
ELM: Extreme Learning Machine
NB: Naïve Bayes
VGG: Visual Geometry Group
ResNet: Residual Neural Network
C-GAN: Conditional Generative Adversarial Network
GPU: Graphics Processing Unit
MEAN block: Mobile End Apple Net block
FSL: Few-Shot Learning
GA: Genetic Algorithm
PSO: Particle Swarm Optimization
PCA: Principal Component Analysis
LDA: Linear Discriminant Analysis
PLS-DA: Partial Least Squares Discriminant Analysis
SPA: Successive Projection Algorithm
SAM: Spectral Angle Mapper
BBCH: Biologische Bundesanstalt, Bundessortenamt and CHemical industry
RBM: Restricted Boltzmann Machine
AE: Auto-Encoder
EW: Effective Wavelengths
GLCM: Grey Level Co-occurrence Matrix
CPDA: Color Processing Detection Algorithm
GUI: Graphical User Interface
LBP: Local Binary Patterns
HOG: Histogram of Oriented Gradients
HSV: Hue Saturation Value
RoI: Region of Interest
HPCCDD: Homogeneous Pixel Counting technique for Cotton Disease Detection
CMD: Cassava Mosaic Disease
RMD: Red Mite Damage
GMD: Green Mite Damage
TMV: Tobacco Mosaic Virus
VNIR: Visible and Near-Infrared
SWIR: Short Wavelength Infrared
ENVI: Environment for Visualizing Images
KL: Kullback-Leibler
ABT: Area-Based Techniques
SIFT: Scale Invariant Feature Transform
MCF: Marginalized Corrupted Features
LMNN: Large Margin Nearest Neighbor
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

References

  1. Lucas, G.B.; Campbell, C.L.; Lucas, L.T. Causes of plant diseases. In Introduction to Plant Diseases; Springer: Berlin/Heidelberg, Germany, 1992; pp. 9–14.
  2. Shirahatti, J.; Patil, R.; Akulwar, P. A survey paper on plant disease identification using machine learning approach. In Proceedings of the 2018 3rd International Conference on Communication and Electronics Systems (ICCES), Coimbatore, India, 15–16 October 2018; pp. 1171–1174.
  3. Liu, L.; Dong, Y.; Huang, W.; Du, X.; Ren, B.; Huang, L.; Zheng, Q.; Ma, H. A disease index for efficiently detecting wheat fusarium head blight using sentinel-2 multispectral imagery. IEEE Access 2020, 8, 52181–52191.
  4. Prasad, R.; Ranjan, K.R.; Sinha, A. AMRAPALIKA: An expert system for the diagnosis of pests, diseases, and disorders in Indian mango. Knowl.-Based Syst. 2006, 19, 9–21.
  5. Singh, V.; Misra, A.K. Detection of plant leaf diseases using image segmentation and soft computing techniques. Inf. Process. Agric. 2017, 4, 41–49.
  6. Afifi, A.; Alhumam, A.; Abdelwahab, A. Convolutional neural network for automatic identification of plant diseases with limited data. Plants 2021, 10, 28.
  7. Mugithe, P.K.; Mudunuri, R.V.; Rajasekar, B.; Karthikeyan, S. Image processing technique for automatic detection of plant diseases and alerting system in agricultural farms. In Proceedings of the 2020 International Conference on Communication and Signal Processing (ICCSP), Chennai, India, 28–30 July 2020; pp. 1603–1607.
  8. Parikshith, H.; Rajath, S.N.; Kumar, S.P. Leaf disease detection using image processing and artificial intelligence—A survey. In Proceedings of the International Conference on Computational Vision and Bio Inspired Computing, Coimbatore, India, 25–26 September 2019; pp. 304–311.
  9. Sethy, P.K.; Barpanda, N.K.; Rath, A.K.; Behera, S.K. Deep feature based rice leaf disease identification using support vector machine. Comput. Electron. Agric. 2020, 175, 105527.
  10. Mitra, D. Emerging plant diseases: Research status and challenges. Emerg. Trends Plant Pathol. 2021, 1–17.
  11. Ichiki, T.U.; Shiba, T.; Matsukura, K.; Ueno, T.; Hirae, M.; Sasaya, T. Detection and diagnosis of rice-infecting viruses. Front. Microbiol. 2013, 4, 289.
  12. Lacomme, C.; Holmes, R.; Evans, F. Molecular and serological methods for the diagnosis of viruses in potato tubers. In Plant Pathology; Springer: Berlin/Heidelberg, Germany, 2015; pp. 161–176.
  13. Balodi, R.; Bisht, S.; Ghatak, A.; Rao, K. Plant disease diagnosis: Technological advancements and challenges. Indian Phytopathol. 2017, 70, 275–281.
  14. Bachika, N.A.; Hashima, N.; Wayayoka, A.; Mana, H.C.; Alia, M.M. Optical imaging techniques for rice diseases detection: A review. J. Agric. Food Eng. 2020, 2. Available online: https://www.myjafe.com/wp-content/uploads/2020/04/MYJAFE2020-0001.pdf (accessed on 15 December 2021).
  15. Galletti, P.A.; Carvalho, M.E.; Hirai, W.Y.; Brancaglioni, V.A.; Arthur, V.; da Silva, C.B. Integrating optical imaging tools for rapid and non-invasive characterization of seed quality: Tomato (Solanum lycopersicum L.) and carrot (Daucus carota L.) as study cases. Front. Plant Sci. 2020, 11, 577851.
  16. Bauriegel, E.; Herppich, W.B. Hyperspectral and chlorophyll fluorescence imaging for early detection of plant diseases, with special reference to Fusarium spec. infections on wheat. Agriculture 2014, 4, 32–57.
  17. Mishra, P.; Polder, G.; Vilfan, N. Close range spectral imaging for disease detection in plants using autonomous platforms: A review on recent studies. Curr. Robot. Rep. 2020, 1, 43–48.
  18. Neupane, K.; Baysal-Gurel, F. Automatic identification and monitoring of plant diseases using unmanned aerial vehicles: A review. Remote Sens. 2021, 13, 3841.
  19. Abbas, A.; Jain, S.; Gour, M.; Vankudothu, S. Tomato plant disease detection using transfer learning with C-GAN synthetic images. Comput. Electron. Agric. 2021, 187, 106279.
  20. Hamdani, H.; Septiarini, A.; Sunyoto, A.; Suyanto, S.; Utaminingrum, F. Detection of oil palm leaf disease based on color histogram and supervised classifier. Optik 2021, 245, 167753.
  21. Sun, H.; Xu, H.; Liu, B.; He, D.; He, J.; Zhang, H.; Geng, N. MEAN-SSD: A novel real-time detector for apple leaf diseases using improved light-weight convolutional neural networks. Comput. Electron. Agric. 2021, 189, 106379.
  22. Pothen, M.E.; Pai, M.L. Detection of rice leaf diseases using image processing. In Proceedings of the 2020 Fourth International Conference on Computing Methodologies and Communication (ICCMC), Erode, India, 11–13 March 2020; pp. 424–430.
  23. Kelman, A.; Pelczar Michael, J.; Shurtleff Malcolm, C.; Pelczar Rita, M. Plant Disease. Available online: https://www.britannica.com/science/plant-disease (accessed on 15 December 2021).
  24. Cerda, R.; Avelino, J.; Gary, C.; Tixier, P.; Lechevallier, E.; Allinne, C. Primary and secondary yield losses caused by pests and diseases: Assessment and modeling in coffee. PLoS ONE 2017, 12, e0169133.
  25. Kaur, M.; Bhatia, R. Leaf disease detection and classification: A comprehensive survey. In Proceedings of the International Conference on IoT Inclusive Life (ICIIL 2019), Chandigarh, India, 19–20 December 2019; pp. 291–304.
  26. Singh, V.; Misra, A. Detection of unhealthy region of plant leaves using image processing and genetic algorithm. In Proceedings of the 2015 International Conference on Advances in Computer Engineering and Applications, Ghaziabad, India, 19–20 March 2015; pp. 1028–1032. [Google Scholar]
  27. Karthika, J.; Santhose, M.; Sharan, T. Disease detection in cotton leaf spot using image processing. J. Phys. Conf. Ser. 2021, 1916, 012224. [Google Scholar] [CrossRef]
  28. Devaraj, A.; Rathan, K.; Jaahnavi, S.; Indira, K. Identification of plant disease using image processing technique. In Proceedings of the 2019 International Conference on Communication and Signal Processing (ICCSP), Chennai, India, 4–6 April 2019; pp. 0749–0753. [Google Scholar]
  29. Mojjada, R.K.; Kumar, K.K.; Yadav, A.; Prasad, B.S.V. Detection of plant leaf disease using digital image processing. Mater. Today Proc. 2020. [Google Scholar] [CrossRef]
  30. Iqbal, M.A.; Talukder, K.H. Detection of potato disease using image segmentation and machine learning. In Proceedings of the 2020 International Conference on Wireless Communications Signal Processing and Networking (WiSPNET), Chennai, India, 4–6 August 2020; pp. 43–47. [Google Scholar]
  31. Srivastava, A.R.; Venkatesan, M. Tea leaf disease prediction using texture-based image processing. In Emerging Research in Data Engineering Systems and Computer Communications; Springer: Berlin/Heidelberg, Germany, 2020; pp. 17–25. [Google Scholar]
  32. Whitmire, C.D.; Vance, J.M.; Rasheed, H.K.; Missaoui, A.; Rasheed, K.M.; Maier, F.W. Using machine learning and feature selection for alfalfa yield prediction. AI 2021, 2, 6. [Google Scholar] [CrossRef]
  33. Ferentinos, K.P. Deep learning models for plant disease detection and diagnosis. Comput. Electron. Agric. 2018, 145, 311–318. [Google Scholar] [CrossRef]
34. Fuentes, A.; Yoon, S.; Kim, S.C.; Park, D.S. A robust deep-learning-based detector for real-time tomato plant diseases and pests recognition. Sensors 2017, 17, 2022.
35. Geetharamani, G.; Pandian, A. Identification of plant leaf diseases using a nine-layer deep convolutional neural network. Comput. Electr. Eng. 2019, 76, 323–338.
36. Chen, J.; Chen, J.; Zhang, D.; Sun, Y.; Nanehkaran, Y.A. Using deep transfer learning for image-based plant disease identification. Comput. Electron. Agric. 2020, 173, 105393.
37. Singh, D.; Jain, N.; Jain, P.; Kayal, P.; Kumawat, S.; Batra, N. PlantDoc: A dataset for visual plant disease detection. In Proceedings of the 7th ACM IKDD CoDS and 25th COMAD, Hyderabad, India, 5–7 January 2020; pp. 249–253.
38. Al-bayati, J.S.H.; Üstündağ, B.B. Evolutionary feature optimization for plant leaf disease detection by deep neural networks. Int. J. Comput. Intell. Syst. 2020, 13, 12–23.
39. Arsenovic, M.; Karanovic, M.; Sladojevic, S.; Anderla, A.; Stefanovic, D. Solving current limitations of deep learning based approaches for plant disease detection. Symmetry 2019, 11, 939.
40. Costa, J.; Silva, C.; Ribeiro, B. Hierarchical deep learning approach for plant disease detection. In Proceedings of the Iberian Conference on Pattern Recognition and Image Analysis, Madrid, Spain, 1–4 July 2019; pp. 383–393.
41. Ramcharan, A.; Baranowski, K.; McCloskey, P.; Ahmed, B.; Legg, J.; Hughes, D.P. Deep learning for image-based cassava disease detection. Front. Plant Sci. 2017, 8, 1852.
42. Too, E.C.; Yujian, L.; Njuki, S.; Yingchun, L. A comparative study of fine-tuning deep learning models for plant disease identification. Comput. Electron. Agric. 2019, 161, 272–279.
43. Mohanty, S.P.; Hughes, D.P.; Salathé, M. Using deep learning for image-based plant disease detection. Front. Plant Sci. 2016, 7, 1419.
44. Zhu, H.; Cen, H.; Zhang, C.; He, Y. Early detection and classification of tobacco leaves inoculated with tobacco mosaic virus based on hyperspectral imaging technique. In Proceedings of the 2016 ASABE Annual International Meeting, Orlando, FL, USA, 17–20 July 2016; p. 1.
45. Zhu, H.; Chu, B.; Zhang, C.; Liu, F.; Jiang, L.; He, Y. Hyperspectral imaging for presymptomatic detection of tobacco disease with successive projections algorithm and machine-learning classifiers. Sci. Rep. 2017, 7, 4125.
46. Cui, S.; Ling, P.; Zhu, H.; Keener, H.M. Plant pest detection using an artificial nose system: A review. Sensors 2018, 18, 378.
47. Ma, J.; Du, K.; Zheng, F.; Zhang, L.; Gong, Z.; Sun, Z. A recognition method for cucumber diseases using leaf symptom images based on deep convolutional neural network. Comput. Electron. Agric. 2018, 154, 18–24.
48. Tran, T.-T.; Choi, J.-W.; Le, T.-T.H.; Kim, J.-W. A comparative study of deep CNN in forecasting and classifying the macronutrient deficiencies on development of tomato plant. Appl. Sci. 2019, 9, 1601.
49. Tian, Y.; Yang, G.; Wang, Z.; Wang, H.; Li, E.; Liang, Z. Apple detection during different growth stages in orchards using the improved YOLO-V3 model. Comput. Electron. Agric. 2019, 157, 417–426.
50. De Luna, R.G.; Dadios, E.P.; Bandala, A.A. Automated image capturing system for deep learning-based tomato plant leaf disease detection and recognition. In Proceedings of the TENCON 2018—2018 IEEE Region 10 Conference, Jeju Island, Korea, 28–31 October 2018; pp. 1414–1419.
51. Bioucas-Dias, J.M.; Plaza, A.; Camps-Valls, G.; Scheunders, P.; Nasrabadi, N.; Chanussot, J. Hyperspectral remote sensing data analysis and future challenges. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–36.
52. Mananze, S.; Pôças, I.; Cunha, M. Retrieval of maize leaf area index using hyperspectral and multispectral data. Remote Sens. 2018, 10, 1942.
53. Rumpf, T.; Mahlein, A.-K.; Steiner, U.; Oerke, E.-C.; Dehne, H.-W.; Plümer, L. Early detection and classification of plant diseases with support vector machines based on hyperspectral reflectance. Comput. Electron. Agric. 2010, 74, 91–99.
54. Ng, W.; Minasny, B.; Malone, B.P.; Sarathjith, M.; Das, B.S. Optimizing wavelength selection by using informative vectors for parsimonious infrared spectra modelling. Comput. Electron. Agric. 2019, 158, 201–210.
55. Wei, C.; Huang, J.; Wang, X.; Blackburn, G.A.; Zhang, Y.; Wang, S.; Mansaray, L.R. Hyperspectral characterization of freezing injury and its biochemical impacts in oilseed rape leaves. Remote Sens. Environ. 2017, 195, 56–66.
56. Xie, C.; Shao, Y.; Li, X.; He, Y. Detection of early blight and late blight diseases on tomato leaves using hyperspectral imaging. Sci. Rep. 2015, 5, 16564.
57. Chen, W.-L.; Lin, Y.-B.; Ng, F.-L.; Liu, C.-Y.; Lin, Y.-W. RiceTalk: Rice blast detection using internet of things and artificial intelligence technologies. IEEE Internet Things J. 2019, 7, 1001–1010.
58. Krishna, M.; Sulthana, S.F.; Sireesha, V.; Prasanna, Y.; Sucharitha, V. Plant disease detection and pesticide spraying using DIP and IoT. J. Emerg. Technol. Innov. Res. 2019, 6, 54–58.
59. Mishra, M.; Choudhury, P.; Pati, B. Modified ride-NN optimizer for the IoT based plant disease detection. J. Ambient Intell. Humaniz. Comput. 2021, 12, 691–703.
60. Truong, T.; Dinh, A.; Wahid, K. An IoT environmental data collection system for fungal detection in crop fields. In Proceedings of the 2017 IEEE 30th Canadian Conference on Electrical and Computer Engineering (CCECE), Windsor, ON, Canada, 30 April–3 May 2017; pp. 1–4.
61. Devi, R.D.; Nandhini, S.A.; Hemalatha, R.; Radha, S. IoT enabled efficient detection and classification of plant diseases for agricultural applications. In Proceedings of the 2019 International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET), Chennai, India, 21–23 March 2019; pp. 447–451.
62. Kumar, S.; Prasad, K.; Srilekha, A.; Suman, T.; Rao, B.D.; Krishna, J.N.V. Leaf disease detection and classification based on machine learning. In Proceedings of the 2020 International Conference on Smart Technologies in Computing, Electrical and Electronics (ICSTCEE), Bengaluru, India, 10–11 July 2020; pp. 361–365.
63. Mallick, D.K.; Ray, R.; Dash, S.R. Detection and classification of crop diseases from its leaves using image processing. In Smart Intelligent Computing and Applications; Springer: Berlin/Heidelberg, Germany, 2020; pp. 215–228.
64. Gurrala, K.K.; Yemineni, L.; Rayana, K.S.R.; Vajja, L.K. A new segmentation method for plant disease diagnosis. In Proceedings of the 2019 2nd International Conference on Intelligent Communication and Computational Techniques (ICCT), Jaipur, India, 28–29 September 2019; pp. 137–141.
65. Karthikeyan, N.; Anjana, M.; Anusha, S.; Divya, R.; Vinod, A. Leaf disease detection using image processing. Int. J. Innov. Sci. Eng. Technol. 2020, 7.
66. Kamilaris, A.; Prenafeta-Boldú, F.X. Deep learning in agriculture: A survey. Comput. Electron. Agric. 2018, 147, 70–90.
67. Usman, A.; Bukht, T.F.N.; Ahmad, R.; Ahmad, J. Plant disease detection using internet of thing (IoT). Plant Dis. 2020, 11, 505–509.
68. Reddy, K.A.; Reddy, N.M.C.; Sujatha, S. Precision method for pest detection in plants using the clustering algorithm in image processing. In Proceedings of the 2020 International Conference on Communication and Signal Processing (ICCSP), Chennai, India, 28–30 July 2020; pp. 894–897.
69. Khan, M.A. Detection and classification of plant diseases using image processing and multiclass support vector machine. Int. J. Comput. Trends Technol. 2020, 68, 5–11.
70. Sawant, C.; Shirgaonkar, M.; Khule, S.; Jadhav, P. Plant disease detection using image processing techniques. 2020. Available online: https://www.semanticscholar.org/paper/Plant-Disease-Detection-using-Image-Processing-Sawant-Shirgaonkar/d9a26b87f1879821fea0cd1944279cd51359d0c5 (accessed on 15 December 2021).
71. Fulari, U.N.; Shastri, R.K.; Fulari, A.N. Leaf disease detection using machine learning. J. Seybold Rep. 2020, 1533, 9211.
72. Ouhami, M.; Es-Saady, Y.; El Hajji, M.; Hafiane, A.; Canals, R.; El Yassa, M. Deep transfer learning models for tomato disease detection. In Proceedings of the International Conference on Image and Signal Processing, Marrakesh, Morocco, 4–6 June 2020; pp. 65–73.
73. Deepa. A pre-processing approach for accurate identification of plant diseases in leaves. In Proceedings of the 2018 International Conference on Electrical, Electronics, Communication, Computer, and Optimization Techniques (ICEECCOT), Mysuru, India, 14–15 December 2018; pp. 249–252.
74. Win, T.T. AI and IoT Methods for Plant Disease Detection in Myanmar; Kobe Institute of Computing: Kobe, Japan, 2018.
75. Sun, G.; Jia, X.; Geng, T. Plant diseases recognition based on image processing technology. J. Electr. Comput. Eng. 2018, 2018, 6070129.
76. Thorat, A.; Kumari, S.; Valakunde, N.D. An IoT based smart solution for leaf disease detection. In Proceedings of the 2017 International Conference on Big Data, IoT and Data Science (BID), Pune, India, 20–22 December 2017; pp. 193–198.
77. Moghadam, P.; Ward, D.; Goan, E.; Jayawardena, S.; Sikka, P.; Hernandez, E. Plant disease detection using hyperspectral imaging. In Proceedings of the 2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA), Sydney, Australia, 29 November–1 December 2017; pp. 1–8.
78. Khirade, S.D.; Patil, A. Plant disease detection using image processing. In Proceedings of the 2015 International Conference on Computing Communication Control and Automation, Pune, India, 26–27 February 2015; pp. 768–771.
79. Arivazhagan, S.; Shebiah, R.N.; Ananthi, S.; Varthini, S.V. Detection of unhealthy region of plant leaves and classification of plant leaf diseases using texture features. Agric. Eng. Int. CIGR J. 2013, 15, 211–217.
80. Han, S.; Cointault, F. Détection précoce de maladies sur feuilles par traitement d’images [Early detection of leaf diseases by image processing]. HAL Open Sci. Available online: https://hal.archives-ouvertes.fr/hal-00829402/document (accessed on 15 December 2021).
81. Revathi, P.; Latha, M.H. Classification of Cotton Leaf Spot Diseases Using Image Processing Edge Detection Techniques; IEEE: Piscataway, NJ, USA, 2012; pp. 169–173.
82. Yerpude, A.; Dubey, S. Colour image segmentation using K-medoids clustering. Int. J. Comput. Technol. Appl. 2012, 3, 152–154.
83. Chaudhary, P.; Chaudhari, A.K.; Cheeran, A.; Godara, S. Color transform based approach for disease spot detection on plant leaf. Int. J. Comput. Sci. Telecommun. 2012, 3, 65–70.
84. Al-Hiary, H.; Bani-Ahmad, S.; Reyalat, M.; Braik, M.; Alrahamneh, Z. Fast and accurate detection and classification of plant diseases. Int. J. Comput. Appl. 2011, 17, 31–38.
85. Bauriegel, E.; Giebel, A.; Geyer, M.; Schmidt, U.; Herppich, W. Early detection of Fusarium infection in wheat using hyper-spectral imaging. Comput. Electron. Agric. 2011, 75, 304–312.
86. Janiesch, C.; Zschech, P.; Heinrich, K. Machine learning and deep learning. Electron. Mark. 2021, 1–11.
87. Panigrahi, K.P.; Das, H.; Sahoo, A.K.; Moharana, S.C. Maize leaf disease detection and classification using machine learning algorithms. In Progress in Computing, Analytics and Networking; Springer: Singapore, 2020; pp. 659–669.
88. Zhao, J.; Xu, C.; Xu, J.; Huang, L.; Zhang, D.; Liang, D. Forecasting the wheat powdery mildew (Blumeria graminis f. sp. tritici) using a remote sensing-based decision-tree classification at a provincial scale. Australas. Plant Pathol. 2018, 47, 53–61.
89. Zhang, J.; Pu, R.; Yuan, L.; Huang, W.; Nie, C.; Yang, G. Integrating remotely sensed and meteorological observations to forecast wheat powdery mildew at a regional scale. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 4328–4339.
90. Fenu, G.; Malloci, F.M. Artificial intelligence technique in crop disease forecasting: A case study on potato late blight prediction. In Proceedings of the International Conference on Intelligent Decision Technologies, Split, Croatia, 17–19 June 2020; pp. 79–89.
91. Fenu, G.; Malloci, F.M. An application of machine learning technique in forecasting crop disease. In Proceedings of the 2019 3rd International Conference on Big Data Research, Paris, France, 20–22 November 2019; pp. 76–82.
92. Hsieh, J.-Y.; Huang, W.; Yang, H.-T.; Lin, C.-C.; Fan, Y.-C.; Chen, H. Building the Rice Blast Disease Prediction Model Based on Machine Learning and Neural Networks; EasyChair: Manchester, UK, 2019.
93. Kim, Y.; Roh, J.-H.; Kim, H.Y. Early forecasting of rice blast disease using long short-term memory recurrent neural networks. Sustainability 2018, 10, 34.
94. Duarte-Carvajalino, J.M.; Alzate, D.F.; Ramirez, A.A.; Santa-Sepulveda, J.D.; Fajardo-Rojas, A.E.; Soto-Suárez, M. Evaluating late blight severity in potato crops using unmanned aerial vehicles and machine learning algorithms. Remote Sens. 2018, 10, 1513.
95. Mahlein, A.-K.; Kuska, M.T.; Behmann, J.; Polder, G.; Walter, A. Hyperspectral sensors and imaging technologies in phytopathology: State of the art. Annu. Rev. Phytopathol. 2018, 56, 535–558.
96. Kadir, A.; Nugroho, L.E.; Susanto, A.; Santosa, P.I. Performance improvement of leaf identification system using principal component analysis. Int. J. Adv. Sci. Technol. 2012, 44, 113–124.
97. Turkoglu, M.; Hanbay, D. Recognition of plant leaves: An approach with hybrid features produced by dividing leaf images into two and four parts. Appl. Math. Comput. 2019, 352, 1–14.
98. Ngugi, L.C.; Abelwahab, M.; Abo-Zahhad, M. Recent advances in image processing techniques for automated leaf pest and disease recognition—A review. Inf. Process. Agric. 2021, 8, 27–51.
99. Ahmed, K.; Shahidi, T.R.; Alam, S.M.I.; Momen, S. Rice leaf disease detection using machine learning techniques. In Proceedings of the 2019 International Conference on Sustainable Technologies for Industry 4.0 (STI), Dhaka, Bangladesh, 24–25 December 2019; pp. 1–5.
100. Barbedo, J.G. Factors influencing the use of deep learning for plant disease recognition. Biosyst. Eng. 2018, 172, 84–91.
101. Sambasivam, G.; Opiyo, G.D. A predictive machine learning application in agriculture: Cassava disease detection and classification with imbalanced dataset using convolutional neural networks. Egypt. Inform. J. 2021, 22, 27–34.
102. Hochreiter, S. Untersuchungen zu dynamischen neuronalen Netzen [Investigations of dynamic neural networks]. Master's Thesis, Technische Universität München, Munich, Germany, 1991.
103. Or, B. The Exploding and Vanishing Gradients Problem in Time Series. Available online: https://towardsdatascience.com/the-exploding-and-vanishing-gradients-problem-in-time-series-6b87d558d22 (accessed on 15 December 2021).
104. Hasan, R.I.; Yusuf, S.M.; Alzubaidi, L. Review of the state of the art of deep learning for plant diseases: A broad analysis and discussion. Plants 2020, 9, 1302.
105. Ahmad, M.; Abdullah, M.; Moon, H.; Han, D. Plant disease detection in imbalanced datasets using efficient convolutional neural networks with stepwise transfer learning. IEEE Access 2021, 9, 140565–140580.
106. Bock, C.; Cook, A.; Parker, P.; Gottwald, T. Automated image analysis of the severity of foliar citrus canker symptoms. Plant Dis. 2009, 93, 660–665.
107. Pydipati, R.; Burks, T.; Lee, W. Identification of citrus disease using color texture features and discriminant analysis. Comput. Electron. Agric. 2006, 52, 49–59.
108. Guo, W.; Rage, U.K.; Ninomiya, S. Illumination invariant segmentation of vegetation for time series wheat images based on decision tree model. Comput. Electron. Agric. 2013, 96, 58–66.
109. Pourreza, A.; Lee, W.S.; Raveh, E.; Ehsani, R.; Etxeberria, E. Citrus Huanglongbing detection using narrow-band imaging and polarized illumination. Trans. ASABE 2014, 57, 259–272.
110. Zhou, R.; Kaneko, S.I.; Tanaka, F.; Kayamori, M.; Shimizu, M. Image-based field monitoring of Cercospora leaf spot in sugar beet by robust template matching and pattern recognition. Comput. Electron. Agric. 2015, 116, 65–79.
111. Olmstead, J.W.; Lang, G.A.; Grove, G.G. Assessment of severity of powdery mildew infection of sweet cherry leaves by digital image analysis. HortScience 2001, 36, 107–111.
112. Moya, E.; Barrales, L.; Apablaza, G. Assessment of the disease severity of squash powdery mildew through visual analysis, digital image analysis and validation of these methodologies. Crop Prot. 2005, 24, 785–789.
113. Oberti, R.; Marchi, M.; Tirelli, P.; Calcante, A.; Iriti, M.; Borghese, A.N. Automatic detection of powdery mildew on grapevine leaves by image analysis: Optimal view-angle range to increase the sensitivity. Comput. Electron. Agric. 2014, 104, 1–8.
114. Ahmad, I.S.; Reid, J.F.; Paulsen, M.R.; Sinclair, J.B. Color classifier for symptomatic soybean seeds using image processing. Plant Dis. 1999, 83, 320–327.
115. Wiwart, M.; Fordoński, G.; Żuk-Gołaszewska, K.; Suchowilska, E. Early diagnostics of macronutrient deficiencies in three legume species by color image analysis. Comput. Electron. Agric. 2009, 65, 125–132.
116. Camargo, A.; Smith, J. An image-processing based algorithm to automatically identify plant disease visual symptoms. Biosyst. Eng. 2009, 102, 9–21.
117. Liu, J.; Wang, X. Plant diseases and pests detection based on deep learning: A review. Plant Methods 2021, 17, 1–18.
118. Hughes, D.; Salathé, M. An open access repository of images on plant health to enable the development of mobile disease diagnostics. arXiv 2015, arXiv:1511.08060.
119. Cucci, C.; Casini, A. Hyperspectral imaging for artworks investigation. In Data Handling in Science and Technology; Elsevier: Amsterdam, The Netherlands, 2020; Volume 32, pp. 583–604.
120. Alzubaidi, L.; Zhang, J.; Humaidi, A.J.; Al-Dujaili, A.; Duan, Y.; Al-Shamma, O.; Santamaría, J.; Fadhel, M.A.; Al-Amidie, M.; Farhan, L. Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions. J. Big Data 2021, 8, 1–74.
121. Shorten, C.; Khoshgoftaar, T.M. A survey on image data augmentation for deep learning. J. Big Data 2019, 6, 1–48.
122. Laurent, C.; Pereyra, G.; Brakel, P.; Zhang, Y.; Bengio, Y. Batch normalized recurrent neural networks. In Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China, 20–25 March 2016; pp. 2657–2661.
123. Zhang, G.; Wang, C.; Xu, B.; Grosse, R. Three mechanisms of weight decay regularization. arXiv 2018, arXiv:1810.12281.
124. Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958.
125. Maaten, L.; Chen, M.; Tyree, S.; Weinberger, K. Learning with marginalized corrupted features. In Proceedings of the International Conference on Machine Learning, Atlanta, GA, USA, 16–21 June 2013; pp. 410–418.
126. Pereyra, G.; Tucker, G.; Chorowski, J.; Kaiser, Ł.; Hinton, G. Regularizing neural networks by penalizing confident output distributions. arXiv 2017, arXiv:1701.06548.
127. Kadam, S.; Vaidya, V. Review and analysis of zero, one and few shot learning approaches. In Proceedings of the International Conference on Intelligent Systems Design and Applications, Vellore, India, 6–8 December 2018; pp. 100–112.
128. Zhuang, F.; Qi, Z.; Duan, K.; Xi, D.; Zhu, Y.; Zhu, H.; Xiong, H.; He, Q. A comprehensive survey on transfer learning. Proc. IEEE 2020, 109, 43–76.
129. Chatterjee, S.K.; Malik, O.; Gupta, S. Chemical sensing employing plant electrical signal response-classification of stimuli using curve fitting coefficients as features. Biosensors 2018, 8, 83.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.