Article

The Detection and Classification of Grape Leaf Diseases with an Improved Hybrid Model Based on Feature Engineering and AI

1 Department of Computer Engineering, Faculty of Engineering and Natural Sciences, Maltepe University, Istanbul 34857, Turkey
2 Department of Software Engineering, Faculty of Engineering and Natural Sciences, Malatya Turgut Özal University, Malatya 44200, Turkey
* Author to whom correspondence should be addressed.
AgriEngineering 2025, 7(7), 228; https://doi.org/10.3390/agriengineering7070228
Submission received: 16 June 2025 / Revised: 7 July 2025 / Accepted: 8 July 2025 / Published: 9 July 2025
(This article belongs to the Special Issue Implementation of Artificial Intelligence in Agriculture)

Abstract

Many products are obtained from grapes. The early detection of disease in such an economically important fruit is essential, because the spread of disease significantly increases financial losses. In recent years, artificial intelligence techniques have achieved very successful results in image classification. Therefore, the early detection and classification of grape diseases using recent artificial intelligence and feature reduction techniques was carried out within the scope of this study. The methods used in this article are well-known convolutional neural network (CNN) architectures, the texture-based Local Binary Pattern (LBP) and Histogram of Oriented Gradients (HOG) methods, the Neighborhood Component Analysis (NCA) feature reduction method, and machine learning (ML) techniques. The proposed hybrid model was compared with two texture-based and four CNN models. The features from the most successful CNN model and the texture-based architectures were combined. The NCA method was used to select the best features from the resulting feature map, and the reduced feature map was classified using well-known ML classifiers. Our proposed model achieved an accuracy of 99.1%. This value shows that our model can be used in the detection of grape diseases.

1. Introduction

Grape growing is carried out in many parts of the world. China accounts for a large share of grape production, and grape cultivation is a major component of China's extensive fruit industry; it is estimated that approximately 13 million tons were produced in 2017. Every year, diseases in grape plants cause a huge amount of crop loss worldwide, and a significant portion of this production loss is due to various plant diseases and pests [1,2]. Failure to detect these diseases at an early stage results in the loss of both manpower and time, and these losses in turn lead to financial losses. Computer-aided image recognition, diagnosis, and classification systems have been used effectively in many areas in recent years, including agriculture. The main reason for the use of these systems is their success: because these models are trained with large amounts of data, they often achieve high accuracy rates [3]. It is well known that CNNs and texture-based architectures have achieved very successful results in computer-aided diagnosis systems, especially in applications using image datasets. A key advantage of these architectures is that they are not subject to human error, and they generally do not require any pre-processing of the data. They can assist experts in many cases where diseases are difficult to distinguish by eye, thereby reducing workload and saving time. The use of deep learning techniques in vineyard farming contributes to sustainability in agricultural production through accurate and timely diagnoses [4]. Consequently, it has the potential to enhance productivity by ensuring the more effective use of resources. The grape plant is cultivated in a variety of geographical regions worldwide. Although the diseases affecting grape leaves vary between regions, the present study was undertaken with the objective of diagnosing the most prevalent disease types: Black Measles, Black Rot, and Isariopsis Leaf Spot.

1.1. Contribution and Novelty

  • A computer-aided system was developed to assist experts in the diagnosis of grape leaf diseases. A hybrid model was proposed using ML classifiers, the NCA method, and LBP, HOG, and DenseNet201 architectures together.
  • The proposed hybrid model was created by combining features from both CNNs and texture-based models.
  • The proposed model achieved a highly competitive accuracy of 99.1% in the detection and diagnosis of grape leaf diseases.

1.2. Related Works

Lin et al. developed the GrapeNet model to facilitate the rapid identification of grapevine leaf diseases and to be applicable to smart and mobile devices, thereby aiming to save time and labor costs. On the AI Challenger 2018 dataset, a seven-class convolutional neural network structure was used to distinguish one healthy and three diseased leaf types, and a total of 2850 grape leaf images were divided into training (90%) and test (10%) sets. Operations such as rotation, color enhancement, contrast enhancement, and Gaussian denoising were performed on the training set to increase the number of images. The cross-entropy loss function was used, and the Adam optimization technique was used to optimize the model. In addition, the Grad-CAM visualization technique was used to visualize the output feature maps. The hyperparameters during training were an initial learning rate of 0.0001, a batch size of 64, and 120 iterations. It was stated that the image augmentation method increased the success of the model by 4% and that the proposed model achieved an accuracy of 86.25% [5].
Liu et al. proposed a new recognition approach based on a convolutional neural network model developed for the diagnosis of grape leaf diseases. A total of 7669 images, 4023 of which were collected from the field and 3646 from publicly available datasets, were expanded using image enhancement and image augmentation techniques into a dataset of 107,366 grape leaf images. In the study, the Gaussian technique and the Inception neural network architecture were used, and the Adam optimization method was used to optimize the features. It was stated that their model reached 97.22% accuracy on a seven-class image classification task with 30 epochs and a learning rate of 0.01 [1].
Padol et al. used an SVM classification technique to detect and classify grape leaf diseases. They applied histogram equalization and a Gaussian filter as pre-processing to the 137 images in their dataset and extracted useful features from the diseased areas. Finally, they used SVM classification to classify grape leaf disease categories and achieved an accuracy of 88.89% [6].
Zhang et al. proposed a lightweight segmentation architecture called UPFormer for the detection of grape leaf disease images. They compared their segmentation-based model with CNN-, ViT-, and CNN-ViT-based models and stated that it was more successful. The researchers tested their model on the Field-PV, Syn-PV, and PlantVillage datasets [7].
Karthik et al. classified grape leaf diseases with a deep learning-based model that combines Swin Transformer and Group Shuffle Residual DeformNet tracks in a new dual-track network. They stated that the proposed model achieved 98.6% accuracy on the PlantVillage dataset [8].
Wang et al. applied principal component analysis, a feature reduction method, to images in a two-class dataset and then developed a model using back-propagation networks, neural networks, generalized regression networks, and probabilistic neural network techniques. The disease classification accuracy of their model was reported as 94.29% [9].
When the related studies were examined, it was observed that similar datasets were used. In this study, the model we proposed is based on feature extraction, feature concatenation, and feature selection, unlike pre-trained models. This allows the model we proposed to produce more successful results.

1.3. Organization of Paper

The second section of the study provides information regarding the dataset, deep models, texture-based architectures, dimensionality reduction methods, proposed model, and ML classifiers utilized in this article. In the third section, experimental results obtained from both pre-trained models and the proposed model are given. The fourth section includes the discussion, and the fifth section includes the conclusion.

2. Materials and Methods

In this section, firstly, information is given about the dataset used during the experiments. Then, CNNs and texture-based architectures are explained. Also, the feature reduction and classification methods are explained. Finally, the hybrid model proposed in this study is explained.

2.1. Dataset

In this study, a publicly available dataset with a total of 4 classes, including 3 diseased and 1 healthy grape leaf image types, was used [10]. There were a total of 800 images in the dataset, with 200 images in each class. These classes comprised Black Measles, Black Rot, Healthy, and Isariopsis Leaf Spot. Random sample images selected from the classes in the dataset are shown in Figure 1.

2.2. Feature Extraction and Selection Algorithms

In recent years, CNN architectures have been used effectively in many areas, especially in image processing and object detection. CNN architectures generally consist of Input, Convolutional, ReLU, Pooling, Fully Connected, and Output layers. They are successfully used in many areas such as face recognition, medical image analysis, disease classification, anomaly detection, and autonomous vehicle development applications. Their superior success rates in image classification have made them very popular in recent years. Not requiring expert knowledge makes CNN architectures powerful. In this way, time and cost savings can be achieved. The CNN architectures used in this study were DenseNet201 [11], EfficientNetb0 [12], InceptionV3 [13], and ShuffleNet [14].
There are a total of 201 layers in the DenseNet201 architecture. The main feature of this architecture is that each layer concatenates the information it receives from all the previous layers. This architecture, which has approximately 20 million parameters, accepts input images of size 224 × 224 × 3. The EfficientNetb0 architecture is a member of the EfficientNet family (versions b0 to b7) with approximately 5.3 million parameters; it generally performs well on devices with low computational power. The InceptionV3 architecture was developed by Google. It accepts input images of size 299 × 299 × 3, has approximately 23 million parameters, and achieves high accuracy without using very deep layers; it is known as the third version of the GoogLeNet (Inception) architecture. The ShuffleNet architecture is a CNN model that can run on weak hardware. It accepts 224 × 224 × 3 images as input, has approximately 1.4 million parameters, and is not very suitable for datasets with large amounts of data.
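As a concrete illustration, the sketch below shows how a 1000-dimensional feature vector can be taken from the final fully connected layer of DenseNet201, as is done later in Section 2.3. This is a minimal sketch in Python with PyTorch/torchvision, standing in for the authors' MATLAB pipeline; the file name "leaf.jpg" is hypothetical.

```python
# Minimal sketch: extract the 1000-dimensional output of DenseNet201's
# final fully connected layer for one image (PyTorch stand-in for the
# paper's MATLAB implementation; "leaf.jpg" is a hypothetical file name).
import torch
from torchvision import models, transforms
from PIL import Image

model = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),  # DenseNet201 expects 224 x 224 x 3 input
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("leaf.jpg").convert("RGB")
with torch.no_grad():
    features = model(preprocess(img).unsqueeze(0))  # shape: (1, 1000)
```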
In this study, 6 different machine learning classifiers well known in the literature were used: Support Vector Machine (SVM) [15], Decision Tree (DT) [16], k-Nearest Neighbors (KNN) [17], neural network (NN) [18], Logistic Regression (LR) [19], and Naive Bayes (NB) [20].
In order to shorten the training time of the model and obtain successful results, the NCA supervised dimensionality reduction technique was applied to the feature map. This technique is essentially based on the probability of selecting neighbors. NCA acts with the logic of bringing data points from the same class closer to each other while moving data points from different classes away [21].
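As an illustration of this idea, the sketch below uses scikit-learn's NeighborhoodComponentsAnalysis. One assumption to note: scikit-learn's NCA learns a supervised linear projection, whereas a MATLAB-style implementation such as fscnca ranks individual features by weight; both follow the same neighborhood principle. The arrays here are random stand-ins for the real feature map.

```python
# Minimal sketch of NCA-style supervised dimensionality reduction with
# scikit-learn (random stand-in data; sklearn projects features rather
# than selecting them, unlike MATLAB's fscnca-style feature weighting).
import numpy as np
from sklearn.neighbors import NeighborhoodComponentsAnalysis

rng = np.random.default_rng(0)
X = rng.random((800, 200))             # stand-in feature map (samples x features)
y = rng.integers(0, 4, size=800)       # four class labels

nca = NeighborhoodComponentsAnalysis(n_components=76, random_state=0)
X_reduced = nca.fit_transform(X, y)    # shape: (800, 76)
```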
In this study, the texture-based LBP and HOG techniques were used. The first of these, the LBP architecture, has been successfully used in a wide variety of applications including object recognition, face detection, and texture classification. It is a very useful computer vision method based on appearance features. The LBP method works as follows: each pixel in a selected 3 × 3 neighborhood takes the value 0 or 1 depending on whether it is smaller or larger than the center pixel, so the neighborhood is re-expressed in binary [22]. Reading the bits from the top left with the weights $2^0, 2^1, \ldots, 2^7$ yields an 8-bit binary number; the corresponding decimal number is written as the new value of the center pixel. This process is then applied to the entire image. The working principle of LBP is shown in Figure 2.
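The sketch below illustrates this neighborhood coding with scikit-image's local_binary_pattern; the parameters (8 neighbors, radius 1, 256-bin histogram) approximate the 3 × 3 case described above and are illustrative assumptions, as is the file name.

```python
# Minimal sketch of LBP coding and histogram extraction with scikit-image
# (P=8, R=1 approximates the 3 x 3 neighborhood; "leaf.jpg" is hypothetical).
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import local_binary_pattern
from skimage.io import imread

gray = (rgb2gray(imread("leaf.jpg")) * 255).astype(np.uint8)
codes = local_binary_pattern(gray, P=8, R=1)          # per-pixel LBP codes 0..255
hist, _ = np.histogram(codes, bins=256, range=(0, 256), density=True)
```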
HOG is a feature extraction method that uses histograms of oriented gradients to represent local shape and edge structures in an image. This technique performs particularly well in applications such as object detection. It is based on dividing the image into cells and computing a weighted histogram of gradient directions in each cell [23,24]. Figure 3 shows the determination of the gradient direction region, and Figure 4 shows the histograms of the gradients in each part of the image according to the direction regions.
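For concreteness, the sketch below computes a HOG descriptor with scikit-image. The resize target, cell, and block sizes are illustrative assumptions rather than the paper's settings (which would determine the 1296-feature length reported later).

```python
# Minimal sketch of HOG feature extraction with scikit-image
# (cell/block sizes are illustrative; "leaf.jpg" is hypothetical).
from skimage.color import rgb2gray
from skimage.feature import hog
from skimage.io import imread
from skimage.transform import resize

gray = resize(rgb2gray(imread("leaf.jpg")), (128, 128))
features = hog(gray,
               orientations=9,            # direction bins per histogram
               pixels_per_cell=(16, 16),  # cell size
               cells_per_block=(2, 2),    # block normalization window
               feature_vector=True)
```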

2.3. Proposed Model

In this study, a hybrid model is proposed that performs automatic disease detection for both diseased and disease-free grape leaf images, where texture- and CNN-based architectures work together in feature extraction processes.
In this study, firstly, 4 different well-known CNN-based architectures were run independently on the same dataset under equal conditions. As a result of these experiments, the DenseNet201 model, which was the most successful among the 4 architectures, formed the basis of our proposed hybrid model.
The proposed model combines feature maps obtained from the texture-based LBP and HOG architectures with feature maps of different sizes obtained from the CNN-based DenseNet201 architecture. The aim here is to examine in depth the different features of each image in the dataset. In this way, the obtained feature map will be much more comprehensive.
In order to shorten the training process of the model and to work more effectively, the NCA method was used to select the most valuable features from the quite comprehensive feature map. For each image in the dataset, 1000 features from the last Fully Connected Layer of the DenseNet201 architecture, 2891 features from LBP, and 1296 features from HOG were concatenated. Thus, the size of the new feature map of the dataset became 800 × 5187. The size of the feature map obtained after the NCA technique was applied was 800 × 76. The reduced feature map obtained with the NCA method was tested with 6 different classifiers. The block diagram of the proposed model is given in Figure 5. When Figure 5 is examined, it can be observed that NCA selects the 76 most valuable features out of a total of 5187 features.
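A minimal sketch of this concatenation step is given below; the arrays are random stand-ins with the per-method feature counts stated above, and NCA reduction would follow as in the earlier sketch.

```python
# Minimal sketch: concatenate DenseNet201, LBP, and HOG features into the
# 800 x 5187 map described in the text (arrays are random stand-ins).
import numpy as np

cnn_feats = np.random.rand(800, 1000)   # DenseNet201 fully connected outputs
lbp_feats = np.random.rand(800, 2891)   # LBP histograms
hog_feats = np.random.rand(800, 1296)   # HOG descriptors

feature_map = np.hstack([cnn_feats, lbp_feats, hog_feats])
assert feature_map.shape == (800, 5187)
```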

3. Results

Equal conditions were maintained in all the experiments conducted on the grape leaf diseases dataset. The computer used in the experiments had an i7 processor, 32 GB of memory, and a 6 GB graphics card. The results were obtained in the MATLAB 2024b environment. In the experiments where machine learning methods were used as classifiers, the cross-validation value was set to 5. Default metrics were used in all the classifiers to compare the models under the same conditions. F-score (F1), accuracy (Acc), Specificity (Spc), Precision (Pre), Sensitivity (Sens), False Discovery Rate (FDR), False Negative Rate (FNR), and False Positive Rate (FPR) [26] values were used to measure the performance of the deep models. The Black Measles, Black Rot, Healthy, and Isariopsis Leaf Spot classes in the confusion matrices are represented by the labels 1, 2, 3, and 4, respectively.
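As a reference for how these values relate to a confusion matrix, the sketch below computes the listed per-class metrics in a one-vs-rest fashion; it is an assumption-level illustration, not the authors' code.

```python
# Minimal sketch: per-class metrics from a multi-class confusion matrix,
# computed one-vs-rest (cm[i, j] = true class i predicted as class j).
import numpy as np

def per_class_metrics(cm: np.ndarray, k: int) -> dict:
    tp = cm[k, k]
    fn = cm[k, :].sum() - tp
    fp = cm[:, k].sum() - tp
    tn = cm.sum() - tp - fn - fp
    sens = tp / (tp + fn)                    # Sensitivity (recall)
    pre = tp / (tp + fp)                     # Precision
    return {
        "Acc": (tp + tn) / cm.sum(),
        "Spc": tn / (tn + fp),
        "Sens": sens,
        "Pre": pre,
        "FPR": fp / (fp + tn),
        "F1": 2 * pre * sens / (pre + sens),
        "FNR": fn / (fn + tp),
        "FDR": fp / (fp + tp),
    }
```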

3.1. Results of Pre-Trained CNN Models

These experiments, in which four distinct deep architectures were implemented, were conducted in the MATLAB environment. Each architecture was trained for five epochs and 425 iterations. The mini-batch size was set to 8, the SGDM optimizer was selected, and the learning rate was set to 1 × 10⁻⁴. During the experimental phase, 85% of the dataset was allocated for training, while the remaining 15% was designated for testing. The accuracy values obtained from the deep architectures are presented in Table 1.
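Expressed in code, the configuration above corresponds roughly to the following PyTorch sketch, offered as a stand-in for the MATLAB training setup; the dataset path is hypothetical.

```python
# Minimal sketch of the reported training setup: SGDM optimizer, learning
# rate 1e-4, mini-batch size 8, 5 epochs, 85/15 train/test split
# (PyTorch stand-in; "grape_leaves/" is a hypothetical dataset path).
import torch
from torch import nn
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
data = datasets.ImageFolder("grape_leaves/", transform=tfm)
n_train = int(0.85 * len(data))
train_set, test_set = random_split(data, [n_train, len(data) - n_train])
loader = DataLoader(train_set, batch_size=8, shuffle=True)

model = models.densenet201(weights="IMAGENET1K_V1")
model.classifier = nn.Linear(model.classifier.in_features, 4)  # 4 classes
opt = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)  # "sgdm"
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for x, y in loader:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
```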
As illustrated in Table 1, DenseNet201 demonstrated the highest level of accuracy among the four deep architectures examined. In addition, the architecture with the lowest performance among these architectures was observed to be EfficientNetb0. The confusion matrices of the deep architectures presented in Table 1 are provided in Table 2.
Upon thorough examination of Table 2, it can be determined that the DenseNet201 architecture exhibited a classification accuracy of 96.67%, with 116 of the 120 test images correctly classified and only 4 images misclassified. A class-level examination of DenseNet201 revealed its best performance in the Isariopsis Leaf Spot class: the model classified all 30 test images in this class correctly, a 100% accuracy rate. The poorest classification performance was in the Black Measles class, where the model correctly identified 28 of the 30 test images, an accuracy of 93.33%. The DenseNet201 architecture achieved equal accuracy rates in the Black Rot and Healthy classes.

3.2. Results of Deep Models, ML, and Feature Extraction

In the subsequent phase of the study, the DenseNet201 architecture, which demonstrated the highest performance in the initial phase, was evaluated using ML classifiers. During this process, the cross-validation value was set to five, and six distinct ML classifiers were utilized. No optimization procedures were applied to the feature maps. Furthermore, the texture-based LBP and HOG methods were evaluated in conjunction with the ML classifiers; likewise, no feature reduction was applied to the feature maps obtained from the LBP and HOG methods. The performance of these two methods was measured with the six ML classifiers. The accuracy values of the CNN- and texture-based architectures across the ML classifiers are shown in Table 3.
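A minimal sketch of this 5-fold evaluation is shown below with a few scikit-learn classifiers; the feature matrix is a random stand-in for the extracted maps, and the default hyperparameters mirror the default settings described above.

```python
# Minimal sketch: 5-fold cross-validated comparison of ML classifiers on a
# stand-in feature map (defaults used, as in the experiments described).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X = np.random.rand(800, 1000)            # stand-in DenseNet201 features
y = np.random.randint(0, 4, size=800)    # four class labels

for name, clf in [("SVM", SVC()),
                  ("DT", DecisionTreeClassifier()),
                  ("KNN", KNeighborsClassifier())]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```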
An examination of Table 3 reveals that the texture-based LBP method demonstrates higher diagnostic accuracy for grape leaf disease compared to the texture-based HOG method. Furthermore, analysis reveals that the texture-based methods consistently attain optimal performance with the SVM classifier. As illustrated in Table 3, the DenseNet201 architecture demonstrates optimal performance, attaining an accuracy of 98.9% in the SVM classifier. It can also be observed that all three architectures achieve the lowest accuracy values with the DT classifier.

3.3. Results of Proposed Model

In this section of the study, the proposed model for the diagnosis of grape leaf diseases is explained. Features were extracted from the CNN architecture DenseNet201 and the texture-based architectures LBP and HOG. In total, 1000 features per image were extracted with the DenseNet201 architecture, 2891 with the LBP method, and 1296 with the HOG method. The features obtained from these three architectures were combined into a feature map of size 800 × 5187. The NCA method was then applied to the feature map to select its most important features, after which the size of the optimal feature map was 800 × 76. The default parameters were used for NCA: the verbose value was set to 1, and SGD was used as the solver. NCA reduced the size of the feature map by 98.5%. Because the size of the feature map was reduced, the training time of the proposed model was also shortened. The accuracy values obtained by the proposed model with ML classifiers are shown in Table 4.
Diagnostic accuracy is an extremely important parameter in terms of the performance of the proposed model. When evaluated from this perspective, the highest performance among ML classifiers was achieved with the SVM classifier, with an accuracy value of 99.1%. There are many versions of the SVM classifier. This performance by the proposed model was achieved with the Cubic version of the SVM classifier. The proposed model achieved the worst classification performance with the DT classifier, with an accuracy value of 86.8%. The confusion matrix obtained by the proposed model with the SVM classifier is given in Table 5.
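In scikit-learn terms, a "Cubic SVM" corresponds to an SVC with a polynomial kernel of degree 3; the sketch below shows this final classification step on a stand-in for the reduced 800 × 76 feature map.

```python
# Minimal sketch: cubic SVM (polynomial kernel, degree 3) with 5-fold
# cross-validation on a stand-in for the NCA-reduced feature map.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X_reduced = np.random.rand(800, 76)      # stand-in reduced feature map
y = np.random.randint(0, 4, size=800)    # four class labels

cubic_svm = SVC(kernel="poly", degree=3)
print(cross_val_score(cubic_svm, X_reduced, y, cv=5).mean())
```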
When Table 5 is examined, it can be seen that the proposed model only classified seven images incorrectly. It can be seen that the highest class-based diagnosis performance is achieved in the Healthy class. The proposed model only identified one image that should have been in the Healthy class incorrectly as being in the Black Rot class. It showed a diagnosis accuracy close to the Healthy class in the other three classes. It classified only two images incorrectly in each of these classes. The performance metrics of the proposed model are listed in Table 6.
In the experiments conducted on the grape disease dataset, which included 200 images in each class, accuracy is a very important performance metric, but evaluating the proposed model in terms of the other performance metrics accepted in the literature is also important. When Table 6 is examined, the Healthy class stands out as the class with the highest accuracy, with an FPR of 0 and an FNR of 0.5%. Taking all the classification metrics into account, it can be seen that the model has consistent classification ability and can be used in the diagnosis of grape leaf diseases.

4. Discussion

Deep learning-based systems have been frequently used in image processing [27], recognition, and object detection [28] applications in recent years. In this study, a deep learning-based hybrid method was developed for the detection of diseases on grape leaves: a hybrid model that uses deep learning, texture-based feature extraction, and feature selection together for the detection and diagnosis of grape diseases. The accuracy of the proposed model reaches 99.1% due to the combined use of these different structures. The success of CNN architectures in capturing high-level features from complex images increased the performance of the proposed model, while the LBP and HOG methods effectively captured local features in diseased regions that the deep models missed. A more distinctive feature map was created by combining the texture-based and CNN-based methods, and the NCA method selected the meaningful features from this map, reducing the computational cost of the model. Comparative analyses with pre-trained models revealed the superiority of the proposed hybrid structure. Studies in the literature on the detection and diagnosis of grape leaf diseases by computer-aided systems are shown in Table 7. When Table 7 is examined, it can be seen that in many studies, instead of training the proposed models on the original image set, the researchers attempted to increase the number of images in the dataset with augmentation techniques; in some studies, pre-processing was also applied to the dataset to eliminate noise in the images. In our study, no pre-processing was performed on the images in the original dataset and no attempt was made to increase the number of images. Nevertheless, our proposed model achieved a highly competitive accuracy of 99.1%.
There are some limitations to this study. The dataset on which the model was trained had a limited number of images and classes. This can limit the model’s ability to generalize to data it has not seen. A small number of examples per class can lead to overfitting, especially in complex models, and can cause problems in real-world applications. The study did not examine all known grape leaf diseases. Another limitation is the lack of data obtained under field conditions, such as changing light or complex backgrounds. Therefore, our aim in future studies is to create a dataset that includes more diseases, with images created under different conditions, and to evaluate the performance of the model in real time via a mobile application. The proposed hybrid model is promising for smart agriculture and precision viticulture applications. Thanks to its ability to detect grape diseases quickly and accurately, it can provide significant contributions to farmers and agronomists in preventing crop losses and optimizing treatment strategies.

5. Conclusions

Grapes are a fruit that is extremely rich in vitamins and nutritional value. These products, which are consumed with pleasure by people all over the world, also have high commercial value. Diseases in this fruit can usually be detected by experts from its leaves. However, given the insufficient number of experts, and in cases where disease on the leaf is difficult to distinguish with the naked eye at a very early stage, failure to diagnose grape leaf diseases early can lead to the spread of the disease to the entire vineyard and, indirectly, to financial damage. The early diagnosis of diseases on grape leaves will enable an increase in the quantity of grapes produced and, in this way, in the income obtained from grapes. In this study, we developed a deep learning-based hybrid model that eases the workload of experts and saves time. The proposed hybrid model consists of the following steps: first, individual features are extracted using the texture-based LBP and HOG methods; next, features are extracted with the CNN-based DenseNet201 architecture; these features are then concatenated to generate a comprehensive feature map; finally, the most useful features are selected from the feature map using the NCA method, and the reduced feature map is classified using SVM, an ML method. The proposed model was compared with pre-trained models as well as with other studies in the literature and achieved 99.1% accuracy in the detection and diagnosis of grape leaf diseases. The high accuracy obtained by this model is due to the combination of different features extracted by texture- and CNN-based models. In order to train the proposed model more effectively and quickly, NCA was used as the feature selection method, and the size of the feature map was significantly reduced. That the model produces extremely high accuracy values despite the reduced feature map indicates that NCA works very effectively on this dataset. Considering the four-class grape leaf disease dataset used in the experiments, the accuracy achieved shows that the proposed model can be used to diagnose and detect grape leaf diseases.

Author Contributions

Conceptualization, F.A. and H.B.; methodology, H.B.; software, F.A.; validation, H.B. and F.A.; formal analysis, F.A.; investigation, F.A.; resources, H.B.; data curation, F.A.; writing—original draft preparation, H.B.; writing—review and editing, F.A.; visualization, F.A.; supervision, F.A.; project administration, H.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original data presented in the study are openly available in Kaggle at https://www.kaggle.com/datasets/ekahanan/manualsplit/data (accessed on 15 March 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
LBP  Local Binary Pattern
HOG  Histogram of Oriented Gradients
NCA  Neighborhood Component Analysis
PCA  Principal Component Analysis
CNN  Convolutional Neural Network
ML   Machine Learning
SVM  Support Vector Machine
DT   Decision Tree
KNN  k-Nearest Neighbors
NN   Neural Network
LR   Logistic Regression
NB   Naive Bayes

References

  1. Liu, B.; Ding, Z.; Tian, L.; He, D.; Li, S.; Wang, H. Grape leaf disease identification using improved deep convolutional neural networks. Front. Plant Sci. 2020, 11, 1082. [Google Scholar] [CrossRef] [PubMed]
  2. Kızıloluk, S. Comparison of standard and pretrained cnn models for potato, cotton, bean and banana disease detection. Naturengs 2021, 2, 86–99. [Google Scholar] [CrossRef]
  3. Kursat, M.; Yildirim, M.; Emre, I. Classification of the images (Plant-21) in the dataset created with 21 different Euphorbia Taxons with the developed AI-based hybrid model. Signal Image Video Process. 2023, 17, 4153–4161. [Google Scholar] [CrossRef]
  4. Liu, B.; Tan, C.; Li, S.; He, J.; Wang, H. A data augmentation method based on generative adversarial networks for grape leaf disease identification. IEEE Access 2020, 8, 102188–102198. [Google Scholar] [CrossRef]
  5. Lin, J.; Chen, X.; Pan, R.; Cao, T.; Cai, J.; Chen, Y.; Peng, X.; Cernava, T.; Zhang, X. GrapeNet: A lightweight convolutional neural network model for identification of grape leaf diseases. Agriculture 2022, 12, 887. [Google Scholar] [CrossRef]
  6. Padol, P.B.; Yadav, A.A. SVM classifier based grape leaf disease detection. In Proceedings of the 2016 Conference on Advances in Signal Processing (CASP), Pune, India, 9–11 June 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 175–179. [Google Scholar] [CrossRef]
  7. Zhang, X.; Li, F.; Zheng, H.; Mu, W. UPFormer: U-sharped perception lightweight transformer for segmentation of field grape leaf diseases. Expert Syst. Appl. 2024, 249, 123546. [Google Scholar] [CrossRef]
  8. Karthik, R.; Vardhan, G.V.; Khaitan, S.; Harisankar, R.N.R.; Menaka, R.; Lingaswamy, S.; Won, D. A dual-track feature fusion model utilizing Group Shuffle Residual DeformNet and swin transformer for the classification of grape leaf diseases. Sci. Rep. 2024, 14, 14510. [Google Scholar] [CrossRef] [PubMed]
  9. Wang, H.; Li, G.; Ma, Z.; Li, X. Image recognition of plant diseases based on principal component analysis and neural networks. In Proceedings of the 2012 8th International Conference on Natural Computation, Chongqing, China, 29–31 May 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 246–251. [Google Scholar]
  10. Kaggle. Available online: https://www.kaggle.com/datasets/ekahanan/manualsplit/data (accessed on 15 March 2025).
  11. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar] [CrossRef]
  12. Tan, M.; Le, Q. Efficientnet: Rethinking model scaling for convolutional neural networks. In Proceedings of the International Conference on Machine Learning (PMLR), Long Beach, CA, USA, 9–15 June 2019; pp. 6105–6114. [Google Scholar]
  13. Zhong, J.L.; Pun, C.M. An end-to-end dense-inceptionnet for image copy-move forgery detection. IEEE Trans. Inf. Forensics Secur. 2019, 15, 2134–2146. [Google Scholar] [CrossRef]
  14. Zhang, X.; Zhou, X.; Lin, M.; Sun, J. Shufflenet: An extremely efficient convolutional neural network for mobile devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6848–6856. [Google Scholar] [CrossRef]
  15. Suthaharan, S. Support vector machine. In Machine Learning Models and Algorithms for Big Data Classification: Thinking with Examples for Effective Learning; Springer: Boston, MA, USA, 2016; pp. 207–235. [Google Scholar] [CrossRef]
  16. Friedl, M.A.; Brodley, C.E. Decision tree classification of land cover from remotely sensed data. Remote Sens. Environ. 1997, 61, 399–409. [Google Scholar] [CrossRef]
  17. Guo, G.; Wang, H.; Bell, D.; Bi, Y.; Greer, K. KNN model-based approach in classification. In On the Move to Meaningful Internet Systems 2003: CoopIS, DOA, and ODBASE, Proceedings of the OTM Confederated International Conferences, CoopIS, DOA, and ODBASE 2003, Catania, Sicily, Italy, 3–7 November 2003; Springer: Berlin/Heidelberg, Germany, 2003; pp. 986–996. [Google Scholar] [CrossRef]
  18. Féraud, R.; Clérot, F. A methodology to explain neural network classification. Neural Netw. 2002, 15, 237–246. [Google Scholar] [CrossRef] [PubMed]
  19. Dreiseitl, S.; Ohno-Machado, L. Logistic regression and artificial neural network classification models: A methodology review. J. Biomed. Inform. 2002, 35, 352–359. [Google Scholar] [CrossRef] [PubMed]
  20. Flach, P.A.; Lachiche, N. Naive Bayesian classification of structured data. Mach. Learn. 2004, 57, 233–269. [Google Scholar] [CrossRef]
  21. Yang, W.; Wang, K.; Zuo, W. Fast neighborhood component analysis. Neurocomputing 2012, 83, 31–37. [Google Scholar] [CrossRef]
  22. Ahonen, T.; Hadid, A.; Pietikäinen, M. Face recognition with local binary patterns. In Computer Vision-ECCV 2004, Proceedings, Part I 8, Proceedings of the 8th European Conference on Computer Vision, Prague, Czech Republic, 11–14 May 2004; Springer: Berlin/Heidelberg, Germany, 2004; pp. 469–481. [Google Scholar] [CrossRef]
  23. Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; IEEE: Piscataway, NJ, USA, 2005; Volume 1, pp. 886–893. [Google Scholar]
  24. Déniz, O.; Bueno, G.; Salido, J.; De la Torre, F. Face recognition using histograms of oriented gradients. Pattern Recognit. Lett. 2011, 32, 1598–1603. [Google Scholar] [CrossRef]
  25. Yildirim, M.; Mutlu, H.B. Automatic detection of knee osteoarthritis grading using artificial intelligence-based methods. Int. J. Imaging Syst. Technol. 2024, 34, e23057. [Google Scholar] [CrossRef]
  26. Bingol, H. Classification of OME with Eardrum Otoendoscopic Images Using Hybrid-Based Deep Models, NCA, and Gaussian Method. Trait. Du Signal 2022, 39, 1295–1302. [Google Scholar] [CrossRef]
  27. Bugday, B.; Bingol, H.; Yildirim, M.; Alatas, B. Enhancing knee osteoarthritis detection with AI, image denoising, and optimized classification methods and the importance of physical therapy methods. PeerJ Comput. Sci. 2025, 11, e2766. [Google Scholar] [CrossRef] [PubMed]
  28. Tuncer, S.A.; Yildirim, M.; Tuncer, T.; Mülayim, M.K. YOLOv8-Based System for Nail Capillary Detection on a Single-Board Computer. Diagnostics 2024, 14, 1843. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Sample images from classes in the grape leaf diseases dataset.
Figure 2. Working principle of LBP [22].
Figure 3. Gradient direction region [25].
Figure 4. Histograms of gradients in each part of the image [25].
Figure 5. Flow diagram of the proposed model for grape leaf diseases.
Table 1. Accuracy of CNN models (%).

Model:        DenseNet201   EfficientNetb0   InceptionV3   ShuffleNet
Accuracy (%): 96.67         88.33            95.83         95
Table 2. Confusion matrices obtained from the CNN architectures (DenseNet201, EfficientNetb0, InceptionV3, and ShuffleNet); presented as images in the original article.
Table 3. Accuracy values (%) of the deep model and texture-based methods with ML classifiers.

Method        Feature Numbers   DT     LR     NB     SVM    KNN    NN
DenseNet201   1000              88.8   97     95.4   98.9   96.6   98.8
LBP           2891              61.3   85.5   77.1   87.9   84.6   84.8
HOG           1296              57.4   82     74.4   84.6   78.9   81.6
Table 4. Accuracy of the proposed model for grape leaf diseases with ML classifiers (%).

Model            DT     LR     NB     SVM    KNN    NN
Proposed Model   86.8   97.4   95.5   99.1   96.8   98.5
Table 5. Confusion matrix of the proposed model; presented as an image in the original article.
Table 6. Performance values of the proposed model (%).

Class                  Acc    Spc     Sens   Pre     FPR    F1      FNR   FDR
Black Measles          99     99.67   99     99      0.33   99      1     1
Black Rot              99     99.33   99     98.01   0.67   98.51   1     1.98
Healthy                99.5   100     99.5   100     0      99.75   0.5   0
Isariopsis Leaf Spot   99     99.83   99     99.50   0.17   99.25   1     0.5
Table 7. Studies in the literature on the classification of grape leaf diseases.

Reference            Method                                                               Number of Images   Number of Classes   Acc (%)
Liu et al. [1]       Inception + Augmentation + Gaussian                                  107,366            7                   97.22
Lin et al. [5]       CNN + Gaussian                                                       2850               7                   86.25
Padol et al. [6]     SVM                                                                  137                2                   88.89
Karthik et al. [8]   Swin Transformer + Group Shuffle Residual DeformNet + Augmentation   4639               4                   98.6
Wang et al. [9]      Back Propagation Network + PCA + NN                                  85                 2                   94.29
Proposed Model       DenseNet201 + LBP + HOG + NCA + SVM                                  800                4                   99.1
