Article

Wheat Powdery Mildew Severity Classification Based on an Improved ResNet34 Model

1 College of Information and Management Sciences, Henan Agricultural University, Zhengzhou 450046, China
2 Key Lab of Smart Agriculture System, Ministry of Education, China Agricultural University, Beijing 100083, China
* Author to whom correspondence should be addressed.
Agriculture 2025, 15(15), 1580; https://doi.org/10.3390/agriculture15151580
Submission received: 13 May 2025 / Revised: 16 July 2025 / Accepted: 22 July 2025 / Published: 23 July 2025

Abstract

Crop disease identification is a pivotal research area in smart agriculture, forming the foundation for disease mapping and targeted prevention strategies. Among the most prevalent global wheat diseases, powdery mildew—caused by fungal infection—poses a significant threat to crop yield and quality, making early and accurate detection crucial for effective management. In this study, we present QY-SE-MResNet34, a deep learning-based classification model that builds upon ResNet34 to perform multi-class classification of wheat leaf images and assess powdery mildew severity at the single-leaf level. The proposed methodology begins with dataset construction following the GBT 17980.22-2000 national standard for powdery mildew severity grading, resulting in a curated collection of 4248 wheat leaf images at the grain-filling stage across six severity levels. To enhance model performance, we integrated transfer learning with ResNet34, leveraging pretrained weights to improve feature extraction and accelerate convergence. Further refinements included embedding a Squeeze-and-Excitation (SE) block to strengthen feature representation while maintaining computational efficiency. The model architecture was also optimized by modifying the first convolutional layer (conv1)—replacing the original 7 × 7 kernel with a 3 × 3 kernel, adjusting the stride to 1, and setting padding to 1—to better capture fine-grained leaf textures and edge features. Subsequently, the optimal training strategy was determined through hyperparameter tuning experiments, and GrabCut-based background processing along with data augmentation were introduced to enhance model robustness. In addition, interpretability techniques such as channel masking and Grad-CAM were employed to visualize the model’s decision-making process. Experimental validation demonstrated that QY-SE-MResNet34 achieved an 89% classification accuracy, outperforming established models such as ResNet50, VGG16, and MobileNetV2 and surpassing the original ResNet34 by 11%. This study delivers a high-performance solution for single-leaf wheat powdery mildew severity assessment, offering practical value for intelligent disease monitoring and early warning systems in precision agriculture.

1. Introduction

Wheat powdery mildew, caused by Blumeria graminis f. sp. tritici, is a devastating fungal disease that severely impacts global wheat production. By impairing leaf photosynthesis, the disease causes premature plant senescence, reduces grain filling efficiency, and leads to significant yield losses ranging from 10% to 50% [1], along with substantial deterioration in grain quality. The pathogen also decreases thousand-grain weight by 10–20% [2], compromises flour quality, and reduces the nutritional value and market competitiveness of wheat grains. Effective disease control requires an integrated strategy incorporating resistant cultivars, optimized field management, and timely fungicide application. Precise identification and severity assessment of powdery mildew infection are particularly critical, enabling farmers to minimize yield losses, optimize control measures, improve pesticide application efficiency, and prevent reductions in thousand-grain weight [3]. Furthermore, accurate disease evaluation facilitates the development of resistant varieties, supports precision agriculture implementation, and enhances the economic sustainability of wheat production systems. Current disease severity grading methods, while fundamental for diagnosis and control decisions, predominantly rely on manual visual assessment [4]. These conventional approaches are not only time-consuming and labor-intensive but also subjective and inconsistent, making them unsuitable for large-scale cultivation systems requiring rapid and accurate disease diagnosis. This pressing challenge highlights the urgent need for innovative solutions leveraging advanced computer vision and deep learning technologies to revolutionize disease identification and severity assessment in modern agriculture.
The rapid evolution of artificial intelligence and deep learning technologies has brought transformative progress to agricultural disease identification [5]. This field has witnessed a paradigm shift from manual assessment to automated, intelligent detection systems. Modern image recognition techniques, powered by machine learning and deep learning architectures, can efficiently process large-scale image datasets [6], offering unprecedented advantages in terms of processing speed, detection accuracy, operational objectivity, and non-destructive analysis [7]. These capabilities provide essential technological foundations for real-time disease monitoring and assessment. Current methodologies in plant disease image recognition primarily fall into two categories: traditional machine learning and deep learning approaches [8]. The conventional machine learning pipeline involves manual feature extraction, where experts identify and isolate key visual characteristics such as color histograms, texture patterns, and edge contours from images [9]. These handcrafted features are then processed using various machine learning algorithms, including principal component analysis (PCA) for dimensionality reduction, support vector machines (SVM) for classification, AdaBoost for ensemble learning, and K-means clustering for pattern recognition [10,11,12,13], ultimately enabling automated disease identification and severity assessment. For instance, Sowmiya et al. [14] proposed an Improved Quantum Whale Optimization Algorithm integrated with Principal Component Analysis (IQWO-PCA) to analyze tomato disease image datasets using a machine learning (ML) model, thereby facilitating the adoption of effective preventive measures against this agricultural threat. Piyush et al. [15] attained a diagnostic accuracy of 98% in the automated identification of apple fruit and apple tree diseases by integrating a comprehensive feature extraction module with a Support Vector Machine (SVM) model encoded by an optimally constrained Restricted Boltzmann Machine (RBM). Shedthi et al. [16] developed a plant disease recognition framework that leveraged image processing and machine learning techniques. Their system employed a hybrid clustering algorithm—combining a Genetic Algorithm with K-means—for image segmentation and utilized an Artificial Neural Network (ANN) for disease classification. Although these approaches based on handcrafted feature extraction have achieved notable improvements in plant disease detection, they still exhibit inherent limitations. These include a strong reliance on manual feature engineering, difficulties in handling high-dimensional and sparse data, limited ability to model complex non-linear relationships, poor scalability to large datasets, and dependence on extensive labeled data for training.
In recent years, with the rapid advancement of deep learning technologies, models such as UNet [17], ResNet [18], and L-CSMS [19] have been increasingly employed in crop disease recognition. These methods enable the autonomous extraction of image features and optimize the training process, thereby significantly enhancing the performance and accuracy of plant disease identification tasks [20]. For example, Deng et al. [21] proposed a leaf blight segmentation model based on an improved VCDM-UNet. By analyzing the segmentation output, they quantified the proportion of diseased areas relative to total leaf area to perform disease severity grading, achieving an average grading accuracy of 83.95%. Zhang et al. [22] introduced the GhostNetV2 feature extraction module into the YOLOv8 architecture and constructed the GSGF model for grape leaf disease recognition, which attained a classification accuracy of 97.1%. Jiang et al. [23] developed a tea leaf disease detection method based on an enhanced Faster R-CNN algorithm, utilizing ResNet50 as the backbone integrated with a Feature Pyramid Network (FPN), achieving a mean average precision (mAP) of 88.06%. Yang [24] employed a combination of MobileNetV2 and the Large Margin Nearest Neighbor (LMNN) algorithm for wheat blast severity classification, resulting in a 3.5% improvement in accuracy over the baseline MobileNetV2 model. Similarly, Picon et al. [25] applied an improved ResNet architecture for the classification of European wheat diseases, achieving an accuracy of 87%. These studies demonstrate that deep learning approaches—through architectural enhancements—significantly improve the model’s capability to autonomously learn complex and subtle disease features, thereby eliminating the need for labor-intensive manual feature engineering that is typical in traditional machine learning methods [26]. This principle underpins the methodology adopted in this study for the classification of wheat powdery mildew severity levels.
The primary objective of this study was to accurately classify the severity levels of wheat powdery mildew on individual leaf blades, with particular emphasis on the early detection of infections when visible symptoms are minimal or absent. To this end, a dataset comprising 531 RGB images of single wheat leaves was collected using a smartphone during the grain-filling stage. These images were annotated and categorized based on the severity grading standard defined in GBT 17980.22-2000 [27,28]. Considering the relatively limited size of the dataset, data augmentation techniques were employed as a preprocessing step to increase the number of training samples, thereby improving model robustness and generalization. The ResNet34 [29] model was adopted as the backbone for automated feature extraction. To accelerate the training process and enhance feature representation, a transfer learning strategy was incorporated by initializing the network with weights pre-trained on large-scale datasets. Additionally, the Squeeze-and-Excitation (SE) attention mechanism [30] was integrated into the architecture to improve the model’s sensitivity to salient disease-related features. Meanwhile, the convolutional layer conv1 was modified from 7 × 7 to 3 × 3 with a stride of 1 and padding of 1, in order to better capture the texture and edge details of wheat leaves. Subsequently, the optimal training strategy was determined through hyperparameter tuning experiments, and GrabCut-based background processing along with data augmentation was introduced to enhance the model’s robustness. In addition, interpretability techniques such as channel masking and Grad-CAM were employed to visualize the model’s decision-making process. Based on these enhancements, the QY-SE-MResNet34 model was developed to classify the input leaf images according to powdery mildew severity levels. The model’s performance was evaluated using four metrics: precision (1), recall (2), accuracy (3), and balanced F-score (4) [31]. The recognition performance of the proposed QY-SE-MResNet34 model was systematically compared with four mainstream deep learning models: ResNet34, ResNet50, VGG16, and MobileNetV2, in order to assess its effectiveness and identify the optimal model for powdery mildew severity classification.

2. Materials and Methods

2.1. Research Area and Data Sources

2.1.1. Overview of the Research Area

The study was conducted at the demonstration base of Henan Agricultural University, located in the Pingyuan New District of Yuanyang County, Xinxiang City (113.956° E, 35.114° N). This region experiences a temperate continental monsoon climate, characterized by distinct seasonal variation. The average annual temperature is 14.2 °C, and precipitation is primarily concentrated in the summer and autumn months, with an annual average of approximately 680 mm. These climatic conditions are favorable for wheat cultivation: the cold winter temperatures promote wheat dormancy, while the warming trend in spring supports vegetative growth and development. In addition to its favorable climate, the region also possesses highly suitable soil resources for wheat production. The soil is fertile, with a deep profile and high organic matter content, providing an optimal environment for root development and nutrient uptake, thereby supporting healthy wheat growth throughout the growing season. The location of the research area is shown in Figure 1.

2.1.2. Wheat Leaf Powdery Mildew Image Collection

The wheat leaf powdery mildew images were collected in early May 2024. The image acquisition process involved carefully detaching intact leaves from wheat plants grown in an outdoor experimental field. These leaves were then photographed vertically under controlled indoor conditions to ensure consistency in lighting and background. During image capture, the distance between the camera and the leaf surface was maintained within a range of 0.2 to 0.5 m. All photographs were taken between 14:00 and 16:30, a time window selected to ensure optimal natural lighting for high-quality imaging. The images were captured using an iPhone 14 Pro (Apple Inc., Cupertino, CA, USA), which is equipped with a rear triple-camera system comprising a 48 MP main sensor, a 12 MP ultra-wide sensor, and a 12 MP telephoto lens. All images were acquired at a resolution of 3024 × 4032 pixels and saved in JPG format. Based on the statistical analysis of the collected samples, the dataset includes both healthy wheat leaves and leaves exhibiting varying severity levels of powdery mildew infection, thus providing a diverse and representative foundation for the development and evaluation of classification models.

2.1.3. Wheat Powdery Mildew Dataset Construction and Image Preprocessing

According to the agricultural industry standard GBT 17980.22-2000 Guidelines for Pesticide Field Efficacy Tests (I): Fungicide Control of Gramineae Powdery Mildew, the collected wheat leaf samples were graded under the guidance of experts specializing in wheat powdery mildew resistance breeding [32]. Each image was sequentially numbered and assigned a corresponding label based on disease severity. The grading criteria for wheat powdery mildew severity are presented in Table 1. A total of 531 field sample images were collected for this study, comprising 43 images of healthy leaves, 137 images classified as grade 1, 117 as grade 3, 68 as grade 5, 76 as grade 7, and 90 as grade 9. Representative images for each disease severity level are illustrated in Figure 2.
To facilitate network training and support efficient convolution operations, the wheat leaf powdery mildew images were standardized to a uniform size [33]. As the primary disease-related features are predominantly concentrated in the central regions of the leaf images [34], resizing—via either upscaling or downscaling—was conducted in a manner that preserved the integrity of key visual information while reducing redundant background content. In this study, Python (3.12) scripts were developed to automate the image preprocessing workflow, aiming to eliminate irrelevant information and improve the computational efficiency of the model training process [35]. To address the risk of overfitting associated with the limited dataset size, data augmentation techniques such as horizontal and vertical flipping, random rotation, and color space transformations were employed [36]. The final dataset was divided into a training set and a validation set using a 9:1 ratio to support model evaluation and generalization testing.
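For illustration, the preprocessing pipeline described above could be assembled in PyTorch/torchvision roughly as follows; the dataset path, the 224 × 224 input size, and the random seed are assumptions for this sketch, and the augmentation transforms themselves are detailed in Section 3.3.2.

```python
import torch
from torchvision import datasets, transforms

# Uniform resizing and tensor conversion; 224 x 224 is an assumption here
# (the standard ImageNet input size for ResNet-family backbones).
base_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics,
                         std=[0.229, 0.224, 0.225]),   # matching the pretrained weights
])

# Assumed folder layout: one subdirectory per severity grade (0, 1, 3, 5, 7, 9).
dataset = datasets.ImageFolder("wheat_powdery_mildew", transform=base_transform)

# 9:1 train/validation split, as described in the text.
n_train = int(0.9 * len(dataset))
train_set, val_set = torch.utils.data.random_split(
    dataset, [n_train, len(dataset) - n_train],
    generator=torch.Generator().manual_seed(42))       # fixed seed for reproducibility
```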

2.2. Research Method

To evaluate the generalization capability of the wheat powdery mildew dataset across deep learning models, the proposed improved residual network model (QY-SE-MResNet34) was benchmarked against four widely adopted mainstream architectures: ResNet34, ResNet50, VGG16, and MobileNetV2. These models have been extensively employed in image recognition research due to their robust performance, ability to capture complex feature representations, and suitability for transfer learning [37]. Each model embodies distinct architectural characteristics and design principles. ResNet34, a classic residual network, effectively mitigates the vanishing gradient problem through identity shortcut connections. ResNet50, with its deeper architecture, offers a balance between computational efficiency and model complexity, making it suitable for scenarios with limited resources or requiring rapid inference [38]. VGG16 is characterized by a deep, uniform convolutional structure that facilitates progressive, layer-wise feature extraction. MobileNetV2, designed with efficiency in mind, excels in real-time computer vision applications and lightweight detection tasks due to its inverted residual blocks and linear bottlenecks.

2.2.1. ResNet Backbone Network

ResNet (Residual Network) is among the most widely adopted deep neural network architectures in tasks such as image classification, object detection, and semantic segmentation. Its core innovation, the residual structure, is depicted in Figure 3. The key advancement introduced by ResNet lies in the use of residual connections [39], which establish direct cross-layer pathways within the network to effectively alleviate issues of gradient vanishing and exploding during training. This design enables the construction of deeper and wider networks, thereby significantly enhancing model expressiveness and generalization ability [40].
Specifically, the residual connection in ResNet involves adding shortcut connections that bypass one or more layers, forming residual blocks where the output of a preceding layer is directly added to the output of a subsequent layer [41]. The fundamental principle of this mechanism is to learn residual functions—that is, the difference between the desired output and the input—rather than the direct mapping itself [42]. This approach facilitates the learning of identity mappings, which mitigates gradient degradation problems [43], and allows for the effective training of very deep networks with improved robustness and convergence behavior. In addition to residual connections, ResNet incorporates techniques such as batch normalization and pretraining [44] to further enhance training stability and generalization performance [45]. Batch normalization accelerates convergence and reduces internal covariate shift, thereby improving generalization, while pretraining leverages knowledge from large-scale datasets to initialize network parameters, boosting both accuracy and robustness [46].
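As a minimal sketch of the residual mechanism described above (an illustration of the standard basic block used in ResNet34, not the exact implementation of this study):

```python
import torch.nn as nn
import torch.nn.functional as F

class BasicBlock(nn.Module):
    """Minimal ResNet basic block: output = ReLU(F(x) + x)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        identity = x                          # shortcut (identity) path
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))       # main path learns the residual F(x)
        return F.relu(out + identity)         # add the shortcut, then activate
```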

2.2.2. QY-SE-MResNet34 Network Model

To more effectively capture the subtle features of wheat powdery mildew lesions and improve the discrimination of disease severity levels, we propose the QY-SE-MResNet34 model, an enhanced version of ResNet34. Our approach integrates transfer learning and incorporates the Squeeze-and-Excitation (SE) attention module into the backbone network [47] alongside a modification of the initial convolutional layer (Conv1). Transfer learning is employed by initializing ResNet34 with weights pre-trained on large-scale datasets, thereby leveraging universal feature representations to accelerate convergence, boost classification performance, and mitigate overfitting—particularly beneficial when training data are limited. The SE module consists of two components: a “Squeeze” operation that aggregates spatial information via global average pooling, and an “Excitation” operation that models channel-wise dependencies through a lightweight gating mechanism. By learning inter-channel relationships, SE adaptively recalibrates channel feature responses, enhancing the network’s focus on diagnostically relevant patterns while retaining original feature integrity. Furthermore, Conv1’s original 7 × 7 kernel (stride = 2, padding = 3) is replaced with a 3 × 3 kernel (stride = 1, padding = 1) to facilitate finer-grained extraction of local texture and edge information critical for early-stage lesion detection. The architecture of the SE module is illustrated in Figure 4, highlighting its role in channel-wise feature reweighting.
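A compact PyTorch sketch of the SE module follows; the reduction ratio of 16 is the common default from the original SE paper and is an assumption here:

```python
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: global pooling ("squeeze") followed by a
    two-layer gating network ("excitation") that reweights channels."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)         # aggregate spatial info per channel
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                              # channel weights in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.squeeze(x).view(b, c)                 # (B, C) channel descriptors
        w = self.excite(w).view(b, c, 1, 1)            # learned per-channel weights
        return x * w                                   # recalibrate channel responses
```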
In the ResNet34 architecture, a 3 × 3 convolutional kernel with a stride of 1 is employed to extract disease-related features from input wheat leaf images. This initial operation is followed by successive convolutional layers and activation functions, constituting the main processing pathway. The output of this pathway represents the learned residual, which is then added to the original input features through the shortcut connection to generate the final output of the residual block. Considering the high similarity of visual patterns across different severity levels of wheat powdery mildew, integrating attention mechanisms alongside additional convolutional kernels allows dynamic adjustment of the receptive field size. This facilitates the extraction of more fine-grained and discriminative features pertinent to disease severity. The overall architecture of the proposed QY-SE-MResNet34 residual network model is depicted in Figure 5. The transfer learning strategy initializes the model weights with parameters pretrained on the ImageNet dataset (source domain), while fine-tuning is performed using the preprocessed wheat powdery mildew images (target domain). The incorporated Squeeze-and-Excitation (SE) module enables adaptive modeling of inter-channel dependencies and weighted feature fusion, thereby enhancing the network’s capacity to emphasize critical disease-related features and improve classification performance. Additionally, the adoption of smaller convolutional kernels reduces the number of parameters, leading to improved computational efficiency and optimized parameter learning, ultimately contributing to superior network performance.
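For illustration, the three ingredients described above (pretrained weights, Conv1 replacement, and a six-class head) could be wired together as follows, assuming a recent torchvision; the exact placement of the SE blocks in QY-SE-MResNet34 follows Figure 5 and is only indicated in a comment:

```python
import torch.nn as nn
from torchvision import models

# Transfer learning: initialize from ImageNet-pretrained weights.
model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)

# Replace Conv1: 7x7 / stride 2 / padding 3 -> 3x3 / stride 1 / padding 1.
# The new layer is randomly initialized and trained from scratch.
model.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)

# Replace the classification head for the six severity grades (0, 1, 3, 5, 7, 9).
model.fc = nn.Linear(model.fc.in_features, 6)

# SE blocks (see SEBlock above) would additionally be attached to the
# residual blocks, e.g. by reweighting each block's output; omitted here.
```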

2.3. Model Operating Environment

The model was implemented using the PyTorch (v2.x) deep learning framework. The hardware environment consisted of 16 virtual CPUs (Intel® Xeon® Gold 6430), 120 GB of RAM, and an NVIDIA RTX 4090 GPU with 24 GB of VRAM. The software environment included Python 3.12, CUDA 12.4, and the Ubuntu 22.04 Linux operating system. During the training process, the deep neural network model was trained for 200 epochs with a batch size of 32 and an initial learning rate set to 0.01.

2.4. Model Evaluation Metrics

To evaluate the performance of the experimental model in wheat powdery mildew disease severity classification, the following metrics were used: Precision (P) (1), Recall (R) (2), Accuracy (Acc) (3), and F-score (4). The formulas for these metrics are as follows:
$$P = \frac{TP}{TP + FP} \tag{1}$$
$$R = \frac{TP}{TP + FN} \tag{2}$$
$$Acc = \frac{TP + TN}{TP + FP + TN + FN} \tag{3}$$
$$\text{Micro-}F_1 = \frac{2PR}{P + R} \tag{4}$$
In these formulas:
TP (True Positive)—The number of samples correctly classified as positive by the model and that are indeed positive.
TN (True Negative)—The number of samples correctly classified as negative by the model and that are indeed negative.
FP (False Positive)—The number of samples incorrectly classified as positive by the model but that are actually negative.
FN (False Negative)—The number of samples incorrectly classified as negative by the model but that are actually positive.
P (Precision)—The proportion of samples classified as positive by the model that are truly positive.
R (Recall)—The proportion of all actual positive samples that are correctly classified as positive by the model.
Acc (Accuracy)—The proportion of all samples that are correctly classified by the model, which serves as a measure for evaluating the overall performance of the model.
F (F-score)—A metric that balances precision and recall; a higher F-score indicates superior model performance.
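For multi-class severity grading, these quantities are computed per class from the confusion matrix and then averaged; the following NumPy sketch (our own illustration, not the authors’ evaluation script) shows one way to do so:

```python
import numpy as np

def classification_metrics(cm):
    """Per-class precision, recall, and F1, plus overall accuracy, from a
    confusion matrix (rows = true classes, columns = predicted classes).
    Assumes every class has at least one true sample and one prediction."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)                    # correctly classified, per class
    fp = cm.sum(axis=0) - tp            # predicted as the class, actually another
    fn = cm.sum(axis=1) - tp            # belonging to the class, predicted otherwise
    precision = tp / (tp + fp)                            # Equation (1)
    recall = tp / (tp + fn)                               # Equation (2)
    f1 = 2 * precision * recall / (precision + recall)    # Equation (4)
    accuracy = tp.sum() / cm.sum()                        # Equation (3)
    return precision, recall, f1, accuracy
```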

3. Results and Analysis

Section 3 focuses on the training process, performance evaluation, and in-depth analysis of the proposed model. Section 3.1 introduces the hyperparameter tuning process and identifies key training configurations. Section 3.2 compares the classification performance of various models to assess the effectiveness of the proposed improvements. Section 3.3 discusses the impact of image preprocessing and data augmentation on model performance, including background removal, specific parameter settings, and visualized results. Section 3.4 presents and analyzes the classification results of different models on wheat powdery mildew severity levels. Section 3.5 conducts ablation experiments to verify the contribution of each improvement module. Finally, Section 3.6 explores the model’s interpretability and input feature sensitivity through channel masking and Grad-CAM visualization, providing further insights into the model’s decision-making mechanism and feature dependencies.

3.1. Hyperparameter Tuning Experiments

To achieve optimal model training performance, this study conducted tuning experiments on the key hyperparameters of the deep learning model. The setting of hyperparameters significantly affects the stability, convergence speed, and final accuracy of model training. Based on the ResNet34 architecture, we performed a series of combination experiments on three key parameters: learning rate, batch size, and number of epochs. In the tuning process, the accuracy on the validation set was used as the evaluation metric to objectively reflect the model’s generalization ability and avoid overfitting that may result from relying solely on training set performance.
The initial parameter settings were as follows: epoch = 100, batch size = 64, and learning rate = 0.01. On this basis, different combinations of these parameters were adjusted to observe their impact on model performance. The hyperparameter tuning process is shown in Table 2. From the experimental results, it can be seen that as the number of epochs increases and the batch size and learning rate are adjusted, the model’s accuracy on the validation set gradually improves. Among the tested combinations, the configuration of epoch = 200, batch size = 32, and learning rate = 0.001 achieved the best performance, with a validation accuracy of 76.74%. Therefore, this set of parameters was ultimately adopted for formal training and model comparison experiments.
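For illustration, such a tuning procedure amounts to a small grid search over the three hyperparameters, selecting by validation accuracy; train_and_validate below is a hypothetical placeholder for one full training run:

```python
from itertools import product

def train_and_validate(epochs, batch_size, lr):
    """Hypothetical placeholder: run one full training session with the given
    hyperparameters and return the resulting validation accuracy."""
    return 0.0  # stand-in value; a real run would train and evaluate the model

# Candidate values mirroring the kind of grid examined in Table 2.
grid = {"epochs": [100, 200], "batch_size": [32, 64], "lr": [0.01, 0.001]}

best_cfg, best_acc = None, 0.0
for epochs, batch_size, lr in product(grid["epochs"], grid["batch_size"], grid["lr"]):
    val_acc = train_and_validate(epochs=epochs, batch_size=batch_size, lr=lr)
    if val_acc > best_acc:                 # keep the configuration with the
        best_cfg, best_acc = (epochs, batch_size, lr), val_acc  # best val accuracy

print("best configuration:", best_cfg, "validation accuracy:", best_acc)
```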

3.2. Different Models’ Accuracy Analysis

To assess the performance of deep learning models in classifying the severity levels of wheat powdery mildew, a comparative experiment was conducted involving five models: ResNet34, ResNet50, VGG16, MobileNetV2, and the proposed QY-SE-MResNet34. The variations in accuracy and loss values across training epochs for each model are depicted in Figure 6. As shown, model accuracy generally increases while loss decreases with the number of training epochs, eventually stabilizing as the models converge. This trend reflects the models’ ability to progressively learn discriminative features from the input data, thereby enhancing their classification and recognition capabilities. Among the models evaluated, QY-SE-MResNet34 exhibits the fastest convergence, attaining high accuracy and low loss within approximately 80 epochs. The ResNet34 model follows closely, achieving similar performance after around 90 epochs. In contrast, ResNet50, VGG16, and MobileNetV2 require nearly 100 epochs to reach comparable results. Analysis of the relationship between training iterations and performance metrics reveals that QY-SE-MResNet34 achieves the lowest loss value among all models. This suggests that its design—featuring weighted feature fusion and convolutional kernels of different sizes—enhances the network’s capacity to extract edge details and disease-specific features from wheat leaf images. Consequently, the model demonstrates an improved ability to focus on key regions and capture fine-grained variations across different severity levels, thereby boosting its classification performance in wheat powdery mildew severity identification tasks.
During training, the dataset was divided into a training set and a validation set at a ratio of 9:1. Figure 6 clearly illustrates the trends of accuracy and loss on both the training and validation sets across different epochs for each model. The performance of the models on the validation set remained consistent with that on the training set, demonstrating good generalization ability and indicating that no significant overfitting occurred during training.
Although the QY-SE-MResNet34 model tended to stabilize around the 80th epoch, considering the different convergence speeds of the models and to ensure comparability under consistent training settings, the Early Stopping mechanism was not applied. Instead, all models were trained for a fixed number of 200 epochs, allowing for a complete observation of the learning process and performance differences. This setting helps to comprehensively evaluate convergence trends and provides a more intuitive basis for performance comparison between models.
Table 3 summarizes the accuracy and loss values of each model following 200 training epochs. As shown, the proposed QY-SE-MResNet34 model outperforms the other architectures, achieving the highest training accuracy of 89% and validation accuracy of 87%. These results indicate that the model possesses strong generalization capability and is highly effective in identifying fine-grained features related to disease severity. In addition, the QY-SE-MResNet34 model records the lowest training and validation loss values, at 0.6925 and 0.6771, respectively, further demonstrating its robustness and superior performance in wheat powdery mildew severity classification.

3.3. The Impact of Data Augmentation on Improving Model Performance

3.3.1. Image Preprocessing (Background Removal)

To further enhance the model’s robustness and avoid background interference during feature extraction, background removal was performed on the original images before data augmentation. Given the relatively uniform imaging environment of wheat powdery mildew samples—typically photographed against a white canvas or background board under controlled lighting conditions—the foreground mainly consisted of a single leaf, and the background was relatively simple. However, since the color of powdery mildew lesions often closely resembles the background, simple color thresholding or binarization methods may lead to the loss of lesion information or incomplete foreground contours, ultimately affecting the model’s ability to learn disease-related features.
Therefore, this study adopted a semi-automatic foreground extraction method based on GrabCut to effectively separate the leaf region from the background. The GrabCut algorithm uses Gaussian Mixture Models (GMM) to model foreground and background pixels and classifies them based on the min-cut principle, enabling fine segmentation of complex foregrounds. Specifically, the original image was first resized to 20% of its original size. Then, a rectangle was drawn around the leaf region to initialize GrabCut, ensuring that lesion areas were included in the foreground modeling and reducing the impact of background lighting variations on edge segmentation. The resulting foreground images retained the complete leaf contours and lesion areas while effectively removing background pixels.
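A minimal OpenCV sketch of this procedure follows; the file name and rectangle coordinates are illustrative, since in practice the rectangle is drawn around the leaf region:

```python
import cv2
import numpy as np

img = cv2.imread("leaf.jpg")                       # illustrative input image
img = cv2.resize(img, None, fx=0.2, fy=0.2)        # resize to 20% of original size

mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)          # background GMM parameters
fgd_model = np.zeros((1, 65), np.float64)          # foreground GMM parameters
rect = (10, 10, img.shape[1] - 20, img.shape[0] - 20)  # (x, y, w, h) around the leaf

# GrabCut: GMM-based foreground/background modeling with min-cut labeling.
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Keep pixels labeled as definite or probable foreground; zero out the rest.
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
result = img * fg[:, :, np.newaxis]
cv2.imwrite("leaf_foreground.jpg", result)
```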
Figure 7 presents a comparison between the original images and those processed using GrabCut. As shown, background information was effectively eliminated, while the leaf contours and disease regions were preserved, providing cleaner input data for subsequent data augmentation and model training.
Although GrabCut-based background removal was explored during the image preprocessing stage, the collected images already featured clean and uniform backgrounds. Moreover, the model achieved a relatively high classification accuracy of 89% even without background removal, indicating that the background had minimal impact on feature extraction. Considering that background removal may lead to the loss of lesion edge information and incurs a high computational cost when applied to large-scale datasets, this operation was performed only on a subset of images for illustrative purposes rather than on the entire dataset. Nevertheless, this method may serve as a useful reference for lesion recognition in more complex background scenarios in future studies.

3.3.2. Data Augmentation Methods and Parameters

To expand the training dataset and enhance the model’s generalization ability, the following data augmentation methods and corresponding parameters were applied:
  • Horizontal flip: 100% probability
  • Random rotation: Angle range of ±50°, uniformly sampled
  • Image translation: Random horizontal and vertical shifts of ±10% of the image width/height
  • Brightness adjustment: Randomly adjusted with a brightness factor between 0.8 and 1.2
  • Hue adjustment: Hue shift range of ±10°
  • Contrast adjustment: Contrast factor randomly set between 0.8 and 1.2
  • Sharpness adjustment: Sharpness augmentation factor ranging from 0.5 to 1.5
Figure 8 shows illustrative examples of the different types of data augmentation.
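For illustration, these augmentations map approximately onto torchvision transforms as sketched below; note that torchvision expresses hue shifts as a fraction of the full color cycle, and its sharpness transform applies one fixed factor with a given probability, so the stated 0.5–1.5 range is only approximated:

```python
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=1.0),                    # horizontal flip, 100%
    transforms.RandomRotation(degrees=50),                     # rotation within +/-50 degrees
    transforms.RandomAffine(degrees=0, translate=(0.1, 0.1)),  # +/-10% translation
    transforms.ColorJitter(brightness=(0.8, 1.2),              # brightness factor 0.8-1.2
                           contrast=(0.8, 1.2),                # contrast factor 0.8-1.2
                           hue=10 / 360),                      # approx. +/-10 degree hue shift
    transforms.RandomAdjustSharpness(sharpness_factor=1.5,     # fixed factor applied with
                                     p=0.5),                   # probability 0.5 (approximation)
])
```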

3.3.3. Visualization and Comparison of Data Augmentation Effects

After applying data augmentation, the training samples were balanced and used for training experiments on both the training and validation sets. The results are shown in Table 4. As observed, the training and validation accuracy after data augmentation reached 89.9% and 87.6%, respectively, whereas without data augmentation, the training and validation accuracy were 86.4% and 83.7%, respectively. Data augmentation thus improved the training and validation accuracy by 3.5% and 3.9%, respectively. Moreover, the gap between training and validation accuracy was smaller in the augmented dataset compared to the non-augmented one.
These results indicate that using data augmentation techniques—such as flipping, rotation, translation, and color transformations—to generate a more diverse training dataset effectively reduces model overfitting and enhances its robustness and generalization capability in the wheat powdery mildew severity classification task.
To more intuitively demonstrate the impact of data augmentation, Figure 9 presents the training and validation accuracy curves over epochs for models trained with and without data augmentation. As shown in Figure 9, the model trained with data augmentation exhibits more stable training behavior, faster accuracy improvement, and consistently higher validation accuracy compared to the model without augmentation. This indicates that data augmentation effectively alleviates overfitting and enhances the model’s generalization ability.

3.4. Different Models’ Disease Severity Recognition Results and Analysis

Figure 10 presents the confusion matrices for the various models. As shown, the QY-SE-MResNet34 model exhibits superior performance in classifying the severity levels of wheat powdery mildew compared to other models. For the 0-level disease, the QY-SE-MResNet34 model achieves a perfect recognition rate of 1.00, whereas other models—except for ResNet34, which attains an accuracy of 0.86—show relatively low recognition rates. In the 1-level category, only the QY-SE-MResNet34 and ResNet34 models surpass a recognition rate of 0.80, while the remaining models fluctuate around 0.70. These results indicate that the QY-SE-MResNet34 and ResNet34 models are particularly effective in identifying the early stages (0 and 1 level) of the disease. For the 3-level disease, the QY-SE-MResNet34 model achieves the highest recognition rate of 0.96, in stark contrast to MobileNetV2, which records the lowest at 0.48. The recognition accuracies of the other three models range between 0.60 and 0.70. Except for the QY-SE-MResNet34 model, the remaining models demonstrate limited effectiveness in identifying the mid-severity levels (3-, 5-, and 7-level). Notably, 5-level diseases are frequently misclassified as 3-level, and 7-level diseases are often confused with 5-level, due to the high similarity in visual features such as white lesions and leaf yellowing. These overlapping characteristics make it challenging for the models to distinguish between varying degrees of leaf necrosis and discoloration. Regarding the 9-level disease, the MobileNetV2 model again shows the lowest recognition rate (0.42), while ResNet34, ResNet50, and VGG16 achieve approximately 0.70. In contrast, the QY-SE-MResNet34 model reaches the highest recognition rate of 0.97. Overall, the QY-SE-MResNet34 model demonstrates the most consistent and accurate performance across all severity levels. The confusion matrix clearly illustrates the model’s superior ability to extract fine-grained features and distinguish between subtle inter-class differences, thereby significantly enhancing classification performance in wheat powdery mildew severity identification tasks.
Table 5 summarizes the performance metrics of each model in recognizing different severity levels of wheat powdery mildew. As shown, the QY-SE-MResNet34 model consistently outperforms the other models across all severity levels in terms of precision, recall, and balanced F-score. These results highlight the model’s superior capability in accurately identifying and distinguishing fine-grained disease features, thereby enhancing the overall classification effectiveness for wheat powdery mildew severity assessment.
As shown in Table 6, the QY-SE-MResNet34 model achieved the highest performance across all evaluation metrics for wheat powdery mildew severity classification, with precision, recall, F1-score, and accuracy reaching 88.6%, 85.83%, 86.17%, and 87%, respectively. In contrast, the MobileNetV2 model exhibited the lowest performance, with corresponding values of 60.7%, 61.0%, 59.7%, and 60.5%, followed by the VGG16 model, which yielded values of 64.1%, 62.7%, 63.6%, and 63.4%, respectively.
Notably, the QY-SE-MResNet34 model outperformed the baseline ResNet34 model by a margin of 10.8% in classification accuracy (76.2%), demonstrating the substantial improvements achieved through the integration of transfer learning, the incorporation of the SE attention module, and the optimization of the first convolutional layer (Conv1). The use of convolutional kernels with varying sizes further enhanced the model’s ability to capture fine-grained features across different disease severity levels, thereby enriching the image feature representations and improving classification precision.
Although the ResNet50 model showed moderate performance, its effectiveness in identifying fine-grained features was relatively limited. MobileNetV2, despite offering benefits such as a reduced parameter count and lower computational complexity, relies on lightweight operations—such as depthwise separable convolutions—that may constrain its representational capacity, rendering it less effective for detailed classification tasks like wheat powdery mildew severity assessment.

3.5. Ablation Experiment

To validate the effectiveness of integrating ResNet34 with transfer learning, the SE attention mechanism, and Conv1 layer optimization, an ablation study was conducted. The evaluated models included the baseline ResNet34, ResNet34 with transfer learning (QY-ResNet34), ResNet34 incorporating the SE module (SE-ResNet34), ResNet34 with Conv1 optimization (MResNet34), and a combined model integrating all three improvements (QY-SE-MResNet34). These models were assessed using four evaluation metrics: precision, recall, accuracy, and balanced F-score.
The results of the ablation experiment are summarized in Table 7. Compared with the baseline ResNet34, the QY-ResNet34 model showed improvements in precision (1.9%), recall (2.75%), F-score (2.7%), and accuracy (2.8%). These gains suggest that incorporating transfer learning enables the model to effectively leverage prior knowledge from pretrained networks, thereby accelerating convergence and reducing computational costs. The SE-ResNet34 model achieved notable improvements—5.81% in precision, 8.6% in recall, 7.98% in F-score, and 4.8% in accuracy—demonstrating that the inclusion of the SE module strengthens the model’s ability to emphasize disease-relevant features, thereby enhancing classification performance. The MResNet34 model, which modifies the Conv1 layer, achieved performance gains of 3.01% in precision, 5.91% in recall, 4.85% in F-score, and 3.8% in accuracy. These results indicate that optimizing the initial convolutional layer enables the network to extract more detailed spatial features and better capture subtle differences in disease severity levels. Among all tested configurations, the QY-SE-MResNet34 model outperformed the others across all metrics, confirming that the synergistic integration of transfer learning, SE attention, and Conv1 optimization significantly enhances the model’s ability to recognize wheat powdery mildew severity. These results demonstrate the effectiveness of the proposed enhancements in improving both the robustness and precision of fine-grained disease classification.

3.6. Model Interpretability and Feature Sensitivity Analysis

3.6.1. Channel Masking Sensitivity Analysis

To further investigate the model’s sensitivity to different input color channels, this study designed and conducted a channel masking sensitivity experiment. The experiment involved individually masking the red, green, and blue channels of the input images by setting the pixel values of the corresponding channel to zero. The goal was to observe the change in the model’s classification performance when information from a single channel was missing, thereby revealing the model’s dependency on each channel and its impact on the disease severity recognition results.
Specifically, based on the preprocessed RGB images, the pixel values of one color channel were set to zero to generate masked images, which were then fed into the trained QY-SE-MResNet34 model for prediction. By comparing the predicted classes and confidence scores before and after masking, the contribution of each channel’s information to the model’s decision-making was evaluated. Figure 11 shows the visual comparison between the original and masked images, while Table 8 summarizes the predicted classes and confidence scores under different masking conditions.
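A minimal sketch of the masking operation (an illustration of the procedure described above, assuming image tensors in channel-first RGB order):

```python
import torch

def mask_channel(images, channel):
    """Zero out one RGB channel (0 = red, 1 = green, 2 = blue) of a batch
    of images shaped (B, 3, H, W)."""
    masked = images.clone()
    masked[:, channel, :, :] = 0.0
    return masked

# Hypothetical usage with a trained model and a preprocessed batch:
# for c, name in enumerate(["red", "green", "blue"]):
#     probs = torch.softmax(model(mask_channel(batch, c)), dim=1)
#     conf, pred = probs.max(dim=1)   # confidence and predicted class per image
```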
The experimental results show that after masking the blue channel, the model’s predicted classes and confidence scores exhibited significant changes, with a notable decline in performance. In contrast, masking the red and green channels resulted in relatively stable predictions, with only slight decreases in confidence. Combined with the visual changes in the masked images (where masking the blue channel caused the image to appear yellowish, and masking the red and green channels resulted in magenta and cyan hues, respectively), these findings indicate that the model relies more heavily on features from the blue channel for classifying powdery mildew severity.
In summary, the channel masking sensitivity analysis verifies the model’s differential responses to individual input channels, providing intuitive insights into its decision-making process. This analysis also offers valuable guidance for future model optimization and data augmentation strategy design.

3.6.2. Visualization of Model Attention Regions (Grad-CAM)

To further explore the decision basis and interpretability of the proposed QY-SE-MResNet34 model in the task of wheat powdery mildew severity classification, this study adopts Grad-CAM (Gradient-weighted Class Activation Mapping) to visualize the model’s attention regions.
Grad-CAM is a gradient-based interpretability technique that generates heatmaps by computing the gradients of the model’s output with respect to the feature maps of a selected convolutional layer. These heatmaps highlight the regions in the input image that the model focuses on when making predictions. This method does not require modification of the model architecture and is applicable to various convolutional neural networks, making it effective for identifying whether the model attends to task-relevant areas.
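A compact sketch of this computation using PyTorch hooks is shown below (our own illustration; for a ResNet-style backbone the target layer would typically be the last residual stage):

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, class_idx=None):
    """Minimal Grad-CAM sketch: weight the target layer's feature maps by the
    spatially averaged gradients of the class score, then apply ReLU."""
    store = {}

    def fwd_hook(module, inputs, output):
        store["act"] = output                                  # feature maps
        output.register_hook(lambda g: store.update(grad=g))   # their gradients

    handle = target_layer.register_forward_hook(fwd_hook)
    model.eval()

    logits = model(image.unsqueeze(0))            # image is a (3, H, W) tensor
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()   # explain the top prediction
    model.zero_grad()
    logits[0, class_idx].backward()
    handle.remove()

    weights = store["grad"].mean(dim=(2, 3), keepdim=True)  # GAP over gradients
    cam = F.relu((weights * store["act"]).sum(dim=1))       # weighted channel sum
    cam = F.interpolate(cam.unsqueeze(1), size=image.shape[1:],
                        mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze()   # normalized heatmap in [0, 1]

# Hypothetical usage with a ResNet-style backbone:
# heatmap = grad_cam(model, img_tensor, target_layer=model.layer4[-1])
```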
In this study, two representative images with severity levels 3 and 9 were selected for visualization. The original images were overlaid with their corresponding Grad-CAM heatmaps for analysis. As shown in Figure 12, the model primarily focuses on regions with dense lesion distribution during prediction, and in images with higher severity, the activation regions are more pronounced. This indicates that the model does not simply rely on background or leaf contour information but effectively captures fine-grained differences in diseased areas, demonstrating a reliable decision basis and interpretability.
The visualization results indicate that the proposed model can effectively distinguish diseased areas from background noise, thereby enhancing its interpretability and practical value and providing strong support for its deployment in real-world agricultural applications. Only these two typical severity levels are presented here, as the Grad-CAM visualizations of the other severity levels exhibit similar attention patterns, reflecting a consistent focus trend of the model.

4. Discussion

Current research on wheat diseases primarily concentrates on disease type classification, with convolutional neural networks (CNNs) widely applied in agricultural disease identification tasks. Backbone architectures such as ResNet and VGG have demonstrated outstanding performance in the classification of various wheat disease images. For instance, Feng et al. [48] employed a transfer learning strategy by setting all layers of a lightweight CNN model as trainable, thereby developing a wheat leaf disease recognition model that exhibits high accuracy, strong generalization capability, and suitability for deployment on mobile platforms. The model was trained on a dataset encompassing three major wheat leaf diseases: powdery mildew, stripe rust, and leaf rust. Similarly, Lou et al. [49] proposed WDMNet, a lightweight wheat disease recognition network based on multi-scale attention, designed to identify six common wheat diseases. By integrating a multi-scale attention mechanism, the model effectively enhances the extraction of key disease features, such as lesion location, shape, and color. In parallel, data sources for wheat disease detection have become increasingly diverse. Gao et al. [50] introduced a novel approach for the rapid and non-destructive monitoring of wheat Fusarium head blight (FHB), combining low-altitude remote sensing using unmanned aerial vehicles (UAVs) with multispectral imaging and incorporating spectral and textural analysis techniques.
Although previous studies have made significant advances in wheat disease recognition, their focus has largely been on multi-disease classification tasks [51]. In contrast, this study specifically addresses the automated severity grading of wheat powdery mildew at the leaf level. By enhancing the ResNet34 model and benchmarking it against several mainstream deep learning architectures, we achieved effective classification of powdery mildew severity levels. Specifically, we developed an advanced model, QY-SE-MResNet34, built upon the ResNet34 backbone. This model retains the original network’s residual learning capability while integrating transfer learning to accelerate convergence and improve generalization performance. Additionally, squeeze-and-excitation (SE) attention modules are embedded within each residual block, enabling the model to better focus on critical lesion regions. Given the characteristics of wheat leaf disease images, such as small lesion sizes and blurred edges, the original 7 × 7 convolution kernel in ResNet34’s first layer was replaced with a 3 × 3 kernel to more effectively capture fine-grained local features. Then, the optimal training strategy was determined through hyperparameter tuning experiments, further improving the model’s performance. To address the issue of image backgrounds, the GrabCut method was introduced for background processing, enhancing the model’s focus on key regions and its robustness. Combined with data augmentation techniques, the positive impact on model performance was demonstrated through visual comparisons. Finally, interpretability techniques such as channel masking and Grad-CAM were employed to analyze the model’s decision-making process, revealing the model’s sensitivity to different input channels and image regions, thereby deepening the understanding of its discriminative mechanisms. Furthermore, in accordance with the national standard (GBT 17980.22-2000), we constructed a six-level severity image dataset of single wheat leaves affected by powdery mildew, thereby enhancing the model’s capability to distinguish subtle differences in disease severity under real-world conditions. Unlike many existing studies that focus primarily on binary classification (diseased vs. healthy), our work tackles a more granular severity grading problem. Through architectural optimization and a careful balance between performance and model complexity, the proposed QY-SE-MResNet34 outperformed baseline models in accuracy, recall, and F1-score, demonstrating strong practical applicability and scalability.
This study acknowledges several limitations that warrant further attention. First, the enhanced QY-SE-MResNet34 model exhibited superior performance compared to the baseline ResNet34 on our self-constructed wheat powdery mildew dataset. This improvement may be partly attributed to the image acquisition process, in which leaves were manually collected from the field and then photographed with a smartphone under controlled conditions. This approach effectively reduced interference from factors such as background occlusion, plant growth position, and solar altitude angle, resulting in more consistent leaf contours and geometric features across images, which contributed to stable grading results. However, in real-world applications, wheat leaves are often subject to uncontrollable environmental factors that cause greater variability in contour and geometric characteristics, potentially leading to decreased grading accuracy. The dataset used in this study contains images with relatively clean backgrounds and uniform lighting, resulting in limited sample complexity that cannot fully represent the diversity of real-world environments. Consequently, the model’s generalization ability in practical scenarios remains insufficient. Although GrabCut background processing and data augmentation methods were introduced to improve model robustness, the actual effectiveness of these preprocessing and augmentation strategies still needs to be further validated under more complex and natural conditions. Additionally, this study focuses primarily on improving the deep neural network architecture and algorithmic enhancements, without addressing model deployment. Finally, the current model only performs powdery mildew severity classification on wheat leaves and does not cover diseases on stems and spikes; future work should expand the application scope and improve the model’s applicability.
Therefore, addressing these challenges in future research is of paramount importance. On one hand, it is essential to evaluate the model’s performance under complex environmental conditions and in the presence of co-occurring wheat diseases. Future studies should also extend severity classification to additional plant organs such as stems and ears to enhance the model’s applicability. Simultaneously, balancing model complexity with generalization capability remains a critical consideration. Incorporating lightweight modules—such as the Convolutional Block Attention Module (CBAM) or depthwise separable convolutions—could improve feature extraction efficiency while maintaining manageable computational complexity. In this study, the SE attention mechanism was employed to strengthen the model’s focus on salient lesion features and facilitate fine-grained feature extraction across varying severity levels of powdery mildew, thereby enriching the model’s representational capacity. On the other hand, future research should also emphasize real-time deployment and large-scale application of the model in conjunction with emerging technological advancements. As deep learning technologies continue to evolve and computational resources become increasingly accessible, it is critical for models to transition from controlled experimental settings to practical, real-world agricultural environments. The proposed QY-SE-MResNet34 model, which incorporates SE attention mechanisms and modified convolutional layers based on transfer learning, has shown significant improvements in the accuracy and efficiency of powdery mildew severity classification. Nevertheless, practical deployment requires further attention to computational efficiency, hardware compatibility, and adaptability to diverse environmental conditions. Future efforts will focus on optimizing the model’s computational performance, enhancing inference speed on edge devices, and adopting model compression and acceleration techniques to ensure stable operation in low-resource settings. Moreover, additional augmentations will be necessary to accommodate the complexity and variability of different agricultural scenarios, thereby improving the model’s robustness and generalization. This will ultimately enable large-scale disease monitoring and support the implementation of precision agriculture practices. To address this, future research will focus on collecting wheat leaf images under natural field conditions with diverse angles, complex backgrounds, and at multiple time periods, in order to systematically evaluate the model’s adaptability and robustness in real-world environments.

5. Conclusions

This study enhances the ResNet34 model and compares its performance with that of several mainstream deep learning architectures, including ResNet34, ResNet50, VGG16, and MobileNetV2. Among these, the proposed QY-SE-MResNet34 model demonstrated the best overall performance in identifying the severity levels of wheat powdery mildew, thereby significantly improving both the accuracy and efficiency of severity classification.
The conclusions of this research are as follows.
  • The deep learning model autonomously learns and extracts subtle disease features from a large number of wheat leaf images, enabling precise classification of powdery mildew severity. To improve model robustness and generalization, data augmentation techniques were applied to expand the training dataset. Following augmentation, the training and validation accuracy reached 89.9% and 87.6%, respectively, representing improvements of 3.5% and 3.9% compared to the non-augmented datasets. These results confirm that data augmentation effectively enhances the model’s recognition performance, aligning with findings from previous studies.
  • Comparative analysis of the models revealed that ResNet34 achieves relatively high accuracy in recognizing wheat powdery mildew severity levels. This suggests that the residual connections in ResNet34 effectively mitigate issues such as gradient vanishing and model degradation, thereby enhancing model trainability and accelerating convergence. Consequently, ResNet34 is particularly well-suited for fine-grained classification tasks that require complex feature extraction. Notably, in the early identification of wheat powdery mildew, ResNet34 exhibits strong sensitivity to mild symptoms due to its efficient feature learning capabilities.
  • The QY-SE-MResNet34 model achieved the highest training and validation accuracy on the wheat powdery mildew dataset, reaching 89% and 87%, respectively, along with the lowest corresponding loss values of 0.6925 and 0.6771. Compared to the original ResNet34 model, both training and validation accuracy improved by 11%, and the loss values were significantly reduced, demonstrating the enhanced model’s superior capacity to recognize disease features and achieve faster convergence during training. In terms of evaluation metrics, the QY-SE-MResNet34 model achieved a precision of 88.6%, recall of 85.83%, balanced F-score of 86.17%, and overall accuracy of 87%. These represent substantial improvements over the baseline models, with accuracy gains of 10.8%, 18%, 23.6%, and 26.5% compared to ResNet34, ResNet50, VGG16, and MobileNetV2, respectively. These results highlight the model’s effectiveness in fine-grained recognition of wheat powdery mildew severity.
  • Key architectural modifications, specifically the optimization of the first convolutional layer (conv1) and the integration of the SE attention mechanism, played a critical role in enhancing model performance; ablation studies confirmed their individual and combined effectiveness (see the SE and conv1 sketch after this list). Incorporating the Squeeze-and-Excitation (SE) module into the transfer-learned ResNet34 backbone improved the model's ability to focus on informative channel-wise features, strengthening fine-grained severity recognition and yielding more detailed representations of lesion characteristics across severity levels. These findings are consistent with prior studies demonstrating that attention mechanisms enhance model generalization and interpretability. Furthermore, replacing the original 7 × 7 conv1 kernel with a 3 × 3 kernel preserved higher spatial resolution in the early feature maps, retaining more local detail, improving sensitivity to subtle visual differences, and reducing computational complexity. Together, these modifications underpin the improved performance and practical applicability of the proposed model for automated severity grading of wheat powdery mildew.
  • In addition, to enhance the interpretability of the model and its understanding of input features, this study implemented several visualization and sensitivity analysis experiments; minimal sketches of the GrabCut, channel-masking, and Grad-CAM procedures follow this list. In the image preprocessing stage, GrabCut background removal was introduced: foreground and background pixels are modeled with Gaussian Mixture Models (GMMs) and the min-cut principle separates leaf regions from the background. Experimental results showed that GrabCut accurately preserved leaf contours and diseased areas while eliminating background pixels, providing cleaner input data for the model. Although the overall classification accuracy was already high, indicating minimal background interference, this method offers a feasible route to disease recognition under complex field conditions. Regarding input feature sensitivity, a channel-masking experiment evaluated the model's dependence on the RGB color channels. Masking the blue channel significantly altered the predicted class and confidence, indicating that the model relies heavily on blue-channel features for severity classification, whereas masking the red or green channel had a relatively minor impact, reflecting selective channel dependency in the recognition process. Finally, Grad-CAM heatmaps were used to visualize the model's decision-making regions. The model focused primarily on areas of dense disease presence on the leaf surface, and samples of different severity levels exhibited distinct activation intensities and locations, suggesting that the model effectively distinguishes fine-grained differences in disease severity. These visualizations confirmed that the model's attention concentrates on key regions rather than background information, providing interpretable evidence for its decision-making process and enhancing its reliability and practical applicability.
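As referenced in the first conclusion, the following is a minimal PyTorch/torchvision sketch of the kind of augmentation pipeline described; the specific transforms and parameter values are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical augmentation pipeline (transform choices and parameters are
# illustrative, not the paper's exact configuration).
import torchvision.transforms as T

train_transforms = T.Compose([
    T.Resize((224, 224)),                          # match the network input size
    T.RandomHorizontalFlip(p=0.5),                 # mirror leaves left/right
    T.RandomRotation(degrees=15),                  # small viewpoint changes
    T.ColorJitter(brightness=0.2, contrast=0.2),   # illumination variability
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406],        # ImageNet statistics, consistent
                std=[0.229, 0.224, 0.225]),        # with ImageNet transfer learning
])
```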
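The next sketch shows the two architectural changes named above: an SE block built from global average pooling, two 1 × 1 convolutions, and a Sigmoid (matching the structure in Figure 4), and the replacement of the 7 × 7 stem convolution with a 3 × 3, stride-1, padding-1 kernel. Class and variable names are illustrative, and the exact placement of SE blocks within the backbone is not reproduced here.

```python
# Sketch of the SE block and conv1 modification; names are illustrative.
import torch.nn as nn
from torchvision.models import resnet34

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: reweight channels using global context."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                 # squeeze: H x W -> 1 x 1
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),  # first 1 x 1 convolution
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),  # second 1 x 1 convolution
            nn.Sigmoid(),                                   # per-channel weights in (0, 1)
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))                    # excitation: rescale channels

model = resnet34(weights="IMAGENET1K_V1")                   # transfer learning from ImageNet
# Replace the 7 x 7 / stride-2 stem with a 3 x 3 / stride-1 / padding-1 kernel to
# preserve early spatial resolution for fine leaf textures (this layer is retrained).
model.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
model.fc = nn.Linear(model.fc.in_features, 6)               # six severity levels (0, 1, 3, 5, 7, 9)
```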
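The GrabCut preprocessing step can be reproduced with OpenCV as below; the initialization rectangle margin, iteration count, and file paths are illustrative assumptions rather than the authors' settings.

```python
# Hypothetical GrabCut background removal; paths and parameters are illustrative.
import cv2
import numpy as np

img = cv2.imread("leaf.jpg")
mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)   # background GMM parameters
fgd_model = np.zeros((1, 65), np.float64)   # foreground GMM parameters

h, w = img.shape[:2]
rect = (10, 10, w - 20, h - 20)             # assume the leaf lies inside this box
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Keep pixels labeled definite or probable foreground (the leaf region).
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype("uint8")
cv2.imwrite("leaf_no_bg.jpg", img * fg[:, :, None])
```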
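The channel-masking sensitivity test amounts to zeroing one RGB channel before inference and comparing the prediction, as in the sketch below; `model` is assumed to be a trained classifier such as the one sketched above, and the helper name is hypothetical.

```python
# Hypothetical channel-masking test; assumes a trained `model` and an input
# tensor `x` of shape (1, 3, H, W) with channel order R, G, B.
import torch

def predict_with_masked_channel(model, x, channel=None):
    xm = x.clone()
    if channel is not None:
        xm[:, channel] = 0.0                     # zero out the chosen color channel
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(xm), dim=1)
    conf, cls = probs.max(dim=1)
    return cls.item(), conf.item()

# e.g., compare the unmasked prediction with the blue channel (index 2) masked:
# print(predict_with_masked_channel(model, x))
# print(predict_with_masked_channel(model, x, channel=2))
```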
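Finally, a compact Grad-CAM sketch is given below; targeting the last convolutional stage (`layer4`) follows common practice for ResNets and is an assumption about the authors' setup rather than their documented choice.

```python
# Minimal Grad-CAM sketch; assumes a ResNet-style `model` and input `x` of
# shape (1, 3, H, W).
import torch
import torch.nn.functional as F

def grad_cam(model, x, target_layer, class_idx=None):
    feats, grads = [], []
    h1 = target_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    model.eval()
    logits = model(x)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()  # explain the predicted class
    model.zero_grad()
    logits[0, class_idx].backward()              # gradients w.r.t. target activations
    h1.remove(); h2.remove()
    weights = grads[0].mean(dim=(2, 3), keepdim=True)            # channel importance
    cam = F.relu((weights * feats[0]).sum(dim=1, keepdim=True))  # weighted feature sum
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)[0, 0]
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)    # normalize to [0, 1]

# heatmap = grad_cam(model, x, model.layer4)   # overlay on the leaf image to inspect
```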

Author Contributions

Conceptualization, M.L., Q.W. and H.Z.; data curation, M.L.; formal analysis, M.L.; funding acquisition, Q.W.; investigation, M.L.; methodology, Q.W. and H.Z.; project administration, Q.W. and H.Z.; resources, Q.W.; software, Q.W.; supervision, H.Q. and W.G.; validation, Y.G., L.S. and Y.L.; visualization, G.Z.; writing—original draft, M.L.; writing—review and editing, Q.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (No. 32271993), Joint Fund of Advantageous Discipline Cultivation Project of the Henan Science and Technology Research and Development Program (No. 222301420114), Henan Science and Technology Tackling Project (No. 252102110367), Henan International Scientific and Technological Cooperation Project (No. 242102521027) and the Natural Science Foundation of Henan Province (No. 242301420143).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets presented in this article are not readily available because the data are part of an ongoing study. Requests to access the datasets should be directed to wqhda@henau.edu.cn.

Acknowledgments

The authors thank the College of Information and Management Sciences of Henan Agricultural University.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Lv, Z.; Yang, S.; Ma, S.; Wang, Q.; Sun, J.; Du, L.; Han, J.; Guo, Y.; Zhang, H. Efficient Deployment of Peanut Leaf Disease Detection Models on Edge AI Devices. Agriculture 2025, 15, 332.
  2. Wang, Y.M.; Zhang, Z.; Feng, L.W.; Ma, Y.; Du, Q. A new attention-based CNN approach for crop mapping using time series Sentinel-2 images. Comput. Electron. Agric. 2021, 184, 106090.
  3. Yele, V.P.; Sedamkar, R.R.; Alegavi, S. Systematic Analysis of Effective Segmentation and Classification for Land Use Land Cover in Hyperspectral Image using Deep Learning Methods: A Review of the State of the Art: Reviewing Deep Learning Techniques for Land Use and Cover in Hyperspectral Images. In Proceedings of the 2024 20th CSI International Symposium on Artificial Intelligence and Signal Processing (AISP), Babol, Iran, 21 February 2024; pp. 1–8.
  4. Liu, J.; Meng, H. Research on the maturity detection method of Korla pears based on hyperspectral technology. Agriculture 2024, 14, 1257.
  5. Wang, S.; Xu, D.; Liang, H.; Bai, Y.; Li, X.; Zhou, J.; Su, C.; Wei, W. Advances in Deep Learning Applications for Plant Disease and Pest Detection: A Review. Remote Sens. 2025, 17, 698.
  6. Zhao, C.; Li, C.; Wang, X.; Li, C.; Wang, X.; Wu, X.; Du, Y.; Chai, H.; Cai, T.; Xiang, H.; et al. Plant Disease Segmentation Networks for Fast Automatic Severity Estimation Under Natural Field Scenarios. Agriculture 2025, 15, 583.
  7. Zhang, M.; Liu, C.; Li, Z.; Yin, B. From Convolutional Networks to Vision Transformers: Evolution of Deep Learning in Agricultural Pest and Disease Identification. Agronomy 2025, 15, 1079.
  8. Ren, Z.; Liang, K.; Liu, Y.; Wu, X.; Zhang, C.; Mei, X.; Zhang, Y. A Neural Network with Multiscale Convolution and Feature Attention Based on an Electronic Nose for Rapid Detection of Common Bunt Disease in Wheat Plants. Agriculture 2025, 15, 415.
  9. Zhao, X.; Li, K.; Li, Y.; Ma, J.; Zhang, L. Identification method of vegetable diseases based on transfer learning and attention mechanism. Comput. Electron. Agric. 2022, 193, 106703.
  10. Hao, X.; Wang, Z.; Zhang, Y.; Li, F.; Wang, M.; Li, J.; Mao, R. Detecting wheat yellow dwarf disease by employing a Dual-Branch multiscale model from UAV multispectral images. Comput. Electron. Agric. 2025, 230, 109898.
  11. Bao, W.; Yang, Z.; Zhang, P.; Hu, F.; Hu, G.; Huang, L.; Yang, X. A domain adaptive wheat scab detection method for UAV images. Comput. Electron. Agric. 2025, 233, 110081.
  12. Xu, K.; Hou, Y.; Sun, W.; Chen, D.; Lv, D.; Xing, J.; Yang, R. A Detection Method for Sweet Potato Leaf Spot Disease and Leaf-Eating Pests. Agriculture 2025, 15, 503.
  13. Aloyce, A.A. Climate change and its impact on wheat stem rust disease dynamics in Tanzania. Discov. Agric. 2025, 3, 1–12.
  14. Sowmiya, M.; Krishnaveni, S. IoT enabled prediction of agriculture's plant disease using improved quantum whale optimization DRDNN approach. Meas. Sens. 2023, 27, 100812.
  15. Sharma, P.; Sharma, P.D.; Bansal, S. Optimum RBM encoded SVM model with ensemble feature Extractor-based plant disease prediction. Chemom. Intell. Lab. Syst. 2025, 258, 105319.
  16. Shabari, B.S.; Siddappa, M.; Surendra, S.; Vidyasagar, S.; Suresh, R. Detection and classification of diseased plant leaf images using hybrid algorithm. Multimed. Tools Appl. 2023, 82, 32349–32372.
  17. He, P.; Li, X.; Shen, W.; Deng, S.; Xiao, L.; Zhang, Y. Traceability and analysis method for measurement laboratory testing data based on intelligent Internet of Things and deep belief network. J. Intell. Syst. 2024, 33, 20240076.
  18. Zhang, C.; Ni, R.; Mu, Y.; Sun, Y.; Thobela, L. Lightweight Multi-scale Convolutional Neural Network for Rice Leaf Disease Recognition. Comput. Mater. Contin. 2023, 74, 983–994.
  19. Bai, Y.P.; Feng, Y.K.; Li, G.H.; Zhao, M.; Zhou, H.; Hou, Z. Algorithm of wheat disease image identification based on Vision Transformer. J. Chin. Agric. Mech. 2024, 45, 267.
  20. Sun, X.; Li, G.; Qu, P.; Xie, X.; Pan, X.; Zhang, W. Research on plant disease identification based on CNN. Cogn. Robot. 2022, 2, 155–163.
  21. Deng, Y.; Wang, X.; Long, C.; Liu, J.; Zhu, X.; Tan, S. Rice leaf blast lesion segmentation and disease severity grading based on VCDM-UNet. Trans. Chin. Soc. Agric. Eng. 2024, 40, 190–198.
  22. Zhang, H.; Dai, C.; Ren, J.; Wang, G.; Teng, F.; Wang, D. Research on grape disease identification method based on GhostNetV2 improved YOLO v8 model. J. Agric. Mech. 2025, 15, 1–11.
  23. Jiang, S.; Cao, Y.; Liu, Z.; Zhao, S.; Zhang, Z.; Wang, W. Tea leaf disease identification based on improved Faster RCNN. J. Huazhong Agric. Univ. 2024, 43, 41–50.
  24. Yang, X. Research on Wheat Disease Identification and Severity Grading Method Based on Convolutional Neural Networks; Anhui University: Hefei, China, 2022.
  25. Picon, A.; Alvarez-Gila, A.; Seitz, M.; Ortiz-Barredo, A.; Echazarra, J.; Johannes, A. Deep convolutional neural networks for mobile capture device-based crop disease classification in the wild. Comput. Electron. Agric. 2019, 161, 280–290.
  26. Wu, Y.; Wu, Y.; Wang, B.; Yang, H. A Remote Sensing Method for Crop Mapping Based on Multiscale Neighborhood Feature Extraction. Remote Sens. 2022, 15, 47.
  27. Zhang, Y.; Zhang, N.; Zhu, J.; Sun, T.; Chai, X.; Dong, W. Efficient Triple Attention and AttentionMix: A Novel Network for Fine-Grained Crop Disease Classification. Agriculture 2025, 15, 313.
  28. GB/T 17980.22—2000; Pesticide—Guidelines for the Field Efficacy Trials (I)—Fungicides Against Cereal Powdery Mildew. Standardization Administration of China: Beijing, China, 2000.
  29. Singh, R.N.; Krishnan, P.; Bharadwaj, C.; Das, B. Improving prediction of chickpea wilt severity using machine learning coupled with model combination techniques under field conditions. Ecol. Inform. 2023, 73, 101933.
  30. Di, J.; Li, Q. A method of detecting apple leaf diseases based on improved convolutional neural network. PLoS ONE 2022, 17, e0262629.
  31. Zhang, D.; Hou, L.; Lv, L.; Qi, H.; Sun, H.; Zhang, X.; Li, S.; Min, J.; Liu, Y.; Tang, Y.; et al. Precision Agriculture: Temporal and Spatial Modeling of Wheat Canopy Spectral Characteristics. Agriculture 2025, 15, 326.
  32. Al-Gaashani, M.S.; Shang, F.; Abd El-Latif, A.A. Ensemble Learning of Lightweight Deep Convolutional Neural Networks for Crop Disease Image Detection. J. Circuits Syst. Comput. 2023, 32, 2350086.
  33. Nirmal, R.; Senthil, P.; Sanjay, S.; Grish, K.S.; Shamimul, Q.; Prabhu, C.A. Computer aided agriculture development for crop disease detection by segmentation and classification using deep learning architectures. Comput. Electr. Eng. 2022, 103, 108357.
  34. Feng, G.; Gu, Y.; Wang, C.; Zhang, D.; Xu, R.; Zhu, Z.; Luo, B. Wheat Fusarium head blight severity grading using generative adversarial networks and semi-supervised segmentation. Comput. Electron. Agric. 2025, 229, 109817.
  35. Liu, Y.; Ma, X.; An, L.; Sun, H.; Zhao, F.; Yan, X.; Ma, Y.T.; Li, M. Exploring UAV narrow-band hyperspectral indices and crop functional traits derived from radiative transfer models to detect wheat powdery mildew. Int. J. Appl. Earth Obs. Geoinf. 2025, 141, 104627.
  36. Wang, B. Zero-exemplar deep continual learning for crop disease recognition: A study of total variation attention regularization in vision transformers. Front. Plant Sci. 2024, 141, 1283055.
  37. Quach, N.T.; Vu, T.H.N.; Nguyen, T.T.A.; Le, P.C.; Do, H.G.; Nguyen, T.D.; Thao, P.T.H.; Nguyen, T.T.L.; Chu, H.H.; Phi, Q.-T. Metabolic and genomic analysis deciphering biocontrol potential of endophytic Streptomyces albus RC2 against crop pathogenic fungi. Braz. J. Microbiol. 2023, 54, 2617–2626.
  38. Mohaimin, A.Z.A.; Krishnamoorthy, S.; Shivanand, P. A critical review on bioaerosols—Dispersal of crop pathogenic microorganisms and their impact on crop yield. Braz. J. Microbiol. 2024, 55, 587–628.
  39. Ghazanfar, L.; Abdelhamid, S.E.; Mallouhy, R.E.; Alghazo, J.; Kazimi, Z.A. Deep Learning Utilization in Agriculture: Detection of Rice Plant Diseases Using an Improved CNN Model. Plants 2022, 11, 2230.
  40. Gunawan, M.Z.; Sihombing, P.; Sutarman. Optimization of the CNN model for smart agriculture. IOP Conf. Ser. Mater. Sci. Eng. 2021, 1088, 012029.
  41. Vasiliki, M.S.; Theodora, S.; Argyrios, S.; Minas, D. An Efficient Hybrid CNN Classification Model for Tomato Crop Disease. Technologies 2023, 11, 10.
  42. Mallick, M.D.T.; Biswas, S.; Das, A.K.; Saha, H.N.; Chakrabarti, A.; Deb, N. Deep learning based automated disease detection and pest classification in Indian mung bean. Multimed. Tools Appl. 2023, 82, 12017–12041.
  43. Fu, Y.; Guo, L.; Huang, F. A lightweight CNN model for pepper leaf disease recognition in a human palm background. Heliyon 2024, 10, e33447.
  44. Saleh, A.; Momina, M. Efficient attention-based CNN network (EANet) for multi-class maize crop disease classification. Front. Plant Sci. 2022, 13, 1003152.
  45. Jose, G.E.; Margarita, G.; Melitsa, T.T.; Natasha, M.; Juan, C.; Calabria, S.; Romany, F.M. Intelligent Sine Cosine Optimization with Deep Transfer Learning Based Crops Type Classification Using Hyperspectral Images. Can. J. Remote Sens. 2022, 48, 621–632.
  46. Woźniak, A.; Haliniarz, M. Response of winter wheat to 35-year cereal monoculture. Agriculture 2025, 15, 489.
  47. Tao, J.; Li, X.; He, Y.; Islam, M.A. CEFW-YOLO: A High-Precision Model for Plant Leaf Disease Detection in Natural Environments. Agriculture 2025, 15, 833.
  48. Feng, X.; Li, D.; Wang, W.; Zheng, G.; Liu, H.; Sun, Y.; Liang, S.; Yang, Y.; Zang, H. Wheat leaf disease image recognition based on lightweight convolutional neural network and transfer learning. Henan Agric. Sci. 2021, 50, 174–180.
  49. Lou, G.Z.; Zhang, L.L.; Guo, Z.H.; Bi, X.; Zhao, K. Lightweight wheat disease recognition network based on multi-scale attention. J. Inn. Mong. Agric. Univ. (Nat. Sci. Ed.) 2025, 1–14.
  50. Gao, C.; Ji, X.; He, Q.; Gong, Z.; Sun, H.; Wen, T.; Guo, W. Monitoring of wheat fusarium head blight on spectral and textural analysis of UAV multispectral imagery. Agriculture 2023, 13, 293.
  51. Gamal, N.R.; Fattah, A.A.; El-Rashidy, M.A.; El-Rashidy, A.E.; El-Sayed, A.; Hemdan, E.E.-D. An Efficient Plant Disease Recognition System Using Hybrid Convolutional Neural Networks (CNNs) and Conditional Random Fields (CRFs) for Smart IoT Applications in Agriculture. Int. J. Comput. Intell. Syst. 2022, 15, 65.
Figure 1. Geographical location of the study area.
Figure 2. Wheat powdery mildew dataset legend.
Figure 3. Residual structure diagram.
Figure 4. Squeeze-and-Excitation (SE) block: global average pooling of input F (H × W × C) yields channel-wise weights s (1 × 1 × C) via two 1 × 1 convolutions and a Sigmoid, which then scale F to produce the output F_scale.
Figure 5. QY-SE-MResNet34 model.
Figure 6. Training and validation accuracy curves of each model.
Figure 7. Comparison of original and background-removed leaf images using the GrabCut algorithm.
Figure 8. Data augmentation results for wheat powdery mildew images.
Figure 9. Effect of data augmentation on training and validation accuracy.
Figure 10. Confusion matrix of each model.
Figure 11. Visual comparison of the original image and images with masked color channels.
Figure 12. Visualization of attention regions on wheat leaf images with severity levels of 3 and 9 using Grad-CAM.
Table 1. Wheat powdery mildew disease severity grading standards.

Disease Severity Level | Symptom Characteristics
0 | No disease
1 | Disease spots cover less than 5% of the total leaf area
3 | Disease spots cover 6% to 15% of the total leaf area
5 | Disease spots cover 16% to 25% of the total leaf area
7 | Disease spots cover 26% to 50% of the total leaf area
9 | Disease spots cover more than 50% of the total leaf area
Table 2. Hyperparameter optimization.

No. | Epoch | Batch Size | Learning Rate | Validation Accuracy/%
1 | 100 | 64 | 0.01 | 63.83
2 | 100 | 32 | 0.001 | 68.58
3 | 200 | 64 | 0.001 | 72.61
4 | 200 | 32 | 0.001 | 76.74
Table 3. Accuracy and loss values of each model.

Network Model | Training Accuracy/% | Validation Accuracy/% | Training Loss | Validation Loss
ResNet34 | 78 | 76 | 0.8245 | 0.8268
ResNet50 | 74 | 72 | 0.8743 | 1.1738
VGG16 | 67 | 63 | 0.9588 | 1.0913
MobileNetV2 | 65 | 60 | 1.0912 | 0.9805
QY-SE-MResNet34 | 89 | 87 | 0.6925 | 0.6771
Table 4. Comparison of model recognition accuracy before and after image augmentation.

Method | Training Set Accuracy | Validation Set Accuracy
Without augmentation | 86.4% | 83.7%
With augmentation | 89.9% | 87.6%
Table 5. Analysis of prediction results of each network model. Columns 0–9 denote disease severity levels.

Network Model | Metric | 0 | 1 | 3 | 5 | 7 | 9
ResNet34 | Precision | 0.84 | 0.83 | 0.82 | 0.82 | 0.58 | 0.74
ResNet34 | Recall | 0.91 | 0.83 | 0.82 | 0.85 | 0.59 | 0.68
ResNet34 | F-score | 0.88 | 0.83 | 0.82 | 0.84 | 0.59 | 0.71
ResNet50 | Precision | 0.73 | 0.66 | 0.66 | 0.78 | 0.81 | 0.72
ResNet50 | Recall | 0.57 | 0.73 | 0.55 | 0.66 | 0.81 | 0.75
ResNet50 | F-score | 0.66 | 0.70 | 0.61 | 0.71 | 0.81 | 0.73
VGG16 | Precision | 0.67 | 0.61 | 0.54 | 0.74 | 0.37 | 0.69
VGG16 | Recall | 0.39 | 0.71 | 0.65 | 0.25 | 0.49 | 0.71
VGG16 | F-score | 0.51 | 0.66 | 0.59 | 0.38 | 0.42 | 0.70
MobileNetV2 | Precision | 0.67 | 0.77 | 0.45 | 0.37 | 0.55 | 0.44
MobileNetV2 | Recall | 0.60 | 0.66 | 0.43 | 0.36 | 0.63 | 0.44
MobileNetV2 | F-score | 0.63 | 0.70 | 0.44 | 0.36 | 0.59 | 0.44
QY-SE-MResNet34 | Precision | 1.00 | 0.85 | 0.95 | 0.86 | 0.81 | 0.95
QY-SE-MResNet34 | Recall | 0.66 | 0.96 | 0.89 | 0.76 | 0.92 | 0.96
QY-SE-MResNet34 | F-score | 0.79 | 0.90 | 0.92 | 0.81 | 0.86 | 0.95
Table 6. Classification test results of wheat powdery mildew in each network model.

Network Model | Precision | Recall | F-Score | Accuracy
ResNet34 | 76.30% | 76.41% | 75.80% | 76.20%
ResNet50 | 72.67% | 68.83% | 70.30% | 69.00%
VGG16 | 64.10% | 62.70% | 63.60% | 63.40%
MobileNetV2 | 60.70% | 61.00% | 59.80% | 60.50%
QY-SE-MResNet34 | 88.60% | 85.83% | 86.17% | 87.00%
Table 7. Ablation experiment results.

Network Model | Precision | Recall | F-Score | Accuracy
ResNet34 | 76.30% | 76.41% | 75.80% | 76.20%
QY-ResNet34 | 78.20% | 79.16% | 78.50% | 79.00%
SE-ResNet34 | 82.11% | 85.01% | 83.78% | 81.00%
MResNet34 | 79.31% | 82.32% | 80.65% | 80.00%
QY-SE-MResNet34 | 88.60% | 85.83% | 86.17% | 87.00%
Table 8. Predicted classes and confidence scores of the model under different channel masking conditions.

Masked Channel | Predicted Class | Confidence
Original (unmasked) | 9 | 94.80%
Blue | 9 | 88.50%
Green | 9 | 93.68%
Red | 9 | 93.75%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

