Article

Recognition of Strawberry Powdery Mildew in Complex Backgrounds: A Comparative Study of Deep Learning Models

College of Information and Electrical Engineering, China Agricultural University, No. 17, Qinghua East Road, Haidian District, Beijing 100091, China
*
Author to whom correspondence should be addressed.
AgriEngineering 2025, 7(6), 182; https://doi.org/10.3390/agriengineering7060182
Submission received: 25 April 2025 / Revised: 28 May 2025 / Accepted: 3 June 2025 / Published: 9 June 2025

Abstract

Powdery mildew is one of the most common diseases affecting strawberry yield and quality. Accurate and timely detection is essential to reduce pesticide usage and labor costs. However, recognizing strawberry powdery mildew in complex field environments remains a significant challenge. In this study, an HSV-based image segmentation method was employed to enhance the extraction of disease regions from complex backgrounds. A total of 14 widely used deep learning models—including SqueezeNet, GoogLeNet, ResNet-50, AlexNet, and others—were systematically evaluated for their classification performance. To address sample imbalance, data augmentation was applied to 2372 healthy and 553 diseased leaf images, resulting in 11,860 training samples. Experimental results showed that InceptionV4, DenseNet-121, and ResNet-50 achieved superior performance across metrics such as accuracy, F1-score, recall, and loss. Models such as MobileNetV2, AlexNet, VGG-16, and InceptionV3 also demonstrated clear strengths, whereas SqueezeNet, VGG-19, and EfficientNet left room for further improvement. These findings demonstrate that CNN models originally developed for other crop diseases can be effectively adapted to detect strawberry powdery mildew under complex conditions. Future work will focus on enhancing model robustness and deploying the system for real-time field monitoring.

1. Introduction

Strawberries are rich in nutrients and have high commercial value [1]. Their growth is sensitive to temperature, humidity, and light conditions. Strawberries grow best in a temperate climate at a temperature of 15–25 °C and a humidity of 60–80%. Throughout the growth cycle, the morphology and physiological state of strawberry leaves change significantly. In the early vegetative stage, the leaves are tender, smaller in size, and more susceptible to pathogen invasion due to incomplete development of the cuticle. As the plant matures, the leaves become thicker and develop a more robust waxy layer, which offers moderate resistance to external stress. However, during the fruiting stage, increased leaf surface area and metabolic activity may again increase susceptibility to diseases. The strawberry plant is highly susceptible to powdery mildew (PM) during its growth; the disease can affect petioles, leaves, and fruits and is an important factor leading to reductions in strawberry production [2]. The symptoms of PM vary by infection stage. In the initial infection period, small white powdery spots appear on the underside of young leaves. As the disease progresses, the white mycelium spreads across the leaf surface, leading to leaf curling, chlorosis, and eventually necrosis. In severe cases, fruit surfaces may also become covered in fungal growth, affecting both appearance and edibility. Traditional control of strawberry powdery mildew relies mainly on a strategy combining physical intervention with chemical control [3]; for example, crop managers may walk the fields and spray fungicides. However, long-term, indiscriminate use of fungicides can cause PM fungi to develop resistance, increase farm operating costs, and have negative impacts on the environment [4].
The incubation period of powdery mildew produces few visible signs. By the time obvious lesions are visible to the naked eye, the mycelium has already formed secondary transmission nodes [5], and the best control window has been missed. In the early stages of infection, powdery mildew often presents as extremely small and scattered white powdery spots, usually located on the underside of young leaves, and may be accompanied by subtle changes in texture or reflectance. These features are difficult to distinguish under natural lighting, especially in field environments with background noise such as soil, shadows, and overlapping leaves. As a result, early-stage lesions are easily confused with non-pathological visual elements such as dust, water spots, or natural trichomes. This visual subtlety poses a major challenge for recognition from standard RGB imagery. Nevertheless, with the increasing sensitivity of deep learning models and the application of techniques such as high-resolution imaging, texture enhancement, and contrast normalization, some studies suggest that it is possible to identify patterns indicative of early infection. Although precise prediction of the incubation stage remains difficult, deep learning-based image analysis holds promise for detecting early symptoms before they become visually obvious to the human eye.
In recent years, advances in computer vision technology have accelerated the progress of image detection, and the use of machine learning and deep learning is gradually becoming a standard solution for plant disease detection. Disease detection technology based on deep learning has made significant progress. Mohanty et al. (2016) used a CNN model to classify 14 crop diseases with an accuracy rate of 99.35% [6]. Current research focuses on single background or detached leaf recognition. Detection of strawberry powdery mildew in complex field environments, such as uneven lighting, occlusion by branches and leaves, and interference from soil background, still faces two major challenges: first, feature confusion caused by background noise, and second, the model’s insufficient generalization ability in variable field scenes. Traditional machine learning methods construct classification models by extracting features such as leaf color and texture. For example, Li et al. (2019) achieved an accuracy of 86.5% in identifying apple scab based on HSV color space and LBP texture features [7]. Deep learning methods perform better in complex backgrounds due to their automatic feature extraction advantage. The disease classification accuracy of the classic CNN models ResNet and Inception on the public dataset PlantVillage generally exceeds 95% [8,9]. Object detection models such as YOLOv5 have also been used to locate disease in drone aerial images [10], with a recall rate of 89.2% in field tests.
Most current datasets collect single leaf images in controlled environments, with a sample size of less than 100,000, and lack labeled data in complex scenes. According to Hughes and Salathé (2015), more than 70% of disease recognition models suffer a 30–50% performance drop due to the mismatch between training data and actual scene distribution [11]. In addition, the scarcity of rare disease samples leads to the long-tail distribution problem: the proportion of samples with early symptoms of strawberry powdery mildew is less than 1%, which seriously restricts the generalization ability of the model [12].
An effective solution to the problem of insufficient data sets is to use transfer learning. The feature transfer of pre-trained models can reduce the dependence on the amount of target data. Atila et al. (2024) fine-tuned EfficientNet-B4 and achieved a tomato disease recognition accuracy of 91.7% using only 500 field images [13]. Domain adaptation technology adapts laboratory images to the field domain through style transfer, reducing distribution bias [14].
Based on the above analysis, we evaluated the performance of 14 models commonly used for agricultural disease identification. These models, including VGG-16, ResNet-50, AlexNet, and DenseNet-121, were used to identify strawberry powdery mildew in complex backgrounds. The technical route of the experiment is shown in Figure 1. Although different CNNs have been successfully applied to detecting crop diseases, no studies have demonstrated their suitability for detecting strawberry powdery mildew. A technology that can quickly detect PM in strawberry fields would help producers apply fungicides in a timely and effective manner while reducing the labor required to scout for disease in the early stages of crop growth. Therefore, the main objectives of this paper are: (1) to analyze the performance of the 14 selected CNNs on the strawberry powdery mildew dataset with complex backgrounds; and (2) to compare the performance of different CNNs in terms of precision, recall, feature extraction ability, etc. [15,16,17].

2. Materials and Methods

2.1. Dataset Construction and Preprocessing

The healthy leaf images were captured with a Xiaomi 13 smartphone camera (manufactured by FIH Precision Electronics (Langfang) Co., Ltd., Langfang, Hebei Province, China) under natural lighting conditions. The key parameters during image acquisition were as follows: the leaf pitch angle was 30° ± 5°, the camera-to-leaf distance was 50 ± 5 cm, and the relative humidity was 60 ± 5%. The light intensity under natural conditions varied between 200 and 1200 lux, preserving the natural lighting variation. The images were saved in JPG format at a resolution of 1440 × 1080 pixels. To ensure the representativeness of the healthy samples, data collection was conducted across a full strawberry growing season, from the early vegetative stage in October to the late fruiting stage in January. This time span captured variations in leaf morphology and physiological states under realistic cultivation conditions. Moreover, sampling was systematically performed from seven evenly spaced furrows (1, 3, 5, 7, 9, 11, and 13) within a 14-furrow greenhouse, accounting for spatial heterogeneity in microclimate and plant health status. These procedures followed standardized agricultural phenotyping guidelines to ensure biological and environmental diversity in the dataset.
A total of 2372 healthy strawberry leaf images were collected from October 2024 to January 2025 at the greenhouse base of the Pinggu Science and Technology Courtyard, China Agricultural University, Beijing, China (117.0° E, 40.14° N). The greenhouse contains 14 furrows, and images were captured from furrows 1, 3, 5, 7, 9, 11, and 13 (Figure 2), with images collected every two weeks. In addition, 553 diseased leaf images with visible symptoms of strawberry powdery mildew were obtained from a publicly available dataset on Kaggle. To enable effective joint utilization of the healthy and diseased images, a uniform preprocessing pipeline was applied to both datasets. This included resizing all images to 1440 × 1080 pixels, applying color space conversion to HSV for consistent chromatic feature extraction, and employing a background complexity filtering step to align environmental noise levels. Additionally, visual inspection and quality control ensured that both datasets contained clear leaf regions without occlusion or severe motion blur. As a result, the integrated multi-source dataset preserves disease-relevant visual features while minimizing domain discrepancies, facilitating robust model training and generalization under realistic field conditions. These combined datasets formed a multi-source dataset for strawberry powdery mildew detection in complex backgrounds, as shown in Figure 3, which also shows representative examples of healthy and diseased leaves, as well as different infection severity and growth stages. These images highlight key visual differences, such as the presence of white powdery spots and leaf curling, which are more obvious in the later stages.
All images were manually annotated using the Labelme 5.7.0 tool. Each complete leaf was enclosed in a rectangular box and assigned a binary label: “1” for powdery mildew-infected leaves and “0” for healthy leaves. Labeling was performed according to consistent visual criteria and was quality-checked to ensure accuracy.
The entire dataset was randomly split into 80% for training and 20% for testing.
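As a minimal illustration of this step, the split could be performed with scikit-learn's train_test_split; the file names and labels below are dummy placeholders rather than the actual annotation output.

```python
from sklearn.model_selection import train_test_split

# Dummy stand-ins for the annotated image paths and binary labels
# (1 = powdery mildew, 0 = healthy); the real lists come from the
# Labelme annotations described above.
image_paths = [f"img_{i:04d}.jpg" for i in range(10)]
labels = [0, 1, 0, 0, 1, 0, 1, 0, 0, 1]

# Random 80/20 split into training and test sets.
train_paths, test_paths, train_labels, test_labels = train_test_split(
    image_paths, labels, test_size=0.2, random_state=42)
```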

2.2. Data Augmentation

Images captured under natural light often exhibit issues such as overexposure, noise, or blurriness. To mitigate these challenges and enhance model robustness, several data augmentation techniques were applied, including:
  • Random rotation within the range [−10°, 10°]
  • Horizontal and vertical flipping
  • Random cropping with a scale range of 0.8–1.0
  • Addition of Gaussian noise (mean = 0, variance = 0.01)
  • Color jittering to simulate natural light variations
After augmentation, the total number of images increased to 11,860, including both original and augmented samples. The effects of the data augmentation process are shown in Figure 4.
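For reference, the listed operations could be assembled into a single pipeline with paddle.vision.transforms, as sketched below; the flip probabilities, jitter strengths, 224 × 224 output size, and the AddGaussianNoise helper are illustrative assumptions, since the text specifies only the operation types and their ranges.

```python
import numpy as np
import paddle.vision.transforms as T

class AddGaussianNoise:
    """Custom helper (not a built-in Paddle transform) that adds Gaussian
    noise with mean 0 and variance 0.01 to an image scaled to [0, 1]."""
    def __init__(self, mean=0.0, var=0.01):
        self.mean = mean
        self.std = var ** 0.5

    def __call__(self, img):
        arr = np.asarray(img).astype("float32") / 255.0
        noisy = arr + np.random.normal(self.mean, self.std, arr.shape)
        return (np.clip(noisy, 0.0, 1.0) * 255.0).astype("uint8")

# Pipeline mirroring the operations listed above.
train_transforms = T.Compose([
    T.RandomRotation(10),                        # random rotation within [-10°, 10°]
    T.RandomHorizontalFlip(0.5),
    T.RandomVerticalFlip(0.5),
    T.RandomResizedCrop(224, scale=(0.8, 1.0)),  # random cropping, scale 0.8–1.0
    T.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),  # light variation
    AddGaussianNoise(mean=0.0, var=0.01),
])
```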

2.3. Image Segmentation

In complex field environments, irrelevant information such as weeds, soil, and pipes can interfere with the model’s ability to accurately identify strawberry leaves, leading to challenges in the detection of powdery mildew. The color and texture of the soil are similar to certain features of powdery mildew, which makes it difficult for the model to focus solely on the strawberry leaves, thereby complicating the extraction and classification of disease features. Additionally, the presence of excessive background information increases model training time, as the model needs to process a larger volume of irrelevant data [18,19,20,21].
To address these challenges, we applied an HSV-based image segmentation method. The HSV color space offers better sensitivity to color differences, making it more effective in distinguishing the green leaves from interfering background elements. The input image was first converted from BGR to HSV color space. Using the characteristic sensitivity of HSV to color variations, we defined the green hue range for segmentation:
  • Lower bound: (35, 43, 46)
  • Upper bound: (77, 255, 255)
This range effectively isolates the green areas of the leaves. The S (saturation) and V (value) components were used to filter out areas with low saturation or excessively dark regions. A 5 × 5 kernel was then applied to the binary mask to remove small noise and interference. Finally, a bitwise AND operation was performed between the original image and the binary mask to extract the relevant leaf areas, as shown in Figure 5.
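The procedure above can be expressed in a few lines of OpenCV; the sketch below follows the stated bounds and kernel size, while the function name and the use of a morphological opening to clean the mask are our own illustrative choices.

```python
import cv2
import numpy as np

def segment_leaf(bgr_image):
    # Convert from BGR to HSV color space.
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)

    # Green hue range defined above (OpenCV scales H to 0-179).
    lower_green = np.array([35, 43, 46])
    upper_green = np.array([77, 255, 255])
    mask = cv2.inRange(hsv, lower_green, upper_green)

    # 5 x 5 kernel to remove small noise from the binary mask
    # (interpreted here as a morphological opening).
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # Bitwise AND keeps only the leaf regions of the original image.
    return cv2.bitwise_and(bgr_image, bgr_image, mask=mask)

# Example usage with a hypothetical file name:
# segmented = segment_leaf(cv2.imread("strawberry_leaf.jpg"))
```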
This segmentation technique significantly reduces the impact of background noise, enhancing the accuracy of subsequent deep learning models in identifying powdery mildew on strawberry leaves.

2.4. Model Architecture and Training

In this study, we selected 14 widely used convolutional neural network (CNN) models to classify strawberry leaves as either healthy or infected with powdery mildew. These models include classic architectures such as VGG-16, ResNet-50, and AlexNet, as well as lightweight and modern networks like MobileNetV2, EfficientNet-B0, and SqueezeNet. The core idea behind CNNs is to extract hierarchical features through convolutional layers, which are then used for classification. The final classification layer outputs a binary label representing the health status of the leaf [22,23,24,25].
Each model has distinct characteristics in terms of convolutional depth, receptive field, input resolution, and feature extraction mechanism. Table 1 summarizes the structure and main features of the selected models. For instance, ResNet-50 introduces residual connections to mitigate gradient vanishing problems in deep networks, while DenseNet-121 leverages dense feature reuse to enhance learning efficiency. Lightweight models such as MobileNetV2/V3 and SqueezeNet are optimized for deployment on resource-constrained devices through techniques like depthwise separable convolution and squeeze-and-excitation modules.

2.5. Training Settings

All models were implemented using the PaddlePaddle 2.4.0 deep learning framework and trained on a server with the following hardware configuration:
  • GPU: NVIDIA Tesla V100
  • CPU: 2 Cores
  • RAM: 16 GB
  • Video Memory: 16 GB
  • Disk: 100 GB
We employed transfer learning by initializing each model with ImageNet-pretrained weights, followed by fine-tuning on our custom dataset. The classifier layer was modified to output two categories: healthy and diseased. The models were trained using the Adam optimizer with a fixed learning rate of 0.001, a batch size of 32, and for 50 epochs. The cross-entropy loss function was used for binary classification. These hyperparameters were selected based on preliminary experiments and are commonly used in plant disease detection tasks.
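A minimal sketch of this transfer-learning setup in PaddlePaddle is shown below, using ResNet-50 as one example of the 14 networks; the dataset objects are placeholders, and the exact training-loop code used in the study may differ.

```python
import paddle
from paddle.vision.models import resnet50

# Load an ImageNet-pretrained backbone and replace the classifier head with a
# two-class output (healthy vs. diseased); ResNet-50 stands in for any of the
# 14 evaluated networks.
net = resnet50(pretrained=True)
net.fc = paddle.nn.Linear(2048, 2)

model = paddle.Model(net)
model.prepare(
    optimizer=paddle.optimizer.Adam(learning_rate=0.001,
                                    parameters=net.parameters()),
    loss=paddle.nn.CrossEntropyLoss(),
    metrics=paddle.metric.Accuracy(),
)

# train_dataset and test_dataset are placeholders for the augmented and
# segmented strawberry-leaf datasets described in Sections 2.1-2.3.
# model.fit(train_dataset, test_dataset, epochs=50, batch_size=32)
```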
This unified training setup ensures a fair performance comparison across all 14 models under consistent environmental and hyperparameter conditions.

3. Results

3.1. Model Performance Evaluation

In this experiment, fourteen convolutional neural network (CNN) models with diverse architectures were trained and tested to evaluate their performance on strawberry powdery mildew recognition under complex backgrounds. The selected models included both classical CNNs (e.g., AlexNet, VGG-16/19) and advanced architectures (e.g., DenseNet-121, InceptionV3/V4, EfficientNet-B0).
To provide a comprehensive performance comparison, four evaluation metrics were employed: Accuracy, Recall, F1-score, and Area Under the Receiver Operating Characteristic Curve (AUC) [26,27,28,29,30,31,32]. The formulas are defined as follows:
Accuracy = (TP + TN)/M
Recall = TP/(TP + FN)
F1-Score = 2TP/(2TP + FP + FN)
AUC: Calculated from the true positive rate (TPR) and false positive rate (FPR).
where:
  • TP = True Positives
  • TN = True Negatives
  • FP = False Positives
  • FN = False Negatives
  • M = Total number of samples
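For illustration, these metrics can be computed directly with scikit-learn; the label and score arrays below are hypothetical examples, with 1 denoting a diseased leaf.

```python
from sklearn.metrics import accuracy_score, recall_score, f1_score, roc_auc_score

# Hypothetical ground-truth labels, predicted labels, and predicted
# probabilities for the "diseased" class (1 = diseased, 0 = healthy).
y_true  = [1, 0, 1, 1, 0, 0]
y_pred  = [1, 0, 1, 0, 0, 0]
y_score = [0.92, 0.08, 0.85, 0.42, 0.11, 0.30]

print("Accuracy:", accuracy_score(y_true, y_pred))   # (TP + TN) / M
print("Recall:  ", recall_score(y_true, y_pred))     # TP / (TP + FN)
print("F1-score:", f1_score(y_true, y_pred))         # 2TP / (2TP + FP + FN)
print("AUC:     ", roc_auc_score(y_true, y_score))   # area under the TPR-FPR curve
```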
Figure 6 illustrates the training loss curves of all models. Most models, except for VGG-19, VGG-16, IDCNN, and MobileNetV3, showed rapid convergence and achieved training loss below 0.1, demonstrating effective model training.
Table 2 presents the average test set values for Accuracy, Recall, and F1-score. Notably, DenseNet-121, InceptionV3, InceptionV4, MobileNetV2, and MobileNetV3 all achieved accuracy levels above 98%. SqueezeNet, however, performed poorly with an accuracy of only 20.17%, indicating its unsuitability for this task.
In terms of F1-score, InceptionV4 achieved the highest value of 0.99, followed by DenseNet-121, ResNet-50, and InceptionV3, all reaching 0.98, indicating that these models maintain an excellent balance between precision and recall. IDCNN also reached 0.99, but given its long inference time, this advantage must be weighed against the deployment scenario in practical applications. MobileNetV3 also reached 0.98, which is particularly outstanding among the lightweight models, while MobileNetV2 was slightly lower at 0.92 but still maintained good performance.
In terms of recall, DenseNet-121 and InceptionV4 both reached 0.99, and InceptionV3, MobileNetV3, and IDCNN remained at around 0.98, indicating that these models identify most positive samples with extremely low missed-detection rates. In contrast, the recall of VGG-19 was only 0.89, and that of SqueezeNet was even lower at 0.14, which is clearly insufficient.
In terms of accuracy, InceptionV4, InceptionV3, and DenseNet-121 all exceeded 98%, and MobileNetV3, MobileNetV2, and ResNet-50 also stabilized at around 98%, showing strong overall recognition capability. However, the accuracy of SqueezeNet was only 20.17%, far lower than the other models and of no practical value for this task.
To further evaluate the comprehensive performance of each model in the strawberry powdery mildew recognition task, this study compared the accuracy of the 14 models with the computational complexity (expressed in TFLOPs) of the inference stage. The results show that although most models differ little in accuracy, there are significant differences in computational efficiency.
Among them, models such as Inception V4, DenseNet-121, PM_GHSI, and Inception V3 have extremely low TFLOPs while maintaining accuracies of roughly 97% or higher, showing very high computational efficiency and cost-effectiveness. For example, Inception V4 reaches an accuracy of 99.23% with only 0.0002 TFLOPs, giving an Accuracy/TFLOPs ratio of 496,150, significantly better than the other models. In comparison, although VGG-16 and VGG-19 achieve accuracies of 91.89% and 85.38%, respectively, they incur high computational overhead and low cost-effectiveness, making them unsuitable for deployment on resource-constrained devices.
In addition, lightweight models such as MobileNetV3 and AlexNet also show an accuracy of more than 92% with minimal computational overhead, providing a viable option for edge deployment. Overall, the cost-effectiveness evaluation combining accuracy and computational efficiency helps to select more practical deep learning models, especially for agricultural scenarios with limited computing resources.
From the above experimental results, it can be seen that among the fourteen convolutional neural network models evaluated in this study, SqueezeNet performed poorly in all major indicators, with an F1 score of only 0.19, a recall rate of 0.14, and an overall accuracy of only 20.17%. This result is in sharp contrast to the performance of other lightweight models (such as MobileNetV3 with an F1 score of 0.98). We conducted an in-depth analysis of its architectural design.
SqueezeNet relies heavily on the Fire module, composed of 1 × 1 and 3 × 3 convolutions, to reduce the number of parameters. Although this minimalist design effectively reduces model size, it limits the depth and diversity of the features the network can extract. For complex image classification tasks, such as detecting powdery mildew on strawberry leaves, fine-grained texture, edge information, and contextual features are crucial, and this architectural simplification significantly reduces the effectiveness of the model. In addition, the absence of an attention mechanism in the original design hinders the model's ability to capture and integrate multi-scale and high-level semantic features [33,34,35,36,37,38].
To address these limitations, the following improvements can be considered:
  • Incorporating channel attention mechanisms, residual connections, or multi-branch feature aggregation into SqueezeNet to enhance its expressive power.
  • Distilling knowledge from high-performance models such as DenseNet-121 or InceptionV4 into lightweight SqueezeNet variants, so that they approach the performance of deeper models while maintaining computational efficiency (see the sketch after this list).
  • Using SqueezeNet in a multi-stage framework as a preliminary filter to identify candidate regions for further inspection by more accurate models.
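The distillation idea mentioned above could take a form such as the following; this is a generic temperature-scaled soft-target loss written in PaddlePaddle, not an objective used in this study, and the temperature and weighting values are arbitrary.

```python
import paddle.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Temperature-scaled soft targets from the teacher (e.g., DenseNet-121),
    # mixed with ordinary cross-entropy on the ground-truth labels.
    soft_targets = F.softmax(teacher_logits / T, axis=-1)
    log_student = F.log_softmax(student_logits / T, axis=-1)
    kd_term = F.kl_div(log_student, soft_targets, reduction="mean") * (T * T)
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1 - alpha) * ce_term
```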
In summary, although SqueezeNet has significant advantages in terms of parameter count and model size, its architecture lacks the representational power needed for accurate disease recognition in complex agricultural images. Future work should attempt to improve its feature extraction capabilities or deploy it selectively in scenarios with strict resource constraints and relaxed accuracy requirements.

3.2. Effect of Image Segmentation

In order to evaluate the effectiveness of the HSV-based segmentation module, we conducted a comparative experiment: the experimental group used the HSV segmented images as input, and the control group used the raw original images. By comparing the performances of the two groups of models, we can intuitively analyze the impact of the segmentation module on the final results.
As shown in Figure 7, the accuracy of all models is improved to different degrees after the introduction of the segmentation module. This result confirms that the HSV-based segmentation method can effectively attenuate background interference and improve the model’s feature extraction ability. The improvement in the GoogLeNet, ResNet-50, AlexNet, and DenseNet-121 models is particularly significant, indicating that they are more sensitive to background noise. Therefore, after removing background interference, they can focus more on important areas. This result further confirms that although deep neural networks have strong adaptive capabilities for feature extraction, suitable preprocessing can still provide the model with a “cleaner” input in a complex background, thereby reducing the learning difficulty and improving the overall performance. Moreover, models with different architectures also have different sensitivity to background noise. Future research can further explore the design of customized preprocessing strategies for specific network structures to further optimize the performance.

3.3. Inference Time Comparison

We also compared the inference time of all 14 models on the test set, and the results are shown in Figure 8. Overall, MobileNetV2 and EfficientNet-B0 showed the shortest inference times, making them well suited to scenarios with strict real-time requirements or limited computing resources, such as mobile devices or edge computing platforms. InceptionV3 and ResNet-50 followed closely, combining fast inference with high recognition performance. In contrast, the inference times of IDCNN, PM_GHSI, MobileNetV3, and InceptionV4 were relatively long, with IDCNN and PM_GHSI the slowest, indicating that they are not suitable for applications requiring real-time processing. Although InceptionV4 and DenseNet-121 are slower than MobileNetV2 and EfficientNet-B0, they perform best in terms of recognition accuracy, recall, and F1-score and are therefore suitable for scenarios with very high recognition-performance requirements where inference time is less critical.
In general, model selection should be weighed flexibly against specific deployment requirements. If the deployment environment imposes strict limits on response speed and resource consumption, MobileNetV2 or EfficientNet-B0 should be preferred; where higher recognition accuracy is required, DenseNet-121, InceptionV3, or InceptionV4 can be considered. The overall results again show that, in practical applications, a reasonable balance must be struck between speed and accuracy, with the model selected according to specific needs.
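As a rough illustration of how such per-image timings can be obtained, the sketch below averages forward-pass time over a test loader; the measurement details (no warm-up handling, no device synchronization) are simplifying assumptions rather than the paper's protocol.

```python
import time
import paddle

def average_inference_time(net, test_loader):
    # Average forward-pass seconds per image over the test loader.
    net.eval()
    n_images = 0
    start = time.perf_counter()
    with paddle.no_grad():
        for images, _ in test_loader:
            net(images)
            n_images += images.shape[0]
    return (time.perf_counter() - start) / max(n_images, 1)
```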

3.4. Visual Comparison of Representative Models on Healthy and Infected Leaves

To further examine the differential recognition performance on healthy and infected leaves, this study selected four representative models (GoogLeNet, AlexNet, Inception V4, and MobileNetV3) for a visual comparison of prediction results on healthy leaves and leaves at different infection stages (early and late). The corresponding visual prediction results are shown in Figure 9.
The results show that, except for GoogLeNet, the other models can reliably and accurately distinguish between healthy and infected leaves, demonstrating good discriminative ability and prediction consistency. GoogLeNet misclassified some healthy leaves as infected, suggesting insufficient robustness when handling samples with unclear lesion boundaries or mild infection features. AlexNet, Inception V4, and MobileNetV3 can provide accurate predictions at different infection stages (early and late) and in healthy states, especially Inception V4, which showed extremely high recognition stability and confidence on all images.
In addition, MobileNetV3 has obvious computational efficiency advantages while maintaining high recognition accuracy and is suitable for resource-constrained environments. Although AlexNet has a relatively simple structure, it has shown strong stability and few misjudgments in this task, indicating that it still has application value in specific tasks.
In summary, the visual comparison in this section not only reveals the performance differences in different models in powdery mildew identification, but also provides an intuitive basis and practical reference for the subsequent selection of models based on the trade-off between accuracy and resource constraints in actual applications.

4. Discussion

4.1. Experimental Results

This study emphasizes that deep learning models originally developed for general or other plant disease recognition tasks can be effectively applied to the recognition of strawberry powdery mildew under complex field conditions. Experimental results show that models such as DenseNet-121, Inception V3, MobileNetV3, Inception V4, and MobileNetV2 retain strong recognition capability in complex backgrounds, with accuracy rates stable above 98%, and can accurately distinguish diseased leaves from background elements. In contrast, SqueezeNet reached an accuracy of only 20.17%, making it clearly unsuitable for this task; VGG-19 reached 85.38%, lower than most models, while the remaining seven models all exceeded 90%. These results verify that models widely used in other disease recognition fields still adapt well to complex field backgrounds and show that modern CNN architectures can handle such tasks effectively.
In terms of recall, DenseNet-121, Inception V3, Inception V4, and MobileNetV3 all reached 0.98 or higher, identifying almost all positive samples. The F1-score, as a combined indicator of the balance between precision and recall, shows highly consistent results: except for SqueezeNet and VGG-19, all models reached 0.92 or higher, with Inception V4 reaching 0.99 and Inception V3, ResNet-50, and DenseNet-121 reaching 0.98, demonstrating excellent balance and strong generalization ability. A comprehensive analysis of accuracy, recall, and F1-score shows that Inception V4, DenseNet-121, and ResNet-50 perform best and are well suited for deployment in production environments with strict requirements for recognition performance. Inception V3, a shallower predecessor of Inception V4, also performs well and is better suited to application scenarios with high demands on inference speed. Where the resource constraints of the deployment environment must be considered, lightweight models such as MobileNetV3 or EfficientNet-B0 achieve a good balance between performance and computational efficiency and are suitable for mobile or embedded systems.
In addition, this paper also verifies the effectiveness of the image segmentation module based on the HSV color space in improving recognition performance. After the introduction of segmentation processing, the accuracy of all models improved, especially the GoogLeNet and VGG series, which exhibited a significant improvement, indicating that the segmentation module can effectively weaken the interference of background noise in the model feature extraction process and improve the overall robustness. By filtering out irrelevant background elements (such as soil, weeds, etc.), the segmentation module provides the model with purer input data, thereby further improving the recognition performance.

4.2. Comparison with Traditional Machine Learning Approaches

Traditional machine learning (ML) techniques, including Support Vector Machines (SVM), Random Forests (RF), and k-Nearest Neighbors (KNN), have been widely used in earlier plant disease detection studies. These methods typically rely on hand-crafted features such as color histograms, texture descriptors (e.g., GLCM, HOG), or shape-based features. For instance, Hatuwal et al. [39] compared CNN with RF, SVM, and KNN for leaf disease classification and found that CNN achieved an accuracy of 97.89%, significantly higher than RF (87.43%), SVM (78.61%), and KNN (76.96%). Similarly, Rajpal [40] reviewed several studies and reported that traditional models based on RBF-kernel SVM typically achieve accuracies up to 94.1%, depending on the dataset and feature quality. However, their performance often drops sharply when applied to complex real-world scenarios with background clutter.
Additional studies have emphasized similar limitations. Dang-Ngoc et al. [41] proposed a hybrid multi-class SVM based on spatial Fuzzy C-Means clustering for disease classification, which reached promising results in controlled conditions but still required precise segmentation and feature extraction. Rumy et al. [42] conducted a comprehensive comparison of several ML methods and noted their relatively poor robustness in field conditions compared to deep learning models. Alhwaiti et al. [43] demonstrated that an SVM classifier using Histogram of Oriented Gradients (HOG) for tomato disease detection achieved high performance in lab settings but was highly sensitive to lighting and background variations.
In contrast, the deep learning models evaluated in this study, including DenseNet-121, Inception V4, and ResNet-50, consistently achieved accuracy rates above 98% and F1-scores up to 0.99 when applied to strawberry powdery mildew recognition under real-world conditions. Unlike traditional ML models that depend heavily on expert-designed features, convolutional neural networks (CNNs) automatically extract hierarchical and discriminative features from raw data, demonstrating better adaptability to variable backgrounds, occlusion, and noise. Even lightweight architectures such as MobileNetV3 and EfficientNet-B0 maintained a strong balance between computational efficiency and recognition performance, making them suitable for mobile or embedded systems where traditional ML models were previously preferred.
Therefore, the results of this study not only reaffirm the superior recognition performance and robustness of modern deep learning models but also highlight their practical advantage over conventional machine learning methods, particularly in field deployment scenarios requiring scalability and generalization.

4.3. Prospects for Agricultural Applications

Experimental results show that mainstream deep learning models, especially modern convolutional neural networks such as InceptionV4, DenseNet-121, and ResNet-50, show excellent recognition performance for strawberry powdery mildew even under complex field conditions. However, in practical agricultural applications, the applicability of these models depends not only on their accuracy and generalization ability, but also on factors such as computational cost, real-time inference capability, and compatibility with existing hardware infrastructure.
High-performance models such as InceptionV4 and DenseNet-121 are well suited for deployment in controlled environments with sufficient computing resources, such as agricultural monitoring stations or smart greenhouses equipped with GPUs or edge AI accelerators. These models can support high-throughput and high-precision disease monitoring, enabling timely intervention and large-scale data-driven decision-making.
For scenarios with more limited resources, such as handheld mobile devices, unmanned aerial vehicles (UAVs), or embedded systems, lightweight models such as MobileNetV3 and EfficientNet-B0 can achieve an effective balance between accuracy and computational efficiency. Their ability to run smoothly on low-power hardware (e.g., smartphones, Raspberry Pi, NVIDIA Jetson Nano, or Coral TPU) makes them particularly important for real-time detection tasks in the field. This capability helps farmers and agronomists quickly identify and respond to disease outbreaks, even in remote areas or areas with limited infrastructure.
In addition, the integration of an HSV-based image segmentation module greatly enhances the system’s robustness to background noise and varying lighting conditions. This preprocessing step significantly reduces the reliance on specialized imaging equipment, allowing the use of standard RGB cameras on mobile phones or drones as viable input sources. This not only lowers the technical threshold for application, but also improves the scalability and cost-effectiveness of large-scale agricultural deployment [41,42,43].
In summary, deploying deep learning models for disease detection in the agricultural field has important practical value. By tailoring the model selection according to specific application needs (whether it prioritizes detection accuracy, computational speed, or hardware availability), these models can be flexibly embedded in a variety of precision agriculture solutions. Ultimately, this helps achieve smarter, more responsive, and more sustainable crop management practices even in environments where specialized equipment is scarce.

5. Conclusions

This study presents a comprehensive investigation into the recognition of strawberry powdery mildew under complex field conditions. The primary contributions are summarized as follows:
  • We constructed and annotated a dedicated image dataset of strawberry powdery mildew captured in real-world, complex agricultural environments. This dataset addresses a lack of publicly available, high-quality data for this specific disease under field conditions and lays a solid foundation for future studies.
  • To enhance disease feature extraction from noisy and cluttered backgrounds, we introduced an HSV-based segmentation preprocessing method. This approach improves the visibility of disease regions and contributes to better model performance in complex scenes.
  • We systematically evaluated and compared the performance of 14 widely used deep learning models, covering a diverse range of network architectures. The benchmarking results offer valuable guidance for selecting appropriate models based on trade-offs between accuracy and computational complexity (TFLOPs), facilitating informed decisions in real-world deployments.
  • Our comparative experiments revealed how different architectures vary in their sensitivity to background interference and disease feature patterns. This insight is essential for understanding model robustness and for transferring these findings to other crop disease scenarios.
While our study is grounded in existing deep learning architectures, its novelty lies in the integration of segmentation strategies, the construction of a realistic dataset, and a comprehensive evaluation framework tailored for practical agricultural environments. These efforts collectively contribute to the advancement of intelligent plant disease recognition and offer technical support for scalable deployment in smart farming systems.
In future research, we plan to expand our work in the following directions:
  1. Dataset expansion and fine-grained annotation: We will continue to expand the dataset by collecting more samples across different seasons, varieties, and light conditions. We also plan to introduce fine-grained annotations (e.g., early, middle, and late stages of the disease) to facilitate severity classification and time series analysis.
  2. Lightweight model optimization: Given the importance of real-time deployment in the field, we will explore structural modifications to lightweight models (e.g., MobileNet, EfficientNet) using techniques such as neural architecture search (NAS), pruning, quantization, and knowledge distillation to achieve a better balance between performance and speed.
  3. Multimodal data fusion: To improve the robustness and accuracy of the model, we plan to integrate other data modalities (e.g., meteorological data, hyperspectral images, or soil health indicators) into the disease identification process to achieve an early warning system.
  4. Field deployment and user interface design: Our goal is to build and test a prototype mobile application or embedded system that integrates our trained model and can be deployed on a smartphone or edge device. This includes a user-friendly interface for farmers and agronomists, as well as real-time feedback and historical analysis capabilities.
  5. Transferability and generalization: To demonstrate the generalization ability of our selected models, we will evaluate the transferability of trained models to other crops or diseases and explore domain adaptation techniques to reduce performance degradation when applied to new environments.

Author Contributions

Conceptualization, J.W.; methodology, J.W. and J.L.; software, J.W. and J.L.; validation, J.L.; formal analysis, J.W.; investigation, J.W. and J.L.; data curation, J.L. and J.W.; writing—original draft preparation, J.W.; writing—review and editing, J.W., F.M. and J.L.; visualization, J.W. and J.L.; supervision, J.W. and F.M.; project administration, J.W., F.M. and J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The datasets generated for this study are available on request to the corresponding author.

Acknowledgments

The authors would like to thank the team members who assisted with the data collection in the strawberry fields and provided technical support for the experiments. We also acknowledge the constructive feedback provided by anonymous reviewers that helped improve the quality of this manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Skrovankova, S.; Sumczynski, D.; Mlcek, J.; Jurikova, T.; Sochor, J. Bioactive compounds and antioxidant activity of different types of berries. Int. J. Mol. Sci. 2015, 16, 24673–24706. [Google Scholar] [CrossRef] [PubMed]
  2. Chen, J.; Chen, J.; Zhang, D.; Sun, Y.; Nanehkaran, Y.A. Using deep transfer learning for image-based plant disease identification. Comput. Electron. Agric. 2020, 173, 105393. [Google Scholar] [CrossRef]
  3. Khan, A.I.; Quadri, S.M.K.; Banday, S.; Shah, J.L. Deep diagnosis: A real-time apple leaf disease detection system based on deep learning. Comput. Electron. Agric. 2022, 198, 107093. [Google Scholar] [CrossRef]
  4. Vielba-Fernández, A.; Polonio, Á.; Ruiz-Jiménez, L.; de Vicente, A.; Pérez-García, A.; Fernández-Ortuño, D. Fungicide Resistance in Powdery Mildew Fungi. Microorganisms 2020, 8, 1431. [Google Scholar] [CrossRef]
  5. Xu, X.-M.; Robinson, J.D. Effects of Temperature on the Incubation and Latent Periods of Hawthorn Powdery Mildew (Podosphaera clandestina). Plant Pathol. 2000, 49, 791–797. [Google Scholar] [CrossRef]
  6. Mohanty, S.P.; Hughes, D.P.; Salathé, M. Using deep learning for image-based plant disease detection. Front. Plant Sci. 2016, 7, 1419. [Google Scholar] [CrossRef]
  7. Li, Y.; Zhang, L.; Wang, Y. Classification of Multi Diseases in Apple Plant Leaves Using HSV Color Space and LBP Texture Features. Available online: https://www.diva-portal.org/smash/record.jsf?pid=diva2:1380974 (accessed on 9 April 2025).
  8. Hassan, M.M.; Islam, M.T.; Rahman, M.A.; Hasan, M.M. A Comprehensive Survey on Deep Learning Approaches for Plant Disease Detection. Artif. Intell. Rev. 2023, 56, 1–35. [Google Scholar]
  9. Sai, A. Deep Learning for Plant Disease Detection: A Review. Artif. Intell. Rev. 2021, 54, 1–20. [Google Scholar]
  10. Yu, R.; Luo, Y.; Zhou, Q.; Zhang, X.; Wu, D.; Ren, L. Early detection of pine wilt disease using deep learning algorithms and UAV-based multispectral imagery. For. Ecol. Manag. 2021, 497, 119493. [Google Scholar] [CrossRef]
  11. Hughes, D.P.; Salathé, M. An open access repository of images on plant health to enable the development of mobile disease diagnostics. Comput. Electron. Agric. 2015, 145, 311–318. [Google Scholar]
  12. Lee, S.; Arora, A.S.; Yun, C.M. Detecting strawberry diseases and pest infections in the very early stage with an ensemble deep-learning model. Front. Plant Sci. 2022, 13, 991134. [Google Scholar] [CrossRef]
  13. Atila, M.; Yildirim, M.; Yildirim, Y.; Yildirim, A.; Yildirim, M.; Yildirim, S. Plant leaf disease classification using EfficientNet deep learning model. Comput. Electron. Agric. 2024, 186, 106145. [Google Scholar] [CrossRef]
  14. Argüeso, D.; Picon, A.; Irusta, U.; Medela, A.; San-Emeterio, M.G.; Bereciartua, A.; Alvarez-Gila, A. Few-Shot Learning Approach for Plant Disease Classification Using Images Taken in the Field. Comput. Electron. Agric. 2020, 175, 105542. [Google Scholar] [CrossRef]
  15. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. Proc. IEEE Conf. Comput. Vis. Pattern Recognit. 2016, 2016, 770–778. [Google Scholar] [CrossRef]
  16. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. Available online: https://arxiv.org/abs/1704.04861 (accessed on 5 April 2025).
  17. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
  18. Liu, Y.; Zhang, C.; Wang, Z. Image segmentation evaluation: A survey of performance measures. Pattern Recognit. 2021, 116, 107963. [Google Scholar]
  19. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. Proc. IEEE Conf. Comput. Vis. Pattern Recognit. 2015, 2015, 3431–3440. [Google Scholar] [CrossRef]
  20. Picon, A.; Alvarez-Gila, A.; Seitz, M.; Ortiz-Barredo, A.; Echazarra, J.; Johannes, A. Deep convolutional neural networks for mobile capture device-based crop disease classification in the wild. Comput. Electron. Agric. 2019, 161, 280–290. [Google Scholar] [CrossRef]
  21. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. Lect. Notes Comput. Sci. 2015, 9351, 234–241. [Google Scholar]
  22. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. MobileNetV2: Inverted residuals and linear bottlenecks. Proc. IEEE Conf. Comput. Vis. Pattern Recognit. 2018, 2018, 4510–4520. [Google Scholar] [CrossRef]
  23. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. Available online: https://arxiv.org/abs/1409.1556 (accessed on 8 April 2025).
  24. Singh, A.; Ganapathysubramanian, B.; Singh, A.K.; Sarkar, S. Machine learning for high-throughput stress phenotyping in plants. Trends Plant Sci. 2016, 21, 110–124. [Google Scholar] [CrossRef] [PubMed]
  25. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. Proc. IEEE Conf. Comput. Vis. Pattern Recognit. 2015, 2015, 1–9. [Google Scholar] [CrossRef]
  26. Tan, M.; Le, Q. Efficientnet: Rethinking model scaling for convolutional neural networks. Proc. 36th Int. Conf. Mach. Learn. (ICML) 2019, 97, 6105–6114. [Google Scholar]
  27. Too, E.C.; Yujian, L.; Njuki, S.; Yingchun, L. A comparative study of fine-tuning deep learning models for plant disease identification. Comput. Electron. Agric. 2019, 161, 272–279. [Google Scholar] [CrossRef]
  28. Wang, G.; Sun, Y.; Wang, J. Automatic image-based plant disease severity estimation using deep learning. Comput. Intell. Neurosci. 2017, 2017, 2917536. [Google Scholar] [CrossRef]
  29. Wang, X.; Zhou, Q.; Ji, L.; Zhai, Y. Research on corn disease identification method based on deep convolutional neural network. Comput. Electron. Agric. 2020, 179, 105824. [Google Scholar] [CrossRef]
  30. Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. CBAM: Convolutional block attention module. Proc. Eur. Conf. Comput. Vis. (ECCV) 2018, 2018, 3–19. [Google Scholar] [CrossRef]
  31. Zhang, Q.; Yang, L.T.; Chen, Z.; Li, P. A survey on deep learning for big data. Inf. Fusion 2018, 42, 146–157. [Google Scholar] [CrossRef]
  32. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single shot multibox detector. Proc. Eur. Conf. Comput. Vis. 2016, 9905, 21–37. [Google Scholar]
  33. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. Proc. IEEE Conf. Comput. Vis. Pattern Recognit. 2016, 2016, 779–788. [Google Scholar] [CrossRef]
  34. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 2015, 28. Available online: https://proceedings.neurips.cc/paper_files/paper/2015/file/14bfa6bb14875e45bba028a21ed38046-Paper.pdf (accessed on 10 April 2025). [CrossRef] [PubMed]
  35. Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollar, P. Focal loss for dense object detection. Proc. IEEE Int. Conf. Comput. Vis. 2017, 2017, 2980–2988. [Google Scholar] [CrossRef]
  36. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. Available online: https://arxiv.org/abs/1412.6980 (accessed on 6 April 2025).
  37. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. Imagenet: A large-scale hierarchical image database. Proc. IEEE Conf. Comput. Vis. Pattern Recognit. 2009, 2009, 248–255. [Google Scholar] [CrossRef]
  38. Iandola, F.N.; Han, S.; Moskewicz, M.W.; Ashraf, K.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv 2016, arXiv:1602.07360. Available online: https://arxiv.org/abs/1602.07360 (accessed on 13 April 2025).
  39. Hatuwal, B.K.; Shakya, A.; Joshi, B. Plant Leaf Disease Recognition Using Random Forest, KNN, SVM and CNN. Polibits 2020, 62, 13–19. [Google Scholar]
  40. Rajpal, K. Machine Learning for Leaf Disease Classification: Data, Techniques and Applications. Artif. Intell. Rev. 2023, 56, 3571–3616. [Google Scholar]
  41. Dang-Ngoc, T.T.; Nguyen, T.T.; Nguyen, T.T.; Le, T.T.; Nguyen, T.T. An Optimal Hybrid Multiclass SVM for Plant Leaf Disease Detection Using Spatial Fuzzy C-Means Model. Expert Syst. Appl. 2023, 214, 118989. [Google Scholar]
  42. Rumy, S.H.; Sultana, S.; Rahman, M.M.; Hossain, M.S. A Comparative Analysis of Efficacy of Machine Learning Techniques for Disease Detection in Some Economically Important Crops. Crop Prot. 2025, 190, 107093. [Google Scholar]
  43. Alhwaiti, Y.; Ishaq, M.; Siddiqi, M.H.; Waqas, M.; Alruwaili, M.; Alanazi, S.; Khan, A.; Khan, F. Early Detection of Late Blight Tomato Disease Using Histogram Oriented Gradient Based Support Vector Machine. arXiv 2023, arXiv:2306.08326. [Google Scholar]
Figure 1. Experimental technology route.
Figure 2. Sampling distribution.
Figure 3. (a) Data collection: real scene in the greenhouse; (b) Healthy strawberry leaves; (c) Mild powdery mildew infection on leaves; (d) Severe powdery mildew infection on leaves; (e) Powdery mildew on young leaves; (f) Powdery mildew on mature leaves.
Figure 4. (a) The original image and the randomly cropped augmented image; (b) random horizontal flip and random rotation augmented images; (c) color jitter and resize augmented images.
Figure 5. Image segmentation results: (a) The original image; (b) Image mapped to HSV space; (c) Mask creation; (d) Noise and interference removal; (e) Grayscale image; (f) Mapping back to color space yields the result.
Figure 6. Loss curves of model training (only the first 20 epochs are plotted).
Figure 7. (a,b) Comparison of accuracy before and after adding the segmentation module.
Figure 8. Comparison of model running times after 50 training epochs.
Figure 9. Recognition results of each model on samples at different disease stages. (a,d,g,j): Healthy leaves; (b,e,h,k): Infected leaves; (c,f,i,l): Early and late infected leaves.
Table 1. Each model structure and main features.

Model | Convolutional Layers | Image Size | Model Features
SqueezeNet | 23 | 224 × 224 | Uses squeeze-and-expand (Fire) modules to maintain accuracy while achieving a lightweight model.
GoogLeNet | 22 | 224 × 224 | Introduces the Inception module, which applies convolutional kernels of different scales in parallel.
ResNet-50 | 50 | 224 × 224 | Proposes residual connections to solve the gradient vanishing and degradation problems as network depth increases.
AlexNet | 5 | 227 × 227 | The first deep convolutional neural network successfully applied to large-scale image classification, using the ReLU activation function and the Dropout technique.
DenseNet-121 | 121 | 224 × 224 | Dense connections feed the feature maps of all preceding layers into later layers.
VGG-16/19 | 16/19 | 224 × 224 | Stacks multiple small 3 × 3 convolutional kernels instead of large kernels, increasing the nonlinearity of the network.
Inception V3/V4 | 42/57 | 229 × 229 | Continuously optimizes the Inception module by introducing more convolutional layers and refined structures.
MobileNetV2/V3 | 19/54 | 224 × 224 | Uses depthwise separable convolutions, greatly reducing the parameters and computational cost of the model.
EfficientNet-B0 | 53 | 224 × 224 | Jointly optimizes the width, depth, and resolution of the network.
IDCNN | 4 | 224 × 224 | The ReLU activation function introduces nonlinearity; pooling layers perform dimensionality reduction.
PM_GHSI | 4 | 224 × 224 | The ReLU activation function introduces nonlinearity; pooling layers perform dimensionality reduction.
Table 2. The average accuracy, F1-score, recall, and TFLOPs for each model.

Model | Accuracy | F1-Score | Recall | TFLOPs
SqueezeNet | 20.17% | 0.12 | 0.14 | 0.0002
GoogLeNet | 93.35% | 0.96 | 0.95 | 0.0016
ResNet-50 | 97.87% | 0.98 | 0.97 | 0.0041
AlexNet | 92.73% | 0.97 | 0.97 | 0.0007
DenseNet-121 | 98.54% | 0.98 | 0.99 | 0.0002
VGG-16 | 91.89% | 0.96 | 0.95 | 0.0155
VGG-19 | 85.38% | 0.91 | 0.89 | 0.0196
Inception V3 | 99.14% | 0.98 | 0.98 | 0.0006
Inception V4 | 99.23% | 0.99 | 0.99 | 0.0002
MobileNetV2 | 98.12% | 0.92 | 0.92 | 0.0013
MobileNetV3 | 98.43% | 0.98 | 0.98 | 0.0007
EfficientNet-B0 | 96.57% | 0.96 | 0.96 | 0.0121
IDCNN | 98.88% | 0.99 | 0.98 | 0.0022
PM_GHSI | 97.05% | 0.97 | 0.96 | 0.0003
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
