Article

Detection of Maize Pathogenic Fungal Spores Based on Deep Learning

1 State Key Laboratory of Smart Farm Technologies and Systems, College of Agriculture, Northeast Agricultural University, Harbin 150030, China
2 College of Plant Protection, Northeast Agricultural University, Harbin 150030, China
3 Key Laboratory of Molecular Medicine and Biotherapy, Aerospace Center Hospital, School of Life Science, Beijing Institute of Technology, Beijing 100081, China
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Agriculture 2025, 15(15), 1689; https://doi.org/10.3390/agriculture15151689
Submission received: 14 July 2025 / Revised: 31 July 2025 / Accepted: 4 August 2025 / Published: 5 August 2025
(This article belongs to the Section Crop Protection, Diseases, Pests and Weeds)

Abstract

Timely detection of pathogen spores is fundamental to ensuring early intervention and reducing the spread of corn diseases such as northern corn leaf blight, corn head smut, and corn rust. Traditional spore detection methods struggle to identify spore-level targets within complex backgrounds. To improve the recognition accuracy for various maize disease spores, this study introduced the YOLOv8s-SPM model, which incorporates space-to-depth and convolution (SPD-Conv) layers, the Partial Self-Attention (PSA) mechanism, and the Minimum Point Distance Intersection over Union (MPDIoU) loss function. First, we integrated SPD-Conv layers into the Backbone of YOLOv8s to enhance recognition performance on small targets and low-resolution images. To improve computational efficiency, the PSA mechanism was incorporated within the Neck layer of the network. Finally, the MPDIoU loss function was applied to refine the localization performance of bounding boxes. The results revealed that the YOLOv8s-SPM model achieved 98.9% mAP50 on the mixed spore dataset, a 1.4% gain over the baseline YOLOv8s. The improved model substantially increased spore detection accuracy and demonstrated superior performance in recognizing diverse spore types under complex background conditions. It met the demands of high-precision spore detection and filled a gap in intelligent spore recognition for maize, offering an effective starting point and practical path for future research in this field.

1. Introduction

As a major staple crop, corn is characterized by its broad global distribution, vast cultivation area, and high productivity, making it essential to food security and grain yield enhancement [1]. However, corn cultivation faces numerous challenges, with diseases posing the most significant threat to yield and causing substantial economic losses [2]. Currently, crop disease management relies primarily on chemical control, which involves manual observation of disease symptoms followed by pesticide application. However, pathogens often exhibit an extended latency period after invading plants, and by the time symptoms become visible, irreversible damage has typically occurred. Furthermore, missing the optimal control window frequently leads to excessive pesticide use, resulting in unnecessary environmental contamination [3]. Northern corn leaf blight, corn head smut, and corn rust are among the most prevalent and severe diseases affecting corn production [4,5,6]. These three diseases are typical airborne diseases that spread via spores transported on air currents. Early detection and diagnosis of pathogen spores at disease onset can significantly enhance control effectiveness [7]. The deployment of spore traps in fields can provide early warnings of potential outbreaks, facilitating a transition from reactive treatment to proactive prevention [8]. Nevertheless, current spore trap systems remain heavily reliant on manual intervention and laboratory analysis, constraining their operational efficiency. Advances in artificial intelligence for plant pathogen monitoring now enable a shift from laboratory-dependent methods to field-deployable solutions, offering a more efficient and precise approach to agricultural disease management [9]. Therefore, using artificial intelligence to achieve automatic identification of pathogen spores from major airborne corn diseases is of significant practical value for disease control.
Machine learning techniques (image classification, object detection, image segmentation, etc.) have been widely applied to pest and disease management across agricultural products, and several studies have reported that they also show high potential in spore recognition. Wang et al. [10] proposed an image recognition strategy based on multi-feature fusion to identify spores of Botrytis cinerea, Pseudoperonospora cubensis, and Podosphaera xanthii. They compared various classifiers, including Support Vector Machine (SVM), Random Forest, and K-nearest neighbors (KNN), with SVM achieving the highest classification accuracy at 94.36%. Javidan et al. [11] devised an innovative approach by integrating a Random Forest classifier with the Butterfly Optimization Algorithm (BOA) to facilitate precise discrimination among spores associated with four prevalent tomato fungal diseases. Leveraging spore characteristics such as color, shape, and texture as discriminative features, their method achieved a remarkable classification accuracy exceeding 98%. However, traditional machine learning methodologies necessitate intricate feature extraction and selection procedures. These approaches are predominantly constrained to simplistic scenarios featuring single targets with distinct and unambiguous characteristics, often exhibiting suboptimal performance in complex environments characterized by multiple targets and intricate backgrounds [12,13,14]. For example, the method proposed by Javidan et al. [11] relied on extensive image pre-processing, including background subtraction, edge enhancement, and grayscale normalization of individual color channels. Such steps are highly sensitive to noise and become unreliable when spores are occluded by debris or fungal hyphae in the microscopic images. Moreover, feature extraction required computing statistics across four different color spaces (RGB, HSV, L*a*b*, and YCbCr), resulting in a complex and labor-intensive pipeline.
In contrast, deep learning, as a new generation of technology, offers clear advantages, including reduced costs, higher efficiency, and the capability to automatically extract pertinent features from raw data through sophisticated deep network architectures [15]. Li et al. [16] developed the MG-YOLO model for swift identification of cucumber gray mold spores. This model incorporates a multi-head self-attention (MHSA) mechanism and employs the GhostCSP network to enhance detection accuracy, achieving 98.3% accuracy (a 6.8% improvement over the original YOLOv5). Similarly, Zhu et al. [17] introduced an advanced GCS-YOLOv8 model based on the YOLOv8s model [18]. By introducing the Global Context Attention mechanism and novel upsampling operators, the model's proficiency in extracting features from diminutive spore targets was significantly improved (average detection accuracy of 92.6% for three cucumber fungi). Furthermore, Zhang et al. [19] improved YOLOv5 by integrating Efficient Channel Attention (ECA) [20] and Adaptively Spatial Feature Fusion (ASFF) for detection of wheat scab spores, achieving a recognition accuracy of 98.57% in complex mixed spore images. Alternatively, Zhang et al. [21] developed a lightweight decoupled model for identifying wheat scab spores on the basis of YOLOv7-tiny, optimizing the detection head into a decoupled design and integrating GSConv to reduce parameters and computation. The model achieved 98.0% mAP, demonstrating strong robustness and generalization.
Although previous research has achieved high-accuracy automatic spore recognition, most studies focus on a single type of pathogenic spore. Because fungal spores are micro-scale targets with diverse morphologies across species and frequent mutual occlusion, accurate detection of mixed spores remains a challenge. Moreover, although spore recognition has been studied for diseases of crops such as wheat and cucumber, research specifically focused on corn disease spore recognition remains relatively limited.
This study proposed an enhanced spore detection model for corn diseases, named YOLOv8s-SPM, which incorporated three key optimizations based on YOLOv8s to improve small-target detection in complex scenes. First, the SPD-Conv module was integrated into the Backbone to replace the standard downsampling layer, which enables better preservation of fine-grained features and edge details of small spores. Second, the PSA attention mechanism was embedded in the lowest-resolution feature layer of the Backbone to strengthen global contextual feature extraction. Finally, the original CIoU loss function was replaced with MPDIoU, which improves the precision of bounding box regression in small-object scenarios. The proposed YOLOv8s-SPM model achieved an average recognition accuracy of 98.9% when detecting mixed spores of corn diseases. Previous studies mostly focused on spores of other crops such as cucumber and wheat, leaving research on corn disease spores limited; moreover, they primarily addressed single-species detection and typically reported accuracies ranging from 92.6% to 98.6%, slightly lower than the performance achieved in this work. Our study supplemented the limited research on spore detection of corn diseases and achieved high-accuracy recognition of multi-class spores under complex conditions.

2. Materials and Methods

2.1. Experimental Design

The structure of this study was organized into four main parts (Figure 1). Firstly, we prepared the suspension containing three types of spores, then observed them under a microscope and captured microscopic images. Next, we completed data annotation and data enhancement and divided the dataset. Based on YOLOv8s, we introduced a series of optimizations and proposed YOLOv8s-SPM for corn disease spore recognition. Finally, we validated the superiority of the YOLOv8s-SPM. All data and code employed in this study are accessible through the following GitHub repository: https://github.com/DPmooncake/maize-spore-detection (accessed on 19 May 2025).

2.2. Experimental Materials

This study focused on three common airborne fungal diseases in maize: corn head smut caused by Sporisorium reilianum, northern corn leaf blight caused by Helminthosporium turcicum, and corn rust caused by Puccinia sorghi (Figure 2). Diseased plant tissues of these three diseases were collected at the Xiangyang Farm of Northeast Agricultural University in Harbin, China. For the non-obligate parasitic fungus H. turcicum, purified mycelia were obtained through isolation and culture after collecting diseased plants in the field. DNA was extracted from mycelia using the CTAB method, followed by PCR amplification and sequencing for identification. The mycelia were then collected, added to sterile water, and filtered through gauze to obtain a spore suspension. For the obligate parasitic fungi S. reilianum and P. sorghi, diseased tissues were picked and subjected to DNA extraction, PCR amplification, and sequencing identification. The diseased tissues were then added to sterile water, followed by filtration through gauze to obtain the spore suspension. The mixed suspension containing spores of these three pathogens was prepared for microscopic observation using temporary slides. Images were captured using the Olympus BX43 manual microscope system; Table 1 summarizes the device specifications.
A total of 2003 microscopic images containing mixed spores were acquired. Each image included at least two types of spores. To simulate the complexity of field environments, we increased background complexity by introducing fungal hyphae into the suspension. For each spore instance in the microscopic images, we manually drew a bounding box to enclose the target. We saved the corresponding annotations, including spore category and bounding box coordinates, in JSON files for model training and evaluation. Examples of microscopic images with mixed spores are shown in Figure 3.
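For readers reproducing the label preparation step, the sketch below converts per-image JSON annotations into YOLO-format .txt label files. The JSON schema and class names here are illustrative assumptions, since the paper does not specify the exact field layout; only the target YOLO format (class id plus normalized center coordinates and box size) is fixed.

```python
import json
from pathlib import Path

# Hypothetical JSON schema assumed for illustration: one file per image with
# {"width": W, "height": H, "objects": [{"class": "corn_rust",
#  "bbox": [xmin, ymin, xmax, ymax]}, ...]} in pixel coordinates.
CLASS_IDS = {"corn_leaf_blight": 0, "corn_head_smut": 1, "corn_rust": 2}

def json_to_yolo(json_path: Path, out_dir: Path) -> None:
    """Convert one annotation file to a YOLO-format .txt label file."""
    ann = json.loads(json_path.read_text())
    w, h = ann["width"], ann["height"]
    lines = []
    for obj in ann["objects"]:
        xmin, ymin, xmax, ymax = obj["bbox"]
        # YOLO format: class x_center y_center width height, all normalized.
        cx = (xmin + xmax) / 2 / w
        cy = (ymin + ymax) / 2 / h
        bw = (xmax - xmin) / w
        bh = (ymax - ymin) / h
        lines.append(f"{CLASS_IDS[obj['class']]} {cx:.6f} {cy:.6f} {bw:.6f} {bh:.6f}")
    (out_dir / f"{json_path.stem}.txt").write_text("\n".join(lines))
```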

2.3. Data Enhancement and Dataset Construction

In the original dataset, the number of corn head smut spore samples was significantly greater than that of northern corn leaf blight and corn rust spores, resulting in a class imbalance issue. In deep learning, dataset balance plays a critical role in model performance: imbalanced sample distribution often causes the network to favor the majority class, leading to performance degradation on underrepresented classes with limited training samples [22]. To address this issue, we employed oversampling and data augmentation strategies to achieve class-level balance. Specifically, the individual target regions of the minority classes (northern corn leaf blight and corn rust spores) were extracted from the original images. Each extracted patch was subjected to a series of augmentations, including random rotation (±20°), scaling (80–120%), Gaussian blur (σ ∈ [0, 0.5]), and brightness variation (±20%), to simulate intra-class variability. Additionally, multiple spores were allowed to partially overlap to mimic real spore occlusions. These transformed spores were then randomly pasted onto selected background images to synthesize new images (Figure 4), and the updated bounding box coordinates were recorded in new annotation files in YOLO format. The original mixed spore images were combined with the enhanced images to form the enhanced dataset, thereby expanding the dataset for the underrepresented classes.
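A minimal sketch of this copy-paste augmentation, using OpenCV and NumPy, is shown below. The function names, the rectangular paste (the authors' exact blending and occlusion handling are not specified), and the patch format are illustrative assumptions; only the augmentation parameters are taken from the description above.

```python
import random
import cv2
import numpy as np

def augment_patch(patch: np.ndarray) -> np.ndarray:
    """Apply the augmentations described above to one spore patch."""
    h, w = patch.shape[:2]
    angle = random.uniform(-20, 20)           # random rotation, ±20°
    scale = random.uniform(0.8, 1.2)          # scaling, 80–120%
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
    patch = cv2.warpAffine(patch, m, (w, h), borderMode=cv2.BORDER_REFLECT)
    sigma = random.uniform(0, 0.5)            # Gaussian blur, σ ∈ [0, 0.5]
    if sigma > 0:
        patch = cv2.GaussianBlur(patch, (3, 3), sigma)
    beta = random.uniform(-0.2, 0.2) * 255    # brightness variation, ±20%
    return cv2.convertScaleAbs(patch, alpha=1.0, beta=beta)

def paste_spores(background: np.ndarray, patches: list) -> tuple:
    """Paste augmented (class_id, patch) pairs at random positions;
    return the synthesized image and YOLO-format boxes."""
    img = background.copy()
    hh, ww = img.shape[:2]
    boxes = []
    for cls_id, patch in patches:
        patch = augment_patch(patch)
        ph, pw = patch.shape[:2]
        x = random.randint(0, ww - pw)        # overlaps allowed, mimicking occlusion
        y = random.randint(0, hh - ph)
        img[y:y + ph, x:x + pw] = patch       # crude rectangular paste for the sketch
        boxes.append((cls_id, (x + pw / 2) / ww, (y + ph / 2) / hh, pw / ww, ph / hh))
    return img, boxes
```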
For the purpose of model training, the enhanced dataset was distributed as 80% for training, 10% for validation, and 10% for testing. The number of images and sample counts for each class across the datasets are summarized in Table 2.

2.4. YOLOv8s-SPM

Due to the YOLOv8s model's limited effectiveness in detecting small targets within complex agricultural scenarios, we proposed an improved model, YOLOv8s-SPM, aiming to improve feature representation and localization accuracy. Specifically, SPD-Conv modules were introduced to preserve small-target features. The PSA mechanism was added at the intersection of the Backbone and Neck to improve model performance while maintaining low computational cost. The MPDIoU loss was adopted to eliminate the optimization conflict caused by the traditional aspect ratio penalty term in CIoU. Figure 5 illustrates the architecture of the YOLOv8s-SPM. A comprehensive overview of the YOLOv8s architecture, along with the design of the three optimization modules, is presented below.

2.4.1. YOLOv8s Model

As an advanced iteration of the YOLO model family, YOLOv8 provides notable benefits such as accelerated detection, superior accuracy, a lightweight architecture, and strong generalizability. In this study, we selected YOLOv8s, one of the lightweight variants in the YOLOv8 series. Figure 6 illustrates the architectural design of YOLOv8s. In the Backbone, YOLOv8s adopts the C2f module to replace the conventional C3 module [18]. This modification retains gradient partitioning to reduce computational load while introducing additional skip connections to enhance shallow feature reuse, thereby mitigating the loss of fine details in small targets caused by the cropping operation in the earlier Focus layer. The subsequent Spatial Pyramid Pooling-Fast (SPPF) module substitutes traditional parallel pooling with serial max pooling to further optimize computational efficiency. In the Neck, YOLOv8 enhances cross-scale feature interaction by upgrading the original Path Aggregation Network (PANet) [23] to a Bi-directional Feature Pyramid Network (BiFPN) [24]. BiFPN utilizes learnable feature weights to dynamically fuse semantic information from multi-resolution features, thereby improving the model's robustness in detecting corn disease spores under complex, multi-scale distribution scenarios. YOLOv8 adopts a decoupled design in the detection head, allowing distinct modeling of the classification and localization tasks to boost performance. In terms of training strategy, YOLOv8 adopts the Task-Aligned Assigner algorithm [18] for dynamic sample assignment. This algorithm jointly optimizes classification confidence and bounding box quality, enabling adaptive adjustment of positive and negative sample ratios based on task and data distribution characteristics. Furthermore, YOLOv8 integrates structural re-parameterization techniques from YOLOv7 [25] and hardware-aware design principles from YOLOv6 [26]. By incorporating Ghost Convolution and a hierarchical scaling strategy, YOLOv8s achieves faster inference under comparable computational complexity, laying a technical foundation for real-time identification of tiny spore targets in complex field environments.
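For reference, a simplified PyTorch sketch of the C2f block follows, condensed from the public Ultralytics implementation; the Conv helper is abbreviated here and minor details (channel expansion ratios, grouped convolutions) are omitted for brevity.

```python
import torch
import torch.nn as nn

class Conv(nn.Module):
    """Conv2d + BatchNorm + SiLU, condensed from the Ultralytics Conv block."""
    def __init__(self, c1: int, c2: int, k: int = 1):
        super().__init__()
        self.conv = nn.Conv2d(c1, c2, k, 1, k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        self.act = nn.SiLU()
    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class Bottleneck(nn.Module):
    """Two 3x3 convs with an optional residual connection."""
    def __init__(self, c: int, shortcut: bool = True):
        super().__init__()
        self.cv1 = Conv(c, c, 3)
        self.cv2 = Conv(c, c, 3)
        self.add = shortcut
    def forward(self, x):
        y = self.cv2(self.cv1(x))
        return x + y if self.add else y

class C2f(nn.Module):
    """C2f: split features, run n bottlenecks, and concatenate every
    intermediate output so gradients flow through extra skip paths."""
    def __init__(self, c1: int, c2: int, n: int = 1, shortcut: bool = False):
        super().__init__()
        self.c = c2 // 2
        self.cv1 = Conv(c1, 2 * self.c)
        self.cv2 = Conv((2 + n) * self.c, c2)
        self.m = nn.ModuleList(Bottleneck(self.c, shortcut) for _ in range(n))
    def forward(self, x):
        y = list(self.cv1(x).chunk(2, dim=1))
        y.extend(m(y[-1]) for m in self.m)  # keep each bottleneck output
        return self.cv2(torch.cat(y, dim=1))
```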

2.4.2. Space-to-Depth and Convolution Layer

The downsampling operations in traditional convolutional neural networks often lead to diminished spatial resolution, causing loss of detail in detecting small-scale objects. We introduced SPD-Conv modules to resolve this issue. At its core, SPD-Conv introduces a structural modification by replacing standard stride convolutions or pooling mechanisms, commonly used for dimensionality reduction, with a progressive feature reorganization approach that preserves complete spatial information. Figure 7 shows that the SPD-Conv is composed of three stages: spatial partitioning, channel concatenation, and feature compression. Compared to traditional downsampling mechanisms based on stride convolution, SPD-Conv employs a spatial-to-channel transformation strategy that retains fine-grained details from the original feature maps [27]. This design provides better preservation of the geometric characteristics of small targets. Through parameterized implementation, SPD-Conv enables the joint optimization of spatial information integrity and feature discriminability during the downsampling process. This effectively avoids the degradation in representation capacity for small objects that often occurs with fixed-pattern sampling methods used in conventional architectures. It is worth noting that this improvement in spatial detail retention comes at the cost of increased computational complexity. The additional operations involved in spatial reorganization and channel transformation introduce a moderate overhead compared to conventional stride-based downsampling.
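As a concrete illustration, a minimal PyTorch sketch of an SPD-Conv block with scale factor 2, following Sunkara and Luo [27], is given below; the BN/SiLU pairing mirrors YOLOv8's convention, and the module name is ours rather than from the authors' code.

```python
import torch
import torch.nn as nn

class SPDConv(nn.Module):
    """Space-to-depth followed by a non-strided convolution (scale = 2).

    Replaces a stride-2 downsampling conv: every 2x2 spatial block is moved
    into the channel dimension, so no pixel information is discarded.
    """
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # After space-to-depth the channel count is 4x the input's.
        self.conv = nn.Conv2d(4 * in_ch, out_ch, kernel_size=3, stride=1, padding=1)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Spatial partitioning: sample the four interleaved 2x2 sub-grids ...
        x = torch.cat(
            [x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]],
            dim=1,  # ... and concatenate them along the channel axis.
        )
        # Feature compression with a stride-1 conv keeps fine detail.
        return self.act(self.bn(self.conv(x)))

# Shape check: a 160x160 map with 64 channels becomes 80x80 with 128 channels.
# SPDConv(64, 128)(torch.randn(1, 64, 160, 160)).shape -> torch.Size([1, 128, 80, 80])
```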

2.4.3. Partial Self-Attention Mechanism

Due to occlusions among spores, those with ambiguous shapes or low color contrast are prone to being missed. To overcome the above limitation, the network was enhanced with the PSA, aiming to strengthen feature representation and improve the discrimination of low-contrast targets under complex background conditions. Figure 8 shows the construction of the PSA mechanism. Two branches were generated from the input feature map: one branch is handled by the multi-head self-attention (MHSA) mechanism and the feed-forward network (FFN), while the other branch bypasses these operations and directly connects to the output [28]. The design reduced the computational overhead of the attention weight matrix by 50% without compromising the feature representation capacity. The PSA module effectively alleviates the computational costs by leveraging a divide-and-conquer strategy and structural optimization [28]. Meanwhile, it retains the ability to model global contextual information. It provides an efficient and lightweight enhancement path for improving YOLOv8’s performance in small object detection scenarios, particularly under complex and occluded conditions.
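The simplified PyTorch sketch below conveys the idea of partial self-attention: half the channels pass through MHSA and an FFN, while the other half bypass both. It uses PyTorch's generic MultiheadAttention rather than YOLOv10's exact attention implementation [28], so it should be read as a conceptual sketch, not the authors' module.

```python
import torch
import torch.nn as nn

class PSA(nn.Module):
    """Simplified Partial Self-Attention block (after YOLOv10's PSA idea).

    Only half the channels go through attention + FFN; the other half bypass
    both, roughly halving the cost of the attention weight matrix.
    ch // 2 must be divisible by num_heads.
    """
    def __init__(self, ch: int, num_heads: int = 4):
        super().__init__()
        self.pre = nn.Conv2d(ch, ch, 1)           # 1x1 conv before the split
        self.attn = nn.MultiheadAttention(ch // 2, num_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Conv2d(ch // 2, ch, 1), nn.SiLU(), nn.Conv2d(ch, ch // 2, 1)
        )
        self.post = nn.Conv2d(ch, ch, 1)          # 1x1 conv to fuse both halves

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        a, bypass = self.pre(x).chunk(2, dim=1)   # split channels in half
        # Attention branch: flatten spatial dims into a token sequence.
        seq = a.flatten(2).transpose(1, 2)        # (B, H*W, C/2)
        attn_out, _ = self.attn(seq, seq, seq)
        a = a + attn_out.transpose(1, 2).reshape(b, c // 2, h, w)
        a = a + self.ffn(a)                       # residual feed-forward
        return self.post(torch.cat([a, bypass], dim=1))
```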

2.4.4. MPDIoU Loss Function

To improve localization accuracy, YOLOv8 incorporates CIoU, which outperforms conventional IoU by considering the distance between the centroids of the predicted and target boxes. Furthermore, it integrates a measure that reflects the similarity of their aspect ratios [29]. The formulations of IoU and CIoU are presented in Equations (1) and (2), with the auxiliary terms α and v defined in Equations (3) and (4).
$$\mathrm{IoU} = \frac{|A \cap B|}{|A \cup B|} \tag{1}$$

$$L_{\mathrm{CIoU}} = 1 - \mathrm{IoU} + \frac{\rho^2\left(b, b^{gt}\right)}{c^2} + \alpha v \tag{2}$$

$$\alpha = \frac{v}{\left(1 - \mathrm{IoU}\right) + v} \tag{3}$$

$$v = \frac{4}{\pi^2}\left(\arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w}{h}\right)^2 \tag{4}$$
Among these parameters, A and B denote the predicted box and the ground truth box, respectively; ρ is the Euclidean distance between their centroids b and b^gt, and c is the diagonal length of the smallest box enclosing both. v is a normalized term quantifying the aspect ratio difference. However, CIoU has an inherent limitation due to the v term: when the aspect ratios of the prediction and ground truth boxes are identical but their sizes differ, v remains the same. CIoU may therefore overly emphasize aspect ratio alignment and neglect the optimization of absolute scale [30]. We consequently replaced CIoU with MPDIoU, as shown in Equation (5).
$$\mathrm{MPDIoU} = \mathrm{IoU} - \frac{d_1^2}{w^2 + h^2} - \frac{d_2^2}{w^2 + h^2}, \qquad L_{\mathrm{MPDIoU}} = 1 - \mathrm{MPDIoU} \tag{5}$$
Here, d1 denotes the distance between the upper-left corners of the ground truth box and the predicted box, d2 the distance between their lower-right corners, and w and h the width and height of the input image, which normalize the corner distances [30]. Compared to CIoU, MPDIoU offers two main advantages. First, by eliminating the aspect ratio term, it avoids gradient conflicts and directly drives size optimization through geometric consistency. This is particularly effective for precise localization in high-density small-object scenarios, where targets are closely spaced and similar in size, requiring accurate boundary matching to reduce false detections. Second, the capacity to recognize both position and scale is effectively enhanced through the integration of MPDIoU. Even when the overlapping area between boxes is small, such as when spores are partially occluded and only partially visible, MPDIoU still provides a stable gradient direction through endpoint distance minimization. The principles of CIoU and MPDIoU are illustrated in Figure 9.
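A direct implementation of Equation (5) is straightforward. The sketch below assumes boxes in (x1, y1, x2, y2) pixel coordinates and takes w and h as the input image dimensions, as in the original MPDIoU formulation [30]; it is a minimal reference, not the training code used in this study.

```python
import torch

def mpdiou_loss(pred: torch.Tensor, target: torch.Tensor,
                img_w: int, img_h: int) -> torch.Tensor:
    """MPDIoU loss for boxes in (x1, y1, x2, y2) format, shape (N, 4)."""
    # Intersection area.
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)

    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + 1e-7)

    # d1: squared distance between top-left corners; d2: bottom-right corners.
    d1 = (pred[:, 0] - target[:, 0]) ** 2 + (pred[:, 1] - target[:, 1]) ** 2
    d2 = (pred[:, 2] - target[:, 2]) ** 2 + (pred[:, 3] - target[:, 3]) ** 2
    norm = img_w ** 2 + img_h ** 2  # normalizing constant from Equation (5)

    mpdiou = iou - d1 / norm - d2 / norm
    return 1 - mpdiou  # loss shrinks as the matching corners align
```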

2.5. Test Environment

All experiments were run on the Google Colab cloud platform, with an NVIDIA A100 SXM GPU and an AMD EPYC 7B12 CPU. The experiments were implemented in a Windows 10 environment using Python 3.8.
To standardize the input data, all images were resized to 640 × 640 pixels. During training, the initial learning rate was 0.001 with a batch size of 16. We adopted the default learning rate scheduling strategy provided by the YOLOv8 framework, which combines linear warm-up and cosine annealing. The model was trained for 100 epochs. Table 3 provides a summary of the specific setup parameters.
During training, YOLOv8's built-in Mosaic augmentation, which combines four training images into one, was utilized to increase data diversity and reduce the risk of overfitting to a certain extent.
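Under these settings, training can be reproduced with the public Ultralytics API roughly as follows. The dataset YAML filename is a placeholder, and the custom model YAML defining YOLOv8s-SPM is not reproduced here; see the repository linked above for the actual configuration.

```python
from ultralytics import YOLO

# Minimal training sketch using the public Ultralytics API; swap the
# baseline YAML for the custom YOLOv8s-SPM model definition when available.
model = YOLO("yolov8s.yaml")

model.train(
    data="spores.yaml",   # hypothetical dataset config (paths + 3 classes)
    imgsz=640,            # images resized to 640 x 640
    epochs=100,
    batch=16,
    lr0=0.001,            # initial learning rate
    cos_lr=True,          # cosine annealing after linear warm-up
    mosaic=1.0,           # built-in Mosaic augmentation enabled
)
```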

2.6. Evaluation Index

The performance of the models was assessed using Precision (P), Recall (R), F1-score, and mean Average Precision (mAP) [31]. Precision indicates the correctness of predicted positives, while Recall represents the ability to detect true positive instances. Because Precision and Recall alone are insufficient to comprehensively assess model performance, the model's performance in object detection and classification was primarily assessed using mAP50 and mAP50:95 [32]; higher values of both indicate better model performance.
Specifically, Precision is defined as the ratio of accurately predicted positive samples to all predicted positives, Recall quantifies the ratio of accurately predicted positives to all real positive instances, and F1-score is the harmonic mean of the two. mAP50 is the mean Average Precision (AP) calculated across all categories, where a detection is deemed correct if the Intersection over Union (IoU) between the predicted and ground truth bounding boxes exceeds 0.50. In contrast, mAP50:95 denotes the mean AP computed by averaging over 10 IoU thresholds, ranging from 0.50 to 0.95 in increments of 0.05. Equations (6) and (7) define Precision and Recall, respectively, Equation (8) defines F1, Equation (9) defines AP50, and Equations (10) and (11) show how mAP50 and mAP50:95 are calculated.
$$P = \frac{TP}{TP + FP} \times 100\% \tag{6}$$

$$R = \frac{TP}{TP + FN} \times 100\% \tag{7}$$

$$F1 = \frac{2 \times P \times R}{P + R} \tag{8}$$
True Positives (TP) represent the number of correctly identified positive instances. False Positives (FP) denote negative instances that are incorrectly classified as positive, and False Negatives (FN) refer to positive instances that the model failed to detect.
$$AP_{50} = \int_0^1 P(r)\,dr, \quad \mathrm{IoU} \ge 0.5 \tag{9}$$

$$mAP_{50} = \frac{1}{n}\sum_{i=1}^{n} AP_{50}^{(i)} \tag{10}$$

$$mAP_{50:95} = \frac{1}{10}\sum_{i=0}^{9} AP_{\mathrm{IoU}=0.50+0.05i} \tag{11}$$
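The count-based metrics reduce to a few lines of code. The sketch below implements Equations (6)–(8) and the mAP50:95 average; the counts in the worked example are hypothetical and chosen only to illustrate the arithmetic.

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple:
    """Precision, Recall, and F1 from raw detection counts (Equations (6)-(8))."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

def map50_95(ap_per_threshold: list) -> float:
    """mAP50:95: mean AP over the 10 IoU thresholds 0.50, 0.55, ..., 0.95."""
    assert len(ap_per_threshold) == 10
    return sum(ap_per_threshold) / 10

# Worked example with hypothetical counts: 95 true positives,
# 3 false positives, and 5 missed spores.
p, r, f1 = precision_recall_f1(tp=95, fp=3, fn=5)
print(f"P={p:.3f}, R={r:.3f}, F1={f1:.3f}")  # P=0.969, R=0.950, F1=0.960
```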

3. Results and Discussion

3.1. Comparison of Different Attention Mechanism Modules

The single-stage detection framework of YOLOv8 provided efficient inference performance. However, conventional convolution operations often caused feature confusion when dealing with dense, multi-scale targets [33]. This study therefore incorporated attention mechanisms into the YOLOv8 model and evaluated the influence of several attention modules on the detection efficacy of tiny spore targets.
In this experiment, after the SPD-Conv and MPDIoU modules were introduced into the YOLOv8s model, five attention modules, including the Global Attention Mechanism (GAM) [34], Efficient Channel Attention (ECA) [20], Coordinate Attention (CA) [35], Receptive-Field Attention (RFA) [36], and PSA [28], were inserted after the SPPF layer, which connects the Backbone and Neck. The YOLOv8s-SPM model obtained the highest mAP50 and mAP50:95 values (Table 4), confirming the effectiveness of the proposed method. Its mAP50 did not differ significantly from the other attention mechanisms, but its mAP50:95 increased by 1.3% over the second-best method. This indicated that the PSA module effectively decreased the rate of missed detections for tiny targets through adaptive feature association enhancement and demonstrated strong robustness to local distortions and good generalization capability. Moreover, the YOLOv8s-SPM model also had the highest F1-score, showing a balanced optimization between Precision and Recall. Unlike the other attention mechanisms, PSA used a partial channel compression strategy and attained an improved equilibrium between computational efficiency and feature expression, making it more suitable for detecting small targets like spores.

3.2. Verification of the Effectiveness of PSA at Different Positions

In deep learning, models can leverage attention mechanisms to concentrate on key parts of the input images while ignoring irrelevant ones. In studies of other improved models, attention mechanisms are typically added to the Backbone or Neck layers [19,37,38]; introducing them into the Backbone helps the model focus more precisely on key areas during feature extraction, thereby enhancing overall feature representation [39]. In this study, the attention mechanism was added in the Neck layer. To explore the effect of various insertion positions in the Neck, four different insertion positions after the C2f operations preceding the detection heads (Figure 6) were designed for comparative experiments. The results showed that inserting the PSA module at Position-1 performed best (Table 5). This position lies at the intersection of the Backbone and Neck, retaining more complete spatial details and ensuring that all levels of the fused feature pyramid remain sensitive to the structure of small targets. While deeper layers in the Neck generally contain more semantic information due to multi-scale feature fusion, they may also suffer reduced spatial resolution and increased semantic redundancy. When the attention mechanism is added deeper in the Neck, its ability to enhance fine-grained spatial features may be limited because the spatial structure of small targets is less distinct after multiple downsamplings. Additionally, fused features at this stage can be more homogeneous, which may weaken the ability of the attention mechanism to highlight subtle local differences. These limitations may explain the suboptimal performance of the PSA attention mechanism when introduced into the deeper layers of the Neck.

3.3. Ablation Experiments

In this study, we proposed three improvement modules including SPD-Conv, PSA, and MPDIoU. Ablation experiments were performed to assess the individual contributions and synergistic impacts of these enhancements on model performance. Table 6 summarizes the results.
Initially, SPD-Conv modules were incorporated into the YOLOv8s model (YOLOv8s-a). The mAP50 increased by 0.9% and the mAP50:95 improved by 2.0% (Table 6); the model also exhibited gains of 1.4% in Precision and 4.3% in Recall. This may be attributed to the fact that SPD-Conv employs a spatial-to-channel transformation strategy that retains fine-grained details from the original feature maps. On this basis, after PSA was added in the lowest-resolution feature layer (YOLOv8s-b), the mAP50 and mAP50:95 of the model improved by 0.5% and 0.2%, respectively, indicating that the PSA mechanism effectively improved the model's capacity to detect mixed spores. When CIoU was replaced with MPDIoU (YOLOv8s-SPM), neither mAP50 nor F1-score changed appreciably, while mAP50:95 and Precision improved by 0.5% and 0.8%, respectively. In comparison to YOLOv8s, YOLOv8s-SPM achieved a significant improvement in spore detection accuracy, with mAP50:95 increasing by 2.7% through the integration of the three optimization modules. This demonstrated that the YOLOv8s-SPM model maintained stable detection performance across spores of varying scales, occlusion levels, and pose variations.
To further analyze the impact of each module on false positives and false negatives, confusion matrices of the four models in ablation experiments were plotted (Figure S1). The baseline YOLOv8s model showed many missed detections of corn head smut spores. After adding the SPD-Conv module, missed detections were notably reduced but more background regions were wrongly identified as corn head smut spores. This trade-off likely results from SPD-Conv’s enhanced sensitivity to fine-grained features, which helps the model respond to subtle or ambiguous targets but can also overactivate similar background textures. PSA and MPDIoU further reduced misclassifications in all classes by improving global feature understanding and localization precision. Overall, SPD-Conv mainly boosted Recall, while PSA and MPDIoU improved accuracy and robustness under complex conditions.
While the YOLOv8s-SPM model achieved notable improvements in detection accuracy, it incurred a moderate increase in computational complexity primarily due to the inclusion of the SPD-Conv module. The parameter counts and FLOPs rose from 11.14M and 14.33G (YOLOv8s) to 12.92M and 94.58G (YOLOv8s-a), respectively (Table S1). This increase is mainly attributed to the spatial-to-channel transformation and multi-stage feature aggregation introduced by SPD-Conv. Unlike simple stride convolutions, SPD-Conv involves more intensive operations to preserve spatial details. Nevertheless, the trade-off is acceptable given the substantial accuracy gains.

3.4. Performance Comparison of Different Models

In this study, a comparative experiment was conducted to evaluate the detection performance of the enhanced model. Several network models, including YOLOv5s, YOLOv6s [26], YOLOv7-tiny [25], YOLOv8n, YOLOv8s, and YOLOv8s-p2, were selected and compared with the proposed YOLOv8s-SPM model. Each model was trained on the same dataset with the same configuration settings and training parameters. Figure 10 illustrates the curves of several metrics during training for each model.
Table 7 demonstrates that YOLOv8s-SPM achieved the best performance across all five evaluation indicators. Compared to the baseline YOLOv8s model, the YOLOv8s-SPM model improved Precision, Recall, mAP50, and mAP50:95 by 2.3%, 4.0%, 1.4%, and 2.7%, respectively, indicating its enhanced effectiveness in detecting small spore targets under complex background interference. Among the compared models, YOLOv8s-p2 showed relatively good detection accuracy, with a mAP50 of 98.6%, outperforming YOLOv6s, YOLOv7-tiny, YOLOv8s, and YOLOv8n. The YOLOv8s-p2 model is a variant of the YOLOv8 series proposed by the Ultralytics team for small object detection scenarios; by introducing a higher-resolution P2 layer and constructing a dual-scale detection architecture, it enhances sensitivity to small targets. Compared to YOLOv8s-p2, the YOLOv8s-SPM model achieved increases of 1.2% in Precision, 0.5% in Recall, 0.3% in mAP50, and 1.5% in mAP50:95. This performance gap may be attributed to the different small-object detection strategies employed by the two models. Whereas YOLOv8s-p2 improves small object detection by adding a higher-resolution detection head, YOLOv8s-SPM enhances the detection pipeline from within: its SPD-Conv retains spatial detail during downsampling, PSA improves early-stage semantic focus, and MPDIoU refines localization accuracy. Overall, this combination of three optimizations may prove more effective than simply adding an extra detection head. These results suggest that incorporating the three optimizations leads to a substantial improvement in balancing detailed feature preservation and semantic representation, making the YOLOv8s-SPM model more advantageous for detecting small targets such as spores.

3.5. Visualization of Detection Results

We visualized the results of the spore detection using an identical test dataset to intuitively assess the detection capabilities of the baseline YOLOv8s and the improved YOLOv8s-SPM model. The comparisons of missed and false detections between the two models are illustrated in Figure 11 and Figure 12, respectively.
As shown in Figure 11, the YOLOv8s model frequently exhibited missed detections when identifying tiny spores, such as corn head smut spores, and the frequency of missed detections may increase when the spores are partially occluded. In contrast, the YOLOv8s-SPM model successfully detected and classified all visible spores in Figure 11, demonstrating superior detection performance.
The YOLOv8s model also suffered from false detections. For example, the hyphae were erroneously classified as northern corn leaf blight spores, and some impurities were misidentified as corn rust spores (Figure 12). The YOLOv8s-SPM model, however, avoided such misclassifications entirely, further indicating its improved robustness.
The visual analysis demonstrates that incorporating the SPD-Conv module, PSA mechanism, and MPDIoU loss function effectively improved the model's capacity for global context perception. This improvement led to higher detection accuracy for tiny targets and polymorphic structures, reducing both missed and incorrect detections on the mixed spores.

4. Conclusions

To overcome the poor identification accuracy brought on by fungal spores' tiny size, adhesion, and occlusion, we proposed a maize disease spore detection model named YOLOv8s-SPM, based on the YOLOv8s architecture, to detect three important airborne disease spores in corn production. Specifically, this model used SPD-Conv modules to improve the capacity to detect small targets and low-resolution images. Furthermore, the PSA mechanism was embedded in the lowest-resolution feature layer to reduce computational complexity while improving overall performance. The CIoU was also replaced with MPDIoU to enhance localization accuracy. The YOLOv8s-SPM model outperformed the original YOLOv8s model by 1.4% in mAP50, reaching 98.9%, with a mAP50:95 of 90.8%. The model demonstrated strong robustness, particularly under complex background scenarios. Future research will focus on including more diverse spore categories to strengthen the model's generalization ability. As the number of spore types increases, particularly those with highly similar morphological structures, further improvements will be explored on the basis of this model. We will also explore optimization strategies for lightweight networks, including model pruning and knowledge distillation, to minimize computational expense and make the model more suitable for embedded devices. In addition, we plan to integrate the proposed detection model with spore trapping devices and explore the use of multispectral or hyperspectral imaging technologies to enable more accurate and reliable real-time disease surveillance in complex agricultural environments.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/agriculture15151689/s1. Table S1: Computational complexity of ablation variants in the YOLOv8s-SPM framework. Figure S1: Confusion matrix comparison across ablation variants. YOLOv8s-a: YOLOv8s with the SPD-Conv. YOLOv8s-b: YOLOv8s-a with the PSA attention mechanism. YOLOv8s-SPM: YOLOv8s-b with MPDIoU loss function.

Author Contributions

Y.R.: conceptualization, methodology, and writing—original draft. Y.X.: investigation, methodology, and writing—original draft. H.T.: investigation and visualization. Q.Z.: resources. M.Y.: resources. R.Z.: methodology and software. D.X.: investigation and project administration. Q.C.: supervision, project administration, writing—review and editing. Q.W.: supervision, writing—review and editing. S.S.: conceptualization, supervision, writing—review and editing, and funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Heilongjiang Provincial Natural Science Foundation of China (YQ2024C008).

Data Availability Statement

The datasets and code used in this study are available in the repository: https://github.com/DPmooncake/maize-spore-detection (accessed on 19 May 2025).

Acknowledgments

The authors are grateful to the Collaborative Innovation Center of Soybean Biotechnology and Nutrition Efficiency, Henan Province.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Erenstein, O.; Jaleta, M.; Sonder, K.; Mottaleb, K.; Prasanna, B.M. Global Maize Production, Consumption and Trade: Trends and R&D Implications. Food Sec. 2022, 14, 1295–1319.
  2. Cai, J.; Pan, R.; Lin, J.; Liu, J.; Zhang, L.; Wen, X.; Chen, X.; Zhang, X. Improved EfficientNet for Corn Disease Identification. Front. Plant Sci. 2023, 14, 1224385.
  3. Sun, J.; Yang, Y.; He, X.; Wu, X. Northern Maize Leaf Blight Detection Under Complex Field Environment Based on Deep Learning. IEEE Access 2020, 8, 33679–33688.
  4. Singh, R.; Srivastava, R.P.; Ram, L. Northern Corn Leaf Blight-An Important Disease of Maize: An Extension Fact Sheet. Indian Res. J. Ext. Educ. 2012, 2, 239–241.
  5. Wang, Y.; Xu, C.; Gao, Y.; Ma, Y.; Zhang, X.; Zhang, L.; Di, H.; Ma, J.; Dong, L.; Zeng, X.; et al. Physiological Mechanisms Underlying Tassel Symptom Formation in Maize Infected with Sporisorium reilianum. Plants 2024, 13, 238.
  6. Cao, Y.; Cheng, Z.; Ma, J.; Yang, W.; Liu, X.; Zhang, X.; Zhang, J.; Wu, X.; Duan, C. Advances in Research on Southern Corn Rust, a Devastating Fungal Disease. Int. J. Mol. Sci. 2024, 25, 13644.
  7. Korsnes, R.; Westrum, K.; Fløistad, E.; Klingen, I. Computer-Assisted Image Processing to Detect Spores from the Fungus Pandora neoaphidis. MethodsX 2016, 3, 231–241.
  8. Le Vourch, V.; Decroës, A.; Thonon, S.; Lienard, C.; Van Steenberge, C.; Rosillon, D.; Lebrun, P.; César, V.; Legrève, A. Spatiotemporal Dynamics of Phytophthora infestans Airborne Inoculum in Belgium. Eur. J. Plant Pathol. 2025, 171, 323–340.
  9. Upadhyay, A.; Chandel, N.S.; Singh, K.P.; Chakraborty, S.K.; Nandede, B.M.; Kumar, M.; Subeesh, A.; Upendar, K.; Salem, A.; Elbeltagi, A. Deep Learning and Computer Vision in Plant Disease Detection: A Comprehensive Review of Techniques, Models, and Trends in Precision Agriculture. Artif. Intell. Rev. 2025, 58, 92.
  10. Wang, Y.; Du, X.; Ma, G.; Liu, Y.; Wang, B.; Mao, H. Classification Methods for Airborne Disease Spores from Greenhouse Crops Based on Multifeature Fusion. Appl. Sci. 2020, 10, 7850.
  11. Javidan, S.M.; Banakar, A.; Vakilian, K.A.; Ampatzidis, Y.; Rahnama, K. Diagnosing the Spores of Tomato Fungal Diseases Using Microscopic Image Processing and Machine Learning. Multimed. Tools Appl. 2024, 83, 67283–67301.
  12. Sujatha, R.; Chatterjee, J.M.; Jhanjhi, N.; Brohi, S.N. Performance of Deep Learning vs Machine Learning in Plant Leaf Disease Detection. Microprocess. Microsyst. 2021, 80, 103615.
  13. Zou, Z.; Chen, K.; Shi, Z.; Guo, Y.; Ye, J. Object Detection in 20 Years: A Survey. Proc. IEEE 2023, 111, 257–276.
  14. Zou, X. A Review of Object Detection Techniques. In Proceedings of the 2019 International Conference on Smart Grid and Electrical Automation (ICSGEA), Xiangtan, China, 10–11 August 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 251–254.
  15. Chauhan, N.K.; Singh, K. A Review on Conventional Machine Learning vs Deep Learning. In Proceedings of the 2018 International Conference on Computing, Power and Communication Technologies (GUCON), Greater Noida, Uttar Pradesh, India, 28–29 September 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 347–352.
  16. Li, K.; Zhu, X.; Qiao, C.; Zhang, L.; Gao, W.; Wang, Y. The Gray Mold Spore Detection of Cucumber Based on Microscopic Image and Deep Learning. Plant Phenomics 2023, 5, 0011.
  17. Zhu, X.; Chen, F.; Qiao, C.; Zhang, Y.; Zhang, L.; Gao, W.; Wang, Y. Cucumber Pathogenic Spores' Detection Using the GCS-YOLOv8 Network with Microscopic Images in Natural Scenes. Plant Methods 2024, 20, 131.
  18. Varghese, R.; Sambath, M. YOLOv8: A Novel Object Detection Algorithm with Enhanced Performance and Robustness. In Proceedings of the 2024 International Conference on Advances in Data Engineering and Intelligent Computing Systems (ADICS), Chennai, India, 18–19 April 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–6.
  19. Zhang, D.-Y.; Zhang, W.; Cheng, T.; Zhou, X.-G.; Yan, Z.; Wu, Y.; Zhang, G.; Yang, X. Detection of Wheat Scab Fungus Spores Utilizing the Yolov5-ECA-ASFF Network Structure. Comput. Electron. Agric. 2023, 210, 107953.
  20. Wang, Q.; Wu, B.; Zhu, P.; Li, P.; Zuo, W.; Hu, Q. ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 11531–11539.
  21. Zhang, D.; Tao, W.; Cheng, T.; Zhou, X.; Hu, G.; Qiao, H.; Guo, W.; Wang, Z.; Gu, C. GSD-YOLO: A Lightweight Decoupled Wheat Scab Spore Detection Network Based on Yolov7-Tiny. Agriculture 2024, 14, 2278.
  22. Elreedy, D.; Atiya, A.F.; Kamalov, F. A Theoretical Distribution Analysis of Synthetic Minority Oversampling Technique (SMOTE) for Imbalanced Learning. Mach. Learn. 2024, 113, 4903–4923.
  23. Liu, S.; Qi, L.; Qin, H.; Shi, J.; Jia, J. Path Aggregation Network for Instance Segmentation. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 8759–8768.
  24. Lin, T.-Y.; Dollar, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature Pyramid Networks for Object Detection. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 936–944.
  25. Wang, C.-Y.; Bochkovskiy, A.; Liao, H.-Y.M. YOLOv7: Trainable Bag-of-Freebies Sets New State-of-the-Art for Real-Time Object Detectors. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 7464–7475.
  26. Li, C.; Li, L.; Jiang, H.; Weng, K.; Geng, Y.; Li, L.; Ke, Z.; Li, Q.; Cheng, M.; Nie, W.; et al. YOLOv6: A Single-Stage Object Detection Framework for Industrial Applications. arXiv 2022.
  27. Sunkara, R.; Luo, T. No More Strided Convolutions or Pooling: A New CNN Building Block for Low-Resolution Images and Small Objects. arXiv 2022.
  28. Wang, A.; Chen, H.; Liu, L.; Chen, K.; Lin, Z.; Han, J.; Ding, G. YOLOv10: Real-Time End-to-End Object Detection. arXiv 2024.
  29. Zheng, Z.; Wang, P.; Ren, D.; Liu, W.; Ye, R.; Hu, Q.; Zuo, W. Enhancing Geometric Factors in Model Learning and Inference for Object Detection and Instance Segmentation. IEEE Trans. Cybern. 2022, 52, 8574–8586.
  30. Ma, S.; Xu, Y. MPDIoU: A Loss for Efficient and Accurate Bounding Box Regression. arXiv 2023.
  31. Bono, F.M.; Radicioni, L.; Cinquemani, S. A Novel Approach for Quality Control of Automated Production Lines Working under Highly Inconsistent Conditions. Eng. Appl. Artif. Intell. 2023, 122, 106149.
  32. Zhang, D.-Y.; Luo, H.-S.; Wang, D.-Y.; Zhou, X.-G.; Li, W.-F.; Gu, C.-Y.; Zhang, G.; He, F.-M. Assessment of the Levels of Damage Caused by Fusarium Head Blight in Wheat Using an Improved YoloV5 Method. Comput. Electron. Agric. 2022, 198, 107086.
  33. Zheng, X.; Wang, H.; Shuang, Y.; Chen, G.; Zou, S.; Yuan, Q. Starting from the Structure: A Review of Small Object Detection Based on Deep Learning. Image Vis. Comput. 2024, 146, 105054.
  34. Liu, Y.; Shao, Z.; Hoffmann, N. Global Attention Mechanism: Retain Information to Enhance Channel-Spatial Interactions. arXiv 2021.
  35. Hou, Q.; Zhou, D.; Feng, J. Coordinate Attention for Efficient Mobile Network Design. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 13708–13717.
  36. Zhang, X.; Liu, C.; Yang, D.; Song, T.; Ye, Y.; Li, K.; Song, Y. RFAConv: Innovating Spatial Attention and Standard Convolutional Operation. arXiv 2023.
  37. Zhao, E.; Zhao, H.; Liu, G.; Jiang, J.; Zhang, F.; Zhang, J.; Luo, C.; Chen, B.; Yang, X. Automated Recognition of Conidia of Nematode-Trapping Fungi Based on Improved YOLOv8. IEEE Access 2024, 12, 81314–81328.
  38. Li, K.; Qiao, C.; Zhu, X.; Song, Y.; Zhang, L.; Gao, W.; Wang, Y. Lightweight Fungal Spore Detection Based on Improved YOLOv5 in Natural Scenes. Int. J. Mach. Learn. Cyber. 2024, 15, 2247–2261.
  39. Niu, Z.; Zhong, G.; Yu, H. A Review on the Attention Mechanism of Deep Learning. Neurocomputing 2021, 452, 48–62.
Figure 1. Experimental workflow.
Figure 2. Morphological characteristics of fungal spores of (A) northern corn leaf blight, (B) corn rust, and (C) corn head smut.
Figure 3. Examples of the microscopic images of the mixed spores.
Figure 4. Examples of microscopic images after data enhancement.
Figure 5. The architecture of the YOLOv8s-SPM model.
Figure 6. The architecture of the YOLOv8s model. Position-1, Position-2, Position-3, and Position-4 refer to the different insertion locations of the Partial Self-Attention (PSA) module in the experiment titled Verification of the Effectiveness of PSA at Different Positions.
Figure 7. The structure of the SPD-Conv module.
Figure 8. The architecture of the PSA module.
Figure 9. The principles of CIoU and MPDIoU. (A) The principle of the CIoU. (B) The principle of the MPDIoU. The green and red boxes correspond to the ground truth and predicted boxes, respectively.
Figure 10. Comparison curves of evaluation metrics for different models.
Figure 11. Visualization comparison of algorithm false negatives. The orange, red, and green bounding boxes represent spores of northern corn leaf blight, corn head smut, and corn rust, respectively.
Figure 12. Visualization comparison of the algorithm's false positives. The orange, red, and green bounding boxes represent spores of northern corn leaf blight, corn head smut, and corn rust, respectively.
Table 1. Platform configuration.

| Optical System | Camera | Magnification | Pixel | Exposure Time | Software | System |
|---|---|---|---|---|---|---|
| BX43 | DP74 | 10 × 10 | 1920 × 1200 | 4.75 ms | CellSens Entry (v1.15) | Win10 × 64 |
Table 2. Image counts in the training, validation, and test sets, with spore instance counts per class.

| Name | Images | Corn Leaf Blight | Corn Head Smut | Corn Rust |
|---|---|---|---|---|
| Train | 3202 | 11,970 | 13,408 | 12,712 |
| Val | 400 | 1467 | 1940 | 1557 |
| Test | 401 | 1586 | 1637 | 1434 |
Table 3. Experimental configuration parameters.

| Category | Parameter |
|---|---|
| GPU | NVIDIA A100 SXM |
| CPU | AMD EPYC 7B12 |
| System | Win10 |
| Python version | 3.8 |
| CUDA | 12.6.65 |
| Pixel | 640 × 640 |
| Epoch | 100 |
| Batch size | 16 |
| Learning rate | 0.001 |
Table 4. Comparison of the five attention mechanisms.

| Models | Precision/% | Recall/% | mAP50/% | mAP50:95/% | F1/% |
|---|---|---|---|---|---|
| YOLOv8s-SPD-GAM | 97.0 | 96.5 | 98.8 | 89.4 | 97.0 |
| YOLOv8s-SPD-ECA | 96.9 | 96.1 | 98.7 | 89.5 | 97.0 |
| YOLOv8s-SPD-CA | 96.7 | 96.6 | 98.8 | 89.4 | 97.0 |
| YOLOv8s-SPD-RFA | 97.1 | 96.4 | 98.8 | 89.2 | 97.0 |
| YOLOv8s-SPM | 97.8 | 96.3 | 98.9 | 90.8 | 97.0 |
Table 5. Comparison of PSA mechanism positions.

| Position | Precision/% | Recall/% | mAP50/% | mAP50:95/% | F1/% |
|---|---|---|---|---|---|
| None | 97.0 | 96.4 | 98.9 | 90.3 | 97.0 |
| Position-1 | 97.8 | 96.3 | 98.9 | 90.8 | 97.0 |
| Position-2 | 96.7 | 96.6 | 98.8 | 89.7 | 97.0 |
| Position-3 | 96.7 | 96.0 | 98.8 | 90.6 | 97.0 |
| Position-4 | 97.2 | 96.5 | 98.8 | 90.5 | 97.0 |
Table 6. Ablation experiment.

| Models | SPD | PSA | MPDIoU | Precision/% | Recall/% | mAP50/% | mAP50:95/% | F1/% |
|---|---|---|---|---|---|---|---|---|
| YOLOv8s | | | | 95.5 | 92.3 | 97.5 | 88.1 | 92.0 |
| YOLOv8s-a | √ | | | 96.9 | 96.6 | 98.4 | 90.1 | 97.0 |
| YOLOv8s-b | √ | √ | | 97.0 | 96.4 | 98.9 | 90.3 | 97.0 |
| YOLOv8s-SPM | √ | √ | √ | 97.8 | 96.3 | 98.9 | 90.8 | 97.0 |

√ indicates that the optimization has been added to the model.
Table 7. Comparison experiment.

| Models | Precision/% | Recall/% | mAP50/% | mAP50:95/% | F1/% |
|---|---|---|---|---|---|
| YOLOv5s | 95.8 | 96.1 | 98.1 | 86.8 | 97.0 |
| YOLOv6s | 94.3 | 90.0 | 95.9 | 83.0 | 92.0 |
| YOLOv7-tiny | 95.2 | 95.8 | 97.9 | 84.1 | 95.0 |
| YOLOv8s | 95.5 | 92.3 | 97.5 | 88.1 | 94.0 |
| YOLOv8n | 95.0 | 90.5 | 96.4 | 85.5 | 92.0 |
| YOLOv8s-p2 | 96.6 | 95.8 | 98.6 | 89.3 | 96.0 |
| YOLOv8s-SPM | 97.8 | 96.3 | 98.9 | 90.8 | 97.0 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
