Search Results (24)

Search Parameters:
Keywords = fine-grained crop disease

25 pages, 5142 KiB  
Article
Wheat Powdery Mildew Severity Classification Based on an Improved ResNet34 Model
by Meilin Li, Yufeng Guo, Wei Guo, Hongbo Qiao, Lei Shi, Yang Liu, Guang Zheng, Hui Zhang and Qiang Wang
Agriculture 2025, 15(15), 1580; https://doi.org/10.3390/agriculture15151580 - 23 Jul 2025
Viewed by 282
Abstract
Crop disease identification is a pivotal research area in smart agriculture, forming the foundation for disease mapping and targeted prevention strategies. Among the most prevalent global wheat diseases, powdery mildew—caused by fungal infection—poses a significant threat to crop yield and quality, making early and accurate detection crucial for effective management. In this study, we present QY-SE-MResNet34, a deep learning-based classification model that builds upon ResNet34 to perform multi-class classification of wheat leaf images and assess powdery mildew severity at the single-leaf level. The proposed methodology begins with dataset construction following the GB/T 17980.22-2000 national standard for powdery mildew severity grading, resulting in a curated collection of 4248 wheat leaf images at the grain-filling stage across six severity levels. To enhance model performance, we integrated transfer learning with ResNet34, leveraging pretrained weights to improve feature extraction and accelerate convergence. Further refinements included embedding a Squeeze-and-Excitation (SE) block to strengthen feature representation while maintaining computational efficiency. The model architecture was also optimized by modifying the first convolutional layer (conv1)—replacing the original 7 × 7 kernel with a 3 × 3 kernel, adjusting the stride to 1, and setting padding to 1—to better capture fine-grained leaf textures and edge features. Subsequently, the optimal training strategy was determined through hyperparameter tuning experiments, and GrabCut-based background processing along with data augmentation were introduced to enhance model robustness. In addition, interpretability techniques such as channel masking and Grad-CAM were employed to visualize the model’s decision-making process.
Experimental validation demonstrated that QY-SE-MResNet34 achieved an 89% classification accuracy, outperforming established models such as ResNet50, VGG16, and MobileNetV2 and surpassing the original ResNet34 by 11%. This study delivers a high-performance solution for single-leaf wheat powdery mildew severity assessment, offering practical value for intelligent disease monitoring and early warning systems in precision agriculture. Full article
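The SE recalibration this abstract relies on is a squeeze (global average pool), an excitation (two small fully connected layers with a sigmoid gate), and a per-channel rescale. A minimal numpy sketch, with illustrative weights and channel counts rather than the paper's:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(feature_map, w1, w2):
    """Squeeze-and-Excitation over a (C, H, W) feature map.

    w1: (C//r, C) reduction weights, w2: (C, C//r) expansion weights,
    where r is the reduction ratio (illustrative values here).
    """
    squeeze = feature_map.mean(axis=(1, 2))      # global average pool -> (C,)
    hidden = np.maximum(0.0, w1 @ squeeze)       # FC + ReLU (channel reduction)
    scale = sigmoid(w2 @ hidden)                 # FC + sigmoid -> gate in (0, 1) per channel
    return feature_map * scale[:, None, None]    # channel-wise recalibration

rng = np.random.default_rng(0)
C, r = 8, 4
x = rng.normal(size=(C, 16, 16))
w1 = rng.normal(size=(C // r, C)) * 0.1
w2 = rng.normal(size=(C, C // r)) * 0.1
y = se_block(x, w1, w2)
```

Because the gate is a sigmoid, each channel is only ever attenuated, never amplified, which is why the block adds expressiveness at negligible computational cost.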

17 pages, 1927 KiB  
Article
ConvTransNet-S: A CNN-Transformer Hybrid Disease Recognition Model for Complex Field Environments
by Shangyun Jia, Guanping Wang, Hongling Li, Yan Liu, Linrong Shi and Sen Yang
Plants 2025, 14(15), 2252; https://doi.org/10.3390/plants14152252 - 22 Jul 2025
Viewed by 375
Abstract
To address the challenges of low recognition accuracy and substantial model complexity in crop disease identification models operating in complex field environments, this study proposed a novel hybrid model named ConvTransNet-S, which integrates Convolutional Neural Networks (CNNs) and transformers for crop disease identification tasks. Unlike existing hybrid approaches, ConvTransNet-S uniquely introduces three key innovations: First, a Local Perception Unit (LPU) and Lightweight Multi-Head Self-Attention (LMHSA) modules were introduced to synergistically enhance the extraction of fine-grained plant disease details and model global dependency relationships, respectively. Second, an Inverted Residual Feed-Forward Network (IRFFN) was employed to optimize the feature propagation path, thereby enhancing the model’s robustness against interferences such as lighting variations and leaf occlusions. This novel combination of an LPU, LMHSA, and an IRFFN achieves a dynamic equilibrium between local texture perception and global context modeling—effectively resolving the trade-offs inherent in standalone CNNs or transformers. Finally, through a phased architecture design, efficient fusion of multi-scale disease features is achieved, which enhances feature discriminability while reducing model complexity. The experimental results indicated that ConvTransNet-S achieved a recognition accuracy of 98.85% on the PlantVillage public dataset. This model operates with only 25.14 million parameters, a computational load of 3.762 GFLOPs, and an inference time of 7.56 ms. Testing on a self-built in-field complex scene dataset comprising 10,441 images revealed that ConvTransNet-S achieved an accuracy of 88.53%, which represents improvements of 14.22%, 2.75%, and 0.34% over EfficientNetV2, Vision Transformer, and Swin Transformer, respectively.
Furthermore, the ConvTransNet-S model achieved up to 14.22% higher disease recognition accuracy under complex background conditions while reducing the parameter count by 46.8%. This confirms that its unique multi-scale feature mechanism can effectively distinguish disease from background features, providing a novel technical approach for disease diagnosis in complex agricultural scenarios and demonstrating significant application value for intelligent agricultural management. Full article
(This article belongs to the Section Plant Modeling)
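The transformer half of such a hybrid rests on multi-head self-attention. A bare-bones numpy sketch of the computation (identity Q/K/V projections for brevity; a real LMHSA block learns the projections and shrinks the key/value resolution to cut cost):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, num_heads):
    """Scaled dot-product self-attention over a (tokens, dim) sequence.

    Each head attends over its own slice of the embedding, so every output
    token is a weighted mix of all input tokens (global dependencies).
    """
    n, d = x.shape
    dh = d // num_heads
    out = np.empty_like(x)
    for h in range(num_heads):
        sl = slice(h * dh, (h + 1) * dh)
        q = k = v = x[:, sl]                       # identity projections (sketch only)
        attn = softmax(q @ k.T / np.sqrt(dh))      # (n, n) row-stochastic weights
        out[:, sl] = attn @ v                      # mix all tokens per head
    return out

rng = np.random.default_rng(1)
tokens = rng.normal(size=(10, 16))
mixed = multi_head_self_attention(tokens, num_heads=4)
```

If every token is identical, the attention weights are uniform and the output reproduces the input, which is a quick sanity check on the implementation.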

28 pages, 4068 KiB  
Article
GDFC-YOLO: An Efficient Perception Detection Model for Precise Wheat Disease Recognition
by Jiawei Qian, Chenxu Dai, Zhanlin Ji and Jinyun Liu
Agriculture 2025, 15(14), 1526; https://doi.org/10.3390/agriculture15141526 - 15 Jul 2025
Viewed by 340
Abstract
Wheat disease detection is a crucial component of intelligent agricultural systems in modern agriculture, but its accuracy still has clear limitations: existing models struggle to capture the irregular, fine-grained texture features of lesions, and standard upsampling operations reconstruct spatial information inaccurately. In this work, the GDFC-YOLO method is proposed to address these limitations and enhance detection accuracy. The method is based on YOLOv11 and introduces three key improvements: (1) a newly designed Ghost Dynamic Feature Core (GDFC) in the backbone, which improves the efficiency of disease feature extraction and enhances the model’s ability to capture informative representations; (2) a redesigned neck structure, the Disease-Focused Neck (DF-Neck), which further strengthens feature expressiveness, improves multi-scale fusion, and refines the feature processing pipeline; and (3) the integration of the Powerful Intersection over Union v2 (PIoUv2) loss function to improve regression accuracy and convergence speed. The results showed that GDFC-YOLO raised the mean average precision at an intersection-over-union threshold of 0.5 (mAP@0.5) from 0.86 to 0.90, reached a precision of 0.899 and a recall of 0.821, and still maintained a compact structure with only 9.27 M parameters. These results indicate that GDFC-YOLO offers good detection performance and strong practicality, making it a solution that can accurately and efficiently detect crop diseases in real agricultural scenarios. Full article
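mAP@0.5 counts a predicted box as correct when its overlap with the ground truth reaches intersection-over-union (IoU) 0.5; the IoU-family losses named in these abstracts all start from the same measure. A minimal implementation:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)   # clamp: disjoint boxes -> 0
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

For example, two unit-overlap 2 × 2 boxes score 1/7 ≈ 0.143, well below the 0.5 threshold; variants such as PIoUv2 add penalty terms on top of this ratio to speed up regression, which is beyond this sketch.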

21 pages, 4147 KiB  
Article
AgriFusionNet: A Lightweight Deep Learning Model for Multisource Plant Disease Diagnosis
by Saleh Albahli
Agriculture 2025, 15(14), 1523; https://doi.org/10.3390/agriculture15141523 - 15 Jul 2025
Viewed by 495
Abstract
Timely and accurate identification of plant diseases is critical to mitigating crop losses and enhancing yield in precision agriculture. This paper proposes AgriFusionNet, a lightweight and efficient deep learning model designed to diagnose plant diseases using multimodal data sources. The framework integrates RGB and multispectral drone imagery with IoT-based environmental sensor data (e.g., temperature, humidity, soil moisture), recorded over six months across multiple agricultural zones. Built on the EfficientNetV2-B4 backbone, AgriFusionNet incorporates Fused-MBConv blocks and Swish activation to improve gradient flow, capture fine-grained disease patterns, and reduce inference latency. The model was evaluated using a comprehensive dataset composed of real-world and benchmarked samples, showing superior performance with 94.3% classification accuracy, 28.5 ms inference time, and a 30% reduction in model parameters compared to state-of-the-art models such as Vision Transformers and InceptionV4. Extensive comparisons with both traditional machine learning and advanced deep learning methods underscore its robustness, generalization, and suitability for deployment on edge devices. Ablation studies and confusion matrix analyses further confirm its diagnostic precision, even in visually ambiguous cases. The proposed framework offers a scalable, practical solution for real-time crop health monitoring, contributing toward smart and sustainable agricultural ecosystems. Full article
(This article belongs to the Special Issue Computational, AI and IT Solutions Helping Agriculture)
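The Swish activation mentioned for the EfficientNetV2-B4 backbone is simply x · sigmoid(x): smooth, unbounded above, and slightly non-monotonic below zero, which tends to improve gradient flow over ReLU. A one-liner:

```python
import math

def swish(x):
    """Swish activation: x * sigmoid(x).

    Approaches the identity for large positive x, approaches 0 for large
    negative x, and dips slightly below zero in between (non-monotonic).
    """
    return x / (1.0 + math.exp(-x))
```

The non-monotonic dip means swish(-3) is closer to zero than swish(-1), unlike any monotonic activation.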

21 pages, 2701 KiB  
Article
HSDT-TabNet: A Dual-Path Deep Learning Model for Severity Grading of Soybean Frogeye Leaf Spot
by Xiaoming Li, Yang Zhou, Yongguang Li, Shiqi Wang, Wenxue Bian and Hongmin Sun
Agronomy 2025, 15(7), 1530; https://doi.org/10.3390/agronomy15071530 - 24 Jun 2025
Viewed by 334
Abstract
Soybean frogeye leaf spot (FLS), a serious soybean disease, causes severe yield losses in the largest production regions of China. However, both conventional field monitoring and machine learning algorithms remain challenged in achieving rapid and accurate detection. In this study, an HSDT-TabNet model was proposed for the grading of soybean FLS under field conditions by analyzing unmanned aerial vehicle (UAV)-based hyperspectral data. This model employs a dual-path parallel feature extraction strategy: the TabNet path performs sparse feature selection to capture fine-grained local discriminative information, while the hierarchical soft decision tree (HSDT) path models global nonlinear relationships across hyperspectral bands. The features from both paths are then dynamically fused via a multi-head attention mechanism to integrate complementary information. Furthermore, the overall generalization ability of the model is improved through hyperparameter optimization based on the tree-structured Parzen estimator (TPE). Experimental results show that HSDT-TabNet achieved a macro-accuracy of 96.37% under five-fold cross-validation. It outperformed the TabTransformer and SVM baselines by 2.08% and 2.23%, respectively. For high-severity cases (Level 4–5), the classification accuracy exceeded 97%. This study provides an effective method for precise field-scale crop disease monitoring. Full article
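The dual-path fusion idea is that a gate scores each path's features and the fused vector is their normalized weighted sum. The sketch below is a simplified single-gate stand-in for the paper's multi-head attention fusion (the gate vector and dimensions are illustrative):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse_paths(local_feat, global_feat, w_gate):
    """Attention-style fusion of two feature vectors of equal dimension.

    A gate scores each path (here the TabNet-like local path and the
    tree-like global path), softmax normalizes the scores, and the fused
    vector is the weighted sum, so complementary paths are blended
    adaptively rather than concatenated.
    """
    paths = np.stack([local_feat, global_feat])   # (2, d)
    scores = paths @ w_gate                       # one relevance score per path
    weights = softmax(scores)                     # (2,), positive, sums to 1
    return weights @ paths, weights

rng = np.random.default_rng(2)
d = 6
fused, w = fuse_paths(rng.normal(size=d), rng.normal(size=d), rng.normal(size=d))
```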

26 pages, 21987 KiB  
Article
AHN-YOLO: A Lightweight Tomato Detection Method for Dense Small-Sized Features Based on YOLO Architecture
by Wenhui Zhang and Feng Jiang
Horticulturae 2025, 11(6), 639; https://doi.org/10.3390/horticulturae11060639 - 6 Jun 2025
Viewed by 612
Abstract
Convolutional neural networks (CNNs) are increasingly applied in crop disease identification, yet most existing techniques are optimized solely for laboratory environments. When confronted with real-world challenges such as diverse disease morphologies, complex backgrounds, and subtle feature variations, these models often exhibit insufficient robustness. To effectively identify fine-grained disease features in complex scenarios while reducing deployment and training costs, this paper proposes a novel network architecture named AHN-YOLO, based on an improved YOLOv11-n framework that demonstrates balanced performance in multi-scale feature processing. The key innovations of AHN-YOLO include (1) the introduction of an ADown module to reduce model parameters; (2) the adoption of a Normalized Wasserstein Distance (NWD) loss function to stabilize small-feature detection; and (3) the proposal of a lightweight hybrid attention mechanism, Light-ES, to enhance focus on disease regions. Compared to the original architecture, AHN-YOLO achieves a 17.1 % reduction in model size. Comparative experiments on a tomato disease detection dataset under real-world complex conditions demonstrate that AHN-YOLO improves accuracy, recall, and mAP-50 by 9.5%, 7.5%, and 9.2%, respectively, indicating a significant enhancement in detection precision. When benchmarked against other lightweight models in the field, AHN-YOLO exhibits superior training efficiency and detection accuracy in complex, dense scenarios, demonstrating clear advantages. Full article
(This article belongs to the Section Vegetable Production Systems)
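The NWD loss cited above models each box as a 2D Gaussian and compares Gaussians with the 2-Wasserstein distance, which stays informative for tiny boxes even when IoU collapses to zero. A sketch of the standard formulation (the normalizing constant c is dataset-dependent; 12.8 is a commonly quoted choice, not necessarily this paper's):

```python
import math

def nwd(box_a, box_b, c=12.8):
    """Normalized Wasserstein Distance between boxes given as (cx, cy, w, h).

    Each box maps to a Gaussian N((cx, cy), diag(w^2/4, h^2/4)); the squared
    2-Wasserstein distance between two such Gaussians has the closed form
    below, and exp(-W2 / c) maps it into (0, 1] like a similarity.
    """
    w2_sq = ((box_a[0] - box_b[0]) ** 2 + (box_a[1] - box_b[1]) ** 2
             + ((box_a[2] - box_b[2]) / 2) ** 2 + ((box_a[3] - box_b[3]) / 2) ** 2)
    return math.exp(-math.sqrt(w2_sq) / c)
```

Two 4 × 4 boxes six pixels apart have IoU 0 but a graded NWD, which is what stabilizes gradients for dense small targets.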

19 pages, 4540 KiB  
Article
YOLO-BSMamba: A YOLOv8s-Based Model for Tomato Leaf Disease Detection in Complex Backgrounds
by Zongfang Liu, Xiangyun Guo, Tian Zhao and Shuang Liang
Agronomy 2025, 15(4), 870; https://doi.org/10.3390/agronomy15040870 - 30 Mar 2025
Cited by 1 | Viewed by 1107
Abstract
The precise identification of diseases on tomato leaves is of great importance for precisely targeted pesticide application in complex background scenarios. Existing models often have difficulty capturing long-range dependencies and fine-grained features in images, leading to poor recognition against complex backgrounds. To tackle this challenge, this study proposed the YOLO-BSMamba detection model. A Hybrid Convolutional Mamba module (HCMamba) was integrated within the neck network, with the aim of improving feature representation by combining the State Space Model’s (SSM) capacity to capture global contextual dependencies with convolution’s capacity to discern localized spatial features. Furthermore, we introduced a Similarity-Based Attention Mechanism into the C2f module to improve the model’s feature extraction by focusing on disease-indicative leaf areas and reducing background noise. A weighted bidirectional feature pyramid network (BiFPN) replaced the network’s feature-fusion component, enhancing the model’s detection of lesions with heterogeneous symptomatic gradations and enabling it to effectively integrate features from different scales. Results showed that YOLO-BSMamba achieved an F1 score, mAP@0.5, and mAP@0.5:0.95 of 81.9%, 86.7%, and 72.0%, respectively, improvements of 3.0%, 4.8%, and 4.3% over YOLOv8s. Compared to other YOLO-series models, it achieves the best mAP@0.5 and F1 score. This study provides a robust and reliable method for tomato leaf disease recognition, which is expected to improve targeted pesticide efficiency and further enhance crop monitoring and management in precision agriculture. Full article
(This article belongs to the Section Pest and Disease Management)
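BiFPN's "fast normalized fusion" blends feature maps with learnable non-negative scalars normalized by their sum, avoiding a softmax per fusion node. A minimal numpy sketch with made-up weights:

```python
import numpy as np

def bifpn_fuse(features, raw_weights, eps=1e-4):
    """BiFPN fast normalized fusion: out = sum(w_i * F_i) / (sum(w_i) + eps).

    Each w_i = ReLU(raw learnable scalar), so weights stay non-negative and
    the output is a convex-like combination of same-shape feature maps.
    """
    w = np.maximum(0.0, np.asarray(raw_weights, dtype=float))
    feats = np.stack(features)
    return np.tensordot(w, feats, axes=1) / (w.sum() + eps)

p_small = np.full((4, 4), 2.0)   # e.g. an upsampled coarser pyramid level
p_large = np.full((4, 4), 4.0)   # a same-resolution finer level
fused = bifpn_fuse([p_small, p_large], raw_weights=[1.0, 3.0])
```

With weights 1 and 3 the fused map sits at roughly (1·2 + 3·4)/4 = 3.5, and a negative raw weight is clipped to zero, silencing that input entirely.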

27 pages, 7551 KiB  
Article
RDRM-YOLO: A High-Accuracy and Lightweight Rice Disease Detection Model for Complex Field Environments Based on Improved YOLOv5
by Pan Li, Jitao Zhou, Huihui Sun and Jian Zeng
Agriculture 2025, 15(5), 479; https://doi.org/10.3390/agriculture15050479 - 23 Feb 2025
Cited by 4 | Viewed by 1278
Abstract
Rice leaf diseases critically threaten global rice production by reducing crop yield and quality. Efficient disease detection in complex field environments remains a persistent challenge for sustainable agriculture. Existing deep learning-based methods for rice leaf disease detection struggle with inadequate sensitivity to subtle disease features, high computational complexity, and degraded accuracy under complex field conditions, such as background interference and fine-grained disease variations. To address these limitations, this research aims to develop a lightweight yet high-accuracy detection model tailored for complex field environments that balances computational efficiency with robust performance. We propose RDRM-YOLO, an enhanced YOLOv5-based network, integrating four key improvements: (i) a cross-stage partial network fusion module (Hor-BNFA) is integrated within the backbone network’s feature extraction stage to enhance the model’s ability to capture disease-specific features; (ii) a spatial depth conversion convolution (SPDConv) is introduced to expand the receptive field, enhancing the extraction of fine-grained features, particularly from small disease spots; (iii) SPDConv is also integrated into the neck network, where the standard convolution is replaced with a lightweight GsConv to increase the accuracy of disease localization, category prediction, and inference speed; and (iv) the WIoU Loss function is adopted in place of CIoU Loss to accelerate convergence and enhance detection accuracy. The model is trained and evaluated utilizing a comprehensive dataset of 5930 field-collected and augmented sample images comprising four prevalent rice leaf diseases: bacterial blight, leaf blast, brown spot, and tungro. Experimental results demonstrate that our proposed RDRM-YOLO model achieves state-of-the-art performance with a detection accuracy of 94.3%, and a recall of 89.6%. 
Furthermore, it achieves a mean Average Precision (mAP) of 93.5%, while maintaining a compact model size of merely 7.9 MB. Compared to Faster R-CNN, YOLOv6, YOLOv7, and YOLOv8 models, the RDRM-YOLO model demonstrates faster convergence and achieves the optimal result values in Precision, Recall, mAP, model size, and inference speed. This work provides a practical solution for real-time rice disease monitoring in agricultural fields, offering a very effective balance between model simplicity and detection performance. The proposed enhancements are readily adaptable to other crop disease detection tasks, thereby contributing to the advancement of precision agriculture technologies. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

17 pages, 6161 KiB  
Article
Efficient Triple Attention and AttentionMix: A Novel Network for Fine-Grained Crop Disease Classification
by Yanqi Zhang, Ning Zhang, Jingbo Zhu, Tan Sun, Xiujuan Chai and Wei Dong
Agriculture 2025, 15(3), 313; https://doi.org/10.3390/agriculture15030313 - 31 Jan 2025
Cited by 2 | Viewed by 958
Abstract
In the face of global climate change, crop pests and diseases have emerged on a large scale, with diverse species lasting for long periods and exerting wide-ranging impacts. Identifying crop pests and diseases efficiently and accurately is crucial in enhancing crop yields. Nonetheless, the complexity and variety of scenarios render this a challenging task. In this paper, we propose a fine-grained crop disease classification network integrating the efficient triple attention (ETA) module and the AttentionMix data enhancement strategy. The ETA module is capable of capturing channel attention and spatial attention information more effectively, which contributes to enhancing the representational capacity of deep CNNs. Additionally, AttentionMix can effectively address the label misassignment issue in CutMix, a commonly used method for obtaining high-quality data samples. The ETA module and AttentionMix can work together on deep CNNs for greater performance gains. We conducted experiments on our self-constructed crop disease dataset and on the widely used IP102 plant pest and disease classification dataset. The results showed that the network, which combined the ETA module and AttentionMix, could reach an accuracy as high as 98.2% on our crop disease dataset. When it came to the IP102 dataset, this network achieved an accuracy of 78.7% and a recall of 70.2%. In comparison with advanced attention models such as ECANet and Triplet Attention, our proposed model exhibited an average performance improvement of 5.3% and 4.4%, respectively. All of this implies that the proposed method is both practical and applicable for classifying diseases in the majority of crop types. Based on classification results from the proposed network, an install-free WeChat mini program that enables real-time automated crop disease recognition by taking photos with a smartphone camera was developed. 
This study can provide an accurate and timely diagnosis of crop pests and diseases, thereby providing a solution reference for smart agriculture. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
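The CutMix baseline that AttentionMix refines pastes a rectangular patch from one image into another and mixes the labels by patch area; the label misassignment the abstract mentions arises when the pasted area is background. A minimal sketch of the area-based version:

```python
import numpy as np

def cutmix_pair(img_a, img_b, label_a, label_b, box):
    """CutMix: paste a patch of img_b into img_a and mix one-hot labels by area.

    box = (y1, y2, x1, x2). lam is the fraction of img_a that survives, so the
    mixed label is lam * label_a + (1 - lam) * label_b. AttentionMix (per the
    abstract) replaces this purely area-based ratio with an attention-derived
    one, since an area-based lam credits img_b even for background patches.
    """
    y1, y2, x1, x2 = box
    mixed = img_a.copy()
    mixed[y1:y2, x1:x2] = img_b[y1:y2, x1:x2]
    h, w = img_a.shape[:2]
    lam = 1.0 - (y2 - y1) * (x2 - x1) / (h * w)
    return mixed, lam * label_a + (1.0 - lam) * label_b

a, b = np.zeros((8, 8)), np.ones((8, 8))
ya, yb = np.array([1.0, 0.0]), np.array([0.0, 1.0])
mixed, y = cutmix_pair(a, b, ya, yb, box=(0, 4, 0, 4))
```

Here a 4 × 4 patch of a 8 × 8 image gives lam = 0.75, so the mixed label becomes [0.75, 0.25].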

17 pages, 19075 KiB  
Article
A Channel Attention-Driven Optimized CNN for Efficient Early Detection of Plant Diseases in Resource Constrained Environment
by Sana Parez, Naqqash Dilshad and Jong Weon Lee
Agriculture 2025, 15(2), 127; https://doi.org/10.3390/agriculture15020127 - 8 Jan 2025
Cited by 2 | Viewed by 1342
Abstract
Agriculture is a cornerstone of economic prosperity, but plant diseases can severely impact crop yield and quality. Identifying these diseases accurately is often difficult due to limited expert availability and ambiguous information. Early detection and automated diagnosis systems are crucial to mitigate these challenges. To address this, we propose a lightweight convolutional neural network (CNN) designed for resource-constrained devices, termed LeafNet. LeafNet draws inspiration from the block-wise VGG19 architecture but incorporates several optimizations, including a reduced number of parameters, a smaller input size, and faster inference time, while maintaining competitive accuracy. The proposed LeafNet leverages small, uniform convolutional filters to capture fine-grained details of plant disease features, with an increasing number of channels to enhance feature extraction. Additionally, it integrates channel attention mechanisms to prioritize disease-related features effectively. We evaluated the proposed method on four datasets: the benchmark PlantVillage (PV) dataset, the data repository of leaf images (DRLIs), the newly curated plant composite (PC) dataset, and the BARI Sunflower (BARI-Sun) dataset, which includes diverse and challenging real-world images. The results show that LeafNet performs comparably to state-of-the-art methods in terms of accuracy, false positive rate (FPR), model size, and runtime, highlighting its potential for real-world applications. Full article

22 pages, 9996 KiB  
Article
Few-Shot Image Classification of Crop Diseases Based on Vision–Language Models
by Yueyue Zhou, Hongping Yan, Kun Ding, Tingting Cai and Yan Zhang
Sensors 2024, 24(18), 6109; https://doi.org/10.3390/s24186109 - 21 Sep 2024
Cited by 6 | Viewed by 2849
Abstract
Accurate crop disease classification is crucial for ensuring food security and enhancing agricultural productivity. However, the existing crop disease classification algorithms primarily focus on a single image modality and typically require a large number of samples. Our research counters these issues by using pre-trained Vision–Language Models (VLMs), which enhance the multimodal synergy for better crop disease classification than the traditional unimodal approaches. Firstly, we apply the multimodal model Qwen-VL to generate meticulous textual descriptions for representative disease images selected through clustering from the training set, which will serve as prompt text for generating classifier weights. Compared to solely using the language model for prompt text generation, this approach better captures and conveys fine-grained and image-specific information, thereby enhancing the prompt quality. Secondly, we integrate cross-attention and SE (Squeeze-and-Excitation) Attention into the training-free mode VLCD(Vision-Language model for Crop Disease classification) and the training-required mode VLCD-T (VLCD-Training), respectively, for prompt text processing, enhancing the classifier weights by emphasizing the key text features. The experimental outcomes conclusively prove our method’s heightened classification effectiveness in few-shot crop disease scenarios, tackling the data limitations and intricate disease recognition issues. It offers a pragmatic tool for agricultural pathology and reinforces the smart farming surveillance infrastructure. Full article
(This article belongs to the Section Sensing and Imaging)
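In CLIP-style vision-language classification, the text embeddings of per-class prompts act as classifier weights and an image is assigned to the class whose text embedding has the highest cosine similarity. A sketch with random stand-in embeddings (not Qwen-VL outputs):

```python
import numpy as np

def classify_with_text_prompts(image_emb, class_text_embs):
    """Zero-shot classification: cosine similarity between one image embedding
    and one text embedding per class; the argmax is the predicted class."""
    def unit(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)
    sims = unit(class_text_embs) @ unit(image_emb)   # (num_classes,)
    return int(np.argmax(sims)), sims

rng = np.random.default_rng(3)
text_embs = rng.normal(size=(3, 32))                   # one prompt embedding per disease class
image_emb = text_embs[1] + 0.05 * rng.normal(size=32)  # an image lying near class 1
pred, sims = classify_with_text_prompts(image_emb, text_embs)
```

Because only the prompt embeddings change per task, this is what makes few-shot adaptation cheap: better prompt text (here, descriptions generated from representative images) directly yields better classifier weights.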

20 pages, 5155 KiB  
Article
YOLOv8-E: An Improved YOLOv8 Algorithm for Eggplant Disease Detection
by Yuxi Huang, Hong Zhao and Jie Wang
Appl. Sci. 2024, 14(18), 8403; https://doi.org/10.3390/app14188403 - 18 Sep 2024
Cited by 3 | Viewed by 2497
Abstract
During the developmental stages, eggplants are susceptible to diseases, which can impact crop yields and farmers’ economic returns. Therefore, timely and effective detection of eggplant diseases is crucial. Deep learning-based object detection algorithms can automatically extract features from images of eggplants affected by diseases. However, eggplant disease images captured in complex farmland environments present challenges such as varying disease sizes, occlusion, overlap, and small target detection, making it difficult for existing deep-learning models to achieve satisfactory detection performance. To address this challenge, this study proposed an optimized eggplant disease detection algorithm, YOLOv8-E, based on You Only Look Once version 8 nano (YOLOv8n). Firstly, we integrate switchable atrous convolution (SAConv) into the C2f module to design the C2f_SAConv module, replacing some of the C2f modules in the backbone network of YOLOv8n, enabling our proposed algorithm to better extract eggplant disease features. Secondly, to facilitate the deployment of the detection model on mobile devices, we reconstruct the Neck network of YOLOv8n using the SlimNeck module, making the model lighter. Additionally, to tackle the issue of missing small targets, we embed the large separable kernel attention (LSKA) module within SlimNeck, enhancing the model’s attention to fine-grained information. Lastly, we combined intersection over union with auxiliary bounding box (Inner-IoU) and minimum point distance intersection over union (MPDIoU), introducing the Inner-MPDIoU loss to speed up convergence of the model and raise detection precision of overlapped and occluded targets. Ablation studies demonstrated that, compared to YOLOv8n, the mean average precision (mAP) and F1 score of YOLOv8-E reached 79.4% and 75.7%, respectively, which obtained a 5.5% increment and a 4.5% increase, while also reducing the model size and computational complexity. 
Furthermore, YOLOv8-E achieved higher detection performance than other mainstream algorithms. YOLOv8-E exhibits significant potential for practical application in eggplant disease detection. Full article

23 pages, 4305 KiB  
Article
LCA-Net: A Lightweight Cross-Stage Aggregated Neural Network for Fine-Grained Recognition of Crop Pests and Diseases
by Jianlei Kong, Yang Xiao, Xuebo Jin, Yuanyuan Cai, Chao Ding and Yuting Bai
Agriculture 2023, 13(11), 2080; https://doi.org/10.3390/agriculture13112080 - 31 Oct 2023
Cited by 7 | Viewed by 2162
Abstract
In the realm of smart agriculture technology’s rapid advancement, the integration of various sensors and Internet of Things (IoT) devices has become prevalent in the agricultural sector. Within this context, the precise identification of pests and diseases using unmanned robotic systems assumes a crucial role in ensuring food security, advancing agricultural production, and maintaining food reserves. Nevertheless, existing recognition models encounter inherent limitations such as suboptimal accuracy and excessive computational efforts when dealing with similar pests and diseases in real agricultural scenarios. Consequently, this research introduces the lightweight cross-layer aggregation neural network (LCA-Net). To address the intricate challenge of fine-grained pest identification in agricultural environments, our approach initially enhances the high-performance large-scale network through lightweight adaptation, concurrently incorporating a channel space attention mechanism. This enhancement culminates in the development of a cross-layer feature aggregation (CFA) module, meticulously engineered for seamless mobile deployment while upholding performance integrity. Furthermore, we devised the Cut-Max module, which optimizes the accuracy of crop pest and disease recognition via maximum response region pruning. Thorough experimentation on comprehensive pests and disease datasets substantiated the exceptional fine-grained performance of LCA-Net, achieving an impressive accuracy rate of 83.8%. Additional ablation experiments validated the proposed approach, showcasing a harmonious balance between performance and model parameters, rendering it suitable for practical applications in smart agricultural supervision. Full article
(This article belongs to the Special Issue Application of Machine Learning and Data Analysis in Agriculture)
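The channel-spatial attention described in the abstract can be sketched in a minimal form. The gating choices below (a sigmoid over the global-average-pooled channel means for channel attention, and a sigmoid over the channel-mean map for spatial attention) are illustrative assumptions, not the paper's exact CFA design:

```python
import numpy as np

def channel_spatial_attention(x):
    """Apply a channel gate, then a spatial gate, to a (C, H, W) feature map.

    Channel attention: one scalar weight per channel from global average
    pooling. Spatial attention: one weight per pixel from the channel mean.
    Both gates are squashed into (0, 1) with a sigmoid.
    """
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    ch = sigmoid(x.mean(axis=(1, 2)))      # (C,) channel gates
    x = x * ch[:, None, None]
    sp = sigmoid(x.mean(axis=0))           # (H, W) spatial gates
    return x * sp[None, :, :]
```

Because both gates lie in (0, 1), the output is an elementwise attenuation of the input; the network learns (in a real implementation, via small MLPs or convolutions producing the gates) which channels and regions to suppress.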
17 pages, 11370 KiB  
Article
VLDNet: An Ultra-Lightweight Crop Disease Identification Network
by Xiaopeng Li, Yichi Zhang, Yuhan Peng and Shuqin Li
Agriculture 2023, 13(8), 1482; https://doi.org/10.3390/agriculture13081482 - 26 Jul 2023
Cited by 3 | Viewed by 2643
Abstract
Existing deep learning methods usually adopt deeper and wider network structures to achieve better performance. However, we found that this rule does not transfer well to crop disease identification, which inspired us to rethink the design paradigm of disease identification models. Crop disease symptoms are fine-grained features without obvious patterns; deeper and wider networks lose feature information and thereby harm identification efficiency. Motivated by this, this paper designs a very lightweight disease identification network called VLDNet. Its basic module, VLDBlock, extracts intrinsic features through 1 × 1 convolution and uses cheap linear operations to supplement redundant features, improving feature extraction efficiency. At inference time, reparameterization is used to further reduce the model size and improve inference speed. VLDNet achieves state-of-the-art (SOTA) latency-accuracy trade-offs on self-built and public datasets: for example, it matches the performance of Swin-Tiny with a parameter size of 0.097 MB and 0.04 G floating-point operations (FLOPs), reducing parameter size and FLOPs by 297× and 111×, respectively. In practical testing, VLDNet recognizes 221 images per second, far outpacing models of similar accuracy. This work is expected to further promote the application of deep learning-based crop disease identification methods in practical production. Full article
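The "intrinsic features plus cheap linear operations" idea can be sketched as a Ghost-style block. The structure below (a 1 × 1 convolution producing half the output channels, and a per-channel scaling standing in for the cheap linear map) is an assumed simplification of VLDBlock, not its published definition:

```python
import numpy as np

def vld_like_block(x, w_primary, w_cheap):
    """Ghost-style block sketch: intrinsic features from a 1x1 conv,
    plus "redundant" features from a cheap per-channel linear map.

    x:         (C_in, H, W) input feature map
    w_primary: (C_mid, C_in) 1x1-convolution weights
    w_cheap:   (C_mid,) per-channel scales (the cheap linear operation)
    """
    # A 1x1 convolution is channel mixing applied at every pixel.
    intrinsic = np.einsum('oc,chw->ohw', w_primary, x)
    # Cheap linear operation synthesizes extra channels from the intrinsic ones.
    ghost = intrinsic * w_cheap[:, None, None]
    # Output = intrinsic and cheap halves concatenated along channels.
    return np.concatenate([intrinsic, ghost], axis=0)
```

The cost saving comes from the second half of the output channels requiring only O(C_mid) multiplies per pixel instead of a full C_mid × C_in channel mixing.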
17 pages, 3801 KiB  
Article
MTDL-EPDCLD: A Multi-Task Deep-Learning-Based System for Enhanced Precision Detection and Diagnosis of Corn Leaf Diseases
by Dikang Dai, Peiwen Xia, Zeyang Zhu and Huilian Che
Plants 2023, 12(13), 2433; https://doi.org/10.3390/plants12132433 - 23 Jun 2023
Cited by 7 | Viewed by 2349
Abstract
Corn leaf diseases lead to significant losses in agricultural production, posing challenges to global food security. Accurate and timely detection and diagnosis are crucial for implementing effective control measures. In this research, a multi-task deep-learning-based system for enhanced precision detection and diagnosis of corn leaf diseases (MTDL-EPDCLD) is proposed, along with a mobile application built on the Qt cross-platform software development framework. The system comprises Task 1, rapid and accurate health status identification (RAHSI), and Task 2, fine-grained disease classification with attention (FDCA). For Task 1, a shallow CNN-4 model with a spatial attention mechanism achieves 98.73% accuracy in distinguishing healthy from diseased corn leaves. For Task 2, a customized MobileNetV3Large-Attention model achieves a validation accuracy (val_accuracy) of 94.44% and improvements of 4–8% in precision, recall, and F1 score over other mainstream deep learning models. Moreover, it attains an area under the curve (AUC) of 0.9993, an improvement of 0.002–0.007 over other mainstream models. The MTDL-EPDCLD system provides an accurate and efficient tool for corn leaf disease detection and diagnosis, supporting informed decisions on disease management, increased crop yields, and improved food security. Its continued development and implementation may substantially impact agricultural practices and outcomes. Full article
(This article belongs to the Special Issue Deep Learning in Plant Sciences)
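The two-task design above suggests a simple routing pipeline: a cheap health screen first, then fine-grained classification only for leaves flagged as diseased. The function below is a hypothetical sketch of that control flow; `health_model` and `disease_model` are stand-in callables, not the paper's actual models:

```python
def diagnose(leaf_image, health_model, disease_model):
    """Two-stage pipeline in the spirit of MTDL-EPDCLD (assumed routing).

    Task 1 (health_model) screens healthy vs. diseased; only leaves
    flagged as diseased are passed to the fine-grained Task 2 classifier
    (disease_model), saving compute on the common healthy case.
    """
    if health_model(leaf_image) == "healthy":
        return "healthy"
    return disease_model(leaf_image)
```

This staging keeps the expensive fine-grained model off the fast path, which matters on the mobile deployment the abstract targets.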
