Search Results (22)

Search Parameters:
Keywords = leaf lesions segmentation

21 pages, 4394 KiB  
Article
Deep Learning Models for Detection and Severity Assessment of Cercospora Leaf Spot (Cercospora capsici) in Chili Peppers Under Natural Conditions
by Douglas Vieira Leite, Alisson Vasconcelos de Brito, Gregorio Guirada Faccioli and Gustavo Haddad Souza Vieira
Plants 2025, 14(13), 2011; https://doi.org/10.3390/plants14132011 - 1 Jul 2025
Viewed by 317
Abstract
The accurate assessment of plant disease severity is crucial for effective crop management. Deep learning, especially via CNNs, is widely used for image segmentation in plant lesion detection, but accurately assessing disease severity across varied environmental conditions remains challenging. This study evaluates eight deep learning models for detecting and quantifying Cercospora leaf spot (Cercospora capsici) severity in chili peppers under natural field conditions. A custom dataset of 1645 chili pepper leaf images, collected from a Brazilian plantation and annotated with 6282 lesions, was developed to capture real-world variability in lighting and background. First, an algorithm was developed to process raw images, applying ROI selection and background removal. Then, four YOLOv8 and four Mask R-CNN models were fine-tuned for pixel-level segmentation and severity classification, comparing one-stage and two-stage models to offer practical insights for agricultural applications. In pixel-level segmentation on the test dataset, Mask R-CNN achieved superior precision with a Mean Intersection over Union (MIoU) of 0.860 and F1-score of 0.924 for the mask_rcnn_R101_FPN_3x model, compared to 0.808 and 0.893 for the YOLOv8s-Seg model. However, in severity classification, Mask R-CNN underestimated higher severity levels, with an accuracy of 72.3% for level III, while YOLOv8 attained 91.4%. Additionally, YOLOv8 demonstrated greater efficiency, with an inference time of 27 ms versus 89 ms for Mask R-CNN. While Mask R-CNN excels in segmentation accuracy, YOLOv8 offers a compelling balance of speed and reliable severity classification, making it suitable for real-time plant disease assessment in agricultural applications. Full article
(This article belongs to the Section Plant Protection and Biotic Interactions)
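
As a rough illustration of the pixel-level metrics and area-ratio severity grading discussed in this abstract, here is a minimal NumPy sketch; the IoU/F1 definitions are standard, but the severity cut-offs are hypothetical and are not the paper's level definitions.

```python
# Minimal sketch: mask overlap metrics and area-ratio severity grading.
# The severity bins below are hypothetical, not the paper's level definitions.
import numpy as np

def mask_iou_f1(pred: np.ndarray, truth: np.ndarray):
    """IoU and F1 (Dice) between two boolean lesion masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    total = pred.sum() + truth.sum()
    iou = inter / union if union else 1.0
    f1 = 2 * inter / total if total else 1.0
    return float(iou), float(f1)

def severity_level(lesion_mask: np.ndarray, leaf_mask: np.ndarray) -> int:
    """Grade severity from the lesion-to-leaf pixel ratio (illustrative bins)."""
    ratio = lesion_mask.sum() / max(leaf_mask.sum(), 1)
    return int(np.digitize(ratio, [0.05, 0.15, 0.30])) + 1   # levels 1-4

leaf = np.ones((100, 100), dtype=bool)                  # toy leaf mask
lesion = np.zeros_like(leaf); lesion[10:30, 10:30] = True
print(mask_iou_f1(lesion, lesion))                      # (1.0, 1.0)
print(severity_level(lesion, leaf))                     # ratio 0.04 -> level 1
```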

16 pages, 2956 KiB  
Article
Development of Molecular Markers for Bacterial Leaf Streak Resistance Gene bls2 and Breeding of New Resistance Lines in Rice
by Jieyi Huang, Xuan Wei, Min Tang, Ziqiu Deng, Yi Lan and Fang Liu
Int. J. Mol. Sci. 2025, 26(11), 5264; https://doi.org/10.3390/ijms26115264 - 30 May 2025
Viewed by 320
Abstract
Bacterial leaf streak (BLS) is one of the internationally significant quarantine diseases in rice. Effectively utilizing BLS resistance genes from wild rice (Oryza rufipogon Griff.) to breed new varieties offers a fundamental solution for BLS control. This study focused on the fine mapping of the BLS resistance gene bls2 and the development of closely linked molecular markers for breeding BLS-resistant lines. Using a Guangxi common wild rice accession DY19 (carrying bls2) as the donor parent and the highly BLS-susceptible indica rice variety 9311 as the recipient parent, BLS-resistant rice lines were developed through multiple generations of backcrossing and selfing, incorporating molecular marker-assisted selection (MAS), single nucleotide polymorphism (SNP) chip genotyping, pathogen inoculation assays, and agronomic trait evaluation. The results showed that bls2 was delimited to a 113 kb interval between the molecular markers ID2 and ID5 on chromosome 2, with both markers exhibiting over 98% accuracy in detecting bls2. Four stable new lines carrying the bls2 segment were obtained in the BC5F4 generation. These four lines showed highly significant differences in BLS resistance compared with 9311, demonstrating moderate resistance or higher with average lesion lengths ranging from 0.69 to 1.26 cm. Importantly, no significant differences were observed between these resistant lines and 9311 in key agronomic traits, including plant height, number of effective panicles, panicle length, seed setting rate, grain length, grain width, length-to-width ratio, and 1000-grain weight. Collectively, two molecular markers closely linked to bls2 were developed, which can be effectively applied in MAS, and four new lines with significantly enhanced resistance to BLS and excellent agronomic traits were obtained. These findings provide technical support and core germplasm resources for BLS resistance breeding. Full article
(This article belongs to the Special Issue Crop Biotic and Abiotic Stress Tolerance: 4th Edition)

22 pages, 6392 KiB  
Article
Dual-Phase Severity Grading of Strawberry Angular Leaf Spot Based on Improved YOLOv11 and OpenCV
by Yi-Xiao Xu, Xin-Hao Yu, Qing Yi, Qi-Yuan Zhang and Wen-Hao Su
Plants 2025, 14(11), 1656; https://doi.org/10.3390/plants14111656 - 29 May 2025
Viewed by 556
Abstract
Phyllosticta fragaricola-induced angular leaf spot causes substantial economic losses in global strawberry production, necessitating advanced severity assessment methods. This study proposed a dual-phase grading framework integrating deep learning and computer vision. The enhanced You Only Look Once version 11 (YOLOv11) architecture incorporated a Content-Aware ReAssembly of FEatures (CARAFE) module for improved feature upsampling and a squeeze-and-excitation (SE) attention mechanism for channel-wise feature recalibration, resulting in the YOLOv11-CARAFE-SE model for the severity assessment of strawberry angular leaf spot. Furthermore, an OpenCV-based threshold segmentation algorithm based on H-channel thresholds in the HSV color space achieved accurate lesion segmentation. A disease severity grading standard for strawberry angular leaf spot was established based on the ratio of lesion area to leaf area. In addition, specialized software for the assessment of disease severity was developed based on the improved YOLOv11-CARAFE-SE model and OpenCV-based algorithms. Experimental results show that compared with the baseline YOLOv11, the performance is significantly improved: the box mAP@0.5 is increased by 1.4% to 93.2%, the mask mAP@0.5 is increased by 0.9% to 93.0%, the inference time is shortened by 0.4 ms to 0.9 ms, and the computational load is reduced by 1.94% to 10.1 GFLOPS. In addition, this two-stage grading framework achieves an average accuracy of 94.2% in detecting selected strawberry angular leaf spot disease samples, providing real-time field diagnostics and high-throughput phenotypic analysis for resistance breeding programs. This work demonstrates the feasibility of rapidly estimating the severity of strawberry angular leaf spot, which will establish a robust technical framework for strawberry disease management under field conditions. Full article
(This article belongs to the Section Crop Physiology and Crop Production)
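
The H-channel thresholding and area-ratio grading described above can be sketched with OpenCV and NumPy as follows; the hue range and morphological clean-up are illustrative assumptions, not the paper's calibrated settings.

```python
# Illustrative sketch of H-channel threshold segmentation and the
# lesion-area / leaf-area grading criterion; the hue range is a guess.
import cv2
import numpy as np

def segment_lesions(bgr: np.ndarray, h_lo: int = 10, h_hi: int = 35) -> np.ndarray:
    """Return a binary lesion mask from a hue-channel threshold."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h = hsv[:, :, 0]                                          # OpenCV hue range: 0-179
    mask = ((h >= h_lo) & (h <= h_hi)).astype(np.uint8) * 255
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)     # drop speckle noise

def severity_ratio(lesion_mask: np.ndarray, leaf_mask: np.ndarray) -> float:
    """Lesion area over leaf area, the grading criterion named above."""
    return np.count_nonzero(lesion_mask) / max(np.count_nonzero(leaf_mask), 1)

img = np.full((64, 64, 3), 200, dtype=np.uint8)               # synthetic stand-in image
print(segment_lesions(img).shape)                             # (64, 64)
```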

17 pages, 4587 KiB  
Article
Improved YOLOv8-Based Segmentation Method for Strawberry Leaf and Powdery Mildew Lesions in Natural Backgrounds
by Mingzhou Chen, Wei Zou, Xiangjie Niu, Pengfei Fan, Haowei Liu, Cuiling Li and Changyuan Zhai
Agronomy 2025, 15(3), 525; https://doi.org/10.3390/agronomy15030525 - 21 Feb 2025
Cited by 1 | Viewed by 1035
Abstract
This study addresses the challenge of segmenting strawberry leaves and lesions in natural backgrounds, which is critical for accurate disease severity assessment and automated dosing. Focusing on strawberry powdery mildew, we propose an enhanced YOLOv8-based segmentation method for leaf and lesion detection. Four instance segmentation models (SOLOv2, YOLACT, YOLOv7-seg, and YOLOv8-seg) were compared, using YOLOv8-seg as the baseline. To improve performance, SCDown and PSA modules were integrated into the backbone to reduce redundancy, decrease computational load, and enhance detection of small objects and complex backgrounds. In the neck, the C2f module was replaced with the C2fCIB module, and the SimAM attention mechanism was incorporated to improve target differentiation and reduce noise interference. The loss function combined CIOU with MPDIOU to enhance adaptability in challenging scenarios. Ablation experiments demonstrated a segmentation accuracy of 92%, recall of 85.2%, and mean average precision (mAP) of 90.4%, surpassing the YOLOv8-seg baseline by 4%, 2.9%, and 4%, respectively. Compared to SOLOv2, YOLACT, and YOLOv7-seg, the improved model’s mAP increased by 14.8%, 5.8%, and 3.9%, respectively. The improved model reduces missed detections and enhances target localization, providing theoretical support for subsequent applications in intelligent, dosage-based disease management. Full article
(This article belongs to the Section Precision and Digital Agriculture)
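
For context on the MPDIOU term mentioned above, here is a hedged plain-Python sketch of the commonly cited MPDIoU formulation (IoU penalized by normalized distances between matching corners); the paper's exact loss may differ in detail.

```python
# Hedged sketch of an MPDIoU-style score: IoU minus normalised squared
# distances between the boxes' top-left and bottom-right corners.
def mpd_iou(box_p, box_g, img_w, img_h):
    """Boxes given as (x1, y1, x2, y2) in pixels."""
    ix1, iy1 = max(box_p[0], box_g[0]), max(box_p[1], box_g[1])
    ix2, iy2 = min(box_p[2], box_g[2]), min(box_p[3], box_g[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (box_p[2] - box_p[0]) * (box_p[3] - box_p[1])
    area_g = (box_g[2] - box_g[0]) * (box_g[3] - box_g[1])
    iou = inter / (area_p + area_g - inter + 1e-9)
    d1 = (box_p[0] - box_g[0]) ** 2 + (box_p[1] - box_g[1]) ** 2   # top-left corners
    d2 = (box_p[2] - box_g[2]) ** 2 + (box_p[3] - box_g[3]) ** 2   # bottom-right corners
    norm = img_w ** 2 + img_h ** 2
    return iou - d1 / norm - d2 / norm

print(mpd_iou((10, 10, 50, 50), (12, 12, 55, 52), img_w=640, img_h=480))
```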

18 pages, 7639 KiB  
Article
Improved Tunicate Swarm Optimization Based Hybrid Convolutional Neural Network for Classification of Leaf Diseases and Nutrient Deficiencies in Rice (Oryza)
by R. Sherline Jesie and M. S. Godwin Premi
Agronomy 2024, 14(8), 1851; https://doi.org/10.3390/agronomy14081851 - 21 Aug 2024
Cited by 2 | Viewed by 1370
Abstract
In Asia, rice is the most consumed grain by humans, serving as a staple food in India. The yield of rice paddies is easily affected by nutrient deficiencies and leaf diseases. To overcome this problem and improve the yield productivity of rice, nutrient deficiency and leaf disease identification are essential. The main nutrient elements in paddies are potassium, phosphorus, and nitrogen (PPN), the deficiency of any of which strongly affects the rice plants. When multiple nutrient elements are deficient, the leaf color of the rice plants is altered. To overcome this problem, optimal nutrient delivery is required. Hence, the present study proposes the use of Fuzzy C Means clustering (FCM) with Improved Tunicate Swarm Optimization (ITSO) to segment the lesions in rice plant leaves and identify the deficient nutrients. The proposed ITSO integrates the Tunicate Swarm Optimization (TSO) and Bacterial Foraging Optimization (BFO) approaches. The Hybrid Convolutional Neural Network (HCNN), a deep learning model, is used with ITSO to classify the rice leaf diseases, as well as nutrient deficiencies in the leaves. Two datasets, namely, a field work dataset and a Kaggle dataset, were used for the present study. The proposed HCNN-ITSO classified Bacterial Leaf Blight (BLB), Narrow Brown Leaf Spot (NBLS), Sheath Rot (SR), Brown Spot (BS), and Leaf Smut (LS) in the field work dataset. Furthermore, the potassium-, phosphorus-, and nitrogen-deficiency-presenting leaves were classified using the proposed HCNN-ITSO in the Kaggle dataset. The MATLAB platform was used for experimental analysis in the field work and Kaggle datasets in terms of various performance measures. When compared to previous methods, the proposed method achieved the best accuracies of 98.8% and 99.01% in the field work and Kaggle datasets, respectively. Full article
(This article belongs to the Section Pest and Disease Management)
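
The Fuzzy C-Means clustering step named above can be sketched in a few lines of NumPy; this illustrative version uses toy pixel features and omits the ITSO/BFO-based tuning entirely.

```python
# Compact Fuzzy C-Means sketch for lesion-pixel clustering (toy features).
import numpy as np

def fuzzy_c_means(x, c=3, m=2.0, iters=100, seed=0):
    """x: (n, d) pixel features, e.g. RGB values. Returns (centers, memberships)."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.shape[0]))
    u /= u.sum(axis=0, keepdims=True)                  # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(x[None, :, :] - centers[:, None, :], axis=2) + 1e-9
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=0, keepdims=True)
    return centers, u

pixels = np.random.rand(500, 3)                        # stand-in for leaf pixels
centers, u = fuzzy_c_means(pixels)
labels = u.argmax(axis=0)                              # hard cluster label per pixel
print(centers.shape, labels.shape)                     # (3, 3) (500,)
```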

24 pages, 7302 KiB  
Article
CTDUNet: A Multimodal CNN–Transformer Dual U-Shaped Network with Coordinate Space Attention for Camellia oleifera Pests and Diseases Segmentation in Complex Environments
by Ruitian Guo, Ruopeng Zhang, Hao Zhou, Tunjun Xie, Yuting Peng, Xili Chen, Guo Yu, Fangying Wan, Lin Li, Yongzhong Zhang and Ruifeng Liu
Plants 2024, 13(16), 2274; https://doi.org/10.3390/plants13162274 - 15 Aug 2024
Cited by 3 | Viewed by 1460
Abstract
Camellia oleifera is a crop of high economic value, yet it is particularly susceptible to various diseases and pests that significantly reduce its yield and quality. Consequently, the precise segmentation and classification of diseased Camellia leaves are vital for managing pests and diseases effectively. Deep learning exhibits significant advantages in the segmentation of plant diseases and pests, particularly in complex image processing and automated feature extraction. However, when employing single-modal models to segment Camellia oleifera diseases, three critical challenges arise: (A) lesions may closely resemble the colors of the complex background; (B) small sections of diseased leaves overlap; (C) the presence of multiple diseases on a single leaf. These factors considerably hinder segmentation accuracy. A novel multimodal model, CNN–Transformer Dual U-shaped Network (CTDUNet), based on a CNN–Transformer architecture, has been proposed to integrate image and text information. This model first utilizes text data to address the shortcomings of single-modal image features, enhancing its ability to distinguish lesions from environmental characteristics, even under conditions where they closely resemble one another. Additionally, we introduce Coordinate Space Attention (CSA), which focuses on the positional relationships between targets, thereby improving the segmentation of overlapping leaf edges. Furthermore, cross-attention (CA) is employed to align image and text features effectively, preserving local information and enhancing the perception and differentiation of various diseases. The CTDUNet model was evaluated on a self-made multimodal dataset and compared against several models, including DeeplabV3+, UNet, PSPNet, Segformer, HrNet, and Language meets Vision Transformer (LViT). The experimental results demonstrate that CTDUNet achieved a mean Intersection over Union (mIoU) of 86.14%, surpassing both multimodal models and the best single-modal model by 3.91% and 5.84%, respectively. Additionally, CTDUNet exhibits high balance in the multi-class segmentation of Camellia oleifera diseases and pests. These results indicate the successful application of fused image and text multimodal information in the segmentation of Camellia disease, achieving outstanding performance. Full article
(This article belongs to the Special Issue Sustainable Strategies for Tea Crops Protection)
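
For reference, the mIoU figure reported above is the per-class intersection-over-union averaged over classes; a short NumPy sketch:

```python
# Multi-class mean IoU from a confusion matrix (illustrative implementation).
import numpy as np

def mean_iou(pred: np.ndarray, truth: np.ndarray, num_classes: int) -> float:
    idx = truth.ravel() * num_classes + pred.ravel()
    cm = np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)
    inter = np.diag(cm)
    union = cm.sum(axis=0) + cm.sum(axis=1) - inter
    iou = inter / np.maximum(union, 1)
    return float(iou[union > 0].mean())                # ignore absent classes

truth = np.random.randint(0, 4, (64, 64))
print(mean_iou(truth, truth, num_classes=4))           # perfect prediction -> 1.0
```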

18 pages, 7039 KiB  
Article
Two-Stage Detection Algorithm for Plum Leaf Disease and Severity Assessment Based on Deep Learning
by Caihua Yao, Ziqi Yang, Peifeng Li, Yuxia Liang, Yamin Fan, Jinwen Luo, Chengmei Jiang and Jiong Mu
Agronomy 2024, 14(7), 1589; https://doi.org/10.3390/agronomy14071589 - 21 Jul 2024
Cited by 10 | Viewed by 1824
Abstract
Crop diseases significantly impact crop yields, and promoting specialized control of crop diseases is crucial for ensuring agricultural production stability. Disease identification primarily relies on human visual inspection, which is inefficient, inaccurate, and subjective. This study focused on the plum red spot (Polystigma rubrum), proposing a two-stage detection algorithm based on deep learning and assessing the severity of the disease through lesion coverage rate. The specific contributions are as follows: We utilized the object detection model YOLOv8 to strip leaves to eliminate the influence of complex backgrounds. We used an improved U-Net network to segment leaves and lesions. We combined Dice Loss with Focal Loss to address the poor training performance due to the pixel ratio imbalance between leaves and disease spots. For inconsistencies in the size and shape of leaves and lesions, we utilized ODConv and MSCA so that the model could focus on features at different scales. After verification, the accuracy rate of leaf recognition is 95.3%, and the mIoU, mPA, mPrecision, and mRecall of the leaf disease segmentation model are 90.93%, 95.21%, 95.17%, and 95.21%, respectively. This research provides an effective solution for the detection and severity assessment of plum leaf red spot disease under complex backgrounds. Full article
(This article belongs to the Special Issue The Applications of Deep Learning in Smart Agriculture)
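
A hedged PyTorch sketch of a combined Dice + focal loss of the kind described above, for a binary lesion mask; the weighting and hyper-parameters here are illustrative, not the paper's values.

```python
# Illustrative Dice + focal loss for binary segmentation with class imbalance.
import torch
import torch.nn.functional as F

def dice_focal_loss(logits, target, alpha=0.25, gamma=2.0, dice_weight=0.5):
    """logits, target: (N, 1, H, W); target holds 0/1 lesion labels."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    dice = 1 - (2 * inter + 1.0) / (prob.sum() + target.sum() + 1.0)   # overlap term
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    pt = torch.exp(-bce)                                               # prob of the true class
    focal = (alpha * (1 - pt) ** gamma * bce).mean()                   # down-weight easy pixels
    return dice_weight * dice + (1 - dice_weight) * focal

logits = torch.randn(2, 1, 64, 64)
target = (torch.rand(2, 1, 64, 64) > 0.9).float()
print(dice_focal_loss(logits, target))
```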

21 pages, 4622 KiB  
Article
A Two-Stage Approach to the Study of Potato Disease Severity Classification
by Yanlei Xu, Zhiyuan Gao, Jingli Wang, Yang Zhou, Jian Li and Xianzhang Meng
Agriculture 2024, 14(3), 386; https://doi.org/10.3390/agriculture14030386 - 28 Feb 2024
Cited by 7 | Viewed by 2533
Abstract
Early blight and late blight are two of the most prevalent and severe diseases affecting potato crops. Efficient and accurate grading of their severity is crucial for effective disease management. However, existing grading methods are limited to assessing the severity of each disease independently, often resulting in low recognition accuracy and slow grading processes. To address these challenges, this study proposes a novel two-stage approach for the rapid severity grading of both early blight and late blight in potato plants. In this research, two lightweight models were developed: Coformer and SegCoformer. In the initial stage, Coformer efficiently categorizes potato leaves into three classes: those afflicted by early blight, those afflicted by late blight, and healthy leaves. In the subsequent stage, SegCoformer accurately segments leaves, lesions, and backgrounds within the images obtained from the first stage. Furthermore, it assigns severity labels to the identified leaf lesions. To validate the accuracy and processing speed of the proposed methods, we conduct experimental comparisons. The experimental results indicate that Coformer achieves a classification accuracy as high as 97.86%, while SegCoformer achieves an mIoU of 88.50% for semantic segmentation. The combined accuracy of this method reaches 84%, outperforming the Sit + Unet_V accuracy by 1%. Notably, this approach achieves heightened accuracy while maintaining a faster processing speed, completing image processing in just 258.26 ms. This research methodology effectively enhances agricultural production efficiency. Full article
(This article belongs to the Special Issue Smart Mechanization and Automation in Agriculture)

20 pages, 6395 KiB  
Article
Plant Diseased Lesion Image Segmentation and Recognition Based on Improved Multi-Scale Attention Net
by Tao Yang, Yannian Wang and Jihong Lian
Appl. Sci. 2024, 14(5), 1716; https://doi.org/10.3390/app14051716 - 20 Feb 2024
Cited by 5 | Viewed by 2006
Abstract
Fallen leaf disease can lead to a decrease in leaf area, a decrease in photosynthetic products, insufficient accumulation of fruit sugar, poor coloring and flavor, and a large number of fruits developing sunburn. To address the aforementioned issue, this article introduces a deep learning algorithm designed for the segmentation and recognition of agricultural disease images, particularly those involving leaf lesions. The essence of this algorithm lies in enhancing the Multi-scale Attention Net (MA-Net) encoder and attention mechanism to improve the model’s performance when processing agricultural disease images. Firstly, an analysis was conducted on MA-Net, and its limitations were identified. Compared to res-block, Mix Vision Transformer (MiT) consumes relatively less time during the training process, can better capture global and contextual information in images, and has better robustness and scalability. Then, the feature extraction parts of different networks were used as encoders to join the MA-Net network. Compared to a Position-wise Attention Block (PAB), which has higher computational complexity and requires a larger amount of computing resources, Effective Channel Attention net (ECANet) reduces the number of model parameters and computation by learning the correlation between channels, as well as having a better denoising ability. The experimental results show that the proposed solution has high accuracy and stability in agricultural disease image segmentation and recognition. The mean Intersection over Union (mIoU) is 98.1%, which is 0.2% higher than traditional MA-Net; Dice Loss is 0.9%, which is 0.1% lower than traditional MA-Net. Full article
(This article belongs to the Section Agricultural Science and Technology)
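
The ECANet mechanism referenced above, channel attention computed with a 1-D convolution over the globally pooled channel descriptor, can be sketched in PyTorch as follows; the kernel size is a common default, not necessarily the paper's setting.

```python
# Illustrative ECA (Efficient Channel Attention) block.
import torch
import torch.nn as nn

class ECA(nn.Module):
    def __init__(self, k_size: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                               # x: (N, C, H, W)
        y = self.pool(x)                                # (N, C, 1, 1) global descriptor
        y = y.squeeze(-1).transpose(1, 2)               # (N, 1, C)
        y = self.conv(y)                                # local cross-channel interaction
        y = self.sigmoid(y).transpose(1, 2).unsqueeze(-1)
        return x * y                                    # channel-wise re-weighting

feat = torch.randn(2, 64, 32, 32)
print(ECA()(feat).shape)                                # torch.Size([2, 64, 32, 32])
```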

20 pages, 4764 KiB  
Article
A Cucumber Leaf Disease Severity Grading Method in Natural Environment Based on the Fusion of TRNet and U-Net
by Hui Yao, Chunshan Wang, Lijie Zhang, Jiuxi Li, Bo Liu and Fangfang Liang
Agronomy 2024, 14(1), 72; https://doi.org/10.3390/agronomy14010072 - 27 Dec 2023
Cited by 8 | Viewed by 2332
Abstract
Disease severity grading is the primary decision-making basis for the amount of pesticide usage in vegetable disease prevention and control. Based on deep learning, this paper proposed an integrated framework, which automatically segments the target leaf and disease spots in cucumber images using different semantic segmentation networks and then calculates the area of disease spots and the target leaf for disease severity grading. Two independent datasets of leaves and lesions were constructed, which served as the training set for the first-stage diseased leaf segmentation and the second-stage lesion segmentation models. The leaf dataset contains 1140 images, and the lesion data set contains 405 images. The proposed TRNet was composed of a convolutional network and a Transformer network and achieved an accuracy of 93.94% by fusing local features and global features for leaf segmentation. In the second stage, U-Net (Resnet50 as the feature network) was used for lesion segmentation, and a Dice coefficient of 68.14% was obtained. After integrating TRNet and U-Net, a Dice coefficient of 68.83% was obtained. Overall, the two-stage segmentation network achieved an average accuracy of 94.49% and 94.43% in the severity grading of cucumber downy mildew and cucumber anthracnose, respectively. Compared with DUNet and BLSNet, the average accuracy of TUNet in cucumber downy mildew and cucumber anthracnose severity classification increased by 4.71% and 8.08%, respectively. The proposed model showed a strong capability in segmenting cucumber leaves and disease spots at the pixel level, providing a feasible method for evaluating the severity of cucumber downy mildew and anthracnose. Full article
(This article belongs to the Section Pest and Disease Management)

22 pages, 11798 KiB  
Article
SE-VisionTransformer: Hybrid Network for Diagnosing Sugarcane Leaf Diseases Based on Attention Mechanism
by Cuimin Sun, Xingzhi Zhou, Menghua Zhang and An Qin
Sensors 2023, 23(20), 8529; https://doi.org/10.3390/s23208529 - 17 Oct 2023
Cited by 21 | Viewed by 2862
Abstract
Sugarcane is an important raw material for sugar and chemical production. However, in recent years, various sugarcane diseases have emerged, severely impacting the national economy. To address the issue of identifying diseases in sugarcane leaf sections, this paper proposes the SE-VIT hybrid network. Unlike traditional methods that directly use models for classification, this paper compares threshold, K-means, and support vector machine (SVM) algorithms for extracting leaf lesions from images. Due to SVM’s ability to accurately segment these lesions, it is ultimately selected for the task. The paper introduces the SE attention module into ResNet-18 (CNN), enhancing the learning of inter-channel weights. After the pooling layer, multi-head self-attention (MHSA) is incorporated. Finally, with the inclusion of 2D relative positional encoding, the accuracy is improved by 5.1%, precision by 3.23%, and recall by 5.17%. The SE-VIT hybrid network model achieves an accuracy of 97.26% on the PlantVillage dataset. Additionally, when compared to four existing classical neural network models, SE-VIT demonstrates significantly higher accuracy and precision, reaching 89.57% accuracy. Therefore, the method proposed in this paper can provide technical support for intelligent management of sugarcane plantations and offer insights for addressing plant diseases with limited datasets. Full article
(This article belongs to the Special Issue AI, IoT and Smart Sensors for Precision Agriculture)
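
A minimal PyTorch sketch of a squeeze-and-excitation (SE) block like the one added to ResNet-18 above; the reduction ratio is a common default and an assumption here, not the paper's configuration.

```python
# Illustrative squeeze-and-excitation block for channel re-weighting.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                    # squeeze: global channel descriptor
            nn.Flatten(),
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                               # excitation: per-channel weights
        )

    def forward(self, x):                               # x: (N, C, H, W)
        w = self.fc(x).unsqueeze(-1).unsqueeze(-1)
        return x * w

feat = torch.randn(2, 128, 28, 28)
print(SEBlock(128)(feat).shape)                         # torch.Size([2, 128, 28, 28])
```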

20 pages, 5627 KiB  
Article
Sweetgum Leaf Spot Image Segmentation and Grading Detection Based on an Improved DeeplabV3+ Network
by Peng Wu, Maodong Cai, Xiaomei Yi, Guoying Wang, Lufeng Mo, Musenge Chola and Chilekwa Kapapa
Forests 2023, 14(8), 1547; https://doi.org/10.3390/f14081547 - 28 Jul 2023
Cited by 5 | Viewed by 1932
Abstract
Leaf spot disease and brown spot disease are common diseases affecting maple leaves. Accurate and efficient detection of these diseases is crucial for maintaining the photosynthetic efficiency and growth quality of maple leaves. However, existing segmentation methods for plant diseases often fail to accurately and rapidly detect disease areas on plant leaves. This paper presents a novel solution to accurately and efficiently detect common diseases in maple leaves. We propose a deep learning approach based on an enhanced version of DeepLabV3+ specifically designed for detecting common diseases in maple leaves. To construct the maple leaf spot dataset, we employed image annotation and data enhancement techniques. Our method incorporates the CBAM-FF module to fuse gradual features and deep features, enhancing the detection performance. Furthermore, we leverage the SANet attention mechanism to improve the feature extraction capabilities of the MobileNetV2 backbone network for spot features. The utilization of the focal loss function further enhances the detection accuracy of the affected areas. Experimental results demonstrate the effectiveness of our improved algorithm, achieving a mean intersection over union (MIoU) of 90.23% and a mean pixel accuracy (MPA) of 94.75%. Notably, our method outperforms traditional semantic segmentation methods commonly used for plant diseases, such as DeeplabV3+, Unet, Segnet, and others. The proposed approach significantly enhances the segmentation performance for detecting diseased spots on Liquidambar formosana leaves. Additionally, based on pixel statistics, the segmented lesion image is graded for accurate detection. Full article

15 pages, 3623 KiB  
Article
Diagnosis and Mobile Application of Apple Leaf Disease Degree Based on a Small-Sample Dataset
by Lili Li, Bin Wang, Yanwen Li and Hua Yang
Plants 2023, 12(4), 786; https://doi.org/10.3390/plants12040786 - 9 Feb 2023
Cited by 17 | Viewed by 3121
Abstract
The accurate segmentation of apple leaf disease spots is the key to identifying apple leaf disease classes and disease severity. Therefore, a DeepLabV3+ semantic segmentation network model with an atrous spatial pyramid pooling (ASPP) module was proposed to achieve effective extraction of apple leaf lesion features and to improve the apple leaf disease recognition and disease severity diagnosis compared with the classical semantic segmentation network models PSPNet and GCNet. In addition, the effects of the learning rate, optimizer, and backbone network on the performance of the DeepLabV3+ network model with the best performance were analyzed. The experimental results show that the mean pixel accuracy (MPA) and mean intersection over union (MIoU) of the model reached 97.26% and 83.85%, respectively. After being deployed into the smartphone platform, the detection time of the detection system was 9 s per image for the portable and intelligent diagnostics of apple leaf diseases. The transfer learning method provided the possibility of quickly acquiring a high-performance model under the condition of small datasets. The research results can provide a precise guide for the prevention and precise control of apple diseases in fields. Full article
(This article belongs to the Collection Application of AI in Plants)
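
For context on the ASPP head used by DeepLabV3+, a simplified PyTorch sketch follows; the dilation rates are the common (6, 12, 18) defaults and may differ from the configuration used in the paper.

```python
# Simplified atrous spatial pyramid pooling (ASPP) head.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    def __init__(self, in_ch: int, out_ch: int = 256, rates=(6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, 1, bias=False)]
            + [nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False) for r in rates]
        )
        self.image_pool = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                        nn.Conv2d(in_ch, out_ch, 1, bias=False))
        self.project = nn.Conv2d(out_ch * (len(rates) + 2), out_ch, 1, bias=False)

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]           # parallel dilated convs
        pooled = F.interpolate(self.image_pool(x), size=x.shape[-2:],
                               mode="bilinear", align_corners=False)
        return self.project(torch.cat(feats + [pooled], dim=1))   # fuse all branches

print(ASPP(512)(torch.randn(1, 512, 32, 32)).shape)               # torch.Size([1, 256, 32, 32])
```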

15 pages, 3634 KiB  
Article
The Fast Detection of Crop Disease Leaves Based on Single-Channel Gravitational Kernel Density Clustering
by Yifeng Ren, Qingyan Li and Zhe Liu
Appl. Sci. 2023, 13(2), 1172; https://doi.org/10.3390/app13021172 - 15 Jan 2023
Cited by 1 | Viewed by 2177
Abstract
Plant diseases and pests may seriously affect the yield of crops and even threaten the survival of human beings. The characteristics of plant diseases and insect pests are mainly reflected in the occurrence of lesions on crop leaves. Machine vision disease detection is of great significance for the early detection and prevention of plant diseases and insect pests. A fast detection method for lesions based on a single-channel gravitational kernel density clustering algorithm was designed to examine the complexity and ambiguity of diseased leaf images. Firstly, a polynomial was used to fit the R-channel feature histogram curve of a diseased leaf image in the RGB color space, and then the peak point and peak area of the fitted feature histogram curve were determined according to the derivative attribute. Secondly, the cluster numbers and the initial cluster center of the diseased leaf images were determined according to the peak area and peak point. Thirdly, according to the clustering center of the preliminarily determined diseased leaf images, the single-channel gravity kernel density clustering algorithm in this paper was used to achieve the rapid segmentation of the diseased leaf lesions. Finally, the experimental results showed that our method could segment the lesions quickly and accurately. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Visual Signal Processing)
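
The histogram-seeding step described above (fit a polynomial to the R-channel histogram and read peaks off its derivative to initialise cluster centres) can be sketched as follows; the polynomial degree and peak test are assumptions for illustration.

```python
# Illustrative sketch: polynomial fit of the R-channel histogram and
# derivative-based peak detection for seeding cluster centres.
import numpy as np

def histogram_peaks(r_channel: np.ndarray, degree: int = 9):
    hist, _ = np.histogram(r_channel, bins=256, range=(0, 256))
    x = np.linspace(0.0, 1.0, 256)                     # normalised grey level
    coeffs = np.polyfit(x, hist, degree)               # smooth fit of the histogram curve
    fitted = np.polyval(coeffs, x)
    deriv = np.polyval(np.polyder(coeffs), x)
    # peaks: derivative changes sign from + to - and the fitted curve is positive
    return [i for i in range(1, 255) if deriv[i - 1] > 0 >= deriv[i] and fitted[i] > 0]

r = np.random.randint(0, 256, (100, 100))              # toy stand-in for an R channel
print(histogram_peaks(r))                              # grey levels of candidate peaks
```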

39 pages, 3038 KiB  
Article
Identification of Stripe Rust and Leaf Rust on Different Wheat Varieties Based on Image Processing Technology
by Hongli Wang, Qian Jiang, Zhenyu Sun, Shiqin Cao and Haiguang Wang
Agronomy 2023, 13(1), 260; https://doi.org/10.3390/agronomy13010260 - 14 Jan 2023
Cited by 11 | Viewed by 3593
Abstract
The timely and accurate identification of stripe rust and leaf rust is essential in effective disease control and the safe production of wheat worldwide. To investigate methods for identifying the two diseases on different wheat varieties based on image processing technology, single-leaf images of the diseases on different wheat varieties, acquired under field and laboratory environmental conditions, were processed. After image scaling, median filtering, morphological reconstruction, and lesion segmentation on the images, 140 color, texture, and shape features were extracted from the lesion images; then, feature selections were conducted using methods including ReliefF, 1R, correlation-based feature selection, and principal components analysis combined with support vector machine (SVM), back propagation neural network (BPNN), and random forest (RF), respectively. For the individual-variety disease identification SVM, BPNN, and RF models built with the optimal feature combinations, the identification accuracies of the training sets and the testing sets on the same individual varieties acquired under the same image acquisition conditions as the training sets used for modeling were 87.18–100.00%, but most of the identification accuracies of the testing sets for other individual varieties were low. For the multi-variety disease identification SVM, BPNN, and RF models built with the merged optimal feature combinations based on the multi-variety disease images acquired under field and laboratory environmental conditions, identification accuracies in the range of 82.05–100.00% were achieved on the training set, the corresponding multi-variety disease image testing set, and all the individual-variety disease image testing sets. The results indicated that the identification of images of stripe rust and leaf rust could be greatly affected by wheat varieties, but satisfactory identification performances could be achieved by building multi-variety disease identification models based on disease images from multiple varieties under different environments. This study provides an effective method for the accurate identification of stripe rust and leaf rust and could be a useful reference for the automatic identification of other plant diseases. Full article
(This article belongs to the Special Issue Epidemiology and Control of Fungal Diseases of Crop Plants)
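
A minimal scikit-learn sketch of the feature-based classification stage described above, with placeholder feature vectors standing in for the 140 extracted color, texture, and shape features; the feature-selection steps (ReliefF, correlation-based selection, PCA) are omitted.

```python
# Illustrative SVM / random-forest classification on per-lesion feature vectors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X = np.random.rand(200, 140)                 # placeholder: 140 features per lesion image
y = np.random.randint(0, 2, 200)             # 0 = stripe rust, 1 = leaf rust
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

svm = SVC(kernel="rbf").fit(X_tr, y_tr)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("SVM accuracy:", svm.score(X_te, y_te))
print("RF accuracy:", rf.score(X_te, y_te))
```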
