Search Results (109)

Search Parameters:
Keywords = crops fine classification

24 pages, 2357 KB  
Article
From Vision-Only to Vision + Language: A Multimodal Framework for Few-Shot Unsound Wheat Grain Classification
by Yuan Ning, Pengtao Lv, Qinghui Zhang, Le Xiao and Caihong Wang
AI 2025, 6(9), 207; https://doi.org/10.3390/ai6090207 - 29 Aug 2025
Viewed by 239
Abstract
Precise classification of unsound wheat grains is essential for crop yields and food security, yet most existing approaches rely on vision-only models that demand large labeled datasets, a requirement that is often impractical in real-world, data-scarce settings. To address this few-shot challenge, we propose UWGC, a novel vision-language framework designed for few-shot classification of unsound wheat grains. UWGC integrates two core modules: a fine-tuning module based on Adaptive Prior Refinement (APE) and a text prompt enhancement module that incorporates Advancing Textual Prompt (ATPrompt) and the multimodal model Qwen2.5-VL. The synergy between the two modules, leveraging cross-modal semantics, enhances the generalization of UWGC in low-data regimes. UWGC is offered in two variants, UWGC-F and UWGC-T, to accommodate different practical needs. Across few-shot settings on a public grain dataset, UWGC-F and UWGC-T consistently outperform existing vision-only and vision-language methods, highlighting their potential for unsound wheat grain classification in real-world agriculture. Full article
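The core matching step in such vision-language classifiers can be illustrated with a minimal sketch, assuming toy embeddings (the labels, vectors, and `classify` helper below are hypothetical, not UWGC's actual APE or ATPrompt components): an image embedding is scored against one text-prompt embedding per class by cosine similarity, and the highest-scoring class is returned.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def classify(image_emb, class_text_embs):
    # Score the image against each class's text-prompt embedding
    # and return the best-matching class label.
    scores = {label: cosine(image_emb, emb) for label, emb in class_text_embs.items()}
    return max(scores, key=scores.get)

# Toy embeddings standing in for image/text encoder outputs.
prompts = {
    "sound":    [1.0, 0.1, 0.0],
    "moldy":    [0.0, 1.0, 0.2],
    "sprouted": [0.1, 0.0, 1.0],
}
print(classify([0.05, 0.9, 0.3], prompts))  # moldy
```

Few-shot variants additionally refine the class vectors with the handful of labeled images, which is the role the APE-based fine-tuning module plays in the paper.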

14 pages, 8017 KB  
Article
Fast Rice Plant Disease Recognition Based on Dual-Attention-Guided Lightweight Network
by Chenrui Kang, Lin Jiao, Kang Liu, Zhigui Liu and Rujing Wang
Agriculture 2025, 15(16), 1724; https://doi.org/10.3390/agriculture15161724 - 10 Aug 2025
Viewed by 438
Abstract
The yield and quality of rice are severely affected by rice disease, which can result in crop failure. Early and precise identification of rice plant diseases enables timely action, minimizing potential economic losses. Deep convolutional neural networks (CNNs) have significantly advanced image classification accuracy by leveraging powerful feature extraction capabilities, outperforming traditional machine learning methods. In this work, we propose a dual attention-guided lightweight network for fast and precise recognition of rice diseases with small lesions and high similarity. First, to efficiently extract features while reducing computational redundancy, we incorporate FasterNet using partial convolution (PC-Conv). Furthermore, to enhance the network’s ability to capture fine-grained lesion details, we introduce a dual-attention mechanism that aggregates long-range contextual information in both spatial and channel dimensions. Additionally, we construct a large-scale rice disease dataset, named RD-6, which contains 2196 images across six categories, to support model training and evaluation. Finally, the proposed rice disease detection method is evaluated on the RD-6 dataset, demonstrating its superior performance over other state-of-the-art methods, especially in terms of recognition efficiency. For instance, the method achieves an average accuracy of 99.9%, recall of 99.8%, precision of 100%, specificity of 100%, and F1-score of 99.9%. Additionally, the proposed method has only 3.6 M parameters, demonstrating higher efficiency without sacrificing accuracy. Full article
(This article belongs to the Section Crop Protection, Diseases, Pests and Weeds)
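The gating idea behind channel and spatial attention can be sketched in a few lines of NumPy. This is a minimal stand-in for intuition only, not the paper's long-range context aggregation; the shapes and gating functions are assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    # feat: (C, H, W). Gate each channel by its global average response.
    weights = sigmoid(feat.mean(axis=(1, 2)))   # (C,)
    return feat * weights[:, None, None]

def spatial_attention(feat):
    # Gate each spatial position by the channel-mean activation there.
    weights = sigmoid(feat.mean(axis=0))        # (H, W)
    return feat * weights[None, :, :]

def dual_attention(feat):
    # Apply both gates in sequence; the paper's version additionally
    # aggregates long-range context in each dimension.
    return spatial_attention(channel_attention(feat))

x = np.random.rand(8, 16, 16)
y = dual_attention(x)
print(y.shape)  # (8, 16, 16)
```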

17 pages, 54671 KB  
Article
Pep-VGGNet: A Novel Transfer Learning Method for Pepper Leaf Disease Diagnosis
by Süleyman Çetinkaya and Amira Tandirovic Gursel
Appl. Sci. 2025, 15(15), 8690; https://doi.org/10.3390/app15158690 - 6 Aug 2025
Viewed by 287
Abstract
The health of crops is a major challenge for productivity growth in agriculture, with plant diseases playing a key role in limiting crop yield. Identifying and understanding these diseases is crucial to preventing their spread. In particular, greenhouse pepper leaves are susceptible to diseases such as mildew, mites, caterpillars, aphids, and blight, which leave distinctive marks that can be used for disease classification. The study proposes a seven-class classifier for the rapid and accurate diagnosis of pepper diseases, with a primary focus on pre-processing techniques to enhance colour differentiation between green and yellow shades, thereby facilitating easier classification among the classes. A novel algorithm is introduced to improve image vibrancy, contrast, and colour properties. The diagnosis is performed using a modified VGG16Net model, which includes three additional layers for fine-tuning. After initialising on the ImageNet dataset, some layers are frozen to prevent redundant learning. The classification is additionally accelerated by introducing flattened, dense, and dropout layers. The proposed model is tested on a private dataset collected specifically for this study. Notably, this work is the first to focus on diagnosing aphid and caterpillar diseases in peppers. The model achieves an average accuracy of 92.00%, showing promising potential for seven-class deep learning-based disease diagnostics. Misclassifications in the aphid class are primarily due to the limited number of samples available. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
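A common way to improve vibrancy and contrast in a pre-processing stage of this kind is a percentile-based contrast stretch; the sketch below is an illustrative stand-in, not the paper's proposed algorithm:

```python
import numpy as np

def stretch_contrast(img, low_pct=2, high_pct=98):
    # Percentile-based contrast stretch: map the [low, high] percentile
    # range of each channel onto [0, 1], clipping the tails. This widens
    # the separation between similar green and yellow shades.
    out = np.empty_like(img, dtype=float)
    for c in range(img.shape[2]):
        lo, hi = np.percentile(img[:, :, c], [low_pct, high_pct])
        out[:, :, c] = np.clip((img[:, :, c] - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    return out

img = np.random.rand(32, 32, 3) * 0.4 + 0.3   # low-contrast image in [0.3, 0.7]
enh = stretch_contrast(img)
print(enh.min(), enh.max())  # 0.0 1.0
```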

16 pages, 2750 KB  
Article
Combining Object Detection, Super-Resolution GANs and Transformers to Facilitate Tick Identification Workflow from Crowdsourced Images on the eTick Platform
by Étienne Clabaut, Jérémie Bouffard and Jade Savage
Insects 2025, 16(8), 813; https://doi.org/10.3390/insects16080813 - 6 Aug 2025
Viewed by 476
Abstract
Ongoing changes in the distribution and abundance of several tick species of medical relevance in Canada have prompted the development of the eTick platform—an image-based crowd-sourcing public surveillance tool for Canada enabling rapid tick species identification by trained personnel, and public health guidance based on tick species and province of residence of the submitter. Considering that more than 100,000 images from over 73,500 identified records representing 25 tick species have been submitted to eTick since the public launch in 2018, a partial automation of the image processing workflow could save substantial human resources, especially as submission numbers have been steadily increasing since 2021. In this study, we evaluate an end-to-end artificial intelligence (AI) pipeline to support tick identification from eTick user-submitted images, characterized by heterogeneous quality and uncontrolled acquisition conditions. Our framework integrates (i) tick localization using a fine-tuned YOLOv7 object detection model, (ii) resolution enhancement of cropped images via super-resolution Generative Adversarial Networks (RealESRGAN and SwinIR), and (iii) image classification using deep convolutional (ResNet-50) and transformer-based (ViT) architectures across three datasets (12, 6, and 3 classes) of decreasing granularities in terms of taxonomic resolution, tick life stage, and specimen viewing angle. ViT consistently outperformed ResNet-50, especially in complex classification settings. The configuration yielding the best performance—relying on object detection without incorporating super-resolution—achieved a macro-averaged F1-score exceeding 86% in the 3-class model (Dermacentor sp., other species, bad images), with minimal critical misclassifications (0.7% of “other species” misclassified as Dermacentor). 
Given that Dermacentor ticks represent more than 60% of tick volume submitted on the eTick platform, the integration of a low granularity model in the processing workflow could save significant time while maintaining very high standards of identification accuracy. Our findings highlight the potential of combining modern AI methods to facilitate efficient and accurate tick image processing in community science platforms, while emphasizing the need to adapt model complexity and class resolution to task-specific constraints. Full article
(This article belongs to the Section Medical and Livestock Entomology)
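The payoff of the low-granularity 3-class model comes from confidence-based routing: confident Dermacentor calls can skip the human queue, while anything uncertain falls through to review. A minimal sketch (the threshold value and helper names are assumptions, not the eTick pipeline):

```python
import math

LABELS = ["Dermacentor sp.", "other species", "bad images"]

def softmax(logits):
    # Numerically stable softmax over a list of class logits.
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [v / total for v in exps]

def route(logits, min_conf=0.6):
    # Pick the top class; if the model is not confident enough,
    # defer to "bad images" so a human identifier takes over.
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] < min_conf:
        return "bad images"
    return LABELS[best]

print(route([4.0, 1.0, 0.5]))   # confident call: Dermacentor sp.
print(route([1.1, 1.0, 0.9]))   # low confidence: bad images
```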

26 pages, 1790 KB  
Article
A Hybrid Deep Learning Model for Aromatic and Medicinal Plant Species Classification Using a Curated Leaf Image Dataset
by Shareena E. M., D. Abraham Chandy, Shemi P. M. and Alwin Poulose
AgriEngineering 2025, 7(8), 243; https://doi.org/10.3390/agriengineering7080243 - 1 Aug 2025
Viewed by 701
Abstract
In the era of smart agriculture, accurate identification of plant species is critical for effective crop management, biodiversity monitoring, and the sustainable use of medicinal resources. However, existing deep learning approaches often underperform when applied to fine-grained plant classification tasks due to the lack of domain-specific, high-quality datasets and the limited representational capacity of traditional architectures. This study addresses these challenges by introducing a novel, well-curated leaf image dataset consisting of 39 classes of medicinal and aromatic plants collected from the Aromatic and Medicinal Plant Research Station in Odakkali, Kerala, India. To overcome performance bottlenecks observed with a baseline Convolutional Neural Network (CNN) that achieved only 44.94% accuracy, we progressively enhanced model performance through a series of architectural innovations. These included the use of a pre-trained VGG16 network, data augmentation techniques, and fine-tuning of deeper convolutional layers, followed by the integration of Squeeze-and-Excitation (SE) attention blocks. Ultimately, we propose a hybrid deep learning architecture that combines VGG16 with Batch Normalization, Gated Recurrent Units (GRUs), Transformer modules, and Dilated Convolutions. This final model achieved a peak validation accuracy of 95.24%, significantly outperforming several baseline models, such as custom CNN (44.94%), VGG-19 (59.49%), VGG-16 before augmentation (71.52%), Xception (85.44%), Inception v3 (87.97%), VGG-16 after data augmentation (89.24%), VGG-16 after fine-tuning (90.51%), MobileNetV2 (93.67%), and VGG16 with SE block (94.94%). These results demonstrate superior capability in capturing both local textures and global morphological features. The proposed solution not only advances the state of the art in plant classification but also contributes a valuable dataset to the research community.
Its real-world applicability spans field-based plant identification, biodiversity conservation, and precision agriculture, offering a scalable tool for automated plant recognition in complex ecological and agricultural environments. Full article
(This article belongs to the Special Issue Implementation of Artificial Intelligence in Agriculture)
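The Squeeze-and-Excitation step that lifted the VGG16 baseline here can be sketched numerically; the bottleneck weights below are fixed stand-ins, whereas in the actual model they are learned:

```python
import numpy as np

def se_block(feat, reduction=4):
    # Squeeze-and-Excitation: squeeze each channel to a scalar by global
    # average pooling, excite through a small bottleneck, then rescale
    # the channels by the resulting sigmoid gates.
    c = feat.shape[0]
    squeezed = feat.mean(axis=(1, 2))             # (C,) channel descriptors
    w1 = np.eye(c)[: c // reduction]              # (C/r, C) stand-in weights
    w2 = w1.T                                     # (C, C/r) stand-in weights
    hidden = np.maximum(w1 @ squeezed, 0.0)       # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))  # sigmoid gates, (C,)
    return feat * gates[:, None, None]

x = np.random.rand(8, 4, 4)
y = se_block(x)
print(y.shape)  # (8, 4, 4)
```

The design choice is cheap recalibration: the gates add only two tiny matrix multiplies per block, which is why SE attention improves accuracy with little extra compute.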

25 pages, 5142 KB  
Article
Wheat Powdery Mildew Severity Classification Based on an Improved ResNet34 Model
by Meilin Li, Yufeng Guo, Wei Guo, Hongbo Qiao, Lei Shi, Yang Liu, Guang Zheng, Hui Zhang and Qiang Wang
Agriculture 2025, 15(15), 1580; https://doi.org/10.3390/agriculture15151580 - 23 Jul 2025
Viewed by 396
Abstract
Crop disease identification is a pivotal research area in smart agriculture, forming the foundation for disease mapping and targeted prevention strategies. Among the most prevalent global wheat diseases, powdery mildew—caused by fungal infection—poses a significant threat to crop yield and quality, making early and accurate detection crucial for effective management. In this study, we present QY-SE-MResNet34, a deep learning-based classification model that builds upon ResNet34 to perform multi-class classification of wheat leaf images and assess powdery mildew severity at the single-leaf level. The proposed methodology begins with dataset construction following the GB/T 17980.22-2000 national standard for powdery mildew severity grading, resulting in a curated collection of 4248 wheat leaf images at the grain-filling stage across six severity levels. To enhance model performance, we integrated transfer learning with ResNet34, leveraging pretrained weights to improve feature extraction and accelerate convergence. Further refinements included embedding a Squeeze-and-Excitation (SE) block to strengthen feature representation while maintaining computational efficiency. The model architecture was also optimized by modifying the first convolutional layer (conv1)—replacing the original 7 × 7 kernel with a 3 × 3 kernel, adjusting the stride to 1, and setting padding to 1—to better capture fine-grained leaf textures and edge features. Subsequently, the optimal training strategy was determined through hyperparameter tuning experiments, and GrabCut-based background processing along with data augmentation were introduced to enhance model robustness. In addition, interpretability techniques such as channel masking and Grad-CAM were employed to visualize the model’s decision-making process. 
Experimental validation demonstrated that QY-SE-MResNet34 achieved an 89% classification accuracy, outperforming established models such as ResNet50, VGG16, and MobileNetV2 and surpassing the original ResNet34 by 11%. This study delivers a high-performance solution for single-leaf wheat powdery mildew severity assessment, offering practical value for intelligent disease monitoring and early warning systems in precision agriculture. Full article
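The conv1 change can be checked with the standard convolution output-size formula, out = (in + 2·padding − kernel) // stride + 1: the stock 7 × 7, stride-2 stem halves the spatial resolution, while the modified 3 × 3, stride-1, padding-1 stem preserves it, keeping fine leaf texture for later layers:

```python
def conv_out(size, kernel, stride, padding):
    # Output spatial size of a convolution along one dimension.
    return (size + 2 * padding - kernel) // stride + 1

# Original ResNet34 stem: 7x7 kernel, stride 2, padding 3 -> halves the input.
print(conv_out(224, 7, 2, 3))  # 112
# Modified stem: 3x3 kernel, stride 1, padding 1 -> preserves resolution.
print(conv_out(224, 3, 1, 1))  # 224
```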

17 pages, 1913 KB  
Article
CropSTS: A Remote Sensing Foundation Model for Cropland Classification with Decoupled Spatiotemporal Attention
by Jian Yan, Xingfa Gu and Yuxing Chen
Remote Sens. 2025, 17(14), 2481; https://doi.org/10.3390/rs17142481 - 17 Jul 2025
Viewed by 756
Abstract
Recent progress in geospatial foundation models (GFMs) has demonstrated strong generalization capabilities for remote sensing downstream tasks. However, existing GFMs still struggle with fine-grained cropland classification due to ambiguous field boundaries, insufficient and low-efficient temporal modeling, and limited cross-regional adaptability. In this paper, we propose CropSTS, a remote sensing foundation model designed with a decoupled temporal–spatial attention architecture, specifically tailored for the temporal dynamics of cropland remote sensing data. To efficiently pre-train the model under limited labeled data, we employ a hybrid framework combining joint-embedding predictive architecture with knowledge distillation from web-scale foundation models. Despite being trained on a small dataset and using a compact model, CropSTS achieves state-of-the-art performance on the PASTIS-R benchmark in terms of mIoU and F1-score. Our results validate that structural optimization for temporal encoding and cross-modal knowledge transfer constitute effective strategies for advancing GFM design in agricultural remote sensing. Full article
(This article belongs to the Special Issue Advanced AI Technology for Remote Sensing Analysis)
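One motivation for decoupling temporal and spatial attention is cost: joint attention over T time steps and N spatial tokens scales as (T·N)², while attending over time and space separately scales as N·T² + T·N². A quick comparison with illustrative token counts (not figures from the paper):

```python
def joint_cost(t, n):
    # Full spatiotemporal attention: every one of the t*n tokens
    # attends to every other token.
    return (t * n) ** 2

def decoupled_cost(t, n):
    # Temporal attention within each pixel (n sequences of length t),
    # then spatial attention within each time step (t maps of n tokens).
    return n * t ** 2 + t * n ** 2

T, N = 12, 256   # e.g. 12 acquisition dates, 16x16 spatial tokens
print(joint_cost(T, N))      # 9437184 pairwise interactions
print(decoupled_cost(T, N))  # 823296 pairwise interactions
```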

21 pages, 4147 KB  
Article
AgriFusionNet: A Lightweight Deep Learning Model for Multisource Plant Disease Diagnosis
by Saleh Albahli
Agriculture 2025, 15(14), 1523; https://doi.org/10.3390/agriculture15141523 - 15 Jul 2025
Cited by 1 | Viewed by 835
Abstract
Timely and accurate identification of plant diseases is critical to mitigating crop losses and enhancing yield in precision agriculture. This paper proposes AgriFusionNet, a lightweight and efficient deep learning model designed to diagnose plant diseases using multimodal data sources. The framework integrates RGB and multispectral drone imagery with IoT-based environmental sensor data (e.g., temperature, humidity, soil moisture), recorded over six months across multiple agricultural zones. Built on the EfficientNetV2-B4 backbone, AgriFusionNet incorporates Fused-MBConv blocks and Swish activation to improve gradient flow, capture fine-grained disease patterns, and reduce inference latency. The model was evaluated using a comprehensive dataset composed of real-world and benchmarked samples, showing superior performance with 94.3% classification accuracy, 28.5 ms inference time, and a 30% reduction in model parameters compared to state-of-the-art models such as Vision Transformers and InceptionV4. Extensive comparisons with both traditional machine learning and advanced deep learning methods underscore its robustness, generalization, and suitability for deployment on edge devices. Ablation studies and confusion matrix analyses further confirm its diagnostic precision, even in visually ambiguous cases. The proposed framework offers a scalable, practical solution for real-time crop health monitoring, contributing toward smart and sustainable agricultural ecosystems. Full article
(This article belongs to the Special Issue Computational, AI and IT Solutions Helping Agriculture)
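The Swish activation mentioned above (also known as SiLU) is simply x · sigmoid(x); unlike ReLU it is smooth and lets small negative values pass, which tends to help gradient flow:

```python
import math

def swish(x):
    # Swish / SiLU activation: x * sigmoid(x).
    return x / (1.0 + math.exp(-x))

print(round(swish(1.0), 4))   # 0.7311
print(round(swish(-1.0), 4))  # -0.2689 (small negatives pass through)
print(swish(0.0))             # 0.0
```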

30 pages, 34212 KB  
Article
Spatiotemporal Mapping and Driving Mechanism of Crop Planting Patterns on the Jianghan Plain Based on Multisource Remote Sensing Fusion and Sample Migration
by Pengnan Xiao, Yong Zhou, Jianping Qian, Yujie Liu and Xigui Li
Remote Sens. 2025, 17(14), 2417; https://doi.org/10.3390/rs17142417 - 12 Jul 2025
Viewed by 351
Abstract
The accurate mapping of crop planting patterns is vital for sustainable agriculture and food security, particularly in regions with complex cropping systems and limited cloud-free observations. This research focuses on the Jianghan Plain in southern China, where diverse planting structures and persistent cloud cover make consistent monitoring challenging. We integrated multi-temporal Sentinel-2 and Landsat-8 imagery from 2017 to 2021 on the Google Earth Engine platform and applied a sample migration strategy to construct multi-year training data. A random forest classifier was used to identify nine major planting patterns at a 10 m resolution. The classification achieved an average overall accuracy of 88.3%, with annual Kappa coefficients ranging from 0.81 to 0.88. A spatial analysis revealed that single rice was the dominant pattern, covering more than 60% of the area. Temporal variations in cropping patterns were categorized into four frequency levels (0, 1, 2, and 3 changes), with more dynamic transitions concentrated in the central-western and northern subregions. A multiscale geographically weighted regression (MGWR) model revealed that economic and production-related factors had strong positive associations with crop planting patterns, while natural factors showed relatively weaker explanatory power. This research presents a scalable method for mapping fine-resolution crop patterns in complex agroecosystems, providing quantitative support for regional land-use optimization and the development of agricultural policies. Full article
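Sample migration of this kind typically keeps a reference-year label for a later year only when the pixel's spectrum still matches the reference closely, for example under the Spectral Angle Mapper. The sketch below is a simplification of that idea; the threshold, crop labels, and band values are hypothetical:

```python
import math

def spectral_angle(u, v):
    # Spectral Angle Mapper: angle (radians) between two reflectance vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return math.acos(max(-1.0, min(1.0, dot / (norm_u * norm_v))))

def migrate_samples(ref_samples, target_spectra, max_angle=0.1):
    # Keep a reference-year label for the target year only when the pixel's
    # target-year spectrum still matches the reference spectrum closely.
    migrated = []
    for (label, ref_spec), tgt_spec in zip(ref_samples, target_spectra):
        if spectral_angle(ref_spec, tgt_spec) <= max_angle:
            migrated.append((label, tgt_spec))
    return migrated

refs = [("single rice", [0.1, 0.4, 0.35]), ("rapeseed-rice", [0.2, 0.3, 0.5])]
targets = [[0.11, 0.41, 0.34], [0.5, 0.1, 0.2]]   # second pixel changed class
print([label for label, _ in migrate_samples(refs, targets)])  # ['single rice']
```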

32 pages, 5287 KB  
Article
UniHSFormer X for Hyperspectral Crop Classification with Prototype-Routed Semantic Structuring
by Zhen Du, Senhao Liu, Yao Liao, Yuanyuan Tang, Yanwen Liu, Huimin Xing, Zhijie Zhang and Donghui Zhang
Agriculture 2025, 15(13), 1427; https://doi.org/10.3390/agriculture15131427 - 2 Jul 2025
Viewed by 449
Abstract
Hyperspectral imaging (HSI) plays a pivotal role in modern agriculture by capturing fine-grained spectral signatures that support crop classification, health assessment, and land-use monitoring. However, the transition from raw spectral data to reliable semantic understanding remains challenging—particularly under fragmented planting patterns, spectral ambiguity, and spatial heterogeneity. To address these limitations, we propose UniHSFormer-X, a unified transformer-based framework that reconstructs agricultural semantics through prototype-guided token routing and hierarchical context modeling. Unlike conventional models that treat spectral–spatial features uniformly, UniHSFormer-X dynamically modulates information flow based on class-aware affinities, enabling precise delineation of field boundaries and robust recognition of spectrally entangled crop types. Evaluated on three UAV-based benchmarks—WHU-Hi-LongKou, HanChuan, and HongHu—the model achieves up to 99.80% overall accuracy and 99.28% average accuracy, outperforming state-of-the-art CNN, ViT, and hybrid architectures across both structured and heterogeneous agricultural scenarios. Ablation studies further reveal the critical role of semantic routing and prototype projection in stabilizing model behavior, while parameter surface analysis demonstrates consistent generalization across diverse configurations. Beyond high performance, UniHSFormer-X offers a semantically interpretable architecture that adapts to the spatial logic and compositional nuance of agricultural imagery, representing a forward step toward robust and scalable crop classification. Full article

24 pages, 1991 KB  
Article
Robust Deep Neural Network for Classification of Diseases from Paddy Fields
by Karthick Mookkandi and Malaya Kumar Nath
AgriEngineering 2025, 7(7), 205; https://doi.org/10.3390/agriengineering7070205 - 1 Jul 2025
Cited by 1 | Viewed by 588
Abstract
Agriculture in India supports millions of livelihoods and is a major force behind economic expansion. Challenges in modern agriculture depend on environmental factors (such as soil quality and climate variability) and biotic factors (such as pests and diseases). These challenges can be addressed by advancements in technology (such as sensors, the Internet of Things, and communication) and data-driven approaches (such as machine learning (ML) and deep learning (DL)), which can help with crop yield and sustainability in agriculture. This study introduces an innovative deep neural network (DNN) approach for identifying leaf diseases in paddy crops at an early stage. The proposed neural network is a hybrid DL model comprising feature extraction, channel attention, inception with residual, and classification blocks. Channel attention and inception with residual help extract comprehensive information about the crops and potential diseases. The classification module uses softmax to obtain the score for different classes. The importance of each block is analyzed via an ablation study. To understand the feature extraction ability of the modules, extracted features at different stages are fed to the SVM classifier to obtain the classification accuracy. This technique was tested on eight classes with 7857 paddy crop images, which were obtained from local paddy fields and freely available open sources. The classification performance of the proposed technique is evaluated according to accuracy, sensitivity, specificity, F1 score, MCC, area under the curve (AUC), and receiver operating characteristic (ROC). The model was fine-tuned by setting the hyperparameters (such as batch size, learning rate, optimizer, epoch, and train and test ratio). Training, validation, and testing accuracies of 99.91%, 99.87%, and 99.49%, respectively, were obtained for 20 epochs with a learning rate of 0.001 and the SGDM optimizer. 
The proposed network robustness was studied via an ablation study and with noisy data. The model’s classification performance was evaluated for other agricultural data (such as mango, maize, and wheat diseases). These research outcomes can empower farmers with smarter agricultural practices and contribute to economic growth. Full article
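Of the reported metrics, the Matthews correlation coefficient (MCC) is the least familiar; for a binary confusion matrix it is (TP·TN − FP·FN) / √((TP+FP)(TP+FN)(TN+FP)(TN+FN)), reaching 1.0 for perfect predictions and about 0.0 at chance level:

```python
import math

def mcc(tp, tn, fp, fn):
    # Matthews correlation coefficient from binary confusion counts.
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

print(mcc(50, 50, 0, 0))    # 1.0 (perfect classifier)
print(mcc(25, 25, 25, 25))  # 0.0 (chance level)
```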

21 pages, 4394 KB  
Article
Deep Learning Models for Detection and Severity Assessment of Cercospora Leaf Spot (Cercospora capsici) in Chili Peppers Under Natural Conditions
by Douglas Vieira Leite, Alisson Vasconcelos de Brito, Gregorio Guirada Faccioli and Gustavo Haddad Souza Vieira
Plants 2025, 14(13), 2011; https://doi.org/10.3390/plants14132011 - 1 Jul 2025
Cited by 1 | Viewed by 577
Abstract
The accurate assessment of plant disease severity is crucial for effective crop management. Deep learning, especially via CNNs, is widely used for image segmentation in plant lesion detection, but accurately assessing disease severity across varied environmental conditions remains challenging. This study evaluates eight deep learning models for detecting and quantifying Cercospora leaf spot (Cercospora capsici) severity in chili peppers under natural field conditions. A custom dataset of 1645 chili pepper leaf images, collected from a Brazilian plantation and annotated with 6282 lesions, was developed to reflect real-world variability in lighting and background. First, an algorithm was developed to process raw images, applying ROI selection and background removal. Then, four YOLOv8 and four Mask R-CNN models were fine-tuned for pixel-level segmentation and severity classification, comparing one-stage and two-stage models to offer practical insights for agricultural applications. In pixel-level segmentation on the test dataset, Mask R-CNN achieved superior precision with a Mean Intersection over Union (MIoU) of 0.860 and F1-score of 0.924 for the mask_rcnn_R101_FPN_3x model, compared to 0.808 and 0.893 for the YOLOv8s-Seg model. However, in severity classification, Mask R-CNN underestimated higher severity levels, with an accuracy of 72.3% for level III, while YOLOv8 attained 91.4%. Additionally, YOLOv8 demonstrated greater efficiency, with an inference time of 27 ms versus 89 ms for Mask R-CNN. While Mask R-CNN excels in segmentation accuracy, YOLOv8 offers a compelling balance of speed and reliable severity classification, making it suitable for real-time plant disease assessment in agricultural applications. Full article
(This article belongs to the Section Plant Protection and Biotic Interactions)
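The MIoU figures compare predicted and annotated lesion masks pixel by pixel; per-mask Intersection over Union is the overlap divided by the combined area of the two masks:

```python
import numpy as np

def iou(pred, target):
    # Intersection-over-Union of two boolean lesion masks.
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

pred = np.zeros((4, 4), bool); pred[1:3, 1:3] = True       # 4-pixel prediction
target = np.zeros((4, 4), bool); target[1:3, 1:4] = True   # 6-pixel annotation
print(iou(pred, target))  # 4 / 6
```

MIoU then averages this quantity over classes (or masks), which is why it penalizes both missed lesion pixels and over-segmentation.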

21 pages, 2701 KB  
Article
HSDT-TabNet: A Dual-Path Deep Learning Model for Severity Grading of Soybean Frogeye Leaf Spot
by Xiaoming Li, Yang Zhou, Yongguang Li, Shiqi Wang, Wenxue Bian and Hongmin Sun
Agronomy 2025, 15(7), 1530; https://doi.org/10.3390/agronomy15071530 - 24 Jun 2025
Viewed by 441
Abstract
Soybean frogeye leaf spot (FLS), a serious soybean disease, causes severe yield losses in the largest production regions of China. However, both conventional field monitoring and machine learning algorithms remain challenged in achieving rapid and accurate detection. In this study, an HSDT-TabNet model was proposed for the grading of soybean FLS under field conditions by analyzing unmanned aerial vehicle (UAV)-based hyperspectral data. This model employs a dual-path parallel feature extraction strategy: the TabNet path performs sparse feature selection to capture fine-grained local discriminative information, while the hierarchical soft decision tree (HSDT) path models global nonlinear relationships across hyperspectral bands. The features from both paths are then dynamically fused via a multi-head attention mechanism to integrate complementary information. Furthermore, the overall generalization ability of the model is improved through hyperparameter optimization based on the tree-structured Parzen estimator (TPE). Experimental results show that HSDT-TabNet achieved a macro-accuracy of 96.37% under five-fold cross-validation. It outperformed the TabTransformer and SVM baselines by 2.08% and 2.23%, respectively. For high-severity cases (Level 4–5), the classification accuracy exceeded 97%. This study provides an effective method for precise field-scale crop disease monitoring. Full article
14 pages, 18260 KB  
Article
Genome-Wide Association Analysis Identifies Loci for Powdery Mildew Resistance in Wheat
by Xiangdong Chen, Haobo Wang, Kaiqiang Fang, Guohui Ding, Nannan Dong, Na Dong, Man Zhang, Yihao Zang and Zhengang Ru
Agronomy 2025, 15(6), 1439; https://doi.org/10.3390/agronomy15061439 - 12 Jun 2025
Abstract
Wheat (Triticum aestivum L.), a staple crop of global significance, faces constant biotic stress, with powdery mildew caused by Blumeria graminis f. sp. tritici (Bgt) being particularly damaging. In this study, a multi-year, single-site experiment was conducted to minimize environmental effects, and a five-level classification system was used to assess powdery mildew resistance. A 660K SNP array was used to genotype 204 wheat germplasms, followed by a genome-wide association study (GWAS). SNP loci with −log10(p) > 3.0 were screened and validated across repeated trials to identify those associated with powdery mildew (Pm) resistance. Twelve SNPs were consistently associated with Pm resistance across multiple years. Of these, three colocalized with previously reported Pm-resistance genes or QTL regions, and the remaining nine represent potentially novel loci. The candidate genes identified include leucine-rich repeat (LRR) and NB-ARC immune receptors, as well as pathogenesis-related, thioredoxin, and serine/threonine-protein kinase genes. Overall, the SNP loci and candidate genes identified in this study provide a basis for further fine mapping and cloning of genes involved in Pm resistance.
(This article belongs to the Special Issue Mechanism and Sustainable Control of Crop Diseases)
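The screening step in the abstract above, keeping loci with −log10(p) > 3.0 and validating them across repeated trials, can be sketched as a simple filter-and-intersect routine. The function name and the per-year dictionary layout are illustrative assumptions, not the authors' analysis pipeline.

```python
import math

def significant_snps(pvals_by_year, threshold=3.0):
    """Return SNP IDs whose -log10(p) exceeds the threshold in every year.

    `pvals_by_year` is a list of {snp_id: p_value} dicts, one per trial year;
    intersecting the yearly hit sets mimics cross-year validation.
    """
    hits = None
    for pvals in pvals_by_year:
        year_hits = {snp for snp, p in pvals.items()
                     if -math.log10(p) > threshold}
        hits = year_hits if hits is None else hits & year_hits
    return hits or set()
```

For example, a SNP at p = 1e-4 (−log10(p) = 4) in every year survives the intersection, while one crossing the threshold in only a single year is dropped.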
19 pages, 14266 KB  
Article
Predictive Capability Evaluation of Micrograph-Driven Deep Learning for Ti6Al4V Alloy Tensile Strength Under Varied Preprocessing Strategies
by Yuqi Xiong and Wei Duan
Metals 2025, 15(6), 586; https://doi.org/10.3390/met15060586 - 24 May 2025
Abstract
The purpose of this study is to develop a micrograph-driven model for predicting the mechanical properties of Ti6Al4V through integrated image preprocessing and deep learning, reducing reliance on manually extracted features and process parameters. This paper systematically evaluates the capability of a CNN model using preprocessed micrographs to predict the ultimate tensile strength (UTS) of Ti6Al4V alloy, while analyzing how different preprocessing combinations influence model performance. A total of 180 micrographs were selected from the published literature to construct the dataset. After image standardization (grayscale transformation, resizing, and normalization) and image enhancement, a pre-trained ResNet34 model was employed with transfer learning for strength-grade classification (low, medium, high) and UTS regression. The results demonstrate that on highly heterogeneous micrograph datasets, the model exhibited moderate classification capability (maximum accuracy = 65.60% ± 1.22%) but negligible UTS regression capability (highest R2 = 0.163 ± 0.020). Fine-tuning on subsets with consistent forming processes improved regression performance (highest R2 = 0.360 ± 1.47 × 10−5), outperforming traditional predictive models (highest R2 = 0.148). The classification model was insensitive to the normalization method, while min–max normalization with center-cropping proved the optimal standardization for regression (R2 = 0.111 ± 0.017). Gamma correction maximized classification accuracy, whereas histogram equalization yielded the largest improvement for regression.
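The standardization combination this abstract finds best for regression, min–max normalization with center-cropping, can be sketched for a grayscale micrograph stored as a nested list of pixel values. Both function names and the list-of-lists representation are illustrative assumptions; a real pipeline would typically use a library such as OpenCV or torchvision.

```python
def center_crop(img, size):
    """Crop a square `size` x `size` window from the center of the image."""
    h, w = len(img), len(img[0])
    top, left = (h - size) // 2, (w - size) // 2
    return [row[left:left + size] for row in img[top:top + size]]

def min_max_normalize(img):
    """Rescale pixel values linearly into [0, 1] using the image min/max."""
    flat = [v for row in img for v in row]
    lo, hi = min(flat), max(flat)
    scale = (hi - lo) or 1  # guard against a constant image
    return [[(v - lo) / scale for v in row] for row in img]
```

Cropping before normalizing means the min/max are taken over the retained region only, so border artifacts in the discarded margin cannot skew the rescaling.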