Advances in Artificial Intelligence for Plant Research

A special issue of Plants (ISSN 2223-7747). This special issue belongs to the section "Plant Modeling".

Deadline for manuscript submissions: 20 October 2025 | Viewed by 20987

Special Issue Editors


Dr. Guoxiong Zhou
Guest Editor
College of Computer & Information Engineering, Central South University of Forestry and Technology, Changsha 410004, China
Interests: artificial intelligence; computer vision; plant phenotyping; precision agriculture

Dr. Liujun Li
Guest Editor
Department of Soil and Water Systems, University of Idaho, Moscow, ID, USA
Interests: robotic sensing; decision support systems; climate-smart agriculture; precision agriculture; intelligent robotics

Dr. Xiaoyulong Chen
Guest Editor
College of Agriculture/College of Life Sciences, Guizhou University, Guiyang 550025, China
Interests: pest management; biocontrol; smart agriculture; deep learning; image recognition

Special Issue Information

Dear Colleagues,

Rapid advances in artificial intelligence offer transformative tools for botanical research, with the potential to reshape crop management, disease prediction, precision agriculture, and sustainable ecosystem management. This Special Issue focuses on the latest advances, challenges, and opportunities for artificial intelligence in plant research, promoting interdisciplinary collaboration and driving significant progress in plant science. Specific research topics include, but are not limited to, the following:

  1. Plant phenotype analysis: the application of computer vision and machine learning technology to identify and analyze the morphological characteristics and growth state of plants.
  2. Plant disease detection and prediction: using AI technology to predict and identify plant diseases to improve early warning and management efficiency.
  3. Crop management and optimization: combining data analytics and AI algorithms to optimize fertilization, irrigation, and other agricultural practices to improve crop yield and quality.
  4. Plant genomics and genetic research: using AI-assisted genome analysis and genetic algorithms to accelerate plant genetic improvement and new variety development.
  5. Environmental monitoring and adaptation: using AI to monitor the impact of environmental factors on plant growth and help develop plant varieties that adapt to different climatic conditions.
  6. Agricultural robots and automation: studying the application of AI-driven agricultural robots in seeding, picking, and weed control to improve the efficiency of agricultural operations.

Dr. Guoxiong Zhou
Dr. Liujun Li
Dr. Xiaoyulong Chen
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Plants is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • plant research
  • artificial intelligence
  • phenotype analysis
  • disease detection
  • crop management and optimization
  • agricultural robots

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (18 papers)


Research


15 pages, 2116 KB  
Article
Predicting the Potential Suitable Habitat of Solanum rostratum in China Using the Biomod2 Ensemble Modeling Framework
by Jiajie Wang, Jingdong Zhao, Lina Jiang, Xuejiao Han and Yuanjun Zhu
Plants 2025, 14(17), 2779; https://doi.org/10.3390/plants14172779 - 5 Sep 2025
Viewed by 453
Abstract
Solanum rostratum Dunal is a highly invasive species with strong environmental adaptability and reproductive capacity, posing serious threats to agroforestry ecosystems and human health. In this study, we compiled occurrence records of S. rostratum in China from online databases and sources in the literature. We employed the Biomod2 ensemble modeling framework to predict the potential distribution of the species under current climatic conditions and four future climate scenarios (SSP126, SSP245, SSP370, and SSP585), and to identify the key environmental variables influencing its distribution. The ensemble model based on the committee averaging (EMca) approach achieved the highest predictive accuracy, with a true skill statistic (TSS) of 0.932 and an area under the curve (AUC) of 0.990. Under present climatic conditions, S. rostratum is predominantly distributed across northern China, particularly in Xinjiang, Inner Mongolia, and the northeastern provinces, covering a total suitable area of 1,191,586.55 km2, with highly suitable habitats accounting for 50.37% of this range. Under future climate scenarios, the species’ suitable range is projected to expand significantly, particularly under the high-emissions SSP585 scenario, with the distribution centroid expected to shift significantly toward high-altitude regions in Gansu Province. Precipitation and temperature emerged as the most influential environmental factors affecting habitat suitability. These findings indicate that ongoing global warming may facilitate the survival, reproduction, and rapid spread of S. rostratum across China in the coming decades. Full article
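
For readers unfamiliar with the two skill metrics quoted above, the short sketch below shows how TSS and AUC are typically computed from presence/absence records and predicted habitat suitability. It is a hedged illustration with made-up values, not the study's Biomod2 (R) pipeline, and the 0.5 binarisation threshold is an assumption.

```python
# Illustration only: computing AUC and the true skill statistic (TSS) for a
# species distribution model from toy presence/absence data. This is NOT the
# authors' biomod2 workflow; all values below are invented.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 0])   # presence (1) / absence (0) records
y_score = np.array([0.9, 0.8, 0.7, 0.2, 0.4, 0.6, 0.1, 0.3, 0.85, 0.15])  # predicted suitability

auc = roc_auc_score(y_true, y_score)                  # area under the ROC curve

y_pred = (y_score >= 0.5).astype(int)                 # binarise at an assumed threshold
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
tss = sensitivity + specificity - 1                   # true skill statistic

print(f"AUC = {auc:.3f}, TSS = {tss:.3f}")
```
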
(This article belongs to the Special Issue Advances in Artificial Intelligence for Plant Research)

39 pages, 4783 KB  
Article
Sparse-MoE-SAM: A Lightweight Framework Integrating MoE and SAM with a Sparse Attention Mechanism for Plant Disease Segmentation in Resource-Constrained Environments
by Benhan Zhao, Xilin Kang, Hao Zhou, Ziyang Shi, Lin Li, Guoxiong Zhou, Fangying Wan, Jiangzhang Zhu, Yongming Yan, Leheng Li and Yulong Wu
Plants 2025, 14(17), 2634; https://doi.org/10.3390/plants14172634 - 24 Aug 2025
Viewed by 641
Abstract
Plant disease segmentation has achieved significant progress with the help of artificial intelligence. However, deploying high-accuracy segmentation models in resource-limited settings faces three key challenges, as follows: (A) Traditional dense attention mechanisms incur quadratic computational complexity growth (O(n²d)), rendering them ill-suited for low-power hardware. (B) Naturally sparse spatial distributions and large-scale variations in the lesions on leaves necessitate models that concurrently capture long-range dependencies and local details. (C) Complex backgrounds and variable lighting in field images often induce segmentation errors. To address these challenges, we propose Sparse-MoE-SAM, an efficient framework based on an enhanced Segment Anything Model (SAM). This deep learning framework integrates sparse attention mechanisms with a two-stage mixture of experts (MoE) decoder. The sparse attention dynamically activates key channels aligned with lesion sparsity patterns, reducing self-attention complexity while preserving long-range context. Stage 1 of the MoE decoder performs coarse-grained boundary localization; Stage 2 achieves fine-grained segmentation by leveraging specialized experts within the MoE, significantly enhancing edge discrimination accuracy. The expert repository—comprising standard convolutions, dilated convolutions, and depthwise separable convolutions—dynamically routes features through optimized processing paths based on input texture and lesion morphology. This enables robust segmentation across diverse leaf textures and plant developmental stages. Further, we design a sparse attention-enhanced Atrous Spatial Pyramid Pooling (ASPP) module to capture multi-scale contexts for both extensive lesions and small spots. Evaluations on three heterogeneous datasets (PlantVillage Extended, CVPPP, and our self-collected field images) show that Sparse-MoE-SAM achieves a mean Intersection-over-Union (mIoU) of 94.2%—surpassing standard SAM by 2.5 percentage points—while reducing computational costs by 23.7% compared to the original SAM baseline. The model also demonstrates balanced performance across disease classes and enhanced hardware compatibility. Our work validates that integrating sparse attention with MoE mechanisms sustains accuracy while drastically lowering computational demands, enabling the scalable deployment of plant disease segmentation models on mobile and edge devices. Full article
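
The complexity claim above contrasts dense self-attention, whose score matrix grows quadratically with the number of tokens, with a sparse variant that keeps only the most informative interactions. The sketch below shows a generic top-k sparse attention in PyTorch purely to illustrate the idea; it is not the Sparse-MoE-SAM module, and the function name and top-k selection rule are assumptions.

```python
# Generic top-k sparse attention (illustrative sketch, not the paper's module).
# Note: this naive version still materialises the full score matrix; practical
# sparse-attention kernels avoid that to obtain the claimed efficiency gains.
import torch
import torch.nn.functional as F

def topk_sparse_attention(q, k, v, top_k=16):
    """q, k, v: (batch, seq_len, dim); each query attends only to its top_k keys."""
    d = q.size(-1)
    scores = torch.matmul(q, k.transpose(-2, -1)) / d ** 0.5        # (B, L, L)
    kth = torch.topk(scores, k=min(top_k, scores.size(-1)), dim=-1).values[..., -1:]
    scores = scores.masked_fill(scores < kth, float("-inf"))        # keep only top-k keys per query
    attn = F.softmax(scores, dim=-1)                                # normalise over surviving keys
    return torch.matmul(attn, v)

out = topk_sparse_attention(torch.randn(2, 64, 32), torch.randn(2, 64, 32), torch.randn(2, 64, 32))
print(out.shape)  # torch.Size([2, 64, 32])
```
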
(This article belongs to the Special Issue Advances in Artificial Intelligence for Plant Research)

22 pages, 33740 KB  
Article
Detection of Pine Wilt Disease in UAV Remote Sensing Images Based on SLMW-Net
by Xiaoli Yuan, Guoxiong Zhou, Yongming Yan and Xuewu Yan
Plants 2025, 14(16), 2490; https://doi.org/10.3390/plants14162490 - 11 Aug 2025
Viewed by 535
Abstract
The pine wood nematode is responsible for pine wilt disease, which poses a significant threat to forest ecosystems worldwide. If not quickly detected and removed, the disease spreads rapidly. Advancements in UAV and image detection technologies are crucial for disease monitoring, enabling efficient and automated identification of pine wilt disease. However, challenges persist in the detection of pine wilt disease, including complex UAV imagery backgrounds, difficulty extracting subtle features, and prediction frame bias. In this study, we develop a specialized UAV remote sensing pine forest ARen dataset and introduce a novel pine wilt disease detection model, SLMW-Net. Firstly, the Self-Learning Feature Extraction Module (SFEM) is proposed, combining a convolutional operation and a learnable normalization layer, which effectively solves the problem of difficult feature extraction from pine trees in complex backgrounds and reduces the interference of irrelevant regions. Secondly, the MicroFeature Attention Mechanism (MFAM) is designed to enhance the capture of tiny features of pine trees infected by initial nematode diseases by combining Grouped Attention and Gated Feed-Forward. Then, Weighted and Linearly Scaled IoU Loss (WLIoU Loss) is introduced, which combines weight adjustment and linear stretch truncation to improve the learning strategy, enhance the model performance and generalization ability. SLMW-Net is trained on the self-built ARen dataset and compared with seven existing methods. The experimental results show that SLMW-Net outperforms all other methods, achieving an mAP@0.5 of 86.7% and an mAP@0.5:0.95 of 40.1%. Compared to the backbone model, the mAP@0.5 increased from 83.9% to 86.7%. Therefore, the proposed SLMW-Net has demonstrated strong capabilities to address three major challenges related to pine wilt disease detection, helping to protect forest health and maintain ecological balance. Full article
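
As background for the WLIoU loss mentioned above, the sketch below shows the plain IoU loss for axis-aligned boxes that such losses refine; the paper's weight adjustment and linear stretch truncation are not reproduced, and the function name is an assumption.

```python
# Baseline IoU loss for box regression (standard formulation, not the paper's WLIoU).
import torch

def iou_loss(pred, target, eps=1e-7):
    """pred, target: (N, 4) boxes as (x1, y1, x2, y2); returns mean 1 - IoU."""
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)          # overlap area
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)
    return (1.0 - iou).mean()
```
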
(This article belongs to the Special Issue Advances in Artificial Intelligence for Plant Research)

23 pages, 3810 KB  
Article
KBNet: A Language and Vision Fusion Multi-Modal Framework for Rice Disease Segmentation
by Xiaoyangdi Yan, Honglin Zhou, Jiangzhang Zhu, Mingfang He, Tianrui Zhao, Xiaobo Tan and Jiangquan Zeng
Plants 2025, 14(16), 2465; https://doi.org/10.3390/plants14162465 - 8 Aug 2025
Viewed by 517
Abstract
High-quality disease segmentation plays a crucial role in the precise identification of rice diseases. Although the existing deep learning methods can identify the disease on rice leaves to a certain extent, these methods often face challenges in dealing with multi-scale disease spots and irregularly growing disease spots. In order to solve the challenges of rice leaf disease segmentation, we propose KBNet, a novel multi-modal framework integrating language and visual features for rice disease segmentation, leveraging the complementary strengths of CNN and Transformer architectures. Firstly, we propose the Kalman Filter Enhanced Kolmogorov–Arnold Networks (KF-KAN) module, which combines the modeling ability of KANs for nonlinear features and the dynamic update mechanism of the Kalman filter to achieve accurate extraction and fusion of multi-scale lesion information. Secondly, we introduce the Boundary-Constrained Physical-Information Neural Network (BC-PINN) module, which embeds the physical priors, such as the growth law of the lesion, into the loss function to strengthen the modeling of irregular lesions. At the same time, through the boundary punishment mechanism, the accuracy of edge segmentation is further improved and the overall segmentation effect is optimized. The experimental results show that the KBNet framework demonstrates solid performance in handling complex and diverse rice disease segmentation tasks and provides key technical support for disease identification, prevention, and control in intelligent agriculture. This method has good popularization value and broad application potential in agricultural intelligent monitoring and management. Full article
(This article belongs to the Special Issue Advances in Artificial Intelligence for Plant Research)

20 pages, 4847 KB  
Article
FCA-STNet: Spatiotemporal Growth Prediction and Phenotype Extraction from Image Sequences for Cotton Seedlings
by Yiping Wan, Bo Han, Pengyu Chu, Qiang Guo and Jingjing Zhang
Plants 2025, 14(15), 2394; https://doi.org/10.3390/plants14152394 - 2 Aug 2025
Viewed by 465
Abstract
To address the limitations of the existing cotton seedling growth prediction methods in field environments, specifically, poor representation of spatiotemporal features and low visual fidelity in texture rendering, this paper proposes an algorithm for the prediction of cotton seedling growth from images based on FCA-STNet. The model leverages historical sequences of cotton seedling RGB images to generate an image of the predicted growth at time t + 1 and extracts 37 phenotypic traits from the predicted image. A novel STNet structure is designed to enhance the representation of spatiotemporal dependencies, while an Adaptive Fine-Grained Channel Attention (FCA) module is integrated to capture both global and local feature information. This attention mechanism focuses on individual cotton plants and their textural characteristics, effectively reducing the interference from common field-related challenges such as insufficient lighting, leaf fluttering, and wind disturbances. The experimental results demonstrate that the predicted images achieved an MSE of 0.0086, MAE of 0.0321, SSIM of 0.8339, and PSNR of 20.7011 on the test set, representing improvements of 2.27%, 0.31%, 4.73%, and 11.20%, respectively, over the baseline STNet. The method outperforms several mainstream spatiotemporal prediction models. Furthermore, the majority of the predicted phenotypic traits exhibited correlations with actual measurements with coefficients above 0.8, indicating high prediction accuracy. The proposed FCA-STNet model enables visually realistic prediction of cotton seedling growth in open-field conditions, offering a new perspective for research in growth prediction. Full article
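
The four image-quality scores reported above (MSE, MAE, SSIM, PSNR) are standard full-reference metrics. The sketch below shows how they can be computed with NumPy and scikit-image for a predicted versus ground-truth image; the random arrays are placeholders, not the study's data.

```python
# Standard full-reference image metrics on placeholder arrays (not the paper's data).
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

rng = np.random.default_rng(0)
truth = rng.random((128, 128, 3)).astype(np.float32)                 # ground-truth RGB image in [0, 1]
pred = np.clip(truth + rng.normal(0, 0.05, truth.shape), 0, 1).astype(np.float32)  # noisy "prediction"

mse = float(np.mean((pred - truth) ** 2))
mae = float(np.mean(np.abs(pred - truth)))
ssim = structural_similarity(truth, pred, channel_axis=-1, data_range=1.0)
psnr = peak_signal_noise_ratio(truth, pred, data_range=1.0)
print(f"MSE={mse:.4f}  MAE={mae:.4f}  SSIM={ssim:.4f}  PSNR={psnr:.2f} dB")
```
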
(This article belongs to the Special Issue Advances in Artificial Intelligence for Plant Research)

29 pages, 3125 KB  
Article
Tomato Leaf Disease Identification Framework FCMNet Based on Multimodal Fusion
by Siming Deng, Jiale Zhu, Yang Hu, Mingfang He and Yonglin Xia
Plants 2025, 14(15), 2329; https://doi.org/10.3390/plants14152329 - 27 Jul 2025
Viewed by 756
Abstract
Precisely recognizing diseases in tomato leaves plays a crucial role in enhancing the health, productivity, and quality of tomato crops. However, disease identification methods that rely on single-mode information often face the problems of insufficient accuracy and weak generalization ability. Therefore, this paper proposes a tomato leaf disease recognition framework FCMNet based on multimodal fusion, which combines tomato leaf disease image and text description to enhance the ability to capture disease characteristics. In this paper, the Fourier-guided Attention Mechanism (FGAM) is designed, which systematically embeds the Fourier frequency-domain information into the spatial-channel attention structure for the first time, enhances the stability and noise resistance of feature expression through spectral transform, and realizes more accurate lesion location by means of multi-scale fusion of local and global features. In order to realize the deep semantic interaction between image and text modality, a Cross Vision–Language Alignment module (CVLA) is further proposed. This module generates visual representations compatible with Bert embeddings by utilizing block segmentation and feature mapping techniques. Additionally, it incorporates a probability-based weighting mechanism to achieve enhanced multimodal fusion, significantly strengthening the model’s comprehension of semantic relationships across different modalities. Furthermore, to enhance both training efficiency and parameter optimization capabilities of the model, we introduce a Multi-strategy Improved Coati Optimization Algorithm (MSCOA). This algorithm integrates Good Point Set initialization with a Golden Sine search strategy, thereby boosting global exploration, accelerating convergence, and effectively preventing entrapment in local optima. Consequently, it exhibits robust adaptability and stable performance within high-dimensional search spaces. The experimental results show that the FCMNet model has increased the accuracy and precision by 2.61% and 2.85%, respectively, compared with the baseline model on the self-built dataset of tomato leaf diseases, and the recall and F1 score have increased by 3.03% and 3.06%, respectively, which is significantly superior to the existing methods. This research provides a new solution for the identification of tomato leaf diseases and has broad potential for agricultural applications. Full article
(This article belongs to the Special Issue Advances in Artificial Intelligence for Plant Research)

17 pages, 3667 KB  
Article
Improving the Recognition of Bamboo Color and Spots Using a Novel YOLO Model
by Yunlong Zhang, Tangjie Nie, Qingping Zeng, Lijie Chen, Wei Liu, Wei Zhang and Long Tong
Plants 2025, 14(15), 2287; https://doi.org/10.3390/plants14152287 - 24 Jul 2025
Viewed by 499
Abstract
The sheaths of bamboo shoots, characterized by distinct colors and spotting patterns, are key phenotypic markers influencing species classification, market value, and genetic studies. This study introduces YOLOv8-BS, a deep learning model optimized for detecting these traits in Chimonobambusa utilis using a dataset from Jinfo Mountain, China. Enhanced by data augmentation techniques, including translation, flipping, and contrast adjustment, YOLOv8-BS outperformed benchmark models (YOLOv7, YOLOv5, YOLOX, and Faster R-CNN) in color and spot detection. For color detection, it achieved a precision of 85.9%, a recall of 83.4%, an F1-score of 84.6%, and an average precision (AP) of 86.8%. For spot detection, it recorded a precision of 90.1%, a recall of 92.5%, an F1-score of 91.1%, and an AP of 96.1%. These results demonstrate superior accuracy and robustness, enabling precise phenotypic analysis for bamboo germplasm evaluation and genetic diversity studies. YOLOv8-BS supports precision agriculture by providing a scalable tool for sustainable bamboo-based industries. Future improvements could enhance model adaptability for fine-grained varietal differences and real-time applications. Full article
(This article belongs to the Special Issue Advances in Artificial Intelligence for Plant Research)

29 pages, 17922 KB  
Article
Wheat Soil-Borne Mosaic Virus Disease Detection: A Perspective of Agricultural Decision-Making via Spectral Clustering and Multi-Indicator Feedback
by Xue Hou, Chao Zhang, Yunsheng Song, Turki Alghamdi, Majed Aborokbah, Hui Zhang, Haoyue La and Yizhen Wang
Plants 2025, 14(15), 2260; https://doi.org/10.3390/plants14152260 - 22 Jul 2025
Viewed by 439
Abstract
The rapid advancement of artificial intelligence is transforming agriculture by enabling data-driven plant disease monitoring and decision support. Soil-borne mosaic wheat virus (SBWMV) is a soil-transmitted virus disease that poses a serious threat to wheat production across multiple ecological zones. Due to the regional variability in environmental conditions and symptom expressions, accurately evaluating the severity of wheat soil-borne mosaic (WSBM) infections remains a persistent challenge. To address this, the problem is formulated as large-scale group decision-making process (LSGDM), where each planting plot is treated as an independent virtual decision maker, providing its own severity assessments. This modeling approach reflects the spatial heterogeneity of the disease and enables a structured mechanism to reconcile divergent evaluations. First, for each site, field observation of infection symptoms are recorded and represented using intuitionistic fuzzy numbers (IFNs) to capture uncertainty in detection. Second, a Bayesian graph convolutional networks model (Bayesian-GCN) is used to construct a spatial trust propagation mechanism, inferring missing trust values and preserving regional dependencies. Third, an enhanced spectral clustering method is employed to group plots with similar symptoms and assessment behaviors. Fourth, a feedback mechanism is introduced to iteratively adjust plot-level evaluations based on a set of defined agricultural decision indicators sets using a multi-granulation rough set (ADISs-MGRS). Once consensus is reached, final rankings of candidate plots are generated from indicators, providing an interpretable and evidence-based foundation for targeted prevention strategies. By using the WSBM dataset collected in 2017–2018 from Walla Walla Valley, Oregon/Washington State border, the United States of America, and performing data augmentation for validation, along with comparative experiments and sensitivity analysis, this study demonstrates that the AI-driven LSGDM model integrating enhanced spectral clustering and ADISs-MGRS feedback mechanisms outperforms traditional models in terms of consensus efficiency and decision robustness. This provides valuable support for multi-party decision making in complex agricultural contexts. Full article
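
The plot-level severity assessments above are encoded as intuitionistic fuzzy numbers (IFNs). The sketch below gives the textbook definition of an IFN, its hesitancy degree, and the widely used intuitionistic fuzzy weighted-average operator; the paper's exact operators, weights, and ratings are not reproduced, and the example values are invented.

```python
# Textbook intuitionistic fuzzy number (IFN) and the standard IFWA aggregation
# operator; illustrative only, not the operators used in the paper.
from dataclasses import dataclass

@dataclass
class IFN:
    mu: float   # membership degree (evidence the plot is infected)
    nu: float   # non-membership degree (evidence it is healthy)

    def __post_init__(self):
        assert 0.0 <= self.mu and 0.0 <= self.nu and self.mu + self.nu <= 1.0

    @property
    def hesitancy(self) -> float:
        return 1.0 - self.mu - self.nu    # residual uncertainty of the assessment

def ifwa(ifns, weights):
    """Intuitionistic fuzzy weighted average of several assessments (weights sum to 1)."""
    prod_mu, prod_nu = 1.0, 1.0
    for x, w in zip(ifns, weights):
        prod_mu *= (1.0 - x.mu) ** w
        prod_nu *= x.nu ** w
    return IFN(1.0 - prod_mu, prod_nu)

ratings = [IFN(0.7, 0.2), IFN(0.5, 0.3), IFN(0.8, 0.1)]   # three hypothetical assessments
print(ifwa(ratings, [0.5, 0.3, 0.2]))
```
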
(This article belongs to the Special Issue Advances in Artificial Intelligence for Plant Research)

38 pages, 10101 KB  
Article
Wheat Cultivation Suitability Evaluation with Stripe Rust Disease: An Agricultural Group Consensus Framework Based on Artificial-Intelligence-Generated Content and Optimization-Driven Overlapping Community Detection
by Tingyu Xu, Haowei Cui, Yunsheng Song, Chao Zhang, Turki Alghamdi and Majed Aborokbah
Plants 2025, 14(12), 1794; https://doi.org/10.3390/plants14121794 - 11 Jun 2025
Viewed by 989
Abstract
Plant modeling uses mathematical and computational methods to simulate plant structures, physiological processes, and interactions with various environments. In precision agriculture, it enables the digital monitoring and prediction of crop growth, supporting better management and efficient resource use. Wheat, as a major global staple, is vital for food security. However, wheat stripe rust, a widespread and destructive disease, threatens yield stability. The paper proposes wheat cultivation suitability evaluation with stripe rust disease using an agriculture group consensus framework (WCSE-AGC) to tackle this issue. Assessing stripe rust severity in regions relies on wheat pathologists’ judgments based on multiple criteria, creating a multi-attribute, multi-decision-maker consensus problem. Limited regional coverage and inconsistent evaluations among wheat pathologists complicate consensus-reaching. To support wheat pathologist participation, this study employs artificial-intelligence-generated content (AIGC) techniques by using Claude 3.7 to simulate wheat pathologists’ scoring through role-playing and chain-of-thought prompting. WCSE-AGC comprises three main stages. First, a graph neural network (GNN) models trust propagation within wheat pathologists’ social networks, completing missing trust links and providing a solid foundation for weighting and clustering. This ensures reliable expert influence estimations. Second, integrating secretary bird optimization (SBO), K-means, and three-way clustering detects overlapping wheat pathologist subgroups, reducing opinion divergence and improving consensus inclusiveness and convergence. Third, a two-stage optimization balances group fairness and adjustment cost, enhancing consensus practicality and acceptance. The paper conducts experiments using publicly available real wheat stripe rust datasets from four different locations, Ethiopia, India, Turkey, and China, and validates the effectiveness and robustness of the framework through comparative and sensitivity analyses. Full article
(This article belongs to the Special Issue Advances in Artificial Intelligence for Plant Research)

21 pages, 5449 KB  
Article
ELD-YOLO: A Lightweight Framework for Detecting Occluded Mandarin Fruits in Plant Research
by Xianyao Wang, Yutong Huang, Siyu Wei, Weize Xu, Xiangsen Zhu, Jiong Mu and Xiaoyan Chen
Plants 2025, 14(11), 1729; https://doi.org/10.3390/plants14111729 - 5 Jun 2025
Cited by 2 | Viewed by 813
Abstract
Mandarin fruit detection provides crucial technical support for yield prediction and the precise identification and harvesting of mandarin fruits. However, challenges such as occlusion from leaves or branches, the presence of small or partially visible fruits, and limitations in model efficiency pose significant obstacles in a complex orchard environment. To tackle these issues, we propose ELD-YOLO, a lightweight detection framework designed to enhance edge detail preservation and improve the detection of small and occluded fruits. Our method incorporates edge-aware processing to strengthen feature representation, introduces a streamlined detection head that balances accuracy with computational cost, and employs an adaptive upsampling strategy to minimize information loss during feature scaling. Experiments on a mandarin fruit dataset show that ELD-YOLO achieves a precision of 89.7%, a recall of 83.7%, an mAP@50 of 92.1%, and an mAP@50:95 of 68.6% while reducing the parameter count by 15.4% compared with the baseline. These results demonstrate that ELD-YOLO provides an effective and efficient solution for fruit detection in complex orchard scenarios. Full article
(This article belongs to the Special Issue Advances in Artificial Intelligence for Plant Research)

20 pages, 3875 KB  
Article
A Bottom-Up Multi-Feature Fusion Algorithm for Individual Tree Segmentation in Dense Rubber Tree Plantations Using Unmanned Aerial Vehicle–Light Detecting and Ranging
by Zhipeng Zeng, Junpeng Miao, Xiao Huang, Peng Chen, Ping Zhou, Junxiang Tan and Xiangjun Wang
Plants 2025, 14(11), 1640; https://doi.org/10.3390/plants14111640 - 27 May 2025
Viewed by 606
Abstract
Accurate individual tree segmentation (ITS) in dense rubber plantations is a challenging task due to overlapping canopies, indistinct tree apexes, and intricate branch structures. To address these challenges, we propose a bottom-up, multi-feature fusion method for segmenting rubber trees using UAV-LiDAR point clouds. Our approach first involves performing a trunk extraction based on branch-point density variations and neighborhood directional features, which allows for the precise separation of trunks from overlapping canopies. Next, we introduce a multi-feature fusion strategy that replaces single-threshold constraints, integrating geometric, directional, and density attributes to classify core canopy points, boundary points, and overlapping regions. Disputed points are then iteratively assigned to adjacent trees based on neighborhood growth angle consistency, enhancing the robustness of the segmentation. Experiments conducted in rubber plantations with varying canopy closure (low, medium, and high) show accuracies of 0.97, 0.98, and 0.95. Additionally, the crown width and canopy projection area derived from the segmented individual tree point clouds are highly consistent with ground truth data, with R2 values exceeding 0.98 and 0.97, respectively. The proposed method provides a reliable foundation for 3D tree modeling and biomass estimation in structurally complex plantations, advancing precision forestry and ecosystem assessment by overcoming the critical limitations of existing ITS approaches in high-closure tropical agroforests. Full article
(This article belongs to the Special Issue Advances in Artificial Intelligence for Plant Research)

30 pages, 10238 KB  
Article
OE-YOLO: An EfficientNet-Based YOLO Network for Rice Panicle Detection
by Hongqing Wu, Maoxue Guan, Jiannan Chen, Yue Pan, Jiayu Zheng, Zichen Jin, Hai Li and Suiyan Tan
Plants 2025, 14(9), 1370; https://doi.org/10.3390/plants14091370 - 30 Apr 2025
Viewed by 1299
Abstract
Accurately detecting rice panicles in complex field environments remains challenging due to their small size, dense distribution, diverse growth directions, and easy confusion with the background. To accurately detect rice panicles, this study proposes OE-YOLO, an enhanced framework derived from YOLOv11, incorporating three synergistic innovations. First, oriented bounding boxes (OBB) replace horizontal bounding boxes (HBB) to precisely capture features of rice panicles across different heights and growth stages. Second, the backbone network is redesigned with EfficientNetV2, leveraging its compound scaling strategy to balance multi-scale feature extraction and computational efficiency. Third, a C3k2_DConv module improved by dynamic convolution is introduced, enabling input-adaptive kernel fusion to amplify discriminative features while suppressing background interference. Extensive experiments on rice Unmanned Aerial Vehicle (UAV) imagery demonstrate OE-YOLO’s superiority, achieving 86.9% mAP50 and surpassing YOLOv8-obb and YOLOv11 by 2.8% and 8.3%, respectively, with only 2.45 M parameters and 4.8 GFLOPs. The model has also been validated at flight heights of 3 m and 10 m and during the heading and filling stages, achieving mAP50 improvements of 8.3%, 6.9%, 6.7%, and 16.6% compared to YOLOv11, respectively, demonstrating the generalization capability of the model. These advancements demonstrated OE-YOLO as a computationally frugal yet highly accurate solution for real-time crop monitoring, addressing critical needs in precision agriculture for robust, oriented detection under resource constraints. Full article
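
To illustrate why oriented bounding boxes (OBB) are preferred over horizontal boxes (HBB) for slender, tilted panicles, the sketch below compares the IoU achieved by an oriented prediction and by an axis-aligned prediction against a rotated, panicle-shaped box using shapely; the box sizes and angles are invented, and this is not OE-YOLO code.

```python
# Geometric illustration of OBB vs. HBB overlap on a tilted, elongated target.
from shapely.geometry import box
from shapely import affinity

panicle = affinity.rotate(box(0, 0, 8, 2), 40)    # slender target rotated by 40 degrees
obb_pred = affinity.rotate(box(0, 0, 8, 2), 35)   # oriented prediction, slightly off-angle
hbb_pred = panicle.envelope                       # tightest axis-aligned box around the target

def iou(a, b):
    return a.intersection(b).area / a.union(b).area

print(f"OBB IoU = {iou(panicle, obb_pred):.2f}")  # high overlap
print(f"HBB IoU = {iou(panicle, hbb_pred):.2f}")  # much lower: the HBB includes extra background
```
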
(This article belongs to the Special Issue Advances in Artificial Intelligence for Plant Research)

20 pages, 2828 KB  
Article
CBSNet: An Effective Method for Potato Leaf Disease Classification
by Yongdong Chen and Wenfu Liu
Plants 2025, 14(5), 632; https://doi.org/10.3390/plants14050632 - 20 Feb 2025
Cited by 2 | Viewed by 921
Abstract
As potato is an important crop, potato disease detection and classification are of key significance in guaranteeing food security and enhancing agricultural production efficiency. Aiming at the problems of tiny spots, blurred disease edges, and susceptibility to noise interference during image acquisition and transmission in potato leaf diseases, we propose a CBSNet-based potato disease recognition method. Firstly, a convolution module called Channel Reconstruction Multi-Scale Convolution (CRMC) is designed to extract the upper and lower features by separating the channel features and applying a more optimized convolution to the upper and lower features, followed by a multi-scale convolution operation to capture the key changes more effectively. Secondly, a new attention mechanism, Spatial Triple Attention (STA), is developed, which first reconstructs the spatial dimensions of the input feature maps, then inputs the reconstructed three types of features into each of the three branches and carries out targeted processing according to the importance of the features, thereby improving the model performance. In addition, the Bat–Lion Algorithm (BLA) is introduced, which combines the Lion algorithm and the bat optimization algorithm and makes the optimization process more adaptive by using the bat algorithm to adjust the gradient direction during the updating process of the Lion algorithm. The BLA not only boosts the model’s ability to recognize potato disease features but also ensures training stability and enhances the model’s robustness in handling noisy images. Experimental results showed that CBSNet achieved an average Accuracy of 92.04% and a Precision of 91.58% on the self-built dataset. It effectively extracts subtle spots and blurry edges of potato leaf diseases, providing strong technical support for disease prevention and control in large-scale potato farming. Full article
(This article belongs to the Special Issue Advances in Artificial Intelligence for Plant Research)

20 pages, 5647 KB  
Article
VM-YOLO: YOLO with VMamba for Strawberry Flowers Detection
by Yujin Wang, Xueying Lin, Zhaowei Xiang and Wen-Hao Su
Plants 2025, 14(3), 468; https://doi.org/10.3390/plants14030468 - 5 Feb 2025
Cited by 2 | Viewed by 2501
Abstract
Computer vision technology is widely used in smart agriculture, primarily because of its non-invasive nature, which avoids causing damage to delicate crops. Nevertheless, the deployment of computer vision algorithms on agricultural machinery with limited computing resources represents a significant challenge. Algorithm optimization with the aim of achieving an equilibrium between accuracy and computational power represents a pivotal research topic and is the core focus of our work. In this paper, we put forward a lightweight hybrid network, named VM-YOLO, for the purpose of detecting strawberry flowers. Firstly, a multi-branch architecture-based fast convolutional sampling module, designated as Light C2f, is proposed to replace the C2f module in the backbone of YOLOv8, in order to enhance the network’s capacity to perceive multi-scale features. Secondly, a state space model-based lightweight neck with a global sensitivity field, designated as VMambaNeck, is proposed to replace the original neck of YOLOv8. After the training and testing of the improved algorithm on a self-constructed strawberry flower dataset, a series of experiments is conducted to evaluate the performance of the model, including ablation experiments, multi-dataset comparative experiments, and comparative experiments against state-of-the-art algorithms. The results show that the VM-YOLO network exhibits superior performance in object detection tasks across diverse datasets compared to the baseline. Furthermore, the results also demonstrate that VM-YOLO has better performances in the mAP, inference speed, and the number of parameters compared to the YOLOv6, Faster R-CNN, FCOS, and RetinaNet. Full article
(This article belongs to the Special Issue Advances in Artificial Intelligence for Plant Research)

19 pages, 8945 KB  
Article
Multimodal Data Fusion for Precise Lettuce Phenotype Estimation Using Deep Learning Algorithms
by Lixin Hou, Yuxia Zhu, Mengke Wang, Ning Wei, Jiachi Dong, Yaodong Tao, Jing Zhou and Jian Zhang
Plants 2024, 13(22), 3217; https://doi.org/10.3390/plants13223217 - 15 Nov 2024
Cited by 5 | Viewed by 2090
Abstract
Effective lettuce cultivation requires precise monitoring of growth characteristics, quality assessment, and optimal harvest timing. In a recent study, a deep learning model based on multimodal data fusion was developed to estimate lettuce phenotypic traits accurately. A dual-modal network combining RGB and depth images was designed using an open lettuce dataset. The network incorporated both a feature correction module and a feature fusion module, significantly enhancing the performance in object detection, segmentation, and trait estimation. The model demonstrated high accuracy in estimating key traits, including fresh weight (fw), dry weight (dw), plant height (h), canopy diameter (d), and leaf area (la), achieving an R2 of 0.9732 for fresh weight. Robustness and accuracy were further validated through 5-fold cross-validation, offering a promising approach for future crop phenotyping. Full article
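
The R2 of 0.9732 reported for fresh weight is the usual coefficient of determination between measured and predicted trait values; a minimal sketch with invented numbers is shown below.

```python
# Coefficient of determination for trait estimation (toy values, not the study's data).
from sklearn.metrics import r2_score

measured_fw = [120.0, 95.5, 143.2, 110.8, 87.4]    # fresh weight in grams (hypothetical)
predicted_fw = [118.3, 98.1, 140.9, 112.5, 90.0]   # model estimates (hypothetical)
print(f"R2 = {r2_score(measured_fw, predicted_fw):.4f}")
```
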
(This article belongs to the Special Issue Advances in Artificial Intelligence for Plant Research)

16 pages, 15828 KB  
Article
Artificial Intelligence Vision Methods for Robotic Harvesting of Edible Flowers
by Fabio Taddei Dalla Torre, Farid Melgani, Ilaria Pertot and Cesare Furlanello
Plants 2024, 13(22), 3197; https://doi.org/10.3390/plants13223197 - 14 Nov 2024
Viewed by 1826
Abstract
Edible flowers, with their increasing demand in the market, face a challenge in labor-intensive hand-picking practices, hindering their attractiveness for growers. This study explores the application of artificial intelligence vision for robotic harvesting, focusing on the fundamental elements: detection, pose estimation, and plucking point estimation. The objective was to assess the adaptability of this technology across various species and varieties of edible flowers. The developed computer vision framework utilizes YOLOv5 for 2D flower detection and leverages the zero-shot capabilities of the Segmentation Anything Model for extracting points of interest from a 3D point cloud, facilitating 3D space flower localization. Additionally, we provide a pose estimation method, a key factor in plucking point identification. The plucking point is determined through a linear regression correlating flower diameter with the height of the plucking point. The results showed effective 2D detection. Further, the zero-shot and standard machine learning techniques employed achieved promising 3D localization, pose estimation, and plucking point estimation. Full article
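
The plucking-point rule described above reduces to a simple linear regression from flower diameter to plucking-point height. The sketch below shows the form of such a model with scikit-learn; the measurements are invented placeholders, not the paper's calibration data.

```python
# Linear regression from flower diameter to plucking-point height (illustrative values).
import numpy as np
from sklearn.linear_model import LinearRegression

diameter_mm = np.array([18.0, 22.0, 25.0, 28.0, 31.0]).reshape(-1, 1)   # flower diameters (hypothetical)
pluck_height_mm = np.array([9.5, 11.0, 12.8, 14.1, 15.9])               # plucking heights (hypothetical)

model = LinearRegression().fit(diameter_mm, pluck_height_mm)
new_flower = np.array([[24.0]])
print(f"predicted plucking height for a 24 mm flower: {model.predict(new_flower)[0]:.1f} mm")
```
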
(This article belongs to the Special Issue Advances in Artificial Intelligence for Plant Research)

15 pages, 1750 KB  
Article
AIpollen: An Analytic Website for Pollen Identification Through Convolutional Neural Networks
by Xingchen Yu, Jiawen Zhao, Zhenxiu Xu, Junrong Wei, Qi Wang, Feng Shen, Xiaozeng Yang and Zhonglong Guo
Plants 2024, 13(22), 3118; https://doi.org/10.3390/plants13223118 - 5 Nov 2024
Cited by 5 | Viewed by 1961
Abstract
With the rapid development of artificial intelligence, deep learning has been widely applied to complex tasks such as computer vision and natural language processing, demonstrating its outstanding performance. This study aims to exploit the high precision and efficiency of deep learning to develop a system for the identification of pollen. To this end, we constructed a dataset across 36 distinct genera. In terms of model selection, we employed a pre-trained ResNet34 network and fine-tuned its architecture to suit our specific task. For the optimization algorithm, we opted for the Adam optimizer and utilized the cross-entropy loss function. Additionally, we implemented ELU activation function, data augmentation, learning rate decay, and early stopping strategies to enhance the training efficiency and generalization capability of the model. After training for 203 epochs, our model achieved an accuracy of 97.01% on the test set and 99.89% on the training set. Further evaluation metrics, such as an F1 score of 95.9%, indicate that the model exhibits good balance and robustness across all categories. To facilitate the use of the model, we develop a user-friendly web interface. Users can upload images of pollen grains through the URL link provided in this article) and immediately receive predicted results of their genus names. Altogether, this study has successfully trained and validated a high-precision pollen grain identification model, providing a powerful tool for the identification of pollen. Full article
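
The training recipe named above (a pre-trained ResNet34 with its classifier adapted to 36 genera, the Adam optimizer, cross-entropy loss, and learning-rate decay) corresponds to a standard fine-tuning loop. The PyTorch sketch below illustrates that recipe; the hyperparameter values and step schedule are assumptions, and the ELU substitution, augmentation, and early stopping are omitted, so this is not the authors' released code.

```python
# Minimal fine-tuning sketch: ImageNet-pretrained ResNet34 adapted to 36 pollen genera.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 36)        # 36 pollen genera

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)                         # assumed learning rate
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.5)   # assumed decay schedule

def train_one_epoch(loader, device="cpu"):
    """loader: DataLoader yielding (image batch, genus-index batch)."""
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```
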
(This article belongs to the Special Issue Advances in Artificial Intelligence for Plant Research)

Review


24 pages, 2054 KB  
Review
AI-Powered Plant Science: Transforming Forestry Monitoring, Disease Prediction, and Climate Adaptation
by Zuo Xu and Dalong Jiang
Plants 2025, 14(11), 1626; https://doi.org/10.3390/plants14111626 - 26 May 2025
Viewed by 1441
Abstract
The integration of artificial intelligence (AI) and forestry is driving transformative advances in precision monitoring, disaster management, carbon sequestration, and biodiversity conservation. However, significant knowledge gaps persist in cross-ecological model generalisation, multi-source data fusion, and ethical implementation. This review provides a comprehensive overview of AI’s transformative role in forestry, focusing on three key areas: resource monitoring, disaster management, and sustainability. Data were collected via a comprehensive literature search of academic databases from 2019 to 2025. The review identified several key applications of AI in forestry, including high-precision resource monitoring with sub-metre accuracy in delineating tree canopies, enhanced disaster management with high recall rates for wildfire detection, and optimised carbon sequestration in mangrove forests. Despite these advancements, challenges remain in cross-ecological model generalisation, multi-source data fusion, and ethical implementation. Future research should focus on developing robust, scalable AI models that can be integrated into existing forestry management systems. Policymakers and practitioners should collaborate to ensure that AI-driven solutions are implemented in a way that balances technological innovation with ecosystem resilience and ethical considerations. Full article
(This article belongs to the Special Issue Advances in Artificial Intelligence for Plant Research)
