Advances in Artificial Intelligence for Plant Research

A special issue of Plants (ISSN 2223-7747). This special issue belongs to the section "Plant Modeling".

Deadline for manuscript submissions: 20 October 2025 | Viewed by 7270

Special Issue Editors


Guest Editor
College of Computer & Information Engineering, Central South University of Forestry and Technology, Changsha 410004, China
Interests: artificial intelligence; computer vision; plant phenotyping; precision agriculture

Guest Editor
Department of Soil and Water Systems, University of Idaho, Moscow, ID, USA
Interests: robotics sensing; decision support systems; climate-smart agriculture; precision agriculture; intelligent robotics

Guest Editor
College of Agriculture/College of Life Sciences, Guizhou University, Guiyang 550025, China
Interests: pest management; biocontrol; smart agriculture; deep learning; image recognition

Special Issue Information

Dear Colleagues,

Rapid advances in artificial intelligence offer transformative tools for botanical research, with the potential to reshape crop management, disease prediction, precision agriculture, and sustainable ecosystem management. This Special Issue focuses on the latest advances, challenges, and opportunities for artificial intelligence in plant research, promoting interdisciplinary collaboration and driving significant progress in plant science. Specific research topics include, but are not limited to, the following:

  1. Plant phenotype analysis: the application of computer vision and machine learning technology to identify and analyze the morphological characteristics and growth state of plants.
  2. Plant disease detection and prediction: using AI technology to predict and identify plant diseases to improve early warning and management efficiency.
  3. Crop management and optimization: combining data analytics and AI algorithms to optimize fertilization, irrigation, and other agricultural practices to improve crop yield and quality.
  4. Plant genomics and genetic research: using AI-assisted genome analysis and genetic algorithms to accelerate plant genetic improvement and new variety development.
  5. Environmental monitoring and adaptation: using AI to monitor the impact of environmental factors on plant growth and help develop plant varieties that adapt to different climatic conditions.
  6. Agricultural robots and automation: studying the application of AI-driven agricultural robots in seeding, picking, and weed control to improve the efficiency of agricultural operations.

Dr. Guoxiong Zhou
Dr. Liujun Li
Dr. Xiaoyulong Chen
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Plants is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • plant research
  • artificial intelligence
  • phenotype analysis
  • disease detection
  • crop management and optimization
  • agricultural robots

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (8 papers)


Research


20 pages, 3443 KiB  
Article
A Bottom-Up Multi-Feature Fusion Algorithm for Individual Tree Segmentation in Dense Rubber Tree Plantations Using Unmanned Aerial Vehicle–Light Detecting and Ranging
by Zhipeng Zeng, Junpeng Miao, Xiao Huang, Peng Chen, Ping Zhou, Junxiang Tan and Xiangjun Wang
Plants 2025, 14(11), 1640; https://doi.org/10.3390/plants14111640 - 27 May 2025
Abstract
Accurate individual tree segmentation (ITS) in dense rubber plantations is a challenging task due to overlapping canopies, indistinct tree apexes, and intricate branch structures. To address these challenges, we propose a bottom-up, multi-feature fusion method for segmenting rubber trees using UAV-LiDAR point clouds. Our approach first involves performing a trunk extraction based on branch-point density variations and neighborhood directional features, which allows for the precise separation of trunks from overlapping canopies. Next, we introduce a multi-feature fusion strategy that replaces single-threshold constraints, integrating geometric, directional, and density attributes to classify core canopy points, boundary points, and overlapping regions. Disputed points are then iteratively assigned to adjacent trees based on neighborhood growth angle consistency, enhancing the robustness of the segmentation. Experiments conducted in rubber plantations with varying canopy closure (low, medium, and high) show accuracies of 0.97, 0.98, and 0.95. Additionally, the crown width and canopy projection area derived from the segmented individual tree point clouds are highly consistent with ground truth data, with R2 values exceeding 0.98 and 0.97, respectively. The proposed method provides a reliable foundation for 3D tree modeling and biomass estimation in structurally complex plantations, advancing precision forestry and ecosystem assessment by overcoming the critical limitations of existing ITS approaches in high-closure tropical agroforests. Full article
(This article belongs to the Special Issue Advances in Artificial Intelligence for Plant Research)
30 pages, 10238 KiB  
Article
OE-YOLO: An EfficientNet-Based YOLO Network for Rice Panicle Detection
by Hongqing Wu, Maoxue Guan, Jiannan Chen, Yue Pan, Jiayu Zheng, Zichen Jin, Hai Li and Suiyan Tan
Plants 2025, 14(9), 1370; https://doi.org/10.3390/plants14091370 - 30 Apr 2025
Viewed by 345
Abstract
Accurately detecting rice panicles in complex field environments remains challenging due to their small size, dense distribution, diverse growth directions, and easy confusion with the background. To accurately detect rice panicles, this study proposes OE-YOLO, an enhanced framework derived from YOLOv11, incorporating three synergistic innovations. First, oriented bounding boxes (OBB) replace horizontal bounding boxes (HBB) to precisely capture features of rice panicles across different heights and growth stages. Second, the backbone network is redesigned with EfficientNetV2, leveraging its compound scaling strategy to balance multi-scale feature extraction and computational efficiency. Third, a C3k2_DConv module improved by dynamic convolution is introduced, enabling input-adaptive kernel fusion to amplify discriminative features while suppressing background interference. Extensive experiments on rice Unmanned Aerial Vehicle (UAV) imagery demonstrate OE-YOLO’s superiority, achieving 86.9% mAP50 and surpassing YOLOv8-obb and YOLOv11 by 2.8% and 8.3%, respectively, with only 2.45 M parameters and 4.8 GFLOPs. The model has also been validated at flight heights of 3 m and 10 m and during the heading and filling stages, achieving mAP50 improvements of 8.3%, 6.9%, 6.7%, and 16.6% compared to YOLOv11, respectively, demonstrating its generalization capability. These results establish OE-YOLO as a computationally frugal yet highly accurate solution for real-time crop monitoring, addressing critical needs in precision agriculture for robust, oriented detection under resource constraints. Full article
(This article belongs to the Special Issue Advances in Artificial Intelligence for Plant Research)
20 pages, 2828 KiB  
Article
CBSNet: An Effective Method for Potato Leaf Disease Classification
by Yongdong Chen and Wenfu Liu
Plants 2025, 14(5), 632; https://doi.org/10.3390/plants14050632 - 20 Feb 2025
Cited by 1 | Viewed by 475
Abstract
As potato is an important crop, potato disease detection and classification are of key significance in guaranteeing food security and enhancing agricultural production efficiency. To address the tiny spots, blurred disease edges, and susceptibility to noise interference during image acquisition and transmission that characterize potato leaf diseases, we propose a CBSNet-based potato disease recognition method. Firstly, a convolution module called Channel Reconstruction Multi-Scale Convolution (CRMC) is designed, which separates the channel features, applies optimized convolutions to the upper and lower features, and then performs a multi-scale convolution operation to capture key changes more effectively. Secondly, a new attention mechanism, Spatial Triple Attention (STA), is developed: it reconstructs the spatial dimensions of the input feature maps, feeds the three reconstructed feature types into three branches, and processes each according to feature importance, thereby improving model performance. In addition, the Bat–Lion Algorithm (BLA) is introduced, which combines the Lion optimizer with the bat optimization algorithm, using the bat algorithm to adjust the gradient direction during the Lion update step and thereby making the optimization process more adaptive. The BLA not only boosts the model’s ability to recognize potato disease features but also stabilizes training and enhances the model’s robustness on noisy images. Experimental results showed that CBSNet achieved an average accuracy of 92.04% and a precision of 91.58% on the self-built dataset. It effectively extracts subtle spots and blurry edges of potato leaf diseases, providing strong technical support for disease prevention and control in large-scale potato farming. Full article
(This article belongs to the Special Issue Advances in Artificial Intelligence for Plant Research)
20 pages, 5647 KiB  
Article
VM-YOLO: YOLO with VMamba for Strawberry Flowers Detection
by Yujin Wang, Xueying Lin, Zhaowei Xiang and Wen-Hao Su
Plants 2025, 14(3), 468; https://doi.org/10.3390/plants14030468 - 5 Feb 2025
Viewed by 1295
Abstract
Computer vision technology is widely used in smart agriculture, primarily because of its non-invasive nature, which avoids causing damage to delicate crops. Nevertheless, deploying computer vision algorithms on agricultural machinery with limited computing resources represents a significant challenge. Balancing accuracy against computational cost is therefore a pivotal research topic and the core focus of our work. In this paper, we put forward a lightweight hybrid network, named VM-YOLO, for detecting strawberry flowers. Firstly, a multi-branch architecture-based fast convolutional sampling module, designated Light C2f, is proposed to replace the C2f module in the backbone of YOLOv8, in order to enhance the network’s capacity to perceive multi-scale features. Secondly, a state space model-based lightweight neck with a global sensitivity field, designated VMambaNeck, is proposed to replace the original neck of YOLOv8. After training and testing the improved algorithm on a self-constructed strawberry flower dataset, a series of experiments is conducted to evaluate the performance of the model, including ablation experiments, multi-dataset comparative experiments, and comparative experiments against state-of-the-art algorithms. The results show that the VM-YOLO network exhibits superior performance in object detection tasks across diverse datasets compared to the baseline. Furthermore, the results also demonstrate that VM-YOLO outperforms YOLOv6, Faster R-CNN, FCOS, and RetinaNet in mAP, inference speed, and parameter count. Full article
(This article belongs to the Special Issue Advances in Artificial Intelligence for Plant Research)
19 pages, 8945 KiB  
Article
Multimodal Data Fusion for Precise Lettuce Phenotype Estimation Using Deep Learning Algorithms
by Lixin Hou, Yuxia Zhu, Mengke Wang, Ning Wei, Jiachi Dong, Yaodong Tao, Jing Zhou and Jian Zhang
Plants 2024, 13(22), 3217; https://doi.org/10.3390/plants13223217 - 15 Nov 2024
Cited by 3 | Viewed by 1198
Abstract
Effective lettuce cultivation requires precise monitoring of growth characteristics, quality assessment, and optimal harvest timing. In a recent study, a deep learning model based on multimodal data fusion was developed to estimate lettuce phenotypic traits accurately. A dual-modal network combining RGB and depth images was designed using an open lettuce dataset. The network incorporated both a feature correction module and a feature fusion module, significantly enhancing the performance in object detection, segmentation, and trait estimation. The model demonstrated high accuracy in estimating key traits, including fresh weight (fw), dry weight (dw), plant height (h), canopy diameter (d), and leaf area (la), achieving an R2 of 0.9732 for fresh weight. Robustness and accuracy were further validated through 5-fold cross-validation, offering a promising approach for future crop phenotyping. Full article
(This article belongs to the Special Issue Advances in Artificial Intelligence for Plant Research)
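The lettuce study above reports its trait-estimation quality as R² values (e.g., 0.9732 for fresh weight). As a purely illustrative aside — not the authors' code, and using made-up numbers — the coefficient of determination behind such a figure can be computed as follows:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical fresh-weight estimates (grams) vs. ground truth
truth = [120.0, 95.0, 150.0, 80.0, 110.0]
preds = [118.0, 97.0, 148.0, 83.0, 108.0]
print(round(r_squared(truth, preds), 4))  # close to 1.0 means near-perfect agreement
```

An R² near 1 indicates the model's predictions track the measured trait almost exactly, which is what the reported 0.97+ values claim for fresh weight and other traits.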
16 pages, 15828 KiB  
Article
Artificial Intelligence Vision Methods for Robotic Harvesting of Edible Flowers
by Fabio Taddei Dalla Torre, Farid Melgani, Ilaria Pertot and Cesare Furlanello
Plants 2024, 13(22), 3197; https://doi.org/10.3390/plants13223197 - 14 Nov 2024
Viewed by 1221
Abstract
Edible flowers, with their increasing demand in the market, face a challenge in labor-intensive hand-picking practices, hindering their attractiveness for growers. This study explores the application of artificial intelligence vision for robotic harvesting, focusing on the fundamental elements: detection, pose estimation, and plucking point estimation. The objective was to assess the adaptability of this technology across various species and varieties of edible flowers. The developed computer vision framework utilizes YOLOv5 for 2D flower detection and leverages the zero-shot capabilities of the Segment Anything Model for extracting points of interest from a 3D point cloud, facilitating 3D space flower localization. Additionally, we provide a pose estimation method, a key factor in plucking point identification. The plucking point is determined through a linear regression correlating flower diameter with the height of the plucking point. The results showed effective 2D detection. Further, the zero-shot and standard machine learning techniques employed achieved promising 3D localization, pose estimation, and plucking point estimation. Full article
(This article belongs to the Special Issue Advances in Artificial Intelligence for Plant Research)
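The abstract above determines the plucking point via a linear regression from flower diameter to plucking-point height. As an illustrative sketch only — the calibration pairs and fitted coefficients below are invented, not taken from the paper — an ordinary least-squares fit of that relationship looks like this:

```python
def fit_line(x, y):
    """Ordinary least squares for y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    b = my - a * mx
    return a, b

# Hypothetical calibration pairs: flower diameter (mm) -> plucking height (mm)
diameters = [20.0, 25.0, 30.0, 35.0, 40.0]
heights = [12.0, 14.5, 17.0, 19.5, 22.0]
a, b = fit_line(diameters, heights)

# Predicted plucking height for a newly detected 28 mm flower
print(round(a * 28.0 + b, 2))
```

Once fitted, the regression lets the robot infer where to cut from a single measurable quantity (the detected flower's diameter), avoiding a separate stem-detection step.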
15 pages, 1750 KiB  
Article
AIpollen: An Analytic Website for Pollen Identification Through Convolutional Neural Networks
by Xingchen Yu, Jiawen Zhao, Zhenxiu Xu, Junrong Wei, Qi Wang, Feng Shen, Xiaozeng Yang and Zhonglong Guo
Plants 2024, 13(22), 3118; https://doi.org/10.3390/plants13223118 - 5 Nov 2024
Cited by 2 | Viewed by 1407
Abstract
With the rapid development of artificial intelligence, deep learning has been widely applied to complex tasks such as computer vision and natural language processing, demonstrating its outstanding performance. This study aims to exploit the high precision and efficiency of deep learning to develop a system for the identification of pollen. To this end, we constructed a dataset across 36 distinct genera. In terms of model selection, we employed a pre-trained ResNet34 network and fine-tuned its architecture to suit our specific task. For the optimization algorithm, we opted for the Adam optimizer and utilized the cross-entropy loss function. Additionally, we implemented the ELU activation function, data augmentation, learning rate decay, and early stopping strategies to enhance the training efficiency and generalization capability of the model. After training for 203 epochs, our model achieved an accuracy of 97.01% on the test set and 99.89% on the training set. Further evaluation metrics, such as an F1 score of 95.9%, indicate that the model exhibits good balance and robustness across all categories. To facilitate the use of the model, we developed a user-friendly web interface. Users can upload images of pollen grains through the URL link provided in this article and immediately receive predicted genus names. Altogether, this study has successfully trained and validated a high-precision pollen grain identification model, providing a powerful tool for pollen identification. Full article
(This article belongs to the Special Issue Advances in Artificial Intelligence for Plant Research)
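The pollen study's training recipe includes early stopping. As a generic illustration — not the authors' implementation, and with class name, patience value, and loss sequence all invented — patience-based early stopping on validation loss can be sketched as:

```python
class EarlyStopper:
    """Stop training when validation loss fails to improve for `patience` epochs."""

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        """Record one epoch's validation loss; return True when training should stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience

stopper = EarlyStopper(patience=2)
losses = [0.9, 0.7, 0.65, 0.66, 0.67, 0.5]  # hypothetical per-epoch validation losses
stopped_at = None
for epoch, loss in enumerate(losses):
    if stopper.step(loss):
        stopped_at = epoch  # two epochs without improvement after epoch 2
        break
print(stopped_at)
```

The mechanism guards against overfitting: once validation loss stops improving for `patience` consecutive epochs, training halts even if training-set accuracy is still rising, which is consistent with the large gap the abstract reports between training (99.89%) and test (97.01%) accuracy.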
Review


24 pages, 1840 KiB  
Review
AI-Powered Plant Science: Transforming Forestry Monitoring, Disease Prediction, and Climate Adaptation
by Zuo Xu and Dalong Jiang
Plants 2025, 14(11), 1626; https://doi.org/10.3390/plants14111626 - 26 May 2025
Viewed by 28
Abstract
The integration of artificial intelligence (AI) and forestry is driving transformative advances in precision monitoring, disaster management, carbon sequestration, and biodiversity conservation. However, significant knowledge gaps persist in cross-ecological model generalisation, multi-source data fusion, and ethical implementation. This review provides a comprehensive overview of AI’s transformative role in forestry, focusing on three key areas: resource monitoring, disaster management, and sustainability. Data were collected via a comprehensive literature search of academic databases from 2019 to 2025. The review identified several key applications of AI in forestry, including high-precision resource monitoring with sub-metre accuracy in delineating tree canopies, enhanced disaster management with high recall rates for wildfire detection, and optimised carbon sequestration in mangrove forests. Despite these advancements, challenges remain in cross-ecological model generalisation, multi-source data fusion, and ethical implementation. Future research should focus on developing robust, scalable AI models that can be integrated into existing forestry management systems. Policymakers and practitioners should collaborate to ensure that AI-driven solutions are implemented in a way that balances technological innovation with ecosystem resilience and ethical considerations. Full article
(This article belongs to the Special Issue Advances in Artificial Intelligence for Plant Research)