Advanced Remote Sensing and AI Techniques in Agriculture and Forestry

A special issue of Plants (ISSN 2223-7747). This special issue belongs to the section "Plant Modeling".

Deadline for manuscript submissions: 30 June 2026 | Viewed by 7927

Special Issue Editors


Guest Editor
Department of Crop and Soil Sciences, College of Agriculture and Environmental Sciences, University of Georgia, Tifton, GA 31793, USA
Interests: deep learning; agricultural robots; precision agriculture; computer vision; automation; high-throughput plant phenotyping

Guest Editor
Department of Computer Science, Wake Forest University, 1834 Wake Forest Road, Winston-Salem, NC 27109, USA
Interests: remote sensing; ecological monitoring; biodiversity and species distribution; object detection; land cover classification; statistical modeling and simulation

Special Issue Information

Dear Colleagues,

The rapid advancement of artificial intelligence (AI), computer vision, and remote sensing technologies has opened new frontiers for plant research in both agricultural and forestry systems. These tools enable intelligent, scalable, and data-driven solutions for understanding vegetation dynamics, determining plant conditions, and optimizing resource management.

This Special Issue aims to provide a comprehensive platform for cutting-edge research that explores advanced AI and remote sensing technologies for plant monitoring, analysis, and decision support. While studies employing remote sensing, UAV, or multispectral imaging are highly encouraged, submissions are not limited to sensing-based approaches. Contributions focusing purely on algorithmic innovation, such as model optimization, lightweight architecture design, and novel learning strategies, are equally welcome.

The scope of this Special Issue includes, but is not limited to, algorithm development and applications for target detection, classification, and segmentation in agricultural and forestry contexts. Topics may also cover disease and pest identification, fruit detection and maturity assessment, yield estimation, vegetation mapping, species distribution, and stress diagnosis. By bridging theoretical advancement with practical implementation, this Special Issue seeks to promote the next generation of intelligent, efficient, and sustainable solutions for precision agriculture and forestry management.

Dr. Rui-Feng Wang
Dr. Kangning Cui
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com after registering and logging in to the website; registered authors can then proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Plants is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and written in clear English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • computer vision
  • remote sensing
  • unmanned aerial vehicle (UAV)
  • precision agriculture
  • forestry monitoring
  • deep learning
  • machine learning
  • object detection
  • image classification
  • image segmentation
  • plant disease and pest recognition
  • fruit and maturity assessment
  • yield estimation
  • vegetation mapping and stress analysis
  • lightweight network architecture

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies is available on the journal's website.

Published Papers (8 papers)


Research


21 pages, 4602 KB  
Article
A Condition-Aware Shading Domain-Adaptive Framework for Robust Chlorophyll Inversion Across Shade Managements in Hopea hainanensis
by Lin Chen, Xiaoli Yang, Xiaona Dong, Ling Lin, Mengmeng Shi, Feifei Chen, Chuanteng Huang, Huilin Yu, Ying Yuan and Miaoyi Han
Plants 2026, 15(8), 1236; https://doi.org/10.3390/plants15081236 - 17 Apr 2026
Viewed by 359
Abstract
Shade management, which is widely adopted in cultivation and understory regeneration, alters plant light environments, thereby degrading trait inversion performance and posing a key challenge in plant phenotyping. To address this issue, this study reframed chlorophyll retrieval of Hopea hainanensis under shade management as an illumination-regime-dependent conditional domain shift problem, and developed a condition-aware domain adaptation framework (CAI-DAI) tailored to this setting. The results showed that chlorophyll content increased with shading intensity, accompanied by clear differences in canopy spectral distributions among shading levels, supporting the presence of condition-dependent variation under shade management. Model comparisons showed that CA-IE and CAI-DAI, which integrate conditional encoding and conditional alignment, performed better than the comparative models across fine-tuning ratios from 30% to 70%. Among them, CAI-DAI achieved the best and most stable performance, with a test MAE of 4.355–4.774 μg·cm−2, an nRMSE of 16.4–18.2%, and an R2 of 0.456–0.585. Further evaluation at individual shading levels (S1–S4) showed that CAI-DAI produced narrower error ranges than CA-IE and smaller error fluctuations under most fine-tuning ratios. These results demonstrate that the proposed framework effectively improves robustness under heterogeneous shading conditions and limited labeled samples, providing methodological support for chlorophyll monitoring and decision-making related to shade management.
(This article belongs to the Special Issue Advanced Remote Sensing and AI Techniques in Agriculture and Forestry)

18 pages, 28028 KB  
Article
SCEA-YOLO: A General-Purpose Maturity Grading Model of Multi-Crop Greenhouse Robots
by Tianyuan Li, Ping Liu, Dongfang Song, Xingtian Zhao, Xiangyu Lyu and Kun Zhang
Plants 2026, 15(7), 1102; https://doi.org/10.3390/plants15071102 - 3 Apr 2026
Viewed by 431
Abstract
Accurate classification of fruit maturity is essential for automated grading and robotic manipulation in modern greenhouse cultivation. Most existing methods rely on crop-specific models, severely restricting their scalability in multi-crop scenarios. To overcome this limitation, this study presents SCEA-YOLO, a unified and efficient instance segmentation framework built on YOLOv11s-seg, for simultaneous maturity classification of tomatoes and sweet peppers. To boost feature discrimination, reduce computational redundancy, and alleviate class imbalance, SCEA-YOLO integrates spatial-channel reconstruction convolution and an efficient multi-scale attention mechanism, while replacing the original detection head with the proposed EA-Head. The model is evaluated on a hybrid dataset captured under diverse greenhouse conditions, including varying illumination, fruit occlusion, and overlapping canopies. Its robustness to different viewing angles and camera distances is further validated via deployment on an automated grading robot. Compared with the baseline, SCEA-YOLO enhances classification precision and mAP50–95 by 5.3% and 2.3% for tomatoes, and 1.2% and 1.4% for sweet peppers, respectively. With only 33.2 GFLOPs, the model satisfies real-time inference demands. Benefiting from its lightweight structure and real-time performance, SCEA-YOLO can be readily deployed on embedded systems and robotic platforms. It offers a practical, unified, and scalable solution for intelligent fruit maturity evaluation in multi-crop greenhouse production.

20 pages, 19943 KB  
Article
MBMSA-UNet: A Multi-Scale Attention-Based Instance Segmentation Model for Moso Bamboo Cells
by Xue Zhou, Ziwei Cheng, Long Chen, Jiawei Pei, Yingyu Liao, Weizhang Liu, Chunyin Wu and Changyu Liu
Plants 2026, 15(6), 969; https://doi.org/10.3390/plants15060969 - 20 Mar 2026
Viewed by 3240
Abstract
Instance segmentation of moso bamboo cells is a critical step in quantitative structural analysis of bamboo materials and plant phenomics research. Moso bamboo tissues are mainly composed of vascular bundles and parenchyma cells. Within vascular bundles, fiber cells exhibit thick cell walls and extremely dense arrangements, whereas vessel cells are characterized by large diameters and complex internal structures. These features frequently lead to blurred boundaries, structural complexity, and local overexposure in microscopic images, making it difficult for traditional segmentation algorithms to achieve stable and accurate results. Although U-Net has demonstrated outstanding performance in biological microscopic image analysis, its feature extraction capability and boundary recognition stability remain insufficient when dealing with the composite structure of moso bamboo. To address these challenges, this study proposes an improved model based on a multi-scale attention mechanism, termed MBMSA-UNet (Moso Bamboo Multi-Scale Attention U-Net). Building upon the encoder–decoder architecture of U-Net, the proposed model introduces a multi-scale channel-spatial attention block, aiming to handle the pronounced morphological and scale differences among vessels, fibers, and parenchyma cells. By adaptively reweighting features at different scales, the model enhances cross-layer feature fusion and strengthens responses to key regions, thereby effectively suppressing local overexposure interference and emphasizing boundary features between different cell types. Experimental results demonstrate that, compared with U-Net and several of its improved variants, MBMSA-UNet achieves higher segmentation accuracy and greater robustness on microscopic images of moso bamboo, providing a solid foundation for fine-grained quantitative analysis of complex bamboo tissues.

23 pages, 9431 KB  
Article
Hybrid Deep Learning–Geostatistical Mapping of Forest Aboveground Biomass in Lishui, China
by Rui Qian, Qilin Zhang, Yuying Gong, Jingyi Wang, Xiaolei Cui, Xiong Yin and Mingshi Li
Plants 2026, 15(4), 587; https://doi.org/10.3390/plants15040587 - 12 Feb 2026
Viewed by 618
Abstract
Forest aboveground biomass (AGB) is a key indicator of forest productivity and carbon sequestration, yet many remote sensing AGB models overlook spatial autocorrelation in plot observations and model residuals. This study proposes a hybrid framework that combines a CNN-Transformer (Convolutional Neural Network-Transformer) model with geostatistical Kriging of residuals to improve regional AGB mapping in Lishui City, Zhejiang Province, China. Using 398 forest plots and multi-source predictors derived from Sentinel-2 imagery, ALOS-2 PALSAR-2 SAR data, and the ALOS 12.5 m DEM, relevant variables were screened using Random Forest importance ranking. The most influential predictors included Sentinel-2 Band 8 and Band 12, EVI, PC1, mean77, HH/HV, ARVI, NDVI, RVI, and elevation. Ten-fold cross-validation showed that the CNN-Transformer-CK model had the highest accuracy in predicting forest AGB, with a validation R2 of 0.72 and RMSE of 12.18 t/ha, followed by the CNN-Transformer model (R2 = 0.69, RMSE = 12.22 t/ha) and RF (R2 = 0.59, RMSE = 14.31 t/ha). The proposed approach supports wall-to-wall AGB mapping for forest management and conservation planning.

24 pages, 5237 KB  
Article
DCA-UNet: A Cross-Modal Ginkgo Crown Recognition Method Based on Multi-Source Data
by Yunzhi Guo, Yang Yu, Yan Li, Mengyuan Chen, Wenwen Kong, Yunpeng Zhao and Fei Liu
Plants 2026, 15(2), 249; https://doi.org/10.3390/plants15020249 - 13 Jan 2026
Cited by 1 | Viewed by 604
Abstract
Wild ginkgo, as an endangered species, holds significant value for genetic resource conservation, yet its practical applications face numerous challenges. Traditional field surveys are inefficient in mountainous mixed forests, while satellite remote sensing is limited by spatial resolution. Current deep learning approaches relying on single-source data or merely simple multi-source fusion fail to fully exploit information, leading to suboptimal recognition performance. This study presents a multimodal ginkgo crown dataset, comprising RGB and multispectral images acquired by a UAV platform. To achieve precise crown segmentation with this data, we propose a novel dual-branch dynamic weighting fusion network, termed dual-branch cross-modal attention-enhanced UNet (DCA-UNet). We design a dual-branch encoder (DBE) with a two-stream architecture for independent feature extraction from each modality. We further develop a cross-modal interaction fusion module (CIF), employing cross-modal attention and learnable dynamic weights to boost multi-source information fusion. Additionally, we introduce an attention-enhanced decoder (AED) that combines progressive upsampling with a hybrid channel-spatial attention mechanism, thereby effectively utilizing multi-scale features and enhancing boundary semantic consistency. Evaluation on the ginkgo dataset demonstrates that DCA-UNet achieves a segmentation performance of 93.42% IoU (Intersection over Union), 96.82% PA (Pixel Accuracy), 96.38% Precision, and 96.60% F1-score. These results outperform the differential feature attention fusion network (DFAFNet) by 12.19%, 6.37%, 4.62%, and 6.95%, respectively, and surpass the single-modality baselines (RGB or multispectral) in all metrics. Superior performance on cross-flight-altitude data further validates the model's strong generalization capability and robustness in complex scenarios. These results demonstrate the superiority of DCA-UNet in UAV-based multimodal ginkgo crown recognition, offering a reliable and efficient solution for monitoring wild endangered tree species.

28 pages, 6257 KB  
Article
A Precise Apple Quality Prediction Model Integrating Driving Factor Screening and BP Neural Network
by Junkai Zeng, Mingyang Yu, Yan Chen, Xin Li, Jianping Bao and Xiaoqiu Pu
Plants 2025, 14(24), 3795; https://doi.org/10.3390/plants14243795 - 13 Dec 2025
Cited by 1 | Viewed by 692
Abstract
Apple fruit quality is primarily determined by Vitamin C (VC), Soluble Saccharides (SSs), Titratable Acid (TA), and the Soluble Saccharides/Titratable Acid ratio (SSs/TA). This study aims to establish a prediction model based on the Back Propagation (BP) neural network by analyzing the intrinsic relationships between these quality indicators and the photosynthetic physiological characteristics of fruit trees, providing a new method for the precise prediction and regulation of fruit quality. Using 'Fuji' apple as the material, fruit quality indicators, leaf photosynthetic parameters, canopy structure indicators, and carbon–water–nitrogen metabolism indicators were systematically measured. Correlation analysis was employed to identify key influencing factors, BP neural network models with different hidden layer structures were constructed, and the optimal feature subset was screened through feature importance analysis, single-factor sensitivity analysis, and ablation experiments, ultimately establishing a simplified and efficient prediction model. Pn, Gs, SPCI, and DUE showed significant positive correlations with VC, SS, and SS/TA, whereas N and NLT were significantly positively correlated with TA content. SUE was identified as a common core driving factor for VC, SS, and SS/TA. The BP neural network demonstrated strong predictive performance for the four quality indicators, with the optimal model achieving validation set R2 values of 0.87, 0.86, 0.86, and 0.89, respectively. The simplified model developed through feature screening exhibited further improved performance: the validation set R2 for the VC prediction model increased to 0.93, while MAE and MAPE decreased by 32% and 35%, respectively. Photosynthetic characteristics and nitrogen metabolism status of the fruit trees serve as key physiological foundations determining apple quality. The quality prediction model based on the BP neural network achieved high accuracy, and its predictive performance was significantly enhanced after feature refinement, providing an effective tool for precise apple quality prediction and smart orchard management.

Review


38 pages, 79039 KB  
Review
Towards Robust UAV Navigation in Agriculture: Key Technologies, Application, and Future Directions
by Guantong Dong, Xiuhua Lou and Haihua Wang
Plants 2026, 15(9), 1303; https://doi.org/10.3390/plants15091303 - 23 Apr 2026
Viewed by 309
Abstract
Unmanned aerial vehicles (UAVs) are becoming an important platform for precision agriculture, supporting both high-throughput sensing and active field operations such as spraying, monitoring, and phenotyping. However, unlike general UAV applications, agricultural environments impose distinctive challenges due to heterogeneous field structures, canopy occlusion, terrain variation, dynamic disturbances, and strong coupling between navigation performance and task quality. To address this gap, this review presents a systematic analysis of UAV navigation in agricultural environments from a system-level perspective. The review first summarizes the core technical components of agricultural UAV navigation, including sensing, localization, mapping, planning, and control. It then discusses how navigation requirements vary across representative scenarios such as open fields, orchards, and terraced farmland, and examines their roles in key applications including aerial mapping, field monitoring, precision spraying, and close-range orchard operations. In addition, datasets, simulation platforms, and evaluation protocols relevant to agricultural UAV navigation are reviewed. Finally, major challenges are identified, including scene heterogeneity, perception degradation, insufficient task-semantic integration, limited control robustness, and the lack of standardized benchmarks. Future research should move toward robust, task-aware, and modular navigation architectures that support reliable and scalable agricultural UAV deployment.

44 pages, 24044 KB  
Review
Ground Mobile Robots for High-Throughput Plant Phenotyping: A Review from the Closed-Loop Perspective of Perception, Decision, and Action
by Heng-Wei Zhang, Yi-Ming Qin, An-Qi Wu, Xi Xi, Pingfan Hu and Rui-Feng Wang
Plants 2026, 15(8), 1218; https://doi.org/10.3390/plants15081218 - 16 Apr 2026
Viewed by 1015
Abstract
High-throughput plant phenotyping (HTPP) is increasingly limited by the mismatch between the need for field-relevant, fine-grained phenotypic information and the restricted capability of conventional observation platforms under complex agricultural conditions. Ground mobile robots are emerging as the key carrier for resolving this gap because they combine close-range sensing, autonomous mobility, and physical interaction within real field environments. In this paper, a structured scoping review is presented using a closed-loop perception–decision–action pipeline as the organizing principle. Within this framework, recent advances are synthesized from the perspectives of multimodal fusion, localization-aware sensing, motion planning, deep-learning-based phenotypic analysis, active observation, robotic intervention, and edge deployment. The review further clarifies the complementary roles of Unmanned Aerial Vehicles (UAVs), Unmanned Ground Vehicles (UGVs), and air–ground collaboration in multiscale phenotyping workflows. Beyond summarizing technologies, the article provides three concrete deliverables: a structured taxonomy of mobile phenotyping systems; comparative tables covering sensing modalities, localization/navigation methods, and AI models; and a research agenda linking technical progress to field deployability. The synthesis highlights four persistent bottlenecks, namely environmental generalization, annotation scarcity, limited standardization and reproducibility, and the gap between advanced models and agricultural edge hardware. Overall, ground robots are identified not merely as sensing platforms, but as the central system architecture for advancing mobile phenotyping toward autonomous, fine-grained, and field-deployable operation.
