Intelligent Extraction of Phenotypic Traits in Agroforestry

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Forest Remote Sensing".

Deadline for manuscript submissions: 15 July 2025 | Viewed by 8446

Special Issue Editors


Guest Editor
College of Information Science and Technology & Artificial Intelligence, Nanjing Forestry University, Nanjing, China
Interests: computer vision; deep learning; forestry remote sensing

Guest Editor
Research Institute of Subtropical Forestry, Chinese Academy of Forestry, Hangzhou, China
Interests: forest phenomics; UAV remote sensing; tree breeding

Guest Editor
Plant Phenomics Research Centre, Academy for Advanced Interdisciplinary Studies, Nanjing Agricultural University, Nanjing 210095, China
Interests: intelligent extraction of plant targets and traits; LiDAR; multi-source remote sensing data fusion

Guest Editor
New Zealand School of Forestry, University of Canterbury, Christchurch 8140, New Zealand
Interests: forestry; remote sensing; LiDAR; optimal imaging; machine learning

Guest Editor
School of Natural Resources, University of Nebraska-Lincoln, Lincoln, NE, USA
Interests: image-based plant phenotyping analysis; computer vision

Special Issue Information

Dear Colleagues,

This Special Issue, “Intelligent Extraction of Phenotypic Traits in Agroforestry”, aims to explore the application of remote sensing technologies in extracting phenotypic traits in agriculture and forestry. These fields are vital for sustainable development and environmental conservation, and accurately assessing phenotypic traits is crucial for optimizing resource management and enhancing productivity. Remote sensing technologies provide a powerful tool for non-destructive, large-scale monitoring, enabling the extraction of valuable phenotypic information. Advanced data science technologies, such as computer vision and deep learning, further enhance the capability of these methods. Moreover, integrating plant phenomics into genetic breeding programs can significantly improve crop yields and the selection of superior plant varieties, demonstrating the broad applications of these technologies across various agricultural and forestry disciplines.

Authors are invited to submit original research articles, reviews, and case studies that address key themes, including machine learning, deep learning, and computer vision for phenotypic trait extraction, multi-sensor data fusion techniques (e.g., LiDAR and hyperspectral imaging), and integrating remote sensing data with ground observations in agriculture and forestry. Contributions covering a wide range of topics are welcome, including, but not limited to, the following:

  • Intelligent extraction of agriculture and forestry phenotypic traits using deep learning;
  • High-yield plant phenotype screening using remote sensing;
  • Multi-sensor data fusion for high-throughput phenotyping in agriculture and forestry;
  • Application of phenotypic data in genetic breeding programs for crop improvement and sustainable forest resource management.

Dr. Xijian Fan
Dr. Yanjie Li
Dr. Shichao Jin
Dr. Cong (Vega) Xu
Dr. Sruti Das Choudhury
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • phenotypic parameter extraction
  • deep learning
  • computer vision
  • high-throughput phenotyping
  • UAV
  • agriculture and forestry monitoring
  • 3D point cloud
  • multispectral image
  • multi-sensor data fusion
  • genetic breeding programs

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (7 papers)


Research


20 pages, 55414 KiB  
Article
Parameter-Efficient Fine-Tuning for Individual Tree Crown Detection and Species Classification Using UAV-Acquired Imagery
by Jiuyu Zhang, Fan Lei and Xijian Fan
Remote Sens. 2025, 17(7), 1272; https://doi.org/10.3390/rs17071272 - 3 Apr 2025
Viewed by 360
Abstract
Pre-trained foundation models, trained on large-scale datasets, have demonstrated significant success in a variety of downstream vision tasks. Parameter-efficient fine-tuning (PEFT) methods aim to adapt these foundation models to new domains by updating only a small subset of parameters, thereby reducing computational overhead. However, the effectiveness of these PEFT methods, especially in the context of forestry remote sensing—specifically for individual tree detection—remains largely unexplored. In this work, we present a simple and efficient PEFT approach designed to transfer pre-trained transformer models to the specific tasks of tree crown detection and species classification in unmanned aerial vehicle (UAV) imagery. To address the challenge of mitigating the influence of irrelevant ground targets in UAV imagery, we propose an Adaptive Salient Channel Selection (ASCS) method, which can be simply integrated into each transformer block during fine-tuning. In the proposed ASCS, task-specific channels are adaptively selected based on class-wise importance scores, where the channels most relevant to the target class are highlighted. In addition, a simple bias term is introduced to facilitate the learning of task-specific knowledge, enhancing the adaptation of the pre-trained model to the target tasks. The experimental results demonstrate that the proposed ASCS fine-tuning method, which utilizes a small number of task-specific learnable parameters, significantly outperforms the latest YOLO detection framework and surpasses the state-of-the-art PEFT method in tree detection and classification tasks. These findings demonstrate that the proposed ASCS is an effective PEFT method, capable of adapting the pre-trained model’s capabilities for tree crown detection and species classification using UAV imagery. Full article
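
To make the channel-selection idea more concrete, the sketch below is a minimal, hypothetical PyTorch illustration (not the authors' implementation): it scores the channels of a frozen transformer block's output, keeps the most salient ones via a soft gate, and adds a small learnable bias term, so that only the adapter parameters are trained during fine-tuning.

    import torch
    import torch.nn as nn

    class AdaptiveSalientChannelSelection(nn.Module):
        """Illustrative channel-selection adapter (hypothetical, not the paper's code).

        Given token features of shape (B, N, C) from a frozen transformer block,
        it computes per-channel importance scores, softly gates the channels,
        and adds a small learnable bias. Only the adapter parameters are trained.
        """
        def __init__(self, dim: int, keep_ratio: float = 0.5):
            super().__init__()
            self.score = nn.Linear(dim, dim)             # produces channel-wise importance
            self.bias = nn.Parameter(torch.zeros(dim))   # task-specific bias term
            self.keep_ratio = keep_ratio

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, tokens, channels); pool tokens to score each channel
            pooled = x.mean(dim=1)                          # (B, C)
            importance = torch.sigmoid(self.score(pooled))  # (B, C) in [0, 1]
            k = max(1, int(self.keep_ratio * x.shape[-1]))
            topk = torch.topk(importance, k, dim=-1).indices
            mask = torch.zeros_like(importance).scatter_(1, topk, 1.0)
            # soft gate: highlighted channels pass, the rest are attenuated
            gate = mask * importance + (1.0 - mask) * 0.1 * importance
            return x * gate.unsqueeze(1) + self.bias

    # Usage: wrap each frozen transformer block's output during fine-tuning,
    # training only the adapter parameters (scoring weights and bias).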

39 pages, 13451 KiB  
Article
Machine Learning-Based Summer Crops Mapping Using Sentinel-1 and Sentinel-2 Images
by Saeideh Maleki, Nicolas Baghdadi, Hassan Bazzi, Cassio Fraga Dantas, Dino Ienco, Yasser Nasrallah and Sami Najem
Remote Sens. 2024, 16(23), 4548; https://doi.org/10.3390/rs16234548 - 4 Dec 2024
Cited by 1 | Viewed by 1265
Abstract
Accurate crop type mapping using satellite imagery is crucial for food security, yet accurately distinguishing between crops with similar spectral signatures is challenging. This study assessed the performance of Sentinel-2 (S2) time series (spectral bands and vegetation indices), Sentinel-1 (S1) time series (backscattering coefficients and polarimetric parameters), alongside phenological features derived from both S1 and S2 time series (harmonic coefficients and median features), for classifying sunflower, soybean, and maize. Random Forest (RF), Multi-Layer Perceptron (MLP), and XGBoost classifiers were applied across various dataset configurations and train-test splits over two study sites and years in France. Additionally, the InceptionTime classifier, specifically designed for time series data, was tested exclusively with time series datasets to compare its performance against the three general machine learning algorithms (RF, XGBoost, and MLP). The results showed that XGBoost outperformed RF and MLP in classifying the three crops. The optimal dataset for mapping all three crops combined S1 backscattering coefficients with S2 vegetation indices, with comparable results between phenological features and time series data (mean F1 scores of 89.9% for sunflower, 76.6% for soybean, and 91.1% for maize). However, when using individual satellite sensors, S1 phenological features and time series outperformed S2 for sunflower, while S2 was superior for soybean and maize. Both phenological features and time series data produced close mean F1 scores across spatial, temporal, and spatiotemporal transfer scenarios, though median features dataset was the best choice for spatiotemporal transfer. Polarimetric S1 data did not yield effective results. The InceptionTime classifier further improved classification accuracy over XGBoost for all crops, with the degree of improvement varying by crop and dataset (the highest mean F1 scores of 90.6% for sunflower, 86.0% for soybean, and 93.5% for maize). Full article
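
As a rough illustration of this kind of pipeline, the sketch below stacks S1 backscatter and S2 vegetation-index features per parcel and trains an XGBoost classifier evaluated with per-class F1 scores. The feature layout and the synthetic stand-in data are assumptions for demonstration only; this is not the authors' code.

    import numpy as np
    from xgboost import XGBClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import f1_score

    # Synthetic stand-ins for per-parcel features: S1 backscatter time series
    # (VV/VH over 20 dates) and S2 vegetation-index time series (NDVI over 15 dates);
    # labels 0/1/2 stand for sunflower, soybean, maize.
    rng = np.random.default_rng(0)
    n_parcels = 600
    s1_features = rng.normal(-12.0, 3.0, (n_parcels, 20 * 2))
    s2_features = rng.uniform(0.1, 0.9, (n_parcels, 15))
    labels = rng.integers(0, 3, n_parcels)

    X = np.hstack([s1_features, s2_features])   # combine S1 and S2 features
    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, test_size=0.3, stratify=labels, random_state=0)

    clf = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.05)
    clf.fit(X_train, y_train)
    print(f1_score(y_test, clf.predict(X_test), average=None))   # per-crop F1 scores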

20 pages, 31755 KiB  
Article
An Improved 2D Pose Estimation Algorithm for Extracting Phenotypic Parameters of Tomato Plants in Complex Backgrounds
by Yawen Cheng, Ni Ren, Anqi Hu, Lingli Zhou, Chao Qi, Shuo Zhang and Qian Wu
Remote Sens. 2024, 16(23), 4385; https://doi.org/10.3390/rs16234385 - 24 Nov 2024
Viewed by 1264
Abstract
Phenotypic traits, such as plant height, internode length, and node count, are essential indicators of the growth status of tomato plants, carrying significant implications for research on genetic breeding and cultivation management. Deep learning algorithms such as object detection and segmentation have been widely utilized to extract plant phenotypic parameters. However, segmentation-based methods are labor-intensive due to their requirement for extensive annotation during training, while object detection approaches exhibit limitations in capturing intricate structural features. To achieve real-time, efficient, and precise extraction of phenotypic traits of seedling tomatoes, a novel plant phenotyping approach based on 2D pose estimation was proposed. We enhanced a novel heatmap-free method, YOLOv8s-pose, by integrating the Convolutional Block Attention Module (CBAM) and Content-Aware ReAssembly of FEatures (CARAFE), to develop an improved YOLOv8s-pose (IYOLOv8s-pose) model, which efficiently focuses on salient image features with minimal parameter overhead while achieving a superior recognition performance in complex backgrounds. IYOLOv8s-pose manifested a considerable enhancement in detecting bending points and stem nodes. Particularly for internode detection, IYOLOv8s-pose attained a Precision of 99.8%, exhibiting a significant improvement over RTMPose-s, YOLOv5s6-pose, YOLOv7s-pose, and YOLOv8s-pose by 2.9%, 5.4%, 3.5%, and 5.4%, respectively. Regarding plant height estimation, IYOLOv8s-pose achieved an RMSE of 0.48 cm and an rRMSE of 2%, and manifested a 65.1%, 68.1%, 65.6%, and 51.1% reduction in the rRMSE compared to RTMPose-s, YOLOv5s6-pose, YOLOv7s-pose, and YOLOv8s-pose, respectively. When confronted with the more intricate extraction of internode length, IYOLOv8s-pose also exhibited a 15.5%, 23.9%, 27.2%, and 12.5% reduction in the rRMSE compared to RTMPose-s, YOLOv5s6-pose, YOLOv7s-pose, and YOLOv8s-pose. IYOLOv8s-pose achieves high precision while simultaneously enhancing efficiency and convenience, rendering it particularly well suited for extracting phenotypic parameters of tomato plants grown naturally within greenhouse environments. This innovative approach provides a new means for the rapid, intelligent, and real-time acquisition of plant phenotypic parameters in complex backgrounds. Full article
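
To make the trait-computation step concrete, the following sketch is purely illustrative: with made-up stem keypoint coordinates and an assumed ground-sampling distance, it shows how plant height and internode lengths could be derived from 2D pose keypoints of the kind a YOLOv8-pose-style detector returns.

    import numpy as np

    # Hypothetical output of a pose model: stem keypoints ordered from the plant
    # base upward, in pixel coordinates (x, y), with y increasing downward.
    stem_keypoints_px = np.array([
        [412.0, 930.0],   # base
        [405.0, 760.0],   # stem node 1
        [398.0, 590.0],   # stem node 2
        [390.0, 430.0],   # top bending point
    ])
    gsd_cm_per_px = 0.05  # assumed ground-sampling distance (cm per pixel)

    # Plant height: vertical extent from base to top keypoint.
    plant_height_cm = (stem_keypoints_px[0, 1] - stem_keypoints_px[-1, 1]) * gsd_cm_per_px

    # Internode lengths: Euclidean distance between consecutive stem nodes.
    internode_cm = np.linalg.norm(np.diff(stem_keypoints_px, axis=0), axis=1) * gsd_cm_per_px

    print(f"height: {plant_height_cm:.1f} cm, internodes: {np.round(internode_cm, 1)} cm")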

21 pages, 5851 KiB  
Article
SAM-ResNet50: A Deep Learning Model for the Identification and Classification of Drought Stress in the Seedling Stage of Betula luminifera
by Shiya Gao, Hao Liang, Dong Hu, Xiange Hu, Erpei Lin and Huahong Huang
Remote Sens. 2024, 16(22), 4141; https://doi.org/10.3390/rs16224141 - 6 Nov 2024
Viewed by 1506
Abstract
Betula luminifera, an indigenous hardwood tree in South China, possesses significant economic and ecological value. In view of the current severe drought situation, it is urgent to enhance this tree’s drought tolerance. However, traditional artificial methods fall short of meeting the demands of breeding efforts due to their inefficiency. To monitor drought situations in a high-throughput and automatic approach, a deep learning model based on phenotype characteristics was proposed to identify and classify drought stress in B. luminifera seedlings. Firstly, visible-light images were obtained from a drought stress experiment conducted on B. luminifera shoots. Considering the images’ characteristics, we proposed an SAM-CNN architecture by incorporating spatial attention modules into classical CNN models. Among the four classical CNNs compared, ResNet50 exhibited superior performance and was, thus, selected for the construction of the SAM-CNN. Subsequently, we analyzed the classification performance of the SAM-ResNet50 model in terms of transfer learning, training from scratch, model robustness, and visualization. The results revealed that SAM-ResNet50 achieved an accuracy of 1.48% higher than that of ResNet50, at 99.6%. Furthermore, there was a remarkable improvement of 18.98% in accuracy, reaching 82.31% for the spatial transform images generated from the test set images by applying movement and rotation for robustness testing. In conclusion, the SAM-ResNet50 model achieved outstanding performance, with 99.6% accuracy and realized high-throughput automatic monitoring based on phenotype, providing a new perspective for drought stress classification and technical support for B. luminifera-related breeding work. Full article
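
The spatial attention component of CBAM-style modules can be sketched as follows. This is a generic PyTorch illustration of the mechanism (channel-wise average and max pooling followed by a convolution and sigmoid), not the authors' SAM-ResNet50 code.

    import torch
    import torch.nn as nn

    class SpatialAttention(nn.Module):
        """CBAM-style spatial attention: weight each spatial location by a map
        derived from channel-wise average- and max-pooled feature descriptors."""
        def __init__(self, kernel_size: int = 7):
            super().__init__()
            self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            avg_map = x.mean(dim=1, keepdim=True)         # (B, 1, H, W)
            max_map = x.max(dim=1, keepdim=True).values   # (B, 1, H, W)
            attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
            return x * attn                               # re-weighted features

    # In an SAM-CNN, such a module would typically be inserted after selected
    # convolutional stages of a backbone such as ResNet50.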

21 pages, 6022 KiB  
Article
From Spectral Characteristics to Index Bands: Utilizing UAV Hyperspectral Index Optimization on Algorithms for Estimating Canopy Nitrogen Concentration in Carya Cathayensis Sarg
by Hailin Feng, Tong Zhou, Ketao Wang, Jianqin Huang, Hao Liang, Chenghao Lu, Yaoping Ruan and Liuchang Xu
Remote Sens. 2024, 16(20), 3780; https://doi.org/10.3390/rs16203780 - 11 Oct 2024
Cited by 2 | Viewed by 1524
Abstract
Employing drones and hyperspectral imagers for large-scale, precise evaluation of nitrogen (N) concentration in Carya cathayensis Sarg canopies is crucial for accurately managing nitrogen fertilization in C. cathayensis Sarg cultivation. This study gathered five sets of hyperspectral imagery data from C. cathayensis Sarg plantations across four distinct locations with varying environmental stresses using drones. The research assessed the canopy nitrogen concentration of C. cathayensis Sarg trees both during singular growth periods and throughout their entire growth cycles. The objective was to explore the influence of band combinations and spectral index formula configurations on the predictive capability of the hyperspectral indices (HIs) for canopy N concentration (CNC), optimize the performance between HIs and machine learning approaches, and validate the efficacy of optimized HI algorithms. The findings revealed the following: (i) Optimized HIs demonstrated optimal predictive performance during both singular growth periods and the full growth cycles of C. cathayensis Sarg. The most effective HI model for singular growth periods was the optimized–modified–normalized difference vegetation index (opt-mNDVI), achieving an adjusted coefficient of determination (R2) of 0.96 and a root mean square error (RMSE) of 0.71. For the entire growth cycle, the HI model, also opt-mNDVI, attained an R2 of 0.75 and an RMSE of 2.11; (ii) optimized band combinations substantially enhanced HIs’ predictive performance by 16% to 71%, while the choice between three-band and two-band combinations influenced the predictive capacity of optimized HIs by 4% to 46%. Hence, utilizing optimized HIs combined with Unmanned Aerial Vehicle (UAV) hyperspectral imaging to evaluate nitrogen concentration in C. cathayensis Sarg trees under complex field conditions offers significant practical value. Full article
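
A simplified version of the band-optimization idea: for a three-band modified-NDVI-style index of the form (R_a - R_b) / (R_a + R_b - 2 R_c), search band combinations and keep the one whose index correlates best with measured canopy nitrogen. The snippet below is a hedged sketch with synthetic stand-in data, not the study's implementation.

    import numpy as np
    from itertools import permutations

    # Synthetic stand-ins: canopy-mean reflectance for 60 trees over 20 bands and
    # matching measured canopy nitrogen concentration. Real UAV hyperspectral data
    # have hundreds of bands, so the search is usually run on resampled bands or
    # vectorized.
    rng = np.random.default_rng(0)
    reflectance = rng.uniform(0.05, 0.6, (60, 20))
    cnc = rng.uniform(1.0, 3.5, 60)

    best_bands, best_r2 = None, -np.inf
    for a, b, c in permutations(range(reflectance.shape[1]), 3):
        ra, rb, rc = reflectance[:, a], reflectance[:, b], reflectance[:, c]
        denom = ra + rb - 2.0 * rc
        if np.any(np.abs(denom) < 1e-6):
            continue
        index = (ra - rb) / denom                  # mNDVI-style three-band index
        r2 = np.corrcoef(index, cnc)[0, 1] ** 2    # fit against measured N
        if r2 > best_r2:
            best_bands, best_r2 = (a, b, c), r2

    print("best band triplet:", best_bands, "R^2 =", round(best_r2, 3))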

22 pages, 13737 KiB  
Article
Synergizing a Deep Learning and Enhanced Graph-Partitioning Algorithm for Accurate Individual Rubber Tree-Crown Segmentation from Unmanned Aerial Vehicle Light-Detection and Ranging Data
by Yunfeng Zhu, Yuxuan Lin, Bangqian Chen, Ting Yun and Xiangjun Wang
Remote Sens. 2024, 16(15), 2807; https://doi.org/10.3390/rs16152807 - 31 Jul 2024
Cited by 2 | Viewed by 1378
Abstract
The precise acquisition of phenotypic parameters for individual trees in plantation forests is important for forest management and resource exploration. The use of Light-Detection and Ranging (LiDAR) technology mounted on Unmanned Aerial Vehicles (UAVs) has become a critical method for forest resource monitoring. Achieving the accurate segmentation of individual tree crowns (ITCs) from UAV LiDAR data remains a significant technical challenge, especially in broad-leaved plantations such as rubber plantations. In this study, we designed an individual tree segmentation framework applicable to dense rubber plantations with complex canopy structures. First, the feature extraction module of PointNet++ was enhanced to precisely extract understory branches. Then, a graph-based segmentation algorithm focusing on the extracted branch and trunk points was designed to segment the point cloud of the rubber plantation. During the segmentation process, a directed acyclic graph is constructed using components generated through grey image clustering in the forest. The edge weights in this graph are determined according to scores calculated using the topologies and heights of the components. Subsequently, ITC segmentation is performed by trimming the edges of the graph to obtain multiple subgraphs representing individual trees. Four different plots were selected to validate the effectiveness of our method, and the widths obtained from our segmented ITCs were compared with the field measurements. As a result, the improved PointNet++ achieved an average recall of 94.6% for tree trunk detection, along with an average precision of 96.2%. The accuracy of tree-crown segmentation in the four plots achieved maximal and minimal R2 values of 98.2% and 92.5%, respectively. Further comparative analysis revealed that our method outperforms traditional methods in terms of segmentation accuracy, even in rubber plantations characterized by dense canopies with indistinct boundaries. Thus, our algorithm exhibits great potential for the accurate segmentation of rubber trees, facilitating the acquisition of structural information critical to rubber plantation management. Full article
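
The graph-partitioning step can be illustrated schematically: build a graph whose nodes are detected trunk/branch components, weight edges by a score derived from component positions and heights, and cut weak edges so that each remaining connected subgraph corresponds to one tree. The sketch below uses networkx with invented component attributes and a simple distance/height-difference weight; it is a schematic stand-in, not the authors' algorithm.

    import networkx as nx
    import numpy as np

    # Hypothetical components: id -> (x, y, top_height) summarizing a clustered
    # group of trunk/branch points.
    components = {
        0: (0.0, 0.0, 12.5), 1: (0.4, 0.2, 9.8),
        2: (5.1, 0.3, 13.0), 3: (5.4, -0.1, 10.2),
    }

    G = nx.Graph()
    G.add_nodes_from(components)
    for u in components:
        for v in components:
            if u < v:
                (xu, yu, hu), (xv, yv, hv) = components[u], components[v]
                dist = np.hypot(xu - xv, yu - yv)
                # Illustrative edge score: nearby components with similar heights
                # are more likely to belong to the same crown.
                score = dist + 0.5 * abs(hu - hv)
                G.add_edge(u, v, weight=score)

    # Trim weak links: remove edges whose score exceeds a threshold, then treat
    # each connected component of the trimmed graph as one tree.
    threshold = 3.0
    G.remove_edges_from([(u, v) for u, v, d in G.edges(data=True) if d["weight"] > threshold])
    trees = list(nx.connected_components(G))
    print(trees)   # e.g., [{0, 1}, {2, 3}]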

Other


14 pages, 5956 KiB  
Technical Note
Unmanned Aerial Vehicle-Based Hyperspectral Imaging Integrated with a Data Cleaning Strategy for Detection of Corn Canopy Biomass, Chlorophyll, and Nitrogen Contents at Plant Scale
by Zhuolin Shi, Linglong Wang, Zengling Yang, Jinzhao Li, Linwei Cai, Yuanping Huang, Hongyan Zhang and Lujia Han
Remote Sens. 2025, 17(5), 895; https://doi.org/10.3390/rs17050895 - 3 Mar 2025
Viewed by 599
Abstract
The high-frequency detection of plant-scale crop growth in the field has great significance for achieving precise crop management and improving breeding practices. In this study, the biomass (BM), chlorophyll (Chl), and total nitrogen (TN) contents of the upper three leaves of the corn canopy are taken as examples, and unmanned aerial vehicle (UAV) and indoor hyperspectral imaging (HSI) detection models are established using partial least squares regression and support vector machine regression, respectively. The performance of the UAV HSI model was notably lower in comparison to the indoor model. Therefore, a UAV HSI data cleaning strategy integrated with RGB image information is further proposed, which involves eliminating data points with serious interference from information non-related to the plant. After data cleaning, the R2C of the BM, Chl, and TN contents detected through UAV HSI reached 0.537, 0.852, and 0.657, representing an improvement of over 70%. The RMSEP values were as low as 0.50 g, 2.2 SPAD, and 0.258%, which were comparable to those obtained with the indoor HSI detection model. This study demonstrates that UAV HSI integrated with the proposed data cleaning strategy can enable the rapid detection of corn canopy leaf properties at the plant scale in the field, supporting the high-frequency characterization of plant-scale crop growth parameters in the field. Full article
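
A minimal sketch of the data-cleaning idea, under the assumption that plant pixels can be separated from soil and background using an RGB greenness index (here the excess-green index) before averaging the hyperspectral signal and fitting a PLSR model. The data, thresholds, and variable names are synthetic stand-ins, not the study's implementation.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.metrics import mean_squared_error

    def clean_plant_spectrum(rgb, hsi_cube, exg_threshold=0.10):
        """Keep only vegetation pixels (high excess-green) when averaging spectra."""
        r, g, b = [rgb[..., i].astype(float) / 255.0 for i in range(3)]
        exg = 2.0 * g - r - b                  # excess-green index from the RGB image
        mask = exg > exg_threshold             # drop soil, shadow, and panel pixels
        return hsi_cube[mask].mean(axis=0)     # (n_bands,) mean plant spectrum

    # Synthetic stand-ins for co-registered UAV data: 100 plants, 64x64 pixels, 200 bands.
    rng = np.random.default_rng(0)
    rgb_images = [rng.integers(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(100)]
    hsi_cubes = [rng.random((64, 64, 200)) for _ in range(100)]
    biomass = rng.random(100) * 5.0            # placeholder trait values (g)

    X = np.vstack([clean_plant_spectrum(rgb, cube)
                   for rgb, cube in zip(rgb_images, hsi_cubes)])
    n_train = int(0.8 * len(X))

    pls = PLSRegression(n_components=10)
    pls.fit(X[:n_train], biomass[:n_train])
    rmsep = np.sqrt(mean_squared_error(biomass[n_train:], pls.predict(X[n_train:]).ravel()))
    print("RMSEP:", rmsep)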
