Search Results (40)

Search Parameters:
Keywords = forest scene reconstruction

29 pages, 3799 KiB  
Article
Forest Three-Dimensional Reconstruction Method Based on High-Resolution Remote Sensing Image Using Tree Crown Segmentation and Individual Tree Parameter Extraction Model
by Guangsen Ma, Gang Yang, Hao Lu and Xue Zhang
Remote Sens. 2025, 17(13), 2179; https://doi.org/10.3390/rs17132179 - 25 Jun 2025
Viewed by 430
Abstract
Efficient and accurate acquisition of tree distribution and three-dimensional geometric information in forest scenes, along with three-dimensional reconstruction of entire forest environments, holds significant application value in precision forestry and forestry digital twins. However, due to complex vegetation structures, fine geometric details, and severe occlusions in forest environments, existing methods—whether vision-based or LiDAR-based—still face challenges such as high data acquisition costs, feature extraction difficulties, and limited reconstruction accuracy. This study focuses on reconstructing tree distribution and extracting key individual tree parameters, and proposes a forest 3D reconstruction framework based on high-resolution remote sensing images. First, an optimized Mask R-CNN model was employed to segment individual tree crowns and extract distribution information. A Tree Parameter and Reconstruction Network (TPRN) was then constructed to estimate key structural parameters (height, DBH, etc.) directly from crown images and generate 3D tree models, and the 3D forest scene was reconstructed by combining the distribution information with the individual tree models. In addition, to address data scarcity, a hybrid training strategy integrating virtual and real data was proposed for crown segmentation and individual tree parameter estimation. Experimental results demonstrated that the proposed method can reconstruct an entire forest scene within seconds while accurately preserving tree distribution and individual tree attributes. In two real-world plots, tree counting accuracy exceeded 90%, with an average tree localization error under 0.2 m. The TPRN achieved parameter extraction accuracies of 92.7% and 96% for tree height, and 95.4% and 94.1% for DBH. Furthermore, the generated individual tree models achieved average Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) scores of 11.24 and 0.53, respectively, validating the reconstruction quality. This approach enables fast and effective large-scale forest scene reconstruction from a single remote sensing image, demonstrating significant potential for dynamic forest resource monitoring and forestry-oriented digital twin systems.
(This article belongs to the Special Issue Digital Modeling for Sustainable Forest Management)
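The assembly step at the end of this pipeline reduces to placing one parametric tree model per detected crown. A minimal sketch under stated assumptions: `assemble_forest_scene` and the allometric stand-ins below are hypothetical and replace the paper's TPRN, which maps crown images to height and DBH.

```python
import numpy as np

def assemble_forest_scene(detections, height_model, dbh_model):
    """Place one parametric tree model per detected crown.

    detections: array of (x, y, crown_diameter_m) from crown segmentation.
    height_model, dbh_model: callables mapping crown diameter to a
    parameter estimate (hypothetical stand-ins for the paper's TPRN).
    """
    scene = []
    for x, y, crown_d in detections:
        scene.append({
            "position": (float(x), float(y)),
            "height_m": float(height_model(crown_d)),
            "dbh_m": float(dbh_model(crown_d)),
            "crown_diameter_m": float(crown_d),
        })
    return scene

# Illustrative allometric stand-ins; coefficients are made up for the demo.
height_model = lambda cd: 2.5 * cd ** 0.9
dbh_model = lambda cd: 0.04 * cd ** 1.1

detections = np.array([[12.0, 4.5, 5.2], [18.3, 9.1, 3.8]])
print(assemble_forest_scene(detections, height_model, dbh_model))
```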

18 pages, 4774 KiB  
Article
InfraredStereo3D: Breaking Night Vision Limits with Perspective Projection Positional Encoding and Groundbreaking Infrared Dataset
by Yuandong Niu, Limin Liu, Fuyu Huang, Juntao Ma, Chaowen Zheng, Yunfeng Jiang, Ting An, Zhongchen Zhao and Shuangyou Chen
Remote Sens. 2025, 17(12), 2035; https://doi.org/10.3390/rs17122035 - 13 Jun 2025
Viewed by 459
Abstract
In fields such as military reconnaissance, forest fire prevention, and autonomous driving at night, there is an urgent need for high-precision three-dimensional reconstruction in low-light or night environments. RGB cameras rely on external light, so their image quality declines significantly under these conditions, making it difficult to meet task requirements. LiDAR-based methods image poorly in rainy and foggy weather, in close-range scenes, and in scenarios requiring thermal imaging data. Infrared cameras can overcome these challenges because their imaging mechanism differs from those of RGB cameras and LiDAR. However, research on three-dimensional scene reconstruction from infrared images is relatively immature, especially for infrared binocular stereo matching, which faces two main challenges: first, there is no dataset dedicated to infrared binocular stereo matching; second, the lack of texture in infrared images limits the extension of RGB-based methods to infrared reconstruction. To address these problems, this study first constructs an infrared binocular stereo matching dataset and then proposes a transformer-based method with perspective projection positional encoding for the matching task, building a stereo matching network that combines a transformer with a cost volume. Existing work on transformer positional encoding usually adopts a parallel projection model to simplify computation; our method is instead based on the actual perspective projection model, so that each pixel is associated with a distinct projection ray. This effectively mitigates the feature extraction and matching difficulties caused by insufficient texture in infrared images and significantly improves matching accuracy. Experiments on the proposed infrared binocular stereo matching dataset demonstrate the effectiveness of the method.
(This article belongs to the Collection Visible Infrared Imaging Radiometers and Applications)
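The key idea, associating each pixel with its own projection ray under an actual perspective model rather than a parallel one, can be illustrated in a few lines. A minimal sketch assuming a standard pinhole intrinsic matrix; this is the geometric ingredient only, not the paper's full encoding or network.

```python
import numpy as np

def perspective_ray_encoding(height, width, K):
    """Unit ray direction per pixel under a pinhole perspective model.

    Unlike a parallel projection (where all rays share one direction),
    each pixel here gets its own projection ray, which can then be fed
    to a transformer as a positional encoding.
    """
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)
    rays = pix @ np.linalg.inv(K).T          # back-project to camera space
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)
    return rays                              # shape (H, W, 3)

# Illustrative intrinsics: 500 px focal length, principal point at center.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
print(perspective_ray_encoding(480, 640, K).shape)  # (480, 640, 3)
```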

25 pages, 15523 KiB  
Article
Comparative Analysis of Novel View Synthesis and Photogrammetry for 3D Forest Stand Reconstruction and Extraction of Individual Tree Parameters
by Guoji Tian, Chongcheng Chen and Hongyu Huang
Remote Sens. 2025, 17(9), 1520; https://doi.org/10.3390/rs17091520 - 25 Apr 2025
Cited by 1 | Viewed by 1012
Abstract
The accurate and efficient 3D reconstruction of trees is beneficial for urban forest resource assessment and management. Close-range photogrammetry (CRP) is widely used for 3D model reconstruction of forest scenes, but in practical forestry applications it suffers from low reconstruction efficiency and poor reconstruction quality. Recently, novel view synthesis (NVS) technologies such as neural radiance fields (NeRF) and 3D Gaussian splatting (3DGS) have shown great potential for 3D reconstruction of plants from a limited number of images. However, existing research typically focuses on small plants in orchards or on individual trees, and it remains uncertain whether these technologies can be applied effectively in larger, more complex stands or forest scenes. In this study, we collected sequential images of urban forest plots of varying complexity using imaging devices with different resolutions (smartphone cameras and a UAV). The plots included one with sparse, leafless trees and another with dense foliage and heavier occlusion. We then performed dense reconstruction of the forest stands using the NeRF and 3DGS methods and compared the resulting point cloud models with those obtained through photogrammetric reconstruction and laser scanning. The results show that the NVS methods have a significant efficiency advantage over photogrammetry. The photogrammetric method is suitable only for relatively simple forest stands; it adapts poorly to complex ones, producing tree point cloud models with excessive canopy noise and incorrectly reconstructed trees with duplicated trunks and canopies. In contrast, NeRF adapts better to complex stands, yielding tree point clouds of the highest quality with more detailed trunk and canopy information, although it can produce reconstruction errors in the ground area when the input views are limited. The 3DGS method has a relatively poor capability to generate dense point clouds, resulting in models with low point density, particularly sparse in the trunk areas, which degrades the accuracy of diameter at breast height (DBH) estimation. Tree height and crown diameter can be extracted from the point clouds of all three methods, with NeRF achieving the highest tree height accuracy; however, DBH extracted from photogrammetric point clouds remains more accurate than that from NeRF point clouds. Moreover, compared with ground-level smartphone images, tree parameters extracted from reconstructions of higher-resolution drone images with varied perspectives are more accurate. These findings confirm that NVS methods have significant application potential for 3D reconstruction of urban forests.
(This article belongs to the Section AI Remote Sensing)
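DBH extraction from a reconstructed point cloud, as compared above, is commonly done by fitting a circle to a thin trunk slice at breast height (1.3 m). A minimal sketch using an algebraic (Kåsa) least-squares circle fit; the slice thickness and the fit choice are illustrative, not the papers' exact procedure.

```python
import numpy as np

def dbh_from_points(points, breast_height=1.3, slab=0.05):
    """Estimate DBH by circle-fitting a horizontal slice of trunk points.

    points: (N, 3) array with z given relative to the ground.
    Uses the slice [breast_height - slab, breast_height + slab].
    """
    z = points[:, 2]
    sl = points[np.abs(z - breast_height) <= slab]
    if len(sl) < 3:
        raise ValueError("not enough points at breast height")
    x, y = sl[:, 0], sl[:, 1]
    # Kasa fit: solve x^2 + y^2 = 2*a*x + 2*b*y + c in least squares,
    # where (a, b) is the center and r = sqrt(c + a^2 + b^2).
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    a, b, c = np.linalg.lstsq(A, x**2 + y**2, rcond=None)[0]
    return 2.0 * np.sqrt(c + a**2 + b**2)

# Synthetic trunk slice: radius 0.15 m centered at (1, 2).
t = np.linspace(0, 2 * np.pi, 200)
pts = np.column_stack([1 + 0.15 * np.cos(t), 2 + 0.15 * np.sin(t),
                       np.full_like(t, 1.3)])
print(round(dbh_from_points(pts), 3))  # ~0.3
```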

20 pages, 8734 KiB  
Article
An Improved Method for Single Tree Trunk Extraction Based on LiDAR Data
by Jisheng Xia, Sunjie Ma, Guize Luan, Pinliang Dong, Rong Geng, Fuyan Zou, Junzhou Yin and Zhifang Zhao
Remote Sens. 2025, 17(7), 1271; https://doi.org/10.3390/rs17071271 - 3 Apr 2025
Viewed by 769
Abstract
Scanning forests with LiDAR is an efficient method for conducting forest resource surveys, including estimating tree diameter at breast height (DBH), estimating canopy height, and segmenting individual trees. This study uses three-dimensional (3D) forest test data and point cloud data simulated with the Helios++ V1.3.0 software and proposes a voxelized trunk extraction algorithm to determine trunk locations and the vertical structure of single tree trunks in forest areas. First, a voxel-based shape recognition algorithm extracts the trunk structure from tree point clouds; the random sample consensus (RANSAC) algorithm then resolves the vertical connectivity of the trunk structures this produces; and the alpha shapes algorithm, selected from several point cloud surface reconstruction algorithms, reconstructs the surface of the tree point clouds. Next, building on the tree surface model, a light projection scene is introduced to locate trunk coordinates at different heights. Finally, the convex hull of the trunk bottom is computed with the Graham scan method. Accuracy assessments show that the proposed single-tree extraction and forest vertical structure recognition algorithms, applied within the light projection scene, effectively delineate the regions where the vertical structure distribution of single tree trunks is inconsistent.
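The final step computes the convex hull of the trunk bottom with the Graham scan. A self-contained sketch of that classic algorithm (polar-angle sort around the lowest point, then a stack of counterclockwise turns):

```python
import math

def graham_scan(points):
    """Convex hull of 2D points (e.g., a trunk-bottom slice) via Graham scan."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) < 3:
        return pts
    # Pivot: lowest y, ties broken by lowest x.
    pivot = min(pts, key=lambda p: (p[1], p[0]))
    rest = [p for p in pts if p != pivot]
    # Sort by polar angle around the pivot (ties broken by distance).
    rest.sort(key=lambda p: (math.atan2(p[1] - pivot[1], p[0] - pivot[0]),
                             (p[0] - pivot[0])**2 + (p[1] - pivot[1])**2))

    def cross(o, a, b):  # > 0 for a counterclockwise turn o -> a -> b
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    hull = [pivot]
    for p in rest:
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()                      # drop clockwise/collinear turns
        hull.append(p)
    return hull

print(graham_scan([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2), (1, 0.5)]))
# [(0, 0), (2, 0), (2, 2), (0, 2)]
```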

31 pages, 65110 KiB  
Article
SK-TreePCN: Skeleton-Embedded Transformer Model for Point Cloud Completion of Individual Trees from Simulated to Real Data
by Haifeng Xu, Yongjian Huai, Xun Zhao, Qingkuo Meng, Xiaoying Nie, Bowen Li and Hao Lu
Remote Sens. 2025, 17(4), 656; https://doi.org/10.3390/rs17040656 - 14 Feb 2025
Cited by 1 | Viewed by 1232
Abstract
Tree structural information is essential for studying forest ecosystem functions, driving mechanisms, and responses to global change. Although terrestrial laser scanning (TLS) can acquire high-precision 3D structural information of forests, mutual occlusion between trees, the scanner's field of view, and terrain variation leave the captured point clouds incomplete, hindering downstream tasks. This study proposes a skeleton-embedded tree point cloud completion method, termed SK-TreePCN, which recovers complete individual tree point clouds from incomplete field scans. SK-TreePCN employs a transformer trained on simulated point clouds generated by a 3D radiative transfer model. Unlike existing point cloud completion algorithms designed for regular shapes and simple structures, SK-TreePCN addresses structurally heterogeneous trees. The 3D radiative transfer model LESS, which can simulate various TLS acquisitions over highly heterogeneous scenes, is employed to generate massive labeled point clouds for training. Among the point cloud completion methods evaluated, SK-TreePCN achieves outstanding Chamfer distance (CD) and F1 scores, and its completed point clouds display a more natural appearance and clearer branches. Tree height and diameter at breast height extracted from the recovered point clouds achieved R2 values of 0.929 and 0.904, respectively. SK-TreePCN thus demonstrates applicability and robustness in recovering individual tree point clouds and shows great potential for TLS-based field measurements of trees, refining point cloud 3D reconstruction and tree information extraction while reducing field data collection labor and retaining satisfactory data quality.
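Chamfer distance, the completion metric reported above, compares a completed cloud with a reference cloud via nearest-neighbor distances in both directions. A minimal sketch using SciPy's KD-tree; the squared-distance variant shown is one common convention.

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3).

    Mean squared nearest-neighbor distance in both directions; lower is
    better, 0 for identical clouds.
    """
    d_ab, _ = cKDTree(b).query(a)   # nearest neighbor in b for each a
    d_ba, _ = cKDTree(a).query(b)   # and vice versa
    return np.mean(d_ab**2) + np.mean(d_ba**2)

rng = np.random.default_rng(0)
complete = rng.normal(size=(2048, 3))
partial = complete[:1024] + rng.normal(scale=0.01, size=(1024, 3))
print(chamfer_distance(partial, complete))
```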

21 pages, 16141 KiB  
Article
The Development of a Sorting System Based on Point Cloud Weight Estimation for Fattening Pigs
by Luo Liu, Yangsen Ou, Zhenan Zhao, Mingxia Shen, Ruqian Zhao and Longshen Liu
Agriculture 2025, 15(4), 365; https://doi.org/10.3390/agriculture15040365 - 8 Feb 2025
Cited by 1 | Viewed by 1063
Abstract
As large-scale, intensive fattening pig farming has become mainstream, growing farm sizes have aggravated problems arising from the hierarchy within pig groups. Because of genetic differences among individual fattening pigs, those that grow faster enjoy a higher social rank; larger, more aggressive pigs continuously acquire more resources, further restricting the survival space of weaker pigs. Fattening pigs must therefore be grouped rationally, and the management of weaker pigs must be enhanced. Considering current fattening pig farming needs and actual production environments, this study designed and implemented an intelligent sorting system based on weight estimation. The main hardware of the partitioning equipment comprises a collection channel, a partitioning channel, and a gantry-style collection unit. Experimental data were collected, and the original scene point cloud was preprocessed to extract the back point cloud of each fattening pig. Based on the animals' morphological characteristics, a back point cloud segmentation method automatically extracts key features such as hip width, hip height, shoulder width, shoulder height, and body length. The segmentation algorithm first computes the centroid of the point cloud and the eigenvectors of its covariance matrix to reconstruct the point cloud coordinate system; based on the variation characteristics and geometric shape of consecutive horizontal slices, it then extracts hip-width and shoulder-width slices and calculates the related features. Weight estimation was performed using Random Forest, multilayer perceptron (MLP), least-squares linear regression, and ridge regression models, with parameters tuned by Bayesian optimization. Mean squared error, mean absolute error, and mean relative error served as evaluation metrics. Finally, classification capability was evaluated using the median and average weights of the fattening pigs as partitioning standards. The experimental results show that the system's average relative error in weight estimation is approximately 2.90%, and the total time for the partitioning process is less than 15 s, meeting the needs of practical production.
(This article belongs to the Special Issue Modeling of Livestock Breeding Environment and Animal Behavior)
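The first stage of the segmentation, reconstructing the point cloud coordinate system from the centroid and the eigenvectors of the covariance matrix, is standard PCA reorientation. A minimal sketch; the synthetic "back" cloud and the axis interpretation are illustrative assumptions.

```python
import numpy as np

def reorient_point_cloud(points):
    """Rotate a point cloud into its principal-axis frame.

    The covariance eigenvectors define the new axes, so the first axis
    aligns with the direction of largest extent (body length here);
    horizontal slices along it can then yield width features.
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))  # ascending order
    axes = eigvecs[:, ::-1]                                # largest first
    return centered @ axes, centroid, axes

rng = np.random.default_rng(1)
# Synthetic elongated "back" cloud whose long axis is rotated 30 degrees.
cloud = rng.normal(size=(5000, 3)) * [1.0, 0.3, 0.1]
theta = np.radians(30)
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
aligned, _, _ = reorient_point_cloud(cloud @ R.T)
print(aligned.std(axis=0))  # spreads now ordered: length, width, height
```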

29 pages, 4808 KiB  
Article
Multi-Baseline Bistatic SAR Three-Dimensional Imaging Method Based on Phase Error Calibration Combining PGA and EB-ISOA
by Jinfeng He, Hongtu Xie, Haozong Liu, Zhitao Wu, Bin Xu, Nannan Zhu, Zheng Lu and Pengcheng Qin
Remote Sens. 2025, 17(3), 363; https://doi.org/10.3390/rs17030363 - 22 Jan 2025
Viewed by 695
Abstract
Tomographic synthetic aperture radar (TomoSAR) is an advanced three-dimensional (3D) synthetic aperture radar (SAR) imaging technology that obtains multiple SAR images through multi-track observations, thereby reconstructing the 3D spatial structure of targets. However, due to system limitations, multi-baseline (MB) monostatic SAR (MonoSAR) suffers from temporal decorrelation when observing scenes such as forests, which reduces 3D reconstruction accuracy. Additionally, during TomoSAR observations, platform jitter and inaccurate position measurements contaminate the MB SAR data, introducing multiplicative noise with phase errors and degrading imaging quality. To address these issues, this paper proposes an MB bistatic SAR (BiSAR) 3D imaging method based on phase error calibration that combines phase gradient autofocus (PGA) with energy-balance intensity-squared optimization autofocus (EB-ISOA). First, the signal model of MB one-stationary (OS) BiSAR is established and the 3D imaging principle presented; the phase error caused by platform jitter and inaccurate position measurement is then analyzed. Combining the PGA and EB-ISOA methods, a 3D imaging method based on phase error calibration is proposed that improves the accuracy of phase error calibration, avoids vertical displacement, and is robust to noise, yielding high-precision 3D BiSAR imaging results. Experimental results verify the effectiveness and practicality of the proposed MB BiSAR 3D imaging method.
(This article belongs to the Section Engineering Remote Sensing)
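PGA, one half of the proposed calibration, is a standard autofocus technique. A minimal sketch of one textbook PGA pass (scatterer centering, circular windowing, phase-gradient estimation, integration); this is the generic algorithm, not the paper's combined PGA + EB-ISOA method.

```python
import numpy as np

def pga_iteration(img, window=32):
    """One textbook phase gradient autofocus pass.

    img: complex SAR image, shape (range bins, azimuth samples).
    Shifts each range line's brightest scatterer to azimuth index 0,
    keeps a circular window around it, estimates the common azimuth
    phase-error gradient in the phase-history domain, integrates and
    detrends it, then removes it from the image.
    """
    n_rng, n_az = img.shape
    centered = np.zeros_like(img)
    half = window // 2
    for i in range(n_rng):
        line = np.roll(img[i], -int(np.argmax(np.abs(img[i]))))
        centered[i, :half] = line[:half]      # circular window around 0
        centered[i, -half:] = line[-half:]
    G = np.fft.fft(centered, axis=1)          # back to phase-history domain
    dG = np.diff(G, axis=1)
    num = np.sum(np.imag(np.conj(G[:, :-1]) * dG), axis=0)
    den = np.sum(np.abs(G[:, :-1]) ** 2, axis=0)
    grad = num / np.maximum(den, 1e-12)       # weighted gradient estimate
    phi = np.concatenate([[0.0], np.cumsum(grad)])
    phi -= np.linspace(phi[0], phi[-1], n_az)  # remove the linear trend
    return np.fft.ifft(np.fft.fft(img, axis=1) * np.exp(-1j * phi), axis=1)

# Smoke test on random data (exercises the code path only).
rng = np.random.default_rng(0)
img = rng.normal(size=(64, 128)) + 1j * rng.normal(size=(64, 128))
print(pga_iteration(img).shape)  # (64, 128)
```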

18 pages, 13781 KiB  
Article
Evaluating Different Crown Reconstruction Approaches from Airborne LiDAR for Quantifying APAR Distribution Using a 3D Radiative Transfer Model
by Xun Zhao, Can Liu, Jianbo Qi, Lijuan Yuan, Zhexiu Yu, Siying He and Huaguo Huang
Remote Sens. 2025, 17(1), 53; https://doi.org/10.3390/rs17010053 - 27 Dec 2024
Cited by 1 | Viewed by 966
Abstract
Accurately quantifying fine-scale forest canopy-absorbed photosynthetically active radiation (APAR) is essential for monitoring forest growth and understanding ecological processes. The development of 3D radiative transfer models (3D RTMs) enables precise simulation of canopy–light interactions, facilitating better quantification of forest canopy radiation dynamics. However, the complex parameters of 3D RTMs, particularly detailed 3D scene structures, pose challenges for simulating radiative information. While high-resolution LiDAR offers precise 3D structural data, the effectiveness of different tree crown reconstruction methods for APAR quantification from airborne laser scanning (ALS) data has not been fully investigated. In this study, we employed three ALS-based tree crown reconstruction methods (alpha-shape, ellipsoid, and voxel based), combined with the 3D RTM LESS, to assess their effectiveness in simulating and quantifying 3D APAR distributions. Specifically, we used two distinct 3D forest scenes from the RAMI-V dataset to simulate ALS data, reconstructed virtual forest scenes, and compared their simulated 3D APAR distributions with the benchmark reference scenes using LESS. We also simulated branchless scenes to evaluate the impact of branches on APAR distribution across the reconstruction methods. Our findings indicate that the alpha-shape-based reconstruction yields 3D APAR distributions that most closely align with the benchmark scenes: in both the sparse (HET09) and dense (HET51) canopy scenarios, its APAR values exhibit the smallest discrepancies from the benchmarks. For HET09, the branched scenario yields RMSE, MAE, and MAPE values of 33.58 kW, 33.18 kW, and 40.19%, respectively, while for HET51 these metrics are 12.74 kW, 12.97 kW, and 10.27%. In the branchless scenario, HET09's metrics are 10.65 kW, 10.22 kW, and 9.79%, and HET51's are 2.99 kW, 2.65 kW, and 2.11%. Differences remain between the branched and branchless scenarios, however, and their extent depends on canopy structure. We conclude that, among the three tested methods, alpha-shape-based reconstruction has the potential for simulating and quantifying fine-scale APAR at a regional scale, providing convenient technical support for obtaining fine-scale 3D APAR distributions in complex forest environments. The impact of branches on APAR quantification from ALS-reconstructed scenes nevertheless needs further consideration.
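The reported discrepancy metrics are straightforward to reproduce. A trivial sketch of RMSE, MAE, and MAPE over paired simulated and benchmark APAR values:

```python
import numpy as np

def apar_errors(simulated, reference):
    """RMSE, MAE, and MAPE between simulated and benchmark APAR values."""
    simulated, reference = np.asarray(simulated), np.asarray(reference)
    err = simulated - reference
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    mape = 100.0 * np.mean(np.abs(err) / np.abs(reference))
    return rmse, mae, mape

# Illustrative values only (kW per scene element).
print(apar_errors([105.0, 98.0, 120.0], [100.0, 100.0, 110.0]))
```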

18 pages, 46116 KiB  
Article
Structural Complexity Significantly Impacts Canopy Reflectance Simulations as Revealed from Reconstructed and Sentinel-2-Monitored Scenes in a Temperate Deciduous Forest
by Yi Gan, Quan Wang and Guangman Song
Remote Sens. 2024, 16(22), 4296; https://doi.org/10.3390/rs16224296 - 18 Nov 2024
Cited by 1 | Viewed by 1164
Abstract
Detailed three-dimensional (3D) radiative transfer models (RTMs) enable a clear understanding of the interactions between light, biochemistry, and canopy structure, but they are rarely explicitly evaluated due to the limited availability of 3D canopy structure data, leaving a gap in knowledge of how canopy structure and leaf characteristics affect radiative transfer processes within forest ecosystems. In this study, the newly released 3D RTM Eradiate was extensively evaluated in a typical temperate deciduous forest using both virtual scenes, reconstructed with the quantitative structure model (QSM) by adding leaves to point clouds generated from terrestrial laser scanning (TLS) data, and real scenes monitored by Sentinel-2. The effects of structural parameters on reflectance were investigated through sensitivity analysis, and the performance of the 3D model was compared with the 5-Scale and PROSAIL radiative transfer models. The results showed that the Eradiate-simulated reflectance agreed well with the Sentinel-2 reflectance, especially in the visible and near-infrared spectral regions. Furthermore, the Eradiate simulations clearly showed that reflectance, particularly in the blue and shortwave infrared bands, is influenced by canopy structure. This study demonstrates that the Eradiate RTM, based on a 3D-explicit representation, is capable of accurate radiative transfer simulation in the temperate deciduous forest and hence provides a basis for understanding tree interactions and their effects on ecosystem structure and functions.

29 pages, 12094 KiB  
Article
Bitemporal Radiative Transfer Modeling Using Bitemporal 3D-Explicit Forest Reconstruction from Terrestrial Laser Scanning
by Chang Liu, Kim Calders, Niall Origo, Louise Terryn, Jennifer Adams, Jean-Philippe Gastellu-Etchegorry, Yingjie Wang, Félicien Meunier, John Armston, Mathias Disney, William Woodgate, Joanne Nightingale and Hans Verbeeck
Remote Sens. 2024, 16(19), 3639; https://doi.org/10.3390/rs16193639 - 29 Sep 2024
Cited by 2 | Viewed by 2881
Abstract
Radiative transfer models (RTMs) are often used to retrieve biophysical parameters from Earth observation data. RTMs with multi-temporal, realistic forest representations enable radiative transfer (RT) modeling of real-world dynamic processes. To achieve more realistic RT modeling of dynamic forest processes, this study presents the 3D-explicit reconstruction of a typical temperate deciduous forest in 2015 and 2022. We demonstrate for the first time the potential of bitemporal 3D-explicit RT modeling from terrestrial laser scanning for the forward modeling and quantitative interpretation of (1) remote sensing (RS) observations of leaf area index (LAI), fraction of absorbed photosynthetically active radiation (FAPAR), and canopy light extinction, and (2) the impact of canopy gap dynamics on light availability at explicit locations. Results showed that, relative to the 2015 scene, the hemispherical-directional reflectance factor (HDRF) of the 2022 forest scene decreased by a relative 3.8% and the leaf FAPAR increased by a relative 5.4%. At explicit locations where canopy gaps changed significantly between the two scenes, branch damage and gap closure significantly affected ground light availability only under diffuse light. This study provides the first bitemporal RT comparison based on 3D RT modeling, using one of the most realistic bitemporal forest scenes as structural input. Bitemporal 3D-explicit forest RT modeling allows spatially explicit modeling over time under fully controlled experimental conditions in one of the most realistic virtual environments, delivering a powerful tool for studying canopy light regimes as affected by dynamics in forest structure and for developing RS inversion schemes for forest structural change.
(This article belongs to the Section Forest Remote Sensing)

21 pages, 13544 KiB  
Article
Three-Dimensional Reconstruction of Forest Scenes with Tree–Shrub–Grass Structure Using Airborne LiDAR Point Cloud
by Duo Xu, Xuebo Yang, Cheng Wang, Xiaohuan Xi and Gaofeng Fan
Forests 2024, 15(9), 1627; https://doi.org/10.3390/f15091627 - 15 Sep 2024
Cited by 4 | Viewed by 1845
Abstract
Fine three-dimensional (3D) reconstruction of real forest scenes can provide a reference for forestry digitization and forest resource management applications. Airborne LiDAR technology provides valuable data for large-area forest scene reconstruction. This paper proposes a 3D reconstruction method for complex forest scenes containing trees, shrubs, and grass, based on airborne LiDAR point clouds. First, the vertical distribution characteristics of the forest are used to segment tree, shrub, and ground–grass points from the airborne LiDAR point cloud. From the ground–grass points, a ground–grass grid model is constructed. For the tree points, a method based on hierarchical canopy point fitting constructs a trunk model, and a crown model is built with the 3D α-shape algorithm; a shrub model is constructed directly from the shrub points with the same 3D α-shape algorithm. Finally, the tree, shrub, and ground–grass models are spatially combined to reconstruct the real forest scene. Across six forest plots in Hebei, Yunnan, and Guangxi provinces in China and in Baden-Württemberg, Germany, experimental results show that individual tree segmentation accuracy reaches 87.32%; shrub segmentation accuracy reaches 60.00%; the grass model height achieves an RMSE < 0.15 m; and the volumes of the shrub and tree models achieve R2 > 0.848 and R2 > 0.904, respectively. Furthermore, comparison with simplified point cloud and voxel models demonstrates that the proposed approach meets the demand for high-accuracy, lightweight modeling of large-area forest scenes.
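The first step, segmenting tree, shrub, and ground–grass points by vertical distribution, can be sketched as height stratification of a height-normalized cloud. The thresholds below are illustrative stand-ins; the paper derives its split from the plots' vertical distribution characteristics.

```python
import numpy as np

def stratify_by_height(points, grass_max=0.3, shrub_max=3.0):
    """Split a height-normalized point cloud into three vertical layers.

    points: (N, 3) with z = height above ground in meters. Thresholds
    are illustrative assumptions, not the paper's derived values.
    """
    z = points[:, 2]
    ground_grass = points[z <= grass_max]
    shrub = points[(z > grass_max) & (z <= shrub_max)]
    tree = points[z > shrub_max]
    return ground_grass, shrub, tree

rng = np.random.default_rng(4)
cloud = np.column_stack([rng.uniform(0, 50, 10000),
                         rng.uniform(0, 50, 10000),
                         rng.gamma(2.0, 2.0, 10000)])  # synthetic heights
gg, sh, tr = stratify_by_height(cloud)
print(len(gg), len(sh), len(tr))
```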

17 pages, 6686 KiB  
Article
Evaluating the Point Cloud of Individual Trees Generated from Images Based on Neural Radiance Fields (NeRF) Method
by Hongyu Huang, Guoji Tian and Chongcheng Chen
Remote Sens. 2024, 16(6), 967; https://doi.org/10.3390/rs16060967 - 10 Mar 2024
Cited by 11 | Viewed by 3602
Abstract
Three-dimensional (3D) reconstruction of trees has always been a key task in precision forestry management and research. Owing to the complex branch morphology of trees and the occlusion among stems, branches, and foliage, it is difficult to recreate a complete 3D tree model from two-dimensional images with conventional photogrammetric methods. In this study, based on tree images collected by various cameras in different ways, the Neural Radiance Fields (NeRF) method was used for dense reconstruction of individual trees, and the exported point cloud models were compared with point clouds derived from photogrammetric reconstruction and laser scanning. The results show that the NeRF method performs well in individual tree 3D reconstruction: it has a higher reconstruction success rate, reconstructs the canopy area better, and requires fewer images as input. Compared with the photogrammetric dense reconstruction method, NeRF has significant advantages in reconstruction efficiency and adapts well to complex scenes, but the generated point clouds tend to be noisy and of low resolution. The accuracy of tree structural parameters (tree height and diameter at breast height) extracted from the photogrammetric point cloud remains higher than that of parameters derived from the NeRF point cloud. The results of this study illustrate the great potential of the NeRF method for individual tree reconstruction and suggest new ideas and research directions for 3D reconstruction and visualization of complex forest scenes.
(This article belongs to the Section AI Remote Sensing)
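Tree height and, for comparison, a crown diameter proxy reduce to simple statistics once an individual tree cloud is normalized to height above ground. A minimal sketch; the crown fraction used to isolate the canopy is an illustrative assumption.

```python
import numpy as np

def tree_height_and_crown_diameter(points, crown_fraction=0.5):
    """Extract tree height and a crown diameter proxy from one tree cloud.

    points: (N, 3) with z relative to the ground. Height is the maximum
    z; crown diameter is the mean x/y extent of the points in the upper
    crown_fraction of the tree (an illustrative convention).
    """
    height = points[:, 2].max()
    crown = points[points[:, 2] >= crown_fraction * height]
    extent_x = crown[:, 0].max() - crown[:, 0].min()
    extent_y = crown[:, 1].max() - crown[:, 1].min()
    return height, 0.5 * (extent_x + extent_y)

rng = np.random.default_rng(5)
pts = np.column_stack([rng.normal(0, 1.5, 4000),
                       rng.normal(0, 1.5, 4000),
                       rng.uniform(0, 12.0, 4000)])  # synthetic tree cloud
print(tree_height_and_crown_diameter(pts))
```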

23 pages, 3382 KiB  
Article
A Point Cloud Registration Framework with Color Information Integration
by Tianyu Han, Ruijie Zhang, Jiangming Kan, Ruifang Dong, Xixuan Zhao and Shun Yao
Remote Sens. 2024, 16(5), 743; https://doi.org/10.3390/rs16050743 - 20 Feb 2024
Cited by 8 | Viewed by 3238
Abstract
Point cloud registration serves as a critical tool for constructing 3D environmental maps. Both geometric and color information are instrumental in differentiating point features: when points appear similar based solely on geometric features and are therefore hard to distinguish, the color information embedded in the point cloud carries significant distinguishing features. In this study, colored point clouds are utilized in the FCGCF algorithm, a refined version of the FCGF algorithm that incorporates color information. Moreover, we introduce the PointDSCC method, which adds color consistency to the PointDSC outlier removal method, enhancing registration performance when combined with the other pipeline stages. Comprehensive experiments across diverse datasets reveal that integrating color information into the registration pipeline markedly surpasses the majority of existing methodologies and generalizes robustly.
(This article belongs to the Special Issue 3D Point Clouds in Forest Remote Sensing III)
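The color-consistency idea can be sketched as a correspondence filter: geometric nearest-neighbor matches survive only if their colors also agree. A simplified stand-in for the role color plays in PointDSCC, not its actual formulation; the thresholds are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def color_consistent_matches(src_xyz, src_rgb, dst_xyz, dst_rgb,
                             max_dist=0.5, max_color=0.15):
    """Match each source point to its nearest target point, then reject
    pairs whose Euclidean RGB difference exceeds max_color.

    Colors are assumed normalized to [0, 1]. Returns (src_idx, dst_idx).
    """
    dist, idx = cKDTree(dst_xyz).query(src_xyz)
    color_diff = np.linalg.norm(src_rgb - dst_rgb[idx], axis=1)
    keep = (dist <= max_dist) & (color_diff <= max_color)
    return np.flatnonzero(keep), idx[keep]

rng = np.random.default_rng(6)
dst = rng.uniform(0, 10, (500, 3))
rgb = rng.uniform(0, 1, (500, 3))
src = dst + rng.normal(scale=0.02, size=dst.shape)  # small misalignment
s, d = color_consistent_matches(src, rgb, dst, rgb)
print(len(s))  # surviving correspondences
```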

22 pages, 7517 KiB  
Article
Hybrid 3D Reconstruction of Indoor Scenes Integrating Object Recognition
by Mingfan Li, Minglei Li, Li Xu and Mingqiang Wei
Remote Sens. 2024, 16(4), 638; https://doi.org/10.3390/rs16040638 - 8 Feb 2024
Cited by 1 | Viewed by 2740
Abstract
Indoor 3D reconstruction is particularly challenging due to complex scene structures involving object occlusion and overlap. This paper presents a hybrid indoor reconstruction method that segments the room point cloud into internal and external components and then reconstructs the room shape and the indoor objects in different ways. We segment the room point cloud into internal and external points under the assumption that room shapes are composed of large external planar structures. For the external points, we seek an appropriate combination of intersecting faces to obtain a lightweight polygonal surface model. For the internal points, we define a set of extracted features and train a random forest classification model to recognize and separate indoor objects; the corresponding computer-aided design (CAD) models are then placed at the target positions of the indoor objects, converting the reconstruction into a model fitting problem. Finally, the indoor objects and room shape are combined into a complete 3D indoor model. The method is evaluated on point clouds from different indoor scenes, achieving an average fitting error of about 0.11 m, and its performance is validated by extensive comparisons with state-of-the-art methods.
(This article belongs to the Special Issue Point Cloud Processing with Machine Learning)
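The internal/external split rests on detecting large planar structures. A minimal RANSAC plane-fit sketch illustrating that first step; the iteration count and inlier threshold are illustrative assumptions.

```python
import numpy as np

def ransac_plane(points, n_iter=500, threshold=0.02, rng=None):
    """Fit one dominant plane with RANSAC.

    Plane model: normal . p + d = 0. Points within `threshold` of the
    plane count as inliers; the best-supported plane wins.
    Returns (normal, d, inlier mask).
    """
    rng = np.random.default_rng() if rng is None else rng
    best_mask, best_model = None, None
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        normal /= norm
        d = -normal @ p0
        mask = np.abs(points @ normal + d) < threshold
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (normal, d)
    return best_model[0], best_model[1], best_mask

rng = np.random.default_rng(7)
wall = np.column_stack([rng.uniform(0, 5, 800), rng.uniform(0, 3, 800),
                        np.zeros(800)])               # points on z = 0
clutter = rng.uniform(0, 5, (200, 3))                  # indoor objects
n, d, inl = ransac_plane(np.vstack([wall, clutter]), rng=rng)
print(n.round(2), round(d, 2), inl.sum())
```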

15 pages, 3943 KiB  
Technical Note
Automated Workflow for High-Resolution 4D Vegetation Monitoring Using Stereo Vision
by Martin Kobe, Melanie Elias, Ines Merbach, Martin Schädler, Jan Bumberger, Marion Pause and Hannes Mollenhauer
Remote Sens. 2024, 16(3), 541; https://doi.org/10.3390/rs16030541 - 31 Jan 2024
Cited by 1 | Viewed by 2627
Abstract
Precision agriculture relies on understanding crop growth dynamics and plant responses to short-term changes in abiotic factors. In this technical note, we present and discuss a technical approach for cost-effective, non-invasive, time-lapse crop monitoring that automates the derivation of plant parameters, such as biomass, from 3D object information obtained via stereo images in the red, green, and blue (RGB) color space. The novelty of our approach lies in the automated workflow, which includes a reliable data pipeline for 3D point cloud reconstruction from dynamic scenes of RGB images with high spatio-temporal resolution. The setup is based on a permanently installed, rigid, calibrated stereo camera and was tested over an entire growing season of winter barley at the Global Change Experimental Facility (GCEF) in Bad Lauchstädt, Germany. Radiometrically aligned image pairs were captured several times per day from 3 November 2021 to 28 June 2022. We performed image preselection using a random forest (RF) classifier with a prediction accuracy of 94.2% to eliminate unsuitable (e.g., shadowed) images in advance, and obtained 3D object information for 86 records of the time series using the 4D processing option of the Agisoft Metashape software package, achieving mean standard deviations (STDs) of 17.3–30.4 mm. Finally, we determined vegetation heights by calculating cloud-to-cloud (C2C) distances between a reference point cloud, computed at the beginning of the time-lapse observation, and the point clouds measured in succession, with an absolute error of 24.9–35.6 mm in the depth direction. The calculated growth rates derived from the RGB stereo images match the corresponding reference measurements, demonstrating the adequacy of our method for monitoring geometric plant traits, such as vegetation heights and growth spurts, during stand development using automated workflows.
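The C2C height computation pairs each point of a later epoch with its nearest reference point and takes the vertical offset. A simplified sketch of that idea using a KD-tree in plan view; the real workflow compares full point clouds per record.

```python
import numpy as np
from scipy.spatial import cKDTree

def c2c_vegetation_height(reference, current):
    """Per-point vegetation height: vertical offset of each current point
    from its nearest reference point in the horizontal (x, y) plane.

    reference: (N, 3) bare-ground cloud from the start of the time lapse.
    current:   (M, 3) later cloud of the growing stand.
    """
    tree = cKDTree(reference[:, :2])           # match in plan view
    _, idx = tree.query(current[:, :2])
    return current[:, 2] - reference[idx, 2]

rng = np.random.default_rng(8)
ref = np.column_stack([rng.uniform(0, 2, 3000),
                       rng.uniform(0, 2, 3000),
                       np.zeros(3000)])
cur = ref + np.array([0.0, 0.0, 0.35])         # 35 cm of uniform growth
print(np.median(c2c_vegetation_height(ref, cur)))  # ~0.35
```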