Article

Automatic Registration of Terrestrial and UAV LiDAR Forest Point Clouds Through Canopy Shape Analysis

1. Key Laboratory of Forest Ecology and Environment of National Forestry and Grassland Administration, Ecology and Nature Conservation Institute, Chinese Academy of Forestry, Beijing 100091, China
2. Key Laboratory of Biodiversity Conservation of National Forestry and Grassland Administration, Ecology and Nature Conservation Institute, Chinese Academy of Forestry, Beijing 100091, China
3. College of Resources and Environment, Xingtai University, Xingtai 054001, China
4. Shandong Rail Transit Survey and Design Co., Ltd., Jinan 250101, China
5. Institute of Forest Resource Information Techniques, Chinese Academy of Forestry, Beijing 100091, China
6. Key Laboratory of Forestry Remote Sensing and Information System, National Forestry and Grassland Administration, Beijing 100091, China
* Author to whom correspondence should be addressed.
Forests 2025, 16(8), 1347; https://doi.org/10.3390/f16081347
Submission received: 30 June 2025 / Revised: 29 July 2025 / Accepted: 15 August 2025 / Published: 19 August 2025

Abstract

Accurate registration of multi-platform light detection and ranging (LiDAR) point clouds is essential for detailed forest structure analysis and ecological monitoring. In this study, we developed a novel two-stage method for aligning terrestrial and unmanned aerial vehicle LiDAR point clouds in forest environments. The method first performs coarse alignment using canopy-level digital surface models and Fast Point Feature Histograms, followed by fine registration with Iterative Closest Point. Experiments conducted in six forest plots achieved an average registration accuracy of 0.24 m within 5.14 s, comparable to manual registration but with substantially reduced processing time and human intervention. In contrast to existing tree-based methods, the proposed approach eliminates the need for individual tree segmentation and ground filtering, streamlining preprocessing and improving scalability for large-scale forest monitoring. The proposed method facilitates a range of forest applications, including structure modeling, ecological parameter retrieval, and long-term change detection across diverse forest types and platforms.

1. Introduction

Forest structure is a fundamental attribute of forest ecosystems, directly affecting key ecological functions such as productivity [1,2], biodiversity [3,4,5], hydrological regulation [6,7], and carbon cycling [8,9]. Accurate and comprehensive characterization of forest structure is essential for understanding forest dynamics [10], formulating management strategies [11], and addressing global climate change [12,13]. Advances in remote sensing technologies, particularly the widespread application of light detection and ranging (LiDAR), have enabled the detailed three-dimensional (3D) representation of forest structures. As a result, LiDAR has become an indispensable tool for forest monitoring, structural modeling, and ecological applications [14,15,16].
LiDAR is an active remote sensing technology that measures distance by calculating the time delay between emitted and received laser pulses, enabling acquisition of 3D coordinates with centimeter- or even millimeter-level precision [17]. Compared with traditional optical remote sensing and synthetic aperture radar, LiDAR offers sub-meter vertical resolution and canopy-penetration capability, making it particularly effective for capturing the vertical heterogeneity and spatial complexity of forest structures. For forest monitoring, LiDAR can provide hundreds to thousands of points per square meter (pts/m²), significantly improving the accuracy of structural parameter extraction. Currently, 3D structural data of forest plots are primarily acquired through terrestrial and unmanned aerial vehicle (UAV) LiDAR platforms [18,19,20]. Terrestrial LiDAR, deployed on the ground, conducts multi-station scans to generate extremely dense point clouds (thousands to tens of thousands of pts/m²), making it ideal for extracting fine-scale features such as diameter at breast height (DBH), stem curves, and branch structures [21,22,23]. However, its vertical perspective is prone to occlusion, often leading to incomplete canopy data. In contrast, UAV LiDAR provides a top-down perspective, enabling rapid data acquisition with typical point cloud densities ranging from hundreds to thousands of pts/m² [24,25]. It is well suited for retrieving tree height and crown size, although its sparse point density near the ground limits its ability to characterize understory structure [26]. Therefore, integrating multi-platform point clouds from both terrestrial and UAV LiDAR allows for complementary structural information, enabling complete and accurate 3D forest models from the understory to the canopy [27,28]. This multi-platform fusion approach provides essential data support for fine-scale, multi-layered forest structure analysis.
To address the challenge of registering terrestrial and UAV LiDAR point clouds, a variety of strategies have been developed, including methods assisted by artificial targets, the global navigation satellite system (GNSS), and target-free approaches [29,30]. Artificial target-based registration methods typically employ geometric markers, such as spheres, planes, or corner reflectors, to establish precise spatial correspondences between terrestrial and UAV LiDAR point clouds. However, these methods are often labor-intensive and time-consuming due to their reliance on manual setup and identification. GNSS-based methods utilize georeferencing data acquired from GNSS sensors on terrestrial and UAV LiDAR platforms to align the point clouds. However, GNSS signals can be severely degraded under dense forest canopies, leading to significant localization errors, sometimes on the order of meters, for terrestrial LiDAR [31,32,33].
Recently, target-free registration methods have gained increasing attention for terrestrial and UAV LiDAR point cloud alignment. These methods generally involve three stages: feature extraction, coarse registration, and fine registration [19,34]. Features such as keypoints, lines, or planes are extracted from both datasets, and correspondences are established based on geometric or semantic descriptors, followed by nonlinear optimization for final alignment. These methods eliminate the need for external information such as GNSS or artificial markers, improving automation [35]. In forest applications, tree-based registration methods are common, using trees as natural landmarks by identifying individual tree locations and performing spatial matching based on their location relationships. For example, Dai et al. [29] proposed a registration approach based on tree position using Meanshift clustering for spatial matching, while Li et al. [33] further introduced topological constraints of tree structures to enhance stability. Chen et al. [36] applied a hierarchical strategy that separates the point cloud into multiple stem layers for individual feature extraction and matching, improving performance in densely forested areas. These methods leverage the structural characteristics of trees, showing promising results in heterogeneous forest environments.
However, despite promising results, existing tree-based registration approaches still face significant challenges in complex forest environments. In particular, the reliance on accurate individual tree segmentation limits their applicability where canopy overlap, dense understory, or occlusions are common. Moreover, the high computational costs and sensitivity to missing or misidentified trees reveal a clear research gap for more robust and efficient registration methods [31].
To address these challenges, we propose a novel registration method that integrates Digital Surface Models (DSM) with local geometric features based on terrestrial and UAV LiDAR point cloud data. Our method first extracts DSMs from terrestrial and UAV LiDAR point clouds to represent the overall topography and spatial patterns of upper forest structures, providing stable macro-level structural constraints. Compared to raw 3D point clouds, DSMs retain only surface features, significantly reducing data volume and computational cost, which facilitates rapid processing and matching. Based on the DSM, we extract Fast Point Feature Histograms (FPFH) to obtain local geometric descriptors, enhancing the method’s capability to recognize structural shapes. The registration begins with Sample Consensus Initial Alignment (SAC-IA) for coarse alignment based on Fast Point Feature Histogram features, followed by refinement using Iterative Closest Point (ICP) optimization to improve final registration accuracy.
Compared with tree-based methods, the proposed method offers several advantages:
(1)
The DSM captures the height variation and spatial distribution of forest canopies, which exhibit consistency and distinguishability across platforms. Unlike methods requiring detailed identification of tree positions, DSM-based representation involves less data, greatly reducing storage and computational demands and enabling faster initial matching.
(2)
Traditional methods require precise tree location extraction, a complex and time-consuming process susceptible to noise, occlusion, and forest structural variability. The proposed method simplifies the process by aligning based on global morphology and local geometric features, enhancing automation.
(3)
Under the guidance of macro-scale DSM structures, the Fast Point Feature Histogram descriptors further enrich local geometric representations, enabling the registration process to balance global consistency and local alignment accuracy. This improves robustness and adaptability across diverse forest structures.

2. Study Area and Data

2.1. Study Area

The study was conducted in the Gaofeng Forest Farm, Guangxi Province, China (22°49′–23°15′ N, 108°08′–108°53′ E). The forest farm has a forest coverage rate of 87% and a total forest volume exceeding 5.7 million cubic meters. The area falls within the South Asian subtropical monsoon climate zone, characterized by warm and humid conditions. The mean annual temperature is 21.6 °C, with an average annual precipitation of 1300.6 mm, evaporation of 1643.4 mm, relative humidity of 79%, and total annual sunshine hours of approximately 1827 h. The terrain is dominated by hilly landforms, with elevations ranging from 150 to 400 m and slopes between 5° and 30°, sloping generally from northwest to southeast. The dominant tree species in the region are Eucalyptus and Pinus massoniana.
Six representative forest plots were established in this study area (Figure 1 and Table 1). Each plot has an area of 20 m × 30 m to capture the variability of forest stand structures across different forest types. Plots 1–3 are characterized by broad-leaved forests dominated by Eucalyptus, with Plots 1 and 2 exhibiting comparable stand densities of 1167 trees/ha. The average tree heights of Plots 1 and 2 are 9.94 m and 20.12 m, with corresponding mean DBH of 11.15 cm and 16.43 cm, and crown widths of 4.02 m and 4.71 m, respectively. Plot 3 displays a substantially higher tree density of 2533 trees/ha, with an average tree height of 12.39 m, DBH of 16.14 cm, and crown width of 3.10 m. In contrast, Plots 4–6 are dominated by coniferous forests, with Pinus massoniana as the predominant species. The average tree heights of Plots 4–6 are 9.05 m, 12.35 m, and 13.45 m, with corresponding DBHs of 15.63 cm, 9.49 cm, and 21.90 cm, respectively. Notably, crown width shows an increasing trend from Plot 4 (3.14 m) to Plot 6 (5.32 m), while tree densities vary at 1900 trees/ha, 2500 trees/ha, and 667 trees/ha, respectively. The selected plots collectively encompass a broad range of forest compositions and structural attributes, thereby providing a comprehensive basis for subsequent algorithm validation.

2.2. Data Acquisition

Terrestrial LiDAR data were collected in October 2022 using a Trimble TX8 terrestrial laser scanner (Trimble Inc., Westminster, CO, USA). The scanner has a horizontal field of view of 360° and a vertical field of view of 317°, with a maximum range of 340 m, and a ranging accuracy better than 2 mm. A multi-station scanning approach was employed: scanning stations were placed at the center and corners of each plot, with additional stations added based on tree distribution to ensure full coverage. The central station operated at a high-density setting, while corner and auxiliary stations used medium-density settings, corresponding to spatial resolutions of 5.7 mm and 22.6 mm at a distance of 30 m, respectively.
Spherical targets were deployed within each plot to facilitate multi-station alignment using an artificial target-based registration method. The registration accuracy between stations was maintained within 2 mm. After merging all stations, the final terrestrial LiDAR point cloud achieved an average density of approximately 30,000 points/m².
UAV LiDAR data were simultaneously acquired using the GENIUS 16 system (SureStar, Hefei, China). The system has a maximum range of 200 m (for targets with 20% reflectivity) or 250 m (for 60% reflectivity), with a scanning rate of 320,000 points per second, a field of view of 360° × 30°, and measurement accuracy of 10 cm vertically and 10–15 cm horizontally. During data collection, flights were conducted at an average altitude of 100 m above ground level, with a flight speed of 6 m/s, a flight strip spacing of 70 m, and a swath overlap of 50%. Under these conditions, the resulting point cloud exhibited an average density of 120 points/m².

3. Methods

3.1. Overview

This study proposes a novel method for registering forest point clouds acquired from terrestrial and UAV LiDAR. The workflow consists of four major steps (Figure 2). First, DSMs are extracted from terrestrial and UAV LiDAR point clouds to provide canopy surface information for initial registration. Second, Fast Point Feature Histograms are calculated from the DSMs to characterize local geometric features. Third, coarse registration is performed using the Sample Consensus Initial Alignment algorithm based on Fast Point Feature Histogram descriptors. Finally, the Iterative Closest Point algorithm is applied to refine the alignment and enhance registration accuracy.

3.2. DSM Construction

To effectively utilize terrain and canopy structure information from the forest LiDAR point clouds, DSMs are derived from both terrestrial and UAV LiDAR data. Unlike Digital Terrain Models (DTMs), which only describe ground height variations, or Canopy Height Models (CHMs), which represent canopy height variations, DSMs integrate both terrain and canopy surfaces. This composite structure ensures greater geometric stability and consistency, making it particularly suitable for forests with similar canopy surface morphology. Let the terrestrial and UAV LiDAR point clouds be denoted as:
$$P^s = \left\{\, p_i^s = (x_i^s, y_i^s, z_i^s) \,\right\}_{i=1}^{N_s}, \quad s \in \{U, T\}$$
where $s$ indicates the data source, with $U$ representing UAV LiDAR and $T$ representing terrestrial LiDAR, $N_s$ is the number of points, and $(x_i^s, y_i^s, z_i^s)$ is the 3D coordinate of the $i$-th point. The point cloud is divided into a 2D grid on the horizontal plane:
$$G_{m,n}^s = \left\{\, p_i^s \in P^s \;\middle|\; x_i^s \in [mr, (m+1)r),\ y_i^s \in [nr, (n+1)r) \,\right\}$$
where $m$ and $n$ are grid indices, and $r$ is the grid cell size. In each cell $G_{m,n}^s$, only the highest point is retained to construct the DSM:
$$V^s = \left\{\, v_i^s = (v_{x,i}^s, v_{y,i}^s, v_{z,i}^s) \,\right\}_{i=1}^{M_s}, \quad s \in \{U, T\}$$
where $M_s$ is the number of retained DSM points.
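As an illustrative sketch (not the authors' implementation), the grid-and-keep-highest DSM extraction above can be written in a few lines of NumPy; the function name `build_dsm` and the default cell size are assumptions:

```python
import numpy as np

def build_dsm(points: np.ndarray, r: float = 0.5) -> np.ndarray:
    """Keep the highest point per r x r horizontal grid cell.

    points: (N, 3) array of (x, y, z) coordinates.
    Returns an (M, 3) array of retained DSM points.
    """
    # Grid indices (m, n) for each point on the horizontal plane
    mn = np.floor(points[:, :2] / r).astype(np.int64)
    # Sort by cell (m, n), with ascending z inside each cell,
    # so the last entry of every cell is its highest point
    order = np.lexsort((points[:, 2], mn[:, 1], mn[:, 0]))
    mn_sorted = mn[order]
    is_last = np.ones(len(points), dtype=bool)
    is_last[:-1] = np.any(mn_sorted[1:] != mn_sorted[:-1], axis=1)
    return points[order][is_last]
```

Because only one point per cell survives, the DSM cloud is typically orders of magnitude smaller than the raw terrestrial scan, which is what makes the subsequent feature matching fast.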

3.3. FPFH Extraction

To characterize local geometric structures in the DSMs of terrestrial and UAV LiDAR point clouds, Fast Point Feature Histograms are extracted. The FPFH is a widely used 3D feature descriptor that summarizes the geometric relationship between a point and its neighbors based on surface normal vectors and spatial distribution. It is rotation- and scale-invariant and provides a compact representation of local shape. It was chosen for its computational efficiency and robustness in capturing local geometric properties, making it well suited for forest point cloud data involving complex natural scenes.
The extraction process includes three steps: normal vector computation, Simplified Point Feature Histogram (SPFH) construction, and FPFH computation. First, the normal vector $n_i^s$ for each point is calculated using the covariance matrix of its $k$-nearest neighbors:
$$T_i^s = \frac{1}{k} \sum_{j=1}^{k} \left( v_j^s - \bar{v}_i^s \right)\left( v_j^s - \bar{v}_i^s \right)^{\mathsf{T}}, \quad s \in \{U, T\}$$
where $\bar{v}_i^s$ is the centroid of the neighborhood. The eigenvector corresponding to the smallest eigenvalue of $T_i^s$ is taken as the normal $n_i^s$. Then, for each point pair $(v_i^s, v_j^s)$, a local Darboux frame is constructed with $u = n_i^s$, the orthogonal basis $g = u \times \frac{v_j^s - v_i^s}{\lVert v_j^s - v_i^s \rVert}$, and $w = u \times g$, and the SPFH collects the angular and spatial features over the $k$ neighbors:
$$\mathrm{SPFH}^s(v_i^s) = \left\{ \left( g \cdot n_j^s,\;\; u \cdot \frac{v_j^s - v_i^s}{\lVert v_j^s - v_i^s \rVert},\;\; \arctan\!\left( \frac{w \cdot n_j^s}{u \cdot n_j^s} \right) \right) \right\}_{j=1}^{k}, \quad s \in \{U, T\}$$
The final FPFH descriptor for point $v_i^s$ is computed by blending the SPFHs of its neighbors with distance-based weights:
$$\mathrm{FPFH}^s(v_i^s) = \mathrm{SPFH}^s(v_i^s) + \frac{1}{k} \sum_{j=1}^{k} \frac{1}{\omega_{i,j}^s}\, \mathrm{SPFH}^s(v_j^s), \quad s \in \{U, T\}$$
where $\omega_{i,j}^s$ is the distance between $v_i^s$ and $v_j^s$, acting as the weighting factor.
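The normal estimation and the per-pair Darboux-frame features above can be sketched as follows. This is an illustrative rendering of the standard SPFH geometry (normals assumed unit length, and the frame is undefined when the normal is parallel to the displacement), not the authors' code:

```python
import numpy as np

def estimate_normal(neighbors: np.ndarray) -> np.ndarray:
    """Normal of a neighborhood: eigenvector of the local covariance
    matrix associated with the smallest eigenvalue (sign ambiguous)."""
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered / len(neighbors)
    _, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return eigvecs[:, 0]

def pair_features(p_i, n_i, p_j, n_j):
    """Darboux-frame features (alpha, phi, theta) for one point pair,
    as collected by the SPFH. n_i and n_j must be unit normals."""
    d = p_j - p_i
    d = d / np.linalg.norm(d)
    u = n_i                            # frame axis 1: source normal
    g = np.cross(u, d)                 # frame axis 2: the orthogonal basis
    g = g / np.linalg.norm(g)
    w = np.cross(u, g)                 # frame axis 3
    return g @ n_j, u @ d, np.arctan2(w @ n_j, u @ n_j)
```

In a full FPFH pipeline the three pair features would be binned into histograms per point and then blended across neighbors with the inverse-distance weights of the formula above.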

3.4. SAC-IA Matching

Once Fast Point Feature Histogram descriptors are obtained for each grid point, coarse registration is carried out using the Sample Consensus Initial Alignment algorithm. Sample Consensus Initial Alignment integrates the RANSAC framework with Fast Point Feature Histogram-based features to identify the optimal rigid transformation that aligns the source point cloud to the target by maximizing inlier correspondences.
The process includes three main steps: feature matching, random sampling and transformation estimation, and inlier evaluation. First, a set of feature correspondences is established based on Fast Point Feature Histogram similarity:
$$C = \left\{ \left( v_i^T,\, v_j^U \right) \right\}$$
where each $v_j^U$ is the closest match to $v_i^T$ in feature space. Then, three matching pairs $(v_{i,u}^T, v_{j,u}^U)$, $u = 1, 2, 3$, are randomly sampled to estimate the rigid transformation $T = (R, t)$, where $R$ is a rotation matrix and $t$ a translation vector. The transformation is obtained by minimizing the sum of squared Euclidean distances:
$$T = \arg\min_{R,\, t} \sum_{u=1}^{3} \left\lVert R\, v_{i,u}^T + t - v_{j,u}^U \right\rVert^2$$
For each correspondence, if the transformed point satisfies
$$\left\lVert R\, v_i^T + t - v_j^U \right\rVert < \delta$$
it is counted as an inlier. The algorithm iterates to find the transformation that yields the highest number of inliers. The resulting transformation $T$ is then applied to the original point cloud $P^T$ for coarse registration.
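The inner step of this sampling loop, solving for the rigid transform from sampled correspondences and counting inliers, can be sketched with a closed-form Kabsch/SVD solve. The function names are illustrative, not taken from the study:

```python
import numpy as np

def rigid_from_pairs(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform (R, t) mapping src onto dst
    (Kabsch/SVD); src, dst: (m, 3) matched points, m >= 3."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    return R, cd - R @ cs

def count_inliers(R, t, src, dst, delta):
    """Correspondences whose residual after (R, t) is below delta."""
    resid = np.linalg.norm(src @ R.T + t - dst, axis=1)
    return int((resid < delta).sum())
```

The RANSAC-style outer loop would repeatedly draw three feature correspondences, call `rigid_from_pairs`, and keep the transform with the highest `count_inliers` score.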

3.5. Fine Registration Using ICP

Following the coarse registration via Sample Consensus Initial Alignment, fine registration is performed using the Iterative Closest Point algorithm to further improve accuracy. Iterative Closest Point is a classic method that minimizes the Euclidean distance between corresponding points in two point clouds, under the assumption of a reasonably good initial pose. It iteratively refines the transformation $T = (R, t)$ by repeatedly matching nearest neighbors and minimizing point-to-point distances. Applying Iterative Closest Point to the coarsely aligned $P^T$ results in a more precise registration outcome.
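A minimal point-to-point ICP of the kind described can be sketched as follows, assuming a reasonable initial pose. This is an illustration, not the implementation used in the study; brute-force matching stands in for the KD-tree search a production version would use:

```python
import numpy as np

def _kabsch(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    return R, cd - R @ cs

def icp(src, dst, n_iter=30, tol=1e-9):
    """Point-to-point ICP: alternate nearest-neighbor matching with a
    closed-form rigid solve until the mean match distance stabilizes."""
    cur = src.copy()
    prev_err = np.inf
    for _ in range(n_iter):
        # Nearest dst point for every current source point (brute force)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        idx = d2.argmin(axis=1)
        err = float(np.sqrt(d2[np.arange(len(cur)), idx]).mean())
        if prev_err - err < tol:        # converged (or stalled)
            break
        prev_err = err
        R, t = _kabsch(cur, dst[idx])
        cur = cur @ R.T + t
    return cur
```

Because ICP only corrects residual error, the quality of the coarse SAC-IA pose determines whether this loop converges to the global alignment rather than a local minimum.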

3.6. Evaluation Criteria

To comprehensively evaluate the performance of the proposed terrestrial and UAV LiDAR registration method, we assess both registration accuracy and efficiency. Following the methodology of Chen et al. [36], the Root Mean Square Error (RMSE) is used to quantify the average distance error between corresponding points in the overlapping areas of the registered point clouds. A lower RMSE indicates better registration accuracy. RMSE is calculated as:
$$\mathrm{RMSE} = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} \left\lVert T a_i - b_i \right\rVert^2 }$$
where $T a_i$ is the transformed point $a_i$ from the source cloud, and $b_i$ is its corresponding point in the target cloud.
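Once correspondences are fixed, the criterion is a one-liner; a minimal sketch (the function name is an assumption):

```python
import numpy as np

def registration_rmse(transformed_src: np.ndarray, dst: np.ndarray) -> float:
    """Root mean square Euclidean distance between transformed source
    points T(a_i) and their target correspondences b_i; both (n, 3)."""
    d2 = ((transformed_src - dst) ** 2).sum(axis=1)
    return float(np.sqrt(d2.mean()))
```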

4. Results

The parameters in this study were empirically determined. The DSMs were generated with a grid resolution of 0.5 m. Normal estimation for the DSM point clouds from terrestrial and UAV LiDAR data was conducted using a search radius of 1 m to capture local surface orientations. Fast Point Feature Histograms were computed with a feature search radius of 2 m to effectively characterize local geometric structures. The Sample Consensus Initial Alignment algorithm was applied with the following settings: a maximum of 50,000 iterations, three samples per iteration, a correspondence randomness of 5, a similarity threshold of 0.8, a maximum correspondence distance of 2.5 m, and an inlier fraction threshold of 0.25. These parameters were selected to balance computational efficiency and registration accuracy, ensuring a robust alignment.
To assess the applicability of the proposed registration method in complex forest environments, several representative plots with high vegetation density were selected. Registration experiments were conducted using terrestrial and UAV LiDAR point clouds in these challenging settings. Qualitative visual assessments show that, despite substantial differences in viewpoint, point density, and occlusion between the datasets, the method effectively aligns the point clouds in terms of overall spatial structure. Terrain contours exhibit consistent topography, with no evident misalignments, tilting, or inversions in overlapping regions. These results indicate that the method provides a reliable initial pose and transformation, establishing a robust foundation for subsequent fine registration (Figure 3).
Building on the coarse registration, the Iterative Closest Point algorithm was applied to further refine the alignment between terrestrial and UAV LiDAR point clouds, enhancing accuracy and local structural coherence. Results indicate that, even in densely vegetated environments, the alignment of key features, such as terrain surfaces, tree trunks, and main branches, is substantially improved. Tree trunks appear continuous, ground layers show no visible displacement, and transitions at overlapping boundaries are smooth and consistent. This fine registration step effectively corrects residual pose errors from the coarse stage, exhibiting strong geometric consistency and adaptability to complex forest structures (Figure 4).
Table 2 summarizes the fine registration results for six test plots, comparing the accuracy of the proposed method with manual registration, alongside the corresponding processing times. The proposed method achieves high spatial alignment accuracy across all plots, with an average RMSE of 0.24 m, only marginally higher than the 0.18 m obtained through manual registration. For example, in Plots 4 and 5, the automatic registration yields RMSEs of 0.35 m and 0.30 m, compared to 0.28 m and 0.25 m for manual registration. These results demonstrate that the proposed method maintains acceptable error levels while offering practical usability. In terms of efficiency, automatic registration completed each plot in an average of 5.14 s under standard computational settings, with times ranging from 4.53 to 6.29 s. In contrast, manual registration typically requires tens of minutes to complete, and the processing time tends to increase with the complexity of the forest plots.

5. Discussion

5.1. Analysis of Registration Performance

The proposed registration method demonstrates strong performance in both accuracy and computational efficiency, confirming its effectiveness and practical feasibility. In the coarse registration stage, reliable alignment was primarily attributed to two key factors. First, the DSMs extracted from the terrestrial and UAV LiDAR point clouds effectively captured canopy surface morphology, providing a stable geometric basis for initial alignment. Second, the integration of Fast Point Feature Histograms facilitated the extraction of local geometric descriptors at low computational cost, enhancing feature correspondence across platforms and improving the robustness and accuracy of coarse registration.
Building on the coarse stage, the fine registration process reduces the overall alignment error to 0.24 m, closely matching the accuracy of manual registration. This improvement stems from the coarse stage providing a well-initialized configuration, enabling the Iterative Closest Point algorithm to perform more precise point-to-point matching. Given the reliable initial pose, Iterative Closest Point also converges more rapidly and produces more stable results, effectively avoiding local minima and enhancing final registration accuracy. Notably, the complete registration pipeline achieves an average processing time of just 5.14 s, underscoring its high computational efficiency. In summary, the proposed method delivers decimeter-level accuracy and fast processing, making it well suited for high-precision, high-efficiency registration of multi-source forest point clouds. It offers a robust foundation for downstream tasks such as tree structure modeling and parameter estimation.

5.2. Comparison with Existing Studies

Many existing studies employ tree-based registration strategies that depend heavily on individual tree segmentation and localization. For instance, Dai et al. [29] report an actual registration time of approximately 30 s; however, the preceding tree detection step can require up to 20 min, constituting the primary time bottleneck. Although these methods generally deliver reliable accuracy, their overall efficiency is substantially constrained. In contrast, the proposed method bypasses individual tree segmentation by directly modelling the canopy surface and extracting geometric features from Digital Surface Models, enabling fast and efficient registration without explicit tree detection. Our approach completes processing in just 5.14 s per plot, significantly outperforming traditional methods in efficiency.
Due to variations in data platforms and forest types across studies, direct quantitative comparison is challenging. Therefore, typical accuracy ranges reported in the literature serve as a basis for indirect comparison. Existing tree-based registration methods generally achieve accuracies between 0.2 and 0.5 m. Our method attained an average RMSE of 0.24 m across multiple test plots, falling well within this range and meeting the accuracy requirements for extracting forest structural parameters such as tree height and crown width.
Shao et al. [31] proposed a registration method for multi-platform point clouds based on extracting edge features of tree crowns, demonstrating the potential of canopy structures for forest point cloud alignment. However, their approach relies on clearly delineated crown boundaries, rendering it sensitive to data quality and canopy gaps. In contrast, our method leverages the undifferentiated canopy surface morphology as the registration basis, obviating the need for distinct crown edges and thus excelling in dense forests with continuous canopy cover and minimal gaps. Furthermore, by not requiring ground filtering, our approach maintains robustness against terrain variability and sparse ground returns, common challenges in forest LiDAR datasets.
Overall, our registration method relies exclusively on canopy surface points, eliminating the need for ground filtering or individual tree segmentation and thereby reducing data preprocessing demands. This facilitates seamless integration into existing forest point cloud processing workflows, enhancing overall efficiency. Moreover, by utilizing general geometric features rather than species-specific traits or forest developmental stages, the method exhibits strong generalizability across diverse forest types, including both coniferous and broadleaf stands.

5.3. Potential for Extension and Limitations of the Proposed Method

Regarding extensibility, although the current algorithm targets registration of terrestrial and UAV LiDAR point clouds, the framework is adaptable to other platform combinations, such as mobile and UAV LiDAR. Furthermore, the method holds promise for integration with multi-temporal datasets, supporting applications in forest dynamics monitoring and change detection. However, the method may encounter limitations in areas with flat terrain or low canopy height variability. Since it relies on canopy surface undulation for feature matching, its effectiveness diminishes in homogeneous stands where the canopy lacks distinctive geometric variation. In such cases, reduced variability in DSMs may lead to decreased feature distinctiveness and increased registration errors. Additionally, in steep or highly fragmented terrains, abrupt elevation changes may compromise the continuity of DSMs, leading to reduced overlap quality between terrestrial and UAV LiDAR datasets. Although the proposed method demonstrates robustness under typical forest conditions, such complex topographies may necessitate supplementary preprocessing steps or the incorporation of additional features to maintain registration accuracy. In future work, we aim to comprehensively evaluate the influence of the proposed registration method on the final digital models and key forest structural parameters such as tree height, DBH, and crown width, thereby enabling precise plot-scale forest inventory and supporting high-resolution ecological monitoring.

6. Conclusions

This study proposes an efficient and robust method for registering terrestrial and UAV LiDAR point clouds in complex forest environments. By integrating Digital Surface Models (DSMs) with Fast Point Feature Histograms, the approach exploits both large-scale canopy morphology and local geometric features to achieve reliable initial alignment and precise fine registration. Experimental results demonstrate an average RMSE of 0.24 m, comparable to manual registration, with a processing time of only 5.14 s, underscoring its applicability for large-scale forest analyses requiring both accuracy and efficiency. Unlike traditional tree-based methods, this approach eliminates the need for individual tree segmentation and ground filtering, significantly streamlining the processing pipeline and enhancing automation. Future work will focus on improving performance in low-variability canopy conditions and exploring integration with instance segmentation for advanced forest analysis.

Author Contributions

Conceptualization, S.Y. and S.C.; Formal analysis, Z.T., B.Z. and J.D.; Funding acquisition, S.Y. and S.C.; Methodology, S.Y. and S.C.; Project administration, S.C.; Software, S.C.; Supervision, S.C.; Validation, S.Y. and S.C.; Visualization, S.Y.; Writing—original draft, S.Y.; Writing—review and editing, S.Y. and S.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Key R&D Program of China, grant number 2024YFF1308103; Natural Science Fund of China, grant number 42301510; the Department of Science and Technology of Hubei Province, grant number 2023AFB149; Biological Breeding-National Science and Technology Major Project, grant number 2023ZD0405605; National Key R&D Program of China, grant number 2023YFD2200800; the Anhui International Joint Research Center for Ancient Architecture Intellisencing and Multi-Dimensional Modeling, grant number GJZZX2024KF05.

Data Availability Statement

The data that support this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

Author Jie Dai was employed by the company Shandong Rail Transit Survey and Design Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figure 1. Distribution and forest structure of forest plots. (a) Provincial locations, (b) city-level distribution, (c) forest context and positions of plots 1–6, (d) field photos (d1–d6) showing structure and diversity.
Figure 2. Flowchart of the proposed registration method, including DSM construction, Fast Point Feature Histogram (FPFH) extraction, Sample Consensus Initial Alignment (SAC-IA) matching and Iterative Closest Point (ICP) registration.
Figure 3. Coarse registration between terrestrial and UAV LiDAR data in (af) Plots 1–6.
Figure 4. Fine registration between terrestrial and UAV LiDAR data in (af) Plots 1–6.
Table 1. Tree attribute summary of the six forest plots.

| Plot | Forest Type | Tree Height (m) | Diameter at Breast Height (cm) | Crown Width (m) | Tree Density (trees/ha) |
|---|---|---|---|---|---|
| 1 | Broad-leaved | 9.94 | 11.15 | 4.02 | 1167 |
| 2 | Broad-leaved | 20.12 | 16.43 | 4.71 | 1167 |
| 3 | Broad-leaved | 12.39 | 16.14 | 3.10 | 2533 |
| 4 | Coniferous | 9.05 | 15.63 | 3.14 | 1900 |
| 5 | Coniferous | 12.35 | 9.49 | 3.87 | 2500 |
| 6 | Coniferous | 13.45 | 21.90 | 5.32 | 667 |
Table 2. Comparison between the proposed and manual registration methods (RMSE in m).

| Plot | Manual | Proposed | Time (s) |
|---|---|---|---|
| Plot 1 | 0.14 | 0.26 | 4.53 |
| Plot 2 | 0.13 | 0.18 | 4.61 |
| Plot 3 | 0.15 | 0.15 | 6.03 |
| Plot 4 | 0.28 | 0.35 | 4.87 |
| Plot 5 | 0.25 | 0.30 | 4.53 |
| Plot 6 | 0.13 | 0.18 | 6.29 |
| Avg. | 0.18 | 0.24 | 5.14 |
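The column averages reported in Table 2 can be reproduced directly from the per-plot values; this small check confirms the 0.18 m (manual), 0.24 m (proposed), and 5.14 s figures quoted in the text.

```python
# Per-plot RMSE (m) and runtime (s), as reported in Table 2.
manual = [0.14, 0.13, 0.15, 0.28, 0.25, 0.13]
proposed = [0.26, 0.18, 0.15, 0.35, 0.30, 0.18]
time_s = [4.53, 4.61, 6.03, 4.87, 4.53, 6.29]

def mean(xs):
    return sum(xs) / len(xs)

print(round(mean(manual), 2), round(mean(proposed), 2), round(mean(time_s), 2))
# → 0.18 0.24 5.14
```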

Share and Cite

MDPI and ACS Style

Yu, S.; Tang, Z.; Zhang, B.; Dai, J.; Cai, S. Automatic Registration of Terrestrial and UAV LiDAR Forest Point Clouds Through Canopy Shape Analysis. Forests 2025, 16, 1347. https://doi.org/10.3390/f16081347

