Article

Improving Estimation of Tree Parameters by Fusing ALS and TLS Point Cloud Data Based on Canopy Gap Shape Feature Points

Rong Zhou, Hua Sun, Kaisen Ma, Jie Tang, Song Chen, Liyong Fu and Qingwang Liu

1 Research Center of Forestry, Remote Sensing & Information Engineering, Central South University of Forestry and Technology, Changsha 410004, China
2 Hunan Provincial Key Laboratory of Forestry Remote Sensing Based Big Data & Ecological Security, Changsha 410004, China
3 Key Laboratory of National Forestry & Grassland Administration on Forest Resources Management and Monitoring in Southern Area, Changsha 410004, China
4 National-Local Joint Engineering Laboratory of Geo-Spatial Information Technology, Hunan University of Science and Technology, Xiangtan 411201, China
5 Research Institute of Forest Resource Information Techniques, Chinese Academy of Forestry, Beijing 100091, China
* Author to whom correspondence should be addressed.
Drones 2023, 7(8), 524; https://doi.org/10.3390/drones7080524
Submission received: 11 May 2023 / Revised: 4 August 2023 / Accepted: 7 August 2023 / Published: 10 August 2023
(This article belongs to the Section Drones in Agriculture and Forestry)

Abstract

Airborne laser scanning (ALS) and terrestrial laser scanning (TLS) are two ways to obtain forest three-dimensional (3D) spatial information. Due to canopy occlusion and the features of different scanning methods, some of the forest point clouds acquired by a single scanning platform may be missing, resulting in an inaccurate estimation of forest structure parameters. Hence, the registration of ALS and TLS point clouds is an alternative for improving the estimation accuracy of forest structure parameters. Currently, forest point cloud registration is mainly conducted based on individual tree attributes (e.g., location, diameter at breast height, and tree height), but the registration is affected by individual tree segmentation and is inefficient. In this study, we proposed a method to automatically fuse ALS and TLS point clouds by using feature points of canopy gap shapes. First, the ALS and TLS canopy gap boundary vectors were extracted by the canopy point cloud density model, and the turning or feature points were obtained from the canopy gap vectors using the weighted effective area (WEA) algorithm. The feature points were then aligned, the transformation parameters were solved using the coherent point drift (CPD) algorithm, and the TLS point clouds were further aligned using the recovery transformation matrix and refined by utilizing the iterative closest point (ICP) algorithm. Finally, individual tree segmentations were performed to estimate tree parameters using the TLS and fusion point clouds, respectively. The results show that the proposed method achieved more accurate registration of ALS and TLS point clouds in four plots, with the average distance residuals of coarse and fine registration of 194.83 cm and 2.14 cm being much smaller compared with those from the widely used crown feature point-based method. Using the fused point cloud data led to more accurate estimates of tree height than using the TLS point cloud data alone. Thus, the proposed method has the potential to improve the registration of ALS and TLS point cloud data and the accuracy of tree height estimation.

1. Introduction

Forests are the mainstay of terrestrial ecosystems, accounting for 90% of terrestrial biomass and 86% of global vegetation carbon stocks, and play an important role in maintaining the stability of the ecological environment and mitigating the effects of global warming [1,2,3,4]. Forests consist of individual trees, and tree and stand parameters are thus important variables in forest resource inventories. Forest resource inventories have evolved from traditional field surveys to sample plot-based inventories combined with various remote sensing techniques [5,6]. In recent years, there has been substantial research in the area of using light detection and ranging (LiDAR), an active remote sensing technology, to obtain three-dimensional (3D) structural information of forests by quickly and accurately estimating tree and stand forest structural parameters (tree height, diameter at breast height (DBH), crown width, leaf area, biomass, etc.) [7,8,9]. However, there have been some big challenges in improving the estimation accuracy of tree and stand parameters using LiDAR data.
Terrestrial laser scanning (TLS) and airborne laser scanning (ALS) are commonly used in forest resource investigations [10]. TLS uses a bottom-up scanning approach and can be used to obtain detailed trunk and understory structure information for accurately estimating individual tree and plot level parameters, such as tree location, DBH, volume, and biomass. Using TLS, however, it is difficult to obtain complete information about tree top canopies due to the influence of branches, foliage, and shading [11,12]. ALS, with its top-down scanning approach, can be used to generate 3D spatial structures of tree canopies and more accurately estimate tree heights than TLS and traditional field surveys, but it is difficult to obtain information about lower layer canopies, such as tree trunks and shrubs [13,14,15]. Moreover, due to the limitations of scanning view angle and distance, point cloud data acquired by both TLS and ALS is partially missing, especially in forested areas with high canopy closure. To combine the advantages of TLS and ALS and improve the accuracy of estimating forest parameters, there is a strong need to register these two kinds of point cloud data from TLS and ALS [16,17,18].
TLS point cloud data often provide only relative coordinates, while ALS point cloud data, acquired with an onboard positioning and orientation system, provide geographic coordinates. This difference makes the registration of TLS and ALS point cloud data challenging. At present, point cloud registration is usually performed using methods based on geometric features (e.g., points, lines, and planes) [19,20,21,22]. In urban areas, obvious geometric feature primitives such as boundaries and corner points of buildings are usually extracted and regarded as key points [23,24]. For example, Cheng et al. achieved automatic registration of TLS and ALS point cloud data using building outline features [25]. Li et al. achieved semi-automatic registration of TLS and ALS point cloud data using building corners and matching boundaries [26]. However, due to the complexity and irregularity of forests, it is difficult to obtain such fine geometric features. Using artificial markers is one solution for the registration of TLS and ALS point cloud data in forested areas, but placing the markers is time-consuming, and the markers are often hardly detectable in dense forests. Some TLS instruments integrate external devices such as a global positioning system (GPS) receiver and an inertial measurement unit [27]. However, these external devices are often not suitable for registering point cloud data because of the remote locations of plots and the occlusion of forest canopies, especially in dense forests [28]. Hence, for the registration of point cloud data in forest environments, tree locations and tree attributes are commonly used to realize automatic registration [15,29,30,31]. Hauglin et al. used the normalized values of DBH and tree height to register ALS and TLS point cloud data [32]. In their method, tree position and DBH are first extracted from the TLS data, and tree position and tree height are obtained from the ALS data; for a given ALS search range, the corresponding TLS trees are then searched for, and the individual tree location distances and the normalized eigenvalues of DBH and tree height are used for matching. This method usually has low registration accuracy and requires manual adjustment of the search range. Polewski et al. used tree locations as the registration primitive and measured similarity by calculating horizontal and vertical distance features between trees: tree positions are detected in the backpack laser scanning (BLS) and ALS point clouds, the similarity between them is mapped to a weighted bipartite graph, and a best-matching strategy is then applied to realize the fusion of ALS and BLS point cloud data in forested areas [14]. Guan et al. proposed a novel framework to automatically fuse multiplatform LiDAR data based on tree locations, which assumes that the spatial distribution of trees differs among forests: a triangulated irregular network is constructed using Delaunay triangulation based on tree locations, and the corresponding tree pairs are found using a voting strategy based on the similarity of triangles [33]. Nevertheless, it is difficult to extract accurate and reliable feature point pairs due to the complexity of forests and the differences in scanning angles.
Most of the existing registration methods require the segmentation of individual trees and the extraction of tree locations, DBH, and height as additional features to evaluate similarity, which is time-consuming. In addition, in natural broadleaf forests with interlocking canopies, it is difficult to obtain accurate individual tree segmentations [34] and tree and stand parameters. As a result, although registration methods based on tree attributes work to some extent, the registration results of TLS and ALS point cloud data are prone to errors [35]. Thus, there is a strong need to improve the registration of TLS and ALS point cloud data. Dai et al. proposed a method of extracting crown feature points through canopy density analysis to fuse ALS and TLS point cloud data in forested areas. Low vegetation points in the TLS and ALS data are initially removed and the point clouds of tree crowns are retained, and the mean shift algorithm is used to cluster the canopy point clouds. The TLS canopy point clouds are then subsampled according to the ALS crown point cloud height histogram. Finally, the mean shift algorithm is used again to extract the feature points of the ALS and TLS crown point clouds, which serve as the registration primitives to fuse the ALS and TLS point cloud data [36]. This method has high registration accuracy, but it requires manually matching TLS and ALS tree crowns.
In this paper, we propose a novel automatic registration approach for TLS and ALS point cloud data based on the shape characteristics of canopy gaps. This method is simple and effective, and it greatly reduces the computation intensity for feature point extraction and the estimation of individual tree attributes. Additionally, this method can integrate ALS and TLS point cloud data in forested areas quickly and effectively. The canopy point cloud density model is first used to extract canopy gap boundary vectors. The weighted effective area (WEA) algorithm is then adopted to obtain the canopy gap shape feature points. Finally, the TLS and ALS feature points are aligned by utilizing the coherent point drift (CPD) algorithm to solve the transformation parameters. A coarse and fine registration are performed by applying transformation parameters to the TLS point cloud and using the iterative closest point (ICP) algorithm, respectively. Individual tree segmentation and parameter estimation are conducted using TLS point cloud data alone and fused point cloud data, respectively. The accuracy of individual tree parameter estimation is compared at different complexities of stand structures and conditions using field measurements as a reference to verify the proposed method.

2. Materials and Methods

2.1. Study Area

This study was conducted at the Lutou Experimental Forest Farm of the Central South University of Forestry and Technology, located in Pingjiang County, Hunan Province, China (Figure 1a), with coordinates of 113°51′ E to 113°58′ E and 28°31′ N to 28°38′ N. The study area covers 53.08 km2 with an elevation range of 124 to 1273 m. The elevation is high in the south and low in the north, and the topography is mainly characterized by low and medium mountains. The study area is located in a subtropical monsoon climate zone with an average annual temperature of 15 °C, an average air humidity of 82%, and an average annual precipitation of 1624.8 mm. The percentage of forest cover is over 95%. The area is characterized by subtropical evergreen broad-leaved forests, with the main tree species including oak, Schima, hemp, Chinese fir, pine, bamboo, camphor, etc., and the natural vegetation is lush. The forest types and stand structures are complex, including coniferous plantations, broad-leaved forests, coniferous and broad-leaved mixed forests, bamboo forests, shrubbery, and so on.

2.2. Data

2.2.1. Plot Data

Considering tree species and stand features, including developmental stage, planting distance, understory vegetation, and the number of tree species in the stand [37], and based on the judgment of the in situ investigation, four plots were selected according to the complexity of their stand structures and conditions and classified into three categories: plot 1—“simple stand”; plots 2 and 3—“less complex stands”; and plot 4—“complex stand”, as shown in Figure 1b. Each plot had a fixed size of 20 m × 20 m. The plots were surveyed in January 2021. There were 35, 61, 92, and 43 trees in the four plots, respectively, totaling 243 trees. Each tree in the plots was measured: parameters such as DBH and tree height were recorded (Table 1), and individual tree locations were obtained by Real Time Kinematic (RTK) positioning.

2.2.2. ALS Point Cloud Data and Pre-Processing

ALS point cloud data were collected at the end of December 2020 using a Hornet BB4 model UAV (https://www.huace.cn/, accessed on 8 August 2023) and a RIEGL VUX-1LR scanner (https://www.riegl.com/, accessed on 8 August 2023). The UAV flew at an altitude of approximately 250 m above ground and at a speed of 6 m/s. To increase the point cloud density, a tic-tac-toe flight pattern was used to increase the route overlap. Control points were set, and the true geographic coordinates of point clouds were obtained by post-differential processing. The scanning was carried out with a viewing angle of 330°, a frequency of 300 kHz, and a ranging accuracy of 10 mm. The average density of point clouds was greater than 100 pts/m2. The ALS data pre-processing included noise elimination, ground point classification, and digital elevation model (DEM) generation. Noise points such as bird flock points and low points were removed using LiDAR360 software (https://www.lidar360.com/, accessed on 8 August 2023). To classify the point cloud data accurately and efficiently, an improved progressive triangulated irregular network densification filtering algorithm was used [38]. The DEM was obtained by interpolating the ground points through a Delaunay triangular irregular net with a resolution of 0.3 m × 0.3 m.
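For readers who want to reproduce the DEM step outside LiDAR360, a minimal sketch is given below. It interpolates the classified ground points onto a 0.3 m grid with SciPy's Delaunay-based linear interpolator, which mirrors (but is not identical to) the TIN interpolation described above; the function name and array layout are assumptions.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def build_dem(ground_xyz: np.ndarray, cell: float = 0.3):
    """Interpolate classified ground points onto a regular grid (a simple TIN-style DEM).

    ground_xyz: (N, 3) array of ground-classified points (x, y, z).
    Returns the DEM raster and its (x_min, y_min, cell) georeferencing tuple.
    """
    x, y, z = ground_xyz[:, 0], ground_xyz[:, 1], ground_xyz[:, 2]
    # LinearNDInterpolator triangulates the ground points (Delaunay) and interpolates
    # linearly inside each triangle, mirroring a TIN-based DEM.
    interp = LinearNDInterpolator(np.c_[x, y], z)
    xi = np.arange(x.min(), x.max(), cell)
    yi = np.arange(y.min(), y.max(), cell)
    gx, gy = np.meshgrid(xi, yi)
    dem = interp(gx, gy)          # NaN outside the convex hull of the ground points
    return dem, (x.min(), y.min(), cell)
```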

2.2.3. TLS Point Cloud Data and Pre-Processing

TLS point cloud data were collected in mid-January 2021 using a Faro Focus3D X330 terrestrial 3D laser scanner (https://www.faro.com/, accessed on 8 August 2023). The scanner has a wavelength of 905 nm and a field of view of 360° and 305° in the horizontal and vertical directions, respectively. The measurement system error was approximately 2 mm at 25 m. The beam diameter at the exit was 3.8 mm, the beam divergence was 0.16 mrad, and the laser beam operated in 0.009° increments both horizontally and vertically. The scanning positions of each plot were set up at its four corners: northeast, southeast, southwest, and northwest. Depending on the plot, additional scans were taken within the plot to ensure the completeness of the data. Multiple target spheres were placed within each plot so that at least three identical spheres were visible from any two adjacent stations. Because the amount of TLS point cloud data was very large, the minimum point spacing method was used to thin the point clouds before applying the same pre-processing as for the ALS data, so that the distance between any two points in 3D space was not less than a set threshold.
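The minimum point spacing thinning can be sketched as a greedy KD-tree filter that keeps a point only if no previously kept point lies within the spacing threshold. This is an illustrative implementation, not the scanner or LiDAR360 routine, and the 1 cm default spacing is a placeholder.

```python
import numpy as np
from scipy.spatial import cKDTree

def thin_by_min_spacing(points: np.ndarray, min_dist: float = 0.01) -> np.ndarray:
    """Greedy thinning: keep a point only if no already-kept point lies within min_dist (m)."""
    tree = cKDTree(points)
    keep = np.ones(len(points), dtype=bool)
    for i in range(len(points)):
        if not keep[i]:
            continue
        # Drop all later points closer than the spacing threshold to this kept point.
        for j in tree.query_ball_point(points[i], r=min_dist):
            if j > i:
                keep[j] = False
    return points[keep]
```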

2.3. Point Cloud Registration

Point cloud data registration is the process of establishing the linkage between ALS point clouds and TLS point clouds by finding the matching points and then solving the transformation parameters from TLS to ALS point clouds. In this paper, a method based on the characteristics of canopy gap shape boundaries was proposed and used to fuse ALS and TLS point clouds in forested areas. This method consists of three steps: (1) generation of canopy gap boundary vectors from ALS and TLS point clouds; (2) acquisition of feature or key points of the canopy gap vectors using the WEA; and (3) transformation and data fusion of point clouds using the CPD method and the ICP algorithm. Figure 2 is the flow chart of the proposed method.
In this method, canopy gap boundary vectors are first obtained from the canopy point cloud density models. Based on the canopy gap vectors, a set of key points is then searched for and extracted using the WEA algorithm [39]. The correspondence between the ALS and TLS key points is found with the CPD method [40], and the transformation parameters from the relative coordinates of the TLS point clouds to the geographic coordinates of the ALS point clouds are derived. Finally, the transformation parameters are applied to the TLS point clouds, which is called coarse registration, and the transformed TLS point clouds are aligned with the ALS point clouds using the ICP algorithm, which is called fine registration [41].
(1)
Canopy gap generation
Canopy gaps are empty areas or polygons in forest canopy layers caused by the natural aging of trees, natural disasters, and logging [42,43], and can be expressed as the areas under the vertical projection of the gaps between tree canopies. At present, canopy gap identification methods for LiDAR data are based mainly on either canopy height models (CHM) or point clouds. CHM-based canopy gap identification includes the threshold method, the pixel-by-pixel method, and the object-oriented method [43,44,45]. Point cloud-based canopy gap recognition includes the voxel method [46] and clustering [47]. In this study, canopy gaps were extracted directly from the forest canopy point clouds, and the workflow consists of three steps (Figure 3): (1) canopy point cloud separation and canopy point cloud density model generation; (2) binary image conversion and canopy gap boundary vector generation; and (3) canopy gap size screening.
The pre-processed LiDAR point clouds are classified into ground points and nonground points. For both the ALS and TLS data, the heights of the nonground points above ground are calculated using the DEM, and the nonground points above a certain height threshold are selected as the canopy points; in this experiment, a threshold of 4 m was used for plots 1–3 and 7 m for plot 4. The canopy point cloud density is defined as the number of canopy points projected onto a horizontal plane per unit area. To generate the canopy point cloud density model, the canopy point clouds are projected onto the plane after height normalization by the ground points, a square of 0.3 m × 0.3 m is utilized as the target unit, and the number of points falling in each target unit is counted as the pixel value. The generated density model is transformed into a binary image by setting image pixel values greater than or equal to one as 1 and otherwise as 0. The canopy gaps are extracted from the converted binary images, raster-to-vector processing is performed to obtain the canopy gap boundary vectors, and the number of image pixels in each canopy gap vector is counted. To distinguish branch gaps from canopy gaps in a forest stand, an area consisting of at least nine image pixels is regarded as a canopy gap; this minimum number of pixels was determined by taking into account the plot types and the size of the target unit, and canopy gaps with fewer than nine pixels are eliminated. The canopy gap boundary vectors contain only planimetric coordinates and no elevation information, so the elevation values of the turning points of the canopy gaps are obtained from the DEM. The turning points are used as potential feature points.
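A compact sketch of the density-model and binarization steps is given below, assuming NumPy/SciPy. It rasterizes the canopy points into 0.3 m cells, flags empty cells, labels connected empty regions as candidate gaps, and drops gaps smaller than nine pixels; the raster-to-vector step (tracing gap boundaries) is left to a GIS tool and is not shown.

```python
import numpy as np
from scipy import ndimage

def canopy_gap_mask(canopy_xy: np.ndarray, cell: float = 0.3, min_pixels: int = 9):
    """Rasterise canopy points into a density grid, binarise it, and label canopy gaps.

    canopy_xy: (N, 2) planimetric coordinates of points above the canopy height threshold.
    Returns a labelled gap image (0 = canopy, 1..k = individual gaps) and the grid origin.
    """
    x0, y0 = canopy_xy.min(axis=0)
    cols = np.floor((canopy_xy[:, 0] - x0) / cell).astype(int)
    rows = np.floor((canopy_xy[:, 1] - y0) / cell).astype(int)
    density = np.zeros((rows.max() + 1, cols.max() + 1), dtype=int)
    np.add.at(density, (rows, cols), 1)            # canopy point count per 0.3 m cell

    empty = density == 0                           # binary image: True where no canopy return
    labels, n = ndimage.label(empty)               # 4-connected empty regions = candidate gaps
    sizes = ndimage.sum(empty, labels, index=np.arange(1, n + 1))
    for lab, size in enumerate(sizes, start=1):
        if size < min_pixels:                      # branch gaps (< 9 pixels) are discarded
            labels[labels == lab] = 0
    return labels, (x0, y0, cell)
```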
(2)
Feature point acquisition
The feature points are the most basic feature primitives in point clouds; they represent the spatial distribution characteristics of the point clouds and do not change with the coordinate system. All turning points of each canopy gap from the ALS and TLS point cloud data constitute its set of key points. Given a canopy gap, the transformation parameters between the coordinates of the corresponding key points of the ALS and TLS point clouds are obtained and regarded as the linkage between the point clouds, so that the correspondence between the point clouds can be obtained quickly. This strategy avoids operating on and computing with massive amounts of data. The number and accuracy of the feature points directly affect the efficiency and results of point cloud data registration. To improve accuracy, the turning points are refined using the bottom-up WEA algorithm [39]. The algorithm assesses the canopy gap's turning points by evaluating their importance: three adjacent turning points form a triangle, and its WEA represents the importance of the middle turning point (Figure 4).
In Figure 4, triangle ABC has A as its vertex and BC as its base; X is the length of the base BC; H is the length of the perpendicular AD from vertex A to base BC; and L is the distance from vertex A to the midpoint E of the base. Usually, the area is calculated as S (Equation (1)), but using this plain area means that two turning points (the vertices of a very tall triangle and of a very flat triangle with the same area) would be assigned the same importance. In this study, the algorithm used to calculate the area considers the shape characteristics of the triangle, making the importance assigned to each vertex more reasonable. Three weight factors, namely flatness, skewness, and convexity, are utilized to describe the shape features of the triangle. The WEA is calculated using Equation (2), where W_Flat, W_Skew, and W_Convex are the flatness, skewness, and convexity weight functions, respectively. The parameters H and X are used to calculate flatness, the ratio of H to L is employed to measure the skewness of the triangle, and convexity is determined by the direction of the vertex order relative to the predefined vertex order.
$$S = \frac{1}{2} X H, \quad (1)$$
$$\mathrm{WEA} = W_{\mathrm{Flat}} \times W_{\mathrm{Skew}} \times W_{\mathrm{Convex}} \times S, \quad (2)$$
$$W_{\mathrm{Flat}} = \frac{\frac{4M}{\pi}\arctan\!\left(\frac{H}{K_S X}\right) + N}{M + N}, \quad H \le K_H, \quad (3)$$
$$W_{\mathrm{Skew}} = \frac{S_M + H/L}{S_M + 1}, \quad \frac{H}{L} \le S_K, \quad (4)$$
$$W_{\mathrm{Convex}} = \begin{cases} C, & \text{convex} \\ 1, & \text{concave,} \end{cases} \quad (5)$$
where C is a positive constant; the parameters satisfy M > 0, N ≥ 0 (M and N control the maximum range of the weight), K_S > 0, K_H ≥ 1, S_M ≥ 0, and S_K ≥ 1. The algorithm first calculates, for every three adjacent vertices, the area of the triangle formed with the middle point as the vertex and the line connecting the two end points as the base, using the parameters X, H, and L. The WEA of the triangle is then calculated. If the WEA is less than the threshold value of 0.5, the middle vertex is temporarily removed and the WEA is re-calculated for its adjacent vertices. The calculation process is repeated until the WEA of the triangle formed by every three adjacent vertices of the retained points is greater than the threshold value, and the retained turning points are selected as the key or feature points. The algorithm results in one set of key points for the ALS canopy gaps and another set of key points for the TLS canopy gaps for registration (Figure 5).
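The following sketch illustrates the bottom-up WEA pruning under the weight functions as reconstructed in Equations (2)-(5). The parameter values (M, N, K_S, S_M, C) are illustrative placeholders rather than the values used in the study, the piecewise bounds K_H and S_K are omitted for brevity, and the 0.5 threshold follows the text.

```python
import numpy as np

def wea(prev_pt, mid_pt, next_pt, M=1.0, N=0.0, KS=1.0, SM=1.0, C=1.0, ccw_is_convex=True):
    """Weighted effective area of the triangle (prev, mid, next); mid is the candidate vertex."""
    B, A, Cpt = np.asarray(prev_pt), np.asarray(mid_pt), np.asarray(next_pt)
    base = Cpt - B
    X = np.linalg.norm(base)                               # base length |BC|
    cross = base[0] * (A - B)[1] - base[1] * (A - B)[0]    # signed twice-area of the triangle
    S = 0.5 * abs(cross)                                   # plain effective area
    if X == 0:
        return 0.0
    H = abs(cross) / X                                     # height of A above BC
    L = np.linalg.norm(A - 0.5 * (B + Cpt)) or 1e-12       # distance to the base midpoint
    w_flat = (4 * M * np.arctan(H / (KS * X)) / np.pi + N) / (M + N)
    w_skew = (SM + H / L) / (SM + 1)
    convex = cross > 0 if ccw_is_convex else cross < 0
    w_convex = C if convex else 1.0
    return w_flat * w_skew * w_convex * S

def prune_turning_points(poly, threshold=0.5):
    """Iteratively drop the least important vertex until every remaining WEA >= threshold."""
    pts = [np.asarray(p, dtype=float) for p in poly]       # closed gap boundary polygon
    while len(pts) > 3:
        areas = [wea(pts[i - 1], pts[i], pts[(i + 1) % len(pts)]) for i in range(len(pts))]
        i_min = int(np.argmin(areas))
        if areas[i_min] >= threshold:
            break
        pts.pop(i_min)                                      # remove and re-evaluate neighbours
    return np.asarray(pts)
```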
(3)
Point cloud transformation
Coarse registration is the process of finding the correspondence between the two sets of key points from the ALS and TLS point clouds, solving the transformation parameters, and applying the parameters to the TLS point cloud. Based on the sets of key points, the CPD algorithm is used to obtain the transformation relationship between the ALS and TLS canopy gap feature points [40]. The algorithm treats the alignment of the two sets of key points as a probability density estimation problem: the transformation parameters are applied to the centroids of a Gaussian mixture model (GMM) as a whole, and the likelihood is maximized iteratively until the two sets of key points are aligned. The transformation of the GMM centroid positions is defined as $T(Y) = Y R^T + t$, where R is the rotation matrix and t is the translation vector. The set of TLS key points is regarded as the GMM centroids, and the set of ALS key points is regarded as the data points generated by the GMM. The posterior probabilities of the GMM probability density function are calculated using Bayes' theorem as the corresponding probabilities between the TLS and ALS key points, so that the registration problem becomes a maximum likelihood parameter estimation problem. The CPD algorithm uses the Expectation–Maximization (EM) algorithm to maximize the likelihood function and obtain the rigid transformation parameters [48]. The EM algorithm is divided into an E-step and an M-step: the E-step calculates the correspondence probabilities and the expectation for the sets of TLS and ALS key points, and the M-step re-estimates the parameters by maximizing the expectation; the two steps are iterated until the maximum likelihood alignment is achieved.
The algorithm proceeds as follows. Let $X_{N \times D} = (x_1, x_2, \ldots, x_N)$ denote the set of ALS feature points and $Y_{M \times D} = (y_1, y_2, \ldots, y_M)$ the set of TLS feature points extracted from a window, where N and M are the numbers of points and D is the dimension.

Initialization:
$$R = I, \quad t = 0, \quad 0 \le \omega \le 1, \quad \sigma^2 = \frac{1}{DNM} \sum_{n=1}^{N} \sum_{m=1}^{M} \left\| x_n - y_m \right\|^2 .$$

The EM algorithm iterates until it converges. E-step: calculate P,
$$P_{mn} = \frac{\exp\!\left( -\frac{1}{2\sigma^2} \left\| x_n - (R y_m + t) \right\|^2 \right)}{\sum_{k=1}^{M} \exp\!\left( -\frac{1}{2\sigma^2} \left\| x_n - (R y_k + t) \right\|^2 \right) + (2\pi\sigma^2)^{D/2} \frac{\omega}{1-\omega} \frac{M}{N}} .$$

M-step: solve for R, t, and $\sigma^2$ by maximizing the expectation. Compute the matrix A:
$$N_P = \mathbf{1}^T P \mathbf{1}, \quad \mu_x = \frac{1}{N_P} X^T P^T \mathbf{1}, \quad \mu_y = \frac{1}{N_P} Y^T P \mathbf{1},$$
$$A = \left( X - \mathbf{1}\mu_x^T \right)^T P^T \left( Y - \mathbf{1}\mu_y^T \right) .$$

Compute the singular value decomposition of A:
$$A = U S V^T ,$$
$$R = U C V^T, \quad C = d\!\left(1, \ldots, 1, \det(U V^T)\right), \quad t = \mu_x - R \mu_y ,$$
$$\sigma^2 = \frac{1}{N_P D} \left( \mathrm{tr}\!\left( \left( X - \mathbf{1}\mu_x^T \right)^T d\!\left(P^T \mathbf{1}\right) \left( X - \mathbf{1}\mu_x^T \right) \right) - \mathrm{tr}\!\left( A^T R \right) \right) .$$

Transform the point clouds:
$$T(Y) = Y R^T + t ,$$
where $\mathbf{1}$ is a column vector of ones, $d(\cdot)$ denotes a diagonal matrix, P is an $M \times N$ matrix containing the correspondence probabilities of all pairs of TLS and ALS points, and each element $P_{mn}$ equals the posterior probability $P(m \mid x_n)$. The EM algorithm converges when the maximum number of iterations of 200 is reached or when the difference in the log-likelihood function between two consecutive iterations is less than a threshold of $1 \times 10^{-5}$.
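As a concrete illustration of the E- and M-steps summarized above, a compact NumPy implementation of rigid CPD (without scaling) is sketched below. The outlier weight w = 0.1 is an assumed value, while the iteration cap of 200 and the 1 × 10⁻⁵ log-likelihood tolerance follow the text; this is a sketch, not the authors' implementation.

```python
import numpy as np

def cpd_rigid(X, Y, w=0.1, max_iter=200, tol=1e-5):
    """Rigid CPD: align source key points Y (TLS, M x D) to target X (ALS, N x D).

    Returns the rotation R, translation t, and the transformed points Y @ R.T + t.
    """
    N, D = X.shape
    M = Y.shape[0]
    R, t = np.eye(D), np.zeros(D)
    sigma2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2) / (D * N * M)
    prev_nll = np.inf
    for _ in range(max_iter):
        # E-step: posterior correspondence probabilities P[m, n] = P(m | x_n).
        TY = Y @ R.T + t
        dist2 = np.sum((X[None, :, :] - TY[:, None, :]) ** 2, axis=2)        # (M, N)
        num = np.exp(-dist2 / (2.0 * sigma2))
        c = (2.0 * np.pi * sigma2) ** (D / 2.0) * w / (1.0 - w) * M / N
        den = num.sum(axis=0) + c
        P = num / den
        # Convergence check on the negative log-likelihood (up to constants).
        nll = -np.sum(np.log(den)) + N * D / 2.0 * np.log(sigma2)
        if abs(prev_nll - nll) < tol:
            break
        prev_nll = nll
        # M-step: closed-form rigid update of R, t and sigma^2.
        Np = P.sum()
        mu_x = X.T @ P.sum(axis=0) / Np
        mu_y = Y.T @ P.sum(axis=1) / Np
        Xh, Yh = X - mu_x, Y - mu_y
        A = Xh.T @ P.T @ Yh                                                  # (D, D)
        U, _, Vt = np.linalg.svd(A)
        Cdiag = np.eye(D)
        Cdiag[-1, -1] = np.linalg.det(U @ Vt)                                # keep a proper rotation
        R = U @ Cdiag @ Vt
        t = mu_x - R @ mu_y
        sigma2 = (np.sum(P.sum(axis=0) * np.sum(Xh ** 2, axis=1)) - np.trace(A.T @ R)) / (Np * D)
        sigma2 = max(sigma2, 1e-12)
    return R, t, Y @ R.T + t
```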
After the CPD-based coarse alignment is completed, the ICP algorithm is used to finely align the coarsely aligned TLS point clouds with the ALS point clouds to improve the alignment accuracy. This algorithm calculates the transformation parameters between the point clouds by finding the nearest point pairs in the two sets of key points to minimize the error function. After the fine alignment, the two point clouds are registered in the absolute geographic coordinate system of the ALS data.
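The fine registration can be run with any point-to-point ICP implementation; the sketch below uses Open3D purely for illustration (the paper does not name a library), seeds ICP with the CPD coarse transform, and the 0.5 m correspondence distance is a hypothetical setting.

```python
import numpy as np
import open3d as o3d

def icp_refine(tls_xyz, als_xyz, R, t, max_dist=0.5):
    """Refine the CPD coarse alignment with point-to-point ICP.

    tls_xyz / als_xyz: (N, 3) arrays; R, t: the coarse rigid transform from the CPD step.
    Returns the refined 4x4 rigid transformation.
    """
    src = o3d.geometry.PointCloud()
    src.points = o3d.utility.Vector3dVector(np.asarray(tls_xyz, dtype=np.float64))
    tgt = o3d.geometry.PointCloud()
    tgt.points = o3d.utility.Vector3dVector(np.asarray(als_xyz, dtype=np.float64))
    init = np.eye(4)
    init[:3, :3], init[:3, 3] = R, t                # seed ICP with the coarse registration
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_dist, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation
```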

2.4. Individual Tree Segmentation Method

The TLS and fused point cloud data were normalized before individual tree segmentation to remove the effect of terrain. In this study, we used the comparative shortest-path algorithm, which is based on ecological theory, to separate individual tree point clouds and extract individual tree parameters [49]. In this method, tree trunks are identified first; that is, each trunk is detected by fitting the point clouds at breast height with the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm or a circle detection method [50]. Trees tend to use the shortest path to optimize resource allocation when transporting water and nutrients; therefore, the shortest-path distance from each point to the detected trunks is normalized according to the size of the DBH, and the normalized shortest paths are compared to assign each point to the tree whose root it is closest to. The algorithm has been integrated into the LiDAR360 software, and the DBH and tree height of both the registered point clouds and the TLS point clouds were extracted automatically using LiDAR360.
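To illustrate the trunk detection step, the sketch below clusters the breast-height slice of a height-normalized point cloud with scikit-learn's DBSCAN and returns one planimetric seed per stem; the eps and min_samples values are illustrative and do not correspond to the settings inside LiDAR360.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def detect_trunks(normalized_xyz, slice_height=1.3, slice_width=0.1, eps=0.2, min_samples=20):
    """Seed individual-tree segmentation by clustering the breast-height slice with DBSCAN.

    normalized_xyz: (N, 3) height-normalised points. Returns one (x, y) seed per stem cluster.
    """
    z = normalized_xyz[:, 2]
    band = normalized_xyz[np.abs(z - slice_height) <= slice_width]   # points around 1.3 m
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(band[:, :2])
    seeds = [band[labels == k, :2].mean(axis=0) for k in set(labels) if k != -1]
    return np.asarray(seeds)                                          # one planimetric seed per trunk
```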

2.5. Accuracy Evaluation

2.5.1. Accuracy Evaluation of Registration Results

The performance of the method in terms of registration was evaluated according to the distance residual by comparing the alignment results of the algorithm with those obtained from manually selected feature points. Given a point $p_i$ in the source point cloud, the registration error can be calculated by the following equation:
$$d_i = \left\| T\!\left(p_i; \theta_{m_1}\right) - T\!\left(p_i; \theta_{m_2}\right) \right\| ,$$
where $\theta_{m_1}$ is the transformation parameter set obtained using the algorithm-extracted feature points, and $\theta_{m_2}$ is the transformation parameter set obtained from the manually selected feature points. Two hundred points are randomly selected from the TLS point clouds to calculate the coarse and fine registration distance residuals.
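The evaluation can be reproduced with a few lines of NumPy: apply both transformations (algorithm-derived and manually derived) to 200 randomly sampled TLS points and take the point-wise distances. The function below is a sketch with assumed array conventions.

```python
import numpy as np

rng = np.random.default_rng(0)

def registration_residuals(tls_xyz, R1, t1, R2, t2, n_samples=200):
    """Distance residuals between the algorithm-derived transform (R1, t1) and the
    manually derived reference transform (R2, t2), evaluated on randomly sampled TLS points."""
    idx = rng.choice(len(tls_xyz), size=n_samples, replace=False)
    p = tls_xyz[idx]
    d = np.linalg.norm((p @ R1.T + t1) - (p @ R2.T + t2), axis=1)
    return d.mean(), d
```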

2.5.2. Tree Parameter Estimation Accuracy

The commonly used coefficient of determination (R2), Root Mean Square Error (RMSE), and relative RMSE (rRMSE) were used to evaluate the accuracy of the individual tree parameter estimates derived from LiDAR. R2 measures the degree of correlation between the estimated and measured values; the larger the value, the better the estimates explain the measurements. RMSE reflects the magnitude of the differences between the field measurements and the estimates, with larger values indicating larger deviations between the observed and estimated values. The metrics are calculated as follows:
$$R^2 = 1 - \frac{\sum_{i=1}^{m} \left( \hat{y}_i - y_i \right)^2}{\sum_{i=1}^{m} \left( y_i - \bar{y} \right)^2} ,$$
$$\mathrm{RMSE} = \sqrt{\frac{1}{m} \sum_{i=1}^{m} \left( \hat{y}_i - y_i \right)^2} ,$$
$$\mathrm{rRMSE} = \frac{\mathrm{RMSE}}{\bar{y}} ,$$
where $\hat{y}_i$ and $y_i$ denote the estimated and measured values, respectively, $\bar{y}$ denotes the mean of the measured values, and m is the number of samples.
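The three metrics can be computed directly from paired estimates and field measurements, for example as in this short sketch:

```python
import numpy as np

def accuracy_metrics(estimated, measured):
    """R^2, RMSE and rRMSE exactly as defined above (measured mean in the denominators)."""
    y_hat, y = np.asarray(estimated, float), np.asarray(measured, float)
    resid = y_hat - y
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
    rmse = np.sqrt(np.mean(resid ** 2))
    return r2, rmse, rmse / y.mean()
```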

3. Results

3.1. Registration Results

The registration results of four plots are presented, including the plot global registration and profile results (Figure 6).
A quantitative evaluation of the registration accuracy of the proposed method is performed using the evaluation criteria described in Section 2.5.1. Figure 6a shows the global registration results for four plots. ALS and TLS ground points are fully overlapping, and there is no stratification in Figure 6b. The experimental results demonstrate that the proposed registration method can perform the registration of ALS and TLS point clouds successfully.
Table 2 lists the distance residuals of the aligned point clouds after CPD coarse registration and ICP fine registration. The average distance residual of coarse registration is 194.83 cm, which implies that the CPD method-derived canopy gap shape feature points provide a good initial alignment for the ALS and TLS point cloud registration. After the ICP fine registration, the average distance residual is 2.14 cm, which indicates that the proposed registration method has the great potential of accurately registering and fusing the ALS and TLS point cloud data.

3.2. Tree Parameter Estimation Results

In Table 3, linear regressions of the estimated values of tree parameters against the measured values were conducted for each plot. When the TLS point cloud data were used alone, the R2 values of the DBH estimates with the measurements for four plots were high, and there were also no obvious differences in the R2 values among the plots. However, the relative RMSE (rRMSE) value of plot four—complex stand—is much larger than those of plots one, two, and three. Overall, the rRMSE values of estimating DBH using TLS point cloud data are smaller than 10%, which indicates that using TLS point cloud data alone can lead to accurate estimates of tree DBH.
When the TLS point cloud data were used alone, results that were similar to those of DBH were obtained for the estimation of tree height (Table 3 and Figure 7). However, there is an obvious trend: as the complexity of the stand structures and conditions increases from plot one—a simple stand—to plots two and three—less complex stands—and to plot four—a complex stand—the R2 values of the tree height estimates with the measurements decrease and the rRMSE values increase. Compared with those of DBH estimates, given the complexity of stand structure and condition, the R2 value of tree height estimates against its observations is smaller and its rRMSE is greater. This implies that using TLS point cloud data alone, estimating tree height is more difficult than estimating DBH.
When the ALS and TLS point cloud data were fused and used for estimating tree height (Table 3 and Figure 7), given a plot, that is, the complexity of stand structure and condition, the R2 value of tree height estimates against its observations is obviously increased, and its rRMSE decreases compared with those using the TLS point cloud data alone. Moreover, the trend of the estimation accuracy decreasing with the increasing complexity of stand structure and condition from plot one—simple stand—to plots two and three—less complex stand—and to plot four—complex stand—is obviously noticed. The rRMSE values smaller than 6% for plot one—a simple stand—and for plots two and three—a less complex stand— indicate that the tree heights of the simple and less complex stands can be very accurately estimated using the fused point cloud data. However, it is still difficult to estimate the tree heights of the complex stand using the fused point cloud data due to its high rRMSE value of 19.27%.

4. Discussion

4.1. Comparison of the Proposed Method with Other Approaches for Registration Accuracy

The proposed registration method is compared with the crown feature points-based method [36], in which ALS and TLS point cloud fusion was implemented by extracting crown feature points through canopy density analysis. The mean shift method was used to obtain crown feature points from ALS and TLS data. A subsample of TLS data is needed to align the ALS and TLS point distributions before the TLS feature point extraction. For more details of the extraction method of tree crown feature points, the reader can refer to the study of Dai and Yang [36]. The registration distance residuals for the method are shown in Table 4.
The average distance residual of the coarse registration from the CPD method is 423.71 cm, and the average distance residual for the final registration results is 2.47 cm. Comparing the results in Table 2 and Table 4, it was found that the proposed method outperforms the crown feature points method for all four plots.
Compared with the crown feature point method, the proposed method has certain advantages in feature point extraction. Firstly, the feature point extraction steps of the proposed method are simpler. The crown feature point-based method needs the use of the mean shift method twice and a subsample of TLS point cloud data since there are differences in the point densities of the ALS and TLS crowns. These differences can be neglected in the proposed method. Secondly, the crown feature point-based method requires finding the correspondence between ALS and TLS crowns manually before subsampling TLS crowns. Manual involvement is not needed in the proposed method. Finally, the proposed method is more efficient because of the low computational intensity of obtaining the feature points.

4.2. Comparison of the Proposed Method with Other Approaches for Registration Performance

Generally, the feature points in LiDAR point clouds of forested areas are local features, and the point clouds themselves constitute a massive volume of three-dimensional data, so a large amount of computation is needed in the registration of point clouds and the estimation of tree parameters. This is especially true for TLS point clouds. The computation time for searching for tree correspondences based on graph matching can be expressed as O(n⁴) [30]. In the crown feature points method, the computation time can be expressed as O(Tn²), where T is the number of iterations [36]. In registration methods based on canopy shapes, in which the correspondence between the key points of the TLS and ALS point clouds is searched for according to a matching strategy based on the distances between corresponding key points, the computation time is O(n²) [51]. In the proposed method, the computation time is O(n log n). Compared with the proposed method, the registration method based on crown feature points involves a much larger number of points in the computation, and its computational intensity is therefore much higher. In the proposed method, the two sets of turning points of the canopy gap boundary vectors contain far fewer key points, and the computational intensity is much lower. Therefore, the method proposed in this paper only needs to search the turning points of the canopy gap vectors, avoids the heavy computation caused by the massive point cloud data otherwise involved in the registration of ALS and TLS point clouds, and greatly improves the efficiency of extracting the feature points.

4.3. Limitations

In this paper, we only tested the feasibility and accuracy of the proposed canopy gap shape feature-based method for the registration and fusion of ALS and TLS point clouds; we did not explore the parameter settings under different forest stand types and point cloud densities, including the height threshold for canopy point cloud separation, the size of the target unit, the resolution of the canopy point cloud density model, the pixel-value threshold used in the binarization process, and the threshold of the WEA. The height threshold for canopy point cloud separation varies among study areas and sample sites. A large target unit will result in a small number of detected canopy gaps and many missed canopy gaps, whereas a small target unit will cause gaps that exist within individual tree canopies to be identified as canopy gaps. The binarized image pixel threshold will affect the shape of the canopy gaps, and the WEA threshold will influence the extraction of the canopy gap turning or feature points. Overall, these parameter settings affect the extraction of the canopy gap shape feature points and thus have an impact on the accuracy of registering the point cloud data.

5. Conclusions

A method of point cloud registration based on the shape characteristics of canopy gaps was proposed in this paper, and automatic registration of TLS and ALS point cloud data was conducted using the selected four plots with different complexity levels of stand structures and conditions. The proposed method was compared with the method of fusing point clouds by crown feature points. The following conclusions can be drawn: (1) the canopy gap shape-based method proposed in this study performed better for fusing TLS and ALS point cloud data than the existing crown feature-based method; and (2) fusing ALS and TLS point clouds improved the estimation accuracy of tree height in terms of the R2 and rRMSE values between the estimated and observed values compared with using TLS point cloud data alone. The accuracy improvement became more significant as the stand structures and conditions became more complex. In conclusion, the method of point cloud registration enhances the efficiency and automation of feature point extraction and registration accuracy, which is beneficial for better quantification of forest structure and monitoring and researching forest ecosystems.

Author Contributions

Conceptualization, R.Z. and H.S.; methodology, R.Z. and K.M.; validation, R.Z., K.M. and H.S.; investigation, R.Z., J.T., S.C. and K.M.; writing—original draft preparation, R.Z. and H.S.; writing—review and editing, R.Z., K.M., L.F., Q.L. and H.S.; visualization, R.Z. and S.C.; supervision, H.S.; funding acquisition, H.S. and Q.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the National Science and Technology Major Project of China’s High Resolution Earth Observation System (Project Number: 21-Y20B01-9001-19/22), the Hunan Provincial Natural Science Foundation of China (2022JJ30078), and the Natural Science Foundation of China (31971578).

Data Availability Statement

Data sharing is not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lu, J.; Feng, Z.; Zhu, Y. Estimation of Forest Biomass and Carbon Storage in China Based on Forest Resources Inventory Data. Forests 2019, 10, 650.
  2. Zhao, M.; Zhao, N.; Liu, Y.; Liu, Y.; Yue, T. An overview of forest carbon measurement methods. Acta Ecol. Sin. 2019, 39, 3797–3807.
  3. Hu, H.; Luo, B.; Luo, S.; Wei, S.; Wang, Z.; Li, X.; Liu, F. Research Progress on Effects of Forest Fire Disturbance on Carbon Pool of Forest Ecosystem. Sci. Silvae Sin. 2020, 56, 160–169.
  4. Li, Y.; Guo, Q.; Wan, B.; Qin, H.; Wang, D.; Xu, K.; Song, S.; Sun, Q.; Zhao, X.; Yang, M.; et al. Current status and prospect of three-dimensional dynamic monitoring of natural resources based on LiDAR. Natl. Remote Sens. Bull. 2021, 25, 381–402.
  5. Liang, X.; Kankare, V.; Hyyppä, J.; Wang, Y.; Kukko, A.; Haggrén, H.; Yu, X.; Kaartinen, H.; Jaakkola, A.; Guan, F.; et al. Terrestrial laser scanning in forest inventories. ISPRS J. Photogramm. Remote Sens. 2016, 115, 63–77.
  6. Zhang, H.; Lei, X.; Li, F. Research Progress and Prospects of Forest Management Science in China. Sci. Silvae Sin. 2020, 56, 130–142.
  7. Guo, Q.; Su, Y.; Hu, T.; Liu, J. LiDAR Principles, Processing and Applications in Forest Ecology; Higher Education Press: Beijing, China, 2018.
  8. Guo, Q.; Liu, J.; Tao, S.; Xue, B.; Li, L.; Xu, G.; Li, W.; Wu, F.; Li, Y.; Chen, L.; et al. Perspectives and prospects of LiDAR in forest ecosystem monitoring and modeling. Chin. Sci. Bull. 2014, 59, 459–478.
  9. Pearse, G.D.; Morgenroth, J.; Watt, M.S.; Dash, J.P. Optimising prediction of forest leaf area index from discrete airborne lidar. Remote Sens. Environ. 2017, 200, 220–239.
  10. Li, Z.; Liu, Q.; Pang, Y. Review on forest parameters inversion using LiDAR. Natl. Remote Sens. Bull. 2016, 20, 1138–1150.
  11. Liu, L.; Pang, Y.; Li, Z. Individual Tree DBH and Height Estimation Using Terrestrial Laser Scanning (TLS) in a Subtropical Forest. Sci. Silvae Sin. 2016, 52, 26–37.
  12. Lau, A.; Martius, C.; Bartholomeus, H.; Shenkin, A.; Jackson, T.; Malhi, Y.; Herold, M.; Bentley, L.P. Estimating architecture-based metabolic scaling exponents of tropical trees using terrestrial lidar and 3d modelling. For. Ecol. Manag. 2019, 439, 132–145.
  13. Zhu, J.; Liu, Q.; Cui, X.; Zhang, B. Extraction of individual tree parameters by combining terrestrial and UAV LiDAR. Trans. Chin. Soc. Agric. Eng. 2022, 38, 51–58.
  14. Polewski, P.; Erickson, A.; Yao, W.; Coops, N.; Krzystek, P.; Stilla, U. Object-based coregistration of terrestrial photogrammetric and ALS point clouds in forested areas. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 3, 347.
  15. Paris, C.; Kelbe, D.; Van Aardt, J.; Bruzzone, L. A novel automatic method for the fusion of ALS and TLS lidar data for robust assessment of tree crown structure. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3679–3693.
  16. Lindberg, E.; Holmgren, J.; Olofsson, K.; Olsson, H. Estimation of stem attributes using a combination of terrestrial and airborne laser scanning. Eur. J. For. Res. 2012, 131, 1917–1931.
  17. Kankare, V.; Liang, X.; Vastaranta, M.; Yu, X.; Holopainen, M.; Hyyppä, J. Diameter distribution estimation with laser scanning based multisource single tree inventory. ISPRS J. Photogramm. Remote Sens. 2015, 108, 161–171.
  18. Saarinen, N.; Vastaranta, M.; Kankare, V.; Tanhuanpää, T.; Holopainen, M.; Hyyppä, J.; Hyyppä, H. Urban-tree-attribute update using multisource single-tree inventory. Forests 2014, 5, 1032–1052.
  19. Zhang, J.; Lin, X.; Liang, X. Advances and Progress of Information Extraction from Point Clouds. Acta Geod. Cartogr. Sin. 2017, 46, 1460–1469.
  20. Dong, Z.; Yang, B.; Liang, F.; Huang, R.; Sebastian, S. Hierarchical registration of unordered TLS point clouds based on binary shape context descriptor. ISPRS J. Photogramm. Remote Sens. 2018, 144, 61–79.
  21. Huang, R.; Jiang, L.; Shen, X.; Dong, Z.; Zhou, Q.; Yang, B.; Wang, H. An efficient method of monitoring slow-moving landslides with long-range terrestrial laser scanning: A case study of the Dashu landslide in the Three Gorges Reservoir Region, China. Landslides 2019, 16, 839–855.
  22. He, F.; Habib, A. A closed-form solution for coarse registration of point clouds using linear features. J. Surv. Eng. 2016, 142, 04016006.
  23. Yang, B.; Wei, Z.; Li, Q.; Li, J. Semiautomated building facade footprint extraction from mobile LiDAR point clouds. IEEE Geosci. Remote Sens. Lett. 2013, 10, 766–770.
  24. Yang, B.; Zang, Y.; Dong, Z.; Huang, R. An automated method to register airborne and terrestrial laser scanning point clouds. ISPRS J. Photogramm. Remote Sens. 2015, 109, 62–76.
  25. Cheng, X.; Cheng, X.; Li, Q.; Ma, L. Automatic registration of terrestrial and airborne point clouds using building outline features. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 628–638.
  26. Cheng, L.; Tong, L.; Li, M.; Liu, Y. Semi-Automatic Registration of Airborne and Terrestrial Laser Scanning Data Using Building Corner Matching with Boundaries as Reliability Check. Remote Sens. 2013, 5, 6260–6283.
  27. Böhm, J.; Haala, N. Efficient integration of aerial and terrestrial laser data for virtual city modeling using LASERMAPs. In Proceedings of the ISPRS Workshop Laser Scanning, Enschede, The Netherlands, 12–14 September 2015.
  28. Shao, J.; Zhang, W.; Mellado, N.; Jin, S.; Cai, S.; Luo, L.; Yang, L.; Yan, G.; Zhou, G. Single scanner BLS system for forest plot mapping. IEEE Trans. Geosci. Remote Sens. 2021, 59, 1675–1685.
  29. Kelbe, D.; Van Aardt, J.; Romanczyk, P.; Leeuwen, M.V.; Cawse-Nicholson, K. Multiview marker-free registration of forest terrestrial laser scanner data pairs with embedded confidence metrics. IEEE Trans. Geosci. Remote Sens. 2017, 55, 729–741.
  30. Polewski, P.; Yao, W.; Cao, L.; Gao, S. Marker-free coregistration of UAV and backpack lidar point clouds in forested areas. ISPRS J. Photogramm. Remote Sens. 2019, 147, 307–318.
  31. Kukko, A.; Kaijaluoto, R.; Kaartinen, H.; Lehtola, V.V.; Jaakkola, A.; Hyyppä, J. Graph SLAM correction for single scanner MLS forest data under boreal forest canopy. ISPRS J. Photogramm. Remote Sens. 2017, 132, 199–209.
  32. Hauglin, M.; Lien, V.; Næsset, E.; Gobakken, T. Geo-referencing forest field plots by co-registration of terrestrial and airborne laser scanning data. Int. J. Remote Sens. 2014, 35, 3135–3149.
  33. Guan, H.; Ma, Q.; Liu, M.; Wu, F.; Guo, Q.; Su, Y.; Hu, T.; Wang, R.; Yang, Q.; Sun, X.; et al. A Novel Framework to Automatically Fuse Multiplatform LiDAR Data in Forest Environments Based on Tree Locations. IEEE Trans. Geosci. Remote Sens. 2019, 58, 2165–2177.
  34. Ma, K.; Chen, Z.; Fu, L.; Tian, W.; Jiang, F.; Yi, J.; Du, Z.; Sun, H. Performance and Sensitivity of Individual Tree Segmentation Methods for UAV-LiDAR in Multiple Forest Types. Remote Sens. 2022, 14, 298.
  35. Dai, W.; Kan, H.; Tan, R.; Yang, B.; Guan, Q.; Zhu, N.; Xiao, W.; Dong, Z. Multisource forest point cloud registration with semantic-guided keypoints and robust RANSAC mechanisms. Int. J. Appl. Earth Obs. Geoinf. 2022, 115, 103105.
  36. Dai, W.; Yang, B.; Liang, X.; Dong, Z.; Huang, R.; Wang, Y.; Li, W. Automated fusion of forest airborne and terrestrial point clouds through canopy density analysis. ISPRS J. Photogramm. Remote Sens. 2019, 156, 94–107.
  37. Liang, X.; Hyyppä, J.; Kaartinen, H.; Lehtomäki, M.; Pyörälä, J.; Pfeifer, N.; Holopainen, M.; Brolly, G.; Francesco, P.; Hackenberg, J.; et al. International benchmarking of terrestrial laser scanning approaches for forest inventories. ISPRS J. Photogramm. Remote Sens. 2018, 144, 137–179.
  38. Zhao, X.; Guo, Q.; Su, Y.; Xue, B. Improved progressive TIN densification filtering algorithm for airborne LiDAR data in forested areas. ISPRS J. Photogramm. Remote Sens. 2016, 117, 79–91.
  39. Zhou, S.; Jones, C.B. Shape-Aware Line Generalisation with Weighted Effective Area. In Developments in Spatial Data Handling: 11th International Symposium on Spatial Data Handling; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2005; pp. 369–380.
  40. Myronenko, A.; Song, X. Point set registration: Coherent point drift. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 2262–2275.
  41. Besl, P.J.; McKay, N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256.
  42. Watt, A.S. Pattern and process in the plant community. J. Ecol. 1947, 35, 1–22.
  43. Mao, X.; Du, Z.; Liu, J.; Chen, S. Extraction of Forest Gaps in Natural Forest and Man-made Forest Based on UAV LiDAR. Trans. Chin. Soc. Agric. Mach. 2020, 51, 232–240.
  44. Kane, V.R.; Gersonde, R.F.; Lutz, J.A.; McGaughey, R.J.; Bakker, J.D.; Franklin, J.F. Patch dynamics and the development of structural and spatial heterogeneity in Pacific Northwest forests. Can. J. For. Res. 2011, 41, 2276–2291.
  45. Chen, Y.; Feng, T.; Shi, P.; Wang, J. Classification of Remote Sensing Image Based on Object Oriented and Class Rules. Geomat. Inf. Sci. Wuhan Univ. 2006, 31, 316–320.
  46. Ross, C.W.; Loudermilk, E.L.; Skowronski, N.; Pokswinski, S.; Hiers, J.K.; O’Brien, J. LiDAR Voxel-Size Optimization for Canopy Gap Estimation. Remote Sens. 2022, 14, 1054.
  47. Gaulton, R.; Malthus, T.J. LiDAR mapping of canopy gaps in continuous cover forests: A comparison of canopy height model and point cloud based techniques. Int. J. Remote Sens. 2010, 31, 1193–1211.
  48. Dempster, A.P.; Laird, N.M.; Rubin, D.B. Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. B 1977, 39, 1–38.
  49. Tao, S.; Wu, F.; Guo, Q.; Wang, Y.; Li, W.; Xue, B.; Hu, X.; Li, P.; Tian, D.; Li, C.; et al. Segmenting tree crowns from terrestrial and mobile LiDAR data by exploring ecological theories. ISPRS J. Photogramm. Remote Sens. 2015, 110, 66–76.
  50. Wu, J.Y.; Cawse-Nicholson, K.; Aardt, J. 3D tree reconstruction from simulated small footprint waveform LiDAR. Photogramm. Eng. Remote Sens. 2013, 79, 1147–1157.
  51. Shao, J.; Yao, W.; Wan, P.; Luo, L.; Wang, P.; Yang, L.; Lyu, J.; Zhang, W. Efficient co-registration of UAV and ground LiDAR forest point clouds based on canopy shapes. Int. J. Appl. Earth Obs. Geoinf. 2022, 114, 103067.
Figure 1. (a) The study area and its location in Hunan Province of China; and (b) the diagrams of the complexity degree of stand structures and conditions for (plot 1)—simple stand; (plot 2) and (plot 3)—less complex stands; and (plot 4)—complex stand (within each of the plots, each color indicates an individual tree).
Figure 2. The flow chart of the proposed method to register the ALS and TLS point clouds in the forested area.
Figure 3. Workflow of canopy gap generation.
Figure 4. An explanation of the WEA algorithm.
Figure 5. Schematic diagram of canopy gaps and the shape feature points. (a) ALS canopy gaps (black lines) and the shape feature points (purple squares). (b) The corresponding TLS canopy gaps (black lines) and the shape feature points (purple squares).
Figure 6. Results after fine registration. (a) A global view of the results. (b) Profile of the results.
Figure 7. Tree height linear regression results: the upper plots show the tree height estimation accuracy when the TLS point cloud data are used alone, and the lower plots show the accuracy when the fused point cloud data are used.
Table 1. Characteristics of four forest plots selected in this study (SD: standard deviation).

Plot ID   Dominant Tree Species        Number of Trees   DBH ± SD (cm)    Height ± SD (m)
1         Chinese fir                  35                13.18 ± 3.24     8.83 ± 1.72
2         Chinese fir                  61                11.33 ± 2.31     7.80 ± 1.39
3         Chinese fir                  92                9.78 ± 1.94      6.37 ± 1.06
4         Natural broad-leaf forest    43                19.07 ± 12.0     9.08 ± 3.48
Table 2. Registration distance residuals of the proposed method for four plots.

           CPD Average Distance Residual (cm)    CPD + ICP Average Distance Residual (cm)
Plot ID    Min       Max       Ave.              Min      Max      Ave.
1          94.28     105.62    100.10            0.92     3.11     1.60
2          332.05    365.06    348.55            0.71     2.07     1.35
3          260.54    285.43    275.02            0.26     2.57     1.20
4          51.78     59.73     55.65             0.53     11.00    4.40
Average                        194.83                              2.14
Table 3. Assessment of individual tree parameter estimation accuracy.

        DBH (cm), TLS                 H (m), TLS                    H (m), Fused Point Cloud
Plot    R2      RMSE    rRMSE         R2      RMSE    rRMSE         R2      RMSE    rRMSE
1       0.97    0.55    4.17%         0.84    0.68    7.70%         0.92    0.47    5.32%
2       0.98    0.32    2.82%         0.76    0.68    8.72%         0.91    0.40    5.12%
3       0.98    0.24    2.45%         0.72    0.55    8.63%         0.89    0.35    5.49%
4       0.98    1.53    8.02%         0.59    2.2     24.23%        0.74    1.75    19.27%
Table 4. Registration distance residuals of the point cloud data for four plots using the crown feature points method.

           CPD Average Distance Residual (cm)    CPD + ICP Average Distance Residual (cm)
Plot ID    Min       Max       Ave.              Min      Max      Ave.
1          668.89    721.09    693.93            0.50     3.35     1.64
2          113.72    135.07    123.26            0.87     3.36     1.77
3          449.67    484.33    470.79            0.91     2.15     1.63
4          381.86    436.20    406.85            0.55     8.86     4.82
Average                        423.71                              2.47
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
