Article

Soft Segmentation of Terrestrial Laser Scanning Point Cloud of Forests

Mingrui Dai and Guohua Li *
Institute of Computing Technology, China Academy of Railway Sciences Corporation Limited, Beijing 100081, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(10), 6228; https://doi.org/10.3390/app13106228
Submission received: 11 April 2023 / Revised: 12 May 2023 / Accepted: 15 May 2023 / Published: 19 May 2023
(This article belongs to the Special Issue Advances in 3D Sensing Techniques and Its Applications)

Abstract

As the three-dimensional (3D) laser scanner is widely used for forest inventory, analyzing and processing point cloud data captured with a 3D laser scanner have become an important research topic in recent years. The extraction of single trees from point cloud data is essential for further investigation at the individual tree level, such as counting trees and trunk analysis, and many developments related to this topic have been published. However, constructing an accurate and automated method to obtain the tree crown silhouette from the point cloud data is challenging because the tree crowns often overlap between adjacent trees. A soft segmentation method that uses K-Nearest Neighbor (KNN) and contour shape constraints at the overlap region is proposed to solve this task. Experimental results show that the visual effect of the tree crown shape and the precision of point cloud segmentation have improved. It is concluded that the proposed method works well for tree crown segmentation and silhouette reconstruction from the terrestrial laser scanning point cloud data of the forest.

1. Introduction

Both natural and planted forests are essential to the pursuit of harmony between man and nature. Therefore, they are worthy of attention and research, including tree counting, shape analysis, and the extraction of structural characteristics. In recent years, laser scanning has become a convenient technology for collecting forest data. Although the data consist of a large number of points in three-dimensional (3D) space, called a point cloud, and these points do not directly carry adjacency information, every point has an accurate 3D position, which is helpful for forest analysis.
Point cloud segmentation, for example, tree recognition and crown extraction, is the basis of forest analysis, and forest point cloud processing has become a hot topic in intelligent biological data processing in recent decades. Based on accurate segmentation, each tree crown can be obtained and reconstructed into a 3D digital geometric model. This geometric model can be used to extract and analyze geometric parameters of the tree and to serve as an element of virtual reality scenes (3D animation, movies, etc.).
Regarding forest point cloud segmentation, existing representative methods related to our main contributions can be classified into two categories: individual tree identification and tree crown shape extraction.
Individual tree identification is an important research topic for supporting the collection of automatic field inventory using Light Detection and Ranging (LiDAR) technology [1]. Common methods or ideas include layer stacking, mean shift or region growth, bottom-up or up-down, clustering, and characteristic analysis based on differential quantity.
The layer stacking method for forest point cloud segmentation slices the forest point cloud at fixed height intervals (0.5 m, for example), detects trees in each layer, and merges the results from all layers to build complete tree profiles [2]. Similar to layer stacking, Hamraz H et al. [3] proposed a tree segmentation method for multi-story stands that stratifies the point cloud into canopy layers and segments individual tree crowns within each layer using a digital surface model-based tree segmentation method.
Some studies adopted mean shift or region growing methods to segment forest point clouds. Xiao W et al. [4] completed an experimental assessment of the mean shift algorithm for the segmentation of airborne LiDAR data. Ma Z et al. [5] presented a two-stage method to detect individual trees from LiDAR data and adopted a region-growing algorithm to complete the initial segmentation. Liu Q et al. [6] presented a trunk-growth method with normal vector directions for single-tree point cloud segmentation and applied it to primeval forest scenes. Wang D et al. [7] proposed an automatic data-driven approach to extract individual trees from a large-area terrestrial point cloud by pathfinding on a point cloud graph.
Bottom-up or top-down methods have also been explored for forest point cloud segmentation. Lu X et al. [8] built a bottom-up algorithm using the intensity and 3D structure of point cloud data to segment deciduous forests.
In addition, hybrid clustering techniques are also used for forest segmentation. Chen Q et al. [9] proposed a hybrid clustering technique combining DBSCAN (Density-Based Spatial Clustering of Applications with Noise) and K-means for segmenting individual trees from airborne LiDAR point cloud data. For terrestrial backpack LiDAR data, Comesaña-Cebral et al. [10] used an individual tree extraction method based on DBSCAN clustering and cylinder voxelization of the volume. To avoid under- and over-segmentation effects, Dersch et al. [11] presented an integrated single tree segmentation approach in which graph-cut clustering is supported by automatic stem detection.
A forest point cloud can also be segmented by employing characteristic analysis based on differential quantities, including the normal vector and principal curvature. For example, given that the normal vector directions of trunk points are in general consistent, the trunks in a terrestrial scanning forest point cloud can be detected and then the individual trees can be isolated [12].
The second category of forest point cloud segmentation mainly emphasizes tree crown segmentation or tree extraction through tree crown segmentation.
Some methods emphasize a priori knowledge, density analysis, clustering, and region growing. To obtain an accurate tree crown model with a complex structure from airborne LiDAR data for later feature extraction, the method of Tang et al. [13] consists of several phases: filtering, transformation to a grayscale image, contrast enhancement, opening and closing operations, and watershed segmentation. Wang P et al. [14] completed the initial canopy segmentation using a normalized cut segmentation with a priori knowledge about the position of each tree. Given that closely located and intersecting trees are often clustered together as multi-tree components, Xiao W et al. [15] suggested a tree-shaped model-based continuously adaptive mean shift algorithm. Minařík et al. [16] presented an algorithm for individual tree crown delineation that uses the excess green index, marker-controlled watershed segmentation, region growing algorithms, and a buffer around a treetop. Sun H et al. [17] adopted a point cloud density model and a local maximum algorithm with optimal window size to improve the watershed algorithm for extracting tree crowns. Shahzad M et al. [18] adopted unsupervised mean shift clustering to segment a forest point cloud and then used a 3D ellipsoid model to fit the points of each cluster; in this way, the geometrical tree parameters (location, height, and crown radius) are extracted. Dong T et al. [19] built a multi-layered tree extraction method using a graph-based segmentation algorithm for segmenting the canopy and a sliding window detection method for the other parts of understory trees.
Other studies pay more attention to the geometric shape or topological information of the tree crown. Novotný J [20] proposed an approach to tree crown segmentation that combines seeded region growing with an active contour to approximate the crown boundary. To correct deviations caused by topography in individual tree crown segmentation, Duan Z et al. [21] took into account a weight based on the normalized canopy height and a precise Digital Elevation Model (DEM) derived from the point cloud classified by a multi-scale curvature classification algorithm. For airborne LiDAR data, Strîmbu V et al. [22] proposed a segmentation method that captures the forest's topological structure in hierarchical data structures, quantifies topological relationships of tree crown components in a weighted graph, and finally partitions the graph to separate individual tree crowns. This bottom-up segmentation strategy is based on several quantifiable cohesion criteria that measure the belief that two crown components belong to the same tree.
In addition, a study found that combining airborne laser scanning with multispectral and hyperspectral data could improve the accuracy of tree crown segmentation [23].
These representative methods are also very diverse regarding tree crown reconstruction, a critical step of tree reconstruction [24]. Pyysalo U [25] used an obtained vector model to extract features and reconstruct single tree crowns from laser scanner data. Zhu C et al. [26] proposed an approach for tree crown reconstruction based on improved alpha-shape modeling, where the data are points distributed unevenly within a volume rather than on a surface only. Lin and Hyyppa [27] presented an attempt at combining the mobile mapping mode and a multi-echo-recording laser scanner, together with a new methodology based on the resulting single-scan point clouds, for enhancing the completeness of individual tree crown reconstruction; to address the challenge of leaf/branch occlusion, mirroring the half-crowns facing the Mobile Laser Scanning (MLS) system to the occluded sides can be adopted as a solution strategy. Kato A et al. [28] employed an implicit surface to reconstruct the exact shape of irregular tree crowns of various tree species from LiDAR point clouds and visualize their actual crown formation in three-dimensional space.
The existing literature indicates that the accuracy of tree counting and DBH (diameter at breast height) measurement from forest point cloud data is already relatively high and can meet the practical needs of current forest inventory. However, progress on tree crown reconstruction has been relatively limited in recent years. Most existing tree crown segmentation methods are hard segmentation: the division between overlapping trees is cut directly by a vertical plane, resulting in a planar boundary in the segmentation area, which does not resemble the natural shape of a tree crown. In addition, the accuracy of crown point cloud segmentation affects the reconstruction of the tree crown shape. Therefore, to obtain a realistic canopy silhouette shape, a soft segmentation algorithm for forest point clouds is proposed in this study. The technical details are described in the next section.

2. Data and Methods

Synthetic forest data are used for illustrating the proposed method and evaluating its effectiveness and efficiency. Real laser scan data are used to show the results of the analysis with the proposed method.

2.1. Data

2.1.1. Synthetic Forest Data

The point clouds of four pine trees, named Pine A, Pine B, Pine C, and Pine D, as shown in Figure 1, are used in our experiments; they were captured with a Cyrax or RIEGL scanner on the Peking University campus, Beijing. Because each tree is far from its surrounding trees at the scanning site, each tree can easily be extracted using the 3D interactive software accompanying the scanner. The number of points in each scan and the edge lengths of the Axis-Aligned Bounding Box (AABB) of each tree are listed in Table 1. In this table, "Pine Na" is the name of the pine and "Point N" is its number of points; "Scene L", "Scene W", and "Scene H" are the lengths (unit: meter) along the x-axis, y-axis, and z-axis of the AABB, which indicate the size of a tree or a forest scene. Each point is represented by 3D coordinates (x, y, z).
To test the proposed soft segmentation algorithm on forest laser scanning data, several forest scenes are built by combining several tree point clouds according to different positional relationships, including linear (Forest A, Figure 2a), triangular (Forest B, Figure 2b), and quadrangular (Forest C, Figure 2c) layouts. With these data, the accuracy of crown segmentation can be evaluated reliably. The information on these synthetic small forest point clouds is listed in Table 2.
In addition, a small forest, Forest D, built from 8 trees, consists of Forest C and a transformed copy of Forest C. The transformation comprises a translation by the vector (0.5, 21, 0) and a rotation of 180 degrees around the z-axis. Pictures of Forest D from different views are shown in Figure 3. The information on this forest point cloud is also listed in Table 2.
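As an illustration of how such a synthetic scene can be assembled, the following sketch (illustrative names only, not the original implementation, and assuming the rotation is applied before the translation) appends a copy of a point cloud that has been rotated by 180 degrees around the z-axis and then translated:

```cpp
// Illustrative sketch (not the original tooling): build a Forest-D-like scene
// by appending a rotated and translated copy of an existing point cloud.
#include <vector>

struct Point { double x, y, z; };

std::vector<Point> DuplicateWithTransform(const std::vector<Point>& forest,
                                          double tx, double ty, double tz) {
    std::vector<Point> scene = forest;  // keep the original trees
    scene.reserve(forest.size() * 2);
    for (const Point& p : forest) {
        // Rotation of 180 degrees around the z-axis maps (x, y) to (-x, -y),
        // followed by the translation (tx, ty, tz).
        scene.push_back({-p.x + tx, -p.y + ty, p.z + tz});
    }
    return scene;
}

// Usage (hypothetical variable names): auto forestD = DuplicateWithTransform(forestC, 0.5, 21.0, 0.0);
```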

2.1.2. Real Forest Point Cloud

A real forest point cloud, RUSH06, is used to evaluate the proposed method. RUSH06, shown in Figure 4, is a forest plot whose laser scan data were acquired in native Eucalypt Open Forest (dry sclerophyll box-ironbark forest) in Victoria, Australia [29]. The information on this forest point cloud is listed in Table 3.

2.2. Soft Segmentation Algorithm

Our forest point cloud segmentation algorithm, the soft segmentation algorithm, consists of several steps: preprocessing, partitioning with a region growth algorithm, modified hard segmentation, and refinement by K-Nearest Neighbor (KNN) search and contour constraints.

2.2.1. Preprocessing

Forest point cloud data generally contain a large number of points, resulting in low computational efficiency and high time costs. Therefore, at the preprocessing stage, points that are not within the scope of the study should be removed [30]. Another preprocessing step is down-sampling, which is often used, for example, to accelerate point cloud registration [31].
We used a down-sampling method [32] to simplify the raw point cloud. For example, after down-sampling, the number of points of Forest A (Figure 2a) can be decreased from 1,068,426 to 90,044 (8.428%) and 38,619 (3.615%) with sampling intervals of 0.15 and 0.25, respectively, as shown in Figure 5. Although the shape of the stems becomes blurred, the silhouette of the crown is preserved well, and the density of the simplified point cloud is more uniform. By comparing the visual effect and the compression ratio of the number of points, the default value of the sampling interval is set to a constant of 0.15 in our algorithm.
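The down-sampling we apply follows [32]; as a minimal, simplified illustration of interval-based down-sampling (a stand-in, not the feature-based method of [32]), the sketch below keeps one representative point per cubic cell whose edge length equals the sampling interval:

```cpp
// Simplified voxel-grid down-sampling sketch (illustrative only; the paper
// relies on the feature-based approach of [32]). One point is kept per
// occupied cubic cell whose edge equals the sampling interval (0.15 m here).
#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Point { double x, y, z; };

std::vector<Point> DownSample(const std::vector<Point>& cloud, double interval = 0.15) {
    // Pack the three cell indices into one 64-bit key; assumes each index
    // (after the +2^20 offset) fits into 21 bits, which covers scenes far
    // larger than a forest plot at a 0.15 m interval.
    auto key_of = [interval](const Point& p) {
        const std::int64_t ix = static_cast<std::int64_t>(std::floor(p.x / interval)) + (1 << 20);
        const std::int64_t iy = static_cast<std::int64_t>(std::floor(p.y / interval)) + (1 << 20);
        const std::int64_t iz = static_cast<std::int64_t>(std::floor(p.z / interval)) + (1 << 20);
        return static_cast<std::uint64_t>((ix << 42) | (iy << 21) | iz);
    };
    std::unordered_map<std::uint64_t, Point> cells;
    for (const Point& p : cloud) cells.emplace(key_of(p), p);  // first point per cell wins
    std::vector<Point> simplified;
    simplified.reserve(cells.size());
    for (const auto& kv : cells) simplified.push_back(kv.second);
    return simplified;
}
```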
In addition, before tree identification, the ground points are removed according to the consistency of directions of normal vectors of points [12].

2.2.2. Partitioning with Region Growth Algorithm

Tree identification can be performed with the region growth algorithm for a forest in which the distance between neighboring trees is considerable and the tree crowns do not overlap. If only some tree crowns are free of overlap, the region growth algorithm can still be used to partition the whole point cloud scene into several groups, each consisting of a part of the tree point clouds. For example, the point cloud RUSH06 (Figure 4) can be divided into 21 groups, as shown in Figure 6.
In Figure 6, the red points in each group are tree roots estimated with a clustering method applied to the lowest several layers of points [12].
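As a minimal sketch of such a grouping step, assuming a simple Euclidean proximity criterion (points closer than a radius r fall into the same group) and using a brute-force flood fill for clarity, the grouping could look like the following; in practice a spatial index (k-d tree or voxel hash) would replace the O(n^2) neighbor search:

```cpp
// Illustrative Euclidean region growing (flood fill) for grouping a forest
// point cloud; assumes points closer than `radius` belong to the same group.
#include <cstddef>
#include <queue>
#include <vector>

struct Point { double x, y, z; };

std::vector<int> GrowRegions(const std::vector<Point>& pts, double radius) {
    const double r2 = radius * radius;
    std::vector<int> group(pts.size(), -1);  // -1 means "not yet visited"
    int next_group = 0;
    for (std::size_t seed = 0; seed < pts.size(); ++seed) {
        if (group[seed] != -1) continue;
        std::queue<std::size_t> frontier;
        frontier.push(seed);
        group[seed] = next_group;
        while (!frontier.empty()) {
            const std::size_t i = frontier.front();
            frontier.pop();
            for (std::size_t j = 0; j < pts.size(); ++j) {
                if (group[j] != -1) continue;
                const double dx = pts[i].x - pts[j].x;
                const double dy = pts[i].y - pts[j].y;
                const double dz = pts[i].z - pts[j].z;
                if (dx * dx + dy * dy + dz * dz <= r2) {
                    group[j] = next_group;
                    frontier.push(j);
                }
            }
        }
        ++next_group;
    }
    return group;  // group[i] is the index of the group containing point i
}
```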

2.2.3. Modified Hard Segmentation

First, the traditional hard segmentation algorithm is described.
After the rough segmentation, the next step is to detect individual trees within those groups whose crowns overlap with neighboring tree crowns. Similar to the literature [33], a Delaunay graph is built with the roots as sites, which is used to search for neighboring trees.
According to the Delaunay graph, we use the minimum cut plane to partition two neighboring trees. The first step is to partition the points into vertical slices. Assuming that the thickness $\Delta_{vb}$ of each slice is 0.2 m, the number of slices $N_{vb}$ is the quotient obtained by dividing the distance between $C_1$ and $C_2$ by $\Delta_{vb}$, i.e.,
$$N_{vb} = \frac{|C_1 C_2|}{\Delta_{vb}},\qquad(1)$$
where $C_1$ and $C_2$ are the roots of two adjacent trees.
Figure 7a illustrates the method of generating slices along the line from $C_1$ to $C_2$. Note that the partition is performed in 3D space, and a local coordinate system $[C_1;\ \overrightarrow{C_1C_2}^{\,o}, \overrightarrow{C_1Y}, \overrightarrow{C_1Z}]$ is built. In this coordinate system, $\overrightarrow{C_1C_2}^{\,o}$, $\overrightarrow{C_1Y}$, and $\overrightarrow{C_1Z}$ are all unit vectors, with $\overrightarrow{C_1C_2}^{\,o} \parallel \overrightarrow{C_1C_2}$, $\overrightarrow{C_1Z} = (0,0,1)$, and $\overrightarrow{C_1Y} = \overrightarrow{C_1Z} \times \overrightarrow{C_1C_2}^{\,o}$. Let $C_1 = (x_c, y_c, z_c)$; then any point $p_i(x_i, y_i, z_i)$ belongs to the $i$-th slice $B_i$ if $i = \left\lfloor \sqrt{(x_i - x_c)^2 + (y_i - y_c)^2}\,/\,\Delta_{vb} \right\rfloor$. In this way, all points in $\Omega_C(s)$, the point set obtained after down-sampling the input data, are partitioned into $m$ slices.
The second step is to find the optimal dividing line, which is defined by the minimum cut plane and can be obtained by solving the following minimum–maximum optimization problem:
$$i^* = \arg\min_i \max_j \{\, z_j \mid p_j(x_j, y_j, z_j) \in B_i \,\}.\qquad(2)$$
For convenient application, the optimal slice index $i^*$ is transformed into the distance $d^*$ as follows:
$$d^* = \Delta_{vb} \cdot i^*.\qquad(3)$$
In Equation (4) below, the symbol $\mathbf{x} \cdot \mathbf{y}$ denotes the inner product of the vectors $\mathbf{x}$ and $\mathbf{y}$. In this way, the problem of judging on which side of the cut plane a point lies is transformed into a comparison of the projection distance along the direction $\overrightarrow{C_1C_2}^{\,o}$, i.e., when
$$\overrightarrow{C_1 p_i} \cdot \overrightarrow{C_1C_2}^{\,o} < d^*,\qquad(4)$$
point $p_i$ is on the same side of the cut plane as $C_1$, denoted as $p_i \in (C_1, C_2)$.
The last step is to find all points belonging to each individual tree. The label $L(p_i)$ of a point $p_i(x_i, y_i, z_i)$ is determined by the partitioning rule
$$L(p_i) = L(C_0), \quad \text{if } p_i \in (C_0, C_i),\qquad(5)$$
where $C_i$ is a neighboring root of $C_0$ according to the Delaunay graph, as shown in Figure 7b. In fact, Equation (5) means that if point $p_i$ and root $C_0$ lie on the same side of every minimum cut plane between root $C_0$ and each of its neighboring roots $C_i$, the label of point $p_i$ is set to the label of root $C_0$, i.e., point $p_i$ and root $C_0$ are inferred to come from the same tree.
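To make Equations (1)–(5) concrete for a single pair of adjacent roots, the following sketch (illustrative names; a simplification of the full procedure, which repeats this test for every Delaunay-neighboring root pair as in Equation (5)) slices the points along the horizontal direction from $C_1$ to $C_2$, picks the slice with the lowest maximum height (Equation (2)), converts its index to the cut distance $d^*$ (Equation (3)), and applies the side test of Equation (4). The slice index is computed here from the projection distance along $\overrightarrow{C_1C_2}^{\,o}$, consistent with Equation (4):

```cpp
// Sketch of the minimum-cut-plane step between two adjacent roots C1 and C2
// (Equations (1)-(4)); illustrative names, simplified from the full method.
#include <cmath>
#include <limits>
#include <vector>

struct Point { double x, y, z; };

// Returns true if point p lies on the C1 side of the optimal cut plane.
bool OnC1Side(const Point& p, const Point& C1, const Point& C2,
              const std::vector<Point>& pts, double dvb = 0.2) {
    // Horizontal unit vector from C1 to C2 (the direction C1C2^o).
    const double dx = C2.x - C1.x, dy = C2.y - C1.y;
    const double len = std::sqrt(dx * dx + dy * dy);
    const double ux = dx / len, uy = dy / len;

    // Equation (1): number of slices between the two roots.
    const int n_slices = static_cast<int>(len / dvb);
    if (n_slices <= 0) return true;  // degenerate case: roots closer than one slice

    // Maximum height per slice, indexed by the projection distance along C1C2^o.
    std::vector<double> max_z(n_slices, -std::numeric_limits<double>::infinity());
    for (const Point& q : pts) {
        const double proj = (q.x - C1.x) * ux + (q.y - C1.y) * uy;
        const int i = static_cast<int>(std::floor(proj / dvb));
        if (i >= 0 && i < n_slices && q.z > max_z[i]) max_z[i] = q.z;
    }

    // Equation (2): the non-empty slice whose maximum height is lowest (the
    // "valley" between the two crowns), and Equation (3): its distance from C1.
    int i_star = 0;
    double best = std::numeric_limits<double>::infinity();
    for (int i = 0; i < n_slices; ++i) {
        if (max_z[i] == -std::numeric_limits<double>::infinity()) continue;  // empty slice
        if (max_z[i] < best) { best = max_z[i]; i_star = i; }
    }
    const double d_star = dvb * i_star;

    // Equation (4): side test by projection distance.
    const double proj_p = (p.x - C1.x) * ux + (p.y - C1.y) * uy;
    return proj_p < d_star;
}
```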
Equation (5) defines a hard vertical partitioning (referred to as hard segmentation), which segments the down-sampled point cloud of Figure 2b into the result shown in Figure 8. From the figure, it can be seen that the boundary between the two segmented crowns is a straight line. This is generally not a natural crown shape and cannot represent the overlap between adjacent tree crowns.
To improve the visual effect of the crown shape generated by the hard segmentation algorithm, a modified hard segmentation algorithm is proposed. This algorithm adopts the idea that points near the intersection region are treated as an unlabeled point set after hard segmentation.
In this modified hard segmentation algorithm, different from Equation (5), some patches near the best bin (Equation (2)) are set as pending regions. The width of each pending region is a positive parameter determined by experiment; in our experiments, the width of each pending region is 0.47 m. The segmentation result of Forest B using this approach is illustrated in Figure 9.
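The modified algorithm only changes the final decision: points whose projection distance falls within a band around $d^*$ (here assumed to be a band of total width $w$ centered on $d^*$, with $w = 0.47$ m) are deferred rather than labeled. A minimal three-way test might look like this (illustrative names):

```cpp
// Three-way decision of the modified hard segmentation: points inside a band
// of width `band_w` around the optimal cut distance d* stay pending.
enum class Side { C1_SIDE, C2_SIDE, PENDING };

Side Classify(double proj_p, double d_star, double band_w = 0.47) {
    if (proj_p < d_star - band_w / 2.0) return Side::C1_SIDE;
    if (proj_p > d_star + band_w / 2.0) return Side::C2_SIDE;
    return Side::PENDING;  // handed to the KNN/contour refinement stage
}
```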

2.2.4. Refining by KNN and Contour Constraints

To determine which tree the points in the pending region belong to, the following three steps are proposed.
Step 1: Generating horizontal layers. Given that the overlap of adjacent crowns only occurs in some layers, the forest point cloud can be partitioned into horizontal layers. The thickness of each layer is a constant, for example, 0.4 m. The experimental result is shown in Figure 10a.
Step 2: Building the contours of all layers. Based on the results of the modified hard segmentation and the generated horizontal layers, the contour curve of each layer of every initial individual tree point cloud can be constructed using the 2D alpha-shape method [34], as shown in Figure 10b. This initial contour, used as the boundary for partitioning individual trees, offers more flexibility than a constraint derived only from the contour obtained by projecting all points onto the ground.
Step 3: Refining by KNN and contour constraints. For the unlabeled regions, an approximate K-Nearest Neighbor (KNN) algorithm [35] is employed to label some of the unclassified points. For an unclassified point $p_i$ in the pending region, shown as the dashed black circle in Figure 10c, the classification rule is as follows:
$$L(p_i) = \begin{cases} L(C_\alpha), & \text{if } r_{nei} > 0.8,\\ \text{pending}, & \text{otherwise,} \end{cases}\qquad(6)$$
where
$$r_{nei} = \frac{n_{nei}}{K}.\qquad(7)$$
In Equation (7), $K$ is the number of neighboring points of $p_i$, and $n_{nei}$ is the number of neighbors that have the same label as the classified point $C_\alpha$. After this step, some points remain unclassified.
Then, a Monte Carlo method is used to label these remaining unclassified points according to the probability
$$P_{label} = e^{-d_i/\sigma}.\qquad(8)$$
In Equation (8), $d_i$ is the distance from point $p_i$ to the contour of the corresponding layer, and $\sigma$ is a positive number set by experiment. The idea behind this equation is that the farther a point is from the contour of a tree, the smaller the probability that it belongs to that tree.
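The following sketch applies the rules of Equations (6)–(8) to a single pending point, assuming its $K$ nearest classified neighbors (e.g., retrieved with the approximate KNN structure of [35]) and the distance $d_i$ to the candidate tree's layer contour are already available, and that $\sigma$ has been fixed by experiment. In the actual pipeline the Monte Carlo rule is applied in a second pass over the points that remain unclassified; here the two rules are shown back to back for brevity:

```cpp
// Labeling of one pending point by the KNN ratio rule (Equations (6)-(7))
// followed by the Monte Carlo fallback (Equation (8)). Illustrative sketch.
#include <cmath>
#include <random>
#include <vector>

constexpr int kPending = -1;

// labels_of_knn:   labels of the K nearest already-classified neighbors of p_i (K > 0);
// candidate:       the label C_alpha being tested (e.g., the majority label);
// dist_to_contour: distance d_i from p_i to the candidate tree's layer contour.
int LabelPendingPoint(const std::vector<int>& labels_of_knn, int candidate,
                      double dist_to_contour, double sigma, std::mt19937& rng) {
    // Equation (7): fraction of neighbors carrying the candidate label.
    int n_nei = 0;
    for (int lab : labels_of_knn)
        if (lab == candidate) ++n_nei;
    const double r_nei = static_cast<double>(n_nei) /
                         static_cast<double>(labels_of_knn.size());

    // Equation (6): accept the candidate label if the ratio exceeds 0.8.
    if (r_nei > 0.8) return candidate;

    // Equation (8): otherwise accept with probability exp(-d_i / sigma).
    const double p_label = std::exp(-dist_to_contour / sigma);
    std::bernoulli_distribution accept(p_label);
    return accept(rng) ? candidate : kPending;
}
```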
After applying the refined segmentation algorithm, the contours of the segmented individual trees (Figure 10e) look more natural than the results of the hard segmentation algorithm (Figure 10d).
Finally, the individual trees segmented with our method are displayed in Figure 11. From the figure, it can be seen that the division of the overlapping areas of the tree crowns appears more natural than in Figure 8.

2.3. Tree Crown Silhouette Extraction and Reconstruction

To conveniently obtain the geometric shape attributes and a digital mesh model of the tree crown, the segmented individual tree point clouds are taken as the input for crown surface reconstruction. By re-sampling the contours of all layers, surface points with a uniform distribution on the crown silhouette can be obtained. Then, the crown surface geometry can be reconstructed based on the 3D $\alpha$-shape [34] or a modified $\alpha$-shape method [36].
With the reconstructed tree mesh models, several attributes of the tree crown, including width, height, superficial area, and projected ground area, can be easily estimated.
The width $W_{crown}$ of a tree crown is estimated according to Equation (9):
$$W_{crown} = \max\{\, |\overline{P_i P_j}| \,:\, P_i, P_j \in \Gamma \,\},\qquad(9)$$
where $\Gamma$ is the boundary of the point set obtained by vertically projecting the crown point set $\Lambda$ onto the xOy plane.
The height $H_{crown}$ of the crown can be estimated directly from the laser scanning point data:
$$H_{crown} = Z_{\max} - Z_{\min},\qquad(10)$$
where $Z_{\max}$ and $Z_{\min}$ are the maximum and minimum z-coordinates of the crown.
The superficial area of the crown is the sum of the areas of all triangles on the crown surface. In the traditional $\alpha$-shape method, triangles inside the crown may be turned into boundary triangles if the radius $r$ of the probing ball is too small, which leads to a larger superficial area than the real one. Conversely, if $r$ is set too large, the reconstructed crown becomes a convex hull, which also induces a significant error. As our method only uses boundary points to build the crown surface, the superficial area consists only of boundary triangles, which keeps the error small.
To calculate the projected ground area, all points of $\Lambda$ are projected onto the xOy plane, and a polygon $\mathcal{P}$ is built on these projected points by employing the 2D $\alpha$-shape method. Note that the polygon $\mathcal{P}$ may not always be convex, and the polygon $\mathcal{P}$ built by projecting either $C$ or $\Lambda$ onto the xOy plane is the same.
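For the crown attributes of Equations (9) and (10), the sketch below (illustrative names) computes the crown width as the maximum pairwise horizontal distance between boundary points and the crown height as the z-extent of the crown points; the brute-force pairwise loop is acceptable because only boundary/contour points are involved:

```cpp
// Crown width (Equation (9)) and crown height (Equation (10)) from a set of
// crown boundary points. Illustrative sketch with brute-force pairwise search.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct Point { double x, y, z; };

double CrownWidth(const std::vector<Point>& boundary) {
    double max_d2 = 0.0;
    for (std::size_t i = 0; i < boundary.size(); ++i)
        for (std::size_t j = i + 1; j < boundary.size(); ++j) {
            // Horizontal distance between projections onto the xOy plane.
            const double dx = boundary[i].x - boundary[j].x;
            const double dy = boundary[i].y - boundary[j].y;
            max_d2 = std::max(max_d2, dx * dx + dy * dy);
        }
    return std::sqrt(max_d2);
}

double CrownHeight(const std::vector<Point>& crown) {
    // Assumes a non-empty crown point set.
    double zmin = crown.front().z, zmax = crown.front().z;
    for (const Point& p : crown) {
        zmin = std::min(zmin, p.z);
        zmax = std::max(zmax, p.z);
    }
    return zmax - zmin;
}
```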

3. Results

We demonstrate the effectiveness of the proposed algorithm based on point cloud segmentation and reconstruction experiments. Our algorithm was written in C++ with the support of OpenGL for visualization. Experiments were run on a laptop with an Intel(R) Core(TM) i7-4710MQ CPU @ 2.50 GHz and 4.0 GB of RAM.
To quantitatively evaluate the segmentation performance of our method, we compare our results with those of the hard segmentation (Section 2.2.3) using three evaluation metrics: accuracy (Acc), mean class accuracy (mAcc) [37], and mean Intersection over Union (mIoU).
The accuracy (Acc) is calculated using the following formula:
$$\mathrm{Acc} = \frac{1}{N} \sum_{i=1}^{k} m_{ii}.\qquad(11)$$
The mean class accuracy (mAcc), which corresponds to the mean per-class sensitivity, is calculated as follows:
$$\mathrm{mAcc} = \frac{1}{k} \sum_{i=1}^{k} \frac{m_{ii}}{N_i},\qquad(12)$$
where $m_{ij}$ is the number of points from Tree $i$ classified to Tree $j$, $N_i$ is the number of points of Tree $i$ (i.e., $N_i = \sum_j m_{ij}$), and $N$ is the total number of points in the input data.
The mean Intersection over Union (mIoU) is calculated by Equation (13) as follows:
$$\mathrm{mIoU} = \frac{1}{k} \sum_{i=1}^{k} \frac{m_{ii}}{\sum_{j}(m_{ij} + m_{ji}) - m_{ii}}.\qquad(13)$$
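Given the confusion matrix $m$, where $m_{ij}$ is the number of points of Tree $i$ assigned to Tree $j$, Equations (11)–(13) translate directly into code (a direct transcription with illustrative names):

```cpp
// Acc, mAcc, and mIoU (Equations (11)-(13)) from a k-by-k confusion matrix m,
// where m[i][j] is the number of points of Tree i assigned to Tree j.
#include <cstddef>
#include <vector>

struct Metrics { double acc, macc, miou; };

Metrics Evaluate(const std::vector<std::vector<double>>& m) {
    const std::size_t k = m.size();
    double total = 0.0, correct = 0.0, macc = 0.0, miou = 0.0;
    for (std::size_t i = 0; i < k; ++i) {
        double row = 0.0, col = 0.0;
        for (std::size_t j = 0; j < k; ++j) {
            total += m[i][j];
            row += m[i][j];   // N_i: all points that truly belong to Tree i
            col += m[j][i];   // all points predicted as Tree i
        }
        correct += m[i][i];
        macc += m[i][i] / row;                    // per-class accuracy, Equation (12)
        miou += m[i][i] / (row + col - m[i][i]);  // per-class IoU, Equation (13)
    }
    return { correct / total, macc / k, miou / k };
}
```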

3.1. Synthetic Forest

In view of the three quantitative evaluation indexes, we compare the results generated with our method (soft segmentation) to those of the hard segmentation method (Section 2.2.3). The values of the three indexes, Acc, mAcc, and mIoU, are listed in Table 4. From the table, it can be found that soft segmentation outperforms the hard segmentation method in all three indexes and for all four small forest point clouds. On average, with our method, Acc, mAcc, and mIoU increase by 0.76%, 0.68%, and 1.42% over the hard segmentation method. These increases are relatively small because the number of points in the entire point cloud is large, whereas the number of points in the overlap region is comparatively small; even a slight increase in the overall ratios therefore reflects a significant improvement in the segmentation of the overlap region.
The segmentation results of Forest B have been shown in Figures 10 and 11; the other experimental results are displayed as follows: Figure 12 shows the segmentation result of Forest A using the proposed soft segmentation method, and the segmentation results of Forests C and D are illustrated in Figure 13.

3.2. Real Scanning Data

We calculate the quantitative evaluation indexes, Acc, mAcc, and mIoU, for the results of the hard segmentation algorithm (Hard Seg.), soft segmentation without region growing as preprocessing (Soft Seg. without RG), and soft segmentation with region growing as preprocessing (Soft Seg. with RG). The values of the three indexes are listed in Table 5. From the table, it can also be found that soft segmentation with region growing as preprocessing achieves the best performance among the three methods. With our method, Acc, mAcc, and mIoU increase by at least 10.8%, 9.6%, and 27.4%, respectively.
Unlike the dense synthetic forests, the trees in forest RUSH06 are unevenly distributed, with some trees far away from the others. Therefore, the segmentation result (Figure 14c) is greatly improved compared with the hard segmentation method (Figure 14a) and with soft segmentation without the region growing step (Figure 14b). The proposed method is then applied to the small groups whose adjacent trees have overlapping regions. The final segmentation results are shown in Figure 14c,d.

3.3. Forest Reconstruction

For the four pine trees, we use our method to reconstruct each tree crown separately, as shown in Figure 15. It can be found that the shapes of the reconstructed crowns show a realistic visual effect.
In addition, the small forest, Forest D, built from eight trees, consists of two copies of Forest C related by a rotational transformation and a translational transformation. Forest D is shown in Figure 3, and its reconstructed individual tree shapes are illustrated in Figure 16.
For the forest RUSH06, we use our method to segment individual trees and reconstruct each tree crown silhouette shape separately. The reconstructed results are displayed in Figure 17.
To visually verify the consistency between the tree point clouds and their silhouette surfaces, the points and silhouette surfaces of four trees are shown in Figure 18. It can again be seen that the shapes of the reconstructed crowns show a realistic visual effect.

3.4. Time Efficiency

The proposed method adopts down-sampling as the first step of the soft segmentation algorithm, which significantly improves the computing speed of the subsequent steps. We use the segmentation experiment on Forest D to evaluate the time cost of each algorithm step when down-sampling is (or is not) adopted as a preprocessing step, as shown in Figure 19. The running time (unit: s) of every step with down-sampling is much shorter than without it. The overall running time of the algorithm with and without down-sampling is 0.7959 s and 9.9816 s, respectively.
The running time of each algorithm step for all segmentation experiments is listed in Table 6. Because the tree crowns in each of the point clouds Forest A, Forest B, Forest C, and Forest D overlap with each other, the region growing step is not employed, and its time cost is zero. From Table 6, it can be found that a forest point cloud with about ten million points can be segmented in several seconds, and that the region growing and KNN refinement steps take the most time.

4. Conclusions and Future Work

To address the problem of an apparent artificial segmentation plane between adjacent tree crowns caused by hard segmentation, we propose in this paper a soft segmentation algorithm combining KNN with contour shape constraints. Based on the segmented individual tree point sets, the tree crown silhouette surface can be reconstructed with a realistic appearance, which is beneficial for shape modeling and crown analysis. Experimental results show that segmentation performance is significantly improved in terms of three quantitative indicators, Acc, mAcc, and mIoU, and that the time efficiency of point cloud segmentation is improved by down-sampling.
Although the soft segmentation method proposed here provides efficient segmentation of canopy overlap areas and improves speed and accuracy, it needs to be further validated on more real data. Therefore, one future research direction is to test the soft segmentation method on more forest point cloud data in which the overlapping patterns of the canopy are more complex, and to extend the approach accordingly. Another promising direction is destructive laser point cloud data acquisition of forests to obtain accurate canopy shapes that cannot be measured accurately when crowns overlap. In addition, it is also meaningful to use such laser scanning point cloud data to study the relationship between inter-tree distance and the shape of the overlapping canopy area.

Author Contributions

Conceptualization, M.D. and G.L.; methodology, M.D. and G.L.; software, M.D.; validation, M.D. and G.L.; formal analysis, M.D.; investigation, M.D.; resources, M.D. and G.L.; data curation, M.D.; writing—original draft preparation, M.D.; writing—review and editing, M.D. and G.L.; visualization, M.D.; supervision, G.L.; project administration, G.L.; funding acquisition, G.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the China Academy of Railway Sciences Corporation Limited, grant number 2021YJ197.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The first author can provide the data for Forests A, B, C, and D; please send an email to get in contact. For RUSH06, please refer to TERN (https://portal.tern.org.au/metadata/23868, accessed on 1 March 2023).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Budei, B.C.; St-Onge, B.; Hopkinson, C.; Audet, F. Identifying the genus or species of individual trees using a three-wavelength airborne lidar system. Remote Sens. Environ. 2018, 204, 632–647. [Google Scholar] [CrossRef]
  2. Ayrey, E.; Fraver, S.; Kershaw, J.A., Jr.; Kenefic, L.S.; Hayes, D.; Weiskittel, A.R.; Roth, B.E. Layer Stacking: A novel algorithm for individual forest tree segmentation from LiDAR point clouds. Can. J. Remote Sens. 2017, 43, 16–27. [Google Scholar] [CrossRef]
  3. Hamraz, H.; Contreras, M.A.; Zhang, J. A scalable approach for tree segmentation within small-footprint airborne LiDAR data. Comput. Geosci. 2017, 102, 139–147. [Google Scholar] [CrossRef]
  4. Xiao, W.; Zaforemska, A.; Smigaj, M.; Wang, Y.; Gaulton, R. Mean shift segmentation assessment for individual forest tree delineation from airborne lidar data. Remote Sens. 2019, 11, 1263. [Google Scholar] [CrossRef]
  5. Ma, Z.; Pang, Y.; Wang, D.; Liang, X.; Chen, B.; Lu, H.; Weinackerm, H.; Koch, B. Individual tree crown segmentation of a larch plantation using airborne laser scanning data based on region growing and canopy morphology features. Remote Sens. 2020, 12, 1078. [Google Scholar] [CrossRef]
  6. Liu, Q.; Ma, W.; Zhang, J.; Liu, Y.; Xu, D.; Wang, J. Point-cloud segmentation of individual trees in complex natural forest scenes based on a trunk-growth method. J. For. Res. 2021, 32, 12. [Google Scholar] [CrossRef]
  7. Wang, D.; Liang, X.; Mofack, G.I.; Martin-Ducup, O. Individual tree extraction from terrestrial laser scanning data via graph pathing. For. Ecosyst. 2021, 8, 11. [Google Scholar] [CrossRef]
  8. Lu, X.; Guo, Q.; Li, W.; Flanagan, J. A bottom-up approach to segment individual deciduous trees using leaf-off LiDAR point cloud data. ISPRS J. Photogramm. Remote Sens. 2014, 94, 1–12. [Google Scholar] [CrossRef]
  9. Chen, Q.; Wang, X.; Hang, M.; Li, J. Research on the improvement of single tree segmentation algorithm based on airborne LiDAR point cloud. Open Geosci. 2021, 13, 705–716. [Google Scholar] [CrossRef]
  10. Comesaña-Cebral, L.; Martínez-Sánchez, J.; Lorenzo, H.; Arias, P. Individual tree segmentation method based on mobile backpack LiDAR point clouds. Sensors 2021, 21, 6007. [Google Scholar] [CrossRef]
  11. Dersch, S.; Heurich, M.; Krueger, N.; Krzystek, P. Combining graph-cut clustering with object-based stem detection for tree segmentation in highly dense airborne lidar point clouds. ISPRS J. Photogramm. Remote. Sens. 2021, 172, 207–222. [Google Scholar] [CrossRef]
  12. Li, H.; Zhang, X.; Jaeger, M.; Constant, T. Segmentation of forest terrain laser scan data. In Proceedings of the 9th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry, Seoul, Republic of Korea, 12–13 December 2010; ACM: New York, NY, USA, 2010. [Google Scholar]
  13. Tang, F.; Zhang, X.; Liu, J. Segmentation of tree crown model with complex structure from airborne LiDAR data. In Proceedings of the 15th International Conference on Geoinformatics, Nanjing, China, 25–27 May 2007; Volume 6752. [Google Scholar]
  14. Wang, P.; Xing, Y.; Wang, C.; Xi, X. A graph cut-based approach for individual tree detection using airborne LiDAR data. J. Univ. Chin. Acad. Sci. 2019, 36, 385–391. [Google Scholar]
  15. Xiao, W.; Xu, S.; Elberink, S.O.; Vosselman, G. Individual tree crown modeling and change detection from airborne LiDAR data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 3467–3477. [Google Scholar] [CrossRef]
  16. Minařík, R.; Langhammer, J.; Lendzioch, T. Automatic tree crown extraction from UAS multispectral imagery for the detection of bark beetle disturbance in mixed forests. Remote Sens. 2020, 12, 4081. [Google Scholar]
  17. Ma, K.; Xiong, Y.; Jiang, F.; Chen, S.; Sun, H. A novel vegetation point cloud density tree-segmentation model for overlapping crowns using uav LiDAR. Remote Sens. 2021, 13, 1442. [Google Scholar] [CrossRef]
  18. Shahzad, M.; Schmitt, M.; Zhu, X.X. Segmentation and crown parameter extraction of indiviudal trees in an airborne TomoSAR point cloud. In Proceedings of the Copernicus Publications, PIA15+HRIGI15—Joint ISPRS Conference, Munich, Germany, 25–27 March 2015. [Google Scholar]
  19. Dong, T.; Zhang, X.; Ding, Z.; Fan, J. Multi-layered tree crown extraction from LiDAR data using graph-based segmentation. Comput. Electron. Agric. 2020, 170, 105213. [Google Scholar] [CrossRef]
  20. Novotný, J. Tree crown delineation using region growing and active contour: Approach introduction. Mendel 2014, 2014, 213–216. [Google Scholar]
  21. Duan, Z.; Zhao, D.; Zeng, Y.; Zhao, Y.; Wu, B.; Zhu, J. Assessing and correcting topographic effects on forest canopy height retrieval using airborne LiDAR data. Sensors 2015, 15, 12133–12155. [Google Scholar] [CrossRef]
  22. Strîmbu, V.F.; Strîmbu, B.M. A graph-based segmentation algorithm for tree crown extraction using airborne LiDAR data. ISPRS J. Photogramm. Remote Sens. 2015, 104, 30–43. [Google Scholar] [CrossRef]
  23. Aubry-Kientz, M.; Laybros, A.; Weinstein, B.; Ball, J.G.; Jackson, T.; Coomes, D.; Vincent, G. Multi-sensor data fusion for improved segmentation of individual tree crowns in dense tropical forests. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 3927–3936. [Google Scholar] [CrossRef]
  24. Janoutová, R.; Homolová, L.; Novotný, J.; Navrátilová, B.; Pikl, M.; Malenovský, Z. Detailed reconstruction of trees from terrestrial laser scans for remote sensing and radiative transfer modelling applications. Silico Plants 2021, 3, diab026. [Google Scholar] [CrossRef]
  25. Pyysalo, U.; Hyyppa, H. Reconstructing tree crowns from laser scanner data for feature extraction. In Proceedings of the ISPRS Commission III, Symposium 2002, Graz, Austria, 9–13 September 2002. 4p. [Google Scholar]
  26. Zhu, C.; Zhang, X.; Hu, B.; Jaeger, M. Reconstruction of Tree Crown Shape from Scanned Data. In Proceedings of the Technologies for E-Learning and Digital Entertainment, Third International Conference, Edutainment 2008, Nanjing, China, 25–27 June 2008; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
  27. Lin, Y.; Hyyppa, J. Multiecho-recording mobile laser scanning for enhancing individual tree crown reconstruction. IEEE Trans. Geosci. Remote Sens. 2012, 50, 4323–4332. [Google Scholar] [CrossRef]
  28. Kato, A.; Moskal, L.M.; Schiess, P.; Swanson, M.E.; Calhoun, D.; Stuetzle, W. Capturing tree crown formation through implicit surface reconstruction using airborne LiDAR data. Remote Sens. Environ. 2016, 113, 1148–1162. [Google Scholar] [CrossRef]
  29. Calders, K. Terrestrial Laser Scans—Riegl VZ400, Individual Tree Point Clouds and Cylinder Models, Rushworth Forest; Version 1; Terrestrial Ecosystem Research Network (Dataset): Indooroopilly, QLD, Australia, 2014. [Google Scholar] [CrossRef]
  30. Fang, H.; Li, H. Counting of plantation trees based on line detection of point cloud data. Geomatics and Information Science of Wuhan University, 22 July 2022, pp. 1–13. [CrossRef]
  31. Wang, J.; Li, H. Registration of 3D point clouds based on voxelization simplify and accelerated iterative closest point algorithm. In Artificial Intelligence—CICAI 2021; Lecture Notes in Computer Science; LNAI 13069; Springer: Cham, Switzerland, 2021; pp. 276–288. [Google Scholar] [CrossRef]
  32. Al-Rawabdeh, A.; He, F.; Habib, A. Automated feature-based down-sampling approaches for fine registration of irregular point clouds. Remote Sens. 2020, 12, 1224. [Google Scholar] [CrossRef]
  33. Yun, T.; Jiang, K.; Li, G.; Eichhorn, M.P.; Fan, J.; Liu, F.; Chen, B.; An, F.; Cao, L. Individual tree crown segmentation from airborne LiDAR data using a novel Gaussian filter and energy function minimization-based approach. Remote Sens. Environ. 2021, 256, 112307. [Google Scholar] [CrossRef]
  34. Edelsbrunner, H.; Mücke, E.P. Three-dimensional alpha shapes. ACM Trans. Graph. 1994, 13, 43–72. [Google Scholar]
  35. Arya, S.; Malamatos, T.; Mount, D.M. Space-time tradeoffs for approximate nearest neighbor searching. J. ACM 2009, 57, 1–54. [Google Scholar]
  36. Li, S.; Li, H. Surface reconstruction algorithm using self-adaptive step alpha-shape. J. Data Acquis. Process. 2019, 34, 491–499. [Google Scholar] [CrossRef]
  37. Yang, X.; del Rey Castillo, E.; Zou, Y.; Wotherspoon, L.; Tan, Y. Automated semantic segmentation of bridge components from large-scale point clouds using a weighted superpoint graph. Autom. Constr. 2022, 142, 104519. [Google Scholar] [CrossRef]
Figure 1. Laser scan data of 4 pines: (a) Pine A; (b) Pine B; (c) Pine C; and (d) Pine D.
Figure 2. Three small forest point clouds are built with several trees. The color of a point is related to its y-coordinate and is a pseudo color used only to enhance visual effects. (a) Forest A, trees in line; (b) Forest B, triangular layout; (c) Forest C, quadrangular layout.
Figure 3. Forest D consists of 8 trees. (a) Front view; (b) top view; Forest C is surrounded by a blue rectangle.
Figure 4. RUSH06, laser scan data from the Eucalypt Open Forest [29].
Figure 5. The down-sampling results with different sampling intervals (up: 0.15; down: 0.25) are illustrated using the point cloud of Forest A. (a) The sampling interval is 0.15; (b) top view of (a); (c) the sampling interval is 0.25; (d) top view of (c).
Figure 6. The rough segmentation results of RUSH06 using the region growth algorithm. Different groups use different colors.
Figure 7. The best segment line detection in the hard segmentation algorithm; (a) segment plane; (b) rough cell of an individual tree.
Figure 8. The individual tree identification with the hard segmentation. The three vertices of the black triangle show the positions of three roots. (a) The segmentation result of Forest B; (b) the top view of (a).
Figure 9. The pending region is introduced to the hard segmentation algorithm.
Figure 10. Key steps and results of refined segmentation. (a) Generating horizontal layers; (b) building the contours of all layers; (c) refining by KNN and contour constraints; (d) one representative layer of (b); (e) refined contour and classification of (d).
Figure 11. The individual tree identification with the soft segmentation algorithm. (a) The segmentation result of Forest B; (b) the top view of (a).
Figure 12. The individual trees of Forest A partitioned using the soft segmentation method.
Figure 13. The individual tree identification using the soft segmentation method. The green lines on the ground show the edges of the Voronoi graph, which is constructed using the root positions as sites. (a) The segmentation result of Forest C; (b) the segmentation result of Forest D.
Figure 14. The segmentation results of RUSH06 using different segmentation methods. (a) Hard segmentation; (b) soft segmentation without the region growing preprocessing; (c) soft segmentation with the region growing preprocessing; (d) another view of (c).
Figure 15. The reconstructed shapes of three small forest point clouds in Figure 2. (a) Reconstructed individual tree shapes of Forest A; (b) reconstructed tree shapes of Forest B; (c) reconstructed tree shapes of Forest C.
Figure 16. Reconstructed individual tree shape of Forest D in Figure 3.
Figure 17. The reconstructed individual tree crowns with random coloring of RUSH06. (a) A front view; (b) the top view of (a).
Figure 18. The reconstructed individual tree crowns from RUSH06. (a) Tree 1; (b) Tree 2; (c) Tree 3; (d) Tree 4.
Figure 19. Comparison of the time cost of each algorithm step when the down-sampling is (or is not) adopted as one step of preprocessing.
Table 1. Information of the 4 pines’ laser scan data.

Pine Na | Point N | Scene L | Scene W | Scene H
Pine A  | 311,505 | 12.648  | 13.841  | 13.686
Pine B  | 487,555 | 10.182  | 9.306   | 11.932
Pine C  | 116,940 | 10.308  | 12.254  | 8.241
Pine D  | 269,366 | 8.469   | 11.188  | 11.113

Note: the unit of length used in this paper is the meter (m).
Table 2. Information on three small forests.

Forest Na | Tree N | Point N   | Scene L | Scene W | Scene H
Forest A  | 3      | 1,068,426 | 13.017  | 29.802  | 13.686
Forest B  | 3      | 1,068,426 | 21.072  | 17.680  | 13.686
Forest C  | 4      | 1,185,366 | 21.087  | 23.774  | 13.686
Forest D  | 8      | 2,370,732 | 21.587  | 44.774  | 13.686
Table 3. Information of the forest scan data, RUSH06.

Forest Na | Tree N | Point N    | Scene L | Scene W | Scene H
RUSH06    | 34     | 14,500,905 | 82.259  | 76.570  | 25.164
Table 4. The quantitative evaluation of segmentation results of four forest point clouds using three indexes.

Method   | Hard Segmentation        | Soft Segmentation
         | Acc    | mAcc   | mIoU   | Acc    | mAcc   | mIoU
Forest A | 0.9801 | 0.9815 | 0.9622 | 0.9827 | 0.9845 | 0.9672
Forest B | 0.9563 | 0.9624 | 0.9172 | 0.9639 | 0.9691 | 0.9318
Forest C | 0.9595 | 0.9679 | 0.9319 | 0.9672 | 0.9749 | 0.9456
Forest D | 0.9463 | 0.9556 | 0.9066 | 0.9575 | 0.9652 | 0.9262
Average  | 0.9606 | 0.9669 | 0.9295 | 0.9678 | 0.9734 | 0.9427
Table 5. The quantitative evaluation of segmentation results of RUSH06.

Method               | Acc    | mAcc   | mIoU
Hard Seg.            | 0.859  | 0.8791 | 0.7279
Soft Seg. without RG | 0.8587 | 0.8205 | 0.6826
Soft Seg. with RG    | 0.9516 | 0.9632 | 0.9272
Table 6. The running time (unit: s) of segmentation experiments of five forest point clouds.

Steps               | Forest A | Forest B | Forest C | Forest D | RUSH06
Down-sampling       | 0.0434   | 0.0376   | 0.0479   | 0.0851   | 0.7864
Region Growing      | 0        | 0        | 0        | 0        | 3.5919
Layer Partitioning  | 0.031    | 0.0378   | 0.0394   | 0.033    | 0.1098
Roots Detect        | 0.0036   | 0.0041   | 0.0049   | 0.0018   | 0.014
Delaunay + Voronoi  | 0.0024   | 0.0022   | 0.0029   | 0.0037   | 0.0176
Init Segmentation   | 0.0295   | 0.0279   | 0.0534   | 0.3537   | 0.6606
Init Contour Build  | 0.0111   | 0.0108   | 0.012    | 0.0214   | 0.0436
Refine with KNN     | 0.0777   | 0.0895   | 0.1076   | 0.2674   | 1.083
Refine with Contour | 0.0119   | 0.0128   | 0.014    | 0.0298   | 0.0697
Total               | 0.2106   | 0.2227   | 0.2821   | 0.7959   | 6.3766