Article

Roof Plane Segmentation from Airborne LiDAR Data Using Hierarchical Clustering and Boundary Relabeling

1 School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430000, China
2 Shenzhen Jimuyida Technology Co., Ltd., Shenzhen 518000, China
3 School of Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430000, China
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(9), 1363; https://doi.org/10.3390/rs12091363
Submission received: 22 March 2020 / Revised: 16 April 2020 / Accepted: 22 April 2020 / Published: 25 April 2020
(This article belongs to the Section Urban Remote Sensing)

Abstract

Roof plane segmentation is one of the key issues in constructing accurate three-dimensional building models from airborne light detection and ranging (LiDAR) data. Region growing is one of the most widely used methods to detect roof planes. It first selects one point or region as a seed, and then iteratively expands to neighboring points. However, region growing has two problems. The first is that it is hard to select robust seed points. The second is that it is difficult to detect accurate boundaries between two roof planes. In this paper, to solve these two problems, we propose a novel approach to segment roof planes from airborne LiDAR point clouds using hierarchical clustering and boundary relabeling. For the first problem, we extract an initial set of robust planar patches via an octree-based method, and then apply hierarchical clustering to iteratively merge adjacent planar patches belonging to the same plane until the merging cost exceeds a predefined threshold. The merged planar patches are regarded as robust seed patches for the subsequent region growing. The coarse roof planes are generated by adding the non-planar points into the seed patches in sequence using region growing. However, the boundaries of the coarse roof planes may be inaccurate. To solve the second problem, we refine the boundaries between adjacent coarse planes by relabeling the boundary points. As a result, we can effectively extract high-quality roof planes with smooth and accurate boundaries from airborne LiDAR data. We conducted experiments on two datasets captured from Vaihingen and Wuhan using a Leica ALS50 and a Trimble Harrier 68i, respectively. The experimental results show that our proposed approach outperforms several representative approaches in both visual quality and quantitative metrics.

Graphical Abstract

1. Introduction

Automatic three-dimensional (3D) building reconstruction in urban areas is an important and classical research topic in the fields of photogrammetry, remote sensing, and computer vision. Because airborne light detection and ranging (LiDAR) data directly capture the 3D information of buildings, they have become a widely used data source for 3D building reconstruction. Most 3D building reconstruction approaches can be divided into two general categories: model-driven (top-down) and data-driven (bottom-up) [1]. The model-driven approach assumes that a building is composed of roof primitives with a predefined topology. This kind of approach is especially appropriate for constructing 3D building models with relatively simple structures from low-density LiDAR points. For buildings with complex roof structures and high-density point clouds, the data-driven approach is a better choice. The data-driven approach first extracts the main roof primitives from the input point clouds using a point segmentation algorithm, and then analyzes the primitive topology to construct the 3D building models [2,3,4]. For the polyhedral building model, the planar patch is the basic primitive. Therefore, high-quality roof planes should be robustly segmented from the input point clouds before polyhedral building reconstruction. The plane feature is also one of the key features, together with contour and corner features [5,6,7]. In general, roof plane segmentation approaches can be divided into three categories: feature clustering-based, model fitting-based, and region growing-based approaches.
Feature clustering-based approaches use existing clustering techniques, such as mean shift [8], mode-seeking [9], k-plane [10], and fuzzy k-means [11,12], to cluster points into planar patches based on local surface properties. The results of feature clustering-based approaches depend on the quality of the surface property calculation, which is strongly influenced by the choice of neighborhood [13]. Filin and Pfeifer [9] applied a slope adaptive neighborhood to calculate the point features, and the mode-seeking algorithm was then applied to extract clusters from the feature space. Sampath and Shan [12] used the Voronoi neighborhood to estimate the surface normals and separated the LiDAR points into planar and non-planar ones. Then, they applied the fuzzy k-means method to cluster the planar points into roof planes based on their surface normals. Zhang et al. [14] proposed to detect roof planes from airborne LiDAR data based on spectral clustering of straight-line segments. However, feature clustering-based approaches still suffer from the difficulty of neighborhood selection and are sensitive to noise and outliers [15].
Model fitting-based approaches can estimate robust planar parameters from points with high noise and many outliers. The Hough Transform (HT) [16] and Random Sample Consensus (RANSAC) [17] are two widely employed model fitting methods for roof plane segmentation. The HT has been widely applied to detect planes, cylinders, and spheres from point clouds [18,19,20,21]. It first maps each point of the input LiDAR data into a parameter space, where the voting accumulators of the corresponding shapes are incremented. Then, the parameters of the corresponding shapes are estimated by searching for local maxima in the parameter space. However, the HT is time-consuming, sensitive to the parameter values, and may find spurious planes [22]. RANSAC is another widely used method for fitting a model to noisy data [23]. Tarsha-Kurdi et al. [24] compared the HT and RANSAC for building roof plane segmentation from point clouds. They suggested that RANSAC outperforms the HT in terms of both segmentation results and computational time. However, RANSAC is also relatively time-consuming and cannot handle the problem of spurious planes. Numerous variants have been developed to solve the problems of standard RANSAC [1,25,26,27,28]. Xu et al. [25] proposed a weighted RANSAC framework for roof plane segmentation from point clouds. A weighting function considering both the point-to-plane distance and the angular difference is defined to suppress spurious planes. Li et al. [26] proposed an improved RANSAC method based on Normal Distribution Transformation (NDT) cells to avoid spurious planes during plane segmentation. The key contribution of this approach is that a planar NDT cell is selected as a minimal sample in each iteration to ensure the correctness of sampling. However, the problem of spurious planes has not yet been completely resolved in RANSAC-based plane segmentation approaches.
Region growing-based approaches are another popular choice for roof plane segmentation. They first select one point or region as a seed, and then iteratively expand to neighboring points using appropriate criteria. As growing criteria, the distances of points to planes and the angle differences between normal vectors are two widely used similarity measures. Gorte [29] selected a triangle from input Triangulated Irregular Networks (TINs) as a seed, and then applied region growing to iteratively add the neighboring triangles to the current segment. Zhang et al. [30] proposed to segment roof planes using region growing based on a plane-fitting technique. This approach selects the point with the best planarity as the seed point for region growing. Cao et al. [31] first selected the seed point in a parameter space with reduced dimensions, and then segmented the planar patches in the spatial space using region growing. Xu et al. [32] proposed a voxel-based region growing segmentation method for roof plane extraction. This approach performs region growing at the voxel level instead of the point level [33]. Robust Principal Component Analysis (RPCA) is applied to estimate the attributes of the voxels. To better estimate point normals around sharp features, Gilani et al. [34] proposed an improved Principal Component Analysis (PCA) method to generate consistent point normals. Region growing-based approaches are easy to implement, and faster than model fitting-based approaches when the number of points is large [35]. However, region growing is sensitive to the selection of the seed points or regions, and struggles to detect accurate boundaries between two smooth regions. In addition, region growing-based approaches may lead to poor segmentation (over- and undersegmentation) for buildings with complex structures, especially in the intersection regions. To solve these problems, Awrangjeb and Fraser [36] designed several specific rules for different error cases to refine the initial planar segments. Dal Poz and Ywata [28] refined the segments by merging similar planar segments. Yan et al. [15] proposed a global optimization approach to refine the initial segments generated by a region growing method. This approach can significantly improve the completeness and correctness rates of the initial segments and eliminate cross-planes. However, as it needs to iteratively perform multi-label optimization to minimize the energy functions, its computational cost is high.
In this paper, we also propose a region growing-based approach to segment roof planes from airborne LiDAR point clouds. Feng et al. [37] presented an efficient plane extraction approach based on agglomerative hierarchical clustering for organized point clouds. This approach first constructs a graph by dividing the point clouds into regular grids, and then performs hierarchical clustering on this graph. Inspired by this approach, we also apply hierarchical clustering to iteratively merge an initial set of planar patches generated by an octree-based segmentation method. The merged patches are selected as the seeds for region growing. However, there are two differences between our proposed approach and the approach presented in [37]: (1) the airborne LiDAR data used in this paper are unorganized, and (2) an octree-based method is applied to generate the initial planar patches. In addition, to solve the problems of oversegmentation, undersegmentation, and spurious planes, we introduce a novel optimization approach based on boundary relabeling [38]. Starting from the plane segments generated by region growing, it continuously refines the planar patches by modifying their boundaries. The workflow of our proposed roof plane segmentation approach is presented in Figure 1. The key contributions of the proposed approach can be summarized as follows.
  • We propose a new region growing-based coarse roof plane segmentation approach. It generates the rough planar clusters via an octree-based method, and merges them using a hierarchical clustering method. The merged patches are selected as the robust seeds for region growing.
  • We propose a novel boundary relabeling-based roof plane refinement strategy to improve the quality of the initial coarse plane segmentation. We formulate the roof plane refinement as an energy maximization problem and optimize it using boundary relabeling, which is more efficient than the global energy optimization approach [15]. It can remove most of the errors present in the coarse segmentation and significantly improve the accuracy of the boundaries between adjacent roof planes.
The rest of this paper is organized as follows. Section 2 introduces our proposed region growing-based coarse plane segmentation approach. The refinement of planar patches using boundary relabeling is presented in Section 3. Experimental results on two sets of airborne LiDAR point clouds captured from Vaihingen and Wuhan are presented in Section 4. Conclusions are drawn in Section 5.

2. Region Growing-Based Coarse Roof Plane Segmentation

In our study, as in the approaches presented in [15,25], the point clouds of building roofs are first extracted from the airborne LiDAR data. To robustly extract roof planes from airborne LiDAR point clouds, we propose a new region growing-based plane segmentation approach. The proposed approach consists of three steps. In the first step, an octree-based segmentation method is used to extract the initial planar patches. In the second step, we apply a hierarchical clustering method to iteratively merge adjacent initial patches belonging to the same plane until the merging cost exceeds a predefined threshold. The merged patches are regarded as the seeds for region growing. Finally, the non-planar points are added to the seed patches in sequence via a region growing method. All steps of our proposed region growing-based coarse roof plane segmentation approach are listed in Algorithm 1. Moreover, one example of coarse roof plane segmentation is presented in Figure 2.
Algorithm 1 Region growing-based coarse roof plane segmentation
Input: The input building point clouds $\mathcal{R}^3$.
Output: The coarse roof planes $P_{coarse}$.
  • Organize $\mathcal{R}^3$ into voxels $\mathcal{V}^3$.
  • Fit the rough planar patches $P_{rough}$ via an octree-based segmentation method:
    (a) For each voxel $V_i$, fit the planar patch $P_i$;
    (b) Find the maximum point-to-plane distance $d_{max}$ among the points included in $V_i$;
    (c) If $d_{max} \le T_d$, $V_i$ is said to be successfully fitted; otherwise, partition $V_i$ into eight sub-voxels;
    (d) Repeat (a)–(c) until each voxel has been successfully fitted or the voxel reaches the minimum size.
  • Merge the rough patches $P_{rough}$ via hierarchical clustering:
    (a) Construct the adjacency graph $G = (V, E)$ for the rough patches;
    (b) Find the edge with the minimum merging MSE, and merge the corresponding patches;
    (c) Update the adjacency graph;
    (d) Repeat (b)–(c) until the minimum merging MSE exceeds $T_m$; obtain the merged patches $P_{merge}$.
  • Add the non-planar points into the merged patches $P_{merge}$ via region growing:
    (a) Sort $P_{merge}$ by the number of points they contain, in descending order;
    (b) Iteratively select each seed patch in sequence and grow it to its maximum extent;
    (c) Apply hierarchical clustering again to merge the neighboring patches, and obtain the coarse roof planes $P_{coarse}$.

2.1. Octree-Based Rough Planar Patch Extraction

Let $\mathcal{R}^3$ denote the input points. To speed up the plane detection process and to organize the input points, we partition the original data $\mathcal{R}^3$ into many voxels. Every point belongs to a voxel, and a voxel contains a set of points. Let $\mathcal{V}^3$ denote all voxels. We apply an octree-based point segmentation approach to efficiently and effectively segment the input points into a set of rough planar patches $P_{rough}$. Owing to its efficiency, the octree structure has been widely used in the field of point cloud segmentation [13,39,40]. For each voxel $V_i$, we fit a planar patch $P_i$ using PCA. To decide whether this voxel can be regarded as a valid planar patch, we calculate the point-to-plane distance $d$ for each point $x$ included in $V_i$ based on the estimated plane parameters. We can then easily find the maximum point-to-plane distance $d_{max}$ among all points included in this voxel. If $d_{max}$ is smaller than a predefined threshold $T_d$, this voxel is said to be successfully fitted; otherwise, this voxel is divided into eight sub-voxels, and the plane parameters are estimated for each sub-voxel again. We recursively subdivide the voxels until each sub-voxel can be successfully fitted as a plane or the sub-voxel reaches a minimum size. In general, the size of a roof structure is larger than 1 m$^2$, so we directly set the minimum voxel size to 1 m. If a sub-voxel reaches the minimum size and still cannot be successfully fitted, the points belonging to this sub-voxel are regarded as non-planar points. Let $P_{rough}$ denote all rough planar patches. One visual example of octree-based rough planar patch extraction is shown in Figure 2b.
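To make the recursive subdivision concrete, the following is a minimal Python sketch of the octree-based patch extraction, assuming the input is an N × 3 NumPy array enclosed by a single cubic voxel. The function names (fit_plane_pca, extract_rough_patches) and the axis-aligned recursion are illustrative only and do not reproduce the actual implementation.

```python
import numpy as np

def fit_plane_pca(pts):
    """Fit a plane to pts (M x 3) via PCA; return (centroid, unit normal)."""
    centroid = pts.mean(axis=0)
    # The right-singular vector with the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    return centroid, vt[-1]

def extract_rough_patches(pts, origin, size, T_d=0.1, min_size=1.0):
    """Recursively split a cubic voxel until its points fit a plane within T_d,
    or the voxel edge length reaches min_size.
    Returns (list of planar patches, array of non-planar points)."""
    if len(pts) < 3:
        return [], pts                                   # too few points to fit a plane
    centroid, normal = fit_plane_pca(pts)
    d_max = np.max(np.abs((pts - centroid) @ normal))    # maximum point-to-plane distance
    if d_max <= T_d:
        return [pts], np.empty((0, 3))                   # voxel successfully fitted
    if size <= min_size:
        return [], pts                                   # minimum size reached, still non-planar
    patches, non_planar = [], [np.empty((0, 3))]
    half = size / 2.0
    for dx in (0, 1):                                    # visit the eight sub-voxels
        for dy in (0, 1):
            for dz in (0, 1):
                sub_origin = origin + half * np.array([dx, dy, dz])
                inside = np.all((pts >= sub_origin) & (pts < sub_origin + half), axis=1)
                if inside.any():
                    p, n = extract_rough_patches(pts[inside], sub_origin, half, T_d, min_size)
                    patches += p
                    non_planar.append(n)
    return patches, np.vstack(non_planar)
```

A typical call bounds the whole cloud with origin = pts.min(axis=0) and size = (pts.max(axis=0) - origin).max() + 1e-6, using $T_d = 0.1$ m as suggested in Section 4.2.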

2.2. Planar Patch Merging Using Hierarchical Clustering

After rough planar patch extraction, a roof may be separated into several small patches, as shown in Figure 2b. Thus, we need an efficient scheme to further merge the patches belonging to the same plane. First, we construct an adjacency graph for the rough planar patches. Let $G = (V, E)$ denote the graph, where $V$ and $E$ denote the nodes and edges, respectively. Each node $v$ of the graph $G$ is a rough patch. Each edge $e(p, q)$ links two adjacent rough patches $P_p$ and $P_q$. Then, hierarchical clustering is performed on this graph to iteratively merge the rough patches [37,41]. We repeatedly find the edge $e(p, q)$ with the minimum merging Mean Squared Error (MSE) and merge the two corresponding nodes into one node. The graph is updated after each iteration. The merging MSE is the plane-fitting MSE of the union of the two corresponding patches $P_p$ and $P_q$. If the minimum merging MSE exceeds the predefined threshold $T_m$, we terminate the hierarchical clustering and finish the patch merging. Let $P_{merge}$ denote all merged patches. One visual example of rough planar patch merging is shown in Figure 2c.
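The greedy merging step can be sketched as follows, assuming each patch is an N × 3 NumPy array and the adjacency is supplied as a set of index pairs; the names plane_mse and merge_patches are illustrative, and the merging MSE is recomputed from scratch here for clarity rather than maintained incrementally.

```python
import numpy as np

def plane_mse(pts):
    """Plane-fitting mean squared error of a point set (M x 3) via PCA."""
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    d = (pts - centroid) @ vt[-1]
    return float(np.mean(d ** 2))

def merge_patches(patches, edges, T_m=0.01):
    """Greedily merge adjacent patches while the best merging MSE stays below T_m.
    patches: list of (M_i x 3) arrays; edges: set of frozenset({i, j}) index pairs."""
    patches = {i: p for i, p in enumerate(patches)}
    edges = set(edges)
    while True:
        # Find the edge whose union of patches has the smallest plane-fitting MSE.
        best = min(edges,
                   key=lambda e: plane_mse(np.vstack([patches[i] for i in e])),
                   default=None)
        if best is None or plane_mse(np.vstack([patches[i] for i in best])) > T_m:
            break
        i, j = tuple(best)
        patches[i] = np.vstack([patches[i], patches[j]])   # merge patch j into patch i
        del patches[j]
        # Rewire edges that touched j so they now touch i, and drop self-loops.
        edges = {e2 for e in edges
                 if len(e2 := frozenset(i if k == j else k for k in e)) == 2}
    return list(patches.values())
```

With the threshold $T_m = 0.01$ suggested in Section 4.2, merging stops as soon as no adjacent pair of patches can be fitted by a single plane with an MSE below the threshold.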

2.3. Point-Based Region Growing

After merging the rough planar patches, we add the non-planar points into their neighboring patches via region growing to extract the complete roofs. A kd-tree is applied to search the k-nearest neighboring points of each point. We first sort the merged patches $P_{merge}$ according to the number of points they contain, in descending order. The sorted patches are regarded as the seed patches. Then, point-based region growing [42] is performed to add the non-planar points into the seed patches in sequence. Finally, to avoid oversegmentation artifacts, hierarchical clustering is applied again to merge the grown patches. One example of the final coarse roof plane extraction is shown in Figure 2d. Let $P_{coarse}$ denote all coarse roof planes.
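A possible realization of the growing step is sketched below using SciPy's cKDTree for the k-nearest-neighbor search. For brevity, growth is driven only by the point-to-plane distance to the fixed seed plane, which simplifies the smoothness-constrained growing of [42]; all function and parameter names are our own illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def grow_planes(seed_patches, non_planar_pts, T_d=0.1, k=8):
    """Add non-planar points to the seed patches via point-based region growing."""
    labels = -np.ones(len(non_planar_pts), dtype=int)         # -1 = still unassigned
    tree = cKDTree(non_planar_pts) if len(non_planar_pts) else None
    k = min(k, len(non_planar_pts))
    # Process the seed patches in descending order of size, as in Algorithm 1.
    for idx in np.argsort([-len(p) for p in seed_patches]):
        patch = seed_patches[idx]
        centroid = patch.mean(axis=0)
        _, _, vt = np.linalg.svd(patch - centroid, full_matrices=False)
        normal = vt[-1]                                        # seed plane normal via PCA
        frontier = list(patch)                                 # grow outward from the seed points
        while frontier and tree is not None:
            q = frontier.pop()
            _, idxs = tree.query(q, k=k)
            for j in np.atleast_1d(idxs):
                if labels[j] != -1:
                    continue                                   # already assigned to another plane
                if abs((non_planar_pts[j] - centroid) @ normal) <= T_d:
                    labels[j] = idx
                    frontier.append(non_planar_pts[j])
    # Attach the grown points to their seed patches to form the coarse planes.
    return [np.vstack([p, non_planar_pts[labels == i]]) if np.any(labels == i) else p
            for i, p in enumerate(seed_patches)]
```

The returned planes can then be passed once more through the hierarchical clustering of Section 2.2 to remove remaining oversegmentation.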

3. Roof Plane Refinement

In Section 2, the coarse roof planes are segmented from the airborne LiDAR point clouds via a region growing-based approach. In most cases, this region growing-based approach performs well and can extract high-quality roof planes. However, for some buildings with complex roof structures, the results still suffer from several problems (spurious planes, oversegmentation, undersegmentation, and so on), especially in the regions close to the boundaries of planar patches. Thus, a new refinement approach based on boundary relabeling is proposed to overcome these problems. We formulate the roof plane refinement as an energy maximization problem and optimize it using boundary relabeling. It refines the planar patches by iteratively relabeling the boundary points to the optimal neighboring patch. Figure 3 presents several representative examples of roof plane refinement. The input point clouds and the corresponding color images are presented in Figure 3a,b, respectively. The coarse roof planes, with some representative problems highlighted by the yellow ellipses, are presented in Figure 3c. The refined roof planes of Figure 3c are presented in Figure 3d. The representative problems present in the coarse roof planes are solved well after the roof plane refinement.

3.1. Plane Refinement as an Energy Maximization

Let $N$ be the number of points and $K$ be the number of roof planes. Let $A = (A_{x_1}, \ldots, A_{x_i}, \ldots, A_{x_N})$ be the labels of the points, where $A_{x_i} = k$ means that the point $x_i$ belongs to the roof plane $P_k$. Apparently, we can use $A$ to represent a partitioning of the points into planar clusters. Let $\mathcal{A}$ denote all valid partitions. The aim of plane refinement is to find the best partition $A^*$ from $\mathcal{A}$, namely, to assign the best label $k$ to each point $x_i$. We formulate this as an energy optimization problem and solve it by maximizing the following function,
$$A^* = \arg\max_{A \in \mathcal{A}} E(A) \qquad (1)$$
where $E(A)$ is the energy function, defined as the sum of the distance term $D(A)$ and the boundary term $G(A)$. The distance term $D(A)$ is defined based on the point-to-plane distance. The boundary term $G(A)$ is used to ensure that the boundaries of the refined roof planes are regular. The energy function is defined as follows,
$$E(A) = D(A) + \lambda G(A) \qquad (2)$$
where $\lambda$ is a weight coefficient that balances the influence of the two terms. The default value of $\lambda$ is empirically set to 5; according to our testing experience, this setting satisfies both the accuracy and the regularity of the roof planes. During testing, we found that $\lambda$ is insensitive to the input data, so it can be regarded as a constant in our approach. We simply fixed $\lambda$ at its default value in the evaluation experiments of this paper, and the results are quite competitive and stable.

3.1.1. Distance Term

The distance term $D(A)$ encourages assigning each point to the plane with the minimum point-to-plane distance. Namely, it attempts to reduce the MSE of the refined roof planes. For each point $x_i$ and its corresponding plane $P_k$, we define the local distance term $d(x_i)$ of $x_i$ as
$$d(x_i) = -d(x_i, P_k) \qquad (3)$$
where $d(x_i, P_k)$ is the distance from $x_i$ to $P_k$. Moreover, we define the distance term as
$$D(A) = \sum_{i=1}^{N} d(x_i) \qquad (4)$$
Apparently, the distance term achieves its maximum when every point belongs to the plane with the minimum point-to-plane distance. Thus, this term penalizes the appearance of segmentation errors. However, in some cases, the optimal plane for a point is not the plane with the minimum point-to-plane distance. One example is presented in Figure 4. From Figure 4a, we can see that there are two planes (Planes A and B) in this building. When only the distance term is used, the refined planes are as presented in Figure 4b. In Figure 4b, there are two points marked by the red rectangles. These two points are closer to Plane B, but they actually belong to Plane A. Thus, we need to design a boundary term to solve this problem. The refined planes obtained with the boundary term are presented in Figure 4c. These two points are assigned to Plane A as expected.

3.1.2. Boundary Term

The boundary term $G(A)$ evaluates the regularity of the roof planes. It penalizes local irregularities in the boundaries of roof planes and prefers compact and smooth boundaries. For each point $x_i$, we assume that its label $A_{x_i}$ equals $k$. Let $N_i$ denote the neighboring points of $x_i$, and let $|N_i|$ denote the number of points included in $N_i$. We define the local smoothness term $g(x_i)$ of the point $x_i$ in the area of $N_i$ as
$$g(x_i) = \frac{1}{|N_i|} \sum_{j=1}^{|N_i|} \eta(A_{x_j}, k) \qquad (5)$$
where the coefficient $\eta(A_{x_j}, k) = 1$ if $A_{x_j} = k$; otherwise, $\eta(A_{x_j}, k) = 0$. The range of $g(x_i)$ is $[0, 1]$. Based on the local smoothness term, we define the boundary term as
$$G(A) = \sum_{i=1}^{N} g(x_i) \qquad (6)$$
where $N$ is the total number of points. Apparently, if $N_i$ contains a unique planar patch, $g(x_i)$ reaches its maximum. Near the boundaries, the points included in $N_i$ belong to several roof planes, so $g(x_i)$ cannot reach its maximum at every point. If the number of points close to the boundaries decreases, the value of $G(A)$ increases. Thus, this term penalizes jagged boundaries and enforces smooth boundaries. In addition, this term also penalizes island planes and spurious planes, as these planes cause longer boundaries.
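Under the definitions of Equations (2)–(6), the energy of a labeling can be evaluated as in the short sketch below. Planes are represented as (centroid, normal) pairs, the k nearest neighbors of each point stand in for $N_i$, and the sign convention of the local distance term follows Equation (3); the helper names are illustrative only.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_neighbors(pts, k=8):
    """Indices of the k nearest neighbors of every point (excluding the point itself)."""
    _, idx = cKDTree(pts).query(pts, k=k + 1)
    return idx[:, 1:]

def local_distance_term(x, plane):
    """d(x_i) = -(distance from x_i to its plane), Equation (3)."""
    centroid, normal = plane
    return -abs((x - centroid) @ normal)

def local_boundary_term(i, labels, neighbors):
    """g(x_i): fraction of the neighbors of x_i sharing its label, Equation (5)."""
    return float(np.mean(labels[neighbors[i]] == labels[i]))

def energy(pts, labels, planes, neighbors, lam=5.0):
    """E(A) = D(A) + lambda * G(A), Equations (2), (4) and (6)."""
    D = sum(local_distance_term(pts[i], planes[labels[i]]) for i in range(len(pts)))
    G = sum(local_boundary_term(i, labels, neighbors) for i in range(len(pts)))
    return D + lam * G
```

Here lam corresponds to the weight $\lambda = 5$ discussed above, and planes can be any list or dictionary mapping a label to its fitted (centroid, normal) pair.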

3.2. Energy Optimization via Boundary Relabeling

We introduce a new optimization method based on boundary relabeling for plane refinement. The coarse roof plane segmentation generated by region growing serves as a good initial partitioning for the subsequent energy optimization. Starting from the initial partitioning $A$, the method iteratively updates the partitioning by proposing small local changes in the boundary regions. Let $A^\prime$ denote the new partitioning generated by moving a point from one plane to one of its neighbors. If the energy function $E(A^\prime)$ increases after the local change, the partitioning is updated.
In each iteration, we move only one boundary point to one of its neighboring planes. Let $x_i^b$ denote a boundary point. We assume that $x_i^b$ belongs to plane $P_p$, i.e., $A_{x_i^b} = p$. We propose to move $x_i^b$ from $P_p$ to its neighboring plane $P_q$, and we need to efficiently evaluate whether this move is accepted or not, namely, whether $E(A^\prime)$ is larger than $E(A)$. Because this move is a local change, it can be efficiently evaluated in a small local region. Before moving $x_i^b$, we first calculate the local distance term and the local boundary term. The local distance term $d(x_i)$ can be easily calculated according to the definition presented in Equation (3). Because the local boundary term $g(x_i)$ is defined over the neighboring points of $x_i$, moving $x_i$ influences the boundary terms of its neighboring points as well as its own. Thus, we calculate the boundary term $g_{N_i}(x_i)$ for $x_i$ and its neighboring points $N_i$ as
$$g_{N_i}(x_i) = \sum_{j=1}^{|N_i|} g(x_j) \qquad (7)$$
where $g(x_j)$ is the local boundary term of a neighboring point $x_j$, as defined in Equation (5). After moving $x_i^b$, we calculate the local distance term $d^\prime(x_i)$ and the boundary term $g^\prime_{N_i}(x_i)$ in the same way. We accept this move if it satisfies
$$\frac{d^\prime(x_i) - d(x_i)}{\max(|d^\prime(x_i)|, |d(x_i)|)} + \lambda \frac{g^\prime_{N_i}(x_i) - g_{N_i}(x_i)}{\max(g^\prime_{N_i}(x_i), g_{N_i}(x_i))} > 0 \qquad (8)$$
If the move is accepted, we update the partitioning, $A = A^\prime$. Then, we select the next boundary point and evaluate whether it can be moved or not. The iteration terminates when no boundary point can be found for relabeling, and the optimal partitioning $A^*$ is obtained. Let $P_{refined}$ denote the final refined roof planes. One example of roof plane refinement via boundary relabeling is presented in Figure 5.
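Continuing the previous sketch (it reuses local_distance_term and local_boundary_term), the loop below evaluates the criterion of Equation (8) locally for every boundary point and each of its neighboring planes. The local region covers $x_i$ and its neighbors, the plane parameters are kept fixed, and the iteration cap is a simplification of our own; the sketch is an illustration, not the actual implementation.

```python
import numpy as np

def relabel_boundaries(pts, labels, planes, neighbors, lam=5.0, max_iters=50):
    """Iteratively move boundary points to neighboring planes when the local
    change of Equation (8) is positive; stops when no move is accepted."""
    labels = labels.copy()
    for _ in range(max_iters):
        moved = False
        for i in range(len(pts)):
            nbr_labels = set(labels[neighbors[i]]) - {labels[i]}
            if not nbr_labels:
                continue                                  # not a boundary point
            region = np.concatenate(([i], neighbors[i]))  # x_i and its neighborhood N_i
            for q in nbr_labels:
                old = labels[i]
                d_old = local_distance_term(pts[i], planes[old])
                g_old = sum(local_boundary_term(j, labels, neighbors) for j in region)
                labels[i] = q                             # tentatively move x_i to plane P_q
                d_new = local_distance_term(pts[i], planes[q])
                g_new = sum(local_boundary_term(j, labels, neighbors) for j in region)
                gain = ((d_new - d_old) / max(abs(d_new), abs(d_old), 1e-12)
                        + lam * (g_new - g_old) / max(g_new, g_old, 1e-12))
                if gain > 0:
                    moved = True                          # accept the move and keep the new label
                    break
                labels[i] = old                           # reject the move: restore the old label
        if not moved:
            break
    return labels
```

Only boundary points are visited, and each accepted move must yield a positive normalized gain; refitting the plane parameters periodically during the relabeling is a possible extension not shown here.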

4. Experimental Results and Discussion

Two datasets are applied to evaluate the proposed approach in the experiments. The first dataset is captured from the city of Vaihingen, Germany [43], and is provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) (available at http://www2.isprs.org/commissions/comm3/wg4/tests.html). The Vaihingen dataset is captured by a Leica ALS50 system with a mean flying height above ground of 500 m. The maximum viewing angle, pulse repetition rate, and scan rate of this scanner are 37.5 degrees, 83 kHz, and 70 kHz, respectively. The other dataset is captured from the city of Wuhan, China, using a Trimble Harrier 68i sensor. The maximum viewing angle, pulse repetition rate, and scan rate of this scanner are 60 degrees, 400 kHz, and 200 kHz, respectively. The details of the two sets of point clouds used in our experiments are presented in Table 1. Several representative local areas of the two datasets were selected to conduct the experiments, as shown in Figure 6 and Figure 7. For these selected areas, the ground truth is generated manually and is shown in the second rows of Figure 6 and Figure 7.
Based on these selected areas and the corresponding ground truth, we compared the proposed approach with a region growing-based (RG-based) approach [42], a RANSAC-based approach [44], and a global optimization-based (GO-based) approach [15]. The RANSAC-based approach has been implemented in the CloudCompare software (available at http://www.cloudcompare.org/). In this section, we use the term boundary relabeling-based (BR-based) approach to denote our proposed approach. The proposed approach was implemented in C++ under Windows and tested on a computer with an Intel Core i7-8700 at 3.2 GHz and 16 GB of RAM.

4.1. Evaluation Metrics

Several metrics presented in [36] are selected to quantitatively evaluate the performance of the proposed approach. The metrics consist of completeness ($C_m$), correctness ($C_r$), and quality ($Q_l$), which are defined as follows,
$$C_m = \frac{TP}{TP + FN}, \quad C_r = \frac{TP}{TP + FP}, \quad Q_l = \frac{TP}{TP + FN + FP} \qquad (9)$$
where $TP$ (True Positive) is the number of roofs found in both the reference (ground truth) and the segmentation, $FN$ (False Negative) is the number of reference roofs not found in the segmentation, and $FP$ (False Positive) is the number of segmented roofs not found in the reference. The references of the datasets are generated manually; let $GT$ denote the ground truth. To be counted as a true positive, a minimum overlap of 50% with the reference is required. In addition, the maximum overlap criterion [25,26] is applied to find the correspondences between the reference and the segmented planes, and only one-to-one correspondences are established.
To evaluate the oversegmentation and undersegmentation errors, two additional metrics, the reference cross-lap rate ($R_c$) and the detection cross-lap rate ($D_c$) [45], are applied. The reference cross-lap rate is the percentage of reference planes that overlap multiple detected planes, and the detection cross-lap rate is the percentage of detected planes that overlap more than one reference plane. Therefore, they can be applied to quantify the oversegmentation and undersegmentation errors, and they are defined as follows,
$$R_c = \frac{N_r^\prime}{N_r}, \quad D_c = \frac{N_d^\prime}{N_d} \qquad (10)$$
where $N_r$ and $N_d$ denote the number of reference planes and detected planes, respectively, $N_r^\prime$ is the number of reference planes that overlap more than one detected plane, and $N_d^\prime$ denotes the number of detected planes that overlap more than one reference plane.
In addition, the boundary precision ($B_p$), boundary recall ($B_r$), and the F-measure ($F_m$), i.e., the harmonic mean of boundary precision and recall [46,47], are applied to evaluate the quality of the boundaries, and they are defined as follows,
$$B_p = \frac{|C_d \cap C_r|}{|C_d|}, \quad B_r = \frac{|C_d \cap C_r|}{|C_r|}, \quad F_m = \frac{2}{1/B_r + 1/B_p} \qquad (11)$$
where $C_d$ and $C_r$ denote the boundary points of the detected and reference planes, respectively, and $|\cdot|$ denotes the number of points included in a set.
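For reference, the plane-level and boundary-level metrics of Equations (9)–(11) can be computed as in the short sketch below, assuming that the one-to-one matching between detected and reference planes (maximum overlap, at least 50%) has already been established and that boundary points are represented as hashable identifiers.

```python
def plane_metrics(tp, fn, fp):
    """Completeness, correctness and quality, Equation (9)."""
    cm = tp / (tp + fn)
    cr = tp / (tp + fp)
    ql = tp / (tp + fn + fp)
    return cm, cr, ql

def boundary_metrics(detected_boundary, reference_boundary):
    """Boundary precision, recall and F-measure, Equation (11).
    Both arguments are sets of boundary-point identifiers (e.g., rounded coordinates)."""
    inter = len(detected_boundary & reference_boundary)
    bp = inter / len(detected_boundary)
    br = inter / len(reference_boundary)
    fm = 2.0 / (1.0 / br + 1.0 / bp)
    return bp, br, fm

# Example with the BR-based counts reported for the first Vaihingen area (Table 2):
# plane_metrics(60, 3, 6) returns approximately (0.9524, 0.9091, 0.8696).
```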

4.2. Choice of Parameters

Two main parameters of the proposed BR-based approach, $T_d$ and $T_m$, need to be set. In the proposed approach, $T_d$ controls the generation of the rough planar patches, and $T_m$ controls the merging of neighboring planar patches. In this section, we selected two local buildings from the Vaihingen dataset to illustrate how these parameters influence the proposed approach.
In Figure 8, we illustrate the influence of $T_d$ while the value of $T_m$ is fixed at 0.01. Figure 8a,e presents the input point clouds and the corresponding color image, respectively. If $T_d$ is too small, the proposed approach may fail to fit rough patches for some main roofs; as shown in Figure 8b,f, two roof planes highlighted by the yellow ellipses are missed. If $T_d$ is too large, the accuracy of the rough planar patches decreases and undersegmentation errors may appear; as shown in Figure 8d,h, two similar adjacent roof planes are regarded as one roof plane. When $T_d = 0.1$ m, the generated rough planar patches and the final roof planes are the best for this building, as shown in Figure 8c,g. We suggest tuning $T_d$ in the range of [0.1 m, 0.2 m] for different datasets according to their point accuracy. We set $T_d = 0.1$ m for both the Vaihingen and Wuhan datasets.
To illustrate the influence of $T_m$, we fixed the value of $T_d$ at 0.1 m and changed the value of $T_m$, as shown in Figure 9. Figure 9a presents the test building. If $T_m$ is too small, the proposed approach may fail to merge the patches belonging to the same roof plane, and oversegmentation errors may appear, as shown in Figure 9b. If $T_m$ is too large, more and more patches are merged into one patch, and undersegmentation errors may appear, as shown in Figure 9d. Figure 9c presents the roofs generated using $T_m = 0.01$, where both the over- and undersegmentation errors disappear. The proposed approach is sensitive to the value of $T_m$, and we suggest tuning $T_m$ in the range of [0.01, 0.03] for different datasets according to their point accuracy. For the Vaihingen and Wuhan datasets, we set $T_m = 0.01$ and $T_m = 0.02$, respectively.

4.3. Comparative Evaluation

We conducted a visual comparison of the different approaches on several local buildings selected from the Vaihingen and Wuhan datasets, as shown in Figure 10 and Figure 11. In addition, based on the roof planes generated by the proposed BR-based approach, we produced the 3D building models using the method presented in [4]. The corresponding models are presented in the last rows of Figure 10 and Figure 11.
Figure 10 presents the roof segmentation results of four local buildings selected from the Vaihingen dataset (marked by the red rectangles in Figure 6). In Figure 10a, irregular boundaries appear in the roof plane segmentation results generated by the RG-, RANSAC-, and GO-based approaches. The roof planes extracted by the RG- and RANSAC-based approaches also suffer from oversegmentation and spurious planes. The BR-based approach successfully avoids these problems. In Figure 10b, the proposed BR-based approach suffers from over- and undersegmentation (highlighted by the black rectangles). The main reason is that the normal difference between the corresponding planes is small. The RG- and GO-based approaches suffer from similar problems. In addition, the RG- and RANSAC-based approaches also generate irregular boundaries (highlighted by the red rectangles). In Figure 10c, the proposed BR-based approach offers the best segmentation, whereas over- or undersegmentation errors and spurious planes appear in the results generated by the other approaches. In Figure 10d, the GO- and BR-based approaches generate the best roof planes, while the roof planes generated by the other approaches all suffer from undersegmentation.
Figure 11 presents the roof segmentation results of four local buildings selected from the Wuhan dataset (marked by the red rectangles in Figure 7). In Figure 11a, the building has six roof planes, and the edges between these roof planes are weak. The RANSAC-based approach fails to detect any correct roof planes for this building, whereas the RG-, GO-, and BR-based approaches successfully detect the six roof planes. For the RG-based approach, the boundaries are smooth and regular, but almost all of them mismatch the ground truth. Both the GO- and BR-based approaches generate roof planes with smooth boundaries; however, the boundaries of the segments generated by the BR-based approach adhere more tightly to the ground truth. In Figure 11b, although the RG- and RANSAC-based approaches generate smooth and regular boundaries, these boundaries mismatch the ground truth. The blue lines in Figure 11b denote the real boundaries between the two main roof planes. The BR-based approach again generates the best roof planes and adheres more tightly to the ground truth than the GO-based approach. In Figure 11c, the building has 15 roof planes. Both the RG- and GO-based approaches generate some small spurious planes (black rectangles). In addition, both the RG- and RANSAC-based approaches generate irregular boundaries (red rectangles). The BR-based approach generates the best roof planes for this building. In Figure 11d, there is a complex building with a multilayer structure. All approaches generate high-quality roof planes for the top layer of this building. However, both the RG- and GO-based approaches fail to detect the correct planes for the bottom layer (black rectangles). The RANSAC- and BR-based approaches perform well for this complex building.
In addition to the local visual comparison, we performed a global quantitative evaluation of all approaches. Based on the ground truth presented in the second rows of Figure 6 and Figure 7, we calculated the evaluation metrics defined in Section 4.1 for all approaches. The quantitative evaluation results are shown in Table 2. For the first area of the Vaihingen dataset (Figure 6a), the proposed BR-based approach offers the best correctness and quality, fewer oversegmentation errors, the best boundary precision and recall, and the best F-measure. In terms of completeness, the BR-based approach is only slightly lower than the GO-based approach. For the detection cross-lap rate, the GO-based approach performs better. This is because the GO-based approach tends to segment a roof into many planes in this area, so there are many oversegmentation errors and consequently fewer undersegmentation errors; its value of $R_c$ reaches 17.46%, which is much larger than the 6.35% achieved by the BR-based approach. For the second area of the Vaihingen dataset (Figure 6b), the proposed BR-based approach offers the best scores for all evaluation metrics. For the two areas of the Wuhan dataset (Figure 7a,b), the BR-based approach still offers the best scores for almost all metrics, except the detection cross-lap rate in the second area; this is also because the GO-based approach tends to over-segment the roofs in this area. Moreover, for the second area of the Wuhan dataset (Figure 7b), the completeness, correctness, boundary precision, boundary recall, and F-measure of the proposed BR-based approach are significantly higher than those of the other approaches, while the GO-based approach offers relatively poor correctness, many oversegmentation errors, and low boundary precision, recall, and F-measure. In conclusion, the proposed BR-based approach performs best among all approaches on both the Vaihingen and Wuhan datasets. The GO-based approach performs well on the Vaihingen dataset, but performs poorly on the Wuhan dataset.
Finally, we discuss the computational time of all approaches, as shown in the last column of Table 2. The computational time of the RG-based approach is the shortest among all approaches, while that of the GO-based approach is the longest, as it needs to iteratively perform multi-label optimization to minimize the energy functions. Compared with the GO-based approach, the proposed BR-based approach is very efficient; its computational time is significantly shorter. In addition, for the Wuhan dataset, the computational time of the BR-based approach is longer than that of the RANSAC-based approach, whereas for the Vaihingen dataset it is shorter.

5. Conclusions

In this paper, we have presented a novel approach for automatically extracting roof planes with smooth and accurate boundaries from airborne LiDAR data using hierarchical clustering and boundary relabeling. In particular, the proposed boundary relabeling-based plane refinement approach can remove most of the segmentation errors present in the coarse segmentation and significantly improve the quality of the roof planes. In addition, as the boundary relabeling is performed on local regions, the computational cost of the proposed approach is significantly lower than that of the GO-based approach. The experimental results also show that the proposed approach can generate high-quality roof planes, and that it outperforms the RG-based, RANSAC-based, and GO-based approaches.
Nevertheless, the proposed approach still needs to be improved. First, two parameters need to be tuned for different datasets, and adaptive thresholding methods should be developed. Second, the plane refinement results of boundary relabeling rely on the coarse plane segmentation input, and the boundary relabeling may get stuck in a local optimum. More flexible and global optimization strategies should be developed to solve these problems.

Author Contributions

Conceptualization, L.L. and J.Y.; methodology, L.L.; software, L.L. and X.L.; validation, L.L. and J.T.; formal analysis, J.Y.; investigation, L.L. and Y.L.; data curation, J.T. and Y.L.; writing—original draft preparation, L.L.; writing—review and editing, J.Y. and L.G.; visualization, L.L. and X.L.; supervision, J.Y. and L.G.; project administration, J.Y.; funding acquisition, J.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China (Project No. 2017YFB1302400).

Acknowledgments

The authors would like to thank the German Society for Photogrammetry, Remote Sensing, and Geoinformation for providing the Vaihingen dataset [43]. The authors would like to thank the editor and four anonymous reviewers for their valuable comments and suggestions to improve the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Chen, D.; Zhang, L.; Li, J.; Liu, R. Urban building roof segmentation from airborne LiDAR point clouds. Int. J. Remote Sens. 2012, 33, 6497–6515. [Google Scholar] [CrossRef]
  2. Awrangjeb, M.; Gilani, S.A.N.; Siddiqui, F.U. An effective data-driven method for 3-D building roof reconstruction and robust change detection. Remote Sens. 2018, 10, 1512. [Google Scholar] [CrossRef] [Green Version]
  3. Liu, X.; Zhang, Y.; Ling, X.; Wan, Y.; Liu, L.; Li, Q. TopoLAP: Topology recovery for building reconstruction by deducing the relationships between linear and planar primitives. Remote Sens. 2019, 11, 1372. [Google Scholar] [CrossRef] [Green Version]
  4. Li, M.; Rottensteiner, F.; Heipke, C. Modelling of buildings from aerial LiDAR point clouds using TINs and label maps. ISPRS J. Photogramm. Remote Sens. 2019, 154, 127–138. [Google Scholar] [CrossRef]
  5. Kahaki, S.M.M.; Nordin, M.J.; Ashtari, A.H. Contour-based corner detection and classification by using mean projection transform. Sensors 2014, 14, 4126–4143. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Im, J.H.; Im, S.H.; Jee, G.I. Vertical corner feature based precise vehicle localization using 3D LiDAR in urban area. Sensors 2016, 16, 1268. [Google Scholar] [CrossRef] [Green Version]
  7. Hackel, T.; Wegner, J.D.; Schindler, K. Contour detection in unstructured 3D point clouds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 1610–1618. [Google Scholar]
  8. Melzer, T. Non-parametric segmentation of ALS point clouds using mean shift. J. Appl. Geod. 2007, 1, 159–170. [Google Scholar] [CrossRef]
  9. Filin, S.; Pfeifer, N. Segmentation of airborne laser scanning data using a slope adaptive neighborhood. ISPRS J. Photogramm. Remote Sens. 2006, 60, 71–80. [Google Scholar] [CrossRef]
  10. Kong, D.; Xu, L.; Li, X.; Li, S. K-plane-based classification of airborne LiDAR data for accurate building roof measurement. IEEE Trans. Instrum. Meas. 2014, 63, 1200–1214. [Google Scholar] [CrossRef]
  11. Biosca, J.; Lerma, J. Unsupervised robust planar segmentation of terrestrial laser scanner point clouds based on fuzzy clustering methods. ISPRS J. Photogramm. Remote Sens. 2008, 63, 84–98. [Google Scholar] [CrossRef]
  12. Sampath, A.; Shan, J. Segmentation and reconstruction of polyhedral building roofs from aerial LiDAR point clouds. IEEE Trans. Geosci. Remote Sens. 2009, 48, 1554–1567. [Google Scholar] [CrossRef]
  13. Vo, A.V.; Truong-Hong, L.; Laefer, D.F.; Bertolotto, M. Octree-based region growing for point cloud segmentation. ISPRS J. Photogramm. Remote Sens. 2015, 104, 88–100. [Google Scholar] [CrossRef]
  14. Zhang, C.; He, Y.; Fraser, C.S. Spectral clustering of straight-line segments for roof plane extraction from airborne LiDAR point clouds. IEEE Geosci. Remote Sens. Lett. 2018, 15, 267–271. [Google Scholar] [CrossRef]
  15. Yan, J.; Shan, J.; Jiang, W. A global optimization approach to roof segmentation from airborne LiDAR point clouds. ISPRS J. Photogramm. Remote Sens. 2014, 94, 183–193. [Google Scholar] [CrossRef]
  16. Duda, R.; Hart, P. Use of the Hough transformation to detect lines and curves in pictures. Commun. ACM 1972, 15, 11–15. [Google Scholar] [CrossRef]
  17. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
  18. Vosselman, G.; Gorte, B.; Sithole, G.; Rabbani, T. Recognising structure in laser scanner point clouds. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2004, 46, 33–38. [Google Scholar]
  19. Borrmann, D.; Elseberg, J.; Lingemann, K.; Nüchter, A. The 3D Hough transform for plane detection in point clouds: A review and a new accumulator design. 3D Res. 2011, 2, 3. [Google Scholar] [CrossRef]
  20. Hulik, R.; Spanel, M.; Smrz, P.; Materna, Z. Continuous plane detection in point-cloud data based on 3D Hough transform. J. Vis. Commun. Image Represent. 2014, 25, 86–97. [Google Scholar] [CrossRef]
  21. Tian, Y.; Song, W.; Chen, L.; Sung, Y.; Kwak, J.; Sun, S. Fast planar detection system using a GPU-based 3D Hough transform for LiDAR point clouds. Appl. Sci. 2020, 10, 1744. [Google Scholar] [CrossRef] [Green Version]
  22. Nguyen, A.; Le, B. 3D point cloud segmentation: A survey. In Proceedings of the IEEE Conference on Robotics, Automation and Mechatronics, Manila, Philippines, 12–15 November 2013. [Google Scholar]
  23. Kahaki, S.M.; Wang, S.L.; Stepanyants, A. Accurate registration of in vivo time-lapse images. In Proceedings of the Medical Imaging 2019: Image Processing, San Diego, CA, USA, 16–21 February 2019. [Google Scholar]
  24. Tarsha-Kurdi, F.; Landes, T.; Grussenmeyer, P. Hough-transform and extended RANSAC algorithms for automatic detection of 3D building roof planes from LiDAR data. Int. Arch. Photogramm. Remote Sens. 2007, 66, 124–132. [Google Scholar]
  25. Xu, B.; Jiang, W.; Shan, J.; Zhang, J.; Li, L. Investigation on the weighted RANSAC approaches for building roof plane segmentation from LiDAR point clouds. Remote Sens. 2016, 8, 5. [Google Scholar] [CrossRef] [Green Version]
  26. Li, L.; Yang, F.; Zhu, H.; Li, D.; Li, Y.; Tang, L. An improved RANSAC for 3D point cloud plane segmentation based on normal distribution transformation cells. Remote Sens. 2017, 9, 433. [Google Scholar] [CrossRef] [Green Version]
  27. Canaz Sevgen, S.; Karsli, F. An improved RANSAC algorithm for extracting roof planes from airborne LiDAR data. Photogramm. Rec. 2019. [Google Scholar] [CrossRef]
  28. Dal Poz, A.P.; Yano Ywata, M.S. Adaptive random sample consensus approach for segmentation of building roof in airborne laser scanning point cloud. Int. J. Remote Sens. 2020, 41, 2047–2061. [Google Scholar] [CrossRef]
  29. Gorte, B. Segmentation of TIN-structured surface models. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2002, 34, 465–469. [Google Scholar]
  30. Zhang, K.; Yan, J.; Chen, S.C. Automatic construction of building footprints from airborne LiDAR data. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2523–2533. [Google Scholar] [CrossRef] [Green Version]
  31. Cao, R.; Zhang, Y.; Liu, X.; Zhao, Z. Roof plane extraction from airborne LiDAR point clouds. Int. J. Remote Sens. 2017, 38, 3684–3703. [Google Scholar] [CrossRef]
  32. Xu, Y.; Yao, W.; Hoegner, L.; Stilla, U. Segmentation of building roofs from airborne LiDAR point clouds using robust voxel-based region growing. Remote Sens. Lett. 2017, 8, 1062–1071. [Google Scholar] [CrossRef]
  33. Deschaud, J.E.; Goulette, F. A fast and accurate plane detection algorithm for large noisy point clouds using filtered normals and voxel growing. In Proceedings of the International Symposium on 3DPVT, Paris, France, 17–20 May 2010. [Google Scholar]
  34. Gilani, S.A.N.; Awrangjeb, M.; Lu, G. Segmentation of airborne point cloud data for automatic building roof extraction. GISci. Remote Sens. 2018, 55, 63–89. [Google Scholar] [CrossRef] [Green Version]
  35. Wu, T.; Hu, X.; Ye, L. Fast and accurate plane segmentation of airborne LiDAR point cloud using cross-line elements. Remote Sens. 2016, 8, 383. [Google Scholar] [CrossRef] [Green Version]
  36. Awrangjeb, M.; Fraser, C. An automatic and threshold-free performance evaluation system for building extraction techniques from airborne LiDAR data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 4184–4198. [Google Scholar] [CrossRef]
  37. Feng, C.; Taguchi, Y.; Kamat, V.R. Fast plane extraction in organized point clouds using agglomerative hierarchical clustering. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014. [Google Scholar]
  38. Van den Bergh, M.; Boix, X.; Roig, G.; de Capitani, B.; Van Gool, L. Seeds: Superpixels extracted via energy-driven sampling. In Proceedings of the European Conference on Computer Vision (ECCV), Florence, Italy, 7–13 October 2012. [Google Scholar]
  39. Woo, K.; Kang, E.; Wang, S.; Lee, K.H. A new segmentation method for point cloud data. Int. J. Mach. Tools Manuf. 2002, 42, 167–178. [Google Scholar] [CrossRef]
  40. Wang, M.; Tseng, Y. Automatic segmentation of LiDAR data into coplanar point clusters using an octree-based split-and-merge algorithm. Photogramm. Eng. Remote Sens. 2010, 76, 407–420. [Google Scholar] [CrossRef]
  41. Yao, J.; Taddei, P.; Ruggeri, M.R.; Sequeira, V. Complex and photo-realistic scene representation based on range planar segmentation and model fusion. Int. J. Robot. Res. 2011, 30, 1263–1283. [Google Scholar] [CrossRef]
  42. Rabbani, T.; Van Den Heuvel, F.; Vosselman, G. Segmentation of point clouds using smoothness constraints. In Proceedings of the ISPRS Commission V Symposium: Image Engineering and Vision Metrology, Dresden, Germany, 25–27 September 2006. [Google Scholar]
  43. Cramer, M. The DGPF-test on digital airborne camera evaluation–overview and test design. Photogramm. Fernerkund. Geoinf. 2010, 2010, 73–82. [Google Scholar] [CrossRef]
  44. Schnabel, R.; Wahl, R.; Klein, R. Efficient RANSAC for point-cloud shape detection. Comput. Graph. Forum 2007, 26, 214–226. [Google Scholar] [CrossRef]
  45. Shan, J.; Lee, S. Quality of building extraction from IKONOS imagery. J. Surv. Eng. 2005, 131, 27–32. [Google Scholar] [CrossRef] [Green Version]
  46. Estrada, F.; Jepson, A. Benchmarking image segmentation algorithms. Int. J. Comput. Vis. 2009, 85, 167–181. [Google Scholar] [CrossRef]
  47. Kahaki, S.M.M.; Nordin, M.J.; Ashtari, A.H.; Zahra, S.J. Invariant feature matching for image registration application based on new dissimilarity of spatial features. PLoS ONE 2016, 11, e0149710. [Google Scholar]
Figure 1. The workflow of our proposed roof plane segmentation approach.
Figure 2. One example of coarse roof plane segmentation. (a) Input light detection and ranging (LiDAR) points (blue points); (b) Octree-based rough planar patch extraction; (c) Rough patch merging; (d) Point-based region growing. The planar patches are denoted by different colors.
Figure 3. Several representative examples of roof plane refinement. (a) Input airborne LiDAR point clouds; (b) Corresponding color images; (c) Coarse roof planes with the problems of oversegmentation and undersegmentation (first row), spurious planes (second row), and irregular boundaries (third row); (d) Refined roof planes. The yellow ellipses highlight the problems present in the coarse planes.
Figure 4. Visual comparison of the roof plane refinement results with and without the boundary term. (a) Corresponding color image; (b) Refined roof planes without the boundary term; the points highlighted by the red rectangles are assigned to incorrect planar patches; (c) Refined roof planes with the boundary term.
Figure 5. A visual example of roof plane refinement via boundary relabeling. (a) Corresponding color image; (b) Coarse roof planes (initial partitioning); (c) Intermediate refined roof planes after 34 points have been moved; (d) Final refined roof planes after 50 points have been moved.
Figure 6. Roof plane segmentation results of Vaihingen dataset. (a,b) are two selected areas. The color images of two areas are presented in the first row. The corresponding ground truth and the segmentation results of the proposed approach are presented in the second and third rows, respectively. The four local buildings highlighted by the red rectangles are selected to conduct the next comparative experiments.
Figure 7. Roof plane segmentation results of Wuhan dataset. (a,b) are two selected areas. The color images of two areas are presented in the first row. The corresponding ground truth and the segmentation results of the proposed approach are presented in the second and third rows, respectively. The four local buildings highlighted by the red rectangles are selected to conduct the next comparative experiments.
Figure 8. The influence of $T_d$. Panels (a,e) are the input LiDAR points and the corresponding color image. Panels (b–d) are the rough planar patch extraction results generated with $T_d$ set to 0.01 m, 0.1 m, and 0.2 m, respectively. Panels (f–h) are the corresponding final roof planes of panels (b–d). The yellow ellipses highlight the errors caused by unsuitable values of $T_d$.
Figure 9. The roof plane segmentation results with different values of $T_m$. (a) Corresponding color image; (b) $T_m = 0.002$; (c) $T_m = 0.01$; (d) $T_m = 0.02$.
Figure 10. Roof plane segmentation results of different approaches on four local buildings (ad) selected from the Vaihingen dataset. From the first row to the last row: reference images, ground truth, RG, RANSAC, GO, BR, and 3D models. The black rectangles denote the over- or undersegmentation errors. The red rectangles denote the irregular boundaries.
Figure 11. Roof plane segmentation results of different approaches on four local buildings (ad) selected from the Wuhan dataset. From the first row to the last row: reference images, ground truth, RG, RANSAC, GO, BR, and 3D models. The black rectangles denote the over- or undersegmentation errors and spurious planes. The red rectangles denote the irregular boundaries. The blue lines denote the real boundaries between adjacent roof planes.
Table 1. The details of the two datasets used in our experiments.

| | Vaihingen Dataset | Wuhan Dataset |
| --- | --- | --- |
| Location | Vaihingen (Germany) | Wuhan (China) |
| Laser scanner | Leica ALS50 | Trimble Harrier 68i |
| Point density | 4 points/m² | 8 points/m² |
| Roof type | Mostly gable roofs with a large slope | Flat roofs and gable roofs |
| Description of the study areas | Small-sized and detached buildings with complex roof structures | Large-sized buildings with complex roof structures |
Table 2. The quality assessment of the roof plane segmentation results generated by the different approaches in Figure 6 and Figure 7.

| Area | Method | GT | TP | FN | FP | $C_m$ (%) | $C_r$ (%) | $Q_l$ (%) | $R_c$ (%) | $D_c$ (%) | $B_p$ (%) | $B_r$ (%) | $F_m$ (%) | Time (s) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Figure 6a | RG | 63 | 58 | 5 | 15 | 92.06 | 79.45 | 74.36 | 22.22 | 5.48 | 76.07 | 84.19 | 79.92 | 0.17 |
| | RANSAC | 63 | 57 | 6 | 23 | 90.48 | 71.25 | 66.28 | 20.63 | 5.00 | 75.57 | 82.76 | 79.00 | 0.91 |
| | GO | 63 | 61 | 2 | 14 | 96.83 | 81.33 | 79.22 | 17.46 | 1.33 | 81.26 | 92.38 | 86.46 | 11.55 |
| | BR | 63 | 60 | 3 | 6 | 95.24 | 90.91 | 86.96 | 6.35 | 3.03 | 86.50 | 92.76 | 89.52 | 0.52 |
| Figure 6b | RG | 40 | 39 | 1 | 11 | 97.50 | 78.00 | 76.47 | 12.50 | 4.00 | 88.76 | 88.29 | 88.52 | 0.16 |
| | RANSAC | 40 | 38 | 2 | 6 | 95.00 | 86.36 | 82.61 | 10.00 | 4.55 | 89.92 | 90.92 | 90.42 | 0.93 |
| | GO | 40 | 39 | 1 | 3 | 97.50 | 92.86 | 90.70 | 5.00 | 0.00 | 93.65 | 95.53 | 94.58 | 9.37 |
| | BR | 40 | 40 | 0 | 1 | 100.00 | 97.56 | 97.56 | 2.50 | 0.00 | 95.52 | 96.24 | 95.88 | 0.38 |
| Figure 7a | RG | 68 | 64 | 4 | 31 | 94.12 | 67.37 | 64.65 | 29.41 | 13.68 | 48.64 | 55.56 | 51.87 | 1.17 |
| | RANSAC | 68 | 58 | 10 | 17 | 85.29 | 77.33 | 68.24 | 32.35 | 13.33 | 47.29 | 53.06 | 50.01 | 1.81 |
| | GO | 68 | 67 | 1 | 4 | 98.53 | 94.37 | 93.06 | 5.88 | 2.82 | 79.09 | 81.66 | 80.35 | 99.86 |
| | BR | 68 | 67 | 1 | 3 | 98.53 | 95.71 | 94.37 | 2.94 | 1.43 | 86.61 | 85.42 | 86.01 | 3.96 |
| Figure 7b | RG | 86 | 74 | 12 | 83 | 86.05 | 47.13 | 43.69 | 25.58 | 3.82 | 52.63 | 54.32 | 53.46 | 1.65 |
| | RANSAC | 86 | 78 | 8 | 35 | 90.70 | 69.03 | 64.46 | 13.95 | 7.96 | 68.71 | 73.94 | 71.23 | 2.23 |
| | GO | 86 | 79 | 7 | 62 | 91.86 | 56.03 | 53.38 | 19.77 | 2.13 | 67.57 | 78.15 | 72.48 | 390.25 |
| | BR | 86 | 80 | 6 | 1 | 93.02 | 98.77 | 91.95 | 1.16 | 4.94 | 88.76 | 85.92 | 87.32 | 5.21 |
