Article

Segmentation and Multi-Scale Convolutional Neural Network-Based Classification of Airborne Laser Scanner Data

1 State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430010, China
2 Shenzhen Power Supply Co., Ltd., No. 2018 Cuizhu Road, Shenzhen 430079, China
* Author to whom correspondence should be addressed.
Sensors 2018, 18(10), 3347; https://doi.org/10.3390/s18103347
Submission received: 18 August 2018 / Revised: 24 September 2018 / Accepted: 1 October 2018 / Published: 7 October 2018
(This article belongs to the Special Issue Deep Learning Remote Sensing Data)

Abstract:
The classification of point clouds is a basic task in airborne laser scanning (ALS) point cloud processing, and it remains challenging when facing complex observed scenes and irregular point distributions. In order to reduce the computational burden of point-based classification methods and improve classification accuracy, we present a segmentation and multi-scale convolutional neural network-based classification method. Firstly, a three-step region-growing segmentation method is proposed to reduce both under-segmentation and over-segmentation. Then, a feature image generation method is used to transform the 3D neighborhood features of a point into a 2D image. Finally, the feature images are treated as the input of a multi-scale convolutional neural network for training and testing. In order to compare our performance with existing approaches, we evaluated our framework on the International Society for Photogrammetry and Remote Sensing Working Groups II/4 (ISPRS WG II/4) 3D labeling benchmark. The experimental result, with an overall accuracy of 84.9% and an average F1 score of 69.2%, compares favorably with all participating approaches analyzed.

1. Introduction

In the production of digital terrain models (DTMs) and 3D city and landscape models, point clouds have become an increasingly popular type of data. In photogrammetry, point clouds are commonly produced by airborne laser scanning (ALS) [1,2] and by dense matching of aerial photographs [3]. No matter which method is chosen, the classification of point clouds cannot be ignored: it is the first step in extracting productive geo-information. In some products, such as DTM generation, points only need to be classified into two classes. In other processes, such as city reconstruction, points must be classified into multiple categories. Some existing classification tasks are implemented with point-based methods, while other works propose segment-based methods [4]. Point-based methods use the information of each point with reference to its neighborhood, such as eigenvalue-based features, point density values, and the direction of the normal vector, or information based on the point itself, such as intensity values and echo-based features, to obtain accurate classification results. In contrast, segment-based methods first divide the point cloud into segments and then assign a class label to each segment, within which all points are assumed to belong to the same category.
Admittedly, segment-based classification methods outperform point-based methods in some respects. Above all, a segment-based method is generally more time-efficient. The number of segments is much smaller than the number of points, and although dividing a point cloud into segments initially takes some time, a segment-based method assigns a single label to each segment, which may contain hundreds of points; the time spent on feature generation and labeling is therefore reduced. Secondly, segments provide additional features that are not available for a single point within its local neighborhood, such as segment size, segment point density, and the average echo number within a segment. The separability of the categories may be improved by these features.
The advantages of segment-based classification cannot be realized without good segmentation. Under- and over-segmentation errors negatively affect the classification accuracy [5]. Under-segmentation causes classification errors, since all the points in a segment are assigned to the same category; over-segmentation adds computational effort and reduces the reliability of the segment-based features.
In this paper, a segment-based method is used to reduce the computational burden of our previous work [6], which is a point-based convolutional neural network labeling method. The scientific contributions of this study are as follows:
- We propose a three-step region-growing segmentation method for segment-based classification. We divide the segmentation into three steps in order to provide a good starting point for the following procedure.
- We also develop our convolutional neural network. A multi-scale convolutional neural network is trained to automatically learn deep features of each point from the generated feature images across multiple scales.
The paper is structured as follows: related work is discussed in Section 2. We present our methodology in Section 3. Section 4 presents the experimental results, in which the results obtained with our method are compared with those of state-of-the-art segment-based and point-based methods. A discussion of our experiments is given in Section 5. We provide concluding remarks and suggestions for future work in Section 6.

2. Related Work

In order to learn the relationship between data and labels, several modern discriminative methods have been proposed. AdaBoost [7], support vector machines (SVMs) [8], random forests (RFs) [9], conditional random fields (CRFs) [10], and deep convolutional neural networks (DCNNs) [11] are popular ones. These methods are also used for ALS data, whose classification methods can be divided into two categories: point-based classification and segment-based classification [5].
For point-based classification, an AdaBoost [12] algorithm, which automatically combines rough hypotheses into a more accurate one, was used to label 3D ALS data into four classes, with five features used in the classification. An SVM classifier was used in Mallet's work [13]; it is a point-based method for LiDAR data. The SVM is a non-parametric method and performs well, especially on data that are not linearly separable, so the potential of LiDAR data is exploited by the SVM method. Chehata [14] used the RF method to classify LiDAR data. Random forests can make full use of multi-echo and full-waveform LiDAR features and provide accurate classification results in an efficient way, even if the datasets are large. For a thorough discussion of supervised classifiers, Weinmann [15] applied 10 different methods within the same procedure to evaluate their performance. All of these methods treat each point independently. In more complex scenes, such as urban areas, this drawback may lead to inhomogeneous results, as mentioned in Niemeyer's work [16]. In urban areas, many different objects appear even in a small scene. Roofs and other challenging objects, like cars, fences, and hedges, may have many details, causing overlapping feature distributions among classes. Errors such as shadows cast by other objects, missing data, and random errors exacerbate the problem.
In order to overcome these problems, contextual information describing the relationships between 3D points within a neighborhood has been introduced into the classification of ALS data. The relationships between object classes can be learned to improve the results; for example, a facade is more likely to appear next to a roof, and a fence is more likely to appear on top of grass. Probabilistic graphical models, such as the conditional random field, are used for that reason. Niemeyer [17] presented a point-based CRF classifier for urban ALS data. He used a graphical model to represent the point cloud, in which the edges link each point to its 2D neighbors. The relationships between object categories and the data are learned in a training step by making use of a complex model. Compared with methods that ignore contextual information, the CRF method achieves a smoother and more accurate result, even for classes represented by few points, such as garages and pavilions.
There are also some problems with the pairwise CRF method. The interactions only occur at a very local level, so isolated clusters of points may be classified into the wrong classes. Many researchers have improved the CRF method to handle these missing long-range interactions. Luo and Sohn [18] presented a multi-range and asymmetric conditional random field (maCRF), in which prior information on scene-layout compatibility is used to handle the long-range dependency problem. The maCRF combines two CRF models: a short-range CRF for the local neighborhood and a long-range CRF for the long-range interactions. The final results are refined by combining the outputs of the two independently applied models. Another solution was proposed by Xiong [19], in which a multi-stage inference procedure is used to handle the difficulties of modeling the contextual relationships between 3D points. A segmentation result is first obtained from a point-based classification, and the contextual information derived from the segments is then used for the final point-based classification. The underlying PN Potts model was proposed by Kohli [20]. The mutually propagating and iterating contextual information improved the classification results; local spatial interactions can be restricted by the PN Potts model, and at a larger scale some potential misclassifications can be corrected.
Convolutional neural networks also take contextual information into consideration for point-based classification tasks. Boulch [21] picked several snapshots of the point cloud and generated an RGB and geometric composite image for each snapshot, thereby transforming the 3D data into 2D images. A fully convolutional network was trained on these images and used for pixel-wise labeling. Caltagirone [22] applied a simple and fast fully convolutional neural network (FCN) to road detection. Top-view images encoding several basic statistics, such as mean elevation and density, were generated. The FCN is specifically designed for pixel-wise semantic segmentation by combining a large receptive field with high-resolution feature maps. Yousefhussien [23] presented a 1D FCN that generates point-wise labeling while implicitly learning contextual features in an end-to-end fashion. Yang [6] presented a point-based feature image generation method for a CNN: for each point in the ALS data, the neighboring points within a window were extracted, and feature images containing the contextual information were generated from their point-based features. The relationships between the points and the feature images were learned by the CNN model.
Another way to improve the labeling results and reduce the time cost is segment-based classification. More stable features can be obtained, since segments carry additional features compared to a single point within its local neighborhood. Furthermore, the number of segments is much smaller than the number of points, so time is saved even though a segmentation process is added. Golovinskiy [24] presented a system for detecting objects such as traffic lights or cars using combined terrestrial and ALS data. First, potential object locations were determined based on a hierarchical clustering method. Then, a graph-cut-based segmentation was applied to classify the points close to these locations into foreground and background. The segmentation method required parameters of the segments, such as the maximum radius, to be set in advance. The points in the foreground segments were treated as objects, while the points in the background segments were discarded. Feature vectors were calculated based on context and shape information and fed to a classifier. Shapovalov [25] used the k-means method for segmentation. Each point was treated as a leaf of a tree, and a heuristic method was used to reduce the computational burden. Since the k-means method only fixes the total number of segments, it leads to strong over-segmentation. A graph over the medoids of the segments was built, and the edge values were determined by analyzing the k-nearest neighbors of the medoids. A naïve Bayes classifier was used to define the pairwise potentials, considering features such as deviations of the segment surface normals and the geometrical arrangement of the medoids. The experimental results showed that the segment-based method can remove noise, increase efficiency, and make use of natural edge features. In Xu's work [4], single points and two types of segments acquired by different methods were treated as entities for the classification. Features such as z variance, distance ratio, segment size, and normal direction were calculated for these three entities. The classification was based on heuristic rules, and the contextual information was considered using the segment-based methods. Niemeyer [26] merged spatial and semantic context in a two-layer CRF. The output of the first CRF was used to generate segments; the segments contained larger-scale context and were introduced as an energy term for the next iteration of the next CRF layer. Guinard and Landrieu [27] proposed a non-parametric segmentation model for the classification of 3D LiDAR point clouds in urban areas. The high-level structure of the area was captured by integrating the segmentation into the CRF. The segment-based method aggregated the noisy predictions of a weakly-supervised classifier and produced a higher-accuracy result. Vosselman [5] argued that different segmentation methods may be suited to different object classes. Thus, a hierarchical structure containing two different segmentation methods was proposed to obtain a generic technique for ALS data. The structure is capable of handling complex urban areas with a large number of categories, and the combination of small and large segments produced by the hierarchical structure makes interactions between nearby and distant points possible. The contextual information was learned by a CRF, with the edge values of the graph defined by the boundaries of the segments rather than by the medoids [25]. Features extracted by analyzing the segment boundaries were added to improve the classification accuracy.
This paper is based on our previous work [6]. We changed the point-based method to a segment-based method. A three-step region-growing method was proposed for the segmentation. Feature images in different scales were generated and these feature images were treated as the input of a multi-scale convolutional network, and the CNN model was trained for the final semantic labeling task.

3. Methodology

3.1. Three-Step Region-Growing Segmentation

A three-step region-growing method was used for the segmentation, as shown in Figure 1. The normal direction, echo intensity values, and planarity were key parameters in these steps. The basic region-growing method that we used was developed by Rabbani [28]. In order to obtain a proper segmentation result, we chose a three-step region-growing procedure in which different point attributes were used in each step.
In the first step, we used the region-growing method to find planar objects. We first sorted all the points by their curvature. The region begins its growth from the point with the minimum curvature, since growing from the flattest areas reduces the total number of segments [28]. Then, local surface normal vectors and echo intensity values were used to cluster the points. The local plane $\Pi_P$ was calculated using an M-estimator [29], and the normal direction of each point was obtained by fitting this plane to some neighboring points. Some objects, such as low vegetation and impervious surfaces, are hard to separate using normal vectors alone. Echo intensity values are high on building roofs, gravel roads, and cars, while they are low on asphalt roads and tar streets [14]; this helps to solve the problem and reinforces the segmentation. If the differences in normal angle and intensity value between a point and its neighbor were beneath the thresholds, the point was added to the segment. The thresholds were set small so that planar objects such as roofs and walls formed large segments, while other objects, such as cars and trees, were split into small pieces. Then, segments beneath a certain size were discarded and the removed points were re-segmented in the second region-growing step.
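For concreteness, this first region-growing step can be sketched in Python as follows. The normal-angle and intensity thresholds correspond to the values reported in Section 4.2, while the neighborhood size k, the minimum segment size, and the use of SciPy's cKDTree for neighbor queries are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def grow_planar_segments(points, normals, intensity, curvature,
                         angle_thr_deg=5.0, int_thr=10.0, k=16, min_size=100):
    """Sketch of the first region-growing step: seeds are processed in order of
    increasing curvature, and a neighbor joins the current segment when both its
    normal-angle difference and its intensity difference stay below the thresholds.
    k and min_size are illustrative choices, not values from the paper."""
    tree = cKDTree(points)
    labels = -np.ones(len(points), dtype=int)      # -1 = not yet segmented
    cos_thr = np.cos(np.deg2rad(angle_thr_deg))
    next_label = 0
    for seed in np.argsort(curvature):             # start from the flattest points
        if labels[seed] != -1:
            continue
        labels[seed] = next_label
        queue = [seed]
        while queue:
            p = queue.pop()
            _, nbrs = tree.query(points[p], k=k)
            for q in nbrs:
                if labels[q] != -1:
                    continue
                same_plane = abs(np.dot(normals[p], normals[q])) > cos_thr
                same_intensity = abs(intensity[p] - intensity[q]) < int_thr
                if same_plane and same_intensity:
                    labels[q] = next_label
                    queue.append(q)
        next_label += 1
    # segments below the minimum size are returned to the unsegmented pool
    for lbl, count in zip(*np.unique(labels, return_counts=True)):
        if lbl != -1 and count < min_size:
            labels[labels == lbl] = -1
    return labels
```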
In the second step, we used the region-growing method to extract smaller objects, such as cars, shrubs, and trees. We again set the growth to begin from the point with the minimum curvature to reduce the total number of segments. Then, the point planarity value was used to cluster the points. For the points in a local neighborhood, the center of gravity is written as $\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i$, and the matrix $M = (X_1 - \bar{X}, \ldots, X_n - \bar{X})$ is defined. We then calculated the variance-covariance matrix as:

$$W_C = \frac{1}{n} M^{T} M$$
From this matrix, the eigenvalues $\lambda_1 > \lambda_2 > \lambda_3$ were calculated. Additional features [30] were described as follows:

$$\text{Planarity: } P_{\lambda} = \frac{\lambda_2 - \lambda_3}{\lambda_1}$$
If the difference in planarity value between a point and its neighbor was beneath the threshold, the point was added to the segment. Segments beneath a certain size were again discarded. Since most points on trees or cars were grouped into single segments, only a few points remained unsegmented.
Finally, the unsegmented points were merged into the most frequent segment within their neighborhood. We used a Kd-tree [31] structure to find the nearest neighboring points. All points then had a segment label.
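A minimal sketch of this merging step, again assuming a SciPy cKDTree built over the already-segmented points and an illustrative neighborhood size:

```python
import numpy as np
from scipy.spatial import cKDTree

def merge_unsegmented(points, labels, k=10):
    """Assign each remaining unsegmented point (label -1) the most frequent
    segment label among its k nearest segmented neighbors; k is illustrative."""
    segmented = labels != -1
    tree = cKDTree(points[segmented])
    seg_labels = labels[segmented]
    for idx in np.where(~segmented)[0]:
        _, nbrs = tree.query(points[idx], k=k)
        votes, counts = np.unique(seg_labels[nbrs], return_counts=True)
        labels[idx] = votes[np.argmax(counts)]
    return labels
```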

3.2. Feature Image Generation

For feature selection, we chose the features described in [6] that may be useful for the classification. The features used are listed in Table 1.
To compute the height above the DTM, the DTM was generated using robust filtering [32], which is implemented in the commercial software package SCOP++. This feature helps to distinguish categories since it reflects the global height distribution of a point; based on the analysis by Niemeyer [17], it is by far the most important feature, being the strongest and most discernible one for all categories and relationships. The echo intensity values and planarity were described in Section 3.1, and the sphericity can be calculated as follows:

$$\text{Sphericity: } S_{\lambda} = \frac{\lambda_3}{\lambda_1}$$
The variance of deviation angles can be calculated using the angle between the point normal vector and the vertical direction. This feature can help us separate planar surfaces such as roads from vegetation [6]. An eigenentropy-based scale selection method [15] was used to determine the neighborhood scale for computing these features. In total, all five features were used to generate the feature image. The echo intensity values were scaled to the range of 0–255, and the other four features were scaled to the range of 0–1.
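As a worked example of the eigenvalue-based features, the following snippet computes the covariance matrix, its eigenvalues, and the planarity and sphericity values for a single neighborhood; the NumPy-based implementation is an illustrative sketch of the formulas above.

```python
import numpy as np

def eigenvalue_features(neighborhood):
    """Planarity and sphericity of one point's neighborhood (an (n, 3) array)."""
    X = np.asarray(neighborhood, dtype=float)
    M = X - X.mean(axis=0)                         # subtract the center of gravity
    cov = M.T @ M / len(X)                         # variance-covariance matrix (1/n) M^T M
    lam = np.sort(np.linalg.eigvalsh(cov))[::-1]   # lambda_1 >= lambda_2 >= lambda_3
    planarity = (lam[1] - lam[2]) / lam[0]
    sphericity = lam[2] / lam[0]
    return planarity, sphericity
```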
Feature images were generated based on these features. For each point in the ALS data, a square window was set up. The point was located at the center of the window, and the window was parallel to the x-y plane. The window was divided into 128 × 128 cells. The center of each cell was calculated as:

$$\begin{cases} X_{i,j} = X_p - (63.5 - j) \times w \\ Y_{i,j} = Y_p - (63.5 - i) \times w \\ Z_{i,j} = Z_p \end{cases}$$

where $i$ and $j$ denote the row and column number; $X_p$, $Y_p$, and $Z_p$ denote the coordinates of the point; and $w$ is the width of the cell.
If the width of the cell is small, the feature image may contain many empty pixels (pixels that do not contain any point). This may influence the classification result in the next step. Thus, we chose to find the nearest point to each cell center and assign its features to the cell, even if the point was not inside the cell. The five features were combined into three integers as in [6]:

$$\begin{aligned} RED &= [\, 255 \times S_{\lambda} \times \sigma_z^2 \,] \\ GREEN &= [\, Intensity \times P_{\lambda} \,] \\ BLUE &= [\, 255 \times H_{above} \,] \end{aligned}$$
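The feature image generation can be sketched as below. The KD-tree over the planimetric coordinates, the layout of the per-point feature array, and the default cell width are assumptions made for illustration; the five features are assumed to be pre-scaled as described above.

```python
import numpy as np
from scipy.spatial import cKDTree

def feature_image(point, tree, features, w=0.1, size=128):
    """Generate one 128 x 128 x 3 feature image for a point.
    tree     : cKDTree built over the x-y coordinates of all points
    features : (n, 5) array holding S (sphericity), sigma_z^2 (variance of
               deviation angles), I (intensity, 0-255), P (planarity), and
               H (height above DTM, 0-1) for every point."""
    img = np.zeros((size, size, 3), dtype=np.uint8)
    half = (size - 1) / 2.0                        # 63.5 for a 128-cell window
    for i in range(size):
        for j in range(size):
            cx = point[0] - (half - j) * w         # cell center (x)
            cy = point[1] - (half - i) * w         # cell center (y)
            _, idx = tree.query([cx, cy])          # nearest point to the cell center
            S, sigma2, I, P, H = features[idx]
            img[i, j, 0] = int(255 * S * sigma2)   # RED channel
            img[i, j, 1] = int(I * P)              # GREEN channel
            img[i, j, 2] = int(255 * H)            # BLUE channel
    return img
```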
The steps of feature image generation are shown in Figure 2.

3.3. The Multi-Scale Convolutional Neural Network (MCNN)

The MCNN was implemented with Caffe [33]. The MCNN consists of three single-scale convolutional neural networks (SCNNs). The architecture of the SCNN is shown in Figure 3. The SCNN comprises four kinds of layers, which are explained in detail below.
In Figure 3, “Conv” denotes the convolutional layer, which is the most frequently used layer; it performs the convolution operation with the weights of the network. In our model, the size of the convolutional kernels is 3 × 3. “Pool” denotes the pooling layer, which reduces the number of parameters to be learned and improves robustness to translation; in our model, the max-pooling strategy is used. BN denotes the batch normalization layer, and ReLU denotes the rectified linear unit layer. BN allows higher learning rates and reduces overfitting [34], while ReLU speeds up training and eliminates the need for unsupervised pretraining of a deep supervised network [35].
Using a single-scale image patch to represent a point is problematic when the point has a relatively complex surrounding environment. In order to obtain enough semantic information and make a more precise prediction, a multiscale CNN model is proposed, which uses multiscale feature images to obtain multiscale CNN features for the classification of LiDAR point clouds. It is known that human vision is a multiscale process [36]. By changing the cell width w, different scales of feature images can be generated for the same point. These different scales help us obtain both robust semantic information and precise location information, thus enriching the features for classifying point clouds.
The architecture of our multiscale CNN is shown in Figure 4. Features at different scales were extracted by independent CNNs and fused into multiscale features to make the prediction. Since the different scales are obtained by changing the width of the cell, all the feature images have the same size (128 × 128 × 3). The CNN models for the different input scales were named SCNN1, SCNN2, and SCNN3; all three share the architecture shown in Figure 3. In our work, feature images were classified into nine categories: power-line, low vegetation, impervious surfaces, car, fence/hedge, roof, facade, shrub, and tree. Thus, our multiscale CNN has nine outputs.
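The network itself was implemented in Caffe; purely as an illustration of the structure described above, the following PyTorch sketch builds three identical Conv-BN-ReLU-Pool branches, concatenates their flattened outputs, and maps them to the nine class scores. The number of blocks and of filters per layer are assumptions, not values read from Figure 3.

```python
import torch
import torch.nn as nn

class SCNNBranch(nn.Module):
    """One single-scale branch: stacked Conv-BN-ReLU-MaxPool blocks (illustrative)."""
    def __init__(self, channels=(32, 64, 128)):
        super().__init__()
        layers, in_ch = [], 3
        for out_ch in channels:
            layers += [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                       nn.BatchNorm2d(out_ch),
                       nn.ReLU(inplace=True),
                       nn.MaxPool2d(2)]
            in_ch = out_ch
        self.features = nn.Sequential(*layers)

    def forward(self, x):                          # x: (B, 3, 128, 128)
        return torch.flatten(self.features(x), 1)

class MCNN(nn.Module):
    """Three branches (one per cell width) fused by a fully connected layer."""
    def __init__(self, n_classes=9):
        super().__init__()
        self.branches = nn.ModuleList(SCNNBranch() for _ in range(3))
        feat_dim = 3 * 128 * 16 * 16               # 3 branches, 128 channels, 128/2^3 = 16 px
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, x1, x2, x3):
        fused = torch.cat([b(x) for b, x in zip(self.branches, (x1, x2, x3))], dim=1)
        return self.classifier(fused)              # softmax is applied in the loss
```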
FC denotes the fully connected layer, in which every neuron of the previous layer is connected to every neuron of the next layer. In the last fully connected layer, a standard feed-forward operation was performed to obtain the label prediction [37]. Let N be the number of classes to be identified. A vector $\rho$ is computed whose elements $\rho_k$ encode the probability mass function over the N classes as:

$$\rho_k = p(y = k \mid x) = \frac{\exp(a_k(x))}{\sum_{j=1}^{N} \exp(a_j(x))}$$

where $x$ denotes the input image, $a_j(x)$ is the $j$th unit of the output layer, and $k$ denotes the class index. From these probabilities, the most probable class was estimated as:

$$\hat{y} = \underset{k}{\operatorname{argmax}}\; p(y = k \mid x)$$
All parameters in the MCNN were determined automatically from the training images. If the training dataset has $N_s$ sample images with corresponding ground-truth labels, the error function can be defined as:

$$\frac{1}{N_s} \sum_{i=1}^{N_s} f(\rho^i, y_{true}^i) + \lambda \, \| w \|_2$$

In this paper, $f(\cdot,\cdot)$ was defined as the cross-entropy error function [38] that evaluates the agreement between the network's output $\rho^i$ and the ground-truth label $y_{true}^i$; $w$ is the vector of all network weights, $\|\cdot\|_2$ denotes the L2 norm, and $\lambda$ regulates the influence of the magnitude of the weight vector on the error function. An approximate solution was obtained with the stochastic gradient descent (SGD) [11] method during backpropagation.
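A small NumPy sketch of the softmax probabilities and the regularized cross-entropy error defined above; the regularization weight lam is an illustrative value, and in the actual training the minimization is carried out by the SGD solver.

```python
import numpy as np

def softmax(a):
    """Class probabilities from the output-layer activations a (length N)."""
    e = np.exp(a - a.max())
    return e / e.sum()

def regularized_cross_entropy(outputs, labels, weights, lam=5e-4):
    """Mean cross-entropy over Ns samples plus an L2 penalty on the weights.
    outputs: (Ns, N) activations, labels: (Ns,) ground-truth class indices."""
    rho = np.array([softmax(a) for a in outputs])
    ce = -np.mean(np.log(rho[np.arange(len(labels)), labels]))
    return ce + lam * np.linalg.norm(weights, 2)
```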
In order to fully train the MCNN, the parameters of each branch were initialized with the parameters of the corresponding separately trained SCNN, and fine-tuning of the whole MCNN completed the training.

3.4. Workflow

The workflow of the proposed method is shown in Figure 5. In order to reduce data redundancy and balance the number of points in each category, a class rebalancing strategy was applied to the training data: if the number of points in a category was larger than a certain size, it was reduced; otherwise, it was kept unmodified. During the training period, the MCNN model was trained using the multi-scale feature images generated from the rebalanced training data. For the testing data, we first applied the three-step region-growing method to obtain the segmentation results, and then used a voting strategy: several points were randomly chosen in each segment to generate multi-scale feature images, and the most frequent label among the MCNN predictions for these points was assigned to the segment, as sketched below.
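The testing-time voting can be sketched as follows, assuming a per-point prediction function (here predict_point, a hypothetical callable wrapping the trained MCNN) and the 10 votes per segment chosen in Section 4.2:

```python
import numpy as np

def label_segments(seg_labels, predict_point, n_votes=10, rng=None):
    """Assign each segment the most frequent class predicted for a random
    sample of its points. predict_point(i) -> class index is assumed."""
    rng = rng if rng is not None else np.random.default_rng()
    out = np.empty_like(seg_labels)
    for seg in np.unique(seg_labels):
        members = np.where(seg_labels == seg)[0]
        sample = rng.choice(members, size=min(n_votes, len(members)), replace=False)
        preds = [predict_point(i) for i in sample]
        classes, counts = np.unique(preds, return_counts=True)
        out[members] = classes[np.argmax(counts)]
    return out
```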

4. Experimental Results

4.1. Test Data

To evaluate our approach, we conducted experiments on the ISPRS 3D labeling benchmark. This dataset was presented in the scope of the ISPRS Test Project on Urban Classification and 3D Building Reconstruction and simultaneously serves as the benchmark dataset for the ISPRS 2D and 3D semantic labeling contests. The ALS data were acquired using a Leica ALS50 system with a 45° field of view and a mean flying height of 500 m above ground over Vaihingen, a small village in Germany [39]. The point density of the data is between 10 and 20 points/m². For the semantic labeling task, nine classes (i.e., power-line, low vegetation, impervious surfaces, car, fence/hedge, roof, facade, shrub, and tree) were labeled by the authors of [17]. Each point in the ALS data contains the spatial XYZ coordinates, an intensity value, and the number of returns. The given data were subdivided into two areas: a training area containing 753,876 labeled points and a testing area containing 411,722 unlabeled points. More detailed information is shown in Table 2. For the training dataset, we applied the same class rebalancing strategy as in [6]: if a category contained more than 15,000 points, it was rebalanced; otherwise, it remained unmodified.
The quality of the classification result was evaluated based on the ISPRS contest criteria: precision/correctness, recall/completeness, F1 score, and overall accuracy. The tp (true positive), fp (false positive), and fn (false negative) counts for each category were calculated. The precision, recall, and F1 score were then computed as follows:

$$\text{precision} = \frac{tp}{tp + fp}, \qquad \text{recall} = \frac{tp}{tp + fn}$$

$$F_1 = \frac{2 \times \text{precision} \times \text{recall}}{\text{precision} + \text{recall}}$$
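These measures can be computed per class from NumPy arrays of true and predicted labels; a compact sketch:

```python
import numpy as np

def per_class_metrics(y_true, y_pred, n_classes=9):
    """Per-class precision, recall, and F1, plus overall accuracy."""
    precision, recall, f1 = [], [], []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        precision.append(p)
        recall.append(r)
        f1.append(2 * p * r / (p + r) if p + r else 0.0)
    overall_accuracy = float(np.mean(y_true == y_pred))
    return precision, recall, f1, overall_accuracy
```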

4.2. Experiment Results

The experiments focused on the influence of the segmentation and the CNN architecture on classification accuracy and testing efficiency. Five strategies were compared:
  • Point_S indicates the method used in [6]. It is a point-based method and uses the SCNN for semantic labeling.
  • Point_M replaces the SCNN in Point_S with the MCNN.
  • SegS_M adds the simple normal vector-based region-growing segmentation strategy into Point_M.
  • SegT_M adds our three-step region growing segmentation strategy into Point_M.
  • SegT_S replaces the MCNN in SegT_M with the SCNN.
For the testing procedure, a voting strategy was used: we randomly chose several points in each segment to generate feature images, and for each point, three multi-scale feature images were generated. Based on the discussion in [6], we set the cell widths of the feature images to 0.05, 0.1, and 0.2 m. For the region-growing steps, we set the normal vector threshold to 5°, the intensity threshold to 10, and the planarity threshold to 0.15. The training of the CNN model was performed on a Dell T630 workstation with an Intel Xeon E5-2603v3 CPU, 64 GB RAM, and an NVIDIA Tesla K20c. The professional graphics card helped save time in computing the MCNN parameters. The influence of the number of voting points on SegT_M is shown in Figure 6. Taking both the computational efficiency and the overall accuracy into consideration, we chose 10 points in each segment for generating the feature images. The per-class accuracy and the overall accuracy for each method are shown in Table 3. The running time for each step, the number of feature images, the overall accuracy (OA), and the average F1 score of each method are shown in Table 4.

4.3. ISPRS Benchmark Testing Results

To evaluate the performance of our method against others, the classification results were submitted to the ISPRS organizers for evaluation. The per-class accuracy and the overall accuracy for each submission are shown in Table 5, and the per-class F1 score and the average F1 score for each submission are shown in Table 6. In ISS_7 [40], supervoxel-based segmentation and color-based region-growing segmentation were used to segment the ALS data, and a machine learning algorithm was used to label the segments. In UM [41], a one-vs-one (OvO) machine learning strategy was applied to obtain the 3D semantic labeling result, using features extracted from LiDAR point attributes, textural analysis, and geometric attributes. In HM_1 [42], a conditional random field, with a random forest classifier generating the unary potentials and a variety of contrast-sensitive Potts models generating the pairwise potentials, was used for the point-based semantic labeling task. In WhuY3 [6], a point-based semantic labeling method using a convolutional neural network was proposed; its point-based feature image generation method transforms the 3D neighborhood features of a point into a 2D image. In LUH [26], a two-layer hierarchical framework was used for the contextual classification of LiDAR data; this supervised approach classifies points and segments, respectively, with two independent conditional random fields. In RIT_1 [23], the 3D coordinates and three corresponding spectral features of each point were fed to a 1D fully convolutional neural network to generate point-wise labeling while learning contextual features in an end-to-end fashion.

5. Discussion

The original purpose of the proposed method was to reduce the computational burden of the point-based method in our previous work [6]. As shown in Table 3 and Table 4, the three-step region-growing segmentation strategy performs well, as does the MCNN: although the MCNN needs more feature images, it improved the overall accuracy and the average F1 score of the classification result (e.g., comparing Point_S with Point_M and SegT_S with SegT_M). The time efficiency of the framework was closely related to the number of test feature images, and the segmentation-based strategies reduced this number considerably. The total number of test feature images for SegT_M was only about one tenth that of Point_M and one fifth that of SegS_M. Although the segmentation itself takes some time, the testing efficiency of the framework was indeed improved, and the overall accuracy and the average F1 score were also satisfactory.
Compared with Point_S [6], our method shows better classification performance. For large planar objects, such as low vegetation, impervious surfaces, and roofs, the confusion between categories was reduced. As shown in Figure 7, since the planarity of low vegetation and impervious surfaces differs little, these classes may be mixed together by a point-based classification method. By using the intensity values and the normal angle differences in the segmentation, the points of these objects were clustered into coherent segments. Compared with Point_S, our method (SegT_M) improved the results for low vegetation (+3.3%) and impervious surfaces (+0.5%). For small non-planar objects, such as fences, hedges, and cars, our method also achieves better classification results than the point-based method. As shown in Figure 8 and Figure 9, the point-based method may misclassify fences, hedges, and cars as shrubs and other classes. In the second and third steps of our segmentation, planarity values were used to cluster these objects, and the whole segment served as a single entity in the following classification procedure. Compared with Point_S, our method (SegT_M) improved the results for the fence/hedge (+27.8%) and car (+9.6%) categories. In some areas, as shown in Figure 10, trees and roofs are hard to distinguish because of the unusual distribution of their points and the similarity of their planarity. The segmentation-based strategy can solve this problem to some degree: compared with Point_S, our method (SegT_M) improved the classification of the tree category (+5.2%).
As shown in Table 5 and Table 6, our method performed well compared with all participants in the ISPRS WG II/4 Vaihingen 3D Semantic Labeling contest: its overall accuracy and average F1 score ranked first among all participants. Thanks to the three-step segmentation strategy, planar objects, such as low vegetation (ranked 1st in F1 score), impervious surfaces (2nd), and roofs (1st), as well as smaller objects, such as cars (1st), fences/hedges (1st), and shrubs (1st), performed well. The multi-scale convolutional neural network exploited the potential of the selected features, as we expected. There were also some misclassifications in our final result. As shown in Figure 11, shrubs and low vegetation were difficult to distinguish, and some shrubs were mixed up with trees and low vegetation; this is because, in the segmentation procedure, these points were clustered into the same segments. Our method also requires adjusting several parameters in the segmentation step, and it was hard to make a single segmentation result suit all categories; a more automatic and universal segmentation method should be developed. Furthermore, only LiDAR data were used in our experiments. To further improve the classification performance, the corresponding orthoimages could be used in future work.

6. Conclusions

In this paper, we propose a three-step region-growing segmentation method for segment-based point cloud classification. The three-step strategy reduced both under-segmentation and over-segmentation, provided a good starting point for the following procedure, and reduced the computational burden. A multi-scale convolutional neural network was then trained on the feature images and used to classify them; based on the voting strategy, each segment was classified into one of nine classes by the multi-scale model. The classification result showed satisfactory performance on the ISPRS dataset compared with state-of-the-art methods: as shown in Table 5 and Table 6, the overall accuracy and the average F1 score rank first among the other considered approaches.
Our method still has potential for improved performance. One area for improvement is that the segmentation results strongly influence the classification output, and the complex parameter setting makes it hard to obtain the best result; in future work, we will develop a more automatic and universal segmentation method to solve this problem. Another is that only LiDAR data are used in the current method; to further improve the classification performance and apply our method to more complex 3D classification tasks, corresponding image data will be used in our future work.

Author Contributions

Z.Y. and W.J. contributed to the study design and manuscript writing. Z.Y. and B.T. conceived and designed the experiments. Z.Y. performed the experiments. H.P. contributed to the initial data.

Funding

This research was funded by the Technology Project of Shenzhen Power Supply Bureau: 090000KK52160017.

Acknowledgments

The labeled data was provided by Joachim Niemeyer at the Institute of Photogrammetry and GeoInformation, Leibniz Universität Hannover, Nienburger Str. 1, D-30167 Hannover, Germany. The partial evaluations of results were provided by Markus Gerke–Utwente/ITC/EOS.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dorninger, P.; Pfeifer, N. A comprehensive automated 3d approach for building extraction, reconstruction, and regularization from airborne laser scanning point clouds. Sensors 2008, 8, 7323–7343. [Google Scholar] [CrossRef] [PubMed]
  2. Sithole, G.; Vosselman, G. Automatic structure detection in a point-cloud of an urban landscape. In Proceedings of the 2003 2nd GRSS/ISPRS Joint Workshop on Remote Sensing and Data Fusion over Urban Areas, Berlin, Germany, 22–23 May 2003; pp. 67–71. [Google Scholar]
  3. Pepe, M.; Prezioso, G. Two approaches for dense dsm generation from aerial digital oblique camera system. In Proceedings of the 2nd International Conference on Geographical Information Systems Theory, Applications and Management, Rome, Italy, 26–27 April 2016; pp. 63–70. [Google Scholar]
  4. Xu, S.; Oude Elberink, S.; Vosselman, G. Entities and features for classification of airborne laser scanning data in urban area. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, I-4, 257–262. [Google Scholar] [CrossRef]
  5. Vosselman, G.; Coenen, M.; Rottensteiner, F. Contextual segment-based classification of airborne laser scanner data. ISPRS J. Photogramm. Remote Sens. 2017, 128, 354–371. [Google Scholar] [CrossRef]
  6. Yang, Z.; Jiang, W.; Xu, B.; Zhu, Q.; Jiang, S.; Huang, W. A convolutional neural network-based 3d semantic labeling method for als point clouds. Remote Sens. 2017, 9, 936. [Google Scholar] [CrossRef]
  7. Rätsch, G.; Onoda, T.; Müller, K.-R. Soft margins for adaboost. Mach. Learn. 2001, 42, 287–320. [Google Scholar]
  8. Joachims, T. Making Large-Scale Svm Learning Practical; Technical Report, SFB 475: Komplexitätsreduktion in Multivariaten Datenstrukturen; Universität Dortmund: Dortmund, Germany, 1998. [Google Scholar]
  9. Liaw, A.; Wiener, M. Classification and regression by randomforest. R News 2002, 2, 18–22. [Google Scholar]
  10. Lafferty, J.; McCallum, A.; Pereira, F.C. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, San Francisco, CA, USA, 28 June–1 July 2001; pp. 282–289. [Google Scholar]
  11. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105. [Google Scholar]
  12. Lodha, S.K.; Fitzpatrick, D.M.; Helmbold, D.P. Aerial lidar data classification using adaboost, 3-D Digital Imaging and Modeling. In Proceedings of the Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007), Montreal, QC, Canada, 21–23 August 2007; pp. 435–442. [Google Scholar]
  13. Mallet, C.; Bretar, F.; Roux, M.; Soergel, U.; Heipke, C. Relevance assessment of full-waveform lidar data for urban area classification. ISPRS J. Photogramm. Remote Sens. 2011, 66, S71–S84. [Google Scholar] [CrossRef]
  14. Chehata, N.; Guo, L.; Mallet, C. Airborne lidar feature selection for urban classification using random forests. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2009, 38, W8. [Google Scholar]
  15. Weinmann, M.; Jutzi, B.; Hinz, S.; Mallet, C. Semantic point cloud interpretation based on optimal neighborhoods, relevant features and efficient classifiers. ISPRS J. Photogramm. Remote Sens. 2015, 105, 286–304. [Google Scholar] [CrossRef]
  16. Niemeyer, J.; Wegner, J.D.; Mallet, C.; Rottensteiner, F.; Soergel, U. Conditional random fields for urban scene classification with full waveform lidar data. In Photogrammetric Image Analysis; Springer: Berlin, Germany, 2011; pp. 233–244. [Google Scholar]
  17. Niemeyer, J.; Rottensteiner, F.; Soergel, U. Contextual classification of lidar data and building object detection in urban areas. ISPRS J. Photogramm. Remote Sens. 2014, 87, 152–165. [Google Scholar] [CrossRef]
  18. Luo, C.; Sohn, G. Scene-layout compatible conditional random field for classifying terrestrial laser point clouds. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 2, 79. [Google Scholar] [CrossRef]
  19. Xiong, X.; Munoz, D.; Bagnell, J.A.; Hebert, M. 3-d scene analysis via sequenced predictions over points and regions. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 2609–2616. [Google Scholar]
  20. Kohli, P.; Torr, P.H. Robust higher order potentials for enforcing label consistency. Int. J. Comput. Vision 2009, 82, 302–324. [Google Scholar] [CrossRef]
  21. Boulch, A.; Le Saux, B.; Audebert, N. Unstructured point cloud semantic labeling using deep segmentation networks. In Proceedings of the the Eurographics Workshop on 3D Object Retrieval, Lyon, France, 23–24 April 2017. [Google Scholar]
  22. Caltagirone, L.; Scheidegger, S.; Svensson, L.; Wahde, M. Fast lidar-based road detection using fully convolutional neural networks. arXiv, 2017; arXiv:1703.03613. [Google Scholar]
  23. Yousefhussien, M.; Kelbe, D.J.; Ientilucci, E.J.; Salvaggio, C. A fully convolutional network for semantic labeling of 3d point clouds. arXiv, 2017; arXiv:1710.01408. [Google Scholar]
  24. Golovinskiy, A.; Kim, V.G.; Funkhouser, T. Shape-based recognition of 3d point clouds in urban environments. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 2154–2161. [Google Scholar]
  25. Shapovalov, R.; Velizhev, E.; Barinova, O. Nonassociative markov networks for 3d point cloud classification. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XXXVIII, Part 3A, Saint Mandé, France, 1–3 September 2010. [Google Scholar]
  26. Niemeyer, J.; Rottensteiner, F.; Sörgel, U.; Heipke, C. Hierarchical higher order crf for the classification of airborne lidar point clouds in urban areas. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 655–662. [Google Scholar] [CrossRef]
  27. Guinard, S.; Landrieu, L. Weakly supervised segmentation-aided classification of urban scenes from 3d lidar point clouds. In Proceedings of the International Society for Photogrammetry and Remote Sensing, Hannover, Germany, 6–9 June 2017. [Google Scholar]
  28. Rabbani, T.; Heuvel, F.A.V.D.; Vosselman, G. Segmentation of point clouds using smoothness constraint. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2006, 36, 248–253. [Google Scholar]
  29. Xu, G.; Zhang, Z. Epipolar Geometry in Stereo, Motion and Object Recognition: A Unified Approach; Springer Science & Business Media Press: Berlin, Germany, 2013; Volume 6. [Google Scholar]
  30. Demantké, J.; Mallet, C.; David, N.; Vallet, B. Dimensionality based scale selection in 3d lidar point clouds. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2011, 38, W12. [Google Scholar]
  31. Bentley, J.L. Multidimensional binary search trees used for associative searching. Commun. ACM 1975, 18, 509–517. [Google Scholar] [CrossRef]
  32. Kraus, K.; Pfeifer, N. Determination of terrain models in wooded areas with airborne laser scanner data. ISPRS J. Photogramm. Remote Sens. 1998, 53, 193–203. [Google Scholar] [CrossRef]
  33. Jia, Y.; Shelhamer, E.; Donahue, J.; Karayev, S.; Long, J.; Girshick, R.; Guadarrama, S.; Darrell, T. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM international conference on Multimedia, Orlando, FL, USA, 3–7 November 2014; pp. 675–678. [Google Scholar]
  34. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv, 2015; arXiv:1502.03167. [Google Scholar]
  35. Nair, V.; Hinton, G.E. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th international conference on machine learning (ICML-10), Haifa, Israel, 21–24 June 2010; pp. 807–814. [Google Scholar]
  36. Buyssens, P.; Elmoataz, A.; Lézoray, O. Multiscale convolutional neural networks for vision–based classification of cells. In Proceedings of the 11th Asian Conference on Computer Vision, Daejeon, Korea, 5–9 November 2012; pp. 342–352. [Google Scholar]
  37. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef] [Green Version]
  38. Boer, P.T.D.; Kroese, D.P.; Mannor, S.; Rubinstein, R.Y. A tutorial on the cross-entropy method. Ann. Oper. Res. 2005, 134, 19–67. [Google Scholar] [CrossRef]
  39. Cramer, M. The dgpf-test on digital airborne camera evaluation–overview and test design. Photogramm. Fernerkund. Geoinform. 2010, 2010, 73–82. [Google Scholar] [CrossRef] [PubMed]
  40. Ramiya, A.M.; Nidamanuri, R.R.; Ramakrishnan, K. A supervoxel-based spectro-spatial approach for 3d urban point cloud labelling. Int. J. Remote Sens. 2016, 37, 4172–4200. [Google Scholar] [CrossRef]
  41. Horvat, D.; Žalik, B.; Mongus, D. Context-dependent detection of non-linearly distributed points for vegetation classification in airborne lidar. ISPRS J. Photogramm. Remote Sens. 2016, 116, 1–14. [Google Scholar] [CrossRef]
  42. Steinsiek, M.; Polewski, P.; Yao, W.; Krzystek, P. Semantische analyse von als-und mls-daten in urbanen gebieten mittels conditional random fields. Tagungsband 2017, 37, 521–531. [Google Scholar]
Figure 1. The three-step region growing method: (a) planar object extraction, (b) small pieces re-segmentation, and (c) merging.
Figure 2. The steps of feature image generation: (a) Set up the square window and find the cell. (b) Search for the nearest point and assign the value to the cell. (c) Generate the feature image.
Figure 3. The architecture of the SCNN.
Figure 4. The architecture of the MCNN.
Figure 5. The workflow of the proposed method.
Figure 6. The influence of the voting point number on SegT_M.
Figure 7. The detailed improvement of low vegetation and impervious surfaces classification. (a) The Point_S result. (b) The SegT_M result. (c) The ground truth.
Figure 8. The detailed improvement of fence/hedge classification. (a) The Point_S result. (b) The SegT_M result. (c) The ground truth.
Figure 9. The detailed improvement of car classification. (a) The Point_S result. (b) The SegT_M result. (c) The ground truth.
Figure 10. The detailed improvement of roof and tree classification. (a) The Point_S result. (b) The SegT_M result. (c) The ground truth.
Figure 11. The failed shrub and tree classification. (a) The Point_S result. (b) The SegT_M result. (c) The ground truth.
Table 1. LiDAR features for classification.
Type                 | Symbol  | Feature
Height features      | Δz      | Height above DTM
Echo features        | I       | Intensity
Eigenvalue features  | P_λ     | Planarity
Eigenvalue features  | S_λ     | Sphericity
Local plane features | δ²_nv   | Variance of deviation angles
Table 2. Number of 3D points per class.
Class               | Training Set | Rebalancing Result | Test Set
Powerline           | 546          | 546                | N/A
Low Vegetation      | 180,850      | 18,005             | N/A
Impervious Surfaces | 193,723      | 19,516             | N/A
Car                 | 4614         | 4614               | N/A
Fence/Hedge         | 12,070       | 12,070             | N/A
Roof                | 152,045      | 15,235             | N/A
Facade              | 27,250       | 13,731             | N/A
Shrub               | 47,605       | 11,850             | N/A
Tree                | 135,173      | 13,492             | N/A
Total               | 753,876      | 109,059            | 411,722
Table 3. Per-class accuracy and the overall accuracy of each strategy.
Method  | Power | Low Vegetation | Impervious Surface | Car  | Fence/Hedge | Roof | Facade | Shrub | Tree | OA
Point_S | 24.7  | 81.8           | 91.9               | 69.3 | 14.7        | 95.4 | 40.9   | 38.2  | 78.5 | 82.3
Point_M | 25.2  | 83.1           | 92.1               | 71.2 | 19.3        | 95.5 | 42.1   | 39.2  | 79.3 | 83.0
SegS_M  | 28.3  | 84.7           | 92.5               | 69.5 | 18.7        | 95.5 | 40.7   | 38.3  | 78.4 | 83.3
SegT_S  | 26.8  | 84.3           | 91.2               | 71.2 | 33.7        | 95.4 | 43.3   | 43.6  | 81.2 | 83.6
SegT_M  | 31.2  | 85.0           | 92.4               | 78.9 | 42.5        | 95.6 | 46.5   | 42.4  | 83.7 | 84.9
Table 4. Comparison of the computation time for each strategy.
Metric                                      | Point_S | Point_M   | SegS_M  | SegT_S  | SegT_M
Segmentation time (min)                     | 0       | 0         | 4:20    | 7:40    | 7:40
Number of training feature images           | 109,059 | 327,177   | 327,177 | 109,059 | 327,177
Training feature image generation time (h)  | 0.4     | 1.3       | 1.3     | 0.4     | 1.3
Number of testing feature images            | 411,722 | 1,235,166 | 538,398 | 39,430  | 118,290
Testing feature image generation time (h)   | 1.6     | 4.7       | 2.0     | 0.2     | 0.5
Training time (h)                           | 6.5     | 20.0      | 20.0    | 6.5     | 20.1
Testing time (s)                            | 70.4    | 172.8     | 83.8    | 10.4    | 30.7
Overall Accuracy (%)                        | 82.3    | 83.0      | 83.3    | 83.6    | 84.9
Average F1 (%)                              | 61.6    | 63.7      | 65.7    | 64.3    | 69.2
Table 5. A quantitative comparison between the per-class accuracy and the overall accuracy of our method and other published methods on the ISPRS test set.
Method | Power | Low Vegetation | Impervious Surface | Car  | Fence/Hedge | Roof | Facade | Shrub | Tree | OA
ISS_7  | 40.8  | 49.9           | 96.5               | 46.7 | 39.5        | 96.2 | -      | 52.0  | 68.8 | 76.2
UM     | 33.3  | 79.5           | 90.3               | 32.5 | 2.9         | 90.5 | 43.7   | 43.3  | 85.2 | 80.8
HM_1   | 82.8  | 65.9           | 94.2               | 67.1 | 25.2        | 91.5 | 49.0   | 62.7  | 82.6 | 80.5
WhuY3  | 24.7  | 81.8           | 91.9               | 69.3 | 14.7        | 95.4 | 40.9   | 38.2  | 78.5 | 82.3
LUH    | 53.2  | 72.7           | 90.4               | 63.3 | 25.9        | 91.3 | 60.7   | 73.4  | 79.1 | 81.6
RIT_1  | 29.8  | 69.8           | 93.6               | 77.0 | 10.4        | 92.9 | 47.4   | 73.4  | 79.3 | 81.6
Ours   | 31.2  | 85.0           | 92.4               | 78.9 | 42.5        | 95.6 | 46.5   | 42.4  | 83.7 | 84.9
Table 6. A quantitative comparison between the per-class F1 score and the average value of our method and other published methods on the ISPRS test set.
Method | Power | Low Vegetation | Impervious Surface | Car  | Fence/Hedge | Roof | Facade | Shrub | Tree | Avg. F1
ISS_7  | 54.4  | 65.2           | 85.0               | 57.9 | 28.9        | 90.9 | -      | 39.5  | 75.6 | 55.27
UM     | 46.1  | 79.0           | 89.1               | 47.7 | 5.2         | 92.0 | 52.7   | 40.9  | 77.9 | 58.96
HM_1   | 69.8  | 73.8           | 91.5               | 58.2 | 29.9        | 91.6 | 54.7   | 47.8  | 80.2 | 66.39
WhuY3  | 37.1  | 81.4           | 90.1               | 63.4 | 23.9        | 93.4 | 47.5   | 39.9  | 78.0 | 61.63
LUH    | 59.6  | 77.5           | 91.1               | 73.1 | 34.0        | 94.2 | 56.3   | 46.6  | 83.1 | 68.39
RIT_1  | 37.5  | 77.9           | 91.5               | 73.4 | 18.0        | 94.0 | 49.3   | 45.9  | 82.5 | 63.33
Ours   | 42.5  | 82.7           | 91.4               | 74.7 | 53.7        | 94.3 | 53.1   | 47.9  | 82.8 | 69.2
