A Supervoxel-Based Random Forest Method for Robust and Effective Airborne LiDAR Point Cloud Classification
Abstract
1. Introduction
2. Related Works
2.1. Knowledge-Driven Methods
2.2. Model-Driven Methods
3. Methodology
3.1. Overview of the Approach
3.2. Two-Level Graphical Model Generation for Feature Extraction
3.2.1. First-Level Graphical Model Generation by the Supervoxel and VCCS Algorithm
3.2.2. Second-Level Graphical Model Generation via LCCP Calculation
3.3. Hybrid Feature Description
3.3.1. Point-Based Feature Description
- (1) Local density of the point cloud: the density feature is calculated as the average distance from a point to its k nearest neighbors. For each supervoxel centroid, neighboring points are retrieved efficiently by constructing a k-d tree and querying it with the Fast Library for Approximate Nearest Neighbors (FLANN) algorithm [42]. The local density feature of the point is then obtained as the mean Euclidean distance between the centroid and its neighbors (a minimal computational sketch of this feature and the direction feature follows this list).
- (2) Point feature histogram (PFH): the PFH formulation encodes the geometrical properties of a point’s k-neighborhood by generalizing the mean curvature around the point into a multidimensional histogram of values [43,44]. The representation is based on the pairwise relationships between the points in the k-neighborhood and their estimated surface normals. In this work, the PFH feature of each centroid point is calculated by a k-d tree search over the original point cloud.
- (3) Direction: the direction feature is the angle between the normal at the point’s location and the horizontal plane, calculated as in Equation (3).
- (4) Relative elevation: the relative elevation feature is the distance from the center point of the supervoxel to the ground along the z-direction. To account for the influence of ground undulation on elevation features, this paper proposes a grid-based optimization strategy for elevation feature extraction (see Section 3.3.3).
- (5) RGB color: RGB color information supports effective discrimination of object types, so color is used as a basic feature of each supervoxel. Because supervoxels are the basic units in the classification experiments, the color feature of a supervoxel is the average color of the points it contains.
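The following minimal sketch illustrates how the local density and direction features could be computed for a supervoxel centroid. It uses NumPy/SciPy rather than the PCL k-d tree/FLANN pipeline described in the paper [42], and the function names, neighborhood size k, and PCA-based normal estimation are illustrative assumptions rather than the authors’ implementation.

```python
# Illustrative sketch of two point-based features: local density and direction.
# Assumptions: k = 10 neighbors, normals estimated by local PCA.
import numpy as np
from scipy.spatial import cKDTree

def local_density(points, centroid, k=10):
    """Mean Euclidean distance from a supervoxel centroid to its k nearest points."""
    tree = cKDTree(points)
    dists, _ = tree.query(centroid, k=k)
    return float(np.mean(dists))

def direction_angle(points, centroid, k=10):
    """Angle (radians) between the locally estimated normal and the horizontal plane."""
    tree = cKDTree(points)
    _, idx = tree.query(centroid, k=k)
    neighborhood = points[idx]
    cov = np.cov(neighborhood.T)            # 3x3 covariance of the local neighborhood
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    normal = eigvecs[:, 0]                  # eigenvector of the smallest eigenvalue
    # Angle between the normal vector and the horizontal (xy) plane.
    return float(np.arcsin(abs(normal[2]) / np.linalg.norm(normal)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.random((1000, 3))           # stand-in for an ALS point cloud
    c = cloud.mean(axis=0)                  # stand-in for a supervoxel centroid
    print(local_density(cloud, c), direction_angle(cloud, c))
```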
3.3.2. Eigen-Based Feature Description
- (1) Curvature: describes how strongly a point group is curved.
- (2) Linearity: describes the extent to which a point group forms a line-like shape.
- (3) Planarity: describes the extent to which a point group forms a plane-like shape.
- (4) Scattering: describes the extent to which a point group forms a sphere-like (scattered) shape.
- (5) Anisotropy: describes the difference between the spatial extents of a point group along the respective eigenvector directions (the commonly used eigenvalue-based formulas are given after this list).
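For reference, the eigenvalue-based definitions most commonly used for these features (e.g., Weinmann et al., cited in the references) are listed below, assuming eigenvalues λ1 ≥ λ2 ≥ λ3 of the local 3D covariance matrix; the exact normalization used in the paper may differ, so these should be read as assumed standard forms rather than the authors’ own formulas.

```latex
% Common eigenvalue-based shape features for a point group with
% covariance eigenvalues \lambda_1 \ge \lambda_2 \ge \lambda_3 > 0.
\begin{align}
  C_{\lambda} &= \frac{\lambda_3}{\lambda_1 + \lambda_2 + \lambda_3} \quad \text{(curvature / surface variation)} \\
  L_{\lambda} &= \frac{\lambda_1 - \lambda_2}{\lambda_1} \quad \text{(linearity)} \\
  P_{\lambda} &= \frac{\lambda_2 - \lambda_3}{\lambda_1} \quad \text{(planarity)} \\
  S_{\lambda} &= \frac{\lambda_3}{\lambda_1} \quad \text{(scattering / sphericity)} \\
  A_{\lambda} &= \frac{\lambda_1 - \lambda_3}{\lambda_1} \quad \text{(anisotropy)}
\end{align}
```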
3.3.3. Grid-Based Elevation Feature Description
3.4. Supervoxel-Based Random Forests (RF) Model
4. Experimental Results
4.1. ISPRS Benchmark Datasets
4.1.1. Toronto Sites
4.1.2. Vaihingen Sites
4.2. Airborne Laser Scanner Dataset in Urban Scenes of Shenzhen
4.3. Discussions of the Experimental Results
5. Conclusions
- (1) Rather than single points, we used supervoxels as the basic entities of the RF model and constrained the neighborhood information via LCCP segmentation.
- (2) A two-level graphical model involving supervoxel calculation and LCCP optimization was generated from the raw point cloud, which significantly improved the reliability and accuracy of neighborhood searching.
- (3) The features were divided into three categories according to their descriptions (point-based, eigen-based, and grid-based), and three dedicated feature calculation strategies were designed accordingly to improve feature reliability. We conducted three experiments using the ALS benchmark data provided by the ISPRS and real-scene data collected in Shenzhen, China. The quantitative results on the ALS datasets were compared with other state-of-the-art methods, and the classification results demonstrated the robustness and effectiveness of the proposed method. Furthermore, the method achieved fine-scale classification on point clouds with different densities (an illustrative sketch of supervoxel-level random forest classification follows this list).
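The paper builds its classifier with the Random Forest Template Library (cited in the references) in C++; as an illustrative analogue only, the sketch below shows the general idea of training a random forest on per-supervoxel feature vectors using scikit-learn. The feature matrix, class labels, and hyperparameters are placeholders, not the authors’ configuration.

```python
# Illustrative sketch of supervoxel-level random forest classification with
# scikit-learn. Feature dimensions, class labels, and hyperparameters are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)

# One row per supervoxel: e.g., density, direction, relative elevation,
# mean RGB, eigen-features, PFH bins (placeholder random data here).
n_supervoxels, n_features = 5000, 20
X = rng.random((n_supervoxels, n_features))
y = rng.integers(0, 4, size=n_supervoxels)   # e.g., ground / building / vegetation / other

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
clf.fit(X_train, y_train)

# Per-supervoxel predictions can then be propagated to every point inside
# each supervoxel to obtain a point-wise labeling of the cloud.
print(classification_report(y_test, clf.predict(X_test)))
```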
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Chen, C.; Peng, X.; Song, S.; Wang, K.; Qian, J.; Yang, B. Safety Distance Diagnosis of Large Scale Transmission Line Corridor Inspection Based on LiDAR Point Cloud Collected With UAV. Power Syst. Technol. 2017, 41, 2723–2730.
- Croce, V.; Caroti, G.; De Luca, L.; Jacquot, K.; Piemonte, A.; Véron, P. From the Semantic Point Cloud to Heritage-Building Information Modeling: A Semiautomatic Approach Exploiting Machine Learning. Remote Sens. 2021, 13, 461.
- Javernick, L.; Brasington, J.; Caruso, B. Modeling the topography of shallow braided rivers using Structure-from-Motion photogrammetry. Geomorphology 2014, 213, 166–182.
- Yue, X.; Wu, B.; Seshia, S.A.; Keutzer, K.; Sangiovanni-Vincentelli, A.L. A lidar point cloud generator: From a virtual world to autonomous driving. In Proceedings of the 2018 ACM on International Conference on Multimedia Retrieval, Yokohama, Japan, 11–14 June 2018; pp. 458–464.
- Lafarge, F.; Mallet, C. Creating large-scale city models from 3D-point clouds: A robust approach with hybrid representation. Int. J. Comput. Vis. 2012, 99, 69–85.
- Xiong, B.; Jancosek, M.; Elberink, S.O.; Vosselman, G. Flexible building primitives for 3D building modeling. ISPRS J. Photogramm. Remote Sens. 2015, 101, 275–290.
- Zhou, Q.Y.; Neumann, U. 2.5D dual contouring: A robust approach to creating building models from aerial lidar point clouds. In Proceedings of the European Conference on Computer Vision, Heraklion, Crete, Greece, 5–11 September 2010; Springer: Berlin/Heidelberg, Germany, 2010; pp. 115–128.
- Hackel, T.; Wegner, J.D.; Schindler, K. Fast semantic segmentation of 3D point clouds with strongly varying density. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 3, 177–184.
- Jie, S.; Zulong, L. Airborne LiDAR feature selection for urban classification using random forests. Geomat. Inf. Sci. Wuhan Univ. 2014, 39, 1310–1313.
- Wang, Y.; Chen, Q.; Liu, L.; Li, X.; Sangaiah, A.K.; Li, K. Systematic comparison of power line classification methods from ALS and MLS point cloud data. Remote Sens. 2018, 10, 1222.
- Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. PointNet++: Deep hierarchical feature learning on point sets in a metric space. arXiv 2017, arXiv:1706.02413.
- Li, X.; Wang, L.; Wang, M.; Wen, C.; Fang, Y. DANCE-NET: Density-aware convolution networks with context encoding for airborne LiDAR point cloud classification. ISPRS J. Photogramm. Remote Sens. 2020, 166, 128–139.
- Li, X.; Wen, C.; Cao, Q.; Du, Y.; Fang, Y. A novel semi-supervised method for airborne LiDAR point cloud classification. ISPRS J. Photogramm. Remote Sens. 2021, 180, 117–129.
- Niemeyer, J.; Rottensteiner, F.; Soergel, U. Contextual classification of lidar data and building object detection in urban areas. ISPRS J. Photogramm. Remote Sens. 2014, 87, 152–165.
- Niemeyer, J.; Rottensteiner, F.; Sörgel, U.; Heipke, C. Hierarchical higher order CRF for the classification of airborne lidar point clouds in urban areas. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 655–662.
- Zhu, Q.; Li, Y.; Hu, H.; Wu, B. Robust point cloud classification based on multi-level semantic relationships for urban scenes. ISPRS J. Photogramm. Remote Sens. 2017, 129, 86–102.
- Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; Süsstrunk, S. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2274–2282.
- Papon, J.; Abramov, A.; Schoeler, M.; Worgotter, F. Voxel cloud connectivity segmentation—Supervoxels for point clouds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 2027–2034.
- Wu, F.; Wen, C.; Guo, Y.; Wang, J.; Yu, Y.; Wang, C.; Li, J. Rapid localization and extraction of street light poles in mobile LiDAR point clouds: A supervoxel-based approach. IEEE Trans. Intell. Transp. Syst. 2016, 18, 292–305.
- Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
- Ni, H.; Lin, X.; Zhang, J. Classification of ALS point cloud with improved point cloud segmentation and random forests. Remote Sens. 2017, 9, 288.
- Huang, M.J.; Shyue, S.W.; Lee, L.H.; Kao, C.C. A knowledge-based approach to urban feature classification using aerial imagery with lidar data. Photogramm. Eng. Remote Sens. 2008, 74, 1473–1485.
- Germaine, K.A.; Hung, M.C. Delineation of impervious surface from multispectral imagery and lidar incorporating knowledge based expert system rules. Photogramm. Eng. Remote Sens. 2011, 77, 75–85.
- Zheng, M.; Wu, H.; Li, Y. An adaptive end-to-end classification approach for mobile laser scanning point clouds based on knowledge in urban scenes. Remote Sens. 2019, 11, 186.
- Chen, H.; Wang, C.; Chen, T.; Zhao, X. Feature selecting based on Fourier series fitting. In Proceedings of the 2017 8th IEEE International Conference on Software Engineering and Service Science (ICSESS), Beijing, China, 24–26 November 2017; pp. 241–244.
- Ponciano, J.J.; Roetner, M.; Reiterer, A.; Boochs, F. Object Semantic Segmentation in Point Clouds—Comparison of a Deep Learning and a Knowledge-Based Method. ISPRS Int. J. Geo-Inf. 2021, 10, 256.
- Gu, J.; Wang, Z.; Kuen, J.; Ma, L.; Shahroudy, A.; Shuai, B.; Liu, T.; Wang, X.; Wang, G.; Cai, J.; et al. Recent advances in convolutional neural networks. Pattern Recognit. 2018, 77, 354–377.
- Hong, J.; Kim, K.; Lee, H. Faster Dynamic Graph CNN: Faster Deep Learning on 3D Point Cloud Data. IEEE Access 2020, 8, 190529–190538.
- Li, Y.; Tong, G.; Li, X.; Zhang, L.; Peng, H. MVF-CNN: Fusion of multilevel features for large-scale point cloud classification. IEEE Access 2019, 7, 46522–46537.
- Song, W.; Zhang, L.; Tian, Y.; Fong, S.; Liu, J.; Gozho, A. CNN-based 3D object classification using Hough space of LiDAR point clouds. Hum.-Cent. Comput. Inf. Sci. 2020, 10, 19.
- Wang, Y.; Sun, Y.; Liu, Z.; Sarma, S.E.; Bronstein, M.M.; Solomon, J.M. Dynamic graph CNN for learning on point clouds. ACM Trans. Graph. 2019, 38, 1–12.
- Mountrakis, G.; Im, J.; Ogole, C. Support vector machines in remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2011, 66, 247–259.
- Vosselman, G.; Coenen, M.; Rottensteiner, F. Contextual segment-based classification of airborne laser scanner data. ISPRS J. Photogramm. Remote Sens. 2017, 128, 354–371.
- Zhang, J.; Lin, X.; Ning, X. SVM-based classification of segmented airborne LiDAR point clouds in urban areas. Remote Sens. 2013, 5, 3749–3775.
- Chen, D.; Peethambaran, J.; Zhang, Z. A supervoxel-based vegetation classification via decomposition and modelling of full-waveform airborne laser scanning data. Int. J. Remote Sens. 2018, 39, 2937–2968.
- Wang, H.; Wang, C.; Luo, H.; Li, P.; Chen, Y.; Li, J. 3-D point cloud object detection based on supervoxel neighborhood with Hough forest framework. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 1570–1581.
- Stein, S.C.; Schoeler, M.; Papon, J.; Worgotter, F. Object partitioning using local convexity. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 304–311.
- Feng, M.; Zhang, L.; Lin, X.; Gilani, S.Z.; Mian, A. Point attention network for semantic segmentation of 3D point clouds. Pattern Recognit. 2020, 107, 107446.
- Rusu, R.B.; Cousins, S. 3D is here: Point Cloud Library (PCL). In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 1–4.
- Weinmann, M.; Urban, S.; Hinz, S.; Jutzi, B.; Mallet, C. Distinctive 2D and 3D features for automated large-scale scene analysis in urban areas. Comput. Graph. 2015, 49, 47–57.
- Zhou, Y.; Yu, Y.; Lu, G.; Du, S. Super segments based classification of 3D urban street scenes. Int. J. Adv. Robot. Syst. 2012, 9, 248.
- Muja, M.; Lowe, D.G. Scalable nearest neighbor algorithms for high dimensional data. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 2227–2240.
- Rusu, R.B.; Blodow, N.; Marton, Z.C.; Beetz, M. Aligning point cloud views using persistent feature histograms. In Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France, 22–26 September 2008; pp. 3384–3391.
- Rusu, R.B.; Marton, Z.C.; Blodow, N.; Beetz, M. Learning informative point classes for the acquisition of object model maps. In Proceedings of the 2008 10th International Conference on Control, Automation, Robotics and Vision, Hanoi, Vietnam, 17–20 December 2008; pp. 643–650.
- Miao, Z.; Jiang, X. Weighted iterative truncated mean filter. IEEE Trans. Signal Process. 2013, 61, 4149–4160.
- Liang, J.M.; Shen, S.Q.; Li, M.; Li, L. Quantum anomaly detection with density estimation and multivariate Gaussian distribution. Phys. Rev. A 2019, 99, 052310.
- Walk, S. Random Forest Template Library. Available online: https://prs.igp.ethz.ch/research/Source_code_and_datasets.html (accessed on 15 July 2021).
- Boykov, Y.; Veksler, O.; Zabih, R. Fast approximate energy minimization via graph cuts. IEEE Trans. Pattern Anal. Mach. Intell. 2001, 23, 1222–1239.
- Rottensteiner, F.; Sohn, G.; Jung, J.; Gerke, M.; Baillard, C.; Benitez, S.; Breitkopf, U. The ISPRS benchmark on urban object classification and 3D building reconstruction. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, I-3, 293–298.
- Garcia-Garcia, A.; Orts-Escolano, S.; Oprea, S.; Villena-Martinez, V.; Garcia-Rodriguez, J. A review on deep learning techniques applied to semantic segmentation. arXiv 2017, arXiv:1704.06857.
- Wen, C.; Yang, L.; Li, X.; Peng, L.; Chi, T. Directionally constrained fully convolutional neural network for airborne LiDAR point cloud classification. ISPRS J. Photogramm. Remote Sens. 2020, 162, 50–62.
- Fabri, A.; Pion, S. CGAL: The Computational Geometry Algorithms Library. In Proceedings of the 17th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, Seattle, WA, USA, 4–6 November 2009; pp. 538–539.
Feature Definition | Computing Formula |
---|---|
Curvature | |
Linearity | |
Planarity | |
Scattering | |
Anisotropy | |
Methods | Area 1 OA (%) | Area 1 mIoU (%) | Area 1 F1-Score (%) | Area 2 OA (%) | Area 2 mIoU (%) | Area 2 F1-Score (%) |
---|---|---|---|---|---|---|
MAR_2 | 94.3 | 89.2 | 88.9 | 94.0 | 88.7 | 88.4 |
MSR | | | | | | |
ITCM | 81.3 | 68.5 | 66.1 | 83.0 | 70.9 | 67.9 |
ITCR | 84.2 | 72.7 | 69.2 | 85.4 | 74.5 | 72.4 |
TUM | 82.6 | 70.4 | 68.1 | 83.1 | 71.1 | 68.9 |
Our method | 93.2 | 87.4 | | 93.1 | 87.0 | 85.8 |
Methods | OA (%) | mIoU (%) | F1-Score (%) |
---|---|---|---|
UM | 80.8 | 67.8 | 78.1 |
BIJ_W | 81.5 | 68.8 | 78.6 |
LUH | 81.6 | 68.9 | 80.4 |
RIT_1 | 81.6 | 68.9 | 79.0 |
D_FCN | 82.2 | 69.8 | 80.9 |
WhuY3 | 82.3 | 69.9 | 81.0 |
WhuY4 | 84.9 | 73.8 | 80.8 |
Our method | | | |
Methods | Area (a) OA (%) | Area (a) mIoU (%) | Area (a) F1-Score (%) | Area (b) OA (%) | Area (b) mIoU (%) | Area (b) F1-Score (%) | Area (c) OA (%) | Area (c) mIoU (%) | Area (c) F1-Score (%) |
---|---|---|---|---|---|---|---|---|---|
Our method | 94.0 | 88.7 | 90.1 | 93.5 | 87.8 | 91.8 | 93.5 | 87.8 | 91.7 |
Point-based | 90.6 | 82.9 | 85.6 | 87.6 | 78.0 | 84.2 | 86.1 | 75.6 | 79.8 |