Open Access Article

A Convolutional Neural Network-Based 3D Semantic Labeling Method for ALS Point Clouds

State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430000, Hubei, China
Author to whom correspondence should be addressed.
Academic Editors: Bailang Yu, Lei Wang, Qiusheng Wu, Juha Hyyppä and Prasad S. Thenkabail
Remote Sens. 2017, 9(9), 936;
Received: 1 August 2017 / Revised: 23 August 2017 / Accepted: 8 September 2017 / Published: 10 September 2017
(This article belongs to the Special Issue Remote Sensing for 3D Urban Morphology)


3D semantic labeling is a fundamental task in airborne laser scanning (ALS) point cloud processing. The complexity of observed scenes and the irregularity of point distributions make this task quite challenging. Existing methods rely on a large number of features for the LiDAR points and on interactions among neighboring points, but cannot fully exploit their potential. In this paper, a convolutional neural network (CNN)-based method that extracts high-level feature representations is used. A point-based feature image-generation method is proposed that transforms the 3D neighborhood features of a point into a 2D image. First, for each point in the ALS data, the local geometric features, global geometric features, and full-waveform features of its neighboring points within a window are extracted and transformed into an image. Then, the feature images are treated as the input of a CNN model for the 3D semantic labeling task. Finally, to allow performance comparisons with existing approaches, we evaluate our framework on the publicly available datasets provided by the International Society for Photogrammetry and Remote Sensing Working Group II/4 (ISPRS WG II/4) benchmark tests on 3D labeling. The experiments achieve an overall accuracy of 82.3%, the best among all considered methods.
Keywords: deep convolutional neural network; ALS point clouds; semantic 3D labeling; feature image
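To make the abstract's pipeline concrete, the sketch below illustrates the point-based feature-image idea in minimal form: for one ALS point, the features of its neighbors inside a window are arranged into a small 2D grid that a CNN could consume. The function name, grid size, ordering rule, and the single relative-height feature are all illustrative assumptions, not the authors' actual implementation (which uses local geometric, global geometric, and full-waveform features).

```python
# Hypothetical sketch of generating a "feature image" for one ALS point.
# Assumptions (not from the paper): a 4x4 single-channel grid, neighbors
# ordered by planar distance to the center, relative height as the feature,
# and zero-padding when fewer neighbors exist than grid cells.
import math

def neighbor_feature_image(center, neighbors, size=4):
    """Build a size x size feature image for one point.

    center, neighbors: (x, y, z) tuples. Each cell holds one neighbor's
    height difference (dz) relative to the center point.
    """
    cx, cy, cz = center
    # Order neighbors by horizontal distance to the center point.
    ordered = sorted(neighbors, key=lambda p: math.hypot(p[0] - cx, p[1] - cy))
    cells = [p[2] - cz for p in ordered[:size * size]]
    cells += [0.0] * (size * size - len(cells))  # zero-pad missing cells
    # Reshape the flat list into a size x size grid (the 2D "image").
    return [cells[r * size:(r + 1) * size] for r in range(size)]

center = (0.0, 0.0, 10.0)
neighbors = [(1.0, 0.0, 10.5), (0.0, 2.0, 9.0), (3.0, 3.0, 12.0)]
img = neighbor_feature_image(center, neighbors)
```

In the paper's framework, one such image per feature channel would be generated for every point and fed to the CNN, which predicts that point's semantic label.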

Figure 1

This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Share & Cite This Article

MDPI and ACS Style

Yang, Z.; Jiang, W.; Xu, B.; Zhu, Q.; Jiang, S.; Huang, W. A Convolutional Neural Network-Based 3D Semantic Labeling Method for ALS Point Clouds. Remote Sens. 2017, 9, 936.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

Remote Sens. EISSN 2072-4292, published by MDPI AG, Basel, Switzerland.