Deep-Learning-Based Classification for DTM Extraction from ALS Point Cloud
Abstract: Airborne laser scanning (ALS) point cloud data are suitable for digital terrain model (DTM) extraction given their high accuracy in elevation. Existing filtering algorithms that eliminate non-ground points mostly depend on assumptions about, or representations of, terrain features; these assumptions cause errors when the scene is complex. This paper proposes a new method for ground point extraction based on deep learning using deep convolutional neural networks (CNNs). For every point with spatial context, the neighboring points within a window are extracted and transformed into an image, so that classifying a point can be treated as classifying an image; the point-to-image transformation is carefully crafted by considering the height information in the neighborhood area. After being trained on approximately 17 million labeled ALS points, the deep CNN model can learn how a human operator recognizes whether a point is a ground point or not. The model performs better than typical existing algorithms in terms of error rate, indicating the significant potential of deep-learning-based methods for feature extraction from point clouds.
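The abstract describes rasterizing each point's neighborhood into an image encoding relative heights, which a CNN then classifies. The following is a minimal illustrative sketch of such a point-to-image transformation; the window size, grid resolution, and use of the maximum relative height per cell are assumptions for illustration, not the paper's exact settings.

```python
import numpy as np

def point_to_image(points, center, window=8.0, grid=16):
    """Rasterize the neighborhood of `center` into a grid x grid image.

    Each pixel holds the maximum height of neighboring points in that
    cell, relative to the center point's elevation. Parameters are
    illustrative, not the paper's exact configuration.
    """
    cx, cy, cz = center
    # Keep only neighbors inside the square window around the center.
    dx = points[:, 0] - cx
    dy = points[:, 1] - cy
    mask = (np.abs(dx) <= window / 2) & (np.abs(dy) <= window / 2)

    img = np.zeros((grid, grid), dtype=np.float32)
    cell = window / grid
    for x, y, z in points[mask]:
        # Map the neighbor's planar offset to a grid cell index.
        i = min(int((x - cx + window / 2) / cell), grid - 1)
        j = min(int((y - cy + window / 2) / cell), grid - 1)
        # Encode height relative to the candidate point.
        img[j, i] = max(img[j, i], z - cz)
    return img
```

An image produced this way for each candidate point can then be fed to a standard image-classification CNN, whose binary output (ground / non-ground) replaces hand-crafted terrain assumptions.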
Citation: Hu, X.; Yuan, Y. Deep-Learning-Based Classification for DTM Extraction from ALS Point Cloud. Remote Sens. 2016, 8, 730.
Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.