
LiDAR and Point Cloud Processing for Digital Surface Modelling and 3D Scene Reconstruction

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Urban Remote Sensing".

Deadline for manuscript submissions: 15 June 2024 | Viewed by 7535

Special Issue Editors


Dr. Hadi AliAkbarpour
Guest Editor
Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO, USA
Interests: computer vision; computational geometry; 3D reconstruction and point cloud analysis; multi-view image analysis; machine learning/deep learning related to computer vision and remote sensing

Dr. Yuan Li
Guest Editor
School of Geospatial Engineering and Science, Sun Yat-sen University, Zhuhai 519082, China
Interests: geoinformation; 3D modeling; photogrammetry; 3D point cloud

Special Issue Information

Dear Colleagues,

Over the last few decades, the rapid development of light detection and ranging (LiDAR) and multi-view stereo (MVS) has made point clouds an essential data source for a diverse array of purposes, one of which is producing digital terrain models and 3D city models. Point clouds provide straightforward 3D spatial information, along with extra attributes such as reflection intensity, echoes, and color; at the same time, point clouds are massive, unorganized, and unordered, and these characteristics make it challenging to convert them into structured and informative geographic models. Special attention has been paid to this field using data from different sources and platforms (e.g., terrestrial, vehicle-borne, and airborne) with a wide range of technologies. However, the geographical complexity of particular landforms and the rapid change of urban environments still call for more effective and efficient solutions for digital surface modeling and 3D scene reconstruction.

This Special Issue highlights studies and applications of point clouds in geographic mapping and modeling. Specifically, it covers recent advances in deep learning methods, the integration of multisource and multiplatform data, the semantic and topographic interpretation of scenes, and the generation of standard-format models (e.g., CityGML).

Articles may address but are not limited to the following topics:

  • Terrain filtering;
  • Point cloud segmentation and classification;
  • Integration/registration of multiplatform point clouds;
  • Fusion of point clouds with spectral data;
  • Deep learning on point clouds;
  • Object (e.g., building and road) extraction from point clouds;
  • 3D urban reconstruction from point clouds;
  • Point cloud modeling in other applications (e.g., forest inventory, electrical utilities, and historic preservation).

Dr. Hadi AliAkbarpour
Dr. Yuan Li
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • LiDAR
  • point cloud
  • geographic mapping
  • semantic segmentation
  • 3D reconstruction
  • object extraction
  • data integration
  • ground filtering

Published Papers (6 papers)


Research

21 pages, 11620 KiB  
Article
A Real-Time Method for Railway Track Detection and 3D Fitting Based on Camera and LiDAR Fusion Sensing
by Tiejian Tang, Jinghao Cao, Xiong Yang, Sheng Liu, Dongsheng Zhu, Sidan Du and Yang Li
Remote Sens. 2024, 16(8), 1441; https://doi.org/10.3390/rs16081441 - 18 Apr 2024
Viewed by 310
Abstract
Railway track detection, which is crucial for train operational safety, faces numerous challenges such as the curved track, obstacle occlusion, and vibrations during the train’s operation. Most existing methods for railway track detection use a camera or LiDAR. However, the vision-based approach lacks essential 3D environmental information about the train, while the LiDAR-based approach tends to detect tracks of insufficient length due to the inherent limitations of LiDAR. In this study, we propose a real-time method for railway track detection and 3D fitting based on camera and LiDAR fusion sensing. Semantic segmentation of the railway track in the image is performed, followed by inverse projection to obtain 3D information of the distant railway track. Then, 3D fitting is applied to the inverse projection of the railway track for track vectorization and LiDAR railway track point segmentation. The extrinsic parameters necessary for inverse projection are continuously optimized to ensure robustness against variations in extrinsic parameters during the train’s operation. Experimental results show that the proposed method achieves desirable accuracy for railway track detection and 3D fitting with acceptable computational efficiency, and outperforms existing approaches based on LiDAR, camera, and camera–LiDAR fusion. To the best of our knowledge, our approach represents the first successful attempt to fuse camera and LiDAR data for real-time railway track detection and 3D fitting. Full article
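The inverse-projection step described above, recovering 3D coordinates from segmented track pixels, can be illustrated with a generic pinhole-camera sketch. This is not the authors' implementation: the intrinsics `K`, the pose `(R, t)`, and the flat-ground assumption (Z = 0) are illustrative stand-ins.

```python
import numpy as np

def inverse_project_to_ground(u, v, K, R, t, ground_z=0.0):
    """Back-project pixel (u, v) onto the horizontal plane Z = ground_z.

    K is the 3x3 intrinsic matrix; R, t give the world-to-camera pose,
    i.e. x_cam = R @ x_world + t.
    """
    # Viewing-ray direction in camera coordinates.
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Camera centre and ray direction in world coordinates.
    c_world = -R.T @ t
    d_world = R.T @ d_cam
    # Intersect the ray c + s * d with the plane Z = ground_z.
    s = (ground_z - c_world[2]) / d_world[2]
    return c_world + s * d_world

# Hypothetical calibration: a camera 3 m above flat ground, looking straight down.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
R = np.diag([1.0, -1.0, -1.0])          # camera z-axis points toward the ground
t = -R @ np.array([0.0, 0.0, 3.0])      # camera centre at (0, 0, 3)
p = inverse_project_to_ground(320, 240, K, R, t)  # principal point -> nadir
```

With this toy pose, the principal-point pixel back-projects to the ground point directly below the camera; continuously re-estimating `(R, t)`, as the paper does, is what keeps such a projection valid under vibration.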

21 pages, 9997 KiB  
Article
IPCONV: Convolution with Multiple Different Kernels for Point Cloud Semantic Segmentation
by Ruixiang Zhang, Siyang Chen, Xuying Wang and Yunsheng Zhang
Remote Sens. 2023, 15(21), 5136; https://doi.org/10.3390/rs15215136 - 27 Oct 2023
Viewed by 983
Abstract
The segmentation of airborne laser scanning (ALS) point clouds remains a challenge in remote sensing and photogrammetry. Deep learning methods, such as KPCONV, have proven effective on various datasets. However, the rigid convolutional kernel strategy of KPCONV limits its potential use for 3D object segmentation due to its uniform approach. To address this issue, we propose an Integrated Point Convolution (IPCONV) based on KPCONV, which utilizes two different convolution kernel point generation strategies, one cylindrical and one conical, for more efficient learning of point cloud data features. We propose a customizable Multi-Shape Neighborhood System (MSNS) to balance the relationship between these convolution kernel point generation strategies. Experiments on the ISPRS benchmark dataset, LASDU dataset, and DFC2019 dataset demonstrate the validity of our method. Full article
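The two kernel point layouts the abstract contrasts, cylindrical and conical, can be sketched generically as ring-based point generators. The ring counts, radii, and heights below are illustrative parameters, not the values used in IPCONV.

```python
import numpy as np

def cylinder_kernel_points(n_rings=3, pts_per_ring=8, radius=1.0, height=1.0):
    """Kernel points on equal-radius rings stacked along z (cylindrical layout)."""
    pts = []
    for i in range(n_rings):
        z = height * (i / (n_rings - 1) - 0.5)   # rings spread over the height
        for j in range(pts_per_ring):
            a = 2.0 * np.pi * j / pts_per_ring
            pts.append([radius * np.cos(a), radius * np.sin(a), z])
    return np.asarray(pts)

def cone_kernel_points(n_rings=3, pts_per_ring=8, radius=1.0, height=1.0):
    """Kernel points whose ring radius shrinks toward an apex (conical layout)."""
    pts = []
    for i in range(n_rings):
        frac = i / (n_rings - 1)
        z = height * (frac - 0.5)
        r = radius * (1.0 - frac)                # apex at the topmost ring
        for j in range(pts_per_ring):
            a = 2.0 * np.pi * j / pts_per_ring
            pts.append([r * np.cos(a), r * np.sin(a), z])
    return np.asarray(pts)
```

Each generator returns an (n_rings × pts_per_ring, 3) array of kernel point positions; a KPCONV-style layer would then weight neighboring input points by their distance to these kernel points.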

21 pages, 2760 KiB  
Article
Robust Semi-Supervised Point Cloud Registration via Latent GMM-Based Correspondence
by Zhengyan Zhang, Erli Lyu, Zhe Min, Ang Zhang, Yue Yu and Max Q.-H. Meng
Remote Sens. 2023, 15(18), 4493; https://doi.org/10.3390/rs15184493 - 12 Sep 2023
Cited by 2 | Viewed by 1128
Abstract
Because point clouds are often corrupted by significant noise and subject to large transformations, aligning two point clouds with deep neural networks remains challenging. This paper presents a semi-supervised point cloud registration (PCR) method for accurately estimating point correspondences and handling large transformations using limited prior datasets. First, a modified autoencoder is introduced as the feature extraction module to extract distinctive and robust features for the downstream registration task. Unlike optimization-based pairwise PCR strategies, the proposed method treats two point clouds as two realizations of a Gaussian mixture model (GMM), which we call the latent GMM. Under this assumption, the two point clouds can be regarded as two probability distributions, so their registration can be approached by minimizing the KL divergence between these distributions. The correspondence between the point clouds and the latent GMM components is then estimated using an augmented regression network. Finally, the GMM parameters are updated from the correspondence, and the transformation matrix is computed using the weighted singular value decomposition (SVD) method. Extensive experiments conducted on both synthetic and real-world data validate the superior performance of the proposed method compared to state-of-the-art registration methods, highlighting its accuracy, robustness, and generalization. Full article
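The final step the abstract describes, computing a rigid transformation from weighted correspondences via SVD, is the classical weighted Kabsch/Umeyama solution. A minimal sketch (the `weighted_rigid_transform` helper is hypothetical, not the paper's code):

```python
import numpy as np

def weighted_rigid_transform(src, dst, w):
    """Estimate R, t minimising sum_i w_i * ||R @ src_i + t - dst_i||^2."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(axis=0)               # weighted centroids
    mu_d = (w[:, None] * dst).sum(axis=0)
    H = (src - mu_s).T @ (w[:, None] * (dst - mu_d))    # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))              # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

In a GMM-based pipeline, the weights would come from the estimated soft correspondences between points and mixture components; here they are simply per-pair confidences.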

23 pages, 13063 KiB  
Article
An Object-Based Ground Filtering of Airborne LiDAR Data for Large-Area DTM Generation
by Hunsoo Song and Jinha Jung
Remote Sens. 2023, 15(16), 4105; https://doi.org/10.3390/rs15164105 - 21 Aug 2023
Cited by 2 | Viewed by 1122
Abstract
Digital terrain model (DTM) creation is a modeling process that represents the Earth’s surface. A DTM generation method aptly tailored to the intended study can significantly streamline ensuing processes and assist in managing errors and uncertainties, particularly in large-area projects. However, existing methods often exhibit inconsistent and inexplicable results, struggle to clearly define what an object is, and often fail to filter large objects due to their locally confined operations. We introduce a new DTM generation method that performs object-based ground filtering, which is particularly beneficial for urban topography. This method defines objects as areas fully enclosed by steep slopes and ground as smoothly connected areas, enabling reliable “object-based” segmentation and filtering that extends beyond the local context. Our primary operation, controlled by a slope threshold parameter, simplifies tuning and ensures predictable results, thereby reducing uncertainties in large-area modeling. Uniquely, our method considers surface water bodies in modeling and treats connected artificial terrains (e.g., overpasses) as ground. This contrasts with conventional methods, which often create noise near water bodies and behave inconsistently around overpasses and bridges, making our approach particularly beneficial for large-area 3D urban mapping. Evaluated on extensive and diverse datasets, our method offers unique features and high accuracy, and we have thoroughly assessed potential artifacts to guide potential users. Full article
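The core idea, growing the ground region until steep slopes are hit so that fully enclosed areas are filtered as objects, can be illustrated on a toy raster DSM. This is a simplified flood-fill sketch, not the authors' algorithm; the cell size and slope threshold are illustrative.

```python
import numpy as np
from collections import deque

def object_mask(dsm, cell_size=1.0, slope_thresh=1.0):
    """Mark raster cells enclosed by steep slopes as objects (toy sketch).

    Grows a 'ground' region from the lowest border cell, crossing only
    edges whose slope (height difference / cell size) stays below the
    threshold; everything unreachable is treated as a filtered object.
    """
    h, w = dsm.shape
    ground = np.zeros((h, w), dtype=bool)
    border = [(i, j) for i in range(h) for j in range(w)
              if i in (0, h - 1) or j in (0, w - 1)]
    seed = min(border, key=lambda ij: dsm[ij])          # lowest border cell
    ground[seed] = True
    q = deque([seed])
    while q:
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and not ground[ni, nj] \
                    and abs(dsm[ni, nj] - dsm[i, j]) / cell_size < slope_thresh:
                ground[ni, nj] = True
                q.append((ni, nj))
    return ~ground      # True where a building-like object was removed
```

A flat 6 × 6 raster with a 5 m-tall 2 × 2 block yields a mask covering exactly the block: the block's walls exceed the slope threshold on every side, so the fill never enters it. The single slope threshold is the only tuning knob, which mirrors the predictability argument in the abstract.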

19 pages, 13653 KiB  
Article
Semi-Automated BIM Reconstruction of Full-Scale Space Frames with Spherical and Cylindrical Components Based on Terrestrial Laser Scanning
by Guozhong Cheng, Jiepeng Liu, Dongsheng Li and Y. Frank Chen
Remote Sens. 2023, 15(11), 2806; https://doi.org/10.3390/rs15112806 - 28 May 2023
Cited by 1 | Viewed by 1542
Abstract
The as-built building information modeling (BIM) model has gained increasing attention due to its growing applications in construction, operation, and maintenance. Although methods for generating as-built BIM models from laser scanning data have been proposed, few studies have focused on full-scale structures. To address this issue, this study proposes a semi-automated and effective approach to generate the as-built BIM model of a full-scale space frame structure from terrestrial laser scanning data, comprising large-scale point cloud data (PCD) registration, large-scale PCD segmentation, and geometric parameter estimation. In particular, an effective coarse-to-fine data registration method was developed based on sphere targets and the oriented bounding box. Then, a novel method for extracting sphere targets from full-scale structures was proposed based on the voxelization and random sample consensus (RANSAC) algorithms. Next, an efficient method for extracting cylindrical components was presented based on the detected sphere targets. The proposed approach is shown to be effective and reliable through application to actual space frame structures. Full article
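RANSAC-based sphere-target detection, as used in the registration step above, can be sketched generically: repeatedly fit a sphere to a minimal 4-point sample and keep the hypothesis with the most inliers. This is an illustrative sketch, not the paper's voxelization-based method; thresholds and iteration counts are arbitrary.

```python
import numpy as np

def fit_sphere(pts):
    """Algebraic least-squares sphere fit: returns centre c and radius r.

    Uses ||p||^2 = 2 c . p + (r^2 - ||c||^2), which is linear in (c, d).
    """
    A = np.hstack([2.0 * pts, np.ones((len(pts), 1))])
    b = (pts ** 2).sum(axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    c = x[:3]
    r2 = x[3] + c @ c
    return c, np.sqrt(r2) if r2 > 0 else np.nan

def ransac_sphere(pts, n_iter=200, tol=0.02, seed=0):
    """Return the sphere (c, r) supported by the most points, and its inlier count."""
    rng = np.random.default_rng(seed)
    best = (None, None, -1)
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), 4, replace=False)]
        c, r = fit_sphere(sample)        # 4 points: an exact fit in general position
        if not np.isfinite(r):
            continue                     # degenerate sample
        resid = np.abs(np.linalg.norm(pts - c, axis=1) - r)
        n_in = int((resid < tol).sum())
        if n_in > best[2]:
            best = (c, r, n_in)
    return best
```

In a full-scale pipeline one would run this per voxel cluster rather than on the whole cloud, which is roughly the role the paper's voxelization step plays.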

20 pages, 14779 KiB  
Article
High-Resolution and Efficient Neural Dual Contouring for Surface Reconstruction from Point Clouds
by Qi Liu, Jun Xiao, Lupeng Liu, Yunbiao Wang and Ying Wang
Remote Sens. 2023, 15(9), 2267; https://doi.org/10.3390/rs15092267 - 25 Apr 2023
Cited by 2 | Viewed by 1466
Abstract
The surface mesh reconstruction from point clouds has been a fundamental research topic in Computer Vision and Computer Graphics. Recently, the Neural Implicit Representation (NIR)-based reconstruction has revolutionized this research topic. This work summarizes and analyzes representative works on NIR-based reconstruction and highlights several important insights. However, one major problem with existing works is that they struggle to handle high-resolution meshes. To address this, this paper introduces HRE-NDC, a novel High-Resolution and Efficient Neural Dual Contouring approach for mesh reconstruction from point clouds, which takes the previous state-of-the-art as a baseline and adopts a coarse-to-fine network structure design, along with feature-preserving downsampling and other improvements. HRE-NDC significantly reduces training time and memory usage while achieving better surface reconstruction results. Experimental results demonstrate the superiority of our method in both visualization and quantitative results, and it shows excellent generalization performance on various data including large indoor scenes, real-scanned urban buildings, clothing, and human bodies. Full article
