
Advances in 3D Reconstruction Based on Remote Sensing Imagery and Lidar Point Cloud

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Engineering Remote Sensing".

Deadline for manuscript submissions: 30 June 2025 | Viewed by 4967

Special Issue Editors


Guest Editor
State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
Interests: point cloud processing; urban intelligence

Guest Editor
Department of Civil, Environmental and Geodetic Engineering, Translational Data Analytics Institute, The Ohio State University, Columbus, OH, USA
Interests: sensor fusion; 3D computer vision; photogrammetry

Guest Editor
Department of Aeronautical and Aviation Engineering, The Hong Kong Polytechnic University, Hong Kong, China
Interests: three-dimensional computer vision; machine learning; intelligent spatial perception; visual and LiDAR SLAM

Special Issue Information

Dear Colleagues,

As a fundamental problem in remote sensing, 3D reconstruction has continued to attract the attention of researchers over recent decades. The ability to create detailed digital replicas of physical spaces is essential for various applications, ranging from urban planning, building construction, and forest management to virtual tourism. This has led to an increased demand for sophisticated 3D reconstruction techniques that can capture the intricacies of built and natural environments.

Over the past few years, the methodologies for 3D reconstruction have advanced rapidly, with significant strides in data acquisition, deep learning algorithms, and large-scale model development. On the data side, the volume of satellite and UAV imagery continues to grow, laser scanning equipment has become increasingly affordable and widely available, and simulated point cloud data are steadily expanding. The integration of multi-modal data sources, such as satellite imagery, aerial photography, and point clouds, has enabled the creation of more comprehensive and accurate 3D models. Implicit 3D reconstruction methods (e.g., neural radiance fields, NeRF) and 3D Gaussian splatting (3DGS) have shown great potential for creating high-quality reconstructions from sparse input views. Furthermore, the advent of large models has revolutionized the field, with models capable of cross-modal learning and depth recovery leading to more robust and automated reconstruction processes.

This Special Issue serves as a forum for recent advances and developments in the research and applications of 3D reconstruction from remote sensing imagery and LiDAR point clouds, with a particular focus on deep learning algorithms. It calls for the latest findings and innovative work on understanding and modeling natural and artificial scenes, including related data generation, fusion, and annotation methods. Submissions can cover one or more of the following themes:

  • Advances in 3D reconstruction algorithms and techniques;
  • The integration of multi-modal data sources for enhanced 3D modeling;
  • Three-dimensional reconstruction of architecture, cultural heritage, and natural scenes;
  • Three-dimensional reconstruction for disaster management and emergency response;
  • Real-scene 3D reconstruction for geological, topographical, and urban analysis;
  • Contributions of 3D reconstruction to a low-altitude economy.

Dr. Fuxun Liang
Dr. Shuang Song
Dr. Bing Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • three-dimensional reconstruction
  • remote sensing image
  • LiDAR point cloud
  • multi-modality fusion
  • semantic segmentation
  • deep learning
  • neural radiance field (NeRF)
  • 3D Gaussian splatting (3DGS)
  • generative modeling
  • large models
  • ubiquitous point cloud interpretation
  • 3D scene modeling
  • building reconstruction
  • indoor modeling
  • natural scene reconstruction
  • digital twins

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)


Research

22 pages, 6200 KiB  
Article
Research on 3D Reconstruction Methods for Incomplete Building Point Clouds Using Deep Learning and Geometric Primitives
by Ziqi Ding, Yuefeng Lu, Shiwei Shao, Yong Qin, Miao Lu, Zhenqi Song and Dengkuo Sun
Remote Sens. 2025, 17(3), 399; https://doi.org/10.3390/rs17030399 - 24 Jan 2025
Viewed by 1146
Abstract
Point cloud data, known for their accuracy and ease of acquisition, are commonly used for reconstructing level of detail 2 (LoD-2) building models. However, factors like object occlusion can cause incompleteness, negatively impacting the reconstruction process. To address this challenge, this paper proposes a method for reconstructing LoD-2 building models from incomplete point clouds. We design a generative adversarial network model that incorporates geometric constraints. The generator utilizes a multilayer perceptron with a curvature attention mechanism to extract multi-resolution features from the input data and then generates the missing portions of the point cloud through fully connected layers. The discriminator iteratively refines the generator’s predictions using a loss function that is combined with plane-aware Chamfer distance. For model reconstruction, the proposed method extracts a set of candidate polygons from the point cloud and computes weights for each candidate polygon based on a weighted energy term tailored to building characteristics. The most suitable planes are retained to construct the LoD-2 building model. The performance of this method is validated through extensive comparisons with existing state-of-the-art methods, showing a 10.9% reduction in the fitting error of the reconstructed models, and real-world data are tested to evaluate the effectiveness of the method.
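The Chamfer distance mentioned in the abstract measures how well a generated point set matches a reference set by averaging nearest-neighbor distances in both directions. The sketch below shows the plain symmetric Chamfer term in NumPy; the paper's plane-aware variant adds a planarity component whose exact form is not reproduced here, so this is only an illustrative baseline.

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p (N, 3) and q (M, 3).

    Plain Chamfer term only; the paper augments it with a plane-aware
    component (not shown here).
    """
    # Pairwise squared distances, shape (N, M)
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    # For each point, squared distance to its nearest neighbor in the
    # other set, averaged in both directions.
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

Identical point sets yield a distance of zero, and the value grows with the squared offset between the sets, which is why it serves as a natural completion loss.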

19 pages, 1575 KiB  
Article
FIFA3D: Flow-Guided Feature Aggregation for Temporal Three-Dimensional Object Detection
by Ruiqi Ma, Chunwei Wang, Chi Chen, Yihan Zeng, Bijun Li, Qin Zou, Qingqiu Huang, Xinge Zhu and Hang Xu
Remote Sens. 2025, 17(3), 380; https://doi.org/10.3390/rs17030380 - 23 Jan 2025
Viewed by 864
Abstract
Detecting accurate 3D bounding boxes from LiDAR point clouds is crucial for autonomous driving. Recent studies have shown the superiority of the performance of multi-frame 3D detectors, yet eliminating the misalignment across frames and effectively aggregating spatiotemporal information are still challenging problems. In this paper, we present a novel flow-guided feature aggregation scheme for 3D object detection (FIFA3D) to align cross-frame information. FIFA3D first leverages optical flow with supervised signals to model the pixel-to-pixel correlations between sequential frames. Considering the sparse nature of bird’s-eye-view feature maps, an additional classification branch is adopted to provide explicit pixel-wise clues. Meanwhile, we utilize multi-scale feature maps and predict flow in a coarse-to-fine manner. With guidance from the estimated flow, historical features can be well aligned to the current situation, and a cascade fusion strategy is introduced to benefit the following detection. Extensive experiments show that FIFA3D surpasses the single-frame baseline with remarkable margins of +10.8% mAPH and +6.8% mAP on the Waymo and nuScenes validation datasets and performs well compared with state-of-the-art methods.
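The core idea of flow-guided aggregation is to warp a past frame's bird's-eye-view (BEV) feature map into the current frame using a per-pixel displacement field before fusing. The sketch below illustrates that warping step in NumPy under simplifying assumptions: the flow is given rather than predicted by a network, and nearest-neighbor sampling stands in for the bilinear interpolation typically used in practice; the function name and interface are illustrative, not FIFA3D's actual API.

```python
import numpy as np

def warp_features(feat, flow):
    """Warp a historical BEV feature map toward the current frame.

    feat: (C, H, W) feature map from a past frame.
    flow: (2, H, W) displacement (dx, dy) mapping each current-frame
          pixel back to its source location in `feat`.
    """
    c, h, w = feat.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # Source coordinates for each output pixel, clamped to the map.
    src_x = np.clip(xs + flow[0], 0, w - 1)
    src_y = np.clip(ys + flow[1], 0, h - 1)
    # Nearest-neighbor sampling keeps the sketch short; real systems
    # use differentiable bilinear interpolation here.
    xi = np.round(src_x).astype(int)
    yi = np.round(src_y).astype(int)
    return feat[:, yi, xi]
```

With zero flow the warp is the identity, and a uniform flow of +1 in x shifts each output pixel to sample its right-hand neighbor, which is the alignment effect the paper exploits before cascade fusion.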

28 pages, 21353 KiB  
Article
ThermalGS: Dynamic 3D Thermal Reconstruction with Gaussian Splatting
by Yuxiang Liu, Xi Chen, Shen Yan, Zeyu Cui, Huaxin Xiao, Yu Liu and Maojun Zhang
Remote Sens. 2025, 17(2), 335; https://doi.org/10.3390/rs17020335 - 19 Jan 2025
Viewed by 1976
Abstract
Thermal infrared (TIR) images capture temperature in a non-invasive manner, making them valuable for generating 3D models that reflect the spatial distribution of thermal properties within a scene. Current TIR image-based 3D reconstruction methods primarily focus on static conditions, which only capture the spatial distribution of thermal radiation but lack the ability to represent its temporal dynamics. The absence of dedicated datasets and effective methods for dynamic 3D representation are two key challenges that hinder progress in this field. To address these challenges, we propose a novel dynamic thermal 3D reconstruction method, named ThermalGS, based on 3D Gaussian Splatting (3DGS). ThermalGS employs a data-driven approach to directly learn both scene structure and dynamic thermal representation, using RGB and TIR images as input. The position, orientation, and scale of Gaussian primitives are guided by the RGB mesh. We introduce feature encoding and embedding networks to integrate semantic and temporal information into the Gaussian primitives, allowing them to capture dynamic thermal radiation characteristics. Moreover, we construct the Thermal Scene Day-and-Night (TSDN) dataset, which includes multi-view, high-resolution aerial RGB reference images and TIR images captured at five different times throughout the day and night, providing a benchmark for dynamic thermal 3D reconstruction tasks. Experimental results demonstrate that the proposed method achieves state-of-the-art performance on the TSDN dataset, with an average absolute temperature error of 1 °C and the ability to predict surface temperature variations over time.
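Embedding temporal information into Gaussian primitives, as the abstract describes, requires feeding the capture time to a network in a form it can learn from. A common generic choice for periodic signals such as time of day is a multi-frequency sinusoidal encoding; the sketch below shows that encoding only, as the paper's actual embedding network is more elaborate and its architecture is not reproduced here.

```python
import numpy as np

def time_encoding(t, n_freq=4):
    """Sinusoidal encoding of a time-of-day value t in [0, 1).

    Returns a vector of 2 * n_freq features: sines and cosines at
    geometrically spaced frequencies. This is a generic positional
    encoding, not ThermalGS's specific embedding.
    """
    freqs = (2.0 ** np.arange(n_freq)) * np.pi
    return np.concatenate([np.sin(freqs * t), np.cos(freqs * t)])
```

Such an encoding gives a small network distinct, smoothly varying inputs for each capture time, which is one plausible way to condition a per-primitive thermal value on when the TIR image was taken.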
