3D City Modeling and Observation Using Remote Sensing and Artificial Intelligence

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: 31 March 2026 | Viewed by 5772

Special Issue Editors


Guest Editor
School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China
Interests: 3D reconstruction; UAV intelligent remote sensing; 3D measurement; 3D real scene modelling; VR-AR application; deep learning algorithm; stereo visual system; laser scanning

Guest Editor
Natural Resources and Ecosystem Services, Institute for Global Environmental Strategies, Kanagawa 240-0115, Japan
Interests: geographic information systems (GIS); remote sensing; spatial modeling; data mining for urban and environmental analysis and planning; mapping urban land cover (green space, impervious surfaces, etc.); monitoring forest health using fine resolution satellite imagery
Special Issues, Collections and Topics in MDPI journals

Special Issue Information

Dear Colleagues,

3D city modeling is a fundamental pillar of digital twin applications in modern cities. Advances in remote sensing equipment, platforms, and methodologies have made the acquisition of three-dimensional spatial data increasingly accessible. Leveraging urban maps and three-dimensional datasets, the observation and analysis of critical urban objects in both static and dynamic states have been integrated into a variety of sectors, including urban planning, management, safety monitoring, and smart cities.

While remote sensing provides the tools for modeling and observation, artificial intelligence (AI) has elevated these processes to new heights. AI can recognize a broad spectrum of objects and execute complex tasks with efficiency and reliability. The proliferation of mobile remote sensing platforms, such as drones and laser scanners, has made low-cost, automated urban remote sensing monitoring widely accessible.

The evolution of 3D modeling techniques from various optimized models of multi-view geometry reconstruction is now progressing towards methods based on neural networks, such as deep learning for stereo reconstruction and neural radiance fields.

The quest to integrate more sophisticated AI algorithms into modeling and observational data analysis has emerged as a pivotal area of interest. Ongoing advancements in state-of-the-art (SOTA) methods have not only validated this trend but have also set the stage for a new era of innovation in 3D modeling.

This Special Issue invites studies covering remote sensing platforms, systems, new algorithms, and the latest experimental datasets. Relevant research can contribute to enhancing the quality, efficiency, and application level of 3D city modeling and observational data analysis. This Special Issue includes, but is not limited to, the following topics:

  • Intelligent mobile measurement platforms
  • 3D building modelling
  • 3D modelling with multi-source spatial data
  • Object recognition using AI algorithms
  • City space planning and management
  • UAV smart task planning
  • Real-time data processing
  • Neural network 3D algorithms
  • Laser scanning data processing
  • City object datasets

Dr. Zheng Ji
Dr. Brian Alan Johnson
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • 3D city modelling
  • city observation
  • AI algorithm
  • mobile measurement platform
  • laser scanning data processing
  • object recognition and positioning
  • neural network for 3D reconstruction
  • multi-source space data processing

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)


Research

19 pages, 3672 KB  
Article
DualRecon: Building 3D Reconstruction from Dual-View Remote Sensing Images
by Ruizhe Shao, Hao Chen, Jun Li, Mengyu Ma and Chun Du
Remote Sens. 2025, 17(23), 3793; https://doi.org/10.3390/rs17233793 - 22 Nov 2025
Viewed by 901
Abstract
Large-scale and rapid 3D reconstruction of urban areas holds significant practical value. Recently, methods that reconstruct buildings from off-nadir imagery have gained attention for their potential to meet the demand for large-scale, time-sensitive reconstruction applications. These methods typically estimate the building height and footprint position by extracting the building roof and the roof-to-footprint offset within a single off-nadir image. However, the reconstruction accuracy of these methods is primarily constrained by two issues: first, errors in the single-view building detection, and second, the inaccurate extraction of offsets, which is often a consequence of these detection errors as well as interference from shadow occlusion. To address these challenges, we propose DualRecon, a method for 3D building reconstruction from heterogeneous dual-view remote sensing imagery. In contrast to single-image detection methods, DualRecon achieves more accurate 3D information extraction for reconstruction by fusing and correlating building information across different views. This success can be attributed to three key advantages of DualRecon. First, DualRecon fuses the two input views and extracts building objects based on the fused image features, thereby improving the accuracy of building detection and localization. Second, compared to the roof-to-footprint offset, the disparity offset of the same rooftop between different views is less affected by interference from shadows and occlusions. Our method leverages this disparity offset to determine building height, which enhances the accuracy of height estimation. Third, we designed DualRecon with a three-branch architecture to be optimally tailored for the dual-view 3D information extraction task.
Moreover, this paper introduces BuildingDual—the first large-scale dual-view 3D building reconstruction dataset. It comprises 3789 image pairs containing 288,787 building instances, where each instance is annotated with its respective roofs in both views, roof-to-footprint offset, footprint, and the disparity offset of the roof. Experiments on this dataset demonstrate that DualRecon achieves more accurate reconstruction results than existing methods when performing 3D building reconstruction from dual-view remote sensing imagery. Our data and code will be made publicly available. Full article
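The geometric idea behind disparity-offset-based height estimation can be sketched in a few lines. This is not code from the paper: the function name, the flat-terrain parallel-projection approximation, and the signed-angle convention are all illustrative assumptions. Under that approximation, a roof at height h is displaced by h · tan(θ) / GSD pixels in a view with off-nadir angle θ, so the inter-view roof disparity encodes the height directly:

```python
import math

def height_from_disparity_offset(disparity_px, gsd_m, theta1_deg, theta2_deg):
    """Estimate building height (m) from the roof disparity offset
    between two off-nadir views (hypothetical helper, not DualRecon).

    disparity_px : signed roof displacement between the views (pixels),
                   measured along the common off-nadir direction
    gsd_m        : ground sample distance (m/pixel)
    theta*_deg   : signed off-nadir angles of the two views

    Under a flat-terrain, parallel-projection approximation the
    inter-view disparity is h * (tan(theta1) - tan(theta2)) / gsd,
    which we invert for h.
    """
    t1 = math.tan(math.radians(theta1_deg))
    t2 = math.tan(math.radians(theta2_deg))
    if abs(t1 - t2) < 1e-9:
        raise ValueError("views have (near-)identical off-nadir angles")
    return disparity_px * gsd_m / (t1 - t2)
```

The sketch also shows why the disparity offset is attractive: it depends only on the rooftop match between views, not on locating the (often shadowed or occluded) footprint.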

19 pages, 43835 KB  
Article
A Stereo Disparity Map Refinement Method Without Training Based on Monocular Segmentation and Surface Normal
by Haoxuan Sun and Taoyang Wang
Remote Sens. 2025, 17(9), 1587; https://doi.org/10.3390/rs17091587 - 30 Apr 2025
Viewed by 2187
Abstract
Stereo disparity estimation is an essential component in computer vision and photogrammetry with many applications. However, there is a lack of real-world large datasets and large-scale models in the domain. Inspired by recent advances in foundation models for image segmentation, we explore RANSAC disparity refinement based on zero-shot monocular surface normal prediction and SAM segmentation masks, which combines stereo matching models and advanced monocular large-scale vision models. The disparity refinement problem is formulated as follows: extracting geometric structures based on SAM masks and surface normal prediction, building disparity map hypotheses of the geometric structures, and selecting among the hypotheses with a weighted RANSAC method. We believe that after obtaining geometric structures, even if only part of a structure carries correct disparities, the entire correct geometric structure can be reconstructed based on the prior structure. Our method can best optimize the results of traditional models such as SGM or deep learning models such as MC-CNN. The model obtains a 15.48% D1-error without training on the US3D dataset and obtains 6.09% bad 2.0 error and 3.65% bad 4.0 error on the Middlebury dataset. The research helps to promote the development of scene and geometric structure understanding in stereo disparity estimation and the application of combining advanced large-scale monocular vision models with stereo matching methods. Full article

18 pages, 22424 KB  
Article
Class-Incremental Semantic Segmentation for Mobile Laser Scanning Point Clouds Using Feature Representation Preservation and Loss Cross-Coupling
by Xucheng Chen, Haifeng Luo, Tianqiang Huang, Hanxian He and Wenyan Hu
Remote Sens. 2025, 17(3), 541; https://doi.org/10.3390/rs17030541 - 5 Feb 2025
Cited by 2 | Viewed by 1881
Abstract
Significant progress has been made in the semantic segmentation of mobile laser scanning (MLS) point clouds based on deep learning. However, the segmentation classes of deep learning models depend on the label classes of the source point clouds used for training, which makes it difficult to generalize the models to target point clouds with novel classes. In addition, retraining models using complete class label datasets is time-consuming, and the source point clouds are often unavailable or occupy a large amount of storage space. In this paper, we propose a new class-incremental semantic segmentation framework for MLS point clouds. Firstly, to prevent catastrophic forgetting of original class knowledge when the model learns novel classes, we design a feature representation preservation-based knowledge distillation module to maintain the encoding ability of the target models for original classes. Then, to further separate novel classes from the original background classes, we introduce a background shift mechanism based on loss cross-coupling and pseudo-label collaborative training, which adaptively balances the model plasticity when learning novel class knowledge. Finally, we conducted extensive experiments on two benchmark datasets (Paris-Lille-3D and Toronto-3D), and our proposed method achieved impressive results, which indicate that the proposed framework could effectively achieve class-incremental semantic segmentation for MLS point clouds. Full article
