3D and Semantic Reconstruction of the Urban Environment Using Multi-Modal and Multi-Resolution Remote Sensing Data

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Urban Remote Sensing".

Deadline for manuscript submissions: 15 June 2024

Special Issue Editors

Guest Editor
1. Geomatics Research Group, Department of Civil Engineering, Faculty of Engineering Technology, KU Leuven, Gebroeders De Smetstraat 1, 9000 Ghent, Belgium
2. Urban Governance and Design Thrust, The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Interests: computer vision; remote sensing; 3D modeling; urban analytics

Guest Editor
Faculty of Science, Department of Geography, University of Liège, Allée du Six Août 19, 4000 Liège, Belgium
Interests: photogrammetry; remote sensing; SAR; image processing; 3D modeling

Guest Editor
Geomatics Research Group, Department of Civil Engineering, KU Leuven, Gebroeders De Smetstraat 1, 9000 Ghent, Belgium
Interests: photogrammetry; computer vision; machine learning; 3D modeling

Special Issue Information

Dear Colleagues,

Photogrammetry and remote sensing techniques are utilized to produce 3D models of urban scenes using satellite, aerial, and terrestrial data with varying levels of automation, accuracy, and replicability. These 3D models serve as the fundamental geometric structure for 3D geospatial environments and facilitate the integration of indoor data. The semantic information associated with 3D data enables spatiotemporal querying and analysis, making high-quality 3D semantic models valuable tools for environmental analysis and urban management.

The enormous volume of multi-modal remote sensing data available today requires powerful and efficient processing techniques to enable ever more accurate mapping of the urban environment at ever-higher spatial and temporal resolutions. Recently, advanced deep learning techniques have significantly improved the interpretation quality, efficiency, and level of automation in 3D scene understanding from remote sensing data. This Special Issue seeks to address the latest developments in remote sensing-based 3D urban scene reconstruction, from innovative methods and new benchmark datasets to relevant application examples.

This Special Issue welcomes contributions that showcase recent advances in processing multimodal data for the geometric and semantic generation of 3D scenes, with a focus on the urban environment. The entire pipeline is in scope: from data collection and processing, through 3D object detection, modeling, and reconstruction, up to their representation, visualization, and application in urban environment analytics. We encourage the submission of open-source methodologies and open-access datasets.

The topics of this Special Issue include, but are not limited to:

  • Weakly or self-supervised methods for extracting 3D semantic information from the urban environment;
  • Multimodal approaches for combining different sensing technologies (e.g., multispectral, LiDAR, and SAR); an illustrative fusion sketch follows this list;
  • Multiplatform (satellite, aerial, and terrestrial) and multiresolution data fusion approaches for 3D urban scene reconstruction;
  • New benchmark datasets;
  • Automatic 3D urban object identification and change detection methods from imagery, point clouds, and meshes;
  • End-to-end approaches for the automatic generation of high-level semantic objects (e.g., LoD city models and BIM);
  • Methods for efficient storage, processing, and visualization of 3D urban objects with a high level of detail and semantic attributes;
  • Assessment of the quality of 3D urban scene models, and analysis of the scalability and replicability of proposed methods;
  • Urban analytics based on multimodal remote sensing data and 3D building models.
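
As an illustration of the multimodal fusion theme above (second list item), the sketch below projects LiDAR points into a co-registered aerial image with a pinhole camera model and attaches the sampled spectral bands to each point. This is a minimal toy example under assumed inputs: the projection matrix, image size, and band count are all hypothetical, and a real pipeline would also handle occlusion, distortion, and georeferencing.

    import numpy as np

    # Hypothetical 3x4 projection matrix P = K [R | t] for a nadir aerial camera.
    K = np.array([[1000.0, 0.0, 320.0],
                  [0.0, 1000.0, 240.0],
                  [0.0, 0.0, 1.0]])
    Rt = np.hstack([np.eye(3), np.array([[0.0], [0.0], [50.0]])])  # toy extrinsics: scene ~50 m in front of the camera
    P = K @ Rt

    # LiDAR points in the same world frame, and a stand-in 4-band aerial image (toy data).
    rng = np.random.default_rng(1)
    pts = rng.uniform(-5, 5, size=(100, 3))
    image = rng.uniform(0, 1, size=(480, 640, 4))

    # Project to pixel coordinates: x ~ P [X Y Z 1]^T, then dehomogenise.
    homo = (P @ np.hstack([pts, np.ones((len(pts), 1))]).T).T
    uv = (homo[:, :2] / homo[:, 2:3]).round().astype(int)

    # Keep points that land inside the image and sample the spectral bands.
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < 640) & (uv[:, 1] >= 0) & (uv[:, 1] < 480)
    bands = image[uv[inside, 1], uv[inside, 0]]  # per-point spectral values (rows = v, cols = u)
    fused = np.hstack([pts[inside], bands])      # [XYZ | 4 bands] per point
    print(fused.shape)

The same pattern extends to other modalities: any raster product that can be georeferenced to the point cloud (multispectral imagery, a SAR-derived layer, or a classification map) can be sampled this way.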

Dr. Wufan Zhao
Dr. Andrea Nascetti
Prof. Dr. Maarten Vergauwen
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning/deep learning
  • data-efficient learning (foundation models, transfer learning, self-supervised, semi-supervised, and weakly supervised learning)
  • semantic/instance segmentation, object detection/tracking, and change detection
  • Scan-to-BIM
  • data processing and feature extraction
  • point cloud registration
  • multisource data fusion
  • application of 3D point clouds and remote sensing images
  • spatial digital twins and smart cities

Published Papers (1 paper)


Research

23 pages, 5828 KiB  
Article
Investigating Prior-Level Fusion Approaches for Enriched Semantic Segmentation of Urban LiDAR Point Clouds
by Zouhair Ballouch, Rafika Hajji, Abderrazzaq Kharroubi, Florent Poux and Roland Billen
Remote Sens. 2024, 16(2), 329; https://doi.org/10.3390/rs16020329 - 13 Jan 2024
Cited by 1
Abstract
Three-dimensional semantic segmentation is the foundation for automatically creating enriched Digital Twin Cities (DTCs) and their updates. For this task, prior-level fusion approaches show more promising results than other fusion levels. This article proposes a new approach by developing and benchmarking three prior-level fusion scenarios to enhance the outcomes of point cloud-enriched semantic segmentation. The latter were compared with a baseline approach that used the point cloud only. In each scenario, specific prior knowledge (geometric features, classified images, or classified geometric information) and aerial images were fused into the neural network’s learning pipeline with the point cloud data. The goal was to identify the one that most profoundly enhanced the neural network’s knowledge. Two deep learning techniques, “RandLaNet” and “KPConv”, were adopted, and their parameters were modified for different scenarios. Efficient feature engineering and selection for the fusion step facilitated the learning process and improved the semantic segmentation results. Our contribution provides a good solution for addressing some challenges, particularly for more accurate extraction of semantically rich objects from the urban environment. The experimental results have demonstrated that Scenario 1 has higher precision (88%) on the SensatUrban dataset compared to the baseline approach (71%), the Scenario 2 approach (85%), and the Scenario 3 approach (84%). Furthermore, the qualitative results obtained by the first scenario are close to the ground truth. Therefore, it was identified as the efficient fusion approach for point cloud-enriched semantic segmentation, which we have named the efficient prior-level fusion (Efficient-PLF) approach.
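
To make the prior-level fusion idea in this abstract concrete, here is a minimal sketch of the general mechanism it describes: per-point priors are computed (here, covariance eigenvalue features: linearity, planarity, sphericity) and stacked with the raw point attributes before anything enters a segmentation backbone. This is an illustrative reconstruction, not the authors' implementation; the feature definitions, the neighbourhood size k, and the toy data are all assumptions.

    import numpy as np

    def knn_indices(points, k):
        """Brute-force k-nearest-neighbour indices (fine for a small demo)."""
        d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
        return np.argsort(d2, axis=1)[:, 1:k + 1]  # skip the point itself

    def geometric_priors(points, k=8):
        """Per-point covariance eigenvalue features: linearity, planarity, sphericity."""
        idx = knn_indices(points, k)
        feats = np.empty((len(points), 3))
        for i, nbrs in enumerate(idx):
            nbhd = points[nbrs] - points[nbrs].mean(axis=0)
            # eigenvalues of the local covariance, sorted descending
            ev = np.sort(np.linalg.eigvalsh(nbhd.T @ nbhd / k))[::-1]
            ev = np.maximum(ev, 1e-12)
            l1, l2, l3 = ev
            feats[i] = [(l1 - l2) / l1, (l2 - l3) / l1, l3 / l1]
        return feats

    # Toy scene: N points with XYZ and RGB sampled from a co-registered aerial image.
    rng = np.random.default_rng(0)
    xyz = rng.uniform(0, 10, size=(256, 3))
    rgb = rng.uniform(0, 1, size=(256, 3))  # stand-in for projected image colours

    # Prior-level fusion: stack the priors with the raw inputs *before* learning,
    # so the network sees [XYZ | RGB | geometric priors] as one feature vector.
    fused = np.hstack([xyz, rgb, geometric_priors(xyz)])
    print(fused.shape)  # (256, 9) -- ready for a RandLA-Net/KPConv-style backbone

In the paper's terms, varying which priors are stacked here (geometric features, classified images, or classified geometric information) corresponds to the three benchmarked scenarios.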