3D Urban Scene Reconstruction Using Photogrammetry and Remote Sensing

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Engineering Remote Sensing".

Deadline for manuscript submissions: closed (30 April 2021) | Viewed by 20238

Special Issue Editor


Dr. Daniela Poli
Guest Editor
AVT-Airborne Sensing, Trento Area, Italy
Interests: aerial and satellite photogrammetry; remote sensing; 3D city modeling; 3D surface modelling; geomatics

Special Issue Information

Dear Colleagues,

High-quality 3D city models are used in geographical information systems (GIS) for smart urban management, analysis and change monitoring. The increasing use of the smart city concept in a wider range of applications has underlined the need for accurate and updated geometric representations of the urban environment itself. Indeed, 3D urban models are the basic geometric unit for 3D geospatial environments and for the integration of indoor data, while the semantic information associated with the 3D data enables spatiotemporal querying and analysis. Photogrammetry and remote sensing approaches are used to reconstruct urban scenes in 3D from satellite, aerial and terrestrial data, with different degrees of automation, accuracy and replicability. This Special Issue aims to collect papers discussing the progress in photogrammetry and remote sensing for the geometric and semantic generation of 3D city models, from data collection and processing to 3D object identification, modelling and reconstruction, up to their representation, visualization and management in a GIS environment. Submitted manuscripts should mainly focus on the novelties introduced by recent photogrammetry and remote sensing approaches to the following topics:

- Input data acquisition with new sensors for 3D urban scene modelling;

- Multi-platform (satellite, aerial, terrestrial), multi-sensor (optical, laser) and existing data fusion for 3D urban scene modelling;

- Processing of satellite, airborne, and terrestrial data for the extraction of features, geometric primitives and objects;

- Automatic 3D urban object identification from point clouds, meshes and images;

- 3D building modelling;

- 3D reconstruction and texturing of urban objects;

- Representation of 3D urban objects with level-of-detail and semantic attributes;

- Update of 3D urban models;

- Methods for efficient representation of 3D urban models;

- Assessment of quality of 3D urban scene models, analysis of scalability and replicability of proposed methods.

Dr. Daniela Poli
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (4 papers)


Research

21 pages, 7829 KiB  
Article
An Efficient and General Framework for Aerial Point Cloud Classification in Urban Scenarios
by Emre Özdemir, Fabio Remondino and Alessandro Golkar
Remote Sens. 2021, 13(10), 1985; https://doi.org/10.3390/rs13101985 - 19 May 2021
Cited by 15 | Viewed by 3369
Abstract
With recent advances in technologies, deep learning is being applied to more and more tasks. In particular, point cloud processing and classification have been studied for some time, and various methods have been developed. Some of the available classification approaches are based on a specific data source, such as LiDAR, while others are focused on specific scenarios, such as indoor environments. A major general issue is computational efficiency (in terms of power consumption, memory requirements, and training/inference time). In this study, we propose an efficient framework (named TONIC) that can work with any kind of aerial data source (LiDAR or photogrammetry) and does not require high computational power while achieving accuracy on par with current state-of-the-art methods. We also test our framework for its generalization ability, showing its capability to learn from one dataset and predict on unseen aerial scenarios.
(This article belongs to the Special Issue 3D Urban Scene Reconstruction Using Photogrammetry and Remote Sensing)
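
A rough, hedged illustration of the general idea behind lightweight aerial point cloud classification (hand-crafted per-point neighbourhood features fed to a compact classifier) is sketched below with numpy and scikit-learn on synthetic data. It is not the TONIC implementation described in the abstract; the feature set, neighbourhood size and random-forest model are illustrative assumptions.

```python
# Minimal sketch (not the TONIC framework): classify aerial points using
# hand-crafted covariance features and a lightweight random forest.
# Coordinates and labels below are synthetic placeholders for real
# LiDAR/photogrammetric points and reference classes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
xyz = rng.uniform(0, 50, size=(2000, 3))    # stand-in aerial point cloud
labels = rng.integers(0, 3, size=2000)      # stand-in classes (e.g., ground/building/vegetation)

# Per-point local covariance (eigenvalue) features plus a height range.
nn = NearestNeighbors(n_neighbors=16).fit(xyz)
_, idx = nn.kneighbors(xyz)
features = []
for neighbours in xyz[idx]:                 # (16, 3) neighbourhood per point
    cov = np.cov(neighbours.T)
    e3, e2, e1 = np.sort(np.linalg.eigvalsh(cov))   # ascending: e3 <= e2 <= e1
    features.append([
        (e2 - e3) / e1,                                     # planarity
        (e1 - e2) / e1,                                     # linearity
        e3 / e1,                                            # sphericity
        neighbours[:, 2].max() - neighbours[:, 2].min(),    # local height range
    ])
features = np.asarray(features)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(features[:1500], labels[:1500])     # train on one part of the scene
print("held-out accuracy:", clf.score(features[1500:], labels[1500:]))
```

On real LiDAR or photogrammetric tiles the same pattern applies, with the synthetic arrays replaced by actual coordinates and reference labels.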

30 pages, 22202 KiB  
Article
Relation-Constrained 3D Reconstruction of Buildings in Metropolitan Areas from Photogrammetric Point Clouds
by Yuan Li and Bo Wu
Remote Sens. 2021, 13(1), 129; https://doi.org/10.3390/rs13010129 - 01 Jan 2021
Cited by 15 | Viewed by 4181
Abstract
The complexity and variety of buildings and the defects of point cloud data are the main challenges faced by 3D urban reconstruction from point clouds, especially in metropolitan areas. In this paper, we developed a method that embeds multiple relations into a procedural modelling process for the automatic 3D reconstruction of buildings from photogrammetric point clouds. First, a hybrid tree of constructive solid geometry and boundary representation (CSG-BRep) was built to decompose the building bounding space into multiple polyhedral cells based on geometric-relation constraints. The cells that approximate the shapes of buildings were then selected based on topological-relation constraints, and geometric building models were generated using a reconstructing CSG-BRep tree. Finally, different parts of buildings were retrieved from the CSG-BRep trees, and specific surface types were recognized to convert the building models into the City Geography Markup Language (CityGML) format. The point clouds of 105 buildings in a metropolitan area in Hong Kong were used to evaluate the performance of the proposed method. Compared with two existing methods, the proposed method performed the best in terms of robustness, regularity, and topological correctness. The CityGML building models enriched with semantic information were also compared with the manually digitized ground truth, and the high level of consistency between the results suggests that the produced models will be useful in smart city applications.
(This article belongs to the Special Issue 3D Urban Scene Reconstruction Using Photogrammetry and Remote Sensing)
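
Relation-constrained CSG-BRep reconstruction as described above is a complete pipeline; the sketch below only illustrates one typical low-level ingredient of such building-reconstruction workflows, a RANSAC plane fit for extracting planar primitives (e.g., roof faces) from a noisy cloud. It is a numpy-only toy on synthetic data, not the authors' method, and the iteration count and inlier threshold are arbitrary assumptions.

```python
# Minimal sketch, not the CSG-BRep pipeline from the paper: a numpy-only RANSAC
# plane fit, the kind of geometric primitive extraction that building
# reconstruction from point clouds typically starts from. Data are synthetic.
import numpy as np

def ransac_plane(points, n_iters=500, threshold=0.05, seed=0):
    """Return (unit normal, d) of the plane n.x + d = 0 with the most inliers."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = 0, None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        inliers = np.sum(np.abs(points @ normal + d) < threshold)
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (normal, d)
    return best_model

# Synthetic "roof": a tilted plane plus noise and some clutter points.
rng = np.random.default_rng(1)
xy = rng.uniform(0, 10, size=(800, 2))
z = 0.2 * xy[:, 0] + 0.1 * xy[:, 1] + 5.0 + rng.normal(0, 0.02, 800)
cloud = np.vstack([np.column_stack([xy, z]), rng.uniform(0, 10, size=(200, 3))])

normal, d = ransac_plane(cloud)
print("estimated plane normal:", np.round(normal, 3))
```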

26 pages, 19227 KiB  
Article
Reconstruction and Efficient Visualization of Heterogeneous 3D City Models
by Mehmet Buyukdemircioglu and Sultan Kocaman
Remote Sens. 2020, 12(13), 2128; https://doi.org/10.3390/rs12132128 - 02 Jul 2020
Cited by 45 | Viewed by 8243
Abstract
The increasing efforts in developing smart city concepts are often coupled with three-dimensional (3D) modeling of envisioned designs. Such conceptual designs and planning are multi-disciplinary in nature. Realistic implementations must include existing urban structures for proper planning. The development of a participatory planning and presentation platform poses several challenges, from scene reconstruction to high-performance visualization, while keeping the fidelity of the designs. This study proposes a framework for the integrated representation of existing urban structures in CityGML LoD2 combined with a future city model in LoD3. The study area is located in Sahinbey Municipality, Gaziantep, Turkey. Existing city parts and the terrain were reconstructed using high-resolution aerial images, and the future city was designed in a CAD (computer-aided design) environment with a high level of detail. The models were integrated through a high-resolution digital terrain model. Various 3D modeling approaches together with model textures and semantic data were implemented and compared. A number of performance tuning methods for efficient representation and visualization were also investigated. The study shows that, although the object diversity and the level of detail in the city models increase, automatic reconstruction, dynamic updating, and high-performance web-based visualization of the models remain challenging.
(This article belongs to the Special Issue 3D Urban Scene Reconstruction Using Photogrammetry and Remote Sensing)
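
For readers unfamiliar with the target format mentioned above, the sketch below writes a heavily simplified CityGML 2.0 LoD2 fragment (one building with a single wall polygon) using only the Python standard library. It is not the reconstruction or visualization pipeline from the paper, the fragment is not schema-complete, and coordinates and identifiers are invented.

```python
# Minimal sketch of a CityGML 2.0 LoD2 fragment built with xml.etree.
# Simplified and not schema-complete: one building, one wall polygon,
# made-up coordinates, purely for illustration.
import xml.etree.ElementTree as ET

NS = {
    "core": "http://www.opengis.net/citygml/2.0",
    "bldg": "http://www.opengis.net/citygml/building/2.0",
    "gml":  "http://www.opengis.net/gml",
}
for prefix, uri in NS.items():
    ET.register_namespace(prefix, uri)

def q(prefix, tag):                          # qualified-tag helper
    return f"{{{NS[prefix]}}}{tag}"

city_model = ET.Element(q("core", "CityModel"))
member = ET.SubElement(city_model, q("core", "cityObjectMember"))
building = ET.SubElement(member, q("bldg", "Building"), {q("gml", "id"): "BLDG_0001"})
ET.SubElement(building, q("bldg", "measuredHeight"), uom="m").text = "9.5"

# One LoD2 wall surface as a gml:Polygon (a real model has walls, roof, ground).
bounded = ET.SubElement(building, q("bldg", "boundedBy"))
wall = ET.SubElement(bounded, q("bldg", "WallSurface"))
lod2 = ET.SubElement(wall, q("bldg", "lod2MultiSurface"))
msurf = ET.SubElement(lod2, q("gml", "MultiSurface"))
smem = ET.SubElement(msurf, q("gml", "surfaceMember"))
poly = ET.SubElement(smem, q("gml", "Polygon"))
ring = ET.SubElement(ET.SubElement(poly, q("gml", "exterior")), q("gml", "LinearRing"))
ET.SubElement(ring, q("gml", "posList")).text = "0 0 0  10 0 0  10 0 9.5  0 0 9.5  0 0 0"

print(ET.tostring(city_model, encoding="unicode"))
```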

24 pages, 6702 KiB  
Article
Spatial ETL for 3D Building Modelling Based on Unmanned Aerial Vehicle Data in Semi-Urban Areas
by Urška Drešček, Mojca Kosmatin Fras, Jernej Tekavec and Anka Lisec
Remote Sens. 2020, 12(12), 1972; https://doi.org/10.3390/rs12121972 - 19 Jun 2020
Cited by 17 | Viewed by 3407
Abstract
This paper presents an innovative approach that uses a spatial extract, transform, load (ETL) solution for 3D building modelling based on an unmanned aerial vehicle (UAV) photogrammetric point cloud. The main objective of the paper is to present a holistic workflow for 3D building modelling, emphasising the benefits of using spatial ETL solutions for this purpose. Despite the increasing demand for 3D city models and their geospatial applications, the generation of 3D city models remains challenging in the geospatial domain. Advanced geospatial technologies provide various possibilities for the mass acquisition of geospatial data that is further used for 3D city modelling, but there are large differences in the cost and quality of input data. While aerial photogrammetry and airborne laser scanning involve high costs, UAV photogrammetry has brought new opportunities, including for small and medium-sized companies, by providing a more flexible and low-cost source of spatial data for 3D modelling. In our data-driven approach, we use a spatial ETL solution to reconstruct a 3D building model from a dense image matching point cloud obtained beforehand from UAV imagery. The results are 3D building models in a semantic vector format consistent with the OGC CityGML standard, Level of Detail 2 (LOD2). The approach has been tested on selected buildings in a simple semi-urban area. We conclude that spatial ETL solutions can be used efficiently for 3D building modelling from UAV data, where the data process model developed allows the developer to easily control and manipulate each processing step.
(This article belongs to the Special Issue 3D Urban Scene Reconstruction Using Photogrammetry and Remote Sensing)
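
As a conceptual illustration of the extract-transform-load pattern referred to above (and not the authors' spatial ETL workflow), the sketch below chains three plain-Python steps, with a crude height filter standing in for building extraction. File names, column names and the threshold are hypothetical.

```python
# Conceptual sketch of the extract-transform-load pattern only: read raw
# UAV-derived points, keep points above a height threshold (a crude stand-in
# for building extraction), and write the result for the modelling step.
# File names, column names and the 2 m threshold are illustrative assumptions.
import csv
from pathlib import Path

def extract(path):
    """Read x,y,z rows from a CSV exported from dense image matching."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            yield float(row["x"]), float(row["y"]), float(row["z"])

def transform(points, min_height=2.0):
    """Keep only points likely to belong to above-ground structures."""
    pts = list(points)
    ground = min(z for _, _, z in pts)               # crude ground estimate
    return [(x, y, z) for x, y, z in pts if z - ground >= min_height]

def load(points, path):
    """Write the transformed points back out."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["x", "y", "z"])
        writer.writerows(points)

if __name__ == "__main__":
    src, dst = Path("uav_points.csv"), Path("building_points.csv")
    if src.exists():                                  # guard: input file is hypothetical
        load(transform(extract(src)), dst)
        print(f"wrote {dst}")
```

In a dedicated spatial ETL environment these hand-written steps become configurable transformers in a data process model, which is the kind of step-by-step control the abstract highlights.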
