Virtual 3D City Models

Introduction
Virtual 3D city models, in varying forms of extent and detail, are becoming more common, yet their usage might still be limited. Although virtual 3D city models have significant potential in supporting the planning, simulation, and operation of cities, districts, and neighborhoods, there are still many obstacles to the modeling and use of virtual 3D city models as "digital twins". Recent developments in data collection technologies, such as UAV sensors and platforms, are catalyzing applications for 3D mapping. For example, the relatively low-cost nature of UAVs, combined with the use of revolutionary photogrammetric algorithms such as dense image matching, has made UAV photogrammetry a strong competitor to aerial LiDAR mapping [1]. However, important gaps remain in terms of automation, efficiency, and accuracy. For example, while 3D measurement technologies are an effective means of analyzing both physical and natural objects regardless of size, site and environmental conditions may still impose important obstacles. In addition, while virtual 3D city models lend themselves to supporting planning, simulation, and operation, there is still a strong demand for a broad range of use cases that have moved beyond the conceptual and hypothetical realm and have been accomplished in practice. We can learn from both successful and less successful demonstrations of the use of virtual 3D city models to plan, simulate, and operate our urban environments. The resulting best practices and lessons learned can inspire others to follow up and explore similar or complementary applications of virtual 3D city models, and comparing them is the best way forward to ensure a reasonable perspective on the complete, successful, and effective use of the potential of virtual 3D city models.

The Contribution of This Special Issue
The articles included in this Special Issue (SI) focus on the entire spectrum from data collection, extraction, transformation, and processing for modeling and reconstruction of virtual 3D city models and their components, to application and use of 3D city models for various use cases, including semantic view analysis and solar irradiation calculation.
Gao et al. [2] address data collection and reconstruction for the analysis and reproduction of both physical and natural objects in a complex coastal environment. Specifically, they focus their attention on giant coastal rock formations and consider as a case study a famous site off the coast of Miyako City, Iwate Prefecture, Japan. Since it is difficult to obtain the required data in coastal environments with conventional measurement methods, they combine two different 3D measurement techniques: a drone-mounted camera together with global navigation satellite system data, using 3D shape reconstruction software to integrate the point cloud data generated from the high-resolution camera images. As applications, they consider tourism promotion and environmental protection awareness initiatives, and fabricate 3D digital models of the rocks with 3D printers for use in museum exhibitions, school curriculum materials, and related applications.
Salleh et al. [3] extend their scope to a virtual 3D model of the UTM university campus, a smaller-scale analogue of a 3D city model. They also explore the entire process from data collection to development of the virtual 3D model, utilizing data from aerial photos and site observations, and using SketchUp, FME, 3DCityDB, and Cesium for visualization. Although the adopted methodology was able to create LoD2 building models, issues of accuracy arose in terms of building details and positioning. In terms of applications, they consider the 3D university campus model as a foundation for the planning, navigation, and management of campus buildings.
Jovanovic et al. [4] also consider a university campus (Novi Sad) as their case study, but use airborne LiDAR as input to the reconstruction process. Specifically, they present a workflow from data collection by LiDAR, through extract, transform, load (ETL) transformations, and data processing to the development of a 3D virtual city model. They discuss future potential usage scenarios in various fields of application, such as modern ICT-based urban planning and 3D cadaster.
Murtiyoso et al. [1] also emphasize a reconstruction workflow from point cloud data, but address the automation of this workflow. Starting from UAV-derived point clouds and resulting in an LoD2-compatible 3D model, they present the development and implementation of an automated workflow comprising building point cloud segmentation, the generation of roof planes, and the creation of 3D building models using the roof planes as a base. Their results show that their rule-based segmentation approach works well, with the additional advantages of instance segmentation and automatic semantic attribute annotation. Overall, the 3D modeling algorithm performs well for roofs of low to medium complexity.
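To make the roof-plane step concrete: a common building block of such workflows is fitting planes to building point clouds, for instance with RANSAC. The sketch below is a minimal illustration in Python with NumPy, not the authors' rule-based implementation; the function name and tolerance are assumptions of this sketch. It extracts the dominant plane from a point cloud:

```python
import numpy as np

def ransac_plane(points, n_iters=200, tol=0.05, rng=None):
    """Fit a single dominant plane to an (N, 3) point cloud with RANSAC.

    Returns (normal, d, inlier_mask) for the plane n.x + d = 0.
    Illustrative sketch only -- real roof segmentation would iterate,
    removing inliers to find further planes.
    """
    rng = np.random.default_rng(rng)
    best_mask = np.zeros(len(points), dtype=bool)
    best_plane = (np.array([0.0, 0.0, 1.0]), 0.0)
    for _ in range(n_iters):
        # Hypothesize a plane from three random points.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        n /= norm
        d = -n.dot(p0)
        # Inliers: points within tol of the hypothesized plane.
        mask = np.abs(points @ n + d) < tol
        if mask.sum() > best_mask.sum():
            best_mask, best_plane = mask, (n, d)
    return best_plane[0], best_plane[1], best_mask
```

Repeating the fit on the remaining points after removing inliers yields one plane per roof face, which can then serve as the base geometry for an LoD2 model.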
Fotsing et al. [5] focus their attention on point cloud registration, which is usually achieved through spatial transformations that align and merge multiple point clouds into a single, globally consistent model. Where conventional methods seek to align the largest number of common points between entities, they present a new segmentation-based approach that consists of extracting plane structures from point clouds and then estimating, with the 4-Point Congruent Sets (4PCS) technique, the transformations that align these plane structures. They demonstrate that their method, which aims to align the largest number of planes, considerably reduces the data size, computational workload, and execution time, and produces better alignment of the point clouds.
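As a small illustration of plane-based alignment (not the authors' 4PCS procedure; the helper below and its Kabsch/SVD formulation are assumptions of this sketch), once corresponding plane normals between two point clouds are known, the rotation aligning them can be estimated in closed form:

```python
import numpy as np

def rotation_from_normals(src, dst):
    """Least-squares rotation R with dst_i ~ R @ src_i (Kabsch method).

    src, dst: (N, 3) arrays of corresponding unit plane normals.
    Illustrative sketch: a full registration would also recover the
    translation from the plane offsets.
    """
    H = src.T @ dst                      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Sign correction guards against a reflection (det = -1).
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ S @ U.T
```

Working with a handful of plane normals instead of millions of raw points is what gives plane-based registration its large reduction in data size and computation.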
Wang et al. [6], in turn, address image-based 3D reconstruction, which requires reliable image matching. They present a novel quasi-dense matching method based on triangulation constraints and propagation, applied to close-range images under challenging conditions such as illumination changes, large viewpoint changes, and scale changes. Beginning from a set of sparse matched points, an initial Delaunay triangulation is constructed, and edge-to-edge matching propagation is then conducted for point matching. A hierarchical matching strategy is adopted for matching two types of primitives from the edges of triangles. Points that cannot be matched in the first stage are further matched in a second stage that combines descriptor and Mahalanobis distance constraints. Subsequently, the triangulation is updated with the newly matched points, and the matching is repeated iteratively until no new matching points are generated. Their results reveal that the proposed method is highly robust across different image types and obtains reliable matching results.
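The seed triangulation underlying such propagation can be sketched as follows (a minimal illustration using SciPy's Delaunay triangulation; the edge enumeration, which yields the candidates along which matches would be propagated, is my own simplification of the paper's scheme):

```python
import numpy as np
from scipy.spatial import Delaunay

def triangulation_edges(points):
    """Build a Delaunay triangulation over sparse seed matches and return
    its unique edges -- the units along which edge-to-edge matching
    propagation would proceed in a scheme like Wang et al.'s."""
    tri = Delaunay(points)
    edges = set()
    for a, b, c in tri.simplices:
        for e in ((a, b), (b, c), (a, c)):
            edges.add(tuple(sorted(e)))   # deduplicate shared edges
    return tri, sorted(edges)
```

In the iterative scheme, newly matched points would be appended to `points` and the triangulation rebuilt, until no new matches are generated.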
Finally, two manuscripts focus entirely on applications of a virtual 3D city model. Virtanen et al. [7] consider view analysis and visibility, applicable to property valuation and the evaluation of urban green infrastructure. Specifically, they present a near real-time semantic view analysis relying on a 3D city model, implemented in a web browser and tested in two alternative use cases: property valuation and the evaluation of urban green infrastructure. Their results describe the elements visible from a given location and can also be applied to object-type-specific analysis, such as green view index estimation, with the main benefit being the freedom of choosing the point of view afforded by the 3D model.
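A green view index of the kind mentioned can be computed from a semantic view as the fraction of rendered pixels classified as vegetation. The sketch below is a minimal illustration, not the authors' implementation; the class id and function name are hypothetical:

```python
import numpy as np

VEGETATION = 2  # hypothetical class id for vegetation in the semantic rendering

def green_view_index(label_image):
    """Fraction of a rendered semantic view covered by vegetation pixels.

    label_image: 2D integer array of per-pixel semantic class ids, as
    produced by rendering a semantically segmented 3D city model from a
    chosen point of view.
    """
    return float(np.mean(label_image == VEGETATION))
```

Because the view is rendered from the 3D model rather than captured on site, the index can be evaluated for any point of view, which is the freedom the authors highlight.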
Liang et al. [8] consider as an application the interactive calculation of solar irradiation on 3D surfaces in a virtual environment. Their virtual environment combines a 3D city model, a digital elevation model (DEM), a digital surface model (DSM), and feature layers. Their open-source software application Solar3D extends the GRASS GIS r.sun solar radiation model from 2D to 3D by feeding the model with input, including surface slope, aspect, and time-resolved shading, derived directly from the 3D scene using computer graphics techniques. The application can consume heterogeneous 3D city models, including massive ones such as oblique airborne photogrammetry-based 3D city models (OAP3Ds, or integrated meshes). It can perform near real-time pointwise calculation for durations from daily to annual at arbitrary surface positions, including rooftops, facades, and the ground.
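To illustrate the role that slope and aspect play in such a calculation (an illustrative sketch, not Solar3D's implementation; the function and coordinate conventions below are my own), the cosine of the solar incidence angle on a tilted surface follows directly from the surface normal and the sun direction:

```python
import math

def incidence_cosine(slope, aspect, sun_elev, sun_azim):
    """Cosine of the angle between the sun direction and a surface normal.

    slope, aspect: surface tilt from horizontal and facing azimuth
    (radians, azimuth measured clockwise from north).
    sun_elev, sun_azim: solar elevation and azimuth (radians).
    Returns 0 when the surface faces away from the sun (self-shaded);
    shading by other geometry would be handled separately, e.g. by ray
    casting against the 3D scene.
    """
    # Surface normal in an east/north/up frame.
    n = (math.sin(slope) * math.sin(aspect),
         math.sin(slope) * math.cos(aspect),
         math.cos(slope))
    # Unit vector pointing toward the sun.
    v = (math.cos(sun_elev) * math.sin(sun_azim),
         math.cos(sun_elev) * math.cos(sun_azim),
         math.sin(sun_elev))
    return max(0.0, sum(a * b for a, b in zip(n, v)))
```

Deriving slope and aspect per surface point from the 3D scene is what lets a tool of this kind evaluate facades and rooftops alike, rather than only the horizontal raster surfaces a 2D model sees.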
Altogether, this Special Issue presents potential best practices and lessons learned with respect to the development and application of virtual 3D city models. These can inspire others to follow up and explore similar or complementary applications of virtual 3D city models. Obviously, no one size fits all and, as such, comparing successful practices and lessons learned is the best way forward to ensure a reasonable perspective on the complete, successful, and effective use of the potential of virtual 3D city models. I believe this Special Issue makes an important step toward this ultimate objective, and I would like to thank all the authors for their contributions.