Article

Three-Dimensional Multitemporal Game Engine Visualizations for Watershed Analysis, Lighting Simulation, and Change Detection in Built Environments

by Heikki Kauhanen 1,*, Toni Rantanen 1, Petri Rönnholm 1, Osama Bin Shafaat 1, Kaisa Jaalama 2, Arttu Julin 1 and Matti Vaaja 1

1 Department of Built Environment, Aalto University, 02150 Espoo, Finland
2 Built Environment Solutions, Finnish Environment Institute, Latokartanonkaari 11, 00790 Helsinki, Finland
* Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2025, 14(7), 265; https://doi.org/10.3390/ijgi14070265
Submission received: 1 May 2025 / Revised: 30 June 2025 / Accepted: 1 July 2025 / Published: 5 July 2025

Abstract

This study explores the reuse of high-resolution 3D spatial datasets for multiple urban analyses within a game engine environment, aligning with circular economy principles in sustainable urban planning. The work is situated in two residential test areas in Finland, where watershed analysis, lighting simulation, and change detection were conducted using data acquired through drone photogrammetry and terrestrial laser scanning. These datasets were processed and visualized using Unreal Engine 5.5, enabling the interactive, multitemporal exploration of urban phenomena. The results demonstrate how a single photogrammetric dataset—originally captured for visual or structural purposes—can serve a broad range of analytical functions, such as simulating seasonal lighting conditions, modeling stormwater runoff, and visualizing spatial changes over time. The study highlights the importance of capturing data at a resolution that satisfies the most demanding intended use, while allowing simpler analyses to benefit simultaneously. Reflections on game engine capabilities, data quality thresholds, and user interactivity underline the feasibility of integrating such tools into citizen participation, housing company decision making, and urban governance. The findings advocate for a circular data approach in urban planning, reducing redundant fieldwork and supporting sustainable data practices through multi-purpose digital twins and spatial simulations.

1. Introduction

Multitemporal visualization, combined with interactive tools, has advantages over static representations. Multitemporal data allows for the illustration of historical development, which may assist in detecting temporal trends and understanding present conditions [1]. Through simulation, probable future trends can be visualized [2]. All this is useful not only for professionals but also for non-professionals. Professionals are able to utilize visual analytics [3], combining their pre-existing knowledge, human ability to interpret visual connections or anomalies, and other available analyses to better understand data or phenomena. For non-professionals, interactive multitemporal visualizations help to intuitively grasp trends, changes and consequences. Interactive visualization environments can also serve as tools to enable citizens to participate in and collaborate on scientific projects, such as in citizen science [4]. Game engines are excellent for the interactive visualization of multitemporal data [5]. Mohd et al. [6] analyzed the strengths and weaknesses of several current game engines and concluded that Unreal Engine and Unity stand out from the other alternatives.
Game engines provide a straightforward framework for developing games and allow optimized access to cutting-edge 3D rendering, physics, and interaction [7]. The scope of game engines has expanded from pure entertainment into serious games, which have a focused purpose. For example, Djaouti et al. [8] list some fields in which serious games can be utilized: state, government, military, defense, healthcare, education, corporate, religious, culture, art, ecology, politics, humanitarian, advertising, and scientific research. In some serious games, realistic and spatially accurate 3D models can be an essential part of making applications useful [9,10]. While 3D models can be created to represent fantasy worlds or objects, this article focuses on their application in depicting real-world environments due to their usefulness in conducting and visualizing urban analyses [11]. Photogrammetry and laser scanning are efficient technologies for measuring existing environments, providing photorealistic and accurate 3D models [12].
Game engines can utilize detailed photogrammetric and laser-scanned data [13]. Some game engines even have plugins for directly accessing 3D geospatial data from web services, e.g., Cesium Web Globe and Esri’s Web Globe [14]. Game engines enable the streamlined popularization of complex spatial data and analysis results. Recent game engine developments have enabled the rendering of much larger mesh models in real time, even with ray traced lighting [15]. However, to achieve real-time rendering of the data, game engines may approximate some spatial calculations, thus leading to inaccuracies in the visualization results [16]. In reality, both the inaccuracies of the source data and the computational approximations contribute to the total error.
The use of multi-temporal 3D point cloud data in game engines is noted as an increasingly valuable method for immersive geovisualization, urban planning and decision making [17]. By exploiting game engines, interactive environments can be created to visualize changes in landscapes or cultural sites over time. This approach allows users to explore dynamic changes in a more intuitive and engaging way. For example, Papadopoulou et al. [5] demonstrated the use of 3D data from UAV photogrammetry to create virtual reality experiences that depict the evolution of geological monuments. Lercari et al. [18] developed a web-based platform that uses temporal site tracking to improve the sustainability of cultural heritage. Their system uses game engine technology to interactively visualize data changes, making it an effective tool for heritage conservation. It has been shown that urban 3D point clouds can support urban decision making and scenario-based planning [19]. This could be further enriched through visually rich and temporally detailed simulations in an interactive game engine environment, aimed at real-time decision making.
Urban digital twins (UDTs) are a particularly important topic for game engine-based geospatial data applications. They are emerging as critical tools for planning, management, and the analysis of complex urban environments [20]. By integrating heterogeneous spatial data, sensor networks, and simulation models, UDTs enable cities to monitor, simulate, and optimize operations in the areas of infrastructure, mobility, and sustainable development [19,21,22]. Recent developments highlight the need for UDTs to evolve from static representations towards adaptive, interoperable systems that support stakeholder collaboration and dynamic decision making [23]. These emerging digital ecosystems reflect a paradigm shift in how urban knowledge is produced, shared, and applied. As such, UDTs present significant potential for enhancing knowledge-based urban governance and fostering urban resilience. The performance of UDT data in the Unreal game engine was examined in Rantanen et al. [24].
Recent developments in sustainable urban planning increasingly emphasize the principles of the circular economy, where the efficient use, reuse, and repurposing of resources—including digital assets like spatial data—play a pivotal role. The need to make more informed decisions about energy efficiency, material flows, and lifecycle analysis of the built environment has been increasingly noted in the recent literature (e.g., [25]). As highlighted in the Circular Green Blocks project (2021–2023) [26], the integration of building-specific 3D geospatial data could enhance data-driven decision making among housing companies. However, only a few studies illustrate neighborhood-level analytics in decision making to benefit the circular economy. Spatial data-driven approaches could potentially support climate-conscious renovations, optimize solar energy potential, and contribute to green infrastructure planning, all of which are central to the current development of the circular economy in the built environment (e.g., [27]).
This work contributes to a circular data philosophy, wherein data is treated as a reusable resource, by demonstrating how one high-quality spatial dataset can be repurposed for multiple analysis types and interactive visualizations. The combination of photogrammetry-based 3D models and game engine-based simulations not only minimizes redundant data acquisition but also provides actionable insights for urban actors—from planners to citizen stakeholders—fostering participatory and sustainable planning practices.
This paper presents three technical demonstrations and proofs of concept within two study sites located in urban blocks, illustrating the potential of utilizing same-source data at the neighborhood level in a game engine visualization. The three demos are (1) watershed analysis, (2) lighting analysis, and (3) change detection analysis. The interconnection of these three use cases in the game engine is discussed in Section 4. The demos were built in Unreal Engine (version 5.5), a powerful commercial game engine. Unreal Engine 5.5 was selected due to its new features and support for the Cesium for Unreal plugin. These new features include the virtualized geometry system Nanite [28], which automatically handles levels of detail (LoDs) and allows high-quality photogrammetric scans with polygon counts reaching billions to be utilized directly. Nanite’s performance and visual quality appear to be better than those achieved with traditional methods [29]. Another feature is the advanced lighting system, Lumen [30,31], which uses multiple ray tracing methods to enable fully dynamic global illumination and reflections in real time. Unreal Engine also offers a visual scripting system called “Blueprint” [32], which provides an alternative to traditional programming for application development.

2. Materials and Methods

2.1. Study Area

The study includes two target areas in which the sustainable use of spatial data analyses is explored. The two areas are located in Malminkartano (60.2430° N, 24.8637° E) and Latokaski (60.1780° N, 24.6659° E) in Finland; both fall within UTM zone 35N. Figure 1 illustrates both target areas in an oblique game engine view. The study areas are located 13 km apart but are included seamlessly in the same demo environment, as explained in Section 2.4. Both study areas were part of the Circular Green Blocks project, in which sustainable practices and social phenomena not limited to geoinformation were studied [26].
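For readers working with the data, the quoted geographic coordinates can be converted to the projected systems used in this study. A minimal sketch with the pyproj library (the library choice is ours; the paper’s datasets were georeferenced to ETRS-GK25, EPSG:3879):
```python
from pyproj import Transformer

# Study-area centroids quoted above, as (lon, lat) in WGS84 degrees.
sites = {"Malminkartano": (24.8637, 60.2430), "Latokaski": (24.6659, 60.1780)}

to_gk25 = Transformer.from_crs("EPSG:4326", "EPSG:3879", always_xy=True)    # ETRS-GK25
to_utm35 = Transformer.from_crs("EPSG:4326", "EPSG:32635", always_xy=True)  # UTM zone 35N

for name, (lon, lat) in sites.items():
    e_gk, n_gk = to_gk25.transform(lon, lat)
    e_utm, n_utm = to_utm35.transform(lon, lat)
    print(f"{name}: GK25 E={e_gk:.0f} N={n_gk:.0f} | UTM35N E={e_utm:.0f} N={n_utm:.0f}")
```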

2.2. Data Collection

This study uses multiple data acquisition methods, such as terrestrial laser scanning, terrestrial photogrammetry, mobile handheld laser scanning, and drone photogrammetry. The study primarily utilizes the drone photogrammetry dataset. Terrestrial laser scanning data was applied only to test a higher-resolution voxel shadow map for the lighting demo and as reference data for evaluating the geometric quality of the drone-based photogrammetric mesh models.

2.2.1. Drone Photogrammetry

The drone photogrammetry image capture was conducted using a Nordic Drones Geodrone 6 sourced from Nordic Drones Oy, Muurame, Finland. The drone was equipped with a Sony RX1R II camera sourced from Sony Europe B.V., Vantaa, Finland, featuring a fixed 35 mm lens and a 42.4-megapixel full-frame sensor. The photogrammetric processing was done using Agisoft Metashape Pro 2.2.0. All datasets were georeferenced to the GK25 coordinate system.
Both study sites were surveyed twice using UAVs to enable multitemporal analysis. The first survey took place in early summer, still within the leaves-off season, while the second survey was conducted in early fall, after significant vegetation growth during the summer. Using the change detection techniques described in Section 2.3.3, it was possible to analyze the growth between the two survey dates. More specifically, the first Latokaski flight was conducted on 9 June 2022, and Propeller AeroPoint ground control targets were utilized to georeference the data. The second survey was done on 12 September 2022. This data was later registered with the first dataset using the Iterative Closest Point (ICP) algorithm in CloudCompare version 2.13.2, as described in Section 2.3.3. The Malminkartano flights were conducted on 9 June 2022 and 12 September 2022, but only the 9 June 2022 dataset was used in this work, since the watershed and lighting demos implemented in Malminkartano did not require multitemporal data. However, the methods presented in this article can later be applied to the Malminkartano data from 12 September as well. Additionally, a DJI Mavic 3 drone sourced from DJI Europe, Frankfurt am Main, Germany, was used to capture images for visual reference inspection. These images were not used as part of the photogrammetric process.
All images were captured with manual focus set to infinity. With a wide-angle lens such as the one used, this results in a hyperfocal distance of roughly 10 m. This was done to ensure acceptable focus throughout the entire dataset and to stabilize the internal orientation of the camera system. For the same reason, no image stabilization was used. When using manual focus, care should be taken, since some cameras are able to focus past infinity. Shutter priority was used to control motion blur. This causes the aperture to change during the survey, but as the flight altitude was much greater than the hyperfocal distance, there are only minor variations between the photos.
Metashape’s statistical tools were used to refine the accuracy of the photogrammetric bundle block, first by removing the points that were observed in only two photos. The bundle adjustment was rerun each time points were removed from the sparse point cloud. After that, the points with the highest reprojection error were selected and removed. The selection criteria were chosen empirically so that a few thousand to ten thousand points were removed in each iteration. The process was repeated until the reprojection error was under half a pixel. The final sparse point cloud specifications for each dataset are shown in Table 1. The RMSE shown in the table is the RMS reprojection error in pixels, and GSD is the ground sample distance in millimeters, as reported by Agisoft Metashape.
It should be noted that not all datasets can reach a reprojection error under half a pixel without compromising redundancy, so care should be taken when specifying the quality requirements. The statistical refinement of the bundle block also serves as a quality control tool and allows for automatic processing when scripted. However, in this work, the process was performed manually to identify the optimal procedure for future automation.
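The core of the loop is simple enough to script. The sketch below reproduces only the selection logic with synthetic errors; the function and the commented-out bundle adjustment call are hypothetical stand-ins for the corresponding Metashape operations, not the API used in this study:
```python
import numpy as np

def filter_sparse_cloud(errors_px, target_rms=0.5, batch=5000):
    """Iteratively drop the highest-reprojection-error tie points until the
    RMS error falls below target_rms (pixels). `errors_px` stands in for the
    per-point reprojection errors; in practice each removal round would be
    followed by a bundle adjustment re-run, which re-estimates the errors."""
    errors = np.sort(np.asarray(errors_px, dtype=float))
    while np.sqrt(np.mean(errors**2)) >= target_rms and errors.size > batch:
        errors = errors[:-batch]      # remove the `batch` worst points
        # rerun_bundle_adjustment()   # hypothetical: re-optimize cameras here
    return errors

# Example with synthetic errors: most points are good, a long tail is not.
rng = np.random.default_rng(0)
errs = rng.gamma(shape=2.0, scale=0.25, size=800_000)
kept = filter_sparse_cloud(errs)
print(f"kept {kept.size} points, RMS = {np.sqrt(np.mean(kept**2)):.3f} px")
```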
In Malminkartano, the 3D RMSE of five ground control points (GCPs) was 1.04 cm. The distribution of the GCPs is illustrated in Figure 2, with error ellipsoids scaled ×1500. The 3D RMSE of the GCPs in Latokaski was 0.58 cm, and the distribution of the five available GCPs is presented in Figure 3, with error ellipsoids scaled ×2000. Figure 4 illustrates the imaging geometry of the Malminkartano spring case.

2.2.2. Terrestrial Laser Scanning

The laser scanning campaigns were carried out using a Leica RTC360 terrestrial laser scanner (TLS) sourced from Leica Geosystems Oy, Espoo, Finland, and processed in Leica Cyclone Register 360 2022.1.1 software. Spherical targets were utilized to register the data from the scan stations into a uniform point cloud for each study area. The specifications of the surveys can be found in Table 2. The data was georeferenced to the ETRS-GK25 (EPSG:3879) coordinate system using RTK-GNSS measurements of the sphere target locations. Each survey took a full day to complete. The point clouds were later decimated to a 2 cm point spacing.
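The 2 cm decimation is straightforward to reproduce outside Leica’s software. A minimal sketch using the open-source Open3D library (a substitution for illustration; the file names are hypothetical):
```python
import open3d as o3d

# Load a registered TLS point cloud (hypothetical file name) and thin it to
# roughly one point per 2 cm voxel, mirroring the decimation described above.
pcd = o3d.io.read_point_cloud("malminkartano_tls.ply")
pcd_2cm = pcd.voxel_down_sample(voxel_size=0.02)  # 2 cm grid
print(len(pcd.points), "->", len(pcd_2cm.points), "points")
o3d.io.write_point_cloud("malminkartano_tls_2cm.ply", pcd_2cm)
```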

2.3. Creation of Demo Environments

The analysis types were chosen to benefit the people living in the test areas and to demonstrate how modern spatial analysis can aid decision making in urban environments. For example, the residents of one house reported water pooling in the yard, which motivated us to analyze the stormwater runoff of the yard. The analysis types are watershed analysis, lighting analysis, and change detection analysis. These analyses are described in the following sections.
The three different analysis results were visualized in a game environment, where interacting with the model is possible to better understand the results. Figure 5 illustrates the workflow, highlighting how visualization enables the display of both historical development and future trends.
Unreal Engine 5.5 was chosen to implement the three selected demos in an interconnected environment. Unreal Engine’s lighting system, Lumen, allows full-resolution shading and indirect lighting to be computed in real time. Lumen provides both software- and hardware-based ray tracing options for indirect lighting computation. Ray tracing is a rendering method used in computer graphics that aims to mimic the physical behavior of light. It allows, e.g., for higher photorealistic quality and more accurate reflections and refractions compared to traditional rendering methods, albeit at the cost of speed. In this study, hardware ray tracing was selected because it supports more geometry types and achieves higher quality than software ray tracing, despite requiring compatible hardware and being more expensive in terms of performance. Additionally, by default, hardware ray tracing is computed against a much lower-quality mesh (i.e., the Nanite fallback mesh) to achieve real-time performance when using static meshes with Nanite enabled. This can lead to inaccuracies in lighting behavior, as the underlying geometry does not exactly match the Nanite mesh. However, the detail of the fallback mesh can be increased at the cost of performance if a more accurate lighting simulation is needed or if there are, e.g., inconsistencies with shadows [28,30,31].

2.3.1. Watershed

For the watershed analysis, the drone photogrammetry-based mesh was used to produce a true orthophoto and a DEM (digital elevation model). This process was conducted in Agisoft Metashape and involves transforming the 3D data into 2.5D data, which then allows the use of 2D raster analysis tools in other software. The watershed analysis was conducted using the r.watershed tool in QGIS version 3.40.8. For this demonstration, only the DEM, “Minimum size of exterior watershed basin”, and “Maximum length of surface flow” were used as inputs. The parameter values were iterated until the results appeared feasible; this outcome was achieved with both values set to 300. The “Maximum memory to be used with -m flag (in MB)” was set to 20,000.
In a real use case, a hydrology expert would be consulted regarding the parameter values. In this study, only the “Drainage direction” and “Stream segments” outputs were used from the various analysis results produced by the tool. These two results let us find the streams where the stormwater would likely flow as well as the direction of the flow.
The flow direction was calculated by reading the direction from the “Drainage direction” raster for each pixel that had a value in the “Stream segments” raster. This produced a raster of stream segments coded with the flow direction, as shown in Figure 6, and a visualization in which arrows illustrate the direction of flow and colors also indicate the direction, allowing the results to be seen even from a distance (Figure 7). A close-up illustration of flow directions is visible in Figure 8.
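The recoding step amounts to masking one raster with another. A minimal sketch with rasterio (file names are hypothetical, and the exact r.watershed direction coding should be checked against the GRASS documentation):
```python
import numpy as np
import rasterio

# Hypothetical file names; both rasters come from r.watershed and share a grid.
with rasterio.open("drainage_direction.tif") as drain, \
     rasterio.open("stream_segments.tif") as streams:
    direction = drain.read(1)
    segments = streams.read(1)
    profile = drain.profile

# Keep the drainage direction only where a stream segment exists; elsewhere
# write nodata. The direction codes follow the GRASS r.watershed convention
# (small integers in 45-degree steps; see the GRASS manual for the mapping).
nodata = -9999
coded = np.where(segments > 0, direction, nodata).astype("int32")

profile.update(dtype="int32", nodata=nodata, count=1)
with rasterio.open("stream_flow_direction.tif", "w", **profile) as dst:
    dst.write(coded, 1)
```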
However, occluded areas, such as ground under shelters or tree canopies, cannot be included in a raster-based watershed analysis. It is possible to calculate the watershed analysis in 3D [33]; thus, the TLS data could have been used to solve this issue. However, in this study, using readily available tools such as QGIS was prioritized to save time and resources, and to demonstrate how 2D results can later be visualized in a 3D environment as texture maps. It remains possible to later expand the analysis utilizing the already captured TLS data. The resulting watershed vectors were visualized in QGIS as arrows and overlaid on top of the DEM. The DEM was color-coded to illustrate the height differences of the courtyard, as shown in Figure 9. The result was exported as a raster image and imported into Unreal Engine as a texture. Since the raster and the orthophoto exported from Metashape share the same format, the same projection can be applied to both when using them as textures. Furthermore, the texture was manually modified to exclude vertical details, as these would be stretched when a 2D raster is applied to texture a 3D model.
Figure 10 shows the watershed stream vectors of a flat rooftop building, illustrating how the drainage system works according to the watershed analysis results. The elevation color scaling is changed compared to the previous figure to cover the elevations relevant for this example. The game engine visualization, as shown in Section 3.1, uses the colormap shown in Figure 9. It is possible to change the colormap to suit a specific application.

2.3.2. Lighting

The lighting simulation was created using the CesiumSunSky actor (CSS) [34], which is included with the Cesium for Unreal plugin and based on Unreal Engine’s Sun Position Calculator plugin (SPC) [35]. The actor provides a geospatially accurate sun and sky system that can be controlled and modified through the Blueprint system. The position of the sun can be configured by providing the latitude, longitude, and time zone of the area of interest. The Blueprint system was used to create lighting simulations at the hour, day, month, and year levels, allowing users to inspect lighting conditions in Latokaski and Malminkartano at a selected time or by running the simulation through a chosen interval, such as a month. Figure 11 shows the lighting simulation controls set to 27 September 2024, with the simulation looping from 6:12 to 18:30 based on the input in the “Hour” settings. The speed of the simulation can be altered by changing the “Hour Step” setting. It is also possible to cycle through days or months. Changing the year only changes the displayed date in the simulation.
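What CesiumSunSky provides amounts to a solar position computation from latitude, longitude, and time. A hedged sketch of the same hour-loop using the astral Python library (an assumption for illustration; the demo itself uses the Blueprint system, not this code):
```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo
from astral import Observer
from astral.sun import azimuth, elevation

# Latokaski study area; times in the Finnish time zone.
obs = Observer(latitude=60.1780, longitude=24.6659)
tz = ZoneInfo("Europe/Helsinki")

# Loop from 6:12 to 18:30 on 27 September, mirroring the simulation interval
# shown in Figure 11, with a one-hour "Hour Step".
t = datetime(2024, 9, 27, 6, 12, tzinfo=tz)
end = datetime(2024, 9, 27, 18, 30, tzinfo=tz)
while t <= end:
    print(f"{t:%H:%M}  elevation {elevation(obs, t):6.2f}  azimuth {azimuth(obs, t):6.2f}")
    t += timedelta(hours=1)
```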

2.3.3. Change Detection

Change detection data was derived from the drone photogrammetry-based mesh models in CloudCompare using the cloud-to-mesh (C2M) method. Prior to change detection, the data was registered in CloudCompare using ICP. Since the data was already in the same coordinate system, ICP was only used to fine-tune the registration. The process was run for 2000 iterations with no RMS threshold; while this is computationally wasteful, it ensured that the ICP process fully converged. The initial registration RMSE was 85 mm. For the change detection, the spring dataset captured on 9 June 2022, in the leaves-off state, was considered the baseline, and the vertices of the fall (12 September 2022) leaves-on mesh were colored according to the differences between the datasets. Differences under 40 cm and over 4 m were excluded from the change detection data to visualize only the relevant changes. These values were chosen empirically and visually inspected. Finally, the scalar field produced by the C2M process was converted to RGB values for later use in the UE visualization.
Following this, the registration quality was tested with the fall dataset, first using only points with changes under 40 cm and then only points with changes under 10 cm. This was done to benchmark the registration quality after eliminating significant changes between the two timestamped datasets. The resulting RMSEs were 70 mm and 42 mm, respectively.
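For readers without CloudCompare, the same ICP-then-C2M pipeline can be approximated with the open-source Open3D library. The sketch below is a substitution, not the workflow used in this study; file names, sampling density, and the correspondence distance are illustrative assumptions:
```python
import numpy as np
import open3d as o3d

spring = o3d.io.read_triangle_mesh("latokaski_spring.ply")  # baseline, leaves off
fall = o3d.io.read_triangle_mesh("latokaski_fall.ply")      # compared, leaves on

# Fine-tune the registration with point-to-plane ICP on sampled points.
src = fall.sample_points_uniformly(number_of_points=500_000)
dst = spring.sample_points_uniformly(number_of_points=500_000)
dst.estimate_normals()
icp = o3d.pipelines.registration.registration_icp(
    src, dst, max_correspondence_distance=0.5, init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane(),
    criteria=o3d.pipelines.registration.ICPConvergenceCriteria(max_iteration=2000))
fall.transform(icp.transformation)

# Cloud-to-mesh distances: distance from each fall vertex to the spring mesh.
scene = o3d.t.geometry.RaycastingScene()
scene.add_triangles(o3d.t.geometry.TriangleMesh.from_legacy(spring))
query = o3d.core.Tensor(np.asarray(fall.vertices), dtype=o3d.core.Dtype.Float32)
dist = scene.compute_distance(query).numpy()

# Keep only relevant changes (0.4 m to 4 m) and map them to a blue-to-red ramp.
t = np.clip((dist - 0.4) / (4.0 - 0.4), 0.0, 1.0)
colors = np.stack([t, np.zeros_like(t), 1.0 - t], axis=1)  # red up, blue down
colors[(dist < 0.4) | (dist > 4.0)] = 0.5                  # gray out excluded range
fall.vertex_colors = o3d.utility.Vector3dVector(colors)
o3d.io.write_triangle_mesh("latokaski_change_detection.ply", fall)
```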
Figure 12 illustrates the change detection data in CloudCompare, overlaid on the mesh calculated from the 9 June 2022 drone survey. The color scale ranges from 40 cm (blue) to 4 m (red). Both meshes were imported into Unreal Engine as .fbx files for visualization and integration with the other two demo cases. The two meshes were treated as separate layers in Unreal Engine, allowing the change detection data to be toggled on and off with a hotkey during the visualization demo. As the change detection layer is a mesh, it also affects the lighting simulation. However, this can be toggled off if the goal is only to visualize the changes without affecting the lighting.

2.4. Setting up Game Engine Environments

The application for visualizing the three different analyses was built using Unreal Engine (version 5.5) in conjunction with the Cesium for Unreal plugin [36]. The Cesium for Unreal plugin was used as the basis for the implementation because it provides a global-scale, high-accuracy WGS84 globe with a digital terrain model (Cesium World Terrain, CWT) [37] within Unreal Engine. This allows the photogrammetric 3D models to be placed in their real-world locations within the game engine application. To implement the two areas of interest in the game engine, high-quality photogrammetric 3D scans of Latokaski and Malminkartano were imported as static meshes utilizing the Nanite system. In addition, the Latokaski multitemporal change detection mesh (fall dataset) was imported using the same process. These models were then visually georeferenced within the game engine to their real-world locations. This was especially important for enabling the lighting simulation. As an additional feature, a fly-to functionality was added, which allows the user to quickly fly between the two areas of interest, as shown in Supplementary Video S1.

3. Results

This section presents all three demos described in the Methods section within the same game engine environment, where the user can navigate and observe the material using a keyboard and mouse. The results were also set up in virtual reality (VR); however, as 2D figures are ill-suited to portraying such visualizations, they are not illustrated in this article. All three demos are interconnected in both versions of the game engine demo environment. Supplementary Video S1 shows a fly-through of the demo environment; note that the user can navigate freely in the model, and the video is only one example of navigating the demo.

3.1. Watershed Results

Watershed simulation results were derived by implementing the methodology described in Section 2.3.1. Figure 13 shows a textured mesh of the Malminkartano study area, which serves as a baseline for overlaying the watershed layer, visualized in Unreal Engine. The empty information box at the top left corner indicates that the lighting simulation is not running and thus no ray traced shadows are cast.
Figure 14 illustrates the same mesh as shown in Figure 13, but with the watershed simulation layer on top. The watershed layer is a separate mesh with the same geometry as the base mesh but with the vertical polygons removed. This is done due to limitations in the dimensionality of the watershed analysis in QGIS, as described in Section 2.3.1. The watershed vectors are animated in Unreal Engine and there is a video in the Supplementary Material illustrating the animated flow of the stream vectors.
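One way to produce such an overlay mesh is to drop near-vertical faces based on their normals. A hypothetical sketch with the trimesh library (the tooling, file names, and threshold are our assumptions, not the authors’ stated method):
```python
import numpy as np
import trimesh

# Derive a watershed-overlay mesh by dropping near-vertical faces, whose
# texture would otherwise be stretched by the 2D raster overlay.
mesh = trimesh.load("malminkartano_mesh.obj", force="mesh")

# A face is "vertical enough" when its normal is nearly horizontal,
# i.e., the z-component of the unit face normal is close to zero.
z_component = np.abs(mesh.face_normals[:, 2])
keep = z_component > 0.2  # threshold is an illustrative choice

mesh.update_faces(keep)
mesh.remove_unreferenced_vertices()
mesh.export("malminkartano_watershed_layer.obj")
```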

3.2. Lighting Results

Lighting simulation results are derived from implementing the methodology described in Section 2.3.2. Figure 15 shows a textured mesh calculated from the 9 June 2022 drone survey, excluding ray traced shadows. Any shadows shown in the model are the same as those present when the drone images were captured. Since the weather was overcast during the survey, minimal shadows are present. This allows for simulating ray traced shadows in Unreal Engine.
Figure 16 illustrates the same mesh as shown in Figure 15, but with ray traced shadows enabled. As described in Section 2.3.2, the date and time of the simulation can be chosen freely. In this example, it is set to 15 August 13:48.
To visually validate the results of the lighting simulation, Figure 17 shows a reference drone image from 15 August 2022, at 13:48, while Figure 18 shows the same area with the lighting simulation date set to match the reference image. This allows us to observe whether the simulated shadows are cast the same way as in the reference image. Note how the simulation images, using geometry data from the 9 June 2022 survey, show much less vegetation compared to the reference drone image, resulting in fewer shadows. This is due to significant vegetation growth between the timestamps of the model geometry and the reference drone image, as 9 June is usually still the leaves-off season in Finland, while 15 August shows the leaves-on scenario.

3.3. Change Detection Results

Change detection results are derived from implementing the methodology described in Section 2.3.3. Figure 19 shows the reference mesh calculated from the 9 June 2022 drone survey, and the compared mesh is based on the 12 September 2022 mission.
Figure 20 illustrates the compared change detection mesh overlaid on top of the reference mesh in a separate layer, which users can toggle on and off interactively in the game environment. The color scale ranges from 40 cm to 4 m, as described in Section 2.3.3.

4. Discussion

Using the same source data for all analyses ensures data integrity, provided there are no quality or temporal differences in the data. The drawback is that the data quality needs to be sufficient to meet the highest requirements dictated by the chosen analysis types. In this case, photorealistic visualization and, in particular, lighting simulation demand high-quality geometry data, allowing less demanding analyses, such as stream vector calculation for watershed analysis, to be performed using the same dataset. Higher-resolution data requires more demanding fieldwork. However, this paper demonstrates that with modern surveying techniques, such as drone photogrammetry, the effort to capture high-detail data is manageable. Furthermore, using the same data for multiple data products reduces waste and promotes a circular working philosophy, minimizing time-consuming fieldwork through data reuse. Even though the initial fieldwork may require more effort compared to capturing only enough data to meet the minimum requirements, the long-term benefits outweigh the initial demands. This study mainly utilized the drone photogrammetry data, but a TLS campaign was conducted as well. The TLS data can later be used to extend the results shown in this paper. For example, it is possible to conduct another TLS campaign several years later and use the change detection workflow to compare the TLS datasets, revealing slow changes such as ground subsidence. Furthermore, possible subsidence affects the watershed analysis as well, leading to multitemporal hydrological analysis. However, since no hydrology expert was available for this study, the watershed analysis results were only visually inspected and would need to be further validated before the workflow is used in a real use case.
The level of accuracy required of the source data depends on the application. However, since not all applications may be known when the survey is planned, it would seem that the goal should be to aim for as high a quality as possible so the data can be utilized for various applications. In reality, this might not be feasible, and the amount of work as well as the available resources should be carefully considered.
In this paper, the drone survey flight altitude of over one hundred meters was rather high, and thus the ground sample distance was not as small as it could have been with a lower flight altitude or a longer focal length lens. A GSD under 2 cm can still be considered high resolution, especially when compared to more traditional remote sensing, but it can still be limiting for certain applications. For example, Papadopoulou et al. [5] propose different GSD requirements for various cartographic scales, which, in turn, are defined by the quality requirements of different archeological applications down to the fossil level, which requires a GSD under 0.1 cm and a flight altitude of five meters. It is quite clear that such a setup would not be feasible for covering block-scale city areas, but it can be utilized for certain details. The important realization is that modern game engines utilizing technologies such as Nanite can handle real-time scaling of the data, making it possible to visualize hundreds of millions of polygons in real time, thus opening up new possibilities and applications.
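The relationship between flight altitude, focal length, and GSD is easy to sanity-check. The sketch below uses nominal Sony RX1R II sensor specifications, which are our assumptions rather than values reported in the paper:
```python
# Back-of-the-envelope GSD check using nominal Sony RX1R II parameters
# (42.4 MP full frame: ~7952 px across a ~35.9 mm wide sensor; 35 mm lens).
sensor_width_mm = 35.9
image_width_px = 7952
focal_length_mm = 35.0
flight_altitude_m = 125.0  # Malminkartano spring flight (Table 1)

pixel_pitch_mm = sensor_width_mm / image_width_px                    # ~0.00451 mm
gsd_mm = pixel_pitch_mm * (flight_altitude_m * 1000) / focal_length_mm
print(f"nominal GSD at {flight_altitude_m:.0f} m: {gsd_mm:.1f} mm")  # ~16.1 mm
```
The result, roughly 16 mm, is in the same range as the 17.9 mm reported in Table 1 for the 125 m flight; the difference is plausible since Metashape averages the GSD over the actual ground elevations and viewing geometry.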
To illustrate some examples of how the analyses introduced in this paper can be utilized, Figure 21 shows a flat roof building where it is easy to observe how the drainage system works. This can be useful for semi-automatically identifying problem areas in urban settings or verifying whether the system is working as designed. Initially, the watershed analysis was implemented to determine whether areas where water tends to pool in the courtyard could be detected. Once the stream vectors were calculated, other interesting places, such as the flat roof, were found. This highlights the potential of gathering high-quality spatial data, as it allows for revisiting the data later to conduct certain analyses when the need arises.
As all three demo cases are interconnected, it is possible to observe different phenomena from the same data using the same vantage point. Figure 22 shows the same area as Figure 21, but with the watershed analysis layer hidden. Now, the textured mesh detail can be seen, and the ray traced shadows from the lighting demo are cast. By toggling the watershed analysis on and off, it is easy to gain a good understanding of the target area. The game engine environment allows users to freely navigate and inspect various points of interest. This is highlighted in Supplementary Video S1.
Thirdly, Figure 23 shows the same building with two additional stories cloned on top, therefore casting shadows further on the roofline of the neighboring building compared to Figure 22. This can be utilized to visualize how additional construction would affect the shadowing of the environment. It is also possible to calculate watershed analysis for these virtually placed structures.
Like the roof drainage system, the watershed analysis can also be used to simulate water flows on street areas, as shown in Figure 24. When compared with Figure 25, the location of the drain observed from the textured mesh is seen to match the location to which the stream vectors point. This is more easily observed in Supplementary Video S1, which shows the animated stream vectors. Another way to verify the feasibility of the watershed analysis is by examining the color height map and confirming that the lowest point of the street area matches the location of the drain.
Using the change detection results shown in Section 3.3, it is possible to assess whether the vegetation growth during the summer season influences how different areas are being shadowed. Figure 26 shows the date set to 27 September 2022, at 1 PM. The geometry is from the spring dataset, so the trees do not have leaves. However, when the change detection layer obtained from the fall dataset, as described in Section 2.3.3, is toggled on, it can be seen that parts of the roof are shadowed much more when trees have leaves. This can be utilized when planning, for example, solar panel installations or identifying suitable locations for urban farming. The changes detected near the center of the image in Figure 27 are urban farming irrigation boxes. The date and time can be freely chosen in the demo environment as described in Section 2.3.2.
Akin to the validation results shown in Section 3.2 with Figure 17 and Figure 18, Figure 28 and Figure 29 illustrate how the shadows match between the simulated results (Figure 29) and a reference drone image (Figure 28). In this example, the geometry in Figure 29 is the textured mesh calculated from the spring survey, overlaid with the change detection result based on comparing the spring and fall survey geometries. These results can be helpful for urban green planning, as vegetation changes can be observed, measured, and simulated. Additionally, in a follow-up project, ways to include climate scenarios, including carbon flows, in the game engine visualization environment are currently being researched. This aims to allow the visualization of possible future scenarios and more effective communication with decision makers and other stakeholders.
Visualization provides an opportunity to study multitemporal phenomena, such as dynamic shadows that cover certain parts of buildings differently with changing seasons and times of day. Change detection results also affect the lighting model, enabling a better understanding of how factors like leaf growth affect how buildings are shadowed and potentially alter watershed analysis throughout the year. The game engine environment also offers an opportunity to include virtual objects, such as planned Urban Green Infrastructure (UGI) elements or to illustrate how additional building projects would affect the environment. Energy considerations represent an important opportunity. Visualization enables the evaluation of the heat load during a sunny day or can be utilized to simulate how planting a tree to shade a façade could affect the heat load of the building. Similarly, the game engine can aid in the design of a solar panel installation by analyzing which parts of a roof or other structures receive the most sun [38]. This is especially important for façade-mounted panels, which are more likely to be shadowed by other elements, such as buildings or vegetation.
The findings of this study show how neighborhood- and building-scale 3D data supports resource-efficient and climate-smart decision making in urban environments. The reuse of a single, high-resolution dataset for watershed, lighting, and change detection analyses also illustrates a concrete application of digital circularity—one where spatial data is not consumed once but serves multiple functions over its lifecycle.
In the context of urban planning and decision making, the approach introduced in this paper allows for interactive scenario evaluation, such as estimating the benefits of new vegetation (e.g., for cooling), understanding water drainage behavior, or assessing the impact of structural changes on shading and energy use. These types of simulations align with sustainability audits, green retrofitting, and long-term renovation planning, offering a visual and data-backed foundation for investment decisions. Furthermore, this study demonstrates how circular data practices can be implemented through game engines, which may lower the barrier to understanding for non-expert users.
In comparison to recent publications, this paper introduces a methodology for implementing an interconnected demo environment where multiple analysis types can be interactively toggled on and off for dynamic visualization. In addition, the article shows how the Nanite technology released in 2022 can be used to overcome the traditional need to produce several models at different scales and observe them separately, as is the case, for example, in the earlier work by Papadopoulou et al. [5]. However, the use of different scales can still be advantageous even when Nanite is used, covering larger areas with less detail and interesting areas with higher precision. In such a case, the workflow proposed in this article would utilize Nanite to seamlessly navigate between different scales, as opposed to the traditional way of switching the source material to observe different scales. Virtanen et al. [23] discuss various digital twin platforms, data sources, and social aspects of the applications. This point of view is important for popularizing spatial applications. Virtanen et al. mention social urban digital twins, where citizens are involved in developing these platforms by contributing in various ways. After all, digital twins should be updated regularly, and crowdsourcing is one possible way to achieve that. The web-based 3D viewers discussed by Virtanen et al. are easily accessible platforms for non-professionals; however, they are limited in terms of more demanding visualizations. The workflow introduced in this paper shows three demo examples for which the demand came from the community involved in the Circular Green Blocks project [26]; it promotes the same social phenomena and modular urban twin aspects as Virtanen et al., while also supporting the rigorous approach of aiming for photorealistic, precise 3D models as illustrated by Papadopoulou et al. [5]. However, the current implementation of the demos shown in this paper does not utilize source data of equally high precision, even though the coverage is larger.

5. Conclusions

The article aimed to illustrate how high-quality 3D datasets can serve multiple purposes, making it cost-efficient and sustainable to aim for higher-than-typical spatial data quality, thus increasing the usefulness and reuse of the data. Similarly, game engine visualization was studied to provide an accessible environment for non-professionals, enabling the results to reach a broader audience, involving more people, and helping to spread knowledge about the possibilities of using modern spatial data.
In addition, the study highlights the relevance of these methods in the context of circular economy thinking, where digital resources, such as spatial datasets, are reused across applications, reducing redundant measurements and promoting long-term value. The integration of analytical workflows within a single environment demonstrates a scalable and modular approach suitable for neighborhood- or district-level planning. The ability to interactively choose between lighting, hydrological, and change detection analyses in real time within the game engine interface offers practical tools for housing companies, city planners, and community stakeholders. Future developments could further automate these workflows and integrate additional environmental indicators, enabling the wider adoption of participatory and evidence-based urban planning. Since the approach is modular and updatable, it is possible to expand the demo environment by adding more data and analysis types into the system.

Supplementary Materials

The following supporting information can be downloaded at https://doi.org/10.5281/zenodo.15313720 (accessed on 4 July 2025). Video S1: Unreal engine demo.

Author Contributions

Conceptualization, Heikki Kauhanen, Kaisa Jaalama, Petri Rönnholm and Matti Vaaja; methodology, Heikki Kauhanen; software, Heikki Kauhanen; validation, Heikki Kauhanen; formal analysis, Heikki Kauhanen and Osama Bin Shafaat; investigation, Heikki Kauhanen, Toni Rantanen and Osama Bin Shafaat; resources, Heikki Kauhanen and Toni Rantanen; data curation, Heikki Kauhanen; writing—original draft preparation, Heikki Kauhanen, Petri Rönnholm, Toni Rantanen and Matti Vaaja; writing—review and editing, Heikki Kauhanen, Petri Rönnholm, Kaisa Jaalama, Matti Vaaja, Arttu Julin and Toni Rantanen; visualization, Heikki Kauhanen; supervision, Matti Vaaja and Petri Rönnholm; project administration, Matti Vaaja; funding acquisition, Matti Vaaja. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the European Regional Development Fund, grant numbers A77489 and A80427. In addition, this research has been supported by Aalto University School of Engineering doctoral school funding.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Lin, Y.; Yang, H.; Weihong, W.; Yehua, S.; Xin, J. Processing of Multitemporal 3D Point Cloud Data for Use in Reconstructing Historical Geographic Scenarios. Sens. Mater. 2022, 34, 4551–4568.
  2. Liu, X.; Liang, X.; Li, X.; Xu, X.; Ou, J.; Chen, Y.; Li, S.; Wang, S.; Pei, F. A future land use simulation model (FLUS) for simulating multiple land use scenarios by coupling human and natural effects. Landsc. Urban Plan. 2017, 168, 94–116.
  3. Keim, D.A.; Mansmann, F.; Schneidewind, J.; Ziegler, H.; Thomas, J. Visual Analytics: Scope and Challenges. In Visual Data Mining: Theory, Techniques and Tools for Visual Analytics; Simoff, S.J., Böhlen, M.H., Mazeika, A., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2008; pp. 76–90.
  4. Berger, C.; Gerke, M. Current and potential use of augmented reality in (geographic) citizen science projects: A survey. Geo-Spat. Inf. Sci. 2024, 27, 1605–1621.
  5. Papadopoulou, E.E.; Papakonstantinou, A.; Kapogianni, N.A.; Zouros, N.; Soulakellis, N. VR multiscale geovisualization based on UAS multitemporal data: The case of geological monuments. Remote Sens. 2022, 14, 4259.
  6. Mohd, T.K.; Bravo-Garcia, F.; Love, L.; Gujadhur, M.; Nyadu, J. Analyzing Strengths and Weaknesses of Modern Game Engines. Int. J. Comput. Theory Eng. 2023, 15, 54–60.
  7. Würstle, P.; Padsala, R.; Santhanavanich, T.; Coors, V. Viability testing of game engine usage for visualization of 3D geospatial data with OGC standards. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, 10, 281–288.
  8. Djaouti, D.; Alvarez, J.; Jessel, J.-P. Classifying serious games: The G/P/S model. In Handbook of Research on Improving Learning and Motivation Through Educational Games: Multi-Disciplinary Approaches; Felicia, P., Ed.; IGI Global: Hershey, PA, USA, 2011; pp. 118–136.
  9. Kontogianni, G.; Koutsaftis, C.; Skamantzari, M.; Georgopoulos, A. Utilising 3D Realistic Models in Serious Games for Cultural Heritage. Int. J. Comput. Methods Herit. Sci. (IJCMHS) 2017, 1, 21–46.
  10. Golovina, O.; Kazanci, C.; Teizer, J.; König, M. Using serious games in virtual reality for automated close call and contact collision analysis in construction safety. In Proceedings of the 36th International Symposium on Automation and Robotics in Construction, Banff, Canada, 21–24 May 2019; pp. 967–974.
  11. Khoury, M.; Gibson, M.J.; Savic, D.; Chen, A.S.; Vamvakeridou-Lyroudia, L.; Langford, H.; Wigley, S. A Serious Game Designed to Explore and Understand the Complexities of Flood Mitigation Options in Urban–Rural Catchments. Water 2018, 10, 1885.
  12. Julin, A.; Jaalama, K.; Virtanen, J.P.; Maksimainen, M.; Kurkela, M.; Hyyppä, J.; Hyyppä, H. Automated multi-sensor 3D reconstruction for the web. ISPRS Int. J. Geo-Inf. 2019, 8, 221.
  13. Virtanen, J.P.; Daniel, S.; Turppa, T.; Zhu, L.; Julin, A.; Hyyppä, H.; Hyyppä, J. Interactive dense point clouds in a game engine. ISPRS J. Photogramm. Remote Sens. 2020, 163, 375–389.
  14. Jeddoub, I.; Nys, G.A.; Hajji, R.; Billen, R. Data integration across urban digital twin lifecycle: A comprehensive review of current initiatives. Ann. GIS 2024, 1–20.
  15. Lambru, C.; Morar, A.; Moldoveanu, F.; Asavei, V.; Moldoveanu, A. Comparative Analysis of Real-Time Global Illumination Techniques in Current Game Engines. IEEE Access 2021, 9, 125158–125183.
  16. Dyrda, D.; Belloni, C. Space Foundation System: An Approach to Spatial. In Proceedings of the 2024 IEEE Conference on Games (CoG), Milano, Italy, 5–8 August 2024.
  17. Newell, R.; Canessa, R.; Sharma, T. Visualizing our options for coastal places: Exploring realistic immersive geovisualizations as tools for inclusive approaches to coastal planning and management. Front. Mar. Sci. 2017, 4, 290.
  18. Lercari, N.; Jaffke, D.; Campiani, A.; Guillem, A.; McAvoy, S.; Delgado, G.J.; Bevk Neeb, A. Building cultural heritage resilience through remote sensing: An integrated approach using multi-temporal site monitoring, datafication, and Web-GL visualization. Remote Sens. 2021, 13, 4130.
  19. Urech, P.R.; Mughal, M.O.; Bartesaghi-Koc, C. A simulation-based design framework to iteratively analyze and shape urban landscapes using point cloud modeling. Comput. Environ. Urban Syst. 2022, 91, 101731.
  20. Mazzetto, S. A Review of Urban Digital Twins Integration, Challenges, and Future Directions in Smart City Development. Sustainability 2024, 16, 8337.
  21. Gautier, J.; Christophe, S.; Brédif, M. Visualizing 3D climate data in urban 3D models. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2020, 43, 781–789.
  22. Lehtola, V.V.; Koeva, M.; Oude Elberink, S.; Raposo, P.; Virtanen, J.P.; Vahdatikhaki, F.; Borsci, S. Digital twin of a city: Review of technology serving city needs. Int. J. Appl. Earth Obs. Geoinf. 2022, 114, 102915.
  23. Virtanen, J.P.; Alander, J.; Ponto, H.; Santala, V.; Martijnse-Hartikka, R.; Andra, A.; Sillander, T. Contemporary development directions for urban digital twins. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2024, 48, 177–182.
  24. Rantanen, T.; Julin, A.; Virtanen, J.-P.; Hyyppä, H.; Vaaja, M.T. Open Geospatial Data Integration in Game Engine for Urban Digital Twin Applications. ISPRS Int. J. Geo-Inf. 2023, 12, 310.
  25. Bellini, A.; Tadayon, A.; Andersen, B.; Klungseth, N.J. The role of data when implementing circular strategies in the built environment: A literature review. Clean. Environ. Syst. 2024, 13, 100183.
  26. Circular Green Blocks—HSY. Available online: https://www.hsy.fi/en/hsy/hsys-projects/project-pages/circular-green-blocks/ (accessed on 30 April 2025).
  27. Pearlmutter, D.; Theochari, D.; Nehls, T.; Pinho, P.; Piro, P.; Korolova, A.; Papaefthimiou, S.; Garcia-Mateo, M.C.; Calheiros, C.; Zluwa, I.; et al. Enhancing the circular economy with nature-based solutions in the built urban environment: Green building materials, systems and sites. Blue-Green Syst. 2020, 2, 46–72.
  28. Nanite Virtualized Geometry in Unreal Engine|Unreal Engine 5.5 Documentation|Epic Developer Community. Available online: https://dev.epicgames.com/documentation/en-us/unreal-engine/nanite-virtualized-geometry-in-unreal-engine (accessed on 30 April 2025).
  29. Díaz-Alemán, M.D.; Amador-García, E.M.; Díaz-González, E.; de la Torre-Cantero, J. Nanite as a Disruptive Technology for the Interactive Visualisation of Cultural Heritage 3D Models: A Case Study. Heritage 2023, 6, 5607–5618.
  30. Lumen Global Illumination and Reflections in Unreal Engine|Unreal Engine 5.5 Documentation|Epic Developer Community. Available online: https://dev.epicgames.com/documentation/en-us/unreal-engine/lumen-global-illumination-and-reflections-in-unreal-engine (accessed on 30 April 2025).
  31. Lumen Technical Details in Unreal Engine|Unreal Engine 5.2 Documentation|Epic Developer Community. Available online: https://dev.epicgames.com/documentation/en-us/unreal-engine/lumen-technical-details-in-unreal-engine?application_version=5.2 (accessed on 30 April 2025).
  32. Blueprints Visual Scripting in Unreal Engine|Unreal Engine 5.5 Documentation|Epic Developer Community. Available online: https://dev.epicgames.com/documentation/en-us/unreal-engine/blueprints-visual-scripting-in-unreal-engine (accessed on 30 April 2025).
  33. Ketcham, R. New watershed methods for isolating and characterizing discrete objects in 3D data sets. Tomogr. Mater. Struct. 2025, 7, 100043.
  34. Using a Geospatially Accurate Sun—Cesium. Available online: https://cesium.com/learn/unreal/unreal-geospatially-accurate-sun/ (accessed on 30 April 2025).
  35. Geographically Accurate Sun Positioning Tool in Unreal Engine|Unreal Engine 5.4 Documentation|Epic Developer Community. Available online: https://dev.epicgames.com/documentation/en-us/unreal-engine/geographically-accurate-sun-positioning-tool-in-unreal-engine?application_version=5.4 (accessed on 30 April 2025).
  36. Cesium for Unreal—Cesium. Available online: https://cesium.com/platform/cesium-for-unreal/ (accessed on 30 April 2025).
  37. Cesium World Terrain—Cesium. Available online: https://cesium.com/platform/cesium-ion/content/cesium-world-terrain/ (accessed on 30 April 2025).
  38. Clausen, C.S.B.; Ma, Z.G.; Jørgensen, B.N. Can we benefit from game engines to develop digital twins for planning the deployment of photovoltaics? Energy Inform. 2022, 5 (Suppl. 4), 42.
Figure 1. An overview of the UE demo environment, showing both study areas marked with orange ellipses: Malminkartano at the top and Latokaski at the bottom. The coloring of parts of the study areas reflects the analysis results as explained in Section 2.3.1 for Malminkartano and Section 2.3.3 for Latokaski.
Figure 2. Malminkartano spring ground control.
Figure 3. Latokaski spring ground control.
Figure 4. Camera station geometry on top of the Malminkartano study area mesh.
Figure 5. Workflow of the three interconnected demos in the game engine.
Figure 6. Flow direction coding and symbols for visualization.
Figure 7. Stream segment visualization in QGIS. The coloring of the stream segments reflects the estimated flow directions following the scheme illustrated in Figure 6.
Figure 8. Stream segment visualization detail in QGIS. The coloring of the stream segments and the direction of the arrows reflect the estimated flow directions following the scheme illustrated in Figure 6.
Figure 9. QGIS watershed vectors and height visualization with a colormap indicating elevation above sea level.
Figure 10. QGIS watershed vectors and height visualization with colormap, scaled.
Figure 11. Lighting simulation interval control options in the Unreal Engine editor.
Figure 12. Latokaski study site change detection data in CloudCompare, overlaid on the mesh calculated from the 9 June 2022 drone survey.
Figure 13. Malminkartano study area without watershed vectors.
Figure 14. Malminkartano study area with animated watershed vectors.
Figure 15. Malminkartano study area without ray traced shadows.
Figure 16. Malminkartano study area with ray traced shadows.
Figure 17. Reference drone image from the Malminkartano study area.
Figure 18. Malminkartano study area simulated lighting image.
Figure 19. Latokaski study area change detection visualization reference mesh.
Figure 20. Latokaski study area change detection visualization, with the color-coded change detection mesh layer overlaid on the reference mesh.
Figure 21. Watershed analysis results from the Malminkartano study area, highlighting the expected behavior of the rooftop drainage system.
Figure 22. Malminkartano study area: Lighting simulation image of a building casting ray traced shadows.
Figure 23. Malminkartano study area: Lighting simulation image of a building with two additional stories virtually cloned on top, affecting the cast shadows.
Figure 24. Watershed analysis results show how the water is expected to flow into a drain on a road in the Malminkartano study area.
Figure 25. Textured mesh of a road in the Malminkartano study area showing the location of the drain highlighted with a red circle.
Figure 26. Latokaski study area: Change detection geometry affecting solar energy potential and irrigation locations with leaves off.
Figure 27. Latokaski study area: Change detection geometry affecting solar energy potential and irrigation locations with leaves on.
Figure 28. Reference drone image from the Latokaski study area.
Figure 29. Simulated lighting image of the Latokaski study area, with a change detection analysis layer and lighting simulation timestamp matching the reference image shown in Figure 28.
Table 1. Specifications of the sparse drone point clouds.

|                  | Malminkartano Spring | Malminkartano Fall | Latokaski Spring | Latokaski Fall |
|------------------|----------------------|--------------------|------------------|----------------|
| Flying altitude  | 125 m                | 128 m              | 127 m            | 114 m          |
| Number of photos | 515                  | 449                | 587              | 125            |
| Number of points | 863,250              | 755,809            | 271,868          | 83,411         |
| RMSE             | 0.499 px             | 0.497 px           | 0.464 px         | 0.490 px       |
| GSD              | 17.9 mm              | 17.1 mm            | 18.3 mm          | 15.7 mm        |
Table 2. Specifications of the terrestrial laser scan point clouds.

|                  | Malminkartano Spring | Malminkartano Fall | Latokaski Spring | Latokaski Fall |
|------------------|----------------------|--------------------|------------------|----------------|
| Number of scans  | 51                   | 58                 | 66               | 92             |
| Number of points | 999,451,537          | 1,746,420,107      | 1,392,724,299    | 3,111,419,008  |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
