The Imperial Cathedral in Königslutter (Germany) as an Immersive Experience in Virtual Reality with Integrated 360° Panoramic Photography

Abstract: As virtual reality (VR) and the corresponding 3D documentation and modelling technologies evolve into increasingly powerful and established tools for numerous applications in architecture, monument preservation, conservation/restoration and the presentation of cultural heritage, new methods for creating information-rich interactive 3D environments are increasingly in demand. In this article, we describe the development of an immersive virtual reality application for the Imperial Cathedral in Königslutter, in which 360° panoramic photographs were integrated within the virtual environment as a novel and complementary form of visualisation. The Imperial Cathedral (Kaiserdom) of Königslutter is one of the most important examples of Romanesque architecture north of the Alps. The cathedral had previously been recorded by laser scanning and 360° panoramic photography by the Photogrammetry & Laser Scanning lab of HafenCity University Hamburg in 2010. With the recent rapid development of consumer VR technology, it was subsequently decided to investigate how these two data sources could be combined within a single immersive VR application for tourism and architectural heritage preservation. A specialised technical workflow was developed to build the virtual environment in Unreal Engine 4 (UE4) and to integrate the panoramic photographs so as to ensure the seamless combination of these two datasets. A simple mechanic was developed using the native UE4 node-based programming language to switch between these two modes of visualisation.


Introduction
Virtual reality has recently become a much broader field, finding applications in medicine, architecture, military training, and cultural heritage, among other fields. With this growth has come some discrepancy in the definition of the medium: while in some fields VR is used to refer to 360° immersive panoramas and videos, in other fields it refers to fully realised interactive CGI environments. These two "kinds" of VR have traditionally been approached very differently, owing to highly diverging workflows and the different data sources required. However, there are currently no effective ways of bringing together these two kinds of data (each of which has its own complementary advantages in visualisation) into a single VR application. This is particularly important for applications in cultural heritage, where documentation often takes the form of multiple different kinds of complementary data (e.g., written, photographic, 3D, video and field recordings, among other forms).
Virtual reality is defined by Merriam-Webster as "an artificial environment which is experienced through sensory stimuli (such as sights and sounds) provided by a computer and in which one's actions partially determine what happens in the environment" [1]. This very broad definition allows for most modern applications of VR to be taken into account. Additional definitions may be found in the literature, e.g., in Dörner et al. [2], Freina and Ott [3], and Portman et al. [4].
In the following we present the development workflow for a room-scale virtual reality experience of a cultural heritage monument which integrates a high-resolution CGI environment with 360° panoramic photography, allowing the user to "toggle" between the virtual and the real environments from within the VR headset. This implementation has the advantage of combining the interactivity of a real-time game engine environment with the high fidelity of high dynamic range image (HDRI) panoramic photography.

Related Work
While much credit for the popularization of VR technology and its increasing accessibility is due to the video game industry, which has invested heavily in pushing the technology forward [5], VR is now being employed in a wide range of disciplines. To date, VR has been successfully used for, among other applications, virtual surgery, virtual therapy, and flight and vehicle simulations. In the field of cultural heritage, VR has been instrumental in the development of the field of virtual heritage [6][7][8]. At the HafenCity University Hamburg, several VR projects concerning this subject have already been realized. The town museum in Bad Segeberg, housed in a 17th-century townhouse, was digitally constructed for a VR experience using the HTC Vive Pro [9]. Three historical cities (as well as their surrounding environments) have been developed as VR experiences: Duisburg in 1566 [10], Segeberg in 1600 [11], and Stade in 1620 [12]. In addition, two religious and cultural monuments are also available as VR experiences: the Selimiye Mosque in Edirne, Turkey [13], and a wooden model of Solomon's Temple [14].
The amount of work specifically regarding the real-time VR visualization of cultural heritage monuments is currently limited but growing. Recent museum exhibits using real-time VR to visualize cultural heritage include Batavia 1627 at the Westfries Museum in Hoorn, Netherlands [15], and Viking VR, developed to accompany an exhibit at the British Museum [16]. A number of recent research projects also focus on the use of VR for cultural heritage visualization [17][18][19][20], as well as on aspects beyond visualisation, including recreating the physical environmental stimuli [21]. The current paper contributes to this growing discussion by seeking to integrate 360° panorama photographs within an immersive real-time visualization of a cultural heritage monument. At this stage, only very limited work regarding panoramic photography integration in real-time VR is known to the authors [22].

The Imperial Cathedral (Kaiserdom) in Königslutter, Germany
The town of Königslutter, some 20 km east of Braunschweig (Lower Saxony, Germany), is dominated by the Imperial Cathedral, known in German as the Kaiserdom (Figure 1). One of the most impressive examples of Romanesque architecture north of the Alps, the cathedral's construction was begun under the direction of Kaiser Lothar III, German emperor from 1133 onwards [23,24]. The church was built in the form of a three-aisled, cross-shaped column basilica. The cathedral is particularly notable for its repeated references to northern Italian architectural styles of the time, indicating that it might be the work of an Italian architect, or at least of someone well-travelled in those regions. Among the cathedral's most important features is an ornamental hunting frieze, which hugs the external wall of the main aisle.

Project Workflow
The overall workflow for the production of the VR experience of the Kaiserdom is schematically represented in Figure 2. Special focus was given to achieving a realistic 1:1 representation of the cathedral, including the integration of panoramic photos in the VR experience (see Section 4.6). The project was divided into five major phases of development (Figure 2): (1a) data acquisition by terrestrial laser scanning with one Riegl VZ-400 scanner (outside) and two Zoller + Fröhlich IMAGER 5006 scanners (inside), (1b) registration and geo-referencing of scans using RiScan Pro and LaserControl, (1c) segmentation of point clouds into object tiles, (2a) 3D solid modelling with AutoCAD using segmented point clouds, (2b) generation of panoramic images using PTGui, (3) texture mapping of polygon models using Autodesk Maya and Substance Painter, (4a) placement of meshes and building the scene within the UE4 game engine, (4b) integration of motion control and interactions in UE4, (4c) integration of 360° panoramic imagery, and (5) immersive and interactive visualisation of the cathedral in the VR system HTC Vive Pro using Steam VR 2.0 as an interface between the game engine and the Head Mounted Display (HMD).


Data Acquisition
The data acquisition was already described in 2012 in Kersten and Lindstaedt [25] and is summarised in the following. The laser scan data for the Kaiserdom was acquired at 55 stations inside the cathedral by two Zoller + Fröhlich IMAGER 5006 (www.zf-laser.com) terrestrial laser scanners, and at 8 stations outside the cathedral by one Riegl VZ-400 (www.riegl.com) on 5 January and 23 June 2010 (Figure 3). In total, the scanning took 15 h. The scanning resolution was set to high (6 mm @ 10 m) for the IMAGER 5006 and to 5 mm at object space for the Riegl VZ-400. The precision of the geodetic network stations was 2.5 mm, while the precision of the control points for laser scanning was 5 mm. In order to later colourise the point cloud, as well as for the building of the virtual tour, 360° panoramic photos were taken at each IMAGER 5006 scan station and at a few supplementary stations using a Nikon DSLR camera (see Section 4.4).
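Because the quoted resolution "6 mm @ 10 m" corresponds to a fixed angular step, the spacing between neighbouring scan points grows linearly with range. A minimal sketch of this relationship (the 30 m example range is illustrative, not a value from the survey):

```python
def point_spacing(angular_step_rad: float, distance_m: float) -> float:
    """Approximate spacing between neighbouring scan points at a given range.

    For the very small angular steps of a terrestrial laser scanner, the arc
    length (distance * angle) is an excellent approximation of the spacing.
    """
    return distance_m * angular_step_rad

# The IMAGER 5006 "high" setting quoted above: 6 mm spacing at 10 m range,
# i.e. an angular step of 0.6 mrad.
step = 0.006 / 10.0
print(round(point_spacing(step, 10.0) * 1000, 1))  # spacing at 10 m, in mm -> 6.0
print(round(point_spacing(step, 30.0) * 1000, 1))  # spacing at 30 m, in mm -> 18.0
```

The same proportionality explains why interior stations can be spaced more densely than exterior ones: distant façade surfaces are sampled more coarsely at the same angular setting.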





3D Modelling
The 3D modelling was also described in 2012 in Kersten and Lindstaedt [25] and is briefly summarised in the following. The generated point cloud, being too large to import directly into a CAD program, was first segmented and then transferred to AutoCAD using the plugin PointCloud. Once imported, the cathedral was blocked out manually with a 3D mesh by extruding polylines along the surfaces and edges of the point cloud structure. This method has the advantage of not generating too large a file, while retaining visual control of the built model using a superimposed point cloud. Figure 4 shows the final constructed 3D CAD model of the entire cathedral in four different perspective views.
For some smaller details on the cathedral, the automated meshing functions in Geomagic were used to quickly generate a mesh directly from the point cloud (Figure 5). This works by means of a simple triangulation algorithm, which is better suited to more complex and irregular shapes and surfaces.
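Geomagic's meshing internals are proprietary, so the following is only a rough illustration of the simplest form of such triangulation: converting a regularly sampled (grid-ordered) patch of points into triangles. The function name and grid layout are assumptions for illustration, not the paper's actual method:

```python
def grid_triangulation(rows: int, cols: int):
    """Triangulate a rows x cols grid of points (indexed row-major) by
    splitting each grid cell into two triangles. Returns index triples
    into the flat point list."""
    tris = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c  # top-left corner of the current grid cell
            tris.append((i, i + cols, i + 1))              # first half of the cell
            tris.append((i + 1, i + cols, i + cols + 1))   # second half of the cell
    return tris

# A 3 x 4 grid has 2 x 3 cells, hence 12 triangles:
print(len(grid_triangulation(3, 4)))  # -> 12
```

Real point-cloud meshing must additionally handle irregular sampling, holes, and outliers, which is why dedicated tools were used for the detailed ornaments.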



Panoramic Photography
In order to subsequently colourise the point cloud, as well as to generate a virtual online tour of the cathedral, a series of 360° panoramic photos were taken at each IMAGER 5006 scan station using a Nikon DSLR camera with a nodal point adapter. Supplementary panoramic photos were also taken at 10 additional locations outside the cathedral, as well as at 19 further points within the cathedral. These were taken without any laser-scanning targets or extraneous objects present in the shot. The acquisition and processing of the panoramic photography was also described in 2012 in Kersten and Lindstaedt [25]. For a better understanding of the whole workflow, the processing of the panoramic photography is briefly summarised in the following. Each set of photographs consists of 16 images: one pointing towards the sky, three towards the ground, and 12 photos covering the 360° circle in the horizontal plane. The software PTGui automatically generated a spherical panorama of 11,700 × 5850 pixels (ca. 43 MB) for each camera station. These panorama images were converted into a set of six cube images (in total ca. 5 MB). The panorama viewing software KRpano (https://krpano.com) was initially used to generate an interactive virtual tour for web browsers (Figure 6). The tour can be viewed at https://www.koenigslutter-kaiserdom.de/virtuelleTour/tour.html (Adobe Flash 9/10 must be enabled). In this browser-based tour, all spherical panoramas are linked to each other via hotspots or via the overview map (bottom-right corner), providing a quick and convenient way of navigating through the panoramas simply by clicking on the relevant map icon.
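The conversion from the stitched spherical (equirectangular) panorama into six cube faces rests on a standard mapping between cube-face coordinates and spherical longitude/latitude. A minimal sketch of that mapping, assuming an illustrative face/axis convention (PTGui's internal conventions may differ):

```python
import math

def cube_to_equirect(face: str, u: float, v: float, pano_w: int, pano_h: int):
    """Map a cube-face coordinate (u, v in [-1, 1]) to the source pixel of an
    equirectangular panorama of size pano_w x pano_h."""
    # Direction vector for the requested face (one illustrative convention).
    x, y, z = {
        "front": (1.0, u, -v),
        "back": (-1.0, -u, -v),
        "left": (-u, 1.0, -v),
        "right": (u, -1.0, -v),
        "up": (v, u, 1.0),
        "down": (-v, u, -1.0),
    }[face]
    lon = math.atan2(y, x)                                # longitude in (-pi, pi]
    lat = math.asin(z / math.sqrt(x * x + y * y + z * z)) # latitude in [-pi/2, pi/2]
    px = (lon / (2 * math.pi) + 0.5) * pano_w
    py = (0.5 - lat / math.pi) * pano_h
    return px, py

# The centre of the front face maps to the centre of the panorama quoted above:
print(cube_to_equirect("front", 0.0, 0.0, 11700, 5850))  # -> (5850.0, 2925.0)
```

Sampling every cube-face pixel this way (with interpolation in the source image) yields the six cube images used both for the browser tour and, later, for the UE4 skybox.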


Game Engine Unreal and VR System HTC Vive
A game engine is a simulation environment in which 2D or 3D graphics can be manipulated through code. Developed primarily by the video games industry, game engines provide ideal platforms for the creation of VR experiences for other purposes (e.g., cultural heritage), as many of the necessary functionalities are already built in, eliminating the need to engineer these features independently. While there are dozens of suitable game engines, the most popular for small studios and production teams tend to be the Unity engine (Unity Technologies, San Francisco, California, USA), CryEngine (Crytek, Frankfurt am Main, Germany) and Unreal Engine (Epic Games, Cary, North Carolina, USA). For this project, the Unreal Engine was chosen for its built-in Blueprints visual scripting system, which allows users to build simple interactions and animations without any prior knowledge of C++, the programming language on which the engine is built [26].

The specific hardware required to run VR comprises a VR headset, two "lighthouse" base stations, two controllers, and a VR-ready PC. For this project, the HTC Vive Pro was chosen as the headset. The lighthouses are needed to track the user's movement in 3D space (Figure 7), while the controllers are used for mapping interactions in the virtual world. Tracking is achieved with a gyroscope, accelerometer, and laser position sensor within the VR headset itself, and can detect movements with an accuracy of 0.1° [27]. Figure 7 shows the setup of the HTC Vive Pro VR system, including the interaction area (blue) for the user.


Implementation in Virtual Reality
In order to bring the model into virtual reality, some changes had to be made to the mesh and textures to make them run more efficiently within the game engine. The strict performance criteria of VR mean that every effort needs to be made to optimize the models and ensure that a sufficiently high frame rate (ideally 90 frames per second, though above 40 is sufficient for many applications) can be achieved. Much of this part of the workflow was done manually.
First, the mesh was split into different parts in order to reduce the data volume of the files and therefore speed up each iteration of the texturing process. Because UE4's built-in render engine renders only those meshes and textures that are within the screen-space of the viewer at any one time, a logical approach is to separate the interior from the exterior meshes, so as to unload the exterior data when the user is inside the cathedral and vice versa when they are outside. The two principal parts of the Kaiserdom (the central nave and the cloisters) were also processed separately for the same reason. In a few areas of the model, such as the southern side of the cloister, additional modelling was done to supplement the scan data. A low-poly city model provided by the company CPA Software GmbH (Siegburg, Germany) was used as a basis to model low-poly buildings in the area around the cathedral. As these buildings were not central to the experience, they were modelled only in low detail so as not to take up too much rendering capacity on the GPU. Buildings further away from the cathedral, which were only visible on the periphery of the virtual environment, were left in their raw form (simple grey rectangular meshes) to avoid any extraneous modelling work.
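The interior/exterior split described above amounts to a simple position test; in practice UE4 handles this with level streaming volumes rather than hand-written code, and the mesh-group names and cathedral footprint below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned 2D footprint in metres."""
    min_x: float
    min_y: float
    max_x: float
    max_y: float

    def contains(self, x: float, y: float) -> bool:
        return self.min_x <= x <= self.max_x and self.min_y <= y <= self.max_y

def meshes_to_load(player_xy, interior: Box):
    """Choose which mesh groups to keep resident, mirroring the
    interior/exterior split: unload exterior data while the user is
    inside the cathedral, and vice versa."""
    x, y = player_xy
    if interior.contains(x, y):
        return ["nave_interior", "cloister_interior"]
    return ["cathedral_exterior", "city_lowpoly"]

# Hypothetical cathedral footprint of 60 m x 30 m:
cathedral = Box(0.0, 0.0, 60.0, 30.0)
print(meshes_to_load((10.0, 5.0), cathedral))  # user inside the cathedral
print(meshes_to_load((80.0, 5.0), cathedral))  # user outside
```

A real implementation would also fade or pre-stream assets near the boundary so the swap is not visible when the user passes through a doorway.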
Much of the work in the VR optimization process was dedicated to the production of high-quality textures suitable for real-time VR. There is a fundamental trade-off here between the quality of the textures needed to appear photorealistic at close range and the data streaming limit of the engine (which varies with hardware and software specifications). As a rule, creating a photorealistic environment for VR requires high-quality textures in order to boost the sense of immersion. While the Unreal Engine automatically implements level-of-detail algorithms to reduce the load on the render engine, a certain amount of manual optimization must be done in addition in order to achieve performance goals. As such, texture resolution was varied depending on how far the texture would be from eye level in the virtual environment. 4K textures (4096 × 4096 px) were used for high-detail surfaces that appear at eye level, while 2K textures (2048 × 2048 px) were used for surfaces well above eye level (for example, the ceiling and roof textures). While many of the textures for this process were adapted from photos taken at the Kaiserdom, supplementary photo-textures were sourced from a Creative Commons-licensed CGI texture database (https://texturehaven.com/). For those materials with more pronounced relief, such as the building stone and roof tiles, normal maps were also added and accentuated with a parallax displacement effect built with the native UE4 material creation tools.
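The eye-level rule of thumb for texture resolution can be expressed as a trivial selection function; the 5 m threshold and eye height below are illustrative assumptions, not values from the project:

```python
def texture_resolution(height_above_floor_m: float, eye_level_m: float = 1.7) -> int:
    """Pick a texture size following the rule of thumb described above:
    full 4K near eye level, 2K for surfaces well above it."""
    near_eye = (height_above_floor_m - eye_level_m) < 5.0
    return 4096 if near_eye else 2048

print(texture_resolution(1.7))   # wall surface at eye level -> 4096
print(texture_resolution(15.0))  # vault / ceiling surface   -> 2048
```

The benefit is a roughly fourfold reduction in texture memory for every surface demoted from 4K to 2K, which is what makes the 90 fps target attainable on consumer GPUs.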
The 3D models with their corresponding textures were exported into UE4 for placement and real-time visualization (Figure 8A,B). The version of UE4 used in this case was 4.22. Additional elements such as plant meshes, clouds, fog, environmental lighting, and audio were added to heighten the sense of photorealism and immersion. In addition, simple interactions were integrated to help the user navigate the environment. Firstly, a teleportation mechanic was implemented, allowing the user to jump from location to location. This mechanic makes use of a simple ray-tracer, pre-built into UE4, that allows the user to point to any location in the virtual world and check that the location is a valid teleportation point according to a certain set of criteria (these criteria, including the space available and the slope angle at the location, are calculated by UE4 with its "Navigation Mesh" feature). If the location is valid as a teleportation point, the user can teleport there with a click of the trigger button on the controller (Figure 8D). In addition, automatic door-opening animations were added to several doors in the cathedral, allowing users to move between different parts of the building as in the real world. A short trailer of the virtual environment can be viewed online (https://www.youtube.com/watch?v=hmO0JOdlLgw).
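The teleport-validity test performed by UE4's Navigation Mesh can be approximated by two checks, walkable slope and free space; the thresholds below are illustrative assumptions rather than UE4's actual defaults:

```python
import math

def is_valid_teleport(surface_normal, free_radius_m: float,
                      max_slope_deg: float = 30.0, min_radius_m: float = 0.5) -> bool:
    """Approximate the two teleport criteria mentioned above: the slope at
    the target (angle between the surface normal and vertical) must be
    walkable, and there must be enough free space to stand."""
    nx, ny, nz = surface_normal
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    slope_deg = math.degrees(math.acos(nz / norm))
    return slope_deg <= max_slope_deg and free_radius_m >= min_radius_m

print(is_valid_teleport((0.0, 0.0, 1.0), 1.0))  # flat floor, room to stand -> True
print(is_valid_teleport((0.0, 1.0, 1.0), 1.0))  # 45-degree ramp -> False
```

In the actual application, these checks run every frame against the point where the controller's teleport ray hits the navigation mesh, so invalid targets can be greyed out before the trigger is pressed.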
Once the real-time environment was built and the VR interactions set up, the 360° panoramas could be integrated. A simple mechanism was implemented in the UE4 engine to make each panorama viewable. This mechanism was made up of: (1) a visual cue in the virtual world indicating where the panorama was located; as an example, we used a glowing ring, which stands out well from the rest of the environment (Figure 8C), though a wide variety of other visual cues may be appropriate; (2) a trigger box overlapping with the ring, coupled with a function that fires when a certain button is pressed on the HTC Vive motion controller; (3) a separate, empty map or level in the UE4 editor; and (4) a skybox in the empty level onto which to project the cube-map panorama. Using this mechanism, the player can approach a glowing ring, representing a panorama taken on that spot, press a button on the motion controller, and be transported into the 360° panorama. By toggling the button press on the motion controller, the player can come out of the panorama and be placed back in the virtual world (Figure 9). Certain variations on this mechanic were tested (e.g., projecting the panoramic photo onto the inside of a sphere in the virtual world, then using a button on the motion controller to alternately show and hide this sphere when the player was in the right area), but the method described above was found to provide the simplest and most robust way of toggling between the panoramic photos in the scene while retaining the original perspective of the photographs.
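The toggle mechanic above amounts to a small two-state machine: pressing the button inside a trigger ring switches to the matching panorama level, and pressing it again returns to the CGI world. A minimal sketch with hypothetical station names and coordinates (in UE4 this logic lives in a Blueprint, not Python):

```python
import math

class PanoramaToggle:
    """Two-state toggle between the CGI level (active is None) and a
    panorama level (active holds the station name)."""

    def __init__(self, stations, trigger_radius: float = 1.0):
        self.stations = stations          # {name: (x, y)} panorama positions
        self.trigger_radius = trigger_radius
        self.active = None                # panorama currently shown, or None

    def on_toggle_pressed(self, player_xy):
        if self.active is not None:       # inside a panorama: return to CGI world
            self.active = None
            return self.active
        px, py = player_xy                # in CGI world: enter a nearby panorama
        for name, (sx, sy) in self.stations.items():
            if math.hypot(px - sx, py - sy) <= self.trigger_radius:
                self.active = name
                break
        return self.active

tour = PanoramaToggle({"nave_01": (12.0, 4.0)})
print(tour.on_toggle_pressed((12.3, 4.2)))  # inside the ring -> 'nave_01'
print(tour.on_toggle_pressed((12.3, 4.2)))  # pressed again   -> None (CGI world)
```

Keeping the panorama in a separate, otherwise empty level matches the paper's chosen design: the skybox projection preserves the photograph's original station perspective exactly, which the in-world sphere variant did not.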
The finished version of the VR experience was tested with the HTC Vive Pro headset running on a PC with an 8-core Intel Xeon CPU (3.10 GHz), an NVIDIA GTX 1070i GPU, and 32.0 GB RAM. With this setup, the experience achieved an average frame rate of 40-50 frames per second.
In addition, automatic door-opening animations were added to several doors in the cathedral, allowing users to move between different parts of the building as in the real world. A short trailer of the virtual environment can be viewed online (https://www.youtube.com/watch?v=hmO0JOdlLgw).
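The teleportation validity check described above is handled internally by UE4's Navigation Mesh, but its logic can be sketched as follows. This is a simplified Python illustration, not the engine code; the thresholds and function name are assumptions for the example.

```python
import math

def is_valid_teleport(surface_normal, clearance_radius,
                      max_slope_deg=45.0, min_radius=0.5):
    """Check whether a ray-traced hit point is a valid teleport target.

    surface_normal   -- unit (x, y, z) normal at the hit point
    clearance_radius -- free space around the point, in metres
    Thresholds are illustrative; UE4's Navigation Mesh applies its
    own agent-radius and walkable-slope settings.
    """
    # The slope angle is the angle between the surface normal and
    # the world up vector (0, 0, 1).
    cos_slope = max(-1.0, min(1.0, surface_normal[2]))
    slope_deg = math.degrees(math.acos(cos_slope))
    return slope_deg <= max_slope_deg and clearance_radius >= min_radius
```

A flat floor passes the check, while a steep surface or a spot without enough free space around it is rejected, so the teleport marker only appears on walkable ground.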
Once the real-time environment was built and the VR interactions set up, the 360° panoramas could be integrated. A simple mechanism was implemented in the UE4 engine to make each panorama viewable. This mechanism was made up of: (1) a visual cue in the virtual world that indicated where the panorama was located; as an example we used a glowing ring, which stands out well from the rest of the environment (Figure 8C), although a wide variety of other visual cues may be appropriate; (2) a trigger box overlapping with the ring, coupled with a function that fires when a certain button is pressed on the HTC Vive motion controller; (3) a separate, empty map or level in the UE4 editor; and (4) a skybox in that empty level onto which to project the cube-map panorama. Using this mechanism, the player can approach a glowing ring, representing a panorama taken on that spot, press a button on the motion controller, and be transported into the 360° panorama. By toggling the button press on the motion controller, the player can leave the panorama and be placed back in the virtual world (Figure 9). Certain variations on this mechanic were tested (e.g., projecting the panoramic photo onto the inside of a sphere in the virtual world, then using a button on the motion controller to alternately show and hide this sphere when the player was in the right area), but the method described above was found to provide the simplest and most robust way of toggling between the panoramic photos in the scene while retaining the original perspective of the photographs. The finished version of the VR experience was tested with the HTC Vive Pro headset running on a PC with an 8-core Intel Xeon CPU (3.10 GHz), an NVIDIA GTX 1070 Ti GPU, and 32 GB of RAM. With this setup, the experience achieved an average frame rate of 40-50 frames per second.
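The toggle logic of this mechanism can be sketched as a small state machine: a button press enters the panorama only when the player overlaps a trigger box, and always returns the player to the real-time world when already inside a panorama. The actual implementation is a UE4 Blueprint; the Python below is a hedged illustration with invented names, using circular trigger regions for simplicity.

```python
class PanoramaToggle:
    """Sketch of the panorama toggle mechanic.

    Pressing the controller button while overlapping a trigger box
    (placed around a glowing ring) swaps the player into the panorama
    level; pressing it again returns the player to the virtual world.
    """

    def __init__(self, trigger_boxes):
        # Each trigger region: ((center_x, center_y), radius).
        self.trigger_boxes = trigger_boxes
        self.in_panorama = False

    def _overlapping(self, player_pos):
        return any(
            (player_pos[0] - cx) ** 2 + (player_pos[1] - cy) ** 2 <= r ** 2
            for (cx, cy), r in self.trigger_boxes
        )

    def on_button_pressed(self, player_pos):
        """Called when the Vive controller button fires."""
        if self.in_panorama:
            # Leaving the panorama is always allowed.
            self.in_panorama = False
        elif self._overlapping(player_pos):
            # Entering is only allowed inside a trigger box.
            self.in_panorama = True
        return self.in_panorama
```

Pressing the button outside any trigger box while in the world does nothing, which mirrors the behaviour described above: the panorama is only reachable from the spot where it was photographed.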


Conclusions and Outlook
This paper presented the rationale and workflow for creating a VR visualization with integrated 360° panoramic photography of the Kaiserdom in Königslutter. The combination of these two kinds of media, real-time 3D visualization and HDRI panoramic photography, allows the interactive and immersive potential of the former to complement the high fidelity and photorealism of the latter. While traditionally these two "kinds" of VR have remained separate, it is important to investigate ways of combining them in order to build experiences that bring together different kinds of data. This is particularly important for fields, such as heritage, where documentation can take multiple forms: photographs, objects, 3D data, or written documents. The future development of the virtual museum, for example, depends on being able to integrate different kinds of data into a virtual space that can be navigated intuitively in virtual reality [28].
Further applications of the workflow described above can also be envisioned. In another recent project, a recreation of the town of Stade (Lower Saxony) in the year 1620 [12], panoramic photography is implemented so that users can jump between the real-time visualization of the town in 1620 and 360° photos from the modern day. This implementation allows users to directly juxtapose the historic and contemporary city, as an entry point to comparing the historical conditions of the two periods. In particular, this feature could have extra meaning for users who are already familiar with the town, by revealing the perhaps unknown history of certain well-known locations. While real-time 3D visualizations on their own may provide a certain degree of immersion, the integration of different kinds of data in these virtual worlds, such as panoramic photography, can greatly enrich the experience by inviting the user to compare different kinds of visualizations.
In addition, by taking real-time visualisations beyond being simply static virtual worlds through the integration of different kinds of information, VR becomes much more powerful as a tool for education in museums. Cultural heritage monuments such as the Kaiserdom of Königslutter are particularly suited to VR exhibition due to a substantial existing audience that may be looking for new ways to extend their visitor experience. By extending real-time visualisations through panoramic photography and other kinds of information, VR can come closer to realising its potential as a tool for cultural heritage education.