Article

An Overview of Innovative Heritage Deliverables Based on Remote Sensing Techniques

Department of Civil Engineering, TC Construction—Geomatics, KU Leuven, 3000 Leuven, Belgium
* Author to whom correspondence should be addressed.
Remote Sens. 2018, 10(10), 1607; https://doi.org/10.3390/rs10101607
Submission received: 31 August 2018 / Revised: 27 September 2018 / Accepted: 7 October 2018 / Published: 10 October 2018
(This article belongs to the Special Issue Advances in Remote Sensing for Archaeological Heritage)

Abstract

The documentation and information representation of heritage sites is rapidly evolving. With the advancements in remote sensing technology, increasingly more heritage projects look to integrate innovative sensor data into their workflows. Along with this, more complex analyses that require highly detailed inputs have become available. However, there is a gap in the current body of knowledge on how to transfer the outputs of innovative data acquisition workflows into a set of useful deliverables that can be used for analysis. In addition, current procedures are often restricted by proprietary software or require field-specific knowledge. As a result, more data are being generated in heritage projects, but the tools to process them are lacking. In this work, we focus on methods that convert the raw information from the data acquisition into a set of realistic data representations of heritage objects. The goal is to present the industry with a series of practical solutions that integrate innovative technologies but still closely relate to current heritage documentation workflows. An extensive literature study was performed discussing the different methods along with their advantages and opportunities. In the practical study, four deliverables were defined: orthomosaics, web-based viewers, watertight mesh geometry and content for serious games. Each section provides a detailed overview of the process and realistic test cases that heritage experts can use as a basis for their own applications. The implementations are applicable to any project and provide the necessary information to update existing documentation workflows. Overall, the aim is to increase access to innovative technologies, better communicate the data to the different stakeholders and improve the overall usefulness of the information.

The preservation of our cultural heritage is paramount to society, for both current and future generations. In the case of tangible heritage such as monuments, this includes the conservation of the significant parts of a structure. Vital to any conservation process is the proper identification of the state of the asset and the agents of deterioration [1,2]. Knowing the state and impact factors of a monument requires a profound documentation of the object’s exterior. This includes the gathering and recording of all pertinent data, both metric and non-metric, of the current state of the asset. The acquired information allows experts to gain a better understanding of the recorded entities and to formulate a suitable diagnosis and treatment for their preservation [3]. More specifically, the information is used to assess the structure’s integrity, exchange information between stakeholders, detect pathologies and so on [4,5,6]. During this process, it is crucial that the data are well structured and sufficiently detailed for experts to make a proper assessment [7]. However, current heritage projects struggle to provide stakeholders with the correct information.
The emphasis of this work is on investigating to what extent innovative technologies can be applied to better document and communicate information about the physical appearance of heritage monuments. This work focuses on the documentation of small to mid-scale structures and objects. The goal is to provide the heritage industry with a clear overview of the opportunities of documentation deliverables. More specifically, the focus is on digitizing the physical aspect of the target structures, including the metric properties and visual appearance. A number of potential procedures are presented based on existing literature and our own implementations. Our methodology was as follows. First, a number of general problems and processes were identified in the related work. From the current body of knowledge, four groups of remote sensing issues were uncovered along with their opportunities. In the practical study, each issue was investigated in depth along with a feasible solution. A set of relevant test cases was established in accordance with heritage experts to make the approaches highly applicable. The remainder of this paper is structured as follows. The background and related work are presented in Section 1. Next, the innovative deliverables are discussed in Section 2, Section 3, Section 4 and Section 5 along with the current state of the art and realistic test cases. Finally, the conclusions are presented in Section 6.

1. Background and Related Work

The goal of non-destructive heritage documentation workflows is to create an appropriate digital representation of the current state of an object’s physical appearance so it can be reliably interpreted and analyzed. This includes the presentation of the correct geometry and texture so that all the properties of the asset are reflected as realistically as possible. The definition of the proper representation is in itself a challenge since there is a wide variety of analysis procedures which require different inputs. Some applications are as straightforward as determining the degradation of a site, while more complex analyses may attempt to assess the water management system of a lost civilization [8]. Overall, the interpretation of tangible heritage sites is often considered a multi-scale problem [9]. For instance, local observations of texts, paintings or architectural details should often be evaluated in relation to each other and to their location on the site. Especially for applications relying on pattern recognition, it is of vital importance that the objects of interest are observed within their geospatial context [10].
Current data acquisition procedures include visual observations, the capturing of imagery and selective metric measurements either by hand or with total stations [11]. This raw information is typically used to produce a set of documents that consists of written text accompanied by images [12]. However, the resulting documents are often hard to interpret as they are created for a specific application and require field-specific knowledge to comprehend [13]. In addition, the included imagery is often suboptimal, especially in terrestrial applications, for the following reasons. First, the imaging conditions in heritage sites are problematic due to poor lighting, occlusions and varying depths [14]. For example, the images shown in Figure 1 are challenging to interpret due to poor image quality. Second, the imagery cannot be used to extract metric information as it is perspectively distorted. Without the support of the geometry, the complex patterns found in heritage sites are often undetectable. The same problem applies to the geospatial information of the images. Without detailed information concerning the whereabouts of each image, any spatial analysis of the site in relation to the imagery is near impossible. For instance, the right images in Figure 1 might have been taken in the same location or in consecutive sections of the attic, with only textual remarks to differentiate the location. Another obstacle is the limited Field-of-View (FoV) of the imagery acquired by conventional photography. Due to the varying sizes of heritage objects, it is challenging to capture the proper information within a single image. As a result, a large number of images is required to capture the complex geometry, which results in a loss of contextual information and introduces more confusion to the process [15].
In addition to the documents, several floor plans and elevations are produced for the purpose of excavation planning, monitoring, pathology documentation and so on [9,16,17]. These plans are typically augmented with non-metric information such as the approximate location of the pathologies, materials and construction phases. However, the production of detailed metric information is a labor-intensive procedure and thus the plans often represent a coarse abstraction of the real objects [18,19]. In addition, the reduction of the information to 2D inevitably causes loss of information, which leads to misinterpretations and subsequently to inappropriate analyses. Furthermore, stakeholders are reluctant to adopt 3D deliverables as they fear that the data have a reduced lifetime due to proprietary file formats [20]. This is because more complex data typically require specific software and hardware to interact with. Even if the data are stored in a widely accepted format, they remain prone to data heterogeneity due to their size. For example, the high-frequency measurements from modern data acquisition techniques quickly comprise hundreds of gigabytes, which can only be handled by a few proprietary and expensive software packages despite the exchangeability of the data format.
The issue of heritage documentation is a widely discussed topic in the conservation industry. Nowadays, the use of advanced remote sensing techniques is becoming increasingly widespread to cope with the issues of documentation [21,22,23]. Two major techniques are Terrestrial Laser Scanning (TLS) and photogrammetry. The former is an active light detection and ranging technique (LiDAR) which uses the controlled deflection of laser beams to rapidly capture the scene. It consists of a static sensor that is placed at multiple locations in the scene to capture all the objects within Line-of-Sight (LoS) [3]. Current systems are able to capture their surroundings at 1–2 MHz and this rate continues to increase. The latter is a passive, light-based data acquisition technique that uses imagery from digital cameras to compute 3D points. Based on the overlap between pixels in neighboring images, a 3D representation can be computed from the 2D inputs [24]. Both technologies are fairly complementary. TLS rapidly captures highly accurate points on an object’s surface at a lower resolution, while photogrammetry produces highly dense and realistically colored points but typically with a lower metric accuracy. Furthermore, the success of photogrammetry is highly dependent on proper lighting, while this is a non-issue for TLS. The result of both TLS and photogrammetry processes is a dense point cloud: a set of points with Cartesian coordinates and optional color and signal-to-noise ratio values (Figure 2) [3].
Numerous researchers have proposed the integration of remote sensing techniques to digitize cultural heritage at building and object scale [25]. Remondino et al. [20] compared a range of sensors and techniques for the documentation of heritage. They discuss Synthetic Aperture Radar (SAR) satellite outputs, close-range Terrestrial Laser Scanners (TLS), long-range TLS, Airborne Laser Scanners (ALS) and a wide range of imaging cameras, and investigate the capabilities of these systems for the 3D reconstruction of a variety of heritage objects. Hassani et al. [22] give an in-depth overview of the different techniques that can be used for documentation, including structured light scanning, along with their advantages and disadvantages. Balsa-Barreiro et al. [26] used a similar approach for the integration of TLS and photogrammetry. Salonia et al. [27] presented solutions for multi-scale cultural heritage surveys and thoroughly discussed the process of generating 3D information for small objects as well as entire sites. Alsadik et al. [28] and Teza et al. [29] are some of the few researchers who compared the metric accuracy of different acquisition techniques for a variety of objects. Galizia et al. [30] provided the same for photogrammetry and TLS for the reconstruction of dense watertight models. Overall, most of these works look to produce the best possible raw information, even in challenging conditions. However, the conversion of this highly dense information into a set of useful deliverables for analysis is still the subject of ongoing research.
Several researchers have already provided solutions to reduce the vast amount of input data to workable information. Guidi et al. [23] provided an overview of several deliverables of the documentation process with respect to the new inputs. They propose a methodology to produce sections of the site and to aid modelers with digitally reconstructing objects of interest based on limited observations. Hess et al. [31] provided a similar workflow based on TLS data. Some researchers also provided test cases to evaluate the proposed deliverables. We support this approach since heritage experts can better relate to the presented procedures if they are applied to real projects. As the same obstacles tend to reoccur in different heritage projects, stakeholders are more likely to adopt a procedure when it is presented in the context of a familiar process with clear deliverables and results. For example, Pepe et al. [32] gave an overview of the outputs of innovative techniques such as databases and pathology maps. Liang et al. [33] provided results for the integration of several measurement techniques for a temple complex in China. Pritchard et al. [34] presented their methodology with TLS to digitize the Cologne Cathedral in Germany. Hanan et al. [35] provided results for the documentation of Batak Toba houses using close-range photogrammetry. Núñez Andrés et al. [36] presented a test case on digitizing façades using orthophotography. In this paper, we also support our proposed procedures with realistic test cases so that heritage experts can better relate to the content. The origin and context of each project are diversified so that heritage experts get a better overview of the opportunities of the new technologies in different settings.
Once the raw data are transferred to a useful set of deliverables, they serve as input for the analysis of the heritage asset. As previously stated, numerous applications exist for the interpretation of cultural heritage information. A popular use of the data is the structural analysis of the asset [37], where a Finite Element Model (FEM) is typically constructed based on the spatial measurements [38]. Research has shown that, due to the density of the point cloud data, significantly more accurate FEMs can be constructed that better reflect the appearance of the asset [39]. However, the manual modeling from point clouds is very labor-intensive and thus major abstractions are introduced, which partially defeats the purpose of the highly accurate spatial information [40]. As previously stated, there are few software packages that can deal with this information and they are typically field-specific and very expensive. An alternative is deriving the FEM from detailed mesh geometry, which is a more efficient and widespread data representation. Not only does the mesh preserve the spatial accuracy and detailing of the geometry, it can also be converted to the metric basis of a FEM semi-automatically [41]. We consider these specific meshes as one of the prominent deliverables in this research. Another group of applications integrates the information in a database such as a Geographic Information System (GIS) or Building Information Modeling (BIM) database [42]. For instance, Barazzetti et al. [43,44] and Brumana et al. [45] manually constructed a detailed BIM and integrated it with non-metric information for each element. Murphy et al. [46] and Dore et al. [47] proposed the concept of HBIM, which represents the scene as a set of predefined metric archetypes. Brusaporci et al. [48] created an HBIM of the Camponeschi palace, L’Aquila, including multiple Levels of Development (LOD) of a vaulting system, and validated it with point cloud data. Similar to the FEM, the manual conversion of a point cloud to a BIM or GIS involves severe abstractions in the geometry of the object due to the labor intensity [49,50,51]. Even though the automated conversion of meshes to BIM or GIS is still the subject of ongoing research, it is a promising approach to produce detailed as-is databases. Overall, the scope of this work is to represent the raw measurements more efficiently and make them more accessible to stakeholders. By doing so, the modeling effort is reduced and fewer interpretative errors are made.

2. Orthomosaic

2D outputs are currently the standard for heritage documentation. As discussed above, the documents, plans and imagery provided each have their flaws. In general, there is a lack of data processing that would lead to more comprehensive results. This is especially true for the captured imagery of the site. The final documents are flooded with unprocessed images that have limited use and are mainly there for context. Furthermore, these images often fail to provide an appropriate overview of the object as no spatial information is present. A solution is the use of orthomosaics. These are raster images that contain no perspective distortion and thus provide an orthographic view of the objects [52]. Given the proper scale, these images can serve as plans similar to the floor plans generated by total station. A major difference is that plans from total stations are a coarse abstraction of reality, while an orthomosaic is a realistic data representation near the quality of the human eye [53]. However, one should keep in mind that these data are still raster based and thus cannot be directly used as an input for object-oriented databases, as is the case with vector data. Therefore, orthomosaics are often used in combination with abstract plans to combine the best of both worlds [54]. Experts reconstruct the main components of the object of interest and annotate pathologies and other non-metric information, while void spaces are covered by the orthomosaic. The result is a much more comprehensive plan that gives a better overview of the current state of the asset and allows further interpretation aside from the information marked during the initial study.
The creation of orthomosaics consists of the following steps. First, the imagery is undistorted using the intrinsic camera parameters [3]. By calibrating on a set of test images with known 2D and 3D matches, the radial and tangential distortions are estimated and removed and the focal length is accurately computed. Next, the external camera parameters are computed based on matches between overlapping imagery. Algorithms such as SIFT [55] and SURF [56] are capable of detecting and matching image features in hundreds of images in a matter of minutes. Subsequently, a 3D model can be computed using dense matching. This model serves as input for the calculation of the Digital Surface Model (DSM), which is used to remove the distortions caused by the central perspective of the camera and the relief of the object or terrain. Based on this extra information, the captured images are orthorectified [57]. The last step comprises the stitching of the different orthorectified images, creating the final orthomosaic [58].
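As a minimal illustration of the first step, the sketch below estimates the intrinsic parameters from a set of checkerboard calibration shots and removes the lens distortion with OpenCV. The folder names, checkerboard dimensions and image paths are hypothetical; an actual workflow may equally rely on the self-calibration routines of photogrammetric software.

```python
import glob
import cv2
import numpy as np

# Hypothetical checkerboard with 9x6 inner corners and 25 mm squares.
pattern = (9, 6)
square_size = 25.0
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_size

obj_points, img_points = [], []
for path in glob.glob("calibration/*.jpg"):          # assumed folder of calibration shots
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Estimate the focal length, principal point and radial/tangential distortion.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# Remove the lens distortion from a survey image before orthorectification.
image = cv2.imread("survey/wall_0001.jpg")           # illustrative survey image
cv2.imwrite("survey/wall_0001_undistorted.jpg", cv2.undistort(image, K, dist))
```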
The automated production of orthomosaics originates from the 1990s for aerial applications [59]. However, the technology was not adopted by the heritage industry until much later. A major obstacle is the acquisition cost of the required imagery. Especially in aerial applications, the price of capturing imagery from planes or helicopters and of the software to process the data often exceeds a project’s budget. Workflows with balloons or kites significantly reduce the cost but offer little control over the sensor’s movement [60]. Now, with the advent of Unmanned Aerial Vehicles (UAVs), the acquisition of aerial imagery has become much more accessible [61,62]. Not only are these devices significantly cheaper and easier to use, they also fly at a lower altitude, resulting in higher resolution imagery. In addition, they allow the capture of oblique imagery. Overall, the aerial images taken by UAVs lead to more complete and qualitative orthomosaics [63]. Currently, the majority of orthomosaics are still created from an aerial top view perspective. For instance, McCarthy et al. [52] presented a survey of the archaeological site Rubha an Fhaing Dhuibh in Scotland. Combined with total station measurements, they produced accurate georeferenced orthomosaics of all the masonry and debris on the site. Yastikli and Özerdem [64] created aerial orthomosaics of an Ottoman sultan’s summer palace in Turkey using a UAV. Similarly, Barazzetti et al. [44] produced aerial orthomosaics of the medieval bridge Azzone Visconti in Lecco, Italy. Further, Themistocleous et al. [65] created an orthomosaic of the Fabrica area in Cyprus. Apart from the overview plans, De Reu et al. [66] proposed the use of aerial orthomosaics during the uncovering of an archaeological scene. They created excavation plans based on orthomosaics calculated overnight instead of relying on plans produced from time-consuming total station measurements. Further, Erenoglu et al. [17] created orthomosaics based on digital imagery as well as thermal and multispectral imagery.
With the success of aerial applications, terrestrial orthomosaics are gaining popularity as well. Researchers have experimented with the technique to accurately document objects from oblique or terrestrial perspectives. Markiewicz et al. [67] presented an extensive work on the creation of orthomosaics based on laser scanning data and imagery. Jalandoni et al. [68] created terrestrial orthomosaics of ancient rock drawings in the northern part of Australia. Similarly, Monna et al. [69] presented a documentation workflow based on orthomosaics for carved Mongolian deer stones. Further, Chiabrando et al. [54] presented a detailed plan of the vault of the hall of honour of the Stupinigi in Italy, where they integrated an orthomosaic with a 2D representation of the contours of the vault. Additionally, they presented a test case where an orthophoto was created for the documentation of the frieze of the Roman arch of Augusto in Susa, Italy. Oliveira et al. [70] created orthomosaics of façades for the documentation of the general layout and detailing. In addition, they modeled cracks and deteriorations on the face of dam structures. Orthomosaics can also be produced from other inputs aside from images alone. Koska and Křemen correctly indicated that laser scanners with an integrated camera can also be used to create orthomosaics. The combination of the laser scans with the texture of images can create highly accurate orthomosaics [71]. However, the resolution is typically much lower compared to images taken with Digital Single Lens Reflex (DSLR) cameras. Additionally, the texture information is often of lesser quality due to the location of the imagery and the quality of the sensor [72]. Ideally, laser scanning and imagery from a DSLR camera are merged to produce both accurate and realistic orthomosaics in terrestrial environments [71]. In this work, we therefore propose the combination of photogrammetry and terrestrial laser scanning since it yields superior results.

2.1. Test Case Sint Niklaaschurch

As a test case, the Sint Niklaaschurch in Ghent, Belgium was selected. The initial Romanesque church, built around 1100 A.D., underwent several construction phases with major Gothic influence in the 13th century. After a series of minor alterations, the church’s architecture was modified again in the 18th century to remove its Gothic character. This included the removal of the Gothic ornaments, the plastering of the walls and repainting them with contemporary art. Currently, the church is under renovation. The objective is to restore the church’s interior to its Gothic period appearance. This includes the removal of the plaster and the restoration of the original paintings underneath. It is within this context that a survey was ordered of the church’s nave walls before and after the restoration. The pathologies, architectural features and overall state of the walls had to be mapped for documentation purposes. In addition, based on the partial remains of the Gothic paintings on the walls, a proper restoration plan needed to be drawn up.

2.1.1. Survey

The target area is located in the southern aisle of the church (Figure 2). The four consecutive naves embedded in the southern wall needed to be acquired with a pixel density of 1 pix/mm given the proper scale of the scene. Over 900 m² of wall surface spread over six stories had to be mapped within a limited time frame and budget. Furthermore, the repetitive design of the naves complicated the recording of the project. In addition, the scene was heavily occluded by the scaffolding and restoration equipment and was poorly lit (Figure 1, top left). For the analysis, it was vital that the walls could be evaluated as a whole and that sufficient metric information was present for decision makers to produce a detailed tender offer for the restoration. On the other hand, detailed close-ups were needed to depict regions of interest. Due to these obstacles, the documentation of the asset with traditional photography and hand measurements was unfeasible. In this case, the use of orthomosaics was proposed to provide the necessary multi-scale information. More specifically, an integrated orthomosaic was computed from both TLS and imagery inputs. By combining laser scanning and photogrammetry, the advantages of both techniques could be exploited and the systems could be optimally deployed. The laser scanning provided the metric backbone of the dataset and was the basis to compute the correct geometric links between the different naves and stories. The image recordings, serving as input for the photogrammetric technique, focused on capturing the texture and all of the details of the church’s walls. For the regions with remnants of paintings, text and so on, a color checker was used to ensure a proper exposure of the images. As the scanner requires several minutes to capture the scene, the images were captured simultaneously by the same operator. Because of the limited maneuverability and space on the scaffolding, the decision was made to register the scans on a cloud-to-cloud basis instead of a target-based approach. Therefore, consecutive scans were made with sufficient overlap to ensure a correct alignment. Overall, a single operator was able to capture the entire scene in one day, resulting in 23 scans and 1049 images.

2.1.2. Processing

As discussed above, it is opportune to integrate both TLS and camera imagery for the production of orthomosaics. First, the scans were registered in the scanner’s proprietary software and exported to the open .e57 format. Subsequently, the software RealityCapture was used to align all of the images with the scans. A set of visual features was detected in both the camera images and the TLS cube-mapped images and correspondences were found between both datasets. Given the highly accurate registration of the scans, the images could also be aligned with high accuracy based on the overlap with the scans. A major advantage is that the computational burden of the dense matching, which includes the estimation of 3D points from overlapping pixels, is considerably reduced since a significant portion of the scene is already acquired in 3D. In contrast to conventional photogrammetry, the reconstructed point cloud is also correctly leveled and has the proper scale thanks to the active LiDAR system. Once the point cloud of the entire scene was established, a set of orthomosaics was computed. For each side of the walls in the naves, a reference plane was established parallel to the wall face. Next, a bounding box was defined around the target objects, eliminating the need for the time-consuming removal of clutter in the point clouds. The included points were used to generate a triangulated DSM with respect to the reference plane. This mesh geometry served as the basis for the reprojection of the imagery. For each mesh face in the DSM, the projection was computed between the image content of that face and the reference plane. The result is a mosaic of perspectively undistorted image content of the wall surface in the direction of the reference surface.
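The per-face reprojection itself was performed in RealityCapture; the sketch below merely illustrates the underlying idea of projecting a colored point cloud orthogonally onto a user-defined reference plane and rasterizing it at a fixed pixel size. It is a simplified stand-in with no occlusion handling or hole filling, and all coordinates are hypothetical.

```python
import numpy as np

def orthoproject(points, colors, origin, u_axis, v_axis, gsd, width, height):
    """Rasterize colored 3D points onto a reference plane spanned by u_axis and
    v_axis; gsd is the pixel size in metres, width/height the raster size in pixels."""
    u = u_axis / np.linalg.norm(u_axis)
    v = v_axis / np.linalg.norm(v_axis)
    rel = points - origin                          # coordinates relative to the plane origin
    cols = np.floor(rel @ u / gsd).astype(int)     # horizontal pixel index in the plane
    rows = np.floor(rel @ v / gsd).astype(int)     # vertical pixel index in the plane
    ortho = np.zeros((height, width, 3), dtype=np.uint8)
    keep = (cols >= 0) & (cols < width) & (rows >= 0) & (rows < height)
    ortho[rows[keep], cols[keep]] = colors[keep]   # later points simply overwrite earlier ones
    return ortho

# Example: a wall face roughly parallel to the XZ plane, rendered at 1 mm per pixel.
# ortho = orthoproject(xyz, rgb, origin=np.array([10.0, 2.0, 0.0]),
#                      u_axis=np.array([1.0, 0.0, 0.0]),
#                      v_axis=np.array([0.0, 0.0, 1.0]),
#                      gsd=0.001, width=4000, height=6000)
```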

2.1.3. Orthomosaic

In total, 12 orthomosaics were generated for the entire project. The overall accuracy of the point cloud and DSM is estimated to be circa 3–4 mm, given the registration errors, instrument specifications, modeling errors and so on. The pixel density was set to the desired 1 pix/mm, which is near the quality of the acquired imagery. The resulting orthomosaics were delivered to heritage experts who used them for various deliverables. First, the information was used for documentation and pathology detection. Figure 3 depicts a showcase from the final documentation results, representing an orthomosaic with several call-outs for the details. The different polychrome zones were marked along with architectural details of the different time periods. It can be clearly observed that the orthomosaic provides a much needed overview of the scene and allows for a better understanding of the asset. For instance, within a single page, the heritage experts were able to relate detailed Latin texts (Figure 3b) to the figurative designs underneath them (Figure 3c), which led to the conclusion that a Gothic altar was painted at that location. As these details are located on different floors, the spatial connection between these two remains might easily have been missed if not for the orthomosaic. Although the call-outs could have been simple zoom-ins on the orthomosaic at those specific places, the original imagery was used instead. In the end, this yields superior results because an image can be color and exposure corrected in post-processing, based on the color information provided by the color checker. The same approach to correct the color of the zoomed-in part of the orthomosaic would not yield similar results. Because the orthomosaic is composed of multiple orthorectified parts of images, and thus of parts with different exposures and color values, every part would have to be color corrected in a different way. Another deliverable was the production of 2D plans (Figure 4). The orthomosaics were integrated with existing plans, which resulted in a more comprehensive plan than provided by the initial abstract geometry. Aside from the elevations and some measurements, the construction phases and renovation proposal were also defined. The resulting integrated plans were used to produce a more detailed and comprehensive tender offer, which was much needed by the industry.

2.2. Test Case Church Bijloke

A second test case is the altarpiece in a church of the Ghent city museum STAM at the Bijloke site (Figure 5a). Apart from the 10-m tall altarpiece, the paintings and ornaments also had to be documented. The ordered survey again used the combination of TLS and photogrammetry to satisfy the important multi-scale requirement. The resulting detailed documentation was used for tender offers for the restoration, which is part of a larger project to conserve the interior of the STAM.
Due to the 10-m height of the altarpiece, a different capturing approach was used compared to the previous project. If solely ground-based imagery and TLS had been used, there would have been a lack of detail and geometric accuracy in the upper part of the altarpiece due to the sensor’s position. This can be partially remedied by increasing the focal length (zooming in), which reduces and thus improves the Ground Sampling Distance (GSD). However, because of the single point of view at the base of the structure, occlusions caused by protruding elements would still cause large undocumented areas. To overcome these deficiencies, a UAV was employed. By performing photogrammetry on oblique aerial images taken by a UAV, it is possible to overcome the limitations of terrestrial photogrammetry and laser scanning and to create more complete and detailed 3D models and orthomosaics of tall objects (Figure 5b).
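For reference, the GSD relates the pixel pitch of the sensor, the focal length and the object distance; the short sketch below evaluates this relation with illustrative numbers that are not taken from the actual survey.

```python
def ground_sampling_distance(pixel_pitch_mm, focal_length_mm, distance_m):
    """Object-space footprint of one pixel: GSD = pixel pitch * distance / focal length."""
    return (pixel_pitch_mm / 1000.0) * distance_m / (focal_length_mm / 1000.0)

# A 4.4 micrometre pixel pitch and a 35 mm lens at 10 m give a GSD of about 1.3 mm;
# doubling the focal length to 70 mm halves the GSD to about 0.6 mm.
print(ground_sampling_distance(0.0044, 35.0, 10.0))   # ~0.00126 m
print(ground_sampling_distance(0.0044, 70.0, 10.0))   # ~0.00063 m
```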
Apart from their heritage documentation purpose, the orthomosaics were also used for planning the restoration of the different ornaments. Several call-outs of the final orthomosaic, with exact dimensions, depict damaged areas so that the required restoration efforts can be planned for these specific regions. This provided restoration experts with a good view of the overall state, including regions of the subject that were invisible from the ground (Figure 5c). Additionally, by thoroughly photographing the inaccessible ornaments in the upper part from several points of view, a detailed 3D reconstruction could be made of these elements. Together with the multiple correctly scaled orthomosaics of the ornaments (Figure 5d), this provided good insight into the complex geometry and scale of these elements. In addition, by using a UAV, a detailed inspection could be performed without the need for costly scaffolding.

3. Panoramic Viewer

A major issue in documentation workflows is the lack of immersive and comprehensive methods to access the information. This is especially true for imagery taken on site. It is not uncommon for heritage projects to generate tens of thousands of images covering every detail and object in the scene. As the project expands, the number of images quickly becomes overwhelming [73,74]. An important factor in the number of images being generated is the Field-of-View (FoV). Typical cameras with 18–55 mm lenses produce a multitude of images compared to fisheye lenses or omni-directional cameras [64]. Not only does this result in increased file sizes, it also raises confusion in the project as there are significantly more files being generated. Several researchers have proposed methods to cope with the problems of photographic documentation. A popular approach is the use of panoramic imagery [75,76,77]. For instance, Jusof et al. [78] acquired High Dynamic Range (HDR) panoramic imagery in caverns and visualized it in a web browser for documentation purposes. The emphasis of their work is on the creation of imagery near the quality of the human eye [79]. As the focus is on detail, their Gigapixel imagery approach is time-consuming and needs post-processing to properly create the panoramic imagery. Fan et al. [80] also used HDR panoramic imagery to create an immersive heritage environment and provided an offline database to visualize the data. We propose a similar workflow but focus on a swift and easy-to-use approach that combines cheap sensors and lighting techniques to rapidly capture the scene. Mazzoleni et al. [81] proposed a local panoramic viewer linked to a semantic database for the immersive visualization of rock tombs. Currently, most applications still require the semi-automated stitching of the data, which slows down the process significantly [82,83]. Researchers have shown that panoramic imagery provides more context than conventional imagery. For instance, D’Annibale et al. [84,85] experimented with different viewers, including virtual reality glasses, to interact with imagery taken of Petra. Their work shows that, by integrating the imagery into a panorama, a better overview of the site can be realized without losing detail.
An important aspect of interpreting imagery is the geospatial component. A scene is easier to comprehend if the location of the imagery is known with respect to the objects and to other images. There are several methods to let viewers experience this geospatial information. The obvious choice is an accompanying map on which the location of each image is displayed. For instance, Woolner, Kwiatek and Tseng et al. [86,87,88] provided local applications with a map showing the position of the panoramas for the purpose of immersive storytelling. Additionally, some context can be displayed in the images themselves. Di Benedetto et al. [89] proposed the visualization of an adjacency graph with direction arrows to traverse between different viewing points based on scans of the structure. In our research, we propose both: simple navigation tools integrated in the imagery itself combined with an accompanying map based on Google Maps.
The concept of integrating geometry and images is often combined in a virtual tour. Based on the spatial information, users can intuitively traverse the scene in a digital environment. Imagery and additional non-metric information are presented at key locations as the users explore the scene. By presenting information at the right time and at the proper location, viewers can much better relate to the content of the site. Several researchers have proposed solutions for the production of virtual tours of heritage sites. Martinez-Grana [90] presented a virtual tour application of geological heritage using Google Earth and QR codes. They provided a digital environment of the Salamanca area, Spain that the user can intuitively traverse, along with several hyperlinks that refer to locally acquired imagery. Maicas et al. [91] created a virtual tour of the town of Lliria in Spain. Gonzalez-Delgado et al. [92] provided a similar environment and linked a database of non-metric information to the presented imagery. An interesting feature of virtual tours is the inclusion of the time aspect. For instance, Lozar et al. [93] and Nabil et al. [94] included time as the fourth dimension of their virtual tour to visualize the geological settlements and climate change throughout the history of the Torino region, Italy. Although not included in our presented work, the integration of the time aspect is possible in our application.
A major innovation in viewing applications is the development of web-based visualization techniques. By integrating the data in an online platform, the data instantly become available to every interested party, including non-experts. This is especially useful in applications that are interested in sharing information with the public and evaluating user experiences. Instead of only a few select experts assessing the scene, anyone in the world can now access the asset and potentially contribute to its evaluation. Aside from the standard touristic applications, examples include data mining, safety planning and attraction evaluation [95]. Hua et al. [96] provided a test case on a heritage site in the Fujian region in China which employs an Internet-based virtual experience for cultural tourism. They proposed a custom panoramic viewer along with historic information to facilitate information transfer to the guests. Bonacini et al. [97] investigated this technology to create a virtual tour of museums. Similar to our approach, they captured panoramic imagery at key locations and linked it through the Google API. The imagery can also be provided with geospatial information through crowd-sourcing. For instance, Dhonju et al. [98] proposed an online web-mapping application that allows users to share their acquired imagery of heritage assets and geolocate it within the viewer.

3.1. Test Case

A realistic test case is presented to demonstrate the capabilities of panoramic viewers. The Sint-Eustachius church is a Romanesque church built between 1300 and 1500. It was constructed in several stages and was altered multiple times. It has elements of both the Romanesque and the Gothic building period and has a rich history. Unfortunately, the church is in a dire state. The iron sandstone, which is the main building material, has deteriorated drastically over time. For safety reasons, the church is now indefinitely closed to the public. Decision makers from the municipality and the church fabric are under pressure from the public to draw up a long overdue conservation plan. To secure funding for the project, different stakeholders from the local government needed to be convinced of the relevance of the investment. It is in this context that the historic significance of the church was investigated along with its current state. Traditionally, this is done with a set of documents and images. While these are perfect as a reference framework, stakeholders other than historians and heritage experts can hardly relate to this data representation. Therefore, an immersive visualization tool was employed to better communicate the information. Concretely, there was the need to visualize the different construction phases of the church based on a visual inspection of the attic above the northern nave (Figure 6). The attic is little more than a crawl space with no lighting and numerous repetitive sections (Figure 1). Documentation attempts with conventional photography resulted in poorly lit, highly confusing imagery which failed to capture an overview of the space.

3.1.1. Data Acquisition

As previously stated, the emphasis of this section is on the creation of a fast and easy workflow to produce an immersive environment for visualization purposes. As the goal is to create an online platform, the amount of data was kept to a minimum. In addition, the heritage application should provide a general overview of the site. To comply with these requirements, the use of panoramic imagery is proposed. An omni-directional camera was employed to capture imagery at key locations in the project. More specifically, the attic was acquired with the Ricoh Theta S. Placed on a tripod, this pocket-sized 360° camera produces 14 MP imagery through its two 12 MP cameras. As the image stitching is performed onboard, no post-processing is required. In total, 20 panoramic images were captured. Figure 7 depicts the spread of the panoramas over the site in order to capture all the relevant information. A challenging aspect was the absence of proper ambient lighting. As no lights were present, artificial lighting was provided. Aside from several spotlights, a lighting panel placed directly underneath the sensor was used for homogeneous lighting (Figure 1, bottom right). Figure 8 depicts the resulting imagery of the Ricoh supported by the artificial lighting. Overall, it is observed that, even though the lighting conditions were not perfect, the illumination was appropriate for the documentation of the scene and that the lighting panel provided homogeneous lighting for the omni-directional camera.

3.1.2. Viewer

The acquired imagery was imported into the application along with the existing plans of the attic. The platform is based on an existing, proven viewer. More specifically, the Google API [99] is used as a basis for the application. An HTML-based website was created with the following components: an interactive map, a viewer and navigation functionalities. Detailed instructions can be found on the Google support pages (https://support.google.com) for the creation of customized maps, setting up the panoramic environment and embedding the functionalities. The map component of the application is based on the Google Maps API (Figure 7). First, a simple map based on prior plans is georeferenced and transformed to a .KML file. Next, Google Maps is used to create a custom map that includes both the target area and the imported plan. The resulting map handle or ID is embedded into the HTML code of the website. The panoramic viewer itself is initialized by the google.maps.StreetViewPanorama function (Figure 8). The viewer is given a tileSize and a worldSize equal to the size of the panoramic images. Each panorama is declared as a variable with a unique ID, description, latitude and longitude. A center heading is also defined with respect to the topographic north. The getCustomPanorama(pano) function controls the interface with the interactive map. When a pano location is selected, the viewer calls the appropriate panoramic image and loads it into the viewer. The navigation functions of the viewer are defined by white arrows that indicate the presence of other nearby panoramic images. The arrows can be used to quickly traverse the scene. An adjacency graph is constructed that links each panoramic image to its nearest neighbors. A white direction arrow, controlled by the link function, is created for each neighbor. The heading of each arrow is defined by the relative geospatial positioning of each neighboring image, projected onto the images. Additionally, individual panoramas can be selected on the map, or the search function can be used to locate a specific panoramic image based on the description of the individual variables.
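The viewer itself is plain HTML and JavaScript built on the Google Maps API; the sketch below only illustrates the pre-processing of the navigation metadata, i.e., linking each panorama to its nearest neighbors and computing the heading of the corresponding arrows from the geographic positions. The panorama IDs and coordinates are hypothetical.

```python
import math

def bearing(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

def build_links(panoramas, k=2):
    """For each panorama (id, lat, lon), return its k nearest neighbors together with
    the heading of the navigation arrow pointing towards them."""
    links = {}
    for pid, lat, lon in panoramas:
        others = [p for p in panoramas if p[0] != pid]
        others.sort(key=lambda p: (p[1] - lat) ** 2 + (p[2] - lon) ** 2)
        links[pid] = [{"pano": qid, "heading": bearing(lat, lon, qlat, qlon)}
                      for qid, qlat, qlon in others[:k]]
    return links

# Hypothetical panorama positions in the attic:
panos = [("attic_01", 50.9760, 4.9830), ("attic_02", 50.9761, 4.9832), ("attic_03", 50.9762, 4.9834)]
print(build_links(panos))   # feeds the navigation links of each custom panorama
```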
The resulting viewing application was hosted on an external server, making the data available to all stakeholders at the following URL: http://www.cantico.be/Zichem/Zichem. The viewer proved to be an insightful tool for the evaluation of the asset. The stakeholders from the local government in particular, who had no heritage background, could immediately relate to the presented imagery and thus make better informed decisions. Overall, the panoramic viewer provided much needed context for the discussion of the different building stages and hypotheses. In addition, it supported the discussion concerning the potential treatments as the attic could be intuitively traversed with the viewer.

4. Meshes

Traditional heritage archives such as archaeological drawings, high resolution images, maps and so on provide a general description of objects and sites that is well-suited for the digital compilation of archaeological findings. However, two-dimensional content is highly affected by external factors that limit an exhaustive study of the recorded data. Poor lighting conditions, for instance, prevent experts from detecting both raised and sunk relief textures in digital pictures. Inaccurate hand measurements negatively influence the three-dimensional perspective estimation of sketch-based heritage models. Therefore, there is a need to acquire highly dense geometric representations of archaeological monuments and artifacts. Remote sensing technologies and methodologies have been exploited in recent years for creating 3D content of tangible cultural heritage. A popular data representation is the mesh model, which offers a detailed geometric description of the physical properties of an object. More specifically, the exterior of the object is represented by a set of polygons, vertices and textured faces.
Mesh models have been widely used in the context of heritage documentation as a digital tool for the study and dissemination of cultural sites and objects. With the advent of 3D recording technologies and methodologies, it is now possible to create realistic meshes of historical sites and artifacts. These high-resolution models offer experts the opportunity to extract accurate descriptions from the mesh geometry [100]. For instance, three-dimensional primitives such as curves, lines, or planes provide a 3D object with a computational description of its geometric features. From this basis, the re-assembling of fractured objects is performed. Moreover, the photo-realistic texture of the model allows for the interaction between ancient monuments and the general public via virtual and augmented reality platforms [101].
The pipeline for creating a mesh consists of the following steps: data acquisition, data alignment and surface reconstruction. The first step covers the 3D acquisition systems and techniques for 3D recording. The second step seeks to transform the recorded data into the same coordinate system, in a process commonly known as registration. The resulting 3D point cloud is the input data for the surface reconstruction. This stage includes the computer vision techniques [102,103] to build the geometric structures that make up a mesh. These 3D modeling algorithms focus on converting the set of acquired points into a continuous surface by using either best-fit techniques [104,105] or 3D spatial interpolation methods [106,107]. Accordingly, the quality of the mesh heavily relies on the spatial resolution and metric accuracy of the recorded points. For heritage documentation purposes, every single point provides valuable information about the heritage monument, to such a degree that the density and accuracy of the point cloud are essential for restoration, conservation, monitoring and analysis. Therefore, rather than providing a detailed description of the algorithmic stages of surface construction, the literature review of mesh models focuses on the state of the art in remote sensing techniques and technologies to estimate an accurate, dense 3D point cloud well-suited for heritage applications. These include Terrestrial Laser Scanners (TLS), photogrammetry and structured light systems.
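As an illustration of the final surface reconstruction step, the following sketch turns a registered point cloud into a mesh with Poisson surface reconstruction in Open3D, one possible choice among the interpolation methods cited above; the file names are placeholders and the parameters would need tuning per dataset. The remainder of this section reviews the acquisition techniques themselves.

```python
import open3d as o3d

# Load a registered point cloud (e.g., exported from the registration software).
pcd = o3d.io.read_point_cloud("registered_scan.ply")       # placeholder file name

# Poisson reconstruction needs consistently oriented surface normals.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(20)

# Interpolate a continuous, watertight surface through the points.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

o3d.io.write_triangle_mesh("reconstructed_mesh.ply", mesh)
```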
TLS-based approaches
As mentioned, the adoption of TLS in digital heritage documentation has gained immense popularity since it offers accurate and dense 3D representations of the world. For instance, Rüther et al. [108] provided a step-by-step description of the process to create mesh models of African historical sites. This set of 3D models is part of a holistic collection of spatial data that aims at providing experts with digital tools for the preservation, 3D visualization and further study of the sites. To cope with the documentation of complex scenarios with poor light conditions and narrow passages, Rodriguez et al. [109] integrated TLS data to create a 3D model of a 20th century subterranean structure of World War I. The obtained 3D mesh deliverable served as a basis for the preservation, consolidation and valorization of underground fortifications. Neubauer et al. [110] mounted a TLS on a mobile platform to digitize the pyramids of Giza and the nearby Sphinx. The recording setup aided the scanning effort and helped to avoid occlusion by adjacent monuments. The principal usage of these models is to support the 3D documentation of the monuments on the Giza plateau [111]. TLS-based meshes along with 2D documentation have also been used to virtually reconstruct heritage sites that have fallen into a ruinous state because of natural disasters or looting activities. For example, Guidi et al. [23] described the workflow for the digital reconstruction of a group of ancient temples located at My Son. The digital geometric description of the site’s current state played a vital role in the virtual reconstruction process and served as a basis for a tenable evaluation of the gradual deterioration of the site.
Photogrammetry-based approaches
Photogrammetry has proven to be an efficient and low-cost remote sensing methodology to create 3D mesh models for heritage research and general public engagement [52]. The success of this 3D modeling approach relies on the photo-realistic texture of the acquired point cloud. This asset is a consequence of the rapid evolution of optical image recording systems in terms of pixel resolution, acutance and hardware compactness. Therefore, close-range and aerial photogrammetry approaches have been broadly used for the 3D documentation of monuments and artifacts. For instance, Tucci et al. [112] explored geomatics techniques, including photogrammetry, to create a high quality 3D model of the earthenware frieze of an ancient loggia in Italy. The obtained mesh not only contributed to the digital documentation but also supported the virtual and physical reconstruction of this art piece. Pierdicca et al. [113] proposed to combine spherical photogrammetry and structure from motion to obtain the 3D model of a symbolic building of the archaeological area of Chan Chan, Peru. This marriage of panoramic views and overlapping images provides sufficient information to construct an accurate and high quality model of the decorated walls and structures of the dilapidated building. These models are intended to facilitate the restoration of fragile monuments, thus preventing damage to their archaeological integrity. Archaeological excavation is another area that has taken advantage of the benefits of adopting 3D meshes for heritage documentation. Dellepiane et al. [114] used mesh models as a digital tool to monitor the excavation progress and the archaeological findings of an urban settlement in Uppåkra, Sweden. De Reu et al. [66] explored the possibilities and limitations of photogrammetry-based 3D models during and after the excavation process, applied to the case study of a historical abbey in Belgium. The obtained meshes served as a template to digitally draw the archaeological layers of the excavation progress. In contrast to terrestrial acquisition, aerial photogrammetry approaches have been used in recent years to obtain 3D mesh models of difficult-to-access locations, especially extensive areas surrounded by vegetation. For example, Dall’Asta et al. [115] proposed using aerial photogrammetry to digitize the overall extent of the emblematic Roman site of Veleia in Italy. The mesh model served as a framework for generating terrain models, on the basis of which a ground morphology description of the site was performed. For preservation purposes, Tariq and Haneef [116] proposed combining GPS points, terrestrial images and aerial survey data to construct an accurate mesh of a Pakistani monument located at Shakarparian.
Symbiosis: TLS and Photogrammetry
To get the best functionalities of TLS and photogrammetry, numerous studies have focused on exploring methodologies to generate mesh models from the merger of these recording approaches. For heritage documentation, this symbiosis has been widely exploited for the fully comprehensive study of monuments and archaeological sites [20]. For instance, Davis et al. [117] outlined a set of heritage management tools derived from combining dense data acquired with TLS and photo-realistic mesh models based on photogrammetry techniques. The set of deliverables includes textured 3D models, cross-sections, 2D drawings, 3D intensity maps and a VR application. These digital analysis tools assisted in the exhaustive research of three rock art sites located in the East Pilbara, Australia. For the sake of improving the metric accuracy of mesh models generated from different data sources, surveying technologies and techniques have been employed to support the registration of images and 3D scans. Grenzdörffer et al. [118], for example, determined a geographic reference system by using ground control points to accurately integrate 3D points retrieved from aerial photogrammetry and TLS. This approach allowed for the dense and accurate 3D digitalization of a complex 13th century heritage building in Greifswald, Germany. The final result provided a baseline for two important tasks: decisions on the reconstruction, and the architectural assessment of the monument. More recently, Poux et al. [119] fused TLS and photogrammetry data for the knowledge integration and dissemination of 3D quasi-planar archaeological objects, thus providing experts with the ability to automatically identify fundamental characteristics of the object such as color, material, light properties, area and so on based on an ontology classification. This approach relies on the high level information that becomes available with the fusion of complex geometric structures with high quality texture, i.e., 3D descriptors, geometric primitives, color, normals, shape, etc. Although the approach is restricted to planar objects, it dramatically contributes to answering research questions regarding the origin of the object, its current state and damage over time.
Structured-Light-based approaches
Aside from TLS and photogrammetry techniques, structured-light-based technology is also employed for 3D heritage recording. Unlike terrestrial and aerial recording methods, close-range active scanners have mainly been used to produce 3D meshes of small-to-medium archaeological artifacts and objects, since they offer sub-millimeter accuracy and robustness to distinct kinds of materials [120]. Therefore, this technology has been applied to the fields of pictorial artworks, the reassembling of archaeological ceramic pottery, osteology and so on. For example, Buchón-Moragues et al. [121] generated a millimeter-accurate mesh model of a wooden panel painting to develop a statistical analysis of its brushstroke characteristics. This study enabled experts to verify the authenticity of the artwork as well as to determine its period of origin. Di Pietra et al. [122] outlined the process of a 3D survey applied to an Egyptian sculpture. A hand-held scanner was used to produce a highly detailed mesh model, so that experts were able to identify fractured pieces and material characteristics. This study provided valuable information to numerically determine the conservation state of the piece. Mathys et al. [123] explored different close-range 3D acquisition methods applied to the 3D mesh reconstruction of a human skull from the Royal Belgian Institute of Natural Sciences collection. This research aimed to contrast the pros and cons of 3D recording systems in terms of accuracy, acquisition time, price and texture quality. The mesh model obtained with a structured-light-based scanner yielded a higher spatial resolution compared to photogrammetry and Reflectance Transformation Imaging (RTI). Papaioannou et al. [124] proposed to compute 3D shape descriptors on the basis of geometric primitives and facet normals to automatically reassemble heritage objects from damaged fragments.

Test Case

A realistic case study is presented in the context of Egyptian heritage documentation. The governor's tombs of the 15th Egyptian province, located at Der el Bersha, rank among the most important monuments of the Egyptian Middle Kingdom (ca. 2050–1650 B.C.) [125]. Unfortunately, due to natural catastrophes, quarrying and looting activities, this elite cemetery is now in a ruinous state (Figure 9). Consequently, its archaeological remains are fragmented and incomplete. In an effort to reconstruct these archaeological artifacts, traditional 2D recording techniques have been used to document the site [126]. However, the degree of damage of the remains and the short duration of the field missions have prevented experts from inferring sufficient knowledge for damage assessment, virtual reconstruction and hieroglyphic decoding. Therefore, the 3D digital documentation of the tangible heritage, more specifically the use of mesh models, is advisable for the geometric analysis and exhaustive research of the monument.
The main challenge lies in the distinct properties of the archaeological remains. Various funerary objects and rock fragments have been excavated, including pieces of wooden coffins, pottery artifacts, stones with raised relief decoration, and human skulls. Therefore, high-quality texture and millimeter accuracy are essential for the full understanding of these particular objects. These constraints make structured light a suitable approach for 3D recording. The Einscan Pro+ was chosen for the digitization, since it allowed us to accurately scan objects of different shapes, dimensions (5–200 cm), materials, and weights. This affordable scanner operates with two acquisition modes: fixed and handheld. The former is meant to digitize small objects, since a tripod and a turntable with markers are used to automatically scan the objects and register the acquired data (see Figure 10a). The latter is ideal for scanning larger artifacts, as scanning is performed while holding the scanner (see Figure 10b). In addition, a camera can be attached to the scanner to colorize the acquired points, thus texturing the polygonal structure of the mesh. The spatial resolution is 0.7–3 mm in the hand-held mode and 0.24 mm in the fixed mode. Apart from the hardware acquisition capabilities, the Einscan Pro+ software [127] can perform each step of the meshing process, from point acquisition, through real-time registration of the data, to surface reconstruction (Figure 11).
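Where the proprietary scanner software is not available, the meshing step can be reproduced with open-source tools. The sketch below, assuming the registered point cloud has been exported to a file (the name scan.ply is a placeholder), estimates normals and applies screened Poisson surface reconstruction with the Open3D library; this is a generic alternative, not the algorithm implemented in the Einscan software [127].

```python
import numpy as np
import open3d as o3d

# Load a registered point cloud exported from the scanning software (placeholder name).
pcd = o3d.io.read_point_cloud("scan.ply")

# Poisson reconstruction requires consistently oriented normals.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.005, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(30)

# Screened Poisson surface reconstruction; depth controls the octree resolution.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=10)

# Trim poorly supported triangles (low point density) and save the result.
densities = np.asarray(densities)
mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.02))
o3d.io.write_triangle_mesh("scan_mesh.ply", mesh)
```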
As a test case, several objects with varying properties were reconstructed: an Egyptian vessel, a restored cranium, and a medium-sized decorated block. The vessel was scanned using the handheld mode. First, the overall shape was scanned to obtain the coarse geometry of the piece. Then, we focused on scanning the details to capture the pathologies, as shown in Figure 12. The obtained mesh supported the study of the internal fractures of the piece, which are not visible in traditional archaeological images. The 3D model was also used to take accurate measurements of the pottery fragments, such as thickness and curvature. Based on the same approach but using the fixed mode, the cranium was digitized by first scanning the global shape with the turntable. Subsequently, the tripod was positioned so that the orbit, upper jaw and lower jaw were captured in detail. Figure 13 depicts the obtained meshes from cardinal perspectives; the middle image shows the inferior view of the skull without texture, illustrating the level of detail captured. The 3D models assisted in the osteological research of the skull, providing experts with detailed geometric information for tooth analysis and the morphological study of internal details. The process of scanning the decorated stone is shown in Figure 11. As noted, three positions were set up to scan all the angles of the object without damaging or altering its decoration. This way, experts are provided with geometric information on every face of the fragment (Figure 14), which is essential for digitally reassembling fractured archaeological remains. The latter is a potential application for this site, since numerous damaged fragments have been found over the years. Additionally, the geometric information of the mesh models provides archaeologists with the ability to properly analyze sunk and high relief. The software MeshLab [128] was used to analyze the geometry of the scanned models, including the density of the polygonal structure, the number of vertices, and the geometrical construction of the mesh. In addition, this mesh processing software allowed us to customize rendering properties such as shading and lighting, to enhance the visual appearance of the model.
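The geometric checks performed in MeshLab can also be scripted for batch processing. The short sketch below uses the open-source trimesh library as an assumed alternative (the file name vessel.ply is a placeholder) to report the quantities discussed above: vertex and face counts, watertightness, surface area and, for closed meshes, the enclosed volume, all expressed in the units of the exported model.

```python
import trimesh

# Placeholder file name; any mesh exported from the scanner software will do.
# trimesh.load returns a single Trimesh for a file containing one geometry.
mesh = trimesh.load("vessel.ply")

print(f"vertices:     {len(mesh.vertices)}")
print(f"triangles:    {len(mesh.faces)}")
print(f"watertight:   {mesh.is_watertight}")
print(f"surface area: {mesh.area:.4f}")

# The enclosed volume is only meaningful for watertight (closed) meshes.
if mesh.is_watertight:
    print(f"volume: {mesh.volume:.6f}")

# Mean edge length as a simple proxy for the density of the polygonal structure.
print(f"mean edge length: {mesh.edges_unique_length.mean():.5f}")
```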

5. 3D Game Content

A major obstacle to the adoption of highly dense point cloud data and meshes is software limitations. Most commercial software cannot cope with the vast amount of data that is acquired from heritage sites. With raw data file sizes of tens to hundreds of gigabytes, even simple operations such as viewing the data become troublesome. As explained in the previous section, by creating a set of meshes from the input point clouds, the data can be significantly reduced. However, the visualization of large, complex meshes is also not straightforward. Most software packages simply do not have the proper algorithms to handle this information. Furthermore, the few packages that can deal with it are typically field-specific and very expensive. Therefore, the need exists for easily accessible platforms that allow the visualization of the data and provide tools to analyze it.
A promising way to represent the heritage information of a site is to embed the data in a gaming engine. These engines are heavily optimized for mesh visualization and are perfectly capable of dealing with the complexity of the data acquired at heritage sites, thanks to the tremendous advances in computing power over the last decade and to intelligent visualization algorithms [129]. The integration of real-life information in gaming platforms is considered an instance of serious gaming. This is a relatively new field of research that focuses on using digital games not just for entertainment but for professional purposes as well [130,131]. By engaging with the digital representation of the real world through gaming mechanics, the user can intuitively traverse the data and perform a wide variety of actions. Research has shown that this unprecedented immersive interaction with the data is a successful tool to communicate information to both experts and novices [132,133]. This shows great potential when applied to heritage projects. For instance, Chen et al. [134] experimented with actual game mechanics such as leveling to communicate information concerning the Jing-Hang Grand Canal, China. They tested different platforms to determine the impact of the immersion level on the success of information communication. They found that, while immersion is not a requirement, it significantly enhances learning since processes can be visualized. Dagnino et al. [135] took it one step further and created an actual game within the environment so that players can learn by experiencing the content. Similarly, Kontogianni et al. [136] developed a serious game of the Stoa of Attalos in Athens, Greece, as have many others, including De Paolis et al. [137], Doulamis et al. [138] and Christopoulos et al. [139]. Mortara et al. [140] proposed different platforms for learning about cultural heritage. Based on an extensive literature study, they also assessed the challenges that serious games have to overcome. While current applications employ a mass-oriented approach, such as in museum settings, there is tremendous potential in personalized experience games, which would further strengthen the affinity to the site and stimulate learning even more. In addition, the step towards virtual and augmented reality is currently being investigated [141,142].
Overall, most games are oriented towards non-expert players to communicate information or to visualize the data. However, there is also a great opportunity for serious games targeting experts of a specific field [143]. For instance, Ruppel et al. [144] set up a link between a Building Information Model and a gaming platform to simulate evacuations. Lercari [145] reconstructed heritage houses to better understand the living patterns of our ancestors. From the game, it can also be derived where certain objects were located and what their purpose was. Additionally, the task of detecting complex patterns in historic surroundings is significantly facilitated if an expert can intuitively visit the site.
Aside from serious games, the game environment can also be used for analytical purposes. Users can remotely investigate inaccessible areas without perspective distortion and at the correct scale. Complex patterns only distinguishable by heritage experts can be detected by controlling the scene lighting, making objects transparent or taking measurements such as areas and volumes. This is especially relevant when the relation between objects must be visualized. Overall, gaming engines offer great potential for supporting different aspects of the conservation process as well as other industries.
Gaming engines can also accommodate meshes from sources other than data acquisition workflows. For instance, Bille et al. [146] extended their BIM to a gaming engine for interaction purposes. Barazzetti et al. [147] made their heritage model available on site by publishing their information through a gaming engine. Amirebrahimi et al. [148] investigated microscale flood damage by integrating GIS and BIM data in an engine. Many researchers have stated that the integration of HBIM is invaluable to the heritage field and that numerous applications are currently being developed to support the conservation and management process [149,150].

Test Case

The common denominator of serious games is the content. Every game relies heavily on the models used within the application. This is especially true for analyses and serious games, which often use real-world environments. In this section, we therefore discuss the process of creating an immersive visualization of highly detailed and accurate 3D content using a gaming engine.
As a test case, the documentation of the former torpedo installation of the Sint-Marie fortress in Zwijndrecht, Belgium is presented. Between 1881 and 1882, the installation was added to the fortress, making it the only torpedo base in Belgium. In the context of a project of the province of Antwerp to document all of its fortresses, bastions, bunkers and so on, a survey was ordered to document this unique site.
Because the site is located on the bank of the Scheldt River, the recording conditions were challenging (Figure 15). The bottom part of the site only emerges above the water at the lowest river tide, resulting in a very limited time frame in which the survey could be performed. Furthermore, the almost constant submersion causes the scene to be covered in mud and littered with rubble and vegetation. To document the torpedo base as completely and accurately as possible, the survey was conducted with a combination of TLS and photogrammetry, based on both terrestrial and aerial imagery.
The processing of all the captured data in the RealityCapture software resulted in a mesh consisting of circa 1 million triangles, which is a significant data reduction compared to the initial laser scans and imagery. To retain the highest detail, fourteen 4K-resolution texture maps were generated for the scene. For visualization purposes, the mesh was imported into a gaming engine. Such engines are typically heavily optimized for complex mesh visualization, for example by only loading the textures of the visible mesh parts or by lowering the texture quality for objects located further away. In this project, the freely available and popular gaming engine Unity was chosen. Since it is mainly designed for game development, it provides several tools to add 2D and 3D objects, a physics system, visual effects and rendering tools, the possibility to add animation and movement, and so on. These characteristics make gaming engines ideal for creating immersive visualizations and, at a later stage, serious games. For instance, appropriate lighting was assigned to the different parts of the mesh to increase visibility: a point light source was used to brighten the dark interior parts and to make the scene lighting more realistic (Figure 16, left).
To easily present the data to the client, an overview video of the project site was made. An animation path was added to the camera module along with a set of key frames; a video editing script interpolated the trajectory and created a video from the frames extracted along the path. Additionally, an immersive 3D visualization game of the site was produced. A script was added to the camera to allow player movement. This way, a user can intuitively traverse the scene and inspect any detail at maximum resolution. By building the final game as an executable file, the user can run it with all of its components without installing any additional software. As a result, the game is useful not only for heritage experts but also for other stakeholders, who can virtually explore the scene and perform their own analyses.
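In our case, the camera animation and frame capture were handled inside Unity itself. As a language-neutral illustration of the two steps involved, the sketch below interpolates camera positions between a handful of key frames with SciPy and stitches already rendered frame images into a video with OpenCV; the key-frame coordinates, file names and frame rate are assumptions made for the example.

```python
import glob

import cv2
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical key frames: time stamps [s] and camera positions (x, y, z) in scene units.
key_times = np.array([0.0, 4.0, 8.0, 12.0])
key_positions = np.array([[0, 0, 2], [10, 2, 3], [18, 6, 2], [25, 6, 5]], dtype=float)

# Smooth trajectory sampled at 30 fps; the engine renders one frame per sample.
fps = 30
spline = CubicSpline(key_times, key_positions, axis=0)
times = np.arange(key_times[0], key_times[-1], 1.0 / fps)
trajectory = spline(times)                                  # (N, 3) camera positions
np.savetxt("camera_path.csv", trajectory, delimiter=",", header="x,y,z")

# Assemble the rendered frames (frames/frame_0000.png, ...) into an MP4 video.
frames = sorted(glob.glob("frames/frame_*.png"))
height, width = cv2.imread(frames[0]).shape[:2]
writer = cv2.VideoWriter("flythrough.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                         fps, (width, height))
for path in frames:
    writer.write(cv2.imread(path))
writer.release()
```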
The resulting executable can serve as a basis for a game or analysis application. There are numerous possibilities to add functionality and develop the end product further, e.g., by adding measurement tools so that heritage experts can remotely calculate distances and areas without visiting the site. Furthermore, the game can also be played with VR goggles to increase the degree of immersion. Different types of analyses can be performed by simulating lighting, weather conditions and so on. Overall, the approach significantly lowers the threshold to interact with the data and offers an intuitive development platform.

6. Conclusions

Innovative remote sensing techniques are becoming more accessible for heritage projects. Along with this, the amount of data captured at heritage sites is rapidly increasing. To properly use this information for analysis or documentation purposes, a good understanding of how to transfer the raw information into a useful set of deliverables is mandatory. However, there currently exists a gap between the deliverables and the methods of acquiring raw information from heritage sites. In this work, an overview is presented of possible workflows to represent the information in an accessible way with limited loss of information. An extensive literature study was performed to identify the needs of the industry and the shortcomings of current heritage workflows. In response, a set of general deliverables is proposed that is closely related to the current processes. The goal of this paper is to provide heritage experts with the tools to better document and communicate information about tangible heritage. More specifically, the emphasis of the work is on the physical appearance of the objects.
In the practical study, four methods are discussed that integrate the innovative technologies and are easy to use for both experts and novices. The advantages and opportunities of each deliverable are presented with respect to the current literature. Furthermore, a test case is presented for each technique to provide heritage experts with a practical example of how to implement the deliverable. The first method is the use of orthomosaics. These aggregates of reprojected images are free of perspective distortion and have the proper scale. They contain significantly more information than conventional imagery since they can be used for measurements and give a more realistic overview of the asset. While typically employed for aerial applications, this technique can also be used to document indoor spaces that suffer from occlusions and low accessibility. Aside from orthomosaics produced from imagery and a set of control points, this information can also be generated by integrating point cloud data and imagery. This results in more accurate information and increases the consistency of the technique in complex environments.
The second set of deliverables consists of panoramic images. A panoramic application is developed based on the Google API that combines omni-directional imagery and geospatial maps to provide an immersive and comprehensive viewer. Heritage experts can use this technique to better display heritage scenes where conventional imagery would struggle. Additionally, the accompanying map provides much-needed context of the environment.
The third set of deliverables consists of meshes. These surface-based geometric representations offer a watertight description of the geometry and also allow the integration of highly detailed textures. Furthermore, the data can be significantly reduced since planar surfaces can be represented more efficiently. Meshes can be used for a wide range of 3D applications as they allow interaction with the information in 3D. By doing so, crucial details can be analyzed, which is unfeasible in 2D.
The final deliverable is the embedding of the geometry in gaming engines, which are typically heavily optimized for mesh visualization. Within this gaming environment, experts and novices can intuitively access the information in a low-cost and efficient manner. Numerous possibilities are already available that stimulate users to learn from, analyze and interact with the data. This is a very promising tool for heritage applications, as the immersive environment of a gaming engine allows users to experience cultural heritage with unprecedented detail and accuracy.

Author Contributions

All authors contributed equally to this work. Authors’ research group can be found at https://iiw.kuleuven.be/onderzoek/geomatics.

Funding

This project received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement 779962), from the Flemish research foundation FWO (strategic basic research project, No. 1S11218N) and from the Geomatics research group of the Department of Civil Engineering, TC Construction at the KU Leuven in Belgium. The authors acknowledge the support of the Puzzling Tombs project (nr. 3H170337), funded by the KU Leuven Bijzonder Onderzoeksfonds. In addition, the company Profiel cvba is thanked for their data.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. ICOMOS. The ICOMOS Charter for the Interpretation and Presentation of Cultural Heritage Sites; International Council on Monuments and Sites: Paris, France, 2008; pp. 1–8. [Google Scholar]
  2. Bentkowska-kafel, A.; Macdonald, L. Digital Techniques for Documenting and Preserving Cultural Heritage; Collection Development, Cultural Heritage, and Digital Humanities; Arc Humanities Press: Leeds, UK, 2018; Volume 1. [Google Scholar]
  3. Van Genechten, B. Creating Built Heritage Orthophotographs from Laser Scans. Ph.D. Thesis, KU Leuven, Leuven, Belgium, 2009. [Google Scholar]
  4. Logothetis, S.; Delinasiou, A.; Stylianidis, E. Building Information Modelling for Cultural Heritage: A review. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, II-5/W3, 177–183. [Google Scholar] [CrossRef]
  5. Turco, M.L.; Caputo, F.; Fusaro, G. From Integrated Survey to the Parametric Modeling of Degradations. A Feasible Workflow. In Digital Heritage. Progress in Cultural Heritage: Documentation, Preservation, and Protection; Springer: Cham, Switzerland, 2016; Volume 10058, pp. 1–12. [Google Scholar]
  6. Arias, P.; Armesto, J.; Lorenzo, H.; Al, E. Digital photogrammetry, GPR and finite elements in heritage documentation: Geometry and Structural Damages. In Proceedings of the ISPRS Commission V Symposium Image Engineering and Vision Metrology, Dresden, Germany, 25–27 September 2006. [Google Scholar] [CrossRef]
  7. Koller, D.; Frischer, B.; Humphreys, G. Research challenges for digital archives of 3D cultural heritage models. J. Comput. Cult. Herit. 2009, 2, 7. [Google Scholar] [CrossRef]
  8. Evans, D.; Traviglia, A. Uncovering Angkor: Integrated remote sensing applications in the archaeology of early Cambodia. In Satellite Remote Sensing; Springer: Dordrecht, The Netherlands, 2012. [Google Scholar]
  9. Makuvaza, S. Aspects of Management Planning for Cultural World Heritage Sites: Principles, Approaches and Practices; Springer International Publishing AG: Basel, Switzerland, 2018. [Google Scholar]
  10. Traviglia, A.; Torsello, A. Landscape Pattern Detection in Archaeological Remote Sensing. Geosciences 2017, 7, 128. [Google Scholar] [CrossRef]
  11. Remondino, F.; Rizzi, A. Reality-based 3D documentation of natural and cultural heritage sites-techniques, problems, and examples. Appl. Geomatics 2010, 2, 85–100. [Google Scholar] [CrossRef]
  12. Haddad, N.A. From ground surveying to 3D laser scanner: A review of techniques used for spatial documentation of historic sites. J. King Saud Univ.—Eng. Sci. 2011, 23, 109–118. [Google Scholar] [CrossRef]
  13. Yastikli, N. Documentation of cultural heritage using digital photogrammetry and laser scanning. J. Cult. Herit. 2007, 8, 423–427. [Google Scholar] [CrossRef]
  14. Ntregka, A.; Georgopoulos, A.; Santana Quintero, M. Photogrammetric exploitation of HDR images for cultural heritage documentation. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, II-5/W1, 209–214. [Google Scholar] [CrossRef]
  15. Makantasis, K.; Doulamis, A.; Doulamis, N.D.; Ioannides, M. In the wild image retrieval and clustering for 3D cultural heritage landmarks reconstruction. Multimed. Tools Appl. 2016, 75. [Google Scholar] [CrossRef]
  16. Traviglia, A.; Cottica, D. Remote sensing applications and archaeological research in the Northern Lagoon of Venice: The case of the lost settlement of Constanciacus. J. Archaeol. Sci. 2011, 38, 2040–2050. [Google Scholar] [CrossRef]
  17. Erenoglu, R.C.; Akcay, O.; Erenoglu, O. An UAS-assisted multi-sensor approach for 3D modeling and reconstruction of cultural heritage site. J. Cult. Herit. 2017, 26, 79–90. [Google Scholar] [CrossRef]
  18. Guarnieri, A.; Fissore, F.; Masiero, A.; Di Donna, A.; Coppa, U.; Vettore, A. From survey to fem analysis for documentation of built heritage: The case study of villa revedin-bolasco. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci.—ISPRS Arch. 2017, 42, 527–533. [Google Scholar] [CrossRef]
  19. Guarnieri, A.; Fissore, F.; Masiero, A.; Vettore, A. From Tls Survey To 3D Solid Modeling for Documentation of Built Heritage: the Case Study of Porta Savonarola in Padua. ISPRS—Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, XLII-2/W5, 303–308. [Google Scholar] [CrossRef]
  20. Remondino, F. Heritage recording and 3D modeling with photogrammetry and 3D scanning. Remote Sens. 2011, 3, 1104–1138. [Google Scholar] [CrossRef] [Green Version]
  21. Fritsch, D.; Klein, M. 3D preservation of buildings—Reconstructing the past. Multimed. Tools Appl. 2018, 77, 9153–9170. [Google Scholar] [CrossRef]
  22. Hassani, F. Documentation of cultural heritage techniques, potentials and constraints. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci.—ISPRS Arch. 2015, 40, 207–214. [Google Scholar] [CrossRef]
  23. Guidi, G.; Russo, M.; Angheleddu, D. 3D survey and virtual reconstruction of archeological sites. Digit. Appl. Archaeol. Cult. Herit. 2014, 1, 55–69. [Google Scholar] [CrossRef]
  24. Ortiz, P.; Sánchez, H.; Pires, H.; Pérez, J.A. Experiences about fusioning 3D digitalization techniques for cultural heritage documentation. In Proceedings of the ISPRS Commission V Symposium `Image Engineering and Vision Metrology’, Dresden, Germany, 25–27 September 2006; pp. 224–229. [Google Scholar]
  25. Yilmaz, H.M.; Yakar, M.; Gulec, S.A.; Dulgerler, O.N. Importance of digital close-range photogrammetry in documentation of cultural heritage. J. Cult. Herit. 2007, 8, 428–433. [Google Scholar] [CrossRef]
  26. Balsa-Barreiro, J.; Fritsch, D. Generation of visually aesthetic and detailed 3D models of historical cities by using laser scanning and digital photogrammetry. Digit. Appl. Archaeol. Cult. Herit. 2017, 8, 57–64. [Google Scholar] [CrossRef]
  27. Salonia, P.; Scolastico, S.; Pozzi, A.; Marcolongo, A.; Messina, T.L. Multi-Scale cultural heritage survey: Quick digital photogrammetric systems. J. Cult. Herit. 2009, 10, 59–64. [Google Scholar] [CrossRef]
  28. Alsadik, B.; Gerke, M.; Vosselman, G. Efficient Use of Video for 3D Modelling of Cultural Heritage Objects. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, II-3/W4, 1–8. [Google Scholar] [CrossRef]
  29. Teza, G.; Pesci, A.; Ninfo, A. Morphological Analysis for Architectural Applications: Comparison between Laser Scanning and Photogrammetry, Structure-from-motion. J. Surv. Eng. 2016, 142, 04016004. [Google Scholar] [CrossRef]
  30. Galizia, M.; Inzerillo, L.; Santagati, C. Heritage and Technology: Novel Approaches to 3D Documentation and Communication of Architectural Heritage. In Proceedings of the heritage and technology Mind Knowledge Experience, Aversa, Italy, 11–13 June 2015. [Google Scholar]
  31. Hess, M.; Petrovic, V.; Meyer, D.; Rissolo, D.; Kuester, F. Fusion of multimodal three-dimensional data for comprehensive digital documentation of cultural heritage sites. In Proceedings of the 2015 Digital Heritage International Congress, Granada, Spain, 28 September–2 October 2015; pp. 595–602. [Google Scholar]
  32. Pepe, M.; Parente, C. Cultural heritage documentation in SIS environment: An application for “Porta Sirena” in the archaeological site of Paestum. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci.—ISPRS Arch. 2017, 42, 427–432. [Google Scholar] [CrossRef]
  33. Liang, H.; Li, W.; Lai, S.; Zhu, L.; Jiang, W.; Zhang, Q. The integration of terrestrial laser scanning and terrestrial and unmanned aerial vehicle digital photogrammetry for the documentation of Chinese classical gardens—A case study of Huanxiu Shanzhuang, Suzhou, China. J. Cult. Herit. 2018. [Google Scholar] [CrossRef]
  34. Pritchard, D.; Sperner, J.; Hoepner, S.; Tenschert, R. Terrestrial laser scanning for heritage conservation: The Cologne Cathedral documentation project. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 4, 213–220. [Google Scholar] [CrossRef]
  35. Hanan, H.; Suwardhi, D.; Nurhasanah, T.; Bukit, E.S. Batak Toba Cultural Heritage and Close-range Photogrammetry. Procedia—Soc. Behav. Sci. 2015, 184, 187–195. [Google Scholar] [CrossRef]
  36. Núñez Andrés, A.; Buill Pozuelo, F.; Regot Marimón, J.; de Mesa Gisbert, A. Generation of virtual models of cultural heritage. J. Cult. Herit. 2012, 13, 103–106. [Google Scholar] [CrossRef]
  37. Armesto, J.; Ordonez, C.; Alejano, L.; Arias, P. Terrestrial laser scanning used to determine the geometry of a granite boulder for stability analysis purposes. Geomorphology 2009, 106, 271–277. [Google Scholar] [CrossRef]
  38. Branco, J.; Varum, H. Behaviour of Traditional Portuguese Timber Roof Structures; Oregon State University: Corvallis, OR, USA, 2006. [Google Scholar]
  39. Koehl, M.; Viale, A.; Reeb, S. a Historical Timber Frame Model for Diagnosis and Documentation Before Building Restoration. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, II, 27–29. [Google Scholar]
  40. Riveiro, B.; Morer, P.; Arias, P.; de Arteaga, I. Terrestrial laser scanning and limit analysis of masonry arch bridges. Constr. Build. Mater. 2011, 25, 1726–1735. [Google Scholar] [CrossRef]
  41. Bassier, M.; Hadjidemetriou, G.; Vergauwen, M.; Van Roy, N.; Verstrynge, E. Implementation of Scan-to-BIM and FEM for the Documentation and Analysis of Heritage Timber Roof Structures. In Digital Heritage. Progress in Cultural Heritage: Documentation, Preservation, and Protection; Springer: Cham, Switzerland, 2016; Volume 10058, pp. 1–12. [Google Scholar]
  42. Armesto, J.; Arias, P.; Roca, J.; Lorenzo, H. Monitoring and Assessing Structural Damage in Historic Buildings. Photogramm. Rec. 2008, 23, 36–50. [Google Scholar] [CrossRef] [Green Version]
  43. Barazzetti, L.; Banfi, F.; Brumana, R.; Gusmeroli, G.; Previtali, M.; Schiantarelli, G. Cloud-to-BIM-to-FEM: Structural simulation with accurate historic BIM from laser scans. Simul. Model. Pract. Theory 2015, 57, 71–87. [Google Scholar] [CrossRef]
  44. Barazzetti, L.; Banfi, F.; Brumana, R.; Previtali, M.; Roncoroni, F. Integrated Modeling and Monitoring of the Medieval Bridge Azzone Visconti. In Proceedings of the 8th European Workshop on Structural Health Monitoring (EWSHM 2016), Bilbao, Spain, 5–8 July 2016; pp. 5–8. [Google Scholar]
  45. Brumana, R.; Georgopoulos, A.; Oreni, D.; Raimondi, A. HBIM for Documentation, Dissemination and Management of Built Heritage. The Case Study of St. Maria in Scaria d’Intelvi. Int. J. Herit. Digit. Era 2013, 2. [Google Scholar] [CrossRef]
  46. Murphy, M.; McGovern, E.; Pavia, S. Historic Building Information Modelling - Adding intelligence to laser and image based surveys of European classical architecture. ISPRS J. Photogramm. Remote Sens. 2013, 76, 89–102. [Google Scholar] [CrossRef]
  47. Dore, C.; Murphy, M. Semi-Automatic Modelling of Building Facades with Shape Grammars Using Historic Building Information Modelling. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences—3D Virtual Reconstruction and Visualization of Complex Architectures, Trento, Italy, 25–26 February 2013; Volume XL. [Google Scholar]
  48. Brusaporci, S.; Maiezza, P.; Tata, A. A framework for architectural heritage hbim semantization and development. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci.—ISPRS Arch. 2018, 42, 179–184. [Google Scholar] [CrossRef]
  49. Laing, R.; Leon, M.; Isaacs, J.; Georgiev, D. Scan to BIM: the development of a clear workflow for the incorporation of point clouds within a BIM environment. WIT Trans. Built Environ. 2015, 149, 279–289. [Google Scholar] [CrossRef]
  50. Oreni, D.; Brumana, R.; Georgopoulos, A.; Cuca, B. Hbim for Conservation and Management of Built Heritage: Towards a Library of Vaults and Wooden Bean Floors. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, II-5/W1, 215–221. [Google Scholar] [CrossRef]
  51. Oreni, D.; Brumana, R.; Della Torre, S.; Banfi, F.; Barazzetti, L.; Previtali, M. Survey turned into HBIM: the restoration and the work involved concerning the Basilica di Collemaggio after the earthquake (L’Aquila). ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, II-5, 267–273. [Google Scholar] [CrossRef]
  52. McCarthy, J. Multi-image photogrammetry as a practical tool for cultural heritage survey and community engagement. J. Archaeol. Sci. 2014, 43, 175–185. [Google Scholar] [CrossRef]
  53. Verhoeven, G.; Doneus, M.; Briese, C.; Vermeulen, F. Mapping by matching: A computer vision-based approach to fast and accurate georeferencing of archaeological aerial photographs. J. Archaeol. Sci. 2012, 39, 2060–2070. [Google Scholar] [CrossRef]
  54. Chiabrando, F.; Donadio, E.; Rinaudo, F. SfM for orthophoto generation: Awinning approach for cultural heritage knowledge. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci.—ISPRS Arch. 2015, 40, 91–98. [Google Scholar] [CrossRef]
  55. Lowe, D.G. Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Corfu, Greece, 20–25 September 1999; Volume 2, pp. 1150–1157. [Google Scholar]
  56. Bay, H.; Ess, A.; Tuytelaars, T.; Gool, L.V.; Van Gool, L. Speeded-Up Robust Features (SURF). Comput. Vis. Image Understand. 2008, 110, 346–359. [Google Scholar] [CrossRef] [Green Version]
  57. Habib, A.F.; Kim, E.M.; Kim, C.J.C. New Methodologies for True Orthophoto Generation. Photogramm. Eng. Remote Sens. 2007, 73, 25–36. [Google Scholar] [CrossRef] [Green Version]
  58. Kato, A.; Moskal, L.M.; Schiess, P.; Calhoun, D.; Swanson, M.E. True Orthophoto Creation Through Fusion of Lidar Derived Digital Surface Model and Aerial Photos. Symp. A Q. J. Mod. Foreign Lit. 2010, XXXVIII, 88–93. [Google Scholar]
  59. Krzystek, P. Fully Automatic Measurement of Digital Elevation Models with MATCH-T. Schriftenreihe des Institut für Photogrammetrie der Universität Stuttgart, 1991; Volume 15, pp. 165–182. Available online: https://www.researchgate.net/publication/239065724_Fully_Automatic_Measurement_of_Digital_Elevation_Models_with_MATCH-T (accessed on 8 August 2018).
  60. Verhoeven, G.; Taelman, D.; Vermeulen, F. Computer Vision-Based Orthophoto Mapping Of Complex Archaeological Sites: The Ancient Quarry Of Pitaranha (Portugal-Spain). Archaeometry 2012, 54, 1114–1129. [Google Scholar] [CrossRef]
  61. Cowley, D.; Ferguson, L. Historic aerial photographs for archaeology and heritage management. In Space, Time, Place, Proceedings of the Third International Conference on Remote Sensing in Archaeology, Tiruchirappalli, Tamil Nadu, India, 17–21 August 2009; BAR Publishing: Oxford, UK, 2010; pp. 97–104. [Google Scholar]
  62. Cowley, D.; Moriarty, C.; Geddes, G.; Brown, G.; Wade, T.; Nichol, C. UAVs in Context: Archaeological Airborne Recording in a National Body of Survey and Record. Drones 2017, 2, 2. [Google Scholar] [CrossRef]
  63. Colomina, I.; Molina, P. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2014, 92, 79–97. [Google Scholar] [CrossRef]
  64. Yastikli, N.; Özerdem, O.Z. Architectural heritage documentation by using low cost uav with fisheye lens: Otag-I Jumayun in Istanbul as a case study. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 4, 415–418. [Google Scholar] [CrossRef]
  65. Themistokleous, K.; Agapiou, A.; Cuca, B.; Hadjimitsis, D.G. 3D documentation of Fabrica Hills caverns using terrestrial and low cost UAV equipment. EuroMed 2014, 1, 59–69. [Google Scholar]
  66. De Reu, J.; De Smedt, P.; Herremans, D.; Van Meirvenne, M.; Laloo, P.; De Clercq, W. On introducing an image-based 3D reconstruction method in archaeological excavation practice. J. Archaeol. Sci. 2014, 41, 251–262. [Google Scholar] [CrossRef]
  67. Markiewicz, J.S.; Podlasiak, P.; Zawieska, D. Attempts to automate the process of generation of orthoimages of objects of cultural heritage. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci.—ISPRS Arch. 2015, 40, 393–400. [Google Scholar] [CrossRef]
  68. Jalandoni, A.; Domingo, I.; Taçon, P.S. Testing the value of low-cost Structure-from-Motion (SfM) photogrammetry for metric and visual analysis of rock art. J. Archaeol. Sci. Rep. 2018, 17, 605–616. [Google Scholar] [CrossRef]
  69. Monna, F.; Esin, Y.; Jerome, M.; Ludovic, G.; Navarro, N.; Jozef, W.; Saligny, L.; Couette, S.; Dumontet, A.; Chateau, C. Documenting carved stones by 3D modelling—Example of Mongolian deer stones. J. Cult. Herit. 2018, 1–11. [Google Scholar] [CrossRef]
  70. Oliveira, A.; Oliveira, J.F.; Pereira, J.M.; de Araújo, B.R.; Boavida, J. 3D modelling of laser scanned and photogrammetric data for digital documentation: The Mosteiro da Batalha case study. J. Real-Time Image Process. 2012, 9, 673–688. [Google Scholar] [CrossRef]
  71. Koska, B.; Kremen, T. The Combination of Laser Scanning and Structure from Motion Technology for Creation of Accurate Exterior and Interior Orthophotos of St. Nicholas Baroque Church. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, XL, 25–26. [Google Scholar] [CrossRef]
  72. Gonzalez-Aguilera, D.; Muñoz, A.L.; Lahoz, J.G.; Herrero, J.S.; Corchón, M.S.; García, E. Recording and modeling paleolithic caves through laser scanning. In Proceedings of the International Conference on Advanced Geographic Information Systems and Web Services (GEOWS 2009), Cancun, Mexico, 1–7 February 2009; pp. 19–26. [Google Scholar]
  73. Galatsanos, N.P.; Chin, R.T. Digital Imaging for Cultural Heritage Preservation. IEEE Trans. Acoust. Speech Signal Process. 1989, 37, 415–421. [Google Scholar] [CrossRef]
  74. Yokoi, K. Application of Virtual Reality (Vr) Panorama for Townscape Monitoring in the World. Master’s Thesis, Department of International Development Engineering, Graduate School of Engineering, Tokyo Institute of Technology, Tokyo, Japan, 2013; pp. 4–5. [Google Scholar]
  75. Pisa, C.; Zeppa, F.; Fangi, G. Spherical Photogrammetry for Cultural Heritage. In Proceedings of the Second Workshop on EHeritage and Digital Art Preservation, Firenze, Italy, 25–29 October 2010; pp. 3–6. [Google Scholar]
  76. Bourke, P. Novel imaging of heritage objects and sites. In Proceedings of the 2014 International Conference on Virtual Systems and Multimedia (VSMM 2014), Hong Kong, China, 9–12 December 2014; pp. 25–30. [Google Scholar]
  77. Grussenmeyer, P.; Landes, T.; Alby, E.; Carozza, L. High Resolution 3D Recording and Modelling of the Bronze Age Cave “Les Fraux” in Perigord (France). Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2010, 38, 262–267. [Google Scholar]
  78. Jusof, M.J.; Rahim, H.R.A. Revealing visual details via high dynamic range gigapixels spherical panorama photography: The Tempurung Cave natural heritage site. In Proceedings of the 2014 International Conference on Virtual Systems and Multimedia (VSMM 2014), Hong Kong, China, 9–12 December 2014; pp. 193–200. [Google Scholar]
  79. Roussou. New Heritage: New Media and Cultural Heritage; Technical Report; Routledge: London, UK; New York, NY, USA, 2008. [Google Scholar]
  80. Fan, J.; Fan, Y.; Pei, J. HDR spherical panoramic image technology and its applications in ancient building heritage protection. In Proceedings of the 2009 IEEE 10th International Conference on Computer-Aided Industrial Design and Conceptual Design: E-Business, Creative Design, Manufacturing—CAID and CD’2009, Wenzhou, China, 26–29 November 2009; pp. 1549–1553. [Google Scholar]
  81. Mazzoleni, P.; Valtolina, S.; Franzoni, S.; Mussio, P.; Bertino, E. Towards a contextualized access to the cultural heritage world using 360 panoramic images. In Proceedings of the 18th International Conference on Software Engineering and Knowledge Engineering (SEKE 2006), San Francisco, CA, USA, 5–7 July 2006; ISBN 1600591965. [Google Scholar]
  82. McCollough, F. Complete Guide to High Dynamic Range Digital Photography; Number Dec, Scitech Book News; Pixiq: New York, NY, USA, 2008; p. 400. [Google Scholar]
  83. Vincent, M.L.; DeFanti, T.; Schulze, J.; Kuester, F.; Levy, T. Stereo panorama photography in archaeology: Bringing the past into the present through CAVEcams and immersive virtual environments. In Proceedings of the 2013 Digital Heritage International Congress, Marseille, France, 28 October–1 November 2013; p. 455. [Google Scholar]
  84. Annibale, E.D.; Fangi, G. Interactive Modelling By Projection of Oriented. In Proceedings of the ISPRS International Workshop on 3D Virtual Reconstruction and Visualization of Complex Architectures (3D-Arch’2009), Trento, Italy, 25–28 February 2009. [Google Scholar]
  85. D’Annibale, E. New VR system for navigation and documentation of Cultural Heritage. CIPA Workshop Petra 2010, 1985, 4–8. [Google Scholar]
  86. Woolner, M.; Kwiatek, K. Embedding Interactive Storytelling Within Still and Video Panoramas for Cultural Heritage Sites. In Proceedings of the 15th International Conference on Virtual Systems and Multimedia (VSMM’09), Vienna, Austria, 9–12 September 2009. [Google Scholar]
  87. Kwiatek, K.; Woolner, M. Transporting the viewer into a 360° heritage story: Panoramic interactive narrative presented on a wrap-around screen. In Proceedings of the 2010 16th International Conference on Virtual Systems and Multimedia (VSMM 2010), Seoul, Korea, 20–23 October 2010; pp. 234–241. [Google Scholar]
  88. Tseng, Y.K.; Chen, H.K.; Hsu, P.Y. The Use of Digital Images Recording Historical Sites and ‘Spirit of Place’: A Case Study of Xuejia Tzu-chi Temple. Int. J. Hum. Arts Comput. 2013, 7, 156–171. [Google Scholar] [CrossRef]
  89. Di Benedetto, M.; Ganovelli, F.; Balsa Rodriguez, M.; Jaspe Villanueva, A.; Scopigno, R.; Gobbetti, E. ExploreMaps: Efficient construction and ubiquitous exploration of panoramic view graphs of complex 3D environments. Comput. Gr. Forum 2014, 33, 459–468. [Google Scholar] [CrossRef]
  90. Martínez-Graña, A.M.; Goy, J.L.; Cimarra, C.A. A virtual tour of geological heritage: Valourising geodiversity using Google earth and QR code. Comput. Geosci. 2013, 61, 83–93. [Google Scholar] [CrossRef]
  91. Maicas, J.; Blasco, V. EDETA 360°: VIRTUAL TOUR FOR VISITING THE HERITAGE OF LLÍRIA (SPAIN). In Proceedings of the 8th International Congress on Archaeology, Computer Graphics, Cultural Heritage and Innovation, Valencia, Spain, 5–7 September 2016; pp. 376–378. [Google Scholar]
  92. González-Delgado, J.A.; Martínez-Graña, A.M.; Civis, J.; Sierro, F.J.; Goy, J.L.; Dabrio, C.J.; Ruiz, F.; González-Regalado, M.L.; Abad, M. Virtual 3D tour of the Neogene palaeontological heritage of Huelva (Guadalquivir Basin, Spain). Environ. Earth Sci. 2014, 73, 4609–4618. [Google Scholar] [CrossRef]
  93. Lozar, F.; Clari, P.; Dela Pierre, F.; Natalicchio, M.; Bernardi, E.; Violanti, D.; Costa, E.; Giardino, M. Virtual tour of past environmental and climate change: The Messinian succession of the Tertiary Piedmont Basin (Italy). Geoheritage 2015, 7, 47–56. [Google Scholar] [CrossRef]
  94. Nabil, M.; Said, A. Time-Lapse Panoramas for the Egyptian Heritage. In Proceedings of the 18th International Conference on Cultural Heritage and New Technologies 2013 (CHNT 18, 2013), Vienna, Austria, 12–15 November 2013; pp. 1–8. [Google Scholar]
  95. Beraldin, J.-A.; Picard, M.; El-Hakim, S.; Godin, G.; Borgeat, L.; Blais, F.; Paquet, E.; Rioux, M.; Valzano, V.; Bandiera, A. Virtual Reconstruction of Heritage Sites: Opportunities and Challenges Created by 3D Technologies. In Proceedings of the International Workshop on Recording, Modeling and Visualization of Cultural Heritage, Ascona, Switzerland, 22–27 May 2005. [Google Scholar]
  96. Hua, L.; Chen, C.; Fang, H.; Wang, X. 3D documentation on Chinese Hakka Tulou and Internet-based virtual experience for cultural tourism: A case study of Yongding County, Fujian. J. Cult. Herit. 2018, 29, 173–179. [Google Scholar] [CrossRef]
  97. Bonacinia, E. A pilot project aerial street view tour at the valley of the temples (Agrigento). In Proceedings of the 8th International Congress on Archaeology, Computer Graphics, Cultural Heritage and Innovation, Valencia, Spain, 5–7 September 2016; pp. 430–432. [Google Scholar]
  98. Dhonju, H.K.; Xiao, W.; Shakya, B.; Mills, J.P.; Sarhosis, V. Documentation of Heritage Structures Through Geo- Crowdsourcing and Web-Mapping. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, XLII, 18–22. [Google Scholar] [CrossRef]
  99. Google Developers. Google API. 2018. Available online: https://cloud.google.com/maps-platform/?hl=de (accessed on 8 August 2018).
  100. Pintus, R.; Pal, K.; Yang, Y.; Weyrich, T.; Gobbetti, E.; Rushmeier, H. A Survey of Geometric Analysis in Cultural Heritage. Comput. Gr. Forum 2015, 35, 1–27. [Google Scholar] [CrossRef]
  101. Cadi, N.; Magnenat, N.; Se, S. Mixed Reality and Gamification for Cultural Heritage; Springer International Publishing: Cham, Switzerland, 2017; pp. 395–419. [Google Scholar]
  102. Aicardi, I.; Chiabrando, F.; Maria Lingua, A.; Noardo, F. Recent trends in cultural heritage 3D survey: The photogrammetric computer vision approach. J. Cult. Herit. 2018. [Google Scholar] [CrossRef]
  103. Guo, X.; Xiao, J.; Wang, Y. A survey on algorithms of hole filling in 3D surface reconstruction. Vis. Comput. 2018, 34, 93–103. [Google Scholar] [CrossRef]
  104. Ebke, H.C.; Schmidt, P.; Campen, M.; Kobbelt, L. Interactively Controlled Quad Remeshing of High Resolution 3D Models. ACM Trans. Gr. TOG 2016, 35, 218. [Google Scholar] [CrossRef]
  105. Abdullah, A.; Bajwa, R.; Gilani, S.R.; Agha, Z.; Boor, S.B.; Taj, M.; Khan, S.A. 3D Architectural Modeling: Efficient RANSAC for n-gonal primitive fitting. Eurogr. Assoc. 2015. [Google Scholar] [CrossRef]
  106. Boltcheva, D.; Lévy, B. Surface reconstruction by computing restricted Voronoi cells in parallel. CAD Comput. Aided Des. 2017, 90, 123–134. [Google Scholar] [CrossRef] [Green Version]
  107. Si, H. TetGen, a Delaunay-Based Quality Tetrahedral Mesh Generator. ACM Trans. Math. Softw. 2015, 41, 1–36. [Google Scholar] [CrossRef]
  108. Ruther, H.; Bhurtha, R.; Held, C.; Schroder, R.; Wessels, S. Laser Scanning in Heritage Documentation: The Scanning Pipeline and its Challenges. Photogramm. Eng. Remote Sens. 2012, 78, 309–316. [Google Scholar] [CrossRef]
  109. Rodríguez-Gonzálvez, P.; Nocerino, E.; Menna, F.; Minto, S.; Remondino, F. 3D Surveying and modeling of underground passages in wwi fortifications. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci.—ISPRS Arch. 2015, 40, 17–24. [Google Scholar] [CrossRef]
  110. Neubauer, W.; Doneus, M.; Studnicka, N.; Riegl, J. Combined High Resolution Laser Scanning and Photogrammetrical Documentation of the Pyramids at Giza. In Proceedings of the XXth International Symposium CIPA, Torino, Italy, 26 September–1 October 2005; pp. 470–475. [Google Scholar]
  111. Der Manuelian, P. Giza 3D: Digital archaeology and scholarly access to the Giza Pyramids: The Giza Project at Harvard University. In Proceedings of the 2013 Digital Heritage International Congress, Marseille, France, 28 October–1 November 2013; Volume 2, pp. 727–734. [Google Scholar]
  112. Tucci, G.; Bonora, V.; Conti, A.; Fiorini, L. High-quality 3D models and their use in a cultural heritage conservation project. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci.—ISPRS Arch. 2017, 42, 687–693. [Google Scholar] [CrossRef]
  113. Pierdicca, R.; Frontoni, E.; Malinverni, E.S.; Colosi, F.; Orazi, R. Virtual reconstruction of archaeological heritage using a combination of photogrammetric techniques: Huaca Arco Iris, Chan Chan, Peru. Dig. Appl. Archaeol. Cult. Herit. 2016, 3, 80–90. [Google Scholar] [CrossRef]
  114. Dellepiane, M.; Dell’Unto, N.; Callieri, M.; Lindgren, S.; Scopigno, R. Archeological excavation monitoring using dense stereo matching techniques. J. Cult. Herit. 2013, 14, 201–210. [Google Scholar] [CrossRef]
  115. Dall’Asta, E.; Bruno, N.; Bigliardi, G.; Zerbi, A.; Roncella, R. Photogrammetric techniques for promotion of archaeological heritage: The archaeological museum of parma (Italy). Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci.—ISPRS Arch. 2016, 41, 243–250. [Google Scholar] [CrossRef]
  116. Tariq, A.; Gillani, S.M.O.A.; Qureshi, H.K.; Haneef, I. Heritage preservation using aerial imagery from light weight low cost Unmanned Aerial Vehicle (UAV). In Proceedings of the 2017 International Conference on Communication Technologies (ComTech), Rawalpindi, Pakistan, 19–21 April 2017; pp. 201–205. [Google Scholar]
  117. Davis, A.; Belton, D.; Helmholz, P.; Bourke, P.; McDonald, J. Pilbara rock art: Laser scanning, photogrammetry and 3D photographic reconstruction as heritage management tools. Heri. Sci. 2017, 5, 1–16. [Google Scholar] [CrossRef]
  118. Grenzdörffer, G.J.; Naumann, M.; Niemeyer, F.; Frank, A. Symbiosis of UAS photogrammetry and TLS for surveying and 3D modeling of cultural heritage monuments—A case study about the cathedral of St. Nicholas in the city of Greifswald. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci.—ISPRS Arch. 2015, 40, 91–96. [Google Scholar] [CrossRef]
  119. Poux, F.; Neuville, R.; Van Wersch, L.; Nys, G.A.; Billen, R. 3D Point Clouds in Archaeology: Advances in Acquisition, Processing and Knowledge Integration Applied to Quasi-Planar Objects. Geosciences 2017, 7. [Google Scholar] [CrossRef]
  120. Giancola, S.; Valenti, M.; Sala, R. A Survey on 3D Cameras: Metrological Comparison of Time-of-Flight, Structured-Light and Active Stereoscopy Technologies; Springer International Publishing: Cham, Switzerland, 2018; Volume F3, pp. 89–90. [Google Scholar]
  121. Buchón-Moragues, F.; Bravo, J.M.; Ferri, M.; Redondo, J.; Sánchez-Pérez, J.V. Application of structured light system technique for authentication of wooden panel paintings. Sensors (Switzerland) 2016, 16, 881. [Google Scholar] [CrossRef] [PubMed]
  122. Di Pietra, V.; Donadio, E.; Picchi, D.; Sambuelli, L.; Spanò, A. Multi-source 3D models supporting ultrasonic test to investigate an egyptian sculpture of the archaeological museum in Bologna. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci.—ISPRS Arch. 2017, 42, 259–266. [Google Scholar] [CrossRef]
  123. Mathys, A.; Brecko, J.; Semal, P. Comparing 3D digitizing technologies: What are the differences? In Proceedings of the 2013 Digital Heritage International Congress, Marseille, France, 28 October–1 November 2013; Volume 1, pp. 201–204. [Google Scholar]
  124. Papaioannou, G.; Schreck, T.; Andreadis, A.; Mavridis, P.; Gregor, R.; Sipiran, I.; Vardis, K. From Reassembly to Object Completion. J. Comput. Cult. Herit. 2017, 10, 1–22. [Google Scholar] [CrossRef]
  125. De Meyer, M.; Linseele, V.; Vereecken, S.; Williams, L.J. Fowl for the governor. The tomb of governor Djehutinakht IV or V at Dayr al-Barsha reinvestigated. Part 2: Pottery, human remains, and faunal remains. J. Egypt. Archaeol. 2014, 100, 67–87. [Google Scholar] [CrossRef]
  126. Willems, H. The Belgian Excavations at Deir al-Barsja, season 2003. In Mitteilungen des Deutschen Archäologischen Instituts; Gebr. Mann: Berlin, Germany, 2014. [Google Scholar]
  127. 3D Shining. Shining 3D Einscan, 2018. Available online: https://www.einscan.com/ (accessed on 8 August 2018).
  128. MeshLab. MeshLab, 2014. Available online: http://www.meshlab.net/ (accessed on 8 August 2018).
  129. Anderson, E.F.; McLoughlin, L.; Liarokapis, F.; Peters, C.; Petridis, P.; de Freitas, S. Developing serious games for cultural heritage: A state-of-the-art Review. Virtual Reality 2010, 14, 255–275. [Google Scholar] [CrossRef]
  130. Dörner, R.; Göbel, S.; Effelsberg, W.; Wiemeyer, J. Serious Games—Foundations, Concepts and Practice; Springer: Cham, Switzerland, 2016; p. 421. [Google Scholar]
  131. Hutchison, D. Games for Training, Education, Health and Sports: 4th International Conference on Serious Games; Springer International Publishing: Cham, Switzerland, 2014; p. 200. [Google Scholar]
  132. Carrozzino, M.; Bergamasco, M. Beyond virtual museums: Experiencing immersive virtual reality in real museums. J. Cult.Herit. 2010, 11, 452–458. [Google Scholar] [CrossRef]
  133. Rua, H.; Alvito, P. Living the past: 3D models, virtual reality and game engines as tools for supporting archaeology and the reconstruction of cultural heritage—The case-study of the Roman villa of Casal de Freiria. J. Archaeol. Sci. 2011, 38, 3296–3308. [Google Scholar] [CrossRef]
  134. Chen, S.; Pan, Z.; Zhang, M.; Shen, H. A case study of user immersion-based systematic design for serious heritage games. Multimed. Tools Appl. 2013, 62, 633–658. [Google Scholar] [CrossRef]
  135. Dagnino, F.M.; Pozzi, F.; Cozzani, G.; Bernava, L. Using Serious Games for Intangible Cultural Heritage (ICH) Education: A Journey into the Canto a Tenore Singing Style. In Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Porto, Portugal, 27 February–1 March 2017; pp. 429–435. [Google Scholar]
  136. Kontogianni, G.; Koutsaftis, C.; Skamantzari, M.; Georgopoulos, A. Utilising 3D Realistic Models in Serious Games for Cultural Heritage. Int. J. Comput. Methods Herit. Sci. (IJCMHS) 2017, 1, 21–46. [Google Scholar] [CrossRef]
  137. De Paolis, L.T.; Aloisio, G.; Celentano, M.G.; Oliva, L.; Vecchio, P. A simulation of life in a medieval town for edutainment and touristic promotion. In Proceedings of the 2011 International Conference on Innovations in Information Technology (IIT 2011), Abu Dhabi, UAE, 25–27 April 2011; pp. 361–366. [Google Scholar]
  138. Doulamis, A.; Doulamis, N.; Ioannidis, C.; Chrysouli, C.; Grammalidis, N.; Dimitropoulos, K.; Potsiou, C.; Stathopoulou, E.K.; Ioannides, M. 5D Modelling: An Efficient Approach for Creating Spatiotemporal Predictive 3D Maps of Large-Scale Cultural Resources. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, II-5/W3, 61–68. [Google Scholar] [CrossRef]
  139. Christodoulou, S.E.; Vamvatsikos, D.; Georgiou, C. A BIM-based framework for forecasting and visualizing seismic damage, cost and time to repair. eWork and eBusiness in Architecture, Engineering and Construction. In Proceedings of the European Conference on Product and Process Modelling 2010, Cork, Ireland, 14–16 September 2010; pp. 33–38. [Google Scholar]
  140. Mortara, M.; Catalano, C.E.; Bellotti, F.; Fiucci, G.; Houry-Panchetti, M.; Petridis, P. Learning cultural heritage by serious games. J. Cult. Herit. 2014, 15, 318–325. [Google Scholar] [CrossRef] [Green Version]
  141. Mures, O.A.; Jaspe, A.; Padrón, E.J.; Rabuñal, J.R. Virtual Reality and Point-Based Rendering in Architecture and Heritage. In Handbook of Research on Visual Computing and Emerging Geometrical Design Tools; IGI Global: Hershey, PA, USA, 2016. [Google Scholar]
  142. Younes, G.; Kahil, R.; Jallad, M.; Asmar, D.; Elhajj, I.; Turkiyyah, G.; Al-Harithy, H. Virtual and augmented reality for rich interaction with cultural heritage sites: A case study from the Roman Theater at Byblos. Digit. Appl. Archaeol. Cult. Herit. 2017, 5, 1–9. [Google Scholar] [CrossRef]
  143. Bellotti, F.; Berta, R.; De Gloria, A.; D’ursi, A.; Fiore, V. A serious game model for cultural heritage. J. Comput. Cult. Herit. 2012, 5, 1–27. [Google Scholar] [CrossRef]
  144. Ruppel, U.; Schatz, K.; Rüppel, U.; Schatz, K. Designing a BIM-based serious game for fire safety evacuation simulations. Adv. Eng. Inform. 2011, 25, 600–611. [Google Scholar] [CrossRef]
  145. Lercari, N. 3D visualization and reflexive archaeology: A virtual reconstruction of Çatalhöyük history houses. Digit. Appl. Archaeol. Cult. Herit. 2017, 6, 10–17. [Google Scholar] [CrossRef]
  146. Bille, R.; Smith, S.P.; Maund, K.; Brewer, G. Extending Building Information Models into Game Engines. In Proceedings of the 2014 Conference on Interactive Entertainment—IE2014, Newcastle, Australia, 2–3 December 2014; pp. 1–8. [Google Scholar]
  147. Barazzetti, L.; Banfi, F.; Brumana, R.; Oreni, D.; Previtali, M.; Roncoroni, F. HBIM and augmented information: Towards a wider user community of image and range-based reconstructions. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci.—ISPRS Arch. 2015, 40. [Google Scholar] [CrossRef]
  148. Amirebrahimi, S.; Rajabifard, A.; Mendis, P.; Ngo, T. A framework for a microscale flood damage assessment and visualization for a building using BIM–GIS integration. Int. J. Digit. Earth 2015, 8947, 1–24. [Google Scholar] [CrossRef]
  149. Oreni, D.; Brumana, R.; Georgopoulos, A.; Cuca, B. HBIM Library Objects for Conservation and Management of Built Heritage. Int. J. Herit. Digit. Era 2014, 3, 321–334. [Google Scholar] [CrossRef]
  150. Murphy, M.; Corns, A.; Cahill, J.; Eliashvili, K.; Chenau, A.; Pybus, C.; Shaw, R.; Devlin, G.; Deevy, A.; Truong-Hong, L. Developing historic building information modelling guidelines and procedures for architectural heritage in Ireland. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci.—ISPRS Arch. 2017, 42, 539–546. [Google Scholar] [CrossRef]
Figure 1. Overview of the challenges of documentation with traditional imagery: Occlusions and clutter (top left); confusing whereabouts (top right); image defocus due to limited Depth-of-Field (DoF) (bottom left); and poor lighting conditions (bottom right).
Figure 2. Example of point clouds generated by: photogrammetry (left); and Terrestrial Laser Scanning (TLS) (right).
Figure 3. Orthomosaic of Nave VI (5 m × 15 m) over the six-story West wall. Initial polychrome zones are indicated in yellow.
Figure 4. Overview of the orthomosaics integrated with the plans of the Sint-Niklaas church in Ghent, Belgium. The mosaic was constructed in a scene with six stories of scaffolding and a significant amount of clutter.
Figure 5. Overview of the altarpiece in the church at the STAM museum at the Bijloke-site in Ghent (a); camera locations distributed evenly over the scene (b); resulting frontal orthomosaic of the altarpiece (c); and the resulting orthomosaic of one of the ornaments (d).
Figure 6. Overview of the panoramic viewer test case in the Sint-Eustachius church with target attic depicted in red © https://www.google.be/maps.
Figure 7. Overview of the Google Maps creation tool with .KML drawing integration and HTML handle for public sharing [99].
Figure 8. Overview of the panoramic viewing application developed with the Google API, including: the map (top); the viewer (bottom); and navigation tools (white arrows) [99].
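The viewer shown in Figures 7 and 8 builds on the publicly available Google Maps JavaScript API. As a rough illustration of the building blocks involved, and not the authors' exact implementation, the sketch below couples a plan-view map, a .KML overlay of the survey drawings and a panoramic viewer; the element IDs, coordinates and KML URL are placeholders.

```typescript
/// <reference types="google.maps" />
// Minimal sketch (not the authors' code): assumes the Google Maps JavaScript
// API is loaded with a valid key and that the page contains <div id="map">
// and <div id="pano">. Coordinates, the KML URL and element IDs are
// placeholders chosen for illustration only.

function initHeritageViewer(): void {
  // Hypothetical location of the surveyed attic.
  const site = { lat: 50.793, lng: 4.295 };

  // Plan view of the site.
  const map = new google.maps.Map(
    document.getElementById("map") as HTMLElement,
    { center: site, zoom: 19, mapTypeId: "satellite" }
  );

  // Overlay the survey drawing exported as .KML (cf. Figure 7).
  new google.maps.KmlLayer({
    url: "https://example.com/attic-survey.kml", // placeholder URL
    map,
  });

  // Panoramic viewer coupled to the map (cf. Figure 8). For project-specific
  // 360° imagery, a custom panorama provider would be registered on this
  // object instead of relying on Google's own Street View coverage.
  const pano = new google.maps.StreetViewPanorama(
    document.getElementById("pano") as HTMLElement,
    { position: site, pov: { heading: 120, pitch: 0 }, zoom: 1 }
  );
  map.setStreetView(pano);
}

initHeritageViewer();
```

Publishing the resulting HTML page, or embedding it elsewhere through an iframe, then provides the public sharing handle referred to in Figure 7.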
Figure 9. General view of the plateau where the Governors' tombs are located (a); and the current state of the best-preserved tomb (b).
Figure 10. Operation modes of the EinScan Pro+ scanner: fixed mode (a); and hand-held mode (b).
Figure 11. Frame sequence showing the hand-held scanning process. The screenshot on the bottom left of the first four frames depicts the real-time 3D acquisition. The last two pictures show the point cloud after scanning.
Figure 12. (Top) Photographs of an Egyptian vessel from different angles, with a horizontal scale bar of 10 cm and a vertical scale bar of 20 cm; and (Bottom) the watertight 3D mesh of the object, composed of circa 1 million triangles.
Figure 13. High resolution watertight model of a cranium made up of circa 1.3 million triangles.
Figure 14. Non-watertight model of the decorated block shown in Figure 11, composed of circa 0.8 million triangles.
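Once watertight or non-watertight meshes such as those in Figures 12–14 have been produced, they can also be distributed through a browser-based viewer, one of the deliverable types discussed in this work. The sketch below uses three.js to load a glTF export of such a model; this is an illustrative setup rather than the authors' pipeline, and the file name is a placeholder.

```typescript
// Illustrative three.js viewer for a mesh deliverable (not the authors'
// tooling). Assumes three.js is installed via npm and the model has been
// exported to glTF; "vessel_watertight.glb" is a placeholder file name.
import * as THREE from "three";
import { GLTFLoader } from "three/examples/jsm/loaders/GLTFLoader.js";
import { OrbitControls } from "three/examples/jsm/controls/OrbitControls.js";

const scene = new THREE.Scene();
scene.background = new THREE.Color(0x202020);

const camera = new THREE.PerspectiveCamera(
  45, window.innerWidth / window.innerHeight, 0.01, 100
);
camera.position.set(0.4, 0.3, 0.6);

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// Simple lighting so the textured mesh is readable in the browser.
scene.add(new THREE.AmbientLight(0xffffff, 0.6));
const sun = new THREE.DirectionalLight(0xffffff, 0.8);
sun.position.set(1, 1, 1);
scene.add(sun);

// Mouse/touch navigation around the object.
const controls = new OrbitControls(camera, renderer.domElement);

// Load the exported heritage model.
new GLTFLoader().load("vessel_watertight.glb", (gltf) => {
  scene.add(gltf.scene);
});

renderer.setAnimationLoop(() => {
  controls.update();
  renderer.render(scene, camera);
});
```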
Figure 15. The torpedo base at the fortress Sint-Marie in Zwijndrecht, Belgium, is located on the banks of the tidal Scheldt River. The muddy lower part of the scene, caused by near-constant submersion, complicates the photogrammetric process.
Figure 16. Point light source added to brighten the scene in the Unity game engine (left); and the animation added to the camera to create the video (right).
