Article

Three-Dimensional Digitization of Archaeological Sites—The Use Case of the Palace of Knossos

by Zacharias Pervolarakis 1, Emmanouil Zidianakis 1, Antonis Katzourakis 1, Theodoros Evdaimon 1, Nikolaos Partarakis 1,*, Xenophon Zabulis 1 and Constantine Stephanidis 1,2
1 Institute of Computer Science, Foundation for Research and Technology Hellas (ICS-FORTH), N. Plastira 100, Vassilika Vouton, 70013 Heraklion, Greece
2 Computer Science Department, University of Crete, Voutes Campus, 70013 Heraklion, Greece
* Author to whom correspondence should be addressed.
Heritage 2023, 6(2), 904-927; https://doi.org/10.3390/heritage6020050
Submission received: 20 December 2022 / Revised: 16 January 2023 / Accepted: 18 January 2023 / Published: 20 January 2023

Abstract: Modern digitization technologies have created an increasing number of possibilities for capturing the physical dimensions and appearance of archaeological artifacts and sites in 3D. Such data are usually targeted at the research, study, and documentation of our cultural heritage. At the same time, the increasing quality of the produced digitizations has opened new possibilities for exploiting digitization outcomes in a wider context than initially expected. A pioneer in this direction was the gaming industry, where photogrammetry has recently been employed to achieve extreme photorealism. Of course, challenges still exist, especially when digitization accuracy is important, as in the case of large-scale archaeological sites. Further challenges concern the need to combine indoor and outdoor scenes, which imposes requirements on the selection of appropriate digitization modalities and post-processing strategies. In more detail, the challenges relate to the appropriate usage of existing technologies, the organization of digitization visits, the combination and registration of data, and data acquisition and processing methodologies. In this paper, we demonstrate a methodology for the digitization of archaeological sites that can be used for creating digital assets suitable for various scenarios, including research, education, and entertainment.

1. Introduction

Making large-scale heritage sites accessible through digital technology is still a major challenge today. In this work, the challenge was the digitization of the palace of Knossos and its peripheral sites in collaboration with the Ephorate of Antiquities of Heraklion. The objective was to support the digital preservation of the site for archaeological study and research and to make the outcomes available to the general audience on-site and remotely through an innovative interactive digital tour guide [1].
Knossos was the main prehistoric settlement of the island of Crete and is best known for its monumental palace, the so-called Minos Palace [2,3,4]. On Friday 23 March 1900 at 11 a.m., Arthur Evans began his excavation of Knossos. Although he was not the first to excavate at the site (that honor belongs to a Greek appropriately named Minos Kalokairinos, who excavated the site in 1878), it was Evans who uncovered the Knossos Palace and brought to light a hitherto unknown civilization, possibly the oldest in Europe. The basic excavation of the site took four years, and for the rest of his life, Evans continued working on the site, reconstructing and building, often in an attempt to preserve the remains from the weather, to which they had been exposed for the first time in 3500 years [5].

2. Related Work

2.1. Three-Dimensional Reconstruction Technologies for Heritage Sites

The 3D digitization of wide-area heritage sites is a challenging task, particularly in cases where the site contains both indoor and outdoor structures. Multiple approaches have been proposed for such cases, differing mainly in the digitization modalities employed.
Low-cost approaches use digital photography and photogrammetric reconstruction (e.g., [6,7]). These approaches have been empowered in recent years with the development of unmanned aerial vehicles (UAVs), which are used for the acquisition of images in large-scale projects (e.g., [8]). Other approaches propose the use of multiple UAVs during the data acquisition process [9,10] to enhance the data acquisition in unknown 3D environments.
Higher-end approaches use laser scanning, which provides high structural accuracy but lower texture quality than image-based approaches. For this reason, several works have combined laser scanning and photogrammetry to create 3D models of high structural and textural accuracy [11,12,13,14,15,16]. A variation of this approach combines UAVs and laser scanners [10].

2.2. Three-Dimensional Reconstruction in the Gaming Industry

An important part of this research work concerned the suitability of the digitization outcomes for rendering with a multitude of technologies, including AR, VR, and MR. To this end, the background of this study was the progress achieved by the gaming industry in the usage of digitization outcomes in games. Traditionally in the gaming industry, game assets have been created manually using 3D modeling software, so a great amount of information has relied on stock visual references or photographic documentation acquired during field trips. At the same time, in the past, game engines were not capable of using reconstructed 3D models due to the density of the polygons required, which made them unusable. Such limitations still exist today for assets used in mobile games [17]. Over the last decade, several smaller projects have attempted to use photogrammetry to reproduce real-world locations [18,19].
The first large-scale use of photogrammetry in a game was in “The Vanishing of Ethan Carter” by the independent Polish developer “The Astronauts” [20]. In 2015, the title Star Wars: Battlefront was announced, which relied heavily on photogrammetry [21].
At the same time, several asset-authoring tools and asset libraries followed this new approach by providing photogrammetry-based synthetic 3D models. Allegorithmic was one of the first companies that updated the Substance Designer software to derive textural information from photogrammetric scans [22]. Furthermore, Quixel released a library of photogrammetry-based PBR textures called Quixel Megascans [23].

2.3. Virtual Exhibitions for Heritage Sites

We foresee that the combination of digitization outcomes with advances in the gaming industry could support the creation of virtual exhibitions of archaeological sites in the future. The term virtual exhibition (VE) has been used in the domain of digital cultural heritage (DCH) to describe a variety of technical solutions, interactions, and immersion styles. In the 2000s, the majority of VEs were web-based [24,25], and from the early 2010s, the basic guidelines for creating interesting and compelling VEs were established [26,27,28]. In parallel, digital technology explored ways of enhancing the museum experience through on-site VEs [29], mixed reality (MR) VEs [30,31,32], authoring environments for web-based virtual museums [33,34], and the authoring of web-based virtual environments that provide a synthetic representation of cultural heritage (CH) subjects, including intangible dimensions [35,36]. Finally, the value of AR for cultural heritage sites has been explored through an exploratory study on museum stakeholders, personnel, and focus groups [37] and exploited for both tangible and intangible cultural heritage (e.g., [38,39]). The outcome of the study was the identification of numerous perceived value dimensions within the cultural heritage tourism context for stakeholders, including economic, experiential, social, epistemic, historical and cultural, and educational value.

2.4. This Work

One of the main challenges of this work was to apply state-of-the-art digitization technologies to a wide-area archaeological site that contains both indoor and outdoor scenes. A large amount of data was collected that included digital images of the monument, aerial photographs for reconstruction, laser scans, panoramic photos, etc. The processed data contained raw and post-processed 3D models, materials, textures, point clouds, etc.
We propose a digitization methodology for archaeological sites that best suits the requirements of both indoor and outdoor scenes. In this methodology, we propose appropriate digitization modalities to support realistic reconstruction while minimizing data size and post-processing needs. The methodology provides a strategy for combining complementary digitization modalities to best address the requirements of complete and accurate digitization.
The proposed approach combined expertise from past projects on digitization [40] and systematic approaches for CH representation [41] acquired in the context of the Mingei EU H2020 project. We then moved on to the registration of point clouds and to mesh and texture optimization, which was important since different modalities were used complementarily and were enhanced with photographic documentation through a process that struck a balance between quality and realism. Further post-processing was needed for texture optimization to homogenize the appearance of the indoor and outdoor scans, which were acquired under different lighting and illumination conditions. For presentation purposes, from the wide collection of technologies used, in this work we focused on augmented reality (AR), which can project virtual content into the real world.
We also tried to address a recurring problem in all such digitization projects: the long-term preservation of data. There are several reasons for the urgency of the long-term preservation of data: (a) data represent a snapshot in time of a monument, (b) there is a need to reuse data in the future to save time and effort, (c) new reconstruction algorithms may produce better results with the same data in the future, (d) digitization needs to be supported over time to study the deterioration of the monument, and (e) the availability of such data is necessary for scientific study. Considering the above-mentioned challenges, we addressed the long-term preservation of the data by employing a European open data repository, namely Zenodo [42]. The raw and processed data are stored for long-term preservation under a restricted access policy, since any request for data access must be redirected to the Ephorate of Antiquities responsible for the monument.

3. Design and Method

In all digitization projects, careful consideration of the heritage object or site is essential. This was essential in our case too, due to the size of the site and the combination of indoor and outdoor scenes. To this end, several visits to the archaeological site were organized with the assistance of the Ephorate of Antiquities of Heraklion to study and decide upon the digitization strategy. For example, some parts of the site were under a canopy and thus not visible from the air. Furthermore, in the indoor spaces, there was a lack of sufficient light, while in other cases there was strong illumination during the daytime.
Apart from this technical study, it was also important to address the requirements of digitization quality. In some cases, there was a need to achieve maximum digitization quality due to the significance of a specific location within the site, and thus special attention was paid by including additional scanning locations and acquiring denser photographic documentation.

3.1. Digitization

Initially, the acquisition of data needed careful consideration, since the archaeological site was so large that there was a real danger of ending up with a vast amount of input data. Thus, a flexible divide-and-conquer approach had to be followed.
For the digitization of the outdoor environments, a UAV was used. The flight path selected was grid-wise, while a second grid, perpendicular to the first, was used for increased reconstruction robustness (see Figure 1).
A problem with this approach was that some segments of interest in a scene may not be visible from aerial views, such as locations below the eaves of buildings. To this end, we decided to combine terrestrial views with aerial views. The final solution required two scanning processes, one aerial and one terrestrial, with the selection of a laser scanner instead of photogrammetry, combined with handheld camera documentation that could be used both for texture mapping and for the photogrammetric reconstruction of the small details of outdoor scenes.
For the indoor environments, photogrammetric reconstruction has the disadvantage of becoming less reliable for multiple reasons. The main ones include a lack of sufficient illumination, a lack of texture, particularly on blank walls and ceilings, and shiny (e.g., metallic) surfaces, which exhibit illumination specularities that hinder reconstruction. Photogrammetric reconstruction also requires significant computational time to obtain results, because it is not based on direct measurements of spatial structure (as a laser scanner is) but rather computationally infers the structure from implicit measurements (images). Considering, furthermore, the complexity of the indoor spaces of the palace of Knossos, it was decided that laser scanning should be used for the indoor scenes. Special attention had to be paid in each room to selecting the most appropriate placements of the laser scanner to cover the maximum amount of detail with the minimum number of scans. The problem with laser scanning is textural realism, because it tends to provide high-quality mesh surfaces but lower-quality texture resolutions. To compensate for this problem, photographic documentation was used to document each room exhaustively, and these data were manipulated during the synthesis of the final 3D reconstruction results. The scanning methodology used is summarized in Figure 2.

3.2. Post-Processing

The mission of preserving historical details from an archaeological perspective poses equivalent requirements on the photographic documentation acquired. Regarding outdoor scenes, there is a need to coordinate terrestrial with aerial scans.
Regarding the indoor scenes, there was a need to unify all the laser scans with photogrammetric reconstructions into one mesh suitable for archaeological purposes and into a lower-quality mesh for the mobile-centric AR and virtual reality (VR) applications. The merging of all the scans enabled the creation of high-resolution textures (16k, ~268.4 megapixels) for the historical preservation and accuracy of information. Thus, the highest-fidelity parts of each 3D scan were preserved for historical reference. At the same time, in this work, we employed an image stacking approach to achieve the transition between the registered laser scan textures and the final 3D model to be employed in the AR applications.

4. Digitization

4.1. Aerial Scans

Several flights had to be conducted to extract the 3D outdoor models. The flights were conducted at the lowest possible altitude to obtain the highest possible photo resolution; the altitude depended on the natural obstacles of the space, ranging from 10 m to 30 m. Most flights were conducted autonomously via the Pix4Dcapture [43] app in the double grid mode. In addition, some free flying had to be conducted to avoid obstacles, so a lower-altitude flight was conducted with the same application. After the data were collected, they were loaded into the photogrammetry programs: for the data with a low number of images, the program Pix4Dmapper [44] was used, and for the data with a large number of images, the program Pix4Dmatic [45]. After processing, the photogrammetry programs exported a 3D model with a good structure and very good textures, as shown in Figure 3. Table 1 shows the archaeological sites and the number of aerial images for each one.

4.2. Terrestrial Scans

A Faro Focus M70 laser scanner [46] was used to scan the indoor areas. The scanner was placed at many different points based on the placement plan for full coverage. Table 2 shows the archaeological sites and the number of scans for each one. Each scan took approximately 12 to 15 min to collect the color point clouds and panoramic images. Scans were conducted with a high overlap so that the data could be registered automatically. The data were processed using the Faro Scene [47] software and registered for each site; then, the meshes were created and exported. The exported 3D model consisted of a very good structure and low-resolution textures, as shown in Figure 4a.
In addition to the indoor scans, outdoor laser scans were acquired to enhance the quality of the aerial reconstruction. This was because in some cases visual occlusions did not allow the UAV to capture all the details of the heritage site from an aerial perspective. Thus, this was complemented with higher-quality terrestrial laser scans, the results of which are presented in Figure 4b.
Figure 4. Terrestrial scans: (a) registration of multiple indoor scans, (b) registration of multiple outdoor scans.

4.3. Long-Term Preservation of Data

The datasets that were created in the context of this work to support the long-term preservation of data are summarized in Table 3.

5. Post-Processing

There is a need to unify all laser scans with photogrammetric mesh creations into one low-polygonal mesh suitable for mobile-centric AR and VR applications. The unification of all the scans enables the creation of high-resolution textures (16k, ~268.4 megapixels) with an approximately isotropic texture distribution for the historical preservation and accuracy of information. At archaeological sites, the preservation of the highest-fidelity parts from each 3D scan is part of the historical reference. Points of interest consist of a combination of aerial photography and beacon scans. The analytical and aerial photogrammetry of the entire archaeological site of Knossos (for registration purposes) and the collection of the laser scans were presented in the previous section.
In this work, we employed an image stacking approach to achieve the transition between the registered laser scan textures and the final 3D model to be employed in AR applications.

5.1. Image Stacking Methodology

For all the AR-related work (3D meshes, texture extraction, mixing, and compositing), the 3D animation tool Blender v2.93 [58] was used throughout the project. Using Blender, a point light was registered at the position and orientation of each Faro scan, and by enabling the usage of light nodes, a projection of the equirectangular environment texture of each panoramic picture, rotated 180 degrees around the z-axis from its normal coordinates and at a constant intensity of 85, was applied with the following general settings: power: 1 W, max. bounces: 0, and radius: 0. An example of the panoramic projections and their equivalent image density distribution is presented in Figure 5 (smaller squares indicate a higher resolution).
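To make this setup concrete, the following Blender Python (bpy) sketch registers one such projection light. It is our illustrative reconstruction under stated assumptions, not the authors' script: it assumes the Cycles engine, and the helper name, its arguments, and the mapping of the reported intensity of 85 to the Emission node's Strength are our assumptions.

```python
import math
import bpy

def add_projection_light(name, location, rotation_euler, panorama_path):
    """Sketch: place a point light at a Faro scan pose and project the scan's
    equirectangular panorama through the light's node tree."""
    light = bpy.data.lights.new(name=name, type='POINT')
    light.energy = 1.0            # power: 1 W (from the text)
    light.shadow_soft_size = 0.0  # radius: 0
    light.cycles.max_bounces = 0  # max. bounces: 0 (Cycles light setting)
    light.use_nodes = True        # enable light nodes for texture projection
    nodes = light.node_tree.nodes
    env = nodes.new('ShaderNodeTexEnvironment')
    env.image = bpy.data.images.load(panorama_path)
    emission = nodes['Emission']
    emission.inputs['Strength'].default_value = 85.0  # "constant intensity of 85" (assumed mapping)
    light.node_tree.links.new(env.outputs['Color'], emission.inputs['Color'])
    obj = bpy.data.objects.new(name, light)
    obj.location = location
    # 180-degree rotation around the z-axis relative to the scan's own frame.
    obj.rotation_euler = (rotation_euler[0], rotation_euler[1],
                          rotation_euler[2] + math.pi)
    bpy.context.collection.objects.link(obj)
    return obj
```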
Two types of textures were extracted per panoramic projection. The first was a more accurate one, generated by projecting onto the scan’s mesh (a high-polygonal structure of confined size). The second was a fallback texture, projected directly onto the target mesh. The light from the panoramic image was projected onto a mesh with a Toon BSDF material (color FFFFFF, size 1, smooth 1), because this material retains constant shading at all angles and also produces shadows for masking purposes.
The target’s mesh texture was reconstructed using three images per scan (see Figure 6): (1) a main, accurate texture extraction based on each panoramic photo projected onto its high-polygonal Faro scan mesh, (2) a secondary fallback texture extraction baked from projecting the panoramic image straight onto the target mesh, and (3) a mask generated from a combination of the scanned geometry’s distance from the Faro scan and its angle of projection. For practical reasons, these two were combined into one greyscale texture: the longer the distance, or the less perpendicular the Faro was positioned to the projected surface, the darker the mask.
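As a sketch of how such a combined mask could be computed per texel (the function, its array layout, and the 8 m distance cutoff are our assumptions, not values from the paper):

```python
import numpy as np

def angle_distance_mask(points, normals, scan_pos, max_dist=8.0):
    """Greyscale mask that darkens with distance from the scanner and with
    deviation from perpendicular incidence; 1 = best, 0 = discard (sketch).
    points, normals: (..., 3) arrays; scan_pos: (3,) scanner position."""
    to_scan = scan_pos - points                           # surface -> scanner vectors
    dist = np.linalg.norm(to_scan, axis=-1)
    view_dir = to_scan / np.maximum(dist[..., None], 1e-8)
    cos_angle = np.clip(np.sum(view_dir * normals, axis=-1), 0.0, 1.0)
    dist_term = np.clip(1.0 - dist / max_dist, 0.0, 1.0)  # linear distance falloff
    return cos_angle * dist_term                          # combined greyscale mask
```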
In Figure 7a, the scan’s mesh has a greater shape definition, and the textures have minimal distortions. In Figure 7b, the target mesh has enough geometry to be used in an AR environment, but fine details are missing. In Figure 7c, the overlapping geometry allows the extraction of accurate textures by transferring the albedo from high to low polygons, as in Figure 7d, while the fallback projection in Figure 7e allows more flexibility. Finally, the angle and distance mask in Figure 7f was used to enhance the final compositing stage with the finest details, as well as to eliminate any distortions.
Although the most-detailed individual 3D scans can attain a very high polygonal resolution, the generated mesh still has imperfections in comparison to the real object. This introduces visual errors and negatively impacts the authenticity, since the panoramic photo is projected onto unresolved geometry beyond the edges of the object (see Figure 8b).
To rectify this and keep only the valid characteristics, these imperfect projections needed to be removed. To accomplish this, a greyscale mask (with values 0–1) was generated for each scan by baking to the textures with a point light with a 0.1 m radius and the number of light-path bounces set to 0. The rendering was conducted at 16 sample points (see Figure 8c). Aggressive filtering was then applied: values below 0.99 were crushed down to 0, and anything above was set to 1 (see Figure 8d). These masks were set as the alpha channels of each panoramic texture extracted using Blender’s compositor function. This process ensured that only the best data from each Faro scan were used for the final texture composition (see Figure 8e).
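The aggressive filtering step amounts to a hard threshold on the soft mask; a minimal numpy rendition (the function name is ours; the 0.99 threshold is the one from the text):

```python
import numpy as np

def crush_mask(soft_mask, threshold=0.99):
    """Crush a soft visibility mask to a binary alpha: values below the
    threshold become 0 (transparent), the rest become 1 (opaque)."""
    return np.where(soft_mask < threshold, 0.0, 1.0)
```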
Figure 8. Texture masking. A mask was generated by extracting a light projection from the scan’s position. White is opaque and black is transparent, with values 1 and 0, respectively. (a) Light radius: 0 m; (b) the texture shown has projection bleeding from the column, which appears on the wall as a thin line; (c) light radius: 0.1 m, soft mask; (d) aggressive masking by crushed values; (e) the resulting texture is narrower than the one presented in (b) but is artifact-free; (f) compositor values, settings, and export render.
In the Knossos project, the first step in creating good reference projections was to composite the JPG and PNG panoramic images in a mix of 75/25 for each Faro scan to improve their characteristics and remove the flat appearance of the non-HDR data. The PNG images projected perfectly on their equivalent 3D meshes, whereas the JPG ones did not, so they needed to have a set of transformations applied to them beforehand (scale and position).
For the best exposure calibration results across the multiple-scan textures, the individual intensities had to be adjusted while all the layered composition effects were combined concurrently. A node tree in the shader editor was used, and tweaking in the 3D viewer gave a what-we-see-is-what-we-get synthesis (see Figure 9a). Real-time shaders can process only a finite number of textures concurrently, about 16, including masks. For previewing purposes in the 3D viewer, a 4k downscaled variation of the textures was used, but for the final synthesis in the compositor function, the original 16k textures were used. Unlit galleries led to sunlit open areas, and there was a need to combine overexposed with underexposed overlapping parts (see Figure 9b) into one continuous, seamless, and color-calibrated texture. The process was as follows:
Step 1—Create a new non-overlapping UV on the target mesh:
  • Smart UV {angle: 30, island 1/16.000}.
  • Extract from high-poly to low-poly (Faro scan to target mesh) resolution: generate albedo, normal map, and distance and angle mask from the equivalent 3D scans.
  • Accurate texture extraction: project Faro panoramic image onto its scan’s mesh and extract to the target mesh.
  • Bake type: diffuse; influence: color; selected to active: ray distance 0.015 m; output: margin 3 px.
  • Fallback texture extraction: project Faro panoramic image directly onto target mesh.
  • Bake type: diffuse; influence: direct; output: margin 3 px.
Step 2—Set Blender settings for render properties:
  • Color management: view transform | set as standard (from default, i.e., filmic, to avoid color-space changes and post-processing inconsistencies).
  • Light paths: max bounces | set all to (0) zero bounces to avoid light bleeding to neighboring geometry.
  • Sampling: render | one sample (more samples offer negligible gains at 16k textures. One sample is easier to fix in terms of texture issues in post-processing).
Step 3—Unification of multiple textures: equirectangular projections of the panoramic images captured from the Faro scans were used to generate 16k resolution textures on the target mesh. All textures were used concurrently within material node shader or compositor structures. For calibrating the exposure levels of all the different textures in real time, the material shader was used on the target mesh, although it had two problems: (1) it was memory-demanding, and the textures had to be downscaled to 4k during the calibration process, and (2) only a finite number of textures could be used concurrently, though this was enough to obtain the proper exposure levels. Both issues were alleviated using the compositor function in a similar structure by applying the appropriate exposure-level values per scan to calculate and unify multiple images into the final texture.
Figure 9. (a) Final texture layering methodology, (b) layering example.
The synthesis was conducted by layering variations of the average image stacking (AIS) technique to achieve different results. AIS outputs the per-pixel average color of all the applied textures (Figure 10a), and it was converted into a group node for reusability purposes. At its core, it separates the colors into red (R), green (G), and blue (B) values per texture and sums them. The summed R, G, and B values are then divided by the number of influencing textures, where the influence is determined by summing the alpha-channel values, which are used as the divisors. A small adjustment had to be made for values near zero, since dividing by zero would turn everything white. Finally, the combined RGB node outputs the average colors of the multiple textures (see Figure 10b). Before calculating the average color, the exposure levels per scan had to be applied (see Figure 10c).
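In array form, the AIS group node can be sketched as follows. This is a minimal reconstruction under our assumptions about array shapes and the near-zero epsilon; it is not the authors' node graph:

```python
import numpy as np

def average_image_stack(textures, alphas, eps=1e-6):
    """Per-pixel average of all contributing textures, with the summed
    alpha channels as divisors, guarded against division by zero.
    textures: (N, H, W, 3) RGB stacks; alphas: (N, H, W) in [0, 1]."""
    weighted = np.sum(textures * alphas[..., None], axis=0)  # sum of RGB * alpha
    influence = np.sum(alphas, axis=0)                       # summed alphas = divisor
    return weighted / np.maximum(influence[..., None], eps)  # avoid near-zero blow-ups
```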
The exposure level was a group node (Figure 10c) used to approximate a consistent exposure across the entire target mesh. The Faro scans were conducted during different hours of the day under varying weather conditions and often appeared mismatched in their overlapping areas. Correcting these levels was an artist-driven task, and it yielded the best results when the adjustments were conducted using the A-and-D and AT-OBF layers. The node takes values between 0 and 1. Its main function is to darken a texture to match its surroundings: at a value of 1, the texture retains the original scan’s intensity and saturation, while at a value of 0 it is essentially black. It uses the RGB curves node, with the main curve being a flat curve at zero. Darker images, however, have less saturation and a crushed contrast. To remedy this, the node’s R, G, and B curves had their middle points increased (from 0.5 to 0.6) to brighten the mid-tones, and a hue/saturation node increase of 150% was also used. The user value was inverted, and this inversion drove the influence factors of both the RGB curves and saturation nodes: a value of 1 had no influence, a value of 0.5 was a 50% application, and a value of 0 had the maximum effect. In practice, a value of 0 was never used, but values as low as 0.1 had to be used.
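A simplified, hedged rendition of this group node in array form follows; the gamma curve standing in for the RGB curves node and the exact blending are our approximations of the behavior described above:

```python
import numpy as np

def exposure_level(rgb, level):
    """Illustrative take on the per-scan exposure group node: `level` in
    (0, 1] darkens the texture (1 = original, 0 = black), while the inverted
    level drives a mid-tone lift (0.5 -> 0.6) and a 150% saturation boost
    that counteract the crushed contrast of darkened images (sketch)."""
    darkened = np.clip(rgb * level, 0.0, 1.0)
    influence = 1.0 - level                     # inverted user value
    gamma = np.log(0.6) / np.log(0.5)           # maps mid-tone 0.5 to 0.6
    lifted = darkened ** gamma                  # stand-in for the RGB curves node
    gray = lifted.mean(axis=-1, keepdims=True)
    boosted = np.clip(gray + (lifted - gray) * 1.5, 0.0, 1.0)  # saturation +150%
    # Blend the correction in proportion to how strongly the texture was darkened.
    return darkened * (1.0 - influence) + boosted * influence
```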
The outline of the stacked layers is as follows: the green color represents accurately extracted textures (scan-based), and orange represents the fallback extracted textures, which were projected directly onto the target mesh. Fallbacks were useful despite their low resolution and potentially warped projection. The blue color nodes represent a collection of nested nodes that could be recycled/used anywhere on the shader editor and with a similar structure for the compositor editor.
Figure 10. (a) Shader/compositing node analysis for averaging multiple textures, (b) group node analysis for average image stacking, and (c) group node analysis for user-defined exposure.
The results of applying the image stacking methodology described above are presented in Figure 11. In this figure, section (a) presents the texture quality exported from the laser scans (note that registering multiple laser scans improved the mesh accuracy but reduced the texture quality due to the software’s limitations regarding the maximum number of exported faces). Section (b) presents the resulting textured mesh obtained by applying the proposed image stacking approach.

5.2. Texture Optimization for Cross-Platform Applications

The unified texture needed to be scalable to support various platforms (mobile phones, VR headsets) and for performance reasons. Due to the presentational nature of AR, it needed consistency and an artistically driven reimagining of any missing polygonal parts or textures. Each laser scan produced a panoramic image that was projected onto its 3D scan. Through photo projection and texture extraction, all of the images were converted into multiple texture variations of a UV-unwrapped mesh, and image stacking proved a great approach to their unification using the shader nodes or the compositor function.
For the target AR mesh, new UVs were created to maximize the surface coverage and fix any potential coverage and overlapping issues. The resulting texture had to be as homogeneous as possible, and an example is presented in Figure 12. The panoramic image was projected onto a mesh that used a Toon BSDF material (color FFFFFF, size 1, smooth 1). This shader provided a constant light intensity at any angle, and thus it did not introduce additional shading; however, it still cast shadows, which were essential for masking any non-visible polygonal structures. Thus, it preserved only what was truly visible.
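For illustration, this receiver material can be built in Blender Python roughly as follows (a sketch assuming Cycles, since the Toon BSDF is a Cycles node; the function name is ours):

```python
import bpy

def make_projection_receiver(name="ScanProjectionReceiver"):
    """Toon BSDF material (color FFFFFF, size 1, smooth 1) that keeps a
    constant shading at all angles while still casting shadows (sketch)."""
    mat = bpy.data.materials.new(name)
    mat.use_nodes = True
    nodes, links = mat.node_tree.nodes, mat.node_tree.links
    nodes.clear()
    toon = nodes.new('ShaderNodeBsdfToon')
    toon.inputs['Color'].default_value = (1.0, 1.0, 1.0, 1.0)  # FFFFFF
    toon.inputs['Size'].default_value = 1.0
    toon.inputs['Smooth'].default_value = 1.0
    out = nodes.new('ShaderNodeOutputMaterial')
    links.new(toon.outputs['BSDF'], out.inputs['Surface'])
    return mat
```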
The final texture was a five-layer composition (Figure 13a, #1) plus a two-layer artist-driven texture composition (Figure 13a, #2) that used composition #1 as a baseline for filling the blank spots of which Faro did not have proper visibility. Each layer in #1 was a composition in and of itself that generated a texture with RGB-alpha channels. The layers were stacked as follows: (1) baseline textures (BT) fallback average image stacking (AIS), (2) overexposed burnt filter (OBF) using BT AIS, (3) accurate textures (AT) fallback using scan-projected AIS, (4) OBF fallback using AT-AIS, and (5) angle-and-distance (A and D)-masked AT AIS.
All the layers in composition #1 shared the same user-defined exposure levels per scan and were mixed using their composited alpha values (the top ones were the most visible). Figure 13b demonstrates the incremental visual improvements, and Figure 13c demonstrates a detailed close-up, in which the diagonal white–red lines mean transparency.
All brightness calibration was conducted once per scan and was applied across all the compositions at once, allowing for an accurate and versatile approach. For the calibration session, all the textures were downscaled to a 4k resolution for system responsiveness, while for the final composition, the original 16k extracted textures were used.
The final target’s mesh texture could be adjusted by a shader that could compose, color calibrate, and mask out the overexposure problems in a layered manner. To maintain and highlight the best features from the multiple overlapping scans, it was critical to remove as many overexposed parts as possible and use only the best from each scan, although there could be parts that either did not overlap or looked overexposed but were not (e.g., grey/white walls), in which case there was a fallback that simply used an average. The process is shown in Figure 14.
In detail 1 (Figure 13c) going from 1A to 1E, there is a gradual improvement over each progressing layer. In detail 2, the fallback and accurate AIS textures (2A and 2B) resulted in a partial discoloration of the red paint inside the window because there was an overlap with the nearby overexposed (burnt) scans in that specific area. The OBF layer (2C) managed to restore the color by discarding the problematic parts. The A-and-D layer (2D) had a darker shade of red, as the window had an acute angle in relation to the scan, but its resulting alpha allowed the more vibrant OBF part to excel in the final composition (1E). In the distance-and-angle detail (3D), there were transparent parts because the angle of projection was too steep from the closest scan. At first glance, 3E appears to have inherited the previous OBF AT AIS from 3C, but the shading does not match the rest of the expected OBF improvements over 3B, which means that the accurate textures had blind spots, hence the baseline fallback layer from 3A managed to make up for them in the end.
Figure 13. The final texture was a five-layer composition: (a) shader/compositor analysis presenting how different layers were used to compose the full 16k texture layer and its lower-detail version with an artistic interpretation of missing information; (b) layers: A = fallback textures average image stacking (AIS), B = accurate textures (AT) AIS, C = AT with overexposed burnt filter (OBF) AIS, D = AT with angle and distance AIS, E = final 16k texture, where red/white diagonal stripes imply transparency; (c) detailed close-up.
The overexposed burnt filter (OBF) layer (Figure 14a) was similar to the average image stacking (AIS) layer, but it used the burnt filter group node between each texture and its AIS alpha input. The burnt filter affects a texture’s perceived alpha channel and turns transparent any overexposed part of the image that is monochromatic (grey/white) above a certain brightness level. It greatly influences the output quality, as it allows only the colored overlapping textures to be used for the AIS output. It is an aggressive filter, and many parts of an image can turn out transparent, such as cement or white paint, or even, in extreme situations, all overlapping textures when they happen to be overexposed. For this reason, it is always used as a layer on top of a typical fallback AIS layer.
The burnt filter group works as follows: It has two inputs, namely a texture’s RGB color and its alpha value. The RGB connects to a black-and-white node, which is then connected to (1) a color ramp node with the following settings: stop: 0, position: 0.45, color: 0; stop: 1, position: 0.55, color: 1; and (2) a mix RGB node with a difference function that compares it with the original RGB values, which in turn connects to a color ramp node with the following settings: stop: 0, position: 0, color: 0; stop: 1, position: 0.054, color: 1. Then, the original texture’s alpha value is connected to the mix RGB node’s input #1, output (2) connects to the node’s input #2, and (1) is used as the factor determining the mixing influence. The output is a greyscale image that replaces the texture’s original alpha channel before the AIS group node operation is conducted.
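Translated into array operations, the node chain reads roughly as below. This is our reconstruction; the linear ramps approximate Blender's color ramp nodes, and the black-and-white conversion is a simple channel mean:

```python
import numpy as np

def _color_ramp(x, lo, hi):
    # Linear color ramp: 0 at/below `lo`, 1 at/above `hi`.
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

def burnt_filter(rgb, alpha):
    """Sketch of the burnt filter group node: bright, monochromatic
    (grey/white) pixels receive a transparent alpha so that only colored
    overlapping textures feed the AIS stage."""
    bw = rgb.mean(axis=-1)                                  # black-and-white node
    brightness = _color_ramp(bw, 0.45, 0.55)                # ramp (1): brightness gate
    difference = np.abs(rgb - bw[..., None]).max(axis=-1)   # difference vs. grey
    colorfulness = _color_ramp(difference, 0.0, 0.054)      # ramp (2): colorfulness
    # Mix RGB node: input #1 = original alpha, input #2 = colorfulness,
    # factor = brightness; bright-and-grey pixels end up transparent.
    return alpha * (1.0 - brightness) + colorfulness * brightness
```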
Figure 14. The overexposure problems were composed, color-calibrated, and masked out in a layered manner: (a) overexposure filter image stacking layer node analysis, and (b) burnt filter group node analysis.
Each scan’s panoramic photo had a resolution of 8246 × 3414 (JPG) which, when projected perpendicularly onto a perfect sphere, had a distribution of about 168 px/cm² at 1 m away in the center of the image. Under perfect conditions, the resolution drops with the inverse square of the distance from the scan’s position: about 42 px/cm² at 2 m away, about 10 px/cm² at 4 m away, etc.
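A quick sanity check of this falloff (the helper is ours; the base figure is the one quoted above):

```python
def panoramic_density(distance_m, base_px_per_cm2=168.0):
    """Inverse-square falloff of the projected texel density (px/cm^2)."""
    return base_px_per_cm2 / distance_m ** 2

# panoramic_density(1.0) -> 168.0, panoramic_density(2.0) -> 42.0,
# panoramic_density(4.0) -> 10.5, matching the figures quoted above.
```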
However, projecting panoramic images onto their respective 3D meshes is never perfect, and the projections arrive at a variety of angles and distances (see Figure 15). The images in this study were not uniform, and at steep angles (almost parallel to the projection), the resolution per surface area dropped rapidly toward zero. As such, the ideal conditions for preserving accurate texture information are to maintain the quality of the most perpendicular and close-proximity scans on top of an average image-stacked texture that fades out linearly into a combined image-stacked average texture.
The angle-and-distance mask (A-and-D) layer was used to enhance the overall quality by incorporating the highest-resolution parts of the scans (Figure 16). The resulting images were mostly transparent except for the highest-fidelity parts, so this had to work as a layer on top of the OBF AIS. The reason for discarding the acute angles was to avoid distorted and warped projections, while the distant projections had a low density that resulted in a lower overall resolution than the OBF AIS, potentially worsening the outcome. Each scan’s A-and-D mask was connected, together with its equivalent texture’s alpha value, to a mix RGB node using the darken function. The output was used in place of the extracted texture’s alpha value in the AIS group node.
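Since Blender's darken blend is a per-pixel minimum, the masked alpha fed into the AIS node can be written as a one-line sketch:

```python
import numpy as np

def ad_masked_alpha(alpha, ad_mask):
    """Darken-function mix: per-pixel minimum of a texture's alpha and its
    angle-and-distance mask, used in place of the alpha in the AIS node."""
    return np.minimum(alpha, ad_mask)
```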
In the case of the “Double Pelekean Sanctuary”, the final 16k texture had an average distribution of 20 px/cm² on a surface area spanning 1057 m². This almost uniform density approached the limit of the average panoramic quality, as the majority of the scans were over 2 m away from the projected area, yet the results looked almost as good as the ideal case. For mobile phones and tablets, an 8k texture is more appropriate due to their average hardware constraints; to maintain the best features and resolution for AR applications, a new UV was generated based on the 16k UV images. All polygons within a vertical distance of 1 m from the average Faro height were selected and their UV area was doubled in size, and the process was repeated for a 2 m distance. Finally, UV packing with a 1/16.384 margin was applied, and a new 8k texture was baked from the original 16k image. This process kept the majority of the details almost intact while removing the burden of excessive detail in difficult-to-reach areas that might not have had any better initial resolution on average.

6. Conclusions, Synopsis, and Lessons Learned

In this paper, we presented our efforts toward digitizing the Palace of Knossos. In this process, we tested the limits of our knowledge of digitization projects by addressing the needs of a very complex structure that combined both indoor and outdoor scenes. At the same time, several issues had to be addressed to deal with areas that were in outdoor sites but were not visible due to canopies, as well as illumination issues (low illumination in some sites and high illumination in others). Several visits were organized to study the site and define the digitization methodology to be followed. The digitization efforts resulted in a large amount of data that needed to be post-processed to generate the final registered 3D model of the site, which included both indoor and outdoor scenes. A collection of video renderings of the digitization outcomes can be accessed through Zenodo [59].
During post-processing, several issues were encountered that contributed to the collection of lessons learned from this work. Starting with the aerial scanning, due to the size of the site, we initially attempted to perform the digitization from a high altitude of 30 m using a dense grid structure for the acquisition of more images from a higher altitude. This resulted in a sufficient reconstruction of the entire site, but the acquired mesh resolution was not sufficient for our purposes. To compensate, we complemented the initial dataset with low-altitude scans by splitting the monument into smaller segments and performing a more detailed digitization from an altitude of 10 m per segment. Furthermore, manual flying was used in areas where 10 m was not a safe altitude due to obstacles. We then used the high-altitude reconstruction as a guide for registering the lower-altitude digitizations to produce an ultra-high-quality aerial scan of the site. Next, we had to overcome the issue that some areas of interest were not visible to the aerial scans due to the morphology of the monument (e.g., the area within the columns of the northern entrance of the palace). To address these issues, we followed a hybrid approach, using terrestrial laser scanning for these areas and then post-processing and combining the results with the ultra-high-quality aerial scan. The resulting model was a very realistic depiction of the heritage site obtained from aerial views complemented by terrestrial views, but it did not cover the indoor scenes of the site.
For the indoor scenes, due to the limitations of photogrammetry described earlier, the usage of a laser scanner was preferred. A careful analysis of each indoor scene was required to identify the appropriate locations for the laser scanner to cover the maximum amount of space with the minimum number of laser scans. One of the major problems in the captured indoor scenes was the variation in illumination, mainly because most scenes contained locations close to external light sources (high illumination), locations with reflected light (medium illumination), and locations with almost no light at all (low illumination). This caused several issues with the captured data, since some locations were over-illuminated while others were under-illuminated. Knowing this, during the capturing of the data, photographic documentation was also acquired to assist in the post-processing. The problems caused by the changes in illumination were addressed during post-processing following the methodology described earlier in this work. Several lessons were learned during this process, and many researchers could constructively criticize the approach followed, especially since no measures for rectifying the illumination issues during data capturing were taken. The reasons for this decision were as follows: First, considering that the data acquisition was conducted in early summer, the illumination of the scenes close to outdoor spaces could not easily be compensated with external light sources. In the areas right next to the scenes of high illumination, the lighting was sufficient. So, the only places where external light sources could have been used were areas of low illumination; these were, in most cases, very small spaces that barely provided sufficient room for the laser scanner to operate. Apart from artificial lighting, another option was multi-exposure high-dynamic-range (HDR) capturing, i.e., taking and then combining several different exposures of the same subject matter. This was not preferred, since even with multi-exposure HDR capturing the problems would not disappear entirely due to the extreme variation in the illumination intensity. The methodology presented in Section 5.1 was instead built around optimizing the texture mapping to compensate for the illumination discrepancies.
The final textured models were fine-tuned for two usage variations: the first targeted archaeological study purposes, with the maximum number of vertices and an ultra-high-quality texture size. Attempts to use the same 3D models for AR applications resulted in several performance issues due to both the texture size and the mesh complexity. Thus, a second, lower-resolution, mobile-app-friendly version of the 3D models was implemented using the process described earlier to achieve the maximum amount of information with the lowest possible mesh and texture size.
As a synopsis of our experience, we can safely conclude that 3D reconstruction technologies and software are not yet capable of producing results that are directly usable, either for study or entertainment purposes. In both cases, the need to register, post-process, and fine-tune the digitization outcomes requires a large amount of time and effort, as well as special expertise in specialized post-processing software. Furthermore, in the case of archaeological sites, the long-term preservation of source data is of the utmost importance, since (a) the reacquisition of data requires repetition of the effort, (b) new technologies and new software may improve digitization results from existing datasets, (c) the validation of the digitization outcomes will always require checking against the source data, and (d) the data are a valuable source for academic research.
We foresee that the presented methodology will make it possible to exploit digitization outcomes for purposes beyond mere archaeological study, expanding their applicability into scenarios that include education and entertainment, and thus enhancing the ways that culture can be experienced.

Author Contributions

Conceptualization, N.P. and E.Z.; methodology, E.Z., N.P. and X.Z.; software, Z.P., E.Z., T.E. and A.K.; validation, E.Z., X.Z., N.P. and C.S.; formal analysis, T.E., A.K., E.Z. and N.P.; investigation, E.Z., N.P., X.Z. and C.S.; resources, T.E.; data curation, T.E. and A.K.; writing—original draft preparation, N.P., Z.P., E.Z., X.Z., A.K. and T.E.; writing—review and editing, N.P., Z.P., E.Z., X.Z., A.K. and T.E.; visualization, T.E., A.K., Z.P. and E.Z.; supervision, E.Z., X.Z., N.P. and C.S.; project administration, E.Z.; funding acquisition, N.P. and C.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was conducted in the context of the “Knossos Digital Tour Guide” of the Ephorate of Antiquities of Heraklion of the Hellenic Ministry of Culture and Sports funded by the NSRF 2014–2020—RIS 3 Crete program.

Data Availability Statement

Data are available in the Zenodo open-access repositories, and the datasets are cited in the main body of the document. Access to data is available upon request and approval by the Ephorate of Antiquities of Heraklion and the Greek Ministry of Culture.

Acknowledgments

In this work, we collaborated with the Ephorate of Antiquities of Heraklion of the Hellenic Ministry of Culture and Sports in the context of the implementation of the project “Knossos Digital Tour Guide” that was funded by the NSRF 2014–2020—RIS 3 Crete program. The authors would like to thank the Ephorate of Antiquities of Heraklion for their valuable collaboration and support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Knossos Palace. Available online: https://knossospalace.gr/ (accessed on 10 December 2022).
  2. MacDonald, C. Knossos. In The Oxford Handbook of the Bronze Age Aegean; Cline, E.H., Ed.; Oxford Academic: Oxford, UK, 2012. [Google Scholar] [CrossRef]
  3. Evans, A.J. The palace of Knossos. Annu. Br. Sch. Athens 1901, 7, 1–120. [Google Scholar] [CrossRef]
  4. Evans, J.D.; Cann, J.R.; Renfrew, A.C.; Cornwall, I.W.; Western, A.C. Excavations in the neolithic settlement of Knossos, 1957–1960. Part I. Annu. Br. Sch. Athens 1964, 59, 132–240. [Google Scholar] [CrossRef]
  5. Minoancrete. Available online: http://www.minoancrete.com/knossos1.htm (accessed on 10 December 2022).
  6. Wohlfeil, J.; Strackenbrock, B.; Kossyk, I. Automated high resolution 3D reconstruction of cultural heritage using multi-scale sensor systems and semi-global matching. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, XL-4/W4, 37–43. [Google Scholar] [CrossRef] [Green Version]
  7. Wahbeh, W.; Nebiker, S.; Fangi, G. Combining Public Domain and Professional Panoramic Imagery for the Accurate and Dense 3D Reconstruction of the Destroyed BEL Temple in Palmyra. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, III-5, 81–88. [Google Scholar] [CrossRef] [Green Version]
  8. Pervolarakis, Z.; Agapakis, A.; Zidianakis, E.; Katzourakis, A.; Evdemon, T.; Partarakis, N.; Zabulis, X.; Stephanidis, C. A Case Study on Supporting the Preservation, Valorization and Sustainability of Natural Heritage. Heritage 2022, 5, 956–971. [Google Scholar] [CrossRef]
  9. Hardouin, G.; Moras, J.; Morbidi, F.; Marzat, J.; Mouaddib, E.M. Next-Best-View planning for surface reconstruction of large-scale 3D environments with multiple UAVs. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 1567–1574. [Google Scholar] [CrossRef]
  10. Xu, Z.; Wu, T.H.; Shen, Y.; Wu, L. Three Dimentional Reconstruction of Large Cultural Heritage Objects Based on UAV Video and TLS Data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLI-B5, 985–988. [Google Scholar] [CrossRef] [Green Version]
  11. Stampouloglou, M.; Toska, O.; Tapinaki, S.; Kontogianni, G.; Skamantzari, M.; Georgopoulos, A. 3D Documentation and Virtual Archaeological Restoration of Macedonian Tombs. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-2/W11, 1073–1080. [Google Scholar] [CrossRef] [Green Version]
  12. El-Hakim, S.F.; Beraldin, J.-A.; Gonzo, L.; Whiting, E.; Jemtrud, M.; Valzano, V. A Hierarchical 3D Reconstruction Approach for Documenting Complex Heritage Sites. In Proceedings of the 20th Symposium of International Cooperation to Save the World’s Cultural Heritage, Torino, Italy, 26 September–1 October 2005. [Google Scholar]
  13. El-Hakim, S.; Beraldin, J.-A.; Picard, M.; Godin, G. Detailed 3D reconstruction of large-scale heritage sites with integrated techniques. IEEE Comput. Graph. Appl. 2004, 24, 21–29. [Google Scholar] [CrossRef]
  14. Delegou, E.T.; Mourgi, G.; Tsilimantou, E.; Ioannidis, C.; Moropoulou, A. A Multidisciplinary Approach for Historic Buildings Diagnosis: The Case Study of the Kaisariani Monastery. Heritage 2019, 2, 1211–1232. [Google Scholar] [CrossRef] [Green Version]
  15. Cefalu, A.; Abdel-Wahab, M.; Peter, M.; Wenzel, K.; Fritsch, D. Image based 3D Reconstruction in Cultural Heritage Preservation. In Proceedings of the 10th International Conference on Informatics in Control, Automation and Robotics, Reykjavík, Iceland, 1–3 September 2014; pp. 201–205. [Google Scholar] [CrossRef] [Green Version]
  16. Santagati, C.; Inzerillo, L.; Di Paola, F. Image-based modeling techniques for architectural heritage 3D digitalization: Limits and potentialities. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, XL-5, 550–560. [Google Scholar] [CrossRef] [Green Version]
  17. Vertices Limit—Unity Forums. Available online: https://forum.unity.com/threads/65535-vertices-limit.294585/ (accessed on 9 December 2022).
  18. Caro, J.L.; Hansen, S. From photogrammetry to the dissemination of archaeolo-gical heritage using game engines: Menga case study. Virtual Archaeol. Rev. 2015, 6, 58–68. [Google Scholar] [CrossRef]
  19. Kontogianni, G.; Georgopoulos, A. Exploiting Textured 3D Models for Developing Serious Games. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, XL-5/W7, 249–255. [Google Scholar] [CrossRef] [Green Version]
  20. LeJacq, Y. The Real World Reflected in the Vanishing of Ethan Carter. Available online: https://www.kotaku.com.au/2014/10/the-real-world-reflected-in-the-vanishing-of-ethan-carter/ (accessed on 15 November 2022).
  21. How We Used Photogrammetry to Capture Every Last Detail for Star Wars Battlefront. Available online: https://swtorstrategies.com/2015/05/dice-used-photogrammetry-capture-every-last-detail-star-wars-battlefront.html (accessed on 23 October 2022).
  22. Substance Designer 5.6—Scan Blending and Weathering. Available online: https://m.blog.naver.com/sspsos74/220889041556 (accessed on 3 June 2022).
  23. Meet the New Megascans. Available online: https://quixel.com/blog/2017/11/21/meet-the-new-megascans (accessed on 9 August 2022).
  24. Su, C.J. An internet based virtual exhibition system: Conceptual deisgn and infrastructure. Comput. Ind. Eng. 1998, 35, 615–618. [Google Scholar] [CrossRef]
  25. Lim, J.C.; Foo, S. Creating Virtual Exhibitions from an XML-Based Digital Archive. J. Inf. Sci. 2003, 29, 143–157. [Google Scholar] [CrossRef]
  26. Dumitrescu, G.; Lepadatu, C.; Ciurea, C. Creating Virtual Exhibitions for Educational and Cultural Development. Inform. Econ. 2014, 18, 102–110. [Google Scholar] [CrossRef]
  27. Foo, S. Online Virtual Exhibitions: Concepts and Design Considerations. DESIDOC J. Libr. Inf. Technol. 2008, 28, 22–34. [Google Scholar] [CrossRef] [Green Version]
  28. Rong, W. Some Thoughts on Using VR Technology to Communicate Culture. Open J. Soc. Sci. 2018, 06, 88–94. [Google Scholar] [CrossRef] [Green Version]
  29. Partarakis, N.; Antona, M.; Stephanidis, C. Adaptable, personalizable and multi user museum exhibits. In Curating the Digital; Springer: Cham, Switzerland, 2016; pp. 167–179. [Google Scholar]
  30. Papagiannakis, G.; Schertenleib, S.; O’Kennedy, B.; Arevalo-Poizat, M.; Magnenat-Thalmann, N.; Stoddart, A.; Thalmann, D. Mixing virtual and real scenes in the site of ancient Pompeii. Comput. Animat. Virtual Worlds 2005, 16, 11–24. [Google Scholar] [CrossRef] [Green Version]
  31. Magnenat-Thalmann, N.; Papagiannakis, G. Virtual worlds and augmented reality in cultural heritage applications. Rec. Model. Vis. Cult. Herit. 2005, 16, 419–430. [Google Scholar]
  32. Papagiannakis, G.; Magnenat-Thalmann, N. Mobile augmented heritage: Enabling human life in ancient Pompeii. Int. J. Archit. Comput. 2007, 5, 395–415. [Google Scholar] [CrossRef]
  33. Zidianakis, E.; Partarakis, N.; Ntoa, S.; Dimopoulos, A.; Kopidaki, S.; Ntagianta, A.; Ntafotis, E.; Xhako, A.; Pervolarakis, Z.; Kontaki, E.; et al. The Invisible Museum: A User-Centric Platform for Creating Virtual 3D Exhibitions with VR Support. Electronics 2021, 10, 363. [Google Scholar] [CrossRef]
  34. Partarakis, N.; Doulgeraki, P.; Karuzaki, E.; Adami, I.; Ntoa, S.; Metilli, D.; Bartalesi, V.; Meghini, C.; Marketakis, Y.; Kaplanidi, D.M.; et al. Representation of Socio-historical Context to Support the Authoring and Presentation of Multimodal Narratives: The Mingei Online Platform. J. Comput. Cult. Herit. 2021, 15, 1–26. [Google Scholar] [CrossRef]
  35. Stefanidi, E.; Partarakis, N.; Zabulis, X.; Zikas, P.; Papagiannakis, G.; Magnenat Thalmann, N. TooltY: An approach for the combination of motion capture and 3D reconstruction to present tool usage in 3D environments. In Intelligent Scene Modeling and Human-Computer Interaction; Springer: Cham, Switzerland, 2021; pp. 165–180. [Google Scholar]
  36. Stefanidi, E.; Partarakis, N.; Zabulis, X.; Papagiannakis, G. An approach for the visualization of crafts and machine usage in virtual environments. In Proceedings of the 13th International Conference on Advances in Computer-Human Interactions, Valencia, Spain, 21–25 November 2020; pp. 21–25. [Google Scholar]
  37. Dieck, M.C.T.; Jung, T.H. Value of augmented reality at cultural heritage sites: A stakeholder approach. J. Destin. Mark. Manag. 2017, 6, 110–117. [Google Scholar] [CrossRef]
  38. Mathioudakis, G.; Klironomos, I.; Partarakis, N.; Papadaki, E.; Anifantis, N.; Antona, M.; Stephanidis, C. Supporting Online and On-Site Digital Diverse Travels. Heritage 2021, 4, 4558–4577. [Google Scholar] [CrossRef]
  39. The Historical Figures AR. Available online: https://play.google.com/store/apps/details?id=ca.altkey.thehistoricalfiguresar (accessed on 31 October 2022).
  40. Pervolarakis, Z.; Agapakis, A.; Xhako, A.; Zidianakis, E.; Katzourakis, A.; Evdaimon, T.; Sifakis, M.; Partarakis, N.; Zabulis, X.; Stephanidis, C. A Method and Platform for the Preservation of Temporary Exhibitions. Heritage 2022, 5, 2833–2850. [Google Scholar] [CrossRef]
  41. Zabulis, X.; Partarakis, N.; Meghini, C.; Dubois, A.; Manitsaris, S.; Hauser, H.; Magnenat Thalmann, N.; Ringas, C.; Panesse, L.; Cadi, N.; et al. A Representation Protocol for Traditional Crafts. Heritage 2022, 5, 716–741. [Google Scholar] [CrossRef]
  42. Zenodo. Available online: https://zenodo.org/ (accessed on 31 October 2022).
  43. Pix4dcapture. Available online: https://www.pix4d.com/product/pix4dcapture (accessed on 10 October 2022).
  44. Pix4dmapper. Available online: https://www.pix4d.com/product/pix4dmapper-photogrammetry-software (accessed on 10 October 2022).
  45. Pix4dmatic. Available online: https://www.pix4d.com/product/pix4dmatic-large-scale-photogrammetry-software (accessed on 10 October 2022).
  46. FARO Focus Laser Scanners. Available online: https://www.faro.com/en/Products/Hardware/Focus-Laser-Scanners (accessed on 20 March 2022).
  47. FARO® SCENE Software. Available online: https://www.faro.com/en/Products/Software/SCENE-Software (accessed on 10 February 2022).
  48. Evdaimon, T.; Zabulis, X.; Partarakis, N. Photographic documentation of the main complex of the Knossos Palace. Zenodo 2022. [Google Scholar] [CrossRef]
  49. Nikon D850. Available online: https://www.nikonusa.com/en/nikon-products/product/dslr-cameras/d850.html (accessed on 10 November 2022).
  50. Evdaimon, T.; Zabulis, X.; Partarakis, N. A collection of photographs documenting the peripheral site to the Palace of Knossos. Zenodo 2022. [Google Scholar] [CrossRef]
  51. Evdaimon, T.; Zabulis, X.; Partarakis, N. A collection of UAV photographs from a high altitude of the Palace of Knossos. Zenodo 2022. [Google Scholar] [CrossRef]
  52. DJI Phantom 4 v2.0. Available online: https://www.dji.com/gr/phantom-4 (accessed on 10 December 2022).
  53. Evdaimon, T.; Zabulis, X.; Partarakis, N. A collection of UAV photographs from a low altitude of the Palace of Knossos flights 1–10. Zenodo 2022. [Google Scholar] [CrossRef]
  54. Evdaimon, T.; Zabulis, X.; Partarakis, N. A collection of UAV photographs from a low altitude of the Palace of Knossos flights 10–20. Zenodo 2022. [Google Scholar] [CrossRef]
  55. Evdaimon, T.; Zabulis, X.; Partarakis, N. A collection of Laser scans of the main complex of the Palace of Knossos. Zenodo 2022. [Google Scholar] [CrossRef]
  56. Evdaimon, T.; Zabulis, X.; Partarakis, N. A collection of laser scans of peripheral sites of the Palace of Knossos archaeological site. Zenodo 2022. [Google Scholar] [CrossRef]
  57. Evdaimon, T.; Katzourakis, A.; Pervolarakis, Z.; Partarakis, N.; Zidianakis, E.; Zabulis, X. A collection of indoor and outdoor digitisations of the Knossos Palace. Zenodo 2022. [Google Scholar] [CrossRef]
  58. Blender v2.93. Available online: https://www.blender.org/ (accessed on 13 July 2022).
  59. Katzourakis, A.; Pervolarakis, Z.; Evdaimon, T.; Zidianakis, E.; Partarakis, N.; Zabulis, X. A collection of rendered videos that present the digitisation outcomes and the AR application of the Knossos Palace. Zenodo 2022. [Google Scholar] [CrossRef]
Figure 1. Flight path and examples of image acquisition.
Figure 2. Scanning methodology.
Figure 3. Aerial scans.
Figure 5. Three panoramic projections with density indications on a mesh (note the difference in shadow positions).
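As context for the density indications in Figure 5: for a terrestrial laser scanner with angular step sizes $\Delta\alpha$ and $\Delta\beta$, a surface patch at range $d$ hit at incidence angle $\theta$ (measured from the surface normal) receives a sample density of roughly

$$\rho \approx \frac{\cos\theta}{d^{2}\,\Delta\alpha\,\Delta\beta},$$

a standard geometric relation (not a formula stated by the authors) that explains why close, near-perpendicular scans dominate the texture-selection steps discussed below.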
Figure 6. Textures per scan.
Figure 7. Accurate vs. fallback textures: (a) Faro scan original mesh; (b) simplified AR target mesh; (c) high–low polygonal overlapping meshes; (d) accurate texture projection; (e) fallback texture projection; and (f) angle-and-distance mask based on the scan.
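Figure 7(f) shows a mask derived from viewing angle and distance. The following is a minimal sketch of how such a per-texel mask could be computed; the function name, cut-off values, and inputs are hypothetical illustrations, not the authors' implementation.

```python
# Sketch: weight each texel by how favourably a given scan position saw it.
# All names and thresholds are assumptions for illustration only.
import numpy as np

def angle_distance_mask(texel_positions, texel_normals, scanner_pos,
                        max_angle_deg=75.0, max_dist=10.0):
    """Return a weight per texel: 1.0 for a perpendicular, close-range view,
    falling to 0.0 at the angle and distance cut-offs."""
    view = scanner_pos - texel_positions                     # texel -> scanner
    dist = np.linalg.norm(view, axis=-1)
    view_dir = view / np.maximum(dist[..., None], 1e-9)
    cos_inc = np.clip(np.sum(view_dir * texel_normals, axis=-1), 0.0, 1.0)
    angle = np.degrees(np.arccos(cos_inc))                   # incidence angle
    angle_w = np.clip(1.0 - angle / max_angle_deg, 0.0, 1.0)
    dist_w = np.clip(1.0 - dist / max_dist, 0.0, 1.0)
    return angle_w * dist_w
```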
Figure 11. Example of applying the image-stacking methodology (test case: Double Pelekean Hall): (a) default Faro unified texture quality; (b) image-stacking approach to unifying scans for AR exhibits.
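A minimal sketch of the per-texel averaging behind an image-stacking approach like that of Figure 11(b), assuming every scan has already been projected into a shared UV layout; the array shapes and names are assumptions for illustration, not the authors' code.

```python
# Sketch: weighted per-texel average of N aligned per-scan textures.
import numpy as np

def stack_textures(textures, masks):
    """textures: (N, H, W, 3) aligned texture projections;
    masks: (N, H, W) per-scan coverage/quality weights.
    Returns the per-texel weighted average."""
    w = masks[..., None].astype(np.float64)
    total = np.sum(w, axis=0)
    stacked = np.sum(textures.astype(np.float64) * w, axis=0) \
        / np.maximum(total, 1e-9)
    return stacked.astype(textures.dtype)
```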
Figure 12. Homogeneous UV texture distribution.
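One way to quantify the homogeneity shown in Figure 12 is to compare texture-space area against world-space area per triangle; a homogeneous unwrap keeps that ratio nearly constant across the mesh. A small sketch under that assumption (the helper and texture size are illustrative, not from the paper):

```python
# Sketch: texels per unit of surface area for each triangle of a UV-mapped mesh.
import numpy as np

def uv_density_per_triangle(verts, uvs, faces, tex_size=8192):
    """verts: (V, 3), uvs: (V, 2), faces: (F, 3) vertex indices.
    Returns texel density per triangle; low variance means a homogeneous unwrap."""
    def tri_area(p):                          # p: (F, 3, k) triangle corners
        e1, e2 = p[:, 1] - p[:, 0], p[:, 2] - p[:, 0]
        if p.shape[-1] == 2:                  # 2D (UV-space) area
            return 0.5 * np.abs(e1[:, 0] * e2[:, 1] - e1[:, 1] * e2[:, 0])
        return 0.5 * np.linalg.norm(np.cross(e1, e2), axis=-1)
    area_3d = tri_area(verts[faces])
    area_uv = tri_area(uvs[faces]) * tex_size ** 2   # UV area in texels
    return area_uv / np.maximum(area_3d, 1e-12)
```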
Figure 15. Preserving accurate texture information by maintaining the quality of the most perpendicular, close-proximity scans on top of an averaged image-stacked texture: (a) a panoramic projection of an example surface viewed at an acute angle from 6 m away; and (b) a panoramic projection of an example surface viewed from 1.5 m at a mostly perpendicular angle.
Figure 16. Angle-and-distance (A-and-D) layer structure; A-and-D mask refers to each scan’s A-and-D-generated mask.
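Figures 15 and 16 together suggest a layered composite: the image-stacked average serves as the fallback base, and each scan's accurate projection is blended on top through its A-and-D mask so that the most perpendicular, closest views end up dominating. A rough sketch of that compositing order (the ranking heuristic and all names are assumptions, not the authors' implementation):

```python
# Sketch: alpha-composite per-scan textures over the stacked base, weakest
# scans first, so the strongest (closest, most perpendicular) views win.
import numpy as np

def composite_layers(base, scan_textures, ad_masks):
    """base: (H, W, 3) image-stacked fallback; scan_textures: list of (H, W, 3);
    ad_masks: list of (H, W) angle-and-distance masks used as alpha."""
    out = base.astype(np.float64)
    order = np.argsort([m.mean() for m in ad_masks])   # heuristic: weakest first
    for i in order:
        a = ad_masks[i][..., None]
        out = out * (1.0 - a) + scan_textures[i].astype(np.float64) * a
    return out.astype(base.dtype)
```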
Table 1. Collection of aerial photographs per site.

Sites | Number of Photos | Description
Knossos Palace | 11,056 | A total of 1,506 photos from a high altitude, plus 20 low-altitude flights, each covering a segment of the heritage site, with an average of 480 photos per flight.
Caravan Serai | 266 | One low-altitude flight of 266 photos.
Temple Tomb | 275 | One low-altitude flight of 275 photos.
Royal Villa at Knossos | 1,406 | Two low-altitude flights: 999 photos during the first and 407 during the second.
Little Palace at Knossos | 610 | Two low-altitude flights: 289 photos during the first and 321 during the second.
Total | 13,613 | The total number of aerial photos acquired to reconstruct the entire heritage site.
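A quick arithmetic check of the counts in Table 1 (plain Python, added here for verification only; the ~480 figure is an average, as the exact quotient shows):

```python
# Consistency check of the aerial photo counts reported in Table 1.
per_site = {"Knossos Palace": 11_056, "Caravan Serai": 266, "Temple Tomb": 275,
            "Royal Villa at Knossos": 1_406, "Little Palace at Knossos": 610}
assert sum(per_site.values()) == 13_613            # matches the stated total
assert 999 + 407 == 1_406 and 289 + 321 == 610     # two-flight sites add up
# Knossos Palace: 11,056 total minus 1,506 high-altitude photos leaves 9,550
# photos over 20 low-altitude flights, i.e. 477.5 per flight (~480 as stated).
assert (11_056 - 1_506) / 20 == 477.5
```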
Table 2. The number of laser scans per site. Note: the number of scans depended on parameters related more to the complexity than to the size of each site: internal architectural structures (e.g., pillars) increased complexity by contributing visual occlusions; multiple rooms increased the number of scans needed to capture the transition areas; and subsequent scans had to overlap for registration purposes.

Sites | Number of Scans
Throne Room | 52
North Entrance, North Pillar Hall | 13
West Wing | 30
West Magazine | 9
The Hall of the Double Axes and the Queen’s Megaron | 49
Shrine of the Double Axes | 30
Magazine of the Medallion Pithoi, Corridor of the Bays | 15
South House | 30
South Propylaeum | 12
North Lustral Basin | 11
Caravan Serai | 12
Temple Tomb | 22
Royal Villa at Knossos | 24
Little Palace at Knossos | 37
Total | 346
Table 3. Collection of datasets made available at Zenodo.

Title | Contents
Photographic documentation of the main complex of the Knossos Palace [48] | Photos of the main complex of the palace taken using a Nikon D850 [49]
Photographic documentation of the peripheral sites of the Knossos Palace [50] | Photos of the peripheral sites taken using a Nikon D850 [49]
Aerial photographic documentation for photogrammetric reconstruction (high altitude) [51] | Aerial photographic documentation using a DJI Phantom 4 [52] equipped with a 4K camera
Aerial photographic documentation for photogrammetric reconstruction (low altitude), flights 1–10 [53] | Aerial photographic documentation using a DJI Phantom 4 [52] equipped with a 4K camera
Aerial photographic documentation for photogrammetric reconstruction (low altitude), flights 10–20 [54] | Aerial photographic documentation using a DJI Phantom 4 [52] equipped with a 4K camera
Laser scans of the main palace complex [55] | Laser scans acquired using a Faro Focus laser scanner [46]
Laser scans of the peripheral sites of the Knossos Palace [56] | Laser scans acquired using a Faro Focus laser scanner [46]
A collection of indoor and outdoor digitizations of the Knossos Palace [57] | Three-dimensional reconstruction results (indoor and outdoor)
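Since Table 3 points to Zenodo records, a hedged sketch of programmatic retrieval via Zenodo's public REST API follows. RECORD_ID is a placeholder, not a real ID (the concrete records resolve through the DOIs of refs. [48,50,51,53,54,55,56,57]); field names follow Zenodo's documented record schema and should be verified against the live API.

```python
# Hedged sketch: listing the files of one Zenodo dataset from Table 3.
import requests

RECORD_ID = "0000000"  # placeholder record ID, resolve via the dataset DOIs
rec = requests.get(f"https://zenodo.org/api/records/{RECORD_ID}", timeout=30)
rec.raise_for_status()
for f in rec.json().get("files", []):
    print(f["key"], f["links"]["self"])  # file name and direct download URL
```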