Article

Multi-Scale Presentation of Spatial Context for Cultural Heritage Applications

1
Institute of Computer Science, Foundation for Research and Technology (ICS-FORTH), N. Plastira 100, Vassilika Vouton, 70013 Heraklion, Greece
2
Piraeus Bank Group Cultural Foundation, 6 Ang. Gerontas St., 10558 Athens, Greece
3
Histoire des Technosciences en Société, Conservatoire National des Arts et Métiers (HT2S-CNAM), Case 1LAB10, 2 rue Conté, 75003 Paris, France
*
Author to whom correspondence should be addressed.
Electronics 2022, 11(2), 195; https://doi.org/10.3390/electronics11020195
Received: 4 November 2021 / Revised: 1 January 2022 / Accepted: 7 January 2022 / Published: 9 January 2022
(This article belongs to the Special Issue Applications of Computer Vision in Interactive Environments)

Abstract: An approach to the representation and presentation of the spatial and geographical context of cultural heritage sites is proposed. The goal is to combine semantic representations of social and historical context with 3D representations of cultural heritage sites, acquired through 3D reconstruction and 3D modeling technologies, to support their interpretation and presentation in education and tourism. Several use cases support and demonstrate the application of the proposed approach, including an immersive craft and context demonstration environment and interactive games.

1. Introduction

Cultural heritage (CH) sites refer to places, localities, natural landscapes, settlement areas, architectural complexes, archaeological sites, or standing structures that are recognized and often legally protected as places of historical and cultural significance [1]. The proliferation of digitization and digital presentation modalities has created new opportunities in the digital preservation and interpretation of sites of World and Natural Heritage. The 3D digitization of CH sites is nowadays the most common representation approach, enabling the remote viewing of these sites or the on-site augmentation of audiovisual information. At the same time, research on digital libraries and the semantic representation of knowledge has created the potential of systematically encoding documentation that can accompany digital assets, to support their interpretation. In this work, we propose the combination of these signal and semantic sources of information to contextualize the presentation of CH sites. Signal sources are those acquired through technical media by “measuring” and “capturing” spatial and geographical context, while semantic sources are the socio-historic information that provides meaning and stories to the acquired representations.
The goals and methodology of the proposed work correspond to the seven principles of the “Ename Charter” of the International Council on Monuments and Sites (ICOMOS) on the Interpretation and Presentation of CH sites [1], as follows.
  • By integrating the semantic information with digital assets, we facilitate the understanding and appreciation of cultural heritage sites and foster awareness and engagement, to serve their protection and conservation.
  • We communicate the meaning of CH sites to document their significance, through scientific and scholarly authoritative information sources.
  • By offering digital representations of spatial and geographical context, we support the safeguarding of tangible and intangible values of CH sites in their natural settings and social contexts.
  • We support the authenticity of CH sites, by communicating the significance of their social and historical context as well as their cultural values, to protect them from inaccurate or inappropriate interpretations.
  • The offering of contextual information contributes to the sustainable conservation of CH sites, by promoting the understanding of conservation efforts.
  • To encourage inclusiveness in the interpretation of CH sites, we facilitate the involvement of stakeholders and associated communities in the development of interpretations, by offering simple-to-use authoring interfaces.
  • We develop technical guidelines for heritage interpretation and presentation, including technologies and research, along with guidelines on their use.
The remainder of this paper is structured as follows. In Section 2, related work is reviewed. In Section 3, the proposed approach is presented. In Section 4, the implementation of use cases using the proposed approach is demonstrated. Conclusions and directions for future work are provided in Section 5.

2. Background and Related Work

2.1. Cultural Mapping

Cultural mapping, also known as cultural resource mapping or cultural landscape mapping, refers to a range of research techniques and tools used to map tangible and intangible cultural assets within local landscapes around the world. Cultural mapping has been recognized by UNESCO as a crucial tool and technique for preserving the world’s intangible and tangible cultural assets. It ranges from techniques for data collection and management to sophisticated mapping using Geographic Information Systems [2,3].
Today, CH sites are considered places [4], not only physical structures. The “whole environment is shaped and affected by its interaction with humanity and can be recognized as a heritage” [5]. Of particular interest is that CH sites include “all the tangible or intangible elements that demonstrate the particular relationship that a human community has established with a territory over time” [6]. They moreover include the environments exploited by social groups, the architectural structures, as well as the places where craft products are produced [7].

2.2. 3D Digitization

The field of 3D digitization, or 3D scanning, encompasses a variety of techniques and modalities, each with its own distinct characteristics, and has attracted growing interest in the documentation of structured scenes across multiple application domains. Several 3D scanning modalities have been developed, which are distinguished by whether or not they require contact with the scanned surfaces and objects. Non-contact scanning modalities are more widely employed; they use light as the operating principle of the sensor and can be further classified according to the sensor type, that is, into passive or active illumination systems. Active sensors emit their own electromagnetic energy for surface detection, while passive sensors utilize ambient light.
The most adopted and robust principles for the digitization of tangible CH are time of flight or laser scanning, e.g., [8], structured light, e.g., [9], and photogrammetry, e.g., [2]. A range of products employs these principles in variations, including digitization over time [10]. In photogrammetry, terrestrial and aerial approaches often differ, with the latter using the global positioning system (GPS) coordinates of the drone sensor to assist reconstruction (Pix4D [11]). Combinations of these principles are found in off-the-shelf devices, such as the family of handheld scanners that combine trinocular photogrammetry with active illumination (Faro Focus [12]).
The capabilities of the available technologies vary in terms of several criteria, underscoring the importance of considering them with respect to the usage scenario. These criteria include resolution, accuracy, range, sampling rate, cost, operating conditions, skill requirements, the purpose of documentation, the material of the scanned object, as well as weight and ease of transport [13]. There are significant variations between the capabilities of different approaches. Photogrammetric techniques provide more photorealistic texture than time-of-flight modalities but are structurally less accurate. When accuracy is of paramount importance, close access to the scanned object is required. If this is impossible or impractical, aerial scans can be used. In this case, though, time-of-flight techniques provide less accurate results, as the sensor is airborne and, thus, not static. Hence, the sampling rate and scan duration of the sensor are relevant, as a time-of-flight scan lasts much longer than the acquisition of photogrammetry images.

2.3. Digital Assets in Games

In the game industry, the creation of 3D models of real-world scenes has always been an attractive topic. According to [14], building virtual 3D scenes has always required talented people with artistic sense, specialized and expensive software, strong computational resources for photorealistic visualization, and significant manual effort. This process includes the acquisition and usage of several visual references, such as concept art and photographs [15]. Realistic productions rely heavily on photographs, with higher-budget games investing in field trips during the preproduction phase to capture authentic photographs on location [16,17]. Until recently, photogrammetry was only used sporadically in game development, as the dense meshes of millions of polygons generated by these processes are highly unsuitable for real-time rendering.
The announcement by EA DICE in May 2015 that its new flagship title “Star Wars: Battlefront” would rely heavily on photogrammetry to capture the franchise’s acclaimed settings [18] was a radical change in game development. Faced with the challenge of capturing the well-established visual style of Star Wars, the team opted for photogrammetry to recreate not only props and outfits previously used in the movies but also the epic locations that are familiar to Star Wars fans [19]. The creation of realistic photogrammetry scans is still computationally demanding, requiring high-end computers and many hours of processing time. Thus, as demonstrated by [20], populating extensive game worlds with photogrammetry assets demands larger-scale solutions. Another consideration is the need to manually post-process photogrammetry reconstructions, including cleaning, re-texturing, etc. [21]. It is foreseen that photogrammetry will be extensively used for creating realistic assets, due to reconstruction realism and the reduction of production costs [22]. Furthermore, it is conceivable that 3D art will follow the evolution of painting. In painting, the primary goal of the artists of past centuries was to create a visually convincing replication of the real world. Then, with the evolution of photography, it became apparent that the work of the painter would be replaced by that of the photographer. This resulted in an evolution in art, mainly because artists were relieved of the burden of realism and allowed to explore their creativity and create novel forms of artistic expression, resulting in the modern art revolution [23].

2.4. Virtual Environments in Cultural Heritage

Cultural heritage institutions seek new ways to attract and engage new visitors by investing in and implementing interactive experiences on-site [24]. Over the years, several technologies have emerged, each of which provides forms of interaction and various levels of immersion, but also poses requirements in terms of space, setup, and deployment. Furthermore, more users can be supported in the physical space through the combination of digital experiences and assistive technologies [25,26,27].
The CH domain has recently adopted state-of-the-art technologies (e.g., [28,29]) to support pure and hybrid Virtual Environments delivered through a combination of AR, VR, and MR technologies (e.g., [30,31,32,33]). When coexisting in a physical context, these are referred to as X-Reality (Extended Reality), or XR, applications [34]. The use of such technologies has the potential to enrich the information of cultural heritage artifacts and museum exhibits and to turn passive visitors into active participants, engaged in an interactive and immersive blend of the physical and the virtual as if it were a single, unified world [35].

3. The Proposed Approach

The proposed approach has been developed in the context of the Mingei European project [36,37], which aims at representing and making accessible both tangible and intangible aspects of crafts as CH [38]. It is a stepwise approach, illustrated in Figure 1, which begins with the acquisition of 3D data that capture the appearance of spatial and geographical context, through the reconstruction of geographical terrain from remote sensing, the 3D reconstruction of architectural structures from 3D scanning, and 3D modeling. The next step is the combination of 3D assets to synthesize representations that provide information on past states of sites that are no longer available, due to changes in landscape or the destruction of architectural structures. Next, the utilized digital assets are semantically annotated to link information on the social and historical context of the site. In the last step, the wealth of information achieved through this authoring process is employed to provide new opportunities for the exploitation of knowledge, using digital presentation modalities that support education and tourism. The use cases highlight the benefits of the proposed approach.

3.1. Acquisition of 3D Assets

To digitize physical environments, we employ (a) remote sensing, (b) photogrammetry and laser scanning, and (c) 3D modeling.

3.1.1. Remote Sensing

A digital terrain model (DTM) is a data structure that represents the ground surface without built structures or living organisms [39]. A heightmap is the most common implementation of a DTM, e.g., [40,41,42]. A heightmap is a raster image where each pixel encodes surface elevation above sea level. A heightmap image has, thus, one channel and can be visualized as a grayscale image, with black representing minimum height and white representing maximum height [43]. Using the neighborhood relations of pixels, a heightmap can be encoded as a 3D mesh. Heightmaps are usually obtained from remote sensing or aerial imaging. Besides creating heightmaps with one’s own devices, they can be obtained from several vendors, e.g., [44].
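As a minimal illustration of the pixel-neighborhood encoding described above, the following sketch converts a heightmap into mesh vertices and triangles. It assumes elevation values normalized to [0, 1]; the function and parameter names are illustrative, not part of any specific tool.

```python
import numpy as np

def heightmap_to_mesh(heightmap, cell_size=1.0, max_altitude=100.0):
    """Convert a single-channel heightmap (values in [0, 1]) into
    mesh vertices and triangle indices using pixel neighborhoods."""
    rows, cols = heightmap.shape
    # One vertex per pixel: x/z from the grid position, y from elevation.
    xs, zs = np.meshgrid(np.arange(cols), np.arange(rows))
    vertices = np.stack([xs * cell_size,
                         heightmap * max_altitude,
                         zs * cell_size], axis=-1).reshape(-1, 3)
    # Two triangles per grid cell, indexing the 4 neighboring pixels.
    triangles = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c
            triangles.append([i, i + cols, i + 1])
            triangles.append([i + 1, i + cols, i + cols + 1])
    return vertices, np.array(triangles)
```

The resulting vertex grid and triangle list correspond to the mesh form in which 3D engines typically consume terrain data.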
The main limitation of DTMs is that they do not include textures. To address this, two main approaches are possible. An artistic approach to terrain texturing is to “paint” the terrain using conventional digital drawing tools. Another approach is to attach textures from remote sensing (satellite) images to the terrain mesh [45]. The result of both approaches is a terrain mesh of a geographic area, at a level of realism that depends on the accuracy and density of the heightmap and the quality of the applied textures.
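The second approach, attaching a georeferenced satellite image to the terrain mesh, reduces to computing per-vertex texture coordinates. A minimal sketch, assuming the image covers exactly the terrain's horizontal extent (function and parameter names are illustrative):

```python
import numpy as np

def terrain_uvs(vertices, min_xz, max_xz):
    """Compute per-vertex UV coordinates that map a georeferenced
    satellite image onto the terrain mesh by normalizing each
    vertex's horizontal (x, z) position over the terrain extent."""
    xz = vertices[:, [0, 2]]                    # drop elevation (y)
    span = np.asarray(max_xz) - np.asarray(min_xz)
    uvs = (xz - np.asarray(min_xz)) / span      # normalize to [0, 1]
    return np.clip(uvs, 0.0, 1.0)
```

In practice, the terrain extent would be taken from the georeferencing metadata of the heightmap and satellite image, so that both align in the same coordinate system.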

3.1.2. Photogrammetry and Laser Scanning

For accurate and realistic representations of spatial and geographical context, several 3D reconstruction technologies can be employed. In the past 20 years, several 3D scanning modalities have been developed, which can be distinguished by whether or not they require contact with the scanned surfaces and objects. Non-contact 3D scanning modalities, used for the digitization of CH, are based on either active or passive sensors (see Figure 2).
Passive sensors are typically conventional monochromatic, color, or multispectral visual sensors (cameras). The scanning output is the computation result of a reconstruction algorithm, typically based on the correspondence mapping of multiple views. Active sensors include laser or conventional light-emitting techniques to measure 3D information, such as distance range. Some modalities integrate assisting information from an inertial measurement unit (IMU) or GPS unit that is operated by auxiliary software components. Passive sensors work with less information as they do not have the advantage of inserting reconstruction cues (i.e., structured illumination) into the environment. As such they are associated with sophisticated algorithmic approaches and are sensitive to illumination artifacts. Active sensors typically include direct distance measurement methods and are often limited by sunlight, as it is more luminous than the active illumination source of the sensor. Occlusion is a limiting factor for both active and passive sensors and, typically, multiple scans are required to comprehensively capture a 3D scene.
The following digitization modalities are employed depending on the type of environment and size of the scanned area, as illustrated in Figure 3.
Laser scanning: The advantage of laser scanning is that it is a very efficient, accurate, and robust modality. It provides a direct point measurement along the line of sight of every ray within its view sphere, at a configurable resolution and an angular coverage of approximately 270 degrees. The main disadvantage of laser scanning is its price: a reliable unit of medium accuracy (~2–3 mm) with a scan range of about 70 m is in the order of 30 K Euros.
Photogrammetry: For the digitization of outdoor environments, the proliferation of unmanned aerial vehicles (drones) has facilitated aerial photogrammetric reconstruction, providing vantage viewpoints that greatly simplify reconstruction. On the other hand, scene segments of interest may not be visible from aerial views, such as, for example, the scene locations below the eaves of buildings. For indoor environments, photogrammetric reconstruction becomes more tedious for several reasons. The main ones are: (a) lack of sufficient illumination, (b) lack of texture, particularly on blank walls and ceilings, (c) surfaces of high reflectance that exhibit illumination specularities when directly illuminated, such as metallic objects, and (d) detailed scans require the acquisition of a large number of images to cover occlusions, making image acquisition time-consuming. In addition, photogrammetric reconstruction requires significant computational time to obtain results, because it is not based on direct measurements of spatial structure (as the laser scanner is) but rather computationally infers structure from implicit measurements (images). The main advantage of photogrammetry lies in the (relatively) low cost of the required equipment, though high-end optics provide images of higher definition and, consequently, more accurate reconstructions and texture representations. In general, photogrammetric reconstruction is less accurate than laser scanning, but it is particularly useful for photorealistic reconstructions and of practical usage in covering wide areas. Limitations of aerial photogrammetry due to occlusions can be compensated for with the addition of terrestrial views.
Handheld optical and inertial scanning, with real-time feedback: A high-end modality combines trinocular stereo with active illumination and inertial measurements from a sensitive IMU. For brevity, we henceforth call this type of device handheld scanners. The modality exhibits clear advantages over RGB-D scanning in terms of ease of manipulation and robustness. Moreover, real-time feedback on a lightweight companion device (i.e., a tablet computer) can significantly facilitate the acquisition process. The range of such devices is in the order of 8 m3 and their accuracy can be in the order of 2 mm. They are suitable for applications in which objects need to be rapidly scanned from various perspectives. Like any optical method, the modality depends on texture and exhibits limitations with shiny objects. The main disadvantage of pertinent devices is their high cost, which is in the order of 16–20 K Euros.

3.1.3. 3D Modeling

3D modeling becomes valuable in various situations where a context should be created that either represents an imaginary site or a site of no particular CH interest (for example, a factory environment). Furthermore, in some cases, the representation of physical context poses several limitations, due to the lack of sufficient data for 3D reconstruction algorithms or the existence of semi-transparent or luminous surfaces that are difficult to reconstruct. Thus, a purely synthetic or a hybrid approach can be followed. In the first case, the site is created by 3D modeling artists using their imagination or by examining reference photography of the actual site. In the second case, poor reconstructions are combined with 3D models that complement problematic areas of the context (e.g., areas non-visible due to occlusions, luminous surfaces, semi-transparent objects, etc.). Finally, 3D modeling can be selected when combining actual 3D reconstructions of objects in an imaginary context, such as, for example, recreating the workshop of a craft practitioner based on historic craft documentation and 3D scans of actual tools and machines available in a CHI’s collection.

3.2. Combination of 3D Assets

To combine 3D assets acquired from heterogeneous modalities, a software editor named ICombine3D is proposed. ICombine3D is a multi-purpose tool for 3D reconstruction experts to visualize, repair, edit, and measure 3D reconstructions of a scene or object. These functionalities are availed to the user through a simple graphical user interface. ICombine3D also provides integration of scanned and geolocated monuments and sites and is compatible with online Geographic Information System (GIS) systems. In addition, ICombine3D provides functionalities for combining 3D assets and editing the representation of the spatial and geographical context. Thus, central to its capabilities is the ability to import multiple 3D assets into the scene and bring them into the same reference system. In this way, multiple types of information can be attached to the same spatial reference.
In this work, ICombine3D is used to combine reconstructions from multiple scanning modalities. To do so, these modalities are registered in the same spatial frame through the editor. In this way, we provide multiple types of information on the same spatial reference, i.e., a site or an object. In the case of sites, we also use the GIS capabilities of our system to properly reference the provided digitizations and enable their integration by third-party systems. Furthermore, using the editor, we created simulated environments, e.g., craft workshop simulations. An example of combining 3D assets using the aforementioned editor is presented in Figure 4.
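Bringing heterogeneous assets into a common reference system amounts to applying a similarity transform (uniform scale, rotation, translation) to each asset's geometry. The following is a hypothetical sketch of that underlying operation, not ICombine3D's actual implementation; the function name and the restriction to rotation about the vertical axis are simplifying assumptions.

```python
import numpy as np

def to_reference_frame(points, scale, rotation_deg, translation):
    """Apply a similarity transform (uniform scale, rotation about the
    vertical axis, translation) to bring a scanned asset's point set
    into a common, georeferenced coordinate system."""
    theta = np.radians(rotation_deg)
    c, s = np.cos(theta), np.sin(theta)
    # Rotation about the y (up) axis.
    rot = np.array([[  c, 0.0,   s],
                    [0.0, 1.0, 0.0],
                    [ -s, 0.0,   c]])
    return scale * points @ rot.T + np.asarray(translation)
```

Once each asset's points share one frame, measurements and annotations attached to one digitization remain valid for the others.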

3.3. Contextualization

To represent contextualization information, we employ the Mingei Online Platform (MOP) [36], an online authoring system for the representation of (a) social and historic context through narratives and (b) processes through the modeling of process schemas.
To represent social and historical context we use stories, or narratives, that encode the events and concepts that are relevant to the presented CH site. Using MOP, authoring tools enable curators to author “Narratives”, their “Narrations”, and the “Presentations” of each “Narration”, through simple form-based user interfaces [46,47]. “Narration” authoring is facilitated by providing the means to define how a “Narration” will be presented to end-users. Besides forms, the authoring tools provide means to visualize the “Narrative” structure. Tools to present generic information are provided, such as timelines, related media previews, and comprehensive narrative Web pages. MOP provides facilities for the export of knowledge in multiple formats, to support reuse and sharing of information, as well as open access to documented knowledge. Furthermore, it enhances the documented information by establishing a linkage between MOP and other relevant publicly available CH repositories, such as Europeana.
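To make the export idea concrete, the following hypothetical sketch shows the kind of structured, machine-readable form in which a narrative, its location, and its linked digital assets could be serialized. The field names, coordinates, and file names are illustrative only and do not reproduce MOP's actual schema.

```python
import json

# Illustrative (not MOP's actual schema): a narrative linking events,
# a location entry, and the digital assets used by its presentations.
narrative = {
    "title": "Mastic cultivation in the villages of Chios",
    "location": {"name": "Chios", "geonames_id": None,
                 "coordinates": {"lat": 38.37, "lon": 26.05}},
    "events": [
        {"label": "Establishment of the medieval mastic villages",
         "period": {"from": "14th century", "to": "16th century"}},
    ],
    "narrations": [
        {"audience": "museum visitors",
         "presentation": "timeline",
         "assets": ["village_reconstruction.glb", "terrain_chios.obj"]},
    ],
}

# Export in a structured, reusable form for third-party presentations.
exported = json.dumps(narrative, indent=2)
```

Such an export keeps the semantic sources and the references to digital assets together, which is what allows multiple presentations to be derived from one authored narrative.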

3.4. Multi-Purpose Exploitation of the Representation

The rich and multi-level representation of knowledge achieved through the proposed methodology opens a wide range of possibilities for exploiting contextualized digital assets and information. From these, in this paper, we examine three use cases targeting education, entertainment, and training as presented in the following section.

4. Use Cases

The proposed approach was employed in the context of three use cases stemming from two of the pilot sites of the Mingei project [48]. The first regards the Chios mastic villages and was developed in collaboration with the Mastic Museum of Chios of the Piraeus Bank Group Cultural Foundation (PIOP). The second regards a glass workshop and the activities that take place in it, in collaboration with the Museum of Conservatoire National des Arts et Métiers (CNAM) and the glass workshop of Centre Européen De Recherches Et De Formation Aux Arts Verriers (CERFAV).

4.1. Acquisition of 3D Assets

Examples of the acquisition of digital assets documenting the spatial and geographical context, as part of the implementation of the use cases, are presented below. In the first use case, we explore the representation of spatial and geographical context using terrain building from satellite maps, as well as 3D reconstructions of villages and the Chios Mastic Museum. In the second case, 3D modeling is employed and the entire glass workshop is created from photographic documentation of the actual workshop.

4.1.1. Remote Sensing

For the generation of the Chios terrain, the heightmap of the island was obtained from [44]. This map was imported into a 3D engine, in the form of a 3D mesh. The process included editing resolution, terrain size, and maximum altitude, in conjunction with the elevation data and the size of the 3D reconstructions of villages and the museum. Then, the terrain was post-processed using two different methods.
In the first method, the terrain was hand “painted” in Unity to create a representation of the landscape morphology. The heightmap and the output of this process can be seen in Figure 5.
In the second case, the terrain was textured using satellite images to be as close as possible to a 3D representation of the actual island. In this case, built structures and elevation due to buildings, trees, etc., are omitted. The output of this process was a terrain structure that, although similar to that of Chios, can be experienced as a game terrain (see Figure 6).

4.1.2. Spatial and Geographical Context Representation Using Photogrammetry and Laser Scanning

For the reconstruction of the Chios villages, aerial images were acquired via an unmanned aerial vehicle (UAV). The subjects were large building complexes. The built structures were challenging, due to the complex, medieval city planning and fortification. We observe that building structures are reconstructed with fidelity and the narrow alleys, streets, and village squares are clearly outlined and allow the spatial planning of field operations (see Figure 7).
For the 3D reconstruction of the Mastic Museum, we addressed a similar spatial scale but, this time, an environment that combines a building structure and a rural environment. In this case, we demonstrate the combination of aerial and terrestrial imaging, using a handheld camera. In Figure 8 and Figure 9, an overview of the reconstruction is provided in individual RGB and depth views. We focus on the situation where the aerial scan poorly reconstructs the space below an eave. To improve the reconstruction, we added images acquired from a terrestrial view, under the eave, where the UAV may not fly. In Figure 10, the improvement in the reconstruction of that area is shown.

4.1.3. Spatial and Geographical Context Representation Using 3D Modeling

The workshop building was initially created in Unity based on a blueprint acquired from CERFAV (see Figure 11, left). Unfortunately, the scale was not included; therefore, the first draft of the building was created based on the proportions from the survey. To create a more realistic version of this pilot, Unity’s High Definition Render Pipeline (HDRP) was researched and used. Due to the increased rendering fidelity, the workshop building from the previous iteration had to be re-modeled to match the intended quality. The new model was also scale-corrected, since the initial model was based on an architectural survey which had no scale or reference for actual measurements. Moreover, re-modeling was taken a step further, to create space in the walls for windows, including the window frame and glass geometry. UV maps and materials were created for all of the building’s geometry. Configurations for exporting from Blender and importing to Unity were also defined for a straightforward result.
Afterward, the building was imported into a new scene in a Unity HDRP project. The environment was set up to use an HDRI Sky for the skybox and the ambient lighting. Fog was also added to give some density to the atmosphere and interact with the sunlight within the workshop.
Lighting was then set up. The lighting plan uses baked lightmaps for all static geometry, together with Mixed Lights, which provide real-time direct lighting while their indirect contribution is baked. For dynamic geometry, e.g., characters, Light Probes are used in the next iteration. Extra care was put into creating the glass material, which lets direct sunlight pass through. In a workshop, there is also artificial lighting, which was created by adding fluorescent tube lights on the ceiling, with emissive materials for the glow and Point Lights for the actual lighting influence. As mentioned before, all of these are Mixed Lights. Reflections were also added to the scene with the use of Reflection Probes.
For post-processing, exposure was added to emulate eye adaptation when adjusting to different lighting while navigating through the workshop; Ambient Occlusion, which regulates the brightness in areas where direct light cannot reach; Chromatic Aberration and Vignette, for a more cinematic effect; and, finally, Bloom, which accentuates light bleeding from the borders of brightly lit areas, overwhelming the camera or eye.
The machines of the workshop were created from scratch in Blender 3D using photographic documentation acquired during a visit to CERFAV in the context of ethnographic documentation activities of the glass-making craft. Examples of the documentation are presented in Figure 12. Examples of modeled machinery are presented in Figure 13.

4.2. Combination of 3D Assets

With the environment set up, there is enough ambient light to start placing the machines in the workshop. Existing machinery is placed approximately at the positions indicated by the photographic material, as shown in Figure 14.

4.3. Contextualization

The contextualization of spatial and geographical content is implemented using the MOP. When creating a “Location”, we use the GeoNames [49] database of locations to link the name with its GPS coordinates. Moreover, the 3D assets are uploaded to the platform and associated with the location entry. The data structure corresponding to the aforementioned process is shown in Figure 15.
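A place name can be resolved to coordinates through GeoNames' public search web service. The sketch below illustrates building such a request and parsing its response; the username is a placeholder for a registered GeoNames account, and the helper names are illustrative rather than part of MOP.

```python
from urllib.parse import urlencode

def geonames_search_url(place_name, username):
    """Build a request URL for the GeoNames search web service
    (JSON variant), used to resolve a place name to coordinates."""
    params = {"q": place_name, "maxRows": 1, "username": username}
    return "http://api.geonames.org/searchJSON?" + urlencode(params)

def extract_coordinates(response):
    """Pull latitude/longitude from the top result of a GeoNames
    search response (lat/lng are returned as strings)."""
    top = response["geonames"][0]
    return float(top["lat"]), float(top["lng"])
```

The returned coordinates can then be stored with the “Location” entry, alongside the uploaded 3D assets.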
Next, the narratives documenting the social and historical context are entered and associated with the location. Narratives also document the spatio-temporal dimensions relevant to the documented locations as shown in Figure 16. When the location entity is accessed, the MOP offers the corresponding digital assets and the associated narratives. In [47], further information on the narrative authoring process as implemented by MOP is presented.
The implementation of narratives is facilitated in the context of the multipurpose exploitation of the achieved representation. This is achieved by exporting a narration of the narrative, built with respect to the targeted presentation. Exported narrations contain all the semantic sources and the relevant digital assets in a structured form that can assist the implementation of various presentations. Currently, linking an exported narration with a presentation is a task performed by the development team of each presentation, by querying and extracting information. In the future, targeted platform-specific exports may further simplify the process of creating presentations.

4.4. Multipurpose Exploitation of the Representation

Having concluded the technical implementation of the proposed approach at two pilot sites of Mingei, this subsection presents examples of the multipurpose exploitation of the representation.

4.4.1. Airborne

Airborne is an immersive flight simulator that runs in a large 3D space consisting of three touch-enabled walls and supports people tracking and body-based interactions (see Figure 17). A total of six EPSON EB-696Ui projectors are used for the setup, mounted on the ceiling at 2.80 m above the floor. Each pair of projectors spans the total length of one wall and the majority of its height (~1.60 m) at a resolution of 3840 × 1200 px, creating an ultra-wide (11,520 × 1200 px) display for fully immersive visual output. Each projector is equipped with a touch sensor unit that enables touch interaction on the walls. Moreover, stereo speakers are integrated into each projector to allow audio playback.
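The display configuration above can be checked with back-of-the-envelope arithmetic, using only the values stated in the text:

```python
# Sanity check of the Airborne display configuration described above.
WALLS = 3
PROJECTORS_PER_WALL = 2
PAIR_RESOLUTION = (3840, 1200)   # pixels per projector pair (one wall)
PROJECTED_HEIGHT_M = 1.60        # approximate projected height on each wall

total_projectors = WALLS * PROJECTORS_PER_WALL     # 6 units in total
total_width_px = WALLS * PAIR_RESOLUTION[0]        # 11,520 px across the three walls
pixel_pitch_mm = PROJECTED_HEIGHT_M / PAIR_RESOLUTION[1] * 1000  # ~1.33 mm per pixel

print(total_projectors, total_width_px, round(pixel_pitch_mm, 2))
```

The resulting pitch of roughly 1.3 mm per pixel indicates the level of detail available to users standing close to the walls.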
For Airborne, we used the Unity 3D package provided by [50] to support rapid prototyping of X-reality applications for interactive 3D spaces. From this package we used the localization service, which tracks the user’s position within the room; the synchronization service, which synchronizes objects across all the PCs used; and the touch interaction component, which tracks user input across all the walls.
In Airborne, users can fly over various mastic villages on Chios Island (see Figure 18). During the flyover, users can stop at each village and retrieve multimedia and text information related to it. When a user enters the room and approaches the central wall, they start flying towards the first village. Upon reaching the designated checkpoint, the camera rotates to present the village from a convenient angle, giving the user a better view. In addition, a popup window appears presenting the related information, with which the user can interact (scroll through text, navigate the image/video viewer) by touching the wall.
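The flyover behavior alternates between flying legs and checkpoint stops. A minimal sketch of this progression, assuming a simple sequential route (the village names and coordinates are illustrative, and this is not the actual Unity implementation):

```python
from dataclasses import dataclass

@dataclass
class Village:
    name: str
    position: tuple[float, float, float]  # world coordinates (illustrative)

def flyover(villages):
    """Yield the flyover states for a sequential route: a 'flying' leg towards
    each village, then a 'checkpoint' stop where the camera reorients and the
    info popup would be shown. Arrival detection (e.g., by distance to the
    checkpoint falling below a threshold) is elided in this sketch."""
    for village in villages:
        yield ("flying", village.name)      # camera moves towards the checkpoint
        yield ("checkpoint", village.name)  # camera rotates, popup appears

route = [Village("Mesta", (0.0, 0.0, 0.0)), Village("Olympoi", (120.0, 0.0, 40.0))]
states = list(flyover(route))
```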
Airborne has also been implemented as a standalone version accessed through a standard PC setup, as shown in Figure 19.

4.4.2. The Chios Exploration Game

The game provides information regarding the historical period of the medieval occupation of Chios, and more specifically the socio-historic context of mastic cultivation and the creation of the first settlements that resulted in the formation of the so-called mastic villages [51]. The game is based on third-person action principles (a third-person game is one in which the player character is visible on-screen during gameplay [52]) and on navigation in an open 3D environment, where the only restrictions are the sea around the island and limitations related to the angle of the terrain surface.
Gameplay implementation was based on “The Explorer: 3D Game Kit”, a collection of mechanics, tools, systems, and assets for third-person action games [53]. Several components of the kit were reused, including character animation and rendering, terrain navigation through teleports (see Figure 20, left), and area unlocking. The implementation started with the formulation of the main game environment and the routes within it, and then moved to the 3D modeling of specific areas of the landscape. These areas were created with imaginary scenery built from assets of the Unity3D Asset Store [54] and reconstructions of (a) mastic villages, (b) rural sceneries, (c) mastic trees, and (d) mastic tools. For the 3D reconstructions, level-of-detail (LOD) post-processing [55] was performed to ensure that game assets do not impose excessive graphics processing unit (GPU) rendering requirements.
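The idea behind LOD post-processing is to swap in progressively decimated versions of a mesh as the camera moves away from it, so that distant reconstructions cost little GPU time. A minimal sketch of distance-based LOD selection (the threshold values are illustrative, not those used in the game):

```python
def select_lod(distance: float, thresholds=(20.0, 60.0, 150.0)) -> int:
    """Pick a level-of-detail index from the camera distance: LOD 0 is the
    full-resolution mesh, and higher indices are progressively decimated
    versions produced during post-processing."""
    for lod, limit in enumerate(thresholds):
        if distance < limit:
            return lod
    return len(thresholds)  # beyond the last threshold: lowest detail

# A reconstructed asset viewed from various camera distances
lods = [select_lod(d) for d in (5, 45, 100, 400)]  # → [0, 1, 2, 3]
```

In Unity, the engine used here, this switching is typically handled by attaching the decimated meshes to an LOD Group component rather than by hand-written selection code.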
For demonstration purposes, a scenario was created and recorded following the path of a user from the start of the game until the discovery of two main points of interest: (a) the mastic field and (b) the village of Olympoi, to experience its architecture. A video demonstration can be found online at https://youtu.be/fsEgKhMydJw (accessed on 10 October 2021), while screenshots from the gameplay are presented in Figure 20.

4.4.3. Glass Workshop Walkthrough

The final use case regards the presentation of the glass workshop. It was implemented by authoring a camera path animation within the workshop that covers its essential parts for educational purposes. The application pauses at certain locations within the workshop to present the tools and machines used in glassblowing. The output is a rendering of the workshop that presents tools and machines following the choreography of the workshop (see Figure 21). This will be further enhanced in the future by introducing avatar-based animations and tool-usage visualization to present the creation of a glass object within the workshop. For this purpose, we intend to build on research findings on the manipulation of tools and machines [56,57].
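A camera path that pauses at points of interest can be modeled as piecewise-linear motion between waypoints with a dwell time at each stop. The sketch below illustrates this idea under assumed timings and waypoint coordinates; it is not the actual Unity implementation, which would typically use the engine's animation curves:

```python
def camera_position(t, waypoints, travel=4.0, pause=3.0):
    """Position at time t of a camera that moves linearly between consecutive
    waypoints (`travel` seconds per leg) and pauses for `pause` seconds at
    each stop to present the tools and machines located there."""
    segment = travel + pause
    leg = int(t // segment)
    if leg >= len(waypoints) - 1:
        return waypoints[-1]              # path finished: hold the final pose
    local = t - leg * segment
    if local >= travel:                   # dwelling at the next waypoint
        return waypoints[leg + 1]
    a, b = waypoints[leg], waypoints[leg + 1]
    s = local / travel                    # normalized progress along this leg
    return tuple(ai + s * (bi - ai) for ai, bi in zip(a, b))

# Illustrative stops at eye height, e.g. near the furnace and the bench
stops = [(0.0, 1.6, 0.0), (4.0, 1.6, 2.0), (4.0, 1.6, 6.0)]
mid_leg = camera_position(2.0, stops)   # halfway along the first leg
dwelling = camera_position(5.0, stops)  # paused at the second stop
```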

5. Conclusions and Future Work

In this work, an approach to the representation and presentation of spatial and geographical context is presented. The concept put forward is the association of digital assets with semantic representations of their social and historical context. A range of demonstrations illustrates the potential of the proposed approach for heritage interpretation, which aims at providing novel ways of communicating information to visitors of CH sites. To this end, the proposed approach builds on the representation of spatial and geographical context and its binding with socio-historic information through the formulation of narratives. Thus, the main contribution of the proposed approach can be summarized as the integration of several state-of-the-art technologies for the representation of spatial and geographic context under a methodology that enables their combination with the semantic representation of CH places and sites. The outcomes are rich immersive presentations that enhance the value and meaning of the acquired representation. The proposed methodology is both reusable and extendable, as it is provided in the form of a step-by-step approach with appropriate technical tools for each step. Its genericity is supported by commercially available technologies integrated with the methodology, which can enhance the productivity of the implementation team through existing knowledge and expertise.
Concerning future improvements of the presented methodology, several directions are to be followed. Initially, the methodology itself could benefit from a digital step-by-step guide that would bridge the different technologies employed and simplify the implementation of new solutions. Furthermore, effort is required to automate parts of the process that are currently performed manually. For example, the representation of context in MOP is very powerful but requires considerable data-curation effort. Additional improvements concern format conversions; in particular, an intermediate data schema and export facilities would help prepare and export content from MOP to the subsequent implementation platforms. A significant addition, for example, would be exporting content to WebGL- or WebXR-compliant formats, which would simplify the task of implementing immersive presentations. Finally, concerning the replication of the methodology by others, there is always a need to improve the automation level of each step and to reduce the number of tools and third-party software needed to achieve the desired results.

Author Contributions

Conceptualization, X.Z., N.P. (Nikolaos Partarakis), D.K., E.T., A.C., C.R. and A.L.C.; data curation, D.K.; funding acquisition, N.P. (Nikolaos Partarakis) and X.Z.; investigation, D.K., A.D. and A.C.; methodology, N.P. (Nikolaos Patsiouras), N.P. (Nikolaos Partarakis) and X.Z.; project administration, N.P. (Nikolaos Partarakis) and X.Z.; resources, A.D., A.C. and D.K.; software, N.P. (Nikolaos Patsiouras), N.P. (Nikolaos Partarakis), A.C.; supervision, N.P. (Nikolaos Partarakis), X.Z. and E.Z.; validation, D.K., A.D. and A.C.; visualization, N.P. (Nikolaos Partarakis), N.P. (Nikolaos Patsiouras), E.M. and X.Z.; writing—original draft, N.P. (Nikolaos Partarakis) and X.Z.; writing—review and editing, N.P. (Nikolaos Partarakis), X.Z., D.K., E.T., C.R., A.D. and A.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been conducted in the context of the Mingei project that has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No 822336.

Data Availability Statement

The data are available upon request.

Acknowledgments

The authors would like to thank the Chios Mastic Museum and Cerfav (Centre européen de recherches et de formation aux arts verriers) for their contribution to this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. ICOMOS. Ename Charter—The Charter for the Interpretation and Presentation of Cultural Heritage Sites. In Proceedings of the ICOMOS 16th General Assembly, Québec, QC, Canada, 29 September–4 October 2008. [Google Scholar]
  2. Clark, S. Young. Keynote Speech. Cultural Mapping Symposium and Workshop, Australia. 1995. Available online: https://bangkok.unesco.org/content/cultural-mapping (accessed on 10 October 2021).
  3. Petrescu, F. The Use of GIS Technology in Cultural Heritage. In Proceedings of the XXI International Symposium CIPA 2007, Athens, Greece, 1–6 October 2007.
  4. Australia ICOMOS. The Burra Charter: The Australia ICOMOS Charter for Places of Cultural Significance. Available online: http://openarchive.icomos.org/id/eprint/2145/ (accessed on 16 January 2021).
  5. UNESCO; ICCROM; ICOMOS; IUCN. Managing Cultural World Heritage; United Nations Educational, Scientific and Cultural Organization: Paris, France, 2013. [Google Scholar]
  6. CEMAT. European Rural Heritage Observation Guide—CEMAT. 2003. Available online: https://rm.coe.int/16806f7cc2 (accessed on 22 July 2021).
  7. Donkin, L. Crafts and Conservation: Synthesis Report for ICCROM. 2001. Available online: https://www.iccrom.org/publication/crafts-and-conservation-synthesis-report-iccrom (accessed on 20 September 2020).
  8. Barber, D.M.; Dallas, R.W.; Mills, J.P. Laser scanning for architectural conservation. J. Archit. Conserv. 2006, 12, 35–52. [Google Scholar] [CrossRef]
  9. Song, L.; Li, X.; Yang, Y.-g.; Zhu, X.; Guo, Q.; Liu, H. Structured-Light Based 3D Reconstruction System for Cultural Relic Packaging. Sensors 2018, 18, 2981. [Google Scholar] [CrossRef] [PubMed][Green Version]
  10. Rodríguez-Gonzálvez, P.; Muñoz-Nieto, A.L.; del Pozo, S.; Sanchez-Aparicio, L.J.; Gonzalez-Aguilera, D.; Micoli, L.; Barsanti, S.G.; Guidi, G.; Mills, J.; Fieber, K.; et al. 4D Reconstruction and Visualization of Cultural Heritage: Analyzing Our Legacy through Time. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, XLII-2/W3, 609–616. [Google Scholar] [CrossRef][Green Version]
  11. Pix4d. Available online: https://www.pix4d.com/ (accessed on 20 September 2021).
  12. Faro Focus. Available online: https://www.faro.com/en/Products/Hardware/Focus-Laser-Scanners (accessed on 16 September 2021).
  13. Corns, A. 3D-ICONS: D7.3-Guidelines and Case Studies (Final). Zenodo 2013, 59–68. [Google Scholar] [CrossRef]
  14. Parys, R.; Schilling, A. Incremental large-scale 3D reconstruction. In Proceedings of the IEEE International Conference on 3D Imaging, Modeling, Processing, Visualization & Transmission, Zurich, Switzerland, 13–15 October 2012; pp. 416–423. [Google Scholar]
  15. Statham, N. Use of photogrammetry in video games: A historical overview. Games Cult. 2020, 15, 289–307. [Google Scholar] [CrossRef]
  16. Ryckman, M. Exploring the Graffiti of The Division. Ubiblog. 2016. Available online: http://blog.ubi.com/exploring-the-graffiti-of-the-division-interview-with-amr-din (accessed on 16 January 2021).
  17. Steinman, G. Far Cry 4—Vice Dev Diary & Quest for Everest. UbiBlog. 2014. Available online: http://blog.ubi.com/far-cry-4-vice-developer-diary-quest-for-everest (accessed on 16 January 2021).
  18. Starwars, E.A. How We Used Photogrammetry to Capture Every Last Detail for Star Wars Battlefront. StarWars EA. 2015. Available online: http://starwars.ea.com/starwars/battlefront/news/how-we-used-photogrammetry (accessed on 10 September 2021).
  19. Brown, K.; Hamilton, A. Photogrammetry and “Star Wars Battlefront”. In Proceedings of the GDC 2016: Game Developer Conference, San Francisco, CA, USA, 14–16 March 2016. [Google Scholar]
  20. Azzam, J. Porting A Real-Life Castle into Your Game When You’re Broke. GDC. 2017. Available online: http://www.gdcvault.com/play/1023997/Porting-a-Real-Life-Castle (accessed on 16 January 2021).
  21. Bishop, L.; Chris, C.; Michal, J. Photogrammetry for Games: Art, Technology and Pipeline Integration for Amazing Worlds; GDC: San Francisco, CA, USA, 2017. [Google Scholar]
  22. Photomodeler Technologies, How Is Photogrammetry Used in Video Games? 2020. Available online: https://www.photomodeler.com/how-is-photogrammetry-used-in-video-games/ (accessed on 20 September 2021).
  23. Maximov, A. Future of Art Production in Games. In Proceedings of the GDC 2017: Game Developer Conference, San Francisco: UBM Tech, San Francisco, CA, USA, 2–6 March 2017. [Google Scholar]
  24. Tscheu, F.; Buhalis, D. Augmented reality at cultural heritage sites. In Information and Communication Technologies in Tourism; Springer: Cham, Switzerland, 2016; pp. 607–619. [Google Scholar]
  25. Partarakis, N.; Zabulis, X.; Foukarakis, M.; Moutsaki, M.; Zidianakis, E.; Patakos, A.; Adami, I.; Kaplanidi, D.; Ringas, C.; Tasiopoulou, E. Supporting Sign Language Narrations in the Museum. Heritage 2021, 5, 1. [Google Scholar] [CrossRef]
  26. Partarakis, N.; Klironomos, I.; Antona, M.; Margetis, G.; Grammenos, D.; Stephanidis, C. Accessibility of cultural heritage exhibits. In International Conference on Universal Access in Human-Computer Interaction; Springer: Cham, Switzerland, 2016; pp. 444–455. [Google Scholar]
  27. Doulgeraki, C.; Partarakis, N.; Mourouzis, A.; Stephanidis, C. A development toolkit for unified web-based user interfaces. In International Conference on Computers for Handicapped Persons; Springer: Berlin/Heidelberg, Germany, 2008; pp. 346–353. [Google Scholar]
  28. Wang, H.; Zou, X.; Liu, C.; Liu, T.; Chen, J. Study on a location method for bio-objects in virtual environment based on neural network and fuzzy reasoning. In International Conference on Intelligent Robotics and Applications; Springer: Berlin/Heidelberg, Germany, 2009; pp. 1004–1012. [Google Scholar]
  29. Wang, H.; Zou, X.; Liu, C.; Lu, J.; Liu, T. Study on behavior simulation for picking manipulator in virtual environment based on binocular stereo vision. In Proceedings of the 2008 Asia Simulation Conference-7th International Conference on System Simulation and Scientific Computing, Beijing, China, 10–12 October 2008; pp. 27–31. [Google Scholar]
  30. Partarakis, N.; Antona, M.; Stephanidis, C. Adaptable, personalizable and multi user museum exhibits. In Curating the Digital; Springer: Cham, Switzerland, 2016; pp. 167–179. [Google Scholar]
  31. Zidianakis, E.; Partarakis, N.; Antona, M.; Stephanidis, C. Building a sensory infrastructure to support interaction and monitoring in ambient intelligence environments. In International Conference on Distributed, Ambient, and Pervasive Interactions; Springer: Cham, Switzerland, 2014; pp. 519–529. [Google Scholar]
  32. Zidianakis, E.; Partarakis, N.; Ntoa, S.; Dimopoulos, A.; Kopidaki, S.; Ntagianta, A.; Ntafotis, E.; Xhako, A.; Pervolarakis, Z.; Kontaki, E.; et al. The Invisible Museum: A User-Centric Platform for Creating Virtual 3D Exhibitions with VR Support. Electronics 2021, 10, 363. [Google Scholar] [CrossRef]
  33. Partarakis, N.; Zabulis, X.; Antona, M.; Stephanidis, C. Transforming Heritage Crafts to Engaging Digital Experiences; Springer: Cham, Switzerland, 2020. [Google Scholar]
  34. Fast-Berglund, Å.; Gong, L.; Li, D. Testing and validating Extended Reality (xR) technologies in manufacturing. Procedia Manuf. 2018, 25, 31–38. [Google Scholar] [CrossRef]
  35. Margetis, G.; Papagiannakis, G.; Stephanidis, C. Realistic Natural Interaction with Virtual Statues in X-Reality Environments. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci 2019, XLII-2/W11, 801–808. [Google Scholar] [CrossRef][Green Version]
  36. Mingei Online Platform (MOP). Available online: http://mop.mingei-project.eu/ (accessed on 20 September 2021).
  37. Zabulis, X.; Meghini, C.; Partarakis, N.; Kaplanidi, D.; Doulgeraki, P.; Karuzaki, E.; Stefanidi, E.; Evdemon, T.; Metilli, D.; Bartalesi, V.; et al. What is needed to digitise knowledge on Heritage Crafts? Mem. Rev. 2019. [Google Scholar]
  38. Zabulis, X.; Meghini, C.; Partarakis, N.; Beisswenger, C.; Dubois, A.; Fasoula, M.; Galanakis, G. Representation and preservation of Heritage Crafts. Sustainability 2020, 12, 1461. [Google Scholar] [CrossRef][Green Version]
  39. Li, Z.; Zhu, C.; Gold, C. Digital Terrain Modeling: Principles and Methodology; CRC press: Boca Raton, FL, USA, 2004. [Google Scholar]
  40. Capaldo, P.; Nascetti, A.; Porfiri, M.; Pieralice, F.; Fratarcangeli, F.; Crespi, M.; Toutin, T. Evaluation and comparison of different radargrammetric approaches for Digital Surface Models generation from COSMO-SkyMed, TerraSAR-X, RADARSAT-2 imagery: Analysis of Beauport (Canada) test site. ISPRS J. Photogramm. Remote Sens. 2015, 100, 60–70. [Google Scholar] [CrossRef]
  41. Pepe, M.; Prezioso, G. Two Approaches for Dense DSM Generation from Aerial Digital Oblique Camera System. In GISTAM; SCITEPRESS—Science and Technology Publications Ida: Setúbal, Portugal, 2016; pp. 63–70. [Google Scholar]
  42. Hu, J.; You, S.; Neumann, U. Approaches to large-scale urban modeling. IEEE Comput. Graph. Appl. 2003, 23, 62–69. [Google Scholar]
  43. Heightmap. Available online: https://en.wikipedia.org/wiki/Heightmap (accessed on 10 October 2021).
  44. Tangram Heightmapper. Available online: https://tangrams.github.io/heightmapper/ (accessed on 16 January 2020).
  45. Agrawal, A.; Radhakrishna, M.; Joshi, R.C. Geometry-Based Mapping and Rendering of Vector Data over LOD Phototextured 3D Terrain Models; Skala-UNION Agency: Plzen, Czech Republic, 2006. [Google Scholar]
  46. Partarakis, N.; Kaplanidi, D.; Doulgeraki, P.; Karuzaki, E.; Petraki, A.; Metilli, D.; Bartalesi, V.; Adami, I.; Meghini, C.; Zabulis, X. Representation and Presentation of Culinary Tradition as Cultural Heritage. Heritage 2021, 4, 612–640. [Google Scholar] [CrossRef]
  47. Partarakis, N.; Doulgeraki, P.; Karuzaki, E.; Adami, I.; Ntoa, S.; Metilli, D.; Bartalesi, V.; Meghini, C.; Marketakis, Y.; Kaplanidi, D.; et al. Representation of Socio-historical Context to Support the Authoring and Presentation of Multimodal Narratives: The Mingei Online Platform. J. Comput. Cult. Heritage 2021, 15, 1–26. [Google Scholar] [CrossRef]
  48. Mingei Project’s Website. Available online: https://www.mingei-project.eu/ (accessed on 10 May 2021).
  49. GeoNames. Available online: http://www.geonames.org/ (accessed on 16 January 2021).
  50. Zidianakis, E.; Chatziantoniou, A.; Dimopoulos, A.; Galanakis, G.; Michelakis, A.; Neroutsou, V.; Ntoa, S.; Paparoulis, S.; Antona, M.; Stephanidis, C. A Technological Framework for Rapid Prototyping of X-reality Applications for Interactive 3D Spaces. In Advances in Intelligent Systems and Computing; Springer: Singapore, 2021; pp. 99–106. [Google Scholar]
  51. Partarakis, N.; Patsiouras, N.; Evdemon, T.; Doulgeraki, P.; Karuzaki, E.; Stefanidi, E.; Ntoa, S.; Meghini, C.; Kaplanidi, D.; Fasoula, M.; et al. Enhancing the Educational Value of Tangible and Intangible Dimensions of Traditional Crafts Through Role-Play Gaming. In International Conference on ArtsIT, Interactivity and Game Creation; Springer: Cham, Switzerland, 2020; pp. 243–254. [Google Scholar]
  52. Know Your Genres: Third-Person Shooters—Xbox Wire. Available online: News.xbox.com (accessed on 17 July 2021).
  53. The Explorer: 3D Game Kit. Available online: https://learn.unity.com/project/3d-game-kit (accessed on 16 January 2020).
  54. Unity3D Asset Store. Available online: https://assetstore.unity.com/ (accessed on 16 January 2020).
  55. Luebke, D.; Reddy, M.; Cohen, J.D.; Varshney, A.; Watson, B.; Huebner, R. Level of Detail for 3D Graphics; Morgan Kaufmann: Burlington, MA, USA, 2003. [Google Scholar]
  56. Stefanidi, E.; Partarakis, N.; Zabulis, X.; Papagiannakis, G. An approach for the visualization of crafts and machine usage in virtual environments. In Proceedings of the 13th International Conference on Advances in Computer-Human Interactions, Valencia, Spain, 21–25 November 2020; pp. 21–25. [Google Scholar]
  57. Stefanidi, E.; Partarakis, N.; Zabulis, X.; Zikas, P.; Papagiannakis, G.; Thalmann, N.M. TooltY: An approach for the combination of motion capture and 3D reconstruction to present tool usage in 3D environments. In Intelligent Scene Modeling and Human-Computer Interaction; Springer: Cham, Switzerland, 2021; pp. 165–180. [Google Scholar]
Figure 1. Illustration of the proposed approach.
Figure 2. 3D digitization modalities.
Figure 3. 3D scanning modalities and use cases in CH.
Figure 4. Combining 3D reconstructions to create a synthetic environment using ICombine3D editor.
Figure 5. Source heightmap and the final painted terrain.
Figure 6. Terrain texturing using satellite images.
Figure 7. Village reconstruction.
Figure 8. Rural building and mastic garden (RGB views).
Figure 9. Rural space and building (depth view).
Figure 10. (Left): aerial reconstruction. (Right): a combination of aerial and terrestrial views.
Figure 11. (Left): Blueprint survey plan of the glass workshop. (Right): 3D model.
Figure 12. Documentation on the arrangement of machinery items at the CERFAV workshop.
Figure 13. 3D modeling of glass workshop machines.
Figure 14. Placement of machinery in the glass workshop.
Figure 15. Authoring location-based narrative in MOP.
Figure 16. Authoring location-based narrative in MOP.
Figure 17. An immersive environment for XR applications.
Figure 18. Chios airborne.
Figure 19. Chios airborne—standalone version.
Figure 20. Setting transition points through teleports (left), mini video from the main game plot (right).
Figure 21. Presentation instances of the glass workshop application for educational purposes.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Partarakis, N.; Zabulis, X.; Patsiouras, N.; Chatjiantoniou, A.; Zidianakis, E.; Mantinaki, E.; Kaplanidi, D.; Ringas, C.; Tasiopoulou, E.; Dubois, A.; Carre, A.L. Multi-Scale Presentation of Spatial Context for Cultural Heritage Applications. Electronics 2022, 11, 195. https://doi.org/10.3390/electronics11020195
