Article

A Novel Immersive VR Game Model for Recontextualization in Virtual Environments: The μVRModel

Italian National Research Council, Institute of Technologies Applied to Cultural Heritage, Via Salaria Km 29,300 Monterotondo St., 00015 Rome, Italy
* Author to whom correspondence should be addressed.
Multimodal Technol. Interact. 2018, 2(2), 20; https://doi.org/10.3390/mti2020020
Submission received: 28 March 2018 / Revised: 24 April 2018 / Accepted: 24 April 2018 / Published: 27 April 2018
(This article belongs to the Special Issue Digital Cultural Heritage)

Abstract

In recent years, immersive VR has seen a great boost in terms of adoption and research perspectives, especially those regarding the serious gaming universe. Within the cultural heritage field, the virtual re-contextualization of items is a crucial task to be accomplished by individuals in order to understand a 3D reconstructed environment as a whole and to assign a meaning and a value to a specific cultural object. Immersive VR and consumer HMDs still present several issues related to motion sickness and locomotion: interest in real-walking techniques, which outperform other locomotion methods, is growing year by year, although they are limited by physical constraints, higher costs, or current technology. In this work, we propose a novel game model (μVR) that combines real-walking techniques and an adaptive, game-driven, multi-scale progression to craft immersive re-contextualization applications. The presented model aims to minimize motion sickness while fully exploiting the physical tracked area and augmenting the understanding of what the user is experiencing at different world scales. We define and formalize the μVR model and its components mathematically for the sake of reproducibility, and then present results from a pilot test designed to validate the model on real users. The results confirm the usability and effectiveness of the μVR model, even if further implementation work is needed.

1. Introduction

Our perception of the world is formed in the first phase of our growth, in our childhood, by means of experiences made primarily within the family and then within the surrounding community [1]. Nevertheless, this paradigm of knowing the world does not end in this early stage; it continues throughout our lives, putting us in front of different and always-changing challenges which make us aware of objects and events and conscious of relations and behaviors. One of the major issues in the field of cognition (and experience-making) is that we are not always able to re-create in our mind the vivid sequence of events just lived. This happens because, in some cases, we do not have enough comparable experience to refer to, and thus cannot align what we have just heard or viewed with our previous background. We somehow miss the mental cognitive paradigms to understand, and then explain, what we have just experienced. This usually happens when we face something new or far from our daily life, which is not easy for our mind to comprehend and, therefore, difficult to include in our matrix of life’s connections [2,3].
In this scenario, the role of digital technologies can be extremely important: they can accompany users in the process of meaning-making, with a pluralism of communication strategies and cognitive expedients, supporting users’ memorization and elaboration processes. The fruition of the past can be greatly enhanced and supported by digital technologies so as to provide a reliable and tangible experience. The latter emerges from the integration of various aspects that connote an individual: a person’s primitive senses, the ability to perceive the surrounding world, the actions one is asked to perform in a specific environment, the motivations that lead one to understand what to perform, and the cognition processes involved (influenced by feelings and relations). A meaningful experience arises “from the integration of perception, action, motivation, and cognition into an inseparable, meaningful whole. It is a story, emerging from the dialogue of a person with her or his world through action” [4]. Only a meaningful experience can guarantee a concrete moment of understanding, because direct actions, feelings, and learning are strictly connected [5,6] in what can be called ‘experiential learning’ [7].
In order to live such an intense experience, technologies need to be “invisible” to users [8]: users must not perceive the digital application as separate from the cultural good, nor as a substitute for it. To pursue such an objective, we need to employ usual communicative and mental strategies such as (digital) storytelling and dramatization. From a cognitive point of view, this is possible by working on users’ perception: the latter refers to a process of significance that our mind activates when facing something that needs to be absorbed, interiorized, and understood, be it an immersive virtual scenario explored through head-mounted displays (HMDs) or immersive projections. Perception, after all, means an awareness of something, whether it is a vision or a sensation [9]. From a technical point of view, instead, this is not always easy to fulfill because of usability constraints, the quality and quantity of content delivery, and exploration mechanisms (game-like, free exploration, storytelling-driven, etc.). The typical features of storytelling and dramatization (such as sounds, visual imagery, and gestures) are indeed tied together by the use of digital technology, especially virtual reality (VR), but this task is difficult to accomplish because it always requires a balance between such aspects. Furthermore, interactivity plays a major role in the process of meaning-making, and it is even more important if we operate within the cultural heritage field. We could finally answer questions such as “How easy is it to imagine how Rome would have been under the age of Augustus, or what the Sforzesco Castle in Milan was like in the Medieval period?”, “How would the columns of the Pantheon in Rome have looked in the past?”, or “How would the Byzantine trulla of Athens have appeared in its period of maximum splendor?”. VR is surely a technique, but also a strategy, which helps answer such questions by introducing the possibility of feeling deeply immersed in an altered world, sharing emotions and sensations.
The main contribution of this work is thus to present a novel game model (μVR): it allows the crafting of immersive VR applications targeting re-contextualization activities, which ask users to relocate virtual cultural objects to their former locations. The proposed model employs real-walking locomotion techniques combined with an adaptive multi-scale approach, where users deal with their own proportions in relation to the surrounding 3D environment. We first describe in Section 2 related works on re-contextualization applications and space-related issues in immersive VR locomotion. Section 3 then presents and formalizes the μVR model, introducing all components and actors involved, including the definitions of unsolved volumes, reachability, solvability, the μ operator, and their motivations. In Section 4, we introduce a case study and the related implementation of the model on modern game engines. In Section 5, we test and validate the model on real users. The Conclusions close the contribution, together with further developments along this research line.

2. Related Work

2.1. Re-Contextualization in Cultural Heritage Domain

The development of 3D modeling and VR opened boundless possibilities in the field of computer graphics applied to cultural heritage and museum exhibits [10]. In particular, in the field of museum collections, the combination of digitised museum objects with virtual reconstructions made it possible to improve the sense of immersion and the comprehension of real artifacts [11]. How? By means of a virtual contextualization strategy: relocating museum objects in their archaeological sites allows users to see them re-contextualized, so as to understand their original position, scale, and function, thus correlating them with the cultural identities to which they refer [12,13]. As the potential of contextualizing museum collections using VR became obvious, new possibilities opened up, especially in the field of museum applications [14,15,16]. The latter started being designed to engage visitors and disseminate cultural content through narrative and interactive paradigms. Thus, many reconstruction projects of ancient sites focused on re-contextualization or virtual anastylosis through VR came to light in the last decade. An example is Etruscanning 3D (Figure 1a), a European project from 2012 that takes advantage of visualization techniques and VR to design a virtual museum of the Etruscan Regolini Galassi tomb in Cerveteri. The application presents digitally restored replicas of the original museum collection (today at the Vatican Museums) re-contextualized inside a virtual reconstruction of the tomb. It allows visitors to explore the scenario and its contents [17].
“Marta Racconta” is a 2013 project which aims at making the ancient monuments of Greek Taras accessible in 3D (Figure 2). It uses a ‘natural’ interface navigation system, allowing visitors to explore the reconstructed architectures and interact with the related grave [18].
The Tangible Geographical Interface by Gagarin (2013) [19] shows a way to contextualize, in this case, the lives and behaviors of animals and humans on an Earth map. In this multi-user installation, visitors gather in front of a geographical map showing the Jostedal Glacier National Park in Norway. They can select different “information-pucks” to work with and, by placing them onto a station, various pieces of information appear on the map. By turning the pucks, users navigate within a given subject and/or play animations illustrating a certain story. Here, the tangible paradigm supports the relocation process. Another example is the gesture-based application “Admotum” (Figure 1b), designed for the “Keys to Rome” exhibition (keys2rome.eu). Here, interactive 3D technology is used to create connections and parallelisms among virtual replicas of real artifacts and their contexts of provenance, contextualized in their virtual environments [20]. The above-mentioned projects were developed to research natural interaction (NI) approaches using infrared sensors such as the Kinect. The paradigm of these applications relies upon treasure-hunt game mechanics, where users have to find the cultural objects and locate them in their original position to get additional information. By ‘original position’ we mean the exact location of such an object (a) in a fixed architecture and (b) in its time period. It highlights not only the architectural model which includes such an object, but also its referable function and, in some cases, its owners. If we speak about the small head of Chrysippus (Figure 3a), exhibited at the Imperial Fora Museum in Rome during the “Keys to Rome” exhibition, we want visitors firstly to understand what they are admiring; secondly, to understand the function of such a small museum object, and then to be able to contextualize it in its original location. By means of several digital applications, such as “Admotum”, and 3D reconstructions placed along the museum visit path, visitors are finally able to grasp the historical importance and the “scale” of the Chrysippus head within the Imperial Fora context: it was an ancient place card of about 15 cm portraying the philosopher Chrysippus. It was originally located in the library of the Forum Pacis in Rome; it was set on a small base and served as a marker for the section of the library devoted to Chrysippus’ philosophy (Figure 3b).
From a cognitive point of view, all these aspects (the dimension, the weight, the material, the location, and so on) are fundamental to getting to know that specific museum object; they indeed serve the process of sense-making which is activated by the user upon visiting the Imperial Fora Museum and arriving in front of the Chrysippus. At the same time, from a technical point of view, the virtual restoration helps the user ‘see’ how the object originally was, as does the reconstruction of the whole Roman environment; finally, the re-contextualization, accessible by means of a game-like mechanism, makes the user aware of the function and the meaning related to the Chrysippus head.
In 2015, the British Museum became one of the first museums to incorporate VR technology (via HMDs) into its learning programme [21]. Users are able to explore a virtual reconstruction of a Bronze Age site while looking at 3D scans of objects placed in their original setting: the positive feedback and enthusiasm received from visitors confirmed the impact VR has on learning about a museum’s collection. The Regium@Lepidi 2200 project of 2015 [22,23] also presents a possible virtual re-contextualization of museum objects within the ancient Roman city of Reggio Emilia, Italy. The goal is to virtually recreate the Emilian city at the moment of its maximum extension and splendor in the first century B.C. Wearing the Oculus Rift, visitors can explore the Forum, discover the monuments through the eyes of its ancient inhabitants, and take the flight of a dove to admire ancient Regium Lepidi in an overview from above (Figure 4a,b). Here, the re-contextualization process involves the relocation of entire buildings, not simply architectural structures or artistic details.
People reacted enthusiastically to the immersive VR experience offered by the recent Ullastret project of 2017 [24]: the flight over an Iberian town and the virtual objects, which visitors can later see in the museum display cabinets, have a strong impact on visitors. The combination of HMD fruition and re-contextualization can recreate ancient virtual environments and build vivid experiences, strongly impacting the sense of presence of the final user.

2.2. Immersive VR: A Matter of Space

In recent years, immersive VR has seen a great boost in terms of adoption and research perspectives, especially those regarding the serious gaming universe. Most user experiences designed for VR head-mounted displays exclude or limit the ability to walk ‘into’ the virtual environment. This aspect generates relevant issues in the user’s perception and understanding. In general, the main reason for such a decision is that both developers and researchers realized that this ability brings a well-known effect called simulator sickness [25]: users can feel disoriented, tired, and sick while exploring the virtual environment. This is why there is still a lot of research and experimentation on locomotion techniques in immersive VR. Our brain is indeed programmed to react strongly to mismatches between what our eyes see and what our vestibular sense feels: this can produce discomfort, which varies from one individual to another. It raises several challenges for VR games that aim to offer the movement and locomotion so common in standard games. The very first step in creating a compelling experience is to keep users comfortable, especially regarding locomotion. This aspect is also reinforced by previous research, which shows how the sense of presence differs between walkable and non-walkable VR games. The sense of presence is an important issue in the process of meaning-making for an individual’s concrete learning experience [26,27,28]. As Schubert et al. [29] point out, it has been assumed that the more inclusive, extensive, surrounding, and vivid the virtual environment is, the higher the resulting sense of presence.
Recent literature reviews [30] found concrete connections between locomotion and sense of presence, and between users’ perceptions and embodiment in the virtual environment. They documented 11 different locomotion techniques employed in immersive VR applications and games, including real walking, walk-in-place (WIP), teleport, and redirected walking (RDW), just to name a few. Several past studies have proven that real walking is the optimal interaction technique for locomotion in immersive environments, since it creates a higher sense of immersion, increases naturalness, and improves task performance compared to other solutions [31,32] such as walking-in-place (WIP) techniques and external controls. Other past works also showed that real walking outperforms walking-in-place and, more generally, other indirect solutions for VR locomotion [33]. The main problem with real walking, however, is the physical space: for large virtual gaming environments, developers have to consider room size and, more specifically, the size of the tracked area. Some proposed solutions also investigate omnidirectional treadmills [34,35] in order to overcome space limitations by allowing users to walk forever on these platforms. However, since such platforms are generally small and linear, none of them enables a truly natural walking behavior, and they are at the moment far from cheap. With standard outside-in tracking, the HMD ships with external cameras or sensors that track its spatial position, generally using infrared (IR) light, in order to compute a consistent position in the virtual world (Figure 5).
For instance, modern HMDs on the market employing such tracking perform very well in tracked areas of about 3 × 3 m (Oculus Rift) or 5 × 5 m (HTC Vive), but a direct mapping with large virtual scenes (e.g., 100 × 100 m) is not possible.
With inside-out positional tracking, on the other hand, the sensors (or cameras) are located directly on the HMD: one of the main advantages is that no complex calibration is required, and no installation and setup of cameras or sensors. These approaches offer larger walkable spaces, although they lack accuracy and are still not as fast as outside-in tracking. Regarding outside-in tracking, solutions such as redirected walking (RDW) techniques [36] propose real-walking locomotion interfaces, with the objective of enabling users to walk freely in virtual scenes actually larger than the tracked space. Recent studies on RDW have also focused on special warps and distortions in order to match a given pair of virtual and physical worlds [37].
Exploring an environment at different scale levels has been addressed in movies and fictional literature such as Alice’s Adventures in Wonderland and Gulliver’s Travels. In these works, the shrinking or growing of the main characters produced entirely different perspectives within the same environment. In the cultural heritage domain, this strongly impacts the meaning-making process: users can inspect an architectural detail, better understanding the finest decorations of a Roman frieze, or ‘embrace’ the entire space, catching common patterns or connections. Applied to immersive VR, such a virtual scale factor (often called ‘world scale’) has a lot to offer, especially on a research level when dealing with stereoscopic perception using modern HMDs, with huge potential for a wide range of applications. For instance, multi-scale collaborative virtual environments (MCVEs) and 3D user interfaces [38] have already proven their usefulness, highlighting the importance of the perception and understanding of world scale for the user. Such investigation also includes the challenging concept of dominant scale, i.e., the relative scale level at which a user cognitively interprets spatial relations. A few research works already investigate the automatic configuration of scale factors (also including altitude and direction) to offer the user a comfortable and ‘optimal’ virtual view [39], taking into account a set of points-of-interest (POIs). Recently, well-known products such as Google Earth VR [40], built for the exploration of very large datasets, also crafted their applications around the concept of multi-scale, providing different cues at the user interface level. Regarding spatial ability, immersive VR, and puzzle-driven gameplay, the system proposed in [41] offers a strong sense of embodiment in VR through movement tracking (head and hands) and tangible object manipulation. The system engages the spatial skill of perspective-taking by requiring a user to work from multiple points of view (ground view and aerial view) to solve a series of virtual puzzles.

3. μVR Model: A Formalization

The latest attempts push research in the above-mentioned direction: finding a way to minimize users’ motion sickness when exploring the 3D environment, while enhancing their sense of presence and augmenting their understanding of what they are experiencing. Given that the focus is on the quality of the virtual experience, the very first goal of the proposed μVR game model is to remove any kind of virtual locomotion technique (e.g., teleport), exploiting instead the physical tracked area and real-walking techniques combined with game state and world scale. The following sections define and formalize the μVR model and its components.

3.1. Item

We first introduce the item game actor I and its components, defined as
I = {O, T, ◩}
An item I in the model is a dynamic element: it has a 3D representation (a 3D model) in the virtual environment, and it may change location and orientation during the session by means of user interaction (manipulation). An item can in fact be moved by performing a grab action through a 6-DOF controller or bare hands (e.g., using hand-tracking sensors such as attached Leap Motion devices), thus modifying its current location and orientation within the virtual space (Figure 6). In order to enhance the overall sense of presence, physics effects (such as gravity, collisions, etc.) are generally applied to such items during the session, in combination with the haptic capabilities of VR controllers (e.g., rumble). The starting location and orientation of the item are represented by O (origin), while T represents the target location and orientation. Notice that O and T are actually compound objects (standard 3D transforms), including location and orientation. Both the O and T locations lie within the playable extents of the given virtual environment. The item state ◩ is binary (‘solved’ or ‘unsolved’) and is initially marked as ‘unsolved’ (⬜). The game model allows a given item state to be queried on demand at any time during the session. From a cultural heritage perspective, the ‘solved’ state corresponds to the item placed at its intended location (e.g., placing a Roman column in its original spot in a temple).
We define d as the standard 3D Euclidean distance between the item location and its target location; specifically, dstart is the initial distance (when the game begins) between the O and T locations
dstart = distance(O,T)
When a user performs a grabbing or manipulation action on the item, the current distance d to its target location clearly changes: if d falls below a certain threshold (e.g., a snap tolerance), the item state is marked as ‘solved’ (⬛). Furthermore, we define the item unsolved volume V as the 3D bounding box defined by the current and target locations: the initial unsolved volume Vstart is obviously defined by the O and T locations and the item bounds (Figure 6, right). Note that V can shrink (when the item moves closer to its target location) or even grow (when the item moves far away from its target). The latter can be limited at the implementation level, for instance by capping growth at a maximum distance or within specific boundaries.
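To make the above definitions concrete, the following minimal Python sketch models the item actor I = {O, T, state} with a distance-based solved check and the per-item unsolved volume V. All names (Item, snap_tolerance, half_extents) are illustrative assumptions of ours and not part of the paper’s Unreal Engine 4 implementation; orientation is omitted for brevity.

```python
# Illustrative sketch of the item actor; not the authors' implementation.
from dataclasses import dataclass

Vec3 = tuple[float, float, float]

def distance(a: Vec3, b: Vec3) -> float:
    # Standard 3D Euclidean distance.
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

@dataclass
class Item:
    origin: Vec3                  # O: starting location (orientation omitted)
    target: Vec3                  # T: target location
    half_extents: Vec3            # item bounds, used to expand the unsolved volume
    location: Vec3 | None = None  # current location, changed by manipulation
    solved: bool = False          # item state: unsolved (False) / solved (True)

    def __post_init__(self):
        if self.location is None:
            self.location = self.origin  # d = dstart at game start

    def update_state(self, snap_tolerance: float = 0.05) -> None:
        # Mark the item 'solved' once its distance d to the target
        # location falls below the snap tolerance.
        if distance(self.location, self.target) < snap_tolerance:
            self.solved = True

    def unsolved_volume(self) -> tuple[Vec3, Vec3]:
        # V: axis-aligned box spanned by the current and target
        # locations, expanded by the item's own bounds.
        lo = tuple(min(l, t) - h for l, t, h in
                   zip(self.location, self.target, self.half_extents))
        hi = tuple(max(l, t) + h for l, t, h in
                   zip(self.location, self.target, self.half_extents))
        return lo, hi
```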

3.2. Global Unsolved Volume

Within the game model and a given virtual environment, we generally deal with a set of items 𝕀, i.e., the set of all dynamic entities that can be rearranged by user intervention
𝕀 = {I1, I2, I3, … Ik}
Notice that at a given time during the game session, only a subset of 𝕀 is unsolved. For instance, with a set of five items, we may have the following state of 𝕀 at a given time
𝕀 = {⬜1, ⬛2, ⬜3, ⬜4, ⬛5}
We thus define the global unsolved volume 𝕍 as the additive expansion of the unsolved volumes Vi of each single item in 𝕀
𝕍 = V1 ∪ V2 ∪ … ∪ Vk
Mathematically, this means the size of 𝕍 will, in general, shrink during the game session (but never grow) due to the decreasing number of unsolved items: a solved item sits at its target location, so its unsolved volume collapses to the item bounds. At the beginning (initial game state), all the items are marked as ‘unsolved’; if the number of unsolved items reaches 0 and the other custom game conditions are met, the game ends (it is declared won).
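Continuing the sketch above (and reusing its Item class and Vec3 type), the global unsolved volume can be computed as the box enclosing the unsolved volumes of the items still marked ‘unsolved’; the function name and the choice of an enclosing axis-aligned box are our assumptions.

```python
# Illustrative: enclosing box of the unsolved items' volumes.
def global_unsolved_volume(items: list[Item]) -> tuple[Vec3, Vec3] | None:
    boxes = [item.unsolved_volume() for item in items if not item.solved]
    if not boxes:
        return None  # zero unsolved items: game-won condition
    lo = tuple(min(b[0][k] for b in boxes) for k in range(3))
    hi = tuple(max(b[1][k] for b in boxes) for k in range(3))
    return lo, hi
```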

3.3. Reachability and Solvability

We define an item as ‘reachable’ if its current virtual location lies inside the physical (real) tracked area. This means the user is able to perform the grab action through physical movement alone, without any kind of virtual camera locomotion. We similarly define an item as ‘solvable’ if its target location lies inside the tracked area (Figure 7 and Figure 8). At this point, we have all the ingredients to define two crucial properties of the μVR model
  • At any given time, each unsolved item is always reachable;
  • Each unsolved item is solvable by user interaction.
The previous rules present an important constraint: such properties hold only if all the items’ unsolved volumes (the global unsolved volume 𝕍) are contained inside the physical tracked area, i.e., each item is reachable and solvable by the user (a minimal check is sketched below). Due to our initial assumptions on virtual locomotion, this scenario vastly restricts the model to spatially small-sized 3D scenes (e.g., a virtual table with a few items to rearrange). This is the main motivation for the μ operator introduced in the following section.
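As a minimal sketch building on the previous snippets, both properties reduce to point-in-box tests against the tracked area expressed in virtual coordinates; the tracked_area representation (an axis-aligned box) is our assumption.

```python
# Illustrative reachability/solvability checks for the two μVR properties.
def inside(p: Vec3, area: tuple[Vec3, Vec3]) -> bool:
    lo, hi = area
    return all(lo[k] <= p[k] <= hi[k] for k in range(3))

def reachable(item: Item, tracked_area: tuple[Vec3, Vec3]) -> bool:
    # 'Reachable': the current location lies inside the tracked area.
    return inside(item.location, tracked_area)

def solvable(item: Item, tracked_area: tuple[Vec3, Vec3]) -> bool:
    # 'Solvable': the target location lies inside the tracked area.
    return inside(item.target, tracked_area)
```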

3.4. The μ Operator

The μ operator performs a fluid and adaptive miniaturization of the user each time an item is solved, i.e., when the user has correctly placed a given item at its target location and thus made a game progression. The basic behavior of μ consists in computing a specific set of VR parameters depending on 𝕍 (the global unsolved volume). The algorithm computes a world scale factor (s) that satisfies the property that the current 𝕍 entirely fits inside the current physical tracked area A. Thus, on each ‘item solved’ event, the virtual world scale s and the base VR position b (the virtual location of the physical tracked area center) are recomputed.
μ(𝕍)→{s, b}
Such a transformation of the virtual world scale and base position allows the user, at each progression, to fully exploit the physical tracked area to interact with and manipulate items, without employing virtual locomotion techniques. For instance, s = 10 means the user is 10 times bigger than the virtual environment, so objects are perceived at 1/10th of their original size. There is more involved from a spatial perspective: the game progression offers a unique way to perceive the same virtual environment at different scale levels (reference frames) on each iteration, whenever an item is solved. Here, the user’s perception is at the same time augmented and proportioned to the current reference frame: users can do ‘more’ in terms of accessibility compared to normal daily actions, but are always bound to a well-known cognitive structure. The influence of users’ bodies on perception was originally introduced by Gibson [42], who stressed that individuals do not perceive the environment as such, but rather the relationship between their body and the environment. Following this assumption, our user scale determines the range of potential actions we can perform within the environment and thereby defines the interactive value of the items of which our (virtual) environment is composed [43]. The μVR model takes advantage of this cognitive paradigm to allow users to perform specific tasks.
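A minimal sketch of μ follows, under our own assumptions: the paper does not specify the exact fitting policy, so we assume here that the horizontal footprint of 𝕍 is the binding constraint for a square tracked area of side area_size, that the y axis points up, and that b is placed at the center of 𝕍.

```python
# Illustrative μ operator: μ(V) -> {s, b}. Assumptions noted in the lead-in.
def mu(volume: tuple[Vec3, Vec3], area_size: float = 2.5):
    lo, hi = volume
    extent_x = hi[0] - lo[0]
    extent_z = hi[2] - lo[2]                 # horizontal footprint (y up)
    s = max(extent_x, extent_z) / area_size  # scale so V fits inside A
    s = max(s, 1.0)                          # never go below the 1:1 world scale
    b = tuple((l + h) / 2.0 for l, h in zip(lo, hi))  # base VR position
    return s, b
```

With s computed this way, the entire unsolved volume maps onto the tracked area, so every unsolved item remains reachable and solvable by real walking alone.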

3.5. μ-Progression

A game progression in the μVR model (μ-Progression) can be represented as a sequence of game states, each tightly bound to a specific 𝕍
𝕍start→𝕍1→𝕍2→ …𝕍end
For each step (game state) in the chain, each transition (→) results in a decreased number of unsolved items. Furthermore, since at each stage multiple items are within the user’s reach, different paths are possible from 𝕍start to 𝕍end.
In Figure 9, a sample transition to a new state is shown: the user (left scheme) first solves the cube item; a transition leads to the right scheme, where the user can solve the sphere item. Notice how the scale factor of the virtual environment (including items) and the virtual origin b change according to the size of the tracked area and the reachability and solvability of the active game items. At each stage, the user is ‘descending’ (shrinking) towards the correct world scale, the virtual environment seemingly expanding, by solving items through real-walking techniques inside the tracked area. Furthermore, it is important to notice that different progression routes are possible: for instance, the user in 𝕍a can also solve the sphere item first and then solve the cube. Due to the presented multi-scale mechanics, at each step (performed by a μ computation) we guarantee a non-increasing scale factor (s) after each transition, such that
si ≥ si+1
until all items are solved and the world scale is set to 1, thus restoring and ‘rewarding’ the user with the correct scale level of the virtual environment.
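Putting the previous sketches together, a μ-Progression step can be driven by a simple handler that reapplies μ whenever an item is solved; the event wiring below is purely illustrative.

```python
# Illustrative μ-Progression step: recompute {s, b} on each 'item solved'
# event; when nothing is left unsolved, restore the 1:1 world scale.
def on_item_solved(items: list[Item], area_size: float = 2.5):
    volume = global_unsolved_volume(items)
    if volume is None:
        return 1.0, None  # game won: world scale restored to 1
    return mu(volume, area_size)
```

Because 𝕍 never grows as items are solved, successive calls yield si ≥ si+1, matching the progression property above.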

4. μVR Model Application: A Case Study

4.1. The 3D Reconstruction Environment

The case study adopted to test and validate the μVR model is the three-dimensional reconstruction of the Forum of Augustus. Built by the Emperor Augustus between the end of the first century B.C. and the beginning of the first century A.D. (construction probably began after 30 B.C., and the forum was inaugurated in 2 B.C. while still unfinished), it is the first of the Imperial Fora of Rome. It was a monumental complex hosting the majestic temple of Mars Ultor (the Avenger), framed by two porticos supported by Corinthian colonnades which could have served as courts of justice; behind these colonnades there were four semi-circular exedrae (two on each side of the forum), probably used for educational activities (Ginnasi). The back walls of the porticoes and of the exedrae were articulated with columns framing a series of rectangular niches adorned with a rich variety of statues. At the end of the northern portico there was the Hall of the Colossus, a decorated chamber hosting the colossal statue, approximately 11 m tall, representing the Genius Augusti (the protective deity of Augustus) [44]. Nowadays, the remains of the Forum are still visible and well preserved in the archaeological area of the Imperial Fora (Figure 10a,b), and all the findings discovered during the excavations performed over the last century (statues, architectural elements, and decorations) are shown in the nearby museum of Trajan’s Markets-Imperial Fora Museum.
This archaeological context was reconstructed by CNR ITABC in 2015 on the occasion of the “Keys to Rome” international exhibition organized by the European consortium V-MusT, the largest European Network of Excellence on Virtual Museums (v-must.net). The exhibition focused on museum collections belonging to Roman culture and mainly to the Augustan age (Figure 11a,b).
The virtual reconstruction of the Forum was the result of long and accurate work of digitization of the museum’s objects and 3D modeling of the hypothetical historical environment. The models created for Admotum are currently under revision, both from a historical and a technical point of view. The former aims at updating the model according to recent archaeological discoveries; the latter is instead addressed at adapting the digital assets for modern game engines. The steps for creating historical assets based on sources (historical, archaeological, architectural, and so on) follow a scientific workflow adopted and developed within this laboratory [45,46,47]. Reconstructing ancient architectures is always a challenge, and a huge amount of data from different scientific domains needs to be surveyed and blended together in order to achieve reliable reconstructive hypotheses, which are then translated into 3D models. In our case, several typologies of archaeological data, such as previous reconstructive hypotheses, drawings, pictures, bibliographic resources, and excavation data, were first analyzed to design the building re-enactment. Afterwards, the proposed reconstructive hypotheses were discussed and refined with the support of experts from the Imperial Fora Museum of Rome and from the Superintendence of Rome. When not supported by historical sources or archaeological evidence, reconstructions were based on formal rules, construction techniques, and Roman modules, or on comparisons with other coeval imperial buildings.
Just after these preparatory studies, we started to draw up the 3D reconstructive models. The modeling was performed using computer graphics software; it allowed modelers to design 3D geometries with the support of plans, sections, and profiles derived from technical drawings. On the other hand, some preserved architectural elements, such as the capitals of the Temple of Mars and the caryatid statues, were designed using a virtual anastylosis approach. In other words, we first surveyed the fragments with scanning techniques and then, after integrating the missing parts in computer graphics, we relocated them in their original position. Once the drafted 3D model of the Forum was verified with the experts, the 3D models were optimized and improved using a ‘game-oriented’ approach (controlled number of polygons, atlas textures, physically-based rendering (PBR) materials, etc.); this was particularly useful for the real-time output, allowing better performance across different game engines.

4.2. Pilot Test

For the pilot test, implemented through Unreal Engine 4 (https://www.unrealengine.com), we selected and arranged a scene with four game items (Figure 12) of different scales: specifically, a column, a statue, a brazier, and a lion, employing both visual and audio cues during the experience.
From a visual perspective, each item had an animated light string (a sort of ‘Ariadne thread’) providing a subtle and continuous hint about the current item’s target location. Thanks to stereoscopic perception, such a hint can offer valuable support during the immersive fruition as to where the item has to be placed in order to solve it. Furthermore, each item’s starting location had a compass indicator providing an additional cue at the very beginning (Figure 13a).
For audio cues, we used both spatialized sounds on unsolved items and a light background track with a dynamic pitch depending on the current world scale. The track playback was in fact initially slowed down and gradually restored on each game state progression, until game completion (audio pitch fully restored).
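As a hypothetical sketch of this cue, the track pitch can be expressed as a function of the current world scale s, fully restored as s approaches 1; the paper does not specify the actual mapping, so the function below is purely our assumption.

```python
# Illustrative scale-to-pitch mapping for the background track.
def audio_pitch(s: float, min_pitch: float = 0.5) -> float:
    # Larger world scales yield a slower, lower-pitched track;
    # at s = 1 (game completion) the pitch is fully restored to 1.0.
    return max(min_pitch, 1.0 / (1.0 + 0.1 * (s - 1.0)))
```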
Since the pilot test employs physics, we took special care with collision geometries for items and the static environment, also to enhance the ‘physical’ aspect of the experience during manipulation tasks. Due to the different scale levels, gravity also plays a major role during manipulation tasks, since it impacts physics behavior (e.g., items fall slowly and react differently at larger s values), thus providing an additional cue about the current world scale.
The chosen item arrangement for the experiment and the item bounds (Figure 13b) produce an initial unsolved volume 𝕍 and a computed scale of s = 74.4. This means that at the very initial stage of the game progression the column item, for instance, was about 20 cm tall: since items are perceived at 1/s of their size, this corresponds to a column of roughly 15 m at the 1:1 world scale.

5. User Testing

As a target sample, we worked with high school students from Rome, elaborating together with developers and 3D modelers an ad hoc testing template designed to probe the specific research issues presented above. The school provided us with 45 students, ranging from 14 to 17 years old. Only 27 agreed to actively participate in the test; the others only observed, away from the interactive area. The experimental setup consisted of a workstation PC equipped with an NVIDIA 980GTX, an Oculus Rift CV1 with two tracking sensors, two Oculus Touch controllers, and a tracked area of 2.5 × 2.5 m (Figure 14a,b).

5.1. Method and Tools

In order to provide reliable results, the test templates relied upon previous works by cognitivists, pedagogists, and experts in ICT and communication. We followed a combined method made of observations and direct questionnaires (Figure 15):
  • The former involved an operator who took note of (a) the tester’s behavior; (b) the attitude toward the VR and its items (museum objects); (c) the general feeling of the tester and the atmosphere in which the experience took place; (d) the timing of the experience; and (e) any involvement of external aid (requested or not). Meanwhile, the operator watched the monitor, where it was possible to retrace the tester’s actions in the VR, in order to capture the sequence of item collection. He always referred to a predetermined sheet, useful for quickly noting down any comment or extra note.
  • The questionnaire, instead, was filled in directly by each tester just after the experience, in order to compare the related observation with the tester’s own comments. It was divided into three sections: (a) overall feedback about the experience and the feelings it elicited; (b) questions about cognitive perception and scales/proportions; and (c) questions about motion and visual sickness.
Both questionnaires and observations were collected anonymously, assigning each student a progressive number.

5.2. Logistics and Deployment

The evaluation was planned as a one-day session in March 2018. It was organized into four (ideal) phases:
  • A preliminary explanation was delivered to all students once they entered the interactive area at school; operators did not reveal the main concerns of the test and its research questions, so as not to influence the testers. Students were also asked to read the disclaimer document and sign it.
  • One by one, students tried the HMD. They were put in a comfortable situation, reducing external influences and distractions (even if other students, as spectators, remained around to watch the experience); operators asked students to use the ‘think-aloud’ technique so that any comment, sensation, or feeling useful for further consideration could be annotated.
  • During the experience, students performed the task alone, without any suggestion; minimal cues were nevertheless given in extreme cases of item inaccessibility. In the meantime, operators took note of behavioral conditions while timing the experience with a watch.
  • Just after the experience, students were asked to fill in the single-page evaluative questionnaire; instructions or explanations were given only on specific request, so as not to influence the testers.
The entire evaluative session was conducted by two operators: a developer who followed the virtual experience and an evaluator who followed the observation and questionnaire parts.

6. Results

Based upon the feedback collected from the 27 testers, the results were really interesting and revealed specific issues not anticipated by the developed pilot test.
The target group turned out to be mostly composed of females of about 16 years of age. The average time needed to experience the test scene was about 1.35 min. Almost all the testers successfully accomplished the task with no aid from external personnel; just 30% of them were helped during the experience (Figure 16), upon specific request, especially in conditions of unclear visibility while wearing the HMD (students without their glasses, an HMD too large for the student’s head, etc.). Eighty percent of them quickly understood what to do in the VR (Figure 17); this datum was also confirmed by the direct responses of the students, 100% of whom said they were able to ‘see’ how to solve the test. The contextualization process was easily activated in the testers’ minds through simple visual expedients: forms, colors, materials, and shapes; correct positioning was also achieved by means of logic, proportion attempts, correspondences, and recall.
First impressions once they wore the HMD were interesting: exclamations like “I’m a giant”, “I’m very tall”, and “I’m afraid of breaking something” reported a common feeling of preponderance of the testers with respect to the whole 3D environment. Here, the body is strongly perceived as hulking and oversized with respect to the surrounding architecture. Students seemed to move very softly in order not to accidentally ‘break’ anything in the virtual environment, as if they were afraid of it. This datum is also confirmed by the fact that the operator noticed that almost 86% of the students just executed the task without really having fun. The developers had supposed that testers might play with the items and the environment; instead, their behavior seemed really affected by dimensions: this does not mean that dimensions negatively affected the testers’ behavior, but they surely had a weight in the overall experience. On this aspect, 48% of students also affirmed that they would have liked to see their feet in the VR in order to have a specific indication of where they were walking (Figure 18); in our test, feet were not visible. This is interesting if we think about the body’s correspondence in the 3D virtual environment: in the case of hand gestures, indeed, the extent of the hands in the virtual world helps the tester see and understand what to pick up, how to handle an item, and where to place it; the same should work for feet, in order to understand in which direction the tester is moving, how fast or slow, and what he/she is stepping on. Such an implementation is thus advisable.
Regarding the handling of the items in the VR, 96% of testers said they had a good feeling. Nevertheless, the operator noticed them re-performing the handling gestures after the experience, and the impression was that they felt the need to handle the items in a very delicate manner, as the items seemed so tiny and easily breakable. This datum is confirmed by the estimations they gave for each single item they collected in the VR (Figure 19a–d). Students were asked to estimate how big or small such items would be in real life, with respect to a person. The answers were really curious and unpredictable: the average estimation of the brazier’s dimension was around half of a person (while in reality it is 150 cm high); the lion head, instead, was considered smaller than half of a person (while in reality it is 70 cm); the statue was estimated to be shorter than a person, about 150–160 cm (while in reality it is 400 cm high); finally, the column was considered taller than a person, but that measure was far from reality (500 cm high). These estimations surely do not match real-life architecture, but they give a glimpse of how distances and proportions in VR change according to one’s body perception.
Regarding the visual imagery of the VR and the proportions felt by testers, they affirmed that they perceived the architecture (the environmental space) as quite good (67%), even if 19% of them perceived it as too close and another 15% as too far. This is surely related to the virtual dimensions of the testers inside the VR while they were performing the task (Figure 20): although the balance between the virtual space and the virtual body changes in accordance with each item’s dimensions, the feeling of superimposition remained high for certain testers, finally giving them the impression of being “out of scale”. In general, testers preferred when they were giants (52%), even if a large percentage of them affirmed that they loved being in the VR as in real life, at their actual height (48%) (Figure 21). Again, this datum gives us reason to think that testers generally feel more comfortable with their usual proportion paradigm than with altered dimensions. Nevertheless, such aspects need to be further investigated.
Regarding the visual and audio cues (hints which should have helped testers in the resolution of the given task inside the VR), almost 62% of them affirmed that they did not use them (Figure 22). None of them noticed the compass indicator placed below each item indicating the direction to follow to correctly place it; all testers put each item in the right position simply by reasoning and comparing forms, dimensions, and colors with their background notions. Speaking out loud, they proved to follow a logical mental pathway to place the item in the right location: through observation of the surrounding environment, comparisons with other similar objects, estimations of dimensions and proportions, or simply drawing on everyday life experience.
In general, the majority of testers confirmed the great usability of the HMD and the deep sense of presence in the VR. They felt included in another, separate environment, minimally influenced by other students or operators. Once they had taken off the device, they felt an ‘estranged’ sensation, like being snatched out of their present (as they directly mentioned); some of them had headaches, others felt vertigo, but generally speaking 63% of testers affirmed they had no problems with the HMD. Almost 70% of them had never used such a device before, although they knew of it.
All the students expressed the desire to test the scene again. Once they finished the experience, all the testers started chatting together to compare their feelings and perceptions. The operators had the impression that they were really enthusiastic and happy about the experience.

7. Conclusions and Future Works

The pilot test proposed here attempted to verify our main research question: how can we overcome motion sickness in VR environments, which causes users’ low tolerance of game-like mechanisms? The solution we proposed through the μVR model took into account (a) each individual’s scale perception of the virtual environment as well as (b) the common paradigm of physical movement in real-life experience. From a formal point of view, the pilot test proved the robustness of the model by allowing reachability and solvability for each item and for each game state (reference frame), thus exploiting the full potential of the real-walking technique inside the physical tracked area to perform manipulation tasks and validating our assumptions. Due to the mathematical correlation between the physical tracked area A and the game-driven unsolved volume 𝕍 of the proposed model, full hardware scalability of μVR is guaranteed: the μ operator and its progression mechanics in fact completely abstract from the hardware equipment. The model adapts to different walkable space dimensions and will suit all future advancements regarding HMD refresh rates, tracking technology, and size (i.e., larger physical areas). This will allow us to craft immersive, multi-scale re-contextualization applications through such a flexible game model.
The obtained results also fueled several improvements and future directions for the μVR model. Within game scenarios dealing with great differences in item size and with special arrangements, a few items could be really small with respect to the scale of the current game state, especially at the initial stages of the experience. For instance, the pilot test showed that the lion item was almost never picked up in 𝕍start as the primary item (Table 1), but only after a few transitions. In the same vein, if we deal with re-contextualization at both urban and human scale (e.g., an entire temple and a brazier), smaller items in a given 𝕍 may be inaccessible to manipulation tasks due to the world scale computed by the μ operator. This is the main reason why a filtering operator (μ-Filtering) could be introduced and investigated in order to perform a scale-based item selection within the current 𝕍, enabling items for user manipulation only when they satisfy the filter.
The current model allows the unordered resolution of items (e.g., the user can solve first A and then B, or vice versa), so users can easily choose which virtual object to solve first without influencing their gameplay; rather, the gameplay arranges itself according to the user’s selection. For this reason, another interesting direction is the investigation of item dependencies: specific items may depend on other items in terms of resolution. For instance, item A can be solved only if item B has already been solved (e.g., a temple roof and a statue on top of that roof). Future research may also investigate the application of the μVR model to MCVEs [38], thus enabling multi-user collaborative item solving in the same virtual environment. This may open very interesting game scenarios and derived modes where users perceive each other at different scales and cooperate to solve (re-contextualize) all the items. Such a direction could be implemented in both co-located and remote collaborative interaction models within computer-supported cooperative work (CSCW). Given the stimulating feedback of this pilot with high school students, we plan further evaluations in the future, taking into account different target groups and an updated version of the promising μVR model.

Author Contributions

Bruno Fanini created and developed the μVR Model, including integration with Unreal Engine 4, pilot test application and the writing and revision of the manuscript. Alfonsina Pagano contributed to the design and planning of the study, the running of the interviews and observations, analysis of the data, and the writing and revision of the manuscript. Daniele Ferdani contributed to reconstruction, 3D modeling, asset creation and the writing and revision of the manuscript.

Acknowledgments

We would like to thank the Imperial Fora Museum-Mercati di Traiano, for the archaeological support to the 3D reconstruction; the V-MusT consortium (v-must.net) and the REVEAL project (revealvr.eu); and the Liceo Ginnasio “Augusto” of Rome, for being the test-ground of such experimentation.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Van Dijck, J. Mediated Memories in the Digital Age; Stanford University Press: Stanford, CA, USA, 2007. [Google Scholar]
  2. Pagano, A.; Pietroni, E. Un metodo integrato per valutare i Musei Virtuali e l’esperienza dei visitatori. Il caso del Museo Virtuale della Valle del Tevere. Available online: https://www.academia.edu/35211139/Un_metodo_integrato_per_valutare_i_Musei_Virtuali_e_lesperienza_dei_visitatori (accessed on 28 March 2018).
  3. Pagano, A.; Cerato, I. Evaluation of the educational potentials of interactive technologies applied to Cultural Heritage. The Keys to Rome exhibition case study. In Proceedings of the Digital Heritage 2015 International Congress, Granada, Spain, 28 September–2 October 2015. [Google Scholar]
  4. Hassenzahl, M.; Diefenbach, S.; Göritz, A. Needs, affect, and interactive products—Facets of user experience. Interact. Comput. 2010, 22, 353–362. [Google Scholar] [CrossRef]
  5. Dewey, J. Experience & Education; Kappa Delta Pi: New York, NY, USA, 1938; ISBN 0-684-83828-1. [Google Scholar]
  6. Merriam, S.B.; Caffarella, R.S.; Baumgartner, L.M. Learning in Adulthood: A Comprehensive Guide; John Wiley & Sons: San Francisco, CA, USA, 2007. [Google Scholar]
  7. Loo, R. A Meta-Analytic Examination of Kolb’s Learning Style Preferences among Business Majors. J. Educ. Bus. 2002, 77, 252–256. [Google Scholar] [CrossRef]
  8. Antonaci, A.; Pagano, A. Technology enhanced visit to museums. A case study: Keys to Rome. In Proceedings of the INTED2015, Madrid, Spain, 2–4 March 2015. [Google Scholar]
  9. Hatfield, G. Perception: History of the Concept. In International Encyclopedia of the Social & Behavioral Sciences; Elsevier: New York, NY, USA, 2001; pp. 11202–11205. [Google Scholar]
  10. Hirose, M. Virtual reality technology and museum exhibit. Int. J. Virtual Real. (IJVR) 2015, 5, 31–36. [Google Scholar]
  11. Carrozzino, M.; Bergamasco, M. Beyond virtual museums: Experiencing immersive virtual reality in real museums. J. Cult. Herit. 2010, 11, 452–458. [Google Scholar] [CrossRef]
  12. Freedman, M. Think Different: Combining Online Exhibitions and Offline Components to Gain New Understandings of Museum Permanent Collections. Available online: https://www.museumsandtheweb.com/biblio/think_different_combining_online_exhibitions_and_offl.html (accessed on 28 March 2018).
  13. Gabellone, F.; Scardozzi, G. From the Object to the Territory: Image-Based Technologies and Remote Sensing for the Reconstruction of Ancient Contexts. Archeologia e Calcolatori, Supplemento 1. Available online: https://s3.amazonaws.com/academia.edu.documents/45566376/9_Gabellone.pdf (accessed on 1 September 2007).
  14. Ray, C.A.; Van Der Vaart, M.J. Contextualizing Collections: Using Virtual Reality in Archaeology Exhibitions. Exhibitionist 2013, 32, 73–79.
  15. Reffat, R.M.; Nofal, E.M. Effective communication with cultural heritage using virtual technologies. In Proceedings of the XXIV International CIPA Symposium, Strasbourg, France, 2–6 September 2013.
  16. Cultraro, M.; Gabellone, F.; Scardozzi, G. The virtual musealization of archaeological sites: Between documentation and communication. In Proceedings of the 3rd ISPRS International Workshop 3D-ARCH, Trento, Italy, 25–28 February 2009.
  17. Pietroni, E.; Ray, C.; Rufa, C.; Pletinckx, D.; Van Kampen, I. Natural interaction in VR environments for Cultural Heritage and its impact inside museums: The Etruscanning project. In Proceedings of the 18th International Conference on Virtual Systems and Multimedia (VSMM), Milan, Italy, 2–5 September 2012.
  18. Gabellone, F.; Ferrari, I.; Giannotta, M.T.; Dell’Aglio, A. From museum to original site: A 3D environment for virtual visits to finds re-contextualized in their original setting. In Proceedings of the Digital Heritage International Congress (Digital Heritage), Marseille, France, 28 October–1 November 2013; Volume 2.
  19. Wiberg, N.; Hafssteinsson, H.; Jonasson, S. Tangible geographical interface. In Proceedings of the Digital Heritage International Congress (Digital Heritage), Marseille, France, 28 October–1 November 2013.
  20. Fanini, B.; d’Annibale, E.; Demetrescu, E.; Ferdani, D.; Pagano, A. Engaging and shared gesture-based interaction for museums: The case study of K2R International Expo in Rome. Digit. Herit. 2015, 1, 263–270.
  21. Home, M.W. Virtual reality at the British Museum: What is the value of virtual reality environments for learning by children and young people, schools, and families? In Proceedings of the Annual Conference of Museums and the Web, Los Angeles, CA, USA, 6–9 April 2016.
  22. Forte, M. Regium@Lepidi 2200 Project. Available online: https://www.archeomatica.it/musei/regium-lepidi-antica-citta-romana-rivive-digitalmente-nuovo-museo-virtuale-permanente (accessed on 28 March 2018).
  23. Forte, M. (Ed.) Regium@Lepidi 2200. Archeologia e Nuove Tecnologie per la Ricostruzione di Reggio Emilia; Ante Quem: Bologna, Italy, 2016.
  24. Codina, F.; de Prado, G.; Ruiz, I.; Sierra, A. The Iberian town of Ullastret (Catalonia): An Iron Age urban agglomeration reconstructed virtually. Archeologia e Calcolatori 2017, 28, 311–320.
  25. López-Ibáñez, M.; Peinado, F. Walking in VR: Measuring Presence and Simulator Sickness in First-Person Virtual Reality Games. Available online: http://ceur-ws.org/Vol-1682/CoSeCiVi16_paper_8.pdf (accessed on 28 March 2018).
  26. Baños, R.M.; Botella, C.; Alcañiz, M.; Liaño, V.; Guerrero, B.; Rey, B. Immersion and Emotion: Their Impact on the Sense of Presence. Cyberpsychology & Behavior; Mary Ann Liebert, Inc.: New Rochelle, NY, USA, 2004; Volume 7.
  27. Slater, M. Measuring presence: A response to the Witmer and Singer presence questionnaire. Presence Teleoper. Virtual Environ. 1999, 8, 560–565.
  28. Slater, M.; Usoh, M.; Steed, A. Depth of presence in virtual environments. Presence Teleoper. Virtual Environ. 1994, 3, 130–144.
  29. Schubert, T.W.; Friedmann, F.; Regenbrecht, H.T. The experience of presence: Factor analytic insights. Presence Teleoper. Virtual Environ. 2001, 10, 266–281.
  30. Boletsis, C. The New Era of Virtual Reality Locomotion: A Systematic Literature Review of Techniques and a Proposed Typology. Multimodal Technol. Interact. 2017, 1, 24.
  31. Slater, M.; Usoh, M.; Steed, A. Taking steps: The influence of a walking technique on presence in virtual reality. ACM Trans. Comput.-Hum. Interact. 1995, 2, 201–219.
  32. Peck, T.; Fuchs, H.; Whitton, M. The design and evaluation of a large-scale real-walking locomotion interface. IEEE Trans. Vis. Comput. Graph. 2012, 18, 1053–1067.
  33. Usoh, M.; Arthur, K.; Whitton, M.C.; Bastos, R.; Steed, A.; Slater, M.; Brooks, F.P., Jr. Walking > walking-in-place > flying, in virtual environments. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles, CA, USA, 8–13 August 1999.
  34. Steinicke, F.; Visell, Y.; Campos, J.; Lécuyer, A. Human Walking in Virtual Environments; Springer: New York, NY, USA, 2013.
  35. Ruddle, R.A.; Volkova, E.; Bülthoff, H.H. Walking improves your cognitive map in environments that are large-scale and large in extent. ACM Trans. Comput.-Hum. Interact. 2011, 18, 10.
  36. Razzaque, S.; Kohn, Z.; Whitton, M.C. Redirected walking. In Proceedings of Eurographics, Manchester, UK, 5–7 September 2001.
  37. Sun, Q.; Wei, L.-Y.; Kaufman, A. Mapping virtual and physical reality. ACM Trans. Graph. 2016, 35, 64.
  38. Langbehn, E.; Bruder, G.; Steinicke, F. Scale matters! Analysis of dominant scale estimation in the presence of conflicting cues in multi-scale collaborative virtual environments. In Proceedings of the 2016 IEEE Symposium on 3D User Interfaces (3DUI), Greenville, SC, USA, 19–20 March 2016.
  39. Glazier, A.; Ashkenazi, N.; Seegmiller, M.; Ali, S.; Le, A. Determining Optimal Player Position, Distance, and Scale from a Point of Interest on a Terrain. Available online: https://www.tdcommons.org/cgi/viewcontent.cgi?referer=https://scholar.google.co.uk&httpsredir=1&article=1769&context=dpubs_series (accessed on 2 October 2017).
  40. Käser, D.P.; Parker, E.; Glazier, A.; Podwal, M.; Seegmiller, M.; Wang, C.; Karlsson, P.; Ashkenazi, N.; Kim, J.; Le, A.; et al. The Making of Google Earth VR. In Proceedings of the ACM SIGGRAPH 2017 Talks (SIGGRAPH ’17), Los Angeles, CA, USA, 30 July–3 August 2017; ACM: New York, NY, USA, 2017; p. 2.
  41. Chang, J.S.K.; Yeboah, G.; Doucette, A.; Clifton, P.; Nitsche, M.; Welsh, T.; Mazalek, A. TASC: Combining Virtual Reality with Tangible and Embodied Interactions to Support Spatial Cognition. In Proceedings of the 2017 Conference on Designing Interactive Systems, Edinburgh, UK, 10–14 June 2017.
  42. Gibson, J. The Ecological Approach to Visual Perception; Houghton Mifflin Co.: Boston, MA, USA, 1979.
  43. Linkenauger, S.A.; Leyrer, M.; Bülthoff, H.H.; Mohler, B.J. Welcome to Wonderland: The Influence of the Size and Shape of a Virtual Hand on the Perceived Size and Shape of Virtual Objects. PLoS ONE 2013, 8, e68594.
  44. Ungaro, L.; Milella, M.; Vitti, M. Il sistema museale dei Fori Imperiali e i Mercati di Traiano. Available online: http://bib.cervantesvirtual.com/portal/simulacraromae/libro/c1.pdf (accessed on 28 March 2018).
  45. Demetrescu, E.; Ferdani, D.; Dell’Unto, N.; Lindgren, S.; Leander Touati, A.M. 3D Movie of the House of Caecilius Iucundus in Pompeii. Available online: https://diglib.eg.org/handle/10.2312/14487 (accessed on 28 March 2018).
  46. Fanini, B.; Demetrescu, E.; Ferdani, D.; Pescarin, S. Aquae Patavinae VR, dall’acquisizione 3D al progetto di realtà virtuale: Una proposta per il museo del termalismo. In Aquae Salutiferae. Il Termalismo tra Antico e Contemporaneo; Bassani, M., Bressan, M., Ghedini, F., Eds.; Antenor Quaderni; Padova University Press: Padua, Italy, 2014; pp. 431–449.
  47. Demetrescu, E. Archaeological stratigraphy as a formal language for virtual reconstruction. Theory and practice. J. Archaeol. Sci. 2015, 57, 42–55.
Figure 1. Screenshots from the VR applications (a) Etruscanning and (b) Admotum. The museum objects displayed in the user interfaces are re-contextualized in their supposed original positions within their reconstructed environments.
Figure 2. Images from the “Marta Racconta” project. Grave goods in the museum (left) and their re-contextualization in the original context (right). Courtesy of F. Gabellone, IBAM-CNR [18].
Figure 3. (a) Chrysippus head permanently exhibited at the Imperial Fora Museum in Rome. (b) Screenshot of “Admotum”, showing a user’s virtual hand repositioning the object in its original virtual context.
Figure 4. Virtual Museum Regium@Lepidi 2200. (a) Overview of the Forum; (b) detailed view of a column with the re-contextualized capital. Courtesy of M. Forte, Duke University [23].
Figure 5. A standard outside-in setup with sensors tracking the user inside the area A.
Figure 6. An item (here represented as a cube) with its starting 3D location O and its target (intended) location T. Left: the item can be solved through user manipulation (green arc). Right: the unsolved item volume (blue rectangular prism).
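For concreteness, the item of Figure 6 can be captured in a few lines of code. The sketch below is illustrative only, assuming a simple Euclidean snap tolerance for the “solved” test; the identifiers (GameItem, SOLVE_TOLERANCE, is_solved) are ours and do not come from the paper’s formalization.

```python
import math
from dataclasses import dataclass
from typing import Optional, Tuple

Vec3 = Tuple[float, float, float]

SOLVE_TOLERANCE = 0.05  # assumed snap distance to the target, in metres

@dataclass
class GameItem:
    name: str
    origin: Vec3                     # starting 3D location O
    target: Vec3                     # intended (target) location T
    position: Optional[Vec3] = None  # current location, moved by the user

    def __post_init__(self):
        if self.position is None:
            self.position = self.origin  # items start at their origin O

    def is_solved(self) -> bool:
        # Solved once manipulation has brought the item close enough to T.
        return math.dist(self.position, self.target) <= SOLVE_TOLERANCE

# e.g. GameItem("column", origin=(0.5, 0.0, 0.5), target=(1.0, 0.0, 1.5))
```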
Figure 7. In this example, both items’ origin and target locations lie within the tracked area, thus satisfying properties 1 and 2: the user can solve both items by performing basic manipulation tasks. Note also that the solving order does not matter.
Figure 8. In this example, both items are initially reachable, but the sphere is unsolvable within the tracked area A.
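The difference between Figures 7 and 8 reduces to a reachability test on O and T. The following sketch assumes a rectangular tracked area A on the floor plane and a fixed arm-reach margin beyond its border; both assumptions are ours, made only to illustrate the distinction between reachable and solvable items.

```python
REACH = 0.7  # assumed arm reach beyond the border of A, in metres

def within_reach(x, z, a_min, a_max, reach=REACH):
    """True if a point at horizontal coordinates (x, z) can be touched by a
    user standing somewhere inside the tracked area A = [a_min, a_max]."""
    return (a_min[0] - reach <= x <= a_max[0] + reach and
            a_min[1] - reach <= z <= a_max[1] + reach)

def item_state(origin, target, a_min, a_max):
    """'solvable' requires both O and T within reach of A; the sphere in
    Figure 8 is only 'reachable': it can be grabbed but not delivered to T."""
    o_ok = within_reach(origin[0], origin[2], a_min, a_max)
    t_ok = within_reach(target[0], target[2], a_min, a_max)
    if o_ok and t_ok:
        return "solvable"
    if o_ok:
        return "reachable"
    return "unreachable"
```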
Figure 9. Two subsequent states (𝕍a and 𝕍b) of a game progression in the μVR model.
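A plausible way to obtain subsequent states such as 𝕍a and 𝕍b is to recompute the global unsolved volume whenever an item is solved. The sketch below assumes 𝕍 is the axis-aligned bounding box of the origins and targets of all still-unsolved items; the paper’s formal definition may differ.

```python
def unsolved_volume(items):
    """items: iterable of (origin, target, solved) triples, where origin and
    target are (x, y, z) tuples and solved is a bool. Returns the corners of
    the bounding box V, which shrinks as items get solved (Va -> Vb)."""
    pts = [p for (o, t, solved) in items if not solved for p in (o, t)]
    if not pts:
        return None  # every item solved: the progression is complete
    v_min = tuple(min(p[i] for p in pts) for i in range(3))
    v_max = tuple(max(p[i] for p in pts) for i in range(3))
    return v_min, v_max
```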
Figure 10. Forum of Augustus: picture (a) shows the remains of the temple of Mars Ultor and the southern portico; picture (b) shows the semi-circular exedra, with columns framing a series of rectangular niches.
Figure 11. 3D reconstruction of the Forum of Augustus: picture (a) shows a rendering of the 3D environment calculated using a biased rendering engine; picture (b) shows the real-time visualization in the Admotum application.
Figure 12. Comparative size of the four game items.
Figure 13. (a) Compass indicator UI for the column item; (b) the setup of the sample VE with the arrangement of the four game items (column, statue, brazier, and lion). The blue volume represents the global unsolved volume 𝕍 computed at the initial stage, not visible during gameplay. The virtual hands used to grab items scale according to the current user scale computed by the μ operator during the experience.
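As a hedged illustration of the μ operator mentioned in the caption, one may assume the user scale is chosen so that the footprint of the current unsolved volume 𝕍 fits inside the tracked area A (2.5 × 2.5 m in the experiment of Figure 14). The fit criterion below is our guess for illustration, not the paper’s definition of μ.

```python
def mu_scale(v_min, v_max, area_size, base_scale=1.0):
    """v_min, v_max: corners of the unsolved volume V (x, y, z);
    area_size: (width, depth) of the tracked area A in metres.
    Returns the scale factor applied to the user (and virtual hands)."""
    span_x = v_max[0] - v_min[0]
    span_z = v_max[2] - v_min[2]
    # Assumed criterion: smallest enlargement that makes A cover V's footprint.
    needed = max(span_x / area_size[0], span_z / area_size[1], 1.0)
    return base_scale * needed

# The hands inherit the same factor, as in the caption above:
# hand_scale = mu_scale(v_min, v_max, (2.5, 2.5))
```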
Figure 14. The setup of the experiment at school (a) and corresponding physical tracked area A (2.5 × 2.5 m) (b).
Figure 15. The test templates used for the experiment at school, March 2018. The observation protocol is on the left; the questionnaire protocol is on the right.
Figure 16. Graph derived from the observations, showing how much help testers asked for while performing with the HMD.
Figure 17. Graph derived from the observations, showing testers’ understanding of the task to be performed once the HMD was worn.
Figure 18. Testers’ desire to see their feet in VR.
Figure 19. Estimated size of each item (a–d) compared to a human body, calculated from the students’ average indications.
Figure 20. Graph derived from the questionnaires, showing how testers perceived the VR environment.
Figure 21. Graph derived from the questionnaires. Testers were asked which visualization they preferred the most.
Figure 22. Graph derived from the questionnaires, on the usefulness of visual and audio hints in VR.
Table 1. In which sequence does the user pick up each item? Values are the average pick-up rank across testers (1 = picked up first).

Item       Statue   Lion   Column   Brazier
Avg. rank  1.4      3.7    2.1      2.8
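The averages in Table 1 are consistent with a mean 1-based pick-up rank per item. As an illustration only (the sequences below are invented, not experiment data), such averages can be computed as follows:

```python
from collections import defaultdict

def average_pickup_rank(sequences):
    """sequences: one ordered pick-up sequence per tester.
    Returns the mean 1-based rank at which each item was picked up."""
    ranks = defaultdict(list)
    for seq in sequences:
        for rank, item in enumerate(seq, start=1):
            ranks[item].append(rank)
    return {item: sum(r) / len(r) for item, r in ranks.items()}

print(average_pickup_rank([
    ("statue", "column", "brazier", "lion"),
    ("statue", "brazier", "column", "lion"),
]))
# -> {'statue': 1.0, 'column': 2.5, 'brazier': 2.5, 'lion': 4.0}
```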