Applied Sciences · Article · Open Access · 31 December 2020

X-Reality Museums: Unifying the Virtual and Real World Towards Realistic Virtual Museums

1. Foundation for Research and Technology Hellas, Institute of Computer Science, N. Plastira 100, Vassilika Vouton, GR-700 13 Heraklion, Greece
2. Department of Computer Science, University of Crete, GR-700 13 Heraklion, Greece
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Extended Reality: From Theory to Applications

Abstract

Culture is a field that is currently entering a revolutionary phase, no longer being a privilege for the few, but expanding to new audiences, who are urged not only to passively consume cultural heritage content, but to actually participate in and assimilate it on their own. In this context, museums have already embraced new technologies as part of their exhibitions, many of them featuring augmented or virtual reality artifacts. The presented work proposes the synthesis of augmented, virtual, and mixed reality technologies to provide unified X-Reality experiences in realistic virtual museums, engaging visitors in an interactive and seamless fusion of physical and virtual worlds that features virtual agents exhibiting naturalistic behavior. Visitors will be able to interact with the virtual agents as they would with real-world counterparts. The envisioned approach is expected not only to provide refined experiences for museum visitors, but also to achieve high-quality entertainment combined with more effective knowledge acquisition.

1. Introduction

Culture is a major connective tissue of a society, facilitating its links with the past, strengthening its members’ bonds and elevating their quality of life, but also allowing them to envision and plan their future. Culture has gone through important changes, expanding its potential audience, and is currently in a revolutionary phase–named Culture 3.0–in which individuals are expected to assimilate and manipulate in their own way the cultural contents they are exposed to [1]. Contemporary museums are challenged not only to adapt to the current status quo and follow existing trends, but also to shape future cultural experiences.
Evolution, though, does not happen in isolation; it is more readily achieved through multidisciplinary approaches. In this case, a discipline lending a helping hand is information and communication technologies (ICTs). The abundance of mature technologies in the fields of computer graphics, human–computer interaction, computer vision, etc., has dissolved the reluctance, even of the most sceptical, to harness digital tools for Cultural Heritage (CH), towards better understanding our history and civilization [2], as well as actively participating in and disseminating our cultural heritage. Furthermore, in accordance with Culture 3.0, state-of-the-art technologies are expected to play an important role in enhancing museums’ on-site experiences and hence transform them into high-technology CH spaces [3], able to provide high-quality interactive experiences to their audiences.
Museums take advantage of ICT to exhibit digital content alongside their physically exhibited artefacts, and often employ augmented or virtual reality (collectively referred to as “mixed reality”) technologies to immerse visitors in novel experiences, following playful approaches [4]. These immersive technologies, however, have the potential to truly innovate museum experiences, beyond merely engaging visitors in artificial worlds. When used in combination, and by employing state-of-the-art computer graphics, multisensory interaction, and artificial intelligence (AI) [5], they have the potential to orchestrate experiences of both physical [6] and virtual [7] worlds in an outstanding blend. Museum visitors can be immersed in virtual worlds unravelling in front of them and seamlessly substituting the museum’s physical environment. Virtual agents in this new world will behave and move naturally, thus acquiring the potential to directly interact with visitors, who will be transformed from passive viewers into keen participants.
The presented work elaborates on the aforementioned vision and discusses background information and the technological advancements required to attain it. In particular, the contributions of this work are: (i) a conceptual model that can be used as a reference for implementing XR museum experiences which transcend the boundaries between real and virtual interaction in the museum space; (ii) the elaboration of a novel approach, that of “true mediated reality”, which refers to the collective technologies in support of realistic, high-quality human-to-virtual-character interactions in cultural heritage contexts; (iii) a tangible case study, illustrating how the proposed conceptual model is materialized.
The rest of this article is structured as follows. Section 2 presents a review of the literature, introducing readers to the extended reality (XR) concept, as well as the current state of the technological components involved in the proposed conceptual architecture (diminished reality, true mediated reality, and natural multimodal interaction), concluding with a highlight on future research directions. Section 3 elaborates on the proposed conceptual model for building unified XR experiences toward realistic virtual museums, providing a segmentation of the components into a layered design so as to facilitate the development of novel solutions. Section 4 presents a case study, aiming to instantiate the discussed topics and the use of the conceptual architecture in a concrete, real-world example. Finally, Section 5 discusses the main points presented throughout the article, drawing conclusions on future research directions.

3. A Conceptual Architecture for Unifying XR Experiences for Realistic Virtual Museums

Museums have existed since the third century BC, and until today they have undergone several changes to cope with the sociological, cultural, and economic shifts throughout humankind’s history [69]. Nevertheless, one fundamental attribute that has remained intact is their orientation towards education: museums principally aim to share knowledge with their audiences. Contemporary museums have embraced technology and incorporated technological artifacts related to their collections, as a means of creating delightful experiences, increasing their wow factor, and entertaining visitors, but also of providing access to collections that are not physically exhibited in the museum and giving additional information about the exhibited artifacts.
Among the wide variety of technological exhibits, AR and VR solutions are becoming increasingly popular, due to the high immersion and presence they offer [70,71], especially considering that the hardware involved is now affordable and has achieved important progress in the quality of the delivered user experience. This is an important accomplishment and constitutes a milestone for further evolving XR experiences in the museum towards becoming unique and unified.
Unified XR experiences in realistic virtual museums focus on engaging museum visitors in an interactive and immersive blend of the physical and the virtual, as if it were a single unified “world” [72]. Following this concept, interaction with the XR environment and its agents will be achieved naturally, as when interacting with real-world artifacts and counterparts. Embodied virtual agents will interact with museum visitors in order to provide instructions and transfer knowledge in a more direct manner (e.g., historical personalities sharing their stories, famous artwork becoming “alive”). This realistic interplay will transform passive museum visitors into active partakers, thus reinforcing their feeling of presence in the XR environment and achieving better transfer of knowledge and higher enjoyment.
From a technical perspective, unified XR experiences in realistic virtual museums require a distributed service-oriented architecture (SOA) that will interweave the different technologies in a flexible and scalable manner and promote reusability, interoperability, and loose coupling among its components. Figure 2 illustrates a conceptual model incorporating the fundamental components that such approaches should comprise.
Figure 2. Unified XR for realistic virtual museums: conceptual architecture.
Overall, the conceptual model involves two main component categories: (i) elements that directly affect user interaction and are responsible for delivering the XR experience (green area in Figure 2), and (ii) components pertaining to processes unseen by the end user and which are responsible for interpreting user interactions (marked as “true mediated reality” in Figure 2—yellow area).
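The loose coupling that the service-oriented approach calls for can be sketched as a minimal publish/subscribe event bus, through which components exchange messages by topic rather than by direct reference. This is an illustrative sketch only; the `EventBus` class, topic names, and the wiring below are assumptions, not part of the authors' implementation.

```python
from collections import defaultdict
from typing import Any, Callable

class EventBus:
    """Minimal publish/subscribe bus: services communicate via topics,
    never via direct references, which keeps them loosely coupled."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, payload: Any) -> None:
        # Deliver the payload to every service subscribed to this topic.
        for handler in self._subscribers[topic]:
            handler(payload)

# Hypothetical wiring of two of the architecture's components: both the
# agent-behavior service and the renderer react to the same user input,
# without either knowing about the other.
bus = EventBus()
log: list[str] = []
bus.subscribe("user.gesture", lambda g: log.append(f"agent reacts to {g}"))
bus.subscribe("user.gesture", lambda g: log.append(f"renderer highlights {g}"))
bus.publish("user.gesture", "wave")
```

New components (e.g., an emotion-detection service) can then be added by subscribing to existing topics, without modifying any other component.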
In particular, the diminished reality component undertakes the task of removing, in real time, physical elements (e.g., a museum exhibit) that will be replaced with their virtual counterparts in the user’s view. To achieve this, several processes have to run. Scene registration and localization processes identify the user’s location in the physical environment and the objects in their field of view. This is an ongoing procedure during the interaction, as the user’s location may change at any time. Then, physical objects are perceptually removed by substituting them with the appropriate background.
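The background-substitution step can be illustrated with a toy sketch: given a segmentation mask of the exhibit, the masked pixels are overwritten with an estimate of the background. Real diminished-reality systems use video inpainting; the mean-colour fill below is a deliberately crude placeholder, and the `diminish` function is an assumption for illustration.

```python
import numpy as np

def diminish(frame: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Toy 'perceptual removal': overwrite the masked exhibit pixels with
    the mean colour of the unmasked pixels (a stand-in for inpainting)."""
    out = frame.copy()
    background = frame[~mask]            # pixels outside the exhibit mask
    out[mask] = background.mean(axis=0)  # crude background estimate
    return out

# A 4x4 grey "scene" with a bright 2x2 "exhibit" in the middle.
frame = np.full((4, 4, 3), 100, dtype=np.float64)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
frame[mask] = 255.0

cleaned = diminish(frame, mask)  # exhibit pixels now blend into the wall
```

In a deployed system, the mask would come from the scene-registration process and the fill from a temporally consistent inpainting model.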
Next, virtual agents have to be placed in the virtual environment, in order to substitute the physical exhibits. This is handled by the true mediated reality components. A prerequisite for delivering high-quality experiences is the realistic reconstruction of 3D models, matching–and in certain cases extending–their physical counterparts. 3D models should ideally not only represent the museum exhibits in a realistic manner, but also attempt to deliver their original form in the case of ruined artifacts (e.g., statues with missing parts). Then, the virtual representations of physical artifacts are placed in the virtual environment in a faithful manner, a task that requires realistic rendering and animation.
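Faithful placement amounts to anchoring the reconstructed model at the pose the registration process reports for its physical counterpart. A minimal sketch, assuming a rigid 4x4 pose matrix and an Nx3 vertex array (the `place_model` function and the example pose are hypothetical):

```python
import numpy as np

def place_model(vertices: np.ndarray, pose: np.ndarray) -> np.ndarray:
    """Anchor a reconstructed 3D model at the registered pose of its
    physical counterpart: apply a rigid 4x4 transform to Nx3 vertices."""
    homogeneous = np.hstack([vertices, np.ones((len(vertices), 1))])
    return (homogeneous @ pose.T)[:, :3]

# Hypothetical pose: translate the model 2 m along x, onto the plinth
# where the physical exhibit stood.
pose = np.eye(4)
pose[0, 3] = 2.0
placed = place_model(np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]]), pose)
```

The same transform would be re-applied every frame as the registration process refines its pose estimate.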
Last, a unified experience requires context-sensitive natural interaction with multiple users, which involves processes responsible for perceiving and interpreting users’ natural input commands, namely gestures and natural language. In this respect, a natural language processing knowledge base needs to be embedded in the system, while a corresponding process will undertake the task of identifying the received user commands. At the same time, a variety of input gestures should be supported, in accordance with the state of the art in the field, thus allowing users to build upon their experiences with other gestural interfaces and interact with the virtual environment easily and effectively. In parallel, an emotion detection process will be in charge of monitoring and detecting user emotions, so that the system can be further adapted to the user. All identified gestures, speech, and emotions should be taken into account by a context-sensitive interaction decision-making process, responsible for determining how the virtual agent will respond, considering also other parameters, such as the number of users who actively interact with the virtual exhibit and of those who passively attend the ongoing interaction. The decisions made will impact the virtual agent’s behavior, in terms of posture, gestures, exhibited emotions, as well as the information that will be delivered through multiple possible formats, including spoken dialogue output.
The next section (Section 4) exemplifies the aforementioned conceptual architecture through an example in the form of a case study.

4. Case Study: XR Natural History Museum

The orchestration of the previously detailed conceptual architecture towards delivering an immersive XR user experience, which integrates users in a physical and a virtual representation of the same geographical space at the same time, is explained through the example of an interactive XR exhibition installation dedicated to the presentation of Pleistocene Cretan fauna [73]. This demonstration intends to showcase living, animated, life-sized reconstructions of the animals that roamed Crete approximately 800,000 years ago. The purpose of this case study is to create a unified XR experience that intertwines “realities” (augmented, virtual, and plain reality in this case) to deliver a unique experience that transcends the capacities of each medium individually, enables users to interact with other museum visitors in the same room while immersed, and allows them to simultaneously enjoy all the benefits offered by both the physical and the synthetic museum space.

4.1. XR Systems and Applications

As illustrated in Figure 3, there are two types of interactive systems that can co-exist in the same physical space, lending full support to the Pleistocene Crete exhibition. The first is the virtual experience, a fully immersive, small museum room where fossils and reconstructed skeletons of the creatures (based on hypothesized remains not yet found, or making up for the fact that animal remains may have been moved to museums abroad) are either mounted or shown as dig sites. The user can navigate the room and view an abundance of information shown in textual, image, audio, or video form, borrowing real elements from the museum’s audiovisual material (Figure 4a). However, users can opt to “travel” back in time and view each individual animal in a fully animated, lifelike reconstruction, roaming a virtual recreation of the animal’s habitat that completely transforms the environment around the user (Figure 4b). As such, users can observe the different ways these animals moved, and thus gain far greater insight into the morphological conditions of the environment that allowed each animal to thrive in Crete at various time periods during the Pleistocene era. A “timeline” interactive user interface feature further enables users to “travel” to the various time periods corresponding to the calculated eras in which each animal lived, and to visualize the changes via various effects (e.g., a fast-forward montage of evolution). Narrative elements can further be infused into the virtual exploration, allowing the user to view interactions between animals spanning the same time period, as if partaking in a nature documentary television series [74] (e.g., witnessing the growth of a specific Mammuthus creticus [75], or Athene cretensis [76] preying on a herd of Candiacervus [77]).
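The timeline feature can be thought of as a lookup from a chosen point in time to the set of animals to animate. The sketch below illustrates this backing data structure; the era boundaries are placeholder values for illustration, not curated dates, and `species_at` is a hypothetical helper.

```python
# Illustrative backing data for the "timeline" UI; boundary values
# (in thousands of years ago, kya) are placeholders, not curated dates.
TIMELINE = [
    {"era": "Early/Middle Pleistocene", "start_kya": 2580, "end_kya": 774,
     "species": ["Mammuthus creticus"]},
    {"era": "Late Pleistocene", "start_kya": 129, "end_kya": 12,
     "species": ["Candiacervus", "Athene cretensis"]},
]

def species_at(kya: float) -> list[str]:
    """Return the species to animate for the point in time the user has
    dragged the timeline slider to (thousands of years ago)."""
    for period in TIMELINE:
        if period["start_kya"] >= kya >= period["end_kya"]:
            return period["species"]
    return []  # no reconstructed fauna for this point on the timeline
```

Scrubbing the slider then amounts to repeated calls to `species_at`, with the environment reloading whenever the returned set changes.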
Figure 3. Conceptual diagram of the Pleistocene Crete interactive XR exhibition.
Figure 4. Screenshots of the Pleistocene Crete virtual exhibition (a) museum environment; (b) Pleistocene environment.
Second, the substitutional reality experience serves as an augmented approximation of the aforementioned virtual case. In this experience, the real, physical area of the museum fossil room is set up to accommodate the XR experience, allowing the real (or plaster-built) animal remains to be subjected to a diminished reality effect. As the real-world animal skeleton cast disappears, a lifelike, moving reconstruction of the animal takes its place and is allowed to roam the physical space around the user. Elements of natural interaction (involving gestures, movement, body postures, and object manipulation) can be embedded in the experience, allowing the animal to realistically react to users’ attempts to touch it, while also utilizing spatial mapping and surface understanding to allow the holographic creature to avoid collisions with bystanders, furniture, and overall museum infrastructure.
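The collision-avoidance behavior can be sketched on an occupancy grid derived from spatial mapping: the creature steps toward its target but never into a cell flagged as occupied. This is a toy stand-in for real surface understanding; the grid resolution, `safe_step` function, and steering rule are all assumptions.

```python
import numpy as np

def safe_step(pos, target, occupied: np.ndarray):
    """Move the holographic creature one grid cell toward its target,
    deflecting around cells the spatial map flags as occupied
    (bystanders, furniture, museum infrastructure)."""
    step = np.sign(np.subtract(target, pos))
    candidates = [
        (pos[0] + step[0], pos[1] + step[1]),  # diagonal, toward target
        (pos[0] + step[0], pos[1]),            # axis-aligned fallbacks
        (pos[0], pos[1] + step[1]),
    ]
    for nxt in candidates:
        r, c = int(nxt[0]), int(nxt[1])
        if 0 <= r < occupied.shape[0] and 0 <= c < occupied.shape[1] \
                and not occupied[r, c]:
            return (r, c)
    return tuple(pos)  # fully blocked: the creature stands still

grid = np.zeros((5, 5), dtype=bool)
grid[1, 1] = True  # a bystander occupies cell (1, 1)
print(safe_step((0, 0), (4, 4), grid))  # deflects around the bystander
```

A production system would run this against a continuously updated spatial mesh rather than a static grid, but the contract is the same: the animal's locomotion is constrained by the perceived physical scene.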

4.2. Mapping to the Proposed Conceptual Model

The aforementioned exhibition systems and applications build upon the abstract framework of concepts and relationships presented in Section 3, which serves to guide the development of the system, to interpret the interactions between entities in the application environment through its defined relationships, and to derive a specific, concrete architecture describing the structural models of this particular use case. The way in which elements are identified as either interaction-oriented or as structures for true mediated reality is illustrated in the mapping shown in Figure 5.
Figure 5. Pleistocene Crete system architecture alignment to the Unified XR museums conceptual architecture. Components in green correspond to XR experience delivery while components in yellow entail the True Mediated Reality elements of the experience.
As can be seen in this mapping, the reference conceptual model is viewed as an outline of principles guiding the design of the Pleistocene Crete XR ecosystem. The alignment of the exhibition system architecture to our conceptual model allows the architecture to combine all the necessary elements and IT components in a unified X-Reality interactive environment, encapsulating the system functionalities into a number of parallel-running services. This allows us to break down otherwise complex processes into easy-to-grasp, standalone components, and hence greatly simplifies aspects of system development and integration.

4.3. Operational Setup and Evaluation Framework

Similar experiences to the one described can be developed to serve a variety of cultural institutions’ needs. For the production of the XR content, a collaboration among museum curators and technology experts is warranted, in order to reproduce reconstructed digital museum artefacts with a high degree of fidelity.
The aforementioned are intended as in-venue offerings within an actual museum, meaning that a dedicated XR space will be required for the deployment of both experiences. Experience with similar mixed reality interaction demonstrators suggests that an adequate “play area” be made available for each demonstrator, so as to accommodate unimpeded user movement without risk of injury, while also ensuring that the user’s capacity to navigate each XR space is not severely limited. Ideally, museums should employ one expert to be present at each experience space for troubleshooting, as well as for providing one-on-one tutoring before users immerse themselves in each experience. Each experience should have a limited duration, both to ensure that enough time can be allocated for crowds of museum visitors to try out each application, and to combat the potential effects of motion sickness that some users (especially first-timers) may experience while trying out these novel experiences.
To assess the museum visitor experience and gather feedback for improving the design and functionality of the applications, a reference evaluation framework can be used, based on common criteria such as the systems’ usability and engagement. A recent evaluation methodology which applies to this particular case is proposed in [78]. Furthermore, additional aspects of each experience should be taken into account to assess the systems’ potential as a tool for museum education. Such ventures constitute an interesting direction for future work.
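As one concrete instance of the usability criterion mentioned above, the widely used System Usability Scale (SUS) reduces ten 1-5 Likert responses to a single 0-100 score. The scoring rule below is the standard SUS formula; its use here is an illustrative suggestion, not the methodology of [78].

```python
def sus_score(responses: list[int]) -> float:
    """System Usability Scale: ten 1-5 Likert responses; odd-numbered
    items contribute (r - 1), even-numbered items (5 - r); the total is
    scaled by 2.5 onto a 0-100 range."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    contributions = [(r - 1) if i % 2 == 0 else (5 - r)
                     for i, r in enumerate(responses)]
    return sum(contributions) * 2.5

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # best possible: 100.0
```

Averaging such scores across visitors of each experience (virtual vs. substitutional) would give a first comparable usability measure, to be complemented by engagement and learning metrics.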

5. Conclusions

Museums have been characterized as “places where time is transformed into space” (the quote is attributed to Orhan Pamuk, a Nobel Laureate novelist). Contemporary museums have gone through various shifts, expanding their thematic scope and hosting not only historical or art exhibits, but a wide breadth of tangible and intangible cultural heritage artefacts, as well as scientific and technological artefacts. At the same time, museum visitors have themselves changed, becoming more tech-savvy and often desiring the incorporation of technological artefacts in non-technological museums. Yet, technology should not be used in the museum context as an end in itself; instead, it should constitute the medium for elevating museum visitors’ experience, also enhancing their understanding and knowledge acquisition.
Along this direction, and taking advantage of the potential of XR technology, this article has proposed a reference technological model for implementing unified XR experiences in realistic virtual museums. The model supports physical and virtual worlds being seamlessly blended toward innovative museum experiences. Furthermore, we introduced the “true mediated reality” concept to refer to the collective technological components required for visitors to be able to interact with embodied virtual agents that will substitute museum artefacts with believable, interactive characters. The conceptual model aims to allow museum visitors to concurrently interact with their physical environment and other museum visitors while immersed in an XR application, making this type of experience ideal for the museum environment, where experiences should not be provided strictly in isolation but also as social activities.
In this respect, we have presented the state-of-the-art technology and highlighted the need to further advance research in three major technological pillars, namely diminished reality, true mediated reality, and natural multimodal interaction. Future research endeavors should work toward fading the real environment, as it is perceived by all human senses, and substituting it with realistic objects and characters. Virtual characters in the XR environment should not only be realistic, but also exhibit naturalness in their movement, speech, and overall behavior. In addition, users’ representation in the virtual environment should be agnostic of the devices used, embedding a user avatar in the XR world without head-mounted displays or input controllers. Finally, user interaction is expected to be multimodal, as in the real world, and natural, featuring speech, gestures, and even emotions. When the aforementioned technological advancements have been achieved, the experience delivered in museums and cultural heritage sites will be revolutionized, not only entertaining visitors, but also allowing them to better understand the exhibits, increase their empathy with challenging concepts and topics, and eventually enhance their knowledge.

Author Contributions

All authors have contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data sharing not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sacco, P.L. Culture 3.0: A New Perspective for the EU 2014–2020 Structural Funds Programming; European Expert Network on Culture: Brussels, Belgium, 2011.
  2. Ott, M.; Pozzi, F. Towards a new era for Cultural Heritage Education: Discussing the role of ICT. Comput. Hum. Behav. 2011, 27, 1365–1371.
  3. Ioannides, M.; Davies, R. ViMM-Virtual Multimodal Museum: A Manifesto and Roadmap for Europe’s Digital Cultural Heritage. In Proceedings of the IEEE 2018 International Conference on Intelligent Systems, Funchal, Portugal, 25–27 September 2018; pp. 343–350.
  4. Sylaiou, S.; Kasapakis, V.; Dzardanova, E.; Gavalas, D. Leveraging mixed reality technologies to enhance museum visitor experiences. In Proceedings of the IEEE 2018 International Conference on Intelligent Systems, Funchal, Portugal, 25–27 September 2018; pp. 595–601.
  5. Majd, M.; Safabakhsh, R. Impact of machine learning on improvement of user experience in museums. In Proceedings of the IEEE 2017 Artificial Intelligence and Signal Processing (AISP), Shiraz, Iran, 25–27 October 2017; pp. 195–200.
  6. Caggianese, G.; De Pietro, G.; Esposito, M.; Gallo, L.; Minutolo, A.; Neroni, P. Discovering Leonardo with artificial intelligence and holograms: A user study. Pattern Recognit. Lett. 2020, 131, 361–367.
  7. Kiourt, C.; Pavlidis, G.; Koutsoudis, A.; Kalles, D. Multi-agents based virtual environments for cultural heritage. In Proceedings of the 2017 IEEE XXVI International Conference on Information, Communication and Automation Technologies (ICAT), Sarajevo, Bosnia and Herzegovina, 26–28 October 2017; pp. 1–6.
  8. Next Generation Internet—Interactive Technologies. Available online: https://ec.europa.eu/digital-single-market/en/next-generation-internet-interactive-technologies (accessed on 21 December 2020).
  9. 5 Trends Emerge in the Gartner Hype Cycle for Emerging Technologies. 2018. Available online: //www.gartner.com/smarterwithgartner/5-trends-emerge-in-gartner-hype-cycle-for-emerging-technologies-2018/ (accessed on 21 December 2020).
  10. Ptukhin, A.; Serkov, K.; Khrushkov, A.; Bozhko, E. Prospects and modern technologies in the development of VR/AR. In Proceedings of the IEEE 2018 Ural Symposium on Biomedical Engineering, Radioelectronics and Information Technology, Yekaterinburg, Russia, 7–8 May 2018; pp. 169–173.
  11. Bekele, M.K.; Pierdicca, R.; Frontoni, E.; Malinverni, E.S.; Gain, J. A survey of augmented, virtual, and mixed reality for cultural heritage. J. Comput. Cult. Herit. 2018, 11, 1–36.
  12. Cochrane, N. VFX-1 Virtual Reality Helmet by Forte. Game Bytes Magazine, 11 November 1994, p. 21. Available online: http://www.ibiblio.org/GameBytes/issue21/flooks/vfx1.html (accessed on 24 November 2020).
  13. Burdea, G.C.; Coiffet, P. Virtual Reality Technology, 2nd ed.; John Wiley & Sons: New Jersey, NJ, USA, 2017.
  14. Kugler, L. Why virtual reality will transform a workplace near you. Commun. ACM 2017, 60, 15–17.
  15. Heinonen, M. Adoption of VR and AR Technologies in the Enterprise. Master’s Thesis, Lappeenranta University of Technology, Lappeenranta, Finland, 2017.
  16. PricewaterhouseCoopers. A Decade of Digital: Keeping Pace with Transformation. Available online: https://www.pwc.com/us/en/advisory-services/digital-iq/assets/pwc-digital-iq-report.pdf (accessed on 21 December 2020).
  17. Billinghurst, M.; Clark, A.; Lee, G. A survey of augmented reality. Found. Trends Hum. Comput. Interact. 2015, 8, 73–272.
  18. Henderson, S.J.; Feiner, S.K. Augmented reality in the psychomotor phase of a procedural task. In Proceedings of the 10th IEEE International Symposium on Mixed and Augmented Reality, Basel, Switzerland, 26–29 October 2011; pp. 191–200.
  19. Van Krevelen, D.F.W.; Poelman, R. A survey of augmented reality technologies, applications and limitations. Int. J. Virtual Real. 2010, 9, 1–20.
  20. Boom, D.V. Pokemon Go Has Crossed 1 Billion in Downloads. Available online: https://www.cnet.com/news/pokemon-go-has-crossed-1-billion-in-downloads/ (accessed on 21 December 2020).
  21. Milgram, P.; Kishino, F. A taxonomy of mixed reality visual displays. IEICE Trans. Inf. Syst. 1994, 77, 1321–1329.
  22. Ohta, Y.; Tamura, H. Mixed Reality: Merging Real and Virtual Worlds; Springer: Berlin/Heidelberg, Germany, 2014.
  23. Margetis, G.; Ntoa, S.; Antona, M.; Stephanidis, C. Augmenting natural interaction with physical paper in ambient intelligence environments. Multimed. Tools Appl. 2019, 78, 13387–13433.
  24. Koutlemanis, P.; Zabulis, X. Tracking of multiple planar projection boards for interactive mixed-reality applications. Multimed. Tools Appl. 2017, 77, 17457–17487.
  25. Margetis, G.; Grammenos, D.; Zabulis, X.; Stephanidis, C. iEat: An interactive table for restaurant customers’ experience enhancement. In Proceedings of the International Conference on Human-Computer Interaction 2013, Las Vegas, NV, USA, 21–26 July 2013; p. 666.
  26. Kang, J. AR teleport: Digital reconstruction of historical and cultural-heritage sites for mobile phones via movement-based interactions. Wirel. Pers. Commun. 2013, 70, 1443–1462.
  27. Papadaki, E.; Zabulis, X.; Ntoa, S.; Margetis, G.; Koutlemanis, P.; Karamaounas, P.; Stephanidis, C. The book of Ellie: An interactive book for teaching the alphabet to children. In Proceedings of the 2013 IEEE International Conference on Multimedia and Expo Workshops, San Jose, CA, USA, 15–19 July 2013; pp. 1–6.
  28. Papaefthymiou, M.; Kateros, S.; Georgiou, S.; Lydatakis, N.; Zikas, P.; Bachlitzanakis, V.; Papagiannakis, G. Gamified AR/VR character rendering and animation-enabling technologies. In Mixed Reality and Gamification for Cultural Heritage; Ioannides, M., Magnenat-Thalmann, N., Papagiannakis, G., Eds.; Springer: Cham, Switzerland, 2017; pp. 333–357.
  29. Karakottas, A.; Papachristou, A.; Doumanoglou, A.; Zioulis, N.; Zarpalas, D.; Daras, P. Augmented VR. In Proceedings of the 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Reutlingen, Germany, 18–22 March 2018; p. 1.
  30. Simsarian, K.T.; Akesson, K.P. Windows on the world: An example of augmented virtuality. In Proceedings of the 6th International Conference on Man-Machine Interaction Intelligent Systems in Business, Montpellier, France, 28–30 May 1997.
  31. Regenbrecht, H.; Ott, C.; Wagner, M.; Lum, T.; Kohler, P.; Wilke, W.; Mueller, E. An augmented virtuality approach to 3D videoconferencing. In Proceedings of the Second IEEE and ACM International Symposium on Mixed and Augmented Reality, Tokyo, Japan, 10 October 2003; pp. 290–291.
  32. Apostolakis, K.C.; Alexiadis, D.S.; Daras, P.; Monaghan, D.; O’Connor, N.E.; Prestele, B.; Eisert, P.; Richard, G.; Zhang, Q.; Izquierdo, E.; et al. Blending real with virtual in 3DLife. In Proceedings of the 14th International IEEE Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS 2013), Paris, France, 3–5 July 2013; pp. 1–4.
  33. Drossis, G.; Ntelidakis, A.; Grammenos, D.; Zabulis, X.; Stephanidis, C. Immersing users in landscapes using large scale displays in public spaces. In Distributed, Ambient, and Pervasive Interactions, DAPI 2015, Lecture Notes in Computer Science; Streitz, N., Markopoulos, P., Eds.; Springer: Cham, Switzerland, 2015; Volume 9189, pp. 152–162.
  34. Christaki, K.; Apostolakis, K.C.; Doumanoglou, A.; Zioulis, N.; Zarpalas, D.; Daras, P. Space Wars: An AugmentedVR Game. In MultiMedia Modeling, MMM 2019, Lecture Notes in Computer Science; Kompatsiaris, I., Huet, B., Mezaris, V., Gurrin, C., Cheng, W.H., Vrochidis, S., Eds.; Springer: Cham, Switzerland, 2019; Volume 11296, pp. 566–570.
  35. Grammenos, D.; Margetis, G.; Koutlemanis, P.; Zabulis, X. Paximadaki, the game: Creating an advergame for promoting traditional food products. In Proceedings of the 16th International Academic MindTrek Conference, Tampere, Finland, 3–5 October 2012; pp. 287–290.
  36. Zikas, P.; Bachlitzanakis, V.; Papaefthymiou, M.; Kateros, S.; Georgiou, S.; Lydatakis, N.; Papagiannakis, G. Mixed reality serious games and gamification for smart education. In Proceedings of the 2016 European Conference on Games Based Learning, Paisley, UK, 6–7 October 2016; p. 805.
  37. Albert, A.; Hallowell, M.R.; Kleiner, B.; Chen, A.; Golparvar-Fard, M. Enhancing construction hazard recognition with high-fidelity augmented virtuality. J. Constr. Eng. Manag. 2014, 140.
  38. Chen, A.; Golparvar-Fard, M.; Kleiner, B. SAVES: A safety training augmented virtuality environment for construction hazard recognition and severity identification. In Proceedings of the 13th International Conference on Construction Applications of Virtual Reality, London, UK, 30–31 October 2013; pp. 373–383.
  39. Paul, P.; Fleig, O.; Jannin, P. Augmented virtuality based on stereoscopic reconstruction in multimodal image-guided neurosurgery: Methods and performance evaluation. IEEE Trans. Med. Imaging 2005, 24, 1500–1511.
  40. Coleman, B. Using Sensor Inputs to Affect Virtual and Real Environments. IEEE Pervasive Comput. 2009, 8, 16–23.
  41. Mann, S.; Havens, J.C.; Iorio, J.; Yuan, Y.; Furness, T. All Reality: Values, taxonomy, and continuum, for Virtual, Augmented, eXtended/MiXed (X), Mediated (X, Y), and Multimediated Reality/Intelligence. In Proceedings of the AWE 2018 Conference, Santa Clara, CA, USA, 30 May–1 June 2018.
  42. Fast-Berglund, Å.; Gong, L.; Li, D. Testing and validating Extended Reality (xR) technologies in manufacturing. Procedia Manuf. 2018, 25, 31–38.
  43. Lee, Y.; Moon, C.; Ko, H.; Lee, S.H.; Yoo, B. Unified Representation for XR Content and its Rendering Method. In Proceedings of the 25th ACM International Conference on 3D Web Technology, Seoul, Korea, 9–13 November 2020; pp. 1–10.
  44. Extended Reality (XR = Augmented Reality + Virtual Reality + Mixed Reality) Marketplace 2018–2023. Available online: https://www.researchandmarkets.com/reports/4658849/extended-reality-xr-augmented-reality (accessed on 24 November 2020).
  45. Kenderdine, S. Embodiment, Entanglement, and Immersion in Digital Cultural Heritage. In A New Companion to Digital Humanities; Schreibman, S., Siemens, R., Unsworth, J., Eds.; John Wiley & Sons: Chichester, UK, 2015; pp. 22–41. ISBN 9781118680605.
  46. Barbot, B.; Kaufman, J.C. What makes immersive virtual reality the ultimate empathy machine? Discerning the underlying mechanisms of change. Comput. Hum. Behav. 2020, 111, 106431.
  47. Kidd, J. With New Eyes I See: Embodiment, empathy and silence in digital heritage interpretation. Int. J. Herit. Stud. 2019, 25, 54–66.
  48. Mann, S.; Fung, J. VideoOrbits on eye tap devices for deliberately diminished reality or altering the visual perception of rigid planar patches of a real world scene. In Proceedings of the International Symposium on Mixed Reality (ISMR2001), Yokohama, Japan, 14–15 March 2001; pp. 48–55. [Google Scholar]
  49. Mori, S.; Ikeda, S.; Saito, H. A survey of diminished reality: Techniques for visually concealing, eliminating, and seeing through real objects. IPSJ Trans. Comput. Vis. Appl. 2017, 9, 1–14. [Google Scholar] [CrossRef]
  50. Kido, D.; Fukuda, T.; Yabuki, N. Diminished reality system with real-time object detection using deep learning for onsite landscape simulation during redevelopment. Environ. Model. Softw. 2020, 131, 104759. [Google Scholar] [CrossRef]
  51. Dhamo, H.; Navab, N.; Tombari, F. Object-driven multi-layer scene decomposition from a single image. In Proceedings of the IEEE International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 5369–5378. [Google Scholar] [CrossRef]
  52. Yeh, R.A.; Chen, C.; Lim, T.Y.; Schwing, A.G.; Hasegawa-Johnson, M.; Do, M.N. Semantic Image Inpainting with Deep Generative Models. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5485–5493. [Google Scholar] [CrossRef]
  53. Yu, J.; Lin, Z.; Yang, J.; Shen, X.; Lu, X.; Huang, T.S. Free-Form Image Inpainting with Gated Convolution. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–2 November 2019; pp. 4471–4480. [Google Scholar] [CrossRef]
  54. Hartholt, A.; Fast, E.; Reilly, A.; Whitcup, W.; Liewer, M.; Mozgai, S. Ubiquitous virtual humans: A multi-platform framework for embodied AI agents in XR. In Proceedings of the 2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), San Diego, CA, USA, 9–11 December 2019; pp. 308–3084. [Google Scholar] [CrossRef]
  55. Ioannides, M.; Magnenat-Thalmann, N.; Papagiannakis, G. Mixed Reality and Gamification for Cultural Heritage; Springer: Cham, Switzerland, 2017; ISBN 9783319496078. [Google Scholar]
  56. Magnenat-Thalmann, N.; Foni, A.; Papagiannakis, G.; Cadi-Yazli, N. Real Time Animation and Illumination in Ancient Roman Sites. Int. J. Virtual Real. 2007, 6, 11–24. [Google Scholar]
  57. Papaefthymiou, M.; Feng, A.; Shapiro, A.; Papagiannakis, G. A fast and robust pipeline for populating mobile AR scenes with gamified virtual characters. In Proceedings of the SIGGRAPH Asia Mobile Graphics and Interactive Applications, Kobe, Japan, 2–6 November 2015. [Google Scholar] [CrossRef]
  58. Kasap, Z.; Magnenat-Thalmann, N. Intelligent virtual humans with autonomy and personality: State-of-the-art. Intell. Decis. Technol. 2007. [Google Scholar] [CrossRef]
  59. Papanikolaou, P.; Papagiannakis, G. Real-Time Separable Subsurface Scattering for Animated Virtual Characters. In GPU Computing and Applications; Cai, Y., See, S., Eds.; Springer: Singapore, 2015; pp. 53–67. ISBN 9789812871343. [Google Scholar]
  60. Alexiadis, D.S.; Zioulis, N.; Zarpalas, D.; Daras, P. Fast deformable model-based human performance capture and FVV using consumer-grade RGB-D sensors. Pattern Recognit. 2018, 79, 260–278. [Google Scholar] [CrossRef]
  61. Frueh, C.; Sud, A.; Kwatra, V. Headset removal for virtual and mixed reality. In Proceedings of the ACM SIGGRAPH 2017 Talks on SIGGRAPH ’17, Los Angeles, CA, USA, 30 July–3 August 2017; pp. 1–2. [Google Scholar] [CrossRef]
  62. Sandor, C.; Fuchs, M.; Cassinelli, A.; Li, H.; Newcombe, R.; Yamamoto, G.; Feiner, S. Breaking the Barriers to True Augmented Reality. arXiv 2015, arXiv:1512.05471. [Google Scholar]
  63. Valli, A. The design of natural interaction. Multimed. Tools Appl. 2008, 38, 295–305. [Google Scholar] [CrossRef]
  64. Brondi, R.; Alem, L.; Avveduto, G.; Faita, C.; Carrozzino, M.; Tecchia, F.; Bergamasco, M. Evaluating the Impact of Highly Immersive Technologies and Natural Interaction on Player Engagement and Flow Experience in Games. In Entertainment Computing—ICEC 2015; Chorianopoulos, K., Divitini, M., Baalsrud Hauge, J., Jaccheri, L., Malaka, R., Eds.; Springer: Cham, Switzerland, 2015; Volume 9353, pp. 169–181. ISBN 9783319245898. [Google Scholar]
  65. Stephanidis, C. Human Factors in Ambient Intelligence Environments. In Handbook of Human Factors and Ergonomics; John Wiley & Sons: Hoboken, NJ, USA, 2012; pp. 1354–1373. ISBN 9781118131350. [Google Scholar]
  66. Bastug, E.; Bennis, M.; Medard, M.; Debbah, M. Toward Interconnected Virtual Reality: Opportunities, Challenges, and Enablers. IEEE Commun. Mag. 2017, 55, 110–117. [Google Scholar] [CrossRef]
  67. McTear, M.; Callejas, Z.; Griol, D. Conversational Interfaces: Past and Present. In The Conversational Interface; Springer: Cham, Switzerland, 2016; pp. 51–72. ISBN 9783319329673. [Google Scholar]
  68. Otter, D.W.; Medina, J.R.; Kalita, J.K. A survey of the usages of deep learning for natural language processing. IEEE Trans. Neural Netw. Learn. Syst. 2020, 1–21. [Google Scholar] [CrossRef]
  69. Dean, D.; Edson, G. Handbook for Museums; Routledge: London, UK, 2013; ISBN 9781135908379. [Google Scholar]
  70. Carrozzino, M.; Bergamasco, M. Beyond virtual museums: Experiencing immersive virtual reality in real museums. J. Cult. Herit. 2010, 11, 452–458. [Google Scholar] [CrossRef]
  71. Jung, T.; Tom Dieck, M.C.; Lee, H.; Chung, N. Effects of Virtual Reality and Augmented Reality on Visitor Experiences in Museum. In Proceedings of the Information and Communication Technologies in Tourism, Bilbao, Spain, 2–5 February 2016; pp. 621–635. [Google Scholar]
  72. Margetis, G.; Papagiannakis, G.; Stephanidis, C. Realistic Natural Interaction with Virtual Statues in X-Reality Environments. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 801–808. [Google Scholar] [CrossRef]
  73. Apostolakis, K.C.; Margetis, G.; Stephanidis, C. Pleistocene Crete: A narrative, interactive mixed reality exhibition that brings prehistoric wildlife back to life. In Proceedings of the 2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), Recife, Brazil, 9–13 November 2020; pp. 237–240. [Google Scholar] [CrossRef]
  74. Darley, A. Simulating natural history: Walking with Dinosaurs as hyper-real edutainment. Sci. Cult. 2003, 12, 227–256. [Google Scholar] [CrossRef]
  75. Poulakakis, N.; Parmakelis, A.; Lymberakis, P.; Mylonas, M.; Zouros, E.; Reese, D.S.; Glaberman, S.; Caccone, A. Ancient DNA forces reconsideration of evolutionary history of Mediterranean pygmy elephantids. Biol. Lett. 2006, 2, 451–454. [Google Scholar] [CrossRef]
  76. Weesie, P.D. A Pleistocene endemic island form within the genus Athene: Athene cretensis n. sp. (Aves, Strigiformes) from Crete. In Proceedings of the Koninklijke Nederlandse Akademie van Wetenschappen Amsterdam, Series B, Physical Sciences; North-Holland Pub. Co.: Amsterdam, The Netherlands, 1976; Volume 85, pp. 323–336. [Google Scholar]
  77. De Vos, J. Pleistocene deer fauna in Crete: Its adaptive radiation and extinction. Tropics 2000, 10, 125–134. [Google Scholar] [CrossRef]
  78. Liu, Y. Evaluating visitor experience of digital interpretation and presentation technologies at cultural heritage sites: A case study of the old town, Zuoying. Built Herit. 2020, 4, 1–15. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
