Big Data and Cognitive Computing
Editor's Choice | Review | Open Access

26 January 2022

Scalable Extended Reality: A Future Research Agenda

Human Computer Interaction Lab, Department of Computer Science, Technische Universität Kaiserslautern, 67663 Kaiserslautern, Germany
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Virtual Reality, Augmented Reality, and Human-Computer Interaction

Abstract

Extensive research has outlined the potential of augmented, mixed, and virtual reality applications. However, little attention has been paid to scalability enhancements fostering practical adoption. In this paper, we introduce the concept of scalable extended reality (XR_S), i.e., spaces scaling between different displays and degrees of virtuality that can be entered by multiple, possibly distributed users. The development of such XR_S spaces concerns several research fields. To provide bidirectional interaction and maintain consistency with the real environment, virtual reconstructions of physical scenes need to be segmented semantically and adapted dynamically. Moreover, scalable interaction techniques for selection, manipulation, and navigation as well as a world-stabilized rendering of 2D annotations in 3D space are needed to let users intuitively switch between handheld and head-mounted displays. Collaborative settings should further integrate access control and awareness cues indicating the collaborators' locations and actions. While many of these topics have been investigated by previous research, very few works have considered their integration to enhance scalability. Addressing this gap, we review related previous research, list current barriers to the development of XR_S spaces, and highlight dependencies between them.

1. Introduction

Using different kinds of extended reality (XR) technologies to enter augmented, mixed, or virtual reality scenes has been considered supportive for various areas of application. For instance, users could receive contextual information on demand during training or maintenance tasks, and product development processes could benefit from fast and cheap modifications as virtual augmentations are adapted accordingly. Furthermore, connecting multiple head-mounted or handheld displays in one network holds great potential to provide co-located as well as distributed collaborators with customized access to a joint space. However, so far, the majority of XR applications have been limited to single use cases, specific technology, and two users. We believe that this lack of scalability limits the practical adoption of XR technologies, as switching between tasks and technologies requires costly adaptations of the setup as well as relearning of interaction techniques.
Seeking to reduce these efforts, we introduce the concept of scalable XR spaces (XR_S) that we deem beneficial for various applications. For instance, product development could be supported by systems scaling from virtual prototypes to hybrid prototypes and, finally, to physical products augmented with single annotations. Similarly, virtuality could decrease according to established skills in training systems or progress at construction sites. Furthermore, co-located teams operating in augmented or mixed reality could be joined by remote collaborators entering virtual reconstructions of their scene. Depending on individual tasks and preferences, collaborators could thereby be provided with either head-mounted or handheld displays.
While previous research has dealt rather separately with the three topics that we consider most crucial for the development of XR_S spaces (i.e., collaboration support features, consistent and accessible visualizations, and intuitive interaction techniques), we focus on their integration to enhance scalability. To this end, we review the latest research in related fields and propose a future research agenda that lists both remaining and newly arising research topics.

2. Background and Terminology

2.1. Augmented, Mixed, and Virtual Reality

In 1994, Milgram and Kishino [1] introduced a taxonomy categorizing technologies that integrate virtual components according to their degree of virtuality, known today as the reality–virtuality continuum [2]. Virtual reality (VR) applications are located at the right end of the continuum. In such completely computer-generated environments, users experience particularly high levels of immersion. Moving along the continuum to the left, the degree of virtuality decreases. As depicted in Figure 1, the term mixed reality (MR) encompasses both the augmentation of virtual scenes with real contents (augmented virtuality, AV) and the augmentation of real scenes with virtual contents (augmented reality, AR). While the continuum presents AR and AV as equal, considerably more research has been published on AR than on AV since the continuum's introduction. This neglect of AV is also reflected in today's general interpretation of MR. Currently, MR is associated with spaces that embed virtual objects into the physical world (i.e., onto, in front of, or behind real-world surfaces), while AR refers to real-world scenes that are augmented with purely virtual overlays, independently of the scene's physical constraints.
Figure 1. The reality–virtuality continuum, reproduced with permission from [2].
While most smartphones and tablets can be used as handheld displays (HHDs) to enter MR and VR spaces, the quality of the experience is heavily affected by the incorporated technology and sensors. For instance, the LiDAR scanner incorporated in Apple's iPad Pro [3] enhances scene scanning and thus the embedding of virtual objects into the real world. Apart from HHDs, users may wear head-mounted displays (HMDs). To access VR scenes, headsets such as the HTC VIVE [4], Oculus Rift, or Oculus Quest [5] may be used. HMDs that allow users to enter MR scenes include the Microsoft HoloLens [6], Magic Leap [7], and Google Glass [8]. Furthermore, Apple is expected to release a game-changing HMD within the next few years [9]. While some of these technologies provide interaction via built-in gesture recognition, others make use of touchscreens, external handheld controllers, or tracking technology such as the Leap Motion controller [10]. Furthermore, projection-based technologies such as Powerwalls or spatial AR can be employed. In this paper, we focus on the scalability between HMDs and HHDs, i.e., environments that can be accessed individually with 2D or 3D displays.

2.2. Scalable Extended Reality (XR_S)

Recently, the term extended reality (XR), or less often cross reality, has increasingly been used as an umbrella term for the different technologies located along the reality–virtuality continuum [1,2]. In this paper, we use XR to refer to AR, MR, and VR applications in general and refer to HMDs and HHDs collectively as XR devices or XR technologies.
In addition, we introduce the concept of scalable extended reality (XR_S), describing spaces that scale along the three dimensions depicted in Figure 2: from low to high degrees of virtuality (i.e., from the left to the right end of the reality–virtuality continuum [1,2]); between different devices (i.e., HMDs and HHDs); and from single users to multiple collaborators who may be located at different sites. In the following, we use the term XR_S whenever we refer to the concept of these highly scalable spaces.
Figure 2. Scalable extended reality (XR_S) spaces providing scalability between different degrees of virtuality, devices, and numbers of collaborators.
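To make the three scalability dimensions concrete, the following minimal sketch (in Python; all class and field names are hypothetical illustrations rather than part of the original concept) models an XR_S session as data that records, per collaborator, the device, the degree of virtuality, and whether the user is on site:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class Virtuality(Enum):
    """Coarse positions along the reality-virtuality continuum [1,2]."""
    REAL = 0   # physical scene without augmentations
    AR = 1     # virtual overlays on the real scene
    MR = 2     # virtual objects embedded into the physical world
    AV = 3     # real contents embedded into a virtual scene
    VR = 4     # fully computer-generated scene

class DeviceType(Enum):
    HMD = "head-mounted display"
    HHD = "handheld display"

@dataclass
class Collaborator:
    name: str
    device: DeviceType
    virtuality: Virtuality   # each user may enter the space at a different degree
    on_site: bool            # True if located in the actual working environment

@dataclass
class XRSSession:
    """One joint XR_S space, scalable along all three dimensions of Figure 2."""
    collaborators: List[Collaborator] = field(default_factory=list)

    def scale_up(self, user: Collaborator) -> None:
        """Adding a collaborator scales the space along the 'number of users' axis."""
        self.collaborators.append(user)

# Example: an on-site worker in MR on an HMD joined by an off-site expert in VR on an HHD.
session = XRSSession()
session.scale_up(Collaborator("on-site worker", DeviceType.HMD, Virtuality.MR, on_site=True))
session.scale_up(Collaborator("off-site expert", DeviceType.HHD, Virtuality.VR, on_site=False))
```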
While existing XR applications are mostly limited to single use cases, specific technology, and two users, XR_S spaces could serve as flexible, long-term training or working environments. Highly scalable visualization and interaction techniques could increase memorability, allowing users to switch intuitively between different degrees of virtuality and devices while keeping their focus on the actual task. For instance, product development could be supported by applications that scale from initial virtual prototypes to physical prototypes that are augmented with single virtual contents, i.e., the degree of virtuality decreases as the product evolves. In this context, time and costs could be reduced, as modifying virtual product parts is faster and cheaper than modifying physical parts. Since different XR technologies might be appropriate for different degrees of virtuality and different tasks, the memorability of interaction techniques needs to be enhanced so that users do not have to relearn them whenever they switch to another device. Considering multiple system users, each collaborator could be provided on demand with the information needed for task completion via customized augmentations and could join the XR_S space in MR or VR scenes using HMDs or HHDs, depending on their individual preferences and the collaborative setting.
Johansen [11] distinguished groupware systems with respect to time and place. In this paper, we mainly focus on synchronous collaboration, i.e., collaborators interact at the same time but can be located at the same site (co-located collaboration) or at different sites (distributed or remote collaboration). In this context, we use the term on-site collaborator for anyone located in the actual working environment (e.g., the location of a real factory or physical prototype) and the term off-site collaborator for anyone joining the session from a distant location. Previous research also refers to this off-site collaborator as a remote expert. While the term remote implies that two collaborators are located at different sites, it does not indicate which collaborator is located at which site. This information is, however, of high relevance in XR_S spaces, where different technologies may be used by on-site collaborators in an MR scene (i.e., the working environment with virtual augmentations) and off-site collaborators immersed in a VR scene (i.e., the virtual replication of the working environment including the augmentations). Since such XR_S spaces could support not only co-located but also distributed collaboration, they hold considerable potential for saving travel costs and time.

4. A Future Research Agenda

As summarized above, previous research has focused on enhancing collaboration support features, visualization techniques, and interaction techniques in the context of XR settings. However, state-of-the-art research considers these topics separately and does not take into account the integration of research outcomes across these fields to support collaboration in XR_S spaces. Addressing this gap, we present a future research agenda that lists remaining as well as newly arising research questions. The proposed agenda is structured into topics concerning the general implementation of XR_S and topics to increase scalability between different devices (i.e., HMDs and HHDs), different degrees of virtuality (i.e., from the left to the right end of the reality–virtuality continuum [1,2]), and different numbers of collaborators that may be located at different sites. Expected contributions among these new research topics are depicted in Figure 3.
Figure 3. Research topics that are relevant to the development of XR_S spaces and expected contributions among them.

4.1. General Research Topics

XR_S Framework. Since available XR technologies are provided by different manufacturers and run on different operating systems (e.g., MR-HMD: HoloLens by Microsoft [6], VR-HMD: VIVE by HTC [4], and HHD: iPad Pro by Apple [3]), incorporating all of them into a joint XR_S space requires cross-platform development, which is impeded by the lack of available interfaces. To facilitate the development of such interfaces, future research should consider a framework that formalizes the interactions with real and virtual objects and describes which kinds of data need to be tracked and shared among the entities.
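As a purely illustrative sketch of what such a formalization could cover (every name below is hypothetical and not tied to any existing platform API), a platform-neutral shared-state record and an abstract device adapter might look as follows:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class SharedObjectState:
    """Platform-neutral description of one tracked entity in the joint XR_S space."""
    object_id: str
    is_virtual: bool                              # virtual augmentation or reconstructed real object
    position: Tuple[float, float, float]          # world coordinates, meters
    rotation: Tuple[float, float, float, float]   # unit quaternion (x, y, z, w)
    owner: Optional[str] = None                   # collaborator currently holding exclusive access

class XRDeviceAdapter(ABC):
    """Interface each platform-specific client (HoloLens, VIVE, iPad, ...) would implement."""

    @abstractmethod
    def push_state(self, state: SharedObjectState) -> None:
        """Render or update the entity on this device."""

    @abstractmethod
    def poll_local_input(self) -> List[SharedObjectState]:
        """Return entities the local user has selected or manipulated since the last poll."""
```

In such a design, the joint space would only ever exchange the neutral state records, while each platform-specific client translates them to and from its own rendering and input APIs.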
Upgraded Hardware. To apply XR_S in practice, upgraded hardware is needed that enhances ergonomic aspects and provides larger fields of view. Some studies (e.g., [21,51]) used VR-HMDs to mirror the real-world scene, including the virtual augmentations. While currently available VR-HMDs provide larger viewing angles than MR-HMDs, this approach impedes leveraging the actual benefits of MR technologies, such as reduced isolation and natural interaction with real-world objects.
User-Independent Tracking. In previous research, hardware for tracking parts of the user's body as well as for capturing the on-site scene was attached to the users. A common approach to incorporating hand gestures as awareness cues in the XR space was to attach the Leap Motion controller [10] to the off-site collaborator's HMD. As such, the augmentation of the on-site collaborator's view with the off-site collaborator's hands depends heavily on the off-site collaborator's hand and head movements. Similarly, tracking the physical scene with hardware attached to the on-site collaborators' HMD will be affected by their head movements and position. Considering the practical adoption of XR_S spaces, these dependencies on human behavior should be decreased.
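This dependency can be seen in the coordinate-frame chain that head-mounted sensors impose: the hand pose is only known relative to the sensor, which is only known relative to the head. A minimal sketch (assuming 4x4 homogeneous transforms; all numeric values are hypothetical) is:

```python
import numpy as np

def pose(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Hypothetical example values: identity rotations, offsets in meters.
world_T_head = pose(np.eye(3), np.array([0.0, 1.7, 0.0]))     # from the HMD's head tracking
head_T_sensor = pose(np.eye(3), np.array([0.0, 0.05, 0.08]))  # fixed mounting offset of the hand sensor
sensor_T_hand = pose(np.eye(3), np.array([0.0, -0.1, 0.3]))   # hand pose reported by the sensor

# Hand pose in world coordinates: every factor on the right depends on the wearer's head pose,
# which is exactly the dependency on human behavior that user-independent tracking would remove.
world_T_hand = world_T_head @ head_T_sensor @ sensor_T_hand
print(world_T_hand[:3, 3])  # world position of the tracked hand
```

Scene-mounted, external tracking would remove the first two factors from this chain and thus decouple the shared cues from the wearer's movements.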
Advanced Evaluation. The majority of the reviewed papers evaluated collaborative systems with two participants. While some pairs of collaborators knew each other prior to the experiment, others had never met before, and some researchers (e.g., [20,21]) employed actors to collaborate with all participants. The latter approach makes results more comparable but at the same time harder to transfer to real use cases. As such, it remains unclear if a system that turned out to support collaboration between an instructed and an uninstructed person will do so as well when used by two uninstructed collaborators. Furthermore, many studies were limited to simplified, short-term tasks. Hence, further research should take into account the evaluation with multiple collaborators performing real-world and long-term tasks.
Task Taxonomy. Due to the variety of visualization and interaction techniques offered in XR, these technologies are deemed supportive in various fields of application. As each field of application comes with individual requirements, developing and configuring XR_S spaces for individual use cases could be supported by task taxonomies that describe which collaborator needs to perform which actions and needs to access which parts of the joint space. Establishing task taxonomies for the individual use cases would also highlight which tasks are relevant in multiple use cases, such that future research efforts can be prioritized accordingly.
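Purely as an illustration of the kind of structure such a taxonomy could take (the field names and example tasks below are hypothetical), a single entry might record the task, the role performing it, the required actions, and the parts of the joint space it needs access to:

```python
from collections import Counter
from dataclasses import dataclass
from typing import FrozenSet, List

@dataclass(frozen=True)
class TaskEntry:
    """One row of a hypothetical task taxonomy for configuring an XR_S space."""
    task: str
    role: str                           # e.g., "on-site worker", "off-site expert"
    actions: FrozenSet[str]             # e.g., selection, manipulation, navigation, annotation
    accessible_regions: FrozenSet[str]  # parts of the joint space the role may reference or edit

taxonomy: List[TaskEntry] = [
    TaskEntry("inspect assembly", "off-site expert",
              frozenset({"navigation", "annotation"}), frozenset({"engine bay"})),
    TaskEntry("replace component", "on-site worker",
              frozenset({"selection", "manipulation", "navigation"}),
              frozenset({"engine bay", "tool shelf"})),
]

# Actions that recur across tasks hint at interaction techniques worth prioritizing in future research.
action_counts = Counter(action for entry in taxonomy for action in entry.actions)
print(action_counts.most_common())  # navigation appears in both example tasks
```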
Training Programs. XR_S spaces are meant to serve as very flexible, long-term working environments that allow each user to easily switch between different technologies. Hence, once XR_S spaces are ready to be applied in practice, appropriate training methodologies will be needed for the different use cases, technologies, and users with different levels of expertise.

4.2. Scalability between Different Devices

Intuitiveness. To enhance scalability between HMDs and HHDs, users should be provided with visualization and interaction techniques that remain intuitive to them when switching between these devices. To this end, it should be researched which kind of mapping between interaction techniques for 2D and 3D displays is perceived as intuitive. In particular, it needs to be investigated whether and how intuitiveness is affected by previous experiences and well-known, established interaction paradigms. For instance, certain touch-based interaction techniques are internalized by many people using smartphones in their everyday lives. As such, it remains to be researched whether modifying these interaction techniques in a way that may objectively appear more intuitive would confuse users rather than support them.
Interpreting Sensor Data. In order to implement collaboration support features, previous research made use of sensors incorporated in XR devices, such as the orientation of HMDs to render gaze rays. While the orientation of HHDs can also be obtained from incorporated sensors, it should be noted that an HHD's orientation does not always correspond to the user's actual gaze direction. Hence, transferring collaboration support features that are implemented for a specific device to a different device is not straightforward and could cause misunderstandings. To prevent them, future research should also focus on how available data are to be interpreted in the context of different use cases.
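A small hypothetical sketch illustrates the pitfall: for an HMD, the device's forward vector is a reasonable proxy for gaze, whereas the same computation on an HHD yields the direction the device is pointed, not necessarily where the user is looking.

```python
import numpy as np

def forward_ray(device_rotation: np.ndarray, device_position: np.ndarray):
    """Ray along the device's local -Z axis (a common 'forward' convention; an assumption here)."""
    direction = device_rotation @ np.array([0.0, 0.0, -1.0])
    return device_position, direction / np.linalg.norm(direction)

# HMD: head pose approximates gaze, so the forward ray is a usable gaze cue.
hmd_origin, hmd_gaze = forward_ray(np.eye(3), np.array([0.0, 1.7, 0.0]))

# HHD: the tablet may be held at waist height and tilted while the user looks elsewhere;
# the identical computation now describes the device, not the user's gaze.
tilt = np.array([[1.0, 0.0, 0.0],
                 [0.0, np.cos(np.radians(45)), -np.sin(np.radians(45))],
                 [0.0, np.sin(np.radians(45)),  np.cos(np.radians(45))]])
hhd_origin, hhd_pointing = forward_ray(tilt, np.array([0.2, 1.1, 0.0]))
# Rendering hhd_pointing as a 'gaze ray' for collaborators could therefore be misleading.
```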
Input and Output Mapping. The mapping of input and output between HMDs and HHDs should be designed based on newly gained insights concerning intuitiveness, the interpretation of sensor data, and established task taxonomies. Thus, the available input modalities of the different devices should be exploited in a way that allows users to intuitively enter information with both HMDs and HHDs. In addition, the design of the output mapping should take into account varying display sizes. As pointed out by multiple authors, using small displays can either cause occlusion when large virtual augmentations are rendered on small screens or impede the detection of virtual augmentations when they fall outside the limited field of view. Hence, it needs to be investigated whether and how augmentations can be adapted to the display size.
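One conceivable (purely illustrative) way to reason about such adaptation is to compare an augmentation's angular size with the device's field of view and shrink it when it would not fit; the numbers and the scaling policy below are assumptions for the example, not recommendations from the paper.

```python
import math

def fits_field_of_view(object_size_m: float, distance_m: float, fov_deg: float,
                       max_fraction: float = 0.8) -> bool:
    """True if the augmentation's angular size stays below a fraction of the display's FOV."""
    angular_size = 2.0 * math.degrees(math.atan((object_size_m / 2.0) / distance_m))
    return angular_size <= max_fraction * fov_deg

def adapted_scale(object_size_m: float, distance_m: float, fov_deg: float,
                  max_fraction: float = 0.8) -> float:
    """Scale factor (<= 1) that shrinks the augmentation until it fits the FOV budget."""
    target_size = 2.0 * distance_m * math.tan(math.radians(max_fraction * fov_deg / 2.0))
    return min(1.0, target_size / object_size_m)

# Example: a 1.2 m wide virtual part viewed from 1 m away.
print(fits_field_of_view(1.2, 1.0, fov_deg=43))   # narrow MR-HMD-like FOV: does not fit
print(adapted_scale(1.2, 1.0, fov_deg=43))        # shrink to roughly 0.5x
print(fits_field_of_view(1.2, 1.0, fov_deg=120))  # wide HHD camera view: fits
```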

4.3. Scalability between Different Degrees of Virtuality

Visual Quality. To achieve optimal usability, future research should also focus on the trade-off between visual quality and latency. In this context, it should be investigated which parts of a physical scene need to be reconstructed at which time intervals. For instance, in large XR_S spaces, real-time reconstruction might not be necessary for the entire scene but only for particular parts. As such, reducing the overall amount of processed data could help to enhance the visual quality of particular areas.
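As a hedged sketch of what such selective reconstruction could look like (the priority heuristic and the per-cycle budget are assumptions chosen for illustration), scene regions might be ranked by proximity to the user's focus and by how recently they changed, and only the top-ranked regions re-reconstructed in each update cycle:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Region:
    name: str
    distance_to_focus_m: float   # distance from the user's current focus point
    seconds_since_change: float  # how long ago dynamic changes were last detected

def reconstruction_schedule(regions: List[Region], budget: int) -> List[Region]:
    """Pick the regions to re-reconstruct this cycle: close to the focus and recently changed first."""
    def priority(r: Region) -> float:
        return 1.0 / (1.0 + r.distance_to_focus_m) + 1.0 / (1.0 + r.seconds_since_change)
    return sorted(regions, key=priority, reverse=True)[:budget]

scene = [Region("workbench", 0.5, 2.0), Region("far wall", 8.0, 600.0), Region("doorway", 4.0, 5.0)]
for region in reconstruction_schedule(scene, budget=2):
    print("reconstruct now:", region.name)   # workbench and doorway; the far wall can wait
```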
Scalable Reconstruction Techniques. In the previous section, we presented a variety of existing techniques for reconstructing physical scenes along with their individual advantages and drawbacks. Considering the practical adoption of these techniques, physical scenes might differ in surface materials, lighting conditions, and size, such that different reconstruction techniques are appropriate for different kinds of environments. To foster scalability, further research should be dedicated to developing reconstruction techniques that are applicable to a variety of physical scenes.
Bidirectional Interaction. Previous research has focused rather separately on the reconstruction and semantic segmentation of physical scenes that are either static or dynamic. As such, future research should take into account the integration of existing findings in these different research fields to provide consistent and accessible visualizations that allow bidirectional interaction (i.e., both on-site and off-site collaborators should be able to reference and manipulate certain parts in the XR_S space).

4.4. Scalability between Different Numbers of Collaborators

Access Control. Access control features implemented by previous research include simultaneous manipulation via the multiplication of transformation matrices as well as locking objects for all but one collaborator. Considering an increasing number of collaborators, simultaneous manipulation could lead to objects being moved or rotated too far. Consequently, they have to be manipulated back and forth, which could impede collaboration rather than support it. Hence, it should be evaluated for what number of collaborators simultaneous manipulation is feasible and in which cases ownership should be given to single collaborators.
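The two access-control strategies mentioned above can be sketched in a few lines (a simplified illustration with 4x4 homogeneous transforms and a hypothetical lock field; a real system would additionally need conflict resolution and networking):

```python
import numpy as np

class SharedObject:
    def __init__(self, name: str):
        self.name = name
        self.transform = np.eye(4)   # object pose as a 4x4 homogeneous matrix
        self.owner = None            # collaborator holding the exclusive lock, if any

    # Strategy 1: simultaneous manipulation -- concurrent edits are combined by
    # multiplying the incoming transformation matrices onto the current pose.
    def apply_concurrent(self, *transforms: np.ndarray) -> None:
        for t in transforms:
            self.transform = t @ self.transform

    # Strategy 2: locking -- only the collaborator holding ownership may manipulate.
    def try_acquire(self, collaborator: str) -> bool:
        if self.owner is None:
            self.owner = collaborator
            return True
        return False

    def apply_locked(self, collaborator: str, t: np.ndarray) -> None:
        if self.owner != collaborator:
            raise PermissionError(f"{collaborator} does not own {self.name}")
        self.transform = t @ self.transform

def translation(x: float, y: float, z: float) -> np.ndarray:
    t = np.eye(4)
    t[:3, 3] = [x, y, z]
    return t

obj = SharedObject("virtual prototype")
# Two collaborators translating at once: the offsets accumulate and the object ends up
# further away than either intended -- the 'moved too far' problem described above.
obj.apply_concurrent(translation(0.3, 0, 0), translation(0.3, 0, 0))
print(obj.transform[:3, 3])   # [0.6, 0, 0]
```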
Viewpoint-Independent Augmentations. With an increasing number of collaborators, especially in co-located settings, collaborators will no longer be able to stand next to each other and share more or less the same perspective of the XR_S scene. Instead, they might gather around virtual augmentations in circles, facing them from different perspectives. To avoid misunderstandings and maintain collaboration support, it needs to be investigated how visualizations can be adjusted in a way that allows collaborators to access the virtual objects from different perspectives.
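One technique that could be explored for this purpose (offered purely as an illustrative example; the agenda does not prescribe a specific solution) is per-viewer billboarding, in which orientation-sensitive content such as text labels is turned toward each collaborator's own viewpoint in their individual rendering:

```python
import numpy as np

def billboard_rotation(label_position: np.ndarray, viewer_position: np.ndarray) -> np.ndarray:
    """Rotation matrix turning a label's local +Z axis toward the given viewer (yaw only)."""
    to_viewer = viewer_position - label_position
    to_viewer[1] = 0.0                       # keep the label upright: rotate around the vertical axis
    to_viewer /= np.linalg.norm(to_viewer)
    up = np.array([0.0, 1.0, 0.0])
    right = np.cross(up, to_viewer)
    right /= np.linalg.norm(right)
    return np.column_stack((right, up, to_viewer))  # columns: local x, y, z axes in world coordinates

label = np.array([0.0, 1.0, 0.0])
# Each collaborator standing at a different spot around the augmentation gets their own
# orientation of the label, so everyone can read it regardless of viewpoint.
for viewer in [np.array([2.0, 1.6, 0.0]), np.array([-2.0, 1.6, 0.0]), np.array([0.0, 1.6, 2.0])]:
    print(billboard_rotation(label, viewer) @ np.array([0.0, 0.0, 1.0]))  # label normal per viewer
```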
Awareness Cues. Previous research has proposed different kinds of awareness cues indicating what collaborators do or where they are located in space. However, the majority of the reviewed user studies were limited to two collaborators. In settings that include multiple collaborators, rendering awareness cues such as gaze rays for all collaborators at once is likely to produce visual clutter, which may increase cognitive load. To avoid cognitive overload while maintaining optimal performance, each collaborator should be provided only with those awareness cues that deliver the information needed for task completion. Since tasks and cognitive load may vary strongly among collaborators, rendering individual views for each collaborator should be taken into consideration.
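As a rough illustration of such per-collaborator filtering (the relevance criterion used here, a shared task assignment, is an assumption chosen for the example), each individual view could render only the cues of collaborators working on the same task:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AwarenessCue:
    source: str   # collaborator the cue belongs to
    kind: str     # e.g., "gaze ray", "hand gesture", "location marker"
    task: str     # task the source collaborator is currently working on

def cues_for(viewer_task: str, viewer_name: str, all_cues: List[AwarenessCue]) -> List[AwarenessCue]:
    """Individual view: show only cues from other collaborators on the same task."""
    return [c for c in all_cues if c.task == viewer_task and c.source != viewer_name]

cues = [
    AwarenessCue("Alice", "gaze ray", "wiring"),
    AwarenessCue("Bob", "hand gesture", "wiring"),
    AwarenessCue("Carol", "gaze ray", "inspection"),
]
# Alice's view omits Carol's gaze ray, reducing clutter without hiding task-relevant cues.
print([c.source for c in cues_for("wiring", "Alice", cues)])   # ['Bob']
```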

5. Conclusions

In this paper, we introduce the concept of XR_S spaces scaling between different degrees of virtuality, different devices, and different numbers of possibly distributed users. We believe that increasing scalability holds great potential to enhance the practical adoption of XR technologies, as such highly flexible, long-term training or working environments are expected to reduce costs and enhance memorability. However, developing such applications is not straightforward and involves interdisciplinary research. In fact, we consider the following topics to be the most relevant research fields for developing XR_S spaces: (1) collaboration support features such as access control, as well as awareness cues that indicate where the other collaborators are and what they do; (2) consistent and accessible visualizations incorporating the semantic segmentation of virtually reconstructed physical scenes; (3) interaction techniques for selection, manipulation, and navigation in XR_S that remain intuitive to the users even when they switch between devices or degrees of virtuality.
While these topics have so far been investigated rather separately, we propose integrating the outcomes of previous works to enhance scalability. Developing XR_S spaces is related to several independent research fields that existed long before the emergence of XR technologies. In this paper, we focus on the most recent research papers in these fields dealing with AR, MR, or VR applications. Thus, as a first step towards building XR_S spaces, we review related work and highlight challenges arising from the integration of previous research outcomes. Based on this, we propose an agenda of highly relevant questions that should be addressed by future research in order to build XR_S spaces that fully exploit the potential inherent in XR technologies.

Author Contributions

Conceptualization, V.M.M. and A.E.; references—selection and review, V.M.M.; writing—original draft preparation, V.M.M.; writing—review and editing, V.M.M. and A.E.; visualization, V.M.M.; project administration, A.E.; funding acquisition, A.E. All authors have read and agreed to the published version of the manuscript.

Funding

Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)—252408385—IRTG 2057.

Institutional Review Board Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AR: Augmented reality
AV: Augmented virtuality
DoFs: Degrees of freedom
HMD: Head-mounted display
HHD: Handheld display
IMU: Inertial measurement unit
MR: Mixed reality
VR: Virtual reality
XR: Extended or cross reality
XR_S: Scalable extended reality

References

1. Milgram, P.; Kishino, F. A Taxonomy of Mixed Reality Visual Displays. IEICE Trans. Inf. Syst. 1994, 77, 1321–1329.
2. Milgram, P.; Takemura, H.; Utsumi, A.; Kishino, F. Augmented Reality: A class of displays on the reality-virtuality continuum. In Telemanipulator and Telepresence Technologies; Das, H., Ed.; SPIE: Bellingham, WA, USA, 1995; Volume 2351, pp. 282–292.
3. iPad Pro. Available online: https://www.apple.com/ipad-pro/ (accessed on 28 October 2021).
4. VIVE Pro Series. Available online: https://www.vive.com/us/product/#pro%20series (accessed on 28 October 2021).
5. Oculus Headsets. Available online: https://www.oculus.com/compare/ (accessed on 17 January 2022).
6. HoloLens 2. Available online: https://www.microsoft.com/en-us/hololens/hardware (accessed on 28 October 2021).
7. Magic Leap. Available online: https://www.magicleap.com/en-us/magic-leap-1 (accessed on 17 January 2022).
8. Glass. Available online: https://www.google.com/glass/tech-specs/ (accessed on 17 January 2022).
9. Perry, T.S. Look Out for Apple's AR Glasses: With head-up displays, cameras, inertial sensors, and lidar on board, Apple's augmented-reality glasses could redefine wearables. IEEE Spectr. 2021, 58, 26–54.
10. Leap Motion Controller. Available online: https://www.ultraleap.com/product/leap-motion-controller/ (accessed on 28 October 2021).
11. Johansen, R. Teams for tomorrow (groupware). In Proceedings of the Twenty-Fourth Annual Hawaii International Conference on System Sciences, Kauai, HI, USA, 8–11 January 1991; IEEE: Piscataway, NJ, USA, 1991; Volume 3, pp. 521–534.
12. Wells, T.; Houben, S. CollabAR—Investigating the Mediating Role of Mobile AR Interfaces on Co-Located Group Collaboration. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; ACM: New York, NY, USA, 2020.
13. Grandi, J.G.; Debarba, H.G.; Bemdt, I.; Nedel, L.; Maciel, A. Design and Assessment of a Collaborative 3D Interaction Technique for Handheld Augmented Reality. In Proceedings of the 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Tuebingen/Reutlingen, Germany, 18–22 March 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 49–56.
14. Bai, H.; Sasikumar, P.; Yang, J.; Billinghurst, M. A User Study on Mixed Reality Remote Collaboration with Eye Gaze and Hand Gesture Sharing. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; ACM: New York, NY, USA, 2020.
15. Huang, W.; Alem, L.; Tecchia, F.; Duh, H.B.L. Augmented 3D hands: A gesture-based mixed reality system for distributed collaboration. J. Multimodal User Interfaces 2018, 12, 77–89.
16. Kim, S.; Lee, G.; Huang, W.; Kim, H.; Woo, W.; Billinghurst, M. Evaluating the Combination of Visual Communication Cues for HMD-based Mixed Reality Remote Collaboration. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; ACM: New York, NY, USA, 2019.
17. Lee, G.A.; Teo, T.; Kim, S.; Billinghurst, M. A User Study on MR Remote Collaboration Using Live 360 Video. In Proceedings of the 2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Munich, Germany, 16–20 October 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 153–164.
18. De Pace, F.; Manuri, F.; Sanna, A.; Zappia, D. A Comparison between Two Different Approaches for a Collaborative Mixed-Virtual Environment in Industrial Maintenance. Front. Robot. AI 2019, 6, 18.
19. Pereira, V.; Matos, T.; Rodrigues, R.; Nóbrega, R.; Jacob, J. Extended Reality Framework for Remote Collaborative Interactions in Virtual Environments. In Proceedings of the 2019 International Conference on Graphics and Interaction (ICGI), Faro, Portugal, 21–22 November 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 17–24.
20. Piumsomboon, T.; Lee, G.A.; Hart, J.D.; Ens, B.; Lindeman, R.W.; Thomas, B.H.; Billinghurst, M. Mini-Me: An Adaptive Avatar for Mixed Reality Remote Collaboration. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; ACM: New York, NY, USA, 2018.
21. Piumsomboon, T.; Lee, G.A.; Irlitti, A.; Ens, B.; Thomas, B.H.; Billinghurst, M. On the Shoulder of the Giant: A Multi-Scale Mixed Reality Collaboration with 360 Video Sharing and Tangible Interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; ACM: New York, NY, USA, 2019.
22. Teo, T.; Hayati, A.F.; Lee, G.A.; Billinghurst, M.; Adcock, M. A Technique for Mixed Reality Remote Collaboration using 360 Panoramas in 3D Reconstructed Scenes. In Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology, Parramatta, NSW, Australia, 12–15 November 2019; ACM: New York, NY, USA, 2019.
23. Marks, S.; White, D. Multi-Device Collaboration in Virtual Environments. In Proceedings of the 2020 4th International Conference on Virtual and Augmented Reality Simulations, Sydney, NSW, Australia, 14–16 February 2020; ACM: New York, NY, USA, 2020; pp. 35–38.
24. Ibayashi, H.; Sugiura, Y.; Sakamoto, D.; Miyata, N.; Tada, M.; Okuma, T.; Kurata, T.; Mochimaru, M.; Igarashi, T. Dollhouse VR: A Multi-View, Multi-User Collaborative Design Workspace with VR Technology. In Proceedings of the SIGGRAPH Asia 2015 Emerging Technologies, Kobe, Japan, 2–6 November 2015; ACM: New York, NY, USA, 2015.
25. García-Pereira, I.; Gimeno, J.; Pérez, M.; Portalés, C.; Casas, S. MIME: A Mixed-Space Collaborative System with Three Immersion Levels and Multiple Users. In Proceedings of the 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), Munich, Germany, 16–20 October 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 179–183.
26. Yu, D.; Jiang, W.; Wang, C.; Dingler, T.; Velloso, E.; Goncalves, J. ShadowDancXR: Body Gesture Digitization for Low-cost Extended Reality (XR) Headsets. In Proceedings of the Companion, 2020 Conference on Interactive Surfaces and Spaces, Virtual Event, Portugal, 8–11 November 2020; ACM: New York, NY, USA, 2020; pp. 79–80.
27. Ahuja, K.; Goel, M.; Harrison, C. BodySLAM: Opportunistic User Digitization in Multi-User AR/VR Experiences. In Proceedings of the Symposium on Spatial User Interaction, Virtual Event, Canada, 30 October–1 November 2020; ACM: New York, NY, USA, 2020.
28. Erickson, A.; Norouzi, N.; Kim, K.; Schubert, R.; Jules, J.; LaViola, J.J., Jr.; Bruder, G.; Welch, G.F. Sharing gaze rays for visual target identification tasks in collaborative augmented reality. J. Multimodal User Interfaces 2020, 14, 353–371.
29. Mohr, P.; Mori, S.; Langlotz, T.; Thomas, B.H.; Schmalstieg, D.; Kalkofen, D. Mixed Reality Light Fields for Interactive Remote Assistance. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; ACM: New York, NY, USA, 2020.
30. Lindlbauer, D.; Wilson, A.D. Remixed Reality: Manipulating Space and Time in Augmented Reality. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; ACM: New York, NY, USA, 2018.
31. Tanaya, M.; Yang, K.; Christensen, T.; Li, S.; O'Keefe, M.; Fridley, J.; Sung, K. A Framework for analyzing AR/VR Collaborations: An initial result. In Proceedings of the 2017 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA), Annecy, France, 26–28 June 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 111–116.
32. Teo, T.; Lawrence, L.; Lee, G.A.; Billinghurst, M.; Adcock, M. Mixed Reality Remote Collaboration Combining 360 Video and 3D Reconstruction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; ACM: New York, NY, USA, 2019.
33. Teo, T.; Norman, M.; Lee, G.A.; Billinghurst, M.; Adcock, M. Exploring interaction techniques for 360 panoramas inside a 3D reconstructed scene for mixed reality remote collaboration. J. Multimodal User Interfaces 2020, 14, 373–385.
34. Stotko, P.; Krumpen, S.; Hullin, M.B.; Weinmann, M.; Klein, R. SLAMCast: Large-Scale, Real-Time 3D Reconstruction and Streaming for Immersive Multi-Client Live Telepresence. IEEE Trans. Vis. Comput. Graph. 2019, 25, 2102–2112.
35. Jeršov, S.; Tepljakov, A. Digital Twins in Extended Reality for Control System Applications. In Proceedings of the 2020 43rd International Conference on Telecommunications and Signal Processing (TSP), Milan, Italy, 7–9 July 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 274–279.
36. Schütt, P.; Schwarz, M.; Behnke, S. Semantic Interaction in Augmented Reality Environments for Microsoft HoloLens. In Proceedings of the 2019 European Conference on Mobile Robots (ECMR), Prague, Czech Republic, 4–6 September 2019; IEEE: Piscataway, NJ, USA, 2019.
37. Dai, A.; Chang, A.X.; Savva, M.; Halber, M.; Funkhouser, T.; Nießner, M. ScanNet: Richly-Annotated 3D Reconstructions of Indoor Scenes. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 2432–2443.
38. Huang, S.S.; Ma, Z.Y.; Mu, T.J.; Fu, H.; Hu, S.M. Supervoxel Convolution for Online 3D Semantic Segmentation. ACM Trans. Graph. 2021, 40, 1–15.
39. Gauglitz, S.; Lee, C.; Turk, M.; Höllerer, T. Integrating the physical environment into mobile remote collaboration. In Proceedings of the 14th International Conference on Human-Computer Interaction with Mobile Devices and Services, San Francisco, CA, USA, 21–24 September 2012; ACM: New York, NY, USA, 2012; pp. 241–250.
40. Gauglitz, S.; Nuernberger, B.; Turk, M.; Höllerer, T. In Touch with the Remote World: Remote Collaboration with Augmented Reality Drawings and Virtual Navigation. In Proceedings of the 20th ACM Symposium on Virtual Reality Software and Technology, Edinburgh, UK, 11–13 November 2014; ACM: New York, NY, USA, 2014; pp. 197–205.
41. Gauglitz, S.; Nuernberger, B.; Turk, M.; Höllerer, T. World-stabilized annotations and virtual scene navigation for remote collaboration. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, Honolulu, HI, USA, 5–8 October 2014; ACM: New York, NY, USA, 2014; pp. 449–459.
42. Nuernberger, B.; Lien, K.C.; Grinta, L.; Sweeney, C.; Turk, M.; Höllerer, T. Multi-view gesture annotations in image-based 3D reconstructed scenes. In Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology, Munich, Germany, 2–4 November 2016; ACM: New York, NY, USA, 2016; pp. 129–138.
43. Nuernberger, B.; Lien, K.C.; Höllerer, T.; Turk, M. Interpreting 2D gesture annotations in 3D augmented reality. In Proceedings of the 2016 IEEE Symposium on 3D User Interfaces (3DUI), Greenville, SC, USA, 19–20 March 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 149–158.
44. Lien, K.C.; Nuernberger, B.; Höllerer, T.; Turk, M. PPV: Pixel-Point-Volume Segmentation for Object Referencing in Collaborative Augmented Reality. In Proceedings of the 2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Merida, Mexico, 19–23 September 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 77–83.
45. Kiss, F.; Woźniak, P.W.; Biener, V.; Knierim, P.; Schmidt, A. VUM: Understanding Requirements for a Virtual Ubiquitous Microscope. In Proceedings of the 19th International Conference on Mobile and Ubiquitous Multimedia, Essen, Germany, 22–25 November 2020; ACM: New York, NY, USA, 2020; pp. 259–266.
46. Chang, Y.S.; Nuernberger, B.; Luan, B.; Höllerer, T. Evaluating gesture-based augmented reality annotation. In Proceedings of the 2017 IEEE Symposium on 3D User Interfaces (3DUI), Los Angeles, CA, USA, 18–19 March 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 182–185.
47. Surale, H.B.; Matulic, F.; Vogel, D. Experimental Analysis of Barehand Mid-air Mode-Switching Techniques in Virtual Reality. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; ACM: New York, NY, USA, 2019.
48. Chaconas, N.; Höllerer, T. An Evaluation of Bimanual Gestures on the Microsoft HoloLens. In Proceedings of the 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Tuebingen/Reutlingen, Germany, 18–22 March 2018; IEEE: Piscataway, NJ, USA, 2018.
49. Schwind, V.; Mayer, S.; Comeau-Vermeersch, A.; Schweigert, R.; Henze, N. Up to the Finger Tip: The Effect of Avatars on Mid-Air Pointing Accuracy in Virtual Reality. In Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play, Melbourne, VIC, Australia, 28–31 October 2018; ACM: New York, NY, USA, 2018; pp. 477–488.
50. Brasier, E.; Chapuis, O.; Ferey, N.; Vezien, J.; Appert, C. ARPads: Mid-air Indirect Input for Augmented Reality. In Proceedings of the 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Porto de Galinhas, Brazil, 9–13 November 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 332–343.
51. Satriadi, K.A.; Ens, B.; Cordeil, M.; Jenny, B.; Czauderna, T.; Willett, W. Augmented Reality Map Navigation with Freehand Gestures. In Proceedings of the 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Osaka, Japan, 23–27 March 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 593–603.
52. von Willich, J.; Schmitz, M.; Müller, F.; Schmitt, D.; Mühlhäuser, M. Podoportation: Foot-Based Locomotion in Virtual Reality. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; ACM: New York, NY, USA, 2020.
53. Müller, F.; McManus, J.; Günther, S.; Schmitz, M.; Mühlhäuser, M.; Funk, M. Mind the Tap: Assessing Foot-Taps for Interacting with Head-Mounted Displays. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; ACM: New York, NY, USA, 2019.
54. Botev, J.; Mayer, J.; Rothkugel, S. Immersive mixed reality object interaction for collaborative context-aware mobile training and exploration. In Proceedings of the 11th ACM Workshop on Immersive Mixed and Virtual Environment Systems, Amherst, MA, USA, 18 June 2019; ACM: New York, NY, USA, 2019; pp. 4–9.
55. Biener, V.; Schneider, D.; Gesslein, T.; Otte, A.; Kuth, B.; Kristensson, P.O.; Ofek, E.; Pahud, M.; Grubert, J. Breaking the Screen: Interaction Across Touchscreen Boundaries in Virtual Reality for Mobile Knowledge Workers. IEEE Trans. Vis. Comput. Graph. 2020, 26, 3490–3502.
56. Ro, H.; Byun, J.H.; Park, Y.J.; Lee, N.K.; Han, T.D. AR Pointer: Advanced Ray-Casting Interface Using Laser Pointer Metaphor for Object Manipulation in 3D Augmented Reality Environment. Appl. Sci. 2019, 9, 3078.
57. Lee, L.H.; Zhu, Y.; Yau, Y.P.; Braud, T.; Su, X.; Hui, P. One-thumb Text Acquisition on Force-assisted Miniature Interfaces for Mobile Headsets. In Proceedings of the 2020 IEEE International Conference on Pervasive Computing and Communications (PerCom), Austin, TX, USA, 23–27 March 2020; IEEE: Piscataway, NJ, USA, 2020.
58. Chen, Y.; Katsuragawa, K.; Lank, E. Understanding Viewport- and World-based Pointing with Everyday Smart Devices in Immersive Augmented Reality. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; ACM: New York, NY, USA, 2020.
59. Fuvattanasilp, V.; Fujimoto, Y.; Plopski, A.; Taketomi, T.; Sandor, C.; Kanbara, M.; Kato, H. SlidAR+: Gravity-aware 3D object manipulation for handheld augmented reality. Comput. Graph. 2021, 95, 23–35.
60. Bozgeyikli, E.; Bozgeyikli, L.L. Evaluating Object Manipulation Interaction Techniques in Mixed Reality: Tangible User Interfaces and Gesture. In Proceedings of the 2021 IEEE Virtual Reality and 3D User Interfaces (VR), Lisboa, Portugal, 27 March–1 April 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 778–787.
61. Besançon, L.; Sereno, M.; Yu, L.; Ammi, M.; Isenberg, T. Hybrid Touch/Tangible Spatial 3D Data Selection. Comput. Graph. Forum 2019, 38, 553–567.
62. Wacker, P.; Nowak, O.; Voelker, S.; Borchers, J. ARPen: Mid-Air Object Manipulation Techniques for a Bimanual AR System with Pen & Smartphone. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; ACM: New York, NY, USA, 2019.
63. Nukarinen, T.; Kangas, J.; Rantala, J.; Koskinen, O.; Raisamo, R. Evaluating ray casting and two gaze-based pointing techniques for object selection in virtual reality. In Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology, Tokyo, Japan, 28 November–1 December 2018; ACM: New York, NY, USA, 2018.
64. Bâce, M.; Leppänen, T.; de Gomez, D.G.; Gomez, A.R. ubiGaze: Ubiquitous Augmented Reality Messaging Using Gaze Gestures. In Proceedings of the SIGGRAPH ASIA 2016 Mobile Graphics and Interactive Applications, Macau, 5–8 December 2016; ACM: New York, NY, USA, 2016.
65. Hassoumi, A.; Hurter, C. Eye Gesture in a Mixed Reality Environment. In Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications—HUCAPP, Prague, Czech Republic, 25–27 February 2019; pp. 183–187.
66. Fikkert, W.; D'Ambros, M.; Bierz, T.; Jankun-Kelly, T.J. Interacting with Visualizations. In Proceedings of the Human-Centered Visualization Environments: GI-Dagstuhl Research Seminar, Dagstuhl Castle, Germany, 5–8 March 2006; Revised Lectures. Kerren, A., Ebert, A., Meyer, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2007; pp. 77–162.
67. Al-Rahayfeh, A.; Faezipour, M. Eye Tracking and Head Movement Detection: A State-of-Art Survey. IEEE J. Transl. Eng. Health Med. 2013, 1, 2100212.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
