Mixed Reality Interfaces

A special issue of Multimodal Technologies and Interaction (ISSN 2414-4088).

Deadline for manuscript submissions: closed (28 February 2019)

Special Issue Editors


Prof. Boriana Koleva
Guest Editor
School of Computer Science, University of Nottingham, Nottingham NG8 1BB, UK
Interests: human–computer interaction; mixed reality

Dr. Joseph Marshall
Guest Editor
School of Computer Science, University of Nottingham, Nottingham, UK
Interests: computer science; human–computer interaction; interactive art; exercise

Special Issue Information

Dear Colleagues,

Mixed Reality Interfaces allow users to interact with the physical and digital elements of an experience in an integrated way. With developments in smartphone technology, such as 3D depth sensing and dedicated support for augmented reality, along with the release of affordable headsets, Mixed Reality experiences are finally becoming accessible to the general public. They have the potential to support a wide range of applications, including gaming, sport, museum visiting, healthcare, education, and training.

It is therefore timely to re-examine approaches to blending the physical and the virtual, explore novel applications, conduct ‘in the wild’ evaluations, and develop toolkits and guidelines for designers and developers.

We invite contributions to the Multimodal Technologies and Interaction Special Issue on Mixed Reality Interfaces. Contributions should address contemporary technical, theoretical, and application challenges of Mixed Reality Interfaces, including, but not limited to, the following:

  • Novel software developments (interfaces, authoring tools, sensor algorithms)
  • Novel hardware (headsets, new smartphone hardware, tangible objects…)
  • User evaluations and studies of systems (experimental, in-the-wild…)
  • Theories of the use of Mixed Reality Interfaces

Prof. Boriana Koleva
Dr. Joseph Marshall
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Multimodal Technologies and Interaction is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (3 papers)


Research

24 pages, 276 KiB  
Article
IMPAct: A Holistic Framework for Mixed Reality Robotic User Interface Classification and Design
by Dennis Krupke, Jianwei Zhang and Frank Steinicke
Multimodal Technol. Interact. 2019, 3(2), 25; https://doi.org/10.3390/mti3020025 - 11 Apr 2019
Cited by 2 | Viewed by 3530
Abstract
The number of scientific publications combining robotic user interfaces and mixed reality highly increased during the 21st Century. Counting the number of yearly added publications containing the keywords “mixed reality” and “robot” listed on Google Scholar indicates exponential growth. The interdisciplinary nature of mixed reality robotic user interfaces (MRRUI) makes them very interesting and powerful, but also very challenging to design and analyze. Many single aspects have already been successfully provided with theoretical structure, but to the best of our knowledge, there is no contribution combining everything into an MRRUI taxonomy. In this article, we present the results of an extensive investigation of relevant aspects from prominent classifications and taxonomies in the scientific literature. During a card sorting experiment with professionals from the field of human–computer interaction, these aspects were clustered into named groups for providing a new structure. Further categorization of these groups into four different categories was obvious and revealed a memorable structure. Thus, this article provides a framework of objective, technical factors, which finds its application in a precise description of MRRUIs. An example shows the effective use of the proposed framework for precise system description, therefore contributing to a better understanding, design, and comparison of MRRUIs in this growing field of research.
(This article belongs to the Special Issue Mixed Reality Interfaces)

15 pages, 5305 KiB  
Article
Tango vs. HoloLens: A Comparison of Collaborative Indoor AR Visualisations Using Hand-Held and Hands-Free Devices
by Urs Riedlinger, Leif Oppermann and Wolfgang Prinz
Multimodal Technol. Interact. 2019, 3(2), 23; https://doi.org/10.3390/mti3020023 - 03 Apr 2019
Cited by 27 | Viewed by 4805
Abstract
In this article, we compare a Google Tango tablet with the Microsoft HoloLens smartglasses in the context of the visualisation and interaction with Building Information Modeling data. A user test was conducted where 16 participants solved four tasks, two for each device, in small teams of two. Two aspects are analysed in the user test: the visualisation of interior designs and the visualisation of Building Information Modeling data. The results show that the Tango tablet is surprisingly preferred by most users when it comes to collaboration and discussion in our scenario. While the HoloLens offers hands-free operation and a stable tracking, users mentioned that the interaction with the Tango tablet felt more natural. In addition, users reported that it was easier to get an overall impression with the Tango tablet rather than with the HoloLens smartglasses.
(This article belongs to the Special Issue Mixed Reality Interfaces)

16 pages, 2944 KiB  
Article
Design and Evaluation of a Mixed-Reality Playground for Child-Robot Games
by Maria Luce Lupetti, Giovanni Piumatti, Claudio Germak and Fabrizio Lamberti
Multimodal Technol. Interact. 2018, 2(4), 69; https://doi.org/10.3390/mti2040069 - 06 Oct 2018
Cited by 5 | Viewed by 3197
Abstract
In this article we present the Phygital Game project, a mixed-reality game platform in which children can play with or against a robot. The project was developed by adopting a human-centered design approach, characterized by the engagement of both children and parents in the design process, and situating the game platform in a real context—an educational center for children. We report the results of both the preliminary studies and the final testing session, which focused on the evaluation of usability factors. By providing a detailed description of the process and the results, this work aims at sharing the findings and the lessons learned about both the implications of adopting a human-centered approach across the whole design process and the specific challenges of developing a mixed-reality playground.
(This article belongs to the Special Issue Mixed Reality Interfaces)
