Special Issue "Mixed Reality Interfaces"

A special issue of Multimodal Technologies and Interaction (ISSN 2414-4088).

Deadline for manuscript submissions: closed (28 February 2019)

Special Issue Editors

Guest Editor
Prof. Boriana Koleva

University of Nottingham, Nottingham, United Kingdom
Interests: human–computer interaction; mixed reality
Guest Editor
Dr. Joseph Marshall

School of Computer Science, University of Nottingham, Nottingham, United Kingdom
Interests: computer science; human–computer interaction; interactive art; exercise

Special Issue Information

Dear Colleagues,

Mixed Reality Interfaces allow users to interact with the physical and digital elements of an experience in an integrated way. With developments in smartphone technology, such as 3D depth sensing and dedicated support for augmented reality, along with the release of affordable headsets, Mixed Reality experiences are finally becoming accessible to the general public. They have the potential to support a wide range of applications, including gaming, sport, museum visiting, healthcare, education, and training.

It is therefore timely to re-examine approaches for blending the physical and the virtual, to explore novel applications, to conduct 'in the wild' evaluations, and to develop toolkits and guidelines for designers and developers.

We invite contributions to the Multimodal Technologies and Interaction Special Issue on Mixed Reality Interfaces. Contributions should address contemporary technical, theoretical, and application challenges of Mixed Reality Interfaces, including, but not limited to, the following:

  • Novel software developments (interfaces, authoring tools, sensor algorithms)
  • Novel hardware (headsets, new smartphone hardware, tangible objects…)
  • User evaluations and studies of systems (experimental, in-the-wild…)
  • Theories of the use of Mixed Reality Interfaces

Prof. Boriana Koleva
Dr. Joseph Marshall
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering, logging in to the website, and completing the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Multimodal Technologies and Interaction is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) is waived for well-prepared manuscripts submitted to this issue. Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (3 papers)


Research

Open Access Article: IMPAct: A Holistic Framework for Mixed Reality Robotic User Interface Classification and Design
Multimodal Technologies Interact. 2019, 3(2), 25; https://doi.org/10.3390/mti3020025
Received: 28 February 2019 / Revised: 22 March 2019 / Accepted: 5 April 2019 / Published: 11 April 2019
Abstract
The number of scientific publications combining robotic user interfaces and mixed reality has increased markedly during the 21st century. Counting the publications added yearly on Google Scholar that contain the keywords “mixed reality” and “robot” indicates exponential growth. The interdisciplinary nature of mixed reality robotic user interfaces (MRRUIs) makes them very interesting and powerful, but also very challenging to design and analyze. Many individual aspects have already been given theoretical structure, but, to the best of our knowledge, no contribution combines them into an MRRUI taxonomy. In this article, we present the results of an extensive investigation of relevant aspects from prominent classifications and taxonomies in the scientific literature. In a card sorting experiment with professionals from the field of human–computer interaction, these aspects were clustered into named groups to provide a new structure. A further categorization of these groups into four categories emerged naturally and revealed a memorable structure. This article thus provides a framework of objective, technical factors for the precise description of MRRUIs. An example shows the effective use of the proposed framework for precise system description, thereby contributing to a better understanding, design, and comparison of MRRUIs in this growing field of research.
(This article belongs to the Special Issue Mixed Reality Interfaces)

Open Access Article: Tango vs. HoloLens: A Comparison of Collaborative Indoor AR Visualisations Using Hand-Held and Hands-Free Devices
Multimodal Technologies Interact. 2019, 3(2), 23; https://doi.org/10.3390/mti3020023
Received: 19 February 2019 / Revised: 24 March 2019 / Accepted: 28 March 2019 / Published: 3 April 2019
Abstract
In this article, we compare a Google Tango tablet with the Microsoft HoloLens smartglasses in the context of the visualisation of, and interaction with, Building Information Modeling data. A user test was conducted in which 16 participants solved four tasks, two per device, in small teams of two. Two aspects were analysed in the user test: the visualisation of interior designs and the visualisation of Building Information Modeling data. The results show that, in our scenario, most users surprisingly preferred the Tango tablet for collaboration and discussion. While the HoloLens offers hands-free operation and stable tracking, users mentioned that interaction with the Tango tablet felt more natural. In addition, users reported that it was easier to get an overall impression with the Tango tablet than with the HoloLens smartglasses.
(This article belongs to the Special Issue Mixed Reality Interfaces)

Open Access Article: Design and Evaluation of a Mixed-Reality Playground for Child-Robot Games
Multimodal Technologies Interact. 2018, 2(4), 69; https://doi.org/10.3390/mti2040069
Received: 16 August 2018 / Revised: 20 September 2018 / Accepted: 2 October 2018 / Published: 6 October 2018
Abstract
In this article, we present the Phygital Game project, a mixed-reality game platform in which children can play with or against a robot. The project was developed by adopting a human-centered design approach, characterized by the engagement of both children and parents in the design process and by situating the game platform in a real context: an educational center for children. We report the results of both the preliminary studies and the final testing session, which focused on the evaluation of usability factors. By providing a detailed description of the process and the results, this work aims to share the findings and lessons learned about both the implications of adopting a human-centered approach across the whole design process and the specific challenges of developing a mixed-reality playground.
(This article belongs to the Special Issue Mixed Reality Interfaces)

Multimodal Technologies Interact. EISSN 2414-4088. Published by MDPI AG, Basel, Switzerland.