
Table of Contents

Multimodal Technologies Interact., Volume 3, Issue 2 (June 2019)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Open Access Article
Combining VR Visualization and Sonification for Immersive Exploration of Urban Noise Standards
Multimodal Technologies Interact. 2019, 3(2), 34; https://doi.org/10.3390/mti3020034
Received: 28 March 2019 / Revised: 3 May 2019 / Accepted: 7 May 2019 / Published: 13 May 2019
PDF Full-text (2719 KB)
Abstract
Urban traffic noise situations are usually visualized as conventional 2D maps or 3D scenes. These representations are indispensable tools to inform decision makers and citizens about issues of health, safety, and quality of life, but they require expert knowledge to be properly understood and put into context. The subjectivity of how we perceive noise, as well as the inaccuracies in common noise calculation standards, are rarely represented. We present a virtual reality application that seeks to offer an audiovisual glimpse into the background workings of one of these standards by employing a multisensory, immersive analytics approach that allows users to interactively explore and listen to an approximate rendering of the data in the same environment in which the noise simulation occurs. For this approach to be useful, it must handle complicated noise level calculations in a real-time environment and run on commodity low-cost VR hardware. In a prototypical implementation, we utilized simple VR interactions common to current mobile VR headsets and combined them with techniques from data visualization and sonification to allow users to explore road traffic noise in an immersive real-time urban environment. The noise levels were calculated over CityGML LoD2 building geometries, in accordance with Common Noise Assessment Methods in Europe (CNOSSOS-EU) sound propagation methods.
(This article belongs to the Special Issue Interactive 3D Cartography)
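
The abstract above hinges on handling complicated noise level calculations in real time on low-cost hardware. As a rough illustration of the per-source arithmetic such a renderer must repeat for every receiver position (a minimal sketch of free-field point-source propagation, not the CNOSSOS-EU method, which adds many further attenuation terms), consider:

```python
import math

def receiver_level_db(source_power_db, distance_m):
    """Sound pressure level at a receiver from a point source,
    assuming free-field spherical spreading (geometric divergence only)."""
    return source_power_db - 20.0 * math.log10(distance_m) - 11.0

def combine_levels_db(levels_db):
    """Energetically sum incoherent sound levels given in dB."""
    return 10.0 * math.log10(sum(10.0 ** (level / 10.0) for level in levels_db))

# Hypothetical example: two road segments treated as point sources,
# 25 m and 60 m from the listener, each with a sound power level of 95 dB.
levels = [receiver_level_db(95.0, 25.0), receiver_level_db(95.0, 60.0)]
print(f"combined level: {combine_levels_db(levels):.1f} dB")
```

Because the summation is logarithmic, the more distant source raises the combined level only slightly; this is the kind of non-intuitive behaviour an audiovisual rendering can make tangible.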
Open Access Article
Towards a Taxonomy for In-Vehicle Interactions Using Wearable Smart Textiles: Insights from a User-Elicitation Study
Multimodal Technologies Interact. 2019, 3(2), 33; https://doi.org/10.3390/mti3020033
Received: 15 March 2019 / Revised: 15 April 2019 / Accepted: 27 April 2019 / Published: 9 May 2019
PDF Full-text (2585 KB) | HTML Full-text | XML Full-text
Abstract
Textiles are a vital and indispensable part of the clothing that we use daily. They are flexible, often lightweight, and have a wide variety of applications. Today, with rapid developments in small and flexible sensing materials, textiles can be enhanced and used as input devices for interactive systems. Clothing-based wearable interfaces are suitable for in-vehicle controls: they can combine various modalities to enable users to perform simple, natural, and efficient interactions while minimizing any negative effect on their driving. However, research on clothing-based wearable in-vehicle interfaces is still underexplored, and there is consequently a lack of understanding of how to use textile-based input for in-vehicle controls. As a first step towards filling this gap, we conducted a user-elicitation study to involve users in the process of designing in-vehicle interactions via a fabric-based wearable device. We were able to distill a taxonomy of wrist and touch gestures for in-vehicle interactions using a fabric-based wrist interface in a simulated driving setup. Our results help drive forward the investigation of the design space of clothing-based wearable interfaces for in-vehicle secondary interactions.
Open Access Article
Recognition of Tactile Facial Action Units by Individuals Who Are Blind and Sighted: A Comparative Study
Multimodal Technologies Interact. 2019, 3(2), 32; https://doi.org/10.3390/mti3020032
Received: 16 March 2019 / Revised: 24 April 2019 / Accepted: 6 May 2019 / Published: 8 May 2019
PDF Full-text (4762 KB) | HTML Full-text | XML Full-text
Abstract
Given that most cues exchanged during a social interaction are nonverbal (e.g., facial expressions, hand gestures, body language), individuals who are blind are at a social disadvantage compared to their sighted peers. Very little work has explored sensory augmentation in the context of social assistive aids for individuals who are blind. The purpose of this study is to explore the following questions related to visual-to-vibrotactile mapping of facial action units (the building blocks of facial expressions): (1) How well can individuals who are blind recognize tactile facial action units compared to those who are sighted? (2) How well can individuals who are blind recognize emotions from tactile facial action units compared to those who are sighted? These questions are explored in a preliminary pilot study using absolute identification tasks in which participants learn and recognize vibrotactile stimulations presented through the Haptic Chair, a custom vibrotactile display embedded in the back of a chair. The results show that individuals who are blind recognize tactile facial action units as well as those who are sighted do. These results hint at the potential of tactile facial action units to augment and expand access to social interactions for individuals who are blind.
(This article belongs to the Special Issue Haptics for Human Augmentation)
Open Access Article
User-Generated Gestures for Voting and Commenting on Immersive Displays in Urban Planning
Multimodal Technologies Interact. 2019, 3(2), 31; https://doi.org/10.3390/mti3020031
Received: 14 April 2019 / Revised: 29 April 2019 / Accepted: 30 April 2019 / Published: 7 May 2019
PDF Full-text (5009 KB) | HTML Full-text | XML Full-text
Abstract
Traditional methods of public consultation offer only limited interactivity with urban planning materials, leading to restricted engagement of citizens. Public displays and immersive virtual environments have the potential to address this issue, enhance citizen engagement, and improve the public consultation process overall. In this paper, we investigate how people would interact with a large display showing urban planning content. We conducted an elicitation study with a large immersive display in which we asked participants (N = 28) to produce gestures to vote and comment on urban planning material. Our results suggest that the phone interaction modality may be more suitable than the hand interaction modality for voting and commenting on large interactive displays. Our findings may inform the design of interactions for large immersive displays, in particular those showing urban planning content.
Open Access Article
Visual Analysis of a Smart City’s Energy Consumption
Multimodal Technologies Interact. 2019, 3(2), 30; https://doi.org/10.3390/mti3020030
Received: 31 March 2019 / Revised: 24 April 2019 / Accepted: 29 April 2019 / Published: 2 May 2019
PDF Full-text (15037 KB)
Abstract
Through the use of open data portals, cities, districts, and countries are increasingly making energy consumption data available. These data have the potential to inform both policymakers and local communities; at the same time, however, these datasets are large and complicated to analyze. We present the activity-centered design, from requirements to evaluation, of a web-based visual analysis tool for exploring energy consumption in Chicago. The resulting application integrates energy consumption data and census data, making it possible for both amateurs and experts to analyze disaggregated datasets at multiple levels of spatial aggregation and to compare temporal and spatial differences. An evaluation through case studies and qualitative feedback demonstrates that this visual analysis application successfully meets the goals of integrating large, disaggregated urban energy consumption datasets and of supporting analysis by both lay users and experts.
(This article belongs to the Special Issue Interactive Visualizations for Sustainability)
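
As a hedged sketch of the multi-level spatial aggregation described above (the column names and values are hypothetical stand-ins; the actual Chicago datasets and the paper's web stack will differ), the join-then-aggregate step might look like this in Python with pandas:

```python
import pandas as pd

# Hypothetical, minimal stand-ins for the energy and census tables.
energy = pd.DataFrame({
    "census_block": ["1701", "1701", "1702", "1702"],
    "month":        [1, 2, 1, 2],
    "kwh":          [1200.0, 1100.0, 900.0, 950.0],
})
census = pd.DataFrame({
    "census_block":   ["1701", "1702"],
    "community_area": ["Loop", "Loop"],
    "population":     [3200, 2100],
})

# Join consumption with census attributes, then aggregate at two spatial levels.
merged = energy.merge(census, on="census_block")
kwh_by_block = merged.groupby("census_block")["kwh"].sum()
kwh_by_area = merged.groupby("community_area")["kwh"].sum()
per_capita_by_area = kwh_by_area / census.groupby("community_area")["population"].sum()

print(kwh_by_block)
print(per_capita_by_area)  # kWh per resident at the community-area level
```

Normalizing by census attributes such as population is one way to make districts of very different sizes comparable on a single map.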
Open Access Article
Tell Them How They Did: Feedback on Operator Performance Helps Calibrate Perceived Ease of Use in Automated Driving
Multimodal Technologies Interact. 2019, 3(2), 29; https://doi.org/10.3390/mti3020029
Received: 4 March 2019 / Revised: 24 April 2019 / Accepted: 25 April 2019 / Published: 29 April 2019
PDF Full-text (1059 KB) | HTML Full-text | XML Full-text
Abstract
The development of automated driving will profit from an agreed-upon methodology for evaluating human–machine interfaces. The present study examines the role of feedback on interaction performance provided directly to participants when interacting with driving automation (i.e., perceived ease of use). In addition, the development of the ratings themselves over time and their use case specificity were examined. In a driving simulator study, N = 55 participants completed several transitions between Society of Automotive Engineers (SAE) level 0, level 2, and level 3 automated driving. Half of the participants received feedback on their interaction performance immediately after each use case, while the other half did not. As expected, the results revealed that participants judged the interactions to become easier over time. However, this effect was use case specific: transitions to L0 showed no change over time. The role of feedback also depended on the respective use case; we observed more conservative evaluations when feedback was provided than when it was not. The present study supports the application of perceived ease of use as a diagnostic measure in interaction with automated driving. Evaluations of interfaces can benefit from supporting feedback to obtain more conservative results.
Open Access Brief Report
A Virtual Reality System for Practicing Conversation Skills for Children with Autism
Multimodal Technologies Interact. 2019, 3(2), 28; https://doi.org/10.3390/mti3020028
Received: 20 March 2019 / Revised: 12 April 2019 / Accepted: 17 April 2019 / Published: 20 April 2019
PDF Full-text (1132 KB) | HTML Full-text | XML Full-text
Abstract
We describe a virtual reality environment, Bob's Fish Shop, in which users diagnosed with Autism Spectrum Disorder (ASD) can practice social interactions in a safe and controlled environment. A case study is presented which suggests that such an environment can give users the opportunity to build the skills necessary to carry out a conversation without the fear of negative social consequences present in the physical world. Through the repetition and analysis of these virtual interactions, users can improve their social and conversational understanding.
Open Access Article
Comparing Interaction Techniques to Help Blind People Explore Maps on Small Tactile Devices
Multimodal Technologies Interact. 2019, 3(2), 27; https://doi.org/10.3390/mti3020027
Received: 28 February 2019 / Revised: 27 March 2019 / Accepted: 17 April 2019 / Published: 20 April 2019
PDF Full-text (3012 KB) | HTML Full-text | XML Full-text
Abstract
Exploring geographic maps on touchscreens is a difficult task in the absence of vision because those devices lack tactile cues. Prior research has therefore introduced non-visual interaction techniques designed to allow visually impaired people to explore spatial configurations on tactile devices. In this paper, we present a study in which six blind and six blindfolded sighted participants evaluated three of those interaction techniques against a screen reader condition. We observed that techniques providing guidance result in higher user satisfaction and more efficient exploration, and that adding a grid-like structure improved the estimation of distances. None of the interaction techniques improved the reconstruction of the spatial configurations. The results of this study can inform the design of non-visual interaction techniques that better support the exploration and memorization of maps in the absence of vision.
(This article belongs to the Special Issue Interactive Assistive Technology)
Open Access Article
Tactile Cues for Improving Target Localization in Subjects with Tunnel Vision
Multimodal Technologies Interact. 2019, 3(2), 26; https://doi.org/10.3390/mti3020026
Received: 15 March 2019 / Revised: 12 April 2019 / Accepted: 16 April 2019 / Published: 19 April 2019
PDF Full-text (2927 KB) | HTML Full-text | XML Full-text
Abstract
The loss of peripheral vision is experienced by millions of people with glaucoma or retinitis pigmentosa, and it has a major impact on everyday life, particularly on locating visual targets in the environment. In this study, we designed a wearable interface that renders the location of specific targets with private and non-intrusive tactile cues. Three experimental studies were completed to design and evaluate the tactile code and the device. In the first study, four different tactile codes (single stimuli or trains of pulses, rendered in either a Cartesian or a polar coordinate system) were evaluated with a head-pointing task. In the following studies, the most efficient code, trains of pulses with Cartesian coordinates, was delivered by a bracelet worn on the wrist and evaluated during a visual search task in a complex virtual environment. The second study included ten subjects with a simulated restricted field of view (10°). The last study was a proof of concept with one visually impaired subject whose peripheral vision was restricted due to glaucoma. The results show that the device significantly improved visual search efficiency, by a factor of three. Combined with an object recognition algorithm on smart glasses, the device could help detect targets of interest, either on demand or suggested by the device itself (e.g., potential obstacles), facilitating visual search and, more generally, spatial awareness of the environment.
(This article belongs to the Special Issue Interactive Assistive Technology)
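
The winning code in the abstract above renders a target's horizontal and vertical offsets as trains of pulses in a Cartesian frame. A hypothetical sketch of such a mapping (the function name, pulse scaling, and actuator labels are illustrative assumptions, not the authors' implementation) could look like this:

```python
def cartesian_pulse_code(dx_deg, dy_deg, deg_per_pulse=10, max_pulses=5):
    """Hypothetical Cartesian coding of a target's offset from the gaze
    direction: the horizontal and vertical offsets are each rendered as a
    train of pulses on a dedicated actuator (left/right, up/down).
    Returns a list of (actuator, pulse_count) cues."""
    cues = []
    if dx_deg:
        cues.append(("right" if dx_deg > 0 else "left",
                     min(max_pulses, max(1, round(abs(dx_deg) / deg_per_pulse)))))
    if dy_deg:
        cues.append(("up" if dy_deg > 0 else "down",
                     min(max_pulses, max(1, round(abs(dy_deg) / deg_per_pulse)))))
    return cues

# A target 35 degrees to the right and 12 degrees below the gaze direction:
print(cartesian_pulse_code(35, -12))  # [('right', 4), ('down', 1)]
```

A polar variant would instead encode bearing and eccentricity, which the first study reportedly found less efficient than this Cartesian train-of-pulses code.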
Open Access Article
IMPAct: A Holistic Framework for Mixed Reality Robotic User Interface Classification and Design
Multimodal Technologies Interact. 2019, 3(2), 25; https://doi.org/10.3390/mti3020025
Received: 28 February 2019 / Revised: 22 March 2019 / Accepted: 5 April 2019 / Published: 11 April 2019
PDF Full-text (276 KB) | HTML Full-text | XML Full-text
Abstract
The number of scientific publications combining robotic user interfaces and mixed reality has increased sharply during the 21st century; counting the publications added each year that contain the keywords "mixed reality" and "robot" on Google Scholar indicates exponential growth. The interdisciplinary nature of mixed reality robotic user interfaces (MRRUIs) makes them very interesting and powerful, but also very challenging to design and analyze. Many individual aspects have already been given theoretical structure, but to the best of our knowledge, no contribution combines them all into an MRRUI taxonomy. In this article, we present the results of an extensive investigation of relevant aspects drawn from prominent classifications and taxonomies in the scientific literature. In a card sorting experiment with professionals from the field of human–computer interaction, these aspects were clustered into named groups to provide a new structure. These groups naturally fell into four categories, revealing a memorable structure. The article thus provides a framework of objective, technical factors that can be used to describe MRRUIs precisely. An example demonstrates the use of the proposed framework for precise system description, contributing to a better understanding, design, and comparison of MRRUIs in this growing field of research.
(This article belongs to the Special Issue Mixed Reality Interfaces)
Open Access Article
Exploring the Development Requirements for Virtual Reality Gait Analysis
Multimodal Technologies Interact. 2019, 3(2), 24; https://doi.org/10.3390/mti3020024
Received: 12 February 2019 / Revised: 24 March 2019 / Accepted: 4 April 2019 / Published: 10 April 2019
PDF Full-text (3632 KB) | HTML Full-text | XML Full-text
Abstract
The hip joint is highly prone to traumatic and degenerative pathologies that result in irregular locomotion. Monitoring and treatment depend on high-end technology facilities requiring physician and patient co-location, thus limiting access to specialist monitoring and treatment for populations living in rural and remote locations. Telemedicine offers an alternative means of monitoring that negates the need for the patient's physical presence. In addition, emerging technologies, such as virtual reality (VR) and immersive technologies, offer potential future solutions through virtual presence, where the patient and health professional meet in a virtual environment (a virtual clinic). To this end, a prototype asynchronous telemedicine VR gait analysis system was designed, aiming to bring a full clinical facility into the patient's local proximity. The proposed system combines cost-effective motion capture with an immersive 3D virtual gait analysis clinic. The user interface and the tools in the application offer health professionals asynchronous, objective, and subjective analyses. This paper investigates the requirements for the design of such a system and discusses preliminary comparative data from its performance evaluation against a high-fidelity clinical gait analysis application.
(This article belongs to the Special Issue Digital Health Applications of Ubiquitous HCI Research)
Open Access Article
Tango vs. HoloLens: A Comparison of Collaborative Indoor AR Visualisations Using Hand-Held and Hands-Free Devices
Multimodal Technologies Interact. 2019, 3(2), 23; https://doi.org/10.3390/mti3020023
Received: 19 February 2019 / Revised: 24 March 2019 / Accepted: 28 March 2019 / Published: 3 April 2019
PDF Full-text (5305 KB) | HTML Full-text | XML Full-text
Abstract
In this article, we compare a Google Tango tablet with the Microsoft HoloLens smartglasses in the context of visualising and interacting with Building Information Modeling data. A user test was conducted in which 16 participants solved four tasks, two per device, in small teams of two. Two aspects were analysed in the user test: the visualisation of interior designs and the visualisation of Building Information Modeling data. Surprisingly, the results show that most users preferred the Tango tablet for collaboration and discussion in our scenario. While the HoloLens offers hands-free operation and stable tracking, users mentioned that interaction with the Tango tablet felt more natural. In addition, users reported that it was easier to get an overall impression with the Tango tablet than with the HoloLens smartglasses.
(This article belongs to the Special Issue Mixed Reality Interfaces)
Multimodal Technologies Interact. EISSN 2414-4088, published by MDPI AG, Basel, Switzerland.