
Table of Contents

Multimodal Technologies Interact., Volume 3, Issue 4 (December 2019) – 9 articles

Open Access Article
Audio Legends: Investigating Sonic Interaction in an Augmented Reality Audio Game
Multimodal Technologies Interact. 2019, 3(4), 73; https://doi.org/10.3390/mti3040073 - 13 Nov 2019
Viewed by 1082
Abstract
Augmented Reality Audio Games (ARAGs) enrich the physical world with virtual sounds to express their content and mechanics. Existing ARAG implementations have focused on exploring the surroundings and navigating to virtual sound sources as the main mode of interaction. This paper suggests that gestural activity with a handheld device can realize complex modes of sonic interaction in the augmented environment, resulting in an enhanced immersive game experience. The ARAG "Audio Legends" was designed and tested to evaluate the usability and immersion of a system featuring an exploration phase based on auditory navigation, as well as an action phase in which players aim at virtual sonic targets and wave the device to hit them or hold the device to block them. The results of the experiment provide evidence that players easily become accustomed to auditory navigation and that, although gestural sonic interaction is perceived as difficult, this does not negatively affect the system's usability or players' immersion. Findings also indicate that elements such as sound design, the synchronization of sound and gesture, the fidelity of audio augmentation, and environmental conditions significantly affect the game experience, whereas background factors such as age, sex, and game or music experience have no critical impact.

Open Access Article
Dislocated Boardgames: Design Potentials for Remote Tangible Play
Multimodal Technologies Interact. 2019, 3(4), 72; https://doi.org/10.3390/mti3040072 - 07 Nov 2019
Cited by 1 | Viewed by 1083
Abstract
Conventional digital and remote forms of play lack the physicality associated with analog play. Research on the materiality of boardgames has highlighted the material aspects inherent to this analog form of play and their relevance for the design of digital play. In this work, we analyze the inherent material qualities and related experiences of boardgames and speculate how these might shift in remote manifestations. On this basis, we describe three lenses for designing remote tangible play: physicality, agency, and time. These lenses present leverage points for future designs and illustrate how the digital and the physical can complement each other, following alternative notions of hybrid digital–physical play. We then illustrate the related design space and discuss how boardgame qualities can be translated to the remote space, as well as how their characteristics might change. In doing so, we shed light on related design challenges and reflect on how designing for shared physicality can enrich dislocated play by applying these lenses.
(This article belongs to the Special Issue Novel User Interfaces and Interaction Techniques in the Games Context)

Open Access Review
Interaction Order and Historical Body Shaping Children’s Making Projects—A Literature Review
Multimodal Technologies Interact. 2019, 3(4), 71; https://doi.org/10.3390/mti3040071 - 28 Oct 2019
Cited by 2 | Viewed by 1162
Abstract
The importance of familiarizing children with the Maker Movement, makerspaces, and the Maker mindset has been acknowledged. In this literature review, we examine the complex social action of children aged 7 to 17 (K-12) engaging in technology Making activities, as seen in the extant literature. The included papers contain empirical data from actual digital Making workshops and diverse research projects with children, conducted in both formal and non-formal/informal settings such as schools, museums, libraries, Fab Labs, and other makerspaces. We applied the theoretical lens of nexus analysis and its concepts of interaction order and historical body, and as a result of our analysis we report best practices as well as helping and hindering factors. Two gaps in the current knowledge were identified: (1) the current research focuses on success stories rather than on challenges encountered in the work, and (2) the histories of the participants and the interaction between them are rarely the focus of existing studies or reported in detail, even though they significantly affect what happens, and what can happen, in Making sessions.
(This article belongs to the Special Issue Multimodal Interaction in the Cyberspace)
Open Access Article
Prediction of Who Will Be Next Speaker and When Using Mouth-Opening Pattern in Multi-Party Conversation
Multimodal Technologies Interact. 2019, 3(4), 70; https://doi.org/10.3390/mti3040070 - 26 Oct 2019
Viewed by 870
Abstract
We investigated the mouth-opening transition pattern (MOTP), which represents the change in the degree of mouth opening at the end of an utterance, and used it to predict the next speaker and the utterance interval between the start time of the next speaker's utterance and the end time of the current speaker's utterance in a multi-party conversation. We first collected verbal and nonverbal data, including speech and the degree of mouth opening (closed, narrow-open, wide-open) of participants, manually annotated in four-person conversations. A key finding of the MOTP analysis is that the current speaker often keeps her mouth narrow-open during turn-keeping, whereas during turn-changing she starts to close it after opening it narrowly, or continues to open it widely. The next speaker often starts to open her mouth narrowly after closing it during turn-changing. Moreover, when the current speaker starts to close her mouth after opening it narrowly in turn-keeping, the utterance interval tends to be short. In contrast, when the current speaker and the listeners open their mouths narrowly after opening them narrowly and then widely, the utterance interval tends to be long. On the basis of these results, we implemented prediction models of the next speaker and the utterance interval using MOTPs. As a multimodal-feature fusion, we also implemented models that use eye-gaze behavior in addition to MOTPs, since our previous study found eye gaze to be among the most useful information for predicting the next speaker and the utterance interval. The evaluation of the models suggests that the MOTPs of the current speaker and listeners are effective for predicting the next speaker and the utterance interval in multi-party conversation. Our multimodal-feature fusion model using MOTPs and eye-gaze behavior predicts the next speaker and the utterance interval better than models using only one or the other.
(This article belongs to the Special Issue Multimodal Conversational Interaction and Interfaces)

Open Access Article
F-Formations for Social Interaction in Simulation Using Virtual Agents and Mobile Robotic Telepresence Systems
Multimodal Technologies Interact. 2019, 3(4), 69; https://doi.org/10.3390/mti3040069 - 17 Oct 2019
Viewed by 1177
Abstract
F-formations are a set of patterns in which groups of people tend to spatially organize themselves while engaging in social interactions. In this paper, we study the behavior of teleoperators of mobile robotic telepresence systems to determine whether they adhere to spatial formations when navigating to groups. This work uses a simulated environment in which teleoperators are asked to navigate to different groups of virtual agents. The simulated environment represents a conference-lobby scenario in which multiple groups of virtual agents of varying sizes are placed in different spatial formations. The task requires teleoperators to navigate a robot to join each group using an egocentric-perspective camera. In a second phase, teleoperators evaluate their own performance by reviewing how they navigated the robot from an exocentric perspective. The two important outcomes of this study are, first, that teleoperators inherently respect F-formations even when operating a mobile robotic telepresence system and, second, that teleoperators prefer additional support for correctly navigating the robot into a preferred position that adheres to F-formations.
(This article belongs to the Special Issue The Future of Intelligent Human-Robot Collaboration)

Open Access Article
Investigating Immersion and Learning in a Low-Embodied versus High-Embodied Digital Educational Game: Lessons Learned from an Implementation in an Authentic School Classroom
Multimodal Technologies Interact. 2019, 3(4), 68; https://doi.org/10.3390/mti3040068 - 12 Oct 2019
Viewed by 1091
Abstract
Immersion is often argued to be one of the main driving forces behind children's learning in digital educational games. Researchers have suggested that the movement-based interaction afforded by emerging embodied digital educational games may further heighten immersion and learning. However, there is a lack of empirical research warranting these claims. This case study investigated the impact of a high-embodied digital educational game, integrated in a primary school classroom, on children's immersion and content knowledge about nutrition (condition 1 = 24 children), in comparison to the impact of a low-embodied version of the game (condition 2 = 20 children). Post-interventional surveys investigating immersion indicated a difference only in the level of engagement, in terms of perceived usability, while children's learning gains in terms of content knowledge did not differ between the two conditions. Interviews with a subset of the children (n = 8 per condition) led to the identification of (a) media form, (b) media content, and (c) context-related factors, which provided plausible explanations for children's experienced immersion. Implications for supporting immersion in high-embodied educational digital games are discussed.
(This article belongs to the Special Issue Multimodal Interaction in the Cyberspace)

Open Access Article
The Influence of Feedback Type in Robot-Assisted Training
Multimodal Technologies Interact. 2019, 3(4), 67; https://doi.org/10.3390/mti3040067 - 09 Oct 2019
Viewed by 979
Abstract
Robot-assisted training, in which social robots can be used as motivational coaches, is an interesting application area. This paper examines how feedback given by a robot agent influences the various facets of participant experience in robot-assisted training. Specifically, we investigated the effects of feedback type on robot acceptance, sense of safety and security, attitude towards robots, and task performance. In the experiment, 23 older participants performed basic arm exercises with a social robot as a guide and received feedback under different conditions: flattering, positive, and negative. Our results suggest that the robot giving flattering and positive feedback was generally appreciated by older people, even when the feedback did not necessarily correspond to objective measures such as performance. Participants in these groups felt better about the interaction and the robot.
(This article belongs to the Special Issue The Future of Intelligent Human-Robot Collaboration)

Open Access Article
Reorganize Your Blogs: Supporting Blog Re-visitation with Natural Language Processing and Visualization
Multimodal Technologies Interact. 2019, 3(4), 66; https://doi.org/10.3390/mti3040066 - 07 Oct 2019
Viewed by 897
Abstract
Temporally connected personal blogs contain voluminous textual content, presenting challenges in re-visiting and reflecting on experiences. Other data repositories have benefited from natural language processing (NLP) and interactive visualizations (VIS) to support exploration, but little is known about how these techniques could be used with blogs to present experiences and support multimodal interaction, particularly for authors. This paper presents the effect of reorganization, that is, restructuring a large blog set with NLP and presenting abstract topics with VIS, to support novel re-visitation experiences. The BlogCloud tool, a blog re-visitation tool that reorganizes blog paragraphs around user-searched keywords, implements reorganization and similarity-based content grouping. Through a public use session with bloggers who wrote about extended hikes, we observed the effect of NLP-based reorganization in delivering novel re-visitation experiences. Findings suggest that the re-presented topics provide new reflection materials and re-visitation paths, enabling interaction with symbolic items in memory.
(This article belongs to the Special Issue Text Mining in Complex Domains)

Open Access Article
Towards Designing Diegetic Gaze in Games: The Use of Gaze Roles and Metaphors
Multimodal Technologies Interact. 2019, 3(4), 65; https://doi.org/10.3390/mti3040065 - 21 Sep 2019
Cited by 2 | Viewed by 1149
Abstract
Gaze-based interactions have found their way into the games domain and are frequently employed as a means of supporting players in their activities. Instead of implementing gaze as an additional game feature via a game-centred approach, we propose a diegetic perspective by introducing gaze interaction roles and gaze metaphors. Gaze interaction roles represent ambiguous mechanics in gaze, whereas gaze metaphors serve as narrative figures that symbolise and illustrate the interaction dynamics and are applied to them. Within this work, the current literature in the field is analysed for examples that design around gaze mechanics and follow a diegetic approach taking roles and metaphors into account. A list of surveyed gaze metaphors related to each gaze role is presented and described in detail. Furthermore, a case study shows the potential of the proposed approach. Our work aims to contribute to existing frameworks, such as EyePlay, by reflecting on the ambiguous meaning of gaze in games. Through this integrative approach, players are anticipated to develop a deeper connection to the game narrative via gaze, resulting in a stronger experience of presence (i.e., being in the game world).
(This article belongs to the Special Issue Novel User Interfaces and Interaction Techniques in the Games Context)
