Special Issue "Embodied and Spatial Interaction"

A special issue of Multimodal Technologies and Interaction (ISSN 2414-4088).

Deadline for manuscript submissions: closed (31 December 2018)

Special Issue Editor

Guest Editor
Prof. Markku Turunen

Faculty of Communication Sciences, University of Tampere, 33100 Tampere, Finland
Interests: multimodal human-computer interaction; gestural interaction; user experience; accessibility; users with special needs

Special Issue Information

Dear Colleagues,

Embodied and spatial interaction has gained a lot of attention lately, but little research has been carried out from a multimodal interaction perspective. In this Special Issue, we welcome submissions related to embodied and spatial interaction covering any combination of multimodal interaction means. Articles could relate to the design, implementation, and evaluation of multimodal solutions. In particular, we welcome articles which focus on the fusion and fission of different modalities, including multi-dimensional analysis of multimodal interaction (e.g., combinations of subjective and objective metrics) in this setting. Other possible topics include case studies (e.g., system descriptions and evaluation results) of multimodal embodied and spatial systems, multimodal interaction techniques for embodied and spatial interaction (e.g., combinations of gestural and spoken interaction), and domain-specific case studies (e.g., embodied and spatial interaction in industrial settings and healthcare).

In this context, embodied and spatial interaction could be interpreted rather freely, and interaction can take place both in physical environments (e.g., built environments) and virtual environments (e.g., virtual reality), including their combinations. Different interaction means could include gestures (e.g., mid-air gestures), spoken interaction, haptic feedback, gaze tracking, motion capture, wearable computing, and human interaction with IoT data (among others). From an evaluation viewpoint, research on multi-dimensional analysis (e.g., subjective metrics such as UX questionnaires, and objective metrics such as log data and biometric information) is particularly welcome. Embodied and spatial interaction studies with special user groups (including assistive technology, people with special needs, and ICT4D) are also within the scope of this Special Issue.

This Special Issue aims to provide a collection of high-quality research articles that address challenges in multimodal embodied and spatial interaction. Both theoretical and applied research studies are welcome; practical case studies in different domains are of particular interest.

Prof. Markku Turunen
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts are available on the Instructions for Authors page. Multimodal Technologies and Interaction is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) is waived for well-prepared manuscripts submitted to this issue. Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Spatial interaction
  • Embodied interaction
  • Human-computer interaction
  • Interaction analysis
  • Interaction design
  • User experience
  • Gestures
  • Spoken interaction
  • Haptic feedback
  • Gaze tracking

Published Papers (3 papers)


Research

Jump to: Review

Open Access Article: Exploring How Interactive Technology Enhances Gesture-Based Expression and Engagement: A Design Study
Multimodal Technologies Interact. 2019, 3(1), 13; https://doi.org/10.3390/mti3010013
Received: 11 December 2018 / Revised: 15 February 2019 / Accepted: 23 February 2019 / Published: 27 February 2019
Abstract
The interpretation and understanding of physical gestures play a significant role in various forms of art. Interactive technology and digital devices offer a plethora of opportunities for personal gesture-based experiences, and they assist in the creation of collaborative artwork. In this study, three prototypes for use with different digital devices (digital camera, PC camera, and Kinect) were designed. Subsequently, a series of workshops and in-depth interviews were conducted with participants from different cultural and occupational backgrounds. These were designed to explore how to design personalised gesture-based expressions and how to engage the creativity of the participants in their gesture-based experiences. The findings indicated that, in terms of gesture-based interaction, the participants preferred to engage with the visual traces displayed at specific timings in multi-experience spaces. Their gesture-based interactions could effectively support non-verbal emotional expression. In addition, the participants were strongly inclined to incorporate their personal stories and emotions into their own gesture-based artworks. Drawing on the participants' different cultural and occupational backgrounds, their artistic creations could form spontaneously.
(This article belongs to the Special Issue Embodied and Spatial Interaction)

Open Access Article: Embodied Engagement with Narrative: A Design Framework for Presenting Cultural Heritage Artifacts
Multimodal Technologies Interact. 2019, 3(1), 1; https://doi.org/10.3390/mti3010001
Received: 1 November 2018 / Revised: 10 December 2018 / Accepted: 17 December 2018 / Published: 2 January 2019
Abstract
An increasing number of museum exhibits incorporate multi-modal technologies and interactions; yet these media divert visitors’ attention away from the cultural heritage artifacts on display. This paper proposes an overarching conceptual structure for designing tangible and embodied narrative interaction with cultural heritage artifacts within a museum exhibit so that visitors can interact with them to comprehend their cultural context. The Tangible and Embodied Narrative Framework (TENF) consists of three spectra (diegetic vs. non-diegetic, internal vs. external, and ontological vs. exploratory) and, considering how different interactions map along these three spectra, can guide designers in the way they integrate digital media, narrative, and embodiment. In this paper, we examine interactive narrative scholarship, existing frameworks for tangible and embodied interactions, and tangible and embodied narrative projects. We then describe the design of the TENF and its application to the pilot project, Mapping Place, and to the case study project, Multi-Sensory Prayer Nuts. The findings indicate that embodied engagement with artifacts through a narrative role can help visitors (1) contextualize the meaning of artifacts and (2) make personalized connections to the artifacts. Based on this work, we suggest design recommendations for tailoring the use of the TENF in the cultural heritage domain: simulate cultural practices, associate visitors with cultural perspectives, and provide simultaneous digital feedback. We conclude by describing future directions for the research, which include generating other possible projects using the TENF; collaborating with other designers and museum professionals; and exploring applications of the TENF in museum spaces.
(This article belongs to the Special Issue Embodied and Spatial Interaction)

Review

Jump to: Research

Open Access Review: Gesture Elicitation Studies for Mid-Air Interaction: A Review
Multimodal Technologies Interact. 2018, 2(4), 65; https://doi.org/10.3390/mti2040065
Received: 8 September 2018 / Revised: 20 September 2018 / Accepted: 26 September 2018 / Published: 29 September 2018
Cited by 1
Abstract
Mid-air interaction involves touchless manipulation of digital content or remote devices, based on sensor tracking of body movements and gestures. There are no established, universal gesture vocabularies for such interactions. On the contrary, it is widely acknowledged that the identification of appropriate gestures depends on the context of use, making the identification of mid-air gestures an important design decision. The method of gesture elicitation is increasingly applied by designers to help them identify appropriate gesture sets for mid-air applications. This paper presents a review of elicitation studies in mid-air interaction based on a selected set of 47 papers published within 2011–2018. It reports on: (1) the application domains of the mid-air interactions examined; (2) the level of technological maturity of the systems at hand; (3) the gesture elicitation procedure and its variations; (4) the appropriateness criteria for a gesture; (5) the number and profile of participants; (6) user evaluation methods (of the gesture vocabulary); and (7) data analysis and related metrics. This paper confirms that the elicitation method has been applied extensively, but with variability and some ambiguity, and discusses under-explored research questions and potential improvements of related research.
(This article belongs to the Special Issue Embodied and Spatial Interaction)
Multimodal Technologies Interact. EISSN 2414-4088. Published by MDPI AG, Basel, Switzerland.