Journal Description
Multimodal Technologies and Interaction
Multimodal Technologies and Interaction is an international, scientific, peer-reviewed, open access journal of multimodal technologies and interaction, published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), Inspec, dblp Computer Science Bibliography, and other databases.
- Journal Rank: CiteScore - Q2 (Computer Science Applications)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 24.4 days after submission; acceptance to publication takes 3.9 days (median values for papers published in this journal in the first half of 2022).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Latest Articles
Assessing the Influence of Multimodal Feedback in Mobile-Based Musical Task Performance
Multimodal Technol. Interact. 2022, 6(8), 68; https://doi.org/10.3390/mti6080068 - 08 Aug 2022
Abstract
Digital musical instruments have become increasingly prevalent in musical creation and production. Optimizing their usability and, particularly, their expressiveness has become essential to their study and practice. The absence of multimodal feedback, present in traditional acoustic instruments, has been identified as an obstacle to complete performer–instrument interaction, in particular due to the lack of embodied control. Mobile-based digital musical instruments present a particular case by natively providing the possibility of enriching basic auditory feedback with additional multimodal feedback. In the experiment presented in this article, we focused on using visual and haptic feedback to support and enrich auditory content, and we evaluated the impact on basic musical tasks (i.e., note pitch tuning accuracy and time). The experiment implemented a protocol in which several musical note examples were presented to participants, who were asked to reproduce them, with their performance compared across different multimodal feedback combinations. The collected results show that additional visual feedback reduced user hesitation in pitch tuning, allowing users to reach the proximity of desired notes in less time. Nonetheless, neither visual nor haptic feedback was found to significantly impact pitch tuning time and accuracy compared to auditory-only feedback.
Open Access Article
Ability-Based Methods for Personalized Keyboard Generation
Multimodal Technol. Interact. 2022, 6(8), 67; https://doi.org/10.3390/mti6080067 - 03 Aug 2022
Abstract
This study introduces an ability-based method for personalized keyboard generation, wherein an individual’s own movement and human–computer interaction data are used to automatically compute a personalized virtual keyboard layout. Our approach integrates a multidirectional point-select task to characterize cursor control over time, distance, and direction. The characterization is automatically employed to develop a computationally efficient keyboard layout that prioritizes each user’s movement abilities through capturing directional constraints and preferences. We evaluated our approach in a study involving 16 participants using inertial sensing and facial electromyography as an access method, resulting in significantly increased communication rates using the personalized keyboard (52.0 bits/min) when compared to a generically optimized keyboard (47.9 bits/min). Our results demonstrate the ability to effectively characterize an individual’s movement abilities to design a personalized keyboard for improved communication. This work underscores the importance of integrating a user’s motor abilities when designing virtual interfaces.
Open Access Article
Smart Map Augmented: Exploring and Learning Maritime Audio-Tactile Maps without Vision: The Issue of Finger or Marker Tracking
Multimodal Technol. Interact. 2022, 6(8), 66; https://doi.org/10.3390/mti6080066 - 03 Aug 2022
Abstract
Background: When exploring audio-tactile nautical charts without vision, users could trigger vocal announcements of a seamark's name via video tracking. In the first condition, they simply used a green sticker fastened to the tip of a finger; in the second condition, they handled a small green object, called the marker. Methods: In this study, we compared the finger and marker tracking conditions for completing spatial tasks without vision. More precisely, we aimed to better understand which kind of interaction was the most efficient for performing either localization tasks or estimations of distance and direction. Twelve blindfolded participants performed these two spatial tasks on a 3D-printed audio-tactile nautical chart. Results: Results of the localization tasks revealed that, in the finger condition, participants were faster in finding geographic elements, i.e., seamarks. During estimation tasks, no differences were found between the precision of distance and direction estimations in the two conditions. However, spatial reasoning took significantly less time in the marker condition. Finally, we discuss the efficiency of these two interaction conditions depending on the spatial task. Conclusions: Further experimentation and discussion should be undertaken to identify better modalities for helping visually impaired persons to explore audio-tactile maps and to prepare navigation.
Open Access Article
Cognitive Learning and Robotics: Innovative Teaching for Inclusivity
Multimodal Technol. Interact. 2022, 6(8), 65; https://doi.org/10.3390/mti6080065 - 03 Aug 2022
Abstract
We present the interdisciplinary CoWriting Kazakh project, in which a social robot acts as a peer in learning the new Kazakh Latin alphabet, to which Kazakhstan plans to shift from the current Kazakh Cyrillic by 2030. We discuss the past literature on cognitive learning and script acquisition in depth and present a theoretical framing for this study. The results of word and letter analyses from two user studies conducted between 2019 and 2020 are presented. Learning the new alphabet through Kazakh words with two or more syllables and special native letters resulted in significant learning gains. These results suggest that reciprocal Cyrillic-to-Latin script learning yields considerable cognitive benefits due to mental conversion, word choice, and handwriting practices. Overall, this system enables school-age children to practice the new Kazakh Latin script in an engaging learning scenario. The proposed theoretical framework illuminates the understanding of teaching and learning within the multimodal robot-assisted script learning scenario and beyond its scope.
(This article belongs to the Special Issue Intricacies of Child–Robot Interaction)
Open Access Article
Behaviour of True Artificial Peers
Multimodal Technol. Interact. 2022, 6(8), 64; https://doi.org/10.3390/mti6080064 - 02 Aug 2022
Abstract
Typical current assistance systems often take the form of user interfaces optimised between the user's interests and the capabilities of the system. In contrast, a peer-like system should be capable of independent decision-making, which in turn requires an understanding and knowledge of the current situation for performing a sensible decision-making process. We present a method for a system capable of interacting with its user to optimise its information-gathering task, while at the same time ensuring the necessary satisfaction with the system, so that the user is not discouraged from further interaction. Based on this collected information, the system may then create and employ a specifically adapted rule-set base, which is much closer to an intelligent companion than to a typical technical user interface. A further aspect is the perception of the system as a trustworthy and understandable partner, allowing an empathetic understanding between the user and the system and leading to a more closely integrated smart environment.
(This article belongs to the Topic Interactive Artificial Intelligence and Man-Machine Communication)
Open Access Article
ViviPaint: Creating Dynamic Painting with a Thermochromic Toolkit
Multimodal Technol. Interact. 2022, 6(8), 63; https://doi.org/10.3390/mti6080063 - 27 Jul 2022
Abstract
New materials and technologies facilitate the design of thermochromic dynamic paintings. However, creating a thermochromic painting requires knowledge of electrical engineering and computer science, which is a barrier for artists and enthusiasts with non-technology backgrounds. Existing toolkits support only a limited design space and fail to provide usable solutions for independent creation or for meeting the needs of artists. We present ViviPaint, a toolkit that assists artists and enthusiasts in creating thermochromic paintings easily and conveniently. We identified the pain points and challenges by observing a professional artist's entire thermochromic painting creation process. We then designed ViviPaint, consisting of a design tool and a set of hardware components. The design tool provides a GUI animation choreography interface, hardware assembly guidance, and assistance during the assembly process. The hardware components comprise an augmented picture frame with a detachable structure and 24 temperature-changing units using Peltier elements. The results of our evaluation study (N = 8) indicate that our toolkit is easy to use and effectively assists users in creating thermochromic paintings.
Open Access Article
Perspectives on Socially Intelligent Conversational Agents
Multimodal Technol. Interact. 2022, 6(8), 62; https://doi.org/10.3390/mti6080062 - 25 Jul 2022
Abstract
Digital assistants are steadily proliferating. Marked by the uptake of ever more human-like conversational abilities, the respective technologies are moving increasingly away from their role as voice-operated task enablers and becoming companion-like artifacts whose interaction style is rooted in anthropomorphic behavior. One of the characteristics required in this shift from utilitarian tool to emotional character is the adoption of social intelligence. Although past research has recognized this need, more multi-disciplinary investigations should be devoted to exploring the relevant traits and their potential embedding in future agent technology. Aiming to lay a foundation for further developments, we report the results of a Delphi study highlighting the opinions of 21 multi-disciplinary domain experts. Results exhibit 14 distinctive characteristics of social intelligence, grouped into different levels of consensus, maturity, and abstraction, which may be considered a relevant basis for defining and subsequently developing socially intelligent conversational agents.
(This article belongs to the Special Issue Multimodal Conversational Interaction and Interfaces, Volume II)
Open Access Article
Learning Management System Analytics on Arithmetic Fluency Performance: A Skill Development Case in K6 Education
Multimodal Technol. Interact. 2022, 6(8), 61; https://doi.org/10.3390/mti6080061 - 22 Jul 2022
Abstract
Achieving fluency in arithmetic operations is vital if students are to develop mathematical creativity and critical thinking abilities. Nevertheless, a substantial body of literature has demonstrated that students struggle to develop such skills, due to the absence of appropriate instructional support or motivation. A proposed solution to this problem is the rapid evolution and widespread integration of educational technology into the modern school system. In particular, the Learning Management System (LMS) has been found to be especially useful in the instructional process, particularly where personalised and self-regulated learning are concerned. In the present work, we explored these topics in a longitudinal study in which 720 primary education students (4th–6th grade) from the United Arab Emirates (UAE) used an LMS at least once per week for one school year (nine months). The findings revealed that the vast majority (97% of the 6th graders, 83% of the 4th graders, and 76% of the 5th graders) demonstrated a positive improvement in their arithmetic fluency development. Moreover, a multiple linear regression analysis revealed that students need to practice deliberately for approximately 68 days (a minimum of 3 min a day) before seeing any substantial improvement in their performance. The study makes an additional contribution by demonstrating how design practices incorporating gamification and learning analytics in an LMS may help children become fluent in simple arithmetic operations. Research implications and directions are presented for educators interested in LMS-based interventions.
(This article belongs to the Special Issue Effective and Efficient Digital Learning)
Open Access Review
Design Considerations for Immersive Virtual Reality Applications for Older Adults: A Scoping Review
Multimodal Technol. Interact. 2022, 6(7), 60; https://doi.org/10.3390/mti6070060 - 20 Jul 2022
Abstract
Immersive virtual reality (iVR) has gained considerable attention recently with the increasing affordability and accessibility of the hardware. iVR applications for older adults present tremendous potential for diverse interventions and innovations. The iVR literature, however, provides a limited understanding of the design considerations and evaluations pertaining to user experience (UX). To address this gap, we present a state-of-the-art scoping review of the literature on iVR applications developed for older adults over 65 years. We searched ACM Digital Library, IEEE Xplore, Scopus, and PubMed (1 January 2010–15 December 2019) and found that 36 out of 3874 papers met the inclusion criteria. We identified 10 distinct sets of design considerations guiding choices about target users, physical configuration, hardware use, and software design. Most studies examined episodic UX; only 2 captured anticipated UX, and 7 measured longitudinal experiences. We discuss the interplay between our findings and future directions for designing effective, safe, and engaging iVR applications for older adults.
(This article belongs to the Special Issue Virtual Reality and Augmented Reality)
Open Access Article
An Interdisciplinary Design of an Interactive Cultural Heritage Visit for In-Situ, Mixed Reality and Affective Experiences
Multimodal Technol. Interact. 2022, 6(7), 59; https://doi.org/10.3390/mti6070059 - 18 Jul 2022
Abstract
Interactive technologies, such as mixed reality and natural interactions with avatars, can enhance cultural heritage and the experience of visiting a museum. In this paper, we present the design rationale of an interactive experience for a cultural heritage site in the church of Roncesvalles, at the beginning of the Camino de Santiago. We followed a participatory design process with a multidisciplinary team, which resulted in the design of a spatial augmented reality system that employs 3D projection mapping and a conversational agent acting as the storyteller. Multiple features were identified as desirable for an interactive experience: an interdisciplinary design team; in-situ presentation; mixed reality; interactive digital storytelling; an avatar; tangible objects; gestures; emotions; and groups. The findings from a workshop are presented to guide other interactive cultural heritage experiences.
(This article belongs to the Special Issue Digital Cultural Heritage (Volume II))
Open Access Article
The Appraisal Principle in Multimedia Learning: Impact of Appraisal Processes, Modality, and Codality
Multimodal Technol. Interact. 2022, 6(7), 58; https://doi.org/10.3390/mti6070058 - 18 Jul 2022
Abstract
This paper presents two experiments examining the influences of media-specific appraisal and attribution on multimedia learning. The first experiment compares four different versions of learning material (text, text with images, animation with text, and animation with audio). Results reveal that the attributed type of appraisal (i.e., the subjective impression of whether a medium is easy or difficult to learn with) impacts invested mental effort and learning outcomes. Though there was no evidence for the modality effect in the first experiment, we were able to identify it in a second study. We were also able to replicate the appraisal and attribution findings from Study 1 in Study 2: if media appraisal suggests that learning with a specific medium is difficult, more mental effort is invested in information processing. Consequently, learning outcomes are better, and learners are more likely to attribute knowledge acquisition to their own abilities. Outcomes also indicate that the modality effect can be explained by the avoidance of split attention rather than by modality-specific information processing in working memory.
Open Access Article
Harvesting Context and Mining Emotions Related to Olfactory Cultural Heritage
Multimodal Technol. Interact. 2022, 6(7), 57; https://doi.org/10.3390/mti6070057 - 18 Jul 2022
Abstract
This paper presents an Artificial Intelligence approach to mining context and emotions related to olfactory cultural heritage narratives, particularly fairy tales. We provide an overview of the role of smell and emotions in literature and highlight the importance of olfactory experience and emotions from psychological and linguistic perspectives. We introduce a methodology for extracting smells and emotions from text and demonstrate the context-based visualizations related to smells and emotions implemented in a novel smell tracker tool. The evaluation is performed using a collection of fairy tales from Grimm and Andersen. We find that fairy tales often connect smell with the emotional charge of situations. The experimental results show that we can detect smells and emotions in fairy tales with F1 scores of 91.62 and 79.2, respectively.
(This article belongs to the Special Issue Digital Cultural Heritage (Volume II))
Open Access Article
Detecting Emotions from Illustrator Gestures—The Italian Case
Multimodal Technol. Interact. 2022, 6(7), 56; https://doi.org/10.3390/mti6070056 - 17 Jul 2022
Abstract
The evolution of computers in recent years has given a strong boost to research techniques aimed at improving human–machine interaction. These techniques tend to simulate the dynamics of the human–human interaction process, which is based on our innate ability to understand the emotions of other humans. In this work, we present the design of a classifier to recognize the emotions expressed by human beings, and we discuss the results of its testing in a culture-specific case study. The classifier relies exclusively on the gestures people perform, without needing access to additional information such as facial expressions, the tone of voice, or the words spoken. The specific purpose is to test whether a computer can correctly recognize emotions starting only from gestures. More generally, the aim is to allow interactive systems to automatically change their behaviour based on the recognized mood, for example by adapting the information contents proposed or the flow of interaction, in analogy to what normally happens in interaction between humans. The paper first introduces the operating context, giving an overview of emotion recognition and the approach used. Subsequently, the relevant bibliography is described and analysed, highlighting the strengths of the proposed solution. The paper continues with a description of the design and implementation of the classifier and of the study we carried out to validate it, and it ends with a discussion of the results and a short overview of possible implications.
(This article belongs to the Special Issue Digital Cultural Heritage (Volume II))
Open Access Article
Customizing and Evaluating Accessible Multisensory Music Experiences with Pre-Verbal Children—A Case Study on the Perception of Musical Haptics Using Participatory Design with Proxies
Multimodal Technol. Interact. 2022, 6(7), 55; https://doi.org/10.3390/mti6070055 - 17 Jul 2022
Abstract
Research on Accessible Digital Musical Instruments (ADMIs) has highlighted the need for participatory design methods, i.e., actively including users as co-designers and informants in the design process. However, very little work has explored how pre-verbal children with Profound and Multiple Learning Disabilities (PMLD) can be involved in such processes. In this paper, we apply in-depth qualitative and mixed methodologies in a case study with four students with PMLD. Using Participatory Design with Proxies (PDwP), we assess how these students can be involved in the customization and evaluation of the design of a multisensory music experience intended for a large-scale ADMI. Results from an experiment focused on the communication of musical haptics highlighted the diversity of interaction strategies employed by the children, the accessibility limitations of the current multisensory experience design, and the importance of using a multifaceted variety of qualitative and quantitative methods to arrive at more informed conclusions when applying a design-with-proxies methodology.
(This article belongs to the Special Issue Musical Interactions (Volume II))
Open Access Feature Paper Article
A Study on Attention Attracting Elements of 360-Degree Videos Based on VR Eye-Tracking System
Multimodal Technol. Interact. 2022, 6(7), 54; https://doi.org/10.3390/mti6070054 - 14 Jul 2022
Abstract
In 360-degree virtual reality (VR) videos, users have increased freedom of gaze movement. As a result, their attention may not follow the narrative intended by the director, causing them to miss important parts of the 360-degree video's narrative. It is therefore necessary to study directing techniques that can attract user attention in 360-degree VR videos. In this study, we analyzed the directing elements that can attract users' attention in a 360-degree VR video and developed a 360 VR eye-tracking system to investigate the effect of these attention-attracting elements on the user. Elements that can attract user attention were classified into five categories: object movement, hand gesture, GUI insertion, camera movement, and gaze angle variation. Using the eye-tracking system, we conducted an experiment to analyze whether the user's attention moves according to these five elements. The experimental results show that 'hand gesture' attracted the second-largest attention shift among the subjects, while 'GUI insertion' induced the smallest.
Open Access Article
A Comparative Study of Methods for the Visualization of Probability Distributions of Geographical Data
Multimodal Technol. Interact. 2022, 6(7), 53; https://doi.org/10.3390/mti6070053 - 13 Jul 2022
Abstract
Probability distributions are omnipresent in data analysis. They are often used to model the natural uncertainty present in real phenomena or to describe the properties of a data set. Designing efficient visual metaphors to convey probability distributions is, however, a difficult problem. This is especially true for geographical data, where conveying the spatial context constrains the design space. While many alternatives have been proposed to solve this problem, they focus on representing data variability and are not designed to support spatial analytical tasks involving probability quantification. The present work adapts recent non-spatial approaches to the geographical context in order to support probability quantification tasks. We also present a user study that compares the efficiency of these approaches in terms of both accuracy and usability.
Open Access Article
A Framework for Stakeholders’ Involvement in Digital Productions for Cultural Heritage Tourism
Multimodal Technol. Interact. 2022, 6(7), 52; https://doi.org/10.3390/mti6070052 - 27 Jun 2022
Abstract
This paper proposes a new framework for the production and development of immersive and playful technologies in cultural heritage in which different stakeholders such as users and local communities are involved early on in the product development chain. We believe that an early stage of co-creation in the design process produces a clear understanding of what users struggle with, facilitates the creation of community ownership and helps in better defining the design challenge at hand. We show that adopting such a framework has several direct and indirect benefits, including a deeper sense of site and product ownership as direct benefits to the individual, and the creation and growth of tangential economies to the community.
(This article belongs to the Special Issue Co-Design Within and Between Communities in Cultural Heritage)
Open Access Article
Is Natural Necessary? Human Voice versus Synthetic Voice for Intelligent Virtual Agents
Multimodal Technol. Interact. 2022, 6(7), 51; https://doi.org/10.3390/mti6070051 - 27 Jun 2022
Abstract
The use of intelligent virtual agents (IVAs) to support humans in social contexts will depend on their social acceptability. Acceptance will be related to the human's perception of the IVAs as well as the IVAs' ability to respond and adapt their conversation appropriately to the human. Adaptation implies computer-generated speech (synthetic speech), such as text-to-speech (TTS). In this paper, we present the results of a study investigating the effect of voice type (human voice vs. synthetic voice) on two aspects: (1) the IVA's likeability and voice impression in the light of co-presence, and (2) the interaction outcome, including human–agent trust and behavior change intention. The experiment included 118 participants who interacted with either a virtual advisor with TTS or a virtual advisor with a human voice to gain tips for reducing their study stress. Participants found the voice of the virtual advisor with TTS to be more eerie, but they rated both agents, with recorded voice and with TTS, similarly in terms of likeability. They further showed a similar attitude towards both agents in terms of co-presence and building trust. These results challenge previous studies that favor human voice over TTS and suggest that, even if human voice is preferred, TTS can deliver equivalent benefits.
Full article
Open AccessArticle
Inter- and Transcultural Learning in Social Virtual Reality: A Proposal for an Inter- and Transcultural Virtual Object Database to be Used in the Implementation, Reflection, and Evaluation of Virtual Encounters
Multimodal Technol. Interact. 2022, 6(7), 50; https://doi.org/10.3390/mti6070050 - 25 Jun 2022
Abstract
Visual stimuli are frequently used to improve memory, language learning or perception, and understanding of metacognitive processes. However, in virtual reality (VR), there are few systematically and empirically derived databases. This paper proposes the first collection of virtual objects based on empirical evaluation for inter- and transcultural encounters between English- and German-speaking learners. We used explicit and implicit measurement methods to identify cultural associations and the degree of stereotypical perception for each virtual stimulus (n = 293) through two online studies with native German- and native English-speaking participants. The analysis resulted in a final well-describable database of 128 objects (called the InteractionSuitcase). In future applications, the objects can serve as a rich interaction and conversation asset and as a behavioral measurement tool in social VR applications, especially in the field of foreign language education. For example, interlocutors can use the objects to describe their culture, or teachers can intuitively assess stereotyped attitudes during the encounters.
Full article
(This article belongs to the Special Issue Virtual Reality and Augmented Reality)
Open AccessArticle
Vocational Training in Virtual Reality: A Case Study Using the 4C/ID Model
Multimodal Technol. Interact. 2022, 6(7), 49; https://doi.org/10.3390/mti6070049 - 24 Jun 2022
Abstract
Virtual reality (VR) is an emerging technology with a variety of potential benefits for vocational training. This paper therefore presents a VR training application based on the well-validated 4C/ID model to train vocational competencies in the field of vehicle painting. The following 4C/ID components were designed using the associated ten-step approach: learning tasks, supportive information, procedural information, and part-task practice. The paper describes the instructional design process, including an elaborated blueprint for a VR training application for aspiring vehicle painters. We explain the model’s principles and features and their suitability for designing VR vocational training that fosters integrated competence acquisition. Following the methodology of design-based research, several research methods (e.g., a target group analysis) and the ongoing development of prototypes enabled agile process structures. Results indicate that the 4C/ID model and the ten-step approach support the instructional design process when using VR in vocational training. Implementation and methodological issues that arose during the design process (e.g., limited time within VR) are discussed in the article.
Full article
(This article belongs to the Special Issue Virtual Reality and Augmented Reality)
Topics
Topic in
Entropy, Future Internet, Algorithms, Computation, MAKE, MTI
Interactive Artificial Intelligence and Man-Machine Communication
Topic Editors: Christos Troussas, Cleo Sgouropoulou, Akrivi Krouska, Ioannis Voyiatzis, Athanasios Voulodimos
Deadline: 11 December 2022
Topic in
AI, Algorithms, Information, MTI, Sensors
Lightweight Deep Neural Networks for Video Analytics
Topic Editors: Amin Ullah, Tanveer Hussain, Mohammad Farhad Bulbul
Deadline: 31 December 2023
Special Issues
Special Issue in
MTI
User Interfaces for Cyclists
Guest Editors: Andrii Matviienko, Markus Löchtefeld
Deadline: 31 August 2022
Special Issue in
MTI
Cooperative Intelligence in Automated Driving
Guest Editors: Ronald Schroeter, Andreas Riener, Myounghoon Jeon (Philart)
Deadline: 10 November 2022
Special Issue in
MTI
Interaction Design and the Automated City – Emerging Urban Interfaces, Prototyping Approaches and Design Methods
Guest Editors: Marius Hoggenmueller, Martin Tomitsch, Jessica R. Cauchard, Luke Hespanhol, Maria Luce Lupetti, Ronald Schroeter, Sharon Yavo-Ayalon, Alexander Wiethoff, Stewart Worrall
Deadline: 30 November 2022
Special Issue in
MTI
3D Human–Computer Interaction (Volume II)
Guest Editors: Arun K. Kulshreshth, Christoph W. Borst
Deadline: 15 December 2022