
Table of Contents

Multimodal Technologies Interact., Volume 4, Issue 3 (September 2020) – 34 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
Open Access Article
Aligning Realities: Correlating Content between Projected and Head Worn Displays
Multimodal Technologies Interact. 2020, 4(3), 67; https://doi.org/10.3390/mti4030067 - 16 Sep 2020
Viewed by 226
Abstract
Enabling the effective representation of an object's position and depth in augmented reality (AR) is crucial not just for realism, but also for AR's wider utilization in real-world applications. Domains such as architecture and building design cannot leverage AR's advantages without the effective representation of position. Prior work has examined how the human visual system perceives and interprets such cues in AR, but it has focused on application systems that use only a single AR modality, i.e., head-mounted display, tablet/handheld, or projection. Given the respective limitations of each modality regarding shared experience, stereo display, field of view, etc., prior work has ignored the possible benefits of utilizing multiple AR modalities together. By using multiple AR systems together, we can attempt to address the deficiencies of one modality by leveraging the features of the others. This work examines methods for representing position in a multi-modal AR system consisting of a stereo head-mounted display and a ceiling-mounted projection system. Given that the AR content is rendered across two separate AR realities, how does the user know which projected object matches the object shown in their head-mounted display? We explore representations to correlate and fuse objects across modalities. In this paper, we review previous work on position and depth in AR before describing multiple representations for head-mounted and projector-based AR that can be paired together across modalities. To the authors' knowledge, this work represents the first step towards utilizing multiple AR modalities in which the AR content is designed directly to complement deficiencies in the other modality. Full article
Show Figures

Figure 1

Open Access Article
Designing for Temporal Harmony: Exploring the Well-Being Concept for Designing the Temporal Dimension of User Experience
Multimodal Technologies Interact. 2020, 4(3), 66; https://doi.org/10.3390/mti4030066 - 13 Sep 2020
Viewed by 320
Abstract
User Experience (UX) is characterized by its temporal dimension, dynamic nature, and variability. Although descriptive models about the temporal dimension and related aspects exist, an understanding of the design possibilities and a design approach that ensures the design of the temporal dimension promoting a positive UX and well-being are still lacking. This paper addresses this research gap and builds on Zimbardo and Boyd's Time Perspective Theory (TPT). TPT presents five time perspectives (TPs)—Past-Negative, Past-Positive, Present-Fatalistic, Present-Hedonistic, and Future—to reveal that people have individual attitudes toward time that influence their thoughts, actions, and feelings. Studies conclude that a balance between the positive TPs (Past-Positive, Present-Hedonistic, and Future), i.e., temporal harmony, contributes to long-term well-being. We present our design framework and approach "designing for temporal harmony," which incorporates the theory into practice to highlight the temporal design possibilities and to offer guidance for designers. We applied the design framework and approach to a case study, developed an app concept, and evaluated it with users. The results demonstrate that it is possible to systematically develop temporal UX concepts that evoke positive anticipations, experiences, and retrospections, and that these promote a positive UX as well as contribute to users' long-term well-being. Full article
Show Figures

Figure 1

Open Access Article
Exploring the Effect of Training in Visual Block Programming for Preservice Teachers
Multimodal Technologies Interact. 2020, 4(3), 65; https://doi.org/10.3390/mti4030065 - 12 Sep 2020
Viewed by 308
Abstract
This study evaluates the effectiveness of visual block programming-based instruction and its possibilities in the training of future teachers. In particular, Scratch, a visual programming environment, was employed to introduce pre-service teachers to programming. The study followed a mixed-method design with a sample of 79 pre-service teachers. A quantitative approach was used to evaluate the gains in the participants' knowledge of computational concepts and their attitudes towards Scratch as a pedagogic tool. A qualitative analysis evaluated the participants' knowledge of programming applications and their perceptions of possible difficulties in implementing programming in educational contexts. Positive results were obtained for programming in the classroom, with significant improvements in innovation, collaboration, active learning, motivation, and fun for the students. After the experiment, the subjects highlighted Scratch as a fundamental block programming tool and the need for teacher training in this field. These findings support the need to improve the implementation of visual block programming in Education Degree curricula. Full article
Show Figures

Figure 1

Open Access Article
Exploring the Power of Multimodal Features for Predicting the Popularity of Social Media Image in a Tourist Destination
Multimodal Technologies Interact. 2020, 4(3), 64; https://doi.org/10.3390/mti4030064 - 05 Sep 2020
Viewed by 309
Abstract
Social media platforms are widely used nowadays by various businesses to promote their products and services through multimedia content. Instagram is one such platform, used not only by companies to promote their products but also by local governments to promote tourist destinations. Predicting the popularity of promotional tourist-destination images helps marketers to plan strategically. However, given the abundance of images posted on Instagram daily, identifying the factors that determine the popularity of an image is a big challenge, due to informal and noisy visual content, frequent content evolution, a lack of explicit visual elements, and people's informal behavior in liking, commenting on, and viewing the images. We present an approach to identify the factors most responsible for the popularity of tourist-destination-related images on Instagram. Our approach provides a proof of concept for an artificial intelligence (AI)-based real-time content management system to help promote a tourist destination. Experiments on a collection of posts crawled from the official Instagram account of Jeju Island, one of the most popular tourist destinations in Korea, show that the recency of the post is the most important predictor of the number of likes and comments it will receive. Moreover, the combination of visual content and context features is an excellent predictor of popularity. The numbers of likes and comments are found to be complementary to each other for predicting image popularity. Full article
Show Figures

Figure 1
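The abstract above reports that a post's recency dominates visual and context features when predicting likes. As a purely illustrative sketch (the feature names, weights, and linear scoring form below are hypothetical, not the paper's actual model), a toy scorer in Python shows how such features might be combined and how recency can outweigh content quality:

```python
from dataclasses import dataclass

@dataclass
class Post:
    hours_since_upload: float  # recency: newer posts should rank higher
    visual_score: float        # e.g., scenery/landmark presence, in 0..1
    context_score: float       # e.g., hashtag/caption relevance, in 0..1

def popularity_score(p: Post) -> float:
    """Toy linear model in which recency carries the largest weight."""
    recency = 1.0 / (1.0 + p.hours_since_upload / 24.0)  # decays with age
    return 0.6 * recency + 0.25 * p.visual_score + 0.15 * p.context_score

posts = [Post(2.0, 0.8, 0.7), Post(120.0, 0.9, 0.9)]
ranked = sorted(posts, key=popularity_score, reverse=True)
# The two-hour-old post outranks the five-day-old one despite its
# weaker visual and context features.
```

In a real system the weights would be learned from crawled engagement data rather than fixed by hand.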

Open Access Review
Measuring Trust with Psychophysiological Signals: A Systematic Mapping Study of Approaches Used
Multimodal Technologies Interact. 2020, 4(3), 63; https://doi.org/10.3390/mti4030063 - 01 Sep 2020
Viewed by 288
Abstract
Trust plays an essential role in all human relationships. However, measuring trust remains a challenge for researchers exploring psychophysiological signals. Therefore, this article aims to systematically map the approaches used in studies assessing trust with psychophysiological signals. In particular, we examine the numbers and frequency of combined psychophysiological signals, the primary outcomes of previous studies, and the types and most commonly used data analysis techniques for analyzing psychophysiological data to infer a trust state. For this purpose, we employ a systematic mapping review method, through which we analyze 51 carefully selected articles (studies focused on trust using psychophysiology). The significant findings are as follows: (1) psychophysiological signals from EEG (electroencephalogram) and ECG (electrocardiogram), monitoring the central and peripheral nervous systems, are the most frequently used to measure trust, while audio and EOG (electro-oculography) signals are the least commonly used; (2) the maximum number of psychophysiological signals combined so far is three, most of which are peripheral-nervous-system signals with low spatial resolution; (3) regarding outcomes, only one tool has been proposed for assessing trust in an interpersonal context, excluding trust in a technology context, and no stable and accurate ensemble models have been developed to assess trust—all prior attempts led to unstable but fairly accurate models or did not satisfy the conditions for combining several algorithms into an ensemble. In conclusion, the extent to which trust can be assessed using psychophysiological measures during user interactions (in real time) remains unknown, as several issues, such as the lack of a stable and accurate ensemble trust classifier model, require urgent research attention. Although this topic is relatively new, much work has been done; more remains to be done, however, to provide clarity on this topic. Full article
Show Figures

Figure 1
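The review notes that no stable ensemble model for inferring trust from psychophysiological signals has yet been built. For readers unfamiliar with the idea, a minimal majority-vote ensemble over per-signal classifiers can be sketched as follows (the signal names, thresholds, and decision rules are invented for illustration and are not taken from the reviewed studies):

```python
def eeg_votes_trust(sample: dict) -> bool:
    # Hypothetical rule on an EEG-derived feature
    return sample["eeg_alpha"] > 0.5

def ecg_votes_trust(sample: dict) -> bool:
    # Hypothetical rule on heart-rate variability (ECG-derived)
    return sample["hrv"] > 40.0

def eda_votes_trust(sample: dict) -> bool:
    # Hypothetical rule on electrodermal activity
    return sample["eda"] < 2.0

CLASSIFIERS = [eeg_votes_trust, ecg_votes_trust, eda_votes_trust]

def ensemble_predicts_trust(sample: dict) -> bool:
    """Majority vote across the individual per-signal classifiers."""
    votes = sum(clf(sample) for clf in CLASSIFIERS)
    return votes >= 2

sample = {"eeg_alpha": 0.7, "hrv": 55.0, "eda": 3.5}
# Two of the three classifiers vote "trust", so the ensemble predicts trust.
```

The review's point is precisely that assembling such per-signal models into a stable, accurate combined classifier remains an open problem.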

Open Access Article
Rhythmic Synchrony with Artificial Agents and Its Effects on Frequency of Visual Illusions Seen in White Noise
Multimodal Technologies Interact. 2020, 4(3), 62; https://doi.org/10.3390/mti4030062 - 01 Sep 2020
Viewed by 509
Abstract
Rhythmic synchrony among different individuals has often been observed in various religious rituals, and it is known to have various psychological effects on the human mind. This study investigated the effects of induced rhythmic synchrony with artificial agents in drumming on participants' visual illusions. The participants completed a task with three cartoon agents on a computer screen beating drums in turns. We then investigated whether participants tended to find more meaningful shapes in displayed random dots (pareidolia) when the rhythms of the intervals between each agent's drumbeats were in sync rather than out of sync. We simultaneously compared an active condition, in which participants took the role of one of the three agents and beat a drum, with a passive condition, in which they only observed the three agents beating the drums. The results showed that pareidolia appeared more strongly in participants when the drum rhythm was in sync, regardless of the active or passive condition. Full article
Show Figures

Figure 1

Open Access Article
Mid-Air Gesture Control of Multiple Home Devices in Spatial Augmented Reality Prototype
Multimodal Technologies Interact. 2020, 4(3), 61; https://doi.org/10.3390/mti4030061 - 31 Aug 2020
Viewed by 301
Abstract
Touchless, mid-air gesture-based interactions with remote devices have been investigated as alternatives or complements to interactions based on remote controls and smartphones. Related studies focus on user elicitation of a gesture vocabulary for one or a few home devices and explore recommendations of respective gesture vocabularies without validating them by empirical testing with interactive prototypes. We have developed an interactive prototype, based on spatial Augmented Reality (AR), of seven home devices. Each device responds to touchless gestures (identified from a previous elicitation study) via the MS Kinect sensor. Nineteen users participated in a two-phase test (with and without help provided by a virtual assistant) following a scenario that required each user to apply 41 gestural commands (19 unique). We report on the main usability indicators: task success, task time, errors (false negatives/positives), memorability, perceived usability, and user experience. The main conclusion is that mid-air interaction with multiple home devices is feasible, fairly easy to learn and apply, and enjoyable. The contributions of this paper are (a) validation of a previously elicited gesture set; (b) development of a spatial AR prototype for testing mid-air gestures; and (c) an extensive assessment of gestures and evidence in favor of mid-air interaction in smart environments. Full article
Show Figures

Figure 1
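The gesture study reports usability indicators such as task success, task time, and recognition errors. A minimal sketch of how such indicators might be tallied from per-trial logs (the log field names and values below are hypothetical, not the study's data):

```python
# Hypothetical trial log: one entry per attempted gestural command.
trials = [
    {"gesture": "volume_up", "recognized": True,  "time_s": 1.8},
    {"gesture": "lights_on", "recognized": False, "time_s": 3.2},  # false negative
    {"gesture": "tv_off",    "recognized": True,  "time_s": 2.1},
]

def success_rate(trials: list) -> float:
    """Fraction of commands the system recognized correctly."""
    return sum(t["recognized"] for t in trials) / len(trials)

def mean_task_time(trials: list) -> float:
    """Mean completion time over successful trials only."""
    done = [t["time_s"] for t in trials if t["recognized"]]
    return sum(done) / len(done)

print(f"success: {success_rate(trials):.0%}, mean time: {mean_task_time(trials):.2f}s")
```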

Open Access Article
Design and Evaluation of SONIS, a Wearable Biofeedback System for Gait Retraining
Multimodal Technologies Interact. 2020, 4(3), 60; https://doi.org/10.3390/mti4030060 - 28 Aug 2020
Viewed by 289
Abstract
Herein, we introduce SONIS, a wearable system to support gait rehabilitation training after a lower extremity trauma, which combines a sensing sock with a smartphone application. SONIS provides interactive, corrective, real-time feedback combining visual and auditory cues. We report the design of SONIS and its evaluation by patients and therapists, which indicates acceptance by targeted users, credibility as a rehabilitation tool, and a positive user experience. SONIS demonstrates how to successfully combine a number of feedback strategies and modalities: graphical, verbal, and music feedback on gait quality during training (knowledge of performance) and verbal and vibrotactile feedback on gait tracking (knowledge of results). Full article
Show Figures

Figure 1

Open Access Article
Pre-Clinical Proof-of-Concept Study of a Bladder Irrigation Feedback System for Gross Haematuria in a Lab Setup
Multimodal Technologies Interact. 2020, 4(3), 59; https://doi.org/10.3390/mti4030059 - 23 Aug 2020
Viewed by 313
Abstract
Conventional continuous bladder irrigation (CBI) systems used in Urology have been labor-intensive and challenging for healthcare workers to manage consistently, due to inter-observer variability in interpreting the blood concentration in the drainage fluid. The team has developed a feedback system to control the saline flow rate. It consists of a sensor probe that measures blood concentration in the drainage fluid by measuring the light intensity absorbed by the samples, and a gripper that adjusts the saline flow rate based on the detected blood concentration. Results show that a probe using a green LED can measure blood concentrations between 0 and 18 percent, and that the gripper actuates according to the detected blood concentration values. This quantification process reduces or even eliminates human error arising from the subjective assessments of individual medical professionals. Full article
Show Figures

Figure 1
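The probe infers blood concentration from how much light the drainage fluid absorbs. One plausible implementation, sketched here purely for illustration (the calibration points are invented; the paper's actual signal processing is not given in this listing), maps a measured transmitted-light intensity to a concentration by linear interpolation over a calibration curve:

```python
# Hypothetical calibration: (normalized transmitted green-light intensity, blood %).
# Lower transmitted intensity -> more light absorbed -> higher concentration.
CALIBRATION = [(1.00, 0.0), (0.70, 4.5), (0.45, 9.0), (0.25, 13.5), (0.10, 18.0)]

def blood_concentration(intensity: float) -> float:
    """Linearly interpolate blood concentration from an intensity reading."""
    pts = sorted(CALIBRATION, reverse=True)  # intensity in descending order
    if intensity >= pts[0][0]:
        return pts[0][1]    # clearer than cleanest calibration sample
    if intensity <= pts[-1][0]:
        return pts[-1][1]   # darker than darkest calibration sample
    for (i_hi, c_lo), (i_lo, c_hi) in zip(pts, pts[1:]):
        if i_lo <= intensity <= i_hi:
            frac = (i_hi - intensity) / (i_hi - i_lo)
            return c_lo + frac * (c_hi - c_lo)
```

The controller could then choose a saline flow rate from the estimated concentration, closing the feedback loop the abstract describes.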

Open Access Article
Tolerance for Uncertainty and Patterns of Decision-Making in Complex Problem-Solving Strategies
Multimodal Technologies Interact. 2020, 4(3), 58; https://doi.org/10.3390/mti4030058 - 22 Aug 2020
Viewed by 284
Abstract
Current studies of complex problem-solving do not commonly evaluate the regulatory role of such personality-based variables as tolerance for uncertainty, risk-readiness, and patterns for coping with decisional conflict. This research aims to establish the contribution of those traits to individual parameters of complex problem-solving strategies. The study was conducted on 53 healthy individuals aged 17 to 29 years (M = 20.42; SD = 2.34). Our own computerized complex problem task, "The Anthill," was developed for this research. We identified five measurable parameters of the participants' problem-solving strategies: preferred orientational level (POL); orientational level variability (OLV); class quotas' range (R); mean and median quotas shift (MS and MeS); and abrupt changes of strategy (AC). Psychodiagnostic methods included a new questionnaire of tolerance/intolerance for uncertainty, a personal decision-making factors questionnaire, the Melbourne Decision Making Questionnaire, the Subjective Risk Intelligence Scale, and Eysenck's Impulsiveness Scale. The study showed the role of tolerance for uncertainty, risk-readiness, negative attitude toward uncertainty, and decision-making styles in the regulation of complex problem-solving strategies. Specifically, procrastination, tolerance for uncertainty, and risk-readiness were significant predictors of individual strategy indicators, such as POL, OLV, and MeS. Thus, personality traits were shown to regulate resource allocation strategies and the required level of orientation in a complex problem. Full article
Show Figures

Figure 1

Open Access Article
UUX Evaluation of a Digitally Advanced Human–Machine Interface for Excavators
Multimodal Technologies Interact. 2020, 4(3), 57; https://doi.org/10.3390/mti4030057 - 20 Aug 2020
Viewed by 422
Abstract
With the evaluation of a next-generation human–machine interface (HMI) concept for excavators, this study aims to discuss the HMI quality measurement based on usability and user experience (UUX) metrics. Regarding the digital transformation of construction sites, future work environments will have to be capable of presenting various complex visual data and enabling efficient and safe interactivity while working. The evaluated HMI focused on introducing a touch display-based interface, providing advanced operation functions and different interaction modalities. The assessment of UUX should show whether the novel HMI can be utilised to perform typical tasks (usability) and how it is accepted and assessed in terms of non-instrumental qualities (user experience, UX). Using the collected data, this article also aims to contribute to the general discussion about the role of UX beyond usability in industrial applications and deepen the understanding of non-instrumental qualities when it comes to user-oriented process and machine design. The exploratory study examines insights into the application of elaborated UUX measuring tools like the User Experience Questionnaire (UEQ) on the interaction with industrial goods accompanied by their rating with other tools, namely System Usability Scale (SUS), Intuitive Interaction Questionnaire (INTUI) and the National Aeronautics and Space Administration (NASA) Task Load Index (NASA-TLX). Four goals are pursued in this study. The first goal is to compare in-depth two different ways of interaction with the novel HMI—namely one by a control pad on the right joystick and one by touch. Therefore, a sample of 17 subjects in total was split into two groups and differences in UUX measures were tested. Secondly, the performances of both groups were tested over the course of trials to investigate possible differences in detail. The third goal is to interpret measures of usability and user experience against existing benchmark values. 
Fourth and finally, we use the gathered data to analyse correlations between measures of UUX. The results of our study show that the different ways of interaction did not impact any of the measures taken. The detailed performance analysis showed differences in time per action over the course of the trials, but not between the groups. The comparison of UUX measures with benchmark values yielded mixed results. The UUX measures show some relevant significant correlations. The participants mostly reported enjoying the use of the HMI concept, but several practical issues (e.g., efficiency) still need to be overcome. Once again, the study confirms the need for user inclusion in product development, especially in the course of digitalisation, as large-scale advancements of systems and user interfaces leave many manufacturers uncertain about whether or how a feature should be integrated. Full article
(This article belongs to the Special Issue Understanding UX through Implicit and Explicit Feedback)
Show Figures

Figure 1

Open Access Article
The Chongchong Step Master Game for Gait and Balance Training
Multimodal Technologies Interact. 2020, 4(3), 56; https://doi.org/10.3390/mti4030056 - 18 Aug 2020
Viewed by 284
Abstract
Exercise can help to improve health, strengthen vitality, and prevent brain disease, especially for the elderly. Exercise games, or exergames, which combine exercise and video gaming, train people in a fun and competitive manner to lead a healthy lifestyle. Exergames promote more physical effort and have the potential to contribute to physical education. This research presents a full-body virtual reality exercise game called the Chongchong Step Master, which is designed to improve gait and balance function and prevent dementia in the elderly. The system uses Kinect sensors to accurately recognize the user's body movements and a stepping-board mat to recognize and guide the user's walking motion. It aims to help the elderly exercise more easily and independently with the virtual physical trainer. Full article
(This article belongs to the Special Issue Personal Health, Fitness Technologies, and Games)
Show Figures

Figure 1

Open Access Article
Adapting a Virtual Advisor’s Verbal Conversation Based on Predicted User Preferences: A Study of Neutral, Empathic and Tailored Dialogue
Multimodal Technologies Interact. 2020, 4(3), 55; https://doi.org/10.3390/mti4030055 - 17 Aug 2020
Viewed by 386
Abstract
Virtual agents that improve the lives of humans need to be more than user-aware and adaptive to the user's current state and behavior. Additionally, they need to apply expertise gained from experience, driving their adaptive behavior based on a deep understanding of the user's features (such as gender, culture, personality, and psychological state). Our work extends the FAtiMA (Fearnot AffecTive Mind Architecture) cognitive agent architecture with an Adaptive Engine. We use machine learning to acquire the agent's expertise, capturing a collection of user profiles into a user model and developing the agent's expertise based on that model. In this paper, we describe a study to evaluate the Adaptive Engine, which compares the benefit (i.e., reduced stress, increased rapport) of tailoring dialogue to the specific user (Adaptive group) with dialogues that are either empathic (Empathic group) or neutral (Neutral group). Results showed a significant reduction in stress in the empathic and neutral groups, but not in the adaptive group. Analyses of rule accuracy, participants' dialogue preferences, and individual differences reveal that the three groups had different needs for empathic dialogue and highlight the importance and challenges of getting the tailoring right. Full article
(This article belongs to the Special Issue Understanding UX through Implicit and Explicit Feedback)
Show Figures

Figure 1

Open Access Article
Game-Like 3D Visualisation of Air Quality Data
Multimodal Technologies Interact. 2020, 4(3), 54; https://doi.org/10.3390/mti4030054 - 17 Aug 2020
Viewed by 313
Abstract
The data produced by sensor networks for urban air quality monitoring are becoming a valuable asset for informed, health-aware human activity planning. However, in order to properly explore and exploit these data, citizens need intuitive and effective ways of interacting with them. This paper presents CityOnStats, a visualisation tool developed to provide users, mainly adults and young adults, with a game-like 3D environment populated with air quality sensing data, as an alternative to traditionally passive visualisation techniques. CityOnStats provides several visual cues of pollution presence with the purpose of meeting each user's preferences. Usability tests with a sample of 30 participants have shown the value of game-based 3D air quality visualisation and have provided empirical support for which visual cues are most adequate for the task at hand. Full article
Show Figures

Figure 1

Open Access Article
Emotion and Interaction Control: A Motive-Based Approach to Media Choice in Socio-Emotional Communication
Multimodal Technologies Interact. 2020, 4(3), 53; https://doi.org/10.3390/mti4030053 - 15 Aug 2020
Viewed by 372
Abstract
A large part of everyday communication is mediated by technology, with a constantly growing number of choices. Accordingly, how people choose between different communication media is a long-standing research question. However, while prominent media theories focus on how media characteristics affect communication performance, the underlying psychological motives of media choice, and how different technologies comply with them, are less considered. We propose a theoretical framework that links media characteristics with people's intentions to influence communication and present a qualitative study on reasons for media choice in socio-emotional situations. An analysis through the lens of the framework illustrates how users employ media to establish control over the interactional speed and emotional intensity of communication and thereby regulate their communication experience. Besides an advanced theoretical understanding, the present analysis provides a basis for a conscious design of communication media, to deliberately shape the way people interact with technology and each other. Full article
Show Figures

Figure 1

Open Access Article
Multi-Domain Recognition of Hand-Drawn Diagrams Using Hierarchical Parsing
Multimodal Technologies Interact. 2020, 4(3), 52; https://doi.org/10.3390/mti4030052 - 14 Aug 2020
Viewed by 294
Abstract
This paper presents an approach for the recognition of multi-domain hand-drawn diagrams, which exploits Sketch Grammars (SkGs) to model the symbols' shape and the abstract syntax of diagrammatic notations. The recognition systems automatically generated from SkGs process the input sketches in the following phases: the user's strokes are first segmented and interpreted as primitive shapes; then, by exploiting the domain context, they are clustered into symbols of the domain; and, finally, an interpretation of the whole diagram is given. The main contribution of this paper is an efficient model of parsing suitable for both interactive and non-interactive sketch-based interfaces, configurable to different domains, and able to exploit contextual information for improving recognition accuracy and solving interpretation ambiguities. The proposed approach was evaluated in the domain of UML class diagrams, obtaining good results in terms of recognition accuracy and usability. Full article
Open AccessArticle
Integrating Gamification and Social Interaction into an AR-Based Gamified Point System
Multimodal Technologies Interact. 2020, 4(3), 51; https://doi.org/10.3390/mti4030051 - 13 Aug 2020
Viewed by 318
Abstract
A loyalty program is an important link between consumers and merchants in daily consumption. While new technologies (e.g., gamification, social networks, and augmented reality) make it possible to strengthen this bond, their potential has not yet been fully exploited. In our research, we explore a novel approach to integrating gamification and social interaction into a loyalty program based on augmented reality (AR). We propose an AR-based gamified point system which provides users with pet-based dynamic feedback on mobile devices and offers a multi-user environment for social interaction. Compared to traditional point systems, pet-based designs help to establish an emotional connection between users and the system. The multi-user environment is designed to increase the user's motivation through the positive effects of social interactions. Our system provides interpersonal communication channels between users, including competitive and non-competitive interactions. We performed an evaluation consisting of two experiments to examine the effects of game elements (mission and feedback) and social cues (competitive and non-competitive interactions). In the first experiment, we analyze the change in online shopping behavior before and after adding game elements. The results show that gamification can increase user participation in online shopping. In the second experiment, we study the effects of social cues. The results show that social cues can motivate users to use a gamified point system. Full article
Open AccessArticle
Factors Influencing Viewing Behavior in Live Streaming: An Interview-Based Survey of Music Fans
Multimodal Technologies Interact. 2020, 4(3), 50; https://doi.org/10.3390/mti4030050 - 13 Aug 2020
Viewed by 339
Abstract
V Live is a live-streaming service launched by a South Korean IT company in August 2015. The service provides diverse video content, particularly entertainment content. Most V Live users are K-pop fans, and they actively express emotions about V Live content by writing comments, pressing "hearts", and sharing videos. Based on Uses and Gratifications theory, this study investigated why people use live-streaming services and the factors influencing users' viewing behavior in live streaming. We conducted in-depth interviews with V Live users and, based on the results of the interviews, ran an online survey. As a result, six factors—"Interpersonal relationship motivation", "Social presence motivation", "Celebrity support motivation", "Celebrity presence motivation", "Social interaction motivation", and "Differentiation motivation"—were derived as motivations to use V Live. While only "Social presence motivation" and "Differentiation motivation" were shown to mediate the relationship between fans' fanship and V Live viewing time, all of the motivations were shown to mediate the relationship between fans' fanship and V Live viewing participation. Full article
Open AccessArticle
Towards Explainable and Sustainable Wow Experiences with Technology
Multimodal Technologies Interact. 2020, 4(3), 49; https://doi.org/10.3390/mti4030049 - 11 Aug 2020
Viewed by 407
Abstract
Interacting with technology can evoke various positive and negative reactions in users. An outstandingly positive user experience enabled by interactive technology is often referred to as a "wow experience" in design practice and research. Such experiences are considered to be emotional, memorable, and highly desirable. Surprisingly, wow experiences have not received much attention in design research. In this study, we try to gain a more in-depth understanding of how wow experiences are caused. Through an exploratory factor analysis, we identify six factors contributing to wow experiences with interactive technology: hygiene, goal attainment, uniqueness, relevance, emotional fingerprint, and inspiration. We propose an integrated model of wow experience and a prototype questionnaire to measure wow experiences with interactive products based on the identified factors. Full article
Open AccessBrief Report
Exploring the Use of Virtual Characters (Avatars), Live Animation, and Augmented Reality to Teach Social Skills to Individuals with Autism
Multimodal Technologies Interact. 2020, 4(3), 48; https://doi.org/10.3390/mti4030048 - 11 Aug 2020
Viewed by 472
Abstract
Individuals with autism and other developmental disabilities struggle to acquire and appropriately use social skills to improve the quality of their lives. These critical skills can be difficult to teach because they are context dependent and many students are not motivated to engage in instruction to learn them. The use of multi-modal technologies shows promise in teaching a variety of skills to individuals with disabilities. iAnimate Live is a project that makes virtual environments, virtual characters (avatars), augmented reality, and animation more accessible for teachers and clinicians. These emerging technologies have the potential to provide more efficient, portable, accessible, and engaging instructional materials to teach a variety of social skills. After reviewing the relevant research on using virtual environments, virtual characters (avatars), and animation for social skills instruction, this article describes current experimental applications exploring their use via the iAnimate Live project. Full article
Open AccessArticle
Comparative Study of Machine Learning Algorithms to Classify Hand Gestures from Deployable and Breathable Kirigami-Based Electrical Impedance Bracelet
Multimodal Technologies Interact. 2020, 4(3), 47; https://doi.org/10.3390/mti4030047 - 07 Aug 2020
Viewed by 423
Abstract
Wearable devices are gaining recognition for their use as a biosensor platform. Electrical impedance tomography (EIT) is one of the sensing techniques that utilizes wearable sensors as its primary data acquisition system. It measures the impedance or resistance at the peripheral (skin) level and calculates the conductivity distribution throughout the body. Even though the technology has existed for several decades, modern-day EIT devices are still costly and bulky. The paper proposes a novel low-cost kirigami-based wearable device that has soft PEDOT:PSS electrodes for sensing skin impedances. Simulation results show that the proposed kirigami structure for the bracelet undergoes a large deformation during actuation while experiencing relatively low stress. The paper also presents a comparative study of several machine learning algorithms for classifying hand gestures based on the measured skin impedance. The best classification accuracy (91.49%) was observed from the quadratic support vector machine (SVM) algorithm with 48 principal components. Full article
Open AccessArticle
A Multimodal Facial Emotion Recognition Framework through the Fusion of Speech with Visible and Infrared Images
Multimodal Technologies Interact. 2020, 4(3), 46; https://doi.org/10.3390/mti4030046 - 06 Aug 2020
Viewed by 429
Abstract
The exigency of emotion recognition is pushing the envelope for meticulous strategies of discerning actual emotions through the use of superior multimodal techniques. This work presents a multimodal automatic emotion recognition (AER) framework capable of differentiating between expressed emotions with high accuracy. The contribution involves implementing an ensemble-based approach for the AER through the fusion of visible images and infrared (IR) images with speech. The framework is implemented in two layers, where the first layer detects emotions using single modalities while the second layer combines the modalities and classifies emotions. Convolutional Neural Networks (CNN) have been used for feature extraction and classification. A hybrid fusion approach comprising early (feature-level) and late (decision-level) fusion, was applied to combine the features and the decisions at different stages. The output of the CNN trained with voice samples of the RAVDESS database was combined with the image classifier’s output using decision-level fusion to obtain the final decision. An accuracy of 86.36% and similar recall (0.86), precision (0.88), and f-measure (0.87) scores were obtained. A comparison with contemporary work endorsed the competitiveness of the framework with the rationale for exclusivity in attaining this accuracy in wild backgrounds and light-invariant conditions. Full article
(This article belongs to the Special Issue Multimodal Emotion Recognition)
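The decision-level (late) fusion step the abstract describes—combining each modality classifier's output into a final decision—can be illustrated with a small sketch. The emotion labels, probability values, and weighting scheme below are hypothetical stand-ins, not the paper's trained CNN outputs; the sketch only shows the fusion mechanic.

```python
# Hypothetical sketch of decision-level fusion: each modality's classifier
# emits class probabilities; the final decision combines them.
import numpy as np

EMOTIONS = ["angry", "happy", "neutral", "sad"]

def fuse_decisions(prob_list, weights=None):
    """Average per-modality class probabilities (optionally weighted)
    and return the winning class label plus the fused distribution."""
    probs = np.asarray(prob_list, dtype=float)
    if weights is None:
        weights = np.ones(len(probs))
    fused = np.average(probs, axis=0, weights=weights)
    fused /= fused.sum()  # renormalize to a probability distribution
    return EMOTIONS[int(np.argmax(fused))], fused

# Illustrative softmax outputs from three modality classifiers
# (visible image, infrared image, speech).
visible  = [0.10, 0.60, 0.20, 0.10]
infrared = [0.15, 0.45, 0.30, 0.10]
speech   = [0.05, 0.50, 0.25, 0.20]

label, fused = fuse_decisions([visible, infrared, speech])
print(label)
```

Weighted averaging is one common late-fusion rule; product rules or a meta-classifier over the stacked probabilities are alternatives under the same interface.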
Open AccessArticle
Design and Initial Testing of an Affordable and Accessible Smart Compression Garment to Measure Physical Activity Using Conductive Paint Stretch Sensors
Multimodal Technologies Interact. 2020, 4(3), 45; https://doi.org/10.3390/mti4030045 - 01 Aug 2020
Viewed by 449
Abstract
Motion capture and the measurement of physical activity are common practices in the fields of physical therapy, sports medicine, biomechanics, and kinesiology. The data collected by these systems can be very important to understand how someone is recovering or how effective various assistive devices may be. Traditional motion capture systems are very expensive and only allow for data collection to be performed in a lab environment. In our previous research, we tested the validity of a novel stitched stretch sensor using conductive thread. This paper furthers that research by validating a smart compression garment with integrated conductive paint stretch sensors to measure movement. These sensors are very inexpensive to fabricate and, when paired with an open-source wireless microcontroller, can enable a more affordable, accessible, and comfortable form of motion capture. A wearable garment like the one tested in this study could allow us to understand how meaningful, functional activities are performed in a natural setting. Full article
Open AccessArticle
A Framework on Division of Work Tasks between Humans and Robots in the Home
Multimodal Technologies Interact. 2020, 4(3), 44; https://doi.org/10.3390/mti4030044 - 27 Jul 2020
Viewed by 530
Abstract
This paper analyzes work activity in the home, e.g., cleaning, performed by two actors, a human and a robot. Nowadays, there are attempts to automate this activity through the use of robots. However, the activity of cleaning, in and of itself, is not important; it is used instrumentally to understand if and how robots can be integrated within current and future homes. The theoretical framework of the paper is based on empirical work collected as part of the Multimodal Elderly Care Systems (MECS) project. The study proposes a framework for the division of work tasks between humans and robots. The framework is anchored within existing research and our empirical findings. Swim-lane diagrams are used to visualize the tasks performed (WHAT), by each of the two actors, to ascertain the tasks’ temporality (WHEN), and their distribution and transitioning from one actor to the other (WHERE). The study presents the framework of various dimensions of work tasks, such as the types of work tasks, but also the temporality and spatiality of tasks, illustrating linear, parallel, sequential, and distributed tasks in a shared or non-shared space. The study’s contribution lies in its foundation for analyzing work tasks that robots integrated into or used in the home may generate for humans, along with their multimodal interactions. Finally, the framework can be used to visualize, plan, and design work tasks for the human and for the robot, respectively, and their work division. Full article
Open AccessArticle
Guided User Research Methods for Experience Design—A New Approach to Focus Groups and Cultural Probes
Multimodal Technologies Interact. 2020, 4(3), 43; https://doi.org/10.3390/mti4030043 - 26 Jul 2020
Viewed by 703
Abstract
Many companies face the task of radical innovation—totally new concepts and ideas for products and services that are successful on the market. One major factor for success is a positive user experience. Thus, design teams need, and are challenged, to integrate an experience-centered perspective into their human-centered design processes. To support this, we propose adjusted versions of the well-established user research methods of focus groups and cultural probes, in order to tailor them to the specific needs and focus of experience-based design, especially in the context of solving "wicked design problems". The results are experience focus groups and experience probes, which augment the traditional methods with new structuring, materials, and tasks based on the three principles of experience focus, creative visualization, and systematic guidance. We introduce and describe a two-step approach for applying these methods, as well as a case study conducted in cooperation with a company, which illustrates how the methods can be applied to enable an experience-centered perspective on the topic of "families and digital life". The case study demonstrates how the methods address the three principles they are based on. Post-study interviews with representatives of the company revealed valuable insights about their usefulness for practical user experience design. Full article
Open AccessArticle
Spot-Presentation of Stereophonic Earcons to Assist Navigation for the Visually Impaired
Multimodal Technologies Interact. 2020, 4(3), 42; https://doi.org/10.3390/mti4030042 - 20 Jul 2020
Viewed by 621
Abstract
This study seeks to demonstrate that a navigation system using stereophonic sound technology is effective in supporting visually impaired people in public spaces. In the proposed method, stereophonic sound is produced by a pair of parametric speakers for a person who comes to a specific position, detected by an RGB-D sensor. The sound is a stereophonic earcon representing the target facility. The recipient can intuitively understand the direction of the target facility. The sound is not audible to anyone except the person being supported and is not noisy. The system is constructed in a shopping mall, and an experiment is conducted in which the proposed system and guidance by a tactile map each lead participants to a designated facility. As a result, it is confirmed that the execution time of the proposed method is reduced. It is also confirmed that the proposed method shows higher performance than the tactile map approach in terms of the average time required to grasp the direction. In the actual environment where this system is supposed to be used, the correct answer rate is over 80%. These results suggest that the proposed method can replace the conventional tactile map as a guidance system. Full article
(This article belongs to the Special Issue 3D Human–Computer Interaction)
Open AccessArticle
Demystifying the First-Time Experience of Mobile Games: The Presence of a Tutorial Has a Positive Impact on Non-Expert Players’ Flow and Continuous-Use Intentions
Multimodal Technologies Interact. 2020, 4(3), 41; https://doi.org/10.3390/mti4030041 - 11 Jul 2020
Viewed by 562
Abstract
The purpose of video game tutorials is to help players easily understand new game mechanics and thereby facilitate early engagement with the main contents of a game. The mobile game market (i.e., phones and tablets) faces important retention issues caused by the high number of players who abandon games permanently within 24 h of downloading them. A laboratory experiment with 40 players tested how tutorial presence and player expertise impact users' psychophysiological states and continuous-use intentions (CUIs). The results suggest that in a simple game context, tutorials have a positive impact on non-expert players' perceived state of flow and have no effect on expert players' perceived flow. The results also suggest that flow has a positive impact on CUIs for both experts and non-experts. The theoretical contributions and managerial implications of these results are discussed. Full article
Open AccessArticle
Tools for Wellbeing-Supportive Design: Features, Characteristics, and Prototypes
Multimodal Technologies Interact. 2020, 4(3), 40; https://doi.org/10.3390/mti4030040 - 10 Jul 2020
Viewed by 845
Abstract
While research on wellbeing within Human-Computer Interaction (HCI) is an active space, a gap between research and practice persists. To tackle this, we sought to identify the practical needs of designers in taking wellbeing research into practice. We report on 15 semi-structured interviews with designers from four continents, yielding insights into design tool use generally and requirements for wellbeing design tools specifically. We then present five resulting design tool concepts, two of which were further developed into prototypes and tested in a workshop with 34 interaction design and HCI professionals. Findings include seven desirable features and three desirable characteristics for wellbeing-supportive design tools, including that these tools should satisfy the need for proof, buy-in, and tangibility. We also provide clarity around the notion of design for wellbeing and why it must be distinguished from design for positive emotions. Full article
Open AccessArticle
Behavior–Output Control Theory, Trust and Social Loafing in Virtual Teams
Multimodal Technologies Interact. 2020, 4(3), 39; https://doi.org/10.3390/mti4030039 - 08 Jul 2020
Viewed by 687
Abstract
Social loafing, the act of withholding effort in teams, has been identified as an important problem in virtual teams. A lack of social control and the inability to observe or trust that others are fulfilling their commitments are often cited as major causes of social loafing in virtual teams where there is geographic dispersion and a reliance on electronic communications. Yet, more research is needed to better understand such claims. The goal of this study was to examine the impact of control and trust on social loafing in virtual teams. To accomplish this, we proposed and empirically tested a multi-level research model that explains the relationships among team controls, trust, social loafing, and team performance. We tested the model with 272 information technology employees in 39 virtual teams. Results indicate that control and trust reduce social loafing separately and also jointly. Full article
(This article belongs to the Special Issue The Future of Intelligent Human-Robot Collaboration)
Open AccessArticle
Beyond Maslow’s Pyramid: Introducing a Typology of Thirteen Fundamental Needs for Human-Centered Design
Multimodal Technologies Interact. 2020, 4(3), 38; https://doi.org/10.3390/mti4030038 - 07 Jul 2020
Viewed by 936
Abstract
This paper introduces a design-focused typology of psychological human needs that includes 13 fundamental needs and 52 sub-needs (four for each fundamental need). The typology was developed to provide a practical understanding of psychological needs as a resource for user-centered design practice and research with a focus on user experience and well-being. The first part of the manuscript briefly reviews Abraham Maslow’s pioneering work on human needs, and the underlying propositions, main contributions and limitations of his motivational theory. The review results in a set of requirements for a design-focused typology of psychological needs. The second part reports on the development of the new typology. The thirteen needs were selected from six existing typologies with the use of five criteria that distinguish fundamental from non-fundamental needs. The resulting typology builds on the strengths of Maslow’s need hierarchy but rejects the hierarchical structure and adds granularity to the need categories. The third part of the paper describes three examples of how the need typology can inform design practice, illustrated with student design cases. It also presents three means for communicating the need typology. The general discussion section reflects on implications and limitations and proposes ideas for future research. Full article