
Multimodal Technol. Interact., Volume 4, Issue 4 (December 2020) – 26 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Communication
A Review of Automated Speech-Based Interaction for Cognitive Screening
Multimodal Technol. Interact. 2020, 4(4), 93; https://doi.org/10.3390/mti4040093 - 17 Dec 2020
Viewed by 2345
Abstract
Language, speech and conversational behaviours reflect cognitive changes that may precede physiological changes and offer a much more cost-effective option for detecting preclinical cognitive decline. Artificial intelligence and machine learning have been established as a means to facilitate automated speech-based cognitive screening through automated recording and analysis of linguistic, speech and conversational behaviours. In this work, a scoping literature review was performed to document and analyse current automated speech-based implementations for cognitive screening from the perspective of human–computer interaction. At this stage, the goal was to identify and analyse the characteristics that define the interaction between the automated speech-based screening systems and the users, potentially revealing interaction-related patterns and gaps. In total, 65 articles were identified as appropriate for inclusion, from which 15 articles satisfied the inclusion criteria. The literature review led to the documentation and further analysis of five interaction-related themes: (i) user interface, (ii) modalities, (iii) speech-based communication, (iv) screening content and (v) screener. Cognitive screening through speech-based interaction might benefit from two practices: (1) implementing more multimodal user interfaces that facilitate—amongst others—speech-based screening and (2) introducing the element of motivation in the speech-based screening process. Full article
(This article belongs to the Special Issue Speech-Based Interaction)
Review
Choice, Control and Computers: Empowering Wildlife in Human Care
Multimodal Technol. Interact. 2020, 4(4), 92; https://doi.org/10.3390/mti4040092 - 14 Dec 2020
Cited by 11 | Viewed by 2974
Abstract
The purpose of this perspective paper and technology overview is to encourage collaboration between designers and animal carers in zoological institutions, sanctuaries, research facilities, and in soft-release scenarios for the benefit of all stakeholders, including animals, carers, managers, researchers, and visitors. We discuss the evolution of animal-centered technology (ACT), including more recent animal-centered computing to increase animal wellbeing by providing increased opportunities for choice and control, allowing animals to gain greater self-regulation and independence. We believe this will increase animal welfare and relative freedom, while potentially improving conservation outcomes. Concurrent with the benefits to the animals, this technology may benefit human carers by increasing workplace efficiency and improving research data collection using automated animal monitoring systems. These benefits are balanced against cultural resistance to change, the imposition of greater staff training, a potential reduction in valuable animal-carer interaction, and the financial costs of technology design, acquisition, obsolescence, and maintenance. Successful applications are discussed to demonstrate how animal-centered technology has evolved and, in some cases, to suggest future opportunities. We suggest that creative uses of animal-centered technology, based upon solid animal welfare science, have the potential to greatly increase managed animal welfare, eventually growing from individual animal enrichment features to facility-wide integrated animal movement systems and transitions to wildlife release and rewilding strategies. Full article
(This article belongs to the Special Issue Animal Centered Computing: Enriching the Lives of Animals)

Article
Controller-Free Hand Tracking for Grab-and-Place Tasks in Immersive Virtual Reality: Design Elements and Their Empirical Study
Multimodal Technol. Interact. 2020, 4(4), 91; https://doi.org/10.3390/mti4040091 - 12 Dec 2020
Cited by 8 | Viewed by 3900
Abstract
Hand tracking enables controller-free interaction with virtual environments, which can, compared to traditional handheld controllers, make virtual reality (VR) experiences more natural and immersive. As naturalness hinges on both technological and user-based features, fine-tuning the former while assessing the latter can be used to increase usability. For a grab-and-place use case in immersive VR, we compared a prototype of a camera-based hand tracking interface (Leap Motion) with customized design elements to the standard Leap Motion application programming interface (API) and a traditional controller solution (Oculus Touch). Usability was tested in 32 young healthy participants, whose performance was analyzed in terms of accuracy, speed and errors as well as subjective experience. We found higher performance and overall usability as well as overall preference for the handheld controller compared to both controller-free solutions. While most measures did not differ between the two controller-free solutions, the modifications made to the Leap API to form our prototype led to a significant decrease in accidental drops. Our results do not support the assumption of higher naturalness for hand tracking but suggest design elements to improve the robustness of controller-free object interaction in a grab-and-place scenario. Full article

Article
The Role of Simulators in Interdisciplinary Medical Work
Multimodal Technol. Interact. 2020, 4(4), 90; https://doi.org/10.3390/mti4040090 - 08 Dec 2020
Cited by 2 | Viewed by 2448
Abstract
This article reports from a project introducing a virtual reality simulator with patient-specific input for endovascular aneurysm repair (EVAR) into a surgical environment at a university hospital in Norway during 2016–2019. The project includes acquisition of the simulator, training of personnel, and a mapping of the effects. We followed the process, adopting ethnographic methods including participation in the operating room, simulated patient-specific rehearsals, preparations of the rehearsals, meetings with the simulator company, scientific meetings and scientific work related to the clinical trials (the second author led the clinical trial), in addition to open-ended interviews with vascular surgeons and interventional radiologists. We used the concepts of boundary work and sensework as conceptual lenses through which we studied the introduction of the simulator and how it influenced the nature of work and the professional relationship between the vascular surgeons and the interventional radiologists. We found that the simulator facilitated professional integration, at the same time as it served as a material resource for professional identity development. This study is the first to our knowledge that investigates the role of simulators for professional identity and relationship among surgeons and radiologists. Further studies of simulators in similar and different social contexts may contribute to deeper and more generic understanding of the way simulators influence our working life. Full article
(This article belongs to the Special Issue Imaging Interaction in Surgery)

Article
The Response to Impactful Interactivity on Spectators’ Engagement in a Digital Game
Multimodal Technol. Interact. 2020, 4(4), 89; https://doi.org/10.3390/mti4040089 - 04 Dec 2020
Cited by 2 | Viewed by 2727
Abstract
As gaming spectatorship has become a worldwide phenomenon, keeping the spectator in mind while designing games is becoming more important. Here, we explore the factors that influence spectators’ engagement. Through the use of GRiD Crowd, a game akin to life-size Pong, different levels of spectator influence on the game were tested, and their impact on engagement was analyzed via arousal measures. Spectator influence on the game was accomplished via smartphone; 78 participants, placed in different audience compositions (alongside friends or strangers), were tested. We found that when the spectators had an impact on the game, higher levels of emotional arousal were recorded, which generated an increase in engagement. These results suggest a design approach for game designers who wish to engage their spectatorship, a segment of their target market that is becoming impossible to ignore. Full article

Article
The Cost of Production in Elicitation Studies and the Legacy Bias-Consensus Trade off
Multimodal Technol. Interact. 2020, 4(4), 88; https://doi.org/10.3390/mti4040088 - 04 Dec 2020
Cited by 4 | Viewed by 2440
Abstract
Gesture elicitation studies are a popular means of gaining valuable insights into how users interact with novel input devices. One of the problems elicitation faces is that of legacy bias, when elicited interactions are biased by prior technology use. In response, methodologies have been introduced to reduce legacy bias. This is the first study that formally examines the production method of reducing legacy bias (i.e., repeated proposals for a single referent). This was done through a between-subjects study with 27 participants per group (control and production) and 17 referents placed in a virtual environment using a head-mounted display. The study found that, over a range of referents, legacy bias was not significantly reduced over production trials. Instead, production reduced participant consensus on proposals. However, in the set of referents that elicited the most legacy-biased proposals, production was an effective means of reducing legacy bias, with an overall reduction of 11.93% in the chance of eliciting a legacy-biased proposal. Full article
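Participant consensus in elicitation studies is typically quantified with an agreement score computed over the proposals for each referent. The abstract does not state which measure the authors used, so the classic Wobbrock-style score below is an assumption; the sketch only illustrates how grouping proposals by identity yields a consensus number between near-zero and 1.

```python
# Agreement score for one referent in a gesture elicitation study:
# proposals are grouped by identity; tighter grouping means more consensus.
# Classic form: A = sum over groups of (|P_i| / |P|)^2.
from collections import Counter

def agreement(proposals):
    """proposals: list of labels, one per participant, for one referent."""
    total = len(proposals)
    groups = Counter(proposals)
    return sum((size / total) ** 2 for size in groups.values())

# 27 participants (as in the study's group size): perfect consensus
# scores 1.0, while an even split between two proposals scores near 0.5.
print(agreement(["swipe"] * 27))                           # 1.0
print(round(agreement(["swipe"] * 14 + ["tap"] * 13), 3))  # 0.501
```

In this framing, the reported "reduced consensus" under production corresponds to lower agreement scores, since repeated proposals spread participants across more groups.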
(This article belongs to the Special Issue 3D Human–Computer Interaction)

Article
Preschoolers’ STEM Learning on a Haptic Enabled Tablet
Multimodal Technol. Interact. 2020, 4(4), 87; https://doi.org/10.3390/mti4040087 - 02 Dec 2020
Cited by 1 | Viewed by 2637
Abstract
The research on children’s learning of science, technology, engineering, and math (STEM) topics from electronic applications (apps) is limited, though it appears that children can reasonably transfer learning from tablet games to particular tasks. We were interested to determine whether these findings would translate to the emerging technology of haptic feedback tablets. The research on haptic feedback technology, specifically, has found that this type of feedback is effective in teaching physics concepts to older students. However, haptic feedback has not yet been sufficiently explored with younger groups (e.g., preschoolers). To determine the effect of playing a STEM game enhanced with haptic technology on learning outcomes, we designed an experiment where preschool participants were randomly exposed to one of three different conditions: (a) STEM game with no haptic feedback (tablet), (b) STEM game enabled with haptic feedback (haptics), or (c) a puzzle game (control). Results revealed no significant differences in comprehension or transfer by condition. Results from this study contribute to the literature on the effectiveness of haptic feedback for preschool STEM learning. Full article
(This article belongs to the Special Issue Emerging Technologies and New Media for Children)

Article
Conceptual Design and Evaluation of Windshield Displays for Excavators
Multimodal Technol. Interact. 2020, 4(4), 86; https://doi.org/10.3390/mti4040086 - 27 Nov 2020
Viewed by 2551
Abstract
This paper investigates possible visualizations using transparent displays that could be placed on the excavator’s windshield. This way, the information can be presented closer to the operator’s line of sight without fully obstructing their view. Therefore, excavator operators could acquire the supportive information provided by the machine without diverting their attention from operational areas. To ensure a match between the supportive information and operators’ contextual needs, we conducted four different activities as part of our design process. Firstly, we examined four relevant safety guidelines to determine which information is essential for safe operation. Secondly, we reviewed all commercially available technologies to assess their suitability in the excavator context. Thirdly, we conducted a design workshop to generate ideas on how the essential information should look and behave based on the performed operation and the chosen available technology. Fourthly, we interviewed seven excavator operators to test their understanding and obtain their feedback on the proposed visualization concepts. The results indicated that four of the six visualization concepts we proposed were easily understood by the operators, and we revised them to better suit the operators’ way of thinking. All the operators also perceived this approach positively, since each of them selected at least three visualization concepts to be presented on the windshield. Full article

Article
Intelligent Blended Agents: Reality–Virtuality Interaction with Artificially Intelligent Embodied Virtual Humans
Multimodal Technol. Interact. 2020, 4(4), 85; https://doi.org/10.3390/mti4040085 - 27 Nov 2020
Cited by 1 | Viewed by 2957
Abstract
Intelligent virtual agents (VAs) already support us in a variety of everyday tasks such as setting up appointments, monitoring our fitness, and organizing messages. Adding a humanoid body representation to these mostly voice-based VAs has enormous potential to enrich the human–agent communication process but, at the same time, raises expectations regarding the agent’s social, spatial, and intelligent behavior. Embodied VAs may be perceived as less human-like if they, for example, do not return eye contact, or do not show a plausible collision behavior with the physical surroundings. In this article, we introduce a new model that extends human-to-human interaction to interaction with intelligent agents and covers different multi-modal and multi-sensory channels that are required to create believable embodied VAs. Theoretical considerations of the different aspects of human–agent interaction are complemented by implementation guidelines to support the practical development of such agents. In this context, we particularly emphasize one aspect that is distinctive of embodied agents, i.e., interaction with the physical world. Since previous studies indicated negative effects of implausible physical behavior of VAs, we were interested in the initial responses of users when interacting with a VA with virtual–physical capabilities for the first time. We conducted a pilot study to collect subjective feedback regarding two forms of virtual–physical interactions. Both were designed and implemented in preparation of the user study, and represent two different approaches to virtual–physical manipulations: (i) displacement of a robotic object, and (ii) writing on a physical sheet of paper with thermochromic ink. The qualitative results of the study indicate positive effects of agents with virtual–physical capabilities in terms of their perceived realism as well as evoked emotional responses of the users. We conclude with an outlook on possible future developments of different aspects of human–agent interaction in general and the physical simulation in particular. Full article
(This article belongs to the Special Issue Social Interaction and Psychology in XR)

Article
A Human–Computer Interface Replacing Mouse and Keyboard for Individuals with Limited Upper Limb Mobility
Multimodal Technol. Interact. 2020, 4(4), 84; https://doi.org/10.3390/mti4040084 - 27 Nov 2020
Viewed by 2913
Abstract
People with physical disabilities of the upper extremities face serious issues in using classical input devices due to limited movement ability and precision. This article suggests an alternative input concept and presents corresponding input devices. The proposed interface combines an inertial measurement unit and force sensing resistors, which can replace mouse and keyboard. Head motions are mapped to mouse pointer positions, while mouse button actions are triggered by contracting the mastication muscles. The contact pressures of each fingertip are acquired to replace the conventional keyboard. To allow for complex text entry, the sensory concept is complemented by an ambiguous keyboard layout with ten keys. The related word prediction function provides disambiguation at the word level. Haptic feedback is provided to users corresponding to their virtual keystrokes for enhanced closed-loop interaction. This alternative input system enables text input as well as the emulation of a two-button mouse. Full article
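The ten-key ambiguous layout described in the abstract works on the same principle as T9-style predictive text: each key covers several letters, and a dictionary resolves the ambiguous key sequence to candidate words. The sketch below illustrates that principle only; the letter grouping and vocabulary are invented for the example and are not the layout or prediction function used in the paper.

```python
# T9-style disambiguation sketch: each of ten keys covers a group of
# letters; a word list resolves ambiguous key sequences to candidates.
from collections import defaultdict

# Illustrative grouping of a-z onto ten keys (not the paper's layout).
KEY_GROUPS = ["abc", "def", "ghi", "jkl", "mno", "pqr", "st", "uv", "wx", "yz"]
LETTER_TO_KEY = {ch: str(k) for k, grp in enumerate(KEY_GROUPS) for ch in grp}

def encode(word):
    """Translate a word into its ambiguous key sequence."""
    return "".join(LETTER_TO_KEY[ch] for ch in word.lower())

def build_index(vocabulary):
    """Group dictionary words by the key sequence that produces them."""
    index = defaultdict(list)
    for word in vocabulary:
        index[encode(word)].append(word)
    return index

vocab = ["cat", "act", "bat", "dog", "fog"]
index = build_index(vocab)

# "cat", "act" and "bat" collide on the same key sequence, so word
# prediction must rank the candidates (e.g., by word frequency).
print(index[encode("cat")])  # ['cat', 'act', 'bat']
```

With only one key press per letter, most sequences map to a single word; the word prediction function is needed precisely for collisions like the one shown.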

Article
MatMouse: A Mouse Movements Tracking and Analysis Toolbox for Visual Search Experiments
Multimodal Technol. Interact. 2020, 4(4), 83; https://doi.org/10.3390/mti4040083 - 26 Nov 2020
Cited by 4 | Viewed by 2937
Abstract
The present study introduces a new MATLAB toolbox, called MatMouse, suitable for conducting experimental studies based on mouse movement tracking and analysis. MatMouse supports the implementation of task-based visual search experiments. The proposed toolbox provides specific functions which can be utilized for building experiments and tracking mouse movements, analyzing the recorded data with specific metrics, producing related visualizations, and generating statistical grayscale heatmaps which could serve as an objective ground-truth product. MatMouse can be executed as a standalone package or integrated into existing MATLAB scripts and/or toolboxes. To highlight the functionalities of the introduced toolbox, a complete case study example is presented. MatMouse is freely distributed to the scientific community under the third version of the GNU General Public License (GPL v3) on the GitHub platform. Full article
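The statistical grayscale heatmaps mentioned in the abstract can be understood as 2-D histograms of recorded cursor positions normalised to pixel intensities. MatMouse itself is a MATLAB toolbox, so the Python function below is only a language-agnostic sketch of that principle, not the toolbox's API; all names and parameters are illustrative.

```python
# Sketch of a statistical grayscale heatmap: accumulate recorded mouse
# positions into a 2-D grid of cells, then normalise the counts to
# 0-255 grayscale intensities (brightest cell = most visited).
def mouse_heatmap(points, width, height, cell=10):
    """points: iterable of (x, y) cursor samples in screen coordinates."""
    cols, rows = width // cell, height // cell
    grid = [[0] * cols for _ in range(rows)]
    for x, y in points:
        cx = min(x // cell, cols - 1)
        cy = min(y // cell, rows - 1)
        grid[cy][cx] += 1
    peak = max(max(row) for row in grid) or 1  # avoid division by zero
    return [[round(255 * v / peak) for v in row] for row in grid]

# Three samples dwell in the top-left cell, one in the bottom-right:
# the top-left cell gets full intensity, the other a third of it.
samples = [(3, 4), (5, 2), (7, 7), (95, 95)]
grid = mouse_heatmap(samples, width=100, height=100, cell=10)
print(grid[0][0], grid[9][9])  # 255 85
```

Aggregating such grids across participants is what makes the heatmap "statistical" and usable as a ground-truth product for visual search analysis.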

Article
Design for Breathtaking Experiences: An Exploration of Design Strategies to Evoke Awe in Human–Product Interactions
Multimodal Technol. Interact. 2020, 4(4), 82; https://doi.org/10.3390/mti4040082 - 24 Nov 2020
Cited by 1 | Viewed by 2919
Abstract
From looking up at a skyscraper to the Grand Canyon’s vastness, you may have experienced awe in one way or another. Awe is experienced when one encounters something greater or more powerful than oneself and is associated with prosocial behavior through a diminishment of self-importance. In design research, most studies on awe have been conducted in lab conditions using technologies such as virtual reality because of its efficiency in simulating typical awe-stimulating conditions (e.g., nature scenes). While such studies are useful for inducing awe and assessing its effects on users, they give little guidance on how design can deliberately evoke awe. Most attempts focus on the response of awe instead of its eliciting conditions. With the aim of supporting designers in facilitating awe, this paper explores design strategies to evoke awe. Based on appraisal theory, the cause of awe was formulated, and its relevance to designing for awe was investigated. The conditions that underlie awe in design were explored through a survey in which participants reported 150 awe experiences, resulting in six design strategies. The paper describes these strategies and discusses how they can be used in a design process, giving attention to the experiential value of awe. Full article

Review
When Agents Become Partners: A Review of the Role the Implicit Plays in the Interaction with Artificial Social Agents
Multimodal Technol. Interact. 2020, 4(4), 81; https://doi.org/10.3390/mti4040081 - 22 Nov 2020
Cited by 2 | Viewed by 2649
Abstract
The way we interact with computers has changed significantly over recent decades. However, interaction with computers still falls behind human-to-human interaction in terms of seamlessness, effortlessness, and satisfaction. We argue that simultaneously using verbal, nonverbal, explicit, implicit, intentional, and unintentional communication channels addresses these three aspects of the interaction process. To better understand what has been done in the field of Human–Computer Interaction (HCI) in terms of incorporating the types of channels mentioned above, we reviewed the literature on implicit nonverbal interaction, with a specific emphasis on the interaction between humans on one side and robots and virtual humans on the other. These Artificial Social Agents (ASAs) are increasingly used as advanced tools for solving not only physical but also social tasks. In the literature review, we identify domains of interaction between humans and artificial social agents that have shown exponential growth over the years. The review highlights the value of incorporating implicit interaction capabilities in Human–Agent Interaction (HAI), which we believe will lead to satisfying human and artificial social agent team performance. We conclude the article by presenting a case study of a system that harnesses subtle nonverbal, implicit interaction to increase the state of relaxation in users. This “Virtual Human Breathing Relaxation System” works on the principle of physiological synchronisation between a human and a virtual, computer-generated human. The active entrainment concept behind the relaxation system is generic and can be applied to other human–agent interaction domains of implicit physiology-based interaction. Full article
(This article belongs to the Special Issue Understanding UX through Implicit and Explicit Feedback)

Article
Cross-Platform Usability Model Evaluation
Multimodal Technol. Interact. 2020, 4(4), 80; https://doi.org/10.3390/mti4040080 - 20 Nov 2020
Cited by 3 | Viewed by 2582
Abstract
It is becoming common for several devices to be utilised together to access and manipulate shared information spaces and migrate tasks between devices. Despite the increased worldwide use of cross-platform services, there is limited research into how cross-platform service usability can be assessed. This paper presents a novel cross-platform usability model. The model employs the think-aloud protocol, observations, and questionnaires to reveal cross-platform usability problems. Two Likert scales were developed for measuring overall user satisfaction of cross-platform usability and user satisfaction with the seamlessness of the transition between one device and another. The paper further employs a series of objective measures for the proposed model. The viability and performance of the model were examined in the context of evaluating three cross-platform services across three devices. The results demonstrate that the model is a valuable method for assessing and quantifying cross-platform usability. The findings were thoroughly analysed and discussed, and subsequently used to refine the model. The model was also evaluated by eight user experience experts and seven out of the eight agreed that it is useful. Full article

Review
Two Decades of Touchable and Walkable Virtual Reality for Blind and Visually Impaired People: A High-Level Taxonomy
Multimodal Technol. Interact. 2020, 4(4), 79; https://doi.org/10.3390/mti4040079 - 17 Nov 2020
Cited by 3 | Viewed by 2813
Abstract
Although most readers associate the term virtual reality (VR) with visually appealing entertainment content, this technology also promises to be helpful to disadvantaged people such as blind or visually impaired people. Virtual objects and environments that can be spatially explored offer a particular benefit, as they overcome the limitations of physical objects and spaces. To give readers a complete, clear and concise overview of current and past publications on touchable and walkable audio-supplemented VR applications for blind and visually impaired users, this survey paper presents a high-level taxonomy to cluster the work done up to now from the perspectives of technology, interaction and application. In this respect, we introduce a classification into small-, medium- and large-scale virtual environments to cluster and characterize related work. Our comprehensive table shows that grounded force-feedback devices for haptic feedback (‘small scale’) in particular have been strongly researched in different application scenarios, mainly from an exocentric perspective, but there are also increasingly physically (‘medium scale’) or avatar-walkable (‘large scale’) egocentric audio-haptic virtual environments. In this respect, novel and widespread interfaces such as smartphones and today’s consumer-grade VR components represent promising potential for further improvements. Our survey paper provides a database of related work to foster the creation of new ideas and approaches in both technical and methodological respects. Full article
(This article belongs to the Special Issue 3D Human–Computer Interaction)

Article
Multimodal Mixed Reality Impact on a Hand Guiding Task with a Holographic Cobot
Multimodal Technol. Interact. 2020, 4(4), 78; https://doi.org/10.3390/mti4040078 - 31 Oct 2020
Cited by 1 | Viewed by 2914
Abstract
In the context of industrial production, a worker who wants to program a robot using the hand-guidance technique needs the robot to be available for programming and not in operation. This means that production with that robot is stopped during that time. A way around this constraint is to perform the same manual guidance steps on a holographic representation of the robot’s digital twin, using augmented reality technologies. However, this approach suffers from a lack of tangibility of the visual holograms that the user tries to grab. We present an interface in which some of this tangibility is provided through ultrasound-based mid-air haptic actuation. We report a user study evaluating the impact that such haptic feedback may have on a pick-and-place task involving the wrist of a holographic robot arm; we found the feedback to be beneficial. Full article
Article
The Potentials of Tangible Technologies for Learning Linear Equations
Multimodal Technol. Interact. 2020, 4(4), 77; https://doi.org/10.3390/mti4040077 - 23 Oct 2020
Cited by 1 | Viewed by 3317 | Correction
Abstract
Tangible technologies provide interactive links between the physical and digital worlds, thereby merging the benefits of physical and virtual manipulatives. To explore the potentials of tangible technologies for learning linear equations, a tangible manipulative (TM) was designed and developed. A prototype of the initial TM was implemented and evaluated using mixed methods (i.e., classroom interventions, paper-based tests, thinking-aloud sessions, questionnaires, and interviews) in real classroom settings. Six teachers, 24 primary school students, and 65 lower secondary school students participated in the exploratory study. The quantitative and qualitative analyses revealed that the initial TM supported student learning at various levels and had a positive impact on learning achievement. Its overall usability was also deemed acceptable, although some minor improvements to its pedagogy and usability could still be made. These findings indicate that the initial TM is likely to be beneficial for learning linear equations from pre-primary to lower secondary school and to be usable in mathematics classrooms. Theoretical and practical implications are discussed. Full article
Review
Innovative and Assistive eHealth Technologies for Smart Therapeutic and Rehabilitation Outdoor Spaces for the Elderly Demographic
Multimodal Technol. Interact. 2020, 4(4), 76; https://doi.org/10.3390/mti4040076 - 22 Oct 2020
Cited by 6 | Viewed by 3001
Abstract
The use of technology for social connectivity and achieving engagement goals is increasingly essential to the overall well-being of our rapidly ageing population. While much of the extant literature has focused on home automation and indoor remote health monitoring, a growing body of work finds that personal health and overall well-being improve when physical activities are conducted outdoors. This study presents a review of innovative and assistive eHealth technologies suitable for smart therapeutic and rehabilitation outdoor spaces for older persons. The article also presents key performance metrics that eHealth technologies must meet to ensure robust, timely and reliable biometric data transfer between patients in a therapeutic landscape environment and their respective medical centres. A literature review was conducted of publications whose primary focus was integrating sensors and eHealth technologies into outdoor spaces to collect data from elderly users of such built landscapes and transfer it to the appropriate stakeholders. A content analysis was carried out to synthesize the outcomes of the literature review. The study finds that research on assistive eHealth technologies and interfaces for outdoor therapeutic spaces is in its nascent stages and has limited generalisability. The level of technology uptake and readiness for smart outdoor spaces is still developing and is currently being outpaced by the growth of elderly fitness zones in public spaces. Further research is needed to explore eHealth technologies with interactive feedback mechanisms that are suitable for outdoor therapeutic environments. Full article
(This article belongs to the Special Issue Personal Health, Fitness Technologies, and Games)
Article
Virtual Reality Nature Exposure and Test Anxiety
Multimodal Technol. Interact. 2020, 4(4), 75; https://doi.org/10.3390/mti4040075 - 22 Oct 2020
Cited by 1 | Viewed by 3180
Abstract
The number of students affected by exam anxiety continues to rise. It is therefore becoming progressively more relevant to explore innovative remediation strategies that help mitigate the debilitating effects of exam anxiety. This study investigated whether green-environment exposure, delivered by virtual reality (VR) technology, would serve as an effective intervention to mitigate participants’ test anxiety and therefore improve the experience of the exam, measured by positive and negative affect, and increase test scores in a pseudo exam. Twenty high-exam-anxiety and twenty low-exam-anxiety students completed a pseudo exam before and after being exposed to either a simulated green environment or a simulated urban environment. Only those who had high anxiety and were exposed to the nature VR intervention showed significant reductions in negative affect (F(1, 31) = 5.86, p = 0.02, ηp2 = 0.15), supporting the idea that exposure to nature, even if simulated, may benefit students’ feelings about their academic performance. The findings are discussed in light of future developments in nature and educational research. Full article
(This article belongs to the Special Issue 3D Human–Computer Interaction)
Article
Self-Perception and Training Perceptions on Teacher Digital Competence (TDC) in Spanish and French University Students
Multimodal Technol. Interact. 2020, 4(4), 74; https://doi.org/10.3390/mti4040074 - 17 Oct 2020
Cited by 10 | Viewed by 3275
Abstract
The purpose of this research is, on the one hand, to analyze the self-perception of Teacher Digital Competence (TDC) among future teachers of childhood and primary education and students of a master’s degree in secondary education teacher training, as well as the potential influence of gender, country and university institution of origin on their representations. On the other hand, it seeks to analyze future teachers’ perception of the TDC of their university trainers (formative perception). In accordance with these aims, a non-experimental quantitative methodology with a prospective cross-sectional ex post facto design was used. A total of 428 students from two Spanish universities and one French university agreed to participate in the research. The results show a positive self-perception of the acquired TDC that differs by gender, together with unfavorable perceptions of the digital competences of their teachers. These results confirm the need to improve the technological-manipulative and didactic training of university teachers, and to adapt teaching competences to the demands of the Information and Communication Society (ICS) and the guidelines of the Common Digital Competence Framework. Full article
Review
Multimodal Navigation Systems for Users with Visual Impairments—A Review and Analysis
Multimodal Technol. Interact. 2020, 4(4), 73; https://doi.org/10.3390/mti4040073 - 16 Oct 2020
Cited by 2 | Viewed by 3833
Abstract
Multimodal interaction refers to situations where users are provided with multiple modes for interacting with systems. Researchers are working on multimodality solutions in several domains; the focus of this paper is navigation systems that support users with visual impairments. Although several literature reviews have covered this domain, none has synthesized the research on multimodal navigation systems. This paper provides a review and analysis of multimodal navigation solutions aimed at people with visual impairments. The review also puts forward recommendations for effective multimodal navigation systems and presents the challenges faced during the design, implementation and use of such systems. We call for more research to better understand users’ evolving modality preferences during navigation. Full article
Review
Tangible Interaction with Light: A Review
Multimodal Technol. Interact. 2020, 4(4), 72; https://doi.org/10.3390/mti4040072 - 05 Oct 2020
Cited by 2 | Viewed by 2877
Abstract
Light is an important means of information representation and feedback in Human–Computer Interaction (HCI), and light-emitting interaction elements are omnipresent. We address here the interplay of light and tangible interaction with specifically designed objects. The goal of such designs is to support an embodied, emotional and engaged interaction experience integrated into the physical surroundings. The specific combination of tangible interaction and light as a medium is used in several approaches, but a systematic overview of this research area has so far been missing. In order to understand the essence, process and results of tangible interaction with light, we conducted a systematic literature review of 169 studies of tangible interaction with light from the past 20 years. Our results provide a demographic overview of the research, but foremost analyze its concepts, purposes, conceptual frameworks, user contexts, interaction behaviors and the problems addressed by tangible light. Three important findings were obtained: (1) tangible interaction with light has been used for diverse purposes, contexts and interactions; (2) tangible light has been used to address problems such as weak interaction, users not knowing how to interact, a lack of innovation in interaction, collaborative interaction, remote tangible interaction and emotional interaction; (3) current research in this area can be classified as “wild theory” within conceptual research frameworks, meaning it places a strong emphasis on innovation. The most important contribution of this work is the systematic review itself, but our findings also indicate new directions and future trends for tangible interaction when combined with light as a medium. Full article
Article
Design for Sustained Wellbeing through Positive Activities—A Multi-Stage Framework
Multimodal Technol. Interact. 2020, 4(4), 71; https://doi.org/10.3390/mti4040071 - 29 Sep 2020
Viewed by 2726
Abstract
In this paper, we introduce a framework that conceptualizes a multi-stage process through which technology can promote sustained wellbeing. Intentional wellbeing-enhancing activities form the centerpiece linking direct product interaction to, ultimately, wellbeing. The framework was developed following a bottom-up–top-down approach by integrating theoretical knowledge from positive psychology, behavioral science and human–computer interaction (HCI)/design with empirical insights. We outline (a) the framework, (b) its five main stages including their multidisciplinary theoretical foundations, (c) relations between these stages and (d) specific elements that further describe each stage. The paper illustrates how the framework was developed and elaborates three major areas of application: (design) research, design strategies and measurement approaches. With this work, we aim to provide actionable guidance for researchers and IT practitioners to understand and design technologies that foster sustained wellbeing. Full article
Perspective
The Emerging Promise of Touchscreen Devices for Individuals with Intellectual Disabilities
Multimodal Technol. Interact. 2020, 4(4), 70; https://doi.org/10.3390/mti4040070 - 27 Sep 2020
Cited by 2 | Viewed by 2865
Abstract
This article explores the emerging promise touchscreen devices hold for individuals with intellectual disabilities (ID). Many individuals with ID who struggle to read, write or use voice-assisted strategies can use touchscreen devices in many aspects of their lives. Research has shown that touchscreen technology is available, easy to use and can open an array of empowering possibilities for individuals with ID. In this article we share research and a vision of possible future uses of touchscreen devices for individuals with ID. Our perspectives are shaped by our experience of using touchscreen technology in collaboration with people who have ID. A special aspect of our research methodology is the fact that one of our co-researchers has ID. Full article
Article
Positive Design for Children with Atopic Dermatitis—Enhanced Problem-Solving and Possibility-Driven Approach in the Context of Chronic Disease
Multimodal Technol. Interact. 2020, 4(4), 69; https://doi.org/10.3390/mti4040069 - 23 Sep 2020
Viewed by 2388
Abstract
Since the 1960s, atopic dermatitis has seen a steady increase in prevalence in developed countries. Most often, the onset begins at an early age, and many patients are very young children. Because of their young age, their parents are forced to take over management of the disease. As a consequence, atopic dermatitis places a high burden not only on affected children, but also on their parents and siblings, limiting the human flourishing of a whole family. The described research area therefore calls for a possibility-driven approach that looks beyond mere problem-solving, building on existing support possibilities and creating new ones. This paper presents atopi as the result of such a possibility-driven approach. It incorporates existing patient education and severity scoring into an extensive service, adding new elements to turn necessary practices into joyful experiences, to create feelings of relatedness and to increase perceived self-efficacy, and is thus suitable for enabling human flourishing. Full article
Article
ODO: Design of Multimodal Chatbot for an Experiential Media System
Multimodal Technol. Interact. 2020, 4(4), 68; https://doi.org/10.3390/mti4040068 - 23 Sep 2020
Cited by 3 | Viewed by 3195
Abstract
This paper presents the design of a multimodal chatbot for use in an interactive theater performance. The chatbot’s architecture comprises vision and natural language processing capabilities, as well as embodiment in a non-anthropomorphic movable LED array set on a stage. Designed for interaction with up to five users at a time, the system can perform tasks including face detection and emotion classification, tracking of crowd movement through mobile phones, and real-time conversation to guide users through a nonlinear story and interactive games. The final prototype, named ODO, is a tangible embodiment of a distributed multimedia system that solves several technical challenges to provide users with a unique experience through novel interaction. Full article