
Table of Contents

Multimodal Technol. Interact., Volume 4, Issue 4 (December 2020) – 19 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and use the free Adobe Reader to open it.
Open Access Article
Conceptual Design and Evaluation of Windshield Displays for Excavators
Multimodal Technol. Interact. 2020, 4(4), 86; https://doi.org/10.3390/mti4040086 - 27 Nov 2020
Abstract
This paper investigates possible visualizations using transparent displays that could be placed on the excavator’s windshield. This way, information could be presented closer to the operators’ line of sight without fully obstructing their view. Excavator operators could thus acquire the supportive information provided by the machine without diverting their attention from operational areas. To ensure a match between the supportive information and operators’ contextual needs, we conducted four different activities as parts of our design process. Firstly, we examined four relevant safety guidelines to determine which information is essential for safe operation. Secondly, we reviewed commercially available technologies to assess their suitability in the excavator context. Thirdly, we conducted a design workshop to generate ideas on how the essential information should look and behave based on the performed operation and the chosen technology. Fourthly, we interviewed seven excavator operators to test their understanding and obtain their feedback on the proposed visualization concepts. The results indicated that four of the six proposed visualization concepts could be understood easily by the operators, and we revised them to better suit the operators’ way of thinking. All the operators also perceived this approach positively, since each of them selected at least three visualization concepts to be presented on the windshield. Full article
Open Access Feature Paper Article
Intelligent Blended Agents: Reality–Virtuality Interaction with Artificially Intelligent Embodied Virtual Humans
Multimodal Technol. Interact. 2020, 4(4), 85; https://doi.org/10.3390/mti4040085 - 27 Nov 2020
Abstract
Intelligent virtual agents (VAs) already support us in a variety of everyday tasks such as setting up appointments, monitoring our fitness, and organizing messages. Adding a humanoid body representation to these mostly voice-based VAs has enormous potential to enrich the human–agent communication process but, at the same time, raises expectations regarding the agent’s social, spatial, and intelligent behavior. Embodied VAs may be perceived as less human-like if they, for example, do not return eye contact, or do not show a plausible collision behavior with the physical surroundings. In this article, we introduce a new model that extends human-to-human interaction to interaction with intelligent agents and covers different multi-modal and multi-sensory channels that are required to create believable embodied VAs. Theoretical considerations of the different aspects of human–agent interaction are complemented by implementation guidelines to support the practical development of such agents. In this context, we particularly emphasize one aspect that is distinctive of embodied agents, i.e., interaction with the physical world. Since previous studies indicated negative effects of implausible physical behavior of VAs, we were interested in the initial responses of users when interacting with a VA with virtual–physical capabilities for the first time. We conducted a pilot study to collect subjective feedback regarding two forms of virtual–physical interactions. Both were designed and implemented in preparation of the user study, and represent two different approaches to virtual–physical manipulations: (i) displacement of a robotic object, and (ii) writing on a physical sheet of paper with thermochromic ink. The qualitative results of the study indicate positive effects of agents with virtual–physical capabilities in terms of their perceived realism as well as evoked emotional responses of the users. 
We conclude with an outlook on possible future developments of different aspects of human–agent interaction in general and the physical simulation in particular. Full article
(This article belongs to the Special Issue Social Interaction and Psychology in XR)
Open Access Article
A Human–Computer Interface Replacing Mouse and Keyboard for Individuals with Limited Upper Limb Mobility
Multimodal Technol. Interact. 2020, 4(4), 84; https://doi.org/10.3390/mti4040084 - 27 Nov 2020
Abstract
People with physical disabilities in their upper extremities face serious difficulties in using classical input devices due to limited movement ability and precision. This article suggests an alternative input concept and presents corresponding input devices. The proposed interface combines an inertial measurement unit and force-sensing resistors, which can replace the mouse and keyboard. Head motions are mapped to mouse pointer positions, while mouse button actions are triggered by contracting the mastication muscles. The contact pressure of each fingertip is acquired to replace the conventional keyboard. To allow for complex text entry, the sensory concept is complemented by an ambiguous keyboard layout with ten keys, whose word prediction function provides disambiguation at the word level. Haptic feedback corresponding to virtual keystrokes is provided to users for enhanced closed-loop interaction. This alternative input system enables text input as well as the emulation of a two-button mouse. Full article
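The head-to-pointer mapping described above can be sketched as follows. This is a hypothetical minimal example, not the authors' implementation: the gain, dead zone and screen size are illustrative assumptions.

```python
# Hypothetical sketch of mapping IMU head angles to a mouse pointer position.
# The gain, dead zone and screen size are illustrative assumptions, not
# values from the article.

SCREEN_W, SCREEN_H = 1920, 1080
GAIN_DEG_TO_PX = 40.0   # pixels of cursor travel per degree of head rotation
DEAD_ZONE_DEG = 1.0     # ignore tiny, tremor-like head movements

def head_to_cursor(yaw_deg, pitch_deg):
    """Map head yaw (positive = right) and pitch (positive = up) to screen
    coordinates relative to the screen centre, clamped to the screen bounds."""
    def axis(angle, size):
        if abs(angle) < DEAD_ZONE_DEG:
            angle = 0.0
        pos = size / 2 + angle * GAIN_DEG_TO_PX
        return max(0, min(size - 1, round(pos)))
    # Screen y grows downward, so looking up (positive pitch) moves the cursor up.
    return axis(yaw_deg, SCREEN_W), axis(-pitch_deg, SCREEN_H)
```

For example, a neutral head pose lands the cursor at the screen centre, `head_to_cursor(0, 0)` → `(960, 540)`, and extreme rotations are clamped to the screen edge.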
Open Access Article
MatMouse: A Mouse Movements Tracking and Analysis Toolbox for Visual Search Experiments
Multimodal Technol. Interact. 2020, 4(4), 83; https://doi.org/10.3390/mti4040083 - 26 Nov 2020
Abstract
The present study introduces a new MATLAB toolbox, called MatMouse, suitable for conducting experimental studies based on mouse movement tracking and analysis. MatMouse supports the implementation of task-based visual search experiments. The toolbox provides specific functions for building experiments and tracking the mouse, analyzing the recorded data with specific metrics, producing related visualizations, and generating statistical grayscale heatmaps that can serve as an objective ground truth product. MatMouse can be executed as a standalone package or integrated into existing MATLAB scripts and/or toolboxes. To highlight the functionalities of the toolbox, a complete case study example is presented. MatMouse is freely distributed to the scientific community under the third version of the GNU General Public License (GPL v3) on the GitHub platform. Full article
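MatMouse itself is a MATLAB toolbox; purely to illustrate the kind of mouse-trajectory metrics such toolboxes compute, here is a small sketch in Python. The function names and the two metrics chosen (path length and maximum deviation from the ideal straight-line path) are our assumptions for illustration, not MatMouse's actual API.

```python
import math

def path_length(points):
    """Total Euclidean length of a recorded mouse trajectory,
    given as a list of (x, y) samples."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def max_deviation(points):
    """Largest perpendicular distance of the trajectory from the straight
    line joining its start and end points (a common mouse-tracking metric)."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    norm = math.hypot(dx, dy)
    return max(abs(dy * (x - x0) - dx * (y - y0)) / norm for x, y in points)
```

For instance, a trajectory that arcs through (5, 5) on its way from (0, 0) to (10, 0) has a maximum deviation of 5 from the ideal path.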
Open Access Article
Design for Breathtaking Experiences: An Exploration of Design Strategies to Evoke Awe in Human–Product Interactions
Multimodal Technol. Interact. 2020, 4(4), 82; https://doi.org/10.3390/mti4040082 - 24 Nov 2020
Abstract
From looking up at a skyscraper to taking in the Grand Canyon’s vastness, you may have experienced awe in one way or another. Awe is experienced when one encounters something greater or more powerful than oneself, and it is associated with prosocial behavior through a diminishment of self-importance. In design research, most studies on awe have been conducted in lab conditions using technologies such as virtual reality, because of their efficiency in simulating typical awe-eliciting conditions (e.g., nature scenes). While useful for inducing awe and assessing its effects on users, such studies give little guidance on how design can deliberately evoke awe; most attempts focus on the response of awe rather than its eliciting conditions. To support designers in facilitating awe, this paper explores design strategies to evoke it. Based on appraisal theory, the cause of awe was formulated, and its relevance to designing for awe was investigated. The conditions that underlie awe in design were explored through a survey in which participants reported 150 awe experiences, resulting in six design strategies. The paper describes these strategies and discusses how they can be used in a design process, giving attention to the experiential value of awe. Full article
Open Access Review
When Agents Become Partners: A Review of the Role the Implicit Plays in the Interaction with Artificial Social Agents
Multimodal Technol. Interact. 2020, 4(4), 81; https://doi.org/10.3390/mti4040081 - 22 Nov 2020
Abstract
The way we interact with computers has changed significantly over recent decades. However, interaction with computers still falls behind human-to-human interaction in terms of seamlessness, effortlessness, and satisfaction. We argue that simultaneously using verbal, nonverbal, explicit, implicit, intentional, and unintentional communication channels addresses these three aspects of the interaction process. To better understand what has been done in the field of Human–Computer Interaction (HCI) to incorporate the types of channels mentioned above, we reviewed the literature on implicit nonverbal interaction, with a specific emphasis on the interaction between humans on the one side, and robots and virtual humans on the other. These Artificial Social Agents (ASAs) are increasingly used as advanced tools for solving not only physical but also social tasks. In the literature review, we identify domains of interaction between humans and artificial social agents that have shown exponential growth over the years. The review highlights the value of incorporating implicit interaction capabilities in Human–Agent Interaction (HAI), which we believe will lead to satisfying human and artificial social agent team performance. We conclude the article by presenting a case study of a system that harnesses subtle nonverbal, implicit interaction to increase the state of relaxation in users. This “Virtual Human Breathing Relaxation System” works on the principle of physiological synchronisation between a human and a virtual, computer-generated human. The active entrainment concept behind the relaxation system is generic and can be applied to other domains of implicit physiology-based human–agent interaction. Full article
(This article belongs to the Special Issue Understanding UX through Implicit and Explicit Feedback)
Open Access Article
Cross-Platform Usability Model Evaluation
Multimodal Technol. Interact. 2020, 4(4), 80; https://doi.org/10.3390/mti4040080 - 20 Nov 2020
Abstract
It is becoming common for several devices to be used together to access and manipulate shared information spaces and to migrate tasks between devices. Despite the increased worldwide use of cross-platform services, there is limited research into how cross-platform service usability can be assessed. This paper presents a novel cross-platform usability model. The model employs the think-aloud protocol, observations, and questionnaires to reveal cross-platform usability problems. Two Likert scales were developed for measuring overall user satisfaction with cross-platform usability and satisfaction with the seamlessness of transitions between devices. The paper further employs a series of objective measures for the proposed model. The viability and performance of the model were examined in the context of evaluating three cross-platform services across three devices. The results demonstrate that the model is a valuable method for assessing and quantifying cross-platform usability. The findings were thoroughly analysed and discussed, and subsequently used to refine the model. The model was also evaluated by eight user experience experts, seven of whom agreed that it is useful. Full article
Open Access Review
Two Decades of Touchable and Walkable Virtual Reality for Blind and Visually Impaired People: A High-Level Taxonomy
Multimodal Technol. Interact. 2020, 4(4), 79; https://doi.org/10.3390/mti4040079 - 17 Nov 2020
Abstract
Although most readers associate the term virtual reality (VR) with visually appealing entertainment content, this technology also promises to be helpful to disadvantaged people, such as blind or visually impaired people. Virtual objects and environments that can be spatially explored offer a particular benefit, as they overcome the limitations of physical objects and spaces. To give readers a complete, clear and concise overview of current and past publications on touchable and walkable audio-supplemented VR applications for blind and visually impaired users, this survey paper presents a high-level taxonomy to cluster the work done up to now from the perspectives of technology, interaction and application. In this respect, we introduce a classification into small-, medium- and large-scale virtual environments to cluster and characterize related work. Our comprehensive table shows that grounded force feedback devices for haptic feedback (‘small scale’) in particular have been strongly researched in different application scenarios, mainly from an exocentric perspective, but there are also increasingly physically (‘medium scale’) or avatar-walkable (‘large scale’) egocentric audio-haptic virtual environments. In this respect, novel and widespread interfaces such as smartphones and nowadays consumer-grade VR components represent promising potential for further improvements. Our survey paper provides a database of related work to foster the creation of new ideas and approaches for both technical and methodological aspects. Full article
(This article belongs to the Special Issue 3D Human–Computer Interaction)
Open Access Article
Multimodal Mixed Reality Impact on a Hand Guiding Task with a Holographic Cobot
Multimodal Technol. Interact. 2020, 4(4), 78; https://doi.org/10.3390/mti4040078 - 31 Oct 2020
Abstract
In the context of industrial production, a worker who wants to program a robot using the hand-guidance technique requires the robot to be available for programming and not in operation, which means that production with that robot is stopped during that time. A way around this constraint is to perform the same manual guidance steps on a holographic representation of the robot’s digital twin, using augmented reality technologies. However, this approach suffers from a lack of tangibility of the visual holograms that the user tries to grab. We present an interface in which some of this tangibility is provided through ultrasound-based mid-air haptic actuation. We report a user study evaluating the impact of such haptic feedback on a pick-and-place task with the wrist of a holographic robot arm, and found the feedback to be beneficial. Full article
Open Access Article
The Potentials of Tangible Technologies for Learning Linear Equations
Multimodal Technol. Interact. 2020, 4(4), 77; https://doi.org/10.3390/mti4040077 - 23 Oct 2020
Abstract
Tangible technologies provide interactive links between the physical and digital worlds, thereby merging the benefits of physical and virtual manipulatives. To explore the potentials of tangible technologies for learning linear equations, a tangible manipulative (TM) was designed and developed. A prototype of the initial TM was implemented and evaluated using mixed methods (i.e., classroom interventions, paper-based tests, thinking-aloud sessions, questionnaires, and interviews) in real classroom settings. Six teachers, 24 primary school students, and 65 lower secondary school students participated in the exploratory study. The quantitative and qualitative analyses revealed that the initial TM supported student learning at various levels and had a positive impact on their learning achievement, and its overall usability was rated as acceptable. Some minor improvements to its pedagogy and usability could still be made. These findings indicate that the initial TM is likely to be beneficial for linear equation learning from pre-primary to lower secondary school and to be usable in mathematics classrooms. Theoretical and practical implications are discussed. Full article
Open Access Review
Innovative and Assistive eHealth Technologies for Smart Therapeutic and Rehabilitation Outdoor Spaces for the Elderly Demographic
Multimodal Technol. Interact. 2020, 4(4), 76; https://doi.org/10.3390/mti4040076 - 22 Oct 2020
Abstract
The use of technology for social connectivity and achieving engagement goals is increasingly essential to the overall well-being of our rapidly ageing population. While much of the extant literature has focused on home automation and indoor remote health monitoring, there is a growing literature finding that personal health and overall well-being improve when physical activities are conducted outdoors. This study presents a review of innovative and assistive eHealth technologies suitable for smart therapeutic and rehabilitation outdoor spaces for older persons. The article also presents the key performance metrics required of eHealth technologies to ensure robust, timely and reliable biometric data transfer between patients in a therapeutic landscape environment and the respective medical centres. A literature review was conducted of publications whose primary focus is integrating sensors and eHealth technologies in outdoor spaces to collect and transfer data from the elderly people who use such built landscapes to appropriate stakeholders. A content analysis was carried out to synthesize the outcomes of the literature review. The study finds that research into assistive eHealth technologies and interfaces for outdoor therapeutic spaces is in its nascent stages and has limited generalisability. The level of technology uptake and readiness for smart outdoor spaces is still developing and is currently being outpaced by the growth of elderly fitness zones in public spaces. Further research is needed to explore eHealth technologies with interactive feedback mechanisms suitable for outdoor therapeutic environments. Full article
(This article belongs to the Special Issue Personal Health, Fitness Technologies, and Games)
Open Access Article
Virtual Reality Nature Exposure and Test Anxiety
Multimodal Technol. Interact. 2020, 4(4), 75; https://doi.org/10.3390/mti4040075 - 22 Oct 2020
Abstract
The number of students affected by exam anxiety continues to rise, so it is becoming progressively relevant to explore innovative remediation strategies that help mitigate its debilitating effects. The study aimed to investigate whether exposure to a green environment, delivered by virtual reality (VR) technology, would serve as an effective intervention to mitigate participants’ test anxiety and would therefore improve the experience of the exam, measured by positive and negative affect, and increase test scores in a pseudo exam. Twenty high- and twenty low-exam-anxiety students completed a pseudo exam before and after being exposed to either a simulated green environment or an urban environment. Only those who had high anxiety and were exposed to the nature VR intervention showed significant reductions in negative affect (F(1, 31) = 5.86, p = 0.02, ηp2 = 0.15), supporting the idea that exposure to nature, even if simulated, may benefit students’ feelings about their academic performance. The findings are discussed in light of future developments in nature and educational research. Full article
(This article belongs to the Special Issue 3D Human–Computer Interaction)
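For readers checking the reported effect size: partial eta squared can be recovered from an F statistic and its degrees of freedom as ηp² = F·df1 / (F·df1 + df2). A quick sketch, plugging in the values reported in the abstract (the small difference from the reported 0.15 is consistent with rounding of F):

```python
def partial_eta_squared(f, df_effect, df_error):
    """Partial eta squared recovered from an F statistic:
    eta_p^2 = F * df1 / (F * df1 + df2)."""
    return f * df_effect / (f * df_effect + df_error)

# F(1, 31) = 5.86 as reported in the abstract
print(round(partial_eta_squared(5.86, 1, 31), 3))  # → 0.159
```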
Open Access Article
Self-Perception and Training Perceptions on Teacher Digital Competence (TDC) in Spanish and French University Students
Multimodal Technol. Interact. 2020, 4(4), 74; https://doi.org/10.3390/mti4040074 - 17 Oct 2020
Abstract
The purpose of this research is, on the one hand, to analyze the self-perception of future teachers of childhood and primary education, and of those studying for a master’s degree in secondary education teacher training, regarding their Teacher Digital Competence (TDC), as well as the potential influence of gender, country and university institution of origin on their representations. On the other hand, it seeks to analyze future teachers’ perception of the TDC of their university trainers (formative perception). In accordance with these aims, a non-experimental quantitative methodology with a prospective cross-sectional ex post facto design was used. A total of 428 students from two Spanish universities and one French university agreed to participate in the research. The results show a positive self-perception of the acquired TDC, differentiated by gender, and unfavorable perceptions of the digital competences of their teachers. These results confirm the need to improve the technological-manipulative and didactic training of university teachers, and to adapt teaching competences to the demands of the Information and Communication Society (ICS) and to the guidelines of the Common Digital Competence Framework. Full article
Open Access Review
Multimodal Navigation Systems for Users with Visual Impairments—A Review and Analysis
Multimodal Technol. Interact. 2020, 4(4), 73; https://doi.org/10.3390/mti4040073 - 16 Oct 2020
Abstract
Multimodal interaction refers to situations where users are provided with multiple modes of interacting with systems. Researchers are working on multimodality solutions in several domains; the focus of this paper is the domain of navigation systems for supporting users with visual impairments. Although several literature reviews have covered this domain, none has synthesized the research on multimodal navigation systems. This paper provides a review and analysis of multimodal navigation solutions aimed at people with visual impairments. The review also puts forward recommendations for effective multimodal navigation systems and presents the challenges faced during the design, implementation and use of such systems. We call for more research to better understand users’ evolving modality preferences during navigation. Full article
Open Access Review
Tangible Interaction with Light: A Review
Multimodal Technol. Interact. 2020, 4(4), 72; https://doi.org/10.3390/mti4040072 - 5 Oct 2020
Abstract
Light is an important means of information representation and feedback in Human–Computer Interaction (HCI), and light-emitting interaction elements are omnipresent. We address here the interplay of light and tangible interaction with specifically designed objects. The goal of such designs is to support an embodied, emotional and engaged interaction experience integrated into the physical surroundings. The specific combination of tangible interaction and light as a medium is used in several approaches, but a systematic overview of this research area does not yet exist. In order to understand the essence, process and results of tangible interaction with light, we conducted a systematic literature review of 169 studies of tangible interaction with light over the past 20 years. Our results provide a demographic overview of the research, but foremost analyze its concepts, purposes, conceptual frameworks, user contexts, interaction behaviors and the problems addressed by tangible light. Three important findings were obtained: (1) tangible interaction with light has been used for diverse purposes, contexts and interactions; (2) tangible light has been used to address problems such as weak interaction, users not knowing how to interact, a lack of innovation in interaction, collaborative interaction, remote tangible interaction, and emotional interaction; (3) current research in this area can be classified as “wild theory” within conceptual research frameworks, meaning that it places a strong emphasis on innovation. The most important contribution of this work is the systematic review itself, but our findings also give some indication of new ways and future trends for tangible interaction when combined with light as a medium. Full article
Open Access Article
Design for Sustained Wellbeing through Positive Activities—A Multi-Stage Framework
Multimodal Technol. Interact. 2020, 4(4), 71; https://doi.org/10.3390/mti4040071 - 29 Sep 2020
Abstract
In this paper, we introduce a framework that conceptualizes a multi-stage process through which technology can promote sustained wellbeing. Intentional wellbeing-enhancing activities form the centerpiece linking direct product interaction to, ultimately, wellbeing. The framework was developed following a bottom-up–top-down approach by integrating theoretical knowledge from positive psychology, behavioral science and human–computer interaction (HCI)/design with empirical insights. We outline (a) the framework, (b) its five main stages including their multidisciplinary theoretical foundations, (c) relations between these stages and (d) specific elements that further describe each stage. The paper illustrates how the framework was developed and elaborates three major areas of application: (design) research, design strategies and measurement approaches. With this work, we aim to provide actionable guidance for researchers and IT practitioners to understand and design technologies that foster sustained wellbeing. Full article
Open Access Perspective
The Emerging Promise of Touchscreen Devices for Individuals with Intellectual Disabilities
Multimodal Technol. Interact. 2020, 4(4), 70; https://doi.org/10.3390/mti4040070 - 27 Sep 2020
Abstract
This article explores the emerging promise touchscreen devices hold for individuals with intellectual disabilities (ID). Many individuals with ID who struggle to read, write or use voice-assisted strategies can use touchscreen devices in many aspects of their lives. Research has shown that touchscreen technology is available, easy to use and can open an array of empowering possibilities for individuals with ID. In this article we share research and a vision for possible future uses of touchscreen devices for individuals with ID. Our perspectives are shaped by our experiences using touchscreen technology in collaboration with people who have ID. A special aspect of our research methodology is the fact that one of our co-researchers has ID. Full article
Open Access Article
Positive Design for Children with Atopic Dermatitis—Enhanced Problem-Solving and Possibility-Driven Approach in the Context of Chronic Disease
Multimodal Technol. Interact. 2020, 4(4), 69; https://doi.org/10.3390/mti4040069 - 23 Sep 2020
Abstract
Since the 1960s, atopic dermatitis has seen a steady increase in prevalence in developed countries. Most often, the onset begins at an early age, and many patients are very young children. Due to their young age, their parents are forced to take over the handling of the disease. As a consequence, atopic dermatitis places a high burden not only on the affected children, but also on their parents and siblings, limiting the human flourishing of a whole family. Therefore, the described research area calls for a possibility-driven approach that looks beyond mere problem-solving while building on existing support possibilities and creating new ones. This paper presents atopi as the result of such a possibility-driven approach. It incorporates existing patient education and severity scoring into an extensive service, adding new elements to turn necessary practices into joyful experiences, to create feelings of relatedness and to increase perceived self-efficacy, and is thus suited to enabling human flourishing. Full article
Open Access Article
ODO: Design of Multimodal Chatbot for an Experiential Media System
Multimodal Technol. Interact. 2020, 4(4), 68; https://doi.org/10.3390/mti4040068 - 23 Sep 2020
Abstract
This paper presents the design of a multimodal chatbot for use in an interactive theater performance. The chatbot has an architecture consisting of vision and natural language processing capabilities, as well as embodiment in a non-anthropomorphic movable LED array set on a stage. Designed for interaction with up to five users at a time, the system can perform tasks including face detection and emotion classification, tracking of crowd movement through mobile phones, and real-time conversation to guide users through a nonlinear story and interactive games. The final prototype, named ODO, is a tangible embodiment of a distributed multimedia system that solves several technical challenges to provide users with a unique experience through novel interaction. Full article