Multimodal Technol. Interact., Volume 6, Issue 6 (June 2022) – 7 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
49 pages, 8112 KiB  
Review
A Survey on Databases for Multimodal Emotion Recognition and an Introduction to the VIRI (Visible and InfraRed Image) Database
by Mohammad Faridul Haque Siddiqui, Parashar Dhakal, Xiaoli Yang and Ahmad Y. Javaid
Multimodal Technol. Interact. 2022, 6(6), 47; https://doi.org/10.3390/mti6060047 - 17 Jun 2022
Cited by 16 | Viewed by 6617
Abstract
Multimodal human–computer interaction (HCI) systems promise machine–human interaction that more closely resembles interaction between humans. Their ability to support unambiguous information exchange between the two makes these systems more reliable, more efficient, less error-prone, and capable of solving complex tasks. Emotion recognition is a realm of HCI that follows multimodality to achieve accurate and natural results. The widespread use of affective identification in e-learning, marketing, security, health sciences, etc., has increased demand for high-precision emotion recognition systems. Machine learning (ML) is increasingly used to improve the process, by tweaking the architectures or by wielding high-quality databases (DB). This paper presents a survey of such DBs that are being used to develop multimodal emotion recognition (MER) systems. The survey covers DBs that contain multi-channel data, such as facial expressions, speech, physiological signals, body movements, gestures, and lexical features. A few unimodal DBs that work in conjunction with other DBs for affect recognition are also discussed. Further, VIRI, a new DB of visible and infrared (IR) images of subjects expressing five emotions in an uncontrolled, real-world environment, is presented. A rationale for the superiority of the presented corpus over existing ones is provided.

17 pages, 1715 KiB  
Article
The Quantitative Case-by-Case Analyses of the Socio-Emotional Outcomes of Children with ASD in Robot-Assisted Autism Therapy
by Zhansaule Telisheva, Aida Amirova, Nazerke Rakhymbayeva, Aida Zhanatkyzy and Anara Sandygulova
Multimodal Technol. Interact. 2022, 6(6), 46; https://doi.org/10.3390/mti6060046 - 15 Jun 2022
Cited by 7 | Viewed by 2006
Abstract
With its focus on robot-assisted autism therapy, this paper presents case-by-case analyses of the socio-emotional outcomes of 34 children aged 3–12 years, with different cases of Autism Spectrum Disorder (ASD) and Attention Deficit Hyperactivity Disorder (ADHD). We grouped children by the following characteristics: ASD alone (n = 22), ASD+ADHD (n = 12), verbal (n = 11), non-verbal (n = 23), low-functioning autism (n = 24), and high-functioning autism (n = 10). This paper provides a series of separate quantitative analyses across the first and last sessions, adaptive and non-adaptive sessions, and parent and no-parent sessions, to present child experiences with the NAO robot during play-based activities. The results suggest that robots are able to interact with children in social ways and influence their social behaviors over time. Each child with ASD is a unique case and needs an individualized approach to practice and learn social skills with the robot. Finally, we present specific child–robot intricacies that affect how children engage and learn over time, as well as across different sessions.
(This article belongs to the Special Issue Intricacies of Child–Robot Interaction)

19 pages, 13566 KiB  
Review
Human–Machine Interface for Remote Crane Operation: A Review
by Taufik Akbar Sitompul
Multimodal Technol. Interact. 2022, 6(6), 45; https://doi.org/10.3390/mti6060045 - 10 Jun 2022
Cited by 4 | Viewed by 3248
Abstract
Cranes are traditionally controlled by operators who are present on-site. While this operation mode is still common, significant progress has been made toward moving operators away from their cranes, so that they are not exposed to hazardous situations that may occur in their workplace. Despite its apparent benefits, remote operation has a major challenge that does not exist in on-site operation: the amount of information that operators can receive remotely is more limited than what they could receive by being on-site. Since operators and their cranes are located separately, the human–machine interface plays an important role in facilitating information exchange between operators and their machines. This article examines various kinds of human–machine interfaces for remote crane operation that have been proposed within the scientific community, discusses their possible benefits, and highlights opportunities for future research.

13 pages, 279 KiB  
Article
Smartphone Usage and Studying: Investigating Relationships between Type of Use and Self-Regulatory Skills
by Kendall Hartley, Lisa D. Bendixen, Emily Shreve and Dan Gianoutsos
Multimodal Technol. Interact. 2022, 6(6), 44; https://doi.org/10.3390/mti6060044 - 07 Jun 2022
Viewed by 2546
Abstract
The purpose of this study is to investigate the relationships between self-regulated learning skills and smartphone usage in relation to studying. It is unclear whether poor learning habits related to smartphone usage are unique traits or a reflection of existing self-regulated learning skills. The self-regulatory skills (a) regulation, (b) knowledge, and (c) management of cognition were measured and compared to the smartphone practices (a) multitasking, (b) avoiding distractions, and (c) mindful use. First-year undergraduates (n = 227) completed an online survey of self-regulatory skills and common phone practices. The results support the predictions that self-regulatory skills are negatively correlated with multitasking while studying and are positively correlated with distraction avoidance and mindful use of the phone. The management of cognition factor, which includes effort, time, and planning, was strongly correlated with multitasking (r = −0.20) and avoiding distractions (r = 0.45). Regulation of cognition was strongly correlated with mindful use (r = 0.33). These results support the need to consider the relationship between self-regulation and smartphone use as it relates to learning.
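The r values quoted in this abstract are Pearson product-moment correlations. As a minimal illustrative sketch (the scale scores below are invented for demonstration, not the study's data), the coefficient can be computed from two equal-length score lists:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Covariance numerator and the two standard-deviation terms.
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical 5-point scale scores: self-regulation vs. multitasking.
regulation = [4, 5, 3, 4, 2, 5, 3, 4]
multitasking = [2, 1, 3, 2, 4, 1, 3, 2]
print(round(pearson_r(regulation, multitasking), 2))  # → -1.0 (perfect negative)
```

A negative r, as in this toy case, mirrors the direction of the reported correlation between self-regulatory skills and multitasking while studying.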
24 pages, 7122 KiB  
Article
Design, Development, and a Pilot Study of a Low-Cost Robot for Child–Robot Interaction in Autism Interventions
by Ilias A. Katsanis, Vassilis C. Moulianitis and Diamantis T. Panagiotarakos
Multimodal Technol. Interact. 2022, 6(6), 43; https://doi.org/10.3390/mti6060043 - 06 Jun 2022
Cited by 1 | Viewed by 2006
Abstract
Socially assistive robots are widely deployed in interventions with children on the autism spectrum, exploiting the benefits of this technology in social behavior intervention plans while reducing autistic behaviors. Furthermore, innovations in modern technologies such as machine learning endow these robots with great capabilities. Although the results of such implementations are promising, their total cost makes them unaffordable for some organizations, while the needs are growing progressively. In this paper, a low-cost robot for autism interventions is proposed, benefiting from the advantages of machine learning and low-cost hardware. The mechanical design of the robot and the development of machine learning models are presented. The robot was evaluated by a small group of educators for children with ASD. The results of various model implementations, together with the design evaluation of the robot, are encouraging and indicate that this technology would be advantageous for deployment in child–robot interaction scenarios.
(This article belongs to the Special Issue Intricacies of Child–Robot Interaction)

31 pages, 57438 KiB  
Article
Interactive Visualizations of Transparent User Models for Self-Actualization: A Human-Centered Design Approach
by Mouadh Guesmi, Mohamed Amine Chatti, Alptug Tayyar, Qurat Ul Ain and Shoeb Joarder
Multimodal Technol. Interact. 2022, 6(6), 42; https://doi.org/10.3390/mti6060042 - 30 May 2022
Cited by 6 | Viewed by 3351
Abstract
This contribution sheds light on the potential of transparent user models for self-actualization. It discusses the development of EDUSS, a conceptual framework for self-actualization goals of transparent user modeling. Drawing from a qualitative research approach, the framework investigates self-actualization from the psychology and computer science disciplines and derives a set of self-actualization goals and mechanisms. Following a human-centered design (HCD) approach, the framework was applied in an iterative process to systematically design a set of interactive visualizations to help users achieve different self-actualization goals in the scientific research domain. For this purpose, an explainable user interest model within a recommender system is utilized to provide various information on how the interest models are generated from users’ publication data. The main contributions are threefold: first, a synthesis of research on self-actualization from different domains; second, EDUSS, a theoretically sound self-actualization framework for transparent user modeling consisting of five main goals, namely, Explore, Develop, Understand, Scrutinize, and Socialize; third, an instantiation of the proposed framework to effectively design interactive visualizations that can support the different self-actualization goals, following an HCD approach.
(This article belongs to the Special Issue Explainable User Models)

38 pages, 4783 KiB  
Article
Designing with Genius Loci: An Approach to Polyvocality in Interactive Heritage Interpretation
by Violeta Tsenova, Gavin Wood and David Kirk
Multimodal Technol. Interact. 2022, 6(6), 41; https://doi.org/10.3390/mti6060041 - 24 May 2022
Cited by 3 | Viewed by 2640
Abstract
Co-design with communities interested in heritage has oriented itself towards designing for polyvocality to diversify the accepted knowledges, values and stories associated with heritage places. However, engagement with heritage theory has only recently been addressed in HCI design, resulting in some previous work reinforcing the same realities that designers set out to challenge. There is a need for an approach that supports designers in heritage settings in working critically with polyvocality to capture values, knowledges, and authorised narratives, and to reflect on how these are negotiated and presented in the designs created. We contribute “Designing with Genius Loci” (DwGL), our proposed approach to co-design for polyvocality. We conceptualised DwGL through long-term engagement with volunteers and staff at a UK heritage site. First, we used ongoing recruitment to incentivise participation. We held a series of making workshops to explore participants’ attitudes towards authorised narratives. We built participants’ commitments to collaboration by introducing the common goal of creating an interactive digital design. Finally, as we designed, we enacted our own commitments to the heritage research and to participants’ experiences. These four steps form the backbone of our proposed approach and serve as points of reflexivity. We applied DwGL to co-creating three designs: Un/Authorised View, SDH Palimpsest and Loci Stories, which we present in an annotated portfolio. Grounded in research through design, we reflect on working with the proposed approach and provide three lessons learned, guiding further research efforts in this design space: (1) creating a conversation between authorised and personal heritage stories; (2) designing with polyvocality negotiates voices; and (3) designs engender existing qualities and values. The proposed approach places polyvocality foremost in interactive heritage interpretation and facilitates valuable discussions between the designers and communities involved.
(This article belongs to the Special Issue Co-Design Within and Between Communities in Cultural Heritage)
