Journal Description
Multimodal Technologies and Interaction
Multimodal Technologies and Interaction is an international, peer-reviewed, open access journal on multimodal technologies and interaction published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), Inspec, dblp Computer Science Bibliography, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Cybernetics) / CiteScore - Q2 (Neuroscience (miscellaneous))
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 14.5 days after submission; acceptance to publication takes 4.9 days (median values for papers published in this journal in the first half of 2024).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 2.4 (2023)
Latest Articles
Extended Reality Applications for CNC Machine Training: A Systematic Review
Multimodal Technol. Interact. 2024, 8(9), 80; https://doi.org/10.3390/mti8090080 - 11 Sep 2024
Abstract
Extended reality (XR) as an immersive technology has gained significant interest in the industry for training and maintenance tasks. It offers an interactive, three-dimensional environment that can boost users’ efficiency and safety in various sectors. The present systematic review provides information based on a Scopus database search for research articles from 2011 to 2024 to expose 19 selected studies related to XR developments and approaches. The purpose is to grasp the state of the art, focusing on user training in goals or tasks that involve computer numerical control (CNC) machines. The study revealed approaches that broadly employed XR devices to execute diverse operations for virtual CNC machines, offering enhanced safety and skills acquisition, lessening the use of physical machines that impact energy consumption or the time invested by an expert worker to teach an operation task. The articles highlight the advantages of XR training versus traditional training in CNC machines, revealing an opportunity to enhance learning aligned to the industry 4.0 (I4.0) paradigm. Virtual reality (VR) and augmented reality (AR) applications are the most used and are mainly centered on a single-user environment. In addition, a VR approach is built as a proof of concept for learning CNC machine operations, considering the key features identified.
Open Access Brief Report
Can Generative AI Contribute to Health Literacy? A Study in the Field of Ophthalmology
by Carlos Ruiz-Núñez, Javier Gismero Rodríguez, Antonio J. Garcia Ruiz, Saturnino Manuel Gismero Moreno, María Sonia Cañizal Santos and Iván Herrera-Peco
Multimodal Technol. Interact. 2024, 8(9), 79; https://doi.org/10.3390/mti8090079 - 4 Sep 2024
Abstract
ChatGPT, a generative artificial intelligence model, can provide useful and reliable responses in the field of ophthalmology, comparable to those of medical professionals. Twelve frequently asked questions from ophthalmology patients were selected, and responses were generated both in the role of an expert user and a non-expert user. The responses were evaluated by ophthalmologists using three scales: Global Quality Score (GQS), Reliability Score (RS), and Usefulness Score (US), and analyzed statistically through descriptive study, association, and comparison. The results indicate that there are no significant differences between the responses of expert and non-expert users, although the responses from the expert user tend to be slightly better rated. ChatGPT’s responses proved to be reliable and useful, suggesting its potential as a complementary tool to enhance health literacy and alleviate the informational burden on healthcare professionals.
Open Access Article
VRChances: An Immersive Virtual Reality Experience to Support Teenagers in Their Career Decisions
by Michael Holly, Carina Weichselbraun, Florian Wohlmuth, Florian Glawogger, Maria Seiser, Philipp Einwallner and Johanna Pirker
Multimodal Technol. Interact. 2024, 8(9), 78; https://doi.org/10.3390/mti8090078 - 4 Sep 2024
Abstract
In this paper, we present a tool that offers young people virtual career guidance through an immersive virtual reality (VR) experience. While virtual environments provide an effective way to explore different experiences, VR offers users immersive interactions with simulated 3D environments. This allows the realistic exploration of different job fields in a virtual environment without being physically present. The study investigates the extent to which performing occupational tasks in a virtual environment influences the career perceptions of young adults and whether it enhances their understanding of professions. In particular, the study focuses on users’ expectations of an electrician’s profession. In total, 23 teenagers and eight application experts were involved to assess the teenagers’ expectations and the potential of the career guidance tool.
Open Access Article
A Multispecies Interaction Design Approach: Introducing the Beings Activities Context Technologies (BACT) Framework
by Theodora Chamaidi and Modestos Stavrakis
Multimodal Technol. Interact. 2024, 8(9), 77; https://doi.org/10.3390/mti8090077 - 4 Sep 2024
Abstract
For years, design has been focused on human needs, creating human-centred solutions and often neglecting the existence or the impact that design can have on other species. As designers shift from that traditional anthropocentric approach to adopting design practices that include other species’ perspectives in the process, there is a growing need for practices capable of providing designers with the right tools to understand non-human needs and design for their inclusion. For this reason, the Beings Activities Context Technologies (BACT) framework is proposed as a theoretical means to support the shift to a more multispecies-oriented approach, expanding Benyon’s anthropocentric People Activities Contexts Technologies (PACT) framework. The methodological implications of the framework have been explored in a case study design project focused on the development of a wearable device designed to support beekeepers during their work. The case study explored the design by taking into consideration both the needs of humans and animals in the context of beekeeping while analysing their interactions in depth. Through this framework, we seek to contribute to the more-than-human turn in interaction design and aid designers in expanding their considerations beyond the person–technology relationship.
Open Access Article
Observations and Considerations for Implementing Vibration Signals as an Input Technique for Mobile Devices
by Thomas Hrast, David Ahlström and Martin Hitz
Multimodal Technol. Interact. 2024, 8(9), 76; https://doi.org/10.3390/mti8090076 - 2 Sep 2024
Abstract
This work examines swipe-based interactions on smart devices, like smartphones and smartwatches, that detect vibration signals through defined swipe surfaces. We investigate how these devices, held in users’ hands or worn on their wrists, process vibration signals from swipe interactions and ambient noise using a support vector machine (SVM). The work details the signal processing workflow involving filters, sliding windows, feature vectors, SVM kernels, and ambient noise management. It includes how we separate the vibration signal from a potential swipe surface and ambient noise. We explore both software and human factors influencing the signals: the former includes the computational techniques mentioned, while the latter encompasses swipe orientation, contact, and movement. Our findings show that the SVM classifies swipe surface signals with an accuracy of 69.61% when both devices are used, 97.59% with only the smartphone, and 99.79% with only the smartwatch. However, the classification accuracy drops to about 50% in field user studies simulating real-world conditions such as phone calls, typing, walking, and other undirected movements throughout the day. The decline in performance under these conditions suggests challenges in ambient noise discrimination, which this work discusses, along with potential strategies for improvement in future research.
(This article belongs to the Special Issue Multimodal User Interfaces and Experiences: Challenges, Applications, and Perspectives)
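To make the pipeline described in the abstract above more concrete, here is a minimal sketch of a windowed-feature SVM classifier for vibration signals, assuming scikit-learn is available; the window length, feature set, and RBF kernel are illustrative choices, not the authors' exact configuration.

```python
# Sketch: classify swipe-surface vibration signals with an SVM.
# Window length, features, and kernel are illustrative assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def sliding_windows(signal, size=128, step=64):
    """Split a 1-D vibration signal into overlapping windows."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]

def window_features(window):
    """Simple per-window features: RMS energy, peak amplitude, spectral centroid."""
    spectrum = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(len(window))
    centroid = float((freqs * spectrum).sum() / (spectrum.sum() + 1e-9))
    return [float(np.sqrt(np.mean(window ** 2))),
            float(np.max(np.abs(window))),
            centroid]

def build_dataset(signals, labels):
    """signals: list of 1-D arrays; labels: one swipe-surface class per signal."""
    X, y = [], []
    for sig, lab in zip(signals, labels):
        for w in sliding_windows(np.asarray(sig, dtype=float)):
            X.append(window_features(w))
            y.append(lab)
    return np.array(X), np.array(y)

# RBF-kernel SVM with feature scaling, one plausible configuration.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
# Hypothetical usage:
# X, y = build_dataset(recorded_signals, surface_labels)
# clf.fit(X, y)
# clf.predict([window_features(new_window)])
```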
Open Access Review
Impact of Artificial Intelligence on Learning Management Systems: A Bibliometric Review
by Diego Vergara, Georgios Lampropoulos, Álvaro Antón-Sancho and Pablo Fernández-Arias
Multimodal Technol. Interact. 2024, 8(9), 75; https://doi.org/10.3390/mti8090075 - 25 Aug 2024
Abstract
The field of artificial intelligence is drastically advancing. This study aims to provide an overview of the integration of artificial intelligence into learning management systems. This study followed a bibliometric review approach. Specifically, following the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement, 256 documents from the Scopus and Web of Science (WoS) databases over the period of 2004–2023 were identified and examined. Besides an analysis of the documents within the existing literature, emerging themes and topics were identified, and directions and recommendations for future research are provided. Based on the outcomes, the use of artificial intelligence within learning management systems offers adaptive and personalized learning experiences, promotes active learning, and supports self-regulated learning in face-to-face, hybrid, and online learning environments. Additionally, learning management systems enriched with artificial intelligence can improve students’ learning outcomes, engagement, and motivation. Their ability to increase accessibility and ensure equal access to education by supporting open educational resources was evident. However, the need to develop effective design approaches, evaluation methods, and methodologies to successfully integrate them within classrooms emerged as an issue to be solved. Finally, the need to further explore education stakeholders’ artificial intelligence literacy also arose.
Open Access Article
Multisensory Technologies for Inclusive Exhibition Spaces: Disability Access Meets Artistic and Curatorial Research
by Sevasti Eva Fotiadi
Multimodal Technol. Interact. 2024, 8(8), 74; https://doi.org/10.3390/mti8080074 - 19 Aug 2024
Abstract
This article discusses applications of technology for sensory-disabled audiences in modern and contemporary art exhibitions. One case study of experimental artistic and curatorial research by The OtherAbilities art collective is discussed: a series of prototype tools for sensory translation from audible sound to vibration were developed to be embeddable in the architecture of spaces where art is presented. In the article, the case study is approached from a curatorial perspective. Based on bibliographical sources, the article starts with a brief historical reference to disability art activism and a presentation of contemporary accessibility solutions for sensory-disabled audiences in museums. The research for the case study was conducted during testing and feedback sessions on the prototypes using open-ended oral interviews, open-ended written comments, and ethnographic observation of visitors’ behavior during exhibitions. The testers were d/Deaf, hard of hearing and hearing. The results focus on the reception of the sensory translation of audible sound to vibration by test users of diverse hearing abilities and on the reception of the prototypes in the context of art and design exhibitions. The article closes with a reflection on how disability scholarship meets art curatorial theory in the example of the article’s case study.
Open Access Article
Micro-Credentialing and Digital Badges in Developing RPAS Knowledge, Skills, and Other Attributes
by John Murray, Keith Joiner and Graham Wild
Multimodal Technol. Interact. 2024, 8(8), 73; https://doi.org/10.3390/mti8080073 - 15 Aug 2024
Abstract
This study explores the potential of micro-credentialing and digital badges in developing and validating the knowledge, skills, and other attributes (KSaOs) required for diverse Remotely Piloted Aircraft Systems (RPAS) operations. The rapid proliferation of drone usage has outpaced the development of necessary KSaOs for safe and efficient drone operations. This research aims to bridge this gap by identifying the unique and specific KSaOs required for different types of drone operations and examining how micro-credentialing and digital badges can provide tangible evidence of these KSaOs. The study also investigates the potential benefits and challenges of implementing digital badges in the RPAS sector and how these challenges can be addressed. Furthermore, it explores how digital badges can contribute to the standardization and recognition of RPAS competencies across different national regulatory bodies. The methodology involves observational studies of publicly available videos of drone operations, with a focus on agriculture spraying operations. The findings highlight the importance of both generic and specific KSaOs in RPAS operations and suggest that digital badges may provide an effective means of evidencing mastery of these competencies. This research contributes to the ongoing discourse on drone regulation and competency development, offering practical insights for regulators, training providers, and drone operators.
Open Access Article
Evaluating Virtual Reality in Education: An Analysis of VR through the Instructors’ Lens
by Vaishnavi Rangarajan, Arash Shahbaz Badr and Raffaele De Amicis
Multimodal Technol. Interact. 2024, 8(8), 72; https://doi.org/10.3390/mti8080072 - 12 Aug 2024
Abstract
The rapid development of virtual reality (VR) technology has triggered a significant expansion of VR applications in educational settings. This study seeks to understand the extent to which these applications meet the expectations and pedagogical needs of university instructors. We conducted semi-structured interviews and observations with 16 university-level instructors from Oregon State University to gather insights into their experiences and perspectives regarding the use of VR in educational contexts. Our qualitative analysis reveals detailed trends in instructors’ requirements, their satisfaction and dissatisfaction with current VR tools, and the perceived barriers to broader adoption. The study also explores instructors’ expectations and preferences for designing and implementing VR-driven courses, alongside an evaluation of the usability of selected VR applications. By elucidating the challenges and opportunities associated with VR in education, this study aims to guide the development of more effective VR educational tools and inform future curriculum design, contributing to the enhancement of digital learning environments.
Open Access Article
Enhancing Reflective and Conversational User Engagement in Argumentative Dialogues with Virtual Agents
by Annalena Aicher, Yuki Matsuda, Keichii Yasumoto, Wolfgang Minker, Elisabeth André and Stefan Ultes
Multimodal Technol. Interact. 2024, 8(8), 71; https://doi.org/10.3390/mti8080071 - 6 Aug 2024
Abstract
In their process of information seeking, human users tend to selectively ignore information that contradicts their pre-existing beliefs or opinions. These so-called “self-imposed filter bubbles” (SFBs) pose a significant challenge for argumentative conversational agents aiming to facilitate critical, unbiased opinion formation on controversial topics. With the ultimate goal of developing a system that helps users break their self-imposed filter bubbles (SFBs), this paper aims to investigate the role of co-speech gestures, specifically examining how these gestures significantly contribute to achieving this objective. This paper extends current research by examining methods to engage users in cooperative discussions with a virtual human-like agent, encouraging a deep reflection on arguments to disrupt SFBs. Specifically, we investigate the agent’s non-verbal behavior in the form of co-speech gestures. We analyze whether co-speech gestures, depending on the conveyed information, enhance motivation, and thus conversational user engagement, thereby encouraging users to consider information that could potentially disrupt their SFBs. The findings of a laboratory study with 56 participants highlight the importance of non-verbal agent behaviors, such as co-speech gestures, in improving users’ perceptions of the interaction and the conveyed content. This effect is particularly notable when the content aims to challenge the user’s SFB. Therefore, this research offers valuable insights into enhancing user engagement in the design of multimodal interactions with future cooperative argumentative virtual agents.
(This article belongs to the Special Issue Multimodal Interaction with Virtual Agents and Communication Robots)
Open Access Communication
Multimodal Drumming Education Tool in Mixed Reality
by James Pinkl, Julián Villegas and Michael Cohen
Multimodal Technol. Interact. 2024, 8(8), 70; https://doi.org/10.3390/mti8080070 - 5 Aug 2024
Abstract
First-person VR- and MR-based Action Observation research has thus far yielded both positive and negative findings in studies observing such tools’ potential to teach motor skills. Teaching drumming, particularly polyrhythms, is a challenging motor skill to learn and has remained largely unexplored in the field of Action Observation. In this contribution, a multimodal tool designed to teach rudimental and polyrhythmic drumming was developed and tested in a 20-subject study. The tool presented subjects with a first-person MR perspective via a head-mounted display to provide users with visual exposure to both virtual content and their physical surroundings simultaneously. When compared against a control group practicing via video demonstrations, results showed increased rhythmic accuracy across four exercises. Specifically, a difference of 239 ms (z-ratio = 3.520, p < 0.001) was found between the timing errors of subjects who practiced with our multimodal mixed reality development compared to subjects who practiced with video, demonstrating the potential of such affordances. This research contributes to ongoing work in the fields of Action Observation and Mixed Reality, providing evidence that Action Observation techniques can be an effective practice method for drumming.
Open Access Article
The Role of Audio Feedback and Gamification Elements for Remote Boom Operation
by Alissa Burova, John Mäkelä, Tuuli Keskinen, Pekka Kallioniemi, Kimmo Ronkainen and Markku Turunen
Multimodal Technol. Interact. 2024, 8(8), 69; https://doi.org/10.3390/mti8080069 - 1 Aug 2024
Abstract
Remote operations have been greatly enhanced by advancements in technology, enabling remote control of machinery in hazardous environments. However, it is still a challenge to design remote control interfaces and provide feedback in a way that would enhance situational awareness without negatively affecting cognitive load. This study investigates how different audio feedback designs can support remote boom operation and, additionally, explores the potential impact of gamification elements on operator performance and motivation. Due to COVID-19 restrictions, this study was conducted remotely with 16 participants using a simulated environment featuring a virtual excavator. Participants performed digging tasks using two audio feedback designs: frequency-modulated beeping and realistic spatialized steam sounds. The findings indicate that both audio designs are beneficial for remote boom operations: the beeping sound was perceived as more comfortable and efficient in determining the proximity of a hidden object and helped in avoiding collisions, whereas spatial sounds enhanced the sense of presence. Therefore, we suggest combining both audio designs for optimal performance and emphasize the importance of customizable feedback in remote operations. This study also revealed that gamification elements could both positively and negatively affect performance and motivation, highlighting the need for careful design tailored to specific task requirements.
(This article belongs to the Special Issue Multimodal User Interfaces and Experiences: Challenges, Applications, and Perspectives)
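As a rough illustration of the first audio design described in the abstract above, a distance-to-beep mapping could look like the sketch below; the ranges and the linear mappings are assumptions for illustration, not the study's actual parameters.

```python
# Sketch: map boom-tip distance to a hidden object onto beep pitch and rate.
# The ranges and the linear mappings are illustrative, not the study's parameters.
def beep_parameters(distance_m, d_min=0.1, d_max=3.0,
                    f_min=300.0, f_max=1200.0,
                    rate_min=1.0, rate_max=10.0):
    """Return (frequency in Hz, beeps per second) for a distance in metres.

    Closer distances yield a higher pitch and a faster beep rate."""
    d = min(max(distance_m, d_min), d_max)       # clamp to the working range
    closeness = (d_max - d) / (d_max - d_min)    # 0 = far, 1 = near
    frequency = f_min + closeness * (f_max - f_min)
    rate = rate_min + closeness * (rate_max - rate_min)
    return frequency, rate

if __name__ == "__main__":
    for d in (2.5, 1.0, 0.2):
        print(d, beep_parameters(d))
```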
Open Access Review
Lessons Learned from Implementing Light Field Camera Animation: Implications, Limitations, Potentials, and Future Research Efforts
by Mary Guindy and Peter A. Kara
Multimodal Technol. Interact. 2024, 8(8), 68; https://doi.org/10.3390/mti8080068 - 1 Aug 2024
Abstract
Among the novel 3D visualization technologies of our era, light field displays provide the complete 3D visual experience without the need for any personal viewing device. Due to the lack of such a constraint, these displays may be viewed by any number of observers simultaneously, and the corresponding use case contexts may also involve a virtually unlimited number of users; any number that the valid viewing area of the display may accommodate. While many instances of the utilization of this technology operate with static contents, camera animation may also be relevant. While the topic of light field camera animation has already been addressed on an initial level, there are still numerous research efforts to be carried out. In this paper, we elaborate on the lessons learned from implementing light field camera animation. The paper discusses the associated implications, limitations, potentials, and future research efforts. Each of these areas is approached from the perspectives of use cases, visual content, and quality assessment, as well as capture and display hardware. Our work highlights the existing research gaps in the investigated topic, the severe issues related to visualization sharpness, and the lack of appropriate datasets, as well as the constraints due to which novel contents may be captured by virtual cameras instead of real capture systems.
Open Access Article
Effects of a 12-Week Semi-Immersion Virtual Reality-Based Multicomponent Intervention on the Functional Capacity of Older Adults in Different Age Groups: A Randomized Control Trial
by Li-Ting Wang, Yung Liao, Shao-Hsi Chang and Jong-Hwan Park
Multimodal Technol. Interact. 2024, 8(8), 67; https://doi.org/10.3390/mti8080067 - 1 Aug 2024
Abstract
Virtual reality (VR) exercise has been used as a strategy to promote physical health in older adults. Studies have revealed that the effects of exercise interventions vary across age groups. This study aimed to investigate the effects of a 12-week semi-immersion VR-based multicomponent exercise program on the functional fitness of young-old (65–73 years) and middle-old (74–85 years) adults. This study recruited two age groups (young-old adults, n = 49; middle-old adults, n = 37) and randomly assigned them to the experimental (EG) and control (CG) groups. EG participants performed a 75–90-min semi-immersion VR exercise routine twice weekly for 12 weeks, whereas CG participants maintained their original lifestyles without any alterations. The Senior Fitness Test was used to measure functional fitness by assessing upper- and lower-limb flexibility and muscle strength, cardiorespiratory fitness, and balance. EG participants exhibited greater improvements than their CG counterparts in certain functional fitness tests, specifically the Back Scratch, Arm Curl, 2-Minute Step, and 8-Foot Up-and-Go Tests. On comparing the age groups, a difference was exclusively noted in the effects of the Chair Sit-and-Reach Test. In the EG, the intervention significantly improved lower-body flexibility in young-old adults but elicited no such improvement in middle-old adults. Semi-immersion VR exercise improved the functional fitness of young-old and middle-old adults in the EG, with superior results in the former. Elucidating the impact of age-specific exercise interventions on functional capacity will help practitioners design age-specific exercise training content that enhances functional fitness in older adults of different ages.
Open Access Review
User Experience in Immersive Virtual Reality-Induced Hypoalgesia in Adults and Children Suffering from Pain Conditions
by Javier Guerra-Armas, Mar Flores-Cortes, Guillermo Ceniza-Bordallo and Marta Matamala-Gomez
Multimodal Technol. Interact. 2024, 8(8), 66; https://doi.org/10.3390/mti8080066 - 1 Aug 2024
Abstract
Pain is the most common reason for medical consultation and use of health care resources. The high socio-economic burden of pain justifies seeking an appropriate therapeutic strategy. Immersive virtual reality (VR) has emerged as a first-line non-pharmacological option for pain management. However, the growing literature has not been accompanied by substantial progress in understanding how VR could reduce the pain experience, with some user experience factors being associated with the hypoalgesic effects of immersive VR. The aim of this review is (i) to summarize the state of the art on the effects of VR on adults and children suffering from pain conditions; (ii) to identify and summarize how mechanisms across immersive VR user experience influence hypoalgesic effects in patients with acute and chronic pain among adults and children. A critical narrative review based on PICOT criteria (P = Patient or Population and Problem; I = Intervention or Indicator; C = Comparison; O = Outcome; T = Type) was conducted that includes experimental studies or systematic reviews involving studies in experimentally induced pain, acute pain, or chronic pain in adults and children. The results suggest an association between immersive VR-induced hypoalgesia and user experience factors such as distraction, presence, interactivity, gamification, and virtual embodiment. These findings suggest that hierarchical relationships might exist between user experience-related factors and greater hypoalgesic effects following an immersive VR intervention. This relationship needs to be considered in the design and development of VR-based strategies for pain management.
(This article belongs to the Special Issue 3D User Interfaces and Virtual Reality—2nd Edition)
Open Access Article
Mindful Waters: An Interactive Digital Aquarium for People with Dementia
by Maarten Hundscheid, Linghan Zhang, Ans Tummers-Heemels and Wijnand IJsselsteijn
Multimodal Technol. Interact. 2024, 8(8), 65; https://doi.org/10.3390/mti8080065 - 26 Jul 2024
Abstract
Dementia can be associated with social withdrawal, mood changes, and decreased interaction. Animal-assisted therapies and robotic companions have shown potential in enhancing well-being but come with limitations like high maintenance costs and complexity. This research presents an interactive digital aquarium called Mindful Waters, which was developed to promote social interaction and engagement among People with Dementia. The pilot study involved interactive sessions at a community center and a care facility, with situated observations, video and audio recordings, and interviews to assess user engagement motivation, behavior, and user experience with Mindful Waters. The study revealed that Mindful Waters functioned well with People with Dementia and stimulated conversational topics about aquariums through engagement. User feedback was generally positive, with participants appreciating the visual appeal and simplicity. However, some participants with advanced dementia found it challenging to interact due to their mobility limitations, cognitive impairments, and the limited duration of interaction sessions. The overall results suggest that Mindful Waters can benefit dementia care; further research is needed to optimize its design and functionality for long-term placement in care facilities.
Open Access Hypothesis
Serious Games for Cognitive Rehabilitation in Older Adults: A Conceptual Framework
by Diego E. Guzmán, Carlos F. Rengifo and Cecilia E. García-Cena
Multimodal Technol. Interact. 2024, 8(8), 64; https://doi.org/10.3390/mti8080064 - 23 Jul 2024
Abstract
This paper presents a conceptual framework for the development of serious games aimed at cognitive rehabilitation in older adults. Following Jabareen’s methodology, a literature review was conducted to identify concepts and theories that are relevant in this field. The resulting framework comprises the use of virtual reality, integration of physical activity, incorporation of social interaction features, adaptability of difficulty levels, and customization of game content. The interconnections between these concepts and underlying cognitive theories, such as the cognitive reserve hypothesis and the scaffolding theory of aging and cognition, are highlighted. As we are in the early stages of our research, our goal is to introduce and test novel interpretations of current knowledge within this conceptual framework. Additionally, the practical implications of the conceptual framework are discussed, including its strengths and limitations, as well as its relevance for future research and clinical practice in the field of cognitive rehabilitation. It is hoped that this framework will provide a guide for the design and implementation of effective interventions to improve cognitive health and well-being in the older adult population.
Open Access Article
Multimodal Dictionaries for Traditional Craft Education
by Xenophon Zabulis, Nikolaos Partarakis, Valentina Bartalesi, Nicolo Pratelli, Carlo Meghini, Arnaud Dubois, Ines Moreno and Sotiris Manitsaris
Multimodal Technol. Interact. 2024, 8(7), 63; https://doi.org/10.3390/mti8070063 - 18 Jul 2024
Abstract
We address the problem of systematizing the authoring of digital dictionaries for craft education from ethnographic studies and recordings. First, we present guidelines for the collection of ethnographic data using digital audio and video and identify terms that are central in the description of crafting actions, products, tools, and materials. Second, we present a classification scheme for craft terms and a way to semantically annotate them, using a multilingual and hierarchical thesaurus, which provides term definitions and a semantic hierarchy of these terms. Third, we link ethnographic resources and open-access data to the identified terms using an online platform for the representation of traditional crafts, associating their definition with illustrations, examples of use, and 3D models. We validate the efficacy of the approach by creating multimedia vocabularies for an online eLearning platform for introductory courses to nine traditional crafts.
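As an illustration of how a dictionary entry of the kind described above might be held in memory, here is a small data-structure sketch; the field names and the sample term are assumptions, not the project's actual schema or thesaurus.

```python
# Sketch: one possible in-memory representation of a multimodal dictionary entry.
# Field names and the sample entry are illustrative; the project's schema may differ.
from dataclasses import dataclass, field

@dataclass
class CraftTerm:
    term: str                                   # preferred label
    definition: str                             # thesaurus definition
    category: str                               # e.g. action, tool, material, product
    broader: str | None = None                  # parent term in the semantic hierarchy
    translations: dict[str, str] = field(default_factory=dict)   # language code -> label
    media: dict[str, list[str]] = field(default_factory=dict)    # illustrations, 3D models

loom = CraftTerm(
    term="loom",
    definition="A device used to weave cloth by holding warp threads under tension.",
    category="tool",
    broader="weaving equipment",
    translations={"fr": "métier à tisser", "el": "αργαλειός"},
    media={"images": ["loom_front.jpg"], "models_3d": ["loom.glb"]},
)
print(loom.term, "->", loom.broader)
```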
Open Access Article
3D Hand Motion Generation for VR Interactions Using a Haptic Data Glove
by Sang-Woo Seo, Woo-Sug Jung and Yejin Kim
Multimodal Technol. Interact. 2024, 8(7), 62; https://doi.org/10.3390/mti8070062 - 15 Jul 2024
Abstract
Recently, VR-based training applications have become popular and promising, as they can simulate real-world situations in a safe, repeatable, and cost-effective way. For immersive simulations, various input devices have been designed and proposed to increase the effectiveness of training. In this study, we developed a novel device that generates 3D hand motion data and provides haptic force feedback for VR interactions. The proposed device can track 3D hand positions using a combination of the global position estimation of ultrasonic sensors and the hand pose estimation of inertial sensors in real time. For haptic feedback, shape–memory alloy (SMA) actuators were designed to provide kinesthetic forces and an efficient power control without an overheat problem. Our device improves upon the shortcomings of existing commercial devices in tracking and haptic capabilities such that it can track global 3D positions and estimate hand poses in a VR space without using an external suit or tracker. For better flexibility in handling and feeling physical objects compared to exoskeleton-based devices, we introduced an SMA-based actuator to control haptic forces. Overall, our device was designed and implemented as a lighter and less bulky glove which provides comparable accuracy and performance in generating 3D hand motion data for a VR training application (i.e., the use of a fire extinguisher), as demonstrated in the experimental results.
(This article belongs to the Special Issue 3D User Interfaces and Virtual Reality)
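One generic way to combine a low-rate absolute position fix with high-rate inertial updates, as the abstract above describes for ultrasonic and inertial sensors, is a complementary filter; the sketch below is only an illustration under that assumption, not the authors' actual fusion algorithm, and the blend factor alpha is arbitrary.

```python
# Sketch: fuse low-rate ultrasonic position fixes with high-rate inertial updates
# using a simple complementary filter. Generic illustration only; not the paper's
# actual fusion algorithm, and alpha is an assumed tuning value.
import numpy as np

class ComplementaryPositionFilter:
    def __init__(self, alpha=0.98):
        self.alpha = alpha              # weight given to the inertial prediction
        self.position = np.zeros(3)
        self.velocity = np.zeros(3)

    def predict(self, accel_world, dt):
        """Integrate world-frame acceleration (m/s^2) over dt seconds."""
        self.velocity += accel_world * dt
        self.position += self.velocity * dt

    def correct(self, ultrasonic_position):
        """Blend in an absolute ultrasonic fix to cancel inertial drift."""
        self.position = (self.alpha * self.position
                         + (1.0 - self.alpha) * np.asarray(ultrasonic_position, dtype=float))

f = ComplementaryPositionFilter()
f.predict(np.array([0.0, 0.1, 0.0]), dt=0.01)   # 100 Hz inertial update
f.correct([0.02, 0.00, 0.01])                    # occasional ultrasonic fix
print(f.position)
```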
Open Access Article
The Optimization of Numerical Algorithm Parameters with a Genetic Algorithm to Animate Letters of the Sign Alphabet
by Sergio Hernandez-Mendez, Carlos Hernández-Mejía, Delia Torres-Muñoz and Carolina Maldonado-Mendez
Multimodal Technol. Interact. 2024, 8(7), 61; https://doi.org/10.3390/mti8070061 - 10 Jul 2024
Abstract
At present, the development of animation-based works for human–computer interaction applications has increased. To generate animations, actions are pre-recorded and animation flows are configured. In this research, intermediate frames were generated from two images of letters of the sign language alphabet using a numerical tracing algorithm based on homotopy. The parameters of the homotopy curve were optimized with a genetic algorithm to generate the intermediate frames. In the experiments performed, sequences in which a person executes pairs of letters in sign language were recorded, and animations of the same pairs of letters were generated with the proposed method. The similarity of the real sequences to the animations was then measured using Dynamic Time Warping. The results show that the generated images are consistent with their execution by a person. Animation files between sign pairs were created from sign images, with each file averaging 18.3 KB. From these animations between pairs of letters, a file base was built from which words and sentences can be animated. The animations generated by this homotopy-based method optimized with a genetic algorithm can be used in various applications that assist deaf users.
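A compact sketch of the two ingredients named in the abstract, a parameterised homotopy between a start and an end pose and a genetic algorithm scored by DTW distance to a reference sequence; the parameterisation, fitness function, and GA settings are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: interpolate between two hand poses with a parameterised homotopy and
# tune the curve parameter with a tiny genetic algorithm, scoring candidates by
# DTW distance to a reference sequence. All settings are illustrative assumptions.
import random
import numpy as np

def homotopy_frames(start, end, k, n_frames=20):
    """H(t) = (1 - t**k) * start + t**k * end, for t in [0, 1]."""
    ts = np.linspace(0.0, 1.0, n_frames)
    return np.array([(1 - t ** k) * start + (t ** k) * end for t in ts])

def dtw_distance(a, b):
    """Plain O(len(a)*len(b)) dynamic time warping on pose sequences."""
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[-1, -1]

def evolve_k(start, end, reference, generations=30, pop_size=16):
    """Search the exponent k so the generated frames best match the reference."""
    population = [random.uniform(0.5, 3.0) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population,
                        key=lambda k: dtw_distance(homotopy_frames(start, end, k),
                                                   reference))
        parents = scored[:pop_size // 2]                       # keep the best half
        children = [max(0.1, random.choice(parents) + random.gauss(0, 0.1))
                    for _ in range(pop_size - len(parents))]   # mutated offspring
        population = parents + children
    return min(population,
               key=lambda k: dtw_distance(homotopy_frames(start, end, k), reference))

# Hypothetical 2-D landmark poses for two letters and a recorded reference motion.
start_pose, end_pose = np.array([0.0, 0.0]), np.array([1.0, 1.0])
reference = homotopy_frames(start_pose, end_pose, k=1.7)
print("best k:", round(evolve_k(start_pose, end_pose, reference), 2))
```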
Topics
Topic in Information, Mathematics, MTI, Symmetry
Youth Engagement in Social Media in the Post COVID-19 Era
Topic Editors: Naseer Abbas Khan, Shahid Kalim Khan, Abdul Qayyum
Deadline: 30 September 2024
Special Issues
Special Issue in MTI
Effectiveness of Serious Games in Risk Communication of Natural Disasters
Guest Editors: Rui Jesus, Pedro Albuquerque Santos, Maria Ana Viana-Baptista
Deadline: 20 September 2024
Special Issue in MTI
Cooperative Intelligence in Automated Driving—2nd Edition
Guest Editors: Andreas Riener, Myounghoon Jeon (Philart), Ronald Schroeter
Deadline: 20 October 2024
Special Issue in MTI
Multimodal User Interfaces and Experiences: Challenges, Applications, and Perspectives—2nd Edition
Guest Editors: Wei Liu, Jan Auernhammer, Takumi Ohashi
Deadline: 15 January 2025
Special Issue in MTI
Innovative Theories and Practices for Designing and Evaluating Inclusive Educational Technology and Online Learning
Guest Editor: Julius Nganji
Deadline: 31 January 2025