Multimodal Technol. Interact., Volume 9, Issue 7 (July 2025) – 11 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
21 pages, 1589 KB  
Review
Virtual Reality in Medical Education, Healthcare Education, and Nursing Education: An Overview
by Georgios Lampropoulos, Antonio del Bosque, Pablo Fernández-Arias and Diego Vergara
Multimodal Technol. Interact. 2025, 9(7), 75; https://doi.org/10.3390/mti9070075 - 20 Jul 2025
Cited by 1 | Viewed by 1647
Abstract
Virtual reality is increasingly used in health sciences education, including healthcare, nursing, and medical education. Hence, this study provides an overview of the use of virtual reality within healthcare education, nursing education, and medical education through the analysis of published documents from 2010 to 2025. Based on the outcomes of this study, virtual reality emerged as an effective educational tool that can support students and health professionals. The immersive, realistic, and safe environments created in virtual reality allowed learners to enhance their knowledge and practice their skills, patient interactions, and decision-making without risking patient safety. Improvements in learning outcomes, including performance, clinical skills development, critical thinking, and knowledge acquisition, were observed. Virtual reality also contributes positively toward a more holistic health sciences education, as it increases students’ empathy and behavioral understanding. Finally, eight main research topics were identified, and research gaps and future research directions are presented.

27 pages, 1331 KB  
Article
Data-Driven Adaptive Course Framework—Case Study: Impact on Success and Engagement
by Neslihan Ademi and Suzana Loshkovska
Multimodal Technol. Interact. 2025, 9(7), 74; https://doi.org/10.3390/mti9070074 - 19 Jul 2025
Viewed by 586
Abstract
Adaptive learning tailors learning to the specific needs and preferences of the learner. Although studies focusing on adaptive learning systems became popular decades ago, there is still a need for empirical evidence on the usability of adaptive learning in various educational environments. This study uses LMS log data to elucidate an adaptive course design explicitly developed for formal educational environments in higher education institutions. The framework utilizes learning analytics and machine learning techniques. Based on learners’ online engagement and tutors’ assessment of course activities, adaptive learning paths are presented to learners. To determine whether our system can increase learner engagement and prevent failures, learner success and engagement are measured during the learning process. The results show that the proposed adaptive course framework can increase course engagement and success. However, this potential depends on several factors, such as course organization, feedback, time constraints for activities, and the use of incentives.

17 pages, 2402 KB  
Article
Performance and Comfort of Precise Distal Pointing Interaction in Intelligent Cockpits: The Role of Control Display Gain and Wrist Posture
by Yongmeng Wu, Ninghan Ma, Guoan Mao, Xin Li, Xiao Song, Leshao Zhang and Jinyi Zhi
Multimodal Technol. Interact. 2025, 9(7), 73; https://doi.org/10.3390/mti9070073 - 19 Jul 2025
Viewed by 424
Abstract
Using personal smart devices such as mobile phones to perform precise distal pointing in intelligent cockpits is a developing trend. The present study investigated the effects of different control display gains (CD gains) and wrist movement modalities on performance and comfort for precise distal pointing interaction. Twenty healthy participants performed a precise distal pointing task with four constant CD gains (0.6, 0.8, 0.84, and 1.0), two dynamic CD gains, and two wrist movement modalities (wrist extension and rotation), using a mobile phone as the input device. Physiological electromyographic data, task performance, and subjective questionnaire data were collected. Comparative results show that constant CD gain is superior to dynamic CD gain and that 0.8 to 1.0 is the optimal range of values. Performance and comfort improved progressively as the CD gain increased from 0.6 to 1.0, reaching an optimum at 0.84. In terms of the wrist control method, the rotation mode yielded shorter task completion times than the extension mode. The results of this study provide a basis for the design of remote interaction using mobile phones in an intelligent cockpit.
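The constant-versus-dynamic CD gain comparison above can be sketched in a few lines: cursor displacement is the hand displacement scaled by a gain, which is either fixed or speed-dependent. The linear velocity ramp, the reference velocity, and the gain bounds below are illustrative assumptions, not the authors' transfer functions.

```python
def constant_cd_pointer(hand_delta_mm: float, gain: float = 0.84) -> float:
    """Constant CD gain: cursor displacement is a fixed multiple
    of the hand (wrist) displacement."""
    return gain * hand_delta_mm

def dynamic_cd_pointer(hand_delta_mm: float, dt_s: float,
                       g_min: float = 0.6, g_max: float = 1.0,
                       v_ref_mm_s: float = 100.0) -> float:
    """Dynamic CD gain: gain rises with movement speed, so slow
    movements get precision and fast movements cover distance.
    The linear ramp and v_ref_mm_s are illustrative choices."""
    velocity = abs(hand_delta_mm) / dt_s  # movement speed in mm/s
    gain = g_min + (g_max - g_min) * min(velocity / v_ref_mm_s, 1.0)
    return gain * hand_delta_mm
```

With a constant gain the mapping is predictable but trades precision against range; the dynamic variant tries to get both, which the study found inferior to a well-chosen constant gain near 0.84.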

25 pages, 2225 KB  
Article
Virtual Reality Applied to Design Reviews in Shipbuilding
by Seppo Helle, Taneli Nyyssönen, Olli Heimo, Leo Sakari and Teijo Lehtonen
Multimodal Technol. Interact. 2025, 9(7), 72; https://doi.org/10.3390/mti9070072 - 15 Jul 2025
Viewed by 551
Abstract
This article describes a pilot project studying the potential benefits of using virtual reality (VR) in design reviews of cruise ship interiors. The research was conducted as part of a 2020–2022 research project targeting sustainable shipbuilding methods. It was directly connected to an ongoing cruise ship building project, executed in cooperation with four companies constructing interiors. The goal was to use VR reviews instead of, or in addition to, constructing physical mock-up sections of the ship interiors, with expected improvements in sustainability and stakeholder communication. A number of virtual 3D models were created, imported into a virtual reality environment, and presented to customers. Experiences were collected through interviews and surveys from both the construction companies and customers. The results indicate that VR can be an efficient tool for design reviews. The designs can often be evaluated better in VR than with traditional methods. Material savings are possible by using virtual mock-ups instead of physical ones. However, it was also discovered that the visual rendering capabilities of the software environment used do not provide the realism that would be desired in some reviews. To overcome this limitation, more resources would be needed to prepare the models for VR reviews.

20 pages, 1012 KB  
Article
Interaction with Tactile Paving in a Virtual Reality Environment: Simulation of an Urban Environment for People with Visual Impairments
by Nikolaos Tzimos, Iordanis Kyriazidis, George Voutsakelis, Sotirios Kontogiannis and George Kokkonis
Multimodal Technol. Interact. 2025, 9(7), 71; https://doi.org/10.3390/mti9070071 - 14 Jul 2025
Viewed by 996
Abstract
Blindness and low vision are increasingly serious public health issues that affect a significant percentage of the population worldwide. Vision plays a crucial role in spatial navigation and daily activities. Its reduction or loss creates numerous challenges for an individual. Assistive technology can enhance mobility and navigation in outdoor environments. In the field of orientation and mobility training, technologies with haptic interaction can assist individuals with visual impairments in learning how to navigate safely and effectively using the sense of touch. This paper presents a virtual reality platform designed to support the development of navigation techniques within a safe yet realistic environment, expanding upon existing research in the field. Following extensive optimization, we present a visual representation that accurately simulates various 3D tile textures using graphics replicating real tactile surfaces. We conducted a user interaction study in a virtual environment consisting of 3D navigation tiles enhanced with tactile textures, placed appropriately for a real-world scenario, to assess user performance and experience. This study also assesses the usability and user experience of the platform. We hope that the findings will contribute to the development of new universal navigation techniques for people with visual impairments.

26 pages, 628 KB  
Review
Systemic Gamification Theory (SGT): A Holistic Model for Inclusive Gamified Digital Learning
by Franz Coelho and Ana Maria Abreu
Multimodal Technol. Interact. 2025, 9(7), 70; https://doi.org/10.3390/mti9070070 - 10 Jul 2025
Viewed by 1341
Abstract
Gamification has emerged as a powerful strategy in digital education, enhancing engagement, motivation, and learning outcomes. However, most research lacks theoretical grounding and often applies multiple and uncontextualized game elements, limiting its impact and replicability. To address these gaps, this study introduces a Systemic Gamification Theory (SGT)—a comprehensive, human-centered model for designing and evaluating inclusive and effective gamified educational environments. Grounded in Education, Human–Computer Interaction, and Psychology, SGT is structured around four core principles, emphasizing the importance of integrating game elements (1—Integration) into cohesive systems that generate emergent outcomes (2—Emergence) aligned synergistically (3—Synergy) with contextual needs (4—Context). The theory supports inclusivity by accounting for individual traits, situational dynamics, spatial settings, and cultural diversity. To operationalize SGT, we developed two tools: i. a set of 10 Heuristics to guide and analyze effective and inclusive gamification; and ii. a Framework for designing and evaluating gamified systems, as well as comparing research methods and outcomes across different contexts. These tools demonstrated how SGT enables robust, adaptive, and equitable gamified learning experiences. By advancing theoretical and practical development, SGT fosters a transformative approach to gamification, enriching multimedia learning through thoughtful system design and reflective evaluation practices.

23 pages, 3492 KB  
Article
Innovating Personalized Learning in Virtual Education Through AI
by Luis Fletscher, Jhon Mercado, Alvaro Gómez and Carlos Mendoza-Cardenas
Multimodal Technol. Interact. 2025, 9(7), 69; https://doi.org/10.3390/mti9070069 - 3 Jul 2025
Viewed by 1159
Abstract
The rapid expansion of virtual education has highlighted both its opportunities and limitations. Conventional virtual learning environments tend to lack flexibility, often applying standardized methods that do not account for individual learning differences. In contrast, Artificial Intelligence (AI) empowers the creation of customized educational experiences that address specific student needs. Such personalization is essential to mitigate educational inequalities, particularly in areas with limited infrastructure, scarce access to trained educators, and varying levels of digital literacy. This study explores the role of AI in advancing virtual education, with particular emphasis on supporting differentiated learning. It begins by selecting an appropriate pedagogical model to guide personalization strategies and proceeds to investigate the application of AI techniques across three key areas: the characterization of educational resources, the detection of learning styles, and the recommendation of tailored content. The primary contribution of this research is the development of a scalable framework that can be adapted to a variety of educational contexts, with the goal of enhancing the effectiveness and personalization of virtual learning environments through AI.

27 pages, 715 KB  
Article
Developing Comprehensive e-Game Design Guidelines to Support Children with Language Delay: A Step-by-Step Approach with Initial Validation
by Noha Badkook, Doaa Sinnari and Abeer Almakky
Multimodal Technol. Interact. 2025, 9(7), 68; https://doi.org/10.3390/mti9070068 - 3 Jul 2025
Cited by 1 | Viewed by 658
Abstract
e-Games have become increasingly important in supporting the development of children with language delays. However, most existing educational games were not designed using usability guidelines tailored to the specific needs of this group. While various general and game-specific guidelines exist, they often have limitations. Some are too broad, others only address limited features of e-Games, and many fail to consider needs relevant to children with speech and language challenges. Therefore, this paper introduces a new collection of usability guidelines, called eGLD (e-Game for Language Delay), specifically designed for evaluating and improving educational games for children with language delays. The guidelines were created based on Quinones et al.’s methodology, which involves seven stages from the exploratory phase to the refining phase. eGLD consists of 19 guidelines and 131 checklist items that are user-friendly and applicable, addressing diverse features of e-Games for treating language delay in children. To conduct the first validation of eGLD, an experiment was carried out on two popular e-Games, “MITA” and “Speech Blubs”, by comparing the usability issues identified using eGLD with those identified by the Nielsen and GUESS (Game User Experience Satisfaction Scale) guidelines. The experiment revealed that eGLD detected a greater number of usability issues, including critical ones, demonstrating its potential effectiveness in assessing and enhancing the usability of e-Games for children with language delay. Based on this validation, the guidelines were refined, and a second round of validation is planned to further ensure their reliability and applicability.
(This article belongs to the Special Issue Video Games: Learning, Emotions, and Motivation)

13 pages, 854 KB  
Article
Individual Variability in Cognitive Engagement and Performance Adaptation During Virtual Reality Interaction: A Comparative EEG Study of Autistic and Neurotypical Individuals
by Aulia Hening Darmasti, Raphael Zender, Agnes Sianipar and Niels Pinkwart
Multimodal Technol. Interact. 2025, 9(7), 67; https://doi.org/10.3390/mti9070067 - 1 Jul 2025
Viewed by 547
Abstract
Many studies have recognized that individual variability shapes user experience in virtual reality (VR), yet little is known about how these differences influence objective cognitive engagement and performance outcomes. This study investigates how cognitive factors (IQ, age) and technological familiarity (tech enthusiasm, tech fluency, first-time VR experience) influence EEG-derived cognitive responses (alpha and theta activity) and task performance (trial duration) during VR interactions. Sixteen autistic and sixteen neurotypical participants engaged with various VR interactions while their neural activity was recorded using a Muse S EEG. Correlational analyses showed distinct group-specific patterns: higher IQ correlated with elevated average alpha and theta power in autistic participants, while tech fluency significantly influenced performance outcomes only in the neurotypical group. Prior VR experience correlated with better performance in the neurotypical group but slower adaptation in the autistic group. These results highlight the role of individual variability in shaping VR engagement and underscore the importance of personalized design approaches. This work provides foundational insights toward advancing inclusive, user-centered VR systems.

26 pages, 2873 KB  
Article
Interactive Content Retrieval in Egocentric Videos Based on Vague Semantic Queries
by Linda Ablaoui, Wilson Estecio Marcilio-Jr, Lai Xing Ng, Christophe Jouffrais and Christophe Hurter
Multimodal Technol. Interact. 2025, 9(7), 66; https://doi.org/10.3390/mti9070066 - 30 Jun 2025
Viewed by 957
Abstract
Retrieving specific, often instantaneous, content from hours-long egocentric video footage based on hazily remembered details is challenging. Vision–language models (VLMs) have been employed to enable zero-shot text-based content retrieval from videos. However, they fall short when the textual query contains ambiguous terms or users fail to specify their queries sufficiently, leading to vague semantic queries. Such queries can refer to several different video moments, not all of which are relevant, making pinpointing content harder. We investigate the requirements for an egocentric video content retrieval framework that helps users handle vague queries. First, we narrow down vague query formulation factors and limit them to ambiguity and incompleteness. Second, we propose a zero-shot, user-centered video content retrieval framework that leverages a VLM to provide video data and query representations that users can incrementally combine to refine queries. Third, we compare our proposed framework to a baseline video player and analyze user strategies for answering vague video content retrieval scenarios in an experimental study. We report that both frameworks perform similarly, that users favor our proposed framework, and that, in terms of navigation strategies, users value classic interactions when initiating their search and rely on the abstract semantic video representation to refine their resulting moments.
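The zero-shot retrieval step such frameworks build on can be sketched as a cosine-similarity ranking of frame embeddings against a text-query embedding. The function name and the embeddings below are illustrative stand-ins for actual VLM (e.g., CLIP-style) outputs, not the authors' implementation.

```python
import numpy as np

def rank_moments(query_vec: np.ndarray, frame_vecs: np.ndarray) -> np.ndarray:
    """Return frame indices sorted from most to least similar to the
    query, using cosine similarity (the usual zero-shot VLM step).
    query_vec: (d,) text embedding; frame_vecs: (n, d) frame embeddings."""
    q = query_vec / np.linalg.norm(query_vec)          # unit query vector
    f = frame_vecs / np.linalg.norm(frame_vecs, axis=1, keepdims=True)
    return np.argsort(-(f @ q))                        # descending similarity
```

A vague query simply produces many near-tied moments at the top of this ranking; the point of a user-centered framework is to let users refine the query representation interactively rather than sift through that tied list.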

35 pages, 1412 KB  
Article
AI Chatbots in Philology: A User Experience Case Study of Conversational Interfaces for Content Creation and Instruction
by Nikolaos Pellas
Multimodal Technol. Interact. 2025, 9(7), 65; https://doi.org/10.3390/mti9070065 - 27 Jun 2025
Cited by 1 | Viewed by 1002
Abstract
A persistent challenge in training future philology educators is engaging students in deep textual analysis across historical periods—especially in large classes where limited resources, feedback, and assessment tools hinder the teaching of complex linguistic and contextual features. These constraints often lead to superficial learning, decreased motivation, and inequitable outcomes, particularly when traditional methods lack interactive and scalable support. As digital technologies evolve, there is increasing interest in how Artificial Intelligence (AI) can address such instructional gaps. This study explores the potential of conversational AI chatbots to provide scalable, pedagogically grounded support in philology education. Using a mixed-methods case study, twenty-six (n = 26) undergraduate students completed structured tasks using one of three AI chatbots (ChatGPT, Gemini, or DeepSeek). Quantitative and qualitative data were collected via usability scales, AI literacy surveys, and semi-structured interviews. The results showed strong usability across all platforms, with DeepSeek rated highest in intuitiveness. Students reported confidence in using AI for efficiency and decision-making but desired greater support in evaluating multiple AI-generated outputs. The AI-enhanced environment promoted motivation, autonomy, and conceptual understanding, despite some onboarding and clarity challenges. Implications include reducing instructor workload, enhancing student-centered learning, and informing curriculum development in philology, particularly for instructional designers and educational technologists.
