Multimodal Technol. Interact., Volume 9, Issue 7 (July 2025) – 6 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
26 pages, 612 KiB  
Review
Systemic Gamification Theory (SGT): A Holistic Model for Inclusive Gamified Digital Learning
by Franz Coelho and Ana Maria Abreu
Multimodal Technol. Interact. 2025, 9(7), 70; https://doi.org/10.3390/mti9070070 - 10 Jul 2025
Abstract
Gamification has emerged as a powerful strategy in digital education, enhancing engagement, motivation, and learning outcomes. However, most research lacks theoretical grounding and often applies multiple, uncontextualized game elements, limiting its impact and replicability. To address these gaps, this study introduces Systemic Gamification Theory (SGT)—a comprehensive, human-centered model for designing and evaluating inclusive and effective gamified educational environments. Grounded in Education, Human–Computer Interaction, and Psychology, SGT is structured around four core principles, emphasizing the importance of integrating game elements (1—Integration) into cohesive systems that generate emergent outcomes (2—Emergence) aligned synergistically (3—Synergy) with contextual needs (4—Context). The theory supports inclusivity by accounting for individual traits, situational dynamics, spatial settings, and cultural diversity. To operationalize SGT, we developed two tools: i. a set of 10 Heuristics to guide and analyze effective and inclusive gamification; and ii. a Framework for designing and evaluating gamified systems, as well as for comparing research methods and outcomes across different contexts. These tools demonstrated how SGT enables robust, adaptive, and equitable gamified learning experiences. By advancing theoretical and practical development, SGT fosters a transformative approach to gamification, enriching multimedia learning through thoughtful system design and reflective evaluation practices.

23 pages, 3492 KiB  
Article
Innovating Personalized Learning in Virtual Education Through AI
by Luis Fletscher, Jhon Mercado, Alvaro Gómez and Carlos Mendoza-Cardenas
Multimodal Technol. Interact. 2025, 9(7), 69; https://doi.org/10.3390/mti9070069 - 3 Jul 2025
Viewed by 329
Abstract
The rapid expansion of virtual education has highlighted both its opportunities and limitations. Conventional virtual learning environments tend to lack flexibility, often applying standardized methods that do not account for individual learning differences. In contrast, Artificial Intelligence (AI) enables the creation of customized educational experiences that address specific student needs. Such personalization is essential to mitigate educational inequalities, particularly in areas with limited infrastructure, scarce access to trained educators, and varying levels of digital literacy. This study explores the role of AI in advancing virtual education, with particular emphasis on supporting differentiated learning. It begins by selecting an appropriate pedagogical model to guide personalization strategies and proceeds to investigate the application of AI techniques across three key areas: the characterization of educational resources, the detection of learning styles, and the recommendation of tailored content. The primary contribution of this research is the development of a scalable framework that can be adapted to a variety of educational contexts, with the goal of enhancing the effectiveness and personalization of virtual learning environments through AI.

27 pages, 715 KiB  
Article
Developing Comprehensive e-Game Design Guidelines to Support Children with Language Delay: A Step-by-Step Approach with Initial Validation
by Noha Badkook, Doaa Sinnari and Abeer Almakky
Multimodal Technol. Interact. 2025, 9(7), 68; https://doi.org/10.3390/mti9070068 - 3 Jul 2025
Viewed by 226
Abstract
e-Games have become increasingly important in supporting the development of children with language delays. However, most existing educational games were not designed using usability guidelines tailored to the specific needs of this group. While various general and game-specific guidelines exist, they often have limitations: some are too broad, others address only limited features of e-Games, and many fail to consider needs relevant to children with speech and language challenges. Therefore, this paper introduces a new collection of usability guidelines, called eGLD (e-Game for Language Delay), specifically designed for evaluating and improving educational games for children with language delays. The guidelines were created based on Quinones et al.'s methodology, which involves seven stages from the exploratory phase to the refining phase. eGLD consists of 19 guidelines and 131 checklist items that are user-friendly and applicable, addressing diverse features of e-Games for treating language delay in children. To conduct the first validation of eGLD, an experiment was carried out on two popular e-Games, "MITA" and "Speech Blubs", comparing the usability issues identified using eGLD with those identified using the Nielsen and GUESS (Game User Experience Satisfaction Scale) guidelines. The experiment revealed that eGLD detected a greater number of usability issues, including critical ones, demonstrating its potential effectiveness in assessing and enhancing the usability of e-Games for children with language delay. Based on this validation, the guidelines were refined, and a second round of validation is planned to further ensure their reliability and applicability.
(This article belongs to the Special Issue Video Games: Learning, Emotions, and Motivation)

13 pages, 844 KiB  
Article
Individual Variability in Cognitive Engagement and Performance Adaptation During Virtual Reality Interaction: A Comparative EEG Study of Autistic and Neurotypical Individuals
by Aulia Hening Darmasti, Raphael Zender, Agnes Sianipar and Niels Pinkwart
Multimodal Technol. Interact. 2025, 9(7), 67; https://doi.org/10.3390/mti9070067 - 1 Jul 2025
Viewed by 220
Abstract
Many studies have recognized that individual variability shapes user experience in virtual reality (VR), yet little is known about how these differences influence objective cognitive engagement and performance outcomes. This study investigates how cognitive factors (IQ, age) and technological familiarity (tech enthusiasm, tech fluency, first-time VR experience) influence EEG-derived cognitive responses (alpha and theta activity) and task performance (trial duration) during VR interactions. Sixteen autistic and sixteen neurotypical participants engaged with various VR interactions while their neural activity was recorded using a Muse S EEG device. Correlational analyses showed distinct group-specific patterns: higher IQ correlated with elevated average alpha and theta power in autistic participants, while tech fluency significantly influenced performance outcomes only in the neurotypical group. Prior VR experience correlated with better performance in the neurotypical group but slower adaptation in the autistic group. These results highlight the role of individual variability in shaping VR engagement and underscore the importance of personalized design approaches. This work provides foundational insights toward advancing inclusive, user-centered VR systems.

26 pages, 2873 KiB  
Article
Interactive Content Retrieval in Egocentric Videos Based on Vague Semantic Queries
by Linda Ablaoui, Wilson Estecio Marcilio-Jr, Lai Xing Ng, Christophe Jouffrais and Christophe Hurter
Multimodal Technol. Interact. 2025, 9(7), 66; https://doi.org/10.3390/mti9070066 - 30 Jun 2025
Viewed by 284
Abstract
Retrieving specific, often instantaneous, content from hours-long egocentric video footage based on hazily remembered details is challenging. Vision–language models (VLMs) have been employed to enable zero-shot, text-based content retrieval from videos. However, they fall short when the textual query contains ambiguous terms or when users fail to specify their queries sufficiently, leading to vague semantic queries. Such queries can refer to several different video moments, not all of which are relevant, making it harder to pinpoint the desired content. We investigate the requirements for an egocentric video content retrieval framework that helps users handle vague queries. First, we narrow down the factors behind vague query formulation and limit them to ambiguity and incompleteness. Second, we propose a zero-shot, user-centered video content retrieval framework that leverages a VLM to provide video data and query representations that users can incrementally combine to refine their queries. Third, we compare our proposed framework to a baseline video player and analyze user strategies for answering vague video content retrieval scenarios in an experimental study. We report that both frameworks perform similarly, that users favor our proposed framework, and that, in terms of navigation strategies, users value classic interactions when initiating their search and rely on the abstract semantic video representation to refine the resulting moments.

35 pages, 1412 KiB  
Article
AI Chatbots in Philology: A User Experience Case Study of Conversational Interfaces for Content Creation and Instruction
by Nikolaos Pellas
Multimodal Technol. Interact. 2025, 9(7), 65; https://doi.org/10.3390/mti9070065 - 27 Jun 2025
Viewed by 224
Abstract
A persistent challenge in training future philology educators is engaging students in deep textual analysis across historical periods—especially in large classes where limited resources, feedback, and assessment tools hinder the teaching of complex linguistic and contextual features. These constraints often lead to superficial learning, decreased motivation, and inequitable outcomes, particularly when traditional methods lack interactive and scalable support. As digital technologies evolve, there is increasing interest in how Artificial Intelligence (AI) can address such instructional gaps. This study explores the potential of conversational AI chatbots to provide scalable, pedagogically grounded support in philology education. In a mixed-methods case study, twenty-six (n = 26) undergraduate students completed structured tasks using one of three AI chatbots (ChatGPT, Gemini, or DeepSeek). Quantitative and qualitative data were collected via usability scales, AI literacy surveys, and semi-structured interviews. The results showed strong usability across all platforms, with DeepSeek rated highest in intuitiveness. Students reported confidence in using AI for efficiency and decision-making but desired greater support in evaluating multiple AI-generated outputs. The AI-enhanced environment promoted motivation, autonomy, and conceptual understanding, despite some onboarding and clarity challenges. Implications include reducing instructor workload, enhancing student-centered learning, and informing curriculum development in philology, particularly for instructional designers and educational technologists.