Search Results (534)

Search Parameters:
Keywords = virtual reality learning environments

24 pages, 396 KB  
Review
Adaptive Architectures for Gamified Learning in Software Engineering: A Systematic Review
by Aurora Annamaria Quartulli, Giovanni Mignogna, Vera Zizzo and Marina Mongiello
Computers 2026, 15(4), 235; https://doi.org/10.3390/computers15040235 - 9 Apr 2026
Abstract
Effective software engineering education today requires tools that adapt to individual learner proficiency and progress, while ensuring positive student engagement. Gamified platforms represent an effective approach to learning and maintaining motivation, but their efficacy depends on a robust underlying architecture. This systematic literature review analyzes state-of-the-art artificial intelligence (AI)-based adaptive architectures designed to support gamified learning tools, highlighting their architectural models (such as intelligent tutoring systems, multi-agent systems, and immersive virtual reality/augmented reality environments), adaptation mechanisms (including Generative AI and chatbots), and personalization strategies. A significant focus is placed on Process Mining and Learning Analytics as methodological approaches to organize learning paths and guide dynamic adaptation based on student behavior. The results of the selected studies demonstrate advantages such as increased engagement, longer-term participation, and personalized learning pace. However, challenges remain, such as common assessment criteria, integrating different technologies, and system scalability. The findings offer concrete insights for designing the next generation of effective gamified learning tools, based on data and software engineering processes. Full article
16 pages, 303 KB  
Article
Virtual Reality and the Sense of Belonging Among Distance Learners: A Study on Peer Relationships in Higher Education
by David Košatka, Alžběta Šašinková, Markéta Košatková, Tomáš Hunčík and Čeněk Šašinka
Virtual Worlds 2026, 5(2), 17; https://doi.org/10.3390/virtualworlds5020017 - 9 Apr 2026
Abstract
Distance learners in higher education are often assumed to face limited peer interaction, potentially weakening their sense of belonging. This study examines peer relationships and belonging among students in distance and blended university programs, with attention to the role of virtual reality (VR) within digitally mediated learning environments. Immersive VR teaching is included in the curriculum for distance learning students in the studied programs. Using a mixed-methods design, survey data and open-ended responses were collected from 17 students in Information Studies and Information Service Design. An adapted Classroom Community Scale was supplemented with items addressing the perceived contribution of different communication technologies. Contrary to expectations, fully distance learners did not report weaker agreement with statements reflecting belonging than blended students; on several items, they expressed stronger agreement, particularly regarding perceived peer support and learning opportunities. Results indicate that conventional 2D communication tools, particularly chats and video calls, are central to sustaining peer relationships. VR was not perceived as essential but described by some students as an added value supporting shared experience and group cohesion. Overall, belonging emerges as a socio-technical achievement shaped by communication practices rather than physical proximity. Full article
14 pages, 1329 KB  
Article
Differential Effects of Desktop and Immersive Virtual Reality on Learning, Cognitive Load and Attitudes of University Students
by Julio Cabero-Almenara, Mª Victoria Fernández-Scagliusi, Antonio Palacios-Rodríguez and Rocío Piñero-Virué
Appl. Sci. 2026, 16(7), 3595; https://doi.org/10.3390/app16073595 - 7 Apr 2026
Abstract
Virtual reality (VR) has emerged as a technology with growing presence in education, driven by its potential to increase motivation, promote learning, and offer immersive experiences that are challenging to replicate in traditional settings. However, the literature shows contradictory results regarding its impact on academic performance, cognitive load, and student attitudes, particularly when comparing immersive and non-immersive (desktop) modalities. Against this backdrop, this study aimed to examine whether interaction with VR-based learning objects improves knowledge acquisition, whether differences exist between immersive and desktop versions, what cognitive load is associated with each modality, and what attitudes students develop toward VR. A total of 136 Education students participated, randomly assigned to either the immersive (n = 70) or non-immersive (n = 66) condition, following a pretest–posttest experimental design. Data were collected using a performance test, the NASA-TLX questionnaire, and a semantic differential scale. Results indicated significant improvements in learning across both modalities with no statistically significant differences between them, a slightly higher—yet low-to-moderate—cognitive load in the immersive condition, and highly positive attitudes in both groups. These findings suggest that both modalities are effective and well accepted, although immersive VR requires somewhat greater cognitive effort. The discussion highlights the need to clarify the factors that moderate these effects and to advance theoretical frameworks for instructional design in VR environments. Full article
(This article belongs to the Special Issue Advanced Technologies Applied in Digital Media Era)

13 pages, 428 KB  
Study Protocol
Work at Heights Training: Conventional Approach with and Without Immersive Virtual Reality Study Protocol
by Diana Guerrero-Jaramillo, Ricardo de la Caridad Montero and Oscar Campo
Methods Protoc. 2026, 9(2), 55; https://doi.org/10.3390/mps9020055 - 1 Apr 2026
Abstract
Background: Work at heights is a high-risk occupational activity, with falls being a leading cause of fatal accidents in construction and industrial maintenance. Conventional safety training often does not fully prepare workers for real-world hazards. Immersive virtual reality (IVR) has emerged as a promising training tool, providing controlled and realistic simulations of hazardous scenarios. This hypothesis-generating pilot study evaluates the feasibility and effectiveness of IVR in enhancing practical skills, safety perception, and physiological responses during work-at-height training. Methods: This controlled trial will recruit first-time trainees from the National Learning Service (SENA) of Colombia. Participants will be assigned to an intervention group, receiving IVR training before field-based practical sessions, or a control group, receiving standard theoretical instruction. Outcomes include practical skill acquisition, ergonomic risk, cognitive performance, and physiological responses, including heart rate variability measured with validated devices. Assessments will be performed using standardized tools, and data will be analyzed with repeated-measures ANOVA and regression models to compare groups. Conclusions: By integrating practical, cognitive, ergonomic, and physiological measures, this study will provide evidence on whether IVR improves the effectiveness of work-at-height training beyond conventional methods. Findings may inform future strategies to enhance occupational safety training in high-risk work environments. Full article
(This article belongs to the Section Public Health Research)

21 pages, 1225 KB  
Article
Virtual Museums and Active Learning: Evidence from a Technology-Mediated Intervention
by Chenglin Yang, Shujing Jiang, Guangyuan Yao, Chi-kin Lam, Tao Tan and Yue Sun
Future Internet 2026, 18(4), 186; https://doi.org/10.3390/fi18040186 - 1 Apr 2026
Abstract
The integration of virtual museums into education has emerged as an innovative approach embraced by both teachers and learners, reflecting the broader impact of virtual reality (VR) applications in education. This study puts forward a pedagogical framework for utilizing virtual museums in teaching art history and investigating their impact on the art history curriculum. In this context, two free online museums are used as teaching materials, representing 3D interactive learning environments that enable immersive exploration of cultural heritage. Grounded in the Theory of Technology-Mediated Learning, this research adopts a hybrid methodological approach to track the art history courses of 75 Chinese undergraduates through experiments, questionnaires, and structured interviews over a four-week period. The findings demonstrate that virtual museum-integrated instruction significantly enhances learning effectiveness over sustained use, actively promotes learner engagement, and fosters greater autonomy. Importantly, learners prioritize educational value and authenticity in virtual museum features, while also expressing a strong preference for technologically mature platforms. This research contributes to understanding the impact of VR on digital transformation in the educational sector by providing a validated instructional model that integrates virtual museums into art history curricula, offering educators a replicable framework for implementation. Future studies should investigate the relationship between emotional engagement and academic performance within virtual museums to further refine both pedagogical strategies and educational virtual reality design. Full article

25 pages, 3662 KB  
Article
Evaluating the Perception, Understanding, and Forgetting of Progressive Neural Networks: A Quantitative and Qualitative Analysis
by Lucía Güitta-López, Jaime Boal and Álvaro J. López-López
AI 2026, 7(4), 120; https://doi.org/10.3390/ai7040120 - 31 Mar 2026
Abstract
The use of virtual environments to collect the experience required by deep reinforcement learning models is accelerating the deployment of these algorithms in industrial environments. However, once the experience-gathering problem is solved, it is necessary to address how to efficiently transfer the knowledge from the virtual scenario to reality. This paper focuses on examining Progressive Neural Networks (PNNs) as a promising transfer learning technique. The analyses carried out range from studying the capabilities and limits of the layers responsible for learning the state representation from a pixel space, which could arguably be the convolutional blocks, to the forgetting agents suffer when learning a new task. Introducing controlled visual changes in the environment scene can lead to a performance degradation of 50.3% in the worst-case scenario. These visual discrepancies significantly impact the agent’s learning time and accuracy when using a PNN architecture. Regarding the PNN forgetting assessment, partial forgetting occurs in two of the three environments analyzed, those where the agent masters its new task. This could be due to a balance between the relevance of the new features learned and the ones inherited from the teacher agent. Full article

14 pages, 466 KB  
Review
Fidelity, Virtual Human Assistants, and Engagement in Immersive Virtual Learning Environments: The Role of Temporal Functional Fidelity
by Thomas Gaudi, Bill Kapralos and Alvaro Quevedo
Encyclopedia 2026, 6(4), 77; https://doi.org/10.3390/encyclopedia6040077 - 30 Mar 2026
Abstract
Advances in consumer virtual reality (VR) and artificial intelligence (AI) have accelerated the use of immersive virtual learning environments (iVLEs) for skills training. Learner engagement is a critical determinant of training effectiveness, which can be shaped by VR system features (e.g., visual, auditory, and tactile immersion) coupled with interaction mechanics and instructional design integrated with the instructional behaviors of virtual human assistants (VHAs). Although visual and behavioral fidelity in VHAs have been extensively studied, functional fidelity (i.e., the extent to which the iVLE and/or VHAs support cognitive, perceptual, and motor processes required to perform a task regardless of visual realism), and particularly the temporal alignment of instructional guidance with learners’ cognitive and motor demands, remains underexamined. This article highlights research on VHAs in iVLEs with a special emphasis on temporal functional fidelity as an emerging requirement for synchronizing instructional support with user workload and task phases. By consolidating existing findings and highlighting gaps in current empirical work, this article outlines key implications for the design and evaluation of VHAs and identifies directions for future research aimed at optimizing instructional timing in iVLEs. The goal is to inform principled VHA design and clarify how fidelity dimensions should be integrated to support effective, pedagogically grounded immersive learning experiences. Full article
(This article belongs to the Section Mathematics & Computer Science)

14 pages, 544 KB  
Article
Immersion Matters: User Experience in Educational Virtual Tours Based on 360° Images and 3D Models
by Ángel López-Ramos, Jose Luis Saorín, Dámari Melian-Díaz, Alejandro Bonnet-de-León and Cecile Meier
Appl. Sci. 2026, 16(7), 3270; https://doi.org/10.3390/app16073270 - 27 Mar 2026
Abstract
Virtual tours are increasingly used in education, particularly when access to real environments is limited. This study examined how display mode and representation format affect subjective user experience in an educational virtual tour of a hospital operating room. A within-subject 2 × 2 design compared two representation formats (360° photographs vs. 3D models) and two display modes (desktop PC vs. immersive virtual reality using Meta Quest 2). Eighty-four university students completed the four visualization conditions and evaluated each experience using an adapted version of the QUXiVE questionnaire. Descriptive statistics and internal consistency indices were calculated, and each questionnaire dimension was analyzed using a two-way repeated-measures ANOVA with display mode and representation format as within-subject factors. A significant main effect of display mode was found for presence, engagement, immersion, flow, emotion, judgment, physical consequences, and perceived educational usefulness (all p < 0.001), but not for usability (p = 0.273). A significant main effect of representation format was observed for presence (p = 0.003), emotion (p = 0.018), and perceived educational usefulness (p = 0.015), whereas no significant interaction effects were found. These findings indicate that immersive VR had the strongest and most consistent effect on subjective user experience across both 360° and 3D virtual tours, although it was also associated with higher physical-consequence scores. By contrast, the effect of representation format was more limited. Overall, both approaches appear to be complementary educational resources, depending on pedagogical goals, available infrastructure, and desired levels of interactivity. Full article
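The 2 × 2 within-subject comparison above can be sketched numerically. The snippet below uses synthetic presence ratings (not the study's data) and relies on the fact that, for a two-level within-subject factor, the repeated-measures ANOVA main-effect F equals the squared paired-samples t computed on participant means collapsed over the other factor. It is a simplified illustration of one main effect, not the authors' full two-way analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 84  # participants, as in the study

# Hypothetical presence ratings for the four within-subject conditions
# (desktop/VR x 360-degree/3D); values are synthetic, not the study's data.
desktop_360 = rng.normal(3.2, 0.6, n)
desktop_3d = rng.normal(3.3, 0.6, n)
vr_360 = rng.normal(4.1, 0.6, n)
vr_3d = rng.normal(4.3, 0.6, n)

# Main effect of display mode: collapse over representation format, then
# run a paired test per participant. For a two-level within-subject
# factor, the repeated-measures ANOVA F equals this t statistic squared.
desktop_mean = (desktop_360 + desktop_3d) / 2
vr_mean = (vr_360 + vr_3d) / 2
t, p = stats.ttest_rel(vr_mean, desktop_mean)
print(f"display-mode main effect: F = {t**2:.2f}, p = {p:.4f}")
```

The same collapse-and-pair step, applied over display mode instead, would test the representation-format main effect; the interaction requires the full two-way model.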

12 pages, 1274 KB  
Article
Cultural Knowledge Presentation of Salah Lanna Within the Context of Buddhist Art: Expressed Through Stone Buddha Statues via Virtual Reality
by Phichete Julrode and Piyapat Jarusawat
Information 2026, 17(4), 312; https://doi.org/10.3390/info17040312 - 24 Mar 2026
Abstract
The traditional craft of Buddha statue carving represents an important form of cultural heritage in many Asian societies, yet the transmission of this knowledge is increasingly threatened by modernization and the declining number of skilled artisans. This study explores the use of Virtual Reality (VR) as an innovative tool for preserving and teaching the cultural knowledge associated with Salah Lanna stone Buddha carving. A VR-based learning environment was developed to simulate traditional carving techniques, tools, and cultural narratives related to Lanna Buddhist art. The system was designed using Unity 3D and integrated hand-tracking interaction to enable immersive practice of carving procedures. The prototype was evaluated through expert review involving ten specialists in Buddha carving, art education, and VR technology. The evaluation assessed five dimensions: usability, authenticity, cultural relevance, immersion, and perceived learning potential. The expert evaluations rated the system as highly effective, with average scores of 4.6 for usability, 4.8 for authenticity, 4.7 for cultural relevance, 4.5 for immersion, and 4.9 for perceived learning potential on a five-point scale. The findings suggest that VR technology can provide a promising platform for preserving traditional craftsmanship and supporting immersive cultural learning. By integrating technical training with cultural narratives, the system demonstrates potential for enhancing access to traditional craft education while contributing to the digital preservation of Salah Lanna cultural heritage. Full article
(This article belongs to the Special Issue Advances in Extended Reality Technologies for User Experience Design)

23 pages, 5784 KB  
Article
Learning Italian Hand Gesture Culture Through an Automatic Gesture Recognition Approach
by Chiara Innocente, Giorgio Di Pisa, Irene Lionetti, Andrea Mamoli, Manuela Vitulano, Giorgia Marullo, Simone Maffei, Enrico Vezzetti and Luca Ulrich
Future Internet 2026, 18(4), 177; https://doi.org/10.3390/fi18040177 - 24 Mar 2026
Abstract
Italian hand gestures constitute a distinctive and widely recognized form of nonverbal communication, deeply embedded in everyday interaction and cultural identity. Despite their prominence, these gestures are rarely formalized or systematically taught, posing challenges for foreign speakers and visitors seeking to interpret their meaning and pragmatic use. Moreover, their ephemeral and embodied nature complicates traditional preservation and transmission approaches, positioning them within the broader domain of intangible cultural heritage. This paper introduces a machine learning–based framework for recognizing iconic Italian hand gestures, designed to support cultural learning and engagement among foreign speakers and visitors. The approach combines RGB–D sensing with depth-enhanced geometric feature extraction, employing interpretable classification models trained on a purpose-built dataset. The recognition system is integrated into a non-immersive virtual reality application simulating an interactive digital totem conceived for public arrival spaces, providing tutorial content, real-time gesture recognition, and immediate feedback within a playful and accessible learning environment. Three supervised machine learning pipelines were evaluated, and Random Forest achieved the best overall performance. Its integration with an Isolation Forest module was further considered for deployment, achieving a macro-averaged accuracy and F1-score of 0.82 under a 5-fold cross-validation protocol. An experimental user study was conducted with 25 subjects to evaluate the proposed interactive system in terms of usability, user engagement, and learning effectiveness, obtaining favorable results and demonstrating its potential as a practical tool for cultural education and intercultural communication. Full article
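The evaluation protocol above (Random Forest under 5-fold cross-validation with a macro-averaged F1, plus an Isolation Forest module for rejecting out-of-vocabulary inputs) can be sketched with scikit-learn. The data, feature dimensionality, and class count below are invented placeholders; this is an illustrative sketch, not the paper's implementation.

```python
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for depth-enhanced geometric hand features:
# 300 samples, 20 features, 6 hypothetical gesture classes.
X = rng.normal(size=(300, 20))
y = rng.integers(0, 6, size=300)
X[np.arange(300), y] += 2.0  # make the classes separable

# Macro-averaged F1 under 5-fold cross-validation, as in the paper.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="f1_macro")
print("macro-F1 per fold:", scores.round(2))

# Isolation Forest as an out-of-vocabulary gate: inputs flagged as
# outliers (-1) are rejected before gesture classification.
gate = IsolationForest(random_state=0).fit(X)
clf.fit(X, y)
sample = rng.normal(size=(1, 20))
label = clf.predict(sample)[0] if gate.predict(sample)[0] == 1 else None
print("prediction:", label)
```

Gating with a separate outlier detector lets the classifier abstain on gestures outside the training vocabulary instead of forcing a nearest-class guess.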

35 pages, 4820 KB  
Article
Comparing Learning Outcomes of Indigenous and Non-Indigenous Students Using a VR360 and Virtual Drone System for Thao Indigenous Culture and Environmental Education
by Wernhuar Tarng, Bin-Yu Lee and Tsu-Jen Ding
Electronics 2026, 15(6), 1315; https://doi.org/10.3390/electronics15061315 - 21 Mar 2026
Abstract
Indigenous cultures in Taiwan embody rich ecological knowledge and strong environmental conservation values. However, elementary and secondary education often provides limited exposure to these cultures due to geographic constraints and insufficient instructional resources, relying primarily on textbooks and teacher-centered teaching methods. Such approaches restrict experiential learning, which may diminish students’ motivation and depth of understanding. However, 360-degree virtual reality (VR360) enables immersive simulations of authentic environments, increasing the accessibility of cultural and ecological education through smartphones and low-cost Google Cardboard. In addition, drone technology enhances learning by offering multiple perspectives for environmental exploration and data collection. This study examines the effectiveness of integrating a VR360 and virtual drone system into instruction focused on the ecological context of Sun Moon Lake and Thao Indigenous culture. Learning outcomes for Indigenous and non-Indigenous students were compared in terms of learning effectiveness, motivation, cognitive load, and technology acceptance. Ecological and cultural materials were collected through field investigations and drone photography, enabling students to explore landscapes from a first-person perspective and engage with Thao cultural practices and their relationship with local ecology. The findings indicate that the proposed VR-based system significantly enhances learning experiences and demonstrates strong potential for cultural and ecological education, offering valuable guidance for the design of future immersive instructional strategies and learning materials related to Indigenous cultures. Full article
(This article belongs to the Special Issue Advances in AI-Augmented E-Learning for Smart Cities)

13 pages, 1027 KB  
Article
Predicting Cybersickness in Virtual Reality from Head–Torso Kinematics Using a Hybrid Convolutional–Recurrent Network Model
by Ala Hag, Houshyar Asadi, Mohammad Reza Chalak Qazani, Thuong Hoang, Ambarish Kulkarni, Stefan Greuter and Saeid Nahavandi
Computers 2026, 15(3), 193; https://doi.org/10.3390/computers15030193 - 17 Mar 2026
Abstract
Motion sickness (MS) is a prevalent condition that can significantly degrade user comfort and immersion, particularly in virtual reality (VR) environments. Accurate prediction models are essential for early detection and mitigation of MS symptoms, thereby improving the overall VR experience. Most existing approaches rely on bio-physiological data acquired through body-mounted sensors, which may restrict user mobility and diminish immersion. This study proposes a less intrusive alternative, leveraging head and torso kinematic data for MS prediction. We introduce a hybrid Convolutional–Recurrent Neural Network (C-RNN) designed to capture both spatial and temporal features for enhanced classification accuracy. Using a dataset of 40 participants, the proposed C-RNN outperformed traditional machine learning models—including Support Vector Machines (SVMs), k-Nearest Neighbors (KNN), Decision Trees (DT), and a baseline Recurrent Neural Network (RNN)—across multiple evaluation metrics. The C-RNN achieved 85.63% accuracy, surpassing SVM (60%), KNN (73.75%), DT (74.38%), and RNN (81.88%), with corresponding gains in precision, recall, F1-score, and ROC AUC. These results demonstrate that head–torso motion patterns provide sufficient predictive signal for accurate MS detection, offering a non-intrusive, efficient alternative to physiological sensing that supports improved comfort and sustained immersion in VR. Full article
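The classical baselines the study compares against (SVM, KNN, and Decision Tree) can be sketched as follows. The synthetic kinematic windows, channel layout, and injected class signal are assumptions for illustration; the paper's actual C-RNN architecture is not reproduced here, only the point that classical models require flattening the windows that a convolutional-recurrent model would consume directly.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

# Synthetic head-torso kinematic windows: 400 samples x 50 time steps
# x 6 channels (e.g., angular velocity and acceleration), with binary
# sick/not-sick labels. Purely illustrative, not the study's data.
X = rng.normal(size=(400, 50, 6))
y = rng.integers(0, 2, size=400)
X[y == 1] += 0.4  # inject a weak class signal

# Classical baselines need flat feature vectors, so each window is
# flattened; the paper's C-RNN instead learns spatio-temporal features.
X_flat = X.reshape(len(X), -1)
X_tr, X_te, y_tr, y_te = train_test_split(
    X_flat, y, random_state=0, stratify=y)

results = {}
for name, model in [("SVM", SVC()), ("KNN", KNeighborsClassifier()),
                    ("DT", DecisionTreeClassifier(random_state=0))]:
    results[name] = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name} accuracy: {results[name]:.2f}")
```
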
(This article belongs to the Special Issue Innovative Research in Human–Computer Interactions)

28 pages, 3274 KB  
Review
The Physiological and Psychological Effects of the Built Environment: Research Progress and Implications
by Mengren Deng, Wenxin Jin, Haoxu Guo, Xinyan Chen, Yufei Wang, Longchi Xu and Weiqiang Zhou
Buildings 2026, 16(6), 1144; https://doi.org/10.3390/buildings16061144 - 13 Mar 2026
Abstract
With accelerating urbanization and a global emphasis on quality of life, the effects of the built environment on individual physiological and psychological well-being have become a critical research focus. However, existing studies remain fragmented in terms of theoretical perspectives, spatial scales, and methodological approaches, and a comprehensive synthesis of the physiological and psychological effects of the built environment is still lacking. This review adopts an interdisciplinary approach, integrating architecture, urban planning, landscape architecture, geography, and psychology to systematically review the literature on the health impacts of the built environment. Its findings indicate that the scope of the built environment has expanded from natural settings to residential areas, streets, and public spaces. Research scales have progressed from macro-level districts to streets and public spaces and further to micro-level physical environments. The impacts have extended from emotional responses to broader health and well-being outcomes, with increasing attention being given to specific population groups. Technological advances have shifted research paradigms from traditional surveys to approaches incorporating big data, machine learning, virtual reality, and physiological monitoring, enabling more precise analyses of links between spatial perception and emotional responses. This review identifies gaps in interdisciplinary integration, long-term monitoring, and the consideration of individual differences, highlighting the need for future studies to integrate multimodal data with theory-informed practice to support more human-centered, health-promoting built environments. Full article
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)

39 pages, 7178 KB  
Article
Deep-Learning-Derived Facial Electromyogram Signatures of Emotion in Immersive Virtual Reality (bWell): Exploring the Impact of Emotional, Cognitive, and Physical Demands
by Zohreh H. Meybodi, Francis Thibault, Budhachandra Khundrakpam, Gino De Luca, Jing Zhang, Joshua A. Granek and Nusrat Choudhury
Sensors 2026, 26(6), 1827; https://doi.org/10.3390/s26061827 - 13 Mar 2026
Abstract
Emotional and workload-related states unfold dynamically during immersive virtual reality (VR) experiences, yet reliable physiological modeling in such environments remains challenging. We investigated whether multi-channel facial electromyography (fEMG), combined with spatio-temporal deep learning, can (i) accurately classify calibrated facial expressions across participants and (ii) transfer to spontaneous, task-elicited behavior in immersive VR. Twelve adults completed a calibration phase involving four intentional expressions (smile, frown, raised eyebrow, neutral), followed by VR scenes designed to elicit emotional, cognitive, physical, and dual task demands. After participant-level physiological normalization, a single shared Convolutional Neural Network–Temporal Convolutional Network (CNN–TCN) model was trained and evaluated using leave-one-participant-out (LOPO) validation. The model achieved strong cross-participant performance (Macro-F1 = 0.88 ± 0.13; ROC-AUC = 0.95 ± 0.06). When applied to unlabeled spontaneous VR task-elicited fEMG recordings, the trained model generated continuous expression classes. Derived static and temporal expression features showed scene-dependent modulation and False Discovery Rate (FDR)-surviving associations, primarily with perceived physical demand (NASA-TLX). The observed muscle activation patterns were physiologically plausible and aligned with Facial Action Coding System (FACS)-based interpretations of underlying muscle activity. These findings demonstrate that end-to-end spatio-temporal modeling of raw fEMG enables facial expression sensing in immersive VR using a single shared model following physiological normalization. The proposed framework bridges calibrated expression learning and spontaneous task-elicited behavior, supporting privacy-preserving, continuous and physiologically grounded monitoring in human-centered VR applications. Full article
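The leave-one-participant-out (LOPO) validation above can be sketched with scikit-learn's LeaveOneGroupOut, using participant identifiers as the grouping variable. The features, simple linear classifier, and class structure below are illustrative stand-ins for the paper's CNN-TCN on multi-channel fEMG.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)

# Synthetic stand-in for normalized fEMG feature windows from 12
# participants with four expression classes (smile, frown, raised
# eyebrow, neutral); illustrative only, not the paper's model or data.
n = 12 * 40
X = rng.normal(size=(n, 16))
y = rng.integers(0, 4, size=n)
groups = np.repeat(np.arange(12), 40)  # participant id per window
X[np.arange(n), y] += 2.0  # make the classes separable

# Leave-one-participant-out: each fold holds out every window from one
# participant, so scores reflect cross-participant generalization.
logo = LeaveOneGroupOut()
f1s = []
for train_idx, test_idx in logo.split(X, y, groups):
    model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    f1s.append(f1_score(y[test_idx], model.predict(X[test_idx]),
                        average="macro"))
print(f"LOPO macro-F1: {np.mean(f1s):.2f} +/- {np.std(f1s):.2f}")
```

Grouping by participant matters because windows from one person are correlated; a plain shuffled split would leak participant-specific signal into the test folds and overstate cross-participant performance.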
(This article belongs to the Special Issue Emotion Recognition Based on Sensors (3rd Edition))
22 pages, 1747 KB  
Review
Talking Head Generation Through Generative Models and Cross-Modal Synthesis Techniques
by Hira Nisar, Salman Masood, Zaki Malik and Adnan Abid
J. Imaging 2026, 12(3), 119; https://doi.org/10.3390/jimaging12030119 - 10 Mar 2026
Abstract
Talking Head Generation (THG) is a rapidly advancing field at the intersection of computer vision, deep learning, and speech synthesis, enabling the creation of animated human-like heads that can produce speech and express emotions with high visual realism. The core objective of THG [...] Read more.
Talking Head Generation (THG) is a rapidly advancing field at the intersection of computer vision, deep learning, and speech synthesis, enabling the creation of animated human-like heads that can produce speech and express emotions with high visual realism. The core objective of THG systems is to synthesize coherent and natural audio–visual outputs by modeling the intricate relationship between speech signals, facial dynamics, and emotional cues. These systems find widespread applications in virtual assistants, interactive avatars, video dubbing for multilingual content, educational technologies, and immersive virtual and augmented reality environments. Moreover, the development of THG has significant implications for accessibility technologies, cultural preservation, and remote healthcare interfaces. This survey paper presents a comprehensive and systematic overview of the technological landscape of Talking Head Generation. We begin by outlining the foundational methodologies that underpin the synthesis process, including generative adversarial networks (GANs), motion-aware recurrent architectures, and attention-based models. A taxonomy is introduced to organize the diverse approaches based on the nature of input modalities and generation goals. We further examine the contributions of various domains such as computer vision, speech processing, and human–robot interaction, each of which plays a critical role in advancing the capabilities of THG systems. The paper also provides a detailed review of datasets used for training and evaluating THG models, highlighting their coverage, structure, and relevance. In parallel, we analyze widely adopted evaluation metrics, categorized by their focus on image quality, motion accuracy, synchronization, and semantic fidelity. Operating parameters such as latency, frame rate, resolution, and real-time capability are also discussed to assess deployment feasibility. Special emphasis is placed on the integration of generative artificial intelligence (GenAI), which has significantly enhanced the adaptability and realism of talking head systems through more powerful and generalizable learning frameworks. Full article