Search Results (178)

Search Parameters:
Keywords = VR headsets

11 pages, 692 KB  
Brief Report
A Gamified Virtual Reality Escape Room as a Tool for Teaching Cardiac Anatomy: A Feasibility Study
by Haley Morgan, Carolyn A. Meyer, Chad M. Eitel, Kenneth R. Ivie, Heather Hall and Tod R. Clapp
Virtual Worlds 2026, 5(2), 21; https://doi.org/10.3390/virtualworlds5020021 - 11 May 2026
Viewed by 178
Abstract
Gamification, defined as the application of game elements in non-gaming contexts, has emerged as a promising tool for enhancing student engagement in content-heavy curricula such as anatomy and physiology. This preliminary study describes the development of a virtual reality (VR) cardiac anatomy escape room and provides initial data on student engagement and confidence with learning objectives. Participants were recruited from Colorado State University following completion of a cadaveric anatomy course. The heart-themed escape room was developed using Unity 6000.1.7f1 and deployed on Meta Quest 3 headsets, featuring seven puzzle stations that generated cardiac structures upon successful completion. Players then assembled a complete heart model within a set time. Results showed high engagement and a sense of accomplishment, with students reporting an improved ability to visualize cardiac structures and enjoyment in testing their anatomical knowledge. All participants reported that they felt confident with the content following completion of the escape room. While VR has been successfully incorporated into curricula, VR escape rooms have the potential to serve as an engaging and fun supplementary learning tool for students. These findings suggest that virtual reality implementation can enhance anatomy education through immersive gamified learning environments. Full article

28 pages, 12791 KB  
Article
Empirical Validation of Fitts’ Law in Virtual Reality: Modeling, Prediction, and Modality Comparison
by Nikolina Rodin, Dario Ogrizović, Luka Batistić and Sandi Ljubic
Multimodal Technol. Interact. 2026, 10(5), 49; https://doi.org/10.3390/mti10050049 - 1 May 2026
Viewed by 262
Abstract
Fitts’ law is a foundational model for predicting pointing performance and has been increasingly explored in immersive virtual reality (VR) environments. This paper presents a controlled experimental framework for deriving modality-specific Fitts’ law models in VR and evaluating their predictive transfer to applied interaction tasks. The framework comprises two scenarios. The first replicates a standardized ISO 9241 pointing task in a 3D virtual environment to derive predictive movement time models by systematically varying target distance (20–50 cm), target size (2.5–5 cm), and spatial configuration (0°, 45°, 90°, 135°). The second simulates an applied warehouse-inspired task involving tool sorting and structured placement actions to evaluate the generalizability of the derived models in more ecologically valid VR interactions. Thirty-two participants completed all tasks using the Meta Quest 3 headset and two interaction modalities: a handheld controller and hand tracking with gesture recognition. Results show that Fitts’ law remains a strong predictor of movement time for 3D pointing in VR, with high linear fits for both the controller (R² = 0.9615) and hand tracking (R² = 0.9668). However, models derived from standardized pointing tasks showed limited transferability to applied object-manipulation scenarios, producing prediction errors of approximately 27–35% and systematically underestimating movement times. Additionally, both objective metrics and subjective evaluations indicated that controller-based interaction outperformed hand tracking in efficiency, accuracy, perceived workload, and usability. These findings highlight both the robustness and limitations of Fitts-based performance modeling in realistic VR interaction contexts. Full article
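The Fitts’ law relationship at the core of this study can be sketched as follows. This is an illustrative computation over the paper's stated design space only; the intercept and slope values are hypothetical placeholders, not the authors' fitted coefficients.

```python
import math

# Fitts' law in the Shannon formulation: MT = a + b * ID, with ID = log2(D/W + 1).
# The coefficients a and b below are HYPOTHETICAL, chosen only for illustration.

def index_of_difficulty(distance: float, width: float) -> float:
    """Index of difficulty in bits for target distance D and target width W."""
    return math.log2(distance / width + 1)

def predict_movement_time(distance: float, width: float,
                          a: float = 0.2, b: float = 0.25) -> float:
    """Predicted movement time in seconds: MT = a + b * ID."""
    return a + b * index_of_difficulty(distance, width)

# Corner cases of the paper's design space: D in 20-50 cm, W in 2.5-5 cm.
for d, w in [(20.0, 5.0), (50.0, 5.0), (50.0, 2.5)]:
    print(f"D={d:.0f} cm, W={w:.1f} cm: "
          f"ID={index_of_difficulty(d, w):.2f} bits, "
          f"MT={predict_movement_time(d, w):.2f} s")
```

Fitting the real `a` and `b` for a modality is then an ordinary linear regression of observed movement times against ID, which is how the R² values in the abstract arise.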

17 pages, 768 KB  
Article
Virtual Reality Technology Reduces Pain and Anxiety in Hospitalized Pediatric Patients Undergoing Peripheral Venous Catheterization: A Randomized Controlled Trial
by Jiao Yu, Qiqi Cheng, Min Luo, Huidan Yu and Suqing Wang
Children 2026, 13(4), 509; https://doi.org/10.3390/children13040509 - 5 Apr 2026
Viewed by 575
Abstract
Objective: To investigate the effects of virtual reality (VR) technology on pain and anxiety in hospitalized pediatric patients undergoing peripheral venous catheterization. Methods: This study is a randomized controlled trial (RCT). Between July and December 2024, eligible pediatric inpatients aged 5–14 years from the Chegu Branch of Wuhan Union Hospital were randomly assigned to either the experimental group or the control group. The control group received routine care during peripheral venous catheterization, including health education and psychological comfort. The intervention group, in addition to routine care, used VR headsets to watch age-appropriate game videos, with each VR session lasting 10–15 min. The primary outcome measure was patient-reported pain levels, with anxiety as a key secondary outcome. Secondary outcome measures included catheterization time, heart rate, patient satisfaction with nursing procedures, and usability evaluation of the VR equipment. Results: A total of 80 pediatric patients were enrolled, with 40 in the VR group (mean age 8.05 ± 2.60 years) and 40 in the control group (mean age 8.63 ± 2.50 years). Generalized estimating equation (GEE) analysis showed a significant interaction effect between group and time for pain (Wald χ² = 7.091, p = 0.029), while no significant interaction was found for anxiety (Wald χ² = 0.971, p = 0.615). Before peripheral venous catheterization, there was no significant difference in pain and anxiety scores between the two groups of pediatric patients (p > 0.05). Patients in the VR group reported significantly reduced pain (β = −0.78; 95% CI, −1.40 to −0.15; p = 0.015) during catheterization, and overall anxiety scores were also lower in the VR group (β = −0.43; 95% CI, −0.77 to −0.08; p = 0.016), although the group by time interaction for anxiety was not significant. The intervention group also demonstrated a lower peak heart rate (107.67 ± 16.25 beats/min vs. 115.25 ± 29.53 beats/min; p = 0.047) and a shorter procedure duration [110 (100, 120) seconds vs. 120 (110, 123.5) seconds; p < 0.001]. Operator satisfaction with the nursing procedure was also significantly higher in the intervention group (95.0% vs. 72.5%, p < 0.001). Conclusions: VR significantly reduces pain and anxiety in hospitalized pediatric patients during peripheral venous catheterization. Full article
(This article belongs to the Section Pediatric Nursing)

13 pages, 2335 KB  
Article
Virtual Reality Versus Monitor-Based Distraction in Children with Mild Intellectual Disability: A Preliminary Comparative Observational Study
by Antonio Fallea, Simone Treccarichi, Simona L’Episcopo, Massimiliano Bartolone, Luigi Vetri, Mirella Vinci, Raffaele Ferri and Francesco Calì
Children 2026, 13(3), 437; https://doi.org/10.3390/children13030437 - 23 Mar 2026
Viewed by 435
Abstract
Background/Objectives: Dental anxiety represents a significant barrier to oral care in children with neurodevelopmental disorders (NDDs), whose sensory sensitivities and behavioral challenges often complicate clinical management and limit access to treatment. Virtual reality (VR) has emerged as a supportive tool to improve the feasibility of dental procedures in this vulnerable population. This study aims to evaluate whether a VR-based distraction approach could facilitate the completion of dental treatment in children with mild intellectual disability (ID). Methods: A prospective comparative observational study was conducted between February and September 2025 involving 56 children aged 11–15 years with mild ID and moderate dental anxiety (Corah Dental Anxiety Scale, DAS: 9–12). Participants were allocated to two groups of distraction approaches—VR distraction (n = 28) using the Oculus Quest 3® headset or a monitor-based cartoon (n = 28)—according to device availability and to maintain balanced group sizes. The primary outcome was treatment success, defined as completion of the restorative dental procedure under local anesthesia within 50 min. Results: Treatment success was achieved in 78.6% of the VR group versus 46.4% of the monitor group (p = 0.026). The odds of successful treatment were more than four times higher with VR compared to monitor distraction (OR 4.12; 95% CI: 1.16–16.47), with a risk ratio of 2.50 (95% CI: 1.14–5.50). Stratified analysis suggested a stronger effect in females (OR 12.25; 95% CI: 1.27–118.36) than in males (OR 2.56; 95% CI: 0.53–12.43). Conclusions: VR-based distraction significantly improved dental treatment success in children with mild ID compared with conventional distraction. Although gender differences were observed, they should be interpreted with caution due to the small sample size. This work lays the foundation for developing both short- and long-term protocols to facilitate dental treatment management and cooperation in patients with NDDs. Full article
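For readers unfamiliar with the statistics in this abstract, here is a minimal sketch of how a crude odds ratio with a Woolf-type 95% confidence interval is computed from a 2×2 table. The counts below are illustrative only (not reconstructed from the study's data), and the paper's reported estimates may come from exact counts or an adjusted model, so the numbers need not match.

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Crude odds ratio with a Woolf (log-scale) 95% CI for the 2x2 table
    [[a, b], [c, d]] = [[treated success, treated failure],
                        [control success, control failure]]."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Illustrative counts (HYPOTHETICAL): 22/28 successes vs 13/28 successes.
or_, lo, hi = odds_ratio_ci(22, 6, 13, 15)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Wide intervals such as the one reported for the female subgroup (1.27–118.36) are exactly what this formula produces when one cell of the table is very small.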

23 pages, 1318 KB  
Article
An Immersive Virtual Reality Room to Enhance Positive Affect and Engagement in Nursing Home Residents with Neurocognitive and Psychological Disorders: A Feasibility Study
by Malgorzata Klass, Frédérick Dandler, Yaëlle Ducommun, Michel Hanset, Laurence Ruscart, Jean-Christophe Bier, Sandra De Breucker and Jennifer Foucart
Healthcare 2026, 14(5), 588; https://doi.org/10.3390/healthcare14050588 - 26 Feb 2026
Viewed by 829
Abstract
Background/Objectives: Older adults with neurocognitive and psychological disorders are often institutionalized in nursing homes, which negatively affects well-being and mood, and may accelerate cognitive decline. Immersive virtual reality (VR) is a promising non-pharmacological countermeasure, but VR-headset discomfort limits its usability in this population. Therefore, this study examined the tolerability and feasibility of an immersive VR room, which provides customizable interactive environments projected across four walls at 360° and enables shared experiences, to enhance positive affect and engagement in nursing home residents. Methods: Twenty nursing home residents were initially enrolled, and nineteen completed five 10 min sessions in the immersive VR room accompanied by a caregiver. State positive and negative affect were assessed using the visual analogue scale (VAS) and the Observed Emotion Rating Scale (OERS), and participants’ verbal feedback was collected during and after the sessions. Results: VAS scores indicated that VR room immersion was feasible and well-tolerated, with most participants feeling secure and experiencing increased positive affect during and just after the sessions. OERS scores and observations revealed frequent expressions of pleasure, interest, and active engagement with both the VR environments and the caregiver. Participants’ reports valued the enjoyable and relaxing experience provided by immersion in the VR room, noting the realism and aesthetics of the environments and nature-related elements, which allowed them to travel virtually and evoke personal memories. Conclusions: Immersive VR room sessions were well tolerated, enhanced positive affect, and may support cognitive functioning by fostering active engagement and social interaction. Given that this is a feasibility study with a small cohort and short follow-up, the present findings should be considered preliminary and confirmed in larger, controlled, longer-term studies. Full article

20 pages, 1780 KB  
Article
A Comprehensive Eye-Tracking System Toward Large FOV HMD
by Jiafu Lv, Di Zhang, Ke Han, Qi Wu and Sanxing Cao
Sensors 2026, 26(5), 1402; https://doi.org/10.3390/s26051402 - 24 Feb 2026
Viewed by 703
Abstract
Eye tracking in virtual reality (VR) head-mounted displays poses substantial engineering challenges, particularly under immersive display configurations with large fields of view (FOV), where optical layout, illumination, and image acquisition impose nontrivial system constraints. To address these design constraints, we present an integrated near-eye eye-tracking prototype tailored for immersive VR headsets, combining customized hardware components and a real-time software pipeline. The proposed system integrates optimized near-eye illumination and image acquisition with a pupil detection module and a deep learning-based gaze-vector estimation model, forming a real-time software pipeline for stable end-to-end gaze mapping under fixed calibration conditions. Under identical system settings, calibration procedures, and gaze-point mapping conditions, we evaluate the proposed gaze-vector estimation model through a controlled model-level ablation. The attention-enhanced model achieves an average angular deviation of 1.15°, corresponding to a 61.4% relative reduction compared with a baseline ResNet-152 model without attention. To demonstrate the usability of the system outputs at the application level, we further implement a real-time visualization example that integrates pupil diameter, gaze vectors, and blink events to depict the temporal evolution of eye-movement signals. This work provides a cost-effective and reproducible engineering reference for near-eye eye-movement acquisition and visualization in immersive VR settings and serves as a technical foundation for subsequent interaction design or behavioral analysis studies. Full article
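The angular-deviation metric used to evaluate the gaze model above (the angle between an estimated and a reference gaze vector, reported in degrees) can be computed with a small helper. This is a generic sketch, not the authors' evaluation code.

```python
import math

def angular_deviation_deg(v1, v2):
    """Angle in degrees between two 3D gaze vectors of any nonzero length."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    # Clamp to [-1, 1] to guard against floating-point overshoot in acos.
    cos_theta = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_theta))

# A forward gaze vs. a slightly offset estimate deviates by about a degree.
print(f"{angular_deviation_deg((0.0, 0.0, 1.0), (0.02, 0.0, 1.0)):.2f} deg")
```

A model-level accuracy figure like the paper's 1.15° would then be the mean of this quantity over all evaluation samples.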
(This article belongs to the Section Optical Sensors)

14 pages, 2366 KB  
Article
Validating the Performance of VR Headset Eye-Tracking Using Gold Standard Eye-Tracker and MoCap System
by Russell Nathan Todd, Jian Gong, Amy Catherine Banic and Qin Zhu
Information 2026, 17(2), 143; https://doi.org/10.3390/info17020143 - 2 Feb 2026
Viewed by 905
Abstract
The integration of eye-tracking into consumer-grade virtual reality (VR) headsets presents a transformative opportunity for assessing user mental states within simulated, immersive environments. However, the validity of this built-in technology must be established against gold-standard real-world eye-tracking systems. This study employs a novel paradigm using a physically moving object to evaluate the accuracy of dynamic smooth pursuit, a key oculomotor function in mental state assessment. We rigorously validated the performance of the HTC Vive Pro Eye’s integrated eye-tracker against the Tobii Pro Glasses 3 using a high-precision OptiTrack motion capture system as ground-truth for object position. Eight participants completed both 2D and 3D gaze-tracking tasks. In the 2D condition, they tracked a dot on a screen, while in the 3D condition, they tracked a physically moving object. The real-world object trajectories captured by OptiTrack were replicated within a VR environment. Gaze data from both the VR headset and the Tobii glasses were recorded simultaneously and compared to the OptiTrack baseline using Dynamic Time Warping (DTW) to quantify accuracy. Results revealed a task-dependent performance. In the 2D task, the Tobii glasses demonstrated significantly lower DTW distances, indicating superior accuracy. Conversely, in the 3D task, the VR headset significantly outperformed the glasses, showing a closer match to the real object trajectory. This suggests that while traditional eye-trackers excel in constrained 2D contexts, integrated VR eye-tracking is more accurate for naturalistic 3D gaze pursuit. We conclude that VR headset eye-tracking is not only a reliable but also a cost-effective tool for research, particularly offering enhanced performance for studies conducted within immersive 3D simulations. Full article
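Dynamic Time Warping, the measure used above to compare recorded gaze paths against the ground-truth object trajectory, can be sketched with the classic O(n·m) dynamic program. For brevity this version compares 1D sequences with an absolute-difference local cost, whereas the study compared 3D trajectories.

```python
def dtw_distance(s, t):
    """Dynamic Time Warping distance between two 1D sequences,
    using absolute difference as the local cost."""
    n, m = len(s), len(t)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch s
                                 D[i][j - 1],      # stretch t
                                 D[i - 1][j - 1])  # step both
    return D[n][m]

# Same shape, different timing: DTW warps them onto each other at zero cost.
print(dtw_distance([0, 1, 2, 3], [0, 1, 1, 2, 3]))
```

Because DTW absorbs timing differences, a lower DTW distance to the OptiTrack baseline indicates a gaze trace that follows the object's spatial path more closely, which is what the 2D vs. 3D comparison in the abstract quantifies.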

12 pages, 474 KB  
Article
Toward Generalized Emotion Recognition in VR by Bridging Natural and Acted Facial Expressions
by Rahat Rizvi Rahman, Hee Yun Choi, Joonghyo Lim, Go Eun Lee, Seungmoo Lee, Chungyean Cho and Kostadin Damevski
Sensors 2026, 26(3), 845; https://doi.org/10.3390/s26030845 - 28 Jan 2026
Viewed by 584
Abstract
Recognizing emotions accurately in virtual reality (VR) enables adaptive and personalized experiences across gaming, therapy, and other domains. However, most existing facial emotion recognition models rely on acted expressions collected under controlled settings, which differ substantially from the spontaneous and subtle emotions that arise during real VR experiences. To address this challenge, the objective of this study is to develop and evaluate generalizable emotion recognition models that jointly learn from both acted and natural facial expressions in virtual reality. We integrate two complementary datasets collected using the Meta Quest Pro headset, one capturing natural emotional reactions and another containing acted expressions. We evaluate multiple model architectures, including convolutional and domain-adversarial networks, and a mixture-of-experts model that separates natural and acted expressions. Our experiments show that models trained jointly on acted and natural data achieve stronger cross-domain generalization. In particular, the domain-adversarial and mixture-of-experts configurations yield the highest accuracy on natural and mixed-emotion evaluations. Analysis of facial action units (AUs) reveals that natural and acted emotions rely on partially distinct AU patterns, while generalizable models learn a shared representation that integrates salient AUs from both domains. These findings demonstrate that bridging acted and natural expression domains can enable more accurate and robust VR emotion recognition systems. Full article
(This article belongs to the Section Wearables)

27 pages, 6868 KB  
Review
Virtual Reality in Cultural Heritage: A Scientometric Analysis and Review of Long-Term Use and Usability Trends
by Radu Comes and Zsolt Levente Buna
Appl. Sci. 2026, 16(2), 1013; https://doi.org/10.3390/app16021013 - 19 Jan 2026
Cited by 1 | Viewed by 1096
Abstract
The integration of virtual reality (VR) technologies in museums and cultural heritage has expanded rapidly, driven by demand for immersive visitor experiences. Yet comprehensive studies on their long-term sustainability and operational challenges remain scarce. This mixed-methods study combines scientometric analysis of 1635 Web of Science publications (1997–2025) using VOSviewer 1.6.20 with longitudinal evidence from three VR installations deployed by the authors in Romanian museums representing understudied Central/Eastern European contexts. Analysis maps global trends, collaborations, and regional gaps, while practical evaluation addresses durability, usability, maintenance, technological obsolescence, multi-user management, and headset hygiene. Findings reveal VR’s engagement and preservation potential but highlight constraints limiting long-term viability. Strategic planning, adaptive design, and maintenance frameworks emerge as critical for sustainability. Limitations include WoS exclusivity and regional focus, while findings offer actionable insights for diverse institutional contexts. Full article
(This article belongs to the Special Issue Intelligent Interaction in Cultural Heritage)

25 pages, 5130 KB  
Article
Interpretable Biomechanical Feature Selection for VR Exercise Assessment Using SHAP and LDA
by Urszula Czajkowska, Magdalena Żuk, Michał Popek and Celina Pezowicz
Sensors 2026, 26(2), 464; https://doi.org/10.3390/s26020464 - 10 Jan 2026
Cited by 1 | Viewed by 706
Abstract
Virtual reality (VR) technologies are increasingly applied in rehabilitation, offering interactive physical and spatial exercises. A major challenge remains objective human movement quality assessment (HMQA). This study aimed to identify biomechanical features differentiating correct and incorrect execution of a lateral lunge and to determine the minimal number of sensors required for reliable VR-based motion analysis, prioritising interpretability. Thirty-two healthy adults (mean age: 26.4 ± 8.5 years) performed 211 repetitions recorded with the HTC Vive Tracker system (7 sensors + headset). Repetitions were classified by a physiotherapist using video observation and predefined criteria. The analysis included joint angles, angular velocities and accelerations, and Euclidean distances between 28 sensor pairs, evaluated with Linear Discriminant Analysis (LDA) and SHapley Additive exPlanations (SHAP). Angular features achieved higher LDA performance (F1 = 0.89) than distance-based features (F1 = 0.78), which proved more stable and less sensitive to calibration errors. Comparison of SHAP and LDA showed high agreement in identifying key features, including hip flexion, knee rotation acceleration, and spatial relations between headset and foot or shank sensors. The findings indicate that simplified sensor configurations may provide reliable diagnostic information, highlighting opportunities for interpretable VR-based rehabilitation systems in home and clinical settings. Full article
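As a simpler, interpretable stand-in for the LDA/SHAP pipeline described above, individual biomechanical features can be ranked by the Fisher discriminant ratio (between-class separation divided by within-class spread), which is the per-feature quantity LDA builds on. The feature names echo the abstract, but the sample values below are invented purely for illustration.

```python
def fisher_ratio(correct, incorrect):
    """Fisher discriminant ratio for one feature:
    (difference of class means)^2 / (sum of within-class variances)."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    num = (mean(correct) - mean(incorrect)) ** 2
    den = var(correct) + var(incorrect)
    return num / den

# HYPOTHETICAL samples (degrees) for correct vs incorrect repetitions:
features = {
    "hip_flexion":   ([52, 55, 50, 54], [38, 41, 36, 40]),
    "knee_rotation": ([11, 13, 12, 10], [12, 14, 11, 13]),
}
ranking = sorted(features, key=lambda k: fisher_ratio(*features[k]), reverse=True)
print(ranking)  # hip_flexion separates the classes far better in this toy data
```

A high ratio means the feature alone separates correct from incorrect executions well, which is the kind of evidence the study uses to argue for simplified sensor configurations.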

19 pages, 4076 KB  
Article
Enhancing Lecture Interactivity Through Virtual Reality
by Marián Matys, Martin Gašo, Tomáš Balala and Ľuboslav Dulina
Appl. Sci. 2026, 16(2), 711; https://doi.org/10.3390/app16020711 - 9 Jan 2026
Viewed by 492
Abstract
Although conventional lectures can provide a wide range of information to a large group of people, maintaining attention and ensuring knowledge transfer can be a challenge. Therefore, it is important to look for new, engaging, and effective approaches. This pilot feasibility study explores the effectiveness of virtual reality (VR) in increasing student engagement and knowledge transfer during lectures in the field of supply chain logistics and inventory selection systems. An educational VR game was developed through the systematic design of application logic, the creation of 3D assets, the construction of virtual scenes, and the implementation of gameplay. The application simulates three inventory picking methods: conventional selection, Pick by Light, and Pick by Vision systems. A total of 22 master’s students participated in the pilot study. They tested three different versions of the VR game, compared the time they needed to complete it, and participated in a guided discussion and questionnaire. Preliminary student reports indicated that students felt more engaged in the learning process and perceived higher engagement with inventory picking systems compared to the traditional lecture format. On the other hand, participants mentioned concerns about nausea and the limited availability of VR headsets. The pilot results indicate that VR shows potential as an educational tool for teaching industrial logistics because it transforms the typical classroom environment into a more active and playful one, leading to a more natural understanding of the subject. Full article
(This article belongs to the Special Issue Advances in Virtual Reality Applications)

25 pages, 9160 KB  
Article
Diagnosis of Schizophrenia Using Multimodal Data and Classification Using the EEGNet Framework
by Nandini Manickam, Vijayakumar Ponnusamy and Arul Saravanan
Diagnostics 2025, 15(23), 3081; https://doi.org/10.3390/diagnostics15233081 - 3 Dec 2025
Cited by 1 | Viewed by 1267
Abstract
Background/Objectives: In recent years, people have faced many difficulties in handling stress and emotional, social, and behavioral issues, which have led to severe mental disorders. Schizophrenia is one disorder that requires more attention. This disorder is characterized by positive or psychotic symptoms, negative symptoms, and cognitive symptoms, which makes diagnosis and treatment complicated. The main objective is to identify the degree of severity of symptoms through multimodal data and classify them using the EEGNet framework. Methods: Multimodal data are collected. To identify the severity of symptoms of schizophrenia, initial screening is performed through assessment tools such as the Positive and Negative Syndrome Scale (PANSS), Brief Negative Symptom Scale (BNSS), Negative Symptom Assessment-16 (NSA-16), and Scale for Assessment of Positive Symptoms (SAPS). Designed photo elicitation and VR box video stimuli are used for data collection. The patients are asked to express their thoughts upon viewing photos shown through a photo elicitation task. The patients are also given Virtual Reality (VR) stimuli, in which videos are played in a VR box and the patients are asked to express their thoughts. Patients’ facial expressions and speech signals are captured through a webcam while performing these tasks. Finally, the electrical activity of the patients is assessed through a 14-channel EEG headset. A novel method of fusing and embedding normalized multimodality features into the EEGNet architecture is carried out that enables combined utilization of electrophysiological information from EEG and complementary behavioral–affective cues from other modalities, thereby enhancing classification performance while retaining the architectural efficiency of EEGNet. Results: The reliability and validity of the questionnaire are statistically analyzed and found to be α = 0.761. The sum of variance of PANSS is about 27.08, SAPS is about 28.61, and BNSS is about 29.92, with p < 0.05. The EEGNet model displays an accuracy of 0.99, with a precision of 0.99, recall of 0.98, and F1-score of 0.99 for healthy subjects, a precision of 0.98, recall of 0.99, and F1-score of 0.99 for schizophrenia-affected patients, and a ROC AUC of about 0.9989. Conclusions: The proposed system proves to be a promising method for the diagnosis of schizophrenia. Full article

21 pages, 2831 KB  
Article
The Psychological Effects of AI Learning Assistants in Immersive Virtual Reality Environments
by Avgoustos Tsinakos, Nikoletta Teazi and Styliani Tsinakou
Information 2025, 16(12), 1062; https://doi.org/10.3390/info16121062 - 3 Dec 2025
Cited by 1 | Viewed by 1810
Abstract
Artificial Intelligence (AI) and Virtual Reality (VR) are increasingly integrated into education, yet their combined psychological effects remain underexplored. This paper investigates the potential benefits and risks of AI-powered learning assistants within immersive VR environments. The study builds on insights from a previous pilot involving a virtual tour guide for Athens and proposes a case study with 52 high school students. In groups of three, students would use Oculus headsets with an AI assistant (pre-programmed and AI-generated modes), explore content for a week, and complete questionnaires on usability, trust, and psychological impact. The analysis is expected to reveal a balance of positive outcomes, including greater engagement, motivation, and autonomy, and negative ones, such as over-reliance, diminished critical thinking, and social isolation. The paper also identifies key psychological dynamics, including the critical role of social influence and teacher-led adoption, and the nuanced nature of student trust in AI-generated information. Ethical implications, such as data privacy and the digital divide, are also discussed. The study concludes by proposing that AI-VR can enrich learning, especially in cultural contexts, but requires safeguards for trust, ethics, and accessibility, with further research on long-term effects, psychological impact, and cross-cultural and linguistic nuances. Full article
(This article belongs to the Special Issue Intelligent Interaction in Cultural Heritage)
35 pages, 4671 KB  
Article
Virtual Reality for Innovative and Responsible Tourism
by Mateusz Naramski and Kinga Stecuła
Sustainability 2025, 17(22), 10233; https://doi.org/10.3390/su172210233 - 15 Nov 2025
Cited by 2 | Viewed by 1708
Abstract
The article discusses the use of virtual reality (VR) as a tool for responsible tourism. Practical research was conducted with a group of 215 participants using VR headsets (Meta Quest Pro and HTC VIVE). Volunteers took part in a VR session using the Google Earth VR application, visiting two locations of their choice: the first a place they had previously visited in real life, the second a location they had not visited but would like to. Participants completed a survey before and after the VR experience, rating, among other things, their level of satisfaction, their willingness to visit the given locations, and the emotions accompanying the experience. The authors conducted a statistical analysis of the survey results. The scientific goal of the article was primarily to present a proposal for using virtual reality as an innovative tool supporting responsible tourism. The results confirmed a positive reception of the VR experiences: average satisfaction ratings exceeded 4.0 on a 5-point scale, and positive emotions (most often +1 and +2 on a scale from −2 to +2) dominated among participants. Higher emotional valence was significantly correlated with satisfaction (ρ ≈ 0.434, p < 0.001) and with increased willingness to visit destinations (ρ ≈ 0.306, p < 0.001). Statistically significant differences in satisfaction with visiting new places were observed among groups of respondents with different tourism-type preferences (people who prefer cultural or health tourism reported noticeably higher satisfaction with the VR experience than other respondents). The authors also discuss how VR technology can serve as a tool supporting responsible tourism.
(This article belongs to the Special Issue Smart and Responsible Tourism: Innovations for a Sustainable Future)
18 pages, 1654 KB  
Article
Speaking Through an Avatar: Emotional Expressiveness, Individual Differences, User Experience and Performance
by David Ponce, Sara Garcés-Arilla, Marta Méndez, Magdalena Méndez-López and M.-Carmen Juan
Appl. Sci. 2025, 15(22), 12082; https://doi.org/10.3390/app152212082 - 13 Nov 2025
Viewed by 3087
Abstract
Emotionally expressive avatars are often used to increase engagement in virtual environments, but their effects on users’ emotional outcomes and experience during evaluative tasks are not well established. This study examined whether differences in avatar emotional expressiveness are associated with affective responses and user experience during a socially evaluative speech task in virtual reality (VR), and how individual characteristics and emotional variables relate to performance and user experience. Sixty-three university students were randomly assigned to deliver a five-minute self-presentation, simulating a job interview, in front of a virtual mirror while embodied in either a high-expressiveness or low-expressiveness avatar. The manipulation of avatar expressiveness was implemented using Meta Quest 2 and Meta Quest Pro headsets, which differ mainly in facial-tracking capability. Participants completed a structured three-phase protocol: pre-avatar embodiment (baseline questionnaires), avatar embodiment (speech task), and post-avatar embodiment (post-task measures). Emotional state and trait variables, speech fluency and engagement during the task, and user experience variables were assessed. No significant effects of avatar expressiveness were found on emotional or experiential variables. Correlation analyses revealed a positive association between extraversion and avatar embodiment. These findings contribute to our understanding of the factors associated with user experience and behaviour in avatar-based VR environments and suggest that individual traits, such as extraversion, should be considered when designing VR applications for training, education, and therapeutic purposes.