Search Results (78)

Search Parameters:
Keywords = fully immersive virtual reality

22 pages, 2191 KB  
Systematic Review
Virtual Reality-Based Cognitive and Physical Interventions in Cognitive Impairment: A Network Meta-Analysis of Immersion Level Effects
by Wanyi Li, Wei Gao and Xiangyang Lin
Behav. Sci. 2025, 15(12), 1610; https://doi.org/10.3390/bs15121610 - 22 Nov 2025
Viewed by 816
Abstract
Virtual reality (VR) has emerged as an innovative platform for delivering cognitive and physical training to individuals with cognitive impairment. However, the differential effectiveness of fully immersive versus partially immersive VR interventions remains unclear. This network meta-analysis aimed to evaluate how immersion level influences cognitive, motor, and functional outcomes in neurodegenerative populations. A systematic search of PubMed, Embase, Cochrane Library, and Web of Science up to October 2025 identified 20 randomized controlled trials involving 1382 participants with mild cognitive impairment (MCI) or dementia. Interventions were categorized into four groups: (1) fully immersive VR (head-mounted displays), (2) partially immersive VR (screen-based or motion-capture systems), (3) active control (traditional cognitive or physical training), and (4) passive control (usual care or health education). Outcomes included the Mini-Mental State Examination (MMSE), Montreal Cognitive Assessment (MoCA), Trail Making Test (TMT), Digit Span Test (DST), Timed Up and Go (TUG), and Instrumental Activities of Daily Living (IADL). Standardized mean differences (SMDs) and surface under the cumulative ranking curve (SUCRA) values were calculated using RevMan 5.4 and Stata 18.0. Fully immersive VR significantly improved global cognition compared to passive control (MMSE: SMD = 0.51, 95% CI [0.06, 0.96]), while partially immersive VR showed superior effects on executive function versus active control (TMT-B: SMD = −1.29, 95% CI [−2.62, −0.93]) and on motor function (TUG: SMD = −0.59, 95% CI [−1.11, −0.08]). In MoCA performance, both VR modalities outperformed traditional interventions (SUCRA: fully immersive = 76.0%; partially immersive = 84.8%). SUCRA rankings suggest that fully immersive VR is optimal for memory and foundational cognition (81.7%), whereas partially immersive VR performs best for executive function (98.9%). These findings indicate that the efficacy of VR-based cognitive or physical–cognitive interventions is modulated by immersion level. Tailoring VR modality to specific cognitive domains may optimize rehabilitation outcomes in MCI and dementia care. Full article
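The two summary statistics this review relies on, the standardized mean difference (SMD) and the surface under the cumulative ranking curve (SUCRA), can be illustrated with a minimal sketch; the group statistics and rank-probability matrix below are hypothetical placeholders, not values from the review.

```python
import numpy as np

def smd(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Cohen's d: difference in group means divided by the pooled SD."""
    pooled_sd = np.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

def sucra(rank_probs):
    """SUCRA per treatment from a (treatments x ranks) probability matrix.

    rank_probs[k, j] = probability that treatment k occupies rank j+1
    (rank 1 = best). SUCRA is the mean of the cumulative rank probabilities
    over the first a-1 ranks, where a is the number of treatments.
    """
    a = rank_probs.shape[1]
    cum = np.cumsum(rank_probs, axis=1)[:, : a - 1]
    return cum.mean(axis=1)

# Illustrative example: MMSE change scores in a VR arm vs. passive control.
print(smd(2.1, 3.0, 40, 0.6, 2.8, 38))           # hypothetical SMD

# Hypothetical rank probabilities for 4 arms (rows) over 4 ranks (columns).
p = np.array([[0.55, 0.30, 0.10, 0.05],
              [0.30, 0.45, 0.20, 0.05],
              [0.10, 0.20, 0.45, 0.25],
              [0.05, 0.05, 0.25, 0.65]])
print(sucra(p))                                   # values in [0, 1]; multiply by 100 for %
```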

19 pages, 1387 KB  
Article
Integrating Physiologic Assessment into Virtual Reality-Based Pediatric Pain Intervention: A Feasibility Study
by Harsheen Marwah, Stefania R. Moldovanu, Talis Reks, Brian Anthony and Deirdre E. Logan
Virtual Worlds 2025, 4(4), 47; https://doi.org/10.3390/virtualworlds4040047 - 22 Oct 2025
Viewed by 934
Abstract
This feasibility study explored the integration of physiological monitoring into a virtual reality (VR) intervention for pediatric pain management. The goal was to identify a feasible strategy for collecting physiologic data in the context of a VR intervention currently being developed for youth with chronic pain. We assessed the potential of Cognitive Load (CL), derived from heart rate and pupillometry/eye-tracking data, as a marker of arousal and user engagement in a VR simulation to promote school functioning in youth with chronic pain. The HP Reverb G2 Omnicept headset and Polar H10 heart-rate sensor were utilized. The Child Presence Questionnaire (CPQ) assessed participants’ self-reported immersion and engagement. Data collection focused on the feasibility and utility of physiologic data in assessing arousal and correlations with self-reported experience. Nine participants engaged in the simulation, with eight yielding complete data. The simulation and headset were well tolerated. The CPQ Transportation subscale showed a trend-level correlation with mean CL. Given the small sample and feasibility focus, individual-level results were examined. Combining multiple physiologic markers into a construct such as CL is intriguing, but data interpretability was limited. Pupillometry and related metrics show promise as feasible markers of engagement and arousal for VR-based interventions but require appropriate expertise to fully interpret. The study found that integration of physiologic monitoring is feasible, but further work is needed to standardize metrics and identify the most useful and user-friendly markers. Full article
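As a rough illustration of the analysis described, relating mean cognitive load (CL) to the CPQ Transportation subscale, the following sketch computes a simple correlation; the per-participant values are hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical per-participant values (eight participants with complete data);
# these numbers are placeholders, not the study's measurements.
mean_cl = np.array([0.42, 0.55, 0.38, 0.61, 0.47, 0.52, 0.35, 0.58])   # 0-1 cognitive load
cpq_transport = np.array([3.1, 4.0, 2.8, 4.3, 3.5, 3.9, 2.6, 4.1])     # subscale score

r, p = pearsonr(mean_cl, cpq_transport)
rho, p_s = spearmanr(mean_cl, cpq_transport)    # rank-based check for a small sample
print(f"Pearson r = {r:.2f} (p = {p:.3f}); Spearman rho = {rho:.2f} (p = {p_s:.3f})")
```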

19 pages, 1218 KB  
Article
The Impact of Virtual Reality Immersion on Learning Outcomes: A Comparative Study of Declarative and Procedural Knowledge Acquisition
by Nengbao Yu, Wenya Shi, Wei Dong and Renying Kang
Behav. Sci. 2025, 15(10), 1322; https://doi.org/10.3390/bs15101322 - 26 Sep 2025
Cited by 1 | Viewed by 1964
Abstract
The potential of Virtual Reality (VR) in enhancing learning and training is being widely explored. However, the relationship between immersion, one of the core technical features of VR, and knowledge type has not been fully explored. This study aims to investigate how VR immersion levels (high vs. low) affect the acquisition of declarative and procedural knowledge, as well as related cognitive and affective factors. A 2 × 2 mixed design was adopted, in which 64 college students with no VR experience and no background in professional medical knowledge were randomly assigned to either a high-immersion group (using HTC Vive Pro headsets) or a low-immersion group (using desktop monitors). Participants completed learning tasks on thyroid and related diseases (declarative knowledge) and cardiopulmonary resuscitation (procedural knowledge), followed by knowledge tests and self-report questionnaires to measure presence, motivation, self-efficacy, cognitive load, and emotional states. Results showed that high immersion significantly improved learning outcomes for both types of knowledge, with large effect sizes. In both knowledge domains, high immersion also enhanced presence, intrinsic motivation, self-efficacy, and positive emotions. However, cognitive load was reduced only for declarative knowledge, and no significant effects were observed for self-regulation. These findings highlight the differential impact of VR immersion on knowledge acquisition and provide insights for optimizing VR-based educational interventions. Full article
(This article belongs to the Special Issue Exploring Enactive Learning in Immersive XR Environments)

18 pages, 3097 KB  
Article
Deep Neural Network-Based Alignment of Virtual Reality onto a Haptic Device for Visuo-Haptic Mixed Reality
by Hyeonsu Kim, Hanbit Yong and Myeongjin Kim
Appl. Sci. 2025, 15(18), 10071; https://doi.org/10.3390/app151810071 - 15 Sep 2025
Viewed by 822
Abstract
Precise alignment between virtual reality (VR) and haptic interfaces is essential for delivering an immersive visuo-haptic mixed reality experience. Existing methods typically depend on markers, external trackers, or cameras, which can be intrusive and hinder usability. In addition, previous network-based approaches generally rely on image data for alignment. This paper introduces a deep neural network-based alignment method that eliminates the need for such external components. Unlike existing methods, our approach is designed based on coordinate transformation and leverages a network model for alignment. The proposed method utilizes the head-mounted display (HMD) position, the fingertip position obtained via hand tracking, and the six-degrees-of-freedom (6-DOF) pose of a haptic device’s end-effector as inputs to a neural network model. A shared multi-layer perceptron and max pooling layer are employed to extract global feature vectors from the inputs, ensuring permutation invariance. The extracted feature vectors are then processed through fully connected layers to estimate the pose of the haptic device’s base. Experimental results show a mean positional error of 2.718 mm (1.3% of the haptic device’s maximum length) and a mean rotation error of 0.5330°. The proposed method is robust against noise, demonstrating its applicability across various domains, including medical simulations, virtual prototyping, and interactive training environments. Full article
(This article belongs to the Special Issue Advances in Human–Machine Interaction)
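The network design described, a shared multi-layer perceptron with max pooling for permutation-invariant feature extraction followed by fully connected layers regressing the base pose, can be sketched roughly as below; the layer sizes, the 6-parameter pose output, and the treatment of every input as a 3-D point are simplifying assumptions rather than the authors' exact architecture.

```python
import torch
import torch.nn as nn

class BasePoseEstimator(nn.Module):
    """Permutation-invariant regressor: a shared MLP over the input points,
    max pooling to a global feature, then an FC head predicting the
    haptic-device base pose (assumed here as 3 translation + 3 rotation values)."""

    def __init__(self, in_dim=3, feat_dim=256):
        super().__init__()
        # Shared MLP applied independently to each input point
        # (e.g., HMD position, fingertip position, end-effector pose samples).
        self.shared_mlp = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, feat_dim), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 6),            # assumed output: x, y, z + axis-angle rotation
        )

    def forward(self, points):                       # points: (batch, n_points, in_dim)
        per_point = self.shared_mlp(points)          # (batch, n_points, feat_dim)
        global_feat, _ = per_point.max(dim=1)        # max pooling -> permutation invariance
        return self.head(global_feat)                # (batch, 6)

# Toy usage: 3 input "points" per sample (placeholder values).
model = BasePoseEstimator()
x = torch.randn(8, 3, 3)
print(model(x).shape)   # torch.Size([8, 6])
```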

23 pages, 868 KB  
Article
LightLiveAuth: A Lightweight Continuous Authentication Model for Virtual Reality
by Pengyu Li, Feifei Chen, Lei Pan, Thuong Hoang, Ye Zhu and Leon Yang
IoT 2025, 6(3), 50; https://doi.org/10.3390/iot6030050 - 2 Sep 2025
Viewed by 1074
Abstract
As network infrastructure and Internet of Things (IoT) technologies continue to evolve, immersive systems such as virtual reality (VR) are becoming increasingly integrated into interconnected environments. These advancements allow real-time processing of multi-modal data, improving user experiences with rich visual and three-dimensional interactions. However, ensuring continuous user authentication in VR environments remains a significant challenge. To address this issue, an effective user monitoring system is required to track VR users in real time and trigger re-authentication when necessary. Based on this premise, we propose a multi-modal authentication framework, named MobileNetV3pro, that uses eye-tracking data for authentication. The framework applies a transfer learning approach by adapting the MobileNetV3Large architecture (pretrained on ImageNet) as a feature extractor. Its pre-trained convolutional layers are used to obtain high-level image representations, while a custom fully connected classification layer is added to perform binary classification. Authentication performance is evaluated using Equal Error Rate (EER), accuracy, F1-score, model size, and inference time. Experimental results show that eye-based authentication with MobileNetV3pro achieves a lower EER (3.00%) than baseline models, demonstrating its effectiveness in VR environments. Full article
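The transfer-learning pattern described (a MobileNetV3Large backbone pretrained on ImageNet, frozen and reused as a feature extractor, with a new fully connected classifier on top) and the EER metric can be sketched as follows; the head sizes and toy inputs are assumptions, not the paper's MobileNetV3pro implementation.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v3_large, MobileNet_V3_Large_Weights
from sklearn.metrics import roc_curve

# Pretrained backbone used as a frozen feature extractor (ImageNet weights).
backbone = mobilenet_v3_large(weights=MobileNet_V3_Large_Weights.IMAGENET1K_V1)
backbone.classifier = nn.Identity()          # expose the 960-d pooled features
for p in backbone.parameters():
    p.requires_grad = False

# Custom fully connected head for binary (genuine vs. impostor) classification;
# the layer sizes here are assumptions, not the paper's configuration.
head = nn.Sequential(nn.Linear(960, 256), nn.ReLU(), nn.Dropout(0.2), nn.Linear(256, 1))
model = nn.Sequential(backbone, head).eval()

def equal_error_rate(y_true, scores):
    """EER: operating point where the false accept rate equals the false reject rate."""
    fpr, tpr, _ = roc_curve(y_true, scores)
    fnr = 1 - tpr
    i = np.nanargmin(np.abs(fnr - fpr))
    return (fpr[i] + fnr[i]) / 2

# Toy check with random images and alternating labels (placeholders, not eye-tracking data).
with torch.no_grad():
    scores = model(torch.randn(16, 3, 224, 224)).squeeze(1).numpy()
labels = np.array([0, 1] * 8)
print(f"EER = {equal_error_rate(labels, scores):.2%}")
```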

23 pages, 525 KB  
Systematic Review
Virtual and Augmented Reality Games in Dementia Care: Systematic and Bibliographic Review
by Martin Eckert, Varsha Radhakrishnan, Thomas Ostermann, Jan Peter Ehlers and Gregor Hohenberg
Healthcare 2025, 13(16), 2013; https://doi.org/10.3390/healthcare13162013 - 15 Aug 2025
Cited by 1 | Viewed by 1108
Abstract
Background: This review investigates the use of virtual and augmented reality games in dementia care. It provides insight into the last 13 years of research, including the earliest publications on this topic, and takes a systematic and bibliographic approach. Methods: We sourced research publications from three different scientific databases (PubMed, Scopus, and APA PsycInfo) for this publication. We chose the PRISMA approach and categorized the studies according to the publisher. A set of 12 variables was defined across three categories (bibliographic, medical, and technical). Results: Of the 389 identified articles, 36 met the inclusion and exclusion criteria. After an initial phase consisting mainly of pilot studies, the number of publications increased fourfold before declining again in 2023. Pilot and feasibility studies dominated; 8 of the 36 trials were RCTs. The median trial population was 24, and the protocols ran for an average of 10 weeks, with two 40-min sessions per week. Simulator sickness was reported, but not by the majority of participants. A total of 59% of the studies used fully immersive 3D-VR systems. We identified only three publications that provided high immersion quality. These findings indicate the positive effects of using virtual and augmented reality systems on participants’ cognitive function and mood. Conclusions: This publication focuses on the technical aspects of the applied technologies and the immersion levels of the patients. Using augmented and virtual reality methods to improve the quality of life and physical interaction of dementia patients shows the potential to enhance cognitive functioning in this population, but further investigation and multicenter RCTs are needed. There are strong indications that this research branch has high potential to benefit both caretakers and patients. Full article

21 pages, 21564 KB  
Article
Remote Visualization and Optimization of Fluid Dynamics Using Mixed Reality
by Sakshi Sandeep More, Brandon Antron, David Paeres and Guillermo Araya
Appl. Sci. 2025, 15(16), 9017; https://doi.org/10.3390/app15169017 - 15 Aug 2025
Viewed by 1068
Abstract
This study presents an innovative pipeline for processing, compressing, and remotely visualizing large-scale numerical simulations of fluid dynamics in a virtual wind tunnel (VWT), leveraging virtual and augmented reality (VR/AR) for enhanced analysis and high-end visualization. The workflow addresses the challenges of handling massive databases generated using Direct Numerical Simulation (DNS) while maintaining visual fidelity and ensuring efficient rendering for user interaction. Fully immersive visualization of supersonic (Mach number 2.86) spatially developing turbulent boundary layers (SDTBLs) over strong concave and convex curvatures was achieved. The comprehensive DNS data provides insights into the transport phenomena inside turbulent boundary layers under strong deceleration or an Adverse Pressure Gradient (APG) caused by concave walls, as well as strong acceleration or a Favorable Pressure Gradient (FPG) caused by convex walls, under different wall thermal conditions (i.e., Cold, Adiabatic, and Hot walls). The process begins with a .vts file input from a DNS, which is visualized using ParaView software. These visualizations, representing different fluid behaviors based on a DNS with a high spatial/temporal resolution and employing millions of “numerical sensors”, are treated as individual time frames and exported in GL Transmission Format (GLTF), a widely used open-source file format designed for efficient transmission and loading of 3D scenes. To support the workflow, optimized Extract–Transform–Load (ETL) techniques were implemented for high-throughput data handling. Conversion of the exported GLTF files into their binary counterpart (GLB) reduced storage by 25% and improved load latency by 60%. This research uses Unity’s Profile Analyzer and Memory Profiler to identify performance limitations during contour rendering, focusing on GPU and CPU efficiency. Further, immersive VR/AR analytics are achieved by connecting the processed outputs to the Unity engine and Microsoft HoloLens Gen 2 via Azure Remote Rendering cloud services, enabling real-time exploration of fluid behavior in mixed-reality environments. This pipeline constitutes a significant advancement in the scientific visualization of fluid dynamics, particularly when applied to datasets comprising hundreds of high-resolution frames. Moreover, the methodologies and insights gleaned from this approach are highly transferable, offering potential applications across various other scientific and engineering disciplines. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
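The GLTF-to-GLB conversion step mentioned in the ETL stage can be sketched along these lines; this sketch assumes the pygltflib package and a hypothetical frame file name, and is not the authors' pipeline.

```python
import os
from pygltflib import GLTF2, BufferFormat

def gltf_to_glb(src_path: str, dst_path: str) -> float:
    """Convert a text-based .gltf frame to binary .glb and report the size ratio."""
    gltf = GLTF2().load(src_path)
    gltf.convert_buffers(BufferFormat.BINARYBLOB)   # embed buffers as a single binary blob
    gltf.save_binary(dst_path)
    return os.path.getsize(dst_path) / os.path.getsize(src_path)

# Hypothetical DNS frame exported from ParaView as GLTF.
ratio = gltf_to_glb("frame_0001.gltf", "frame_0001.glb")
print(f"GLB is {ratio:.0%} of the GLTF size")
```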

26 pages, 5829 KB  
Article
Virtual Reality in Supporting the Creation of Sustainable Tourism: A Case Study of Gen Z Technology Acceptance
by Marek Miłosz, Kamil Żyła, Stanisław Piotr Skulimowski, Anna Liliana Dakowicz, Tomasz Szymczyk and Marcin Badurowicz
Sustainability 2025, 17(16), 7173; https://doi.org/10.3390/su17167173 - 8 Aug 2025
Cited by 2 | Viewed by 2055
Abstract
Tourism’s rapid growth has significant negative effects on the environment, society, and economy. Sustainable tourism practices are essential to mitigate these effects. Virtual reality (VR) technologies offer the possibility of implementing sustainable tourism policies by providing immersive experiences that replace real ones. Moreover, VR can be a useful tool for the protection and promotion of cultural and natural heritage. The article discusses potential directions for sustainable tourism using VR. This technology can reduce the burden on popular tourist sites without losing their value to visitors. Additionally, it can raise the wider public’s awareness of less popular destinations. A case study of the implementation of a virtual tour at the Pahlavon Mahmud Mausoleum in Khiva (Uzbekistan) is presented. The research method was designed to evaluate the acceptability of VR technology among a convenience sample of n = 57 Gen Z consumers (university students aged 20–24), who completed interviews following their participation in a voluntary virtual walking tour. The results suggest that VR can be an acceptable and useful tool for implementing sustainable tourism policies in the near future. Another conclusion is that virtual sightseeing should not fully replace onsite tourism. Full article
(This article belongs to the Section Tourism, Culture, and Heritage)

19 pages, 1224 KB  
Article
Charting the Future of Maritime Education and Training: A Technology-Acceptance-Model-Based Pilot Study on Students’ Behavioural Intention to Use a Fully Immersive VR Engine Room Simulator
by David Bačnar, Demir Barić and Dario Ogrizović
Appl. Syst. Innov. 2025, 8(3), 84; https://doi.org/10.3390/asi8030084 - 19 Jun 2025
Viewed by 4556
Abstract
Fully immersive engine room simulators are increasingly recognised as prominent tools in advancing maritime education and training. However, end-users’ acceptance of these innovative technologies remains insufficiently explored. To address this research gap, this case-specific pilot study applied the Technology Acceptance Model (TAM) to explore maritime engineering students’ intentions to adopt the newly introduced head-mounted display (HMD) virtual reality (VR) engine room simulator as a training tool. Sampling (N = 84) was conducted at the Faculty of Maritime Studies, University of Rijeka, during the initial simulator trials. Structural Equation Modelling (SEM) revealed that perceived usefulness was the primary determinant of students’ behavioural intention to accept the simulator as a tool for training purposes, acting both as a direct predictor and as a mediating variable, transmitting the positive effect of perceived ease of use onto the intention. By providing preliminary empirical evidence on the key factors influencing maritime engineering students’ intentions to adopt HMD-VR simulation technologies within existing training programmes, this study’s findings might offer valuable insights to software developers and educators in shaping future simulator design and enhancing pedagogical practices in alignment with maritime education and training (MET) standards. Full article
(This article belongs to the Special Issue Advanced Technologies and Methodologies in Education 4.0)

23 pages, 4826 KB  
Article
Visualization of High-Intensity Laser–Matter Interactions in Virtual Reality and Web Browser
by Martin Matys, James P. Thistlewood, Mariana Kecová, Petr Valenta, Martina Greplová Žáková, Martin Jirka, Prokopis Hadjisolomou, Alžběta Špádová, Marcel Lamač and Sergei V. Bulanov
Photonics 2025, 12(5), 436; https://doi.org/10.3390/photonics12050436 - 30 Apr 2025
Viewed by 3089
Abstract
We present the Virtual Beamline (VBL) application, an interactive web-based platform for visualizing high-intensity laser–matter interactions using particle-in-cell (PIC) simulations, with future potential for experimental data visualization. These interactions include ion acceleration, electron acceleration, γ-flash generation, electron–positron pair production, and attosecond and spiral pulse generation. Developed at the ELI Beamlines facility, VBL integrates a custom-built WebGL engine with WebXR-based Virtual Reality (VR) support, allowing users to explore complex plasma dynamics in non-VR mode on a computer screen or in fully immersive VR mode using a head-mounted display. The application runs directly in a standard web browser, ensuring broad accessibility. VBL enhances the visualization of PIC simulations by efficiently processing and rendering four main data types: point particles, 1D lines, 2D textures, and 3D volumes. By utilizing interactive 3D visualization, it overcomes the limitations of traditional 2D representations, offering enhanced spatial understanding and real-time manipulation of visualization parameters such as time steps, data layers, and colormaps. Users can interactively explore the visualized data by moving their body or using a controller for navigation, zooming, and rotation. These interactive capabilities improve data exploration and interpretation, making VBL a valuable tool for both scientific analysis and educational outreach. The visualizations are hosted online and freely accessible on our server, providing researchers, the general public, and broader audiences with an interactive tool to explore complex plasma physics simulations. By offering an intuitive and dynamic approach to large-scale datasets, VBL enhances both scientific research and knowledge dissemination in high-intensity laser–matter physics. Full article

48 pages, 14252 KB  
Article
Towards an Accessible Metaverse Experience: Evaluation of a Multiplatform Technological Heritage Museum Prototype
by Matea Žilak, Jose M. Monzo, Carmen Bachiller and Beatriz Rey
Electronics 2025, 14(8), 1635; https://doi.org/10.3390/electronics14081635 - 18 Apr 2025
Viewed by 1514
Abstract
Before metaverse technologies become fully integrated into daily life, their accessibility must be carefully considered. To ensure equal opportunities for all users, regardless of age or disability, immersive technologies should offer seamless and intuitive interaction with virtual environments, objects, and other users. This paper presents an evaluation of the accessibility and user experience of a metaverse technological heritage museum prototype on two platforms: mobile devices and virtual reality. Through feedback from 64 participants of various ages, we define accessibility guidelines for metaverse museums and identify requirements for improving the prototype. Our findings reveal significant differences between young participants and adults in their navigation and interaction experiences across platforms. This work addresses a research gap in metaverse museum accessibility evaluation and contributes to the development of more inclusive virtual spaces by providing concrete recommendations aligned with accessibility standards. Full article
(This article belongs to the Special Issue Metaverse and Digital Twins, 2nd Edition)

12 pages, 751 KB  
Article
An Integrated Cognitive Remediation and Recovery-Oriented Program for Individuals with Bipolar Disorder Using a Virtual Reality-Based Intervention: 6- and 12-Month Cognitive Outcomes from a Randomized Feasibility Trial
by Alessandra Perra, Mauro Giovanni Carta, Diego Primavera, Giulia Cossu, Aurora Locci, Rosanna Zaccheddu, Federica Piludu, Alessia Galetti, Antonio Preti, Valerio De Lorenzo, Lorenzo Di Natale, Sergio Machado, Antonio Egidio Nardi and Federica Sancassiani
Behav. Sci. 2025, 15(4), 452; https://doi.org/10.3390/bs15040452 - 1 Apr 2025
Viewed by 2197
Abstract
Introduction: Achieving long-term impacts from cognitive remediation (CR) interventions is a key goal in rehabilitative care. Integrating virtual reality (VR) with psychoeducational approaches within CR programs has shown promise in enhancing user engagement and addressing the complex needs of individuals with bipolar disorder (BD). A previous randomized controlled crossover feasibility trial demonstrated the viability of a fully immersive VR-CR intervention for BD, reporting low dropout rates, high acceptability, and significant cognitive improvements. This secondary analysis aimed to evaluate the stability of these outcomes over time. Methods: This paper presents a 6- to 12-month follow-up of the initial trial. Secondary cognitive outcomes were assessed, including visuospatial abilities, memory, attention, verbal fluency, and executive function, using validated assessment tools. Statistical analyses were conducted using Friedman’s test. Results: A total of 36 participants completed the 6- to 12-month follow-up. Overall, cognitive functions showed a trend toward stability or improvement over time, except for visuospatial and executive functions, which demonstrated inconsistent trajectories. Significant improvements were observed in language (p = 0.02). Conclusion: This study highlights the overall stability of cognitive functions 12 months after a fully immersive VR-CR program for individuals with BD. To sustain long-term clinical benefits, an integrated approach, such as incorporating psychoeducational strategies within cognitive remediation interventions, may be essential. Further follow-up studies with control groups and larger sample sizes are needed to validate these findings. Full article
(This article belongs to the Special Issue Psychoeducation and Early Intervention)
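A minimal sketch of the repeated-measures comparison described (Friedman's test across follow-up time points) is shown below; the scores are hypothetical placeholders, not the trial's data.

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)

# Hypothetical verbal-fluency scores for 36 participants at three time points
# (post-intervention, 6-month, 12-month); placeholders, not the trial's data.
post = rng.normal(30, 5, 36)
month6 = post + rng.normal(1.0, 2.0, 36)
month12 = post + rng.normal(1.5, 2.0, 36)

stat, p = friedmanchisquare(post, month6, month12)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.3f}")
```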

17 pages, 2942 KB  
Article
Profiling Students by Perceived Immersion: Insights from VR Engine Room Simulator Trials in Maritime Higher Education
by Luka Liker, Demir Barić, Ana Perić Hadžić and David Bačnar
Appl. Sci. 2025, 15(7), 3786; https://doi.org/10.3390/app15073786 - 30 Mar 2025
Cited by 4 | Viewed by 1748
Abstract
Research on students’ immersive experiences with fully immersive virtual reality (VR) technologies is extensively documented across diverse educational settings; however, in maritime higher education, it remains relatively underrepresented. Therefore, by using segmentation analysis, this study aims to profile maritime engineering students at the Faculty of Maritime Studies, University of Rijeka, by perceived immersion (PIMM) within a Head-Mounted Display (HMD) VR engine room simulator and to explore differences in their perceived learning benefits (PLBs), future behavioural intentions (FBI), and satisfaction (SAT) with the HMD-VR experience. The sample comprised 84 participants who engaged in preliminary HMD-VR engine room simulator trials. A non-hierarchical (K-means) cluster analysis, combined with the Elbow method, identified two distinct and homogeneous groups: Immersionists and Conformists. The results of an independent-samples t-test indicated that Immersionists exhibited significantly higher scores for perceived learning benefits, future behavioural intentions, and overall satisfaction than Conformists. The study results underscore the significance of understanding students’ subjective perception of immersion in the implementation and further development of fully immersive VR technologies within maritime education and training (MET) curricula. However, as the study is based on a specific case within a particular educational context, the results may not directly apply to the broader student population. Full article
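The segmentation approach described (non-hierarchical K-means clustering on perceived immersion, an elbow check on within-cluster inertia, and independent-samples t-tests on the outcome scales) can be sketched roughly as follows; the data are hypothetical stand-ins for the PIMM, PLB, FBI, and SAT constructs.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)

# Hypothetical survey data for 84 students (Likert-type means per construct).
df = pd.DataFrame({
    "PIMM": rng.normal(4.0, 0.6, 84),   # perceived immersion
    "PLB":  rng.normal(3.9, 0.7, 84),   # perceived learning benefits
    "FBI":  rng.normal(4.1, 0.6, 84),   # future behavioural intentions
    "SAT":  rng.normal(4.2, 0.5, 84),   # satisfaction
})

# Elbow check: within-cluster sum of squares (inertia) for k = 1..6 on PIMM.
X = df[["PIMM"]].to_numpy()
inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_ for k in range(1, 7)]
print(dict(enumerate(inertias, start=1)))

# Two-cluster solution ("Immersionists" vs. "Conformists") and group comparisons.
df["cluster"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for col in ["PLB", "FBI", "SAT"]:
    a, b = df.loc[df.cluster == 0, col], df.loc[df.cluster == 1, col]
    t, p = ttest_ind(a, b)
    print(f"{col}: t = {t:.2f}, p = {p:.3f}")
```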

20 pages, 1619 KB  
Systematic Review
A Breakthrough in Producing Personalized Solutions for Rehabilitation and Physiotherapy Thanks to the Introduction of AI to Additive Manufacturing
by Emilia Mikołajewska, Dariusz Mikołajewski, Tadeusz Mikołajczyk and Tomasz Paczkowski
Appl. Sci. 2025, 15(4), 2219; https://doi.org/10.3390/app15042219 - 19 Feb 2025
Cited by 7 | Viewed by 4520
Abstract
The integration of artificial intelligence (AI) with additive manufacturing (AM) is driving breakthroughs in personalized rehabilitation and physical therapy solutions, enabling precise customization to individual patient needs. This article presents the current state of knowledge and perspectives of using personalized solutions for rehabilitation and physiotherapy thanks to the introduction of AI to AM. Advanced AI algorithms analyze patient-specific data such as body scans, movement patterns, and medical history to design customized assistive devices, orthoses, and prosthetics. This synergy enables the rapid prototyping and production of highly optimized solutions, improving comfort, functionality, and therapeutic outcomes. Machine learning (ML) models further streamline the process by anticipating biomechanical needs and adapting designs based on feedback, providing iterative refinement. Cutting-edge techniques leverage generative design and topology optimization to create lightweight yet durable structures that are ideally suited to the patient’s anatomy and rehabilitation goals. AI-based AM also facilitates the production of multi-material devices that combine flexibility, strength, and sensory capabilities, enabling improved monitoring and support during physical therapy. New perspectives include integrating smart sensors with printed devices, enabling real-time data collection and feedback loops for adaptive therapy. Additionally, these solutions are becoming increasingly accessible as AM technology lowers costs and improves, democratizing personalized healthcare. Future advances could lead to the widespread use of digital twins for the real-time simulation and customization of rehabilitation devices before production. AI-based virtual reality (VR) and augmented reality (AR) tools are also expected to combine with AM to provide immersive, patient-specific training environments along with physical aids. Collaborative platforms based on federated learning can enable healthcare providers and researchers to securely share AI insights, accelerating innovation. However, challenges such as regulatory approval, data security, and ensuring equity in access to these technologies must be addressed to fully realize their potential. One of the major gaps is the lack of large, diverse datasets to train AI models, which limits their ability to design solutions that span different demographics and conditions. Integration of AI–AM systems into personalized rehabilitation and physical therapy should focus on improving data collection and processing techniques. Full article
(This article belongs to the Special Issue Additive Manufacturing in Material Processing)

33 pages, 6254 KB  
Article
Development of a Reduced Order Model-Based Workflow for Integrating Computer-Aided Design Editors with Aerodynamics in a Virtual Reality Dashboard: Open Parametric Aircraft Model-1 Testcase
by Andrea Lopez and Marco E. Biancolini
Appl. Sci. 2025, 15(2), 846; https://doi.org/10.3390/app15020846 - 16 Jan 2025
Viewed by 1719
Abstract
In this paper, a workflow for creating advanced aerodynamics design dashboards is proposed. A CAD modeler is directly linked to the CFD simulation results so that the designer can explore in real time, assisted by virtual reality (VR), how shape parameters affect the aerodynamics and choose the combination that optimizes performance. In this way, the time required for the conception of a new component can be drastically reduced because, even at the preliminary stage, the designer has all the necessary information to make more thoughtful choices. Thus, this work sets a highly ambitious and innovative goal: to create a smart design dashboard where every shape parameter is directly linked, in real time, to the results of the high-fidelity analyses. The OPAM (Open Parametric Aircraft Model), a simplified model of the Boeing 787, was considered as a case study. CAD parameterization and mesh morphing were combined to generate the design points (DPs), while Reduced Order Models (ROMs) were developed to link the results of the CFD analyses to the chosen parameterization. The ROMs were exported as FMUs (Functional Mockup Units) to be easily managed in any environment. Finally, a VR design dashboard was created in the Unity environment, enabling interaction with the geometric model in order to observe, in a fully immersive and intuitive environment, how each shape parameter affects the physics involved. The Meta Quest 3 headset was selected for these tests. Thus, the use of VR for a design platform represents another innovative aspect of this work. Full article
(This article belongs to the Special Issue Application of Fluid Mechanics and Aerodynamics in Aerospace)
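The central idea, linking shape parameters to precomputed CFD results through a reduced order model so the VR dashboard responds in real time, can be illustrated with a generic surrogate over design points; the radial-basis interpolator and the synthetic drag response below are stand-ins, not the authors' ROM or FMU toolchain.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)

# Hypothetical design points: two shape parameters (e.g., wing sweep, twist)
# sampled for CFD runs, each yielding a scalar output such as a drag coefficient.
params = rng.uniform([20.0, -2.0], [40.0, 2.0], size=(25, 2))                 # (sweep_deg, twist_deg)
cd = 0.020 + 1e-4 * (params[:, 0] - 30) ** 2 + 5e-4 * params[:, 1] ** 2       # synthetic response

# Reduced-order surrogate: interpolate the precomputed results over parameter space
# so new parameter combinations can be evaluated instantly in the dashboard.
rom = RBFInterpolator(params, cd, kernel="thin_plate_spline")

query = np.array([[32.5, 0.8]])          # a shape the designer dials in via the VR sliders
print(f"Predicted Cd at sweep=32.5, twist=0.8: {rom(query)[0]:.4f}")
```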