Search Results (218)

Search Parameters:
Keywords = head-out immersion

21 pages, 9379 KiB  
Article
UDirEar: Heading Direction Tracking with Commercial UWB Earbud by Interaural Distance Calibration
by Minseok Kim, Younho Nam, Jinyou Kim and Young-Joo Suh
Electronics 2025, 14(15), 2940; https://doi.org/10.3390/electronics14152940 - 23 Jul 2025
Viewed by 196
Abstract
Accurate heading direction tracking is essential for immersive VR/AR, spatial audio rendering, and robotic navigation. Existing IMU-based methods suffer from drift and vibration artifacts, vision-based approaches require LoS and raise privacy concerns, and RF techniques often need dedicated infrastructure. We propose UDirEar, a COTS UWB device-based system that estimates user heading using solely high-level UWB information like distance and unit direction. By initializing an EKF with each user’s constant interaural distance, UDirEar compensates for the earbuds’ roto-translational motion without additional sensors. We evaluate UDirEar on a step-motor-driven dummy head against an IMU-only baseline (MAE 30.8°), examining robustness across dummy head–initiator distances, elapsed time, EKF calibration conditions, and NLoS scenarios. UDirEar achieves a mean absolute error of 3.84° and maintains stable performance under all tested conditions.
(This article belongs to the Special Issue Wireless Sensor Network: Latest Advances and Prospects)
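The abstract above does not give UDirEar's estimator, but the core geometric constraint (a fixed interaural distance relating per-earbud UWB ranges to heading) can be sketched. The anchor placement, the far-field approximation, and the parameter values below are illustrative assumptions, not the paper's EKF.

```python
import math

def simulate_ranges(D, d, heading_rad):
    """Exact UWB ranges from an anchor at the origin to two earbuds.

    The head centre sits at (D, 0); the interaural axis is perpendicular
    to the facing direction, with earbuds d/2 either side of the centre.
    """
    v = (-math.sin(heading_rad), math.cos(heading_rad))  # interaural axis
    left = (D + 0.5 * d * v[0], 0.5 * d * v[1])
    right = (D - 0.5 * d * v[0], -0.5 * d * v[1])
    return math.hypot(*left), math.hypot(*right)

def estimate_heading(r_left, r_right, d):
    """Far-field heading estimate: r_right - r_left ~ d * sin(heading)."""
    s = max(-1.0, min(1.0, (r_right - r_left) / d))
    return math.asin(s)

true_heading = math.radians(30.0)
r_l, r_r = simulate_ranges(D=2.0, d=0.15, heading_rad=true_heading)
est = estimate_heading(r_l, r_r, d=0.15)
print(round(math.degrees(est), 1))  # close to 30.0 when d << D
```

For an interaural distance d much smaller than the anchor distance D, the range difference approaches d·sin(heading), which is what makes a single anchor informative; the paper's EKF presumably fuses such constraints over time, which this sketch does not attempt.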

28 pages, 5168 KiB  
Article
GazeHand2: A Gaze-Driven Virtual Hand Interface with Improved Gaze Depth Control for Distant Object Interaction
by Jaejoon Jeong, Soo-Hyung Kim, Hyung-Jeong Yang, Gun Lee and Seungwon Kim
Electronics 2025, 14(13), 2530; https://doi.org/10.3390/electronics14132530 - 22 Jun 2025
Viewed by 724
Abstract
Research on Virtual Reality (VR) interfaces for distant object interaction has been carried out to improve user experience. Since hand-only interfaces and gaze-only interfaces have limitations such as physical fatigue or restricted usage, VR interaction interfaces using both gaze and hand input have been proposed. However, current gaze + hand interfaces still have restrictions such as difficulty in translating along the gaze ray direction, using less realistic methods, or limited rotation support. This study aims to design a new distant object interaction technique that supports hand-based interaction with high freedom of object interaction in immersive VR. In this study, we developed GazeHand2, a hand-based object interaction technique, which features a new depth control that enables free object manipulation in VR. Building on the strength of the original GazeHand, GazeHand2 can control the change rate of the gaze depth by using the relative position of the hand, allowing users to translate the object to any position. To validate our design, we conducted a user study on object manipulation, which compares it with other gaze + hand interfaces (Gaze+Pinch and ImplicitGaze). Results showed that, compared to the other conditions, GazeHand2 reduced 39.3% to 54.3% of hand movements and 27.8% to 47.1% of head movements in the 3 m and 5 m tasks. It also significantly improved overall user-experience ratings (0.69 to 1.12 pt higher than Gaze+Pinch and 1.18 to 1.62 pt higher than ImplicitGaze). Furthermore, over half of the participants preferred GazeHand2 because it supports convenient and efficient object translation and hand-based realistic object manipulation. We concluded that GazeHand2 can support simple and effective distant object interaction with reduced physical fatigue and a better user experience compared to the other interfaces in immersive VR. We suggest design directions to improve interaction accuracy and user convenience in future work.
(This article belongs to the Section Computer Science & Engineering)

19 pages, 1224 KiB  
Article
Charting the Future of Maritime Education and Training: A Technology-Acceptance-Model-Based Pilot Study on Students’ Behavioural Intention to Use a Fully Immersive VR Engine Room Simulator
by David Bačnar, Demir Barić and Dario Ogrizović
Appl. Syst. Innov. 2025, 8(3), 84; https://doi.org/10.3390/asi8030084 - 19 Jun 2025
Viewed by 719
Abstract
Fully immersive engine room simulators are increasingly recognised as prominent tools in advancing maritime education and training. However, end-users’ acceptance of these innovative technologies remains insufficiently explored. To address this research gap, this case-specific pilot study applied the Technology Acceptance Model (TAM) to explore maritime engineering students’ intentions to adopt the newly introduced head-mounted display (HMD) virtual reality (VR) engine room simulator as a training tool. Sampling (N = 84) was conducted at the Faculty of Maritime Studies, University of Rijeka, during the initial simulator trials. Structural Equation Modelling (SEM) revealed that perceived usefulness was the primary determinant of students’ behavioural intention to accept the simulator as a tool for training purposes, acting both as a direct predictor and as a mediating variable, transmitting the positive effect of perceived ease of use onto the intention. By providing preliminary empirical evidence on the key factors influencing maritime engineering students’ intentions to adopt HMD-VR simulation technologies within existing training programmes, this study’s findings might offer valuable insights to software developers and educators in shaping future simulator design and enhancing pedagogical practices in alignment with maritime education and training (MET) standards.
(This article belongs to the Special Issue Advanced Technologies and Methodologies in Education 4.0)

21 pages, 4981 KiB  
Article
FEM Simulation of FDS Response in Oil-Impregnated Paper Insulation of Current Transformers with Axial Aging Variation
by Lujia Wang, Yutong Zhang, Ling Yang, Xiaoyu Hu, Sien Xu, Weimin Huang and Longzhen Wang
Energies 2025, 18(12), 3163; https://doi.org/10.3390/en18123163 - 16 Jun 2025
Viewed by 345
Abstract
The aging of oil-impregnated paper (OIP) insulation is one of the key factors influencing the service life of oil-immersed current transformers. Frequency domain spectroscopy (FDS), supported by mathematical models or simulation methods, is commonly used to evaluate insulation conditions. However, traditional aging models typically ignore the significant aging differences between the transformer OIP head and straight sections caused by the axial temperature gradient. To address this limitation, an accelerated thermal aging experiment was performed on a full-scale oil-immersed inverted current transformer prototype. Based on the analysis of its internal temperature field, the axial temperature gradient boundary of the main insulation was identified. By applying region-specific aging control strategies to different axial segments, a FEM model incorporating axial aging variation was developed to analyze its influence on FDS. The simulation results closely matched experimental data, with a maximum deviation below 9.22%. The model’s applicability was further confirmed through the aging prediction of an in-service transformer. The proposed model is expected to provide a more accurate basis for predicting the FDS characteristics of OIP insulation in current transformers.
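FDS characterizes oil-paper insulation through its frequency-dependent complex permittivity. The single Debye relaxation below is a generic textbook model with made-up parameter values, not the paper's FEM model; it only illustrates how the dissipation factor tan δ is read off a dielectric response curve over the FDS band.

```python
import math

def debye_permittivity(freq_hz, eps_inf=2.2, d_eps=0.8, tau=0.05):
    """Complex relative permittivity of a single Debye relaxation:
    eps(w) = eps_inf + d_eps / (1 + j*w*tau); returns (eps', eps'')."""
    w = 2.0 * math.pi * freq_hz
    denom = 1.0 + (w * tau) ** 2
    real = eps_inf + d_eps / denom          # eps' (storage)
    imag = d_eps * w * tau / denom          # eps'' (loss)
    return real, imag

# Dissipation factor tan(delta) = eps'' / eps' across typical FDS frequencies.
for f in (1e-3, 1e-1, 1e1, 1e3):
    re, im = debye_permittivity(f)
    print(f"{f:g} Hz: tan_delta = {im / re:.4f}")
```

With these illustrative parameters the loss peaks near ωτ ≈ 1; aged insulation would shift and raise such curves, which is the signature the paper's model predicts.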

18 pages, 5112 KiB  
Article
Gaze–Hand Steering for Travel and Multitasking in Virtual Environments
by Mona Zavichi, André Santos, Catarina Moreira, Anderson Maciel and Joaquim Jorge
Multimodal Technol. Interact. 2025, 9(6), 61; https://doi.org/10.3390/mti9060061 - 13 Jun 2025
Viewed by 526
Abstract
As head-mounted displays (HMDs) with eye tracking become increasingly accessible, the need for effective gaze-based interfaces in virtual reality (VR) grows. Traditional gaze- or hand-based navigation often limits user precision or impairs free viewing, making multitasking difficult. We present a gaze–hand steering technique that combines eye tracking with hand pointing: users steer only when gaze aligns with a hand-defined target, reducing unintended actions and enabling free look. Speed is controlled via either a joystick or a waist-level speed circle. We evaluated our method in a user study (n = 20) across multitasking and single-task scenarios, comparing it to a similar technique. Results show that gaze–hand steering maintains performance and enhances user comfort and spatial awareness during multitasking. Our findings support the use of gaze–hand steering in gaze-dominant VR applications that require precision and simultaneous interaction, improving navigation while supporting immersion and efficient control.
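The trigger condition described above (steer only while the gaze ray points at the hand-defined target) reduces to an angular-alignment test between two direction vectors. The 5° threshold and the vector layout below are illustrative assumptions, not the paper's implementation.

```python
import math

def normalize(v):
    """Scale a 3D vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def gaze_aligned(gaze_dir, head_pos, hand_target, threshold_deg=5.0):
    """True when the gaze ray points at the hand-defined target within
    an angular threshold, i.e. the 'steer' gate is open."""
    to_target = normalize(tuple(t - h for t, h in zip(hand_target, head_pos)))
    cos_angle = sum(g * t for g, t in zip(normalize(gaze_dir), to_target))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return angle <= threshold_deg

head = (0.0, 1.7, 0.0)
target = (0.0, 1.7, -5.0)                              # straight ahead, 5 m away
print(gaze_aligned((0.0, 0.0, -1.0), head, target))    # looking at the target
print(gaze_aligned((0.2, 0.0, -1.0), head, target))    # roughly 11 degrees off
```

Gating steering on this test is what lets the user look around freely without moving: glances away from the target simply close the gate instead of changing direction.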

16 pages, 1891 KiB  
Article
Effect of Pre-Freezing 18 °C Holding Time on Post-Thaw Motility and Morphometry of Cryopreserved Boar Epididymal Sperm
by Mamonene Angelinah Thema, Ntuthuko Raphael Mkhize, Maleke Dimpho Sebopela, Mahlatsana Ramaesela Ledwaba and Masindi Lottus Mphaphathi
Animals 2025, 15(12), 1691; https://doi.org/10.3390/ani15121691 - 7 Jun 2025
Viewed by 550
Abstract
The study investigated the sperm motility and morphometry of pre-freeze and post-thaw boar epididymal semen cooled at increasing holding times at 18 °C. A total of 50 testes of heterogeneous boars were collected (5 testes/day) from the local abattoir and transported to the laboratory at 5 °C within 30 min after slaughter. Semen was retrieved from the caudal part of the epididymis using the slicing float-up method, diluted with Beltsville Thawing Solution extender, pooled in a 50 mL centrifuge tube/5 testes/day, and cooled at 18 °C. Following each holding time (0, 3, 6, 9, 12, 24, and 48 h), the cooled semen sample was re-suspended with Fraction A extender and stored at 5 °C for an additional 45 min. The cooled, resuspended semen sample was then diluted with Fraction B extender, loaded into 0.25 mL straws, and frozen using liquid nitrogen vapour. Thawing was accomplished by immersing the semen straws in warm (37 °C) water for 1 min, and the samples were evaluated for sperm motility and morphometry traits using a computer-assisted sperm analysis system. The data were analyzed using analysis of variance. Descriptive statistics were used to assess sperm morphometry, establishing the minimum and maximum values. Boar epididymal sperm survived for up to 48 h when held at 18 °C. Furthermore, the highest post-thaw sperm motility rates were observed in semen frozen after 3 h of holding time, with a total motility of 85.9%, a progressive motility of 60.3%, and a rapid motility of 33.2%, as compared to other holding times (p < 0.05). The acceptable ranges for pre-freeze and post-thaw sperm morphometry (head length 8.4–9.1 µm, width 4.4–4.8 µm, area 29.9–38.2 µm², perimeter 20.1–23.7 µm, and midpiece width 1.1–2.8 µm) and sperm shape were consistent regardless of the holding time. A holding time of 3 h enhances the cryoresistance of sperm cooled at 18 °C. Therefore, these findings suggest that boar epididymal sperm can be effectively conserved and can maintain fertilization capability when cooled for 3 h at 18 °C before freezing.
(This article belongs to the Section Animal Reproduction)

26 pages, 1812 KiB  
Article
Evaluating Virtual Game Design for Cultural Heritage Interpretation: An Exploratory Study on arkeOyun
by Sevde Güner and Leman Figen Gül
Heritage 2025, 8(6), 208; https://doi.org/10.3390/heritage8060208 - 4 Jun 2025
Viewed by 1517
Abstract
The interpretation of archaeological heritage encounters inherent challenges due to the fragmentation and contextual loss of the physical site. Virtual reality has emerged as an innovative medium for enhancing user engagement and promoting meaningful dissemination of culture. This exploratory study investigates the design and preliminary expert-based evaluation of arkeOyun, a virtual reality game created to better understand archaeological sites’ spatial and cultural significance, by sampling the Kültepe Archaeological Site. The aim of this study is to evaluate the usefulness of virtual game-based approaches in the dissemination of cultural heritage and user interaction, emphasising spatial clarity, narrative integration, and immersive engagement. Our study incorporates qualitative and quantitative methods, utilising concurrent think-aloud and heuristic evaluation with participants who were selected due to their expertise in heritage, design, and human–computer interaction domains. Participants engaged with arkeOyun via a head-mounted display, and their real-time comments and post-experience evaluations were systematically analysed. Results indicate that although participants responded positively to the game’s immersive design, interface simplicity, and spatial organisation, notable deficiencies were seen in narrative coherence, emotional resonance, and multimodal feedback. Navigation and the presentation of informative content were identified as critical areas requiring improvement. The data triangulation revealed both consistent and varying assessments, highlighting the need for context-specific support, varied task structures, and emotionally compelling narratives for enhanced interpretation of cultural significance. The findings of our study illustrate the potential of virtual reality games as a medium for cultural heritage interpretation via arkeOyun. For experiences to evolve from immersive simulations to major interpretative platforms, it is vital to integrate narrative frameworks, multimodal scaffolding, and user-centred interaction tactics more deeply. The results of this exploratory pilot study present preliminary findings on integrating virtual reality games in archaeological heritage interpretation and contribute to further projects.
(This article belongs to the Special Issue Heritage as a Design Resource for Virtual Reality)

11 pages, 6922 KiB  
Article
The Feasibility and Clinical Evaluation of an Immersive Augmented Reality Surgical Headset Integrated with Swept-Source Intraoperative Optical Coherence Tomography for Ophthalmic Surgery in the DISCOVER Study
by Masaharu Mizuno, Karen Matar, Reem Amine, Katherine E. Talcott, Jeffrey M. Goshe, William J. Dupps, Sumit Sharma, Asmita Indurkar, John Mamone, Jamie Reese, Sunil K. Srivastava and Justis P. Ehlers
Diagnostics 2025, 15(11), 1394; https://doi.org/10.3390/diagnostics15111394 - 30 May 2025
Viewed by 657
Abstract
Objectives: To evaluate the feasibility and utility of intraoperative optical coherence tomography (iOCT) utilizing an immersive augmented reality surgical headset (Beyeonics iOCT, Beyeonics Vision Ltd., Haifa, Israel) digital visualization platform with swept-source integrated OCT in ophthalmic surgery. Methods: As part of the Institutional Review Board-approved prospective DISCOVER study, the Beyeonics iOCT was utilized in multiple ophthalmic surgical procedures to evaluate the feasibility and utility of iOCT with this platform. The Beyeonics iOCT is a three-dimensional surgical visualization system that utilizes a swept-source integrated OCT within the digital microscope system. Surgeon feedback on system performance and integration into the surgical workflow was gathered via a prespecified survey. Results: Thirteen eyes of thirteen patients were included in this study. The surgical procedures consisted of four cataract surgeries, two lamellar corneal transplants, one pterygium removal, and six vitreoretinal surgeries. Surgeons were able to successfully view and review the iOCT images within the surgical Head-Mounted Display, eliminating the need for an external display. Utility feedback from surgeons included iOCT assisting with confirming wound architecture, corneal graft orientation, and retinal structure. All surgeries were completed without reverting to a conventional microscope, and no intraoperative adverse events occurred. Conclusions: The new visualization platform with integrated swept-source iOCT demonstrated feasibility and potential utility in multiple ophthalmic surgical platforms. Additional research related to outcomes, ergonomics, and enhanced software analysis is needed in the future.
(This article belongs to the Special Issue New Perspectives in Ophthalmic Imaging)

20 pages, 76650 KiB  
Article
Enhancing Cultural Heritage Engagement with Novel Interactive Extended-Reality Multisensory System
by Adolfo Muñoz, Juan José Climent-Ferrer, Ana Martí-Testón, J. Ernesto Solanes and Luis Gracia
Electronics 2025, 14(10), 2039; https://doi.org/10.3390/electronics14102039 - 16 May 2025
Viewed by 1258
Abstract
Extended-reality (XR) tools are increasingly used to revitalise museum experiences, but typical head-mounted or smartphone solutions tend to fragment audiences and suppress the social dialogue that makes cultural heritage memorable. This article addresses that gap on two fronts. First, it proposes a four-phase design methodology—spanning artifact selection, narrative framing, tangible-interface fabrication, spatial installation, software integration, validation, and deployment—that helps curators, designers, and technologists to co-create XR exhibitions in which co-presence, embodied action, and multisensory cues are treated as primary design goals rather than afterthoughts. Second, the paper reports LanternXR, a proof-of-concept built with the methodology: visitors share a 3D-printed replica of the fourteenth-century Virgin of Boixadors while wielding a tracked “camera” and a candle-like lantern that lets them illuminate, photograph, and annotate the sculpture inside a life-sized Gothic nave rendered on large 4K displays with spatial audio and responsive lighting. To validate the approach, the article presents an analytical synthesis of feedback from curators, museologists, and XR technologists, underscoring the system’s capacity to foster collaboration, deepen engagement, and broaden accessibility. The findings show how XR can move museum audiences from isolated immersion to collective, multisensory exploration.

23 pages, 4826 KiB  
Article
Visualization of High-Intensity Laser–Matter Interactions in Virtual Reality and Web Browser
by Martin Matys, James P. Thistlewood, Mariana Kecová, Petr Valenta, Martina Greplová Žáková, Martin Jirka, Prokopis Hadjisolomou, Alžběta Špádová, Marcel Lamač and Sergei V. Bulanov
Photonics 2025, 12(5), 436; https://doi.org/10.3390/photonics12050436 - 30 Apr 2025
Viewed by 1125
Abstract
We present the Virtual Beamline (VBL) application, an interactive web-based platform for visualizing high-intensity laser–matter interactions using particle-in-cell (PIC) simulations, with future potential for experimental data visualization. These interactions include ion acceleration, electron acceleration, γ-flash generation, electron–positron pair production, and attosecond and spiral pulse generation. Developed at the ELI Beamlines facility, VBL integrates a custom-built WebGL engine with WebXR-based Virtual Reality (VR) support, allowing users to explore complex plasma dynamics in non-VR mode on a computer screen or in fully immersive VR mode using a head-mounted display. The application runs directly in a standard web browser, ensuring broad accessibility. VBL enhances the visualization of PIC simulations by efficiently processing and rendering four main data types: point particles, 1D lines, 2D textures, and 3D volumes. By utilizing interactive 3D visualization, it overcomes the limitations of traditional 2D representations, offering enhanced spatial understanding and real-time manipulation of visualization parameters such as time steps, data layers, and colormaps. Users can interactively explore the visualized data by moving their body or using a controller for navigation, zooming, and rotation. These interactive capabilities improve data exploration and interpretation, making VBL a valuable tool for both scientific analysis and educational outreach. The visualizations are hosted online and freely accessible on our server, providing researchers, the general public, and broader audiences with an interactive tool to explore complex plasma physics simulations. By offering an intuitive and dynamic approach to large-scale datasets, VBL enhances both scientific research and knowledge dissemination in high-intensity laser–matter physics.

17 pages, 2942 KiB  
Article
Profiling Students by Perceived Immersion: Insights from VR Engine Room Simulator Trials in Maritime Higher Education
by Luka Liker, Demir Barić, Ana Perić Hadžić and David Bačnar
Appl. Sci. 2025, 15(7), 3786; https://doi.org/10.3390/app15073786 - 30 Mar 2025
Cited by 1 | Viewed by 775
Abstract
Research on students’ immersive experiences with fully immersive virtual reality (VR) technologies is extensively documented across diverse educational settings; however, in maritime higher education, it remains relatively underrepresented. Therefore, by using segmentation analysis, this study aims to profile maritime engineering students at the Faculty of Maritime Studies, University of Rijeka, by perceived immersion (PIMM) within a Head-Mounted Display (HMD) VR engine room simulator and to explore differences in their perceived learning benefits (PLBs), future behavioural intentions (FBI), and satisfaction (SAT) with the HMD-VR experience. The sample comprised 84 participants who engaged in preliminary HMD-VR engine room simulator trials. A non-hierarchical (K-means) cluster analysis, combined with the Elbow method, identified two distinct and homogeneous groups: Immersionists and Conformists. The results of an independent sample t-test indicated that Immersionists exhibited significantly higher scores regarding perceived learning benefits, future behavioural intentions, and overall satisfaction than Conformists. The study results underscore the significance of understanding students’ subjective perception of immersion in the implementation and further development of fully immersive VR technologies within maritime education and training (MET) curricula. However, as the study is based on a specific case within a particular educational context, the results may not directly apply to the broader student population.
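The segmentation step above (non-hierarchical K-means clustering of perceived-immersion responses into two groups) can be sketched with a minimal 1D Lloyd's algorithm. The scores below are synthetic and the feature is one-dimensional for brevity; the study's actual data and survey instrument are not reproduced here.

```python
def kmeans_1d(scores, k=2, iters=50):
    """Minimal Lloyd's algorithm on 1D data: assign each point to the
    nearest centroid, then move each centroid to its cluster mean."""
    centroids = [min(scores), max(scores)]  # spread initialisation for k=2
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in scores:
            nearest = min(range(k), key=lambda i: abs(x - centroids[i]))
            clusters[nearest].append(x)
        new = [sum(c) / len(c) if c else centroids[i]
               for i, c in enumerate(clusters)]
        if new == centroids:   # converged: assignments no longer change
            break
        centroids = new
    return centroids, clusters

# Synthetic perceived-immersion scores on a 1-5 scale (illustrative only).
scores = [4.8, 4.6, 4.9, 4.5, 4.7, 3.1, 2.9, 3.3, 3.0, 2.8]
centroids, clusters = kmeans_1d(scores)
immersionists = max(clusters, key=lambda c: sum(c) / len(c))
print(len(immersionists))  # size of the high-immersion cluster
```

In the study the choice of k = 2 was supported by the Elbow method (within-cluster variance versus k), which this fixed-k sketch omits.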

14 pages, 2113 KiB  
Article
Immersive Virtual Reality for Enabling Patient Experience and Enrollment in Oncology Clinical Trials: A Feasibility Study
by Frank Tsai, Landon Gray, Amy Mirabella, Margaux Steinbach, Jacqueline M. Garrick, Nadine J. Barrett, Nelson Chao and Frederic Zenhausern
Cancers 2025, 17(7), 1148; https://doi.org/10.3390/cancers17071148 - 29 Mar 2025
Viewed by 818
Abstract
Background/Objectives: Informed consent is a crucial part of the clinical trial enrollment process in which patients are asked to understand and provide approval for medical interventions. Consent forms can be complex and hinder patient comprehension, highlighting the need for novel tools to improve the patient enrollment experience. This feasibility study aimed to develop an immersive technology to enroll human subjects in oncology clinical trials and provide 3D avatar-based informed consent in a virtual reality (VR) environment. Methods: Clinical feasibility and the effects of head-mounted VR devices on motion sickness and educational quality were evaluated in adult oncology patients enrolled in an intravenous (IV) port placement intervention study. Participants received before- and after-questionnaires to measure their understanding of the information received in VR. A follow-up questionnaire was given four weeks post-consent to measure knowledge retention. Results: Clinical staff reported that VR technology was manageable to use. Among 16 adult participants, all reported that VR was well tolerated with no motion sickness. The mean pre-intervention knowledge score was 64.6%, with an immediate post-intervention knowledge score of 97.9%. A mean knowledge score of 93.3% four weeks post-consent was observed among the 10/16 participants who completed a follow-up questionnaire. Conclusions: These findings indicate that VR is well tolerated and effective at delivering information during the informed consent process for oncology clinical trials. Key limitations include the small sample size and single clinical population. Further trials are warranted to compare efficacy against traditional consenting mechanisms and to include more diverse clinical populations among a wider participant pool.
(This article belongs to the Special Issue Digital Health Technologies in Oncology)

29 pages, 40685 KiB  
Article
Evaluating the Benefits and Drawbacks of Visualizing Systems Modeling Language (SysML) Diagrams in the 3D Virtual Reality Environment
by Mostafa Lutfi and Ricardo Valerdi
Systems 2025, 13(4), 221; https://doi.org/10.3390/systems13040221 - 23 Mar 2025
Viewed by 1246
Abstract
Model-Based Systems Engineering (MBSE) prioritizes system design through models rather than documents, and it is implemented with the Systems Modeling Language (SysML), which is the state-of-the-art language in academia and industry. Virtual Reality (VR), an immersive visualization technology, can simulate reality in virtual environments with varying degrees of fidelity. In recent years, the technology industry has invested substantially in the development of head-mounted displays (HMDs) and related virtual reality (VR) technologies. Various research has suggested that VR-based immersive design reviews enhance system issue/fault identification, collaboration, focus, and presence compared to non-immersive approaches. Additionally, several research efforts have demonstrated that the VR environment provides higher understanding and knowledge retention levels than traditional approaches. In recent years, multiple attempts have been made to visualize conventional 2D SysML diagrams in a virtual reality environment. To the best of the author’s knowledge, no empirical evaluation has been performed to analyze the benefits and drawbacks of visualizing SysML diagrams in a VR environment. Hence, the authors aimed to evaluate four key benefit types and drawbacks through experiments with human subjects. The authors chose four benefit types—Systems Understanding, Information Sharing, Modeling and Training Experience, and Digital Twin based on the MBSE value and benefits review performed by researchers and benefits claimed by the evaluations for similar visual formalism languages. Experiments were conducted to compare the understanding, interaction, and knowledge retention for 3D VR and conventional 2D SysML diagrams. The authors chose a ground-based telescope system as the system of interest (SOI) for system modeling. The authors utilized a standalone wireless HMD unit for a virtual reality experience, which enabled experiments to be conducted irrespective of location. 
Students and experts from multiple disciplines, including systems engineering, participated in the experiment and provided their opinions on the VR SysML implementation. The knowledge tests, perceived-evaluation results, and post-completion surveys were analyzed to determine whether the 3D VR SysML implementation delivered these benefits and to identify potential drawbacks. The authors also applied established VR efficacy measures, namely the Simulator Sickness Questionnaire (SSQ) and the System Usability Scale (SUS), to rule out evaluation-design-related anomalies. Full article

21 pages, 454 KiB  
Review
The Role of Immersive Virtual Reality in Upper Limb Rehabilitation for Subacute Stroke: A Review
by Danilo Donati, Elena Pinotti, Monica Mantovani, Silvia Casarotti, Annalisa Fini, Roberto Tedeschi and Serena Caselli
J. Clin. Med. 2025, 14(6), 1903; https://doi.org/10.3390/jcm14061903 - 12 Mar 2025
Cited by 1 | Viewed by 2197
Abstract
Background: Patients with stroke sequelae experience motor impairments that make many activities of daily living difficult, resulting in reduced social participation. Immersive virtual reality (VR) provides conditions necessary for motor learning, such as repetitiveness, intensity, and task meaningfulness, and could be a promising rehabilitation tool for upper limb recovery in individuals with stroke sequelae. Objective: The objectives of this study are to summarize the current scientific evidence on the use of immersive VR for upper limb rehabilitation in patients with subacute stroke and to identify clinical and instrumental criteria that may inform the development of a standardized VR treatment protocol. Materials and Methods: A bibliographic search for primary and secondary studies was conducted using the keywords "subacute stroke", "immersive virtual reality/head-mounted display (HMD)", and "upper extremity/arm/hand" in the following electronic databases: CINAHL, PubMed (MEDLINE), Embase, Web of Science, Cochrane Library, PEDro, and Google Scholar. Studies were then selected, their methodological quality was assessed using the PEDro scale, and a qualitative synthesis of the data extracted from the selected studies was carried out. This systematic review was conducted according to the PRISMA 2020 guidelines. Results: After the selection process, five studies were included in this systematic review (two RCTs, two controlled clinical studies, and one study protocol). Four studies reported significant improvements in some main outcomes after the VR intervention, including a significant increase in the Fugl-Meyer Upper Extremity total score, in favor of the virtual rehabilitation group. Conclusions: VR appears to be a promising rehabilitation tool for upper limb motor recovery. However, further research is needed to determine the optimal intervention methods and the long-term effects of VR in the stroke population.
Full article

21 pages, 1111 KiB  
Article
Comparative Analysis of Audio Feature Extraction for Real-Time Talking Portrait Synthesis
by Pegah Salehi, Sajad Amouei Sheshkal, Vajira Thambawita, Sushant Gautam, Saeed S. Sabet, Dag Johansen, Michael A. Riegler and Pål Halvorsen
Big Data Cogn. Comput. 2025, 9(3), 59; https://doi.org/10.3390/bdcc9030059 - 4 Mar 2025
Viewed by 1840
Abstract
This paper explores advancements in real-time talking-head generation, focusing on overcoming challenges in Audio Feature Extraction (AFE), which often introduces latency and limits responsiveness in real-time applications. To address these issues, we propose and implement a fully integrated system that replaces conventional AFE models with OpenAI’s Whisper, leveraging its encoder to optimize processing and improve overall system efficiency. Our evaluation of two open-source real-time models across three different datasets shows that Whisper not only accelerates processing but also improves specific aspects of rendering quality, resulting in more realistic and responsive talking-head interactions. Although interviewer training systems are considered a potential application, the primary contribution of this work is the improvement of the technical foundations necessary for creating responsive AI avatars. These advancements enable more immersive interactions and expand the scope of AI-driven applications, including educational tools and simulated training environments. Full article