Search Results (206)

Search Parameters:
Keywords = Head-Mounted Display (HMD)

19 pages, 3117 KiB  
Article
Feasibility and Accuracy of a Dual-Function AR-Guided System for PSI Positioning and Osteotomy Execution in Pelvic Tumour Surgery: A Cadaveric Study
by Tanya Fernández-Fernández, Javier Orozco-Martínez, Carla de Gregorio-Bermejo, Elena Aguilera-Jiménez, Amaia Iribar-Zabala, Lydia Mediavilla-Santos, Javier Pascau, Mónica García-Sevilla, Rubén Pérez-Mañanes and José Antonio Calvo-Haro
Bioengineering 2025, 12(8), 810; https://doi.org/10.3390/bioengineering12080810 - 28 Jul 2025
Viewed by 299
Abstract
Objectives: Pelvic tumor resections demand high surgical precision to ensure clear margins while preserving function. Although patient-specific instruments (PSIs) improve osteotomy accuracy, positioning errors remain a limitation. This study evaluates the feasibility, accuracy, and usability of a novel dual-function augmented reality (AR) system for intraoperative guidance in PSI positioning and osteotomy execution using a head-mounted display (HMD). The system provides dual-function support by assisting both PSI placement and osteotomy execution. Methods: Ten fresh-frozen cadaveric hemipelves underwent AR-assisted internal hemipelvectomy, using customized 3D-printed PSIs and new in-house AR software integrated into an HMD. Angular and translational deviations between planned and executed osteotomies were measured using postoperative CT analysis. Absolute angular errors were computed from plane normals; translational deviation was assessed as maximum error at the osteotomy corner point in both sagittal (pitch) and coronal (roll) planes. A Wilcoxon signed-rank test and Bland–Altman plots were used to assess intra-workflow cumulative error. Results: The mean absolute angular deviation was 5.11 ± 1.43°, with 86.66% of osteotomies within acceptable thresholds. Maximum pitch and roll deviations were 4.53 ± 1.32 mm and 2.79 ± 0.72 mm, respectively, with 93.33% and 100% of osteotomies meeting translational accuracy criteria. Wilcoxon analysis showed significantly lower angular error when comparing final executed planes to intermediate AR-displayed planes (p < 0.05), supporting improved PSI positioning accuracy with AR guidance. Surgeons rated the system highly (mean satisfaction ≥ 4.0) for usability and clinical utility. Conclusions: This cadaveric study confirms the feasibility and precision of an HMD-based AR system for PSI-guided pelvic osteotomies. The system demonstrated strong accuracy and high surgeon acceptance, highlighting its potential for clinical adoption in complex oncologic procedures.
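The abstract's angular metric, the absolute angle between the planned and executed osteotomy plane normals, reduces to a short vector computation. A minimal sketch, with hypothetical normals standing in for vectors segmented from postoperative CT:

```python
import math

def angular_deviation_deg(n_planned, n_executed):
    """Absolute angle (degrees) between two plane normal vectors."""
    dot = sum(a * b for a, b in zip(n_planned, n_executed))
    norm = (math.sqrt(sum(a * a for a in n_planned))
            * math.sqrt(sum(b * b for b in n_executed)))
    cos_theta = max(-1.0, min(1.0, dot / norm))  # clamp against rounding
    angle = math.degrees(math.acos(cos_theta))
    # Plane normals are sign-ambiguous: fold angles past 90 degrees back.
    return min(angle, 180.0 - angle)

# Hypothetical planned vs. executed cutting-plane normals.
planned = (0.0, 0.0, 1.0)
executed = (0.05, 0.03, 0.998)
print(round(angular_deviation_deg(planned, executed), 2))
```

Folding angles past 90° handles the sign ambiguity of plane normals, so a flipped normal does not register as a large error.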

25 pages, 5055 KiB  
Article
FlickPose: A Hand Tracking-Based Text Input System for Mobile Users Wearing Smart Glasses
by Ryo Yuasa and Katashi Nagao
Appl. Sci. 2025, 15(15), 8122; https://doi.org/10.3390/app15158122 - 22 Jul 2025
Viewed by 369
Abstract
With the growing use of head-mounted displays (HMDs) such as smart glasses, text input remains a challenge, especially in mobile environments. Conventional methods like physical keyboards, voice recognition, and virtual keyboards each have limitations—physical keyboards lack portability, voice input has privacy concerns, and virtual keyboards struggle with accuracy due to a lack of tactile feedback. FlickPose is a novel text input system designed for smart glasses and mobile HMD users, integrating flick-based input and hand pose recognition. It features two key selection methods: the touch-panel method, where users tap a floating UI panel to select characters, and the raycast method, where users point a virtual ray from their wrist and confirm input via a pinch motion. FlickPose uses five left-hand poses to select characters. A machine learning model trained for hand pose recognition outperforms Random Forest and LightGBM models in accuracy and consistency. FlickPose was tested against the standard virtual keyboard of Meta Quest 3 in three tasks (hiragana, alphanumeric, and kanji input). Results showed that raycast had the lowest error rate, reducing unintended key presses; touch-panel had more deletions, likely due to misjudgments in key selection; and frequent HMD users preferred raycast, as it maintained input accuracy while allowing users to monitor their text. A key feature of FlickPose is adaptive tracking, which ensures the keyboard follows user movement. While further refinements in hand pose recognition are needed, the system provides an efficient, mobile-friendly alternative for HMD text input. Future research will explore real-world application compatibility and improve usability in dynamic environments.
(This article belongs to the Special Issue Extended Reality (XR) and User Experience (UX) Technologies)
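The raycast method's pinch confirmation can be sketched as a fingertip-distance test. The 2 cm threshold and the coordinates below are hypothetical illustrations, not values from the paper:

```python
import math

PINCH_THRESHOLD_M = 0.02  # hypothetical: fingertips closer than 2 cm count as a pinch

def distance(p, q):
    """Euclidean distance between two 3-D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def is_pinching(thumb_tip, index_tip, threshold=PINCH_THRESHOLD_M):
    """Confirm input when thumb and index fingertips touch."""
    return distance(thumb_tip, index_tip) < threshold

# Hypothetical fingertip positions in metres (e.g. from an HMD hand tracker).
print(is_pinching((0.10, 0.00, 0.30), (0.105, 0.002, 0.301)))
```

A real system would debounce this over several frames so tracking jitter near the threshold does not toggle the confirmation.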

12 pages, 8520 KiB  
Article
Integrated Haptic Feedback with Augmented Reality to Improve Pinching and Fine Moving of Objects
by Jafar Hamad, Matteo Bianchi and Vincenzo Ferrari
Appl. Sci. 2025, 15(13), 7619; https://doi.org/10.3390/app15137619 - 7 Jul 2025
Viewed by 455
Abstract
Hand gestures are essential for interaction in augmented and virtual reality (AR/VR), allowing users to intuitively manipulate virtual objects and engage with human–machine interfaces (HMIs). Accurate gesture recognition is critical for effective task execution. However, users often encounter difficulties due to the lack of immediate and clear feedback from head-mounted displays (HMDs). Current tracking technologies cannot always guarantee reliable recognition, leaving users uncertain about whether their gestures have been successfully detected. To address this limitation, haptic feedback can play a key role by confirming gesture recognition and compensating for discrepancies between the visual perception of fingertip contact with virtual objects and the actual system recognition. The goal of this paper is to compare a simple vibrotactile ring with a full glove device, and to identify possible improvements each offers for a fundamental gesture such as pinching and finely moving objects using the Microsoft HoloLens 2. Because the pinch is an essential fine motor skill, augmented reality integrated with haptic feedback can notify the user that a gesture has been recognized and compensate for misalignment between the tracked fingertip and virtual objects, improving spatial precision. In our experiments, the participants' median distance error using bare hands over all axes was 10.3 mm (interquartile range [IQR] = 13.1 mm) in a median time of 10.0 s (IQR = 4.0 s). While both haptic devices improved participants' precision relative to the bare-hands case, participants achieved median errors of 2.4 mm (IQR = 5.2 mm) in a median time of 8.0 s (IQR = 6.0 s) with the full glove, and even better performance with the haptic rings: median errors of 2.0 mm (IQR = 2.0 mm) in a shorter median time of only 6.0 s (IQR = 5.0 s). Our outcomes suggest that simple devices like the described haptic rings can outperform glove-like devices in accuracy, execution time, and wearability. The haptic glove likely interferes with hand and finger tracking on the Microsoft HoloLens 2.
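The median/IQR summaries reported above can be reproduced with the standard library. The error samples here are hypothetical, not the study's measurements:

```python
import statistics

def median_iqr(samples):
    """Median and interquartile range (Q3 - Q1) of a sample."""
    q1, _, q3 = statistics.quantiles(samples, n=4)  # default 'exclusive' method
    return statistics.median(samples), q3 - q1

# Hypothetical per-trial distance errors in mm (not the study's raw data).
errors_mm = [1.0, 1.5, 2.0, 2.0, 2.5, 3.0, 4.0]
med, iqr = median_iqr(errors_mm)
print(med, iqr)
```

Note that `statistics.quantiles` supports both exclusive and inclusive quartile conventions; papers rarely state which one they used, so exact IQR values may differ slightly between tools.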

16 pages, 2092 KiB  
Article
Augmented Reality-Assisted Placement of Surgical Guides and Osteotomy Execution for Pelvic Tumour Resections: A Pre-Clinical Feasibility Study Using 3D-Printed Models
by Tanya Fernández-Fernández, Javier Orozco-Martínez, Amaia Iribar-Zabala, Elena Aguilera Jiménez, Carla de Gregorio-Bermejo, Lydia Mediavilla-Santos, Javier Pascau, Mónica García-Sevilla, Rubén Pérez-Mañanes and Jose Antonio Calvo-Haro
Cancers 2025, 17(13), 2260; https://doi.org/10.3390/cancers17132260 - 7 Jul 2025
Viewed by 350
Abstract
Objectives: This pre-clinical feasibility study evaluates the accuracy of a novel augmented reality-based (AR-based) guidance technology using head-mounted displays (HMDs) for the placement of patient-specific instruments (PSIs)—also referred to as surgical guides—and osteotomy performance in pelvic tumour resections. The goal is to improve PSI placement accuracy and osteotomy execution while assessing user perception and workflow efficiency. Methods: The study was conducted on ten 3D-printed pelvic phantoms derived from CT scans of cadaveric specimens. Custom PSIs were designed and printed to guide osteotomies at the supraacetabular, symphysial, and ischial regions. An AR application was developed for the HoloLens 2 HMD to display PSI location and cutting planes. The workflow included manual supraacetabular PSI placement, AR-guided placement of the other PSIs and osteotomy execution. Postoperative CT scans were analysed to measure angular and distance errors in PSI placement and osteotomies. Task times and user feedback were also recorded. Results: The mean angular deviation for PSI placement was 2.20°, with a mean distance error of 1.19 mm (95% CI: 0.86 to 1.52 mm). Osteotomies showed an overall mean angular deviation of 3.73° compared to planned cuts, all within the predefined threshold of less than 5°. AR-assisted guidance added less than two minutes per procedure. User feedback highlighted the intuitive interface and high usability, especially for visualising cutting planes. Conclusions: Integrating AR through HMDs is a feasible and accurate method for enhancing PSI placement and osteotomy performance in pelvic tumour resections. The system provides reliable guidance even in cases of PSI failure and adds minimal time to the surgical workflow while significantly improving accuracy. Further validation in cadaveric models is needed to ensure its clinical applicability.
(This article belongs to the Special Issue Clinical Treatment of Osteosarcoma)

19 pages, 1224 KiB  
Article
Charting the Future of Maritime Education and Training: A Technology-Acceptance-Model-Based Pilot Study on Students’ Behavioural Intention to Use a Fully Immersive VR Engine Room Simulator
by David Bačnar, Demir Barić and Dario Ogrizović
Appl. Syst. Innov. 2025, 8(3), 84; https://doi.org/10.3390/asi8030084 - 19 Jun 2025
Viewed by 766
Abstract
Fully immersive engine room simulators are increasingly recognised as prominent tools in advancing maritime education and training. However, end-users' acceptance of these innovative technologies remains insufficiently explored. To address this research gap, this case-specific pilot study applied the Technology Acceptance Model (TAM) to explore maritime engineering students' intentions to adopt the newly introduced head-mounted display (HMD) virtual reality (VR) engine room simulator as a training tool. Sampling (N = 84) was conducted at the Faculty of Maritime Studies, University of Rijeka, during the initial simulator trials. Structural Equation Modelling (SEM) revealed that perceived usefulness was the primary determinant of students' behavioural intention to accept the simulator as a tool for training purposes, acting both as a direct predictor and as a mediating variable, transmitting the positive effect of perceived ease of use onto the intention. By providing preliminary empirical evidence on the key factors influencing maritime engineering students' intentions to adopt HMD-VR simulation technologies within existing training programmes, this study's findings might offer valuable insights to software developers and educators in shaping future simulator design and enhancing pedagogical practices in alignment with maritime education and training (MET) standards.
(This article belongs to the Special Issue Advanced Technologies and Methodologies in Education 4.0)
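The mediation structure this abstract describes (perceived ease of use acting on behavioural intention through perceived usefulness) follows the standard product-of-coefficients logic. The path coefficients below are purely illustrative, not the study's SEM estimates:

```python
# Illustrative standardized path coefficients (NOT the study's SEM estimates):
a = 0.60        # PEOU -> PU
b = 0.55        # PU -> BI, controlling for PEOU
c_prime = 0.10  # direct PEOU -> BI path

indirect = a * b            # effect of PEOU on BI transmitted through PU
total = c_prime + indirect  # total effect of PEOU on BI
print(round(indirect, 2), round(total, 2))
```

When, as reported here, the indirect path dominates the direct one, usefulness is doing most of the work of translating ease of use into intention.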

16 pages, 1471 KiB  
Article
Interpersonal Synchrony Affects the Full-Body Illusion
by Hiromu Ogawa, Hirotaka Uchitomi and Yoshihiro Miyake
Appl. Sci. 2025, 15(12), 6870; https://doi.org/10.3390/app15126870 - 18 Jun 2025
Viewed by 447
Abstract
The full-body illusion (FBI) is a phenomenon where individuals experience body perception not in their physical body but in an external virtual body. Previous studies have shown that the relationship between the self and the virtual body influences the occurrence and intensity of the FBI. However, the influence of interpersonal factors on the FBI has not been explored. This study investigated the effect of interpersonal synchrony on body perception through an evaluation experiment involving the FBI. Specifically, the participant and an experimenter clapped together while their movements were recorded by a video camera placed behind the participant and displayed to them via a head-mounted display (HMD). This setup presented synchronous visuotactile stimuli, aligning the visual feedback with the tactile sensations in the participant's hands, to induce the FBI. The experimenter's clapping rhythm was manipulated to either be synchronous or asynchronous with the participant's rhythm, thus controlling the state of movement synchronization between the participant and the experimenter. The impact on the participant's body perception was then assessed through subjective reports. The results indicated that when the clapping rhythm was synchronized with the other person, there was a significant reduction in touch referral to the participant's virtual body. Additionally, there was a trend toward a reduction in ownership. This study demonstrated for the first time that interpersonal synchrony affects body perception.
(This article belongs to the Special Issue Virtual and Augmented Reality: Theory, Methods, and Applications)

26 pages, 13313 KiB  
Article
Exploring Augmented Reality HMD Telemetry Data Visualization for Strategy Optimization in Student Solar-Powered Car Racing
by Jakub Forysiak, Piotr Krawiranda, Krzysztof Fudała, Zbigniew Chaniecki, Krzysztof Jóźwik, Krzysztof Grudzień and Andrzej Romanowski
Energies 2025, 18(12), 3196; https://doi.org/10.3390/en18123196 - 18 Jun 2025
Viewed by 451
Abstract
This article explores how different modalities of presenting telemetry data can support strategy management during solar-powered electric vehicle racing. Student team members using augmented reality head-mounted displays (AR HMD) have reported significant advantages for in-race strategy monitoring and execution, yet so far, there is no published evidence to support these claims. This study shows that there are specific situations in which various visualization modes, including AR HMDs, demonstrate improved performance for users with varying levels of experience. We analyzed racing team performance for specific in-race events extracted from free and circuit-based real race datasets. These findings were compared with results obtained in a controlled, task-based user study utilizing three visualization interface conditions. Our exploration focused on how telemetry data visualizations influenced user performance metrics such as event reaction time, decision adequacy, task load index, and usability outcomes across four event types, taking into account both the interface and participant experience level. The results reveal that while traditional web application-type visualizations work well in most cases, augmented reality has the potential to improve race performance in some of the examined free-race and circuit-race scenarios. A notable novelty and key finding of this study is that the use of augmented reality HMDs provided particularly significant advantages for less experienced participants in most of the tasks, underscoring the substantial benefits of this technology for the support of novice users.

18 pages, 5112 KiB  
Article
Gaze–Hand Steering for Travel and Multitasking in Virtual Environments
by Mona Zavichi, André Santos, Catarina Moreira, Anderson Maciel and Joaquim Jorge
Multimodal Technol. Interact. 2025, 9(6), 61; https://doi.org/10.3390/mti9060061 - 13 Jun 2025
Viewed by 547
Abstract
As head-mounted displays (HMDs) with eye tracking become increasingly accessible, the need for effective gaze-based interfaces in virtual reality (VR) grows. Traditional gaze- or hand-based navigation often limits user precision or impairs free viewing, making multitasking difficult. We present a gaze–hand steering technique that combines eye tracking with hand pointing: users steer only when gaze aligns with a hand-defined target, reducing unintended actions and enabling free look. Speed is controlled via either a joystick or a waist-level speed circle. We evaluated our method in a user study (n = 20) across multitasking and single-task scenarios, comparing it to a similar technique. Results show that gaze–hand steering maintains performance and enhances user comfort and spatial awareness during multitasking. Our findings support using gaze–hand steering in gaze-dominant, multitasking-intensive VR applications requiring precision and simultaneous interaction, where it significantly improves navigation while supporting immersion and efficient control.
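The core gating rule, steer only while the gaze ray points at the hand-defined target, can be sketched as an angle test between two direction vectors. The 10° cone half-angle and the vectors below are hypothetical, not parameters from the paper:

```python
import math

ALIGN_THRESHOLD_DEG = 10.0  # hypothetical cone half-angle for gaze/target alignment

def angle_deg(u, v):
    """Angle in degrees between two 3-D direction vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))

def steering_active(gaze_dir, to_target_dir, threshold=ALIGN_THRESHOLD_DEG):
    """Steer only while the gaze ray points at the hand-defined target."""
    return angle_deg(gaze_dir, to_target_dir) <= threshold

print(steering_active((0.0, 0.0, 1.0), (0.05, 0.0, 1.0)))
```

Gating on the angular cone rather than an exact ray intersection tolerates eye-tracker noise while still suppressing steering when the user looks away.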

18 pages, 21832 KiB  
Article
Modulation of In-Vehicle Display Parameters to Reduce Motion Sickness
by Yeseom Jin, Jiseon Son, Taekyoung Kim, Hoolim Kim, Seunghwan Bang and Hyungseok Kim
Electronics 2025, 14(11), 2249; https://doi.org/10.3390/electronics14112249 - 31 May 2025
Viewed by 474
Abstract
As in-vehicle display environments become increasingly common, addressing motion sickness has become essential due to the intensified visual and vestibular discrepancies introduced by media experiences within vehicles. Prior research highlights that minimizing the conflict between vestibular signals and visual motion perception is crucial for reducing motion sickness. This study aims to identify optimal viewing conditions and simulation settings for motion sickness reduction by experimentally adjusting field of view (FOV) and screen brightness. Specifically, the FOV is narrowed according to vehicle acceleration and angular speed, aligning with simulated vehicle motion through a motion simulator connected to a head-mounted display (HMD). The experimental results indicate that this approach can reduce motion sickness by up to 40%. Additionally, integrating the generated motion data with VR motion data enables a realistic simulation of in-vehicle conditions, suggesting that this method may enhance comfort in actual in-vehicle media environments.
(This article belongs to the Special Issue Big Data and AI Applications)
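FOV narrowing driven by vehicle motion, as described above, amounts to mapping acceleration and angular speed to a clamped field-of-view value. The base/minimum FOV and the gains below are hypothetical tuning values, not parameters reported by the study:

```python
def narrowed_fov(accel, ang_speed, base_fov=110.0, min_fov=60.0,
                 k_accel=8.0, k_ang=0.5):
    """Narrow the rendered FOV (degrees) as vehicle motion intensifies.

    base_fov, min_fov and the gains are hypothetical tuning values,
    not parameters reported by the study.
    accel in m/s^2, ang_speed in deg/s.
    """
    fov = base_fov - k_accel * abs(accel) - k_ang * abs(ang_speed)
    return max(min_fov, min(base_fov, fov))  # clamp to a sensible range

print(narrowed_fov(accel=2.0, ang_speed=20.0))  # stronger motion, narrower view
```

In practice the FOV change would also be low-pass filtered over time, since abrupt vignetting can itself be noticeable to the viewer.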

11 pages, 6922 KiB  
Article
The Feasibility and Clinical Evaluation of an Immersive Augmented Reality Surgical Headset Integrated with Swept-Source Intraoperative Optical Coherence Tomography for Ophthalmic Surgery in the DISCOVER Study
by Masaharu Mizuno, Karen Matar, Reem Amine, Katherine E. Talcott, Jeffrey M. Goshe, William J. Dupps, Sumit Sharma, Asmita Indurkar, John Mamone, Jamie Reese, Sunil K. Srivastava and Justis P. Ehlers
Diagnostics 2025, 15(11), 1394; https://doi.org/10.3390/diagnostics15111394 - 30 May 2025
Viewed by 669
Abstract
Objectives: To evaluate the feasibility and utility of intraoperative optical coherence tomography (iOCT) utilizing an immersive augmented reality surgical headset digital visualization platform (Beyeonics iOCT, Beyeonics Vision Ltd., Haifa, Israel) with swept-source integrated OCT in ophthalmic surgery. Methods: As part of the Institutional Review Board-approved prospective DISCOVER study, the Beyeonics iOCT was utilized in multiple ophthalmic surgical procedures to evaluate the feasibility and utility of iOCT with this platform. The Beyeonics iOCT is a three-dimensional surgical visualization system that utilizes a swept-source integrated OCT within the digital microscope system. Surgeon feedback on system performance and integration into the surgical workflow was gathered via a prespecified survey. Results: Thirteen eyes of thirteen patients were included in this study. The surgical procedures consisted of four cataract surgeries, two lamellar corneal transplants, one pterygium removal, and six vitreoretinal surgeries. Surgeons were able to successfully view and review the iOCT images within the surgical head-mounted display, eliminating the need for an external display. Utility feedback from surgeons included iOCT assisting with confirming wound architecture, corneal graft orientation, and retinal structure. All surgeries were completed without reverting to a conventional microscope, and no intraoperative adverse events occurred. Conclusions: The new visualization platform with integrated swept-source iOCT demonstrated feasibility and potential utility across multiple ophthalmic surgical procedures. Additional research related to outcomes, ergonomics, and enhanced software analysis is needed in the future.
(This article belongs to the Special Issue New Perspectives in Ophthalmic Imaging)

13 pages, 1193 KiB  
Article
Validation of an Automated Scoring Algorithm That Assesses Eye Exploration in a 3-Dimensional Virtual Reality Environment Using Eye-Tracking Sensors
by Or Koren, Anais Di Via Ioschpe, Meytal Wilf, Bailasan Dahly, Ramit Ravona-Springer and Meir Plotnik
Sensors 2025, 25(11), 3331; https://doi.org/10.3390/s25113331 - 26 May 2025
Viewed by 507
Abstract
Eye-tracking studies in virtual reality (VR) deliver insights into behavioral function. The gold standard for evaluating gaze behavior is based on manual scoring, which is labor-intensive. Previously proposed automated eye-tracking algorithms for VR head-mounted displays (HMDs) were not validated against manual scoring, or tested in dynamic areas of interest (AOIs). Our study validates the accuracy of an automated scoring algorithm, which determines temporal fixation behavior on static and dynamic AOIs in VR, against subjective human annotation. The intraclass correlation coefficient (ICC) was calculated for the time of first fixation (TOFF) and total fixation duration (TFD) in ten participants, each presented with 36 static and dynamic AOIs. High ICC values (≥0.982; p < 0.0001) were obtained when comparing the algorithm-generated TOFF and TFD to the raters' annotations. In sum, our algorithm is accurate in determining temporal parameters related to gaze behavior when using HMD-based VR. Thus, the significant time required for human scoring among numerous raters can be rendered obsolete with a reliable automated scoring system. The algorithm proposed here was designed to support a separate study that uses TOFF and TFD to differentiate apathy from depression in those suffering from Alzheimer's dementia.
(This article belongs to the Section Optical Sensors)
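The two temporal parameters validated above, time of first fixation (TOFF) and total fixation duration (TFD), are straightforward to derive once gaze samples have been labelled as on or off an AOI. A minimal sketch over a hypothetical per-frame timeline (the real pipeline would first map raw gaze rays onto the AOIs):

```python
def toff_and_tfd(gaze_on_aoi, dt):
    """Time of first fixation (TOFF) and total fixation duration (TFD)
    from a boolean per-frame gaze-on-AOI timeline sampled every dt seconds.
    Returns (None, 0.0) if the AOI was never fixated."""
    toff = None
    tfd = 0.0
    for i, on in enumerate(gaze_on_aoi):
        if on:
            if toff is None:
                toff = i * dt  # first frame on the AOI
            tfd += dt
    return toff, tfd

# Hypothetical 10-frame timeline at 10 Hz (dt = 0.1 s), not study data.
timeline = [False, False, True, True, True, False, True, False, False, True]
print(toff_and_tfd(timeline, dt=0.1))
```

For dynamic AOIs the on/off labelling must be recomputed per frame against the AOI's moving bounds, which is exactly where manual scoring becomes labor-intensive.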

16 pages, 7057 KiB  
Article
VRBiom: A New Periocular Dataset for Biometric Applications of Head-Mounted Display
by Ketan Kotwal, Ibrahim Ulucan, Gökhan Özbulak, Janani Selliah and Sébastien Marcel
Electronics 2025, 14(9), 1835; https://doi.org/10.3390/electronics14091835 - 30 Apr 2025
Viewed by 763
Abstract
With advancements in hardware, high-quality head-mounted display (HMD) devices are being developed by numerous companies, driving increased consumer interest in AR, VR, and MR applications. This proliferation of HMD devices opens up possibilities for a wide range of applications beyond entertainment. Most commercially available HMD devices are equipped with internal inward-facing cameras to record the periocular areas. Given the nature of these devices and captured data, many applications such as biometric authentication and gaze analysis become feasible. To effectively explore the potential of HMDs for these diverse use-cases and to enhance the corresponding techniques, it is essential to have an HMD dataset that captures realistic scenarios. In this work, we present a new dataset of periocular videos acquired using a virtual reality headset called VRBiom. The VRBiom dataset, targeted at biometric applications, consists of 900 short videos acquired from 25 individuals recorded in the NIR spectrum. These 10 s long videos have been captured using the internal tracking cameras of the Meta Quest Pro at 72 FPS. To encompass real-world variations, the dataset includes recordings under three gaze conditions: steady, moving, and partially closed eyes. We have also ensured an equal split of recordings without and with glasses to facilitate the analysis of eye-wear. These videos, characterized by non-frontal views of the eye and relatively low spatial resolutions (400×400 pixels), can be instrumental in advancing state-of-the-art research across various biometric applications. The VRBiom dataset can be utilized to evaluate, train, or adapt models for biometric use-cases such as iris and/or periocular recognition and associated sub-tasks such as detection and semantic segmentation. In addition to data from real individuals, we have included around 1100 presentation attacks constructed from 92 presentation attack instruments (PAIs). These PAIs fall into six categories constructed through combinations of print attacks (real and synthetic identities), fake 3D eyeballs, plastic eyes, and various types of masks and mannequins. These PA videos, combined with genuine (bona fide) data, can be utilized to address concerns related to spoofing, which is a significant threat if these devices are to be used for authentication. The VRBiom dataset is publicly available for research purposes related to biometric applications only.

29 pages, 4394 KiB  
Article
Analysis of Voice, Speech, and Language Biomarkers of Parkinson’s Disease Collected in a Mixed Reality Setting
by Milosz Dudek, Daria Hemmerling, Marta Kaczmarska, Joanna Stepien, Mateusz Daniol, Marek Wodzinski and Magdalena Wojcik-Pedziwiatr
Sensors 2025, 25(8), 2405; https://doi.org/10.3390/s25082405 - 10 Apr 2025
Cited by 2 | Viewed by 1924
Abstract
This study explores an innovative approach to early Parkinson's disease (PD) detection by analyzing speech data collected using a mixed reality (MR) system. A total of 57 Polish participants, including PD patients and healthy controls, performed five speech tasks while using an MR head-mounted display (HMD). Speech data were recorded and analyzed to extract acoustic and linguistic features, which were then evaluated using machine learning models, including logistic regression, support vector machines (SVMs), random forests, AdaBoost, and XGBoost. The XGBoost model achieved the best performance, with an F1-score of 0.90 ± 0.05 in the story-retelling task. Key features such as MFCCs (mel-frequency cepstral coefficients), spectral characteristics, RASTA-filtered auditory spectrum, and local shimmer were identified as significant in detecting PD-related speech alterations. Additionally, state-of-the-art deep learning models (wav2vec2, HuBERT, and WavLM) were fine-tuned for PD detection. HuBERT achieved the highest performance, with an F1-score of 0.94 ± 0.04 in the diadochokinetic task, demonstrating the potential of deep learning to capture complex speech patterns linked to neurodegenerative diseases. This study highlights the effectiveness of combining MR technology for speech data collection with advanced machine learning (ML) and deep learning (DL) techniques, offering a non-invasive and high-precision approach to PD diagnosis. The findings hold promise for broader clinical applications, advancing the diagnostic landscape for neurodegenerative disorders.
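The F1-scores this abstract reports combine precision and recall into a single number. A minimal computation from a hypothetical confusion matrix (the counts are illustrative, not the study's results):

```python
def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts for a PD-vs-control classifier (not the study's results).
print(round(f1_score(tp=27, fp=3, fn=3), 2))
```

Because F1 ignores true negatives, it is a common choice for clinical screening tasks where the positive class matters most.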

17 pages, 2942 KiB  
Article
Profiling Students by Perceived Immersion: Insights from VR Engine Room Simulator Trials in Maritime Higher Education
by Luka Liker, Demir Barić, Ana Perić Hadžić and David Bačnar
Appl. Sci. 2025, 15(7), 3786; https://doi.org/10.3390/app15073786 - 30 Mar 2025
Cited by 1 | Viewed by 787
Abstract
Research on students’ immersive experiences with fully immersive virtual reality (VR) technologies is extensively documented across diverse educational settings; however, in maritime higher education, it remains relatively underrepresented. Therefore, by using segmentation analysis, this study aims to profile maritime engineering students at the Faculty of Maritime Studies, University of Rijeka, by perceived immersion (PIMM) within a Head-Mounted Display (HMD) VR engine room simulator and to explore differences in their perceived learning benefits (PLBs), future behavioural intentions (FBI), and satisfaction (SAT) with the HMD-VR experience. The sample comprised 84 participants who engaged in preliminary HMD-VR engine room simulator trials. A non-hierarchical (K-means) cluster analysis, combined with the Elbow method, identified two distinct and homogeneous groups: Immersionists and Conformists. The results of an independent-samples t-test indicated that Immersionists scored significantly higher than Conformists on perceived learning benefits, future behavioural intentions, and overall satisfaction. The study results underscore the significance of understanding students’ subjective perception of immersion in the implementation and further development of fully immersive VR technologies within maritime education and training (MET) curricula. However, as the study is based on a specific case within a particular educational context, the results may not directly apply to the broader student population. Full article
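The segmentation workflow this abstract describes (K-means with the Elbow method to choose k, then an independent-samples t-test between the resulting profiles) can be sketched with scikit-learn and SciPy. The data here are synthetic Likert-style scores invented for illustration; the cluster sizes and effect are not taken from the study.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)

# Synthetic 1-5 "perceived immersion" scores for 84 students:
# one high-immersion group and one moderate group (illustrative only).
pimm = np.concatenate([rng.normal(4.5, 0.3, 44),
                       rng.normal(3.2, 0.4, 40)]).clip(1, 5)
sat = 0.8 * pimm + rng.normal(0, 0.3, 84)  # satisfaction tracks immersion

# Elbow method: within-cluster sum of squares (inertia) for k = 1..6;
# the sharpest drop indicates the number of clusters to retain.
X = pimm.reshape(-1, 1)
inertia = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
           for k in range(1, 7)]

# Two-cluster solution, as in the study's Immersionist/Conformist split.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
high_cluster = 0 if pimm[labels == 0].mean() > pimm[labels == 1].mean() else 1
immersionists = labels == high_cluster

# Independent-samples t-test on satisfaction between the two profiles.
t_stat, p_value = ttest_ind(sat[immersionists], sat[~immersionists])
print(p_value < 0.05)
```

In practice the elbow is judged by plotting `inertia` against k; here the one-dimensional toy data make k = 2 the obvious choice, after which the between-profile comparison is a routine two-sample test.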

29 pages, 40685 KiB  
Article
Evaluating the Benefits and Drawbacks of Visualizing Systems Modeling Language (SysML) Diagrams in the 3D Virtual Reality Environment
by Mostafa Lutfi and Ricardo Valerdi
Systems 2025, 13(4), 221; https://doi.org/10.3390/systems13040221 - 23 Mar 2025
Viewed by 1273
Abstract
Model-Based Systems Engineering (MBSE) prioritizes system design through models rather than documents and is implemented with the Systems Modeling Language (SysML), the state-of-the-art language in academia and industry. Virtual Reality (VR), an immersive visualization technology, can simulate reality in virtual environments with varying degrees of fidelity. In recent years, the technology industry has invested substantially in the development of head-mounted displays (HMDs) and related VR technologies. Various studies have suggested that VR-based immersive design reviews enhance system issue/fault identification, collaboration, focus, and presence compared to non-immersive approaches, and that VR environments support higher levels of understanding and knowledge retention than traditional approaches. Multiple attempts have been made to visualize conventional 2D SysML diagrams in a virtual reality environment; however, to the best of the authors’ knowledge, no empirical evaluation has analyzed the benefits and drawbacks of doing so. Hence, the authors evaluated four key benefit types, along with the drawbacks, through experiments with human subjects. The four benefit types—Systems Understanding, Information Sharing, Modeling and Training Experience, and Digital Twin—were chosen based on the MBSE value and benefits review performed by researchers and on benefits reported in evaluations of similar visual formalism languages. Experiments compared understanding, interaction, and knowledge retention between 3D VR and conventional 2D SysML diagrams. The authors chose a ground-based telescope system as the system of interest (SOI) for system modeling and used a standalone wireless HMD unit for the virtual reality experience, which enabled experiments to be conducted irrespective of location. Students and experts from multiple disciplines, including systems engineering, participated in the experiment and provided their opinions on the VR SysML implementation. The knowledge test, perceived-evaluation results, and post-completion surveys were analyzed to determine whether the 3D VR SysML implementation delivered these benefits and to identify potential drawbacks. The authors also applied two VR scenario efficacy measures, namely the Simulator Sickness Questionnaire (SSQ) and the System Usability Scale (SUS), to rule out evaluation-design-related anomalies. Full article
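The System Usability Scale (SUS) mentioned in this abstract has a standard scoring rule (Brooke, 1996): ten 1–5 Likert items, where odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is scaled by 2.5 to a 0–100 range. A minimal sketch of that computation:

```python
def sus_score(responses):
    """System Usability Scale score (0-100) from ten 1-5 Likert
    responses. Odd-numbered items contribute (r - 1), even-numbered
    items contribute (5 - r); the sum is scaled by 2.5."""
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# A fully neutral response pattern (all 3s) scores the midpoint.
print(sus_score([3] * 10))  # 50.0
```

A score of 50 is not a percentage of "usability"; SUS scores are typically interpreted against published benchmarks, with roughly 68 often cited as an average result.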
