Search Results (198)

Search Parameters:
Keywords = hololens

18 pages, 2384 KiB  
Article
Image Quality Assessment of Augmented Reality Glasses as Medical Display Devices (HoloLens 2)
by Simon König, Simon Siebers and Claus Backhaus
Appl. Sci. 2025, 15(14), 7648; https://doi.org/10.3390/app15147648 - 8 Jul 2025
Viewed by 365
Abstract
See-through augmented reality glasses, such as HoloLens 2, are increasingly adopted in medical settings; however, their efficacy as medical display devices remains unclear, as current evaluation protocols are designed for traditional monitors. This study examined whether established display-evaluation techniques apply to HoloLens 2 and whether it meets the standards for primary and secondary medical displays. HoloLens 2 was assessed for overall image quality, luminance, grayscale consistency, and color uniformity. Five participants rated the TG18-OIQ pattern under ambient lighting conditions of 2.4 and 138.7 lx. Minimum and maximum luminance were measured using the TG18-LN12-03 and -18 patterns, targeting ≥300 cd/m² and a luminance ratio ≥250. Grayscale conformity to the standard grayscale display function allowed deviations of 10% for primary and 20% for secondary displays. Color uniformity was measured at five screen positions for red, green, and blue, with a chromaticity limit of 0.01 for primary displays. HoloLens 2 satisfied four of the ten primary and four of the seven secondary overall-quality criteria, achieving a maximum luminance of 2366 cd/m² and a luminance ratio of 1478.75. Grayscale uniformity was within tolerance for 10 of the 15 primary and 13 of the 15 secondary measurements, while 25 of the 30 color-uniformity values exceeded the threshold. The adapted evaluation methods facilitate a systematic assessment of HoloLens 2 as a medical display. Owing to inadequate grayscale and color representation, the headset is unsuitable as a primary diagnostic display; its suitability for secondary use must be judged against the requirements of the specific application. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
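For readers reproducing such tests, the pass/fail arithmetic behind the luminance criteria is simple to script. A minimal sketch, with hypothetical photometer readings (chosen here to match the reported maximum luminance and luminance ratio; the TG18 patterns and GSDF lookup are not reproduced):

```python
# Sketch: luminance-ratio and grayscale-tolerance checks as described in the
# abstract. Readings are hypothetical stand-ins for TG18-LN photometer data.
l_min, l_max = 1.6, 2366.0          # cd/m^2; 2366/1.6 reproduces ratio 1478.75
print(f"max luminance   = {l_max} cd/m^2 (target >= 300)")
print(f"luminance ratio = {l_max / l_min:.2f} (target >= 250)")

def within_tolerance(measured, expected, tol):
    """True if a measured luminance is within tol (fraction) of the
    GSDF-expected value for that grayscale step."""
    return abs(measured - expected) / expected <= tol

# hypothetical (measured, GSDF-expected) luminance pairs for a few steps
steps = [(30.5, 28.0), (110.0, 118.0), (690.0, 660.0)]
for tol, label in [(0.10, "primary"), (0.20, "secondary")]:
    passed = sum(within_tolerance(m, e, tol) for m, e in steps)
    print(f"{label}: {passed}/{len(steps)} steps within {tol:.0%}")
```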

12 pages, 8520 KiB  
Article
Integrated Haptic Feedback with Augmented Reality to Improve Pinching and Fine Moving of Objects
by Jafar Hamad, Matteo Bianchi and Vincenzo Ferrari
Appl. Sci. 2025, 15(13), 7619; https://doi.org/10.3390/app15137619 - 7 Jul 2025
Viewed by 455
Abstract
Hand gestures are essential for interaction in augmented and virtual reality (AR/VR), allowing users to intuitively manipulate virtual objects and engage with human–machine interfaces (HMIs). Accurate gesture recognition is critical for effective task execution. However, users often encounter difficulties due to the lack of immediate and clear feedback from head-mounted displays (HMDs). Current tracking technologies cannot always guarantee reliable recognition, leaving users uncertain about whether their gestures have been successfully detected. To address this limitation, haptic feedback can play a key role by confirming gesture recognition and compensating for discrepancies between the visual perception of fingertip contact with virtual objects and the actual system recognition. The goal of this paper is to compare a simple vibrotactile ring with a full glove device, and to identify what each contributes to a fundamental gesture such as pinching and finely moving objects using Microsoft HoloLens 2. Since pinching is an essential fine motor skill, augmented reality integrated with haptic feedback can notify the user that a gesture has been recognized and compensate for the misalignment between the visually perceived fingertip and its tracked position relative to virtual objects, yielding better spatial precision. In our experiments, the participants' median distance error using bare hands over all axes was 10.3 mm (interquartile range [IQR] = 13.1 mm) in a median time of 10.0 s (IQR = 4.0 s). Both haptic devices improved participants' precision with respect to the bare-hands case: with the full glove, participants achieved median errors of 2.4 mm (IQR = 5.2 mm) in a median time of 8.0 s (IQR = 6.0 s), and with the haptic rings they performed even better, with median errors of 2.0 mm (IQR = 2.0 mm) in a median time of only 6.0 s (IQR = 5.0 s). Our outcomes suggest that simple devices like the described haptic rings can outperform glove-like devices in accuracy, execution time, and wearability. The haptic glove probably interferes with hand and finger tracking on the Microsoft HoloLens 2. Full article
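As an aside, the summary statistics reported here (median and interquartile range) are straightforward to reproduce for one's own trials; a minimal sketch with fabricated per-trial errors:

```python
import numpy as np

# Fabricated per-trial fingertip placement errors (mm) for one condition;
# the study reports the median and the interquartile range (IQR).
errors_mm = np.array([1.2, 1.8, 2.0, 2.1, 2.6, 3.4, 4.0])

median = np.median(errors_mm)
q1, q3 = np.percentile(errors_mm, [25, 75])
print(f"median = {median:.1f} mm, IQR = {q3 - q1:.1f} mm")
```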

16 pages, 2092 KiB  
Article
Augmented Reality-Assisted Placement of Surgical Guides and Osteotomy Execution for Pelvic Tumour Resections: A Pre-Clinical Feasibility Study Using 3D-Printed Models
by Tanya Fernández-Fernández, Javier Orozco-Martínez, Amaia Iribar-Zabala, Elena Aguilera Jiménez, Carla de Gregorio-Bermejo, Lydia Mediavilla-Santos, Javier Pascau, Mónica García-Sevilla, Rubén Pérez-Mañanes and Jose Antonio Calvo-Haro
Cancers 2025, 17(13), 2260; https://doi.org/10.3390/cancers17132260 - 7 Jul 2025
Viewed by 350
Abstract
Objectives: This pre-clinical feasibility study evaluates the accuracy of a novel augmented reality-based (AR-based) guidance technology using head-mounted displays (HMDs) for the placement of patient-specific instruments (PSIs)—also referred to as surgical guides—and osteotomy performance in pelvic tumour resections. The goal is to improve PSI placement accuracy and osteotomy execution while assessing user perception and workflow efficiency. Methods: The study was conducted on ten 3D-printed pelvic phantoms derived from CT scans of cadaveric specimens. Custom PSIs were designed and printed to guide osteotomies at the supraacetabular, symphysial, and ischial regions. An AR application was developed for the HoloLens 2 HMD to display PSI location and cutting planes. The workflow included manual supraacetabular PSI placement, AR-guided placement of the other PSIs and osteotomy execution. Postoperative CT scans were analysed to measure angular and distance errors in PSI placement and osteotomies. Task times and user feedback were also recorded. Results: The mean angular deviation for PSI placement was 2.20°, with a mean distance error of 1.19 mm (95% CI: 0.86 to 1.52 mm). Osteotomies showed an overall mean angular deviation of 3.73° compared to planned cuts, all within the predefined threshold of less than 5°. AR-assisted guidance added less than two minutes per procedure. User feedback highlighted the intuitive interface and high usability, especially for visualising cutting planes. Conclusions: Integrating AR through HMDs is a feasible and accurate method for enhancing PSI placement and osteotomy performance in pelvic tumour resections. The system provides reliable guidance even in cases of PSI failure and adds minimal time to the surgical workflow while significantly improving accuracy. Further validation in cadaveric models is needed to ensure its clinical applicability. Full article
(This article belongs to the Special Issue Clinical Treatment of Osteosarcoma)
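The angular deviation reported for the osteotomies is, in essence, the angle between the planned and achieved cutting-plane normals. A minimal sketch of that computation (the normals below are hypothetical; the study derives them from postoperative CT):

```python
import numpy as np

def plane_angle_deg(n_planned, n_achieved):
    """Angle in degrees between two cutting-plane normals."""
    a = np.asarray(n_planned, float)
    b = np.asarray(n_achieved, float)
    cosang = abs(np.dot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# hypothetical normals for one planned vs. executed osteotomy plane
print(f"{plane_angle_deg([0, 0, 1], [0.05, 0.02, 0.998]):.2f} deg")
```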

18 pages, 2110 KiB  
Article
Evaluation of HoloLens 2 for Hand Tracking and Kinematic Features Assessment
by Jessica Bertolasi, Nadia Vanessa Garcia-Hernandez, Mariacarla Memeo, Marta Guarischi and Monica Gori
Virtual Worlds 2025, 4(3), 31; https://doi.org/10.3390/virtualworlds4030031 - 3 Jul 2025
Viewed by 536
Abstract
The advent of mixed reality (MR) systems has revolutionized human–computer interactions by seamlessly integrating virtual elements with the real world. Devices like the HoloLens 2 (HL2) enable intuitive, hands-free interactions through advanced hand-tracking technology, making them valuable in fields such as education, healthcare, engineering, and training simulations. However, despite the growing adoption of MR, there is a noticeable lack of comprehensive comparisons between the hand-tracking accuracy of the HL2 and high-precision benchmarks like motion capture systems. Such evaluations are essential to assess the reliability of MR interactions, identify potential tracking limitations, and improve the overall precision of hand-based input in immersive applications. This study aims to assess the accuracy of HL2 in tracking hand position and measuring kinematic hand parameters, including joint angles and lateral pinch span (distance between thumb and index fingertips), using its tracking data. To achieve this, the Vicon motion capture system (VM) was used as a gold-standard reference. Three tasks were designed: (1) finger tracing of a 2D pattern in 3D space, (2) grasping various common objects, and (3) lateral pinching of objects of varying sizes. Task 1 tests fingertip tracking, Task 2 evaluates joint angle accuracy, and Task 3 examines the accuracy of pinch span measurement. In all tasks, HL2 and VM simultaneously recorded hand positions and movements. The data captured in Task 1 were analyzed to evaluate HL2's hand-tracking capabilities against VM. Finger rotation angles from Task 2 and lateral pinch span from Task 3 were then used to assess HL2's accuracy compared to VM. The results indicate that in Task 1 the HL2 exhibits millimeter-level errors compared to Vicon's tracking, ranging from 2 mm to 4 mm, suggesting that its hand-tracking system achieves good accuracy. Additionally, the reconstructed grasping positions in Task 2 from both systems show a strong correlation and an average error of 5°, while in Task 3 the accuracy of the HL2 is comparable to that of VM, with performance improving as object thickness increases. Full article
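Comparing HL2 fingertip trajectories against a Vicon reference requires bringing both into a common frame before measuring errors. One common approach, not necessarily the authors' exact pipeline, is a rigid least-squares (Kabsch) alignment followed by per-sample Euclidean error; a minimal sketch on synthetic data:

```python
import numpy as np

def kabsch_align(P, Q):
    """Rigid transform (R, t) that best maps Nx3 points P onto Q."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = Q.mean(0) - R @ P.mean(0)
    return R, t

# synthetic "Vicon" trajectory and a rotated, shifted, slightly noisy "HL2" copy
rng = np.random.default_rng(0)
vicon = rng.uniform(-0.1, 0.1, (200, 3))         # meters
theta = np.radians(5)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
hl2 = vicon @ Rz.T + [0.02, -0.01, 0.03] + rng.normal(0, 0.003, (200, 3))

R, t = kabsch_align(hl2, vicon)
aligned = hl2 @ R.T + t
err_mm = np.linalg.norm(aligned - vicon, axis=1) * 1000
print(f"mean error = {err_mm.mean():.1f} mm, max = {err_mm.max():.1f} mm")
```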

19 pages, 7664 KiB  
Article
Off-Cloud Anchor Sharing Framework for Multi-User and Multi-Platform Mixed Reality Applications
by Aida Vidal-Balea, Oscar Blanco-Novoa, Paula Fraga-Lamas and Tiago M. Fernández-Caramés
Appl. Sci. 2025, 15(13), 6959; https://doi.org/10.3390/app15136959 - 20 Jun 2025
Viewed by 419
Abstract
This article presents a novel off-cloud anchor sharing framework designed to enable seamless device interoperability for Mixed Reality (MR) multi-user and multi-platform applications. The proposed framework enables local storage and synchronization of spatial anchors, offering a robust and autonomous alternative for real-time collaborative experiences. Such anchors are digital reference points tied to specific positions in the physical world that allow virtual content in MR applications to remain accurately aligned to the real environment, making them an essential tool for building collaborative MR experiences. The anchor synchronization system takes advantage of local anchor storage to optimize the sharing process and to exchange anchors only when necessary. The framework integrates Unity, Mirror and Mixed Reality Toolkit (MRTK) to support seamless interoperability between Microsoft HoloLens 2 devices and desktop computers, with the addition of external IoT interaction. As a proof of concept, a collaborative multiplayer game was developed to illustrate the multi-platform and anchor sharing capabilities of the proposed system. The experiments were performed in Local Area Network (LAN) and Wide Area Network (WAN) environments; they highlight the importance of efficient anchor management in large-scale MR environments and demonstrate the effectiveness of the system in handling anchor transmission across varying levels of spatial complexity. Specifically, the results show that the framework achieves anchor transmission times starting at around 12.7 s on the tested LAN/WAN networks for small anchor setups and rising to roughly 86.02–87.18 s for complex physical scenarios where room-sized anchors are required. Full article
(This article belongs to the Special Issue Extended Reality (XR) and User Experience (UX) Technologies)
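The framework's central optimization, as described, is sharing an anchor only when the peer's copy is stale. A deliberately simplified, hypothetical sketch of that decision (real anchors are opaque platform blobs handled in Unity/MRTK, not bytes in Python):

```python
import hashlib

class AnchorCache:
    """Local spatial-anchor store that ships an anchor blob only when the
    peer does not already hold an identical copy (hypothetical sketch of the
    off-cloud sharing decision)."""
    def __init__(self):
        self.store = {}   # anchor_id -> (digest, blob)

    def put(self, anchor_id: str, blob: bytes):
        self.store[anchor_id] = (hashlib.sha256(blob).hexdigest(), blob)

    def needs_transfer(self, anchor_id: str, peer_digest) -> bool:
        digest, _ = self.store[anchor_id]
        return peer_digest != digest     # skip transfer if peer copy matches

cache = AnchorCache()
cache.put("room-anchor", b"...serialized anchor bytes...")
print(cache.needs_transfer("room-anchor", peer_digest=None))   # True: send it
```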

9 pages, 275 KiB  
Review
Augmented Reality Integration in Surgery for Craniosynostoses: Advancing Precision in the Management of Craniofacial Deformities
by Divya Sharma, Adam Matthew Holden and Soudeh Nezamivand-Chegini
J. Clin. Med. 2025, 14(12), 4359; https://doi.org/10.3390/jcm14124359 - 19 Jun 2025
Viewed by 441
Abstract
Craniofacial deformities, particularly craniosynostosis, present significant surgical challenges due to complex anatomy and the need for individualised, high-precision interventions. Augmented reality (AR) has emerged as a promising tool in craniofacial surgery, offering enhanced spatial visualisation, real-time anatomical referencing, and improved surgical accuracy. This review explores the current and emerging applications of AR in preoperative planning, intraoperative navigation, and surgical education within paediatric craniofacial surgery. Through a literature review of peer-reviewed studies, we examine how AR platforms, such as the VOSTARS system and Microsoft HoloLens, facilitate virtual simulations, precise osteotomies, and collaborative remote guidance. Despite demonstrated benefits in feasibility and accuracy, widespread clinical adoption is limited by technical, ergonomic, financial, and training-related challenges. Future directions include the integration of artificial intelligence, haptic feedback, and robotic assistance to further augment surgical precision and training efficacy. AR holds transformative potential for improving outcomes and efficiency in craniofacial deformity correction, warranting continued research and clinical validation. Full article
(This article belongs to the Special Issue Craniofacial Surgery: State of the Art and the Perspectives)

30 pages, 4181 KiB  
Article
Augmented Reality for PCB Component Identification and Localization
by Kuhelee Chandel, Stefan Seipel, Julia Åhlén and Andreas Roghe
Appl. Sci. 2025, 15(11), 6331; https://doi.org/10.3390/app15116331 - 4 Jun 2025
Viewed by 668
Abstract
This study evaluates the effectiveness of augmented reality (AR), using the Microsoft HoloLens 2, for identifying and localizing PCB components compared to traditional PDF-based methods. Two experiments examined the influence of user expertise, viewing angles, and component sizes on accuracy and usability. The results indicate that AR improved identification accuracy and user experience for non-experts, although it was slower than traditional methods for experienced users. Optimal performance was achieved at 90° viewing angles, while accuracy declined significantly at oblique angles. Medium-sized components received the highest confidence scores, suggesting favorable visibility and recognition characteristics within this group, though further evaluation with a broader component distribution is warranted. Participant feedback highlighted the system's intuitive interface and effective guidance, but also noted challenges with marker stability, visual discomfort, and ergonomic limitations. These findings suggest that AR can enhance training and reduce errors in electronics manufacturing, although refinements in marker rendering and user onboarding are necessary to support broader adoption. This research provides empirical evidence on the role of AR in supporting user-centered design and improving task performance in industrial electronics workflows. Full article
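The viewing-angle and component-size effects reported here are the kind of breakdown that falls out of a simple aggregation over trial logs; a minimal sketch with fabricated records:

```python
import pandas as pd

# Fabricated trial log: each row is one component-identification attempt.
trials = pd.DataFrame({
    "viewing_angle_deg": [90, 90, 90, 60, 60, 45, 45, 45],
    "component_size":    ["M", "S", "M", "M", "L", "S", "M", "L"],
    "correct":           [1, 1, 1, 1, 0, 0, 1, 0],
})

# accuracy by viewing angle and by component size
print(trials.groupby("viewing_angle_deg")["correct"].mean())
print(trials.groupby("component_size")["correct"].mean())
```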

26 pages, 2125 KiB  
Article
Adaptive Augmented Reality Architecture for Optimising Assistance and Safety in Industry 4.0
by Ginés Morales Méndez and Francisco del Cerro Velázquez
Big Data Cogn. Comput. 2025, 9(5), 133; https://doi.org/10.3390/bdcc9050133 - 19 May 2025
Cited by 1 | Viewed by 830
Abstract
This study proposes an adaptive augmented reality (AR) architecture designed to enhance real-time operator assistance and occupational safety in industrial environments representative of Industry 4.0. The proposed system addresses key challenges in AR adoption, such as the need for dynamic personalisation of instructions based on operator profiles and the mitigation of technical and cognitive barriers. The architecture integrates theoretical modelling, modular design, and real-time adaptability to match instruction complexity with user expertise and environmental conditions. A working prototype was implemented using Microsoft HoloLens 2, Unity 3D, and Vuforia and validated in a controlled industrial scenario involving predictive maintenance and assembly tasks. The experimental results demonstrated statistically significant improvements in task completion time, error rates, perceived cognitive load, operational efficiency, and safety indicators compared with conventional methods. The findings underscore the system's capacity to enhance both performance and consistency while strengthening risk mitigation in complex operational settings. The study proposes a scalable and modular AR framework with built-in safety and adaptability mechanisms, demonstrating practical benefits for human–machine interaction in Industry 4.0. It is subject to certain limitations, including validation in a simulated environment, which limits direct extrapolation of the results to real industrial scenarios; further evaluation in various operational contexts is required to verify the scalability and applicability of the proposed system. Future research should explore long-term ergonomics, scalability, and the integration of emerging technologies for decision support within adaptive AR systems. Full article
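The adaptive behaviour described, matching instruction complexity to operator profile and context, can be pictured as a small policy function. A hypothetical, deliberately simplified sketch (the paper's actual adaptation model is richer):

```python
from dataclasses import dataclass

@dataclass
class OperatorProfile:
    expertise: str        # "novice" | "intermediate" | "expert"
    error_rate: float     # rolling fraction of recent task errors

def instruction_level(profile: OperatorProfile, hazardous_zone: bool) -> str:
    """Pick an AR instruction detail level (hypothetical policy sketch)."""
    if hazardous_zone or profile.error_rate > 0.2:
        return "step-by-step with safety overlays"
    if profile.expertise == "expert":
        return "terse checklist"
    if profile.expertise == "intermediate":
        return "annotated 3D cues"
    return "step-by-step with safety overlays"

print(instruction_level(OperatorProfile("expert", 0.05), hazardous_zone=False))
```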

21 pages, 9744 KiB  
Article
Real-Time Identification of Look-Alike Medical Vials Using Mixed Reality-Enabled Deep Learning
by Bahar Uddin Mahmud, Guanyue Hong, Virinchi Ravindrakumar Lalwani, Nicholas Brown and Zachary D. Asher
Future Internet 2025, 17(5), 223; https://doi.org/10.3390/fi17050223 - 16 May 2025
Viewed by 438
Abstract
The accurate identification of look-alike medical vials is essential for patient safety, particularly when similar vials contain different substances, volumes, or concentrations. Traditional methods, such as manual selection or barcode-based identification, are prone to human error or face reliability issues under varying lighting conditions. This study addresses these challenges by introducing a real-time deep learning-based vial identification system, leveraging a Lightweight YOLOv4 model optimized for edge devices. The system is integrated into a Mixed Reality (MR) environment, enabling the real-time detection and annotation of vials with immediate operator feedback. Compared to standard barcode-based methods and the baseline YOLOv4-Tiny model, the proposed approach improves identification accuracy while maintaining low computational overhead. The experimental evaluations demonstrate a mean average precision (mAP) of 98.76 percent, with an inference speed of 68 milliseconds per frame on HoloLens 2, achieving real-time performance. The results highlight the model’s robustness in diverse lighting conditions and its ability to mitigate misclassifications of visually similar vials. By combining deep learning with MR, this system offers a more reliable and efficient alternative for pharmaceutical and medical applications, paving the way for AI-driven MR-assisted workflows in critical healthcare environments. Full article
(This article belongs to the Special Issue Smart Technology: Artificial Intelligence, Robotics and Algorithms)
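The 68 ms/frame figure is an end-to-end inference latency, and measuring such a number for an exported detection model is routine. A minimal sketch using ONNX Runtime, with a placeholder model path and input shape (the study's Lightweight YOLOv4 and its HoloLens 2 deployment are not reproduced here):

```python
import time
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("vial_detector.onnx")        # placeholder path
input_name = session.get_inputs()[0].name
frame = np.random.rand(1, 3, 416, 416).astype(np.float32)   # dummy frame

# warm-up, then time repeated runs for a stable per-frame latency estimate
session.run(None, {input_name: frame})
n = 50
t0 = time.perf_counter()
for _ in range(n):
    session.run(None, {input_name: frame})
print(f"{(time.perf_counter() - t0) / n * 1000:.1f} ms per frame")
```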

21 pages, 127827 KiB  
Review
Artificial Intelligence in Orthopedic Medical Education: A Comprehensive Review of Emerging Technologies and Their Applications
by Kyle Sporn, Rahul Kumar, Phani Paladugu, Joshua Ong, Tejas Sekhar, Swapna Vaja, Tamer Hage, Ethan Waisberg, Chirag Gowda, Ram Jagadeesan, Nasif Zaman and Alireza Tavakkoli
Int. Med. Educ. 2025, 4(2), 14; https://doi.org/10.3390/ime4020014 - 30 Apr 2025
Cited by 2 | Viewed by 1474
Abstract
Integrating artificial intelligence (AI) and mixed reality (MR) into orthopedic education has transformed learning. This review examines AI-powered platforms like Microsoft HoloLens, Apple Vision Pro, and HTC Vive Pro, which enhance anatomical visualization, surgical simulation, and clinical decision-making. These technologies improve the spatial understanding of musculoskeletal structures, refine procedural skills with haptic feedback, and personalize learning through AI-driven adaptive algorithms. Generative AI tools like ChatGPT further support knowledge retention and provide evidence-based insights on orthopedic topics. AI-enabled platforms and generative AI tools help address challenges in standardizing orthopedic education; however, barriers remain around data standardization, algorithm evaluation, ethics, and curriculum design. AI is also applied to preoperative planning and postoperative predictive analytics, bridging theory and practice. AI and MR are key to supporting innovation and scalability in orthopedic education, but technological innovation relies on collaborative partnerships to develop equitable, evidence-informed practices. For sustained impact, innovation must be aligned with pedagogical theories and principles. We believe orthopedic medical educators will play a critical role in training the next generation of competent clinicians. Full article
(This article belongs to the Special Issue New Advancements in Medical Education)

19 pages, 1357 KiB  
Article
Performance Measurement of Gesture-Based Human–Machine Interfaces Within eXtended Reality Head-Mounted Displays
by Leopoldo Angrisani, Mauro D’Arco, Egidio De Benedetto, Luigi Duraccio, Fabrizio Lo Regio, Michele Sansone and Annarita Tedesco
Sensors 2025, 25(9), 2831; https://doi.org/10.3390/s25092831 - 30 Apr 2025
Viewed by 567
Abstract
This paper proposes a method for measuring the performance of Human–Machine Interfaces based on hand-gesture recognition, implemented within eXtended Reality Head-Mounted Displays. The proposed method leverages a systematic approach, enabling performance measurement in compliance with the Guide to the Expression of Uncertainty in Measurement. As an initial step, a testbed is developed, comprising a series of icons accommodated within the field of view of the eXtended Reality Head-Mounted Display considered. Each icon must be selected through a cue-guided task using the hand gestures under evaluation. Multiple selection cycles involving different individuals are conducted to derive suitable performance metrics. These metrics are derived considering the specific parameters characterizing the hand gestures, as well as the uncertainty contributions arising from intra- and inter-individual variability in the measured quantity values. As a case study, the eXtended Reality Head-Mounted Display Microsoft HoloLens 2 and the finger-tapping gesture were investigated. Without compromising generality, the obtained results show that the proposed method can provide valuable insights into performance trends across individuals and gesture parameters. Moreover, the statistical analyses employed can determine whether increased individual familiarity with the Human–Machine Interface results in faster task completion without a corresponding decrease in accuracy. Overall, the proposed method provides a comprehensive framework for evaluating the compliance of hand-gesture-based Human–Machine Interfaces with target performance specifications related to specific application contexts. Full article
(This article belongs to the Special Issue Advances in Wearable Sensors for Continuous Health Monitoring)
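In GUM terms, the intra- and inter-individual contributions mentioned above are Type A uncertainties estimated from repeated observations. A minimal sketch with fabricated completion times (the paper's full treatment covers more metrics and gesture parameters):

```python
import numpy as np

# Fabricated finger-tap task completion times (s): rows = individuals,
# columns = repeated selection cycles.
times = np.array([[1.9, 2.1, 2.0],
                  [2.6, 2.4, 2.5],
                  [2.1, 2.2, 2.0]])

per_user_mean = times.mean(axis=1)
# intra-individual: pooled std of repeats, as standard uncertainty of a mean
u_intra = times.std(axis=1, ddof=1).mean() / np.sqrt(times.shape[1])
# inter-individual: spread of the per-user means
u_inter = per_user_mean.std(ddof=1) / np.sqrt(times.shape[0])
u_combined = np.hypot(u_intra, u_inter)     # root-sum-of-squares combination
print(f"u_intra={u_intra:.3f} s, u_inter={u_inter:.3f} s, "
      f"combined={u_combined:.3f} s")
```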

10 pages, 644 KiB  
Article
Enhanced Preoperative Pancreatoduodenectomy Patient Education Using Mixed Reality Technology: A Randomized Controlled Pilot Study
by Jessica Heard, Paul Murdock, Juan Malo, Joseph Lim, Sourodip Mukharjee and Rohan Jeyarajah
Informatics 2025, 12(2), 42; https://doi.org/10.3390/informatics12020042 - 23 Apr 2025
Viewed by 842
Abstract
(1) Background: Mixed Reality (MR) technology, such as the HoloLens, offers a novel approach to preoperative education. This study evaluates its feasibility and effectiveness in improving patient comprehension and comfort during informed consent for pancreatoduodenectomy. (2) Methods: A single-center, randomized, controlled pilot study was conducted between February and May 2023. Patients recommended for pancreatoduodenectomy were randomized into a control group receiving standard education or an intervention group using the HoloLens. Pre- and post-intervention surveys assessed patient understanding and comfort. (3) Results: Nineteen patients participated (8 HoloLens, 11 control). Both groups showed improved comprehension post-intervention, but only the HoloLens group demonstrated a statistically significant increase (Z = −2.524, p = 0.012). MR users had a greater understanding of surgical steps compared to controls, and 75% of participants in both groups reported high comfort levels with the surgery. MR integration was feasible and did not disrupt clinical workflow. (4) Conclusions: These findings suggest that MR can enhance preoperative education for complex procedures. However, limitations include the small sample size and single-center design, necessitating larger studies to confirm its broader applicability. MR-based education represents a promising tool to improve patient engagement and comprehension in surgical decision making. Full article
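The reported Z = −2.524, p = 0.012 is consistent with a Wilcoxon signed-rank test on paired pre/post comprehension scores; that analysis is a one-liner with SciPy. A minimal sketch with fabricated scores (not the study's data):

```python
from scipy.stats import wilcoxon

# Fabricated pre/post comprehension scores for eight paired patients
pre  = [4, 5, 3, 6, 4, 5, 4, 3]
post = [7, 8, 6, 8, 6, 7, 7, 6]

stat, p = wilcoxon(pre, post)     # paired, non-parametric
print(f"W = {stat}, p = {p:.3f}")
```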

23 pages, 7791 KiB  
Article
Effect of Interactive Virtual Reality on the Teaching of Conceptual Design in Engineering and Architecture Fields
by Elena M. Díaz González, Rachid Belaroussi, Ovidia Soto-Martín, Montserrat Acosta and Jorge Martín-Gutierrez
Appl. Sci. 2025, 15(8), 4205; https://doi.org/10.3390/app15084205 - 11 Apr 2025
Viewed by 1261
Abstract
This research paper explores the impact of immersive virtual reality (IVR) on the teaching of conceptual design in engineering and architecture fields, focusing on the use of interactive 3D drawing tools in virtual and augmented reality environments. The study analyzes how IVR influences spatial understanding, idea communication, and immersive 3D sketching for industrial and architectural design. Additionally, it examines user perceptions of virtual spaces prior to physical construction and evaluates the effectiveness of these technologies through surveys administered to mechanical engineering students utilizing VR/AR headsets. A structured methodology was developed for students enrolled in an industrial design course, comprising four phases: initial theoretical instruction on ephemeral architecture, immersive 3D sketching sessions using Meta Quest 2 and Microsoft HoloLens 2 VR/AR headsets, detailed CAD modeling based on conceptual sketches, and immersive virtual tours to evaluate user perception and design efficacy. Ad hoc questionnaires specifically designed for this research were employed. The results indicate a positive reception to IVR, emphasizing its ease of use, intuitive learning process, and effectiveness in improving motivation, academic performance, and student engagement during the conceptual design phase in graphic engineering education. Full article

27 pages, 11200 KiB  
Article
An Automatic Registration System Based on Augmented Reality to Enhance Civil Infrastructure Inspections
by Leonardo Binni, Massimo Vaccarini, Francesco Spegni, Leonardo Messi and Berardo Naticchia
Buildings 2025, 15(7), 1146; https://doi.org/10.3390/buildings15071146 - 31 Mar 2025
Cited by 1 | Viewed by 715
Abstract
Manual geometric and semantic alignment of inspection data with existing digital models (field-to-model data registration) and on-site access to relevant information (model-to-field data registration) represent cumbersome procedures that cause significant loss of information and fragmentation, hindering the efficiency of civil infrastructure inspections. To address the bidirectional registration challenge, this study introduces a high-accuracy automatic registration method and system based on Augmented Reality (AR) that streamlines data exchange between the field and a knowledge graph-based Digital Twin (DT) platform for infrastructure management, and vice versa. A centimeter-level 6-DoF pose estimation of the AR device in large-scale, open unprepared environments is achieved by implementing a hybrid approach based on Real-Time Kinematic and Visual Inertial Odometry to cope with urban-canyon scenarios. For this purpose, a low-cost and non-invasive RTK receiver was prototyped and firmly attached to an AR device (i.e., Microsoft HoloLens 2). Multiple filters and latency compensation techniques were implemented to enhance registration accuracy. The system was tested in a real-world scenario involving the inspection of a highway viaduct. Throughout the use case inspection, the system seamlessly and automatically provided field operators with on-field access to existing DT information (i.e., open BIM models) such as georeferenced holograms and facilitated the enrichment of the asset’s DT through the automatic registration of inspection data (i.e., images) with the open BIM models included in the DT. This study contributes to DT-based civil infrastructure management by establishing a bidirectional and seamless integration between virtual and physical entities. Full article
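Of the techniques named, latency compensation can be as simple as extrapolating the last RTK fix to the render timestamp under a constant-velocity assumption. A hypothetical sketch (the paper's filtering is more elaborate):

```python
import numpy as np

def compensate_latency(p_prev, t_prev, p_curr, t_curr, t_render):
    """Extrapolate an RTK position to the render timestamp assuming
    constant velocity over the last fix interval (hypothetical sketch)."""
    v = (p_curr - p_prev) / (t_curr - t_prev)        # m/s
    return p_curr + v * (t_render - t_curr)

p_prev = np.array([10.00, 5.00, 1.20])   # meters, previous RTK fix
p_curr = np.array([10.10, 5.02, 1.20])   # latest RTK fix
print(compensate_latency(p_prev, 0.00, p_curr, 0.20, t_render=0.28))
```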

19 pages, 6442 KiB  
Article
Synergy-Based Evaluation of Hand Motor Function in Object Handling Using Virtual and Mixed Realities
by Yuhei Sorimachi, Hiroki Akaida, Kyo Kutsuzawa, Dai Owaki and Mitsuhiro Hayashibe
Sensors 2025, 25(7), 2080; https://doi.org/10.3390/s25072080 - 26 Mar 2025
Viewed by 563
Abstract
This study introduces a novel system for evaluating hand motor function through synergy-based analysis during object manipulation in virtual and mixed-reality environments. Conventional assessments of hand function are often subjective, relying on visual observation by therapists or patient-reported outcomes. To address these limitations, we developed a system that utilizes the Leap Motion controller (LMC) to capture finger motion data without the constraints of glove-type devices. Spatial synergies were extracted using principal component analysis (PCA) and Varimax rotation, providing insights into finger motor coordination through a sparse decomposition. Additionally, we incorporated the HoloLens 2 to create a mixed-reality object manipulation task that enhances the user's spatial awareness, improving natural interaction with virtual objects. Our results demonstrate that synergy-based analysis allows for the systematic detection of hand movement abnormalities that are not captured by traditional task performance metrics. This system shows promise for advancing rehabilitation by enabling more objective and detailed evaluations of finger motor function, facilitating personalized therapy, and potentially contributing to the early detection of motor impairments. Full article
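The synergy pipeline named here, PCA followed by Varimax rotation, is standard; Varimax is not built into scikit-learn but is short to implement. A minimal sketch on fabricated joint-angle data:

```python
import numpy as np
from sklearn.decomposition import PCA

def varimax(loadings, tol=1e-6, max_iter=100):
    """Varimax rotation of a (features x components) loading matrix."""
    L = loadings.copy()
    n, k = L.shape
    R = np.eye(k)
    var_old = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        U, S, Vt = np.linalg.svd(
            L.T @ (Lr**3 - Lr @ np.diag((Lr**2).sum(axis=0)) / n))
        R = U @ Vt
        var_new = S.sum()
        if var_new - var_old < tol:
            break
        var_old = var_new
    return L @ R

# Fabricated finger joint-angle recordings: samples x joints
rng = np.random.default_rng(1)
angles = rng.normal(size=(500, 10))

pca = PCA(n_components=3).fit(angles)
synergies = varimax(pca.components_.T)     # joints x synergies, sparser
print(synergies.shape)                     # (10, 3)
```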
