Search Results (241)

Search Parameters:
Keywords = augmented reality navigator

17 pages, 2399 KB  
Article
SADAMB: Advancing Spatially-Aware Vision-Language Modeling Through Datasets, Metrics, and Benchmarks
by Giorgos Papadopoulos, Petros Drakoulis, Athanasios Ntovas, Alexandros Doumanoglou and Dimitris Zarpalas
Computers 2025, 14(10), 413; https://doi.org/10.3390/computers14100413 - 29 Sep 2025
Abstract
Understanding spatial relationships between objects in images is crucial for robotic navigation, augmented reality systems, and autonomous driving applications, among others. However, existing vision-language benchmarks often overlook explicit spatial reasoning, limiting progress in this area. We attribute this limitation in part to existing open datasets and evaluation metrics, which tend to overlook spatial details. To address this gap, we make three contributions: First, we greatly extend the COCO dataset with annotations of spatial relations, providing a resource for spatially aware image captioning and visual question answering. Second, we propose a new evaluation framework encompassing metrics that assess image captions’ spatial accuracy at both the sentence and dataset levels. And third, we conduct a benchmark study of various vision encoder–text decoder transformer architectures for image captioning using the introduced dataset and metrics. Results reveal that current models capture spatial information only partially, underscoring the challenges of spatially grounded caption generation. Full article

25 pages, 1278 KB  
Review
Eye-Tracking Advancements in Architecture: A Review of Recent Studies
by Mário Bruno Cruz, Francisco Rebelo and Jorge Cruz Pinto
Buildings 2025, 15(19), 3496; https://doi.org/10.3390/buildings15193496 - 28 Sep 2025
Abstract
This Scoping Review (ScR) synthesizes advances in architectural eye-tracking (ET) research published between 2010 and 2024. Drawing on 75 peer-reviewed studies that met clear inclusion criteria, it traces the field’s rapid expansion, from only 20 experiments before 2018 to more than 45 new investigations in the three years thereafter, situating these developments within the longer historical evolution of ET hardware and analytical paradigms. The review maps 13 recurrent areas of application, focusing on design evaluation, wayfinding and spatial navigation, end-user experience, and architectural education. Across these domains, ET reliably reveals where occupants focus, for how long, and in what sequence, providing objective evidence that complements designer intuition and conventional post-occupancy surveys. Experts and novices may display distinct gaze signatures; for example, architects spend longer fixating on contextual and structural cues, whereas lay users dwell on decorative details, highlighting possible pedagogical opportunities. Despite these benefits, persistent challenges include data loss in dynamic or outdoor settings, calibration drift, single-user hardware constraints, and the need to triangulate gaze metrics with cognitive or affective measures. Future research directions emphasize integrating ET with virtual or augmented reality (VR/AR) to validate designs interactively, improving mobile tracking accuracy, and establishing shared datasets to enable replication and meta-analysis. Overall, the study demonstrates that ET is maturing into an indispensable, evidence-based lens for creating more intuitive, legible, and human-centered architecture. Full article
(This article belongs to the Special Issue Emerging Trends in Architecture, Urbanization, and Design)

12 pages, 2022 KB  
Case Report
Implementation of Medicalholodeck® for Augmented Reality Surgical Navigation in Microsurgical Mandibular Reconstruction: Enhanced Vessel Identification
by Norman Alejandro Rendón Mejía, Hansel Gómez Arámbula, José Humberto Baeza Ramos, Yidam Villa Martínez, Francisco Hernández Ávila, Mónica Quiñonez Pérez, Carolina Caraveo Aguilar, Rogelio Mariñelarena Hernández, Claudio Reyes Montero, Claudio Ramírez Espinoza and Armando Isaac Reyes Carrillo
Healthcare 2025, 13(19), 2406; https://doi.org/10.3390/healthcare13192406 - 24 Sep 2025
Abstract
Mandibular reconstruction with the fibula free flap is the gold standard for large defects, with virtual surgical planning becoming integral to the process. The localization and dissection of critical vessels, such as the recipient vessels in the neck and the perforating vessels of the fibula flap, are demanding steps that directly impact surgical success. Augmented reality (AR) offers a solution by overlaying three-dimensional virtual models directly onto the surgeon’s view of the operative field. We report the first case in Latin America utilizing a low-cost, commercially available holographic navigation system for complex microsurgical mandibular reconstruction. A 26-year-old female presented with a large, destructive osteoblastoma of the left mandible, requiring wide resection and reconstruction. Preoperative surgical planning was conducted using DICOM data from the patient’s CT scans to generate 3D holographic models with the Medicalholodeck® software. Intraoperatively, the primary surgeon used the AR system to superimpose the holographic models onto the patient. The system provided real-time, immersive guidance for identifying the facial artery, which was anatomically displaced by the tumor mass, as well as for localizing the peroneal artery perforators for donor flap harvest. A free fibula flap was harvested and transferred. The patient remained free of clinical complications during the early postoperative course and at three months of follow-up. This case demonstrates the successful application and feasibility of using a low-cost, consumer-grade holographic navigation system. Full article
(This article belongs to the Special Issue Virtual Reality Technologies in Health Care)

30 pages, 3101 KB  
Review
Artificial Intelligence in the Diagnosis and Treatment of Brain Gliomas
by Kyriacos Evangelou, Ioannis Kotsantis, Aristotelis Kalyvas, Anastasios Kyriazoglou, Panagiota Economopoulou, Georgios Velonakis, Maria Gavra, Amanda Psyrri, Efstathios J. Boviatsis and Lampis C. Stavrinou
Biomedicines 2025, 13(9), 2285; https://doi.org/10.3390/biomedicines13092285 - 17 Sep 2025
Abstract
Brain gliomas are highly infiltrative and heterogeneous tumors, whose early and accurate detection as well as therapeutic management are challenging. Artificial intelligence (AI) has the potential to redefine the landscape in neuro-oncology and can enhance glioma detection, imaging segmentation, and non-invasive molecular characterization better than conventional diagnostic modalities through deep learning-driven radiomics and radiogenomics. AI algorithms have been shown to predict genotypic and phenotypic glioma traits with remarkable accuracy and facilitate patient-tailored therapeutic decision-making. Such algorithms can be incorporated into surgical planning to optimize resection extent while preserving eloquent cortical structures through preoperative imaging fusion and intraoperative augmented reality-assisted navigation. Beyond resection, AI may assist in radiotherapy dose distribution optimization, thus ensuring maximal tumor control while minimizing surrounding tissue collateral damage. AI-guided molecular profiling and treatment response prediction models can facilitate individualized chemotherapy regimen tailoring, especially for glioblastomas with MGMT promoter methylation. Applications in immunotherapy are emerging, and research is focusing on AI to identify tumor microenvironment signatures predictive of immune checkpoint inhibition responsiveness. AI-integrated prognostic models incorporating radiomic, histopathologic, and clinical variables can additionally improve survival stratification and recurrence risk prediction remarkably, to refine follow-up strategies in high-risk patients. However, data heterogeneity, algorithmic transparency concerns, and regulatory challenges hamper AI implementation in neuro-oncology despite its transformative potential. For clinical translation, it is therefore imperative to develop interpretable AI frameworks, integrate multimodal datasets, and perform robust external validation.
Future research should prioritize the creation of generalizable AI models, combine larger and more diverse datasets, and integrate multimodal imaging and molecular data to overcome these obstacles and revolutionize AI-assisted patient-specific glioma management. Full article
(This article belongs to the Special Issue Mechanisms and Novel Therapeutic Approaches for Gliomas)

24 pages, 1501 KB  
Review
Artificial Intelligence and Digital Tools Across the Hepato-Pancreato-Biliary Surgical Pathway: A Systematic Review
by Andreas Efstathiou, Evgenia Charitaki, Charikleia Triantopoulou and Spiros Delis
J. Clin. Med. 2025, 14(18), 6501; https://doi.org/10.3390/jcm14186501 - 15 Sep 2025
Abstract
Background: Hepato-pancreato-biliary (HPB) surgery involves operations that depend heavily on precise imaging, careful planning, and intraoperative decision-making. The rapid emergence of artificial intelligence (AI) and digital tools has assisted in these domains. Methods: We performed a PRISMA-guided systematic review (searches through June 2025) of AI/digital technologies applied to HPB surgical care, including novel models such as machine learning, deep learning, radiomics, augmented/mixed reality, and computer vision. Eligible studies had to address imaging interpretation, preoperative planning, intraoperative guidance, or outcome prediction. Results: In total, 38 studies met inclusion criteria. Imaging models constructed with AI showed high diagnostic performance for lesion detection and classification (commonly AUC ~0.80–0.98). Moreover, risk models using machine learning frequently exceeded traditional scores for predicting postoperative complications (e.g., pancreatic fistula). AI-assisted three-dimensional visual reconstructions enhanced anatomical understanding for preoperative planning, while augmented and mixed-reality systems enabled real-time intraoperative navigation in pilot series. Computer-vision systems recognized critical intraoperative landmarks (e.g., critical view of safety) and detected hazards such as bleeding in near real time. Most of the studies included were retrospective, single-center, or feasibility designs, with limited external validation. Conclusions: AI and digital tools show promising results across the HPB pathway—from preoperative diagnostics to intraoperative safety and guidance. The evidence to date supports technical feasibility and suggests clinical benefit, but routine adoption and further conclusions should await prospective, multicenter validation and consistent reporting.
With continued refinement, multidisciplinary collaboration, appropriate cost effectiveness, and attention to ethics and implementation, these technologies could improve the precision, safety, and outcomes of HPB surgery. Full article

39 pages, 12608 KB  
Article
An Audio Augmented Reality Navigation System for Blind and Visually Impaired People Integrating BIM and Computer Vision
by Leonardo Messi, Massimo Vaccarini, Alessandra Corneli, Alessandro Carbonari and Leonardo Binni
Buildings 2025, 15(18), 3252; https://doi.org/10.3390/buildings15183252 - 9 Sep 2025
Abstract
Since statistics show a growing trend in blindness and visual impairment, the development of navigation systems supporting Blind and Visually Impaired People (BVIP) must be urgently addressed. Guiding BVIP to a desired destination across indoor and outdoor settings without relying on a pre-installed infrastructure is an open challenge. While numerous solutions have been proposed by researchers in recent decades, a comprehensive navigation system that can support BVIP mobility in mixed and unprepared environments is still missing. This study proposes a novel navigation system that enables BVIP to request directions and be guided to a desired destination across heterogeneous and unprepared settings. To achieve this, the system applies Computer Vision (CV)—namely an integrated Structure from Motion (SfM) pipeline—for tracking the user and exploits Building Information Modelling (BIM) semantics for planning the reference path to reach the destination. Audio Augmented Reality (AAR) technology is adopted for directional guidance delivery due to its intuitive and non-intrusive nature, which allows seamless integration with traditional mobility aids (e.g., white canes or guide dogs). The developed system was tested on a university campus to assess its performance during both path planning and navigation tasks, the latter involving users in both blindfolded and sighted conditions. Quantitative results indicate that the system computed paths in about 10 milliseconds and effectively guided blindfolded users to their destination, achieving performance comparable to that of sighted users. Remarkably, users in blindfolded conditions completed navigation tests with an average deviation from the reference path within the 0.60-meter shoulder width threshold in 100% of the trials, compared to 75% of the tests conducted by sighted users. These findings demonstrate the system’s accuracy in maintaining navigational alignment within acceptable human spatial tolerances. 
The proposed approach contributes to the advancement of BVIP assistive technologies by enabling scalable, infrastructure-free navigation across heterogeneous environments. Full article

16 pages, 3781 KB  
Systematic Review
Augmented Reality in Dental Extractions: Narrative Review and an AR-Guided Impacted Mandibular Third-Molar Case
by Gerardo Pellegrino, Carlo Barausse, Subhi Tayeb, Elisabetta Vignudelli, Martina Casaburi, Stefano Stradiotti, Fabrizio Ferretti, Laura Cercenelli, Emanuela Marcelli and Pietro Felice
Appl. Sci. 2025, 15(17), 9723; https://doi.org/10.3390/app15179723 - 4 Sep 2025
Abstract
Background: Augmented-reality (AR) navigation is emerging as a means of turning pre-operative cone-beam CT data into intuitive, in situ guidance for difficult tooth removal, yet the scattered evidence has never been consolidated nor illustrated with a full clinical workflow. Aims: This study aims to narratively synthesise AR applications limited to dental extractions and to illustrate a full AR-guided clinical workflow. Methods: We performed a PRISMA-informed narrative search (PubMed + Cochrane, January 2015–June 2025) focused exclusively on AR applications in dental extractions and found nine eligible studies. Results: These pilot reports—covering impacted third molars, supernumerary incisors, canines, and cyst-associated teeth—all used marker-less registration on natural dental surfaces and achieved mean target-registration errors below 1 mm with headset set-up times under three minutes; the only translational series (six molars) recorded a mean surgical duration of 21 ± 6 min and a System Usability Scale score of 79. To translate these findings into practice, we describe a case of AR-guided mandibular third-molar extraction. A QR-referenced 3D-printed splint, intra-oral scan, and CBCT were fused to create a colour-coded hologram rendered on a Magic Leap 2 headset. The procedure took 19 min and required only a conservative osteotomy and accurate odontotomy that ended without neurosensory disturbance (VAS pain 2/10 at one week). Conclusions: Collectively, the literature synthesis and clinical demonstration suggest that current AR platforms deliver sub-millimetre accuracy, minimal workflow overhead, and high user acceptance in high-risk extractions while highlighting the need for larger, controlled trials to prove tangible patient benefit. Full article

16 pages, 6484 KB  
Review
Digital Technologies in Implantology: A Narrative Review
by Ani Kafedzhieva, Angelina Vlahova and Bozhana Chuchulska
Bioengineering 2025, 12(9), 927; https://doi.org/10.3390/bioengineering12090927 - 29 Aug 2025
Abstract
Digital technologies have significantly advanced implant dentistry, refining diagnosis, treatment planning, surgical precision, and prosthetic rehabilitation. This review explores recent developments, emphasizing accuracy, efficiency, and clinical impact. A literature analysis identifies key innovations, such as digital planning, guided surgery, dynamic navigation, digital impressions and CAD/CAM prosthetics. Digital workflows enhance implant placement by improving precision and reducing deviations compared to freehand techniques. Dynamic navigation provides real-time guidance, offering accuracy comparable to static guides and proving benefits in complex cases. Digital impressions demonstrate high precision, which can match or, in some scenarios, surpass conventional methods, though conventional impressions remain the gold standard for full-arch cases. CAD/CAM technology optimizes prosthetic fit, aesthetics, and material selection. Artificial intelligence and machine learning contribute to treatment planning and predictive analytics, yet challenges persist, including high costs, the need for specialized training, and long-term clinical validation. This review underscores the advantages of digital approaches—improved accuracy, better communication, and minimally invasive procedures—while addressing existing limitations. Emerging technologies, such as AI, augmented reality, and 3D printing, are expected to further transform implantology. Continued research is crucial to fully integrate digital advancements and enhance patient outcomes. Full article
(This article belongs to the Special Issue Dentistry Regenerative Medicine and Oral Bioengineering)

23 pages, 3314 KB  
Article
Optimization of Manifold Learning Using Differential Geometry for 3D Reconstruction in Computer Vision
by Yawen Wang
Mathematics 2025, 13(17), 2771; https://doi.org/10.3390/math13172771 - 28 Aug 2025
Abstract
Manifold learning is a significant computer vision task used to describe high-dimensional visual data in lower-dimensional manifolds without sacrificing the intrinsic structural properties required for 3D reconstruction. Isomap, Locally Linear Embedding (LLE), Laplacian Eigenmaps, and t-SNE are helpful in data topology preservation but are typically indifferent to the intrinsic differential geometric characteristics of the manifolds, thus leading to deformation of spatial relations and reconstruction accuracy loss. This research proposes an Optimization of Manifold Learning using Differential Geometry Framework (OML-DGF) to overcome the drawbacks of current manifold learning techniques in 3D reconstruction. The framework employs intrinsic geometric properties—like curvature preservation, geodesic coherence, and local–global structure correspondence—to produce structurally correct and topologically consistent low-dimensional embeddings. The model utilizes a Riemannian metric-based neighborhood graph, approximations of geodesic distances with shortest path algorithms, and curvature-sensitive embedding from second-order derivatives in local tangent spaces. A curvature-regularized objective function is derived to steer the embedding toward improved geometric coherence. Principal Component Analysis (PCA) reduces initial dimensionality, and LLE is modified with curvature weighting. Experiments on the ModelNet40 dataset show an impressive improvement in reconstruction quality, with accuracy gains of up to 17% and better structure preservation than traditional methods. These findings confirm the advantage of incorporating intrinsic geometry into the embedding to improve the accuracy of 3D reconstruction.
The suggested approach is computationally light and scalable and can be utilized in real-time contexts such as robotic navigation, medical image diagnosis, digital heritage reconstruction, and augmented/virtual reality systems in which strong 3D modeling is a critical need. Full article

19 pages, 23064 KB  
Article
Intraoperative Computed Tomography, Ultrasound, and Augmented Reality in Mesial Temporal Lobe Epilepsy Surgery—A Retrospective Cohort Study
by Franziska Neumann, Alexander Grote, Marko Gjorgjevski, Barbara Carl, Susanne Knake, Katja Menzler, Christopher Nimsky and Miriam H. A. Bopp
Sensors 2025, 25(17), 5301; https://doi.org/10.3390/s25175301 - 26 Aug 2025
Abstract
Mesial temporal lobe epilepsy (mTLE) surgery, particularly selective amygdalohippocampectomy (sAHE), is a recognized treatment for pharmacoresistant temporal lobe epilepsy (TLE). Accurate intraoperative orientation is crucial for complete resection while maintaining functional integrity. This study evaluated the usability and effectiveness of multimodal neuronavigation and microscope-based augmented reality (AR) with intraoperative computed tomography (iCT) and navigated intraoperative ultrasound (iUS) in 28 patients undergoing resective surgery. Automatic iCT-based registration provided high initial navigation accuracy. Navigated iUS was utilized to verify navigational accuracy and assess the extent of resection during the procedure. AR support was successfully implemented in all cases, enhancing surgical orientation, surgeon comfort, and patient safety, while also aiding training and education. At one-year follow-up, 60.7% of patients achieved complete seizure freedom (ILAE Class 1), rising to 67.9% at the latest follow-up (median 4.6 years). Surgical complications were present in three cases (10.7%), but none resulted in permanent deficits. The integration of microscope-based AR with iCT and navigated iUS provides a precise and safe approach to resection in TLE surgery, additionally serving as a valuable tool for neurosurgical training and education. Full article
(This article belongs to the Special Issue Virtual, Augmented, and Mixed Reality in Neurosurgery)

21 pages, 9031 KB  
Article
A Pyramid Convolution-Based Scene Coordinate Regression Network for AR-GIS
by Haobo Xu, Chao Zhu, Yilong Wang, Huachen Zhu and Wei Ma
ISPRS Int. J. Geo-Inf. 2025, 14(8), 311; https://doi.org/10.3390/ijgi14080311 - 15 Aug 2025
Abstract
Camera tracking plays a pivotal role in augmented reality geographic information systems (AR-GIS) and location-based services (LBS), serving as a crucial component for accurate spatial awareness and navigation. Current learning-based camera tracking techniques, while achieving superior accuracy in pose estimation, often overlook changes in scale. This oversight results in less stable localization performance and challenges in coping with dynamic environments. To address these challenges, we propose a pyramid convolution-based scene coordinate regression network (PSN). Our approach leverages a pyramidal convolutional structure, integrating kernels of varying sizes and depths, alongside grouped convolutions that alleviate computational demands while capturing multi-scale features from the input imagery. The network then incorporates a novel randomization strategy, effectively diminishing correlated gradients and markedly bolstering the training process’s efficiency. Finally, a regression layer maps 2D pixel coordinates to their corresponding 3D scene coordinates with precision. The experimental outcomes show that our proposed method achieves centimeter-level accuracy in small-scale scenes and decimeter-level accuracy in large-scale scenes after only a few minutes of training. It offers a favorable balance between localization accuracy and efficiency, and effectively supports augmented reality visualization in dynamic environments. Full article

15 pages, 1726 KB  
Systematic Review
Application of Augmented Reality in Reverse Total Shoulder Arthroplasty: A Systematic Review
by Jan Orlewski, Bettina Hochreiter, Karl Wieser and Philipp Kriechling
J. Clin. Med. 2025, 14(15), 5533; https://doi.org/10.3390/jcm14155533 - 6 Aug 2025
Abstract
Background: Reverse total shoulder arthroplasty (RTSA) is increasingly used for managing cuff tear arthropathy, osteoarthritis, complex fractures, and revision procedures. As the demand for surgical precision and reproducibility grows, immersive technologies such as virtual reality (VR), augmented reality (AR), and metaverse-based platforms are being explored for surgical training, intraoperative guidance, and rehabilitation. While early data suggest potential benefits, a focused synthesis specific to RTSA is lacking. Methods: This systematic review was conducted in accordance with PRISMA 2020 guidelines. A comprehensive search of PubMed, Scopus, and Cochrane Library databases was performed through 30 May 2025. Eligible studies included those evaluating immersive technologies in the context of RTSA for skill acquisition or intraoperative guidance. Only peer-reviewed articles published in English were included. Data were synthesized narratively due to heterogeneity in study design and outcome metrics. Results: Out of 628 records screened, 21 studies met the inclusion criteria. Five studies evaluated immersive VR for surgical training: four randomized controlled trials and one retrospective case series. VR training improved procedural efficiency and showed non-inferiority to cadaveric training. Sixteen studies investigated intraoperative navigation or AR guidance. Clinical and cadaveric studies consistently reported improved accuracy in glenoid baseplate positioning with reduced angular and linear deviations in postoperative controls as compared to preoperative planning. Conclusions: Immersive technologies show promise in enhancing training, intraoperative accuracy, and procedural consistency in RTSA. VR and AR platforms may support standardized surgical education and precision-based practice, but their broad clinical impact remains limited by small sample sizes, heterogeneous methodologies, and limited long-term outcomes. 
Further multicenter trials with standardized endpoints and cost-effectiveness analyses are warranted. Postoperative rehabilitation using immersive technologies in RTSA remains underexplored and presents an opportunity for future research. Full article

34 pages, 41467 KB  
Article
Evaluating Spatial Decision-Making and Player Experience in a Remote Multiplayer Augmented Reality Hide-and-Seek Game
by Yasas Sri Wickramasinghe, Heide Karen Lukosch, James Everett and Stephan Lukosch
Multimodal Technol. Interact. 2025, 9(8), 79; https://doi.org/10.3390/mti9080079 - 31 Jul 2025
Abstract
This study investigates how remote multiplayer gameplay, enabled through Augmented Reality (AR), transforms spatial decision-making and enhances player experience in a location-based augmented reality game (LBARG). A remote multiplayer handheld-based AR game was designed and evaluated on how it influences players’ spatial decision-making strategies, engagement, and gameplay experience. In a user study involving 60 participants, we compared remote gameplay in our AR game with traditional hide-and-seek. We found that AR significantly transforms traditional gameplay by introducing different spatial interactions, which enhanced spatial decision-making and collaboration. Our results also highlight the potential of AR to increase player engagement and social interaction, despite the challenges posed by the added navigation complexities. These findings contribute to the engaging design of future AR games and beyond. Full article

19 pages, 3242 KB  
Article
Augmented Reality Navigation for Acupuncture Procedures with Smart Glasses
by Shin-Yan Chiou, Hsiao-Hsiang Chang, Yu-Cheng Chen and Geng-Hao Liu
Electronics 2025, 14(15), 3025; https://doi.org/10.3390/electronics14153025 - 29 Jul 2025
Abstract
Traditional acupuncture relies on the precise selection of acupuncture points to adjust Qi flow along meridians. Traditionally, acupuncture points are localized using the cun, a body-relative proportional unit of measurement. However, locating specific points can be challenging, even for experienced practitioners. This study aims to enhance the accuracy and efficiency of acupuncture point localization by introducing an augmented reality (AR) navigation system utilizing AR glasses (Magic Leap One). The system employs a Six-Point Landmark-Based AR Registration method to overlay an acupuncture point model onto a patient’s head without the need for external markers. Methods included testing with traditional Chinese medicine students, measuring positional errors, and evaluating stability. Results demonstrated an average error of 5.01 ± 2.64 mm, which is well within the therapeutic range of 2 cun (about 5 cm), with minimal drift during stability tests. This AR system provides an accurate and intuitive tool for practitioners and learners, reducing variability in acupuncture point selection and offering promise for broader clinical applications. Full article
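The Six-Point Landmark-Based AR Registration described above aligns a virtual acupuncture-point model to the patient's head from six corresponding landmarks. The abstract does not specify the underlying algorithm; a minimal sketch using the standard least-squares rigid (Kabsch) registration, together with the kind of mean positional-error metric the study reports, might look like the following (all function names are hypothetical):

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src landmarks onto dst.

    src, dst: (N, 3) arrays of corresponding 3-D landmark coordinates.
    Returns rotation R (3x3) and translation t (3,) with R @ src[i] + t ~= dst[i].
    """
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

def mean_positional_error(model_pts, measured_pts, R, t):
    """Mean Euclidean distance between transformed model points and their
    physically measured counterparts (same units as the inputs, e.g. mm)."""
    mapped = model_pts @ R.T + t
    return float(np.linalg.norm(mapped - measured_pts, axis=1).mean())
```

With six well-spread, non-collinear landmarks this one-shot registration is markerless in the sense used above: only anatomical points, not fiducials, anchor the overlay.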

19 pages, 3117 KB  
Article
Feasibility and Accuracy of a Dual-Function AR-Guided System for PSI Positioning and Osteotomy Execution in Pelvic Tumour Surgery: A Cadaveric Study
by Tanya Fernández-Fernández, Javier Orozco-Martínez, Carla de Gregorio-Bermejo, Elena Aguilera-Jiménez, Amaia Iribar-Zabala, Lydia Mediavilla-Santos, Javier Pascau, Mónica García-Sevilla, Rubén Pérez-Mañanes and José Antonio Calvo-Haro
Bioengineering 2025, 12(8), 810; https://doi.org/10.3390/bioengineering12080810 - 28 Jul 2025
Abstract
Objectives: Pelvic tumor resections demand high surgical precision to ensure clear margins while preserving function. Although patient-specific instruments (PSIs) improve osteotomy accuracy, positioning errors remain a limitation. This study evaluates the feasibility, accuracy, and usability of a novel dual-function augmented reality (AR) system that supports both PSI positioning and osteotomy execution through a head-mounted display (HMD). Methods: Ten fresh-frozen cadaveric hemipelves underwent AR-assisted internal hemipelvectomy, using customized 3D-printed PSIs and new in-house AR software integrated into the HMD. Angular and translational deviations between planned and executed osteotomies were measured using postoperative CT analysis. Absolute angular errors were computed from plane normals; translational deviation was assessed as maximum error at the osteotomy corner point in both sagittal (pitch) and coronal (roll) planes. A Wilcoxon signed-rank test and Bland–Altman plots were used to assess intra-workflow cumulative error. Results: The mean absolute angular deviation was 5.11 ± 1.43°, with 86.66% of osteotomies within acceptable thresholds. Maximum pitch and roll deviations were 4.53 ± 1.32 mm and 2.79 ± 0.72 mm, respectively, with 93.33% and 100% of osteotomies meeting translational accuracy criteria. Wilcoxon analysis showed significantly lower angular error when comparing final executed planes to intermediate AR-displayed planes (p < 0.05), supporting improved PSI positioning accuracy with AR guidance. Surgeons rated the system highly (mean satisfaction ≥ 4.0) for usability and clinical utility. Conclusions: This cadaveric study confirms the feasibility and precision of an HMD-based AR system for PSI-guided pelvic osteotomies. 
The system demonstrated strong accuracy and high surgeon acceptance, highlighting its potential for clinical adoption in complex oncologic procedures. Full article
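The abstract reports absolute angular errors computed from plane normals and translational error as the distance at an osteotomy corner point. The study's exact CT-analysis pipeline is not described here, but assuming standard analytic geometry, these two metrics can be sketched as follows (function names are illustrative):

```python
import numpy as np

def angular_deviation_deg(n_planned, n_executed):
    """Absolute angle in degrees between planned and executed plane normals.

    Plane normals are sign-ambiguous, so the absolute dot product folds the
    result into [0, 90] degrees.
    """
    a = np.asarray(n_planned, float)
    b = np.asarray(n_executed, float)
    cos_t = abs(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))

def corner_deviation(corner, plane_point, plane_normal):
    """Perpendicular distance (same units as the CT data, e.g. mm) from an
    executed osteotomy corner point to the planned cutting plane."""
    n = np.asarray(plane_normal, float)
    n = n / np.linalg.norm(n)
    return float(abs((np.asarray(corner, float) - np.asarray(plane_point, float)) @ n))
```

Evaluating `corner_deviation` against planes fit in the sagittal and coronal orientations would yield per-cut pitch and roll errors comparable to the 4.53 mm and 2.79 mm figures above.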
