
Search Results (182)

Search Parameters:
Keywords = Medical Augmented Reality

24 pages, 560 KB  
Systematic Review
Augmented Reality Technologies for Radiation Safety Training: A Systematic Review of Sensor Integration and Visualization Approaches
by Rajiv Khadka, Xingyue Yang, Jack Dunker and John Koudelka
Future Internet 2026, 18(3), 161; https://doi.org/10.3390/fi18030161 - 19 Mar 2026
Abstract
This paper presents a comprehensive systematic review examining the application of augmented reality (AR) and sensor technologies for visualizing ionizing radiation in virtual training environments. The review methodology involved systematic identification and analysis of the relevant literature based on predetermined criteria including publication type, year of publication, application domain, and technological approach. The literature search encompassed publications from 2011 to 2021 across four major academic databases: Web of Science, Google Scholar, IEEE Xplore, and Scopus. Through rigorous screening following PRISMA 2020 guidelines, 23 research articles met the inclusion criteria for detailed analysis. From 404 initial database records, 360 were excluded during title/abstract screening (primarily for lacking AR components, radiation focus, or training applications) and 4 during full-text assessment (all for lacking sensor integration). The findings reveal that AR-based ionizing radiation visualization has been successfully implemented across diverse domains, including nuclear facility operations, medical procedures, CERN research activities, and educational and monitoring applications. The analysis identified multiple dimensions of impact, encompassing distinct benefits, emerging opportunities, and implementation challenges associated with AR deployment for ionizing radiation training. Each of these dimensions is comprehensively examined and documented within this review. Additionally, this study identifies critical research gaps that currently limit the full potential of AR technology in supporting ionizing radiation training programs. These gaps are systematically analyzed and discussed to establish clear directions for future research endeavors in this emerging field.
(This article belongs to the Special Issue Human-Computer Interaction and Virtual Reality (VR))

15 pages, 2008 KB  
Article
Application of the Ultraleap 3Di-Based Gesture-Controlled 3D Imaging Visualization System in Pulmonary Segmentectomy: A Single-Center Prospective Study
by Zhengnan Liu, Bin Wang, Chengrun Li, Ruiji Chen, Jixing Lin and Jie Li
Bioengineering 2026, 13(3), 284; https://doi.org/10.3390/bioengineering13030284 - 28 Feb 2026
Abstract
Objective: Pulmonary segmentectomy serves as a crucial approach for treating early-stage lung cancer. However, this procedure demands precise identification of segmental anatomy, and surgeons often need to repeatedly consult the patient’s 3D imaging data or other medical records during the operation. Traditional contact-based intraoperative imaging assistance devices involve cumbersome operation and pose risks to the sterile environment. This study aims to evaluate the clinical utility of an Ultraleap 3Di-based gesture-controlled 3D imaging visualization system for non-contact interaction during pulmonary segmentectomy in patients with early-stage lung cancer. Methods: This study enrolled 58 patients with early-stage non-small cell lung cancer scheduled for video-assisted thoracoscopic pulmonary segmentectomy from June 2025 to December 2025. Participants were randomly assigned to either the experimental group or the control group. Intraoperatively, the experimental group utilized the Ultraleap 3Di system for non-contact 3D image review, while the control group relied on conventional contact-based devices, operated by non-sterile assistants, for image retrieval. The compared outcomes included intraoperative image retrieval time, total operative time, intraoperative blood loss, R0 resection rate, postoperative drainage duration, and surgeon satisfaction. Results: The baseline characteristics were comparable between the two groups. The mean age was 53.66 ± 9.12 years in the experimental group and 55.21 ± 8.76 years in the control group (t = −0.66, p > 0.05); the experimental group included 16 males and 13 females, while the control group included 14 males and 15 females (χ2 = 0.276, p > 0.05). Preoperative pulmonary function, as measured by the FEV1/FVC ratio, was 74.48 ± 4.75% in the experimental group versus 76.08 ± 4.51% in the control group (t = −1.31, p > 0.05). The image retrieval time in the experimental group was significantly shorter than that in the control group (75.16 ± 19.38 s versus 209.59 ± 28.13 s, t = −21.19, p < 0.001, 95% CI [−147.13, −121.72], Cohen’s d = −5.57). The total operative time was also reduced (88.72 ± 13.82 min versus 96.55 ± 13.90 min, t = −2.15, p = 0.036, 95% CI [−15.12, −0.53], Cohen’s d = −0.57). No significant differences were observed between the two groups in terms of R0 resection rate (both 100%), intraoperative blood loss, or postoperative drainage duration (p > 0.05). The operating surgeons rated the system highly for image clarity, navigation timeliness, and overall utility, while the score for operational convenience was relatively neutral (mean score 3.2). Conclusions: The Ultraleap 3Di-based non-contact visualization system reduces the time required for intraoperative image retrieval and improves overall procedural efficiency in segmentectomy, without compromising surgical safety or oncological radicality. Future efforts should focus on optimizing the intuitiveness of gesture interaction and exploring its integration with augmented reality and artificial intelligence to further advance the system’s intelligence and practical utility.
(This article belongs to the Section Biomedical Engineering and Biomaterials)
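The effect sizes reported above can be reproduced from the summary statistics alone. A minimal Python check, assuming equal group sizes (n = 29 per arm, as stated in the abstract) and the pooled-standard-deviation form of Cohen's d:

```python
import math

def pooled_cohens_d(m1, s1, n1, m2, s2, n2):
    # Cohen's d using the pooled standard deviation of the two groups.
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / sp

def two_sample_t(m1, s1, n1, m2, s2, n2):
    # Equal-variance two-sample t statistic from summary statistics.
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# Image retrieval time: 75.16 +/- 19.38 s (experimental, n = 29)
# versus 209.59 +/- 28.13 s (control, n = 29).
d = pooled_cohens_d(75.16, 19.38, 29, 209.59, 28.13, 29)  # ~ -5.57
t = two_sample_t(75.16, 19.38, 29, 209.59, 28.13, 29)     # ~ -21.19
```

Both values match the reported t = −21.19 and Cohen's d = −5.57, consistent with a standard equal-variance comparison.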

12 pages, 3085 KB  
Article
Data-Driven Interactive Lens Control System Based on Dielectric Elastomer
by Hui Zhang, Zhijie Xia, Zhisheng Zhang and Jianxiong Zhu
Technologies 2026, 14(1), 68; https://doi.org/10.3390/technologies14010068 - 16 Jan 2026
Abstract
In order to solve the dynamic analysis and interactive imaging control problems in the deformation process of bionic soft lenses, dielectric elastomer (DE) actuators are separated from a convex lens, and data-driven eye-controlled motion technology is investigated. According to the DE properties, which are consistent with the deformation characteristics of hydrogel electrodes, the motion and deformation of eye-controlled lenses under film prestretching, lens size, and driving voltage are studied. The results show that when the driving voltage increases to 7.8 kV, the focal length of a lens with prestretch λ = 4 and diameter d = 1 cm varies between 49.7 mm and 112.5 mm, and the maximum focal-length change reaches 58.9%. In the eye-control design and experimental verification, a high-voltage DC supply was programmed, and the eye-movement signals controlling the lens were analyzed in MATLAB (R2023b). Eye-controlled interactive real-time motion and tunable imaging of the lens were realized, with a soft-lens response efficiency above 93%. The adaptive lens system developed in this research has the potential to be applied to medical rehabilitation, exploration, augmented reality (AR), and virtual reality (VR) in the future.
(This article belongs to the Special Issue AI Driven Sensors and Their Applications)

29 pages, 2297 KB  
Review
Digital Telecommunications in Medicine and Biomedical Engineering: Applications, Challenges, and Future Directions
by Nikolaos Karkanis, Andreas Giannakoulas, Kyriakos E. Zoiros, Theodoros N. F. Kaifas and Georgios A. A. Kyriacou
Eng 2026, 7(1), 19; https://doi.org/10.3390/eng7010019 - 1 Jan 2026
Abstract
Digital telecommunications have become the backbone of modern healthcare, transforming how patients and professionals interact, share information, and deliver treatment. The integration of telecommunications with medicine, biomedical engineering and health services has enabled rapid growth in telemedicine, remote patient monitoring, wearable biomedical devices, and data-driven clinical decision-making. Emerging technologies such as artificial intelligence, big data analytics, virtual and augmented reality and robotic tele-surgery are further expanding the scope of digital health. This review provides a comprehensive overview of the role of telecommunications in medicine and biomedical engineering. We classify key applications, highlight enabling technologies and critically examine the challenges regarding interoperability, data security, latency, and cost. Finally, we discuss future directions, including 5G/6G networks, edge computing, and privacy-preserving medical AI, emphasizing the need for reliable and equitable access to telecommunications-enabled healthcare worldwide.

42 pages, 1158 KB  
Review
Virtual Reality in Preclinical and Clinical Education—An Insight into Current Advancements and Future Perspectives
by Adam Brachet, Maciej Biskupski, Gabriela Hunek, Jakub Rusek, Aleksandra Bełżek, Alicja Forma, Grzegorz Teresiński, Robert Sitarz, Robert Karpiński and Jacek Baj
Appl. Sci. 2025, 15(24), 12941; https://doi.org/10.3390/app152412941 - 8 Dec 2025
Abstract
This review examines the current state of virtual reality (VR) applications in preclinical and clinical medical education, emphasizing their impact on teaching effectiveness and clinical competence. A structured literature analysis was conducted to evaluate VR-based educational strategies across key medical domains, including anatomy, biochemistry, histology, surgery, emergency medicine, neurology, pediatrics, psychiatry, radiology, and rehabilitation. The reviewed studies demonstrate that VR enhances procedural performance, improves knowledge retention, strengthens diagnostic accuracy, and supports the acquisition of non-technical skills such as communication and teamwork. VR applications were also shown to reduce patient anxiety and pain during clinical procedures and improve engagement in rehabilitation programs. Despite persisting challenges such as cost, accessibility, and technical limitations, current evidence supports the growing role of VR as an effective, scalable, and safe educational and clinical tool. This review highlights critical opportunities for integrating VR into medical curricula and outlines future research directions aimed at optimizing its implementation in healthcare education.
(This article belongs to the Special Issue Virtual Reality (VR) in Healthcare)

12 pages, 4149 KB  
Review
Projected Augmented Reality in Surgery: History, Validation, and Future Applications
by Nikhil Dipak Shah, Lohrasb Sayadi, Peyman Kassani and Raj Vyas
J. Clin. Med. 2025, 14(22), 8246; https://doi.org/10.3390/jcm14228246 - 20 Nov 2025
Abstract
Background/Objectives: Projected augmented reality (PAR) enables real-time projection of digital surgical information directly onto the operative field. This offers a hands-free, headset-free platform that is universally visible to all members of the surgical team. Compared to head-mounted display systems, which are limited by restricted fields of view, ergonomic challenges, and user exclusivity, PAR provides a more intuitive and collaborative surgical interface. When paired with artificial intelligence (AI), PAR has the potential to automate aspects of surgical planning and deliver high-precision guidance in both high-resource and global health settings. Our team is working on the development and validation of a PAR platform to dynamically project surgical and anatomic markings directly onto the patients intraoperatively. Methods: We developed a PAR system using a structured light scanner and depth camera to generate digital 3D surface reconstructions of a patient’s anatomy. Surgical markings were then made digitally, and a projector was used to precisely project these points directly onto the patient’s skin. We also developed a trained machine learning model that detects cleft lip landmarks and automatically designs surgical markings, with the plan to integrate this into our PAR system. Results: The PAR system accurately projected surgeon and AI-generated surgical markings onto anatomical models with sub-millimeter precision. Projections remained aligned during movement and were clearly visible to the entire surgical team without requiring wearable hardware. Conclusions: PAR integrated with AI provides accurate, real-time, and shared intraoperative guidance. This platform improves surgical precision and has broad potential for remote mentorship and global surgical training.
(This article belongs to the Special Issue Plastic Surgery: Challenges and Future Directions)

15 pages, 2942 KB  
Article
Development and Evaluation of a Next-Generation Medication Safety Support System Based on AI and Mixed Reality: A Study from South Korea
by Nathan Lucien Vieira, Su Jin Kim, Sangah Ahn, Ji Sim Yoon, Sook Hyun Park, Jeong Hee Hong, Min-Jeoung Kang, Il Kim, Meong Hi Son, Won Chul Cha and Junsang Yoo
Appl. Sci. 2025, 15(22), 12002; https://doi.org/10.3390/app152212002 - 12 Nov 2025
Abstract
Medication errors pose a significant threat to patient safety. Although Bar-Code Medication Administration (BCMA) has reduced error rates, it is constrained by handheld devices, workflow interruptions, and incomplete safeguards against wrong patients, wrong doses, or drug incompatibility. In this study, we developed and evaluated a next-generation BCMA system by integrating artificial intelligence and mixed reality technologies for real-time safety checks: Optical Character Recognition verifies medication–label concordance, facial recognition confirms patient identity, and a rules engine evaluates drug–diluent compatibility. Computer vision models achieved high recognition accuracy for drug vials (100%), medication labels (90%), QR codes (90%), and patient faces (90%), with slightly lower performance for intravenous fluids (80%). A mixed-methods evaluation was conducted in a simulated environment using the System Usability Scale (SUS), Reduced Instructional Materials Motivation Survey (RIMMS), Virtual Reality Sickness Questionnaire (VRSQ), and NASA Task Load Index (NASA-TLX). The results indicated excellent usability (median SUS = 82.5/100), strong user motivation (RIMMS = 3.7/5), minimal cybersickness (VRSQ = 0.4/6), and manageable cognitive workload (NASA-TLX = 31.7/100). Qualitative analysis highlighted the system’s potential to streamline workflow and serve as a digital “second verifier.” These findings suggest strong potential for clinical integration, enhancing medication safety at the point of care.
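The SUS value above is on the standard 0–100 scale. For context, a short Python sketch of the conventional SUS scoring rule (odd items positively worded, even items negatively worded); the response sheet below is hypothetical, not data from the study:

```python
def sus_score(responses):
    """Standard SUS scoring for 10 items rated 1-5.
    Odd items (positively worded) contribute (r - 1);
    even items (negatively worded) contribute (5 - r);
    the sum is scaled by 2.5 to yield a 0-100 score."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Hypothetical response sheet (illustration only):
example = [5, 2, 4, 1, 5, 2, 4, 2, 5, 1]
score = sus_score(example)  # 87.5
```

Scores above roughly 80, such as the study's median of 82.5, are generally read as excellent usability.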

29 pages, 508 KB  
Review
Clinical Applications of Virtual and Augmented Reality in Radiology: A Scoping Review
by Somin Mindy Lee, Henrique Coimbra Baffi, Tolulope Ola, Brian Tsang, Aaryan Gupta, Ricardo Faingold, Jennifer Stimec and Andrea S. Doria
J. Clin. Med. 2025, 14(20), 7438; https://doi.org/10.3390/jcm14207438 - 21 Oct 2025
Abstract
Background: Virtual reality (VR) and augmented reality (AR) have emerged as innovative tools in healthcare, particularly in diagnostic and interventional imaging, offering new avenues for enhancing patient care and procedural outcomes. Their applications range from improving preoperative planning and pain management to providing advanced procedural support and training. Despite their growing integration into clinical practice, evidence of their cost-effectiveness and specific clinical benefits when using radiological tools remains limited. This review aims to map the current landscape of VR and AR applications using radiological modalities and highlight areas for future research. Objective: This scoping review explores the clinical applications of VR and AR in different radiological fields, aiming to assess target areas, cost-effectiveness, and benefits of these technologies. Methods: We conducted a comprehensive literature search using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework. A total of 15 primary studies were included, covering diverse populations and applications of VR and AR. Results: In total, 15 studies (N = 781 patients) were included, with sample sizes ranging from 6 to 120. These studies highlighted various clinical applications of VR and AR, including imaging-guided preoperative planning, pain management, and procedural support. Although several studies demonstrated improvements in patient experiences and diagnostic accuracy, cost-effectiveness data were lacking. Notably, 47% of the studies focused exclusively on pediatric populations (N = 363), and 33% were randomized controlled trials. Quality assessment using the STARD criteria revealed that 60% of studies were rated as good (score > 12), 27% as fair (score 10–12), and 13% as suboptimal (score < 10), with inter-reader reliability showing substantial agreement (ICC = 0.76; 95% CI: 0.64–0.91). Out of 15 included studies, only 6 (40%) reported statistically significant improvements in patient experiences, with the remaining studies reporting positive trends (e.g., feasibility, usability, improved planning). Individual studies demonstrated significant benefits of VR interventions; for instance, one study reported a reduction in distress scores by a mean of 3.0 (95% CI: 1.0–5.0) and a decreased need for parental presence (risk ratio 0.3; 95% CI: 0.1–0.7; p < 0.001) compared to conventional methods. Conclusions: VR and AR technologies hold promise in enhancing patient care and procedural outcomes. Future research should focus on the cost-effectiveness of these technologies and identify the specific target populations that would benefit the most. Additionally, adherence to the Standards for Reporting of Diagnostic Accuracy (STARD) guidelines should be encouraged to ensure transparent and comprehensive reporting in VR and AR studies.
(This article belongs to the Section Nuclear Medicine & Radiology)

17 pages, 697 KB  
Proceeding Paper
Can 3D Virtual Worlds Be Used as Intelligent Tutoring Systems to Innovate Teaching and Learning Methods? Future Challenges and Possible Scenarios for Metaverse and Artificial Intelligence in Education
by Alfonso Filippone, Umberto Barbieri, Emanuele Marsico, Antonio Bevilacqua, Maria Ermelinda De Carlo and Raffaele Di Fuccio
Eng. Proc. 2025, 87(1), 110; https://doi.org/10.3390/engproc2025087110 - 9 Oct 2025
Abstract
The integration of Virtual Worlds (VW) and Intelligent Tutoring Systems (ITS) represents a transformative advancement in education, combining immersive, interactive learning with AI-driven personalization. This study explores the synergies between these technologies, analyzing their benefits, challenges, and applications in domains such as medical training, STEM education, and language learning. Findings highlight their shared characteristics of adaptability, real-time feedback, and collaborative learning. However, challenges such as computational demands, pedagogical complexity, and ethical concerns must be addressed. Future research should focus on hybrid models leveraging blockchain, IoT, and augmented reality to enhance adaptive and scalable learning experiences.
(This article belongs to the Proceedings of The 5th International Electronic Conference on Applied Sciences)

22 pages, 6620 KB  
Article
A Study to Determine the Feasibility of Combining Mobile Augmented Reality and an Automatic Pill Box to Support Older Adults’ Medication Adherence
by Osslan Osiris Vergara-Villegas, Vianey Guadalupe Cruz-Sánchez, Abel Alejandro Rubín-Alvarado, Saulo Abraham Gante-Díaz, Jonathan Axel Cruz-Vazquez, Brandon Areyzaga-Mendizábal, Jesús Yaljá Montiel-Pérez, Juan Humberto Sossa-Azuela, Iliac Huerta-Trujillo and Rodolfo Romero-Herrera
Computers 2025, 14(10), 421; https://doi.org/10.3390/computers14100421 - 2 Oct 2025
Cited by 1
Abstract
Because of the increased prevalence of chronic diseases, older adults frequently take many medications. However, adhering to a medication treatment tends to be difficult, and lack of adherence can cause health problems or even patient death. This paper describes the methodology used in developing a mobile augmented reality (MAR) pill box that supports patients in adhering to their medication treatment. First, we explain the design and construction of the automatic pill box, which includes alarms and uses QR codes recognized by the MAR system to provide medication information. Then, we explain the development of the MAR system. We first conducted a preliminary survey with 30 participants to assess the feasibility of the MAR app. One hundred older adults then took part in the main evaluation; after one week of using the proposal, each patient answered a survey regarding its functionality. The results revealed that 88% of the participants strongly agreed, and 11% agreed, that the app supports adherence to medical treatment. Finally, we conducted a study comparing the scheduled time for taking the medication with the time it was actually consumed. The results from 189 records showed that, using the proposal, 63.5% of the patients took their medication with a maximum delay of 4.5 min. The results also showed that the alarm always sounded at the scheduled time and that the QR code displayed always corresponded to the medication to be consumed.

14 pages, 2921 KB  
Article
Design and Validation of an Augmented Reality Training Platform for Patient Setup in Radiation Therapy Using Multimodal 3D Modeling
by Jinyue Wu, Donghee Han and Toshioh Fujibuchi
Appl. Sci. 2025, 15(19), 10488; https://doi.org/10.3390/app151910488 - 28 Sep 2025
Cited by 1
Abstract
This study presents the development and evaluation of an Augmented Reality (AR)-based training system aimed at improving patient setup accuracy in radiation therapy. Leveraging Microsoft HoloLens 2, the system provides an immersive environment for medical staff to enhance their understanding of patient setup procedures. High-resolution 3D anatomical models were reconstructed from CT scans using 3D Slicer, while Luma AI was employed to rapidly capture complete body surface models. Because each method has limitations, such as missing extremities or back surfaces, Blender was used to merge the models, improving completeness and anatomical fidelity. The AR application was developed in Unity, employing spatial anchors and 125 × 125 mm² QR-code markers to stabilize and align virtual models in real space. System accuracy testing demonstrated that QR-code tracking achieved millimeter-level variation, with an expanded uncertainty of ±2.74 mm. Training trials for setup showed larger, centimeter-scale deviations along the X (left–right), Y (up–down), and Z (front–back) axes, which allowed the user's patient-setup skills to be quantified. While QR-code positioning was relatively stable, manual placement of markers and the absence of real-time verification contributed to these errors. The system offers a radiation-free and interactive platform for training, enhancing spatial awareness and procedural skills. Future work will focus on improving tracking stability, optimizing the workflow, and integrating real-time feedback to move toward clinical applicability.
(This article belongs to the Special Issue Novel Technologies in Radiology: Diagnosis, Prediction and Treatment)
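The ±2.74 mm figure is an expanded uncertainty, i.e., a standard uncertainty multiplied by a coverage factor k. The abstract gives neither k nor the raw data, so the sketch below uses hypothetical repeated marker readings and assumes the common choice k = 2 (roughly 95% coverage):

```python
import statistics

def expanded_uncertainty(samples, k=2):
    """Type A evaluation: standard uncertainty of the mean
    (sample standard deviation / sqrt(n)) times coverage factor k.
    k = 2 gives ~95% coverage for normally distributed errors."""
    n = len(samples)
    u = statistics.stdev(samples) / n**0.5
    return k * u

# Hypothetical repeated QR-marker position readings in mm (not study data):
readings = [100.2, 99.8, 100.5, 99.9, 100.1, 100.3, 99.7, 100.4]
U = expanded_uncertainty(readings)
```

With more scatter in the readings or a larger coverage factor, U grows accordingly; the paper's ±2.74 mm would correspond to a specific measurement campaign not reproduced here.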

23 pages, 3314 KB  
Article
Optimization of Manifold Learning Using Differential Geometry for 3D Reconstruction in Computer Vision
by Yawen Wang
Mathematics 2025, 13(17), 2771; https://doi.org/10.3390/math13172771 - 28 Aug 2025
Abstract
Manifold learning is a significant computer vision task used to describe high-dimensional visual data in lower-dimensional manifolds without sacrificing the intrinsic structural properties required for 3D reconstruction. Isomap, Locally Linear Embedding (LLE), Laplacian Eigenmaps, and t-SNE are helpful in data topology preservation but are typically indifferent to the intrinsic differential geometric characteristics of the manifolds, thus leading to deformation of spatial relations and reconstruction accuracy loss. This research proposes an Optimization of Manifold Learning using Differential Geometry Framework (OML-DGF) to overcome the drawbacks of current manifold learning techniques in 3D reconstruction. The framework employs intrinsic geometric properties—like curvature preservation, geodesic coherence, and local–global structure correspondence—to produce structurally correct and topologically consistent low-dimensional embeddings. The model utilizes a Riemannian metric-based neighborhood graph, approximations of geodesic distances with shortest path algorithms, and curvature-sensitive embedding from second-order derivatives in local tangent spaces. A curvature-regularized objective function is derived to steer the embedding toward facilitating improved geometric coherence. Principal Component Analysis (PCA) reduces initial dimensionality and modifies LLE with curvature weighting. Experiments on the ModelNet40 dataset show an impressive improvement in reconstruction quality, with accuracy gains of up to 17% and better structure preservation than traditional methods. These findings confirm the advantage of employing intrinsic geometry as an embedding to improve the accuracy of 3D reconstruction. The suggested approach is computationally light and scalable and can be utilized in real-time contexts such as robotic navigation, medical image diagnosis, digital heritage reconstruction, and augmented/virtual reality systems in which strong 3D modeling is a critical need.
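The pipeline this abstract describes (neighborhood graph, shortest-path geodesic approximation, spectral embedding) builds on the classical Isomap core. Below is a minimal NumPy sketch of that core only, without the paper's curvature regularization; the helix data and parameter choices are illustrative assumptions:

```python
import numpy as np

def isomap_embed(X, n_neighbors=6, n_components=2):
    """Classical Isomap core: kNN graph -> geodesic distances via
    Floyd-Warshall shortest paths -> classical MDS embedding."""
    n = X.shape[0]
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Neighborhood graph: keep each point's k nearest neighbors.
    G = np.full((n, n), np.inf)
    for i in range(n):
        idx = np.argsort(D[i])[1:n_neighbors + 1]
        G[i, idx] = D[i, idx]
    G = np.minimum(G, G.T)              # symmetrize the graph
    np.fill_diagonal(G, 0.0)
    for k in range(n):                  # Floyd-Warshall geodesics
        G = np.minimum(G, G[:, k:k + 1] + G[k:k + 1, :])
    # Classical MDS on the squared geodesic distance matrix.
    H = np.eye(n) - 1.0 / n             # double-centering matrix
    B = -0.5 * H @ (G ** 2) @ H
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:n_components]
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

# Toy data: a noisy helix in 3D whose intrinsic geometry is a 1D curve.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 3 * np.pi, 80))
X = np.c_[np.cos(t), np.sin(t), t] + 0.01 * rng.normal(size=(80, 3))
Y = isomap_embed(X)
```

On data like this the first embedding coordinate tracks arc length along the curve, which is the structure-preservation property the paper's curvature-aware variant refines.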

23 pages, 525 KB  
Systematic Review
Virtual and Augmented Reality Games in Dementia Care: Systematic and Bibliographic Review
by Martin Eckert, Varsha Radhakrishnan, Thomas Ostermann, Jan Peter Ehlers and Gregor Hohenberg
Healthcare 2025, 13(16), 2013; https://doi.org/10.3390/healthcare13162013 - 15 Aug 2025
Cited by 2
Abstract
Background: This review investigates the use of virtual and augmented reality games in dementia care. It provides insight into the last 13 years of research, including the earliest publications on the topic, and takes a systematic and bibliographic approach. Methods: We sourced research publications from three scientific databases (PubMed, Scopus, and APA PsycInfo). We followed the PRISMA approach and categorized the studies by publisher. A set of 12 variables was defined across three categories (bibliographic, medical, and technical). Results: Of the 389 identified articles, 36 met the inclusion and exclusion criteria. After an initial phase dominated by pilot studies, the number of publications quadrupled before declining again in 2023. Pilot and feasibility studies dominated; 8 of the 36 trials were RCTs. The median trial population was 24, and the protocols ran for an average of 10 weeks, with two 40-min sessions a week. Simulator sickness was reported, though not by the majority of participants. A total of 59% of the studies used fully immersive 3D-VR systems, and only three publications provided high immersion quality. These findings indicate positive effects of virtual and augmented reality systems on participants' cognitive function and mood. Conclusions: This publication focuses on the technical aspects of the applied technologies and the patients' levels of immersion. Using augmented and virtual reality methods to improve the quality of life and physical interaction of dementia patients shows potential to enhance cognitive functioning in this population, but further investigation and multicenter RCTs are needed. There are strong indications that this research branch has high potential to benefit both caregivers and patients.
19 pages, 1578 KB  
Review
Augmented Reality in Health Education: Transforming Nursing, Healthcare, and Medical Education and Training
by Georgios Lampropoulos, Pablo Fernández-Arias, Antonio del Bosque and Diego Vergara
Nurs. Rep. 2025, 15(8), 289; https://doi.org/10.3390/nursrep15080289 - 8 Aug 2025
Cited by 3 | Viewed by 4376
Abstract
Background: In health sciences education, and particularly in healthcare, nursing, and medical education, augmented reality is increasingly used due to the changes and benefits it can bring to teaching and learning approaches. However, as the field advances, it is important to systematically map the current literature and provide an overview of the field. Aim: By analyzing the current literature, this study examines the use of augmented reality in healthcare, nursing, and medical education and training. Method: The study adopts a systematic mapping review approach and analyzes 156 studies published during 2010–2025. Results: The results revealed that augmented reality is an effective educational tool that can support the teaching and learning of diverse subjects in health education, as it enables learners to combine theoretical knowledge with practical application within interactive, immersive learning environments and simulations without risking patient safety. Improved learning outcomes were observed, including hands-on acquisition of practical skills and clinical competencies, engagement, performance, knowledge gain and retention, and critical thinking and decision-making. The potential of augmented reality to offer realistic and interactive visual representations, support procedural training, provide cost-effective solutions, enhance collaborative learning, and increase access to education, even in resource-limited settings, was highlighted. Education stakeholders expressed positive attitudes and perspectives toward the adoption and integration of augmented reality into health sciences education. Discussion: The results emphasize the role of augmented reality in supporting and improving health education. Additionally, the study identified six main topics and current research gaps, and provided future research directions.
Conclusions: When appropriately applied, augmented reality has the potential to effectively support and enrich nursing, healthcare, and medical education and training. Full article
17 pages, 2828 KB  
Article
Augmented Reality in Cardiovascular Education (HoloHeart): Assessment of Students’ and Lecturers’ Needs and Expectations at Heidelberg University Medical School
by Pascal Philipp Schlegel, Florian Kehrle, Till J. Bugaj, Eberhard Scholz, Alexander Kovacevic, Philippe Grieshaber, Ralph Nawrotzki, Joachim Kirsch, Markus Hecker, Anna L. Meyer, Katharina Seidensaal, Thuy D. Do, Jobst-Hendrik Schultz, Norbert Frey and Ann-Kathrin Rahm
Appl. Sci. 2025, 15(15), 8595; https://doi.org/10.3390/app15158595 - 2 Aug 2025
Cited by 2 | Viewed by 1184
Abstract
Background: A detailed understanding of cardiac anatomy and physiology is crucial in cardiovascular medicine. However, traditional learning methods often fall short in addressing this complexity. Augmented reality (AR) offers a promising tool to enhance comprehension. To assess its potential integration into the Heidelberger Curriculum Medicinale (HeiCuMed), we conducted a needs assessment among medical students and lecturers at Heidelberg University Medical School. Methods: Our survey aimed to evaluate the perceived benefits of AR-based learning compared to conventional methods and to gather expectations regarding an AR course in cardiovascular medicine. Using LimeSurvey, we developed a questionnaire to assess participants’ prior AR experience, preferred learning methods, and interest in a proposed AR-based, 2 × 90-min in-person course. Results: A total of 101 students and 27 lecturers participated. Support for AR in small-group teaching was strong: 96.3% of students and 90.9% of lecturers saw value in a dedicated AR course. Both groups favored its application in anatomy, cardiac surgery, and internal medicine. Students prioritized congenital heart defects, coronary anomalies, and arrhythmias, while lecturers also emphasized invasive valve interventions. Conclusions: There is significant interest in AR-based teaching in cardiovascular education, suggesting its potential to complement and improve traditional methods in medical curricula. Further studies are needed to assess the potential benefits regarding learning outcomes. Full article