Search Results (135)

Search Parameters:
Keywords = augmented reality guidance

19 pages, 3205 KB  
Article
Human-Centered Collaborative Robotic Workcell Facilitating Shared Autonomy for Disability-Inclusive Manufacturing
by YongKuk Kim, DaYoung Kim, DoKyung Hwang, Juhyun Kim, Eui-Jung Jung and Min-Gyu Kim
Electronics 2026, 15(2), 461; https://doi.org/10.3390/electronics15020461 - 21 Jan 2026
Abstract
Workers with upper-limb disabilities face difficulties in performing manufacturing tasks requiring fine manipulation, stable handling, and multistep procedural understanding. To address these limitations, this paper presents an integrated collaborative workcell designed to support disability-inclusive manufacturing. The system comprises four core modules: a JSON-based collaboration database that structures manufacturing processes into robot–human cooperative units; a projection-based augmented reality (AR) interface that provides spatially aligned task guidance and virtual interaction elements; a multimodal interaction channel combining gesture tracking with speech and language-based communication; and a personalization mechanism that enables users to adjust robot behaviors—such as delivery poses and user-driven task role switching—which are then stored for future operations. The system is implemented using ROS-style modular nodes with an external WPF-based projection module and evaluated through scenario-based experiments involving workers with upper-limb impairments. The experimental scenarios illustrate that the proposed workcell is capable of supporting step transitions, part handover, contextual feedback, and user-preference adaptation within a unified system framework, suggesting its feasibility as an integrated foundation for disability-inclusive human–robot collaboration in manufacturing environments.
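The abstract does not publish the collaboration database schema; the Python sketch below shows one plausible way a JSON-based record could structure a manufacturing step into robot–human cooperative units. Every field name and value is hypothetical, not taken from the paper.

```python
import json

# Hypothetical cooperative-unit record; all fields are illustrative only,
# not the schema used by the authors.
process = {
    "process_id": "assembly_step_03",
    "description": "Insert bracket and fasten two screws",
    "cooperative_units": [
        {"actor": "robot", "action": "deliver_part",
         "part": "bracket_A", "delivery_pose": "user_preferred"},
        {"actor": "human", "action": "fasten_screws",
         "ar_guidance": "projected_outline", "expected_duration_s": 45},
    ],
    "personalization": {"delivery_pose_override": None, "role_switch_allowed": True},
}

# A collaboration database would store and retrieve records like this one,
# with the personalization block updated after each user adjustment.
print(json.dumps(process, indent=2))
```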

22 pages, 13863 KB  
Article
AI-Based Augmented Reality Microscope for Real-Time Sperm Detection and Tracking in Micro-TESE
by Mahmoud Mohamed, Ezaki Yuriko, Yuta Kawagoe, Kazuhiro Kawamura and Masashi Ikeuchi
Bioengineering 2026, 13(1), 102; https://doi.org/10.3390/bioengineering13010102 - 15 Jan 2026
Viewed by 286
Abstract
Non-obstructive azoospermia (NOA) is a severe male infertility condition characterized by extremely low or absent sperm production. In microdissection testicular sperm extraction (Micro-TESE) procedures for NOA, embryologists must manually search through testicular tissue under a microscope for rare sperm, a process that can [...] Read more.
Non-obstructive azoospermia (NOA) is a severe male infertility condition characterized by extremely low or absent sperm production. In microdissection testicular sperm extraction (Micro-TESE) procedures for NOA, embryologists must manually search through testicular tissue under a microscope for rare sperm, a process that can take 1.8–7.5 h and impose significant fatigue and burden. This paper presents an augmented reality (AR) microscope system with AI-based image analysis to accelerate sperm retrieval in Micro-TESE. The proposed system integrates a deep learning model (YOLOv5) for real-time sperm detection in microscope images, a multi-object tracker (DeepSORT) for continuous sperm tracking, and a velocity calculation module for sperm motility analysis. Detected sperm positions and motility metrics are overlaid in the microscope’s eyepiece view via a microdisplay, providing immediate visual guidance to the embryologist. In experiments on seminiferous tubule sample images, the YOLOv5 model achieved a precision of 0.81 and recall of 0.52, outperforming previous classical methods in accuracy and speed. The AR interface allowed an operator to find sperm faster, roughly doubling the sperm detection rate (66.9% vs. 30.8%). These results demonstrate that the AR microscope system can significantly aid embryologists by highlighting sperm in real time and potentially shorten Micro-TESE procedure times. This application of AR and AI in sperm retrieval shows promise for improving outcomes in assisted reproductive technology. Full article
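As a small illustration of the motility-analysis step, the sketch below estimates a tracked sperm's mean speed from per-frame centroids such as a DeepSORT-style tracker would output. The frame rate and pixel-to-micron scale are assumed calibration values, not figures from the paper.

```python
import numpy as np

def track_velocity(centroids_px, fps=30.0, microns_per_px=0.5):
    """Mean speed (µm/s) of one tracked sperm from per-frame centroid positions.

    centroids_px: (N, 2) array of (x, y) pixel positions from the tracker.
    fps and microns_per_px are illustrative calibration values (assumptions).
    """
    pts = np.asarray(centroids_px, dtype=float)
    if len(pts) < 2:
        return 0.0
    step_px = np.linalg.norm(np.diff(pts, axis=0), axis=1)  # displacement per frame
    return float(step_px.mean() * microns_per_px * fps)      # µm/frame -> µm/s

# Example: a track drifting roughly 2 px per frame
speed = track_velocity([(100, 50), (102, 50), (104, 51), (106, 51)])
print(f"{speed:.1f} µm/s")
```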
(This article belongs to the Special Issue Artificial Intelligence-Based Medical Imaging Processing)

16 pages, 289 KB  
Review
Artificial Intelligence in Oncologic Thoracic Surgery: Clinical Decision Support and Emerging Applications
by Francesco Petrella and Stefania Rizzo
Cancers 2026, 18(2), 246; https://doi.org/10.3390/cancers18020246 - 13 Jan 2026
Viewed by 230
Abstract
Artificial intelligence (AI) is rapidly reshaping thoracic surgery, advancing from decision support to the threshold of autonomous intervention. AI-driven technologies—including machine learning (ML), deep learning (DL), and computer vision—have demonstrated significant improvements in diagnostic accuracy, surgical planning, intraoperative navigation, and postoperative outcome prediction. In lung cancer and thoracic oncology, AI enhances imaging analysis, histopathological classification, and risk stratification, supporting multidisciplinary decision-making and personalized therapy. Robotic-assisted and AI-guided systems are optimizing surgical precision and workflow efficiency, while real-time decision-support tools and augmented reality are improving intraoperative safety. Despite these advances, widespread adoption is limited by challenges in algorithmic bias, data integration, regulatory approval, and ethical transparency. The literature emphasizes the need for multicenter validation, explainable AI, and robust governance frameworks to ensure safe and effective clinical integration. Future research should focus on digital twin technology, federated learning, and transparent AI outputs to further enhance reliability and accessibility. AI is poised to transform thoracic surgery, but responsible implementation and ongoing evaluation are essential for realizing its full potential. The aim of this review is to evaluate and synthesize the current landscape of AI applications across the thoracic surgical pathway, from preoperative decision-support to intraoperative guidance and emerging autonomous interventions.
(This article belongs to the Special Issue Thoracic Neuroendocrine Tumors and the Role of Emerging Therapies)
20 pages, 2153 KB  
Article
Fusing Prediction and Perception: Adaptive Kalman Filter-Driven Respiratory Gating for MR Surgical Navigation
by Haoliang Li, Shuyi Wang, Jingyi Hu, Tao Zhang and Yueyang Zhong
Sensors 2026, 26(2), 405; https://doi.org/10.3390/s26020405 - 8 Jan 2026
Viewed by 162
Abstract
Background: Respiratory-induced target displacement remains a major challenge for achieving accurate and safe augmented-reality-guided thoracoabdominal percutaneous puncture. Existing approaches often suffer from system latency, dependence on intraoperative imaging, or the absence of intelligent timing assistance; Methods: We developed a mixed-reality (MR) surgical navigation system that incorporates an adaptive Kalman-filter-based respiratory prediction module and visual gating cues. The system was evaluated using a dynamic respiratory motion simulation platform. The Kalman filter performs real-time state estimation and short-term prediction of optically tracked respiratory motion, enabling simultaneous compensation for MR model drift and forecasting of the end-inhalation window to trigger visual guidance; Results: Compared with the uncompensated condition, the proposed system reduced dynamic registration error from (3.15 ± 1.23) mm to (2.11 ± 0.58) mm (p < 0.001). Moreover, the predicted guidance window occurred approximately 142 ms in advance with >92% accuracy, providing preparation time for needle insertion; Conclusions: The integrated MR system effectively suppresses respiratory-induced model drift and offers intelligent timing guidance for puncture execution.
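To make the prediction-and-gating idea concrete, here is a minimal constant-velocity Kalman filter over a 1-D respiratory displacement signal, extrapolated a short horizon ahead to anticipate the guidance window. The noise parameters, the 0.14 s look-ahead, the sign convention, and the gating threshold are all assumptions for illustration; the paper's adaptive tuning is not reproduced.

```python
import numpy as np

class RespKalman1D:
    """Constant-velocity Kalman filter on a 1-D respiratory displacement signal.

    A minimal sketch: the adaptive tuning of Q and R described in the paper
    is not reproduced; the values here are assumptions.
    """
    def __init__(self, dt, q=1e-3, r=0.25):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
        self.H = np.array([[1.0, 0.0]])              # position is observed
        self.Q = q * np.eye(2)                       # process noise (assumed)
        self.R = np.array([[r]])                     # measurement noise (assumed)
        self.x = np.zeros((2, 1))                    # [position, velocity]
        self.P = np.eye(2)

    def step(self, z):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # update with the latest optical marker reading z (mm)
        y = np.array([[z]]) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x.ravel()

    def predict_ahead(self, horizon_s):
        """Extrapolate position and velocity 'horizon_s' seconds into the future."""
        Fh = np.array([[1.0, horizon_s], [0.0, 1.0]])
        return (Fh @ self.x).ravel()

# Gating sketch: open the guidance window when the predicted velocity is near
# zero at positive displacement (end-inhalation); this trigger rule and the
# threshold are assumptions, not the authors' criterion.
kf = RespKalman1D(dt=1 / 60)
for z in np.sin(np.linspace(0, 2 * np.pi, 120)) * 5.0:  # synthetic 5 mm breathing trace
    pos, vel = kf.step(z)
pos_h, vel_h = kf.predict_ahead(0.14)                    # ~140 ms look-ahead
gate_open = abs(vel_h) < 0.5 and pos_h > 0
print(gate_open)
```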

19 pages, 20380 KB  
Article
Accessible Augmented Reality in Sheltered Workshops: A Mixed-Methods Evaluation for Users with Mental Disabilities
by Valentin Knoben, Malte Stellmacher, Jonas Blattgerste, Björn Hein and Christian Wurll
Virtual Worlds 2026, 5(1), 1; https://doi.org/10.3390/virtualworlds5010001 - 4 Jan 2026
Viewed by 220
Abstract
A prominent application of Augmented Reality (AR) is to provide step-by-step guidance for procedural tasks, as it allows information to be displayed in situ by overlaying it directly onto the user’s physical environment. While the potential of AR is well known, the perspectives and requirements of individuals with mental disabilities, who face both cognitive and psychological barriers at work, have yet to be addressed, particularly on Head-Mounted Displays (HMDs). To understand the practical limitations of such a system, we conducted a mixed-methods user study with 29 participants, including individuals with mental disabilities, their colleagues, and support professionals. Participants used a commercially available system on an AR HMD to perform a machine setup task. Quantitative results revealed that participants with mental disabilities perceived the system as less usable than those without. Qualitative findings point towards actionable leverage points for improvement, such as privacy-aware human support, motivating but lightweight gamification, user-controlled pacing with clear feedback, confidence-building interaction patterns, and clearer task intent of multimodal instructions.

21 pages, 577 KB  
Review
The Present and Future of Sarcopenia Diagnosis and Exercise Interventions: A Narrative Review
by Hongje Jang, Jeonghyeok Song, Jeonghun Kim, Hyeongmin Lee, Hyemin Lee, Hye-yeon Park, Huijin Shin, Yeah-eun Kwon, Yeji Kim and JongEun Yim
Appl. Sci. 2025, 15(23), 12760; https://doi.org/10.3390/app152312760 - 2 Dec 2025
Viewed by 1982
Abstract
The aim of this review was to harmonize major consensus statements (European Working Group on Sarcopenia in Older People 2; Asian Working Group for Sarcopenia 2019; Foundation for the National Institutes of Health Sarcopenia Project operational criteria) into a stage- and setting-stratified algorithm. It maps diagnostic strata to dose-defined resistance and combined training, integrates multimodal and technology-enabled options (whole-body electrical muscle stimulation, whole-body vibration, virtual reality, AI-assisted telerehabilitation) with safety cues, and embeds nutrition (≥1.2 g/kg/day protein, vitamin D, key micronutrients) and education to sustain adherence. Sarcopenia is a consequential geriatric syndrome linked to falls, loss of independence, hospitalization, mortality, and psychosocial burden, yet translation to practice is hindered by heterogeneous definitions, diagnostics, and treatment guidance. Literature searches via PubMed/MEDLINE, EBSCO, SciELO, and Google Scholar (January 2000 to August 2025) yielded 354 records; after screening and deduplication, 132 peer-reviewed studies were included. We summarize tools for screening, strength, muscle mass, and function (e.g., Sarcopenia Five-Item Questionnaire, grip strength, dual-energy X-ray absorptiometry, gait speed) and identify resistance exercise as the cornerstone, with aerobic, balance, and flexibility training adding functional and metabolic benefits. Clinic-ready tables and figures operationalize a stepwise program across primary to severe sarcopenia and across acute or iatrogenic to community settings. Early screening plus structured, exercise-centered care, augmented by targeted nutrition and education, offers pragmatic, scalable benefits.

30 pages, 7547 KB  
Review
Artificial Intelligence Applications in Interventional Radiology
by Carolina Lanza, Salvatore Alessio Angileri, Serena Carriero, Sonia Triggiani, Velio Ascenti, Simone Raul Mortellaro, Marco Ginolfi, Alessia Leo, Francesca Arnone, Pierluca Torcia, Pierpaolo Biondetti, Anna Maria Ierardi and Gianpaolo Carrafiello
J. Pers. Med. 2025, 15(12), 569; https://doi.org/10.3390/jpm15120569 - 28 Nov 2025
Viewed by 1525
Abstract
This review is a brief overview of the current status and the potential role of artificial intelligence (AI) in interventional radiology (IR). The literature published in the last decades was reviewed, and the technical developments in terms of radiomics, virtual reality, robotics, fusion imaging, cone-beam computed tomography (CBCT) and imaging guidance software were analyzed. The evidence shows that AI significantly improves pre-procedural planning, intra-procedural navigation, and post-procedural assessment. Radiomics extracts quantitative features from medical images to inform personalized treatment strategies. Virtual reality offers innovative tools, especially for training and procedural simulation. Robotic systems, combined with AI, could enhance the precision and reproducibility of IR procedures while reducing operator exposure to X-rays. Fusion imaging and CBCT, augmented by AI software, improve real-time guidance and procedural outcomes.

12 pages, 2242 KB  
Article
Augmented Reality-Assisted Micro-Invasive Apicectomy with Markerless Visual–Inertial Odometry: An In Vivo Pilot Study
by Marco Farronato, Davide Farronato, Federico Michelini and Giulio Rasperini
Appl. Sci. 2025, 15(23), 12588; https://doi.org/10.3390/app152312588 - 27 Nov 2025
Viewed by 362
Abstract
Introduction: Apicectomy is an endodontic surgical procedure prescribed for persistent periapical pathologies when conventional root canal therapy or retreatment have failed. Accurate intraoperative visualization of the root apex and surrounding structures remains challenging and subject to possible errors. Augmented reality (AR) allows for the addition of real-time digital overlays of the anatomical region, thus potentially improving surgical precision and reducing invasiveness. The purpose of this pilot study is to describe the application of an AR method in cases requiring apicectomy. Materials and Methods: Patients presenting with chronic persistent apical radio-translucency associated with pain underwent AR-assisted apicectomy. Cone-beam computed tomography (CBCT) scans were obtained preoperatively for segmentation of the target root apex and adjacent anatomical structures. A custom visual–inertial odometry (VIO) algorithm was used to map and stabilize the segmented digital 3D models on a portable device in real time, enabling an overlay of digital guides onto the operative field. The duration of preoperative procedures was recorded. Postoperative pain, measured by a Visual Analogue Scale (VAS), and periapical healing, assessed with radiographic evaluations, were recorded at baseline (T0) and at 6 weeks and 6 months (T1–T2) after surgery. Results: AR-assisted apicectomies were successfully performed in all three patients without intraoperative complications. The digital overlay procedure required an average of (1.49 ± 0.34) minutes. VAS scores decreased significantly from T0 to T2, and patients showed radiographic evidence of progressive periapical healing. No patient reported persistent discomfort at follow-up. Conclusion: This preliminary pilot study indicates that AR-assisted apicectomy is feasible and may improve intraoperative visualization with low additional surgical time. Future larger-scale studies with control groups are needed to validate the method proposed and to quantify the outcomes. Clinical Significance: By integrating real-time digital images of bony structures and root morphology, AR guidance during apicectomy may offer enhanced precision for apical resection and may decrease the risk of iatrogenic damage. The use of a visual–inertial odometry-based AR method is a novel technique that demonstrated promising results in terms of VAS and final outcomes, especially in anatomically challenging cases in this preliminary pilot study.
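The core of a markerless approach like this is keeping the CBCT-segmented model registered as the device moves, using the pose reported by the visual–inertial odometry. The sketch below shows only the basic frame transformation; the transform names and toy numbers are illustrative, not taken from the paper.

```python
import numpy as np

def anchor_model(vertices_ct, T_world_from_ct, T_world_from_device):
    """Express CBCT-segmented model vertices in the moving device frame.

    A minimal sketch: T_world_from_ct is a one-off registration of the CBCT
    model to the tracked world, and T_world_from_device is the 4x4 pose
    reported by the visual-inertial odometry each frame (both assumptions).
    """
    T_device_from_ct = np.linalg.inv(T_world_from_device) @ T_world_from_ct
    v = np.c_[vertices_ct, np.ones(len(vertices_ct))]   # homogeneous coordinates
    return (T_device_from_ct @ v.T).T[:, :3]             # per-frame overlay vertices

# Toy example: identity registration, device translated 0.1 m along x
apex = np.array([[0.02, 0.00, 0.15]])                    # one vertex, metres
T_reg = np.eye(4)
T_dev = np.eye(4)
T_dev[0, 3] = 0.10
print(anchor_model(apex, T_reg, T_dev))                  # [[-0.08  0.    0.15]]
```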
(This article belongs to the Special Issue Advanced Dental Imaging Technology)

12 pages, 4149 KB  
Review
Projected Augmented Reality in Surgery: History, Validation, and Future Applications
by Nikhil Dipak Shah, Lohrasb Sayadi, Peyman Kassani and Raj Vyas
J. Clin. Med. 2025, 14(22), 8246; https://doi.org/10.3390/jcm14228246 - 20 Nov 2025
Viewed by 864
Abstract
Background/Objectives: Projected augmented reality (PAR) enables real-time projection of digital surgical information directly onto the operative field. This offers a hands-free, headset-free platform that is universally visible to all members of the surgical team. Compared to head-mounted display systems, which are limited by restricted fields of view, ergonomic challenges, and user exclusivity, PAR provides a more intuitive and collaborative surgical interface. When paired with artificial intelligence (AI), PAR has the potential to automate aspects of surgical planning and deliver high-precision guidance in both high-resource and global health settings. Our team is working on the development and validation of a PAR platform to dynamically project surgical and anatomic markings directly onto the patients intraoperatively. Methods: We developed a PAR system using a structured light scanner and depth camera to generate digital 3D surface reconstructions of a patient’s anatomy. Surgical markings were then made digitally, and a projector was used to precisely project these points directly onto the patient’s skin. We also developed a trained machine learning model that detects cleft lip landmarks and automatically designs surgical markings, with the plan to integrate this into our PAR system. Results: The PAR system accurately projected surgeon and AI-generated surgical markings onto anatomical models with sub-millimeter precision. Projections remained aligned during movement and were clearly visible to the entire surgical team without requiring wearable hardware. Conclusions: PAR integrated with AI provides accurate, real-time, and shared intraoperative guidance. This platform improves surgical precision and has broad potential for remote mentorship and global surgical training.
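Projecting a planned marking back onto the patient reduces, at its simplest, to mapping a 3D point from the scanner frame into projector pixel coordinates. The pinhole-model sketch below illustrates that step; the intrinsics, extrinsics, and calibration values are hypothetical and do not reflect the authors' implementation.

```python
import numpy as np

def to_projector_pixel(p_scan, K_proj, T_proj_from_scan):
    """Map a 3D surgical-marking point from the scanner frame to a projector pixel.

    Minimal pinhole sketch; K_proj (projector intrinsics) and T_proj_from_scan
    (projector pose relative to the depth/structured-light scanner) are assumed
    to come from a prior projector-camera calibration.
    """
    p = T_proj_from_scan @ np.append(p_scan, 1.0)   # point in the projector frame
    u = K_proj @ p[:3]
    return u[:2] / u[2]                             # perspective divide -> (col, row)

K = np.array([[1400.0, 0, 960.0],
              [0, 1400.0, 540.0],
              [0, 0, 1.0]])                         # hypothetical 1920x1080 projector
T = np.eye(4)                                       # projector co-located with scanner
print(to_projector_pixel(np.array([0.05, 0.0, 0.60]), K, T))  # ~[1076.7, 540.0]
```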
(This article belongs to the Special Issue Plastic Surgery: Challenges and Future Directions)

34 pages, 921 KB  
Systematic Review
Artificial Intelligence in Gastrointestinal Surgery: A Systematic Review of Its Role in Laparoscopic and Robotic Surgery
by Ludovica Gorini, Roberto de la Plaza Llamas, Daniel Alejandro Díaz Candelas, Rodrigo Arellano González, Wenzhong Sun, Jaime García Friginal, María Fra López and Ignacio Antonio Gemio del Rey
J. Pers. Med. 2025, 15(11), 562; https://doi.org/10.3390/jpm15110562 - 19 Nov 2025
Viewed by 1783
Abstract
Background: Artificial intelligence (AI) is transforming surgical practice by enhancing training, intraoperative guidance, decision-making, and postoperative assessment. However, its specific role in laparoscopic and robotic general surgery remains to be clearly defined. The objective is to systematically review the current applications of AI in laparoscopic and robotic general surgery and categorize them by function and surgical context. Methods: A systematic search of PubMed and Web of Science was conducted up to 22 June 2025, using predefined search terms. Eligible studies focused on AI applications in laparoscopic or robotic general surgery, excluding urological, gynecological, and obstetric fields. Original articles in English or Spanish were included. Data extraction was performed independently by two reviewers and synthesized descriptively by thematic categories. Results: A total of 152 original studies were included. Most were conducted in laparoscopic settings (n = 125), while 19 focused on robotic surgery and 8 involved both. The majority were technical evaluations or retrospective observational studies. Seven thematic categories were identified: surgical decision support and outcome prediction; skill assessment and training; workflow recognition and intraoperative guidance; object or structure detection; augmented reality and navigation; image enhancement; technical assistance; and surgeon perception and preparedness. Most studies applied deep learning for classification, prediction, recognition, and real-time guidance in laparoscopic cholecystectomies, colorectal and gastric surgeries. Conclusions: AI has been widely adopted in various domains of laparoscopic and robotic general surgery. While most studies remain in early developmental stages, the evidence suggests increasing maturity and integration into clinical workflows. Standardization of evaluation and reporting frameworks will be essential to translate these innovations into widespread practice.
(This article belongs to the Special Issue Update on Robotic Gastrointestinal Surgery, 2nd Edition)

15 pages, 5189 KB  
Article
Assembly Complexity Index (ACI) for Modular Robotic Systems: Validation and Conceptual Framework for AR/VR-Assisted Assembly
by Kartikeya Walia and Philip Breedon
Machines 2025, 13(10), 882; https://doi.org/10.3390/machines13100882 - 24 Sep 2025
Viewed by 961
Abstract
The growing adoption of modular robotic systems presents new challenges in ensuring ease of assembly, deployment, and reconfiguration, especially for end-users with varying technical expertise. This study proposes and validates an Assembly Complexity Index (ACI) framework, combining subjective workload (NASA Task Load Index) and task complexity (Task Complexity Index) into a unified metric to quantify assembly difficulty. Twelve participants performed modular manipulator assembly tasks under supervised and unsupervised conditions, enabling evaluation of learning effects and assembly complexity dynamics. Statistical analyses, including Cronbach’s alpha, correlation studies, and paired t-tests, demonstrated the framework’s internal consistency, sensitivity to user learning, and ability to capture workload-performance trade-offs. Additionally, we propose an augmented reality (AR) and virtual reality (VR) integration workflow to further mitigate assembly complexity, offering real-time guidance and adaptive assistance. The proposed framework not only supports design iteration and operator training but also provides a human-centered evaluation methodology applicable to modular robotics deployment in Industry 4.0 environments. The AR/VR-assisted workflow presented here is proposed as a conceptual extension and will be validated in future work.
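The abstract does not spell out how the two instruments are fused into the ACI, so the snippet below shows only a hypothetical equal-weight, min-max-normalized combination of a NASA-TLX score and a Task Complexity Index score, as one illustration of a unified 0–1 metric.

```python
def assembly_complexity_index(tlx_score, tci_score, w_tlx=0.5,
                              tlx_max=100.0, tci_max=100.0):
    """Combine NASA-TLX workload and a Task Complexity Index into one 0-1 score.

    The paper's exact weighting and normalization are not given in the abstract;
    this equal-weight, min-max-normalized form is purely illustrative.
    """
    tlx_n = min(max(tlx_score / tlx_max, 0.0), 1.0)
    tci_n = min(max(tci_score / tci_max, 0.0), 1.0)
    return w_tlx * tlx_n + (1.0 - w_tlx) * tci_n

# Supervised vs. unsupervised assembly of the same module (illustrative numbers)
print(assembly_complexity_index(42.0, 55.0))   # 0.485
print(assembly_complexity_index(63.0, 55.0))   # 0.59
```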

12 pages, 2022 KB  
Case Report
Implementation of Medicalholodeck® for Augmented Reality Surgical Navigation in Microsurgical Mandibular Reconstruction: Enhanced Vessel Identification
by Norman Alejandro Rendón Mejía, Hansel Gómez Arámbula, José Humberto Baeza Ramos, Yidam Villa Martínez, Francisco Hernández Ávila, Mónica Quiñonez Pérez, Carolina Caraveo Aguilar, Rogelio Mariñelarena Hernández, Claudio Reyes Montero, Claudio Ramírez Espinoza and Armando Isaac Reyes Carrillo
Healthcare 2025, 13(19), 2406; https://doi.org/10.3390/healthcare13192406 - 24 Sep 2025
Viewed by 1184
Abstract
Mandibular reconstruction with the fibula free flap is the gold standard for large defects, with virtual surgical planning becoming integral to the process. The localization and dissection of critical vessels, such as the recipient vessels in the neck and the perforating vessels of the fibula flap, are demanding steps that directly impact surgical success. Augmented reality (AR) offers a solution by overlaying three-dimensional virtual models directly onto the surgeon’s view of the operative field. We report the first case in Latin America utilizing a low-cost, commercially available holographic navigation system for complex microsurgical mandibular reconstruction. A 26-year-old female presented with a large, destructive osteoblastoma of the left mandible, requiring wide resection and reconstruction. Preoperative surgical planning was conducted using DICOM data from the patient’s CT scans to generate 3D holographic models with the Medicalholodeck® software. Intraoperatively, the primary surgeon used the AR system to superimpose the holographic models onto the patient. The system provided real-time, immersive guidance for identifying the facial artery, which was anatomically displaced by the tumor mass, as well as for localizing the peroneal artery perforators for donor flap harvest. A free fibula flap was harvested and transferred. During the early postoperative course and after 3 months of follow-up, the patient remained free of clinical complications. This case demonstrates the successful application and feasibility of using a low-cost, consumer-grade holographic navigation system.
(This article belongs to the Special Issue Virtual Reality Technologies in Health Care)

21 pages, 4089 KB  
Article
A Remote Maintenance Support Method for Complex Equipment Based on Layered-MVC-B/S Integrated AR Framework
by Xuhang Wang, Qinhua Lu, Jiayu Chen and Dong Zhou
Sensors 2025, 25(19), 5935; https://doi.org/10.3390/s25195935 - 23 Sep 2025
Cited by 2 | Viewed by 844
Abstract
Augmented reality (AR)-based assisted maintenance methods are effective in completing simple equipment maintenance tasks. However, complex equipment typically requires multi-location remote collaboration due to structural complexity, multiple fault states, and high maintenance costs, significantly increasing maintenance difficulty. This paper therefore proposes a remote maintenance support method for complex equipment based on a layered-MVC-B/S integrated AR framework (IAR-RMS). First, the maintenance content and workflow for multi-person remote collaboration are clearly defined, and process control within the task workflow is analyzed in depth to avoid incomplete or unsystematic maintenance guidance information and processes. Second, collaborative management is analyzed from the perspectives of maintenance role conflicts and maintenance operation conflicts, and on-demand permission control and operation-sequence management are implemented to ensure the timeliness and user-friendliness of multi-person collaboration. Then, the layered architecture, MVC, and B/S architecture are integrated to construct a remote maintenance support (RMS) model based on an integrated architecture system, ensuring the reliability and timeliness of the model. Finally, the main functional modules of the RMS task process are demonstrated, and power system disassembly and assembly is used as an experiment to validate the effectiveness and generalizability of the proposed IAR-RMS method. The results indicate that the proposed IAR-RMS method can effectively realize maintenance support tasks in multi-person remote collaboration scenarios.
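As one way to picture the operation-conflict management the method describes, the sketch below implements a simple per-component operation lock with role pre-emption. The role names, priority rule, and API are entirely hypothetical and are not drawn from the paper.

```python
import threading

class OperationLockManager:
    """On-demand operation locking for multi-person remote maintenance.

    Hypothetical sketch of the conflict-management idea: only one collaborator
    may operate a given component at a time, and a remote expert may pre-empt a
    local operator. Roles and the pre-emption rule are assumptions.
    """
    PRIORITY = {"remote_expert": 2, "onsite_technician": 1}

    def __init__(self):
        self._locks = {}             # component -> (user, role)
        self._mutex = threading.Lock()

    def request(self, component, user, role):
        with self._mutex:
            holder = self._locks.get(component)
            if holder is None or self.PRIORITY[role] > self.PRIORITY[holder[1]]:
                self._locks[component] = (user, role)
                return True           # permission granted (possibly pre-empting)
            return holder[0] == user  # already holding it, otherwise denied

    def release(self, component, user):
        with self._mutex:
            if self._locks.get(component, (None,))[0] == user:
                del self._locks[component]

mgr = OperationLockManager()
print(mgr.request("power_module", "alice", "onsite_technician"))  # True
print(mgr.request("power_module", "bob", "onsite_technician"))    # False (conflict)
print(mgr.request("power_module", "carol", "remote_expert"))      # True (pre-empts)
```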
(This article belongs to the Section Fault Diagnosis & Sensors)

24 pages, 1501 KB  
Review
Artificial Intelligence and Digital Tools Across the Hepato-Pancreato-Biliary Surgical Pathway: A Systematic Review
by Andreas Efstathiou, Evgenia Charitaki, Charikleia Triantopoulou and Spiros Delis
J. Clin. Med. 2025, 14(18), 6501; https://doi.org/10.3390/jcm14186501 - 15 Sep 2025
Viewed by 1855
Abstract
Background: Hepato-pancreato-biliary (HPB) surgery involves operations that depend heavily on precise imaging, careful planning, and intraoperative decision-making. The rapid emergence of artificial intelligence (AI) and digital tools has assisted in these domains. Methods: We performed a PRISMA-guided systematic review (searches through June 2025) of AI/digital technologies applied to HPB surgical care, including novel models such as machine learning, deep learning, radiomics, augmented/mixed reality, and computer vision. Eligible studies had to address imaging interpretation, preoperative planning, intraoperative guidance, or outcome prediction. Results: In total, 38 studies met inclusion criteria. Imaging models constructed with AI showed high diagnostic performance for lesion detection and classification (commonly AUC ~0.80–0.98). Moreover, risk models using machine learning frequently exceeded traditional scores for predicting postoperative complications (e.g., pancreatic fistula). AI-assisted three-dimensional visual reconstructions enhanced anatomical understanding for preoperative planning, while augmented and mixed-reality systems enabled real-time intraoperative navigation in pilot series. Computer-vision systems recognized critical intraoperative landmarks (e.g., critical view of safety) and detected hazards such as bleeding in near real time. Most of the studies included were retrospective, single-center, or feasibility designs, with limited external validation. Conclusions: AI and digital tools show promising results across the HPB pathway, from preoperative diagnostics to intraoperative safety and guidance. The evidence to date supports technical feasibility and suggests clinical benefit, but routine adoption and further conclusions should await prospective, multicenter validation and consistent reporting. With continued refinement, multidisciplinary collaboration, appropriate cost effectiveness, and attention to ethics and implementation, these technologies could improve the precision, safety, and outcomes of HPB surgery.

39 pages, 12608 KB  
Article
An Audio Augmented Reality Navigation System for Blind and Visually Impaired People Integrating BIM and Computer Vision
by Leonardo Messi, Massimo Vaccarini, Alessandra Corneli, Alessandro Carbonari and Leonardo Binni
Buildings 2025, 15(18), 3252; https://doi.org/10.3390/buildings15183252 - 9 Sep 2025
Cited by 1 | Viewed by 2314
Abstract
Since statistics show a growing trend in blindness and visual impairment, the development of navigation systems supporting Blind and Visually Impaired People (BVIP) must be urgently addressed. Guiding BVIP to a desired destination across indoor and outdoor settings without relying on a pre-installed infrastructure is an open challenge. While numerous solutions have been proposed by researchers in recent decades, a comprehensive navigation system that can support BVIP mobility in mixed and unprepared environments is still missing. This study proposes a novel navigation system that enables BVIP to request directions and be guided to a desired destination across heterogeneous and unprepared settings. To achieve this, the system applies Computer Vision (CV)—namely an integrated Structure from Motion (SfM) pipeline—for tracking the user and exploits Building Information Modelling (BIM) semantics for planning the reference path to reach the destination. Audio Augmented Reality (AAR) technology is adopted for directional guidance delivery due to its intuitive and non-intrusive nature, which allows seamless integration with traditional mobility aids (e.g., white canes or guide dogs). The developed system was tested on a university campus to assess its performance during both path planning and navigation tasks, the latter involving users in both blindfolded and sighted conditions. Quantitative results indicate that the system computed paths in about 10 milliseconds and effectively guided blindfolded users to their destination, achieving performance comparable to that of sighted users. Remarkably, users in blindfolded conditions completed navigation tests with an average deviation from the reference path within the 0.60-meter shoulder width threshold in 100% of the trials, compared to 75% of the tests conducted by sighted users. These findings demonstrate the system’s accuracy in maintaining navigational alignment within acceptable human spatial tolerances. The proposed approach contributes to the advancement of BVIP assistive technologies by enabling scalable, infrastructure-free navigation across heterogeneous environments.
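The deviation figures suggest a simple evaluation metric: the perpendicular distance between logged user positions and the BIM-derived reference polyline, checked against the 0.60 m shoulder-width bound. The sketch below assumes 2-D floor-plane coordinates; the function name and the toy data are illustrative only.

```python
import numpy as np

def max_path_deviation(positions, waypoints):
    """Largest perpendicular distance (m) from logged user positions to a
    polyline reference path; a sketch of the evaluation metric, assuming 2-D
    floor-plane coordinates extracted from the tracking log."""
    P = np.asarray(positions, float)
    W = np.asarray(waypoints, float)
    devs = []
    for p in P:
        best = np.inf
        for a, b in zip(W[:-1], W[1:]):              # check each path segment
            ab = b - a
            t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
            best = min(best, np.linalg.norm(p - (a + t * ab)))
        devs.append(best)
    return float(max(devs))

path = [(0, 0), (10, 0), (10, 8)]                    # reference path from BIM
log = [(0.1, 0.2), (5.0, 0.4), (9.8, 3.0)]           # tracked user positions
dev = max_path_deviation(log, path)
print(dev, dev <= 0.60)                              # within the shoulder-width bound?
```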