Search Results (393)

Search Parameters:
Keywords = mixed/augmented reality

20 pages, 2153 KB  
Article
Fusing Prediction and Perception: Adaptive Kalman Filter-Driven Respiratory Gating for MR Surgical Navigation
by Haoliang Li, Shuyi Wang, Jingyi Hu, Tao Zhang and Yueyang Zhong
Sensors 2026, 26(2), 405; https://doi.org/10.3390/s26020405 - 8 Jan 2026
Viewed by 101
Abstract
Background: Respiratory-induced target displacement remains a major challenge for achieving accurate and safe augmented-reality-guided thoracoabdominal percutaneous puncture. Existing approaches often suffer from system latency, dependence on intraoperative imaging, or the absence of intelligent timing assistance. Methods: We developed a mixed-reality (MR) surgical navigation system that incorporates an adaptive Kalman-filter-based respiratory prediction module and visual gating cues. The system was evaluated using a dynamic respiratory motion simulation platform. The Kalman filter performs real-time state estimation and short-term prediction of optically tracked respiratory motion, enabling simultaneous compensation for MR model drift and forecasting of the end-inhalation window to trigger visual guidance. Results: Compared with the uncompensated condition, the proposed system reduced dynamic registration error from (3.15 ± 1.23) mm to (2.11 ± 0.58) mm (p < 0.001). Moreover, the predicted guidance window occurred approximately 142 ms in advance with >92% accuracy, providing preparation time for needle insertion. Conclusions: The integrated MR system effectively suppresses respiratory-induced model drift and offers intelligent timing guidance for puncture execution.
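
To make the prediction step concrete, here is a minimal sketch of predict-ahead gating with a 1-D constant-velocity Kalman filter over tracked respiratory displacement, extrapolated roughly 142 ms ahead to cue the end-inhalation window. It is illustrative only; the update rate, noise covariances, and gating rule are assumptions, not the authors' implementation.

```python
# Minimal 1-D constant-velocity Kalman filter over respiratory displacement,
# with short-horizon extrapolation used as a gating cue (illustrative only).
import numpy as np

dt = 1.0 / 60.0                              # assumed 60 Hz optical tracker
F = np.array([[1.0, dt], [0.0, 1.0]])        # state transition: [position, velocity]
H = np.array([[1.0, 0.0]])                   # only position is measured
Q = np.diag([1e-4, 1e-3])                    # process noise (assumed)
R = np.array([[0.05]])                       # measurement noise, mm^2 (assumed)

x = np.zeros((2, 1))                         # state estimate [mm, mm/s]
P = np.eye(2)                                # state covariance

def kalman_step(z_mm: float) -> np.ndarray:
    """One predict/update cycle for a scalar position measurement."""
    global x, P
    x = F @ x                                # predict
    P = F @ P @ F.T + Q
    y = np.array([[z_mm]]) - H @ x           # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ y                            # update
    P = (np.eye(2) - K @ H) @ P
    return x

def predict_ahead(horizon_s: float = 0.142) -> np.ndarray:
    """Extrapolate the current state ~142 ms into the future."""
    xp = x.copy()
    for _ in range(int(round(horizon_s / dt))):
        xp = F @ xp
    return xp

# Cue the gate when the predicted velocity crosses zero from above,
# i.e., the extrapolated state reaches the end-inhalation plateau.
xp = predict_ahead()
cue_end_inhalation = (x[1, 0] > 0.0) and (xp[1, 0] <= 0.0)
```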

19 pages, 20380 KB  
Article
Accessible Augmented Reality in Sheltered Workshops: A Mixed-Methods Evaluation for Users with Mental Disabilities
by Valentin Knoben, Malte Stellmacher, Jonas Blattgerste, Björn Hein and Christian Wurll
Virtual Worlds 2026, 5(1), 1; https://doi.org/10.3390/virtualworlds5010001 - 4 Jan 2026
Viewed by 136
Abstract
A prominent application of Augmented Reality (AR) is to provide step-by-step guidance for procedural tasks, as it allows information to be displayed in situ by overlaying it directly onto the user’s physical environment. While the potential of AR is well known, the perspectives and requirements of individuals with mental disabilities, who face both cognitive and psychological barriers at work, have yet to be addressed, particularly on Head-Mounted Displays (HMDs). To understand the practical limitations of such a system, we conducted a mixed-methods user study with 29 participants, including individuals with mental disabilities, their colleagues, and support professionals. Participants used a commercially available system on an AR HMD to perform a machine setup task. Quantitative results revealed that participants with mental disabilities perceived the system as less usable than those without. Qualitative findings point towards actionable areas for improvement, such as privacy-aware human support, motivating but lightweight gamification, user-controlled pacing with clear feedback, confidence-building interaction patterns, and clearer task intent in multimodal instructions.

29 pages, 1050 KB  
Article
A Lightweight Authentication and Key Distribution Protocol for XR Glasses Using PUF and Cloud-Assisted ECC
by Wukjae Cha, Hyang Jin Lee, Sangjin Kook, Keunok Kim and Dongho Won
Sensors 2026, 26(1), 217; https://doi.org/10.3390/s26010217 - 29 Dec 2025
Viewed by 295
Abstract
The rapid convergence of artificial intelligence (AI), cloud computing, and 5G communication has positioned extended reality (XR) as a core technology bridging the physical and virtual worlds. Encompassing virtual reality (VR), augmented reality (AR), and mixed reality (MR), XR has demonstrated transformative potential across sectors such as healthcare, industry, education, and defense. However, the compact architecture and limited computational capabilities of XR devices render conventional cryptographic authentication schemes inefficient, while the real-time transmission of biometric and positional data introduces significant privacy and security vulnerabilities. To overcome these challenges, this study introduces PXRA (PUF-based XR authentication), a lightweight and secure authentication and key distribution protocol optimized for cloud-assisted XR environments. PXRA utilizes a physically unclonable function (PUF) for device-level hardware authentication and offloads elliptic curve cryptography (ECC) operations to the cloud to enhance computational efficiency. Authenticated encryption with associated data (AEAD) ensures message confidentiality and integrity, while formal verification through ProVerif confirms the protocol’s robustness under the Dolev–Yao adversary model. Experimental results demonstrate that PXRA reduces device-side computational overhead by restricting XR terminals to lightweight PUF and hash functions, achieving an average authentication latency below 15 ms, sufficient for real-time XR performance. Formal analysis verifies PXRA’s resistance to replay, impersonation, and key compromise attacks, while preserving user anonymity and session unlinkability. These findings establish the feasibility of integrating hardware-based PUF authentication with cloud-assisted cryptographic computation to enable secure, scalable, and real-time XR systems. The proposed framework lays a foundation for future XR applications in telemedicine, remote collaboration, and immersive education, where both performance and privacy preservation are paramount. Our contribution lies in a hybrid PUF–cloud ECC architecture, context-bound AEAD for session-splicing resistance, and a noise-resilient BCH-based fuzzy extractor supporting up to 15% BER.
(This article belongs to the Special Issue Feature Papers in the Internet of Things Section 2025)
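
As a toy illustration of the device-side pattern the abstract describes (PUF challenge-response, hash-derived session key, context-bound AEAD), consider the sketch below. A real PUF is a hardware primitive; here it is modeled with an HMAC over a device-unique secret, and all names and message flows are assumptions rather than the PXRA specification.

```python
# Toy PUF-backed challenge-response with a context-bound AEAD session.
# Illustrative only: a real PUF is hardware; HMAC stands in for it here.
import os, hmac, hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

DEVICE_SECRET = os.urandom(32)               # stand-in for the physical PUF

def puf_response(challenge: bytes) -> bytes:
    """Model of the device's challenge-response behavior."""
    return hmac.new(DEVICE_SECRET, challenge, hashlib.sha256).digest()

# Enrollment: the server stores (challenge, response) pairs out of band.
challenge = os.urandom(16)
enrolled_response = puf_response(challenge)

# Authentication: both sides derive a session key from the PUF response
# and fresh nonces, so each session key is unique and unlinkable.
server_nonce, device_nonce = os.urandom(16), os.urandom(16)
session_key = hashlib.sha256(enrolled_response + server_nonce + device_nonce).digest()

# AEAD protects messages; the associated data binds the ciphertext to this
# session's context (one defense against session splicing).
aead = AESGCM(session_key)
nonce = os.urandom(12)
context = server_nonce + device_nonce
ciphertext = aead.encrypt(nonce, b"pose update", context)
assert aead.decrypt(nonce, ciphertext, context) == b"pose update"
```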

31 pages, 652 KB  
Review
Immersive HCI for Intangible Cultural Heritage in Tourism Contexts: A Narrative Review of Design and Evaluation
by Zhan Xu, Feng Liu, Guobin Xia, Shuo Wang, Yiting Duan, Luwen Yu, Shichao Zhao and Muzi Li
Sustainability 2026, 18(1), 153; https://doi.org/10.3390/su18010153 - 23 Dec 2025
Viewed by 653
Abstract
Immersive technologies such as virtual reality (VR), augmented reality (AR), mixed reality (MR), and multisensory interaction are increasingly deployed to support the transmission and presentation of intangible cultural heritage (ICH), particularly within tourism and heritage interpretation contexts. In cultural tourism, ICH is often encountered through museums, heritage sites, festivals, and digitally mediated experiences rather than through sustained community-based transmission, raising important challenges for interaction design, accessibility, and cultural representation. This study presents a narrative review of immersive human–computer interaction (HCI) research in the context of ICH, with a particular focus on tourism-facing applications. An initial dataset of 145 records was identified through a structured search of major academic databases from their inception to 2024. Following staged screening based on relevance, publication type, and temporal criteria, 97 empirical or technical studies published after 2020 were included in the final analysis. The review synthesises how immersive technologies are applied across seven ICH domains and examines their deployment in key tourism-related settings, including museum interpretation, heritage sites, and sustainable cultural tourism experiences. The findings reveal persistent tensions between technological innovation, cultural authenticity, and user engagement, challenges that are especially pronounced in tourism contexts. The review also maps the dominant methodological approaches, including user-centred design, participatory frameworks, and mixed-method strategies. By integrating structured screening with narrative synthesis, the review highlights fragmentation in the field, uneven methodological rigour, and gaps in both cultural adaptability and long-term sustainability, and outlines future directions for culturally responsive and inclusive immersive HCI research in ICH tourism.
(This article belongs to the Special Issue Cultural Heritage and Sustainable Urban Tourism)

24 pages, 10048 KB  
Entry
Immersive Methods and Biometric Tools in Food Science and Consumer Behavior
by Abdul Hannan Zulkarnain and Attila Gere
Encyclopedia 2026, 6(1), 2; https://doi.org/10.3390/encyclopedia6010002 - 22 Dec 2025
Viewed by 312
Definition
Immersive methods and biometric tools provide a rigorous, context-rich way to study how people perceive and choose food. Immersive methods use extended reality, including virtual, augmented, mixed, and augmented virtual environments, to recreate settings such as homes, shops, and restaurants. They increase participants’ sense of presence and the ecological validity (realism of conditions) of experiments, while still tightly controlling sensory and social cues like lighting, sound, and surroundings. Biometric tools record objective signals linked to attention, emotion, and cognitive load via sensors such as eye-tracking, galvanic skin response (GSR), heart rate (and variability), facial electromyography, electroencephalography, and functional near-infrared spectroscopy. Researchers align stimulus presentation, gaze, and physiology on a common temporal reference and link these data to outcomes like liking, choice, or willingness-to-buy. This approach reveals implicit responses that self-reports may miss, clarifies how changes in context shift perception, and improves predictive power. It enables faster, lower-risk product and packaging development, better-informed labeling and retail design, and more targeted nutrition and health communication. Good practices emphasize careful system calibration, adequate statistical power, participant comfort and safety, robust data protection, and transparent analysis. In food science and consumer behavior, combining immersive environments with biometrics yields valid, reproducible evidence about what captures attention, creates value, and drives food choice.
(This article belongs to the Collection Food and Food Culture)
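
The temporal-alignment step described above is, in practice, a nearest-timestamp join across sensor streams. A minimal sketch using pandas follows; the column names, sampling rates, and 100 ms tolerance are illustrative assumptions, not a specific lab pipeline.

```python
# Align eye-tracking and GSR streams on a common timeline with a
# nearest-timestamp join (illustrative column names and tolerance).
import pandas as pd

gaze = pd.DataFrame({"t": [0.00, 0.02, 0.04, 0.06],   # seconds since stimulus onset
                     "fix_x": [512, 515, 640, 642]})
gsr = pd.DataFrame({"t": [0.00, 0.05],
                    "conductance_uS": [2.1, 2.4]})

# Match each gaze sample with the most recent GSR sample within 100 ms;
# both frames must be sorted on the key for merge_asof.
merged = pd.merge_asof(gaze.sort_values("t"), gsr.sort_values("t"),
                       on="t", direction="backward", tolerance=0.1)
print(merged)
```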

55 pages, 25612 KB  
Article
Experiential Approach to a Neolithic Lakeside Settlement Using Extended Reality (XR) Technologies
by Athanasios Evagelou, Alexandros Kleftodimos, Magdalini Grigoriou and Georgios Lappas
Electronics 2025, 14(24), 4870; https://doi.org/10.3390/electronics14244870 - 10 Dec 2025
Viewed by 389
Abstract
The present paper discusses extended reality (XR) applications specifically designed to enhance experiential, location-based learning in outdoor spaces, utilized in the context of an environmental education program of the Education Center for the Environment and Sustainability (E.S.E.C.) of Kastoria. With the use of augmented, mixed, and virtual reality technologies, an attempt is made to enrich the knowledge and experiences of students during their visit to the representation of the Neolithic settlement (an open-air museum) and their active participation in the learning process. Students take on roles such as those of an archeologist, a detective, and an explorer. Using mobile devices and GPS technology, students search for and identify virtual findings at the excavation site, travel through time, and investigate a mystery (crime) that occurred during the Neolithic period, exploring and navigating the reconstructed settlement while interacting with real and virtual objects; through dedicated VR glasses, they also discover the lifestyle of Neolithic people. The design of the applications was based on the ADDIE model, while the evaluation was conducted using a structured questionnaire for XR experiences. The fundamental constructs of the questionnaire were defined as follows: Challenge, Satisfaction/Enjoyment, Ease of Use, Usefulness/Knowledge, Interaction/Collaboration, and Intention to Reuse. A total of 163 students were involved in the study. Descriptive statistics showed consistently high scores across factors (M = 4.21–4.58, SD = 0.41–0.63). Pearson correlations revealed strong associations between Challenge—Satisfaction/Enjoyment (r = 0.688), Usefulness/Knowledge—Intention to Reuse (r = 0.648), and Satisfaction—Intention to Reuse (r = 0.651). Regression analysis further supported key relationships such as Usefulness/Knowledge—Intention to Reuse (β = 0.31, p < 0.001), Usefulness/Knowledge—Interaction/Collaboration (β = 0.34, p < 0.001), Satisfaction/Enjoyment—Usefulness/Knowledge (β = 0.42, p < 0.001), and Challenge—Satisfaction/Enjoyment (β = 0.69, p < 0.001). Overall, the findings suggest that well-designed XR experiences can support higher engagement, perceived cognitive value, and intention to reuse in authentic outdoor learning contexts.
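
For readers unfamiliar with the reported statistics, a minimal sketch of the two analyses (Pearson correlation and simple regression) is shown below on synthetic placeholder data; the study's actual construct scores are not reproduced, and standardized β additionally requires z-scoring both variables.

```python
# Pearson correlation and simple linear regression between two questionnaire
# constructs, on synthetic placeholder data (not the study's scores).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
challenge = rng.normal(4.4, 0.5, 163)                     # e.g., Challenge
satisfaction = 0.7 * challenge + rng.normal(0, 0.3, 163)  # e.g., Satisfaction

r, p = stats.pearsonr(challenge, satisfaction)
fit = stats.linregress(challenge, satisfaction)           # unstandardized slope

# Standardized beta = slope computed on z-scored variables.
zc = (challenge - challenge.mean()) / challenge.std()
zs = (satisfaction - satisfaction.mean()) / satisfaction.std()
beta = stats.linregress(zc, zs).slope

print(f"r = {r:.3f} (p = {p:.2g}), slope = {fit.slope:.2f}, beta = {beta:.2f}")
```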

25 pages, 8864 KB  
Article
Collaboration Mechanics with AR/VR for Cadastral Surveys—A Conceptual Implementation for an Urban Ward in Indonesia
by Trias Aditya, Adrian N. Pamungkas, Faishal Ashaari, Walter T. de Vries, Calvin Wijaya and Nicholas G. Setiawan
Geomatics 2025, 5(4), 75; https://doi.org/10.3390/geomatics5040075 - 5 Dec 2025
Viewed by 554
Abstract
Synchronous interaction from different locations has become a globally accepted mode of working since the COVID-19 outbreak. For centuries, professional cadastral survey activities required a mode of interaction in which surveyors, neighboring landowners, and local officers were present simultaneously. During the systematic adjudication and land registration project in Indonesia, multiple problems in the land information systems emerged which, to date, remain unsolved. These include plots of land without a related title, incorrect demarcations in the field, and titles listed without a connection to a land plot. We argue that these problems emerged from ineffective survey workflows that draw on inflexible process steps. This research assesses how, and to what extent, the use of augmented and virtual reality (AR/VR) technologies can make land registration services more effective and support synchronous collaboration at a distance (the so-called same-time, different-place principle). The tested cadastral survey workflows include first land titling, land subdivision, and the updating and maintenance of the cadastral database. These are common cases that could potentially benefit from integrated uses of augmented and virtual reality applications. Mixed reality technologies using VR glasses are also tested as tools allowing individuals, surveyors, and government officers to work together synchronously from different places via a web mediation dashboard. The work aims to provide alternatives for safe interactions between field surveyors and decision-making groups in their endeavors to reach fast and effective collaborative decisions on boundaries.

22 pages, 1145 KB  
Article
TSMTFN: Two-Stream Temporal Shift Module Network for Efficient Egocentric Gesture Recognition in Virtual Reality
by Muhammad Abrar Hussain, Chanjun Chun and SeongKi Kim
Virtual Worlds 2025, 4(4), 58; https://doi.org/10.3390/virtualworlds4040058 - 4 Dec 2025
Viewed by 346
Abstract
Egocentric hand gesture recognition is vital for natural human–computer interaction in augmented and virtual reality (AR/VR) systems. However, most deep learning models struggle to balance accuracy and efficiency, limiting real-time use on wearable devices. This paper introduces a Two-Stream Temporal Shift Module Transformer Fusion Network (TSMTFN) that achieves high recognition accuracy at low computational cost. The model integrates Temporal Shift Modules (TSMs) for efficient motion modeling and a Transformer-based fusion mechanism for long-range temporal understanding, operating on dual RGB-D streams to capture complementary visual and depth cues. Training stability and generalization are enhanced through full-layer training from the first epoch and MixUp/CutMix augmentations. Evaluated on the EgoGesture dataset, TSMTFN attained 96.18% top-1 accuracy and 99.61% top-5 accuracy on the independent test set with only 16 GFLOPs and 21.3M parameters, offering a 2.4–4.7× reduction in computation compared to recent state-of-the-art methods. The model runs at 15.10 samples/s, achieving real-time performance. The results demonstrate robust recognition across over 95% of gesture classes and minimal inter-class confusion, establishing TSMTFN as an efficient, accurate, and deployable solution for next-generation wearable AR/VR gesture interfaces.
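
The core of a Temporal Shift Module is a zero-FLOP channel shift along the time axis; a minimal PyTorch sketch is shown below. The 1/8 + 1/8 shifted fraction is the commonly used default, not necessarily the paper's exact configuration.

```python
# Minimal temporal shift (the core TSM operation) in PyTorch: a fraction of
# channels is shifted one step forward/backward in time at zero FLOP cost.
import torch

def temporal_shift(x: torch.Tensor, fold_div: int = 8) -> torch.Tensor:
    """x has shape (batch, time, channels, height, width)."""
    b, t, c, h, w = x.shape
    fold = c // fold_div
    out = torch.zeros_like(x)
    out[:, 1:, :fold] = x[:, :-1, :fold]                  # shift forward in time
    out[:, :-1, fold:2 * fold] = x[:, 1:, fold:2 * fold]  # shift backward in time
    out[:, :, 2 * fold:] = x[:, :, 2 * fold:]             # remaining channels unchanged
    return out

clip = torch.randn(2, 8, 64, 56, 56)     # dummy feature maps for an 8-frame clip
shifted = temporal_shift(clip)           # same shape, temporally mixed channels
```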

21 pages, 3768 KB  
Article
Spatial Plane Positioning of AR-HUD Graphics: Implications for Driver Inattentional Blindness in Navigation and Collision Warning Scenarios
by Menlong Ye and Jun Yin
Electronics 2025, 14(23), 4768; https://doi.org/10.3390/electronics14234768 - 4 Dec 2025
Viewed by 482
Abstract
In-vehicle Augmented Reality Head-Up Displays (AR-HUDs) enhance driving performance and experience by presenting critical information such as navigation cues and collision warnings. Although many studies have investigated the efficacy of AR-HUD navigation and collision warning interface designs, existing research has overlooked the critical interplay between graphic spatial positioning and the safety risks arising from inattentional blindness. This study employed a single-factor within-subjects design, with Experiment 1 and Experiment 2 separately examining the impact of the spatial planar position (horizontal, vertical, or mixed) of AR-HUD navigation graphics and collision warning graphics on drivers’ inattentional blindness. The results revealed that the spatial planar position of AR-HUD navigation graphics has no significant effect on inattentional blindness behavior or reaction time; however, the horizontal planar position yielded the best user experience with low workload, followed by the mixed planar position. For AR-HUD collision warning graphics, spatial planar position does not significantly influence the frequency of inattentional blindness; from the perspectives of workload and user experience, the vertical planar position provides the best experience with the lowest workload, while the mixed planar position demonstrates superior hedonic qualities. Overall, this study offers design guidelines for in-vehicle AR-HUD interfaces.

22 pages, 1230 KB  
Review
Extended Reality in Computer Science Education: A Narrative Review of Pedagogical Benefits, Challenges, and Future Directions
by Miguel A. Garcia-Ruiz, Elba A. Morales-Vanegas, Laura S. Gaytán-Lugo, Pablo A. Alcaraz-Valencia and Pedro C. Santana-Mancilla
Virtual Worlds 2025, 4(4), 56; https://doi.org/10.3390/virtualworlds4040056 - 3 Dec 2025
Viewed by 608
Abstract
Technologies such as Extended Reality (XR), in the form of Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR), are being researched for their potential to support higher education. XR offers novel opportunities for improving understanding and engagement in computer science (CS) courses, supporting abstract and algorithmic thinking, and applying knowledge to solve problems with computers. This narrative literature review reports the state of XR adoption in university CS education by examining pedagogical benefits, representative cases, challenges, and future research directions. Recent case studies have demonstrated that VR supports algorithm and data-structure visualization, AR contextualizes programming and circuit analysis, and MR bridges experimental practice on virtual and real hardware within computer labs. The potential of XR to enhance engagement, motivation, and the understanding of complex content has already been researched. However, obstacles remain, such as the high cost of hardware, technical issues in scaling content, restricted access for students with disabilities, and ethical concerns over privacy and data protection. This review also presents XR not as a substitute for traditional pedagogy, but as an additive tool that, in alignment with well-defined curricular objectives, may enhance CS learning. If these deficiencies are overcome and inclusive, evidence-based practices are advanced, XR has the potential to play a powerful role in the future of computer science education as part of the digital learning ecosystem.

23 pages, 1722 KB  
Systematic Review
Augmented and Mixed Reality Interventions in People with Multiple Sclerosis: A Systematic Review
by María Fernández-Cañas, Roberto Cano-de-la-Cuerda, Selena Marcos-Antón and Ana Onate-Figuérez
Brain Sci. 2025, 15(12), 1292; https://doi.org/10.3390/brainsci15121292 - 30 Nov 2025
Viewed by 540
Abstract
Background: In recent years, extended reality has gained traction in people with multiple sclerosis (MS) for its ability to deliver engaging, task-specific, and multisensory therapeutic experiences. Aim: This systematic review investigates the application of Mixed Reality (MR) and Augmented Reality (AR) technologies in neurorehabilitation for individuals with MS. Method: A comprehensive systematic review was conducted across seven databases in accordance with the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA), and seven eligible studies involving MR/AR interventions targeting motor and cognitive functions were identified. The review protocol was prospectively registered in the International Prospective Register of Systematic Reviews (PROSPERO). Data extraction was performed independently by two reviewers, and discrepancies were resolved by consensus or consultation with a third reviewer. Participants were predominantly diagnosed with relapsing-remitting MS and presented mild to moderate disability. Technologies ranged from head-mounted displays to home-based AR platforms, with interventions addressing gait, upper-limb coordination, and dual-task performance. Outcome measures were mapped to the ICF framework, encompassing body function, activity, participation, and contextual factors. Results: Findings suggest short-term improvements in gait parameters, grip strength, and motor coordination, with enhanced engagement and usability reported. Methodological quality was moderate, with small sample sizes and heterogeneous protocols limiting generalizability. Risk of bias varied across study designs. Despite promising results, further research is needed to validate long-term efficacy, optimize cognitive load, and standardize intervention protocols. Conclusions: MR and AR may serve as effective complements to conventional and VR-based rehabilitation, particularly in personalized, task-oriented training for MS populations.
(This article belongs to the Special Issue The Rehabilitation of Neurologic Disorders)

16 pages, 8229 KB  
Article
MVL-Loc: Leveraging Vision-Language Model for Generalizable Multi-Scene Camera Relocalization
by Zhendong Xiao, Shan Yang, Shujie Ji, Jun Yin, Ziling Wen and Wu Wei
Appl. Sci. 2025, 15(23), 12642; https://doi.org/10.3390/app152312642 - 28 Nov 2025
Viewed by 393
Abstract
Camera relocalization, a cornerstone capability of modern computer vision, accurately determines a camera’s position and orientation from images and is essential for applications in augmented reality, mixed reality, autonomous driving, delivery drones, and robotic navigation. Traditional deep-learning-based methods regress camera pose from images within a single scene and lack generalization and robustness in diverse environments. We propose MVL-Loc, a novel end-to-end, multi-scene, six-degrees-of-freedom camera relocalization framework. MVL-Loc leverages pretrained world knowledge from vision-language models and incorporates multimodal data to generalize across both indoor and outdoor settings. Furthermore, natural language is employed as a directive tool to guide the multi-scene learning process, facilitating semantic understanding of complex scenes and capturing spatial relationships among objects. Extensive experiments on the 7Scenes and Cambridge Landmarks datasets demonstrate MVL-Loc’s robustness and state-of-the-art performance in real-world multi-scene camera relocalization, with improved accuracy in both positional and orientational estimates.
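
Relocalization accuracy on benchmarks such as 7Scenes is conventionally reported as translation error (metres) and rotation error (degrees); a generic sketch of these metrics follows. It is not MVL-Loc's evaluation code.

```python
# Standard 6-DoF pose error metrics: Euclidean translation error and
# angular rotation error between unit quaternions (generic sketch).
import numpy as np

def pose_errors(t_pred, t_gt, q_pred, q_gt):
    t_err = np.linalg.norm(np.asarray(t_pred, float) - np.asarray(t_gt, float))
    q1 = np.asarray(q_pred, float); q1 /= np.linalg.norm(q1)
    q2 = np.asarray(q_gt, float); q2 /= np.linalg.norm(q2)
    # Angle between two rotations: theta = 2 * arccos(|<q1, q2>|)
    r_err = np.degrees(2.0 * np.arccos(np.clip(abs(q1 @ q2), -1.0, 1.0)))
    return t_err, r_err

t_err, r_err = pose_errors([0.10, 0.00, 0.20], [0.12, 0.00, 0.18],
                           [1.0, 0.0, 0.0, 0.0], [0.999, 0.035, 0.0, 0.0])
print(f"translation error = {t_err:.3f} m, rotation error = {r_err:.2f} deg")
```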

12 pages, 2242 KB  
Article
Augmented Reality-Assisted Micro-Invasive Apicectomy with Markerless Visual–Inertial Odometry: An In Vivo Pilot Study
by Marco Farronato, Davide Farronato, Federico Michelini and Giulio Rasperini
Appl. Sci. 2025, 15(23), 12588; https://doi.org/10.3390/app152312588 - 27 Nov 2025
Viewed by 343
Abstract
Introduction: Apicectomy is an endodontic surgical procedure prescribed for persistent periapical pathologies when conventional root canal therapy or retreatment has failed. Accurate intraoperative visualization of the root apex and surrounding structures remains challenging and error-prone. Augmented reality (AR) allows real-time digital overlays of the anatomical region, potentially improving surgical precision and reducing invasiveness. The purpose of this pilot study is to describe the application of an AR method in cases requiring apicectomy. Materials and Methods: Patients presenting with chronic persistent apical radiolucency associated with pain underwent AR-assisted apicectomy. Cone-beam computed tomography (CBCT) scans were obtained preoperatively for segmentation of the target root apex and adjacent anatomical structures. A custom visual–inertial odometry (VIO) algorithm was used to map and stabilize the segmented digital 3D models on a portable device in real time, enabling an overlay of digital guides onto the operative field. The duration of the preoperative procedures was recorded. Postoperative pain, measured on a Visual Analogue Scale (VAS), and periapical healing, assessed radiographically, were recorded at baseline (T0) and at 6 weeks and 6 months after surgery (T1–T2). Results: AR-assisted apicectomies were successfully performed in all three patients without intraoperative complications. The digital overlay procedure required an average of 1.49 ± 0.34 min. VAS scores decreased significantly from T0 to T2, and patients showed radiographic evidence of progressive periapical healing. No patient reported persistent discomfort at follow-up. Conclusion: This preliminary pilot study indicates that AR-assisted apicectomy is feasible and may improve intraoperative visualization at low additional surgical time. Larger future studies with control groups are needed to validate the proposed method and quantify its outcomes. Clinical Significance: By integrating real-time digital images of bony structures and root morphology, AR guidance during apicectomy may offer enhanced precision for apical resection and may decrease the risk of iatrogenic damage. The visual–inertial odometry-based AR method is a novel technique that showed promising results in terms of VAS scores and final outcomes in this preliminary pilot study, especially in anatomically challenging cases.
(This article belongs to the Special Issue Advanced Dental Imaging Technology)

42 pages, 50263 KB  
Article
How AR-Enhanced Cultural Heritage Landscapes Influence Perception in Rural Tourism Spaces: Evidence from Eye Tracking and HRV
by Wenzhuo Fan, Chen Li, Songhua Gao, Nisha Ai and Nan Li
Sustainability 2025, 17(23), 10575; https://doi.org/10.3390/su172310575 - 25 Nov 2025
Viewed by 674
Abstract
Against the backdrop of globalization, environmental pressures, and rapid tourism development, digital technologies are emerging as vital supplementary tools for cultural heritage preservation. This study investigates the impact of augmented reality (AR)-enhanced cultural heritage landscapes on rural tourists’ perceptions, validating their effects through two physiological dimensions: visual attention and autonomic nervous system regulation. Employing a mixed experimental design (n = 81), the research integrates heart rate variability, eye tracking, and subjective questionnaires, with Aoluguya Village in Inner Mongolia serving as the test site. Participants viewed videos and images of real and AR environments in an isolated space. Data were analyzed using repeated measures ANOVA and paired t-tests. The results revealed that AR significantly increased RMSSD in the native rural environment (t(89) = −3.606, p = 0.001, d = 0.38), indicating heightened parasympathetic activity, while no significant effect was observed in the artificially recreated environment (t(89) = −2.020, p = 0.407), demonstrating that the physiological benefits depend on the setting. Eye tracking data revealed that both AR environments increased total gaze duration and gaze frequency (an average increase of 1.5–2.0 gazes), enhancing visual attention. The questionnaire results (n = 26) supported these findings on the attention-focus, novelty, and esthetic dimensions, though improvements in authenticity and overall satisfaction were limited. This study demonstrates that AR environments significantly capture visitor attention, particularly when integrated with authentic local spaces to enhance visitor experiences. The findings provide practical insights for revitalizing traditional village cultural heritage and optimizing rural tourism.
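
RMSSD, the HRV index reported above, is simply the root mean square of successive differences between RR intervals; a short example on synthetic intervals illustrates the computation.

```python
# RMSSD from a series of RR intervals (synthetic values, in milliseconds):
# the root mean square of successive differences, a parasympathetic index.
import numpy as np

rr_ms = np.array([812.0, 824.0, 808.0, 830.0, 815.0, 821.0])
rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))
print(f"RMSSD = {rmssd:.1f} ms")
```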

18 pages, 4084 KB  
Article
Synergic Co-Benefits and Value of Digital Technology Enablers for Circular Management Models Across Value Chain Stakeholders in the Built Environment
by Sakdirat Kaewunruen, Charalampos Baniotopoulos, Patrick Teuffel, Hamza Driou, Otso Valta, Jan Pešta and Diana Bajare
CivilEng 2025, 6(4), 62; https://doi.org/10.3390/civileng6040062 - 23 Nov 2025
Viewed by 479
Abstract
It is undeniable that digital technology enablers, e.g., building information modelling, digital twins, extended reality (i.e., virtual reality, augmented reality, mixed reality), and automation, have recently played a significant role in the construction and engineering industry. Traditional applications of digital technologies include design and construction management, waste management, and, to a limited extent, asset management. Despite these applications, the technology users are often isolated and siloed. In reality, the cross-functional applications, roles, and co-benefits have not been thoroughly understood or well demonstrated. This is evidenced by the very limited usage of such technology across either the whole lifecycle or the value chain of built environment sectors. On this ground, this study is the first to tackle these challenges by conducting expert and stakeholder interviews using open-ended questionnaires, both online and offline (n = 42), to identify synergic roles and influences, as well as co-benefits, of digital technology enablers. Industry participants dominate our sample and, unsurprisingly, siloed practice can undermine cross-collaboration among value chain stakeholders. Co-benefits may hypothetically occur, but they can only be unlocked by genuine, participative stakeholder engagement. Our new findings also reveal technical and societal capabilities of digital technologies, which can inclusively enable participative decision-making, engagement, and integration of stakeholders for implementing buildings’ circularity through viable business and management models. These insights clearly show that digital technology enablers must be co-created by the main stakeholders in order to yield co-benefits and harvest synergic value for circular management models in the built environment.
(This article belongs to the Section Urban, Economy, Management and Transportation Engineering)
