Search Results (1,516)

Search Parameters:
Keywords = virtual reality system

20 pages, 980 KiB  
Article
Dynamic Decoding of VR Immersive Experience in User’s Technology-Privacy Game
by Shugang Li, Zulei Qin, Meitong Liu, Ziyi Li, Jiayi Zhang and Yanfang Wei
Systems 2025, 13(8), 638; https://doi.org/10.3390/systems13080638 - 1 Aug 2025
Abstract
The formation mechanism of Virtual Reality (VR) Immersive Experience (VRIE) is notably complex; this study aimed to dynamically decode its underlying drivers by innovatively integrating Flow Theory and Privacy Calculus Theory, focusing on Perceptual-Interactive Fidelity (PIF), Consumer Willingness to Immerse in Technology (CWTI), and the applicability of Loss Aversion Theory. To achieve this, we analyzed approximately 30,000 user reviews from Amazon using Latent Semantic Analysis (LSA) and regression analysis. The findings reveal that user attention’s impact on VRIE is non-linear, suggesting an optimal threshold, and confirm PIF as a central influencing mechanism; furthermore, CWTI significantly moderates users’ privacy calculus, thereby affecting VRIE, while Loss Aversion Theory showed limited explanatory power in the VR context. These results provide a deeper understanding of VR user behavior, offering significant theoretical guidance and practical implications for future VR system design, particularly in strategically balancing user cognition, PIF, privacy concerns, and individual willingness. Full article
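
For readers unfamiliar with the analysis route described above, the minimal sketch below pairs LSA features with a quadratic attention term in a regression (scikit-learn). The reviews, ratings, and "attention" values are invented placeholders, not the study's data or code.

```python
# Hypothetical sketch of the described pipeline: LSA features from review
# text plus a quadratic attention term in a regression. The reviews, ratings,
# and "attention" values are invented placeholders, not the study's data.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LinearRegression

reviews = [
    "great headset, deeply immersive and responsive",
    "tracking felt off and the menus were confusing",
    "loved the visuals but worried about data collection",
    "comfortable, realistic hand interaction, would recommend",
]
immersion = np.array([5.0, 2.0, 3.5, 4.5])   # stand-in VRIE proxy
attention = np.array([0.8, 0.3, 0.6, 0.7])   # stand-in attention measure

# Latent Semantic Analysis: TF-IDF followed by truncated SVD.
tfidf = TfidfVectorizer().fit_transform(reviews)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# A squared attention term lets the fit express a non-linear (threshold) effect.
X = np.column_stack([lsa, attention, attention ** 2])
model = LinearRegression().fit(X, immersion)
print(model.coef_)   # a negative weight on attention**2 would hint at an optimum
```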

32 pages, 6323 KiB  
Article
Design, Implementation and Evaluation of an Immersive Teleoperation Interface for Human-Centered Autonomous Driving
by Irene Bouzón, Jimena Pascual, Cayetana Costales, Aser Crespo, Covadonga Cima and David Melendi
Sensors 2025, 25(15), 4679; https://doi.org/10.3390/s25154679 - 29 Jul 2025
Viewed by 252
Abstract
As autonomous driving technologies advance, the need for human-in-the-loop systems becomes increasingly critical to ensure safety, adaptability, and public confidence. This paper presents the design and evaluation of a context-aware immersive teleoperation interface that integrates real-time simulation, virtual reality, and multimodal feedback to support remote interventions in emergency scenarios. Built on a modular ROS2 architecture, the system allows seamless transition between simulated and physical platforms, enabling safe and reproducible testing. The experimental results show a high task success rate and user satisfaction, highlighting the importance of intuitive controls, gesture recognition accuracy, and low-latency feedback. Our findings contribute to the understanding of human-robot interaction (HRI) in immersive teleoperation contexts and provide insights into the role of multisensory feedback and control modalities in building trust and situational awareness for remote operators. Ultimately, this approach is intended to support the broader acceptability of autonomous driving technologies by enhancing human supervision, control, and confidence. Full article
(This article belongs to the Special Issue Human-Centred Smart Manufacturing - Industry 5.0)
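
As a rough sketch of what one node in a modular ROS2 teleoperation stack of this kind could look like, the rclpy example below publishes a stream of remote driving commands. The topic name, publish rate, and use of geometry_msgs/Twist are illustrative assumptions, not the authors' interface.

```python
# Minimal rclpy sketch of a teleoperation command publisher, one node a
# modular ROS2 teleoperation stack might contain. Topic name and message
# type are assumptions for illustration, not the paper's API.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist


class TeleopBridge(Node):
    def __init__(self):
        super().__init__('teleop_bridge')
        self.pub = self.create_publisher(Twist, '/remote/cmd_vel', 10)
        self.timer = self.create_timer(0.05, self.tick)  # 20 Hz command stream

    def tick(self):
        msg = Twist()
        msg.linear.x = 0.5   # placeholder forward velocity from the VR interface
        msg.angular.z = 0.0  # placeholder steering command
        self.pub.publish(msg)


def main():
    rclpy.init()
    node = TeleopBridge()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()


if __name__ == '__main__':
    main()
```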

13 pages, 1775 KiB  
Review
Integrating Physical Activity and Artificial Intelligence in Burn Rehabilitation: Muscle Recovery and Body Image Restoration
by Vasiliki J. Malliou, George Pafis, Christos Katsikas and Spyridon Plakias
Appl. Sci. 2025, 15(15), 8323; https://doi.org/10.3390/app15158323 - 26 Jul 2025
Viewed by 234
Abstract
Burn injuries result in complex physiological and psychological sequelae, including hypermetabolism, muscle wasting, mobility impairment, scarring, and disrupted body image. While advances in acute care have improved survival, comprehensive rehabilitation strategies are critical for restoring function, appearance, and psychosocial well-being. Structured physical activity, including resistance and aerobic training, plays a central role in counteracting muscle atrophy, improving cardiovascular function, enhancing scar quality, and promoting psychological resilience and body image restoration. This narrative review synthesizes the current evidence on the effects of exercise-based interventions on post-burn recovery, highlighting their therapeutic mechanisms, clinical applications, and implementation challenges. In addition to physical training, emerging technologies such as virtual reality, aquatic therapy, and compression garments offer promising adjunctive benefits. Notably, artificial intelligence (AI) is gaining traction in burn rehabilitation through its integration into wearable biosensors and telehealth platforms that enable real-time monitoring, individualized feedback, and predictive modeling of recovery outcomes. These AI-driven tools have the potential to personalize exercise regimens, support remote care, and enhance scar assessment and wound tracking. Overall, the integration of exercise-based interventions with digital technologies represents a promising, multimodal approach to burn recovery. Future research should focus on optimizing exercise prescriptions, improving access to personalized rehabilitation tools, and advancing AI-enabled systems to support long-term recovery, functional independence, and positive self-perception among burn survivors. Full article

22 pages, 1329 KiB  
Review
Visual Field Examinations for Retinal Diseases: A Narrative Review
by Ko Eun Kim and Seong Joon Ahn
J. Clin. Med. 2025, 14(15), 5266; https://doi.org/10.3390/jcm14155266 - 25 Jul 2025
Viewed by 172
Abstract
Visual field (VF) testing remains a cornerstone in assessing retinal function by measuring how well different parts of the retina detect light. It is essential for early detection, monitoring, and management of many retinal diseases. By mapping retinal sensitivity, VF exams can reveal functional loss before structural changes become visible. This review summarizes how VF testing is applied across key conditions: hydroxychloroquine (HCQ) retinopathy, age-related macular degeneration (AMD), diabetic retinopathy (DR) and macular edema (DME), and inherited disorders including inherited dystrophies such as retinitis pigmentosa (RP). Traditional methods like the Goldmann kinetic perimetry and simple tools such as the Amsler grid help identify large or central VF defects. Automated perimetry (e.g., Humphrey Field Analyzer) provides detailed, quantitative data critical for detecting subtle paracentral scotomas in HCQ retinopathy and central vision loss in AMD. Frequency-doubling technology (FDT) reveals early neural deficits in DR before blood vessel changes appear. Microperimetry offers precise, localized sensitivity maps for macular diseases. Despite its value, VF testing faces challenges including patient fatigue, variability in responses, and interpretation of unreliable results. Recent advances in artificial intelligence, virtual reality perimetry, and home-based perimetry systems are improving test accuracy, accessibility, and patient engagement. Integrating VF exams with these emerging technologies promises more personalized care, earlier intervention, and better long-term outcomes for patients with retinal disease. Full article
(This article belongs to the Special Issue New Advances in Retinal Diseases)

51 pages, 5654 KiB  
Review
Exploring the Role of Digital Twin and Industrial Metaverse Technologies in Enhancing Occupational Health and Safety in Manufacturing
by Arslan Zahid, Aniello Ferraro, Antonella Petrillo and Fabio De Felice
Appl. Sci. 2025, 15(15), 8268; https://doi.org/10.3390/app15158268 - 25 Jul 2025
Viewed by 348
Abstract
The evolution of Industry 4.0 and the emerging paradigm of Industry 5.0 have introduced disruptive technologies that are reshaping modern manufacturing environments. Among these, Digital Twin (DT) and Industrial Metaverse (IM) technologies are increasingly recognized for their potential to enhance Occupational Health and Safety (OHS). However, a comprehensive understanding of how these technologies integrate to support OHS in manufacturing remains limited. This study systematically explores the transformative role of DT and IM in creating immersive, intelligent, and human-centric safety ecosystems. Following the PRISMA guidelines, a Systematic Literature Review (SLR) of 75 peer-reviewed studies from the SCOPUS and Web of Science databases was conducted. The review identifies key enabling technologies such as Virtual Reality (VR), Augmented Reality (AR), Extended Reality (XR), Internet of Things (IoT), Artificial Intelligence (AI), Cyber-Physical Systems (CPS), and Collaborative Robots (COBOTS), and highlights their applications in real-time monitoring, immersive safety training, and predictive hazard mitigation. A conceptual framework is proposed, illustrating a synergistic digital ecosystem that integrates predictive analytics, real-time monitoring, and immersive training to enhance the OHS. The findings highlight both the transformative benefits and the key adoption challenges of these technologies, including technical complexities, data security, privacy, ethical concerns, and organizational resistance. This study provides a foundational framework for future research and practical implementation in Industry 5.0. Full article

24 pages, 4249 KiB  
Article
Developing a Serious Video Game to Engage the Upper Limb Post-Stroke Rehabilitation
by Jaime A. Silva, Manuel F. Silva, Hélder P. Oliveira and Cláudia D. Rocha
Appl. Sci. 2025, 15(15), 8240; https://doi.org/10.3390/app15158240 - 24 Jul 2025
Viewed by 252
Abstract
Stroke often leads to severe motor impairment, especially in the upper limbs, greatly reducing a patient’s ability to perform daily tasks. Effective rehabilitation is essential to restore function and improve quality of life. Traditional therapies, while useful, may lack engagement, leading to low motivation and poor adherence. Gamification—using game-like elements in non-game contexts—offers a promising way to make rehabilitation more engaging. The authors explore a gamified rehabilitation system designed in Unity 3D using a Kinect V2 camera. The game includes key features such as adjustable difficulty, real-time and predominantly positive feedback, user friendliness, and data tracking for progress. The evaluations were conducted with 18 healthy participants, most of whom had prior virtual reality experience. About 77% found the application highly motivating. While the gameplay was well received, the visual design was noted as lacking engagement. Importantly, all users agreed that the game offers a broad range of difficulty levels, making it accessible to various users. The results suggest that the system has strong potential to improve rehabilitation outcomes and encourage long-term use through enhanced motivation and interactivity. Full article

40 pages, 16352 KiB  
Review
Surface Protection Technologies for Earthen Sites in the 21st Century: Hotspots, Evolution, and Future Trends in Digitalization, Intelligence, and Sustainability
by Yingzhi Xiao, Yi Chen, Yuhao Huang and Yu Yan
Coatings 2025, 15(7), 855; https://doi.org/10.3390/coatings15070855 - 20 Jul 2025
Viewed by 652
Abstract
As vital material carriers of human civilization, earthen sites are experiencing continuous surface deterioration under the combined effects of weathering and anthropogenic damage. Traditional surface conservation techniques, due to their poor compatibility and limited reversibility, struggle to address the compound challenges of micro-scale degradation and macro-scale deformation. With the deep integration of digital twin technology, spatial information technologies, intelligent systems, and sustainable concepts, earthen site surface conservation technologies are transitioning from single-point applications to multidimensional integration. However, challenges remain in terms of the insufficient systematization of technology integration and the absence of a comprehensive interdisciplinary theoretical framework. Based on the dual-core databases of Web of Science and Scopus, this study systematically reviews the technological evolution of surface conservation for earthen sites between 2000 and 2025. CiteSpace 6.2 R4 and VOSviewer 1.6 were used for bibliometric visualization analysis, which was innovatively combined with manual close reading of the key literature and GPT-assisted semantic mining (error rate < 5%) to efficiently identify core research themes and infer deeper trends. The results reveal the following: (1) technological evolution follows a three-stage trajectory—from early point-based monitoring technologies, such as remote sensing (RS) and the Global Positioning System (GPS), to spatial modeling technologies, such as light detection and ranging (LiDAR) and geographic information systems (GIS), and, finally, to today’s integrated intelligent monitoring systems based on multi-source fusion; (2) the key surface technology system comprises GIS-based spatial data management, high-precision modeling via LiDAR, 3D reconstruction using oblique photogrammetry, and building information modeling (BIM) for structural protection, while cutting-edge areas focus on digital twin (DT) and the Internet of Things (IoT) for intelligent monitoring, augmented reality (AR) for immersive visualization, and blockchain technologies for digital authentication; (3) future research is expected to integrate big data and cloud computing to enable multidimensional prediction of surface deterioration, while virtual reality (VR) will overcome spatial–temporal limitations and push conservation paradigms toward automation, intelligence, and sustainability. This study, grounded in the technological evolution of surface protection for earthen sites, constructs a triadic framework of “intelligent monitoring–technological integration–collaborative application,” revealing the integration needs between DT and VR for surface technologies. It provides methodological support for addressing current technical bottlenecks and lays the foundation for dynamic surface protection, solution optimization, and interdisciplinary collaboration. Full article

36 pages, 6020 KiB  
Article
“It Felt Like Solving a Mystery Together”: Exploring Virtual Reality Card-Based Interaction and Story Co-Creation Collaborative System Design
by Yaojiong Yu, Mike Phillips and Gianni Corino
Appl. Sci. 2025, 15(14), 8046; https://doi.org/10.3390/app15148046 - 19 Jul 2025
Viewed by 332
Abstract
Virtual reality interaction design and story co-creation design for multiple users is an interdisciplinary research field that merges human–computer interaction, creative design, and virtual reality technologies. Story co-creation design enables multiple users to collectively generate and share narratives, allowing them to contribute to the storyline, modify plot trajectories, and craft characters, thereby facilitating a dynamic storytelling experience. Through advanced virtual reality interaction design, collaboration and social engagement can be further enriched to encourage active participation. This study investigates the facilitation of narrative creation and enhancement of storytelling skills in virtual reality by leveraging existing research on story co-creation design and virtual reality technology. Subsequently, we developed and evaluated the virtual reality card-based collaborative storytelling platform Co-Relay. By analyzing interaction data and user feedback obtained from user testing and experimental trials, we observed substantial enhancements in user engagement, immersion, creativity, and fulfillment of emotional and social needs compared to a conventional web-based storytelling platform. The primary contribution of this study lies in demonstrating how the incorporation of story co-creation can elevate storytelling proficiency, plot development, and social interaction within the virtual reality environment. Our novel methodology offers a fresh outlook on the design of collaborative narrative creation in virtual reality, particularly by integrating participatory multi-user storytelling platforms that blur the traditional boundaries between creators and audiences, as well as between fiction and reality. Full article
(This article belongs to the Special Issue Extended Reality (XR) and User Experience (UX) Technologies)

13 pages, 2968 KiB  
Article
Neurophysiological Effects of Virtual Reality Multitask Training in Cardiac Surgery Patients: A Study with Standardized Low-Resolution Electromagnetic Tomography (sLORETA)
by Irina Tarasova, Olga Trubnikova, Darya Kupriyanova, Irina Kukhareva and Anastasia Sosnina
Biomedicines 2025, 13(7), 1755; https://doi.org/10.3390/biomedicines13071755 - 18 Jul 2025
Viewed by 301
Abstract
Background: Digital technologies offer innovative opportunities for recovering and maintaining intellectual and mental health. The use of a multitask approach that combines motor component with various cognitive tasks in a virtual environment can optimize cognitive and physical functions and improve the quality of life of cardiac surgery patients. This study aimed to localize current sources of theta and alpha power in patients who have undergone virtual multitask training (VMT) and a control group in the early postoperative period of coronary artery bypass grafting (CABG). Methods: A total of 100 male CABG patients (mean age, 62.7 ± 7.62 years) were allocated to the VMT group (n = 50) or to the control group (n = 50). EEG was recorded in the eyes-closed resting state at baseline (2–3 days before CABG) and after VMT course or approximately 11–12 days after CABG (the control group). Power EEG analysis was conducted and frequency-domain standardized low-resolution tomography (sLORETA) was used to assess the effect of VMT on brain activity. Results: After VMT, patients demonstrated a significantly higher density of alpha-rhythm (7–9 Hz) current sources (t > −4.18; p < 0.026) in Brodmann area 30, parahippocampal, and limbic system structures compared to preoperative data. In contrast, the control group had a marked elevation in the density of theta-rhythm (3–5 Hz) current sources (t > −3.98; p < 0.017) in parieto-occipital areas in comparison to preoperative values. Conclusions: Virtual reality-based multitask training stimulated brain regions associated with spatial orientation and memory encoding. The findings of this study highlight the importance of neural mechanisms underlying the effectiveness of multitask interventions and will be useful for designing and conducting future studies involving VR multitask training. Full article
(This article belongs to the Section Neurobiology and Clinical Neuroscience)
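
The frequency bands analyzed above (theta 3-5 Hz, alpha 7-9 Hz) can be illustrated with a simple band-power computation. The sketch below applies Welch's PSD to a synthetic channel; it is not sLORETA source localization, only the frequency-domain step, and the sampling rate and signal are invented.

```python
# Rough illustration of the theta (3-5 Hz) and alpha (7-9 Hz) band powers the
# study analyzes, computed with Welch's PSD on a synthetic channel. This is
# not sLORETA source localization, only the frequency-domain step.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 250                                     # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 8 * t) + 0.5 * np.random.randn(t.size)  # toy "channel"

freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)

def band_power(lo, hi):
    mask = (freqs >= lo) & (freqs <= hi)
    return trapezoid(psd[mask], freqs[mask])

print("theta 3-5 Hz:", band_power(3, 5))
print("alpha 7-9 Hz:", band_power(7, 9))
```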

81 pages, 11973 KiB  
Article
Designing and Evaluating XR Cultural Heritage Applications Through Human–Computer Interaction Methods: Insights from Ten International Case Studies
by Jolanda Tromp, Damian Schofield, Pezhman Raeisian Parvari, Matthieu Poyade, Claire Eaglesham, Juan Carlos Torres, Theodore Johnson, Teele Jürivete, Nathan Lauer, Arcadio Reyes-Lecuona, Daniel González-Toledo, María Cuevas-Rodríguez and Luis Molina-Tanco
Appl. Sci. 2025, 15(14), 7973; https://doi.org/10.3390/app15147973 - 17 Jul 2025
Viewed by 851
Abstract
Advanced three-dimensional extended reality (XR) technologies are highly suitable for cultural heritage research and education. XR tools enable the creation of realistic virtual or augmented reality applications for curating and disseminating information about cultural artifacts and sites. Developing XR applications for cultural heritage requires interdisciplinary collaboration involving strong teamwork and soft skills to manage user requirements, system specifications, and design cycles. Given the diverse end-users, achieving high precision, accuracy, and efficiency in information management and user experience is crucial. Human–computer interaction (HCI) design and evaluation methods are essential for ensuring usability and return on investment. This article presents ten case studies of cultural heritage software projects, illustrating the interdisciplinary work between computer science and HCI design. Students from institutions such as the State University of New York (USA), Glasgow School of Art (UK), University of Granada (Spain), University of Málaga (Spain), Duy Tan University (Vietnam), Imperial College London (UK), Research University Institute of Communication & Computer Systems (Greece), Technical University of Košice (Slovakia), and Indiana University (USA) contributed to creating, assessing, and improving the usability of these diverse cultural heritage applications. The results include a structured typology of CH XR application scenarios, detailed insights into design and evaluation practices across ten international use cases, and a development framework that supports interdisciplinary collaboration and stakeholder integration in phygital cultural heritage projects. Full article
(This article belongs to the Special Issue Advanced Technologies Applied to Cultural Heritage)

20 pages, 1798 KiB  
Article
An Approach to Enable Human–3D Object Interaction Through Voice Commands in an Immersive Virtual Environment
by Alessio Catalfamo, Antonio Celesti, Maria Fazio, A. F. M. Saifuddin Saif, Yu-Sheng Lin, Edelberto Franco Silva and Massimo Villari
Big Data Cogn. Comput. 2025, 9(7), 188; https://doi.org/10.3390/bdcc9070188 - 17 Jul 2025
Viewed by 413
Abstract
Nowadays, the Metaverse is facing many challenges. In this context, Virtual Reality (VR) applications allowing voice-based human–3D object interactions are limited due to the current hardware/software limitations. In fact, adopting Automated Speech Recognition (ASR) systems to interact with 3D objects in VR applications through users’ voice commands presents significant challenges due to the hardware and software limitations of headset devices. This paper aims to bridge this gap by proposing a methodology to address these issues. In particular, starting from a Mel-Frequency Cepstral Coefficient (MFCC) extraction algorithm able to capture the unique characteristics of the user’s voice, we pass it as input to a Convolutional Neural Network (CNN) model. After that, in order to integrate the CNN model with a VR application running on a standalone headset, such as Oculus Quest, we converted it into an Open Neural Network Exchange (ONNX) format, i.e., a Machine Learning (ML) interoperability open standard format. The proposed system demonstrates good performance and represents a foundation for the development of user-centric, effective computing systems, enhancing accessibility to VR environments through voice-based commands. Experiments demonstrate that a native CNN model developed through TensorFlow presents comparable performances with respect to the corresponding CNN model converted into the ONNX format, paving the way towards the development of VR applications running in headsets controlled through the user’s voice. Full article
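
A compressed sketch of the MFCC-to-CNN-to-ONNX route described above is given below. The input shape, number of voice commands, layer choices, and file names are illustrative assumptions, not the authors' architecture.

```python
# Compressed sketch of the described pipeline: MFCC features -> small CNN ->
# ONNX export. Shapes, class count, and layers are illustrative assumptions.
import numpy as np
import librosa
import tensorflow as tf
import tf2onnx

# 1) MFCC extraction from a one-second dummy waveform (16 kHz assumed).
wave = np.random.randn(16000).astype(np.float32)
mfcc = librosa.feature.mfcc(y=wave, sr=16000, n_mfcc=13)   # (13, frames)
x = mfcc[np.newaxis, ..., np.newaxis]                      # (1, 13, frames, 1)

# 2) Small CNN classifier over MFCC "images" (e.g., 5 voice commands).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=x.shape[1:]),
    tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),
])
print(model(x).shape)  # (1, 5) class probabilities

# 3) Convert to ONNX so a standalone headset runtime can load the model.
spec = (tf.TensorSpec(x.shape, tf.float32, name="mfcc"),)
onnx_model, _ = tf2onnx.convert.from_keras(model, input_signature=spec)
with open("voice_cnn.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())
```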

49 pages, 3444 KiB  
Article
A Design-Based Research Approach to Streamline the Integration of High-Tech Assistive Technologies in Speech and Language Therapy
by Anna Lekova, Paulina Tsvetkova, Anna Andreeva, Georgi Dimitrov, Tanio Tanev, Miglena Simonska, Tsvetelin Stefanov, Vaska Stancheva-Popkostadinova, Gergana Padareva, Katia Rasheva, Adelina Kremenska and Detelina Vitanova
Technologies 2025, 13(7), 306; https://doi.org/10.3390/technologies13070306 - 16 Jul 2025
Viewed by 499
Abstract
Currently, high-tech assistive technologies (ATs), particularly Socially Assistive Robots (SARs), virtual reality (VR) and conversational AI (ConvAI), are considered very useful in supporting professionals in Speech and Language Therapy (SLT) for children with communication disorders. However, despite a positive public perception, therapists face difficulties when integrating these technologies into practice due to technical challenges and a lack of user-friendly interfaces. To address this gap, a design-based research approach has been employed to streamline the integration of SARs, VR and ConvAI in SLT, and a new software platform called “ATLog” has been developed for designing interactive and playful learning scenarios with ATs. ATLog’s main features include visual-based programming with graphical interface, enabling therapists to intuitively create personalized interactive scenarios without advanced programming skills. The platform follows a subprocess-oriented design, breaking down SAR skills and VR scenarios into microskills represented by pre-programmed graphical blocks, tailored to specific treatment domains, therapy goals, and language skill levels. The ATLog platform was evaluated by 27 SLT experts using the Technology Acceptance Model (TAM) and System Usability Scale (SUS) questionnaires, extended with additional questions specifically focused on ATLog structure and functionalities. According to the SUS results, most of the experts (74%) evaluated ATLog with grades over 70, indicating high acceptance of its usability. Over half (52%) of the experts rated the additional questions focused on ATLog’s structure and functionalities in the A range (90–100), while 26% rated them in the B range (80–89), showing strong acceptance of the platform for creating and running personalized interactive scenarios with ATs. According to the TAM results, experts gave high grades for both perceived usefulness (44% in the A range) and perceived ease of use (63% in the A range). Full article
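
For context, the "grades over 70" above refer to standard SUS scoring: odd items contribute (score − 1), even items contribute (5 − score), and the sum is multiplied by 2.5 to give a 0-100 score. A small sketch with made-up responses:

```python
# Standard System Usability Scale (SUS) scoring, the computation behind
# a "grade over 70". The example responses below are made up.
def sus_score(responses):
    """responses: list of 10 item ratings on a 1-5 Likert scale."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)  # odd: r-1, even: 5-r
    return total * 2.5  # scale to 0-100

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```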

18 pages, 2318 KiB  
Systematic Review
Dropout Rate of Participants with Cancer in Randomized Clinical Trials That Use Virtual Reality to Manage Pain—A Systematic Review with Meta-Analysis and Meta-Regression
by Cristina García-Muñoz, María-Dolores Cortés-Vega and Patricia Martínez-Miranda
Healthcare 2025, 13(14), 1708; https://doi.org/10.3390/healthcare13141708 - 16 Jul 2025
Viewed by 355
Abstract
Background/Objectives: Virtual reality has emerged as a promising intervention for pain management in individuals with cancer. Although its clinical effects have been explored, little is known about participant adherence and dropout behavior. This systematic review and meta-analysis aimed to estimate the pooled dropout rate in randomized controlled trials using virtual reality to treat cancer pain; assess whether dropout differs between groups; and explore potential predictors of attrition. Methods: We conducted a systematic search of PubMed, Web of Science, Scopus, and CINAHL up to April 2025. Eligible studies were randomized trials involving cancer patients or survivors that compared VR interventions for pain management with any non-VR control. Proportion meta-analyses and odds ratio meta-analyses were performed. Heterogeneity was assessed using the I2 statistic, and meta-regression was conducted to explore potential predictors of dropout. The JBI appraisal tool was used to assess the methodological quality and GRADE system to determine the certainty of evidence. Results: Six randomized controlled trials were included (n = 569). The pooled dropout rate was 16% (95% CI: 8.2–28.7%). Dropout was slightly lower in VR groups (12.7%) than in controls (21.4%), but the difference was not statistically significant (OR = 0.94; 95% CI: 0.51–1.72; I2 = 9%; GRADE: very low). No significant predictors of dropout were identified. Conclusions: VR interventions appear to have acceptable retention rates in oncology settings. The pooled dropout estimate may serve as a reference for sample size calculations. Future trials should improve reporting practices and investigate how VR modality and patient characteristics influence adherence. Full article
(This article belongs to the Special Issue Innovative Approaches to Chronic Disease Patient Care)
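
The kind of pooled estimate reported above can be reproduced in miniature with a random-effects (DerSimonian-Laird) pooling of logit-transformed proportions. The trial counts below are invented for illustration, not the review's data.

```python
# Minimal random-effects (DerSimonian-Laird) pooling of dropout proportions on
# the logit scale, illustrating how a pooled rate with 95% CI and I^2 is
# obtained. The dropout and enrollment counts are made up.
import numpy as np

dropouts = np.array([5, 12, 8, 3, 10, 7])        # hypothetical dropouts per trial
enrolled = np.array([60, 90, 70, 50, 120, 80])   # hypothetical sample sizes

p = dropouts / enrolled
logit = np.log(p / (1 - p))
var = 1 / dropouts + 1 / (enrolled - dropouts)   # variance of a logit proportion

# Between-study variance (tau^2) and heterogeneity (I^2).
w = 1 / var
mean_fe = np.sum(w * logit) / w.sum()
q = np.sum(w * (logit - mean_fe) ** 2)
c = w.sum() - np.sum(w ** 2) / w.sum()
k = len(p)
tau2 = max(0.0, (q - (k - 1)) / c)
i2 = max(0.0, (q - (k - 1)) / q) * 100

w_re = 1 / (var + tau2)
pooled = np.sum(w_re * logit) / w_re.sum()
se = np.sqrt(1 / w_re.sum())
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
inv = lambda x: 1 / (1 + np.exp(-x))             # inverse logit
print(f"pooled dropout {inv(pooled):.1%} (95% CI {inv(lo):.1%}-{inv(hi):.1%}), I2={i2:.0f}%")
```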

29 pages, 3338 KiB  
Article
AprilTags in Unity: A Local Alternative to Shared Spatial Anchors for Synergistic Shared Space Applications Involving Extended Reality and the Internet of Things
by Amitabh Mishra and Kevin Foster Carff
Sensors 2025, 25(14), 4408; https://doi.org/10.3390/s25144408 - 15 Jul 2025
Viewed by 319
Abstract
Creating shared spaces is a key part of making extended reality (XR) and Internet of Things (IoT) technology more interactive and collaborative. Currently, one system which stands out in achieving this end commercially involves spatial anchors. Due to the cloud-based nature of these anchors, they can introduce connectivity and privacy issues for projects which need to be isolated from the internet. This research attempts to explore and create a different approach that does not require internet connectivity. This work involves the creation of an AprilTags-based calibration system as a local solution for creating shared XR spaces and investigates its performance. AprilTags are simple, scannable markers that, through computer vision algorithms, can help XR devices figure out position and rotation in a three-dimensional space. This implies that multiple users can be in the same virtual space and in the real-world space at the same time, easily. Our tests in XR showed that this method is accurate and works well for synchronizing multiple users. This approach could make shared XR experiences faster, more private, and easier to use without depending on cloud-based calibration systems. Full article
(This article belongs to the Special Issue Feature Papers in the Internet of Things Section 2025)
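
A Python sketch (not the paper's Unity/C# implementation) of the core idea follows: detect an AprilTag in a camera frame and recover the pose relative to it, which two headsets could treat as a shared local origin. The camera intrinsics, tag family, tag size, and image path are assumed values.

```python
# Sketch of AprilTag-based pose recovery as a shared local anchor. The
# intrinsics, tag size, and image path are placeholders; this is not the
# authors' Unity implementation.
import cv2
from pupil_apriltags import Detector

fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0   # assumed camera intrinsics
tag_size_m = 0.10                             # assumed printed tag edge length

gray = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2GRAY)
detector = Detector(families="tag36h11")
detections = detector.detect(
    gray, estimate_tag_pose=True,
    camera_params=(fx, fy, cx, cy), tag_size=tag_size_m,
)

for det in detections:
    # pose_R / pose_t give the tag's pose in the camera frame; inverting that
    # transform expresses the camera in the tag frame, i.e., a shared origin.
    print(det.tag_id, det.pose_t.ravel())
```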

17 pages, 610 KiB  
Review
Three-Dimensional Reconstruction Techniques and the Impact of Lighting Conditions on Reconstruction Quality: A Comprehensive Review
by Dimitar Rangelov, Sierd Waanders, Kars Waanders, Maurice van Keulen and Radoslav Miltchev
Lights 2025, 1(1), 1; https://doi.org/10.3390/lights1010001 - 14 Jul 2025
Viewed by 330
Abstract
Three-dimensional (3D) reconstruction has become a fundamental technology in applications ranging from cultural heritage preservation and robotics to forensics and virtual reality. As these applications grow in complexity and realism, the quality of the reconstructed models becomes increasingly critical. Among the many factors that influence reconstruction accuracy, the lighting conditions at capture time remain one of the most influential, yet widely neglected, variables. This review provides a comprehensive survey of classical and modern 3D reconstruction techniques, including Structure from Motion (SfM), Multi-View Stereo (MVS), Photometric Stereo, and recent neural rendering approaches such as Neural Radiance Fields (NeRFs) and 3D Gaussian Splatting (3DGS), while critically evaluating their performance under varying illumination conditions. We describe how lighting-induced artifacts such as shadows, reflections, and exposure imbalances compromise the reconstruction quality and how different approaches attempt to mitigate these effects. Furthermore, we uncover fundamental gaps in current research, including the lack of standardized lighting-aware benchmarks and the limited robustness of state-of-the-art algorithms in uncontrolled environments. By synthesizing knowledge across fields, this review aims to gain a deeper understanding of the interplay between lighting and reconstruction and provides research directions for the future that emphasize the need for adaptive, lighting-robust solutions in 3D vision systems. Full article
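
As a pointer to the classical pipeline the review surveys, the sketch below runs a single two-view Structure-from-Motion step (ORB matching, essential matrix, relative pose) with OpenCV; lighting artifacts such as shadows and reflections show up here as inflated outlier rates in the RANSAC stage. The image paths and intrinsic matrix are placeholders.

```python
# Compact two-view Structure-from-Motion step (ORB matching -> essential
# matrix -> relative pose) with OpenCV. Image paths and intrinsics are
# placeholders, not data from the review.
import cv2
import numpy as np

K = np.array([[700.0, 0, 320.0], [0, 700.0, 240.0], [0, 0, 1]])  # assumed intrinsics
img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
k1, d1 = orb.detectAndCompute(img1, None)
k2, d2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)

pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
pts2 = np.float32([k2[m.trainIdx].pt for m in matches])

# Essential matrix with RANSAC; shadows and reflections inflate outliers here.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
print("relative rotation:\n", R, "\ntranslation direction:", t.ravel())
```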