Search Results (1,044)

Search Parameters:
Keywords = VR system

16 pages, 843 KB  
Review
Virtual Reality as a Potential Cornerstone for Remote Rehabilitative Therapies
by Raviraj Nataraj
Encyclopedia 2026, 6(2), 37; https://doi.org/10.3390/encyclopedia6020037 - 2 Feb 2026
Abstract
Therapeutic approaches using virtual reality (VR) have been effective in recovering function against various physical and cognitive disorders. Given its programmability and precise activity tracking, VR is a powerful tool for therapists to personalize treatments and monitor their patients more effectively. Due to the growing prevalence of VR systems for personal and work uses, and the high reliability of broadband telecommunication, the opportunity to standardize remote delivery of VR therapies is apparent. VR-based rehabilitation has high potential to be a cornerstone approach for remote therapies given critical features: (1) accessibility for home users, (2) patient–therapist engagement, (3) capacity for personalization, and (4) capabilities for precision monitoring. Unlike prior reviews that summarize established measures of efficacy of VR-based rehabilitation for various clinical populations, this perspective highlights the potency of applying VR rehabilitation methods remotely and ways to expand and optimize that usage such as its integration with wearables for monitoring and AI. Moreover, this paper restricts its focus to VR as opposed to augmented (AR) or mixed-mode (MR) reality platforms that are also increasing their prevalence in clinical settings. This perspective article broadly overviews VR-based therapies for rehabilitating physical and cognitive function for various disorder cases before postulating their potential as an effective platform for delivering remote treatment. This article concludes with essential considerations for advancing VR-based remote therapy in the future. Full article
(This article belongs to the Section Medicine & Pharmacology)
29 pages, 679 KB  
Article
Digital Boundaries and Consent in the Metaverse: A Comparative Review of Privacy Risks
by Sofia Sakka, Vasiliki Liagkou, Afonso Ferreira and Chrysostomos Stylios
J. Cybersecur. Priv. 2026, 6(1), 24; https://doi.org/10.3390/jcp6010024 - 2 Feb 2026
Abstract
The Metaverse presents significant opportunities for educational advancement by facilitating immersive, personalized, and interactive learning experiences through technologies such as virtual reality (VR), augmented reality (AR), extended reality (XR), and artificial intelligence (AI). However, this potential is compromised if digital environments fail to uphold individuals’ privacy, autonomy, and equity. Despite their widespread adoption, the privacy implications of these environments remain inadequately understood, in terms of both technical vulnerabilities and legislative challenges, particularly regarding user consent management. Contemporary Metaverse systems collect highly sensitive information, including biometric signals, spatial behavior, motion patterns, and interaction data, often surpassing the granularity captured by traditional social networks. The lack of privacy-by-design solutions, coupled with the complexity of underlying technologies such as VR/AR infrastructures, 3D tracking systems, and AI-driven personalization engines, makes these platforms vulnerable to security breaches, data misuse, and opaque processing practices. This study presents a structured literature review and comparative analysis of privacy risks, consent mechanisms, and digital boundaries in Metaverse platforms, with particular attention to educational contexts. We argue that privacy-aware design is essential not only for ethical compliance but also for supporting the long-term sustainability goals of digital education. Our findings aim to inform and support the development of secure, inclusive, and ethically grounded immersive learning environments by providing insights into systemic privacy and policy shortcomings. Full article
(This article belongs to the Special Issue Current Trends in Data Security and Privacy—2nd Edition)
14 pages, 2366 KB  
Article
Validating the Performance of VR Headset Eye-Tracking Using Gold Standard Eye-Tracker and MoCap System
by Russell Nathan Todd, Jian Gong, Amy Catherine Banic and Qin Zhu
Information 2026, 17(2), 143; https://doi.org/10.3390/info17020143 - 2 Feb 2026
Abstract
The integration of eye-tracking into consumer-grade virtual reality (VR) headsets presents a transformative opportunity for assessing user mental states within simulated, immersive environments. However, the validity of this built-in technology must be established against gold-standard real-world eye-tracking systems. This study employs a novel paradigm using a physically moving object to evaluate the accuracy of dynamic smooth pursuit, a key oculomotor function in mental state assessment. We rigorously validated the performance of the HTC Vive Pro Eye’s integrated eye-tracker against the Tobii Pro Glasses 3 using a high-precision OptiTrack motion capture system as ground-truth for object position. Eight participants completed both 2D and 3D gaze-tracking tasks. In the 2D condition, they tracked a dot on a screen, while in the 3D condition, they tracked a physically moving object. The real-world object trajectories captured by OptiTrack were replicated within a VR environment. Gaze data from both the VR headset and the Tobii glasses were recorded simultaneously and compared to the OptiTrack baseline using Dynamic Time Warping (DTW) to quantify accuracy. Results revealed a task-dependent performance. In the 2D task, the Tobii glasses demonstrated significantly lower DTW distances, indicating superior accuracy. Conversely, in the 3D task, the VR headset significantly outperformed the glasses, showing a closer match to the real object trajectory. This suggests that while traditional eye-trackers excel in constrained 2D contexts, integrated VR eye-tracking is more accurate for naturalistic 3D gaze pursuit. We conclude that VR headset eye-tracking is not only a reliable but also a cost-effective tool for research, particularly offering enhanced performance for studies conducted within immersive 3D simulations. Full article
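The study quantifies accuracy with Dynamic Time Warping (DTW) between recorded gaze paths and the OptiTrack ground truth, but the abstract gives no implementation details. The following is a minimal sketch of a generic DTW trajectory distance, assuming Euclidean point-wise costs; the array shapes, sampling, and variable names are illustrative rather than the authors' code.

```python
import numpy as np

def dtw_distance(traj_a: np.ndarray, traj_b: np.ndarray) -> float:
    """Dynamic-time-warping distance between two trajectories of shape (n, d)
    and (m, d), e.g. gaze or object positions sampled over time."""
    n, m = len(traj_a), len(traj_b)
    # Pairwise Euclidean cost between every pair of samples.
    cost = np.linalg.norm(traj_a[:, None, :] - traj_b[None, :, :], axis=-1)

    # Accumulated cost with the standard DTW recurrence.
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(
                acc[i - 1, j],      # stretch traj_a
                acc[i, j - 1],      # stretch traj_b
                acc[i - 1, j - 1],  # match
            )
    return float(acc[n, m])

# Illustrative use: a lower DTW distance means the gaze estimate follows the
# ground-truth (e.g. OptiTrack) trajectory more closely.
rng = np.random.default_rng(0)
ground_truth = np.cumsum(rng.normal(size=(200, 3)), axis=0)
gaze_estimate = ground_truth + rng.normal(scale=0.1, size=(200, 3))
print(dtw_distance(gaze_estimate, ground_truth))
```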
24 pages, 4127 KB  
Article
Harnessing AI, Virtual Landscapes, and Anthropomorphic Imaginaries to Enhance Environmental Science Education at Jökulsárlón Proglacial Lagoon, Iceland
by Jacquelyn Kelly, Dianna Gielstra, Tomáš J. Oberding, Jim Bruno and Stephanie Cosentino
Glacies 2026, 3(1), 3; https://doi.org/10.3390/glacies3010003 - 1 Feb 2026
Abstract
Introductory environmental science courses offer non-STEM students an entry point to address global challenges such as climate change and cryosphere preservation. Aligned with the International Year of Glacier Preservation and the Decade of Action for Cryospheric Sciences, this mixed-method, IRB-exempt study applied the Curriculum Redesign and Artificial Intelligence-Facilitated Transformation (CRAFT) model for course redesign. The project leveraged a human-centered AI approach to create anthropomorphized, place-based narratives for online learning. Generative AI is used to amend immersive virtual learning environments (VLEs) that animate glacial forces (water, rock, and elemental cycles) through narrative-driven virtual reality (VR) experiences. Students explored Iceland’s Jökulsárlón Glacier Lagoon via self-guided field simulations led by an imaginary water droplet, designed to foster environmental awareness and a sense of place. Data collection included a five-point Likert-scale survey and thematic coding of student comments. Findings revealed strong positive sentiment: 87.1% enjoyment of the imaginaries, 82.5% agreement on supporting connection to places, and 82.0% endorsement of their role in reinforcing spatial and systems thinking. Thematic analysis confirmed that anthropomorphic imaginaries enhanced emotional engagement and conceptual understanding of glacial processes, situating glacier preservation within geographic and global contexts. This AI-enhanced, multimodal approach demonstrates how narrative-based VR can make complex cryospheric concepts accessible for non-STEM learners, promoting early engagement with climate science and environmental stewardship. Full article
36 pages, 807 KB  
Review
An Overview of Technical Aspects and Challenges in Designing Edge-Cloud Systems
by Mohammadsadeq Garshasbi Herabad, Javid Taheri, Bestoun S. Ahmed and Calin Curescu
Appl. Sci. 2026, 16(3), 1454; https://doi.org/10.3390/app16031454 - 31 Jan 2026
Viewed by 60
Abstract
Edge–cloud computing has emerged as a key enabling paradigm for augmented and virtual reality (AR/VR) systems because of the stringent computational and ultra-low-latency requirements of AR/VR workloads. Designing efficient edge–cloud systems for such workloads involves multiple technical aspects, including communication technologies, service placement, task offloading and caching, service migration, and security and privacy. This paper provides a structured and technical analysis of these aspects from an AR/VR perspective. We adopt a two-stage literature analysis, in which Google Scholar is used to identify fundamental technical aspects and solution approaches, followed by a focused analysis of recent research trends and future directions using academic databases (e.g., IEEE Xplore, ACM Digital Library, and ScienceDirect). We present an organized classification of the core technical aspects and investigate existing solution approaches, including heuristic, metaheuristic, learning-based, and hybrid strategies. Rather than introducing application-specific designs, the analysis focuses on workload-driven challenges and trade-offs that arise in AR/VR systems. Based on this classification, we analyze recent research trends, identify underexplored technical areas, and highlight key research gaps that hinder the efficient deployment of AR/VR services over edge–cloud infrastructures. The findings of this study provide practical insights for researchers and system designers and help guide future research toward more responsive, scalable, and reliable edge–cloud AR/VR systems. Full article
(This article belongs to the Special Issue Edge Computing and Cloud Computing: Latest Advances and Prospects)
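The review classifies offloading strategies (heuristic, metaheuristic, learning-based, hybrid) without tying them to code. As one hypothetical illustration of the simplest heuristic class, the sketch below makes a greedy latency-minimizing local-versus-edge decision for a single AR/VR task; every constant in it is an assumption for the example.

```python
from dataclasses import dataclass

@dataclass
class Task:
    cycles: float        # CPU cycles required (e.g. for one frame of AR/VR work)
    input_bits: float    # data that must be uploaded if the task is offloaded

def offload_decision(task: Task,
                     device_hz: float,     # local CPU speed
                     edge_hz: float,       # edge-server CPU speed
                     uplink_bps: float) -> str:
    """Greedy latency-minimizing offloading, a common baseline heuristic:
    local latency is pure compute time; edge latency adds uplink delay."""
    local_latency = task.cycles / device_hz
    edge_latency = task.input_bits / uplink_bps + task.cycles / edge_hz
    return "edge" if edge_latency < local_latency else "local"

# Example: a 0.5-Gcycle rendering task with 2 Mbit of input data.
task = Task(cycles=0.5e9, input_bits=2e6)
print(offload_decision(task, device_hz=1e9, edge_hz=10e9, uplink_bps=100e6))
```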
37 pages, 862 KB  
Review
Mathematical Modeling Techniques in Virtual Reality Technologies: An Integrated Review of Physical Simulation, Spatial Analysis, and Interface Implementation
by Junhyeok Lee, Yong-Hyuk Kim and Kang Hoon Lee
Symmetry 2026, 18(2), 255; https://doi.org/10.3390/sym18020255 - 30 Jan 2026
Viewed by 69
Abstract
Virtual reality (VR) has emerged as a complex technological domain that demands high levels of realism and interactivity. At the core of this immersive experience lies a broad spectrum of mathematical modeling techniques. This survey explores how mathematical foundations support and enhance key VR components, including physical simulations, 3D spatial analysis, rendering pipelines, and user interactions. We review differential equations and numerical integration methods (e.g., Euler, Verlet, Runge–Kutta (RK4)) used to simulate dynamic environments, as well as geometric transformations and coordinate systems that enable seamless motion and viewpoint control. The paper also examines the mathematical underpinnings of real-time rendering processes and interaction models involving collision detection and feedback prediction. In addition, recent developments such as physics-informed neural networks, differentiable rendering, and neural scene representations are presented as emerging trends bridging classical mathematics and data-driven approaches. By organizing these elements into a coherent mathematical framework, this work aims to provide researchers and developers with a comprehensive reference for applying mathematical techniques in VR systems. The paper concludes by outlining the open challenges in balancing accuracy and performance and proposes future directions for integrating advanced mathematics into next-generation VR experiences. Full article
(This article belongs to the Special Issue Mathematics: Feature Papers 2025)
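The survey treats Euler, Verlet, and RK4 at the level of equations. As a small worked illustration (not taken from the paper), the sketch below steps a mass-spring system with velocity Verlet at a 90 Hz frame rate, a common VR refresh interval; the spring constant and initial state are arbitrary.

```python
import numpy as np

def velocity_verlet(pos, vel, accel_fn, dt, steps):
    """Velocity-Verlet integration, a frequent choice for real-time VR physics
    because it is symplectic (energy stays bounded) at fixed frame-rate steps."""
    trajectory = [pos.copy()]
    acc = accel_fn(pos)
    for _ in range(steps):
        pos = pos + vel * dt + 0.5 * acc * dt**2
        new_acc = accel_fn(pos)
        vel = vel + 0.5 * (acc + new_acc) * dt
        acc = new_acc
        trajectory.append(pos.copy())
    return np.array(trajectory)

# Illustrative example: a unit mass on a spring (a = -k x), stepped at 90 Hz
# for 10 simulated seconds.
k = 10.0
traj = velocity_verlet(np.array([1.0, 0.0, 0.0]),
                       np.array([0.0, 0.0, 0.0]),
                       lambda x: -k * x,
                       dt=1.0 / 90.0,
                       steps=900)
print(traj[-1])
```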
20 pages, 7325 KB  
Article
FingerType: One-Handed Thumb-to-Finger Text Input Using 3D Hand Tracking
by Nuo Jia, Minghui Sun, Yan Li, Yang Tian and Tao Sun
Sensors 2026, 26(3), 897; https://doi.org/10.3390/s26030897 - 29 Jan 2026
Viewed by 168
Abstract
We present FingerType, a one-handed text input method based on thumb-to-finger gestures. FingerType detects tap events from 3D hand data using a Temporal Convolutional Network (TCN) and decodes the tap sequence into words with an n-gram language model. To inform the design, we examined thumb-to-finger interactions and collected comfort ratings of finger regions. We used these results to design an improved T9-style key layout. Our system runs at 72 frames per second and reaches 94.97% accuracy for tap detection. We conducted a six-block user study with 24 participants and compared FingerType with controller input and touch input. Entry speed increased from 5.88 WPM in the first practice block to 10.63 WPM in the final block. FingerType also supported more eyes-free typing: attention on the display panel within ±15° of head-gaze was 84.41%, higher than touch input (69.47%). Finally, we report error patterns and WPM learning curves, and a model-based analysis suggests improving gesture recognition accuracy could further increase speed and narrow the gap to traditional VR input methods. Full article
(This article belongs to the Special Issue Sensing Technology to Measure Human-Computer Interactions)
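The abstract does not reproduce the improved key layout or the n-gram decoder, so the sketch below only illustrates the general T9-style principle: several words share one tap sequence, and a language model ranks the candidates. The key groupings, toy lexicon, and unigram frequencies are all hypothetical; a real decoder would score candidates with an n-gram model trained on a large corpus.

```python
from collections import defaultdict

# Hypothetical T9-style grouping of letters onto finger regions; the paper's
# actual layout differs and is not reproduced here.
KEY_GROUPS = {
    1: "abc", 2: "def", 3: "ghi",
    4: "jkl", 5: "mno", 6: "pqrs",
    7: "tuv", 8: "wxyz",
}
LETTER_TO_KEY = {ch: key for key, letters in KEY_GROUPS.items() for ch in letters}

# Toy unigram lexicon: word -> relative frequency.
LEXICON = {"good": 0.6, "home": 0.3, "gone": 0.1, "hood": 0.05}

def word_to_keys(word: str) -> tuple:
    return tuple(LETTER_TO_KEY[ch] for ch in word)

# Index the lexicon by key sequence so a tapped sequence maps to candidates.
CANDIDATES = defaultdict(list)
for word, freq in LEXICON.items():
    CANDIDATES[word_to_keys(word)].append((word, freq))

def decode(tap_sequence: tuple) -> list:
    """Return candidate words for a tap sequence, most probable first."""
    return sorted(CANDIDATES.get(tap_sequence, []), key=lambda wf: -wf[1])

# 'good', 'home', 'gone' and 'hood' all share the key sequence (3, 5, 5, 2)
# under this toy layout, so the language model resolves the ambiguity.
print(decode((3, 5, 5, 2)))
```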
26 pages, 48079 KB  
Article
Teleoperation of Dual-Arm Manipulators via VR Interfaces: A Framework Integrating Simulation and Real-World Control
by Alejandro Torrejón, Sergio Eslava, Jorge Calderón, Pedro Núñez and Pablo Bustos
Electronics 2026, 15(3), 572; https://doi.org/10.3390/electronics15030572 - 28 Jan 2026
Viewed by 107
Abstract
We present a virtual reality (VR) framework for controlling dual-arm robotic manipulators through immersive interfaces, integrating both simulated and real-world platforms. The system combines the Webots robotics simulator with Unreal Engine 5.6.1 to provide real-time visualization and interaction, enabling users to manipulate each arm’s tool point via VR controllers with natural depth perception and motion tracking. The same control interface is seamlessly extended to a physical dual-arm robot, enabling teleoperation within the same VR environment. Our architecture supports real-time bidirectional communication between the VR layer and both the simulator and hardware, enabling responsive control and feedback. We describe the system design and performance evaluation in both domains, demonstrating the viability of immersive VR as a unified interface for simulation and physical robot control. Full article
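The abstract describes real-time bidirectional communication between the VR layer and the simulator or robot but does not name a transport or message format. Purely as a sketch of that pattern, the example below streams dual-arm tool-point targets and polls feedback over UDP; the endpoint address, ports, and JSON schema are invented for illustration and are not the framework's actual interface.

```python
import json
import socket

# Hypothetical robot/simulator endpoint and local feedback port.
ROBOT_ADDR = ("192.168.1.50", 9000)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9001))   # where feedback messages arrive
sock.settimeout(0.005)         # keep the control loop near real-time

def send_tool_targets(left_pose, right_pose):
    """Send desired tool-point poses of both arms (position + quaternion)."""
    msg = {"left": left_pose, "right": right_pose}
    sock.sendto(json.dumps(msg).encode(), ROBOT_ADDR)

def poll_feedback():
    """Non-blocking read of joint states / feedback from the robot side."""
    try:
        data, _ = sock.recvfrom(4096)
        return json.loads(data)
    except socket.timeout:
        return None

# One control tick: push the latest VR controller poses, pull whatever
# feedback has arrived since the last tick.
send_tool_targets([0.3, 0.1, 0.5, 0, 0, 0, 1], [0.3, -0.1, 0.5, 0, 0, 0, 1])
print(poll_feedback())
```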
20 pages, 4222 KB  
Article
Development and Usability Evaluation of a Leap Motion-Based Controller-Free VR Training System for Inferior Alveolar Nerve Block
by Jun-Seong Kim, Kun-Woo Kim, Hyo-Joon Kim and Seong-Yong Moon
Appl. Sci. 2026, 16(3), 1325; https://doi.org/10.3390/app16031325 - 28 Jan 2026
Viewed by 97
Abstract
This study developed a virtual reality (VR) simulator for training the inferior alveolar nerve block (IANB) procedure using Leap Motion-based hand tracking and the Unity engine, and evaluated its interaction performance, task-level outcomes within the simulator, and usability. Built on a 3D anatomical model, the system provides a pre-clinical practice environment for realistic syringe manipulation and visually guided needle insertion, enabling repeated rehearsal of the procedural workflow. Interaction stability was assessed using participant-level gesture recognition rates and input latency. Usability was evaluated via a questionnaire addressing ease of use, cognitive load, and perceived educational usefulness. The results indicated participant-level mean gesture recognition rates of 88.8–90.5% and mean response latencies of approximately 64–66 ms. In usability testing (n = 40), the item related to perceived procedural skill improvement received the highest score (4.25/5.0). Because this study did not include controlled comparisons with conventional training or objective measures of clinical competency transfer, the findings should be interpreted as preliminary evidence of technical feasibility and learner-perceived usefulness within a simulated setting. Controlled comparative studies using objective learning outcomes are warranted. Full article
12 pages, 474 KB  
Article
Toward Generalized Emotion Recognition in VR by Bridging Natural and Acted Facial Expressions
by Rahat Rizvi Rahman, Hee Yun Choi, Joonghyo Lim, Go Eun Lee, Seungmoo Lee, Chungyean Cho and Kostadin Damevski
Sensors 2026, 26(3), 845; https://doi.org/10.3390/s26030845 - 28 Jan 2026
Viewed by 129
Abstract
Recognizing emotions accurately in virtual reality (VR) enables adaptive and personalized experiences across gaming, therapy, and other domains. However, most existing facial emotion recognition models rely on acted expressions collected under controlled settings, which differ substantially from the spontaneous and subtle emotions that arise during real VR experiences. To address this challenge, the objective of this study is to develop and evaluate generalizable emotion recognition models that jointly learn from both acted and natural facial expressions in virtual reality. We integrate two complementary datasets collected using the Meta Quest Pro headset, one capturing natural emotional reactions and another containing acted expressions. We evaluate multiple model architectures, including convolutional and domain-adversarial networks, and a mixture-of-experts model that separates natural and acted expressions. Our experiments show that models trained jointly on acted and natural data achieve stronger cross-domain generalization. In particular, the domain-adversarial and mixture-of-experts configurations yield the highest accuracy on natural and mixed-emotion evaluations. Analysis of facial action units (AUs) reveals that natural and acted emotions rely on partially distinct AU patterns, while generalizable models learn a shared representation that integrates salient AUs from both domains. These findings demonstrate that bridging acted and natural expression domains can enable more accurate and robust VR emotion recognition systems. Full article
(This article belongs to the Section Wearables)
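The abstract names domain-adversarial training among the evaluated architectures but gives no layer-level detail. The PyTorch sketch below shows a generic gradient-reversal setup over facial action-unit features, with an emotion head and a natural-versus-acted domain head; the feature dimension, layer sizes, and class counts are assumptions, not the authors' network.

```python
import torch
from torch import nn

class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; multiplies gradients by -lambda on the
    backward pass, pushing the encoder toward domain-invariant features."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class DomainAdversarialModel(nn.Module):
    def __init__(self, n_features=32, n_emotions=6, lam=1.0):
        super().__init__()
        self.lam = lam
        # Shared encoder over action-unit features (illustrative sizes).
        self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
        self.emotion_head = nn.Linear(64, n_emotions)   # e.g. 6 basic emotions
        self.domain_head = nn.Linear(64, 2)             # natural vs. acted

    def forward(self, x):
        z = self.encoder(x)
        emotion_logits = self.emotion_head(z)
        # The domain classifier receives reversed gradients.
        domain_logits = self.domain_head(GradientReversal.apply(z, self.lam))
        return emotion_logits, domain_logits

# Illustrative forward pass on a batch of 8 action-unit feature vectors.
model = DomainAdversarialModel()
emotion_logits, domain_logits = model(torch.randn(8, 32))
print(emotion_logits.shape, domain_logits.shape)
```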
32 pages, 3859 KB  
Systematic Review
Digital Twin (DT) and Extended Reality (XR) in the Construction Industry: A Systematic Literature Review
by Ina Sthapit and Svetlana Olbina
Buildings 2026, 16(3), 517; https://doi.org/10.3390/buildings16030517 - 27 Jan 2026
Viewed by 248
Abstract
The construction industry is undergoing a rapid digital transformation, with Digital Twins (DTs) and Extended Reality (XR) as two emerging technologies with great potential. Despite their potential, there are several challenges regarding DT and XR use in construction projects, including implementation barriers, interoperability issues, system complexity, and a lack of standardized frameworks. This study presents a systematic literature review (SLR) of DT and XR technologies—including Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR)—in the construction industry. The study analyzes 52 peer-reviewed articles identified using the Web of Science database to explore thematic findings. Key findings highlight DT and XR applications for safety training, real-time monitoring, predictive maintenance, lifecycle management, renovation or demolition, scenario risk assessment, and education. The SLR also identifies core enabling technologies such as Building Information Modeling (BIM), Internet of Things (IoT), Big Data, and XR devices, while uncovering persistent challenges including interoperability, high implementation costs, and lack of standardization. The study highlights how integrating DTs and XR can improve construction by making it smarter, safer, and more efficient. It also suggests areas for future research to overcome current challenges and help increase the use of these technologies. The primary contribution of this study lies in deepening the understanding of DT and XR technologies by examining them through the lenses of their benefits as well as drivers for and challenges to their adoption. This enhanced understanding provides a foundation for exploring integrated DT and XR applications to advance innovation and efficiency in the construction sector. Full article
35 pages, 2368 KB  
Review
Bridging Light and Immersion: Visible Optical Interfaces for Extended Reality
by Haixuan Xu, Zhaoxu Wang, Jiaqi Sun, Chengkai Zhu and Yi Xia
Photonics 2026, 13(2), 115; https://doi.org/10.3390/photonics13020115 - 27 Jan 2026
Viewed by 249
Abstract
Extended reality (XR), encompassing virtual reality (VR), augmented reality (AR), and mixed reality (MR), is rapidly reshaping the landscape of digital interaction and immersive communication. As XR evolves toward ultra-realistic, real-time, and interactive experiences, it places unprecedented demands on wireless communication systems in terms of bandwidth, latency, and reliability. Conventional RF-based networks, constrained by limited spectrum and interference, struggle to meet these stringent requirements. In contrast, visible light communication (VLC) offers a compelling alternative by exploiting the vast unregulated visible spectrum to deliver high-speed, low-latency, and interference-free data transmission—making it particularly suitable for future XR environments. This paper presents a comprehensive survey on VLC-enabled XR communication systems. We first analyze XR technologies and their diverse quality-of-service (QoS) and quality-of-experience (QoE) requirements, identifying the unique challenges posed to existing wireless infrastructures. Building upon this, we explore the fundamentals, characteristics, and opportunities of VLC systems in supporting immersive XR applications. Furthermore, we elaborate on the key enabling techniques that empower VLC to fulfill XR’s stringent demands, including high-speed transmission technologies, hybrid VLC-RF architectures, dynamic beam control, and visible light sensing capabilities. Finally, we discuss future research directions, emphasizing AI-assisted network intelligence, cross-layer optimization, and collaborative multi-element transmission frameworks as vital enablers for the next-generation VLC–XR ecosystem. Full article
(This article belongs to the Special Issue Advanced Optical Fiber Communication)
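To make the bandwidth demands concrete, here is a back-of-the-envelope estimate of what a dual-eye 4K, 90 fps XR stream would require before and after compression; the resolution, frame rate, and compression ratio are illustrative figures chosen for the example, not values from the survey.

```python
# Illustrative XR bandwidth estimate: dual-eye 4K video at 90 fps,
# 24 bits per pixel, with ~100:1 video compression.
width, height = 3840, 2160
bits_per_pixel = 24
fps = 90
eyes = 2
compression_ratio = 100

raw_bps = width * height * bits_per_pixel * fps * eyes
compressed_bps = raw_bps / compression_ratio
print(f"raw: {raw_bps / 1e9:.1f} Gbit/s, compressed: {compressed_bps / 1e6:.0f} Mbit/s")
```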
16 pages, 1594 KB  
Article
Virtual Reality-Based Dichoptic Therapy in Acquired Brain Injury: Functional and Symptom Outcomes
by Carla Otero-Currás, Francisco J. Povedano-Montero, Ricardo Bernárdez-Vilaboa, Pilar Rojas, Rut González-Jiménez, Gema Martínez-Florentín and Juan E. Cedrún-Sánchez
J. Clin. Med. 2026, 15(3), 1004; https://doi.org/10.3390/jcm15031004 - 27 Jan 2026
Viewed by 179
Abstract
Background: Acquired brain injury (ABI) often disrupts binocular vision, causing deviations on the cover test and reduced stereopsis that impair functional visual performance. This study investigated the effects of a dichoptic vision therapy protocol—based on an immersive virtual reality (VR) system—on visual field parameters, oculomotor reaction times, and self-reported visual symptoms in adults with ABI. Methods: In a controlled parallel-group design, adult ABI patients (median age 51 years) were assigned to an experimental group (dichoptic VR therapy) or a control group. Six sessions of visual therapy were performed. Primary outcomes included perimetric visual field indices and oculomotor reaction times; the secondary outcome was the Brain Injury Vision Symptom Survey (BIVSS) score. Etiology (stroke vs. traumatic brain injury) was recorded. Results: No statistically significant improvements were found in perimetric visual field indices (p > 0.05), except for a slight gain in the top-right quadrant in the experimental group. Reaction times did not differ significantly between groups. However, the experimental group reported a greater reduction in visual symptoms as measured by the BIVSS. Patients with traumatic brain injury exhibited better functional improvement, particularly in the top-left quadrant (p = 0.04). Conclusions: Dichoptic VR-based therapy did not restore perimetric field losses in ABI patients but reduced visual symptoms and may enhance functional adaptation of residual vision rather than structural recovery. The therapeutic response varied by etiology, favoring traumatic brain injury. Larger, longer trials integrating objective and subjective measures, including neuroimaging, are warranted. Full article
(This article belongs to the Special Issue Traumatic Brain Injury: Clinical Diagnosis and Management)
26 pages, 725 KB  
Article
Unlocking GAI in Universities: Leadership-Driven Corporate Social Responsibility for Digital Sustainability
by Mostafa Aboulnour Salem and Zeyad Aly Khalil
Adm. Sci. 2026, 16(2), 58; https://doi.org/10.3390/admsci16020058 - 23 Jan 2026
Viewed by 271
Abstract
Corporate Social Responsibility (CSR) has evolved into a strategic governance framework through which organisations address environmental sustainability, stakeholder expectations, and long-term institutional viability. In knowledge-intensive organisations such as universities, Green Artificial Intelligence (GAI) is increasingly recognised as an internal CSR agenda. GAI can reduce digital and energy-related environmental impacts while enhancing educational and operational performance. This study examines how higher education leaders, as organisational decision-makers, form intentions to adopt GAI within institutional CSR and digital sustainability strategies. It focuses specifically on leadership intentions to implement key GAI practices, including Smart Energy Management Systems, Energy-Efficient Machine Learning models, Virtual and Remote Laboratories, and AI-powered sustainability dashboards. Grounded in the Unified Theory of Acceptance and Use of Technology (UTAUT), the study investigates how performance expectancy, effort expectancy, social influence, and facilitating conditions shape behavioural intentions to adopt GAI. Survey data were collected from higher education leaders across Saudi universities, representing diverse national and cultural backgrounds within a shared institutional context. The findings indicate that facilitating conditions, performance expectancy, and social influence significantly influence adoption intentions, whereas effort expectancy does not. Gender and cultural context also moderate several adoption pathways. Generally, the results demonstrate that adopting GAI in universities constitutes a governance-level CSR decision rather than a purely technical choice. This study advances CSR and digital sustainability research by positioning GAI as a strategic tool for responsible digital transformation and by offering actionable insights for higher education leaders and policymakers. Full article
19 pages, 755 KB  
Article
Digital Intelligence and the Inheritance of Traditional Culture: A Glocalized Model of Intelligent Heritage in Huangyan, China
by Jianxiong Dai, Xiaochun Fan and Louis D. Zhang
Sustainability 2026, 18(2), 1062; https://doi.org/10.3390/su18021062 - 20 Jan 2026
Viewed by 418
Abstract
In the era of digital intelligence, cultural heritage is undergoing a profound transformation. This study investigates how digital technologies facilitate the inheritance and innovation of traditional culture in China, focusing on the case of Huangyan’s Song Rhyme Culture in Zhejiang Province. Drawing on the framework of “glocalized intelligent heritage,” the research explores how global technological systems interact with local cultural practices to produce new forms of cultural continuity. Methodologically, the study employs a qualitative case study approach supported by empirical data. It combines policy analysis, semi-structured interviews with twenty-six stakeholders, field observations, and quantitative indicators such as visitor statistics, online engagement, and project investment. This mixed design provides both contextual depth and measurable evidence of digital transformation. The findings show that digital intelligence has reshaped cultural representation, platform-based public engagement, and local sustainability. In Huangyan, technologies such as AI-based monitoring, 3D modeling, and VR exhibitions have transformed heritage display into an interactive and educational experience. Digital media have enhanced public engagement, with more than 1.2 million virtual visits and over 20 million online interactions recorded in 2024. At the same time, the project has stimulated cultural tourism and creative industries, contributing to a 28.6% increase in cultural revenue between 2020 and 2024. The study concludes that digital intelligence can function as a cultural bridge by strengthening heritage mediation, widening access, and enabling platform- and institution-based participation, while noting that embodied intergenerational cultural transmission lies beyond the direct measurement of this research design. Full article
(This article belongs to the Section Tourism, Culture, and Heritage)