Virtual Worlds, Volume 5, Issue 2 (June 2026) – 5 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click its "PDF Full-text" link and open it with the free Adobe Reader.
28 pages, 4046 KB  
Systematic Review
From Pre-Rendered to Autonomous: A Systematic Review of AI-Driven Character Animation and Embodiment in Virtual Reality
by Anastasios Theodoropoulos
Virtual Worlds 2026, 5(2), 20; https://doi.org/10.3390/virtualworlds5020020 - 29 Apr 2026
Viewed by 406
Abstract
In recent years, the generation and animation of avatars in virtual reality (VR) have undergone a definitive paradigm shift, transitioning from pre-rendered, manually rigged meshes to autonomous, AI-driven digital entities. While individual algorithms have been extensively studied, there is a critical lack of comprehensive synthesis regarding how these generative models impact the broader sociotechnical ecosystem of Spatial Computing. To address this gap, this systematic literature review, conducted in accordance with PRISMA guidelines, analyzed 48 primary studies to evaluate the intersection of Generative AI, hardware architecture, human psychology, and digital ethics. The synthesis reveals a deeply interdependent ecosystem. While advanced neural rendering and diffusion models (RQ1) successfully bypass traditional 3D authoring bottlenecks, their pursuit of absolute visual fidelity severely antagonizes the thermal and latency constraints of standalone mobile hardware (RQ2). The literature demonstrates that failing to mitigate these bottlenecks through hardware–software co-design (e.g., specialized ASICs, gaze-contingent foveation) inevitably shatters the user’s sensorimotor loop, collapsing the sense of agency and triggering the Kinematic Uncanny Valley (RQ3). Furthermore, as these hyper-realistic avatars achieve kinematic autonomy, they introduce unprecedented sociotechnical vulnerabilities regarding spatial privacy, dataset bias, and post-mortem digital identity (RQ4). Ultimately, this review concludes that realizing a compelling and inclusive AI-driven Metaverse is no longer an isolated computer graphics challenge; it demands a rigorous, interdisciplinary paradigm shift where algorithms, silicon architectures, and cognitive psychology are inextricably co-designed under a foundational framework of digital ethics.
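The abstract names gaze-contingent foveation as one hardware–software mitigation for the fidelity-versus-latency tension. A minimal sketch of the underlying idea — coarsening the shading rate with angular distance from the gaze point — is shown below; the parameter names and falloff values are illustrative assumptions, not taken from the reviewed studies.

```python
import math

# Hypothetical parameters (illustrative only): full-rate shading inside the
# fovea, with a linear acuity falloff outside it.
FOVEA_RADIUS_DEG = 5.0   # assumed full-resolution region around the gaze point
FALLOFF_PER_DEG = 0.05   # assumed relative-acuity decay per degree outside it

def eccentricity(gaze, pixel_dir) -> float:
    """Angle in degrees between the gaze direction and a pixel's view ray,
    both given as unit 3-vectors."""
    dot = sum(g * p for g, p in zip(gaze, pixel_dir))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

def shading_rate(eccentricity_deg: float) -> int:
    """Coarse shading rate for a pixel: 1 = full, 2 = half, 4 = quarter rate."""
    if eccentricity_deg <= FOVEA_RADIUS_DEG:
        return 1
    # Relative acuity drops with eccentricity; shade more coarsely as it falls.
    acuity = max(0.0, 1.0 - FALLOFF_PER_DEG * (eccentricity_deg - FOVEA_RADIUS_DEG))
    return 2 if acuity > 0.5 else 4
```

In a real renderer this decision is made per tile by the GPU (e.g., via variable-rate shading), with the eye tracker supplying the gaze vector each frame; the sketch only shows the per-pixel rate selection logic.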
23 pages, 1052 KB  
Article
Technology Analysis of Extended Reality Using Machine Learning and Statistical Models
by Sunghae Jun
Virtual Worlds 2026, 5(2), 19; https://doi.org/10.3390/virtualworlds5020019 - 20 Apr 2026
Viewed by 231
Abstract
Extended reality (XR), encompassing augmented reality (AR), virtual reality (VR), and mixed reality (MR), is a key enabling technology for virtual worlds, and XR-related patents continue to grow rapidly. However, patent-based XR technology analysis faces a fundamental challenge: document–keyword matrices (DKMs) built from patent titles and abstracts are typically high-dimensional, sparse, and often exhibit excess zeros, which can distort inference when conventional text mining pipelines are applied without a generative count perspective. In this study, we propose a statistically grounded XR technology analysis framework that combines likelihood-based count modeling with interpretable structure mining to map XR sub-technologies from a patent DKM. Using an XR patent–keyword matrix, we fit Poisson regression (PR), negative binomial regression (NBR), and zero-inflated negative binomial regression (ZINBR) models via maximum likelihood estimation (MLE), controlling for document-length effects. Model selection by the Akaike information criterion (AIC) consistently favored NBR for both target keywords, indicating substantial overdispersion in XR patent counts. We interpret exponentiated coefficients as incidence rate ratios (IRRs) and construct a technology relatedness network from significant IRR edges, revealing a dual-axis XR structure: reality is anchored in an experience-and-content axis (e.g., virtual, augment) associated with AR or VR, whereas extend is embedded in a structure-and-integration axis (e.g., surface, edge, layer, and connectivity-related terms). To demonstrate how the proposed method applies to a real domain, we retrieved XR patent documents and analyzed them with this framework.
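The pipeline the abstract describes — a count model with a document-length offset fitted by MLE, AIC for model comparison, and exponentiated coefficients read as IRRs — can be sketched on toy data. The example below shows only the Poisson case (fitted by Newton/IRLS) to stay self-contained; the NBR and ZINBR variants the paper actually selects, and all data and parameter values here, are illustrative assumptions.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for one column of a patent document-keyword matrix:
# y = counts of a target keyword per document, x = presence of a related
# keyword, doclen = document length used as an exposure offset.
n = 2000
x = rng.integers(0, 2, n)
doclen = rng.integers(50, 200, n).astype(float)
true_beta = np.array([-4.0, 0.7])           # log baseline rate, log IRR
X = np.column_stack([np.ones(n), x])
y = rng.poisson(np.exp(np.log(doclen) + X @ true_beta))

def fit_poisson(X, y, offset, iters=25):
    """Poisson regression via Newton/IRLS with a log-exposure offset."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(offset + X @ beta)
        grad = X.T @ (y - mu)               # score of the Poisson log-likelihood
        hess = X.T @ (X * mu[:, None])      # Fisher information (variance = mean)
        beta += np.linalg.solve(hess, grad)
    return beta

beta = fit_poisson(X, y, np.log(doclen))
mu = np.exp(np.log(doclen) + X @ beta)
loglik = np.sum(y * np.log(mu) - mu - np.array([math.lgamma(v + 1) for v in y]))
aic = 2 * len(beta) - 2 * loglik            # compare against NBR/ZINBR fits
irr = np.exp(beta)                          # incidence rate ratios
```

Overdispersion (variance exceeding the mean) is what drives the paper's AIC selection toward NBR; in that case the same offset-and-IRR interpretation carries over, with an extra dispersion parameter in the likelihood.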

34 pages, 2540 KB  
Review
Designing Extended Intelligence: A Taxonomy of Psychobiological Effects of XR–AI Systems for Human Capability Augmentation
by Jolanda Tromp, Ilias El Makrini, Mario Trógolo, Miguel A. Muñoz, Maria B. Sánchez-Barrerra, Jose Pech Pacheco and Cándida Castro
Virtual Worlds 2026, 5(2), 18; https://doi.org/10.3390/virtualworlds5020018 - 18 Apr 2026
Viewed by 467
Abstract
Extended Reality (XR) and Artificial Intelligence (AI) are increasingly converging within cyber–physical infrastructures, including digital twins, the Spatial Web, and smart-city systems. These environments require new frameworks for understanding how human performance emerges through sustained interaction with immersive interfaces and adaptive computational agents. This paper introduces the TAXI–XI-CAP framework, a two-layer model that links psychobiological mechanisms of XR–AI interaction to higher-level, experimentally testable capability constructs. The TAXI layer defines 42 mechanisms spanning perception, cognition, physiology, sensorimotor control, and social coordination, while XI-CAP organizes these into capability patterns such as remote dexterity, distributed cognition, and adaptive workload regulation. Derived through a theory-guided synthesis across XR, neuroscience, and human–automation interaction, the framework models performance as emerging from interacting mechanisms under real-world constraints. A validation-oriented research agenda is proposed, emphasizing mechanism-level measurement, capability-level evaluation, and longitudinal testing. The TAXI–XI-CAP framework provides a structured basis for hypothesis generation, comparative analysis, and empirical validation of XR–AI systems, supporting the development of reliable, scalable, and human-centered Extended Intelligence infrastructures.

16 pages, 303 KB  
Article
Virtual Reality and the Sense of Belonging Among Distance Learners: A Study on Peer Relationships in Higher Education
by David Košatka, Alžběta Šašinková, Markéta Košatková, Tomáš Hunčík and Čeněk Šašinka
Virtual Worlds 2026, 5(2), 17; https://doi.org/10.3390/virtualworlds5020017 - 9 Apr 2026
Viewed by 408
Abstract
Distance learners in higher education are often assumed to face limited peer interaction, potentially weakening their sense of belonging. This study examines peer relationships and belonging among students in distance and blended university programs, with attention to the role of virtual reality (VR) within digitally mediated learning environments. Immersive VR teaching is included in the curriculum for distance learning students in the studied programs. Using a mixed-methods design, survey data and open-ended responses were collected from 17 students in Information Studies and Information Service Design. An adapted Classroom Community Scale was supplemented with items addressing the perceived contribution of different communication technologies. Contrary to expectations, fully distance learners did not report weaker agreement with statements reflecting belonging than blended students; on several items, they expressed stronger agreement, particularly regarding perceived peer support and learning opportunities. Results indicate that conventional 2D communication tools, particularly chats and video calls, are central to sustaining peer relationships. VR was not perceived as essential but described by some students as an added value supporting shared experience and group cohesion. Overall, belonging emerges as a socio-technical achievement shaped by communication practices rather than physical proximity.
26 pages, 1399 KB  
Article
Immersive Virtual Reality Gameplay Alters Embodiment, Time Perception, and States of Consciousness
by Nicola De Pisapia, Andrea Polo and Andrea Signorelli
Virtual Worlds 2026, 5(2), 16; https://doi.org/10.3390/virtualworlds5020016 - 3 Apr 2026
Viewed by 716
Abstract
Immersive virtual environments are increasingly investigated as tools capable of modulating conscious experience, yet the specific contribution of graded immersion to altered states of consciousness (ASC), time perception, and cognition remains unclear. The present study examined how different levels of immersion during videogame play influence subjective experience and post-experience cognitive performance. Seventy-two participants played an identical 35 min segment of the videogame Half-Life: Alyx under one of three conditions: desktop PC (low immersion), head-mounted virtual reality (VR; medium immersion), or VR combined with full-body locomotion via an omnidirectional treadmill (high immersion). Following gameplay, participants completed validated measures of presence (IPQ), immersion (IEQ), ASC (5D-ASC), retrospective time estimation, and cognitive flexibility (Stroop task and Alternative Uses Test). Presence was selectively enhanced in VR relative to desktop play, whereas immersion was highest in the VR plus treadmill condition. Specific ASC dimensions related to embodiment and self-experience were selectively elevated in immersive conditions, with the most robust effects observed for disembodiment and positive depersonalization. Retrospective time-estimation accuracy was reduced in the highest immersion condition, indicating increased temporal distortion. Immersive gameplay did not produce widespread changes in executive function. Overall, the findings indicate that immersive virtual reality gameplay selectively alters embodiment-related aspects of conscious experience and retrospective time perception, without broadly changing executive function.
