Search Results (1,497)

Search Parameters:
Keywords = virtual reality interaction

19 pages, 3205 KB  
Article
Human-Centered Collaborative Robotic Workcell Facilitating Shared Autonomy for Disability-Inclusive Manufacturing
by YongKuk Kim, DaYoung Kim, DoKyung Hwang, Juhyun Kim, Eui-Jung Jung and Min-Gyu Kim
Electronics 2026, 15(2), 461; https://doi.org/10.3390/electronics15020461 - 21 Jan 2026
Abstract
Workers with upper-limb disabilities face difficulties in performing manufacturing tasks requiring fine manipulation, stable handling, and multistep procedural understanding. To address these limitations, this paper presents an integrated collaborative workcell designed to support disability-inclusive manufacturing. The system comprises four core modules: a JSON-based collaboration database that structures manufacturing processes into robot–human cooperative units; a projection-based augmented reality (AR) interface that provides spatially aligned task guidance and virtual interaction elements; a multimodal interaction channel combining gesture tracking with speech and language-based communication; and a personalization mechanism that enables users to adjust robot behaviors—such as delivery poses and user-driven task role switching—which are then stored for future operations. The system is implemented using ROS-style modular nodes with an external WPF-based projection module and evaluated through scenario-based experiments involving workers with upper-limb impairments. The experimental scenarios illustrate that the proposed workcell is capable of supporting step transitions, part handover, contextual feedback, and user-preference adaptation within a unified system framework, suggesting its feasibility as an integrated foundation for disability-inclusive human–robot collaboration in manufacturing environments. Full article
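
The "JSON-based collaboration database" described above lends itself to a concrete illustration. The Python sketch below shows one plausible way a manufacturing process could be decomposed into robot–human cooperative units, with a user-adjusted delivery pose stored for reuse; every key and value here is an illustrative assumption, not the authors' actual schema.

```python
import json

# Hypothetical sketch of a collaboration database entry that splits a
# manufacturing process into robot-human cooperative units. Field names
# are illustrative assumptions, not the paper's schema.
process = {
    "process_id": "assembly_demo",
    "units": [
        {
            "step": 1,
            "robot_action": "pick_and_deliver",
            "part": "housing",
            "delivery_pose": {"x": 0.42, "y": -0.10, "z": 0.25},  # user-adjustable
            "human_action": "insert_fastener",
            "guidance": "projected_arrow",      # AR cue projected on the workbench
        },
        {
            "step": 2,
            "robot_action": "hold_steady",
            "part": "housing",
            "human_action": "tighten_screws",
            "guidance": "projected_checklist",
        },
    ],
}

# Personalization: persist an adjusted delivery pose for future sessions.
process["units"][0]["delivery_pose"]["z"] = 0.30
print(json.dumps(process, indent=2))
```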

27 pages, 7277 KB  
Article
Designing Safer Pedestrian Interactions with Autonomous Vehicles: A Virtual Reality Study of External Human-Machine Interfaces in Road-Crossing Scenarios
by Raul Almeida, Frederico Pereira, Dário Machado, Emanuel Sousa, Susana Faria and Elisabete Freitas
Appl. Sci. 2026, 16(2), 1080; https://doi.org/10.3390/app16021080 - 21 Jan 2026
Abstract
As autonomous vehicles (AVs) become part of urban environments, pedestrian safety and interactions with these vehicles are critical to creating sustainable, walkable cities. Intuitive pedestrian-vehicle communication is essential not only for reducing crash risk but also for supporting policies that promote active mobility and efficient traffic flow. This study investigates pedestrian crossing behavior in a fully immersive virtual reality environment, building on previous work by the authors conducted in a CAVE-type simulator. Participants crossed between a conventional vehicle and an AV when they perceived it was safe. The analysis examines how external human–machine interfaces (eHMIs) influence crossing decisions, collisions, safety margins, and crossing initiation time (CIT) across different vehicle speeds and traffic gaps. Three hypotheses were tested regarding the effects of eHMIs on CIT, risk-taking behavior, and perceived safety. Results show that eHMIs significantly affect pedestrian decisions: participants delayed crossings when the eHMI indicated non-yielding behavior and initiated crossings earlier when yielding was signaled. Risk-taking behavior increased at higher vehicle speeds and shorter time gaps. Although perceived safety did not increase, behavioral results indicate reliance on visual cues. These findings underscore the importance of standardizing eHMIs to support pedestrian safety and sustainable urban mobility. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
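
The traffic gaps and safety margins manipulated in gap-acceptance studies of this kind follow from simple kinematics. The sketch below gives the common textbook formulation, where the time gap is the vehicle's arrival time at the crossing point and the margin subtracts the pedestrian's lane-clearing time; the thresholds and exact definitions used by the authors are not given here, so treat this as a generic illustration only.

```python
# Generic gap-acceptance quantities; lane width and walking speed are
# assumed typical values, not the study's parameters.

def time_gap(distance_m: float, speed_mps: float) -> float:
    """Time (s) until the approaching vehicle reaches the crossing point."""
    return distance_m / speed_mps

def safety_margin(distance_m, speed_mps, lane_width_m=3.5, walk_speed_mps=1.4):
    """Vehicle arrival time minus the pedestrian's lane-clearing time (s)."""
    return time_gap(distance_m, speed_mps) - lane_width_m / walk_speed_mps

# Example: a car 30 m away at 30 km/h leaves a 3.6 s gap and a ~1.1 s margin.
print(round(time_gap(30.0, 30 / 3.6), 2), round(safety_margin(30.0, 30 / 3.6), 2))
```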

20 pages, 3362 KB  
Article
Design and Evaluation of a Mixed Reality System for Facility Inspection and Maintenance
by Abuzar Haroon, Busra Yucel and Salman Azhar
Buildings 2026, 16(2), 425; https://doi.org/10.3390/buildings16020425 - 20 Jan 2026
Abstract
Emerging technologies are transforming Facilities Management (FM), enabling more efficient and accurate building inspections and maintenance. Mixed Reality (MR), which integrates virtual content into real-world environments, has shown potential for improving operational performance and technician training. This study presents the development and evaluation of an MR-assisted system designed to support facility operations in academic buildings. The system was tested across three case scenarios, namely plumbing, lighting, and fire sprinkler systems, using Microsoft HoloLens®. A mixed-methods approach combined a post-use questionnaire and semi-structured interviews with twelve FM professionals, including technicians, inspectors, and managers. Results indicated that 66.67% of participants found the MR interface highly effective in visualizing systems and guiding maintenance steps. 83.33% agreed that checklist integration enhanced accuracy and learning. Technical challenges, including model drift, latency, and occasional software crashes, were also observed. Overall, the study confirms the feasibility of MR for FM training and inspection, offering a foundation for broader implementation and future research. The findings provide valuable insights into how MR-based visualization and interaction tools can enhance efficiency, learning, and communication in facility operations. Full article
(This article belongs to the Topic Application of Smart Technologies in Buildings)

34 pages, 6013 KB  
Article
Extending Digital Narrative with AI, Games, Chatbots, and XR: How Experimental Creative Practice Yields Research Insights
by Lina Ruth Harder, David Jhave Johnston, Scott Rettberg, Sérgio Galvão Roxo and Haoyuan Tang
Humanities 2026, 15(1), 17; https://doi.org/10.3390/h15010017 - 16 Jan 2026
Viewed by 310
Abstract
The Extended Digital Narrative (XDN) research project explores how experimental creative practice with emerging technologies generates critical insights into algorithmic narrativity—the intersection of human narrative understanding and computational data processing. This article presents five case studies demonstrating that direct engagement with AI and Extended Reality platforms is essential for humanities research on new genres of digital storytelling. Lina Harder’s Hedy Lamar Chatbot examines how generative AI chatbots construct historical personas, revealing biases in training data and platform constraints. Scott Rettberg’s Republicans in Love investigates text-to-image generation as a writing environment for political satire, documenting rapid changes in AI aesthetics and content moderation. David Jhave Johnston’s Messages to Humanity demonstrates how Runway’s Act-One enables solo filmmaking, collapsing traditional production hierarchies. Haoyuan Tang’s video game project reframes LLM integration by prioritizing player actions over dialogue, challenging assumptions about AI’s role in interactive narratives. Sérgio Galvão Roxo’s Her Name Was Gisberta employs Virtual Reality for social education against transphobia, utilizing perspective-taking techniques for empathy development. These projects demonstrate that practice-based research is not merely artistic production but a vital methodology for understanding how AI and XR platforms shape—and are shaped by—human narrative capacities. Full article
(This article belongs to the Special Issue Electronic Literature and Game Narratives)

19 pages, 1791 KB  
Article
School-Based Immersive Virtual Reality Learning to Enhance Pragmatic Language and Social Communication in Children with ASD and SCD
by Phichete Julrode, Kitti Puritat, Pakinee Ariya and Kannikar Intawong
Educ. Sci. 2026, 16(1), 141; https://doi.org/10.3390/educsci16010141 - 16 Jan 2026
Viewed by 103
Abstract
Pragmatic language is a core component of school-based social participation, yet children with Autism Spectrum Disorder (ASD) and Social Communication Disorder (SCD) frequently experience persistent difficulties in using language appropriately across everyday learning contexts. This study investigated the effectiveness of a culturally adapted, school-based immersive Virtual Reality (VR) learning program designed to enhance pragmatic language and social communication skills among Thai primary school children. Eleven participants aged 7–12 years completed a three-week, ten-session VR program that simulated authentic classroom, playground, and canteen interactions aligned with Thai sociocultural norms. Outcomes were measured using the Social Communication Questionnaire (SCQ) and the Pragmatic Behavior Observation Checklist (PBOC). While SCQ scores showed a small, non-significant reduction (p = 0.092), PBOC results demonstrated significant improvements in three foundational pragmatic domains: Initiation and Responsiveness (p = 0.032), Turn-Taking and Conversational Flow (p = 0.037), and Politeness and Register (p = 0.010). Other domains showed no significant changes. These findings suggest that immersive, culturally relevant VR environments can support early gains in core pragmatic language behaviors within educational settings, although broader social communication outcomes may require longer or more intensive learning experiences. Full article

32 pages, 8754 KB  
Review
Plasmonics Meets Metasurfaces: A Vision for Next Generation Planar Optical Systems
by Muhammad A. Butt
Micromachines 2026, 17(1), 119; https://doi.org/10.3390/mi17010119 - 16 Jan 2026
Viewed by 280
Abstract
Plasmonics and metasurfaces (MSs) have emerged as two of the most influential platforms for manipulating light at the nanoscale, each offering complementary strengths that challenge the limits of conventional optical design. Plasmonics enables extreme subwavelength field confinement, ultrafast light–matter interaction, and strong optical nonlinearities, while MSs provide versatile and compact control over phase, amplitude, polarization, and dispersion through planar, nanostructured interfaces. Recent advances in materials, nanofabrication, and device engineering are increasingly enabling these technologies to be combined within unified planar and hybrid optical platforms. This review surveys the physical principles, material strategies, and device architectures that underpin plasmonic, MS, and hybrid plasmonic–dielectric systems, with an emphasis on interface-mediated optical functionality rather than long-range guided-wave propagation. Key developments in modulators, detectors, nanolasers, metalenses, beam steering devices, and programmable optical surfaces are discussed, highlighting how hybrid designs can leverage strong field localization alongside low-loss wavefront control. System-level challenges including optical loss, thermal management, dispersion engineering, and large-area fabrication are critically examined. Looking forward, plasmonic and MS technologies are poised to define a new generation of flat, multifunctional, and programmable optical systems. Applications spanning imaging, sensing, communications, augmented and virtual reality, and optical information processing illustrate the transformative potential of these platforms. By consolidating recent progress and outlining future directions, this review provides a coherent perspective on how plasmonics and MSs are reshaping the design space of next-generation planar optical hardware. Full article
(This article belongs to the Special Issue Photonic and Optoelectronic Devices and Systems, 4th Edition)

12 pages, 3085 KB  
Article
Data-Driven Interactive Lens Control System Based on Dielectric Elastomer
by Hui Zhang, Zhijie Xia, Zhisheng Zhang and Jianxiong Zhu
Technologies 2026, 14(1), 68; https://doi.org/10.3390/technologies14010068 - 16 Jan 2026
Viewed by 133
Abstract
To solve the dynamic analysis and interactive imaging control problems that arise as bionic soft lenses deform, dielectric elastomer (DE) actuators are separated from a convex lens, and data-driven eye-controlled motion technology is investigated. Based on the DE properties, which are consistent with the deformation characteristics of hydrogel electrodes, the motion and deformation of eye-controlled lenses under varying film prestretch, lens size, and driving voltage are studied. The results show that when the driving voltage increases to 7.8 kV, the focal length of a lens with prestretch λ = 4 and diameter d = 1 cm varies between 49.7 mm and 112.5 mm, and the maximum focal-length change can reach 58.9%. For the eye-control design and its experimental verification, a high-voltage DC supply was programmed, and the eye-movement signals controlling the lens were analyzed in MATLAB (R2023b). Eye-controlled interactive real-time motion and tunable imaging of the lens were realized, with a soft-lens response efficiency exceeding 93%. The adaptive lens system developed in this research has the potential to be applied to medical rehabilitation, exploration, augmented reality (AR), and virtual reality (VR) in the future. Full article
(This article belongs to the Special Issue AI Driven Sensors and Their Applications)
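
The link between lens deformation and focal length can be illustrated with the thin-lens approximation. Assuming a plano-convex geometry and a PDMS-like refractive index of about 1.41 (both assumptions; the paper's actual geometry and actuation model are not reproduced), the lensmaker's relation f = R/(n − 1) shows how a modest curvature change sweeps the focal length across the reported range.

```python
# Back-of-the-envelope check: for a thin plano-convex lens, f = R / (n - 1),
# so flattening the surface (larger R) lengthens f. Index and radii below
# are illustrative assumptions.

def focal_length_mm(radius_mm: float, n: float = 1.41) -> float:
    """Thin plano-convex lens: f = R / (n - 1)."""
    return radius_mm / (n - 1.0)

# A curvature change from R = 20.4 mm to R = 46.1 mm would sweep f across
# roughly the 49.7-112.5 mm range reported in the abstract (for n = 1.41).
for r in (20.4, 46.1):
    print(f"R = {r} mm -> f = {focal_length_mm(r):.1f} mm")
```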

14 pages, 5725 KB  
Article
FLIP-IBM: Fluid–Structure Coupling Interaction Based on Immersed Boundary Method Under FLIP Framework
by Changjun Zou and Jia Yu
Modelling 2026, 7(1), 22; https://doi.org/10.3390/modelling7010022 - 16 Jan 2026
Viewed by 91
Abstract
Fluid–structure coupling is a prominent topic in computer graphics and virtual reality. The hybrid FLIP technique combines the benefits of grid-based and particle-based methods; however, achieving fluid–structure coupling interaction within the FLIP framework remains a significant open problem. We propose an immersed boundary approach that handles realistic fluid–structure coupling interaction under the FLIP framework. Benchmark tests demonstrate that, in addition to producing rich fluid–structure coupling results, our technique effectively captures the effects of moving obstacle boundaries on the flow and pressure fields, thereby expanding the application area of the FLIP method. Full article
(This article belongs to the Section Modelling in Engineering Structures)
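
For readers unfamiliar with FLIP, the core mechanism is simple to state: particle velocities are transferred to a grid, forces are applied on the grid, and the change in grid velocity is interpolated back to the particles, which preserves detail that pure grid advection would smooth away. The 1-D Python sketch below illustrates that transfer cycle under gravity; it is a minimal illustration only and does not include the paper's immersed-boundary coupling.

```python
import numpy as np

def p2g(xp, vp, n_cells, dx):
    """Linear particle-to-grid transfer of momentum and mass."""
    mom = np.zeros(n_cells)
    mass = np.zeros(n_cells)
    for x, v in zip(xp, vp):
        i = int(x / dx)
        w = (x / dx) - i                     # linear weight toward cell i+1
        for j, wj in ((i, 1 - w), (i + 1, w)):
            if 0 <= j < n_cells:
                mom[j] += wj * v
                mass[j] += wj
    vel = np.divide(mom, mass, out=np.zeros_like(mom), where=mass > 0)
    return vel, mass

# One FLIP step under gravity on a tiny particle set (illustrative values).
dx, n_cells, dt, g = 0.1, 10, 0.01, -9.8
xp = np.array([0.23, 0.25, 0.61])
vp = np.array([0.0, 0.1, -0.2])

v_old, mass = p2g(xp, vp, n_cells, dx)
v_new = v_old + dt * g                       # grid-side force update
for k, x in enumerate(xp):                   # FLIP: interpolate the *delta* back
    i = int(x / dx); w = (x / dx) - i
    dv = (1 - w) * (v_new[i] - v_old[i]) + w * (v_new[i + 1] - v_old[i + 1])
    vp[k] += dv
print(vp)
```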

20 pages, 4891 KB  
Article
Active Inference Modeling of Socially Shared Cognition in Virtual Reality
by Yoshiko Arima and Mahiro Okada
Sensors 2026, 26(2), 604; https://doi.org/10.3390/s26020604 - 16 Jan 2026
Viewed by 195
Abstract
This study proposes a process model for sharing ambiguous category concepts in virtual reality (VR) using an active inference framework. The model executes a dual-layer Bayesian update after observing both self and partner actions and predicts actions that minimize free energy. To incorporate agreement-seeking with others into active inference, we added disagreement in category judgments as a risk term in the free energy, weighted by gaze synchrony measured using Dynamic Time Warping (DTW), which is assumed to reflect joint attention. To validate the model, an object classification task in VR including ambiguous items was created. The experiment was conducted first under a bot avatar condition, in which ambiguous category judgments were always incorrect, and then under a human–human pair condition. This design allowed verification of the collaborative learning process by which human pairs reached agreement from the same degree of ambiguity. Analysis of experimental data from 14 participants showed that the model achieved high prediction accuracy for observed values as learning progressed. Introducing gaze synchrony weighting (γ0 = 0.5) further improved prediction accuracy, yielding optimal performance. This approach provides a new framework for modeling socially shared cognition using active inference in human–robot interaction contexts. Full article
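
The gaze-synchrony weighting rests on Dynamic Time Warping, which scores how well two time series align after elastic time shifts. A minimal textbook implementation follows; the cost function and normalization the authors used are not specified in the abstract, so this is an assumed standard variant.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic DTW on two 1-D sequences with absolute-difference cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two gaze traces that are similar but time-shifted score lower (more
# synchronous) than an unrelated pair.
t = np.linspace(0, 2 * np.pi, 50)
print(dtw_distance(np.sin(t), np.sin(t - 0.3)))   # small: shifted copy
print(dtw_distance(np.sin(t), np.cos(3 * t)))     # larger: different pattern
```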

39 pages, 2573 KB  
Systematic Review
Enhancing Informal Education Through Augmented Reality: A Systematic Review Focusing on Institutional Informal Learning Places (2018–2025)
by Stephanie Moser, Miriam Lechner, Marina Lazarević and Doris Lewalter
Educ. Sci. 2026, 16(1), 114; https://doi.org/10.3390/educsci16010114 - 13 Jan 2026
Viewed by 376
Abstract
Informal learning in institutional settings plays a vital role in lifelong education by fostering self-directed knowledge acquisition. With the increasing integration of digital media into these environments, augmented reality (AR) has emerged as a particularly promising technology due to its ability to overlay virtual content in real-time and across multiple sensory modalities. This systematic literature review investigates the use of AR in institutional informal learning places (IILPs) from 2018 to 2025, aiming to synthesize findings across the following overall research questions: (1) In which IILP contexts has AR been implemented, and what are the characteristics of the technology? (2) What learning-relevant functions and (3) outcomes are associated with AR in these settings? (4) Which learning theories underpin the design of AR interventions? Following the PRISMA guidelines, empirical studies were identified through comprehensive database searches (Scopus, Web of Science, IEEE Xplore, FIS Bildung) and cross-referencing. Forty-four studies were analyzed via qualitative content analysis. The goal is to provide a descriptive overview of findings, patterns, and relationships. Findings indicate that AR is widely adopted across diverse domains and institutional contexts, primarily through mobile-based AR applications for K–12 learning. Native app development signals growing technological maturity. AR enhances both cognitive and emotional-motivational outcomes, though its potential to support social interaction remains insufficiently investigated. The predominant function of AR is the provision of information. Most of the examined studies are grounded in constructivist or cognitivist learning theories, particularly the Cognitive Theory of Multimedia Learning. Only limited references to emotional-motivational frameworks and minimal references to behaviorist frameworks were found. Full article
(This article belongs to the Special Issue Investigating Informal Learning in the Age of Technology)

15 pages, 1147 KB  
Article
The Effects of Gamified Virtual Reality on Muscle Strength and Physical Function in the Oldest Old—A Pilot Study on Sarcopenia-Related Functional Outcomes
by Żaneta Grzywacz, Justyna Jaśniewicz, Anna Koziarska, Joanna Macierzyńska and Edyta Majorczyk
J. Clin. Med. 2026, 15(2), 621; https://doi.org/10.3390/jcm15020621 - 13 Jan 2026
Viewed by 285
Abstract
Background/Objectives: Sarcopenia is an age-related decline in muscle mass and strength, reducing mobility and functional independence and increasing the risk of falls. Non-pharmacological interventions remain the most effective strategies to prevent or delay its progression, with exercise recognized as the primary approach. Virtual reality (VR)-based training has recently emerged as a promising tool to promote physical activity; however, its application among the oldest-old remains underexplored. This randomized controlled pilot study evaluated the effects of a VR-based intervention using the game “Beat Saber” on muscle strength and selected physical performance indicators related to sarcopenia risk in older adults. Methods: Thirty-eight residents (mean age: 87.2 years) of a long-term care facility were randomly assigned to either a VR group or a control group. The VR group participated in 12 supervised VR-based training sessions of 20 min each, three times per week for four weeks. Handgrip strength, the arm curl test, the 30-s chair stand, a 2-min step-in-place test, and an 8-foot up-and-go test were assessed before and after the intervention. Results: Linear mixed-model analyses revealed significant group-by-time interactions for upper- and lower-limb strength (handgrip, arm curl, chair stand; p < 0.05), favoring the VR group. Agility and endurance (8-foot up-and-go, 2-min step-in-place) showed no significant interactions. In the VR group, 30-s chair stand performance correlated positively with the arm curl and 2-min step-in-place test results, while handgrip strength correlated with arm curl performance. In the control group, 30-s chair stand results correlated strongly with the 8-foot up-and-go and 2-min step-in-place tests, but no significant correlations were found for handgrip strength. Conclusions: The findings indicate short-term functional benefits of VR exercise among the oldest-old. VR-based training appears to be an effective and well-tolerated method of enhancing physical performance in individuals aged 80 and older and may represent a valuable strategy for improving functional performance indicators associated with sarcopenia risk in this population. Full article

17 pages, 1538 KB  
Article
A Mobile Augmented Reality Integrating KCHDM-Based Ontologies with LLMs for Adaptive Q&A and Knowledge Testing in Urban Heritage
by Yongjoo Cho and Kyoung Shin Park
Electronics 2026, 15(2), 336; https://doi.org/10.3390/electronics15020336 - 12 Jan 2026
Viewed by 169
Abstract
A cultural heritage augmented reality system overlays virtual information onto real-world heritage sites, enabling intuitive exploration and interpretation with spatial and temporal contexts. This study presents the design and implementation of a cognitive Mobile Augmented Reality (MAR) system that integrates KCHDM-based ontologies with large language models (LLMs) to facilitate intelligent exploration of urban heritage. While conventional AR guides often rely on static data, our system introduces a Semantic Retrieval-Augmented Generation (RAG) pipeline anchored in a structured knowledge base modeled after the Korean Cultural Heritage Data Model (KCHDM). This architecture enables the LLM to perform dynamic contextual reasoning, transforming heritage data into adaptive question-answering (Q&A) and interactive knowledge-testing quizzes that are precisely grounded in both historical and spatial contexts. The system supports on-site AR exploration and map-based remote exploration to ensure robust usability and precise spatial alignment of virtual content. To deliver a rich, multisensory experience, the system provides multimodal outputs, integrating text, images, models, and audio narration. Furthermore, the integration of a knowledge sharing repository allows users to review and learn from others’ inquiries. This ontology-driven LLM-integrated MAR design enhances semantic accuracy and contextual relevance, demonstrating the potential of MAR for socially enriched urban heritage experiences. Full article
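
The Semantic RAG flow can be sketched schematically: retrieve ontology facts about the heritage entity the visitor is looking at, then constrain the LLM's answer to those facts. In the Python sketch below, the triple format, function names, and prompt wording are all illustrative assumptions rather than the paper's KCHDM schema or actual LLM integration.

```python
def retrieve_triples(knowledge_base, entity: str, k: int = 5):
    """Return up to k (subject, predicate, object) facts about an entity."""
    return [t for t in knowledge_base if t[0] == entity][:k]

def build_grounded_prompt(question: str, triples) -> str:
    """Assemble a prompt that grounds the LLM's answer in retrieved facts."""
    facts = "\n".join(f"- {s} {p} {o}." for s, p, o in triples)
    return (
        "Answer using only the facts below.\n"
        f"Facts:\n{facts}\n"
        f"Question: {question}\nAnswer:"
    )

# Toy knowledge base (real facts, hypothetical triple format).
kb = [
    ("Sungnyemun", "was_built_in", "1398"),
    ("Sungnyemun", "is_located_in", "Seoul"),
    ("Sungnyemun", "is_designated_as", "National Treasure No. 1"),
]
prompt = build_grounded_prompt("When was this gate built?",
                               retrieve_triples(kb, "Sungnyemun"))
print(prompt)  # this grounded prompt would then be sent to the LLM
```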

16 pages, 9469 KB  
Article
Immersion as Convergence: How Storytelling, Interaction, and Sensory Design Co-Produce Museum Virtual Reality Experiences
by Zhennuo Song and Leighton Evans
Information 2026, 17(1), 75; https://doi.org/10.3390/info17010075 - 12 Jan 2026
Viewed by 315
Abstract
Cultural heritage institutions today are experiencing a digital transformation. Virtual Reality (VR), with the promise of immersive and interactive features, has drawn the attention of artists and curators. Some prior museology research has attempted to investigate digital innovations like virtual museums and VR-based exhibits to present the best of museum experiences; however, existing systematic research on the topic of interactive narrative experience with immersive VR technologies is rare. This paper reports on an original research project to understand the emergent issues concerning immersion, interactivity, and narrative in museum experience design. This research used multiple case studies: Claude Monet: The Water Lily Obsession; We live in the Ocean of Air; Mona Lisa: Beyond the Glass; and Curious Alice. In total, 22 semi-structured interviews were conducted with VR experts and museum curators to understand the motivations of the designers and developers. This research hopes to contribute to the digital revolution of museums, providing a foundation for curators and artists who are interested in using VR technologies in exhibitions. Full article
(This article belongs to the Special Issue Intelligent Interaction in Cultural Heritage)

20 pages, 4633 KB  
Article
Teleoperation System for Service Robots Using a Virtual Reality Headset and 3D Pose Estimation
by Tiago Ribeiro, Eduardo Fernandes, António Ribeiro, Carolina Lopes, Fernando Ribeiro and Gil Lopes
Sensors 2026, 26(2), 471; https://doi.org/10.3390/s26020471 - 10 Jan 2026
Viewed by 262
Abstract
This paper presents an immersive teleoperation framework for service robots that combines real-time 3D human pose estimation with a Virtual Reality (VR) interface to support intuitive, natural robot control. The operator is tracked using MediaPipe for 2D landmark detection and an Intel RealSense D455 RGB-D (Red-Green-Blue plus Depth) camera for depth acquisition, enabling 3D reconstruction of key joints. Joint angles are computed using efficient vector operations and mapped to the kinematic constraints of an anthropomorphic arm on the CHARMIE service robot. A VR-based telepresence interface provides stereoscopic video and head-motion-based view control to improve situational awareness during manipulation tasks. Experiments in real-world object grasping demonstrate reliable arm teleoperation and effective telepresence; however, vision-only estimation remains limited for axial rotations (e.g., elbow and wrist yaw), particularly under occlusions and unfavorable viewpoints. The proposed system provides a practical pathway toward low-cost, sensor-driven, immersive human–robot interaction for service robotics in dynamic environments. Full article
(This article belongs to the Section Intelligent Sensors)
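
The joint-angle computation the abstract refers to is typically the angle between two segment vectors at a joint, recovered from back-projected 3-D landmarks via the dot product. A minimal sketch, assuming MediaPipe-style shoulder/elbow/wrist points in metres (the coordinate values below are made up for illustration, not the authors' code):

```python
import numpy as np

def joint_angle(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> float:
    """Angle at joint b (degrees) formed by 3-D points a-b-c."""
    u, v = a - b, c - b
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Elbow flexion from shoulder, elbow, and wrist positions (metres, as would
# come from the RGB-D back-projection step).
shoulder = np.array([0.10, 1.40, 0.50])
elbow    = np.array([0.15, 1.15, 0.55])
wrist    = np.array([0.35, 1.00, 0.40])
print(f"elbow angle: {joint_angle(shoulder, elbow, wrist):.1f} deg")
```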

25 pages, 5130 KB  
Article
Interpretable Biomechanical Feature Selection for VR Exercise Assessment Using SHAP and LDA
by Urszula Czajkowska, Magdalena Żuk, Michał Popek and Celina Pezowicz
Sensors 2026, 26(2), 464; https://doi.org/10.3390/s26020464 - 10 Jan 2026
Viewed by 231
Abstract
Virtual reality (VR) technologies are increasingly applied in rehabilitation, offering interactive physical and spatial exercises. A major challenge remains the objective assessment of human movement quality (HMQA). This study aimed to identify biomechanical features differentiating correct and incorrect execution of a lateral lunge and to determine the minimal number of sensors required for reliable VR-based motion analysis, prioritising interpretability. Thirty-two healthy adults (mean age: 26.4 ± 8.5 years) performed 211 repetitions recorded with the HTC Vive Tracker system (7 sensors + headset). Repetitions were classified by a physiotherapist using video observation and predefined criteria. The analysis included joint angles, angular velocities and accelerations, and Euclidean distances between 28 sensor pairs, evaluated with Linear Discriminant Analysis (LDA) and SHapley Additive exPlanations (SHAP). Angular features achieved higher LDA performance (F1 = 0.89) than distance-based features (F1 = 0.78), which proved more stable and less sensitive to calibration errors. Comparison of SHAP and LDA showed high agreement in identifying key features, including hip flexion, knee rotation acceleration, and spatial relations between headset and foot or shank sensors. The findings indicate that simplified sensor configurations may provide reliable diagnostic information, highlighting opportunities for interpretable VR-based rehabilitation systems in home and clinical settings. Full article
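
The LDA-plus-SHAP analysis pattern used here can be reproduced in a few lines. The sketch below fits a linear discriminant on synthetic stand-in features and attributes its decision function with model-agnostic SHAP values; the data are random placeholders carrying none of the study's biomechanical meaning, and the pipeline is an assumed reconstruction, not the authors' code.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
import shap  # pip install shap

# Synthetic stand-ins for biomechanical features (e.g., hip flexion,
# knee rotation acceleration, sensor distances); 211 repetitions as in
# the abstract, labels loosely driven by two of the features.
rng = np.random.default_rng(0)
n = 211
X = rng.normal(size=(n, 4))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n) > 0).astype(int)

clf = LinearDiscriminantAnalysis().fit(X, y)

# Model-agnostic SHAP attribution of the classifier's decision function.
explainer = shap.Explainer(clf.decision_function, X)
sv = explainer(X[:50])
print("mean |SHAP| per feature:", np.abs(sv.values).mean(axis=0))
```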
