Educational Virtual/Augmented Reality

Special Issue Editor


Guest Editor
School of Computing and Informatics, University of Louisiana at Lafayette, Lafayette, LA 70503, USA
Interests: human–computer interaction (HCI); 3D user interfaces; 3D interfaces for video games; virtual reality

Special Issue Information

Dear Colleagues,

This Special Issue explores methods, technologies, and studies that use Virtual Reality (VR) or Augmented Reality (AR) techniques to improve users' educational experiences. VR and AR are immersive technologies that enhance teaching and learning by blending digital content with traditional educational practice. Educational applications of VR and AR transform passive learning into active, immersive, and interactive experiences, making knowledge more accessible, engaging, and practical. The challenges of integrating educational VR/AR technologies extend beyond surface-level obstacles and involve pedagogical, technical, economic, and psychological considerations; addressing them requires rigorous evaluation, inclusive design, and sustainable implementation models. This Special Issue invites contributions on the technological, creative, perceptual, cognitive, social, and learning aspects of educational VR/AR technologies.

We encourage authors to submit original research articles, novel case studies, insightful reviews, theoretical and critical perspectives, and well-argued viewpoint articles on educational uses of VR/AR, including but not limited to the following topics:

  • Design and evaluation of educational VR/AR environments;
  • Techniques to improve educational experiences using VR/AR;
  • Input and sensing technologies for educational VR/AR;
  • Use of physiological sensors for improving educational VR/AR interfaces;
  • Empirical studies related to educational VR/AR;
  • Novel software architectures for educational VR;
  • Collaborative educational interfaces for VR, AR, or other 3D computer environments;
  • Evaluation methods for educational VR/AR;
  • Human perception in the context of educational VR/AR;
  • Novel educational applications using VR/AR;
  • Machine learning techniques in the context of educational VR/AR;
  • Mixed Reality (MR) applications for education;
  • Long-term studies evaluating learning using VR/AR;
  • Special education and accessibility through AR/VR;
  • Personalized and adaptive learning using AR/VR;
  • Gamification and game-based learning in VR/AR;
  • Ethical issues, privacy, and data security in immersive environments;
  • Hybrid classrooms with AR/VR integration;
  • Digital twins and 3D modeling in education (e.g., VR/AR-based science labs).

Of particular interest are articles that critically explore virtual/augmented reality techniques for improving educational experiences using sensing technologies to gain a deeper understanding of the user.

Dr. Arun K. Kulshreshth
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts are available on the Instructions for Authors page. Multimodal Technologies and Interaction is an international, peer-reviewed, open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • education
  • virtual reality
  • augmented reality
  • human–computer interaction
  • collaborative VR/AR
  • physiological sensors
  • motion controllers
  • haptics
  • user studies
  • machine learning
  • virtual environment
  • accessibility
  • ethics
  • gamification
  • digital twins
  • virtual labs

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (2 papers)

Review


16 pages, 3673 KB  
Review
Virtual Reality Learning Environments: A Review of Support for Autonomous Learning Development
by Pablo Fernández-Arias, Antonio del Bosque and Diego Vergara
Multimodal Technol. Interact. 2026, 10(2), 18; https://doi.org/10.3390/mti10020018 - 5 Feb 2026
Viewed by 1087
Abstract
The rapid expansion of digital education in the 21st century has positioned Virtual Reality Learning Environments (VRLEs) as promising spaces for fostering greater learner autonomy. As immersive technologies become more accessible and pedagogically versatile, they offer students opportunities to regulate their learning processes, experiment in interactive scenarios, and progress at their own pace. This review examines how autonomous learning has been conceptualized and investigated within VRLE research through a comprehensive bibliometric analysis of studies published between 2000 and 2025. The results reveal a research field shaped by two major orientations: one focused on human and pedagogical dimensions (learner diversity, instructional design, and evidence-based strategies) and another on technological innovation (artificial intelligence, machine learning, and simulation-based systems). Topic analyses show that digital and immersive education dominate current scholarly production, while areas directly related to autonomy, personalized learning, and student-centered methodologies remain comparatively less developed. Accordingly, it is crucial to reinforce pedagogical structures that enable autonomous learning in VR environments and to integrate technological advancements in a manner that translates into tangible improvements in educational quality across different settings.
(This article belongs to the Special Issue Educational Virtual/Augmented Reality)

Other


22 pages, 1413 KB  
Systematic Review
Motion Capture as an Immersive Learning Technology: A Systematic Review of Its Applications in Computer Animation Training
by Xinyi Jiang, Zainuddin Ibrahim, Jing Jiang and Gang Liu
Multimodal Technol. Interact. 2026, 10(1), 1; https://doi.org/10.3390/mti10010001 - 23 Dec 2025
Viewed by 1821
Abstract
Motion capture (MoCap) is increasingly recognized as a powerful multimodal immersive learning technology, providing embodied interaction and real-time motion visualization that enrich educational experiences. Although MoCap is gaining prominence within educational research, its pedagogical value and integration into computer animation training environments have received relatively limited systematic investigation. This review synthesizes findings from 17 studies to analyze how MoCap supports instructional design, creative development, and workflow efficiency in animation education. Results show that MoCap enables a multimodal learning process by combining visual, kinesthetic, and performative modalities, strengthening learners’ sense of presence, agency, and perceptual–motor understanding. Furthermore, we identified five key technical affordances of MoCap, including precision and fidelity, multi-actor and creative control, interactivity and immersion, perceptual–motor learning, and emotional expressiveness, which together shape both cognitive and creative learning outcomes. Emerging trends highlight MoCap’s growing convergence with VR/AR, XR, real-time rendering engines, and AI-augmented motion analysis, expanding its role in the design of immersive and interactive educational systems. This review offers insights into the use of MoCap in animation education research and provides a springboard for future work on more immersive and industry-relevant training.
(This article belongs to the Special Issue Educational Virtual/Augmented Reality)
