Multimodal Interaction Design in Immersive Learning and Training Environments
A special issue of Multimodal Technologies and Interaction (ISSN 2414-4088).
Deadline for manuscript submissions: 20 February 2026
Special Issue Editors
Interests: multimodal LLMs; multimodal machine learning; knowledge graph reasoning; multimodal knowledge graph representation learning; satellite intelligent computing
Interests: multimodal sentiment analysis; cognition-grounded data; deep learning; affective knowledge bases
Special Issue Information
Dear Colleagues,
Immersive learning and training environments, ranging from virtual and augmented reality classrooms to high-fidelity simulators and metaverse campuses, are rapidly becoming mainstream. These environments generate and depend on unprecedented volumes of multimodal data: synchronized streams of vision, audio, haptics, eye-tracking, physiological signals, speech, text, 3D geometry, and behavioral logs. To design interactions that are both effective and engaging, we must go beyond merely fusing these modalities and instead understand how they jointly shape user experience, cognition, and skill acquisition in situ.
The central challenge is no longer simply “How do we learn from multimodal data?”, but rather “How do we design multimodal interactions that leverage these data to create adaptive, safe, and inclusive immersive learning and training experiences?” Answering this question requires a confluence of advances in multimodal representation learning, human–computer interaction, immersive systems engineering, and pedagogical theory. It also demands new frameworks for real-time, privacy-preserving, and explainable analytics that can operate under the latency, bandwidth, and safety constraints of live training scenarios.
This Special Issue, therefore, invites researchers and practitioners to present cutting-edge work on multimodal interaction design in immersive learning and training environments. We seek contributions that bridge machine learning innovation with human-centered design, producing systems that not only mine multimodal data effectively, but also shape richer, safer, and more personalized learning interactions.
We invite submissions that advance multimodal interaction design for immersive learning and training environments. This Special Issue will showcase research that transforms cutting-edge multimodal representation learning into real-time, user-centered interaction paradigms for VR-, AR-, XR-, and simulator-based education. Topics of interest include, but are not limited to, the following:
- Multimodal representation learning models and paradigms for real-time learner-state estimation.
- Safety and robustness in multimodal representation learning: adversarial attacks, threat models, and defenses in VR/AR/XR training pipelines.
- Distributed training techniques for federated or edge-based multimodal knowledge mining in simulators.
- Advanced approaches for complex multimodal tasks, such as cross-modal retrieval of instructional content and multimodal fusion for skill assessment.
- Multimodal anomaly detection and its application to identifying unsafe actions or cognitive overload in immersive environments.
- Multimodal learning in smart classrooms enhanced by knowledge-graph-based tutoring agents, social XR, and IoT.
- Multimodal natural language and vision techniques for interactive pedagogical agents, gesture-based feedback, and embodied conversational tutors.
- Large language and vision models tailored for on-the-fly content generation and adaptive feedback in XR.
- Multimodal Large Model (MMLM) theory and technology for scalable immersive training ecosystems.
- Novel architectures (Transformers, Mamba, graph neural networks) for joint modeling of learner trajectories and environmental contexts.
- Explainable AI in multimodal graph mining and knowledge discovery for transparent learner analytics.
- Neuro-symbolic multimodal learning that combines sensor data with explicit pedagogical rules and learning objectives.
- Real-world industrial applications: workplace safety training, medical simulators, military XR systems, soft-skills training in social VR, etc.
- Comprehensive surveys and analyses charting the convergence of multimodal learning and immersive interaction design.
Dr. Qian Li
Dr. Yunfei Long
Dr. Lei Shi
Guest Editors
Manuscript Submission Information
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Multimodal Technologies and Interaction is an international peer-reviewed open access monthly journal published by MDPI.
Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.
Keywords
- virtual reality
- augmented reality
- extended reality
- metaverse campuses
- IoT-enhanced smart classrooms
- interactive pedagogical agents
- embodied conversational tutors
- multimodal interaction design
- multimodal representation learning and training
- immersive interaction systems
- human-centered systems design
- gesture-based feedback
- eye-tracking
- explainable AI
Benefits of Publishing in a Special Issue
- Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
- Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
- Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
- External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
- Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.
Further information on MDPI's Special Issue policies can be found on the MDPI website.