
Multimodal Technologies and Interaction

Multimodal Technologies and Interaction is an international, peer-reviewed, open access journal on multimodal technologies and interaction published monthly online by MDPI.

Quartile Ranking JCR - Q2 (Computer Science, Cybernetics)

All Articles (826)

This systematic review summarises the latest research on the use of augmented reality (AR) in biology education at primary, secondary and tertiary levels. Searching Web of Science, Scopus and Google Scholar, we identified 40 empirical studies published up until early 2024. For each study, we analysed biological content, technical features, learning practices and pedagogical impact. AR is most commonly used in human anatomy, particularly for the circulatory and respiratory systems, but also in genetics, cell biology, virology, botany, ecology and molecular processes. Mobile devices dominate as the mediation platform, typically with marker-based tracking and either commercial apps or self-developed Unity/Vuforia solutions. Almost all studies embed AR in constructivist or inquiry-based pedagogies, and report improved motivation, engagement and conceptual understanding. Nevertheless, reporting of technical details is inconsistent and long-term effects are not yet sufficiently researched. AR should therefore be viewed as a pedagogical tool rather than a technological end in itself: it requires careful instructional design and equitable access to ensure meaningful and sustainable learning.

25 November 2025

PRISMA flowchart [29].

While blended learning facilitates digital literacy development, the specific design models and student factors contributing to this process remain underexplored. This study examined the relationship between various blended learning design models and digital literacy skill acquisition among 106 upper-secondary Vocational Education and Training (VET) students. Relationships among student activities, digital competencies, and prior blended learning experience were analyzed. Engagement in collaborative, task-based instructional designs—specifically collaborative projects and regular quizzing supported by digital tools—was positively associated with digital competence. Conversely, passive participation in live sessions or viewing pre-recorded videos exhibited a comparatively weaker association with competence development. While the use of virtual/augmented reality and interactive video correlated positively with digital tool usage, it did not significantly predict perceptions of online safety or content creation skills. Students with prior blended learning experience reported higher proficiency in developmental competencies, such as content creation and research, compared to their inexperienced peers. Cluster analysis identified three distinct student profiles based on technical specialization and blended learning experience. Overall, these findings suggest that blended learning implementation should prioritize structured collaboration and formative assessment.
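The cluster analysis mentioned in the abstract could, for example, be carried out with a simple k-means procedure (the abstract does not name the algorithm used; the two feature axes, the cluster centres, and the group sizes below are invented purely for illustration):

```python
import numpy as np

def kmeans(X, init_centers, iters=50):
    """Plain Lloyd's algorithm: assign each point to its nearest
    centre, then recompute each centre as the mean of its members."""
    centers = init_centers.astype(float).copy()
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(len(centers)):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels, centers

rng = np.random.default_rng(1)
# Hypothetical two-feature space per student:
# [technical specialization score, prior blended-learning experience]
profiles = np.vstack([
    rng.normal([0.0, 0.0], 0.3, (40, 2)),
    rng.normal([3.0, 0.0], 0.3, (36, 2)),
    rng.normal([0.0, 3.0], 0.3, (30, 2)),
])  # 106 synthetic students, matching the study's sample size
labels, centers = kmeans(profiles, profiles[[0, 50, 90]])
print(len(set(labels.tolist())))  # 3 recovered profiles
```

With one initial centre drawn from each synthetic group, Lloyd's algorithm recovers the three well-separated profiles; real student data would of course be noisier and higher-dimensional.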

15 December 2025

Artificial emotional intelligence is a sub-domain of human–computer interaction research that aims to develop deep learning models capable of detecting and interpreting human emotional states through various modalities. A major challenge in this domain is identifying meaningful correlations between heterogeneous modalities, for example between audio and visual data, owing to their distinct temporal and spatial properties. Traditional fusion techniques used in multimodal learning to combine data from different sources often fail to capture meaningful cross-modal interactions at acceptable computational cost, and struggle to adapt to varying modality reliability. Following a review of the relevant literature, this study adopts an experimental research method to develop and evaluate a mathematical cross-modal fusion model, thereby addressing a gap in the extant research literature. The framework uses Tucker tensor decomposition to factorise the multi-dimensional data array into a core tensor and a set of factor matrices, supporting the integration of temporal features from the audio modality and spatiotemporal features from the visual modality. A cross-attention mechanism is incorporated to enhance cross-modal interaction, enabling each modality to attend to relevant information from the other. The efficacy of the model is evaluated on three publicly available datasets, and the results demonstrate that the proposed fusion technique outperforms conventional fusion methods and several more recent approaches. The findings will be of interest to researchers and developers in artificial emotional intelligence.
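The cross-attention step, in which one modality attends to the other, can be sketched in plain NumPy. This is an illustrative sketch, not the authors' implementation: the sequence lengths, the feature dimension, and the single-head form without learned query/key/value projections are all assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values, d_k):
    """Each query frame attends over all frames of the other modality."""
    # queries: (T_q, d), keys_values: (T_kv, d)
    scores = queries @ keys_values.T / np.sqrt(d_k)   # (T_q, T_kv)
    weights = softmax(scores, axis=-1)                # each row sums to 1
    return weights @ keys_values                      # (T_q, d)

rng = np.random.default_rng(0)
audio = rng.standard_normal((50, 64))   # 50 audio frames, 64-dim features
visual = rng.standard_normal((30, 64))  # 30 visual frames, 64-dim features
fused = cross_attention(audio, visual, d_k=64)
print(fused.shape)  # (50, 64): audio timeline enriched with visual context
```

In a trained model the queries, keys and values would pass through learned projection matrices, and the symmetric direction (visual attending to audio) would be computed as well before the Tucker-based fusion.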

24 November 2025

Fear of needles is common among child patients. It causes stress and can lead to procedural difficulties and future treatment avoidance. Virtual reality (VR) has emerged as a promising tool for reducing pain and anxiety non-pharmacologically. However, a research gap exists regarding which VR content is most effective at decreasing periprocedural stress. This article reports a VR feasibility study conducted with 83 child patients aged 8–12 years during a cannulation procedure. The study used a between-subjects design with four groups, comparing deep breathing and mindfulness-based relaxation in a virtual nature environment (VNE) with a passive VNE and standard care. The results from the two relaxation exercise groups have been reported previously. This follow-up article adds findings from the passive VNE and control groups, comparing all four conditions for effectiveness and patient experience. The key findings show that deep breathing was highly effective according to heart rate variability (HRV) data but less enjoyable than the mindfulness-based relaxation, which achieved higher patient satisfaction but was less effective according to HRV. The passive VNE was pleasant but did not produce a measurable stress reduction. All VR interventions improved patient experience over standard care. Relaxation exercises in a VNE reduce periprocedural stress more effectively than passive VNEs or standard care in pediatrics.
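HRV indices of the kind used to assess stress here are commonly derived from beat-to-beat (RR) interval differences. Below is a minimal computation of RMSSD, a standard time-domain HRV metric; the abstract does not specify which index the study used, and the sample intervals are invented for illustration.

```python
import numpy as np

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences, in ms.
    Higher RMSSD indicates greater parasympathetic (relaxed) activity."""
    rr = np.asarray(rr_ms, dtype=float)
    return float(np.sqrt(np.mean(np.diff(rr) ** 2)))

# Invented example: larger beat-to-beat variation when calm,
# near-constant intervals under acute stress.
calm  = [820, 815, 830, 810, 835, 805, 840]
tense = [650, 652, 649, 651, 650, 652, 648]
print(rmssd(calm) > rmssd(tense))  # True
```

In practice the RR series would come from an ECG or PPG recording, with artefact-corrupted beats filtered out before computing the index.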

24 November 2025


Multimodal Technol. Interact. - ISSN 2414-4088