Multimodal Technol. Interact., Volume 9, Issue 12 (December 2025) – 4 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; the PDF is the official version. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
16 pages, 274 KB  
Article
Mapping Blended Learning Activities to Students’ Digital Competence in VET
by Marko Radovan and Danijela Makovec Radovan
Multimodal Technol. Interact. 2025, 9(12), 118; https://doi.org/10.3390/mti9120118 - 15 Dec 2025
Viewed by 335
Abstract
While blended learning facilitates digital literacy development, the specific design models and student factors contributing to this process remain underexplored. This study examined the relationship between various blended learning design models and digital literacy skill acquisition among 106 upper-secondary Vocational Education and Training (VET) students. Relationships among student activities, digital competencies, and prior blended learning experience were analyzed. Engagement in collaborative, task-based instructional designs—specifically collaborative projects and regular quizzing supported by digital tools—was positively associated with digital competence. Conversely, passive participation in live sessions or viewing pre-recorded videos exhibited a comparatively weaker association with competence development. While the use of virtual/augmented reality and interactive video correlated positively with digital tool usage, it did not significantly predict perceptions of online safety or content creation skills. Students with prior blended learning experience reported higher proficiency in developmental competencies, such as content creation and research, compared to their inexperienced peers. Cluster analysis identified three distinct student profiles based on technical specialization and blended learning experience. Overall, these findings suggest that blended learning implementation should prioritize structured collaboration and formative assessment.
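The abstract reports that cluster analysis separated students into three profiles but does not name the algorithm. As a generic illustration of that kind of analysis, here is a minimal stdlib k-means sketch; k-means is one common choice, and the feature names and data below are illustrative assumptions, not the study's:

```python
import random
import math

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: returns (centroids, labels)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)       # initialize from k distinct data points
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid (Euclidean distance).
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: math.dist(p, centroids[c]))
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = tuple(sum(x) / len(members) for x in zip(*members))
    return centroids, labels

# Hypothetical standardized features per student:
# (technical specialization score, prior blended-learning experience score).
students = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (1.0, 0.9), (0.1, 0.9), (0.2, 1.0)]
centroids, labels = kmeans(students, k=3)   # three profiles, as in the study
```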

16 pages, 1130 KB  
Review
Augmented Reality in Biology Education: A Literature Review
by Katja Stanič and Andreja Špernjak
Multimodal Technol. Interact. 2025, 9(12), 117; https://doi.org/10.3390/mti9120117 - 25 Nov 2025
Viewed by 1241
Abstract
This systematic review summarises the latest research on the use of augmented reality (AR) in biology education at primary, secondary and tertiary levels. Searching Web of Science, Scopus and Google Scholar, we found 40 empirical studies published up until early 2024. For each study, we analysed the biological content, technical features, learning practices and pedagogical impact. AR is most often used in human anatomy, particularly for the circulatory and respiratory systems, but also in genetics, cell biology, virology, botany, ecology and molecular processes. Mobile devices dominate as the delivery platform, typically with marker-based tracking and either commercial apps or self-developed Unity/Vuforia solutions. Almost all studies embed AR in constructivist or inquiry-based pedagogies and report improved motivation, engagement and conceptual understanding. Nevertheless, reporting of technical details is inconsistent, and long-term effects have not yet been sufficiently researched. AR should therefore be viewed as a pedagogical tool rather than a technological end in itself, one that requires careful instructional design and equitable access to ensure meaningful and sustainable learning.

31 pages, 3429 KB  
Article
Cross-Modal Attention Fusion: A Deep Learning and Affective Computing Model for Emotion Recognition
by Himanshu Kumar, Martin Aruldoss and Martin Wynn
Multimodal Technol. Interact. 2025, 9(12), 116; https://doi.org/10.3390/mti9120116 - 24 Nov 2025
Viewed by 1238
Abstract
Artificial emotional intelligence is a sub-domain of human–computer interaction research that aims to develop deep learning models capable of detecting and interpreting human emotional states through various modalities. A major challenge in this domain is identifying meaningful correlations between heterogeneous modalities—for example, between audio and visual data—due to their distinct temporal and spatial properties. Traditional fusion techniques used in multimodal learning to combine data from different sources often fail to capture meaningful cross-modal interactions at reasonable computational cost, and struggle to adapt to varying modality reliability. Following a review of the relevant literature, this study adopts an experimental research method to develop and evaluate a mathematical cross-modal fusion model, thereby addressing a gap in the extant literature. The framework uses Tucker tensor decomposition to factorize the multi-dimensional data array into a core tensor and factor matrices, supporting the integration of temporal features from the audio modality and spatiotemporal features from the visual modality. A cross-attention mechanism is incorporated to enhance cross-modal interaction, enabling each modality to attend to the relevant information in the other. The efficacy of the model is evaluated on three publicly available datasets, and the results demonstrate that the proposed fusion technique outperforms conventional fusion methods and several more recent approaches. The findings will be of interest to researchers and developers in artificial emotional intelligence.
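The cross-attention step described in the abstract, in which one modality attends to relevant information in the other, can be illustrated with a minimal NumPy sketch of scaled dot-product cross-attention. The projection sizes, random weights, and frame counts below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def cross_attention(q_feats, kv_feats, d_k=16, seed=0):
    """Scaled dot-product cross-attention: queries from one modality
    attend to keys/values derived from the other modality."""
    rng = np.random.default_rng(seed)
    d_q, d_v = q_feats.shape[-1], kv_feats.shape[-1]
    W_q = rng.normal(size=(d_q, d_k)) / np.sqrt(d_q)    # query projection
    W_k = rng.normal(size=(d_v, d_k)) / np.sqrt(d_v)    # key projection
    W_v = rng.normal(size=(d_v, d_k)) / np.sqrt(d_v)    # value projection
    Q, K, V = q_feats @ W_q, kv_feats @ W_k, kv_feats @ W_v
    scores = Q @ K.T / np.sqrt(d_k)                     # (T_q, T_kv) affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax over the key axis
    return weights @ V                                  # values mixed per query frame

# Hypothetical inputs: 10 audio frames x 32 features, 8 video frames x 64 features.
audio = np.random.default_rng(1).normal(size=(10, 32))
video = np.random.default_rng(2).normal(size=(8, 64))
fused = cross_attention(audio, video)   # audio queries attend to video keys/values
```

In a full model one such block would run in each direction (audio-to-video and video-to-audio) with learned, trained projections rather than fixed random ones.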

27 pages, 2519 KB  
Article
Reducing Periprocedural Pain and Anxiety of Child Patients with Guided Relaxation Exercises in a Virtual Natural Environment: A Clinical Research Study
by Ilmari Jyskä, Markku Turunen, Kaija Puura, Elina Karppa, Sauli Palmu and Jari Viik
Multimodal Technol. Interact. 2025, 9(12), 115; https://doi.org/10.3390/mti9120115 - 24 Nov 2025
Viewed by 894
Abstract
Fear of needles is common among child patients. It causes stress and can lead to difficult procedures and future avoidance of treatment. Virtual reality (VR) has emerged as a promising tool for reducing pain and anxiety non-pharmacologically. However, a research gap exists regarding which VR content is most effective in decreasing periprocedural stress. This article reports a VR feasibility study conducted with 83 child patients aged 8–12 years during a cannulation procedure. The study used a between-subjects design with four groups, comparing deep breathing and mindfulness-based relaxation exercises in a virtual nature environment (VNE) with a passive VNE and standard care. The results for both relaxation exercise groups have been reported previously; this follow-up article adds findings from the passive VNE and control groups and compares all four conditions for effectiveness and patient experience. The key finding is that deep breathing was highly effective according to heart rate variability (HRV) data but less enjoyable than the mindfulness-based relaxation, which achieved higher patient satisfaction but was less effective according to HRV. The passive VNE was pleasant but did not produce a measurable stress reduction. All VR interventions improved patient experience over standard care. Relaxation exercises in a VNE thus reduce periprocedural stress more effectively than a passive VNE or standard care in pediatrics.
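The abstract measures stress via heart rate variability without naming a specific index. A standard time-domain HRV measure is RMSSD (root mean square of successive differences between heartbeats); the sketch below is a stdlib illustration of the general idea, and both the choice of RMSSD and the RR-interval values are assumptions, not the study's data:

```python
import math

def rmssd(rr_intervals_ms):
    """RMSSD over a series of RR intervals (ms): higher values generally
    indicate more parasympathetic activity, i.e. a more relaxed state."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical RR intervals (ms) at baseline and during a breathing exercise.
baseline = [812, 790, 825, 805, 798, 820, 795]
exercise = [830, 870, 815, 880, 810, 875, 820]
print(round(rmssd(baseline), 1), round(rmssd(exercise), 1))  # → 23.3 59.2
```

The larger beat-to-beat swings during slow deep breathing raise RMSSD, which is the kind of HRV effect the study attributes to the deep-breathing condition.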
