Search Results (10)

Search Parameters:
Keywords = MMLA

15 pages, 770 KiB  
Data Descriptor
NPFC-Test: A Multimodal Dataset from an Interactive Digital Assessment Using Wearables and Self-Reports
by Luis Fernando Morán-Mirabal, Luis Eduardo Güemes-Frese, Mariana Favarony-Avila, Sergio Noé Torres-Rodríguez and Jessica Alejandra Ruiz-Ramirez
Data 2025, 10(7), 103; https://doi.org/10.3390/data10070103 - 30 Jun 2025
Viewed by 419
Abstract
The growing implementation of digital platforms and mobile devices in educational environments has generated the need to explore new approaches for evaluating the learning experience beyond traditional self-reports or instructor presence. In this context, the NPFC-Test dataset was created from an experimental protocol conducted at the Experiential Classroom of the Institute for the Future of Education. The dataset was built by collecting multimodal indicators such as neuronal, physiological, and facial data using a portable EEG headband, a medical-grade biometric bracelet, a high-resolution depth camera, and self-report questionnaires. The participants were exposed to a digital test lasting 20 min, composed of audiovisual stimuli and cognitive challenges, during which synchronized data from all devices were gathered. The dataset includes timestamped records related to emotional valence, arousal, and concentration, offering a valuable resource for multimodal learning analytics (MMLA). The recorded data were processed through calibration procedures, temporal alignment techniques, and emotion recognition models. It is expected that the NPFC-Test dataset will support future studies in human–computer interaction and educational data science by providing structured evidence to analyze cognitive and emotional states in learning processes. In addition, it offers a replicable framework for capturing synchronized biometric and behavioral data in controlled academic settings.
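The abstract mentions temporal alignment of synchronized device streams. As a rough illustration of how such alignment is often done, the sketch below joins two sampled streams on nearest timestamps with pandas; the column names and sampling rates are hypothetical, not the dataset's actual schema.

```python
# A minimal sketch of timestamp-based alignment for multimodal streams,
# assuming hypothetical column names (the NPFC-Test schema may differ).
import pandas as pd

# Hypothetical per-device recordings sharing a wall-clock timestamp.
eeg = pd.DataFrame({
    "timestamp": pd.to_datetime(["2025-06-30 10:00:00.00", "2025-06-30 10:00:00.25"]),
    "concentration": [0.62, 0.71],
})
bracelet = pd.DataFrame({
    "timestamp": pd.to_datetime(["2025-06-30 10:00:00.10", "2025-06-30 10:00:00.30"]),
    "heart_rate": [78, 80],
})

# Align each EEG sample with the nearest earlier biometric sample,
# tolerating at most 200 ms of drift between devices.
aligned = pd.merge_asof(
    eeg.sort_values("timestamp"),
    bracelet.sort_values("timestamp"),
    on="timestamp",
    direction="backward",
    tolerance=pd.Timedelta("200ms"),
)
print(aligned)
```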

37 pages, 1386 KiB  
Review
A Comprehensive Review of Multimodal Analysis in Education
by Jared D. T. Guerrero-Sosa, Francisco P. Romero, Víctor H. Menéndez-Domínguez, Jesus Serrano-Guerrero, Andres Montoro-Montarroso and Jose A. Olivas
Appl. Sci. 2025, 15(11), 5896; https://doi.org/10.3390/app15115896 - 23 May 2025
Viewed by 2292
Abstract
Multimodal learning analytics (MMLA) has become a prominent approach for capturing the complexity of learning by integrating diverse data sources such as video, audio, physiological signals, and digital interactions. This comprehensive review synthesises findings from 177 peer-reviewed studies to examine the foundations, methodologies, tools, and applications of MMLA in education. It provides a detailed analysis of data collection modalities, feature extraction pipelines, modelling techniques—including machine learning, deep learning, and fusion strategies—and software frameworks used across various educational settings. Applications are categorised by pedagogical goals, including engagement monitoring, collaborative learning, simulation-based environments, and inclusive education. The review identifies key challenges, such as data synchronisation, model interpretability, ethical concerns, and scalability barriers. It concludes by outlining future research directions, with emphasis on real-world deployment, longitudinal studies, explainable artificial intelligence, emerging modalities, and cross-cultural validation. This work aims to consolidate current knowledge, address gaps in practice, and offer practical guidance for researchers and practitioners advancing multimodal approaches in education.
(This article belongs to the Section Computing and Artificial Intelligence)

14 pages, 7324 KiB  
Article
Kinetic Phase Behavior of Binary Mixtures of Tri-Saturated Triacylglycerols Containing Lauric Acid
by Sabine Danthine
Crystals 2024, 14(9), 807; https://doi.org/10.3390/cryst14090807 - 12 Sep 2024
Cited by 1 | Viewed by 1013
Abstract
Describing fat phase behavior is of significant interest for food and non-food applications. One recognized approach to understanding the behavior of complex fatty systems is to simplify the fat matrix and consider only the main triacylglycerol (TAG) components. In this context, the kinetic phase behavior and phase transformation paths of binary mixtures of selected saturated monoacid TAGs (trilaurin (LaLaLa), trimyristin (MMM), and tripalmitin (PPP)) and of mixed saturated triacylglycerols containing lauric (La) and myristic (M) acids (MMLa and LaLaM), typical of lauric fats, were investigated. Kinetic phase diagrams were constructed based on DSC heating thermograms (fast cooling and reheating at 5 °C min⁻¹) and powder X-ray diffraction data. The investigated binary kinetic phase diagrams presented an apparently typical eutectic behavior, with a eutectic point that varies depending on the blend composition. Introducing mixed saturated TAGs (MMLa or LaLaM) into binary blends shifted the position of the eutectic point. For the blends containing LaLaLa, it shifted from X_LaLaLa = 0.7 in the LaLaLa–MMM system to X_LaLaLa = 0.5 for the LaLaLa–MMLa mixture, and to X_LaLaLa = 0.25 for the LaLaLa–LaLaM blend. Finally, the blend made of the two mixed TAGs (MMLa–LaLaM) also presented complex, non-ideal behavior.
(This article belongs to the Section Industrial Crystallization)

21 pages, 1148 KiB  
Article
Visualizing Collaboration in Teamwork: A Multimodal Learning Analytics Platform for Non-Verbal Communication
by René Noël, Diego Miranda, Cristian Cechinel, Fabián Riquelme, Tiago Thompsen Primo and Roberto Munoz
Appl. Sci. 2022, 12(15), 7499; https://doi.org/10.3390/app12157499 - 26 Jul 2022
Cited by 16 | Viewed by 5536
Abstract
Developing communication skills in collaborative contexts is of special interest for educational institutions, since these skills are crucial to forming competent professionals for today’s world. New and accessible technologies open a way to analyze collaborative activities in face-to-face and non-face-to-face situations, where collaboration and student attitudes are difficult to measure using traditional methods. In this context, Multimodal Learning Analytics (MMLA) appears as an alternative to complement the evaluation and feedback of core skills. We present an MMLA platform to support collaboration assessment based on the capture and classification of non-verbal communication interactions. The platform integrates hardware and software, including machine learning techniques, to detect spoken interactions and body postures from video and audio recordings. The captured data are presented in a set of visualizations designed to help teachers obtain insights into the collaboration of a team. We performed a case study to explore whether the visualizations were useful for representing different behavioral indicators of collaboration in two teamwork situations: a collaborative one and a competitive one. We discussed the results of the case study in a focus group with three teachers to gain insights into the usefulness of our proposal. The results show that the measurements and visualizations are helpful for understanding differences in collaboration, confirming the feasibility of the MMLA approach for assessing and providing collaboration insights based on non-verbal communication.
(This article belongs to the Special Issue Data Analytics and Machine Learning in Education)
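The platform detects spoken interactions from audio recordings using machine learning; that pipeline is not reproduced here, but as a simplified illustration, the sketch below segments speech with a short-term energy threshold, one rudimentary alternative. The function name, frame size, and threshold are illustrative assumptions.

```python
# A minimal, illustrative sketch of detecting spoken-interaction spans from
# audio via short-term RMS energy thresholding (the platform's actual
# pipeline uses machine learning and is not reproduced here).
import numpy as np

def speaking_segments(audio, sr, frame_ms=50, threshold=0.02):
    """Return (start_s, end_s) spans where frame RMS energy exceeds threshold."""
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(audio) // frame_len
    frames = audio[: n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    active = rms > threshold
    segments, start = [], None
    for i, on in enumerate(active):
        if on and start is None:
            start = i
        elif not on and start is not None:
            segments.append((start * frame_ms / 1000, i * frame_ms / 1000))
            start = None
    if start is not None:
        segments.append((start * frame_ms / 1000, n_frames * frame_ms / 1000))
    return segments

# Example: one second of silence followed by one second of noisy "speech".
sr = 16000
audio = np.concatenate([np.zeros(sr), 0.1 * np.random.randn(sr)])
print(speaking_segments(audio, sr))  # roughly [(1.0, 2.0)]
```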

18 pages, 5241 KiB  
Article
Augmenting Social Science Research with Multimodal Data Collection: The EZ-MMLA Toolkit
by Bertrand Schneider, Javaria Hassan and Gahyun Sung
Sensors 2022, 22(2), 568; https://doi.org/10.3390/s22020568 - 12 Jan 2022
Cited by 9 | Viewed by 3342
Abstract
While the majority of social scientists still rely on traditional research instruments (e.g., surveys, self-reports, qualitative observations), multimodal sensing is becoming an emerging methodology for capturing human behaviors. Sensing technology has the potential to complement and enrich traditional measures by providing high-frequency data on people’s behavior, cognition, and affect. However, there is currently no easy-to-use toolkit for recording multimodal data streams. Existing methodologies rely on the use of physical sensors and custom-written code for accessing sensor data. In this paper, we present the EZ-MMLA toolkit. This toolkit was implemented as a website and provides easy access to multimodal data collection algorithms. One can collect a variety of data modalities: data on users’ attention (eye-tracking), physiological states (heart rate), body posture (skeletal data), gestures (from hand motion), emotions (from facial expressions and speech), and lower-level computer vision algorithms (e.g., fiducial/color tracking). This toolkit can run from any browser and does not require dedicated hardware or programming experience. We compare this toolkit with traditional methods and describe a case study where the EZ-MMLA toolkit was used by aspiring educational researchers in a classroom context. We conclude by discussing future work, other applications of this toolkit, and its potential limitations and implications.
(This article belongs to the Special Issue Integrating Sensor Technologies in Educational Settings)

27 pages, 2091 KiB  
Article
EFAR-MMLA: An Evaluation Framework to Assess and Report Generalizability of Machine Learning Models in MMLA
by Pankaj Chejara, Luis P. Prieto, Adolfo Ruiz-Calleja, María Jesús Rodríguez-Triana, Shashi Kant Shankar and Reet Kasepalu
Sensors 2021, 21(8), 2863; https://doi.org/10.3390/s21082863 - 19 Apr 2021
Cited by 23 | Viewed by 4955
Abstract
Multimodal Learning Analytics (MMLA) researchers are progressively employing machine learning (ML) techniques to develop predictive models to improve learning and teaching practices. These predictive models are often evaluated for their generalizability using methods from the ML domain, which do not take into account MMLA’s educational nature. Furthermore, there is a lack of systematization in model evaluation in MMLA, which is also reflected in the heterogeneous reporting of the evaluation results. To overcome these issues, this paper proposes an evaluation framework to assess and report the generalizability of ML models in MMLA (EFAR-MMLA). To illustrate the usefulness of EFAR-MMLA, we present a case study with two datasets, each with audio and log data collected from a classroom during a collaborative learning session. In this case study, regression models are developed for collaboration quality and its sub-dimensions, and their generalizability is evaluated and reported. The framework helped us to systematically detect and report that the models achieved better performance when evaluated using hold-out or cross-validation but quickly degraded when evaluated across different student groups and learning contexts. The framework helps to open up a “wicked problem” in MMLA research that remains fuzzy (i.e., the generalizability of ML models), which is critical to both accumulating knowledge in the research community and demonstrating the practical relevance of these techniques.
(This article belongs to the Special Issue From Sensor Data to Educational Insights)
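The gap the abstract describes, between scores from shuffled cross-validation and performance on unseen student groups, can be reproduced with a small synthetic sketch: when samples within a group are correlated, shuffled K-fold leaks group information into training, while group-wise evaluation reveals the drop. The data and model below are illustrative placeholders, not the study's.

```python
# A minimal sketch contrasting shuffled cross-validation with evaluation
# across held-out groups, the kind of distinction EFAR-MMLA systematizes.
# Synthetic data; the original study's features and models are not used.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, GroupKFold, cross_val_score

rng = np.random.default_rng(0)
n_groups, per_group = 6, 40
n = n_groups * per_group
groups = np.repeat(np.arange(n_groups), per_group)

# Each group has its own feature centroid and its own outcome level,
# mimicking group-level idiosyncrasies in classroom data.
centroids = rng.normal(0, 3, size=(n_groups, 5))
levels = rng.normal(0, 2, size=n_groups)
X = centroids[groups] + rng.normal(0, 0.3, size=(n, 5))
y = levels[groups] + rng.normal(0, 0.3, size=n)

model = RandomForestRegressor(n_estimators=100, random_state=0)
within = cross_val_score(model, X, y, cv=KFold(5, shuffle=True, random_state=0))
across = cross_val_score(model, X, y, cv=GroupKFold(n_splits=5), groups=groups)
print(f"Shuffled K-fold R^2 (groups leak into training): {within.mean():.2f}")
print(f"Group-wise K-fold R^2 (held-out groups):         {across.mean():.2f}")
```

The first score looks excellent because the model has seen each group during training; the second collapses, which is exactly the kind of result the framework asks researchers to detect and report.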

26 pages, 2682 KiB  
Review
Multimodal Data Fusion in Learning Analytics: A Systematic Review
by Su Mu, Meng Cui and Xiaodi Huang
Sensors 2020, 20(23), 6856; https://doi.org/10.3390/s20236856 - 30 Nov 2020
Cited by 82 | Viewed by 15456
Abstract
Multimodal learning analytics (MMLA), which has become increasingly popular, can help provide an accurate understanding of learning processes. However, it is still unclear how multimodal data are integrated into MMLA. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, this paper systematically surveys 346 articles on MMLA published during the past three years. For this purpose, we first present a conceptual model for reviewing these articles along three dimensions: data types, learning indicators, and data fusion. Based on this model, we then answer two questions: (1) what types of data and learning indicators are used in MMLA, and how are they related; and (2) how can the data fusion methods in MMLA be classified. Finally, we point out the key stages in data fusion and future research directions in MMLA. Our main findings are: (a) the data in MMLA can be classified into digital, physical, physiological, psychometric, and environment data; (b) the learning indicators are behavior, cognition, emotion, collaboration, and engagement; (c) the relationships between multimodal data and learning indicators are one-to-one, one-to-any, and many-to-one, and these complex relationships are the key to data fusion; (d) the main data fusion methods in MMLA are many-to-one, many-to-many, and multiple validations among multimodal data; and (e) multimodal data fusion can be characterized by the multimodality of data, the multi-dimensionality of indicators, and the diversity of methods.
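The "many-to-one" fusion the review classifies can be illustrated with a minimal feature-level (early) fusion sketch: per-modality feature blocks are concatenated into one vector to predict a single learning indicator. The modality names and data below are synthetic placeholders, not drawn from the surveyed studies.

```python
# A minimal sketch of many-to-one, feature-level fusion: several modalities
# are concatenated to predict one learning indicator. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 200
gaze = rng.normal(size=(n, 4))    # e.g., fixation statistics
audio = rng.normal(size=(n, 3))   # e.g., prosodic features
logs = rng.normal(size=(n, 2))    # e.g., click-stream counts
engaged = (gaze[:, 0] + audio[:, 0] + logs[:, 0] > 0).astype(int)

# Early fusion: one concatenated feature vector per learner,
# scaled so no modality dominates by variance alone.
X = np.hstack([gaze, audio, logs])
clf = make_pipeline(StandardScaler(), LogisticRegression())
clf.fit(X, engaged)
print(f"Training accuracy: {clf.score(X, engaged):.2f}")
```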

16 pages, 3927 KiB  
Article
Transplantation Induces Profound Changes in the Transcriptional Asset of Hematopoietic Stem Cells: Identification of Specific Signatures Using Machine Learning Techniques
by Daniela Cilloni, Jessica Petiti, Valentina Campia, Marina Podestà, Margherita Squillario, Nuria Montserrat, Alice Bertaina, Federica Sabatini, Sonia Carturan, Massimo Berger, Francesco Saglio, Giuseppe Bandini, Francesca Bonifazi, Franca Fagioli, Lorenzo Moretta, Giuseppe Saglio, Alessandro Verri, Annalisa Barla, Franco Locatelli and Francesco Frassoni
J. Clin. Med. 2020, 9(6), 1670; https://doi.org/10.3390/jcm9061670 - 1 Jun 2020
Cited by 5 | Viewed by 4849
Abstract
During the phase of proliferation needed for hematopoietic reconstitution following transplantation, hematopoietic stem/progenitor cells (HSPC) must express genes involved in stem cell self-renewal. We investigated the expression of genes relevant for self-renewal and expansion of HSPC (operationally defined as CD34+ cells) at steady state and after transplantation. Specifically, we evaluated the expression of ninety-one genes analyzed by real-time PCR in CD34+ cells isolated from (i) 12 samples of umbilical cord blood (UCB); (ii) 15 samples from healthy bone marrow donors; (iii) 13 bone marrow samples taken after umbilical cord blood transplant (UCBT); and (iv) 29 samples from patients after transplantation with adult hematopoietic cells. The results show that transplanted CD34+ cells from adult cells acquire a transcriptional asset very different from that of transplanted CD34+ cells from cord blood. Multivariate machine learning analysis (MMLA) showed that four specific gene signatures can be obtained by comparing the four types of CD34+ cells. In several, but not all, cases, transplanted HSPC from UCB overexpress reprogramming genes. However, these remarkable changes do not alter the commitment to the hematopoietic lineage. Overall, these results reveal undisclosed aspects of transplantation biology.
(This article belongs to the Section Hematology)

21 pages, 477 KiB  
Article
A Scalable Architecture for the Dynamic Deployment of Multimodal Learning Analytics Applications in Smart Classrooms
by Alberto Huertas Celdrán, José A. Ruipérez-Valiente, Félix J. García Clemente, María Jesús Rodríguez-Triana, Shashi Kant Shankar and Gregorio Martínez Pérez
Sensors 2020, 20(10), 2923; https://doi.org/10.3390/s20102923 - 21 May 2020
Cited by 17 | Viewed by 5258
Abstract
The smart classrooms of the future will use different software, devices, and wearables as an integral part of the learning process. These educational applications generate a large amount of data from different sources. The area of Multimodal Learning Analytics (MMLA) explores the affordances of processing these heterogeneous data to understand and improve both learning and the context where it occurs. However, a review of different MMLA studies highlighted that ad hoc and rigid architectures cannot be scaled up to real contexts. In this work, we propose a novel MMLA architecture that builds on software-defined networking and network function virtualization principles. We exemplify how this architecture can solve some of the detected challenges to deploy, dismantle, and reconfigure MMLA applications in a scalable way. Additionally, through some experiments, we demonstrate the feasibility and performance of our architecture when different classroom devices are reconfigured with diverse learning tools. These findings and the proposed architecture can be useful for other researchers in the area of MMLA and educational technologies envisioning the future of smart classrooms. Future work should aim to deploy this architecture in real educational scenarios with MMLA applications.
(This article belongs to the Special Issue Teaching and Learning Advances on Sensors for IoT)

27 pages, 5648 KiB  
Article
Utilizing Interactive Surfaces to Enhance Learning, Collaboration and Engagement: Insights from Learners’ Gaze and Speech
by Kshitij Sharma, Ioannis Leftheriotis and Michail Giannakos
Sensors 2020, 20(7), 1964; https://doi.org/10.3390/s20071964 - 31 Mar 2020
Cited by 25 | Viewed by 4930
Abstract
Interactive displays are becoming increasingly popular in informal learning environments as an educational technology for improving students’ learning and enhancing their engagement. Interactive displays have the potential to reinforce and maintain collaboration and rich interaction with the content in a natural and engaging manner. Despite the increased prevalence of interactive displays for learning, there is limited knowledge about how students collaborate in informal settings and how their collaboration around interactive surfaces influences their learning and engagement. We present a dual eye-tracking study involving 36 participants: a two-stage within-group experiment conducted following a single-group time-series design, with repeated measurement of participants’ gaze, voice, game logs, and learning gains. Various correlation, regression, and covariance analyses were employed to investigate students’ collaboration, engagement, and learning gains during the activity. The results show that, at the pair level, pairs with high gaze similarity have high learning outcomes. Individually, participants who spend a high proportion of time acquiring complementary information from the image and text parts of the learning material attain high learning outcomes. Moreover, the results show that speech can be an informative covariate when analyzing the relation between gaze variables and learning gains (and task-based performance). We also show that gaze is an effective proxy for the cognitive mechanisms underlying collaboration, not only in formal settings but also in informal learning scenarios.
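One simple way to operationalize gaze similarity for a pair of learners, offered here only as an illustrative assumption (the study's own metric may differ, e.g., cross-recurrence analysis), is the cosine similarity of their gaze heatmaps over a shared display. All names, grid sizes, and screen dimensions below are hypothetical.

```python
# A minimal sketch of one possible pairwise gaze-similarity measure:
# cosine similarity between two learners' gaze heatmaps. Illustrative only.
import numpy as np

def gaze_heatmap(points, grid=(10, 10), screen=(1920, 1080)):
    """Bin (x, y) gaze points into a normalized 2D histogram."""
    hist, _, _ = np.histogram2d(
        points[:, 0], points[:, 1],
        bins=grid, range=[[0, screen[0]], [0, screen[1]]],
    )
    return hist / max(hist.sum(), 1)

def gaze_similarity(points_a, points_b):
    """Cosine similarity of the two flattened heatmaps, in [0, 1]."""
    a = gaze_heatmap(points_a).ravel()
    b = gaze_heatmap(points_b).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(2)
learner_a = rng.uniform([0, 0], [1920, 1080], size=(500, 2))
learner_b = learner_a + rng.normal(0, 50, size=(500, 2))  # a similar scanpath
print(f"Pair similarity: {gaze_similarity(learner_a, learner_b):.2f}")
```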