Article

EFAR-MMLA: An Evaluation Framework to Assess and Report Generalizability of Machine Learning Models in MMLA

by Pankaj Chejara, Luis P. Prieto, Adolfo Ruiz-Calleja, María J. Rodríguez-Triana, Shashi K. Shankar and Reet Kasepalu
1 School of Digital Technologies, Tallinn University, 10120 Tallinn, Estonia
2 School of Educational Sciences, Tallinn University, 10120 Tallinn, Estonia
3 GSIC-EMIC Group, University of Valladolid, 47011 Valladolid, Spain
* Author to whom correspondence should be addressed.
Academic Editor: Andreas Savakis
Sensors 2021, 21(8), 2863; https://doi.org/10.3390/s21082863
Received: 25 March 2021 / Revised: 14 April 2021 / Accepted: 16 April 2021 / Published: 19 April 2021
(This article belongs to the Special Issue From Sensor Data to Educational Insights)
Abstract
Multimodal Learning Analytics (MMLA) researchers are progressively employing machine learning (ML) techniques to develop predictive models to improve learning and teaching practices. These predictive models are often evaluated for their generalizability using methods from the ML domain, which do not take into account MMLA’s educational nature. Furthermore, there is a lack of systematization in model evaluation in MMLA, which is also reflected in the heterogeneous reporting of the evaluation results. To overcome these issues, this paper proposes an evaluation framework to assess and report the generalizability of ML models in MMLA (EFAR-MMLA). To illustrate the usefulness of EFAR-MMLA, we present a case study with two datasets, each with audio and log data collected from a classroom during a collaborative learning session. In this case study, regression models are developed for collaboration quality and its sub-dimensions, and their generalizability is evaluated and reported. The framework helped us to systematically detect and report that the models performed well when evaluated using hold-out or cross-validation, but their performance degraded quickly when evaluated across different student groups and learning contexts. The framework helps to open up a “wicked problem” in MMLA research that remains fuzzy (i.e., the generalizability of ML models), which is critical to both accumulating knowledge in the research community and demonstrating the practical relevance of these techniques.
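
To make the distinction between these evaluation strategies concrete, the following sketch contrasts hold-out, k-fold cross-validation, and group-level cross-validation with scikit-learn. It is a minimal illustration, not the authors' pipeline: the features, collaboration-quality scores, and group labels are synthetic placeholders, and RandomForestRegressor stands in for whatever model family is under evaluation.

    # Minimal sketch (synthetic data, not the paper's pipeline) contrasting
    # three evaluation settings mentioned in the abstract.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split, cross_val_score, GroupKFold
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(0)
    n = 200                                    # e.g., 200 windows of audio/log features
    X = rng.normal(size=(n, 8))                # synthetic multimodal features
    y = 0.5 * X[:, 0] + rng.normal(scale=0.5, size=n)  # synthetic collaboration-quality score
    groups = rng.integers(0, 10, size=n)       # synthetic student-group IDs (10 groups)

    model = RandomForestRegressor(n_estimators=100, random_state=0)

    # 1) Hold-out: a random split, so the same groups can appear in train and test.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    holdout_r2 = r2_score(y_te, model.fit(X_tr, y_tr).predict(X_te))

    # 2) k-fold cross-validation: still mixes observations from the same group.
    cv_r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()

    # 3) Group-level cross-validation: each test fold contains only unseen groups,
    #    a stricter proxy for generalizing to new students.
    group_r2 = cross_val_score(model, X, y, groups=groups,
                               cv=GroupKFold(n_splits=5), scoring="r2").mean()

    print(f"hold-out R2: {holdout_r2:.2f}  "
          f"5-fold CV R2: {cv_r2:.2f}  group CV R2: {group_r2:.2f}")

Under the group-aware split, no student group contributes to both training and testing, which is the stricter notion of generalizability the framework asks researchers to report; replacing the group IDs with context or dataset IDs yields the across-context variant described in the case study.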
Keywords: Multimodal Learning Analytics; MMLA; face-to-face collaboration; machine learning; generalizability; evaluation framework; reporting
MDPI and ACS Style

Chejara, P.; Prieto, L.P.; Ruiz-Calleja, A.; Rodríguez-Triana, M.J.; Shankar, S.K.; Kasepalu, R. EFAR-MMLA: An Evaluation Framework to Assess and Report Generalizability of Machine Learning Models in MMLA. Sensors 2021, 21, 2863. https://doi.org/10.3390/s21082863

AMA Style

Chejara P, Prieto LP, Ruiz-Calleja A, Rodríguez-Triana MJ, Shankar SK, Kasepalu R. EFAR-MMLA: An Evaluation Framework to Assess and Report Generalizability of Machine Learning Models in MMLA. Sensors. 2021; 21(8):2863. https://doi.org/10.3390/s21082863

Chicago/Turabian Style

Chejara, Pankaj, Luis P. Prieto, Adolfo Ruiz-Calleja, María J. Rodríguez-Triana, Shashi K. Shankar, and Reet Kasepalu. 2021. "EFAR-MMLA: An Evaluation Framework to Assess and Report Generalizability of Machine Learning Models in MMLA" Sensors 21, no. 8: 2863. https://doi.org/10.3390/s21082863

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
