Multimodal Data Fusion in Learning Analytics: A Systematic Review
1 School of Information Technology in Education, South China Normal University, Guangzhou 510631, China
2 School of Computing and Mathematics, Charles Sturt University, Albury, NSW 2640, Australia
* Author to whom correspondence should be addressed.
Sensors 2020, 20(23), 6856; https://doi.org/10.3390/s20236856
Received: 10 November 2020 / Revised: 26 November 2020 / Accepted: 28 November 2020 / Published: 30 November 2020
(This article belongs to the Special Issue New Trends on Multimodal Learning Analytics: Using Sensors to Understand and Improve Learning)
Multimodal learning analytics (MMLA), which has become increasingly popular, can help provide an accurate understanding of learning processes. However, it is still unclear how multimodal data are integrated into MMLA. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, this paper systematically surveys 346 articles on MMLA published during the past three years. For this purpose, we first present a conceptual model for reviewing these articles from three dimensions: data types, learning indicators, and data fusion. Based on this model, we then answer the following questions: (1) What types of data and learning indicators are used in MMLA, and what are the relationships between them? (2) How can the data fusion methods in MMLA be classified? Finally, we point out the key stages in data fusion and future research directions in MMLA. Our main findings from this review are: (a) the data in MMLA are classified into digital data, physical data, physiological data, psychometric data, and environment data; (b) the learning indicators are behavior, cognition, emotion, collaboration, and engagement; (c) the relationships between multimodal data and learning indicators are one-to-one, one-to-any, and many-to-one, and these complex relationships are the key to data fusion; (d) the main data fusion methods in MMLA are many-to-one, many-to-many, and multiple validations among multimodal data; and (e) multimodal data fusion can be characterized by the multimodality of data, the multi-dimensionality of indicators, and the diversity of methods.
Keywords:
multimodal learning analytics; data fusion; multimodal data; learning indicators; online learning
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
MDPI and ACS Style
Mu, S.; Cui, M.; Huang, X. Multimodal Data Fusion in Learning Analytics: A Systematic Review. Sensors 2020, 20, 6856. https://doi.org/10.3390/s20236856