Entry

Cognitive Learning Analytics

by Seyma Yildirim-Erbasli 1,*, Munevver Ilgun Dibek 2 and Alexander Taikh 1
1 Department of Psychology, Concordia University of Edmonton, Edmonton, AB T5B 4E4, Canada
2 Department of Educational Sciences, TED University, Ankara 06420, Turkey
* Author to whom correspondence should be addressed.
Encyclopedia 2026, 6(3), 69; https://doi.org/10.3390/encyclopedia6030069
Submission received: 1 January 2026 / Revised: 9 March 2026 / Accepted: 17 March 2026 / Published: 19 March 2026
(This article belongs to the Section Social Sciences)

Definition

Cognitive Learning Analytics (CLA) is an interdisciplinary domain that combines cognitive science and learning analytics to interpret and enhance human learning through theoretically grounded data analysis. Learning analytics, since its inception in 2011, has developed as a research field and applied practice focused on “the measurement, collection, analysis, and reporting of data about learners and their contexts,” with the aim of understanding and optimizing learning processes and environments by leveraging large-scale, multimodal educational data. Cognitive science, in parallel, provides established theories of human learning, memory, attention, and metacognition. CLA links observable behaviors with theoretically defined cognitive mechanisms: through the integration of cognitive theories and computational techniques, it models how learners process information, make decisions, and construct knowledge in digital learning environments. CLA employs diverse data sources—including clickstream logs, eye tracking, biometric signals, and linguistic traces—to infer learners’ cognitive and affective states. These inferences inform adaptive learning systems, personalized feedback mechanisms, and intelligent tutoring tools that respond dynamically to the learner’s mental workload, engagement, or metacognitive strategies.


1. Introduction

Digital learning environments generate large volumes of data. These data include not only traditional indicators such as assessment scores and completion rates, but also fine-grained traces of learner behavior, language use, timing, and multimodal signals captured through sensors and interaction logs. The growing availability of such data has contributed to the development of learning analytics as a field dedicated to measuring, analyzing, and reporting learner data for the purpose of understanding and improving learning and the environments in which it occurs. In parallel, cognitive science has produced an extensive body of empirical and theoretical work explaining how people learn, remember, attend, solve problems, and regulate their cognitive activity. It provides theoretical accounts of the mental processes that underlie observable learning behavior and offers guidance for instructional design and educational intervention.
CLA is an interdisciplinary field that combines principles from cognitive science with the analytical and computational methods of learning analytics to study and interpret human learning in digital and technology-mediated environments. Learning analytics provides the methodological and computational foundations for collecting, modeling, and analyzing large-scale educational data, while cognitive science offers well-established theories of how humans learn, process information, and regulate their cognitive activity. CLA is at their intersection, seeking to integrate data-driven modeling with theory-driven interpretation to better understand learning processes as they unfold in authentic contexts.
This integration becomes particularly salient in complex and interactive educational technologies. Systems such as intelligent tutoring systems, adaptive learning platforms, conversational agents, dashboards, and recommendation systems generate rich streams of data while simultaneously making instructional decisions that affect learner cognition. Learning analytics can model learner behavior at scale, but its explanatory scope depends on alignment with theories of cognition; CLA therefore links the analytic outputs of these systems to cognitive theory in order to explain learning processes. In Section 2, we review the theoretical underpinnings of learning analytics and of cognitive science. In Section 3, we present the types of data that are pertinent in CLA, and in Section 4, we examine the types of analyses commonly used in CLA and link them to concepts from cognitive science. Finally, in Section 5, we describe key applications and use cases of CLA.

2. Conceptual and Theoretical Foundations of CLA

2.1. Learning Analytics as a Data-Driven Framework

As defined by the Society for Learning Analytics Research (SoLAR), learning analytics refers to the measurement, collection, analysis, and reporting of data about learners and their contexts [1]. It develops scalable methods for transforming raw interaction data into structured representations that can support monitoring, prediction, and decision-making in educational settings.
Learners generate data as they interact with digital learning environments, producing fine-grained traces such as navigation events, task attempts, timing information, and feedback usage [2]. Learning analytics involves the use of large-scale educational data [3]. Clow [4] conceptualized a learning analytics cycle composed of four steps: learners generate data, which is processed into metrics, which are analyzed and used to inform learning interventions, which in turn influence the learners. Gibson et al. [5] note that implementations of learning analytics range from summarizing data to redesigning courses.
Learning analytics focuses on modeling observable learner behavior and its structure. Analytic models identify regularities, transitions, and deviations in learner activity, enabling predictions about outcomes such as performance, persistence [6], or risk [7]. These models may operate at multiple temporal scales, from moment-to-moment action sequences [8] to longitudinal engagement trajectories [9]. Without theoretical grounding, such models primarily produce descriptive or predictive outputs. This limitation defines the scope addressed by CLA. In CLA, learning analytics functions as an inferential layer that supplies structured evidence that can be interpreted through cognitive theory. In this sense, learning analytics and cognitive science are complementary.
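To make this inferential layer concrete, the following minimal sketch (Python, with synthetic data and hypothetical feature names) shows the kind of purely predictive model described above: it estimates an outcome from aggregated activity features but, on its own, offers no cognitive explanation.

```python
# Minimal sketch: predicting an outcome (e.g., course completion) from
# aggregated activity features, as in descriptive/predictive learning
# analytics. Feature names and data are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_learners = 200

# Hypothetical per-learner aggregates: login count, mean session minutes,
# and number of assessment attempts.
X = np.column_stack([
    rng.poisson(20, n_learners),          # logins
    rng.gamma(2.0, 15.0, n_learners),     # mean session length (min)
    rng.poisson(5, n_learners),           # assessment attempts
])
y = (X[:, 0] + rng.normal(0, 5, n_learners) > 20).astype(int)  # toy label

# Without theoretical grounding, the model is purely predictive:
# it reports accuracy, not why a learner is at risk.
model = LogisticRegression(max_iter=1000)
print(cross_val_score(model, X, y, cv=5).mean())
```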

2.2. Cognitive Science Foundations of Learning

The effectiveness of data-driven educational interventions depends on their alignment with the properties of the human cognitive system. CLA draws on multiple strands of cognitive science—including theories of memory, attention, problem-solving, self-regulation, and metacognition—to guide the interpretation of learner data and the design of analytics-informed interventions. No single theory captures the full complexity of learning; cognitive science offers complementary frameworks that help explain why particular behavioral patterns emerge in learning data and how those patterns relate to underlying mental processes. Cognitive load theory is highlighted here as one framework frequently applied in analytics-informed systems: it connects observable learning behaviors (e.g., time spent on tasks, patterns in clickstream data, or problem-solving sequences) to underlying cognitive processes, such as intrinsic, extraneous, and germane load [10]. Other cognitive theories play equally important roles in explaining learner behavior and guiding analytic interpretation.
Within this broader landscape, cognitive load theory has been especially influential in educational research because it provides a clear and operationalizable account of mental effort during learning. Cognitive load theory [10,11,12] posits that learning is optimized when instructional conditions are consistent with the architecture of human cognition (i.e., the distinction between a limited-capacity working memory and a largely unlimited long-term memory) [13,14]. Within CLA, this architectural distinction supports the interpretation of observable behaviors such as error patterns or repeated attempts.
Cognitive load theory further differentiates between intrinsic, extraneous, and germane cognitive load, emphasizing that learning outcomes depend not only on the total amount of mental effort but also on its source [10]. Intrinsic load arises from the inherent complexity of the material, extraneous load from suboptimal instructional design, and germane load from processes that directly support schema construction. This framework provides criteria for interpreting learning data: variations in time on task, interaction sequences, or error rates may reflect changes in cognitive load rather than differences in motivation or ability [15].
Beyond interpretation, cognitive load theory also informs how learning analytics outputs are translated into instructional decisions that reduce unnecessary cognitive load and support learning (e.g., deciding when to provide additional scaffolding or simplify task structure) [16,17]. It offers analytic criteria for deciding when and why certain adaptations are likely to support learning based on inferred cognitive demands. Recent work has further demonstrated how cognitive load can be operationalized through learning analytics by linking theoretical constructs to subjective, behavioral, performance-based, and physiological indicators [18]. This mapping between theory and data characterizes the conceptual orientation of CLA: transforming rich but ambiguous learner traces into cognitively meaningful indicators that can inform both research and practice.
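As an illustration of the indicator mapping described in [18], here is a hedged sketch that standardizes one indicator from each of the four families and averages them into a composite load index. The indicator names, weights, and z-score fusion are illustrative assumptions, not a validated measurement model.

```python
# Hedged sketch: combining the four indicator families named above
# (subjective, behavioral, performance-based, physiological) into a single
# standardized cognitive-load index. Indicators, weights, and the simple
# z-score fusion are illustrative assumptions, not a validated model.
import numpy as np

def zscore(x):
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

def load_index(self_report, time_on_task, error_rate, pupil_diameter,
               weights=(0.25, 0.25, 0.25, 0.25)):
    """Weighted composite of standardized load indicators (higher = more load)."""
    indicators = [zscore(self_report), zscore(time_on_task),
                  zscore(error_rate), zscore(pupil_diameter)]
    return sum(w * ind for w, ind in zip(weights, indicators))

# Toy data for five learners on one task.
idx = load_index(self_report=[6, 4, 8, 3, 7],        # 1-9 Paas-style rating
                 time_on_task=[120, 90, 200, 60, 150],
                 error_rate=[0.2, 0.1, 0.5, 0.0, 0.3],
                 pupil_diameter=[3.8, 3.4, 4.2, 3.1, 4.0])
print(idx.round(2))
```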

2.3. Cognitive Process Frameworks in Learning Analytics

Cognitive theories provide essential frameworks for interpreting learner data by enabling researchers to infer underlying cognitive operations from observable behavior. In CLA, these theories guide the mapping of digital traces (e.g., navigation patterns) onto hypothesized learning processes. Rather than treating learner data as neutral indicators of engagement or performance, CLA emphasizes that meaningful interpretation requires alignment with established models of cognition. Table 1 shows a comparison between learning analytics and CLA across key dimensions, including research goals, theoretical grounding, data modalities, validation practices, and typical outputs.
Bloom’s Taxonomy [19] offers one of the most widely adopted frameworks for characterizing cognitive demand. It organizes learning activities into hierarchical levels, ranging from knowledge and comprehension to application, analysis, synthesis, and evaluation. The revised taxonomy [20] reconceptualizes these categories as cognitive processes—remember, understand, apply, analyze, evaluate, and create—and introduces a second dimension distinguishing factual, conceptual, procedural, and metacognitive knowledge. Bloom’s framework provides a structured vocabulary for interpreting the cognitive complexity of learning activities and aligning analytics with instructional intent. Building on this foundation, Gibson et al. [5] proposed the Cognitive OPeration framework for Analytics (COPA), which explicitly integrates cognitive theory into learning analytics practice. COPA operationalizes Bloom’s Revised Taxonomy by mapping learning activities to levels of cognitive processing. Such frameworks illustrate how cognitive theory can be systematically embedded within analytic pipelines, supporting interpretations of learner data that account for the quality and complexity of cognitive activity.
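A minimal sketch of what COPA-style tagging could look like in code: logged activity types are mapped onto cognitive-process levels of Bloom’s Revised Taxonomy and aggregated into a per-learner profile. The specific event-to-level mapping below is hypothetical; in practice it would be derived from task design.

```python
# Illustrative sketch of COPA-style tagging: mapping logged activity types
# onto levels of Bloom's Revised Taxonomy. The event-to-level mapping is a
# hypothetical example; a real deployment would derive it from task design.
from collections import Counter

BLOOM_LEVEL = {
    "view_page":       "remember",
    "summarize_post":  "understand",
    "solve_exercise":  "apply",
    "compare_sources": "analyze",
    "peer_review":     "evaluate",
    "design_project":  "create",
}

def cognitive_profile(events):
    """Count how often a learner's trace hits each cognitive-process level."""
    levels = [BLOOM_LEVEL[e] for e in events if e in BLOOM_LEVEL]
    return Counter(levels)

trace = ["view_page", "view_page", "solve_exercise", "peer_review"]
print(cognitive_profile(trace))
# Counter({'remember': 2, 'apply': 1, 'evaluate': 1})
```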

3. Data Sources in CLA

Early learning analytics systems relied primarily on log files and simple usage metrics, which offered limited capacity to identify underlying processes, such as the dynamic and cyclical process of planning, monitoring, and adapting learning [21,22]. As digital learning environments matured, however, the availability of fine-grained, time-stamped data enabled researchers to operationalize cognitive and metacognitive behaviors through observable traces. Winne [23] argued that such traces provide the closest empirical link to otherwise internal cognitive events. Table 2 presents a construct mapping matrix that illustrates how cognitive constructs can be operationalized through observable data, modeled using analytic approaches, and validated against independent measures.
Clickstream logs are widely used across large-scale digital platforms, providing researchers with continuous records of learner behavior [24]. Advances in sensing technologies, such as eye-tracking and functional near-infrared spectroscopy, have made it possible to measure cognitive load, attentional allocation, and collaborative synchrony more directly [25].

3.1. Behavioral Data

Behavioral data, especially clickstream logs, remain the foundational data source in CLA. Clickstream traces include time-stamped events such as accessing content, revisiting materials, seeking feedback, or attempting assessments. These actions have been shown to reveal patterns related to time management, planning, and strategic monitoring [24]. Because they are unobtrusively collected at scale, clickstream data are suitable for modeling both individual differences and population-level learning patterns.
Studies demonstrate that certain clickstream structures align closely with cognitive and motivational constructs. Navigation traces have been linked to planning and monitoring [26], procrastinatory engagement to lower performance [27], and behavioral clusters to self-reported learning strategies [28]. Integrating linguistic features with clickstream data improves predictions of Massive Open Online Course (MOOC) completion, underscoring the complementary value of discourse-based indicators [29]. Such hybrid models integrate cognitive, behavioral, and emotional indicators of learning.
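The sketch below illustrates how such indicators can be derived from raw clickstream logs: given time-stamped events, it computes simple per-learner aggregates, including a deadline-slack measure in the spirit of the procrastination findings in [27]. Column names and the log schema are assumptions.

```python
# Sketch: deriving self-regulation proxies from time-stamped clickstream
# events, e.g., event counts and how close to the deadline activity occurs
# (a procrastination indicator). The log schema is a hypothetical example.
import pandas as pd

log = pd.DataFrame({
    "learner": ["a", "a", "a", "b", "b"],
    "event":   ["view", "attempt", "attempt", "view", "attempt"],
    "time":    pd.to_datetime(["2026-01-01 10:00", "2026-01-05 22:00",
                               "2026-01-06 23:50", "2026-01-02 09:00",
                               "2026-01-03 14:00"]),
})
deadline = pd.Timestamp("2026-01-07 00:00")

features = log.groupby("learner").agg(
    n_events=("event", "size"),
    first_access=("time", "min"),
    last_activity=("time", "max"),
)
# Hours of slack before the deadline at last activity: small values suggest
# deadline-driven (procrastinatory) engagement [27].
features["slack_hours"] = (deadline - features["last_activity"]).dt.total_seconds() / 3600
print(features)
```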

3.2. Multimodal Data

Multimodal Learning Analytics (MMLA) data sources are primarily valued for their ability to capture cognitive, affective, and attentional processes [30,31]. While behavioral traces provide information about learners’ external actions, multimodal data extend this evidentiary base by enabling inferences about mental effort, engagement, regulation, and meaning-making as they unfold in real learning contexts [32,33,34]. Cognitive and affective constructs emerge through the integration of multiple complementary streams [30,31,33]. Multimodal fusion often improves learning outcome prediction and interpretation, though some modality combinations may reduce performance, highlighting the need for theory-driven selection [33].
Physiological and neurocognitive data form a central category of MMLA sources. Signals such as electrodermal activity and eye-tracking approximate cognitive and affective states (e.g., mental effort and attention) not readily inferred from behavioral logs [30,31]. Eye-tracking indicators have been used to model mental effort and learning outcomes across contexts [31], offering temporally sensitive evidence of cognitive resource allocation. Linguistic data, including text and speech, constitute an important multimodal source. Text analysis captures cognitive and affective dimensions beyond system logs [30], while speech features (e.g., fluency and prosody) have been used to infer expertise and engagement in learning contexts [31]. Handwriting and sketch data form a distinct multimodal source for studying cognition through external representations. Such data capture how learners construct and revise ideas during problem solving [30], enabling analysis of learning processes rather than outcomes alone. Action, posture, and gesture data constitute another key multimodal source in CLA. Using computer vision and motion tracking, researchers model engagement, collaboration, and learning strategies from bodily movement patterns [30,31], supporting analysis of embodied cognition. In CLA, these multimodal sources complement behavioral traces by enabling theory-driven inferences about learners’ cognitive, affective, and embodied processes.
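As a concrete, hedged example of combining such streams, the sketch below performs simple feature-level fusion of two synthetic modalities and compares cross-validated accuracy against a single modality; all data and feature meanings are placeholders.

```python
# Minimal sketch of feature-level ("early") multimodal fusion: standardize
# per-modality features, concatenate, and compare predictive value against a
# single modality. Data and feature names are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 150
clickstream = rng.normal(size=(n, 3))   # e.g., counts, pacing, revisits
gaze        = rng.normal(size=(n, 2))   # e.g., fixation duration, dispersion
y = (clickstream[:, 0] + 0.5 * gaze[:, 0] + rng.normal(0, 1, n) > 0).astype(int)

def cv_acc(X, y):
    X = StandardScaler().fit_transform(X)
    return cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()

print("clickstream only:", round(cv_acc(clickstream, y), 3))
print("fused:           ", round(cv_acc(np.hstack([clickstream, gaze]), y), 3))
# Fusion is not guaranteed to help [33]; comparisons like this motivate
# theory-driven modality selection rather than indiscriminate stacking.
```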

3.3. Measurement Validity and Reliability Considerations

Although behavioral traces and multimodal signals provide valuable evidence about learning processes, their interpretation as indicators of latent cognitive states is indirect and inferential. Observable indicators function as proxies rather than direct measures of cognition, raising concerns about validity and reliability. For example, physiological measures alone may not reliably distinguish differences in cognitive load, even when grounded in established theoretical frameworks such as cognitive load theory [35]. Similarly, learners do not always behaviorally express emotional states through observable signals such as facial expressions, underscoring the limitations of relying on single observable modalities to infer internal cognitive or affective processes [36].
Further challenges arise when integrating multiple data streams. While combining modalities can improve predictive performance, adding further data streams does not necessarily enhance accuracy and may even reduce it [37]. Such findings indicate that features extracted from multimodal data streams do not map straightforwardly onto educational constructs and require careful theoretical justification and methodological validation. This reflects broader measurement validity concerns in CLA, where observable behavioral and physiological indicators serve as indirect proxies rather than direct measures of cognitive processes [33,38]. Common mitigation strategies include integration across complementary data sources, experimental manipulation of target constructs, alignment of observable indicators with theoretically grounded constructs, and cross-validation against external performance or self-report measures [33,38,39] (see Table 2).
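One of these mitigation strategies, cross-validation against external self-report measures, can be sketched in a few lines: correlate a trace-based indicator with an independent rating of the same construct. The data here are synthetic and the measures hypothetical.

```python
# Sketch of one mitigation strategy named above: cross-validating a
# trace-based indicator against an independent self-report measure.
# A weak correlation would challenge the indicator's construct validity.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
trace_effort = rng.normal(size=50)                           # log-derived effort proxy
self_report = 0.6 * trace_effort + rng.normal(0, 0.8, 50)    # rating-scale measure

r, p = pearsonr(trace_effort, self_report)
print(f"convergent validity: r = {r:.2f}, p = {p:.3f}")
```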

4. Analytical and Computational Approaches in CLA

Analytical and computational methods constitute the foundation of CLA by enabling researchers to interpret complex, high-volume learning data in theoretically meaningful ways (see Table 2). These methods make it possible to model how learners think, regulate, and interact within digital environments by capturing temporal, linguistic, and multimodal patterns in learner data.

4.1. Machine Learning and Deep Learning

Machine learning (ML) and deep learning (DL) constitute core computational methods within CLA, where they are used to model learner behavior and infer cognitive and metacognitive processes from learning traces [40]. These traces encode implicit cognitive operations, which ML and DL techniques model as latent states [41].
ML encompasses a broad family of algorithms designed for prediction, classification, clustering, and anomaly detection based on patterns learned from historical data. Classical ML approaches have long been central to learning analytics. These methods support modeling of engagement and learner outcomes by linking behavioral traces with theoretically meaningful indicators of learning processes [23,40,42,43]. Their transparency and interpretability make them integral for scalable educational modeling and generating explanations aligned with cognitive theory [44].
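For instance, a classical, transparent ML step in the spirit of the strategy-detection work cited earlier [28] is to cluster learners on behavioral features and read the cluster centers as candidate strategy profiles. The sketch below uses synthetic data; the features and cluster count are assumptions.

```python
# Sketch: an interpretable classical-ML step in the spirit of [28]:
# cluster learners on behavioral features and inspect cluster centers as
# candidate strategy profiles. Features and cluster count are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
# Hypothetical features: rereading rate, feedback checks, pacing regularity.
X = StandardScaler().fit_transform(rng.normal(size=(300, 3)))

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
# Cluster centers stay in feature space, so each cluster can be read as a
# behavioral profile and compared with self-reported strategies.
print(km.cluster_centers_.round(2))
```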
DL extends the analytical capabilities of ML by using multilayer neural architectures that learn hierarchical representations directly from raw data, a property that enables the discovery of high-level abstractions from complex learning environments [45,46]. These architectures are relevant because cognitive processes, such as planning, monitoring, and attention, often manifest as temporal or multimodal patterns in learner behavior. DL can therefore model hierarchical cognitive and metacognitive patterns in such data [47,48].

4.2. Sequence Modeling Techniques

Sequential and process-oriented analytics include methodological families that differ in how they model temporal dependencies and evolving behavioral patterns. Statistical and probabilistic approaches represent learning as structured state transitions, whereas deep learning architectures capture nonlinear and dynamic behavioral patterns through hierarchical representations [49]. State-transition models, such as Markov Chain–based approaches, model learner behavior as probabilistic transitions between discrete states over time, enabling estimation of sequential dependencies and prediction of future actions [50,51]. Although extensions such as variable-order and hybrid models improve flexibility, they remain sensitive to data sparsity and have limited capacity for modeling long-range dependencies [52].
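A first-order Markov model of this kind reduces to estimating a transition-probability matrix from observed state sequences, as in the following sketch; the state labels are illustrative stand-ins for coded log events.

```python
# Sketch of a first-order Markov model over discrete learner states:
# estimate transition probabilities from observed state sequences. State
# labels are illustrative; real states would come from coded log events.
import numpy as np

STATES = ["read", "practice", "seek_feedback"]
IDX = {s: i for i, s in enumerate(STATES)}

def transition_matrix(sequences):
    counts = np.zeros((len(STATES), len(STATES)))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[IDX[a], IDX[b]] += 1
    # Row-normalize to probabilities; rows with no outgoing transitions stay 0.
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

seqs = [["read", "practice", "seek_feedback", "practice"],
        ["read", "read", "practice"]]
P = transition_matrix(seqs)
print(P.round(2))  # P[i, j] = estimated probability of moving from state i to j
```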
Advanced approaches include deep sequential models, such as recurrent neural networks, memory networks, and Transformer-based architectures, which capture nonlinear temporal dependencies through distributed representations of behavioral sequences [53,54]. These models represent short- and long-term dependencies; however, their complexity introduces interpretability and reliability considerations. Internal representations are often opaque and difficult to align with meaningful cognitive constructs [55], and performance may be sensitive to noisy or sparse data, long-sequence degradation, high computational demands, and overfitting risks [56].
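For contrast, a deep sequential counterpart can be sketched as a small next-action LSTM (PyTorch). The sizes and action vocabulary are toy values, and the hidden state is exactly the kind of distributed representation whose cognitive alignment the text flags as difficult [55].

```python
# Hedged sketch of a deep sequential model: a small LSTM that predicts the
# next action from an action sequence. Vocabulary and sizes are toy values.
import torch
import torch.nn as nn

class NextActionLSTM(nn.Module):
    def __init__(self, n_actions=10, emb=16, hidden=32):
        super().__init__()
        self.embed = nn.Embedding(n_actions, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, seq):                 # seq: (batch, time) action ids
        h, _ = self.lstm(self.embed(seq))   # h: (batch, time, hidden)
        return self.head(h[:, -1])          # logits for the next action

model = NextActionLSTM()
batch = torch.randint(0, 10, (4, 12))       # 4 sequences of 12 actions
print(model(batch).shape)                   # torch.Size([4, 10])
```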
Sequential models are often extended through multimodal and hybrid frameworks, which integrate diverse behavioral and contextual data to improve predictive performance [57]. Although such integration can improve predictive accuracy, these systems are associated with challenges, including data sparsity, noise, bias, and contextual variability [49]. Differences in platform architecture, logging granularity, and data structure may also limit generalizability. Hybrid models combining probabilistic, neural, and multimodal approaches aim to balance performance and interpretability but introduce additional complexity and validation demands [58]. While powerful, these models yield inferential approximations rather than direct measures of cognitive states, requiring careful theoretical grounding and validation.

4.3. Natural Language Processing

Language functions as a medium for communicating, integrating, and reorganizing knowledge, and therefore constitutes a central object for analytics aiming to uncover psychological, cognitive, and affective processes [59]. Natural Language Processing (NLP) leverages this linguistic trace to infer states such as confusion, engagement, epistemic stance, conceptual understanding, and metacognition.
Traditional NLP approaches have long been used to quantify properties of discourse relevant to cognition. For example, Linguistic Inquiry and Word Count (LIWC) categories tap cognitive processes such as causation, certainty, and insight, offering interpretable indicators of reasoning patterns and affective states [59]. Such linguistic markers become proxies for cognitive operations, such as inference making, monitoring, or meaning construction.
Latent Semantic Analysis (LSA) remains foundational for modeling conceptual similarity and relevance, allowing researchers to estimate how closely learner discourse aligns with instructional content or expert reasoning [60,61]. LSA-based measures do not merely assess textual similarity; they operationalize deeper constructs such as conceptual understanding and knowledge integration. Yang [62] shows how LSA combined with deep neural architectures can operationalize cognitive-oriented NLP in MOOCs. Transformer-based models such as Bidirectional Encoder Representations from Transformers (BERT) introduce deeper semantic representations that substantially expand NLP’s capacity to infer cognitive states. Dyulicheva and Bilashova [63], for example, applied BERT to detect affective states in MOOCs.
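To ground the LSA discussion, the following sketch projects learner responses and a reference answer into a low-rank semantic space and scores their cosine similarity; the three-document corpus is far too small for real LSA and serves only to show the pipeline.

```python
# Sketch of an LSA-style measure: project learner responses and a reference
# answer into a low-rank semantic space and score their cosine similarity.
# The tiny corpus is for illustration only; real LSA needs a large corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "working memory has limited capacity",           # reference answer
    "memory for work is limited in capacity",        # learner A
    "plants convert sunlight into chemical energy",  # learner B (off-topic)
]
X = TfidfVectorizer().fit_transform(docs)
Z = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)

sims = cosine_similarity(Z[:1], Z[1:])[0]
print({"learner_A": round(sims[0], 2), "learner_B": round(sims[1], 2)})
```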
Recent developments in NLP research have introduced a crucial dimension to CLA: cognitive plausibility. Beinborn and Hollenstein [64] define cognitive plausibility as the degree to which models reflect human-like patterns in decisions, representational structures, and processing strategies. Cognitively plausible NLP aligns analytic models with human-like reasoning patterns, supporting applications such as monitoring cognitive load or diagnosing breakdowns in comprehension, and strengthening explanatory analysis in CLA.

4.4. Multimodal Learning Analytics

MMLA is a subfield within learning analytics that aims to understand learning by leveraging evidence from multiple modalities rather than relying primarily on traces logged by digital platforms [32,65,66]. Multimodal data (e.g., audio, video, eye-tracking, electrodermal activity, and logs) support deeper accounts of the learning experience [34]. Many educational analytics applications rely primarily on platform-generated traces, and the “seamless integration” and “sense-making” of multimodal streams are technically and conceptually challenging [34]. In CLA, this point is central: the cognitive interpretability of multimodal capture depends on an explicit mapping from data to constructs that is theoretically justified, methodologically transparent, and empirically validated.
MMLA studies often lack alignment with theoretical frameworks [31]. This gap motivates theory-driven construct mapping: multimodal features should be selected and interpreted in relation to hypothesized cognitive processes, and analytic claims should specify what aspect of cognition is being inferred, through which observable manifestations, and under which contextual constraints. The literature suggests a coherent CLA-compatible position: MMLA provides an expanded evidentiary infrastructure for modeling learning as a dynamic, situated, and embodied phenomenon, with particular value for inferring cognitive, metacognitive, and affective processes that are poorly captured by platform logs alone [33,34,67]. However, the implementation of this approach involves theory-guided construct mapping, transparent and replicable fusion pipelines, careful validity arguments, and robust ethical safeguards [31,68].

5. Applications of CLA

Educational technology encompasses a variety of applications that use data and computational methods to support learning. These include Intelligent Tutoring Systems (ITS), adaptive learning platforms, conversational agents, dashboards, and recommendation systems.
ITS are computer-based learning environments designed to provide personalized instruction and feedback to learners on specific tasks or subjects [69]. Adaptive learning platforms adjust instructional content, difficulty levels, and pacing to match each learner’s performance and cognitive readiness, ensuring that learners are challenged appropriately while avoiding cognitive overload [70,71]. Conversational agents interact with learners through natural language dialogue to scaffold reasoning, promote reflection, and provide targeted feedback [72]. Research examines the role of conversational agents in engagement and cognitive processing [73,74]. Dashboards provide visual representations of learner data to instructors, administrators, or learners themselves, summarizing metrics such as progress, engagement, and performance trends, and supporting evidence-based decision-making and self-regulated learning [75,76]. Recommendation systems support learning by suggesting resources, activities, or pathways tailored to individual learner needs [77,78,79,80]. Together, these applications constitute the family of technology-mediated learning environments in which CLA operates.
ITS use analytics to examine fine-grained, step-level traces and update models of learner knowledge and strategy [81]. Adaptive learning platforms operate on higher-level performance indicators to select appropriate content sequences and adjust pacing [82]. Conversational agents generate rich textual and behavioral data that learning analytics methods analyze to detect patterns in dialogue, reasoning, and engagement [83]. Dashboards apply analytic processing to synthesize activity logs and performance trends into visualizations that support monitoring, reflection, and decision-making [84]. Recommendation systems detect meaningful relations among learners’ prior interactions, performance outcomes, and resource use, enabling suggestions that are aligned with individual learning needs [85].
CLA specifies theoretical and empirical principles used to interpret learner behavior in these systems. In ITS, models of memory [86], problem-solving, and metacognition [87] inform the interpretation of actions. Adaptive learning platforms apply principles of cognitive load [88], schema development [89,90], and skill acquisition [91] to sequence content and adjust difficulty. Conversational agents draw on theories of reasoning, conceptual change, and self-regulation to shape prompts and feedback [73]. Dashboard analytics incorporate principles of attention and information processing to structure visualizations in ways that reduce cognitive overload [92]. Recommendation systems rely on decision-making, prior knowledge activation, and motivation to align resource suggestions with learners’ cognitive needs [93]. Furthermore, fine-grained behavioral and multimodal traces can support formative assessment in digital learning environments. By integrating digital traces with cognitive and self-regulated learning theories [21,23] and iterative data–intervention cycles in learning analytics [4], CLA enables ongoing interpretation of learner activity. In this way, CLA provides a theoretical and analytic foundation for responsive instruction and learner self-regulation [73].
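A minimal sketch of how these theoretical principles could close the loop in an adaptive platform: an inferred load estimate is interpreted through cognitive load theory and mapped to an instructional action. The thresholds and actions are hypothetical design choices, not prescriptions from the literature.

```python
# Illustrative sketch of the CLA loop in an adaptive platform: interpret a
# trace-based load estimate through cognitive load theory and choose an
# instructional action. Thresholds and actions are hypothetical choices.
def next_step(load_estimate, mastery):
    """Map an inferred cognitive state to an instructional decision."""
    if load_estimate > 0.8:            # likely overload: reduce extraneous load
        return "simplify task / add worked example"
    if load_estimate < 0.3 and mastery > 0.7:
        return "increase difficulty"   # spare capacity: invite germane processing
    return "continue with scaffolding"

print(next_step(load_estimate=0.9, mastery=0.5))
# simplify task / add worked example
```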

6. Conclusions and Future Directions

CLA is conceptualized as the convergence of learning analytics and cognitive science. It describes how learning analytics and cognitive science operate in a reciprocal relationship, each shaping and strengthening the other. This relationship supports the development of educational technologies that can adapt not only to what learners do, but to how they process information, manage cognitive demands, and construct understanding over time.
Future research should more systematically examine how analytic data can be mapped onto theoretically grounded cognitive constructs. For example, do measures of typing output, which is a common way of interacting with online learning platforms, reflect cognitive load? Further research can clarify whether these measures reflect intrinsic, germane, and extraneous forms of cognitive load, and how properties of the platform itself influence these measures. Furthermore, future research should examine the influence of interactive tools, such as chatbots, on cognitive and metacognitive processes. These research directions can be studied through controlled comparisons of interaction traces, performance outcomes, and complementary validation measures.

Author Contributions

Conceptualization, S.Y.-E.; writing—original draft preparation, S.Y.-E., M.I.D. and A.T.; writing—review and editing, S.Y.-E., M.I.D. and A.T.; visualization, S.Y.-E. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CLA: Cognitive Learning Analytics
SoLAR: Society for Learning Analytics Research
COPA: Cognitive OPeration framework for Analytics
MOOC: Massive Open Online Course
MMLA: Multimodal Learning Analytics
ML: Machine learning
DL: Deep learning
NLP: Natural Language Processing
LSA: Latent Semantic Analysis
BERT: Bidirectional Encoder Representations from Transformers
ITS: Intelligent Tutoring Systems

References

  1. Long, P.; Siemens, G. Penetrating the fog: Analytics in learning and education. EDUCAUSE Rev. 2011, 46, 30–40. [Google Scholar]
  2. Clow, D. An overview of learning analytics. Teach. High. Educ. 2013, 18, 683–695. [Google Scholar] [CrossRef]
  3. Campbell, J.P.; DeBlois, P.B.; Oblinger, D.G. Academic analytics: A new tool for a new era. EDUCAUSE Rev. 2007, 42, 40–57. [Google Scholar]
  4. Clow, D. The learning analytics cycle: Closing the loop effectively. In Proceedings of the 2nd International Conference on Learning Analytics and Knowledge, Vancouver, BC, Canada, 29 April–2 May 2012. [Google Scholar] [CrossRef]
  5. Gibson, A.; Kitto, K.; Willis, J. A cognitive processing framework for learning analytics. In Proceedings of the 4th International Conference on Learning Analytics and Knowledge, Indianapolis, IN, USA, 24–28 March 2014. [Google Scholar] [CrossRef]
  6. Glick, D.; Cohen, A.; Festinger, E.; Xu, D.; Li, Q.; Warschauer, M. Predicting success, preventing failure: Using learning analytics to examine the strongest predictors of persistence and performance in an online English language course. In Utilizing Learning Analytics to Support Study Success; Springer International Publishing: Cham, Switzerland, 2019; pp. 249–273. [Google Scholar] [CrossRef]
  7. Russell, J.E.; Smith, A.; Larsen, R. Elements of success: Supporting at-risk student resilience through learning analytics. Comput. Educ. 2020, 152, 103890. [Google Scholar] [CrossRef]
  8. Andrade, A.; Delandshere, G.; Danish, J.A. Using multimodal learning analytics to model student behavior: A systematic analysis of epistemological framing. J. Learn. Anal. 2016, 3, 282–306. [Google Scholar] [CrossRef]
  9. Saqr, M.; López-Pernas, S. The longitudinal trajectories of online engagement over a full program. Comput. Educ. 2021, 175, 104325. [Google Scholar] [CrossRef]
  10. Paas, F.; Renkl, A.; Sweller, J. Cognitive load theory: Instructional implications of the interaction between information structures and cognitive architecture. Instr. Sci. 2004, 32, 1–8. [Google Scholar] [CrossRef]
  11. Sweller, J.; Ayres, P.; Kalyuga, S. Cognitive Load Theory; Springer: New York, NY, USA, 2011. [Google Scholar] [CrossRef]
  12. Sweller, J.; van Merriënboer, J.J.G.; Paas, F. Cognitive architecture and instructional design: 20 years later. Educ. Psychol. Rev. 2019, 31, 261–292. [Google Scholar] [CrossRef]
  13. Atkinson, R.C.; Shiffrin, R.M. Human memory: A proposed system and its control processes. In Psychology of Learning and Motivation; Academic Press: London, UK, 1968; Volume 2, pp. 89–195. [Google Scholar] [CrossRef]
  14. Oberauer, K. Design for a working memory. In Psychology of Learning and Motivation: Advances in Research and Theory; Academic Press: London, UK, 2009; Volume 51, pp. 45–100. [Google Scholar] [CrossRef]
  15. Kalyuga, S.; Ayres, P.; Chandler, P.; Sweller, J. The expertise reversal effect. Educ. Psychol. 2003, 38, 23–32. [Google Scholar] [CrossRef]
  16. Cooper, G.; Sweller, J. Effects of schema acquisition and rule automation on mathematical problem-solving transfer. J. Educ. Psychol. 1987, 79, 347–362. [Google Scholar] [CrossRef]
  17. Tarmizi, R.A.; Sweller, J. Guidance during mathematical problem solving. J. Educ. Psychol. 1988, 80, 424–436. [Google Scholar] [CrossRef]
  18. Herbig, N.; Düwel, T.; Helali, M.; Eckhart, L.; Schuck, P.; Choudhury, S.; Krüger, A. Investigating multi-modal measures for cognitive load detection in e-learning. In Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization, Genoa, Italy, 12–18 July 2020. [Google Scholar] [CrossRef]
  19. Bloom, B.S.; Engelhart, M.D.; Furst, E.J.; Hill, W.H.; Krathwohl, D.R. Handbook I: Cognitive Domain; David McKay: New York, NY, USA, 1956; pp. 483–498. [Google Scholar]
  20. Anderson, L.W.; Krathwohl, D.R. A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives; Addison Wesley Longman: New York, NY, USA, 2001. [Google Scholar]
  21. Winne, P.H.; Hadwin, A.F. The weave of motivation and self-regulated learning. In Motivation and Self-Regulated Learning: Theory, Research, and Applications; Schunk, D.H., Zimmerman, B.J., Eds.; Lawrence Erlbaum: New York, NY, USA, 2008; pp. 297–314. [Google Scholar]
  22. Zimmerman, B.J. Self-regulated learning and academic achievement: An overview. Educ. Psychol. 1990, 25, 3–17. [Google Scholar] [CrossRef]
  23. Winne, P.H. Improving measurements of self-regulated learning. Educ. Psychol. 2010, 45, 267–276. [Google Scholar] [CrossRef]
  24. Baker, R.; Xu, D.; Park, J.; Yu, R.; Li, Q.; Cung, B.; Fischer, C.; Rodriguez, F.; Warschauer, M.; Smyth, P. The benefits and caveats of using clickstream data to understand student self-regulatory behaviors: Opening the black box of learning processes. Int. J. Educ. Technol. High. Educ. 2020, 17, 13. [Google Scholar] [CrossRef]
  25. Chen, F.; Zhou, J.; Wang, Y.; Yu, K.; Arshad, S.Z.; Khawaji, A.; Conway, D. Robust Multimodal Cognitive Load Measurement; Springer Publishing Company: Cham, Switzerland, 2016. [Google Scholar] [CrossRef]
  26. Cicchinelli, A.; Veas, E.; Pardo, A.; Pammer-Schindler, V.; Fessl, A.; Barreiros, C.; Lindstädt, S. Finding traces of self-regulated learning in activity streams. In Proceedings of the 8th International Conference on Learning Analytics and Knowledge, Sydney, NSW, Australia, 5–9 March 2018. [Google Scholar] [CrossRef]
  27. Goda, Y.; Yamada, M.; Kato, H.; Matsuda, T.; Saito, Y.; Miyagawa, H. Procrastination and other learning behavioral types in e-learning and their relationship with learning outcomes. Learn. Individ. Differ. 2015, 37, 72–80. [Google Scholar] [CrossRef]
  28. Gašević, D.; Jovanović, J.; Pardo, A.; Dawson, S. Detecting learning strategies with analytics: Links with self-reported measures and academic performance. J. Learn. Anal. 2017, 4, 113–128. [Google Scholar] [CrossRef]
  29. Crossley, S.A.; Paquette, L.; Dascalu, M.; McNamara, D.S.; Baker, R.S. Combining click-stream data with NLP tools to better understand MOOC completion. In Proceedings of the 6th International Conference on Learning Analytics and Knowledge, Edinburgh, UK, 25–29 April 2016. [Google Scholar] [CrossRef]
  30. Blikstein, P.; Worsley, M. Multimodal learning analytics and education data mining: Using computational technologies to measure complex learning tasks. J. Learn. Anal. 2016, 3, 220–238. [Google Scholar] [CrossRef]
  31. Giannakos, M.; Çukurova, M. The role of learning theory in multimodal learning analytics. Br. J. Educ. Technol. 2023, 54, 1246–1267. [Google Scholar] [CrossRef]
  32. Caskurlu, S.; Ocak, C.; Dai, C.P. The scope of multimodal learning analytics in K–8: A systematic review. J. Learn. Anal. 2025, 12, 224–236. [Google Scholar] [CrossRef]
  33. Çukurova, M.; Giannakos, M.; Martinez-Maldonado, R. The promise and challenges of multimodal learning analytics. Br. J. Educ. Technol. 2020, 51, 1441–1449. [Google Scholar] [CrossRef]
  34. Giannakos, M.N.; Sharma, K.; Pappas, I.O.; Kostakos, V.; Velloso, E. Multimodal data as a means to understand the learning experience. Int. J. Inf. Manag. 2019, 48, 108–119. [Google Scholar] [CrossRef]
  35. Larmuseau, C.; Cornelis, J.; Lancieri, L.; Desmet, P.; Depaepe, F. Multimodal learning analytics to investigate cognitive load during online problem solving. Br. J. Educ. Technol. 2020, 51, 1548–1562. [Google Scholar] [CrossRef]
  36. Ahn, B.T.; Harley, J.M. Facial expressions when learning with a queer history app: Application of the control value theory of achievement emotions. Br. J. Educ. Technol. 2020, 51, 1563–1576. [Google Scholar] [CrossRef]
  37. Emerson, A.; Cloude, E.B.; Azevedo, R.; Lester, J. Multimodal learning analytics for game-based learning. Br. J. Educ. Technol. 2020, 51, 1505–1526. [Google Scholar] [CrossRef]
  38. Yan, L.; Zhao, L.; Gašević, D.; Martinez-Maldonado, R. Scalability, sustainability, and ethicality of multimodal learning analytics. In Proceedings of the 12th International Learning Analytics and Knowledge Conference, Online, 21–25 March 2022. [Google Scholar] [CrossRef]
  39. Sharma, K.; Giannakos, M.N. Multimodal data capabilities for learning: What can multimodal data tell us about learning? Br. J. Educ. Technol. 2020, 51, 1450–1484. [Google Scholar] [CrossRef]
  40. Azevedo, R. Defining and measuring engagement and learning in science: Conceptual, theoretical, methodological, and analytical issues. Educ. Psychol. 2015, 50, 84–94. [Google Scholar] [CrossRef]
  41. Siemens, G.; Baker, R.S. Learning analytics and educational data mining: Towards communication and collaboration. In Proceedings of the 2nd International Conference on Learning Analytics and Knowledge, Vancouver, BC, Canada, 29 April–2 May 2012. [Google Scholar]
  42. Baker, R.S.; Inventado, P.S. Educational data mining and learning analytics. In Learning Analytics: From Research to Practice; Larusson, J.A., White, B., Eds.; Springer: New York, NY, USA, 2014; pp. 61–75. [Google Scholar] [CrossRef]
  43. Romero, C.; Ventura, S. Educational data mining and learning analytics: An updated survey. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2020, 10, e1355. [Google Scholar] [CrossRef]
  44. Rane, N.L.; Mallick, S.K.; Kaya, O.; Rane, J. Emerging trends and future directions in machine learning and deep learning architectures. In Applied Machine Learning and Deep Learning: Architectures and Techniques; Deep Science Publishing: London, UK, 2024; pp. 192–211. [Google Scholar] [CrossRef]
  45. Heaton, J. Ian Goodfellow, Yoshua Bengio, and Aaron Courville: Deep learning. Genet. Program. Evolvable Mach. 2016, 19, 305–307. [Google Scholar] [CrossRef]
  46. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  47. Lin, Y.; Chen, H.; Xia, W.; Lin, F.; Wang, Z.; Liu, Y. A comprehensive survey on deep learning techniques in educational data mining. Data Sci. Eng. 2025, 10, 564–590. [Google Scholar] [CrossRef]
  48. Patil, D.; Rane, N.L.; Desai, P.; Rane, J. Machine learning and deep learning: Methods, techniques, applications, challenges, and future research opportunities. In Trustworthy Artificial Intelligence in Industry and Society; Deep Science Publishing: London, UK, 2024; pp. 28–81. [Google Scholar] [CrossRef]
  49. Zhou, Y.; Ma, G.F.; Wen, X.; Yang, X.H.; Zhang, Y.C. Sequential recommender systems: A methodological taxonomy and research frontiers. Comput. Sci. Rev. 2026, 59, 100818. [Google Scholar] [CrossRef]
  50. Zimdars, A.; Chickering, D.M.; Meek, C. Using temporal data for making recommendations. In Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence (UAI 2001), Stanford, CA, USA, 30 July–2 August 2001. [Google Scholar]
  51. Shani, G.; Heckerman, D.; Brafman, R.I. An MDP-based recommender system. J. Mach. Learn. Res. 2005, 6, 1265–1295. [Google Scholar]
  52. Begleiter, R.; El-Yaniv, R.; Yona, G. On prediction using variable order Markov models. J. Artif. Intell. Res. 2004, 22, 385–421. [Google Scholar] [CrossRef]
  53. Fang, H.; Zhang, D.; Shu, Y.; Guo, G. Deep learning for sequential recommendation: Algorithms, influential factors, and evaluations. ACM Trans. Inf. Syst. 2020, 39, 10. [Google Scholar] [CrossRef]
  54. Hidasi, B.; Karatzoglou, A.; Baltrunas, L.; Tikk, D. Session-based recommendations with recurrent neural networks. arXiv 2016. [Google Scholar] [CrossRef]
  55. Li, M.J.; Li, S.T.; Yang, A.C.M.; Huang, A.Y.Q.; Yang, S.J.H. Trustworthy and explainable AI for learning analytics. In Proceedings of the LAK 2024 Workshops: Joint Proceedings of the LAK 2024 Workshops, Kyoto, Japan, 18–22 March 2024. [Google Scholar]
  56. Smirnova, E.; Vasile, F. Contextual sequence modeling for recommendation with recurrent neural networks. In Proceedings of the 2017 Workshop on Deep Learning for Recommender Systems (DLRS 2017), Como, Italy, 27 August 2017. [Google Scholar] [CrossRef]
  57. Cui, Q.; Wu, S.; Liu, Q.; Zhong, W.; Wang, L. MV-RNN: A multi-view recurrent neural network for sequential recommendation. IEEE Trans. Knowl. Data Eng. 2020, 32, 317–331. [Google Scholar] [CrossRef]
  58. Zhu, Q.; Zhou, X.; Song, Z.; Tan, J.; Guo, L. DAN: Deep attention neural network for news recommendation. Proc. AAAI Conf. Artif. Intell. 2019, 33, 5973–5980. [Google Scholar] [CrossRef]
  59. McNamara, D.S.; Allen, L.K.; Crossley, S.A.; Dascalu, M.; Perret, C.A. Natural language processing and learning analytics. In Handbook of Learning Analytics; Lang, C., Siemens, G., Wise, A.F., Gašević, D., Eds.; Society for Learning Analytics Research (SoLAR): Beaumont, AB, Canada, 2017; pp. 93–104. [Google Scholar] [CrossRef]
  60. Landauer, T.K.; McNamara, D.S.; Dennis, S.; Kintsch, W. (Eds.) Handbook of Latent Semantic Analysis; Lawrence Erlbaum Associates: Mahwah, NJ, USA, 2007. [Google Scholar]
  61. McNamara, D.S. Computational methods to extract meaning from text and advance theories of human cognition. Top. Cogn. Sci. 2011, 2, 3–17. [Google Scholar] [CrossRef]
  62. Yang, B. Learning Analytics Through Machine Learning and Natural Language Processing. Doctoral Dissertation, University of South Carolina, Columbia, SC, USA, 2023. [Google Scholar]
  63. Dyulicheva, Y.Y.; Bilashova, E.A. Learning analytics of MOOCs based on natural language processing. In Proceedings of the 4th Workshop for Young Scientists in Computer Science and Software Engineering, Kryvyi Rih, Ukraine, 18 December 2021. [Google Scholar]
  64. Beinborn, L.; Hollenstein, N. Cognitive Plausibility in Natural Language Processing; Springer: Cham, Switzerland, 2024. [Google Scholar] [CrossRef]
  65. Ouhaichi, H.; Spikol, D.; Vogel, B. Research trends in multimodal learning analytics. Comput. Educ. Artif. Intell. 2023, 4, 100136. [Google Scholar] [CrossRef]
  66. Worsley, M.; Martinez-Maldonado, R.; D’Angelo, C. A new era in multimodal learning analytics: Twelve core commitments to ground and grow MMLA. J. Learn. Anal. 2021, 8, 10–27. [Google Scholar] [CrossRef]
  67. Oviatt, S.; Grafsgaard, J.; Chen, L.; Ochoa, X. Multimodal learning analytics: Assessing learners’ mental state during the process of learning. In The Handbook of Multimodal-Multisensor Interfaces: Signal Processing, Architectures, and Detection of Emotion and Cognition; Oviatt, S., Schuller, B., Cohen, P.R., Eds.; ACM Books: New York, NY, USA, 2018; Volume 2, pp. 331–374. [Google Scholar] [CrossRef]
  68. Ochoa, X. Multimodal learning analytics: Rationale, process, examples, and direction. In Handbook of Learning Analytics, 2nd ed.; Lang, C., Siemens, G., Wise, A.F., Gašević, D., Merceron, A., Eds.; Society for Learning Analytics Research (SoLAR): Beaumont, AB, Canada, 2022. [Google Scholar] [CrossRef]
  69. VanLehn, K. The relative effectiveness of human tutoring, intelligent tutoring systems and other tutoring systems. Educ. Psychol. 2011, 46, 197–221. [Google Scholar] [CrossRef]
  70. Kem, D. Personalised and adaptive learning: Emerging learning platforms in the era of digital and smart learning. Int. J. Soc. Sci. Humanit. Res. 2022, 5, 385–391. [Google Scholar] [CrossRef]
  71. Alqahtani, R.; Kaliappen, N.; Alqahtani, M. A review of the quality of adaptive learning tools over non-adaptive learning tools. Int. J. Qual. Res. 2021, 15, 45. [Google Scholar] [CrossRef]
  72. Yildirim-Erbasli, S.N.; Bulut, O. Conversation-based assessment: A novel approach to boosting test-taking effort in digital formative assessment. Comput. Educ. Artif. Intell. 2023, 4, 100135. [Google Scholar] [CrossRef]
  73. Yildirim-Erbasli, S.N.; Bulut, O.; Demmans Epp, C.; Cui, Y. Conversation-based assessments in education: Design, implementation, and cognitive walkthroughs for usability testing. J. Educ. Technol. Syst. 2023, 52, 27–51. [Google Scholar] [CrossRef]
  74. Yildirim-Erbasli, S.N.; Bulut, O.; Demmans Epp, C.; Cui, Y. Advancing higher education students’ assessment experiences with conversational agents. Educ. Technol. Res. Dev. 2025, 73, 1811–1834. [Google Scholar] [CrossRef]
  75. Teasley, S.D. Student-facing dashboards: One size fits all? Technol. Knowl. Learn. 2017, 22, 377–384. [Google Scholar] [CrossRef]
  76. Verbert, K.; Govaerts, S.; Duval, E.; Santos, J.L.; Van Assche, F.; Parra, G.; Klerkx, J. Learning dashboards: An overview and future research opportunities. Pers. Ubiquitous Comput. 2014, 18, 1499–1514. [Google Scholar] [CrossRef]
  77. Mhagama, J.T.; Garg, K. A systematic review of educational recommender systems: Techniques, target users, and emerging trends in personalized learning. Int. J. Technol. Educ. Sci. 2025, 2, 79–98. [Google Scholar]
  78. Urdaneta-Ponte, M.C.; Mendez-Zorrilla, A.; Oleagordia-Ruiz, I. Recommendation systems for education: Systematic review. Electronics 2021, 10, 1611. [Google Scholar] [CrossRef]
  79. Harrathi, M.; Braham, R. Recommenders in improving students’ engagement in large scale open learning. Procedia Comput. Sci. 2021, 192, 1121–1131. [Google Scholar] [CrossRef]
  80. Zou, L.; Xia, L.; Ding, Z.; Song, J.; Liu, W.; Yin, D. Reinforcement learning to optimize long-term user engagement in recommender systems. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Anchorage, AK, USA, 4–8 August 2019. [Google Scholar] [CrossRef]
  81. Baneres, D.; Caballé, S.; Clarisó, R. Towards a learning analytics support for intelligent tutoring systems on MOOC platforms. In Proceedings of the 10th International Conference on Complex, Intelligent, and Software Intensive Systems, Fukuoka, Japan, 6–8 July 2016. [Google Scholar] [CrossRef]
  82. Mavroudi, A.; Giannakos, M.; Krogstie, J. Supporting adaptive learning pathways through the use of learning analytics: Developments, challenges and future opportunities. Interact. Learn. Environ. 2018, 26, 206–220. [Google Scholar] [CrossRef]
  83. Ruan, S.; Jiang, L.; Xu, J.; Tham, B.J.K.; Qiu, Z.; Zhu, Y.; Landay, J.A. Quizbot: A dialogue-based adaptive learning system for factual knowledge. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019. [Google Scholar] [CrossRef]
  84. Paulsen, L.; Lindsay, E. Learning analytics dashboards are increasingly becoming about learning and not just analytics—A systematic review. Educ. Inf. Technol. 2024, 29, 14279–14308. [Google Scholar] [CrossRef]
  85. Dey, A.; Ganguly, A.; Banik, I.R.; Bhuiya, S.; Sengupta, S.; Das, R. Smart recommendation system in e-learning using machine learning and data analytics. SN Comput. Sci. 2025, 6, 706. [Google Scholar] [CrossRef]
  86. Wijekumar, K.; Meyer, B.J.; Lei, P.; Cheng, W.; Ji, X.; Joshi, R.M. Evidence of an intelligent tutoring system as a mindtool to promote strategic memory of expository texts and comprehension with children in grades 4 and 5. J. Educ. Comput. Res. 2017, 55, 1022–1048. [Google Scholar] [CrossRef]
  87. McCarthy, K.S.; Likens, A.D.; Johnson, A.M.; Guerrero, T.A.; McNamara, D.S. Metacognitive overload!: Positive and negative effects of metacognitive prompts in an intelligent tutoring system. Int. J. Artif. Intell. Educ. 2018, 28, 420–438. [Google Scholar] [CrossRef]
  88. Khasawneh, Y.J.A.; Khasawneh, M.A.S. Cognitive load analysis of adaptive learning technologies in special education classrooms: A quantitative approach. Int. J. Adv. Appl. Sci. 2024, 11, 34–41. [Google Scholar] [CrossRef]
  89. Jung, E.; Lim, R.; Kim, D. A schema-based instructional design model for self-paced learning environments. Educ. Sci. 2022, 12, 271. [Google Scholar] [CrossRef]
  90. Kalyuga, S. Assessment of learners’ organised knowledge structures in adaptive learning environments. Appl. Cogn. Psychol. 2006, 20, 333–342. [Google Scholar] [CrossRef]
  91. Rosen, Y.; Rushkin, I.; Rubin, R.; Munson, L.; Ang, A.; Weber, G.; Tingley, D. The effects of adaptive learning in a massive open online course on learners’ skill development. In Proceedings of the 5th Annual ACM Conference on Learning at Scale, London, UK, 26–28 June 2018. [Google Scholar] [CrossRef]
  92. Toreini, P.; Langner, M.; Maedche, A.; Morana, S.; Vogel, T. Designing attentive information dashboards. J. Assoc. Inf. Syst. 2022, 23, 521–552. [Google Scholar] [CrossRef]
  93. Chen, Y.; Li, X.; Liu, J.; Ying, Z. Recommendation system for adaptive learning. Appl. Psychol. Meas. 2018, 42, 24–41. [Google Scholar] [CrossRef]
Table 1. Key dimensions and characteristics of learning analytics vs. cognitive learning analytics.

Dimension | Learning Analytics | Cognitive Learning Analytics
Primary Goals | To monitor, predict, and enhance learning processes, and optimize educational environments. | To interpret learner behavior through cognitive frameworks, providing explanatory insights and informing instructional decisions.
Degree of Theoretical Grounding | Moderate; primarily data-driven with limited theoretical anchoring. | High; explicitly grounded in cognitive science and learning theory.
Data Modalities | Behavioral and multimodal data. | Same as learning analytics, augmented by theoretically mapped cognitive constructs.
Validation Practices | Cross-validation, predictive performance metrics, and correlational analyses. | Same as learning analytics, augmented by alignment with cognitive theory and (quasi)experimental evaluation.
Typical Outputs | Dashboards, reports, visualizations, and predictive models. | Theory-driven interpretations of cognitive, metacognitive, and affective states.
Table 2. Illustrative mapping of cognitive constructs to data, modeling, and validation in CLA.

Cognitive Construct | Observable Data | Modeling | Validation Strategy
Cognitive load | Response time, error rates | Deep learning | Experimental manipulation
Attention | Eye-tracking, time on task | Multimodal analysis | Comparison to eye-tracking benchmarks
Metacognition | Hint requests, self-reports | Bayesian modeling | Comparison with metacognitive judgments
Self-regulation | Pauses, note-taking | Natural language processing | Questionnaires
Knowledge state | Response accuracy, hint use | Machine learning | Predictive accuracy
Strategy use | Problem-solving steps | Sequence analysis | Expert labeling
Engagement | Interaction frequency, behavioral sequences | Neural networks | Behavioral validation, performance association

Note. The construct mapping matrix is extensible. The constructs, data types, modeling approaches, and validation strategies listed are illustrative examples.