Open Access Article
Interpretable Knowledge Tracing via Transformer-Bayesian Hybrid Networks: Learning Temporal Dependencies and Causal Structures in Educational Data
by Nhu Tam Mai 1, Wenyang Cao 1,* and Wenhe Liu 2
1 Rossier School of Education, University of Southern California, Los Angeles, CA 90007, USA
2 School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(17), 9605; https://doi.org/10.3390/app15179605
Submission received: 26 June 2025 / Revised: 26 August 2025 / Accepted: 27 August 2025 / Published: 31 August 2025
Abstract
Knowledge tracing, the computational modeling of student learning progression through sequential educational interactions, represents a critical component for adaptive learning systems and personalized education platforms. However, existing approaches face a fundamental trade-off between predictive accuracy and interpretability: deep sequence models excel at capturing complex temporal dependencies in student interaction data but lack transparency in their decision-making processes, while probabilistic graphical models provide interpretable causal relationships but struggle with the complexity of real-world educational sequences. We propose a hybrid architecture that integrates transformer-based sequence modeling with structured Bayesian causal networks to overcome this limitation. Our dual-pathway design employs a transformer encoder to capture complex temporal patterns in student interaction sequences, while a differentiable Bayesian network explicitly models prerequisite relationships between knowledge components. These pathways are unified through a cross-attention mechanism that enables bidirectional information flow between temporal representations and causal structures. We introduce a joint training objective that simultaneously optimizes sequence prediction accuracy and causal graph consistency, ensuring learned temporal patterns align with interpretable domain knowledge. The model undergoes pre-training on 3.2 million student–problem interactions from diverse MOOCs to establish foundational representations, followed by domain-specific fine-tuning. Comprehensive experiments across mathematics, computer science, and language learning demonstrate substantial improvements: 8.7% increase in AUC over state-of-the-art knowledge tracing models (0.847 vs. 0.779), 12.3% reduction in RMSE for performance prediction, and 89.2% accuracy in discovering expert-validated prerequisite relationships. The model achieves a 0.763 F1-score for early at-risk student identification, outperforming baselines by 15.4%. This work demonstrates that sophisticated temporal modeling and interpretable causal reasoning can be effectively unified for educational applications.
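The abstract describes a dual-pathway design: a transformer encoder over student interaction sequences, a differentiable Bayesian network over prerequisite relationships, cross-attention between the two, and a joint objective combining prediction accuracy with causal graph consistency. The sketch below is only an illustration of how such a design could be wired together in PyTorch; the class names, dimensions, soft adjacency matrix, and NOTEARS-style acyclicity penalty are assumptions standing in for the authors' actual implementation, which is not given here.

```python
# Hedged sketch of the dual-pathway architecture described in the abstract.
# Module names, dimensions, and the DAG penalty are illustrative assumptions,
# not the published implementation.
import torch
import torch.nn as nn

class HybridKnowledgeTracer(nn.Module):
    def __init__(self, num_skills, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        # Temporal pathway: embed (skill, correctness) tokens, encode with a transformer.
        self.interaction_emb = nn.Embedding(num_skills * 2, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                               dim_feedforward=256, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, n_layers)
        # Causal pathway: learnable soft adjacency over knowledge components,
        # a stand-in for the differentiable Bayesian prerequisite network.
        self.adjacency = nn.Parameter(torch.zeros(num_skills, num_skills))
        self.skill_emb = nn.Embedding(num_skills, d_model)
        # Cross-attention: temporal states attend over structure-aware skill representations.
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.readout = nn.Linear(d_model, num_skills)

    def forward(self, skills, correct):
        # skills, correct: (batch, seq_len) integer tensors.
        x = self.interaction_emb(skills * 2 + correct)        # fold correctness into the token id
        h = self.encoder(x)                                   # temporal representations
        A = torch.sigmoid(self.adjacency)                     # soft prerequisite weights
        skill_repr = A @ self.skill_emb.weight                # propagate prerequisite information
        skill_repr = skill_repr.unsqueeze(0).expand(h.size(0), -1, -1)
        fused, _ = self.cross_attn(h, skill_repr, skill_repr) # fuse temporal and causal pathways
        return torch.sigmoid(self.readout(fused))             # P(correct) per skill at each step

    def dag_penalty(self):
        # NOTEARS-style acyclicity term, used here as a surrogate for "causal graph consistency".
        A = torch.sigmoid(self.adjacency)
        return torch.trace(torch.matrix_exp(A * A)) - A.size(0)

def joint_loss(model, skills, correct, next_skills, next_correct, lam=0.1):
    # Joint objective: next-response prediction loss plus graph-consistency penalty.
    preds = model(skills, correct)                                    # (batch, seq_len, num_skills)
    p = preds.gather(-1, next_skills.unsqueeze(-1)).squeeze(-1)       # probability for the attempted skill
    bce = nn.functional.binary_cross_entropy(p, next_correct.float())
    return bce + lam * model.dag_penalty()
```

In this sketch the weighting lam between the prediction term and the acyclicity term plays the role of the paper's joint training objective; how the authors actually balance the two terms, and how their Bayesian network is parameterized, is not specified in the abstract.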