DK-PRACTICE: An Intelligent Platform for Knowledge Tracing and Educational Content Recommendation: A Case Study in Higher Education
Abstract
1. Introduction
- RQ1:
- How can a fast and efficient knowledge tracing model, based on a BoW-inspired representation of student knowledge, be developed to effectively identify learning gaps through Pre- and Post-test assessments?
- RQ2:
- What is the impact of personalized content recommendations generated by the DK-PRACTICE platform, and how do they improve learning outcomes?
- RQ3:
- How could the DK-PRACTICE platform enhance the educational process, and how could its use be extended to other educational topics?
2. Related Works
3. Background Technologies
4. The DK-PRACTICE Platform
- Accurately estimate students’ current knowledge states;
- Predict their future performance on subsequent personalized questions;
- Recommend targeted instructional content to address specific learning objectives, concepts, and skills where deficiencies have been identified;
- Support multiple user roles (administrator, tutor, and student).
4.1. The DK-PRACTICE Platform Architecture and Functionalities
- The knowledge tracing component to predict students’ performance and to estimate students’ knowledge state, and
- The recommendation component to produce educational content recommendations.
4.2. Data Model
- Examination questions,
- The corresponding underlying concepts,
- The chapters containing the educational content of the course, and
- Students’ submitted answers across different examination periods.
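For illustration only, the four entity types above could be sketched as simple records. All field names below are assumptions, since the paper does not publish its actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the DK-PRACTICE data model entities;
# field names are illustrative, not the platform's real schema.

@dataclass
class Chapter:
    chapter_id: str          # e.g. "4.3", as in the concept table
    title: str

@dataclass
class Concept:
    name: str
    chapter_id: str          # chapter whose educational content teaches it

@dataclass
class Question:
    question_id: int
    text: str
    concepts: list = field(default_factory=list)  # underlying concept names

@dataclass
class Answer:
    student_id: str
    question_id: int
    correct: int             # 1 = correct, 0 = incorrect
    exam_period: str         # examination period identifier
```

A question linked to its concepts and a submitted answer can then be represented as plain objects, which is enough to drive the sample-generation procedure described later.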
4.3. Knowledge Tracing Component
- If r = 0 (incorrect answer), then x_{2q} = −1 and x_{2q+1} = +1.
- If r = 1 (correct answer), then x_{2q} = +1 and x_{2q+1} = −1.
- A subset of 10 questions is randomly selected to form the query set for training. The remaining questions are designated as the evaluation set.
- The PB-BoW vector representation, x, is generated for the query set. This vector has a length of 2Q, where Q is the total number of questions in the question bank, and serves as the input to the classification module.
- The target vector, t, for the classification module is constructed using the responses from the evaluation set. All elements of t are equal to −1 except for each evaluation question q with response r_q, where we set t_q = r_q. Therefore, unanswered questions have target value −1, allowing us to exclude them when calculating the loss of the classifier.
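The input and target construction above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code; question indices are assumed 0-based, and the exact sign convention of the paired-bipolar slots is an assumption:

```python
# Sketch of the PB-BoW input encoding and the masked target vector.
# Assumptions: 0-based question indices; slot 2q carries +1 for a
# correct answer and -1 for an incorrect one (2q+1 gets the opposite).

def encode_pb_bow(query_pairs, num_questions):
    """Build the PB-BoW input vector x of length 2*Q from (q, r) pairs."""
    x = [0.0] * (2 * num_questions)
    for q, r in query_pairs:
        if r == 0:                        # incorrect response
            x[2 * q], x[2 * q + 1] = -1.0, 1.0
        else:                             # correct response
            x[2 * q], x[2 * q + 1] = 1.0, -1.0
    return x

def encode_targets(eval_pairs, num_questions):
    """Target vector t of length Q; -1 marks entries excluded from the loss."""
    t = [-1] * num_questions
    for q, r in eval_pairs:
        t[q] = r
    return t
```

Unanswered questions keep the zero pair in x and the −1 entry in t, so the classifier is neither fed nor penalized on them.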
The Classification Module
- Input Layer: 2Q neurons, where Q is the total number of questions in the question bank.
- Hidden Layers: Three fully-connected hidden layers.
- – Layer 1: 15 neurons.
- – Layer 2: 10 neurons.
- – Layer 3: 5 neurons.
- Output Layer: Q neurons. Each neuron corresponds to a question, and its value, between 0 and 1, predicts the probability of a correct response.
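A forward pass through this architecture can be sketched without any deep learning framework. The layer widths (2Q → 15 → 10 → 5 → Q) come from the description above; the hidden activation (ReLU) and the random initialization are assumptions, since the paper fixes only the layer sizes and the [0, 1] output range:

```python
import math
import random

# Minimal forward pass for the stated architecture 2Q -> 15 -> 10 -> 5 -> Q.
# ReLU hidden units and uniform init are assumptions for illustration.

def init_layer(n_in, n_out, rng):
    scale = 1.0 / math.sqrt(n_in)
    w = [[rng.uniform(-scale, scale) for _ in range(n_in)] for _ in range(n_out)]
    b = [0.0] * n_out
    return w, b

def forward(x, layers):
    h = x
    for i, (w, b) in enumerate(layers):
        z = [sum(wi * hi for wi, hi in zip(row, h)) + bi
             for row, bi in zip(w, b)]
        if i < len(layers) - 1:
            h = [max(0.0, v) for v in z]                  # ReLU on hidden layers
        else:
            h = [1.0 / (1.0 + math.exp(-v)) for v in z]   # sigmoid -> (0, 1)
    return h

Q = 120                                   # question-bank size of the COaA course
rng = random.Random(0)
sizes = [2 * Q, 15, 10, 5, Q]
layers = [init_layer(a, b, rng) for a, b in zip(sizes, sizes[1:])]
probs = forward([0.0] * (2 * Q), layers)  # untrained net on an all-zero input
```

With an all-zero input and zero biases, every output neuron sits at sigmoid(0) = 0.5, i.e., maximal uncertainty, which is the natural starting state before any interactions are observed.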
| Algorithm 1 Generate Training Samples for PB-BoW KT | |
| 1: | procedure GenerateSamples(X_s) |
| 2: | Input: Questions and responses data X_s = {(q_i, r_i), i = 1, …, n_s} for student s. The values q_i are the indices of the questions answered by the student and r_i are the corresponding responses (0 or 1); n_s is the number of questions answered by this student. |
| 3: | Output: Dataset D |
| 4: | Let Q be the total number of unique questions in the question bank. |
| 5: | Set D = ∅ |
| 6: | for j = 1 to 200 do |
| 7: | Randomly select a subset of 10 questions to form the query set S_query. |
| 8: | The remaining questions form the evaluation set S_eval. |
| 9: | Initialize input vector x of size 2Q with zeros. |
| 10: | for each question-response pair (q_i, r_i) ∈ S_query do |
| 11: | Generate Paired-Bipolar representation: |
| 12: | if r_i = 0 then ▹ Incorrect response |
| 13: | x_{2q_i} ← −1 |
| 14: | x_{2q_i+1} ← +1 |
| 15: | else ▹ Correct response |
| 16: | x_{2q_i} ← +1 |
| 17: | x_{2q_i+1} ← −1 |
| 18: | end if |
| 19: | end for |
| 20: | Initialize target vector t of size Q with −1. |
| 21: | for each question-response pair (q_i, r_i) ∈ S_eval do |
| 22: | t_{q_i} ← r_i |
| 23: | end for |
| 24: | Add the pair (x, t) to D. |
| 25: | end for |
| 26: | return D |
| 27: | end procedure |
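Algorithm 1 translates almost line for line into Python. The sketch below is illustrative rather than the authors' implementation; it assumes 0-based question indices and the same paired-bipolar sign convention as above:

```python
import random

# Sketch of Algorithm 1: generate (input, target) training pairs for one
# student by repeatedly splitting their interactions into query/evaluation
# sets. Index base and +/-1 sign convention are assumptions.

def generate_samples(answered, num_questions, n_samples=200,
                     query_size=10, rng=None):
    """answered: list of (question_index, response) pairs for one student."""
    rng = rng or random.Random()
    dataset = []
    for _ in range(n_samples):
        query = rng.sample(answered, query_size)      # 10 random interactions
        evaluation = [p for p in answered if p not in query]
        x = [0.0] * (2 * num_questions)               # PB-BoW input vector
        for q, r in query:
            x[2 * q], x[2 * q + 1] = (1.0, -1.0) if r == 1 else (-1.0, 1.0)
        t = [-1] * num_questions                      # -1 = excluded from loss
        for q, r in evaluation:
            t[q] = r
        dataset.append((x, t))
    return dataset
```

Each of the 200 samples is a different random view of the same student, which augments the training data substantially compared with one sample per student.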
4.4. Recommendation Component
4.4.1. Next Question Recommendation
- the set of Potential questions, P, which remain eligible to be asked as the next question;
- the set of Answered questions, A, that are ineligible to be asked again.
- The asked question q is moved from P to A: P ← P \ {q} and A ← A ∪ {q}.
- For each q ∈ P, the recommendation model computes the predicted probability p_q of a correct response based on the student’s estimated knowledge state vector x.
- The next question is selected as the one whose probability of correct response is closest to 0.5, i.e., q* = argmin_{q ∈ P} |p_q − 0.5|, thereby targeting questions for which the model is maximally uncertain.
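The selection rule above can be sketched as a short routine. The `predict` argument stands in for the trained classifier (here any callable mapping a question index to a probability), which is an assumption of this illustration:

```python
# Uncertainty-targeted next-question selection: pick the potential question
# whose predicted success probability is closest to 0.5, then move it from
# the Potential set P to the Answered set A.

def next_question(potential, answered, predict):
    """Select q* = argmin over q in P of |predict(q) - 0.5| and update P, A."""
    q_star = min(potential, key=lambda q: abs(predict(q) - 0.5))
    potential.remove(q_star)   # asked question leaves P ...
    answered.add(q_star)       # ... and enters A, so it cannot repeat
    return q_star
```

For example, with predicted probabilities {0: 0.9, 1: 0.55, 2: 0.1, 3: 0.48}, question 3 is chosen, since |0.48 − 0.5| is the smallest gap to maximal uncertainty.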
4.4.2. Educational Content Recommendation
4.5. Knowledge Tracing Experiments and Results
4.6. The DK-PRACTICE Platform Implementation
- with the “administrator” role (Figure 5). Beyond the general oversight of the platform, the administrator is tasked with: (a) the establishment of courses, (b) the registration of tutors, and (c) the allocation of course management responsibilities to the corresponding tutors.
- with the “tutor” role have permission to create a test (Figure 6) by entering all the relevant information, to set parameters such as the starting and ending date and time (Figure 7), and to monitor the students’ performance individually and in groups per test (Figure 8). The ability to monitor test results gives the tutor an overview of the class’s knowledge state regarding the specific course, and indeed for each educational object or concept.
- with the “student” role can choose a course (Figure 9) and take the tests set by the corresponding “tutor” (Figure 10). Additionally, each “student” has access to previous test performances to view the results. In Figure 11, the results of the user Student 1 are shown after running a test (for example, the test on 1 May 2025):
- – The success rate in the test, and whether the questions were answered correctly or incorrectly (green indicates a correct answer, red an incorrect answer).
- – Personalized recommendations are generated by incorporating educational content and estimating the rate of knowledge mastery through the knowledge tracing component. A lower percentage associated with a given educational content item corresponds to a larger proportion of red in the visual bar representation, indicating knowledge gaps in the concepts associated with this educational content.
- – The knowledge state, represented as the percentage of mastery for each concept, is derived from the knowledge tracing component following the completion of the test. The green segment of the bar chart visualizes the proportion of knowledge acquired in the concepts taught within the specific course.
5. The Case Study
5.1. Pre-Test Description
- Pre-Test User Experience
- (a) Navigation on the platform was smooth and seamless.
- (b) The platform’s features met my expectations.
- (c) Using the platform was easy and intuitive.
- (d) The design of the platform is pleasant and user-friendly.
- (e) The layout of the environment was logical and clear.
- Knowledge Assessment and Impact on Learning
- (a) The test accurately captured my knowledge in the course.
- (b) The platform was able to identify my strengths and weaknesses.
- (c) I trust the knowledge assessment process followed by the platform.
- (d) This experience helped me understand where I needed improvement.
- (e) I felt that the platform was adapted to my needs.
- Willingness to Use the Platform in the Future
- (a) I would continue to use the platform in other courses.
- (b) I would recommend the platform to other students.
5.2. Post-Test Description
- Post-Test User Experience
- (a) The presentation of the material was organized and understandable.
- (b) I would use the platform again to test my knowledge.
- (c) My overall experience with the platform has been positive.
- (d) Use of the platform was pleasant.
- (e) I felt that the platform understood my learning needs.
- (f) I feel that the platform has contributed to my progress.
- (g) I have a better picture of my weaknesses after this experience.
- (h) The personalization of the proposals was evident and useful.
- (i) I feel that I am better prepared for the exam.
- Effectiveness of Recommendations
- (a) The content was relevant to the points I was lacking.
- (b) The suggested material helped me to better understand the subject matter.
- (c) The test of the second experiment recorded my progress.
- (d) The proposals met my real needs.
- (e) My progress from the first to the second test was evident.
- (f) I would use the platform again to test my knowledge.
- Willingness to Use the Platform by Students
- (a) I would continue to use the platform in other courses.
- (b) I would recommend the platform to other students.
5.3. Evaluation
- User Experience: In terms of user experience, the evaluation was conducted in two phases. The first phase was implemented during the final stage of system development, during which the platform was subjected to testing to identify usability-related improvements. The evaluation highlighted issues concerning the visual presentation of questions, the display of results upon test completion, and the functionality of the password recovery mechanism. Following the incorporation of the necessary modifications, these issues were resolved, thereby ensuring the platform’s readiness for deployment in the “Computer Organization and Architecture” course.

The second phase of the usability evaluation was carried out by the students who engaged with the platform during the two testing sessions. In both evaluations, students reported a positive user experience with the DK-PRACTICE platform.

The survey results of the “Pre-test”, presented in the chart (Figure 14), demonstrate an overall positive evaluation of the intelligent knowledge tracing platform by higher education students. The highest levels of agreement were recorded for the statement “Using the platform was easy and intuitive”, with 48.39% agreeing and 38.71% strongly agreeing, indicating that the platform’s usability is one of its strongest features. Similarly, the layout of the environment was well received, with 41.94% agreeing and 38.71% strongly agreeing that it was logical and clear.

The design of the platform was also evaluated positively, with 29.03% agreement and 38.71% strong agreement, though this item received the highest proportion of neutral responses (29.03%). Regarding functionality, 35.48% of students agreed and another 35.48% strongly agreed that the platform’s features met their expectations, although 9.68% disagreed, suggesting some room for improvement. Navigation was perceived as smooth and seamless by most respondents (32.26% agreed, 38.71% strongly agreed), although 25.81% remained neutral.
In summary, the findings show that the DK-PRACTICE platform is largely intuitive, user-friendly, and effective, with only a small minority reporting dissatisfaction. These results highlight strengths in ease of use and logical design, while pointing to opportunities for further refinement of features and overall design appeal.

Based on questionnaire answers after the “Post-test” (Figure 15), most respondents expressed satisfaction, with notable percentages agreeing or strongly agreeing with key statements about platform usefulness. Specifically, 72.06% felt the material was well-organized, understandable, and enjoyable to use. Additionally, students overwhelmingly believed the platform was beneficial for their learning; a combined 73.53% of users agreed or strongly agreed that it contributed to their progress and helped them identify their weaknesses more effectively. The most positive feedback centered on the intention to reuse the platform, as 70.59% of respondents would use it again to test their knowledge, demonstrating strong support for its effectiveness as a learning tool.
- Platform Effectiveness: This axis relates to the effectiveness of the services offered, specifically the assessment of knowledge state and the educational recommendations during the pre-/post-tests performed by students.

The findings of the quality assurance questionnaire after the “Pre-test” indicate that students viewed the intelligent knowledge tracing platform DK-PRACTICE primarily positively (Figure 16). Most respondents chose “Agree” on all five items, with especially high agreement for the statements “This experience helped me understand where I needed improvement” and “I trust the knowledge assessment process followed by the platform”, both surpassing 60%. A significant number of students also strongly agreed, especially regarding the platform’s ability to identify individual strengths and weaknesses and accurately assess their knowledge. Neutral responses were moderate, while reports of disagreement or strong disagreement were minimal. These results show that students generally see the platform as effective, reliable, and helpful for their learning, highlighting its potential value in higher education.

The “Post-test” quality assurance questionnaire, given to higher education students, highlights both the high use and perceived effectiveness of the DK-PRACTICE platform’s recommendation system. As shown in Figure 17, most respondents actively engaged with the recommendations. Specifically, 36.76% reported a moderate level of use (score of 3 on a 5-point scale), while a significant number of students indicated a high degree of use, with 30.88% selecting a score of 4 and 16.18% choosing the highest score of 5. These results suggest that the platform’s recommendations were considered valuable enough to encourage active engagement, supporting the platform’s role as a tool for personalized learning.

Further analysis of the survey data, as shown in Figure 18, confirms the high effectiveness of these recommendations from the students’ point of view.
Most students agreed or strongly agreed with the statements, indicating a positive user experience. Notably, over 50% of respondents agreed that “The content was relevant to the points I was lacking” and that “The proposals met my real needs”. Additionally, a large percentage of students felt that “The suggested material helped me to better understand the subject matter” and that “The test of the second experiment recorded my progress”. These results collectively highlight the platform’s success in providing targeted and useful content that genuinely helps students improve academically.
6. Results and Discussion
7. Conclusions and Future Works
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
| AI | Artificial Intelligence |
| BKT | Bayesian Knowledge Tracing |
| BoK | Body of Knowledge |
| BoW | Bag of Words |
| CBoK | Curriculum Body of Knowledge |
| DKT | Deep Knowledge Tracing |
| DKVMN | Dynamic Key-Value Memory Networks |
| EKT | Exercise-Enhanced Sequential Modeling |
| GIKT | Graph-based Interaction model for Knowledge Tracing |
| GPU | Graphics Processing Unit |
| IRT | Item Response Theory |
| KA | Knowledge Area |
| KSGAN | Knowledge Structure-aware Graph Attention Network |
| KT | Knowledge Tracing |
| KT-BiGRU | Knowledge Tracing Bidirectional Gated Recurrent Unit |
| KU | Knowledge Unit |
| ML | Machine Learning |
| PB-BoW | Paired-Bipolar Bag-of-Words |
| RNN | Recurrent Neural Network |
| SAKT | Self-Attentive Knowledge Tracing |
References
- Anbu, K. Enhancing physics education through artificial intelligence tools. Sci. Int. J. Res. 2025, 2, 25–32. [Google Scholar] [CrossRef]
- PhET Interactive Simulations; University of Colorado Boulder: Boulder, CO, USA, 2018; Available online: https://phet.colorado.edu (accessed on 25 July 2025).
- Labster Virtual Labs. 2025. Available online: https://www.labster.com (accessed on 23 December 2025).
- ChatGPT by OpenAI. 2022. Available online: https://chatgpt.com/ (accessed on 23 December 2025).
- Squirrel AI Learning. Available online: https://www.squirrelai.com (accessed on 23 December 2025).
- IBM Watson Education. 2025. Available online: https://www.ibm.com/training/ (accessed on 23 December 2025).
- Corbett, A.T.; Anderson, J.R. Knowledge tracing: Modeling the acquisition of procedural knowledge. User Model. User Adapt. Interact. 1995, 4, 253–278. [Google Scholar] [CrossRef]
- Baker, F.B. The basics of item response theory. In ERIC Clearinghouse on Assessment and Evaluation; ERIC Clearinghouse on Assessment and Evaluation: College Park, MD, USA, 2008. [Google Scholar]
- Piech, C.; Bassen, J.; Huang, J.; Ganguli, S.; Sahami, M.; Guibas, L.; Sohl-Dickstein, J. Deep knowledge tracing. Adv. Neural Inf. Process. Syst. 2015, 28, 505–513. [Google Scholar]
- Yang, Y.; Shen, J.; Qu, Y.; Liu, Y.; Wang, K.; Zhu, Y.; Zhang, W.; Yu, Y. GIKT: A graph-based interaction model for knowledge tracing. arXiv 2020, arXiv:2009.05991. [Google Scholar] [CrossRef]
- Smadi, A.; Al-Qerem, A.; Nabot, A.; Jebreen, I.; Aldweesh, A.; Alauthman, M.; Abaker, A.M.; Al Zuobi, O.R.; Alzghoul, M.B. Unlocking the potential of competency exam data with machine learning: Improving higher education evaluation. Sustainability 2023, 15, 5267. [Google Scholar] [CrossRef]
- Knewton Alta. 2020. Available online: https://www.knewton.com (accessed on 23 December 2025).
- Carnegie Learning—MATHia. 2025. Available online: https://www.carnegielearning.com (accessed on 23 December 2025).
- Fu, L.; Long, T.; Lin, J.; Xia, W.; Dai, X.; Tang, R.; Wang, Y.; Zhang, W.; Yu, Y. AdvKT: An Adversarial Multi-Step Training Framework for Knowledge Tracing. arXiv 2025, arXiv:2401.12578. [Google Scholar]
- Ghosh, S.; Ranjan, P.; Drachsler, H.; Iqbal, Q.; Chakraborty, S.; Yau, J.Y.K. TrueLearn: A family of Bayesian algorithms to match lifelong learners to open educational resources. In AAAI Conference on Artificial Intelligence; AAAI Press: Palo Alto, CA, USA, 2020; Volume 34, pp. 3555–3562. [Google Scholar]
- Li, Z.; Yazdanpanah, V.; Wang, J.; Gu, W.; Shi, L.; Cristea, A.I.; Kiden, S.; Stein, S. TutorLLM: Customizing learning recommendations with knowledge tracing and retrieval-augmented generation. arXiv 2025, arXiv:2502.15709. [Google Scholar]
- Wang, H.; Wu, Q.; Bao, C.; Ji, W.; Zhou, G. Research on knowledge tracing based on learner fatigue state. Complex Intell. Syst. 2025, 11, 226. [Google Scholar] [CrossRef]
- Joint Task Force on Computing Curricula; Association for Computing Machinery (ACM); IEEE Computer Society. Computer Science Curricula 2013: Curriculum Guidelines for Undergraduate Degree Programs in Computer Science; Technical Report; ACM/IEEE: New York, NY, USA, 2013. [Google Scholar]
- Joint Task Force on Computing Curricula; Association for Computing Machinery (ACM); IEEE Computer Society. Computer Science Curricula 2023: CS2023, 7th ed.; Technical Report; ACM/IEEE: New York, NY, USA, 2023. [Google Scholar]
- Corbett, A.T.; Anderson, J.R. Knowledge tracing: Modeling the acquisition of procedural knowledge. User Model. User Adapt. Interact. 1995, 4, 253–278. [Google Scholar] [CrossRef]
- Zhang, J.; Shi, X.; King, I.; Yeung, D.Y. Dynamic key-value memory networks for knowledge tracing. In 26th International Conference on World Wide Web (WWW); ACM: New York, NY, USA, 2017; pp. 765–774. [Google Scholar] [CrossRef]
- Pandey, S.; Karypis, G. Self-attentive knowledge tracing. In Proceedings of the 12th International Conference on Educational Data Mining (EDM), Montréal, QC, Canada, 2–5 July 2019; pp. 384–389. [Google Scholar]
- Harris, Z.S. Distributional structure. Word 1954, 10, 146–162. [Google Scholar] [CrossRef]
- Tong, S.; Zhang, Y.; Chen, E.; Nie, J.Y. EKT: Exercise-enhanced sequential modeling for student performance prediction. In Proceedings of the AAAI Conference on Artificial Intelligence; AAAI Press: Palo Alto, CA, USA, 2018; Volume 34, pp. 2435–2443. Available online: https://aaai.org/papers/11864-exercise-enhanced-sequential-modeling-for-student-performance-prediction/ (accessed on 23 December 2025).
- Delianidi, M.; Diamantaras, K.I. KT-Bi-GRU: Student Performance Prediction with a Bi-Directional Recurrent Knowledge Tracing Neural Network. J. Educ. Data Min. 2023, 15, 1–21. [Google Scholar]
- Reddy, A.A.; Harper, M. ALEKS-Based Placement at the University of Illinois. In Knowledge Spaces: Applications in Education; Falmagne, J.C., Albert, D., Doble, C., Eppstein, D., Hu, X., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 51–68. [Google Scholar] [CrossRef]
- Settles, B.; Meeder, B. A Trainable Spaced Repetition Model for Language Learning. In 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers); Erk, K., Smith, N.A., Eds.; Association for Computational Linguistics: Berlin, Germany, 2016; pp. 1848–1858. [Google Scholar] [CrossRef]
- Mao, S.; Zhan, J.; Deng, Y.; Qin, Y.; Jiang, Y. Improving exercise-level knowledge tracing via knowledge concept-based memory network. Expert Syst. Appl. 2025, 284, 127825. [Google Scholar] [CrossRef]
- Long, T.; Yin, L.; Chang, Y.; Xia, W.; Yu, Y. Simulating Question-answering Correctness with a Conditional Diffusion. In Proceedings of the ACM on Web Conference 2025, New York, NY, USA, 28 April–2 May 2025; pp. 5173–5182. [Google Scholar] [CrossRef]
- Zhou, C.; Liu, X.; Tang, M.; Li, X. The impact of AI-based adaptive learning technologies on motivation and engagement of higher education students. Comput. Educ. 2025, 30, 22735–22752. [Google Scholar] [CrossRef]
- Hegde, V.; Vishrutha, M.; Shanthappa, P.M.; Bhat, R.; Raveendran, N.; Roshin, C. Analysing learning behaviour: A data-driven approach to improve time management and active listening skills in students. MethodsX 2025, 14, 103262. [Google Scholar] [CrossRef]
- Kotsiantis, S.; Pierrakeas, C.; Pintelas, P. Predicting Students’ Performance in Distance Learning Using Machine Learning Techniques. Appl. Artif. Intell. 2004, 18, 411–426. [Google Scholar] [CrossRef]
- Holmes, W.; Bialik, M.; Fadel, C. Artificial Intelligence in Education Promises and Implications for Teaching and Learning; Center for Curriculum Redesign: Boston, MA, USA, 2019. [Google Scholar]
- Cybenko, G. Approximations by superpositions of a sigmoidal function. Math. Control. Signals Syst. 1989, 2, 183–192. [Google Scholar] [CrossRef]
- Hornik, K.; Stinchcombe, M.; White, H. Multilayer feedforward networks are universal approximators. Neural Netw. 1989, 2, 359–366. [Google Scholar] [CrossRef]
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25. [Google Scholar] [CrossRef]
- Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
- Chen, W.; Yang, T. A recommendation system of personalized resource reliability for online teaching system under large-scale user access. Mob. Netw. Appl. 2023, 28, 983–994. [Google Scholar] [CrossRef]
- Hukkeri, G.S.; Goudar, R. Machine Learning-Based Personalized Recommendation System for E-Learners. In Proceedings of the 2022 Third International Conference on Smart Technologies in Computing, Electrical and Electronics (ICSTCEE), Bengaluru, India, 16–17 December 2022; pp. 1–6. [Google Scholar]
- Da Silva, F.L.; Slodkowski, B.K.; Da Silva, K.K.A.; Cazella, S.C. A systematic literature review on educational recommender systems for teaching and learning: Research trends, limitations and opportunities. Educ. Inf. Technol. 2023, 28, 3289–3328. [Google Scholar] [CrossRef]
- Deschênes, M. Recommender systems to support learners’ Agency in a Learning Context: A systematic review. Int. J. Educ. Technol. High. Educ. 2020, 17, 50. [Google Scholar] [CrossRef]
- Troussas, C.; Krouska, A. Path-Based Recommender System for Learning Activities Using Knowledge Graphs. Information 2023, 14, 9. [Google Scholar] [CrossRef]
- Tang, T.Y.; McCalla, G. Smart recommendation for an evolving e-learning system: Architecture and experiment. J. E Learn. 2005, 4, 105–129. [Google Scholar]
- Manouselis, N.; Drachsler, H.; Verbert, K.; Duval, E. Recommender Systems for Learning; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar] [CrossRef]
- Roy, D.; Dutta, M. A systematic review and research perspective on recommender systems. J. Big Data 2022, 9, 59. [Google Scholar] [CrossRef]
- Qader, W.A.; Ameen, M.M.; Ahmed, B.I. An Overview of Bag of Words; Importance, Implementation, Applications, and Challenges. In Proceedings of the 2019 International Engineering Conference (IEC), Erbil, Iraq, 23–25 June 2019; pp. 200–204. [Google Scholar] [CrossRef]
- Stallings, W. Computer Organization and Architecture, 10th ed.; Pearson: London, UK, 2016. [Google Scholar]
| Dataset | Questions | Students (Total) | Students (Train) | Students (Test) | Interactions (Total) | Interactions (Train) | Interactions (Test) |
|---|---|---|---|---|---|---|---|
| ASSISTment 2009 | 101 | 818 | 556 | 262 | 23,722 | 16,511 | 7211 |
| ASSISTment 2017 | 101 | 1641 | 1144 | 497 | 69,712 | 47,856 | 21,856 |
| COaA Course | 120 | 1115 | 891 | 224 | 19,339 | 15,555 | 3784 |
| Dataset | PB-BoW AUC | PB-BoW Acc | KT-BiGRU AUC | KT-BiGRU Acc | SAKT AUC | SAKT Acc | DKT AUC | DKT Acc |
|---|---|---|---|---|---|---|---|---|
| ASSISTment 2009 | 0.765 | 0.760 | 0.753 | 0.713 | 0.723 | 0.759 | 0.723 | 0.747 |
| ASSISTment 2017 | 0.733 | 0.700 | 0.716 | 0.686 | 0.650 | 0.733 | 0.644 | 0.755 |
| COaA Course | 0.805 | 0.787 | 0.755 | 0.719 | 0.770 | 0.739 | 0.756 | 0.738 |
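AUC values like those tabulated above can be computed directly from raw predictions via the rank (Mann–Whitney) form of the statistic. The sketch below is a generic illustration, not the authors' evaluation code:

```python
# AUC via the Mann-Whitney form: the fraction of (positive, negative)
# pairs where the positive example receives the higher score
# (ties count as half a win).

def auc(labels, scores):
    """labels: 0/1 ground truth; scores: predicted P(correct)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For example, labels [1, 1, 0, 0] with scores [0.9, 0.4, 0.6, 0.2] give AUC = 0.75, since three of the four positive/negative pairs are ranked correctly.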
| Concept_Name | Chapter | BOK KA | Knowledge Unit (KU) |
|---|---|---|---|
| CPU structure, Organization and architecture. | 1.1 | AR | Assembly-Level Machine Organization. |
| Microelectronics, Moore’s law. | 1.2 | AR SEP | Digital Logic and Digital Systems. Computing History. |
| History. | 1.3 | SEP | Computing History. |
| Embedded systems, Microcontrollers. | 1.5 | SPD | Embedded Platforms. |
| ARM architecture. | 1.6 | SEP | Computing History. |
| Cloud computing. | 1.7 | SEP | Computing History. |
| Design for performance, Performance balance. | 2.1 | AR | Performance and Energy Efficiency. |
| Performance laws. | 2.3 | AR | Performance and Energy Efficiency. |
| Execution rate, Processor performance, Performance measures, Order execution rate. | 2.4 | AR | Performance and Energy Efficiency |
| Interrupts, Instruction cycles, Instruction phases. | 3.2 | AR | Interfacing and Communication. Assembly Level Machine Organization. |
| QPI. | 3.5 | AR | Interfacing and Communication. |
| PCI. | 3.6 | AR | Interfacing and Communication. |
| Cache mapping. | 4.3 | AR | Memory Hierarchy. |
| Semiconductor memory, ROM. | 5.1 | AR | Memory Hierarchy. |
| Error correction. | 5.2 | AR | Memory Hierarchy. |
| DDR memory, SDRAM. | 5.3 | AR | Memory Hierarchy |
| Flash memory. | 5.4 | AR | Memory Hierarchy. |
| Magnetic disk organization. | 6.1 | AR | Interfacing and Communication. |
| RAID. | 6.2 | AR | Interfacing and Communication. |
| Programmed I/O. | 7.3 | AR | Interfacing and Communication. |
| Model | Accuracy | Mean AUC | p-Value | t-Statistic |
|---|---|---|---|---|
| PB-BoW | 0.787 | 0.805 | - | - |
| DKT | 0.738 | 0.756 | <0.001 | 89.525 |
| SAKT | 0.739 | 0.770 | <0.001 | 63.940 |
| KT-BiGRU | 0.719 | 0.754 | <0.001 | 110.810 |
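The t-statistics reported above come from paired comparisons between PB-BoW and each baseline. The standard paired t-statistic can be sketched as below; the data here are synthetic, and the resampling scheme behind the paper's exact numbers is not reproduced:

```python
import math

# Paired t-statistic over matched samples (e.g., per-run AUC values of two
# models): t = mean(d) / (s_d / sqrt(n)) for differences d = a - b.

def paired_t(a, b):
    """Return the t-statistic for paired samples a, b of equal length n >= 2."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)   # sample variance of d
    return mean / math.sqrt(var / n)
```

Large positive t values with small p-values, as in the table, indicate that PB-BoW's advantage is consistent across the paired runs rather than an artifact of a single split.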
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Delianidi, M.; Diamantaras, K.; Kokkonis, G.; Sidiropoulos, A.; Evangelidis, G.; Karapiperis, D. DK-PRACTICE: An Intelligent Platform for Knowledge Tracing and Educational Content Recommendation: A Case Study in Higher Education. Information 2026, 17, 202. https://doi.org/10.3390/info17020202

