Article

Learning Path Recommendation Enhanced by Knowledge Tracing and Large Language Model

School of Computer Science, South China Normal University, Guangzhou 510631, China
* Author to whom correspondence should be addressed.
Electronics 2025, 14(22), 4385; https://doi.org/10.3390/electronics14224385
Submission received: 11 October 2025 / Revised: 1 November 2025 / Accepted: 5 November 2025 / Published: 10 November 2025
(This article belongs to the Special Issue Data Mining and Recommender Systems)

Abstract

With the development of large language model (LLM) technology, AI-assisted education systems are becoming widely used. Learning Path Recommendation (LPR) is an important task in personalized instruction, and AI-assisted LPR is gaining traction for its ability to generate learning content tailored to a student’s individual needs. However, a native LLM is prone to hallucination, which can prevent it from generating usable learning content; in addition, LLM assessments of students’ knowledge states are usually conservative and carry a large margin of error. To address these issues, this work proposes a novel approach to LPR enhanced by knowledge tracing (KT) and an LLM. Our method operates in a “generate-and-retrieve” manner: the LLM acts as a pedagogical planner that generates contextual reference exercises based on the student’s needs. A retrieval mechanism then constructs the concrete learning path by retrieving the top-N exercises most semantically similar to these references from an established exercise bank, ensuring that the recommendations are both pedagogically sound and practically available. The KT model plays the role of an evaluator in the iterative process. Rather than generating semantic instructions directly, it provides a quantitative, structured performance metric. Specifically, given a candidate learning path generated by the LLM, the KT model simulates the student’s knowledge state after completing the path and computes a knowledge promotion score. This score quantitatively measures the effectiveness of the proposed path for the current student, thereby guiding the refinement of subsequent recommendations. This iterative interaction between the KT model and the LLM continuously refines the candidate learning items until an optimal learning path is generated. Experimental validation on public datasets demonstrates that our model surpasses baseline methods.

1. Introduction

Learning Path Recommendation (LPR) systems leverage real-time analysis of students’ knowledge acquisition to construct personalized learning trajectories tailored to their specific needs [1]. By dynamically adapting instructional content and methodologies, these systems not only enhance learning efficiency but also foster deeper engagement and intrinsic motivation, thereby significantly augmenting overall educational outcomes. As AI applications become increasingly pervasive in education, the interpretability and fairness of recommendation systems have also received growing attention [2,3,4]. An ideal LPR system should not only be effective but also transparent, trustworthy, and fair. In recent years, research on LPR has exhibited a noticeable upward trend. Some researchers have applied knowledge tracing (KT) to LPR: KT models can predict students’ knowledge states, allowing the LPR system to dynamically evaluate a student’s knowledge and adjust the path accordingly [5,6]. However, existing KT models are limited to single-step prediction, i.e., they can only predict a student’s knowledge state at time $t+1$, which falls short of LPR’s requirement for multi-step prediction of students’ knowledge states.
Recent advances in large language models (LLMs) have opened up new possibilities for LPR. LLMs can understand educational materials in textual form, generate textual content for teaching purposes, and answer questions that arise during learning [7,8]. These methods exploit the text comprehension and reasoning capabilities of LLMs: by semantically interpreting textual descriptions of students’ learning interactions, the LLM infers the current knowledge state of the student. However, it remains difficult to correctly reason about a student’s knowledge state several steps into the future in this way. In addition, LLM assessments of students’ knowledge states tend to cluster around an average value; for example, when asked to rate a student’s knowledge state from 1 to 5 based on the learning record, the result is almost always around 3. This is because LLMs tend to be prematurely greedy, that is, they prefer safe answers that occur frequently in the training data and avoid extreme judgments [9]. The LLM prematurely locks onto a locally optimal solution in decision-making, reducing action coverage instead of comprehensively exploring the optimal strategy. Consequently, in ranking tasks the model stays near its comfort zone, where the risk is lowest.
To address these challenges, we propose a novel Learning Path Recommendation method Enhanced by Knowledge Tracing and a Large Language Model, referred to as LPReKL. First, to enhance KT’s capability for multi-step prediction of students’ knowledge states, we probabilistically replace a portion of students’ responses with masked tokens during training. In addition, we incorporate exercise difficulty information into the KT model. Through this process, KT can accurately infer a student’s mastery of a particular knowledge concept at step $t+n$. Subsequently, leveraging feedback from KT and predefined prompt templates, the LLM generates new reference exercise texts. Finally, exercises with high similarity to these reference texts are retrieved from an exercise bank as candidate items of the learning path. This content undergoes validation by the KT model: if deemed appropriate, it is presented to students; otherwise, the LLM refines its generated content based on KT-driven feedback through continuous interaction and optimization between KT and the LLM. The contributions of this work can be summarized as follows:
  • This work proposes a novel learning path recommendation method that fuses KT models and LLMs. It introduces a feedback mechanism that guides LLMs to automatically adjust what they generate based on KT’s predictions.
  • We reconstruct students’ historical interaction data by incorporating mask markers, enabling the KT model to be trained to predict a student’s knowledge state at time step $t+n$.
  • A series of experimental results on multiple public datasets show that the proposed model is effective and has advantages over existing methods.
Traditional sequential or graph-based recommendation methods [10,11] rely on static, historical associations to recommend items [12]. In contrast, our approach performs dynamic simulation: before recommending a top-N exercise path, it forecasts the student’s knowledge state after completing the entire sequence, enabling evaluation of long-term effectiveness and selection of paths with optimal cumulative gains rather than immediate benefits. Moreover, the masking mechanism—by randomly hiding historical responses during training—forces the model to reason under incomplete information, improving its robustness and generalization for multi-step prediction. This allows reliable simulation of entirely new, unseen learning paths.
The remainder of this paper is organized as follows. Section 2 reviews related work on learning path recommendation. Section 3 formalizes the problem definition. Section 4 elaborates on the proposed methodology in detail. Experimental results and analyses are presented in Section 5. Finally, Section 6 concludes this paper and outlines future research directions.

2. Related Work

2.1. Learning Path Recommendation

Learning path recommendation systems can provide customized learning content and teaching strategies based on individual student characteristics, effectively meeting the demands of personalized education [13]. Existing LPR methods mainly fall into two categories. The first category relies on manually defined rules, including constructing learning resource dependency models based on prerequisite relationships [14,15] and modeling knowledge concept correlations using knowledge graphs [16]. Although these methods offer certain interpretability, they exhibit notable limitations in practical applications: first, the systems lack flexibility to adapt to dynamically changing learning needs; second, they incur high maintenance costs. For instance, knowledge graphs require continuous updates to maintain timeliness, otherwise their performance may degrade when handling new knowledge-related exercises. The second category formulates LPR as a sequential recommendation problem [17,18]. These methods generate complete learning paths by analyzing students’ historical behavior data. For example, Zhang et al. [1] employ a recursive propagation approach to process learner–course interaction data, utilizing graph convolutional networks to generate predicted course ratings, which are subsequently integrated with learning style similarity scores to achieve personalized course recommendations. Liu et al. construct learning networks by analyzing learning records and propose a combinatorial recommendation algorithm [19].
While achieving certain results, these methods fail to effectively capture real-time changes in students’ knowledge states, resulting in recommendations that lack personalization and adaptability. The introduction of KT effectively addresses these limitations. Unlike previous recommendation methods that rely on static rules, KT models can dynamically adjust and optimize recommendation paths by continuously analyzing students’ real-time learning performance, thereby significantly enhancing the adaptability and personalization of recommendation systems.

2.2. Knowledge Tracing for LPR

By analyzing student interactions with learning materials, KT predicts students’ mastery of specific knowledge concepts [20,21]. Bayesian Knowledge Tracing is one of the classic approaches, using Bayesian inference to estimate students’ knowledge states [22]. In 2015, Piech et al. use recurrent neural networks for KT [23], leveraging LSTMs to capture students’ knowledge states and temporal dependencies, leading to improved prediction accuracy. The advent of deep KT has significantly advanced the development of learning path recommendation systems [5,24]. By employing data-driven dynamic modeling approaches, this technology transforms traditional one-size-fits-all learning paths into personalized adaptive recommendations, thereby effectively enhancing learning efficiency and outcomes. Specifically, Zhang et al. [5] utilized knowledge tracing to annotate students’ knowledge states in historical learning logs, incorporating these state features as model inputs to generate personalized learning paths that capture both sequential relationships and selection logic among learning resources. Chen et al. [24] proposed an auxiliary prediction module based on knowledge tracing, which continuously evaluates students’ mastery levels at each node of the learning path and optimizes model parameters through cross-entropy loss functions, significantly improving recommendation stability. Notably, the application of multimodal data provides online learning systems with more comprehensive and multi-dimensional representations of student behaviors, creating opportunities for building more accurate personalized learning experiences [25,26].
However, current KT models exhibit two critical limitations: they cannot effectively predict students’ performance at step $t+n$, and they typically require large amounts of high-quality training data. These constraints significantly limit their applicability in LPR, where sparse data and continuously emerging new exercises are common. Additionally, Wang et al. [27] argue that existing methods often fail to adequately capture complex behavioral patterns unless they effectively leverage rich world knowledge for deeper reasoning about learning materials. This limitation has prompted researchers to explore novel applications of LLMs in this field.

2.3. LLM-Assisted Recommendation

Recent advancements in deep learning have ushered in transformative breakthroughs, particularly within the domain of natural language processing. Cutting-edge LLMs, including DeepSeek and the Qwen series, exhibit unparalleled proficiency in natural language comprehension [28,29], providing robust technical support for LPR. The integration of LLMs into the educational field presents a salient advantage. LLMs possess contextual understanding and reasoning capabilities. When students struggle with overwhelming learning materials, LLMs generate personalized content based on their learning progress and knowledge states, ensuring access to the most relevant educational resources [8,30,31]. Wang et al. [31] propose a ChatGPT-based personalized English reading comprehension support system, which enhances learning by predicting students’ reading skills, generating tailored questions, and automatically evaluating responses. Cui et al. [32] employ KT techniques to estimate students’ mastery of knowledge concepts based on their learning history and subsequently generated exercise recommendations using a pre-trained language model. Li et al. [15] integrate LLM with knowledge graphs to recommend appropriate learning materials tailored to students’ knowledge states and the human knowledge system. However, current LLM-assisted recommendations suffer from two notable shortcomings: first, they directly adopt model-generated content while overlooking potential logical fallacies or factual inaccuracies; second, they lack effective interactive feedback mechanisms—once the model provides recommendations to students, it cannot collect meaningful learning feedback, thereby hindering dynamic optimization of the recommended content.
The distinctions between LPReKL and prior approaches lie in two aspects:
  • Conceptually: Previous methods used LLM and KT in a one-way pipeline, lacking bidirectional interaction [33,34]. LPReKL introduces an iterative feedback loop where KT not only diagnoses the initial knowledge state but also evaluates the quality of the LLM-generated learning path and provides feedback for dynamic refinement, forming a closed-loop optimization system.
  • Practically: We propose a masked tokens strategy that enables KT to predict performance on unknown exercises, whereas traditional KT only supports prediction for known exercises. We employ a retrieval mechanism to match LLM-generated reference exercises with real items from the exercise bank, ensuring that recommended content is pedagogically sound and practically available, rather than relying directly on potentially hallucinated LLM outputs.

3. Problem Formulation

In this section, we review the definition of LPR and explain some key terms within this work. We have listed some frequently used symbols in Table 1 and provide a detailed explanation of their roles in this context.
Learning Path Recommendation. The learning path is illustrated in Figure 1. In the Learning Path Recommendation task, the student is first assessed and receives an initial score $E_s$. The recommendation system then proposes a candidate learning path $lp_c = \{q_1, q_2, \dots, q_K\}$; the student works through the recommended exercises, generating new learning records $u_t = \{(q_1, a_1), (q_2, a_2), \dots, (q_K, a_K)\}$ and a final score $E_e$. The student’s learning records and knowledge states are updated accordingly, such that $u_t = u_t \cup u_{t-1}$. The effectiveness of the learning path, denoted as $E_\varphi$ [14,24,35], can be calculated using the following formula:
$$E_\varphi = \frac{E_e - E_s}{E_{sup} - E_s},$$
where $E_{sup}$ is the maximum score for the path, $E_s$ is the student’s initial score, and $E_e$ is the score after completing the target path. A higher $E_\varphi$ indicates a more effective learning path that better matches the needs of the student.
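For concreteness, a minimal Python sketch of this metric follows; the function name and the example scores are ours, not part of the original system.

def path_effectiveness(e_start, e_end, e_sup):
    """E_phi = (E_e - E_s) / (E_sup - E_s): normalized score gain of a learning path."""
    if e_sup <= e_start:
        raise ValueError("the maximum score E_sup must exceed the initial score E_s")
    return (e_end - e_start) / (e_sup - e_start)

# Hypothetical example: a student improves from 40 to 70 on a path whose maximum is 100,
# giving E_phi = (70 - 40) / (100 - 40) = 0.5.
print(path_effectiveness(40, 70, 100))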
In this work, we define a learning path as a carefully curated sequence of learning items designed to achieve a specific learning goal. The internal logic of the path lies in its targeted addressing of the student’s weak knowledge concepts and its overall effectiveness, concretely represented as an ordered list of exercises $lp_c = \{q_1, q_2, \dots, q_K\}$.
Knowledge State. Given a set of $|U|$ students, $|C|$ knowledge concepts, and $|Q|$ exercises, each student’s historical learning process is represented as a sequence:
$$LP(u_i) = \{(q_1, a_1), (q_2, a_2), \dots, (q_t, a_t), \dots, (q_L, a_L)\},$$
where $u_i \in U$, $q_t \in Q$ denotes the exercise answered by the student at time step $t$, and $a_t \in \{0, 1\}$ indicates the correctness of the response (1 for correct, 0 for incorrect). Each exercise $q_t$ is associated with one or more knowledge concepts $c_t$. By analyzing students’ historical data, we can pinpoint their weaknesses and determine the knowledge concepts that require further reinforcement, as described below:
$$P(u^i_{t+1}) = \Omega\big(\{(q_1, a_1, c_1), (q_2, a_2, c_2), \dots, (q_t, a_t, c_t)\}\big),$$
where $\Omega$ corresponds to a trainable algorithm or model, and $u^i$ represents the $i$th student. This formulation aims to predict the next learning step $u^i_{t+1}$ for each student, enabling the system to dynamically adapt to their evolving knowledge states and optimize their learning trajectories.
Exercise Difficulty. The difficulty of an exercise significantly affects student learning. Exercises that are too easy or too difficult can demotivate students [36], while appropriately challenging ones enhance understanding and stimulate interest [37]. We define the difficulty of an exercise $q_i$ as the average error rate of its associated knowledge concepts, computed from students’ learning histories:
$$\mathrm{Diff}(q_i) = \frac{1}{N} \sum_{k=1}^{N} error\big(q_i, \{c_k \mid c_k \in KC[q_i]\}\big),$$
where $N$ is the number of knowledge concepts in the set $KC[q_i]$ and $error(q_i, c_k)$ is the error rate of concept $c_k$. Zhang et al. suggest that an error rate of around 30% optimally engages students [37].
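A small sketch of this difficulty estimate, assuming learning histories are available as (exercise, correctness) pairs; the function and argument names are illustrative.

from collections import defaultdict

def exercise_difficulty(q, concepts_of, records):
    """Diff(q): average error rate of the knowledge concepts in KC[q].

    q           -- exercise identifier
    concepts_of -- dict: exercise id -> list of knowledge concept ids (KC[q])
    records     -- iterable of (exercise_id, correct) pairs from learning histories
    """
    attempts, errors = defaultdict(int), defaultdict(int)
    for qid, correct in records:
        for c in concepts_of.get(qid, []):
            attempts[c] += 1
            errors[c] += 0 if correct else 1
    rates = [errors[c] / attempts[c] for c in concepts_of[q] if attempts[c] > 0]
    return sum(rates) / len(rates) if rates else 0.0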

4. Methodology

This section first introduces the framework and workflow of LPReKL, followed by a detailed discussion of each component.

4.1. Framework Overview

Figure 2 presents the overall architecture of LPReKL. First, a KT model capable of predicting students’ multi-step knowledge states is used to detect each student’s initial knowledge state. These initial knowledge states are then fused into prompt templates and passed to the LLM, which generates reference learning items. These items serve as references for retrieving the most similar exercises from the exercise bank to construct an initial learning path. This learning path is then fed back into the KT model to evaluate how much the student’s knowledge state would improve after completing it. Finally, based on this improvement feedback, the prompt template is adjusted and returned to the LLM, which either continues generating new learning items or stops. If the LLM continues, the subsequent operations are repeated until the optimization goal is achieved.
The detailed procedure of LPR based on LPReKL is shown in Algorithm 1.
Algorithm 1: LPReKL framework
Input: learning records $u_i$, mask rate $\tau$, exercise bank $Q$.
Output: recommended learning path $lp_c$
1: $KT \leftarrow Pretraining(u_i, \tau)$
2: for each $u$ in $U$ do
3:    $s^i_{t-1} \leftarrow KT(u^i_{t-1})$
4:    $prompt\_template \leftarrow PreparePrompt(u^i_{t-1}, [KC], [q_i], d)$
5:    $List[q_j] \leftarrow LLM(prompt\_template)$    ▹ Equation (10)
6:    $lp_c \leftarrow MatchExercises(List[q_j], Q)$    ▹ Equation (12)
7:    $s^i_t \leftarrow KT(lp_c)$
8:    while not $IsSatisfied(s^i_{t-1}, s^i_t)$ do
9:        $prompt\_template \leftarrow PreparePrompt(u^i_t, [KC], [q_i], d)$
10:       $List[q_j] \leftarrow LLM(prompt\_template)$    ▹ Equation (10)
11:       $lp_c \leftarrow MatchExercises(List[q_j], Q)$    ▹ Equation (12)
12:    end while
13:    Return $lp_c$
14: end for
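To make the control flow of Algorithm 1 concrete, the following Python sketch walks one student through the generate-retrieve-evaluate loop. The callables kt_model, llm, build_prompt, and match_exercises stand for the KT simulator, the LLM call, PreparePrompt, and MatchExercises described in the following subsections; the weak-concept threshold, the promotion score (mean mastery gain), and the stopping criterion are simplifying assumptions of ours.

MASKED = None  # future responses are unknown; the KT model treats them as masked

def recommend_path(records, kt_model, llm, build_prompt, match_exercises,
                   target_difficulty, max_rounds=5, min_gain=0.1):
    """Walk one student through the Algorithm 1 loop (thresholds are illustrative).

    records         -- list of (exercise, answer) pairs
    kt_model(seq)   -> per-concept mastery vector (list of floats in [0, 1])
    llm(prompt)     -> list of reference exercise texts
    """
    initial_state = kt_model(records)
    weak = [c for c, mastery in enumerate(initial_state) if mastery < 0.5]

    prompt = build_prompt(records, weak, target_difficulty)
    path = match_exercises(llm(prompt))
    for _ in range(max_rounds):
        future = [(q, MASKED) for q in path]              # path items not yet answered
        new_state = kt_model(records + future)            # simulate completing the path
        gain = (sum(new_state) - sum(initial_state)) / len(new_state)  # promotion score
        if gain >= min_gain:                              # IsSatisfied(s_{t-1}, s_t)
            return path
        prompt = build_prompt(records + future, weak, target_difficulty)
        path = match_exercises(llm(prompt))               # regenerate and re-retrieve
    return path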

4.2. Knowledge Tracing for Multi-Step Predictions

We introduce a novel masking mechanism during the training of the KT model to enable multi-step knowledge state prediction. The intuition is to simulate the uncertainty in future student responses when planning a multi-step learning path. For each training sequence, we randomly mask a proportion of the historical records by setting their mask indicator to $m_i = 0$. This forces the model to rely on a broader context rather than just the most recent interactions, thereby improving its ability to forecast knowledge states several steps ahead.
The robust capabilities of DKT have been well established [24], prompting us to adopt the original DKT architecture [23] as the foundational simulator [5]. However, we introduce a novel mechanism: masked tokens are used to reconstruct students’ historical interaction data, enabling multi-step knowledge state prediction. The historical response sequence of student $j$ is given as $u^j_{<t} = \{(q_1, a_1), (q_2, a_2), \dots, (q_{t-1}, a_{t-1})\}$, where $q_i$ and $a_i$ denote the exercise index and the corresponding response. Furthermore, we maintain a mask value $m_i$ for each record, indicating whether the answer remains visible ($m_i = 1$) or is masked ($m_i = 0$). The reconstructed data is expressed as:
$$\tilde{u}^j_{<t} = \{(q_1, m_1, a_1), (q_2, m_2, a_2), \dots, (q_{t-1}, m_{t-1}, a_{t-1})\}.$$
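A minimal sketch of this reconstruction, assuming responses are stored as (exercise, answer) pairs; the placeholder token, default mask rate, and names are our own choices.

import random

MASK_TOKEN = -1  # stand-in for a hidden answer; the actual token is implementation-specific

def mask_records(records, mask_rate=0.15, seed=None):
    """Turn a history of (q, a) pairs into (q, m, a) triples with randomly hidden answers."""
    rng = random.Random(seed)
    reconstructed = []
    for q, a in records:
        if rng.random() < mask_rate:
            reconstructed.append((q, 0, MASK_TOKEN))  # m = 0: answer masked during training
        else:
            reconstructed.append((q, 1, a))           # m = 1: answer stays visible
    return reconstructed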
During training, for masked records ($m_i = 0$), the response is replaced with a special token indicating missing data. The input to the KT model is formulated as:
$$q_{t-1} = W_q e^q_{t-1}, \quad m_{t-1} = W_m e^m_{t-1}, \quad a_{t-1} = W_a e^a_{t-1}, \quad x^j_{<t} = q_{t-1} \oplus m_{t-1} \oplus a_{t-1},$$
where $e^q_{t-1}$, $e^m_{t-1}$, and $e^a_{t-1}$ are one-hot vector representations of the respective symbols, $\oplus$ denotes concatenation, and $W$ denotes the weight matrices. The model then predicts the student’s knowledge state as follows:
$$\begin{aligned} i_t &= \mathrm{sigmoid}(W_{xi} x^j_{<t} + W_{hi} h_{t-1} + b_i), \\ f_t &= \mathrm{sigmoid}(W_{xf} x^j_{<t} + W_{hf} h_{t-1} + b_f), \\ o_t &= \mathrm{sigmoid}(W_{xo} x^j_{<t} + W_{ho} h_{t-1} + b_o), \\ c_t &= f_t \odot c_{t-1} + i_t \odot \tanh(W_{xc} x^j_{<t} + W_{hc} h_{t-1} + b_c), \\ h_t &= o_t \odot \tanh(c_t), \\ s_t &= \mathrm{sigmoid}(W_{os} o_t + b_0). \end{aligned}$$
Here, $i$, $f$, $o$, and $c$ correspond to the input gate, forget gate, output gate, and memory cell, respectively; $\tanh(\cdot)$ and $\mathrm{sigmoid}(\cdot)$ denote activation functions, $\odot$ denotes the element-wise product, and $W$ and $b$ are trainable parameters. The output vector $s_t$ has a dimension equal to the number of knowledge concepts, with each entry indicating the student’s mastery of a specific knowledge concept. Our model not only predicts students’ knowledge states but also balances the impact of exercise difficulty on performance. The binary cross-entropy loss over $n$ iterations and the exercise difficulty regularization are defined as:
$$\mathcal{L}_{bce} = \sum_{k=1}^{n} \mathrm{BCE}(a_k, r_k), \qquad \mathcal{L}_{diff} = \frac{1}{n} \sum_{i=1}^{n} \big| d - \mathrm{Diff}(q_i) \big|,$$
where $d$ is a predefined difficulty level tailored to student needs. The final optimization objective is:
$$\mathcal{L} = \alpha \mathcal{L}_{bce} + \beta \mathcal{L}_{diff},$$
where $\alpha$ and $\beta$ are weighting parameters.
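The following PyTorch sketch illustrates one possible realization of the mask-aware DKT simulator and this objective; the embedding size, the dropout, and the use of the LSTM hidden state (rather than the output gate of Equation (7)) for the mastery projection are our simplifications, not the exact published implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedDKT(nn.Module):
    """LSTM-based simulator over (exercise, mask, answer) one-hot inputs."""
    def __init__(self, num_exercises, num_concepts, emb_dim=64, hidden_dim=200):
        super().__init__()
        self.q_proj = nn.Linear(num_exercises, emb_dim, bias=False)  # W_q
        self.m_proj = nn.Linear(2, emb_dim, bias=False)              # W_m (visible / masked)
        self.a_proj = nn.Linear(3, emb_dim, bias=False)              # W_a (0 / 1 / mask token)
        self.lstm = nn.LSTM(3 * emb_dim, hidden_dim, batch_first=True)
        self.dropout = nn.Dropout(0.6)
        self.out = nn.Linear(hidden_dim, num_concepts)

    def forward(self, q_onehot, m_onehot, a_onehot):
        # each input: (batch, seq_len, vocab) one-hot tensors
        x = torch.cat([self.q_proj(q_onehot), self.m_proj(m_onehot),
                       self.a_proj(a_onehot)], dim=-1)
        h, _ = self.lstm(x)
        return torch.sigmoid(self.out(self.dropout(h)))  # s_t: per-concept mastery

def lprekl_loss(pred_correct, answers, difficulties, d, alpha=0.5, beta=0.5):
    """L = alpha * L_bce + beta * L_diff (Equations (8) and (9))."""
    l_bce = F.binary_cross_entropy(pred_correct, answers.float(), reduction="sum")
    l_diff = (d - difficulties).abs().mean()
    return alpha * l_bce + beta * l_diff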

4.3. Prompt Template of LLM

LLMs refer to ultra-large-scale neural networks trained via deep learning techniques, capable of comprehending and generating human language with remarkable fluency. These models are typically pre-trained on extensive textual corpora, allowing them to capture nuanced linguistic patterns and contextual dependencies. Owing to their transformative impact across a wide range of natural language processing tasks—such as text generation and machine translation—LLMs have garnered increasing attention in the domain of recommendation systems [38,39]. Prompt templates serve as an effective mechanism to enhance the interpretability and reasoning capabilities of LLMs; when carefully crafted, they can guide the model to produce responses that are not only coherent but also closely aligned with task-specific objectives [40]. Qwen (version 2.0; Alibaba Cloud, Hangzhou, China) is a language model introduced by Alibaba Cloud; we deployed its 32-billion-parameter version on our servers. As illustrated in Figure 3, the prompt template is carefully designed to guide the LLM in its role as an intelligent exercise generation assistant. The blue section highlights that the LLM should focus on students’ evolving understanding of specific knowledge concepts and the desired exercise difficulty. Newly generated reference exercises should adhere to the format provided in the examples. For each student with multiple weak knowledge concepts, the LLM aims to generate exercises covering as many of these knowledge concepts as possible to optimize knowledge improvement. To ensure content quality, the reference exercises are matched against the exercise bank, with the top five most similar exercises selected for recommendation. The yellow section corresponds to each student’s learning situation, which is updated before each recommendation. The green section represents the response of the LLM. The process is outlined below:
$$\mathrm{LP} = \mathrm{LLM}(s^i_t;\ [KC];\ [q_i, q_j];\ d),$$
where $s^i_t$ denotes the student’s knowledge state, $[KC]$ is the set of relevant knowledge concepts, $[q_i, q_j]$ provides example exercises, and $d$ represents the expected difficulty level.
The LLM used in this work is the Qwen2-32B pre-trained model. We did not fine-tune this model; instead, we fully leveraged its in-context learning capability through carefully designed prompt templates. The temperature in the text generation process was set to 0.5.
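As an illustration of how such a prompt can be assembled from the KT output, consider the sketch below; the exact wording of the template in Figure 3 is not reproduced, and all names and the weak-concept threshold are placeholders of ours.

def prepare_prompt(mastery, target_difficulty, example_exercises, weak_threshold=0.5):
    """Assemble a prompt in the spirit of Figure 3; the wording is illustrative only.

    mastery           -- dict: knowledge concept name -> estimated mastery from the KT model
    target_difficulty -- desired error rate d of the recommended exercises (e.g. 0.3)
    example_exercises -- a few exercise texts showing the expected output format
    """
    weak = {c: p for c, p in mastery.items() if p < weak_threshold}
    state = "\n".join(f"- {c}: estimated mastery {p:.2f}" for c, p in weak.items())
    examples = "\n".join(f"Example exercise: {e}" for e in example_exercises)
    return (
        "You are an intelligent exercise generation assistant.\n"
        "The student is weak in the following knowledge concepts:\n"
        f"{state}\n"
        f"Generate reference exercises that cover as many of these concepts as possible, "
        f"with a target difficulty (expected error rate) of about {target_difficulty:.0%}.\n"
        "Follow the format of the examples below.\n"
        f"{examples}"
    )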

4.4. Exercise Retrieval

We employ a pre-trained BERT model to obtain textual embeddings for the exercises. For each exercise, we input its text into the BERT model and use the corresponding output vector as its semantic embedding, which is then used for subsequent similarity computation.
To ensure the effectiveness and accuracy of the recommended content, we match the candidate exercises generated by the LLM with existing items in the exercise bank and select the top-N most similar ones for recommendation.
Given a reference exercise $q = \{w_1, w_2, \dots, w_n\}$ and a candidate exercise $q' = \{w'_1, w'_2, \dots, w'_m\}$, where $w_i$ and $w'_i$ represent words in the exercises and $n$ and $m$ are the numbers of words in each exercise text, we utilize a pre-trained BERT model to obtain contextualized vector embeddings for the exercises:
$$E(q_i),\ E(q_j) \in \mathbb{R}^d,$$
where $E(q_i)$ and $E(q_j)$ are the $d$-dimensional embeddings produced by BERT [41]. The semantic similarity between a reference and a candidate exercise is computed using cosine similarity:
$$\mathrm{sim}(q_i, q_j) = \frac{E(q_i) \odot E(q_j)}{\| E(q_i) \| \, \| E(q_j) \|}.$$
Here, $\odot$ denotes the dot product and $\| \cdot \|$ is the Euclidean norm. Ultimately, the top $N = 5$ candidate exercises with the highest similarity scores are selected:
$$lp_c = \mathrm{Rank}_5 = \{q_1, q_2, \dots, q_5\}.$$
Before generating a learning path, the student’s knowledge state—provided by the KT model—is transformed into explicit, structured pedagogical objectives, namely the “set of knowledge concepts requiring reinforcement” and the “target difficulty level.” These objectives are clearly communicated to the LLM through carefully designed prompt templates, thereby guiding the generation process in a controlled and purposeful manner. Based on the entire exercise bank, we compute the semantic similarity between the LLM-generated reference item and all candidate exercises using a pre-trained BERT model and select the top-N most similar exercises. Through this design, the LLM functions more like a “pedagogical assistant” that operates within a well-defined instructional framework, generating contextually appropriate and semantically rich reference content. Meanwhile, the retrieval mechanism ensures that the final recommended exercises are drawn from the authentic, curated exercise bank. This separation of responsibilities effectively aligns the creative capacity of the LLM with educational fidelity, thereby preserving consistency with the intended pedagogical goals.
The detailed process of exercise retrieval is documented in Algorithm 2.
Algorithm 2: MatchExercises
Input: $List[q_j]$ generated by the LLM, exercise bank $Q$, number of exercises to retrieve $n$
Output: $top\_n\_exercises$
1: $all\_candidates \leftarrow [\,]$
2: for each $ref$ in $List[q_j]$ do
3:    $ref\_rec \leftarrow BERTEmbed(ref)$
4:    for each $candidate$ in $Q$ do
5:        $candidate\_rec \leftarrow BERTEmbed(candidate)$
6:        $sim \leftarrow CosineSimilarity(ref\_rec, candidate\_rec)$
7:        Append $(candidate, sim)$ to $all\_candidates$
8:    end for
9: end for
10: $sorted\_candidates \leftarrow Sort(all\_candidates, by = sim, descending = True)$
11: $top\_n\_exercises \leftarrow GetTopN(sorted\_candidates, n)$
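A possible implementation of this retrieval step is sketched below; the BERT checkpoint (bert-base-chinese) and the mean pooling over token states are assumptions on our part, since the paper only specifies that a pre-trained BERT model supplies the exercise embeddings.

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")  # assumed checkpoint
encoder = AutoModel.from_pretrained("bert-base-chinese")

def embed(texts):
    """Encode exercise texts into d-dimensional vectors (mean-pooled BERT states)."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state        # (batch, seq_len, d)
    mask = batch["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)    # mean pooling -> (batch, d)

def match_exercises(reference_texts, bank_texts, top_n=5):
    """Return the top-N bank exercises most similar to any LLM-generated reference."""
    ref, bank = embed(reference_texts), embed(bank_texts)
    sim = torch.nn.functional.cosine_similarity(
        ref.unsqueeze(1), bank.unsqueeze(0), dim=-1)        # (n_ref, n_bank)
    best_per_item = sim.max(dim=0).values                   # best score for each bank item
    top_idx = best_per_item.topk(top_n).indices
    return [bank_texts[i] for i in top_idx]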

5. Experiment

In order to understand the model more clearly, we conducted experiments that addressed the following research questions:
RQ1: In the task of assessing students’ knowledge status, do the results generated by direct use of LLM tend to be conservative and lack discrimination?
RQ2: Is LPReKL more effective than the existing LPR?
RQ3: What impact do the individual core components of LPReKL have on the model’s overall performance?
RQ4: How do hyperparameters in the model contribute to overall performance?

5.1. Dataset and Simulator

5.1.1. Dataset

To evaluate the performance of the proposed LPReKL, we conduct experiments on three publicly available datasets. MOOCCubeX (https://github.com/THU-KEG/MOOCCubeX?tab=readme-ov-file, accessed on 1 March 2025) is one of the largest and most comprehensive MOOC datasets, containing a wealth of exercises, knowledge concepts, and student interaction records. We extract students’ activity logs related to physics subjects for our experiments. MOOPer (http://data.openkg.cn/dataset/mooper, accessed on 1 March 2025) is derived from interaction data collected between 2018 and 2019 on the EduCoder platform, where students participated in practical programming exercises. XES3G5M (https://github.com/ai4ed/xes3g5m, accessed on 1 March 2025) is collected from a real-world online mathematics learning platform and contains third-grade students’ historical interaction records on math exercises. The processed dataset details are summarized in Table 2.

5.1.2. Simulator

To evaluate the effectiveness of different methods in LPR, we follow prior work [14,24,35] and utilize our proposed KT to assess the quality of the generated learning paths. The KT model is trained on large-scale real-world data with the objective of accurately predicting students’ responses to exercises. Therefore, it can serve as a reliable simulator for students’ knowledge state evolution, enabling a fair comparison of the expected effectiveness of learning paths generated by different recommendation methods.

5.2. Implementation Details

We discard student records with fewer than ten interactions and retain only the associated knowledge concepts. The remaining data is split into training, validation, and test sets with a ratio of 8:1:1. The hidden dimension of the KT is set to 200, and the output layer size matches the number of knowledge concepts in the dataset. Dropout with a rate of 0.6 is applied to mitigate overfitting. We use the Adam optimizer with a momentum of 0.9, a gradient clipping threshold of 3.0, an initial learning rate of 0.01, and a decay rate of 0.75. The input sequence length is fixed to 200, and shorter sequences are padded with null values. All experiments are conducted on a Tesla V100 GPU with Python 3.10, PyTorch 2.1.2, and CUDA 11.8.
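A sketch of a single training step under these settings is shown below; it assumes the MaskedDKT and lprekl_loss sketches from Section 4.2, reads the "momentum of 0.9" as Adam's first-moment coefficient, and uses an exponential schedule for the 0.75 decay rate, all of which are our interpretations; the batch keys and model sizes are placeholders.

import torch

model = MaskedDKT(num_exercises=1000, num_concepts=150)   # placeholder sizes
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, betas=(0.9, 0.999))
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.75)  # decay rate 0.75

def training_step(batch):
    optimizer.zero_grad()
    s = model(batch["q"], batch["m"], batch["a"])                 # per-concept mastery
    pred = (s * batch["next_concept_onehot"]).sum(dim=-1)         # prob. of a correct answer
    loss = lprekl_loss(pred, batch["next_answer"], batch["difficulty"], d=0.3)
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=3.0)  # clipping threshold 3.0
    optimizer.step()
    return loss.item()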

5.3. Baselines and Evaluation Metric

5.3.1. Baselines

We compare LPReKL with several state-of-the-art methods:
  • FISM [42]: Generates recommendations based on a similarity matrix.
  • CluLSTM [43]: Clusters students into groups and uses LSTM to predict learning paths.
  • DQN [44]: Applies Q-learning and deep neural networks for decision-making.
  • GRU4Rec [45]: Utilizes gated recurrent units to process students’ historical interactions and generate learning paths.
  • LightGCN [46]: Employs multi-layer graph convolution to extract deep features among entities for recommendation.
  • GEHRL [35]: Tracks students’ knowledge states with KT and adopts hierarchical reinforcement learning for goal planning and recommendation.
  • SKarRec [15]: Leverages LLMs to construct textual descriptions of learning items and combines KT with graph neural networks for recommendation.
  • KGNN-KT [33]: Utilizes LLMs to construct a knowledge graph from unordered knowledge concepts and then models student behavior using GNNs.

5.3.2. Evaluation Metric

We adopt E φ (Equation (1)) to measure the effectiveness of learning paths [14,24,35] and compare all methods based on their performance using this metric. Traditional recommendation metrics (e.g., NDCG, Precision) aim to measure item “relevance” or “ranking quality.” However, in education, the ultimate goal is not merely the alignment between students and resources but the actual improvement in students’ knowledge state. A highly “relevant” exercise may fail to promote learning if it is too easy or too difficult. Optimizing for relevance alone risks recommending items that appear suitable but yield little educational benefit. In contrast, our evaluation metric, E φ , directly quantifies the gain in a student’s knowledge state after completing the recommended path—i.e., the learning gain. This ensures that our model optimization is aligned with the fundamental objective of education: meaningful and measurable knowledge advancement.

5.4. Exploring the Predictive Capabilities of LLM (RQ1)

In this experiment, we use KT instead of LLM to directly predict the knowledge state of the students, primarily because LLMs exhibit a pronounced central tendency in their assessments of knowledge states. To validate this phenomenon, we randomly sample students with varying record lengths from three datasets, input their response histories into the LLM, and obtain predictions of their knowledge states. Specifically, we categorize students’ historical learning records into three groups based on length (fewer than 100 entries, fewer than 200 entries, and more than 200 entries), randomly select 100 students from each group, and visualize the LLM’s predictions using box plots (Figure 4).
For all three datasets, when the record length is fewer than 200, the median predicted state is relatively high and the data distribution is more dispersed. In contrast, when the record length exceeds 200, the median predicted state is comparatively lower and the distribution is more concentrated. With longer input sequences, the LLM can better capture the input information, leading to more stable predictions. In summary, the LLM tends to produce moderate predictions in scoring tasks, remaining in a comfort zone to avoid extreme judgments. In comparison, KT does not suffer from this limitation and can more accurately predict students’ knowledge states. Therefore, we opt for the KT to evaluate students’ learning capabilities.

5.5. Overall Performance Comparison (RQ2)

To thoroughly evaluate the capabilities of various models, we configure three settings for selecting candidate exercises: (1) $\rho = 1$: randomly sample $n$ exercises; (2) $\rho = 2$: divide all exercises into $(N - n + 1)$ groups, each of size $n$, and randomly select one group as the candidate set; (3) $\rho = 3$: use all available exercises. $N$ denotes the total number of exercises. For the MOOCCubeX, MOOPer, and XES3G5M datasets, $n$ is set to 100, 500, and 500, respectively. The three settings are referred to as $\rho$ = 1, 2, and 3 in the following experiments.
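The three candidate-selection settings can be reproduced with a few lines; reading "$(N - n + 1)$ groups of size $n$" as overlapping windows is our interpretation.

import random

def candidate_exercises(all_exercises, rho, n, seed=None):
    """Build the candidate pool for the three experimental settings (our sketch)."""
    rng = random.Random(seed)
    if rho == 1:                                   # randomly sample n exercises
        return rng.sample(all_exercises, n)
    if rho == 2:                                   # pick one group of size n at random
        groups = [all_exercises[i:i + n] for i in range(len(all_exercises) - n + 1)]
        return rng.choice(groups)
    return list(all_exercises)                     # rho = 3: the whole exercise bank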

5.5.1. Promotion Comparison

As shown in Table 3, LPReKL achieves superior performance in most experimental settings, being outperformed by other models only in a few cases. When the number of candidate exercises is limited ( ρ = 1 and ρ = 2 ), the performance of all models declines significantly. This is mainly because the randomly selected exercises in these scenarios often fail to cover the specific knowledge deficiencies of students, resulting in limited improvement. In contrast, when ρ = 3 , i.e., when the entire exercise bank is available, all models demonstrate notable performance gains, with LPReKL achieving particularly remarkable improvement. This highlights the model’s ability to accurately identify and address students’ weaknesses when provided with sufficient and diverse learning resources. However, LPReKL performs slightly less effectively on the XES3G5M dataset compared to the other two datasets. A possible explanation is the large number of knowledge concepts in XES3G5M, which makes it difficult for a fixed set of five recommended exercises to meet students’ diverse learning needs. Additionally, the strong semantic understanding capabilities of the LLM embedded in LPReKL are better leveraged in datasets with rich textual information, such as MOOCCubeX and MOOPer. Notably, SKarRec—which also employs an LLM—demonstrates strong competitiveness on these two datasets, further confirming the potential of LLMs in the context of LPR. For other models, FISM and CluLSTM heavily rely on co-occurrence similarity between items, and their recommendation quality deteriorates significantly when candidate exercises are randomly generated. DQN, as a classic reinforcement learning algorithm, can dynamically adjust its strategy based on the environment state, thereby demonstrating relatively stable performance. GRU4Rec and LightGCN critically depend on sufficient historical interaction data to capture student behavior patterns; as such, their representational capacity is limited in cases of data sparsity or short interaction sequences. GEHRL achieves competitive performance by decoupling the recommendation process into two stages—goal planning and exercise recommendation—and by incorporating awareness of students’ evolving knowledge states. However, its architecture lacks a closed-loop mechanism that enables real-time optimization of the recommendation strategy based on student feedback. SkarRec and KGNN-KT attempt to leverage LLMs to uncover semantic relationships among learning items. Nevertheless, in real-world educational scenarios, knowledge systems are large-scale and dynamically evolving, making it impractical to exhaustively encode and input all conceptual relationships into the LLM. This limitation hinders their generalization ability and practical applicability.
Specifically, we performed paired t-tests to compare LPReKL against the two strongest baseline models, GEHRL and SkarRec. The performance improvement of LPReKL over the best baseline is statistically significant (p < 0.05) on MOOCCubeX ( ρ = 1 , ρ = 3 ), MOOPer ( ρ = 1 , ρ = 2 ), and XES3G5M ( ρ = 2 , ρ = 3 ). In the remaining cases, the performance differences did not reach conventional significance levels. Nevertheless, our method consistently demonstrates a favorable numerical advantage across all settings, indicating its robustness and effectiveness.

5.5.2. Difficulty Comparison

An appropriate level of exercise difficulty can significantly enhance students’ learning motivation, whereas exercises that are too simple or too difficult may hinder their long-term engagement. As shown in Table 4, LPReKL consistently achieves favorable results under all three settings. This is primarily because our framework integrates both the LLM and KT components to explicitly account for difficulty when generating learning paths, ensuring that the recommended exercises align well with the predefined difficulty expectations. In contrast, other methods do not incorporate difficulty as a consideration during the recommendation process, resulting in learning paths of inconsistent quality, which may negatively impact students’ learning experience. These findings suggest that incorporating external factors—such as exercise difficulty—is crucial for delivering personalized learning services, as it helps efficiently target students’ weaknesses and accelerates progress toward their learning goals.

5.6. Ablation Study (RQ3)

We design the following variants to investigate the contribution of each component in LPReKL:
  • LPReKL-F: Removes the feedback mechanism. Candidate exercises retrieved from the exercise bank are directly recommended without iterative adjustment.
  • LPReKL-K: Reverts to the original DKT by omitting the masking mechanism when reconstructing historical interaction data.
  • LPReKL-L: Excludes the LLM. Instead, exercises relevant to students’ weak knowledge concepts, as identified by KT, are retrieved from the exercise bank for recommendation.
As shown in Table 5, the performance of LPReKL-K is on par with LPReKL, indicating that traditional KT still performs excellently in predicting student knowledge states. However, it typically requires more structured training data and cannot be generalized to multi-step prediction tasks. LPReKL-F and LPReKL-L yield the weakest performance, highlighting the critical roles of both the feedback mechanism and the LLM. The LLM’s semantic understanding and reasoning capabilities allow it to infer students’ weaknesses from contextual information and generate personalized learning items accordingly. The feedback mechanism guides the LLM to iteratively adjust its instructional strategies. Unlike prior studies, which largely overlook student feedback and rely solely on historical data for one-way recommendations, our framework introduces a dynamic, student-aware loop. This enables the model to better align with students’ evolving needs and individual learning profiles.

5.7. Parameter Analysis (RQ4)

5.7.1. Weight Coefficients of Knowledge Tracing+

The KT in our framework involves two key hyperparameters, α and β (as defined in Equation (9)). We vary these parameters to examine their influence on model performance. The results are presented in Figure 5. When α and β are set to 0.5:0.5, the model achieves the best performance on the MOOCCubeX and XES3G5M datasets. However, increasing either α or β disproportionately leads to a significant drop in performance. The MOOPer dataset presents a more complex pattern. The model performs best when α and β are set to 0.6:0.4, while further adjustments cause noticeable fluctuations in performance. We attribute this to the larger number of knowledge components in MOOPer, which requires a more balanced consideration of prediction accuracy and item difficulty—an aspect that aligns more closely with real-world educational settings. These observations highlight the importance of dataset-specific parameter tuning, as appropriate values of α and β are crucial for optimizing model performance.

5.7.2. Mask Ratio of Knowledge Tracing+

In real-world educational scenarios, exercise banks are typically very large. To enable the KT model to predict students’ knowledge states multiple steps ahead, we reconstruct students’ historical records during training by randomly masking their responses with a certain probability. Figure 6 shows the impact of different masking rates on model performance. The results indicate that excessively high masking rates significantly degrade model performance: with too many missing student records, the model struggles to accurately assess students’ knowledge states, hindering the generation of personalized learning paths. For the MOOPer dataset, which contains the most knowledge concepts, reducing the masking rate allows for more comprehensive training, leading to more precise decisions. For the other two datasets, optimal performance is achieved at a masking rate of 0.15, with further reductions yielding minimal improvements. We attribute this to the moderate number of knowledge concepts in these datasets, where a masking rate of 0.15 is sufficient for the model to make reasonably accurate predictions based on the available data.

6. Discussion

Traditional rule-based methods lack flexibility, while early sequential recommendation models struggle to capture the dynamics of knowledge acquisition. Although KT models and LLMs have individually shown promise in addressing these challenges, the former are often limited to single-step prediction, while the latter suffer from hallucination and a conservative bias in evaluation. The proposed LPReKL framework is designed to synergize the strengths of both KT and LLM to overcome these limitations. Experimental results demonstrate that LPReKL performs well across multiple datasets and settings, primarily due to two key design principles. First, the iterative feedback loop between KT and LLM enables dynamic adaptation. Unlike conventional “one-shot” recommendation approaches, our system continuously refines the learning path: the KT model evaluates the expected effectiveness of candidate paths and feeds this information back to the LLM, which then adjusts its generation strategy accordingly. This closed-loop interaction allows for personalized and context-aware path refinement. Second, the paradigm of “LLM-generated reference items + semantic retrieval from a question bank” effectively balances creativity with reliability. The LLM interprets complex pedagogical intentions and concretizes them into reference exercises, while the retrieval mechanism ensures that the final recommended items are drawn from a curated, high-quality item bank. This division of labor not only mitigates the hallucination issues commonly associated with LLMs but also enhances the system’s scalability—new, high-quality exercises can be seamlessly integrated into the bank and become immediately available for recommendation. Nevertheless, a limitation of this work lies in using the same KT model both as a component within the framework and as the primary evaluator. While this practice is common in the field and the KT model itself is robust and well validated on large-scale data, it may introduce potential evaluation bias. To further strengthen the validity of our findings, we acknowledge the importance of future online A/B testing in real educational platforms. Such studies would allow us to assess the long-term impact of LPReKL in authentic teaching and learning environments, beyond simulated offline evaluations.

7. Conclusions

In this paper, we proposed LPReKL, a novel learning path recommendation framework that synergizes Knowledge Tracing (KT) with a Large Language Model (LLM) to address the limitations of traditional educational approaches. By dynamically tracking students’ knowledge states through an enhanced KT model and leveraging the generative capabilities of LLMs, our system provides highly personalized and adaptive learning recommendations. The integration of a feedback mechanism ensures continuous optimization of the recommended content, aligning it with students’ evolving needs and balancing exercise difficulty to enhance engagement. Experimental results on multiple public datasets demonstrated that LPReKL outperforms existing baselines in terms of recommendation effectiveness and adaptability, particularly in scenarios with abundant learning resources. Ablation studies further validated the critical roles of the feedback mechanism, LLM-generated content, and the masked training strategy for KT in improving performance. Future work will explore incorporating additional personalized factors (e.g., learning styles, engagement metrics) and refining the model architecture to better handle complex real-world educational environments. This research contributes to the advancement of AI-driven educational technologies, offering a scalable solution to deliver tailored learning experiences and improve educational outcomes. Additionally, the LLM adjusts its strategy by processing feedback through natural language prompts, rather than via gradient updates. While this design offers advantages in computational efficiency and safety, and has proven effective in practice, the adjustment granularity is indeed relatively coarse. Future work may explore finer-grained optimization techniques, such as reinforcement learning from human or synthetic feedback, to refine the LLM’s decision-making process and potentially achieve further performance gains.

Author Contributions

The study was conceived and designed jointly by Y.L. and Z.W. Y.L. performed the experiments, collected and analyzed the data, and drafted the manuscript. Z.W. provided critical feedback, revised the manuscript for important intellectual content, and supervised the research. Both authors discussed the results and contributed to the final version of the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 62377015), National Key Research and Development Program of China (Grant No. 2023YFC3341200), and the Innovation Project for General Colleges and Universities in Guangdong Province (Grant No. 2024KTSCX094).

Data Availability Statement

The datasets used in this study are publicly available at https://github.com/THU-KEG/MOOCCubeX?tab=readme-ov-file, http://data.openkg.cn/dataset/mooper, and https://github.com/ai4ed/xes3g5m (all accessed on 1 March 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zhang, G.; Gao, X.; Ye, H.; Zhu, J.; Lin, W.; Wu, Z.; Zhou, H.; Ye, Z.; Ge, Y.; Baghban, A. Optimizing learning paths: Course recommendations based on graph convolutional networks and learning styles. Appl. Soft Comput. 2025, 175, 113083. [Google Scholar] [CrossRef]
  2. Chen, X.; Zhu, Z.; Xie, Y. Design and Implementation of an Explainable Course Recommendation Algorithm Based on Causal Inference. In Proceedings of the International Conference on Computer Science and Educational Informatization, Haikou, China, 1–3 November 2024; Springer: Berlin/Heidelberg, Germany, 2024; pp. 183–199. [Google Scholar]
  3. Wang, L.; Li, Q.; Cui, D.; Wang, M.; Zhao, Y.; Xu, Y.; Zhuang, H.; Zhou, Y.; Wang, L. Building Bridges, Not Walls: Fairness-Aware and Accurate Recommendation of Code Reviewers via LLm-Based Agents Collaboration. In Proceedings of the 2025 IEEE/ACM 33rd International Conference on Program Comprehension (ICPC), Ottawa, ON, Canada, 27–28 April 2025; IEEE Computer Society: Washington, DC, USA, 2025; pp. 577–588. [Google Scholar]
  4. Ma, W.; Chen, W.; Lu, L.; Fan, X. Integrating learners’ knowledge background to improve course recommendation fairness: A multi-graph recommendation method based on contrastive learning. Inf. Process. Manag. 2024, 61, 103750. [Google Scholar] [CrossRef]
  5. Zhang, F.; Feng, X.; Wang, Y. Personalized process–type learning path recommendation based on process mining and deep knowledge tracing. Knowl.-Based Syst. 2024, 303, 112431. [Google Scholar] [CrossRef]
  6. Li, S.; Liu, X.; Tang, X.; Chen, X.; Pu, J. MLKT4Rec: Enhancing Exercise Recommendation Through Multitask Learning with Knowledge Tracing. IEEE Trans. Comput. Soc. Syst. 2024, 12, 1458–1472. [Google Scholar] [CrossRef]
  7. Kuo, B.C.; Chang, F.T.; Bai, Z.E. Leveraging LLMs for Adaptive Testing and Learning in Taiwan Adaptive Learning Platform (TALP). In Proceedings of the LLM@ AIED, Tokyo, Japan, 7 July 2023; pp. 101–110. [Google Scholar]
  8. He, R.; Zhang, L.; Lyu, L.; Xue, C. Enhancing the ability of LLMs for spaceborne equipment code generation via retrieval-augmented generation and contrastive learning. Autom. Softw. Eng. 2026, 33, 1–25. [Google Scholar] [CrossRef]
  9. Schmied, T.; Bornschein, J.; Grau-Moya, J.; Wulfmeier, M.; Pascanu, R. LLMs are Greedy Agents: Effects of RL Fine-tuning on Decision-Making Abilities. arXiv 2025, arXiv:2504.16078. [Google Scholar]
  10. Fan, Y.; Tong, M.; Li, D. Learning path recommendation based on forgetting factors and knowledge graph awareness. Inf. Process. Manag. 2026, 63, 104393. [Google Scholar] [CrossRef]
  11. Mrhar, K.; Abik, M. A deep learning framework for optimizing personalized online course recommendation and selection. Decis. Anal. J. 2025, 16, 100616. [Google Scholar] [CrossRef]
  12. Wu, Q.; Ji, W.; Zhou, G. CLKT: Optimizing cognitive load management in knowledge tracing. Cogn. Comput. 2025, 17, 74. [Google Scholar] [CrossRef]
  13. Li, H.; Xu, T.; Zhang, C.; Chen, E.; Liang, J.; Fan, X.; Li, H.; Tang, J.; Wen, Q. Bringing generative AI to adaptive learning in education. arXiv 2024, arXiv:2402.14601. [Google Scholar] [CrossRef]
  14. Liu, Q.; Tong, S.; Liu, C.; Zhao, H.; Chen, E.; Ma, H.; Wang, S. Exploiting cognitive structure for adaptive learning. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, AK, USA, 4–8 August 2019; pp. 627–635. [Google Scholar]
  15. Li, Q.; Xia, W.; Du, K.; Zhang, Q.; Zhang, W.; Tang, R.; Yu, Y. Learning Structure and Knowledge Aware Representation with Large Language Models for Concept Recommendation. arXiv 2024, arXiv:2405.12442. [Google Scholar] [CrossRef]
  16. Liang, Z.; Mu, L.; Chen, J.; Xie, Q. Graph path fusion and reinforcement reasoning for recommendation in MOOCs. Educ. Inf. Technol. 2023, 28, 525–545. [Google Scholar] [CrossRef]
  17. Shang, Y.; Luo, X.; Wang, L.; Peng, H.; Zhang, X.; Ren, Y.; Liang, K. Reinforcement learning guided multi-objective exam paper generation. In Proceedings of the 2023 SIAM International Conference on Data Mining (SDM), Minneapolis, MN, USA, 27–29 April 2023; pp. 829–837. [Google Scholar]
  18. Wang, H.; Long, T.; Yin, L.; Zhang, W.; Xia, W.; Hong, Q.; Xia, D.; Tang, R.; Yu, Y. Gmocat: A graph-enhanced multi-objective method for computerized adaptive testing. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Long Beach, CA, USA, 6–10 August 2023; pp. 2279–2289. [Google Scholar]
  19. Liu, H.; Li, X. Learning path combination recommendation based on the learning networks. Soft Comput. 2020, 24, 4427–4439. [Google Scholar] [CrossRef]
  20. Abdelrahman, G.; Wang, Q.; Nunes, B. Knowledge tracing: A survey. ACM Comput. Surv. 2023, 55, 1–37. [Google Scholar] [CrossRef]
  21. Li, L.; Wang, Z.; Jose, J.M.; Ge, X. LLM supporting knowledge tracing leveraging global subject and student specific knowledge graphs. Inf. Fusion 2025, 126, 103577. [Google Scholar] [CrossRef]
  22. Corbett, A.T.; Anderson, J.R. Knowledge tracing: Modeling the acquisition of procedural knowledge. User Model. User-Adapt. Interact. 1994, 4, 253–278. [Google Scholar] [CrossRef]
  23. Piech, C.; Bassen, J.; Huang, J.; Ganguli, S.; Sahami, M.; Guibas, L.J.; Sohl-Dickstein, J. Deep knowledge tracing. Adv. Neural Inf. Process. Syst. 2015, 28, 505–513. [Google Scholar]
  24. Chen, X.; Shen, J.; Xia, W.; Jin, J.; Song, Y.; Zhang, W.; Liu, W.; Zhu, M.; Tang, R.; Dong, K.; et al. Set-to-sequence ranking-based concept-aware learning path recommendation. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; Volume 37, pp. 5027–5035. [Google Scholar]
  25. Li, Q.; Yuan, X.; Yue, J.; Shen, X.; Liang, R.; Liu, S.; Yan, Z. Dual-view multi-scale cognitive representation for deep knowledge tracing. Knowl.-Based Syst. 2025, 310, 113010. [Google Scholar] [CrossRef]
  26. Huang, C.; Jiang, W.; Li, K.; Wu, J.; Zhang, J. Enhancing learning process modeling for session-aware knowledge tracing. Knowl.-Based Syst. 2025, 309, 112740. [Google Scholar] [CrossRef]
  27. Wang, Z.; Zhou, J.; Chen, Q.; Zhang, M.; Jiang, B.; Zhou, A.; Bai, Q.; He, L. LLM-KT: Aligning Large Language Models with Knowledge Tracing using a Plug-and-Play Instruction. arXiv 2025, arXiv:2502.02945. [Google Scholar]
  28. Roumeliotis, K.I.; Tselikas, N.D.; Nasiopoulos, D.K. Fake News Detection and Classification: A Comparative Study of Convolutional Neural Networks, Large Language Models, and Natural Language Processing Models. Future Internet 2025, 17, 28. [Google Scholar] [CrossRef]
  29. Kuang, J.; Shen, Y.; Xie, J.; Luo, H.; Xu, Z.; Li, R.; Li, Y.; Cheng, X.; Lin, X.; Han, Y. Natural language understanding and inference with MLLM in visual question answering: A survey. ACM Comput. Surv. 2025, 57, 1–36. [Google Scholar] [CrossRef]
  30. Agrawal, G.; Pal, K.; Deng, Y.; Liu, H.; Chen, Y.C. Cyberq: Generating questions and answers for cybersecurity education using knowledge graph-augmented llms. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 20–27 February 2024; Volume 38, pp. 23164–23172. [Google Scholar]
  31. Wang, X.; Zhong, Y.; Huang, C.; Huang, X. ChatPRCS: A personalized support system for English reading comprehension based on ChatGPT. IEEE Trans. Learn. Technol. 2024, 17, 1722–1736. [Google Scholar] [CrossRef]
  32. Cui, P.; Sachan, M. Adaptive and personalized exercise generation for online language learning. arXiv 2023, arXiv:2306.02457. [Google Scholar] [CrossRef]
  33. Zhang, D.; Niu, Q.; Wang, T.; Hou, Y.; Wu, J.; Zhang, C.; Stefanidis, A. KGNN-KT: Enhancing Knowledge Tracing in Programming Education Through LLM-Extracted Knowledge Graphs. In Proceedings of the International Conference on Intelligent Computing, Shanghai, China, 23–25 May 2025; Springer: Berlin/Heidelberg, Germany, 2025; pp. 137–147. [Google Scholar]
  34. Sun, X.; Liu, Q.; Zhang, K.; Shen, S.; Yang, L.; Li, H. Harnessing code domain insights: Enhancing programming knowledge tracing with large language models. Knowl.-Based Syst. 2025, 317, 113396. [Google Scholar] [CrossRef]
  35. Li, Q.; Xia, W.; Yin, L.; Shen, J.; Rui, R.; Zhang, W.; Chen, X.; Tang, R.; Yu, Y. Graph enhanced hierarchical reinforcement learning for goal-oriented learning path recommendation. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, Birmingham, UK, 21–25 October 2023; pp. 1318–1327. [Google Scholar]
  36. Papoušek, J.; Stanislav, V.; Pelánek, R. Impact of question difficulty on engagement and learning. In Proceedings of the Intelligent Tutoring Systems: 13th International Conference, ITS 2016, Zagreb, Croatia, 7–10 June 2016; Proceedings 13. Springer: Berlin/Heidelberg, Germany, 2016; pp. 267–272. [Google Scholar]
  37. Zhang, X.; Shang, Y.; Ren, Y.; Liang, K. Dynamic multi-objective sequence-wise recommendation framework via deep reinforcement learning. Complex Intell. Syst. 2023, 9, 1891–1911. [Google Scholar] [CrossRef]
  38. Wei, C.; Duan, K.; Zhuo, S.; Wang, H.; Huang, S.; Liu, J. Enhanced Recommendation Systems with Retrieval-Augmented Large Language Model. J. Artif. Intell. Res. 2025, 82, 1147–1173. [Google Scholar] [CrossRef]
  39. Sakurai, K.; Togo, R.; Ogawa, T.; Haseyama, M. Llm is knowledge graph reasoner: Llm’s intuition-aware knowledge graph reasoning for cold-start sequential recommendation. In Proceedings of the European Conference on Information Retrieval, Lucca, Italy, 6–10 April 2025; Springer: Berlin/Heidelberg, Germany, 2025; pp. 263–278. [Google Scholar]
  40. Li, X.; Peng, S.; Yada, S.; Wakamiya, S.; Aramaki, E. GenKP: Generative knowledge prompts for enhancing large language models. Appl. Intell. 2025, 55, 464. [Google Scholar] [CrossRef]
  41. Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, MN, USA, 2–7 June 2019; pp. 4171–4186. [Google Scholar]
  42. Kabbur, S.; Ning, X.; Karypis, G. Fism: Factored item similarity models for top-n recommender systems. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Chicago, IL, USA, 11–14 August 2013; pp. 659–667. [Google Scholar]
  43. Zhou, Y.; Huang, C.; Hu, Q.; Zhu, J.; Tang, Y. Personalized learning full-path recommendation model based on LSTM neural networks. Inf. Sci. 2018, 444, 135–152. [Google Scholar] [CrossRef]
  44. Mnih, V.; Kavukcuoglu, K.; Silver, D.; Graves, A.; Antonoglou, I.; Wierstra, D.; Riedmiller, M. Playing atari with deep reinforcement learning. arXiv 2013, arXiv:1312.5602. [Google Scholar] [CrossRef]
  45. Hidasi, B.; Karatzoglou, A.; Baltrunas, L.; Tikk, D. Session-based recommendations with recurrent neural networks. arXiv 2015, arXiv:1511.06939. [Google Scholar]
  46. He, X.; Deng, K.; Wang, X.; Li, Y.; Zhang, Y.; Wang, M. Lightgcn: Simplifying and powering graph convolution network for recommendation. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual, 25–30 July 2020; pp. 639–648. [Google Scholar]
Figure 1. Diagram of the learning process.
Figure 2. Framework overview.
Figure 3. Prompt template of LLM.
Figure 4. Exploring the capabilities of LLM in predicting students’ knowledge states. The box represents the interquartile range, spanning from the 25th to the 75th percentile. The horizontal line inside the box indicates the median.
Figure 5. Weight coefficients of KT.
Figure 6. Mask ratio of KT.
Table 1. Key notations.
Notation | Description
Q = {q_1, q_2, …} | All exercises in the exercise bank
U = {u_1, u_2, …} | All involved students
C = {c_1, c_2, …} | All relevant knowledge concepts
u_i = {q_1, a_1, q_2, a_2, …} | A specific student’s learning records
a_i | Exercise answer, a_i ∈ [0, 1]
E_sup | Full score of the examination
E_s | Beginning score of the examination
E_e | Ending score of the examination
E_φ | A student’s promotion in specific knowledge concepts
KC(q) | A set of knowledge concepts contained in q
Diff(q_i) | Exercise difficulty of q_i
d | Expected difficulty
lp_c | Candidate exercises
s_i | A specific knowledge state of student i
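To make the score-related notation concrete, the sketch below illustrates one common way the examination scores are combined into a promotion value in the LPR literature, namely the score gain normalized by the remaining headroom. The function name and the normalization are illustrative assumptions for reading E_sup, E_s, E_e, and E_φ together, not a verbatim transcription of the paper’s formula.

    def promotion(e_start: float, e_end: float, e_sup: float) -> float:
        """Score gain normalized by remaining headroom (assumed formulation).

        e_start: examination score before the learning path (E_s)
        e_end:   examination score after the learning path (E_e)
        e_sup:   full score of the examination (E_sup)
        """
        headroom = e_sup - e_start
        if headroom <= 0:  # the student already holds the full score
            return 0.0
        return (e_end - e_start) / headroom

    # Example: a student improves from 40 to 70 on a 100-point examination.
    print(promotion(40.0, 70.0, 100.0))  # 0.5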
Table 2. Dataset statistics.
Dataset | Students | Exercises | KCs | Interactions
MOOCCubeX | 13,091 | 156 | 71 | 1,206,646
MOOPer | 26,603 | 1756 | 1360 | 2,007,572
XES3G5M | 16,378 | 6006 | 828 | 4,803,902
Table 3. Comparison of different methods in terms of promotion. Bold indicates the best result and underlining indicates the second best. * indicates significant improvement over baselines (p < 0.01).
Dataset | ρ | FISM | CluLSTM | DQN | GRU4Rec | LightGCN | GEHRL | SKarRec | KGNN-KT | LPReKL
MOOCCubeX | 1 | 0.1009 | 0.1590 | 0.2067 | 0.1802 | 0.1643 | 0.2179 | 0.2011 | 0.2032 | 0.2433
MOOCCubeX | 2 | 0.1519 | 0.2188 | 0.1923 | 0.2105 | 0.2294 | 0.2577 | 0.2371 | 0.2291 | 0.2574
MOOCCubeX | 3 | 0.2428 | 0.3422 | 0.3114 | 0.2956 | 0.2861 | 0.3446 | 0.3593 | 0.3015 | 0.3885
MOOPer | 1 | 0.1417 | 0.2499 | 0.2044 | 0.2208 | 0.1629 | 0.2391 | 0.2371 | 0.2205 | 0.2607
MOOPer | 2 | 0.1858 | 0.2355 | 0.2168 | 0.1977 | 0.2126 | 0.3065 | 0.3109 | 0.2817 | 0.3356
MOOPer | 3 | 0.2407 | 0.3696 | 0.3235 | 0.3316 | 0.3576 | 0.4973 | 0.4641 | 0.3762 | 0.4852
XES3G5M | 1 | 0.0196 | 0.0774 | 0.1642 | 0.1355 | 0.1178 | 0.1469 | 0.2028 | 0.1847 | 0.1952
XES3G5M | 2 | 0.0888 | 0.0417 | 0.1916 | 0.1755 | 0.1040 | 0.2021 | 0.2415 | 0.2163 | 0.2587
XES3G5M | 3 | 0.0165 | 0.2474 | 0.2168 | 0.2178 | 0.2062 | 0.2318 | 0.2671 | 0.2492 | 0.3002
Table 4. Comparison of different methods in terms of difficulty. Values closer to 0.3 indicate better performance. Bold indicates the best result and underlining indicates the second best.
Dataset | ρ | FISM | CluLSTM | DQN | GRU4Rec | LightGCN | GEHRL | SKarRec | LPReKL
MOOCCubeX | 1 | 0.4909 | 0.3559 | 0.3795 | 0.3822 | 0.3648 | 0.2194 | 0.4011 | 0.3321
MOOCCubeX | 2 | 0.2147 | 0.2311 | 0.4346 | 0.2394 | 0.2417 | 0.3734 | 0.3505 | 0.2849
MOOCCubeX | 3 | 0.4416 | 0.3655 | 0.3420 | 0.2324 | 0.3614 | 0.3981 | 0.2573 | 0.2790
MOOPer | 1 | 0.4636 | 0.3448 | 0.3826 | 0.4227 | 0.4116 | 0.2264 | 0.2391 | 0.3784
MOOPer | 2 | 0.4026 | 0.4058 | 0.2005 | 0.2152 | 0.2157 | 0.4463 | 0.3744 | 0.2374
MOOPer | 3 | 0.3617 | 0.3540 | 0.3675 | 0.3874 | 0.2391 | 0.3746 | 0.2256 | 0.2851
XES3G5M | 1 | 0.4314 | 0.2258 | 0.2499 | 0.4375 | 0.2088 | 0.3914 | 0.4021 | 0.3480
XES3G5M | 2 | 0.4121 | 0.3608 | 0.3750 | 0.2356 | 0.2165 | 0.2210 | 0.2572 | 0.3505
XES3G5M | 3 | 0.2444 | 0.3734 | 0.2338 | 0.3571 | 0.3753 | 0.4107 | 0.3847 | 0.2871
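Table 4 is read against the expected difficulty d = 0.3 defined in Table 1: the closer the reported difficulty of a recommended path is to 0.3, the better the path matches the intended challenge level. A minimal sketch of this reading follows, under the assumption that the reported value is the mean Diff(q_i) over the recommended exercises (the paper’s exact aggregation may differ); path_difficulty is an illustrative helper name.

    from statistics import mean

    def path_difficulty(difficulties: list[float], expected: float = 0.3) -> tuple[float, float]:
        """Mean difficulty of a recommended path and its gap to the expected difficulty d.

        difficulties: Diff(q_i) for each recommended exercise, assumed to lie in [0, 1]
        expected:     target difficulty d (0.3 in Table 4)
        """
        avg = mean(difficulties)
        return avg, abs(avg - expected)

    # Example: a four-exercise path whose difficulties cluster around 0.3.
    avg, gap = path_difficulty([0.25, 0.35, 0.30, 0.28])
    print(f"mean difficulty = {avg:.4f}, |mean - d| = {gap:.4f}")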
Table 5. Ablation study. Bold indicates the best result.
Dataset | ρ | LPReKL-F | LPReKL-K | LPReKL-L | LPReKL
MOOCCubeX | 1 | 0.1899 | 0.2647 | 0.1037 | 0.2643
MOOCCubeX | 2 | 0.2681 | 0.2501 | 0.1676 | 0.2507
MOOCCubeX | 3 | 0.3589 | 0.3829 | 0.2036 | 0.3885
MOOPer | 1 | 0.1046 | 0.2591 | 0.1398 | 0.2607
MOOPer | 2 | 0.0948 | 0.3359 | 0.1666 | 0.3356
MOOPer | 3 | 0.2127 | 0.4812 | 0.3198 | 0.4852
XES3G5M | 1 | 0.0414 | 0.1894 | 0.0857 | 0.1952
XES3G5M | 2 | 0.1745 | 0.2609 | 0.1659 | 0.2587
XES3G5M | 3 | 0.2141 | 0.2973 | 0.2184 | 0.3002
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
