Article

An Adaptive Task Difficulty Model for Personalized Reading Comprehension in AI-Based Learning Systems

by Aray M. Kassenkhan 1,*, Mateus Mendes 2,3,* and Akbayan Bekarystankyzy 4,*
1 Department of Software Engineering, Institute of Automation and Information Technologies, Satbayev University, Almaty 050013, Kazakhstan
2 Polytechnic University of Coimbra, Rua da Misericórdia, Lagar dos Cortiços, S. Martinho do Bispo, 3045-093 Coimbra, Portugal
3 RCM2+, Polytechnic University of Coimbra, Rua Pedro Nunes, 3030-199 Coimbra, Portugal
4 School of Digital Technologies, Narxoz University, Almaty 050035, Kazakhstan
* Authors to whom correspondence should be addressed.
Algorithms 2026, 19(2), 100; https://doi.org/10.3390/a19020100
Submission received: 24 December 2025 / Revised: 7 January 2026 / Accepted: 21 January 2026 / Published: 27 January 2026
(This article belongs to the Section Algorithms for Multidisciplinary Applications)

Abstract

This article proposes an interpretable adaptive control model for dynamically regulating task difficulty in artificial intelligence (AI)-augmented reading-comprehension learning systems. The model adjusts, on the fly, the complexity of reading comprehension and post-text analytical tasks based on learner performance, with the objective of maintaining an optimal difficulty level. Grounded in adaptive control theory and learning theory, the proposed algorithm updates task difficulty according to the deviation between observed learner performance and a predefined target mastery rate, modulated by an adaptivity coefficient. A simulation study involving heterogeneous learner profiles demonstrates stable convergence behavior and a strong positive correlation between task difficulty and learning performance (r = 0.78). The results indicate that the model achieves a balanced trade-off between learner engagement and cognitive load while maintaining low computational complexity, making it suitable for real-time integration into intelligent learning environments. The proposed approach contributes to AI-supported education by offering a transparent, control-theoretic alternative to heuristic difficulty adjustment mechanisms commonly used in e-learning systems.

1. Introduction

In AI-supported learning contexts, adaptive methods are becoming increasingly common as a way to tailor learning content and processes. Within these methods, dynamic control of task difficulty is a central concern for keeping learners engaged, especially in reading and analytical learning activities.
Despite the growing prevalence of digital learning resources, most still rely on coarse-grained structures for managing difficulty levels. The majority of e-learning platforms present the same tasks or questions to every user, following a predefined roadmap. This approach overlooks the dynamics of individual learning: struggling learners risk cognitive overload, while advanced learners face insufficient challenge and disengagement.
To overcome this shortcoming, this research puts forth a mathematical model of adaptive task difficulty for an AI-based learning platform. The model automatically adjusts the level of reading comprehension and analysis assignments based on a student's previous performance, aiming for an optimal learning pace that is neither too easy nor too challenging. The method is built on a clearly interpretable adaptive control model, formulated mathematically, that regulates the difficulty of reading comprehension and analysis tasks in the AI-based learning environment.
The performance of the model is investigated through simulation experiments with learners of different skill levels. Convergence stability and adaptability are analyzed.
Objectives and contributions.
This study develops an interpretable, computationally lightweight adaptive control model capable of dynamically adjusting the difficulty of reading comprehension and post-reading analysis tasks in order to keep learner performance close to a specified mastery level. The objectives of this study are as follows:
(1)
Develop a discrete-time adaptive learning algorithm that adjusts difficulty levels according to the discrepancy between the actual and desired mastery rates;
(2)
Integrate the model into an AI-enhanced learning environment, using calibrated item difficulty levels to dynamically update learner states;
(3)
Evaluate the stability and responsiveness of convergence on two learner populations with differing starting knowledge;
(4)
Show that the model supports a stable learning process at a low computational cost, adequate for real-time application.
The main contribution is a control-theoretic framework that formalizes the adaptive regulation of reading comprehension task difficulty, offered as an alternative to rule-based adaptation methods, together with empirical evidence of the stability of the adaptive learning process.
The study uses two datasets of learner performance records, obtained from simulation-based assessments (20 regular students and 18 advanced students), and evaluates the proposed model using convergence stability and association between performance and assigned difficulty.
The contributions of this work are as follows:
  • A method for adaptive control of task difficulty in reading comprehension tasks.
  • An interpretable feedback-based update mechanism linked to a target mastery level.
The rest of the article is structured as follows. The theoretical background and related works on adaptive learning systems are reviewed in Section 2. Section 3 contains the proposed mathematical model and experimental methods. Section 4 presents the experiments and results. Section 5 discusses the simulation results and their implications for personalized learning. Section 6 addresses ethical considerations. Section 7 draws conclusions and outlines future work.

2. Literature Review

The recent boom in artificial intelligence and data-driven technologies in the learning domain has led to a new breed of adaptive learning technologies, which offer personalized learning materials. A considerable amount of research work shows that combining gamification with artificial intelligence tools leads to improvements in student motivation and engagement, particularly in reading and analytical thinking skills [1,2]. The learning systems make use of computational models for understanding student performance and adapting learning paths dynamically, so as to sustain cognitive engagement and optimize learning.
Early research on adaptive learning classified models into two main branches: rule-based systems, which rely on heuristic rule sets derived from expert knowledge, and data-driven adaptive models, which use machine learning algorithms to identify optimal learning paths from data [3,4]. Advances in natural language processing have also enabled personalized comprehension questions and adaptive responses, with task adjustment according to the student's performance [5]. However, a major share of adaptive learning research still emphasizes recommending learning materials, leaving a gap in adaptation through task difficulty adjustment [6,7,8,9,10,11].
From cognitive and pedagogical perspectives, researchers have emphasized the importance of maintaining an optimal match between task difficulty and learner abilities, with the aim of developing critical thinking skills, metacognition, and persistence [12,13,14]. The concept aligns with Vygotsky's Zone of Proximal Development, where moderate task difficulty leads to maximum learning outcomes. Consequently, adaptive task complexity adjustment has emerged as an essential technique for personalized, higher-order cognitive development. More recently, this foundation has been extended by interdisciplinary research spanning artificial intelligence, behavior analysis, and educational psychology. Han et al. (2024) [15] studied the factors related to the adoption and retention of AI-based adaptive systems among rural middle school pupils, finding that system sophistication, perceived feedback quality, and user experience were central factors. Notably, adaptive personalization plays an essential role in delivering equitable, high-quality learning experiences.
A review of Explainable Artificial Intelligence (XAI) in education, with a case study on adaptive systems, was conducted by Embarak (2025) [16]. It underscores that interpretability and transparency in AI-based personalization and adaptive task modeling are essential so that changes in adaptivity are understood by educators and learners alike.
A large-scale mapping study of 147 publications, conducted by Kabudi et al. in 2021, surveyed AI-supported adaptive learning systems and highlighted research trends, approaches, and challenges involving personalization [17]. The results show that, despite personalization being an emerging trend, only a limited number of models focus on task or difficulty-level adjustment, pointing to a need for more mathematically grounded methods such as the one offered in this article.
Lin et al. (2025) [18] proposed a human-centered adaptive system that combined Human Factors Engineering with AI to adapt learning strategies based on the motivational level, interest, and confidence of the students. The results obtained by the researchers clearly reveal the significant impact of psychological factors on learning outcomes, thus emphasizing the need for a dynamic learning mechanism, which is in line with the adaptive task difficulty model used in the current research.
Beyond the educational sector, Mukti et al. (2025) [19], with a focus on adaptive, AI-powered interfaces on various online platforms, identified the major themes of transparency, inclusiveness, and real-time personalized design. Although their discussion is not strictly limited to education, their findings carry implications directly relevant to the design of adaptive learning environments.
Leveraging behavior data, Embarak (2025) [20] introduced a behavior-driven adaptive system that combined XAI with the Internet of Behavior (IoB) to optimize learning support for students. The system showed improved engagement and performance when support was targeted at poorly performing students, illustrating a behavior-driven adaptive approach complementary to the mathematical solution used in this research.
In a concurrent research thread, Ezzaim et al. (2024) [21] built a multi-factor adaptive e-learning system that uses clustering, regression, and decision trees to provide personalized learning based on performance, interests, and learning styles. This research signals the advantages of multi-dimensional learner models, which parallel the current research on adaptive difficulty.
Researchers such as Qazi et al. (2024) [22] have surveyed the use of AI developments within Learning Management Systems (LMSs), identifying progress in infrastructure, usability, and intelligent analytics. Additionally, they recognize that, despite the proliferation of personalized functionalities such as chatbots and recommendation services, research on adaptable controls for varying levels of task complexity remains inadequately investigated, thereby making models such as that identified within this paper highly relevant.
Recent developments in generative AI have also led to novel paradigms for personalization. Gianni et al. (2025) [23] proposed a combination of Large Language Models (LLMs), meta-learning (MAML), and gamified extended reality (XR) to provide an adaptive feedback mechanism in an immersed environment. The result of this study supports that adaptive systems, whether behaviorally based or mathematically based, increase engagement and motivation of the learner, thereby contributing to the relevance of adaptive models of tasks.
At the same time, Tebourbi et al. (2025) [24] introduced AIA-PAL, a multi-agent learning framework that uses LLMs with retrieval-based generation to develop personalized learning paths. Although that work focuses on a dialogic, agent-based approach, it shares the goal of preserving optimal engagement.
More general reviews on the application of AI in the education sector are also available. Madanchian et al. (2025) [25], for instance, studied the adoption of AI applications within modern education environments, citing how such applications can increase the level of adaptation while simultaneously posing challenges with respect to equity, accessibility, and the protection of student information. The need for AI to preserve interpretability, a requirement met by the proposal in this article, is thus paramount.
In the intersection of psychometrics and physiology, Arevalillo-Herráez et al. (2023) [26] combined Item Response Theory (IRT) with electrocardiographic (ECG) signals to predict task difficulty. Even though this approach utilizes a rich set of signals, a simpler mathematical solution, fit for a large-scale reading comprehension application, is suggested in this work.
Gamification remains a component of adaptive learning systems. The impact of gamified feedback elements such as points and progress bars on an adaptive vocabulary retrieval system was investigated by Van den Broek et al. (2026) [27]. Although motivation increased, learning outcomes did not improve, which suggests that cognitive-level adaptations, such as dynamic difficulty adjustment, are required alongside engagement strategies.
In a comparative study on adaptive vs. preplanned metacognitive scaffolding in AI-supported programming tasks with elementary-level students, the findings by Liu et al. (2026) [28] confirmed that scaffolding has a significant positive impact on performance as well as a decrease in cognitive load, thus supporting the tenets on which the current model is based, with the manipulation of difficulty being used for the optimization of learning efficiency and metacognitive engagement.
In a thorough bibliometric study, Feng et al. (2025) [29] have revealed the emergence of a paradigm shift in the research area of AI in Education (AIED), which is now directed towards the development of co-adaptive learning technologies with a human-centric approach, owing to breakthroughs in generative AI technologies as well as LLM developments. The need for developing personalized learning systems with a cognitive balance has thus gained importance.
The empirical literature concerning the pedagogical usefulness of adaptive feedback is similarly robust. Katona and Katonane Gyonyoru (2025) [30] introduced AI-based adaptive software into a flipped learning strategy for programming education, which led to verifiable increases in both student participation and learning success. Along similar lines, Kinder et al. (2025) [31] found that the use of adaptive feedback from ChatGPT improved diagnostic reasoning skills in pre-service educators, which exceeded what was achievable when receiving static expert feedback. Taken together, these results lend support for the educational utility of real-time, personalized AI interventions, which are harnessed in the current research via a full adaptive difficulty formula.
Bauer et al. (2025) [32] propose a personalized learning approach using simulation-based learning that varies scaffolding and feedback based on a conceptual-level learning profile, in line with this research's aim of mathematically modeled personalized learning. Additionally, a meta-analysis of 217 studies comparing the effects of adaptivity (system-level personalization) and adaptability (individual-level personalization), carried out by Chernikova et al. (2025) [33], favors adaptivity, particularly for learning complex skills, giving strong support to difficulty-adaptation models such as the one presented in this work.
In conclusion, from the literature, there is a marked shift from content-based recommendation systems towards adaptation mechanisms at a cognitive level that control learning complexity dynamically. The current research work is a continuation of this shift, as this research proposes a formally explicit, performance-oriented model that varies learning tasks in real time, thus filling the theoretical implementation gap of learning environment adaptation for AI-assisted learning.
Research gaps.
Notwithstanding the considerable advances that have been made in AI-based adaptive learning, a number of important knowledge gaps remain. First, much of the preceding work has been concerned mainly with content recommendation or feedback systems, giving little consideration to task difficulty adaptation as a dynamic process.
Second, despite the growing research on explainable AI in education, control-theoretic, explainable mechanisms for adapting task difficulty remain relatively unexplored, even for reading comprehension and analytical learning problems. Third, very few studies provide a thorough analysis of convergence properties, which are necessary for ensuring stable adaptation in the long run.
These issues pose the motivation for the current study, which proposes an explicitly mathematical and interpretable adaptive control model to control task difficulty dynamically and achieve stable convergence in AI-based learning systems efficiently.

3. Adaptive Learning Model

The proposed methodology introduces a control-theoretic approach to adaptive task difficulty regulation, specifically tailored to reading comprehension and post-text analytical tasks. Unlike existing adaptive learning frameworks that rely primarily on heuristic rules or data-driven personalization, the proposed method formulates difficulty adaptation as a discrete-time feedback control process with explicitly defined system states, control variables, and stability objectives.

3.1. Conceptual Framework

This article introduces a mathematical modeling approach geared specifically towards adapting the difficulty level of reading comprehension and post-text analysis tasks within an AI-supported learning setting. The fundamental purpose of the model is to dynamically adjust the level of difficulty with respect to a particular learning outcome.
The theoretical basis of this model is founded on principles from adaptive control theory and educational psychology, as well as AI-informed personalized learning. The approach is consistent with the Zone of Proximal Development (ZPD), which aims to position learning within the band most conducive to developmental growth. The technique helps to avoid both under-challenge, which can result in a lack of engagement, and over-challenge, which can result in frustration and reduced performance.

3.2. Task Difficulty Modeling for Reading Comprehension

Regarding the area of reading comprehension and the analytical task that follows, the concept of task difficulty represents multiple dimensions of cognition that relate to the complexity of the text, the degree of inferential reasoning required, and the level of analysis needed to adequately address comprehension questions. In contrast to procedural learning, the process of reading comprehension involves the gradual engagement of cognition; as a consequence, this process lends itself to a systematic adjustment of task difficulty that follows performance feedback.
In this framework, every reading activity is associated with a normalized difficulty level, where lower values indicate elementary factual comprehension and higher values indicate activities involving inference, synthesis, or critical analysis. Performance P_t is measured as the proportion of correct answers in a comprehension assessment session at time step t.
The adaptive difficulty update rule is designed to adjust task difficulty according to the deviation between the observed task mastery and a target (optimal) mastery rate. The rule enables a smooth transition from basic comprehension skills to analytic tasks at an optimal cognitive load.
In that regard, by directly relating performance feedback to task difficulty updates, this model is able to incorporate concepts from educational psychology such as the Zone of Proximal Development, which is compatible with a mathematical adaptive control framework suitable for AI-based learning systems.

3.3. Mathematical Model of Adaptive Difficulty

The adaptive task difficulty is described by a discrete-time recurrence relation, which changes the difficulty level based on the performance of the learners, as specified in Equation (1).
$$ D_{t+1} = D_t + \lambda \left( P_t - \bar{P} \right) \quad (1) $$
where:
D_t: difficulty level of the task at time step t;
D_{t+1}: updated difficulty level for the next iteration;
P_t ∈ [0, 1]: normalized student performance (proportion of correct responses);
P̄: target performance level, representing the optimal mastery rate (typically 0.75);
λ: adaptivity coefficient, controlling the rate of adjustment.
The model assumes an initial difficulty level D_0 = 0.5 (average complexity).
The coefficient λ determines system responsiveness:
  • A low λ results in gradual adjustment (stable but slow adaptation);
  • A high λ accelerates adaptation (responsive but potentially unstable).
For this study, a moderate value of λ = 0.4 was chosen to ensure smooth adaptation.
Interpretation:
  • If P_t > P̄: the learner performs above the target; the system increases difficulty.
  • If P_t < P̄: the learner struggles; the system reduces difficulty.
  • If P_t = P̄: the difficulty remains stable.
This formulation yields a self-correcting adaptive mechanism that stabilizes around an equilibrium where average performance equals the target success threshold.
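The self-correction claim admits a short equilibrium argument. Under the simplifying assumption, not made explicit in the analysis above, that performance responds to difficulty through a smooth decreasing function P_t = f(D_t), the fixed point and local stability of Equation (1) can be characterized as follows:

```latex
\text{Assume } P_t = f(D_t) \text{ with } f'(D) < 0. \text{ A fixed point } D^{*} \text{ of Eq. (1) satisfies}
D^{*} = D^{*} + \lambda \bigl( f(D^{*}) - \bar{P} \bigr)
\;\Longleftrightarrow\; f(D^{*}) = \bar{P}.
\text{Linearizing around } D^{*}\!:\quad
D_{t+1} - D^{*} \approx \bigl( 1 + \lambda f'(D^{*}) \bigr) \, (D_t - D^{*}),
\text{so the iteration is locally stable when }
\bigl| 1 + \lambda f'(D^{*}) \bigr| < 1
\;\Longleftrightarrow\; 0 < \lambda < \frac{2}{\lvert f'(D^{*}) \rvert}.
```

This is consistent with the simulation observation reported next: too large a λ violates the bound and oscillates, while a small λ converges slowly.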
The value of the adaptivity coefficient was determined through simulation runs. Too large a value led to oscillations, while too small a value led to slow convergence. Choosing λ thus involved a trade-off, and the value was kept constant for all simulations.
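As an illustration, the following minimal Python sketch implements the Equation (1) update and replays it for several values of λ against a synthetic learner; the response model `simulated_performance` (scores decreasing as difficulty rises above ability, plus noise) is an assumption made here for demonstration, not part of the published system.

```python
import random

def update_difficulty(d_t, p_t, lam=0.4, p_target=0.75, d_min=0.1, d_max=1.0):
    """One step of Equation (1): D_{t+1} = D_t + lambda * (P_t - P_bar).
    Clipping to [d_min, d_max] mirrors the calibrated item pool,
    which spans difficulty values 0.1-1.0."""
    return max(d_min, min(d_max, d_t + lam * (p_t - p_target)))

def simulated_performance(d_t, ability=0.7, noise=0.05, rng=random):
    """Illustrative assumption: performance drops as difficulty rises
    above the learner's ability, with small random variability."""
    p = ability - 0.5 * (d_t - ability) + rng.gauss(0.0, noise)
    return max(0.0, min(1.0, p))

random.seed(42)
for lam in (0.1, 0.4, 0.9):   # small: slow convergence; large: oscillation risk
    d, trajectory = 0.5, []   # D_0 = 0.5, mid-scale difficulty
    for _ in range(10):
        p = simulated_performance(d)
        d = update_difficulty(d, p, lam=lam)
        trajectory.append(round(d, 2))
    print(f"lambda={lam}: {trajectory}")
```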

3.4. Implementation Procedure for the Adaptive Difficulty Model

The adaptive difficulty model is applied in the learning platform through the following steps:
(1)
Initialization. Each learner starts at a moderate initial difficulty level, D_0, the midpoint of the scale for reading comprehension tasks.
(2)
Task Selection. In each learning cycle, reading comprehension and analysis items are drawn from the calibrated item pool according to the learner's current difficulty level, D_t.
(3)
Performance Measurement. Learner performance, P_t, is measured as the proportion of correctly answered questions during an assessment session.
(4)
Difficulty Update. The task difficulty is updated using the proposed adaptive control equation, which adjusts D_t according to the deviation between actual performance and the target mastery rate.
(5)
Iteration. The updated difficulty level, D_{t+1}, drives task selection in the next cycle.
This procedure ensures a transparent and reproducible adaptation process that regulates the difficulty level automatically, without manual parameter intervention.

3.5. Algorithm 1: Adaptive Task Difficulty Control

The adaptive task difficulty control algorithm works iteratively, adjusting task difficulty in response to learner performance feedback. The algorithm starts by initializing task difficulty to a moderate baseline value, D_0, representing a middle ground for the learning task.
For each learning cycle t, a set of reading comprehension and post-text analysis tasks matched to the current difficulty level D_t is sampled. The students complete the assigned set of tasks, and a performance score P_t is computed as the proportion of items answered correctly.
The task difficulty is then updated by the adaptive control policy according to the gap between the achieved performance P_t and the target mastery level P̄ (the performance deviation). The next difficulty level, D_{t+1}, is obtained by adjusting the current level in proportion to this deviation, scaled by the adaptivity coefficient λ: difficulty increases when performance exceeds the target mastery level and decreases when it falls below.
The process is repeated for each succeeding learning session, so that task difficulty converges asymptotically toward the learner's attained level of mastery. A code sketch of this loop is given below.
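A compact sketch of Algorithm 1 follows, reusing the `update_difficulty` step from the sketch in Section 3.3; `item_pool.select` and `learner.assess` are hypothetical interfaces standing in for the platform's task-selection and assessment services, not its actual API.

```python
def adaptive_task_difficulty_control(learner, item_pool, n_cycles=3,
                                     d0=0.5, lam=0.4, p_target=0.75):
    """Algorithm 1: regulate task difficulty from performance feedback.

    `item_pool.select(d)` and `learner.assess(tasks)` are hypothetical
    interfaces; `update_difficulty` is the Equation (1) step from the
    sketch in Section 3.3."""
    d = d0                                   # moderate baseline difficulty D_0
    history = []
    for t in range(1, n_cycles + 1):
        tasks = item_pool.select(d)          # items matched to D_t
        p = learner.assess(tasks)            # proportion correct, P_t
        d = update_difficulty(d, p, lam=lam, p_target=p_target)
        history.append((t, round(p, 2), round(d, 2)))
    return history                           # [(t, P_t, D_{t+1}), ...]
```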

3.6. Integration of Adaptive Control Theory, Educational Psychology, and AI-Based Personalization

The proposed model of adaptive task difficulty is based on an interdisciplinary integration of theories from the areas of adaptive control theory, the psychology of education, as well as artificial intelligence-inspired personalized education. Every discipline provides a unique component for the general conceptual framework.
From an adaptive control theory perspective, learner performance can be viewed as a feedback signal that serves to indicate adjustments to task difficulty. The discrete-time update rule can be seen as a feedback controller which serves to control the level of task complexity in order to keep learner performance around a desired level of mastery.
The underlying educational psychology provides the pedagogical rationale for this control mechanism. The model supports the Zone of Proximal Development by positioning learning activities around an optimal cognitive level: challenging enough to sustain motivation, but not so hard as to cause frustration. By incrementally raising the level of analysis as mastery grows, cognitive and metacognitive progress is sustained.
In the context of a personalized learning framework informed by AI, the control process is realized by the processing of learning performance data in real time and the subsequent automated task selection. The performance data of the learners are collected and normalized to update the level of difficulty that guides the selection of the learners’ reading comprehension and analysis tasks.
Individually and collectively, these components form an integrated adaptive learning framework where mathematical control helps to ensure stability and educational theory helps to provide a basis for pedagogical validity, while AI-powered personalization helps to facilitate deployment on adaptive intelligent learning platforms.
Novelty of the proposed approach.
Even though the overall system architecture follows known AI-based learning patterns, for instance in AI literacy for mathematics and statistics, the novelty of this research lies in formulating task difficulty regulation through control theory: difficulty is updated with respect to a predefined target mastery level, and the convergence of the process is observable.
In addition, the model prioritizes interpretability and computational efficiency, supporting difficulty regulation that educators can readily understand and that can run in real time. The empirical evaluation on diverse learner profiles further distinguishes the approach.

4. Experiments and Results

4.1. Implementation Within the AI-Based Platform

The adaptive task difficulty approach is implemented as a separate module inside a web-based intelligent learning environment. The learning environment uses the following technologies:
  • Back-end: Spring Boot (Kotlin)
  • Front-end: Angular
  • Database: PostgreSQL
  • Mobile client: Flutter
  • AI components: Python-based services for natural language processing and recommendation, using TensorFlow 2.17.0 and Scikit-learn 1.6.1.
The architecture of the complete system is represented in Figure 1. The figure indicates the relationship between user interfaces, the adaptive engine, AI services, and the central data storage.
The adaptive difficulty engine is integrated into the Assessment and Feedback subsystem. It interfaces with the user data layer and question repositories via RESTful APIs. Following each reading comprehension assessment, the system calculates the learner’s performance score (i.e., the ratio of correct answers) and updates the task difficulty level according to the model’s recursive formula.
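As an illustration of this integration, the following sketch exposes the difficulty update as a small REST service. It is written in Python with FastAPI for brevity, whereas the production back end uses Spring Boot (Kotlin), so the endpoint path and payload fields are assumptions, not the platform's actual API.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
LAM, P_TARGET = 0.4, 0.75           # adaptivity coefficient and mastery target

class AssessmentResult(BaseModel):
    learner_id: str
    current_difficulty: float        # D_t, within the calibrated range [0.1, 1.0]
    score: float                     # P_t, proportion of correct answers

@app.post("/adaptive/difficulty")
def next_difficulty(result: AssessmentResult):
    """Apply Equation (1) after an assessment and return D_{t+1},
    clipped to the item-pool range [0.1, 1.0]."""
    d = result.current_difficulty + LAM * (result.score - P_TARGET)
    d = max(0.1, min(1.0, d))
    return {"learner_id": result.learner_id, "next_difficulty": round(d, 2)}
```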
The bank of questions embedded in the content repository is pre-coded with difficulty gradations from 0.1 (very simple) to 1.0 (very challenging). The initial set of questions was coded manually by a pool of experts in language and literature, based on criteria of cognitive demand, language complexity, and the analytical work required. The coding scheme was validated through pilot testing with 40 students, in which item-response characteristics, namely correct-response rates and completion times, were assessed. At present, each difficulty level includes an average of 50–70 tasks, which gives the adaptive engine sufficient granularity for matching items to learners' readiness.
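A possible item-selection step consistent with this calibrated pool is sketched below; the widening search window and the item dictionary layout are illustrative assumptions rather than the platform's actual logic.

```python
import random

def select_items(pool, d_t, n=10, window=0.05, rng=random):
    """Draw n unused items whose calibrated difficulty lies near D_t.

    `pool` is assumed to be a list of dicts such as
    {"id": "q123", "difficulty": 0.4, "used": False}; the widening
    search window is an illustrative heuristic."""
    w = window
    while w <= 1.0:
        candidates = [it for it in pool
                      if not it["used"] and abs(it["difficulty"] - d_t) <= w]
        if len(candidates) >= n:
            chosen = rng.sample(candidates, n)
            for it in chosen:
                it["used"] = True        # no repetition across iterations
            return chosen
        w += 0.05                        # widen until enough items are found
    raise ValueError("item pool exhausted near difficulty %.2f" % d_t)
```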

4.2. Data Generation and Evaluation Procedure

The values reported in Table 1 and Table 2 were obtained from controlled simulation studies conducted to analyze the performance of the proposed adaptive difficulty framework. For every learner, the performance score P_t was computed as the fraction of correct answers in a reading comprehension task consisting of 10 multiple-choice questions.
For every task, the questions were pre-marked with a normalized difficulty value in the interval [0.1, 1.0], based on expert assessments. From the first iteration onward, all learners in every experiment were presented with questions matching the current difficulty value D_t.
After each evaluation round, the task difficulty level was updated using the proposed adaptive control expression, based on the performance value P_t, the target mastery level, and the adaptivity coefficient. The updated task difficulty D_{t+1} was then used to assign the tasks in the subsequent iteration.
This process was applied over three iterations for every learner. The reported difficulty values D_1, D_2, D_3 and the mean difficulty are direct outputs of the adaptive update process. Model evaluation focused on convergence, stability, and the association between learner performance and task difficulty.
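A minimal sketch of this evaluation protocol is shown below. Since the article does not specify how simulated scores were generated, the learner response model (score near a fixed ability, nudged by the current difficulty plus Gaussian noise) is an illustrative assumption.

```python
import random
from statistics import mean

def simulate_learner(ability, n_iters=3, d0=0.5, lam=0.4, p_target=0.75,
                     rng=random):
    """Three-iteration evaluation protocol for one simulated learner.

    The response model below is an illustrative assumption, not the
    article's actual score-generation procedure."""
    d, rows = d0, []
    for t in range(1, n_iters + 1):
        p = max(0.0, min(1.0, ability - 0.3 * (d - 0.5) + rng.gauss(0, 0.05)))
        d = max(0.1, min(1.0, d + lam * (p - p_target)))    # Equation (1)
        rows.append((t, round(p, 2), round(d, 2)))
    return rows, round(mean(r[2] for r in rows), 2)

random.seed(7)
for ability in (0.55, 0.70, 0.85):     # heterogeneous learner profiles
    rows, mean_d = simulate_learner(ability)
    print(f"ability={ability}: {rows}  mean difficulty={mean_d}")
```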

4.3. Experiment 1

In order to test the effectiveness of the adaptive task difficulty model, a simulation study was conducted with 20 students of a general secondary school. The experimental design had three iterations (t = 1, 2, 3), each comprising a full reading comprehension test with 10 multiple-choice questions on understanding and inference skills based on a short narrative or factual text.
Items were randomly selected from a pool of 210 questions, chosen automatically for each iteration based on the learner's current dynamic difficulty level. For comparability, all learners were exposed to items of the same difficulty level in each iteration. Items were varied to reduce the impact of memorization, and questions were not repeated across the three iterations.
After every assessment, the system recalculated the task-difficulty parameter D_t through Equation (1). For the experiments, the following parameter values were fixed:
  • Initial task difficulty: D_0 = 0.5
  • Target performance level: P̄ = 0.75
  • Adaptivity coefficient: λ = 0.4
The performance score P_t, calculated for each learner as the proportion of correct responses, was used to simulate a realistic degree of variability in performance. The findings of the simulation are shown in Table 1, which reports the difficulty updates and mean difficulties, with a comment on each learner's performance (parameters: λ = 0.4, P̄ = 0.75).
An additional experimental group of 18 students taking part in an intensive educational program was introduced to increase the robustness of the evaluation. Their performance was used to validate the behavior of the adaptive learning model under conditions with less performance variation and a higher initial level of competence.

4.4. Model Evaluation and Metrics

In order to assess the applicability of the adaptive task-difficulty model, a number of salient indicators were considered in the simulation outcome. They are listed below.
Convergence Stability. The system converged on equilibrium values of D_t that kept learning performance close to the desired level (P_t ≈ P̄). Convergence stabilized, with iteration oscillations decaying within the first 2–3 iterations for all but a small handful of subjects.
Adaptivity Sensitivity. A Pearson correlation coefficient of r = 0.78 was observed between task difficulty levels (D_t) and performance scores (P_t), indicating that the adaptation process effectively tracks performance.
Pedagogical Efficiency. About 80% of the subjects displayed a positive trajectory of increasing task difficulty with stable or improved performance, indicating adaptation within the learner's optimal cognitive interval.
Computational Efficiency. The computational cost per update iteration for each learner averaged 0.001 s. This efficiency makes the model useful for real-time applications, such as intelligent learning platforms on the web.
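The convergence and correlation metrics can be computed directly from the logged (P_t, D_t) pairs, as in the dependency-free sketch below; the example values are taken from learner A01 in Table 1.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def converged(performances, p_target=0.75, tol=0.05):
    """Convergence check used in the text: final performance within
    +/- tol of the target mastery rate."""
    return abs(performances[-1] - p_target) <= tol

p_log = [0.60, 0.72, 0.78]   # P_1..P_3 for learner A01 (Table 1)
d_log = [0.44, 0.46, 0.49]   # D_1..D_3 for learner A01 (Table 1)
print("r =", round(pearson_r(p_log, d_log), 2), "| converged:", converged(p_log))
```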
Justification of Parameter Robustness.
The adaptive difficulty mechanism runs without human fine-tuning of the parameters: the initial parameters are fixed, and convergence follows. The mastery-level parameter and the adaptivity coefficient were kept constant for all individuals and for both experimental groups.
Despite large differences in baseline proficiency and development paths, the model adjusted task difficulty levels to settle at equilibrium values after a limited number of iterations, without requiring any adjustments for individual participants or groups.
This outcome indicates that the adaptive process is driven by performance feedback rather than external control rules, enabling the model to adapt on its own to the learner's progression. For the experiments carried out here, this is sufficient.

4.5. Validation with Additional Learner Profiles

In order to further test the robustness and scalability of the adaptive task-difficulty model, a supplemental study was conducted using data from a new set of 18 students attending the advanced academic program. These participants were chosen because they were representative of users starting at a higher level of proficiency, thus providing the opportunity to analyze the model’s adaptability.
In this test, each participant underwent three cycles of reading comprehension tasks, consisting of 10 different questions in each cycle that were picked from the same set of calibrated questions as in the main experiment. These questions were chosen based on the predicted proficiency level of the individual and were different in each cycle. Thus, each group was exposed to tasks of an equal level of complexity.
The detailed results for the advanced academic participants are provided below in Table 2. The adaptive task difficulty values increased gradually with each iteration, with the average D rising from 0.56 after Iteration 1 to 0.61 after Iteration 3.
Moreover, the fact that there were no major variations or divergences in the values among different learners further confirms the numerical stability of the whole adaptive process. The outcome reveals that the choice of adaptivity coefficient (λ = 0.4) allows for the optimal trade-off between the sensitivity and the speed of convergence.
The consistent pattern of progression and convergence observed for this advanced group confirms the earlier findings obtained with regular students. Taken together, these findings provide strong evidence that the adaptive difficulty model delivers consistent and reliable behavior without any need for manual parameter recalibration.

4.6. Summary

The proposed adaptive model provides a mathematically transparent means of regulating personalized learning, balancing cognitive challenge and performance mastery by dynamically matching the task level to the abilities of the individual student.
Based on Table 3, another element that differentiates the developed control-theoretic approach from other adaptive learning methods in the literature is that task difficulty is explicitly defined in terms of a feedback process with analytically stable elements, whereas rule-based systems and data-driven personalization approaches [12,15,18] often leave the control process implicit.
Such a comparison also serves to refine the positioning of the approach among existing adaptive learning systems and provides a conceptual basis for the subsequent treatment of learning outcomes discussed in Section 5.

5. Discussion

5.1. Overview of Experimental Results

From a case-oriented perspective, the experimental outcome shows how the adaptive difficulty model handles learners with different starting proficiency levels. For instance, learners in the regular secondary school who underperformed at the beginning of the experiment began receiving less difficult tasks, giving them a path to recover without disruption, while higher performers received more challenging tasks. This behavior shows that the adaptive system acts as a personalized learning regulator rather than a static assessment tool, because it changes over time.
The simulation study was conducted using a randomly chosen set of 20 students from a typical secondary school educational setting to evaluate the efficacy of the proposed adaptive difficulty level task model. The test was performed in three simulation phases for reading comprehension with 10 different questions in each phase, chosen from a prepared and scaled question set. The selection was performed automatically by the adaptive system, depending on the learner’s difficulty level, with no repetition of questions from the previous simulations.
Analysis of the results reveals that the model maintained a stable process of learning within the defined performance threshold. Students who attained proficiency levels above the cut-off threshold received more demanding tasks automatically, while students who scored less than the defined cut-off threshold received less demanding tasks.
The Pearson correlation coefficient value (r = 0.78) indicated the presence of a strong positive relationship between the students’ performance and their associated levels of difficulty as determined by the system. Convergence was reached by 80% of the participants with a margin of ±0.05 from the target mastery rate at the third iteration.
The collective adaptive dynamics for the two learner groups, consisting of 20 students from a regular secondary school and 18 from an advanced academic program, are shown in Figure 2. For both groups, the task difficulty rises incrementally before stabilizing, in the approximate range of 0.45–0.60, indicating smooth convergence. Although the average difficulty levels of the advanced academic group are marginally higher due to better initial proficiency, the stability trend is the same for both groups.
Figure 2 shows how the adaptive difficulty level changes over the iterative learning process for two different groups of learners. For normal secondary school-level learners, it can be seen that the level of difficulty increases gradually from an average level to finally hover around levels of 0.45 to 0.50 by the third iteration.
The occasional reductions in task difficulty for the regular school group reflect the corrective function of the adaptive process whenever student performance falls below the desired mastery criterion.
A similar convergence trend is observed for the advanced academic group, at a slightly higher equilibrium level (about 0.55–0.60), reflecting this group's higher initial proficiency. The absence of oscillations in both groups illustrates the stability of the adaptive control process. The graphical representations confirm the data shown in Table 1 and Table 2.

5.2. Interpretation and Pedagogical Implications

The research results support the primary hypothesis that the maintenance of an adaptive balance between learning performance and task difficulties increases the efficiency of learning as well as cognitive engagement.
The advantage offered is that the model dynamically matches the degree of task difficulty to the learner's competence level, keeping learners operating within their Zone of Proximal Development.
Pedagogical consideration presents a number of major advantages of adaptive difficulty control:
  • The prevention of disengagement that might come from repetitive, simple tasks.
  • Reducing cognitive overload for low-performing students, thus helping them remain motivated.
  • Providing support with self-paced mastery, where students are gradually challenged with a level of complexity that corresponds with what they are capable of learning.
The results are consistent with the overall conclusions made by Han et al. [15], Lin et al. [18], and Liu et al. [28] that point to the importance of adaptation and feedback for developing learner engagement, metacognition, and academic success.
The adaptive difficulty learning approach that has been proposed in this research incorporates these educational ideals in a mathematically precise way via a control equation that is computationally efficient, making it amenable to implementation within real-time systems of the scale that would be used in AI-based educational systems. Its transparent nature further helps increase trust in such systems.
These findings are consistent with recent studies emphasizing the importance of adaptive feedback and difficulty regulation in AI-supported learning environments [15,18,28]. Unlike approaches based on heuristic adaptation or opaque generative models, the proposed control-theoretic formulation provides an interpretable mechanism that aligns adaptive behavior with pedagogical principles such as the Zone of Proximal Development.

5.3. Model Robustness

The obtained results clearly show that the developed adaptive difficulty model meets three essential requirements for intelligent educational systems, namely stability, interpretability, and scalability [17,21,24].
Compared to adaptation systems based on heuristic mappings, such as rule-based adaptation, the recursive adaptation model described here has a mathematically simple formulation that can be embedded in a wide range of AI systems. The linear update makes real-time computation highly efficient, at very low computational cost, suiting the model to mobile and web-based learning systems.
The most noticeable aspect of the adaptation strategy is that adaptation is performed relative to the deviation from the target mastery rate, not from absolute performance. This strategy generalizes across diverse learners, including those with a high level of initial proficiency.

5.4. Summary

The following are the findings from the experimental evaluation:
  • The adaptive model worked effectively to keep the performance of the learners close to the desired level of 75%.
  • The adjustment mechanism showed stable convergence for both regular learners and advanced learners.
  • The high correlation between performance and difficulty level (r = 0.78) supported consistent adaptive behavior.
  • The choice of adaptivity coefficient (λ = 0.4) supported a smooth transition with no instabilities.
  • The generated models had high levels of generalization on varied student profiles.
Taken together, these findings indicate that the suggested adaptive learning difficulty model has sound mathematical foundations, efficiency, and relevance to learning. The difficulty model has great potential to be a fundamental element of next-generation learning systems that are AI-based.

6. Ethical Considerations

The implementation of AI-based adaptive learning systems raises a number of ethics-related concerns, especially in regard to privacy, fairness, and transparency.
Firstly, the proposed system is intended to work with anonymized learning data, such that no personally identifiable information is processed or retained without the consent of the individuals involved. The learning system thus follows a privacy-by-design approach, consistent with applicable data-protection regulations such as the General Data Protection Regulation (GDPR).
Secondly, the adaptive learning system depends only on performance factors that are unrelated to learner identity, which reduces the risk of identity-related bias in the system.
Thirdly, the recursive structure of the model is amenable to mathematical interpretation, which helps to make sense of how the difficulty levels are varied. Planned incorporation of Explainable AI (XAI) components is also anticipated to aid in transparency with respect to system behavior, thus encouraging trust in AI-driven decisions.
In addition, the system is meant not to replace teachers but to serve as a supporting tool for personalized learning. Educators remain in full control of the tasks, with the adaptive component acting as a recommender rather than an autonomous arbiter.
In tackling such factors, the proposed approach is meant to facilitate the enhancement of ethical, inclusive, and responsible AI solutions for the educational sector.

7. Conclusions and Future Work

This work presents a mathematically sound and explainable adaptive control paradigm for task difficulty control in reading comprehension and analytical learning with AI support. The paradigm treats task difficulty adaptation as a discrete control problem and adjusts task difficulty according to the performance of the student with respect to a specified mastery target.
Simulation-based experiments were carried out on two learner groups with different initial proficiency levels. The results established convergence to a stable equilibrium: the adaptive system steers the task difficulty toward equilibrium ranges corresponding to the learners' proficiency, without any parameter resetting. Visualization of the difficulty trajectories and a qualitative comparison with existing adaptive learning systems helped to establish the efficiency of the proposed model.
In comparison to heuristic rule-based systems and opaque, data-driven machine learning personalization schemes, the control-theoretic formulation proposed in this study offers a more transparent and tractable approach to adaptive difficulty control in intelligent learning environments.
Future research will extend the model to incorporate learning-outcome measurements, emotion-informed feedback signals, and adaptive parameter adjustment. Empirical studies in real-world environments will also be conducted to validate the effectiveness of the proposed learning framework.

Author Contributions

Conceptualization, A.M.K. and A.B.; methodology, A.M.K., A.B. and M.M.; software, A.M.K.; validation, A.B. and M.M.; formal analysis, A.B. and M.M.; data curation, A.M.K.; writing—original draft preparation, A.M.K.; writing—review and editing, A.B. and M.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Committee of Science of the Ministry of Science and Higher Education of the Republic of Kazakhstan (Grant No. BR24993072).

Data Availability Statement

Data will be made available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Roopaei, M.; Roopaei, R. Gamifying AI Education for Young Minds: The TransAI Adventure in Learning. In Proceedings of the 5th IEEE Annual World AI IoT Congress (AIIoT), Melbourne, Australia, 24–26 July 2024; pp. 129–134. [Google Scholar] [CrossRef]
  2. Astashova, N.A.; Bondyreva, S.K.; Popova, O.S. Gamification resources in education: A theoretical approach. Educ. Sci. J. 2023, 25, 11–45. [Google Scholar] [CrossRef]
  3. Bucchiarone, A. Gamification and virtual reality for digital twin learning and training: Architecture and challenges. Virtual Real. Intell. Hardw. 2022, 4, 471–486. [Google Scholar] [CrossRef]
  4. McLeod, S.; Guy-Evans, O. Kolb’s learning styles and experiential learning cycle. In Simply Psychology; Routledge: London, UK, 2025. [Google Scholar]
  5. Papadatou-Pastou, M.; Touloumakos, A.K.; Koutouveli, C.; Barrable, A. The learning styles neuromyth: When the same term means different things to different teachers. Eur. J. Psychol. Educ. 2021, 36, 511–531. [Google Scholar] [CrossRef]
  6. Khosravi, H.; Cooper, K.; Kitto, K. RiPLE: Recommendation in peer-learning environments based on knowledge gaps and interests. arXiv 2017, arXiv:1704.00556. [Google Scholar]
  7. Hanin, V.; Colognesi, S.; Van Nieuwenhoven, C. From perceived competence to emotion regulation: Assessment of the effectiveness of an intervention among upper elementary students. Eur. J. Psychol. Educ. 2021, 36, 287–317. [Google Scholar] [CrossRef]
  8. Bobadilla, J.; Ortega, F.; Hernando, A.; Gutiérrez, A. Recommender systems survey. Knowl.-Based Syst. 2013, 46, 109–132. [Google Scholar] [CrossRef]
  9. Manouselis, N.; Drachsler, H.; Verbert, K.; Duval, E. Recommender Systems for Learning; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  10. Drachsler, H.; Verbert, K.; Santos, O.; Manouselis, N. Panorama of recommender systems to support learning. In Recommender Systems Handbook; Springer: Berlin/Heidelberg, Germany, 2015; pp. 421–451. [Google Scholar]
  11. Bandyopadhyay, S.; Szostek, J. Thinking critically about critical thinking: Assessing critical thinking of business students using multiple measures. J. Educ. Bus. 2019, 94, 259–270. [Google Scholar] [CrossRef]
  12. Hong, H.; Viriyavejakul, C.; Vate-U.-Lan, P. Enhancing critical thinking skills: Exploring generative AI-enabled cognitive offload instruction in English essay writing. J. Ecohumanism 2025, 4. [Google Scholar] [CrossRef]
  13. Ennis, R.H. Critical thinking across the curriculum: The Wisdom CTAC program. Inq. Crit. Think. Across Discip. 2013, 28, 25–45. [Google Scholar] [CrossRef]
  14. Condon, C.; Valverde, R. Increasing critical thinking in web-based graduate management courses. J. Inf. Technol. Educ. Res. 2014, 13, 177–191. [Google Scholar] [CrossRef]
  15. Han, J.; Liu, G.; Liu, X.; Yang, Y.; Quan, W.; Chen, Y. Continue using or gathering dust? A mixed method research on the factors influencing the continuous use intention for an AI-powered adaptive learning system for rural middle school students. Heliyon 2024, 10, e33251. [Google Scholar] [CrossRef]
  16. Embarak, O. Bridging Theory and Application: A Review of Explainable AI with a Case Study in Adaptive Learning Systems. Procedia Comput. Sci. 2025, 265, 73–82. [Google Scholar] [CrossRef]
  17. Kabudi, T.; Pappas, I.; Olsen, D.H. AI-enabled adaptive learning systems: A systematic mapping of the literature. Comput. Educ. Artif. Intell. 2021, 2, 100017. [Google Scholar] [CrossRef]
  18. Lin, K.-Y.; Li, M.-H.; Lo, F.-Y.; Huang, H.-C.; Matsuno, K.; Watanabe, R. Adaptive learning with human factors and Artificial Intelligence: Associations with training effectiveness in programming education. Int. J. Ind. Ergon. 2025, 110, 103834. [Google Scholar] [CrossRef]
  19. Mukti, A.J.; Trisilia, M. AI-Powered Adaptive Interface: Enhancing User Experience Through Real-Time Personalization in Digital Platforms. Procedia Comput. Sci. 2025, 269, 571–580. [Google Scholar] [CrossRef]
  20. Embarak, O. A Behaviour-Driven Framework for Smart Education: Leveraging Explainable AI and IoB in Personalized Learning Systems. Procedia Comput. Sci. 2025, 265, 457–466. [Google Scholar] [CrossRef]
  21. Ezzaim, A.; Dahbi, A.; Haidine, A.; Aqqal, A. Enabling Sustainable Learning: A Machine Learning Approach for an Eco-friendly Multi-factor Adaptive E-Learning System. Procedia Comput. Sci. 2024, 236, 533–540. [Google Scholar] [CrossRef]
  22. Qazi, S.; Kadri, M.B.; Naveed, M.; Khawaja, B.A.; Khan, S.Z.; Alam, M.M.; Su’ud, M.M. AI-Driven Learning Management Systems: Modern Developments, Challenges and Future Trends during the Age of ChatGPT. Comput. Mater. Contin. 2024, 80, 3289–3314. [Google Scholar] [CrossRef]
  23. Gianni, A.M.; Nikolakis, N.; Antoniadis, N. An LLM based learning framework for adaptive feedback mechanisms in gamified XR. Comput. Educ. X Real. 2025, 7, 100116. [Google Scholar] [CrossRef]
  24. Tebourbi, H.; Nouzri, S.; Mualla, Y.; Abbas-Turki, A. Artificial Intelligence Agents for Personalized Adaptive Learning. Procedia Comput. Sci. 2025, 265, 252–259. [Google Scholar] [CrossRef]
  25. Madanchian, M.; Drazenovic, G.; Ramzani, S.R.; Taherdoost, H. Integrating AI Tools to Enhance Learning Outcomes in Modern Education Systems. Procedia Comput. Sci. 2025, 263, 514–521. [Google Scholar] [CrossRef]
  26. Arevalillo-Herráez, M.; Katsigiannis, S.; Alqahtani, F.; Arnau-González, P. Fusing ECG signals and IRT models for task difficulty prediction in computerised educational systems. Knowl.-Based Syst. 2023, 280, 111052. [Google Scholar] [CrossRef]
  27. Van den Broek, G.S.E.; Scholten, S.; van Thuil, B.; van Rijn, H.; van Gog, T.; van der Velde, M. Gamified feedback in adaptive retrieval practice: Points and progress-bars enhance motivation but not learning. Comput. Hum. Behav. 2026, 177, 108862. [Google Scholar] [CrossRef]
  28. Liu, J.; Zhang, Y.; Li, W.; Wang, Q.; Niu, P.; Zhang, X. Adaptive vs. planned metacognitive scaffolding for computational thinking: Evidence from generative AI-supported programming in elementary education. Comput. Educ. 2026, 241, 105473. [Google Scholar] [CrossRef]
  29. Feng, S.; Zhang, H.; Gašević, D. Mapping the evolution of AI in education: Toward a co-adaptive and human-centered paradigm. Comput. Educ. Artif. Intell. 2025, 9, 100513. [Google Scholar] [CrossRef]
  30. Katona, J.; Katonane Gyonyoru, K.I. Integrating AI-based adaptive learning into the flipped classroom model to enhance engagement and learning outcomes. Comput. Educ. Artif. Intell. 2025, 8, 100392. [Google Scholar] [CrossRef]
  31. Kinder, A.; Briese, F.J.; Jacobs, M.; Dern, N.; Glodny, N.; Jacobs, S.; Leßmann, S. Effects of adaptive feedback generated by a large language model: A case study in teacher education. Comput. Educ. Artif. Intell. 2025, 8, 100349. [Google Scholar] [CrossRef]
  32. Bauer, E.; Heitzmann, N.; Bannert, M.; Chernikova, O.; Fischer, M.R.; Frenzel, A.C.; Gartmeier, M.; Hofer, S.I.; Holzberger, D.; Fischer, F.; et al. Personalizing simulation-based learning in higher education. Learn. Individ. Differ. 2025, 122, 102746. [Google Scholar] [CrossRef]
  33. Chernikova, O.; Sommerhoff, D.; Stadler, M.; Holzberger, D.; Nickl, M.; Seidel, T.; Kasneci, E.; Küchemann, S.; Kuhn, J.; Heitzmann, N. Personalization through adaptivity or adaptability? A meta-analysis on simulation-based learning in higher education. Educ. Res. Rev. 2025, 46, 100662. [Google Scholar] [CrossRef]
Figure 1. System architecture of the adaptive learning platform showing the integration of the adaptive difficulty engine with AI modules and database components.
Figure 2. Dynamic Adjustment of Task Difficulty Across Learner Profiles over Successive Learning Iterations (The figure plots the average task difficulty against the number of iterations for two different cohorts: 20 students from a regular secondary school and 18 students from an advanced academic program.).
Table 1. Comparison of Test Results for 20 Students (Experiment 1).

| No. | Student | Test 1 Score (P1) | D1 | Test 2 Score (P2) | D2 | Test 3 Score (P3) | D3 | Mean Difficulty | Observation |
|-----|---------|-------------------|------|-------------------|------|-------------------|------|-----------------|-------------|
| 1 | A01 | 0.60 | 0.44 | 0.72 | 0.46 | 0.78 | 0.49 | 0.46 | steady improvement |
| 2 | A02 | 0.82 | 0.58 | 0.87 | 0.61 | 0.90 | 0.64 | 0.61 | consistent growth |
| 3 | A03 | 0.50 | 0.40 | 0.60 | 0.44 | 0.70 | 0.48 | 0.44 | gradual progress |
| 4 | A04 | 0.90 | 0.62 | 0.85 | 0.61 | 0.80 | 0.60 | 0.61 | stable high level |
| 5 | A05 | 0.40 | 0.36 | 0.50 | 0.40 | 0.68 | 0.47 | 0.41 | recovering |
| 6 | A06 | 0.75 | 0.50 | 0.80 | 0.52 | 0.78 | 0.52 | 0.51 | optimal adaptation |
| 7 | A07 | 0.60 | 0.44 | 0.65 | 0.46 | 0.70 | 0.48 | 0.46 | slow but steady |
| 8 | A08 | 0.92 | 0.63 | 0.95 | 0.66 | 0.90 | 0.64 | 0.64 | advanced level |
| 9 | A09 | 0.70 | 0.48 | 0.74 | 0.49 | 0.76 | 0.50 | 0.49 | stable performance |
| 10 | A10 | 0.45 | 0.38 | 0.60 | 0.44 | 0.68 | 0.47 | 0.43 | improving trend |
| 11 | A11 | 0.55 | 0.42 | 0.72 | 0.49 | 0.78 | 0.51 | 0.47 | equalized performance |
| 12 | A12 | 0.88 | 0.61 | 0.90 | 0.64 | 0.87 | 0.63 | 0.63 | stable high |
| 13 | A13 | 0.65 | 0.46 | 0.68 | 0.47 | 0.75 | 0.50 | 0.48 | mild improvement |
| 14 | A14 | 0.72 | 0.49 | 0.76 | 0.50 | 0.78 | 0.51 | 0.50 | consistent |
| 15 | A15 | 0.35 | 0.34 | 0.48 | 0.39 | 0.55 | 0.42 | 0.38 | low performance |
| 16 | A16 | 0.80 | 0.56 | 0.83 | 0.58 | 0.85 | 0.59 | 0.58 | adaptive learner |
| 17 | A17 | 0.90 | 0.62 | 0.92 | 0.64 | 0.90 | 0.63 | 0.63 | high stability |
| 18 | A18 | 0.50 | 0.40 | 0.65 | 0.46 | 0.72 | 0.49 | 0.45 | positive dynamics |
| 19 | A19 | 0.60 | 0.44 | 0.75 | 0.50 | 0.77 | 0.51 | 0.48 | steady improvement |
| 20 | A20 | 0.68 | 0.47 | 0.70 | 0.48 | 0.74 | 0.49 | 0.48 | stable result |
Table 2. Detailed Results for 18 Students from an Advanced Academic Program.

| No. | Student | Iteration 1 (D1) | Iteration 2 (D2) | Iteration 3 (D3) | Mean Difficulty | Observation |
|-----|---------|------------------|------------------|------------------|-----------------|-------------|
| 1 | B01 | 0.54 | 0.57 | 0.60 | 0.57 | steady improvement |
| 2 | B02 | 0.56 | 0.58 | 0.61 | 0.58 | consistent growth |
| 3 | B03 | 0.55 | 0.58 | 0.60 | 0.58 | balanced performance |
| 4 | B04 | 0.53 | 0.56 | 0.58 | 0.56 | smooth progression |
| 5 | B05 | 0.57 | 0.59 | 0.61 | 0.59 | stable high performance |
| 6 | B06 | 0.55 | 0.58 | 0.59 | 0.57 | steady trend |
| 7 | B07 | 0.58 | 0.60 | 0.63 | 0.60 | strong upward trajectory |
| 8 | B08 | 0.59 | 0.61 | 0.64 | 0.61 | advanced learner |
| 9 | B09 | 0.54 | 0.56 | 0.59 | 0.56 | moderate improvement |
| 10 | B10 | 0.55 | 0.57 | 0.60 | 0.57 | consistent |
| 11 | B11 | 0.56 | 0.59 | 0.61 | 0.59 | stable performance |
| 12 | B12 | 0.53 | 0.56 | 0.58 | 0.56 | steady growth |
| 13 | B13 | 0.58 | 0.60 | 0.62 | 0.60 | adaptive convergence |
| 14 | B14 | 0.55 | 0.58 | 0.60 | 0.58 | balanced adaptation |
| 15 | B15 | 0.60 | 0.62 | 0.65 | 0.62 | advanced stability |
| 16 | B16 | 0.57 | 0.59 | 0.61 | 0.59 | optimal performance |
| 17 | B17 | 0.56 | 0.59 | 0.61 | 0.59 | sustained progression |
| 18 | B18 | 0.59 | 0.61 | 0.63 | 0.61 | consistent excellence |
Table 3. Qualitative Comparison of Adaptive Difficulty Approaches in AI-Supported Learning.

| Approach | Difficulty Adaptation | Interpretability | Stability Analysis | Computational Cost | Representative References |
|----------|-----------------------|------------------|--------------------|--------------------|---------------------------|
| Rule-based adaptive systems | Heuristic rules | High | Not reported | Low | [12,15] |
| ML-based personalization models | Implicit | Low | Rarely reported | Medium–High | [18,21,24] |
| LLM-driven adaptive feedback | Implicit | Low | Not reported | High | [27,29] |
| Proposed control-theoretic model | Feedback-based | High | Explicitly evaluated | Low | – |
