1. Introduction
In AI-supported learning contexts, adaptive methods are increasingly used to tailor learning content and processes. Within these methods, dynamic control of task difficulty is central to maintaining a high level of learner engagement, especially in reading and analytical learning processes.
Despite the growing prevalence of digital learning resources, most still rely on coarse-grained structures for managing difficulty. The majority of e-learning platforms present the same tasks or questions to every user along a pre-defined path, overlooking the dynamics of individual learning: learners who struggle risk cognitive overload, while advanced learners are left under-challenged.
To overcome this shortcoming, this research puts forth a mathematical model of adaptive task difficulty for an AI-based learning platform. The model automatically adjusts the difficulty of reading comprehension and analysis assignments based on a student’s previous performance, aiming for a learning pace that is neither too easy nor too challenging. The method rests on a clearly interpretable adaptive control model, formulated mathematically, that regulates the difficulty of reading comprehension and analysis tasks in the AI-based learning environment.
The performance of the model is investigated through simulation experiments with learners of different skill levels. Convergence stability and adaptability are analyzed.
Objectives and contributions.
This study proposes the development of an interpretable, computation-sparing adaptive control model that is capable of dynamically adjusting the level of reading comprehension and post-reading analysis tasks in order to keep the performance of the learner close to the specified mastery level. The objectives of this study are as follows:
- (1) Develop a discrete-time adaptive learning algorithm that links difficulty levels to the discrepancy between actual and desired mastery rates;
- (2) Incorporate the model into an AI-enhanced learning environment, using item difficulty levels to dynamically update the corresponding learner states;
- (3) Evaluate convergence stability and responsiveness on two learner populations with differing starting knowledge;
- (4) Show that the model can support a stable learning environment at a computational cost low enough for real-time application.
The main contribution is a control-theoretic framework that formalizes adaptive regulation of reading comprehension task difficulty, as an alternative to rule-based adaptation methods, together with empirical evidence of stability in the adaptive learning process.
The study uses two datasets of learner performance records, obtained from simulation-based assessments (20 regular students and 18 advanced students), and evaluates the proposed model using convergence stability and association between performance and assigned difficulty.
The contributions of this work are as follows:
- A method for adaptive control of task difficulty in reading comprehension tasks.
- An interpretable feedback-based update mechanism linked to a target mastery level.
The rest of the article is structured as follows. Section 2 reviews the theoretical background and related work on adaptive learning systems. Section 3 presents the proposed mathematical model and experimental methods. Section 4 discusses the simulation results and their implications for personalized learning. Section 5 addresses ethical considerations, and Section 6 draws conclusions and explores future work.
2. Literature Review
The recent boom in artificial intelligence and data-driven technologies in education has produced a new generation of adaptive learning technologies that offer personalized learning materials. A considerable body of research shows that combining gamification with AI tools improves student motivation and engagement, particularly in reading and analytical thinking skills [1,2]. These systems use computational models to interpret student performance and adapt learning paths dynamically, sustaining cognitive engagement and optimizing learning.
Early research on adaptive learning distinguished two main branches: rule-based systems, which rely on heuristic rule sets grounded in expert knowledge, and data-driven adaptive models, which use machine learning algorithms to identify optimal learning paths from data [3,4]. Advances in natural language processing have further enabled adaptive learning through personalized comprehension questions and task adjustment according to student performance [5]. However, a major share of adaptive learning research still emphasizes recommending learning materials, leaving a gap in the adaptation of task difficulty itself [6,7,8,9,10,11].
From cognitive and pedagogical perspectives, researchers have emphasized maintaining an optimal match between task difficulty and learner ability, with the aim of developing critical thinking, metacognition, and persistence [12,13,14]. The concept aligns with Vygotsky’s Zone of Proximal Development, in which moderate task difficulty yields maximum learning outcomes. Consequently, adaptive adjustment of task complexity has emerged as a key technique for personalized, higher-order cognitive development. Recent advances build on this foundation through interdisciplinary work spanning artificial intelligence, behavior analysis, and educational psychology. Han et al. (2024) [15] studied factors related to the adoption and retention of AI-based adaptive systems among rural middle school pupils, finding that system sophistication, perceptions of feedback, and user experience were central. Notably, adaptive personalization plays an essential role in delivering equitable, high-quality learning experiences.
Embarak (2025) [16] reviewed Explainable Artificial Intelligence (XAI) in education, with a case study on adaptive systems, stressing that interpretability and transparency in AI-based personalization and adaptive task modeling are essential so that changes in adaptivity are understood by educators and learners alike.
A large-scale mapping study of 147 publications by Kabudi et al. (2021) surveyed AI-supported adaptive learning systems, highlighting research trends, approaches, and challenges in personalization [17]. Despite personalization being an emerging trend, few models focus on task or difficulty-level adjustment, underscoring the need for more mathematically grounded methods, such as the one offered in this article.
Lin et al. (2025) [18] proposed a human-centered adaptive system combining Human Factors Engineering with AI to adapt learning strategies to students’ motivation, interest, and confidence. Their results reveal the significant impact of psychological factors on learning outcomes, emphasizing the need for dynamic learning mechanisms in line with the adaptive task difficulty model used in the current research.
Beyond the educational sector, Mukti et al. (2025) [19] examined adaptive, AI-powered interfaces on various online platforms, identifying transparency, inclusiveness, and real-time personalized design as major themes. Although their discussion is not strictly limited to education, their findings carry clear implications for the design of adaptive learning environments.
Leveraging behavioral data, Embarak (2025) [20] introduced a behavior-driven adaptive system combining XAI with the Internet of Behavior (IoB) to optimize learning support for students. The system improved engagement and performance when support targeted poorly performing students, complementing the mathematical approach used in this research.
In a concurrent research thread, Ezzaim et al. (2024) [21] built a multi-factor adaptive e-learning system that uses clustering, regression, and decision trees to personalize learning by performance, interests, and learning styles, signaling the advantages of multi-dimensional learner models that parallel the present work on adaptive difficulty.
Qazi et al. (2024) [22] surveyed AI developments within Learning Management Systems (LMSs), identifying progress in infrastructure, usability, and intelligent analytics. They also note that, despite the proliferation of personalized functionalities such as chatbots and recommendation services, adaptive control of task complexity remains under-investigated, making models such as the one presented in this paper highly relevant.
Recent developments in generative AI have likewise produced novel personalization paradigms. Gianni et al. (2025) [23] combined Large Language Models (LLMs), meta-learning (MAML), and gamified extended reality (XR) to provide adaptive feedback in an immersive environment. Their results support the view that adaptive systems, whether behaviorally or mathematically based, increase learner engagement and motivation, reinforcing the relevance of adaptive task models.
At the same time, Tebourbi et al. (2025) [24] introduced AIA-PAL, a multi-agent learning framework that uses LLMs with retrieval-based generation to build personalized learning paths; although focused on a dialogic agent-based approach, it shares the goal of preserving optimal engagement.
More general reviews of AI in education are also available. Madanchian et al. (2025) [25] examined AI applications in modern educational environments, noting that they can increase adaptivity but pose challenges for equity, accessibility, and the protection of student information. The need for AI to preserve interpretability, a requirement met by the proposal in this article, is thus paramount.
At the intersection of psychometrics and physiology, Arevalillo-Herráez et al. (2023) [26] combined Item Response Theory (IRT) with electrocardiographic (ECG) signals to predict task difficulty. Although that approach exploits a rich set of signals, the present work suggests a simpler mathematical solution fit for large-scale reading comprehension applications.
Gamification remains a component of adaptive learning systems. Van den Broek et al. (2026) [27] investigated the impact of gamified feedback components, such as points and progress bars, in an adaptive vocabulary retrieval system. Despite increased motivation, no learning gains were observed, suggesting that cognitive-level adaptations, such as dynamic difficulty adjustment, are required alongside engagement strategies.
In a comparative study of adaptive versus preplanned metacognitive scaffolding in AI-supported programming tasks with elementary-level students, Liu et al. (2026) [28] confirmed that adaptive scaffolding significantly improves performance and reduces cognitive load, supporting the premise of the current model that difficulty manipulation can optimize learning efficiency and metacognitive engagement.
In a thorough bibliometric study, Feng et al. (2025) [29] revealed a paradigm shift in AI in Education (AIED) research toward human-centric, co-adaptive learning technologies, driven by breakthroughs in generative AI and LLM development. Developing personalized learning systems with cognitive balance has thus gained importance.
The empirical literature on the pedagogical usefulness of adaptive feedback is similarly robust. Katona and Katonane Gyonyoru (2025) [30] introduced AI-based adaptive software into a flipped learning strategy for programming education, yielding verifiable increases in both student participation and learning success. Along similar lines, Kinder et al. (2025) [31] found that adaptive feedback from ChatGPT improved diagnostic reasoning in pre-service educators beyond what static expert feedback achieved. Taken together, these results support the educational utility of real-time, personalized AI interventions, which the current research harnesses via a formal adaptive difficulty rule.
Bauer et al. (2025) [32] propose a personalized approach to simulation-based learning that varies scaffolding and feedback according to a conceptual learner profile, in line with this research’s aim of mathematically modeled personalization. In addition, a meta-analysis of 217 studies by Chernikova et al. (2025) [33], comparing adaptivity (system-level personalization) with adaptability (individual-level personalization), favors adaptivity, particularly for learning complex skills, lending strong support to adaptive models such as the difficulty-adaptation concept presented in this work.
In conclusion, the literature shows a marked shift from content-based recommendation systems toward cognitive-level adaptation mechanisms that dynamically control learning complexity. The current work continues this shift by proposing a formally explicit, performance-driven model that varies learning tasks in real time, thus filling the theory-to-implementation gap in AI-assisted adaptive learning.
Research gaps.
Notwithstanding considerable advances in AI-supported adaptive learning, several important knowledge gaps remain. First, much of the preceding work has concerned content recommendation or feedback systems, giving little attention to adapting task difficulty as a dynamic process.
Second, despite growing research on explainable AI in education, control theory-based, interpretable mechanisms for difficulty adaptation remain relatively unexplored, even for reading comprehension and analytical learning tasks. Finally, very few studies provide a thorough analysis of convergence properties, which are necessary for ensuring stable adaptation in the long run.
These issues motivate the current study, which proposes an explicitly mathematical and interpretable adaptive control model that regulates task difficulty dynamically and achieves stable convergence in AI-based learning systems efficiently.
3. Adaptive Learning Model
The proposed methodology introduces a control-theoretic approach to adaptive task difficulty regulation, specifically tailored to reading comprehension and post-text analytical tasks. Unlike existing adaptive learning frameworks that rely primarily on heuristic rules or data-driven personalization, the proposed method formulates difficulty adaptation as a discrete-time feedback control process with explicitly defined system states, control variables, and stability objectives.
3.1. Conceptual Framework
This article introduces a mathematical modeling approach geared specifically toward adapting the difficulty of reading comprehension and post-text analysis tasks within an AI-supported learning setting. The fundamental purpose of the model is to adjust the difficulty level dynamically with respect to a particular learning outcome.
The theoretical basis of this model combines principles from adaptive control theory and educational psychology with AI-informed personalized learning. The approach is consistent with the Zone of Proximal Development (ZPD), which positions learning within the band most conducive to developmental growth. The technique helps to avoid both under-challenge, which can lead to disengagement, and over-challenge, which can cause frustration and reduced performance.
3.2. Task Difficulty Modeling for Reading Comprehension
Regarding the area of reading comprehension and the analytical task that follows, the concept of task difficulty represents multiple dimensions of cognition that relate to the complexity of the text, the degree of inferential reasoning required, and the level of analysis needed to adequately address comprehension questions. In contrast to procedural learning, the process of reading comprehension involves the gradual engagement of cognition; as a consequence, this process lends itself to a systematic adjustment of task difficulty that follows performance feedback.
In this framework, every reading activity is associated with a normalized difficulty level D_t, where lower values indicate elementary factual comprehension and higher values indicate activities involving inference, synthesis, or critical analysis. Performance P_t is measured as the proportion of correct answers in the comprehension session at time step t.
The adaptive difficulty update rule is designed to align task difficulty with the learner’s acquired reading proficiency by comparing observed task mastery against a target, or optimal, mastery rate. The rule enables a smooth transition from basic comprehension skills to analytical tasks at an optimal cognitive load.
In that regard, by directly relating performance feedback to task difficulty updates, this model is able to incorporate concepts from educational psychology such as the Zone of Proximal Development, which is compatible with a mathematical adaptive control framework suitable for AI-based learning systems.
3.3. Mathematical Model of Adaptive Difficulty
The adaptive task difficulty is described by a discrete-time recurrence relation, which changes the difficulty level based on the performance of the learners, as specified in Equation (1):

D_{t+1} = D_t + α (P_t − P*)    (1)

where:
- D_t — difficulty level of the task at time step t;
- D_{t+1} — updated difficulty level for the next iteration;
- P_t — normalized student performance (percentage of correct responses);
- P* — target performance level, representing the optimal mastery rate (typically 0.75);
- α — adaptivity coefficient, controlling the rate of adjustment.
The model assumes an initial difficulty level D_0 = 0.5 (average complexity).
The coefficient α determines system responsiveness: larger values produce faster but potentially oscillatory adjustment, while smaller values produce slower, smoother adaptation.
For this study, a moderate value of α was chosen to ensure smooth adaptation.
Interpretation:
If P_t > P*: the learner performs above the target; the system increases difficulty.
If P_t < P*: the learner struggles; the system reduces difficulty.
If P_t = P*: the difficulty remains stable.
This formulation yields a self-correcting adaptive mechanism that stabilizes around an equilibrium where average performance equals the target success threshold.
The value of the adaptivity coefficient α was determined through simulation runs. Too large a value led to oscillations, while too small a value led to slow convergence. The chosen value thus reflects a trade-off, and it was kept constant across all simulations.
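As an illustration, Equation (1) takes only a few lines of code. In the sketch below, the clipping bounds [0.1, 1.0] follow the calibrated item-pool range described later in the article, while `alpha=0.1` is an illustrative default, not the tuned value used in the experiments (which the text does not disclose):

```python
def update_difficulty(d_t, p_t, alpha=0.1, target=0.75, d_min=0.1, d_max=1.0):
    """One step of Equation (1): D_{t+1} = D_t + alpha * (P_t - P*).

    The result is clipped to the calibrated item-pool range [d_min, d_max].
    alpha=0.1 is an illustrative default, not the study's tuned value.
    """
    d_next = d_t + alpha * (p_t - target)
    return max(d_min, min(d_max, d_next))
```

For example, a learner scoring 0.85 against the 0.75 target at difficulty 0.5 would be moved to 0.51, a small step upward, consistent with the smooth-adaptation goal above.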
3.4. Implementation Procedure for the Adaptive Difficulty Model
The adaptive difficulty model can be applied in the learning platform through the following steps:
- (1) Initialization. Every student starts at a moderate initial difficulty level, D_0, the midpoint of the range used for reading comprehension tasks.
- (2) Task Selection. In each learning cycle, reading comprehension and analysis items are chosen from the calibrated item pool according to the student’s current difficulty level, D_t.
- (3) Performance Measurement. Learner performance, P_t, is measured as the proportion of correctly answered questions in the assessment session.
- (4) Difficulty Update. The task difficulty level is updated using the proposed adaptive control equation, according to how actual performance compares with the target mastery rate.
- (5) Iteration. The new difficulty level, D_{t+1}, governs task selection in the next iteration.
This procedure ensures a transparent and reproducible adaptation process that regulates difficulty automatically, without manual parameter intervention.
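Step (2), Task Selection, can be sketched as follows. The tuple-based item pool and the widening tolerance band are illustrative assumptions, since the text specifies only that items are chosen to match the current difficulty level:

```python
import random

def select_tasks(pool, d_t, n_items=10, tolerance=0.05, rng=None):
    """Sample n_items whose calibrated difficulty lies near the learner's
    current level D_t. `pool` is a list of (item_id, difficulty) pairs.
    The tolerance band widens until enough candidates are available
    (a simple assumption; the article does not define the matching rule)."""
    rng = rng or random.Random(0)
    n_items = min(n_items, len(pool))
    while True:
        candidates = [item for item in pool if abs(item[1] - d_t) <= tolerance]
        if len(candidates) >= n_items:
            return rng.sample(candidates, n_items)
        tolerance += 0.05
```

With a pool of 210 calibrated items, a learner at D_t = 0.5 would receive 10 distinct items whose difficulty lies close to 0.5, satisfying the "same level of difficulty in each iteration" condition used in the experiments.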
3.5. Algorithm 1: Adaptive Task Difficulty Control
The adaptive task difficulty control algorithm works iteratively, adjusting task difficulty based on learner performance feedback. The algorithm starts by initializing task difficulty to a moderate baseline value, D_0, representing a middle ground for the learning task.
For each learning cycle t, a set of reading comprehension and post-text analysis tasks matching the current difficulty level is sampled. The students complete the assigned tasks, and a performance score P_t is recorded as the proportion of items answered correctly.
The task difficulty is then updated by the adaptive control policy according to the performance deviation, the gap between the achieved performance P_t and the target mastery level P*. The next difficulty level, D_{t+1}, is obtained by adjusting the current level in proportion to this deviation, scaled by the adaptivity coefficient α: difficulty increases when performance exceeds the target and decreases when performance falls below it.
The process is repeated for each succeeding learning session so that task difficulty asymptotically tracks the learner’s level of mastery.
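The asymptotic behavior described above can be demonstrated against a toy learner whose expected score declines linearly with difficulty. This response model is an assumption for demonstration only, not the paper's learner data:

```python
def run_adaptive_session(skill, n_iter=50, d0=0.5, alpha=0.1, target=0.75):
    """Iterate Algorithm 1 against a simulated learner whose performance
    falls linearly as difficulty rises (a toy model, not the study's data).
    Returns the difficulty trajectory D_0, D_1, ..., D_n."""
    d, history = d0, [d0]
    for _ in range(n_iter):
        p = max(0.0, min(1.0, skill - 0.5 * d))           # simulated score P_t
        d = max(0.1, min(1.0, d + alpha * (p - target)))  # Equation (1)
        history.append(d)
    return history

# Under this toy model the equilibrium satisfies P_t = P*,
# i.e. skill - 0.5 * D* = 0.75, so D* = 2 * (skill - 0.75).
```

A stronger learner (skill = 1.05) is driven upward toward D* = 0.6, while a weaker one (skill = 0.85) settles near D* = 0.2, illustrating the self-correcting equilibrium described in Section 3.3.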
3.6. Integration of Adaptive Control Theory, Educational Psychology, and AI-Based Personalization
The proposed model of adaptive task difficulty is based on an interdisciplinary integration of theories from the areas of adaptive control theory, the psychology of education, as well as artificial intelligence-inspired personalized education. Every discipline provides a unique component for the general conceptual framework.
From an adaptive control theory perspective, learner performance can be viewed as a feedback signal that serves to indicate adjustments to task difficulty. The discrete-time update rule can be seen as a feedback controller which serves to control the level of task complexity in order to keep learner performance around a desired level of mastery.
Educational psychology provides the pedagogical rationale for this control mechanism. The model supports the Zone of Proximal Development, positioning learning activities at an optimal cognitive level: not so hard as to cause frustration, nor so simple as to reduce motivation. By incrementally raising the analytical demand as mastery grows, cognitive and metacognitive progress is sustained.
In the context of a personalized learning framework informed by AI, the control process is realized by the processing of learning performance data in real time and the subsequent automated task selection. The performance data of the learners are collected and normalized to update the level of difficulty that guides the selection of the learners’ reading comprehension and analysis tasks.
Individually and collectively, these components form an integrated adaptive learning framework where mathematical control helps to ensure stability and educational theory helps to provide a basis for pedagogical validity, while AI-powered personalization helps to facilitate deployment on adaptive intelligent learning platforms.
Novelty of the proposed approach.
Even though the overall system architecture follows known AI-based learning patterns, for instance in AI literacy for mathematics and statistics, the innovation of this research lies in defining task difficulty through control theory: the difficulty level is updated toward a pre-defined mastery target, with observable convergence taken into account.
In addition, the model gives priority to interpretability and computational efficiency and supports interpretable regulation of the level of difficulty, which can easily be understood by educators and performed in a real-time environment. The results of empirical evaluation on diverse learner profiles also make this approach different.
4. Experiments and Results
4.1. Implementation Within the AI-Based Platform
The adaptive task difficulty approach is implemented as a separate module inside a web-based intelligent learning environment. The learning environment uses the following technologies:
AI components: Python-based services for natural language processing and recommendation, utilizing TensorFlow 2.17.0 and Scikit-learn 1.6.1.
The architecture of the complete system is represented in
Figure 1. The figure indicates the relationship between user interfaces, the adaptive engine, AI services, and the central data storage.
The adaptive difficulty engine is integrated into the Assessment and Feedback subsystem. It interfaces with the user data layer and question repositories via RESTful APIs. Following each reading comprehension assessment, the system calculates the learner’s performance score (i.e., the ratio of correct answers) and updates the task difficulty level according to the model’s recursive formula.
The bank of questions embedded in the content repository is pre-coded with difficulty gradations from 0.1 (quite simple) to 1.0 (quite challenging). The initial set of questions was coded manually by a pool of experts in language and literature, based on cognitive demand, language complexity, and the analytical work required. The coding scheme was validated through pilot testing with 40 students, in which item-response characteristics, namely correct-response rates and completion times, were assessed. At present, each difficulty level includes an average of 50–70 tasks, giving the adaptive engine a sufficiently dense item pool to match tasks to students’ readiness.
4.2. Data Generation and Evaluation Procedure
The values reported in Table 1 and Table 2 were obtained from controlled simulation studies conducted to analyze the performance of the proposed adaptive difficulty framework. For every learner, the performance score P_t was computed as the fraction of correct answers on a reading comprehension task consisting of 10 multiple-choice questions.
For every task, the questions were pre-marked with a normalized difficulty value in the interval [0.1, 1.0], based on expert assessments. From the first iteration onward, learners in every experiment were presented with questions matching the current difficulty value D_t.
After each evaluation round, the task difficulty level was updated through the proposed adaptive control expression using the performance value P_t, the target mastery level P*, and the adaptivity coefficient α. The updated difficulty value was then used to assign tasks in the subsequent iteration.
This process was applied over three iterations for every learner. The reported difficulty values and mean difficulties are direct outputs of the adaptive update process. Model evaluation focused on convergence, stability, and the association between learner performance and task difficulty.
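The evaluation procedure above can be reproduced on synthetic data. The response model below (a correctness probability that falls with difficulty) is an assumption for illustration; the study's tables record actual assessment scores:

```python
import random

def simulate_cohort(n_learners, mean_skill, n_iter=3, d0=0.5,
                    alpha=0.1, target=0.75, seed=0):
    """Mimic the evaluation procedure on synthetic data: each learner
    answers 10 items per iteration, P_t is the fraction correct, and
    difficulty follows Equation (1). The response model is an assumption,
    not the study's recorded scores. Returns mean difficulty per iteration."""
    rng = random.Random(seed)
    difficulties = [d0] * n_learners
    mean_d = []
    for _ in range(n_iter):
        for i in range(n_learners):
            # probability of a correct answer drops as difficulty rises
            p_correct = max(0.05, min(0.95, mean_skill - 0.4 * difficulties[i]))
            score = sum(rng.random() < p_correct for _ in range(10)) / 10
            difficulties[i] = max(0.1, min(1.0,
                                  difficulties[i] + alpha * (score - target)))
        mean_d.append(sum(difficulties) / n_learners)
    return mean_d
```

Running `simulate_cohort(20, 1.0)` mirrors the three-iteration, 20-learner setup of Experiment 1 and returns the per-iteration mean difficulty, the same summary statistic reported in the tables.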
4.3. Experiment 1
In order to test the effectiveness of the adaptive task difficulty model, a simulation study was conducted on 20 students of a general secondary school. The experimental design had three iterations (t = 1, 2, 3) encompassing a full reading comprehension test with 10 multiple-choice questions on the understanding and inference skills based on a short narrative or factual text.
Items were randomly selected from a pool of 210 questions, chosen automatically in each iteration according to the learner’s dynamic difficulty level. To ensure comparability, all learners were exposed to items of the same difficulty level within each iteration, and items were varied to reduce memorization effects; no question was repeated across the three iterations.
After every assessment, the system recalculated the task-difficulty parameter through Equation (1). For the experiments, the following parameter values were fixed:
Initial task difficulty: D_0 = 0.5
Target performance level: P* = 0.75
Adaptivity coefficient: α (a moderate value, as discussed in Section 3.3)
The performance score, P_t, calculated as the proportion of correct responses for each learner, is used to simulate a realistic degree of variability in performance. The findings of the simulation are shown in Table 1, which gives the difficulty updates and mean difficulties, with comments on performance.
An additional experimental group of 18 students enrolled in an intensive educational program was introduced to increase the robustness of the model. Their performance was used to validate the adaptive model’s behavior under conditions of lower performance variability and higher initial competence.
4.4. Model Evaluation and Metrics
In order to assess the applicability of the adaptive task-difficulty model, a number of salient indicators were considered in the simulation outcome. They are listed below.
Convergence Stability. The system converged to equilibrium difficulty values that keep learning performance close to the desired level (P* = 0.75). Oscillations decayed within the first 2–3 iterations for all but a small handful of subjects.
Adaptivity Sensitivity. A positive Pearson correlation was observed between task difficulty levels (D_t) and performance scores (P_t), confirming that the adaptation process behaves as modeled.
Pedagogical Efficiency. About 80% of the subjects displayed a positive path of increased task difficulties with stable or improved performance levels. This indicates an ideal adaptation process, making this learning model significantly relevant to education for skills within the optimum cognitive interval of the learner.
Computational Efficiency. The computational cost per update iteration for each learner averaged 0.001 s. This efficiency makes the model useful for real-time applications, such as intelligent learning platforms on the web.
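The adaptivity-sensitivity metric, the Pearson correlation between the D_t and P_t series, can be computed with a short stdlib-only helper; `scipy.stats.pearsonr` would give the same result:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series,
    e.g. task-difficulty levels (D_t) and performance scores (P_t)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Applied to each learner's difficulty and performance trajectories (or to the pooled cohort data), this yields the correlation used to judge whether rising difficulty tracks rising performance.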
Justification of Parameter Robustness.
The adaptive difficulty mechanism runs without human fine-tuning of the parameters: they are fixed once at the start, and convergence is achieved thereafter. The mastery-level parameter and the adaptivity coefficient λ were kept constant for all individuals and for both experimental groups.
Despite large differences in baseline proficiency and developmental trajectories, the model adjusted task-difficulty levels to settle at equilibrium values within a limited number of iterations, without any per-participant or per-group tuning.
This outcome indicates that the adaptation is driven by performance feedback rather than external control rules, allowing the model to self-regulate as learning progresses. For the experiments carried out here, this is sufficient.
4.5. Validation with Additional Learner Profiles
To further test the robustness and scalability of the adaptive task-difficulty model, a supplementary study was conducted using data from a new set of 18 students attending an advanced academic program. These participants were chosen as representative of users who start at a higher proficiency level, providing an opportunity to analyze the model's adaptability.
In this test, each participant completed three cycles of reading-comprehension tasks, with 10 distinct questions per cycle drawn from the same calibrated question pool as in the main experiment. Questions were selected according to each individual's predicted proficiency level and differed across cycles, so that both groups faced tasks of comparable complexity.
The detailed results for the advanced academic participants are provided below in
Table 2. The results show that the adaptive task-difficulty values increased gradually with each iteration, with the average value of D rising from 0.56 at the end of Iteration 1 to 0.61 at the completion of Iteration 3.
Moreover, the absence of major variations or divergences across learners further confirms the numerical stability of the adaptive process. This outcome indicates that the chosen adaptivity coefficient (λ = 0.4) provides a good trade-off between sensitivity and convergence speed.
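The trade-off governed by λ can be visualized with a toy simulation. Everything in the sketch below is illustrative rather than the study's data: it assumes the proportional update `d ← d + lam·(p − P_STAR)` and an idealized learner whose score decreases with difficulty, `p(d) = clip(ability − d + P_STAR)`. Under these assumptions, the distance to the fixed point d* = ability shrinks by a factor (1 − lam) per step, so a small λ adapts sluggishly while λ > 1 overshoots and oscillates.

```python
# Toy illustration of the sensitivity vs. convergence-speed trade-off in lam.
P_STAR = 0.75  # target mastery level

def clip(x: float) -> float:
    return min(1.0, max(0.0, x))

def simulate(lam: float, ability: float = 0.6, d0: float = 0.3, steps: int = 6):
    """Difficulty trajectory of one idealized learner (assumed model)."""
    d, traj = d0, [d0]
    for _ in range(steps):
        p = clip(ability - d + P_STAR)    # idealized performance on level d
        d = clip(d + lam * (p - P_STAR))  # proportional difficulty update
        traj.append(round(d, 3))
    return traj

for lam in (0.1, 0.4, 1.5):  # sluggish, balanced, oscillatory
    print(lam, simulate(lam))
```

In this toy setting the intermediate value of λ approaches the equilibrium within a few iterations without oscillation, mirroring the behavior reported for λ = 0.4.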
The consistent pattern of progression and convergence observed for this advanced group confirms the earlier findings obtained with regular students. Taken together, these results provide strong evidence that the adaptive difficulty model delivers consistent, reliable behavior without any need for manual parameter recalibration.
4.6. Summary
The proposed adaptive model provides a mathematically transparent mechanism for regulating personalized learning: by dynamically matching task difficulty to each student's ability, it balances cognitive challenge against performance mastery.
Based on
Table 3, a further element that differentiates the proposed control-theoretic model from other adaptive learning methods in the literature is that task difficulty is defined explicitly through a feedback process with analytically stable components. In contrast, rule-based systems [
15,
18] and data-driven personalization systems [
12] typically rely on implicit control mechanisms.
Such a comparison also serves to refine the positioning of the approach among existing adaptive learning systems and provides a conceptual basis for the subsequent treatment of learning outcomes discussed in
Section 5.
5. Discussion
5.1. Overview of Experimental Results
From a case-oriented perspective, the experimental outcomes show how the adaptive difficulty model handles learners with different initial proficiency levels. For instance, regular secondary-school learners who underperformed at the start of the experiment began receiving easier tasks, giving them a path to recover without disruption, while higher-performing learners received more challenging tasks. This behavior shows that the adaptive system acts as a personalized learning regulator rather than a static assessment tool.
The simulation study was conducted using a randomly chosen set of 20 students from a typical secondary school educational setting to evaluate the efficacy of the proposed adaptive difficulty level task model. The test was performed in three simulation phases for reading comprehension with 10 different questions in each phase, chosen from a prepared and scaled question set. The selection was performed automatically by the adaptive system, depending on the learner’s difficulty level, with no repetition of questions from the previous simulations.
Analysis of the results reveals that the model maintained a stable process of learning within the defined performance threshold. Students who attained proficiency levels above the cut-off threshold received more demanding tasks automatically, while students who scored less than the defined cut-off threshold received less demanding tasks.
The Pearson correlation coefficient value (r = 0.78) indicated the presence of a strong positive relationship between the students’ performance and their associated levels of difficulty as determined by the system. Convergence was reached by 80% of the participants with a margin of ±0.05 from the target mastery rate at the third iteration.
The collective adaptive dynamics for the two learner groups, consisting of 20 students from a regular secondary school and 18 from an advanced academic program, are shown in
Figure 2. It can be seen from the figure that, for both learner groups, the task difficulty is incrementally raised over a period before becoming stable in the range of 0.5–0.6, indicating smooth convergence. Although the average difficulty levels of the advanced academic group are marginally higher due to better initial proficiency, the stability trend is the same for both groups.
Figure 2 shows how the adaptive difficulty level evolves over the iterative learning process for the two learner groups. For regular secondary-school learners, the difficulty increases gradually from an average starting level and settles at around 0.45–0.50 by the third iteration.
The occasional reductions in task difficulty for this group illustrate the corrective function of the adaptive process whenever a student's performance drops below the target mastery criterion.
A similar convergence trend is observed for the advanced academic group, at a slightly higher equilibrium level (about 0.55–0.60), reflecting that group's higher initial proficiency. The absence of oscillations in both groups illustrates the stability of the adaptive control process. The graphical representations confirm the data shown in
Table 1 and
Table 2.
5.2. Interpretation and Pedagogical Implications
The research results support the primary hypothesis that the maintenance of an adaptive balance between learning performance and task difficulties increases the efficiency of learning as well as cognitive engagement.
Its advantage is that it dynamically matches task difficulty to each learner's current competence, keeping learners operating within their Zone of Proximal Development.
From a pedagogical standpoint, adaptive difficulty control offers several major advantages:
Preventing the disengagement that can result from repetitive, overly simple tasks.
Reducing cognitive overload for low-performing students, thus helping them remain motivated.
Providing support with self-paced mastery, where students are gradually challenged with a level of complexity that corresponds with what they are capable of learning.
The results are consistent with the overall conclusions made by Han et al. [
15], Lin et al. [
18], and Liu et al. [
28] that point to the importance of adaptation and feedback for developing learner engagement, metacognition, and academic success.
The adaptive difficulty approach proposed in this research embodies these educational principles in a mathematically precise way, via a control equation that is computationally efficient and therefore suitable for real-time use at the scale of AI-based educational systems. Its transparency further helps build trust in such systems.
These findings are consistent with recent studies emphasizing the importance of adaptive feedback and difficulty regulation in AI-supported learning environments [
15,
18,
28]. Unlike approaches based on heuristic adaptation or opaque generative models, the proposed control-theoretic formulation provides an interpretable mechanism that aligns adaptive behavior with pedagogical principles such as the Zone of Proximal Development.
5.3. Model Robustness
The obtained results clearly show that the developed adaptive difficulty model meets three essential requirements for intelligent educational systems, namely stability, interpretability, and scalability [
17,
21,
24].
Compared with heuristic, rule-based adaptation schemes, the recursive adaptation model described in this work has a mathematically simple formulation that can be embedded in a wide range of AI systems. The linear nature of the update makes real-time computation highly efficient at very low cost, suiting implementation in mobile or web-based learning systems.
The most notable aspect of the adaptation strategy is that adjustment is driven by the deviation from the target mastery rate, not by absolute performance. This allows the strategy to generalize across diverse learners, including those with high initial proficiency.
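This generalization property can be illustrated with a toy model: with one fixed parameter pair and no per-learner tuning, idealized learners of very different starting proficiency each settle at their own equilibrium difficulty. As before, the proportional update and the learner model `p(d) = clip(ability − d + target)` are assumptions of this sketch, not the paper's exact formulation.

```python
# Illustration: the same fixed parameters serve learners of all levels.
LAM, P_STAR = 0.4, 0.75  # adaptivity coefficient and mastery target

def clip(x: float) -> float:
    return min(1.0, max(0.0, x))

def equilibrium(ability: float, d0: float = 0.5, steps: int = 20) -> float:
    """Difficulty level reached after `steps` feedback updates."""
    d = d0
    for _ in range(steps):
        p = clip(ability - d + P_STAR)    # idealized performance
        d = clip(d + LAM * (p - P_STAR))  # same update for every learner
    return round(d, 3)

for ability in (0.3, 0.5, 0.8):  # struggling, average, advanced
    print(ability, equilibrium(ability))  # each settles near its own level
```

Because the correction depends only on the gap to the target rate, each learner's trajectory converges to a difficulty matched to that learner, which is the behavior observed across both experimental groups.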
5.4. Summary
The following are the findings from the experimental evaluation:
The adaptive model worked effectively to keep the performance of the learners close to the desired level of 75%.
The adjustment mechanism showed stable convergence for both regular learners and advanced learners.
The high correlation between performance and difficulty level (r = 0.78) supported consistent adaptive behavior.
The choice of adaptivity coefficient (λ = 0.4) supported smooth transitions with no instabilities.
The model generalized well across varied student profiles.
Taken together, these findings indicate that the suggested adaptive learning difficulty model has sound mathematical foundations, efficiency, and relevance to learning. The difficulty model has great potential to be a fundamental element of next-generation learning systems that are AI-based.
6. Ethical Considerations
The implementation of AI-based adaptive learning systems raises a number of ethics-related concerns, especially in regard to privacy, fairness, and transparency.
Firstly, the proposed system is intended to work with anonymized learning data, so that no personally identifiable information is processed or retained without the consent of the individuals involved. The system therefore follows a privacy-by-design approach, consistent with applicable data-protection regulations such as the General Data Protection Regulation (GDPR).
Secondly, the adaptive mechanism relies solely on performance indicators rather than identity attributes, which reduces the potential for demographic bias.
Thirdly, the recursive structure of the model lends itself to mathematical interpretation, which helps explain how difficulty levels are varied. The planned incorporation of Explainable AI (XAI) components is expected to further improve the transparency of system behavior, encouraging trust in AI-driven decisions.
In addition, the system is meant not to replace teachers but to serve as a supporting tool to complement personalized learning. The educators are still in full control of the tasks, with the adaptive component acting as a recommendation system but not a judge on its own.
By addressing these factors, the proposed approach aims to support ethical, inclusive, and responsible AI solutions for the educational sector.