Article

AlphaLearn: A Multi-Objective Evolutionary Framework for Fair and Adaptive Optimization of E-Learning Pathways

1 Laboratory of Applied Sciences and Emerging Technologies, National School of Applied Science, Sidi Mohamed Ben Abdellah University, Fez 30000, Morocco
2 Laboratory of Computer Science, Innovation and Artificial Intelligence, Faculty of Sciences Dhar El Mehraz, Sidi Mohamed Ben Abdellah University, Fez 30000, Morocco
* Author to whom correspondence should be addressed.
Technologies 2026, 14(3), 162; https://doi.org/10.3390/technologies14030162
Submission received: 1 January 2026 / Revised: 8 February 2026 / Accepted: 17 February 2026 / Published: 5 March 2026

Abstract

Personalized e-learning seeks to adapt sequences of learning activities to individual learners, yet most existing adaptive platforms continue to rely on heuristic rules or single-objective optimization strategies. This paper introduces AlphaLearn, a conceptual evolutionary agent that frames learning pathway design as a constrained multi-objective optimization problem. The framework integrates knowledge graphs, learner modelling, and evolutionary algorithms to generate, evaluate, and iteratively refine candidate learning pathways under multiple pedagogical criteria. The contribution of this work is threefold. First, it presents a structured architectural framework for evolutionary learning pathway optimization, including a formal description of the optimization cycle and pathway representation. Second, it provides a descriptive analysis of large-scale learning analytics data from the Open University Learning Analytics Dataset (OULAD), illustrating substantial variability in learner outcomes, failure rates, and dropout across modules. Third, it offers an explicit discussion of fairness and bias mitigation, positioning equity as an integral dimension of adaptive pathway optimization rather than a post-hoc concern. The descriptive findings highlight pronounced heterogeneity in learner performance and engagement, motivating the need for adaptive systems capable of balancing learning effectiveness, efficiency, engagement, and fairness. While AlphaLearn is presented as a conceptual and methodological framework rather than a validated system, it establishes a foundation for future empirical evaluation and the development of fairness-aware evolutionary approaches to personalized e-learning.

1. Introduction

The rapid expansion of online and blended education has intensified research interest in personalized learning systems capable of adapting instructional pathways to the needs of individual learners. Advances in artificial intelligence (AI), learning analytics, and educational data mining have enabled learning management systems (LMS) and intelligent tutoring systems (ITS) to deliver adaptive feedback, content recommendations, and assessment at scale [1,2]. Despite this progress, most deployed adaptive learning platforms remain limited in their capacity to design effective learning trajectories for heterogeneous learners over extended time horizons.
A core limitation of current personalization approaches lies in learning pathway design: the problem of determining which learning activities should be presented, in which order, and at what level of granularity, given a learner’s prior knowledge, objectives, time constraints, and contextual conditions. This problem is inherently complex, involving large combinatorial search spaces, prerequisite constraints, uncertain learner states, and competing pedagogical objectives [3,4]. In practice, many systems rely on heuristic rules, instructor-defined sequences, or single-objective optimization criteria, such as short-term performance or course completion, which fail to capture the multifaceted nature of learning.
Existing approaches address this challenge only partially. Rule-based adaptive systems offer transparency and pedagogical control but require extensive manual authoring and scale poorly to diverse learner populations. Reinforcement-learning-based tutors model instructional decisions as sequential control problems, yet they often depend on carefully engineered reward functions and tend to optimize short-term outcomes rather than long-term mastery or retention [5,6]. Recommendation systems improve resource relevance using collaborative or content-based filtering but frequently overlook curricular structure, prerequisite dependencies, and causal learning effects [7,8]. As a result, there remains a gap between localized adaptive interventions and the broader goal of coherent, personalized learning pathway optimization.
More recently, research in artificial intelligence has demonstrated that complex design problems can be addressed through population-based, iterative optimization paradigms, in which candidate solutions are generated, evaluated using explicit criteria, and progressively refined through selection and variation. Such approaches have proven effective in domains characterized by large search spaces and multiple competing objectives. While these paradigms have primarily been explored outside education, they offer a useful conceptual foundation for rethinking how learning pathways can be designed, evaluated, and adapted in a principled and autonomous manner.
Building on this perspective, this article proposes AlphaLearn, a conceptual and methodological framework that formulates personalized learning pathway design as a constrained, multi-objective evolutionary optimization problem. AlphaLearn does not present an implemented system nor claim empirical performance gains. Instead, it provides a structured framework that integrates learner modelling, curricular knowledge representation, and evolutionary search mechanisms to explore the space of possible learning pathways in a systematic way.
AlphaLearn brings together three components that are often treated independently in the literature. First, learner modelling is employed to estimate learner state, expected mastery gain, engagement likelihood, and risk of failure or dropout, drawing on techniques from knowledge tracing and learning analytics [2,9]. Second, knowledge graphs and curricular constraints encode prerequisite relations and domain structure, ensuring that candidate pathways remain pedagogically valid and interpretable [10]. Third, an evolutionary optimization process maintains a population of candidate pathways, evaluates them using multiple pedagogical criteria, and refines them through selection, variation, and diversity-preserving mechanisms [11,12]. This design explicitly supports trade-offs between objectives such as learning gain, time efficiency, engagement, and robustness.
Fairness and equity constitute an additional motivation for this work. Recent studies in educational AI have shown that adaptive systems may inadvertently reinforce existing inequalities when trained on biased data or optimized solely for aggregate performance metrics [13]. Personalized pathway recommendation raises concerns about differential treatment of learner subgroups, potentially leading to unequal learning opportunities. AlphaLearn therefore treats fairness not as an external constraint but as an explicit dimension of pathway evaluation, allowing equity-related indicators to be incorporated into the optimization process.
The contribution of this article is threefold. First, it introduces AlphaLearn, a coherent conceptual framework that formalizes personalized learning pathway design as a multi-objective evolutionary optimization problem. Second, it provides a detailed architectural and algorithmic description of the proposed framework, including pathway representation, fitness components, and evolutionary search dynamics, serving as a foundation for future implementations and empirical studies. Third, it complements the conceptual framework with a descriptive analysis of learning analytics data from the Open University Learning Analytics Dataset (OULAD), illustrating substantial heterogeneity in learner outcomes and motivating the need for adaptive and fairness-aware sequencing strategies.
This work is positioned as a conceptual and methodological contribution rather than a fully validated system. Its objective is to establish a rigorous foundation for future simulations, implementations, and controlled experiments that investigate evolutionary approaches to personalized learning. By reframing learning pathway design as an evolutionary optimization problem, AlphaLearn aims to advance current research on adaptive e-learning toward more autonomous, scalable, and ethically grounded personalization.
The remainder of this paper is organized as follows. Section 2 reviews related work on adaptive learning systems, evolutionary optimization, and fairness in educational AI. Section 3 presents the AlphaLearn framework and its evolutionary search process. Section 4 reports a descriptive analysis of the OULAD dataset. Section 5 discusses implications, limitations, and ethical considerations.

2. Related Work

2.1. Adaptive Learning Systems and Personalized Instruction

Adaptive learning systems aim to tailor instructional content, sequencing, and feedback to individual learners based on their evolving knowledge, preferences, and performance. Early work in this area was dominated by intelligent tutoring systems (ITS), which rely on explicit domain models, learner models, and pedagogical rules to guide instruction [3]. While ITS have demonstrated strong effectiveness in well-defined domains such as mathematics and programming, they require extensive manual authoring and are difficult to scale across heterogeneous learners and domains.
More recent adaptive systems embedded in learning management systems (LMS) adopt lighter-weight personalization mechanisms, including conditional content release, mastery paths, and heuristic rules based on assessment outcomes. Although these approaches are easier to deploy, they typically provide limited adaptivity and rely on static instructional sequences defined by instructors [14]. As a result, personalization remains local and reactive, rather than proactive and globally optimized across learning trajectories.
Learning analytics and educational data mining have further expanded the capabilities of adaptive systems by enabling data-driven insights into learner behavior and performance [1,4]. However, many analytics-driven interventions focus on prediction (e.g., performance or dropout risk) rather than on optimization of learning pathways, leaving the design of instructional sequences largely unchanged.

2.2. Learner Modelling and Knowledge Tracing

Accurate learner modelling is a foundational component of personalized learning. A substantial body of research has focused on knowledge tracing, which seeks to estimate a learner’s mastery of underlying skills or concepts based on interaction data. Classical approaches such as Bayesian Knowledge Tracing (BKT) provide interpretable probabilistic models of skill acquisition but rely on simplifying assumptions about learning and forgetting [15].
More recent advances leverage neural and hybrid models, including Deep Knowledge Tracing (DKT) and memory-augmented architectures, which capture complex temporal patterns in learner behavior [9,16]. These models have improved predictive accuracy and are increasingly used to inform adaptive feedback and content recommendation. Nevertheless, learner modelling alone does not determine how learning activities should be sequenced over time, particularly when multiple objectives such as efficiency, engagement, and robustness must be balanced.
In practice, learner models are often used as inputs to rule-based or greedy decision policies, limiting their potential to support holistic pathway optimization. This gap highlights the need for mechanisms that can systematically explore alternative instructional sequences while leveraging learner state estimates.

2.3. Recommendation Systems and Learning Pathways

Recommendation systems have been widely applied in educational contexts to suggest learning resources based on learner preferences, behavior, or similarity to other users. Collaborative filtering and content-based approaches have been shown to improve resource relevance and learner satisfaction [7]. Hybrid recommenders further incorporate contextual features and learning objectives to refine recommendations [8].
Despite these advances, most educational recommender systems focus on item-level recommendations rather than on the construction of coherent learning pathways. They often ignore prerequisite structures, curricular constraints, and long-term learning goals, which can result in fragmented or pedagogically suboptimal sequences. Moreover, recommendation accuracy does not necessarily translate into improved learning outcomes, particularly when causal learning effects are not explicitly modeled [17].
Therefore, recommender-based approaches alone are insufficient to address the broader challenge of personalized pathway design across courses or programs.

2.4. Reinforcement Learning for Instructional Sequencing

Reinforcement learning (RL) has been explored as a framework for instructional decision-making, modeling the interaction between learner and system as a sequential decision process. RL-based tutors aim to select actions (e.g., tasks, hints, feedback) that maximize expected rewards, such as learning gain or engagement [6].
While RL provides a principled approach to sequential optimization, its application in education faces several challenges. Reward design is non-trivial, as short-term performance improvements may not align with long-term mastery or retention. RL methods also require substantial interaction data or high-fidelity simulators, which limits their applicability in real-world educational settings. Furthermore, most RL-based systems learn a single policy, reducing their ability to explore diverse instructional strategies or trade-offs among competing objectives.
These limitations suggest that complementary approaches capable of maintaining and evaluating multiple candidate pathways simultaneously may offer greater flexibility and robustness.

2.5. Evolutionary Optimization and Multi-Objective Approaches

Evolutionary algorithms (EAs) provide population-based optimization methods inspired by natural selection and have been successfully applied to problems characterized by large search spaces and multiple competing objectives. Multi-objective evolutionary algorithms, such as NSGA-II, explicitly maintain a set of non-dominated solutions, enabling exploration of trade-offs among objectives [11].
In educational research, evolutionary methods have been applied to curriculum sequencing, learning object selection, and scheduling, demonstrating potential advantages over greedy or rule-based approaches [12]. Variable-length representations further allow pathways of differing duration and depth to be optimized within a unified framework. However, most existing studies remain narrow in scope, focus on isolated components, or lack integration with modern learner modelling and learning analytics.
Moreover, fairness and equity considerations are rarely incorporated into evolutionary optimization in educational contexts, despite growing evidence that adaptive systems may produce disparate outcomes across learner groups.

2.6. Fairness and Ethical Considerations in Educational AI

Fairness has emerged as a critical concern in AI-driven educational systems. Research shows that predictive and adaptive models trained on historical data can reflect and amplify existing biases related to gender, socio-economic status, or prior educational opportunity [13]. In adaptive learning systems, biased personalization may lead to systematically different learning opportunities for different groups.
Recent work proposes fairness-aware approaches that incorporate equity constraints or regularization terms into optimization objectives, or that monitor group-level outcomes to detect disparate impact [18,19]. However, these ideas are only beginning to be explored in the context of learning pathway design and sequencing.
This gap motivates the integration of fairness considerations directly into the design and evaluation of adaptive learning pathways, rather than treating them as post-hoc adjustments.

2.7. Research Gap and Positioning of AlphaLearn

In summary, prior research has made substantial progress in learner modelling, recommendation systems, reinforcement learning, and evolutionary optimization for education. However, existing approaches tend to address isolated aspects of personalization and rarely provide an integrated framework for multi-objective, constraint-aware, and fairness-conscious learning pathway optimization.
AlphaLearn is positioned to address this gap by combining learner modelling, curricular knowledge representation, and evolutionary search within a unified conceptual framework. Unlike rule-based, greedy, or single-policy approaches, AlphaLearn explicitly maintains and evaluates multiple candidate pathways, supports trade-offs among pedagogical objectives, and incorporates fairness as an optimization dimension. In doing so, it advances the methodological foundations for adaptive learning systems capable of autonomous, scalable, and ethically grounded personalization.

3. Descriptive Analysis of the OULAD Dataset

3.1. Dataset Overview

The Open University Learning Analytics Dataset (OULAD) is a publicly available dataset that contains demographic information, assessment outcomes, and aggregated clickstream data for learners enrolled in distance education courses at the Open University [20]. The dataset covers seven courses (modules) delivered in 22 presentations and includes 32,593 learners, with more than 10.6 million recorded interaction events, making it a widely used benchmark in learning analytics research.
At the Open University, courses are referred to as modules, each of which may be offered in multiple presentations identified by the year and month of commencement. For privacy reasons, module identifiers are anonymized in the released dataset [20]. Several analyses focus on seven representative modules: four from science, technology, engineering, and mathematics (STEM) and three from the social sciences. Together, these modules illustrate substantial variation in learner outcomes.
For this exploratory analysis, the studentInfo and studentRegistration tables were used to compute descriptive statistics for 32,757 learner records. Each learner’s outcome is categorized as Distinction, Pass, Fail, or Withdrawn. Although the analysis is descriptive and based on a single dataset, it provides valuable insight into the diversity of learner trajectories and engagement patterns that motivate adaptive learning pathway design.
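As a concrete illustration of how such descriptive statistics can be derived, the sketch below computes the overall outcome distribution and module-level failure rates with pandas. The DataFrame is a tiny synthetic stand-in for the real studentInfo table (the column names code_module and final_result match the released dataset); the rows and resulting numbers are illustrative only.

```python
import pandas as pd

# Synthetic stand-in for the OULAD studentInfo table; the released file is
# studentInfo.csv and contains (among others) the two columns used here.
student_info = pd.DataFrame({
    "code_module": ["AAA", "AAA", "BBB", "BBB", "BBB", "GGG"],
    "final_result": ["Pass", "Withdrawn", "Fail", "Pass", "Distinction", "Fail"],
})

# Overall distribution of final outcomes (cf. Figure 1).
outcome_share = student_info["final_result"].value_counts(normalize=True)

# Module-level failure rates (cf. Table 1 and Figure 2): the mean of a
# boolean "failed" flag within each module is that module's failure rate.
failure_rate = (
    student_info.assign(failed=student_info["final_result"].eq("Fail"))
    .groupby("code_module")["failed"]
    .mean()
)

print(outcome_share)
print(failure_rate)
```

The same two operations, applied to the full table, yield the percentages reported in Sections 3.2 and 3.3.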

3.2. Distribution of Final Outcomes

Figure 1 presents the overall distribution of final outcomes across all analyzed learners. Approximately 12.3% of learners achieve a Distinction, 37.7% obtain a Pass, 21.5% Fail, and 31.0% Withdraw from their course.
The relatively high withdrawal rate is consistent with prior findings in large-scale online and distance education and highlights the importance of engagement and persistence as central challenges for personalized learning systems [1,21]. These results suggest that learning success cannot be adequately characterized by performance outcomes alone and that adaptive systems must also account for learner perseverance.

3.3. Module-Level Performance and Failure Rates

Table 1 summarizes learner outcomes and failure rates for the seven anonymized modules, while Figure 2 visualizes the corresponding failure rates. Failure rates vary substantially across modules, ranging from 12.2% in module AAA to 28.7% in module GGG. Intermediate failure rates are observed for modules BBB (22.3%), CCC (17.6%), DDD (22.5%), EEE (19.2%), and FFF (22.0%).
This variability indicates that learner success is influenced not only by individual characteristics but also by module-specific factors, such as curriculum structure, assessment design, pacing, and instructional support. Similar module-level effects have been reported in previous learning analytics studies, which emphasize the role of course design in shaping learner outcomes [4,22].

3.4. Withdrawal Rates by Module

Figure 3 shows withdrawal rates across the seven modules. Modules CCC and DDD exhibit particularly high withdrawal rates, exceeding 35.9%, whereas module GGG shows a comparatively low withdrawal rate of approximately 11.5%. These differences further suggest that structural and pedagogical characteristics vary significantly across modules and should be considered when designing personalized learning pathways.
High withdrawal rates are especially relevant for adaptive systems, as they point to potential mismatches between learner needs and course demands. From a personalization perspective, such patterns motivate the inclusion of engagement- and risk-sensitive indicators in pathway evaluation.

3.5. Registration Lead Time and Learner Persistence

To explore whether early registration is associated with learner persistence, Figure 4 reports the average number of days between course registration and the official start date for each outcome category. Learners who ultimately withdraw register the earliest on average, approximately 78 days before the course start, whereas learners who achieve a Distinction or Pass register around two months in advance. Learners who fail exhibit the shortest average registration lead time, at approximately 63 days.
These findings indicate that early registration alone is not a reliable proxy for motivation or perseverance, reinforcing prior observations that simple behavioral heuristics have limited predictive power in online learning contexts [21,23]. This further supports the need for dynamic, data-informed learner modelling in adaptive learning systems.

3.6. Summary of Descriptive Findings

Overall, the descriptive analysis of OULAD highlights substantial heterogeneity in learner outcomes, both across individuals and across modules. Failure and withdrawal rates vary markedly depending on course context, and simple indicators such as registration timing provide limited insight into learner success. These observations motivate the need for adaptive learning frameworks that can account for multiple objectives and contextual factors when designing personalized learning pathways, as proposed in the AlphaLearn framework.

4. The AlphaLearn Framework

This section provides a conceptual and architectural specification of the AlphaLearn framework rather than a description of a deployed or empirically evaluated system. The purpose is to formalize the structure and optimization logic of the framework, independent of any specific implementation.
AlphaLearn is conceived as a modular agent composed of five interacting layers, as illustrated in Figure 5.

4.1. Overview and Design Rationale

AlphaLearn is proposed as a conceptual and methodological framework for personalized learning pathway design, grounded in the formulation of instructional sequencing as a constrained, multi-objective optimization problem. The framework is designed to address three recurrent limitations in existing adaptive learning systems: (1) reliance on static or greedy sequencing strategies, (2) limited capacity to balance multiple pedagogical objectives, and (3) insufficient integration of fairness considerations.
Rather than selecting a single “best” next activity, AlphaLearn maintains and evaluates a population of candidate learning pathways, enabling systematic exploration of alternative instructional trajectories. Each pathway represents a coherent sequence of learning resources that respects curricular constraints and is evaluated using learner-specific predictive signals. This population-based perspective allows AlphaLearn to model personalization as an iterative refinement process, in which pathways are progressively improved according to explicit pedagogical criteria.
AlphaLearn is not presented as an implemented system, but as a formal framework intended to guide future implementations and empirical studies. Its design emphasizes modularity, interpretability, and extensibility.

4.2. Architectural Components

AlphaLearn is organized into five interacting layers, each corresponding to a distinct functional role in the personalization process.
Data and Resource Layer
The Data and Resource Layer stores and organizes the instructional content and curricular structure. It includes:
  • Learning resources (e.g., videos, readings, exercises, assessments),
  • Metadata describing resource attributes such as estimated difficulty, duration, learning objectives, and modality,
  • Knowledge graphs encoding prerequisite relations and conceptual dependencies.
This layer defines the feasible search space for pathway generation. By encoding curricular constraints explicitly, AlphaLearn ensures that candidate pathways remain pedagogically valid and interpretable.
Learner Model Layer
The Learner Model Layer estimates the learner’s current and prospective learning state based on interaction data. It provides predictive signals used to evaluate candidate pathways, including:
  • Estimated mastery of skills or concepts,
  • Predicted learning gain associated with specific resources,
  • Engagement likelihood and persistence indicators,
  • Risk estimates for failure or withdrawal.
These estimates may be derived from knowledge tracing models, learning analytics features, or hybrid approaches. Importantly, AlphaLearn treats the learner model as a black-box predictor: the framework does not depend on a specific modelling technique, allowing different models to be substituted without altering the optimization logic.
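The black-box treatment of the learner model can be made concrete with a minimal interface sketch. The method names below are assumptions chosen for illustration, not an API defined by the framework; any knowledge tracing or learning analytics model exposing comparable signals could be substituted without touching the optimizer.

```python
from typing import Protocol

class LearnerModel(Protocol):
    """Hypothetical minimal contract for the Learner Model Layer:
    the optimizer only consumes these predictive signals."""
    def mastery(self, skill: str) -> float: ...
    def expected_gain(self, resource: str) -> float: ...
    def engagement_likelihood(self, resource: str) -> float: ...
    def dropout_risk(self) -> float: ...

class ConstantModel:
    """Trivial stand-in, useful for testing an optimization loop
    before a real knowledge tracing model is plugged in."""
    def mastery(self, skill: str) -> float: return 0.5
    def expected_gain(self, resource: str) -> float: return 0.1
    def engagement_likelihood(self, resource: str) -> float: return 0.8
    def dropout_risk(self) -> float: return 0.2

model: LearnerModel = ConstantModel()
```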
Pathway Representation
A learning pathway is represented as an ordered, variable-length sequence of learning resources:
P = (r_1, r_2, …, r_k)
where each r_i is a learning resource drawn from the resource layer, and k may vary across pathways. The variable-length representation allows AlphaLearn to accommodate learners with different prior knowledge, time availability, and learning goals.
Pathways must satisfy feasibility constraints, including prerequisite satisfaction and curriculum rules. These constraints are enforced during pathway generation and variation to prevent invalid sequences.
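A minimal sketch of this representation and the prerequisite feasibility check follows; the string resource identifiers and the prerequisite map are hypothetical examples, not part of any dataset.

```python
from dataclasses import dataclass

# Maps each resource to the set of resources it depends on
# (a flat encoding of the knowledge graph's prerequisite edges).
Prerequisites = dict[str, set[str]]

@dataclass
class Pathway:
    """Ordered, variable-length sequence (r_1, ..., r_k)."""
    resources: list[str]

def is_feasible(path: Pathway, prereqs: Prerequisites) -> bool:
    """A pathway is feasible if every resource appears only after
    all of its prerequisites have already been visited."""
    seen: set[str] = set()
    for r in path.resources:
        if not prereqs.get(r, set()) <= seen:  # unmet prerequisite
            return False
        seen.add(r)
    return True

prereqs = {"loops": {"conditionals"}, "conditionals": {"variables"}}
valid = Pathway(["variables", "conditionals", "loops"])
invalid = Pathway(["variables", "loops", "conditionals"])
```

Running this check during generation and after every variation operator is what keeps the evolutionary search inside the feasible region defined by the Data and Resource Layer.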
Illustrative Example
Consider a learner enrolled in an introductory programming module who has demonstrated partial mastery of basic variables but limited understanding of control structures such as conditionals and loops. Based on the learner model, AlphaLearn generates an initial population of feasible learning pathways composed of video lectures, practice exercises, and formative assessments that respect curricular prerequisite constraints.
For example, one candidate pathway may prioritise rapid progression by introducing loop constructs early, aiming to maximise time efficiency. Another pathway may include additional exercises on conditional statements and formative quizzes to increase engagement and reduce the risk of withdrawal. A third pathway may balance both strategies by interleaving short instructional videos with adaptive practice tasks.
Each candidate pathway is evaluated using the multi-objective fitness function, which estimates expected learning gain, time efficiency, engagement likelihood, and fairness-related indicators. Through the evolutionary optimization process, pathways that perform poorly across these criteria are discarded, while promising pathways are refined through variation and selection. The outcome is not a single prescribed sequence, but a set of pedagogically valid learning pathways that reflect different trade-offs and can be selected according to instructional priorities or learner needs.
Evolutionary Optimization Engine
The Evolutionary Optimization Engine is responsible for generating, evaluating, and refining candidate pathways. It maintains a population P of pathways and iteratively applies evolutionary operators:
  • Selection: choosing promising pathways based on multi-objective fitness,
  • Variation: generating new pathways via mutation and recombination,
  • Diversity preservation: maintaining heterogeneity to avoid premature convergence.
Unlike single-policy optimization approaches, this engine supports parallel exploration of multiple instructional strategies, enabling trade-offs between competing objectives.
Evaluation and Orchestration Layer
The Evaluation and Orchestration Layer computes fitness values for each pathway and interfaces with the learning environment. It aggregates learner model predictions, constraint checks, and fairness indicators into structured evaluation outputs. In a deployed system, this layer would also handle pathway recommendation, instructor oversight, and feedback collection; in this article, it serves as the conceptual integration point.

4.3. Multi-Objective Fitness Formulation

Each pathway P is evaluated using a vector-valued fitness function:
F(P) = (f_gain(P), f_eff(P), f_eng(P), f_fair(P))
where:
f_gain estimates expected learning gain or mastery improvement,
f_eff captures time efficiency or cost-effectiveness,
f_eng reflects predicted engagement or persistence,
f_fair represents fairness-related indicators (e.g., disparity-sensitive penalties).
These objectives may be combined using weighted aggregation or handled explicitly through Pareto-based selection. The framework does not prescribe a single aggregation strategy, allowing adaptation to different pedagogical priorities and institutional contexts.
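The fitness vector and one possible weighted aggregation can be sketched as follows. The component values and weights are illustrative assumptions; under Pareto-based selection, the scalarization step would be replaced by non-dominated sorting.

```python
from dataclasses import dataclass
from typing import Sequence

@dataclass
class Fitness:
    """Vector-valued fitness F(P) from Section 4.3."""
    gain: float   # expected learning gain, f_gain
    eff: float    # time efficiency, f_eff
    eng: float    # engagement likelihood, f_eng
    fair: float   # fairness indicator, f_fair

def weighted_aggregate(f: Fitness, w: Sequence[float]) -> float:
    """One possible scalarization of the objective vector; the
    framework deliberately does not prescribe this choice."""
    return w[0] * f.gain + w[1] * f.eff + w[2] * f.eng + w[3] * f.fair

f = Fitness(gain=0.8, eff=0.6, eng=0.7, fair=0.9)
score = weighted_aggregate(f, (0.4, 0.2, 0.2, 0.2))
```

Changing the weight vector shifts the pedagogical priority (e.g., weighting f_fair more heavily favors equity over raw gain), which is how institutional contexts can be reflected without altering the search machinery.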

4.4. Evolutionary Search Process

AlphaLearn employs a population-based evolutionary loop to refine learning pathways over successive iterations. The process is summarized in Algorithm 1.
Algorithm 1. AlphaLearn evolutionary pathway optimization
Input:
   R   Set of learning resources
   L   Learner model
   C   Curricular constraints
   N   Population size
   T   Maximum number of iterations
Output:
  P* Set of non-dominated (Pareto-optimal) learning pathways
Initialize population P0 with N feasible pathways sampled from R subject to C
for t = 1 to T do
  for each pathway P in Pt − 1 do
   Evaluate fitness vector F(P) using learner model L
  end for
  Select a subset S from Pt − 1 based on the multi-objective fitness function F(P)
  Generate offspring O by applying variation operators to S
  Enforce feasibility constraints C on all offspring
  Combine Pt − 1 and O into an intermediate population
  Apply elitism and diversity preservation to form Pt
end for
Return P*
This process yields a set of candidate pathways rather than a single solution, enabling informed selection among alternative trade-offs.
While the evolutionary optimization process follows established principles of multi-objective evolutionary algorithms, its contribution lies in the formulation of learning pathway design as a constrained optimization problem and in the integration of learner modelling, knowledge graph constraints, and fairness-aware objectives within a unified framework.
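The loop structure of Algorithm 1 can be illustrated with a deliberately simplified, self-contained sketch: resources are integers, all sequences are treated as feasible, and the multi-objective evaluation is collapsed into a single toy score, so this shows the iteration skeleton rather than a faithful implementation.

```python
import random

def fitness(path: list[int]) -> float:
    # Toy stand-in for F(P): reward concept coverage, penalize length
    # (a crude learning-gain vs. time-efficiency trade-off).
    return len(set(path)) - 0.1 * len(path)

def mutate(path: list[int], resources: list[int], rng: random.Random) -> list[int]:
    # Variation operator: replace one resource or insert a new one.
    p = list(path)
    if rng.random() < 0.5:
        p[rng.randrange(len(p))] = rng.choice(resources)
    else:
        p.insert(rng.randrange(len(p) + 1), rng.choice(resources))
    return p

def evolve(resources: list[int], n: int = 20, t: int = 50, seed: int = 0) -> list[int]:
    rng = random.Random(seed)
    # Initialize population P0 with n random pathways.
    pop = [[rng.choice(resources) for _ in range(rng.randint(2, 5))]
           for _ in range(n)]
    for _ in range(t):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: n // 2]                     # selection with elitism
        children = [mutate(rng.choice(parents), resources, rng)
                    for _ in range(n - len(parents))]  # variation
        pop = parents + children
    return max(pop, key=fitness)

best = evolve(list(range(8)))
```

A full implementation would replace the scalar score with Pareto ranking (e.g., NSGA-II-style non-dominated sorting), re-check curricular feasibility after each variation, and add explicit diversity preservation.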

4.5. Fairness-Aware Optimization

Fairness is incorporated as a first-class consideration in AlphaLearn. Rather than optimizing solely for individual-level performance, the framework allows group-level indicators to influence pathway evaluation. Examples include:
  • Penalizing pathways predicted to widen performance gaps across learner subgroups,
  • Constraining optimization to satisfy equity thresholds,
  • Monitoring disparity-sensitive metrics during selection.
By integrating fairness into the fitness formulation, AlphaLearn supports equity-aware personalization without requiring post-hoc correction.
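The first mechanism listed above, penalizing pathways predicted to widen subgroup gaps, can be sketched as a disparity-sensitive penalty subtracted from the base fitness. The group labels and predicted gains are invented for illustration.

```python
def disparity_penalty(gain_by_group: dict[str, float],
                      weight: float = 1.0) -> float:
    """Penalty grows with the gap between the best- and
    worst-served learner subgroup under a candidate pathway."""
    gains = list(gain_by_group.values())
    return weight * (max(gains) - min(gains))

def fair_fitness(base_fitness: float,
                 gain_by_group: dict[str, float]) -> float:
    """Equity-adjusted score: individual-level fitness minus the
    group-level disparity penalty."""
    return base_fitness - disparity_penalty(gain_by_group)

score = fair_fitness(0.8, {"group_a": 0.7, "group_b": 0.5})
```

The second and third mechanisms would instead appear as hard constraints during selection or as monitored metrics, but all three operate on the same group-level predictions.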

5. Conclusions

This paper introduced AlphaLearn, a conceptual and methodological framework that reframes personalized learning pathway design as a constrained multi-objective optimization problem. Motivated by the limitations of heuristic and single-objective adaptive systems, AlphaLearn integrates learner modelling, knowledge graph representations, and evolutionary optimization to explore and refine alternative learning pathways under multiple pedagogical criteria.
The primary contribution of this work lies not in the validation of a deployed system, but in the formalization of an evolutionary perspective on adaptive sequencing. By maintaining a population of candidate pathways and evaluating them across dimensions such as learning effectiveness, efficiency, engagement, and fairness, AlphaLearn provides a principled foundation for moving beyond greedy or myopic instructional decisions. The proposed five-layer architecture clarifies the functional roles of data representation, learner modelling, optimization, evaluation, and orchestration, offering a modular blueprint for future implementations.
To ground the framework in empirical reality, this study complemented the conceptual contribution with a descriptive analysis of large-scale learning analytics data from the Open University Learning Analytics Dataset. The analysis revealed substantial heterogeneity in learner outcomes, failure rates, and withdrawal patterns across modules, underscoring the inadequacy of one-size-fits-all sequencing strategies. These findings reinforce the central motivation of AlphaLearn: adaptive learning systems must account for contextual variability and competing objectives when designing personalized learning trajectories.
Fairness and equity were treated as integral design considerations rather than secondary concerns. The framework explicitly accommodates fairness-aware evaluation, enabling adaptive systems to balance individual optimization with group-level equity objectives. In doing so, AlphaLearn aligns with emerging research that emphasizes ethical responsibility, transparency, and accountability in educational artificial intelligence.
This work has several limitations. AlphaLearn remains a conceptual and methodological proposal, and no empirical evaluation of learning gains or engagement improvements has been conducted. The descriptive analysis is based on a single dataset from a specific institutional context, and the effectiveness of evolutionary pathway optimization depends on the quality of underlying learner models and curricular representations. Additionally, population-based optimization introduces computational considerations that must be addressed in large-scale deployments.
Despite these limitations, AlphaLearn establishes a clear research agenda for future work. Immediate next steps include implementing a prototype using state-of-the-art evolutionary algorithms, conducting simulation studies and controlled experiments on public datasets, and comparing performance against established adaptive sequencing baselines. Further research should also investigate hybrid approaches that combine evolutionary optimization with reinforcement learning, as well as systematic evaluations of fairness-aware optimization criteria. Finally, extending pathway optimization beyond single courses toward program-level and lifelong learning scenarios represents a promising direction for future exploration.
In summary, AlphaLearn contributes a structured and ethically grounded framework for adaptive learning pathway optimization. By bridging evolutionary optimization, learning analytics, and fairness-aware design, this work provides a foundation for advancing autonomous, scalable, and responsible personalized e-learning systems.

Author Contributions

Conceptualization, R.O., A.J. and L.L.; methodology, R.O.; software, R.O.; validation, R.O., L.L. and A.J.; formal analysis, R.O.; investigation, R.O.; resources, H.T.; data curation, R.O.; writing—original draft preparation, R.O.; writing—review and editing, R.O. and A.J.; visualization, R.O.; supervision, A.J., L.L. and H.T.; project administration, H.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding. The Article Processing Charge (APC) was funded by author Adil Jeghal.

Data Availability Statement

The data used in this study are publicly available from the Open University Learning Analytics Dataset (OULAD). The dataset can be accessed via the UK Data Service and is described in ref. [20].

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Siemens, G.; Long, P. Penetrating the fog: Analytics in learning and education. Educ. Rev. 2011, 46, 30–40. [Google Scholar]
  2. Desmarais, M.C.; Baker, R.S. A review of recent advances in learner and skill modeling in adaptive learning environments. User Model. User-Adapt. Interact. 2012, 22, 9–38. [Google Scholar] [CrossRef]
  3. Brusilovsky, P.; Millán, E. User models for adaptive hypermedia and adaptive educational systems. In The Adaptive Web; Springer: Berlin/Heidelberg, Germany, 2007; pp. 3–53. [Google Scholar] [CrossRef]
  4. Romero, C.; Ventura, S. Educational data mining and learning analytics: An updated survey. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2020, 10, e1355. [Google Scholar] [CrossRef]
  5. Mnih, V.; Kavukcuoglu, K.; Silver, D.; Rusu, A.A.; Veness, J.; Bellemare, M.G.; Graves, A.; Riedmiller, M.; Fidjeland, A.K.; Ostrovski, G.; et al. Human-level control through deep reinforcement learning. Nature 2015, 518, 529–533. [Google Scholar] [CrossRef] [PubMed]
  6. Doroudi, S.; Aleven, V.; Brunskill, E. Where’s the reward? A review of reinforcement learning for instructional sequencing. Int. J. Artif. Intell. Educ. 2019, 29, 568–620. [Google Scholar] [CrossRef]
  7. Manouselis, N.; Drachsler, H.; Verbert, K.; Duval, E. Recommender Systems for Learning; Springer Science & Business Media: New York, NY, USA, 2012. [Google Scholar]
  8. Kovanović, V.; Gašević, D.; Joksimović, S.; Hatala, M.; Adesope, O. Analytics of communities of inquiry: Effects of learning technology use on cognitive presence in asynchronous online discussions. Internet High. Educ. 2015, 27, 74–89. [Google Scholar] [CrossRef]
  9. Pardos, Z.A.; Heffernan, N.T. Tutor modeling vs. student modeling. In Proceedings of the Twenty-Fifth International Florida Artificial Intelligence Research Society Conference (FLAIRS 2012), Marco Island, FL, USA, 23–25 May 2012; pp. 420–425. [Google Scholar]
  10. Noy, N.; Gao, Y.; Jain, A.; Narayanan, A.; Patterson, A.; Taylor, J. Industry-scale Knowledge Graphs: Lessons and Challenges: Five diverse technology companies show how it’s done. Queue 2019, 17, 48–75. [Google Scholar] [CrossRef]
  11. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef]
  12. Nguyen, T.T.; Hui, P.-M.; Harper, F.M.; Terveen, L.; Konstan, J.A. Exploring the filter bubble: The effect of using recommender systems on content diversity. In Proceedings of the 23rd International Conference on World Wide Web; Association for Computing Machinery: New York, NY, USA, 2014; pp. 677–686. [Google Scholar] [CrossRef]
  13. Holstein, K.; Wortman Vaughan, J.; Daumé, H., III; Dudík, M.; Wallach, H. Improving fairness in machine learning systems: What do industry practitioners need? In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems; Association for Computing Machinery: New York, NY, USA, 2019; pp. 1–16. [Google Scholar] [CrossRef]
  14. Drachsler, H.; Kalz, M. The MOOC and learning analytics innovation cycle (MOLAC): A reflective summary of ongoing research and its challenges. J. Comput. Assist. Learn. 2016, 32, 281–290. [Google Scholar] [CrossRef]
  15. Corbett, A.T.; Anderson, J.R. Knowledge Decomposition and Subgoal Reification in the ACT Programming Tutor; Association for the Advancement of Computing in Education: Charlottesville, VA, USA, 1995. [Google Scholar]
  16. Piech, C.; Huang, J.; Nguyen, A.; Phulsuksombati, M.; Sahami, M.; Guibas, L. Learning program embeddings to propagate feedback on student code. In International Conference on Machine Learning; PMLR: Lille, France, 2015; pp. 1093–1102. [Google Scholar]
  17. Oubagine, R.; Laaouina, L.; Jeghal, A.; Tairi, H. Advancing MOOCs Personalization: The Role of Generative AI in Adaptive Learning Environments. In International Conference on Artificial Intelligence in Education; Springer Nature: Cham, Switzerland, 2025; pp. 242–254. [Google Scholar]
  18. Friedler, S.A.; Scheidegger, C.; Venkatasubramanian, S.; Choudhary, S.; Hamilton, E.P.; Roth, D. A comparative study of fairness-enhancing interventions in machine learning. In Proceedings of the Conference on Fairness, Accountability, and Transparency; Association for Computing Machinery: Atlanta, GA, USA, 2019; pp. 329–338. [Google Scholar] [CrossRef]
  19. Zhao, Y.; Wang, Y.; Liu, Y.; Cheng, X.; Aggarwal, C.C.; Derr, T. Fairness and diversity in recommender systems: A survey. ACM Trans. Intell. Syst. Technol. 2025, 16, 1–28. [Google Scholar] [CrossRef]
  20. Kuzilek, J.; Hlosta, M.; Zdrahal, Z. Open University Learning Analytics Dataset. Sci. Data 2017, 4, 170171. [Google Scholar] [CrossRef] [PubMed]
  21. Kizilcec, R.F.; Pérez-Sanagustín, M.; Maldonado, J.J. Recommending self-regulated learning strategies does not improve performance in MOOCs. In Proceedings of the Fourth (2017) ACM Conference on Learning @ Scale; Association for Computing Machinery: Edinburgh, UK, 2017; pp. 101–104. [Google Scholar] [CrossRef]
  22. Lakkaraju, H.; Aguiar, E.; Shan, C.; Miller, D.; Bhanpuri, N.; Ghani, R.; Addison, K.L. A machine learning framework to identify students at risk of adverse academic outcomes. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; Association for Computing Machinery: Sydney, NSW, Australia, 2015; pp. 1909–1918. [Google Scholar] [CrossRef]
  23. Yudelson, M.; Koedinger, K.R.; Gordon, G.J. Individualized Bayesian knowledge tracing models. In Artificial Intelligence in Education; Springer: Berlin/Heidelberg, Germany, 2013; pp. 171–180. [Google Scholar] [CrossRef]
Figure 1. Distribution of final learning outcomes in the OULAD dataset.
Figure 2. Failure rate by module in the OULAD dataset.
Figure 3. Withdrawal rate by module in the OULAD dataset.
Figure 4. Average registration lead time by outcome.
Figure 5. Conceptual architecture of the AlphaLearn framework.
Table 1. Distribution of learner outcomes and failure rates across selected OULAD modules.
Module   Distinction   Fail   Pass   Withdrawn   Failure Rate (%)
AAA      44            91     487    126         12.2
BBB      677           1767   3077   2388        22.3
CCC      498           781    1180   1975        17.6
DDD      383           1412   2227   2250        22.5
EEE      356           562    1294   722         19.2
FFF      670           1711   2978   2403        22.0
GGG      396           728    1118   292         28.7
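The failure rates in Table 1 follow directly from the outcome counts: fail / (distinction + fail + pass + withdrawn). A short check, using two rows of Table 1 as input:

```python
# Outcome counts (distinction, fail, pass, withdrawn) from Table 1.
counts = {
    "BBB": (677, 1767, 3077, 2388),
    "GGG": (396, 728, 1118, 292),
}

def failure_rate(distinction, fail, passed, withdrawn):
    """Percentage of registered learners with a Fail outcome."""
    return 100 * fail / (distinction + fail + passed + withdrawn)

rates = {m: round(failure_rate(*c), 1) for m, c in counts.items()}
# rates["BBB"] → 22.3, rates["GGG"] → 28.7, matching Table 1
```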
