1. Introduction
In the last two decades, the global education landscape has been radically transformed by the rapid proliferation and increasing sophistication of digital technologies. Online learning platforms, learning management systems, and MOOCs have expanded educational opportunities globally. These modalities have grown by more than 100% over the last decade, according to the UN’s report (Technical, vocational, tertiary and adult education, 2022), evidence of burgeoning demand and of the mainstreaming of virtual learning. This unprecedented growth of online learning has spurred universities, corporations, and NGOs to offer flexible and cost-effective solutions for skill development, professional training, and lifelong learning without the physical barriers of traditional education.
Despite these advances, the benefits of digital learning environments (Shea, 2023) have frequently been overshadowed by persistent challenges, the most pertinent of which concerns effective learner engagement. Although learners appreciate the convenience and scalability of online platforms, they too often report feelings of isolation, decreased motivation, and cognitive overload. Such affective and motivational obstacles frequently lead to low completion and high attrition rates. Recently,
Hollister et al. (
2022) conducted an education survey demonstrating that fewer than 40% of students complete online courses without some auxiliary motivational support. Consequently, many educators and instructional designers have explored more nuanced, learner-focused approaches that go beyond content delivery to foster motivation, sustain engagement, and promote persistence over longer periods. They also seek to optimize learning efficiency by personalizing the difficulty level of content, pacing, and resource selection, thereby enhancing the quality of outcomes (
Gupta, 2024).
1.1. Objectives of the Study
This study introduces, implements, and evaluates a new framework that incorporates game mechanics with AI-driven personalization within an adaptive learning environment. In the most general terms, its goal is to contribute a system in which continuous insights into a learner’s evolving performance and engagement patterns inform choices about content and motivational strategy. In this study, we seek to form the basis for next-generation adaptive learning platforms that can concurrently improve learning outcomes—e.g., mastery and retention—and enhance learner motivation, engagement, and satisfaction.
In operationalizing this objective, this study pursued several key goals, as outlined in Figure 1.
1.1.1. Framework Development
Our literature review of studies from interdisciplinary fields like cognitive science, educational psychology, game design, and AI forms the basis for a conceptual framework of how the mechanics of games, adaptive algorithms, and personalized recommendations work together. It defines the required system components, the relationships between them, and the continuous feedback loops needed to refine them iteratively.
1.1.2. Prototype Implementation
A prototype system implementing the outlined framework was developed and tested. The architecture includes a content model responsible for organizing learning materials by complexity level and topic relevance, a learner model that tracks evolving knowledge states and motivational indicators, a game mechanics layer supplying points, badges, levels, and narrative elements, and an adaptive engine, driven by AI algorithms, that tailors both academic content and motivational interventions.
1.1.3. Empirical Validation
We conducted an empirical evaluation to determine whether the integrated adaptive–gamified system produced, on average, statistically significant increases in learning gains, motivation, and engagement metrics compared to a control condition. This investigation involved a mixed-methods research design: quantitative data were obtained from pre-/post-test scores, time-on-task, and completion rates, while qualitative insights were derived from surveys and user feedback. By triangulating these sources, we built finer-grained insights into how and why the integrated approach outperformed adaptive or gamified approaches used in isolation.
1.1.4. Refinement and Future Directions
We also refined the proposed framework based on the findings and identified avenues for future research. Potential areas included long-term effectiveness, domain-specific customizations, and scaling of the framework to large and diverse learner populations. Our intention is to disseminate the findings and refined framework to give actionable guidance to educators, instructional designers, and technologists, enabling them to build more engaging, effective, and learner-centered digital learning platforms.
1.2. Research Questions and Hypotheses
This study addresses the following core research questions:
- 1.
How does an adaptive learning system integrated with game mechanics and AI-driven personalization improve learner motivation and engagement compared to a non-adaptive or non-gamified baseline condition?
H1. The adaptive gamified system will lead to significantly higher motivation scores (as measured by the MSLQ) than the control condition.
- 2.
How does the integrated approach impact learning outcomes such as the post-test performance and mastery-level achievements?
H2. Learners in the adaptive gamified system will show significantly higher post-test scores than those in the control group.
- 3.
Which learner interaction data patterns lead to the best configuration of the level of difficulty and game-based motivational strategies?
H3. Learners in the adaptive gamified system will demonstrate greater engagement (higher time-on-task, quiz attempts, and module completion rates) than those in the control group.
- 4.
How do learners perceive the integrated experience in terms of usability, enjoyment, and perceived value, and how do these perceptions relate to metrics of performance and engagement for observed activities?
H4. Motivation and engagement metrics will be positively correlated with learning outcomes.
In addressing these questions, this study aims to make both theoretical and practical contributions: theoretically, deepening our understanding of how adaptive personalization and game-based motivation can complement each other in supporting more holistic learning processes, and practically, offering guidelines for implementing and further developing such systems across diverse domains and contexts.
In summary, the proposed integration of adaptive learning algorithms, AI-driven personalization, and game mechanics addresses some of online education’s most pressing challenges by tackling the cognitive, motivational, and affective dimensions of learning. Through careful design, rigorous evaluation of the prototype, and iterative refinement, this work points toward adaptive learning environments whose instructional content is personalized and which inspire learners to become and stay engaged, persist through challenges, and ultimately attain their educational goals.
In
Section 2, we discuss the relevant literature to establish the theoretical underpinnings of adaptive learning, game mechanics, and AI-driven personalization.
Section 3 discusses the details of the design of our proposed framework.
Section 4 then outlines the methodology, followed by an empirical investigation and the results in
Section 5. Section 6 offers a discussion of the key findings, implications, and avenues for future research. Finally, Section 7 presents the conclusions drawn from this study.
2. Literature Review
The convergence of adaptive learning systems, AI, and gamification has gained considerable attention in the last few years, as researchers explore effective strategies for improving learner engagement, motivation, and performance in online education (
Pereira & Raja Harun, 2024;
Yadav, 2024). This review synthesizes findings from research on (1) adaptive learning technologies, (2) AI-driven personalization, and (3) game mechanics in education, thereby framing the theoretical and empirical basis for the present study.
2.1. Adaptive Learning Technologies
Adaptive learning technologies aim to tailor instruction and assessments to individual learners’ needs in real time (
Ghanbaripour et al., 2024). Historically, much of the adaptive process has relied on static branching logic, but advances in machine learning now allow systems to adjust content difficulty, pacing, and recommended activities based on performance analytics (
White, 2020). For instance, rule-based adaptive engines have been shown to improve test scores by approximately 12–18% compared to traditional e-learning modules (
Gligorea et al., 2023). Meanwhile, advanced frameworks that integrate data-mining algorithms are helping educators detect at-risk students early and intervene with supportive resources (
Lim et al., 2023).
Moreover, adaptive learning often incorporates embedded formative assessments that guide learners through customized pathways (
Fernández-Herrero, 2024). Learners who demonstrate mastery of certain concepts may skip remedial modules, while those showing repeated misconceptions can receive targeted hints or prerequisite materials (
Halkiopoulos & Gkintoni, 2024). Studies confirm that adaptivity can bolster cognitive gains and reduce dropouts in online programs (
Jiang et al., 2022). However, many adaptive systems still lack motivational design, leading researchers to advocate for complementary strategies, such as gamification (
Bennani et al., 2021).
2.2. AI-Driven Personalization
AI-driven personalization extends the conventional adaptive paradigm by leveraging algorithms like Bayesian knowledge tracing, collaborative filtering, or deep learning networks to continuously refine learner models (
El-Sabagh, 2021;
Fahad Mon et al., 2023). These personalized systems interpret large streams of user data—quiz attempts, time-on-task, reading patterns—to predict future performance or identify conceptual gaps (
Rodrigues et al., 2020;
Vázquez-Parra et al., 2024). By proactively adjusting difficulty levels and learning paths, the platform can sustain learner engagement and optimize mastery (
Yanes et al., 2020).
In one recent large-scale study, AI-based recommendation engines led to a 22% improvement in final exam scores relative to non-personalized instruction (
Duggal et al., 2021). Another body of work demonstrates that real-time predictive analytics can detect “
frustration points” and deliver micro-interventions—such as timely hints or alternative examples—to help learners overcome barriers (
du Plooy et al., 2024;
Naseer et al., 2024b). The use of AI thus enables more granular, dynamic adjustments: if a learner repeatedly fails a concept, the system might revert to simpler tasks or supply scaffolded tutorials (
Naseer et al., 2024a;
Tariq et al., 2025).
Despite its promise, AI personalization introduces complexities around data privacy, algorithmic biases, and the need for robust data sets (
Waladi et al., 2024). Ethical frameworks and transparency in model decision making are increasingly vital, with researchers advocating for “explainable AI” tools which clarify how each adaptive recommendation is generated (
Morandini et al., 2023). Additionally, successful personalization hinges not just on technical sophistication but also on alignment with pedagogical goals, emphasizing the synergy between adaptivity and motivational design.
2.3. Gamification and Game Mechanics in Education
Gamification, defined as the incorporation of game-like features (points, badges, leaderboards, levels) into non-game contexts, has been embraced as a motivational lever in digital learning environments (
Rodrigues et al., 2019). When thoughtfully integrated, game mechanics can foster persistence, trigger higher engagement, and increase enjoyment (
Jaramillo-Mediavilla et al., 2024). Empirical evidence suggests that earning badges or leveling up can help learners visualize progress, promoting a sense of accomplishment (
Addas et al., 2024;
Ng & Lo, 2022). Furthermore, timed challenges and narrative-based quests can reshape learning into an experience that feels purposeful and compelling (
Kaya & Ercag, 2023).
Nonetheless, critics warn that superficial “pointification” may yield fleeting benefits unless game mechanics are tied to meaningful learning objectives (
Zourmpakis et al., 2024). In response, researchers are now exploring how to refine gamification design. Adaptive gamification—where the system personalizes rewards or difficulty—has emerged as a potent approach (
Hong et al., 2024). For example, a learner who struggles with algebraic manipulation might earn more frequent micro-badges for smaller achievements, whereas a learner quickly mastering topics might be challenged with advanced “quests” or boss-level challenges (
Pratiwi et al., 2024).
Social or collaborative gamification components, such as team-based missions and peer-driven leaderboards, add another motivational dimension. By incorporating cooperative competitions, learners can develop community ties and accountability (
Kawtar et al., 2022). Some studies report that team-based gamification fosters a 15–20% improvement in module completion rates (
Ramos et al., 2024), yet robust data-driven personalization is critical to avoid pitting advanced learners against novices unfairly (
Boulton, 2024).
2.4. Integrating Adaptive Learning, AI Personalization, and Gamification
As indicated above, adaptive learning and AI personalization each offer unique benefits for cognitive and performance outcomes. Gamification, in turn, can strengthen the motivational dimension. A growing body of research suggests that synthesizing these three threads yields more significant and sustained improvements compared to employing them in isolation (
Svyrydiuk et al., 2024). For instance,
Vargas Domínguez and de Trazegnies Otero (
2020) demonstrated a combined approach in an online math course: learners received tasks tailored by a neural network algorithm, while also pursuing badges for conceptual milestones and receiving real-time progress feedback. The result was a 30% reduction in dropout rates and significantly higher post-test scores compared to a non-gamified adaptive platform.
Similar success stories arise in domains such as language learning (“Mandarin Language Learning with Gamification Method”,
Fedro, 2021). A system that adaptively recommends vocabulary exercises based on learner-specific lexical gaps—and pairs each unit with short “
missions” or “
levels” to keep the user engaged—has shown to enhance retention and spontaneity in language application. A key advantage is the immediate feedback loop: as soon as a learner interacts, the system updates the learner model and adjusts the learning path and game-based incentives. This dynamic synergy fosters a sense of “
flow”, often cited as crucial for deep learning (
Liu et al., 2023). However, implementing such systems calls for robust data pipelines, carefully curated content, and iterative design cycles to fine-tune reward structures, difficulty thresholds, and AI-driven recommendations (
Hondoma, 2024).
2.5. Challenges and Gaps in the Literature
Despite the promising evidence base, challenges persist. First, many adaptive learning platforms lack holistic motivational strategies, relying heavily on algorithmic personalization without systematically embedding game mechanics (
Liu et al., 2023). Conversely, certain “gamified” courses remain static, awarding badges without adjusting difficulty or pathing based on learner performance. A prime research gap concerns how to best harmonize adaptivity with gamification, ensuring that the system tailors both instructional content and motivational strategies to each learner’s evolving needs (
Bennani et al., 2021).
Additionally, while numerous studies document improvements in short-term engagement and test scores, relatively fewer investigate longitudinal outcomes such as knowledge retention, transfer of learning, or sustained motivation over multiple semesters (
Jaramillo-Mediavilla et al., 2024). Another prevalent issue is the limited focus on affective computing: detecting real-time emotional states (e.g., frustration or confusion) could further refine personalization but requires advanced sensing tools (
Gupta, 2024). Similarly, many studies rely on convenience samples or single-institution trials, limiting the generalizability to broader, more diverse learner populations (
Hondoma, 2024).
Issues of data ethics and algorithmic transparency remain non-trivial. As AI-based adaptive platforms collect granular user data, concerns arise regarding privacy, informed consent, and potential biases—especially if the underlying models are not regularly audited (
Yadav, 2024). Large data sets may inadvertently encode demographic biases, leading to a personalized system which disproportionately benefits certain groups (
Jiang et al., 2022). Thus, calls grow for multi-stakeholder involvement in designing and evaluating these technologies (
Morandini et al., 2023).
2.6. Rationale for the Present Study
Given the literature, a consensus emerges that merging adaptive learning, AI personalization, and gamification holds remarkable potential to elevate learner engagement and outcomes (
Duggal et al., 2021;
du Plooy et al., 2024). However, systematic frameworks detailing how to integrate these approaches effectively—and empirical validations with large sample sizes—remain relatively scarce (
Ghanbaripour et al., 2024). The present study addresses these gaps in the following ways:
Proposing a new, unified framework that weaves AI-driven personalization with dynamic, game-based motivational scaffolds;
Testing its efficacy in a controlled, quasi-experimental design with a robust sample of 250 undergraduate learners across diverse majors;
Measuring not only cognitive gains but also changes in motivation, engagement metrics, and user perceptions, thereby offering a holistic assessment;
Exploring ethical considerations and usability, informing ongoing development and scaling to other courses.
By grounding the framework in existing theories (adaptive learning, self-determination theory, game-based pedagogy) and leveraging advanced analytics for personalization, this study aspires to bridge the research-to-practice gap highlighted in prior scholarship (
Kaya & Ercag, 2023).
Some scholars argue that excessive AI-driven adaptation reduces learner autonomy (
Waladi et al., 2024). Our framework mitigates this by integrating learner-controlled adaptivity, allowing users to adjust AI-generated recommendations. Critics warn against superficial gamification (
Zourmpakis et al., 2024). Our adaptive gamification approach dynamically adjusts rewards and challenges based on learner engagement and performance to ensure deeper motivation. Concerns have also been raised about overreliance on extrinsic rather than intrinsic motivation (
Rodrigues et al., 2020). Our system aligns with self-determination theory (SDT) by ensuring competency-based rewards that promote intrinsic motivation.
3. Proposed Framework
Building on the insights from the literature review and consultations with experts, this section sets out a new, integrated framework that combines the motivational strengths of game mechanics with the adaptive power of AI-driven personalization. The framework comprises six key constituent elements and aims to create a robust, learner-centered environment in which content, learner profiles, game-based rewards, and adaptive engines dynamically collaborate to offer customized learning pathways.
3.1. Framework Overview
At the highest level, the proposed framework orchestrates six interlinked components:
Content model;
Learner model;
Game mechanics layer;
Adaptive engine (AI personalization);
Feedback and analytics system;
Implementation and evaluation.
While each component has its own specific role, their true power emerges from continuous interactions, where data flow and real-time updates allow the learning system to respond adaptively to each learner’s evolving profile. The guiding principle is to harmonize cognitive personalization (i.e., matching content difficulty and pacing to skill levels) with motivational personalization (i.e., tailoring game rewards, challenges, and feedback to keep learners engaged).
Figure 2 illustrates a visual layout of the framework.
3.2. Content Model
3.2.1. Purpose and Structure
The content model is the cornerstone of any adaptive learning platform, as it describes and structures all of the educational material that learners can engage with. These resources may take the form of videos, interactive simulations, reading passages, quizzes, group discussions, and so on. The major aim is to modularize these resources so that the system can flexibly select the most relevant and appropriate content for each learner.
Table 1 summarizes the typical segmentation of digital learning platforms, categorizing content into broad and detailed levels, with additional tagging for difficulty and media formats to enhance organization and accessibility.
3.2.2. Metadata and Tagging
To make adaptive selection possible, every piece of content needs to be enriched with metadata.
Table 2 provides examples of metadata commonly used in digital learning platforms, which help categorize and provide context for educational resources.
Careful tagging of content lets the system “mix and match” resources in real time, offering lessons which best fit the current skill level and preferred learning style of a learner. This method also allows for customization, as the AI might choose a more accessible video instead of a dense text-based explanation if the learner has shown better results with visual materials in prior sessions.
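To illustrate how such metadata can drive selection, the following is a minimal sketch, not part of the actual platform; the field names and filtering rule are hypothetical stand-ins for the kinds of tags described in Table 2.

```python
from dataclasses import dataclass, field

@dataclass
class ContentItem:
    """A single learning resource with adaptive-selection metadata (hypothetical schema)."""
    item_id: str
    topic: str
    difficulty: int          # e.g., 1 (introductory) to 5 (advanced)
    media_type: str          # "video", "text", "simulation", "quiz"
    prerequisites: list[str] = field(default_factory=list)

def select_content(items, topic, max_difficulty, preferred_media):
    """Return items for a topic at or below a difficulty cap, preferring the learner's media type."""
    candidates = [i for i in items if i.topic == topic and i.difficulty <= max_difficulty]
    # Prefer the media type the learner has historically done well with; fall back to anything suitable.
    preferred = [i for i in candidates if i.media_type == preferred_media]
    return preferred or candidates

library = [
    ContentItem("alg-101-v", "linear_equations", 2, "video"),
    ContentItem("alg-101-t", "linear_equations", 2, "text"),
    ContentItem("alg-201-q", "linear_equations", 4, "quiz", ["alg-101-v"]),
]
print(select_content(library, "linear_equations", max_difficulty=3, preferred_media="video"))
```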
3.3. Learner Model
3.3.1. Continuous Learner Profiling
Working in parallel with the content model, the learner model functions like a dynamic repository that keeps information about each of the learners. This is continuously updated based on any interaction the learner has with the platform, such as completing quizzes, spending time on resources, or demonstrating mastery or struggling with tasks. Key components are shown in
Figure 3.
Continuous learner profiling forms the backbone of the personalization logic. Collecting data across a wide range of parameters allows the learner model to uncover detailed insights—for instance, identifying that a learner skips video content or performs better on textual tasks late at night. Such granular information greatly enhances the performance of the adaptive engine and game mechanics.
3.3.2. Predictive and Diagnostic Tools
Most modern adaptive systems embed predictive analytics within their learner models, employing techniques such as Bayesian knowledge tracing or machine learning classifiers to estimate the likelihood of a learner succeeding on a future task. These tools help identify knowledge gaps early on and guide the system to intervene proactively.
For example, if the model predicts that a learner has a 30% likelihood of passing the next advanced quiz, the system might lower the difficulty level, provide additional resources, or trigger motivational prompts to support the learner.
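As an illustration of the predictive tooling described above, the following is a minimal sketch of a standard Bayesian knowledge tracing update; the slip, guess, and learning-rate parameters are placeholder values, not those used in the study's platform.

```python
def bkt_update(p_mastery, correct, p_slip=0.10, p_guess=0.20, p_learn=0.15):
    """One Bayesian knowledge tracing step: update P(mastered) after observing a response."""
    if correct:
        evidence = p_mastery * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_mastery) * p_guess)
    else:
        evidence = p_mastery * p_slip
        posterior = evidence / (evidence + (1 - p_mastery) * (1 - p_guess))
    # Account for the chance the skill is learned during this practice opportunity.
    return posterior + (1 - posterior) * p_learn

p = 0.30                      # prior probability the learner has mastered the concept
for outcome in [True, False, True, True]:
    p = bkt_update(p, outcome)
print(f"Estimated mastery after four attempts: {p:.2f}")
```

If the estimate stays low after several attempts, the system can lower the difficulty, surface extra resources, or trigger motivational prompts, as described above.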
3.4. Game Mechanics Layer
3.4.1. Motivation Through Gamification
The game mechanics layer addresses one of the most pressing problems in online learning: maintaining motivation and engagement. Drawing on principles from digital gaming, this layer includes the following:
Points and Badges: Tangible rewards for completing tasks, achieving milestones, or demonstrating mastery of certain skills;
Levels or Skill Tiers: Indicators of cumulative progress, whereby learners might “level up” after accumulating enough points, unlocking new content or privileges;
Narratives or Storylines: Embedding content in thematic arcs—for example, turning a cybersecurity curriculum into a quest to protect a virtual city from cyber-attacks;
Social Components (Optional): Leaderboards, team quests, or peer-to-peer recognition, depending on the course context and privacy requirements.
3.4.2. Dynamic and Personalized Gamification
What sets this framework apart is the fact that game mechanics themselves are adaptive. Traditional gamification is often a one-size-fits-all approach; every learner sees the same badges or levels at the same intervals. By contrast, this framework’s AI-driven engine monitors learner motivation in real time. If a learner is repeatedly failing tasks and displaying signs of frustration, the system might trigger smaller, more frequent rewards to rebuild confidence. Conversely, if a learner displays strong performance and consistent engagement, the system may introduce more challenging “quests” to keep them from becoming bored, as described in
Table 3 with some examples.
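A rough sketch of this kind of motivational branching is given below; the thresholds and intervention names are illustrative assumptions rather than the study's actual configuration.

```python
def choose_game_intervention(recent_failures: int, mastery: float, engagement: float) -> str:
    """Pick a game-mechanics response from simple, illustrative rules."""
    if recent_failures >= 3 or engagement < 0.3:
        # Struggling or disengaged: smaller, more frequent rewards rebuild confidence.
        return "award_micro_badge"
    if mastery > 0.85 and engagement > 0.7:
        # Strong, engaged learners receive stretch goals to avoid boredom.
        return "unlock_advanced_quest"
    return "show_progress_bar_update"

print(choose_game_intervention(recent_failures=3, mastery=0.55, engagement=0.25))
print(choose_game_intervention(recent_failures=0, mastery=0.90, engagement=0.80))
```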
3.5. Adaptive Engine AI-Powered Personalization
3.5.1. Core Functionality
The adaptive engine is the core of the framework and is responsible for weaving together information from the content model, the learner model, and the game mechanics layer. This engine uses a range of algorithms—from basic rule-based logic to sophisticated machine learning—to complete the following tasks:
Select Appropriate Content: Match tasks or lessons to the learner’s current knowledge state;
Adjust Difficulty and Pacing: Increase or lower complexity based on performance metrics or predicted success rates;
Trigger Game Elements: Identify which game mechanics to deploy at which juncture, based on real-time motivational data.
Because personalization operates as a continuous feedback loop, the system updates each recommendation after every learner interaction, reflecting its evolving understanding of the learner’s progress and level of engagement.
3.5.2. Algorithmic Approaches
Several algorithmic approaches can be used:
Rule-Based Engines: For example, if mastery is less than 70% in a subtopic, remedial resources and micro-quizzes are presented;
Machine Learning Recommenders: Predict the user’s probability of success on the next topic and select content with optimal difficulty;
Sequential Models: Using Markov decision processes and reinforcement learning to determine the best sequence of activities to maximize overall learning gains.
This flexibility in approach allows the framework to accommodate different pedagogical philosophies. For instance, a language course might focus on spaced repetition and frequent short quizzes, while a math course might emphasize mastery-based progression using Bayesian knowledge tracing.
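The rule-based approach listed above can be sketched roughly as follows; the 70% mastery threshold mirrors the example given in the list, while the action labels are hypothetical.

```python
def rule_based_next_step(mastery_by_subtopic: dict[str, float]) -> dict[str, str]:
    """Map each subtopic to an action: remediate below 70% mastery, advance otherwise."""
    plan = {}
    for subtopic, mastery in mastery_by_subtopic.items():
        if mastery < 0.70:
            plan[subtopic] = "remedial_resources_and_micro_quiz"
        else:
            plan[subtopic] = "advance_to_next_unit"
    return plan

print(rule_based_next_step({"fractions": 0.55, "linear_equations": 0.82}))
```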
3.6. Feedback and Analytics System
3.6.1. Real-Time Dashboards
The proposed framework relies heavily on continuous feedback and analytics, delivered through the feedback and analytics system, which provides valuable insights to both learners and instructors.
Learner Dashboards display real-time progress, upcoming tasks, and accumulated rewards (points, badges, and levels). Visual indicators, such as progress bars and mastery charts, assist learners in understanding their current standing.
Instructor Dashboards aggregate class-level statistics, such as performance distribution, average time-on-task, and dropout flags, while highlighting at-risk learners. These analytics enable timely interventions, such as direct messaging or scheduling support sessions.
3.6.2. Data Logging and Analytics
Behind the scenes, the system continuously logs nearly every learner interaction, including quiz submissions, time spent on resources, and forum participation. These data inform the learner model and adaptive engine in real time, while also enabling broader, long-term insights:
Content Effectiveness: Which materials generate the highest learning gains or satisfaction ratings?
Game Mechanics Impact: Which mechanics (e.g., points, narratives, challenges) are most effective in maintaining engagement?
Predictive Intervention: Can early warning signals, such as a sudden drop in logins, trigger targeted motivational messages?
By systematically capturing these data streams, the framework supports a cycle of continuous improvement, allowing the system to be iteratively refined based on empirical evidence.
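As one illustration of the predictive-intervention point above, a simple drop-in-logins trigger might be sketched as follows; the 50% drop rule and weekly window are assumptions for illustration only.

```python
def should_send_motivational_message(weekly_logins: list[int], drop_ratio: float = 0.5) -> bool:
    """Flag a learner if this week's logins fall below half of their prior-weeks average."""
    if len(weekly_logins) < 2:
        return False
    *history, current = weekly_logins
    baseline = sum(history) / len(history)
    return baseline > 0 and current < drop_ratio * baseline

print(should_send_motivational_message([8, 7, 9, 3]))   # True: sharp drop in activity
print(should_send_motivational_message([8, 7, 9, 8]))   # False: steady activity
```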
3.7. Implementation and Evaluation
3.7.1. Pilot Deployments and Iterative Refinement
Although it is not a functional component like the content model or the adaptive engine, the implementation and evaluation element is crucial for ensuring the framework’s ongoing validity and development. Steps include the following:
Pilot Studies: Conduct small-scale implementations with controlled participant groups to debug technical issues and gather initial feedback;
A/B Testing: Compare variations (e.g., different reward schedules, different difficulty thresholds) to identify best practices;
Longitudinal Studies: Assess whether the framework fosters sustained learning and retention over multiple semesters or repeated usage cycles.
Every deployment should integrate a data-driven feedback loop. The collected analytics will identify underperforming game mechanics or excessively lenient or strict AI-driven difficulty adjustments.
3.7.2. Flexibility and Scalability
The framework is inherently designed for adaptability across diverse subject domains, learner age groups, and institutional contexts. With modular components like the content model, learner model, and adaptive engine, the system can be customized for a wide range of courses—from higher education STEM programs to vocational training modules. Additionally, containerization and cloud computing enable seamless scalability, ensuring that the system efficiently supports larger class sizes or institution-wide adoption across multiple courses.
When harmonized, these elements create a dynamic learning environment in which learners receive not just the right lessons but also the right motivational cues precisely when they need them. This coordination aims to tackle longstanding hurdles in online education: low completion rates, inconsistent engagement, and insufficient personalization.
The proposed framework represents the conscious integration of pedagogical theory, game design principles, and AI-driven analytics. It addresses the documented deficiencies of conventional online learning environments by aligning each component toward the shared objectives of cognitive mastery and motivational sustainability. Crucially, the framework is designed not for one-time deployment but for continuous evolution. By enabling instructors to pilot the system, collect performance data, and refine content organization, learner modeling strategies, game mechanics, and AI recommendation algorithms, it closes the feedback loop for ongoing improvements.
Additionally, the flexible, modular architecture allows educators and developers to seamlessly incorporate emerging technologies—such as augmented reality, natural language processing, and innovative adaptive assessments—without disrupting the entire system. Similarly, the game mechanics can be scaled up, scaled down, or replaced with alternative engagement strategies to suit cultural and disciplinary contexts.
4. Methodology
The current study followed a rigorous methodology to test the efficacy of the proposed framework, which embeds game mechanics and AI-driven personalization in an adaptive learning environment. This section describes the research design, participant recruitment, system implementation, data collection instruments, data analysis procedures, and the mathematical formulations related to the adaptive processes.
4.1. Research Design
4.1.1. Overview
A quasi-experimental, pre-test–post-test design was chosen to evaluate how the integrated framework impacted learning outcomes, motivation, and engagement compared to a non-adaptive or minimally gamified control condition. This study divided participants into two parallel groups—the control group and the experimental group—over an eight-week period. This design balanced practical constraints, such as class schedules and institutional policies, with the need for comparative evaluation.
Experimental Group (Adaptive + Gamified): Learners experienced the fully implemented framework, including dynamic content delivery, AI-driven personalization, and game mechanics such as points, badges, levels, and adaptive narratives.
Control Group (Baseline LMS): Learners were exposed to the same content but in a linear sequence typical of online courses, without personalized difficulty adjustments or game-based incentives.
4.1.2. Timeline
As shown in
Table 4, the study unfolded in four main phases: (1) pre-study orientation and pre-tests, (2) intervention period, (3) post-tests, and (4) final feedback collection. Each phase is described in further detail below.
4.2. Participants and Setting
4.2.1. Participant Selection
A total of 250 undergraduate students were recruited from PSAU, a university offering blended and online courses. Participants were primarily in their first or second year, enrolled in introductory STEM or general education courses relevant to the online learning materials used in this study. Eligibility required consistent internet access, basic computer literacy, and the completion of an informed consent form.
Random Assignment: After providing consent, participants were assigned to either the experimental group or the control group using a simple random draw (125 participants in each group).
Demographic Balance: Efforts were made to ensure an approximate balance in gender, academic major, and prior online learning experience across both groups.
The average prior GPA of participants was 3.1/4.0, ensuring a balanced mix of high-, medium-, and low-achievers.
4.2.2. Study Context
This study’s course content focused on foundational scientific or mathematical concepts such as introductory algebra, basic physics, or computer science fundamentals. These subjects were chosen due to their suitability for adaptive modules and frequent assessment checkpoints, both of which are critical for real-time personalization. The classes were conducted entirely online, ensuring a consistent learning environment for participants in both conditions.
4.3. System Implementation
4.3.1. Adaptive System Architecture
The experimental group in this study used an adaptive learning platform based on the proposed framework, integrating the following components:
Content Model: Structured modules tagged with difficulty levels and aligned with learning objectives;
Learner Model: Continuously tracked learner performance, time-on-task, and motivational signals;
Game Mechanics Layer: Included points, badges, levels, and narrative prompts to boost motivation;
Adaptive Engine (AI Personalization): A hybrid system combining rule-based logic and machine learning to recommend content at suitable challenge levels;
Feedback and Analytics System: Real-time dashboards for learners to track progress and for instructors to view class-level summaries.
The control group accessed the same content but without adaptive sequencing or game mechanics triggers, providing a baseline online course experience.
4.3.2. Data Flow and Integration
Each learner action—such as quiz submissions, page views, or hint requests—generated a log entry in a centralized database. The adaptive engine periodically polled these logs, updating the learner model with calculations of mastery or frustration probabilities. If the model identified a high risk of disengagement, micro-incentives, such as awarding partial points, were triggered to sustain motivation. Conversely, strong performance prompted advanced-level content recommendations.
A standout feature of the system was its ability to make real-time decisions. At each update, the adaptive engine estimated the probability that a learner had mastered a given piece of content from the learner’s ability estimate and the content’s difficulty:

P_{u,c} = 1 / (1 + e^{−(θ_u − b_c)}),

where
P_{u,c} is the predicted probability that user u has mastered content c;
θ_u is the learner’s inferred ability (updated regularly);
b_c is the difficulty parameter associated with content c.
If P_{u,c} fell below a threshold (e.g., 0.70), the system recommended remediation activities or reduced the complexity of upcoming tasks. Conversely, if a learner consistently outperformed the threshold, more challenging content or bonus “quests” were offered.
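A minimal sketch of this decision rule, assuming the logistic form of the mastery estimate given above, might look like this:

```python
import math

def mastery_probability(theta: float, b: float) -> float:
    """P(mastery) from a logistic model of ability theta against item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def adapt(theta: float, b: float, threshold: float = 0.70) -> str:
    """Recommend remediation below the threshold, stretch content above it."""
    p = mastery_probability(theta, b)
    if p < threshold:
        return f"P={p:.2f}: recommend remediation or lower task complexity"
    return f"P={p:.2f}: offer more challenging content or a bonus quest"

print(adapt(theta=0.2, b=0.9))   # struggling learner
print(adapt(theta=1.5, b=0.4))   # high-performing learner
```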
4.3.3. Adaptive Learning Platform
The adaptive learning platform used in this study was the latest creation of our in-house research and development team, aimed at addressing persistent challenges related to personalized instruction and learner motivation in online environments. The lead developer directed a multidisciplinary team of software engineers, instructional designers, and educational researchers, translating the conceptual framework into a functional application. Our primary objective was to leverage AI-driven personalization algorithms and integrate game mechanics into an intuitive interface, delivering a seamless, data-informed learning experience tailored to individual needs.
Built from the ground up, the platform shown in
Figure 4 employs a modular architecture comprising four key modules: content management, learner modeling, adaptive sequencing, and gamification. The content management module contains a diverse array of learning resources, ranging from text-based explanations to interactive simulations, all tagged by difficulty, topic, and prerequisite skills. Real-time performance data and predictive analytics power the learner modeling and adaptive sequencing modules to present content at an optimal challenge level. This dynamic progression is further supported by the gamification system, which awards points, badges, and levels to sustain motivation, while simultaneously monitoring for signs of frustration or disengagement to prompt timely interventions.
4.4. Data Collection Instruments
Five key instruments were employed to capture both quantitative and qualitative data.
4.4.1. Pre- and Post-Tests
Structure: Each test contained five multiple-choice and short-answer items aligned with the course objectives. Item difficulty was distributed so as to capture both low-level recall and higher-order problem-solving skills.
Scoring: One point was awarded per correct multiple-choice answer, with partial credit for short-answer items.
Reliability: A pilot test suggested acceptable reliability (Cronbach’s α ≈ 0.78).
4.4.2. Motivation Surveys
Motivational constructs were measured using a modified version of the Motivated Strategies for Learning Questionnaire (MSLQ), capturing subscales such as intrinsic motivation, task value, and self-efficacy. The survey was administered at Weeks 0 and 8, and each subscale score was computed as the mean of its items:

S_k = (1 / n_k) Σ_{i=1}^{n_k} x_{i,k},

where
x_{i,k} is the response to the i-th item in subscale k (on a 5-point Likert scale);
n_k is the total number of items in that subscale.
This yielded a mean subscale score indicating the level of each motivational dimension.
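For illustration, the subscale scoring above can be sketched with made-up Likert responses; the subscale names and values are hypothetical.

```python
def subscale_means(responses_by_subscale: dict[str, list[int]]) -> dict[str, float]:
    """Mean Likert score (1-5) for each MSLQ subscale, per the formula above."""
    return {name: sum(items) / len(items) for name, items in responses_by_subscale.items()}

survey = {  # hypothetical responses for one participant
    "intrinsic_motivation": [4, 3, 5, 4],
    "task_value": [5, 4, 4],
    "self_efficacy": [3, 3, 4, 4, 5],
}
print(subscale_means(survey))
```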
4.4.3. Engagement Analytics
Three core metrics were automatically logged:
Time-on-Task: The total minutes each learner actively spent engaging with content, not counting idle or inactive periods;
Module Completion Ratio: The fraction of assigned modules a learner completed on time;
Interaction Frequency: The number of forum posts, quiz attempts, and resource views per module.
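A rough sketch of how the first two metrics above might be computed from a clickstream is shown below; the 10 min idle cutoff and the sample event log are assumptions for illustration, not the platform's actual logging rules.

```python
from datetime import datetime, timedelta

def active_minutes(event_times: list[datetime], idle_cutoff_min: int = 10) -> float:
    """Sum gaps between consecutive events, ignoring gaps longer than the idle cutoff."""
    events = sorted(event_times)
    total = timedelta()
    for prev, curr in zip(events, events[1:]):
        gap = curr - prev
        if gap <= timedelta(minutes=idle_cutoff_min):
            total += gap
    return total.total_seconds() / 60

def completion_ratio(completed_on_time: int, assigned: int = 12) -> float:
    """Fraction of assigned modules completed on time."""
    return completed_on_time / assigned

log = [datetime(2024, 3, 1, 10, 0), datetime(2024, 3, 1, 10, 6),
       datetime(2024, 3, 1, 10, 9), datetime(2024, 3, 1, 11, 30)]  # hypothetical clickstream
print(f"Active minutes: {active_minutes(log):.1f}")       # the 81 min gap is treated as idle
print(f"Completion ratio: {completion_ratio(9):.2f}")
```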
4.4.4. Learner Feedback Questionnaire
At the end of Week 8, the participants completed an online feedback form addressing the following:
System Usability: Ratings of navigation, layout, clarity of instructions;
Perceived Usefulness: Rating to what extent the system helped them learn or stay motivated;
Open-Ended Items: Invitations to share positive and negative experiences, as well as suggestions for improvement.
4.5. Data Analysis Procedures
4.5.1. Quantitative Analysis
Learning Gains
Within-Group Comparison: A paired t-test (α = 0.05) assessed the significance of pre-to-post-test improvements in each group. Effect sizes (Cohen’s d) were calculated to gauge practical significance.
Between-Group Comparison: An independent-sample t-test evaluated whether the experimental group’s post-test scores differed significantly from the control group, controlling for baseline scores when necessary.
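A minimal sketch of these comparisons, using synthetic scores rather than the study's data, might look like this (SciPy for the t-tests, Cohen's d computed by hand):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.normal(60, 10, 125)                 # synthetic pre-test scores
post = pre + rng.normal(20, 8, 125)           # synthetic post-test scores

# Within-group: paired t-test on pre vs. post.
t_paired, p_paired = stats.ttest_rel(post, pre)

# Effect size: Cohen's d for paired samples (mean difference / SD of differences).
diff = post - pre
cohens_d = diff.mean() / diff.std(ddof=1)

# Between-group: independent-sample t-test against a synthetic control group's post-tests.
control_post = rng.normal(72, 12, 125)
t_ind, p_ind = stats.ttest_ind(post, control_post)

print(f"paired t={t_paired:.2f}, p={p_paired:.4f}, d={cohens_d:.2f}")
print(f"independent t={t_ind:.2f}, p={p_ind:.4f}")
```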
Motivation Changes
Changes in MSLQ subscales (intrinsic motivation, task value, self-efficacy) from the pre- to post-survey were analyzed using repeated-measure ANOVA. The main effect of “Time” (pre vs. post) and the interaction effect with “Group” (experimental vs. control) were examined.
Engagement Metrics
Time-on-task was compared between groups using Mann–Whitney U or independent t-tests, depending on the normality of the distribution. Similarly, module completion rates were subjected to a between-group comparison.
Additional correlation analyses (r) were conducted to explore the relationship between engagement metrics and the final post-test performance, providing insights into whether a greater time-on-task correlated with higher achievements.
4.5.2. Qualitative Analysis
Open-ended responses from the Learner Feedback Questionnaire and focus group transcripts underwent thematic analysis:
Initial Coding: Two researchers independently coded the data, labeling phrases which referenced satisfaction, frustration, clarity, or motivational impact;
Thematic Grouping: Codes were clustered into broader themes (e.g., “Narrative Enhancements”, “System Usability”, and “AI Recommendations”) to understand recurring topics;
Inter-Rater Reliability: Cohen’s kappa (κ) was calculated to measure agreement in coding. A value above 0.70 was considered acceptable;
Synthesis: Themes and subthemes were mapped to the quantitative findings to provide a richer interpretation of any observed numerical differences. For instance, if the experimental group showed higher motivation scores, the thematic analysis might have revealed participants’ positive views about leveling systems or receiving timely rewards.
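Assuming the two coders' labels are stored as parallel lists, the inter-rater reliability step above can be sketched as follows (the codes shown are hypothetical):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical codes assigned by two researchers to the same ten excerpts.
coder_a = ["usability", "motivation", "frustration", "usability", "motivation",
           "clarity", "motivation", "frustration", "usability", "clarity"]
coder_b = ["usability", "motivation", "frustration", "clarity", "motivation",
           "clarity", "motivation", "usability", "usability", "clarity"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa = {kappa:.2f}  (values above 0.70 treated as acceptable)")
```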
4.6. Ethical Considerations
Ethical approval for this study was secured from the PSAU’s Institutional Review Board (IRB) with approval number IRB-221/PSAU-593. All participants provided their informed consent, were free to withdraw at any time, and were assured that individual data would remain confidential. System logs were anonymized, with user IDs replaced by coded identifiers to preserve privacy. This study adhered to General Data Protection Regulation (GDPR) principles where applicable, ensuring that personal data were neither shared nor used for purposes outside the scope of this research.
4.7. Methodological Rigor and Limitations
To bolster internal validity, we sought to minimize confounding factors by using the same instructional materials across both groups, ensuring that group differences could be attributed primarily to adaptive and gamified features. The pre-test was used to verify that both groups had comparable baseline knowledge. External validity considerations included the fact that the participants were from a single university context; future studies might expand to diverse settings to test generalizability.
Limitations of the chosen methodology included the following:
Quasi-Experimental Design: Without fully random assignment or multiple control conditions, some unmeasured variables could have influenced the results;
Short Study Duration (8 Weeks): Longer-term effects on motivation and retention may require multi-semester or longitudinal designs;
Self-Selection Bias: Learners who volunteered may have already possessed higher digital readiness or intrinsic motivation, potentially inflating the engagement metrics.
Nonetheless, by combining quantitative and qualitative methods, this study aimed to mitigate these limitations and draw robust conclusions about the framework’s efficacy.
5. Results
This section presents the key findings from this study, which involved a total of 250 participants randomly assigned to either an experimental group (adaptive + gamified platform) or a control group (baseline LMS).
A short demographic questionnaire indicated that approximately 60% of the participants were in their first or second year of undergraduate studies. The participants spanned various majors (predominantly STEM fields), although about 30% represented non-STEM disciplines. Baseline comparisons (e.g., pre-test scores, initial motivation ratings) between groups suggested no statistically significant differences at the start of this study, supporting the comparability of the two conditions.
5.1. Learning Gains (Pre- and Post-Test Comparisons)
5.1.1. Average Score Improvements
A central research objective was to measure knowledge gains using pre- and post-tests. Each participant completed a five-item test during week 0 (pre-test) and an equivalent-difficulty five-item test during week 8 (post-test). The results indicated substantial improvement across both groups, but the experimental group demonstrated significantly higher gains, as shown in
Figure 5.
Experimental Group: The mean pre-test score was (), increasing to () on the post-test.
Control Group: The mean pre-test score was (), rising to () on the post-test.
A repeated-measure ANOVA indicated a significant main effect of time and a significant time × group interaction. Post hoc tests revealed that the experimental group’s improvement exceeded that of the control group.
5.1.2. Effect Size and Interpretation
Using Cohen’s d to measure the effect size of the pre-to-post-test change, we obtained the following results:
Experimental Group: , reflecting a large effect for knowledge gains;
Control Group: , which was still a substantial gain, yet noticeably smaller than in the experimental condition.
These findings suggest that integrating AI-driven personalization and gamification mechanics led to meaningful improvements in learning outcomes. The difference in effect sizes supports the conclusion that personalization plus game elements yielded a greater impact than the standard LMS approach alone. To evaluate the broader impact, we compared the final course grades of the experimental group against students from the previous academic year (2023) who had taken the same course without adaptive gamification. The results indicated a 7.5% improvement in the average course grades for students using the adaptive system, supporting its positive effect on long-term academic performance.
5.2. Changes in Motivation and Self-Efficacy
Another pillar of this study involved monitoring motivation and self-efficacy using a modified MSLQ (Motivated Strategies for Learning Questionnaire). Participants completed a motivation survey during both week 0 and week 8, measuring subscales such as intrinsic goal orientation, task value, and self-efficacy for learning, as illustrated in
Figure 6.
Overall Motivation Trends
From the pre- to post-study periods, both groups showed moderate increases in motivation, yet the experimental group consistently reported higher gains in intrinsic motivation and self-efficacy:
Intrinsic Motivation (1–5 scale): The experimental group’s mean climbed from to , whereas the control group’s mean rose from to ;
Self-Efficacy (1–5 scale): The experimental group’s values improved from to , compared to the control group’s change from to .
An independent-sample t-test on the post-study motivation scores yielded significant differences in intrinsic motivation and self-efficacy between the two groups. A qualitative analysis (see Section 5.5) corroborated that participants in the experimental group felt more supported and “challenged in a good way”.
5.3. Engagement Metrics
Engagement was a core focus point of this study, with the adaptive engine and game mechanics layer designed to sustain interest. The engagement analytics captured the following:
Time-on-Task (cumulative active minutes);
Module Completion Rates (proportion of completed modules);
Frequency of Logins, Quiz Attempts, and Forum Contributions.
This section highlights how these metrics compared across groups, with all results shown in
Figure 7.
5.3.1. Time-on-Task
Experimental Group: On average, participants spent 185 active minutes (cumulative) over the 8-week study.
Control Group: The average was 145 active minutes.
An independent-sample t-test indicated a significant difference in time-on-task. The experimental learners appeared more motivated to continue exploring content, likely encouraged by real-time feedback, game-based incentives, or a combination of both.
5.3.2. Module Completion Rates
Out of the 12 assigned modules, the experimental group completed an average of 9.1 modules, compared to 7.5 in the control group. Approximately 30% of the experimental group completed all modules (vs. 17% in the control group). A Mann–Whitney U test (non-parametric) confirmed that the difference in distributions was statistically significant.
5.3.3. Logins, Quiz Attempts, and Forum Contributions
Logins: The experimental group averaged 11 logins per participant, while the control group averaged 8.
Quiz Attempts: The experimental group logged attempts, while the control group had attempts.
Forum Contributions: The experimental group had posts, while the control group had posts.
Overall, these data indicate that gamification and personalization likely spurred more frequent interaction with course materials.
5.4. Correlation Analyses: Engagement vs. Performance
Beyond group differences, we examined the correlations between engagement metrics and post-test scores, as shown in
Figure 8. A Pearson correlation analysis revealed that the time-on-task correlated moderately with the post-test performance
, and the module completion rate displayed an even stronger correlation
. These results, consistent across both groups, suggest that learners who devoted more active time and completed more modules tended to achieve higher knowledge gains.
However, the correlation between forum contributions and the final scores was weaker
, indicating that, while discussion posts can be beneficial, they may not be as direct an indicator of learning as time spent on tasks or modules completed, as illustrated in
Figure 9. In the experimental group, the correlation between quiz attempts and post-test performance was especially pronounced
, suggesting that repeated practice under adaptive feedback fostered deeper mastery.
5.5. Qualitative Findings from Open-Ended Feedback
While much of the analysis was quantitative, a subset of learners (roughly 60 from each group) provided open-ended comments through an optional feedback questionnaire. A thematic analysis identified the following recurring themes:
Positive Attitudes Toward Gamification (Experimental Group)
Many participants remarked that badges, points, and levels “made learning feel less tedious”;
Some praised the “narrative quests” triggered after certain modules, which they found “fun” and “motivating”.
Adaptive Difficulty Appreciated
Participants mentioned enjoying “the right level of challenge”, with the system offering simpler exercises if they struggled, or advanced tasks if they excelled;
A few users found the transitions “too abrupt” if they performed extremely well or very poorly on a quiz.
Desire for More Peer Interaction
Both groups, surprisingly, expressed a desire for more direct social features—group challenges, peer feedback on forum posts, or real-time chats—to boost collaboration;
This aligns with the modest correlation between forum contributions and performance, underscoring that discussion-based engagement might be improved further.
Technical Challenges
In summary, the qualitative data provided nuanced support for the idea that adaptive gamification fostered engagement. Some respondents, however, emphasized a need for even more social interaction and a smoother integration between adaptive content suggestions and real-time motivational prompts. The experimental group outperformed the control group in post-test scores (85.2 vs. 78.5, p < 0.01). Intrinsic motivation rose by 29% in the experimental group versus 13% in the control group. The experimental group spent 27% more time on the tasks and had greater module completion rates (9.1 vs. 7.5 out of 12).
6. Discussion
These findings extend prior work arguing that converging adaptive learning, AI-driven personalization, and gamification can greatly enhance learner engagement and performance in digital contexts. In our quasi-experimental design, 250 undergraduate learners were randomly assigned to an experimental group receiving adaptive, gamified experiences or a control group following baseline online instruction. The results revealed significant differences in post-test scores, motivation metrics, and engagement indicators, adding important insights into how the three strands of adaptivity, personalization, and game mechanics can work together to optimize the learning environment.
6.1. Cognitive Gains and Adaptive Learning
The average post-test score in the experimental group was 84.5%, compared with 78.8% in the control group. This is consistent with studies indicating that an adaptive engine can produce increases of up to 12–18% in test performance compared to traditional e-learning modules. Our results extend these data, demonstrating that coupling adaptivity with gamification may further raise cognitive outcomes.
This effect also aligns with previous research indicating that embedded formative assessments and dynamic adjustments help learners avoid repeated errors and track their progress more precisely.
The effect size for the experimental group was significantly larger than that for the control group, echoing the consensus that adaptive frameworks guide learners through customized pathways, amplifying mastery and reducing attrition (
Jiang et al., 2022). As
Ghanbaripour et al. (
2024) note, motivational design remains under-implemented on many platforms, and our findings indicate that overlaying game mechanics can complement these adaptive functions in ways that create experiences that are not only cognitively effective but motivationally rich.
6.2. AI Personalization and Engagement
Contributing in important ways to the stronger performance in the experimental group was AI-driven personalization—the system’s real-time analysis of quiz attempts, hints requested, and time-on-task. Participants whose performance was repeatedly flagged received remedial exercises and supportive feedback—perfect examples of the suggested micro-interventions of
du Plooy et al. (
2024) and
Naseer et al. (
2024b).
This probably minimized frustration points and kept learners at a suitable level of difficulty, reflecting the design supported by
Lim et al. (
2023) to identify at-risk students early. Adaptively personalizing difficulty emulated more sophisticated AI personalization—such as Bayesian knowledge tracing—enabling subtle and continuous refinement of each learner’s experience. Our analysis also showed moderate to strong correlations, ranging from 0.46 to 0.61, between engagement metrics such as time-on-task and module completion rates and the final post-test scores. These values echoed the findings of
Rodrigues et al. (
2020) and
Yanes et al. (
2020), who reported that personalizing difficulty and pacing often results in deeper involvement with the course material.
Because AI-driven recommendation engines respond to real-time learner data, they closely align tasks with individual capability, reducing extraneous cognitive load and fostering “optimal challenge”. Such synergy underscores the importance of robust data analytics, a point which
Waladi et al. (
2024) emphasized as crucial to successful adaptive personalization.
6.3. Motivational Effects of Gamification
Perhaps the most dramatic finding was the differential rise in motivation and self-efficacy across conditions. Learners in the experimental condition reported intrinsic motivation increases from 3.1 to 4.0 on a 5-point scale, surpassing the more modest rise from 3.1 to 3.5 in the control group. Prior work has shown that well-aligned game mechanics—badges, leaderboards, and narrative quests—can serve as motivational scaffolds, and our data reinforce this: points, levels, and instant feedback energized learners, leading to consistent study patterns without the monotony that can characterize online courses. The trends in learner motivation over the study period are shown in
Figure 10.
Moreover, the relationship between repeated quiz attempts and performance was especially strong in the experimental group, suggesting that the gamified incentives encouraged repeated practice. Kaya and Ercag warn that mere “pointification” can ring hollow; however, our interventions embedded game elements within the adaptive logic. Offering rewards and challenges meaningfully—and scaling them in support of learners’ mastery—reflects the synergy described by Hong et al. (2024) and Pratiwi et al. (2024). It is therefore plausible that learners experienced their earned badges and level-ups as genuine markers of mastery or improvement rather than arbitrary tokens, congruent with the principle that “meaningful gamification” instills sustained engagement.
The key insights regarding the motivational effects of gamification are as follows:
Engagement peaked during week 9, coinciding with new game mechanics (e.g., leaderboard updates, challenge unlocks);
A slight dip in week 3 suggested fatigue, followed by a rebound in week 4, likely due to adaptive difficulty adjustments;
The adaptive gamified system consistently favored the experimental group, maintaining higher engagement levels there than in the control group.
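The sketch below illustrates, under assumed badge names and thresholds, how reward triggers can be keyed to the adaptive engine's mastery estimates rather than to raw activity counts, which is what we mean above by rewards that scale with mastery.

```python
def award_events(prev_mastery: float, new_mastery: float, error_streak: int) -> list[str]:
    """Return gamification events unlocked by a single mastery update (illustrative thresholds)."""
    events = []
    # A level-up badge is granted only when the mastery estimate crosses a threshold,
    # so the reward signals genuine progress rather than sheer activity.
    for threshold, badge in [(0.5, "Apprentice"), (0.75, "Practitioner"), (0.9, "Expert")]:
        if prev_mastery < threshold <= new_mastery:
            events.append(f"badge:{badge}")
    # A persistence micro-badge after recovering from a run of incorrect answers.
    if error_streak >= 3 and new_mastery > prev_mastery:
        events.append("badge:Comeback")
    return events

print(award_events(prev_mastery=0.70, new_mastery=0.78, error_streak=0))  # ['badge:Practitioner']
```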
6.4. Interplay of Adaptivity and AI with Gamified Design
Our results confirm that the triple integration of adaptivity, AI-based personalization, and game mechanics yields potent benefits. For example, time-on-task in the experimental group averaged 185 min versus 145 min in the control group, reinforcing the assertion that game-enhanced adaptivity can substantially increase sustained engagement. This result also echoes the synergy identified in earlier scholarship, with real-time data updates feeding into both AI-driven difficulty adjustments and gamification triggers. This dynamic synergy pushed advanced tasks onto high-scoring learners while offering micro-badges to struggling ones, shaping a more personalized learning path.
However, combining these components effectively is not trivial. Past research warns that many systems remain “static” in their gamification or adopt a “one-size-fits-all” approach in their adaptivity, failing to tailor the motivational cues themselves. Our study addressed this by ensuring that content difficulty and rewards changed in tandem, delivering a curated experience for each learner.
The consistency of these effects over the eight-week period supports the claim of Svyrydiuk et al. (2024) that integrated adaptive–gamified models create “more significant and sustained improvements” than adaptivity or gamification employed alone.
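To make the coupling described above concrete, the following sketch (with invented field names and thresholds) shows a single event handler in which one interaction updates the learner state, and that same update feeds both the difficulty recommendation and the gamification trigger.

```python
def handle_quiz_event(state: dict, correct: bool) -> dict:
    """Illustrative handler: one event updates mastery, difficulty, and rewards together."""
    # Simple mastery update (an exponential moving average stands in for the full model).
    state["mastery"] = 0.8 * state.get("mastery", 0.3) + 0.2 * (1.0 if correct else 0.0)
    # Adaptive branch: route high estimates to harder items, low estimates to remediation.
    if state["mastery"] > 0.75:
        state["next_tier"] = "challenge"
    elif state["mastery"] > 0.4:
        state["next_tier"] = "core"
    else:
        state["next_tier"] = "remedial"
    # Gamified branch: the same update can unlock a level-up for high performers
    # or a micro-badge that rewards a struggling learner's correct answer.
    if state["mastery"] > 0.75:
        state.setdefault("events", []).append("level_up")
    elif correct and state["mastery"] <= 0.4:
        state.setdefault("events", []).append("micro_badge")
    return state

print(handle_quiz_event({"mastery": 0.72}, correct=True))
# -> mastery ≈ 0.78, next_tier 'challenge', events ['level_up']
```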
6.5. Challenges and Considerations for the Future
Despite these promising results, some limitations and challenges are worth mentioning. First, as Yadav (2024) notes, “The logging of detailed interaction records raises serious ethical and data privacy concerns.” The approach followed in this paper depended heavily on logging interaction events, such as quiz attempts and idle times, raising issues of informed consent and the sensitive handling of learner data, as noted by Morandini et al. (2023). Learners' interest, score improvements, and intention to spend more time on module completion are shown in Figure 11 for the top learners and their outcomes.
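One practical mitigation, sketched below under assumed field names and storage format, is to log interaction events only for consenting learners and only under a salted pseudonym, so that raw identifiers never enter the analytics store. This is an illustrative pattern, not the logging pipeline used in this study.

```python
import hashlib
import json
import time

SALT = "replace-with-a-per-deployment-secret"  # assumption: kept out of version control

def pseudonymize(learner_id: str) -> str:
    """Derive a stable pseudonym so events can be linked without storing raw identifiers."""
    return hashlib.sha256((SALT + learner_id).encode()).hexdigest()[:16]

def log_event(learner_id: str, event_type: str, payload: dict, consented: bool) -> None:
    if not consented:  # honor the learner's informed-consent status before storing anything
        return
    record = {
        "learner": pseudonymize(learner_id),   # no raw identifier in the log
        "event": event_type,                   # e.g., "quiz_attempt", "idle_timeout"
        "payload": payload,
        "ts": int(time.time()),
    }
    with open("events.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_event("student_042", "quiz_attempt", {"item": "algebra_07", "correct": False}, consented=True)
```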
Second, while our study demonstrated strong short-term gains, several authors have pointed to the need for longitudinal studies that verify retention and skill transfer beyond a single semester. For example,
Jaramillo-Mediavilla et al. (
2024) and
Liu et al. (
2023) indicate that how learners maintain or lose their newly acquired competencies over time remains an open question. Moreover, while the correlation data indicated a positive relationship between engagement metrics and performance, correlation is not causation. Intrinsically motivated learners may simply have spent more time on tasks, a confounding factor that future work should address through more rigorous designs. Another avenue involves affective computing: frustration detection could be used to create emotional feedback loops that further tune the system's responsiveness.
Future research should consider multi-group comparisons, such as the following:
Gamification-only vs. AI personalization-only vs. combined approach to determine the independent and interactive effects of each intervention;
Passive vs. interactive gamification to assess whether dynamic adaptive gamification outperforms static point-based systems.
6.6. Implications for Practice and Research
Evidence from this study reinforces the literature advocating a holistic approach to adaptive online learning. Our findings indicate that institutions combining AI personalization and game mechanics tend to increase students' motivation, foster deeper involvement, and improve cognitive outcomes. In this respect, embedding the principles of self-determination can give learners even greater benefits by providing autonomy in choosing tasks or pathways. Educators should ensure that game elements supplement rather than overshadow core pedagogical goals; this supports the stance that purely cosmetic rewards yield diminishing returns (Rodrigues et al., 2019). From a research perspective, further refinements might be explored, such as studying collaborative gamification, as addressed by Kawtar et al. (2022), or investigating which subpopulations benefit most from these designs, such as novices versus experts. In addition, investigations using multi-institution samples would permit generalization beyond a single cohort, a long-standing concern with convenience sampling. Lastly, continued progress in “explainable AI” will be crucial for interpreting how adaptive decisions are made, ensuring transparency and building confidence among educators and learners.
In sum, our results highlight that adaptive learning, AI-driven personalization, and gamification may jointly enhance performance, motivation, and engagement, consistent with the broader literature. Such an integrative design meets cognitive needs not only by providing tailored tasks but also by responding to the less-considered affective and motivational dimensions that sustain persistence in online environments.
7. Conclusions
This research aimed to evaluate the effects of integrating AI-driven personalization and gamification elements within an adaptive learning environment. Using a quasi-experimental design, 250 undergraduate participants were randomly assigned to an experimental group or a control group. Over an eight-week period, both groups engaged with the same core content but experienced different delivery methods. The experimental group received dynamic difficulty adjustments and game-based incentives, such as points, badges, and levels, along with immediate feedback, whereas the control group followed a standard online course structure. The findings revealed significant differences in learning outcomes. The experimental group achieved a higher average post-test score than the control group. A repeated-measures ANOVA identified a significant time × group interaction, with the experimental group showing a larger effect size than the control group. Additionally, motivation and self-efficacy, measured using a modified MSLQ, improved more substantially in the experimental group. For example, intrinsic motivation increased from 3.1 to 4.0 on a 5-point scale, compared to an increase from 3.1 to 3.5 for the control group. Engagement analytics further supported these findings. The experimental group logged an average of 185 min of active time-on-task and completed 9.1 out of 12 modules, while the control group logged 145 min and completed 7.5 modules. Correlation analyses showed moderate to strong relationships (coefficients ranging from 0.46 to 0.61) between engagement metrics, such as module completion and quiz attempts, and post-test scores. Qualitative feedback aligned with the quantitative results, with most experimental participants commending the adaptiveness of the challenge levels and the motivational benefits of the game mechanics. Overall, the results highlight the potential of combining adaptive personalization with gamification to enhance learner performance, motivation, and sustained engagement in digital learning environments.
Author Contributions
Conceptualization, Q.A., A.A., N.A. and F.N.; methodology, F.N., A.A. and M.N.K.; software, Q.A. and A.A.; validation, A.A., M.N.K., Q.A., N.A. and F.N.; formal analysis, Q.A., N.A., M.N.K. and F.N.; investigation, Q.A., N.A., F.N., A.A. and M.N.K.; resources, N.A. and F.N.; data curation, N.A., M.N.K., Q.A. and F.N.; writing—original draft preparation, Q.A., N.A., F.N., M.N.K. and A.A.; writing—review and editing, N.A., F.N., Q.A., A.A. and M.N.K.; visualization, M.N.K., F.N. and Q.A.; supervision, F.N.; project administration, F.N., N.A., Q.A. and M.N.K.; funding acquisition, A.A. All authors have read and agreed to the published version of the manuscript.
Funding
The author extends his appreciation to Prince Sattam bin Abdulaziz University for funding this research work through project number (PSAU/2024/01/99520).
Institutional Review Board Statement
Ethical approval for this study was obtained from the Institutional Review Board (IRB) of Prince Sattam bin Abdulaziz University (PSAU) under approval number IRB-221/PSAU-593.
Informed Consent Statement
Informed consent was obtained from all participants involved in the study.
Data Availability Statement
The original contributions presented in this study are included in the supplementary material as raw data gathered from the participants. Further inquiries can be directed to the corresponding authors.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Addas, A., Naseer, F., Tahir, M., & Khan, M. N. (2024). Enhancing higher-education governance through telepresence robots and gamification: Strategies for sustainable practices in the AI-driven digital era. Education Sciences, 14(12), 1324. [Google Scholar] [CrossRef]
- Bennani, S., Maalel, A., & Ben Ghezala, H. (2021). Adaptive gamification in e-learning: A literature review and future challenges. Computer Applications in Engineering Education, 30(2), 628–642. [Google Scholar] [CrossRef]
- Boulton, A. (2024). Data-driven learning. In Corpora for language learning (pp. 43–58). Routledge. [Google Scholar] [CrossRef]
- Duggal, K., Gupta, L. R., & Singh, P. (2021). Gamification and machine learning inspired approach for classroom engagement and learning. Mathematical Problems in Engineering, 2021, 9922775. [Google Scholar] [CrossRef]
- du Plooy, E., Casteleijn, D., & Franzsen, D. (2024). Personalized adaptive learning in higher education: A scoping review of key characteristics and impact on academic performance and engagement. Heliyon, 10, e39630. [Google Scholar] [CrossRef] [PubMed]
- El-Sabagh, H. A. (2021). Adaptive e-learning environment based on learning styles and its impact on development students’ engagement. International Journal of Educational Technology in Higher Education, 18(1), 53. [Google Scholar] [CrossRef]
- Fahad Mon, B., Wasfi, A., Hayajneh, M., Slim, A., & Abu Ali, N. (2023). Reinforcement learning in education: A literature review. Informatics, 10(3), 74. [Google Scholar] [CrossRef]
- Fedro, W. I. (2021). Mandarin language learning with gamification method. International Journal of Advanced Trends in Computer Science and Engineering, 10(5), 3046–3052. [Google Scholar] [CrossRef]
- Fernández-Herrero, J. (2024). Evaluating recent advances in affective intelligent tutoring systems: A scoping review of educational impacts and future prospects. Education Sciences, 14(8), 839. [Google Scholar] [CrossRef]
- Ghanbaripour, A. N., Talebian, N., Miller, D., Tumpa, R. J., Zhang, W., Golmoradi, M., & Skitmore, M. (2024). A systematic review of the impact of emerging technologies on student learning, engagement, and employability in built environment education. Buildings, 14(9), 2769. [Google Scholar] [CrossRef]
- Gligorea, I., Cioca, M., Oancea, R., Gorski, A.-T., Gorski, H., & Tudorache, P. (2023). Adaptive learning using artificial intelligence in e-learning: A literature review. Education Sciences, 13(12), 1216. [Google Scholar] [CrossRef]
- Gupta, T. (2024). Adaptive learning systems: Harnessing AI to personalize educational outcomes. International Journal for Research in Applied Science and Engineering Technology, 12(11), 458–464. [Google Scholar] [CrossRef]
- Halkiopoulos, C., & Gkintoni, E. (2024). Leveraging AI in e-learning: Personalized learning and adaptive assessment through cognitive neuropsychology—A systematic analysis. Electronics, 13(18), 3762. [Google Scholar] [CrossRef]
- Hollister, B., Nair, P., Hill-Lindsay, S., & Chukoskie, L. (2022). Engagement in online learning: Student attitudes and behavior during COVID-19. Frontiers in Education, 7, 851019. [Google Scholar] [CrossRef]
- Hondoma, T. (2024). Enhancing engagement with personalized recommendations with AI-powered recommender systems. In AI-driven marketing research and data analytics (pp. 162–181). IGI Global. [Google Scholar] [CrossRef]
- Hong, Y., Saab, N., & Admiraal, W. (2024). Approaches and game elements used to tailor digital gamification for learning: A systematic literature review. Computers & Education, 212, 105000. [Google Scholar] [CrossRef]
- Jaramillo-Mediavilla, L., Basantes-Andrade, A., Cabezas-González, M., & Casillas-Martín, S. (2024). Impact of gamification on motivation and academic performance: A systematic review. Education Sciences, 14(6), 639. [Google Scholar] [CrossRef]
- Jiang, B., Li, X., Yang, S., Kong, Y., Cheng, W., Hao, C., & Lin, Q. (2022). Data-driven personalized learning path planning based on cognitive diagnostic assessments in MOOCs. Applied Sciences, 12(8), 3982. [Google Scholar] [CrossRef]
- Kawtar, Z., Mohamed, E., & Mohamed, K. (2022). Collaboration in adaptive e-learning. International Journal of Computer & Organization Trends, 12(1), 15–19. [Google Scholar] [CrossRef]
- Kaya, O. S., & Ercag, E. (2023). The impact of applying challenge-based gamification program on students’ learning outcomes: Academic achievement, motivation and flow. Education and Information Technologies, 28(8), 10053–10078. [Google Scholar] [CrossRef]
- Lim, L., Lim, S. H., & Lim, W. Y. R. (2023). Efficacy of an adaptive learning system on course scores. Systems, 11(1), 31. [Google Scholar] [CrossRef]
- Liu, S., Ma, G., Tewogbola, P., Gu, X., Gao, P., Dong, B., He, D., Lai, W., & Wu, Y. (2023). Game principle: Enhancing learner engagement with gamification to improve learning outcomes. Journal of Workplace Learning, 35(5), 450–462. [Google Scholar] [CrossRef]
- Morandini, S., Fraboni, F., Balatti, E., Hackmann, A., Brendel, H., Puzzo, G., Volpi, L., Giusino, D., De Angelis, M., & Pietrantoni, L. (2023, August 22–24). Assessing the transparency and explainability of AI algorithms in planning and scheduling tools: A review of the literature. 10th International Conference on Human Interaction and Emerging Technologies (IHIET 2023), Nice, France. [Google Scholar] [CrossRef]
- Naseer, F., Khalid, M. U., Ayub, N., Rasool, A., Abbas, T., & Afzal, M. W. (2024a). Automated assessment and feedback in higher education using generative AI. In Transforming education with generative AI (pp. 433–461). IGI Global. [Google Scholar] [CrossRef]
- Naseer, F., Khan, M. N., Tahir, M., Addas, A., & Aejaz, S. M. H. (2024b). Integrating deep learning techniques for personalized learning pathways in higher education. Heliyon, 10, e32628. [Google Scholar] [CrossRef] [PubMed]
- Ng, L.-K., & Lo, C.-K. (2022). Flipped classroom and gamification approach: Its impact on performance and academic commitment on sustainable learning in education. Sustainability, 14(9), 5428. [Google Scholar] [CrossRef]
- Pereira, J. T., & Raja Harun, R. N. S. (2024). Gamification of classroom activities: Undergraduates’ perception. AJELP: The Asian Journal of English Language and Pedagogy, 12(1), 80–92. [Google Scholar] [CrossRef]
- Pratiwi, P., Situmorang, R., & Sukardjo, S. (2024). Gamification for studying mathematics by e-learning. Proceeding of the International Conference on Multidisciplinary Research for Sustainable Innovation, 1, 195–200. [Google Scholar] [CrossRef]
- Ramos, D. P., Araújo, F. R. d. S., Rancan, G., Júnior, H. G. M., & De Bona, M. (2024). Gamification and motivation in learning. RCMOS—Revista Científica Multidisciplinar O Saber, 1(1). [Google Scholar] [CrossRef]
- Rodrigues, L., Toda, A. M., Palomino, P. T., Oliveira, W., & Isotani, S. (2020). Personalized gamification: A literature review of outcomes, experiments, and approaches. In TEEM’20: Eighth international conference on technological ecosystems for enhancing multiculturality. ACM. [Google Scholar] [CrossRef]
- Rodrigues, L. F., Oliveira, A., & Rodrigues, H. (2019). Main gamification concepts: A systematic mapping study. Heliyon, 5(7), e01993. [Google Scholar] [CrossRef]
- Shea, P. (2023). Teaching Presence as a guide for productive design in online and blended learning. In The design of digital learning environments (pp. 68–83). Routledge. [Google Scholar] [CrossRef]
- Svyrydiuk, O., Balokha, A., Myshkarova, S., Kulyk, N., & Vakulenko, S. (2024). Empowering independent learning: The key role of online platforms. Revista Amazonia Investiga, 13(79), 107–122. [Google Scholar] [CrossRef]
- Tariq, R., Aponte Babines, B. M., Ramirez, J., Alvarez-Icaza, I., & Naseer, F. (2025). Computational thinking in STEM education: Current state-of-the-art and future research directions. Frontiers in Computer Science, 6, 1480404. [Google Scholar] [CrossRef]
- Technical, vocational, tertiary and adult education. (2022). In Global education monitoring report (pp. 253–271). United Nations. [CrossRef]
- Vargas Domínguez, J. M., & de Trazegnies Otero, C. (2020, November 9–10). A proposal for online synchronous typing of math related exercises. 13th Annual International Conference of Education, Research and Innovation, Online. [Google Scholar] [CrossRef]
- Vázquez-Parra, J. C., Tariq, R., Castillo-Martínez, I. M., & Naseer, F. (2024). Perceived competency in complex thinking skills among university community members in Pakistan: Insights across disciplines. Cogent Education, 12(1), 2445366. [Google Scholar] [CrossRef]
- Waladi, C., Lamarti, M. S., & Khaldi, M. (2024). Crafting an AI-powered adaptive e-learning framework: Based on Kolb's learning style. International Journal of Religion, 5(8), 232–244. [Google Scholar] [CrossRef]
- White, G. (2020). Adaptive learning technology relationship with student learning outcomes. Journal of Information Technology Education: Research, 19, 113–130. [Google Scholar] [CrossRef] [PubMed]
- Yadav, P. (2024). Gamification and personalised learning. In Transforming education for personalized learning (pp. 85–99). IGI Global. [Google Scholar] [CrossRef]
- Yanes, N., Mostafa, A. M., Ezz, M., & Almuayqil, S. N. (2020). A machine learning-based recommender system for improving students learning experiences. IEEE Access, 8, 201218–201235. [Google Scholar] [CrossRef]
- Zourmpakis, A.-I., Kalogiannakis, M., & Papadakis, S. (2024). The effects of adaptive gamification in science learning: A comparison between traditional inquiry-based learning and gender differences. Computers, 13(12), 324. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).