Article

Data-Driven Adaptive Course Framework—Case Study: Impact on Success and Engagement

by Neslihan Ademi 1,* and Suzana Loshkovska 2
1 Faculty of Engineering, International Balkan University, 1000 Skopje, North Macedonia
2 Faculty of Computer Science and Engineering, University of St. Cyril and Methodius, 1000 Skopje, North Macedonia
* Author to whom correspondence should be addressed.
Multimodal Technol. Interact. 2025, 9(7), 74; https://doi.org/10.3390/mti9070074
Submission received: 16 June 2025 / Revised: 12 July 2025 / Accepted: 17 July 2025 / Published: 19 July 2025

Abstract

Adaptive learning tailors learning to the specific needs and preferences of the learner. Although studies focusing on adaptive learning systems became popular decades ago, there is still a need for empirical evidence on the usability of adaptive learning in various educational environments. This study uses LMS log data to drive an adaptive course design developed explicitly for formal educational environments in higher education institutions. The framework utilizes learning analytics and machine learning techniques. Based on learners’ online engagement and tutors’ assessment of course activities, adaptive learning paths are presented to learners. To determine whether our system can increase learner engagement and prevent failures, learner success and engagement are measured during the learning process. The results show that the proposed adaptive course framework can increase course engagement and success. However, this potential depends on several factors, such as course organization, feedback, time constraints for activities, and the use of incentives.

1. Introduction

Engaging and motivating learners are the most significant challenges in education. The “one-size-fits-all” approach falls short due to individual learners’ diverse cognitive abilities. Traditional teaching methods may not effectively cater to each student’s needs and can even be demotivating for some. In contrast, one-to-one teaching enables a deeper understanding of each learner’s unique requirements, ensuring personalized attention.
Adaptive Learning Systems (ALSs) are a solution to these challenges in e-learning environments. ALSs enable learners to tailor their learning experiences according to their preferences and needs, similar to the efficiency of one-to-one teaching. However, implementing ALSs requires subject-specific data and models, making the development process time-consuming and resource-intensive.
To streamline ALS development, researchers are exploring the integration of Adaptive Learning into existing Learning Management Systems (LMSs) [1,2]. LMSs are robust platforms for disseminating learning materials, facilitating assignments and tests, and tracking learner behavior. Data mining tools developed for LMSs can analyze user behavior data to customize the learning environment. While several studies propose integrating adaptivity into LMSs, many of these initiatives remain experimental and await practical application in educational settings.
The advances in Artificial Intelligence (AI) promise to automate personalization in education, potentially alleviating the burden on teachers in lesson planning, material preparation, and assessment [3]. However, the efficacy of AI methods in education warrants empirical validation before widespread adoption. Xie et al. [4] noted that when designing AI-enabled learning systems, little attention is given to courses that develop practical skills. It is crucial to recognize that certain aspects of the teaching process, particularly tasks involving practical or creative skills, may resist automation with current technology. Thus, the use of AI in education should be judiciously considered in each specific context.
Our research focuses on implementing adaptivity in formal higher education courses, which inherently have organizational restrictions. While the course structure remains linear, adaptivity can be introduced into learning materials, assessment methods, and student activities. We propose an adaptive course framework that tailors the learning path for each student based on performance and behavior tracked through the LMS and student information system.
Our framework employs continuous formative assessments to refine data for decision-making and enhance student engagement. Predictive analysis of the LMS log data helps tutors be aware of student behavior and assist the at-risk group. When the model identifies at-risk students in the initial stages of the course, they are offered different learning paths and recovery activities to support their learning.
Adaptivity alone does not always guarantee the expected success in higher education settings because the complete process faces time constraints. All activities must be finished within a period defined by the institution. Effective course design plays a crucial role in achieving successful outcomes. Factors such as feedback, time constraints for activities, and using incentives significantly influence results. Providing timely feedback and incorporating incentives can help students engage more actively with the course.
We performed an experimental evaluation to test the adaptive learning framework and determine its advantages and shortcomings. The assessment addresses several key questions:
  • How does adaptivity impact student engagement with the LMS and student success in formal education?
  • How do time and score constraints affect student success and engagement in adaptive learning?
  • How do changes in the workload and rules (e.g., number of activities, deadlines) affect dropout and failure rates in adaptive learning?
This paper presents the experimental evaluation of our adaptive learning framework in authentic Computer Engineering courses at the bachelor’s degree level, accredited by the Ministry of Education and Science of North Macedonia. The subsequent sections provide an overview of the research, methods implemented to obtain the proposed framework, the experimental setup, results, discussion, conclusions, and avenues for future research.

2. Current Research

Adaptive Learning Systems are computer-based learning environments that provide personalized instruction tailored to individual learners’ needs, preferences, and progress [5,6]. These systems utilize artificial intelligence, data analytics, and pedagogical theories to dynamically adjust the content difficulty, presentation, and pacing based on learners’ performance and characteristics [7,8].
Research has demonstrated that ALSs effectively enhance academic performance. By personalizing learning paths, these systems allow students to focus more on challenging concepts, enabling them to master the material more comprehensively [9]. This tailored approach has been associated with higher test scores and improved grades compared to traditional learning methods [10].
ALSs can operate under various conditions, including blended and fully online learning environments, which differ in several key ways. Blended learning environments often utilize ALSs to complement face-to-face teaching and improve out-of-class activities. Online systems rely heavily on such tools as their primary means of engagement and content delivery [11]. Fully online settings may require more focused strategies to encourage participation and engagement, with adaptive systems playing a crucial role in personalizing the learning experience. During the COVID-19 pandemic, fully online environments highlighted the importance of adaptive learning technologies in maintaining engagement and performance without in-person interactions [12].
Recently, ALSs have evolved to address various aspects of personalized education by leveraging advanced technologies and methodologies. The ALOSI (Adaptive Learning Open Source Initiative) framework, developed by Harvard and Microsoft in collaboration with edX, provides open-source adaptive learning technology and a common framework to measure learning gains and learner behavior [13]. This system emphasizes assessment remediation, associated with substantial increases in learning gains. Additionally, research on ALSs for vocational education uses artificial intelligence and machine learning to provide personalized learning pathways and real-time feedback [14]. Adaptive learning systems increasingly incorporate big data technologies and advanced analytics to provide more personalized learning experiences [15]. These systems aim to address learner diversity, maximize engagement, and improve learning outcomes by customizing educational content and strategies according to individual learners’ characteristics, skills, and goals [16,17].
Some adaptive systems apply pre-tests to discover the learners’ prior knowledge, preferences, and learning styles to develop the initial user model [18,19,20,21]. Pre-tests are conducted at the beginning of the courses, and learners are classified accordingly. Some studies address computer-based detection techniques with or without machine learning to improve personal trait identification in ALSs [22].
The analyses conducted to develop user models can generally be categorized into two main types: descriptive analysis and predictive analysis. In their systematic review of AI-enabled adaptive learning systems based on 147 studies, Kabudi et al. [8] found that 44 of these studies utilized either descriptive or predictive data analytics, with some employing both approaches. Descriptive analytics focuses on identifying patterns and understanding past events, revealing what has occurred within the data. In contrast, predictive analytics involves forecasting and statistical modeling to determine future possibilities, relying on supervised, unsupervised, and semi-supervised learning models. This method has proven effective for detecting and classifying questions to assess students’ knowledge levels and selecting appropriate resources tailored to individual students. The review identified several commonly used predictive AI methods in ALSs, including Bayesian networks, neural networks, decision trees, and support vector machines. These techniques play a critical role in enhancing the adaptability and effectiveness of learning experiences.
The data stored in the LMS databases and log files provide the opportunity to gain insights into students’ online behavior. Data generated from learning environments can be integrated into the system, aiding learning evaluation and monitoring [23]. This feedback allows learning material recommenders to be refined, enabling them to track student performance better and adapt to changes in both performance and learning preferences [24]. Activity records from LMS log data can provide a valid and valuable proxy measure of student engagement [25].
Moodle’s standard reporting tools provide information and filtering options [26]. However, they only offer raw data without meaningful insights into the teaching process. To obtain information about student performance and the factors influencing student success, tutors must perform statistical analysis and apply data mining techniques. Therefore, additional tools are essential to customize ALSs based on user behavior to perform effectively. Learning Analytics (LA) and Educational Data Mining (EDM) enable the exploration, visualization, and analysis of data from various sources within educational institutions. Studies have increasingly applied LA and EDM within adaptation models, primarily utilizing student data extracted from LMS logs. For instance, these logs are leveraged for predictive purposes in the works in [27,28]. Furthermore, predictive modelling and grouping techniques can identify at-risk learners, allowing for timely interventions and support within ALS frameworks, as noted in [29].
Numerous studies have been conducted to implement and assess various adaptivity methods in LMSs. Early studies such as [30] introduced GRAPPLE, an adaptive module, into the LMS using a service-oriented framework and applying static adaptation rules. While this approach is well established, there is still room for improvement, particularly in employing data-driven methods for dynamic evaluation of student achievements. Experimental studies are required to develop adequate adaptivity settings tailored to specific learning environments.
Wang et al. [31] investigated whether Squirrel AI Learning (a Chinese adaptive learning system) could address educational challenges in China. Some studies explore novel approaches to adaptivity through open LMSs. However, these often rely on learning styles or prior knowledge and may limit student interaction by grouping them at the onset of the learning process [32].
Classifying learners is a critical component of adaptive systems for managing learner profiles and monitoring learners’ progress. Learners are often categorized based on proficiency levels, learning styles, or risk status. Identifying at-risk learners who struggle with course content or face external challenges is crucial for designing targeted interventions. Adaptive systems use predictive analytics and machine learning to detect such learners and adjust their learning paths to provide additional support, such as remedial activities or motivational feedback. This approach helps bridge the gap between at-risk and proficient learners [33].
Course design plays a pivotal role in the success of ALSs, shaping how effectively these systems can respond to learners’ needs. Key elements of course design include the creation of learning paths, the establishment of clear learning outcomes, and the integration of adaptivity mechanisms. Learning paths are the individualized sequences of learning activities and resources that guide students toward achieving specific goals. Effective course design ensures these paths are flexible and align with individual learners’ characteristics, such as their prior knowledge, preferences, and pace of learning. For example, adaptive systems dynamically modify content delivery and assessments to ensure learners receive material matching their proficiency levels and learning objectives [34].
Course design can focus on achieving diverse learning outcomes, including retention, mastery of content, skill development, and critical thinking. Bloom’s taxonomy outlines six levels of cognitive skills: remembering, understanding, applying, analyzing, evaluating, and creating [35]. ALSs enhance these outcomes by continuously analyzing learner performance data and tailoring instructional methods accordingly. Frameworks like ALOSI integrate adaptive assessments and feedback to maximize learning gains and support learners in meeting their academic aims [13].
Effective feedback is a cornerstone of ALSs, enabling personalized instruction and improving learning outcomes. Feedback mechanisms help learners identify areas for improvement, reinforce strengths, and adjust their learning paths dynamically. It can be implemented as immediate feedback for real-time correction, formative feedback for ongoing guidance, summative feedback for an overall evaluation, and behavioral feedback to track engagement patterns. Machine learning-based modeling can provide real-time, individualized feedback on students’ skills, such as argumentation, significantly improving their performance across different domains and task complexities [36]. This approach outperforms traditional static methods and supports learners with varying levels of expertise.
Adaptive learning systems increasingly employ badges and time constraints to enhance engagement and motivation. Badges, a form of gamification, reward learners for completing specific tasks or achieving milestones, fostering a sense of accomplishment and encouraging continued participation [37]. Research shows that badges can positively influence learner behavior by providing external motivation and reinforcing progress through visual indicators of achievement [38,39]. They also serve as feedback mechanisms, offering learners a tangible representation of their efforts and successes.
Time constraints, on the other hand, introduce a structured sense of urgency, guiding learners to focus their attention and prioritize tasks. These constraints can enhance time management skills and maintain learner engagement by creating a clear framework for task completion. Studies indicate that time constraints, when designed appropriately, improve cognitive processing efficiency by reducing procrastination and fostering task-oriented behavior [40,41]. However, their effectiveness depends on thoughtful implementation that aligns with learners’ needs, avoiding stress or disengagement that may arise from overly rigid timelines.
Despite significant efforts to leverage AI in education, there is still insufficient evidence regarding its classroom impact. Previous studies often focus on adapting online courses or specific course segments, overlooking the challenges of implementing adaptation in formal education settings with accreditation-related constraints.
Demonstrating how adaptivity influences learners’ performance and engagement is crucial for informing educators’ instructional strategies. Experimental studies can uncover challenges associated with implementing adaptive learning systems, providing insights into what works well and areas that need improvement to maximize learners’ benefits.
In summary, empirical research is essential for validating the effectiveness of adaptive learning systems in real educational settings, ensuring their successful integration, and addressing associated challenges.

3. Materials and Methods

Our approach to developing the Adaptive Learning Framework was based on one fundamental principle: to use only the existing data stored in the faculty information system to create learners’ models and make adaptations. In this context, we did not consider additional pre-tests to classify students. This decision aligns with the assumption that students have similar knowledge levels, since course enrolment requires fulfilled prerequisites. However, it makes creating a learner model more complex, primarily because we decided to use only the available student data from the faculty LMS and the student information system. In this context, we performed several studies and applied extensive data mining procedures to identify the most effective techniques for learner profiling and predictive analysis.
We conducted the entire research using Moodle LMS and the faculty information system. The open-source Moodle LMS is our faculty’s primary LMS; it is advantageous because it enables easy customization and extension of its functionality.
To implement framework functionality, we used the R language (ver. 4.0.4) and RStudio IDE (ver. 1.1.447) [42]. We developed new scripts in R and integrated them with Moodle’s PHP-based framework. These scripts can also be adapted for other LMS platforms that support the integration of PHP code, or they can be translated into different programming languages and included in the LMS platforms.
This section describes the process of selecting features to develop learners’ models and classifiers for predictive analysis. It also explains the proposed framework and experimental setup used for testing.

3.1. Data Collection and Pre-Processing

When log files are used for data mining, preprocessing is more demanding because of their structure. A log file contains one row for each activity in the system, so for user-based analysis, the logs must be filtered and the data transformed into a data frame organized by user and user activity.
Two types of files were subject to analysis: Moodle log files and student score files. Log files are taken from Moodle in .csv format and contain all students’ activities from the courses. The student score file is a part of the student information system and includes the scores of the students for each activity within the course. Although Moodle provides features to keep the scores of the students and calculate the grades, we decided to aggregate the scores in an additional file, which is more efficient for maintaining grade calculation formulas, especially when there is a necessity to calculate points from external activities.
Several pre-processing steps are applied to the log files to retain only relevant and correct information. For this research, we used the following fields: Time, User full name, Affected user, Event context, Component, Event name, Description, Origin, and IP address. The first step was to convert all Cyrillic letters found in the original log files into Latin letters, as the R packages did not recognize them. The second step was to filter out logged actions that did not belong to the students’ activities. The required fields are then extracted, and duplicate records are removed. In the last pre-processing step, we generated a new userID column to be used instead of the user’s full name, ensuring data anonymity.
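A minimal R sketch of these pre-processing steps is given below. The export file name and the filtering rule on the Component field are illustrative assumptions, not the exact scripts used in the study; the stringi transliteration stands in for the Cyrillic-to-Latin conversion step.

library(stringi)

logs <- read.csv("moodle_log.csv", stringsAsFactors = FALSE)

# Keep only the fields used in the study (read.csv replaces spaces with dots)
fields <- c("Time", "User.full.name", "Affected.user", "Event.context",
            "Component", "Event.name", "Description", "Origin", "IP.address")
logs <- logs[, fields]

# Step 1: transliterate Cyrillic characters into Latin
logs[] <- lapply(logs, function(col) stri_trans_general(col, "Cyrillic-Latin"))

# Step 2: drop actions that do not belong to student activity (illustrative filter)
logs <- logs[logs$Component != "System", ]

# Step 3: remove duplicate records
logs <- logs[!duplicated(logs), ]

# Step 4: replace full names with anonymous identifiers
logs$userID <- as.integer(factor(logs$User.full.name))
logs$User.full.name <- NULL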

3.2. Modeling Techniques

3.2.1. Engagement and Study Habits Analysis

We performed data exploration and statistical analysis on the log data to discover the learners’ study habits and engagement levels. For the engagement analysis, we performed a correlation analysis of total course visits, quiz attempts, assignment submissions, attendance at online sessions, recorded lecture views, grade views, forum views, forum creation, and course material views with the course grade. The aim was to discover the most relevant features for representing student engagement. The analysis revealed that the number of course visits, assignment submissions, quiz attempts, online session attendance, and views of video recordings had the highest correlations with the course grade [43,44]. We therefore combined these as proxies of student engagement for adaptivity.
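A short R sketch of this correlation analysis is shown below. It assumes a hypothetical per-student data frame named engagement whose columns hold the activity counts and the final course grade; the column names are illustrative.

proxies <- c("course_visits", "assignment_submissions", "quiz_attempts",
             "session_attendance", "recording_views", "grade_views",
             "forum_views", "material_views")

# Pearson correlation of each activity count with the course grade
cor_with_grade <- sapply(proxies, function(v)
  cor(engagement[[v]], engagement$grade, use = "complete.obs"))

sort(cor_with_grade, decreasing = TRUE)   # highest-correlated proxies first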
We performed weekly analyses to discover the study habits as early as possible during the course period for learner modeling. Our analysis showed that a clear picture of learners’ study habits emerges after three weeks of the course, and the correlations between total course activities and grades tend to be stable from that point onward. Thus, it is possible to model students’ behavior starting from the third week [45]. This may differ for other courses.

3.2.2. Profiling Student Behavior

To predict students’ behavior, we investigated strategies for profiling students, focusing on two main approaches: classification and clustering. To assess student engagement, we analyzed their completed activities and scores for each activity. Our goal was to predict their grades and identify students at risk of failing.
We used data from previous years to develop classifiers that could predict current students’ outcomes. The process included training multiple classifiers and selecting the one with the best results. We applied classification algorithms, including Bayesian Network (BN), J48 Decision Tree (DT), and Support Vector Machine (SVM), and evaluated the predictions against current student data to measure accuracy. Among the algorithms tested, the J48 Decision Tree produced the best prediction results and was incorporated into the learner model [44]. However, this approach could be implemented only when prior student data was available.
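A sketch of this classification step using the RWeka interface to WEKA’s J48 algorithm is shown below. The feature names, the outcome label, and the data frame names (previous_years, current_students) are assumptions for illustration; RWeka also requires a local Java/WEKA installation.

library(RWeka)   # R interface to WEKA; provides J48()

# Train on labelled data from previous course editions
model_j48 <- J48(outcome ~ course_visits + quiz_attempts + assignment_submissions +
                   session_attendance + recording_views,
                 data = previous_years)

# Predict outcomes for the current cohort and compare against the observed labels
pred <- predict(model_j48, newdata = current_students)
table(Predicted = pred, Actual = current_students$outcome)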
When prior data was unavailable, classification became challenging. In such cases, we explored clustering methods to group students based on similarities. Using algorithms from the WEKA library, we tested expectation–maximization (EM), Simple K-means, X-means, and density-based clustering. The clustering results varied across algorithms. EM and X-means grouped users into three clusters, while the density-based clustering algorithm created two. With the K-means algorithm, the number of clusters had to be predefined, which limited its effectiveness. We experimented with different numbers of clusters and used the elbow method to determine the optimal value by observing where the sum of squared errors flattened. This occurred at six clusters but did not align well with our intended user groupings. Consequently, we concluded that K-means required additional analysis and was unsuitable for our purpose. EM demonstrated the best performance, effectively identifying inactive students and categorizing them into a distinct cluster [46].
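When no labels exist, a comparable EM clustering can be sketched in R with the mclust package, used here as a stand-in for the WEKA EM clusterer (it fits Gaussian mixtures via EM). The feature names and the current_students frame are illustrative assumptions.

library(mclust)

features <- current_students[, c("course_visits", "quiz_attempts",
                                 "assignment_submissions", "session_attendance")]

em_fit <- Mclust(features)        # number of clusters selected by BIC
table(em_fit$classification)      # cluster sizes; inactive students typically
                                  # collapse into a small, low-activity cluster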
Although overall classification proved valuable for making critical predictions about grades and failure risks, clustering allowed grouping students based on similarities in scenarios where class labels were unavailable. Both approaches provided complementary insights into student performance and engagement.

3.3. Intervention Design

The proposed adaptive course framework is embedded as a core component of the course information support system. When fully implemented, the framework will collect data and dynamically propose materials, activities, and assessments based on ongoing learner performance. It will also support at-risk learners by providing alternative options in place of low-scored formative assessments. The framework consists of three main parts: the Course Module, the Learner Module, and the Adaptation Module. Figure 1 presents the structure of the framework. The Course Module holds the course structure and presents the personalized learning paths of the learners. The Learner Module is responsible for modeling the learner’s behavior and informing the Adaptation Module for decision-making. The following subsections contain detailed descriptions of the system parts.

3.3.1. Course Module and Learning Path

The Course Module outlines the structure of a single course. In higher education, courses are typically organized into chapters arranged linearly as approved by the accreditation board. Each chapter is subdivided into Learning Objects (LOs), with chapter topics fixed while LOs can vary, determined by the teacher. The instructor selects LOs to deliver the required knowledge for each subject, enabling adaptation by managing educational materials and activities.
Figure 2 illustrates the data structure used to organize course elements. This is advantageous over a simple linear structure as it simplifies retrieving, adding, or omitting learning objects (LOs). LOs are categorized as Critical, Optional, or Recovery and annotated in red, blue, and green, respectively.
Critical LOs are mandatory for students, and failure to complete them prompts recovery LOs as substitutes. These activities follow predefined rules and restrictions, establishing sequential paths and necessary progression conditions.
Optional LOs allow students to choose activities and omit or revisit them as needed. One or more recovery LOs are designated for each critical LO to remediate students at risk of failing or with low success rates, address deficiencies, and effectively substitute missed critical LOs.
The course comprises two primary categories of LO: synchronous and asynchronous. Synchronous LOs are scheduled in the LMS according to the academic calendar without repetition options. Learners who miss or need to redo synchronous activities must substitute them with other activities. For instance, if students miss a live online session, they can watch the recorded lecture anytime and as frequently as necessary. Asynchronous LOs are scheduled according to the academic calendar or as required, based on student behavior, with access rules defined in the framework by the teacher at the beginning of the semester.
The learning path is an ordered set of activities a learner follows to complete a course, consisting of a Critical Learning Path (CLP) and a Personal Learning Path (PLP). The CLP includes the essential LOs required to pass the course, while the PLP, tailored to each learner, is a subset of the learning path that consists of the CLP and recovery LOs for missed critical LOs. The adaptation module generates the PLP for each student, with most PLPs aligning with the CLP. Figure 3 illustrates the relationship between the learning, personal, and critical learning paths. The teacher defines the rules and restrictions for all paths integrated into the course module.
Figure 4 shows the complete learning path for one course topic. The red activities are synchronous, the blue ones are optional, and the green ones are recovery activities. Arrows define the learning path: the red arrows show the critical path, and the green arrows indicate the possible personal paths for recovery.
Table 1 lists all types of Learning Objects and their properties implemented in our context. Every assessment LO corresponds to a group of learning outcomes representing cognitive levels, such as remembering, understanding, applying, analyzing, evaluating, and creating based on Bloom’s revised taxonomy [47].
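The following R sketch illustrates how a personal learning path can be derived from this course structure. The LO identifiers, the recovery mapping, and the example of a missed LO are hypothetical; the actual framework stores this structure in the Course Module.

# Course structure: each LO has a type and, for recovery LOs, the critical LO it replaces
los <- data.frame(
  lo_id    = c("Quiz1", "Classwork1", "ExtraReading1", "RecoveryQuiz1", "RecoveryHW1"),
  type     = c("critical", "critical", "optional", "recovery", "recovery"),
  recovers = c(NA, NA, NA, "Quiz1", "Classwork1"),
  stringsAsFactors = FALSE
)

clp    <- los$lo_id[los$type == "critical"]     # critical learning path
missed <- c("Quiz1")                            # missed critical LO, detected from the activity log

# Personal learning path: the CLP plus recovery LOs for missed critical LOs
recovery <- los$lo_id[!is.na(los$recovers) & los$recovers %in% missed]
plp <- unique(c(clp, recovery))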

3.3.2. Learner Module

The Learner Module collects and analyzes learners’ scores and log data to define their status, outputting the learner class, score vector, and engagement vector, as shown in Figure 5. Inputs include (1) current student activities, (2) score information, and (3) a classification tool. The Engagement Detection function processes log and score files, organizing student activity data into an Engagement Vector (EV) and assessment activities into a Score Vector (SV). If suitable log data from previous years is available, the module uses classification to predict the learner class; otherwise, it uses clustering based on the inputs. Learner classes are categorized into three groups: (1) failure risk or low engagement, (2) low grade or mid-engagement, and (3) high grade or high engagement. The module returns EV, SV, and learner class information to the Adaptation Module.
The learner module also sends the Score Vector and learner’s class to the course module as feedback to the students. The Score Vector is created by collecting points obtained through Moodle automated quizzes and assignments evaluated by tutors. All these results are aggregated in the students’ score file, which is divided into separate views according to activities. The views are always available on the Moodle course page and can be accessed by the provided links.
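A condensed R sketch of the Engagement Detection step is given below. The score file name, the has_prior_data flag, and the reuse of model_j48 from the earlier classification sketch are simplifying assumptions; in practice the training and prediction features must match.

# Engagement Vector: per-student counts of each tracked event type
ev <- as.data.frame.matrix(table(logs$userID, logs$Event.name))
ev$userID <- rownames(ev)

# Score Vector: points per assessment activity, read from the external score file
sv <- read.csv("student_scores.csv")

# Combined per-student feature frame (userID is the first column after the merge)
features <- merge(ev, sv, by = "userID")

# Learner class: classification when prior-year data exists, clustering otherwise
learner_class <- if (has_prior_data) {
  predict(model_j48, newdata = features)
} else {
  mclust::Mclust(features[, -1])$classification
}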

3.3.3. Adaptation Module

The adaptation module recommends adding LOs to learning paths for all students. Figure 6 shows the module and its interactions with the other modules. Initially (at the beginning of the course), the CLP is presented to all learners until sufficient data is collected to define learners’ models. The students’ CLP activities are recorded in active log files and analyzed by the Learner Module. The Adaptation Module then constructs the PLP using the Learner Module outputs (EV, SV, and learner class).
Before learner classes are established, the Adaptation Module checks the engagement vector and re-offers critical LOs if missing; otherwise, it offers the regular CLP. Once learner classes are generated, the Adaptation Module uses information from the Learner Module to recommend personalized LOs to be added to the PLP. The Adaptation Module evaluates EVs and SVs to identify missing or low-scored assessment activities and LOs. If a student has omitted LOs, the module prioritizes offering them first. For students who have completed all required activities, replacements for low-scored LOs are suggested to improve overall performance.
This process is repeated as long as the learner can collect enough points from the activities or until the course officially ends. The learners have the right to complete the course in a year.
Conditional rules govern the adaptation logic:
IF the engagement vector indicates a missed critical LO AND no learner class is assigned, THEN assign a recovery LO.
IF the score for a critical LO is below the threshold, THEN suggest an optional remedial LO.
IF the student achieves sufficient points from alternate paths, THEN mark the critical LO as completed.
This rule-based decision logic is complemented by J48 Decision Tree classifiers when historical data is available. In its absence, clustering via Expectation Maximization (EM) identifies low-engagement patterns to trigger adaptation.
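These rules can be expressed compactly as a function. The sketch below assumes named vectors ev (activity counts per LO) and sv (scores per LO) for one student, the los data frame from the earlier sketch extended with a max_pts column, and an illustrative 50% threshold; it is not the exact implementation.

recommend_los <- function(ev, sv, los, learner_class = NULL, threshold = 0.5) {
  clp    <- los$lo_id[los$type == "critical"]
  missed <- clp[ev[clp] == 0]                    # Rule 1: critical LOs never attempted

  # Before learner classes exist, simply re-offer the missed critical LOs
  if (is.null(learner_class)) return(missed)

  # Rule 2: critical LOs scored below the threshold
  low <- clp[sv[clp] < threshold * los$max_pts[match(clp, los$lo_id)]]

  # Recovery LOs substitute missed or low-scored critical LOs (Rule 3 marks a
  # critical LO as completed once enough points are collected from these paths)
  recovery <- los$lo_id[!is.na(los$recovers) & los$recovers %in% union(missed, low)]
  unique(c(missed, recovery))
}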

3.4. Experimental Setup

We introduced the framework for the class of 2020–2021 and applied it in each consecutive academic year. The framework was implemented in two different settings: the pre-COVID context and the COVID context. The pre-COVID setting considered the blended learning approach, with face-to-face lectures and everything else supported or realized in the LMS. In the COVID context, all activities were organized entirely online. Additionally, in each academic year, we applied different rules and restrictions to the assessment LOs to examine their impact on the students’ success and engagement levels.
The proposed framework is tested semi-automatically using Moodle LMS and the student information system. Instead of a single end-of-course examination, the continuous assessment approach (formative assessment) is implemented to measure the learners’ performance. We introduced additional formative assessments to granulate the process of tracking students’ performance. While Moodle facilitated automatic quiz evaluation, other tools required tutors to assess tasks and problems manually. Despite the potential of AI for monitoring student progress, it could not evaluate creative and task-specific practical skills.
Furthermore, although we could automate most of the data collection process, tutor interventions are still needed, especially in courses that use additional technology that is not interoperable with Moodle (e.g., Tableau). The system can suggest additional materials/assessment activities. Although the system has a repository, many materials and activities cannot be produced by the system itself (e.g., finding and selecting material from different resources, finding and selecting datasets for the tasks, etc.).

3.4.1. Participants

The study participants were students from the Faculty of Computer Science and Engineering at the University of St. Cyril and Methodius in Skopje. We tested the framework in the Visualization course. Previously, we had analyzed data from the User Interfaces and Visualization courses to show that LMS log data can be used to discover trends in learners’ behavior. These courses were selected because they contained various assessment methods and course activities and had large enrolments. Moodle LMS was also actively used in these courses even when education was not entirely online. The students are in the third and fourth years of their studies. All students enrolled in the courses participated in the research, and a detailed description of the course organization was provided each year. The study was approved by the Institutional Review Board (Ethics Committee) of International Balkan University (#314/10-1). Student data was anonymized, and participation followed institutional data use policies. As educational records were used in aggregate form for course improvement, explicit consent was not required per university guidelines.
Table 2 gives information regarding student groups and the learning context in the course Visualization. The student groups in which we implemented the adaptive framework in different settings are named groups A, B, and C.

3.4.2. Course Organization

The proposed framework was first implemented in the Visualization course during the 2020–2021 academic year. This implementation was made feasible by the shift to fully online education necessitated by the COVID-19 pandemic. Traditional face-to-face lectures were replaced by synchronous online sessions conducted through Moodle and BigBlueButton (BBB). These live sessions enabled students to receive real-time guidance and support during class activities. The remaining course components, which were already delivered online prior to the pandemic, continued in the same format without significant changes.
The Visualization course was offered entirely online for three consecutive academic years: 2020–2021, 2021–2022, and 2022–2023. Although most courses at the Faculty eventually returned to a blended learning format, the fully online delivery of the Visualization course remained in place. This was due to its effectiveness in maintaining equal conditions for synchronous learning among all students, regardless of their location, thereby demonstrating the continued suitability of the online approach for this particular subject.
The teacher defines the learning and critical learning paths, as well as the rules and restrictions for personal learning paths, each year. By changing the rules within the course over different years, we tested our framework from different perspectives. To enable adaptivity, we implemented a system of granting and calculating transferable points, where “nothing is lost, everything is transferable”. Table 3 contains the graded activities and the allotted points. Total points are calculated differently each year. For example, in one year, the total was the sum of points from all possible activities; in the next, some activities were grouped and summed, and the maximum number of points per group was limited. In any case, according to the faculty rules, a student passes the course when (s)he collects at least 50% of the possible points.
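This grading rule can be sketched as follows in R. The vectors points (points collected per activity), activity_group (the group each activity belongs to), and group_caps (the per-group maxima used in the years with score limits) are hypothetical names introduced for illustration.

# Sum the points within each activity group and apply the per-group caps
group_totals <- tapply(points, activity_group, sum)
group_totals <- pmin(group_totals, group_caps[names(group_totals)])

total   <- sum(group_totals)
maximum <- sum(group_caps)

# Faculty rule: a student passes with at least 50% of the possible points
passed <- total >= 0.5 * maximum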
Additionally, all students were allowed to see all materials and activities, so even if an activity was not recommended for some students, they could perform it at their discretion.

4. Results

We evaluated the adaptive framework’s impact on course completion rates, students’ course grades, and LMS engagement. Additionally, we analyzed the behavior of failing students to understand their reasons for failing.

4.1. Impact of Adaptive Framework on Course Success and Student Grades

We compared the student course grades and completion rates under non-adaptive and adaptive course settings with different course rules. Course completion rates are given in percentages of passing students during the semester and after the course completion. According to faculty rules, students have the right to complete a course in a year, even after the end of the 15-week semester period.
Table 4 compares average grades and the percentages of passing students during and after the semester under non-adaptive and adaptive course settings. The first group shows the data for the non-adaptive course settings, while the second, third, and fourth show the adaptive settings with different activities and grading rules.
Adaptivity in Group A was applied with stringent time limits but without repetition or score limits on each type of homework activity. Group B, in contrast, had time limits for homework activities, repetition limits for quizzes, and score limits for each activity type. As a time restriction, students were given a two-week submission window for each type of homework assignment. For Group C, the rules included removing lab exercises as an activity, limiting the number of quiz repetitions, and keeping the score limits from the previous year while removing the time constraints on all homework activities. Each activity could be finished or repeated at any time until the course’s official end.
Results show that the percentage of students who passed the course during the semester without postponement increased in the years when the framework was applied. This increase is substantial in Groups A and B and smaller in Group C. However, the failure rates were higher than in the previous year’s non-adaptive setting. Section 4.3 analyzes the behavioral patterns of the failing students for all groups to understand the possible causes of the increased failure rate.
We observed that the average course grade increased in the second year. After applying the score limitation rules, we saw that limiting the scores and setting a maximum number of points for the assessment activities resulted in lower grades. Omitting a complete set of activities for Group C also contributed to the decreased average grade and passing rate.
Figure 7 shows the distribution of the grades for each year. In the second year (Group A), more than half of the students scored the maximum grade of 10 because they could repeat the assessment LOs as often as they liked without score limits. As a result, many students repeated certain activity types until they collected enough points for the desired grade. In the third year (Group B) and fourth year (Group C), because of the limitation of points for each activity type, the most common grade was 6 (six, the lowest passing grade). A “there is still time” mentality is another reason for the grade decrease in the final year: many students postponed their tasks for an extended period, leaving no time to repeat the activities suggested by the framework.
Beyond tracking performance via raw scores, we evaluated outcomes by mapping assessments to Bloom’s taxonomy. Classwork and quizzes mainly targeted lower cognitive levels (remembering, understanding), while projects and lab homework addressed higher levels (applying, analyzing, creating). Notably, students in Adaptive Group B—who participated in all assessment types—demonstrated broader coverage across all cognitive domains. In contrast, Group C students focused predominantly on quizzes, leading to lower engagement in tasks involving synthesis and evaluation.

4.2. Students’ LMS Engagement

To measure the students’ engagement levels, we compared their average numbers of online activities, sorted by activity type, across the academic years. Table 5 shows the average number of online ungraded activities for each year. These averages were calculated by summing the activities performed by all students and dividing by the total number of students.
The Non-adaptive blended group (2019–2020), with only 45 students, showed lower engagement levels in ungraded activities, such as LMS course visits (414.91) and course views (102.21). This group operated with time limits; repetitions were allowed only during exam periods. These constraints likely limited opportunities for interaction, resulting in lower engagement than the adaptive groups. In contrast, Adaptive Group B (2021–2022), which had the highest number of students (306), demonstrated the most substantial improvement in LMS engagement metrics, including 619.36 LMS visits and 173.83 course views. This group operated under stricter rules, including time limits, repetition limits on quizzes, and score limitations, which likely provided a structured environment that fostered consistent engagement.
Adaptive Group A (2020–2021), which had no repetition limits and allowed students to score all points, exhibited moderate LMS visits (442.36) and course views (118.28). Removing score restrictions allowed them to collect points from different activities and reduced the urgency to engage frequently with the platform. On the other hand, Adaptive Group C (2022–2023), which had no time limits but included repetition and score limitations, had the lowest LMS visits (369.75) and course views (94.78) among adaptive groups. Despite the flexibility of no time limits, the rules imposed on repetition and scoring may not have been sufficient to drive engagement consistently across activities, indicating that a balance between flexibility and structure is critical.
Attendance at live sessions showed notable variation among groups. In 2021–2022, in response to insights regarding the influence of classwork submission and live session attendance on student performance, we made strategic adjustments to the course settings to foster active participation. We implemented a system of badges to incentivize students to engage in the live lecture sessions and complete the assigned classwork. Specifically, students could receive a badge only if they attended the live lecture and submitted the designated classwork. Live session attendance and all other online activities increased in Group B. With the badge application, the classwork submission rate rose from 73% to 80% when the students with no actions in the course were excluded. Adaptive Group C achieved the highest attendance rate (22.05), likely driven by the removal of time limits, which allowed students to participate more flexibly in live sessions. However, this group also had the lowest recording views (7.99), indicating a trade-off between live attendance and asynchronous review. By comparison, Adaptive Group B balanced live session attendance (19.79) and recording views (12.54), reflecting a structured yet flexible approach that encouraged participation in both formats.
It is important to note that the observed increase in live session attendance is not attributed to the adaptive framework. Instead, this was primarily driven by the intentional changes made to the course organization. Using badges as incentives for desired behaviors, we successfully encouraged students to take a more proactive approach to their learning experience.
Table 6 shows the average number of graded activities for each group with their standard deviations to show the variations, while Figure 8 illustrates the mean scores from each activity type.
Participation in graded activities also varied significantly between groups. The Non-adaptive group exhibited strong engagement in traditional assessments such as homework, with 10.73 submissions, the highest among all groups. However, the lack of lab exercises due to the teaching style limited the diversity of assessment types available. Adaptive groups benefited from introducing lab-related tasks, particularly Adaptive Group B, which showed the highest participation in both LabClasswork (7.47) and LabHomework (7.93). These results suggest that including lab exercises in adaptive frameworks encouraged greater engagement.
Adaptive Group C demonstrated the highest quiz submissions (8.16). However, their participation in other graded activities, such as classwork (5.37) and homework (7.84), was comparatively lower, suggesting that restrictions on repetitions and scoring alone may not have been sufficient to drive consistent engagement across all activity types. Adaptive Group A, with no repetition limits and all points scored, showed slightly lower participation in quizzes (7.13) than Group B, reinforcing the idea that less structured rules may reduce motivation for consistent interaction. In Group C, many students failed to accomplish even the activities that belong to CLP, decreasing the number of assignments submitted. Moreover, the increase in attendance at live lectures coincided with a decrease in recorded views, indicating a shift in student engagement patterns. Notably, lab classwork was omitted in that particular year to alleviate students’ workload, contributing to decreased total activities and grades.
The differences in teaching styles and rules across groups significantly influenced engagement. The non-adaptive group (2019–2020), with strict time limits and limited repetition opportunities, exhibited lower engagement in both ungraded and graded activities. Adaptive Group A (2020–2021) implemented no repetition limits and scored all points, which fostered moderate engagement but lacked the structure to achieve consistent activity levels. Adaptive Group B (2021–2022), with stricter time and repetition limits and score limitations, showed the highest overall engagement across both ungraded and graded activities. These structured rules created a clear framework for participation and accountability, driving optimal results. Despite having no time limits, adaptive Group C (2022–2023) struggled to maintain consistent engagement, as the flexibility may have reduced the urgency for interaction in certain activities. This highlights the importance of tailoring adaptive frameworks to balance flexibility and structure based on the cohort’s needs.
Project submission is the least performed activity each year, as it is the largest task of all assessment types. Given the organization of the course in the first two years, students could score high grades without performing this task, so they chose the shortest and least demanding types of activities. In the third year, with limited points from each activity type, project submissions reached 30%, the peak value for all years.
We noticed that the number of activities decreased last year, and 28 students (13%) dropped out without performing any course activity. The drop-out rate and reasons behind it should be investigated further.
The standard deviations in Table 6 highlight the variability in student participation across different LMS activities, offering insights into how consistently students engaged within each group. In the Non-Adaptive Group, standard deviations were relatively low across most categories, such as Quiz (SD = 1.24) and Lab Classwork (SD = 1.08). This indicates that students in this group participated more uniformly in these activities, likely due to the structured and predictable nature of traditional learning methods. However, higher variability in Classwork (SD = 3.07) and Homework (SD = 2.76) suggests that some students were less engaged or struggled more with these areas than others.
For Adaptive Group A, variability was slightly higher overall, as reflected in the total standard deviation of 8.10. This indicates that the adaptive learning strategies encouraged more diverse student engagement levels. Participation in Quiz (SD = 1.43) was relatively consistent, showing that quizzes might have been universally accessible or straightforward. However, categories like Lab Classwork (SD = 2.87) and Lab Homework (SD = 2.51) showed moderate variability, suggesting that while some students fully embraced these practical components, others participated less frequently.
Adaptive Group B had the highest overall participation but also exhibited significant variability, with a total standard deviation of 10.08. This reflects that the adaptive learning strategies were not equally effective for all students in this group. While many students engaged heavily, others completed significantly fewer activities. Variability was particularly noticeable in Lab Classwork (SD = 3.35) and Classwork (SD = 3.14), where engagement varied widely. However, participation in Quiz (SD = 1.41) was relatively consistent, suggesting that quizzes provided a more level playing field for this group.
Finally, Adaptive Group C exhibited the highest variability overall, with a total standard deviation of 11.71. This group showed consistent participation in Quiz (SD = 2.17), but variability increased in other categories, such as Classwork (SD = 3.58) and Homework (SD = 3.67). The high variability indicates that some students in this group were highly motivated by adaptive methods, while others were less engaged, leading to uneven participation. Notably, the absence of data for Lab Homework likely impacted the total variability for this group.
In summary, the standard deviations reveal that adaptive learning strategies increased overall engagement and introduced more significant variability across students. While some students thrived in these environments, others struggled to keep pace, leading to broader participation gaps. In contrast, the Non-Adaptive Group showed more uniform participation. This suggests that adaptive methods can enhance engagement but require additional support to ensure consistent benefits across all students.
The results demonstrate the varying impacts of adaptive frameworks and teaching styles on student engagement and participation. By comparing Non-adaptive and Adaptive groups across ungraded and graded activities, it becomes clear that introducing adaptive strategies significantly influenced behavior. However, effectiveness depended heavily on the specific rules and teaching methods.

4.3. Behavioral Patterns of Failing Students

We examined the learning behavior of the failing students. Table 7 illustrates the failure and dropout rates observed across each academic year. A significant contributing factor to the decline in course completion rates is the proportion of students who failed to engage in any course activities throughout the year. Notably, the failure rate peaked at 36% in the most recent year, marking the lowest completion rate recorded. When we checked the engagement levels of failed students, we noticed that some failed students dropped out without performing any course activities.
To assess statistical significance in dropout trends, we conducted a chi-square test comparing dropout frequencies across the four groups (non-adaptive vs. adaptive A, B, C). The results indicated a significant difference (χ2 = 12.47, p < 0.01), confirming that course design had a measurable impact on failure rates.
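The test can be reproduced with base R’s chisq.test on a 2 × 4 contingency table of dropped versus retained students per group. The counts below are placeholders, not the study data; the per-group counts from Table 7 would be used in practice.

dropout_table <- rbind(
  dropped  = c(NonAdaptive = 5,  GroupA = 5,   GroupB = 15,  GroupC = 30),   # placeholder counts
  retained = c(NonAdaptive = 40, GroupA = 150, GroupB = 290, GroupC = 185)
)

chisq.test(dropout_table)   # reports the chi-square statistic and p-value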
Figure 9 shows the number of student drop-outs (bars) and drop-out percentage (red line) for each academic year. This dual-axis chart helps compare both the absolute and relative scale of drop-outs over time.
Table 8 shows the analysis of failed students in Group A. Four of the 19 failed students dropped out without performing any activity. The remaining students usually participated poorly in all activities and did not follow the offered recovery LOs. Also, one significant common point with the failed students is that they did not submit any classwork or attend lecture sessions; they only performed some asynchronous activities. On the other hand, passing students were regular in classwork, live sessions, and other activities.
When we analyzed the failed students in Group B, 15 (5%) drop-outs did not participate in any course activity. Students who passed the course were quite regular with the classwork. Only four students passed without participation in classwork, but they compensated for this gap by submitting a project or recovery homework assignments.
To understand the trends in learner behavior for the last year (Group C), we performed an EM cluster analysis based on the students’ lab work, classwork, homework, and project submissions without considering their grades. The EM algorithm resulted in three clusters. After clustering, we checked the grades and their distribution. Cluster0 includes the students who mostly repeated the activities, Cluster1 consists of the students with one or fewer activities, and Cluster2 contains the students who did not perform any activities. Table 9 shows the grade means and standard deviations of each cluster. Table 10 illustrates the lab work submission rates as an example. Across all activity types, the maximum number of repetitions recorded was 3, accounting for only 1% of the total submissions, while the minimum was 0. This result aligns with the limitation that restricts the maximum number of repetitions to three attempts.

5. Discussion

This study provides insights into implementing adaptive learning in real-world formal education settings. It shows that adaptation/personalization can be integrated even in formal education, where course contents are rigid and constrained by accreditation bodies. The study also emphasizes the importance and effect of course rules, such as time constraints and score limitations for certain activities, in adaptive settings.
Many studies in the literature consider learning styles and prior knowledge for adaptation. In such cases, learners are classified before the course, and this classification does not change over time. Our study shows that adaptation is possible with dynamic data about the learner, such as learner preferences and performance. The course used in the study is designed to assess students’ learning at the end of each topic through quizzes, classwork, lab work, and homework activities, and adaptation is performed accordingly.
Initially, the adaptive framework allowed students to follow the course with time restrictions but without score limitations, allowing them to choose activity types and repeat them as many times as needed. This setup increased engagement levels and average course grades, with more than half of the students achieving the highest grade. However, students primarily focused on more straightforward tasks, such as quizzes, while neglecting more complex activities, such as homework assignments or projects. While this approach improved grades, it limited students’ opportunities to achieve higher learning outcomes aligned with Bloom’s taxonomy. Given that engineering education emphasizes practical over reproductive knowledge, adaptive systems should encourage students to apply their knowledge to real-world problems and aim for higher cognitive outcomes. However, part of the problem may lie in our educational system, which allows students to pass the course with only 50% of the points.
When we clustered the students based on their LO submissions without considering their scores, the students who repeated the activities fell into the cluster with higher grades, whereas the failing students made no effort to use the opportunity to repeat the recommended activities. This shows that students who followed the recovery recommendations and the offered learning path passed the course and scored better. Students’ success therefore depended on how many of the recommendations they accepted: if students do not perform the recommended activities, there is no way to assess their learning and improve course success.
Throughout the years of implementing the adaptive framework, we observed that students who dropped out were typically those who had not engaged in any activities; notably, there were no instances of students participating in multiple sessions and then leaving. Several explanations exist for why some students performed no activity at all, one being incorrect enrollment: students were automatically added to Moodle when they enrolled but were not removed if they later changed courses. For students who actively participated in the course, however, we observed interesting behavioral patterns.
In the first year, Group A was subjected to the most rigorous set of rules and deadlines and the largest workload (43 activities, not counting the tasks provided by the adaptation module). Each student could repeat an activity if they were unsatisfied with the points, and all activities were visible to all students. Surprisingly, the increased workload and strict rules did not raise dropout rates; on the contrary, they resulted in minimal dropout and improved course grades and passing rates during the semester.
When badges were introduced in Group B to promote online session attendance and classwork, they increased overall engagement but not course success. In subsequent years, we reduced or modified the activities and rules to alleviate the student burden: we trimmed the activity set, reducing the total number of activities to 34, introduced more flexible deadlines, and kept strict rules only for repeating quizzes. Again, all activities were offered to each student. However, this led to an increase in failure rates and dropouts.
These findings highlight the importance of balancing flexibility with structured rules in adaptive learning environments. While effective adaptive strategies, such as those implemented in 2020–2021 and 2021–2022, can drive engagement and success, overly flexible approaches, such as those seen in 2022–2023, may lead to disengagement and lower completion rates. Adaptive frameworks must include clear guidelines, motivational mechanisms, and targeted support for complex tasks like projects to optimize participation and learning outcomes. Continuous monitoring and iterative refinement are essential to maintaining long-term success.
This study also raises important questions about why reduced workloads and greater flexibility resulted in lower student success and higher dropout rates. While behavioral patterns were observed, more systematic research is needed to understand these outcomes fully.
In this study, we implemented the Adaptive Course Framework in a semi-automated form. Because Moodle is built on PHP, the complete framework can be embedded directly into the Moodle LMS and, in the future, fully automated. The R scripts created for analysis and knowledge discovery can be integrated into the PHP code without a separate interface.
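As an illustration of how this handoff could work, the sketch below shows an R script that a scheduled Moodle task could run with Rscript; it writes recovery recommendations to a CSV file that the PHP side can then import. File names, column names, and the threshold are assumptions for illustration, not the production code.

```r
# Minimal sketch of the R-to-Moodle handoff (assumed names and threshold).
# In production, `scores` would be filled from the Moodle grade/log export.
scores <- data.frame(
  student_id = c(101, 101, 102, 103),
  topic      = c("T3", "T4", "T3", "T4"),
  activity   = c("quiz", "homework", "homework", "quiz"),
  score      = c(45, 80, 30, 55)
)

threshold <- 60  # assumed mastery threshold (percent)

# Topic/activity pairs below the threshold become candidates for recovery LOs.
weak <- subset(scores, score < threshold)
weak$recommended_lo <- paste0("recovery_", weak$activity, "_", weak$topic)

# The PHP side reads this file and attaches the recovery LOs to the learning path.
write.csv(weak[, c("student_id", "topic", "recommended_lo")],
          "recommendations.csv", row.names = FALSE)
```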
To advance toward full automation, the following steps are envisioned:
  • Integrate AI-based content recommenders that auto-suggest resources based on learner profiles.
  • Automate grading using NLP and rubrics for open-ended tasks.
  • Embed dashboards to visualize EV/SV trends for tutors (a minimal sketch follows below).
  • Develop autonomous decision engines capable of adjusting course rules in real time.
Our next pilot will test these features in tandem to reduce instructor workload while preserving instructional quality.
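As a simple illustration of the dashboard item above, the sketch below plots weekly per-student activity counts as a proxy for engagement-vector trends. The data are synthetic; in the envisioned integration they would come from the Moodle logs.

```r
# Minimal dashboard sketch (synthetic data): weekly activity counts per student.
library(ggplot2)

set.seed(2)
logs <- expand.grid(student = paste0("S", 1:5), week = 1:12)
logs$events <- rpois(nrow(logs), lambda = 15 - logs$week)  # synthetic declining engagement

ggplot(logs, aes(x = week, y = events, colour = student)) +
  geom_line() +
  labs(title = "Weekly engagement per student (synthetic data)",
       x = "Week of semester", y = "Logged events")
```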
Our results reinforce the findings in [8] that predictive analytics can identify at-risk learners and guide interventions. The adaptive logic implemented here aligns with ALOSI’s emphasis on assessment-driven remediation [13], extending it with longitudinal, real-world validation.
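For context, a minimal sketch of such an early-warning model is given below. It trains a decision tree (rpart) on synthetic early-semester activity counts with illustrative feature names; it is not the predictive model from [8] or from our earlier prediction studies.

```r
# Minimal sketch (synthetic data): flagging at-risk learners from activity counts.
library(rpart)

set.seed(3)
n <- 200
activity <- data.frame(
  quizzes   = rpois(n, 4),
  classwork = rpois(n, 3),
  homework  = rpois(n, 3)
)
# Synthetic outcome: fewer submissions imply a higher failure probability.
p_fail <- plogis(2 - 0.3 * rowSums(activity))
activity$outcome <- factor(rbinom(n, 1, p_fail), labels = c("pass", "fail"))

tree <- rpart(outcome ~ quizzes + classwork + homework,
              data = activity, method = "class")

# Students predicted to fail early in the semester would be offered recovery LOs.
head(predict(tree, activity, type = "class"), 10)
```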
While our framework is embedded in Moodle, it differs from initiatives like ALOSI (Harvard/Microsoft) and Squirrel AI (China). ALOSI emphasizes structured assessment remediation and learning analytics dashboards but lacks direct integration with student information systems. Squirrel AI uses fine-grained concept mastery detection, but its commercial nature limits customizability. Our model’s strength lies in its open-source integration, fine-tuned control over course logic, and adaptability in formal education environments.

6. Conclusions

This study introduced and evaluated a generic, scalable adaptive course framework integrated within a standard Learning Management System (LMS). Its goal was to personalize learning in formal higher education without relying on rigid models of learning styles. Through its implementation in a real-world course over multiple academic years, the framework demonstrated that adaptive learning environments can significantly improve engagement and academic outcomes when designed with thoughtful course structures and flexible assessment policies.
The results show that the success of the adaptive framework depends heavily on course design elements such as time constraints, score flexibility, and the use of incentives. Allowing students to complete assessments at their own pace—particularly in the absence of rigid grading deadlines—was found to enhance motivation and overall participation. Conversely, reintroducing stricter assessment rules led to a return to less engaged learning behaviors. These findings underscore the dynamic relationship between course policies and learner behavior.
While one group benefited from a structured yet flexible approach (Group B), another with broader autonomy (Group C) showed increased live session attendance but reduced consistency in overall course engagement. This contrast reveals that adaptivity must be balanced with guidance to achieve optimal outcomes. Moreover, it highlights the critical role of teaching strategies, well-designed assessment diversity, and clear expectations in shaping learning behaviors.
Course design, therefore, must be viewed as an evolving process rather than a fixed structure. While learning analytics and AI can automate aspects of course delivery and adaptation, instructors remain central to interpreting outcomes, designing creative assessments, and setting pedagogical priorities. This human–machine collaboration is vital, particularly in assessing higher-order learning outcomes that require judgment, creativity, and critical thinking.

6.1. Limitations and Future Work

This study focused on a single course in one university context, which limits the generalizability of its findings. Although previous applications of the framework (e.g., in the Human–Computer Interaction course; [43]) provide support for its replicability, broader implementation across multiple disciplines is needed. The framework is most effective in courses designed with continuous assessment and modular learning objectives—yet such design may face resistance from instructors more accustomed to traditional course structures.
Another challenge concerns scalability. Instructor workload remains a significant barrier, particularly when it comes to grading open-ended tasks and managing learning object (LO) recommendations. Semi-automated dashboards, predictive alerts, and feedback templates could ease this burden, but human judgment will remain essential for evaluating complex learning artifacts.
The current framework offers limited support for collaborative learning. Future iterations should incorporate adaptive peer tasks and dynamic group assignments. Engagement metrics from discussion forums could feed into learner models, enabling the system to recommend social learning activities when individual engagement declines.
Future studies should also employ experimental control groups to isolate the effect of the adaptive framework from external factors like the pandemic. Structured surveys and interviews would deepen our understanding of learner satisfaction, dropout patterns, and the broader impact of adaptive interventions. Furthermore, the framework’s domain-agnostic structure allows it to be extended into humanities, business, and other disciplines using essays, discussions, or case-based LOs.

6.2. Key Contributions and Takeaways

  • Developed and tested a scalable adaptive learning framework integrated into a standard LMS without relying on predefined learning styles.
  • Demonstrated that structured flexibility in assessment design increases engagement and improves learning outcomes.
  • Highlighted the importance of course policies (e.g., time limits, score flexibility) in influencing learner behavior and motivation.
  • Proposed future extensions for broader disciplinary application, improved collaborative features, and instructor-support tools to enhance scalability and adoption.

Author Contributions

Conceptualization, N.A. and S.L.; methodology, N.A.; software, N.A.; validation, N.A. and S.L.; formal analysis, N.A.; investigation, N.A. and S.L.; resources, S.L.; data curation, N.A. and S.L.; writing—original draft preparation, N.A.; writing—review and editing, N.A. and S.L.; visualization, N.A.; supervision, S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Research and Academic Ethics Committee of International Balkan University (314/10-1, 20 May 2025).

Informed Consent Statement

Formal written consent was not obtained for this study, as it was conducted within the normal scope of instructional activity in a university course. Students were fully informed of the course structure, including the use of adaptive technologies, assessment mechanisms, and engagement tracking, from the beginning of each academic term. Participation in all course activities was a standard component of their academic program and carried no additional risk beyond regular coursework. The data analyzed were collected as part of routine educational processes and were anonymized prior to analysis to ensure the protection of student identity and privacy. No interventions outside the established curriculum were introduced, and no individually identifiable data were reported. In accordance with institutional guidelines and ethical standards for educational research, explicit written consent was deemed unnecessary due to the non-invasive nature of the study and the transparency of course policies.

Data Availability Statement

The anonymized datasets generated during and/or analyzed during the study are available from the corresponding author upon reasonable request.

Acknowledgments

During the preparation of this work, the authors used Grammarly and ChatGPT 4o for proofreading and text improvement, respectively. After using these tools, the authors reviewed and edited the content as needed and take full responsibility for the publication’s content.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
ALS: Adaptive Learning Systems
BBB: Big Blue Button
CLP: Critical Learning Path
EDM: Educational Data Mining
EV: Engagement Vector
LA: Learning Analytics
LMS: Learning Management Systems
LO: Learning Object
PLP: Personal Learning Path
SV: Score Vector

References

  1. Luna-Urquizo, J. Learning Management System Personalization Based on Multi-Attribute Decision Making Techniques and Intuitionistic Fuzzy Numbers. Int. J. Adv. Comput. Sci. Appl. 2019, 10, 669–676. [Google Scholar] [CrossRef]
  2. Ean Heng, L.; Pei Voon, W.; Jalil, N.A.; Lee Kwun, C.; Chee Chieh, T.; Fatiha Subri, N. Personalization of Learning Content in Learning Management System. In ICSCA ’21, Proceedings of the 2021 10th International Conference on Software and Computer Applications, Kuala Lumpur, Malaysia, 23–26 February 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 219–223. [Google Scholar] [CrossRef]
  3. Ahmad, S.F.; Alam, M.M.; Rahmat, M.K.; Mubarik, M.S.; Hyder, S.I. Academic and Administrative Role of Artificial Intelligence in Education. Sustainability 2022, 14, 1101. [Google Scholar] [CrossRef]
  4. Xie, H.; Chu, H.C.; Hwang, G.J.; Wang, C.C. Trends and Development in Technology-Enhanced Adaptive/Personalized Learning: A Systematic Review of Journal Publications from 2007 to 2017. Comput. Educ. 2019, 140, 103599. [Google Scholar] [CrossRef]
  5. Wu, C.H.; Chen, Y.S.; Chen, T.C. An Adaptive E-Learning System for Enhancing Learning Performance: Based on Dynamic Scaffolding Theory. EURASIA J. Math. Sci. Technol. Educ. 2017, 14, 903–913. [Google Scholar] [CrossRef]
  6. McCarthy, K.S.; Watanabe, M.; Dai, J.; McNamara, D.S. Personalized Learning in ISTART: Past Modifications and Future Design. J. Res. Technol. Educ. 2020, 52, 301–321. [Google Scholar] [CrossRef]
  7. Jing, Y.; Zhao, L.; Zhu, K.; Wang, H.; Wang, C.; Xia, Q. Research Landscape of Adaptive Learning in Education: A Bibliometric Study on Research Publications from 2000 to 2022. Sustainability 2023, 15, 3115. [Google Scholar] [CrossRef]
  8. Kabudi, T.; Pappas, I.; Olsen, D.H. AI-Enabled Adaptive Learning Systems: A Systematic Mapping of the Literature. Comput. Educ. Artif. Intell. 2021, 2, 100017. [Google Scholar] [CrossRef]
  9. Dziuban, C.; Moskal, P.; Johnson, C.; Evans, D. Adaptive Learning: A Tale of Two Contexts. Curr. Issues Emerg. Elearn. 2017, 4, 3. [Google Scholar]
  10. Brown, A.H.; Green, T.D. The Essentials of Instructional Design: Connecting Fundamental Principles with Process and Practice. In The Essentials of Instructional Design: Connecting Fundamental Principles with Process and Practice; Routledge: Abingdon-on-Thames, UK, 2019; pp. 1–274. [Google Scholar] [CrossRef]
  11. Castro, R. Blended Learning in Higher Education: Trends and Capabilities. Educ. Inf. Technol. 2019, 24, 2523–2546. [Google Scholar] [CrossRef]
  12. Faghir Ganji, M.; Jafari Malvajerd, A.; Moradi, A.; Amanollahi, A.; Ansari-Moghaddam, A.; Basir Ghafouri, H. Teachers and Managers Experiences of Virtual Learning during the COVID-19 Pandemic: A Qualitative Study. Heliyon 2024, 10, e24118. [Google Scholar] [CrossRef] [PubMed]
  13. Rosen, Y.; Rushkin, I.; Rubin, R.; Munson, L.; Ang, A.; Weber, G.; Lopez, G.; Tingley, D. Adaptive Learning Open Source Initiative for MOOC Experimentation. In Artificial Intelligence in Education, Proceedings of the 19th International Conference, AIED 2018, London, UK, 27–30 June 2018; Springer: Berlin/Heidelberg, Germany, 2018; Volume 10948, pp. 307–311. [Google Scholar] [CrossRef]
  14. Zhao, S.; Hai, G.; Ma, H. Adaptive Learning Systems: Exploring Personalized Paths in Vocational Education. Curric. Learn. Explor. 2024, 2. [Google Scholar] [CrossRef]
  15. Liu, X.; Du, Y.; Sun, F.; Zhai, L. Design of Adaptive Learning System Based on Big Data. In ICIE ’17, Proceedings of the 6th International Conference on Information Engineering, Dalian, China, 17–18 August 2017; Association for Computing Machinery: New York, NY, USA, 2017. [Google Scholar] [CrossRef]
  16. Graf, A. Exploring the Role of Personalization in Adaptive Learning Environments. Int. J. Softw. Eng. Comput. Sci. IJSECS 2023, 3, 50–56. [Google Scholar] [CrossRef]
  17. Sabeima, M.; Lamolle, M.; Nanne, M.F. Towards Personalized Adaptive Learning in E-Learning Recommender Systems. Int. J. Adv. Comput. Sci. Appl. 2022, 13, 14–20. [Google Scholar] [CrossRef]
  18. Diaz, F.S.; Rubilar, T.P.; Figueroa, C.C.; Silva, R.M. An Adaptive E-Learning Platform with VARK Learning Styles to Support the Learning of Object Orientation. In Proceedings of the 2018 IEEE World Engineering Education Conference (EDUNINE), Buenos Aires, Argentina, 11–14 March 2018; pp. 1–6. [Google Scholar]
  19. Nafea, S.M.; Siewe, F.; He, Y. An Adaptive Learning Ontological Framework Based on Learning Styles and Teaching Strategies. In Proceedings of the 85th ISERD International Conference, Cairo, Egypt, 11–12 September 2017; pp. 11–12. [Google Scholar]
  20. Rasheed, F.; Wahid, A. Learning Style Recognition: A Neural Network Approach. In First International Conference on Artificial Intelligence and Cognitive Computing; Advances in Intelligent Systems and Computing; Springer: Berlin/Heidelberg, Germany, 2019; pp. 301–312. [Google Scholar]
  21. El-Sabagh, H.A. Adaptive E-Learning Environment Based on Learning Styles and Its Impact on Development Students’ Engagement. Int. J. Educ. Technol. High. Educ. 2021, 18, 53. [Google Scholar] [CrossRef]
  22. Afini Normadhi, N.B.; Shuib, L.; Md Nasir, H.N.; Bimba, A.; Idris, N.; Balakrishnan, V. Identification of Personal Traits in Adaptive Learning Environment: Systematic Literature Review. Comput. Educ. 2019, 130, 168–190. [Google Scholar] [CrossRef]
  23. Sachan, D.; Saroha, K. A Review of Adaptive and Intelligent Online Learning Systems. Lect. Notes Netw. Syst. 2022, 314, 251–262. [Google Scholar] [CrossRef]
  24. Motz, B.; Quick, J.; Schroeder, N.; Zook, J.; Gunkel, M. The Validity and Utility of Activity Logs as a Measure of Student Engagement. In LAK19, Proceedings of the 9th International Conference on Learning Analytics & Knowledge, Tempe, AZ, USA, 4–8 March 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 300–309. [Google Scholar] [CrossRef]
  25. Henrie, C.R.; Bodily, R.; Larsen, R.; Graham, C.R. Exploring the Potential of LMS Log Data as a Proxy Measure of Student Engagement. J. Comput. High. Educ. 2018, 30, 344–362. [Google Scholar] [CrossRef]
  26. Conde, M.Á.; Hérnandez-García, Á.; García-Peñalvo, F.J.; Séin-Echaluce, M.L. Exploring Student Interactions: Learning Analytics Tools for Student Tracking. Lect. Notes Comput. Sci. Incl. Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinform. 2015, 9192, 50–61. [Google Scholar] [CrossRef]
  27. Figueira, Á. Mining Moodle Logs for Grade Prediction: A Methodology Walk-Through. In Proceedings of the 5th International Conference on Technological Ecosystems for Enhancing Multiculturality, Cádiz, Spain, 18–20 October 2017; pp. 1–8. [Google Scholar]
  28. Käser, T.; Hallinen, N.R.; Schwartz, D.L. Modeling Exploration Strategies to Predict Student Performance within a Learning Environment and Beyond. In LAK ’17, Proceedings of the Seventh International Learning Analytics & Knowledge Conference, Vancouver, BC, Canada, 13–17 March 2017; Association for Computing Machinery: New York, NY, USA, 2017; pp. 31–40. [Google Scholar] [CrossRef]
  29. Gupta, P.; Kulkarni, T.; Toksha, B. AI-Based Predictive Models for Adaptive Learning Systems. In Artificial Intelligence in Higher Education; CRC Press: Boca Raton, FL, USA, 2022; pp. 113–136. [Google Scholar] [CrossRef]
  30. De Bra, P.; Smits, D.; van der Sluijs, K.; Cristea, A.I.; Foss, J.; Glahn, C.; Steiner, C.M. GRAPPLE: Learning Management Systems Meet Adaptive Learning Environments. In Intelligent and Adaptive Educational-Learning Systems; Springer: Berlin/Heidelberg, Germany, 2013; pp. 133–160. [Google Scholar]
  31. Wang, S.; Christensen, C.; Cui, W.; Tong, R.; Yarnall, L.; Shear, L.; Feng, M. When Adaptive Learning Is Effective Learning: Comparison of an Adaptive Learning System to Teacher-Led Instruction. Interact. Learn. Environ. 2023, 31, 793–803. [Google Scholar] [CrossRef]
  32. Arsovic, B.; Stefanovic, N. E-Learning Based on the Adaptive Learning Model: Case Study in Serbia. Sadhana—Acad. Proc. Eng. Sci. 2020, 45, 266. [Google Scholar] [CrossRef]
  33. Alsadoon, E. The Impact of an Adaptive E-Course on Students’ Achievements Based on the Students’ Prior Knowledge. Educ. Inf. Technol. 2020, 25, 3541–3551. [Google Scholar] [CrossRef]
  34. Shamsutdinova, T.M. Formation of Individual Educational Trajectory in Adaptive Learning Management Systems. Open Educ. 2021, 25, 36–44. [Google Scholar] [CrossRef]
  35. Bloom, B.S. Taxonomy of Educational Objectives Book 1: Cognitive Domain; Addison-Wesley Longman Ltd.: Boston, MA, USA, 1984. [Google Scholar]
  36. Wambsganss, T.; Janson, A.; Söllner, M.; Koedinger, K.; Marco, J.; Leimeister, J.M. Improving Students’ Argumentation Skills Using Dynamic Machine-Learning–Based Modeling. Inf. Syst. Res. 2024, 36, 474–507. [Google Scholar] [CrossRef]
  37. Tenório, K.; Dermeval, D.; Monteiro, M.; Peixoto, A.; Silva, A.P. da Exploring Design Concepts to Enable Teachers to Monitor and Adapt Gamification in Adaptive Learning Systems: A Qualitative Research Approach. Int. J. Artif. Intell. Educ. 2022, 32, 867–891. [Google Scholar] [CrossRef]
  38. Hamari, J. Do Badges Increase User Activity? A Field Experiment on the Effects of Gamification. Comput. Hum. Behav. 2017, 71, 469–478. [Google Scholar] [CrossRef]
  39. Denny, P. The Effect of Virtual Achievements on Student Engagement. In CHI ’13, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Paris France, 27 April–2 May 2013; Association for Computing Machinery: New York, NY, USA, 2013; pp. 763–772. [Google Scholar] [CrossRef]
  40. Chiu, M.C.; Moss, E.; Richards, T. Effect of Deadlines on Student Submission Timelines and Success in a Fully-Online Self-Paced Course. In SIGCSE 2024, Proceedings of the 55th ACM Technical Symposium on Computer Science Education, Portland, OR, USA, 20–23 March 2024; Association for Computing Machinery: New York, NY, USA, 2024; Volume 1, pp. 207–213. [Google Scholar] [CrossRef]
  41. Ariely, D.; Wertenbroch, K. Procrastination, Deadlines, and Performance: Self-Control by Precommitment. Psychol. Sci. 2002, 13, 219–224. [Google Scholar] [CrossRef] [PubMed]
  42. Allaire, J.J. RStudio: Integrated Development Environment for R; R Foundation for Statistical Computing: Vienna, Austria, 2016. [Google Scholar]
  43. Ademi, N.; Loshkovska, S. Exploratory Analysis of Student Activities and Success Based on Moodle Log Data. In Proceedings of the 16th International Conference on Informatics and Information Technologies, Mavrovo, North Macedonia, 10–12 May 2019. [Google Scholar]
  44. Ademi, N.; Loshkovska, S.; Kalajdziski, S. Prediction of Student Success Through Analysis of Moodle Logs: Case Study. In ICT Innovations 2019. Big Data Processing and Mining, Proceedings of the 11th International Conference, Ohrid, North Macedonia, 17–19 October 2019; Gievska, S., Madzarov, G., Eds.; Springer: Cham, Switzerland, 2019; pp. 27–40. [Google Scholar]
  45. Ademi, N.; Loshkovska, S. Weekly Analysis of Moodle Log Data for Future Use in Prediction. In Proceedings of the 17th International Conference on Informatics and Information Technologies—CIIT 2020, Mavrovo, North Macedonia, 8 May 2020. [Google Scholar]
  46. Ademi, N.; Loshkovska, S. Clustering Learners in a Learning Management System to Provide Adaptivity. In Proceedings of the ICT Innovations 2020, Skopje, North Macedonia, 24–26 September 2020; pp. 82–95. [Google Scholar]
  47. Anderson, L.W. Objectives, evaluation, and the improvement of education. Stud. Educ. Eval. 2005, 31, 102–113. [Google Scholar] [CrossRef]
Figure 1. General Structure of Proposed Adaptivity Mechanism.
Figure 2. Course organization.
Figure 3. Learning paths and their relation.
Figure 4. Learning path of one-course topic.
Figure 5. Functionality of learner module.
Figure 6. Functionality of Adaptation Module.
Figure 7. Course grades in percentages.
Figure 8. Mean scores by group and activity type.
Figure 9. Drop-outs (bars) and drop-out percentage (line) by academic year.
Table 1. Different Learning Objects presented in the course.
Learning Object | Type | Type of Learning Outcomes/Aims of LOs
Lecture Notes (L) | Asynchronous, Critical | Presentation slides are available during the course timeline and can be accessed anytime.
Online lectures (OL) | Synchronous, Critical | Video-conferencing (BigBlueButton) online sessions with the instructor are organized within the faculty schedule’s timeframes, and repetition is not considered.
Recorded views (R) | Asynchronous, Optional, Recovery | Video-conferencing online session recordings. They are made during online lectures and are available any time after the lecture. There are no restrictions or limitations on their access.
Classwork (C) | Synchronous, Optional, Assessment | Coached activities during the online lectures to remember and apply the course content. They are graded only if finished within a defined deadline and without a recovery option. Classwork is available to all students.
Homework (H) | Asynchronous, Critical, Assessment | Small tasks based on a specific topic to understand and apply concepts. Tasks are graded. Recovery actions are considered and usually consist of new homework. The selection of the recovery activity is part of the adaptation module.
Quizzes (Q) | Asynchronous, Optional, Assessment, Recovery | Quizzes (SEQ) with multiple-choice questions aimed to help remember terminology. Quizzes are graded, and the rules and restrictions regarding the quizzes are defined for each academic year.
Lab Exercises (L) | Synchronous, Optional, Assessment | Lab exercises aim to develop practical problem-solving skills by applying the topic knowledge. They follow the format of classwork with online guidance. They must be submitted by a predefined deadline, and recovery is not considered. In certain academic years, this activity is wholly omitted.
Lab Homework (LH) | Asynchronous, Critical, Assessment | Problems that help students apply, analyze, and evaluate the learned topics. The activities are graded, and recovery is considered. Rules and restrictions are previously defined and are used by the adaptation module.
Project (P) | Asynchronous, Optional, Assessment | A final project where students apply acquired knowledge to solve real problems and create a unique product. This LO is optional; the project is graded, but recovery is not considered.
Table 2. Learner groups by year and teaching style.
Group | Teaching Style | Rules | Number of Students
2019–2020 | Non-adaptive blended | Time limits; no lab exercises; repetition allowed during exam periods | 45
2020–2021 (Group A) | Adaptive with live sessions | Time limits; no repetition limits; all points scored | 172
2021–2022 (Group B) | Adaptive with live sessions | Time limits; repetition limits on quizzes; limitations on scores | 306
2022–2023 (Group C) | Adaptive with live sessions | No time limits; repetition limits on quizzes; constraints on scores | 214
Table 3. Assessment activities and their points.
Activity Type | Maximum Number of Activities During One Semester | Minimum Number of Points per Activity | Total Points from All Activities in One Semester
Self-Evaluation Quiz | 10 | 5 | 70
Classwork | 10 | 30 | 200
Homework | 10 | 30 | 500
Lab Classwork | 10 | 30 | 200
Lab Homework | 10 | 30 | 330
Badges | 9 | 10 | 90
Project | 1 | 100 | 150
Total Score | The above points are calculated each year with a limit of 1000.
Table 4. Course success in non-adaptive vs. adaptive learning settings.
Adaptation | Participants | Average Grade | Percentage of Students Who Passed During the Semester | Total Percentage of Students Who Passed at the End of the Year | Overall Failure Rate
Non-adaptive | 45 | 8.90 | 29.73% | 95% | 5%
Adaptive Group A | 172 | 9.08 | 86.18% | 89% | 11%
Adaptive Group B | 306 | 7.32 | 72.91% | 83% | 17%
Adaptive Group C | 214 | 7.44 | 48.00% | 64% | 36%
Table 5. Average ungraded LMS activities per student.
Adaptation | Total LMS Course Visits | Online Class Attendance | Recording Views | Course Views
Non-adaptive | 414.91 | 10.04 | 15.24 * | 102.21
Adaptive Group A | 442.36 | 8.50 | 4.52 | 118.28
Adaptive Group B | 619.36 | 19.79 | 12.54 | 173.83
Adaptive Group C | 369.75 | 22.05 | 7.99 | 94.78
* Shows the average number of accesses to lecture presentations.
Table 6. Average number of graded LMS activities per student.
Groups | Quiz (M ± SD) | Classwork (M ± SD) | Homework (M ± SD) | Lab Classwork (M ± SD) | Lab Homework (M ± SD) | Project (M ± SD) | Total (M ± SD)
Non-adaptive | 6.84 ± 1.24 | 6.49 ± 3.07 | 8.33 ± 2.76 | 2.51 ± 1.08 | NA | 0.07 ± 0.25 | 24.07 ± 5.67
Adaptive Group A | 7.13 ± 1.43 | 6.17 ± 2.88 | 6.52 ± 2.08 | 4.23 ± 2.87 | 5.54 ± 2.51 | 0.10 ± 0.30 | 29.59 ± 8.10
Adaptive Group B | 6.98 ± 1.41 | 7.65 ± 3.14 | 8.57 ± 2.99 | 7.47 ± 3.35 | 7.93 ± 2.58 | 0.30 ± 0.45 | 38.6 ± 10.08
Adaptive Group C | 8.16 ± 2.17 | 5.37 ± 3.58 | 7.84 ± 3.67 | 6.59 ± 2.28 | NA | 0.10 ± 0.40 | 27.97 ± 11.71
Table 7. Failure and dropout rates per year.
Academic Year | Failure Rate | Number of Students Who Dropped Out | Percentage of Drop-Outs
2019–2020 | 5% | 0 | 0.00%
2020–2021 (Group A) | 11% | 4 | 2.33%
2021–2022 (Group B) | 17% | 15 | 4.90%
2022–2023 (Group C) | 36% | 28 | 13.08%
Table 8. Activities of the failing students in adaptive learning settings, Group A.
Outcome | Number of Students | Percentage | Behavior Pattern
Drop-out | 4 | 21% | None of the activities were performed
Failed | 13 | 68% | No classwork submission, no BBB attendance; only quizzes repeated
Failed | 2 | 11% | Only one classwork submission
Total | 19 | |
Table 9. Clusters based on performed activities (Group C).
Clusters | Number of Students | Mean Grade | Standard Deviation (Grade)
Cluster0 | 108 (50%) | 7.72 | 1.60
Cluster1 | 52 (24%) | 5.83 | 0.76
Cluster2 | 54 (25%) | 5 | 0.0019
Table 10. Lab work submission rates of Group C.
Average Number of Lab Work Repetitions | Not Submitted | Submitted, Submitted and Resubmitted | Resubmitted Three Times
Percentage of students | 38% | 73% | 1%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
