Article

Mitigating Conceptual Learning Gaps in Mixed-Ability Classrooms: A Learning Analytics-Based Evaluation of AI-Driven Adaptive Feedback for Struggling Learners

1 SK Research-Oxford Business College, Oxford OX1 2BQ, UK
2 Computer Science and Software Engineering Department, Beaconhouse International College, Faisalabad 38000, Pakistan
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(8), 4473; https://doi.org/10.3390/app15084473
Submission received: 5 March 2025 / Revised: 26 March 2025 / Accepted: 11 April 2025 / Published: 18 April 2025
(This article belongs to the Special Issue Application of Smart Learning in Education)

Featured Application

The AI-driven adaptive feedback system from this study can be a practical addition to Learning Management Systems (LMS) and intelligent tutoring platforms, delivering real-time personalized learning assistance. Implemented across mixed-ability classrooms as well as corporate training and professional programs, the system supports student interaction, manages cognitive load, and improves knowledge durability through data-driven methods that scale for personalized education.

Abstract

Artificial Intelligence (AI)-driven adaptation enables individualized feedback strategies that can reduce academic achievement disparities among students. The study evaluates the effectiveness of AI-driven adaptive feedback in mitigating these gaps by providing personalized learning support to struggling learners. A learning analytics-based evaluation was conducted on 700 undergraduate students enrolled in STEM-related courses across three different departments at Beaconhouse International College (BIC). The study employed a quasi-experimental design, where 350 students received AI-driven adaptive feedback while the control group followed traditional instructor-led feedback methods. Data were collected over 20 weeks, utilizing pre- and post-assessments, real-time engagement tracking, and survey responses. Results indicate that students receiving AI-driven adaptive feedback demonstrated a 28% improvement in conceptual mastery, compared to 14% in the control group. Additionally, student engagement increased by 35%, with a 22% reduction in cognitive overload. Analysis of interaction logs revealed that frequent engagement with AI-generated feedback led to a 40% increase in retention rates. Despite these benefits, variations in impact were observed based on prior knowledge levels and interaction consistency. The findings highlight the potential of AI-driven smart learning environments to enhance educational equity. Future research should explore long-term effects, scalability, and ethical considerations in adaptive AI-based learning systems.

1. Introduction

1.1. Background and Context

The increasing adoption of AI in education has transformed traditional pedagogical methods, enabling adaptive and personalized learning experiences. In mixed-ability classrooms, students exhibit diverse levels of prior knowledge, cognitive abilities, and learning speeds, leading to conceptual learning gaps—disparities in understanding fundamental concepts that hinder academic progress. These gaps are particularly pronounced in STEM and language education, where cumulative knowledge-building is essential for long-term learning success [1]. Traditional classroom instruction, often constrained by time and resources, struggles to address the individual needs of students, resulting in learning inequities [2].
AI-driven adaptive feedback systems have emerged as a promising solution to this challenge. These systems use learning analytics and machine learning algorithms to analyze student interactions, detect misunderstandings, and provide real-time personalized feedback [3]. Unlike static, one-size-fits-all teaching approaches, AI-driven adaptive feedback dynamically adjusts instructional content, ensuring that struggling learners receive targeted support while advanced students progress at an appropriate pace. However, the effectiveness of AI-driven feedback in closing conceptual learning gaps remains an area requiring empirical validation, particularly in higher education settings where diverse learning needs are prominent.
AI-driven feedback plays a crucial role in closing learning gaps by providing personalized learning enhancements tailored to students’ needs. As illustrated in Figure 1, AI-based interventions lead to targeted support for struggling students, increased engagement, improved understanding, and enhanced knowledge retention. By leveraging adaptive learning strategies [4], AI-driven feedback systems ensure that students receive timely interventions [5], fostering a more interactive, data-driven, and effective learning environment. These elements collectively contribute to a more inclusive educational framework that optimizes student performance in mixed-ability classrooms.

1.2. Research Problem

Despite the growing integration of AI in education, there is limited empirical evidence on the impact of adaptive feedback mechanisms in heterogeneous classrooms [6]. Many existing smart learning technologies focus on content delivery, but fewer studies have examined how AI-based feedback improves conceptual mastery, student engagement, and retention—key indicators of learning success. Moreover, cognitive overload remains a concern in AI-mediated learning environments, as students may struggle to process multiple forms of adaptive input [7].
This study investigates the role of AI-driven adaptive feedback in mitigating conceptual learning gaps in mixed-ability classrooms. Using a learning analytics-based evaluation, we assess how AI-powered feedback influences student engagement, retention, and performance over a 20-week period across three departments of BIC. By leveraging real-time learning analytics, this research provides quantifiable insights into the effectiveness of adaptive learning technologies.

1.3. Research Objectives

The primary objectives of this study are to:
  • Evaluate the effectiveness of AI-driven adaptive feedback in improving conceptual mastery among struggling learners.
  • Assess the impact of AI feedback on student engagement and cognitive overload in smart learning environments.
  • Analyze retention rates and long-term learning outcomes associated with AI-generated feedback interventions.
  • Examine variations in feedback effectiveness across different subjects (STEM vs. language education) and student demographics.

1.4. Research Questions

To address the research objectives, the study investigates the following key questions:
  • To what extent does AI-driven adaptive feedback improve conceptual mastery in mixed-ability classrooms?
  • How does adaptive feedback influence student engagement and reduce cognitive overload in smart learning environments?
  • What impact does AI-driven adaptive feedback have on long-term knowledge retention and course completion rates?
  • Are there significant differences in feedback effectiveness based on subject domain, prior knowledge levels, and student demographics?

1.5. Significance of the Study

This research contributes to the growing field of AI-enhanced smart learning environments by offering empirical insights into the role of personalized feedback in addressing learning disparities. By integrating learning analytics, this study moves beyond theoretical discussions to provide quantitative evidence on the impact of AI-driven interventions in real-world educational settings. Findings will be valuable to:
  • Educators seeking data-driven strategies for improving student engagement and performance.
  • EdTech developers designing adaptive learning platforms with AI-driven interventions.
  • Policymakers aiming to implement AI-based solutions in higher education.
The remainder of the paper is organized as follows. Section 2 reviews prior research on learning analytics, adaptive feedback, and AI in education. Section 3 describes the quasi-experimental design, data collection methods, and analytical techniques used in the study. Section 4 presents the results, including findings on conceptual mastery improvement, student engagement, and retention rates. Section 5 interprets the results in relation to previous research and discusses practical implications. Finally, Section 6 summarizes the key takeaways, research limitations, and directions for future studies.

2. Literature Review

The integration of AI in education has revolutionized traditional teaching methodologies, offering personalized learning experiences tailored to individual student needs [8,9]. In mixed-ability classrooms, where students exhibit diverse learning proficiencies [10], AI-driven adaptive feedback systems have emerged as a promising solution to bridge conceptual learning gaps [11]. This literature review examines the current state of research on AI-driven adaptive feedback, learning analytics, and their impact on student engagement and retention in mixed-ability educational settings.

2.1. AI-Driven Adaptive Feedback in Education

AI-driven adaptive feedback systems utilize machine learning algorithms to analyze student performance in real time, providing personalized feedback that addresses individual learning needs [12]. These systems have been shown to enhance student engagement and improve learning outcomes by tailoring instructional content to the learner’s current understanding. For instance, a study by [13] demonstrated that AI-powered feedback mechanisms significantly improved students’ grasp of complex concepts in STEM education [14,15]. Similarly, research by [16] highlighted the potential of AI tutors in offering individualized practice and feedback, thereby enhancing the learning experience.
In the context of second language acquisition, AI-driven tools have been effective in providing adaptive feedback to learners [17]. A study by [18] explored the use of AI-integrated Computer-Assisted Language Learning (CALL) tools, revealing that these technologies offer useful feedback that meets real-world language learning needs. The research emphasized the importance of AI-based adaptive feedback in improving learners’ writing performance in a second language [19,20,21].
However, the implementation of AI-driven adaptive feedback is not without challenges. Educators have expressed concerns regarding the integration of such technologies into existing curricula [22]. A detailed analysis by [23,24] discussed educators’ motivations, implementation strategies, outcomes, and challenges when incorporating automated writing feedback tools in classrooms. The study underscored the necessity for professional development and support to facilitate the effective adoption of AI-driven feedback systems.

2.2. Learning Analytics in Mixed-Ability Classrooms

Learning analytics involves the collection and analysis of data related to student learning behaviors, aiming to inform and optimize educational practices [25]. In mixed-ability classrooms, learning analytics can identify individual student needs, enabling the provision of targeted support. A systematic literature review by [26] highlighted the transformative role of learning analytics in personalizing educational feedback mechanisms, thereby enhancing learning outcomes.
The application of learning analytics extends to various educational contexts. For example, a study by [27] examined the use of learning analytics-based interventions in flipped classroom settings. The research found that such interventions positively impacted students’ academic achievement and engagement, particularly in environments where students exhibited varying levels of prior knowledge [28].
Despite its benefits, the adoption of learning analytics faces obstacles, including concerns about data privacy and the need for educators to develop competencies in data interpretation. Research by [29] emphasized the importance of addressing these challenges through professional development and the establishment of clear data governance policies.

2.3. Impact on Student Engagement and Retention

The integration of AI-driven adaptive feedback and learning analytics has been associated with increased student engagement and retention [30]. By providing personalized learning experiences [31,32], these technologies can motivate students to participate actively in their learning processes [33]. A case study by [34] demonstrated that adaptive learning systems incorporating AI and learning analytics significantly improved student engagement metrics and reduced dropout rates.
Furthermore, the use of AI in education has shown promise in supporting students with disabilities [35]. A review report by [36] highlighted how AI tools, such as chatbots and word prediction programs, assist students with learning disabilities in overcoming academic challenges, thereby promoting inclusivity and retention [37].
However, the effectiveness of these technologies can vary based on factors such as prior knowledge levels and the frequency of interaction with AI-driven tools. Studies suggest that while AI-driven feedback can enhance learning outcomes, its impact is maximized when integrated thoughtfully into the curriculum and accompanied by educator support [38,39,40].

2.4. Challenges and Considerations

While AI-driven adaptive feedback and learning analytics offer numerous benefits, several challenges must be addressed to ensure their effective implementation. Concerns regarding data privacy and the ethical use of student information are paramount. Educators and policymakers must establish robust data governance frameworks to protect student data.
Additionally, the successful adoption of these technologies requires educators to possess a certain level of data literacy [41]. Professional development programs are essential to equip teachers with the skills necessary to interpret learning analytics and implement AI-driven feedback effectively. Research by [42] highlighted the importance of ongoing support and training for educators in this domain.
Moreover, the design of AI-driven tools should consider the diverse needs of students in mixed-ability classrooms [43]. Ensuring that these technologies are inclusive and accessible to all learners is crucial for promoting educational equity. To systematically synthesize prior research and highlight this study’s contributions, Table 1 compares key studies on AI-driven feedback with our findings, focusing on improvements in conceptual mastery, engagement, and retention.

3. Methodology

3.1. Research Design

The study adopts a learning analytics-based quasi-experimental design to evaluate the impact of AI-driven adaptive feedback on conceptual learning gaps in mixed-ability classrooms. A pre-test/post-test control group design was implemented, allowing for a comparative analysis of the effectiveness of AI-based feedback systems against traditional instructor-led feedback methods. The Supplementary Materials include a language education assessment test, STEM assessment test, delayed recall test, semi-structured interview guide for students, student engagement survey, IRB approval letter, participant consent form, cognitive load questionnaire, and pre-test/post-test assessment framework. These materials offer additional insights into the research instruments, ethical approvals, and assessment framework used in the study.
Participants were divided into two groups:
  • Experimental Group (n = 350): Received AI-driven adaptive feedback.
  • Control Group (n = 350): Received traditional instructor-led feedback.
The study was conducted over 20 weeks across three departments at BIC, integrating AI-powered learning analytics to assess conceptual mastery, student engagement, and knowledge retention. Table 2 shows the overview of research design.

3.2. Participants

A randomized selection of 700 undergraduate students from three different departments at BIC was conducted. The sample was balanced across STEM and Language Education disciplines as mentioned in Table 3. Within the STEM disciplines, the study focused on topics such as introductory programming concepts (e.g., variables, loops, and functions in Computer Science), algebraic problem-solving (e.g., linear equations and quadratic functions in Mathematics), and fundamental mechanics (e.g., Newton’s laws and kinematics in Physics). For Language Education, the themes included English composition skills (e.g., essay structure and argumentative writing), grammar proficiency (e.g., syntax and tense usage), and vocabulary development (e.g., context-based word acquisition). These topics were selected to represent foundational areas where conceptual learning gaps are prevalent, allowing AI-driven adaptive feedback to target specific challenges in mixed-ability settings.
Participants were included based on voluntary participation, ensuring ethical adherence and minimizing selection bias.
Students were selected through a transparent, randomized process from a pool of approximately 1200 undergraduates enrolled in STEM (Computer Science, Mathematics, Physics) and Language Education (English) courses at Beaconhouse International College. Participation was voluntary, and advertised via university announcements, with 700 students consenting after an initial ethics briefing. Random assignment to the experimental (n = 350, AI-driven feedback) and control (n = 350, traditional feedback) groups was conducted using a computer-generated randomization algorithm, ensuring balanced representation across gender (51% male, 49% female), age (mean 21.0 ± 2.2), and prior academic performance (mean GPA 2.89 ± 0.65). This approach minimized selection bias and supported the quasi-experimental design’s integrity.

3.3. AI-Driven Adaptive Feedback System

The AI-driven adaptive feedback system operated as a desktop application that combined individualized learning support, immediate feedback, and automated performance assessment in mixed-ability classrooms. The system analyzed structured interaction data to deliver precise, personalized feedback during learning sessions. Its design incorporated analytical and engagement-monitoring tools to support student improvement, and the interface and content structure were built around accessibility and usability to promote active learning and student-driven progress assessment.

3.3.1. System Architecture and Interface Design

The AI-based adaptive feedback model follows the equation:
F_adaptive = α·P + β·T + γ·E
where:
  • F_adaptive = AI-generated feedback score.
  • P = performance score based on correctness.
  • T = response time per question.
  • E = engagement level (frequency of AI interactions).
  • α, β, and γ = learning weights optimized via machine learning.
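To make the scoring mechanism concrete, the sketch below computes F_adaptive for a single interaction record in Python. It is a minimal illustration under stated assumptions: the weight values, the normalization of response time and engagement, and the InteractionRecord structure are introduced here for demonstration only, since the study reports merely that α, β, and γ were optimized via machine learning.

# Minimal sketch of F_adaptive = alpha*P + beta*T + gamma*E.
# Weights and normalization choices below are illustrative assumptions,
# not the study's learned parameters.
from dataclasses import dataclass

@dataclass
class InteractionRecord:
    correctness: float        # P: fraction of items answered correctly (0-1)
    response_time_s: float    # T: average response time per question, seconds
    weekly_interactions: int  # E: frequency of AI interactions per week

def adaptive_feedback_score(rec: InteractionRecord,
                            alpha: float = 0.6, beta: float = 0.2, gamma: float = 0.2,
                            max_time_s: float = 120.0, max_interactions: int = 20) -> float:
    """Combine performance, speed, and engagement into a single 0-1 feedback score."""
    p = rec.correctness
    # Faster responses (up to a cap) contribute more to the score.
    t = max(0.0, 1.0 - min(rec.response_time_s, max_time_s) / max_time_s)
    e = min(rec.weekly_interactions, max_interactions) / max_interactions
    return alpha * p + beta * t + gamma * e

# Example: 70% correctness, 45 s average response time, 12 interactions per week.
print(adaptive_feedback_score(InteractionRecord(0.70, 45.0, 12)))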
The system consisted of ten main interfaces that supported both student learning and instructional decision-making. The Main Dashboard (Figure 2) presented student progress information, AI-generated insights, and access to essential system functionalities. The interface displayed a performance graph, engagement charts, quick insights, and recent feedback logs to help students track their academic achievements, and a structured side menu allowed users to move between modules with ease.
The AI Feedback System (Figure 3), which integrates the OpenAI API at the backend to respond to student queries, enabled users to submit their work for instant feedback that categorized mistakes and offered structured recommendations. Students used the text-based input panel to answer open-ended and multiple-choice questions and to provide written and audio responses, while the AI analysis panel presented systematic explanations and suggested improvement steps. The system implemented adjustable feedback protocols that drew on students' past performance patterns to deliver immediate, individualized, constructive feedback.
Educational staff accessed the Instructor Analytics Dashboard (Figure 4), which featured comprehensive tools for performance monitoring, student engagement measurement, and predictive analytics. Through this interface, instructors viewed real-time student scores, progress reports, and automatic AI difficulty assessments that helped them determine which students needed assistance. The engagement analytics module displayed topic understanding through bar chart visualizations, AI-generated instructional recommendation graphs, and time-on-task performance charts. Instructors could also adjust AI-generated feedback through this interface, select the desired feedback intensity, and generate analytical reports for educational research purposes.
The Personalized Recommendations module (Figure 5) used AI to tailor learning experiences to each student's current proficiency level, conceptual mastery, and engagement activity. The skill strength indicator grouped learners into beginner, intermediate, and advanced tiers to deliver specific learning paths, and the AI system suggested personalized exercises, supplementary materials, and instructional video content to boost performance.
The AI-Generated Reports and Feedback History module (Figure 6) preserved all previously generated feedback together with student performance data and knowledge retention records. The module displayed students' conceptual progression over time so that they could revisit past errors and refine their learning strategies using AI-generated evaluations. Through its predictive analytics features, the module used historical student data to forecast performance, improving the transparency of feedback processes and informing teaching and learning decisions for both students and educators.
An essential element of the system was the Live Chat and AI Tutor (Figure 7), which offered students real-time academic support through an AI chatbot. The chatbot used NLP algorithms to provide context-sensitive feedback and to sustain interactive problem-solving sessions. Users could choose among three chat modes: Step-by-Step Assistance, Quick Answer Mode, and Interactive Whiteboard Mode. These options let students switch between guided assistance and quick solutions to suit their learning preferences.
The Student Performance Dashboard (Figure 8) presented student progress through graphical displays. Weekly test-score visualizations tracked academic growth over time and compared AI feedback with instructor-delivered feedback to evaluate effectiveness. This interface gave students an immediate view of their learning development, helping them understand their progress and refine their study approaches.
The Engagement Analytics module (Figure 9) gave learners an organized view of their engagement patterns, study activities, and response consistency. The system produced analyses depicting the relationship between study duration and performance gains over the course of learning. Using bar charts, students and instructors could spot feedback utilization patterns that pointed toward learning areas needing more attention.
The Comparative Analysis module (Figure 10) showed users how their performance compared with that of their classmates. The module displayed class rankings and AI feedback usage statistics to encourage engagement with AI feedback while using peer comparison as motivation. This monitoring gave students awareness of their standing within the cohort, enabling better planning of their study activities.
The AI-Powered Study Plan and Recommendations module (Figure 11) generated personalized study schedules using AI-based assessment techniques. The system allocated study time across subjects and presented students' learning strengths with radar charts, giving them practical guidance on how to arrange their study time to improve classroom performance.
The analytics tools predicted student learning outcomes to provide targeted support for students showing signs of difficulty. Educators deployed real-time instructional strategies through the instructor dashboard using alerts from the AI system. Engagement-tracking features linked study activity directly to performance by revealing patterns between active learning time and conceptual progress. The AI tutor delivered timely knowledge reinforcement, giving students customized explanations matched to their individual learning pace.

3.3.2. Implementation Using the OpenAI API

The AI-driven feedback system integrates the OpenAI API as its backend, leveraging a pre-trained large language model (likely GPT-based, specifics proprietary to OpenAI) to generate adaptive feedback. Our contribution focused on developing a custom frontend—a web-based interface enabling student interaction—while the backend, including AI model training, algorithm selection (deep learning transformers), and core processing, was handled by the OpenAI API. No local training or hyperparameter tuning was performed; instead, we supplied study-specific inputs (e.g., 700 students’ responses, pre/post-test data) via API calls, preprocessed locally by formatting text inputs and removing incomplete entries for compatibility. The system architecture comprises a frontend module (Python/Django, hosted on a local AWS EC2 instance, 4 vCPUs, 8 GB RAM) interfacing with OpenAI’s cloud-based API, which processes requests and returns feedback in ~1–2 s per interaction, depending on API load. Statistical methods (e.g., regression) were applied separately for outcome analysis, not feedback generation. While Section 3.3.1 screenshots illustrate the interface, a UML sequence diagram (attached as Supplementary File) detailing the frontend–API interaction is provided in the Supplementary Materials, replacing less substantive visuals to clarify functional flow.
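As a point of reference for the frontend-to-API flow described above, the following Python sketch shows one way a backend view could request adaptive feedback through the OpenAI API (Python SDK 1.x). The model name, prompt wording, and temperature are illustrative assumptions; the paper does not disclose the actual prompts or configuration used in the deployed system.

# Hypothetical helper for requesting adaptive feedback from the OpenAI API.
# Model choice and prompts are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_feedback(question: str, student_answer: str, proficiency: str) -> str:
    """Request categorized, personalized feedback for one student response."""
    system_prompt = (
        "You are an adaptive tutoring assistant. Classify the student's mistake, "
        "briefly explain the underlying concept, and suggest one next practice step. "
        f"Adjust the depth of the explanation for a {proficiency} learner."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; the deployed model is not specified
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": f"Question: {question}\nStudent answer: {student_answer}"},
        ],
        temperature=0.3,
    )
    return response.choices[0].message.content

# Example call (e.g., from a Django view):
# feedback = generate_feedback("What does a for-loop do?", "It stores a value.", "beginner")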

3.4. Data Collection Methods

The data collection procedure was designed to capture complete, systematic information on student performance, engagement activity, and interactions with AI-generated feedback. The approach combined quantitative system logs and survey data with qualitative feedback from students and instructors. Over 20 weeks, 700 undergraduate students participated: 350 received AI-driven adaptive feedback and 350 in the control group received instructor-led feedback.

3.4.1. Participant Selection and Grouping

Participants were selected from multiple disciplines, including Computer Science, Mathematics, and Physics, to ensure a diverse representation of learning styles and academic backgrounds. Table 4 shows that students were divided into two groups:
Each student was assigned a unique identifier in the AI-driven system to ensure anonymity and facilitate automated data logging without manual intervention.

3.4.2. Data Sources and Collection Tools

The pre-tests and post-tests were designed as 50-question assessments, each lasting approximately 60 min, and comprised a mix of 70% multiple-choice questions (e.g., selecting correct programming syntax or solving algebraic equations) and 30% short-answer questions (e.g., explaining a physics concept or composing a brief paragraph). The pre-tests assessed foundational knowledge specific to each theme—Computer Science (e.g., basic programming constructs), Mathematics (e.g., algebraic operations), Physics (e.g., mechanics principles), and English Language (e.g., grammar and vocabulary)—to establish a baseline. The post-tests, while maintaining the same format and difficulty level within each theme, shifted focus to applied understanding (e.g., debugging code, solving multi-step math problems, applying physics laws, or writing structured essays). Although the pre-tests and post-tests were theme-specific and not identical across disciplines (e.g., Math vs. English), they were standardized within each subject to ensure consistency in measuring progress, with question banks validated by subject-matter experts for reliability and relevance.
To ensure data integrity and comprehensiveness, multiple data sources were employed, including automated system logs, pre- and post-study assessments, surveys, and instructor observations.
1. System-Generated Learning Logs
The AI-driven system recorded every student interaction, including response accuracy, time spent on tasks, feedback utilization, and engagement frequency. Logs captured data at five-minute intervals, allowing for fine-grained analysis of student learning behavior.
2. Pre- and Post-Study Assessments
To measure the impact of AI-driven feedback, students were administered two standardized assessments: one before implementation and another at the end of the study. These assessments evaluated conceptual understanding, problem-solving skills, and knowledge retention as shown in Table 5.
To align with the study’s focus on STEM and Language Education, the pre- and post-test instruments were specifically designed for each content area. For Computer Science, pre-tests assessed baseline knowledge of programming fundamentals (e.g., variables, conditionals) with questions like “Identify the correct syntax for a loop”, while post-tests evaluated application (e.g., “Write a short program to solve a given problem”). In Mathematics, pre-tests covered algebraic foundations (e.g., solving linear equations), with post-tests testing multi-step problem-solving (e.g., word problems involving quadratics). For Physics, pre-tests focused on conceptual mastery of mechanics (e.g., “State Newton’s First Law”), and post-tests required application (e.g., “Calculate the velocity of an object given force and mass”). In the English Language, pre-tests measured grammar and vocabulary (e.g., “Correct the sentence structure”), while post-tests assessed composition skills (e.g., “Write a paragraph using specified vocabulary”). Each 50-question test combined multiple-choice (70%) and short-answer (30%) formats, was developed with input from subject experts and underwent pilot testing to ensure validity and reliability across mixed-ability learners.
3. Student Engagement and Feedback Surveys
To capture student perceptions of AI-driven feedback, structured surveys were conducted in weeks 10 and 20. Surveys included Likert-scale questions assessing feedback clarity, perceived usefulness, and self-reported engagement levels as mentioned in Table 6. Each survey consisted of 15 questions—10 Likert-scale items (e.g., rating feedback usefulness from 1 to 5) and 5 open-ended prompts (e.g., describing helpful features)—and took approximately 10–15 min to complete, ensuring minimal disruption while gathering detailed insights into student experiences.
4. Instructor Observations and Reports
Instructors monitoring the AI-driven feedback system documented student engagement trends and intervention effectiveness. These reports were compared against AI-generated analytics to validate system effectiveness.

3.4.3. Ethical Consideration, Data Privacy and Data Validation

The study followed ethical data privacy arrangements and obtained participant consent before starting the research. Participants received information about the research goals and AI feedback characteristics together with their cancellation privileges throughout the study period. The data collection system functioned without storing actual names because it used anonymous IDs for tracking purposes. The IRB approval number is IRB-24/2741-EDU-113.
To preserve objectivity and equity, the researchers put bias mitigation techniques into practice to control feedback quality between all student participants. Regular assessments of instructor intervention logs checked whether AI assessment methods conformed to traditional pedagogy practices.
Data reliability was supported by multiple validation procedures:
  • The performance data integration process utilized cross-validation methods between AI log results and instructor evaluation results to maintain data accuracy.
  • A random audit of feedback logs checked that AI suggested actions complied with expert standards for educational practice.
  • Surveys included repetitions of identical questions presented at different times to confirm reporting precision.
The research utilized diverse collection methods to create an extensive assessment of adaptive feedback driven by AI and its effects on student learning within heterogeneous classrooms. Performance tracking alongside learner feedback created a solid analytical base for investigations in later parts of this work.

3.5. Data Analysis and Statistical Methods

This section details the statistical and analytical methods used to evaluate AI-driven adaptive feedback with respect to conceptual learning gaps, student engagement, cognitive overload, and retention metrics. The analysis relies on inferential statistical testing, machine learning methods, and learning analytics tools to extract meaningful insights from student performance records, with the goal of answering the research questions.

3.5.1. Descriptive Statistics and Data Normalization

Before conducting inferential statistical analyses, descriptive statistics were calculated to summarize participant performance across different feedback conditions (AI-driven vs. instructor-led).
From Table 7, the experimental group receiving AI-driven adaptive feedback showed a 28% improvement in conceptual mastery, while the control group exhibited a 14% increase, suggesting that AI-driven interventions enhance learning outcomes more effectively than traditional feedback mechanisms. The pre- and post-tests summarized in Table 6 were administered across the study’s focal themes: Computer Science (e.g., variables, loops), Mathematics (e.g., linear equations), Physics (e.g., Newton’s laws), and English Language (e.g., essay writing, syntax). Each assessment, consisting of 50 questions (70% multiple-choice, 30% short-answer), was designed to take approximately 60 min to complete. This duration was determined through pilot testing to ensure comprehensive evaluation of conceptual mastery and applied understanding within a practical timeframe, consistent across all themes for both the experimental and control groups.
In the study, “feedback effectiveness” is defined as the capacity of AI-driven adaptive feedback to improve conceptual mastery, enhance student engagement, increase knowledge retention, and reduce cognitive overload, tailored to the needs of struggling learners in mixed-ability classrooms. It was measured through a combination of quantitative and qualitative indicators: (1) improvement in conceptual mastery, calculated as the percentage increase in pre- to post-test scores; (2) engagement levels, assessed via the frequency and quality of student interactions with the AI system (e.g., average weekly interactions); (3) retention rates, evaluated through delayed recall test scores administered four weeks post-intervention; and (4) cognitive overload reduction, quantified using NASA-TLX survey responses. These metrics were triangulated with qualitative data from student surveys on perceived feedback usefulness (Likert-scale ratings) and instructor observations to ensure a comprehensive evaluation of effectiveness across diverse learning dimensions.
To ensure the validity of our inferential analyses, data normalization was performed. Outliers were detected using Z-score normalization:
Z = (X − μ) / σ
where:
  • X = raw score.
  • μ = mean.
  • σ = standard deviation.
Scores with |Z| > 3 were considered outliers and excluded from further analysis.
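A minimal sketch of this screening step is shown below, assuming scores are held in a NumPy array; the |Z| > 3 exclusion rule follows the text, and the sample data are illustrative only.

# Z-score outlier screening: drop observations with |Z| > 3.
import numpy as np

def remove_outliers(scores: np.ndarray, threshold: float = 3.0) -> np.ndarray:
    """Return only the scores whose standardized value is within the threshold."""
    z = (scores - scores.mean()) / scores.std(ddof=1)
    return scores[np.abs(z) <= threshold]

# Illustrative data: 200 typical scores plus one extreme low score.
scores = np.concatenate([np.random.default_rng(0).normal(70, 10, 200), [5.0]])
clean = remove_outliers(scores)
print(len(scores), len(clean))  # the extreme value is removed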
To evaluate the impact of AI-driven feedback on conceptual mastery, a paired t-test was used within each group, and an independent t-test was applied to compare post-test scores between groups.
t = (X̄_1 − X̄_2) / (S_p · √(1/n_1 + 1/n_2))
where:
  • X̄_1, X̄_2 = mean scores of the two groups (or test occasions) being compared.
  • S_p = pooled standard deviation.
  • n_1, n_2 = sample sizes.
The AI-driven group showed a statistically significant improvement, confirming that adaptive feedback enhances learning outcomes. Paired t-tests revealed significant improvements in conceptual mastery for both groups: the experimental group (AI-driven feedback) demonstrated a highly significant increase (t(349) = 14.72, p < 0.001), while the control group (instructor-led feedback) also showed a significant but less pronounced improvement (t(349) = 2.53, p = 0.012). These results underscore the superior efficacy of AI-driven adaptive feedback over traditional methods.
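The testing procedure can be reproduced with SciPy as sketched below, using placeholder data rather than the study's dataset: paired t-tests within each group, plus an independent pooled-variance t-test on post-test scores between groups.

# Paired (within-group) and independent (between-group) t-tests on placeholder data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
pre_ai, post_ai = rng.normal(52, 12, 350), rng.normal(80, 10, 350)      # experimental group
pre_ctrl, post_ctrl = rng.normal(52, 12, 350), rng.normal(65, 11, 350)  # control group

t_ai, p_ai = stats.ttest_rel(post_ai, pre_ai)          # paired: pre vs. post, AI group
t_ctrl, p_ctrl = stats.ttest_rel(post_ctrl, pre_ctrl)  # paired: pre vs. post, control group
t_btw, p_btw = stats.ttest_ind(post_ai, post_ctrl, equal_var=True)  # pooled-variance test

print(f"AI group: t={t_ai:.2f}, p={p_ai:.4g}")
print(f"Control group: t={t_ctrl:.2f}, p={p_ctrl:.4g}")
print(f"Between groups (post-test): t={t_btw:.2f}, p={p_btw:.4g}")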
To assess engagement levels, we analyzed interaction frequency with the AI system. The number of feedback interactions per week was compared between students who improved the most and those who showed minimal progress as shown in Table 8.
The “engagement increase” represents the percentage improvement in student interaction frequency with the AI-driven feedback system (experimental group) or traditional feedback (control group) over the 20-week period. It was calculated by comparing the baseline engagement—average weekly interactions in Week 1 (e.g., 13.5 for high performers in the experimental group)—to the overall study average (e.g., 18.2), yielding a 35% increase for high performers. Interactions were tracked via system logs, capturing actions such as accessing feedback, completing adaptive tasks, or responding to AI prompts. This metric reflects how consistently students engaged with the feedback mechanisms, correlating higher engagement with greater performance gains, as validated by the regression analysis (R2 = 0.76, p < 0.001).
In this study, “engagement” is defined as the extent of active student participation with the feedback system, encompassing actions such as logging into the AI platform, reviewing feedback, completing adaptive tasks, and interacting with AI-generated prompts or instructor comments. It was measured through system-generated logs that recorded these interactions at five-minute intervals, capturing metrics like frequency (e.g., average weekly interactions), duration (e.g., time spent per session), and responsiveness (e.g., task completion rates). For the experimental group, this included engagement with AI features like the Live Chat and AI Tutor, while the control group’s engagement was tracked via attendance and participation logs. This granular data enabled the correlation between engagement levels and learning outcomes, as evidenced by the regression results.
A linear regression analysis confirmed that higher AI engagement correlated positively with post-test improvements (R² = 0.76, p < 0.001).
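One way to run this kind of engagement-gain regression is sketched below with scipy.stats.linregress; the simulated data are placeholders chosen to illustrate the workflow, not to reproduce the reported R² = 0.76.

# Linear regression of post-test improvement on weekly AI interactions (placeholder data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
weekly_interactions = rng.uniform(2, 20, 350)                        # engagement frequency
improvement = 5 + 1.4 * weekly_interactions + rng.normal(0, 4, 350)  # post-test gain (%)

fit = stats.linregress(weekly_interactions, improvement)
print(f"slope={fit.slope:.2f}, R^2={fit.rvalue**2:.2f}, p={fit.pvalue:.3g}")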
Cognitive load was measured using NASA-TLX surveys before and after the intervention. A repeated-measures ANOVA was used to determine whether cognitive load levels decreased significantly.
C_load = Σ (i = 1 to 6) W_i · R_i
where:
  • C_load = cognitive load score.
  • W_i = weight assigned to each factor (mental demand, physical demand, temporal demand, effort, frustration, performance).
  • R_i = raw rating of each factor.
Results showed a 22% reduction in cognitive overload in the experimental group, compared to 6% in the control group.
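The weighted scoring itself can be computed compactly, as in the sketch below. The weights shown are illustrative assumptions; in standard NASA-TLX practice they come from each respondent's pairwise comparisons of the six dimensions.

# Weighted NASA-TLX cognitive load score: C_load = sum of W_i * R_i over six dimensions.
NASA_TLX_DIMENSIONS = ["mental", "physical", "temporal", "effort", "frustration", "performance"]

def cognitive_load(ratings: dict, weights: dict) -> float:
    """Weighted sum of the six NASA-TLX ratings (ratings on a 0-100 scale)."""
    return sum(weights[d] * ratings[d] for d in NASA_TLX_DIMENSIONS)

ratings = {"mental": 70, "physical": 20, "temporal": 55,
           "effort": 60, "frustration": 40, "performance": 30}
weights = {"mental": 5/15, "physical": 1/15, "temporal": 3/15,
           "effort": 3/15, "frustration": 2/15, "performance": 1/15}  # illustrative weights
print(round(cognitive_load(ratings, weights), 1))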
To evaluate long-term retention, delayed recall tests were conducted. A multiple linear regression model predicted retention scores:
R_ret = δ·F_adaptive + η·L + λ·E
where:
  • R_ret = retention rate.
  • F_adaptive = AI feedback score.
  • L = learning time per session.
  • E = engagement frequency.
  • δ, η, λ = regression coefficients.
The model indicated that feedback frequency and engagement positively correlated with knowledge retention (R² = 0.82, p < 0.001).
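Fitting such a retention model is straightforward with statsmodels, as sketched below; the simulated predictors and coefficients are placeholders for illustration and do not reproduce the reported R² = 0.82.

# Multiple linear regression: retention ~ F_adaptive + learning time + engagement (placeholder data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 350
feedback_score = rng.uniform(0.3, 1.0, n)   # F_adaptive
learning_time = rng.uniform(20, 90, n)      # L: minutes per session
engagement = rng.uniform(2, 20, n)          # E: interactions per week
retention = 20 + 40 * feedback_score + 0.2 * learning_time + 1.5 * engagement + rng.normal(0, 5, n)

X = sm.add_constant(np.column_stack([feedback_score, learning_time, engagement]))
model = sm.OLS(retention, X).fit()
print(model.rsquared)
print(model.params)  # intercept, delta, eta, lambda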
An ANOVA test examined whether feedback effectiveness varied between STEM and language students as defined in Table 9.
The STEM group benefited more from AI feedback than the language group. The interaction effect was statistically significant, suggesting discipline-specific differences in AI feedback efficiency.
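For reference, a two-way (subject domain by feedback condition) ANOVA of this kind can be run with statsmodels' formula interface, as sketched below on placeholder data whose group means loosely follow those reported in Section 4.5.

# Two-way ANOVA: post-test score ~ domain * condition (placeholder data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "score": np.concatenate([rng.normal(81, 9, 175), rng.normal(78, 9, 175),
                             rng.normal(67, 10, 175), rng.normal(65, 10, 175)]),
    "domain": ["STEM"] * 175 + ["Language"] * 175 + ["STEM"] * 175 + ["Language"] * 175,
    "condition": ["AI"] * 350 + ["Control"] * 350,
})

model = ols("score ~ C(domain) * C(condition)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and interaction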
Note that the p-values reported for comparisons across outcome variables—conceptual mastery, engagement, cognitive load, and retention—are uncorrected for multiple comparisons. Readers should interpret these significance levels with caution, considering the potential for inflated Type I error rates across multiple tests.

3.5.2. AI-Generated Feedback Complexity Analysis

The AI system categorized feedback into three levels:
  • Low Complexity Feedback (basic error identification).
  • Moderate Complexity Feedback (hints and guided corrections).
  • High Complexity Feedback (adaptive learning path suggestions).
A Chi-square test (χ²) showed that students receiving high-complexity feedback had the highest conceptual improvement (p < 0.01).
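A minimal sketch of such a chi-square test on a complexity-by-improvement contingency table is shown below; the counts are illustrative assumptions, not the study's data.

# Chi-square test of independence: feedback complexity vs. above-median improvement.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: low, moderate, high complexity; columns: improved above median, did not.
contingency = np.array([
    [40, 75],   # low-complexity feedback
    [70, 60],   # moderate-complexity feedback
    [85, 20],   # high-complexity feedback
])

chi2, p, dof, expected = chi2_contingency(contingency)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4g}")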

4. Results

The empirical findings of the study analyze the impact of AI-driven adaptive feedback on conceptual mastery, student engagement, cognitive overload, and retention rates in mixed-ability classrooms.
The findings are structured to address the four research questions. Results are interpreted through both inferential and descriptive statistical measures to validate the effectiveness of AI feedback in mitigating learning gaps.

4.1. Improvement in Conceptual Mastery

The study evaluated AI-driven adaptive feedback by comparing the experimental group's pre-test and post-test scores with those of the control group, which received instructor-led feedback. The mean pre-test and post-test scores for both groups are depicted in Figure 12.
Students' conceptual understanding increased substantially after receiving AI-generated feedback: the experimental group's scores rose by 28% on average, reaching 80.2% at post-test (SD = 12.3 at pre-test, SD = 10.4 at post-test). The control group showed a more limited effect, with scores rising from 51.8% to 65.2% over the assessment period. Paired t-test results confirmed that the difference between AI-driven adaptive feedback and traditional feedback was statistically significant (p < 0.001), with adaptive feedback achieving roughly twice the gain in conceptual mastery.
AI-driven feedback also produced more consistent performance, with learning gains clustered near the median of 28%. Control group students showed more variable outcomes, with several participants recording minimal or no learning gains. AI-generated feedback thus improved both the magnitude and the consistency of learning gains across the cohort.

4.2. Student Engagement Trends

Student engagement was analyzed using learning analytics that monitored weekly interactions with the AI system and with traditional feedback methods. The 20-week engagement trends are shown in Figure 13.
Students who received AI-based feedback showed increasing engagement over the semester, peaking around Week 12 before stabilizing. Control group participants showed a gradual decline in engagement after Week 8, consistent with waning interest in conventional feedback. Regression analysis indicates that AI-driven interactions sustain conceptual growth through a direct positive relationship (R² = 0.76, p < 0.001) with student participation and motivation.
The data also confirm that students with more frequent weekly interactions with the AI system achieved better learning outcomes. Students who interacted with AI feedback more than 15 times per week showed an average learning improvement of 30%, whereas those who interacted fewer than five times per week improved by an average of 10%. AI-driven adaptive feedback therefore works best for students who engage with it consistently.

4.3. Reduction in Cognitive Overload

Cognitive overload was measured with the NASA-TLX cognitive load scale before and after the intervention. Figure 14 depicts how cognitive load shifted over the study period for both groups.
Students receiving AI-driven feedback showed a substantial 22% decrease in cognitive overload over the study, compared with only a 6% reduction in the control group. Paired t-test results confirmed that the decrease was statistically significant (p < 0.01). The adaptive feedback system appears to optimize cognitive load by selecting instructional content suited to each student's proficiency level, sparing them unnecessary mental effort.
After the system began providing adaptive feedback in Week 6, cognitive load in the AI-driven group improved steadily through the end of the study. The control group exhibited erratic cognitive load scores, indicating that traditional feedback did not provide persistent cognitive relief.

4.4. Knowledge Retention Rate Improvements

The delayed recall tests administered four weeks following the intervention served as the retention measure.
Students receiving AI-driven feedback retained 85% of the newly learned content, whereas control group students retained just 65%. Regression analysis showed that feedback frequency and engagement levels directly influenced knowledge retention (R² = 0.82, p < 0.001).
The scatterplot in Figure 15 illustrates the relationship between initial post-test scores and delayed recall scores, with point sizes reflecting average weekly AI feedback interactions for the experimental group (ranging from 0 to 18.2) and a uniform marker for the control group (traditional feedback). Some confusion may arise from the visual clustering: students with 5–10 AI interactions are present (e.g., moderate performers averaged 12.4 interactions, with some falling in this range), but their distribution overlaps with other ranges, making them less distinct. The chart separates the experimental and control groups to highlight AI feedback’s impact, though this distinction was not explicitly labeled. Notably, students with 10 or more weekly interactions (e.g., high performers at 18.2) consistently scored above 75 on delayed recall tests, reflecting a strong positive correlation (R2 = 0.82, p < 0.001) between frequent AI engagement and retention, rather than an absolute threshold. This pattern suggests that sustained interaction with adaptive feedback reinforces long-term memory, though small sample variability or ceiling effects may amplify this trend.
Students who achieved above 25% initial learning gains based on their testing demonstrated superior retention results, which confirm that AI feedback boosts knowledge retention in students.

4.5. Variations in Feedback Effectiveness Across Subjects

An ANOVA test evaluated if AI-driven feedback created different levels of effectiveness between students studying STEM subjects versus students studying Language Education. Figure 16 shows the results of post-test scores among STEM students as well as Language students.
The findings revealed that STEM students benefited more from AI-generated feedback, with post-test scores reaching 81.4%, compared with 78.2% for Language Education students. Control group participants scored lower in both disciplines, at 67.2% in STEM and 64.8% in Language Education. Subject domain emerged as a significant factor affecting feedback effectiveness according to the ANOVA evaluation (p = 0.028).
Figure 17, a Sankey diagram, visualizes how AI-driven feedback complexity—categorized as low (basic error identification), moderate (hints and guided corrections), and high (adaptive learning path suggestions)—is distributed between STEM and Language Education students. The straight and curving lines represent the flow of feedback instances from the AI system to each subject domain, with line width proportional to the volume of feedback delivered (e.g., STEM students received a higher proportion of high-complexity feedback, approximately 50% of total instances, due to the analytical nature of topics like programming and physics). The curving reflects transitions between complexity levels and subject areas, illustrating that STEM students received more varied and complex feedback, while Language Education students predominantly received moderate feedback (e.g., 60% of instances focused on grammar and composition). This distribution aligns with the ANOVA findings (p = 0.028), highlighting subject-specific tailoring of AI feedback to address conceptual demands.
The AI-driven feedback engine consistently produced more complex feedback for students in STEM classes, adapting feedback depth to the subject matter. Language Education students received more standardized feedback focused primarily on linguistic aspects of their work.

5. Discussion

The overall impact of AI-driven adaptive feedback on student performance across multiple learning dimensions is illustrated in Figure 18. The plot visually demonstrates how students in the AI feedback group consistently outperformed those in the control group across key metrics, including conceptual mastery, engagement, cognitive load reduction, and retention rates.
These findings reinforce the argument that adaptive AI-driven feedback provides a personalized learning experience, reducing learning gaps and enhancing retention over time.

5.1. Adaptations in Feedback Efficiency

The AI-driven feedback group achieved significant gains in conceptual knowledge, in line with research supporting personalized learning systems. Adaptive learning technologies transform education through personalized instruction, leading to better student understanding and performance [12]. AI-driven adaptive learning programs have proven effective in improving learning outcomes because they provide instantaneous feedback and promote student independence [33].
The experimental group receiving AI-driven feedback exhibited a 28% improvement rate, surpassing the 14% improvement rate of the control group, demonstrating AI's capacity to roughly double the learning gains of traditional educational methods. Ref. [34] confirms that AI enhances education by delivering instant, tailored feedback.
The notably large 28% improvement in conceptual mastery may seem striking but is attributable to several factors: (1) the AI system’s ability to deliver real-time, personalized feedback tailored to individual misconceptions, particularly for struggling learners; (2) the focus on mixed-ability classrooms, where baseline performance gaps were wider, amplifying relative gains; and (3) the 20-week intervention period, allowing sustained engagement to compound learning effects. In contrast, the control group’s 14% gain reflects traditional feedback’s broader, less targeted approach, aligning with prior studies reporting modest improvements (e.g., Ref. [12]). This disparity underscores the AI’s precision in addressing specific conceptual deficits, though ceiling effects or initial low performance may also contribute.

5.2. Enhancement of Conceptual Mastery

The study findings corroborate existing evidence that students who frequently use AI-based systems show enhanced learning outcomes. AI-driven adaptive education systems establish custom learning paths and immediate feedback mechanisms that foster student independence and educational engagement [43]. Student commitment and retention rates increase when AI offers customized learning paths and instant assistance [40].
The results showed learning gains of 30% for students who engaged with AI-generated feedback at least 15 times per week, consistent with [38] on the role of immediate feedback in sustaining student interest and active learning.

5.3. Reduction in Cognitive Load

Research indicates that AI-driven feedback provides cognitive load reduction to students as personal instruction allows students to focus better. Students learn better when AI systems adjust their content presentation and deliver specific feedback since it decreases superfluous cognitive processing [8].
AI demonstrated its effectiveness through cognitive overload reductions of 22% in the experimental group, compared with only a 6% decrease in the control group.

5.4. Improved Retention Rates

Research on personalized feedback indicates that the higher retention rates measured in the AI-driven feedback group align with scientific documentation on knowledge retention practices. The analysis of student performance and engagement by AI systems helps identify students at risk so necessary interventions can be organized, which leads to increased retention rates [44].
Research findings show that AI-driven feedback led the experimental group to retain 85% of learned concepts while the control group maintained only 65%. This indicates that AI-powered feedback promotes both short-term results and long-term memory maintenance.

5.5. Discipline-Specific Variations

Students' responses to AI feedback varied by academic field, a result that supports studies emphasizing context-specific solutions. Research has shown that automated intelligence systems can enhance language education by offering customized, adaptive feedback in real time, developing students' writing capabilities [19]. Research also shows that AI adaptive learning systems improve STEM engagement and attainment through adaptive learning pathways that deliver immediate feedback to students [45].
Research shows STEM students gain the most benefits through advanced AI feedback systems, but language students demonstrate better outcomes with feedback of moderate complexity. This demonstrates why educators must adapt AI interventions according to the requirements of individual subjects.

5.6. Implications for Practice

This study yields several practical implications for instructors and policymakers. Incorporating AI-based adaptive feedback systems into educational curricula can help students master concepts, maintain engagement, lower cognitive demands, and raise overall learning achievement. Careful attention to teaching methods is required because digital tools must be designed to match disciplinary demands. The benefits of AI should augment human teaching rather than replace it; research shows that AI-generated feedback complemented by human intervention leads to improved student performance and engagement [46].

5.7. Limitations and Future Research

This research offers significant insights, yet its limitations call for further examination. The 20-week duration is too short to determine the sustained effects of AI feedback; longer-term studies would produce more complete results. The research also took place within a single institutional setting, so future investigations should examine its relevance across multiple educational environments and student demographics.
Future research should continue to investigate the ethical consequences of AI in education, including risks associated with data protection, algorithmic discrimination, and technology dependence. Students' perceptions of AI-based feedback systems will help developers create educational AI applications that are both beneficial and ethically sustainable.

6. Conclusions

This study evaluated how AI-driven adaptive feedback affects conceptual learning, student engagement, cognitive load, and retention in mixed-ability undergraduate settings. AI-generated feedback produced stronger learning outcomes than conventional instructor feedback, reinforcing the growing role of AI in personalized education. Students who received AI-driven feedback improved their conceptual mastery by 28%, compared with 14% in the control group, indicating that specific, real-time assistance tailored to individual needs outperforms traditional support. Engagement in the AI group rose by 35%, as students interacted with the feedback system more frequently; this finding is consistent with prior work showing that adaptive learning environments sustain motivation and educational commitment. AI-based feedback was also the main driver of cognitive overload reduction, with a 22% decrease in the experimental group against 6% in the control group, suggesting that feedback matched to each learner's pace and structure improves both knowledge acquisition and cognitive efficiency. The benefits persisted over time: experimental participants retained 85% of conceptual understanding, compared with 65% in the control group. These results add to the evidence that AI-driven assessment can improve learning outcomes and narrow academic learning gaps. Further research is needed on the scalability, ethical implications, and long-term effects of such interventions across different learning environments.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/app15084473/s1. All Supplementary Materials are provided in a compressed ZIP file attached to this submission. The ZIP file includes the Language Education Assessment Test, STEM Assessment Test, Delayed Recall Test, Semi-Structured Interview Guide for Students, Student Engagement Survey, IRB Approval letter, Participant Consent Form, Cognitive Load Questionnaire, and Pre-Test/Post-Test Assessment Framework. These materials offer additional insights into the research instruments, ethical approvals, and assessment framework used in the study.

Author Contributions

Conceptualization, F.N. and S.K.; methodology, F.N. and S.K.; software, F.N.; validation, F.N. and S.K.; formal analysis, F.N. and S.K.; investigation, F.N.; resources, F.N. and S.K.; data curation, F.N.; writing—original draft preparation, S.K. and F.N.; writing—review and editing, F.N. and S.K.; visualization, F.N.; supervision, F.N.; project administration, S.K.; funding acquisition, S.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of Beaconhouse International College (protocol code IRB-24/2741-EDU-113; date of approval: 27 July 2024).

Informed Consent Statement

Informed consent was obtained from all participants involved in the study.

Data Availability Statement

All datasets and instruments used in this study are included in the Supplementary Materials submitted with this manuscript. Requests for more detailed data should be directed to the corresponding author.

Acknowledgments

The authors extend their sincere gratitude to the students of Beaconhouse International College for their active participation in this study. Their valuable contributions and engagement were instrumental in the successful completion of this research.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Zawacki-Richter, O.; Marín, V.I.; Bond, M.; Gouverneur, F. Systematic review of research on artificial intelligence applications in higher education—Where are the educators? Int. J. Educ. Technol. High. Educ. 2019, 16, 39.
2. Baker, R.S.; Inventado, P.S. Educational data mining and learning analytics. In Learning Analytics; Springer: New York, NY, USA, 2014; pp. 61–75.
3. Mane, P.A.; Jagtap, D.S. AI-enhanced educational platform for personalized learning paths, automated grading, and real-time feedback. Int. J. Sci. Res. Eng. Manag. 2024, 8, 1–7.
4. Chigbu, B.I.; Umejesi, I.; Makapela, S.L. Adaptive intelligence revolutionizing learning and sustainability in higher education. In Implementing Interactive Learning Strategies in Higher Education; IGI Global: Hershey, PA, USA, 2024; pp. 151–176.
5. Salameh, W.A.K. Exploring the impact of AI-driven real-time feedback systems on learner engagement and adaptive content delivery in education. Int. J. Sci. Res. Arch. 2025, 14, 98–104.
6. Sotirov, M.; Petrova, V.; Nikolova-Sotirova, D. Personalized gamified education: Feedback mechanisms and adaptive learning paths. In Proceedings of the 2024 8th International Symposium on Innovative Approaches in Smart Technologies (ISAS), Istanbul, Turkey, 6–7 December 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–6.
7. Sari, H.E.; Tumanggor, B.; Efron, D. Improving educational outcomes through adaptive learning systems using AI. Int. Trans. Artif. Intell. (ITALIC) 2024, 3, 21–31.
8. Mishra, M.S. Revolutionizing education: The impact of AI-enhanced teaching strategies. Int. J. Res. Appl. Sci. Eng. Technol. 2024, 12, 9–32.
9. Inthanon, W.; Wised, S. Tailoring education: A comprehensive review of personalized learning approaches based on individual strengths, needs, skills, and interests. J. Educ. Learn. Rev. 2024, 1, 35–46.
10. Tabassum, S. Managing mixed-ability classrooms. In Futuristic Trends in Social Sciences Volume 3 Book 28; Iterative International Publishers: Chikkamagaluru, India; Selfypage Developers Pvt Ltd.: Karnataka, India, 2024; pp. 34–41.
11. Singh, B.; Singh, G.; Ravesangar, K. Teacher-focused approach to merging intelligent tutoring systems with adaptive learning environments in AI-driven classrooms. In Advances in Higher Education and Professional Development; IGI Global: Hershey, PA, USA, 2024; pp. 77–100.
12. Sajja, R.; Sermet, Y.; Cikmaz, M.; Cwiertny, D.; Demir, I. Artificial intelligence-enabled intelligent assistant for personalized and adaptive learning in higher education. Information 2024, 15, 596.
13. Ateş, H.; Gündüzalp, C. Proposing a conceptual model for the adoption of artificial intelligence by teachers in STEM education. Interact. Learn. Environ. 2025, 1–27.
14. Zhai, X. Conclusions and foresight on AI-based STEM education. In Uses of Artificial Intelligence in STEM Education; Oxford University Press: Oxford, UK, 2024; pp. 581–588.
15. Tariq, R.; Aponte Babines, B.M.; Ramirez, J.; Alvarez-Icaza, I.; Naseer, F. Computational thinking in STEM education: Current state-of-the-art and future research directions. Front. Comput. Sci. 2025, 6, 1480404.
16. Mohamed, A.M.; Shaaban, T.S.; Bakry, S.H.; Guillén-Gámez, F.D.; Strzelecki, A. Empowering the faculty of education students: Applying AI’s potential for motivating and enhancing learning. Innov. High. Educ. 2024.
17. Godwin-Jones, R.; O’Neill, E.; Ranalli, J. Integrating AI tools into instructed second language acquisition. In Exploring AI in Applied Linguistics; Iowa State University Digital Press: Ames, IA, USA, 2024; pp. 9–23.
18. Fazal, N.; Tahir, M.S.; Chaudhary, M.; Abbasi, M. Effectiveness of AI integration into computer-assisted language learning (CALL) on student writing skills based on gender. Pak. J. Humanit. Soc. Sci. 2024, 12, 224–230.
19. Kenshinbay, T.; Ghorbandordinejad, F. Exploring AI-driven adaptive feedback in the second language writing skills prompt. EIKI J. Eff. Teach. Methods 2024, 2.
20. Chan, S.; Lo, N.; Wong, A. Leveraging generative AI for enhancing university-level English writing: Comparative insights on automated feedback and student engagement. Cogent Educ. 2024, 12, 2440182.
21. Addas, A.; Naseer, F.; Tahir, M.; Khan, M.N. Enhancing higher-education governance through telepresence robots and gamification: Strategies for sustainable practices in the AI-driven digital era. Educ. Sci. 2024, 14, 1324.
22. Hartman, R.J.; Townsend, M.B.; Jackson, M. Educators’ perceptions of technology integration into the classroom: A descriptive case study. J. Res. Innov. Teach. Learn. 2019, 12, 236–249.
23. Pappa, C.I.; Georgiou, D.; Pittich, D. Technology education in primary schools: Addressing teachers’ perceptions, perceived barriers, and needs. Int. J. Technol. Des. Educ. 2023, 24, 485–503.
24. Theodorio, A.O. Examining the support required by educators for successful technology integration in teacher professional development program. Cogent Educ. 2024, 11, 2298607.
25. Patidar, P.; Ngoon, T.; Vogety, N.; Behari, N.; Harrison, C.; Zimmerman, J.; Ogan, A.; Agarwal, Y. Edulyze: Learning analytics for real-world classrooms at scale. J. Learn. Anal. 2024, 11, 297–313.
26. Bhavsar, S.S.; Dixit, D.; Sthul, S.; Mane, P.; Suryawanshi, S.S.; Dhawas, N. Exploring the role of big data analytics in personalizing e-learning experiences. Adv. Nonlinear Var. Inequalities 2024, 27, 571–583.
27. Zavodna, M.; Mrazova, M.; Poruba, J.; Javorcik, T.; Guncaga, J.; Havlaskova, T.; Tran, D.; Kostolanyova, K. Microlearning: Innovative digital learning for various educational contexts and groups. Eur. Conf. E-Learn. 2024, 23, 442–450.
28. Belete, Y. The link between students’ community engagement activities and their academic achievement. Educatione 2024, 3, 61–84.
29. Ajani, O.A. Exploring the alignment of professional development and classroom practices in African contexts: A discursive investigation. J. Integr. Elem. Educ. 2023, 3, 120–136.
30. Nuangchalerm, P. AI-driven learning analytics in STEM education. Int. J. Res. STEM Educ. 2023, 5, 77–84.
31. Jangili, A.; Ramakrishnan, S. The role of machine learning in enhancing personalized online learning experiences. Int. J. Data Min. Knowl. Manag. Process 2024, 14, 1–17.
32. Vázquez-Parra, J.C.; Tariq, R.; Castillo-Martínez, I.M.; Naseer, F. Perceived competency in complex thinking skills among university community members in Pakistan: Insights across disciplines. Cogent Educ. 2024, 12, 2445366.
33. Naseer, F.; Khan, M.N.; Tahir, M.; Addas, A.; Aejaz, S.M.H. Integrating deep learning techniques for personalized learning pathways in higher education. Heliyon 2024, 10, e32628.
34. Demartini, C.G.; Sciascia, L.; Bosso, A.; Manuri, F. Artificial intelligence bringing improvements to adaptive learning in education: A case study. Sustainability 2024, 16, 1347.
35. Alsolami, A.S. The effectiveness of using artificial intelligence in improving academic skills of school-aged students with mild intellectual disabilities in Saudi Arabia. Res. Dev. Disabil. 2025, 156, 104884.
36. Okonkwo, C.W.; Ade-Ibijola, A. Chatbots applications in education: A systematic review. Comput. Educ. Artif. Intell. 2021, 2, 100033.
37. Kingchang, T.; Chatwattana, P.; Wannapiroon, P. Artificial intelligence chatbot platform: AI chatbot platform for educational recommendations in higher education. Int. J. Inf. Educ. Technol. 2024, 14, 34–41.
38. Hadiprakoso, R.B.; Sihombing, R.P.P. CodeGuardians: A gamified learning for enhancing secure coding practices with AI-driven feedback. Ultim. InfoSys J. Ilmu Sist. Inf. 2024, 15, 105–112.
39. Saraswat, D. AI-driven pedagogies: Enhancing student engagement and learning outcomes in higher education. Int. J. Sci. Res. (IJSR) 2024, 13, 1152–1154.
40. Kushwaha, P.; Namdev, D.; Kushwaha, S.S. SmartLearnHub: AI-driven education. Int. J. Res. Appl. Sci. Eng. Technol. 2024, 12, 1396–1401.
41. Henderson, J.; Corry, M. Data literacy training and use for educational professionals. J. Res. Innov. Teach. Learn. 2020, ahead-of-print.
42. Hu, J. The challenge of traditional teaching approach: A study on the path to improve classroom teaching effectiveness based on secondary school students’ psychology. Lect. Notes Educ. Psychol. Public Media 2024, 50, 213–219.
43. Eltahir, M.E.; Mohd Elmagzoub Babiker, F. The influence of artificial intelligence tools on student performance in e-learning environments: Case study. Electron. J. E-Learn. 2024, 22, 91–110.
44. Villegas-Ch, W.; Govea, J.; Revelo-Tapia, S. Improving student retention in institutions of higher education through machine learning: A sustainable approach. Sustainability 2023, 15, 14512.
45. Falebita, O.S.; Kok, P.J. Strategic goals for artificial intelligence integration among STEM academics and undergraduates in African higher education: A systematic review. Discov. Educ. 2024, 3, 151.
46. Hansen, R.; Prilop, C.N.; Alsted Nielsen, T.; Møller, K.L.; Frøhlich Hougaard, R.; Büchert Lindberg, A. The effects of an AI feedback coach on students’ peer feedback quality, composition, and feedback experience. Tidsskr. Læring Og Medier (LOM) 2025, 17.
Figure 1. AI-driven feedback closes learning gaps in education.
Figure 2. Main dashboard of AI-driven feedback system.
Figure 3. AI-driven feedback system query and analysis.
Figure 4. Instructor analytics and monitoring.
Figure 5. AI learning path and personalized recommendations.
Figure 6. AI-generated reports and feedback history.
Figure 7. Live chat and AI tutor for students.
Figure 8. Student performance dashboard.
Figure 9. Engagement analytics.
Figure 10. Student comparative analysis.
Figure 11. AI-powered study plan and future recommendations.
Figure 12. Comparison of students’ pre- and post-test scores with AI feedback and without it.
Figure 13. Students’ engagement scores with AI feedback and without it.
Figure 14. Comparison of pre- and post-intervention cognitive load.
Figure 15. Relationship between initial post-test scores and delayed recall test scores, with average weekly AI feedback interactions indicated for the experimental group and traditional feedback for the control group.
Figure 16. Post-test scores with and without AI feedback for STEM and language education students.
Figure 17. Sankey diagram illustrating the flow of AI-driven feedback complexity (low, moderate, high) to STEM and Language Education students, with line width representing feedback volume.
Figure 18. Overall impact of AI-driven adaptive feedback on student performance across multiple learning dimensions.
Table 1. Comparison of AI-Driven Feedback Studies.

| Study | Focus | Conceptual Mastery Improvement | Engagement Improvement | Retention Improvement | Comparison to Current Study |
|---|---|---|---|---|---|
| [13] | AI in STEM education | Significant (quantitative not specified) | Not measured | Not measured | Our study quantifies 28% improvement, extends to mixed-ability settings. |
| [12] | Personalized AI feedback | Improved (no % specified) | Increased (qualitative) | Enhanced (qualitative) | We provide specific metrics (28%, 35%, 85%) across diverse disciplines. |
| [34] | Adaptive learning case study | Not specified | Significant (metrics not detailed) | Reduced dropout rates | Our 35% engagement increase and 85% retention rate offer precise, scalable evidence. |
| [16] | AI tutors in higher education | Improved understanding (no %) | Enhanced (qualitative) | Not measured | We add cognitive load reduction (22%) and broader applicability (STEM + Language). |
| Current Study (2025) | AI feedback in mixed-ability classrooms | 28% improvement | 35% increase | 85% retention | Builds on prior work with quantified outcomes, learning analytics, and diverse student needs. |
Table 2. Research design overview.

| Variable | Experimental Group | Control Group |
|---|---|---|
| Feedback Type | AI-Driven Adaptive Feedback | Traditional Instructor Feedback |
| Number of Participants | 350 | 350 |
| Learning Duration | 20 Weeks | 20 Weeks |
| Assessment Type | AI-based adaptive quizzes, real-time feedback | Instructor-led feedback on quizzes and assignments |
| Engagement Metrics | AI-logged interactions, error corrections, adaptive suggestions | Attendance, participation logs |
Table 3. Participant demographics.

| Demographic Variable | Experimental Group (n = 350) | Control Group (n = 350) | Total (N = 700) |
|---|---|---|---|
| Gender | 52% Male, 48% Female | 50% Male, 50% Female | 51% Male, 49% Female |
| Age Range (Mean ± SD) | 18–24 (20.8 ± 2.1) | 18–24 (21.2 ± 2.3) | 18–24 (21.0 ± 2.2) |
| Field of Study | 60% STEM, 40% Language | 58% STEM, 42% Language | 59% STEM, 41% Language |
| Prior AI Learning Experience | 42% Yes, 58% No | 40% Yes, 60% No | 41% Yes, 59% No |
| Average GPA (Mean ± SD) | 2.85 ± 0.6 | 2.92 ± 0.7 | 2.89 ± 0.65 |
Table 4. Participant selection and grouping.

| Group | Number of Students | Learning Method |
|---|---|---|
| AI-Driven Group | 350 | AI Adaptive Feedback System |
| Control Group | 350 | Traditional Instructor Feedback |
Table 5. Pre- and post-study assessment information.

| Assessment Type | Number of Questions | Topics Covered | Score Range |
|---|---|---|---|
| Pre-Study Test | 50 | Baseline Conceptual Knowledge | 0–100% |
| Post-Study Test | 50 | Applied Understanding | 0–100% |
Table 6. Survey questions sample.

| Survey Question | Scale (1–5) | Data Type |
|---|---|---|
| “How useful was the AI feedback in improving your understanding?” | 1 (Not at all)–5 (Extremely) | Quantitative |
| “Did AI-driven feedback reduce your learning anxiety?” | 1 (Not at all)–5 (Significantly) | Quantitative |
| “Which features of the AI feedback system did you find most helpful?” | Open-ended | Qualitative |
Table 7. Descriptive statistics of pre- and post-test scores.

| Group | Pre-Test Mean (SD) | Post-Test Mean (SD) | Improvement (%) |
|---|---|---|---|
| Experimental (AI-Driven Feedback, n = 350) | 52.4 (±12.3) | 80.2 (±10.4) | 28% |
| Control (Instructor Feedback, n = 350) | 51.8 (±11.7) | 65.2 (±9.8) | 14% |

Note: Improvement (%) corresponds to the difference between post- and pre-test means in percentage points. Pre- and post-tests covered STEM themes (Computer Science: programming fundamentals; Mathematics: algebra; Physics: mechanics) and Language Education (English: grammar, vocabulary, composition). Each test lasted approximately 60 min.
Table 8. Engagement metrics across performance groups.

| Performance Group | Average Weekly AI Interactions | Engagement Increase (%) |
|---|---|---|
| High Performers (Top 20%) | 18.2 | 35% |
| Moderate Performers (Middle 60%) | 12.4 | 22% |
| Low Performers (Bottom 20%) | 6.3 | 8% |

Note: Engagement increase is defined as the percentage growth in average weekly interactions (e.g., logins, feedback reviews, task completions) from the baseline (Week 1) to the study average, measured via system logs.
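The following sketch illustrates the engagement-increase definition given in the note above, computing the percentage growth from the Week 1 baseline to the study-average weekly interaction count; the weekly counts used are hypothetical, not values from the system logs.

```python
def engagement_increase(weekly_interactions):
    """Percentage growth from the Week 1 baseline to the study-average
    weekly interaction count, following the definition in the table note.

    weekly_interactions: one combined interaction count per week
    (logins, feedback reviews, task completions); values are illustrative.
    """
    baseline = weekly_interactions[0]
    study_average = sum(weekly_interactions) / len(weekly_interactions)
    return 100 * (study_average - baseline) / baseline

# A student starting at 12 interactions in Week 1 and averaging about 16.2
# over the study would show roughly a 35% increase.
print(round(engagement_increase([12, 14, 16, 17, 18, 18, 18.5]), 1))
```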
Table 9. Post-test scores by subject.

| Group | STEM (n = 420) | Language (n = 280) | p-Value |
|---|---|---|---|
| AI Feedback Group | 81.4 (±9.5) | 78.2 (±10.1) | p = 0.028 |
| Instructor Feedback Group | 67.2 (±8.4) | 64.8 (±9.3) | p = 0.036 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
