Education Sciences
  • Article
  • Open Access

12 January 2026

Adaptive and Personalized Learning in Higher Education: An Artificial Intelligence-Based Approach

1 Department of Information Technology, Universidad Latina de América, Morelia 58188, Michoacán, Mexico
2 Facultad de Ciencias Físico Matemáticas, Universidad Michoacana de San Nicolás de Hidalgo, Morelia 58030, Michoacán, Mexico
3 Facultad de Ingeniería Eléctrica, Universidad Michoacana de San Nicolás de Hidalgo, Morelia 58030, Michoacán, Mexico
* Author to whom correspondence should be addressed.

Abstract

The integration of Artificial Intelligence (AI) in higher education offers a potential solution to the challenge of scaling personalized learning, yet empirical frameworks connecting diagnostic data with teacher-mediated interventions remain limited in developing contexts. This study adopts a sequential multi-phase research design to address this gap. Phase 1 comprised a diagnostic quantitative analysis of the National Survey on Access and Permanence in Education (ENAPE 2021), involving a representative sample of 3422 Mexican undergraduate students. Using Exploratory Factor Analysis (KMO = 0.96) and Pearson correlations, the study established a structural baseline. Phase 2 implemented a quasi-experimental exploratory pilot (N = 23) across two academic clusters (Civil Engineering and Nutrition) using "ActivAI", a custom GPT configured with Retrieval-Augmented Generation (RAG). Results from Phase 1 revealed a strong, statistically significant correlation (r = 0.72, p < 0.01) between the perceived impact of education on daily life and the perception of equity, identifying "relevance" as a key driver of accessibility. Phase 2 results demonstrated high student satisfaction with AI-driven personalization (M = 4.49, SD = 0.64), although disciplinary variations in engagement were observed (SD = 0.85 in Nutrition versus 0.45 in Engineering). The study concludes by proposing the Dynamic Integration Model, which leverages AI not as a replacement for instruction but as a scalability toolkit for teacher-led orchestration, ensuring that personalization addresses dynamic student needs rather than static learning styles.

1. Introduction

As technological innovation accelerates, traditional teaching methodologies are being reexamined in favor of student-centered approaches that can navigate the complexities of mass education. The implementation of adaptive and personalized learning strategies reflects a paradigmatic shift in how higher education institutions attempt to accommodate individual learning needs within standardized curricula (Bombaerts & Vaessen, 2022). However, a conceptual distinction is necessary: while adaptive learning involves the algorithmic adjustment of content and pacing according to a student’s specific progress, personalized learning encompasses a broader pedagogical aim—creating educational experiences that are relevant, meaningful, and connected to the learner’s context.
Integrating these approaches presents a significant logistical challenge. Critics rightly argue that manually tailoring content for large cohorts places an unsustainable burden on already overloaded instructors. In this context, Advanced Learning Technologies (ALTs), particularly those driven by Artificial Intelligence (AI), offer a potential solution not by replacing the teacher, but by scaling their capacity to attend to diverse needs. As noted by Srinivasa et al. (2022), AI technologies can provide real-time recommendations and feedback mechanisms that would be logistically impossible for a human instructor to manage alone in high-ratio settings.
Despite this potential, the transition to AI-enhanced education is not merely technical but deeply structural. In the post-pandemic landscape, particularly in developing regions like Mexico, the digital divide and the lack of scalable pedagogical frameworks have exacerbated educational inequalities. While higher education institutions are beginning to integrate these technologies to enhance accessibility, the literature remains fragmented regarding how to operationalize AI without diminishing the teacher’s agency or ignoring ethical constraints.
Current scholarship suggests that AI plays a critical role by enabling large-scale personalization; however, valid concerns persist regarding the “black box” nature of these tools and their actual impact on equity. If implemented correctly within a Hybrid Intelligence framework—where AI handles data processing and content variation while teachers orchestrate the pedagogical strategy—this development has the potential to foster a more inclusive environment. As Karam (2023) observes, such integration can also support greater student engagement and motivation, provided it is grounded in pedagogical relevance rather than novelty.
Synthesizing these perspectives reveals a critical gap in the literature: while the theoretical benefits of AI personalization are well-documented, there is a scarcity of empirical frameworks that connect diagnostic data with teacher-mediated AI interventions in contexts characterized by structural inequality. Addressing this gap, the primary objective of this study is to propose and pilot the Dynamic Integration Model, a framework that utilizes secondary data as a diagnostic baseline to inform a teacher-led, AI-supported intervention. Consequently, this research is guided by the central question of to what extent AI-driven adaptive learning activities, when implemented through teacher-guided processes, can meaningfully improve accessibility and perceived equity among higher-education students in Mexico.

2. Theoretical Framework

Adapting to new student-centered learning approaches remains vital. Recent analyses, such as those by Chans et al. (2023), indicate that structural, academic, and socio-emotional impacts persist well beyond the return to in-person instruction. These include widening digital divides, increased dropout rates, documented psychological distress among students and faculty, and the absence of a coordinated national recovery framework. Consequently, a comprehensive recovery strategy remains necessary. Such a strategy should incorporate diagnostic tools, remedial and bridging courses, and personalized assessments aimed at addressing persistent learning gaps (Hevia et al., 2022; Ortiz-Gallegos, 2020; Pozo et al., 2024; Ramírez Montoya et al., 2023). It is also essential not to overlook the socioemotional well-being of teachers and students.
These distance learning challenges were not unique to Mexico. Evidence from Angrist et al. (2022) shows that countries across multiple regions—including Latin America, North America, Europe, Asia, and Africa—experienced comparable difficulties during the global shift to emergency remote education. Their large-scale, cross-country analysis reveals similar patterns in learning losses, access barriers, and student attitudes, suggesting that the challenges observed in Mexico were part of a broader worldwide phenomenon rather than isolated circumstances. In Mexico, the transition to online education revealed student attitudes and motivations toward online learning similar to those found in the other countries studied (Angrist et al., 2022). Notably, self-efficacy, understood as students’ belief in their capability to successfully perform academic tasks, emerged as a significant moderator of cognitive engagement in online learning environments (Aguilera-Hermida et al., 2021).

2.1. Foundations of Adaptive Learning and Scalability Mechanisms

According to Alamri et al. (2021), personalized learning can be understood through three foundational components: Adaptive Learning Models, Learning Analytics, and Blended Learning Platforms. Their framework highlights how these elements work together to support instructional designs that respond to students’ individual needs and learning trajectories. Adaptive learning models tailor content and activities to each student’s needs, improving retention and comprehension. Learning analytics leverage data to provide targeted feedback, facilitating timely interventions and adjusting teaching strategies. Blended learning platforms combine online and face-to-face instruction, offering flexibility and an inclusive environment. Taken together, we argue that these approaches can yield more effective and personalized learning experiences, underscoring the importance of continuous adaptation and of advanced technologies in addressing the diverse needs of students in higher education.
However, properly addressing these diverse needs requires a scientifically grounded understanding of learner variability. While earlier educational frameworks often relied on the concept of "learning styles," contemporary neuroscience has challenged the validity of categorizing learners into rigid modalities, highlighting the lack of neurological evidence to support such classifications (Goswami & Bryant, 2012). Consequently, current research on personalized learning in digital environments has shifted away from these labels, emphasizing instead the need to adapt instruction to meaningful and dynamic learner characteristics. In this vein, Shemshack and Spector (2020) identify a set of empirically grounded elements to guide personalization, including students’ prior knowledge, cognitive strengths, learning preferences, motivational factors, and the ways in which they process information. These constructs provide a more robust foundation for designing adaptive learning systems capable of offering differentiated pathways and tailored content, enhancing educational effectiveness while aligning with each learner’s unique profile. Despite the pedagogical promise of these adaptive models, their implementation is not without obstacles. Higher education faces several challenges on a global scale; among the most significant is the inequality in technological adaptation. The variability in access to technology and digital skills among institutions, teachers, and students has created a considerable gap in educational quality (Amemasor et al., 2025; Hadar Shoval, 2025; Núñez-Canal et al., 2022).
Preparing students for the digital and sustainable economy requires the development of advanced twenty-first-century competencies, including technical, cognitive, and critical-thinking skills (Wang et al., 2023). Meeting these demands also depends on the continuous strengthening of Educators’ Digital Competence (EDC), as instructors must update their pedagogical and technological skills to effectively integrate digital tools into their teaching (Miranda et al., 2021). At the institutional level, universities face increasing pressure to adopt hybrid models that combine face-to-face and online learning in flexible and meaningful ways. However, when these models are implemented without corresponding innovations in instructional design, a pedagogical misalignment can emerge, in which course structures fail to address learner variability or support differentiated learning pathways (Almusaed et al., 2023). This gap often results in reduced engagement and feelings of disconnection among students, particularly when instructional materials remain homogeneous and place excessive responsibility on learners to navigate their own progress without adaptive or personalized guidance. Similarly, conventional e-learning environments tend to follow a “one-size-fits-all” approach, which negatively impacts student engagement and performance (El-Sabagh, 2021; Gopal et al., 2021; Kabudi et al., 2021; Wang et al., 2023). Reinforcing this view, Shemshack et al. (2021) warn that when instructional materials ignore learner characteristics—such as prior knowledge and interests—students perceive the content as less meaningful, diminishing motivation.
To address this disconnection, recent scholarship positions Generative AI as a pivotal tool for contextualization rather than mere content delivery. Baidoo-Anu and Owusu Ansah (2023) argue that generative models offer unprecedented capabilities to simulate real-world scenarios and dynamically reframe abstract curricular concepts into narratives that resonate with students’ specific backgrounds. By transforming static instructional materials into interactive, context-aware resources, these tools can bridge the gap between standardized curriculum requirements and the learner’s need for personal relevance, effectively mitigating the engagement deficits caused by homogeneous instruction.
Along these lines, Almeida et al. (2024) note that personalization has positive effects on learning non-aversive topics—that is, topics not associated with emotionally challenging or stressful situations—since personalization can make the material more accessible and relevant for students.
Kabudi et al. (2021) present adaptive learning as an AI-enabled approach capable of adjusting instructional pathways in real time based on learner data, performance patterns, and interaction histories. Their work highlights how AI systems support individualized learning trajectories by continuously analyzing student behavior and providing tailored content, feedback, and pacing.
Gligorea et al. (2023) identify a variety of adaptive learning approaches in the research literature, including system architectures, conceptual frameworks, and analytical techniques such as Bayesian networks and neural networks. However, most of these developments remain at the experimental or pilot stage and are not yet widely implemented in higher education institutions. Their adoption requires substantial investment in digital infrastructure, teacher training, and institutional capacity.
Minn (2022) demonstrates that combining knowledge-assessment models, learning analytics, and blended learning platforms can enhance personalization, but these integrations also depend on resources, organizational readiness, and sustained professional development. As a result, the practical implementation of AI-driven adaptive learning remains uneven, highlighting the gap between research innovations and large-scale deployment in educational systems. Building on this work, integrating AI-driven adaptive mechanisms into educational platforms can enable large-scale personalization by adjusting content and feedback according to each learner’s evolving knowledge state. Evidence shows that adaptive learning environments—supported by techniques such as knowledge tracing, item response theory, and intelligent tutoring systems—are generally more effective than traditional instruction in improving learning outcomes. However, these approaches do not fully address equity challenges on their own, even though they offer evidence-based methods for delivering more individualized and responsive learning experiences.
From an implementation perspective, a decisive factor for the adoption of personalized learning is scalability. Addressing the logistical challenge of implementation in large-enrollment courses (e.g., cohorts of 60 to 200 students), recent literature emphasizes that AI-driven adaptivity functions as a scalability mechanism rather than a manual burden. As argued by Molenaar (2022) in the context of hybrid intelligence, the goal is not for the teacher to manually construct distinct materials for every student, but to orchestrate algorithmic systems that can autonomously generate variations based on diagnostic data. Furthermore, effective scalability in large classes often relies on dynamic clustering, where AI identifies subgroups of students with similar learning gaps Peng et al. (2019), allowing the instructor to deploy targeted interventions for clusters rather than individuals. This approach, validated by Tretiak et al. (2025), confirms that Generative AI significantly reduces the time required for resource creation and feedback, making personalized attention viable even in high-ratio educational settings.
Complementing this content adaptation, the logistical feasibility of the model is further reinforced by the automation of formative feedback loops. In traditional high-enrollment settings, providing timely, personalized feedback to every student is often the primary operational bottleneck. However, Lim et al. (2023) indicate that Generative AI can effectively function as a first-tier evaluator, delivering immediate, scaffolded responses to student inquiries or submissions at a scale unattainable for human instructors alone. This capability enables a shift in the teacher’s role from high-volume manual correction to strategic oversight.
Crucially, this automation does not diminish the instructor’s influence on student affect. Chiu et al. (2024) demonstrate that while AI-based chatbots can provide immediate cognitive scaffolding, student motivation and self-determination are significantly higher when this technological support is perceived as being orchestrated by a supportive teacher. Their findings suggest that the most effective adaptive environments are those where AI provides the scalability of feedback, while the teacher ensures the social-emotional connection, creating a symbiotic support system that enhances both performance and intrinsic motivation. By relying on aggregated performance data—often visualized through teacher-facing dashboards—educators can identify class-wide misconceptions in real time and adjust the collective instructional strategy (Molenaar, 2022). Consequently, the teacher modulates the learning environment based on synthesized patterns rather than being overwhelmed by the granular processing of individual transactions.

2.2. Ethical, Technical, and Governance Challenges

However, despite the potential benefits, recent systematic reviews on AI in education emphasize that ethical and technical limitations are not merely peripheral issues but central constraints for large-scale adoption. Zhu et al. (2025) synthesize evidence across multiple educational contexts, identifying recurrent risks such as algorithmic bias, a lack of transparency in decision-making processes, and unequal power relations between technology providers and educational institutions. These reviews demonstrate that fairness, accountability, and explainability remain insufficiently addressed in many AI systems deployed in education. This deficiency raises critical questions regarding who truly benefits from personalization and whose interests are prioritized when these models are trained and implemented.
Inextricably linked to these ethical risks is the datafication of learning, which brings privacy and surveillance concerns to the forefront. Studies on AI ethics underscore that while continuous tracking of student interactions and behaviors enables richer analytics, it also creates new vulnerabilities if data governance is weak or opaque (An et al., 2024). For instance, Peña et al. (2024) argue that balancing the pedagogical benefits of AI with robust protections for privacy, consent, and data security has become a core challenge for educational systems. This is particularly pressing when commercial platforms collect and repurpose student data beyond its original instructional context.
Finally, from a governance perspective, the large-scale implementation of AI in higher education remains constrained by infrastructure, institutional capacity, and user acceptance. Comprehensive reviews document that structural barriers—such as limited connectivity, insufficient technical support, and the uneven distribution of resources—hinder the effective integration of advanced AI tools, especially in low-resource settings (Garzón et al., 2025). Furthermore, concerns regarding privacy, bias, and reliability can erode trust among teachers and students. This reality reinforces the need for robust institutional policies and participatory governance mechanisms that guide responsible AI adoption, ensuring that decisions are not left solely to vendors or individual instructors, but are grounded in pedagogical strategy.

2.3. Teacher Agency and Pedagogical Limits

Given the ethical and technical constraints outlined above, empirical and review studies consistently indicate that AI reshapes, rather than replaces, the role of teachers in higher education. Systematic analyses of generative AI in educational settings demonstrate that, although AI can support content generation, feedback, and routine tasks, teachers remain irreplaceable in providing nuanced guidance, socioemotional support, and ethical judgment (Li et al., 2025; Rifah & Zamahsari, 2022). These findings suggest that AI is best understood as an augmentative tool. This perspective redistributes certain instructional tasks while reinforcing the critical need for teacher agency in orchestrating learning activities and making context-sensitive pedagogical decisions.
This reconfiguration of roles is further evidenced by studies focusing on teachers’ own perspectives, which report that AI-driven tools are significantly altering classroom dynamics and instructional design expectations. Adhikari and Pandey (2025) show that teachers perceive AI as a catalyst for changing how they foster student agency, shifting their primary role from knowledge transmitters toward active facilitators. In this capacity, educators curate resources, design AI-supported tasks, and help students interpret algorithmic recommendations. Similarly, Tretiak et al. (2025) note that these technologies require teachers to coordinate human and machine feedback, mediating between automated suggestions and learners’ individual needs rather than simply delegating instruction to digital systems.
To navigate this evolving landscape effectively, the importance of teachers’ AI literacy and continuous professional development cannot be overstated. Research on role transformation in the AI era argues that educators require not only technical skills but also a critical understanding of how AI systems function, the assumptions underpinning their recommendations, and how to integrate them into coherent pedagogical frameworks (H. T. Du & Wang, 2025). Scoping reviews further emphasize that teachers are expected to guide students in using AI responsibly, contextualizing AI-generated outputs, and safeguarding higher-order learning goals—such as critical thinking and academic integrity—even as AI becomes pervasive in everyday study practices (Xia et al., 2024).
Recent meta-analyses of AI in higher education highlight key pedagogical and epistemological constraints of personalization. A comprehensive review by Bond et al. (2024) reveals that while adaptive and predictive systems are prevalent, less attention has been paid to how such systems impact shared learning trajectories, curriculum coherence, and higher-order disciplinary thinking. These authors argue that personalization driven by algorithmic decision-making may inadvertently deepen inequalities when models rely on incomplete or biased data sets or when learner pathways become overly fragmented. In this way, adaptive systems may excel at optimizing short-term task performance, but they do not automatically promote the deeper understanding of discipline-specific epistemologies or broader educational aims.
In the specific context of Generative AI, the proliferation of these tools has intensified concerns regarding students’ cognitive dependence and the potential erosion of critical thinking. Systematic reviews indicate that an overreliance on generative models can reduce cognitive reflection, encourage superficial engagement with content, and introduce hallucinated information that is difficult for novice learners to detect (Melisa et al., 2025; Salido et al., 2025). Experimental studies with university students reinforce this concern, showing that while AI can support problem-solving in complex tasks, it poses significant risks to independent reasoning if learners are not explicitly guided to question, verify, and reinterpret AI-generated suggestions rather than accepting them at face value (X. Du et al., 2025).
Consequently, Generative AI challenges existing assessment practices, raising fundamental questions about authorship, academic integrity, and the nature of evidence of learning. Bittle and El-Gayar (2025) synthesize research on this tension, documenting the conflict between the productive uses of AI for feedback and drafting versus the risks of opaque authorship and plagiarism. Complementary analyses argue that institutions must rethink evaluation designs, placing greater emphasis on process, metacognitive skills, and dialogic assessment rather than solely on static written outputs that can be easily produced by AI (Xia et al., 2024). Together, these studies suggest that the pedagogical value of personalization depends not only on technical capabilities but on how institutions redefine assessment and learning evidence in AI-rich environments.
Synthesizing these perspectives points to a persistent gap in the literature: although the theoretical benefits and risks of AI-enabled personalization are increasingly well documented, there are still relatively few empirically grounded frameworks that integrate AI-driven adaptivity with explicit, teacher-mediated pedagogical oversight to address these epistemological and ethical concerns. In particular, there is a need for models that leverage AI not to replace the instructor, but to augment their practice through structured, data-informed insights—such as those derived from large-scale educational assessments like ENAPE—while preserving teacher agency in instructional design and classroom decision-making. To address this gap, the present study proposes and examines a teacher-in-the-loop adaptive learning model that operationalizes these principles, using national baseline data to contextualize student needs and exploring how AI-assisted personalization can support more equitable and accessible learning experiences in higher education without undermining instructional quality or academic integrity.

3. Materials and Methods

To address the research objectives comprehensively, this study adopts a sequential multi-phase research design. This methodological architecture is structured in two complementary stages: Phase 1 (Diagnostic Analysis), which employs a quantitative, non-experimental design based on secondary analysis of the ENAPE dataset to identify structural learning conditions and justification gaps; and Phase 2 (Quasi-Experimental Pilot), which implements a teacher-mediated AI intervention (ActivAI) to evaluate the practical viability of adaptive personalization in a real-world setting.
The rationale for this sequence is grounded in the need to first construct a statistically grounded baseline characterizing the multidimensional learning conditions—such as educational access and perceived relevance—before deploying the intervention. Thus, the descriptive analyses in Phase 1 serve as the empirical diagnostic framework that contextualizes the specific design and application of the AI-driven strategies tested in Phase 2.

3.1. Phase 1: Diagnostic Analysis

The data analyzed originate from the 2021 Education Module of the National Survey on Access and Permanence in Education (ENAPE), administered by the National Institute of Statistics and Geography (INEGI). ENAPE follows a probabilistic, stratified, multistage sampling design that produces nationally representative estimates of educational conditions among individuals aged 0–29. For the purposes of this study, the sample was restricted to respondents aged 18 or older who have pursued or are currently pursuing higher education in Mexico, ensuring alignment with the target population most relevant to adaptive learning initiatives.
Analyses were conducted using Python 3.12 and standard scientific libraries (Table 1). A multistep preprocessing pipeline was implemented:
Table 1. Example of Python libraries used.
  • Selection of numeric variables relevant to higher education.
  • Removal of variables with more than 50% missing values.
  • Replacement of infinite values and imputation of remaining missing data using constant-value substitution (zero-filling), consistent with the categorical nature of the ENAPE indicators.
  • Elimination of zero-variance variables.
  • Standardization of all retained variables using z-score normalization.
This process yielded a final analytical matrix of 3422 respondents and 48 standardized indicators.
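As a minimal sketch, the preprocessing pipeline described above can be reproduced with pandas and scikit-learn. The function mirrors the five listed steps; the input file name and column set are illustrative placeholders, not the actual ENAPE variable names:

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    """Replicates the five preprocessing steps on a raw ENAPE extract."""
    # 1. Keep only numeric variables (relevance screening happens upstream).
    num = df.select_dtypes(include=[np.number])
    # 2. Drop variables with more than 50% missing values.
    num = num.loc[:, num.isna().mean() <= 0.50]
    # 3. Replace infinite values, then zero-fill remaining missing data.
    num = num.replace([np.inf, -np.inf], np.nan).fillna(0)
    # 4. Eliminate zero-variance variables.
    num = num.loc[:, num.var() > 0]
    # 5. Standardize all retained variables (z-scores).
    z = StandardScaler().fit_transform(num)
    return pd.DataFrame(z, columns=num.columns, index=num.index)

# Hypothetical usage, yielding the 3422 x 48 standardized matrix reported above:
# matrix = preprocess(pd.read_csv("enape_2021_higher_ed.csv"))
```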

3.1.1. Instruments and Variables

The variables selected for the secondary analysis were extracted from the ENAPE 2021 questionnaires, an instrument designed and validated by the National Institute of Statistics and Geography (INEGI) to measure educational trajectories and mobility. As a government-standardized instrument, ENAPE employs rigorous psychometric validation protocols to ensure construct validity and reliability across the national population.
For this study, the screened items were analyzed based on their theoretical alignment with the constructs of educational relevance, access, and performance. Following the dimensionality reduction process (EFA), the analysis focused on four latent constructs composed of the following observed variables:
  • Impact of Education on Daily Life (Factor 1):
    Measured through Likert-scale items assessing the perceived utility of education in decision-making, quality of life, and employment opportunities (e.g., items PB3_6 to PB3_9).
  • Access and Continuity (Factor 2): Comprising indicators of enrollment status and semester progression (e.g., PA3_3, PA3_6), operationalized as categorical variables reflecting retention.
  • Educational Performance (Factor 3): Aggregating variables related to academic remediation, such as the need for supplemental training or extraordinary exams (e.g., PA3_7 series).
  • Perception of Assessment (Factor 4): Grouping items related to evaluation formats, including project-based and multimedia assessments (e.g., PA3_8 series).

3.1.2. Analysis Strategy

Following IMRaD conventions, the analytical procedure consisted of three sequential steps:
  • Descriptive Analysis. Univariate statistics and frequency distributions were computed to assess the distributional characteristics and identify potential anomalies prior to multivariate modeling.
  • Dimensionality Reduction and Factor Extraction. An Exploratory Factor Analysis (EFA) was conducted using the MinRes extraction method and Varimax rotation. Sampling adequacy was verified through Bartlett’s Test of Sphericity (p < 0.001) and the Kaiser–Meyer–Olkin (KMO) index. For all inferential tests in this diagnostic phase, the statistical significance level was set at α = 0.05. The number of factors was determined through convergence of (i) eigenvalues > 1, (ii) inspection of the scree plot, (iii) theoretical interpretability, and (iv) retention of loadings ≥ 0.40. This procedure resulted in a four-factor solution representing latent dimensions of learning impact, access and progression, academic trajectories, and evaluation practices (a code sketch of this procedure follows the list).
  • Correlation Analysis. Pearson’s correlation coefficients were computed among the factor scores to examine the interrelationships between latent constructs. These associations provide empirical grounding for understanding how adaptive learning models might engage distinct dimensions of students’ educational experiences.
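Assuming the standardized matrix produced by the preprocessing step, the extraction and correlation steps can be sketched with the factor_analyzer package. The four-factor choice and the 0.40 loading threshold follow the criteria stated above; everything else is an illustrative default:

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity, calculate_kmo)

def run_efa(matrix: pd.DataFrame, n_factors: int = 4):
    # Sampling adequacy: Bartlett's sphericity test and the KMO index.
    chi2, p_value = calculate_bartlett_sphericity(matrix)
    _, kmo_model = calculate_kmo(matrix)
    print(f"Bartlett chi2={chi2:.2f}, p={p_value:.4f}; KMO={kmo_model:.2f}")

    # MinRes extraction with Varimax rotation, as in the study.
    fa = FactorAnalyzer(n_factors=n_factors, method="minres", rotation="varimax")
    fa.fit(matrix)

    # Retain loadings >= 0.40 for interpretation (others masked as NaN).
    cols = [f"F{i + 1}" for i in range(n_factors)]
    loadings = pd.DataFrame(fa.loadings_, index=matrix.columns, columns=cols)
    loadings = loadings.where(loadings.abs() >= 0.40)

    # Pearson correlations among factor scores (the structure behind Table 2).
    scores = pd.DataFrame(fa.transform(matrix), columns=cols)
    return loadings, scores.corr(method="pearson")
```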

3.2. Phase 2: Quasi-Experimental Pilot

3.2.1. Participants and Context

The pilot intervention was conducted during the Autumn 2024 academic term at the Universidad Latina de América (UNLA). The study employed a non-probabilistic convenience sampling method, recruiting 23 undergraduate students (N = 23) enrolled in two distinct academic programs to ensure disciplinary diversity. The sample consisted of:
  • Civil Engineering Cluster (n = 14): Students enrolled in the “Sustainable Road Project” course (Proyecto Vial Sustentable), characterized by project-based learning requirements.
  • Nutrition Cluster (n = 9): Students enrolled in the “Cellular and Molecular Biology” course (Biología Celular y Molecular), focused on theoretical and factual knowledge acquisition.
The intervention followed a single-group exploratory design without a control group, aiming to assess the feasibility and perceived impact of the AI-driven personalization model in a real-world setting.

3.2.2. Intervention Procedure: The ActivAI Workflow

The intervention utilized “ActivAI,” a custom Generative Pre-trained Transformer (GPT) configured with Retrieval-Augmented Generation (RAG) capabilities to ensure pedagogical rigor. The procedure followed a structured teacher-in-the-loop workflow:
  • Diagnostic Input. Prior to the instructional sessions, the instructors entered qualitative diagnostic observations into a cloud-based database (Airtable). These entries included the group identifier, course subject, and specific behavioral or cognitive traits observed (e.g., “The group is quiet but works well individually,” or “They struggle with team integration”).
  • Automated Orchestration. The instructor accessed the ActivAI interface and requested an activity for a specific group. The system utilized a custom API action to retrieve the stored diagnostic comments from Airtable.
  • Content Generation (RAG). Using the retrieved diagnostic data as a contextual prompt, the GPT generated a personalized teaching–learning activity. To prevent “hallucinations” and ensure educational quality, the model was restricted to referencing three uploaded evidence-based frameworks: authentic assessment elements, active learning strategies, and peer instruction methodologies.
  • Human Validation and Application. The instructor reviewed the AI-generated proposal for safety and accuracy—following the decision-tree protocols for safe AI use (Sabzalieva & Valentini, 2023)—and subsequently applied the tailored activity in the classroom environment.
As illustrated in Figure 1, a GPT requires several configuration elements: name, description, instructions, example prompts, reference documents, capabilities (e.g., code interpreter, image analysis, web browsing), and additional actions. One of the most distinctive features of the ActivAI GPT was the integration of a custom action connected to a dynamic Airtable. This table enables instructors to enter the group name, course, average grades (monthly, midterm, etc.), and, most importantly, a comment describing how they perceive the group.
Figure 1. GPT Communication Diagram. The arrows indicate the flow of information and interaction between the instructor, the AI system, and the external database. Solid arrows represent data exchange processes, while bidirectional arrows denote iterative feedback loops supporting adaptive activity generation.
At runtime, the GPT requests the group name, accesses the Airtable, retrieves the comments, and generates an activity tailored to the group’s immediate needs. To generate the learning activity, the GPT follows a structured interaction with the user, prompting for:
  • Average age of the audience.
  • Type of focus (e.g., workshop, higher-education activity, etc.).
  • Learning objective.
  • Topic.
  • Subtopic.
  • Prior knowledge (if required).
  • Activity duration.
  • Group name (to retrieve teachers’ comments).
Once the user types “I want to design an activity,” the GPT begins guiding the user through these inputs.
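Purely as an illustrative sketch, the retrieval behind this custom action can be approximated with the Airtable REST API. The token, base ID, table, and field names below are hypothetical stand-ins for the actual ActivAI configuration:

```python
import requests

AIRTABLE_TOKEN = "pat_xxx"   # hypothetical personal access token
BASE_ID = "appDiagnostics"   # hypothetical Airtable base ID
TABLE = "GroupDiagnostics"   # hypothetical table name

def fetch_group_comments(group_name: str) -> list[str]:
    """Retrieve the teacher's diagnostic comments for one group."""
    url = f"https://api.airtable.com/v0/{BASE_ID}/{TABLE}"
    params = {"filterByFormula": f"{{Group}} = '{group_name}'"}
    headers = {"Authorization": f"Bearer {AIRTABLE_TOKEN}"}
    resp = requests.get(url, params=params, headers=headers, timeout=10)
    resp.raise_for_status()
    # Each record carries the group name, course, grade averages, and the
    # free-text perception comment used as the RAG contextual prompt.
    return [rec["fields"].get("Comment", "")
            for rec in resp.json().get("records", [])]

# comments = fetch_group_comments("Nutrition-2024A")  # hypothetical group ID
```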

3.2.3. Data Collection and Measures

To evaluate the intervention, an ad-hoc instrument titled “Course Experience Survey: Personalization and Adaptivity” was administered post-implementation. The instrument consisted of 17 items evaluated on a 5-point Likert scale (1 = “Totally Disagree” to 5 = “Totally Agree”), structured into four theoretical dimensions derived from the literature on adaptive learning:
  • Satisfaction with Personalization (4 items): Assessed the perception of tailored feedback and pacing (e.g., “The pace of some sessions was adjusted to our understanding”).
  • Relevance and Utility (5 items): Measured the connection between activities and real-world contexts or student interests (e.g., “Examples connected with real situations we mentioned”).
  • Perceived Impact on Learning (4 items): Evaluated progress awareness and the utility of feedback for improvement (e.g., “Feedback helped me improve the next submission”).
  • Participation and Engagement (4 items): Gauged behavioral involvement when choices were offered versus rigid pathways (e.g., “I participated more when I could choose the work approach”).
To ensure data quality, the instrument included reverse-coded items (R) to control for acquiescence bias. Complementary to student perceptions, instructor observation logs were used to qualitatively corroborate changes in class engagement levels pre- and post-intervention. Regarding the qualitative component, the diagnostic comments entered by teachers into Airtable were not subjected to interpretive phenomenological coding; instead, these textual inputs functioned as operational prompts within the RAG architecture to trigger specific content generation paths.

3.2.4. Statistical Analysis Strategy

Given the exploratory nature of the pilot study and the sample size ( N = 23 ), the analysis of Phase 2 focused on descriptive statistics (mean, standard deviation, and frequency distributions) to identify patterns of user perception and engagement. This approach aligns with the objective of assessing feasibility and initial acceptance rather than establishing causal generalization at this stage. Quantitative data processing was performed using Python (Pandas library), and qualitative observations were synthesized to provide contextual explanation for the numerical trends.
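A minimal sketch of this descriptive analysis, assuming a wide-format response table with one row per student; the column names, the dimension-to-item mapping, and the list of reverse-coded items are hypothetical placeholders for the actual instrument layout:

```python
import pandas as pd

REVERSED = ["item_05", "item_12"]  # hypothetical reverse-coded (R) items

def summarize(responses: pd.DataFrame) -> pd.DataFrame:
    """Mean and SD per dimension and cluster, after inverting (R) items."""
    df = responses.copy()
    # Invert reverse-coded items on the 5-point scale: 1<->5, 2<->4.
    df[REVERSED] = 6 - df[REVERSED]
    # Hypothetical mapping of the 17 items onto the four dimensions.
    dims = {
        "Satisfaction": ["item_01", "item_02", "item_03", "item_04"],
        "Relevance": ["item_05", "item_06", "item_07", "item_08", "item_09"],
        "Impact": ["item_10", "item_11", "item_12", "item_13"],
        "Participation": ["item_14", "item_15", "item_16", "item_17"],
    }
    scores = pd.DataFrame(
        {dim: df[items].mean(axis=1) for dim, items in dims.items()}
    ).assign(cluster=df["cluster"])
    return scores.groupby("cluster").agg(["mean", "std"]).round(2)
```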

4. Results

4.1. Descriptive Analysis

A total of 3422 individuals aged 18 or older who were currently or formerly enrolled in higher education were included in the final analytical sample. Figure 2 presents the gender distribution, which shows a nearly balanced representation: 1737 men (50.8%) and 1685 women (49.2%). This balanced composition ensures adequate representation of both genders in the subsequent multivariate analyses.
Figure 2. By gender percentages of people aged 18 or older.
Regarding geographical distribution, Figure 3 displays respondents across the 32 states of the Mexican Republic. Participation ranged from 47 individuals (Tlaxcala) to 192 individuals (Tabasco), reflecting substantial geographical diversity. This pattern is consistent with ENAPE’s probabilistic sampling design and provides a robust national baseline for examining educational access and equity contextualized within the country’s distinct regional realities.
Figure 3. People aged 18 or older surveyed, by state.
Taken together, these descriptive indicators confirm the internal coherence of the analytical sample and justify its suitability for subsequent multivariate procedures.

4.2. Component and Factor Analysis Structure

Before interpreting the latent dimensions, the factorability of the dataset was statistically confirmed. The Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy yielded a value of 0.96, indicating excellent suitability for factor analysis. Bartlett’s Test of Sphericity was statistically significant (χ² = 257,184.85, p < 0.001), confirming that the correlation matrix was appropriate for factor extraction.
Factor extraction was conducted using the Minimum Residual (MinRes) method with Varimax rotation. Inspection of the scree plot and eigenvalues revealed that exactly four components exceeded the Kaiser criterion (eigenvalue > 1), thus defining the underlying structure of the data. The rotated factor loadings (≥0.40) from the final set of 48 standardized indicators supported this four-factor solution.
Regarding the explanatory power of the model, the four extracted factors accounted for the majority of shared variance in the dataset (Figure 4). Factor 1 explained approximately 22% of the variance, followed by Factor 2 (6%), Factor 3 (4%), and Factor 4 (2%). This distribution aligns with socio-educational datasets, where one dominant dimension is typically followed by several smaller but theoretically meaningful components. For full details on the underlying structure, the complete rotated factor loading matrix is provided in Appendix A (Table A1).
Figure 4. Explained Variance by Each Factor.
The factors were interpreted as follows:
  • Factor 1 (Impact of Education on Daily Life): Perceived utility of education in decision-making, quality of life, and employment opportunities.
  • Factor 2 (Access and Continuity): Enrollment status, semester progression, and academic retention.
  • Factor 3 (Educational Performance): Behaviors related to academic remediation, including supplemental training, extraordinary exams, and course repetition.
  • Factor 4 (Perception of Education): Evaluation practices, such as written/oral assessments and the submission of multimedia evidence.

4.3. Correlation Analysis

To examine the structural relationships between these dimensions, Pearson correlation coefficients were calculated (Table 2). These associations do not imply causality; rather, they provide a structural overview of how students’ perceptions and educational experiences co-vary within the dataset.
Table 2. Correlation matrix results.
The analysis highlights two critical relationships:
(1) Factor 1–Factor 4 (Impact of Education on Daily Life × Perception of Education)
A strong and statistically significant positive relationship (r = 0.72) was observed. The strong correlation between “Impact on Daily Life” (Factor 1) and “Perception of Education” (Factor 4) highlights a critical pedagogical reality: students evaluate educational quality based on its perceived utility and relevance to their personal and professional contexts. While earlier frameworks relied on “learning styles” to achieve this—a concept now largely contested (Goswami & Bryant, 2012)—contemporary research identifies contextual relevance as the true driver of engagement.
Recent evidence suggests that Generative AI is uniquely positioned to bridge this gap. Unlike traditional static materials, AI tools can dynamically contextualize abstract curricular concepts into scenarios relevant to the student’s daily life and career aspirations (Baidoo-Anu & Owusu Ansah, 2023). Furthermore, AI-driven adaptive systems provide immediate, scaffolded feedback that mimics personalized tutoring, significantly improving the student’s perception of institutional support and educational value (Chiu et al., 2024).
Therefore, although the ENAPE survey does not measure AI exposure directly, it identifies the structural mechanism (relevance and daily impact) that AI interventions are theoretically and empirically proven to enhance. The proposed model (ActivAI) is thus designed to target this specific mechanism, using AI to maximize the relevance that the survey data identified as essential for perceived equity.
Figure 5 provides a descriptive conceptual mapping of the relationship between these latent constructs and is not intended as a causal model. The diagram highlights that the evaluation of educational equity is embedded in the practical usefulness of the learning experience.
Figure 5. Correlation analysis of factors 1 and 4.
(2) Factor 2–Factor 3 (Access to Education × Educational Performance)
A remarkably strong positive correlation (r = 0.91) was also observed (Figure 6). This indicates that academic continuity—represented by enrollment stability and course progression—is strongly associated with academic remediation, performance, and participation in complementary training. This finding aligns with the theoretical framework of Alamri et al. (2021), which conceptualizes retention and performance as interdependent outcomes. Practically, this implies that interventions aimed at improving academic achievement cannot be decoupled from those that support sustained access throughout the student trajectory.
Figure 6. Correlation analysis of factors 2 and 3.

4.4. Results of the Quasi-Experimental Pilot (Phase 2)

To complement the diagnostic findings from Phase 1, the results of the Phase 2 pilot intervention (N = 23) are presented below. The data, collected via the Course Experience Survey, were processed to calculate descriptive statistics (mean and standard deviation) for each of the four theoretical dimensions. To ensure analytical rigor, reverse-coded items were inverted prior to calculation.
Table 3 summarizes the participants’ perceptions, disaggregated by academic cluster (Civil Engineering vs. Nutrition).
Table 3. Descriptive statistics for the four dimensions of the Course Experience Survey.
The quantitative results reveal a robust positive reception of the ActivAI model across both disciplines, with a global satisfaction mean of M = 4.49. However, a disciplinary variation is observable: the Civil Engineering cluster consistently reported higher scores across all dimensions (M range = 4.55–4.68) with lower variability (SD < 0.60), suggesting a strong alignment between the project-based nature of their curriculum and the AI-generated activities. In contrast, the Nutrition cluster, while positive (M > 4.0), displayed higher dispersion (SD = 0.85), indicating more varied individual experiences with the adaptive format.

Instructor Observations (Engagement)

Triangulating these results with the instructor observation logs, a qualitative shift in classroom dynamics was corroborated. Instructors reported that the “Participation” dimension scores (M = 4.43) corresponded with observable behaviors, such as an increase in voluntary questioning and “time on task” during the sessions. Notably, for the Nutrition group—initially characterized in the diagnostic phase as “quiet” and “needing security”—instructors observed that the personalized scaffolding provided by the AI activities fostered a safer environment for interaction, aligning with the quantitative finding that relevance drives engagement (r = 0.72 in Phase 1).
However, a nuanced analysis of the pilot data reveals that the perceived benefit was not uniform. The higher standard deviation observed in the Nutrition cluster (SD = 0.85) compared to Civil Engineering suggests that the effectiveness of AI-driven scaffolding may vary depending on the disciplinary nature of the course (theoretical vs. project-based). This contradicts the assumption that “one size fits all,” even within adaptive systems, highlighting the need for discipline-specific tuning of the variation engine.

5. Discussion

The findings of this study provide empirical support for the theoretical premise that AI-driven personalization can serve as a catalyst for educational equity when implemented through a teacher-mediated framework. Specifically, the strong correlation (r = 0.72) identified between the perceived relevance of education and its equitable impact aligns with the “Hybrid Intelligence” model proposed by Molenaar (2022), suggesting that technological interventions are most effective when they enhance—rather than replace—human pedagogical judgment.
Furthermore, the high satisfaction scores reported in the pilot intervention (M = 4.49) corroborate recent evidence by Tretiak et al. (2025) regarding the positive reception of generative tools in higher education. However, unlike studies that focus solely on efficiency, our results highlight that student engagement is driven primarily by the relevance of the content (r = 0.91 between Access and Performance), reinforcing Alamri et al.’s (2021) conclusion that retention is inextricably linked to the personalization of the learning experience.
Based on the diagnostic findings derived from the ENAPE analysis and the theoretical imperatives identified in the literature, this study proposes the Dynamic Integration Model of Artificial Intelligence for Personalization and Continuous Improvement. As illustrated in Figure 7, the model operates as a systemic loop designed to solve the scalability and relevance gaps in higher education. The architecture is organized into sequential phases: Inputs, Orchestration, Scalability, Implementation, and Feedback.
Figure 7. Operational workflow and architecture of the Dynamic Integration Model. Solid arrows indicate unidirectional data and process flows, whereas bidirectional arrows represent iterative feedback loops between the classroom, the instructor, and the AI system. Color-coded elements distinguish the main functional layers: pedagogical orchestration and classroom processes, AI-driven scalability mechanisms, and analytical feedback supporting continuous improvement.

5.1. Inputs: The Strategic Foundation

The model initiates with a tri-dimensional input phase designed to address the specific gaps identified in the literature. Rather than relying on generic data, three distinct information streams are required:
  • Student Diagnostics: These capture dynamic learner characteristics such as prior knowledge and motivation (Shemshack & Spector, 2020), alongside structural access conditions identified in the ENAPE analysis. Importantly, this component avoids discredited learning styles (Goswami & Bryant, 2012).
  • Pedagogical Parameters: Defined by the instructor, who acts as the strategic orchestrator (Molenaar, 2022). These parameters include learning goals, relevance criteria, and curriculum alignment.
  • Ethical Guardrails: This foundational layer establishes strict protocols for data privacy, bias mitigation, and content verification (Zhu et al., 2025) prior to any AI processing.

5.2. Orchestration and Design: The Human-in-the-Loop

In this model, the teacher transitions from content delivery to Pedagogical Orchestration. Leveraging the input data, the instructor fulfills three critical functions:

5.2.1. Diagnostic Interpretation

The instructor analyzes student profile patterns to determine the required level and type of instructional variation.
Action: The teacher reviews diagnostic information and identifies needs such as: “Cluster A has low prior knowledge and requires scaffolding; Cluster B has high motivation but low reading comprehension and benefits from multimedia resources.”
Rationale: This reflects Molenaar’s Hybrid Intelligence principle, where human contextual judgment guides machine-executed differentiation, consistent with Shemshack and Spector (2020).

5.2.2. Master Prompt Design (Instructional Design & Prompting)

This is the most critical function. The teacher creates the “base recipe” for content generation.
Action: Define the non-negotiable learning objective (e.g., “Understand thermodynamics”) and specify variation criteria: “Generate an explanation of thermodynamics using everyday-life analogies for Group 1 and complex mathematical formulas for Group 2.”
Rationale: Following Tretiak et al. (2025), AI enables teachers to shift from content manufacturing to pedagogical orchestration.
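To make this function concrete, a master prompt can be expressed as a parameterized template. The structure below is an illustrative sketch of such a “base recipe,” not the exact prompt used in ActivAI; all field names and sample values are assumptions:

```python
# Hypothetical master-prompt template: the teacher fixes the objective and
# constraints once, and the AI fills in per-cluster variation.
MASTER_PROMPT = """\
Learning objective (non-negotiable): {objective}
Audience: higher-education students, average age {age}.
Diagnostic context (teacher's Airtable comment): {diagnostic}
Task: generate {n_variants} versions of the same activity, one per cluster:
{variation_criteria}
Constraints: draw only on the uploaded evidence-based frameworks; use no
real student names; flag any content you cannot verify.
"""

prompt = MASTER_PROMPT.format(
    objective="Understand the first law of thermodynamics",
    age=20,
    diagnostic="Quiet group, works well individually, needs scaffolding",
    n_variants=2,
    variation_criteria=(
        "- Cluster 1: everyday-life analogies, low prior knowledge\n"
        "- Cluster 2: formal mathematical treatment, high motivation"
    ),
)
```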

5.2.3. Safety Guardrails Configuration

Action: The teacher configures constraints such as:
  • “Do not use real student names in examples.”
  • “Verify factual content using the validated RAG database.”
Rationale: This responds to global ethical AI guidance and ensures protection against hallucinations (Zhu et al., 2025).

5.3. Scalability Toolkit

Once the strategy is defined, the process moves to the scalability infrastructure that enables personalization in large cohorts without manual overload; an illustrative code sketch follows the component list below.
  • Content Variation Engine: Transforms the teacher’s base instruction into multiple differentiated versions (simplified, analogy-based, critical-analysis, etc.).
  • Large Language Model (LLM): Executes the variation by processing teacher-defined prompts (e.g., GPT or Claude).
  • GPT Filter (RAG & Safety Decision Tree): Inspired by Sabzalieva and Valentini (2023), this layer validates whether the content is factual and safe before reaching students. Outputs failing verification are flagged.
  • Teacher Dashboard: Presents aggregated progress patterns rather than isolated interactions, enabling data-informed pedagogical decisions.
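The pilot implemented ActivAI as a custom GPT inside the ChatGPT interface rather than through code. Purely as an illustrative sketch, an API-based equivalent of the Content Variation Engine might look like the following; the model name, system message, and cluster labels are assumptions:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_variations(master_prompt: str, clusters: list[str],
                        model: str = "gpt-4o") -> dict[str, str]:
    """Content Variation Engine: one differentiated version per cluster."""
    versions = {}
    for cluster in clusters:
        resp = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system",
                 "content": "You are a teacher-configured activity generator. "
                            "Use only the validated pedagogical frameworks."},
                {"role": "user",
                 "content": f"{master_prompt}\nGenerate the version for: {cluster}"},
            ],
        )
        versions[cluster] = resp.choices[0].message.content
    return versions

# variants = generate_variations(prompt, ["Cluster 1", "Cluster 2"])
```

In the full model, each generated variant would still pass through the GPT Filter (RAG and safety decision tree) before reaching students.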

5.4. Classroom Implementation

The outputs of the Scalability Toolkit are applied directly in the classroom. Students follow personalized learning pathways that connect curricular content to their interests and professional contexts. This strengthens the real-world impact of education, increasing perceived relevance and motivation.
In this stage, adaptability no longer depends on producing one-to-one materials; instead, the system identifies group-level patterns and clusters learners accordingly. This enables targeted variation that is feasible for large-enrollment courses.
Generative models also provide immediate, subgroup-tailored feedback, reducing teacher workload and shifting the instructional focus toward strategic oversight rather than transactional grading.
Collectively, these mechanisms—dynamic grouping, automated variation, and scalable feedback—enable personalization at both the individual and meso levels, creating sustainable units of differentiation within the classroom.

5.5. Diagnostic & Clustering Tool

As students interact with the materials, this tool captures performance data. Instead of overwhelming teachers with individual datapoints, it identifies clusters of learners with common gaps or behavioral patterns. This information is fed back into the LLM and Dashboard, allowing dynamic restructuring of groups without manual intervention.
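As a sketch of this clustering step under stated assumptions—a feature matrix of per-student performance indicators, with the choice of k and the feature names purely illustrative—scikit-learn’s KMeans can surface subgroups with common gaps:

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def cluster_learners(features: pd.DataFrame, k: int = 3) -> pd.Series:
    """Group students by performance/behavior patterns for targeted variation."""
    X = StandardScaler().fit_transform(features)
    labels = KMeans(n_clusters=k, n_init=10, random_state=42).fit_predict(X)
    return pd.Series(labels, index=features.index, name="cluster")

# Hypothetical usage with features such as quiz scores, time on task, attempts:
# assignments = cluster_learners(df[["quiz_avg", "time_on_task", "attempts"]])
```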

5.6. Feedback Loop

The model concludes with a cyclical feedback phase. Learning outcomes and student perception data return to the input phase, forming a continuous improvement cycle. This loop refines both the algorithmic clustering and the teacher’s pedagogical strategy, strengthening alignment between ethical constraints, instructional goals, and data-driven insights.

5.7. Limitations and Ethical Considerations

Despite its promising results, this study acknowledges several limitations. First, the quasi-experimental design lacked a control group, limiting causal inference; future research should employ randomized controlled trials (RCTs). Second, as Zhu et al. (2025) warn, deploying LLMs introduces risks related to privacy and algorithmic bias. While this model integrates an Ethical Guardrails layer, long-term impacts on critical thinking require longitudinal study. Finally, the model’s scalability depends on digital infrastructure, which remains uneven in many developing contexts (Amemasor et al., 2025).

5.8. Significance of the Proposed Model

The proposed Dynamic Integration Model offers a scalable and theoretically grounded architecture for personalized learning. By shifting the teacher’s role from content manufacturing to pedagogical orchestration, it addresses the dual challenge of relevance and scale, offering a viable pathway for post-pandemic educational recovery.

6. Conclusions

This study set out to determine the extent to which AI-driven adaptive learning could enhance educational accessibility and equity in the Mexican higher education context. By integrating a diagnostic secondary analysis (ENAPE) with a teacher-mediated pilot intervention (ActivAI), the research offers a dual empirical contribution. First, the diagnostic phase established that perceived educational relevance is statistically intrinsic to equity (r = 0.72), validating the premise that accessibility is not merely about connectivity but about the personalization of the learning experience. Second, the pilot results demonstrate that Generative AI, when constrained by ethical guardrails and pedagogical intent, can successfully operationalize this relevance at scale, yielding high levels of student satisfaction (M = 4.49) and behavioral engagement across diverse disciplinary contexts.

6.1. Theoretical and Practical Contributions

Theoretically, this research advances the field by proposing the Dynamic Integration Model, a framework that moves beyond the binary of “automation vs. human instruction.” By validating a workflow in which the teacher acts as the strategic orchestrator and the AI as the variation engine, the study confirms Molenaar’s concept of Hybrid Intelligence as a viable pathway for addressing the scalability crisis in large-enrollment courses.
Practically, the deployment of the Scalability Toolkit (Airtable + RAG + LLM) provides institutions with a replicable, low-code architecture to implement adaptive learning without requiring prohibitive proprietary software.

6.2. Recommendations

In response to the empirical findings and the “Human-in-the-Loop” imperative, the following recommendations are proposed for educational stakeholders:
  • For Institutions: Shift professional development from basic digital literacy to Pedagogical AI Orchestration. Training should prioritize enabling teachers to interpret diagnostic data and design “Master Prompts,” rather than simply operating software.
  • For Policy: Establish institutional Ethical Guardrails (as defined in the Input Phase) prior to AI deployment. Policies must explicitly prohibit the use of “black-box” AI for high-stakes assessment, ensuring that all AI-generated feedback passes through a human validation filter.
  • For Instructional Design: Adopt a Cluster-First approach for scalability. Instead of attempting unsustainable one-to-one personalization, educators should use AI to target dynamic subgroups—as evidenced by the Civil Engineering vs. Nutrition variance—allowing efficient yet tailored scaffolding (see the sketch after this list).
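
To illustrate the Cluster-First pattern in code, the snippet below keeps one “Master Prompt” template per dynamic subgroup rather than one prompt per student; the cluster names and scaffolds are invented for illustration, not the study's taxonomy.

# One template per dynamic subgroup, resolved at delivery time.
MASTER_PROMPTS = {
    "conceptual_gap": (
        "Explain {topic} from first principles, then pose one checking "
        "question before showing any formula."
    ),
    "procedural_gap": (
        "Give a fully worked example of {topic}, then a near-transfer "
        "problem with the first step already completed."
    ),
}

def prompt_for(cluster: str, topic: str) -> str:
    """Resolve a subgroup's Master Prompt into a ready-to-send LLM prompt."""
    return MASTER_PROMPTS[cluster].format(topic=topic)

print(prompt_for("procedural_gap", "beam load distribution"))

Two or three templates serve an entire cohort, which is precisely what makes the approach sustainable at scale.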

6.3. Limitations and Future Research

While the reported high satisfaction levels suggest a positive reception, satisfaction does not equate to cognitive mastery. This study’s exploratory design limits causal claims regarding academic performance improvements (“optimization”). The disciplinary variation observed—where Nutrition students exhibited higher dispersion in acceptance than Engineering students—indicates that AI adaptivity is not a monolithic solution.
Future research should prioritize Randomized Controlled Trials (RCTs) to measure objective learning gains (e.g., grades) alongside student perceptions. Moreover, further investigation is needed to determine how disciplinary epistemologies (hard vs. soft sciences) moderate the effectiveness of AI-driven scaffolding.
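
As a sketch of the analysis such a trial would enable (all numbers below are fabricated placeholders, not study data), an RCT could compare pre/post gain scores between an AI-scaffolded group and a control group and report an effect size alongside the significance test:

import numpy as np
from scipy import stats

# Hypothetical gain scores (post - pre) for illustration only.
treatment_gains = np.array([12, 9, 15, 11, 8, 14, 10])
control_gains = np.array([7, 5, 9, 6, 8, 4, 7])

t, p = stats.ttest_ind(treatment_gains, control_gains)

# Cohen's d with pooled standard deviation, for practical significance.
pooled_sd = np.sqrt((treatment_gains.var(ddof=1) + control_gains.var(ddof=1)) / 2)
d = (treatment_gains.mean() - control_gains.mean()) / pooled_sd
print(f"t = {t:.2f}, p = {p:.3f}, d = {d:.2f}")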

Author Contributions

Conceptualization, methodology, validation, formal analysis, investigation, writing—original draft, and writing—review & editing, J.R.H.-H.; data curation, writing—review & editing, conceptualization, methodology, and resources, J.O.-B. (Jesus Ortiz-Bejar); methodology, validation, writing—review & editing, funding acquisition, supervision, and resources, J.O.-B. (Jose Ortiz-Bejar). All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Secretaría de Ciencia, Humanidades, Tecnología e Innovación (SECIHTI), Mexico, through Project CF-2023-I-1174. Additional financial support was provided by the Universidad Michoacana de San Nicolás de Hidalgo and Universidad Latina de América.

Data Availability Statement

Dataset available on request from the authors.

Acknowledgments

This research was made possible thanks to the generous financial support provided by the Universidad Latina de América, whose commitment to educational innovation has been a cornerstone of this study. We also extend our sincerest gratitude to the academic professionals who actively participated in the quasi-experiment; their valuable collaboration greatly enriched this project. The combination of institutional resources and the collective effort of UNLA’s academic community has been pivotal in advancing personalized learning through artificial intelligence, marking a milestone in the development of innovative and adaptive pedagogical practices. The authors acknowledge the support provided by SECIHTI through Project CF-2023-I-1174 within the framework of the Ciencia de Frontera 2023 call.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1 presents the complete rotated factor loading matrix for the four-factor solution. It details the factor loadings for each variable across the four identified factors (F1 through F4), providing the full statistical basis for the analysis discussed in the main text.
Table A1. Complete rotated factor loading matrix for the four-factor solution.
Variable            F1       F2       F3       F4
PA3_2           −0.013   −0.013   −0.028    0.039
PA3_3_NIVEL     −0.036    0.011    0.006   −0.015
PA3_3_SEMESTRE  −0.051    0.077   −0.021   −0.048
PA3_4           −0.144   −0.960   −0.001   −0.097
PA3_6            0.147    0.849    0.003    0.094
PA3_7_1         −0.005    0.246    0.245    0.715
PA3_7_2          0.004    0.256    0.191    0.827
PA3_7_3          0.008    0.283    0.199    0.800
PA3_8_1          0.051    0.636    0.036    0.046
PA3_8_2          0.073    0.490    0.093    0.085
PA3_8_3          0.071    0.604    0.026    0.031
PA3_8_4          0.060    0.632    0.094    0.036
PA3_8_5          0.061    0.596    0.033    0.020
PA3_8_6          0.074    0.691    0.085    0.040
PA3_8_7          0.154    0.919   −0.008    0.099
PA3_8_8          0.147    0.951   −0.010    0.090
PB3_1           −0.971   −0.091   −0.218   −0.011
PB3_3            0.780    0.062    0.164    0.036
PB3_5_NIVEL      0.966    0.091    0.220    0.008
PB3_6            0.831    0.094    0.205   −0.014
PB3_8            0.708    0.085    0.164   −0.014
PB3_9_1          0.306    0.003    0.474    0.281
PB3_9_2          0.324   −0.016    0.528    0.311
PB3_9_3          0.329    0.007    0.516    0.273
PB3_10_1         0.327    0.062    0.587    0.003
PB3_10_2         0.289    0.065    0.677   −0.010
PB3_10_3         0.231    0.062    0.766    0.017
PB3_10_4         0.255    0.068    0.718    0.075
PB3_10_5         0.377    0.052    0.570    0.081
PB3_11_1         0.760    0.064    0.214    0.044
PB3_11_2         0.769    0.090    0.213    0.005
PB3_11_3         0.755    0.093    0.193    0.025
PB3_11_4         0.862    0.100    0.216   −0.023
PB3_11_5         0.971    0.090    0.214    0.012
PB3_12_1         0.790    0.090    0.200    0.012
PB3_12_2         0.765    0.069    0.189   −0.001
PB3_12_3         0.868    0.082    0.209    0.015
PB3_12_4         0.938    0.088    0.219    0.011
PB3_12_5         0.959    0.090    0.214    0.006
PB3_12_6         0.774    0.118    0.210   −0.016
PB3_12_7         0.970    0.090    0.216    0.013
PB3_12_8         0.970    0.091    0.219    0.011
PB3_16_1         0.811    0.076    0.181    0.050
PB3_16_2         0.925    0.084    0.200    0.034
PB3_16_3         0.838    0.091    0.192    0.040
PB3_16_4         0.957    0.087    0.211    0.021
PB3_16_5         0.753    0.070    0.181   −0.034
PD3_1            0.267    0.049    0.049    0.021
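
For readers interested in reproducing this kind of output, the following is a minimal sketch of how a rotated four-factor loading matrix can be computed in Python. The factor_analyzer package and varimax rotation are our illustrative choices, and enape_items.csv is a hypothetical export of the survey items; neither is a description of the study's actual pipeline.

import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical flat file with one column per ENAPE item.
df = pd.read_csv("enape_items.csv")

# Extract four factors and apply an orthogonal (varimax) rotation.
fa = FactorAnalyzer(n_factors=4, rotation="varimax")
fa.fit(df)

loadings = pd.DataFrame(
    fa.loadings_, index=df.columns, columns=["F1", "F2", "F3", "F4"]
)
print(loadings.round(3))  # comparable in shape to Table A1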

References

  1. Adhikari, D. P., & Pandey, G. P. (2025). Integrating AI in higher education: Transforming teachers’ roles in boosting student agency. Educational Technology Quarterly, 2025(2), 151–168. [Google Scholar] [CrossRef]
  2. Aguilera-Hermida, A. P., Quiroga-Garza, A., Gómez-Mendoza, S., Del Río Villanueva, C. A., Avolio Alecchi, B., & Avci, D. (2021). Comparison of students’ use and acceptance of emergency online learning due to COVID-19 in the USA, Mexico, Peru, and Turkey. Education and Information Technologies, 26(6), 6823–6845. [Google Scholar] [CrossRef]
  3. Alamri, H. A., Watson, S., & Watson, W. (2021). Learning technology models that support personalization within blended learning environments in higher education. TechTrends, 65(1), 62–78. [Google Scholar] [CrossRef]
  4. Almeida, L. M. C. G., Münzer, S., & Kühl, T. (2024). More personal, but not better: The personalization effect in learning neutral and aversive health information. Journal of Computer Assisted Learning, 40(5), 2248–2260. [Google Scholar] [CrossRef]
  5. Almusaed, A., Almssad, A., Yitmen, I., & Homod, R. Z. (2023). Enhancing student engagement: Harnessing “AIED”’s power in hybrid education—A review analysis. Education Sciences, 13(7), 632. [Google Scholar] [CrossRef]
  6. Amemasor, S. K., Oppong, S. O., Ghansah, B., Benuwa, B.-B., & Agbeko, M. (2025). The influence of digital professional development and professional learning communities on STEM teachers’ digital competency. PLoS ONE, 20(1), e0328883. [Google Scholar] [CrossRef]
  7. An, Q., Yang, J., Xu, X., Zhang, Y., & Zhang, H. (2024). Decoding AI ethics from users’ lens in education: A systematic review. Heliyon, 10(20), e39357. [Google Scholar] [CrossRef]
  8. Angrist, N., Bergman, P., & Matsheng, M. (2022). Experimental evidence on learning using low-tech when school is out. Nature Human Behaviour, 6(7), 941–950. [Google Scholar] [CrossRef] [PubMed]
  9. Baidoo-Anu, D., & Owusu Ansah, L. (2023). Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. Journal of AI, 7(1), 52–62. [Google Scholar] [CrossRef]
  10. Bittle, K., & El-Gayar, O. (2025). Generative AI and academic integrity in higher education: A systematic review and research agenda. Information, 16(4), 296. [Google Scholar] [CrossRef]
  11. Bombaerts, G., & Vaessen, B. (2022). Motivational dynamics in basic needs profiles: Toward a person-centered motivation approach in engineering education. Journal of Engineering Education, 111(2), 357–375. [Google Scholar] [CrossRef]
  12. Bond, M., Khosravi, H., De Laat, M., Bergdahl, N., Negrea, V., Oxley, E., Pham, P., Chong, S. W., & Siemens, G. (2024). A meta systematic review of artificial intelligence in higher education: A call for increased ethics, collaboration, and rigour. International Journal of Educational Technology in Higher Education, 21(1), 4. [Google Scholar] [CrossRef]
  13. Chans, G. M., Orona-Navar, A., & Orona-Navar, C. (2023). Higher education in Mexico: The effects and consequences of the COVID-19 pandemic. Sustainability, 15(12), 9476. [Google Scholar] [CrossRef]
  14. Chiu, T. K., Moorhouse, B. L., Chai, C. S., & Ismailov, M. (2024). Teacher support and student motivation to learn with artificial intelligence (AI) based chatbot. Interactive Learning Environments, 32(7), 3240–3256. [Google Scholar] [CrossRef]
  15. Du, H. T., & Wang, X. (2025). Research on the role transformation of teachers in the AI Era. International Journal on Social and Education Sciences, 7(4), 346–359. [Google Scholar] [CrossRef]
  16. Du, X., Du, M., Zhou, Z., & Bai, Y. (2025). Facilitator or hindrance? The impact of AI on university students’ higher-order thinking skills in complex problem solving. International Journal of Educational Technology in Higher Education, 22(1), 39. [Google Scholar] [CrossRef]
  17. El-Sabagh, H. A. (2021). Adaptive e-learning environment based on learning styles and its impact on development students’ engagement. International Journal of Educational Technology in Higher Education, 18(1), 53. [Google Scholar] [CrossRef]
  18. Garzón, J., Patiño, E., & Marulanda, C. (2025). Systematic review of artificial intelligence in education: Trends, benefits, and challenges. Multimodal Technologies and Interaction, 9(8), 84. [Google Scholar] [CrossRef]
  19. Gligorea, I., Cioca, M., Oancea, R., Gorski, A. T., & Gorski, H. (2023). Adaptive learning using artificial intelligence in e-learning: A literature review. Education Sciences, 13(12), 1216. [Google Scholar] [CrossRef]
  20. Gopal, R., Singh, V., & Aggarwal, A. (2021). Impact of online classes on the satisfaction and performance of students during the pandemic period of COVID 19. Education and Information Technologies, 26(6), 6923–6947. [Google Scholar] [CrossRef]
  21. Goswami, U., & Bryant, P. (2012). Children’s cognitive development and learning. In The Cambridge Primary Review research surveys (pp. 141–169). Routledge. [Google Scholar]
  22. Hadar Shoval, D. (2025). Artificial intelligence in higher education: Bridging or widening the digital divide? Education Sciences, 15(5), 637. [Google Scholar] [CrossRef]
  23. Hevia, F. J., Vergara-Lope, S., Velásquez-Durán, A., & Calderón, D. (2022). Estimation of the fundamental learning loss and learning poverty related to COVID-19 pandemic in Mexico. International Journal of Educational Development, 88, 102515. [Google Scholar] [CrossRef]
  24. Kabudi, T., Pappas, I., & Olsen, D. H. (2021). AI-enabled adaptive learning systems: A systematic mapping of the literature. Computers and Education: Artificial Intelligence, 2, 100017. [Google Scholar] [CrossRef]
  25. Karam, J. (2023). Reforming higher education through AI. In N. Azoury, & G. Yahchouchi (Eds.), Governance in higher education (pp. 275–306). Springer Nature. [Google Scholar] [CrossRef]
  26. Li, B., Tan, Y. L., Wang, C., & Lowell, V. (2025). Two years of innovation: A systematic review of empirical generative AI research in language learning and teaching. Computers and Education: Artificial Intelligence, 9, 100445. [Google Scholar] [CrossRef]
  27. Lim, W. M., Gunasekara, A., Pallant, J. L., Pallant, J. I., & Pechenkina, E. (2023). Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators. The International Journal of Management Education, 21(2), 100790. [Google Scholar] [CrossRef]
  28. Melisa, R., Ashadi, A., Triastuti, A., Hidayati, S., Salido, A., Luansi Ero, P., Marlini, C., Zefrin, Z., & Al Fuad, Z. (2025). Critical thinking in the age of AI: A systematic review of AI’s effects on higher education. Educational Process: International Journal, 14, e2025031. [Google Scholar] [CrossRef]
  29. Minn, S. (2022). AI-assisted knowledge assessment techniques for adaptive learning environments. Computers and Education: Artificial Intelligence, 3, 100050. [Google Scholar] [CrossRef]
  30. Miranda, J., Navarrete, C., Noguez, J., Molina-Espinosa, J.-M., Ramírez-Montoya, M.-S., Navarro-Tuch, S. A., Bustamante-Bello, M.-R., Rosas-Fernández, J.-B., & Molina, A. (2021). The core components of education 4.0 in higher education: Three case studies in engineering education. Computers & Electrical Engineering, 93, 107278. [Google Scholar] [CrossRef]
  31. Molenaar, I. (2022). Towards hybrid human-AI learning technologies. European Journal of Education, 57(4), 632–645. [Google Scholar] [CrossRef]
  32. Núñez-Canal, M., de Obesso, M. d. l. M., & Pérez-Rivero, C. A. (2022). New challenges in higher education: A study of the digital competence of educators in COVID times. Technological Forecasting and Social Change, 174, 121270. [Google Scholar] [CrossRef]
  33. Ortiz-Gallegos, T. (2020). Student academic performance, learning modality, gender and ethnicity at a four-year university in New Mexico [Unpublished doctoral dissertation]. Grand Canyon University.
  34. Peng, H., Ma, S., & Spector, J. M. (2019). Personalized adaptive learning: An emerging pedagogical approach enabled by a smart learning environment. Smart Learning Environments, 6, 9. [Google Scholar] [CrossRef]
  35. Peña, J. M., Moreno, O. B., Herrera, Á. L. O., & Moreno, T. E. B. (2024). Balancing privacy and ethics in the use of artificial intelligence in education. Sapiens International Multidisciplinary Journal, 1(3), 148–170. [Google Scholar] [CrossRef]
  36. Pozo, J. I., Cabellos, B., & Echeverría, M. d. P. (2024). Has the educational use of digital technologies changed after COVID-19? A longitudinal study. PLoS ONE, 19(10), e0311695. [Google Scholar] [CrossRef] [PubMed]
  37. Ramírez Montoya, M. S., Romero Rodríguez, J. M., Glasserman Morales, L. D., & Navas Parejo, M. R. (2023). Collaborative online international learning between Spain and Mexico: A microlearning experience to enhance creativity in complexity. Education + Training, 65(2), 340–354. [Google Scholar] [CrossRef]
  38. Rifah, L., & Zamahsari, G. K. (2022, November 22–23). Can technology replace the teachers’ role in higher education settings? A systematic literature review. 7th International Conference on Sustainable Information Engineering and Technology (pp. 217–221), Malang, Indonesia. [Google Scholar] [CrossRef]
  39. Sabzalieva, E., & Valentini, A. (2023). ChatGPT and artificial intelligence in higher education: Quick start guide. UNESCO International Institute for Higher Education in Latin America and the Caribbean (IESALC), Caracas, Venezuela. Available online: https://etico.iiep.unesco.org/en/chatgpt-and-artificial-intelligence-higher-education-quick-start-guide (accessed on 10 September 2025).
  40. Salido, A., Syarif, I., Sitepu, M. S., Suparjan, Wana, P. R., Taufika, R., & Melisa, R. (2025). Integrating critical thinking and artificial intelligence in higher education: A bibliometric and systematic review of skills and strategies. Social Sciences & Humanities Open, 12, 101924. [Google Scholar] [CrossRef]
  41. Shemshack, A., Kinshuk, & Spector, J. M. (2021). A comprehensive analysis of personalized learning components. Journal of Computers in Education, 8(4), 485–503. [Google Scholar] [CrossRef]
  42. Shemshack, A., & Spector, J. M. (2020). A systematic literature review of personalized learning terms. Smart Learning Environments, 7(1), 33. [Google Scholar] [CrossRef]
  43. Srinivasa, K., Kurni, M., & Saritha, K. (2022). Learning, teaching, and assessment methods for contemporary learners. Springer. [Google Scholar]
  44. Tretiak, O., Smolnykova, H., Fedorova, Y., Yakunin, Y., & Shopina, M. (2025). Optimization of the educational process through the use of artificial intelligence in teachers’ work. Revista Eduweb, 19(1), 105–119. [Google Scholar] [CrossRef]
  45. Wang, S., Christensen, C., Cui, W., Tong, R., Yarnall, L., Shear, L., & Feng, M. (2023). When adaptive learning is effective learning: Comparison of an adaptive learning system to teacher-led instruction. Interactive Learning Environments, 31(2), 793–803. [Google Scholar] [CrossRef]
  46. Xia, Q., Weng, X., Ouyang, F., Lin, T. J., & Chiu, T. K. (2024). A scoping review on how generative artificial intelligence transforms assessment in higher education. International Journal of Educational Technology in Higher Education, 21(1), 40. [Google Scholar] [CrossRef]
  47. Zhu, H., Sun, Y., & Yang, J. (2025). Towards responsible artificial intelligence in education: A systematic review on identifying and mitigating ethical risks. Humanities and Social Sciences Communications, 12(1), 1111. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
