Systematic Review

Frontiers of Artificial Intelligence for Personalized Learning in Higher Education: A Systematic Review of Leading Articles

1 Department of Teacher Education, Ningbo University, Ningbo 315211, China
2 Faculty of Arts, Shenzhen University, Shenzhen 518060, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(18), 10096; https://doi.org/10.3390/app151810096
Submission received: 29 June 2025 / Revised: 6 September 2025 / Accepted: 8 September 2025 / Published: 16 September 2025
(This article belongs to the Special Issue The Application of Digital Technology in Education)

Abstract

Artificial Intelligence (AI) is reshaping higher education by enabling personalized learning (PL) and enhancing teaching and learning practices. To examine global research trends, pedagogical paradigms, equity and sustainability considerations, instructional strategies, learning outcomes, and interdisciplinary collaboration, this study systematically reviewed 29 articles published between January 2020 and December 2024 and indexed in Social Sciences Citation Index (SSCI) Q1 journals (the top quartile of journals by impact factor in their category) in the Web of Science database. Results indicate that AI-PL research is concentrated in Asia, particularly China, and predominantly situated within education and computer science. Quantitative designs prevail, often complemented by qualitative insights, with supervised machine learning as the most common algorithm. While constructivist principles implicitly guide most studies, explicit theoretical grounding improves AI-pedagogy alignment and educational outcomes. AI demonstrates potential to enhance instructional approaches such as project-based learning (PBL), STEAM, gamification, and Universal Design for Learning (UDL), and to foster higher-order skills, yet uncritical use may undermine learner autonomy. Systematic attention to equity and SDG-related objectives remains limited. Emerging interdisciplinary collaborations show promise but are not yet fully institutionalized, constraining integrative system design. These findings underscore the need for stronger theoretical framing, alignment of AI with pedagogical and societal imperatives, and professional development to enhance educators’ AI literacy. Coordinated efforts among academia, industry, and policymakers are essential to develop scalable, context-sensitive AI solutions that advance inclusive, adaptive, and transformative higher education.

1. Introduction

Personalized Learning (PL) represents a transformative shift in higher education, aiming to accommodate the diverse needs, preferences, and goals of individual learners. Rooted in the principles of learner-centered education, PL tailors instructional strategies, content delivery, and assessment methods to align with students’ unique learning trajectories [1,2]. By moving beyond one-size-fits-all instruction, PL promotes engagement, inclusivity, and academic success through technology-driven customization [3,4]. Artificial Intelligence (AI) plays a critical enabling role in realizing this vision, offering dynamic capabilities such as adaptive feedback, real-time performance monitoring, and intelligent content recommendation [5,6].
Through advanced techniques like machine learning, deep learning, and natural language processing, AI-driven systems support the personalization of educational content and learner pathways [7,8]. For example, distance eTeaching and eLearning (DTL), an intelligent ubiquitous learning system, integrates AI technologies—such as context-aware behavior analysis, adaptive recommendation algorithms, and personalized learning path generation—to deliver individualized support in real time [9]. This system demonstrates how AI facilitates seamless learning experiences across physical and digital environments while enhancing self-regulated learning and engagement. In addition, AI-driven adaptive learning platforms can provide real-time analytics and responsive content delivery, enabling instructors to continuously refine instructional strategies based on learner progress and preferences [1,10]. Such tools are being applied across a wide range of domains—from STEM education to the humanities—allowing educators to design more inclusive and context-sensitive learning environments [11]. For instance, intelligent tutoring systems and personalized dashboards help track individual trajectories, fostering not only academic improvement but also self-regulation and learner autonomy [3,12]. By addressing key limitations of traditional instruction—such as lack of differentiation and limited scalability—AI applications enhance the responsiveness, equity, and sustainability of educational systems [5,13]. These applications not only increase personalization and efficiency but also promote equitable access to learning opportunities in increasingly diverse educational settings.
Despite the growing body of research on AI in Personalized Learning (PL), significant gaps exist in the literature, underscoring the need for a systematic review to consolidate and synthesize current findings. Previous reviews have predominantly focused on technical or domain-specific aspects of AI, often neglecting its interdisciplinary applications and pedagogical implications [14,15]. For instance, the review by Fariani et al. [14] emphasizes cognitive impacts but overlooks the socio-emotional and ethical dimensions of AI integration in education. Moreover, existing reviews rarely consider how AI-supported PL corresponds to broader educational aims, such as social equity, inclusion, and the United Nations Sustainable Development Goals (SDG 4), especially in underserved or non-Western contexts [16]. Evidence on higher-order learning outcomes—such as critical thinking, creativity, and ethical reasoning—also remains fragmented and unsystematic [17,18]. In addition, limited attention has been given to emerging pedagogical paradigms, such as socioformation or Universal Design for Learning (UDL), which emphasize social co-construction, cultural relevance, and learner agency [19,20,21]. This review addresses these gaps by systematically examining leading articles published between January 2020 and December 2024 and indexed in Social Sciences Citation Index (SSCI) Q1 journals (the top quartile of journals by impact factor in their category) in the Web of Science database (WoS), focusing on cutting-edge developments in AI-enabled PL within the context of higher education. The review goes beyond descriptive mapping to explore critical dimensions, including pedagogical orientations, social development implications, instructional innovation strategies, higher-order outcomes, and interdisciplinary collaboration. By consolidating insights from top-tier journals, this study highlights emerging trends, best practices, and unresolved challenges in the field.
In doing so, this systematic review not only advances scholarly understanding but also informs the development of innovative, evidence-based strategies to enhance personalized learning in diverse educational contexts.
To achieve these objectives, the study proposes a multidimensional analytical framework, addressing the following research questions:
RQ1: Which countries dominate research on AI-driven personalized learning in higher education? What research methods, sample sizes, data sources, recurring themes, and AI algorithm types are most used? How does this research map differ across studies of varying scholarly impact (high-, medium-, and low-impact) on AI-driven personalized learning in higher education?
RQ2: What pedagogical paradigms or learning theories underpin the implementation of AI in personalized learning (e.g., behaviorism, constructivism, connectivism, socioformation)? Are these models explicitly stated or implicitly embedded in the studies?
RQ3: To what extent do the reviewed studies address social equity, accessibility, and sustainable development goals (such as SDG 4)? How do these considerations shape the application of AI for personalized learning, particularly in underserved regions or populations?
RQ4: What types of innovative instructional strategies—such as project-based learning (PBL), STEAM, gamification, Universal Design for Learning (UDL), or socioformative projects—are integrated into AI-driven personalized learning approaches? How does AI support or enhance these strategies?
RQ5: Do the studies report improvements in higher-order skills (e.g., critical thinking, creativity, ethical awareness, emotional regulation) and academic learning outcomes (e.g., test scores, engagement, completion rates)? How are these outcomes measured and interpreted?
RQ6: To what extent do the studies demonstrate interdisciplinary or transdisciplinary collaboration (e.g., education + computer science, psychology + engineering)? How does such collaboration influence research design, implementation, and findings?

2. Methodology

This systematic literature review (SLR) was conducted and reported in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA 2020) guidelines [22]. The review protocol was prospectively registered in the International Prospective Register of Systematic Reviews (PROSPERO; Registration No.: CRD420251133713). The PRISMA 2020 flow diagram is included to illustrate the study selection process (Figure 1). The following review process is organized into three key stages: planning, conducting, and reporting. These stages ensure a rigorous and structured approach to synthesize the research on Artificial Intelligence (AI) applications in Personalized Learning (PL) within higher education.

2.1. Search Strategy and Inclusion Criteria

The review targeted peer-reviewed journal articles published between 1 January 2020 and 31 December 2024 to reflect the period of rapid development and adoption of AI technologies in higher education. The Web of Science (WoS) Core Collection was selected as the sole database because of its comprehensive coverage of high-impact journals in the social sciences, particularly those indexed in SSCI Q1, which ensures the inclusion of the most rigorously peer-reviewed and influential research relevant to the pedagogical and social aspects of AI in higher education [23]. We recognize that relying on a single database may limit the breadth of the review; future research could incorporate additional databases, such as Scopus, to enhance the comprehensiveness of the findings and reduce potential biases introduced by a single data source. The search strategy combined Boolean operators and keywords related to three domains: AI technologies (i.e., “artificial intelligence” OR “AI” OR “AI based” OR “automated grad*” OR “automated tutor” OR “automated scor*” OR “machine intelligence” OR “machine learning” OR “intelligent support” OR “intelligent virtual reality” OR “intelligent agent*” OR “intelligent system” OR “intelligent tutor*”), personalized learning (i.e., “personalized learning” OR “personalized e-learning” OR “PL” OR “personalized online learning” OR “personal*” OR “adaptive learning system*” OR “adaptive system*” OR “adaptive educational system*” OR “adaptive testing”), and higher education (i.e., “higher education” OR “college” OR “undergrad*” OR “graduate” OR “postgrad*” OR “university” OR “sophomore” OR “course” OR “freshman” OR “tertiary” OR “post-secondary education”).
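For transparency, the three-domain Boolean query can be expressed programmatically. The sketch below (illustrative only; it is not the authors' actual tooling) assembles the search string from the three keyword lists reported above, combining the OR groups with AND so that a record must match at least one term from each domain:

```python
# Illustrative reconstruction of the three-domain WoS Boolean query.
AI_TERMS = [
    '"artificial intelligence"', '"AI"', '"AI based"', '"automated grad*"',
    '"automated tutor"', '"automated scor*"', '"machine intelligence"',
    '"machine learning"', '"intelligent support"',
    '"intelligent virtual reality"', '"intelligent agent*"',
    '"intelligent system"', '"intelligent tutor*"',
]
PL_TERMS = [
    '"personalized learning"', '"personalized e-learning"', '"PL"',
    '"personalized online learning"', '"personal*"',
    '"adaptive learning system*"', '"adaptive system*"',
    '"adaptive educational system*"', '"adaptive testing"',
]
HE_TERMS = [
    '"higher education"', '"college"', '"undergrad*"', '"graduate"',
    '"postgrad*"', '"university"', '"sophomore"', '"course"',
    '"freshman"', '"tertiary"', '"post-secondary education"',
]

def or_group(terms):
    """Join a list of quoted terms into one parenthesized OR clause."""
    return "(" + " OR ".join(terms) + ")"

# A record must match at least one term from each of the three domains.
query = " AND ".join(or_group(t) for t in [AI_TERMS, PL_TERMS, HE_TERMS])
print(query)
```

The wildcard `*` follows WoS truncation syntax (e.g., “intelligent tutor*” matches “tutor”, “tutors”, and “tutoring”).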
The inclusion and exclusion criteria are summarized below:
Included
  • Peer-reviewed empirical articles published in SSCI Q1 journals
  • English-language articles published between 2020 and 2024
  • Studies with explicit focus on AI-based personalized learning in higher education
Excluded
  • Review articles, proceeding papers, and retracted papers
  • Early access articles
  • Non-English publications
  • Studies outside higher education or not involving AI as a core tool
  • Studies without clear empirical application of PL
  • Articles from SSCI Q2–Q4 journals

2.2. Study Selection Process

The study selection followed four stages: Identification, Screening, Eligibility, and Inclusion.

2.2.1. Identification

A total of 210 articles were retrieved from the WoS Core Collection. Using WoS metadata filters such as “Document Type” and “Publication Status” during the initial metadata export, 61 records were excluded as review articles, proceedings papers, or retracted papers. Proceedings papers were excluded on the grounds that they frequently do not undergo the same rigorous peer-review process as journal articles, are subject to abbreviated review timelines, and often disseminate preliminary findings whose definitive validity remains to be established.

2.2.2. Screening

In the subsequent phase, the titles and abstracts of 149 studies were screened, and 29 records were eliminated because they were either Early Access items or written in languages other than English. Early Access articles were omitted to safeguard the structural integrity of the dataset and to guarantee the reproducibility of all downstream analyses. Although these manuscripts have usually completed peer review, they typically lack complete bibliographic metadata—volume, issue, and pagination—their citation metrics remain volatile, and cross-database synchronization is often delayed. To eliminate such inconsistencies, only formally published articles bearing definitive volume, issue, and page numbers were retained. Likewise, non-English publications were excluded to ensure uniform interpretation of content and consistent methodological appraisal across the entire sample.

2.2.3. Eligibility

The next phase, eligibility assessment, involved a full-text evaluation of 120 articles. Of these, 91 records were excluded for one or more of the following reasons: no focus on AI in higher education (HE), non-empirical design, no focus on AI-supported PL in HE, no clear description of a practical application, absence of an impact factor (IF) index, or publication in a Q2, Q3, or Q4 journal. The Q1 filter was applied at the eligibility stage rather than during initial screening to enable multidimensional assessment of content quality and topic relevance.

2.2.4. Inclusion

The final set of 29 empirical studies published in SSCI Q1 journals (based on the 2023 JCR Category rankings, which use 2022 data) was retained. Restricting inclusion to Q1 journals ensured that only the highest-quality publications were considered. The overall process, including the number of articles excluded at each stage, is summarized in the PRISMA flow diagram (Figure 1).
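The stage-by-stage counts above can be checked for internal consistency with simple arithmetic; the following minimal sketch reproduces the four-stage flow (identification, screening, eligibility, inclusion) using the figures reported in Section 2.2:

```python
# Internal-consistency check of the PRISMA flow counts reported above.
identified = 210           # records retrieved from the WoS Core Collection
excluded_metadata = 61     # reviews, proceedings papers, retracted items
screened = identified - excluded_metadata        # enter title/abstract screening

excluded_screening = 29    # Early Access items or non-English publications
eligible = screened - excluded_screening         # enter full-text assessment

excluded_fulltext = 91     # off-topic, non-empirical, no IF index, Q2-Q4 journals
included = eligible - excluded_fulltext          # final SSCI Q1 empirical studies

print(f"screened: {screened}, eligible: {eligible}, included: {included}")
```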

2.3. Quality Assessment

To ensure the inclusion of only high-quality research, a thorough quality assessment was performed on the 29 selected studies. A seven-criterion framework was used to assess methodological quality: (1) clarity of research objectives, (2) inclusion of a comprehensive literature review, (3) clear presentation of related work to position the study within the current body of research, (4) detailed description of the methodology or model architecture, (5) presentation of clear research results, (6) alignment of conclusions with research objectives, and (7) recommendations for future work [14]. Two independent reviewers scored each criterion as 0 or 1; discrepancies were resolved through discussion and consensus with a third senior reviewer. Only studies scoring 7/7—that is, meeting all seven criteria—were retained for analysis. This quality assessment ensured that the selected studies were robust and contributed valuable insights to the understanding of AI in PL within higher education.
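The retention rule is strictly conjunctive: a single failed criterion excludes a study. A hypothetical sketch (the score vectors shown are invented for illustration) makes the rule explicit:

```python
# Hypothetical sketch of the seven-criterion quality screen: each criterion
# is scored 0 or 1, and a study is retained only with a consensus score of 7/7.
CRITERIA = (
    "clear research objectives",
    "comprehensive literature review",
    "related work clearly positioned",
    "methodology or model architecture described",
    "clear research results",
    "conclusions aligned with objectives",
    "recommendations for future work",
)

def retained(consensus_scores):
    """Return True only if every one of the seven criteria scores 1."""
    if len(consensus_scores) != len(CRITERIA):
        raise ValueError("expected one 0/1 score per criterion")
    return all(score == 1 for score in consensus_scores)

# Invented examples: the first study meets all criteria, the second fails one.
print(retained([1, 1, 1, 1, 1, 1, 1]))  # True
print(retained([1, 1, 0, 1, 1, 1, 1]))  # False
```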

2.4. Data Extraction, Impact Stratification, and Synthesis

Data were extracted to answer the six research questions, focusing on the research background, pedagogical models, the social development dimension, instructional innovation strategies, higher-order outcomes, and disciplinary composition. Thematic analysis was conducted using an inductive coding approach [24]. This method allowed for the identification of emergent themes from the data, ensuring that the analysis was closely aligned with the content of the studies without imposing preconceived categories [25]. The coding process included familiarization, code generation, theme identification, and sub-theme development. The process was collaborative, with the primary researcher conducting the analysis and the co-authors validating the findings. This collaborative approach ensured a robust and comprehensive interpretation of the data, providing insights into the current state of AI applications in personalized learning and identifying both emerging trends and gaps in the existing literature.
To evaluate influence variance among studies, this review additionally implemented impact stratification using citation data from the Web of Science Core Collection. Citation count is a widely recognized indicator of academic influence [26]. Since all included studies were from SSCI Q1 journals, journal-based metrics offered limited differentiation. Therefore, impact grouping was based on citation distribution quartiles: High-impact studies (≥36 citations, top 25%), Medium-impact (5–35 citations), and Low-impact (<5 citations, bottom 25%). Citation data were retrieved on 2 August 2025. A corresponding comparison table and subgroup analysis of findings are provided in Section 3.
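The stratification rule can be stated compactly as follows (a minimal sketch; the citation counts in `sample` are invented for demonstration and are not drawn from the reviewed studies):

```python
# Citation-based impact stratification as described above:
# >= 36 citations -> high impact (top 25%), < 5 -> low impact (bottom 25%),
# otherwise -> medium impact.
def impact_group(citations, low_cut=5, high_cut=36):
    """Map a WoS citation count to an impact stratum."""
    if citations >= high_cut:
        return "high"
    if citations < low_cut:
        return "low"
    return "medium"

sample = [120, 48, 36, 35, 12, 5, 4, 0]   # hypothetical citation counts
print([impact_group(c) for c in sample])
```

Note that the boundaries are inclusive on the high side (36 citations is high impact) and exclusive on the low side (5 citations is still medium impact), matching the thresholds stated above.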

3. Results

3.1. Impact-Based Grouping of Studies

To capture heterogeneity in scholarly influence, the 29 included studies were classified into three citation-based impact groups: high-impact (≥75th percentile), medium-impact (25th–75th percentile), and low-impact (<25th percentile). This grouping enables comparative analysis across countries, disciplines, methodological designs, thematic emphases, and AI algorithm applications. The classification yielded 8 high-impact studies, 15 medium-impact studies, and 6 low-impact studies (Table 1).
This stratification provides a structured lens for evaluating whether certain geographical regions, disciplinary orientations, or methodological approaches are more strongly associated with higher scholarly visibility. In the following subsections, results are organized into three thematic areas—(1) Countries and Disciplines, (2) Research Methods, Sample Sizes, and Data Sources, and (3) Research Themes and Types of AI Algorithms—to facilitate comparison across impact levels.

3.1.1. Countries and Disciplines

Overall, the analysis of 29 reviewed articles reveals a clear geographical distribution in AI research for personalized learning (PL) in higher education, with significant contributions concentrated in Asia (Figure 2). Notably, 24 of the reviewed studies (82.8%) are authored by researchers based in Asian countries, indicating a strong regional focus on this topic. Among these, China accounts for 12 articles, representing 41.4% of all studies, followed by South Korea, India, Malaysia, and Saudi Arabia, each contributing two articles. Additional contributions from Asia come from Oman, the United Arab Emirates, Jordan, and Cyprus, each represented by one study [30,47]. Outside Asia, Europe is moderately represented, with six studies from countries including Italy, Greece, Sweden, Switzerland, Romania, and Russia [36,40]. Oceania is represented exclusively by three studies from Australia, while North America contributes two studies from the United States, and South America is represented by one study from Chile [48,52]. This regional distribution highlights Asia’s dominance in AI-PL research while indicating increasing interest across other continents.
The reviewed studies span a total of 13 academic disciplines, emphasizing the interdisciplinary nature of AI-PL research in higher education (Figure 3). Education and Computer Science dominate, each accounting for 6 articles (20.7%). Education-focused research explores the pedagogical impacts of AI, such as how AI enhances learning accessibility and improves teaching strategies [7,37]. Studies in computer science focus on the development and optimization of AI tools and algorithms, including natural language processing (NLP) and predictive analytics [29,45]. Additional disciplines include Educational Technology (3 articles, 10.3%), which explores AI-powered learning management systems [27], and Engineering (3 articles, 10.3%), which focuses on hybrid systems and adaptive learning tools [43]. Disciplines such as Psychology, Social Sciences, and Arts and Humanities each contribute 2–3 articles, demonstrating growing interest in AI’s broader cognitive and social impacts [38,51]. Less represented fields include Nursing and Language Learning, with one article each, underscoring the need for greater exploration of AI’s role in specialized domains [8].
An analysis of the included studies based on impact-level grouping reveals notable patterns in geographical distribution, disciplinary focus, and potential implications for educational outcomes. In terms of impact-based grouping, high-impact studies were predominantly conducted in China (n = 3) and Malaysia (n = 2), with additional contributions from India, South Korea, Oman, the United Arab Emirates, Saudi Arabia, Russia, and Sweden. Disciplinary foci were concentrated in Education (n = 3), Educational Technology (n = 2), and Computer Science (n = 2), alongside emerging work in STEM, arts, business, and information management. These studies tend to report more substantial educational benefits, such as improved personalized learning outcomes and innovative AI applications in higher education. Medium-impact studies showed broader geographic and disciplinary diversity, largely authored in China (n = 9) and Australia (n = 2), led by Computer Science (n = 6) and Engineering (n = 3), but also including Psychology, Humanities, Mathematics, Management, Education, and Economics, suggesting that AI-PL adoption is spreading across multiple fields and regions. Low-impact studies were most geographically and disciplinarily heterogeneous, with the United States (n = 2) contributing the largest share, and disciplinary oriented in Interdisciplinary research (n = 2), Engineering, Writing Studies, Language Learning, Applied AI, Computer Science, Social Sciences, and Education, indicating exploratory or emerging efforts whose educational effectiveness may be context-specific or limited by methodological rigor.

3.1.2. Research Methods, Sample Sizes, and Data Sources

Figure 4 shows the commonly utilized methods in studies on AI-driven personalized learning in higher education. Quantitative research methods dominate the methodological landscape of AI-PL studies, utilized in 18 of the 29 reviewed articles (62.1%). Of these, survey-based research emerges as the most frequently used sub-method, appearing in 11 articles. These studies rely on structured surveys or questionnaires to gather large-scale empirical data from diverse participant groups [7,28]. Computational analytic methods, such as machine learning and predictive modeling, are employed in three articles, reflecting the increasing integration of big data into educational research [45]. Experimental and quasi-experimental approaches, though less common, are utilized in four studies to evaluate the impact of AI interventions in controlled or semi-controlled settings [35,46]. Qualitative methods are adopted in six articles (20.7%), focusing on interviews or thematic analysis to explore nuanced participant experiences [50]. Mixed-methods approaches are used in five studies (17.2%), combining surveys with interviews or computational analyses to provide a holistic understanding of AI’s educational applications [27,38].
The reviewed studies demonstrate considerable variation in sample sizes, reflecting diverse research designs and objectives (Figure 5). Seven studies (24.1%) employ small sample sizes of fewer than 50 participants, often in qualitative or exploratory investigations aimed at capturing detailed insights [33,44]. Moderate sample sizes, ranging from 300 to 999 participants, account for 10 studies (34.5%), reflecting efforts to balance generalizability with logistical feasibility [7,28]. Large sample sizes of 1000 to 1999 participants are employed in four studies (13.8%), often leveraging secondary data or machine learning techniques for predictive modeling [27,32]. One study uniquely features a dataset of over 5000 participants, showcasing the potential for AI to analyze massive datasets [43]. Notably, one article lacks explicit sample size reporting, relying instead on publicly available datasets for algorithm testing [39]. This diversity highlights the methodological adaptability of AI-PL research in addressing various educational challenges.
Figure 6 shows that a wide range of data sources is employed across the 29 studies, with surveys and questionnaires emerging as the most prevalent, appearing in 17 articles (58.6%). These instruments provide large-scale, structured data to analyze learner behavior and perceptions [7,28]. Existing datasets or secondary data are utilized in six articles (20.7%), particularly for machine learning and big data analyses [36,39]. Interviews are used in three studies (10.3%) to capture in-depth qualitative insights, focusing on participants’ experiences and perceptions of AI tools [44,50]. Multimedia data, such as teaching videos or student-generated drawings, are employed in two studies (6.9%) to explore non-traditional data types in education [37,52]. One study uniquely uses social media and online platform data, reflecting an emerging trend of analyzing user-generated content for learning analytics [29]. This variety of data sources demonstrates the adaptability of AI-PL research in capturing both quantitative and qualitative dimensions of personalized learning.
Comparing research methods, sample sizes, and data sources across impact levels reveals both convergences and divergences, each reflecting prevailing methodological trends and shaping the resulting findings. Across all groups, survey-based approaches were the most frequently employed, underscoring the prevalence of self-reported data in AI-supported personalized learning research. In the high-impact group, surveys were dominant (n = 4), complemented by mixed methods, machine learning experiments, algorithmic development, qualitative methods, and quasi-experiments (each n = 1). Corresponding data sources were primarily surveys and questionnaires (n = 7), with occasional use of online platforms or social media datasets, indicating a focus on structured and quantifiable measures to support robust findings. Medium-impact studies also favored surveys (n = 6), but demonstrated greater methodological variety, including qualitative approaches (n = 3), experiments (n = 2), mixed methods (n = 2), case studies, machine learning, and algorithm development (each n = 1). Data sources similarly prioritized surveys and questionnaires (n = 8), supplemented by secondary datasets (n = 4), existing institutional records (n = 4), and interviews (n = 2), reflecting an expansion toward more varied evidence and richer contextual insights. Low-impact studies displayed the highest proportion of mixed-method designs (n = 2) relative to group size, along with isolated uses of machine learning, algorithm development, quasi-experiments, qualitative methods, and surveys. Data sources were distributed among existing data (n = 2), secondary datasets (n = 2), and smaller shares of surveys (n = 2), interviews, and multimedia resources, indicating exploratory or heterogeneous approaches with less emphasis on standardized metrics.
Sample size patterns further highlight methodological trends. High-impact studies showed a polarized pattern, with both medium-to-large samples (300–499: n = 2; 500–999: n = 2; 1000–1999: n = 2) and very small samples (under 50: n = 2), suggesting that impactful studies may succeed either through large-scale validation or highly targeted, in-depth investigation. Medium-impact studies spanned the full range from under 50 to over 5000 participants, with 300–499 most common (n = 3), reflecting flexibility in study scale and design. Low-impact studies skewed toward small samples (under 50: n = 3), potentially limiting generalizability and statistical power.

3.1.3. Research Themes and Types of AI Algorithms

Seven dominant themes emerge from the reviewed studies, highlighting the broad focus of AI-PL research (Figure 7). The most prevalent theme, AI Tools and Applications in Education, is represented by nine articles (31%), which explore practical implementations like learning management systems and AI-powered feedback tools [29,39]. AI in Personalized Learning, addressed in six articles (20.7%), investigates how AI customizes educational experiences to meet individual learner needs [7,27]. Ethical and social implications, such as data privacy and emotional responses to AI, are discussed in five articles (17.2%), reflecting growing attention to the challenges posed by AI in education [30,38]. Discipline-specific themes include AI in STEM Education and AI in Language Learning, each with three articles, while two studies address Supporting Students with Special Needs [41]. Emerging themes like Generative AI emphasize innovative applications, indicating future directions for the field [40].
As depicted in Figure 8, supervised machine learning dominates the algorithmic landscape, underpinning six investigations and accounting for 20.7% of all implementations. This algorithm type is predominantly employed for tasks like predicting student performance and categorizing learner behaviors [29,35]. Natural Language Processing (NLP) is the second most prevalent, utilized in five studies (17.2%) for applications like language acquisition and writing assistance [7,8]. Other algorithms include unsupervised learning and deep learning, each appearing in three articles, highlighting their role in clustering and feature extraction tasks [28,39]. Generative AI and hybrid systems are represented by three studies each, focusing on dynamic content creation and the integration of multiple AI methods [38,40]. Rule-based systems and predictive modeling appear in fewer studies, suggesting opportunities for further exploration in educational applications to enhance personalized learning and outcome prediction [37,51]. This algorithmic diversity reflects the field’s innovative approaches to addressing complex challenges in personalized learning.
Across impact-level strata, research themes and AI algorithms diverge in emphasis and technical sophistication, together mapping the shifting priorities and technological trajectories of AI-mediated personalized learning. High-impact studies were primarily concentrated on AI tools and applications in education (n = 5) and AI in personalized learning (n = 2), with individual studies addressing ethical, social, and psychological implications. This concentration suggests that highly cited research prioritizes practical and scalable educational applications while occasionally considering broader societal impacts. Medium-impact studies demonstrated a wider thematic spectrum, including ethical, social, and psychological implications of AI (n = 3), AI in personalized learning (n = 3), AI tools and applications in education (n = 2), AI in engineering and STEM education (n = 2), AI for supporting students with special needs (n = 2), and AI in language and writing education (n = 2). Such diversity indicates a balance between applied studies and exploratory research addressing emerging educational contexts and inclusivity challenges. Low-impact studies most often examined AI tools and applications in education (n = 2), supplemented by single studies in engineering/STEM, personalized learning, language and writing education, and ethical/social/psychological domains, reflecting smaller-scale, less thematically cohesive investigation.
Regarding AI algorithms, high-impact studies most frequently employed natural language processing (NLP) (n = 2) and deep learning (n = 2), alongside recommendation algorithms, supervised learning, generative AI, and rule-based systems (each n = 1). The use of advanced AI techniques aligns with their emphasis on innovation and impactful educational interventions. Medium-impact studies exhibited greater algorithmic diversity, with supervised learning (n = 4), unsupervised learning (n = 3), hybrid systems (n = 2), and generative AI (n = 2) leading the list, plus recommendation algorithms, deep learning, rule-based AI, and NLP (each n = 1). This variety reflects exploratory experimentation with different AI methods to address diverse pedagogical challenges. Low-impact studies were dominated by NLP (n = 2), with isolated applications of supervised learning, recommendation algorithms, predictive modeling, and hybrid systems, indicating narrower algorithmic exploration and more limited methodological sophistication.

3.2. Pedagogical Paradigms or Learning Theories

Across the 29 studies, theoretical grounding was heterogeneous. Roughly one-third of the articles explicitly named a pedagogical or learning theory to frame AI-supported personalized learning, including Activity Theory [37], Constructivism and allied approaches such as design thinking and competency-based education [40], Generative Learning Theory [10], Felder–Silverman learning styles [48], post-humanism/distributed cognition [32], student-centered and project/task-based learning [44], and motivational/learning frameworks such as Basic Psychological Needs/Self-Determination and I-PACE [51]. In these studies, theory typically informed concrete design choices—for example, generative learning guided the embedding of summarizing/organizing/reflecting strategies in AI environments [10], Activity Theory structured the analysis of learner–tool–community interactions in AI-assisted settings [37], and student-centered/PBL rationalized adaptive feedback and task sequencing.
By contrast, about half of the corpus relied on implicit pedagogical assumptions. Many papers operationalized constructivist/socioconstructivist or connectivist logics—e.g., “learn–practice–feedback” cycles and adaptive guidance [33,36], personal learning environments [29], or the 3P model as an interpretive lens for students’ engagement [7]—without naming a formal pedagogy. Several studies adopted a student-centered discourse or emphasized personalization and collaboration [35], yet still treated pedagogy as background rather than a primary design driver. A smaller subset anchored AI-PL primarily in technology-adoption frameworks—TAM, UTAUT, TPB, TRI, and related IS models [27,28,30,31,42]. While these models robustly explain usage intentions and acceptance, they function more as behavioral than pedagogical theories and seldom articulate how instructional strategies (e.g., scaffolding, feedback, social co-construction) are to be structured.
Two notable gaps emerge. First, classical behaviorism was virtually absent, and connectivism appeared mostly implicitly rather than as an explicit design principle. Second, socioformation, which foregrounds social co-construction and quality-of-life outcomes, was only implicitly aligned via needs-based perspectives [51], with no studies explicitly operationalizing socioformative design. Overall, the field skews toward constructivist family logics (often implicit) or adoption/acceptance lenses (explicit), with relatively few studies that explicitly translate a pedagogical paradigm into implementable AI design rules for personalization. This pattern underscores a continuing need for research that (i) makes pedagogical commitments explicit, (ii) links those commitments to AI functionalities (e.g., recommendation, feedback, collaboration orchestration), and (iii) evaluates impacts on higher-order outcomes in a theoretically coherent manner.

3.3. Sustainable Development and Equity

The reviewed studies show limited but varied engagement with sustainable development goals (SDGs), equity, and accessibility, with only a minority explicitly addressing these dimensions in design or evaluation. Using the operational categories (A–E) defined for this review, the distribution is as follows: 1 study explicitly mentions an SDG (Category A), 7 studies explicitly foreground equity or inclusion (B), 8 studies raise implicit ethical or fairness concerns without operationalizing them (C), 10 studies give little or no attention to such issues (D), and 3 studies make only indirect connections to broader social or educational benefits (E).
First, the explicitly inclusion-oriented studies (B) illustrate concrete ways in which social aims can shape AI-PL design. For example, Ou, Stöhr [32] integrate AI-language tools to support dyslexic, ADHD, autistic, and L2 learners, pairing technical design with recommendations for institutional policies on privacy and bias. Zingoni, Taborri [41] create a machine-learning classifier explicitly for students with dyslexia, framing it as an assistive, accessibility-focused intervention. Similarly, Țală, Muller [40] propose AI pathways customized for learners with impairments, grounded in UNESCO/OECD guidelines, while Chang [43] designs an inclusive course recommender serving over 5600 students across diverse disciplines. These studies reframe AI features from pure optimization toward assistive personalization, evaluate representativeness in sample selection, and couple technical innovation with policy-oriented recommendations—demonstrating a deliberate alignment between AI affordances and inclusion goals.
Second, studies in the implicit/ethical cluster (C) surface concerns relevant to equity but stop short of embedding them as design requirements. For instance, Chan and Hu [7] highlight student fears that AI might “widen the gap between rich and poor,” Bouteraa, Bin-Nashwan [30] focus on academic-integrity risks, and Zhong, Luo [51] discuss AI dependency and mental-health vulnerabilities in particular subgroups. Although these contributions shape the discourse on responsible deployment by advocating integrity policies, balanced usage, and ethical safeguards, they seldom quantify accessibility outcomes or integrate adaptive features designed to mitigate disadvantage. As such, they occupy a middle ground between equity awareness and concrete, equity-driven innovation.
Finally, a plurality of studies (Categories D and E) either omit social-development considerations or address them only indirectly. Category D (n = 10) papers prioritize technical performance, adoption, or pedagogical efficacy without explicit attention to SDGs, equity, or the digital divide [27,28,39]. Category E (n = 3), including Kong, Ning [44] and Wang, Aguilar [49], links AI to improved teaching quality or student decision-making, implying potential societal benefits but without operationalizing equity in the design. This concentration on functionality within relatively well-resourced contexts raises the risk that AI interventions, if transferred to underserved settings without adaptation, may fail to reduce or could even exacerbate existing disparities.

3.4. Instructional Innovation Strategies

Across the reviewed literature, explicit integration of innovative instructional strategies—such as project-based learning (PBL), STEAM programs, socioformative projects, design thinking, gamification, or Universal Design for Learning (UDL)—is relatively uncommon. Only a minority of works adopt sustained, pedagogy-driven designs, including project-based or socioformative approaches [38,42,44] and design-thinking frameworks [40]. Other innovative practices appear in gamified micro-learning contexts [35,38], AI-assisted writing pedagogies [30,47,49], and inclusion-oriented practices drawing on UDL principles [32,41]. The majority of studies, however, focus on adaptive pathways and recommendation systems without embedding them into a comprehensive pedagogical redesign [33,36,39,43].
Where innovative strategies are adopted, AI predominantly functions as a scaffolding and orchestration mechanism rather than replacing the pedagogy itself. Reported supports include intelligent task recommendation, adaptive sequencing, formative assessment automation, group formation, affective state detection, and AI-generated content. For instance, dialogue-based project support guided learners through multi-stage PBL tasks [44], while AI-driven grouping algorithms and tailored resource recommendations enhanced collaborative creativity in socioformative contexts [38]. In AI-assisted writing, large language models provided real-time feedback, stylistic refinement, and idea generation [8,49], whereas gamification designs leveraged AI to adapt challenge levels and feedback loops [35]. Inclusive learning systems applied classification and recommendation models to match instructional resources to the needs of students with learning difficulties [32,41]. Nevertheless, many technically sophisticated studies [39,45] fail to reconfigure classroom activity structures, rarely providing detailed task scripts, assessment rubrics, or collaboration protocols. As a result, AI in personalized learning currently serves more to support existing pedagogical practices than to catalyze comprehensive instructional transformation.

3.5. Impacts of AI on Personalized Learning Outcomes and Higher-Order Skills

Only a small subset of studies demonstrates objective evidence of academic performance gains through experimental or quasi-experimental designs. Specifically, AI-assisted interventions produced significant pre–post improvements in multiple subjects [35] and enhanced artwork appreciation using rubric-based evaluation [33]. Similarly, in writing instruction, AI support was associated with improved product quality, though researchers also warned of potential risks to personal expression [49]. Complementing these findings, learning-analytics studies linked AI-mediated behaviors to higher final grades, such as the production of cognitively substantive forum posts, suggesting that AI-enhanced engagement can predict academic achievement [29].
In contrast, a larger body of research reports perceived gains in higher-order skills, such as critical thinking, creativity, collaboration, and self-directed learning, typically measured through surveys, interviews, or qualitative analysis [8,10,32,34,38,44,46]. While these findings highlight promising learner experiences, they often lack standardized performance metrics or longitudinal validation. Moreover, many studies emphasize system-level outcomes, including algorithmic accuracy (e.g., RMSE, NDCG) or adoption intentions, rather than direct educational impact [27,28,36,43,48]. Additionally, risk-oriented investigations caution that reliance on generative AI may undermine originality, integrity, and critical-thinking skills, underscoring the need for ethical guidelines and metacognitive scaffolds [7,37,51].
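For readers unfamiliar with the system-level metrics named above, the sketch below computes the two most commonly reported ones: RMSE for score prediction error and NDCG for the quality of a ranked recommendation list. The numeric values are synthetic and purely illustrative.

```python
# Illustrative sketch of the system-level proxies cited above: RMSE for
# rating/score prediction and NDCG for ranking quality. Values are synthetic.
from math import log2, sqrt

def rmse(predicted, actual):
    """Root-mean-square error between predicted and observed scores."""
    return sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

def ndcg(relevances):
    """Normalized discounted cumulative gain for one ranked list.

    relevances: graded relevance of items in the order the system ranked them.
    """
    dcg = sum(rel / log2(i + 2) for i, rel in enumerate(relevances))
    ideal = sum(rel / log2(i + 2)
                for i, rel in enumerate(sorted(relevances, reverse=True)))
    return dcg / ideal if ideal else 0.0

# Synthetic example: three predicted vs. observed normalized scores,
# and one recommendation list whose third and fourth items are swapped.
print(round(rmse([0.8, 0.6, 0.9], [1.0, 0.5, 0.9]), 3))
print(round(ndcg([3, 2, 0, 1]), 3))
```

An NDCG of 1.0 means the system ranked items in the ideal relevance order; the point of the surrounding critique is that such metrics certify ranking quality, not educational impact.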
Taken together, the literature reveals an uneven evidence base: while there is emerging proof of academic achievement improvements in targeted contexts and self-perceived skill development across several domains, much of the research remains indirect, relying on perceptions or system proxies. Consequently, the transformative potential of AI for fostering higher-order learning outcomes has yet to be demonstrated through rigorous, longitudinal, and transfer-sensitive evaluations.

3.6. Interdisciplinary and Transdisciplinary Collaboration

The extent of interdisciplinary engagement across the reviewed studies varied markedly, ranging from genuine transdisciplinary integration to superficial disciplinary juxtaposition. In more advanced cases, education researchers and computer scientists collaborated in co-design processes that embedded pedagogical principles directly into computational models, thereby shaping both system architecture and instructional design [36,41,52]. Such studies exemplify how transdisciplinary practice can generate research designs that are simultaneously pedagogically valid and technically robust. By contrast, a larger proportion of investigations displayed only limited interdisciplinarity, where diverse disciplinary expertise was present but operationalized in parallel rather than integrative ways—for example, reporting algorithmic performance alongside learner surveys without demonstrating cross-domain synthesis [27,38,40]. At the minimal end, some projects remained confined within a single disciplinary lens, either technical or educational, offering little evidence of cross-field interaction [44].
The depth of collaboration demonstrably shaped research implementation and outcomes. Stronger interdisciplinary integration produced methodologically richer studies, combining psychometric assessment, system validation, and classroom observation to enhance ecological validity and pedagogical relevance [10,43]. These designs not only improved the interpretability of AI-driven personalization but also situated findings within authentic educational contexts. Conversely, studies with weaker collaboration tended to yield fragmented insights, where technical performance metrics lacked pedagogical translation or educational implications remained under-theorized [28,35]. Overall, while interdisciplinary participation is increasingly visible, sustained transdisciplinary co-design—where disciplinary boundaries are transcended to produce genuinely integrative frameworks—remains the exception rather than the norm.

4. Discussion

This discussion synthesizes core findings in the literature from SSCI Q1 journals (top 25% most-cited in social sciences) published between January 2020 and December 2024 in the Web of Science database, focusing on AI-personalized learning (AI-PL) integration in higher education. The review mapped global trends, pedagogical paradigms, sustainability and equity considerations, instructional innovations, learning outcomes, and interdisciplinary collaboration, offering a comprehensive account of the field’s trajectory. Results reveal a strong geographical concentration in Asia, particularly China, alongside disciplinary dominance of education and computer science. Methodologically, quantitative designs and supervised learning algorithms prevail, while explicit theoretical grounding and systematic engagement with equity remain limited. Although AI demonstrates clear potential to enhance academic outcomes, higher-order skills, and innovative strategies such as PBL, STEAM, and gamification, risks of cognitive disengagement and diminished autonomy emerge when pedagogical and ethical safeguards are absent. Furthermore, interdisciplinary collaboration is expanding but remains fragmented, constraining the design of integrative, human-centered AI systems. Collectively, these patterns highlight both the transformative promise and unresolved challenges of AI adoption in higher education.

4.1. Countries and Disciplines

High-impact studies were predominantly conducted in China and Malaysia, reflecting strong research momentum in these regions, particularly within Education, Educational Technology, and Computer Science. Medium-impact studies showed broader geographic and disciplinary diversity, suggesting that AI-PL adoption is spreading across multiple fields and regions, though with somewhat less pronounced impact on measurable learning outcomes. Low-impact studies were smaller in scale and scope, indicating exploratory or emerging efforts whose educational effectiveness may be context-specific or limited by methodological rigor. The dominance of Asia in AI-PL research reflects the region’s significant investment in educational technology and innovation, with China accounting for almost half of the reviewed studies [29,35]. The geographical concentration of AI-PL research in China reflects a confluence of policy-driven initiatives, infrastructural advantages, and academic ecosystem dynamics. First, with China’s substantial investment in AI research and its strategic emphasis on integrating advanced technologies into educational frameworks, AI-driven education has become a robust domestic focus [35,38]. For instance, under government guidance, the industrialization of AI technology has advanced markedly, and the Chinese government has deployed various forms of support to foster enterprise innovation in AI, reducing educational redundancy and enhancing the effectiveness of artificial intelligence in the educational domain [53]. Second, as host to the largest higher education system in the world, with 44.3 million higher education students [54], China enjoys a structural advantage for data-driven AI applications, providing training datasets on a scale unmatched in Western contexts. Third, differences between Eastern and Western academic ecosystems profoundly influence the distribution of research directions.
Chinese researchers may have a publication preference for high-quality journals, potentially linked to the research assessment system where Q1 journal papers carry significant weight in university research evaluations, especially in “Double First-Class” universities [55].
Meanwhile, countries like South Korea, India, and Malaysia demonstrate their growing capacity for impactful contributions, leveraging AI to address unique educational challenges. However, limited representation from regions like North America and South America highlights disparities in resource allocation and technological infrastructure [48]. Europe’s moderate presence, particularly from Italy and Sweden, underscores a focus on ethical and multidisciplinary AI applications [32,41]. The scattered distribution outside Asia indicates that while there is global engagement with AI in personalized learning, the efforts are not yet as concentrated or extensive as those in Asia. Overall, the global distribution of authors signifies an expanding and diversified effort to integrate AI into personalized learning across higher education. However, the regional imbalance highlights the need for increased collaboration and knowledge exchange between dominant regions like Asia and other parts of the world to foster a more balanced and inclusive advancement of AI-driven personalized learning. For instance, China’s AI systems, motivated by government policies, have significantly reformed education by integrating AI into curricula and mandating partnerships between AI companies and educational institutions [56]. Conversely, Europe’s adherence to the General Data Protection Regulation (GDPR) emphasizes data privacy, influencing AI deployment in education to prioritize user consent and data protection [57]. Contrasting China’s centralized AI systems with Europe’s GDPR-compliant models can yield transferable insights, such as balancing innovation with privacy, to guide global AI integration in education.
The interdisciplinary nature of AI-PL research is evident in contributions from 13 distinct fields. Education and computer science dominate, reflecting their foundational role in developing and applying AI technologies for personalized learning. This dominance is unsurprising, given that education provides the pedagogical framework necessary for implementing AI-driven personalized learning strategies. Education research prioritizes accessibility and learner engagement, examining how AI tools address diverse learning needs [7,37]. Conversely, computer science offers the technical expertise required to develop and refine AI algorithms and systems, which advances the technical infrastructure, such as predictive models and adaptive algorithms [29,45]. By integrating perspectives from psychology, social sciences, and educational technology, these studies address the complex interplay between technology and human factors in learning processes [42,51]. For example, one study demonstrates how AI tools can influence students’ creativity and emotional engagement, highlighting the importance of considering psychological dimensions in the design and implementation of AI-driven educational tools [38]. This interdisciplinary approach not only enriches the research but also ensures that AI applications are holistic and considerate of diverse learner needs. Engineering-focused studies, for instance, are pivotal in developing advanced course recommendation systems and adaptive learning technologies that enhance the personalization of learning experiences [43,48]. Similarly, research in the arts explores innovative ways to use AI to foster creativity and engagement, thereby broadening the scope of personalized learning beyond traditional academic subjects [33]. Social Sciences research plays a critical role in addressing the ethical and societal implications of AI in education, ensuring that AI applications are fair, responsible, and beneficial to all stakeholders [30]. 
Despite this breadth, areas like nursing and language education remain underexplored, signaling opportunities for expanding research into specialized domains where AI could address unique challenges [8,44]. At the same time, fostering greater geographical diversity is critical to ensure a truly inclusive research landscape. Underrepresented regions such as Africa and South America bring unique cultural and educational contexts that could enrich the understanding of AI applications. Expanding collaborative networks and ensuring equitable resource distribution would not only enhance diversity but also generate innovative approaches to AI integration, ultimately advancing the global sustainability and inclusivity of personalized learning.

4.2. Research Methods, Sample Sizes, and Data Sources

Impact patterns indicate that high-impact studies tend to balance methodological rigor with sufficient sample sizes to achieve robust, generalizable findings, while medium- and low-impact studies explore broader methodological diversity but may face limitations in scale or consistency, highlighting an ongoing trend toward methodological refinement in AI-enhanced personalized learning research. The methodological approaches in AI-PL research reveal a strong emphasis on quantitative studies, particularly survey-based research. This approach accounts for over half of the reviewed studies, which is likely attributable to the scalability and generalizability that quantitative methods offer, allowing researchers to analyze large datasets and identify significant patterns in learner behaviors and outcomes [7,28]. Computational methods, such as machine learning and predictive analytics, are also prevalent, enabling researchers to analyze large datasets and uncover trends in personalized learning outcomes [39,45]. However, the relatively limited application of experimental and quasi-experimental designs highlights a gap in rigorous causal investigations. These designs are critical for assessing the effectiveness of AI interventions under controlled conditions [35,46]. Qualitative methods, though less common, provide valuable insights into learners’ and educators’ experiences with AI, often complementing quantitative data to offer a richer understanding of the technology’s impact [32,44]. By integrating quantitative and qualitative data, mixed methods studies provide a more holistic understanding of AI’s role in personalized learning, capturing both statistical trends and individual experiences [27,38]. This approach is particularly valuable in addressing the multifaceted challenges of AI integration, ensuring that technological advancements are aligned with pedagogical goals and learner needs. 
Future research should adopt mixed-method approaches to balance empirical rigor with contextual depth, addressing the complexities of AI integration in education.
The diversity in sample sizes across the reviewed studies reflects the flexibility of AI-PL research in addressing various research objectives. Small sample sizes, typically involving fewer than 50 participants, are commonly used in exploratory or pilot studies to gain detailed insights into specific educational settings [33,44]. These smaller studies provide rich, nuanced understandings of individual learner experiences and the effectiveness of AI tools in personalized learning settings, thereby offering depth that large-scale quantitative studies may overlook. Moderate sample sizes, ranging from 300 to 999 participants, are the most prevalent, striking a balance between feasibility and generalizability [30,31]. Large-scale studies, leveraging datasets with over 1000 participants, exemplify the potential for big data analytics to enhance personalized learning [27,34]. The presence of a mega-scale study with over 5000 participants highlights the emerging trend of leveraging big data analytics in personalized learning research [43]. Such studies utilize advanced machine learning techniques and predictive modeling to analyze large datasets, enabling the identification of significant trends and the development of scalable AI solutions. This approach not only enhances the statistical robustness of findings but also facilitates the application of AI tools across extensive educational contexts, promoting scalability and adaptability.
The reviewed studies highlight a predominant reliance on surveys and secondary datasets as primary data sources. Surveys, utilized in 17 studies, are particularly effective for capturing large-scale learner perspectives on AI-PL systems, emphasizing subjective experiences and usability metrics [7,28]. Surveys and questionnaires facilitate the collection of structured, standardized data, enabling robust statistical analyses which inform the effectiveness of AI interventions in diverse learning environments [30,31]. Secondary datasets, used in six studies, demonstrate the growing adoption of machine learning techniques to analyze pre-existing data, such as academic performance metrics and behavioral logs [36,39]. By utilizing pre-collected data, researchers can conduct large-scale analyses without the logistical challenges of primary data collection, thereby enhancing the depth and breadth of their investigations. Conversely, interviews and multimedia data are less frequently employed, appearing in three and two studies, respectively, but provide valuable qualitative insights into learner interactions and experiences [37,44]. Social media data, utilized in only one study, reflects an emerging trend of leveraging digital platforms for real-time educational analytics [29]. This trend aligns with the increasing digitization of education, where understanding online behaviors and interactions can provide valuable insights into the effectiveness and adoption of AI tools in real-world settings.

4.3. Research Themes and Types of AI Algorithms

Thematic trends in AI-PL research highlight its multidimensional nature, with seven dominant themes emerging from the reviewed studies. High-impact research tends to concentrate on high-value educational applications employing advanced AI algorithms, medium-impact studies explore both thematic and algorithmic diversity, and low-impact studies display narrower focus and less innovative algorithmic use, revealing clear trends in both research priorities and technological approaches within AI-enhanced personalized learning. AI Tools and Applications in Education, the most prevalent theme, focuses on practical implementations like learning management systems, recommendation algorithms, and adaptive feedback mechanisms [28,29]. The studies within this category often focus on the integration of AI into classroom activities, its effectiveness in supporting learning outcomes, and its potential to transform instructional design. These tools demonstrate AI’s capacity to optimize teaching strategies and improve learning achievement. AI in Personalized Learning, a closely related theme, emphasizes tailoring educational experiences to individual needs, leveraging adaptive learning systems to provide customized feedback and resources [7,27]. This theme encompasses studies investigating AI-driven adaptive learning systems, personalized feedback mechanisms, and the customization of curriculum to address diverse learner profiles. Ethical and social considerations have also gained traction, with studies exploring issues such as algorithmic bias, data privacy, and emotional impacts on learners [30,38]. The broader impacts of AI integration, including ethical concerns, psychological effects on learners, and the socio-cultural implications of widespread AI adoption in education, likewise constitute a prominent theme. Emerging themes, such as AI in STEM and Language Education, highlight discipline-specific applications but remain underexplored relative to broader themes [8,36].
Niche themes, including Generative AI and Supporting Students with Special Needs, reflect innovative directions but require further investigation to ensure equitable AI integration [40,41]. The trends indicate a progression towards more sophisticated, integrative, and ethically conscious approaches to AI, highlighting the dynamic and multifaceted nature of AI’s role in transforming higher education.
The reviewed studies reveal the extensive application of diverse AI algorithms, showcasing the field’s adaptability in addressing educational challenges. Supervised machine learning emerges as the most widely used algorithm, particularly for tasks involving predictive analytics and learner classification [29,35]. This aligns with the goal of many studies to enhance educational outcomes through data-driven decision-making and tailored interventions. Natural Language Processing (NLP) is also prominent, facilitating applications in language education and academic writing by offering real-time feedback and automated content analysis [7,8]. Furthermore, emerging algorithms, such as deep learning and hybrid systems, represent a shift towards more sophisticated models capable of analyzing unstructured data and integrating multiple functions [38,39]. Generative AI tools, while less common, demonstrate potential for dynamic content creation and personalized learning materials [40,42]. The incorporation of recommendation algorithms, generative AI, and hybrid AI systems demonstrates a shift towards creating more interactive and personalized learning environments, where AI not only predicts outcomes but also generates tailored content and integrates multiple AI techniques for enhanced functionality. However, the limited use of rule-based AI and predictive modeling suggests that these approaches are still niche, potentially due to their reliance on predefined rules and specific data requirements, which may limit their flexibility and scalability in diverse educational contexts [37,51]. The increasing reliance on advanced algorithms, such as deep learning and generative AI, highlights the field’s commitment to innovation while underscoring the need for rigorous evaluations of their educational impact.
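To ground the recommendation-algorithm family discussed above, the sketch below shows a minimal content-based recommender: courses are ranked by cosine similarity between a learner's interest profile and topic vectors for each course. The topic dimensions, course names, and vector values are all invented for illustration; production systems in the reviewed studies use learned embeddings and interaction histories rather than hand-set vectors.

```python
# Illustrative sketch of content-based course recommendation: rank courses
# by cosine similarity to a learner profile. All vectors and course names
# are hypothetical, not taken from any reviewed study.
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Hypothetical topic dimensions: (programming, statistics, pedagogy)
courses = {
    "Intro to Machine Learning": (0.9, 0.8, 0.1),
    "Learning Analytics":        (0.5, 0.7, 0.6),
    "Classroom Assessment":      (0.1, 0.3, 0.9),
}

def recommend(profile, catalog):
    """Return course names ranked by similarity to the learner profile."""
    return sorted(catalog, key=lambda name: cosine(catalog[name], profile),
                  reverse=True)

learner = (0.8, 0.6, 0.2)  # leans toward programming and statistics
print(recommend(learner, courses))
```

The same ranking skeleton underlies hybrid systems, which blend such content scores with collaborative signals or generative components before presenting a personalized list.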

4.4. Pedagogical Paradigms in AI-Supported Personalized Learning

Pedagogical paradigms in AI-supported personalized learning are predominantly implicit and constructivist in orientation, while explicitly theorized and innovative frameworks remain underutilized. First, the implicit constructivist mainstream lacks clarity and depth. Most studies operationalize personalization via adaptive sequencing, feedback loops, or self-regulatory mechanisms without explicitly articulating their pedagogical underpinnings [7,29,33,34,35]. Such designs often equate PL with efficiency gains and user engagement while under-specifying instructional mechanisms (e.g., types of scaffolding, modes of social orchestration). As a result, they tend to focus on proximal indicators, such as learner perceptions or platform usage. Although these implementations imply a learner-centered, constructivist logic, their lack of explicit theoretical framing reduces transparency and replicability, making cross-study synthesis of instructional mechanisms more difficult.
Second, explicitly theorized studies demonstrate richer objectives and alignment between pedagogy and AI design. A smaller group of studies explicitly adopts and operationalizes frameworks such as Activity Theory, Generative Learning Theory, post-humanism, Human-Centered AI, design thinking, and competency-based education [10,32,37,40,43]. This explicit theory–design mapping also enhances transparency by specifying how AI functionalities, such as feedback timing, recommendation logic, or collaborative task structuring, serve pedagogical intent, enabling richer evaluation of higher-order learning outcomes like creativity and ethical reasoning.
Third, notable gaps remain in inclusion-oriented paradigms and theory–tool integration. The literature rarely incorporates socioformation or Universal Design for Learning in an explicit, systematic manner. An exception is Zhong, Luo [51], whose implicit use of Basic Psychological Needs theory gestures toward socioformative goals. When inclusion is considered (e.g., Zingoni, Taborri [41]’s focus on dyslexia support), it is primarily as assistive technology rather than as part of a theoretically framed inclusive pedagogy. Similarly, while adoption and acceptance models (e.g., TAM, UTAUT, TPB) are used to explain user engagement, they are seldom integrated with learning theories that specify how personalization should be pedagogically enacted [27,28,30,31,42]. This disconnect limits the field’s capacity to address broader educational aims—particularly in underserved or non-Western contexts—and to design AI-supported PL that is both inclusive and pedagogically robust.
Future research on pedagogical paradigms in AI-supported personalized learning should prioritize explicit theorization and operationalization of instructional frameworks. Rather than relying on implicit constructivist assumptions, systems should deliberately embed paradigms such as socioformation, Universal Design for Learning (UDL), or generative learning, aligning adaptive algorithms with concrete pedagogical objectives—for example, incorporating collaborative knowledge-building analytics to scaffold social learning processes [58]. In parallel, adoption models like UTAUT must be paired with robust educational theories such as design thinking or activity theory, thereby enabling evaluation not only of user acceptance but also of underlying learning mechanisms and reflective practices [59]. Finally, AI-PL systems should be assessed against inclusive and higher-order goals—critical thinking, creativity, equity, and emotional regulation—rather than performance metrics alone. For instance, a UDL-informed AI tutor could integrate multimodal resources and scaffolded challenges tailored to neurodiverse learners, while systematically measuring growth in autonomy, engagement, and decision-making [60].
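As an illustration of the UDL-informed adaptivity proposed above, the sketch below selects a primary resource modality from a learner profile while retaining alternatives (multiple means of representation). Profile fields, resource labels, and the selection rule are hypothetical.

```python
# Hypothetical sketch of UDL-informed modality selection: the tutor
# picks a primary representation from a learner profile but keeps the
# other modalities available so the learner can switch at any time.

RESOURCES = {
    "text": "annotated reading with adjustable font and spacing",
    "audio": "narrated walkthrough of the same concept",
    "visual": "interactive diagram with step-by-step highlights",
}

def choose_modality(profile):
    """Return an ordered list of modalities: preferred first, with
    alternatives retained (UDL: multiple means of representation)."""
    preferred = profile.get("preferred_modality", "text")
    if profile.get("dyslexia") and preferred == "text":
        preferred = "audio"  # offer a non-text primary representation
    rest = [m for m in RESOURCES if m != preferred]
    return [preferred] + rest

plan = choose_modality({"preferred_modality": "text", "dyslexia": True})
print(plan[0], "->", RESOURCES[plan[0]])
```

A fuller system would pair such selection with the systematic measurement of autonomy, engagement, and decision-making growth that the paragraph above calls for.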

4.5. Sustainable Development and Equity

The corpus reveals a persistent disconnection between the technical/learning-outcome focus of most AI-PL research and the broader social objectives of sustainable development and educational equity. Explicit integration of frameworks such as the United Nations Sustainable Development Goals (SDG 4) remains rare, and only a small number of studies place equity considerations at the core of their design or evaluation. Where issues of equity and inclusion are made explicit, the literature offers promising yet limited examples. Research targeting accessibility—such as Zingoni, Taborri [41] on dyslexia support, Ou, Stöhr [32] on adaptive AI learning tools for students with reading difficulties, ADHD, autism, or second-language needs, and Țală, Muller [40] on custom AI approaches aligned with UNESCO/OECD guidelines—demonstrates how personalization can shift from performance optimization toward assistive and socially responsive functions. These works highlight practical mechanisms, including adaptive scaffolds, multimodal delivery, and institutional policy integration, that can broaden participation. However, such efforts remain a minority, often confined to narrow user groups or institutional settings, and rarely operationalize SDG targets into measurable educational outcomes, such as retention rates or equitable attainment.
Prevailing studies emphasize academic and technical indicators such as model accuracy, learner engagement, and adoption rates, whereas equity and public-good considerations receive only peripheral attention. Studies focused on prediction, recommendation, or adoption frequently omit disaggregated analyses by socio-economic status, disability, or geographic location, limiting the ability to assess distributive impacts [27,28,39,45]. Ethical or fairness issues are occasionally acknowledged [7,30,51], yet such concerns often do not translate into design safeguards or targeted evaluation. Even when the digital divide is mentioned [10,38,47], the discussion is typically brief and lacks empirical measurement in underserved contexts, resulting in a body of work that is methodologically sophisticated in technical validation but underdeveloped in assessing whether AI-PL advances equity-related outcomes.
Advancing AI-supported personalized learning requires positioning sustainable development and equity at the center of research design and evaluation [61,62]. This entails embedding SDG-aligned indicators—such as equitable participation, retention, and inclusive attainment—into study frameworks, with systematic disaggregation of outcomes by socio-economic status, disability, language proficiency, and geography [63]. Participatory and co-design approaches with marginalized learners, as illustrated in Zingoni, Taborri [41] and Ou, Stöhr [32], are essential to ensure contextual relevance and to align innovation with the lived realities of target groups. Equally important are longitudinal and multi-site field trials in resource-constrained settings, which can establish scalability, external validity, and uncover unintended distributional effects. For example, explicitly setting equity targets, co-developing AI tools with underserved learners, and tracking both achievement and access patterns across multiple terms would enable AI-PL initiatives to function not merely as technological advances but as drivers of inclusive and sustainable educational transformation [64].
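The disaggregation argued for here can be sketched in a few lines: computing retention and attainment per subgroup exposes equity gaps that an aggregate average conceals. Records, group labels, and values below are invented for illustration.

```python
# Hypothetical sketch of equity-oriented disaggregation: per-subgroup
# retention rate and mean attainment, rather than a single aggregate.
# All records and group labels are invented.

records = [
    {"group": "low_income", "retained": True,  "score": 62},
    {"group": "low_income", "retained": False, "score": 48},
    {"group": "high_income", "retained": True, "score": 78},
    {"group": "high_income", "retained": True, "score": 71},
]

def disaggregate(records, key="group"):
    """Per-subgroup retention rate and mean score."""
    by_group = {}
    for r in records:
        by_group.setdefault(r[key], []).append(r)
    return {
        g: {
            "retention": sum(r["retained"] for r in rows) / len(rows),
            "mean_score": sum(r["score"] for r in rows) / len(rows),
        }
        for g, rows in by_group.items()
    }

stats = disaggregate(records)
# The equity-relevant gap that an aggregate average would hide:
gap = stats["high_income"]["mean_score"] - stats["low_income"]["mean_score"]
print(stats)
print("attainment gap:", gap)
```

Tracking such gaps across terms, as suggested above, would let AI-PL initiatives report distributional effects rather than overall means alone.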

4.6. Instructional Innovation Strategies

The integration of AI into personalized learning shows a marked divergence between studies adopting innovative instructional strategies—such as PBL, STEAM, and gamification—and those retaining conventional, teacher-led approaches, with the former generally demonstrating greater pedagogical depth and learner engagement. Studies employing innovative strategies often describe richer, multi-dimensional learning experiences, where AI acts as both a scaffolding and orchestration mechanism. For example, AI-supported project-based designs facilitated complex task decomposition, collaborative grouping, and adaptive feedback loops that sustained inquiry and creativity over extended periods [38,44]. In gamification contexts, AI-driven adaptivity enhanced challenge–skill balance, promoted motivation, and enabled real-time personalization [35]. STEAM-oriented applications leveraged AI for dynamic resource recommendation and cross-disciplinary content integration, aligning technological affordances with creative problem-solving goals [40]. Such studies typically provided at least partial documentation of instructional design, activity sequencing, and AI–pedagogy alignment, yielding more compelling evidence of enhanced engagement, skill development, and learner autonomy compared to control or baseline conditions.
By contrast, research that retained traditional instructional paradigms—often characterized by lecture-based delivery or individual drill-and-practice—tended to integrate AI as a peripheral enhancement rather than as a co-driver of pedagogical change. In these cases, AI tools were frequently limited to automating feedback, recommending resources, or providing surface-level adaptivity without reconfiguring the underlying learning model [36,39,45]. Although such approaches occasionally reported efficiency gains or modest improvements in test performance, they rarely demonstrated broader competencies such as collaboration, creativity, or critical thinking. Moreover, the absence of detailed implementation protocols and the lack of authentic, problem-based tasks limited the capacity of these studies to show transformative learning effects. This contrast underscores that AI’s potential to amplify personalized learning is most fully realized when paired with thoughtfully designed, innovative instructional strategies rather than appended to conventional formats.
Moving beyond surface-level adaptivity, future studies need to investigate how AI can be systematically embedded into the design and enactment of innovative pedagogies. This entails not only using AI for automating feedback or sequencing content but also aligning it with deeper instructional redesigns, such as AI-supported project scaffolds that foster long-term inquiry, or adaptive gamification systems that sustain learner motivation across diverse contexts [65]. For example, a socioformative project could leverage AI to dynamically group learners based on evolving competencies and generate tailored collaborative tasks, thereby addressing both cognitive growth and social equity [66]. By advancing such models, future work can better demonstrate how AI contributes to transformative learning outcomes that extend beyond efficiency or test performance.
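A minimal sketch of the dynamic, competency-based grouping imagined above: learners are ranked by an evolving competency score and dealt round-robin into groups so that each group mixes stronger and weaker peers. Names, scores, and the grouping heuristic are hypothetical.

```python
# Hypothetical sketch of competency-based dynamic grouping. Ranking
# learners by score and dealing them round-robin yields heterogeneous
# groups; re-running it as scores evolve regroups learners over time.

def form_groups(scores, n_groups):
    """scores: dict name -> competency score. Returns a list of groups
    (lists of names), each mixing competency levels."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    groups = [[] for _ in range(n_groups)]
    for i, name in enumerate(ranked):
        groups[i % n_groups].append(name)  # round-robin deal
    return groups

scores = {"ana": 0.9, "ben": 0.4, "chen": 0.7, "dina": 0.2,
          "emil": 0.8, "fay": 0.5}
groups = form_groups(scores, 2)
print(groups)
```

A socioformative implementation would additionally generate tailored collaborative tasks for each group, addressing the cognitive and equity aims noted above.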

4.7. Impacts of AI on Personalized Learning Outcomes and Higher-Order Skills

AI-driven personalized learning has demonstrated promising positive effects on both academic performance and higher-order skills, particularly in contexts employing rigorous designs. Experimental and quasi-experimental studies show significant gains in domain-specific performance: for instance, mathematics, language, and social science scores improved post-intervention [35], while AI-supported art systems yielded enhanced artwork quality based on rubric assessments [33]. In writing contexts, AI-assisted tools enhanced content quality, albeit raising concerns about originality [49]. Additionally, learning-analytics research reveals that students who engage with cognitively substantive AI-facilitated discussions, measured via ML classifiers of forum posts, tend to achieve higher final grades [29]. Moreover, many studies document perceived development of higher-order capacities, such as critical thinking, creativity, metacognition, self-direction, and emotional regulation, through qualitative and survey-based designs [8,10,32,34,38,44,46]. These findings reflect students’ subjective experience of enhanced cognitive and emotional engagement, though they are limited by self-report and the absence of performance validation.
However, alongside these benefits, several studies warn of detrimental effects arising from over-reliance on AI. Learners may experience diminished critical thinking, loss of originality, or weakened cognitive autonomy [7,37,40]. In terms of psychological challenges, continuous monitoring and frequent AI assessments may increase performance anxiety, and the lack of human presence can further deepen emotional disengagement [38]. As for teacher-facing barriers, educators often struggle to adapt to AI-driven learning environments because they lack sufficient training and guidance on how to integrate AI tools effectively into their teaching methods [10]. Moreover, AI-driven education can diminish teacher-student interaction, making it harder for educators to provide the emotional and psychological support essential for effective learning [38]. Empirical evidence even suggests neurological disengagement: Javadi, Emo [67] showed that when individuals followed external guidance rather than planning autonomously, hippocampal and lateral prefrontal activations were markedly reduced, indicating diminished engagement of higher-order cognitive networks. By analogy, excessive reliance on AI in learning may similarly attenuate neural involvement in critical thinking, problem-solving, and creative reasoning. Complementary literature emphasizes the risk of cognitive offloading, where habitual AI reliance reduces independent decision-making and analytical thinking [68].
Crucially, future inquiry must design studies that intentionally integrate AI as a thought partner rather than a cognitive crutch. For example, an intervention could embed “ask-don’t-solve” AI interactions—where learners must critique AI-generated suggestions before adopting them—within PBL units, measuring both creative problem-solving outcomes and measures of metacognition. These approaches align with curricular frameworks advocating first scaffolding deep thinking before AI use, thereby safeguarding higher-order cognitive engagement [69]. Additionally, longitudinal mixed-methods studies ought to assess whether these interventions support sustained critical thinking and emotional regulation over time. In doing so, the literature can shift from describing AI’s surface benefits to demonstrating how AI can genuinely cultivate deep learning and higher-order capacities when embedded in balanced, ethically framed learning designs.
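The “ask-don’t-solve” interaction described above can be sketched as a simple gate that withholds an AI suggestion until the learner submits a substantive critique. The word-count threshold and keyword proxy are crude illustrative assumptions; a real system would use far richer discourse analysis.

```python
# Hypothetical sketch of an "ask-don't-solve" gate: an AI-generated
# suggestion is released for adoption only after the learner submits
# a substantive critique. Thresholds and keywords are invented proxies.

def critique_is_substantive(critique, min_words=15):
    """Crude proxy: the critique must be long enough and contain at
    least one evaluative move (real systems would analyze discourse)."""
    evaluative = {"because", "however", "assumes", "evidence", "instead"}
    words = critique.lower().split()
    return len(words) >= min_words and bool(evaluative & set(words))

def adopt_suggestion(suggestion, critique):
    """Withhold the suggestion until the critique passes the gate."""
    if critique_is_substantive(critique):
        return {"adopted": suggestion, "status": "accepted_after_critique"}
    return {"adopted": None, "status": "critique_required"}

weak = adopt_suggestion("use a bar chart", "looks fine to me")
strong = adopt_suggestion(
    "use a bar chart",
    "this works because categories are discrete however it assumes "
    "counts matter more than trends so a line chart could fit instead",
)
print(weak["status"], "/", strong["status"])
```

Embedded in a PBL unit, such a gate would generate the paired measures the paragraph above calls for: problem-solving outcomes alongside logged critique quality as a metacognition indicator.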

4.8. Interdisciplinary and Transdisciplinary Collaboration

The varied landscape of disciplinary collaboration across the reviewed literature reveals both promising integrative models and largely parallel disciplinary efforts. Notably, several projects exemplify deep interdisciplinarity: for instance, Zingoni, Taborri [41] combine psychology, engineering, and special-education expertise to co-design machine-learning tools tailored for students with dyslexia, ensuring both technological precision and pedagogical accessibility. Similarly, Dann, O’Neill [52] integrate educational psychology and computer science to develop microskill-assessment systems, allowing real-time feedback on teaching behaviors—a design that would have been impossible without cross-domain synergy. These collaborations rigorously influence study design: educational researchers define the learning constructs, engineers operationalize those constructs algorithmically, and iterative testing embeds findings in authentic instructional practice. These collaborative arrangements not only strengthen the rigor of system design but also expand interpretive depth, allowing educational inquiries to be anchored in robust computational frameworks while simultaneously informed by cognitive and behavioral theories. Conversely, many studies feature multidisciplinary authorship without detailing integrative mechanisms—reducing interdisciplinary contributions to parallel interpretations rather than co-constructed insights [27,38].
Nevertheless, collaboration remains uneven across disciplines, and several domains with high potential remain on the margins of AI-supported personalization. Kong, Ning [44] present a nursing education example where AI + project-task learning could enhance critical thinking and technical communication; however, they do not foreground engineering or learning science collaborations that could deepen design and evaluation. Likewise, language learning domains offer rich potential: Goulart, Matte [8] explore AI effects on writing but fall short of incorporating applied linguistics, psycholinguistics, and AI design in a joint framework. While technology-intensive pairings dominate, domains such as nursing, language education, and the arts remain underrepresented despite their clear scope for applying AI personalization to domain-specific challenges (e.g., clinical decision training or multilingual literacy development).
To advance the field, future studies should actively foster transdisciplinary design teams that include domain specialists (e.g., nursing educators, language acquisition experts, ethicists) alongside engineers and learning scientists [70]. For example, in nursing education, a project could co-develop AI-enhanced clinical simulation modules where nursing faculty define critical thinking checklists, learning scientists craft reflection prompts, and engineers build adaptive scenario generators. In language learning, teams combining SLA researchers with NLP engineers could co-build tools that adapt prompts to learners’ proficiency levels and error patterns [71]. Empirically, these collaborations should be evaluated using mixed methods—combining performance metrics, domain-aligned rubrics, and qualitative usability feedback—to document how disciplinary integration enhances learning outcomes, authenticity, and equity. Pursuing such approaches will shift AI-personalization research from techno-centric prototypes to educational innovations that are contextually grounded, culturally sensitive, and pedagogically robust [46].

5. Study Limitations

This study has several limitations that should be acknowledged. First, the analysis concentrated exclusively on SSCI Q1 articles indexed in the Web of Science, which ensured the inclusion of high-quality and widely cited research but may have introduced selection bias by excluding valuable insights from Q2–Q4 journals, regionally influential publications, or emerging but less-cited studies. The decision to apply the Q1 filter at the full-text assessment stage rather than during initial screening may also have led to inconsistencies in selection. Moreover, Early Access publications were excluded to maintain data consistency and replicability, as these articles typically lack finalized bibliographic metadata (e.g., volume, issue, pagination) and their citation metrics remain volatile across databases. However, we acknowledge that this decision may have led to the omission of the most recent and potentially cutting-edge research, which often appears in Early Access format months before its formal publication. This trade-off between data stability and timeliness is a limitation of the present review. Second, the findings are shaped by the classification and coding strategies used, which could influence the interpretation of results, as with any qualitative synthesis. Third, the reliance on secondary data constrains the ability to evaluate the real-time applicability of AI tools in diverse educational settings. While the findings primarily apply to higher education contexts with established AI infrastructure, their generalizability to K–12 education or to regions with limited technological capacity remains uncertain. Future research could extend its scope by incorporating multiple journal tiers, diverse educational contexts, real-world cases, and Early Access publications, while triangulating quality assessment through complementary metrics to balance timeliness, rigor, and external validity.

6. Conclusions

This study systematically reviewed the integration of Artificial Intelligence (AI) in personalized learning (PL) within higher education by examining SSCI Q1 articles. The review mapped global research trends, pedagogical paradigms, sustainability and equity considerations, instructional strategies, learning outcomes, and interdisciplinary collaboration, highlighting both achievements and persistent gaps. Findings indicate that research on AI-driven personalized learning remains geographically concentrated in Asia, with China at the forefront, and is primarily situated within education and computer science. While the field has demonstrated methodological advancement and increasing algorithmic sophistication, explicit pedagogical framing and systematic attention to equity remain underdeveloped. Evidence suggests that AI has strong potential to enhance innovative instructional strategies and higher-order skills, yet risks of cognitive disengagement and diminished autonomy persist when pedagogical and ethical safeguards are absent. Moreover, interdisciplinary collaborations are emerging but not yet fully institutionalized, limiting the capacity to design AI systems that balance technical precision with educational and humanistic goals. The review yields six key conclusions that synthesize findings across the research questions:
  • Geographically and disciplinarily, research remains concentrated in Asia—especially China—while education and computer science dominate the disciplinary landscape; methodologically, studies privilege quantitative designs and supervised learning algorithms, with high-impact work marked by stronger rigor and generalizability.
  • In terms of pedagogical paradigms, most studies are implicitly guided by constructivism, while explicit theoretical grounding is less common but yields clearer AI–pedagogy alignment and richer educational outcomes.
  • With respect to sustainable development and equity, only a subset of research systematically engages with accessibility or SDG-related metrics, as most studies remain focused on technical performance and proximal learning outcomes.
  • Regarding instructional innovation, AI is most effective when integrated with approaches such as PBL, STEAM, gamification, or UDL, though it is still largely applied as a technical add-on rather than a driver of pedagogical transformation.
  • In evaluating learning outcomes, AI-enhanced PL shows potential to improve academic performance and higher-order skills, but risks of cognitive erosion and diminished autonomy emerge when AI is used uncritically.
  • From the perspective of interdisciplinary integration, while collaborations across education, psychology, and computer science are growing, fully integrative transdisciplinary projects remain rare and are concentrated in a limited number of exemplars.
Taken together, these findings suggest that the future of AI-supported personalized learning requires stronger theoretical grounding, systematic integration of sustainable development and equity objectives, and genuine interdisciplinary co-design. To advance AI adoption in higher education beyond efficiency toward inclusivity, critical thinking, and broader social impact, research agendas should explicitly align technological innovation with pedagogical frameworks and societal imperatives, prioritizing professional development to enhance educators’ AI literacy and practical skills, while fostering collaborative efforts among academia, industry, and policymakers to develop scalable, context-sensitive solutions that meet diverse educational needs.

Author Contributions

Conceptualization: J.P. and Y.L.; writing—original draft preparation: J.P. and Y.L.; methodology: J.P. and Y.L.; investigation: J.P. and Y.L.; supervision: J.P. and Y.L.; visualization: J.P. and Y.L.; writing—review and editing: J.P. and Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The datasets used and/or analysed during the current study (the bibliography of included studies) are available from the corresponding author upon request.

Acknowledgments

The authors wish to express their profound gratitude to Shenzhen University for its invaluable support, which was instrumental in the successful execution of this research.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Almogren, A.S.; Al-Rahmi, W.M.; Dahri, N.A. Integrated technological approaches to academic success: Mobile learning, social media, and AI in visual art education. IEEE Access 2024, 12, 175391–175413. [Google Scholar] [CrossRef]
  2. Buitrago, M.; Chiappe, A. Representation of knowledge in digital educational environments: A systematic review of literature. Australas. J. Educ. Technol. 2019, 35, 46–62. [Google Scholar] [CrossRef]
  3. George, G.; Lal, A.M. PERKC: Personalized kNN with CPT for course recommendations in higher education. IEEE Trans. Learn. Technol. 2024, 17, 885–892. [Google Scholar] [CrossRef]
  4. Halkiopoulos, C.; Gkintoni, E. Leveraging AI in E-learning: Personalized learning and adaptive assessment through cognitive neuropsychology—A systematic analysis. Electronics 2024, 13, 3762. [Google Scholar] [CrossRef]
  5. Bayly-Castaneda, K.; Ramirez-Montoya, M.S.; Morita-Alexander, A. Crafting personalized learning paths with AI for lifelong learning: A systematic literature review. Front. Educ. 2024, 9, 1424386. [Google Scholar] [CrossRef]
  6. Zhang, Y.; Yun, Y.; An, R.; Cui, J.; Dai, H.; Shang, X. Educational data mining techniques for student performance prediction: Method review and comparison analysis. Front. Psychol. 2022, 12, 698490. [Google Scholar] [CrossRef] [PubMed]
  7. Chan, C.K.Y.; Hu, W. Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. Int. J. Educ. Technol. High. Educ. 2023, 20, 43. [Google Scholar] [CrossRef]
  8. Goulart, L.; Matte, M.L.; Mendoza, A.; Alvarado, L.; Veloso, I. AI or student writing? Analyzing the situational and linguistic characteristics of undergraduate student writing and AI-generated assignments. J. Second Lang. Writ. 2024, 66, 101160. [Google Scholar] [CrossRef]
  9. Mehmood, R.; Alam, F.; Albogami, N.N.; Katib, I.; Albeshri, A.; Altowaijri, S.M. Utilearn: A personalised ubiquitous teaching and learning system for smart societies. IEEE Access 2017, 5, 2611–2625. [Google Scholar] [CrossRef]
  10. Wu, D.; Zhang, S.; Ma, Z.; Yue, X.G.; Dong, R.K. Unlocking potential: Key factors shaping undergraduate self-directed learning in AI-enhanced educational environments. Systems 2024, 12, 332. [Google Scholar] [CrossRef]
  11. Grimalt-Álvaro, C.; Usart, M. Sentiment analysis for formative assessment in higher education: A systematic literature review. J. Comput. High. Educ. 2024, 36, 647–682. [Google Scholar] [CrossRef]
  12. Fathi, J.; Rahimi, M.; Derakhshan, A. Improving EFL learners’ speaking skills and willingness to communicate via artificial intelligence-mediated interactions. System 2024, 121, 103254. [Google Scholar] [CrossRef]
  13. Alotaibi, N.S. The impact of AI and LMS integration on the future of higher education: Opportunities, challenges, and strategies for transformation. Sustainability 2024, 16, 10357. [Google Scholar] [CrossRef]
  14. Fariani, R.I.; Junus, K.; Santoso, H.B. A systematic literature review on personalised learning in the higher education context. Technol. Knowl. Learn. 2023, 28, 449–476. [Google Scholar] [CrossRef]
  15. Zawacki-Richter, O.; Marín, V.I.; Bond, M.; Gouverneur, F. Systematic review of research on artificial intelligence applications in higher education—Where are the educators? Int. J. Educ. Technol. High Educ. 2019, 16, 39. [Google Scholar] [CrossRef]
  16. Yang, C.; Wang, T.; Xiu, Q. Towards a sustainable future in education: A systematic review and framework for inclusive education. Sustainability 2025, 17, 3837. [Google Scholar] [CrossRef]
  17. Melo-López, V.-A.; Basantes-Andrade, A.; Gudiño-Mejía, C.-B.; Hernández-Martínez, E. The impact of artificial intelligence on inclusive education: A systematic review. Educ. Sci. 2025, 15, 539. [Google Scholar] [CrossRef]
  18. Jaramillo, J.J.; Chiappe, A. The AI-driven classroom: A review of 21st century curriculum trends. Prospects 2024, 54, 645–660. [Google Scholar] [CrossRef]
  19. Garcia Ramos, J.; Wilson-Kennedy, Z. Promoting equity and addressing concerns in teaching and learning with artificial intelligence. Front. Educ. 2024, 9, 1487882. [Google Scholar] [CrossRef]
  20. Kirk, H.R.; Gabriel, I.; Summerfield, C.; Vidgen, B.; Hale, S.A. Why human–AI relationships need socioaffective alignment. Humanit. Soc. Sci. Commun. 2025, 12, 728. [Google Scholar] [CrossRef]
  21. Luna-Nemecio, J.; Tobón, S.; Juárez-Hernández, L.G. Sustainability-based on socioformation and complex thought or sustainable social development. Resour. Environ. Sustain. 2020, 2, 100007. [Google Scholar] [CrossRef]
  22. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef]
  23. Chu, H.C.; Hwang, G.H.; Tu, Y.F.; Yang, K.H. Roles and research trends of artificial intelligence in higher education: A systematic review of the top 50 most-cited articles. Australas. J. Educ. Technol. 2022, 38, 22–42. [Google Scholar]
  24. Braun, V.; Clarke, V. Using thematic analysis in psychology. Qual. Res. Psychol. 2006, 3, 77–101. [Google Scholar] [CrossRef]
  25. Nowell, L.S.; Norris, J.M.; White, D.E.; Moules, N.J. Thematic analysis: Striving to meet the trustworthiness criteria. Int. J. Qual. Meth. 2017, 16, 1609406917733847. [Google Scholar] [CrossRef]
  26. Tahamtan, I.; Bornmann, L. What do citation counts measure? An updated review of studies on citations in scientific documents published between 2006 and 2018. Scientometrics 2019, 121, 1635–1684. [Google Scholar] [CrossRef]
  27. Pillai, R.; Sivathanu, B.; Metri, B.; Kaushik, N. Students’ adoption of AI-based teacher-bots (T-bots) for learning in higher education. Inf. Technol. People 2024, 37, 328–355. [Google Scholar] [CrossRef]
  28. Jo, H. Understanding AI tool engagement: A study of ChatGPT usage and word-of-mouth among university students and office workers. Telemat. Inform. 2023, 85, 102067. [Google Scholar] [CrossRef]
  29. Wu, J.Y.; Hsiao, Y.C.; Nian, M.W. Using supervised machine learning on large-scale online forums to classify course-related facebook messages in predicting learning achievement within the personal learning environment. Interact. Learn. Environ. 2020, 28, 65–80. [Google Scholar] [CrossRef]
  30. Bouteraa, M.; Bin-Nashwan, S.A.; Al-Daihani, M.; Dirie, K.A.; Benlahcene, A.; Sadallah, M.; Zaki, H.O.; Lada, S.; Ansar, R.; Fook, L.M.; et al. Understanding the diffusion of AI-generative (ChatGPT) in higher education: Does students’ integrity matter? Comput. Hum. Behav. Rep. 2024, 14, 100402. [Google Scholar] [CrossRef]
  31. Dahri, N.A.; Yahaya, N.; Al-Rahmi, W.M.; Vighio, M.S.; Alblehai, F.; Soomro, R.B.; Shutaleva, A. Investigating AI-based academic support acceptance and its impact on students’ performance in Malaysian and Pakistani higher education institutions. Educ. Inf. Technol. 2024, 29, 18695–18744. [Google Scholar] [CrossRef]
  32. Ou, A.W.; Stöhr, C.; Malmström, H. Academic communication with AI-powered language tools in higher education: From a post-humanist perspective. System 2024, 121, 103225. [Google Scholar] [CrossRef]
  33. Chiu, M.-C.; Hwang, G.-J.; Hsia, L.-H.; Shyu, F.-M. Artificial intelligence-supported art education: A deep learning-based system for promoting university students’ artwork appreciation and painting outcomes. Interact. Learn. Environ. 2024, 32, 824–842. [Google Scholar] [CrossRef]
  34. Al-Zahrani, A.M.; Alasmari, T.M. Exploring the impact of artificial intelligence on higher education: The dynamics of ethical, social, and educational implications. Humanit. Soc. Sci. Commun. 2024, 11, 912. [Google Scholar] [CrossRef]
  35. Zhou, C. Integration of modern technologies in higher education on the example of artificial intelligence use. Educ. Inf. Technol. 2023, 28, 3893–3910. [Google Scholar] [CrossRef]
  36. Iatrellis, O.; Savvas, I.K.; Kameas, A.; Fitsilis, P. Integrated learning pathways in higher education: A framework enhanced with machine learning and semantics. Educ. Inf. Technol. 2020, 25, 3109–3129. [Google Scholar] [CrossRef]
  37. Lai, C.-L. Exploring university students’ preferences for AI-assisted learning environment: A drawing analysis with activity theory framework. Educ. Technol. Soc. 2021, 24, 1–15. [Google Scholar]
  38. Lin, H.; Chen, Q. Artificial intelligence (AI)-integrated educational applications and college students’ creativity and academic emotions: Students and teachers’ perceptions and attitudes. BMC Psychol. 2024, 12, 487. [Google Scholar] [CrossRef]
  39. Zheng, L.; Wang, C.; Chen, X.; Song, Y.; Meng, Z.; Zhang, R. Evolutionary machine learning builds smart education big data platform: Data-driven higher education. Appl. Soft. Comput. 2023, 136, 110114. [Google Scholar] [CrossRef]
  40. Țală, M.L.; Muller, C.N.; Nastase, I.A.; State, O.; Gheorghe, G. Exploring university students’ perceptions of generative artificial intelligence in education. Amfiteatru Econ. 2024, 26, 71–88. [Google Scholar] [CrossRef]
  41. Zingoni, A.; Taborri, J.; Calabrò, G. A machine learning-based classification model to support university students with dyslexia with personalized tools and strategies. Sci. Rep. 2024, 14, 273. [Google Scholar] [CrossRef]
  42. Chai, C.S.; Yu, D.; King, R.B.; Zhou, Y. Development and validation of the Artificial Intelligence Learning Intention Scale (AILIS) for university students. Sage Open 2024, 14, 21582440241242188. [Google Scholar] [CrossRef]
  43. Chang, H.T.; Lin, C.Y.; Jheng, W.B.; Chen, S.H.; Wu, H.H.; Tseng, F.C.; Wang, L.C. AI, please help me choose a course: Building a personalized hybrid course recommendation system to assist students in choosing courses adaptively. Educ. Technol. Soc. 2023, 26, 203–217. [Google Scholar]
  44. Kong, W.; Ning, Y.; Ma, T.; Song, F.; Mao, Y.; Yang, C.; Li, X.; Guo, Y.; Liu, H.; Shi, J.; et al. Experience of undergraduate nursing students participating in artificial intelligence plus project task driven learning at different stages: A qualitative study. BMC Nurs. 2024, 23, 314. [Google Scholar] [CrossRef] [PubMed]
  45. Singh, H.; Kaur, B.; Sharma, A.; Singh, A. Framework for suggesting corrective actions to help students intended at risk of low performance based on experimental study of college students using explainable machine learning model. Educ. Inf. Technol. 2024, 29, 7997–8034. [Google Scholar] [CrossRef]
  46. Wang, X.; Xu, X.; Zhang, Y.; Hao, S.; Jie, W. Exploring the impact of artificial intelligence application in personalized learning environments: Thematic analysis of undergraduates’ perceptions in China. Humanit. Soc. Sci. Commun. 2024, 11, 1644. [Google Scholar] [CrossRef]
  47. Gasaymeh, A.-M.M.; Beirat, M.A.; Abu Qbeita, A.A. University students’ insights of generative artificial intelligence (AI) writing tools. Educ. Sci. 2024, 14, 1062. [Google Scholar] [CrossRef]
  48. Ramírez-Correa, P.; Alfaro-Pérez, J.; Gallardo, M. Identifying engineering undergraduates’ learning style profiles using machine learning techniques. Appl. Sci. 2021, 11, 10505. [Google Scholar] [CrossRef]
  49. Wang, C.; Aguilar, S.J.; Bankard, J.S.; Bui, E.; Nye, B. Writing with AI: What college students learned from utilizing ChatGPT for a writing assignment. Educ. Sci. 2024, 14, 976. [Google Scholar] [CrossRef]
  50. Cha, S.; Loeser, M.; Seo, K. The impact of AI-based course-recommender system on students’ course-selection decision-making process. Appl. Sci. 2024, 14, 3672. [Google Scholar] [CrossRef]
  51. Zhong, W.; Luo, J.; Lyu, Y. How do personal attributes shape AI dependency in Chinese higher education context? Insights from needs frustration perspective. PLoS ONE 2024, 19, e0313314. [Google Scholar] [CrossRef]
  52. Dann, C.; O’Neill, S.; Getenet, S.; Chakraborty, S.; Saleh, K.; Yu, K. Improving teaching and learning in higher education through machine learning: ‘Proof of concept’ of AI’s ability to assess the use of key microskills. Educ. Sci. 2024, 14, 886. [Google Scholar] [CrossRef]
  53. Shi, J.; Mei, J.; Zhu, L.; Wang, Y. Estimating the innovation efficiency of the artificial intelligence industry in China based on the three-stage DEA model. IEEE Trans. Eng. Manag. 2024, 71, 9217–9228. [Google Scholar] [CrossRef]
  54. Wu, Y. More Chinese Receive Higher Education. China Daily, 18 May 2022. Available online: https://www.chinadaily.com.cn/a/202205/18/WS628447b5a310fd2b29e5d58d.html (accessed on 14 February 2025).
  55. Xu, C.Q. Towards a framework for evaluating the research performance of Chinese double first-class universities. Front. Educ. China 2020, 15, 369–402. [Google Scholar] [CrossRef]
  56. Yang, X. Accelerated move for AI education in China. ECNU Rev. Educ. 2019, 2, 347–352. [Google Scholar] [CrossRef]
  57. CNIL. AI and GDPR: The CNIL Publishes New Recommendations to Support Responsible Innovation. CNIL, 7 February 2025. Available online: https://www.cnil.fr/en/ai-and-gdpr-cnil-publishes-new-recommendations-support-responsible-innovation (accessed on 14 February 2025).
  58. Walter, Y. Embracing the future of artificial intelligence in the classroom: The relevance of AI literacy, prompt engineering, and critical thinking in modern education. Int. J. Educ. Technol. High Educ. 2024, 21, 15. [Google Scholar] [CrossRef]
  59. Saguin, E.; Salome, J.; Favodon, B.; Lahutte, B.; Gignoux-Froment, F. Validation of a didactic model evaluating the usability, usefulness and acceptability of psychological first aid teaching through simulation. BMC Med. Educ. 2024, 24, 1431. [Google Scholar] [CrossRef]
  60. Beaux, H.; Karimi, P.; Pop, O.; Clark, R. Guiding empowerment model: Liberating neurodiversity in online higher education. In Proceedings of the 2024 IEEE Frontiers in Education Conference (FIE), Washington, DC, USA, 13–16 October 2024; pp. 1–9. [Google Scholar]
  61. Aguilar-Esteva, V.; Acosta-Banda, A.; Carreño Aguilera, R.; Patiño Ortiz, M. Sustainable social development through the use of artificial intelligence and data science in education during the COVID emergency: A systematic review using PRISMA. Sustainability 2023, 15, 6498. [Google Scholar] [CrossRef]
  62. Sultana, R.; Faruk, M. Does artificial intelligence increase learners’ sustainability in higher education: Insights from Bangladesh. J. Data Inf. Manag. 2024, 6, 161–172. [Google Scholar] [CrossRef]
  63. Nedungadi, P.; Tang, K.-Y.; Raman, R. The transformative power of generative artificial intelligence for achieving the sustainable development goal of quality education. Sustainability 2024, 16, 9779. [Google Scholar] [CrossRef]
  64. Okulich-Kazarin, V.; Artyukhov, A.; Skowron, Ł.; Artyukhova, N.; Wołowiec, T. Will AI become a threat to higher education sustainability? A study of students’ views. Sustainability 2024, 16, 4596. [Google Scholar] [CrossRef]
  65. Zourmpakis, A.-I.; Kalogiannakis, M.; Papadakis, S. Adaptive gamification in science education: An analysis of the impact of implementation and adapted game elements on students’ motivation. Computers 2023, 12, 143. [Google Scholar] [CrossRef]
  66. Kerimbayev, N.; Adamova, K.; Shadiev, R.; Altinay, Z. Intelligent educational technologies in individual learning: A systematic literature review. Smart Learn. Environ. 2025, 12, 1. [Google Scholar] [CrossRef]
  67. Javadi, A.H.; Emo, Z.; Howard, L.R.; Zisch, F.E.; Yu, Y.; Knight, R.; Pinelo Silva, J.; Spiers, H.J. Hippocampal and prefrontal processing of network topology to simulate the future. Nat. Commun. 2017, 8, 14652. [Google Scholar] [CrossRef]
  68. Gerlich, M. AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies 2025, 15, 28. [Google Scholar] [CrossRef]
  69. Peng, H.; Chen, J.; Shi, Y. Exploring the effect of a flexible scaffolding for promoting deep learning in smart classrooms. Educ. Inf. Technol. 2025. Epub ahead of print. [Google Scholar] [CrossRef]
  70. El Arab, R.A.; Al Moosa, O.A.; Abuadas, F.H.; Somerville, J. The role of AI in nursing education and practice: Umbrella review. J. Med. Internet Res. 2025, 27, e69881. [Google Scholar] [CrossRef]
  71. Ziegler, N.; Meurers, D.; Rebuschat, P.; Ruiz, S.; Moreno-Vega, J.L.; Chinkina, M.; Li, W.; Grey, S. Interdisciplinary research at the intersection of CALL, NLP, and SLA: Methodological implications from an input enhancement project. Lang. Learn. 2017, 67 (Suppl. S1), 209–231. [Google Scholar] [CrossRef]
Figure 1. Flowchart of study selection.
Figure 2. Geographical distribution of included articles.
Figure 3. Examined disciplines in the included studies.
Figure 4. Frequency of research methodologies.
Figure 5. Distribution of sample size.
Figure 6. Distribution of data source.
Figure 7. Distribution of research themes.
Figure 8. Distribution of AI algorithm types.
Table 1. Synthesis of Key Findings from 29 Studies.
No. | Author/Year | Country | Discipline | Method | Sample Size | Data Source | Theme | AI Algorithm Used | Impact Grouping
1 | (Chan & Hu, 2023) [7] | China | Education, STEM, Arts, Business | Survey | 300–499 | Surveys/Questionnaires | AI in Personalized Learning | Natural Language Processing (NLP) | High
2 | (Pillai et al., 2024) [27] | India | Educational Technology, Computer Science | Mixed Method | 1000–1999 | Surveys/Questionnaires | AI in Personalized Learning | Recommendation Algorithms | High
3 | (Jo, 2023) [28] | South Korea | Educational Technology, Information Management | Survey | 500–999 | Surveys/Questionnaires | AI Tools and Applications in Education | Deep Learning | High
4 | (Wu et al., 2020) [29] | China | Education, Computer Science | Machine Learning/Algorithmic | Under 50 | Online Platforms/Social Media | AI Tools and Applications in Education | Machine Learning (Supervised) | High
5 | (Bouteraa et al., 2024) [30] | Oman, Malaysia, United Arab Emirates | Multidisciplinary (Higher Education Ethics, Social Sciences) | Survey | 500–999 | Surveys/Questionnaires | Ethical, Social, and Psychological Implications of AI | Generative AI | High
6 | (Dahri et al., 2024) [31] | Malaysia, Saudi Arabia, Russia | Education | Survey | 300–499 | Surveys/Questionnaires | AI Tools and Applications in Education | Rule-Based AI | High
7 | (Ou et al., 2024) [32] | Sweden | Multidisciplinary (Academic Communication, AI Tools) | Qualitative | 1000–1999 | Surveys/Questionnaires | AI Tools and Applications in Education | Natural Language Processing (NLP) | High
8 | (Chiu et al., 2024) [33] | China | Art Education | Quasi-experimental | Under 50 | Surveys/Questionnaires | AI Tools and Applications in Education | Deep Learning | High
9 | (Al-Zahrani & Alasmari, 2024) [34] | Saudi Arabia | Medicine, Engineering, Humanities, Business | Survey | 1000–1999 | Surveys/Questionnaires | Ethical, Social, and Psychological Implications of AI | Recommendation Algorithms | Medium
10 | (Zhou, 2023) [35] | China | Mathematics, Computer Science, Management, Sociology | Experimental | 300–499 | Surveys/Questionnaires | AI Tools and Applications in Education | Machine Learning (Supervised) | Medium
11 | (Iatrellis et al., 2020) [36] | Greece | Computer Science | Case Study | 100–299 | Existing Datasets/Secondary Data | AI in Engineering and STEM Education | Deep Learning | Medium
12 | (Lai, 2021) [37] | China | Teacher Education | Qualitative | 50–99 | Multimedia Data | AI in Personalized Learning | Rule-Based AI | Medium
13 | (Lin & Chen, 2024) [38] | China | Psychology, Education | Mixed Method | 100–299 | Surveys/Questionnaires | Ethical, Social, and Psychological Implications of AI | Hybrid AI Systems | Medium
14 | (Zheng et al., 2023) [39] | China | Computer Science | Experimental | Not Specified | Existing Datasets/Secondary Data | AI Tools and Applications in Education | Machine Learning (Unsupervised) | Medium
15 | (Țală et al., 2024) [40] | Romania | Economics | Survey | 300–499 | Surveys/Questionnaires | Generative AI and Economic Implications | Generative AI | Medium
16 | (Zingoni et al., 2024) [41] | Italy | Special Education | Survey | 1000–1999 | Surveys/Questionnaires | AI for Supporting Students with Special Needs | Machine Learning (Supervised) | Medium
17 | (Chai et al., 2024) [42] | China | Educational Technology, Psychology | Survey | 500–999 | Surveys/Questionnaires | AI in Personalized Learning | Generative AI | Medium
18 | (Wu et al., 2024) [10] | China, Cyprus, Australia | Humanities, Sciences, Arts | Survey | 300–499 | Surveys/Questionnaires | AI in Language and Writing Education | Machine Learning (Supervised) | Medium
19 | (Chang et al., 2023) [43] | China | Engineering, Computer Science, Management | Mixed Method | 5000 or More | Existing Datasets/Secondary Data | AI in Personalized Learning | Machine Learning (Unsupervised) | Medium
20 | (Kong et al., 2024) [44] | China | Nursing | Qualitative | Under 50 | Interviews | AI for Supporting Students with Special Needs | Machine Learning (Unsupervised) | Medium
21 | (Singh et al., 2024) [45] | India, Australia | Computer Science, Information Technology | Machine Learning/Algorithmic | 500–999 | Existing Datasets/Secondary Data | AI in Engineering and STEM Education | Machine Learning (Supervised) | Medium
22 | (Wang et al., 2024) [46] | China | Engineering, Computer Science, Mathematics, Economics | Qualitative | Under 50 | Interviews | AI in Language and Writing Education | Hybrid AI Systems | Medium
23 | (Gasaymeh et al., 2024) [47] | Jordan | Education | Survey | 50–99 | Surveys/Questionnaires | Ethical, Social, and Psychological Implications of AI | Natural Language Processing (NLP) | Medium
24 | (Ramírez-Correa et al., 2021) [48] | Chile | Engineering | Machine Learning/Algorithmic | 100–299 | Existing Datasets/Secondary Data | AI in Engineering and STEM Education | Machine Learning (Supervised) | Low
25 | (Wang et al., 2024) [49] | United States | Writing, Multidisciplinary | Quasi-experimental | Under 50 | Surveys/Questionnaires | AI in Personalized Learning | Natural Language Processing (NLP) | Low
26 | (Goulart et al., 2024) [8] | United States | Language Learning | Mixed Method | 50–99 | Existing Datasets/Secondary Data | AI in Language and Writing Education | Natural Language Processing (NLP) | Low
27 | (Cha et al., 2024) [50] | South Korea, Switzerland | Applied Artificial Intelligence, Computer Science | Qualitative | Under 50 | Interviews | AI Tools and Applications in Education | Recommendation Algorithms | Low
28 | (Zhong et al., 2024) [51] | China | Social Sciences, Multidisciplinary | Survey | 500–999 | Surveys/Questionnaires | Ethical, Social, and Psychological Implications of AI | Predictive Modeling | Low
29 | (Dann et al., 2024) [52] | Australia | Education | Mixed Method | Under 50 | Multimedia Data | AI Tools and Applications in Education | Hybrid AI Systems | Low
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Peng, J.; Li, Y. Frontiers of Artificial Intelligence for Personalized Learning in Higher Education: A Systematic Review of Leading Articles. Appl. Sci. 2025, 15, 10096. https://doi.org/10.3390/app151810096


