Article

Mapping Contemporary AI-Education Intersections and Developing an Integrated Convergence Framework: A Bibliometric-Driven and Inductive Content Analysis

Faculty of Education, The University of Hong Kong, Hong Kong SAR, China
* Author to whom correspondence should be addressed.
Metrics 2025, 2(4), 23; https://doi.org/10.3390/metrics2040023
Submission received: 22 August 2025 / Revised: 4 October 2025 / Accepted: 10 October 2025 / Published: 3 November 2025

Abstract

Artificial intelligence (AI) has rapidly permeated education since 2014, propelled by technological innovation and global investment, yet scholarly discourse on contemporary AI-Education intersections remains largely fragmented. The present study addresses this notable gap through a bibliometric-driven and inductive content analysis to inform future research and practice. A total of 317 articles published between 2014 and October 2024 were retrieved from the Web of Science Core Collection (WOSCC) and Scopus following the PRISMA protocol. Keyword co-occurrence and co-citation analyses with VOSviewer (version 1.6.20) were employed to visualize the intellectual structures shaping the field, while qualitative inductive content analysis was conducted to address the limitations of bibliometric methods in revealing deeper thematic insights. This dual-method approach identified four thematic clusters and eleven prevailing research trends. Subsequently, through interpretive synthesis, five interrelated research issues were identified: limited congruence between technological and pedagogical affordances, insufficient bottom-up perspectives in AI literacy frameworks, an ambiguous relationship between computational thinking and AI, a lack of explicit interpretation of AI ethics, and limitations of existing professional development frameworks. To address these gaps pragmatically, thirty issue-specific recommendations were consolidated into five overarching themes, culminating in the Integrated AI-Education Convergence Framework. This framework advocates for pedagogy-centric, ethically grounded, and contextually responsive AI integration within interdisciplinary educational research and practice.

Graphical Abstract

1. Introduction

The intersections between artificial intelligence (AI) and education can be traced back several decades, with foundational efforts emerging as early as the 1970s [1]. While these early developments laid important groundwork, the recent proliferation of complex, data-driven AI systems has been catalyzed by advances in hardware architectures and computational infrastructures—particularly since 2014 [2,3,4]. These technological breakthroughs have enabled the deployment of increasingly sophisticated AI applications across diverse educational contexts [5,6].
In parallel, the rapid diffusion of AI technologies has prompted global discourse on societal and economic implications, especially with regard to youth and education [7,8]. As the field continues to evolve, there is a growing need to systematically map the research landscape at the contemporary intersections between AI and education, identify prevailing research trends (e.g., topical emphases, methodological patterns), and delineate cross-cutting conceptual and methodological gaps. In response, the present study examines these contemporary intersections, seeking to contribute to a deeper understanding of this rapidly transforming field.
To achieve this goal, the study conducts a bibliometric-driven and inductive content analysis. While bibliometric methods are frequently employed to reveal overarching patterns within the literature, they sometimes struggle to capture deeper thematic insights or interpret complex, cross-cutting issues. To address this limitation, the study integrates bibliometrics with inductive content analysis. This dual-method approach aims to provide a more comprehensive and nuanced understanding of the field, which, in turn, informs the development of an integrated framework to guide future AI-Education research.

1.1. Tracing the Intertwined History of AI and Education

The theoretical foundations of AI were established during the mid-20th century, with contributions from Warren McCulloch, Walter Pitts, Alan Turing, John von Neumann, Claude Shannon, and Norbert Wiener [1]. The term “Artificial Intelligence” was formally coined at the 1956 Dartmouth Summer Research Project led by John McCarthy, Herbert Simon, Allen Newell, and Marvin Minsky [9]. These pioneers envisioned that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it” (p. 12) [9], revealing an intrinsic alignment between AI and education.
Early AI research focused on “general intelligent action” (p. 116) [10] and relied on symbolic, rule-based models of logic and reasoning [11]. These frameworks underpinned early intelligent systems for education, such as ELIZA, a natural language processing system developed by Joseph Weizenbaum [12], and SCHOLAR, the first intelligent tutoring system introduced by Jaime Carbonell [13], widely regarded as a milestone in AI for education [14,15]. Seymour Papert’s work on human–computer interaction also shaped educational computing. The Logo programming environment enabled children to explore mathematical concepts through digital experimentation [16,17]. Logo was built in Lisp, a language rooted in symbolic AI [18,19], which underscores the technical and conceptual links between early AI and educational innovation.
Despite these advances, the field entered a prolonged “second AI winter” from the late 1980s to the early 2000s [20], largely due to computational constraints, though research at the AI-Education interface continued within select academic circles [1].

1.2. The Resurgence and Acceleration of AI Innovation Since 2014

The late 2000s marked a formal shift from symbolic reasoning to pragmatic, data-driven approaches enabled by advances in computational hardware and large-scale datasets [21,22]. Benchmark datasets, including ImageNet [23] and the Stanford Sentiment Treebank [24], supported the development of pre-trained models that approximate selected human cognitive functions [25].
A notable inflection point occurred in 2014, when NVIDIA publicly committed to high-performance computing architectures tailored for AI [2,3,4]. As a leading developer of graphics processing units (GPUs) and parallel computing technologies, NVIDIA has been instrumental in scaling AI models by providing the necessary infrastructure. The parallel processing capabilities of GPUs, in contrast to traditional central processing units (CPUs), enable the efficient handling of the large-scale matrix operations and high-dimensional data characteristic of deep learning workloads. NVIDIA’s GPU architectures, including Volta, Turing, Ampere, and Ada Lovelace, have set industry standards for deep learning performance, supporting both training and inference at unprecedented scales [26]. In addition to GPUs, NVIDIA has developed specialized AI accelerators, such as the Tensor Core units integrated within its newer architectures, which are specifically optimized for deep learning operations and mixed-precision computing.
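To make the workload characteristics described above concrete, the following minimal sketch (assuming PyTorch is available; it is illustrative only and not drawn from any of the reviewed studies) runs a large matrix multiplication in reduced precision, the kind of operation that GPUs parallelize and that Tensor Cores accelerate through mixed-precision kernels.

```python
import torch

# Select a GPU if one is available; otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.bfloat16

# Two large dense matrices, representative of the matrix operations that
# dominate deep learning training and inference workloads.
a = torch.randn(2048, 2048, device=device)
b = torch.randn(2048, 2048, device=device)

# autocast dispatches eligible operations (such as matmul) to lower-precision
# kernels; on supporting NVIDIA GPUs these map onto Tensor Cores.
with torch.autocast(device_type=device, dtype=dtype):
    c = a @ b

print(c.dtype, c.shape)
```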
As a result, these hardware innovations continue to drive high-profile AI initiatives and startups, including Google DeepMind, Tesla Autopilot, and OpenAI, all of which have been supported by substantial corporate investment. The centrality of NVIDIA’s technological advancements to recent AI breakthroughs has also contributed to the company’s attainment of a trillion-dollar market valuation in 2023. Collectively, these groundbreaking developments are increasingly described as constituting the “Fourth Industrial Revolution,” characterized by the convergence of digital, biological, and physical systems with AI [27,28].

1.3. Conceptualizing AI: Philosophical, Technical, and Scholarly Definitions

The rapid evolution and expanding scope of AI complicate efforts to establish a singular definition [25,29]. Definitions vary across philosophical, technical, and applied domains. The Stanford Encyclopedia of Philosophy emphasizes artificial agents that, in suitable contexts, can be regarded as persons, foregrounding questions of agency and personhood [30]. The European Commission’s High-Level Expert Group on AI defines AI in more technical terms as systems that analyze their environment and act with some degree of autonomy to achieve specific goals [31].
Drawing on these perspectives and prior scholarly contributions (e.g., [5,29,32]), this study adopts two nuanced and closely intertwined interpretations of AI: one as a field and the other as an entity. As an interdisciplinary field, AI encompasses both theoretical and applied developments that seek to enable machines to perform tasks with outcomes closely mimicking or resembling those of human cognitive processes. Building upon this first interpretation, the second construes AI as an entity—that is, a system, machine, or agent capable of performing tasks in a manner that gives the impression of human-like execution or intelligence. Together, these interpretations capture the breadth of AI as both a discipline and as entities, spanning from narrow, task-specific systems to the broader ambitions of human-like adaptability.

1.4. AI and Education Since 2014: Developments, Gaps, and Rationale for Inquiry

Since 2014, AI integration in education has accelerated through data-intensive approaches such as deep learning, enabling applications including student modeling, adaptive instruction, and performance prediction, especially in formal education [33]. Many of these applications are grounded in pedagogical, knowledge, and learner aspects that underpin the design of AI-driven educational technologies [32]. At the policy level, more than 30 countries have introduced national AI strategies aimed at equipping citizens, especially youth, with essential AI competencies [34,35,36]. In the United States, Touretzky et al. [8] advanced the “5 Big AI Ideas” framework for K–12, building upon complementary efforts, particularly the Computer Science Teachers Association (CSTA) K–12 Computer Science Standards [28], which delineate core learning objectives to support comprehensive computer science curricula, including AI-related competencies. Collectively, these initiatives have increasingly informed level-specific adaptations [37,38,39]. Likewise, China’s 2022 National Information Science and Technology Curriculum Standards incorporated AI content at the compulsory education level [40]. Generative AI systems, such as ChatGPT, have further intensified debates about AI’s transformative potential in everyday life and in educational practice [41,42,43,44].
A growing body of reviews maps parts of this landscape, yet coverage remains fragmented by level, technique, or focal issue. Crompton et al. [45] focus on K–12 contexts, which limits transfer to higher education and system-level perspectives. Ordoñez-Avila et al. [46] emphasize data mining for teacher evaluation in higher education, with limited engagement with pedagogy or ethics. Sanusi et al. [47] concentrate on machine learning in K–12, excluding other AI paradigms, including generative AI and interdisciplinary perspectives. Scoping reviews offer useful overviews but often lack integrative depth across conceptual, methodological, and ethical strands. Yim and Su [48] examine AI literacy in K–12 and identify tools and strategies, yet do not connect literacy with broader theoretical and system-level dynamics. Yan et al. [44] discuss ethical and practical issues of large language models in education, but do not synthesize cross-cutting implications for curriculum, pedagogy, and professional development. In short, the issue is not the absence of reviews, but the persistence of silos and the limited translation of mapping results into an integrative, pedagogy-centric, and ethically grounded synthesis.
Accordingly, the present study advances three contributions that address these gaps. First (1), it consolidates topical, methodological, and contextual strands across K–12, higher education, teacher education, and policy, including recent developments in generative AI [44,45,46,47,48]. Second (2), it combines keyword co-occurrence and co-citation mapping with inductive content analysis to move beyond network description toward interpretive synthesis of cross-cutting research issues and actionable recommendations [44,45,46,47,48,49]. Third (3), it translates these insights into an integrated framework that aligns technological affordances with pedagogy, ethics, and context to guide future research and practice. The study’s scope spans 2014 to October 2024 and uses PRISMA-based retrieval from the Web of Science Core Collection (WOSCC) and Scopus databases.

1.5. Research Questions

Guided by the rationale outlined above, this study is structured around the following research questions (RQs):
  • RQ1: What are the prevailing research trends in the academic discourse on AI and education from 2014 to October 2024?
  • RQ2: Which publications have been most influential in shaping research on AI and education?
  • RQ3: What are the cross-cutting research issues and gaps at the contemporary intersections between AI and education?
  • RQ4: What issue-specific recommendations can be proposed to address identified research issues, and how can they be consolidated into an integrated framework to guide future research?
Each RQ addresses a specific gap identified in Section 1.4. RQ1 consolidates topical themes and methodological patterns using keyword co-occurrence mapping and inductive content analysis. RQ2 benchmarks field influence by analyzing the impact and network centrality of highly co-cited publications. RQ3 moves beyond network description by extending inductive content analysis with interpretive synthesis to surface cross-cutting research issues. Lastly, RQ4 translates these insights into an integrated framework that aligns technological, pedagogical, ethical, and contextual considerations.
To avoid ambiguity, the following key terms are further clarified: in RQ1, prevailing research trends refers to dominant topical emphases, theoretical orientations, and methodological patterns; in RQ3, research issues and gaps refers to overlapping challenges at AI-Education intersections, including misalignments between technological affordances, pedagogy, ethics, professional development, and policy.
The remainder of this paper is structured as follows. Section 2 outlines the research scope, data collection procedures, and analytical protocols. Section 3 presents results from the keyword co-occurrence and co-citation analyses, as well as the inductive content analysis. Section 4 synthesizes these results to identify cross-cutting research issues and gaps, proposes issue-specific recommendations, and introduces an integrated convergence framework to inform future research at the intersections between AI and education.

2. Methodology

2.1. Conceptual Scope and Critical Considerations

Bibliometric analysis provides a systematic and replicable approach to examining large volumes of scholarly literature, enabling researchers to identify dominant research trends, intellectual structures, and citation patterns across disciplines [50,51,52]. As the OECD Glossary of Statistical Terms [53] notes, bibliometrics serves as a valuable tool to “map the development of new (multi-disciplinary) fields of science and technology” (p. 49). Given the interdisciplinary and evolving nature of AI-Education research, bibliometric techniques were deemed appropriate for addressing the RQs guiding this study.
In defining the conceptual scope for keyword selection, this study adopts a comprehensive view of both AI and education. AI is predominantly conceptualized as an interdisciplinary field that encompasses theoretical, technical, and applied dimensions. The selected keywords reflect major developments since 2014, including machine learning, deep learning, neural networks, natural language processing, computer vision, generative AI, large language models, and related subfields. Additionally, the scope incorporates governance and design considerations, such as explainable AI, responsible AI, algorithmic fairness, and AI literacy (see Table 1). The educational dimension extends beyond general references to “school” or “education” to include specific contexts such as K–12, tertiary and higher education, STEM education, teacher education and training, professional development, and educational technology (see Table 1). This deliberate specificity ensures that the dataset captures studies situated at the intersection of AI and substantive educational practice, including curriculum, pedagogy, assessment, teacher learning, and system-level initiatives. This conceptual scope directly informed both the construction of search strings in Table 1 and the inclusion-exclusion criteria detailed in Section 2.2.
While conventional bibliometric studies often rely on a single database, typically WOSCC or Scopus, this study adopted a dual-database approach to achieve broader coverage. Although integrating bibliometric data from multiple sources introduces challenges related to data wrangling and harmonization [54,55,56], this approach was necessary due to the breadth and disciplinary diversity of the AI-Education nexus. The interdisciplinary scope encompasses both computer science (CS) and educational research, which differ in their bibliometric reporting conventions. For instance, discrepancies in metadata completeness were observed between CS and education publications, likely reflecting disciplinary norms. These inconsistencies were addressed through rigorous manual verification and completion of bibliometric records.
In light of these considerations, WOSCC and Scopus were purposefully selected for their comprehensive indexing of scholarly output in CS, AI, and educational technology [57,58]. This dual-database approach was intended to maximize the robustness, inclusivity, and reliability of the dataset underpinning all subsequent analyses.

2.2. Data Collection

The data collection process adhered to a modified version of the PRISMA protocol [59], with initial keyword identification guided by the PRISMA Checklist-S [60]. A preliminary scoping exercise was conducted by two domain experts to identify key terms relevant to AI and education, as delineated in Section 2.1. These terms formed the basis for constructing advanced search strings used to query the WOSCC and Scopus databases (see Table 1). Additional search parameters included: (i) literature type (journal articles or review articles), (ii) language (English), (iii) publication period (1 January 2014 to 30 October 2024), and (iv) accessibility (full-text availability). The initial search identified 2337 articles. After removing 986 duplicates, 1351 articles remained for the screening stage (see Figure 1). Data collection was completed in early November 2024.
Screening was conducted in two phases. In the first phase, titles and abstracts were assessed for relevance. In the second phase, full-text screening was performed. The inclusion-exclusion criteria were designed to ensure both technical and contextual relevance: (i) the study must discuss, design, employ, or involve AI or its applications; (ii) it must be situated within an educational context; (iii) it must be written in English; and (iv) it must explicitly address AI, particularly its contemporary sub-domains, and educational content. Educational context is defined as formal, non-formal, or informal learning settings, teacher education and professional development, or curriculum and policy environments with direct implications for teaching and learning. Two independent reviewers conducted the screening process, achieving an inter-rater reliability of over 93% [61]. Articles without accessible full texts were excluded. Following both screening phases, 317 articles met all inclusion-exclusion criteria and were retained for analysis. The bibliometric records of these articles were retrieved from WOSCC and Scopus. Prior to analysis, the dataset was carefully reviewed to ensure completeness and accuracy. A detailed schematic of the review process is provided in Appendix A (see Figure A1).

2.3. Data Analysis

Bibliometric techniques are typically classified into two broad categories: evaluative and relational [50]. Evaluative techniques assess research performance using indicators such as publication counts and citation frequencies. In contrast, relational techniques examine structural relationships among research elements, including keyword co-occurrence, co-citations, co-authorships, and bibliographic coupling [50,62]. Given the objective of mapping the intellectual and thematic contours of AI-Education research, this study prioritized relational techniques and paired them with qualitative analysis to enhance interpretive depth. The alignment of analytic methods with RQs is summarized as follows:
  • RQ1: Keyword co-occurrence analysis was employed to detect topical clusters and methodological patterns, complemented by inductive content analysis to interpret these clusters and derive prevailing trends.
  • RQ2: Co-citation analysis, combined with evaluative indicators such as co-citation counts and link strengths, was used to identify influential publications and benchmark their field-shaping roles.
  • RQ3: Inductive content analysis, supplemented by a review of highly co-cited publications, was conducted to surface cross-cutting research issues and gaps across different contexts and approaches.
  • RQ4: Interpretive synthesis, as an extension of the inductive content analysis, was undertaken to translate identified issues into issue-specific recommendations and to consolidate them into an integrated convergence framework.
For procedural transparency, the definitions and technical scope of these methods are as follows. Keyword co-occurrence analysis is a relational technique that identifies patterns in the frequency with which keywords appear together, thereby revealing clusters that may indicate emerging or established “research hotspots” (p. 700) [63]. Author keywords and indexed keywords from the retained records were processed to construct a co-occurrence network in VOSviewer, which generated cluster visualizations. These cluster memberships and proximities were then interpreted qualitatively to infer topical emphases, theoretical orientations, and methodological patterns. This procedure addressed RQ1.
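VOSviewer performs the underlying pair counting internally; purely for illustration, the minimal sketch below uses hypothetical keyword lists and a lowered threshold so the toy data yields output (the study itself used f = 4, as reported in Section 3.2) to show how co-occurrence edge weights of this kind can be derived from keyword records.

```python
from collections import Counter
from itertools import combinations

# Hypothetical author/indexed keyword lists for a few records; the study's
# actual input was the 317 retained WOSCC/Scopus records.
records = [
    ["artificial intelligence", "machine learning", "higher education"],
    ["artificial intelligence", "k-12 education", "computational thinking"],
    ["machine learning", "learning analytics", "higher education"],
    ["artificial intelligence", "machine learning", "learning analytics"],
]

MIN_OCCURRENCES = 2  # lowered for this toy example; the study applied f = 4

# Keep keywords that appear in at least MIN_OCCURRENCES distinct records.
keyword_freq = Counter(kw for record in records for kw in set(record))
kept = {kw for kw, n in keyword_freq.items() if n >= MIN_OCCURRENCES}

# Count co-occurring pairs among retained keywords; each undirected pair
# becomes a weighted edge in the co-occurrence network.
edges = Counter()
for record in records:
    kws = sorted(set(record) & kept)
    for pair in combinations(kws, 2):
        edges[pair] += 1

for (a, b), weight in edges.most_common():
    print(f"{a} -- {b}: {weight}")
```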
Co-citation analysis is another relational technique that examines how often two documents are cited together, thereby revealing intellectual linkages and thematic clusters at the document, author, or source level [62]. A co-citation network was constructed using VOSviewer to identify influential publications. To benchmark influence for RQ2, network positions and total link strengths were combined with evaluative indicators such as co-citation counts. Unlike keyword co-occurrence analysis, co-citation analysis can transcend the initial search parameters and inclusion criteria, providing a broader understanding of the field’s evolution [63]. This procedure addressed RQ2.
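Analogously, the sketch below illustrates how co-citation weights and total link strengths of the kind reported by VOSviewer can be tallied; the document labels are placeholders rather than the study’s actual co-cited works.

```python
from collections import Counter
from itertools import combinations

# Placeholder reference lists of citing articles; two documents are co-cited
# each time they appear together in the same reference list.
reference_lists = [
    ["Doc A", "Doc B", "Doc C"],
    ["Doc B", "Doc C"],
    ["Doc A", "Doc C", "Doc D"],
]

cocitation = Counter()
for refs in reference_lists:
    for pair in combinations(sorted(set(refs)), 2):
        cocitation[pair] += 1

# A document's total link strength is the sum of its co-citation weights
# with all other documents in the network.
link_strength = Counter()
for (a, b), w in cocitation.items():
    link_strength[a] += w
    link_strength[b] += w

print(cocitation.most_common())    # strongest co-citation links
print(link_strength.most_common()) # "Doc C" has the highest total link strength here
```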
Inductive content analysis is a qualitative technique applied to overcome the limitations of purely quantitative mapping. Two independent coders systematically analyzed the articles within each keyword cluster to identify recurring themes and research foci [64,65]. Codes were iteratively grouped into categories and abstracted into higher-order themes, with intercoder agreement exceeding 89%. This qualitative layer enabled nuanced, context-sensitive interpretation of bibliometric findings for RQ1 and supported the identification of cross-cutting research issues and gaps for RQ3.
Interpretive synthesis extends inductive content analysis through a multi-step process. Building on the coded themes, this approach identifies key research issues and gaps, formulates issue-specific recommendations, and consolidates them into an integrated convergence framework [63,66,67]. The synthesis triangulates qualitative insights with bibliometric patterns and is further elaborated in Section 4, where the framework is presented to guide future research at the intersections between AI and education. This procedure addressed RQ4.
All bibliometric visualizations and network analyses were conducted using VOSviewer (version 1.6.20), a widely recognized software package for bibliometric mapping [68,69]. VOSviewer has been employed in numerous high-quality studies across disciplines (e.g., [52,63,70,71]), and was selected for its capacity to generate interpretable visual representations of bibliometric networks.

3. Findings

3.1. Publication Dynamics and Regional Patterns

As outlined in the preceding section, a total of 317 articles met the inclusion-exclusion criteria and were subjected to bibliometric and inductive content analysis. The temporal distribution of these publications, as illustrated in Figure 2, reveals several noteworthy trends. A modest increase in research activity was observed at the end of 2014 (marked by the first red dot in Figure 2), followed by a pronounced surge beginning in 2018 (indicated by the second red dot). The number of publications peaked in 2023, with 83 articles published in that year alone. It is important to clarify that data collection for this study concluded in November 2024, and, thus, only articles published up to October 2024 were included in the dataset. Remarkably, 77 articles (nearly 24% of the total corpus) were published between January and October 2024. This suggests that the total research output for the full 2024 calendar year would likely have exceeded the 2023 peak, particularly given the common indexing lag for articles published in the final quarter of the year.
These temporal patterns provide compelling evidence of the accelerating scholarly interest in AI-Education intersections since 2014. A plausible catalyst for this growth can be traced to NVIDIA’s strategic commitment in 2014 to developing high-performance computing architectures tailored for AI applications [2,3,4]. The company’s successive architectures (e.g., Volta, Turing, Ampere, Ada Lovelace) have become foundational to contemporary AI research [26]. Enhanced hardware capabilities are widely credited with enabling more complex and scalable AI models, thereby stimulating technical research across domains, including education.
This influence is particularly evident in the early phase of the dataset. Of the 22 articles published between 2014 and 2018, 16 can be broadly categorized as ‘technical papers’ that focused on developing, evaluating, or simulating AI technologies within educational contexts. For instance, D’Mello [72] explored the integration of attention-aware mechanisms into intelligent tutoring systems, while Sosnovsky and Brusilovsky [73] evaluated topic-based adaptation and student modelling in quiz-based learning environments. Tosi and Yoshimi [74] introduced a visually oriented neural network simulator designed to support flexible experimentation in educational settings. These studies exemplify the early emphasis on system design, simulation, and adaptive learning technologies.
Among these early contributions, the systematic review by Nye et al. [75] stands out for its comprehensive synthesis of 17 years of research on the AutoTutor family of systems. This review not only traced the evolution of natural language tutoring technologies but also evaluated their pedagogical effectiveness and theoretical grounding. By consolidating findings across multiple iterations of AutoTutor, the study offered critical insights into the affordances and limitations of conversational agents in education. Collectively, these technical papers laid the groundwork for subsequent innovations, which are discussed in greater detail in Section 3.2.
It is also worth noting that NVIDIA’s 2014 commitment materialized concretely in December 2017 with the release of the Volta architecture [76]. Coinciding with this hardware breakthrough was the seminal paper of Vaswani et al. [77], Attention Is All You Need, which introduced the Transformer architecture. This paper proposed several innovations, most notably the self-attention mechanism, parallel processing capabilities, and scalability, that have since become foundational to pre-trained and large language models. The convergence of these hardware and software advancements arguably served as a dual catalyst for the sharp increase in AI-Education research output observed after 2018. Improved computational infrastructure enabled the training of more sophisticated models, while the Transformer architecture unlocked new possibilities for natural language processing, adaptive learning, and multimodal educational applications.
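For readers unfamiliar with the Transformer, the following minimal NumPy sketch (an illustrative simplification, not an implementation used in any of the reviewed studies) shows the scaled dot-product self-attention at the core of Vaswani et al. [77]: every token attends to every other token in parallel, which is precisely the workload that benefits from GPU acceleration.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(QK^T / sqrt(d_k)) V, computed for all tokens at once."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                        # token-to-token relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)         # row-wise softmax
    return weights @ V                                     # weighted mix of value vectors

# Toy example: 4 tokens, each represented by an 8-dimensional vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))

# In a real Transformer, Q, K, V come from learned linear projections of X;
# random matrices stand in for those projections here.
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = scaled_dot_product_attention(X @ Wq, X @ Wk, X @ Wv)
print(out.shape)  # (4, 8): one contextualized vector per token
```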
Figure 3 presents the geographical distribution of research output, highlighting the top twenty countries or regions represented in the dataset. The United States leads with 72 articles, accounting for approximately 23% of the total. Mainland China follows closely with 66 articles, or roughly 21%. Spain and Germany occupy the third and fourth positions with 16 and 15 articles, respectively, while Hong Kong ranks fifth with 12 contributions. Notably, when Hong Kong’s output is combined with that of Mainland China, the total rises to 78 articles (approximately 25% of the dataset), thereby surpassing the United States.
Other regions within the ‘top ten’ include Finland, the United Kingdom, and Türkiye, each contributing eight or more articles. The dataset also includes contributions from developing countries such as India, Pakistan, the Philippines, and Tanzania. Although these countries represent a smaller share of the total output, their inclusion reflects a growing global interest in AI-Education research. Nevertheless, the overall landscape remains dominated by technologically advanced nations, with the United States and China (PRC) collectively accounting for over 40% of all publications.
It is important to interpret these regional publication outputs with consideration of language-related limitations. The present study considered only articles written in English, as indexed in the selected databases. Consequently, the number of articles attributed to each country or region may differ if publications in other regional languages were also included. This language criterion may particularly affect the representation of research output from countries where English is not the primary medium of academic publication.

3.2. Keyword Co-Occurrence Analysis: Topical Clusters

To map the research landscape at the contemporary intersections between AI and education, the keyword co-occurrence analysis was conducted using a frequency threshold of four (f = 4). This threshold required each keyword to appear in at least four distinct articles to be included in the analysis, thereby ensuring both relevance and analytical clarity [63]. The Lin-log modularity method was applied with default clustering parameters, a technique particularly suited to scale-free datasets due to its ability to minimize edge crossings and enhance the interpretability of network visualizations [69].
The analysis was performed using VOSviewer (version 1.6.20), which generated a network map comprising 72 co-occurring keywords, distributed across four topical clusters (see Figure 4). This number falls “within the optimal data range (e.g., 40–120 keywords)” (p. 703) for keyword co-occurrence analysis, as noted by Ali and Tse [63]. The resulting clusters imply emergent thematic concentrations within the literature and serve as a foundation for the subsequent qualitative analysis. A full list of the 72 keywords, along with their corresponding cluster affiliations, is presented in Table 2.
This co-occurrence mapping not only reveals the density and distribution of key terms but also offers a preliminary indication of the field’s thematic contours, as represented by the distinct colors in Figure 4. However, to move beyond surface-level associations and uncover deeper conceptual patterns, the inductive content analysis was conducted, as detailed in the next section.

3.3. Inductive Content Analysis: Prevailing Research Trends (RQ1)

To address RQ1, the inductive content analysis [64] was conducted on the articles associated with the keyword clusters identified in the co-occurrence analysis. This process unfolded in three iterative phases: (i) identifying the co-occurring keywords and their corresponding articles within each cluster based on their research foci and thematic orientations; (ii) categorizing the highly co-occurring keywords and their associated articles into distinct semantic groupings; and (iii) reviewing these groupings to explicate the relationships among them and derive broader thematic insights.
This analytical process was carried out independently by two coders, who achieved an intercoder agreement exceeding 89%, thereby indicating a high level of reliability. Through this iterative coding and review procedure, eleven prevailing research trends were identified across the four clusters. Each cluster was subsequently assigned a central thematic designation, reflecting the overarching semantic interpretation of its constituent research trends. The clusters and their corresponding thematic foci are as follows:
  • Cluster 1: Applying AI Techniques to Address Educational Challenges
  • Cluster 2: Expanding the Role of AI in K–12 Educational Contexts
  • Cluster 3: Enhancing STEM Education through AI Technologies
  • Cluster 4: Preparing Teachers to Teach or Integrate AI in the Classroom
These thematic designations serve as analytical anchors for the subsequent sections, where each cluster is examined in detail. The findings include representative studies, key research trends, and the conceptual implications of each cluster’s thematic focus.

3.3.1. Cluster 1: Applying AI Techniques to Address Educational Challenges

Cluster 1 explores the diverse applications of AI techniques in education, revealing three interrelated research trends. Collectively, these trends illustrate how AI technologies have evolved to address specific educational challenges while also being adapted across varied learning contexts. The inductive content analysis of these trends not only highlights their individual contributions but also uncovers deeper interconnections that may shape future trajectories in AI-Education research.
Trend 1: Prominent AI Techniques in the Development of Educational Applications
The first trend centers on the foundational role of prominent AI techniques, such as machine learning (ML), deep learning (DL), neural networks, natural language processing (NLP), AI models, and data mining, in the development of educational applications. ML, as a core subset of AI, encompasses a wide array of algorithms, including decision trees (DT), k-nearest neighbors (KNN), artificial neural networks (ANN), and support vector machines (SVM). These algorithms enable systems to autonomously learn from data, identify patterns, and support decision-making processes—a capability often referred to as “data mining” in statistical contexts [46,78,79,80,81].
ANNs, inspired by the biological structure of the human nervous system, are composed of interconnected nodes, commonly referred to as perceptrons or neurons, arranged in input and hidden layers and linked by synaptic weights, with each node combining its weighted inputs at a summing junction before applying an activation function [82,83,84]. DL, a subfield of ML, builds upon ANNs by incorporating multiple hidden layers, thereby enabling the processing of more complex and abstract data patterns [82,85,86,87]. The term ‘deep’ refers to the layered architecture of these networks.
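As a purely illustrative aid (using NumPy with random placeholder weights rather than trained parameters), the sketch below traces one forward pass through the components just named: inputs, synaptic weights, summing junctions, activation functions, and stacked hidden layers.

```python
import numpy as np

def relu(x):
    # Activation function applied after each summing junction.
    return np.maximum(0, x)

rng = np.random.default_rng(42)
x = rng.normal(size=4)                           # input layer: 4 features for one learner

W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)    # synaptic weights into the first hidden layer
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)    # second hidden layer ("deep" = several such layers)
W3, b3 = rng.normal(size=(1, 8)), np.zeros(1)    # output layer, e.g., a predicted score

h1 = relu(W1 @ x + b1)   # summing junction followed by activation
h2 = relu(W2 @ h1 + b2)
y = W3 @ h2 + b3
print(y)
```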
NLP, another pivotal AI subdomain, has gained increasing prominence, particularly with the advancement of ANNs and large language models such as ChatGPT [78,88,89,90,91]. NLP is concerned with the representation, understanding, and generation of human language. Its applications in education span three broad areas:
1. Text mining for analyzing written materials, including teachers’ reasoning and reflective practices [88,92,93,94].
2. Automated feedback generation, exemplified by tools such as M-powering, which provide adaptive feedback in large-scale online courses [44,95,96].
3. Speech recognition for multimodal data analysis, such as the use of virtual microphones in collaborative learning environments to study group interactions [97,98,99,100].
These techniques reflect a growing trend toward integrating AI into educational environments, with the potential to enhance instructional efficiency, personalize learning, and address the diverse needs of learners and educators.
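As a minimal illustration of the text-mining strand listed above (assuming scikit-learn is available and using invented teacher reflections rather than data from the reviewed studies), the sketch below converts free-text reflections into TF-IDF term weights, a common first step before further analysis.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented teacher reflections used only to demonstrate the technique.
reflections = [
    "Students struggled with fractions, so I added worked examples.",
    "Group work improved engagement but some students stayed passive.",
    "I will revisit fractions with visual models next lesson.",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(reflections)     # document-term matrix of TF-IDF weights

# The highest-weighted term in each reflection hints at its focus.
terms = vectorizer.get_feature_names_out()
for i, row in enumerate(X.toarray()):
    print(f"Reflection {i + 1}: most distinctive term = '{terms[row.argmax()]}'")
```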
Trend 2: Exploring the Functionalities of AI Techniques in Addressing Educational Tasks
Building upon the technical foundations outlined above, the second trend shifts focus toward the practical functionalities of AI techniques in performing a range of educational tasks. Keywords such as “prediction,” “classification,” “assessment,” “quality,” “knowledge,” and “reflection” reflect the breadth of AI’s utility in educational contexts.
For instance, ML algorithms have been widely used to predict students’ learning performance, thereby enabling timely assessment and intervention (e.g., [101,102,103,104]). Knowledge-tracking systems, often based on representation graphs or learner-resource response channels, have also been developed to monitor and model students’ learning trajectories [105,106,107]. Automated assessment tools powered by AI have been applied to evaluate learners’ depth of reflection (e.g., [108,109,110]), while classification techniques have been used to predict academic achievement and dropout risk, offering valuable insights for targeted support strategies (e.g., [111,112,113,114]).
In addition, these functionalities have informed the design of AI-based systems for evaluating educational quality more broadly (e.g., [110,115,116,117]). Despite their promise, such systems must be critically examined for accuracy, reliability, and fairness. Ensuring that these tools are equitable and contextually appropriate remains a pressing concern.
Comparative studies have also examined the predictive accuracy of various AI techniques. Findings suggest that convolutional neural networks (CNN) generally outperform other methods (e.g., KNN, DT, logistic regression (LR), Naïve Bayes (NB), and SVM), particularly in tasks involving high-dimensional data [86,106,118,119]. However, the computational intensity of CNNs may limit their feasibility in resource-constrained educational settings. Thus, aligning the choice of AI technique with the specific needs and constraints of stakeholders is essential. While CNNs may offer superior performance in complex analyses, simpler algorithms like DT or KNN may be more suitable for smaller datasets or low-resource environments.
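The comparative logic described above can be sketched as follows (a hedged illustration using scikit-learn on synthetic tabular data; the cited studies use their own datasets and also benchmark CNNs, which are omitted here for brevity).

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a tabular learner dataset (e.g., activity logs -> pass/fail).
X, y = make_classification(n_samples=500, n_features=20, n_informative=8, random_state=0)

models = {
    "Decision tree (DT)": DecisionTreeClassifier(random_state=0),
    "k-nearest neighbors (KNN)": KNeighborsClassifier(),
    "Logistic regression (LR)": LogisticRegression(max_iter=1000),
    "Naive Bayes (NB)": GaussianNB(),
    "Support vector machine (SVM)": SVC(),
}

# 5-fold cross-validated accuracy as a simple basis for comparison.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```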
Trend 3: Research Trajectories in AIED Across Diverse Educational Contexts
The third trend highlights the interdisciplinary and context-sensitive nature of AI applications in education. This is reflected in keywords such as “AI in education,” “learning analytics,” “higher education,” “primary education,” and “systematic review.” The term AI in Education (AIED) broadly refers to the integration of AI technologies into various educational settings [78,120], encompassing applications such as personalized learning, automated assessment, and adaptive feedback systems.
Recent developments in generative AI have further expanded the scope of AIED, introducing new possibilities for content generation, interactive learning, and pedagogical innovation [121,122,123]. Learning analytics has emerged as a particularly influential area, involving the use of AIED tools to model and measure learners’ implicit, complex learning processes [80,111,124,125]. This field seeks to uncover patterns in educational data to support learning optimization and evidence-based decision-making.
Some studies have explored the combination of AIED with multimodal technologies, such as wearable sensors, eye-tracking systems, and facial recognition tools, to enrich data sources and capture nuanced insights into learners’ cognitive, emotional, and behavioral states [100,126,127,128,129]. These approaches may offer a more holistic understanding of student engagement and learning dynamics.
Higher education has become a focal point for AIED applications, with many studies emphasizing technological affordances over pedagogical considerations (e.g., [6,80,130,131,132,133,134,135,136]). However, this emphasis has led to a relative lack of inquiry into how AIED impacts learners and instructors within specific educational contexts [133,136,137,138,139,140]. Without a stronger integration of pedagogical frameworks, there is a risk that AIED may be driven more by technological innovation than by educational relevance.
In K–12 settings, systematic reviews have documented a wide range of AIED applications across subjects and grade levels, spanning teaching, learning, assessment, and administration (e.g., [5,44,45,123,141]). Researchers have advocated for the use of learning analytics to inform teachers’ instructional design (e.g., [80,125,142,143,144]). However, further research appears necessary to support teachers in effectively implementing these systems, particularly in light of challenges related to training, resources, and ethical concerns around student data use.
Holistic Connections Across Trends in Cluster 1
Taken together, the three trends identified in Cluster 1 reveal a synergistic relationship between AI technologies, their functional applications, and their deployment across educational contexts. For example, the development of NLP tools for automated feedback (Trend 1) directly supports the need for scalable and adaptive assessment systems (Trend 2). Similarly, the interdisciplinary applications of AIED in higher education and K–12 settings (Trend 3) are underpinned by technical advancements in ML and DL (Trend 1).
Moreover, the growing emphasis on learning analytics (Trend 3) complements the predictive and classificatory functions of AI systems (Trend 2), suggesting the potential for integrating these tools into comprehensive educational frameworks. These interconnections underscore the importance of adopting holistic research approaches that bridge technological innovation with pedagogical theory and practice. They also highlight the need to remain attentive to the contextual realities and ethical considerations faced by diverse educational stakeholders.

3.3.2. Cluster 2: Expanding the Role of AI in K–12 Educational Contexts

Cluster 2 explores the growing integration of AI within K–12 educational settings, identifying three interconnected research trends. These trends collectively underscore efforts to design AI-focused curricula, employ interactive technologies to demystify AI concepts, and apply AI tools in online learning environments. The inductive content analysis reveals not only the individual contributions of each trend but also their broader implications for cultivating computational thinking (CT), ethical awareness, and inclusive learning opportunities in K–12 education.
Trend 4: Broadening the Integration of AI into K–12 Educational Settings
The fourth trend highlights the increasing emphasis on embedding AI into K–12 education through curriculum design and the development of targeted learning activities. Keywords such as “K–12 education,” “education,” “artificial intelligence,” “school,” “computational thinking,” “generative AI,” and “ethics” reflect the wide-ranging scope of this endeavor. Scholars have proposed various approaches to introducing AI and ML content to younger learners, stressing the importance of early exposure to prepare students for a future shaped by AI technologies (e.g., [47,48,145,146,147]).
A central area of focus within this trend is the design and implementation of AI curricula. Researchers argue for the urgent inclusion of AI education in schools, contending that foundational knowledge and skills are vital for navigating an AI-driven world (e.g., [36,148,149,150]). Some curriculum models incorporate CT activities that simulate real-world problem-solving, thereby fostering both technical proficiency and problem-solving capabilities [138,145,151]. These initiatives build upon existing CT education frameworks and aim to integrate programming skills with broader cognitive strategies.
An emerging direction within this trend involves the role of CT when learners engage with generative AI and large language models (LLMs). For example, Hijón-Neira et al. [152] demonstrated that LLMs can provide personalized feedback and unsolicited hints, thereby helping secondary students grasp abstract programming concepts and enhance their CT development. Similarly, integrating generative AI into block-based coding environments has become a novel approach to advancing CT education [153,154,155,156]. Platforms such as Snap! now enable students to embed AI models like ChatGPT as functional coding blocks, allowing them to create AI-powered applications such as chatbots. These approaches align with constructionist learning philosophies, which emphasize active engagement with computational artefacts as a means of constructing knowledge [156,157].
Equally significant within this trend is the integration of AI ethics into K–12 education. Policymakers and educators increasingly advocate for equipping students with the knowledge to use AI responsibly and ethically (e.g., [150,158,159,160,161]). For instance, Williams et al. [158] implemented a project-based AI ethics curriculum for middle school students in the United States, while Finland’s AuroraAI national program aims to foster ethical AI literacy among its citizens [159]. In China, Du et al. [161] explored how ethical awareness influences teachers’ perceptions of AI’s societal role and its responsible classroom integration. These initiatives suggest that AI education must extend beyond technical content to include critical discussions on societal impact, bias, accountability, and ethical implications.
Trend 5: Interactive Educational Technologies for Demystifying AI for K–12 Students
The fifth trend focuses on the use of interactive educational technologies to introduce AI concepts to K–12 students. Keywords such as “educational technology,” “educational robotics,” “robotics,” “tools,” “augmented reality,” and “virtual reality” highlight the diversity of tools employed to make AI more accessible and engaging for young learners. Among these, educational robotics has emerged as a particularly effective medium for teaching AI and its subdomains.
Numerous studies have explored the pedagogical value of tangible, programmable robotics in enhancing students’ understanding of AI (e.g., [147,162,163,164,165,166]). For instance, Sophokleous et al. [167] reviewed the integration of computer vision into educational robotics and concluded that such technologies positively influence the K–12 learning process. Henze et al. [168] examined four types of educational robotics (Makeblock mTiny, Makeblock Cody Rocky, Makeblock Neuron, and DJI Tello Edu Drone), finding them to be effective cognitive tools for teaching AI and CT while also increasing student interest. Similarly, Bellas et al. [166] introduced the Robobo Project, a robotics-based educational tool validated over six years, designed to facilitate AI learning across multiple educational levels. Chen et al. [169] further argued that AI-powered robots and chatbots can support collaborative, game-based, and virtual learning experiences.
Beyond robotics, augmented reality (AR) and virtual reality (VR) have been utilized to create immersive learning environments that translate abstract AI concepts into tangible, experiential learning (e.g., [170,171,172,173]). These technologies allow students to engage with AI in contextually rich scenarios, thereby enhancing comprehension and retention. However, Tedre et al. [145] caution that the prevailing emphasis on theoretical and conceptual aspects of AI education should be balanced by a stronger focus on real-world applications of AI and ML. This perspective suggests a potential reorientation of educational priorities, advocating for the inclusion of practical and contemporary AI content in K–12 curricula.
Trend 6: Applying AI in K–12 Online Learning Environments
The sixth trend investigates the application of AI in K–12 online learning environments, a topic that gained particular relevance during the COVID-19 pandemic. Keywords such as “internet,” “online learning,” “predictive models,” “explanations,” and “communication” reflect the growing interest in how AI can support virtual education.
For example, Ong et al. [82] used “deep learning neural networks (DLNN)” techniques to evaluate contextual factors influencing students’ online learning experiences during the pandemic. Similarly, Peng et al. [174] applied AI-based statistical methods to examine the role of ICT in students’ reading performance within blended learning environments. Almohesh [175] explored the impact of AI-powered tools—specifically ChatGPT—on Saudi Arabian primary students’ autonomy in online classes, noting both the benefits of fostering self-directed learning and the risks of over-reliance on AI. Other studies have used AI models to predict students’ online learning styles [144], classify reading competencies based on communication patterns [176], and assess the quality of written explanations following science inquiry activities [177].
While these applications highlight AI’s potential to enhance online learning, they also raise important questions about equity, accessibility, and contextual variability in learning outcomes. Factors such as access to technology, student motivation, parental involvement, and cultural influences may significantly shape the effectiveness of AI in virtual settings. Future research could explore these variables to better understand how AI might be leveraged to support inclusive and equitable online education.
Holistic Connections Across Trends in Cluster 2
The inductive content analysis of Cluster 2 reveals a dynamic interplay between curriculum innovation, interactive technologies, and online learning applications. For instance, the integration of AI ethics and generative AI tools into K–12 curricula (Trend 4) aligns with the use of robotics and AR/VR environments to provide hands-on, experiential learning opportunities (Trend 5). Similarly, the deployment of AI-driven predictive models and assessment tools in online settings (Trend 6) complements efforts to personalize and adapt AI curricula (Trend 4), while also extending the reach of interactive technologies into virtual learning spaces (Trend 5).
Moreover, the emphasis on fostering CT through AI curricula (Trend 4) resonates with the use of data-driven and interactive platforms, such as generative AI-powered coding environments, to enhance student engagement and learning (Trend 5). These tools may also be adapted to address equity and accessibility concerns in online education (Trend 6), ensuring broader participation and inclusion. Taken together, these interconnected trends highlight the need for cohesive educational frameworks that integrate technical, ethical, and practical dimensions of AI education in K–12 contexts.

3.3.3. Cluster 3: Enhancing STEM Education Through AI Technologies

Cluster 3 explores the transformative potential of AI technologies in STEM education, identifying three interrelated research trends. These trends focus on embedding AI learning elements into STEM curricula, designing technology-enhanced environments to foster engagement, and leveraging AIED applications to support STEM teaching and learning. The inductive content analysis of these trends reveals their contributions to enriching STEM education and underscores the importance of examining their pedagogical affordances, long-term impacts, and practical implementation across diverse educational contexts.
Trend 7: Incorporating AI Learning Elements to Enrich STEM Educational Contexts
The seventh trend centers on the integration of AI learning elements into STEM educational contexts, as reflected in keywords such as “STEM education,” “science,” and “mathematics.” Researchers have explored various strategies for embedding AI into STEM curricula and instructional activities, suggesting that such integration may deepen learners’ disciplinary understanding and foster interdisciplinary thinking (e.g., [138,149,178,179]). Xu and Ouyang [180] argue that AI’s inherently interdisciplinary nature makes it particularly well-suited to complement STEM education, as it draws upon core principles from science, technology, engineering, and mathematics.
Several empirical studies exemplify this integration. Bellas et al. [149] developed a short-term, project-based STEM curriculum for European high school students that embedded “AI literacy” through real-world problem-solving using intelligent smartphone applications. Similarly, Meng-Leong and Hung [179] introduced AI-based statistical modelling to help K–12 STEM learners simulate complex experiments in silico, reporting improvements in learners’ “AI thinking.” Yim and Su [48] conducted a scoping review of AI learning tools in K–12 education, identifying project-based and game-based pedagogies, using platforms such as Teachable Machine, Scratch, and LearningML, as effective means of fostering AI literacy. These initiatives extend beyond K–12 contexts; for example, Lin et al. [178] designed hands-on STEM and AI learning activities for non-engineering undergraduates, reporting notable gains in “AI literacy.”
However, the terminologies employed in these studies, such as “AI literacy” and “AI thinking,” remain inconsistently defined. This conceptual ambiguity may hinder the development of standardized frameworks for integrating AI into STEM education. Future research could critically examine these terms, proposing robust, operational definitions that align with interdisciplinary educational goals. Furthermore, exploring how AI can serve as a bridge between STEM disciplines, for example by linking mathematical modelling, computational simulation, and engineering design, may enhance the coherence and relevance of STEM curricula.
Trend 8: Designing Technology-Enhanced STEM Learning Experiences to Engage Students
The eighth trend focuses on the design of technology-enhanced environments aimed at creating engaging and interactive STEM learning experiences. Keywords such as “students,” “teachers,” “engagement,” “design,” and “technology” highlight the emphasis on fostering active participation in STEM education through innovative tools and platforms. Recent studies suggest that such environments can support immersive, collaborative, and experiential learning approaches in STEM contexts (e.g., [170,171,172,173,181,182]).
For instance, Poonja et al. [171] proposed an AR and haptics-driven product for STEM instruction, enabling students to interact with a world map using haptic feedback powered by Vuforia, Unity 3D, and Open-Haptics. Ferro et al. [182] developed Gea2, an interactive game featuring an intelligent pedagogical agent that provides unsolicited natural language hints to support classroom learning. King et al. [170] demonstrated how an AI-powered VR training system could enhance teacher-student interactions during mathematical problem-solving. Lohakan and Seetao [173] introduced an AI kit incorporating computer vision and Python programming for high school STEM education, which was found to improve engagement and learning outcomes through virtual experimentation. Similarly, Saundarajan et al. [172] designed a mobile application with AR and computer vision functionality to help students understand mathematical equations through real-time, camera-based worked solutions.
These examples illustrate the potential of AI and immersive technologies to make STEM learning more interactive and engaging. However, their effectiveness may depend on several contextual factors, including students’ prior knowledge, accessibility, and the alignment of these tools with curriculum objectives. While these technologies appear to enhance engagement, further research is needed to assess their impact on deeper learning outcomes, such as problem-solving, critical thinking, and conceptual understanding. Additionally, exploring the scalability of such environments in resource-constrained settings could yield valuable insights into their broader applicability.
Trend 9: Leveraging AIED Applications to Facilitate STEM Teaching and Learning
The ninth trend highlights the use of AIED applications to support STEM teaching and learning, as evidenced by keywords such as “teaching,” “learning,” “performance,” “intelligent tutoring systems,” and “support.” STEM education often involves complex, collaborative, and context-sensitive practices, which can challenge educators’ ability to monitor and support students effectively. AI applications have increasingly been adopted to address these challenges [48,180].
For example, Lee et al. [183] developed a DL-based “STEM Learning Behavior Analysis System (SLBAS)” to monitor K–12 students’ behaviors during STEM activities, using the ICAP framework (interactive, constructive, active, passive) to guide its design. Bertolini et al. [184] applied a Bayesian framework to classify undergraduate students’ learning behaviors in STEM courses, identifying factors that influence retention and attrition. Bhatt et al. [185] explored the use of DL and implicit feedback in K–12 recommender systems, demonstrating that modelling the sequence of student interactions significantly improves recommendation accuracy. Owens et al. [186] introduced “Decibel Analysis for Research in Teaching (DART),” an ML-based algorithm that analyses classroom audio recordings to evaluate teaching practices.
Intelligent tutoring systems (ITS) have also gained traction in STEM education [75,187]. For instance, Bywater et al. [188] developed an NLP-based ITS to support algebraic problem-solving by categorizing mathematical problems and predicting appropriate equations. Lu et al. [189] introduced a trustworthy knowledge tracing model for ITS, incorporating explainable AI (xAI) to enhance transparency, accountability, and user trust. While ITS are often associated with higher education, Liu et al. [190] demonstrated their potential in K–12 STEM classrooms, showing how multimodal data analysis can support collaborative learning.
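To make the knowledge-tracing component of such systems more concrete, the sketch below implements classic Bayesian Knowledge Tracing, a common baseline for estimating skill mastery from a learner’s response sequence. It is a generic illustration with placeholder parameter values and does not reproduce the specific models reported in the studies cited above.

```python
# Illustrative sketch only: classic Bayesian Knowledge Tracing (BKT), a common
# baseline for the knowledge-tracing component of intelligent tutoring systems.
# Parameter values are placeholders, not estimates from any cited study.
from dataclasses import dataclass


@dataclass
class BKTSkill:
    p_known: float = 0.2    # P(L0): prior probability the skill is already mastered
    p_transit: float = 0.1  # P(T): probability of learning after each practice opportunity
    p_guess: float = 0.2    # P(G): probability of answering correctly without mastery
    p_slip: float = 0.1     # P(S): probability of answering incorrectly despite mastery

    def update(self, correct: bool) -> float:
        """Update the mastery estimate after observing one response."""
        if correct:
            evidence = self.p_known * (1 - self.p_slip)
            total = evidence + (1 - self.p_known) * self.p_guess
        else:
            evidence = self.p_known * self.p_slip
            total = evidence + (1 - self.p_known) * (1 - self.p_guess)
        posterior = evidence / total
        # Allow for learning that may occur before the next opportunity.
        self.p_known = posterior + (1 - posterior) * self.p_transit
        return self.p_known


skill = BKTSkill()
for outcome in [True, False, True, True]:
    print(f"P(mastered) = {skill.update(outcome):.3f}")
```

Exposing how each observed response shifts the mastery estimate in this way is one route toward the kind of transparency that explainability-oriented knowledge tracing seeks to provide.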
Despite these advancements, the pedagogical affordances and long-term impacts of AIED applications in STEM education remain underexplored. Key questions persist regarding how these systems influence students’ conceptual understanding, collaboration, and critical thinking. Further research is also needed to investigate how ITS and other AIED tools can be adapted to diverse educational contexts, including differentiated instruction and inclusive practices in K–12 settings.
Holistic Connections Across Trends in Cluster 3
The inductive content analysis of Cluster 3 reveals a multifaceted relationship between AI-integrated curriculum design, immersive learning environments, and AIED applications in STEM education. For instance, the incorporation of AI learning elements into STEM curricula (Trend 7) aligns with the deployment of immersive technologies (Trend 8) to foster active engagement and deeper conceptual understanding. Similarly, the development of ITS and behavior analysis systems (Trend 9) complements efforts to create interactive and adaptive learning environments (Trend 8), suggesting the potential for these tools to be integrated into cohesive STEM education frameworks.
Moreover, the interdisciplinary nature of AIED applications (Trend 9) resonates with the collaborative and contextual learning practices emphasized in STEM education (Trend 7). These interconnections underscore the importance of adopting holistic approaches that bridge technological innovation with pedagogical theory and practice. Future research could explore how these trends collectively contribute to preparing students for STEM careers, not only by developing technical competencies but also by fostering critical thinking, problem-solving, and ethical reasoning. Longitudinal studies may be particularly valuable in assessing how these integrated approaches influence learners’ STEM competencies and career trajectories over time.

3.3.4. Cluster 4: Preparing Teachers to Teach or Integrate AI in the Classroom

Cluster 4 explores the critical role of teacher preparation in the integration of AI into educational settings, identifying two closely related research trends. These trends focus on equipping teachers with the Technological, Pedagogical, and Content Knowledge (TPACK) and professional competencies necessary to teach AI, as well as preparing them to meaningfully incorporate AI technologies into classroom practice. The inductive content analysis of these trends illuminates their contributions to teacher readiness while also highlighting the need for further exploration of their pedagogical and contextual dimensions.
Trend 10: The Emerging Need to Prepare Teachers for AI Education
The tenth trend underscores the growing necessity of preparing teachers to deliver AI education, as reflected in keywords such as “professional development,” “teacher education,” “pre-service teachers,” “AI education,” “pedagogical content knowledge,” “competencies,” “self-efficacy,” and “motivation.” Many K–12 educators face considerable challenges in teaching AI due to limited epistemological understanding and the persistence of misconceptions. These include beliefs such as “AI can learn on its own,” “AI is inexpensive,” or “AI algorithms struggle with complex data” [191,192]. Uğraş and Uğraş [193] identified similar misconceptions among early childhood educators, particularly regarding AI tools like ChatGPT, with concerns ranging from technological dependence to misinformation and diminished teacher-student interaction. Alshorman [194] found that Jordanian science teachers expressed apprehensions about AI’s reliability, ethical implications, and data privacy, fearing that AI integration might reduce instructional autonomy and exacerbate educational inequalities.
In response to these challenges, scholars have proposed frameworks to guide the design of professional development (PD) programs. For instance, Xiao-Fan et al. [195] investigated experienced K–12 IT teachers’ perceptions of AI education through the TPACK framework, identifying distinct domains such as AI-specific knowledge, general pedagogical knowledge, and pedagogical AI knowledge. Yau et al. [196] categorized secondary teachers’ conceptions of AI education into six areas, including technology bridging, knowledge delivery, and ethics establishment. Sun et al. [197] evaluated a TPACK-based PD initiative, reporting significant improvements in CS teachers’ AI teaching skills, content knowledge, and self-efficacy. Similarly, Tang et al. [198] proposed the ML4STEM PD program, which outlined goals for integrating machine learning into STEM education through a TPACK-informed approach.
Despite these developments, several gaps remain. For instance, the conceptual boundaries between different TPACK dimensions, such as pedagogical AI knowledge and general pedagogical knowledge, are often blurred. Moreover, many frameworks overlook the influence of teachers’ dispositions, beliefs, and attitudes on their epistemological understanding of AI and their willingness to teach it. While Xia et al. [148] explored teachers’ motivation for planning AI curricula using self-determination theory, their focus on psychological needs did not fully address the alignment between AI content and effective pedagogical strategies. Future research could investigate how motivational factors interact with instructional approaches, particularly in designing AI curricula that balance technical complexity with accessibility. Longitudinal studies may also be valuable in assessing the sustained impact of PD initiatives on teachers’ AI instructional practices.
Trend 11: Enhancing Teachers’ Readiness to Integrate AI Technologies into the Classroom
The eleventh trend centers on the need to prepare teachers to integrate AI technologies into classroom instruction, as indicated by keywords such as “technology integration,” “acceptance,” “classroom teaching,” “strategies,” “skills,” “attitudes,” and “anxiety.” Seufert et al. [199] organized studies on AI technology integration using the Knowledge, Skills, and Attitudes (KSA) framework, linking technology-related knowledge to the TPACK model and emphasizing that teachers must first understand the functionality of AI tools before effectively adopting them in their pedagogical practices.
In terms of skills, studies have examined how teachers can leverage AI to enhance instructional strategies. Zeegers and Elliott [200], for example, explored the use of NLP-based applications to improve teachers’ questioning techniques, thereby fostering greater student engagement. Additionally, research has investigated teachers’ attitudes toward AI adoption. Kuleto et al. [201] found that while Serbian K–12 teachers were generally optimistic about AI integration, they expressed concerns regarding the lack of emotional and humanistic support from AI systems. Chocarro et al. [202], using the technology acceptance model (TAM), found that perceived simplicity and utility significantly influenced teachers’ attitudes toward chatbots. Cohn et al. [129] introduced a multimodal AI-supported framework for collaboration between teachers and researchers, demonstrating how AI-generated timelines can enhance formative feedback and student engagement in STEM classrooms. Zhang et al. [203] identified factors influencing pre-service teachers’ intentions to use AI tools, including perceived ease of use, usefulness, and gender-specific variables such as user anxiety and enjoyment.
To address teacher anxiety and build confidence, PD initiatives have been developed to demystify AI technologies. Nazaretsky et al. [204], for instance, introduced a PD programme for science teachers that aimed to clarify misconceptions about AI and demonstrate its empowering potential. This initiative sought to align the technological affordances of AI with their pedagogical applications, ensuring meaningful integration into classroom practice. Similarly, Ding et al. [160] implemented a case-based PD programme, showing that engaging teachers with structured and ill-structured case problems could enhance AI literacy and promote reflective integration strategies within subject-specific teaching contexts.
Nonetheless, several challenges persist. Aligning AI technologies with curriculum objectives remains complex, particularly when considering teachers’ prior experience, resource availability, and institutional support. Future research could explore how PD initiatives might be tailored to diverse educational settings, especially those with limited resources. Additionally, investigating how AI tools can support differentiated instruction and inclusive practices, such as addressing the needs of students with learning disabilities or those from linguistically diverse backgrounds, could provide valuable insights into the broader applicability of AI in education.
Holistic Connections Across Trends in Cluster 4
The inductive content analysis of Cluster 4 reveals a reciprocal relationship between teacher preparation for AI education (Trend 10) and the integration of AI technologies into classroom practice (Trend 11). For example, TPACK-based frameworks used to inform PD initiatives for teaching AI (Trend 10) also support the development of the technological knowledge and skills required for effective classroom integration (Trend 11). Furthermore, addressing misconceptions about AI through PD (Trend 10) aligns with efforts to reduce teacher anxiety and promote positive attitudes toward AI tools (Trend 11).
The emphasis on teacher motivation and self-efficacy in AI education (Trend 10) also resonates with studies on technology acceptance and integration (Trend 11), suggesting that holistic PD programs should address both pedagogical and technological dimensions. Such programs could combine hands-on training with reflective exercises that encourage teachers to critically examine their beliefs about AI. Exploring how these trends collectively contribute to cultivating an AI-literate teaching workforce may be a fruitful area for further inquiry. A workforce equipped not only to enhance student learning outcomes but also to promote ethical, responsible, and inclusive AI use in classrooms will be essential to the future of AI-integrated education.

3.4. Co-Citation Analysis: Highly Co-Cited Publications (RQ2)

To address RQ2, the co-citation analysis was conducted to identify the influential publications shaping the field. This was achieved by examining patterns of co-citation among references cited across the 317 articles included in the dataset. Out of 13,856 co-cited references, 41 publications met the inclusion threshold of a minimum co-citation frequency of six (f = 6). This indicates that each of these works was co-cited at least six times across the 317 articles analyzed. The resulting network visualization map, presented in Figure 5, illustrates the structural relationships among these highly co-cited publications. The Lin-Log modularity method was applied to reduce edge crossings and enhance overall readability [69].
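For readers less familiar with the mechanics of co-citation counting, the following minimal sketch (with invented article identifiers and reference labels) shows how pairwise co-citation frequencies can be tallied from the reference lists of retrieved articles before an inclusion threshold such as f = 6 is applied; in the present study, this counting and the subsequent network visualization were handled within VOSviewer.

```python
# Minimal sketch of co-citation counting; article IDs and reference labels are invented.
from collections import Counter
from itertools import combinations

articles = [
    {"id": "A1", "references": ["Ref1", "Ref2", "Ref3"]},
    {"id": "A2", "references": ["Ref2", "Ref3", "Ref4"]},
    {"id": "A3", "references": ["Ref1", "Ref3", "Ref4"]},
]

co_citations = Counter()
for article in articles:
    # Two references are co-cited whenever they appear together in one article's reference list.
    for ref_a, ref_b in combinations(sorted(set(article["references"])), 2):
        co_citations[(ref_a, ref_b)] += 1

MIN_CO_CITATIONS = 6  # inclusion threshold applied in the study (f = 6)
network_edges = {pair: freq for pair, freq in co_citations.items() if freq >= MIN_CO_CITATIONS}

print(co_citations.most_common(3))  # most frequently co-cited reference pairs
print(len(network_edges))           # pairs that would enter the visualized network
```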
The ‘top ten’ most frequently co-cited publications are summarized in Table 3, while the full list of 41 publications is provided in Appendix B (see Table A1). A detailed review of these publications was undertaken to triangulate their positioning within both the keyword co-occurrence and co-citation networks, in accordance with the analytical approach outlined by Ali and Tse [63]. As described in Section 2, this review supplemented the extraction of key insights during the interpretive synthesis process, which represents a natural extension of the inductive content analysis. More importantly, the review facilitated and corroborated the identification of major research gaps and unresolved issues within the existing literature. Critical details of the interpretive synthesis process are provided in Section 4.1. The findings derived from this synthesis process, together with their implications and recommendations for future research, are carefully examined and elaborated upon in the next section.

4. Discussion: Interpretive Synthesis and Implications

4.1. Cross-Cutting Research Issues and Gaps (RQ3)

This section synthesizes and discusses insights from the inductive content analysis of the four clusters identified within the keyword co-occurrence network, supplemented by a review of 41 highly co-cited publications (Appendix B), to address RQ3. As outlined in Appendix A, the interpretive synthesis was guided by a structured, multi-step approach grounded in established qualitative research procedures [63,66,67]. This methodological rigor enabled a comprehensive and systematic identification of cross-cutting research issues and gaps in the AI-Education literature. The synthesis proceeded through the following iterative steps:
  • Review of Key Sections: The methods, findings, and discussion sections of 317 articles within the keyword co-occurrence clusters were reviewed to identify recurring themes, conceptual patterns, and methodological tendencies.
  • Assessment of Thematic Positioning: Publications were thematically positioned within the context of the four clusters derived from the keyword co-occurrence network (Figure 4). This step linked axial (qualitative) codes to network-derived clusters and facilitated the extraction of insights aligned with dominant research trends.
  • Triangulation Across Networks: The 41 highly co-cited publications were reviewed to locate their positioning within both the keyword co-occurrence and co-citation networks, consistent with the analytical approach outlined by Ali and Tse [63]. This step corroborated and refined the extracted insights by assessing whether they were supported by the highly co-cited literature shaping the field.
  • Synthesis of Insights: The extracted insights were critically examined to uncover underexplored dimensions, overlooked areas, and methodological limitations, which were then synthesized into a coherent set of cross-cutting research issues and gaps.
Through this interpretive synthesis, several prominent challenges and limitations were identified within the existing AI-Education literature. These issues are summarized in Table 4 and serve as the foundation for pragmatic, issue-specific recommendations aimed at guiding future research (RQ4). These targeted recommendations are consolidated into a cohesive, integrated framework, as discussed in Section 4.2. The following sections elaborate on each identified research issue, offering critical reflections on their implications and significance for the continued advancement of the field.

4.1.1. Issue 1: Limited Congruence Between Technological and Pedagogical Affordances of AIED Applications

A prominent research issue identified in the current literature concerns the limited congruence between the technological and pedagogical affordances of AIED applications. This gap is evident across both K–12 and higher education contexts, where a substantial body of technical studies tends to prioritize the technological capabilities of AIED systems while often overlooking the educational settings in which these systems are implemented. As a result, the pedagogical affordances (i.e., those features that directly influence instructional strategies and learning outcomes) are frequently under-theorized or insufficiently addressed, thereby constraining the practical utility of AIED tools for educators and learners.
This disconnect is particularly observable in studies focused on predictive analytics, such as those aiming to forecast student dropout rates at institutional, national, or regional levels (e.g., [111,112,113,114,205,206]). These studies have employed advanced AI models [113], wearable technologies for behavioral data collection [206], and complex datasets capturing various learner attributes [112]. While such approaches undoubtedly enhance predictive accuracy, they often fall short in translating these insights into actionable pedagogical strategies. Without clear guidance on how to respond to predictive indicators, educators may experience cognitive overload, struggling to interpret and apply the data meaningfully within their instructional practices.
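As a purely illustrative sketch (synthetic data and hypothetical feature names, not drawn from the cited studies), the example below shows how even a simple dropout classifier can expose which learner attributes drive its predictions, a modest step toward the actionable guidance that such predictive work often leaves implicit.

```python
# Illustrative only: synthetic data and hypothetical feature names.
# A simple, interpretable dropout classifier whose coefficients hint at which
# learner attributes drive each prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
features = ["logins_per_week", "assignments_submitted", "forum_posts"]
X = rng.normal(size=(200, len(features)))
# Synthetic labels: lower engagement is loosely associated with dropping out.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=200) < -0.5).astype(int)

model = LogisticRegression().fit(X, y)
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")  # negative weights act as protective factors
```

Even so, coefficient lists alone do not tell a teacher what to do next; pairing such outputs with concrete instructional responses remains the harder design problem discussed here.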
This concern is echoed in Zawacki-Richter et al. [207], the 1st highly co-cited publication, which offers a systematic review of AI applications in higher education. They highlight the persistent misalignment between technological capabilities and pedagogical needs, questioning the extent to which AIED systems are designed to address real-world educational challenges. Similarly, Luckin et al. [32], the 8th highly co-cited publication, advocate for a pedagogy-first approach, emphasizing that technological design must be informed by the specific educational contexts in which AIED tools are deployed. Despite these calls, efforts to bridge the gap between technological innovation and pedagogical relevance remain limited. Although general frameworks for AI integration in higher education have been proposed (e.g., [80,130,133]), many of them lack the contextual granularity needed to support educators and learners in diverse instructional settings. This raises a critical question for future research: how can AIED applications be systematically designed to address educational problems in specific contexts while maintaining a balance between technological sophistication and pedagogical relevance?
An additional dimension of this issue involves the insufficient attention to social interaction within AIED systems, particularly in online learning environments. Shi and Guo [132] identify this as a recurring challenge, noting that social interaction is essential for sustaining long-term engagement among both students and teachers [137,208]. Addressing this gap presents a dual opportunity: to enhance learner engagement and to support collaborative learning processes. One promising direction may lie in the advancement of personalized-oriented ITS research (e.g., [75,189,209]). While ITS platforms have demonstrated considerable potential in adapting instruction to individual learners, their capacity to support meaningful peer-to-peer and teacher-student interactions remains underdeveloped.
This trajectory aligns with the vision articulated by Holmes et al. [210], the 7th highly co-cited publication, who anticipated that “AI should be getting into the realm of deeper self-learning by the early 2020s and become capable of assisting, collaborating, coaching, and mediating [learners] by early 2023” (pp. 3–4) [210]. However, realizing this vision requires a more deliberate integration of pedagogical insights into the design and deployment of AIED systems. Future research might explore how emerging AI technologies can be leveraged to balance personalization with social interaction, thereby addressing the dual challenges of learner engagement and pedagogical alignment.
In summary, the current literature suggests that AIED applications often privilege technological advancement at the expense of pedagogical utility. This misalignment may result in systems that fail to address the practical needs of educators or to enhance learning outcomes in meaningful ways. Furthermore, while personalized ITS have shown promise, they frequently neglect the importance of social interaction, which is critical for fostering collaborative and socially enriched learning environments. Addressing this issue will require a concerted effort to integrate pedagogical theory into the design of AIED systems and to promote collaboration among AI developers, education researchers, and practitioners.
Recommendations to Address Research Issue 1
The following specific recommendations are proposed to address Research Issue 1, drawing on the axial codes (R1, R2, R3, R4, R5, and R6) derived from the preceding interpretive synthesis.
First, future AIED research should adopt a pedagogy-first approach (R1) by prioritizing pedagogical considerations during the design phase to ensure that technological affordances align with the instructional needs of educators and learners across diverse educational settings. Second, researchers are encouraged to develop context-specific frameworks (R2) by designing AIED models tailored to address distinct educational challenges, incorporating both generalizable design principles and localized pedagogical requirements. Third, it is advisable for AIED systems to integrate actionable insights for educators (R3), providing clear and practical strategies for interpreting and applying predictive analytics. Such integration may reduce cognitive overload and support informed pedagogical decision-making.
Fourth, design efforts should enhance social interaction features (R4) within AIED applications, particularly in online learning systems, to foster sustained engagement and collaboration. Fifth, by leveraging advances in personalization (R5), AIED applications could aim to balance individualized learning pathways with opportunities for peer and teacher interaction. Finally, promoting interdisciplinary collaboration (R6) among AI developers, education researchers, and practitioners may help bridge the gap between technological innovation and pedagogical application, thus fostering more holistic and context-sensitive solutions.

4.1.2. Issue 2: Insufficient Bottom-Up Perspectives in AI Literacy Frameworks

The concept of ‘AI literacy’ has gained considerable traction in recent years, as evidenced by its frequent appearance in Clusters 2 and 3 of the keyword co-occurrence analysis (see Figure 4). Despite this growing interest, many studies continue to lack a clear, context-specific conceptualization of AI literacy within targeted educational settings (e.g., [36,120,122,149,151,165,178,211]). While some researchers have adopted the widely cited definition proposed by Long and Magerko [212], the 3rd highly co-cited publication, this definition largely reflects a top-down perspective [150]. According to their framework, “AI literacy is a set of competencies that enables individuals to critically evaluate AI technologies; communicate and collaborate effectively with AI; and use AI as a tool online, at home, and in the workplace” (p. 2) [212]. These competencies are organized into five dimensions: (i) What is AI? (ii) What can AI do? (iii) How does AI work? (iv) How should AI be used? (v) How do people perceive AI?
Although foundational, this framework is primarily derived from expert opinions, grey literature, and the “5 Big AI Ideas” framework developed by Touretzky et al. [8], the 4th highly co-cited publication. Their work, developed under the “AI for K–12” initiative, involved collaboration with the Association for the Advancement of Artificial Intelligence (AAAI) and the Computer Science Teachers Association (CSTA). However, the extent of K–12 educators’ contributions during the framework’s development remains unclear. This lack of explicit teacher input raises important concerns about the practical applicability of such frameworks across diverse classroom contexts.
Building on these early efforts, Wong et al. [213] adapted AI literacy frameworks for K–12 education in Hong Kong, categorizing AI literacy into three dimensions: AI concepts, AI applications, and AI ethics. While more succinct, this framework offers limited guidance for designing and implementing AI content tailored to the specific needs of different K–12 settings. Similarly, Ng et al. [214], the 34th highly co-cited publication, proposed a four-dimensional framework—drawing on Bloom’s taxonomy [215]—encompassing: Know and understand AI; Use and apply AI; Evaluate and create AI; and AI ethics. However, this framework presents interpretive challenges.
For instance, it is often significantly more challenging to understand and articulate how an AI tool functions than to use the tool for creating digital artefacts. Within Bloom’s taxonomy, however, “understanding” is positioned at a lower level of cognitive complexity than “creating” [215]. This hierarchical misalignment may lead to confusion when designing learning outcomes. Consider the following example: the outcome “explain how an AI-image generator produces images from a prompt” (aligned with the understanding level) may, in practice, demand a deeper cognitive engagement than “create an image by writing a prompt to an AI-image generator” (aligned with the creating level). In such cases, the act of creating with AI tools may be more accessible for learners than providing a conceptual explanation of the underlying mechanisms. This misalignment arguably complicates the task of educators and instructional designers in formulating learning outcomes and ensuring constructive alignment with learning activities and assessments.
These interpretive challenges underscore the need for more nuanced AI literacy frameworks, which should, perhaps, be informed by the Structure of the Observed Learning Outcome (SOLO) taxonomy [216]. The SOLO taxonomy delineates five levels of understanding: prestructural, unistructural, multistructural, relational, and extended-abstract. These levels allow for a more context-sensitive classification of learning outcomes. For instance, within the SOLO taxonomy, “explain” is typically situated at a higher level of cognitive complexity (relational or extended-abstract), depending on the depth of understanding required. Furthermore, SOLO distinguishes between different forms of “creating.” For example, “creating with something” (e.g., employing AI tools) is generally classified at the multistructural level, whereas “creating something new” (e.g., developing AI tools) is associated with the highest, extended-abstract level. Such distinctions are possible because the SOLO taxonomy recognizes that understanding itself is multi-layered, providing a more flexible and contextually appropriate basis for designing learning outcomes, especially in AI literacy education.
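As a hypothetical illustration (the outcome wording and level assignments below are ours, not drawn from the cited frameworks), the snippet shows how a curriculum designer might encode SOLO levels and flag learning activities that sit below the cognitive level an intended outcome demands.

```python
# Hypothetical illustration: SOLO levels used to audit constructive alignment.
# Outcome wording and level assignments are illustrative, not taken from the cited frameworks.
SOLO_LEVELS = ["prestructural", "unistructural", "multistructural", "relational", "extended-abstract"]

example_outcomes = {
    "unistructural": "Name one everyday application that relies on AI.",
    "multistructural": "Create an image by writing a prompt to an AI image generator.",
    "relational": "Explain how an AI image generator produces images from a prompt.",
    "extended-abstract": "Design and justify a new AI-supported tool for an authentic problem.",
}


def is_aligned(intended_outcome_level: str, activity_level: str) -> bool:
    """An activity is aligned when it reaches at least the intended outcome's SOLO level."""
    return SOLO_LEVELS.index(activity_level) >= SOLO_LEVELS.index(intended_outcome_level)


# Prompt-writing (multistructural) alone does not satisfy an outcome that asks learners
# to explain the underlying mechanism (relational).
print(is_aligned("relational", "multistructural"))  # False
```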
Recent scholarship has increasingly emphasized the importance of incorporating bottom-up perspectives into AI literacy frameworks. Casal-Otero et al. [217], for example, reviewed AI literacy studies in K–12 settings and identified varied approaches to teaching AI, including recognizing AI artefacts, understanding AI processes, and using AI tools. They underscored the need to consider the perspectives of both teachers and students when designing curricula, particularly to accommodate diverse learning needs and gender differences. Similarly, Laupichler et al. [49] reviewed AI literacy research in higher education and found that top-down frameworks often confuse educators, leaving them uncertain about how to structure courses or design AI content. They also highlighted the persistent challenge of defining AI literacy in a manner “that is clear and unambiguous” (p. 13) [49].
Expanding on this discourse, Kong et al. [36] argued that AI literacy should extend beyond technical competencies to include critical thinking, ethical awareness, and problem-solving skills. They emphasized the need for AI education to empower individuals as active participants in an AI-driven society, ensuring that AI literacy aligns with broader goals such as equity, sustainability, and lifelong learning.
In an exploratory interpretive study, Carolus et al. [218] proposed a digital-interaction model for AI literacy, developed from interviews with AI experts. Their model comprises three overarching dimensions: understanding functional principles of AI systems, mindful usage of AI systems, and user group-dependent competencies, along with ten subdimensions. While this model offers valuable theoretical insights, its complexity may limit its applicability for novice learners, particularly in K–12 contexts. Addressing this concern, Chiu et al. [150] co-designed an AI literacy framework with experienced K–12 teachers, integrating ‘bottom-up’ perspectives to enhance practical relevance. Their framework expands the scope of AI literacy to include confidence, self-reflection, and ethical reasoning. However, the study focused primarily on middle school teachers, raising questions about its generalizability across different educational levels and contexts.
Collectively, these studies highlight the value of interpretive methodologies in capturing the nuanced perspectives of educators and learners. Nevertheless, as shown in Table 5, AI literacy research remains dominated by top-down frameworks grounded in expert-driven reviews or theoretical constructs. Consequently, observational or exploratory studies that examine the relationships between AI literacy dimensions based on the lived experiences of educators and learners remain limited [150,151,211]. This gap underscores the need for future research to adopt inclusive, bottom-up approaches that prioritize the insights of those directly engaged in teaching and learning.
In summary, many existing AI literacy frameworks lack inclusivity and contextual adaptability, limiting their effectiveness across diverse educational settings. These frameworks, often designed from a top-down perspective, tend to overlook the practical insights of teachers and students at the grassroots level. Moreover, they frequently fail to account for varying levels of understanding among novice and advanced learners, particularly in K–12 contexts. The absence of simplified, context-specific curricula and adequate teacher training may further hinder their implementation.
Recommendations to Address Research Issue 2
The following specific recommendations are proposed to address Research Issue 2, drawing on the axial codes (R7, R8, R9, R10, R11, and R12) derived from the preceding interpretive synthesis.
First, future research should incorporate bottom-up perspectives (R7) by prioritizing interpretive methodologies that capture the lived experiences of teachers and students. Such methodologies may ensure that AI literacy frameworks remain inclusive and contextually relevant across diverse educational settings. Second, it is suggested that AI literacy frameworks adopt nuanced taxonomies (R8), such as the SOLO taxonomy [216], to design learning outcomes that reflect varying levels of understanding and competency. Third, efforts should focus on simplifying AI literacy frameworks for novice learners (R9), particularly within K–12 contexts, to ensure accessibility, age-appropriateness, and practicality for both young learners and their educators.
Fourth, researchers and practitioners are encouraged to develop context-specific curricula (R10) by collaborating to create AI literacy curricula tailored to the unique needs of specific educational levels, while accounting for cultural, contextual, and learner diversities. Fifth, fostering teacher training and resources (R11) is essential; PD programs should be designed to equip educators with the requisite knowledge and skills to understand and implement AI literacy frameworks effectively, thus bridging the gap between theory and practice. Finally, promoting interdisciplinary collaboration (R12) among AI researchers, education specialists, and classroom practitioners may support the development of holistic frameworks that balance expert insights with the practical realities of teaching and learning.

4.1.3. Issue 3: Ambiguous Relationship Between Computational Thinking and AI in STEM Education

Clusters 2 and 3 of the keyword co-occurrence analysis (Figure 4 and Table 2) reveal intricate and multidirectional linkages among key terms such as “computational thinking,” “artificial intelligence,” “generative AI,” “K–12 education,” “STEM education,” “educational robotics,” and “tools.” These interconnections suggest promising synergies; however, the relationship between computational thinking (CT) and AI remains conceptually ambiguous and insufficiently theorized. This ambiguity extends into broader STEM educational contexts (e.g., [39,138,145,151,152,168,219]), where the integration of AI technologies could potentially enrich student-centered learning environments. Clarifying this relationship is essential for informing the design of AI-integrated STEM curricula that foster both CT and AI literacy in meaningful and pedagogically sound ways.
Within STEM education, researchers have argued that embedding AI technologies in tangible educational tools (e.g., educational robotics, block-based programming environments, AR/VR platforms) can support highly interactive, constructionist learning experiences (e.g., [39,151,168,173]). These environments are typically designed to promote hands-on, student-centered learning. Some scholars have extended this argument by suggesting that such tools may also be leveraged to cultivate AI literacy among K–12 learners (e.g., [147,163,165,166,213]). This pragmatic approach builds on the foundational developments of CT and STEM education to support the learning of AI concepts. However, the a priori relationship between CT and AI remains unclear, raising important questions about how these domains intersect and whether CT serves as a conceptual or pedagogical foundation for AI learning.
Insights from Lodi and Martini [17] and Wong et al. [213] offer initial conceptual grounding for this intersection. Lodi and Martini [17] revisit Seymour Papert’s original interpretation of CT, which emphasizes “making and understanding computational objects” (p. 894). Wong et al. [213] extend this perspective by conceptualizing AI as a coalescence of such computational objects. They propose that existing CT teaching and learning activities could be adapted to introduce AI concepts, applications, and ethical considerations. Supporting this view, Lin et al. [151] demonstrated a positive association between AI literacy and CT efficacy among Chinese secondary school students. Their findings suggest that high-level CT skills may be fostered through learning designs that embed AI within real-life problem-solving contexts. Nevertheless, further empirical research is needed to examine whether this relationship holds across diverse educational settings and learner populations.
Collaborative efforts to develop AI curricula often involve partnerships between educationists and teachers, as illustrated by Chiu et al. [220], the 12th highly co-cited publication. These collaborations, however, are frequently constrained by the absence of practical AI competency frameworks, which are essential for guiding instructional design and enabling teachers to implement AI learning content effectively. Casal-Otero et al. [217] argue that many existing frameworks remain overly theoretical and top-down, making them difficult to operationalize in classroom contexts. Leveraging established CT pedagogies may offer a pathway to address this challenge, yet doing so requires a clearer understanding of the foundational relationship between CT and AI literacy.
Conversely, a growing body of research has explored the integration of AI, particularly machine learning (ML), into CT and STEM education (e.g., [145,213,219]). Given that AI is a major subfield of CS, its development is inherently tied to the CT skills of its practitioners. Tedre et al. [145] note that such integration necessitates a recalibration of current CT education, as “there is no agreement over the relationship between ML skills and knowledge and the multitude of skills and knowledge labeled computational thinking” (p. 110568). This observation invites further inquiry into how CT development through AI learning might influence students’ epistemological understanding of AI and its applications.
An emerging area of interest involves the role of CT in engaging with generative AI and AI models, particularly LLMs. For example, Hijón-Neira et al. [152] demonstrated that LLMs can scaffold programming education by offering personalized feedback and unsolicited hints, thereby enhancing students’ CT skills. However, several challenges have been identified, including the unreliability of generated responses and students’ overreliance on these tools [221,222]. Reeves et al. [222] observed that even slight variations in prompts can significantly alter LLM outputs, underscoring the importance of “prompt engineering” [154,223] as an emerging skill set. Future research could explore how CT can be cultivated through interactions with generative AI, while also addressing the pedagogical and ethical implications of tool reliance and output variability [156,224].
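As a small, hedged illustration of how prompt sensitivity might be made visible to learners within a CT activity, the snippet below compares invented, canned responses to three paraphrased prompts; in an actual classroom exercise, the responses would come from whichever LLM interface students use.

```python
# Hedged illustration: the prompts are paraphrases of one task, and the "responses"
# are invented placeholders standing in for real LLM outputs.
from difflib import SequenceMatcher

responses = {
    "Write a Python function that returns the factorial of n.":
        "def factorial(n):\n    return 1 if n <= 1 else n * factorial(n - 1)",
    "Write Python code to compute n factorial.":
        "import math\nprint(math.factorial(int(input())))",
    "Give me a Python factorial function.":
        "def fact(n):\n    result = 1\n    for i in range(2, n + 1):\n        result *= i\n    return result",
}

prompts = list(responses)
for i in range(len(prompts)):
    for j in range(i + 1, len(prompts)):
        similarity = SequenceMatcher(None, responses[prompts[i]], responses[prompts[j]]).ratio()
        print(f"Prompt {i + 1} vs. prompt {j + 1}: surface similarity {similarity:.2f}")
```

Students can then discuss why semantically similar prompts yield structurally different code, offering a natural entry point into both prompt engineering and the reliability concerns noted above.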
In summary, the relationship between CT and AI remains conceptually ambiguous and empirically underexplored, despite its potential to transform STEM education. While CT is already embedded in many STEM curricula, its role in supporting the teaching of AI concepts, applications, and ethics has yet to be fully realized. Clarifying this relationship could inform the development of AI literacy curricula, guide the integration of AI into CT education, and support the adaptation of existing pedagogical frameworks to better prepare students for AI-driven futures. At the same time, emerging technologies such as generative AI offer new opportunities to enhance CT, but also present challenges related to tool reliability, learner overreliance, and the lack of practical competency frameworks.
Recommendations to Address Research Issue 3
The following specific recommendations are proposed to address Research Issue 3, drawing on the axial codes (R13, R14, R15, R16, R17, and R18) derived from the preceding interpretive synthesis.
First, future research should systematically explore foundational relationships (R13) by investigating how CT and AI literacy intersect and influence one another within STEM educational contexts. Second, it is recommended that researchers and educators leverage CT for AI literacy (R14) by adapting existing CT teaching and learning activities to introduce AI concepts, applications, and ethical considerations, thus building on established pedagogical practices in STEM education. Third, efforts should focus on integrating AI into CT curricula (R15), particularly by embedding ML into CT and STEM curricula. Such integration may necessitate recalibrating current CT frameworks to include AI-specific competencies that reflect emerging technological demands.
Fourth, researchers are encouraged to explore generative AI in CT education (R16) by examining how CT skills can be cultivated through engagement with generative AI tools, such as LLMs, while also addressing challenges related to tool reliability, ethical concerns, bias in outputs, and potential learner overreliance. Fifth, the development of practical AI competency frameworks (R17) is advised, with collaborative efforts between researchers and practitioners aiming to create bottom-up frameworks that support educators in designing and implementing effective AI learning content within STEM education. Finally, promoting interdisciplinary collaboration (R18) among AI researchers, education specialists, and practitioners may foster innovative approaches to integrating CT and AI literacy, thereby enriching STEM education across diverse learning environments.

4.1.4. Issue 4: Lack of Explicit Interpretation of AI Ethics for Educators

The need for educators to gain a clearer understanding of AI ethics has been increasingly emphasized by researchers and policymakers through various calls to action (e.g., [36,150,225,226,227]). These scholarly contributions underscore the importance of ethical considerations in AIED applications and the necessity of equipping future generations with the ability to critically engage with AI. However, despite this growing consensus, practical guidance to support educators in interpreting and integrating AI ethical principles into their teaching remains limited. This gap leaves many teachers without the necessary resources to demystify AI ethics and embed it meaningfully into classroom practices.
A related concern lies in the limited conceptualization of AI ethics within existing AI literacy frameworks. For example, Ng et al.’s [214] four-dimensional framework, grounded in Bloom’s taxonomy [215], treats AI ethics as a discrete dimension rather than embedding it across all levels of learning. Such compartmentalization may hinder educators’ ability to integrate ethical considerations holistically. Ideally, AI ethics should permeate every stage of AI education—whether students are learning to understand, use, or create AI systems. When treated as a standalone component, ethical engagement risks being overlooked in curriculum design, a concern echoed by scholars across both K–12 and higher education contexts (e.g., [150,228,229]). These scholars advocate for a more integrated and collaborative approach to teaching AI ethics. In alignment with this view, Touretzky et al.’s [8] “Five Big Ideas” framework places the societal impact of AI at the center of AI education, encouraging educators to adopt a mindset that foregrounds ethics throughout the learning process.
Another challenge stems from the terminology used in AI education. For instance, Meng-Leong and Hung [179] introduced the term “AI thinking” in the context of K–12 STEM education, loosely defining it as a logical reasoning process facilitated by data-driven, AI-based tools. While the term aims to highlight collaborative problem-solving in STEM, it may inadvertently confuse educators by obscuring the locus of agency—whether it is the human, the AI, or both. A more precise term, such as ‘AI-mediated thinking,’ may offer greater conceptual clarity. This term could be defined as the process by which humans engage in problem-solving or reasoning, with AI providing data-driven insights, tools, or support to enhance human decision-making. Such a human-centric framing reinforces the role of educators and learners as active agents, with AI serving as a cognitive aid rather than a decision-maker.
Despite its conceptual potential, Meng-Leong and Hung’s [179] work does not address the ethical implications of student interactions with AI tools during problem-solving or solution development. This omission highlights a broader need for future research to position AI ethics as a core component of all AI-related teaching and learning activities. Cardona et al. [226] similarly argue that AI ethics should be interwoven throughout the educational process—from curriculum design to classroom implementation. Encouragingly, researchers at MIT have developed “project-based” AI ethics activities for middle school students, embedding ethical reflection into technical lessons [158]. These activities encourage learners to critique AI systems through an ethical lens and to grapple with the moral dimensions involved in designing and deploying such systems.
Moreover, the ethical complexities of human-AI interactions, particularly in contexts involving children and interactive technologies such as robotics, present unique challenges that warrant deeper investigation. Smakman et al. [230] emphasize the need to understand these interactions in order to uncover their implications for AI education. Their work suggests that fostering ethical awareness requires more than abstract instruction; it involves engaging students with the lived realities of AI use. Building on this, future research should aim to refine key concepts (e.g., AI literacy, AI-mediated thinking) while developing clear, actionable frameworks that support educators in addressing the ethical dimensions of AI in the classroom.
In summary, the integration of AI ethics into education demands a unified and embedded approach. Treating ethics as an isolated topic risks fragmenting students’ understanding and limiting their capacity to critically evaluate the ethical implications of designing, using, and interacting with AI technologies. Addressing this issue requires clearer conceptual definitions, enhanced educator support, and practical resources that facilitate the teaching of ethics through student-centered methods. While project-based learning has shown promise in this regard, its potential remains underutilized across diverse educational contexts.
Recommendations to Address Research Issue 4
The following specific recommendations are proposed to address Research Issue 4, drawing on the axial codes (R19, R20, R21, R22, R23, and R24) derived from the preceding interpretive synthesis.
First, it is recommended to integrate AI ethics across learning levels (R19), embedding ethical considerations throughout all stages of AI education to ensure that students develop a critical understanding of ethics in relation to the use, design, and implementation of AI technologies. Second, efforts should be made to clarify and refine terminologies (R20), particularly by revising ambiguous terms such as ‘AI thinking’ to emphasize their human-centric nature. For instance, adopting terms like ‘AI-mediated thinking’ may provide greater clarity by highlighting the role of humans as decision-makers and AI as supportive cognitive tools.
Third, the development of practical resources for educators (R21) is essential. Collaborative initiatives between researchers and policymakers should focus on creating accessible resources, guidelines, and PD programs to help educators demystify AI ethics and integrate it effectively into their teaching practices. Fourth, embedding ethics into project-based learning (R22) is advised, building on existing initiatives to incorporate AI ethics into activities that enable students to critically evaluate and construct rudimentary AI systems while addressing ethical considerations.
Fifth, future research should explore the ethical implications of human-AI interactions (R23), particularly by investigating the ethical dimensions of children’s interactions with AI technologies such as robotics. Such research may provide educators with actionable insights for addressing the ethical use of these technologies in classroom settings. Finally, promoting interdisciplinary collaboration (R24) among AI ethicists, education researchers, and practitioners could foster the development of holistic frameworks that integrate ethical considerations into all aspects of AI education.

4.1.5. Issue 5: Limitations of Existing PD Frameworks in AI Teacher Education Research

Within Cluster 4 of the keyword co-occurrence network (see Figure 4), several PD frameworks have been identified to support educators in preparing for AI teaching and learning (e.g., [160,195,197,231]). While these initiatives represent meaningful progress, they also exhibit a range of conceptual and practical limitations that warrant further scholarly attention.
One of the most prominent issues lies in the widespread reliance on the integrative TPACK framework [232,233], which has been extensively adopted in teacher technology education. Although TPACK offers a foundational structure, its current applications in AI education often lack the specificity required to address the unique theoretical and applied dimensions of AI. In particular, the dual nature of AI concepts (such as ML) poses challenges for educators, especially those from nontechnical backgrounds [145,160,198]. Understanding ML necessitates both a priori (theoretical) and a posteriori (applied) knowledge, yet existing PD frameworks tend to treat these knowledge domains superficially. This limitation may hinder teachers’ ability to grasp the hierarchical complexity of AI concepts.
The hierarchical model of the SOLO taxonomy [216] offers a potentially valuable lens through which to address this issue. By categorizing understanding into five levels (prestructural, unistructural, multistructural, relational, and extended-abstract), the SOLO taxonomy could support a more structured and differentiated approach to teacher learning. However, this model has yet to be meaningfully integrated into existing PD frameworks for AI education.
A second challenge emerges from the duality of AI education, as articulated in the Artificial Intelligence and the Future of Teaching and Learning report [226]. This duality highlights two critical perspectives: (i) AI as a tool to support teaching and learning across all subjects, and (ii) AI as a subject that students must learn about. Accordingly, educators are increasingly expected to both teach AI content and integrate AI tools into their pedagogical practices. This dual role complicates the distinction between Technological Pedagogical Knowledge (TPK) and Technological Content Knowledge (TCK), as their boundaries may blur in practice. Such ambiguity can create confusion for educators attempting to navigate the overlapping demands of AI integration.
In response to this complexity, Celik [231] proposed an extension of the TPACK framework by introducing “intelligent technology knowledge” (intelligent-TK) and ethical assessment dimensions. While this extension acknowledges the importance of AI ethics, it arguably oversimplifies the pedagogical landscape by prioritizing technological knowledge (TK) over the more nuanced dimensions of TPK and TCK. For instance, Celik posited that teachers equipped with sufficient TK could inherently know how to teach with AI tools. However, this assumption lacks empirical validation and may underestimate the pedagogical intricacies involved in AI integration.
Similarly, Yau et al. [196] proposed a six-dimensional framework for teacher knowledge in AI education, encompassing: (i) technology bridging, (ii) knowledge delivery, (iii) interest stimulation, (iv) ethics establishment, (v) capability cultivation, and (vi) intellectual development. While this framework offers a broader perspective, several of its dimensions appear to be conceptually overlapping. For example, “ethics establishment” could arguably be embedded within “knowledge delivery,” as ethical considerations should permeate the instructional process. Likewise, the distinction between “capability cultivation” and “intellectual development” may be difficult to operationalize in practice. These overlaps risk creating ambiguity for educators attempting to implement the framework effectively.
Addressing these limitations requires a more nuanced understanding of teacher knowledge in AI education, one that recognizes the multifaceted nature of pedagogical practice and prioritizes student-centered approaches. Vazhayil et al. [234], the 10th highly co-cited publication, emphasize the importance of collaboration among policymakers, researchers, industry leaders, and educators in enhancing the effectiveness of PD initiatives. Their findings highlight the need to consider contextual variables, such as cultural, social, economic, and technical factors, and to align pedagogical strategies with the specific demands of local educational environments.
In summary, existing PD frameworks for AI teacher education face several interrelated challenges. These include insufficient differentiation of educator understanding, conceptual ambiguity in knowledge dimensions, and limited integration of ethical considerations. While extensions of the TPACK framework offer promising directions, they remain largely untested in empirical contexts. Furthermore, contextual factors are frequently underexplored, diminishing the relevance and applicability of PD initiatives across diverse settings. Finally, a persistent lack of interdisciplinary collaboration continues to inhibit the development of robust, scalable, and context-sensitive AI teacher education programs.
Recommendations to Address Research Issue 5
The following specific recommendations are proposed to address Research Issue 5, drawing on the axial codes (R25, R26, R27, R28, R29, and R30) derived from the preceding interpretive synthesis.
First, it is recommended that future PD frameworks incorporate hierarchical models of understanding (R25), such as the SOLO taxonomy [216], in order to account for varying levels of educator understanding. These models may facilitate the structured development of both theoretical and applied AI knowledge within teacher training. Second, researchers should clarify overlapping knowledge dimensions (R26) by investigating the interplay between TPK and TCK within AI teacher education frameworks. Such clarification may provide educators with more practical guidance for integrating AI into their instructional practices.
Third, it is advisable to empirically validate framework extensions (R27), such as the proposed “intelligent-TK” [231], through rigorous testing. This process could ensure that such extensions address the pedagogical complexity of AI education while maintaining conceptual coherence. Fourth, ethical considerations should be integrated across teacher education (R28), embedding ethics throughout all dimensions of AI teacher education frameworks to ensure that AI ethics is treated as a foundational element rather than a peripheral topic.
Fifth, the design of context-specific PD programs (R29) is essential. Policymakers and researchers should collaborate to develop PD initiatives that account for contextual factors, including cultural diversity and technological constraints, and that are tailored to the specific needs of educators in varied educational environments. Finally, promoting interdisciplinary collaboration (R30) among policymakers, researchers, industry leaders, and educators could foster the development of comprehensive and scalable AI teacher education programs. Such partnerships may also bridge the gap between theoretical models and practical implementation.

4.2. Overarching Recommendations: An Integrated Framework for AI-Education Convergence (RQ4)

Building on the five research issues presented in Section 4.1 and summarized in Table 4, this section outlines the synthesis process through which issue-specific recommendations were consolidated into a cohesive, integrated framework (RQ4). Consistent with the structured approach described in Appendix A and grounded in established qualitative procedures [63,66,67], the process was iterative yet systematic, with the goal of producing overarching and actionable recommendations that address the multifaceted challenges of AI integration in education.

4.2.1. Rationale of the Synthesis Process

To formulate a unified framework, the final step of the interpretive synthesis described in Section 4.1—namely, the Synthesis of Insights—was systematically reiterated with a cross-issue focus. Axial codes (R1 to R30) corresponding to the recommendations for each research issue were comparatively reviewed to identify points of convergence, complementarity, and remaining gaps. This process yielded a set of interrelated themes that underpin the Integrated AI-Education Convergence Framework, as detailed below.
First, Pedagogical Focus and Contextualization (Theme 1) emerges as a prominent theme, grounded in recommendations that emphasize a pedagogy-first approach (R1) and the development of context-specific frameworks (R2) that align AI technologies with educational objectives (R4, R5). This theme is further reinforced by calls to simplify AI learning frameworks for novice learners (R9) and to tailor curricula to diverse educational contexts (R10, R14), underscoring the importance of adaptable and locally relevant instructional design.
Second, Actionability and Practical Guidance (Theme 2) is identified as another noteworthy theme, reflected in calls for actionable insights (R3) and for practical resources and effective PD for educators (R11, R21, R29). This theme highlights that empirically validated tools and user-oriented guidelines are essential for bridging the gap between theory and practice in AI education.
Third, Interdisciplinary Collaboration and Bottom-up Perspectives (Theme 3) are consistently supported across research issues, with recommendations calling for collaboration among AI developers, education researchers, practitioners, and policymakers (R6, R12, R18, R24, R30). The importance of bottom-up perspectives (R7, R17), including the lived experiences of teachers and students, is also emphasized to ensure that AI literacy frameworks remain relevant and grounded in practice.
Fourth, Ethical Considerations and Terminological Clarity (Theme 4) are likewise highlighted, with recommendations advocating for the embedding of ethical considerations across all levels of AI education (R19, R22, R23) and the refinement of ambiguous terms (R13, R20, R26, R27), such as ‘AI thinking’ and ‘intelligent-TK.’ The integration of AI ethics into teacher education (R28) further underscores the necessity of centering ethical literacy and linguistic precision within AI education initiatives.
Finally, Structured Competency Models and Curriculum Integration (Theme 5) is identified through recommendations to adopt hierarchical competency models (R8, R25), such as the SOLO taxonomy [216], and to integrate AI concepts into CT and STEM curricula (R15, R16). These calls for competency models and AI-content integration are intended to provide a systematic foundation for scaffolding AI literacy among both educators and learners, supporting curriculum development across educational levels.
Taken together, these five themes are analytically distinct yet mutually reinforcing. They reflect the broader demands of AI integration in education and serve as the conceptual basis for the integrated framework presented below.
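For transparency, the consolidation of the thirty issue-specific recommendation codes (R1 to R30) into the five overarching themes can be expressed compactly in code. The following minimal Python sketch is illustrative only and was not part of the study’s analytical toolchain; it records the code-to-theme mapping described above and verifies that every code is assigned to exactly one theme.

```python
# Illustrative sketch only: the axial-code-to-theme consolidation described in
# Section 4.2.1, expressed as a plain mapping with a completeness check.

THEMES = {
    "Theme 1: Pedagogical Focus and Contextualization":
        ["R1", "R2", "R4", "R5", "R9", "R10", "R14"],
    "Theme 2: Actionability and Practical Guidance":
        ["R3", "R11", "R21", "R29"],
    "Theme 3: Interdisciplinary Collaboration and Bottom-up Perspectives":
        ["R6", "R7", "R12", "R17", "R18", "R24", "R30"],
    "Theme 4: Ethical Considerations and Terminological Clarity":
        ["R13", "R19", "R20", "R22", "R23", "R26", "R27", "R28"],
    "Theme 5: Structured Competency Models and Curriculum Integration":
        ["R8", "R15", "R16", "R25"],
}


def check_coverage(themes, n_recommendations=30):
    """Verify that every code R1..Rn is assigned to exactly one theme."""
    assigned = [code for codes in themes.values() for code in codes]
    expected = {f"R{i}" for i in range(1, n_recommendations + 1)}
    assert len(assigned) == len(set(assigned)), "A code appears under more than one theme."
    assert set(assigned) == expected, "Coverage gap or unknown code detected."


if __name__ == "__main__":
    check_coverage(THEMES)
    for theme, codes in THEMES.items():
        print(f"{theme}: {', '.join(sorted(codes, key=lambda c: int(c[1:])))}")
```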

4.2.2. The Integrated AI-Education Convergence Framework

In response to the challenges identified in Section 4.1, the Integrated AI-Education Convergence Framework is proposed as a strategic blueprint to guide future research and practice at the contemporary intersections between AI and education (see Figure 6). The framework comprises five interdependent dimensions, each derived from the interpretive synthesis:
  • Adopt Pedagogy-Centric and Contextualized Approaches: Prioritize educational goals in the design and implementation of AI systems. Innovations should be tailored to address the diverse needs of classroom environments and cultural contexts, ensuring that pedagogical relevance remains central.
  • Embed Actionable Insights and Practical Resources: Develop AI tools and curricula that include clear guidelines, practical strategies, and empirically validated resources. These should encompass user-friendly interfaces and comprehensive PD materials to support both educators and learners in real-world educational settings.
  • Leverage Interdisciplinary Collaboration and Bottom-Up Methodologies: Foster sustained collaboration among AI researchers, education specialists, practitioners, and policymakers. Incorporate the voices of teachers and students to ensure that AI learning content is both theoretically robust and practically applicable.
  • Integrate Ethical Considerations and Ensure Terminological Clarity: Embed ethical considerations across AI curricula and PD programs. Employ human-centric terminologies, such as ‘AI-mediated thinking,’ to clarify the respective roles of humans and AI, thereby promoting responsible use and critical engagement.
  • Develop and Validate Hierarchical Competency Models for AI Literacy: Implement structured frameworks, such as those based on the SOLO taxonomy [216], to support the systematic development of AI literacy. These frameworks should integrate theoretical, practical, and ethical dimensions to ensure adaptability in response to evolving technological frontiers.
As illustrated in Figure 6, the Integrated AI-Education Convergence Framework serves as an empirically informed and strategically designed guide for addressing the research challenges outlined in Section 4.1. Central to this framework is the concept of ‘Integrated AI-Education Convergence,’ which denotes the deliberate alignment of AI-driven innovation with educational principles, practices, and policies. Rather than allowing technological advancement to dictate pedagogical direction, this convergence promotes a pedagogy-first orientation—one that embeds ethical awareness, interdisciplinary collaboration, and structured competency development at its core.
Framed through the lens of future research, the framework underscores the need for continuous exploration and refinement. Ensuring that AI’s role in education remains pedagogically responsive and ethically grounded requires sustained scholarly attention, collaborative engagement, and a steadfast commitment to contextual relevance.

4.3. Limitations of the Study

This study is subject to several limitations that may influence the interpretation of its findings. First, the bibliometric data were collected exclusively from two major databases: WOSCC and Scopus. This selective approach was adopted to maintain data integrity and to avoid the formatting inconsistencies that often arise when merging records from multiple, incompatible sources. However, this decision may have resulted in the omission of relevant literature indexed in other databases, potentially affecting the comprehensiveness of the bibliometric-driven and inductive content analysis.
Second, the temporal scope of the study was confined to literature published between 2014 and October 2024. While this range was selected to capture recent developments and trends, it may have excluded earlier works that continue to shape the field. To mitigate this limitation, a co-citation analysis was conducted to identify highly influential publications referenced within the 317 articles analyzed. Many of these co-cited works, published prior to 2014, were subsequently incorporated into the interpretive synthesis to ensure that foundational research was considered in the identification of key issues and gaps. Nonetheless, the exclusion of non-English publications and the reliance on English-language databases may also limit the generalizability of the findings, particularly in regions where English is not the primary language of academic communication.
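As a point of methodological clarification, the logic behind the co-citation counts and total link strengths reported in Appendix B (Table A1) can be illustrated with a short sketch. The following Python fragment is a conceptual illustration under simplified assumptions (the actual analysis was performed with VOSviewer, and the reference identifiers shown here are hypothetical placeholders): two references are co-cited whenever they appear together in the reference list of the same citing article, and an item’s total link strength is the sum of the weights of its co-citation links.

```python
# Conceptual illustration only (the study's co-citation analysis was run in VOSviewer).
# Each citing article in the corpus is reduced to the set of references it cites;
# the reference identifiers below are hypothetical placeholders.
from collections import Counter
from itertools import combinations

corpus = [
    {"ref_A", "ref_B", "ref_C"},   # reference list of citing article 1
    {"ref_A", "ref_B"},            # reference list of citing article 2
    {"ref_B", "ref_C"},            # reference list of citing article 3
]

citations = Counter()         # how many citing articles include each reference
cocitation_links = Counter()  # how many citing articles include each reference pair

for reference_list in corpus:
    citations.update(reference_list)
    for pair in combinations(sorted(reference_list), 2):
        cocitation_links[pair] += 1

# Total link strength of a reference = sum of the weights of its co-citation links.
link_strength = Counter()
for (ref_i, ref_j), weight in cocitation_links.items():
    link_strength[ref_i] += weight
    link_strength[ref_j] += weight

for ref in sorted(citations, key=citations.get, reverse=True):
    print(f"{ref}: cited by {citations[ref]} articles, total link strength {link_strength[ref]}")
```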

5. Conclusions

The present study was conducted to pragmatically elucidate the contemporary intersections between AI and education, with the overarching aim of informing future educational research and practice. Through a comprehensive keyword co-occurrence analysis, four entwined thematic clusters were identified: Cluster 1: Applying AI Techniques to Address Educational Challenges; Cluster 2: Expanding the Role of AI in K–12 Educational Contexts; Cluster 3: Enhancing STEM Education through AI Technologies; and Cluster 4: Preparing Teachers to Teach or Integrate AI in the Classroom. These clusters provided the analytical foundation for a rigorous inductive content analysis, which revealed eleven prevailing research trends: (i) prominent AI techniques in the development of educational applications, (ii) exploring the functionalities of AI techniques in addressing educational tasks, (iii) research trajectories in AIED across diverse educational contexts, (iv) broadening the integration of AI into K–12 educational settings, (v) interactive educational technologies for demystifying AI for K–12 students, (vi) applying AI in K–12 online learning environments, (vii) incorporating AI learning elements to enrich STEM educational contexts, (viii) designing technology-enhanced STEM learning experiences to engage students, (ix) leveraging AIED applications to facilitate STEM teaching and learning, (x) the emerging need to prepare teachers for AI education, and (xi) enhancing teachers’ readiness to integrate AI technologies into the classroom.
Building on these findings, the study identified five interrelated research issues through an interpretive synthesis of the prevailing trends and highly co-cited publications: (1) limited congruence between technological and pedagogical affordances of AIED applications, (2) insufficient bottom-up perspectives in AI literacy frameworks, (3) ambiguous relationship between computational thinking and AI in STEM education, (4) lack of explicit interpretation of AI ethics for educators, and (5) limitations of existing PD frameworks in AI teacher education research.
These issues reflect critical tensions and gaps in the current literature and practice, offering valuable insights for researchers and practitioners seeking to extend the frontiers of this rapidly evolving interdisciplinary field. In response, the study proposed thirty issue-specific recommendations, each designed to address the nuanced challenges embedded within the five research issues.
To consolidate these recommendations into a cohesive vision for future research, the interpretive synthesis process was reiterated. This process revealed five overarching themes that cut across the issue-specific recommendations and informed the development of a strategic response. These themes are: (i) the need to adopt pedagogy-centric and contextualized approaches, (ii) the importance of embedding actionable insights and practical resources, (iii) the value of interdisciplinary collaboration and bottom-up methodologies, (iv) the imperative to integrate ethical considerations and ensure terminological clarity, and (v) the necessity of developing and validating hierarchical competency models for AI literacy.
These five themes were formalized into the Integrated AI-Education Convergence Framework, which offers a strategic blueprint for guiding future research and implementation efforts. At its core, this framework articulates a deliberate convergence between AI-driven innovation and educational principles. It advocates for a pedagogy-first orientation, in which technological tools are designed and deployed in service of clearly defined learning objectives. Ethical awareness is embedded as a foundational element, ensuring that AI integration is critically examined and responsibly enacted. Interdisciplinary collaboration is positioned as a catalyst for innovation, while structured competency models, such as the SOLO taxonomy, provide a scaffold for systematically developing AI literacy across educational levels. By integrating these dimensions, the framework highlights the complex, contextual, and ethical demands of contemporary AI-Education integration.
This study thus contributes a coherent and empirically grounded response to the challenges identified in the literature, offering a framework that is both theoretically robust and practically applicable. The Integrated AI-Education Convergence Framework is not intended as a static solution but rather as a dynamic guide for continuous refinement. It invites researchers, educators, and policymakers to engage in ongoing dialogue, experimentation, and adaptation, ensuring that AI’s role in education remains responsive to evolving pedagogical priorities and ethical imperatives.
Additionally, it is noteworthy that the temporal patterns observed (see Figure 2) reveal a pronounced surge in AI-Education research output since 2018, an upward trajectory that is likely to persist into 2025 and beyond. While this projection may seem unsurprising given broader technological developments, it nonetheless underscores the urgency and significance of the recommendations proposed in this study. The accelerating pace of research indicates that the field will continue to evolve rapidly, necessitating ongoing monitoring of emerging trends and proactive adaptation of research agendas. Future studies should remain attentive to these developments, ensuring that scholarly inquiry and practical implementation keep pace with the expanding scope and complexity of AI integration in education.
In conclusion, this study offers a comprehensive and methodologically rigorous account of the contemporary intersections between AI and education. By integrating bibliometrics and inductive content analysis, it provides both a diagnostic overview and a strategic framework to inform future inquiry. The Integrated AI-Education Convergence Framework stands as a timely and actionable contribution—one that seeks to inform theory and practice, foregrounds pedagogy and ethics, and charts a path for more contextually grounded, interdisciplinary, and sustainable AI integration in education.

Author Contributions

M.A. led the study’s conceptualization, design, and protocol development, coordinated data retrieval and screening, directed both bibliometric and qualitative analyses, synthesized findings, and took primary responsibility for drafting and finalizing the manuscript. M.M. (Ming Ma) contributed to data collection, participated in the content analysis, and assisted with synthesis and writing. M.M. (Mian Muneeb) supported data retrieval and organization. G.K.W.W. provided supervision and strategic guidance, ensuring methodological rigor and integrity throughout the project. All authors have read and agreed to the published version of the manuscript.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The authors confirm that all data supporting the findings of this study are available within the article and its appendices. The raw bibliometric data underlying this study can be retrieved from the WOSCC and Scopus databases.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AAAI: Association for the Advancement of Artificial Intelligence
AI: Artificial intelligence
AIED: Artificial intelligence in education
ANN: Artificial neural networks
AR: Augmented reality
CNN: Convolutional neural networks
CPU: Central processing unit
CS: Computer science
CSTA: Computer Science Teachers Association
CT: Computational thinking
DART: Decibel Analysis for Research in Teaching
DL: Deep learning
DLNN: Deep learning neural networks
DT: Decision trees
GPU: Graphics processing unit
intelligent-TK: Intelligent technology knowledge
ITS: Intelligent tutoring systems
KNN: k-nearest neighbors
KSA: Knowledge, skills, and attitudes
LLM: Large language model
LR: Logistic regression
ML: Machine learning
NB: Naïve Bayes
NLP: Natural language processing
PD: Professional development
PRC: People’s Republic of China
PTM: Pre-trained model
RQ: Research question
SLBAS: STEM Learning Behavior Analysis System
SOLO: Structure of the Observed Learning Outcome
SVM: Support vector machines
TAM: Technology acceptance model
TCK: Technological content knowledge
TK: Technological knowledge
TPACK: Technological, pedagogical, and content knowledge
TPK: Technological pedagogical knowledge
VR: Virtual reality
WOSCC: Web of Science Core Collection
xAI: Explainable artificial intelligence

Appendix A

Figure A1. Review process schematics.

Appendix B

Table A1. Highly co-cited publications.
| No. | Publication Title | Year | Co-Citations | Link Strength |
|---|---|---|---|---|
| 1 | Systematic Review of Research on Artificial Intelligence Applications in Higher Education—Where Are the Educators? | 2019 | 26 | 94 |
| 2 | Sustainable Curriculum Planning for Artificial Intelligence Education: A Self-Determination Theory Perspective | 2020 | 20 | 89 |
| 3 | What is AI Literacy? Competencies and Design Considerations | 2020 | 18 | 72 |
| 4 | Envisioning AI for K–12: What Should Every Child Know about AI? | 2019 | 16 | 91 |
| 5 | Artificial Intelligence in Education: A Review | 2020 | 15 | 37 |
| 6 | Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology | 1989 | 13 | 58 |
| 7 | Artificial Intelligence in Education: Promise and Implications for Teaching and Learning | 2019 | 13 | 53 |
| 8 | Intelligence Unleashed: An Argument for AI in Education | 2016 | 12 | 45 |
| 9 | Exploring the Impact of Artificial Intelligence on Teaching and Learning in Higher Education | 2017 | 12 | 41 |
| 10 | Focusing on Teacher Education to Introduce AI in Schools: Perspectives and Illustrative Findings | 2019 | 12 | 36 |
| 11 | The Measurement of Observer Agreement for Categorical Data | 1977 | 11 | 66 |
| 12 | Creation and Evaluation of a Pretertiary Artificial Intelligence (AI) Curriculum | 2022 | 11 | 53 |
| 13 | Knowledge Tracing: Modeling the Acquisition of Procedural Knowledge | 1994 | 11 | 46 |
| 14 | Vision, Challenges, Roles, and Research Issues of Artificial Intelligence in Education | 2020 | 10 | 43 |
| 15 | Evolution and Revolution in Artificial Intelligence in Education | 2016 | 10 | 37 |
| 16 | The Relative Effectiveness of Human Tutoring, Intelligent Tutoring Systems, and Other Tutoring Systems | 2011 | 10 | 34 |
| 17 | Learning Machine Learning with Very Young Children: Who is Teaching Whom? | 2020 | 10 | 11 |
| 18 | Applying the Self-Determination Theory (SDT) to Explain Student Engagement in Online Learning during the COVID-19 Pandemic | 2021 | 9 | 62 |
| 19 | The Power of Feedback | 2007 | 9 | 55 |
| 20 | Teaching Machine Learning in School: A Systematic Mapping of the State of the Art | 2020 | 9 | 48 |
| 21 | Technological Pedagogical Content Knowledge: A Framework for Teacher Knowledge | 2020 | 9 | 36 |
| 22 | Chatbots for Learning: A Review of Educational Chatbots for the Facebook Messenger | 2020 | 9 | 30 |
| 23 | A Year in K–12 AI Education | 2020 | 8 | 68 |
| 24 | A is for Artificial Intelligence: The Impact of Artificial Intelligence Activities on Young Children’s Perceptions of Robots | 2020 | 8 | 63 |
| 25 | Random Forests | 2001 | 8 | 58 |
| 26 | IRobot: Teaching the Basics of Artificial Intelligence in High Schools | 2016 | 8 | 56 |
| 27 | Digital Support for Student Engagement in Blended Learning Based on Self-determination Theory | 2021 | 8 | 52 |
| 28 | Promoting Students’ Well-Being by Developing Their Readiness for the Artificial Intelligence Age | 2020 | 7 | 50 |
| 29 | Gentle Introduction to Artificial Intelligence for High-School Students Using Scratch | 2019 | 7 | 48 |
| 30 | Why Are We Not Teaching Machine Learning at High School? A Proposal | 2018 | 7 | 44 |
| 31 | The ASSISTments Ecosystem: Building a Platform that Brings Scientists and Teachers Together for Minimally Invasive Research on Human Learning and Teaching | 2014 | 7 | 37 |
| 32 | Aims for Cultivating Students’ Key Competencies Based on Artificial Intelligence Education in China | 2021 | 7 | 33 |
| 33 | Dropout Prediction in E-learning Courses Through the Combination of Machine Learning Techniques | 2009 | 7 | 31 |
| 34 | Conceptualizing AI Literacy: An Exploratory Review | 2021 | 7 | 28 |
| 35 | Scikit-learn: Machine Learning in Python | 2011 | 6 | 25 |
| 36 | R: A Language and Environment for Statistical Computing | 2022 | 6 | 19 |
| 37 | Designing One-Year Curriculum to Teach Artificial Intelligence for Middle School | 2020 | 6 | 13 |
| 38 | Co-Designing Machine Learning Apps in K–12 With Primary School Children | 2020 | 6 | 10 |
| 39 | Artificial Intelligence Education for Young Children: Why, What, and How in Curriculum Design and Implementation | 2022 | 6 | 9 |
| 40 | Applying Machine Learning in Science Assessment: A Systematic Review | 2020 | 6 | 6 |
| 41 | Youth Learning Machine Learning Through Building Models of Athletic Moves | 2019 | 6 | 4 |

References

  1. Doroudi, S. The Intertwined Histories of Artificial Intelligence and Education. Int. J. Artif. Intell. Educ. 2022, 33, 885–928. [Google Scholar] [CrossRef]
  2. NVIDIA. Deep Learning with GPUs; Nvidia Corporation: Santa Clara, CA, USA, 2014. [Google Scholar]
  3. NVIDIA. GPU-Based Deep Learning Inference: A Performance and Power Analysis; Nvidia Corporation: Santa Clara, CA, USA, 2015. [Google Scholar]
  4. NVIDIA. GPU Technology Conference 2014; Nvidia Corporation: Santa Clara, CA, USA, 2014. [Google Scholar]
  5. Zafari, M.; Bazargani, J.S.; Sadeghi-Niaraki, A.; Choi, S.-M. Artificial Intelligence Applications in K-12 Education: A Systematic Literature Review. IEEE Access 2022, 10, 61905–61921. [Google Scholar] [CrossRef]
  6. Salas-Pilco, S.Z.; Yang, Y. Artificial intelligence applications in Latin American higher education: A systematic review. Int. J. Educ. Technol. High. Educ. 2022, 19, 21. [Google Scholar] [CrossRef]
  7. OECD. Future of Education and Skills 2030: Conceptual Learning Framework; Organization for Economic Cooperation and Development (OECD): Paris, France, 2018. [Google Scholar]
  8. Touretzky, D.; Gardner-McCune, C.; Martin, F.; Seehorn, D. Envisioning AI for K-12: What Should Every Child Know about AI? In Proceedings of the 33th AAAI Conference on Artificial Intelligence (AAAI 2019), Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 9795–9799. [Google Scholar] [CrossRef]
  9. McCarthy, J.; Minsky, M.L.; Rochester, N.; Shannon, C.E. A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, 31 August 1955. AI Mag. 2006, 27, 12–14. [Google Scholar] [CrossRef]
  10. Newell, A.; Simon, H.A. Computer science as empirical inquiry: Symbols and search. Commun. ACM 1976, 19, 113–126. [Google Scholar] [CrossRef]
  11. Mira, J.M. Symbols versus connections: 50 years of artificial intelligence. Neurocomputing 2008, 71, 671–680. [Google Scholar] [CrossRef]
  12. Weizenbaum, J. ELIZA—A computer program for the study of natural language communication between man and machine. Commun. ACM 1966, 26, 23–28. [Google Scholar] [CrossRef]
  13. Carbonell, J.R. Mixed-Initiative Man-Computer Instructional Dialogues: Final Report; Bolt Beranek and Newman: Cambridge, MA, USA, 1970. [Google Scholar]
  14. Dede, C.; Swigger, K. The Evolution of Instructional Design Principles for Intelligent Computer-Assisted Instruction. J. Instr. Dev. 1988, 11, 15–22. [Google Scholar]
  15. Luckin, R.; Cukurova, M.; Kent, C.; du Boulay, B. Empowering educators to be AI-ready. Comput. Educ. Artif. Intell. 2022, 3, 100076. [Google Scholar] [CrossRef]
  16. Papert, S.; Harel, I. Constructionism; Ablex Publishing: Norwood, NJ, USA, 1991; pp. xi, 518. [Google Scholar]
  17. Lodi, M.; Martini, S. Computational Thinking, Between Papert and Wing. Sci. Educ. 2021, 30, 883–908. [Google Scholar] [CrossRef]
  18. McCarthy, J. History of LISP. ACM SIGPLAN Not. 1978, 13, 217–223. [Google Scholar] [CrossRef]
  19. Solomon, C.; Harvey, B.; Kahn, K.; Lieberman, H.; Miller, M.L.; Minsky, M.; Papert, A.; Silverman, B. History of Logo. Proc. 2020 ACM Program. Lang. (PACMPL) 2020, 4, 79. [Google Scholar] [CrossRef]
  20. Frana, P.L.; Klein, M.J. Encyclopedia of Artificial Intelligence: The Past, Present, and Future of AI, 1st ed.; ABC-CLIO, LLC: Santa Barbara, CA, USA, 2021. [Google Scholar]
  21. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  22. Shrestha, A.; Mahmood, A. Review of Deep Learning Algorithms and Architectures. IEEE Access 2019, 7, 53040–53065. [Google Scholar] [CrossRef]
  23. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Kai, L.; Li, F.-F. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Miami, FL, USA, 20–25 June 2009; Volume 18, pp. 248–255. [Google Scholar] [CrossRef]
  24. Socher, R.; Perelygin, A.; Wu, J.; Chuang, J.; Manning, C.D.; Ng, A.Y.; Potts, C. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP), Seattle, WA, USA, 18–21 October 2013; Volume 17, pp. 1631–1642. [Google Scholar]
  25. Russell, S.J.; Norvig, P. Artificial Intelligence a Modern Approach; Pearson: London, UK, 2010. [Google Scholar]
  26. Wang, Y.E.; Wei, G.-Y.; Brooks, D. Benchmarking TPU, GPU, and CPU platforms for deep learning. arXiv 2019, arXiv:1907.10701. [Google Scholar] [CrossRef]
  27. Xu, M.; David, J.M.; Kim, S.H. The fourth industrial revolution: Opportunities and challenges. Int. J. Financ. Res. 2018, 9, 90–95. [Google Scholar] [CrossRef]
  28. CSTA. K–12 Computer Science Standards; Computer Science Teachers Association (CSTA): New York, NY, USA, 2018. [Google Scholar]
  29. Wang, P. On Defining Artificial Intelligence. J. Artif. Gen. Intell. 2019, 10, 1–37. [Google Scholar] [CrossRef]
  30. Bringsjord, S.; Govindarajulu, N.S. What Exactly Is AI? Stanford University Press: Redwood City, CA, USA, 2023. [Google Scholar]
  31. AI-HLEG. A Definition of AI: Main Capabilities and Scientific Disciplines; European Commission’s High-Level Expert Group on Artificial Intelligence (AI-HLEG): Brussels, Belgium, 2019. [Google Scholar]
  32. Luckin, R.; Holmes, W.; Griffiths, M.; Forcier, L.B. Intelligence Unleashed: An Argument for AI in Education; Pearson: London, UK, 2016. [Google Scholar]
  33. Feng, S.; Law, N. Mapping Artificial Intelligence in Education Research: A Network—Based Keyword Analysis. Int. J. Artif. Intell. Educ. 2021, 31, 277–303. [Google Scholar] [CrossRef]
  34. Schiff, D. Education for AI, not AI for Education: The Role of Education and Ethics in National AI Policy Strategies. Int. J. Artif. Intell. Educ. 2022, 32, 527–563. [Google Scholar] [CrossRef]
  35. UNICEF. Policy Guidance on AI for Children 2.0; United Nations Children’s Fund (UNICEF): Helsinki, Finland, 2021. [Google Scholar]
  36. Kong, S.-C.; Korte, S.-M.; Burton, S.; Keskitalo, P.; Turunen, T.; Smith, D.; Wang, L.; Lee, J.C.-K.; Beaton, M.C. Artificial Intelligence (AI) literacy—An argument for AI literacy in education. Innov. Educ. Teach. Int. 2024, 622, 477–483. [Google Scholar] [CrossRef]
  37. Opel, S.; Schlichtig, M.; Schulte, C. Developing teaching materials on artificial intelligence by using a simulation game (work in progress). In Proceedings of the 14th Workshop in Primary and Secondary Computing Education (WiPSCE 2019), Glasgow, UK, 23–25 October 2019; Volume 14, pp. 1–2. [Google Scholar] [CrossRef]
  38. Sabuncuoglu, A. Designing one year curriculum to teach artificial intelligence for middle school. In Proceedings of the 2020 ACM Conference on Innovation and Technology in Computer Science Education (ITiCSE), Trondheim, Norway, 15–19 June 2020; Volume 25, pp. 96–102. [Google Scholar] [CrossRef]
  39. Shamir, G.; Levin, I. Transformations of computational thinking practices in elementary school on the base of artificial intelligence technologies. In Proceedings of the 12th International Conference on Education and New Learning Technologies (EDULEARN 2020), Palma de Mallorca, Spain, 6–7 July 2020; Volume 12, pp. 1596–1605. [Google Scholar] [CrossRef]
  40. Song, J.; Yu, J.; Yan, L.; Zhang, L.; Liu, B.; Zhang, Y.; Lu, Y. Develop AI teaching and learning resources for compulsory education in China. In Proceedings of the 37th AAAI Conference on Artificial Intelligence (AAAI 2023), Washington, DC, USA, 7–14 February 2023; Volume 37, pp. 16033–16039. [Google Scholar] [CrossRef]
  41. Dwivedi, Y.K.; Kshetri, N.; Hughes, L.; Slade, E.L.; Jeyaraj, A.; Kar, A.K.; Baabdullah, A.M.; Koohang, A.; Raghavan, V.; Ahuja, M.; et al. Opinion Paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int. J. Inf. Manag. 2023, 71, 102642. [Google Scholar] [CrossRef]
  42. Sætra, H.S. Generative AI: Here to stay, but for good? Technol. Soc. 2023, 75, 102372. [Google Scholar] [CrossRef]
  43. Fenwick, M.; Jurcys, P. Originality and the future of copyright in an age of generative AI. Comput. Law Secur. Rev. 2023, 51, 105892. [Google Scholar] [CrossRef]
  44. Yan, L.; Sha, L.; Zhao, L.; Li, Y.; Martinez—Maldonado, R.; Chen, G.; Li, X.; Jin, Y.; Gašević, D. Practical and ethical challenges of large language models in education: A systematic scoping review. Br. J. Educ. Technol. 2024, 55, 90–112. [Google Scholar] [CrossRef]
  45. Crompton, H.; Jones, M.V.; Burke, D. Affordances and challenges of artificial intelligence in K-12 education: A systematic review. J. Res. Technol. Educ. 2022, 56, 248–268. [Google Scholar] [CrossRef]
  46. Ordoñez-Avila, R.; Salgado Reyes, N.; Meza, J.; Ventura, S. Data mining techniques for predicting teacher evaluation in higher education: A systematic literature review. Heliyon 2023, 9, e13939. [Google Scholar] [CrossRef]
  47. Sanusi, I.T.; Oyelere, S.S.; Vartiainen, H.; Suhonen, J.; Tukiainen, M. A systematic review of teaching and learning machine learning in K-12 education. Educ. Inf. Technol. 2022, 28, 5967–5997. [Google Scholar] [CrossRef]
  48. Yim, I.H.Y.; Su, J. Artificial intelligence (AI) learning tools in K-12 education: A scoping review. J. Comput. Educ. 2024, 12, 93–131. [Google Scholar] [CrossRef]
  49. Laupichler, M.C.; Aster, A.; Schirch, J.; Raupach, T. Artificial intelligence literacy in higher and adult education: A scoping literature review. Comput. Educ. Artif. Intell. 2022, 3, 100101. [Google Scholar] [CrossRef]
  50. Ninkov, A.; Frank, J.R.; Maggio, L.A. Bibliometrics: Methods for studying academic publishing. Perspect. Med. Educ. 2022, 11, 173–176. [Google Scholar] [CrossRef] [PubMed]
  51. Pritchard, A. Statistical Bibliography or Bibliometrics? J. Doc. 1969, 25, 348–349. [Google Scholar]
  52. Liao, H.; Tang, M.; Luo, L.; Li, C.; Chiclana, F.; Zeng, X.-J. A Bibliometric Analysis and Visualization of Medical Big Data Research. Sustainability 2018, 10, 166. [Google Scholar] [CrossRef]
  53. OECD. OECD Glossary of Statistical Terms; Organization for Economic Cooperation and Development (OECD): Paris, France, 2008. [Google Scholar] [CrossRef]
  54. Caputo, A.; Kargina, M. A user-friendly method to merge Scopus and Web of Science data during bibliometric analysis. J. Mark. Anal. 2022, 10, 82–88. [Google Scholar] [CrossRef]
  55. Echchakoui, S. Why and how to merge Scopus and Web of Science during bibliometric analysis: The case of sales force literature from 1912 to 2019. J. Mark. Anal. 2020, 8, 165–184. [Google Scholar] [CrossRef]
  56. Kumpulainen, M.; Seppänen, M. Combining Web of Science and Scopus datasets in citation-based literature study. Scientometrics 2022, 127, 5613–5631. [Google Scholar] [CrossRef]
  57. Martín-Martín, A.; Orduna-Malea, E.; Thelwall, M.; Delgado López-Cózar, E. Google Scholar, Web of Science, and Scopus: A systematic comparison of citations in 252 subject categories. J. Informetr. 2018, 12, 1160–1177. [Google Scholar] [CrossRef]
  58. Birkle, C.; Pendlebury, D.A.; Schnell, J.; Adams, J. Web of Science as a data source for research on scientific and scholarly activity. Quant. Sci. Stud. 2020, 1, 363–376. [Google Scholar] [CrossRef]
  59. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. Syst. Rev. 2021, 10, 89. [Google Scholar] [CrossRef]
  60. Rethlefsen, M.L.; Kirtley, S.; Waffenschmidt, S.; Ayala, A.P.; Moher, D.; Page, M.J.; Koffel, J.B. PRISMA-S: An extension to the PRISMA Statement for Reporting Literature Searches in Systematic Reviews. Syst. Rev. 2021, 10, 39. [Google Scholar] [CrossRef]
  61. Siddaway, A.P.; Wood, A.M.; Hedges, L.V. How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-Analyses, and Meta-Syntheses. Annu. Rev. Psychol. 2019, 70, 747–770. [Google Scholar] [CrossRef]
  62. Donthu, N.; Kumar, S.; Mukherjee, D.; Pandey, N.; Lim, W.M. How to conduct a bibliometric analysis: An overview and guidelines. J. Bus. Res. 2021, 133, 285–296. [Google Scholar] [CrossRef]
  63. Ali, M.; Tse, A.W.C. Research Trends and Issues of Engineering Design Process for STEM Education in K-12: A Bibliometric Analysis. Int. J. Educ. Math. Sci. Technol. 2023, 11, 695–727. [Google Scholar] [CrossRef]
  64. Thomas, D. A General Inductive Approach for Qualitative Data Analysis. Am. J. Eval. 2003, 27, 237–246. [Google Scholar] [CrossRef]
  65. Merriam, S.B.; Tisdell, E.J. Qualitative Data Analysis. In Qualitative Research: A Guide to Design and Implementation, 4th ed.; John Wiley & Sons: Newark, NJ, USA, 2015; pp. 195–236. [Google Scholar]
  66. Seuring, S.; Gold, S. Conducting content-analysis based literature reviews in supply chain management. Supply Chain Manag. 2012, 17, 544–555. [Google Scholar] [CrossRef]
  67. Chatha, K.A.; Butt, I.; Tariq, A. Research methodologies and publication trends in manufacturing strategy: A content analysis based literature review. Int. J. Oper. Prod. Manag. 2015, 35, 487–546. [Google Scholar] [CrossRef]
  68. Van Eck, N.J.; Waltman, L. Software survey: VOSviewer, a computer program for bibliometric mapping. Scientometrics 2010, 84, 523–538. [Google Scholar] [CrossRef]
  69. Van Eck, N.J.; Waltman, L. The VOSviewer Manual; Univeristeit Leiden: Leiden, The Netherlands, 2018. [Google Scholar]
  70. Yu, Y.; Li, Y.; Zhang, Z.; Gu, Z.; Zhong, H.; Zha, Q.; Yang, L.; Zhu, C.; Chen, E. A bibliometric analysis using VOSviewer of publications on COVID-19. Ann. Transl. Med. 2020, 8, 816. [Google Scholar] [CrossRef]
  71. Kirby, A. Exploratory Bibliometrics: Using VOSviewer as a Preliminary Research Tool. Publications 2023, 11, 10. [Google Scholar] [CrossRef]
  72. D’Mello, S.K. Giving Eyesight to the Blind: Towards Attention-Aware AIED. Int. J. Artif. Intell. Educ. 2016, 26, 645–659. [Google Scholar] [CrossRef]
  73. Sosnovsky, S.; Brusilovsky, P. Evaluation of Topic-based Adaptation and Student Modeling in QuizGuide. User Model. User-Adapt. Interact. 2015, 25, 371–424. [Google Scholar] [CrossRef]
  74. Tosi, Z.; Yoshimi, J. Simbrain 3.0: A flexible, visually-oriented neural network simulator. Neural Netw. 2016, 83, 1–10. [Google Scholar] [CrossRef]
  75. Nye, B.D.; Graesser, A.C.; Hu, X. AutoTutor and Family: A Review of 17 Years of Natural Language Tutoring. Int. J. Artif. Intell. Educ. 2014, 24, 427–469. [Google Scholar] [CrossRef]
  76. NVIDIA. NVIDIA Launches Revolutionary Volta GPU Platform, Fueling Next Era of AI and High Performance Computing; Nvidia Corporation: Santa Clara, CA, USA, 2017. [Google Scholar]
  77. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention Is All You Need. In Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017; Volume 31, pp. 6000–6010. [Google Scholar]
  78. Gillani, N.; Eynon, R.; Chiabaut, C.; Finkel, K. Unpacking the “Black Box” of AI in Education. J. Educ. Technol. Soc. 2023, 26, 99–111. [Google Scholar] [CrossRef]
  79. Suh, S.C.; Anusha Upadhyaya, B.N.; Ashwin, N.N.V. Analyzing Personality Traits and External Factors for Stem Education Awareness using Machine Learning. Int. J. Adv. Comput. Sci. Appl. 2019, 10, 1–4. [Google Scholar] [CrossRef]
  80. Sailer, M.; Ninaus, M.; Huber, S.E.; Bauer, E.; Greiff, S. The End is the Beginning is the End: The closed-loop learning analytics framework. Comput. Hum. Behav. 2024, 158, 108305. [Google Scholar] [CrossRef]
  81. Xu, S.; Sze, S. Enhancing University Performance Evaluation through Digital Technology: A Deep Learning Approach for Sustainable Development. J. Knowl. Econ. 2024, 15, 20578–20594. [Google Scholar] [CrossRef]
  82. Ong, A.K.S.; Cuales, J.C.; Custodio, J.P.F.; Gumasing, E.Y.J.; Pascual, P.N.A.; Gumasing, M.J.J. Investigating Preceding Determinants Affecting Primary School Students Online Learning Experience Utilizing Deep Learning Neural Network. Sustainability 2023, 15, 3517. [Google Scholar] [CrossRef]
  83. Musso, M.F.; Cascallar, E.C.; Bostani, N.; Crawford, M. Identifying Reliable Predictors of Educational Outcomes Through Machine-Learning Predictive Modeling. Front. Educ. 2020, 5, 104. [Google Scholar] [CrossRef]
  84. Wu, C.-H.; Hung, C.-H.; Shen, C.-C.; Yu, J.-H. Enhancing the Willingness of Adopting AI in Education Using Back-propagation Neural Networks. Sens. Mater. 2024, 36, 905–917. [Google Scholar] [CrossRef]
  85. Christou, V.; Tsoulos, I.; Loupas, V.; Tzallas, A.T.; Gogos, C.; Karvelis, P.S.; Antoniadis, N.; Glavas, E.; Giannakeas, N. Performance and early drop prediction for higher education students using machine learning. Expert Syst. Appl. 2023, 225, 120079. [Google Scholar] [CrossRef]
  86. Poudyal, S.; Mohammadi-Aragh, M.J.; Ball, J.E. Prediction of Student Academic Performance Using a Hybrid 2D CNN Model. Electronics 2022, 11, 1005. [Google Scholar] [CrossRef]
  87. Hooshyar, D.; Azevedo, R.; Yang, Y. Augmenting Deep Neural Networks with Symbolic Educational Knowledge: Towards Trustworthy and Interpretable AI for Education. Mach. Learn. Knowl. Extr. 2024, 6, 593–618. [Google Scholar] [CrossRef]
  88. Wulff, P.; Westphal, A.; Mientus, L.; Nowak, A.; Borowski, A. Enhancing writing analytics in science education research with machine learning and natural language processing—Formative assessment of science and non-science preservice teachers’ written reflections. Front. Educ. 2023, 7, 1061461. [Google Scholar] [CrossRef]
  89. Emenike, M.E.; Emenike, B.U. Was This Title Generated by ChatGPT? Considerations for Artificial Intelligence Text-Generation Software Programs for Chemists and Chemistry Educators. J. Chem. Educ. 2023, 100, 1413–1418. [Google Scholar] [CrossRef]
  90. Killian, C.; Marttinen, R.; Howley, D.; Sargent, J.; Jones, E. “Knock, Knock: Who’s There?” ChatGPT and Artificial Intelligence-Powered Large Language Models: Reflections on Potential Impacts Within Health and Physical Education Teacher Education. J. Teach. Phys. Educ. 2023, 42, 385–389. [Google Scholar] [CrossRef]
  91. Rejeb, A.; Rejeb, K.; Appolloni, A.; Treiblmaier, H.; Iranmanesh, M. Exploring the impact of ChatGPT on education: A web mining and machine learning approach. Int. J. Manag. Educ. 2024, 22, 100932. [Google Scholar] [CrossRef]
  92. Mientus, L.; Wulff, P.; Nowak, A.; Borowski, A. Fast-and-frugal means to assess reflection-related reasoning processes in teacher training—Development and evaluation of a scalable machine learning-based metric. Z. Erzieh. 2023, 26, 677–702. [Google Scholar] [CrossRef]
  93. Wulff, P.; Buschhüter, D.; Westphal, A.; Nowak, A.; Becker, L.; Robalino, H.; Stede, M.; Borowski, A. Computer-Based Classification of Preservice Physics Teachers’ Written Reflections. J. Sci. Educ. Technol. 2021, 30, 1–15. [Google Scholar] [CrossRef]
  94. Zhang, W.; Yan, R.; Yuan, L. How Generative AI Was Mentioned in Social Media and Academic Field? A Text Mining Based on Internet Text Data. IEEE Access 2024, 12, 43940–43947. [Google Scholar] [CrossRef]
  95. Demszky, D.; Liu, J.; Hill, H.C.; Jurafsky, D.; Piech, C. Can Automated Feedback Improve Teachers’ Uptake of Student Ideas? Evidence From a Randomized Controlled Trial in a Large-Scale Online Course. Educ. Eval. Policy Anal. 2023, 46, 483–505. [Google Scholar] [CrossRef]
  96. Koutcheme, C.; Dainese, N.; Sarsa, S.; Hellas, A.; Leinonen, J.; Denny, P. Open Source Language Models Can Provide Feedback: Evaluating LLMs’ Ability to Help Students Using GPT-4-As-A-Judge. In Proceedings of the 2024 Innovation and Technology in Computer Science Education (ITiCSE), Milan, Italy, 5–7 July 2024; Volume 31, pp. 52–58. [Google Scholar] [CrossRef]
  97. Gomez, A.; Pattichis, M.S.; Celedón-Pattichis, S. Speaker Diarization and Identification From Single Channel Classroom Audio Recordings Using Virtual Microphones. IEEE Access 2022, 10, 56256–56266. [Google Scholar] [CrossRef]
  98. Tai, T.-Y.; Chen, H.H.-J. Improving elementary EFL speaking skills with generative AI chatbots: Exploring individual and paired interactions. Comput. Educ. 2024, 220, 105112. [Google Scholar] [CrossRef]
  99. Lyu, B.; Lai, C.; Guo, J. Effectiveness of Chatbots in Improving Language Learning: A Meta—Analysis of Comparative Studies. Int. J. Appl. Linguist. 2025, 35, 834–851. [Google Scholar] [CrossRef]
  100. Li, Y.; Liu, H.; Wald, M. DeepVision: Heads-up Computing and AI in Education. In Proceedings of the 2024 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp), Melbourne, Australia, 5–9 October 2024; Volume 26, pp. 627–630. [Google Scholar] [CrossRef]
  101. Wang, X.; Zhang, L.; He, T. Learning Performance Prediction-Based Personalized Feedback in Online Learning via Machine Learning. Sustainability 2022, 14, 7654. [Google Scholar] [CrossRef]
  102. Yousafzai Bashir, K.; Hayat, M.; Sher, A. Application of machine learning and data mining in predicting the performance of intermediate and secondary education level student. Educ. Inf. Technol. 2020, 25, 4677–4697. [Google Scholar] [CrossRef]
  103. Nianfan, P. Performance evaluation of English learning through computer mode using neural network and AI techniques. J. Intell. Fuzzy Syst. 2021, 40, 6949–6959. [Google Scholar] [CrossRef]
  104. Fatima, S.S.; Sheikh, N.A.; Osama, A. Authentic assessment in medical education: Exploring AI integration and student-as-partners collaboration. Postgrad. Med. J. 2024, 100, 959–967. [Google Scholar] [CrossRef] [PubMed]
  105. Wang, Z.; Hou, Y.; Zeng, C.; Zhang, S.; Ye, R. Multiple Learning Features–Enhanced Knowledge Tracing Based on Learner–Resource Response Channels. Sustainability 2023, 15, 9427. [Google Scholar] [CrossRef]
  106. Lyu, L.; Wang, Z.; Yun, H.; Yang, Z.; Li, Y. Deep Knowledge Tracing Based on Spatial and Temporal Representation Learning for Learning Performance Prediction. Appl. Sci. 2022, 12, 7188. [Google Scholar] [CrossRef]
  107. Hooshyar, D. Temporal learner modelling through integration of neural and symbolic architectures. Educ. Inf. Technol. 2024, 29, 1119–1146. [Google Scholar] [CrossRef]
  108. Barthakur, A.; Joksimovic, S.; Kovanovic, V.; Mello, R.F.; Taylor, M.; Richey, M.; Pardo, A. Understanding Depth of Reflective Writing in Workplace Learning Assessments Using Machine Learning Classification. IEEE Trans. Learn. Technol. 2022, 15, 567–578. [Google Scholar] [CrossRef]
  109. Ullmann, T.D. Automated Analysis of Reflection in Writing: Validating Machine Learning Approaches. Int. J. Artif. Intell. Educ. 2019, 29, 217–257. [Google Scholar] [CrossRef]
  110. Jho, H.; Ha, M. Towards Effective Argumentation: Design and Implementation of a Generative AI-based Evaluation and Feedback System. J. Balt. Sci. Educ. 2024, 23, 280–291. [Google Scholar] [CrossRef]
  111. Queiroga, E.M.; Batista Machado, M.F.; Paragarino, V.R.; Primo, T.T.; Cechinel, C. Early Prediction of At-Risk Students in Secondary Education: A Countrywide K-12 Learning Analytics Initiative in Uruguay. Information 2022, 13, 401. [Google Scholar] [CrossRef]
  112. Krüger, J.G.C.; Britto, A.d.S.; Barddal, J.P. An explainable machine learning approach for student dropout prediction. Expert Syst. Appl. 2023, 233, 120933. [Google Scholar] [CrossRef]
  113. Mnyawami, Y.N.; Maziku, H.H.; Mushi, J.C. Enhanced Model for Predicting Student Dropouts in Developing Countries Using Automated Machine Learning Approach: A Case of Tanzanian’s Secondary Schools. Appl. Artif. Intell. 2022, 36, 2071406. [Google Scholar] [CrossRef]
  114. Fernández-García, A.J.; Preciado, J.C.; Melchor, F.; Rodriguez-Echeverria, R.; Conejero, J.M.; Sánchez-Figueroa, F. A Real-Life Machine Learning Experience for Predicting University Dropout at Different Stages Using Academic Data. IEEE Access 2021, 9, 133076–133090. [Google Scholar] [CrossRef]
  115. Wang, Z.; Yan, W.; Zeng, C.; Tian, Y.; Shi, D. A Unified Interpretable Intelligent Learning Diagnosis Framework for Learning Performance Prediction in Intelligent Tutoring Systems. Int. J. Intell. Syst. 2023, 2023, 4468025. [Google Scholar] [CrossRef]
  116. Jescovitch, L.N.; Scott, E.E.; Cerchiara, J.A.; Merrill, J.; Urban-Lurain, M.; Doherty, J.H.; Haudek, K.C. Comparison of Machine Learning Performance Using Analytic and Holistic Coding Approaches Across Constructed Response Assessments Aligned to a Science Learning Progression. J. Sci. Educ. Technol. 2021, 30, 150–167. [Google Scholar] [CrossRef]
  117. Falla-Falcón, N.; López-Meneses, E.; Ramírez-Fernández, M.-B.; Vázquez-Cano, E. Graphic Model of Virtual Teaching Supervision through Fuzzy Logic in Non-University Educational Centers. Int. J. Environ. Res. Public Health 2022, 19, 16533. [Google Scholar] [CrossRef]
  118. Phanichraksaphong, V.; Tsai, W.-H. Automatic Evaluation of Piano Performances for STEAM Education. Appl. Sci. 2021, 11, 11783. [Google Scholar] [CrossRef]
  119. Zhang, X.; Yang, L. A Convolutional Neural Network-Based Predictive Model for Assessing the Learning Effectiveness of Online Courses Among College Students. Int. J. Adv. Comput. Sci. Appl. 2024, 15, 509–515. [Google Scholar] [CrossRef]
  120. Kim, J.; Lee, H.; Cho, Y.H. Learning design to support student-AI collaboration: Perspectives of leading teachers for AI in education. Educ. Inf. Technol. 2022, 27, 6069–6104. [Google Scholar] [CrossRef]
  121. Chiu, T.K.F. The impact of Generative AI (GenAI) on practices, policies and research direction in education: A case of ChatGPT and Midjourney. Interact. Learn. Environ. 2023, 32, 6187–6203. [Google Scholar] [CrossRef]
  122. Lozano, A.; Carolina Blanco, F. Is the Education System Prepared for the Irruption of Artificial Intelligence? A Study on the Perceptions of Students of Primary Education Degree from a Dual Perspective: Current Pupils and Future Teachers. Educ. Sci. 2023, 13, 733. [Google Scholar] [CrossRef]
  123. Liu, J.; Wang, C.; Liu, Z.; Gao, M.; Xu, Y.; Chen, J.; Cheng, Y. A bibliometric analysis of generative AI in education: Current status and development. Asia Pac. J. Educ. 2024, 44, 156–175. [Google Scholar] [CrossRef]
  124. Levine, R.A.; Rivera, P.E.; He, L.; Fan, J.; Bresciani Ludvick, M.J. A learning analytics case study: On class sizes in undergraduate writing courses. Stat 2023, 12, e527. [Google Scholar] [CrossRef]
  125. Kuromiya, H.; Majumdar, R.; Horikoshi, I.; Ogata, H. Learning analytics for student homework activities during a long break: Evidence from K-12 education in Japan. Res. Pract. Technol. Enhanc. Learn. 2024, 19, 34. [Google Scholar] [CrossRef]
  126. Umar Bin, Q.; Christopoulos, A.; Solomon Sunday, O.; Ogata, H.; Laakso, M.-J. Multimodal Technologies in Precision Education: Providing New Opportunities or Adding More Challenges? Educ. Sci. 2021, 11, 338. [Google Scholar] [CrossRef]
  127. Romine, W.; Schroeder, N.; Graft, J.; Yang, F.; Sadeghi, R.; Zabihimayvan, M.; Kadariya, D.; Banerjee, T. Using Machine Learning to Train a Wearable Device for Measuring Students’ Cognitive Load during Problem-Solving Activities Based on Electrodermal Activity, Body Temperature, and Heart Rate: Development of a Cognitive Load Tracker for Both Personal and Classroom Use. Sensors 2020, 20, 4833. [Google Scholar] [CrossRef] [PubMed]
  128. Prieto, L.P.; Sharma, K.; Kidzinski, Ł.; Rodríguez-Triana, M.J.; Dillenbourg, P. Multimodal teaching analytics: Automated extraction of orchestration graphs from wearable sensor data. J. Comput. Assist. Learn. 2018, 34, 193–203. [Google Scholar] [CrossRef] [PubMed]
  129. Cohn, C.; Snyder, C.; Fonteles, J.H.; Ashwin, T.S.; Montenegro, J.; Biswas, G. A multimodal approach to support teacher, researcher and AI collaboration in STEM+C learning environments. Br. J. Educ. Technol. 2024, 56, 595–620. [Google Scholar] [CrossRef]
  130. Zhang, X.; Cao, Z. A Framework of an Intelligent Education System for Higher Education Based on Deep Learning. Int. J. Emerg. Technol. Learn. 2021, 16, 233–248. [Google Scholar] [CrossRef]
  131. Chandrasekaran, D.; Mago, V. Automating Transfer Credit Assessment-A Natural Language Processing-Based Approach. Comput. Mater. Contin. 2022, 73, 2257–2274. [Google Scholar] [CrossRef]
  132. Shi, Y.; Guo, F. Exploring Useful Teacher Roles for Sustainable Online Teaching in Higher Education Based on Machine Learning. Sustainability 2022, 14, 14006. [Google Scholar] [CrossRef]
  133. Batista, J.; Mesquita, A.; Carnaz, G. Generative AI and Higher Education: Trends, Challenges, and Future Directions from a Systematic Literature Review. Information 2024, 15, 676. [Google Scholar] [CrossRef]
  134. Li, S.; Liu, T. Performance Prediction for Higher Education Students Using Deep Learning. Complexity 2021, 2021, 9958203. [Google Scholar] [CrossRef]
  135. Kilian, P.; Loose, F.; Kelava, A. Predicting Math Student Success in the Initial Phase of College with Sparse Information Using Approaches From Statistical Learning. Front. Educ. 2020, 5, 502698. [Google Scholar] [CrossRef]
  136. De La Hoz, E.J.; Zuluaga, R.; Mendoza, A. Assessing and Classification of Academic Efficiency in Engineering Teaching Programs. J. Effic. Responsib. Educ. Sci. 2021, 14, 41–52. [Google Scholar] [CrossRef]
  137. Wijaya, T.T.; Su, M.; Cao, Y.; Weinhandl, R.; Houghton, T. Examining Chinese preservice mathematics teachers’ adoption of AI chatbots for learning: Unpacking perspectives through the UTAUT2 model. Educ. Inf. Technol. 2024, 30, 1387–1415. [Google Scholar] [CrossRef]
  138. Chen, P.-Y.; Liu, Y.-C. Impact of AI Robot Image Recognition Technology on Improving Students’ Conceptual Understanding of Cell Division and Science Learning Motivation. J. Balt. Sci. Educ. 2024, 23, 208–220. [Google Scholar] [CrossRef]
  139. Tatar, C.; Jiang, S.; Rosé, C.P.; Chao, J. Exploring Teachers’ Views and Confidence in the Integration of an Artificial Intelligence Curriculum into Their Classrooms: A Case Study of Curricular Co-Design Program. Int. J. Artif. Intell. Educ. 2025, 35, 702–735. [Google Scholar] [CrossRef]
  140. Bettayeb, A.M.; Abu Talib, M.; Sobhe Altayasinah, A.Z.; Dakalbab, F. Exploring the impact of ChatGPT: Conversational AI in education. Front. Educ. 2024, 9, 1379796. [Google Scholar] [CrossRef]
  141. Wang, N.; Wang, X.; Su, Y.-S. Critical analysis of the technological affordances, challenges and future directions of Generative AI in education: A systematic review. Asia Pac. J. Educ. 2024, 44, 139–155. [Google Scholar] [CrossRef]
  142. Lechuga, C.G.; Doroudi, S. Three Algorithms for Grouping Students: A Bridge Between Personalized Tutoring System Data and Classroom Pedagogy. Int. J. Artif. Intell. Educ. 2022, 33, 843–884. [Google Scholar] [CrossRef]
  143. Sperling, K.; Stenliden, L.; Nissen, J.; Heintz, F. Still w(AI)ting for the automation of teaching: An exploration of machine learning in Swedish primary education using Actor-Network Theory. Eur. J. Educ. 2022, 57, 584–600. [Google Scholar] [CrossRef]
  144. Pardamean, B.; Suparyanto, T.; Cenggoro, T.W.; Sudigyo, D.; Anugrahana, A. AI-Based Learning Style Prediction in Online Learning for Primary Education. IEEE Access 2022, 10, 35725–35735. [Google Scholar] [CrossRef]
  145. Tedre, M.; Toivonen, T.; Kahila, J.; Vartiainen, H.; Valtonen, T.; Jormanainen, I.; Pears, A. Teaching Machine Learning in K-12 Classroom: Pedagogical and Technological Trajectories for Artificial Intelligence Education. IEEE Access 2021, 9, 110558–110572. [Google Scholar] [CrossRef]
  146. Vartiainen, H.; Tedre, M.; Kahila, J.; Valtonen, T. Tensions and trade-offs of participatory learning in the age of machine learning. Educ. Media Int. 2020, 57, 285–298. [Google Scholar] [CrossRef]
  147. Holstein, K.; Aleven, V. Designing for human-AI complementarity in K-12 education. AI Mag. 2022, 43, 239–248. [Google Scholar] [CrossRef]
  148. Xia, Q.; Chiu, T.K.F.; Lee, M.; Sanusi, I.T.; Dai, Y.; Chai, C.S. A self-determination theory (SDT) design approach for inclusive and diverse artificial intelligence (AI) education. Comput. Educ. 2022, 189, 104582. [Google Scholar] [CrossRef]
  149. Bellas, F.; Guerreiro-Santalla, S.; Naya, M.; Duro, R.J. AI Curriculum for European High Schools: An Embedded Intelligence Approach. Int. J. Artif. Intell. Educ. 2023, 33, 399–426. [Google Scholar] [CrossRef]
  150. Chiu, T.K.F.; Ahmad, Z.; Ismailov, M.; Sanusi, I.T. What are artificial intelligence literacy and competency? A comprehensive framework to support them. Comput. Educ. Open 2024, 6, 100171. [Google Scholar] [CrossRef]
  151. Lin, X.-F.; Zhou, Y.; Shen, W.; Luo, G.; Xian, X.; Pang, B. Modeling the structural relationships among Chinese secondary school students’ computational thinking efficacy in learning AI, AI literacy, and approaches to learning AI. Educ. Inf. Technol. 2023, 29, 6189–6215. [Google Scholar] [CrossRef]
  152. Hijón-Neira, R.; Connolly, C.; Pizarro, C.; Pérez-Marín, D. Prototype of a Recommendation Model with Artificial Intelligence for Computational Thinking Improvement of Secondary Education Students. Computers 2023, 12, 113. [Google Scholar] [CrossRef]
  153. Kahn, K.; Winters, N. Child-Friendly Programming Interfaces to AI Cloud Services. In Proceedings of the 12th European Conference on Technology Enhanced Learning (EC-TEL 2017), Tallinn, Estonia, 12–15 September 2017; Volume 12, pp. 566–570. [Google Scholar] [CrossRef]
  154. Kahn, K.; Prasad, R.; Veera, G. AI Snap! Blocks for Speech Input and Output, Computer Vision, Word Embeddings, and Neural Net Creation, Training, and Use. In Proceedings of the 36th AAAI Conference on Artificial Intelligence (AAAI 2022), Online, 22 February–1 March 2022; Volume 36, pp. 12861–12864. [Google Scholar] [CrossRef]
  155. Fleger, C.-B.; Amanuel, Y.; Krugel, J. Learning Tools Using Block-based Programming for AI Education. In Proceedings of the 14th IEEE Global Engineering Education Conference (EDUCON 2023), Salmiya, Kuwait, 1–4 May 2023; Volume 14, pp. 1–5. [Google Scholar] [CrossRef]
  156. Ali, M.; Wong, G.K.-W.; Ma, M. K–12 Pre-service Teachers’ Perspectives on AI Models and Computational Thinking: The Insights from an Interpretative Research Inquiry. In Proceedings of the 8th APSCE International Conference on Computational Thinking and STEM Education (CTE-STEM 2024), Beijing, China, 28–30 May 2024; Volume 8, pp. 66–71. [Google Scholar] [CrossRef]
  157. Kahn, K.; Winters, N. Constructionism and AI: A history and possible futures. Br. J. Educ. Technol. 2021, 52, 1130–1142. [Google Scholar] [CrossRef]
  158. Williams, R.; Ali, S.; Devasia, N.; DiPaola, D.; Hong, J.; Kaputsos, S.; Jordan, B.; Breazeal, C. AI + Ethics Curricula for Middle School Youth: Lessons Learned from Three Project-Based Curricula. Int. J. Artif. Intell. Educ. 2022, 33, 325–383. [Google Scholar] [CrossRef] [PubMed]
  159. Leikas, J.; Johri, A.; Latvanen, M.; Wessberg, N.; Hahto, A. Governing Ethical AI Transformation: A Case Study of AuroraAI. Front. Artif. Intell. 2022, 5, 836557. [Google Scholar] [CrossRef] [PubMed]
  160. Ding, A.-C.E.; Shi, L.; Yang, H.; Choi, I. Enhancing teacher AI literacy and integration through different types of cases in teacher professional development. Comput. Educ. Open 2024, 6, 100178. [Google Scholar] [CrossRef]
  161. Du, H.; Sun, Y.; Jiang, H.; Islam, A.Y.M.A.; Gu, X. Exploring the effects of AI literacy in teacher learning: An empirical study. Humanit. Soc. Sci. Commun. 2024, 11, 559. [Google Scholar] [CrossRef]
  162. Priya, S.; Bhadra, S.; Chimalakonda, S.; Venigalla, A.S.M. ML-Quest: A game for introducing machine learning concepts to K-12 students. Interact. Learn. Environ. 2022, 31, 229–244. [Google Scholar] [CrossRef]
  163. Park, Y.; Shin, Y. Text Processing Education Using a Block-Based Programming Language. IEEE Access 2022, 10, 128484–128497. [Google Scholar] [CrossRef]
  164. Verner, I.; Cuperman, D.; Reitman, M. Robot Online Learning to Lift Weights: A Way to Expose Students to Robotics and Intelligent Technologies. Int. J. Online Eng. 2017, 13, 174. [Google Scholar] [CrossRef]
  165. Karalekas, G.; Vologiannidis, S.; Kalomiros, J. Teaching Machine Learning in K–12 Using Robotics. Educ. Sci. 2023, 13, 67. [Google Scholar] [CrossRef]
  166. Bellas, F.; Naya-Varela, M.; Mallo, A.; Paz-Lopez, A. Education in the AI era: A long-term classroom technology based on intelligent robotics. Humanit. Soc. Sci. Commun. 2024, 11, 1425. [Google Scholar] [CrossRef]
  167. Sophokleous, A.; Christodoulou, P.; Doitsidis, L.; Chatzichristofis, S.A. Computer Vision Meets Educational Robotics. Electronics 2021, 10, 730. [Google Scholar] [CrossRef]
  168. Henze, J.; Schatz, C.; Malik, S.; Bresges, A. How Might We Raise Interest in Robotics, Coding, Artificial Intelligence, STEAM and Sustainable Development in University and On-the-Job Teacher Training? Front. Educ. 2022, 7, 872637. [Google Scholar] [CrossRef]
  169. Chen, X.; Cheng, G.; Zou, D.; Zhong, B.; Xie, H. Artificial Intelligent Robots for Precision Education: A Topic Modeling-Based Bibliometric Analysis. J. Educ. Technol. Soc. 2023, 26, 171–186. [Google Scholar] [CrossRef]
  170. King, S.; Boyer, J.; Bell, T.; Estapa, A. An Automated Virtual Reality Training System for Teacher-Student Interaction: A Randomized Controlled Trial. JMIR Serious Games 2022, 10, e41097. [Google Scholar] [CrossRef] [PubMed]
  171. Poonja, H.A.; Shirazi, M.A.; Khan, M.J.; Javed, K. Engagement detection and enhancement for STEM education through computer vision, augmented reality, and haptics. Image Vis. Comput. 2023, 136, 104720. [Google Scholar] [CrossRef]
  172. Saundarajan, K.; Osman, S.; Kumar, J.A.; Daud, M.F.; Abu, M.S.; Pairan, M.R. Learning Algebra using Augmented Reality: A Preliminary Investigation on the Application of Photomath for Lower Secondary Education. Int. J. Emerg. Technol. Learn. 2020, 15, 123–133. [Google Scholar] [CrossRef]
  173. Lohakan, M.; Seetao, C. Large-scale experiment in STEM education for high school students using artificial intelligence kit based on computer vision and Python. Heliyon 2024, 10, e31366. [Google Scholar] [CrossRef]
  174. Peng, Y.; Wang, Y.; Hu, J. Examining ICT attitudes, use and support in blended learning settings for students’ reading performance: Approaches of artificial intelligence and multilevel model. Comput. Educ. 2023, 203, 104846. [Google Scholar] [CrossRef]
  175. Almohesh, A.R.I. AI Application (ChatGPT) and Saudi Arabian Primary School Students’ Autonomy in Online Classes: Exploring Students and Teachers’ Perceptions. Int. Rev. Res. Open Distance Learn. 2024, 25, 1–18. [Google Scholar] [CrossRef]
  176. Ortego, R.G.; Sánchez, I.M. Relevant Parameters for the Classification of Reading Books Depending on the Degree of Textual Readability in Primary and Compulsory Secondary Education (CSE) Students. IEEE Access 2019, 7, 79044–79055. [Google Scholar] [CrossRef]
  177. Wiley, J.; Hastings, P.; Blaum, D.; Jaeger, A.J.; Hughes, S.; Wallace, P.; Griffin, T.D.; Britt, M.A. Different Approaches to Assessing the Quality of Explanations Following a Multiple-Document Inquiry Activity in Science. Int. J. Artif. Intell. Educ. 2017, 27, 758–790. [Google Scholar] [CrossRef]
  178. Lin, C.-H.; Yu, C.-C.; Shih, P.-K.; Wu, L.Y. STEM based Artificial Intelligence Learning in General Education for Non-Engineering Undergraduate Students. Educ. Technol. Soc. 2021, 24, 224–237. [Google Scholar]
  179. Meng-Leong, H.; Hung, W.L.D. Educing AI-Thinking in Science, Technology, Engineering, Arts, and Mathematics (STEAM) Education. Educ. Sci. 2019, 9, 184. [Google Scholar] [CrossRef]
  180. Xu, W.; Ouyang, F. The application of AI technologies in STEM education: A systematic review from 2011 to 2021. Int. J. STEM Educ. 2022, 9, 59. [Google Scholar] [CrossRef]
  181. Johnson-Glenberg, M.; Yu, C.; Liu, F.; Amador, C.; Bao, Y.; Yu, S.; Likamwa, R. Embodied Mixed Reality with Passive Haptics in STEM Education: Randomized Control Study with Chemistry Titration. Front. Virtual Real. 2023, 4, 1047833. [Google Scholar] [CrossRef]
  182. Ferro, L.S.; Sapio, F.; Terracina, A.; Temperini, M.; Mecella, M. Gea2: A Serious Game for Technology-Enhanced Learning in STEM. IEEE Trans. Learn. Technol. 2021, 14, 723–739. [Google Scholar] [CrossRef]
  183. Lee, H.-Y.; Cheng, Y.-P.; Wang, W.-S.; Lin, C.-J.; Huang, Y.-M. Exploring the Learning Process and Effectiveness of STEM Education via Learning Behavior Analysis and the Interactive-Constructive-Active-Passive Framework. J. Educ. Comput. Res. 2023, 61, 951–976. [Google Scholar] [CrossRef]
  184. Bertolini, R.; Finch, S.; Nehm, R. An application of Bayesian inference to examine student retention and attrition in the STEM classroom. Front. Educ. 2023, 8, 1073829. [Google Scholar] [CrossRef]
  185. Bhatt, S.M.; Noortgate, W.V.D.; Verbert, K. Investigating the Use of Deep Learning and Implicit Feedback in K12 Educational Recommender Systems. IEEE Trans. Learn. Technol. 2024, 17, 112–123. [Google Scholar] [CrossRef]
  186. Owens, M.T.; Seidel, S.B.; Wong, M.; Bejines, T.E.; Lietz, S.; Perez, J.R.; Sit, S.; Subedar, Z.-S.; Acker, G.N.; Akana, S.F.; et al. Classroom sound can be used to classify teaching practices in college science courses. Proc. Natl. Acad. Sci. USA 2017, 114, 3085–3090. [Google Scholar] [CrossRef]
  187. Maniktala, M.; Cody, C.; Barnes, T.; Chi, M. Avoiding Help Avoidance: Using Interface Design Changes to Promote Unsolicited Hint Usage in an Intelligent Tutor. Int. J. Artif. Intell. Educ. 2020, 30, 637–667. [Google Scholar] [CrossRef]
  188. Bywater, J.P.; Chiu, J.L.; Hong, J.; Sankaranarayanan, V. The Teacher Responding Tool: Scaffolding the teacher practice of responding to student ideas in mathematics classrooms. Comput. Educ. 2019, 139, 16–30. [Google Scholar] [CrossRef]
  189. Lu, Y.; Wang, D.; Chen, P.; Zhang, Z. Design and Evaluation of Trustworthy Knowledge Tracing Model for Intelligent Tutoring System. IEEE Trans. Learn. Technol. 2024, 17, 1661–1676. [Google Scholar] [CrossRef]
  190. Liu, R.; Stamper, J.; Davenport, J.; McNamara, D.; Nzinga, K.; Sherin, B. Learning linkages: Integrating data streams of multiple modalities and timescales. J. Comput. Assist. Learn. 2019, 35, 99–109. [Google Scholar] [CrossRef]
  191. Velander, J.; Taiye, M.A.; Otero, N.; Milrad, M. Artificial Intelligence in K-12 Education: Eliciting and reflecting on Swedish teachers’ understanding of AI and its implications for teaching & learning. Educ. Inf. Technol. 2023, 29, 4085–4105. [Google Scholar] [CrossRef]
  192. Antonenko, P.; Abramowitz, B. In-service teachers’ (mis)conceptions of artificial intelligence in K-12 science education. J. Res. Technol. Educ. 2022, 55, 64–78. [Google Scholar] [CrossRef]
  193. Uğraş, H.; Uğraş, M. ChatGPT in early childhood STEM education: Can it be an innovative tool to overcome challenges? Educ. Inf. Technol. 2024, 30, 4277–4305. [Google Scholar] [CrossRef]
  194. Alshorman, S.M. The Readiness to Use AI in Teaching Science: Science Teachers’ Perspective. J. Balt. Sci. Educ. 2024, 23, 432–448. [Google Scholar] [CrossRef]
  195. Lin, X.-F.; Chen, L.; Chan, K.K.; Peng, S.; Chen, X.; Xie, S.; Liu, J.; Hu, Q. Teachers’ Perceptions of Teaching Sustainable Artificial Intelligence: A Design Frame Perspective. Sustainability 2022, 14, 7811. [Google Scholar] [CrossRef]
  196. Yau, K.; Chai, C.; Chiu, T.K.F.; Meng, H.; King, I.; Yam, Y. A phenomenographic approach on teacher conceptions of teaching Artificial Intelligence (AI) in K-12 schools. Educ. Inf. Technol. 2022, 28, 1041–1064. [Google Scholar] [CrossRef]
  197. Sun, J.; Ma, H.; Zeng, Y.; Han, D.; Jin, Y. Promoting the AI teaching competency of K-12 computer science teachers: A TPACK-based professional development approach. Educ. Inf. Technol. 2023, 28, 1509–1533. [Google Scholar] [CrossRef]
  198. Tang, J.; Zhou, X.; Wan, X.; Daley, M.; Bai, Z. ML4STEM Professional Development Program: Enriching K-12 STEM Teaching with Machine Learning. Int. J. Artif. Intell. Educ. 2023, 33, 185–224. [Google Scholar] [CrossRef]
  199. Seufert, S.; Guggemos, J.; Sailer, M. Technology-related knowledge, skills, and attitudes of pre- and in-service teachers: The current situation and emerging trends. Comput. Hum. Behav. 2021, 115, 106552. [Google Scholar] [CrossRef]
  200. Zeegers, Y.; Elliott, K. Who’s asking the questions in classrooms? Exploring teacher practice and student engagement in generating engaging and intellectually challenging questions. Pedagog. Int. J. 2019, 14, 17–32. [Google Scholar] [CrossRef]
  201. Kuleto, V.; Ilić, M.P.; Bucea-Manea-Țoniş, R.; David-Florin, C.; Mihălcescu, H.; Mindrescu, V. The Attitudes of K–12 Schools’ Teachers in Serbia towards the Potential of Artificial Intelligence. Sustainability 2022, 14, 8636. [Google Scholar] [CrossRef]
  202. Chocarro, R.; Cortiñas, M.; Marcos-Matás, G. Teachers’ attitudes towards chatbots in education: A technology acceptance model approach considering the effect of social language, bot proactiveness, and users’ characteristics. Educ. Stud. 2023, 49, 295–313. [Google Scholar] [CrossRef]
  203. Zhang, C.; Schießl, J.; Plößl, L.; Hofmann, F.; Gläser-Zikuda, M. Acceptance of artificial intelligence among pre-service teachers: A multigroup analysis. Int. J. Educ. Technol. High. Educ. 2023, 20, 49. [Google Scholar] [CrossRef]
  204. Nazaretsky, T.; Ariely, M.; Cukurova, M.; Alexandron, G. Teachers’ trust in AI-powered educational technology and a professional development program to improve it. Br. J. Educ. Technol. 2022, 53, 914–931. [Google Scholar] [CrossRef]
  205. Mnyawami, Y.N.; Maziku, H.H.; Mushi, J.C. Comparative Study of AutoML Approach, Conventional Ensemble Learning Method, and KNearest Oracle-AutoML Model for Predicting Student Dropouts in Sub-Saharan African Countries. Appl. Artif. Intell. 2022, 36, 2145632. [Google Scholar] [CrossRef]
  206. Rasheed, M.A.; Chand, P.; Saad, A.; Hamza, S.; Hoodbhoy, Z.; Siddiqui, A.; Hasan, B.S. Use of artificial intelligence on Electroencephalogram (EEG) waveforms to predict failure in early school grades in children from a rural cohort in Pakistan. PLoS ONE 2021, 16, e0246236. [Google Scholar] [CrossRef]
  207. Zawacki-Richter, O.; Marín, V.I.; Bond, M.; Gouverneur, F. Systematic review of research on artificial intelligence applications in higher education—Where are the educators? Int. J. Educ. Technol. High. Educ. 2019, 16, 39. [Google Scholar] [CrossRef]
  208. Chi, M.T.H.; Wylie, R. The ICAP Framework: Linking Cognitive Engagement to Active Learning Outcomes. Educ. Psychol. 2014, 49, 219–243. [Google Scholar] [CrossRef]
  209. Azevedo, R.; Bouchet, F.; Duffy, M.; Harley, J.; Taub, M.; Trevors, G.; Cloude, E.; Dever, D.; Wiedbusch, M.; Wortha, F.; et al. Lessons Learned and Future Directions of MetaTutor: Leveraging Multichannel Data to Scaffold Self-Regulated Learning with an Intelligent Tutoring System. Front. Psychol. 2022, 13, 813632. [Google Scholar] [CrossRef]
  210. Holmes, W.; Bialik, M.; Fadel, C. Artificial Intelligence in Education: Promise and Implications for Teaching and Learning; Center for Curriculum Redesign (CCR): Boston, MA, USA, 2019; p. 242. [Google Scholar]
  211. Zhao, L.; Wu, X.; Luo, H. Developing AI Literacy for Primary and Middle School Teachers in China: Based on a Structural Equation Modeling Analysis. Sustainability 2022, 14, 14549. [Google Scholar] [CrossRef]
  212. Long, D.; Magerko, B. What is AI Literacy? Competencies and Design Considerations. In Proceedings of the 2020 Conference on Human Factors in Computing Systems (CHI), Honolulu, HI, USA, 25–30 April 2020; Volume 38, pp. 1–16. [Google Scholar] [CrossRef]
  213. Wong, G.K.-W.; Ma, X.; Dillenbourg, P.; Huan, J. Broadening artificial intelligence education in K-12: Where to start? ACM Inroads 2020, 11, 20–29. [Google Scholar] [CrossRef]
  214. Ng, D.T.K.; Leung, J.K.L.; Chu, S.K.W.; Qiao, M.S. Conceptualizing AI literacy: An exploratory review. Comput. Educ. Artif. Intell. 2021, 2, 100041. [Google Scholar] [CrossRef]
  215. Bloom, B.S.; Krathwohl, D.R. Taxonomy of Educational Objectives: The Classification of Educational Goals. Handbook I: Cognitive Domain; Longmans: New York, NY, USA, 1956. [Google Scholar]
  216. Biggs, J.B.; Collis, K.F. Evaluating the Quality of Learning: The SOLO Taxonomy (Structure of the Observed Learning Outcome); Academic Press: New York, NY, USA, 1982. [Google Scholar]
  217. Casal-Otero, L.; Catala, A.; Fernández-Morante, C.; Taboada, M.; Cebreiro, B.; Barro, S. AI literacy in K-12: A systematic literature review. Int. J. STEM Educ. 2023, 10, 29. [Google Scholar] [CrossRef]
  218. Carolus, A.; Augustin, Y.; Markus, A.; Wienrich, C. Digital interaction literacy model—Conceptualizing competencies for literate interactions with voice-based AI systems. Comput. Educ. Artif. Intell. 2023, 4, 100114. [Google Scholar] [CrossRef]
  219. Oyelere, S.; Friday Joseph, A.; Sanusi, I. Developing a pedagogical evaluation framework for computational thinking supporting technologies and tools. Front. Educ. 2022, 7, 957739. [Google Scholar] [CrossRef]
  220. Chiu, T.K.F.; Meng, H.E.; Chai, C.S.; King, I.; Wong, S.; Yam, Y. Creation and Evaluation of a Pretertiary Artificial Intelligence (AI) Curriculum. IEEE Trans. Educ. 2022, 65, 30–39. [Google Scholar] [CrossRef]
  221. Hellas, A.; Leinonen, J.; Sarsa, S.; Koutcheme, C.; Kujanpää, L.; Sorva, J. Exploring the Responses of Large Language Models to Beginner Programmers’ Help Requests. In Proceedings of the 2023 ACM Conference on International Computing Education Research (ICER), Chicago, IL, USA, 8–10 August 2023; Volume 19, pp. 93–105. [Google Scholar] [CrossRef]
  222. Reeves, B.; Sarsa, S.; Prather, J.; Denny, P.; Becker, B.A.; Hellas, A.; Kimmel, B.; Powell, G.; Leinonen, J. Evaluating the Performance of Code Generation Models for Solving Parsons Problems with Small Prompt Variations. In Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education (ITiCSE), Turku, Finland, 7–12 July 2023; Volume 28, pp. 299–305. [Google Scholar] [CrossRef]
  223. Denny, P.; Kumar, V.; Giacaman, N. Conversing with Copilot: Exploring Prompt Engineering for Solving CS1 Problems Using Natural Language. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education (SIGCSE), Toronto, ON, Canada, 15–18 March 2023; Volume 54, pp. 1136–1142. [Google Scholar] [CrossRef]
  224. Ali, M.; Chen, B.; Wong, G.K.-W. Developing Alice: A Scaffolding Agent for AI-Mediated Computational Thinking. In Proceedings of the 9th International Conference on Computational Thinking and STEM Education (CTE-STEM 2025), Hong Kong, China, 18–20 June 2025; Volume 9, pp. 26–31. [Google Scholar] [CrossRef]
  225. Vincent-Lancrin, S.; Van der Vlies, R. Trustworthy artificial intelligence (AI) in education. OECD Educ. Work. Pap. 2020, 218, 1–17. [Google Scholar] [CrossRef]
  226. Cardona, M.A.; Rodríguez, R.J.; Ishmael, K. Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations; Office of Educational Technology, US Department of Education: Washington, DC, USA, 2023.
  227. Miao, F.; Holmes, W.; Huang, R.; Zhang, H. AI and Education: A Guidance for Policymakers; UNESCO Publishing: Paris, France, 2021. [Google Scholar]
  228. Raji, I.D.; Scheuerman, M.K.; Amironesei, R. You Can’t Sit with Us: Exclusionary Pedagogy in AI Ethics Education. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT), New York, NY, USA, 3–10 March 2021; Volume 4, pp. 515–525. [Google Scholar] [CrossRef]
  229. Knowles, M.A. Five Motivating Concerns for AI Ethics Instruction. In Proceedings of the 84th Association for Information Science and Technology (ASIS&T 2021), Salt Lake City, UT, USA, 30 October–2 November 2021; Volume 84, pp. 472–476. [Google Scholar] [CrossRef]
  230. Smakman, M.; Konijn, E.; Vogt, P.; Pankowska, P. Attitudes towards Social Robots in Education: Enthusiast, Practical, Troubled, Sceptic, and Mindfully Positive. Robotics 2021, 10, 24. [Google Scholar] [CrossRef]
  231. Celik, I. Towards Intelligent-TPACK: An empirical study on teachers’ professional knowledge to ethically integrate artificial intelligence (AI)-based tools into education. Comput. Hum. Behav. 2023, 138, 107468. [Google Scholar] [CrossRef]
  232. Koehler, M.J.; Mishra, P.; Cain, W. What Is Technological Pedagogical Content Knowledge (TPACK)? J. Educ. 2013, 193, 13–19. [Google Scholar] [CrossRef]
  233. Mishra, P.; Warr, M.; Islam, R. TPACK in the age of ChatGPT and Generative AI. J. Digit. Learn. Teach. Educ. 2023, 39, 235–251. [Google Scholar] [CrossRef]
  234. Vazhayil, A.; Shetty, R.; Rao, B.; Nagarajan, A. Focusing on Teacher Education to Introduce AI in Schools: Perspectives and Illustrative Findings. In Proceedings of the 2019 IEEE International Conference on Technology for Education (T4E), Goa, India, 9–11 December 2019; Volume 10, pp. 71–77. [Google Scholar] [CrossRef]
Figure 1. Flow diagram of the review process based on the PRISMA protocol.
Figure 2. Distribution of articles based on their publication years.
Figure 3. Twenty countries or regions with the most articles published.
Figure 4. Network map of the keyword co-occurrence analysis.
Figure 5. Network visualization map of the highly co-cited publications.
Figure 6. Integrated AI-Education Convergence Framework.
Table 1. Key terms related to AI and Education and search strings.
Terms Related to AI
artificial intelligence (AI), explainable AI, AI ethics, responsible AI, algorithmic fairness, AI literacy, intelligent system, machine learning (ML), deep learning (DL), neural network, natural language processing (NLP), computer vision, generative AI, large language model (LLM), AI model, transformer model, pre-trained model (PTM), reinforcement learning, supervised learning, unsupervised learning, representation learning, multimodal learning
Terms Related to Education
K–12 education, elementary education, primary education, secondary education, tertiary education, STEM education, higher education, teacher education, teacher training, professional development, education, schooling, pedagogy, education technology
Search String (WOSCC)
TS = (“artificial intelligence” OR “AI” OR “explainable AI” OR “AI ethic*” OR “responsible AI” OR “algorithmic fairness” OR “AI literacy” OR “intelligent system*”) AND TS = (“machine learning” OR “ML” OR “deep learning” OR “DL” OR “neural network*” OR “natural language processing” OR “NLP” OR “computer vision” OR “generative AI” OR “large language model*” OR “LLM” OR “AI model*” OR “transformer model*” OR “pre-train* model*” OR “PTM” OR “reinforcement learning” OR “supervised learning” OR “unsupervised learning” OR “representation learning” OR “multimodal learning”) AND TS = (“K–12 education” OR “K12 education” OR “K12” OR “elementary education” OR “primary education” OR “secondary education” OR “tertiary education” OR “STEM education” OR “higher education” OR “teacher education” OR “teacher training” OR “professional development” OR “education” OR “education technolog*”) AND DT = (Article OR Review) AND LA = (English) AND PY = (2014–2024)
Search String (Scopus)
(TITLE-ABS-KEY(“artificial intelligence” OR “AI” OR “explainable AI” OR “AI ethic*” OR “responsible AI” OR “algorithmic fairness” OR “AI literacy” OR “intelligent system*”)) AND (TITLE-ABS-KEY(“machine learning” OR “ML” OR “deep learning” OR “DL” OR “neural network*” OR “natural language processing” OR “NLP” OR “computer vision” OR “generative AI” OR “large language model*” OR “LLM” OR “AI model*” OR “transformer model*” OR “pre-train* model*” OR “PTM” OR “reinforcement learning” OR “supervised learning” OR “unsupervised learning” OR “representation learning” OR “multimodal learning”)) AND (TITLE-ABS-KEY(“K–12 education” OR “K12 education” OR “K12” OR “elementary education” OR “primary education” OR “secondary education” OR “tertiary education” OR “STEM education” OR “higher education” OR “teacher education” OR “teacher training” OR “professional development” OR “education” OR “education technolog*”)) AND (DOCTYPE(ar) OR DOCTYPE(re)) AND (LANGUAGE(english)) AND (PUBYEAR > 2013 AND PUBYEAR < 2025)
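For readers who wish to adapt or audit the retrieval step, the short Python sketch below illustrates how a WOSCC-style advanced query of the kind listed in Table 1 could be assembled programmatically from the three term groups. It is an illustrative sketch only, not the tooling used in this study, and the term lists are abbreviated; the full sets appear in the table above.

```python
# Illustrative sketch only: assembling a WOSCC-style advanced search string
# from the three term groups in Table 1 (term lists abbreviated here).

ai_terms = ["artificial intelligence", "AI", "explainable AI", "AI ethic*",
            "responsible AI", "algorithmic fairness", "AI literacy", "intelligent system*"]
technique_terms = ["machine learning", "deep learning", "neural network*",
                   "natural language processing", "computer vision", "generative AI",
                   "large language model*", "reinforcement learning"]
education_terms = ["K-12 education", "primary education", "secondary education",
                   "STEM education", "higher education", "teacher education",
                   "professional development", "education"]

def ts_clause(terms):
    """Quote each term and join with OR inside a single TS = () clause."""
    return "TS = (" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = " AND ".join([
    ts_clause(ai_terms),
    ts_clause(technique_terms),
    ts_clause(education_terms),
    "DT = (Article OR Review)",
    "LA = (English)",
    "PY = (2014-2024)",
])
print(query)
```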
Table 2. Co-occurring keywords within their respective clusters.
Cluster 1: machine learning, deep learning, models, neural-networks, natural language processing, data mining, prediction, systems, classification, information, assessment, quality, knowledge, reflection, AI in education, learning analytics, higher education, primary education, systematic review
Cluster 2: K–12 education, education, artificial intelligence, generative AI, ethics, computational thinking, educational robotics, robotics, tools, augmented reality, virtual reality, internet, online learning, school, educational technology, predictive models, explanations, communication
Cluster 3: STEM education, science, mathematics, students, engagement, learning, performance, impact, design, teachers, teaching, technology, intelligent tutoring systems, support
Cluster 4: teacher education, professional development, AI education, pedagogical content knowledge, framework, preservice teachers, anxiety, attitudes, competence, achievement, beliefs, acceptance, motivation, self-efficacy, self-determination theory, classroom teaching, secondary education, technology integration, strategies, skills, validation
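As a point of reference for the clusters above, the minimal Python sketch below illustrates the raw quantity on which keyword co-occurrence mapping rests: how often two keywords appear together in the same article. The input records are hypothetical; in this study, VOSviewer performed the counting and clustering on the keywords of the 317 retrieved articles.

```python
# Minimal sketch with hypothetical records: counting keyword co-occurrences,
# the raw input behind co-occurrence maps such as Figure 4 and Table 2.
from collections import Counter
from itertools import combinations

articles = [
    ["machine learning", "higher education", "learning analytics"],
    ["machine learning", "deep learning", "prediction"],
    ["artificial intelligence", "computational thinking", "K-12 education"],
    ["artificial intelligence", "teacher education", "professional development"],
]

pair_counts = Counter()
for keywords in articles:
    # Each unordered pair of distinct keywords in one article is one co-occurrence.
    for a, b in combinations(sorted(set(keywords)), 2):
        pair_counts[(a, b)] += 1

for (a, b), n in pair_counts.most_common(5):
    print(f"{a} <-> {b}: {n}")
```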
Table 3. ‘Top ten’ highly co-cited publications.
No. | Publication Title | Year | Co-Citations | Link Strength
1 | Systematic Review of Research on Artificial Intelligence Applications in Higher Education—Where Are the Educators? | 2019 | 26 | 94
2 | Sustainable Curriculum Planning for Artificial Intelligence Education: A Self-Determination Theory Perspective | 2020 | 20 | 89
3 | What is AI Literacy? Competencies and Design Considerations | 2020 | 18 | 72
4 | Envisioning AI for K–12: What Should Every Child Know about AI? | 2019 | 16 | 91
5 | Artificial Intelligence in Education: A Review | 2020 | 15 | 37
6 | Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology | 1989 | 13 | 58
7 | Artificial Intelligence in Education: Promise and Implications for Teaching and Learning | 2019 | 13 | 53
8 | Intelligence Unleashed: An Argument for AI in Education | 2016 | 12 | 45
9 | Exploring the Impact of Artificial Intelligence on Teaching and Learning in Higher Education | 2017 | 12 | 41
10 | Focusing on Teacher Education to Introduce AI in Schools: Perspectives and Illustrative Findings | 2019 | 12 | 36
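The two quantities reported in Table 3 can be read as follows: a publication's co-citation count is typically the number of times it is cited by articles in the corpus, and its total link strength sums the co-citation links it shares with other cited references. The sketch below illustrates this arithmetic on a toy set of hypothetical reference lists; the actual values were computed by VOSviewer over the full dataset.

```python
# Toy illustration (hypothetical reference lists): citation counts and total
# link strength of the kind reported for the co-cited publications in Table 3.
from collections import Counter
from itertools import combinations

# Each inner list stands for the cited references of one article in the corpus.
reference_lists = [
    ["Zawacki-Richter 2019", "Long & Magerko 2020", "Holmes et al. 2019"],
    ["Zawacki-Richter 2019", "Touretzky et al. 2019", "Long & Magerko 2020"],
    ["Touretzky et al. 2019", "Holmes et al. 2019", "Luckin et al. 2016"],
]

citations = Counter()       # times each reference is cited within the corpus
link_strength = Counter()   # summed strength of co-citation links per reference

for refs in reference_lists:
    unique = sorted(set(refs))
    citations.update(unique)
    for a, b in combinations(unique, 2):
        # Two references cited by the same article form one co-citation link.
        link_strength[a] += 1
        link_strength[b] += 1

for ref, n in citations.most_common():
    print(f"{ref}: citations={n}, total link strength={link_strength[ref]}")
```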
Table 4. Cross-cutting research issues and gaps.
No. | Research Issue | Keyword Co-Occurring Cluster No. | Highly Co-Cited Publications No.
1 | Limited congruence between technological and pedagogical affordances of AIED applications | 1, 2, 3, 4 | 1st, 5th, 7th, 8th, 9th, 14th *, 15th *, 16th *, 38th *, 40th *
2 | Insufficient bottom-up perspectives in AI literacy frameworks | 2, 3 | 3rd, 4th, 28th *, 30th *, 34th *, 37th *
3 | Ambiguous relationship between computational thinking and AI in STEM education | 1, 3 | 20th *, 24th *, 26th *, 29th *, 35th *, 39th *
4 | Lack of explicit interpretation of AI ethics for educators | 1, 2, 4 | 3rd, 4th, 7th, 9th, 39th *
5 | Limitations of existing PD frameworks in AI teacher education research | 2, 3, 4 | 2nd, 10th, 21st *, 31st *
Note: The publications suffixed with an asterisk (*) are provided in Appendix B.
Table 5. AI literacy framework studies.
Study Author(s) | Study Type | Study Context | Study Title
Touretzky et al. [8] | Conceptual paper | K–12 | Envisioning AI for K–12: What Should Every Child Know about AI?
Long and Magerko [212] | Scoping review | General public | What is AI Literacy? Competencies and Design Considerations
Wong et al. [213] | Conceptual paper | K–12 | Broadening Artificial Intelligence Education in K–12: Where to Start?
Ng et al. [214] | Exploratory review | K–12 | Conceptualizing AI Literacy: An Exploratory Review
Laupichler et al. [49] | Exploratory review | Higher education | Artificial Intelligence Literacy in Higher and Adult Education
Casal-Otero et al. [217] | Systematic review | K–12 | AI Literacy in K–12: A Systematic Literature Review
Carolus et al. [218] | Expert interview | AI developers | Digital Interaction Literacy Model—Conceptualizing Competencies for Literate Interactions with Voice-based AI Systems
Kong et al. [36] | Conceptual paper | General public | Artificial Intelligence (AI) literacy—An argument for AI literacy in Education
Chiu et al. [150] | Co-design study | K–12 | What are Artificial Intelligence Literacy and Competency? A Comprehensive Framework to Support Them
