4.1.1. Issue 1: Limited Congruence Between Technological and Pedagogical Affordances of AIED Applications
A prominent research issue identified in the current literature concerns the limited congruence between the technological and pedagogical affordances of AIED applications. This gap is evident across both K–12 and higher education contexts, where a substantial body of technical studies tends to prioritize the technological capabilities of AIED systems while often overlooking the educational settings in which these systems are implemented. As a result, the pedagogical affordances (i.e., those features that directly influence instructional strategies and learning outcomes) are frequently under-theorized or insufficiently addressed, thereby constraining the practical utility of AIED tools for educators and learners.
This disconnect is particularly observable in studies focused on predictive analytics, such as those aiming to forecast student dropout rates at institutional, national, or regional levels (e.g., [
111,
112,
113,
114,
205,
206]). These studies have employed advanced AI models [
113], wearable technologies for behavioral data collection [
206], and complex datasets capturing various learner attributes [
112]. While such approaches undoubtedly enhance predictive accuracy, they often fall short in translating these insights into actionable pedagogical strategies. Without clear guidance on how to respond to predictive indicators, educators may experience cognitive overload, struggling to interpret and apply the data meaningfully within their instructional practices.
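To illustrate the kind of translation that is often missing, the following minimal sketch shows how a dropout-risk prediction could be paired with a concrete pedagogical suggestion rather than reported as a bare probability. The feature names, thresholds, training data, and suggested actions are hypothetical and serve only to make the point tangible.

```python
# Illustrative sketch only: a minimal dropout-risk model whose predictions are
# paired with pedagogical suggestions rather than reported as bare probabilities.
# Feature names, training data, and thresholds are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical learner attributes: [attendance_rate, lms_logins_per_week, avg_quiz_score]
X_train = np.array([
    [0.95, 12, 0.85],
    [0.60,  3, 0.40],
    [0.80,  7, 0.70],
    [0.40,  1, 0.30],
])
y_train = np.array([0, 1, 0, 1])  # 1 = dropped out in past cohorts

model = LogisticRegression().fit(X_train, y_train)

def pedagogical_flag(features):
    """Translate a risk score into an actionable suggestion for the instructor."""
    risk = model.predict_proba([features])[0][1]
    if risk > 0.7:
        action = "schedule a one-to-one check-in and review workload"
    elif risk > 0.4:
        action = "send a formative-feedback nudge on recent quizzes"
    else:
        action = "no intervention needed at this time"
    return risk, action

risk, action = pedagogical_flag([0.55, 2, 0.45])
print(f"Estimated dropout risk: {risk:.2f} -> suggested action: {action}")
```

The design choice here is that the system's output is an instructional prompt, not a raw score, which is precisely the pedagogical layer the studies cited above tend to leave unspecified.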
This concern is echoed in Zawacki-Richter et al. [
207], the 1st highly co-cited publication, which offers a systematic review of AI applications in higher education. They highlight the persistent misalignment between technological capabilities and pedagogical needs, questioning the extent to which AIED systems are designed to address real-world educational challenges. Similarly, Luckin et al. [
32], the 8th highly co-cited publication, advocate for a pedagogy-first approach, emphasizing that technological design must be informed by the specific educational contexts in which AIED tools are deployed. Despite these calls, efforts to bridge the gap between technological innovation and pedagogical relevance remain limited. Although general frameworks for AI integration in higher education have been proposed (e.g., [
80,
130,
133]), many of them lack the contextual granularity needed to support educators and learners in diverse instructional settings. This raises a critical question for future research: how can AIED applications be systematically designed to address educational problems in specific contexts while maintaining a balance between technological sophistication and pedagogical relevance?
An additional dimension of this issue involves the insufficient attention to social interaction within AIED systems, particularly in online learning environments. Shi and Guo [
132] identify this as a recurring challenge, noting that social interaction is essential for sustaining long-term engagement among both students and teachers [
137,
208]. Addressing this gap presents a dual opportunity: to enhance learner engagement and to support collaborative learning processes. One promising direction may lie in the advancement of personalization-oriented ITS research (e.g., [
75,
189,
209]). While ITS platforms have demonstrated considerable potential in adapting instruction to individual learners, their capacity to support meaningful peer-to-peer and teacher-student interactions remains underdeveloped.
This trajectory aligns with the vision articulated by Holmes et al. [
210], the 7th highly co-cited publication, who anticipated that “AI should be getting into the realm of deeper self-learning by the early 2020s and become capable of assisting, collaborating, coaching, and mediating [learners] by early 2023” (pp. 3–4) [
210]. However, realizing this vision requires a more deliberate integration of pedagogical insights into the design and deployment of AIED systems. Future research might explore how emerging AI technologies can be leveraged to balance personalization with social interaction, thereby addressing the dual challenges of learner engagement and pedagogical alignment.
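As one hypothetical illustration of how personalization and social interaction might be balanced, the sketch below pairs learners whose mastery profiles complement one another, so that an ITS's individualized estimates also drive peer collaboration. The learner profiles, topics, and greedy pairing heuristic are all assumptions made for illustration, not a description of any system reviewed here.

```python
# Illustrative sketch only: pairing learners with complementary mastery profiles
# so that personalized estimates also support peer interaction.
# Profiles, topics, and pairing logic are hypothetical.
from itertools import combinations

# Per-learner mastery estimates (0..1) produced by a hypothetical personalization engine.
mastery = {
    "Ana":  {"loops": 0.9, "functions": 0.3},
    "Ben":  {"loops": 0.2, "functions": 0.8},
    "Caro": {"loops": 0.5, "functions": 0.5},
    "Dev":  {"loops": 0.7, "functions": 0.2},
}

def complementarity(a, b):
    """Higher when the two learners' strengths lie on different topics."""
    return sum(abs(mastery[a][t] - mastery[b][t]) for t in mastery[a])

# Greedily pair the most complementary learners for a collaborative task.
unpaired = set(mastery)
pairs = []
while len(unpaired) > 1:
    a, b = max(combinations(unpaired, 2), key=lambda p: complementarity(*p))
    pairs.append((a, b))
    unpaired -= {a, b}

print(pairs)
```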
In summary, the current literature suggests that AIED applications often privilege technological advancement at the expense of pedagogical utility. This misalignment may result in systems that fail to address the practical needs of educators or to enhance learning outcomes in meaningful ways. Furthermore, while personalized ITS have shown promise, they frequently neglect the importance of social interaction, which is critical for fostering collaborative and socially enriched learning environments. Addressing this issue will require a concerted effort to integrate pedagogical theory into the design of AIED systems and to promote collaboration among AI developers, education researchers, and practitioners.
Recommendations to Address Research Issue 1
The following specific recommendations are proposed to address Research Issue 1, drawing on the axial codes (R1, R2, R3, R4, R5, and R6) derived from the preceding interpretive synthesis.
First, future AIED research should adopt a pedagogy-first approach (R1) by prioritizing pedagogical considerations during the design phase to ensure that technological affordances align with the instructional needs of educators and learners across diverse educational settings. Second, researchers are encouraged to develop context-specific frameworks (R2) by designing AIED models tailored to address distinct educational challenges, incorporating both generalizable design principles and localized pedagogical requirements. Third, it is advisable for AIED systems to integrate actionable insights for educators (R3), providing clear and practical strategies for interpreting and applying predictive analytics. Such integration may reduce cognitive overload and support informed pedagogical decision-making.
Fourth, design efforts should enhance social interaction features (R4) within AIED applications, particularly in online learning systems, to foster sustained engagement and collaboration. Fifth, by leveraging advances in personalization (R5), AIED applications could aim to balance individualized learning pathways with opportunities for peer and teacher interaction. Finally, promoting interdisciplinary collaboration (R6) among AI developers, education researchers, and practitioners may help bridge the gap between technological innovation and pedagogical application, thus fostering more holistic and context-sensitive solutions.
4.1.2. Issue 2: Insufficient Bottom-Up Perspectives in AI Literacy Frameworks
The concept of ‘AI literacy’ has gained considerable traction in recent years, as evidenced by its frequent appearance in Clusters 2 and 3 of the keyword co-occurrence analysis (see
Figure 4). Despite this growing interest, many studies continue to lack a clear, context-specific conceptualization of AI literacy within targeted educational settings (e.g., [
36,
120,
122,
149,
151,
165,
178,
211]). While some researchers have adopted the widely cited definition proposed by Long and Magerko [
212], the 3rd highly co-cited publication, this definition largely reflects a top-down perspective [
150]. According to their framework, “AI literacy is a set of competencies that enables individuals to critically evaluate AI technologies; communicate and collaborate effectively with AI; and use AI as a tool online, at home, and in the workplace” (p. 2) [
212]. These competencies are organized into five dimensions: (i) What is AI? (ii) What can AI do? (iii) How does AI work? (iv) How should AI be used? (v) How do people perceive AI?
Although foundational, this framework is primarily derived from expert opinions, grey literature, and the “5 Big AI Ideas” framework developed by Touretzky et al. [
8], the 4th highly co-cited publication. Their work, developed under the “AI for K–12” initiative, involved collaboration with the Association for the Advancement of Artificial Intelligence (AAAI) and the Computer Science Teachers Association (CSTA). However, the extent of K–12 educators’ contributions during the framework’s development remains unclear. This lack of explicit teacher input raises important concerns about the practical applicability of such frameworks across diverse classroom contexts.
Building on these early efforts, Wong et al. [
213] adapted AI literacy frameworks for K–12 education in Hong Kong, categorizing AI literacy into three dimensions: AI concepts, AI applications, and AI ethics. While more succinct, this framework offers limited guidance for designing and implementing AI content tailored to the specific needs of different K–12 settings. Similarly, Ng et al. [
214], the 34th highly co-cited publication, proposed a four-dimensional framework—drawing on Bloom’s taxonomy [
215]—encompassing: Know and understand AI; Use and apply AI; Evaluate and create AI; and AI ethics. However, this framework presents interpretive challenges.
For instance, it is often significantly more challenging to understand and articulate how an AI tool functions than to use the tool to create digital artefacts. Within Bloom’s taxonomy, however, “understanding” is positioned at a lower level of cognitive complexity than “creating” [
215]. This hierarchical misalignment may lead to confusion when designing learning outcomes. Consider the following example: the outcome “explain how an AI-image generator produces images from a prompt” (aligned with the
understanding level) may, in practice, demand a deeper cognitive engagement than “create an image by writing a prompt to an AI-image generator” (aligned with the
creating level). In such cases, the act of creating with AI tools may be more accessible for learners than providing a conceptual explanation of the underlying mechanisms. This misalignment arguably complicates the task of educators and instructional designers in formulating learning outcomes and ensuring constructive alignment with learning activities and assessments.
These interpretive challenges underscore the need for more nuanced AI literacy frameworks, which should, perhaps, be informed by the Structure of the Observed Learning Outcome (SOLO) taxonomy [
216]. The SOLO taxonomy delineates five levels of understanding: prestructural, unistructural, multistructural, relational, and extended-abstract. These levels allow for a more context-sensitive classification of learning outcomes. For instance, within the SOLO taxonomy, “explain” is typically situated at a higher level of cognitive complexity (relational or extended-abstract), depending on the depth of understanding required. Furthermore, SOLO distinguishes between
different forms of “creating.” For example, “creating with something” (e.g., employing AI tools) is generally classified at the
multistructural level, whereas “creating something new” (e.g., developing AI tools) is associated with the highest,
extended-abstract level. Such distinctions are possible because the SOLO taxonomy recognizes that understanding itself is multi-layered, providing a more flexible and contextually appropriate basis for designing learning outcomes, especially in AI literacy education.
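As a simple illustration of how the SOLO taxonomy might be operationalized when sequencing AI literacy outcomes, the sketch below tags a few outcome statements with SOLO levels and orders them by cognitive complexity. The statements and level assignments are illustrative assumptions reflecting the distinctions discussed above (e.g., “creating with” versus “creating something new”), not a prescriptive mapping.

```python
# Illustrative sketch only: tagging AI-literacy learning outcomes with SOLO levels.
# The outcome statements and level assignments are hypothetical examples.
from enum import IntEnum

class SOLO(IntEnum):
    PRESTRUCTURAL = 1
    UNISTRUCTURAL = 2
    MULTISTRUCTURAL = 3
    RELATIONAL = 4
    EXTENDED_ABSTRACT = 5

learning_outcomes = [
    ("Create an image by writing a prompt to an AI-image generator", SOLO.MULTISTRUCTURAL),
    ("Explain how an AI-image generator produces images from a prompt", SOLO.RELATIONAL),
    ("Design and train a simple image classifier for a new classroom task", SOLO.EXTENDED_ABSTRACT),
]

# Order outcomes by cognitive complexity when sequencing a unit of work.
for outcome, level in sorted(learning_outcomes, key=lambda pair: pair[1]):
    print(f"{level.name:<18} {outcome}")
```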
Recent scholarship has increasingly emphasized the importance of incorporating bottom-up perspectives into AI literacy frameworks. Casal-Otero et al. [
217], for example, reviewed AI literacy studies in K–12 settings and identified varied approaches to teaching AI, including recognizing AI artefacts, understanding AI processes, and using AI tools. They underscored the need to consider the perspectives of both teachers and students when designing curricula, particularly to accommodate diverse learning needs and gender differences. Similarly, Laupichler et al. [
49] reviewed AI literacy research in higher education and found that top-down frameworks often confuse educators, leaving them uncertain about how to structure courses or design AI content. They also highlighted the persistent challenge of defining AI literacy in a manner “that is clear and unambiguous” (p. 13) [
49].
Expanding on this discourse, Kong et al. [
36] argued that AI literacy should extend beyond technical competencies to include critical thinking, ethical awareness, and problem-solving skills. They emphasized the need for AI education to empower individuals as active participants in an AI-driven society, ensuring that AI literacy aligns with broader goals such as equity, sustainability, and lifelong learning.
In an exploratory interpretive study, Carolus et al. [
218] proposed a digital-interaction model for AI literacy, developed from interviews with AI experts. Their model comprises three overarching dimensions: understanding functional principles of AI systems, mindful usage of AI systems, and user group-dependent competencies, along with ten subdimensions. While this model offers valuable theoretical insights, its complexity may limit its applicability for novice learners, particularly in K–12 contexts. Addressing this concern, Chiu et al. [
150] co-designed an AI literacy framework with experienced K–12 teachers, integrating ‘bottom-up’ perspectives to enhance practical relevance. Their framework expands the scope of AI literacy to include confidence, self-reflection, and ethical reasoning. However, the study focused primarily on middle school teachers, raising questions about its generalizability across different educational levels and contexts.
Collectively, these studies highlight the value of interpretive methodologies in capturing the nuanced perspectives of educators and learners. Nevertheless, as shown in
Table 5, AI literacy research remains dominated by top-down frameworks grounded in expert-driven reviews or theoretical constructs. Consequently, observational or exploratory studies that examine the relationships between AI literacy dimensions based on the lived experiences of educators and learners remain limited [
150,
151,
211]. This gap underscores the need for future research to adopt inclusive, bottom-up approaches that prioritize the insights of those directly engaged in teaching and learning.
In summary, many existing AI literacy frameworks lack inclusivity and contextual adaptability, limiting their effectiveness across diverse educational settings. These frameworks, often designed from a top-down perspective, tend to overlook the practical insights of teachers and students at the grassroots level. Moreover, they frequently fail to account for varying levels of understanding among novice and advanced learners, particularly in K–12 contexts. The absence of simplified, context-specific curricula and adequate teacher training may further hinder their implementation.
Recommendations to Address Research Issue 2
The following specific recommendations are proposed to address Research Issue 2, drawing on the axial codes (R7, R8, R9, R10, R11, and R12) derived from the preceding interpretive synthesis.
First, future research should
incorporate bottom-up perspectives (R7) by prioritizing interpretive methodologies that capture the lived experiences of teachers and students. Such methodologies may ensure that AI literacy frameworks remain inclusive and contextually relevant across diverse educational settings. Second, it is suggested that AI literacy frameworks
adopt nuanced taxonomies (R8), such as the SOLO taxonomy [
216], to design learning outcomes that reflect varying levels of understanding and competency. Third, efforts should focus on
simplifying AI literacy frameworks for novice learners (R9), particularly within K–12 contexts, to ensure accessibility, age-appropriateness, and practicality for both young learners and their educators.
Fourth, researchers and practitioners are encouraged to develop context-specific curricula (R10) by collaborating to create AI literacy curricula tailored to the unique needs of specific educational levels, while accounting for cultural, contextual, and learner diversities. Fifth, fostering teacher training and resources (R11) is essential; PD programs should be designed to equip educators with the requisite knowledge and skills to understand and implement AI literacy frameworks effectively, thus bridging the gap between theory and practice. Finally, promoting interdisciplinary collaboration (R12) among AI researchers, education specialists, and classroom practitioners may support the development of holistic frameworks that balance expert insights with the practical realities of teaching and learning.
4.1.3. Issue 3: Ambiguous Relationship Between Computational Thinking and AI in STEM Education
Clusters 2 and 3 of the keyword co-occurrence analysis (
Figure 4 and
Table 2) reveal intricate and multidirectional linkages among key terms such as “computational thinking,” “artificial intelligence,” “generative AI,” “K–12 education,” “STEM education,” “educational robotics,” and “tools.” These interconnections suggest promising synergies; however, the relationship between computational thinking (CT) and AI remains conceptually ambiguous and insufficiently theorized. This ambiguity extends into broader STEM educational contexts (e.g., [
39,
138,
145,
151,
152,
168,
219]), where the integration of AI technologies could potentially enrich student-centered learning environments. Clarifying this relationship is essential for informing the design of AI-integrated STEM curricula that foster both CT and AI literacy in meaningful and pedagogically sound ways.
Within STEM education, researchers have argued that embedding AI technologies in tangible educational tools (e.g., educational robotics, block-based programming environments, AR/VR platforms) can support highly interactive, constructionist learning experiences (e.g., [
39,
151,
168,
173]). These environments are typically designed to promote hands-on, student-centered learning. Some scholars have extended this argument by suggesting that such tools may also be leveraged to cultivate AI literacy among K–12 learners (e.g., [
147,
163,
165,
166,
213]). This pragmatic approach builds on the foundational developments of CT and STEM education to support the learning of AI concepts. However, the
a priori relationship between CT and AI remains unclear, raising important questions about how these domains intersect and whether CT serves as a conceptual or pedagogical foundation for AI learning.
Insights from Lodi and Martini [
17] and Wong et al. [
213] offer initial conceptual grounding for this intersection. Lodi and Martini [
17] revisit Seymour Papert’s original interpretation of CT, which emphasizes “making and understanding computational objects” (p. 894). Wong et al. [
213] extend this perspective by conceptualizing AI as a coalescence of such computational objects. They propose that existing CT teaching and learning activities could be adapted to introduce AI concepts, applications, and ethical considerations. Supporting this view, Lin et al. [
151] demonstrated a positive association between AI literacy and CT efficacy among Chinese secondary school students. Their findings suggest that high-level CT skills may be fostered through learning designs that embed AI within real-life problem-solving contexts. Nevertheless, further empirical research is needed to examine whether this relationship holds across diverse educational settings and learner populations.
Collaborative efforts to develop AI curricula often involve partnerships between educationists and teachers, as illustrated by Chiu et al. [
220], the 12th highly co-cited publication. These collaborations, however, are frequently constrained by the absence of practical AI competency frameworks, which are essential for guiding instructional design and enabling teachers to implement AI learning content effectively. Casal-Otero et al. [
217] argue that many existing frameworks remain overly theoretical and top-down, making them difficult to operationalize in classroom contexts. Leveraging established CT pedagogies may offer a pathway to address this challenge, yet doing so requires a clearer understanding of the foundational relationship between CT and AI literacy.
Conversely, a growing body of research has explored the integration of AI, particularly machine learning (ML), into CT and STEM education (e.g., [
145,
213,
219]). Given that AI is a major subfield of CS, its development is inherently tied to the CT skills of its practitioners. Tedre et al. [
145] note that such integration necessitates a recalibration of current CT education, as “there is no agreement over the relationship between ML skills and knowledge and the multitude of skills and knowledge labeled computational thinking” (p. 110568). This observation invites further inquiry into how CT development through AI learning might influence students’ epistemological understanding of AI and its applications.
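One way to make this recalibration concrete in the classroom is to contrast rule-based problem-solving (classic CT) with data-driven problem-solving (ML) on the same task. The sketch below is a hypothetical example of such an activity; the classification task, data, and thresholds are invented for illustration.

```python
# Illustrative sketch only: contrasting a hand-written rule (classic CT) with a
# learned rule (ML) on the same classroom task, e.g. classifying fruit by weight and colour.
# Data and thresholds are hypothetical.
from sklearn.tree import DecisionTreeClassifier, export_text

# Rule-based: the student writes the decision logic explicitly.
def classify_by_rule(weight_g, colour_score):
    return "orange" if weight_g > 150 and colour_score > 0.5 else "lemon"

# Data-driven: the student collects labelled examples and lets the model infer the rule.
X = [[170, 0.9], [140, 0.2], [160, 0.8], [120, 0.1]]  # [weight_g, colour_score]
y = ["orange", "lemon", "orange", "lemon"]
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

print(classify_by_rule(155, 0.7))           # outcome of the hand-written rule
print(tree.predict([[155, 0.7]])[0])        # outcome of the learned rule
print(export_text(tree, feature_names=["weight_g", "colour_score"]))  # inspect what was learned
```

Comparing the explicit rule with the tree that the model induces gives learners a concrete entry point into the epistemological shift that Tedre et al. [145] describe.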
An emerging area of interest involves the role of CT in engaging with generative AI and AI models, particularly LLMs. For example, Hijón-Neira et al. [
152] demonstrated that LLMs can scaffold programming education by offering personalized feedback and unsolicited hints, thereby enhancing students’ CT skills. However, several challenges have been identified, including the unreliability of generated responses and students’ overreliance on these tools [
221,
222]. Reeves et al. [
222] observed that even slight variations in prompts can significantly alter LLM outputs, underscoring the importance of “prompt engineering” [
154,
223] as an emerging skill set. Future research could explore how CT can be cultivated through interactions with generative AI, while also addressing the pedagogical and ethical implications of tool reliance and output variability [
156,
224].
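To give a concrete sense of how prompt variation could itself become a CT activity, the sketch below queries an LLM with several wordings of the same request and invites students to compare the outputs. It assumes the `openai` Python package (v1+) and an OpenAI-compatible endpoint; the model name is a placeholder, and the prompts are illustrative.

```python
# Illustrative sketch only: a small prompt-variation experiment in which students
# observe how minor wording changes alter LLM output. Assumes the `openai` package (v1+)
# and an OpenAI-compatible endpoint; the model name below is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

prompt_variants = [
    "Write a Python function that returns the factorial of n.",
    "Write a short Python function computing n factorial, with a docstring.",
    "In Python, implement factorial(n) iteratively and explain each line in comments.",
]

for prompt in prompt_variants:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content
    # Students record and compare the answers, noting how small prompt changes
    # affect structure, correctness, and level of explanation.
    print(f"--- Prompt: {prompt}\n{answer}\n")
```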
In summary, the relationship between CT and AI remains conceptually ambiguous and empirically underexplored, despite its potential to transform STEM education. While CT is already embedded in many STEM curricula, its role in supporting the teaching of AI concepts, applications, and ethics has yet to be fully realized. Clarifying this relationship could inform the development of AI literacy curricula, guide the integration of AI into CT education, and support the adaptation of existing pedagogical frameworks to better prepare students for AI-driven futures. At the same time, emerging technologies such as generative AI offer new opportunities to enhance CT, but also present challenges related to tool reliability, learner overreliance, and the lack of practical competency frameworks.
Recommendations to Address Research Issue 3
The following specific recommendations are proposed to address Research Issue 3, drawing on the axial codes (R13, R14, R15, R16, R17, and R18) derived from the preceding interpretive synthesis.
First, future research should systematically explore foundational relationships (R13) by investigating how CT and AI literacy intersect and influence one another within STEM educational contexts. Second, it is recommended that researchers and educators leverage CT for AI literacy (R14) by adapting existing CT teaching and learning activities to introduce AI concepts, applications, and ethical considerations, thus building on established pedagogical practices in STEM education. Third, efforts should focus on integrating AI into CT curricula (R15), particularly by embedding ML into CT and STEM curricula. Such integration may necessitate recalibrating current CT frameworks to include AI-specific competencies that reflect emerging technological demands.
Fourth, researchers are encouraged to explore generative AI in CT education (R16) by examining how CT skills can be cultivated through engagement with generative AI tools, such as LLMs, while also addressing challenges related to tool reliability, ethical concerns, bias in outputs, and potential learner overreliance. Fifth, the development of practical AI competency frameworks (R17) is advised, with collaborative efforts between researchers and practitioners aiming to create bottom-up frameworks that support educators in designing and implementing effective AI learning content within STEM education. Finally, promoting interdisciplinary collaboration (R18) among AI researchers, education specialists, and practitioners may foster innovative approaches to integrating CT and AI literacy, thereby enriching STEM education across diverse learning environments.
4.1.4. Issue 4: Lack of Explicit Interpretation of AI Ethics for Educators
The need for educators to gain a clearer understanding of AI ethics has been increasingly emphasized by researchers and policymakers through various calls to action (e.g., [
36,
150,
225,
226,
227]). These scholarly contributions underscore the importance of ethical considerations in AIED applications and the necessity of equipping future generations with the ability to critically engage with AI. However, despite this growing consensus, practical guidance to support educators in interpreting and integrating AI ethical principles into their teaching remains limited. This gap leaves many teachers without the necessary resources to demystify AI ethics and embed it meaningfully into classroom practices.
A related concern lies in the limited conceptualization of AI ethics within existing AI literacy frameworks. For example, Ng et al.’s [
214] four-dimensional framework, grounded in Bloom’s taxonomy [
215], treats AI ethics as a discrete dimension rather than embedding it across all levels of learning. Such compartmentalization may hinder educators’ ability to integrate ethical considerations holistically. Ideally, AI ethics should permeate every stage of AI education—whether students are learning to understand, use, or create AI systems. When treated as a standalone component, ethical engagement risks being overlooked in curriculum design, a concern echoed by scholars across both K–12 and higher education contexts (e.g., [
150,
228,
229]). These scholars advocate for a more integrated and collaborative approach to teaching AI ethics. In alignment with this view, Touretzky et al.’s [
8] “Five Big Ideas” framework places the societal impact of AI at the center of AI education, encouraging educators to adopt a mindset that foregrounds ethics throughout the learning process.
Another challenge stems from the terminology used in AI education. For instance, Meng-Leong and Hung [
179] introduced the term “AI thinking” in the context of K–12 STEM education, loosely defining it as a logical reasoning process facilitated by data-driven, AI-based tools. While the term aims to highlight collaborative problem-solving in STEM, it may inadvertently confuse educators by obscuring the locus of agency—whether it is the human, the AI, or both. A more precise term, such as ‘AI-mediated thinking,’ may offer greater conceptual clarity. This term could be defined as the process by which humans engage in problem-solving or reasoning, with AI providing data-driven insights, tools, or support to enhance human decision-making. Such a human-centric framing reinforces the role of educators and learners as active agents, with AI serving as a cognitive aid rather than a decision-maker.
Despite its conceptual potential, Meng-Leong and Hung’s [
179] work does not address the ethical implications of student interactions with AI tools during problem-solving or solution development. This omission highlights a broader need for future research to position AI ethics as a core component of all AI-related teaching and learning activities. Cardona et al. [
226] similarly argue that AI ethics should be interwoven throughout the educational process—from curriculum design to classroom implementation. Encouragingly, researchers at MIT have developed “project-based” AI ethics activities for middle school students, embedding ethical reflection into technical lessons [
158]. These activities encourage learners to critique AI systems through an ethical lens and to grapple with the moral dimensions involved in designing and deploying such systems.
Moreover, the ethical complexities of human-AI interactions, particularly in contexts involving children and interactive technologies such as robotics, present unique challenges that warrant deeper investigation. Smakman et al. [
230] emphasize the need to understand these interactions in order to uncover their implications for AI education. Their work suggests that fostering ethical awareness requires more than abstract instruction; it involves engaging students with the lived realities of AI use. Building on this, future research should aim to refine key concepts (e.g., AI literacy, AI-mediated thinking) while developing clear, actionable frameworks that support educators in addressing the ethical dimensions of AI in the classroom.
In summary, the integration of AI ethics into education demands a unified and embedded approach. Treating ethics as an isolated topic risks fragmenting students’ understanding and limiting their capacity to critically evaluate the ethical implications of designing, using, and interacting with AI technologies. Addressing this issue requires clearer conceptual definitions, enhanced educator support, and practical resources that facilitate the teaching of ethics through student-centered methods. While project-based learning has shown promise in this regard, its potential remains underutilized across diverse educational contexts.
Recommendations to Address Research Issue 4
The following specific recommendations are proposed to address Research Issue 4, drawing on the axial codes (R19, R20, R21, R22, R23, and R24) derived from the preceding interpretive synthesis.
First, it is recommended to integrate AI ethics across learning levels (R19), embedding ethical considerations throughout all stages of AI education to ensure that students develop a critical understanding of ethics in relation to the use, design, and implementation of AI technologies. Second, efforts should be made to clarify and refine terminologies (R20), particularly by revising ambiguous terms such as ‘AI thinking’ to emphasize their human-centric nature. For instance, adopting terms like ‘AI-mediated thinking’ may provide greater clarity by highlighting the role of humans as decision-makers and AI as supportive cognitive tools.
Third, the development of practical resources for educators (R21) is essential. Collaborative initiatives between researchers and policymakers should focus on creating accessible resources, guidelines, and PD programs to help educators demystify AI ethics and integrate it effectively into their teaching practices. Fourth, embedding ethics into project-based learning (R22) is advised, building on existing initiatives to incorporate AI ethics into activities that enable students to critically evaluate and construct rudimentary AI systems while addressing ethical considerations.
Fifth, future research should explore the ethical implications of human-AI interactions (R23), particularly by investigating the ethical dimensions of children’s interactions with AI technologies such as robotics. Such research may provide educators with actionable insights for addressing the ethical use of these technologies in classroom settings. Finally, promoting interdisciplinary collaboration (R24) among AI ethicists, education researchers, and practitioners could foster the development of holistic frameworks that integrate ethical considerations into all aspects of AI education.
4.1.5. Issue 5: Limitations of Existing PD Frameworks in AI Teacher Education Research
Within Cluster 4 of the keyword co-occurrence network (see
Figure 4), several PD frameworks have been identified to support educators in preparing for AI teaching and learning (e.g., [
160,
195,
197,
231]). While these initiatives represent meaningful progress, they also exhibit a range of conceptual and practical limitations that warrant further scholarly attention.
One of the most prominent issues lies in the widespread reliance on the
integrative TPACK framework [
232,
233], which has been extensively adopted in teacher technology education. Although TPACK offers a foundational structure, its current applications in AI education often lack the specificity required to address the unique theoretical and applied dimensions of AI. In particular, the dual nature of AI concepts (such as ML) poses challenges for educators, especially those from nontechnical backgrounds [
145,
160,
198]. Understanding ML necessitates both
a priori (theoretical) and
a posteriori (applied) knowledge, yet existing PD frameworks tend to treat these knowledge domains superficially. This limitation may hinder teachers’ ability to grasp the hierarchical complexity of AI concepts.
The hierarchical model of the SOLO taxonomy [
216] offers a potentially valuable lens through which to address this issue. By categorizing understanding into five levels (prestructural, unistructural, multistructural, relational, and extended-abstract), the SOLO taxonomy could support a more structured and differentiated approach to teacher learning. However, this model has yet to be meaningfully integrated into existing PD frameworks for AI education.
A second challenge emerges from the duality of AI education, as articulated in the
Artificial Intelligence and the Future of Teaching and Learning report [
226]. This duality highlights two critical perspectives: (i) AI as a tool to support teaching and learning across all subjects, and (ii) AI as a subject that students must learn about. Accordingly, educators are increasingly expected to both teach AI content and integrate AI tools into their pedagogical practices. This dual role complicates the distinction between Technological Pedagogical Knowledge (TPK) and Technological Content Knowledge (TCK), as their boundaries may blur in practice. Such ambiguity can create confusion for educators attempting to navigate the overlapping demands of AI integration.
In response to this complexity, Celik [
231] proposed an extension of the TPACK framework by introducing “intelligent technology knowledge” (intelligent-TK) and ethical assessment dimensions. While this extension acknowledges the importance of AI ethics, it arguably oversimplifies the pedagogical landscape by prioritizing technological knowledge (TK) over the more nuanced dimensions of TPK and TCK. For instance, Celik posited that teachers equipped with sufficient TK would inherently know how to teach with AI tools. However, this assumption lacks empirical validation and may underestimate the pedagogical intricacies involved in AI integration.
Similarly, Yau et al. [
196] proposed a six-dimensional framework for teacher knowledge in AI education, encompassing: (i) technology bridging, (ii) knowledge delivery, (iii) interest stimulation, (iv) ethics establishment, (v) capability cultivation, and (vi) intellectual development. While this framework offers a broader perspective, several of its dimensions appear to be conceptually overlapping. For example, “ethics establishment” could arguably be embedded within “knowledge delivery,” as ethical considerations should permeate the instructional process. Likewise, the distinction between “capability cultivation” and “intellectual development” may be difficult to operationalize in practice. These overlaps risk creating ambiguity for educators attempting to implement the framework effectively.
Addressing these limitations requires a more nuanced understanding of teacher knowledge in AI education, one that recognizes the multifaceted nature of pedagogical practice and prioritizes student-centered approaches. Vazhayil et al. [
234], the 10th highly co-cited publication, emphasize the importance of collaboration among policymakers, researchers, industry leaders, and educators in enhancing the effectiveness of PD initiatives. Their findings highlight the need to consider contextual variables, such as cultural, social, economic, and technical factors, and to align pedagogical strategies with the specific demands of local educational environments.
In summary, existing PD frameworks for AI teacher education face several interrelated challenges. These include insufficient differentiation of educator understanding, conceptual ambiguity in knowledge dimensions, and limited integration of ethical considerations. While extensions of the TPACK framework offer promising directions, they remain largely untested in empirical contexts. Furthermore, contextual factors are frequently underexplored, diminishing the relevance and applicability of PD initiatives across diverse settings. Finally, a persistent lack of interdisciplinary collaboration continues to inhibit the development of robust, scalable, and context-sensitive AI teacher education programs.
Recommendations to Address Research Issue 5
The following specific recommendations are proposed to address Research Issue 5, drawing on the axial codes (R25, R26, R27, R28, R29, and R30) derived from the preceding interpretive synthesis.
First, it is recommended that future PD frameworks
incorporate hierarchical models of understanding (R25), such as the SOLO taxonomy [
216], in order to account for varying levels of educator understanding. These models may facilitate the structured development of both theoretical and applied AI knowledge within teacher training. Second, researchers should
clarify overlapping knowledge dimensions (R26) by investigating the interplay between TPK and TCK within AI teacher education frameworks. Such clarification may provide educators with more practical guidance for integrating AI into their instructional practices.
Third, it is advisable to
empirically validate framework extensions (R27), such as the proposed “intelligent-TK” [
231], through rigorous empirical testing. This process could ensure that such extensions address the pedagogical complexity of AI education while maintaining conceptual coherence. Fourth,
ethical considerations should be integrated across teacher education (R28), embedding ethics throughout all dimensions of AI teacher education frameworks to ensure that AI ethics is treated as a foundational element rather than a peripheral topic.
Fifth, the design of context-specific PD programs (R29) is essential. Policymakers and researchers should collaborate to develop PD initiatives that account for contextual factors, including cultural diversity and technological constraints, and that are tailored to the specific needs of educators in varied educational environments. Finally, promoting interdisciplinary collaboration (R30) among policymakers, researchers, industry leaders, and educators could foster the development of comprehensive and scalable AI teacher education programs. Such partnerships may also bridge the gap between theoretical models and practical implementation.