Abstract
In today’s digital era, teachers are expected to incorporate artificial intelligence (AI) into the classroom. Teacher educators must therefore model its use while evaluating their own AI-related knowledge to guide future teachers effectively. Existing assessments often rely on self-report questionnaires, which may introduce bias, and on the TPACK (Technological-Pedagogical-Content-Knowledge) framework, which overlooks distinctive AI characteristics. This study develops and validates an AI-TPACK assessment tool for teacher educators, grounded in authentic pedagogy and systematically designed through the ADDIE model (Analysis, Design, Development, Implementation, and Evaluation). The study aims to identify AI-relevant TPACK components and add new ones; test the tool’s validity; and analyze teacher-educator competency patterns. The development involved dual literature reviews (22 TPACK studies; 34 AI studies) and empirical analysis of 60 authentic instructional artifacts. Five experts confirmed the tool’s content validity (CVR = 0.86, CVI = 0.91) and inter-rater reliability (ICC = 0.84, range 0.76–0.88). The tool comprises 4 components (AIK, AIPK, AICK, and Integration), 14 criteria, and 65 indicators, and reveals four competency patterns: technological innovator, pedagogical integrator, content developer, and beginner. The strong correlation (r = 0.78) between AIPK and Integration underscores the importance of synergy. The tool contributes theoretically and practically to advancing teacher educators’ AI knowledge and competency assessments.
1. Introduction
The ongoing revolution of artificial intelligence in education (AIEd) has reshaped society, presenting both opportunities and challenges to education (; ). Among its advantages, the growing availability of relatively low-cost large language models (LLM) supports writing, knowledge generation (), visualization of abstract concepts and intricate subject matter (), and fosters more interactive learning among students (). While the benefits of incorporating AI tools into teaching are evident, simply integrating them into the classroom does not guarantee the quality of education (). To fully harness the potential of AI, educators must purposefully embed it in their instruction, in line with specific learning objectives (). Those responsible for shaping the next generation of educators play a pivotal role in this process. Effective teacher training requires deliberate and systematic design that fosters AI-oriented technological pedagogical content knowledge (AI-TPACK) to enable future teachers to integrate AI tools in their classrooms in an ethical and effective manner ().
Preparing educators in this way is particularly urgent, as the AI era introduces challenges that are both novel and increasingly complex. Unlike traditional digital technologies, AI tools possess autonomous capabilities, machine learning, and dynamic adaptability (). Addressing these demands requires not only technical proficiency, but also a profound understanding of how AI can fundamentally alter both teaching and learning processes (). Within this context, teacher educators serve as primary change agents in the educational ecosystem, thereby requiring a robust framework for understanding and evaluating their personal AI-integration competency. Their role extends beyond modeling effective practices; it also includes preparing future teachers to successfully navigate AI-driven environments, thereby contributing to systemic change across educational levels (; ).
The present study seeks to advance the development and validation of a comprehensive AI-TPACK measurement tool that can be used by teacher educators. Grounded in the lesson planning transformation model () and relevant recommendations for AI-TPACK research (), this paper focuses on the construction of an applicable instrument for facilitating comparative evaluation across the planning, implementation, and reflection stages of AI-integrated teaching. Given the paucity of clear implementations and descriptions of current AI-TPACK assessment tools in practice (), the proposed instrument is designed for varied AI applications and aligned pedagogical approaches.
1.1. From TPACK to AI-TPACK
The TPACK model builds on ’s () seminal concept of pedagogical content knowledge (PCK), which highlights the specialized knowledge that teachers require in order to transform subject matter into comprehensible forms for students.  () extended this framework by adding a technological dimension, yielding seven interrelated domains: (1) technological knowledge (TK)—the ability to understand and apply technology; (2) pedagogical knowledge (PK)—encompassing teaching methods and processes; (3) content knowledge (CK)—representing subject matter expertise; (4) technological content knowledge (TCK)—the interaction between technology and content; (5) technological pedagogical knowledge (TPK)—the manner in which technology supports pedagogy; (6) pedagogical content knowledge (PCK)—transforming content for teaching; and (7) the integrative TPACK—for the effective integration of technology.
Critiques of TPACK have prompted refinements, such as emphasizing contextual factors (e.g., school culture and resources) that shape its application (). Traditional evaluation tools include self-reporting questionnaires, notably ’s () TPACK instrument, which assesses perceptions across the seven domains, alongside qualitative approaches, such as reflections and observations (). However, reliance on these self-assessment methods has been criticized for subjectivity, overemphasis on self-perception, and limited ability to capture authentic classroom practices (; ). Although psychometric evaluations tend to reveal adequate reliability (e.g., Cronbach’s alpha > 0.80), they raise concerns about construct validity across diverse contexts ().
While TPACK provides a solid foundation for digital integration, it falls short with regard to AI, given the latter’s distinct attributes—such as autonomy, adaptability, and ethical complexity (; ). As such, broad models may risk obscuring these idiosyncrasies, underscoring the need for tailored extensions, such as the AI-TPACK (; ).
1.2. AI in Education: Current Trends
Recent syntheses and high-impact studies converge on core pedagogical trends and system priorities in education and teacher preparation, linking classroom practice to institutional readiness and policy needs. The dominant trajectory frames AI as an augmenting force that reshapes learning goals, assessment, and professional development rather than as an outright teacher replacement. Recent systematic and influential papers report a shift toward learning-with-AI that foregrounds higher-order thinking, reconfigured assessment, and integrated AI-ethical literacy in curricula, with growing emphasis on personalized instruction through educational chatbots for interactive support and large-scale adaptive learning systems (e.g., ; ; ). Reviews also document accelerating institutional pressure to embed pre-service and in-service training on AI literacy and prompt engineering while noting limited empirical evaluation and uneven institutional readiness, particularly in preparing educators to navigate intelligent tutoring systems and recommendation systems for resource curation (; ). At the system level, scholars highlight governance concerns—equity, bias, data protection—and recommend new professional standards, assessment models, and co-designed pilots to preserve relational teaching functions while scaling personalization through adaptive analytics for data-informed decision making and supporting multimodal learning environments enabled by large language models (LLMs) and teacher productivity (; ; ; ).
1.3. Teacher Education in the AI Age
Teacher education in the AI Age encounters multifaceted challenges, particularly with the rapid integration of AI technologies. As noted by  (), a significant research imbalance favors AI applications in teaching over professional development, alongside ethical concerns such as privacy breaches and biases. While previous research has advocated for professional development to ensure seamless integration (; ) and highlighted the need for innovative pedagogies (), these models often address technology in a general sense.
A critical gap persists within existing inquiries. While models of professional growth have emphasized TPACK-based approaches (; ) and self-efficacy enhancements (), they overwhelmingly overlook the unique demands of AI—such as algorithmic literacy and ethical integration (). This is particularly evident in the current state of technology integration in teacher education, which remains oriented towards general tools ().
Teacher educators, as pivotal change leaders (), need specialized frameworks to navigate these new challenges. This underscores the urgent need for a framework like AI-TPACK, which is designed to equip them with the tools for cultivating adaptive expertise in AI-enhanced education. Such professional growth depends largely on transforming academic knowledge into practical scripts through iterative cycles of planning, action, and reflection ().
1.4. Extending and Refining the TPACK Model
Beyond the TPACK, several models have sought to capture evolving digital practices. ’s () SAMR model (substitution, augmentation, modification, and redefinition) and ’s () RAT framework (replacement, amplification, transformation) represent early attempts. Technology-specific adaptations include ’s () incorporation of Web 2.0 for social collaborations and ’s () VR-TPACK for immersive experiences. Finally, contextual expansions encompass ’s () cultural TPACK and ’s () leadership-oriented variant.
However, despite these advances, prior extensions have rarely addressed the unique demands of AI. Models such as SAMR and RAT emphasize integration levels, but overlook autonomous learning and ethical implications, often resulting in superficial applications (). Broader frameworks, such as DPACK, account for digital culture—yet risk downplaying AI-specific features, such as algorithmic bias and adaptability (). These gaps highlight the need for a dedicated AI-TPACK extension, grounded in authentic pedagogical artifacts, to capture nuanced integrations beyond self-reports (; ).
1.5. Early Conceptualizations and Empirical Advances
Initial attempts to formulate an AI-TPACK highlight the educational potential of AI.  () proposes a competency framework that emphasizes ethical and pedagogical integration, while  () outline AI literacy for teachers, and  () presents a systematic review of AI implications. Core components emerging from these efforts include AI Knowledge (AIK) for understanding capabilities and limitations, AI Ethics for ensuring fairness and transparency, and AI Pedagogy for effective instructional integration. Empirical studies provide further insights:  (), for example, examine teachers’ sentiments towards AI,  () explore future readiness, and  () investigates skill-building. Collectively, these studies reveal only nascent patterns and underscore the scarcity of validated tools.
In order to refine AI-TPACK both conceptually and empirically,  () introduced a seven-element framework, including AI-technological knowledge (AI-TK) and its intersections (Figure 1), yielding seven interrelated domains: AI-technological knowledge (AI-TK), which involves understanding the core principles and functions of AI technologies; pedagogical knowledge (PK), which encompasses teaching methods and processes, particularly how to integrate AI tools effectively; content knowledge (CK), which represents subject matter expertise and how it intersects with AI applications; AI-technological content knowledge (AI-TCK), which is the ability to integrate specific AI tools to represent particular subject matter; AI-technological pedagogical knowledge (AI-TPK), which is the knowledge of how to use AI tools to support specific teaching methods; pedagogical content knowledge (PCK), which involves the transformation of subject matter for teaching with an emphasis on how AI can assist in this process; and finally, AI-TPACK, the overarching integrative knowledge required to effectively use AI to enhance both teaching and learning of specific content.
      
    
Figure 1. AI-TPACK structural diagram by ().
The researchers found predictive relationships among the components, with technical knowledge exerting the strongest influence. Similarly,  () adapted the TPACK for generative AI, underscoring its highly adaptable and social dimensions. Empirical validations have further strengthened the AI-TPACK, though notable methodological gaps remain.  () developed the Intelligent TPACK framework and validated it through structural equation modeling with 312 faculty members. Their analysis shows that AI literacy significantly predicts AI acceptance (β = 0.43, p < 0.001) with the model explaining 67% of the variance. Finally,  () validated a five-factor AI competency model (teacher professional engagement, instructional design, content choices, learning competencies, and facilitating learners’ AI competency) using confirmatory factor analysis (χ2 = 234.56, df = 125, CFI = 0.95). Their findings established strong convergent validity with 21st-century skills (r = 0.72, p < 0.001).
However, these empirical advances predominantly rely on self-reporting methodologies, exposing a critical gap in the field of study-unit assessments.  () validated the AI-TPACK scale in Indian settings, showing that cultural factors necessitate adaptation (RMSEA = 0.068, CFI = 0.93). Building on this,  () demonstrate that AI competency positively predicts TPACK, which in turn mediates teaching performance. Yet a fundamental limitation persists: the absence of validated instruments grounded in authentic pedagogical artifacts. This gap underscores the need for artifact-based assessment tools that can evaluate the AI-TPACK as it emerges in real educational settings, such as lesson plans, instructional materials, and reflective practices.
1.6. Assessing and Developing AI-TPACK
To measure TPACK, self-reporting surveys are often used (; ). However, despite their benefits, such methods are prone to biases, such as overestimations within emerging domains () and a failure to differentiate between technologies, which inadequately addresses the dynamic nature of AI (). Critiques also emphasize the discrepancies between self-reports and actual practices (), highlighting the need for more authentic, artifact-based assessments that draw on lesson plans and instructional materials (; ).
Building on these concerns, psychometric principles remain essential, including content validity, construct validity, and reliability (). For AI-TPACK, artifact-based tools could offer a more authentic evaluation that captures ethical, contextual, and adaptive dimensions (). Although recent validations, such as ’s () model (CFI = 0.94), underscore the mediating role of trust, instruments that are grounded in authentic artifacts remain scarce. This highlights the need for hybrid approaches that combine psychometric rigor with real-world evidence.
Beyond measurement, the cultivation of AI-TPACK is equally significant. Professional development initiatives draw on established models such as ’s () attitudinal change model, addressing barriers through sustained training () and professional communities (). In AI contexts, competencies range from basic tool use to advanced ethical integration (). Clustering analyses reveal a range of profiles, from low to high integrators, shaped by self-efficacy and pedagogical beliefs ().
Emerging evidence also reveals discipline-related variations; among math teachers in China, AI-TPACK remains at the preliminary stage, with self-efficacy supporting progress but traditional beliefs hindering it (). Studies on generative AI integration highlight four user types—from cautious adapters to pedagogical innovators—and propose the PSAMR (including prohibition) as an extension of the SAMR (). Systematic reviews of AI-related professional development demonstrate that integrating TPACK into workshop designs and other settings enhances innovation but requires contextual adaptation (). Finally, institutional support and ethical considerations further shape these patterns, emphasizing the need for tailored programs for diverse profiles.
To examine these developments rigorously, a range of research methodologies have been employed. Quantitative methodologies, including surveys for tool validation, longitudinal tracking of development, and comparative analyses across groups, provide measurable insights into AI-TPACK evolution. Qualitative approaches, such as ethnographic observations in training settings, action research that involves teachers as investigators, and case studies of instructional units, provide contextual depth (). Finally, mixed methods, such as ’s () design-based research, content analysis of curricula, and social network examinations of collaborations, combine these strengths to enable holistic inquiry. These methodologies all play a key role in capturing the complexities of the AI-TPACK, enabling rigorous exploration of knowledge interplay, while informing evidence-based extensions. For tool validation, structural equation modeling ensures psychometric robustness (), while artifact analysis supports authenticity ().
In conclusion, research has established the TPACK model as a widely recognized framework for technology integration () while highlighting the educational promise of AI () and stimulating discourse on teacher training in the AI era (). Yet theoretical gaps persist, including the absence of a fully articulated AI-TPACK model grounded in authentic outputs. Methodologically, validated artifact-based instruments remain scarce, and practical shortcomings persist, such as the neglect of disciplinary and cultural variations (). Cross-cultural validation is also limited, with research largely confined to Western settings, underscoring the need for context-sensitive adaptations.
2. Research Aims and Questions
This study advances an empirically grounded AI-TPACK framework through the development and validation of a comprehensive AI-TPACK assessment instrument based on authentic pedagogical artifacts from teacher educators. The objectives are threefold: to define and map the AI-TPACK components, establish a reliable measurement tool, and identify competency patterns that can guide professional development in teacher education programs.
The following three research questions were defined:
What components of AI-TPACK emerge within the authentic pedagogical outputs provided by teacher educators, and how can these elements be defined and mapped?
To what extent does the developed tool provide a valid and reliable measure of the theoretical AI-TPACK construct?
What levels of AI-TPACK can be identified among teacher educators’ artifacts, and what characterizes each level?
Addressing these questions could provide a novel framework for refining curricula, enhancing assessment practices, and informing policies for AI integration in teacher education programs, ultimately fostering systemic improvements across educational settings.
3. Methodology
This study employed a systematic multi-phase approach based on the ADDIE (Analysis, Design, Development, Implementation, Evaluation) model to develop and validate a comprehensive AI-TPACK assessment instrument. This model, widely used in instructional design (), provides a structured methodology for creating research-based educational tools. The ADDIE cycle, particularly the continuous feedback loop between Development and Evaluation, provides the robust methodological rigor necessary to iteratively build, refine, and empirically validate a complex, multi-dimensional assessment framework. The following are details of the five-phase process: conceptual grounding, literature review, instrument development, implementation, and evaluation.
3.1. Phase 1: Conceptual Grounding—Defining Purpose and Theoretical Foundation
The first phase focused on establishing the instrument’s purpose and theoretical foundation. The application context was defined, and appropriate theoretical models were selected to ensure both conceptual and practical relevance. Grounded in the TPACK framework, the instrument was designed around three core dimensions: AI knowledge (AIK), AI pedagogical knowledge (AIPK), and AI content knowledge (AICK). These dimensions were elaborated into distinct AI-TPACK components to capture comprehensive and precise insights into the educational context.
3.2. Phase 2: Literature Integration—Building the Evidence Base
The second phase of the process empirically justified the theoretical choices and established an empirical basis for the instrument. Two systematic reviews were conducted. First, existing TPACK measurement instruments were examined. Studies were included if they used, described, or developed instruments for measuring TPACK among pre-service or in-service teacher educators. The search strategy combined AI-powered platforms, such as SciSpace, with established academic databases (e.g., Web of Science, Scopus, ERIC, Google Scholar) using the primary search query “TPACK” AND (“instrument” OR “rubric” OR “assessment”) AND (“validation” OR “development”) AND “teacher”. Complementary searches of gray literature and targeted themes (e.g., educational technology and teacher training) expanded the coverage. This process led to the identification of 505 records. Following systematic screening and eligibility assessments, 25 studies were included in the qualitative synthesis—comprising 16 journal articles, 6 conference papers, and 3 book chapters (Figure 2).
      
    
Figure 2. PRISMA flow diagram: TPACK measurement instruments literature review.
Next, as the second part of phase 2, a targeted systematic literature review was conducted with the aim of identifying methodologically rigorous studies on quality frameworks, pedagogical effectiveness, teacher competency development, and implementation assessments—all in the context of AI in teacher education. The search strategy was designed to capture studies that employ validated frameworks or instruments. This search yielded 294 studies, all published between 2020 and 2024, addressing topics such as framework validation, pedagogical effectiveness, longitudinal and multi-institutional assessment, and teacher competence development. Additional targeted search sets identified 111 studies on the impact of AI on assessment methodologies and pedagogical evaluation and 32 studies on innovative assessment strategies in AI-enhanced teacher education. Finally, 73 records (55 citations and 18 references) complemented this process, bringing the total to 510.
Strict eligibility criteria were applied. Studies were required to demonstrate methodological rigor, employ validated frameworks or instruments, and address at least one of the following topics: development and validation of quality frameworks, assessment of pedagogical effectiveness, evaluation of teacher competencies, or implementation supported by empirical evidence.
Following the removal of duplicate items, 396 records were screened by title and abstract. Of these, 282 were excluded for insufficient methodological or empirical quality. The remaining 114 full-text articles were then reviewed in detail by the authors, resulting in the exclusion of another 96 studies, due to the lack of a validated framework (n = 35), insufficient empirical assessment (n = 30), and inadequate quality criteria (n = 31). The final synthesis comprised 18 studies that met all inclusion criteria, representing rigorous and empirically grounded research on frameworks for AI integration in teacher education (Figure 3).
      
    
Figure 3. PRISMA flow diagram: AI teacher education quality frameworks literature review.
3.3. Phase 3: Instrument Development—Constructing the Assessment Tool
The third phase focused on the initial instrument development. A comprehensive synthesis of 43 studies revealed well-established assessment approaches for TPACK measurement, consistently supported by validation evidence and reliability standards. From this review, 37 assessment criteria were extracted and adapted to the AI-TPACK framework, ensuring coverage of its three core dimensions.
The instrument was then refined through a rigorous content validation process. An expert panel of five specialists evaluated content validity using Lawshe’s method, yielding strong results (CVR = 0.86 and CVI = 0.91). All panel members held leadership roles in digital innovation units within teacher education settings, had at least five years’ practical experience in technology integration, demonstrated expertise in training educators in educational technologies, had an academic background in education or learning technologies, and were highly familiar with AI implementation in teacher education (Table 1). Each expert independently rated item relevance, from 1 (not relevant) to 4 (highly relevant), and item clarity, from 1 (unclear) to 4 (very clear), via an online survey. They then participated in a 90 min structured group discussion via the Zoom platform to resolve rating disagreements and finalize any necessary revisions. CVR was calculated using Lawshe’s formula. According to ’s () original table, the minimum threshold for a 5-expert panel is 0.99 at p < 0.05 (one-tailed test); however, recalculations by  () suggest 1.00 due to exact binomial probabilities. Given the small panel size and practical considerations, we adopted a flexible approach, retaining items with CVR ≥ 0.60 when supported by high mean relevance ratings (M ≥ 3.6) and theoretical relevance, in line with common adaptations in validation studies. CVI was computed as the proportion of items that were rated 3 or 4 by at least 80% of the experts. Items with a CVI lower than 0.80 were revised.
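To make the content-validation computation concrete, the sketch below shows how CVR and CVI values of the kind reported above could be derived from a 5-expert rating matrix. The ratings and item set are hypothetical illustrations, not the study’s data; the formulas follow Lawshe’s CVR and the CVI rule described in this section.

```python
# Minimal sketch of Lawshe's CVR and the study's CVI rule for a 5-expert panel.
# The ratings matrix below is hypothetical; in the study, each expert rated item
# relevance on a 1 (not relevant) to 4 (highly relevant) scale.
import numpy as np

ratings = np.array([   # rows = experts (N = 5), columns = items
    [4, 4, 3, 4],
    [4, 3, 4, 3],
    [3, 4, 4, 4],
    [4, 4, 3, 4],
    [4, 3, 4, 3],
])

N = ratings.shape[0]

# CVR per item: (n_e - N/2) / (N/2), where n_e is the number of experts who
# rated the item as relevant (operationalized here as a rating of 3 or 4).
n_e = (ratings >= 3).sum(axis=0)
cvr = (n_e - N / 2) / (N / 2)

# CVI as defined in the text: the proportion of items rated 3 or 4 by at
# least 80% of the experts.
item_endorsed = (n_e / N) >= 0.80
cvi = item_endorsed.mean()

print("CVR per item:", cvr)        # all 1.0 for this toy matrix
print("Mean CVR:", cvr.mean())
print("CVI:", cvi)
```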
       
    
Table 1. Expert panel demographics.
3.4. Phase 4: Implementation—Pilot Testing and Refinement
The fourth stage, implementation, involved applying the novel instrument to 60 authentic artifacts collected between 2023 and 2025 during three cycles of a 180 h course for teacher educators entitled “Digital learning design and innovation in academia” at the [name removed for blind review] R&D institution for teacher educators. The 60 participating teacher educators comprised 48 academic faculty members and 12 teachers’ R&D staff, representing 15 different institutions from different regions across the country and various sectors.
The artifacts included lesson plans, instructional units, and reflective accounts drawn from diverse contexts, including special education (40%), arts and education (25%), values education (20%), and techno-pedagogy (15%). Beyond serving as descriptive products, these artifacts embodied authentic, process-oriented evidence of teaching design and reflective inquiry. They captured the integration of digital tools and platforms—such as Canva, Base44, and Nearpod—used for documenting, analyzing, and sharing pedagogical practices. Each artifact represented a dynamic synthesis of AI-TPACK dimensions, demonstrating how technological innovation was intentionally aligned with pedagogical reasoning and adaptive content design, following frameworks such as Universal Design for Learning (UDL) and reflective practice. This ensured representation across high levels of AI-TPACK knowledge and skills and enabled the instrument to be validated against a wide range of integration practices. To maintain methodological continuity, the same panel of five experts applied the instrument to the authentic artifacts.
As mentioned, the literature review yielded 37 theoretical criteria for assessing TPACK competency; yet the empirical analysis of the 60 artifacts revealed gaps that required the addition of four dimensions: (1) specific technological proficiency levels, (2) performance-based student engagement measures, (3) structured critical reflection, and (4) artifact-based innovation measures. This finding underscores the importance of authentic artifact-based validation for developing practical measurement tools.
3.5. Phase 5: Evaluation—Validation and Finalization
The fifth and final phase of the study involved evaluation and validation. Inter-rater reliability was assessed by randomly selecting 20 (33%) of the artifacts for independent analysis by the five experts. Inter-rater reliability was calculated using a two-way mixed-effects model for absolute agreement of average measures (ICC[3,k]), as recommended for assessment instruments (), supplemented by Cohen’s Kappa for ordinal-level agreement between raters. Statistical analysis was conducted using the SPSS software V.30 (IBM Corp., Armonk, NY, USA). Reliability and validity were assessed across several dimensions: content validity (CVR and CVI calculations following Lawshe’s method); inter-rater reliability; and intraclass correlation coefficient (ICC) with 95% confidence intervals. For CVR, the original threshold for a 5-expert panel is 0.99 (or 1.00 per  recalculations); items slightly below this (e.g., CVR = 0.60) were retained based on high mean ratings (M = 3.6), theoretical importance, and expert discussion. CVI was computed as the proportion of items rated 3 or 4 by ≥80% of experts, with revisions for those below 0.80. Internal consistency was not assessed, as the tool focuses on observable behaviors rather than latent constructs. These procedures confirmed the instrument’s robustness for capturing AI-TPACK in authentic contexts.
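As an illustration of the reliability analysis described above, the sketch below shows how an average-measures ICC and a pairwise weighted Cohen’s kappa could be computed outside SPSS. The file name, column names, and long-format layout are assumptions made for the example; the study itself used SPSS V.30 and reports kappa across the full panel.

```python
# Sketch of the inter-rater reliability analysis (ICC and Cohen's kappa),
# assuming a long-format table with one row per (artifact, rater) score.
# File and column names are hypothetical.
import pandas as pd
import pingouin as pg
from sklearn.metrics import cohen_kappa_score

scores = pd.read_csv("artifact_scores_long.csv")  # columns: artifact, rater, score

# Two-way model, average-measures ICC across the k raters.
icc = pg.intraclass_corr(data=scores, targets="artifact",
                         raters="rater", ratings="score")
print(icc[icc["Type"] == "ICC3k"])  # average measures, fixed raters

# Cohen's kappa for one pair of raters on the ordinal 1-4 scores
# (a weighted kappa is often preferred for ordinal scales).
wide = scores.pivot(index="artifact", columns="rater", values="score")
kappa = cohen_kappa_score(wide.iloc[:, 0], wide.iloc[:, 1], weights="quadratic")
print("Weighted kappa (raters 1 vs 2):", round(kappa, 3))
```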
The validation process resulted in a comprehensive AI-TPACK assessment tool comprising 14 criteria and 65 observable indicators. The instrument supports both quantitative scoring (on a 1–4 scale) and qualitative analysis, enabling the identification of AI integration patterns within authentic pedagogical artifacts.
Ethical approval was obtained from the Ethics Committee at the authors’ affiliated institution [MOFET R&D Institution]. Informed consent was received from all participants. Data were anonymized using unique identifier codes, with linking information stored separately in password-protected files. All sensitive data were saved in encrypted, secured servers, compliant with institutional data protection protocols.
4. Findings
4.1. Instrument Purpose and Theoretical Foundation
The instrument presented in this paper was designed to measure the AI-TPACK of teacher educators who integrate AI into their teaching through written deliverables (planning), documented learning units (implementation), and written reflections. To address the situational and contextual nature of the AI-TPACK, the tool specifies three main components: (1) AI knowledge (AIK); (2) AI pedagogical knowledge (AIPK); and (3) AI content knowledge (AICK). Within this structure, AIPK, AICK, and AI-TPACK were chosen as the primary levels of assessment, while the AIK component and integration served as foundational elements. The traditional TPACK knowledge components (TK, PK, and CK) were excluded from the instrument, since the AI-related elements require separate assessment measurements.
To substantiate these components, established theoretical models were incorporated. The SAMR model (, ) was used to inform the AIPK by defining four levels of technology integration (substitution, augmentation, modification, and redefinition). The dual coding theory () was used to support the AICK dimension, highlighting how technology influences knowledge processing. Finally, Schön’s model of reflection (1983)—which distinguishes between technical, practical, and critical reflection—was used to evaluate the depth of reflective practice.
4.2. Systematic Literature Review
To establish a robust foundation for the instrument, two systematic literature reviews were conducted. The first examined existing TPACK measurement tools to identify their strengths and limitations, while the second focused on AI integration in teacher education to determine key pedagogical criteria and competencies. Our analysis of existing tools revealed a significant gap: none address the unique characteristics of AI technologies. Although they do address digital technologies, they lack criteria for assessing the pedagogical, content, and ethical implications of advanced AI tools. This limitation underscores the need for a new, AI-specific instrument. The second review, which synthesized findings from 34 high-quality studies, identified key criteria for effective AI integration in teacher training. These criteria were categorized based on the conceptual framework presented above, providing the empirical basis for the instrument’s components and emphasizing the importance of moving beyond general technology use to target AI-specific competencies, such as language processing tools and generative AI. The complete set of criteria and their sources, derived from both reviews, are presented in Table 2.
       
    
Table 2. Summary of assessment criteria from the literature review (n = 37).
This summary table provides a concise overview of the conceptual framework. The full, detailed list of criteria, including the “Description” column and all 65 observable indicators, is presented in Appendix A for complete methodological transparency.
4.3. Initial Instrument Development and Content Validity Assessments
The initial version of the instrument was developed by synthesizing findings from the two literature reviews. This process yielded 37 assessment criteria, ensuring the tool’s relevance to the specific AI context. These criteria were operationalized into observable indicators to create a four-level scoring system. To ensure consistency and objectivity, a coding manual was drafted with explicit examples for each indicator; moreover, a weighting system was applied based on the frequency of each indicator in the literature—an approach that focused on a variety of AI tools in teacher education and provided a clear contextual focus for the AI-TPACK components from the outset.
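The frequency-based weighting described above can be illustrated with a minimal sketch: each indicator’s weight is set proportional to how often it appeared in the reviewed literature. The counts below are hypothetical and only demonstrate the normalization logic; the criterion labels are taken from Table A1.

```python
# Sketch of frequency-based weighting: weight = literature count / total count.
# The counts are hypothetical illustrations, not the study's actual data.
literature_counts = {
    "ATK-1 Understanding of AI Technologies": 18,
    "APK-1 Personalized Learning Implementation": 12,
    "ACK-3 Quality Content Creation": 9,
    "AIT-1 Holistic Lesson Planning": 15,
}

total = sum(literature_counts.values())
weights = {item: count / total for item, count in literature_counts.items()}

for item, weight in weights.items():
    print(f"{item}: {weight:.1%}")
```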
The instrument was further refined through rigorous content validation by five experts in AI and teacher education. As mentioned, the process yielded strong results, with a CVR of 0.86 (above the required threshold of 0.62) and a CVI of 0.91 (also above the required threshold of 0.78). Combined with overall expert agreement of 94%, these outcomes confirm the instrument’s strong content validity. In addition, the experts emphasized the need to align the tool with evolving digital realities in classrooms. They affirmed its strong relevance to the authentic artifacts and recommended enhancing its focus on accessibility, equity in AI use, and the cultivation of critical thinking towards AI technologies. Based on this feedback, revisions were made to the instrument, including refining definitions in phases 2–3 (particularly within the AIPK component), adding concrete examples from expert practice, simplifying some terminology to minimize technological jargon, and incorporating new AI tools that emerged during the research period.
4.4. Field-Testing and Final Instrument Development
The validation process progressed from expert review to application with 60 authentic artifacts that were produced by 60 teacher educators. Analyzing real pedagogical materials revealed gaps that had been overlooked in the literature, leading to the addition of new practice-driven criteria. As a result, the initial 37 criteria were revised into a refined, final version. The completed tool consists of 14 criteria organized under 4 primary components and 3 supporting measures, encompassing 65 observable indicators. These revisions were substantiated through expert ratings. Averages and CVR values for each item are presented in Figure 4.
      
    
Figure 4. Content Validity Ratio (CVR) per item, based on expert ratings.
The bar chart in Figure 4 illustrates the Content Validity Ratio (CVR) for each of the 13 core assessment items, as rated by the panel of five experts (N = 5). The red dashed line denotes a practical CVR threshold of 0.60 used in this study for retention, although ’s () original table specifies 0.99 for a five-expert panel. Items below 0.99 but above 0.60 (e.g., ‘Advanced technical integration’ and ‘Synergy between components’) were retained based on their high average rating (M = 3.6) and essential theoretical contribution to the AI-TPACK construct, as recommended in recalibrations by  (). The overall content validity analysis yielded strong results for the entire instrument, with an average CVR = 0.86 and a Content Validity Index (CVI) = 0.91. These values significantly exceed the minimum required thresholds (minimum CVR = 0.62; minimum CVI = 0.78) and confirm the tool’s robust content validity. Overall agreement among the experts stood at 94% (items approved without changes).
Note on scale: item relevance was rated from 1 (not relevant) to 4 (highly relevant); the rationale for retaining the two items with CVR = 0.60 is described above.
The pie chart in Figure 5 illustrates the normalized weight distribution of the final AI-TPACK assessment instrument, based on the fundamental 70%/30% theoretical split (Core Components/Supporting Measures). The overall scoring model allocates 70% of the total score to the four core AI-TPACK knowledge components (AIK, AIPK, AICK, Integration), and 30% to the three crucial supporting measures (Reflection, Engagement, and Innovation). This distribution highlights two key structural findings: (1) The Supporting Measures collectively constitute the largest single segment, emphasizing the tool’s focus on practical, non-technical aspects like reflection and innovation. (2) Among the Core Components, AI-Pedagogical Knowledge (AIPK) carries the greatest proportional weight, underscoring the priority of pedagogical application over pure technical knowledge in the instrument’s design. The full breakdown of the 14 criteria, including the final normalized weight percentage for each item, is detailed in the assessment rubric located in Appendix B.
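To illustrate the 70%/30% scoring logic described above, the sketch below computes a composite score from component means on the 1–4 scale. For simplicity it assumes equal weights within each group, whereas the actual rubric in Appendix B assigns differential weights per criterion; the component scores are hypothetical.

```python
# Sketch of the composite scoring logic: core components carry 70% of the
# total score and supporting measures 30%, with each criterion scored 1-4.
# The per-component scores below are hypothetical, and equal within-group
# weighting is an assumption (the rubric in Appendix B weights criteria
# differentially, with AIPK carrying the greatest core weight).
core = {        # four core AI-TPACK components, mean scores (1-4 scale)
    "AIK": 2.6, "AIPK": 3.1, "AICK": 2.8, "Integration": 2.3,
}
supporting = {  # three supporting measures
    "Reflection": 2.9, "Engagement": 3.0, "Innovation": 2.5,
}

def group_mean(scores: dict) -> float:
    """Unweighted mean of a group of component scores."""
    return sum(scores.values()) / len(scores)

composite = 0.70 * group_mean(core) + 0.30 * group_mean(supporting)
print(f"Composite AI-TPACK score: {composite:.2f} (scale 1-4)")
```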
      
    
Figure 5. Weight distribution of AI-TPACK assessment tool components (normalized).
The empirical analysis also informed the refinement of the instrument’s scoring system. Indicators were re-evaluated and their weights adjusted according to their demonstrated impact on teaching quality via the authentic artifacts. Additional indicators were inductively derived from these materials. For instance, new criteria were added to distinguish between essential and non-essential AI tools and to assess the alignment of time allocation with learning objectives.
This iterative process—combining theory-based development, expert validation, and empirical field-testing—ensured that the final instrument was both theoretically sound and practically applicable. It provided a valid and reliable tool for measuring AI-TPACK among teacher educators—through both quantitative scoring (on a 1–4 scale) and qualitative analysis of AI integration patterns in authentic pedagogical artifacts.
Figure 5 presents the final structure of the AI-TPACK assessment tool, refined through a rigorous two-phase validation process. The tool comprises 14 criteria and 65 observable indicators, enabling dual scoring and analysis. The weighting system allocates 70% of the total score to the four core AI-TPACK components and 30% to the three supporting measures, reflecting their relative importance in evaluating practical AI integration in education.
4.5. Inter-Rater Reliability
The instrument’s reliability was tested on a random subset of 20 (33%) of the artifacts, independently rated by the same five experts from the validation phase. Results indicated substantial agreement among raters, with Cohen’s Kappa = 0.704 () and an overall ICC = 0.84, indicating very good reliability (). As shown in Figure 6, the highest agreement was for AIK (ICC = 0.88, excellent) and the lowest for Integration (ICC = 0.76, good). Moreover, all components exceeded the accepted reliability threshold (>0.75), confirming the robustness of the tool across domains and providing a solid foundation for identifying competency patterns in teacher educators’ AI-TPACK.
      
    
Figure 6. Inter-rater reliability (ICC) per component.
4.6. Statistical Findings and AI-TPACK Patterns
The analysis of 60 pedagogical artifacts, all provided by teacher educators, revealed a comprehensive picture of their skill levels and patterns of AI integration in their teaching. The overall mean AI-TPACK score was M = 2.55 (out of 4; SD = 0.73), indicating a moderate level of competence across the sample. Distribution by competency level showed that 5% (n = 3) were at the basic level (M = 1.25); 38% (n = 23) were at the intermediate level (M = 2.15); 45% (n = 27) were at the intermediate–high level (M = 2.95); and 12% (n = 7) were at the advanced level (M = 3.65). This indicates that most teacher educators demonstrate intermediate-to-high levels of AI competency, suggesting a solid foundation with room for further development. Figure 7 presents a detailed breakdown of the descriptive statistics for each AI-TPACK component, highlighting variations across AIK, AIPK, and AICK.
      
    
Figure 7. Mean competency score per AI-TPACK component.
Figure 7 illustrates the average competency scores (mean) across the seven AI-TPACK components. The red dashed line represents the overall sample mean (M = 2.55). The analysis reveals significant variations in competence:
Highest Competency: The highest mean scores were recorded for Engagement (M = 2.89) and AICK (M = 2.78), suggesting teacher educators are most proficient in using AI to enrich content and involve students.
Lowest Competency: The lowest mean score was for Innovation (M = 2.23), followed by Integration (M = 2.34), highlighting a need for professional development focused on creating novel and holistic AI-based solutions.
Overall Status: Most components fell within the intermediate–high range, supporting the overall finding of a moderate level of AI-TPACK competence in the sample (overall M = 2.55).
4.7. Identified Integration Patterns
The qualitative analysis revealed four distinct patterns of AI-TPACK integration among the participating teacher educators. First, the technological innovators (15%) exhibited advanced AIK (M ≥ 3.5) and moderate AIPK (M = 2.5–3.0), demonstrating advanced technical skills and the ability to develop original AI solutions. Next, the pedagogical integrators (35%) exhibited high AIPK (M ≥ 3.0) and moderate AIK (M = 2.0–3.0), demonstrating the ability to adapt AI tools to enhance existing teaching methods. The content developers (30%) exhibited high AICK (M ≥ 3.0), prioritizing the use of AI to generate and update subject-specific learning materials. Finally, the beginners (20%) scored low to intermediate across all components (M = 1.5–2.5) and tended to use only basic, widely available AI tools, typically applied at a substitutional level.
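These four patterns can be summarized as a simple rule-based profile assignment, sketched below using the component-mean thresholds reported above. The rule ordering and the handling of overlapping profiles are assumptions; the study’s classification was qualitative, so this is only an illustrative approximation.

```python
# Sketch of a rule-based assignment to the four integration patterns,
# using the component-mean thresholds from the text. Rule ordering and
# tie-breaking are assumptions made for illustration.
def classify_pattern(aik: float, aipk: float, aick: float, overall: float) -> str:
    if aik >= 3.5 and 2.5 <= aipk <= 3.0:
        return "technological innovator"
    if aipk >= 3.0 and 2.0 <= aik <= 3.0:
        return "pedagogical integrator"
    if aick >= 3.0:
        return "content developer"
    if 1.5 <= overall <= 2.5:
        return "beginner"
    return "unclassified"

print(classify_pattern(aik=3.7, aipk=2.8, aick=2.6, overall=3.0))  # technological innovator
print(classify_pattern(aik=2.4, aipk=3.2, aick=2.9, overall=2.8))  # pedagogical integrator
```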
The analysis also revealed significant correlations between the tool’s components (Figure 8). The strongest correlation (r = 0.78) was seen between AIPK and Integration, indicating that effective AI integration is heavily dependent on a robust pedagogical understanding of the technology. These findings underscore the persistent gap between technical expertise and pedagogical implementation.
      
    
Figure 8. Correlation of components with integration (INT).
The correlation analysis was conducted using Pearson’s correlation coefficient (r) to examine the linear relationships between the AI-TPACK components. This parametric test was justified because the data analyzed were the composite mean scores for each component (e.g., AIK, AIPK, Integration). Since these scores are calculated as the average of multiple ordinal indicators, they can reasonably be treated as continuous, approximating interval-level data, which is standard psychometric practice for correlational analysis between composite factor scores.
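For illustration, the correlational procedure described above could be reproduced as follows, assuming one composite mean score per component for each artifact; the file and column names are hypothetical, with INT standing for the Integration component as in Figure 8.

```python
# Sketch of the Pearson correlations between composite component scores,
# assuming one mean score per component for each of the 60 artifacts.
# The data file and column names are hypothetical.
import pandas as pd
from scipy.stats import pearsonr

components = pd.read_csv("component_means.csv")  # columns: AIK, AIPK, AICK, INT, ...

r, p = pearsonr(components["AIPK"], components["INT"])
print(f"AIPK-Integration: r = {r:.2f}, p = {p:.3f}")

r, p = pearsonr(components["AIK"], components["INT"])
print(f"AIK-Integration:  r = {r:.2f}, p = {p:.3f}")
```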
Figure 8 visually confirms the strongest correlation found in the analysis, between AIPK (AI pedagogical knowledge) and Integration (r = 0.78). This strong relationship, highlighted by the fact that AIPK is the only component to clearly surpass the r = 0.70 threshold, supports the theoretical claim that pedagogical understanding is the central driver of effective AI integration. Conversely, the technical component (AIK) exhibits a markedly weaker correlation (r = 0.48), underscoring the gap between technical expertise and meaningful implementation.
Overall, the comprehensive analysis of 60 authentic artifacts confirmed the validity and reliability of the innovative AI-TPACK assessment tool presented in this paper. The results indicate a moderate level of AI-TPACK competency among teacher educators and reveal distinct integration patterns. Strong inter-component correlations further highlight the central role of pedagogical knowledge and reflective practices for successful AI integration. The tool thus provides a solid foundation for the objective and systematic measurement of AI-TPACK competency among teacher educators—enabling the identification of personal strengths and weaknesses, to guide future professional development.
5. Discussion
This study aimed to develop and validate a comprehensive tool for measuring AI-TPACK proficiency through the analysis of authentic pedagogical outputs of teacher educators. The findings demonstrate that the proposed tool is both valid and reliable, while providing a nuanced picture of teacher educators’ AI competencies and the patterns that characterize their integration of AI into teaching. Interpreting these results in line with the three research questions highlights several key contributions to both theory and practice.
The first question focused on the components of AI-TPACK as manifested in authentic teacher-educator artifacts. The study provides a clear response to this question by defining an assessment tool that extends the classic TPACK (), to capture the unique affordances and challenges of AI. Unlike existing tools, which often remain generic and overlook the discrete features of AI (; ), the present instrument incorporates specific knowledge components—AIK, AIPK, and AICK—tailored to the pedagogical use of AI. Developed iteratively through literature analysis and empirical materials from the field, the final structure of 14 criteria and 65 indicators underscores the necessity of grounding theoretical constructs in authentic practice—thereby echoing similar calls in the literature (; ; ; ).
The second research question addressed the tool’s methodological rigor. A panel of five specialists confirmed strong content validity, while inter-rater reliability measures provided evidence of component-level reliability. These results address concerns about the limitations of existing TPACK tools with regard to AI (; ). Moreover, by grounding measurements in lesson plans, reflections, and other teacher-educator outputs—rather than relying on self-reports—the tool aligns with recommendations for artifact-based assessment tools (; ; ; ).
The third research question explored patterns and proficiency levels among teacher educators. The findings reveal moderate competency levels, but substantial variation across individuals. Four integration patterns emerged—technological innovator, pedagogical integrator, content developer, and beginner—mirroring typologies from previous studies on technology adoption (e.g., ; ; ; ). The distribution of these patterns suggests that while many teacher educators are capable of integrating AI meaningfully, their strengths are unevenly distributed, reinforcing the importance of tailored professional development programs ().
Notably, the strong correlation between AIPK and Integration highlights pedagogy as the central driver of successful AI use. This supports theoretical models such as SAMR () and affirms that transformative integration (e.g., modification and redefinition) depends less on technical knowledge (AIK) and more on pedagogical vision and application (; ). This insight provides theoretical validation for the TPACK model () and suggests that successful integration requires synergy among the various components. Additionally, the overemphasis on isolated components, particularly technical knowledge, at the expense of pedagogy underscores a recurring gap noted in recent reviews (; ).
Taken together, these findings confirm the tool’s theoretical contribution by extending TPACK to AI contexts, demonstrate its reliability and validity through empirical testing, and provide insights into the heterogeneous ways teacher educators approach AI integration. They also emphasize that professional development must move beyond generic training to address specific competency profiles and foster synergy across knowledge domains, thereby preparing teacher educators to act as effective change leaders in the era of AI.
5.1. Practical Implications
This study carries important implications regarding professional development programs for teacher educators. The validated tool enables the identification of AI-TPACK profiles, supporting teacher-training planners—in line with literature on the role of teacher educators as change leaders (; ). Rather than offering generic training, the tool allows for targeted guidance. For example, technological innovators may benefit from greater emphasis on AI pedagogy, while pedagogical integrators likely require training on more advanced AI tools (; ). It also functions as an assessment tool, providing ongoing evaluation and actionable feedback (; ). Furthermore, the findings highlight the need for curriculum design that integrates content knowledge and holistic AI integration into technology courses—thereby avoiding a narrow technical focus (; ). The instrument’s reliance on authentic artifacts also makes it highly adaptable; researchers can easily modify the specific criteria to align with different national curriculum standards or institutional contexts without compromising the integrity of the overall AI-TPACK framework.
5.2. Study Limitations and Future Research
This study has several limitations that should be acknowledged. First, the sample, drawn from teacher educators who completed an extensive training course, may not be representative of the broader population. Their advanced outputs may reflect a more ideal state of AI-TPACK proficiency. Moreover, the reliance on written outputs alone may not fully capture the complexity of real-world classroom implementation, a limitation noted in previous artifact-based studies (e.g., ; ). Future research should include a larger, more diverse sample from a range of educational backgrounds. Additionally, incorporating methods such as classroom observations and conducting longitudinal studies could provide a more comprehensive understanding of how the construct develops over time.
6. Conclusions
This study systematically addressed its three primary research objectives. First, by defining the AI-TPACK components that emerged from analyzing authentic pedagogical artifacts (RQ1), it provided a refined conceptual extension of the TPACK framework. Second, through rigorous expert validation and empirical analysis, the research confirmed the tool’s strong validity and reliability as a robust measurement instrument (RQ2). Finally, the study identified four distinct teacher educator competency patterns (RQ3) that offer actionable insights for guiding targeted professional development.
The current research makes key contributions to the TPACK field by developing and validating an initial tool for measuring AI-TPACK proficiency among teacher educators. The instrument, found to be both valid (CVR = 0.86, CVI = 0.91) and reliable (ICC = 0.84), effectively differentiates between knowledge components and proficiency levels. It offers a practical resource for future research, as well as for teacher-training planners and policymakers, supporting the diagnosis and development of key AI competencies in the emerging AI era. The analysis also advances the integration of AI-specific dimensions—AIK, AIPK, and AICK—into the existing framework and provides empirical evidence for the value of artifact-based assessment over subjective self-reports.
These findings highlight that successful AI integration is heavily driven by pedagogical understanding (AIPK) rather than pure technical knowledge (AIK), emphasizing the need for professional development to foster synergy between domains.
The demonstrated contributions provide a strong foundation for future research and for designing professional development that promotes both effective and responsible AI integration in education. However, the findings should be interpreted cautiously due to limitations related to the advanced sample composition (teacher educators in training) and the reliance on written artifacts alone, underscoring the necessity for future longitudinal and diverse cross-cultural validations to capture real-world classroom practices.
Funding
This research received no external funding, and the APC was funded by MOFET R&D Institute.
Institutional Review Board Statement
The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of the MOFET Institute (approval code IRB2507, 1 September 2025).
Informed Consent Statement
Informed consent was obtained from all subjects involved in the study.
Data Availability Statement
Data is unavailable due to privacy restrictions.
Acknowledgments
The author would like to express sincere gratitude to the academic staff and teacher educators who generously agreed to share and permit the analysis of the learning products and artifacts they developed during their internship at the MOFET R&D Institute.
Conflicts of Interest
The author declares no conflicts of interest.
Appendix A
       
    
Table A1. Assessment criteria from the literature review (n = 37).
      | Category | Criterion ID | Assessment Indicator | Description | Measurement Level | 
|---|---|---|---|---|
| AI-Technology Knowledge | ATK-1 | Understanding of AI Technologies | Comprehension of machine learning, natural language processing, computer vision | Foundational | 
| AI-Technology Knowledge | ATK-2 | Proficiency with Educational AI Tools | Ability to use AI tools specifically designed for education | Foundational | 
| AI-Technology Knowledge | ATK-3 | Technological Problem-Solving | Capability to troubleshoot technical issues related to AI tools | Foundational | 
| AI-Technology Knowledge | ATK-4 | Understanding of Technological Limitations | Recognition of AI tool constraints and challenges | Foundational | 
| AI-Technology Knowledge | ATK-5 | Ability to Evaluate New Tools | Critical assessment of emerging AI tools | Advanced | 
| AI-Technology Knowledge | ATK-6 | Understanding of Basic Algorithms | General comprehension of how AI systems operate | Advanced | 
| AI-Technology Knowledge | ATK-7 | Technological Context Adaptation | Selection of appropriate tools for specific situations | Advanced | 
| AI-Pedagogical Knowledge | APK-1 | Personalized Learning Implementation | Using AI to create individualized learning experiences | Teaching Strategies | 
| AI-Pedagogical Knowledge | APK-2 | Automated and Immediate Feedback | Implementation of AI-based feedback systems | Teaching Strategies | 
| AI-Pedagogical Knowledge | APK-3 | Differentiated Instruction | Using AI to support diverse learning needs | Teaching Strategies | 
| AI-Pedagogical Knowledge | APK-4 | Advanced Classroom Management | Integrating AI into classroom organization and management | Teaching Strategies | 
| AI-Pedagogical Knowledge | APK-5 | Active Learning with AI | Encouraging student engagement through AI tools | Pedagogical Principles | 
| AI-Pedagogical Knowledge | APK-6 | Collaborative Learning | Using AI to promote student cooperation | Pedagogical Principles | 
| AI-Pedagogical Knowledge | APK-7 | Critical Thinking Development | Fostering critical thinking about AI | Pedagogical Principles | 
| AI-Pedagogical Knowledge | APK-8 | Formative Assessment | Using AI for continuous assessment | Pedagogical Principles | 
| AI-Content Knowledge | ACK-1 | Subject-Specific AI Applications | Using AI tools relevant to specific academic subjects | Content Integration | 
| AI-Content Knowledge | ACK-2 | Concept Demonstration through AI | Explaining content concepts using AI tools | Content Integration | 
| AI-Content Knowledge | ACK-3 | Quality Content Creation | Developing learning materials using AI | Content Integration | 
| AI-Content Knowledge | ACK-4 | Information Search and Verification | Using AI for academic information retrieval and validation | Content Integration | 
| AI-Content Knowledge | ACK-5 | Appropriate Difficulty Levels | Adapting content difficulty for different students | Content Adaptation | 
| AI-Content Knowledge | ACK-6 | Content Relevance | Maintaining content relevance to learning objectives | Content Adaptation | 
| AI-Content Knowledge | ACK-7 | Information Currency | Using AI to update current content | Content Adaptation | 
| AI-TPACK Integration | AIT-1 | Holistic Lesson Planning | Thoughtful integration of AI, pedagogy, and content | Integrated Planning | 
| AI-TPACK Integration | AIT-2 | Clear Learning Objectives | Defining clear goals for AI use | Integrated Planning | 
| AI-TPACK Integration | AIT-3 | Target Audience Adaptation | Adapting AI use to student needs | Integrated Planning | 
| AI-TPACK Integration | AIT-4 | Learning Sequence | Creating logical sequence of AI-based activities | Integrated Planning | 
| AI-TPACK Integration | AIT-5 | Effective Implementation | Smooth execution of AI activities in classroom | Implementation and Assessment | 
| AI-TPACK Integration | AIT-6 | Monitoring and Adaptation | Tracking progress and adjusting accordingly | Implementation and Assessment | 
| AI-TPACK Integration | AIT-7 | Reflection and Improvement | Self-evaluation and continuous improvement | Implementation and Assessment | 
| AI-TPACK Integration | AIT-8 | Learning Outcome Assessment | Measuring AI impact on student learning | Implementation and Assessment | 
| AI Ethics and Responsibility | AER-1 | Privacy and Data Security | Protecting student privacy | Ethical Considerations | 
| AI Ethics and Responsibility | AER-2 | Algorithmic Transparency | Explaining AI functionality to students | Ethical Considerations | 
| AI Ethics and Responsibility | AER-3 | Bias Prevention | Identifying and preventing algorithmic bias | Ethical Considerations | 
| AI Ethics and Responsibility | AER-4 | Responsible Use | Promoting thoughtful and responsible AI use | Ethical Considerations | 
| AI Ethics and Responsibility | AER-5 | AI Literacy | Developing basic AI understanding among students | Literacy Development | 
| AI Ethics and Responsibility | AER-6 | Critical Thinking about AI | Encouraging critical thinking about AI impacts | Literacy Development | 
| AI Ethics and Responsibility | AER-7 | Digital Responsibility | Developing personal responsibility in technology use | Literacy Development |

Table A2. Structure of the final AI-TPACK assessment tool.
| Main Category | Sub-Category | Criterion | Number of Indicators | Weight (%) | Corrected Weight (%) (Sum = 100%) |
|---|---|---|---|---|---|
| AIK | Basic AI Knowledge | 1. Identification of appropriate AI tools | 4 | 4.4 | 5.0 |
| AIK | Advanced AI Abilities | 2. Complex technical integration | 5 | 5.6 | 6.4 |
| AIK | Advanced AI Abilities | 3. Development of original solutions | 6 | 7.0 | 8.0 |
| AIPK | Pedagogical Approaches | 4. Alignment with learning objectives | 5 | 5.6 | 6.4 |
| AIPK | Learning Design | 5. Interactive learning experiences | 6 | 7.0 | 8.0 |
| AIPK | Learning Design | 6. Transformation of pedagogical practices | 4 | 4.9 | 5.6 |
| AICK | Content Representation | 7. Adapting content presentation | 4 | 4.4 | 5.0 |
| AICK | Knowledge Processing | 8. Processing and organizing knowledge | 5 | 5.6 | 6.4 |
| AICK | Knowledge Processing | 9. Creation of new knowledge approaches | 5 | 5.5 | 6.3 |
| Integration | Foundational Integration | 10. Connection between components | 4 | 4.4 | 5.0 |
| Integration | Overall Integration | 11. Synergy between components | 6 | 7.0 | 8.0 |
| Supporting Measures | Reflection | 12. Depth of reflection on AI use | 3 | 10.0 | 10.0 |
| Supporting Measures | Student Engagement | 13. Level of student engagement with AI | 4 | 10.0 | 10.0 |
| Supporting Measures | Innovation | 14. Level of innovation in AI use | 4 | 10.0 | 10.0 |
| Total | | 14 criteria | 65 | 91.4 | 100.0 |
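The corrected weights in Table A2 are consistent with the 70/30 weighting used in the final score (Appendix B, Part E): the eleven core criteria appear to be rescaled so that together they account for 70%, while the three supporting measures keep 10% each. The short Python sketch below reproduces the corrected-weight column under that assumption; the dictionaries and variable names are illustrative only and are not part of the published tool.

```python
# Sketch: reproducing the "Corrected Weight" column of Table A2.
# Assumption (not stated in the table itself): the eleven core-criteria weights
# are rescaled to a combined 70% share, while the three supporting measures
# (Reflection, Engagement, Innovation) keep 10% each, matching the 70/30 split
# used in the overall AI-TPACK score (Appendix B, Part E).

core_raw = {  # criterion number: raw weight (%) from the literature-based draft
    1: 4.4, 2: 5.6, 3: 7.0,    # AIK
    4: 5.6, 5: 7.0, 6: 4.9,    # AIPK
    7: 4.4, 8: 5.6, 9: 5.5,    # AICK
    10: 4.4, 11: 7.0,          # Integration
}
supporting_raw = {12: 10.0, 13: 10.0, 14: 10.0}  # supporting measures, kept as-is

core_share = 100.0 - sum(supporting_raw.values())   # 70.0
scale = core_share / sum(core_raw.values())          # 70.0 / 61.4

corrected = {k: round(v * scale, 1) for k, v in core_raw.items()}
corrected.update(supporting_raw)

for criterion, weight in corrected.items():
    print(f"Criterion {criterion}: {weight}%")
# Rounded to one decimal, the corrected weights match Table A2
# (they sum to 100.1% only because of rounding).
```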
       
    
Table A3. Descriptive statistics of AI-TPACK scores.

| Component | Mean | Standard Deviation | Minimum | Maximum | Median | Coefficient of Variation |
|---|---|---|---|---|---|---|
| AIK | 2.67 | 0.84 | 1.25 | 4.00 | 2.75 | 0.31 | 
| AIPK | 2.45 | 0.91 | 1.00 | 4.00 | 2.50 | 0.37 | 
| AICK | 2.78 | 0.76 | 1.50 | 4.00 | 3.00 | 0.27 | 
| Integration | 2.34 | 0.88 | 1.00 | 4.00 | 2.25 | 0.38 | 
| Reflection | 2.56 | 0.94 | 1.00 | 4.00 | 2.67 | 0.37 | 
| Engagement | 2.89 | 0.71 | 1.75 | 4.00 | 3.00 | 0.25 | 
| Innovation | 2.23 | 1.02 | 1.00 | 4.00 | 2.00 | 0.46 | 
| Overall AI-TPACK | 2.55 | 0.73 | 1.20 | 3.85 | 2.60 | 0.29 | 
       
    
Table A4. AI-TPACK component correlations.
|  | AIK | AIPK | AICK | INT | REFL | ENG | INNOV |
|---|---|---|---|---|---|---|---|
| AIK | 1.00 | | | | | | |
| AIPK | 0.67 ** | 1.00 | | | | | |
| AICK | 0.54 ** | 0.71 ** | 1.00 | | | | |
| INT | 0.48 ** | 0.78 ** | 0.65 ** | 1.00 | | | |
| REFL | 0.41 * | 0.56 ** | 0.62 ** | 0.58 ** | 1.00 | | |
| ENG | 0.52 ** | 0.64 ** | 0.69 ** | 0.59 ** | 0.71 ** | 1.00 | |
| INNOV | 0.59 ** | 0.72 ** | 0.58 ** | 0.74 ** | 0.68 ** | 0.63 ** | 1.00 |

Note: * p < 0.05, ** p < 0.01.
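For readers who wish to replicate this analysis, a correlation matrix of this kind can be computed from the per-artifact component scores with standard tools. The sketch below is a minimal example using pandas and SciPy; the randomly generated scores are placeholders for the actual (non-public) ratings, so its output will not match the values in Table A4.

```python
# Sketch: computing an AI-TPACK component correlation matrix (cf. Table A4).
import numpy as np
import pandas as pd
from scipy import stats

components = ["AIK", "AIPK", "AICK", "INT", "REFL", "ENG", "INNOV"]

# Placeholder data: random scores on the 1-4 scale for 60 artifacts,
# standing in for the real (non-public) ratings.
rng = np.random.default_rng(0)
scores = pd.DataFrame(rng.uniform(1, 4, size=(60, len(components))), columns=components)

r_matrix = scores.corr(method="pearson")  # Pearson r matrix, as reported in Table A4
print(r_matrix.round(2))

# Pairwise p-values, used to attach the * (p < 0.05) / ** (p < 0.01) markers
for i, a in enumerate(components):
    for b in components[:i]:
        r, p = stats.pearsonr(scores[a], scores[b])
        print(f"{a}-{b}: r = {r:.2f}, p = {p:.3f}")
```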
Appendix B. Full AI-TPACK Artifact Assessment Tool
Introduction to the Tool
This comprehensive framework, based on the extended AI-TPACK model, provides a systematic and practical tool for analyzing artifacts (instructional units) that integrate artificial intelligence in teacher education. Designed for both theoretical rigor and practical application, the framework includes a total of 14 criteria and 65 indicators. These are translated into a practical rubric with seven main components and four proficiency levels for each, allowing for a thorough and accurate evaluation while remaining flexible for diverse educational contexts and AI applications.
- Part A: Basic Unit Information
 - Identification Details:
 
- Instructor Name: _______________
 - Content Area: _______________
 - Course Type: (Lecture/Seminar/Lab/Other) _______________
 - Academic Hours: _______________
 - Number of Activities in the Unit: _______________
 
- Part B: AI-TPACK Analysis by Components
 - 1. AIK—Artificial Intelligence Knowledge
 
| Level | Description | Examples | Score |
|---|---|---|---|
| Basic | Single AI tool, simple use | Using ChatGPT for information retrieval | 1 | 
| Intermediate | Multiple AI tools or advanced single tool | Leonardo AI + ChatGPT + Canva | 2 | 
| Advanced | Creating applications/bots, complex integration | Base44, NotebookLM, bot development | 3 | 
| Innovative | Original AI solutions, unique applications | Custom applications, innovative integrations | 4 | 
- Detailed Criteria:
 - Criterion 1: Identification and Selection of AI Tools (4 indicators)
 
- 1.1 Identifies AI tools suitable for the content area
 - 1.2 Selects tools based on clear pedagogical objectives
 - 1.3 Adapts tool selection to students’ proficiency levels
 - 1.4 Considers technical limitations and accessibility
 
- Criterion 2: Technical Application of AI (5 indicators)
 
- 2.1 Integrates multiple AI tools effectively
 - 2.2 Customizes AI tool settings for specific contexts
 - 2.3 Addresses technical issues in AI implementation
 - 2.4 Optimizes AI tool performance for educational use
 - 2.5 Demonstrates understanding of AI capabilities and limitations
 
- Criterion 3: Innovation and AI Development (5 indicators)
 
- 3.1 Develops original AI-based solutions
 - 3.2 Creates innovative applications using AI platforms
 - 3.3 Adapts existing AI tools for novel educational purposes
 - 3.4 Experiments with advanced AI technologies
 - 3.5 Shares and documents AI innovations for others
 
- Unit Score: ___/4
 - 2. AIPK—AI-Pedagogical Knowledge
 
| Level | Description | Examples | Score |
|---|---|---|---|
| Substitution | AI replaces existing tools | AI instead of a dictionary or Google search | 1 | 
| Augmentation | AI enhances existing activities | Creating diverse exercises, personalized feedback | 2 | 
| Modification | AI enables significant redesign | Interactive learning, simulations | 3 | 
| Transformation | AI creates entirely new possibilities | Human-AI collaboration, innovative learning experiences | 4 | 
- Detailed Criteria:
 - Criterion 4: Basic Pedagogical Integration (4 indicators)
 
- 4.1 Uses AI to appropriately replace traditional teaching tools
 - 4.2 Enhances existing activities through AI augmentation
 - 4.3 Aligns AI use with specific learning objectives
 - 4.4 Maintains pedagogical focus during AI integration
 
- Criterion 5: Designing Learning Experiences (5 indicators)
 
- 5.1 Redesigns learning experiences using AI capabilities
 - 5.2 Creates interactive learning environments with AI
 - 5.3 Promotes student autonomy through AI-supported learning
 - 5.4 Encourages collaborative learning enriched by AI
 - 5.5 Adapts teaching methods to leverage AI advantages
 
- Criterion 6: Transformative Pedagogical Practice (5 indicators)
 
- 6.1 Creates new pedagogical approaches enabled by AI
 - 6.2 Promotes human-AI collaboration in learning
 - 6.3 Transforms traditional classroom dynamics with AI
 - 6.4 Enables personalized learning pathways through AI
 - 6.5 Develops AI-enriched assessment strategies
 
- Unit Score: ___/4
 - 3. AICK—AI-Content Knowledge
 
| Level | Description | Examples | Score |
|---|---|---|---|
| Presentation | AI presents existing content | Displaying eye diseases using AI | 1 | 
| Processing | AI assists in organizing and adapting content | Mind maps, personalized summaries | 2 | 
| Creation | AI participates in creating new content | Accompanying images, new outputs | 3 | 
| Innovation | AI enables new content approaches | Digital VTS, virtual experiences | 4 | 
- Detailed Criteria:
 - Criterion 7: Content Presentation and Adaptation (4 indicators)
 
- 7.1 Uses AI to present existing content in accessible formats
 - 7.2 Adjusts content complexity with AI assistance
 - 7.3 Creates visual representations of abstract concepts
 - 7.4 Organizes and structures content using AI tools
 
- Criterion 8: Content Enrichment and Processing (5 indicators)
 
- 8.1 Enriches content with AI-generated examples and illustrations
 - 8.2 Creates multimedia content using AI tools
 - 8.3 Develops interactive content experiences with AI
 - 8.4 Curates and summarizes content from multiple AI sources
 - 8.5 Personalizes content delivery based on student needs
 
- Criterion 9: Knowledge Creation and Innovation (5 indicators)
 
- 9.1 Generates new content in collaboration with AI
 - 9.2 Develops original case studies and scenarios with AI
 - 9.3 Creates domain-specific applications using AI
 - 9.4 Explores new approaches to content delivery with AI
 - 9.5 Synthesizes knowledge from multiple AI sources
 
- Unit Score: ___/4
 - Part C: Integration Analysis
 - 4. Level of AI Integration in Pedagogical Design
 
| Level | Description | Indicators | Score |
|---|---|---|---|
| Separate | AI as an additional, disconnected activity | AI appears only in a specific part | 1 | 
| Partially Integrated | AI linked to learning objectives | AI supports some objectives | 2 | 
| Fully Integrated | AI integrated across all unit stages | AI in planning, implementation, reflection | 3 | 
| Integrative | AI as an inseparable part of pedagogy | Unit cannot function without AI | 4 | 
- Detailed Criteria:
 - Criterion 10: Basic Integration Methods (4 indicators)
 
- 10.1 Connects AI use to learning objectives
 - 10.2 Aligns AI tools with pedagogical strategies
 - 10.3 Balances AI and non-AI learning activities
 - 10.4 Maintains consistency across learning stages
 
- Criterion 11: Strategic Integration Planning (5 indicators)
 
- 11.1 Plans systematic AI integration throughout the curriculum
 - 11.2 Sequences AI activities optimally for learning progression
 - 11.3 Aligns AI use with assessment strategies
 - 11.4 Considers long-term impacts of AI integration
 - 11.5 Adjusts integration based on feedback and student outputs
 
- Criterion 12: Advanced Synergistic Integration (5 indicators)
 
- 12.1 Creates seamless integration across all TPACK components
 - 12.2 Develops synergies between AI, pedagogy, and content
 - 12.3 Embeds AI as an essential component, not an add-on
 - 12.4 Achieves integration in all stages of planning, implementation, and reflection
 - 12.5 Creates holistic learning ecosystems integrating AI
 
- Unit Score: ___/4
 - Part D: Additional Qualitative Metrics
 - 5. Depth of Reflection on AI
 
- 0 points: No reflection on AI use
 - 1 point: Technical reflection only (worked/did not work)
 - 2 points: Pedagogical reflection (impact on learning)
 - 3 points: Critical reflection (advantages and disadvantages)
 
- Criterion 13: Depth of Reflection on AI Use (3 indicators)
 
- 13.1 Reflects on technical aspects of AI implementation
 - 13.2 Analyzes the pedagogical impact of AI integration
 - 13.3 Critically evaluates ethical and societal implications of AI use
 
- 6. Student Engagement with AI
 
- Passive observation (1 point)
 - Guided use (2 points)
 - Active creation (3 points)
 - Critical thinking about AI (4 points)
 
- 7. Innovation in AI Use
 
- Routine use (1 point)
 - Creative application (2 points)
 - Innovative problem-solving (3 points)
 - Innovative methodological development (4 points)
 
- Criterion 14: Student Engagement and Innovation (4 indicators)
 
- 14.1 Promotes student engagement with AI tools at appropriate levels
 - 14.2 Encourages student creativity through collaboration with AI
 - 14.3 Promotes critical thinking about AI’s role and limitations
 - 14.4 Develops innovative approaches to AI-enhanced learning
 
- Unit Score: ___/4
 - ___________________________________________________________________________
 - Part E: Overall AI-TPACK Score Calculation
 - Calculation Formula:
 - AI-TPACK Score = [(AIK + AIPK + AICK + Integration) × 0.7] + [(Reflection + Student Engagement + Innovation) × 0.3]
 - AI-TPACK Score Calculation Guidelines
 
- Calculate the Core Group (70% of the total score): Sum the scores of the AIK, AIPK, AICK, and Integration components. Multiply this sum by 0.7.
 - Calculate the Extended Group (30% of the total score): Sum the scores of the Reflection, Student Engagement, and Innovation components. Multiply this sum by 0.3.
- Final Score (Raw): Add the core-group and extended-group results to obtain the raw final AI-TPACK score.
 
- The weights (70% and 30%) reflect the relative importance of each component group in the final score; a worked calculation sketch is provided at the end of this part.
 - Raw to Normalized Score Conversion Table
 
| Final Score (Raw) | Normalized Final Score | Proficiency Level |
|---|---|---|
| 3.7–5.8 | 1.0–1.4 | Basic Level (Beginner) | 
| 5.9–9.1 | 1.5–2.4 | Intermediate Level (Developing) | 
| 9.2–12.4 | 2.5–3.4 | Intermediate-High Level (Proficient) | 
| 12.5–14.8 | 3.5–4.0 | Advanced Level (Expert Proficiency) | 
- Final Score: ___/4.0
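As an illustration of the scoring procedure above, the following Python sketch applies the 70/30 weighting formula to the seven component ratings (each on the 1–4 scale) and assigns the proficiency level from the conversion table. The function and variable names are illustrative assumptions; the dedicated application mentioned in the instructions for use below performs the equivalent calculation.

```python
# Sketch: computing the overall AI-TPACK score from the seven component ratings (1-4)
# using the 70/30 weighting formula and the raw-to-normalized conversion table above.

def ai_tpack_raw_score(aik, aipk, aick, integration, reflection, engagement, innovation):
    """Raw score: core components weighted 0.7, supporting measures weighted 0.3."""
    core = aik + aipk + aick + integration
    extended = reflection + engagement + innovation
    return core * 0.7 + extended * 0.3  # ranges from 3.7 (all 1s) to 14.8 (all 4s)

# Raw-score bands from the conversion table: (upper bound, proficiency level)
BANDS = [
    (5.8, "Basic Level (Beginner)"),
    (9.1, "Intermediate Level (Developing)"),
    (12.4, "Intermediate-High Level (Proficient)"),
    (14.8, "Advanced Level (Expert Proficiency)"),
]

def proficiency_level(raw):
    """Map a raw score to its proficiency band."""
    for upper, label in BANDS:
        if raw <= upper:
            return label
    return BANDS[-1][1]

# Example: an artifact rated 3 on every core component and 2 on every supporting measure.
raw = ai_tpack_raw_score(3, 3, 3, 3, 2, 2, 2)    # 12 * 0.7 + 6 * 0.3 = 10.2
print(raw, proficiency_level(raw))                # 10.2 -> Intermediate-High Level (Proficient)
```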
 - Part F: Qualitative Comments
 - Strengths in AI Integration:
 
- ___________________________________________________________________________
 - ___________________________________________________________________________
 - ___________________________________________________________________________
 
- Areas for Improvement:
 
- ___________________________________________________________________________
 - ___________________________________________________________________________
 - ___________________________________________________________________________
 
- Unique Patterns Identified in AI Use:
 
- ___________________________________________________________________________
 - ___________________________________________________________________________
 - ___________________________________________________________________________
 
- Recommendations for Further Development:
 
- ___________________________________________________________________________
 - ___________________________________________________________________________
 - ___________________________________________________________________________
 
- Instructions for Use:
 
- Evaluate each instructional unit separately.
 - Rate each component based on evidence from the document.
 - Calculate the overall score.
 - Document qualitative insights.
 - Compare patterns across different units.
 
- To facilitate the assessment process, we have developed a dedicated application which simplifies data entry and automatically calculates the final scores.
 
References
- Admiraal, W., van Vugt, F., Kranenburg, F., Koster, B., Smit, B., Weijers, S., & Lockhorst, D. (2017). Preparing pre-service teachers to integrate technology into K–12 instruction: Evaluation of a technology-infused approach. Technology, Pedagogy and Education, 26(1), 105–120. [Google Scholar] [CrossRef]
 - Al-Abdullatif, A. M. (2024). Modeling teachers’ acceptance of generative artificial intelligence use in higher education: The role of AI literacy, intelligent TPACK, and perceived trust. Education Sciences, 14(11), 1209. [Google Scholar] [CrossRef]
 - Alshahrani, B. T., Pileggi, S. F., & Karimi, F. (2024). A social perspective on AI in the higher education system: A semisystematic literature review. Electronics, 13(8), 1572. [Google Scholar] [CrossRef]
 - Archambault, L., & Crippen, K. (2009). Examining TPACK among K-12 online distance educators in the United States. Contemporary Issues in Technology and Teacher Education, 9(1), 71–88. [Google Scholar]
 - Avalos, B. (2011). Teacher professional development in teaching and teacher education over ten years. Teaching and Teacher Education, 27(1), 10–20. [Google Scholar] [CrossRef]
 - Ayre, C., & Scally, A. J. (2014). Critical values for Lawshe’s content validity ratio: Revisiting the original methods of calculation. Measurement and Evaluation in Counseling and Development, 47(1), 79–86. [Google Scholar] [CrossRef]
 - Bobula, M. (2024). Generative artificial intelligence (AI) in higher education: A comprehensive review of challenges, opportunities, and implications. Journal of Learning Development in Higher Education, (30). [Google Scholar] [CrossRef]
 - Bower, M., Torrington, J., Lai, J. W. M., Petocz, P., & Alfano, M. (2024). How should we change teaching and assessment in response to increasingly powerful generative Artificial Intelligence? Outcomes of the ChatGPT teacher survey. Education and Information Technologies, 29, 15403–15439. [Google Scholar] [CrossRef]
 - Celik, I. (2023). Towards Intelligent-TPACK: An empirical study on teachers’ professional knowledge to ethically integrate artificial intelligence (AI)-based tools into education. Computers in Human Behavior, 138, 107468. [Google Scholar] [CrossRef]
 - Celik, I., Gedrimiene, E., Siklander, S., & Muukkonen, H. (2024). The affordances of artificial intelligence-based tools for supporting 21st-century skills: A systematic review of empirical research in higher education. Australasian Journal of Educational Technology, 40(3), 19–38. [Google Scholar] [CrossRef]
 - Chai, C. S., Koh, J. H. L., & Tsai, C.-C. (2013). A review of technological pedagogical content knowledge. Educational Technology & Society, 16(2), 31–51. [Google Scholar]
 - Chatterjee, S., & Bhattacharjee, K. K. (2020). Adoption of artificial intelligence in higher education: A quantitative analysis using structural equation modelling. Education and Information Technologies, 25(5), 3443–3463. [Google Scholar] [CrossRef]
 - Chiu, T. K. F. (2023a). Future research recommendations for transforming higher education with generative AI. Computers and AI in Education, 6, 100197. [Google Scholar] [CrossRef]
 - Chiu, T. K. F. (2023b). The impact of Generative AI (GenAI) on practices, policies and research direction in education: A case of ChatGPT and Midjourney. Interactive Learning Environments, 32, 6187–6203. [Google Scholar] [CrossRef]
 - Chiu, T. K. F., & Chai, C. S. (2020). Sustainable curriculum planning for artificial intelligence education: A self-determination theory perspective. Sustainability, 12(14), 5568. [Google Scholar] [CrossRef]
 - Chiu, T. K. F., Xia, Q., Zhou, X., Chai, C. S., & Cheng, M. (2023). Systematic literature review on opportunities, challenges, and future research recommendations of artificial intelligence in education. Computers and Education: Artificial Intelligence, 4, 100118. [Google Scholar] [CrossRef]
 - Choudhury, S., Deb, J. P., Pradhan, P., & Mishra, A. (2024). Validation of the teachers AI-TPACK scale for the Indian educational setting. International Journal of Experimental Research and Review, 43, 119–133. [Google Scholar] [CrossRef]
 - Drummond, A., & Sweeney, T. (2017). Can an objective measure of technological pedagogical content knowledge (TPACK) supplement existing TPACK measures? British Journal of Educational Technology, 48(4), 928–939. [Google Scholar] [CrossRef]
 - Ebner, M., Lienhardt, C., Rohs, M., & Meyer, I. (2010). Microblogs in higher education—A chance to facilitate informal and process-oriented learning? Computers & Education, 55(1), 92–100. [Google Scholar] [CrossRef]
 - Garet, M. S., Porter, A. C., Desimone, L., Birman, B. F., & Yoon, K. S. (2001). What makes professional development effective? Results from a national sample of teachers. American Educational Research Journal, 38(4), 915–945. [Google Scholar] [CrossRef]
 - Graham, C. R. (2011). Theoretical considerations for understanding technological pedagogical content knowledge (TPACK). Computers & Education, 57(3), 1953–1960. [Google Scholar] [CrossRef]
 - Guskey, T. R. (2002). Professional development and teacher change. Teachers and Teaching, 8(3), 381–391. [Google Scholar] [CrossRef]
 - Harris, J. B., Hofer, M. J., Blanchard, M. R., Grandgenett, N. F., Schmidt, D. A., van Olphen, M., & Young, C. A. (2010). Testing a TPACK-based technology integration observation instrument. In D. Gibson, & B. Dodge (Eds.), Proceedings of SITE 2010—Society for information technology & teacher education international conference (pp. 4352–4359). Association for the Advancement of Computing in Education (AACE). [Google Scholar]
 - Hofer, M., Grandgenett, N., Harris, J., & Swan, K. (2011). Testing a TPACK-based technology integration observation instrument. In M. Koehler, & P. Mishra (Eds.), Proceedings of SITE 2011—Society for information technology & teacher education international conference (pp. 4352–4359). Association for the Advancement of Computing in Education (AACE). [Google Scholar]
 - Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial intelligence in education: Promises and implications for teaching and learning. Center for Curriculum Redesign. [Google Scholar]
 - Hughes, J. (2005). The role of teacher knowledge and learning experiences in forming technology-integrated pedagogy. Journal of Technology and Teacher Education, 13(2), 277–302. [Google Scholar]
 - Kim, S.-W. (2024). Development of a TPACK educational program to enhance pre-service teachers’ teaching expertise in artificial intelligence convergence education. International Journal of Advanced Science, Engineering and Information Technology, 14(1), 19552. [Google Scholar] [CrossRef]
 - Kimmons, R., Graham, C. R., & West, R. E. (2020). The PICRAT model for technology integration in teacher preparation. Contemporary Issues in Technology and Teacher Education, 20(1), 176–198. [Google Scholar]
 - Koehler, M. J., Mishra, P., & Cain, W. (2013). What is technological pedagogical content knowledge (TPACK)? Journal of Education, 193(3), 13–19. [Google Scholar] [CrossRef]
 - Koo, T. K., & Li, M. Y. (2016). A guideline of selecting and reporting intraclass correlation coefficients for reliability research. Journal of Chiropractic Medicine, 15(2), 155–163. [Google Scholar] [CrossRef]
 - König, J., Bremerich-Vos, A., Buchholtz, C., Fladung, I., & Glutsch, N. (2020). General pedagogical knowledge, pedagogical adaptivity in written lesson plans, and instructional practice among preservice teachers. Journal of Curriculum Studies, 52(5), 616–638. [Google Scholar] [CrossRef]
 - Küchemann, S., Avila, K., Dinc, Y., Boolzen, C., Revenga Lozano, N., Ruf, V., Stausberg, N., Steinert, S., Fischer, F., Fischer, M., Kasneci, E., Kasneci, G., Kuhr, T., Kutyniok, G., Malone, S., Sailer, M., Schmidt, A., Stadler, M., Weller, J., & Kuhn, J. (2024). Are large multimodal foundation models all we need? On opportunities and challenges of these models in education [Preprint]. EdArXiv. [Google Scholar] [CrossRef]
 - Landis, J. R., & Koch, G. G. (1977). An application of hierarchical kappa-type statistics in the assessment of majority agreement among multiple observers. Biometrics, 33(2), 363–374. [Google Scholar] [CrossRef]
 - Lawshe, C. H. (1975). A quantitative approach to content validity. Personnel Psychology, 28(4), 563–575. [Google Scholar] [CrossRef]
 - Li, J., Zhang, Y., & Wang, H. (2025). A case study of teachers’ generative artificial intelligence integration processes and factors influencing them. Teaching and Teacher Education, 153, 105157. [Google Scholar] [CrossRef]
 - López-Regalado, O., Núñez-Rojas, N., López-Gil, O. R., Lloclla-Gonzáles, H., & Sánchez Rodríguez, J. (2024). Artificial intelligence in university education: Systematic review. Research Square. [Google Scholar] [CrossRef]
 - McKenney, S., & Reeves, T. C. (2018). Conducting educational design research. Routledge. [Google Scholar]
 - Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054. [Google Scholar] [CrossRef]
 - Mishra, P., Warr, M., & Islam, R. (2023). TPACK in the age of ChatGPT and Generative AI. Journal of Digital Learning in Teacher Education, 39, 235–251. [Google Scholar] [CrossRef]
 - Molenda, M. (2015). In search of the elusive ADDIE model. Performance Improvement, 54(2), 40–42. [Google Scholar] [CrossRef]
 - Mourlam, D., Chesnut, S., & Bleecker, H. (2021). Exploring preservice teacher self-reported and enacted TPACK after participating in a learning activity types short course. Australasian Journal of Educational Technology, 37(3), 152–169. [Google Scholar] [CrossRef]
 - Ng, D. T. K., Lee, M., Tan, R. J. Y., Hu, X., Downie, J. S., & Chu, S. K. W. (2023). A review of AI teaching and learning from 2000 to 2020. Education and Information Technologies, 28(7), 8445–8501. [Google Scholar] [CrossRef]
 - Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2, 100041. [Google Scholar] [CrossRef]
 - Ng, D. T. K., Leung, J. K. M., & Chu, S. K. W. (2025). Investigating the mediating role of TPACK on teachers’ AI competency and their teaching performance in higher education. Computers and Education: Artificial Intelligence, 6, 100461. [Google Scholar] [CrossRef]
 - Ning, Y., Zhang, C., Xu, B., Zhou, Y., & Wijaya, T. T. (2024). Teachers’ AI-TPACK: Exploring the relationship between knowledge elements. Sustainability, 16(3), 978. [Google Scholar] [CrossRef]
 - Organisation for Economic Co-operation and Development (OECD). (2023). OECD digital education outlook 2023: Towards an effective digital education ecosystem. OECD Publishing. [Google Scholar] [CrossRef]
 - Paivio, A. (1991). Dual coding theory: Retrospect and current status. Canadian Journal of Psychology/Revue canadienne de psychologie, 45(3), 255. [Google Scholar] [CrossRef]
 - Pesovski, I., Santos, R., Henriques, R. A. P., & Trajkovik, V. (2024). Generative AI for customizable learning experiences. Sustainability, 16(7), 3034. [Google Scholar] [CrossRef]
 - Puentedura, R. R. (2006, August). Transformation, technology, and education. Hippasus. [Google Scholar]
 - Puentedura, R. R. (2013). SAMR and TPCK: An introduction. Hippasus. [Google Scholar]
 - Radianti, J., Majchrzak, T. A., Fromm, J., & Wohlgenannt, I. (2020). A systematic review of immersive virtual reality applications for higher education: Design elements, lessons learned, and research agenda. Computers & Education, 147, 103778. [Google Scholar] [CrossRef]
 - Rosenberg, J. M., & Koehler, M. J. (2015). Context and technological pedagogical content knowledge (TPACK): A systematic review. Journal of Research on Technology in Education, 47(3), 186–210. [Google Scholar] [CrossRef]
 - Russell, S., & Norvig, P. (2020). Artificial intelligence: A modern approach (4th ed.). Pearson. [Google Scholar]
 - Schmidt, D. A., Baran, E., Thompson, A. D., Mishra, P., Koehler, M. J., & Shin, T. S. (2009). Technological pedagogical content knowledge (TPACK): The development and validation of an assessment instrument for preservice teachers. Journal of Research on Technology in Education, 42(2), 123–149. [Google Scholar] [CrossRef]
 - Seifert, H., & Lindmeier, A. (2024). Developing a performance-based assessment to measure pre-service secondary teachers’ digital competence to use digital mathematics tools. Journal für Mathematik-Didaktik, 45(2), 317–348. [Google Scholar] [CrossRef]
 - Selwyn, N. (2019). Should robots replace teachers? AI and the future of education. John Wiley & Sons. [Google Scholar]
 - Shen, S.-L. (2024). Application of large language models in the field of education. Theoretical and Natural Science, 34(1), 140–147. [Google Scholar] [CrossRef]
 - Shrestha, B. L., Dahal, N., Hasan, M. K., Paudel, S., & Kapar, H. (2025). Generative AI on professional development: A narrative inquiry using TPACK framework. Frontiers in Education, 10, 1550773. [Google Scholar] [CrossRef]
 - Shulman, L. S. (1986). Those who understand: Knowledge growth in teaching. Educational Researcher, 15(2), 4–14. [Google Scholar] [CrossRef]
 - Stender, A., Brückmann, M., & Neumann, K. (2017). Transformation of topic-specific professional knowledge into personal pedagogical content knowledge through lesson planning. International Journal of Science Education, 39(12), 1690–1714. [Google Scholar] [CrossRef]
 - Sun, J., Ma, H., Zeng, Y., Han, D., & Jin, Y. (2023). Promoting the AI teaching competency of K-12 computer science teachers: A TPACK-based professional development approach. Education and Information Technologies, 28, 1509–1533. [Google Scholar] [CrossRef]
 - Tan, X., Cheng, G., & Ling, M. H. (2025). Artificial intelligence in teaching and teacher professional development: A systematic review. Computers and Education: Artificial Intelligence, 8, 100355. [Google Scholar] [CrossRef]
 - Thyssen, C., Huwer, J., Irion, T., & Schaal, S. (2023). From TPACK to DPACK: The “digitality-related pedagogical and content knowledge”-model in STEM-education. Education Sciences, 13(8), 769. [Google Scholar] [CrossRef]
 - Tondeur, J., Scherer, R., Siddiq, F., & Baran, E. (2018). Enhancing pre-service teachers’ technological pedagogical content knowledge (TPACK): A mixed-method study. Educational Technology Research and Development, 68, 319–343. [Google Scholar] [CrossRef]
 - Tondeur, J., van Braak, J., Ertmer, P. A., & Ottenbreit-Leftwich, A. (2017). Understanding the relationship between teachers’ pedagogical beliefs and technology use in education: A systematic review of qualitative evidence. Educational Technology Research and Development, 65(3), 555–575. [Google Scholar] [CrossRef]
 - United Nations Educational, Scientific and Cultural Organization (UNESCO). (2023). Guidance for generative AI in education and research. UNESCO. [Google Scholar]
 - Valtonen, T., Sointu, E. T., Mäkitalo-Siegl, K., & Kukkonen, J. (2015). Developing a TPACK measurement instrument for 21st century pre-service teachers. Seminar.net—International Journal of Media, Technology and Lifelong Learning, 11(2), 87–100. [Google Scholar] [CrossRef]
 - Voogt, J., Fisser, P., Pareja Roblin, N., Tondeur, J., & van Braak, J. (2013). Technological pedagogical content knowledge—A review of the literature. Journal of Computer Assisted Learning, 29(2), 109–121. [Google Scholar] [CrossRef]
 - Voogt, J., Westbroek, H., Handelzalts, A., Walraven, A., McKenney, S., Pieters, J., & De Vries, B. (2011). Teacher learning in collaborative curriculum design. Teaching and Teacher Education, 27(8), 1235–1244. [Google Scholar] [CrossRef]
 - Walter, Y. (2024). Embracing the future of Artificial Intelligence in the classroom: The relevance of AI literacy, prompt engineering, and critical thinking in modern education. International Journal of Educational Technology in Higher Education, 21, 15. [Google Scholar] [CrossRef]
 - Wenger, E., McDermott, R., & Snyder, W. M. (2002). Cultivating communities of practice: A guide to managing knowledge. Harvard Business Press. [Google Scholar]
 - Willermark, S. (2018). Technological pedagogical and content knowledge: A review of empirical studies published from 2011 to 2016. Journal of Educational Computing Research, 56(3), 315–343. [Google Scholar] [CrossRef]
 - Xie, M., & Luo, L. (2025). The status quo and future of AI-TPACK for mathematics teacher education students: A case study in Chinese universities [Preprint]. arXiv, arXiv:2503.13533. [Google Scholar]
 - Yang, Y.-F., Tseng, C. C., & Lai, S.-C. (2024). Enhancing teachers’ self-efficacy beliefs in AI-based technology integration into English speaking teaching through a professional development program. Teaching and Teacher Education, 144, 104582. [Google Scholar] [CrossRef]
 - Young, J. R. (2016). Cultural implications in educational technology: A survey. In Handbook of research on educational communications and technology (pp. 1–10). Springer. [Google Scholar]
 - Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education—Where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 39. [Google Scholar] [CrossRef]
 
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).