Article

Chilean Teachers’ Knowledge of and Experience with Artificial Intelligence as a Pedagogical Tool

1 Faculty of Social Sciences, University of Chile, Santiago 7800284, Chile
2 Center for Advanced Research in Education, Institute of Education, University of Chile, Santiago 8330014, Chile
* Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(10), 1268; https://doi.org/10.3390/educsci15101268
Submission received: 21 August 2025 / Revised: 19 September 2025 / Accepted: 19 September 2025 / Published: 23 September 2025
(This article belongs to the Special Issue Digital Competence of Educators: Opportunities and Challenges)

Abstract

Although Artificial Intelligence (AI) is transforming teachers’ knowledge and professional practice, its potential has yet to be fully realized. To incorporate AI effectively into pedagogical contexts, it is essential that teachers possess the knowledge necessary to guide its responsible use. However, in Latin America, there remains limited empirical evidence to support this process. To address this gap, this empirical study analyzes teachers’ knowledge of AI using the Intelligent-TPACK framework, which includes an ethical dimension. A validated and adapted questionnaire was administered to 709 primary and secondary school teachers from the Metropolitan Region of Chile, using a non-probability sampling method. The sample is compositional–descriptive in nature for the study variables and is not statistically representative of the broader population. Data were analyzed through descriptive and inferential statistical methods. The results reveal mixed levels of knowledge—slightly higher in technological knowledge yet lower in terms of integration and ethical awareness. Significant differences were found by gender, age, teaching level, and subject area. Regression models identified teaching experience, gender, and educational level as the most consistent predictors. Additionally, cluster analysis revealed four exploratory professional profiles characterized by varying degrees of knowledge. These findings are discussed in light of teacher training needs and aim to inform the development of professional learning programs better aligned with the actual demands of the teaching profession.

1. Introduction

Artificial Intelligence (AI) has forcefully entered the global conversation. Between 2022 and 2023 alone, human interactions with AI systems increased by more than 400% (Maslej et al., 2024). In response, governments have expressed growing concern about preparing citizens to face, with confidence and responsibility, a future in which AI will be increasingly present (Berryhill et al., 2019; Lorenz et al., 2023; Cazzaniga et al., 2024). This concern is also reflected in the educational field, where teachers and students are already using various AI tools such as natural language processing, intelligent agents, computer vision, adaptive learning, data mining, and speech recognition, among others (Holmes & Porayska-Pomsta, 2023; Miao et al., 2021; Yim & Su, 2025; Williamson & Eynon, 2020; OECD, 2023). For teaching practice, the use of AI tools goes beyond generative AI or large language models such as ChatGPT.
Recent reviews show that some natural language processing tools allow teachers to save time on routine management and planning tasks, such as rubric design, diversification of assessment questions, automatic grading, and the preparation of new teaching materials. This enables teachers to dedicate more time to other activities such as feedback and assessment (Grassini, 2023; Labadze et al., 2023; Yan et al., 2024; Celik, 2023; Zawacki-Richter et al., 2019). These tools can also be leveraged during lessons to answer students’ questions and provide complementary explanations to those of the teacher, fostering peer interaction, conversation, and exchange of ideas, and thereby promoting more collaborative learning environments (Adel et al., 2024; Labadze et al., 2023; Lo, 2023).
Other reviews have examined the use of computer vision tools, highlighting their usefulness in facilitating real-time monitoring of classroom activities, as well as in analyzing students’ facial expressions to identify emotions and estimate levels of engagement during different moments of a lesson (Dimitriadou & Lanitis, 2023; Anwar et al., 2023). It has also been observed that some online learning platforms, combined with adaptive systems or technologies that use data mining, give teachers the ability to personalize instruction based on each student’s needs and pace. This feature helps teachers broaden their assessment strategies and act promptly to anticipate students’ low academic performance (Dimitriadou & Lanitis, 2023; M. E. Dogan et al., 2023).
Given its potential, integrating AI tools into Education (AIED) offers a promising opportunity for professional teaching practice. It allows for better time management, broader access to knowledge, easier dissemination of information, and the promotion of more effective and personalized learning tailored to each individual. This has positioned AI as a key driver for advancing the future of education (Miao et al., 2023; UNESCO, 2019; Miao et al., 2021, 2022; Zhang & Aslan, 2021). Maximizing the benefits of AIED also involves engaging with the various aspects of teaching work. Usually, teachers gain more from AI when their professional development aligns with pedagogical goals and practical needs (S. Dogan et al., 2025). In this regard, beyond just the technological elements, AIED requires rethinking new scenarios to include AI in different parts of schooling such as management, classroom environment, teaching strategies, curriculum, assessment, educational needs, and the teacher’s professional role itself (Stolpe & Hallström, 2024).
The tasks teachers undertake both inside and outside the classroom are complex and involve various types of pedagogical, content, and technological knowledge specifically related to these areas (Shulman, 1986; Mishra & Koehler, 2006; Koehler & Mishra, 2008; Mishra, 2019; Mishra et al., 2023; Ning et al., 2024). Several studies suggest that teachers with high technological competence in AI are better equipped to select appropriate technological tools for educational purposes. This allows them, for example, to personalize instruction and provide timely feedback (Edwards et al., 2018; Popenici & Kerr, 2017). Conversely, teachers lacking these skills often fail to fully utilize the pedagogical opportunities that these tools offer (Joo et al., 2018). Thus, a strong pedagogical and didactic foundation in AI can enable teachers to improve teaching methods, increase motivation, and boost students’ academic achievement (Alé et al., 2025; Alé & Arancibia, 2025; Cavalcanti et al., 2021; Y. Wang et al., 2021). AI will not replace teachers’ work, but it has the potential to transform many areas of professional practice (Seufert et al., 2021). Therefore, for teachers to creatively and effectively utilize AI in their teaching and achieve successful educational integration, they must combine content, pedagogical, and technological knowledge (Celik, 2023; Mishra et al., 2023).
On the other hand, although AIED provides valuable opportunities for teaching and learning, it also introduces significant ethical challenges for education (Holmes et al., 2022; Stolpe & Hallström, 2024). Issues such as data privacy, algorithmic bias, discrimination, fairness, automation, access democratization, and respect for human rights are central topics in current debates (Almusharraf & Alotaibi, 2022; Bulathwela et al., 2024; Shum & Luckin, 2019; Kitto & Knight, 2019; European Commission, 2020; Williamson & Eynon, 2020; OECD, 2023). Some of these ethical concerns stem from a lack of transparency among technology developers, which reduces perceptions of fairness in these systems and, consequently, affects teachers’ and students’ trust in their use (Shin & Park, 2019).
In summary, AIED is transforming professional knowledge, with teachers’ technological expertise becoming just as important as their ability to communicate and integrate pedagogical, disciplinary, and ethical knowledge. This combination of skills can help ensure the responsible and safe use of AI.
Despite the importance of the teaching role in the processes of AI integration in education (OECD, 2023), little is still known about teachers’ knowledge of the subject (Sun et al., 2023; Kim et al., 2022; Luckin et al., 2022; Tan et al., 2024), and there is limited empirical evidence linking such knowledge to the AI ethical aspects involved (Celik, 2023; Holmes et al., 2022). This lack of evidence is reflected in the gap between the AI technology training provided by educational institutions and the actual needs expressed by teachers (Cukurova et al., 2024; Chiu & Chai, 2020; Ng et al., 2023; Tan et al., 2024; Zawacki-Richter et al., 2019).
In Latin American research, this topic is still emerging, unlike in countries such as China, the United States, and the United Kingdom (Maslej et al., 2024). Studies in Latin America have largely neglected the use of AI tools and ethics within the framework of technological, pedagogical, and content knowledge (TPACK), as seen in works by Sierra et al. (2024) and Kadluba et al. (2024). When they do address it, the focus is often limited to specific contexts (e.g., Castro et al., 2025). Additionally, there is limited evidence regarding teachers’ professional knowledge about using AI tools in their practice. This is especially true in Chile, where it remains unclear how teachers utilize AI’s potential in teaching or whether they are aware of how to manage it ethically. Thus, there is a need for studies that explore this knowledge and help tailor training and professional development to each specific context. The study presented in this article, to our knowledge, is the first large-scale evaluation of teachers’ AI-related technological, pedagogical, content, and ethical knowledge in Chile.
Considering this background, the objective of the study presented in this article was to analyze trends in a large sample of teachers regarding technological, pedagogical, content, and ethical knowledge related to the use of AI tools in their professional practice.

2. Conceptual Framework

Below, we describe the relationship between the main concepts that make up the study’s theoretical framework, as introduced in the preceding sections: the Technological, Pedagogical, and Content Knowledge (TPACK) framework and Intelligent-TPACK, one of its most recent adaptations.

2.1. TPACK Framework

The Technological, Pedagogical, and Content Knowledge (TPACK) framework was conceptualized by Mishra and Koehler (2006) based on Shulman’s (1986, 1987) Pedagogical Content Knowledge (PCK) conceptual framework. PCK refers to the knowledge needed to transform disciplinary content into forms that are comprehensible and teachable to students through appropriate pedagogical strategies.
At its core, TPACK (see Figure 1) relates and integrates these three types of knowledge (PK, CK, and TK), while recognizing that each of them is specific and distinguishable from the others (Mishra & Koehler, 2006; Mishra, 2019).
The TPACK framework identifies new types of specific knowledge related to technology, pedagogy, and content. These are Technological Content Knowledge (TCK), Technological Pedagogical Knowledge (TPK), and Technological Pedagogical Content Knowledge (TPACK).
Technological Content Knowledge (TCK) refers to the understanding that allows teachers to represent theoretical concepts through technology, especially in creating new representations of theoretical and mental structures. It is a type of knowledge separate from pedagogical knowledge. Technological Pedagogical Knowledge (TPK) involves understanding general pedagogical (and didactic) practices that teachers can use while integrating technologies. The focus is on how technology can support various teaching and learning goals. It is independent of specific content and applicable across any subject area. In turn, Technological Pedagogical Content Knowledge (TPACK) represents an integrated understanding that helps teachers select, adapt, and use specific technologies to represent and transform particular content using relevant pedagogical or didactic strategies. It includes aligning objectives, content, methods, and assessments with the selected technology, students’ characteristics, and the teaching environment (Mishra & Koehler, 2006).
TPACK suggests that the process of technology integration is contextually based and shaped by environmental factors (Mishra, 2019). For instance, this process is influenced by teachers’ beliefs about how students learn, their hands-on experiences with what works or does not in their classrooms, different views on the role of technology in learning, teaching methods, and factors related to educational communities, among others.
In recent years, TPACK has been flexibly adapted to new contexts and educational settings, being applied across various subjects, teaching modalities, strategies, and professional profiles. For instance, Polly (2024) applied it in primary school mathematics classrooms with in-service teachers using educational platforms, simulators, and teaching activities. His study confirmed that TPACK can be implemented with diverse technologies and that its effectiveness is largely mediated by school context and teachers’ beliefs. Kuo and Kuo (2024) implemented an adaptation of TPACK-G, based on the studies of Hsu et al. (2013), to evaluate pre-service teachers’ knowledge in multiple areas when using digital games, finding that factors such as gender and prior experience with video games influenced their pedagogical and content knowledge levels. Cowan and Farrell (2023) applied TPACK with pre-service teacher mentors through virtual reality environments and found that, although their experience with this technology was limited, they recognized its pedagogical and didactic potential, as well as students’ role in the integration process. Krug et al. (2023) combined various technological tools to model in 3D and create augmented reality applications in a seminar with pre-service science teachers (physics, chemistry, and biology), helping them improve self-efficacy, motivation, and confidence in using this technology for science teaching.

2.2. Intelligent-TPACK Framework

The Intelligent-TPACK framework was proposed by Celik (2023) to adapt the traditional TPACK model to the uses of major AI tools, incorporating activities related to automation and adaptive feedback. In addition, this framework expands the dimensions of the original model by incorporating ethical knowledge regarding the use of AI in education, so that teachers are able to assess whether they can recognize bias, ensure transparency and accountability, and promote equitable and fair learning.
Building on this, the Intelligent-TPACK framework (see Figure 2) proposes the existence of five new types of specific knowledge linked to pedagogy (PK) and content (CK), but within a context that is sensitive to the ethical dimension.
According to Celik (2023), each of the dimensions is described as follows:
“Intelligent-TK tackles the knowledge to interact with AI-based tools and to use fundamental functionalities of AI-based tools. This component aims to measure teachers’ familiarization level with the technical capacities of AI-based tools.
Intelligent-TPK addresses the knowledge of pedagogical affordances of AI-based tools, such as providing personal and timely feedback and monitoring students’ learning. Additionally, Intelligent-TPK evaluates teachers’ understanding of alerting (or notification) and how they interpret messages from AI-based tools.
Intelligent-TCK focuses on the knowledge of field-specific AI tools. It assesses how well teachers incorporate AI tools to update their content knowledge. This component also addresses teachers’ understanding of particular technologies that are best suited for subject-matter learning in their specific field.
Intelligent-TPACK is considered the core area of knowledge. It evaluates teachers’ professional knowledge to choose and use appropriate AI-based tools (e.g., intelligent tutoring systems) for implementing teaching strategies (e.g., monitoring and providing timely feedback) to achieve instructional goals in a specific domain.
Ethics evaluates the teacher’s judgment regarding the use of AI-based tools. The evaluation focuses on transparency, fairness, accountability, and inclusiveness.”
Similarly to TPACK, successfully integrating AI tools into educational practice requires teachers to have a nuanced understanding of how these five components interact. This study specifically focuses on examining the knowledge components related to the technological aspect of Intelligent-TPACK (TK, TCK, TPK, and TPACK).

3. Methods

3.1. Design and Research Questions

This is primarily a quantitative survey-based study, with a descriptive-exploratory scope and a cross-sectional design. To achieve the proposed aim, the study sought to answer the following three research questions:
  • What levels of technological, pedagogical, content, and ethical knowledge are reported by a sample of teachers from the Metropolitan Region (Chile) regarding the use of AI in education?
  • Are there significant differences in teachers’ knowledge of AI according to sociodemographic, professional, and disciplinary variables such as gender, age, or subject taught?
  • What professional teacher profiles emerge from the combination of technological, pedagogical, content, and ethical knowledge regarding the integration of AI in their professional practice?
Answering these questions will enable the identification of trends and gaps in teachers’ knowledge about AI, informing the determination of teacher training and professional development needs.
To implement the research design, we followed three main procedures. First, an extensive literature review was carried out to select, adapt, and validate the Intelligent-TPACK instrument. Second, the validated instrument was administered to a large sample of teachers working in the Metropolitan Region of Chile. Finally, the main trends, factors, and profiles in the responses were analyzed. A detailed description of the three procedures is presented below.

3.2. Implementation of the Adapted Questionnaire

3.2.1. Population and Study Sample

The target population comprised approximately 70,000 active primary and secondary school teachers, distributed across nearly 2500 schools in the Metropolitan Region of Chile (Mineduc, 2024). To contact participants, a public database of institutional emails was obtained from official school websites. Invitations were sent via email, including a link to the Google Forms questionnaire, along with an explanation of the study’s aims, benefits, and ethical considerations. Additional invitations were extended during seminars and conferences attended by teachers, as well as through social media.
The questionnaire was piloted between November and December 2024 with an initial sample of 42 teachers from the Metropolitan Region. This process helped refine the items and scales. Subsequently, the main data collection took place between January and July 2025, using the refined version of the questionnaire, which was distributed via institutional email and shared during academic events and on social media platforms.
Data collection was conducted in the Metropolitan Region for two reasons. First, this region is home to approximately 10 of Chile’s 19 million inhabitants and nearly 50% of the active primary and secondary teaching workforce (Mineduc, 2024). It also has a high density of schools, teacher training centers, and professional development networks, which facilitated the data collection process. Second, since this research was conducted within the framework of a doctoral thesis project, limiting the sample to this region ensured logistical and temporal feasibility. We acknowledge, however, that this decision poses a relevant limitation for the generalization of findings to other contexts. Nonetheless, it provides a fairly broad view of the Chilean teacher population.
Since randomness was not controlled in the invitation process, the sampling design was non-probability (self-selected volunteers contacted by institutional email, events, and social media).
To ensure that the study had a sufficiently large base for analysis, we calculated the finite-population sample size as a planning heuristic (Z = 1.96, p = 0.50, e = 0.04), which yielded a target of n = 596 using Formula (1) (L. Cohen et al., 2018; Tillé, 2020):
n = (Z² · p · (1 − p) / e²) · N / [(N − 1) + Z² · p · (1 − p) / e²]    (1)
It is important to emphasize that this computation assumes simple random sampling. Because our design was non-probabilistic, the formula is reported only as a reference for planning and does not justify reporting margins of error or confidence intervals for population inference.
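For reference, the planning computation in Formula (1) can be reproduced directly. The sketch below (illustrative Python, not part of the study’s analysis pipeline) uses the reported parameters (Z = 1.96, p = 0.50, e = 0.04) and the approximate population of 70,000 teachers:

```python
from math import ceil

def finite_population_sample_size(N, Z=1.96, p=0.50, e=0.04):
    """Target sample size under a simple-random-sampling assumption,
    with a finite-population correction (Formula (1))."""
    n0 = (Z**2 * p * (1 - p)) / e**2        # infinite-population estimate
    return ceil(n0 * N / ((N - 1) + n0))    # finite-population correction

print(finite_population_sample_size(70_000))  # -> 596
```

As the text notes, this figure served only as a planning heuristic; it does not confer probability-sample properties on the achieved sample.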
Instead, the achieved sample should be interpreted as compositional–descriptive of the study variables. Descriptive estimates and comparative tests therefore apply strictly within the achieved sample, and not as representative parameters of the teacher population.
Additionally, efforts were made to ensure diversity in terms of demographic variables such as gender, educational level (primary or secondary), teachers’ age, and school type (public or private). Diversity was weighted according to the demographic structure of teachers in the Metropolitan Region. Participation quotas were established based on sociodemographic variables relevant to the Chilean school system, such as gender, educational level, administrative dependence, geographical location, etc.
By legal criteria, teachers working in levels prior to primary education were excluded, in line with Chilean regulations restricting the use of digital technologies with children of those ages.
A total of 712 teachers responded to the survey, exceeding the estimated minimum sample size of 596 participants (see Table 1). Of this total, three responses were excluded from the analysis due to the absence of informed consent, resulting in a final sample of N = 709 valid responses. Additionally, for the gender variable, eight responses that did not fit within the binary categories of male or female were excluded from comparative analyses.
Overall, the obtained sample reflected the distribution of the teacher population in the Metropolitan Region of Chile with reasonable consistency. However, in the case of the variable “school type,” the sample showed a discrepancy compared to the reference population.

3.2.2. Questionnaire Characteristics

The Intelligent-TPACK questionnaire by Celik (2023) was selected because it aligns with the aim of this study and incorporates both ethical aspects and updated pedagogical uses of AI, including feedback and personalized learning. The questionnaire consists of 27 items: five for TK, seven for TPK, four for TCK, seven for Intelligent-TPACK, and four for the ethics dimension. Each item was translated into Spanish while preserving the semantic structure of each statement, in line with the original conceptual definitions. This adaptation included slight wording adjustments to ensure clarity and the use of terms relevant to the local educational context. The Likert scale consists of 5 points, with options ranging from “strongly disagree” to “strongly agree”. In this study, the midpoint option (“neither agree nor disagree”) was removed to obtain clearer response trends. Thus, a 4-point scale was used with the following values: 1 = strongly disagree, 2 = disagree, 3 = agree, and 4 = strongly agree.
The questionnaire was expanded to include two additional sections. The first, an introductory section, contained closed demographic questions to gather teacher characteristics such as age, gender, subject taught, teaching level (primary or secondary), years of experience, and some school characteristics. The second section, following the demographic section, combined open and closed questions to enable teachers to describe their experiences with AI tools across various topics—such as climate change, gender equity, global citizenship, health, and emotional well-being—in different activities, including information search, content creation, rewriting, translation, conversation, data analysis, personalization, and automation, as well as in different areas of teaching, like lesson planning, learning environments, didactics, assessment, curriculum development, and professional responsibilities.
Finally, this adapted version was piloted with 42 Chilean teachers, who completed the questionnaire and provided qualitative feedback. Validation focused primarily on Principal Components Analysis (PCA) with Varimax rotation and item reduction to define a new abbreviated version of the instrument. Then, using the main sample (N = 709), we conducted a Confirmatory Factor Analysis (CFA) in AMOS 26 (with maximum likelihood and 5000 bootstrap resamples). In the CFA, we evaluated standardized loadings, significance (p < 0.001), and global fit using χ2, df, χ2/df, CFI, TLI, NFI, GFI, AGFI, RMSEA (90% CI), and SRMR. Detailed evidence (matrices, item-level indices, and proposed modifications) is presented in Appendix A.
Additionally, to reinforce internal consistency, we conducted reliability tests using Cronbach’s alpha and McDonald’s ω, as well as tests of convergent and discriminant validity (CR, AVE).
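As an illustration of the internal-consistency computation, Cronbach’s alpha can be obtained directly from an item-score matrix. This is a minimal sketch, not the authors’ actual analysis script, and the input matrix is hypothetical:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)
```

For perfectly consistent items the statistic reaches 1.0; values above the 0.70 threshold cited in the text are conventionally taken as acceptable.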

3.2.3. Data Analysis Strategies

The analysis of closed-ended responses from Intelligent-TPACK was conducted using descriptive and inferential statistical strategies, processed in R 3.6.0+, RStudio 2025.09.0+387, SPSS 29, and AMOS 26.
First, to evaluate internal consistency, we calculated Cronbach’s alpha for each dimension. Second, to determine the distribution type of the data and select the most appropriate tests, normality assumptions were checked using the Kolmogorov–Smirnov and Shapiro–Wilk tests. Third, to identify statistically significant differences between groups of teachers, comparative analyses were performed: the Mann–Whitney U test was applied for dichotomous variables (gender, educational level) and the Kruskal–Wallis H test for variables with more than two categories (administrative dependence, age, teacher evaluation level, and subject taught). Additionally, to compare scores among the dimensions of the Intelligent-TPACK model, the Friedman test was used, complemented by Wilcoxon post hoc analyses, which identified specific contrasts within the model itself. Fourth, to examine the strength and direction of associations between continuous variables (age and years of experience) and the model dimensions, Spearman correlations were calculated. Fifth, to explore the predictive capacity of different variables for each model dimension, multiple linear regression models were developed, reporting ANOVA results, the coefficient of determination (R2), standardized betas, and collinearity diagnostics. Finally, a cluster analysis using the k-means algorithm was conducted, which identified and described four professional teacher profiles based on their responses.
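The final profiling step can be sketched with scikit-learn’s k-means implementation. The score matrix below is simulated for illustration only (709 teachers by the five I-TPACK dimensions); it is not the study data, and the resulting centers are not the profiles reported later:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical dimension scores on a 1-4 scale: TK, TPK, TCK, TPACK, Ethics
scores = rng.uniform(1, 4, size=(709, 5))

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scores)
profiles = km.cluster_centers_   # mean dimension scores per profile (4 x 5)
labels = km.labels_              # profile assignment for each teacher
```

Inspecting the cluster centers dimension by dimension is what allows each profile to be described in terms of higher or lower knowledge levels.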
Since the comparative analyses were performed using non-parametric tests, all descriptive summaries report the median (Mdn) and interquartile range (IQR) by dimension and group, with means and standard deviations added only for reference. We also calculated and reported appropriate effect sizes for each test: r for Mann–Whitney, ε2 for Kruskal–Wallis, and Kendall’s W for Friedman. Furthermore, to compare observed levels against theoretical reference points (2.5 and 3.0), one-sample Wilcoxon tests with effect size r were applied.
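The effect-size computation for the group comparisons (r = Z/√N for Mann–Whitney, as described above) can be sketched with scipy. This is an illustrative helper, not the study’s script; the normal approximation below ignores tie corrections:

```python
import numpy as np
from scipy import stats

def mann_whitney_with_r(x, y):
    """Mann-Whitney U test plus the effect size r = |Z| / sqrt(N)."""
    u, p = stats.mannwhitneyu(x, y, alternative="two-sided")
    n1, n2 = len(x), len(y)
    mu = n1 * n2 / 2                                  # mean of U under H0
    sigma = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)     # SD of U (no ties)
    z = (u - mu) / sigma
    r = abs(z) / np.sqrt(n1 + n2)
    return u, p, r
```

By convention, r around 0.1, 0.3, and 0.5 marks small, medium, and large effects, which is how the group differences reported in Section 4 can be sized.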

4. Results

First, as shown in Table 2, the main results of the internal consistency tests for each dimension of the questionnaire—estimated using Cronbach’s α and McDonald’s ω (maximum likelihood and 5000 bootstrap resamples)—were favorable. In all cases, the values exceeded the minimum recommended threshold of 0.70, supporting the reliability of the subscales (Pallant, 2020; George & Mallery, 2021). All subscales reached acceptable or very good levels of internal consistency. The TK dimension yielded the highest indices (α = 0.886; ω = 0.891), indicating very good reliability. TPK showed acceptable values (α = 0.793; ω = 0.805), and TCK, TPACK, and Ethics also demonstrated very good internal consistency, though slightly lower compared to other dimensions. Therefore, these results indicate that each group of items adapted from the Intelligent-TPACK questionnaire presents satisfactory internal reliability.
Meanwhile, the results of the CFA showed significant standardized factor loadings above 0.50 for all items, supporting convergent validity (Hair et al., 2019). The correlations between factors (r, r2) ranged from moderate to high, with several exceeding 0.85. The global fit indices indicated an acceptable fit in CFI/IFI (CFI = 0.921; IFI = 0.921; TLI = 0.896) and a low SRMR (0.042), along with a high RMSEA (RMSEA = 0.113; 90% CI [0.106–0.120]), suggesting that the model could be improved. The complete CFA results are reported in Appendix A.
The means obtained for each dimension ranged between 2.099 (Ethics) and 2.665 (TK), on a 1-to-4 Likert scale, suggesting a general trend between low and moderate in teachers’ perceptions regarding their mastery of technological, pedagogical, and content knowledge related to AI tools.
To determine the type of data distribution and define whether parametric or non-parametric tests should be used in subsequent analyses, three normality tests were applied: Kolmogorov–Smirnov with Lilliefors correction, Shapiro–Wilk, and D’Agostino–Pearson (Field, 2024). The Kolmogorov–Smirnov and Shapiro–Wilk results are presented in Table 3.
The results for the five dimensions indicated significant values (p < 0.001), which allows us to reject the null hypothesis of normality in data distribution. Therefore, the distribution of the data is not normal, justifying the use of non-parametric statistics in subsequent analyses.
At the same time, considering that the Kolmogorov–Smirnov and Shapiro–Wilk tests are sensitive to large samples and tend to reject the null hypothesis of normality even with slight deviations, we decided to complement the normality analysis with the omnibus D’Agostino–Pearson test, which integrates sample size, skewness, and kurtosis.
The omnibus D’Agostino–Pearson test results also confirmed the rejection of normality in all five I-TPACK components: TK (K2 = 30.82, p < 0.001), TPK (K2 = 9.42, p = 0.009), TCK (K2 = 12.63, p = 0.002), TPACK (K2 = 25.47, p < 0.001), and Ethics (K2 = 24.00, p < 0.001). All results were significant at p < 0.01, with TK, TPACK, and Ethics reaching p < 0.001. Consequently, given that all three tests consistently rejected the normality hypothesis, non-parametric tests were used for the comparative analyses.
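An omnibus test of this kind is available as scipy.stats.normaltest, which combines skewness and kurtosis into the K² statistic. The sketch below runs it on simulated 4-point Likert-style data (the category probabilities are invented for illustration; this is not the study sample):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Simulated 4-point Likert responses for one dimension (n = 709)
scores = rng.choice([1, 2, 3, 4], size=709, p=[0.25, 0.35, 0.25, 0.15])

k2, p = stats.normaltest(scores)   # D'Agostino-Pearson omnibus K^2
print(f"K2 = {k2:.2f}, p = {p:.4g}")
```

Because bounded, discrete Likert scores are strongly platykurtic, the test rejects normality decisively at this sample size, mirroring the pattern reported above.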
In the following sections, the specific results are organized according to the research questions.

4.1. What Levels of Technological, Pedagogical, Content, and Ethical Knowledge Are Reported by a Sample of Teachers from the Metropolitan Region (Chile) Regarding the Use of AI in Education?

Based on the main sample (N = 709), general descriptive statistics were calculated for each dimension of the model (TK, TPK, TCK, TPACK, and Ethics). Table 4 presents the means, standard deviations (SD), medians (Mdn), interquartile ranges, skewness and kurtosis coefficients, along with 95% confidence intervals for the mean. These measures were reported to support the non-parametric analyses used in the subsequent sections.
As shown in Table 4, the medians for TPK, TCK, TPACK, and Ethics are around 2.00, indicating a low to moderate level of perceived knowledge. Only the TK component exceeds the theoretical midpoint (2.5). Similarly, the mean values are highest for TK = 2.67 (SD = 0.91; CI95% [2.60–2.73]), followed by TPK = 2.52 (SD = 0.81; [2.46–2.58]), and lower values for TCK = 2.45 (SD = 0.86; [2.38–2.51]), TPACK = 2.23 (SD = 0.88; [2.16–2.29]), and Ethics = 2.10 (SD = 0.84; [2.04–2.16]). On a 1–4 scale, this reflects predominantly low to moderate levels.
Additionally, general descriptive statistics were calculated for each of the variables analyzed (gender, educational level, school type, age, and school subject). Median (IQR) and Mean (SD) results are presented in the Appendix B tables.
For the gender variable, males and females presented nearly identical medians across all dimensions (Mdn ≈ 2.67 for TK, TPK, and TCK; 2.33 for TPACK and Ethics; IQR = 1.00), suggesting very similar central distributions. However, when observing the means and standard deviations, slight differences emerged in favor of males, who reported slightly higher scores in TK (2.85 (0.84) vs. 2.57 (0.93)) and in Ethics (2.28 (0.91) vs. 2.01 (0.79)).
Regarding educational level, secondary school teachers tended to outperform primary school teachers. The median for secondary was TK = 3.00 (1.00) and TPK = 2.67 (1.33), while for primary it was 2.33 (1.00) in both dimensions. This pattern is also seen in the means, with TK for secondary teachers at 2.84 (0.86) vs. 2.42 (0.92) for primary, and TPK at 2.62 (0.81) vs. 2.37 (0.78).
For school type, the highest values were found in fully private institutions (Private 3), with medians of 3.00 in TK, TPK, and TCK, and 2.67 in TPACK and Ethics, along with means such as TK = 2.90 (0.85) and TCK = 2.69 (0.79). In contrast, municipal schools (Public 1) showed the lowest values, with medians of 2.33 and mean scores around TK = 2.70 (0.84).
Age revealed an interesting pattern. The medians suggest that the 50–60 age group recorded the highest values, with 3.00 in TK, TPK, and TCK, and 2.67 in TPACK and Ethics. However, the means reveal a different trend: younger groups, aged 20–30 and 30–40, achieved better results in TK, with 3.05 (0.73) and 2.95 (0.78), respectively, compared to the 50–60 group with only 2.32 (0.94).
Regarding the school subject variable, the subject Science for Citizenship stood out, with a median of 4.00 (IQR = 1.50–2.00) and high means in TK (3.11 (1.07)) and TPK (3.00 (1.26)). Biology also stood out, with Mdn = 3.33 and TK = 3.14 (0.78), as did Philosophy, with Mdn = 3.33 and TK = 3.13 (0.76). On the opposite end, Indigenous Cultures presented the lowest scores and greatest dispersion, with medians around 1.33–1.50 and TK = 1.92 (1.42).

4.2. Are There Significant Differences in Teachers’ Knowledge of AI According to Sociodemographic, Professional, and Disciplinary Variables Such as Gender, Age, or Subject Taught?

4.2.1. Group Differences Analysis

Possible significant differences in Intelligent-TPACK responses were analyzed according to teachers’ sociodemographic and professional variables. Since the normality tests previously conducted indicated significant deviations from a normal distribution, non-parametric tests were applied. For comparisons between two groups (e.g., gender or teaching level), the Mann–Whitney U test was used, while for variables with more than two categories (such as age, teacher evaluation band, subject, and school type), the Kruskal–Wallis H test was employed.
The results of the Mann–Whitney U test (see Table 5) were interpreted using a significance threshold of p < 0.01 to minimize the risk of Type I error in multiple comparisons (Field, 2024).
Using this criterion, statistically significant differences were found between males and females in four of the five model dimensions. The only dimension not meeting this stricter threshold was TPK (p = 0.013). Although the medians in this dimension are identical across genders, the means differ, suggesting a shift in the distributions. Moreover, the effect sizes were small (r = 0.09–0.14), indicating that although the gender differences are statistically significant, their magnitude is very small (these values can also be contrasted with those reported in Appendix B).
Regarding teaching level, the results showed statistically significant differences in all model dimensions, with secondary school teachers scoring higher than primary school teachers. Effect sizes here were somewhat larger (r = 0.14–0.23), in the small-to-moderate range.
As for the Kruskal–Wallis test results (Table 6), no statistically significant differences were observed by school type, and effect sizes were low in all cases (ε2 ≤ 0.014), indicating that this variable did not have a relevant impact on teachers’ perceptions.
In contrast, the variable age showed statistically significant differences across all five dimensions (p < 0.001), with small to moderate effect sizes (ε2 = 0.03–0.12).
Lastly, subject taught also presented significant differences in all dimensions (p ≤ 0.003), though with small effect sizes (ε2 = 0.028–0.079).
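The group comparisons above can be sketched in Python with SciPy on synthetic data (not the study data). The effect sizes follow the conventions used in the text: r = |Z|/√N for Mann–Whitney (here the Z is recovered via the normal approximation of U, without tie correction) and, as one common definition, ε² = H/(n − 1) for Kruskal–Wallis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Synthetic 1-4 scores for two gender groups and four age bands
males = rng.integers(1, 5, 200).astype(float)
females = rng.integers(1, 5, 300).astype(float)

u, p_u = stats.mannwhitneyu(males, females, alternative="two-sided")
n1, n2 = len(males), len(females)
# Normal approximation of U (no tie correction, so the z is approximate)
z = (u - n1 * n2 / 2) / np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
r = abs(z) / np.sqrt(n1 + n2)          # r = |Z| / sqrt(N)

age_bands = [rng.integers(1, 5, 100).astype(float) for _ in range(4)]
h, p_h = stats.kruskal(*age_bands)
eps2 = h / (sum(len(g) for g in age_bands) - 1)   # epsilon-squared = H / (n - 1)

print(f"Mann-Whitney: U = {u:.0f}, p = {p_u:.3f}, r = {r:.3f}")
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p_h:.3f}, eps2 = {eps2:.4f}")
```

Because the synthetic groups share one distribution, the effect sizes come out near zero; with the real data, the same formulas yield the r and ε² values reported in Tables 5 and 6.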
Additionally, to analyze possible differences in teachers’ self-perceptions across the Intelligent-TPACK components, Friedman’s non-parametric test for related samples was applied. The results indicated statistically significant differences among the model’s components (χ2 (4) = 651.309; p < 0.001), suggesting that score distributions were not equivalent across the five dimensions. This justified post hoc comparisons to identify which dimensions differed significantly from one another. For this, Wilcoxon tests with correction for the ten multiple comparisons were applied, and effect sizes were calculated to complement the interpretation (see Table 7).
The post hoc comparisons revealed significant differences across all pairs. TK scored the highest, and Ethics the lowest, with the largest gap recorded between these two (Z = −16.141, r = 0.61, large effect). Large effects were also found between TK and TPACK (Z = −14.748, r = 0.55) and between TPK and Ethics (Z = −15.121, r = 0.57). Moderate-to-large effects appeared between TCK and Ethics (r = 0.49) and between TPK and TPACK (r = 0.46). In contrast, the smallest differences were observed between TPK and TCK (Z = −3.623, r = 0.14, small effect) and between TPACK and Ethics (Z = −6.394, r = 0.24, small-to-medium effect).
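The Friedman test and Bonferroni-corrected Wilcoxon post hoc procedure can be sketched as follows (a minimal illustration on synthetic scores whose means mimic Table 4; the |Z| used for r is approximated back from each p-value rather than taken from SPSS output).

```python
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(1)
n = 150
# Synthetic related samples: five dimension scores per teacher (means from Table 4)
means = {"TK": 2.67, "TPK": 2.52, "TCK": 2.45, "TPACK": 2.23, "Ethics": 2.10}
data = {k: np.clip(rng.normal(m, 0.85, n), 1, 4) for k, m in means.items()}

chi2, p = stats.friedmanchisquare(*data.values())
print(f"Friedman chi2(4) = {chi2:.2f}, p = {p:.3g}")

alpha = 0.05 / 10  # Bonferroni correction for the ten pairwise comparisons
for a, b in combinations(data, 2):
    w, p_w = stats.wilcoxon(data[a], data[b])
    z_abs = abs(stats.norm.ppf(p_w / 2))   # approximate |Z| recovered from p
    r_ab = z_abs / np.sqrt(n)              # effect size r = |Z| / sqrt(N)
    flag = "sig." if p_w < alpha else "n.s."
    print(f"{a} vs {b}: p = {p_w:.3g}, r = {r_ab:.2f} ({flag})")
```

The largest r values appear for the pairs farthest apart in central tendency (e.g., TK vs. Ethics), matching the pattern of Table 7.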

4.2.2. Correlation Analysis

Since years of teaching experience were recorded as exact values, this variable was treated as continuous. By contrast, age was collected in predefined categories (e.g., 20–30, 30–40), so it was treated as ordinal in the comparative analyses. Spearman’s rank correlation was used because it does not assume normality and captures monotonic associations between variables. Nominal variables such as gender, school type, or subject were excluded from this analysis, as their categories are unordered. Table 8 summarizes the most relevant results.
All correlations were negative and statistically significant, indicating that both greater age and more years of teaching experience are associated with lower self-perceptions in each of the Intelligent-TPACK dimensions. The strongest relationship was observed in Technological Knowledge (TK) (rho = −0.349 and rho = −0.330), a moderate negative effect, suggesting that younger and less experienced teachers reported greater self-perceived knowledge in their technological handling of AI tools in professional practice. Small negative correlations were also observed in TCK and TPK (rho ≈ −0.20 to −0.22), indicating that older or more experienced teachers report relatively lower knowledge for integrating technology into their subject areas and teaching practices.
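A Spearman correlation of this kind can be sketched in a few lines (synthetic data with a mild negative trend; not the study data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
years = rng.uniform(0, 35, 500)   # synthetic years of teaching experience
# Synthetic TK self-ratings that decline mildly with experience
tk = np.clip(3.2 - 0.03 * years + rng.normal(0, 0.7, 500), 1, 4)

rho, p = stats.spearmanr(years, tk)
print(f"Spearman rho = {rho:.3f}, p = {p:.3g}")   # negative and significant
```

Because Spearman operates on ranks, the same call works whether the second variable is continuous (experience) or ordinal (age bands coded 1, 2, 3, …).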

4.2.3. Multiple Linear Regression

This multiple linear regression analysis was conducted to identify which sociodemographic and professional variables (such as age, years of experience, gender, school type, teaching level, career stage, and subject) significantly predict teachers’ perceptions in the Intelligent-TPACK model. We verified the main assumptions of the model: the Durbin–Watson values (1.61–1.73) confirmed the independence of errors; the Q–Q plots and residuals versus predicted values showed no relevant deviations from linearity or normality; homoscedasticity was supported by inspection of standardized residuals and by robust HC3 estimates; multicollinearity was low (VIF < 4; tolerance > 0.20); and no influential cases were detected.
To interpret the regression model, several complementary analyses were applied. First, ANOVA was used to determine whether the overall model was statistically significant, that is, whether the set of predictors significantly explained the dependent variable. Next, the adjusted coefficient of determination (Adjusted R2) was examined, indicating the percentage of variance explained by the model, adjusted for the number of predictors. Standardized coefficients (β) were then analyzed to identify which individual variables had significant predictive weight. Finally, collinearity indicators (VIF and tolerance) were reviewed to verify predictor independence and strengthen the robustness of the model.
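The two numeric diagnostics named above (Durbin–Watson and VIF) are simple enough to compute directly; the sketch below does so with NumPy on synthetic predictors (the variable names and the data-generating choices are illustrative assumptions, not the study’s data).

```python
import numpy as np

rng = np.random.default_rng(5)
n = 400
# Synthetic predictors: age, moderately correlated experience, gender dummy
age = rng.uniform(25, 60, n)
experience = 0.5 * age + rng.normal(0, 8, n)
gender = rng.integers(0, 2, n).astype(float)
X = np.column_stack([np.ones(n), age, experience, gender])
y = 3.4 - 0.02 * age + rng.normal(0, 0.6, n)   # synthetic TK-like outcome

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Durbin-Watson statistic: values near 2 suggest independent errors
dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

def vif(X, j):
    """VIF of column j: regress it on the remaining columns; VIF = SST/SSE."""
    others = np.delete(X, j, axis=1)
    b, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
    sse = np.sum((X[:, j] - others @ b) ** 2)
    sst = np.sum((X[:, j] - X[:, j].mean()) ** 2)
    return sst / sse

vifs = [vif(X, j) for j in (1, 2, 3)]   # skip the intercept column
print(f"Durbin-Watson = {dw:.2f}, VIFs = {[round(v, 2) for v in vifs]}")
```

With independently drawn errors and only moderate predictor overlap, the diagnostics fall in the acceptable ranges the paper reports (DW near 2, VIF < 4).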
As shown in Table 9, ANOVA confirmed that all multiple regression models were statistically significant (p < 0.001), indicating that the set of sociodemographic and professional variables improved the prediction of teachers’ perceptions in each Intelligent-TPACK dimension relative to a null model (without predictors); that is, at least one sample-level predictor explained a significant share of the variance.
The overall fit is complemented by the adjusted R2 values for each Intelligent-TPACK dimension. Because human behavior in the social sciences is influenced by many factors not measured here, relatively low adjusted R2 values are expected, and their interpretation should be contextualized.
Based on this criterion, the models show (see Table 10): TK with a medium effect (Adjusted R2 = 0.173), TCK with a small-to-medium effect (Adjusted R2 = 0.127), and TPK, TPACK, and Ethics with small effects (Adjusted R2 = 0.069, 0.074, and 0.090, respectively). Thus, sociodemographic and professional variables explain approximately 6.9% to 17.3% of the variance in teachers’ perceptions of their technological, pedagogical, content, and ethical knowledge of AI, with the largest shares in TK and TCK.
To deepen the understanding of these results, a detailed analysis was conducted to identify which sociodemographic and professional variables had greater or lesser predictive weight. Table 11 presents the standardized coefficients (β) and statistical significance values for each Intelligent-TPACK predictor.
The multiple regression across the five Intelligent-TPACK dimensions shows that for this sample, the most consistent predictors were teaching level, years of experience, and gender.
Age presented small negative effects in TK (β = −0.202; p = 0.002) and TCK (β = −0.161; p = 0.017), though not in the remaining dimensions.
Years of experience showed small negative effects in TK (β = −0.168; p = 0.009), TPK (β = −0.224; p = 0.001), and TCK (β = −0.141; p = 0.032), and a moderate effect in TPACK (β = −0.294; p < 0.001).
Being female was associated with lower scores, with small effects in TK (β = −0.142; p < 0.001) and Ethics (β = −0.139; p < 0.001).
Teaching level showed positive and significant effects across all dimensions except TPK. In contrast, variables such as school type and subject taught did not yield statistically significant effects in any dimension.
Collinearity indicators (tolerance > 0.2 and VIF < 4) ruled out redundancy issues among predictors, supporting the validity of the models.
Finally, these results suggest that, within this sample, being female and teaching at the primary level in Chile—regardless of school type or subject—were consistent predictors associated with lower self-perceptions of knowledge for using AI tools.

4.3. What Professional Teacher Profiles Emerge from the Combination of Technological, Pedagogical, Content, and Ethical Knowledge Regarding the Integration of AI in Their Professional Practice?

Cluster Analysis

To complement the previous statistical analyses, a cluster analysis was conducted to identify teacher professional profiles according to their levels of competence across the five dimensions of the Intelligent-TPACK. A combined strategy was applied, beginning with an exploratory hierarchical analysis using Ward’s method and Euclidean distance to observe the structure of natural groupings through a dendrogram (see Figure 3).
Internal validity was assessed using the silhouette measure of cohesion and separation obtained through the TwoStep Cluster procedure. The average silhouette score was 0.20, indicating weak to moderate separation among the four clusters. This suggests that although the profiles are distinguishable, they should be considered exploratory. Additionally, to assess the stability of the solution, the analysis was replicated in a random subsample comprising 50% of the cases. The cluster structure and the main patterns of the centers remained consistent with those of the full sample.
Based on these results, the final partition was defined using the K-means algorithm, with an optimal number of four clusters. The choice of four clusters was justified through visual analysis of the hierarchical dendrogram. According to Hair et al. (2019), this type of cut allows the optimal number of groups to be defined when a marked difference in intergroup variance is observed. In this case, the structure evidenced four well-differentiated groupings. The K-means algorithm was implemented in SPSS with default initialization (without setting an explicit seed), listwise deletion, and a maximum of 20 iterations (convergence criterion = 0.001).
The analysis was conducted on average scores per dimension, considering that all scales shared the same range (1 to 4), which allowed for direct comparison without additional standardization.
Before performing the cluster analysis, the integrity of the database was verified. No missing values were detected in the five dimensions. Descriptive statistics confirmed that all variables fell within the expected range of the Likert scale (1–4), with standard deviations below 1.0, indicating adequate variability. During the inspection of boxplots and extreme values, no anomalies were observed; therefore, no cases were removed for outlier detection.
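The two-stage strategy (exploratory Ward linkage, then a K-means partition) can be sketched in Python with SciPy on synthetic five-dimension profiles; the four seed centers below are illustrative assumptions loosely shaped like the profiles in Table 12, and the fixed seed is for reproducibility of the sketch only (the SPSS analysis used no explicit seed).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(11)
# Synthetic 5-dimension profiles (TK, TPK, TCK, TPACK, Ethics), 1-4 scale
centers = np.array([[3.3, 3.2, 3.2, 3.1, 3.0],   # high in all dimensions
                    [2.2, 2.1, 2.1, 2.0, 1.9],   # intermediate-low
                    [1.3, 1.3, 1.2, 1.1, 1.1],   # critical
                    [3.2, 2.2, 2.1, 2.0, 1.8]])  # high TK only
X = np.clip(np.vstack([c + rng.normal(0, 0.35, (80, 5)) for c in centers]), 1, 4)

# Exploratory step: Ward linkage on Euclidean distances (dendrogram inspection)
Z = linkage(X, method="ward")
ward_labels = fcluster(Z, t=4, criterion="maxclust")

# Final partition: k-means with k = 4
centroids, km_labels = kmeans2(X, 4, minit="++", seed=11)
print(np.round(centroids, 2))
```

Because all five dimensions share the same 1–4 range, the raw averages can be clustered directly, as the paper notes, without additional standardization.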
Additionally, to enhance the comparative analysis between clusters, the Kruskal–Wallis test was applied for each dimension, along with effect size calculations using Kruskal–Wallis η2. The results confirmed significant differences across all dimensions (TK: H = 536.92, p < 0.001; TPK: H = 494.18, p < 0.001; TCK: H = 538.43, p < 0.001; TPACK: H = 487.02, p < 0.001; Ethics: H = 456.97, p < 0.001). The estimated effect sizes were high in all dimensions (TK = 0.76, TPK = 0.70, TCK = 0.76, TPACK = 0.69, and Ethics = 0.64). Dunn’s post hoc comparisons with Bonferroni correction showed that all pairwise contrasts were significant (p < 0.001), except between Clusters 2 and 4 in the TPACK dimension (p = 1.000).
Finally, four professional teacher profiles were established, as presented in Table 12.
From the data in Table 12, it can be observed that Cluster 1 (21.3%) corresponds to a professional profile with high scores in all dimensions of the Intelligent-TPACK, including the Ethics dimension, with values above option 3 of the Likert scale, reflecting a moderate degree of agreement. Cluster 2 (28.6%) represents a profile with intermediate-low levels and no particularly outstanding areas, as scores are concentrated around option 2, indicating moderate disagreement. Cluster 3 (19.6%) is characterized as a critical profile, with very low values across all dimensions, particularly in integration and ethics, with averages close to option 1, reflecting a high level of disagreement. Finally, Cluster 4 (30.5%) shows a professional profile with high technological knowledge (TK above option 3) but low levels in the other dimensions, once again highlighting weakness in ethics.
Overall, 78.8% of teachers (the sum of Clusters 2, 3, and 4) were classified into exploratory profiles with low knowledge regarding the pedagogical and ethical use of artificial intelligence. This result should be interpreted cautiously given the non-probabilistic sampling and the moderate internal validity, but it still suggests that more than three-quarters of participants show limitations in integrating AI into teaching, which may represent a challenge for initial teacher education and professional development in Chile.
Furthermore, to visually communicate the four clusters according to variables such as gender and years of teaching experience, five scatter plots were created and are presented in Figure 4.
The results in Figure 4 visually display some of the differences identified in the comparative analyses. For example, as years of experience increase, scores tend to decrease across all Intelligent-TPACK dimensions, suggesting less appropriation of AI with age and career trajectory. Regarding gender, some relevant trends are also observed. For instance, in the TK and TPK components, Cluster 4 shows greater predominance among women, whereas Cluster 1 appears more frequently among men.

5. Discussion and Conclusions

With the widespread adoption of new AI tools, forms of interaction and multiple fields of professional knowledge have begun to change (OECD, 2023; Holmes, 2023; Seufert et al., 2021; Mishra et al., 2023). This transformation reaches different levels of teaching work, such as school management, pedagogy, curriculum, and assessment, raising the need for a more complex and better-adjusted pedagogical perspective (Stolpe & Hallström, 2024; K. Wang, 2024; S. Dogan et al., 2025).
In this context, the successful integration of any technology largely depends on teachers’ knowledge (Mishra & Koehler, 2006; Mishra, 2019). However, a gap persists in understanding how this knowledge is specifically articulated with the use of AI tools (Sun et al., 2023; Kim et al., 2022; Luckin et al., 2022; Tan et al., 2024; Celik, 2023). Moreover, at the international level, this situation has created significant challenges and mismatches between the training provided by educational institutions and the real needs expressed by teachers regarding the integration of AI tools (Cukurova et al., 2024; Chiu & Chai, 2020; Ng et al., 2023; Tan et al., 2024; Zawacki-Richter et al., 2019). This has been especially critical in regions with lower research productivity, such as Latin America (Maslej et al., 2024).
Therefore, to help reduce this gap, the purpose of this study was to analyze trends in teachers’ knowledge in Chile using the Intelligent-TPACK framework, which, due to its qualities, has become a robust, flexible, and suitable model to guide this type of analysis (S. Dogan et al., 2025; Celik & Dogan, 2025).
After reviewing the results, we were able to generate discussions and draw conclusions for each research question.

5.1. What Levels of Technological, Pedagogical, Content, and Ethical Knowledge Are Reported by a Sample of Teachers from the Metropolitan Region (Chile) Regarding the Use of AI in Education?

Given that distributions deviated from normality, we report medians and interquartile ranges (Mdn, IQR) in the Discussion, complementing them with means and standard deviations (M, SD) in parentheses. To substantiate claims about ‘low’ or ‘moderate’ levels on a 4-point scale, we ran one-sample Wilcoxon signed-rank tests against the conceptual midpoint (2.5) and, as a stricter benchmark, against 3.0, reporting test statistics, p-values, and effect sizes.
First, based on the results of this study, we found that the levels of knowledge reported by the participating teachers on the use of AI in education are mixed, though with a general tendency toward low levels. AI-TK showed the highest central tendency (Mdn = 2.67, IQR = 1.33; M = 2.67, SD = 0.91), followed by AI-TPK (Mdn = 2.33, IQR = 1.00; M = 2.52, SD = 0.81) and AI-TCK (Mdn = 2.33, IQR = 1.00; M = 2.45, SD = 0.86). Lower central values appeared in AI-TPACK (Mdn = 2.00, IQR = 1.33; M = 2.23, SD = 0.88) and Ethics (Mdn = 2.00, IQR = 1.33; M = 2.10, SD = 0.84). To substantiate these interpretations, one-sample Wilcoxon signed-rank tests were conducted against the conceptual midpoint (2.5) and, as a stricter benchmark, against 3.0. AI-TK was significantly higher than 2.5 (Z = 5.27, p < 0.001, r = 0.20), yet significantly lower than 3.0 (Z = −8.44, p < 0.001, r = 0.32), indicating slightly above-midpoint rather than high knowledge. AI-TPK (Z = 0.67, p = 0.502, r = 0.03) and AI-TCK (Z = −1.63, p = 0.104, r = 0.06) did not differ from 2.5 but were clearly lower than 3.0 (TPK: Z = −13.49, p < 0.001, r = 0.51; TCK: Z = −13.80, p < 0.001, r = 0.52). By contrast, both AI-TPACK and Ethics were significantly below 2.5 (TPACK: Z = −8.12, p < 0.001, r = 0.30; Ethics: Z = −11.63, p < 0.001, r = 0.44) and below 3.0 (TPACK: Z = −17.45, p < 0.001, r = 0.65; Ethics: Z = −19.18, p < 0.001, r = 0.72), confirming low levels in these dimensions.
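A one-sample Wilcoxon benchmark of this kind can be sketched as follows (synthetic scores drawn to match the reported Ethics mean and SD; not the study data, and the |Z| is approximated back from the p-value rather than taken from software output):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(21)
ethics = np.clip(rng.normal(2.10, 0.84, 709), 1, 4)  # synthetic Ethics scores

# One-sample Wilcoxon signed-rank: test the location against the midpoint 2.5
res = stats.wilcoxon(ethics - 2.5, alternative="two-sided")
z_abs = abs(stats.norm.ppf(res.pvalue / 2))   # approximate |Z| from p
r = z_abs / np.sqrt(len(ethics))              # effect size r = |Z| / sqrt(N)
print(f"W = {res.statistic:.0f}, p = {res.pvalue:.2e}, r = {r:.2f}")
```

Running the same test with 3.0 in place of 2.5 gives the stricter benchmark used in the text.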
This aligns with findings from other related studies, although comparisons must be made with caution given the different scales and populations involved. For example, Karatas and Atac (2025) implemented the Intelligent-TPACK on a 7-point scale with 304 pre-service teachers of English as a second language. Their results showed a similar pattern to ours, with relatively higher values in AI-TK and AI-TCK, and somewhat lower scores in AI-TPK and AI-TPACK integration. The weakest dimension was Ethics, confirming that issues such as transparency, bias, and responsible use remain critical and pending areas for strengthening.
Similarly, Gregorio et al. (2024) applied the Intelligent-TPACK (7-point scale) to 212 pre-service teachers and found consistently high levels across all dimensions. Within this overall trend, AI-TCK was the strongest dimension, while AI-TPACK and Ethics ranked lower, again echoing the pattern observed in our data.
Velander et al. (2023) used the Intelligent-TPACK framework in a study with in-service K-12 teachers and university trainers in Sweden. While the study was conducted in a very different context and with a smaller sample, it also highlighted that much AI knowledge is acquired incidentally and often reflects partial or even erroneous conceptions. At the same time, teachers acknowledged the potential benefits of AI for personalization and learning monitoring, consistent with the mixed strengths we observed in Chilean teachers.
Saenen et al. (2024) applied the I-TPACK model in focus groups with teachers and students in Flanders, and Castro et al. (2025) studied rural secondary teachers in Chile. Although their contexts were different, both studies revealed challenges in achieving effective technological integration and, especially, in incorporating ethical considerations—once again consistent with the relative weakness of the Ethics dimension in our study.
It is worth noting that the higher levels associated with the TK component observed in this study do not necessarily stem only from teachers’ “formal” academic training (such as government-led workshops, official courses, peer evaluation sessions, or curricular update conferences), since this knowledge can also be acquired through informal or self-guided means such as online courses, tutorial videos, MOOCs, etc. This nuance may be key to understanding and explaining the gap between TK and the other dimensions.
Finally, more broadly, the low score in the Ethics dimension in this study is relevant, and consistent with the growing international debate on the ethical risks of AI (Holmes et al., 2022; OECD, 2023). This also aligns with previous studies that warned about teachers’ limited ethical preparedness to face AI challenges (Celik, 2023; Karatas & Atac, 2025; Gregorio et al., 2024) and highlights the urgent need to strengthen this dimension in both initial and continuing teacher education programs.

5.2. Are There Significant Differences in Teachers’ Knowledge of AI According to Sociodemographic, Professional, and Disciplinary Variables Such as Gender, Age, or Subject?

We found significant differences across all analyzed variables except for school type (public or private); however, given the unbalanced sample distribution by school type, this result should be interpreted with caution. The highest scores were obtained by young male teachers working in secondary education and teaching subjects such as Biology, Technology, and Natural Sciences—i.e., STEM disciplines. Regression models confirmed that experience, gender, and educational level are the most consistent sample-level predictors of responses in the Intelligent-TPACK. These trends have also been observed in similar studies (Cai et al., 2016; Diao et al., 2024; Møgelvang et al., 2024). For example, Cai et al. (2016) conducted a meta-analysis and concluded that men generally hold more favorable attitudes toward technology use than women, although with a small effect size. Negative effects of age and experience were also confirmed across all dimensions, suggesting that younger, less experienced teachers perceive themselves as more knowledgeable in the use of AI tools. This trend aligns with recent findings. For example, Zeng et al. (2022) conducted a meta-analysis on teacher self-efficacy and found that age and career stage moderate the relationship between digital self-efficacy and TPACK, while gender does not moderate it significantly.
On the other hand, although we did not observe statistically significant differences between public and private school teachers, this finding should not be overestimated or generalized. In our sample, the distribution of teachers according to “school type” did not adequately reflect the actual composition of the teaching population in the Metropolitan Region, which may have introduced bias in the analysis of differences. Therefore, the absence of significant differences should be interpreted with caution, as it may be influenced by methodological procedures. Nonetheless, beyond this limitation, some recent studies have reported that, regardless of whether a school is public or private, the adoption of AI in schools largely depends on the availability of resources, school culture, and institutional support. For example, Zhao et al. (2025), through an empirical study with 202 secondary school teachers in China using an adapted instrument to measure AI usage intention and innovation-diffusion, found that the main factors mediating effective AI use were facilitating conditions (infrastructure/support), career aspirations, and perceived usefulness. This aligns with Kaufman et al. (2025), who, in a report from the Research and Development Corporation (RAND), noted that teachers and principals in schools serving more disadvantaged populations use AI tools less frequently and less effectively than those in better-resourced institutions with stronger institutional guidance. Similarly, Traga and Rocconi (2025), in a survey of 242 primary and secondary teachers, found that teachers identified ongoing professional development workshops and the creation of clear policies and “best practice” guidelines as their main support needs to effectively and responsibly integrate AI into their classrooms.
Although we did not identify studies that specifically examine sociodemographic differences using the Intelligent-TPACK model, there are clear points of connection with the international Teaching and Learning International Survey (TALIS). Similarly to the analyses conducted in our study, the TALIS offers comparative perspectives on teachers, teaching, and learning across countries, including personal and contextual characteristics of teachers such as age, gender, professional experience, and educational level (OECD, 2019, 2020). In general terms, some of the most relevant findings from TALIS 2018 showed that participation in professional development is nearly universal, although gaps persist in critical areas such as the use of technologies for teaching, where many teachers report needing training to learn more advanced applications relevant to their professional practice (OECD, 2019). Additionally, the survey found that collaborative practices among teachers—such as peer feedback and joint professional learning sessions—are infrequent, despite their positive impact in supporting innovative practices in the classroom (OECD, 2020). These patterns are consistent with our findings, where moderating factors such as age, gender, and educational level are associated with varying levels of teachers’ technological knowledge (TK, TCK, TPACK-AI).
In its most recent framework, TALIS 2025 has already incorporated specific questions regarding the use of artificial intelligence tools (OECD, 2024, 2025). It is worth noting that, as of now, this version has not yet been implemented, so no results are available. However, the framework used for its development is publicly accessible. In future rounds, TALIS will ask, for example, whether teachers have received training related to AI, whether they perceive a need to acquire new competencies to integrate these technologies into their pedagogical practice, and their level of agreement with the different roles that AI could play in teaching—such as lesson planning, material adaptation, student support, administrative automation, and ethical dilemmas. In the future, these results could allow for comparisons with our findings to identify new points of convergence and divergence.

5.3. What Professional Teacher Profiles Emerge from the Combination of Technological, Pedagogical, Content, and Ethical Knowledge Regarding AI Integration in Their Professional Practice?

The cluster analysis identified four exploratory professional teacher profiles regarding Intelligent-TPACK.
The first group (21.3%) was characterized by high scores across all dimensions, with strong self-perceived ability to integrate AI pedagogically and ethically. The second and largest group (30.5%) corresponds to a profile with strong technological knowledge (TK) but clear limitations in the other dimensions, especially Ethics. A third group (28.6%) reports low-to-intermediate knowledge with no particularly strong areas, reflecting an emerging profile. Finally, a fourth group (19.6%) represents a critical profile, with very low scores across all dimensions, indicating serious difficulties in incorporating AI into teaching practice.
Aggregating these profiles, we found that 78.8% of the participating teachers fall into profiles with low or very low Intelligent-TPACK levels, suggesting that the main challenges go beyond technological mastery and focus on training to integrate AI critically, fairly, and responsibly. These findings align with recommendations from leading international frameworks, which suggest updating teacher professional development plans to move from a technical perspective—focused solely on technological and programming skills—towards a more critical and comprehensive view of AI’s benefits and risks, connected to multiple dimensions of teaching. In this way, well-prepared teachers can understand, apply, and create with AI tools while also strengthening their ethical competencies (Miao & Cukurova, 2024; Miao et al., 2024; Holmes et al., 2022; OECD, 2023).
The results on professional profiles suggest that there is no single starting point or single path to achieve successful AI integration in Chilean education. On the contrary, it is necessary to design specific training trajectories that address the needs of each profile and gaps associated with specific variables. For example, young men teaching STEM subjects tend to report higher knowledge in nearly all Intelligent-TPACK dimensions. Therefore, training strategies must be sensitive and take into account factors such as gender, age, and subject.

5.4. Key Implications

5.4.1. Implications for Public Policy

The evidence from this study provides an updated view of teachers in the Metropolitan Region of Chile and their perceptions of technological, pedagogical, disciplinary, and ethical knowledge regarding AI.
In general, TK was relatively higher (slightly above the midpoint); however, all other dimensions assessed showed low results. This is reflected in the fact that over 75% of the surveyed teachers fall into professional profiles with limited knowledge to effectively integrate AI pedagogically, in terms of content, and ethically in their professional practice.
This finding may suggest several implications. First, the pattern of results could be associated with greater exposure in teacher education programs (both initial and continuous) in Chile to technical AI competencies—such as software management, the use of digital resources, or basic programming—compared to less emphasis on pedagogical, didactic, and ethical dimensions. However, this interpretation should be considered with caution due to the sampling limitations and the lack of direct evidence regarding curricula and training plans. This would explain, at least in part, why TK is relatively high compared to other dimensions of the model. In line with this, initial and continuous training policies should be projected—or at the very least updated—to address these specific professional gaps, prioritizing the ethical dimension of AI (which emerges as the most urgent challenge) before continuing to deepen technical skills (Miao & Cukurova, 2024; Miao et al., 2024; Holmes et al., 2022; OECD, 2023).
Second, it is possible that initial and continuous training initiatives are simply insufficient, and that the emphasis on technical knowledge stems from non-formal and informal learning spaces, where teachers self-manage and seek resources on their own. For example, through online courses, open platforms, MOOCs, tutorial videos, digital communities, and web-based training. This would not be surprising, considering that AI reached classrooms through students even before teachers had the opportunity to reflect on its use. Therefore, digital self-learning environments may also help explain the high TK scores and should be considered as valid knowledge sources to anchor future professional development initiatives.
In 2021, Chile enacted its “National Artificial Intelligence Policy,” updated in 2024, whose primary aim is to promote the ethical and responsible development and use of AI across society, so that this technology contributes to the country’s new model of development and growth.
An action plan aligned with this public policy was established, structured around three main pillars: enabling factors, development and adoption, and governance and ethics. Ethics, therefore, is embedded in Chile’s foundational proposals. However, these objectives must also be reflected in both initial and continuous teacher training.
A second key point is that the significant gaps found by gender, age, and educational level highlight the need for differentiated policies targeting specific groups within the Chilean teaching population. Training programs on AI should prioritize strengthening the knowledge of women, primary school teachers, and educators over the age of 50, as these groups reported significantly lower levels of knowledge than others.
As a third point, the cluster analysis provided a clearer understanding of the diversity of professional profiles within Chile’s school system. Identifying four clusters reveals that there is no single starting point for AI-related teacher training. While some Chilean teachers appear to have high levels of knowledge across all dimensions, others need to reinforce specific areas such as pedagogical or ethical aspects. Consequently, public policies should adapt and evaluate their training proposals to meet the diverse needs of existing teacher profiles.
Finally, despite the government’s coordinated efforts, AI has yet to be incorporated into Chile’s national teacher curriculum, nor is it present in the main textbooks, plans, or programs used by teachers in their lessons. This suggests that its implementation and research are still in early stages in the country. The risk here is that teacher training in AI remains limited to isolated initiatives, dependent on short courses or workshops with potentially unclear messages and no solid grounding in empirical evidence. To bridge this gap, it is necessary to move toward the progressive integration of AI into curricular artifacts across various subjects and educational levels. Such a policy would help ensure that AI is not only understood as a technological resource, but also as an essential component for the future.

5.4.2. Implications for Future Research

This study highlights the scarcity of empirical research in Latin America on teachers' knowledge of AI, opening a relevant field for comparative and international studies.
We hope that future research will delve into qualitative or mixed methodologies, not only to measure perceptions but also to analyze actual classroom practices and the impact of AI on student learning.
Moreover, it is necessary to expand research into underexplored contexts such as primary education, rural schools, and hybrid or virtual modalities, as these were underrepresented in this study.
Another pending area is to examine how the trajectories of the different teacher profiles identified through cluster analysis evolve over time.
Finally, a key task is to investigate whether future professional development plans help improve the identified teacher profile configurations and lead to enhanced perceptions of knowledge.
These implications are proposed with the foregoing caveats, as they derive from a non-probability sample.

5.5. Limitations and Future Work

A primary limitation of this study is that it followed a non-probabilistic design, which limits the ability to generalize the findings to the entire teaching population in Chile. In this regard, the study is at risk of self-selection bias, as teachers were recruited through institutional emails, seminars, and social media. This strategy implies that the teachers who agreed to participate may differ from those who did not respond, which further limits the generalizability of the results. Therefore, we reiterate that the findings should be interpreted with caution, as they reflect descriptive trends within the participating teacher sample.
Additionally, the sample presents an imbalance in terms of school type compared to the reference population, which may act as a potential attenuator of effects associated with this variable.
Second, regarding instrument validation, although we achieved adequate reliability and convergent validity, the CFA showed a mixed global fit (acceptable CFI, elevated RMSEA) and limited discriminant validity (high inter-factor correlations), so comparisons between dimensions and profiles should be interpreted cautiously. Future studies will require improvements to the instrument (item revision, new items, and model re-specification).

Author Contributions

Conceptualization, J.A.; Methodology, J.A., B.Á. and R.A.; Formal Analysis, J.A., B.Á. and R.A.; Investigation, J.A. and B.Á.; Data Curation, J.A.; Writing—Original Draft, J.A.; Writing—Review and Editing, J.A., B.Á. and R.A.; Supervision, B.Á. and R.A.; Project Administration, B.Á. and R.A.; Funding Acquisition, J.A., B.Á. and R.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by ANID BECAS/DOCTORADO NACIONAL (grant number 21240783), PIA/Basal Funds for Centers of Excellence (grant number FB0003), PIA/Basal Funds for Centers of Excellence (grant number AFB240004) and ANID Exploración (grant number 13240075).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of the Faculty of Social Sciences, University of Chile (protocol code 0.67, 11 November 2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are openly available in Mendeley Data at https://doi.org/10.17632/m52p6kcvxj.1.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
AIED: Artificial Intelligence in Education
TPACK: Technological Pedagogical and Content Knowledge
TK: Technological Knowledge
TCK: Technological Content Knowledge
TPK: Technological Pedagogical Knowledge
PCK: Pedagogical Content Knowledge
STEM: Science, Technology, Engineering, and Mathematics

Appendix A

Validation Procedures for the Adapted Questionnaire

1. Preliminary Analysis
Using the responses from all teachers who participated in the pilot study (N = 42), we conducted a preliminary analysis of the items. For each item, we calculated descriptive statistics (mean, standard deviation, median, skewness, and kurtosis), as well as the corrected item-total correlation (CITC) as an index of item discrimination. The results are presented below in Table A1.
Table A1. Descriptive statistics of items (N = 42).
Item | Min | Max | Mean | SD | Med | Skew | Kurt | CITC
TK1 | 1 | 3 | 1.95 | 0.582 | 2.0 | −0.001 | 0.157 | 0.664
TK2 | 1 | 3 | 1.98 | 0.468 | 2.0 | −0.090 | 2.031 | 0.699
TK3 | 1 | 3 | 1.88 | 0.504 | 2.0 | −0.243 | 0.945 | 0.616
TK4 | 1 | 3 | 2.00 | 0.494 | 2.0 | 0.000 | 1.514 | 0.741
TK5 | 1 | 3 | 2.02 | 0.468 | 2.0 | 0.090 | 2.031 | 0.824
TPK1 | 1 | 4 | 2.90 | 0.484 | 3.0 | −1.626 | 6.373 | 0.529
TPK2 | 2 | 4 | 2.90 | 0.484 | 3.0 | −0.274 | 1.389 | 0.561
TPK3 | 2 | 4 | 2.98 | 0.517 | 3.0 | −0.040 | 1.078 | 0.548
TPK4 | 2 | 4 | 2.93 | 0.407 | 3.0 | −0.582 | 3.317 | 0.254
TPK5 | 2 | 4 | 2.90 | 0.484 | 3.0 | −0.274 | 1.389 | 0.529
TPK6 | 1 | 4 | 2.83 | 0.490 | 3.0 | −1.720 | 4.655 | 0.485
TPK7 | 1 | 4 | 2.88 | 0.453 | 3.0 | −2.187 | 7.855 | 0.260
TCK1 | 2 | 4 | 2.98 | 0.604 | 3.0 | 0.008 | −0.068 | 0.673
TCK2 | 2 | 4 | 2.76 | 0.576 | 2.0 | 0.039 | −0.286 | 0.781
TCK3 | 2 | 4 | 2.79 | 0.565 | 2.0 | −0.026 | −0.134 | 0.765
TCK4 | 2 | 4 | 2.79 | 0.520 | 2.0 | −0.268 | 0.098 | 0.664
TPACK1 | 1 | 3 | 1.64 | 0.577 | 1.0 | 0.204 | −0.667 | 0.612
TPACK2 | 1 | 3 | 1.79 | 0.645 | 1.0 | 0.228 | −0.585 | 0.700
TPACK3 | 1 | 3 | 1.60 | 0.701 | 1.0 | 0.761 | −0.578 | 0.816
TPACK4 | 1 | 3 | 1.76 | 0.532 | 1.0 | −0.192 | −0.127 | 0.563
TPACK5 | 1 | 3 | 1.79 | 0.565 | 1.0 | −0.026 | −0.134 | 0.779
TPACK6 | 1 | 3 | 1.60 | 0.701 | 1.0 | 0.761 | −0.578 | 0.744
TPACK7 | 1 | 4 | 1.83 | 0.696 | 1.0 | 0.694 | 1.079 | 0.614
ETHIC1 | 1 | 4 | 1.45 | 0.670 | 1.0 | 1.714 | 3.803 | 0.420
ETHIC2 | 1 | 3 | 1.21 | 0.470 | 1.0 | 2.154 | 4.213 | 0.521
ETHIC3 | 1 | 3 | 1.33 | 0.612 | 1.0 | 1.692 | 1.837 | 0.450
ETHIC4 | 1 | 3 | 1.19 | 0.455 | 1.0 | 2.416 | 5.583 | 0.575
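The item-discrimination index reported above (CITC) is each item's correlation with the sum of the remaining items on its scale, so the item does not inflate its own discrimination. The following is a minimal numpy sketch of this computation, not the SPSS routine used in the study:

```python
import numpy as np

def corrected_item_total(data: np.ndarray) -> np.ndarray:
    """Corrected item-total correlation (CITC) for each column (item).

    data: (n_respondents, n_items) matrix of Likert responses.
    Each item is correlated with the total score of the *other* items.
    """
    n_items = data.shape[1]
    total = data.sum(axis=1)
    citc = np.empty(n_items)
    for j in range(n_items):
        rest = total - data[:, j]  # total score excluding item j
        citc[j] = np.corrcoef(data[:, j], rest)[0, 1]
    return citc
```

Items with low CITC (e.g., TPK4 and TPK7 above, both below 0.30) correlate weakly with the rest of their scale, flagging them as candidates for removal.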
Additionally, Table A2 summarizes the main reliability indices for each scale, including Cronbach’s alpha coefficient, McDonald’s total omega (estimated under the assumption of unidimensionality using the closed-form solution by Hancock and An (2020), and implemented in SPSS via the OMEGA macro developed by Hayes and Coutts (2020)), along with the range of corrected item-total correlations and inter-item correlations.
Table A2. Reliability of scales.
Scale | k (Items) | Cronbach’s α | McDonald’s ω | CITC | Inter-Item r
TK | 5 | 0.874 | 0.876 | 0.616–0.824 | 0.48–0.74
TPK | 7 | 0.740 | 0.683 | 0.254–0.561 | −0.06–0.65
TCK | 4 | 0.868 | 0.871 | 0.664–0.781 | 0.53–0.74
TPACK | 7 | 0.891 | 0.895 | 0.563–0.816 | 0.35–0.75
Ethics | 4 | 0.691 | 0.685 | 0.420–0.575 | 0.30–0.60
Overall, the results indicate adequate levels of internal consistency across most scales, with α and ω values close to or above 0.70 and corrected item-total correlations within acceptable ranges.
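Cronbach's alpha in Table A2 follows directly from the item variances and the total-score variance. A minimal numpy sketch of alpha only (the omega values in the study were obtained with the Hancock and An closed form via the SPSS OMEGA macro, which is not reproduced here):

```python
import numpy as np

def cronbach_alpha(data: np.ndarray) -> float:
    """Cronbach's alpha for a (n_respondents, n_items) response matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    k = data.shape[1]
    item_var = data.var(axis=0, ddof=1).sum()
    total_var = data.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)
```

Alpha approaches 1 when items covary strongly (a large share of the total-score variance comes from inter-item covariances) and approaches 0 for unrelated items.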
2. Verification of Factorability
Before conducting the PCA, we estimated the Pearson correlation matrix using listwise deletion (Likert-type scales treated as an interval approximation in this pilot). The determinant was 4.29 × 10^−10, reflecting high intercorrelations without perfect collinearity. Sampling adequacy resulted in a KMO = 0.635, and Bartlett’s test of sphericity was significant (χ²(351) = 672.244; p < 0.001), justifying the factor extraction procedure (Kaiser, 1974; Pallant, 2020).
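Both factorability checks can be reproduced from the correlation matrix alone. The following sketch (an illustrative numpy re-implementation, not the SPSS procedure used in the study) computes the KMO index from the anti-image partial correlations and Bartlett's chi-square statistic, omitting the p-value lookup:

```python
import numpy as np

def kmo_index(R: np.ndarray) -> float:
    """Kaiser-Meyer-Olkin measure of sampling adequacy for correlation matrix R."""
    inv = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d                   # anti-image (partial) correlations
    np.fill_diagonal(partial, 0.0)
    off = R - np.diag(np.diag(R))        # off-diagonal correlations only
    r2, p2 = (off**2).sum(), (partial**2).sum()
    return r2 / (r2 + p2)

def bartlett_sphericity(R: np.ndarray, n: int):
    """Bartlett's test statistic and df for H0: R is an identity matrix."""
    k = R.shape[0]
    stat = -(n - 1 - (2 * k + 5) / 6) * np.log(np.linalg.det(R))
    df = k * (k - 1) // 2
    return stat, df
```

A KMO near 1 means partial correlations are small relative to zero-order correlations (items share common variance); values below roughly 0.6 are usually considered marginal for factor extraction.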
3. Principal Components Analysis (PCA)
Following the recommendations of Hair et al. (2019), we performed a Principal Component Analysis (PCA) on the 27 items, using orthogonal Varimax rotation (Kaiser normalization). The solution converged in 19 iterations. A total of seven components with eigenvalues > 1 were retained, as shown in the scree plot in Figure A1.
Figure A1. Scree plot.
The components explained 74.14% of the variance: C1 16.44%, C2 13.90%, C3 11.33%, C4 9.85%, C5 8.51%, C6 7.12%, C7 6.99% (see Table A3).
Table A3. Summary of the global PCA and Varimax.
Element | Result
Input matrix | Pearson correlations (listwise deletion)
Determinant | 4.29 × 10^−10
Sampling adequacy | KMO = 0.635
Sphericity (Bartlett) | χ²(351) = 672.244, p < 0.001
Extraction | PCA
Rotation | Varimax (orthogonal); Kaiser normalization
Retention criteria | Eigenvalue > 1 + scree
Convergence | 19 iterations
Factors retained | 7
Explained variance | C1 16.44%, C2 13.90%, C3 11.33%, C4 9.85%, C5 8.51%, C6 7.12%, C7 6.99%
The rotated structure was interpretable and consistent with the subscales (see Table A4): a dominant TPACK component; one TK component; a TCK block (with TCK4 loading on a second component of the same domain); TPK divided into two components; and one Ethics component. Some specific cross-loadings were observed (e.g., TPK2, TPACK7, E1), which were resolved during the item reduction process.
Table A4. Main loadings by component (Varimax rotation; |λ| ≥ 0.50 shown).
Component | Highest-Loading Items (Main)
C1–TPACK | TPACK5 (0.888), TPACK3 (0.835), TPACK6 (0.754), TPACK2 (0.752), TPACK4 (0.686), TPACK1 (0.632), TPACK7 (0.552)
C2–TK | TK5 (0.878), TK4 (0.840), TK2 (0.746), TK3 (0.741), TK1 (0.657)
C3–TCK (core) | TCK3 (0.862), TCK1 (0.827), TCK2 (0.802)
C4–TCK (extension) | TCK4 (0.620)
C5–TPK (group 1) | TPK1 (0.827), TPK5 (0.813), TPK6 (0.764), TPK2 (0.616)
C6–Ethics | E4 (0.750), E2 (0.726), E3 (0.707), E1 (0.502)
C7–TPK (group 2) | TPK4 (0.813), TPK7 (0.785), TPK3 (0.711)
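The extraction and rotation steps described above (components retained by the eigenvalue > 1 criterion, then orthogonal Varimax rotation) can be sketched in numpy. This is an illustrative re-implementation under a small two-factor toy structure, not the SPSS procedure applied to the 27-item pilot data:

```python
import numpy as np

def pca_loadings(R: np.ndarray):
    """Component loadings from a correlation matrix R, retaining
    components with eigenvalue > 1 (Kaiser criterion)."""
    vals, vecs = np.linalg.eigh(R)
    order = np.argsort(vals)[::-1]           # sort eigenvalues descending
    vals, vecs = vals[order], vecs[:, order]
    keep = vals > 1.0
    return vecs[:, keep] * np.sqrt(vals[keep]), vals

def varimax(L: np.ndarray, max_iter: int = 100, tol: float = 1e-6):
    """Orthogonal Varimax rotation of a loading matrix (items x components)."""
    p, k = L.shape
    R = np.eye(k)
    crit = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        u, s, vh = np.linalg.svd(
            L.T @ (Lr**3 - (1.0 / p) * Lr @ np.diag((Lr**2).sum(axis=0))))
        R = u @ vh
        if s.sum() < crit * (1 + tol):       # criterion stopped improving
            break
        crit = s.sum()
    return L @ R
```

Varimax maximizes the variance of squared loadings within each component, pushing each item toward loading strongly on one component and weakly on the rest, which is what makes a rotated structure like Table A4 interpretable.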
In the Chilean context, teachers usually work long hours and have limited time to participate in research studies, making it difficult to administer very lengthy questionnaires. Therefore, considering this limitation and aiming to increase teachers’ willingness to participate and improve response rates in the Metropolitan Region, we simplified the instrument by applying two baseline criteria.
First, we retained only those items with communalities greater than or equal to 0.65 (see Table A5), ensuring that each variable contributed meaningfully to the explained variance. Second, we prioritized items that participants in the pilot study reported understanding more easily and for which they expressed no doubts or comments, while maintaining greater conceptual coherence with the Intelligent-TPACK framework.
Table A5. Communalities of all items (Extraction: PCA).
Item | Extraction | Selected
TK1 | 0.643 | No
TK2 | 0.675 | No
TK3 | 0.650 | Yes
TK4 | 0.818 | Yes
TK5 | 0.835 | Yes
TPK1 | 0.773 | Yes
TPK2 | 0.801 | No
TPK3 | 0.747 | No
TPK4 | 0.729 | Yes
TPK5 | 0.832 | No
TPK6 | 0.708 | Yes
TPK7 | 0.810 | No
TCK1 | 0.764 | No
TCK2 | 0.812 | Yes
TCK3 | 0.803 | Yes
TCK4 | 0.705 | Yes
TPACK1 | 0.550 | No
TPACK2 | 0.736 | No
TPACK3 | 0.768 | Yes
TPACK4 | 0.567 | No
TPACK5 | 0.816 | Yes
TPACK6 | 0.743 | Yes
TPACK7 | 0.815 | No
E1 | 0.761 | Yes
E2 | 0.756 | No
E3 | 0.680 | Yes
E4 | 0.718 | Yes
Total | 27 items | 15 items
After this selection procedure, the questionnaire was reduced from 27 to 15 items (see Appendix C), maintaining a balanced distribution across the theoretical dimensions of the Intelligent-TPACK model. Specifically, the TK dimension retained items TK3 (0.650), TK4 (0.818), and TK5 (0.835); TPK retained TPK1 (0.773), TPK4 (0.729), and TPK6 (0.708); TCK retained TCK2 (0.812), TCK3 (0.803), and TCK4 (0.705); TPACK included TPACK3 (0.768), TPACK5 (0.816), and TPACK6 (0.743); and the Ethics dimension retained items E1 (0.761), E3 (0.680), and E4 (0.718).
It is worth noting that, during the reduction from 27 to 15 items, whenever cross-loadings ≥ 0.30 emerged, we prioritized the items with higher communalities and stronger conceptual coherence within each dimension, discarding semantically redundant items.
4. Confirmatory Factor Analysis
Following the PCA, a Confirmatory Factor Analysis (CFA) was conducted in AMOS 26 using our main sample (N = 709). As shown in Figure A2, we applied a five-factor model with three items per factor (TK, TPK, TCK, TPACK, and Ethics).
Figure A2. Confirmatory Factor Analysis Model for the Five I-TPACK Factors.
The CFA revealed high and statistically significant factor loadings (all greater than 0.50; p < 0.001), supporting the convergent validity of the items.
However, the global fit indices yielded mixed results and revealed certain methodological limitations that must be taken into account when interpreting this study’s findings:
χ²(80) = 806.418, p < 0.001; χ²/df = 10.08; CFI = 0.921; TLI = 0.896; NFI = 0.913; GFI = 0.846; AGFI = 0.769; RMSEA = 0.113 (90% CI [0.106–0.120]); SRMR = 0.042.
While CFI, NFI, and SRMR fall within acceptable ranges, the χ2/df ratio and RMSEA indicate a poor model fit.
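The RMSEA point estimate can be recovered directly from the reported chi-square, degrees of freedom, and sample size. The following sketch uses the common N − 1 formula (fit software varies between N and N − 1; both round to the same value here) and reproduces the reported figures:

```python
import math

def rmsea(chi2_stat: float, df: int, n: int) -> float:
    """RMSEA point estimate: sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2_stat - df, 0.0) / (df * (n - 1)))

chi2_stat, df, n = 806.418, 80, 709       # values reported for the five-factor CFA
print(round(chi2_stat / df, 2))           # → 10.08
print(round(rmsea(chi2_stat, df, n), 3))  # → 0.113
```

Since RMSEA scales the excess of chi-square over its degrees of freedom, the 0.113 value follows directly from the large χ²/df ratio, which is why the two indices point to the same misfit.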
Lastly, Table A6 summarizes the results of convergent and discriminant validity. All factor loadings were statistically significant (λ > 0.50, p < 0.001), and the values for composite reliability (CR) exceeded the 0.70 threshold, with the average variance extracted (AVE) above 0.50.
Table A6. Summary of CFA results (standardized loadings, CR, AVE).
Construct | Std. Loadings (Min–Max) | CR | AVE
TK | 0.79–0.93 | 0.891 | 0.733
TPK | 0.63–0.83 | 0.802 | 0.577
TCK | 0.77–0.88 | 0.856 | 0.666
TPACK | 0.83–0.88 | 0.888 | 0.726
Ethics | 0.80–0.87 | 0.882 | 0.714
As for discriminant validity, high correlations were observed between some pairs (e.g., TCK–TPK = 0.947; TPK–TPACK = 0.922; Ethics–TPACK = 0.915), indicating limited evidence of discriminant validity.
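The CR and AVE values in Table A6 follow the standard formulas computed from standardized loadings. In the sketch below, the middle TK loading (0.84) is a hypothetical value chosen within the reported 0.79–0.93 range, since only the extremes are reported; it yields values close to the reported CR = 0.891 and AVE = 0.733:

```python
import numpy as np

def composite_reliability(loadings) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum(1 - loading^2))."""
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return float(num / (num + (1.0 - lam**2).sum()))

def average_variance_extracted(loadings) -> float:
    """AVE = mean of squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return float((lam**2).mean())

tk_loadings = [0.79, 0.84, 0.93]  # 0.84 is illustrative; only the range is reported
print(round(composite_reliability(tk_loadings), 3))       # → 0.891
print(round(average_variance_extracted(tk_loadings), 3))  # → 0.732 (reported: 0.733)
```

CR above 0.70 and AVE above 0.50 are the conventional convergent-validity thresholds referenced in the text.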

Appendix B

Complementary Data

Table A7. Median (IQR) of Intelligent-TPACK Model Dimensions According to Sociodemographic and Professional Variables.
Category | TK | TPK | TCK | TPACK | Ethics
Male | 2.67 (1.00) | 2.67 (1.00) | 2.67 (1.00) | 2.33 (1.00) | 2.33 (1.00)
Female | 2.67 (1.00) | 2.67 (1.00) | 2.67 (1.00) | 2.33 (1.00) | 2.33 (1.00)
Primary | 2.33 (1.00) | 2.33 (1.00) | 2.33 (1.00) | 2.00 (1.00) | 2.00 (1.33)
Secondary | 3.00 (1.00) | 2.67 (1.33) | 2.67 (1.00) | 2.33 (1.00) | 2.33 (1.33)
Public 1 | 2.33 (1.00) | 2.33 (1.00) | 2.33 (1.00) | 2.00 (1.00) | 2.00 (1.00)
Public 2 | 2.67 (1.00) | 2.67 (1.00) | 2.67 (1.00) | 2.33 (1.00) | 2.33 (1.00)
Private 1 | 2.67 (1.00) | 2.67 (1.00) | 2.67 (1.00) | 2.33 (1.00) | 2.33 (1.00)
Private 2 | 2.67 (1.00) | 2.67 (1.00) | 2.67 (1.00) | 2.33 (1.00) | 2.33 (1.00)
Private 3 | 3.00 (1.00) | 3.00 (1.00) | 3.00 (1.00) | 2.67 (1.00) | 2.67 (1.00)
20–30 years old | 2.33 (1.00) | 2.33 (1.00) | 2.33 (1.00) | 2.00 (1.00) | 2.00 (1.00)
30–40 years old | 3.00 (1.00) | 2.67 (1.33) | 2.67 (1.00) | 2.33 (1.00) | 2.33 (1.00)
40–50 years old | 3.00 (1.00) | 2.67 (1.00) | 2.67 (1.00) | 2.33 (1.00) | 2.33 (1.00)
50–60 years old | 3.00 (1.00) | 3.00 (1.00) | 3.00 (1.00) | 2.67 (1.00) | 2.67 (1.00)
60–70 years old | 2.33 (1.00) | 2.33 (1.00) | 2.33 (1.00) | 2.00 (1.00) | 2.00 (1.00)
70–80 years old | 2.50 (1.00) | 2.83 (1.00) | 2.33 (1.00) | 2.00 (1.00) | 2.33 (1.00)
Visual arts | 3.17 (0.42) | 2.67 (0.75) | 3.00 (0.83) | 2.50 (0.92) | 2.00 (0.42)
Biology | 3.33 (1.34) | 2.67 (1.34) | 2.67 (1.50) | 2.33 (1.33) | 2.33 (1.00)
Natural sciences | 2.33 (1.58) | 2.33 (1.00) | 2.50 (1.25) | 2.00 (1.17) | 2.00 (1.33)
Science for Citizenship | 4.00 (1.67) | 4.00 (1.50) | 4.00 (2.00) | 3.67 (2.00) | 2.67 (1.83)
Physical Education | 2.83 (1.00) | 2.67 (1.33) | 2.50 (1.25) | 2.33 (1.17) | 2.50 (1.25)
Philosophy | 3.33 (0.58) | 2.67 (0.67) | 3.00 (1.33) | 2.33 (1.00) | 2.67 (1.00)
Physics | 3.00 (1.33) | 3.00 (1.33) | 2.67 (1.00) | 2.67 (2.00) | 2.33 (1.33)
History | 2.67 (1.33) | 2.33 (1.25) | 2.33 (1.33) | 2.00 (1.33) | 2.00 (1.83)
English | 3.17 (1.33) | 2.67 (1.00) | 2.83 (1.33) | 2.33 (1.00) | 2.00 (1.00)
Indigenous Culture | 1.33 (2.42) | 1.50 (2.33) | 1.50 (2.00) | 1.33 (2.42) | 1.50 (2.50)
Communication | 2.33 (2.33) | 2.33 (1.33) | 2.33 (1.67) | 2.00 (1.67) | 2.00 (1.33)
Mathematics | 2.67 (1.00) | 2.67 (0.67) | 2.33 (1.00) | 2.00 (1.00) | 2.00 (1.67)
Music | 3.33 (0.50) | 3.00 (0.50) | 2.67 (0.33) | 2.33 (0.50) | 2.00 (0.67)
Counseling | 2.33 (1.67) | 2.00 (1.00) | 2.33 (1.33) | 2.00 (1.67) | 2.00 (1.00)
Chemistry | 3.33 (1.00) | 2.33 (0.83) | 2.67 (1.00) | 2.00 (0.67) | 2.00 (1.00)
Religion | 2.67 (1.00) | 2.33 (1.00) | 2.33 (0.67) | 2.33 (1.50) | 2.67 (1.50)
Technology | 2.67 (2.00) | 3.00 (1.00) | 2.33 (1.83) | 4.00 (1.33) | 2.67 (0.67)
Note. Public 1 = Municipal; Public 2 = SLEP; Private 1 = Subsidized; Private 2 = Delegated Administration; Private 3 = Fully Private.
Table A8. Means (SD) of the Intelligent-TPACK Model Dimensions According to Sociodemographic and Professional Variables.
Variable | TK | TPK | TCK | TPACK | Ethics
Male | 2.85 (0.84) | 2.62 (0.83) | 2.58 (0.88) | 2.38 (0.90) | 2.28 (0.91)
Female | 2.57 (0.93) | 2.46 (0.80) | 2.37 (0.85) | 2.15 (0.86) | 2.01 (0.79)
Primary | 2.42 (0.92) | 2.37 (0.78) | 2.24 (0.81) | 2.06 (0.87) | 1.89 (0.80)
Secondary | 2.84 (0.86) | 2.62 (0.81) | 2.59 (0.87) | 2.34 (0.86) | 2.24 (0.84)
Public 1 | 2.70 (0.84) | 2.57 (0.79) | 2.50 (0.81) | 2.24 (0.83) | 2.10 (0.82)
Public 2 | 2.63 (0.91) | 2.45 (0.83) | 2.41 (0.93) | 2.28 (0.89) | 2.22 (0.90)
Private 1 | 2.57 (0.94) | 2.41 (0.83) | 2.34 (0.89) | 2.17 (0.92) | 2.01 (0.85)
Private 2 | 2.53 (1.08) | 2.56 (0.67) | 2.27 (0.90) | 2.04 (0.79) | 2.12 (0.88)
Private 3 | 2.90 (0.85) | 2.67 (0.80) | 2.69 (0.79) | 2.36 (0.88) | 2.21 (0.82)
20–30 | 3.05 (0.73) | 2.72 (0.82) | 2.80 (0.84) | 2.39 (0.91) | 2.29 (0.87)
30–40 | 2.95 (0.78) | 2.70 (0.75) | 2.68 (0.77) | 2.38 (0.79) | 2.27 (0.79)
40–50 | 2.53 (0.94) | 2.36 (0.86) | 2.29 (0.87) | 2.12 (0.82) | 2.01 (0.83)
50–60 | 2.32 (0.94) | 2.33 (0.73) | 2.23 (0.87) | 2.01 (0.89) | 1.95 (0.86)
60–70 | 2.02 (0.82) | 2.26 (0.75) | 1.91 (0.72) | 2.10 (1.12) | 1.70 (0.81)
70–80 | 2.50 (0.24) | 2.83 (0.71) | 2.33 (0.47) | 2.00 (0.00) | 2.33 (0.47)
Visual arts | 3.22 (0.27) | 2.72 (0.44) | 2.89 (0.50) | 2.56 (0.58) | 1.94 (0.53)
Biology | 3.14 (0.78) | 2.86 (0.85) | 2.85 (0.82) | 2.55 (0.85) | 2.35 (0.79)
Natural sciences | 2.58 (0.98) | 2.46 (0.84) | 2.45 (0.86) | 2.12 (0.89) | 2.00 (0.88)
Science for Citizenship | 3.11 (1.07) | 3.00 (1.26) | 2.94 (1.32) | 2.83 (1.35) | 2.33 (1.17)
Physical Education | 2.71 (0.87) | 2.69 (0.79) | 2.59 (0.75) | 2.51 (0.85) | 2.43 (0.79)
Philosophy | 3.13 (0.76) | 2.67 (0.83) | 2.80 (0.92) | 2.30 (0.90) | 2.47 (0.67)
Physics | 2.97 (0.84) | 2.71 (0.87) | 2.64 (0.96) | 2.30 (0.93) | 2.28 (0.87)
History | 2.57 (0.93) | 2.33 (0.85) | 2.27 (0.90) | 2.15 (0.87) | 2.08 (0.90)
English | 3.04 (0.78) | 2.79 (0.68) | 2.70 (0.75) | 2.42 (0.82) | 2.15 (0.73)
Indigenous Culture | 1.92 (1.42) | 2.00 (1.36) | 2.00 (1.41) | 1.92 (1.42) | 2.00 (1.41)
Communication | 2.35 (1.01) | 2.35 (0.86) | 2.26 (0.93) | 1.98 (0.83) | 1.94 (0.83)
Mathematics | 2.80 (0.71) | 2.61 (0.67) | 2.52 (0.72) | 2.17 (0.75) | 2.03 (0.83)
Music | 2.96 (0.56) | 2.73 (0.49) | 2.57 (0.56) | 2.51 (0.57) | 2.24 (0.47)
Counseling | 2.23 (0.85) | 2.01 (0.71) | 2.04 (0.75) | 2.00 (0.83) | 1.89 (0.78)
Chemistry | 3.02 (0.91) | 2.46 (0.75) | 2.48 (0.92) | 2.13 (0.79) | 2.17 (0.83)
Religion | 2.67 (0.58) | 2.52 (0.58) | 2.67 (0.60) | 2.48 (0.82) | 2.30 (0.84)
Technology | 2.79 (0.88) | 2.92 (0.77) | 2.69 (0.90) | 3.26 (0.92) | 2.75 (0.77)
Note. Public 1 = Municipal; Public 2 = SLEP; Private 1 = Subsidized; Private 2 = Delegated Administration; Private 3 = Fully Private.

Appendix C

Adapted Intelligent-TPACK Questionnaire (Spanish Version)
AI-TK
TK3 Sé cómo iniciar una tarea con herramientas de IA mediante texto o voz.
TK4 Tengo conocimientos suficientes para usar varias herramientas de IA.
TK5 Estoy familiarizado con las herramientas de IA y sus capacidades técnicas.
AI-TPK
TPK1 Puedo comprender la contribución pedagógica de las herramientas de IA en mi campo de enseñanza.
TPK4 Sé cómo usar herramientas de IA para monitorear el aprendizaje de mis estudiantes.
TPK6 Puedo comprender las notificaciones de herramientas de IA para apoyar el aprendizaje de mis estudiantes.
AI-TCK
TCK2 Conozco diversas herramientas de IA que son utilizadas por profesionales de mi asignatura.
TCK3 Puedo usar herramientas de IA para comprender mejor los contenidos de asignatura.
TCK4 Sé cómo usar herramientas de IA específicas para mi asignatura.
AI-TPACK
TPACK3 En la enseñanza de mi disciplina, sé cómo utilizar diferentes herramientas de IA para ofrecer retroalimentación en tiempo real.
TPACK5 Puedo impartir lecciones que combinen de manera adecuada el contenido de enseñanza, las herramientas de IA y las estrategias didácticas.
TPACK6 Puedo asumir un rol de liderazgo entre mis colegas en la integración de herramientas de IA en alguna asignatura.
AI-ETHIC
E1 Puedo evaluar en qué medida las herramientas de IA consideran las diferencias individuales de mis estudiantes durante el proceso de enseñanza (por ejemplo, sexo, género, nivel socio económico, etc.).
E3 Puedo comprender la justificación de cualquier decisión tomada por una herramienta basada en IA.
E4 Puedo identificar quiénes son los desarrolladores responsables en el diseño y la toma de decisiones de las herramientas basadas en IA.

References

  1. Adel, A., Ahsan, A., & Davison, C. (2024). ChatGPT promises and challenges in education: Computational and ethical perspectives. Education Sciences, 14(8), 814. [Google Scholar] [CrossRef]
  2. Alé, J., & Arancibia, M. L. (2025). Emerging technology-based motivational strategies: A systematic review with meta-analysis. Education Sciences, 15(2), 197. [Google Scholar] [CrossRef]
  3. Alé, J., Ávalos, B., & Araya, R. (2025). Scientific practices for understanding, applying and creating with artificial intelligence in K-12 education: A scoping review. Review of Education, 13(2), e70098. [Google Scholar] [CrossRef]
  4. Almusharraf, N., & Alotaibi, H. (2022). An error-analysis study from an EFL writing context: Human and automated essay scoring approaches. Technology Knowledge And Learning, 28(3), 1015–1031. [Google Scholar] [CrossRef]
  5. Anwar, A., Rehman, I. U., Nasralla, M. M., Khattak, S. B. A., & Khilji, N. (2023). Emotions matter: A systematic review and meta-analysis of the detection and classification of students’ emotions in stem during online learning. Education Sciences, 13(9), 914. [Google Scholar] [CrossRef]
  6. Berryhill, J., Kok Heang, K., Clogher, R., & McBride, K. (2019). Hello, world: Artificial intelligence and its use in the public sector. In OECD working papers on public governance. No. 36. OECD Publishing. [Google Scholar] [CrossRef]
  7. Bulathwela, S., Pérez-Ortiz, M., Holloway, C., Cukurova, M., & Shawe-Taylor, J. (2024). Artificial intelligence alone will not democratise education: On educational inequality, techno-solutionism and inclusive tools. Sustainability, 16(2), 781. [Google Scholar] [CrossRef]
  8. Cai, Z., Fan, X., & Du, J. (2016). Gender and attitudes toward technology use: A meta-analysis. Computers & Education, 105, 1–13. [Google Scholar] [CrossRef]
  9. Castro, A., Díaz, B., Aguilera, C., Prat, M., & Chávez-Herting, D. (2025). Identifying rural elementary teachers’ perception challenges and opportunities in integrating artificial intelligence in teaching practices. Sustainability, 17(6), 2748. [Google Scholar] [CrossRef]
  10. Cavalcanti, A. P., Barbosa, A., Carvalho, R., Freitas, F., Tsai, Y., Gašević, D., & Mello, R. F. (2021). Automatic feedback in online learning environments: A systematic literature review. Computers And Education Artificial Intelligence, 2, 100027. [Google Scholar] [CrossRef]
  11. Cazzaniga, M., Jaumotte, F., Li, L., Melina, G., Panton, A. J., Pizzinelli, C., Rockall, E. J., & Tavares, M. M. (2024). Gen-AI: Artificial intelligence and the future of work. IMF Staff Discussion Note, 2024(1), 1. [Google Scholar] [CrossRef]
  12. Celik, I. (2023). Towards Intelligent-TPACK: An empirical study on teachers’ professional knowledge to ethically integrate artificial intelligence (AI)-based tools into education. Computers In Human Behavior, 138, 107468. [Google Scholar] [CrossRef]
  13. Celik, I., & Dogan, S. (2025). Intelligent-TPACK for AI-assisted literacy instruction. In Reimagining literacy in the age of AI (pp. 92–112). Chapman and Hall/CRC. [Google Scholar] [CrossRef]
  14. Chiu, T., & Chai, C. (2020). Sustainable curriculum planning for artificial intelligence education: A self-determination theory perspective. Sustainability, 12, 5568. [Google Scholar] [CrossRef]
  15. Cohen, J. (2013). Statistical power analysis for the behavioral sciences. Routledge. [Google Scholar] [CrossRef]
  16. Cohen, L., Manion, L., & Morrison, K. (2018). Research methods in education (8th ed.). Routledge. [Google Scholar] [CrossRef]
  17. Cowan, P., & Farrell, R. (2023). Virtual reality as the catalyst for a novel partnership model in initial teacher education: ITE subject methods tutors’ perspectives on the island of Ireland. Education Sciences, 13(3), 228. [Google Scholar] [CrossRef]
  18. Cukurova, M., Kralj, L., Hertz, B., & Saltidou, E. (2024). Professional development for teachers in the age of AI. European Schoolnet. Available online: https://discovery.ucl.ac.uk/id/eprint/10186881 (accessed on 22 September 2025).
  19. Diao, Y., Li, Z., Zhou, J., Gao, W., & Gong, X. (2024). A meta-analysis of college students’ intention to use generative artificial intelligence. arXiv, arXiv:2409.06712. [Google Scholar] [CrossRef]
  20. Dimitriadou, E., & Lanitis, A. (2023). A critical evaluation, challenges, and future perspectives of using artificial intelligence and emerging technologies in smart classrooms. Smart Learning Environments, 10(1), 12. [Google Scholar] [CrossRef]
  21. Dogan, M. E., Dogan, T. G., & Bozkurt, A. (2023). The use of artificial intelligence (AI) in online learning and distance education processes: A systematic review of empirical studies. Applied Sciences, 13(5), 3056. [Google Scholar] [CrossRef]
  22. Dogan, S., Nalbantoglu, U. Y., Celik, I., & Dogan, N. A. (2025). Artificial intelligence professional development: A systematic review of TPACK, designs, and effects for teacher learning. Professional Development in Education, 51, 519–546. [Google Scholar] [CrossRef]
  23. Edwards, C., Edwards, A., Spence, P. R., & Lin, X. (2018). I, teacher: Using artificial intelligence (AI) and social robots in communication and instruction. Communication Education, 67(4), 473–480. [Google Scholar] [CrossRef]
  24. European Commission. (2020). High-level expert group on artificial intelligence. The assessment list for trustworthy artificial intelligence (ALTAI) for self-assessment. Available online: https://data.europa.eu/doi/10.2759/002360 (accessed on 22 September 2025).
  25. Field, A. (2024). Discovering statistics using IBM SPSS statistics (6th ed.). SAGE Publications Ltd. [Google Scholar]
  26. George, D., & Mallery, P. (2021). IBM SPSS Statistics 27 step by step: A simple guide and reference (17th ed.). Routledge. [Google Scholar] [CrossRef]
  27. Grassini, S. (2023). Shaping the future of education: Exploring the potential and consequences of AI and ChatGPT in educational settings. Education Sciences, 13(7), 692. [Google Scholar] [CrossRef]
  28. Gregorio, T. A. D., Alieto, E. O., Natividad, E. R., & Tanpoco, M. R. (2024). Are preservice teachers “totally PACKaged”? A quantitative study of pre-service teachers’ knowledge and skills to ethically integrate artificial intelligence (AI)-based tools into education. In Lecture notes in networks and systems (pp. 45–55). Springer. [Google Scholar] [CrossRef]
  29. Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2019). Multivariate data analysis (8th ed.). Cengage. [Google Scholar]
  30. Hancock, G. R., & An, J. (2020). A closed-form alternative for estimating ω reliability under unidimensionality. Measurement: Interdisciplinary Research and Perspectives, 18(1), 1–14. [Google Scholar] [CrossRef]
  31. Hayes, A. F., & Coutts, J. J. (2020). Use omega rather than Cronbach’s alpha for estimating reliability. But…. Communication Methods and Measures, 14(1), 1–24. [Google Scholar] [CrossRef]
  32. Holmes, W. (2023). The unintended consequences of artificial intelligence and education. Education International. [Google Scholar]
  33. Holmes, W., & Porayska-Pomsta, K. (2023). The ethics of artificial intelligence in education: Practices, challenges, and debates. Routledge. [Google Scholar] [CrossRef]
  34. Holmes, W., Porayska-Pomsta, K., Holstein, K., Sutherland, E., Baker, T., Shum, S. B., Santos, O. C., Rodrigo, M. T., Cukurova, M., Bittencourt, I. I., & Koedinger, K. R. (2022). Ethics of AI in education: Towards a community-wide framework. International Journal of Artificial Intelligence in Education, 32(3), 504–526. [Google Scholar] [CrossRef]
  35. Hsu, C., Liang, J., Chai, C., & Tsai, C. (2013). Exploring preschool teachers’ technological pedagogical content knowledge of educational games. Journal of Educational Computing Research, 49(4), 461–479. [Google Scholar] [CrossRef]
  36. Joo, Y. J., Park, S., & Lim, E. (2018). Factors influencing preservice teachers’ intention to use technology: TPACK, teacher self-efficacy, and technology acceptance model. Journal of Educational Technology & Society, 21(3), 48–59. [Google Scholar]
  37. Kadluba, A., Strohmaier, A., Schons, C., & Obersteiner, A. (2024). How much C is in TPACK? A systematic review on the assessment of TPACK in mathematics. Educational Studies in Mathematics, 118, 169–199. [Google Scholar] [CrossRef]
  38. Kaiser, H. F. (1974). An index of factorial simplicity. Psychometrika, 39(1), 31–36. [Google Scholar] [CrossRef]
  39. Karatas, F., & Atac, B. A. (2025). When TPACK meets artificial intelligence: Analyzing TPACK and AI-TPACK components through structural equation modelling. Education and Information Technologies, 30, 8979–9004. [Google Scholar] [CrossRef]
  40. Kaufman, J. H., Woo, A., Eagan, J., Lee, S., & Kassan, E. B. (2025). Uneven adoption of artificial intelligence tools among U.S. teachers and principals in the 2023–2024 school year. RAND Corporation. [Google Scholar] [CrossRef]
  41. Kim, S., Jang, Y., Choi, S., Kim, W., Jung, H., Kim, S., & Kim, H. (2022). Correction to: Analyzing teacher competency with TPACK for K-12 AI education. KI—Künstliche Intelligenz, 36(2), 187. [Google Scholar] [CrossRef]
  42. Kitto, K., & Knight, S. (2019). Practical ethics for building learning analytics. British Journal of Educational Technology, 50(6), 2855–2870. [Google Scholar] [CrossRef]
  43. Koehler, M. J., & Mishra, P. (2008). Introducing TPACK. In AACTE Committee on Innovation and Technology (Ed.), Handbook of technological pedagogical content knowledge (TPCK) for educators (Vol. 1, pp. 3–29). Routledge. [Google Scholar]
  44. Krug, M., Thoms, L., & Huwer, J. (2023). Augmented reality in the science classroom—Implementing pre-service teacher training in the competency area of simulation and modeling according to the DiKoLAN framework. Education Sciences, 13(10), 1016. [Google Scholar] [CrossRef]
  45. Kuo, Y., & Kuo, Y. (2024). An exploratory study of pre-service teachers’ perceptions of technological pedagogical content knowledge of digital games. Research and Practice in Technology Enhanced Learning, 19, 008. [Google Scholar] [CrossRef]
  46. Labadze, L., Grigolia, M., & Machaidze, L. (2023). Role of AI chatbots in education: Systematic literature review. International Journal of Educational Technology in Higher Education, 20(1), 56. [Google Scholar] [CrossRef]
  47. Lo, C. K. (2023). What is the impact of ChatGPT on education? A rapid review of the literature. Education Sciences, 13(4), 410. [Google Scholar] [CrossRef]
  48. Lorenz, P., Perset, K., & Berryhill, J. (2023). Initial policy considerations for generative artificial intelligence. OECD Artificial Intelligence Papers, 1. OECD Publishing. [Google Scholar] [CrossRef]
  49. Luckin, R., George, K., & Cukurova, M. (2022). AI for school teachers. Routledge. [Google Scholar] [CrossRef]
  50. Maslej, N., Fattorini, L., Perrault, R., Parli, V., Reuel, A., Brynjolfsson, E., Etchemendy, J., Ligett, K., Lyons, T., Manyika, J., Niebles, J. C., Shoham, Y., Wald, R., Clark, J., & Lyon, K. (2024). Artificial intelligence index report 2024 (7th ed.). Stanford University, Human-Centered Artificial Intelligence. Available online: https://hai-production.s3.amazonaws.com/files/hai_ai-index-report-2024-smaller2.pdf (accessed on 22 September 2025).
  51. Miao, F., & Cukurova, M. (2024). AI competency framework for teachers. UNESCO. [Google Scholar] [CrossRef]
  52. Miao, F., Hinostroza, J. E., Lee, M., Isaacs, S., Orr, D., Senne, F., Martinez, A.-L., Song, K.-S., Uvarov, A., Holmes, W., Vergel de Dios, B., & UNESCO. (2022). Guidelines for ICT in education policies and masterplans (ED-2021/WS/34). UNESCO. [Google Scholar] [CrossRef]
  53. Miao, F., Holmes, W., Ronghuai, H., & Hui, Z. (2021). AI and education: Guidance for policy-makers. UNESCO. [Google Scholar] [CrossRef]
  54. Miao, F., Holmes, W., & UNESCO. (2023). Guidance for generative AI in education and research. UNESCO. [Google Scholar] [CrossRef]
  55. Miao, F., Shiohira, K., & Lao, N. (2024). AI competency framework for students. UNESCO. [Google Scholar] [CrossRef]
  56. Mineduc. (2024). Cargos docentes de Chile—Directorio 2024. Datos Abiertos Mineduc. Available online: https://datosabiertos.mineduc.cl/ (accessed on 22 September 2025).
  57. Mishra, P. (2019). Considering contextual knowledge: The TPACK diagram gets an upgrade. Journal of Digital Learning in Teacher Education, 35(2), 76–78. [Google Scholar] [CrossRef]
  58. Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record: The Voice of Scholarship in Education, 108(6), 1017–1054. [Google Scholar] [CrossRef]
  59. Mishra, P., Warr, M., & Islam, R. (2023). TPACK in the age of ChatGPT and Generative AI. Journal of Digital Learning in Teacher Education, 39(4), 235–251. [Google Scholar] [CrossRef]
  60. Møgelvang, A., Bjelland, C., Grassini, S., & Ludvigsen, K. (2024). Gender differences in the use of generative artificial intelligence chatbots in higher education: Characteristics and consequences. Education Sciences, 14(12), 1363. [Google Scholar] [CrossRef]
  61. Ng, D. T. K., Lee, M., Tan, R. J. Y., Hu, X., Downie, J. S., & Chu, S. K. W. (2023). A review of AI teaching and learning from 2000 to 2020. Education and Information Technologies, 28(7), 8445–8501. [Google Scholar] [CrossRef]
  62. Ning, Y., Zhang, C., Xu, B., Zhou, Y., & Wijaya, T. T. (2024). Teachers’ AI-TPACK: Exploring the relationship between knowledge elements. Sustainability, 16(3), 978. [Google Scholar] [CrossRef]
  63. OECD. (2019). TALIS 2018 results (Vol. I): Teachers and school leaders as lifelong learners. OECD Publishing. [Google Scholar] [CrossRef]
  64. OECD. (2020). TALIS 2018 results (Vol. II): Teachers and school leaders as valued professionals. OECD Publishing. [Google Scholar] [CrossRef]
  65. OECD. (2023). Opportunities, guidelines and guardrails on effective and equitable use of AI in education. OECD Publishing. [Google Scholar]
  66. OECD. (2024). Teaching and learning international survey (TALIS) 2024: Teacher questionnaire, survey instrument. In OECD TALIS 2024 Database. OECD. [Google Scholar]
  67. OECD. (2025). Teaching and learning international survey (TALIS) 2024 conceptual framework. OECD Publishing. [Google Scholar] [CrossRef]
  68. Pallant, J. (2020). SPSS survival manual: A step by step guide to data analysis using IBM SPSS (7th ed.). Routledge. [Google Scholar] [CrossRef]
  69. Polly, D. (2024). Examining TPACK enactment in elementary mathematics with various learning technologies. Education Sciences, 14(10), 1091. [Google Scholar] [CrossRef]
  70. Popenici, S. A. D., & Kerr, S. (2017). Exploring the impact of artificial intelligence on teaching and learning in higher education. Research and Practice in Technology Enhanced Learning, 12(1), 22. [Google Scholar] [CrossRef] [PubMed]
  71. Saenen, L., Hermans, K., Rocha, M. D. N., Struyven, K., & Emmers, E. (2024). Co-designing inclusive excellence in higher education: Students’ and teachers’ perspectives on the ideal online learning environment using the I-TPACK model. Humanities and Social Sciences Communications, 11(1), 890. [Google Scholar] [CrossRef]
  72. Seufert, S., Guggemos, J., & Sailer, M. (2021). Technology-related knowledge, skills, and attitudes of pre- and in-service teachers: The current situation and emerging trends. Computers in Human Behavior, 115, 106552. [Google Scholar] [CrossRef] [PubMed]
  73. Shin, D., & Park, Y. J. (2019). Role of fairness, accountability, and transparency in algorithmic affordance. Computers in Human Behavior, 98, 277–284. [Google Scholar] [CrossRef]
  74. Shulman, L. S. (1986). Those who understand: Knowledge growth in teaching. Educational Researcher, 15(2), 4–14. [Google Scholar] [CrossRef]
  75. Shulman, L. S. (1987). Knowledge and teaching: Foundations of the new reform. Harvard Educational Review, 57(1), 1–22. [Google Scholar] [CrossRef]
  76. Shum, S. J. B., & Luckin, R. (2019). Learning analytics and AI: Politics, pedagogy and practices. British Journal of Educational Technology, 50(6), 2785–2793. [Google Scholar] [CrossRef]
  77. Sierra, Á. J., Iglesias, J. O., & Palacios-Rodríguez, A. (2024). Diagnosis of TPACK in elementary school teachers: A case study in the Colombian Caribbean. Education Sciences, 14(9), 1013. [Google Scholar] [CrossRef]
  78. Stolpe, K., & Hallström, J. (2024). Artificial intelligence literacy for technology education. Computers and Education Open, 6, 100159. [Google Scholar] [CrossRef]
  79. Sun, J., Ma, H., Zeng, Y., Han, D., & Jin, Y. (2023). Promoting the AI teaching competency of K-12 computer science teachers: A TPACK-based professional development approach. Education and Information Technologies, 28(2), 1509–1533. [Google Scholar] [CrossRef]
  80. Tan, X., Cheng, G., & Ling, M. H. (2024). Artificial intelligence in teaching and teacher professional development: A systematic review. Computers and Education: Artificial Intelligence, 8, 100355. [Google Scholar] [CrossRef]
  81. Tillé, Y. (2020). Sampling and estimation from finite populations. Wiley. [Google Scholar] [CrossRef]
  82. Tomczak, M., & Tomczak, E. (2014). The need to report effect size estimates revisited: An overview of some recommended measures of effect size. Trends in Sport Sciences, 21(1), 19–25. [Google Scholar]
  83. Traga, Z. A., & Rocconi, L. (2025). AI literacy: Elementary and secondary teachers’ use of AI-tools, reported confidence, and professional development needs. Education Sciences, 15(9), 1186. [Google Scholar] [CrossRef]
  84. UNESCO. (2019, May 16–18). Beijing consensus on artificial intelligence and education. International Conference on Artificial Intelligence and Education, Planning Education in the AI Era: Lead the Leap, Beijing, China. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000368303 (accessed on 22 September 2025).
  85. Velander, J., Taiye, M. A., Otero, N., & Milrad, M. (2023). Artificial intelligence in K-12 education: Eliciting and reflecting on Swedish teachers’ understanding of AI and its implications for teaching & learning. Education and Information Technologies, 29(4), 4085–4105. [Google Scholar] [CrossRef]
  86. Wang, K. (2024). Pre-service teachers’ GenAI anxiety, technology self-efficacy, and TPACK: Their structural relations with behavioral intention to design GenAI-Assisted teaching. Behavioral Sciences, 14(5), 373. [Google Scholar] [CrossRef]
  87. Wang, Y., Nadler, E. O., Mao, Y., Adhikari, S., Wechsler, R. H., & Behroozi, P. (2021). Universe machine: Predicting galaxy star formation over seven decades of halo mass with zoom-in simulations. The Astrophysical Journal, 915(2), 116. [Google Scholar] [CrossRef]
  88. Williamson, B., & Eynon, R. (2020). Historical threads, missing links, and future directions in AI in education. Learning, Media and Technology, 45(3), 223–235. [Google Scholar] [CrossRef]
  89. Yan, L., Sha, L., Zhao, L., Li, Y., Martinez-Maldonado, R., Chen, G., Li, X., Jin, Y., & Gašević, D. (2024). Practical and ethical challenges of large language models in education: A systematic scoping review. British Journal of Educational Technology, 55, 90–112. [Google Scholar] [CrossRef]
  90. Yim, I. H. Y., & Su, J. (2025). Artificial intelligence (AI) learning tools in K-12 education: A scoping review. Journal of Computers in Education, 12, 93–131. [Google Scholar] [CrossRef]
  91. Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education—Where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 1–27. [Google Scholar] [CrossRef]
  92. Zeng, Y., Wang, Y., & Li, S. (2022). The relationship between teachers’ information technology integration self-efficacy and TPACK: A meta-analysis. Frontiers in Psychology, 13, 1091017. [Google Scholar] [CrossRef]
  93. Zhang, K., & Aslan, A. B. (2021). AI technologies for education: Recent research & future directions. Computers and Education: Artificial Intelligence, 2, 100025. [Google Scholar] [CrossRef]
  94. Zhao, J., Li, S., & Zhang, J. (2025). Understanding teachers’ adoption of AI technologies: An empirical study from Chinese middle schools. Systems, 13(4), 302. [Google Scholar] [CrossRef]
Figure 1. Visual representation of the updated TPACK Framework (Mishra, 2019, p. 2).
Figure 2. Visual representation of the Intelligent-TPACK Framework (Celik, 2023, p. 8).
Figure 3. Dendrogram using Ward Linkage (rescaled distance cluster combine).
Figure 4. Scatter plots of responses in each Intelligent-TPACK dimension: (a) TK; (b) TPK; (c) TCK; (d) TPACK; (e) Ethical Knowledge. Responses are plotted against teachers’ years of experience, in separate panels by gender.
Table 1. Comparison between the population and the study sample.

| Variable | Reference population N | % | Sample N | % |
|---|---|---|---|---|
| Gender | | | | |
| Female | 46,235 | 66% | 464 | 65.2% |
| Male | 23,667 | 34% | 237 | 33.3% |
| Educational level | | | | |
| Primary | 29,205 | 42% | 290 | 40.9% |
| Secondary | 40,697 | 58% | 419 | 59.1% |
| School type | | | | |
| Public | 20,787 | 30% | 298 | 42% |
| Private | 49,115 | 70% | 411 | 58% |
| Geography | | | | |
| Urban | 67,840 | 97% | 658 | 92.8% |
| Rural | 2,062 | 3% | 51 | 7.2% |
| Age (years old) | | | | |
| 20–30 | 6,372 | 9% | 100 | 14.1% |
| 30–40 | 22,354 | 32% | 241 | 34.0% |
| 40–50 | 18,340 | 26% | 196 | 27.6% |
| 50–60 | 11,638 | 17% | 102 | 14.4% |
| 60–70 | 9,449 | 14% | 68 | 9.6% |
| 70–80 | 1,514 | 2% | 2 | 0.3% |
| 80+ | 235 | 0.3% | 0 | 0% |
| Total | 69,902 | 100% | 709 | 100% |

Note. Non-probability sample.
Table 2. Cronbach’s alpha, McDonald’s omega, and interpretation by dimension.

| Dimension | Items | Cronbach’s α | McDonald’s ω | Interpretation |
|---|---|---|---|---|
| TK | 3 | 0.886 | 0.891 | Very good |
| TPK | 3 | 0.793 | 0.805 | Acceptable |
| TCK | 3 | 0.849 | 0.855 | Very good |
| TPACK | 3 | 0.885 | 0.888 | Very good |
| Ethics | 3 | 0.882 | 0.883 | Very good |
Table 3. Normality and distribution statistics.

| Dimension | Kolmogorov–Smirnov Z | p-Value | Shapiro–Wilk W | p-Value | Skewness | Kurtosis |
|---|---|---|---|---|---|---|
| TK | 0.122 | <0.001 | 0.934 | <0.001 | −0.352 | −0.740 |
| TPK | 0.092 | <0.001 | 0.961 | <0.001 | −0.062 | −0.551 |
| TCK | 0.092 | <0.001 | 0.948 | <0.001 | 0.016 | −0.653 |
| TPACK | 0.128 | <0.001 | 0.934 | <0.001 | 0.330 | −0.653 |
| Ethics | 0.138 | <0.001 | 0.921 | <0.001 | 0.366 | −0.526 |
Table 4. Descriptive statistics of items (N = 709).

| Scale | Mdn | Q1–Q3 | Mean | SD | Skew | Kurt | 95% CI Mean |
|---|---|---|---|---|---|---|---|
| TK | 2.67 | 1.33 | 2.67 | 0.91 | −0.352 | −0.740 | [2.60–2.73] |
| TPK | 2.33 | 1.00 | 2.52 | 0.81 | −0.062 | −0.551 | [2.46–2.58] |
| TCK | 2.33 | 1.00 | 2.45 | 0.86 | 0.016 | −0.653 | [2.38–2.51] |
| TPACK | 2.00 | 1.33 | 2.23 | 0.88 | 0.330 | −0.653 | [2.16–2.29] |
| Ethics | 2.00 | 1.33 | 2.10 | 0.84 | 0.366 | −0.526 | [2.04–2.16] |
Table 5. Results of Mann–Whitney U non-parametric tests and effect size.

| Dimension | Mann–Whitney U | Z Statistic | p-Value | Decision | r |
|---|---|---|---|---|---|
| Gender | | | | | |
| TK | 45,865.500 | −3.706 | <0.001 | Reject H0 | 0.14 |
| TPK | 48,928.000 | −2.492 | 0.013 | Do not reject H0 | 0.09 |
| TCK | 47,597.500 | −3.022 | 0.003 | Reject H0 | 0.11 |
| TPACK | 47,549.000 | −3.048 | 0.002 | Reject H0 | 0.12 |
| Ethics | 45,786.500 | −3.769 | <0.001 | Reject H0 | 0.14 |
| School level | | | | | |
| TK | 76,729.500 | 6.005 | <0.001 | Reject H0 | 0.23 |
| TPK | 70,448.500 | 3.646 | <0.001 | Reject H0 | 0.14 |
| TCK | 74,602.500 | 5.210 | <0.001 | Reject H0 | 0.20 |
| TPACK | 72,717.500 | 4.512 | <0.001 | Reject H0 | 0.17 |
| Ethics | 75,727.500 | 5.677 | <0.001 | Reject H0 | 0.21 |
Note. Effect size r was interpreted following J. Cohen’s (2013) benchmarks: ~0.10 = small effect, ~0.30 = medium effect, ≥0.50 = large effect.
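The r values in Table 5 are consistent with the standard conversion for rank-based tests, r = |Z|/√N. A minimal sketch of that check, assuming N is the number of respondents entering each comparison (e.g., 464 + 237 = 701 teachers reporting gender); the function name is ours, not the authors’:

```python
import math

def mann_whitney_r(z: float, n: int) -> float:
    """Effect size r = |Z| / sqrt(N) for a Mann-Whitney U test."""
    return abs(z) / math.sqrt(n)

# TK by gender from Table 5: Z = -3.706, N = 464 + 237 = 701 cases
print(round(mann_whitney_r(-3.706, 701), 2))  # 0.14, matching the reported value
```

The same conversion reproduces the other gender rows (e.g., TPK: 2.492/√701 ≈ 0.09).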
Table 6. Results of Kruskal–Wallis H non-parametric tests and effect size.

| Dimension | H (K–W) | df | p-Value | Decision | ε² |
|---|---|---|---|---|---|
| School type | | | | | |
| TK | 10.553 | 4 | 0.032 | Do not reject H0 | 0.009 |
| TPK | 10.534 | 4 | 0.032 | Do not reject H0 | 0.008 |
| TCK | 14.614 | 4 | 0.060 | Do not reject H0 | 0.014 |
| TPACK | 4.316 | 4 | 0.365 | Do not reject H0 | 0.000 |
| Ethics | 5.550 | 4 | 0.235 | Do not reject H0 | 0.001 |
| Age | | | | | |
| TK | 88.969 | 5 | <0.001 | Reject H0 | 0.119 |
| TPK | 34.602 | 5 | <0.001 | Reject H0 | 0.042 |
| TCK | 78.255 | 5 | <0.001 | Reject H0 | 0.104 |
| TPACK | 27.926 | 5 | <0.001 | Reject H0 | 0.033 |
| Ethics | 38.705 | 5 | <0.001 | Reject H0 | 0.048 |
| Subject | | | | | |
| TK | 59.841 | 16 | <0.001 | Reject H0 | 0.063 |
| TPK | 50.374 | 16 | <0.001 | Reject H0 | 0.048 |
| TCK | 35.987 | 16 | 0.003 | Reject H0 | 0.028 |
| TPACK | 71.778 | 16 | <0.001 | Reject H0 | 0.079 |
| Ethics | 45.154 | 16 | <0.001 | Reject H0 | 0.052 |
Note. Effect size ε2 was interpreted following the benchmarks by Tomczak and Tomczak (2014): ~0.01 = small, ~0.06 = medium, >0.14 = large.
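The ε² column can be reproduced from H, the number of groups k (= df + 1), and N via the rank-based effect size (H − k + 1)/(N − k); that this exact formula underlies the reported values is our inference from checking the table against N = 709, not something the source states:

```python
def kw_effect_size(h: float, df: int, n: int) -> float:
    """Rank-based Kruskal-Wallis effect size: (H - k + 1) / (n - k), k = df + 1 groups."""
    k = df + 1
    return (h - k + 1) / (n - k)

# Age -> TK from Table 6: H = 88.969, df = 5, N = 709
print(round(kw_effect_size(88.969, 5, 709), 3))  # 0.119
```

The same expression matches the Subject rows as well (e.g., TK: (59.841 − 17 + 1)/(709 − 17) ≈ 0.063).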
Table 7. Results of post hoc comparisons between components.

| Comparison | Z | p-Value | r |
|---|---|---|---|
| TK–TPK | −6.013 | <0.001 | 0.23 |
| TK–TCK | −9.920 | <0.001 | 0.37 |
| TK–TPACK | −14.748 | <0.001 | 0.55 |
| TK–Ethics | −16.141 | <0.001 | 0.61 |
| TPK–TCK | −3.623 | <0.001 | 0.14 |
| TPK–TPACK | −12.157 | <0.001 | 0.46 |
| TPK–Ethics | −15.121 | <0.001 | 0.57 |
| TCK–TPACK | −10.436 | <0.001 | 0.39 |
| TCK–Ethics | −13.022 | <0.001 | 0.49 |
| TPACK–Ethics | −6.394 | <0.001 | 0.24 |
Note. Effect size r was interpreted using J. Cohen’s (2013) thresholds: ~0.10 = small, ~0.30 = medium, ≥0.50 = large effect.
Table 8. Spearman correlations.

| Dimension | Years of teaching experience ρ | p-Value | Age (years old) ρ | p-Value |
|---|---|---|---|---|
| TK | −0.330 | <0.001 | −0.349 | <0.001 |
| TPK | −0.218 | <0.001 | −0.202 | <0.001 |
| TCK | −0.308 | <0.001 | −0.321 | <0.001 |
| TPACK | −0.218 | <0.001 | −0.189 | <0.001 |
| Ethics | −0.218 | <0.001 | −0.219 | <0.001 |
Note. Interpretation of Spearman’s rho followed J. Cohen (2013) guidelines: values around |0.10–0.29| indicate a small effect, |0.30–0.49| a moderate effect, and ≥|0.50| a large effect.
Table 9. Summary of ANOVA analysis for regression models by dimension.

| Dimension | df Regression | df Residual | F | p-Value |
|---|---|---|---|---|
| TK | 7 | 693 | 21.875 | <0.001 |
| TPK | 7 | 693 | 8.372 | <0.001 |
| TCK | 7 | 693 | 15.489 | <0.001 |
| TPACK | 7 | 693 | 8.944 | <0.001 |
| Ethics | 7 | 693 | 7.066 | <0.001 |
Table 10. Interpretation of adjusted R².

| Dimension | R | R² | Adjusted R² | Std. Error | Durbin–Watson | Interp. |
|---|---|---|---|---|---|---|
| TK | 0.425 | 0.181 | 0.173 | 0.82738 | 1.635 | Medium |
| TPK | 0.279 | 0.078 | 0.069 | 0.78226 | 1.652 | Small |
| TCK | 0.368 | 0.135 | 0.127 | 0.80630 | 1.719 | Small |
| TPACK | 0.288 | 0.083 | 0.074 | 0.84628 | 1.612 | Small |
| Ethics | 0.315 | 0.100 | 0.090 | 0.80362 | 1.732 | Small |
Note. According to J. Cohen (2013), adjusted R2 values can be interpreted statistically at different effect size levels (translating the thresholds of f2 = 0.02, 0.15, and 0.35 into approximate R2 ranges). Adjusted R2 values ≤ 0.02 are considered negligible, >0.02 and <0.13 indicate small effects, ≈0.13 and <0.26 medium effects, and ≥0.26 large effects.
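The note’s translation of Cohen’s f² thresholds into approximate R² ranges follows from the identity R² = f²/(1 + f²), since f² = R²/(1 − R²). A quick check of the stated cutoffs:

```python
def f2_to_r2(f2: float) -> float:
    """Convert Cohen's f-squared into the equivalent R-squared: R2 = f2 / (1 + f2)."""
    return f2 / (1 + f2)

# Cohen's small / medium / large f2 thresholds -> R2 cutoffs used in the note
for f2 in (0.02, 0.15, 0.35):
    print(f2, "->", round(f2_to_r2(f2), 2))
# 0.02 -> 0.02, 0.15 -> 0.13, 0.35 -> 0.26
```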
Table 11. Coefficients and significance of predictor variables.

| Predictor | Standardized β | p-Value | Tolerance | VIF | Interp. |
|---|---|---|---|---|---|
| TK | | | | | |
| Age | −0.202 | 0.002 | 0.277 | 3.613 | Small-mod. |
| Years of experience | −0.168 | 0.009 | 0.288 | 3.475 | Small |
| Gender | −0.142 | <0.001 | 0.974 | 1.026 | Small |
| Teaching level | 0.270 | <0.001 | 0.860 | 1.163 | Small-mod. |
| School type | −0.016 | 0.656 | 0.866 | 1.155 | Negligible |
| Subject | −0.001 | 0.914 | 0.950 | 1.052 | Negligible |
| TPK | | | | | |
| Age | 0.000 | 0.998 | 0.277 | 3.613 | Negligible |
| Years of experience | −0.224 | 0.001 | 0.288 | 3.475 | Small-mod. |
| Gender | −0.084 | 0.023 | 0.974 | 1.026 | Small |
| Teaching level | 0.104 | 0.008 | 0.860 | 1.163 | Small |
| School type | −0.021 | 0.592 | 0.866 | 1.155 | Negligible |
| Subject | 0.002 | 0.949 | 0.950 | 1.052 | Negligible |
| TCK | | | | | |
| Age | −0.161 | 0.017 | 0.277 | 3.613 | Small |
| Years of experience | −0.141 | 0.032 | 0.288 | 3.475 | Small |
| Gender | −0.111 | 0.002 | 0.974 | 1.026 | Small |
| Teaching level | 0.116 | 0.002 | 0.860 | 1.163 | Small |
| School type | −0.033 | 0.392 | 0.866 | 1.155 | Negligible |
| Subject | −0.009 | 0.795 | 0.950 | 1.052 | Negligible |
| TPACK | | | | | |
| Age | 0.106 | 0.127 | 0.277 | 3.613 | Negligible |
| Years of experience | −0.294 | <0.001 | 0.288 | 3.475 | Moderate |
| Gender | −0.105 | 0.005 | 0.974 | 1.026 | Small |
| Teaching level | 0.128 | 0.001 | 0.860 | 1.163 | Small |
| School type | −0.020 | 0.602 | 0.866 | 1.155 | Negligible |
| Subject | 0.062 | 0.098 | 0.950 | 1.052 | Negligible |
| Ethics | | | | | |
| Age | −0.112 | 0.104 | 0.277 | 3.613 | Negligible |
| Years of experience | −0.096 | 0.155 | 0.288 | 3.475 | Negligible |
| Gender | −0.139 | <0.001 | 0.974 | 1.026 | Small |
| Teaching level | 0.160 | <0.001 | 0.860 | 1.163 | Small-mod. |
| School type | −0.035 | 0.367 | 0.866 | 1.155 | Negligible |
| Subject | 0.056 | 0.128 | 0.950 | 1.052 | Negligible |
Note. According to J. Cohen (2013), adjusted R2 values can be interpreted statistically at different effect size levels (translating the thresholds of f2 = 0.02, 0.15, and 0.35 into approximate R2 ranges). Adjusted R2 values ≤ 0.02 are considered negligible, >0.02 and <0.13 indicate small effects, ≈0.13 and <0.26 medium effects, and ≥0.26 large effects.
Table 12. Means (SD) by dimension according to cluster.

| Dimension | Cluster 1 | Cluster 2 | Cluster 3 | Cluster 4 |
|---|---|---|---|---|
| TK | 3.59 (0.47) | 2.28 (0.40) | 1.41 (0.58) | 3.19 (0.39) |
| TPK | 3.52 (0.42) | 2.31 (0.40) | 1.43 (0.46) | 2.70 (0.46) |
| TCK | 3.52 (0.48) | 2.21 (0.35) | 1.22 (0.34) | 2.71 (0.45) |
| TPACK | 3.37 (0.48) | 2.18 (0.61) | 1.09 (0.21) | 2.21 (0.49) |
| Ethics | 3.19 (0.55) | 2.16 (0.48) | 1.09 (0.24) | 1.93 (0.59) |
| N (%) | 151 (21.3%) | 203 (28.6%) | 139 (19.6%) | 216 (30.5%) |

Share and Cite

Alé, J.; Ávalos, B.; Araya, R. Chilean Teachers’ Knowledge of and Experience with Artificial Intelligence as a Pedagogical Tool. Educ. Sci. 2025, 15, 1268. https://doi.org/10.3390/educsci15101268

