Article

Identification of Key Factors Influencing Teachers’ Self-Perceived AI Literacy: An XGBoost and SHAP-Based Approach

1 Department of Computer Science and Engineering, Graduate School, Korea University, Seoul 02841, Republic of Korea
2 Office of the President, Korea Cyber University, Seoul 02450, Republic of Korea
3 Major of Computer Science Education, Graduate School of Education, Korea University, Seoul 02841, Republic of Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(8), 4433; https://doi.org/10.3390/app15084433
Submission received: 11 March 2025 / Revised: 3 April 2025 / Accepted: 14 April 2025 / Published: 17 April 2025

Abstract

The rapid advancement of digital technologies and artificial intelligence (AI) is reshaping K-12 education, thereby emphasizing the growing need for AI Literacy among teachers. This study identifies key factors that influence teachers’ self-perceived AI Literacy and evaluates their impact on AI Literacy performance across various teaching phases, using Extreme Gradient Boosting (XGBoost) and Shapley Additive Explanations (SHAP). Data collected from 1172 K-12 teachers in South Korea were preprocessed and then split into an 80:20 training-to-testing ratio. To optimize model performance, Bayesian Optimization was used to fine-tune key hyperparameters, including the learning rate, maximum depth, subsample ratio, and number of boosting rounds. The model’s predictive accuracy was assessed using R2, MSE, MAE, and RMSE. The optimized model achieved R2 values of 0.8206 (Class Preparation), 0.8007 (Class Implementation), 0.8066 (Class Assessment), and 0.7746 (Utilizing Assessment Results). The results indicate that technical knowledge and AI Literacy skills are the most influential factors in the Class Preparation and Implementation Phases, while educational decision-making and ethical considerations play a crucial role in the Assessment and Utilizing Assessment Results Phases. Further, SHAP analysis highlights that both teachers’ and students’ perceived levels of AI learning significantly impact the adoption of AI Literacy, underscoring the importance of contextual factors in integrating AI within education. These findings emphasize the need for AI Literacy education that integrates technical competencies, pedagogical strategies, and ethical decision-making. This study provides empirical insights to support the development of teacher training programs and AI Literacy policies, ensuring the effective integration of AI in education.

1. Introduction

Digital technologies and artificial intelligence (AI) drive transformative changes across various sectors, including education [1]. AI facilitates personalized learning experiences, supports data-driven decision-making, and enables automated assessments and customized educational content [2]. AI-driven analytics enable personalized learning and assist teachers in developing targeted educational strategies [3].
As AI technology reshapes the K-12 education paradigm, countries are actively developing policies to create AI-based learning environments. South Korea, through its revised 2022 educational curriculum, emphasized the use of AI in teaching, learning, and assessment processes [4], while Japan implemented the ‘GIGA School Program’ to support AI-based personalized learning with a one-device-per-student policy [5]. Singapore introduced the ‘AI Learning Analytics’ system to analyze individual student learning data and support customized education [6], and China has expanded its AI-based education innovation through the ‘New Generation Artificial Intelligence Development Plan’ and the ‘Smart Campus AI Initiative’, which includes AI textbook development and online assessment systems [7,8]. Saudi Arabia and the UAE are also advancing in building smart learning environments utilizing AI [9,10].
The expansion of AI in education is transforming teaching methods and reshaping educators’ roles. Teachers are increasingly required to design personalized learning experiences using AI-based learning analytics tools and to enhance the objectivity of assessments through AI grading systems, thereby serving as both designers of personalized learning and data-driven decision-makers [11]. These changes require teachers to possess AI Literacy, highlighting the need for its effective integration into education. In the United States, the National AI Education Initiative (2022) has been enhancing AI education by linking the AI4K12 framework with STEM education and expanding AI Literacy training for teachers [12]. The United Kingdom and the European Union (EU) continue to explore integrating AI technology in education, developing AI Literacy training and usage guidelines for teachers [13].
AI Literacy encompasses a comprehensive skill set that includes understanding the fundamental concepts and principles of AI, applying them effectively within an educational context, and considering the social and ethical implications of their use. This skill set is an extension of traditional literacy (reading, writing, speaking) and digital literacy, including the understanding of AI’s operational principles and data analytics capabilities, the ability to assess the impact of AI on education and society, and a responsible ethical stance on AI usage [1]. Teacher AI Literacy is the ability to effectively integrate AI technologies in all educational stages—preparation, execution, assessment, and utilization of assessments. This requires teachers to possess theoretical and practical knowledge, develop technical skills to achieve educational objectives, and master pedagogical methods for effective AI application [1].
With the increasing importance of AI Literacy, the level of teachers’ AI Literacy is becoming more crucial in enhancing the quality of education and student learning outcomes. Recent studies have been actively exploring the development of training programs to improve AI Literacy and the practical implementation of AI technologies, with a focus on teachers’ AI Literacy capabilities and their influence on students’ acquisition of AI Literacy [14,15,16]. However, while initial studies primarily concentrated on the impact of teachers’ AI Literacy on students’ learning, most prior research has focused on establishing the concept of AI Literacy and developing pedagogies, with little empirical examination of how teachers’ perceptions affect their AI Literacy performance and educational effectiveness in real classrooms.
This study makes several significant contributions. First, it empirically investigates the relationship between teachers’ perceptions and their AI Literacy performance using advanced machine learning techniques, addressing a gap in previous research that primarily focused on conceptual discussions. Second, it applies XGBoost and SHAP analysis to identify key factors affecting teachers’ AI Literacy across different teaching phases, providing both predictive insights and explainable interpretations. Third, the findings offer practical implications for the design of teacher training programs and educational policies aimed at fostering AI Literacy in real classroom settings.
Furthermore, the study clarifies which factors have a significant impact on teachers’ AI Literacy performance at each instructional phase, providing insights into where targeted support is most effective. This enables the development of focused teacher training strategies that can enhance teachers’ AI Literacy performance efficiently within a short period of time, especially addressing gaps in teachers’ self-assessment and confidence in AI Literacy.
By providing empirical data and methodological innovation, this study supports the systematic development of AI Literacy education in the educational field. These contributions are expected to offer valuable insights for researchers, teacher educators, and educational policymakers seeking to foster AI Literacy in K-12 education.

2. Literature Review

As the use of AI technologies in K-12 education expands, the educational implications of AI Literacy are becoming increasingly significant. However, AI Literacy is often conflated with digital literacy or understood as part of AI education, and its conceptual independence is not always clearly defined [17,18,19,20,21,22]. This study aims to strengthen the foundation of our research by systematically discussing the definition of AI Literacy. To concretize AI Literacy, we first articulate it through a bottom-up approach based on the relationship between General Literacy and Literacy. General Literacy refers to the basic knowledge and competencies necessary for social participation, while Literacy involves the ability to understand and use information within specific contexts [23,24,25]. Traditionally focused on reading and writing, the concept of Literacy was expanded by The New London Group [26] through the introduction of ‘Multiliteracies’, emphasizing modes of meaning-making in digital and multicultural environments. Gilster [27] extended the scope of literacy within the information technology environment by introducing ‘Digital Literacy’. AI Literacy extends Digital Literacy, encompassing the understanding of AI, its critical analysis, and ethical use [22,28]. While some previous studies viewed AI Literacy as a subset of digital literacy [21], recent research suggests it is developing as an independent concept reflecting the unique characteristics and societal impacts of AI technology [22,29].
Next, we compare the educational approaches of AI education and AI Literacy. AI education, as part of computing education, focuses on developing problem-solving skills through understanding AI concepts and principles, processing data, and creating models [30,31,32,33,34]. (Depending on national educational policies and academic traditions, the names for computing curricula in K-12 education are used interchangeably, such as Informatics, Computing, and ICT [30,31,32,33,34]; CC 2020 acknowledges these differences and clarifies that the terms refer to the same field. Initially, the concept of “Introduction to AI” (1991) was included in computer science education [35], and over time elements such as “Machine Learning” and “Data Mining” (2001), followed by “Natural Language Processing” and “AI Ethics” (2021), were added, gradually expanding the scope [35]. Recently, the “AI Competency Framework” was introduced, emphasizing that not only AI technical competencies but also ethical considerations must be addressed at all educational levels [35].) International K-12 AI education guidelines play a crucial role in establishing and developing an AI education system [20]. UNESCO [36] is developing a K-12 AI curriculum, setting directions for AI education in various countries, and AI4K12 recommends designing curricula around the five Big Ideas of “Perception of Intelligence”, “Representation and Reasoning”, “Learning”, “Natural Interaction”, and “Societal Impact” [14]. This highlights that AI education should encompass ethical and social responsibilities beyond mere technical acquisition [20], including learning AI concepts, practical technology applications, and the integration of AI with other academic disciplines [3,20,28,35,36,37]. In contrast, AI Literacy is not confined to a specific subject area but encompasses understanding the basic principles of AI technology, using tools, and analyzing its social and ethical impacts [3,22,28,37,38]. In AI-based educational environments, teachers are required to do more than merely use AI tools; they must effectively utilize AI technologies within educational contexts and critically assess their use [21,29]. Teachers need to understand AI’s basic concepts and operational principles to develop and apply appropriate instructional strategies that align with learning objectives. Therefore, a teacher with AI Literacy can be defined as “an expert who effectively utilizes AI technology in lesson planning, teaching execution, and the assessment process”.
Section 2 is organized to provide the theoretical background and methodological framework for this study. Instead of relying on a pre-established theoretical framework, this study constructed its theoretical foundation by systematically reviewing the conceptual development of AI Literacy and its educational implications, as well as the evolving roles of teachers in AI-based educational environments. This integrative approach reflects recent educational trends, curriculum changes, and the practical needs of teachers. Additionally, this study incorporates advanced machine learning techniques (XGBoost and SHAP) to analyze factors influencing teachers’ AI Literacy performance, providing a methodological innovation beyond traditional theoretical frameworks in educational research.
Section 2.1 discusses the transformation of educational environments due to AI adoption and the evolving roles of teachers, followed by a review of preceding research on AI Literacy and teacher professional development.
Section 2.2 subsequently introduces the analytical techniques employed in this study, namely XGBoost and SHAP, and explains how these methods facilitate the identification of key factors influencing teachers’ AI Literacy.

2.1. AI-Based Educational Environments and the Evolving Role of Teachers

As K-12 educational settings increasingly adopt AI technologies, the role of teachers has expanded from mere transmitters of knowledge to designers of data-driven learning experiences, evaluators, experts in AI tool utilization, and ethical decision-makers [12,39]. This section examines the transformation in AI-based educational environments, focusing on AI-driven learning analytics, AI-based assessment systems, AI educational tools, and ethical considerations in AI-based educational environments. It also discusses the implications of these changes for teachers’ AI Literacy capabilities and roles, alongside analyzing trends in AI Literacy research.

2.1.1. The Impact of AI-Based Educational Environments on Teachers’ Roles

AI-based learning analytics, which analyze vast amounts of learning data in real time to provide personalized learning experiences, allow teachers to identify and tailor educational strategies to individual student patterns [40]. For instance, Singapore’s implementation of the ‘AI Learning Analytics’ system enables customized education support by analyzing individual learning data [6]. Studies by Siemens and Baker [41] and Ferguson [42] suggest that the use of AI learning analytics can enhance student achievement by 15–20% and increase learning efficiency by over 30% when teachers apply AI data critically in lesson planning. Consequently, teachers play a crucial role in interpreting AI analysis and adjusting educational strategies. The importance of integrating AI in teaching and assessment practices has been emphasized in the revised 2022 educational curriculum in South Korea [4].
AI-based assessment systems provide automated grading and adaptive feedback, reducing the workload for teachers even in large-scale assessments [43]. However, these systems face limitations in assessing creative problem-solving skills and may produce biased outcomes based on certain linguistic styles or grammatical structures [44,45]. Efforts in China to apply and refine AI-based assessment systems in educational settings address these biases through research and policy development [7].
AI educational tools, such as AI chatbots and virtual tutors, offer instantaneous responses and support repetitive learning, but their effectiveness is not uniform across all learner groups, possibly leading to biased outcomes [46,47]. Japan’s ‘GIGA School Program’ supports customized learning through a one-device-per-student policy, guiding effective utilization of AI educational tools [5]. Teachers thus play a critical role in recognizing the limitations of AI tools and guiding students in their proper use.
Ethical considerations in AI-based educational environments address significant issues such as algorithmic bias, data privacy, and transparency in AI decision-making [38]. These ethical concerns must be considered across all aspects of AI learning analytics, AI assessment systems, and AI educational tools. The EU’s AI Act provides legal guidelines to address biases in AI educational tools, while initiatives in Saudi Arabia and the UAE focus on creating smart learning environments utilizing AI [9,10]. The AI4K12 Framework includes AI ethics and fairness as core elements, underscoring the role of teachers in ensuring transparency in AI decision-making processes and educating students about the limitations and responsibilities associated with AI technologies.

2.1.2. Research Trends in AI Literacy and Teacher Professional Development

Research on AI Literacy capabilities has focused on defining essential elements of AI Literacy for teachers and modeling these components. Smith and Lee [15] categorized AI Literacy into understanding AI concepts, data literacy, using AI tools, and ethical considerations, while Jones and Brown [16] emphasized that AI Literacy should extend beyond technical understanding to include educational applications and ethical judgments. Moreover, Schmidt and Fischer [48] noted that a lack of teacher training programs is a significant barrier to spreading AI Literacy.
Studies on developing AI Literacy training programs concentrate on effective training designs to enhance teachers’ AI Literacy skills. Chen et al. [49] reported that hands-on training effectively improves teachers’ ability to use AI tools, and Ng [50] proposed that comprehensive training programs covering AI concept learning, tool use, and educational applications are essential. Jones and Bradshaw [51] highlighted the importance of continuous practice opportunities and collaboration among teachers following training sessions. Anderson and Shattuck [52] argued that maintaining AI Literacy requires regular training, practice-based learning, collaboration among teachers, and feedback provision.
Research on applying AI Literacy in educational settings focuses on how AI technologies are practically utilized in classrooms. Chen et al. [49] observed that teachers who received hands-on training were more capable of effectively integrating AI-based educational tools into their teaching. However, existing studies tend to concentrate on practical approaches to enhancing AI Literacy capabilities, with relatively limited research on the challenges and solutions that teachers face during the implementation of AI Literacy education.
Studies on AI Literacy and teachers’ self-efficacy examine the relationship between teachers’ capacity to use AI and their educational attitudes. Self-efficacy, the belief in one’s ability to successfully complete a specific task, is closely linked to confidence in AI Literacy-related skills. Wang et al. [53] found that teachers who underwent AI training were 40% more likely to utilize AI technologies and more likely to adopt AI-based educational tools. Smith et al. [54] reported that teachers with higher self-assessment scores were more active in using AI tools, although Schmid et al. [55] cautioned that self-assessment results might not always align with actual performance capabilities. To address discrepancies between self-assessment and actual performance, Rosenberg et al. [56] developed the AI Literacy Competency Framework to objectively assess AI utilization skills and clarify the effectiveness of AI Literacy training.
As AI technology proliferates in educational environments, research on AI Literacy is evolving, with increasing focus on teachers’ AI Literacy skills. This research trend is establishing a crucial foundation for practical classroom applications, reflecting the growing importance of AI Literacy and the changing role of teachers in contemporary education.

2.2. Application of XGBoost and SHAP in Machine Learning-Based Educational Data Research

Among the various machine learning techniques used in educational data analysis, Decision Trees and Random Forests have been prominently employed. Decision Trees automatically learn the relationships between features, yet single tree models are prone to overfitting and can exhibit performance variability depending on data sampling [57]. To overcome these limitations, Random Forests combine multiple trees to enhance prediction accuracy, although interpreting the importance of features remains a challenge [58].
XGBoost (eXtreme Gradient Boosting) addresses these limitations and is widely used in the analysis of educational data. XGBoost is a Gradient Boosting-based algorithm that combines multiple decision trees to maximize predictive performance [59]. This algorithm applies weight updates and regularization techniques to prevent overfitting while maintaining high predictive accuracy and provides stable performance even in datasets with nonlinear relationships and multicollinearity [60]. Furthermore, XGBoost utilizes ensemble learning techniques to combine several weak predictive models (decision trees) into a strong predictor. This iterative improvement process corrects errors from previous trees, thereby optimizing the overall model performance. Such ensemble methods are tailored to the complexity and characteristics of the data, showing exceptional performance even on complex datasets such as those found in education.
Recent studies indicate that XGBoost outperforms traditional machine learning models, with an increase in the average coefficient of determination (R2) by 0.05–0.1 compared to Random Forests, and a reduction in the root mean square error (RMSE) by an average of 10–20% compared to regression analyses [58,60]. Additionally, the mean absolute error (MAE) has been reported to decrease by an average of 15% compared to single decision trees, enabling more precise predictions [57,58].
Performance evaluation of the XGBoost model utilizes various metrics, such as R2, mean absolute error (MAE), mean squared error (MSE), root mean square error (RMSE), and mean absolute percentage error (MAPE). R2 and MAPE are crucial indicators of predictive accuracy, with R2 values above 0.7 typically indicating a model with high explanatory power and MAPE values below 10% signifying a highly accurate model [60,61].
However, XGBoost is often considered a “black box” model, making it difficult to intuitively interpret the impact of individual features on the prediction process [60]. To address this, SHAP (Shapley Additive Explanations) analysis has been applied, which uses game theory to quantitatively assess and distribute the contributions of features [62,63,64]. SHAP analysis employs Shapley values to quantitatively evaluate the contribution of each feature, thus enhancing the transparency of the model’s predictive process. By analyzing how the inclusion or exclusion of specific features affects prediction outcomes, SHAP helps intuitively interpret the results.
SHAP enables the intuitive understanding of whether specific features increase (positive values) or decrease (negative values) the prediction and also allows for the analysis of interactions between features [65]. This approach clarifies how significant features interact and contribute to predictions in educational data. Recent studies have further discussed both the possibilities and limitations of applying explainable AI (XAI) in education. For instance, Farrow (2023) explores the socio-technical challenges of implementing XAI, arguing that transparency alone may not ensure meaningful educational outcomes [66]. Similarly, Reeder (2023) investigates how user characteristics, such as gender and educational background, influence the interpretation of XAI outputs, highlighting the need to account for user diversity in educational applications of XAI [67]. These studies suggest that, although SHAP improves model interpretability, it is essential to carefully consider its practical implications and how users in educational contexts may interpret the results.
Optimizing the performance of the XGBoost model requires tuning hyperparameters, with recent studies extensively using Bayesian Optimization techniques to efficiently find the best settings while reducing computational load compared to traditional methods like Grid Search or Random Search [63]. This tuning significantly enhances model performance and helps prevent overfitting [60].
XGBoost and SHAP analysis are broadly used in educational research, including AI Literacy education, effectiveness analysis of teacher training programs, and student achievement prediction. For instance, in AI Literacy education research, XGBoost has been used to predict students’ understanding of AI concepts, while SHAP analysis has identified key factors influencing outcomes [60]. Additionally, SHAP has been employed in policy analysis to assess the impact of specific educational policies [68].

3. Materials and Methods

This study aims to identify factors influencing K-12 teachers’ Self-Perceived Performance of AI Literacy and to evaluate the performance of the model developed. The research procedure is outlined as follows (refer to Figure 1).
The collected data were preprocessed and analyzed using Python-based Google Colab (https://colab.research.google.com/), supported by a GPU and essential programming libraries, including NumPy, Pandas, and scikit-learn (accessed on 14 February 2025; see Table 1).

3.1. Tool Development

The survey tool for this research was developed by integrating domestic and international research on AI Literacy and AI-based education, policy reports, and case analyses. To ensure the validity of the survey tool, a review by 16 experts in SW and AI education was conducted using Lawshe’s (1975) Content Validity Ratio (CVR). Items with a CVR value of 0.476 or higher were selected for the final research tool [69].
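For reference, Lawshe’s CVR for an item is computed as CVR = (n_e − N/2)/(N/2), where n_e is the number of panelists rating the item as essential and N is the panel size. A minimal illustrative sketch of this computation (using hypothetical rating counts rather than the actual expert data) is shown below:

def content_validity_ratio(n_essential, n_experts):
    # Lawshe (1975): CVR = (n_e - N/2) / (N/2)
    return (n_essential - n_experts / 2) / (n_experts / 2)

# Hypothetical example: 12 of the 16 experts rate an item as "essential"
cvr = content_validity_ratio(12, 16)  # 0.50, above the 0.476 cutoff, so the item would be retained
print(round(cvr, 3))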

Composition of Features and Targets

The composition of features and targets for this study is as follows (refer to Figure 2). Individual models were trained and evaluated to predict four targets.

3.2. Data Collection

Reflecting the geographical and educational diversity of Korea, this study selected the C area as the target region. The C area encompasses both metropolitan and rural educational environments, including schools located in urban areas and those in agricultural and mountainous regions. According to the Ministry of the Interior and Safety (2024), approximately 55% of Korea’s administrative divisions are classified as urban areas (si) and about 40% as rural areas (gun), although more than 90% of the population resides in urban areas. Similarly, in C area, as of 1 April 2024, a total of 808 schools (kindergartens, elementary, middle, and high schools) were operating, with approximately 60% located in urban areas and 40% in rural and mountainous areas [70,71,72]. This distribution reflects the overall geographical and educational structure of Korea, making the C area a suitable and representative region for investigating teachers’ AI Literacy.
Although the survey was not conducted nationwide, the selection of the C area is significant in that the region structurally mirrors the geographical and educational characteristics of Korea. Therefore, the findings of this study are expected to provide meaningful insights applicable to the broader Korean educational context. While the study was conducted within a single province, the structural alignment between the C area and the national educational landscape mitigates concerns about the generalizability of the findings.
A census survey was conducted among all primary and secondary school teachers in the C area from 16 June to 30 June 2024 using an online questionnaire. Out of 1325 participating teachers, responses from 1172 teachers were considered valid after excluding 152 incomplete or unclear responses. The characteristics of the respondents are detailed in Table 2.

3.3. Data Preprocessing

Data preprocessing was performed to ensure data quality and enhance the reliability of model training, involving missing data handling, data normalization, and data splitting.
Missing values were addressed using median imputation to prevent data loss and to limit the influence of outliers [61].
X_i' = \begin{cases} X_i, & \text{if } X_i \neq \mathrm{NaN} \\ \operatorname{median}(X), & \text{if } X_i = \mathrm{NaN} \end{cases}
Data normalization was achieved using StandardScaler to adjust scale differences among features, thus preventing overfitting and improving training speed.
X' = \frac{X - \mu}{\sigma}
X′: normalized data, X: original data, μ: mean, σ: standard deviation.
The data were split into training data (80%) and test data (20%), and five-fold cross-validation was applied to validate the generalization performance of the model.
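The following minimal sketch illustrates these preprocessing steps in Python; the file name and column names are placeholders rather than the actual survey items:

import pandas as pd
from sklearn.model_selection import KFold, train_test_split
from sklearn.preprocessing import StandardScaler

# Placeholder file and column names; the real survey items differ.
df = pd.read_csv("teacher_ai_literacy_survey.csv")
feature_cols = [c for c in df.columns if c.startswith("importance_") or c.startswith("perceived_need_")]
target_col = "self_perceived_level_class_preparation"

# Median imputation for missing values
df[feature_cols] = df[feature_cols].fillna(df[feature_cols].median())

# Standardization: X' = (X - mu) / sigma
X = StandardScaler().fit_transform(df[feature_cols])
y = df[target_col].to_numpy()

# 80:20 train/test split and a five-fold cross-validation splitter
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
cv = KFold(n_splits=5, shuffle=True, random_state=42)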

3.4. Model Training and Performance Evaluation

A Pearson correlation matrix, including features and targets, was calculated to analyze linear relationships, serving as a basis for training the XGBoost model and conducting SHAP analysis. The correlation analysis results were visually represented through heatmaps, and the strength of relationships between features and targets was assessed based on Pearson correlation coefficients. Bayesian Optimization was applied to optimize the XGBoost model’s performance, using metrics such as R2, MSE, MAE, and RMSE for evaluation. SHAP analysis was performed to identify key features contributing to the predictions and to analyze interactions among features.
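Continuing the preprocessing sketch in Section 3.3 (with the same placeholder column names), the correlation matrix and heatmap can be produced as follows:

import matplotlib.pyplot as plt
import seaborn as sns

# Pearson correlation matrix over the original (unscaled) features and targets
corr = df[feature_cols + [target_col]].corr(method="pearson")

plt.figure(figsize=(10, 8))
sns.heatmap(corr, annot=True, fmt=".2f", cmap="coolwarm", vmin=-1, vmax=1)
plt.title("Pearson correlation among features and targets")
plt.tight_layout()
plt.show()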

3.4.1. Hyperparameter Tuning for XGBoost Optimization

In this study, Bayesian Optimization was utilized to efficiently search for optimal hyperparameters such as learning_rate, max_depth, subsample, colsample_bytree, and n_estimators, minimizing computational costs compared to traditional Grid Search and Random Search methods. Initial exploration was conducted at 5 points (init_points = 5), followed by 15 optimization iterations (n_iter = 15), for a total of 20 Bayesian Optimization evaluations to derive the best hyperparameter combination.
\theta^{*} = \arg\min_{\theta \in \Theta} f(\theta)
Here, θ represents the hyperparameter vector, Θ denotes the hyperparameter search space, and f(θ) is a function that describes the performance of the model based on the hyperparameters.
Through this process, the optimal combination of hyperparameters was determined.
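A sketch of this search is given below, assuming the open-source bayesian-optimization package and five-fold cross-validated R2 as the objective; the search bounds are illustrative, and X_train, y_train, and cv come from the preprocessing sketch in Section 3.3:

from bayes_opt import BayesianOptimization  # pip install bayesian-optimization
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor

def xgb_cv_r2(learning_rate, max_depth, subsample, colsample_bytree, n_estimators):
    # Mean five-fold cross-validated R^2 for one hyperparameter combination
    model = XGBRegressor(
        learning_rate=learning_rate,
        max_depth=int(max_depth),  # continuous proposals are rounded to integers
        subsample=subsample,
        colsample_bytree=colsample_bytree,
        n_estimators=int(n_estimators),
        objective="reg:squarederror",
        random_state=42,
    )
    return cross_val_score(model, X_train, y_train, cv=cv, scoring="r2").mean()

pbounds = {  # illustrative search ranges
    "learning_rate": (0.01, 0.3),
    "max_depth": (3, 10),
    "subsample": (0.7, 0.9),
    "colsample_bytree": (0.5, 0.9),
    "n_estimators": (100, 400),
}

optimizer = BayesianOptimization(f=xgb_cv_r2, pbounds=pbounds, random_state=42)
optimizer.maximize(init_points=5, n_iter=15)  # 5 initial points + 15 iterations = 20 runs
best_params = optimizer.max["params"]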

3.4.2. Performance Metrics for Model Evaluation

The model performance was evaluated based on the coefficient of determination (R2), Mean Squared Error (MSE), Mean Absolute Error (MAE), and Root Mean Squared Error (RMSE).
The coefficient of determination (R2) indicates the extent to which variability in the data is explained, with values above 0.7 considered indicative of good performance [53].
R^2 = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}
y_i: the actual value, \hat{y}_i: the predicted value, \bar{y}: the mean of the actual values, n: the number of observations.
Mean Squared Error (MSE) is calculated by averaging the squares of the differences between predicted and actual values, with smaller values indicating higher prediction accuracy. A value below 25 is considered to represent good performance [57].
\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2
Mean Absolute Error (MAE) averages the absolute differences between predicted and actual values, directly showing the discrepancy between prediction and reality. Generally, a MAE less than 3 is interpreted as stable predictive performance [73].
\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| y_i - \hat{y}_i \right|
Root Mean Squared Error (RMSE) is the square root of MSE, providing a more intuitive representation of error magnitude. A value below 5 indicates excellent model performance [65].
\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2}
Based on these metrics, the predictive performance of the model was quantitatively analyzed, and its generalizability was objectively validated.
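The sketch below, continuing from the optimization step above, refits the model with the selected hyperparameters and computes these metrics on the held-out test set:

import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from xgboost import XGBRegressor

# best_params comes from the Bayesian Optimization sketch in Section 3.4.1
final_model = XGBRegressor(
    learning_rate=best_params["learning_rate"],
    max_depth=int(best_params["max_depth"]),
    subsample=best_params["subsample"],
    colsample_bytree=best_params["colsample_bytree"],
    n_estimators=int(best_params["n_estimators"]),
    objective="reg:squarederror",
    random_state=42,
)
final_model.fit(X_train, y_train)
y_pred = final_model.predict(X_test)

r2 = r2_score(y_test, y_pred)
mse = mean_squared_error(y_test, y_pred)
mae = mean_absolute_error(y_test, y_pred)
rmse = np.sqrt(mse)
print(f"R2 = {r2:.4f}, MSE = {mse:.4f}, MAE = {mae:.4f}, RMSE = {rmse:.4f}")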

3.4.3. XGBoost-Based Prediction of AI Literacy and SHAP Analysis

The study employed an XGBoost regression model to predict the Self-Perceived Performance of AI Literacy among teachers. This model leverages the Gradient Boosting technique, which enhances predictions by iteratively minimizing errors from previous predictions. Here is a simplified explanation:
The prediction update at iteration t is calculated by adding an improvement f_t(x_i) to the previous prediction:
\hat{y}_i^{(t)} = \hat{y}_i^{(t-1)} + f_t(x_i)
The final prediction \hat{y}_i after K iterations is the sum of all improvements:
\hat{y}_i = \sum_{k=1}^{K} f_k(x_i), \quad f_k \in \mathcal{F}
The study further dissects the learning principles by examining the objective function, which includes both the prediction error and a regularization term to control model complexity:
\mathcal{L}(\theta) = \sum_{i=1}^{n} l(y_i, \hat{y}_i) + \sum_{k=1}^{K} \Omega(f_k)
where l(y_i, \hat{y}_i) represents the loss function. The optimization of the model involves approximating the loss function using its second-order Taylor expansion:
\mathcal{L}^{(t)} \simeq \sum_{i=1}^{n} \left[ g_i f_t(x_i) + \tfrac{1}{2} h_i f_t^{2}(x_i) \right] + \Omega(f_t)
where g_i and h_i denote the first- and second-order gradients of the loss function with respect to the prediction from the previous iteration.
To explain the predictions made by our model, we chose SHAP (Shapley Additive Explanations), a powerful tool particularly suited for tree-based models like XGBoost. SHAP values offer a detailed breakdown of feature contributions, enhancing interpretability. The formula to compute the Shapley value for a feature j is:
\phi_j = \sum_{S \subseteq \{1, \ldots, n\} \setminus \{j\}} \frac{|S|! \,(n - |S| - 1)!}{n!} \left[ v(S \cup \{j\}) - v(S) \right]
where S is a subset of features and v(S) denotes the model output obtained using only the features in S.
Although various explainability techniques, such as LIME and PDP, are available, SHAP analysis was adopted because it is particularly well-suited to tree-based ensemble models like XGBoost. SHAP can efficiently compute feature contributions by leveraging the model structure, enabling both global and local interpretation without approximation errors [64,74,75].
After model training, SHAP analysis was conducted to assess feature importance, utilizing Shapley Values to quantitatively evaluate each feature’s contribution and enable interpretation that considers interactions between features.
The study also utilized a SHAP Summary Plot to compare the relative impacts of major features across the dataset and created a Feature Importance Plot to analyze the contributions of features learned by the XGBoost model.
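A sketch of this SHAP workflow for one trained model is shown below, continuing from the fitted model in Section 3.4.2; the feature names are the placeholder columns from Section 3.3:

import shap

# TreeExplainer exploits the tree structure of the fitted XGBoost model
explainer = shap.TreeExplainer(final_model)
shap_values = explainer.shap_values(X_test)

# Global interpretation: per-feature SHAP value distributions and mean |SHAP| ranking
shap.summary_plot(shap_values, X_test, feature_names=feature_cols)
shap.summary_plot(shap_values, X_test, feature_names=feature_cols, plot_type="bar")

# XGBoost's own (gain-based) feature importances, used for the comparative review in Section 4.2
xgb_importance = dict(zip(feature_cols, final_model.feature_importances_))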

4. Results

This study aimed to construct a model that explains the Self-Perceived Level of AI Literacy for Class Preparation Phase execution and to identify key features that influence the model. Pearson Correlation Analysis was employed to examine the relationships among 12 features and four targets used in the regression model and their impact [Figure 3].
The analysis of relationships among features showed that the correlation between the Importance of AI Literacy in the Class Preparation Phase and the Importance of AI Literacy in the Class Implementation Phase was the highest, with an r-value of 0.85, indicating a statistically significant correlation at the 0.001 level. This suggests that teachers who perceive a high Importance of AI Literacy in the Class Preparation Phase also perceive a high Importance of AI Literacy in the Class Implementation Phase. Following this, the relationship between the Importance of AI Literacy in the Class Preparation Phase and the Importance of AI Literacy in the Class Assessment Phase had an r-value of 0.84, and between the Importance of AI Literacy in the Class Implementation Phase and the Importance of AI Literacy in the Class Assessment Phase, the r-value was 0.79, both statistically significant.
In the correlation analysis among AI Literacy-related features, the relationship between the Importance of Knowledge in AI Literacy and the Importance of Skills in AI Literacy showed the highest coefficient with an r-value of 0.76, significant at the 0.001 level. Thus, teachers who recognize the high Importance of Knowledge in AI Literacy also perceive a high Importance of Skills in AI Literacy, and vice versa.
The correlation analysis among targets revealed that the correlation between the Self-Perceived Level of AI Literacy for Class Preparation Phase execution and the Self-Perceived Level of AI Literacy for Class Implementation Phase execution was the highest, with an r-value of 0.85 and statistical significance at the 0.001 level. This indicates that teachers with a high self-perceived level of AI Literacy in the preparation phase also exhibit high performance in the implementation phase. The relationship between the Self-Perceived Level of AI Literacy for Class Assessment Phase execution and the Self-Perceived Level of AI Literacy for Class Application of Assessment Results Phase execution also showed a high correlation (r = 0.81).

4.1. Evaluation of Predictive Performance

To optimize the performance of the XGBoost regression model, a combination of hyperparameters was determined, and the model was trained. By applying Bayesian Optimization, the most effective hyperparameter combination within the search space was identified (see Section 4.1.1), after which the model was trained and its performance evaluated (see Section 4.1.2).

4.1.1. Hyperparameter Optimization Using Bayesian Optimization

The optimized hyperparameter settings and their corresponding R2 values are summarized in Table 3.
During iteration 6, the R2 value was the highest at 0.7422, with the max_depth set at 9.961 and the learning rate at 0.1349. In iteration 9, the max_depth was set at 9.859 and the learning rate at 0.1908, achieving an R2 value of 0.74. For iteration 7, the max_depth was 9.562, the learning rate was 0.1003, and the R2 value was 0.7356.
In iteration 18, the max_depth was significantly lower at 3.174, resulting in a decreased R2 value of 0.5116. Iteration 17 featured a max_depth of 9.8 and a learning rate of 0.05436, with an R2 value of 0.7289. In iteration 19, the max_depth was set at 5.263 and the learning rate at 0.1361, yielding an R2 value of 0.6658.
The colsample_bytree values were set within a range of 0.5 to 0.9, while the subsample was optimized within a range of 0.7 to 0.9. The n_estimators values ranged from 140 to 320, and it was observed that using more than 200 trees tended to maintain an R2 value above 0.74.
The derived hyperparameter values are presented in Table 4.
To regulate the learning speed and convergence, the learning rate was set at 0.1349, and the max_depth was adjusted to 9. Additionally, the subsample value was set at 0.8033 and the colsample_bytree at 0.5895, so that each tree was trained on only a portion of the training samples and features.
Analysis of the model performance changes due to hyperparameter settings indicated an enhancement in the predictive performance of the XGBoost model. A stable R2 value was maintained when the max_depth was set to 9 or higher. Adjusting the subsample and colsample_bytree values enabled the learning of diverse data patterns. This adjustment of optimized hyperparameters has been confirmed to contribute to the improvement of the model’s performance.

4.1.2. Model Performance Evaluation with Optimized Hyperparameters

The optimized hyperparameters were applied to train and evaluate the model across four targets. The predictive performance of the model was assessed using the coefficient of determination (R2), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and Mean Absolute Error (MAE), with the results detailed as follows [Table 5].
In the Class Preparation Phase, the R2 score was 0.8206, indicating that the XGBoost model explained 82.06% of the variance in teachers’ self-perceived AI Literacy performance in this phase. Additionally, the prediction errors were relatively low, with an MSE of 0.1235, an RMSE of 0.3514, and an MAE of 0.1430.
In the Class Implementation Phase, the R2 score was 0.8007, indicating an explanatory power of 80.07% for AI Literacy. The MSE was recorded at 0.1337, the RMSE at 0.3656, and the MAE at 0.1372, showing stable prediction performance.
In the Class Assessment Phase, the R2 score was 0.8066, with MSE at 0.1534, RMSE at 0.3917, and MAE at 0.1561, demonstrating the model’s stable predictive performance.
In the final phase, the Class Application of Assessment Results Phase, the R2 score was 0.7746, with MSE at 0.1616, RMSE at 0.4019, and MAE at 0.1581. This indicates that the model’s predictive performance was consistently maintained.
Overall, the XGBoost model recorded R2 values above 0.7 for all targets, and error indicators (MSE, RMSE, MAE) were maintained within a consistent range. Based on these performance evaluation results, the model’s predictive performance can be considered reliably robust.

4.2. Feature Importance Analysis Using XGBoost and Comparative Review with SHAP

The results of deriving the main features affecting the Self-Perceived Level of AI Literacy for each phase of execution in the Class Preparation Phase (4.2.1), Class Implementation Phase (4.2.2), Class Assessment Phase (4.2.3), and Class Application of Assessment Results Phase (4.2.4) are as follows:

4.2.1. Feature Importance Analysis for AI Literacy in the Class Preparation Phase

Feature Importance Analysis for AI Literacy in the Class Preparation Phase is detailed in [Table 6, Figure 4].
The regression analysis with XGBoost recorded the highest Feature Importance value for the “Importance of AI Literacy in Class Preparation Phase” feature (0.2488). This was followed by “Importance of AI Literacy in Class Utilizing Assessment Results Phase” (0.1208), “Importance of Teaching Methods in AI Literacy for AI-Integrated Education” (0.1109), “Importance of AI Literacy in Class Assessment Phase” (0.1009), and “Importance of AI Literacy in Class Implementation Phase” (0.0650). Comparative analysis with SHAP indicated that some features, such as “Importance of AI Literacy in Class Utilizing Assessment Results Phase” (XGBoost: 0.1208, SHAP: 0.0679) and “Importance of AI Literacy in Class Implementation Phase” (XGBoost: 0.0650, SHAP: 0.0521), showed lower mean absolute values in SHAP than their XGBoost counterparts. Conversely, features like “Importance of Skills in AI Literacy for AI-Integrated Education” (XGBoost: 0.0610, SHAP: 0.0918) and “Importance of Skills in AI Literacy” (XGBoost: 0.0547, SHAP: 0.0837) displayed higher mean absolute values in SHAP analysis, suggesting discrepancies in how the models assess feature contributions.
The SHAP findings provide further insight into these discrepancies by elucidating why certain variables carry higher or lower importance. Features with substantially lower SHAP influence relative to their XGBoost importance (for example, the perceived importance of AI Literacy in the results-utilization phase) likely have more context-dependent effects, contributing less uniformly across all teachers. In contrast, features that exhibited higher SHAP values than expected from the XGBoost ranking (such as teachers’ emphasis on AI Literacy skills) seem to exert their influence mainly under particular conditions, which the tree-based metric may understate. Additionally, SHAP dependence plots highlight interaction effects between key factors. For instance, the positive impact of a teacher’s AI Literacy skills on predicted class preparation performance is markedly amplified when that teacher also assigns a high importance to integrating AI in the preparation phase; this synergy appears in SHAP plots as significantly larger SHAP values for the skills feature when both features are high. Similarly, teachers who rate both their own and their students’ need for AI Literacy learning as high tend to show amplified SHAP contributions for these need-related features, implying that a broad awareness of AI Literacy needs (encompassing both oneself and one’s students) can reinforce the influence of other variables on preparation-phase performance. These interaction patterns underscore that a teacher’s commitment to AI integration and recognition of learning needs to work synergistically with their skillset, ultimately shaping their effectiveness in the Class Preparation Phase.
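As an illustration, interaction patterns of this kind can be inspected with a SHAP dependence plot, coloring one feature by a candidate interacting feature; the item names below are placeholders for the survey variables discussed above:

import shap

# Dependence of the prediction on one feature, colored by a second (interacting) feature
shap.dependence_plot(
    "importance_skills_ai_literacy",  # placeholder item name
    shap_values,
    X_test,
    feature_names=feature_cols,
    interaction_index="importance_ai_literacy_class_preparation",  # placeholder item name
)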

4.2.2. Feature Importance Analysis for AI Literacy in Class Implementation Phase

Feature Importance Analysis for AI Literacy in the Class Implementation Phase is presented in [Table 7, Figure 5].
In this analysis, “Importance of AI Literacy in Class Implementation Phase” recorded the highest Feature Importance value (0.2588). This was followed by “Importance of Teaching Methods in AI Literacy for AI-Integrated Education” (0.1266), “Importance of AI Literacy in Class Preparation Phase” (0.1185), “Importance of Knowledge in AI Literacy for AI-Integrated Education” (0.0699), and “Importance of AI Literacy in Class Assessment Phase” (0.0691). Comparative analysis with SHAP revealed that features like “Importance of AI Literacy in Class Application of Assessment Results Phase” (XGBoost: 0.0679, SHAP: 0.0527) and “Importance of Teaching Methods in AI Literacy for AI-Integrated Education” (XGBoost: 0.1266, SHAP: 0.0905) had lower mean absolute values in SHAP, whereas “Importance of Skills in AI Literacy” (XGBoost: 0.0611, SHAP: 0.0851) and “Perceived Need for AI Literacy Learning Level for Students” (XGBoost: 0.0526, SHAP: 0.0780) showed higher mean absolute values, highlighting significant variations in the valuation of these features between the two analyses.
Expanding on these results, the SHAP analysis clarifies why certain features diverged in importance. For example, the “Importance of Teaching Methods in AI Literacy for AI-Integrated Education” was a prominent predictor in XGBoost but exhibited a much smaller average impact in SHAP, suggesting that its influence on class implementation outcomes is contingent on other factors (possibly overlapping with teachers’ knowledge or phase-specific priorities). Conversely, features such as teachers’ emphasis on AI Literacy skills and their perception of students’ AI Literacy needs, while ranked lower by XGBoost, showed elevated SHAP contributions. This indicates that when a teacher strongly prioritizes practical AI skills or perceives a high student need for AI literacy, it leads to substantial changes in their implementation-phase performance—an effect that the global XGBoost metric may under-represent. SHAP dependence plots further reveal that these factors do not operate in isolation but interact. Notably, the positive effect of a teacher’s belief in the importance of AI Literacy during the assessment phase (a top XGBoost feature for class implementation) is significantly amplified if the teacher also possesses strong AI Literacy skills. In the SHAP dependence plot, teachers who highly value AI in assessment and simultaneously report high skill levels attain much larger SHAP values for these features, implying that technical competencies enable them to act on their beliefs during class implementation. Another interaction is observed between pedagogical orientation and perceived needs: the contribution of adopting AI-integrated teaching methods to implementation success is more pronounced for teachers who also recognize a substantial need for AI Literacy among their students. In practice, educators who tailor their teaching methods for AI and are keenly aware of their students’ AI Literacy requirements tend to leverage AI more effectively in the classroom (reflected by higher combined SHAP effects). These interactions suggest that to excel in the Class Implementation Phase, teachers benefit from a combination of strong technical skills, adaptive teaching strategies, and a clear perception of AI Literacy needs.

4.2.3. Feature Importance Analysis for AI Literacy in Class Assessment Phase

The Feature Importance Analysis for AI Literacy in the Class Assessment Phase is detailed in [Table 8, Figure 6]. In the regression analysis with XGBoost, the feature “Importance of AI Literacy in Class Assessment Phase” recorded the highest Feature Importance value (0.1973). Subsequent features include “Importance of AI Literacy in Class Preparation Phase” (0.1553), “Importance of Teaching Methods in AI Literacy for AI-Integrated Education” (0.1333), “Importance of Skills in AI Literacy for AI-Integrated Education” (0.0733), and “Importance of AI Literacy in Class Implementation Phase” (0.0638).
Comparative analysis with SHAP showed that some features had higher importance in XGBoost but lower mean absolute values in SHAP. Specifically, the feature “Importance of AI Literacy in Class Application of Assessment Results Phase” recorded lower mean values in SHAP (XGBoost: 0.0624, SHAP: 0.0630). Conversely, features such as “Importance of Skills in AI Literacy for AI-Integrated Education” (XGBoost: 0.0733, SHAP: 0.1005), “Perceived Need for AI Literacy Learning Level for Students” (XGBoost: 0.0569, SHAP: 0.0841), and “Perceived Need for AI Literacy Learning Level for Teachers” (XGBoost: 0.0416, SHAP: 0.0745) displayed higher mean absolute values in SHAP analysis, indicating a discrepancy between the models in assessing feature contributions.
The deeper examination of SHAP values helps explain why these discrepancies occur and illuminates interaction effects among features. In this phase, factors like teachers’ technical skill orientation and perceived AI literacy needs turned out to have a greater influence than their XGBoost ranks suggested. For instance, the importance of AI literacy skills (for AI-integrated education) and the perceived need for AI literacy training (for both students and teachers) show higher SHAP impact, implying that teachers who focus on practical AI skills or who are acutely aware of the need for AI literacy tend to perform differently in assessment tasks. These features likely drive significant outcome differences for certain subsets of teachers (e.g., those with very high or very low emphasis on skills and needs), which may be why SHAP attributes them more significance than the averaged XGBoost importance. In contrast, the value a teacher places on using AI in the results-utilization phase—while recognized by XGBoost—has a less uniform effect (lower average SHAP value), perhaps because its influence is only realized in conjunction with other competencies. SHAP dependence plots indeed indicate that the impact of perceived AI literacy needs can be interdependent: the contribution of a high perceived student AI literacy need to the class assessment outcome is amplified when the teacher also perceives a high need for their own AI literacy development. This suggests that a teacher who is broadly cognizant of AI literacy gaps (in both their students and themselves) may approach assessment tasks with either heightened diligence or caution, affecting performance accordingly. Moreover, an interaction between technical and pedagogical factors is evident: the benefit of strong AI literacy skills on assessment-phase performance is most pronounced when coupled with an emphasis on AI-integrated teaching methods. Teachers who pair a high level of AI technical skill with robust AI-driven pedagogical strategies achieve the greatest improvements in applying AI during assessments (reflected in higher SHAP values when both attributes are high), whereas focusing on one without the other yields more limited gains. These insights reinforce that successful AI integration in the Class Assessment Phase hinges on a combination of technical proficiency, awareness of needs, and pedagogical adaptability.

4.2.4. Feature Importance Analysis for AI Literacy for Class Application of Assessment Results Phase

The Feature Importance Analysis for the Class Application of Assessment Results Phase is as follows [see Table 9, Figure 7].
The regression analysis with XGBoost indicates that the feature “Importance of AI Literacy in Class Application of Assessment Results Phase” recorded the highest Feature Importance value (0.2117). This was followed by “Importance of Teaching Methods in AI Literacy for AI-Integrated Education” (0.1868), “Importance of AI Literacy in Class Preparation Phase” (0.0899), “Importance of Skills in AI Literacy for AI-Integrated Education” (0.0788), and “Importance of Skills in AI Literacy” (0.0724).
In comparison with SHAP analysis, certain features, such as “Importance of AI Literacy in Class Assessment Phase”, displayed lower mean absolute values in SHAP (XGBoost: 0.0600, SHAP: 0.0462). Conversely, features like “Importance of Skills in AI Literacy for AI-Integrated Education” (XGBoost: 0.0788, SHAP: 0.0959), “Importance of Skills in AI Literacy” (XGBoost: 0.0724, SHAP: 0.0972), and “Perceived Need for AI Literacy Learning Level for Teachers” (XGBoost: 0.0395, SHAP: 0.0612) showed higher mean absolute values in SHAP analysis, suggesting a variance in how the models assess contributions of features.
By delving into the SHAP outcomes, we can interpret why some variables assume greater or lesser importance in this phase and identify how they interact. The notably lower SHAP contributions for the top XGBoost features (e.g., the importance placed on AI Literacy in the results phase and on AI-integrated teaching methods) imply that these factors, while critical, do not uniformly translate into performance gains unless certain underlying conditions are met. In other words, a teacher’s stated priority on using AI in applying assessment results or on innovative teaching methods must be backed by other capacities to fully impact their practice. Supporting this, several foundational competencies and perspectives—such as AI Literacy skills and the teacher’s self-identified need for AI learning—exhibited higher SHAP influence than their XGBoost rankings would suggest. This highlights that a teacher’s ability to effectively utilize AI when applying assessment results is strongly driven by their actual skill level and awareness of their own learning needs, which can outweigh the influence of simply valuing AI use in that phase. Furthermore, SHAP dependence plots point to key interactions: the benefit of valuing AI in the results-application phase is significantly amplified for teachers with high technical proficiency. Teachers who both consider AI important for using assessment results and possess strong AI Literacy skills achieve much greater predicted performance in this phase (manifested as higher SHAP values for the combination of these features), whereas those lacking in skills gain little from merely holding that belief. Another interaction is observed between pedagogical approach and self-reflection: the positive effect of emphasizing AI-integrated teaching methods on utilizing assessment results is stronger when a teacher also recognizes a high personal need for AI Literacy development. This suggests that educators who are both methodologically innovative and attuned to improving their own AI capabilities can more effectively translate student assessment data into actionable insights using AI. In sum, the SHAP analysis for the results-utilization phase reveals that successful integration of AI in post-assessment activities depends not only on acknowledging the importance of AI and adopting new methods but also on having the requisite skills and a growth-oriented mindset to act on those priorities.

5. Discussion

This study organizes the discussion around four key aspects necessary for cultivating teachers’ AI Literacy: comprehensive enhancement of knowledge, skills, teaching methods, and value judgments; differentiated support across teaching phases; integration of educational decision-making and ethical judgments; and concrete strategy support based on a data support system.
Firstly, the cultivation of AI Literacy in teachers should not be limited to knowledge and skills alone but should also encompass teaching methods, values, and attitudes. AI Literacy education must focus on equipping teachers with sufficient knowledge about AI and its practical applications and, importantly, on developing teaching methods that effectively integrate AI within educational contexts. This study evaluated how teachers perceive and perform AI Literacy, revealing a significant correlation (r = 0.76) between the Importance of Knowledge in AI Literacy and the Importance of Skills in AI Literacy. This suggests that teachers who value theoretical knowledge also appreciate the importance of practical skills [60]. Therefore, AI Literacy education should support teachers in incorporating AI ethically and appropriately into education, which requires training that includes practical exercises and case studies to apply these concepts in real classroom settings [60].
Secondly, supporting the effective use of AI Literacy requires differentiated support and integration tailored to each teaching phase. The performance analysis of the XGBoost models applied in this study showed that the application of AI Literacy varies across teaching phases, with particularly high predictive performance in the Class Preparation Phase and Class Implementation Phase (R2 = 0.8206 and R2 = 0.8007, respectively; see Table 5), suggesting that AI Literacy manifests clearly and specifically in these phases [76]. Conversely, the Class Application of Assessment Results Phase showed a lower R2 value (0.7746) and higher error metrics (MSE = 0.1616, RMSE = 0.4019, MAE = 0.1581), indicating that this phase involves more complex elements. Thus, teacher training should provide hands-on practice in utilizing AI technologies during the preparation and implementation phases, while offering additional courses that develop skills in data interpretation, ethical considerations, and educational decision-making during the assessment and results utilization phases [77].
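A minimal sketch, assuming a fitted phase-specific model and the held-out 20% split (`X_test`, `y_test`), of how the R2, MSE, RMSE, and MAE figures cited here can be computed with scikit-learn.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# Assumes `model`, `X_test`, and `y_test` come from the 80:20 split
# for one of the four phase-specific target variables.
y_pred = model.predict(X_test)

r2 = r2_score(y_test, y_pred)
mse = mean_squared_error(y_test, y_pred)
rmse = np.sqrt(mse)
mae = mean_absolute_error(y_test, y_pred)

print(f"R2={r2:.4f}  MSE={mse:.4f}  RMSE={rmse:.4f}  MAE={mae:.4f}")
```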
Thirdly, the assessment and results utilization phase should support AI Literacy that includes educational decision-making and ethical judgments. Analysis of self-assessed AI Literacy across teaching phases revealed high levels of self-assessed AI Literacy in the Class Preparation Phase and Class Implementation Phase (R2 = 0.85, 0.83), with both XGBoost and SHAP analysis confirming the significant role of AI Literacy in these phases (R2 = 0.82, SHAP value = 0.25). This indicates that teachers recognize the importance of AI Literacy and are effectively using it in the preparation and implementation of lessons. However, the roles of AI Literacy extend beyond mere technical application in the Class Assessment Phase and Utilizing Assessment Results Phase. While XGBoost rated the necessity of AI Literacy highly in these phases (R2 = 0.76), the SHAP analysis showed comparatively lower influence (SHAP value = 0.15), suggesting that these phases involve a complex process that goes beyond simple technical analysis and includes providing individualized feedback to students and making educational decisions. Teachers must understand and clearly explain the processes involved in deriving AI evaluation results, ensure algorithm transparency and data accuracy, and prioritize student privacy. Moreover, teachers must use AI technology to assess student achievement and apply results in a way that considers the overall learning context and individual characteristics of students, making ethical decisions accordingly. Ultimately, teachers play a crucial role in educating students to critically assess and effectively utilize AI technology, ensuring that the use of AI in evaluation and results application phases is both educational and ethical [78].
Fourthly, to systematically strengthen AI Literacy education, a data support system grounded in a methodologically robust research approach should be established. AI Literacy research needs to comprehensively analyze teachers’ perceptions of AI Literacy and their self-assessed AI Literacy competencies in the teaching process using predictive models and explainable AI techniques. This study used XGBoost and SHAP analysis to quantitatively assess the impact of AI Literacy-related characteristics on the target variables and to provide intuitive interpretations of the predictions. It clearly defined features and targets, constructed four independent models, and compared how AI Literacy manifests across the different teaching phases, from lesson design to execution. Based on these results, the study derived practical implications for educational policy and teacher training programs, providing a foundation for policy support that enables teachers to genuinely cultivate AI Literacy [79].
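A condensed sketch of the modelling pipeline described above: one XGBoost regressor per teaching phase, trained on an 80:20 split and summarised with mean absolute SHAP values. The DataFrame `df`, the list `feature_cols`, and the target column names are illustrative assumptions, while the hyperparameters follow Table 4.

```python
import numpy as np
import shap
import xgboost as xgb
from sklearn.model_selection import train_test_split

# Illustrative target names for the four phase-specific models;
# `df` (preprocessed responses) and `feature_cols` are assumed to exist.
targets = [
    "self_perceived_level_class_preparation",
    "self_perceived_level_class_implementation",
    "self_perceived_level_class_assessment",
    "self_perceived_level_assessment_results_application",
]

models, shap_importance = {}, {}
for target in targets:
    X_train, X_test, y_train, y_test = train_test_split(
        df[feature_cols], df[target], test_size=0.2, random_state=42
    )
    model = xgb.XGBRegressor(
        learning_rate=0.1349, max_depth=9, subsample=0.8033,
        colsample_bytree=0.5895, n_estimators=295,
    )
    model.fit(X_train, y_train)
    models[target] = model

    # Global importance: mean absolute SHAP value per feature on the test set.
    shap_values = shap.TreeExplainer(model).shap_values(X_test)
    shap_importance[target] = np.abs(shap_values).mean(axis=0)
```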
Existing studies on AI Literacy have primarily focused on conceptual discussions and the development of teacher training programs [15,16]. However, few empirical studies have quantitatively analyzed how teachers’ perceptions and self-assessed AI Literacy competencies manifest in actual teaching processes using machine learning techniques. By applying XGBoost and SHAP analysis, this study addresses this gap and offers a novel methodological approach to understanding teachers’ AI Literacy performance.
The results of this study confirm that teachers’ cognitive awareness and skill recognition regarding AI Literacy significantly influence their AI Literacy performance across different teaching phases. This finding aligns with the results of Johnson (2021) [80] and Miller and Brown (2020) [81], who emphasized the importance of both knowledge and practical skills in AI Literacy. However, this study extends prior research by providing empirical evidence of how these factors interact and differ across teaching phases, thus offering a more nuanced understanding of the dynamics of AI Literacy.
From a practical perspective, this study provides actionable implications for teacher training and educational policy development. By clarifying which factors have a significant impact on teachers’ AI Literacy performance at each instructional phase, the study offers guidance on where targeted support and training would be most effective. This allows for the design of differentiated training programs that can enhance teachers’ AI Literacy performance efficiently, especially addressing gaps in teachers’ self-assessment and confidence.
Overall, this study fills an important research gap by providing empirical evidence on the manifestation of AI Literacy across the teaching process and by offering practical recommendations to strengthen teachers’ AI Literacy in educational settings. Reflecting the geographical and educational diversity of Korea, this study selected the C area as the target region, encompassing both metropolitan and rural educational environments. This choice is significant as the C area structurally mirrors the diverse geographical and educational characteristics of Korea, thus enhancing the representativeness and applicability of the findings to the broader Korean educational context. A census survey was conducted among all primary and secondary school teachers in the C area, using an online questionnaire to gauge AI Literacy. The structural alignment between the C area and the national educational landscape, along with the high response rate and the comprehensive analysis, mitigates concerns about the generalizability of the findings. These factors allow this study to provide meaningful insights that are expected to be applicable across various educational settings in Korea, thereby suggesting the potential for broader policy applications and further research that could encompass more diverse and inclusive samples from different regions.

6. Conclusions

This research is significant in exploring effective ways to utilize AI in the educational field, focusing on the importance of AI Literacy in K-12 education and the evolving roles of teachers amid the innovative changes brought by digital technologies and AI in education. The ability of teachers to optimize individual learning paths and enhance educational outcomes critically depends on their AI Literacy capabilities. By empirically analyzing various factors that influence teachers’ AI Literacy performance using XGBoost and SHAP analysis, this study confirmed that AI Literacy plays a vital role in all stages of education—preparation, execution, assessment, and the application of assessment results. This underscores the need for AI Literacy training to go beyond mere technical understanding to develop teaching methods that effectively apply AI within an educational context. Furthermore, the findings highlight the necessity of structured AI Literacy programs tailored to educators, emphasizing not only technical proficiency but also pedagogical strategies and ethical considerations in AI-integrated classrooms. AI Literacy should be considered a critical competency for modern educators, ensuring that they can effectively utilize AI-based tools to enhance personalized learning experiences and student engagement. Based on the findings, it can be concluded that educational policies should recognize the importance of AI and digital literacy, integrating them into the curriculum to support customized education using AI-based learning analytics tools. AI Literacy education must include ethical considerations, and teachers should share these with their students. This research emphasizes the importance of AI Literacy in education and seeks ways to support teachers in effectively using AI technology, suggesting that both policy and practice should focus on providing AI Literacy education to all teachers and students to effectively meet the challenges of the digital age [82].

Author Contributions

Conceptualization, H.Y. and W.L.; methodology, H.Y., J.K. and W.L.; software, H.Y.; validation, W.L., J.K. and H.Y.; writing—original draft preparation, H.Y.; writing—review and editing, H.Y. and J.K.; supervision, J.K. and W.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. RS-2025-00555855).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are not available due to privacy and ethical restrictions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Touretzky, D.S.; Gardner-McCune, C.; Martin, F.; Seehorn, D. Envisioning AI for K-12: What Should Every Child Know about AI? In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019. [Google Scholar]
  2. Luckin, R.; Holmes, W. Intelligence Unleashed: An argument for AI in Education. Educ. Technol. Res. Dev. 2021, 69, 1553–1582. [Google Scholar]
  3. UNESCO. Rethinking Education in the AI Era; UNESCO Publishing: Paris, France, 2021; pp. 13–56. [Google Scholar]
  4. Ministry of Education of the Republic of Korea. Revised 2022 Educational Curriculum Emphasizing AI Integration in Teaching and Assessment; MOE: Seoul, Republic of Korea, 2022.
  5. MEXT. GIGA School Program: One Device per Student to Enhance Personalized Learning; MEXT: Tokyo, Japan, 2020; pp. 22–48.
  6. Infocomm Media Development Authority. AI Learning Analytics System: Analyzing Student Data for Customized Education; IMDA: Singapore, 2021.
  7. Ministry of Education of China. New Generation Artificial Intelligence Development Plan; MOE: Beijing, China, 2021; pp. 101–150.
  8. Ministry of Education of China. Smart Campus AI Initiative; MOE: Beijing, China, 2022; pp. 75–122.
  9. UNESCO. Building Smart Learning Environments: Advances in AI-driven Educational Platforms; UNESCO: Paris, France, 2024; pp. 88–107. [Google Scholar]
  10. GEIS. Global Education Innovation Summit Report 2023; GEIS: Dubai, United Arab Emirates, 2023; pp. 110–145. [Google Scholar]
  11. Webzine-SERII. AI in Education: Designing Personalized Learning Paths; SERII: Seoul, Republic of Korea, 2023. [Google Scholar]
  12. U.S. Department of Education. National AI Education Initiative: Enhancing AI Literacy for Teachers; U.S. DOE: Washington, DC, USA, 2020.
  13. Commission of the European Communities. Integrating AI in European Education Systems; CEC: Brussels, Belgium, 2021; pp. 56–78. [Google Scholar]
  14. Gardner-McCune, C.; Touretzky, D.; Seehorn, D. Teaching AI Literacy: A New Mandate for K-12 Education. J. AI Educ. 2019, 29, 35–49. [Google Scholar]
  15. Smith, J.D.; Lee, L.K. Training Teachers in AI: Investigating the Impact on Instructional Methods. Teach. Educ. Technol. 2020, 37, 315–336. [Google Scholar]
  16. Jones, M.A.; Brown, D.C. AI Literacy in Primary and Secondary Schools: Current Trends and Challenges. Educ. Rev. 2022, 74, 122–146. [Google Scholar]
  17. Tschannen-Moran, M.; Hoy, A.W. Teacher Efficacy: Capturing an Elusive Construct. Teach. Teach. Educ. 2001, 17, 783–805. [Google Scholar] [CrossRef]
  18. Wang, X.; Lee, H.; Kim, S.Y. AI Technologies in Education: Teacher’s Role in Integration. Educ. Technol. Soc. 2020, 23, 45–59. [Google Scholar]
  19. Kim, J.H.; Lee, P.M.; Han, K.J. Understanding Teachers’ AI Implementation Practices: A Cross-National Study. Educ. Technol. Res. Dev. 2021, 69, 529–552. [Google Scholar]
  20. Yang, H.; Kim, J.; Lee, W. Analyzing the Alignment between AI Curriculum and AI Textbooks through Text Mining. Appl. Sci. 2023, 13, 10011. [Google Scholar] [CrossRef]
  21. Zhang, L.; Nouri, J. A Systematic Review of AI Education: Towards a Better Understanding of AI Literacy. Comput. Educ. 2022, 174, 104252. [Google Scholar]
  22. Stanford University. AI Literacy: Bridging the Gap Between Technical Skills and Ethical Responsibility; Stanford University Press: Stanford, CA, USA, 2020; pp. 112–134. [Google Scholar]
  23. UNESCO. Literacy for Life: A Framework for Transformation; UNESCO Publishing: Paris, France, 2005; pp. 25–47. [Google Scholar]
  24. Hirsch, E.D. Cultural Literacy: What Every American Needs to Know, 2nd ed.; Houghton Mifflin Harcourt: Boston, MA, USA, 1987; pp. 59–99. [Google Scholar]
  25. OECD. Literacy for the 21st Century: Understanding the Skills You Need for Success; OECD Publishing: Paris, France, 2019; pp. 48–73. [Google Scholar]
  26. The New London Group. A Pedagogy of Multiliteracies: Designing Social Futures. Harv. Educ. Rev. 1996, 66, 60–92. [Google Scholar] [CrossRef]
  27. Gilster, P. Digital Literacy, 1st ed.; Wiley: New York, NY, USA, 1997; pp. 1–246. [Google Scholar]
  28. OECD. Digital Literacy Skills for the 21st Century; OECD Publishing: Paris, France, 2021; pp. 33–58. [Google Scholar]
  29. Ng, T.W.; Zhou, M.; Wong, E. Developing AI Education in Schools: A New Approach to Curriculum Design. J. Curric. Stud. 2021, 53, 212–230. [Google Scholar]
  30. CSTA. K-12 Computer Science Framework; CSTA: New York, NY, USA, 2017. [Google Scholar]
  31. Department for Education. Computing at School: National Curriculum; DfE: London, UK, 2013.
  32. Kultusministerkonferenz. Lehrplan für die Informatik; KMK: Berlin, Germany, 2018. [Google Scholar]
  33. Ministry of Education. National Computing Curriculum; MOE: Wellington, New Zealand, 2020.
  34. MEXT. Curriculum Guidelines for Lower and Upper Secondary Schools; MEXT: Tokyo, Japan, 2018.
  35. ACM/IEEE. Computing Curricula: The Overview Report; ACM/IEEE: New York, NY, USA, 2023. [Google Scholar]
  36. UNESCO. AI and Education: Guidance for Policymakers; UNESCO: Paris, France, 2022; pp. 1–45. [Google Scholar]
  37. AI4K12. AI Education Guidelines for K-12 Students; AI4K12: New York, NY, USA, 2019. [Google Scholar]
  38. Holmes, W.; Bialik, M.; Fadel, C. Artificial Intelligence in Education: Promises and Implications for Teaching and Learning; Center for Curriculum Redesign: Boston, MA, USA, 2022. [Google Scholar]
  39. Dillenbourg, P.; Jermann, P.; Schneider, D.K. The Evolution of Research on Computer-Supported Collaborative Learning. In Technology-Enhanced Learning; Fischer, F., Hmelo-Silver, C., Goldman, S.R., Reimann, P., Eds.; Springer: Dordrecht, The Netherlands, 2021; pp. 3–19. [Google Scholar]
  40. Kelly, A.E.; Lesh, R.A.; Baek, J.Y. Handbook of Design Research Methods in Education: Innovations in Science, Technology, Engineering, and Mathematics Learning and Teaching; Routledge: New York, NY, USA, 2020; pp. 245–265. [Google Scholar]
  41. Siemens, G.; Baker, R.S.J.d. Learning Analytics and Educational Data Mining: Towards Communication and Collaboration. In Proceedings of the 2nd International Conference on Learning Analytics and Knowledge, Vancouver, BC, Canada, 29 April–2 May 2012; pp. 252–254. [Google Scholar]
  42. Ferguson, R. Learning Analytics: Visions of the Future. In Proceedings of the 7th International Conference on Learning Analytics and Knowledge, Edinburgh, UK, 25–29 March 2019; pp. 34–42. [Google Scholar]
  43. Gibson, D.; Shaw, G. The Big Picture: The Power of Writing Analytics. In Advances in Writing Research; MacArthur, C., Graham, S., Fitzgerald, J., Eds.; Guilford Press: New York, NY, USA, 2021; Volume 2, pp. 164–202. [Google Scholar]
  44. Nguyen, H.; Weinberger, A.; Niehaus, E. Assessing Creative Problem-Solving with AI Tools. J. Creat. Behav. 2021, 55, 478–489. [Google Scholar]
  45. Baker, M.J.; Hawn, A. Limitations of Automated Scoring Systems. Ed. Policy Anal. Arch. 2021, 29, 20–45. [Google Scholar]
  46. Chen, G.; Davis, D.; Hauff, C. Customized Learning Experiences Through AI Chatbots: Potential and Limitations. Educ. Technol. Res. Dev. 2021, 69, 1–22. [Google Scholar]
  47. Kohli, R.; Tan, B.; Zutshi, A. Virtual Tutors: The Ups and Downs of AI in Education. In AI and Education: Critical Perspectives and New Directions; Selwyn, N., Facer, K., Eds.; Routledge: London, UK, 2020; pp. 117–134. [Google Scholar]
  48. Schmidt, M.; Fischer, R. Barriers to AI Adoption in Education. J. Educ. Change 2020, 21, 465–485. [Google Scholar]
  49. Chen, X.; Liu, L.; Zhao, Y. Effective Teacher Training for AI Tool Utilization: Insights from a Hands-On Approach. Teach. Teach. Educ. 2022, 100, 103360. [Google Scholar]
  50. Ng, C. Comprehensive AI Education: From Concepts to Practice. J. Comput. Assist. Learn. 2021, 37, 456–469. [Google Scholar]
  51. Jones, R.D.; Bradshaw, M.K. Continuous Practice and Collaboration: Keys to Sustaining AI Literacy. J. Learn. Dev. 2021, 8, 303–319. [Google Scholar]
  52. Anderson, T.; Shattuck, J. Design-Based Research: A Decade of Progress in Education Research? Educ. Res. 2019, 48, 5–28. [Google Scholar] [CrossRef]
  53. Wang, Y.; Heffernan, N.; Heffernan, C. Predicting Student Achievement Through AI: A Meta-Analysis. J. Educ. Psychol. 2023, 115, 42–65. [Google Scholar]
  54. Smith, D.K.; Smith, S. Self-Assessment in AI Literacy: Challenges and Opportunities. J. Educ. Psychol. 2022, 114, 337–352. [Google Scholar]
  55. Schmid, L.; Sassenberg, K.; Ruhrmann, G. Self-Efficacy and AI in Education: An Empirical Study. Teach. Teach. Educ. 2021, 102, 412–424. [Google Scholar]
  56. Rosenberg, J.; Lopez, M.; Heeter, C. Developing the AI Literacy Competency Framework. TechTrends 2023, 67, 21–33. [Google Scholar]
  57. Brown, J.; Chen, D. Decision Trees in Education: Applications and Limitations. J. Educ. Data Min. 2022, 14, 12–28. [Google Scholar]
  58. Johnson, T.; Smith, P. Interpreting Feature Importance in Random Forests: Challenges and Solutions. Mach. Learn. Res. 2021, 22, 34–56. [Google Scholar]
  59. Chen, T.; Guestrin, C. XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794. [Google Scholar]
  60. Wang, B.; Li, R.; Zhang, Z. Enhancing Predictive Accuracy in Educational Data Mining: A Case Study Using XGBoost. J. Educ. Data Min. 2024, 26, 58–77. [Google Scholar]
  61. Willmott, C.J.; Matsuura, K. Advantages of the Mean Absolute Error (MAE) over the Root Mean Squared Error (RMSE) in Assessing Average Model Performance. Clim. Res. 2005, 30, 79–82. [Google Scholar] [CrossRef]
  62. Kim, D.; Park, S. AI Education Policy Analysis Using SHAP Values. Educ. Policy Anal. Arch. 2024, 32, 102–129. [Google Scholar]
  63. Brochu, E.; Cora, V.M.; De Freitas, N. A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning. arXiv 2010, arXiv:1012.2599. [Google Scholar]
  64. Lundberg, S.M.; Lee, S.I. A Unified Approach to Interpreting Model Predictions. In Proceedings of the 31st Neural Information Processing Systems (NIPS), Long Beach, CA, USA, 4–9 December 2017; pp. 4768–4777. [Google Scholar]
  65. Nguyen, P.; Lee, Q. Practical Applications of SHAP in Educational Data Analysis. J. Learn. Anal. 2024, 11, 25–45. [Google Scholar]
  66. Farrow, R. The possibilities and limits of XAI in education: A socio-technical perspective. Learn. Media Technol. 2023, 48, 266–279. [Google Scholar] [CrossRef]
  67. Reeder, S. Evaluating Explainable AI (XAI) in terms of user gender and educational background. In Artificial Intelligence in HCI; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2023; Volume 14050. [Google Scholar]
  68. Chen, J.; Zhang, X.; Liu, L. Policy Analysis in Education Using SHAP: A Case Study. Educ. Policy Anal. Arch. 2024, 32, 145–165. [Google Scholar]
  69. Lawshe, C.H. A Quantitative Approach to Content Validity. Pers. Psychol. 1975, 28, 563–575. [Google Scholar] [CrossRef]
  70. Ministry of the Interior and Safety. Classification of Administrative Divisions in Korea; MOIS: Seoul, Republic of Korea, 2024.
  71. Ministry of Education of the Republic of Korea. Status of Primary and Secondary Schools in Korea (Student and Class Numbers); MOE: Seoul, Republic of Korea, 2024.
  72. Chungcheongbuk-do Office of Education. Educational Status in Chungcheongbuk-do; Chungcheongbuk-do Office of Education Official Website: Cheongju, Republic of Korea, 2024.
  73. Kumar, S.; Singh, A. Evaluating the Accuracy of Machine Learning Algorithms for Predicting Educational Outcomes. Educ. Data Min. 2021, 13, 33–50. [Google Scholar]
  74. Lundberg, S.M.; Erion, G.; Chen, J.; DeGrave, A.; Prutkin, J.M.; Nair, B.; Catanzaro, B.; Himmelfarb, J.; Lee, S.I. Explainable AI for Trees: From Local Explanations to Global Understanding. In Proceedings of the 37th International Conference on Machine Learning, Vienna, Austria, 12–18 July 2020; Volume 119, pp. 2668–2677. [Google Scholar]
  75. Molnar, C. Interpretable Machine Learning, 2nd ed.; Leanpub: Victoria, BC, Canada, 2022. [Google Scholar]
  76. Global Education Research Institute. The Effectiveness of AI Literacy Programs: A Longitudinal Study. Global Educ. Res. 2022, 6, 111–134. [Google Scholar]
  77. Educational Technology Solutions Lab. Tailoring AI Education: Strategies and Success Stories; EdTech Lab: Chicago, IL, USA, 2021. [Google Scholar]
  78. Educational Innovation Research Group. Ethical Considerations in AI-Driven Education: Case Studies and Policy Recommendations. Educ. Innov. Res. 2023, 7, 50–72. [Google Scholar]
  79. Advanced Educational Methods Institute. Building Comprehensive AI Literacy Frameworks: Toward Inclusive and Equitable Education. Adv. Educ. Methods 2024, 5, 65–88. [Google Scholar]
  80. Johnson, M. AI Literacy: The Role of Knowledge and Skills in AI Education. J. Educ. Technol. 2021, 38, 1–19. [Google Scholar]
  81. Miller, R.; Brown, L. Practical Challenges in Customizing Education through AI. In Educational Innovation for the Digital Age; Academic Press: New York, NY, USA, 2020; pp. 201–225. [Google Scholar]
  82. Educational Technology Institute. Enhancing AI Literacy Across the K-12 Spectrum: Trends and Techniques; Educational Technology Institute: Boston, MA, USA, 2023. [Google Scholar]
Figure 1. Research Process.
Figure 2. Composition of Features and Targets.
Figure 3. Pearson correlation matrix. The asterisk (*) indicates a statistically significant correlation.
Figure 4. Feature Importance Comparison: XGBoost and SHAP Analysis for AI Literacy in the Class Preparation Phase.
Figure 5. Feature Importance Comparison: XGBoost and SHAP Analysis for AI Literacy in the Class Implementation Phase.
Figure 6. Feature Importance Comparison: XGBoost and SHAP Analysis for AI Literacy in Class Assessment Phase.
Figure 7. Feature Importance Comparison: XGBoost and SHAP Analysis for AI Literacy in the Class Application of Assessment Results Phase.
Table 1. Summary of Programming Libraries Used.

| Research Component | Libraries Used | Description |
| --- | --- | --- |
| Data Processing and Analysis | Pandas, numpy | Data frame transformation, missing value handling, matrix operations, and data transformation |
| Model Training and Optimization | xgboost | Regression model construction and training |
| Model Evaluation and Comparison | sklearn | Calculation of evaluation metrics and model performance comparison |
| Hyperparameter Tuning | bayes_opt | Bayesian optimization for hyperparameter search |
| Data Visualization | Matplotlib, seaborn | Visualization of results (charts and graphs) |
| Model Interpretation | shap | Feature contribution analysis, SHAP analysis (Summary Plot, Beeswarm) |
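For orientation, the stack listed in Table 1 maps onto the following imports; the aliases are conventional choices rather than something specified in the paper.

```python
import pandas as pd                          # data frames, missing-value handling
import numpy as np                           # matrix operations and transformations
import xgboost as xgb                        # regression model construction and training
from sklearn import metrics                  # evaluation metrics and model comparison
from bayes_opt import BayesianOptimization   # Bayesian hyperparameter search
import matplotlib.pyplot as plt              # charts and graphs
import seaborn as sns                        # statistical visualization
import shap                                  # SHAP feature-contribution analysis
```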
Table 2. Characteristics of Respondents.

| Category | Subcategory | Frequency (n) | Percentage (%) |
| --- | --- | --- | --- |
| School Level | Elementary School | 574 | 49.0 |
| | Middle School | 342 | 29.2 |
| | High School | 256 | 21.8 |
| Teaching Experience | Less than 10 years | 480 | 41.0 |
| | 10 to 20 years | 334 | 28.5 |
| | More than 20 years | 358 | 30.5 |
| AI Training Experience | Attended Training | 342 | 29.2 |
| | No Training Experience | 830 | 70.8 |
| Total Participants | | 1172 | 100.0 |
Table 3. Results of XGBoost model performance evaluation.

| Iteration | Target (R2 Value) | Colsample_Bytree | Learning_Rate | Max_Depth | n_Estimators | Subsample |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 0.714 | 0.6873 | 0.1906 | 8.124 | 279.6 | 0.578 |
| … | (omitted for brevity) | | | | | |
| 6 | 0.7422 | 0.5895 | 0.1349 | 9.961 | 295.6 | 0.8033 |
| 7 | 0.7356 | 0.754 | 0.1003 | 9.562 | 316.6 | 0.7297 |
| … | (omitted for brevity) | | | | | |
| 9 | 0.74 | 0.5741 | 0.1908 | 9.859 | 141.6 | 0.8147 |
| … | (omitted for brevity) | | | | | |
| 16 | 0.734 | 0.9064 | 0.09787 | 9.902 | 135.7 | 0.8127 |
| 17 | 0.7289 | 0.8538 | 0.05436 | 9.8 | 171.3 | 0.8932 |
| 18 | 0.5116 | 0.9242 | 0.07331 | 3.174 | 169.4 | 0.5182 |
| 19 | 0.6658 | 0.9048 | 0.1361 | 5.263 | 138.1 | 0.8139 |
| 20 | 0.6578 | 0.8481 | 0.1876 | 4.384 | 284.3 | 0.979 |
Table 4. Optimal hyperparameter values obtained through Bayesian Optimization.

| Hyperparameter | Value |
| --- | --- |
| learning_rate | 0.1349 |
| max_depth | 9 |
| subsample | 0.8033 |
| colsample_bytree | 0.5895 |
| n_estimators | 295 |
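A sketch of how bayes_opt can drive this search toward the optimum reported in Table 4; `X_train` and `y_train` are assumed to come from the 80:20 split, and the search bounds are assumptions chosen to be consistent with the candidate values in Table 3 rather than values reported by the authors.

```python
import xgboost as xgb
from bayes_opt import BayesianOptimization
from sklearn.model_selection import cross_val_score

# `X_train` and `y_train` are assumed to come from the 80:20 split.
def xgb_r2(learning_rate, max_depth, subsample, colsample_bytree, n_estimators):
    """Mean cross-validated R2 for one candidate hyperparameter set."""
    model = xgb.XGBRegressor(
        learning_rate=learning_rate,
        max_depth=int(max_depth),        # continuous proposals rounded to integers
        subsample=subsample,
        colsample_bytree=colsample_bytree,
        n_estimators=int(n_estimators),
    )
    return cross_val_score(model, X_train, y_train, cv=5, scoring="r2").mean()

# Assumed search bounds, consistent with the candidates listed in Table 3.
optimizer = BayesianOptimization(
    f=xgb_r2,
    pbounds={
        "learning_rate": (0.01, 0.2),
        "max_depth": (3, 10),
        "subsample": (0.5, 1.0),
        "colsample_bytree": (0.5, 1.0),
        "n_estimators": (100, 350),
    },
    random_state=42,
)
optimizer.maximize(init_points=5, n_iter=15)
print(optimizer.max)  # best R2 and the corresponding hyperparameters
```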
Table 5. Performance evaluation of the XGBoost model on target variables.

| Target | R2 Score | MSE | RMSE | MAE |
| --- | --- | --- | --- | --- |
| Self-Perceived Level of AI Literacy for Class Preparation Phase execution | 0.8206 | 0.1235 | 0.3514 | 0.1430 |
| Self-Perceived Level of AI Literacy for Class Implementation Phase execution | 0.8007 | 0.1337 | 0.3656 | 0.1372 |
| Self-Perceived Level of AI Literacy for Class Assessment Phase execution | 0.8066 | 0.1534 | 0.3917 | 0.1561 |
| Self-Perceived Level of AI Literacy for Class Application of Assessment Results Phase execution | 0.7746 | 0.1616 | 0.4019 | 0.1581 |
Table 6. Feature Importance of AI Literacy in the Class Preparation Phase. The background color is used to highlight cases with higher values.

| Feature | XGBoost Feature Importance | SHAP Mean Absolute Value |
| --- | --- | --- |
| Importance of AI Literacy in Class Preparation Phase | 0.2488 | 0.1978 |
| Importance of AI Literacy in Class Application of Assessment Results Phase | 0.1208 | 0.0679 |
| Importance of Teaching Methods in AI Literacy for AI-Integrated Education | 0.1109 | 0.0748 |
| Importance of AI Literacy in Class Assessment Phase | 0.1009 | 0.0771 |
| Importance of AI Literacy in Class Implementation Phase | 0.0650 | 0.0521 |
| Importance of Skills in AI Literacy for AI-Integrated Education | 0.0610 | 0.0918 |
| Perceived Need for AI Literacy Learning Level for Students | 0.0559 | 0.0723 |
| Importance of Skills in AI Literacy | 0.0547 | 0.0837 |
| Importance of Knowledge in AI Literacy for AI-Integrated Education | 0.0544 | 0.0489 |
| Importance of Knowledge in AI Literacy | 0.0465 | 0.0546 |
| Perceived Need for AI Literacy Learning Level for Teachers | 0.0406 | 0.0543 |
| Importance of Teaching Methods in AI Literacy | 0.0404 | 0.0437 |
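The two importance columns in Tables 6–9 can be reproduced roughly as follows, under the assumption of a fitted phase-specific model (`model`) and its test-set features (`X_test`): XGBoost's built-in feature importance alongside the mean absolute SHAP value per feature.

```python
import numpy as np
import pandas as pd
import shap

# Assumes `model` is the fitted XGBRegressor and `X_test` is the test-set
# feature DataFrame for the phase in question.
xgb_importance = pd.Series(model.feature_importances_, index=X_test.columns)

shap_values = shap.TreeExplainer(model).shap_values(X_test)
mean_abs_shap = pd.Series(np.abs(shap_values).mean(axis=0), index=X_test.columns)

comparison = pd.DataFrame({
    "XGBoost Feature Importance": xgb_importance,
    "SHAP Mean Absolute Value": mean_abs_shap,
}).sort_values("XGBoost Feature Importance", ascending=False)
print(comparison.round(4))
```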
Table 7. Feature Importance of AI Literacy in the Class Implementation Phase. The background color is used to highlight cases with higher values.

| Feature | XGBoost Feature Importance | SHAP Mean Absolute Value |
| --- | --- | --- |
| Importance of AI Literacy in Class Assessment Phase | 0.2588 | 0.1138 |
| Importance of AI Literacy in Class Preparation Phase | 0.1266 | 0.0905 |
| Importance of Teaching Methods in AI Literacy for AI-Integrated Education | 0.1185 | 0.1219 |
| Importance of Skills in AI Literacy for AI-Integrated Education | 0.0699 | 0.0596 |
| Importance of AI Literacy in Class Implementation Phase | 0.0691 | 0.0657 |
| Importance of AI Literacy in Class Application of Assessment Results Phase | 0.0679 | 0.0527 |
| Importance of Skills in AI Literacy | 0.0611 | 0.0851 |
| Perceived Need for AI Literacy Learning Level for Students | 0.0526 | 0.0780 |
| Importance of Knowledge in AI Literacy | 0.0476 | 0.0556 |
| Importance of Knowledge in AI Literacy for AI-Integrated Education | 0.0473 | 0.0580 |
| Importance of Teaching Methods in AI Literacy | 0.0403 | 0.0644 |
| Perceived Need for AI Literacy Learning Level for Teachers | 0.0403 | 0.0478 |
Table 8. Feature Importance of AI Literacy in the Class Assessment Phase. The background color is used to highlight cases with higher values.

| Feature | XGBoost Feature Importance | SHAP Mean Absolute Value |
| --- | --- | --- |
| Importance of AI Literacy in Class Assessment Phase | 0.1973 | 0.1474 |
| Importance of AI Literacy in Class Preparation Phase | 0.1553 | 0.1056 |
| Importance of Teaching Methods in AI Literacy for AI-Integrated Education | 0.1333 | 0.0978 |
| Importance of Skills in AI Literacy for AI-Integrated Education | 0.0733 | 0.1005 |
| Importance of AI Literacy in Class Implementation Phase | 0.0638 | 0.0607 |
| Importance of AI Literacy in Class Application of Assessment Results Phase | 0.0624 | 0.0630 |
| Importance of Skills in AI Literacy | 0.0621 | 0.0897 |
| Perceived Need for AI Literacy Learning Level for Students | 0.0569 | 0.0841 |
| Importance of Knowledge in AI Literacy | 0.0531 | 0.0604 |
| Importance of Knowledge in AI Literacy for AI-Integrated Education | 0.0529 | 0.0514 |
| Importance of Teaching Methods in AI Literacy | 0.0481 | 0.0605 |
| Perceived Need for AI Literacy Learning Level for Teachers | 0.0416 | 0.0745 |
Table 9. Feature Importance of AI Literacy in the Class Application of Assessment Results Phase. The background color is used to highlight cases with higher values.

| Feature | XGBoost Feature Importance | SHAP Mean Absolute Value |
| --- | --- | --- |
| Importance of AI Literacy in Class Application of Assessment Results Phase | 0.2117 | 0.1338 |
| Importance of Teaching Methods in AI Literacy for AI-Integrated Education | 0.1868 | 0.0947 |
| Importance of AI Literacy in Class Preparation Phase | 0.0899 | 0.0839 |
| Importance of Skills in AI Literacy for AI-Integrated Education | 0.0788 | 0.0959 |
| Importance of Skills in AI Literacy | 0.0724 | 0.0972 |
| Importance of AI Literacy in Class Implementation Phase | 0.0666 | 0.0712 |
| Importance of AI Literacy in Class Assessment Phase | 0.0600 | 0.0462 |
| Perceived Need for AI Literacy Learning Level for Students | 0.0567 | 0.0708 |
| Importance of Knowledge in AI Literacy for AI-Integrated Education | 0.0531 | 0.0497 |
| Importance of Teaching Methods in AI Literacy | 0.0424 | 0.0509 |
| Importance of Knowledge in AI Literacy | 0.0420 | 0.0514 |
| Perceived Need for AI Literacy Learning Level for Teachers | 0.0395 | 0.0612 |
