Article

Modeling K12 Teachers’ Online Teaching Competency and Its Predictive Relationship with Performance—A Mixed-Methods Study Based on Behavioral Event Interviews

1 School of Information Technology in Education, South China Normal University, Guangzhou 510631, China
2 School of Computer and Information Engineering, Hubei Normal University, Huangshi 435002, China
* Author to whom correspondence should be addressed.
Behav. Sci. 2025, 15(5), 628; https://doi.org/10.3390/bs15050628
Submission received: 21 February 2025 / Revised: 24 April 2025 / Accepted: 25 April 2025 / Published: 5 May 2025

Abstract

This study constructs and validates a multidimensional online teaching competency model for K12 teachers through an integrated mixed-methods design. Combining behavioral event interviews (n = 38) with large-scale psychometric evaluation (n = 4378), we identified six hierarchically organized competency dimensions encompassing 29 measurable elements. The model differentiates between 12 discriminative competencies and 17 baseline competencies, further categorized into explicit (knowledge, technical, instructional, management) and implicit (achievement orientation, individual traits) dimensions. Exploratory and confirmatory analyses validated the model’s robust multidimensional structure (CFI = 0.923, TLI = 0.914, RMSEA = 0.042). Structural equation modeling revealed significant competency-performance linkages, with 10 of 12 hypothesized paths attaining statistical significance (p < 0.05). Management competencies emerged as the strongest predictor of both process (β = 0.37) and outcome performance (β = 0.29), followed by instructional competencies (β = 0.31 and 0.24, respectively). The model provides empirically grounded guidance for developing online teaching norms, competency-based teacher training programs, and performance evaluation systems.

1. Introduction

The global educational landscape has undergone unprecedented digital acceleration since 2020, with over 89% of OECD countries institutionalizing national online learning platforms by 2021 (OECD, 2021). Within this worldwide transformation, China’s 2022 National Smart Education Platform launch represents a strategically scaled implementation, serving as an empirical testbed for competency model validation under high-stakes conditions. This platform, integrated across 32 provincial systems, now supports 18.3 million K12 teachers, making it a critical context for studying systemic digital pedagogy transitions. The reform of teaching models supported by digital resources has been continuously advancing, leading to changes in the teaching and learning environment, classroom structures, and teaching methods. A new educational ecosystem that integrates both online and face-to-face education has gradually emerged (Chen, 2020). This also placed new demands on K12 teachers, making online teaching a norm in their daily work.
Although the investment in infrastructure has increased in various countries, teacher readiness remains a universal bottleneck. UNESCO’s 2022 Global Education Monitoring Report identifies “competency gaps in technology-mediated instruction” as the top barrier to equitable digital learning in 76% of surveyed countries (UNESCO, 2022). In China, there are misunderstandings about online teaching (X. P. Wang, 2020), mirroring issues observed in India’s DIKSHA platform (Kumar & Sharma, 2021) and Brazil’s Aula Digital (Araújo et al., 2021). For instance, some equate it with live teaching, recorded teaching, or simply transferring face-to-face teaching methods online without adaptation. Zhao et al. (2022) surveyed 3107 K12 teachers and found that the most concerning issues in practice remain the operation of online teaching platforms and resource application. The overall understanding and adaptability to online teaching are at a moderate level, with significant urban–rural disparities.
Additionally, both pre-service and in-service stages of K12 teacher training lack systematic online teaching capability development (Yang & Zhang, 2020). During the pre-service stage, there are some courses such as “Foundations of Education” and “Multimedia Technology and Application” in training programs, as well as practicum and internship components. However, these focus discretely on pedagogy or the application of resource creation tools and subject tools, without sufficient emphasis on designing instructional activities, selecting teaching strategies, and designing instructional evaluations in online environments. During the in-service stage, training related to online teaching for K12 teachers still exhibits a “skill-oriented” bias, rarely addressing the diverse competencies required during online teaching, such as organizing learning activities, designing interactions, and analyzing student performance based on data. Fu (2020) surveyed 7111 teachers and found that most trainings were either too theoretical to be practically implemented or too technical, assuming that mastering the use of teaching software tools suffices for effective online teaching.
Furthermore, many schools or regions still evaluate such performance solely through student exam scores—a reductionist approach that neglects critical multidimensional aspects, including systematic activity design, collaborative learning dynamics, and the cultivation of students’ multiple intelligences during online instruction, thereby underscoring the urgency for evidence-based evaluation frameworks. This study conceptualizes online teaching performance as teaching effectiveness that encompasses two dimensions: process performance (e.g., learner engagement efficiency, instructional interaction quality) and outcome performance (e.g., academic achievements, skill development; Rahmatullah, 2016; Gökdaş et al., 2024).
Therefore, the digital transformation of education demands reimagined teacher competencies, a challenge that is both context-specific and globally resonant. Addressing the standardization of K12 online teaching, the systematization of online teaching training, and the scientific evaluation of online teaching requires a rigorously constructed online teaching competency model. This study aims to answer the following research questions:
RQ1: What are the characteristics of K12 teachers’ online teaching competence?
RQ2: What is the structure of the competency model composed of these characteristics?
RQ3: What is the relationship between the competency characteristics in the model and the prediction of online teaching performance?

2. Literature Review

Competence is not merely an ability but also a qualification or prerequisite for engaging in a particular job. In specific work contexts, it refers more to comprehensive capabilities rather than basic problem solving, encompassing not only explicit knowledge and skills but also implicit traits, qualities, values, and motivations. For this reason, the competency model of teachers in specific work or job contexts has increasingly gained attention from researchers.

2.1. The Competence of K12 Teachers

2.1.1. Theoretical Foundations

Research on competency stems from the continuous specialization of social division of labor following the Industrial Revolution, and has permeated various fields such as management, economics, and education. McClelland (1973) introduced the concept of “competency”, defining it as “the knowledge, skills, abilities, traits, or motives that are directly related to job performance or important outcomes in the process”. Building on this foundation, Spencer (1993) proposed the classic theoretical model of competency, the “iceberg model”, which posits that above the surface, visible traits include skills and knowledge, which are easily acquired and changed; below the surface, invisible deep-seated competencies, such as self-concept, social roles, and motivations, play a decisive role in behavior and performance.

2.1.2. Applied Frameworks in Varied Contexts

Emerging from these theoretical foundations, empirical investigations of teacher competency models coalesce into two thematic paradigms.
Firstly, process-oriented frameworks emphasize instructional workflows. Danielson et al. (2024) organized competencies around the instructional workflow and proposed a four-dimensional model comprising planning and preparation, classroom environment monitoring, teaching, and professional responsibility. Richey et al. (2001) proposed a teacher competency model for instructional design consisting of four dimensions and 19 competency characteristics: professional basic abilities, planning and analysis, design and development, and implementation and management. Koszalka et al. (2013) likewise focused on instructional design, developing a teacher competency model covering professional basic abilities, planning and analysis, design and development, and implementation and management. Ferrández-Berrueco and Sánchez-Tarazaga (2014) argued that teachers should possess competencies in four areas: subject matter, methodological, social, and personal abilities.
Secondly, scenario-specific adaptations address contextualized pedagogical demands. Many scholars have also conducted research on teacher competency for different educational stages, positions, or teaching scenarios. For example, X. L. Luo (2010) interviewed 28 middle school teachers and, supplementing the interviews with open-ended questionnaires, constructed a middle school teacher competency model consisting of nine competency characteristic groups. Yuan et al. (2021) constructed a home-school cooperation competency model for K12 teachers, including three levels: home-school cooperation knowledge and skills, attitudes and values, and personality and achievement motivation. X. Wang et al. (2024), focusing on interdisciplinary teaching readiness (ITR), confirmed through item analysis and the critical ratio method that ITR consists of three factors (interdisciplinary teaching knowledge structure readiness, interdisciplinary teaching skills readiness, and interdisciplinary teaching attitudes readiness) and 24 items. Zhou et al. (2024) investigated the significance of improving cross-cultural communicative competence (CCC) in undergraduate English instruction.

2.1.3. International Standards

Additionally, some countries and organizations have proposed relevant models to promote teachers’ professional development. For instance, UNESCO (2022) established ICT competency standards for teachers covering six aspects: understanding ICT, curriculum and assessment, pedagogy, application of digital skills, organization and management, and teacher professional learning. The Australian Institute for Teaching and School Leadership published the Australian Professional Standards for Teachers in 2011 and revised them in 2018; the standards (AITSL, 2018) comprise three domains (knowledge, practice, and engagement) and seven standards. The Teachers College of Columbia University (2015) developed a global competence certificate, including cognitive skills (critical thinking, creative thinking, comparative and reasoning abilities, information discernment and judgment capabilities, and expression of viewpoints) as well as action skills (digital literacy, dialogue abilities, translating ideas into actions, communication in dialogues, collaborative cooperation, shared responsibility, and sharing of outcomes).

2.2. The Competence for Online Teaching

Research on online teaching competency has evolved along two distinct yet complementary trajectories. The first strand focuses on conceptualizing the multidimensional roles and capabilities required for remote instruction, while the second centers on establishing standardized frameworks for digital education competence.

2.2.1. The Conceptual Development

The conceptual research trajectory reveals a progressive expansion of understanding teacher roles in digital environments. Early foundational work by Thach and Murphy (1995) established 11 critical roles ranging from instructional design to evaluation expertise, accompanied by 10 core capabilities. Subsequent studies progressively refined these dimensions: Williams (2003) expanded the role taxonomy to 13 positions, including technical specialists and instructional supporters, while Goodyear et al. (2001) introduced communication dynamics through 23 capability–task pairings. This conceptual evolution continued through Kabilan’s (2004) emphasis on motivational and ideological dimensions, Dabbagh’s (2003) nine-role interaction model, and Alvarez et al.’s (2009) five-category system integrating social and managerial competencies. More recent contributions, such as Ally (2019), have synthesized these elements into comprehensive nine-domain models encompassing both pedagogical and technological dimensions.
The global shift towards prioritizing information and communication technology has underscored the pressing need to bolster and cultivate digital competence within educational contexts (Guillén-Gámez et al., 2021). Basilotta-Gómez-Pablos et al. (2022) reviewed studies on the digital competence of university teachers, revealing a dominant focus on self-assessment and the need for more practical training programs. Bilbao Aiastui et al. (2021) and De la Calle et al. (2021) have further explored the integration of digital skills in education, highlighting the need for continued research and development in this area.

2.2.2. Establishing Standardized Frameworks

Parallel to these conceptual developments, institutional standardization efforts have emerged. The “EU Framework for Digital Education Competence of Teachers” proposed in 2017 includes a model of teacher digital competence with six domains and 22 capabilities, covering professional engagement, digital resources, teaching and learning, assessment, empowering learners, and facilitating learners’ digital competence (Redecker, 2017). The International Society for Technology in Education in the United States also revised the “National Educational Technology Standards for Teachers” in 2017, categorizing teachers into seven functional roles: learners, leaders, citizens, collaborators, designers, facilitators, and analysts (Feng et al., 2018).
Despite these advancements, three critical gaps emerge from the existing scholarship. First, existing research on online teaching competency has concentrated on higher education and distance education, where applications are developing rapidly, with far less attention paid to K12 education. Second, current models insufficiently account for the blended learning realities of contemporary K12 education, where online–offline integration requires distinct competency profiles differing from pure distance education models. Third, there is limited empirical validation of proposed frameworks in actual K12 classroom implementations (Bilbao Aiastui et al., 2021; De la Calle et al., 2021). These lacunae highlight the need for a purpose-built competency framework addressing the unique requirements of K12 teachers navigating hybrid educational landscapes.

2.3. The Relationship Between Competency and Performance

The competency-performance nexus has been systematically examined across disciplines through three primary research lenses: empirical validation studies, process-outcome differentiation analyses, and contextual adaptation investigations.

2.3.1. Empirical Validation Studies

Empirical validation studies, covering a wide range of fields including education, medicine, and business management, have established robust quantitative foundations for this relationship. This study focuses on the relationship between teachers’ competencies and teaching performance in an online teaching environment. Syarif and Angger (2017) used quantitative methods to study the relationship between professional competency and performance among public primary school teachers in Central Java, Indonesia, and confirmed a significant positive correlation between primary school teachers’ competency and performance (r = 0.979). Wahyuddin (2016) also studied Indonesian teachers, investigating the relationship between teacher competence and emotional intelligence through descriptive and inferential analysis. The results showed an association between teacher competence and teacher performance, as well as an association between emotional intelligence and teacher performance.
Both process-based performance and outcome-based performance have been shown to relate to teachers’ competencies. Rahmatullah (2016), exploring the correlation between learning effectiveness (focused on the learning process) and teacher competence, confirmed a high correlation between teacher competence and both learning efficacy and teacher performance, as well as a relationship between learning efficacy and teacher performance. Tabassum et al. (2020) reached a similar conclusion in their investigation of the relationship between social competence and academic performance (focused on outcomes) among college students, based on a sample of 4708 participants from different universities in Pakistan. The analysis showed a significant relationship between academic performance and social competence; likewise, the social, cognitive, and interpersonal skill components of social competence each had a significant impact on academic performance.

2.3.2. Contextual Adaptation Research

Contextual adaptation research introduces organizational moderators into this relationship. Junior et al. (2017) conceptualize professional competence as encompassing knowledge, skills, abilities, traits, and behaviors; in a survey exploring the relationship between employee competence and job performance in public organizations, they found that the support provided by management has a significant impact on employee performance and that different structures of competence affect employee performance differently.
With the rapid development of digital technologies, teachers need to be able to use these technologies effectively and develop students’ digital skills. Gökdaş et al. (2024) found through a comprehensive literature analysis that among the seven factors identified as influencing teachers’ digital competence, 26.34% of the literature focused on the link between teachers’ digital competences and student performance, demonstrating that enhanced digital skills positively impact student outcomes.
Three critical syntheses emerge from this body of research. First, the competence-performance relationship demonstrates cross-contextual validity through replicable effect sizes. Second, performance measurement specificity determines observed effect magnitudes: composite measures yield stronger correlations than single indicators. Third, digital transformation introduces new mediating variables, particularly technological adaptability and student-facing digital literacies.
These syntheses justify our operational definition of performance as encompassing both process metrics (learning habits, self-learning ability, etc.) and outcome indicators (learning achievements, professional development, etc.).

3. Methodology

To address the three research questions outlined above, the study is divided into three phases, each corresponding to a distinct sub-study (Figure 1).

3.1. Behavioral Event Interview (BEI) for RQ1

This sub-study aims to explore the characteristics of the online teaching competency model using the behavioral event interview (BEI) method. This approach involves a time-compressed observation of actions, where respondents are asked to describe in detail key instances of effective and ineffective work in their professional activities. Through coding and analysis, these instances are broken down into specific behaviors, thereby identifying the competencies required for a particular job or position. The procedure is structured as follows.
First, interviews were conducted with teachers. A random sample of 25 teachers was selected, comprising 13 high-performing teachers and 12 ordinary-performing teachers from the same schools. Face-to-face interviews were conducted and audio-recorded.
Second, transcription and open coding were conducted. The recorded interviews were transcribed verbatim. Open coding was applied to the transcripts to extract competency-related characteristics articulated by the teachers.
Third, axial coding was conducted. The outcomes of open coding were analyzed to categorize emergent competency characteristics into core genera.
Finally, selective coding was conducted. A core genus was selected to integrate the characteristics derived from open coding, resulting in a preliminary competency model that maps characteristics to their respective dimensions.

3.2. EFA and PCA for RQ2

This phase employed exploratory factor analysis (EFA) and principal component analysis (PCA) to validate and optimize the model.
The first step was questionnaire development and data collection. The competency characteristics identified in Phase 1 were operationalized into a questionnaire distributed to K12 teachers. Then, an exploratory factor analysis (EFA) was conducted to verify the alignment of characteristics with their hypothesized core dimensions. Moreover, weight assignment via principal component analysis (PCA) was utilized to calculate the relative weights of each competency feature within the model. This process yielded a revised competency model with weighted characteristics’ contributions.

3.3. SEM for RQ3

The relationship between competency features and teaching performance was examined using structural equation modeling (SEM).
Firstly, teaching performance indicators were defined based on a literature review, followed by the formulation of research hypotheses. Next, a questionnaire targeting performance indicators was administered, and the collected data underwent reliability and validity analyses. Furthermore, SEM-based path analysis was performed to test the hypothesized relationships between the competency characteristics and teaching performance. Finally, the predictive structural model linking competency characteristics to teaching performance was statistically validated.

4. Characteristics of K12 Teachers’ Online Teaching Competence (RQ1)

Competency modeling identifies the critical characteristics that individuals require to excel in specific professional roles (Wiyanah et al., 2021). Addressing RQ1, this study employs the BEI method to systematically examine the distinctive competencies underlying effective online instruction. The BEI approach is particularly suited for this investigation as it: (1) captures actual teaching behaviors rather than self-reported perceptions, (2) has demonstrated effectiveness in educational competency research, as in Wiyanah et al. (2021), who used this approach to construct a process-oriented model of teachers’ English teaching competence for presentation, practice, and production (PPP), and (3) enables differentiation between baseline and exceptional performance. Focusing specifically on K12 online teaching contexts, our analysis aims to establish a comprehensive competency framework that reflects both the unique demands of digital pedagogy and the developmental needs of teachers.

4.1. Interview Outline Design

To better guide respondents in describing the most successful and regrettable events in a specific job and to uncover deeper behavioral details, this study utilizes the STAR tool. The STAR tool designs the interview outline from four aspects: situation, task, action, and result, as shown in Table 1.

4.2. Interview Implementation and Reliability Analysis

The respondents were divided into a high-performing group and an average-performing group. Both groups were required to have over 30 h of online teaching experience. Additionally, the high-performing group was defined by meeting at least one of the following criteria: (1) holding distinguished teaching titles such as “key subject teacher” or “academic discipline leader”; (2) achieving provincial-level or higher awards in significant educational initiatives (e.g., the “One Teacher, One Exemplary Lesson” national campaign); or (3) attaining senior professional titles (Level 1 or advanced certification). The average-performing group consisted of teachers randomly selected from the same schools as high-performing participants, matching the minimum 30 h online teaching experience requirement. The interview process employed a single-blind design, ensuring participants remained unaware of their group assignment throughout the study.
A total of 13 teachers were interviewed in the high-performance group, with an average of 16.8 years of teaching experience and 80.7 h of online teaching; 12 teachers were interviewed in the general-performance group, with an average of 9.04 years of teaching experience and 47.9 h of online teaching.
To improve reliability and validity, three teachers from a primary school in Dongguan City, Guangdong Province, were sampled for a pre-interview before the formal interviews, and statements and questions that respondents reported as difficult to understand were revised. The formal interviews were completed during four teacher trainings organized by the Hubei Audio-Visual Education Center. During each interview, the interviewee was guided to describe three key events of success and regret in online teaching practice, in an appropriate order, based on the interview outline. After the interviews, iFLYTEK Listening software (3.0) was used to transcribe the recordings into text. After screening out one sample that failed automatic speech recognition due to dialect and two samples that deviated from the interview outline, this study ultimately compiled 22 interview transcripts totaling 178,000 words, an average of 8078 words per participant (total word count divided by 22 respondents). This average exceeds the 8000-word threshold for sample stability in BEI data established by Shi et al. (2002).
The study uses classification consistency and the coding reliability coefficient to measure reliability. Category agreement (CA) captures the degree to which multiple coders classify the same interview text consistently, and is calculated as CA = 2S/(T1 + T2) (Smith, 1992), where S is the number of coding classifications on which the two coders agree, and T1 and T2 are the total numbers of codes assigned by the first and second coder, respectively. The coding reliability coefficient (R) is a composite reliability, calculated as R = (n × CA)/[1 + (n − 1) × CA] (Dong, 2019), where n is the number of coders.
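As a minimal illustration of these two formulas (using hypothetical coding-node sets rather than the study’s NVivo output), the Python sketch below computes CA for one transcript coded by two coders and then the composite reliability R:

```python
def category_agreement(codes_a: set, codes_b: set) -> float:
    """CA = 2S / (T1 + T2): S is the number of coding classifications on which
    both coders agree; T1 and T2 are each coder's total number of codes."""
    s = len(codes_a & codes_b)
    t1, t2 = len(codes_a), len(codes_b)
    return 2 * s / (t1 + t2)


def coding_reliability(ca: float, n_coders: int = 2) -> float:
    """Composite coding reliability R = (n * CA) / [1 + (n - 1) * CA]."""
    return (n_coders * ca) / (1 + (n_coders - 1) * ca)


# Hypothetical example: nodes assigned to one transcript by two coders.
coder1 = {"fused knowledge", "learning feedback", "achievement motivation"}
coder2 = {"fused knowledge", "learning feedback", "communication skills"}

ca = category_agreement(coder1, coder2)          # 2*2 / (3+3) = 0.667
print(round(ca, 3), round(coding_reliability(ca), 3))
```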
In this study, the classification consistency and coding reliability results for the two coders across the 22 interview texts are as follows: CA ranged from 0.551 to 0.938, and the coding reliability coefficient ranged from 0.711 to 0.955; the overall classification consistency was 0.655 and the overall coding reliability coefficient was 0.787, indicating good consistency between the two coders and high reliability. The coding results can therefore be analyzed statistically.

4.3. Open Coding

The coding and analysis of interview text is an important process for drawing research conclusions from BEI data. The coding process follows the logical sequence of grounded theory: open coding, axial coding, and selective coding.
Open coding is the process of analyzing, inspecting, conceptualizing, and comparing data. The purpose is to discover and name concept categories from interview text materials. In this study, NVivo software 11.0 was used for open coding, and coding nodes were set up from three aspects.
First, data were extracted from 23 studies closely related to “online teaching competence” selected in the literature analysis and classified according to the three levels of the “iceberg model”: skills and knowledge, attitudes and values, and traits and motivations. After merging similar or duplicate items, an initial set of 32 candidate competency characteristics was formed, which served as reference nodes for open coding. Second, nodes were named by the coders themselves, using labels that reflect the meaning of the views expressed in the interview text. Third, in vivo coding was conducted, extracting words from the respondents’ own utterances as coding nodes.
When coding, characteristic items not present in the initial set were added directly to the coding nodes. After the initial coding, the text was reviewed, the coding was checked, all evidence supporting each categorized code was confirmed, and it was determined whether codes intersected or subsumed one another. Finally, characteristic items such as “individual learning feedback” and “group learning feedback” that did not appear in the initial set were added, while “distance immediate response” and “critical thinking”, which existed in the initial set but were not mentioned in the interviews, were deleted, resulting in a list of 32 open coding nodes, as shown in Table 2.

4.4. Axial Coding and Selective Coding

Axial coding established relationships between concept genera through deduction and induction, connecting primary and secondary genera to form a two-level coding scheme. Selective coding then extracted a “core genus” that succinctly explains all the phenomena captured by the genera and relationships developed in the first two levels of coding. Through open coding analysis, we identified 32 characteristic items that were systematically categorized into six core dimensions: (1) knowledge characteristics, (2) technical characteristics, (3) instructional characteristics, (4) management characteristics, (5) achievement characteristics, and (6) individual traits. Consistent with the iceberg model (Spencer, 1993), the first four dimensions (knowledge, technical, instructional, and management) represent explicitly observable competencies and trainable skills at the “surface level”. The remaining two dimensions (achievement characteristics and individual traits) constitute implicit competencies, reflecting deeper, less observable attributes that underlie superior performance. In this process, pedagogical knowledge and psychological knowledge were merged into pedagogical and psychological knowledge (PPK), individual learning feedback and group learning feedback were merged into learning feedback (LF), and autonomous learning ability was merged into autonomous development consciousness, yielding 29 characteristic items, as shown in Table 3.

4.5. Discriminative Competency Characteristics Analysis

A further differential analysis of the coded data was carried out, both to test the content validity of the coding and to identify which competency characteristics were discriminative. The mean grade score, which is more stable with respect to interview text length (Shi et al., 2002), was therefore selected, and a difference analysis was performed on the 30 coding nodes to evaluate their validity and explore the discriminative competency characteristics (Hair et al., 1995).
To identify the characteristic items that distinguish high performance from ordinary performance, an independent-samples t-test was conducted on the mean grade scores of competency characteristics for teachers in the high-performance and ordinary-performance groups. The mean grade scores were derived from an evaluation process in which, after the interview transcripts had been coded to identify competency items in the previous step, coders re-analyzed the textual data to rate participants’ behavioral performance on each established competency dimension.
The analysis found significant differences between the two groups on 12 competency characteristics (Table 4): fused knowledge (t = 1.975, p = 0.047 *), technology adaptation (t = 3.304, p = 0.003 **), collaborative teaching ability (t = 3.285, p = 0.003 **), data-based learning situation analysis (t = 2.827, p = 0.027 *), online learning evaluation (t = 2.707, p = 0.031 *), learner retention (t = 3.700, p = 0.003 **), distance emotional perception (t = 2.529, p = 0.019 *), learning feedback (t = 3.247, p = 0.004 **), community thinking (t = 3.673, p = 0.000 ***), achievement motivation (t = 4.167, p = 0.000 ***), teaching self-efficacy (t = 4.453, p = 0.000 ***), and communication skills (t = 4.246, p = 0.000 ***). These characteristics distinguish excellent teachers from ordinary teachers and are classified as discriminative competencies, while the others are classified as benchmark competencies.
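To illustrate this group comparison (not the study’s actual data), the following Python sketch runs an independent-samples t-test on hypothetical mean grade scores for a single competency characteristic using scipy:

```python
import numpy as np
from scipy import stats

# Hypothetical mean grade scores on one competency characteristic
# (e.g., "achievement motivation") for the two groups; the real data
# come from the coders' re-rating of the 22 transcripts.
high_perf = np.array([3.8, 4.1, 3.9, 4.3, 4.0, 3.7, 4.2, 3.9, 4.1, 4.0, 3.8, 4.2])
avg_perf = np.array([3.1, 3.4, 3.0, 3.3, 3.2, 2.9, 3.5, 3.1, 3.3, 3.0])

t_stat, p_value = stats.ttest_ind(high_perf, avg_perf)  # two-sided independent t-test
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# Characteristics with p < 0.05 would be classed as discriminative competencies.
```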
The identified discriminative competencies refer to those qualitatively distinct capabilities that statistically differentiate high-performing educators from their peers. In contrast, benchmark competencies encompass threshold qualifications required for normal occupational adequacy. This dichotomy aligns with the iceberg model (Spencer, 1993), where discriminative competencies reside in the submerged “differentiating factors” layer, whereas benchmark competencies constitute the visible “baseline requirements” (Boyatzis, 2008).
Combining the above coding and analysis results, an initial model of online teaching competency for primary and secondary school teachers was developed, comprising 29 competency characteristics in six core categories, as shown in Figure 2. Knowledge, technical, teaching, and management characteristics are explicit characteristics, while achievement characteristics and individual traits are implicit characteristics. The characteristics fall into two categories. One is benchmark competencies, the common competency characteristics of online teaching that constitute the basic requirements for online teaching work, totaling 17 items. The other is discriminative competencies, the characteristics that can distinguish high-performing teachers from ordinary-performing teachers, totaling 12 items.

5. Structural Optimization of Online Teaching Competency Model (RQ2)

Building on our initial competency characterization (RQ1), we now address RQ2: What is the structure of the competency model composed of these characteristics? To establish both empirical validity and theoretical coherence, this phase employs factor analysis, principal component analysis, and model fitting techniques. These methods serve three critical functions: (1) verifying the statistical robustness of individual competency items identified through BEI and mitigating potential researcher bias inherent in qualitative interpretations, (2) optimizing the dimensional aggregation of these features, and (3) clarifying their weight distribution in the competency model.
Delcker et al. (2025) constructed a six-dimensional artificial intelligence competency model for teachers through a survey of 480 vocational school teachers. González-Fernández et al. (2024) conducted a quantitative analysis using a logit model on questionnaire data from more than 100 teachers; the results show that teachers should possess competencies including leadership, research, sharing and reflection, collaboration and communication, and digital and innovation competencies. This study’s data come from a questionnaire survey of 4378 K12 teachers sampled from 23 districts (counties) in China (Table 5). The questionnaire was distributed online through official education administration work groups, and teachers voluntarily completed it to become participants. The survey platform used was Wenjuanxing (https://www.wjx.cn, accessed on 16 October 2024).
The scales for the core variables in the questionnaire came from two sources: mature scales with established reliability and validity in existing research, which were adapted to the research context; and, where no existing scale was available, items developed according to the behavioral manifestations and connotations of the feature items identified in the behavioral event interviews. In the collected data, the internal consistency coefficient (Cronbach’s alpha) of each dimension was greater than 0.7, indicating good reliability.

5.1. Structural Verification

To verify whether it is reasonable to divide K12 teachers’ online teaching competence into the six core categories and 29 characteristic items of the initial model, an exploratory factor analysis was carried out on the data. First, the Bartlett sphericity and KMO tests showed a KMO value of 0.878, greater than 0.6, and a Bartlett test value (approximate chi-square) of 8639.971 (Sig. = 0.000), meeting the criteria of Hair et al. (1995). Second, principal component extraction with varimax rotation was applied; six common factors were extracted, with a cumulative variance explained of 67.27%. Overall, the exploratory factor analysis results were good and consistent with the six feature groups in the initial model.
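A hedged sketch of this verification step, using the Python factor_analyzer package on simulated (not actual) item responses, might look as follows; the item names and sample values are hypothetical stand-ins for the 29 questionnaire items:

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# Hypothetical 5-point responses to the 29 competency items (real data: n = 4378).
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 6, size=(500, 29)).astype(float),
                     columns=[f"item_{i + 1}" for i in range(29)])

# Sampling adequacy and sphericity checks (KMO > 0.6 and a significant Bartlett test).
chi_square, p_value = calculate_bartlett_sphericity(items)
_, kmo_model = calculate_kmo(items)
print(f"Bartlett chi2 = {chi_square:.2f}, p = {p_value:.3f}, KMO = {kmo_model:.3f}")

# Principal-component extraction of six factors with varimax rotation.
fa = FactorAnalyzer(n_factors=6, rotation="varimax", method="principal")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns)
print(loadings.round(2))                     # check for loadings > 0.4 and no cross-loadings
print(fa.get_factor_variance()[2].round(3))  # cumulative variance explained
```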
To further examine the correspondence between the six factors and the items, and to verify whether each second-order competency characteristic item belongs to the six core genera constructed through qualitative analysis, the rotated component matrix was analyzed. The results show no cross-loadings among the competency characteristic factors, and all item factor loadings are greater than 0.4. This confirms that the online teaching competency model of K12 teachers is a multi-dimensional hierarchical structure composed of six dimensions: technical characteristics, knowledge characteristics, teaching characteristics, management characteristics, achievement characteristics, and individual traits. The factor attribution of most items is consistent with the initial model.
However, “learning feedback” is attributed to management characteristics in the initial model, while exploratory factor analysis shows that it should be attributed to teaching characteristics. “Data-based learning analysis” and “online learning evaluation” are also not attributed to teaching characteristics as expected by the initial model, but to management characteristics. “Self-development consciousness” is attributed to individual traits in the initial model, and is adjusted to achievement characteristics here. Therefore, the initial model needs to be adjusted according to the results of exploratory analysis.

5.2. Weight Calculation

Common weight calculation methods include principal component analysis, the analytic hierarchy process (AHP), and the entropy method. To quantitatively refine the initial competency model derived from the qualitative research, principal component analysis (PCA) was selected for weight calculation because it: (1) objectively determines weights based on variance maximization, (2) reduces subjective bias compared with expert-dependent methods (e.g., AHP), and (3) handles multidimensional competency indicators through data-driven dimension reduction.
The calculation proceeds in three steps. First, SPSS 25.0 was used for standardization and principal component analysis; the eigenvalues (characteristic roots) of the six common factors were 15.323, 8.446, 4.967, 3.707, 2.151, and 1.840, respectively. Second, the coefficients of each characteristic item in the linear combination of the principal components were calculated, that is, the weights of each factor. Third, the score of each variable in the composite model was calculated, and the weights were obtained by normalization.
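The same three steps can be sketched in Python with scikit-learn; the data frame X and its item names below are hypothetical stand-ins for the standardized questionnaire responses, not the study’s data:

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical item-level data standing in for the 29 competency items.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(3.5, 0.6, size=(500, 29)),
                 columns=[f"item_{i + 1}" for i in range(29)])

# Step 1: standardize and run PCA with six components.
X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=6).fit(X_std)
eigenvalues = pca.explained_variance_              # the "characteristic roots"

# Step 2: coefficients of each item in each principal component.
coeffs = pca.components_.T / np.sqrt(eigenvalues)  # shape: (items, components)

# Step 3: composite score per item (components weighted by explained variance),
# then normalize so the item weights sum to 1.
var_ratio = pca.explained_variance_ratio_
composite = coeffs @ var_ratio / var_ratio.sum()
weights = composite / composite.sum()
print(pd.Series(weights, index=X.columns).round(3))
```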
Table 6 presents the computed weight distribution across six competency dimensions, with teaching characteristics (w = 0.25) and management characteristics (w = 0.24) emerging as the most critical components, followed by achievement characteristics (w = 0.16). The remaining dimensions show progressively lower weights: technical characteristics (0.14), individual traits (0.12), and knowledge characteristics (0.10). This weighting pattern suggests that classroom-related competencies carry substantially greater importance than other dimensions in the proposed model.

5.3. Model Optimization

Based on the above factor analysis and weight distribution results, the model was revised and improved (as shown in Figure 3). Items above the wavy line (the “water surface”) are explicit competencies, and those below it are implicit competencies; items in italics with a gray background are discriminative competencies, and the others are benchmark competencies.

6. Predictive Relationship Between Online Teaching Competency and Performance (RQ3)

Having established the structural framework of K12 teachers’ online teaching competencies (addressing RQ1 and RQ2), we now turn to RQ3: What is the relationship between the competency characteristics in the model and the prediction of online teaching performance? To test this predictive validity, and simultaneously validate the nomological network of the competency model, we employed structural equation modeling (SEM). The concept of the nomological network, originally proposed in Cronbach and Meehl’s (1955) foundational work “Construct Validity in Psychological Tests”, requires that antecedent or outcome variables be introduced on the basis of the original model to establish its lawful relationships. This study therefore uses structural equation modeling to verify the lawful relationship between the competency model and the outcome variable (performance).

6.1. Research Variables and Hypotheses

In the structural equation model, which explores the relationship between competence and performance, the independent variable is competence, and the dependent variable is performance. Competence, according to the competency model, is expressed by six first-order characteristic factors, including knowledge characteristics, technical characteristics, teaching characteristics, management characteristics, achievement characteristics, and individual traits.
However, performance is usually expressed in one of three ways. First, in terms of results or outputs: for example, Bernardin (1984) argued that performance is the record of an individual’s output on a specific job, task, or activity at a specific time. Second, in terms of process behavior: for example, Campbell (2012) argued that performance is goal-related behavior controlled by the employees themselves, and Obilor (2020) confirmed that high school teachers’ communication skills (one of the competency characteristics) largely affect students’ academic performance (outcomes). Third, in terms of both behavior and results: for example, Binning and Barrett (1989) argued that the best way to describe performance is to reflect the relationship between behavior and results. This view holds that performance is work-related, multi-dimensional, multi-caused, and dynamically changing, and it is the view adopted in this study. Stufflebeam (2000) pointed out in the decision-oriented CIPP model that judging the success of a project requires considering not only outcome factors but also context, input, and process factors; process factors in particular provide decision makers with valuable information for revising project plans. Therefore, in measuring online teaching performance, this study emphasizes not only outcome performance but also the achievement behaviors of teachers and students during teaching and learning, that is, process performance.
Scholars differ on the performance or achievement indicators of online teaching and learning. After studying more than ten typical learning analytics systems, Buniyamin et al. (2015) identified multiple indicators, including academic performance, course participation, learning style, and social performance. In studying learning achievement, Bukralia et al. (2015) used academic ability, economic level, academic goals, technical preparation, course motivation, and participation as predictors. Sun and Fang (2019) decomposed online learning performance into seven dimensions: engagement level, interaction level, positivity level, stage achievement, learning attitude, learning habits, and frustration level. F. X. Guo and Liu (2018), in research on online learning based on Blackboard, used classroom behavior, homework performance, cognitive level, and test scores as indicators of learning effects. In addition, a large number of studies on online teaching effects, such as those on SPOCs, MOOCs, flipped classrooms, and blended classrooms, mostly use variables such as academic performance, test pass rate, learning motivation, learning interest, and student satisfaction to represent learning effects.
Online teaching involves not only students but also teachers, so the measurement of online teaching performance should consider the development of both. Drawing on Binning and Barrett’s (1989) framework, this study therefore adopts a dual-perspective approach involving both teachers and students. Specifically, it evaluates K12 online teaching performance through two dimensions, process performance and outcome performance, while integrating the perspectives of these two key stakeholders. Process performance is measured mainly through learning habits, knowledge transfer ability, learning interest, autonomous learning ability, learning engagement, and similar indicators. Outcome performance is measured mainly through learning achievement, teaching goal attainment, teaching satisfaction, and teacher professional development.
In summary, the following research hypotheses (H) are proposed for exploring the relationship between competence and performance in K12 online teaching.
H1a. 
Knowledge competency characteristics have a significant predictive effect on process performance.
H1b. 
Knowledge competency characteristics have a significant predictive effect on outcome performance.
H2a. 
Technical competency characteristics have a significant predictive effect on process performance.
H2b. 
Technical competency characteristics have a significant predictive effect on outcome performance.
H3a. 
Teaching competency characteristics have a significant predictive effect on process performance.
H3b. 
Teaching competency characteristics have a significant predictive effect on outcome performance.
H4a. 
Management competency characteristics have a significant predictive effect on process performance.
H4b. 
Management competency characteristics have a significant predictive effect on outcome performance.
H5a. 
Achievement competency characteristics have a significant predictive effect on process performance.
H5b. 
Achievement competency characteristics have a significant predictive effect on outcome performance.
H6a. 
Personal competency characteristics have a significant predictive effect on process performance.
H6b. 
Personal competency characteristics have a significant predictive effect on outcome performance.

6.2. Research Tool Design and Data Collection

In this study, the measurement items for process performance were adapted from learning performance and participation in Buniyamin et al. (2015), academic ability in Bukralia et al. (2015), learning habits and learning attitudes in Sun and Fang (2019), cognitive level, knowledge transfer, and learning engagement in F. X. Guo and Liu (2018), and learning interaction, learning interest, and autonomous learning ability in X. Luo (2016). The measurement items for outcome performance were adapted from teaching satisfaction and teaching professional ability in TALIS 2018, together with self-developed items on student achievement and teaching goal attainment. After the measurement indicators for each latent variable were designed, the questionnaire items for each measurement dimension were written in the form of a 5-point Likert scale. The 5-point Likert scale measures attitudes by asking respondents to rate statements on a symmetric agreement scale (1 = strongly disagree, 5 = strongly agree), converting qualitative responses into quantifiable data.
To ensure the validity of the final survey data, a pre-survey of the initial questionnaire was carried out before the formal survey. The pre-survey was administered through our established network of collaborating teachers in primary and secondary schools across multiple regions, including Shenzhen, Dongguan, Yongkang, and Wuhan. These participating teachers, with whom we had previous research collaborations, helped distribute the questionnaires, yielding 162 valid responses. Based on feedback from the pre-survey teachers, the wording of the items was simplified to reduce respondents’ cognitive load. The final questionnaire consisted of 1 screening item, 11 sample background and characteristic questions (single- and multiple-select), 50 competency characteristic items, and 17 online teaching performance items, for a total of 79 items. To assess teachers’ online teaching performance, this study employed teacher self-reports of observable student outcomes. Given that students were the primary participants in online instruction, teachers evaluated their own effectiveness based on perceived student improvements. For instance, teachers were asked to rate statements such as “Online assignment sharing and feedback improved students’ work quality through peer learning”.
Following the principle of convenience sampling, the questionnaires were distributed via Wenjuanxing in the working group of audio-visual education centers in Hubei Province and then forwarded by the staff of local audio-visual education centers to teachers in local primary and secondary schools. A total of 13,865 teacher questionnaires were collected from 41 districts (counties) on the Wenjuanxing platform, most of them completed on mobile phones via WeChat-forwarded links. Questionnaires with a response time of less than 200 s were screened out, and samples from teachers with fewer than 10 class hours of online teaching were deleted, leaving 12,726 valid questionnaires and an effective collection rate of 91.78%.

6.3. Reliability and Validity Analysis

In order to test the reliability of the questionnaire, this study used the Cronbach’s α coefficient to measure the degree of internal consistency for reliability testing. According to the standard of Nunnally (1978), when the α coefficient is greater than 0.7, the questionnaire has good internal consistency. For the six competence variables and two performance variables in the model, the Cronbach’s α coefficient of the measurement items is greater than 0.7, indicating that the measurement reliability is good.
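For reference, Cronbach’s α can be computed directly from an item-response matrix; the sketch below uses hypothetical Likert responses for one dimension, not the study’s data:

```python
import numpy as np


def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix measuring one dimension."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)


# Hypothetical 5-point Likert responses for one dimension (e.g., process performance).
rng = np.random.default_rng(1)
trait = rng.integers(2, 6, size=(300, 1))       # shared "true" level per respondent
responses = np.clip(trait + rng.integers(-1, 2, size=(300, 8)), 1, 5).astype(float)
print(round(cronbach_alpha(responses), 3))      # values above 0.7 indicate good consistency
```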
The core variables of competence were tested by exploratory factor analysis in Section 5.1. The retained factor loads were all greater than 0.5, and the structural validity was good.
Exploratory factor analysis (EFA) was used to test the structural validity of the performance variables measured in the questionnaire, comprising 17 performance items in total. The KMO and Bartlett sphericity tests showed a KMO of 0.856, greater than 0.7; the approximate chi-square value of Bartlett’s test was 11,342.734, with 336 degrees of freedom (df) and a significance (Sig.) of 0.000. This indicates that the performance items share sufficient common variance and are suitable for factor analysis.
Next, factors were extracted using the principal component method with eigenvalues greater than 1 and varimax rotation; loadings were sorted by size, and coefficients below 0.5 were suppressed. After orthogonal rotation, two common factors were extracted, with a cumulative variance of 74.405%, indicating that the two common factors and 17 items explain the performance variable information well (as shown in Table 7). The rotated factor loadings are all greater than 0.6, indicating good structural validity of the performance measurement items.

6.4. Path Analysis and Hypothesis Testing

The surveyed teachers’ online teaching competency was at an upper-middle level across the six dimensions (measured on a 5-point Likert scale ranging from 1 [strongly disagree] to 5 [strongly agree]). Specifically, the mean scores were as follows: knowledge characteristics (M = 3.509), technical characteristics (M = 3.496), teaching characteristics (M = 3.570), management characteristics (M = 3.562), achievement characteristics (M = 3.544), and individual traits (M = 3.574). Notably, all participants were teachers with prior experience in online instruction; their sustained engagement in professional training and practical implementation of online teaching in recent years contributed to their relatively high overall competency. Both process performance (M = 3.623) and outcome performance (M = 3.235) were likewise in the upper-middle range of the scale (1–5 points). Process performance was significantly higher than outcome performance, which suggests that the implicit performance arising during the online teaching process, reflected in learners’ learning habits, interest, and engagement, cannot be ignored.
The structural model analysis yielded nuanced insights into competency-performance relationships. Among the 12 hypothesized paths, 10 demonstrated statistical significance (p < 0.05), confirming the overall model validity (Table 8). However, two non-significant relationships warrant attention: (1) knowledge characteristics → process performance (p = 0.178 > 0.05) and (2) individual traits → outcome performance (p = 0.527 > 0.05). That is, knowledge competency characteristics do not significantly predict online teaching process performance, and the individual traits dimension does not significantly predict online teaching outcome performance. These exceptions suggest that knowledge application in online teaching may depend more on pedagogical integration (technical and instructional competencies) than on content knowledge alone, and that personality factors might mediate, rather than directly determine, outcomes, aligning with Bandura and Wessels’s (1997) social cognitive theory of reciprocal determinism.
The competency dimensions jointly explained 54.8% of the variance in process performance and 45.2% of the variance in outcome performance, which is notably high for behavioral research. Management competency characteristics had the strongest predictive effect on both process performance (β = 0.363) and outcome performance (β = 0.289), underscoring the centrality of organizational skills in virtual classrooms and the need to prioritize structured facilitation over purely technical training in teacher development programs.

6.5. Fitting Test of Structural Models

The fit indices of the structural model of competence and performance, shown in Table 9, indicate that the chi-square to degrees of freedom ratio (χ2/df) is 1.927, which is less than 3; the GFI is 0.913, the AGFI is 0.911, the CFI is 0.917, the IFI is 0.920, and the TLI is 0.912, all greater than the 0.9 fit standard; the SRMR is 0.043, below the 0.05 standard; and the RMSEA is 0.048, below the 0.08 standard. The structural model of the competence-performance relationship therefore fits well, indicating that the hypothesized path relationships are in good agreement with the measured data.
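As a minimal, hypothetical sketch of such an SEM fitting and fit-index check (assuming the Python semopy package; the variable names and simulated data are illustrative only and do not reproduce the study’s measurement model):

```python
import numpy as np
import pandas as pd
import semopy

# Simplified illustration: one competency dimension (management) and one
# performance dimension (process performance), each with three hypothetical
# indicators; the full model includes all six competency dimensions and both
# performance dimensions.
desc = """
management =~ m1 + m2 + m3
process_perf =~ p1 + p2 + p3
process_perf ~ management
"""

# Simulated data standing in for the questionnaire responses.
rng = np.random.default_rng(2)
n = 1000
mgmt = rng.normal(0, 1, n)                  # latent management competency
perf = 0.4 * mgmt + rng.normal(0, 1, n)     # latent process performance
data = pd.DataFrame({
    "m1": mgmt + rng.normal(0, 0.5, n),
    "m2": 0.9 * mgmt + rng.normal(0, 0.5, n),
    "m3": 0.8 * mgmt + rng.normal(0, 0.5, n),
    "p1": perf + rng.normal(0, 0.5, n),
    "p2": 0.9 * perf + rng.normal(0, 0.5, n),
    "p3": 0.8 * perf + rng.normal(0, 0.5, n),
})

model = semopy.Model(desc)
model.fit(data)
print(model.inspect())             # factor loadings, path coefficient, p-values
print(semopy.calc_stats(model).T)  # chi2, DoF, CFI, TLI, GFI, AGFI, RMSEA, etc.
```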

7. Conclusions and Application Recommendations

7.1. Conclusions

This study establishes a comprehensive online teaching competency model for K12 teachers, consisting of six core dimensions: knowledge characteristics, technical characteristics, instructional characteristics, management characteristics, achievement characteristics, and individual traits. These dimensions collectively encompass 29 competency elements, with 12 serving as discriminative characteristics and 17 as baseline characteristics. Notably, the first four dimensions represent explicit competencies, while achievement characteristics and individual traits reflect implicit qualities.
The proposed model demonstrates a robust multidimensional hierarchical structure with satisfactory goodness-of-fit indices. While exploratory factor analysis confirmed the initial conceptualization of knowledge characteristics and technical characteristics, other dimensions required model adjustments based on empirical findings.
The competency-performance structural analysis revealed strong model fit, with 10 of the 12 hypothesized paths showing statistical significance. Two exceptions were the nonsignificant paths from knowledge characteristics to process performance (p = 0.178) and from individual traits to outcome performance (p = 0.527). These results generally support our theoretical framework linking teacher competencies to online teaching performance.
In addition, this study has several limitations that should be acknowledged. First, because participants' online teaching was conducted on different platforms rather than a unified system, platform-specific characteristics may have influenced the results. Second, while we examined the relationship between online teaching competence and performance, the analysis did not account for potential mediating or moderating effects that could provide deeper insights into this relationship.
Looking ahead, the ongoing development of China’s National Smart Education Platform presents valuable opportunities for future research. This standardized platform could facilitate more controlled investigations of online teaching competencies while eliminating platform-related variances. Additionally, future studies could explore mediating mechanisms (e.g., teachers’ technological pedagogical knowledge) and moderating factors (e.g., school support) that may influence the competence-performance relationship.

7.2. Application Recommendations for the Online Teaching Competency Model

7.2.1. Designing Competency-Based Online Teaching Norms for K12 Educators

Current online teaching norms for K12 educators predominantly emphasize technical operations and explicit skills (e.g., platform navigation, digital resource management), as noted by B. X. Guo (2015). However, findings from this study highlight the critical role of implicit competencies—such as adaptive awareness of student needs (achievement characteristics) and collaborative resource-sharing behaviors (individual traits)—in shaping online teaching performance.
To address this gap, we propose developing comprehensive teaching standards grounded in the six-dimensional competency model (knowledge, technical, instructional, management, achievement, and individual traits). These norms should systematically integrate observable behavioral benchmarks across all phases of online instruction, from lesson preparation to post-teaching evaluation.
For explicit competencies, norms could standardize technical protocols (e.g., “Apply interactive whiteboard tools to facilitate real-time student engagement”) and pedagogical practices (e.g., “Design asynchronous discussion boards aligned with lesson objectives”). Meanwhile, implicit competencies require innovative assessment strategies, such as evaluating teachers’ ability to personalize instruction based on learning analytics or their participation in peer-driven resource networks.
To ensure developmental progression, norms should adopt a tiered structure: foundational standards might focus on platform mastery (technical characteristics), intermediate tiers on data-informed instructional adjustments (instructional + achievement characteristics), and advanced tiers on fostering collaborative learning ecosystems (management + individual traits).
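As a rough illustration of how such a tiered structure might be operationalized in an appraisal or e-portfolio system, the sketch below encodes the three tiers as a simple Python data structure. The tier labels, dimension groupings, and benchmark wordings are illustrative assumptions rather than prescribed standards.

```python
# Hypothetical sketch of the tiered norm structure described above; all labels are illustrative.
TIERED_NORMS = {
    "foundational": {
        "dimensions": ["technical"],
        "benchmarks": ["Operates the teaching platform and core interactive tools reliably"],
    },
    "intermediate": {
        "dimensions": ["instructional", "achievement"],
        "benchmarks": ["Adjusts pacing and tasks based on learning analytics from prior sessions"],
    },
    "advanced": {
        "dimensions": ["management", "individual_traits"],
        "benchmarks": ["Builds and sustains collaborative resource-sharing networks with peers"],
    },
}

def tier_of(dimension: str) -> str:
    """Return the tier in which a competency dimension is primarily assessed."""
    return next(tier for tier, spec in TIERED_NORMS.items() if dimension in spec["dimensions"])

print(tier_of("management"))  # -> "advanced"
```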
Iterative refinement of these standards should leverage the competency-performance pathways identified in this study, prioritizing the dimensions that contribute most to the 54.8% of variance explained in process performance.

7.2.2. Competency Model-Driven Online Teacher Training for K12 Educators

Teacher training programs, while critical for professional development, often operate under the flawed assumption that knowledge and skill acquisition directly translate to teaching competence—an approach that overlooks the complex cognitive processes and adaptive problem solving that are inherent to effective pedagogy. To address this limitation, we suggest that the training framework can be aligned with the six-dimensional competence model of online teaching. This paradigm shift prioritizes performance-oriented training, emphasizing how teachers integrate competencies to resolve real-world instructional challenges rather than merely mastering discrete skills.
The framework differentiates training strategies according to competency typology and plasticity. Explicit competency characteristics are comparatively easy to acquire, whereas implicit characteristics develop only over an extended period; baseline characteristics can be improved quickly through training, whereas discriminative characteristics are difficult to change through short-term interventions. Training type and priority should therefore be matched to the attributes and weights of each competency characteristic, for instance, by allocating 40% of training program hours to high-impact management and achievement characteristics. Because teachers already acquire a knowledge base in pre-service education, knowledge characteristics can be developed using existing systematic training resources. By contrast, collaborative teaching ability, data-based learning analysis, and online learning evaluation within the teaching dimension are heavily weighted discriminative characteristics that are best cultivated through selective backbone-teacher training, while information technology skills that can be acquired quickly can be addressed through short-term, school-based training.
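To illustrate how the model's weights could drive the allocation of training time, the sketch below distributes a training program's contact hours in proportion to the first-level dimension weights reported in Table 6. The total of 60 hours and the function name are illustrative assumptions.

```python
# Illustrative sketch: allocating training hours in proportion to the first-level weights in Table 6.
FIRST_LEVEL_WEIGHTS = {
    "teaching": 0.24, "management": 0.24, "achievement": 0.16,
    "technology": 0.14, "individual_traits": 0.12, "knowledge": 0.10,
}

def allocate_hours(total_hours: int) -> dict[str, float]:
    """Split total training hours across dimensions in proportion to their weights."""
    return {dim: round(total_hours * w, 1) for dim, w in FIRST_LEVEL_WEIGHTS.items()}

print(allocate_hours(60))
# Management and achievement together receive 24.0 hours, i.e., 40% of the program,
# matching the allocation example given above.
```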

7.2.3. Competency Model-Informed Evaluation Strategies for K12 Online Teaching

The competency model is the behavioral operationalization of online teaching, encompassing explicit dimensions (knowledge, technical, instructional, management) and implicit qualities (achievement, individual traits). It provides a robust framework for designing multidimensional teacher evaluation systems. Traditional assessment practices often conflate observable technical proficiency with holistic teaching competence, thereby undervaluing critical but latent capacities such as adaptive student engagement (achievement characteristics) or collaborative innovation (individual traits).
For explicit competencies, standardized quantitative rubrics can objectively assess skills such as lesson structure coherence (instructional characteristics) or platform functionality mastery (technical characteristics). Implicit competencies, by contrast, demand a more varied set of methods. For example, structured classroom observations could capture teachers' ability to personalize instruction based on real-time analytics (achievement characteristics). Crucially, the 54.8% of variance explained in process performance argues for embedding longitudinal assessments that track competency development trajectories, such as semester-long case studies analyzing teachers' responsiveness to student engagement data.
In addition, weight allocation should mirror the model's structural hierarchy, for example, by assigning 30% of evaluation scores to management characteristics, given their outsized impact on both process and outcome performance.
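A minimal sketch of such a weighting scheme is given below. The 30% weight for management characteristics follows the example above, while the remaining weights and the sample ratings are illustrative assumptions rather than recommended values.

```python
# Hypothetical sketch of a multidimensional, weighted evaluation score; weights other than
# the 30% for management are illustrative placeholders.
EVAL_WEIGHTS = {
    "management": 0.30, "teaching": 0.25, "achievement": 0.15,
    "technology": 0.12, "knowledge": 0.10, "individual_traits": 0.08,
}
assert abs(sum(EVAL_WEIGHTS.values()) - 1.0) < 1e-9  # weights must sum to 1

def overall_score(ratings: dict[str, float]) -> float:
    """Combine per-dimension ratings (1-5 scale) into a weighted overall score."""
    return round(sum(EVAL_WEIGHTS[d] * ratings[d] for d in EVAL_WEIGHTS), 2)

print(overall_score({
    "management": 4.2, "teaching": 3.9, "achievement": 3.7,
    "technology": 4.0, "knowledge": 3.8, "individual_traits": 3.6,
}))  # -> 3.94
```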

Author Contributions

Conceptualization, J.T. and W.T.; methodology, J.T.; software, W.T.; validation, W.T. and J.T.; formal analysis, W.T.; investigation, J.T.; data curation, W.T.; writing—original draft preparation, J.T.; writing—review and editing, J.T.; visualization, J.T.; supervision, W.T.; project administration, J.T.; funding acquisition, J.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Humanities and Social Sciences Research Project, Ministry of Education of China, grant number 22YJA880048.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Academic Committee Office (ACO) of the School of Information Technology in Education, South China Normal University, Guangzhou, China (Approval Code: SCNU_SITE_20250120; Approval Date: 20 January 2025).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. They are not publicly available for two reasons: first, they contain potentially identifying information (e.g., school, age, teaching subject); second, they form part of an ongoing longitudinal study, and full public release would compromise its research objectives.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ally, M. (2019). Competency profile of the digital and online teacher in future education. International Review of Research in Open and Distance Learning, 20(2), 302–318. [Google Scholar] [CrossRef]
  2. Alvarez, I., Guasch, T., & Espasa, A. (2009). University teacher roles and competencies in online learning environments: A theoretical analysis of teaching and learning practices. European Journal of Teacher Education, 32(3), 321–336. [Google Scholar] [CrossRef]
  3. Araújo, M. M. D., Andreatta-da-Costa, L., & Robaina, J. V. L. (2021, October 26–29). Role of remote laboratories on STEM education and the digital economy: An overview for Brazil in the 2020’s. Ninth International Conference on Technological Ecosystems for Enhancing Multiculturality (TEEM’21), Barcelona, Spain. [Google Scholar]
  4. Australian Institute for Teaching and School Leadership. (2018). Australian professional standards for teachers. AITSL. Available online: https://www.aitsl.edu.au/standards (accessed on 17 October 2024).
  5. Bandura, A., & Wessels, S. (1997). Self-efficacy. Cambridge University Press. [Google Scholar]
  6. Basilotta-Gómez-Pablos, V., Matarranz, M., Casado-Aranda, L. A., & Otto, A. (2022). Teachers’ digital competencies in higher education: A systematic literature review. International Journal of Educational Technology in Higher Education, 19(1), 8. [Google Scholar] [CrossRef]
  7. Bernardin, H. J. (1984). An analysis of black–white differences in job performance. In Academy of management proceedings (Vol. 1984, No. 1, pp. 265–268). Academy of Management. [Google Scholar]
  8. Bilbao Aiastui, E., Arruti Gómez, A., & Carballedo Morillo, R. (2021). A systematic literature review about the level of digital competences defined by DigCompEdu in higher education. Aula Abierta, 50(4), 841–850. [Google Scholar] [CrossRef]
  9. Binning, J. F., & Barrett, G. V. (1989). Validity of personnel decisions: A conceptual analysis of the inferential and evidential bases. Journal of Applied Psychology, 74(3), 478–494. [Google Scholar] [CrossRef]
  10. Boyatzis, R. E. (2008). Competencies as a behavioral approach to emotional intelligence. Journal of Management Development, 27(9), 749–770. [Google Scholar] [CrossRef]
  11. Bukralia, R., Deokar, A. V., & Sarnikar, S. (2015). Using academic analytics to predict dropout risk in e-learning courses. In L. S. Iyer, & D. J. Power (Eds.), Reshaping society through analytics, collaboration, and decision support (pp. 67–93). Springer. [Google Scholar]
  12. Buniyamin, N., bin Mat, U., & Arshad, P. M. (2015, November 17–18). Educational data mining for prediction and classification of engineering students’ achievement. 2015 IEEE 7th International Conference on Engineering Education (ICEED) (pp. 49–53), Kanazawa, Japan. [Google Scholar]
  13. Campbell, J. P. (2012). Behavior, performance, and effectiveness in the twenty-first century. In The Oxford handbook of organizational psychology (Vol. 1, pp. 159–194). Oxford University Press. [Google Scholar]
  14. Chen, X. H. (2020). The call of the era for constructing an online education theory. China Distance Education, 2020(8), 22–26. [Google Scholar]
  15. Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52(4), 281–302. [Google Scholar] [CrossRef]
  16. Dabbagh, N. (2003). Scaffolding: An important teacher competency in online learning. Techtrends, 47(2), 39–44. [Google Scholar] [CrossRef]
  17. Danielson, C., Furman, J. S., & Kappes, L. (2024). Enhancing professional practice: The framework for teaching. ASCD. [Google Scholar]
  18. De la Calle, A. M., Pacheco-Costa, A., Gomez-Ruiz, M. A., & Guzman-Simon, F. (2021). Understanding teacher digital competence in the framework of social sustainability: A systematic review. Sustainability, 13(23), 13283. [Google Scholar] [CrossRef]
  19. Delcker, J., Heil, J., & Ifenthaler, D. (2025). Evidence-based development of an instrument for the assessment of teachers’ self-perceptions of their artificial intelligence competence. Educational Technology Research and Development, 73(1), 115–133. [Google Scholar] [CrossRef]
  20. Dong, Q. (2019). Research methods in psychology and education. Beijing Normal University Press. [Google Scholar]
  21. Feng, Y., Zhong, W., & Ren, Y. (2018). Interpretation and comparative study of the national educational technology standards for teachers in the United States. Modern Educational Technology, 28(11), 7. [Google Scholar]
  22. Ferrández-Berrueco, R., & Sánchez-Tarazaga, L. (2014). Teaching competences in Secondary Education. Analysis of teachers’ profiles. Relieve, 20(1), 1–20. [Google Scholar]
  23. Fu, W. (2020). The management of teacher competence: A study based on online learning during the pandemic. Modern Educational Management, 2020(8), 100–107. [Google Scholar]
  24. González-Fernández, R., Ruiz-Cabezas, A., Domínguez, M. C. M., Subía-Álava, A. B., & Salazar, J. L. D. (2024). Teachers’ teaching and professional competences assessment. Evaluation and Program Planning, 103, 102396. [Google Scholar] [CrossRef]
  25. Goodyear, P., Salmon, G., Spector, M., Steeples, C., & Tickner, S. (2001). Competence for online teaching: A special report. Educational Technological Research and Development, 49(1), 65–72. [Google Scholar] [CrossRef]
  26. Gökdaş, İ., Karacaoğlu, Ö. C., & Özkaya, A. (2024). COVID-19 and teachers’ digital competencies: A comprehensive bibliometric and topic modeling analysis. Humanities and Social Sciences Communications, 11(1), 1740. [Google Scholar] [CrossRef]
  27. Guillén-Gámez, F. D., Mayorga-Fernández, M. J., Bravo-Agapito, J., & Escribano-Ortiz, D. (2021). Analysis of teachers’ pedagogical digital competence: Identification of factors predicting their acquisition. Technology, Knowledge and Learning, 26(3), 481–498. [Google Scholar] [CrossRef]
  28. Guo, B. X. (2015). English classroom teaching standards research: From the perspective of global educational outlook pre-service foreign language teachers’ standards for classroom teaching norms—A case study of Blackboard flipped classroom teaching practice. Perspectives on Global Education, 44(4), 114–122. [Google Scholar]
  29. Guo, F. X., & Liu, Q. (2018). Correlation between online learning behaviors and learning outcomes: A flipped classroom teaching practice based on Blackboard. Higher Education of Sciences, 137(1), 8–13. [Google Scholar]
  30. Hair, J. F., Jr., Anderson, R. E., Tatham, R. L., & Walczak, S. (1995). Multivariate data analysis. Prentice Hall. [Google Scholar]
  31. Junior, F. A., Rodrigues, D. A., Teixeira, J. A., & Richter, L. D. D. (2017). Empirical relationships between support to informal learning, professional competences and human performance in a Brazilian public organisation. International Journal of Learning and Intellectual Capital, 14(1), 90–108. [Google Scholar] [CrossRef]
  32. Kabilan, M. K. (2004). Online professional development: A literature analysis of teacher competency. Journal of Computing in Teacher Education, 21(2), 51–57. [Google Scholar]
  33. Koszalka, T. A., Russ-Eft, D. F., & Reiser, R. (2013). Instructional designer competencies: The standards. IAP. [Google Scholar]
  34. Kumar, R., & Sharma, S. (2021). Teacher readiness on India’s DIKSHA platform. EdTech India, 12(3), 45–67. [Google Scholar]
  35. Luo, X. (2016). The impact of teacher-student behavioral interaction in flipped classrooms on teaching effectiveness in primary and secondary schools [Doctoral dissertation, Shaanxi Normal University]. [Google Scholar]
  36. Luo, X. L. (2010). Exploring the competency model of secondary school teachers. Theory and Practice of Education, 30(12), 50–53. [Google Scholar]
  37. McClelland, D. C. (1973). Testing for competence rather than “intelligence”. American Psychologist, 28(1), 1–14. [Google Scholar] [CrossRef]
  38. Nunnally, J. C. (1978). Psychometric theory (2nd ed.). McGraw Hill. [Google Scholar]
  39. Obilor, E. I. (2020). Teachers’ communication skills and students’ academic performance. European Educational Research Journal, 13(4), 1–16. [Google Scholar]
  40. OECD. (2021). OECD digital education outlook 2021: Pushing the frontiers with artificial intelligence, blockchain and robots. OECD Publishing. [Google Scholar] [CrossRef]
  41. Rahmatullah, M. (2016). The relationship between learning effectiveness, teacher competence and teachers performance madrasah tsanawiyah at Serang, Banten, Indonesia. Higher Education Studies, 6(1), 169–181. [Google Scholar] [CrossRef]
  42. Redecker, C. (2017). European framework for the digital competence of educators: Digcompedu (JRC Research Reports JRC107466). Joint Research Centre. [Google Scholar]
  43. Richey, R. C., Fields, D. C., & Foxon, M. (2001). Instructional design competencies: The standards. ERIC Clearinghouse on Information & Technology, Syracuse University. [Google Scholar]
  44. Shi, K., Wang, C., & Li, C. P. (2002). Research on the evaluation of enterprise managers’ competency model. Psychology and Educational Research Method, 34(3), 306–311. [Google Scholar]
  45. Smith, C. P. (Ed.). (1992). Motivation and personality: Handbook of thematic content analysis. Cambridge University Press. [Google Scholar]
  46. Spencer, S. M. (1993). Competence at work: Models for superior performance. John Wiley. [Google Scholar]
  47. Stufflebeam, D. L. (2000). The CIPP model for evaluation. In Evaluation models: Viewpoints on educational and human services evaluation (pp. 279–317). Springer. [Google Scholar]
  48. Sun, F. Q., & Fang, R. (2019). Based on the analysis of learning analysis of online academic achievement influencing factors research. China Distance Education, 386(3), 48–54. [Google Scholar]
  49. Syarif, S. M., & Angger, W. P. (2017). Relationship between motivation to achieve and professional competence in the performance of elementary school teachers. International Education Studies, 10(7), 118. [Google Scholar]
  50. Tabassum, R., Akhter, N., & Iqbal, Z. (2020). Relationship between social competence and academic performance of university students. Journal of Educational Research, 23(1), 111–130. [Google Scholar]
  51. Teachers College of Columbia University. (2015). The global search for education: Being global—A global competence certificate. Published Monday, March 16. Available online: https://www.tc.columbia.edu/articles/2015/march/tcs-global-competence-certificate-is-the-subject-of-huffing/ (accessed on 25 May 2024).
  52. Thach, E. C., & Murphy, K. L. (1995). Competencies for distance education professionals. Educational Technology Research and Development, 43(1), 57–79. [Google Scholar] [CrossRef]
  53. UNESCO. (2022). Global education monitoring report 2021/2: Non-state actors in education: Who chooses? Who loses? UNESCO. [Google Scholar] [CrossRef]
  54. Wahyuddin, W. (2016). The relationship between teacher competence, emotional intelligence and teacher performance in Madrasah Tsanawiyah at District of Serang Banten. Higher Education Studies, 6(1), 128. [Google Scholar] [CrossRef]
  55. Wang, X., Yuan, L., & Li, S. (2024). Developing and validating an Interdisciplinary Teaching Readiness Scale (ITRS) for pre-service teachers in China. PLoS ONE, 19(12), e0315723. [Google Scholar] [CrossRef]
  56. Wang, X. P. (2020). Positioning reconstruction and organizational implementation of online education and teaching: Practices and reflections on online teaching in Zhejiang province during the epidemic period. Digital Teaching in Primary and Secondary Schools, 29(5), 64–68. [Google Scholar]
  57. Williams, P. E. (2003). Roles and competences for distance education programs in higher institutions. The American Journal of Distance Education, 17(1), 45–57. [Google Scholar] [CrossRef]
  58. Wiyanah, S., Irawan, R., & Kurniawan, J. (2021). Using PPP method in the process of online training and strengthening EFL teachers’ pedagogic competence. Journal of Physics Conference Series, 1823(1), 012010. [Google Scholar] [CrossRef]
  59. Yang, X., & Zhang, Y. (2020). Analysis of online teaching and online training for primary and secondary school teachers under the prevention and control of the epidemic situation. Modern Educational Technology, 30(3), 5–11. [Google Scholar]
  60. Yuan, K. M., Zhou, X. R., & Ye, Q. P. (2021). Research on the competency model of home-school collaboration for primary and secondary school teachers. China Educational Technology, 413(6), 98–104. [Google Scholar]
  61. Zhao, J., Hu, Y., & Li, X. (2022). Online teaching: Perception and practice—A questionnaire survey of 3107 middle school teachers in China. Educational Science Exploration, 40(1), 21–28. [Google Scholar]
  62. Zhou, R., Samad, A., & Perinpasingam, T. (2024). A systematic review of cross-cultural communicative competence in EFL teaching: Insights from China. Humanities and Social Sciences Communications, 11(1), 1750. [Google Scholar] [CrossRef]
Figure 1. Research design.
Figure 2. The initial model of online teaching competence of K12 teachers.
Figure 3. Modified online teaching competency model for K12 teachers.
Table 1. Interview outline according to the STAR tool.
Situation/Task
  • What was the situation of the “success/regret” event of online teaching you want to tell about?
  • What tasks did you face, and what goals did you want to achieve?
  • Who participated in your task?
Action
  • What measures did you take to complete the task?
  • How did you judge, or what were the reasons that made you choose these measures?
  • What difficulties did you encounter in completing the task, and how did you solve them?
Result
  • What was the outcome or effect of the task completed in this event?
  • What do you think were the main factors that influenced this outcome or effect?
Table 2. Open coding nodes from the interview texts.
Competency Characteristics of Online Teaching 1
subject content knowledge (SCK) | learner retention (LR)
pedagogical knowledge (PK) | distance emotional awareness (DEA)
psychological knowledge (PSK) | distance learning monitoring (DLM)
technical knowledge (TK) | individual learning feedback (ILF)
integrated knowledge (IK) | group learning feedback (GLF)
technology selection (TS) | personalized learning concept (PLC)
technology adaptation (TA) | learning-centered concept (LCC)
technology use (TU) | teaching innovation (TI)
data awareness (DA) | community thinking (CT)
information literacy (IL) | achievement motivation (AM)
distance teaching ability (DT) | teaching self-efficacy (TSE)
collaborative teaching ability (CTA) | self-development awareness (SDA)
online interaction design (OID) | self-learning ability (SLA)
data-based learning analysis (DLA) | communication skills (CS)
online learning evaluation (OLE) | professional responsibility (PR)
learning intervention strategies (LIS) | social adaptability (SA)
1 The capital letters after the items are abbreviations. For example, achievement motivation is abbreviated as AM.
Table 3. Six core genera.
Core Genera | Competency Characteristics
Knowledge characteristics | SCK, pedagogical and psychological knowledge (PPK), TK, IK
Technology characteristics | TS, TA, TU, DA, IL
Teaching characteristics | DT, CTA, OID, DLA, OLE, LIS
Management characteristics | LR, DEA, DLM, learning feedback (LF)
Achievement characteristics | PLC, LCC, TI, CT, AM
Individual traits | TSE, SDA, CS, PR, SA
Table 4. Results of paired t-test analysis 1.
Characteristics | High-Performance Group (n = 12) Mean (S.D.) | Ordinary-Performance Group (n = 10) Mean (S.D.) | t | Sig.
SCK | 4.410 (0.831) | 4.176 (0.622) | 0.551 | 0.587
PK | 3.306 (0.650) | 3.248 (0.682) | 1.930 | 0.067
PSK | 3.013 (0.686) | 3.215 (0.603) | 1.763 | 0.117
TK | 3.711 (0.641) | 3.548 (0.612) | 1.929 | 0.067
IK | 3.839 (0.735) | 3.253 (0.618) | 1.975 | 0.047 *
TS | 3.829 (0.848) | 3.368 (0.601) | 1.251 | 0.224
TA | 3.745 (0.776) | 3.376 (0.690) | 3.304 | 0.003 ***
TU | 3.918 (0.848) | 3.280 (0.650) | 2.227 | 0.036
DA | 4.127 (0.722) | 4.066 (0.600) | 0.474 | 0.640
IL | 4.061 (0.702) | 3.348 (0.598) | 0.343 | 0.735
DT | 4.220 (0.846) | 3.533 (0.665) | 0.977 | 0.339
CTA | 3.818 (0.820) | 3.374 (0.622) | 3.285 | 0.003 **
OID | 3.484 (0.864) | 3.270 (0.645) | 1.269 | 0.218
DLA | 3.252 (0.719) | 3.113 (0.635) | 2.827 | 0.027 *
OLE | 4.227 (0.760) | 3.097 (0.629) | 2.707 | 0.031 **
LIS | 3.205 (0.807) | 3.001 (0.630) | 0.008 | 0.994
LR | 3.846 (0.723) | 3.222 (0.649) | 3.700 | 0.003 **
DEA | 3.821 (0.823) | 3.310 (0.660) | 2.529 | 0.019 **
DLM | 3.674 (0.865) | 3.468 (0.690) | 1.246 | 0.226
ILF | 3.418 (0.864) | 3.371 (0.634) | 3.247 | 0.004 **
PLC | 3.765 (0.669) | 3.269 (0.615) | 2.216 | 0.045
LCC | 3.627 (0.626) | 3.241 (0.629) | 1.861 | 0.076
TI | 3.739 (0.797) | 3.209 (0.618) | 1.586 | 0.127
CT | 3.816 (0.873) | 3.461 (0.663) | 3.673 | 0.000 ***
AM | 4.025 (0.832) | 3.650 (0.640) | 4.167 | 0.000 ***
TSE | 4.158 (0.864) | 3.461 (0.642) | 4.453 | 0.000 ***
SDA | 3.736 (0.780) | 3.301 (0.582) | 0.008 | 0.994
CS | 3.528 (0.747) | 3.168 (0.573) | 4.246 | 0.000 **
PR | 3.016 (0.831) | 2.808 (0.622) | 1.579 | 0.129
SA | 4.112 (0.650) | 4.004 (0.682) | 1.529 | 0.117
1 The bolded rows in the original table emphasize items demonstrating statistically significant differences between the two groups. * p < 0.05, ** p < 0.01, *** p < 0.001.
Table 5. Sample distribution.
Sample Distribution | Frequency | Percentage (%)
School Location: Provincial Capital | 1217 | 27.8
School Location: Prefecture-level City | 1192 | 27.2
School Location: County/District | 1387 | 31.7
School Location: Township/Rural Area | 582 | 13.3
Gender: Male | 1603 | 36.3
Gender: Female | 2775 | 63.7
Age: Below 30 | 1109 | 25.3
Age: 31–40 | 993 | 22.7
Age: 41–50 | 1715 | 39.2
Age: Above 50 | 561 | 12.8
Teaching years: Below 5 Years | 1132 | 17.3
Teaching years: 6–10 Years | 525 | 9.8
Teaching years: 11–15 Years | 657 | 4.8
Teaching years: Above 15 Years | 2064 | 68.1
Education: Associate Degree or Below | 1737 | 24.6
Education: Bachelor’s Degree | 2492 | 73.0
Education: Master’s Degree or Above | 149 | 2.5
Teaching Level: Primary School | 2711 | 62.8
Teaching Level: Junior High School | 1179 | 28.2
Teaching Level: Senior High School | 488 | 9.0
Professional Title: Senior Teacher | 746 | 17.0
Professional Title: First-Level Teacher | 1376 | 31.4
Professional Title: Second-Level Teacher | 1032 | 23.6
Professional Title: Third-Level Teacher/No title | 1224 | 28.0
Role in online learning: Online Q&A/Support Teachers | 1357 | 31.0
Role in online learning: Live/Recorded Lecture Teachers | 1954 | 44.6
Role in online learning: Both roles | 1067 | 24.4
Total | 4378 | 100.0
Table 6. Competency factor weight analysis results.
Core Genera | Characteristics | Coefficients in Linear Combinations (Factor Weights), Factors 1–6 | Score in the Model | Secondary Weight | First-Level Weight
Teaching | OID | 0.21, 0.02, 0.02, 0.14, 0.06, −0.12 | 0.11 | 0.05 | 0.24
Teaching | LIS | 0.19, 0.06, 0.01, 0.03, 0.05, 0.09 | 0.11 | 0.05 |
Teaching | DT | 0.19, 0.06, 0.01, 0.03, 0.05, 0.04 | 0.10 | 0.05 |
Teaching | LF | 0.16, 0.06, 0.01, 0.10, −0.02, 0.07 | 0.09 | 0.05 |
Teaching | CTA | 0.15, 0.06, 0.01, 0.11, 0.02, −0.04 | 0.09 | 0.04 |
Knowledge | TK | 0.05, 0.06, 0.05, 0.08, 0.01, 0.06 | 0.05 | 0.03 | 0.10
Knowledge | IK | 0.04, 0.05, 0.05, 0.10, −0.01, 0.07 | 0.05 | 0.02 |
Knowledge | SCK | 0.04, 0.06, 0.09, 0.08, 0.01, −0.01 | 0.05 | 0.03 |
Knowledge | PPK | 0.04, 0.05, 0.02, 0.10, −0.02, 0.07 | 0.04 | 0.02 |
Technology | IL | 0.01, 0.05, 0.35, −0.09, 0.06, 0.11 | 0.07 | 0.03 | 0.14
Technology | TA | 0.01, 0.05, 0.29, −0.07, 0.07, −0.14 | 0.05 | 0.02 |
Technology | DA | 0.01, 0.05, 0.28, −0.07, 0.08, 0.16 | 0.06 | 0.03 |
Technology | TU | 0.00, 0.06, 0.28, 0.07, −0.08, 0.18 | 0.06 | 0.03 |
Technology | TS | 0.00, 0.06, 0.24, 0.06, 0.03, 0.09 | 0.06 | 0.03 |
Achievement | AM | 0.03, 0.06, −0.01, 0.07, −0.09, 0.20 | 0.04 | 0.02 | 0.16
Achievement | LCC | 0.03, −0.06, 0.00, 0.07, 0.02, −0.05 | 0.01 | 0.01 |
Achievement | TI | 0.08, 0.06, 0.01, 0.16, 0.02, −0.01 | 0.07 | 0.03 |
Achievement | PLC | 0.09, 0.07, −0.01, 0.18, −0.02, 0.06 | 0.07 | 0.03 |
Achievement | SDA | 0.08, 0.06, 0.00, 0.16, −0.01, 0.03 | 0.07 | 0.03 |
Achievement | CT | 0.09, 0.06, −0.03, 0.18, −0.05, 0.12 | 0.07 | 0.03 |
Management | LR | 0.08, 0.19, 0.10, 0.01, 0.21, −0.07 | 0.10 | 0.05 | 0.24
Management | DEA | 0.05, 0.27, 0.09, 0.00, 0.13, 0.07 | 0.11 | 0.05 |
Management | OLE | 0.06, 0.27, 0.16, 0.01, 0.16, 0.08 | 0.12 | 0.06 |
Management | DLM | 0.05, 0.26, 0.16, 0.02, 0.13, 0.08 | 0.12 | 0.06 |
Management | DLA | 0.00, 0.24, −0.15, 0.01, 0.01, 0.01 | 0.04 | 0.02 |
Individual | TSE | 0.00, 0.24, −0.10, 0.01, 0.01, 0.01 | 0.05 | 0.02 | 0.12
Individual | PR | 0.05, 0.24, 0.11, 0.01, −0.28, 0.14 | 0.08 | 0.04 |
Individual | CS | 0.01, 0.23, 0.19, −0.01, 0.01, 0.03 | 0.09 | 0.04 |
Individual | SA | 0.01, 0.23, −0.21, 0.01, 0.02, 0.03 | 0.03 | 0.02 |
Table 7. Component matrix after performance measurement item rotation 1,2.
Item | Component 1 | Component 2
PP Item 5: Learning habits | 0.876 |
PP Item 8: Communication and collaboration | 0.871 |
PP Item 4: Autonomous learning ability | 0.860 |
PP Item 7: Liking to learn | 0.859 |
PP Item 6: Enthusiasm | 0.849 |
PP Item 3: Interest in learning | 0.843 |
PP Item 1: Assignment | 0.755 |
PP Item 2: Knowledge transfer | 0.747 |
OP Item 8: Teaching capacity enhancement | | 0.897
OP Item 6: Technology improvement | | 0.863
OP Item 7: Resource accumulation | | 0.833
OP Item 1: Learning achievement | | 0.720
OP Item 2: Knowledge mastery | | 0.719
OP Item 3: Learning needs satisfied | | 0.697
OP Item 4: Learning objectives achieved | | 0.690
OP Item 9: Continuous use intention | | 0.688
RP Item 5: Satisfaction | | 0.610
Eigenvalue | 10.612 | 2.037
Explanation rate of variance (%) | 41.585 | 32.820
Cumulative variance interpretation rate (%) | 41.585 | 74.405
1 Extraction method: principal component analysis. Rotation method: Kaiser-normalized varimax. The rotation converged after three iterations. 2 PP: Process performance; OP: Outcome performance.
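For readers who wish to reproduce an extraction of this kind, the sketch below outlines a principal-component extraction with varimax rotation using the factor_analyzer package. The data frame of the 17 performance items is a hypothetical input; this is not the authors' original script.

```python
# Illustrative sketch (not the authors' script): principal-component extraction with
# varimax rotation for the performance items, assuming the factor_analyzer package.
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("performance_items.csv")  # hypothetical file: 17 item columns

fa = FactorAnalyzer(n_factors=2, method="principal", rotation="varimax")
fa.fit(items)

loadings = pd.DataFrame(fa.loadings_, index=items.columns,
                        columns=["Component 1", "Component 2"])
print(loadings.round(3))            # rotated component matrix, analogous to Table 7
print(fa.get_factor_variance())     # variance explained per component
```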
Table 8. Path analysis results 1.
Hypothesis | Path Relationship | Estimate | S.E. | C.R. | p | Standardized Path Coefficient (β) | Result
H1a | PP <--- Knowledge characteristics | 0.089 | 0.066 | 1.348 | 0.178 | 0.079 | Unsupported
H1b | OP <--- Knowledge characteristics | 0.282 | 0.018 | 15.667 | *** | 0.265 | Supported
H2a | PP <--- Technology characteristics | 0.201 | 0.011 | 18.273 | *** | 0.192 | Supported
H2b | OP <--- Technology characteristics | 0.191 | 0.019 | 10.053 | *** | 0.178 | Supported
H3a | PP <--- Teaching characteristics | 0.086 | 0.011 | 7.818 | *** | 0.072 | Supported
H3b | OP <--- Teaching characteristics | 0.272 | 0.013 | 20.923 | *** | 0.268 | Supported
H4a | PP <--- Management characteristics | 0.264 | 0.018 | 14.667 | *** | 0.289 | Supported
H4b | OP <--- Management characteristics | 0.372 | 0.027 | 13.778 | *** | 0.363 | Supported
H5a | PP <--- Achievement characteristics | 0.161 | 0.012 | 13.417 | *** | 0.146 | Supported
H5b | OP <--- Achievement characteristics | 0.182 | 0.017 | 10.706 | *** | 0.165 | Supported
H6a | PP <--- Individual traits | 0.052 | 0.032 | 1.625 | 0.527 | 0.047 | Unsupported
H6b | OP <--- Individual traits | 0.124 | 0.011 | 11.273 | *** | 0.109 | Supported
1 *** p < 0.001. PP: Process performance; OP: Outcome performance.
Table 9. Relationship between competency and performance, structural model fitting index.
Statistical Test | χ2/df | GFI | AGFI | SRMR | RMSEA | CFI | IFI | TLI
Adaptation standard | <3 | >0.9 | >0.9 | <0.05 | <0.08 | >0.9 | >0.9 | >0.9
Parameters of this model | 1.927 | 0.913 | 0.911 | 0.043 | 0.048 | 0.917 | 0.920 | 0.912
Note: χ2/df, GFI, AGFI, SRMR, and RMSEA are absolute fit indices; CFI, IFI, and TLI are relative fit indices. Other parameters: χ2 = 1882.679; p = 0.000; df = 977.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
