Article

A Structural Model of Distance Education Teachers’ Digital Competencies for Artificial Intelligence

by
Julio Cabero-Almenara
1,
Antonio Palacios-Rodríguez
1,*,
Maria Isabel Loaiza-Aguirre
2 and
Dhamar Rafaela Pugla-Quirola
2
1
Department of Didactics and Educational Organization, University of Seville, C. Pirotecnia, s/n, 41013 Seville, Spain
2
Department of Business Science, Technical University of Loja, San Cayetano Alto, C. París, Loja 110160, Ecuador
*
Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(10), 1271; https://doi.org/10.3390/educsci15101271
Submission received: 26 July 2025 / Revised: 3 September 2025 / Accepted: 19 September 2025 / Published: 23 September 2025

Abstract

Integrating Artificial Intelligence (AI) into education poses new challenges and opportunities, particularly in the training of university professors, where Teaching Digital Competence (TDC) emerges as a key factor to leverage its potential. The aim of this study was to evaluate a structural model designed to measure TDC in relation to the educational use of AI. A quantitative methodology was applied using a validated questionnaire distributed through Google Forms between March and May 2024. The sample consisted of 368 university professors. The model examined relationships among key dimensions, including cognition, capacity, vision, ethics, perceived threats, AI-powered innovation, and job satisfaction. The results indicate that cognition is the strongest predictor of capacity, which in turn significantly influences vision and ethics. AI-powered innovation presented limited explained variance, while perceived threats from AI negatively affected capacity. Additionally, job satisfaction was mainly influenced by external factors beyond the model. The overall model fit confirmed its reliability in explaining the proposed relationships. This study highlights the critical role of cognitive training in AI for teachers and the importance of designing targeted professional development programs to enhance TDC. A generally positive attitude towards AI was identified, and perceptions of threats remained low.

1. Introduction

The speed with which Artificial Intelligence (AI) has been introduced into educational institutions, particularly higher education institutions, has been unprecedented compared to any other technology. Such rapid adoption has generated significant challenges from the outset. Concerns have arisen in training institutions and among teachers, especially regarding academic integrity and ethical issues, as well as biases introduced by AI (Sullivan et al., 2023). From the beginning, scholars have highlighted two recurrent measures to address these challenges: (a) updating evaluation methods and institutional policies in universities and (b) providing targeted training for teachers and students in the use of AI (Lo, 2023).
This concern is reflected in the speed with which higher education institutions have developed good practice guides for AI integration. A broad review of these proposals is available in the recent publication by González (2024). In the Spanish-speaking context, examples include initiatives by the University of Guadalajara (2023) and the University of Burgos (Abella, 2024). These contextual examples suggest that cultural and linguistic factors may shape how AI is perceived and adopted by educators, highlighting potential differences from the predominantly English-language literature.
The accelerated proliferation of AI-related publications exemplifies this global trend, with 98% appearing in English. Six countries emerge as leading contributors with more than one hundred publications each: the United States (25%), China (13%), the United Kingdom (8%), Spain (5%), and both Canada and India (4%) (Mena-Guacas et al., 2024). The predominance of English underscores the structural opportunities and constraints faced by Spanish-speaking educators, particularly in relation to linguistic accessibility, digital literacy, and the availability of institutional support, all of which may significantly influence the adoption of AI within teaching practice.
The increase in publications has led to a corresponding rise in meta-analyses, which highlight several key points: AI can enhance student learning and act as a student assistant, but poor educational practices remain a concern; teachers require training to use AI effectively; interdisciplinary approaches are recommended; and limitations such as bias, misinformation, and privacy issues must be considered (Kim, 2023; Bond et al., 2024; Casanova & Martínez, 2024; Forteza-Martínez & Alonso-López, 2024; García-Peñalvo et al., 2024; López Regalado et al., 2024; Vélez-Rivera et al., 2024).
The educational use of AI thus presents both opportunities and challenges. SWOT studies have been conducted to systematically identify these aspects, consulting experts about the strengths, weaknesses, opportunities, and threats associated with this technology (Farrokhnia et al., 2023; Jiménez et al., 2023). While these studies provide valuable insights, most of the literature has focused on general higher education contexts. In contrast, distance education introduces specific factors—such as reduced face-to-face interaction, differences in technology access, and varied institutional support—that may influence how teachers adopt AI. These contextual nuances suggest that existing models of AI readiness and digital competence may require adaptation to accurately reflect the realities of Spanish-speaking distance education environments.
Table 1 provides a summary of responses from the SWOT analyses.
This study contributes theoretically by explicitly examining how these contextual factors—language, culture, and distance education settings—interact with teachers’ digital competencies in AI. Unlike previous research, such as Wang et al. (2023), which focuses primarily on general higher education populations, this study identifies the unique dynamics of Spanish-speaking distance education, highlighting how cognitive, ethical, and institutional dimensions shape AI adoption. By adapting the structural model to this context, the study provides a more nuanced understanding of AI readiness, offering a framework that can inform targeted teacher training and policy decisions in culturally and linguistically specific educational settings.

2. Teacher Use of AI

Examining AI readiness within the context of distance education is particularly salient, given that this modality depends extensively on digital tools and platforms to facilitate teaching and learning, often with limited direct support from institutions or peers. Educators in distance education therefore encounter distinct challenges when adopting AI, including managing technological constraints, maintaining student engagement remotely, and effectively integrating AI in the absence of face-to-face guidance. Comparative evidence from traditional, in-person settings suggests that while certain trends—such as the beneficial impact of cognitive training on teachers’ capacity and vision—remain consistent, other factors, including perceived risks and the uptake of innovative practices, may differ due to the unique constraints and affordances inherent to distance education (Celik et al., 2022).
This transformation has occurred with every technology that has entered educational institutions, especially disruptive ones such as the textbook and the Internet. Knowing the opinions and beliefs that teachers hold about AI, the possibilities they grant it, and the fears it arouses is key to understanding how it is being, and will be, incorporated into training institutions.
However, before addressing this aspect, research indicates that AI offers various possibilities of use for educational institutions, which the European Commission (2022, p. 14) has grouped into four broad categories:
  • Teaching students: Using AI to teach students (student-oriented).
  • Supporting students: Using AI to support student learning (student-oriented).
  • Teacher support: AI is used to support teachers (teacher-oriented).
  • System support: Using AI to support diagnosis or planning at the system level (system-oriented).
More specifically, when talking about ChatGPT, Jeon and Lee (2023), in the meta-analysis they carried out on its uses, managed to identify four application roles: interlocutor, content provider, teaching assistant, and evaluator. They also identified three roles that teachers play with it: orchestrating different resources with quality pedagogical decisions, turning students into active researchers, and increasing ethical awareness of AI.
As regards teachers, one of the first aspects to point out is that their attitudes towards, and degree of acceptance of, the educational use of AI are mostly quite positive (Adekunde et al., 2022; Cabero-Almenara et al., 2024a; Perezchica-Vega et al., 2024). However, it must also be recognized that they express considerable concern about excessive dependence on it and about its possible ethical, social, and pedagogical implications (Sullivan et al., 2023; Yuk & Lee, 2023; Perezchica-Vega et al., 2024).
These expectations and concerns lead Webb (2024) to point out that teachers can adopt three positions when faced with AI: avoid it, try to leave it behind, or adapt to it. The last option is the most coherent, as it involves understanding that AI is unavoidable and will soon be a necessary tool for students. It is also the most necessary: on the one hand, students are beginning to use AI even more widely than their teachers (Chao-Rebolledo & Rivera-Navarro, 2024); on the other, the uses to which they put it are often far from academic, which underlines the need for teachers to be well trained so that they can empower students in the quality and ethical use of AI (Munar Garau et al., 2024).
As González (2024, p. 100) points out: “…it is far from reality, ignoring the fact that this technology is increasingly present and will form part of the necessary skills that students must acquire in order to successfully face the professional activities they will have to carry out in the future, in addition to the fact that doing so means depriving them of tools that, used appropriately, can contribute positively to their training and academic performance.”
At the same time, different studies (Cabero-Almenara et al., 2024b) have shown that these attitudes towards its use are stronger among teachers with a constructivist conception of teaching than among those with a transmissive conception. This fact is related to the recent call to seek new ways of using AI, which has led to transforming the conception of learning “with” technology instead of the traditional vision of learning “from” technology (Fuertes Alpiste, 2024). This is what Morozov (2024) has called using AI to improve the person and not just to augment them.
On the other hand, when attempts have been made to relate teachers’ age and gender to their views, research has indicated that male teachers hold more favourable attitudes towards its use than female teachers, and that younger teachers tend to see it as more positive and valuable for training than older ones (Dúo et al., 2023; Villegas-José & Delgado-García, 2024).
While teachers’ generally favourable attitudes towards AI are a positive starting point for its incorporation into teaching, a limitation remains: as teachers themselves recognize, they have received little training to incorporate it into teaching, management, and research (Hernando et al., 2022; Tongfei, 2023; Temitayo et al., 2024).
Such training, associated with media, digital, and computer literacy, allows teachers to understand what AI is, how it is used, and how the fundamental rights of people and students are promoted, so that they are clear about the risks and opportunities this technology entails when applied to training (Hernando et al., 2022). Such is the importance of training and capacity building that some authors propose it as a future line of research in the field of AI (Chiu, 2023), and others recommend the development of MOOC courses to facilitate self-training of teachers (Ruiz-Rojas et al., 2023).
In any case, it must be acknowledged that this training is a complex task. On the one hand, the tool is relatively new to teachers and students; on the other, the skills teachers must have concerning this technology have not yet been sufficiently defined, since the potential of AI in education, mainly to provide active and in-depth learning, has not yet been sufficiently exploited; and finally, the volume of apps constantly appearing, which allow different unforeseen actions to be carried out in a short period, poses an additional difficulty.
Finally, this training is all the more necessary because a teacher who is adequately trained to incorporate AI into teaching will also be well placed to train students in its use (Ayuso-del Puerto & Gutiérrez-Esteban, 2022). Moreover, it should not be forgotten that, according to different competency frameworks for teacher training, one of the tasks teachers must perform is empowering students to use a variety of ICTs (Munar Garau et al., 2024).

3. Methodology

The objectives pursued in this research are as follows:
(a)
To test, through structural equation modelling, the viability of a model for analyzing the level of digital teaching competence with AI of teachers in training.
(b)
To study the relationships between the dimensions of the Digital Teaching Competence model and AI for teachers in training.

3.1. Research Structure and Hypothesis

The development of this research was based on the proposal made by Wang et al. (2023), who identified different variables that influenced the job satisfaction teachers derived from the incorporation of AI into teaching, among which were cognition and the ability to manage AI, which in turn impacted the ethical vision they held of this technology. All these dimensions influenced their perceptions regarding the threats posed by AI and the innovation it arouses.
The authors cited the proposal made by Karaca et al. (2021) to explain the dimensions mentioned above. They understood cognition as the cognitive preparation regarding the basic knowledge a person had regarding AI, capacity as the competence to use AI for learning, vision as the critical understanding possessed about AI, and ethics as the mastery of the legal, moral, and ethical norms established for the responsible use of AI.
At the same time, Wang et al. (2023) considered the dimensions of “AI-powered innovation,” “Threats perceived by AI,” and “Job satisfaction.” The first could be understood as the process of developing and applying new ideas, products, services, or methodologies that are significantly improved or transformed through Artificial Intelligence technologies. Moreover, referring specifically to teaching, this innovation would include different aspects such as the personalization of learning, the automation of administrative tasks, the implementation of intelligent tutoring for students, or the creation of content.
“Perceived AI Threats” could be understood as the potential risks and dangers that individuals or societies associate with developing and implementing this technology in different contexts. In the educational field, this would imply considering different aspects, such as data privacy and security, students’ dependence on this technology in their training process, unequal access, or biases it may incorporate.
Finally, “Job satisfaction,” which in our case refers to teaching, indicates the degree of acceptance the teacher shows towards this technology for its incorporation into his or her professional teaching activity.
The model developed in this research is presented in Figure 1.
From the model presented, we derived the following hypotheses to be contrasted:
H1-H2-H3: 
The teacher’s cognition regarding AI directly and significantly influences the skill, vision, and ethics he or she has regarding this technology.
H4-H5-H6-H7: 
The teacher’s ability regarding AI directly and significantly influences their vision regarding AI, their ethics regarding it, the perceived threats of AI, and the perception of innovation produced by this technology.
H8-H9-H10: 
The teacher’s vision regarding AI directly and significantly influences the ethics he or she has regarding this technology, the perceived threats of AI, and the perception of innovation he or she has regarding this technology.
H11-H12: 
Perceived threats from AI directly and significantly influence the perception of AI-powered innovation and job satisfaction when working with it.
H13: 
The perception of AI-enhanced innovation directly and significantly influences the teacher’s job satisfaction with AI.

3.2. Information Gathering Instrument

The instrument used in the research comprised two sections: the first collected biographical data of the teacher, specifically their gender, age, and the branch in which they taught (Arts and Humanities, Sciences, Health Sciences, Social and Legal Sciences, and Engineering and Architecture); the second was made up of 31 items with five response levels ranging from “strongly disagree” to “strongly agree.” The instrument was used by Wang et al. (2023), who obtained a level of reliability according to Cronbach’s alpha between 0.90 and 0.97, depending on the different dimensions analyzed.
The items comprising each dimension were as follows: cognition (5 items), skill (6 items), vision (3 items), ethics (4 items), perceived threats from AI (5 items), AI-enhanced innovation (3 items), and job satisfaction (5 items).
The instrument was applied via the Internet before the training action regarding the application of AI in distance education began.
For the reliability analysis of the instrument, Cronbach’s alpha and McDonald’s omega statistics were applied, both for the different dimensions and the entire instrument. The results achieved are presented in Table 2.
The values achieved allow us to point out that the reliability levels, both for the entire instrument and for the dimensions that comprise it, are adequate (Mateo, 2004; O’Dwyer & Bernauer, 2014).
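The reliability statistics reported here can be computed directly from the raw item scores and standardized factor loadings. As an illustrative sketch only (not the authors' actual analysis code; the function names and the use of NumPy are our own assumptions), Cronbach's alpha and a single-factor McDonald's omega could be implemented as follows:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) matrix of item scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def mcdonald_omega(loadings) -> float:
    """McDonald's omega from standardized loadings of a single-factor scale."""
    lam = np.asarray(loadings, dtype=float)
    error_var = 1 - lam ** 2                         # uniqueness of each item
    return lam.sum() ** 2 / (lam.sum() ** 2 + error_var.sum())
```

Both indices range up to 1, and values above 0.7 are conventionally considered adequate, consistent with the thresholds used in this study.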

3.3. The Research Sample

The sample comprises 368 university professors, of whom 41.3% (152) are men and 58.7% (216) are women. Regarding age, most professors are concentrated in the 41 to 50 age range, representing 41.8% (154), followed by the 31 to 40 age group, which accounts for 31.5% (116) of the participants. Professors over 50 constitute 23.6% (87), while only 3.0% (11) are between 21 and 30 years old. Regarding the branch of knowledge, the area of Social and Legal Sciences is the most represented, with 39.1% (144), followed by Arts and Humanities with 21.7% (80) and Sciences with 18.2% (67). The least represented areas are Engineering and Architecture, which account for 14.4% (53) of the professors, and Health Sciences, with 6.5% (24). Overall, the sample reflects a greater female presence, predominantly between 41 and 50 years old, and a preponderance of professors from Social Sciences and Law.

4. Results

Regarding the average scores and standard deviations achieved in each of the dimensions that made up the instrument, the values obtained are presented in Table 3.
The analysis of the means and standard deviations of the different variables shows a positive trend in teachers’ perceptions of their preparation and use of AI technologies and their job satisfaction. Cognition, with a mean of 3.92 and a standard deviation of 0.715, indicates that teachers understand the role of AI in education. The ability to integrate AI into their teaching has a slightly lower mean, 3.73, with a deviation of 0.764, suggesting that, although they feel capable, there is greater dispersion in this ability. At the same time, it is noteworthy that teachers perceive that AI can have great value in creating educational innovation actions (4.11). The highest score refers to teachers’ job satisfaction when using AI (4.21), a score that is also the most stable among teachers, as it shows the lowest standard deviation of all (0.662). In general, the low standard deviations indicate that the scores offered by the different teachers are very similar.
The vision (VI) on the opportunities and challenges of AI is high, with a mean of 3.90 and a deviation of 0.769, reflecting an optimistic attitude. Regarding ethics (ET), teachers show a solid awareness of the ethical aspects of using AI (mean = 3.86), although with a slightly higher deviation (0.819), suggesting different levels of understanding or agreement on this aspect.
Perceived threats from AI (PT) have a lower mean of 3.38 with a relatively high standard deviation (0.944), reflecting that some teachers perceive potential risks or challenges associated with AI, although opinions vary considerably. On the other hand, AI-enhanced innovation (INN) scores highest, with a mean of 4.11 and a low dispersion (0.718), indicating that teachers view AI as a positive means of introducing innovative approaches in teaching.
Finally, job satisfaction (JS) is very high, with a mean of 4.21 and a deviation of 0.662. This indicates that teachers generally feel satisfied and proud of their work, possibly influenced by incorporating AI technologies that improve their work experience.
The means and standard deviations are presented regarding the values obtained in each of the items in Table 4.
The descriptive analysis of the items shows that teachers perceive a high level of clarity in their role in the AI era, with a mean of 4.27 (CO1) and a standard deviation of 0.810. They also report a good ability to balance the relationship between AI technologies and teaching (3.94, CO2) and to understand how AI technologies work in education (3.74, CO3). The scores for skills to integrate AI into teaching are moderate but positive, with the ability to optimize teaching standing out (mean 3.99, AB5; standard deviation 0.867). Scores related to ethics are also high, with an average of 4.20 both in understanding digital ethics (ET1) and in teachers’ ethical obligations (ET2), although the item on personal information security (3.33, ET3) shows more variability (1.106). Regarding perceived threats, teachers do not perceive AI as alarmingly weakening their role (3.09, PT1), although they express concern about reduced face-to-face communication (3.36, PT2). Scores on innovation are high, highlighting that AI allows teaching to be organized innovatively (4.19, INN3). Finally, regarding job satisfaction, teachers report high satisfaction with their job (4.28, JS3), pride in their work (4.49, JS4), and enjoyment of their work (4.43, JS5).
Ordered from highest to lowest, the five items that received the highest rating from teachers were as follows:
  • 4.49 (JS4) I feel proud of my work.
  • 4.28 (JS3) I am satisfied with my job.
  • 4.27 (CO1) I clearly understand the new role of teachers in the era of AI.
  • 4.20 (ET1) I understand the digital ethics teachers must possess in the era of AI.
  • 4.20 (ET2) I understand the ethical obligations and responsibilities teachers must assume when using AI technologies.
These items denote, on the one hand, teachers’ favourable attitude towards AI and, especially, their satisfaction with their work; on the other hand, they belong mainly to the “Cognition,” “Ethics,” and “Job satisfaction” dimensions.
As regards the five lowest-rated items, the results were as follows:
  • 3.09 (PT1) I feel that AI technologies could weaken the importance of teachers in education.
  • 3.16 (PT5) In my opinion, the excessive use of AI technologies can reduce the need for human teachers in the classroom, making it difficult for teachers to convey correct values to students.
  • 3.33 (ET3) I use AI technologies to keep personal information safe.
  • 3.36 (PT2) The use of AI technologies has reduced the frequency of face-to-face communication with colleagues and students.
  • 3.58 (PT3) Students’ over-reliance on learning guidance provided by AI technologies can undermine the relationship between teachers and students.
As can be seen from the “Perceived threats from AI” dimension, teachers do not feel threatened in their work, nor do they fear a loss of social prestige.
Structural analysis models are becoming increasingly relevant in social research due to their ability to analyze manifest and latent variables. These models facilitate the effective integration of both types of variables (Alaminos et al., 2015).
Two main methodological approaches predominate in structural equation modelling (SEM): those based on covariances and the partial least squares (PLS) approach. The PLS approach was selected in this study because it does not require the assumption of multivariate normality in the data. The PLS analysis was implemented using SmartPLS 4 software, following the standard phases established for this type of study (Lévy, 2006; Sampeiro, 2019).
Regarding the loadings or simple correlations between the indicators and their respective constructs, Table 5 presents the values obtained. For an indicator to be considered part of a construct, its loading must be close to 0.6 (Carmines & Zeller, 1979).
It is observed that all indicators present loadings close to or above 0.7, so no element had to be eliminated during the analysis.
The next step is to assess the composite reliability, an indicator of the internal consistency of the set of indicators associated with the latent variables (Lévy, 2006). This analysis allows us to determine whether the indicators consistently measure the same construct and whether the latent variable is adequately represented. A minimum value of 0.7 is established to consider a good fit. The corresponding results are detailed in Table 6.
Convergent validity is calculated at the same time to determine whether a set of indicators reflects a single underlying construct. To do so, the Average Variance Extracted (AVE) is used. An AVE value greater than 0.5 is desirable, as it indicates that more than 50% of the construct’s variance can be explained by its indicators. The results obtained are presented in Table 7.
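The AVE is simply the mean squared standardized loading of a construct's indicators. As an illustrative sketch under that definition (the function name and use of NumPy are our own, not part of the study), it can be computed as:

```python
import numpy as np

def average_variance_extracted(loadings) -> float:
    """AVE: mean of the squared standardized outer loadings of a construct.
    A value above 0.50 means the construct explains, on average, more than
    half of its indicators' variance."""
    lam = np.asarray(loadings, dtype=float)
    return float((lam ** 2).mean())
```

For example, a construct whose three indicators load 0.70, 0.80, and 0.75 yields an AVE of about 0.56, above the 0.5 threshold discussed here.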
Two main methods are used to assess discriminant validity, which seeks to confirm whether each construct significantly differs from the others: the Fornell–Larcker criterion (Table 8) and cross-factor loadings (Table 9).
The Fornell–Larcker criterion states that a construct’s Average Variance Extracted (AVE) must exceed the variance shared with any other construct in the model. In addition, correlations between constructs must be lower, in absolute value, than the square root of the AVE. This is verified by examining the values on the matrix’s main diagonal, which represent the square root of the AVE. These values must be higher than the elements outside the diagonal, which correspond to the correlations between constructs.
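The diagonal-versus-off-diagonal comparison described above is mechanical and easy to automate. As a minimal sketch of the check (the function name and use of NumPy are our own assumptions, not the authors' code), given each construct's AVE and the inter-construct correlation matrix:

```python
import numpy as np

def fornell_larcker_holds(ave, construct_corr) -> bool:
    """True if the square root of each construct's AVE (the diagonal of the
    Fornell-Larcker matrix) exceeds that construct's absolute correlations
    with every other construct (the off-diagonal elements)."""
    sqrt_ave = np.sqrt(np.asarray(ave, dtype=float))
    corr = np.abs(np.asarray(construct_corr, dtype=float))
    n = len(sqrt_ave)
    for i in range(n):
        for j in range(n):
            if i != j and corr[i, j] >= sqrt_ave[i]:
                return False  # shared variance exceeds extracted variance
    return True
```

The criterion holds for the model when this check passes for every pair of constructs.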
Next, a cross-loading analysis is carried out to verify whether the items assigned to a construct actually measure that construct. The loadings are expected to be greater for their corresponding construct than for any other, guaranteeing the indicators’ specificity in measuring each construct.
The analyses confirm that the questionnaire items present acceptable reliability and high consistency with the dimensions to which they belong in the model.
The structural model is then analyzed by obtaining the standardized regression coefficients (path coefficients), the Student’s t values, and the R2 coefficients of determination. These indicators allow the percentage of variance of the constructs explained by the predictor variables to be assessed, providing key information for assessing the viability and fit of the proposed model (see Figure 2).
The model reveals significant relationships between its variables. Cognition (CO) is a strong predictor of capability (CA), with a high weight (0.724), indicating that cognitive development significantly increases the perception of capability; capability is also negatively influenced by perceived threats from AI (PT) (−0.381) and positively by vision (VI) (0.366), explaining 52.4% of its variance. Capability, in turn, is the primary determinant of vision (0.680), which shows a high explained variance (61.8%) and is further strengthened by cognition (0.480), highlighting the role of cognitive skills in organizational alignment.
Ethics (ET), with an R2 of 56.8%, is strongly associated with capability (0.680), while its relationships with AI-enhanced innovation (INN) (0.018) and perceived threats (PT) (0.153) are limited. AI-enhanced innovation, which explains only 26.8% of its variance, is moderately influenced by ethics (0.328) and slightly by vision (0.172), suggesting that factors external to the model are more relevant for this construct. Perceived threats from AI (PT), with an R2 of 17.1%, are positively linked to ethics (0.153) and very weakly to innovation (0.038). Finally, job satisfaction (JS) shows the lowest explained variance (11.9%), with no significant direct relationships in the model, suggesting that it depends on external variables not considered herein.
Overall, the model highlights the importance of cognition, capability, and vision as central variables, while ethics and innovation play more secondary roles, and it leaves room for incorporating other factors to better understand job satisfaction and innovation.
The results show that the relationships between the main dimensions of the model are significant. Finally, the SRMR (Standardized Root Mean Square Residual) indicator was used to evaluate the structural model’s goodness of fit. This indicator gave a value of 0.062, less than 0.08, indicating a good fit.

5. Discussion and Conclusions

The results obtained in this study provide a solid basis for understanding the relationships between the dimensions of Teaching Digital Competence (TDC) applied to Artificial Intelligence (AI), highlighting both the reliability of the instrument used and the robustness of the proposed structural model. Firstly, the validated questionnaire’s high overall and dimension-wise reliability confirms its ability to measure AI-related digital competencies in higher education consistently. These values not only exceed the recommended standards but also coincide with previous research, such as that of Wang et al. (2023), which consolidates the relevance of the instrument. Other studies using structural equation models in educational contexts (Bagozzi & Yi, 1988; Alaminos et al., 2015; Lévy, 2006) also reinforce the methodological robustness of this approach.
The proposed structural model has proven robust and well-founded: all estimated relationships are consistent with the hypothesized directions, including the expected negative effect of perceived threats on capacity. This reinforces its usefulness for diagnosing and designing specific training strategies. Furthermore, the adequate correspondence between the items and the dimensions supports its methodological rigour, consolidating its applicability in similar contexts. As noted by Celik et al. (2022), the promises and challenges of AI for teachers require solid measurement tools, and this study contributes empirical evidence in that direction.
A notable finding is the importance of the cognition dimension, which is a key pillar in developing other competencies. This result indicates that any AI training program for teachers should prioritize the conceptual strengthening of cognitive capabilities, which is essential to understanding the possibilities of AI and applying them effectively in teaching and research. The significant influence of this dimension on capacity, vision, and ethics underlines its role in aligning individual competencies with organizational and educational values. This aligns with Fuertes Alpiste (2024), who frames generative AI as a tool for cognition, and with Abella (2024), who emphasizes that teacher training in the AI era must begin by reinforcing teachers’ conceptual knowledge before practice.
The results also reflect a predominantly positive attitude towards AI on the part of teachers, evidenced by the high levels of innovation and interest in this technology. Consistent with this, the “Perceived threats” dimension recorded comparatively low scores, indicating that AI is perceived less as a significant risk than as an opportunity that educational institutions should seize to promote its integration into teaching practice. This perception is consistent with findings from Cabero-Almenara et al. (2024a, 2024b), who report that teachers’ acceptance of AI is strongly mediated by their pedagogical beliefs and trust in technology, rather than by fear of risks.
Although AI-powered innovation showed only modest explained variance, its relationships with ethics and vision indicate that these factors are determinants for incorporating AI into teachers’ professional practice. This result connects with the need for more rigorous ethical reflection on the use of AI in higher education, as pointed out by Bond et al. (2024) and the European Commission (2022).
Job satisfaction’s low explained variance could be due to factors external to the model, such as working conditions, institutional recognition, or access to resources. This finding suggests that it is necessary to explore how digital skills and AI can be integrated into broader teacher well-being strategies to maximize their positive impact. As García-Peñalvo et al. (2024) argue, the new educational reality in the era of generative AI requires not only technological adaptation but also systemic policies that ensure equitable access and support for teachers.
The study has several limitations that should be considered. First, the use of a self-report questionnaire may introduce biases related to the interpretation of questions or the honesty of responses (Hernández-Sampieri & Mendoza, 2018). Second, the non-random nature of the sample limits the generalization of the results. We have acknowledged potential skewness in participation, noting that certain groups may have been more likely to respond, particularly those with greater interest or confidence in AI. Despite this, the high participation of teachers in distance education at this university partially mitigates this limitation.
Additionally, the questionnaire was administered before the AI training to capture baseline attitudes, providing a clearer understanding of initial perceptions and competencies, which is important for interpreting subsequent changes or effects. It is also worth noting that the explained variance for some key constructs was lower than expected, suggesting the possibility of missing variables or measurement limitations that future research could address.
Several lines of analysis are proposed for future research. First, it would be important to replicate the study in other universities to assess the consistency of results across different educational and cultural contexts. Second, comparing teachers’ digital skills in face-to-face versus distance education modalities and analyzing potential differences would provide further insight.
Another relevant line is the incorporation of sociodemographic variables, such as gender, age, professional experience, and field of expertise, to identify their influence on the model’s dimensions and enrich the analysis. As AI becomes more widely integrated into education, longitudinal studies are recommended to observe how Teaching Digital Competence evolves in relation to AI.

Author Contributions

Conceptualization, J.C.-A. and A.P.-R.; methodology, J.C.-A.; software, D.R.P.-Q.; validation, J.C.-A., A.P.-R. and M.I.L.-A.; formal analysis, A.P.-R.; investigation, D.R.P.-Q.; resources, M.I.L.-A.; data curation, D.R.P.-Q.; writing—original draft preparation, J.C.-A.; writing—review and editing, A.P.-R.; visualization, M.I.L.-A.; supervision, A.P.-R.; project administration, A.P.-R.; funding acquisition, M.I.L.-A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The institution where this study was conducted does not have an ethics committee. The data used in this paper comes from a questionnaire administered to people who gave their consent (informed consent attached), and all data are processed anonymously.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data are available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Abella, V. (2024). Teaching in the age of Artificial Intelligence: Practical approaches for teachers. University of Burgos.
  2. Adekunde, M., Temityayo, I., Adelana, O., Aruleba, K., & Adelana, O. (2022). Teachers’ readiness and intention to teach Artificial Intelligence in schools. Computers and Education: Artificial Intelligence, 3, 100099.
  3. Alaminos, A., Francés, F., Penalva, C., & Santacreu, O. (2015). Introduction to structural models in social research. Pydlos Ediciones.
  4. Ayuso-del Puerto, D., & Gutiérrez-Esteban, P. (2022). Artificial Intelligence as an educational resource during initial teacher training. RIED-Ibero-American Journal of Distance Education, 25(2), 347–362.
  5. Bagozzi, R. P., & Yi, Y. (1988). On the evaluation of structural equation models. Journal of the Academy of Marketing Science, 16, 74–94.
  6. Bond, M., Khosravi, H., De Laat, M., Bergdahl, N., Negrea, V., Oxley, E., Pham, P., Wang, S., & George, S. (2024). A meta systematic review of Artificial Intelligence in higher education: A call for increased ethics, collaboration, and rigour. International Journal of Educational Technology in Higher Education, 21(4), 4.
  7. Cabero-Almenara, J., Palacios-Rodríguez, A., Loaiza-Aguirre, M. I., & Andrade-Abarca, P. S. (2024a). The impact of pedagogical beliefs on the adoption of generative AI in higher education: Predictive model from UTAUT2. Frontiers in Artificial Intelligence, 7, 1497705.
  8. Cabero-Almenara, J., Palacios-Rodríguez, A., Loaiza-Aguirre, M. I., & Rivas-Manzano, M. (2024b). Acceptance of educational Artificial Intelligence by teachers and its relationship with some variables and pedagogical beliefs. Education Sciences, 14, 740.
  9. Carmines, E. G., & Zeller, R. A. (1979). Reliability and validity assessment. Sage.
  10. Casanova, A., & Martínez, M. (2024). Scientific production on Artificial Intelligence and education: A scientometric analysis. Hachetetepé. Scientific Journal of Education and Communication, 28, 1–23.
  11. Celik, I., Dindar, M., Muukkonen, H., & Järvelä, S. (2022). The promises and challenges of Artificial Intelligence for teachers: A systematic review of research. TechTrends, 66, 616–630.
  12. Chao-Rebolledo, J., & Rivera-Navarro, M. A. (2024). Uses and perceptions of Artificial Intelligence tools in higher education in Mexico. Ibero-American Journal of Education, 95(1), 57–72.
  13. Chiu, T. (2023). The impact of Generative AI (GenAI) on practices, policies and research directions in education: A case of ChatGPT and Midjourney. Interactive Learning Environments, 32(10), 6187–6203.
  14. Dúo, P., Moreno, A. J., Lopez, J., & Marin, J. A. (2023). Artificial Intelligence and Machine Learning as an educational resource from the perspective of teachers in different non-university educational stages. RiiTE Interuniversity Journal of Research in Educational Technology, 15, 58–78.
  15. European Commission. (2022). Ethical guidelines on the use of Artificial Intelligence (AI) and data in education and training for educators. Publications Office of the European Union.
  16. Farrokhnia, M., Banihashem, S. K., Noroozi, O., & Wals, A. (2023). A SWOT analysis of ChatGPT: Implications for educational practice and research. Innovations in Education and Teaching International, 61(3), 460–474.
  17. Forteza-Martínez, A., & Alonso-López, N. (2024). Artificial Intelligence in the social science area: Systematic literature review in Web of Science and Scopus. Tripodos, 55, 116–141.
  18. Fuertes Alpiste, M. (2024). Framing Generative AI applications as tools for cognition in education. Pixel-Bit. Journal of Media and Education, 71, 42–57.
  19. García-Peñalvo, F., Llorens-Largo, F., & Vidal, J. (2024). The new reality of education in the face of advances in generative Artificial Intelligence. RIED-Ibero-American Journal of Distance Education, 27(1), 9–39.
  20. González, G. (2024). 1 AD (after ChatGPT). In Artificial Intelligence in higher education. PUV Universitat de Valencia.
  21. Hernando, A., Municio, A., Vázquez, A., Gardó, E., & Martínez, H. (2022). Algorithms under scrutiny: Why AI in education? Bofill Foundation.
  22. Hernández-Sampieri, R., & Mendoza, C. P. (2018). Research methodology: Quantitative, qualitative and mixed routes. McGraw-Hill.
  23. Jeon, J., & Lee, S. (2023). Large language models in education: A focus on the complementary relationship between human teachers and ChatGPT. Education and Information Technologies.
  24. Jiménez, L., López-Gómez, J., Martín-Baos, J. A., Romero, F., & Serrano-Guerrero, J. (2023). ChatGPT: Reflections on the emergence of generative Artificial Intelligence in university teaching. Proceedings of the Jenui, 8, 113–120.
  25. Karaca, O., Çalışkan, S. A., & Demir, K. (2021). Medical Artificial Intelligence readiness scale for medical students (MAIRS-MS)—Development, validity and reliability study. BMC Medical Education, 21(1), 112.
  26. Kim, S. (2023). Trends in research on ChatGPT and adoption-related issues discussed in articles: A narrative review. Science Editing, 11(1), 3–11.
  27. Lévy, J. P. (2006). Modelling with covariance structures in social sciences: Essential and advanced topics and special contributions. Netbiblo.
  28. Lo, C. (2023). What is the impact of ChatGPT on education? A rapid review of the literature. Education Sciences, 13(4), 410.
  29. López Regalado, O., Núñez-Rojas, N., López Gil, O. R., & Sánchez-Rodríguez, J. (2024). Analysis of the use of Artificial Intelligence in university education: A systematic review. Pixel-Bit. Journal of Media and Education, 70, 97–122.
  30. Mateo, J. (2004). Ex post-facto research. In F. Bisquerra (Ed.), Research methodology (pp. 195–230). La Muralla.
  31. Mena-Guacas, A., Vázquez-Cano, E., Fernández-Márquez, E., & López-Meneses, E. (2024). Artificial Intelligence and its scientific production in the field of education. University Training, 17(1), 155–164.
  32. Morozov, E. (2024). Another Artificial Intelligence is possible. Le Monde Diplomatique en Español, 346, 25–28.
  33. Munar Garau, J., Oceja, J., & Salinas Ibáñez, J. (2024). Equivalences between the indicators of the SELFIE tool and the DigCompEdu framework using the Delphi technique. Pixel-Bit. Journal of Media and Education, 1(69), 131–168.
  34. O’Dwyer, L., & Bernauer, J. (2014). Quantitative research for the qualitative researcher. Sage.
  35. Perezchica-Vega, J. E., Sepúlveda-Rodríguez, J. A., & Román-Méndez, A. D. (2024). Generative Artificial Intelligence in higher education: Uses and teachers’ opinions. European Public & Social Innovation Review, 9, 1–20.
  36. Ruiz-Rojas, L., Acosta-Vargas, P., De-Moreta-Llovet, J., & González-Rodríguez, M. (2023). Empowering education with generative Artificial Intelligence tools: Approach with an instructional design matrix. Sustainability, 15, 11524.
  37. Sampeiro, V. (2019). Structural equations in educational models: Characteristics and phases in their construction. Apertura, 11(1), 90–103.
  38. Sullivan, M., Kelly, A., & McLaughlan, P. (2023). ChatGPT in higher education: Considerations for academic integrity and student learning. Journal of Applied Learning and Teaching, 6(1), 31–40.
  39. Temitayo, I., Adekunle, M., & Tolorunleke, A. (2024). Investigating pre-service teachers’ Artificial Intelligence perception from the perspective of planned behavior theory. Computers and Education: Artificial Intelligence, 6, 100202.
  40. Tongfei, F. (2023). Practice and exploration of conducting Artificial Intelligence teacher training in universities under the background of industry education integration. Adult and Higher Education, 5, 113–117.
  41. University of Guadalajara. (2023). Guidelines and definitions on the use of generative Artificial Intelligence in academic processes. Practical guide; Virtual University System; University of Guadalajara.
  42. Vélez-Rivera, R., Muñoz-Álvarez, D., Leal-Orellana, P., & Ruiz-Garrido, A. (2024). Use of Artificial Intelligence in higher education and its ethical implications. Systematic mapping of literature. Hachetetepé. Scientific Journal of Education and Communication, 28, 1–17.
  43. Villegas-José, V., & Delgado-García, M. (2024). Artificial Intelligence: Innovative educational revolution in Higher Education. Pixel-Bit. Journal of Media and Education, 71, 159–177.
  44. Wang, X., Li, L., Tan, S., Yang, L., & Lei, J. (2023). Preparing for AI-enhanced education: Conceptualizing and empirically examining teachers’ AI readiness. Computers in Human Behavior, 146, 107798.
  45. Webb, M. (2024). A generative AI first. Version 1.3. Available online: https://nationalcentreforai.jiscinvolve.org/wp/2024/01/02/generative-ai-primer/#3-1 (accessed on 23 February 2025).
  46. Yuk, C., & Lee, K. (2023). The AI generation gap: Are Gen Z students more interested in adopting generative AI such as ChatGPT in teaching and learning than their Gen X and millennial generation teachers? Smart Learning Environments, 10, 60.
Figure 1. Research model and formulated hypotheses.
Figure 2. CDD IA model.
Table 1. Weaknesses, strengths, threats, and opportunities of AI (Farrokhnia et al., 2023; Jiménez et al., 2023).

Weaknesses:
- Authorship and ethics of the work carried out.
- Hallucinations (invention of unknown content).
- Cultural and social biases.
- Limited mathematical reasoning.
- Lack of deep understanding.

Strengths:
- Accessibility and availability (although 24/7 is not guaranteed).
- A free version is available.
- Use of natural language.
- Ability for self-improvement.
- Provides personalized responses.
- Versatility and efficiency of response regardless of the subject matter.
- Technological and economic driver.
- Provides real-time responses.

Threats:
- Tool dependency.
- Decline in higher-order cognitive skills.
- Misinformation and loss of critical sense.
- Training for understanding the response.
- Originality of the work.
- Resolution of student exercises.
- Academic integrity.

Opportunities:
- Opportunity to contrast and verify information.
- Improves reading comprehension, synthesis skills, and creativity.
- Helps to understand complex problems.
- Catalyst of learning.
- Access to more information.
- Reduces the workload in teaching.
- Facilitates the personalization of learning.
Table 2. Instrument reliability indices.

Dimension                           Alpha   Omega
Cognition (CO)                      0.950   0.921
Skill (AB)                          0.967   0.912
Vision (VI)                         0.912   0.910
Ethics (ET)                         0.935   0.915
Perceived threats from AI (PT)      0.924   0.910
AI-powered innovation (AI)          0.934   0.906
Job Satisfaction (JS)               0.936   0.908
Cognition (CO)                      0.927   0.912
Total                               0.935   0.916
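The alpha values in Table 2 follow Cronbach’s classic formula, which compares the sum of the item variances with the variance of the total score. A minimal sketch, using simulated item scores (not the study’s data; the sample size of 368 is borrowed only for realism):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - sum_item_var / total_var)

# Simulated scores: a shared trait plus item-specific noise (illustrative only)
rng = np.random.default_rng(0)
trait = rng.normal(size=(368, 1))
scores = trait + 0.5 * rng.normal(size=(368, 5))
print(round(cronbach_alpha(scores), 2))  # high alpha, in the range of Table 2
```

Highly inter-correlated items drive the total-score variance well above the sum of item variances, which is what pushes alpha towards 1, as observed for every dimension of the instrument.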
Table 3. Means and standard deviations of the different dimensions of the instrument.

Dimension                           Mean    SD
Cognition (CO)                      3.92    0.715
Skill (AB)                          3.73    0.764
Vision (VI)                         3.90    0.769
Ethics (ET)                         3.86    0.819
Perceived threats from AI (PT)      3.38    0.944
AI-powered innovation (AI)          4.11    0.718
Job Satisfaction (JS)               4.21    0.662
Total                               3.88    0.769
Table 4. Means and standard deviations in each of the items.

Item                                                                        Mean   SD
(CO1) I clearly understand the new role of the teacher in the era of AI.    4.27   0.810
(CO2) I can effectively balance the relationship between teachers and AI technologies.    3.94   0.846
(CO3) I understand how AI technologies are trained and work in education.    3.74   0.890
(CO4) I can distinguish the functions and characteristics of different AI tools and applications.    3.56   0.899
(CO5) I understand the importance of using AI technologies for data collection, analysis, evaluation, and security in education in the AI era.    4.08   0.875
(AB1) I can effectively integrate AI technologies into my classroom routines.    3.66   0.925
(AB2) I can design different teaching approaches based on different functions of AI technologies.    3.50   0.942
(AB3) I can rationally use AI technologies to solve problems discovered during the teaching process.    3.58   0.901
(AB4) Based on the real-time and visual feedback provided by AI technologies, I can improve my teaching in the next step.    3.89   0.888
(AB5) I can optimize and reorganize the teaching process with the help of AI technologies.    3.99   0.867
(AB6) I can discuss, share, and collaborate effectively with other teachers on the use of AI technologies to jointly design high-quality teaching solutions.    3.76   0.903
(VI1) I understand the strengths and limitations of AI technologies.    3.97   0.863
(VI2) I have my own unique thoughts and views on how to improve and utilize AI technologies for education.    3.84   0.923
(VI3) I foresee the opportunities and challenges that AI technologies bring to education.    3.90   0.849
(ET1) I understand the digital ethics that teachers must possess in the era of AI.    4.20   0.895
(ET2) I understand the ethical obligations and responsibilities that teachers must assume in the process of using AI technologies.    4.20   0.930
(ET3) I know how to keep personal information safe when using AI technologies.    3.33   1.106
(ET4) I use teacher and student data generated by AI systems following legal and ethical standards.    3.72   1.137
(PT1) I feel that AI technologies could weaken the importance of teachers in education.    3.09   1.277
(PT2) I feel that the use of AI technologies has reduced the frequency of face-to-face communication with colleagues and students.    3.36   1.184
(PT3) Students’ over-reliance on learning guidance provided by AI technologies can undermine the relationship between teachers and students.    3.58   1.062
(PT4) I believe that the frequent use of AI technologies to assist in teaching and learning can create inertia, which can reduce the thinking and decision-making capacity of teachers and students.    3.68   1.022
(PT5) In my opinion, the excessive use of AI technologies can reduce the need for human teachers in the classroom, making it difficult for teachers to convey correct values to students.    3.16   1.275
(INN1) AI technologies allow me to perform tasks that were previously difficult to do without them.    3.98   0.916
(INN2) AI technologies allow me to experiment with innovative pedagogy.    4.15   0.808
(INN3) AI technologies allow me to organize teaching in innovative ways.    4.19   0.811
(JS1) In most respects, my work is close to my ideal.    3.91   0.802
(JS2) The current condition of my work is excellent.    3.94   0.867
(JS3) I am satisfied with my job.    4.28   0.803
(JS4) I feel proud of my work.    4.49   0.712
(JS5) My job is enjoyable.    4.43   0.742
Table 5. Simple loadings or correlations of the indicators with their respective constructs.

Capacity (AB): AB1 = 0.776; AB2 = 0.847; AB3 = 0.894; AB4 = 0.833; AB5 = 0.812; AB6 = 0.826
Cognition (CO): CO1 = 0.836; CO2 = 0.830; CO3 = 0.829; CO4 = 0.844; CO5 = 0.703
Ethics (ET): ET1 = 0.882; ET2 = 0.849; ET3 = 0.694; ET4 = 0.753
AI-powered innovation (INN): INN1 = 0.781; INN2 = 0.929; INN3 = 0.912
Job Satisfaction (JS): JS1 = 0.827; JS2 = 0.817; JS3 = 0.841; JS4 = 0.827; JS5 = 0.754
Perceived threats from AI (PT): PT1 = 0.769; PT2 = 0.786; PT3 = 0.867; PT4 = 0.826; PT5 = 0.799
Vision (VI): VI1 = 0.914; VI2 = 0.818; VI3 = 0.874
Table 6. Composite reliability.

Dimension                           Composite Reliability
Capacity (AB)                       0.913
Cognition (CO)                      0.872
Ethics (ET)                         0.833
AI-powered innovation (AI)          0.861
Job Satisfaction (JS)               0.963
Perceived threats from AI (PT)      0.885
Vision (VI)                         0.842
Table 7. Average Variance Extracted (AVE).

Dimension                           AVE
Capacity (AB)                       0.693
Cognition (CO)                      0.656
Ethics (ET)                         0.637
AI-powered innovation (AI)          0.768
Job Satisfaction (JS)               0.662
Perceived threats from AI (PT)      0.656
Vision (VI)                         0.756
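Both composite reliability (Table 6) and AVE (Table 7) can be derived from the standardized loadings. The sketch below uses the Capacity (AB) loadings reported in Table 5; the AVE reproduces the reported value, while the composite reliability formula shown (Jöreskog’s rho) is one common estimator and may differ slightly from the value in Table 6:

```python
import numpy as np

# Standardized loadings of the AB indicators (Table 5)
loadings = np.array([0.776, 0.847, 0.894, 0.833, 0.812, 0.826])

# Average Variance Extracted: mean of the squared loadings
ave = (loadings ** 2).mean()

# Composite reliability: (sum(lambda))^2 / ((sum(lambda))^2 + sum(1 - lambda^2))
num = loadings.sum() ** 2
cr = num / (num + (1 - loadings ** 2).sum())

print(round(ave, 3))  # ~0.692, in line with the 0.693 reported in Table 7
print(round(cr, 3))
```

Both indices exceed the usual thresholds (AVE > 0.5, composite reliability > 0.7), consistent with the convergent validity the tables report.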
Table 8. Fornell–Larcker criterion (diagonal values are the square roots of the AVE).

                                    AB      CO      ET      INN     JS      PT      VI
Capacity (AB)                       0.832
Cognition (CO)                      0.724   0.810
Ethics (ET)                         0.564   0.596   0.798
AI-powered innovation (INN)         0.461   0.339   0.381   0.877
Job Satisfaction (JS)               0.341   0.306   0.434   0.342   0.814
Perceived threats from AI (PT)      −0.004  0.071   0.316   0.200   0.105   0.810
Vision (VI)                         0.713   0.745   0.752   0.458   0.353   0.244   0.869
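The Fornell–Larcker criterion is satisfied when each construct’s square root of the AVE (the diagonal of Table 8) exceeds its correlations with every other construct. A quick programmatic check of the reported values:

```python
import numpy as np

# Lower-triangular Fornell-Larcker matrix from Table 8 (diagonal = sqrt(AVE))
labels = ["AB", "CO", "ET", "INN", "JS", "PT", "VI"]
M = np.array([
    [0.832,  0,     0,     0,     0,     0,     0    ],
    [0.724,  0.810, 0,     0,     0,     0,     0    ],
    [0.564,  0.596, 0.798, 0,     0,     0,     0    ],
    [0.461,  0.339, 0.381, 0.877, 0,     0,     0    ],
    [0.341,  0.306, 0.434, 0.342, 0.814, 0,     0    ],
    [-0.004, 0.071, 0.316, 0.200, 0.105, 0.810, 0    ],
    [0.713,  0.745, 0.752, 0.458, 0.353, 0.244, 0.869],
])
C = M + M.T - np.diag(np.diag(M))  # symmetrize the triangular matrix

for i, name in enumerate(labels):
    max_corr = np.abs(np.delete(C[i], i)).max()
    ok = C[i, i] > max_corr
    print(f"{name}: sqrt(AVE)={C[i, i]:.3f}, max |corr|={max_corr:.3f}, ok={ok}")
```

Running the check, every diagonal entry exceeds the largest correlation in its row and column, confirming the discriminant validity of the seven constructs.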
Table 9. Cross-loading matrix.

        AB      CO      ET      INN     JS      PT      VI
AB1     0.776   0.590   0.417   0.384   0.359   −0.088  0.517
AB2     0.847   0.642   0.446   0.416   0.335   −0.052  0.579
AB3     0.894   0.718   0.488   0.337   0.286   −0.038  0.660
AB4     0.833   0.602   0.517   0.362   0.287   0.046   0.624
AB5     0.812   0.493   0.476   0.474   0.241   0.089   0.596
AB6     0.826   0.558   0.469   0.340   0.194   0.022   0.577
CO1     0.557   0.836   0.511   0.295   0.290   0.099   0.600
CO2     0.650   0.830   0.514   0.361   0.308   0.127   0.647
CO3     0.568   0.829   0.474   0.257   0.260   0.067   0.563
CO4     0.631   0.844   0.476   0.185   0.225   0.003   0.639
CO5     0.513   0.703   0.432   0.272   0.144   −0.017  0.560
ET1     0.473   0.497   0.882   0.355   0.304   0.288   0.737
ET2     0.519   0.532   0.849   0.426   0.337   0.129   0.646
ET3     0.415   0.507   0.694   0.170   0.374   0.311   0.514
ET4     0.381   0.358   0.753   0.226   0.405   0.309   0.456
INN1    0.304   0.193   0.335   0.781   0.240   0.318   0.354
INN2    0.483   0.401   0.354   0.929   0.316   0.109   0.443
INN3    0.412   0.279   0.317   0.912   0.338   0.124   0.401
JS1     0.305   0.265   0.386   0.403   0.827   0.123   0.315
JS2     0.235   0.238   0.460   0.174   0.817   0.165   0.360
JS3     0.266   0.252   0.389   0.182   0.841   0.095   0.322
JS4     0.309   0.252   0.257   0.309   0.827   −0.021  0.236
JS5     0.218   0.218   0.292   0.143   0.754   0.088   0.191
PT1     −0.095  −0.035  0.212   −0.023  0.059   0.769   0.162
PT2     0.028   0.027   0.242   0.089   0.141   0.786   0.106
PT3     0.089   0.134   0.302   0.264   0.136   0.867   0.289
PT4     0.041   0.143   0.267   0.282   0.076   0.826   0.240
PT5     −0.102  −0.023  0.243   0.120   0.020   0.799   0.149
VI1     0.659   0.656   0.654   0.385   0.331   0.159   0.914
VI2     0.619   0.620   0.560   0.421   0.174   0.152   0.818
VI3     0.585   0.665   0.736   0.391   0.401   0.316   0.874