Article

Artificial Intelligence Adoption Amongst Digitally Proficient Trainee Teachers: A Structural Equation Modelling Approach

by
María Belén Morales-Cevallos
1,
Santiago Alonso-García
2,
Alejandro Martínez-Menéndez
2 and
Juan José Victoria-Maldonado
2,*
1
Faculty of Marketing and Communication, Universidad Ecotec, Via Principal Campus Ecotec Km 13.5, Samborondón 092302, Ecuador
2
Department of Didactics and School Organization, Universidad de Granada, 18011 Granada, Spain
*
Author to whom correspondence should be addressed.
Soc. Sci. 2025, 14(6), 355; https://doi.org/10.3390/socsci14060355
Submission received: 1 April 2025 / Revised: 29 April 2025 / Accepted: 15 May 2025 / Published: 3 June 2025
(This article belongs to the Special Issue Educational Technology for a Multimodal Society)

Abstract

The present study examines how pre-service teachers’ digital competence influences their acceptance and use of Artificial Intelligence (AI) in educational settings. Employing a quantitative approach via Structural Equation Modelling (SEM), the authors analyzed self-reported data from Early Childhood and Elementary Education students in Andalusian (Spain) universities. The findings indicate that professional engagement is associated with a critical assessment of AI, focusing on pedagogical and ethical considerations, whereas digital content creation skills promote a more positive and proactive attitude toward AI adoption. These results underscore the importance of teacher education programs that combine technical skills with critical thinking to foster responsible AI integration. This study acknowledges limitations, including its regional scope and cross-sectional design, and recommends future longitudinal and comparative research to validate and expand these insights. By addressing these gaps, future studies could enhance our understanding of AI adoption in diverse educational contexts.

1. Introduction

Given their potential to transform the teaching and learning process, the usage of Information and Communication Technologies (ICTs) in education has been actively promoted in recent years. With the advent of the internet and increasing worldwide connectivity, numerous studies have highlighted the unique usability of the internet and of the networks it sustains, both as a means of easily accessing multidisciplinary and multilingual information (Adell 1996; Piette 2000) and for the creative and knowledge-sharing possibilities entailed by participation in online interactive spaces (Batanero 1998), amply sustained by a profound archive of educational resources (Aznar-Díaz et al. 2025).
The integration of ICTs in educational settings and practice, nonetheless, extends beyond a mere digital transition, as the overall scheme of both organizational and professional culture within the field may need to be assessed and newly conceptualized from both a scholarly and professional perspective with the rise of a globalized world, sustained and interconnected through constant technological advancements (Núñez et al. 2022). Key aspects such as teacher training, the development of digital competencies among students, and the adaptation of curriculums are essential to ensuring that ICT integration not only enhances the educational process but also equips citizens with the skills needed to navigate such a rapidly evolving society.
Among the most influential digital tools of recent years, Artificial Intelligence (AI) has gained unparalleled significance as one of the most relevant milestones of technological advancement in history (Tong and Zhang 2023), achieving remarkable global adoption, particularly since 2022 with the public launch of ChatGPT. This technology attracted one million users within its first five days of operation and surpassed 100 million within its first few months of public use.
Given the magnitude of this phenomenon, various professional and academic sectors have focused their efforts on analyzing and implementing Artificial Intelligence into their operational frameworks. The significance of AI in education, in particular, is readily apparent: Bozkurt (2023) examines over 500 academic articles by more than 1600 researchers who have investigated and made proposals regarding the potential of this tool as a transformational resource for teaching and learning across a variety of educational contexts and stages.

2. Literature Review

The impact of AI in education is profound and multifaceted. Its primary applications include personalized learning, the development of intelligent tutoring systems, the automation of administrative tasks, and the optimization of educational data analysis. These new approaches to educational diversity, given their adaptability and personalization capabilities, have the potential to significantly enhance the efficiency, accessibility, and adaptability of any given educational setting. However, they also introduce critical ethical and methodological challenges, such as ensuring equitable access to technology, protecting data privacy, and redefining the role of educators in an increasingly automated learning environment. Addressing these challenges is essential to harnessing AI’s benefits while maintaining an inclusive, ethical, and human-centered approach to education.
In this context, it is essential for educational institutions to adopt a strategic approach to the integration of AI, ensuring that its implementation enhances critical thinking, creativity, and student autonomy. Moreover, specialized teacher training is crucial in order to equip educators with the necessary skills to effectively utilize AI-based tools, fostering a responsible and ethical use of technology that prioritizes educational quality and student development. By proactively addressing these aspects, training centers can maximize AI’s potential while maintaining a balanced, student-centered learning environment.
However, the results obtained thus far regarding the implementation of Artificial Intelligence in the educational system are conflicting. Bozkurt (2024) highlights that one of the primary challenges behind this heterogeneity in educational research output is the lack of specific knowledge among educators about the optimization of AI use in educational contexts, especially when it comes to designing precise, parsimonious, and highly refined prompts (Lin 2022). Without adequate knowledge and skills regarding the proper utilization of these tools, educators may struggle to leverage AI effectively, potentially reinforcing inefficiencies rather than enhancing educational quality.
In response to this challenge, there has been a growing interest in analyzing the role of teachers’ digital competence as a determining factor in the effective integration of AI. In this regard, the ability of educators to understand, apply, and critically evaluate AI tools has become a key training prerequisite to maximizing the pedagogical potential of these technologies while fostering responsible and effective use in the classroom.
In this regard, several studies have explored the relationship between teachers’ digital literacy and their effectiveness in utilizing AI-related applications in the classroom. Research indicates that educators with higher levels of digital literacy are more willing to adopt AI technologies and effectively leverage their benefits in teaching and learning processes. Similarly, a lack of adequate training can result in resistance to adoption, improper use of AI tools, or an over-reliance on AI without meaningful pedagogical integration.
To bridge this gap, various initiatives have been launched to train teachers in the effective use of AI, equipping them with the necessary skills for its appropriate application in educational settings. These training programs focus on key areas such as prompt design, enabling educators to generate high-quality input that optimizes AI output and avoids the well-known garbage-in, garbage-out problem. They also require a critical analysis of AI-generated results and a reflective, informed approach to technology use.

2.1. Professional Engagement and Performance Expectancy

Several studies have been carried out along these lines, particularly the contributions of Skantz-Åberg et al. (2022), who conclude that teachers who show proactivity towards self-training achieve better development in their AI-mediated professional practice. In this regard, following Lee (2023), mastering the use and varied applications of AI-related resources may allow educators, within their permanent training, to access more areas of knowledge and development that will further enrich their skills and professional practice.
It is then necessary to establish that adequate training and interest in developing knowledge about and through ICTs show a positive impact on the understanding of AI as an educational resource (Galindo-Domínguez et al. 2024; Peng et al. 2023; Lee 2023). Likewise, from the perspective of future teachers, it is understood that a better attitude towards ICTs contributes to improving AI dexterity, especially in complex contexts where curricular adaptations are necessary (Bozkurt 2024; De Frutos et al. 2023; Skantz-Åberg et al. 2022; Suconota-Pintado et al. 2023).

2.2. Professional Engagement and Effort Expectancy

Considering that educators showing professional commitment tend to acknowledge and voluntarily strive to meet personal training and professional needs, the precise intended use of AI-based applications within the educational field should be further explored, differentiating between common administrative tasks, which are usually repetitive and systematic, and actual teaching and learning responsibilities grounded in pedagogical design and action (Voogt 2010). The former, although more mechanical, repetitive, and simple, have a significant impact on the development of educational practice, as they contribute to accelerating so-called burnout syndrome. Thus, teachers who become aware of how to work properly and efficiently with AI manage to facilitate their daily work, allowing them to devote more time and higher-quality effort to their main didactic activities (De Frutos et al. 2023; Del Moral-Pérez et al. 2024).

2.3. Professional Engagement and Social Influence

Teacher commitment covers a wide range of skills that can be considered part of the competence domain. In particular, a predisposition towards lifelong learning and awareness of ICT use are key aspects set out in the European reference framework. In this sense, educators who are more digitally proficient tend to have a positive social influence on the use of AI, as their immediate social context may be similarly knowledgeable in the use of these tools (Rahimi and Mosalli 2024). However, social influence is one of the concepts that generates the most debate, as it may be easily influenced by professional commitment, formal training, and one’s own social context and environment.
Furthermore, as pointed out by Usán-Supervía and Castellanos-Vega (2024), Chiu et al. (2024), and Rahimi and Mosalli (2024), teachers have a positive social influence on the adoption of AI, while stressing the need for specific training in its use (Bozkurt 2023). Conversely, from the student’s perspective, social influence is more contested. In this sense, both Alonso-Rodríguez (2024) and the European Union have criticized students’ use of AI, which could have a negative social influence.
Overall, it can be concluded that higher professional engagement has a positive influence on social influence, because as education and interest in knowledge increase, skills for the positive use of AI are acquired, which in turn improves social influence. However, the opposite situation could also occur, in which acquiring new knowledge or developing in a more educated environment leads to a greater rejection of the use of AI.

2.4. Professional Engagement and Hedonic Motivation

To understand this relationship, it is necessary to understand the concept of hedonic motivation, which, for the purposes of this research, is considered a synonym for intrinsic motivation. In this regard, the committed teacher who seeks to establish online connections with other teachers, voluntarily attends training on AI, and promotes its use within his or her educational institution demonstrates high intrinsic motivation (Svoboda 2024). In addition, knowledge about how these tools work, understanding their capabilities, and participating in circles for sharing professional educational practices with AI further strengthen this motivation to use them (Galindo-Domínguez et al. 2025).
As mentioned earlier, social influence may generate some rejection from students towards AI, and the motivation to train and provide a clear response regarding the positive use of this tool could affect their own motivation (Zhang et al. 2023). Thus, those students who are more committed and have a greater willingness to engage in AI-related training will also develop a stronger intrinsic motivation to work with AI (Hezam and Alkhateeb 2024; Kalniņa et al. 2024).

2.5. Professional Engagement and Price Value

With regard to professional engagement and price value, their relationship is fairly consensual at the theoretical level; however, it is difficult to work with in specific contexts. It should be mentioned that the valuation of a fair price depends on many factors, such as the intended use, the specific tool, and its pricing model. Therefore, given that no specific tool was selected, commented on, or highlighted for this research, a generalized AI-based application concept was used as a reference for the present study. Moreover, it is necessary to understand that, as pre-service teachers are still students, they tend to assess the cost value of these tools less favorably. However, similar studies in comparable contexts in other countries show that, regardless of economic level, both teachers and students make an adequate assessment of the price of AI (e.g., Wang and Sun 2024).
Regarding the factors hereby examined, it should be noted that greater knowledge about the use of AI-based tools positively influences perceptions of them. As the commitment to teaching increases, pre-service teachers develop a better understanding of how each AI-based tool can be beneficial, leading to an improved assessment in terms of perceived value (Lü et al. 2024; Wang and Sun 2024).

2.6. Professional Engagement and Habit of Use

The professional engagement of teachers is reflected in their dedication to continuous learning and the integration of technological innovations into their pedagogical practices. This commitment is essential for the adoption and consistent use of AI tools in educational settings (Clemente-Alcocer et al. 2024). Ongoing training in digital competence enables educators to understand and effectively apply AI in the classroom, thereby enhancing teaching and learning processes. According to Bolaño-García and Duarte-Acosta (2024), most students are unaware of AI’s potential in the educational field, highlighting the need for teachers to invest in their own training to properly guide their students.
Moreover, the integration of AI in education can personalize learning and address the individual needs of students. A systematic review conducted by Suconota-Pintado et al. (2023) suggests that AI can significantly enhance learning personalization by providing activity recommendations and feedback tailored to each student’s specific needs. However, the effective implementation of AI in education does not solely depend on the availability of technology but also on the professional commitment of teachers to train themselves and adapt to these new resources. Cukurova et al. (2023) found that factors such as teacher self-efficacy and product quality are important but not necessarily the only relevant predictors in this regard. The acceptance of AI in educational settings is also influenced by the perception of its usefulness and fairness. Karran et al. (2024) highlight that the acceptance of AI in education is a complex issue that requires careful consideration of specific AI applications and the perceptions of the various stakeholders involved.

2.7. Digital Resources in Performance Expectation

The creation of digital resources, as defined by Mora-Cantallops et al. (2022), refers to the ability to create, use, and adapt diverse digital resources within an educational context. This concept holds significant implications for teachers’ professional development, as these resources are essential for enhancing pedagogical practices. As previously mentioned, performance expectation is heavily influenced by digital competencies, given that higher competence levels correlate with improved performance of the implemented AI-based tools in question (Bozkurt 2023; De Frutos et al. 2023; Del Moral-Pérez et al. 2024).
Specifically, from an instructional perspective, AI has become an essential tool that aids in the instant management and transformation of materials, addressing educators’ needs. This connection may hint, as outlined by Skantz-Åberg et al. (2022), at the existence of a positive influence of digital resources on performance expectation.

2.8. Digital Resource Creation in Effort Expectation

As previously mentioned, teachers fulfill a multifaceted role that extends beyond classroom instruction, encompassing pedagogical, administrative, and operational responsibilities. Within this complex professional landscape, educators often prioritize the intellectual and interpersonal dimensions of teaching, while assigning lower priority to routine bureaucratic tasks. These latter responsibilities include automated assessment grading, document management, and the generation of standardized instructional materials (García-Cabrero et al. 2008). As such, it is generally affirmed that the average educator worldwide assigns more relevance to the didactic and teaching functions directly linked to classroom instruction than to the aforementioned more procedural and systematic tasks (Hargreaves and Fullan 2012).
Emerging research suggests that educators’ digital competence plays a pivotal role in shaping their perceptions and utilization of AI tools. Specifically, instructors with limited proficiency in digital content creation tend to view AI primarily as an assistive technology for streamlining basic operational functions. Common applications include the automated generation of visually enhanced presentations (even when working with instructor-provided content), the use of template-driven systems for objective assessment grading, and the efficient organization of instructional documentation (Gonçalves Costa et al. 2024; Van den Berg and du Plessis 2023).
The relationship between digital literacy and AI integration appears particularly salient when examining perceived implementation efforts. According to the Unified Theory of Acceptance and Use of Technology (UTAUT), technologies perceived as requiring minimal adaptation efforts are more likely to be embraced by users (Venkatesh et al. 2012). Instructors with basic digital skills demonstrate greater willingness to adopt AI for low-complexity tasks, as these applications do not require significant behavioral or pedagogical changes. Conversely, more sophisticated AI applications, such as adaptive learning systems or intelligent tutoring platforms, may be perceived as requiring further training and effort by this population, acting as a potential cause behind their limited adoption in mainstream practice (Zawacki-Richter et al. 2019).
This phenomenon has important implications for professional development programs. Current findings suggest that targeted training initiatives should address not only technical AI competencies but also the pedagogical reasoning required for meaningful AI integration (Chiu et al. 2023). Without such holistic preparation, there exists a risk that AI adoption may remain confined to peripheral administrative functions, rather than transforming core teaching and learning processes.

2.9. Digital Resources and Social Influence

Previous studies have addressed how universities themselves promote the application of AI in an educational context. In this regard, educational influencers have emerged as key references for pre-service teachers, disseminating insights into their teaching practices and workflows (Collado-Alonso et al. 2023; Martínez-Domingo et al. 2024). Thus, we will examine how the rise of educational influencers, notably including Laurimathteacher, exerts a significant influence on trainee teachers.
Two key cases stand out at the intersection of social influence and digital resources. First, the ability to create content plays a crucial role in shaping social influence, as demonstrated by Pérez Ibáñez (2024), who disseminates both paid and free training on the potential uses of AI through his website, https://jose-david.com/. Similarly, the website and social media platform 4Docent.es (4Docent.es 2024, accessed on 14 March 2025) serves as a benchmark, illustrating how specialized training in digital content creation can have a social impact, particularly among pre-service teachers.

2.10. Digital Resource Creation and Hedonic Motivation

Recent research suggests a positive correlation between users’ ability to adapt, create, and repurpose digital content and their hedonic motivation to utilize AI systems (Adelana et al. 2024). As individuals develop greater proficiency in modifying texts, images, audio, and other digital formats, they come to perceive AI use as both more rewarding and noticeably easier. This phenomenon operates through a positive feedback mechanism: AI lowers technical barriers to content manipulation, thereby facilitating creative experimentation and activating reward systems associated with hedonic motivation (Ayanwale et al. 2024).
From another perspective, successful interactions, whether transitioning between different art styles, remixing multimedia elements, or generating content variations, produce micro-rewards that reinforce usage behavior (Khalil and Alsenaidi 2024). The playful nature of this process, where the focus lies on the enjoyment of the creative act rather than its practical utility, aligns closely with the very idea behind hedonic motivation.
When AI preserves human creative control while optimizing tedious process aspects, hedonic motivation peaks. Conversely, when systems make autonomous decisions that reduce human participation, subjective enjoyment decreases. This suggests that designing AI systems to enhance rather than replace human creative capability may be crucial for sustaining high levels of hedonic engagement in digital environments (Adelana et al. 2024; Ayanwale et al. 2024; Khalil and Alsenaidi 2024). As such, perceived user agency may act as a key moderator regarding educators’ acceptance and usability of AI tools.

2.11. Digital Resources and Facilitating Conditions

A critical consideration is how educators with advanced digital content creation competencies foster habitual AI usage, thereby facilitating its adoption both individually and across their educational communities. This trend has prompted higher education institutions to increasingly integrate AI tools to enhance teaching–learning processes. Educators with strong digital skills tend to implement these technologies to optimize workflows, allocating more time to direct instruction. Concurrently, teachers who previously faced challenges in developing engaging materials are now using AI to produce visually compelling presentations and supplementary resources (Caballero-García et al. 2024; Moreira-Zambrano et al. 2025).
This evolution has led to strategic partnerships between universities and AI platforms, such as the pioneering collaboration by the University of Granada, aimed at elevating the quality of instructor-generated content and expanding pedagogical possibilities in higher education. Parallel to these institutional efforts, microcredential programs have emerged as structured professional development initiatives, equipping educators with competencies to create AI-enhanced, high-quality learning materials (Centro de Servicios Informáticos y Redes de Comunicación 2024).
Equally significant is the influential role digitally proficient educators play in their institutional ecosystems. As demonstrated by Rahimi and Mosalli (2024), highly motivated teachers with advanced digital competencies frequently assume leadership in procuring technological resources. These professionals not only advocate for institutional support through formal channels but often demonstrate remarkable commitment by personally investing in potentially beneficial ICT resources. This proactive behavior reflects both their dedication to pedagogical innovation and their conviction in technology’s transformative potential for learning optimization.

3. The Present Study: Background and Objectives

A critical examination of the scientific literature reveals persistent efforts to establish conceptual linkages between Artificial Intelligence (AI) integration and teachers’ digital competence, while identifying key social factors influencing this dynamic. As Tenberga and Daniela (2024) demonstrate, digital competence constitutes a foundational element for the contextualized development of AI in education. As these authors state, targeted training in these technologies enhances social cognition within educational communities, where proficient use yields both operational efficiency and qualitatively superior pedagogical outcomes across diverse settings (Zhang 2024).
This scholarly discourse has crystallized into two distinct research trajectories for conceptualizing the AI–digital competence nexus:
  • Structural Integration Approach: Following Tenberga and Daniela’s (2024) framework, this paradigm advocates for the formal inclusion of AI-specific domains within digital competence frameworks. This position has spurred the development of novel assessment instruments, exemplified by Chiu et al.’s (2024) validated scales for evaluating AI-related teaching competencies.
  • Competency-Mediated Adoption Approach: Aligned with the present study’s orientation, this research stream investigates how pre-existing digital competence influences AI utilization patterns. Khalil and Alsenaidi’s (2024) work in this domain reveals that educators’ AI implementation strategies frequently remain constrained, while demonstrating how digital competence training cultivates both critical engagement and positive dispositions toward educational technology (Bedir Erişti and Freedman 2024).
Grounding this theoretical discussion, foundational work reminds us that digital competency processes fundamentally concern the capacity to orchestrate effective technology-mediated teaching–learning ecosystems. This conceptualization remains particularly relevant when examining AI’s evolving role in contemporary pedagogical practice.

4. Objectives and Theory-Driven Hypotheses

The main objective of this research was to determine the influence of pre-service teachers’ digital competence on the acceptance and use of AI. Based on this general objective, the following specific objectives were established.
S.O. 1: assess the impact of professional engagement on key technology acceptance dimensions.
S.O. 2: examine the effect of digital content creation on key technology acceptance dimensions.
S.O. 3: determine the indirect relationships of professional engagement and digital content creation with AI usage intentions, mediated by key technology acceptance dimensions.
Therefore, building upon the previously presented literature review, the analysis conducted within the present study rests on two main hypotheses on which the rest are built. On the one hand, it was hypothesized that professional engagement has an influence on the acceptance and use of AI (H1). On this basis, the influence of professional engagement on performance expectancy (H1a), effort expectancy (H1b), social influence (H1c), hedonic motivation (H1d), price value (H1e), and habits (H1f), as key areas regarding the acceptance and use of technology, was determined, finally defining indirect impacts of professional engagement on behavioral intention through each of these dimensions.
On the other hand, the second main hypothesis established the influence of the capacity to create digital content on the same dimensions planned by UTAUT2 (H2), establishing the influence of digital resource creation on performance expectancy (H2a), effort expectancy (H2b), social influence (H2c), facilitating conditions (H2d), hedonic motivation (H2e), price value (H2f), and habits (H2g). Complementarily, the indirect influence of digital resource creation over behavioral intention through the aforementioned AI acceptance dimensions will be explored (Figure 1).

5. Materials and Methods

The present study was developed under a quantitative approach, following an ex post facto prospective design with more than one causal step (Montero and León 2005) and implementing Multigroup Structural Equation Modelling (M-SEM) based on the answers provided by participants via a self-administered questionnaire distributed among students and prospective educators enrolled in Early Childhood and Elementary Education Bachelor’s Degrees at different Andalusian universities.
Contextualizing the present research in the Spanish region of Andalusia as a well-established AI-education stronghold (Universidad de Huelva 2022; Universidad de Sevilla 2024) and considering that the total number of students enrolled in the two most densely populated educational undergraduate programs (Early Childhood and Elementary Education) in the area is up to 23,780 (Ministerio de Universidades 2022), a sample size of 379 respondents was necessary in order to achieve true statistical representativity of the studied population under a 95% confidence level and a 5% margin of error. In order to achieve such a sampling quota, a non-probabilistic convenience sampling procedure was used (Cochran and Díaz 1980), inviting potential participants to voluntarily fill in the selected research instruments.
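For transparency, the reported quota can be reproduced with Cochran’s formula under a finite population correction. The following R sketch is an illustration built from the figures in the text, not the authors’ script:

```r
# Sample size via Cochran's formula with finite population correction (FPC).
# Figures from the text: N = 23,780 enrolled students, 95% confidence,
# maximum variability (p = 0.5), and a 5% margin of error.
N <- 23780
z <- qnorm(0.975)  # ~1.96 for a 95% confidence level
p <- 0.5
e <- 0.05

n0 <- z^2 * p * (1 - p) / e^2   # infinite-population estimate (~384)
n  <- n0 / (1 + (n0 - 1) / N)   # finite population correction
ceiling(n)                      # 379, matching the reported quota
```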

5.1. Participants and Procedure

The participating student body replied to several demographic questions of research interest (sex, currently enrolled degree, and predisposition to implement AI in future hypothetical educational scenarios), as well as previously designed validated scales aimed at measuring future educators’ digital competence and AI usability and usage intentions. In line with current national legislation (Jefatura del Estado 2018), every potential participant was informed of the purposes and methodological principles of the present research project, including the anonymous reporting and compilation of personal information, and was asked to fill in an informed consent form prior to answering the provided instrument.
The final study sample included 270 Early Childhood Education (203 women and 67 men) and 523 Elementary Education prospective teachers (388 women and 135 men) (n = 793; e = ±1.255). It should be noted that among the eight participating universities, five had signed a license agreement with AI-based tools, resulting in 586 learners having facilitated access to such resources, while 207 future educators relied on personal access to such applications. The apparent sampling imbalance between men and women mainly derives from an overall perception of educational careers as a feminized endeavor, guaranteeing the sociodemographic representativity of this study. The sample size gave the statistical analysis enough power (0.95) to detect small-to-medium effect sizes (f² = 0.05) in regression studies with a maximum of nine predictors, as the minimum sample able to reach said characteristics was fixed at 481 participants (Faul et al. 2009).
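The authors cite Faul et al. (2009), whose G*Power tool is the usual route for this computation; an equivalent sensitivity check can be sketched in R with the pwr package (an assumption about tooling, not the published procedure):

```r
# Power sketch for a regression with up to nine predictors,
# alpha = 0.05, power = 0.95, and effect size f^2 = 0.05.
library(pwr)

res <- pwr.f2.test(u = 9, f2 = 0.05, sig.level = 0.05, power = 0.95)
# Total N = numerator df + denominator df + 1.
ceiling(res$v) + 9 + 1  # approximately 481, as fixed in the text
```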

5.2. Research Instrument and Data Analysis

The instrument implemented throughout the present research project was composed of different subscales taken from validated and widely acknowledged measurement instruments. Firstly, the research team selected the professional engagement (ENG) and digital resource creation (RES) scales included in the DigCompEdu Check-In Tool (Mora-Cantallops et al. 2022), both based on a 7-point Likert scale in which the initial level represents a non-initiation state, whereas the other six stages align with the commonly used reference levels of proficiency frameworks (i.e., A1–C2). Each level of the scale comprised statements describing proficiency within the competence area under assessment. It should be noted that, since the original instrument was explicitly meant for in-service educators, slight wording modifications were necessary in order to compile an interpretable scale for future educators with no prior field-related experience.
On the other hand, to measure the level of acceptance and willingness to implement AI by future professionals in the field of education and training, the authors opted for the questionnaire linked to the Unified Theory of Acceptance and Use of Technology (UTAUT2) (Venkatesh et al. 2012). The scale in question is formed by eight different latent variables related to the acceptance and use of mobile internet technology, including performance expectancy (PEX), which is related to the professional and academic benefits of using such technology; effort expectancy (EEX), which is based on the required dedication to master mobile internet; social influence (INF), involving the personal weight of other users’ opinions on one’s own behavior; facilitating conditions (FACs), linked to the possibility of using both human and material resources regarding the use of mobile technology; hedonic motivation (MOT), measuring the enjoyability of use; price value (VAL), concerning the perceived fairness of the technology’s price; habits (HABs), assessing users’ usage tendencies; and behavioral intention (BEH), a latent variable determined by both observed indicators and every prior factor, associated with the willingness to continue using this technological resource in future instances.
It should be noted, however, that slight item wording modifications were applied to the latter scale, as its original version mainly refers to mobile technology as a whole rather than particular AI applications. Additionally, the base version of the instrument uses a 7-point Likert scale; however, following the recommendations provided by Saifi et al. (2025) and Russo et al. (2021), a 5-point scale was implemented, using I completely disagree and I completely agree as anchor values, as this provided robust answers at a minimal loss of information. This initiative follows previous studies that implemented similar changes to the aforementioned subscales (e.g., Quicaño-Arones et al. 2019).
Following Suárez Rodríguez and Jornet Meliá’s (1994) recommendations and taking into consideration that both instruments had only been validated at the content-related level, an initial construct and criterion-related validation process was developed before conducting a SEM-based analytical study. The final research instrument, formed by the scales extracted from both measurement tools, included a total of 10 latent constructs, measured by 33 observed variables, with every construct having at least three uncorrelated indicators (Kenny and Milan 2012).
Following the analytical recommendations of Hair et al. (2019), both the R-based statistical package lavaan (Rosseel 2012) and the IBM SPSS software (IBM Corp., Armonk, NY, USA, version 28.0), alongside the multivariate normality SPSS macro provided by DeCarlo (1997), were used. The present research also utilized artificial intelligence tools to enhance the clarity and quality of the writing. This practice is considered an ethical use of such tools in accordance with the Committee on Publication Ethics (COPE) guidelines.

6. Results

6.1. Normality Assessment of Sample Distribution

As an initial step, the univariate normality of the data’s distribution was assessed in order to determine the analytical procedure that was most adequate for the gathered replies. Although the usual normality statistical tests retrieved highly significant results, thus affirming a severe departure from normality for each observed variable, the obtained skewness and kurtosis coefficients for every item in the constructed scale were distributed within the acceptable range of ±3 for the former and ±10 for the latter (Kline 2016). As such, univariate normality was confirmed for every observed variable (see Table 1).
Nevertheless, values of 111.490 and 1405.524 were retrieved for Mardia’s multivariate skewness and kurtosis, respectively, therefore being distanced from their expected thresholds of 0 for skewness and p(p + 2) = 1155 for kurtosis (p being the number of addressed observed variables) (Mardia 1970, 1974). As a result, it was confirmed that the multivariate normality assumption was violated for the studied sample, which led the analysis to be conducted under robust estimation methods. Taking into consideration both the ordinal nature of the data and that every included variable used, at least, a 5-point Likert-type scale, the researchers opted for the Scalar Correction of the Maximum Likelihood estimation proposed by Satorra and Bentler (1994).
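The original screening used DeCarlo’s (1997) SPSS macro; an equivalent check can be sketched in R with the psych package, with `items` standing in for a hypothetical data frame holding the 33 observed variables:

```r
# Univariate and multivariate normality screening (illustrative sketch).
library(psych)

describe(items)[, c("skew", "kurtosis")]  # screen against |skew| < 3, |kurtosis| < 10
mardia(items, plot = FALSE)               # Mardia's multivariate skewness/kurtosis
```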
In order to assure the absence of multicollinearity issues, as these may result in critical risks regarding the proper fitting of the model to the provided data, the Variance Inflation Factor (VIF) of each observed variable, alongside their corresponding tolerance statistics, determined through a linear regression of the scoring of each dimension over its corresponding items, was examined. Taking as reference the thresholds proposed by Kim (2019), that is, VIF values under five and tolerance statistics over 0.2, the absence of multicollinearity at the univariate level was confirmed. Table 1 offers a detailed overview of these procedures.
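As an illustration of this screening (variable names are hypothetical placeholders, not the published item codes), the VIF and tolerance statistics for one dimension could be obtained in R as follows:

```r
# Collinearity check: regress a dimension's score on its own items,
# then inspect VIF (< 5) and tolerance (> 0.2), per Kim (2019).
library(car)

eng_fit <- lm(ENG_score ~ ENG1 + ENG2 + ENG3, data = items)
vif(eng_fit)      # Variance Inflation Factor for each item
1 / vif(eng_fit)  # corresponding tolerance statistics
```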

6.2. Ad Hoc Validation of Research Instrument

Having introduced several measurement instruments derived from different scales, it was deemed necessary to conduct a construct and criterion-related validation process to assure the proper factorial validity of the studied constructs. Additionally, in order to establish the design adequacy of the newly formed instrument, both its factor-level and whole-scale reliability were assessed. Associating every observed variable to its predetermined endogenous variable, intercorrelating every exogenous latent variable, and including the paths established between such constructs by their original authors (i.e., behavioral intentions being determined as a second-order formative construct), a confirmatory factor analysis (CFA) was initially performed.
As the UTAUT2 instrument’s question structure is mainly based on control items, following Hermida’s (2015) recommendations, an a priori determination of correlated pairs of error variances between variables of a similar nature was allowed, with the main goal of explaining common variance, unrelated to the indicator–factor statistical relationship, that could potentially bias the cause–effect interactions among endogenous variables. Following both the generality rule stated by Kenny (2011) and the theoretically based error correlations principle highlighted in Gerbing and Anderson’s (1984) work, every observable variable pair that could be deemed a control item of another indicator had its error variances correlated in the estimation of the measurement models included in the present CFA (see Table 2).
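In lavaan syntax, a measurement model with a priori correlated error variances takes the following form; this is a sketch showing two of the ten constructs, with hypothetical item codes rather than the published ones:

```r
# CFA sketch with a correlated error pair between control items.
library(lavaan)

cfa_model <- '
  ENG =~ ENG1 + ENG2 + ENG3   # professional engagement
  PEX =~ PEX1 + PEX2 + PEX3   # performance expectancy
  PEX1 ~~ PEX2                # theoretically justified error covariance
'
fit <- cfa(cfa_model, data = items, estimator = "MLM")  # Satorra-Bentler scaling
summary(fit, standardized = TRUE, fit.measures = TRUE)
```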
Under such considerations, the majority of observed variables showed adequate standardized factor loadings, either higher than or close to the recommended criterion of λ ≥ 0.7 (Muijs 2022), while all of them retrieved a substantial coefficient of determination (Cohen 1988), reinforcing their predefined association with their respective scales. Even though several factors did not achieve the common threshold of average variance extracted (AVE ≥ 0.5), taking into consideration that all of them reached the 0.4 reference point and that every subscale showed composite reliability (CR) values well above 0.6, following Hair et al.’s (2016) recommendations for assessing the validity of newly formed scales, the implemented constructs can be deemed valid, thus confirming the existence of convergent validity in the research instrument.
Additionally, as no CR value for any of the underlying factors surpassed the 0.9 high-end threshold, the absence of multicollinearity can be extended up to the construct level (Hair et al. 2022). Regarding the reliability of the instrument, all factors showed adequate values of Cronbach’s Alpha (i.e., α ≥ 0.7). Nevertheless, considering both Alpha’s upward bias and its lack of stability in the presence of correlated error variances (Cortina 1993), McDonald’s Omega can be used as a reference for the assessment of the scales’ reliability, with every dimension having reached the recommended 0.7 value (Ventura-León and Caycho-Rodríguez 2017). In this way, both Alpha and Omega established satisfactory levels of reliability for the instrument as a whole.
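Both reliability coefficients can be recovered in R; one possible route (a sketch assuming the semTools and psych packages, which the text does not name for this step) is:

```r
# Reliability sketch: omega-type composite reliability per factor from the
# fitted lavaan model, plus Cronbach's alpha for one subscale from raw items.
library(semTools)
library(psych)

compRelSEM(fit)                            # McDonald's omega per dimension
alpha(items[, c("ENG1", "ENG2", "ENG3")])  # alpha for the hypothetical ENG subscale
```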
Regarding discriminant validity, it was decided to assess the difference between constructs both by determining their Heterotrait–Monotrait ratio of correlations (HTMT2) and by contrasting each factor’s Average Shared Squared Variance (ASV) with its AVE, in order to combine statistical analysis with more traditional rule-of-thumb methods (Cheung et al. 2024). As shown in Table 3, the HTMT2 for each construct lies beneath the critical threshold of 0.9 (Henseler et al. 2015), while the ASV of each factor showed lower values than its respective AVE, therefore confirming the existence of discriminant validity in the research instrument.
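Ratios of this kind can be approximated with semTools; note this sketch uses the package’s default HTMT computation, which may differ from the exact HTMT2 variant reported in Table 3:

```r
# Discriminant validity sketch: heterotrait-monotrait ratios,
# screened against the 0.9 threshold of Henseler et al. (2015).
library(semTools)

htmt(cfa_model, data = items)  # matrix of HTMT ratios between constructs
```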
The overall structural model, comprising the measurement models associated with each construct, yielded generally adequate Goodness-of-Fit (GoF) values. Regarding absolute fit, the χ2 test retrieved significant results (χ2 = 1939.129; p = 0.000), which can be traced back to its sensitivity to sample sizes greater than 200 (Bentler and Bonett 1980), at which it is unable to offer realistic or robust GoF assessments.
Conversely, the Goodness-of-Fit Index (GFI) was established at 0.954, which is higher than the recommended 0.9 value (Kocakaya and Kocakaya 2014), while its adjusted version (AGFI) remained at 0.945, being acceptable for complex models with numerous latent factors (Brett and Drasgow 2002). The Standardized Root Mean Square Residual (SRMR) remained below the acceptable threshold of 0.08 (Hu and Bentler 1999). Finally, the Root Mean Squared Error of Approximation (RMSEA) showed a value of 0.063 (pclose = 0.05), with a 95% confidence interval in the [0.060, 0.066] range, therefore being below the critical 0.08 threshold (Gerbing and Anderson 1984; MacCallum and Hong 1997).
As for measures of incremental fit, the Tucker–Lewis Index (TLI) was determined to have a value of 0.939, the Comparative Fit Index (CFI) equaled 0.946, and the Normed Fit Index (NFI) retrieved 0.931, all values being above the recommended 0.9 threshold (Hu and Bentler 1999). Additionally, the Relative Fit Index (RFI) surpassed the 0.9 cutoff criterion (RFI = 0.922) (Muijs 2022; Wu et al. 2017), with the Incremental Fit Index (IFI) also exceeding that recommended value (IFI = 0.947) (Bollen 1989).
Finally, regarding the parsimony adjustment of the model, the χ2-to-degrees of freedom (df) ratio fell within the excellent range of below three (χ2/df = 1.939, df = 466) (Marsh and Hocevar 1985). Complementarily, the parsimony-adjusted versions of prior indices showed equally acceptable values, including a Parsimony Goodness-of-Fit Index (PGFI) of 0.793, a Parsimony Normed-Fit Index (PNFI) of 0.821, and a Parsimony Comparative-Fit Index (PCFI) of 0.801, all above their respective recommended values of 0.5 (Jöreskog and Sörbom 1993), 0.5, and 0.6 (Hu and Bentler 1999).
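For reference, most of the indices discussed above can be extracted from a fitted lavaan object in a single call (lavaan’s own index names shown; scaled variants are also available under the MLM estimator, and PCFI would need to be computed separately):

```r
# Goodness-of-fit extraction sketch for the fitted CFA/SEM object.
fitMeasures(fit, c("chisq", "df", "pvalue", "gfi", "agfi", "pgfi",
                   "srmr", "rmsea", "rmsea.ci.lower", "rmsea.ci.upper",
                   "tli", "cfi", "nfi", "rfi", "ifi", "pnfi"))
```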
To offer a complementary validation perspective, criterion-related validity was established through significant differences between opinion/experience groups, formed prior to the distribution of the research instrument, as instructed in Agus et al.’s (2024) work. It was determined that prospective teachers who affirmed a desire to use AI in educational settings showed significant differences in every dimension except hedonic motivation, habits, and behavioral intention, which can be attributed to the lack of distinction in the UTAUT instrument between the general use of a technological resource and its implementation for professional tasks (see Table 4), thus generally reaffirming the criterion-related validity of the instrument.
Once the equation model was established on the basis of the hypotheses set out in the theoretical framework, it was essential to analyze the influence of professional skills training on the development of teachers’ digital competencies. It was also necessary to highlight the level of significance of skills training on the use and acceptance of AI.
In particular, it can be seen that all the dimensions established by the UTAUT2 model vary according to the level of self-perceived professional competence. Thus, teachers with a high level of professional commitment tend to rate AI negatively and show less acceptance of this tool. On the contrary, those with a higher self-perceived competence in content creation show a better evaluation and a higher acceptance of AI (Table 5).
As shown in the table above, the direct effect of competence development on the intention to use AI was not calculated. This is because all dimensions of the UTAUT2 model influence this relationship, requiring an analysis of indirect effects to determine its impact. Specifically, digital teaching competencies indirectly affect the intention to use AI through their relationship with effort expectancy. The model reflecting these relationships is illustrated in Figure 2.
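Indirect effects of this kind are typically estimated by labelling structural paths and defining their products; a minimal lavaan sketch follows, with construct and item names as placeholders rather than the published model:

```r
# Structural sketch: an indirect effect of professional engagement on
# behavioral intention, mediated by effort expectancy.
sem_model <- '
  ENG =~ ENG1 + ENG2 + ENG3
  EEX =~ EEX1 + EEX2 + EEX3
  BEH =~ BEH1 + BEH2 + BEH3

  EEX ~ a * ENG   # engagement -> effort expectancy
  BEH ~ b * EEX   # effort expectancy -> behavioral intention

  ind := a * b    # indirect effect of ENG on BEH via EEX
'
sem_fit <- sem(sem_model, data = items, estimator = "MLM")
parameterEstimates(sem_fit, standardized = TRUE)
```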

7. Discussion

Having presented the results concerning the various relationships examined, several findings have emerged that both contradict and corroborate the theoretical framework established earlier. First, we highlight the influence of both competence areas on performance expectancy. The digital resource creation dimension demonstrates a positive relationship with all dimensions of AI acceptance and use, aligning with prior scientific literature (Collado-Alonso et al. 2023; Gonçalves Costa et al. 2024; Van den Berg and du Plessis 2023). Of particular interest among the findings is the significant influence of this competency on behavioral intention, as the interaction between these two dimensions strengthens the positive relationship between effort expectancy and social influence.
Continuing with the previously established relationships, professional engagement has both supported and challenged various implications regarding the acceptance and use of AI. The scientific literature advocates the emergence of specialized AI-training programs, as participation in such initiatives has been shown to positively influence multiple factors, particularly performance expectancy (Bozkurt 2024; De Frutos et al. 2023; Skantz-Åberg et al. 2022; Suconota-Pintado et al. 2023).
However, our findings reveal an inverse relationship between pre-service teachers’ professional engagement and their perceived performance expectancy of AI tools. Specifically, those demonstrating stronger commitment to peer communication networks, professional development participation, and digital tool integration in teaching showed lower expectations regarding AI’s performance benefits.
This phenomenon may be explained through a competence-based lens: educators with established technological–pedagogical skills may perceive less need for additional digital tools to enhance their teaching practice. It should be noted that these results do not suggest that better-trained teachers use AI less effectively but rather that they demonstrate more measured expectations compared to their less-experienced counterparts who may overestimate the technology’s advantages.
The present study unveils yet another contradiction with the existing literature, as professional engagement would, theoretically, correlate positively with effort expectancy, as greater ICT proficiency should reduce perceived effort. While teachers indeed face numerous non-instructional tasks that impact their work (Voogt 2010)—tasks for which AI proves particularly efficient due to its mechanical nature (De Frutos et al. 2023; Del Moral-Pérez et al. 2024)—this study reveals that pre-service teachers developing ICT and AI competencies recognize an important nuance. Although AI simplifies certain tasks, they understand that effectively using multiple tools requires additional training and implementation effort, potentially explaining this divergence from theoretical expectations.
Regarding social influence, while significant debate exists, the previously presented results clearly illustrate the beliefs held by pre-service teachers. Alonso-Rodríguez (2024) and the European Union present harsh critiques of AI development, arguing that despite its necessity for social and professional integration, poor practices typically outweigh proper implementations. Conversely, Rahimi and Mosalli (2024), Chiu et al. (2024), Usán-Supervía and Castellanos-Vega (2024), and Bozkurt (2023), along with Andalusian universities, maintain that the benefits of these tools outweigh the drawbacks.
In our study context, the stance of Alonso-Rodríguez (2024) and the European Parliament appears more influential. While participants recognize AI’s potential benefits, they remain aware of its misuse, leading to explicit skepticism. Concerning hedonic motivation, we observed the same pattern as with professional engagement. Digitally proficient pre-service teachers perceive fewer benefits from AI than their less-experienced counterparts, partially contradicting the existing literature (Hezam and Alkhateeb 2024; Kalniņa et al. 2024). This pattern repeats itself in AI cost evaluations, diverging from Wang and Sun (2024) and Lü et al. (2024). This raises important questions about whether these results hold specifically for AI-trained educators or generally across AI-proficient professionals.
Finally, regarding usage habits, the existing literature typically associates higher professional engagement with increased technology use habits (Bolaño-García and Duarte-Acosta 2024; Clemente-Alcocer et al. 2024; Cukurova et al. 2023; Karran et al. 2024). However, our findings reveal an inverse relationship in the studied context, suggesting this established pattern may not universally apply.

8. Conclusions

This study reveals a complex interplay between educators’ digital competence and their perceptions of AI in education. The findings challenge conventional assumptions by demonstrating that higher levels of professional engagement do not necessarily correlate with more positive expectations of AI tools. Instead, educators with strong technological–pedagogical skills exhibit more measured and critical perspectives, recognizing both the potential benefits and limitations of AI integration.
This research highlights that skills related to digital content creation serve as a key facilitator for AI acceptance, suggesting that practical, hands-on experience with digital tools shapes more realistic expectations of AI’s role in education. However, this study also uncovered important contradictions regarding effort expectancy, showing that while AI may streamline certain tasks, educators perceive significant hidden costs in terms of training and implementation requirements.
Regarding social influence, the findings indicate that pre-service teachers weigh ethical considerations and potential misuse heavily in their evaluations of AI, leading to cautious rather than enthusiastic adoption. This critical perspective extends to hedonic motivation and cost evaluations, where more experienced educators demonstrate lower perceived benefits compared to their less-experienced counterparts.
Ultimately, this study underscores that successful AI integration in education requires moving beyond technical training to address deeper pedagogical and ethical considerations. Educators need support in developing critical frameworks for evaluating AI’s role, ensuring that its implementation enhances rather than disrupts meaningful teaching and learning processes. These insights point to the importance of context-specific approaches to AI adoption, recognizing that educator expertise fundamentally shapes perceptions and usage patterns.
The consistent patterns observed across multiple dimensions of technology acceptance suggest that current theoretical models may need refinement to account for the nuanced ways in which professional experience mediates AI adoption. Future efforts should focus on developing more sophisticated frameworks that capture these dynamics, particularly in preparing educators for the evolving digital landscape.

8.1. Future Research Directions

Having drawn the conclusions and presented the various points discussed above, the following future research directions are proposed:
-
Intervention Planning: The assessment of pre-service teachers’ prior knowledge reveals a widespread lack of familiarity with AI applications in education. This study reveals the need to develop a structured protocol focused on building competencies in educational AI implementation.
-
Demographic Analysis: A critical research direction involves examining potential variations in AI-adoption patterns across different sociodemographic groups. This analysis should investigate how influential factors such as age, educational background, and technological exposure are.

8.2. Research Limitations

While this study provides valuable insights into AI adoption among pre-service teachers, several limitations should be acknowledged. This research focused exclusively on Early Childhood and Elementary Education programs in Andalusia, which may limit the generalizability of the findings to other educational contexts or experienced in-service teachers. The reliance on self-reported data through questionnaires introduces potential response biases, and the cross-sectional design prevents robustly establishing causal relationships between variables. Additionally, this study measured behavioral intentions rather than actual AI usage in classroom settings, and the rapid evolution of AI technologies means that the specific findings about current tools may require periodic re-evaluations. Future research should address these limitations through mixed-method longitudinal designs across diverse educational settings.

Author Contributions

Conceptualization, A.M.-M. and J.J.V.-M.; Data curation, M.B.M.-C., S.A.-G. and J.J.V.-M.; Formal analysis, S.A.-G. and J.J.V.-M.; Funding acquisition, M.B.M.-C. and A.M.-M.; Investigation, J.J.V.-M.; Methodology, S.A.-G., A.M.-M. and J.J.V.-M.; Project administration, A.M.-M.; Resources, S.A.-G.; Software, S.A.-G. and J.J.V.-M.; Supervision, S.A.-G. and A.M.-M.; Validation, A.M.-M. and J.J.V.-M.; Visualization, M.B.M.-C.; Writing—original draft, S.A.-G. and J.J.V.-M.; Writing—review & editing, M.B.M.-C., A.M.-M. and J.J.V.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

According to our institution (University of Granada), it is not necessary to request certification from the corresponding ethics committee, as informed consent was obtained by means of a yes/no question. Firstly, the following page (https://secretariageneral.ugr.es/unidades/oficina-proteccion-datos/registro-de-actividades-de-tratamiento/encuestas, accessed on 14 May 2025) specifies that responses to a questionnaire may be used for research purposes as long as informed consent is included (this can be a simple yes/no question), based on Article 6.1(a) of the GDPR, and the research has been clearly explained. Secondly, the instructions for the research ethics committee application process (https://investigacion.ugr.es/ayudas/comites-etica/evaluacion, accessed on 14 May 2025) mention the following cases as requiring ethics committee involvement: (a) patients recruited through the public or private healthcare system; (b) clinical data; (c) biological samples. These conditions are not relevant to the nature of the present research. Even though the educational context is not explicitly mentioned in this statement, the research does not involve direct classroom intervention. Therefore, ethics committee approval was not necessary for this study.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

This research is part of the doctoral thesis entitled “Influence of Artificial Intelligence on the Development of Digital Skills of Teacher Trainees”. Artificial intelligence was used to improve the writing of this manuscript, which is considered an ethical use under the Committee on Publication Ethics (COPE) guidelines.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. 4Docent.es. 2024. Cuaderno Docente con IA. Available online: https://4docent.es/ (accessed on 11 February 2025).
  2. Adelana, Owolabi Paul, Musa Adekunle Ayanwale, and Ismaila Temitayo Sanusi. 2024. Exploring Pre-Service Biology Teachers’ Intention to Teach Genetics Using an AI Intelligent Tutoring-Based System. Cogent Education 11: 2310976. [Google Scholar] [CrossRef]
  3. Adell, Jordi. 1996. Internet en educación: Una gran oportunidad. Net Conexión 11: 44–47. [Google Scholar]
  4. Agus, Mirian, Giovanni Bonaiuti, and Arianna Marras. 2024. Psychometric validation of the Robotics Interest Questionnaire (RIQ) scale with Italian teachers. Journal of Science Education and Technology 33: 68–83. [Google Scholar] [CrossRef]
  5. Alonso-Rodríguez, Ana María. 2024. Hacia un Marco Ético de la Inteligencia Artificial en la Educación. Teoría de la Educación. Revista Interuniversitaria 36: 79–98. [Google Scholar] [CrossRef]
  6. Ayanwale, Musa Adekunle, Emmanuel Kwabena Frimpong, Oluwaseyi Aina Gbolade Opesemowo, and Ismaila Temitayo Sanusi. 2024. Exploring factors that support pre-service teachers’ engagement in learning artificial intelligence. Journal for STEM Education Research 8: 199–229. [Google Scholar] [CrossRef]
  7. Aznar-Díaz, Isabel, Patricia Ayllón-Salas, Francisco D. Fernández-Martín, and María Ramos-Navas-Parejo. 2025. Exploring Predictors of Success in Massive Open Online Courses (MOOC). RIED-Revista Iberoamericana de Educación a Distancia 28: 239–257. [Google Scholar] [CrossRef]
  8. Batanero, Carmen. 1998. Recursos para la educación estadística en Internet. Uno 15: 13–26. [Google Scholar]
  9. Bedir Erişti, Suzan Duygu, and Kerry Freedman. 2024. Integrating Digital Technologies and AI in Art Education: Pedagogical Competencies and the Evolution of Digital Visual Culture. Participatory Educational Research 11: 57–79. [Google Scholar] [CrossRef]
  10. Bentler, Peter M., and Douglas G. Bonett. 1980. Significance tests and goodness of fit in the analysis of covariance structures. Psychological Bulletin 88: 588–606. [Google Scholar] [CrossRef]
  11. Bolaño-García, Manuel, and Nicolás Duarte-Acosta. 2024. Una Revisión Sistemática del Uso de la IA en la Educación. Revista Colombiana de Cirugía 39: 51–63. [Google Scholar]
  12. Bollen, Kenneth A. 1989. Structural Equations with Latent Variables. Hoboken: John Wiley & Sons. [Google Scholar]
  13. Bozkurt, Aras. 2023. Unleashing the Potential of Generative AI, Conversational Agents and Chatbots in Educational Praxis. Open Praxis 15: 261–70. [Google Scholar] [CrossRef]
  14. Bozkurt, Aras. 2024. Tell Me Your Prompts and I Will Make Them True. Open Praxis 16: 111–18. [Google Scholar] [CrossRef]
  15. Brett, Jeanne M., and Fritz Drasgow. 2002. The Psychology of Work: Theoretically Based Empirical Research. Mahwah: Lawrence Erlbaum Associates. [Google Scholar]
  16. Caballero-García, Presentación Ángeles, Pilar Ester Mariñoso, Isabel Morales Jareño, and Emilio Cañadas Rodríguez. 2024. La IA como herramienta de creación de contenidos y personalización del proceso de enseñanza aprendizaje. In Metodologías Emergentes en la Investigación y Acción Educativa. Edited by Ana Belén Barragán, María del Mar Simón Márquez, José Jesús Gázquez Linares, Elena Martínez Casanova and Silvia Fernández Gea. Madrid: Dykinson, pp. 177–91. [Google Scholar]
  17. Centro de Servicios Informáticos y Redes de Comunicación. 2024. Inteligencia Artificial en la UGR: Microsoft Copilot. Universidad de Granada. Available online: https://csirc.ugr.es/informacion/noticias/microsoft-copilot (accessed on 8 February 2024).
  18. Cheung, Gordon W., Helena D. Cooper-Thomas, Rebecca S. Lau, and Linda C. Wang. 2024. Reporting reliability, convergent and discriminant validity with structural equation modeling: A review and best-practice recommendations. Asia Pacific Journal of Management 41: 745–83. [Google Scholar] [CrossRef]
  19. Chiu, Thomas K. F., Benjamin Luke Moorhouse, Ching Sing Chai, and Murod Ismailov. 2023. Teacher Support and Student Motivation to Learn with Artificial Intelligence (AI) Based Chatbot. Interactive Learning Environments 32: 3240–56. [Google Scholar] [CrossRef]
  20. Chiu, Thomas K. F., Zubair Ahmad, and Murat Çoban. 2024. Development and validation of teacher artificial intelligence (AI) competence self-efficacy (TAICS) scale. Education and Information Technologies 30: 6667–85. [Google Scholar] [CrossRef]
  21. Clemente-Alcocer, Antonio A., Antonio Cabello-Cabrera, and Elizabeth Añorve-García. 2024. La IA en la Educación: Desafíos Éticos. Revista Latinoamericana de Ciencias Sociales y Humanidades 5: 464–72. [Google Scholar] [CrossRef]
  22. Cochran, William Gemmell, and Eva C. Díaz. 1980. Técnicas de Muestreo. Mexico City: Compañía Editorial Continental. [Google Scholar]
  23. Cohen, Jacob. 1988. Statistical Power Analysis for the Behavioral Sciences, 2nd ed. New York: Routledge. [Google Scholar]
  24. Collado-Alonso, Rocío, Laura Picazo-Sánchez, Ana-Teresa López-Pastor, and Agustín García-Matilla. 2023. ¿Qué enseña el social media? Influencers y followers ante la educación informal en redes sociales. Revista Mediterránea de Comunicación 14: 259–70. [Google Scholar] [CrossRef]
  25. Cortina, José M. 1993. What Is Coefficient Alpha? An Examination of Theory and Applications. Journal of Applied Psychology 78: 98–104. [Google Scholar] [CrossRef]
  26. Cukurova, Mutlu, Xin Miao, and Richard Brooker. 2023. Adoption of Artificial Intelligence in Schools. In Artificial Intelligence in Education. AIED 2023. Edited by Ning Wang, Genaro Rebolledo-Mendez, Noboru Matsuda, Olga C. Santos and Vania Dimitrova. Cham: Springer, pp. 151–63. [Google Scholar] [CrossRef]
  27. De Frutos, Nahia Delgado, Lucía Campo-Carrasco, Martín Sainz de la Maza, and José María Extabe-Urbieta. 2023. Application of artificial intelligence (AI) in education: Benefits and limitations of AI as perceived by primary, secondary, and higher education teachers. Revista Electrónica Interuniversitaria de Formación del Profesorado 27: 207–25. [Google Scholar] [CrossRef]
  28. DeCarlo, Lawrence T. 1997. On the Meaning and Use of Kurtosis. Psychological Methods 2: 292–307. [Google Scholar] [CrossRef]
  29. Del Moral-Pérez, M. Esther, Nerea López-Bouzas, and Jonathan Castañeda-Fernández. 2024. Transmedia skill derived from the process of converting films into educational games with augmented reality and artificial intelligence. Journal of New Approaches in Educational Research 13: 15. [Google Scholar] [CrossRef]
  30. Faul, Franz, Edgar Erdfelder, Axel Buchner, and Albert-Georg Lang. 2009. Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods 41: 1149–60. [Google Scholar] [CrossRef] [PubMed]
  31. Galindo-Domínguez, Héctor, Nahia Delgado, Lucía Campo, and Martín Sainz de la Maza. 2024. Uso de ChatGpt en educación superior. Un análisis en función del género, rendimiento académico, año y grado universitario del alumnado. Red U 22: 16–30. [Google Scholar] [CrossRef]
  32. Galindo-Domínguez, Héctor, Nahia Delgado, María-Victoria Urruzola, Jose-María Etxabe, and Lucía Campo. 2025. Using artificial intelligence to promote adolescents’ learning motivation. A longitudinal intervention from the self-determination theory. Journal of Computer Assisted Learning 41: e70020. [Google Scholar] [CrossRef]
  33. García Cabrero, Benilde, Javier Loredo Enríquez, and Guadalupe Carranza Peña. 2008. Análisis de la práctica educativa de los docentes: Pensamiento, interacción y reflexión. Revista Electrónica de Investigación Educativa 10: 1–15. [Google Scholar]
  34. Gerbing, David W., and James C. Anderson. 1984. On the Meaning of Within-Factor Correlated Measurement Errors. Journal of Consumer Research 11: 572–80. [Google Scholar] [CrossRef]
  35. Costa, Guilherme Gonçalves, Wilton J. D. Nascimento Júnior, Murilo Nícolas Mombelli, and Gildo Girotto Júnior. 2024. Revisiting a teaching sequence on the topic of electrolysis: A comparative study with the use of artificial intelligence. Journal of Chemical Education 101: 3255–63. [Google Scholar] [CrossRef]
  36. Hair, Joseph F., G. Tomas M. Hult, Christian Ringle, and Marko Sarstedt. 2016. A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM). London: SAGE. [Google Scholar]
  37. Hair, Joseph F., William C. Black, Barry J. Babin, and Rolph E. Anderson. 2019. Multivariate Data Analysis. Boston: Cengage. [Google Scholar]
  38. Hair, Joseph F., G. Tomas M. Hult, Christian Ringle, Marko Sarstedt, Nicholas P. Danks, and Siegfried Roy. 2022. Partial Least Squares Structural Equation Modeling (PLS-SEM) Using R: A Workbook. Cham: Springer. [Google Scholar]
  39. Hargreaves, Andy, and Michael Fullan. 2012. Professional Capital: Transforming Teaching in Every School. New York: Teachers College Press. [Google Scholar]
  40. Henseler, Jörg, Christian M. Ringle, and Marko Sarstedt. 2015. A new criterion for assessing discriminant validity in variance-based structural equation modeling. Journal of the Academy of Marketing Science 43: 115–35. [Google Scholar] [CrossRef]
  41. Hermida, Rafael. 2015. The Problem of Allowing Correlated Errors in Structural Equation Modeling: Concerns and Considerations. Computational Methods in Social Sciences 3: 5–17. [Google Scholar]
  42. Hezam, Abdulrahman Mokbel Mahyoub, and Abdulelah Alkhateeb. 2024. Short stories and AI tools: An exploratory study. Theory and Practice in Language Studies 14: 2053–62. [Google Scholar] [CrossRef]
  43. Hu, Li-tze, and Peter M. Bentler. 1999. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal 6: 1–55. [Google Scholar] [CrossRef]
  44. Jefatura del Estado. 2018. Ley Orgánica 3/2018, de Protección de Datos Personales y Garantía de los Derechos Digitales. Madrid: Boletín Oficial del Estado (BOE), No. 294 (December 6), pp. 1–118. Available online: https://www.boe.es/buscar/pdf/2018/BOE-A-2018-16673-consolidado.pdf (accessed on 20 March 2025).
  45. Jöreskog, Karl G., and Dag Sörbom. 1993. LISREL 8: Structural Equation Modeling with the SIMPLIS Command Language. Mahwah: Lawrence Erlbaum Associates. [Google Scholar]
  46. Kalniņa, Daiga, Dita Nīmante, and Sanita Baranova. 2024. Artificial intelligence for higher education: Benefits and challenges for pre-service teachers. Frontiers in Education 9: 1501819. [Google Scholar] [CrossRef]
  47. Karran, Alex J., Patrick Charland, J. T. Martineau, A. de Arana, A. M. Lesage, Sylvain Senecal, and Pierre-Majorique Leger. 2024. Multi-stakeholder perspective on responsible artificial intelligence and acceptability in education. arXiv arXiv:2402.15027. [Google Scholar]
  48. Kenny, David A. 2011. Respecification of Latent Variable Models. Structural Equation Modeling. Available online: https://davidakenny.net/cm/respec.htm (accessed on 20 March 2025).
  49. Kenny, David A., and Silvia Milan. 2012. Identification: A nontechnical discussion of a technical issue. In Handbook of Structural Equation Modeling. Edited by Rick H. Hoyle. New York: The Guilford Press, pp. 145–63. [Google Scholar]
  50. Khalil, Hanan, and Said Alsenaidi. 2024. Teachers’ digital competencies for effective AI integration in higher education in Oman. Journal of Education and E-Learning Research 11: 698–707. [Google Scholar] [CrossRef]
  51. Kim, Jong Hae. 2019. Multicollinearity and Misleading Statistical Results. Korean Journal of Anesthesiology 72: 558–69. [Google Scholar] [CrossRef] [PubMed]
  52. Kline, Rex B. 2016. Principles and Practice of Structural Equation Modeling, 4th ed. New York: Guilford Press. [Google Scholar]
  53. Kocakaya, Serhat, and Fatih Kocakaya. 2014. A structural equation modeling on factors of how experienced teachers affect the students’ science and mathematics achievements. Education Research International 2014: 490371. [Google Scholar] [CrossRef]
  54. Lee, Y. 2023. A study on the effectiveness analysis of liberal arts education for the improvement of artificial intelligence literacy of pre-service teachers. Journal of Computer Education Research 26: 73–89. [Google Scholar] [CrossRef]
  55. Lin, Hua. 2022. Influences of Artificial Intelligence in Education on Teaching Effectiveness. International Journal of Emerging Technologies in Learning (iJET) 17: 144–56. [Google Scholar] [CrossRef]
  56. Lü, Hong, Ling He, Hao Yu, Tong Pan, and Kai Fu. 2024. A Study on Teachers’ Willingness to Use Generative AI. Sustainability 16: 7216. [Google Scholar] [CrossRef]
  57. MacCallum, Robert C., and Sehee Hong. 1997. Power analysis in covariance structure modeling using GFI and AGFI. Multivariate Behavioral Research 32: 193–210. [Google Scholar] [CrossRef]
  58. Mardia, Kanti V. 1970. Measures of Multivariate Skewness and Kurtosis with Applications. Biometrika 57: 519–30. [Google Scholar] [CrossRef]
  59. Mardia, Kanti V. 1974. Applications of Some Measures of Multivariate Skewness and Kurtosis in Testing Normality and Robustness Studies. Sankhyā: The Indian Journal of Statistics Series B 36: 115–28. [Google Scholar]
  60. Marsh, Herbert W., and Dennis Hocevar. 1985. Application of Confirmatory Factor Analysis to the Study of Self Concept: First and Higher Order Factor Models and Their Invariance Across Groups. Psychological Bulletin 97: 562–82. [Google Scholar] [CrossRef]
  61. Martínez-Domingo, José-Antonio, José-María Romero-Rodríguez, Arturo Fuentes-Cabrera, and Inmaculada Aznar-Díaz. 2024. Los Influencers y su Papel en la Educación: Una Revisión Sistemática. Educar. [Google Scholar] [CrossRef]
  62. Ministerio de Universidades. 2022. Libro de Trabajo: Academica21_EEUU. Estudiantes en las Universidades Españolas. Available online: https://public.tableau.com/views/Academica21_EEU/InfografiaEEU?%3AshowVizHome=no&%3Aembed=true (accessed on 20 March 2025).
  63. Montero, Isabel, and Orlando G. León. 2005. A guide for naming research studies in Psychology. International Journal of Clinical and Health Psychology 5: 115–27. [Google Scholar]
  64. Mora-Cantallops, Marçal, Andreia Inamorato dos Santos, Cristina Villalonga-Gómez, Juan Ramón Lacalle Remigio, Juan Camarillo Casado, José Manuel Sota Eguizabal, Juan Ramón Velasco, and Pedro Miguel Ruiz Martínez. 2022. The Digital Competence of Academics in Spain: A Study Based on the European Frameworks DigCompEdu and OpenEdu. Luxembourg: Publications Office of the European Union. [Google Scholar] [CrossRef]
  65. Moreira-Zambrano, Mayra del Carmen, Yurelquis Marzo-Villalón, and Segress García-Hevia. 2025. IA en primer año de bachillerato técnico: Guía didáctica para su uso ético y eficaz. Scientific Journal MQRInvestigar 9: 1–35. [Google Scholar] [CrossRef]
  66. Muijs, Daniel. 2022. Doing Quantitative Research in Education with IBM SPSS Statistics, 3rd ed. London: SAGE. [Google Scholar]
  67. Núñez, Raúl Prada, William Rodrigo Avendaño Castro, and Cesar Augusto Hernández Suarez. 2022. Globalización y cultura digital en entornos educativos. Revista Boletín Redipe 11: 262–72. [Google Scholar] [CrossRef]
  68. Peng, Yi, Yanyu Wang, and Jie Hu. 2023. Examining ICT attitudes, use and support in blended learning settings for students’ reading performance: Approaches of artificial intelligence and multilevel model. Computers & Education 203: 104846. [Google Scholar] [CrossRef]
  69. Pérez Ibáñez, J. D. 2024. Curso de Evaluación Competencial con IA. Available online: https://jose-david.com/ (accessed on 20 March 2025).
  70. Piette, Jacques. 2000. La educación en medios de comunicación y las nuevas tecnologías. Comunicar 14: 17–22. [Google Scholar] [CrossRef]
  71. Quicaño-Arones, César, Carla León-Fernández, and Antonio Moquillaza-Vizarreta. 2019. Un modelo para medir el comportamiento en la aceptación tecnológica del servicio de Internet en hoteles peruanos basado en UTAUT2. Caso ‘Casa Andina’. 3C TIC: Cuadernos de Desarrollo Aplicados a las TIC 8: 12–35. [Google Scholar] [CrossRef]
  72. Rahimi, Ali Reza, and Zahra Mosalli. 2024. The Role of 21-Century Digital Competence in Shaping Pre-Service Language Teachers’ Skills. Journal of Computers in Education 12: 165–89. [Google Scholar] [CrossRef]
  73. Rosseel, Yves. 2012. lavaan: An R Package for Structural Equation Modeling. Journal of Statistical Software 48: 1–36. [Google Scholar] [CrossRef]
  74. Russo, Giuseppe Maria, Patricia Amelia Tomei, Bernardo Serra, and Sylvia Mello. 2021. Differences in the use of 5- or 7-point Likert scale: An application in food safety culture. Organizational Cultures: An International Journal 21: 1–17. [Google Scholar] [CrossRef]
  75. Saifi, Sajuddin, Shaista Tanveer, Mohd Arwab, Dori Lal, and Nabila Mirza. 2025. Exploring the persistence of Open AI adoption among users in Indian higher education: A fusion of TCT and TTF model. Education and Information Technologies. [Google Scholar] [CrossRef]
  76. Satorra, Albert, and Peter M. Bentler. 1994. Corrections to Test Statistics and Standard Errors in Covariance Structure Analysis. In Latent Variables Analysis: Applications for Developmental Research. Edited by Alexander von Eye and Clifford C. Clogg. Thousand Oaks: SAGE, pp. 399–419. [Google Scholar]
  77. Skantz-Åberg, Emma, Anna Lantz-Andersson, Monica Lundin, and Peter Williams. 2022. Teachers’ Professional Digital Competence. Cogent Education 9: 1–23. [Google Scholar] [CrossRef]
  78. Suárez Rodríguez, José M., and José M. Jornet Meliá. 1994. Evaluación referida al criterio: Construcción de un test criterial de clase. In Problemas y Métodos de Investigación en Educación Personalizada. Edited by Víctor García-Hoz Rosales. Madrid: Rialp, pp. 419–43. [Google Scholar]
  79. Suconota-Pintado, Luis, Rubén Sánchez-Prado, Carlos Orellana-Peláez, and Walter Ávila-Aguilar. 2023. IA y Sostenibilidad. Magazine de las Ciencias 8: 12–28. [Google Scholar] [CrossRef]
  80. Svoboda, Petr. 2024. Teachers’ Digital Competencies. TEM Journal 13: 2195–207. [Google Scholar] [CrossRef]
  81. Tenberga, Ilze, and Linda Daniela. 2024. Artificial Intelligence Literacy Competencies for Teachers Through Self-Assessment Tools. Sustainability 16: 10386. [Google Scholar] [CrossRef]
  82. Tong, Ying, and Li Zhang. 2023. Discovering the Next Decade’s Synthetic Biology Research Trends with ChatGPT. Synthetic and Systems Biotechnology 8: 220–23. [Google Scholar] [CrossRef]
  83. Universidad de Huelva. 2022. Copilot. Available online: https://www.uhu.es/copilot (accessed on 1 February 2025).
  84. Universidad de Sevilla. 2024. Guías Copilot. Microsoft 365 Universidad de Sevilla. Available online: https://m365.us.es/es/guias/copilot (accessed on 20 March 2025).
  85. Usán-Supervía, Pedro, and Raquel Castellanos-Vega. 2024. Fomento de la Motivación Académica y la Competencia Digital. Revista Electrónica de Investigación Psicoeducativa 22: 419–40. [Google Scholar] [CrossRef]
  86. Van den Berg, Geesje, and Elize du Plessis. 2023. ChatGPT and generative AI: Possibilities for its contribution to lesson planning, critical thinking, and openness in teacher education. Education Sciences 13: 998. [Google Scholar] [CrossRef]
  87. Venkatesh, Viswanath, James Y. L. Thong, and Xin Xu. 2012. Consumer Acceptance and Use of Information Technology: Extending the Unified Theory of Acceptance and Use of Technology. MIS Quarterly 36: 157–78. [Google Scholar] [CrossRef]
  88. Ventura-León, José L., and Tomás Caycho-Rodríguez. 2017. El Coeficiente Omega: Un Método Alternativo para la Estimación de la Confiabilidad. Revista Latinoamericana de Ciencias Sociales, Niñez y Juventud 15: 625–27. [Google Scholar]
  89. Voogt, Joke. 2010. Teacher Factors Associated with Innovative Curriculum Goals and Pedagogical Practices. Journal of Computer Assisted Learning 26: 453–64. [Google Scholar] [CrossRef]
  90. Wang, Ying, and Lin Sun. 2024. Research on the Influence Mechanism of the Willingness to Apply Generative AI. New Explorations in Education and Teaching 2: 54–57. [Google Scholar] [CrossRef]
  91. Wu, Guangdong, Cong Liu, Xianbo Zhao, and Jian Zuo. 2017. Investigating the relationship between communication-conflict interaction and project success among construction project teams. International Journal of Project Management 35: 1466–82. [Google Scholar] [CrossRef]
  92. Zawacki-Richter, Olaf, Victoria I. Marín, Melissa Bond, and Franziska Gouverneur. 2019. Systematic review of research on artificial intelligence applications in higher education—where are the educators? International Journal of Educational Technology in Higher Education 16: 39. [Google Scholar] [CrossRef]
  93. Zhang, Yuhong. 2024. Research on the connotation, dilemmas, and practical path of digital and intelligent competency of university teachers in the ‘Big Data + Artificial Intelligence’ era. Journal of Computer Technology and Electronic Research 1: 1–14. [Google Scholar] [CrossRef]
  94. Zhang, Chengming, Jessica Schießl, Lea Plößl, Florian Hofmann, and Michaela Gläser-Zikuda. 2023. Acceptance of artificial intelligence among pre-service teachers: A multi-group analysis. International Journal of Educational Technology in Higher Education 20: 49. [Google Scholar] [CrossRef]
Figure 1. Hypothetical moderation SEM model. Note. PE = professional engagement; RES = digital resource creation; PEX = performance expectancy; EEX = effort expectancy; INF = social influence; FACs = facilitating conditions; MOT = hedonic motivation; VAL = price value; HABs = habits; BEH = behavioral intention.
Figure 2. Structural equation modelling with indirect effects. Note. While the model was estimated based on the proposed hypotheses, to enhance result interpretation only statistically significant indirect effects are displayed. Non-significant relationships are denoted with an asterisk (*). COMP = professional engagement; RES = digital resource creation; PEX = performance expectancy; EEX = effort expectancy; INF = social influence; FACs = facilitating conditions; MOT = hedonic motivation; VAL = price value; HABs = habits; BEH = behavioral intention.
Table 1. Normality and multicollinearity statistics of the sample.

| Variable | M | SD | VIF | Tolerance | K-S-L | S-W | Skewness | Kurtosis |
|---|---|---|---|---|---|---|---|---|
| ENG1 | 4.110 | 1.657 | 1.418 | 0.705 | 0.168 *** | 0.905 *** | 0.203 | −0.853 |
| ENG2 | 3.480 | 1.344 | 1.450 | 0.690 | 0.320 *** | 0.810 *** | 1.251 | 1.223 |
| ENG3 | 3.700 | 1.601 | 1.510 | 0.662 | 0.184 *** | 0.927 *** | 0.161 | −1.066 |
| ENG4 | 2.590 | 1.449 | 1.210 | 0.826 | 0.238 *** | 0.877 *** | 0.768 | −0.167 |
| RES1 | 3.820 | 1.499 | 1.405 | 0.712 | 0.211 *** | 0.913 *** | 0.190 | −1.114 |
| RES2 | 3.930 | 1.854 | 1.400 | 0.714 | 0.155 *** | 0.923 *** | 0.068 | −1.209 |
| RES3 | 3.910 | 1.533 | 1.329 | 0.753 | 0.157 *** | 0.947 *** | 0.218 | −0.588 |
| PEX1 | 4.150 | 0.828 | 1.569 | 0.637 | 0.252 *** | 0.802 *** | −1.042 | 1.543 |
| PEX3 | 4.160 | 0.842 | 1.491 | 0.671 | 0.261 *** | 0.796 *** | −1.132 | 1.744 |
| PEX4 | 3.520 | 1.073 | 1.300 | 0.769 | 0.208 *** | 0.899 *** | −0.377 | −0.510 |
| EEX1 | 3.970 | 0.851 | 2.097 | 0.477 | 0.259 *** | 0.846 *** | −0.666 | 0.393 |
| EEX2 | 3.860 | 0.858 | 1.857 | 0.538 | 0.268 *** | 0.855 *** | −0.608 | 0.406 |
| EEX3 | 3.980 | 0.857 | 2.091 | 0.478 | 0.266 *** | 0.841 *** | −0.746 | 0.580 |
| EEX4 | 3.850 | 0.857 | 1.839 | 0.544 | 0.269 *** | 0.857 *** | −0.585 | 0.319 |
| INF1 | 3.240 | 0.968 | 2.441 | 0.410 | 0.250 *** | 0.888 *** | −0.071 | 0.002 |
| INF2 | 3.260 | 0.949 | 3.037 | 0.329 | 0.262 *** | 0.877 *** | −0.086 | 0.173 |
| INF3 | 3.170 | 0.952 | 2.350 | 0.426 | 0.263 *** | 0.875 *** | −0.104 | 0.247 |
| FAC1 | 4.020 | 0.816 | 1.634 | 0.612 | 0.288 *** | 0.824 *** | −0.791 | 0.709 |
| FAC2 | 3.770 | 0.878 | 1.569 | 0.637 | 0.262 *** | 0.869 *** | −0.464 | −0.088 |
| FAC3 | 4.070 | 0.743 | 1.359 | 0.736 | 0.287 *** | 0.810 *** | −0.728 | 1.116 |
| FAC4 | 3.900 | 0.840 | 1.255 | 0.797 | 0.311 *** | 0.828 *** | −0.827 | 0.881 |
| MOT1 | 3.800 | 0.862 | 2.391 | 0.418 | 0.220 *** | 0.861 *** | −0.324 | −0.036 |
| MOT2 | 3.820 | 0.806 | 2.100 | 0.476 | 0.247 *** | 0.855 *** | −0.288 | −0.025 |
| MOT3 | 3.750 | 0.858 | 2.585 | 0.387 | 0.227 *** | 0.865 *** | −0.311 | 0.003 |
| VAL1 | 3.370 | 0.949 | 2.154 | 0.464 | 0.240 *** | 0.888 *** | −0.118 | −0.059 |
| VAL2 | 3.420 | 0.901 | 2.942 | 0.340 | 0.262 *** | 0.871 *** | −0.062 | 0.110 |
| VAL3 | 3.420 | 0.871 | 2.648 | 0.378 | 0.269 *** | 0.865 *** | −0.029 | 0.189 |
| HAB1 | 3.280 | 1.165 | 1.710 | 0.585 | 0.194 *** | 0.907 *** | −0.325 | −0.673 |
| HAB2 | 2.420 | 1.249 | 1.730 | 0.578 | 0.203 *** | 0.877 *** | 0.515 | −0.767 |
| HAB3 | 3.110 | 1.033 | 1.417 | 0.706 | 0.218 *** | 0.905 *** | −0.222 | −0.304 |
| BEH1 | 3.870 | 0.832 | 1.642 | 0.609 | 0.284 *** | 0.839 *** | −0.744 | 1.083 |
| BEH2 | 3.010 | 1.058 | 1.732 | 0.577 | 0.192 *** | 0.915 *** | 0.022 | −0.513 |
| BEH3 | 3.440 | 0.971 | 2.269 | 0.441 | 0.226 *** | 0.887 *** | −0.487 | 0.082 |
| Multivariate | | | | | | | 111.490 *** | 1405.524 *** |

Note. M = mean; SD = standard deviation; VIF = variance inflation factor; K-S-L = Kolmogorov–Smirnov test statistic with Lilliefors’ correction; S-W = Shapiro–Wilk test statistic; *** Significant at p < 0.001.
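For reproducibility, the diagnostics in Table 1 can be obtained along the following lines. This is a minimal R sketch rather than the authors’ actual script: it assumes the item responses are stored in a data frame named items (a hypothetical name), and R is assumed as the analysis environment given the citation of lavaan (Rosseel 2012) in the references.

```r
# Minimal sketch of the Table 1 diagnostics; `items` is a hypothetical data
# frame holding the questionnaire items (ENG1 ... BEH3).
library(nortest)  # lillie.test(): Kolmogorov-Smirnov test with Lilliefors' correction
library(psych)    # mardia(), skew(), kurtosi()

univariate <- data.frame(
  M        = sapply(items, mean),
  SD       = sapply(items, sd),
  K_S_L    = sapply(items, function(x) lillie.test(x)$statistic),
  S_W      = sapply(items, function(x) shapiro.test(x)$statistic),
  Skewness = sapply(items, psych::skew),
  Kurtosis = sapply(items, psych::kurtosi)
)

# Mardia's (1970, 1974) multivariate skewness and kurtosis
psych::mardia(items, plot = FALSE)

# VIF of an item = 1 / (1 - R^2) when regressing it on all remaining items;
# tolerance is the reciprocal of the VIF. Illustrated here for ENG1.
r2_eng1  <- summary(lm(ENG1 ~ ., data = items))$r.squared
vif_eng1 <- 1 / (1 - r2_eng1)
tol_eng1 <- 1 / vif_eng1
```

Tolerance is simply the reciprocal of the VIF, which is why Table 1 reports the two side by side.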
Table 2. Standardized factor loadings alongside convergent and discriminant validity measures.

| Latent Variables and Indicators | Correlated Error With | λ | R² | AVE | α | ω | CR |
|---|---|---|---|---|---|---|---|
| Control | | | | | | | |
| Professional Engagement (ENG) | | | | 0.405 | 0.721 | 0.729 | 0.728 |
| ENG1 | | 0.669 | 0.448 | | | | |
| ENG2 | | 0.630 | 0.397 | | | | |
| ENG3 | | 0.717 | 0.514 | | | | |
| ENG4 | | 0.510 | 0.260 | | | | |
| Digital Resource Creation (RES) | | | | 0.444 | 0.700 | 0.706 | 0.705 |
| RES1 | | 0.702 | 0.492 | | | | |
| RES2 | | 0.660 | 0.436 | | | | |
| RES3 | | 0.636 | 0.405 | | | | |
| Performance Expectancy (PEX) | | | | 0.419 | 0.707 | 0.708 | 0.683 |
| PEX1 | PEX3 | 0.680 | 0.463 | | | | |
| PEX3 | PEX1 | 0.614 | 0.377 | | | | |
| PEX4 | | 0.646 | 0.417 | | | | |
| Effort Expectancy (EEX) | | | | 0.598 | 0.855 | 0.855 | 0.856 |
| EEX1 | EEX3 | 0.738 | 0.545 | | | | |
| EEX2 | EEX4 | 0.816 | 0.666 | | | | |
| EEX3 | EEX1 | 0.733 | 0.538 | | | | |
| EEX4 | EEX2 | 0.802 | 0.643 | | | | |
| Social Influence (INF) | | | | 0.710 | 0.884 | 0.886 | 0.880 |
| INF1 | INF2 | 0.804 | 0.647 | | | | |
| INF2 | INF1 | 0.892 | 0.796 | | | | |
| INF3 | | 0.828 | 0.686 | | | | |
| Facilitating Conditions (FACs) | | | | 0.410 | 0.742 | 0.748 | 0.733 |
| FAC1 | FAC2 | 0.666 | 0.443 | | | | |
| FAC2 | FAC1 | 0.715 | 0.511 | | | | |
| FAC3 | | 0.628 | 0.395 | | | | |
| FAC4 | | 0.539 | 0.290 | | | | |
| Hedonic Motivation (MOT) | | | | 0.733 | 0.871 | 0.873 | 0.892 |
| MOT1 | MOT2 | 0.894 | 0.799 | | | | |
| MOT2 | MOT1 | 0.863 | 0.745 | | | | |
| MOT3 | | 0.810 | 0.657 | | | | |
| Price Value (VAL) | | | | 0.525 | 0.881 | 0.882 | 0.768 |
| VAL1 | VAL3 | 0.793 | 0.629 | | | | |
| VAL2 | | 0.686 | 0.471 | | | | |
| VAL3 | VAL1 | 0.690 | 0.476 | | | | |
| Habits (HABs) | | | | 0.739 | 0.770 | 0.781 | 0.895 |
| HAB1 | | 0.826 | 0.683 | | | | |
| HAB2 | HAB3 | 0.860 | 0.739 | | | | |
| HAB3 | HAB2 | 0.892 | 0.796 | | | | |
| Behavioral Intention (BEH) | | | | 0.627 | 0.797 | 0.815 | 0.834 |
| BEH1 | BEH2 | 0.773 | 0.597 | | | | |
| BEH2 | BEH1 | 0.786 | 0.617 | | | | |
| BEH3 | | 0.816 | 0.666 | | | | |
| Total | | N/A | N/A | N/A | 0.906 | 0.895 | N/A |

Note. λ = standardized factor loading; R² = coefficient of determination; AVE = average variance extracted; α = Cronbach’s Alpha; ω = McDonald’s Omega; CR = composite reliability. Entries in the “Correlated Error With” column indicate the indicator with which an item’s measurement error was allowed to covary.
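The measurement model summarized in Table 2 maps onto lavaan syntax (Rosseel 2012, cited above). The sketch below is a reconstruction under stated assumptions, not the authors’ code: factor and indicator names follow the tables, the paired indicators in Table 2 are read as correlated measurement errors, and the MLM estimator (maximum likelihood with Satorra–Bentler corrections, also cited above) is assumed given the non-normality reported in Table 1.

```r
library(lavaan)
library(semTools)  # reliability(): alpha, omega, composite reliability, AVE

cfa_model <- '
  ENG =~ ENG1 + ENG2 + ENG3 + ENG4
  RES =~ RES1 + RES2 + RES3
  PEX =~ PEX1 + PEX3 + PEX4
  EEX =~ EEX1 + EEX2 + EEX3 + EEX4
  INF =~ INF1 + INF2 + INF3
  FAC =~ FAC1 + FAC2 + FAC3 + FAC4
  MOT =~ MOT1 + MOT2 + MOT3
  VAL =~ VAL1 + VAL2 + VAL3
  HAB =~ HAB1 + HAB2 + HAB3
  BEH =~ BEH1 + BEH2 + BEH3

  # correlated measurement errors for the indicator pairs flagged in Table 2
  PEX1 ~~ PEX3
  EEX1 ~~ EEX3
  EEX2 ~~ EEX4
  INF1 ~~ INF2
  FAC1 ~~ FAC2
  MOT1 ~~ MOT2
  VAL1 ~~ VAL3
  HAB2 ~~ HAB3
  BEH1 ~~ BEH2
'

# MLM = maximum likelihood with Satorra-Bentler corrections (an assumption,
# consistent with the non-normal item distributions in Table 1)
fit_cfa <- cfa(cfa_model, data = items, estimator = "MLM")

standardizedSolution(fit_cfa)   # standardized loadings (lambda)
lavInspect(fit_cfa, "rsquare")  # R-squared of each indicator
semTools::reliability(fit_cfa)  # alpha, omega, CR and AVE per construct
```

semTools::reliability() returns Cronbach’s α, McDonald’s ω, and the AVE for each latent variable, which suffices to reproduce the right-hand columns of Table 2.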
Table 3. Discriminant validity measures.

| | ENG | RES | PEX | EEX | INF | FAC | MOT | VAL | HAB | BEH |
|---|---|---|---|---|---|---|---|---|---|---|
| ENG | 0.14 | | | | | | | | | |
| RES | 0.89 | 0.14 | | | | | | | | |
| PEX | 0.17 | 0.21 | 0.34 | | | | | | | |
| EEX | 0.22 | 0.22 | 0.75 | 0.29 | | | | | | |
| INF | 0.22 | 0.20 | 0.58 | 0.38 | 0.19 | | | | | |
| FAC | 0.30 | 0.29 | 0.61 | 0.76 | 0.36 | 0.25 | | | | |
| MOT | 0.25 | 0.20 | 0.70 | 0.61 | 0.43 | 0.62 | 0.259 | | | |
| VAL | 0.23 | 0.21 | 0.57 | 0.42 | 0.62 | 0.22 | 0.523 | 0.24 | | |
| HAB | 0.21 | 0.22 | 0.48 | 0.41 | 0.47 | 0.45 | 0.513 | 0.59 | 0.19 | |
| BEH | 0.165 | 0.155 | 0.735 | 0.494 | 0.558 | 0.456 | 0.669 | 0.873 | 0.552 | N/A |

Note. The diagonal represents each construct’s average shared variance (ASV). It should be noted that behavioral intention (BEH) was not considered for the inter-construct correlation analysis, as it is not a purely reflective construct, as per the original structure of the UTAUT2 instrument.
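The ASV values on the diagonal of Table 3 can be reproduced from the fitted measurement model; a short sketch, assuming the fit_cfa object from the block above:

```r
# ASV: mean of a construct's squared correlations with all other constructs
phi  <- lavInspect(fit_cfa, "cor.lv")  # latent (inter-construct) correlation matrix
phi2 <- phi^2                          # shared variance between construct pairs
asv  <- sapply(seq_len(ncol(phi2)), function(i) mean(phi2[i, -i]))
names(asv) <- colnames(phi2)
round(asv, 3)                          # compare against the diagonal of Table 3
```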
Table 4. Criterion-related validity measures.

| Factor | Levene’s Test | Independent Samples t Test |
|---|---|---|
| ENG | 10.532 ** | −12.933 *** |
| RES | 16.467 *** | −28.483 *** |
| PEX | 3.552 | −2.903 ** |
| EEX | 0.087 | −3.011 ** |
| INF | 4.532 * | −2.078 * |
| FAC | 0.685 | −4.193 *** |
| MOT | 2.652 | −1.805 |
| VAL | 8.420 ** | −3.025 ** |
| HAB | 5.648 * | −1.398 |
| BEH | 4.110 * | −0.871 |

Note. Given that the normal distribution of the sample was previously confirmed, Student’s t test was implemented, except for factors that violated the homoscedasticity assumption between the studied groups, for which Welch’s adjusted independent samples t test was used instead. Asterisks indicate levels of statistical significance: * p < 0.05; ** p < 0.01; *** p < 0.001.
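A sketch of the Table 4 procedure follows. The data frame and grouping variable names (scores, group) are hypothetical, as the compared groups are not restated here; the logic, Levene’s test first and then Student’s or Welch’s t accordingly, is as described in the note above.

```r
library(car)  # leveneTest()

# `scores` is a hypothetical data frame with one mean score per factor and a
# two-level factor `group` identifying the compared samples.
leveneTest(PEX ~ group, data = scores)                 # homoscedasticity check

t.test(PEX ~ group, data = scores, var.equal = TRUE)   # Student's t (Levene n.s.)
t.test(ENG ~ group, data = scores, var.equal = FALSE)  # Welch's t (Levene significant)
```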
Table 5. Relationships established between teachers’ professional digital competencies and the use and uptake of AI.

| Hypothesis | Relationship | β | SD | z | p | Status |
|---|---|---|---|---|---|---|
| H1A | COMP → PEX | −0.586 | 0.056 | −1.434 | 0.000 | Confirmed |
| H1B | COMP → EEX | −0.379 | 0.041 | −9.267 | 0.000 | Confirmed |
| H1C | COMP → INF | −0.529 | 0.053 | −9.937 | 0.000 | Confirmed |
| H1D | COMP → MOT | −0.516 | 0.053 | −9.710 | 0.000 | Confirmed |
| H1E | COMP → VAL | −0.494 | 0.051 | −9.637 | 0.000 | Confirmed |
| H1F | COMP → HAB | −0.855 | 0.082 | −1.394 | 0.000 | Confirmed |
| H2A | REC → PEX | 1.712 | 0.099 | 17.261 | 0.000 | Confirmed |
| H2B | REC → EEX | 1.357 | 0.079 | 17.238 | 0.000 | Confirmed |
| H2C | REC → INF | 1.605 | 0.096 | 16.797 | 0.000 | Confirmed |
| H2D | REC → FAC | 0.719 | 0.038 | 19.169 | 0.000 | Confirmed |
| H2E | REC → MOT | 1.717 | 0.098 | 17.572 | 0.000 | Confirmed |
| H2F | REC → VAL | 1.582 | 0.093 | 17.024 | 0.000 | Confirmed |
| H2G | REC → HAB | 2.419 | 0.141 | 17.187 | 0.000 | Confirmed |

Note. β = standardized path coefficient; SD = standard deviation (standard error) of the estimate; z = critical ratio; COMP = professional engagement; REC = digital resource creation.
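The structural paths tested in Table 5 (with COMP corresponding to the ENG factor and REC to RES) translate into lavaan regression syntax as follows; again a sketch built on the hypothetical cfa_model string defined after Table 2, not the authors’ script.

```r
# Structural regressions corresponding to H1A-H1F and H2A-H2G in Table 5,
# appended to the measurement model sketched earlier. FAC is predicted by
# RES only, matching the hypotheses actually tested.
structural_model <- paste(cfa_model, '
  PEX ~ ENG + RES
  EEX ~ ENG + RES
  INF ~ ENG + RES
  FAC ~ RES
  MOT ~ ENG + RES
  VAL ~ ENG + RES
  HAB ~ ENG + RES
')

fit_sem <- sem(structural_model, data = items, estimator = "MLM")

# Standardized coefficients with standard errors, z statistics and p values
est <- standardizedSolution(fit_sem)
est[est$op == "~", ]
```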
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
