Article

Initial Validation of the IMPACT Model: Technological Appropriation of ChatGPT by University Faculty

by Luz-M. Pereira-González 1, Andrea Basantes-Andrade 1,*, Miguel Naranjo-Toro 1 and Mailevy Guia-Pereira 2

1 Grupo de Investigación de Ciencia en Red (eCIER), Universidad Técnica del Norte, Ibarra EC100150, Ecuador
2 Asociación Científica Universitaria de Estudiantes de Medicina de la Universidad de Los Andes (ACUEM-ULA), Merida 5101, Venezuela
* Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(11), 1520; https://doi.org/10.3390/educsci15111520
Submission received: 24 September 2025 / Revised: 28 October 2025 / Accepted: 4 November 2025 / Published: 10 November 2025
(This article belongs to the Special Issue ChatGPT as Educative and Pedagogical Tool: Perspectives and Prospects)

Abstract

This study presents the initial validation of the IMPACT model, a psychometric tool developed to evaluate how university faculty adopt ChatGPT in higher education. It specifically addresses the existing gap in validated instruments designed for educators, as most prior research has focused on student-based adoption models. A total of 206 professors completed a 39-item Likert-scale questionnaire. Exploratory factor analysis using principal axis factoring with oblimin rotation identified the underlying structure of the instrument. Reliability and internal consistency were examined through Cronbach’s alpha and McDonald’s omega. The analysis revealed a five-factor structure comprising functional appropriation, ethical and academic concerns, cost and accessibility, facilitating conditions, and perceived reliability and trustworthiness. Intention to use and performance expectancy merged into a single factor, and social influence did not emerge as a determinant. The model demonstrated strong reliability and internal consistency across all dimensions. The IMPACT model offers a validated framework for understanding faculty adoption of ChatGPT, emphasizing functional, ethical, and infrastructural factors over social influence. These findings provide a foundation for confirmatory analyses and contribute to advancing theoretical and practical insights into AI integration in higher education teaching.

1. Introduction

The incorporation of generative language models based on artificial intelligence, such as ChatGPT, has sparked significant interest in the field of education (Alqahtani et al., 2023) and is redefining the dynamics of teaching and learning in higher education (Mourtajji & Arts-Chiss, 2024). Emerging technologies that use generative AI, including ChatGPT—language models based on deep learning, specifically on a type of neural network known as a transformer—can produce text that closely resembles human writing. This development has driven a profound transformation in educational practice, fueled by its ability to support content structuring, assessment, and instructional design (Javaid et al., 2023; Labadze et al., 2023; Naznin et al., 2025).
In the context of teaching practice, these tools enable the generation of instructional materials, the formulation of questions, grading of texts, the provision of automated feedback, and support for intelligent tutoring processes (Dong et al., 2024; Kasneci et al., 2023; Yan et al., 2023). They also contribute to educational inclusion by enhancing accessibility for students with specific needs and enable the design of more engaging learning environments through gamification strategies (Huber et al., 2024; Zirar, 2023).
However, alongside these benefits, significant challenges emerge that require critical assessment. Several studies warn of the risk of excessive dependence on these tools, which could hinder the development of critical thinking and intellectual autonomy (Kwak & Pardos, 2024; Dong et al., 2024). Moreover, ongoing concerns persist regarding data privacy, algorithmic transparency, and the perpetuation of cultural and linguistic biases, which could exacerbate pre-existing inequalities, particularly in Global South contexts (Bender, 2024; J. Lee et al., 2024; Zirar, 2023).
Considering this scenario, the effective adoption of models like ChatGPT by faculty members requires not only digital competencies and pedagogical judgment, but also a comprehensive framework to understand the factors influencing their acceptance and use. Empirically, most psychometric studies have focused on the student perspective, while evidence concerning faculty remains scarce and, in some cases, methodologically limited.
From the faculty’s perspective, research on the adoption of ChatGPT in higher education reveals divergent methodological approaches. At the international level, Barakat et al. (2025) have proposed a methodological framework based on the Technology Acceptance Model (TAM), employing Principal Components Analysis (PCA) as the extraction method, even though the goal was not to reduce correlated variables into uncorrelated components but rather to extract latent factors grounded in a preexisting theory, a purpose for which common factor extraction methods are more appropriate (Floyd & Widaman, 1995; Costello & Osborne, 2005; Matsunaga, 2010; Saif et al., 2024).
In turn, Sigüenza Orellana et al. (2024), in Ecuador, developed an instrument to measure teachers’ perceptions of ChatGPT by conducting an exploratory factor analysis to validate their CPDUChatGPT questionnaire. However, their methodological approach does not clearly specify the statistical assumptions verified for the application of the factor analysis, the extraction method used, or the type of rotation employed, elements that are essential to ensure scientific replicability.
Previous research in educational technologies has shown how the lack of clarity in factor analysis procedures can lead to unstable factor structures and inconsistent theoretical models (Thompson & Daniel, 1996; Worthington & Whittaker, 2006). This issue extends to the specific field of teacher technology adoption, where studies such as those by Venkatesh et al. (2003) have demonstrated the importance of following rigorous protocols in factor analysis to establish valid and generalizable technology acceptance models.
The IMPACT model addresses a conceptual gap not covered by theoretical frameworks such as TAM2 or UTAUT2, which explain the adoption of general technologies based on perceived usefulness, ease of use, and social influence. However, with the emergence of generative artificial intelligence, new dimensions have surfaced that were not previously considered and are directly related to the unique characteristics of a tool capable of generating content from user-issued prompts. In this sense, the IMPACT model deepens the understanding of disruptive technology adoption by providing a conceptual framework that integrates constructs such as ethical and academic concerns, perceived reliability, and functional appropriation. Notably, this latter dimension establishes the main difference from the CHASSIS model, as in student contexts, intention to use and perceived usefulness appear as separate dimensions.
Another feature that marks a clear distinction between the TAM2, UTAUT2, and CHASSIS models is that, in the IMPACT model, social influence does not emerge as a relevant dimension. This suggests that emotional maturity and professional experience may cause social influence to lose its decisive role in the adoption of ChatGPT among university faculty.

2. Theoretical Framework

2.1. Technology Acceptance Models

Generative artificial intelligence, particularly ChatGPT, has substantially transformed the contemporary technological and educational landscape, generating growing interest in understanding the factors that determine its adoption and continued use (Almogren et al., 2024; Salih et al., 2025). Regarding acceptance and intention to use, four foundational models have served as primary references in research on the adoption of digital innovations: the Technology Acceptance Model (TAM; Davis, 1989) and its extension TAM2 (Venkatesh & Davis, 2000), as well as the Unified Theory of Acceptance and Use of Technology (UTAUT; Venkatesh et al., 2003) and its later update, UTAUT2 (Venkatesh et al., 2012).
In TAM, Davis (1989) considered only two dimensions: perceived usefulness and ease of use (effort expectancy). Subsequently, TAM2 (Venkatesh & Davis, 2000) suggested that user acceptance is influenced by two types of processes: social influence—including subjective norm, voluntariness, and image—and cognitive instrumental processes—comprising job relevance, output quality, result demonstrability, and perceived ease of use. These factors jointly determine perceived usefulness, while perceived ease of use remains an independent variable that also functions as a predictor of perceived usefulness. Building on this foundation, UTAUT (Venkatesh et al., 2003) unified these constructs into four core determinants: performance expectancy, effort expectancy, social influence, and facilitating conditions. In a further extension of this conceptual framework, UTAUT2 (Venkatesh et al., 2012) added hedonic motivation, habit, and price value to extend its applicability to consumer contexts. These factors, depending on the context, may be moderated by variables such as age, gender, experience, and voluntariness of use (Abdalla et al., 2024; Dwivedi et al., 2019; Soares et al., 2024). UTAUT2 can explain up to 70% of the variance in behavioral intention (Venkatesh & Zhang, 2010), making it one of the most robust models available for understanding technology adoption in educational settings.
Research on emerging technology adoption has been primarily grounded in the Technology Acceptance Model (Ashfaq et al., 2020; Dahri et al., 2024; García et al., 2024; M. K. Kim et al., 2025; A. T. Lee et al., 2025; Ma et al., 2024). However, these models were developed before the emergence of generative artificial intelligence and therefore do not incorporate its inherent dimensions, such as ethical and academic concerns or the reliability of generated responses.
Drawing from these theoretical frameworks and current literature, the CHASSIS model (Pereira-González et al., 2025a) proposed seven conceptual dimensions that were subsequently empirically validated. However, its design focuses on learners, overlooking the complexities involved in technological adoption within professional pedagogical practice. University students typically operate in highly socialized environments where peer pressure and social media influence significantly affect their technological choices. Faculty, conversely, acting from a position of professional autonomy, possess a broader perspective that enables them to evaluate digital tools through more independent and academically rigorous criteria.
This distinction is reflected in two key empirical findings: first, social influence did not emerge as a significant factor in the analysis, indicating that pedagogical technology adoption is guided by independent professional judgment rather than social pressures. Second, an integrated dimension of functional appropriation emerged, combining intention to use and performance expectancy into a single construct. This integration reveals a cognitive process characteristic of experienced professionals, in which the assessment of a tool’s utility and its integration into pedagogical practice are not sequential stages but simultaneous and interdependent processes, guided by instrumental rationality oriented toward specific pedagogical objectives. Therefore, a faculty-specific model is needed—one that prioritizes dimensions associated with pedagogical usefulness, curricular integration, and functional appropriation over social or emotional factors, which are highly relevant in student populations but secondary or non-significant when examining technology adoption among faculty.

2.2. ChatGPT and Generative Artificial Intelligence in Educational Contexts

Several studies have adapted these models to analyze ChatGPT in academic settings; however, unlike earlier technologies, generative AI introduces determinants (such as ethical concerns and the reliability of generated content) that these frameworks do not capture (Alotaibi, 2025; Mennella et al., 2024). For this reason, theoretical approaches that incorporate these new determinants are necessary, particularly in university contexts where instructors and students adopt these tools within formal teaching and learning environments.

2.3. Determining Factors in ChatGPT Use Among University Educators

2.3.1. Perceived Usefulness

The possibility of enhancing academic performance is the most influential factor driving university students to adopt generative artificial intelligence. This core belief makes perceived usefulness the strongest and most consistent predictor of their intention to use such technologies (Alshammari & Babu, 2025; Al Murshidi et al., 2024; García-Alonso et al., 2024). Xue et al. (2024) conducted a systematic review of 162 articles—mostly based on samples from Asia and North America—and found that the most commonly used combination in studies of technology adoption is the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT). They also found that performance expectancy is the most influential determinant of usage intention.
This impact has been corroborated by findings from studies employing various statistical techniques, including exploratory (Sallam et al., 2023) and confirmatory factor analyses (Abdaljaleel et al., 2024), multivariate regression (Sallam et al., 2024), and structural equation modeling (Ngo et al., 2024; Saxena & Doleck, 2023). Similarly, Makkonen et al. (2023) demonstrated that perceived usefulness has a direct, positive, and highly significant effect on the intention to continue using generative AI chatbots in the workplace, identifying it as the key component of their model. These results reinforce the notion that this perception is essential in any model aiming to explain technology adoption.

2.3.2. Ethical and Cognitive Concerns

Ethical concerns can limit the intention to use ChatGPT. These issues can be categorized into four main domains: legal, humanistic, algorithmic, and informational. In the realm of algorithmic ethics, Bender et al. (2021) highlight the dangers associated with large language models like GPT-3, pointing to issues such as algorithmic bias, high energy consumption, and lack of transparency in their operation. Zhou et al. (2024) emphasize the cultural, ideological, and gender biases that these technologies may perpetuate. Stahl (2025) also addresses algorithmic ethics but broadens the focus to a more humanistic perspective, expressing concern over human dignity, autonomy, and moral responsibility. He advocates for a contextual and relational ethics framework, grounded in humanistic philosophy and applied to the development and use of emerging technologies such as ChatGPT.
Sebastian (2023) reviews the dual challenge of safeguarding users’ sensitive information while maintaining the efficiency of machine learning models and discusses critical concerns regarding data protection and privacy in the context of chatbot systems based on large language models (LLMs), with particular focus on ChatGPT. Meanwhile, S. J. Kim (2024a) highlights key legal implications such as intellectual property and copyright, plagiarism, attribution, and the confidentiality and privacy required during peer review processes when using AI tools that may store or process sensitive data. The absence of clear legal frameworks to define accountability among users, service providers, and AI developers generates uncertainty that can inhibit technology adoption.
Regarding concerns about the impact of generative artificial intelligence on memory, critical thinking, and cognitive skills, recent literature in educational contexts presents both positive and negative findings related to the use of tools like ChatGPT. Some studies have reported that the immediate access to answers provided by generative AI may limit the development of working memory and consequently reduce students’ ability to retain information, as it diminishes the cognitive effort required to process, solve problems, and consolidate knowledge (Abbas et al., 2024; León-Domínguez, 2024; Zhai et al., 2024). This phenomenon also appears to be associated with increased procrastination and a decline in academic performance when AI use is intensive and lacks reflective engagement (Bai et al., 2023; Gerlich, 2025).
Nevertheless, research has shown that when AI is appropriately integrated into the educational process, it can foster creativity, the development of critical thinking skills, and problem-solving abilities (Huamani-Anco & Maraza-Quispe, 2025; S. J. Kim, 2024b; Toma & Yánez-Pérez, 2024). AI supports the presentation of complex problems and personalized feedback, which stimulates analysis, self-reflection, and decision-making. In fact, the use of AI has been found to significantly enhance critical thinking, particularly when its implementation is oriented toward personalized learning and the critical evaluation of information (Ashraf et al., 2025; Lawasi et al., 2024; Fuchs & Aguilos, 2023; Mastrogiacomi, 2024). As a result, educational approaches are increasingly advocating for a sociocognitive architecture that integrates human and artificial intelligence in a complementary way to maximize benefits and minimize the risks associated with these technologies (Dubey et al., 2024; Mogavi et al., 2024). It is especially important to properly align technological innovation with the preservation of academic integrity (Fajt & Schiller, 2025).

2.3.3. Economic and Technological Barriers

The extension of the Unified Theory of Acceptance and Use of Technology—UTAUT2—proposed by Venkatesh et al. (2012), was designed to shift the application of UTAUT from an organizational context to one focused on voluntary adoption and consumer behavior. To implement this adaptation, the authors introduced three new determinants: hedonic motivation, habit, and price value—the latter defined as the balance between the cost of using the technology and the benefits it provides to the user (Y. Wang & Zhang, 2023; Suryavanshi et al., 2025). In contexts where the cost is borne individually by the user, price value becomes the most significant factor in the UTAUT2 model (Strzelecki et al., 2024).
In this regard, Parveen et al. (2024), using structural equation modeling, found that price value constitutes a significant latent variable when users are university students. Similarly, Saranza et al. (2024) identified a direct correlation between frequency of use and perceived value in the use of ChatGPT, and further noted that this correlation is moderated by users’ disposable income. These studies provide a valid foundation to support proposals aimed at adapting traditional technology acceptance models to the distinctive characteristics associated with the use of generative artificial intelligence.

2.3.4. Facilitating Conditions

Recent studies have consistently emphasized the importance of institutional support in the acceptance and intended use of ChatGPT by university faculty and students. According to Baek et al. (2024), institutional policies that authorize the use of ChatGPT are predictive of higher usage rates. Supporting this view, the study by Romero-Rodríguez et al. (2023), conducted using the UTAUT2 model, found that facilitating conditions—specifically access, availability of resources, and institutional support—play a crucial role in both the intention to use and the actual use of ChatGPT among university students.
Similarly, in Malaysia, Poobalan and Latip (2023) demonstrated that facilitating conditions directly affect the adoption of ChatGPT. When students have access to adequate resources, technical guidance, and appropriate infrastructure, the likelihood of using the tool increases significantly. This trend is also evident in faculty perceptions. A study conducted in Poland reported that facilitating factors—such as institutional support and access to pedagogical and technological resources—have a direct impact on the use, acceptance, and adoption of ChatGPT in higher education settings (Strzelecki et al., 2024).
This conclusion is further supported by the systematic review conducted by Kovari (2024), who argues that the ethical and effective use of ChatGPT by students and faculty largely depends on the existence of clear institutional guidelines and policies designed to support—and regulate—its use (Alzahrani & Alzahrani, 2025). Likewise, H. Wang et al. (2024) found that the intention and purpose of using ChatGPT increase significantly when explicit institutional policies are accompanied by pedagogical integration strategies and effective institutional support.
Collectively, these studies—both from the perspectives of students and faculty—underscore the need to incorporate structural and organizational dimensions into technology adoption models, particularly when analyzing the use of generative artificial intelligence tools in higher education contexts.

2.3.5. Accuracy and Reliability

ChatGPT users frequently express concern regarding the accuracy and reliability of the information it provides, as the model can sometimes generate inaccurate or biased content while presenting it as truthful (Golden, 2023; Limna et al., 2023). This phenomenon, known as algorithmic “hallucination,” negatively impacts users for two reasons. First, it undermines their trust; second, it forces them to spend a considerable amount of time identifying inaccuracies in the information received (Sun et al., 2024). From a reliability perspective, Balaskas et al. (2025) conducted a study using a partial least squares structural equation model (PLS-SEM) and demonstrated that trust in the tool not only directly influences the intention to use it, but also mediates the relationship between perceived usefulness and that intention. Additionally, this trust translates into greater ease of use and a more positive attitude toward accepting responses provided by generative artificial intelligence, which is ultimately crucial for its sustained adoption.
This study aimed to adapt, to a faculty sample, a theoretical model previously supported by replicable empirical evidence: the CHASSIS model (Pereira-González et al., 2025a), originally developed with student data. The IMPACT model proposed in this research integrates empirically derived dimensions through exploratory factor analysis, grounded in the theoretical frameworks of prior technology acceptance models. Recognizing the specificities of the teaching role, professional maturity, and institutional context, the results reveal a five-factor structure that differs from that found among students. Notably, it highlights the merging of behavioral intention and performance expectancy into a single functional factor and the absence of the social influence component. These transformations reflect a more pragmatic and autonomous appropriation of technology by faculty members.
Furthermore, the study confirms the critical relevance of perceived reliability and institutional conditions in ensuring the sustainable adoption of this technological innovation in higher education. By integrating theoretical foundations with empirical findings, this research contributes both to a deeper understanding of the acceptance of generative artificial intelligence technologies in higher education and to the advancement of the scientific field by providing a robust theoretical framework for analyzing their adoption in diverse educational contexts. Additionally, it offers a valid and reliable instrument to support decision-making and the design of educational policies aimed at optimizing the ethical and effective integration of ChatGPT in university teaching.

2.4. Related Work

Recent studies have examined the adoption of generative AI tools, such as ChatGPT, in higher education; however, most have focused on students rather than faculty. In general, empirical research has relied on technology acceptance frameworks such as the Technology Acceptance Model (TAM; Davis, 1989) and the Unified Theory of Acceptance and Use of Technology (UTAUT; Venkatesh et al., 2003). These models emphasize behavioral intention, ease of use, and perceived usefulness (Dwivedi et al., 2023; Alshammari & Alshammari, 2024). More recent investigations have modified these frameworks to account for the integration of generative AI in educational settings, highlighting the importance of perceived ethics, trust, and cognitive load (Đerić et al., 2025; Acosta-Enriquez et al., 2024).
While student attitudes toward ChatGPT have been extensively explored (Kasneci et al., 2023; García-Peñalvo, 2024; Uppal & Hajian, 2025), studies validating instruments that address teacher-specific adoption factors remain scarce and have been conducted in contexts that may not reflect the reality of Latin America (Alzahrani & Alzahrani, 2025; Enang & Christopoulou, 2024). Notably, two studies were conducted with professors from two public universities in the United States (Shata, 2025; Shata & Hartley, 2025), employing t-tests and multiple regression to test hypotheses about generative AI use among faculty. These studies found that, although users expressed more concerns than non-users, the latter reported less comfort with technology, comfort being the only significant predictor of intention to use, while specific concerns had no significant effect. This suggests that familiarity and confidence with technology are key to its educational adoption.
Despite these contributions, no psychometric instrument has yet combined functional appropriation, ethical concerns, and infrastructural conditions within a unified factorial model. The proposed IMPACT model, designed to capture the multidimensional nature of ChatGPT adoption among university faculty, bridges the empirical gap between student-centered frameworks and the practical realities of faculty integration of AI in teaching and assessment.
Although numerous studies have examined the adoption of generative artificial intelligence tools by students, the topic has been scarcely explored among university faculty. The IMPACT model represents an extension of previous theoretical frameworks addressing technology acceptance but incorporates additional dimensions such as ethical and academic concerns, functional appropriation, and perceived reliability. Moreover, it provides a theoretical foundation specifically adapted to the Latin American context, contributing to a deeper understanding of the dimensions influencing ChatGPT adoption among higher education instructors.

3. Materials and Methods

A cross-sectional study was conducted, corresponding to an explanatory investigation with emphasis on the structural component, framed within the relational–explanatory level (Hurtado de Barrera, 2012), with a quantitative orientation and an analytical observational design.
The population consisted of faculty members from the five undergraduate schools of Universidad Técnica del Norte (Ibarra, Ecuador) during the March 2025–August 2025 academic term: 142 from the Health Sciences School; 150 from Education, Arts, and Technology; 91 from Applied Science and Engineering; 77 from Agro-Environmental and Biotechnology Sciences; and 81 from Economic, Legal, and Administrative Sciences, for a total population of N = 541 faculty members.
For this study, a probabilistic sample (n) was drawn using stratified sampling, with each school representing a stratum. The sample size was calculated using a 5% significance level (Z = 1.96), an allowable sampling error (e) of 5.5%, and a 50% success probability (p), which represents the most conservative condition that maximizes sample size. The formula used for calculating the sample size, applicable to a finite population (Mukti, 2025), was:
n = Z²pqN / [e²(N − 1) + Z²pq]
In Equation (1), n is the number of faculty members who took part in the study, and N is the total number of professors at the university. The sampling error tolerance for the estimation process, represented by the variable e, was set at 5.5%, and Z is the critical value at the 95% confidence level. p and q denote the estimated proportions of individuals who possess and do not possess the characteristic of interest, respectively. Using these criteria, the smallest sample size was determined that ensures the faculty population is statistically representative, so that the resulting sample adequately reflects the variability within the target population and provides a reliable basis for subsequent statistical analyses.
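As an arithmetic check, Equation (1) can be evaluated directly with the values reported above (N = 541, Z = 1.96, p = q = 0.5, e = 0.055). The following minimal Python sketch (function and variable names are illustrative, not part of the study) reproduces the reported sample size:

```python
import math

def sample_size(N: int, Z: float = 1.96, p: float = 0.5, e: float = 0.055) -> int:
    """Finite-population sample size: n = Z^2*p*q*N / (e^2*(N-1) + Z^2*p*q)."""
    q = 1 - p
    n = (Z**2 * p * q * N) / (e**2 * (N - 1) + Z**2 * p * q)
    return math.ceil(n)  # round up so the error bound still holds

print(sample_size(541))  # → 201
```

Rounding up the raw value (≈200.3) gives the n = 201 reported below, before the practical adjustment to 206.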
The application of the formula yielded a sample size of 201, which was increased to 206 for practical purposes related to stratum distribution. Stratification was conducted using a two-stage sampling design. In the first stage, proportional allocation was applied using the following formula (Paliz Sánchez et al., 2024; Sánchez Espejo, 2019):
Stratum sample size = (n / N) × Stratum size
and subsequently, the participants were selected through simple random sampling.
In Equation (2), n is the total sample size determined in the previous step, and N is the total number of faculty members across all strata. Stratum size is the number of faculty members in each stratum, such as a department, academic area, or school. The term (n/N) is the sampling fraction applied to each stratum to obtain a proportional distribution. This procedure ensures that each subgroup of the population is represented in the sample according to its actual proportion in the total population, thereby improving the accuracy and representativeness of the estimates.
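Applying Equation (2) to the five school sizes listed earlier (142, 150, 91, 77, 81; N = 541) with n = 206 can be sketched as follows. The article does not state how fractional allocations were rounded, so this sketch assumes largest-remainder rounding, one common convention; the school labels are abbreviated here:

```python
import math

def allocate(strata: dict, n: int) -> dict:
    """Proportional allocation: stratum sample = (n/N) * stratum size,
    with largest-remainder rounding so the allocations sum exactly to n."""
    N = sum(strata.values())
    exact = {k: n * size / N for k, size in strata.items()}
    alloc = {k: math.floor(v) for k, v in exact.items()}
    shortfall = n - sum(alloc.values())
    # give the remaining units to the strata with the largest fractional parts
    for k in sorted(exact, key=lambda k: exact[k] - alloc[k], reverse=True)[:shortfall]:
        alloc[k] += 1
    return alloc

schools = {"Health": 142, "Education": 150, "Engineering": 91,
           "Agro-Environmental": 77, "Economic-Legal": 81}
print(allocate(schools, 206))
```

The allocations sum to 206 by construction, so every school is represented in proportion to its share of the faculty population.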
Participants were categorized into generational cohorts based on their year of birth, following internationally recognized classifications (Table 1). This framework allows for the analysis of adoption patterns considering generational differences in technological familiarity and attitudes toward innovation.
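Since Table 1 is not reproduced here, the categorization step can be sketched with the commonly cited birth-year cutoffs; the exact boundaries used in the study may differ, so these cutoffs are an assumption of the sketch, not taken from the article:

```python
def generation(birth_year: int) -> str:
    """Map a birth year to a generational cohort (assumed, not the study's, cutoffs)."""
    cohorts = [
        (1946, 1964, "Baby Boomers"),
        (1965, 1980, "Generation X"),
        (1981, 1996, "Generation Y"),
        (1997, 2012, "Generation Z"),
    ]
    for start, end, label in cohorts:
        if start <= birth_year <= end:
            return label
    return "Other"

print(generation(1975))  # → Generation X under the assumed cutoffs
```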
Table 2 summarizes the sociodemographic and professional characteristics of the participating university faculty members (n = 206). The participants represented a diverse academic workforce spanning five disciplinary areas and a wide age range, indicative of an experienced and mature teaching population. Most respondents were in mid-career stages, corresponding primarily to Generations X and Y, which aligns with the predominance of educators who have navigated both pre- and post-digital academic environments. Overall, the sample reflects a balanced gender distribution and a high level of professional experience, consistent with the demographic profile of Ecuadorian higher education faculty.
In addition to sociodemographic data, participants reported how frequently they use ChatGPT for academic and research purposes. As summarized in Table 3, the results indicate that most faculty members engage with generative AI tools on a regular basis, reflecting a moderate-to-high level of integration of ChatGPT into their professional routines. This behavioral pattern underscores the growing familiarity of university educators with AI-assisted academic practices and supports the need to examine the factors underlying their adoption through the IMPACT model.
The CHASSIS model, developed and validated by Pereira-González et al. (2025a), and originally composed of 39 items, was adapted to construct the ChatGPT Adoption Intention Questionnaire for university faculty. This theoretical framework is grounded in a strategic matrix integrating constructs from established technology acceptance theories (UTAUT, UTAUT2, TAM, and TAM2), supported by empirical evidence and by recent adaptations of validated instruments incorporating generative artificial intelligence (Davis, 1989; Venkatesh et al., 2003, 2012; Venkatesh & Davis, 2000; Bolívar-Cruz & Verano-Tacoronte, 2025; Dwivedi et al., 2023; García-Alonso et al., 2024; Menon & Shilpa, 2023; Romero-Rodríguez et al., 2023; Sallam et al., 2023).
Since the original items were in English, two independent bilingual translators with expertise in educational technology conducted a forward–backward translation. A team of three educational psychologists reviewed the Spanish adaptation to ensure technical, semantic, and conceptual equivalence (Ponce Gea & Serrano Pastor, 2022), adjusting terminology to the Spanish-speaking university context and replacing generic references to “technological tools” with “ChatGPT.” The experts evaluated each item in terms of relevance, clarity, and theoretical coherence, removing repetitive or ambiguous items that did not meet the agreement criteria (CVC > 0.80) (Beaton et al., 2000; Boateng et al., 2018).
Due to the change in population between the CHASSIS model and the instrument proposed in this study, a new exploratory factor analysis (EFA) was carried out to examine the structure and identify possible differences in response patterns based on faculty characteristics, professional experience, and institutional environment. Two experts in higher education methodology and educational technology further refined the items to ensure contextual and linguistic appropriateness for faculty respondents. Cognitive validity was assessed with 25 university professors selected across faculties using the think-aloud technique to evaluate comprehension and interpretation. Problematic items were restructured prior to administering the final version of the instrument.
The final questionnaire was digitized using Microsoft Forms, maintaining the same five-point Likert scale employed in the CHASSIS model: 1 = Strongly disagree, 2 = Disagree, 3 = Neither agree nor disagree, 4 = Agree, and 5 = Strongly agree. The exploratory analysis conducted in SPSS v.30 led to a redistribution of factors and the elimination of six items that did not meet psychometric criteria (factor loadings < 0.50, cross-loadings, or low communalities), resulting in a 33-item instrument with high validity and reliability for research in higher education.
The selection of items for the IMPACT questionnaire was based on theoretical and empirical foundations. Each item was selected to reflect constructs considered in existing technology acceptance theories (UTAUT, UTAUT2, TAM, TAM2) and to capture the specific characteristics of academic use of generative artificial intelligence, particularly those associated with ethical and academic dimensions.
To ensure semantic consistency in the Likert-scale statements, items expressing concerns, risks, or negative perceptions about ChatGPT use were conceptually reverse-coded. The inversion was performed using the transformation X_(reverse-coded) = k + 1 − X, where k is the number of response categories (in this case, 5). This recoding ensured that higher scores across all items consistently reflected a more favorable attitude toward ChatGPT use. In total, prior to conducting EFA, items from the following factors were reverse-coded: (a) ethical and academic concerns, (b) cost and financial accessibility, and (c) social influence/anxiety.
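For illustration, the recoding described above can be sketched in a few lines of Python (the actual recoding was performed in SPSS; this fragment only demonstrates the transformation X′ = k + 1 − X):

```python
def reverse_code(responses, k=5):
    """Reverse-code Likert responses on a 1..k scale via X' = k + 1 - X."""
    return [k + 1 - x for x in responses]

# A response of 5 ("Strongly agree" with a negatively worded item)
# becomes 1 after recoding, so higher scores uniformly indicate a
# more favorable attitude toward ChatGPT use.
print(reverse_code([1, 2, 3, 4, 5]))  # -> [5, 4, 3, 2, 1]
```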
McDonald’s (2013) ω was used as an indicator of internal consistency, following theoretical foundations and current best methodological practices (Malkewitz et al., 2023; Rogers, 2022), as it does not assume tau-equivalence and is more appropriate for multifactorial structures. To calculate McDonald’s ω, the closed-form algorithm for estimating factor loadings proposed by Hancock and An (2020), ωha, was used, as implemented in the SPSS macro developed by Hayes and Coutts (2020). However, Cronbach’s α was also reported to facilitate comparisons with previous literature and provide boundary estimates.
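As an illustrative sketch, coefficient ω for a unidimensional set of items can be computed from standardized factor loadings. Note that this is the simplified congeneric-model formula, not the ωha closed-form estimator of Hancock and An (2020) that was actually used via the Hayes and Coutts (2020) macro:

```python
def omega_total(loadings):
    """Coefficient omega for a unidimensional scale, computed from
    standardized loadings: (sum(l))^2 / ((sum(l))^2 + sum(1 - l^2))."""
    common = sum(loadings) ** 2            # variance due to the common factor
    unique = sum(1 - l ** 2 for l in loadings)  # unique (error) variances
    return common / (common + unique)
```

Unlike Cronbach's α, this formula does not assume tau-equivalence: items with unequal loadings contribute in proportion to how strongly they reflect the factor.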
To determine the number of factors to extract in adapting the model to the faculty context, four criteria were employed: (a) Kaiser’s criterion, retaining factors with eigenvalues greater than one; (b) the scree plot (Cattell, 1966); (c) parallel analysis (Horn, 1965); and (d) the minimum average partial (MAP) test (Velicer, 1976). For the last two methods, O’Connor’s (2000) syntax for SPSS was used.
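A minimal sketch of Horn's parallel analysis is shown below. For brevity it compares eigenvalues of the unreduced correlation matrix (the study applied the principal-axis variant through O'Connor's syntax); permuting within each column corresponds to randtype = 2 and preserves each variable's marginal distribution while breaking inter-variable correlations:

```python
import numpy as np

def parallel_analysis(data, n_iter=100, percentile=95, seed=0):
    """Retain factors whose observed eigenvalues exceed the chosen
    percentile of eigenvalues obtained from column-wise permuted data."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    observed = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    simulated = np.empty((n_iter, p))
    for i in range(n_iter):
        # Permute each column independently to destroy correlations
        perm = np.column_stack([rng.permutation(data[:, j]) for j in range(p)])
        simulated[i] = np.linalg.eigvalsh(np.corrcoef(perm, rowvar=False))[::-1]
    thresholds = np.percentile(simulated, percentile, axis=0)
    return int(np.sum(observed > thresholds))
```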
For the univariate normality assumption, skewness and kurtosis values were considered acceptable when below an absolute value of 1.5 (George & Mallery, 2010). To assess multicollinearity, the inverse of the correlation matrix was calculated, and a direct inspection of the diagonal elements was conducted.
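The univariate screen can be illustrated as follows (population-moment estimators are shown for simplicity; SPSS reports bias-adjusted versions that differ slightly in small samples):

```python
import math

def skew_kurtosis(x):
    """Sample skewness and excess kurtosis (population-moment form)."""
    n = len(x)
    m = sum(x) / n
    m2 = sum((v - m) ** 2 for v in x) / n
    m3 = sum((v - m) ** 3 for v in x) / n
    m4 = sum((v - m) ** 4 for v in x) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2 - 3

def within_normality_screen(x, cutoff=1.5):
    """Apply the |skewness| and |excess kurtosis| <= 1.5 criterion."""
    s, k = skew_kurtosis(x)
    return abs(s) <= cutoff and abs(k) <= cutoff
```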
An oblique rotation (direct Oblimin) was employed based on the theoretical assumption that the instrument’s latent dimensions could be correlated (J. D. Brown, 2009; Costello & Osborne, 2005; Dai et al., 2025; Lloret-Segura et al., 2014), which is common in models of technology perception and use. Oblique rotation enables the identification of complex, non-orthogonal structures with greater fidelity. Kaiser normalization, the default option in the rotation algorithm, was applied.
Pearson’s correlation matrix was used for EFA due to two advantages offered by SPSS: (1) it allows extraction using principal axis factoring (Rogers, 2022), a recommended method given the behavior of the collected data (Fabrigar et al., 1999), under the assumption that underlying factors explain the observed covariances among items; and (2) it allows the specification of the delta parameter in the direct Oblimin rotation method. Additionally, polychoric correlations are more complex estimations that attempt to better capture the relationships among ordinal variables, such as Likert-scale responses. Since they are not directly estimated (but derived from the assumption of underlying normally distributed continuous variables), they tend to introduce greater variability and sampling error compared to the Pearson matrix, which is calculated directly from continuous or treated-as-continuous data. Consequently, a larger dataset is needed to stabilize estimates and reduce error in factor extraction when using polychoric matrices (Lloret-Segura et al., 2014).
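The extraction step can be illustrated with a minimal NumPy sketch of iterative principal axis factoring on a Pearson correlation matrix, starting from squared multiple correlations as initial communality estimates (the study used SPSS's implementation; this unrotated version is for exposition only):

```python
import numpy as np

def principal_axis(R, n_factors, n_iter=100):
    """Iterative principal axis factoring of a correlation matrix R.
    Replaces the diagonal with communality estimates and re-factors
    the reduced matrix until the estimates stabilize."""
    h2 = 1 - 1 / np.diag(np.linalg.inv(R))  # initial communalities (SMC)
    for _ in range(n_iter):
        Rr = R.copy()
        np.fill_diagonal(Rr, h2)            # reduced correlation matrix
        vals, vecs = np.linalg.eigh(Rr)
        order = np.argsort(vals)[::-1][:n_factors]
        loadings = vecs[:, order] * np.sqrt(np.clip(vals[order], 0, None))
        h2 = (loadings ** 2).sum(axis=1)    # updated communalities
    return loadings
```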
Oblique rotation was selected because, unlike orthogonal rotation, which assumes that the latent variables are independent, it allows for the correlations expected among the factors. A delta value of 0.2 was chosen, as it yielded a more interpretable and coherent factor solution and accommodated moderate correlations without forcing strong dependencies—an approach consistent with factorial structures in educational or psychological contexts where constructs tend to be correlated but not redundant (Costello & Osborne, 2005; Jennrich, 1973; Lloret-Segura et al., 2014).
To enhance the external and contextual validity of the model being adapted for university faculty, six additional questions were included to obtain a broader characterization of participants’ current experiences and attitudes toward ChatGPT. These questions capture key dimensions not directly covered by the model’s latent factors, such as current frequency of ChatGPT use—measured on an ordinal scale (1 = Never, 2 = Rarely [less than once a month], 3 = Occasionally [1–3 times a month], 4 = Frequently [1–3 times per week], 5 = Very frequently [almost every day])—as well as concrete perceptions regarding perceived academic usefulness, the reliability of the information provided, ethical concerns about authorship, future willingness to use the tool, and an overall evaluation of the platform. The latter five self-reported questions reflect participants’ self-perceptions and were measured using a continuous scale from 0 to 10, allowing greater sensitivity in capturing variability in perceptions compared to categorical scales.
Their inclusion is methodologically justified as a complementary strategy for assessing related constructs that may influence the intention to use or reflect significant variations among user profiles. These variables also enable the subsequent development of descriptive, correlational, and ordinal regression analyses to explore relationships between current use and future expectations, as well as potential moderating effects on the intention to use as measured by the structural model. This approach strengthens the study design by integrating experiential and attitudinal data of an empirical nature, providing additional evidence for a more comprehensive and contextually grounded interpretation of the exploratory factor analysis results.
In the final version of the manuscript in English, the generative artificial intelligence tool ChatGPT (GPT-4o) was employed as a supportive resource for stylistic, grammatical, and editorial refinement. This assistance was strictly limited to improving the language and style of the text and did not influence the scientific content, the data analysis, or the interpretation of the results. The authors have reviewed and edited the final version and take full responsibility for the content of this publication.

4. Results

An exploratory factor analysis (EFA) was conducted to assess whether the CHASSIS model (comprising seven factors proposed by Pereira-González et al., 2025a) could be replicated in a faculty sample. The analysis was based on a questionnaire consisting of 39 items, applied to a sample of 206 participants, with ordinal data measured on a five-point Likert scale: Strongly disagree (1), Disagree (2), Neither agree nor disagree (3), Agree (4), Strongly agree (5).

4.1. Normality

Univariate normality was assessed using skewness and kurtosis statistics. With one exception, all item values fell within the acceptable range of ±1.5 (skewness: –1.21 to 0.25; kurtosis: −1.06 to 1.70) (Burdenski, 2000; Demir, 2022; Ferrando & Anguiano-Carrasco, 2010; Tabachnick & Fidell, 2013), indicating approximately normal univariate distributions. The exception, item IU_I3, showed elevated kurtosis (4.02) but was deemed tolerable given that all other items exhibited acceptable behavior.
For the multivariate normality test, Mardia’s coefficients were calculated using the SPSS macro adapted from DeCarlo (1997) (see Appendix A). This test revealed significant kurtosis values for all analyzed factors, indicating a violation of the assumption of multivariate normality (b2,p > expected value; p < 0.001). Consequently, estimation methods based on maximum likelihood were ruled out, and a more robust extraction method, principal axis factoring, was chosen, as recommended by Cain et al. (2017), Fabrigar et al. (1999), and Mardia (1970).
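Mardia's multivariate kurtosis statistic can be sketched as follows (an illustrative fragment, not the DeCarlo macro; under multivariate normality the expected value of b2,p is p(p + 2)):

```python
import numpy as np

def mardia_kurtosis(X):
    """Mardia's multivariate kurtosis b2,p for an (n x p) data matrix:
    the mean of squared Mahalanobis distances from the centroid.
    Under multivariate normality, E[b2,p] ~= p * (p + 2)."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n                      # biased sample covariance
    # Squared Mahalanobis distance of each observation
    d = np.einsum('ij,jk,ik->i', Xc, np.linalg.inv(S), Xc)
    return np.mean(d ** 2), p * (p + 2)
```

Values of b2,p substantially above p(p + 2) signal heavy-tailed multivariate distributions, which is what motivated the switch away from maximum likelihood estimation.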

4.2. Sample Adequacy and Sphericity Analysis

The assessment of data suitability for exploratory factor analysis yielded a Kaiser–Meyer–Olkin (KMO) index of 0.89, which corresponds to the “meritorious” classification according to Kaiser’s (1974) interpretation. Additionally, Bartlett’s test of sphericity was significant (χ2 = 3789, df = 378, p < 0.001). Individual sampling adequacy was also evaluated using MSA (Measure of Sampling Adequacy) indices, calculated from the diagonal of the anti-image correlation matrix. Values ranged from 0.703 to 0.936, exceeding the recommended minimum threshold of 0.50, thus supporting the appropriateness of including each variable in the exploratory factor analysis (Lorenzo-Seva & Ferrando, 2021; Sappaile et al., 2023). Taken together, the evidence from the global KMO index and Bartlett’s test confirms that the correlation matrix meets the necessary conditions to proceed with factor extraction.
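The overall KMO index reported above can be illustrated with a short NumPy sketch: it relates squared bivariate correlations to squared partial (anti-image) correlations, which are obtained from the inverse of the correlation matrix:

```python
import numpy as np

def kmo_index(R):
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy:
    sum(r^2) / (sum(r^2) + sum(partial^2)) over off-diagonal elements,
    where partial correlations come from the inverse of R."""
    inv = np.linalg.inv(R)
    scale = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / scale                 # anti-image (partial) correlations
    r2, p2 = R ** 2, partial ** 2
    np.fill_diagonal(r2, 0)
    np.fill_diagonal(p2, 0)
    return r2.sum() / (r2.sum() + p2.sum())
```

When partial correlations are small relative to the raw correlations (shared variance dominates), the index approaches 1, supporting factorability.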

4.3. Determination of the Number of Factors to Extract

Four complementary methods were applied to determine the optimal number of factors to extract: Kaiser’s rule, the scree plot, the Minimum Average Partial (MAP) method, and parallel analysis with permutations.
Kaiser’s rule identified five factors with eigenvalues greater than 1: 8.484, 5.328, 2.230, 1.806, and 1.241. The scree plot showed an inflection point at the fifth factor, with a clear drop in eigenvalues up to that point, followed by curve stabilization from the sixth factor onward. The MAP criterion yielded a minimum average squared partial correlation of 0.0174, suggesting the retention of five factors. In the parallel analysis—conducted using principal axis factoring and generating random datasets via within-variable permutations to preserve each variable’s distributional shape (randtype = 2)—the observed eigenvalues exceeded the 95th percentile of simulated data up to the fifth factor.
Table 4 presents the complete breakdown of eigenvalues and explained variance. Given the consistency across all four methods, and in contrast to the original CHASSIS model, the decision was made to extract five factors, revealing a distinct structural pattern when the model is applied to faculty data.

4.4. Item Exclusion Criteria

Before initiating the item refinement process, a thorough review of the CHASSIS theoretical model (CHatGPT Adoption and Sustained usage among Students in Institutional Settings; Pereira-González et al., 2025a) was conducted, contrasting it with established conceptual frameworks such as the Technology Acceptance Model (TAM; Davis, 1989), the Unified Theory of Acceptance and Use of Technology (UTAUT; Venkatesh et al., 2003), and its extension UTAUT2 (Venkatesh et al., 2012). This review ensured conceptual consistency and content validity prior to the factor analysis.
When communalities range between 0.4 and 0.7, and each factor is measured by three to four items, a sample size of 200 participants is considered adequate (Lloret-Segura et al., 2014). Under these conditions, a factor loading is considered significant when it exceeds 0.40 (MacCallum et al., 1999; Mundfrom et al., 2005; de Winter et al., 2009). Based on these criteria, items were removed during the empirical phase if they showed factor loadings below 0.40 and communalities below 0.30, which indicate low representativeness of the latent construct and insufficient variance explained by the common factors, respectively (Costello & Osborne, 2005; Lloret-Segura et al., 2014).
In addition, items with factor loadings below 0.50 and cross-loadings on two or more factors were excluded if the difference between those loadings was less than 0.10, suggesting conceptual ambiguity and a threat to the model’s discriminant validity (Howard, 2016; Hair et al., 2019; Tabachnick & Fidell, 2013). This strategy aimed to maximize the model’s parsimony and its theoretical and statistical coherence.
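The combined exclusion rules can be expressed as a simple decision function (illustrative only; the thresholds are those stated above, and the communality is passed in because, under oblique rotation, it is not the simple sum of squared pattern loadings):

```python
def should_remove(pattern_loadings, communality):
    """Flag an item for removal under the two criteria:
    (a) primary loading < .40 together with communality < .30;
    (b) primary loading < .50 with a cross-loading within .10 of it."""
    a = sorted((abs(l) for l in pattern_loadings), reverse=True)
    weak = a[0] < 0.40 and communality < 0.30
    ambiguous = a[0] < 0.50 and len(a) > 1 and (a[0] - a[1]) < 0.10
    return weak or ambiguous
```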
The final version of the instrument included 28 items, resulting in a case-to-item ratio greater than 7. This value falls within the empirically supported range in the specialized literature, which generally recommends a minimum ratio of 3 to 10 participants per item to ensure stable factor solutions in both exploratory (EFA) and confirmatory factor analyses (CFA) (MacCallum et al., 1999; Gorsuch, 1983; Floyd & Widaman, 1995).
Although some authors advocate for more stringent criteria (e.g., 10:1 or even 20:1), recent research acknowledges that a ratio of at least 5 to 10 cases per item is adequate in contexts where factor loadings are moderate and average communalities are acceptable (Hair et al., 2019; Lloret-Segura et al., 2014). In this case, the ratio achieved provides a robust foundation for the factor analysis without compromising the validity of the results.

4.5. Correlations Between Factors

Table 5 presents the correlations among the resulting factors. These correlations are consistent with the obliqueness assumption of the model and allow for the assessment of the relative independence among the construct’s dimensions, in accordance with the interpretation guidelines proposed by Hair et al. (2019), Costello and Osborne (2005), and Ferrando and Lorenzo-Seva (2014).
As shown in Table 5, several factors exhibit absolute correlations equal to or greater than 0.30—highlighted in bold—which supports the use of oblique rotation (Oblimin with Kaiser normalization). This pattern of correlations indicates that, although the factors represent distinct dimensions, they share a certain degree of explanatory variance, which is to be expected given the interdependent nature of the constructs assessed in the IMPACT model.
Moreover, the high correlation between Factor 1 and Factor 5 (r = 0.762) may suggest a potential hierarchical relationship or the presence of a second-order dimension, a possibility that should be further explored in future studies using confirmatory factor analysis or structural equation modeling.

4.6. Multicollinearity

The diagonal of the inverse correlation matrix indicated collinearity values ranging from 1.85 to 4.12. These values correspond to variance inflation factors (VIFs): each diagonal element equals 1/(1 − R²), where R² is obtained by regressing the corresponding item on all remaining items. The resulting range suggests the presence of moderate, yet non-problematic, multicollinearity (Kyriazos & Poga, 2023; Vörösmarty & Dobos, 2020).
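The equivalence between the diagonal of the inverse correlation matrix and the VIFs can be sketched in one line (illustrative; the study computed these values in SPSS):

```python
import numpy as np

def vifs(R):
    """Variance inflation factors from an item correlation matrix R:
    the diagonal of R's inverse, i.e., VIF_i = 1 / (1 - R2_i), where
    R2_i is from regressing item i on the remaining items."""
    return np.diag(np.linalg.inv(R))
```

For example, two items correlated at r = 0.8 each have VIF = 1/(1 − 0.64) ≈ 2.78, within the range reported above.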

4.7. Communalities

Regarding the communalities obtained from the exploratory factor analysis, the initial values ranged from 0.460 to 0.757, suggesting that most items shared an acceptable proportion of common variance prior to extraction. After applying the extraction method, final communalities ranged from 0.392 to 0.784. These values indicate that the retained factors adequately explain the variance of the observed items, especially considering that a threshold of 0.40 is commonly accepted as the minimum value for retaining an item in the model (Ferrando & Lorenzo-Seva, 2014; Lloret-Segura et al., 2014). In this regard, the results support the appropriateness of the selected items to represent the proposed latent constructs.
Details of both initial and extracted communalities can be consulted in Table 6. As shown there, most items presented adequate extraction values, falling within ranges that reflect a substantive relationship with the identified latent factors. Specifically, 26 out of the 28 items showed communalities greater than 0.45, indicating that a considerable proportion of their variance is explained by the proposed factorial structure. This finding supports the model’s internal consistency and suggests a coherent factorial representation aligned with the theoretical dimensions. However, two items (EAC_I2 and CA_I5) presented communalities below 0.45, which may suggest lower integration within the general construct. Nevertheless, they did not fall below the critical thresholds that would warrant exclusion and were retained based on their conceptual relevance.

4.8. Latent Variables and Proposed Model

The various extraction methods applied converged in identifying a five-factor structure as the most appropriate for the model in university faculty. The complete pattern matrix, including factor loadings ≥0.40 after Oblimin rotation and supported by theoretical or empirical justification, is available in Table 7.
The new theoretical structure, validated by empirical findings, was composed of the following dimensions:
  • Factor 1: Functional Appropriation of ChatGPT Technology (fusion of intention to use and perceived usefulness), FACT;
  • Factor 2: Ethical and Academic Concerns, EAC;
  • Factor 3: Cost and Accessibility, CA;
  • Factor 4: Facilitating Conditions, FC;
  • Factor 5: Perceived Reliability and Trustworthiness, PRT.
Based on the extracted factors, the resulting model was named IMPACT: Instrument for Measuring the Perceived Appropriation of ChatGPT in Teaching.

4.9. Internal Consistency

To assess the internal consistency of each subscale, Cronbach’s alpha and McDonald’s omega (hierarchical approximation version, ωha) coefficients were calculated based on the Pearson correlation matrix using the SPSS macro developed by Hayes and Coutts (2020). The results indicate high reliability across all subscales, with both α and ωha values exceeding 0.79. The overall internal consistency of the instrument was 0.858 (α) and 0.764 (ωha), suggesting that while the subscales exhibit high internal homogeneity, the global construct may reflect a potentially multidimensional structure. Table 8 summarizes the reliability indices obtained for each subscale and for the instrument as a whole.
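For reference, Cronbach's α can be computed directly from raw responses (an illustrative sketch; the study used the Hayes and Coutts macro):

```python
def cronbach_alpha(items):
    """Cronbach's alpha from per-item response vectors (items x respondents):
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    def var(x):  # unbiased sample variance
        m = sum(x) / len(x)
        return sum((v - m) ** 2 for v in x) / (len(x) - 1)
    k = len(items)
    totals = [sum(col) for col in zip(*items)]  # total score per respondent
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(totals))
```

Because α assumes tau-equivalence, it serves here as a comparison benchmark alongside ω, which accommodates unequal loadings.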

5. Discussion

5.1. Main Findings

The factorial structure of the IMPACT model, adapted for university faculty, shows a significant reorganization compared to the original CHASSIS model developed for students (Pereira-González et al., 2025a). Five factors emerged in the IMPACT model: (1) Functional Appropriation of ChatGPT Technology (FACT), which integrates items related to intention to use and performance expectancy; (2) Ethical and Academic Concerns (EAC); (3) Cost and Accessibility (CA); (4) Facilitating Conditions (FC); and (5) Perceived Reliability and Trustworthiness (PRT).
In contrast, the CHASSIS model included seven distinct factors: Ethical and Academic Concerns (EAC), Performance Expectancy (PE), Cost and Financial Accessibility (CFA), Intention to Use (IU), Social Influence/Anxiety (SIA), Perceived Confidence and Reliability (PCR), and Facilitating Conditions (FC). The comparison between both models reveals two key transformations: the merging of the IU and PE factors into a more robust component (FACT), and the non-emergence of the SIA component in the faculty sample. These results are consistent with findings reported by Bolívar-Cruz and Verano-Tacoronte (2025), and Valaidum and Mahat (2024), who also observed that among experienced professionals, behavioral intention tends to integrate with perceived usefulness when technology use becomes instrumental to performance goals.
The FACT dimension was supported by high factor loadings (ranging from 0.678 to 0.929), strong internal consistency (McDonald’s ωha = 0.941), an in-depth semantic analysis (see Appendix B), and solid theoretical backing. This factorial convergence aligns with prior findings in technology acceptance models such as TAM (Davis, 1989) and its extension TAM2 (Venkatesh & Davis, 2000), which established that perceived usefulness becomes the dominant predictor of behavioral intention once users gain expertise. More recent studies (Dwivedi et al., 2019; Barakat et al., 2025; Strzelecki et al., 2024) corroborate this pattern, emphasizing that experienced users internalize perceived utility as part of their professional practice rather than as a preliminary motivational stage. Moreover, prior studies have documented practical differences in technology adoption attitudes between students and faculty (Bhaskar & Rana, 2024; Bhat et al., 2024; Scherer et al., 2019; van Raaij & Schepers, 2008).
This reconfiguration therefore suggests that, for faculty, behavioral intention to use is mediated by perceived usefulness and integrated into a functional dimension oriented toward professional performance. It reflects a more pragmatic, internalized, and sustained appropriation of the tool, consistent with evidence from Bhaskar and Rana (2024) and Scherer et al. (2019), who found that faculty members’ technology adoption is shaped more by cognitive evaluations of functionality and less by affective or social influences. The concept of functional appropriation also parallels findings by van Raaij and Schepers (2008), who demonstrated that in professional environments, usage intention and performance expectancy converge as interdependent processes that guide sustained adoption.
In this context, the functional appropriation of ChatGPT can be understood as the extent to which university faculty internalize its use as a valuable, efficient, and necessary resource for their academic and research activities, demonstrating a sustained willingness to integrate it into their professional practice. This appropriation implies not only behavioral intention but also functional valuation, perceived usefulness, and satisfaction with the outcomes. While this factor has not been specifically defined in previous studies, the present findings are consistent with those reported by Barakat et al. (2025), Dwivedi et al. (2019), and Strzelecki et al. (2024). Nevertheless, this finding cannot overlook the growing concerns about ChatGPT’s limitations in academic contexts, where responses often appear credible but may be formulaic, outdated, or lack precise substantiation (Mahama et al., 2023). The notion of functional appropriation thus extends beyond mere technical adoption, yet it cannot ignore the risk of what Martín-Crespo Rodríguez (2024) describes as a form of pseudo-instrumental rationality—a technocratic mindset that prioritizes efficiency and procedural performance while paradoxically constraining critical engagement and human creativity.
Although faculty may perceive ChatGPT as a tool that reduces cognitive workload and improves efficiency, this instrumental dependence risks fostering a form of cognitive somnolence —a condition in which users uncritically accept AI-generated results without subjecting them to rigorous evaluation (Martín-Crespo Rodríguez, 2024). This observation aligns with the warnings of Mahama et al. (2023), who emphasize that AI-assisted academic writing can undermine intellectual autonomy and academic integrity by normalizing uncritical dependence on algorithmic text generation.
The convergence between these findings on functional appropriation and broader critiques suggests that patterns of generative AI adoption in higher education should be examined not only through behavioral and usage metrics but also through their epistemic and ethical implications—specifically, how such technologies may reshape academic labor, critical reasoning, and the very notion of intellectual authorship within universities.
On the other hand, the theoretical factor of Social Influence/Anxiety (SIA) did not emerge as a distinct dimension in the faculty model. Its exclusion was due to factorial collapse and weak loadings, suggesting a lack of structural coherence within this population. This absence may be interpreted considering faculty members’ professional profiles, which generally reflect greater autonomy and lower dependence on social judgment. This finding aligns with the UTAUT model (Venkatesh et al., 2003), which posits that social influence weakens as users gain autonomy and experience. Similar patterns have been reported in higher education contexts, where professional independence reduces the role of social judgment in decision-making (Booyse & Scheepers, 2024; Hakimi et al., 2024). Furthermore, in institutional settings where the use of AI tools is still emerging and lacks formal regulation, the limited visibility of social or institutional pressure may further explain the non-significance of this construct—consistent with the observations of Palm (2022) and Takahashi et al. (2024) regarding innovation diffusion in educational systems.

5.2. Practical Implications

The IMPACT model provides a theoretically grounded framework for the adoption of generative artificial intelligence tools in educational institutions, addressing the growing interest in understanding the factors that determine the successful integration of emerging technologies in educational settings (Bhat et al., 2024; Yu et al., 2024). Beyond offering a psychometric instrument, the model contributes to institutional policy and pedagogical design by identifying the dimensions most relevant to educators: functional appropriation, ethical and academic concerns, perceived reliability, cost and accessibility, and facilitating conditions. These dimensions highlight that adoption is not only a technical process but also a pedagogical and ethical one.
The emergence of the Functional Appropriation of ChatGPT Technology (FACT) factor suggests that institutional training should move beyond technical proficiency to foster reflective and critical use of AI, aligning with prior findings that sustained adoption depends on perceived pedagogical utility rather than social pressure (Dwivedi et al., 2019; Scherer et al., 2019). Moreover, the inclusion of the Ethical and Academic Concerns dimension underscores the need for policies that prevent what Martín-Crespo Rodríguez (2024) calls pseudo-instrumental rationality—an efficiency-driven mindset that may limit critical engagement—and guard against overreliance or cognitive somnolence in AI use.
Consistent with Mahama et al. (2023), these results suggest that effective integration of generative AI in universities requires balancing functional benefits with ethical oversight to preserve academic integrity and intellectual autonomy.

5.2.1. Factorial Structure and Theoretical Foundation

The five-factor structure of the model aligns with contemporary trends toward more comprehensive and context-sensitive approaches compared to classical models such as TAM and UTAUT2 (Davis et al., 1989; Venkatesh et al., 2012). The Functional Appropriation of ChatGPT Technology (FACT) factor integrates intention to use and performance expectancy—elements that have consistently demonstrated strong predictive power in educational contexts (Feng et al., 2025; Hakimi et al., 2024). This integration is consistent with studies identifying performance expectancy as the strongest predictor of behavioral intention in higher education (Abdi et al., 2025; Faraon et al., 2025; Wedlock & Trahan, 2020) and reflects a pragmatic, outcome-oriented perspective that emerges among experienced faculty (Bhaskar & Rana, 2024; Scherer et al., 2019).
The Ethical and Academic Concerns (EAC) dimension addresses a critical gap in the literature on AI adoption in education. Ethical considerations have increasingly been recognized as determinants of user trust and acceptance, encompassing issues such as academic integrity, algorithmic bias, and data privacy (Bozkurt, 2024; Fowler, 2023). These results reinforce recent arguments that technology adoption models should integrate moral and epistemic responsibility as core constructs (Martín-Crespo Rodríguez, 2024; Mahama et al., 2023).
The Cost and Accessibility (CA) dimension incorporates economic and availability-related barriers, which have been widely documented as critical factors in institutional technology adoption (Capraro et al., 2024; Xiao et al., 2024), particularly in public universities in developing countries, where budgetary constraints can restrict equitable access (Complete College America, 2025; Hughes et al., 2025). This finding aligns with global calls for inclusive AI integration that bridges the digital divide across socioeconomic contexts (Dwivedi et al., 2019).
Facilitating Conditions (FC) represent a well-established construct in UTAUT, with prior research demonstrating that support infrastructure, training, and regulatory frameworks play a crucial role in technology acceptance (Venkatesh et al., 2003; Dysart & Weckerle, 2015; Kong et al., 2024). Within the context of ChatGPT, this includes not only technical support but also AI literacy and governance policies that ensure responsible and transparent implementation (Çer, 2025; Jaipal-Jamani et al., 2018).
Finally, the Perceived Reliability and Trustworthiness (PRT) dimension constitutes a significant contribution to the model, as it captures one of the most pressing concerns surrounding the use of generative artificial intelligence systems in educational environments. Specifically, perceived reliability has been closely linked to the willingness to integrate such tools into teaching and academic practices, since errors, biases, or inconsistent responses can compromise both the quality of learning and user trust (Giannakos et al., 2024; Panda & Kaur, 2024; Miraz et al., 2025). Recent studies have documented performance limitations of ChatGPT in specialized domains that require complex cognitive tasks—such as law, medicine, the hard sciences, or empirical sciences—where high levels of precision and contextual understanding are essential (Cong-Lem et al., 2024). Such findings reinforce the importance of this dimension as an independent determinant influencing both acceptance and sustainable use of generative AI in higher education.
Figure 1 illustrates the final factorial configuration of the IMPACT model, derived from the exploratory factor analysis and grounded in prior technology acceptance frameworks (TAM, TAM2, UTAUT, UTAUT2, and CHASSIS). The figure highlights the theoretical and structural evolution leading to the proposed model, particularly the integration of Intention to Use and Performance Expectancy into Functional Appropriation and the non-significance of Social Influence/Anxiety in faculty contexts.

5.2.2. Implications for Institutional Implementation

The approach proposed in the IMPACT model is consistent with research on the diffusion of innovations in higher education, which highlights the importance of considering multiple levels of analysis and contextual factors (Palm, 2022; Stasewitsch et al., 2022; Takahashi et al., 2024). This multi-layered perspective reinforces the idea that faculty adoption of generative AI cannot be understood in isolation from institutional culture, governance structures, and pedagogical norms, as previously noted in studies on digital transformation in universities (Dwivedi et al., 2019; Scherer et al., 2019).
The model’s capacity to support differentiated strategies based on faculty profiles addresses the well-documented differences in attitudes and readiness among educator groups. Variables such as teaching experience, disciplinary background, and attitudes toward technology have been shown to moderate adoption processes (Abedi & Ackah-Jnr, 2023; Nikoçeviq-Kurti & Bërdynaj-Syla, 2024). This aligns with prior findings indicating that personalized and discipline-sensitive implementation strategies enhance the sustainability of technological innovation in academic contexts (Herodotou et al., 2020; Yu et al., 2024).
The emphasis on continuous evaluation grounded in empirical evidence aligns with best practices in institutional technological change management. Successful implementation of emerging technologies requires iterative monitoring and adaptive feedback mechanisms that ensure sustained alignment with pedagogical objectives and ethical standards (Herodotou et al., 2020; Strielkowski et al., 2025). In this regard, the IMPACT model contributes a structured framework for evaluating not only adoption rates but also the depth and quality of integration, supporting the transition toward evidence-based and ethically grounded AI governance in higher education (Mahama et al., 2023; Martín-Crespo Rodríguez, 2024).

5.2.3. Contribution and Applicability

The IMPACT model provides a framework that enables educational institutions to navigate the inherent complexity of integrating generative AI tools, balancing potential benefits with the minimization of risks and resistance. This approach is particularly relevant in the current context, where institutions face growing pressure to adopt emerging technologies while maintaining educational quality standards and addressing ethical considerations (Booyse & Scheepers, 2024; Feng et al., 2025; Stasewitsch et al., 2022). Similarly to prior frameworks emphasizing responsible digital transformation (Dwivedi et al., 2019; Mahama et al., 2023), the IMPACT model advances a holistic vision of institutional AI adoption that integrates pedagogical effectiveness, transparency, and ethical accountability.
The model offers an evidence-based roadmap that can support informed decision-making processes and increase the likelihood of successful technology implementation in complex and dynamic educational environments. By combining empirical validation with theoretical coherence, it contributes to institutional strategies that promote sustainable and equitable AI integration (Kong et al., 2024; Yu et al., 2024).
Additionally, the IMPACT model advances theoretical research on the emerging adoption of generative artificial intelligence by demonstrating that, in faculty attitudes toward ChatGPT acceptance, social influence is not a determining factor. Instead, adoption behaviors are structured around functional, ethical, and infrastructural considerations. This finding aligns with prior studies reporting that professional autonomy and self-regulated decision-making moderate the role of social influence in technology adoption (Scherer et al., 2019; Bhaskar & Rana, 2024). Consequently, it highlights the need to adapt the conceptualization of existing theoretical models of technology use intention, such as UTAUT and TAM, and suggests that, in academic contexts, the intention to use generative AI is shaped by a more integrated perception of usefulness and professional ethics (Martín-Crespo Rodríguez, 2024; Mahama et al., 2023).

5.3. Methodological Strengths

The present study incorporates several methodological considerations aimed at enhancing the robustness and replicability of the IMPACT model, proposed to evaluate the functional appropriation of ChatGPT among university faculty. First, multiple convergent strategies were employed to determine the optimal number of factors to extract, including Kaiser’s criterion (Kaiser, 1960), the scree plot, parallel analysis, and the Minimum Average Partial (MAP) method, following the recommendations of O’Connor (2000) and Velicer (1976). This methodological triangulation strengthened the empirical validity of the resulting five-factor structure, overcoming limitations associated with the exclusive use of heuristic criteria (Hayton et al., 2004; Fabrigar et al., 1999). Similar approaches have been recommended in psychometric validation studies in education to ensure replicability and theoretical precision (Ferrando & Lorenzo-Seva, 2014; Lloret-Segura et al., 2014).
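Of the convergent retention criteria listed above, parallel analysis is the most easily automated. The following Python sketch (a generic NumPy illustration, not the software used by the authors) implements Horn's parallel analysis: a factor is retained while its observed eigenvalue exceeds the chosen percentile of eigenvalues obtained from random data of the same dimensions.

```python
import numpy as np

def parallel_analysis(data, n_iter=100, percentile=95, seed=0):
    """Horn's parallel analysis on a correlation matrix.

    Retains factors whose observed eigenvalues exceed the given
    percentile of eigenvalues from random normal data of the same shape.
    Returns (n_retained, observed_eigenvalues, random_thresholds).
    """
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Observed eigenvalues, sorted descending.
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    # Eigenvalues of random data with the same n and p.
    rand_eig = np.empty((n_iter, p))
    for i in range(n_iter):
        r = rng.standard_normal((n, p))
        rand_eig[i] = np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
    threshold = np.percentile(rand_eig, percentile, axis=0)
    return int(np.sum(obs_eig > threshold)), obs_eig, threshold
```

With a clear single-factor structure, the sketch retains exactly one factor; applied to the 28 retained items here, the same logic would be one of the four triangulated criteria.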
Second, the assumptions of univariate and multivariate normality were rigorously assessed. While individual items displayed acceptable distributions (skewness and kurtosis ranging from −1.21 to 1.70), Mardia’s test revealed significant deviations from multivariate normality. This finding led to the selection of principal axis factoring, a method more robust to such violations (Cain et al., 2017; Ferrando & Anguiano-Carrasco, 2010). This decision aligns with best practices in psychometric analysis for non-normally distributed data, ensuring that factor solutions remain stable even when multivariate assumptions are not met (Kyriazos & Poga, 2023).
In addition, McDonald’s hierarchical omega (ωha) was computed for each subscale using the closed-form algorithm proposed by Hancock and An (2020), implemented in SPSS via the Hayes and Coutts (2020) macro. This approach overcomes the limitations of Cronbach’s alpha by not requiring tau-equivalence and provides a more accurate estimate of internal consistency in multifactorial structures (Zinbarg et al., 2005; Malkewitz et al., 2023). The inclusion of ωha as a reliability index strengthens the psychometric rigor of the study and aligns with current recommendations for evaluating internal structure consistency in social sciences (Deng & Chan, 2017; Valencia Londoño et al., 2025).
Finally, a rigorous item refinement process was carried out, grounded in both theoretical and statistical criteria. Item removal was based on the combination of low factor loadings (<0.40), insufficient communalities (<0.30), and ambiguous cross-loadings, following thresholds established by Costello and Osborne (2005), Lloret-Segura et al. (2014), and Hair et al. (2019). This strategy yielded a parsimonious, theoretically coherent instrument with adequate discriminant validity, preserving the conceptual integrity of each factor. Moreover, the combination of empirical filtering and theoretical validation represents a methodological advance over traditional exploratory approaches, supporting the internal consistency and construct validity of the IMPACT model (Ferrando & Lorenzo-Seva, 2014).
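The item-retention rules just described can be expressed as a simple filter. In this hypothetical Python sketch, the 0.20 gap used to flag ambiguous cross-loadings is an assumed heuristic, since the article does not specify a numeric cross-loading threshold; the loading and communality cutoffs match those reported above.

```python
import numpy as np

def refine_items(loadings, communalities,
                 load_min=0.40, comm_min=0.30, cross_gap=0.20):
    """Return indices of items flagged for removal.

    An item is flagged if its primary (largest absolute) loading is
    below `load_min`, its communality is below `comm_min`, or its
    second-highest loading lies within `cross_gap` of the primary one
    (an ambiguous cross-loading). Assumes at least two factors.
    """
    L = np.abs(np.asarray(loadings, dtype=float))   # items x factors
    h2 = np.asarray(communalities, dtype=float)
    drop = []
    for i, row in enumerate(L):
        top, second = np.sort(row)[::-1][:2]
        if top < load_min or h2[i] < comm_min or (top - second) < cross_gap:
            drop.append(i)
    return drop
```

In practice such a filter is only the empirical half of the process; each flagged item would still be reviewed against the factor's theoretical definition before removal, as the study describes.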

5.4. Study Limitations

Although the results support the psychometric strength of the proposed model, certain methodological limitations must be acknowledged when interpreting the findings.
First, the sample size (n = 206) met established empirical criteria: it exceeded the minimum 5:1 participant-to-item ratio (Memon et al., 2020) and the recommended threshold of 20 participants per independent variable (Hair et al., 2019), and it was validated through an a priori analysis with the A priori Sample Size Calculator for Structural Equation Models (Soper, 2025). Even so, recent literature suggests that larger samples may yield more precise estimates and more stable factor solutions, particularly in studies targeting diverse populations (Lorenzo-Seva & Ferrando, 2024). The a priori analysis, assuming a medium effect size (f2 = 0.30), statistical power of 0.80, a significance level of 0.05, five latent variables, and 28 observed variables, indicated a minimum of 150 participants to detect significant effects and 148 to adequately represent the model structure.
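As a quick arithmetic check of the criteria cited above (illustrative only; the Soper minimum of 150 is taken from the text):

```python
# Sample-size criteria reported in the study.
n_sample, n_items, n_latent = 206, 39, 5

ratio = n_sample / n_items        # participant-to-item ratio, ~5.28
meets_ratio = ratio >= 5          # 5:1 rule (Memon et al., 2020)
meets_soper = n_sample >= 150     # a priori minimum (Soper, 2025)

print(round(ratio, 2), meets_ratio, meets_soper)
```

The sample thus clears both thresholds, though only modestly for the 5:1 rule, which is consistent with the stated limitation.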
Second, the study relies exclusively on exploratory factor analysis and requires validation through confirmatory factor analysis using an independent sample. This limitation prevents drawing definitive conclusions regarding the stability and replicability of the identified factorial structure.
Finally, potential biases related to self-selection and socially desirable responding cannot be ruled out. In studies involving academic expectations, professional ethics, or the adoption of tools such as ChatGPT, faculty members may report more favorable attitudes or responses aligned with institutional expectations, even if their actual opinions or behaviors differ from those reported.

5.5. Future Research Directions

The findings of this study open several avenues for further research aimed at strengthening and expanding the IMPACT model toward broader and more contextualized applications. First, it is recommended to validate the factorial structure through confirmatory factor analysis (CFA) on an independent sample stratified by key variables such as gender, age, and academic discipline. This approach would allow for the assessment of factorial invariance across different faculty subpopulations using multigroup techniques (G. T. L. Brown et al., 2015; Fischer & Karl, 2019), thereby ensuring that the instrument’s structure remains stable and generalizable in diverse contexts. Additionally, it would be pertinent to explore second-order hierarchical models that group the identified dimensions under broader latent constructs, allowing for more integrative and parsimonious theoretical interpretations (Gould, 2015).
It is also advisable to compare the instrument with other psychometrically validated questionnaires grounded in related theoretical frameworks. Such cross-validation would enhance the empirical evidence of construct validity and reinforce the scientific foundation of the IMPACT model (Strauss & Smith, 2009; Grand-Guillaume-Perrenoud et al., 2023).
Another important line of inquiry involves examining potential moderation effects by contextual and personal variables, such as teaching experience, academic field, familiarity with artificial intelligence tools, and faculty digital competencies (Dunn et al., 2015; Garrido-Ruso & Aibar-Guzmán, 2022). These comparisons could be explored through multigroup structural equation models or moderation analyses, advancing a more nuanced understanding of university faculty’s adoption of emerging technologies like ChatGPT.
Finally, future studies should prioritize implementing more diversified and rigorous sampling strategies to minimize self-selection bias and enhance sample representativeness, thereby improving the external validity and generalizability of the findings to other educational settings.

6. Conclusions

This study developed and validated, through Exploratory Factor Analysis, the IMPACT model (Instrument for Measuring the Perceived Appropriation of ChatGPT in Teaching), a psychometric tool specifically designed to assess the adoption of generative artificial intelligence—ChatGPT—by university faculty. The findings reveal fundamental differences compared to the model previously applied to students, indicating factorial patterns that reflect the distinctive professional and institutional roles of teaching staff.
Within the resulting five-factor structure of the IMPACT model, a factor emerged describing the Functional Appropriation of ChatGPT Technology (FACT), which integrates two constructs—intention to use and performance expectancy—that appeared as separate dimensions in the student model. This suggests that university faculty experience a more cohesive integration between perceived usefulness and behavioral intention toward generative AI technologies.
The absence of the Social Influence/Anxiety factor, present in CHASSIS, aligns with prior literature documenting increased professional autonomy and reduced dependence on social judgment as users gain experience. This implies that in ChatGPT adoption, faculty members tend to value intrinsic criteria of usefulness and efficiency over social pressure or external expectations.
The four factors common to both models—Ethical and Academic Concerns (EAC), Cost and Accessibility (CA), Facilitating Conditions (FC), and Perceived Reliability and Trustworthiness (PRT)—maintained their conceptual structure, reaffirming their theoretical and empirical relevance in teaching contexts.
The use of principal axis factoring, oblique rotation, and triangulation through multiple extraction criteria resulted in a stable and conceptually coherent factorial structure. Internal consistency values (α between 0.795 and 0.940; ωha between 0.797 and 0.941) confirm the reliability of the instrument’s subscales. The factorial solution explained 68.2% of the total variance and yielded a Kaiser–Meyer–Olkin index (KMO = 0.89), classified as meritorious (Kaiser, 1974), with high communalities (0.460 to 0.757) and strong factor loadings (0.678 to 0.929).
The identification of the factorial structure of the IMPACT model provides a foundation for higher education institutions to design specific interventions that strengthen facilitating and accessibility conditions while addressing ethical concerns and implementing strategies to help faculty enhance trust and evaluate the reliability of outcomes generated by such tools.
While this research contributes to the emerging theoretical corpus on ChatGPT adoption in universities, it should be recognized as an initial approach requiring further validation through confirmatory factor analysis and replication across diverse contexts.
Finally, the IMPACT model provides not only a reliable psychometric tool but also a theoretical contribution that serves as a bridge between general technology acceptance models and the ethical–functional realities underlying the adoption of generative artificial intelligence in higher education.

Author Contributions

Conceptualization, L.-M.P.-G., A.B.-A. and M.N.-T.; Methodology, L.-M.P.-G., A.B.-A. and M.G.-P.; Software, L.-M.P.-G. and M.G.-P.; Validation, L.-M.P.-G., A.B.-A., M.N.-T. and M.G.-P.; Formal analysis, L.-M.P.-G. and A.B.-A.; Investigation, L.-M.P.-G., A.B.-A. and M.G.-P.; Resources, L.-M.P.-G. and A.B.-A.; Data curation, L.-M.P.-G. and A.B.-A.; Writing—original draft, L.-M.P.-G. and A.B.-A.; Writing—review and editing, L.-M.P.-G., A.B.-A. and M.N.-T.; Visualization, L.-M.P.-G., A.B.-A. and M.G.-P.; Supervision, L.-M.P.-G.; Project administration, L.-M.P.-G.; Funding acquisition, L.-M.P.-G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Research Ethics Committee of the Faculty of Education, Science, and Technology (FECYT) (Approval No. UTN-FECYT-CEI-2024-0000001356-2, dated 19 February 2024).

Informed Consent Statement

All faculty members who participated in the study provided written informed consent after being fully briefed on the purpose of the research. Participants were explicitly informed of the anonymous nature of their responses and the strict confidentiality with which their information would be handled. Participation was entirely voluntary, and no identifying data were collected at any stage of the study.

Data Availability Statement

The data supporting the findings of this study are openly available on the Open Science Framework (OSF) repository at https://doi.org/10.17605/OSF.IO/YXDSV (Pereira-González et al., 2025b).

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

* Macro adapted from DeCarlo (1997) to assess multivariate normality
* (Mardia's skewness and kurtosis). Applicable in SPSS.
DATASET ACTIVATE ConjuntoDatos1.
EXAMINE VARIABLES = PEA_I1 PEA_I2 PEA_I3 PEA_I4 PEA_I5 PEA_I6 PEA_I7 CFP_I1 CFP_I2 CFP_I3 CAF_I1
CAF_I2 CAF_I3 CAF_I4 CAF_I5 CF_I1 CF_I2 CF_I3 ED_I1 ED_I2 ED_I3 ED_I4 ED_I5 ED_I6 ED_I7 IU_I1 IU_I2 IU_I3
/PLOT BOXPLOT STEMLEAF
/COMPARE GROUPS
/STATISTICS DESCRIPTIVES
/CINTERVAL 95
/MISSING LISTWISE
/NOTOTAL.
* Mardia's multivariate skewness and kurtosis (b1p, b2p).

PRESERVE.
SET PRINTBACK = OFF.
DEFINE mardia (vars = !CHAREND('/')).
SET MXLOOPS = 50000.
MATRIX.
GET x/VARIABLES = !vars/NAMES = varnames/MISSING = OMIT.
COMPUTE n = NROW(x).
COMPUTE p = NCOL(x).
COMPUTE xbar = CSUM(x)/n.
COMPUTE j = MAKE(n,1,1).
COMPUTE xdev = x - j * xbar.
RELEASE x.

* Covariance matrix and inverse.
COMPUTE s = SSCP(xdev)/n.
COMPUTE sinv = INV(s).

* Generalized square distances and calculation of b1p.
COMPUTE gii = MAKE(n,1,0).
COMPUTE gsum = MAKE(n,1,0).
LOOP i = 1 TO n.
COMPUTE gii(i) = xdev(i,:) * sinv * T(xdev(i,:)).
COMPUTE gij = xdev(i,:) * sinv * T(xdev).
COMPUTE gsum(i) = CSUM(gij&**3).
END LOOP.

* Multivariate skewness (b1p).
COMPUTE b1p = CSUM(gsum)/(n * n).
COMPUTE chib1p = (n * b1p)/6.
COMPUTE sm = ((p + 1)*(n + 1)*(n + 3))/(n * ((n + 1)*(p + 1) - 6)).
COMPUTE chism = (n * b1p * sm)/6.
COMPUTE df = (p*(p + 1)*(p + 2))/6.
COMPUTE pb1p = 1 - CHICDF(chib1p, df).
COMPUTE pb1psm = 1 - CHICDF(chism, df).

PRINT {b1p, chib1p, pb1p, chism, pb1psm}
/TITLE = "Mardia's Multivariate Skewness (with small sample adjustment)"
/CLABELS = "b1p", "Chi(b1p)", "p-value", "adj-Chi", "p-value"
/FORMAT = F10.4.

* Multivariate kurtosis (b2p).
COMPUTE b2p = CSUM(gii&**2)/n.
COMPUTE nb2p = (b2p - p*(p + 2))/SQRT(8*p*(p + 2)/n).
COMPUTE pnb2p = 2 * (1 - CDFNORM(ABS(nb2p))).

PRINT {b2p, nb2p, pnb2p}
/TITLE = "Mardia's Multivariate Kurtosis"
/CLABELS = "b2p", "N(b2p)", "p-value"
/FORMAT = F10.4.
END MATRIX.
!ENDDEFINE.
RESTORE.

mardia vars = PEA_I1 to PEA_I7 /.
mardia vars = CFP_I1 to CFP_I3 /.
mardia vars = CAF_I1 to CAF_I5 /.
mardia vars = CF_I1 to CF_I3 /.
mardia vars = ED_I1 to IU_I3 /.
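For readers who wish to cross-check the macro outside SPSS, the following NumPy sketch computes the same Mardia quantities (b1p, b2p) with the same maximum-likelihood covariance divisor n. It assumes complete data and is an illustration, not part of the original analysis.

```python
import numpy as np

def mardia(x):
    """Mardia's multivariate skewness (b1p) and kurtosis (b2p),
    matching the quantities computed by the SPSS macro above.
    Returns (b1p, b2p, chi_b1p, z_b2p)."""
    x = np.asarray(x, dtype=float)
    n, p = x.shape
    dev = x - x.mean(axis=0)
    s = dev.T @ dev / n                  # ML covariance (divisor n, as in the macro)
    sinv = np.linalg.inv(s)
    g = dev @ sinv @ dev.T               # n x n matrix of generalized products g_ij
    b1p = np.sum(g**3) / n**2            # skewness: mean of g_ij cubed
    b2p = np.mean(np.diag(g)**2)         # kurtosis: mean squared Mahalanobis distance
    chi_b1p = n * b1p / 6.0              # ~ chi-square with p(p+1)(p+2)/6 df
    z_b2p = (b2p - p*(p + 2)) / np.sqrt(8*p*(p + 2)/n)   # ~ N(0, 1)
    return b1p, b2p, chi_b1p, z_b2p
```

Under multivariate normality, b1p is close to zero and b2p is close to p(p + 2), so marked departures from these values signal the kind of non-normality that motivated the choice of principal axis factoring.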

Appendix B

Table A1. Semantic analysis of items belonging to the FACT factor (Functional Appropriation of ChatGPT Technology).

Item in the CHASSIS Model | Predominant Semantic Core | Interpretative Comment
IU_I1: I consider it important to continue using ChatGPT in my teaching and research practice. | Appraisal/Continued use | Although formulated as intention, the semantic core is perceived value, not behavioral decision. The expression ‘I consider it important’ may be more attitudinal than behavioral.
IU_I2: I am likely to continue using ChatGPT as a support tool in my academic and research work. | Intention/Continuity | This item explicitly expresses intention (likelihood of continued use), but it is contextualized in academic work, making the interpretation more practical than attitudinal.
IU_I3: I plan to continue using ChatGPT in the future. | Intention to use (pure) | This item directly expresses a projected intention, without an explicit functional context. It is the only item that can be clearly differentiated as strictly intentional.
PEA_I1: The use of ChatGPT has had a positive impact on my work performance. | Impact/Performance | Similarly to item 4, it frames the cause-effect relationship between use and improved performance.
PEA_I2: Using ChatGPT helps me perform my academic and research tasks more efficiently. | Efficiency/Functional support | Clearly linked to performance; there is no intentional component.
PEA_I3: ChatGPT provides me with valuable resources for my work performance. | Resources/Performance | Strongly focused on usefulness, with clear instrumental value.
PEA_I4: My work performance has improved thanks to ChatGPT’s support. | Improvement in performance/Attributed causality | Clearly focused on the effect of use, with no attitudinal or projective dimension.
PEA_I5: I believe that using ChatGPT is useful for improving my job performance. | Perceived instrumental utility of ChatGPT for enhancing job effectiveness | This item expresses a belief in the practical utility of ChatGPT to enhance work effectiveness, aligning with performance expectancy constructs. It emphasizes a functional and goal-oriented use of the tool, reinforcing the notion of pragmatic appropriation within the workplace.
PEA_I6: ChatGPT facilitates my learning and understanding of scientific-academic topics. | Learning/Understanding | Linked to academic performance and functionality, not attitude or intention.
PEA_I7: I am satisfied with the overall experience of using ChatGPT in my teaching and research work. | Satisfaction/Experience | Although it has an attitudinal component, it is linked to the outcome of the use. It could be considered a bridge between subjective evaluation and perceived performance.

References

1. Abbas, M., Jam, F. A., & Khan, T. I. (2024). Is it harmful or helpful? Examining the causes and consequences of generative AI usage among university students. International Journal of Educational Technology in Higher Education, 21, 10.
2. Abdaljaleel, M., Barakat, M., Alsanafi, M., Salim, N. A., Abazid, H., Malaeb, D., Mohammed, A. H., Hassan, B. A. R., Wayyes, A. M., Farhan, S. S., El Khatib, S., Rahal, M., Sahban, A., Abdelaziz, D. H., Mansour, N. O., AlZayer, R., Khalil, R., Fekih-Romdhane, F., Hallit, R., … Sallam, M. (2024). A multinational study on the factors influencing university students’ attitudes and usage of ChatGPT. Scientific Reports, 14(1), 52549.
3. Abdalla, S., Al-Maamari, W., & Al-Azki, J. (2024). Data analytics-driven innovation: UTAUT model perspectives for advancing healthcare social work. Journal of Open Innovation: Technology, Market, and Complexity, 10(4), 100411.
4. Abdi, A. N. M., Omar, A. M., Ahmed, M. H., & Ahmed, A. A. (2025). The predictors of behavioral intention to use ChatGPT for academic purposes: Evidence from higher education in Somalia. Cogent Education, 12(1), 2460250.
5. Abedi, E. A., & Ackah-Jnr, F. R. (2023). First-order barriers still matter in teachers’ use of technology: An exploratory study of multi-stakeholder perspectives of technology integration barriers. International Journal of Education and Development Using Information and Communication Technology, 19(2), 148–165. Available online: https://files.eric.ed.gov/fulltext/EJ1402796.pdf (accessed on 1 April 2025).
6. Acosta-Enriquez, B. G., Arbulú Ballesteros, M. A., Arbulu Perez Vargas, C. G., Orellana Ulloa, M. N., Gutiérrez Ulloa, C. R., Pizarro Romero, J. M., Gutiérrez Jaramillo, N. D., Cuenca Orellana, H. U., Ayala Anzoátegui, D. X., & López Roca, C. (2024). Knowledge, attitudes, and perceived ethics regarding the use of ChatGPT among generation Z university students. International Journal for Educational Integrity, 20, 10.
7. Almogren, A. S., Al-Rahmi, W. M., & Dahri, N. A. (2024). Exploring factors influencing the acceptance of ChatGPT in higher education: A smart education perspective. Heliyon, 10(11), e31887.
8. Al Murshidi, G., Shulgina, G., Kapuza, A., & Costley, J. (2024). How understanding the limitations and risks of using ChatGPT can contribute to willingness to use. Smart Learning Environments, 11, 36.
9. Alotaibi, S. M. F. (2025). Determinants of generative artificial intelligence (GenAI) adoption among university students and its impact on academic performance: The mediating role of trust in technology. Interactive Learning Environments, 33, 1–30.
10. Alqahtani, T., Badreldin, H. A., Alrashed, M., Alshaya, A. I., Alghamdi, S. S., Bin Saleh, K., Alowais, S. A., Alshaya, O. A., Rahman, I., Al Yami, M. S., & Albekairy, A. M. (2023). The emergent role of artificial intelligence, natural learning processing, and large language models in higher education and research. Research in Social & Administrative Pharmacy: RSAP, 19(8), 1236–1242.
11. Alshammari, S. H., & Alshammari, M. H. (2024). Factors affecting the adoption and use of ChatGPT in higher education. International Journal of Information and Communication Technology Education, 20(1), 339557.
12. Alshammari, S. H., & Babu, E. (2025). The mediating role of satisfaction in the relationship between perceived usefulness, perceived ease of use and students’ behavioural intention to use ChatGPT. Scientific Reports, 15, 7169.
13. Alzahrani, A., & Alzahrani, A. (2025). Comprendiendo la adopción de ChatGPT en universidades: El impacto del TPACK y UTAUT2 en los docentes [Understanding ChatGPT adoption in universities: The impact of TPACK and UTAUT2 on faculty]. RIED-Revista Iberoamericana de Educación a Distancia, 28(1), 37–58.
14. Ashfaq, M., Yun, J., Yu, S., & Loureiro, S. M. C. (2020). I, chatbot: Modeling the determinants of users’ satisfaction and continuance intention of AI-powered service agents. Telematics and Informatics, 54, 101473.
15. Ashraf, M. A., Alam, J., & Kalim, U. (2025). Effects of ChatGPT on students’ academic performance in Pakistan higher education classrooms. Scientific Reports, 15(1), 16434.
16. Baek, T. H., Kim, J., & Kim, J. H. (2024). Effect of disclosing AI-generated content on prosocial advertising evaluation. International Journal of Advertising, 43(7), 1–22.
17. Bai, L., Liu, X., & Su, J. (2023). ChatGPT: The cognitive effects on learning and memory. Brain and Behavior, 1(3), e30.
18. Balaskas, S., Tsiantos, V., Chatzifotiou, S., & Rigou, M. (2025). Determinants of ChatGPT adoption intention in higher education: Expanding on TAM with the mediating roles of trust and risk. Information, 16(2), 82.
19. Barakat, M., Salim, N. A., & Sallam, M. (2025). University educators perspectives on ChatGPT: A technology acceptance model-based study. Open Praxis, 17(1), 129–144.
20. Beaton, D. E., Bombardier, C., Guillemin, F., & Ferraz, M. B. (2000). Guidelines for the process of cross-cultural adaptation of self-report measures. Spine, 25(24), 3186–3191.
21. Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March 3–10). On the dangers of stochastic parrots: Can language models be too big? 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21) (pp. 610–623), Virtual Event.
22. Bender, S. M. (2024). Awareness of artificial intelligence as an essential digital literacy: ChatGPT and Gen-AI in the classroom. Changing English, 31(2), 161–174.
23. Bhaskar, P., & Rana, S. (2024). The ChatGPT dilemma: Unravelling teachers’ perspectives on inhibiting and motivating factors for adoption of ChatGPT. Journal of Information, Communication and Ethics in Society, 22(2), 219–239.
24. Bhat, M., Tiwari, C., Bhaskar, P., & Khan, S. (2024). Examining ChatGPT adoption among educators in higher educational institutions using extended UTAUT model. Journal of Information, Communication and Ethics in Society, 22(3), 331–353.
25. Boateng, G. O., Neilands, T. B., Frongillo, E. A., Melgar-Quiñonez, H. R., & Young, S. L. (2018). Best practices for developing and validating scales for health, social, and behavioral research: A primer. Frontiers in Public Health, 6, 149.
26. Bolívar-Cruz, A., & Verano-Tacoronte, D. (2025). Is anxiety affecting the adoption of ChatGPT in university teaching? A gender perspective. Technology, Knowledge and Learning, 30, 2373–2392.
27. Booyse, D., & Scheepers, C. B. (2024). Barriers to adopting automated organisational decision-making through the use of artificial intelligence. Management Research Review, 47(1), 64–85.
28. Bozkurt, A. (2024). GenAI et al.: Cocreation, authorship, ownership, academic ethics and integrity in a time of generative AI. Open Praxis, 16(1), 1–10.
29. Brown, G. T. L., Harris, L. R., O’Quin, C., & Lane, K. E. (2015). Using multi-group confirmatory factor analysis to evaluate cross-cultural research: Identifying and understanding non-invariance. International Journal of Research & Method in Education, 40(1), 66–90.
30. Brown, J. D. (2009). Choosing the right type of rotation in PCA and EFA. Shiken: JALT Testing & Evaluation SIG Newsletter, 13(3), 20–25. Available online: https://teval.jalt.org/test/PDF/Brown31.pdf (accessed on 3 March 2025).
31. Burdenski, T. K., Jr. (2000). Evaluating univariate, bivariate, and multivariate normality using graphical and statistical procedures. Multiple Linear Regression Viewpoints, 26(2), 15–28. Available online: https://imaging.mrc-cbu.cam.ac.uk/statswiki/FAQ/Simon?action=AttachFile&do=get&target=mahalplot.pdf (accessed on 20 March 2025).
32. Cain, M. K., Zhang, Z., & Yuan, K.-H. (2017). Univariate and multivariate skewness and kurtosis for measuring nonnormality: Prevalence, influence and estimation. Behavior Research Methods, 49(5), 1716–1735.
33. Capraro, V., Lentsch, A., Acemoglu, D., Akgun, S., Akhmedova, A., Bilancini, E., Bonnefon, J.-F., Brañas-Garza, P., Butera, L., Douglas, K. M., Everett, J. A. C., Gigerenzer, G., Greenhow, C., Hashimoto, D. A., Holt-Lunstad, J., Jetten, J., Johnson, S., Kunz, W. H., Longoni, C., … Viale, R. (2024). The impact of generative artificial intelligence on socioeconomic inequalities and policy making. PNAS Nexus, 3(6), pgae191.
34. Cattell, R. B. (1966). The scree test for the number of factors. Multivariate Behavioral Research, 1(2), 245–276.
35. Complete College America. (2025). Generating college completion: Charting a path to institutional AI adoption for student success in higher education. Complete College America.
36. Cong-Lem, N., Soyoof, A., & Tsering, D. (2024). A systematic review of the limitations and associated opportunities of ChatGPT. International Journal of Human–Computer Interaction, 41(7), 3851–3866.
37. Costello, A. B., & Osborne, J. W. (2005). Best practices in exploratory factor analysis: Four recommendations for getting the most from your analysis. Practical Assessment, Research, and Evaluation, 10(1), 7.
38. Çer, E. (2025). Enhancing lecturer awareness of technology integration within the TPACK framework: A mixed methods study. STEM Education, 5(3), 356–382.
39. Dahri, N. A., Yahaya, N., Al-Rahmi, W. M., Aldraiweesh, A., Alturki, U., Almutairy, S., Shutaleva, A., & Soomro, R. B. (2024). Extended TAM based acceptance of AI-powered ChatGPT for supporting metacognitive self-regulated learning in education: A mixed-methods study. Heliyon, 10(8), e29317.
40. Dai, H., Sun, C., Xu, S., Farina, F., Huang, X., Wang, Y., Zhang, Q., & Shen, H. (2025). Cultural adaptation and psychometric evaluation of the fear and avoidance of memory loss scale in a Chinese context. Aging & Mental Health, 29(6), 1144–1151.
41. Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340.
42. Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35(8), 982–1003.
43. DeCarlo, L. T. (1997). On the meaning and use of kurtosis. Psychological Methods, 2(3), 292.
44. Demir, S. (2022). Comparison of normality tests in terms of sample sizes under different skewness and kurtosis coefficients. International Journal of Assessment Tools in Education, 9(2), 397–409.
45. Deng, L., & Chan, W. (2017). Testing the difference between reliability coefficients alpha and omega. Educational and Psychological Measurement, 77(2), 185–203.
46. de Winter, J. C. F., Dodou, D., & Wieringa, P. A. (2009). Exploratory factor analysis with small sample sizes. Multivariate Behavioral Research, 44(2), 147–181.
47. Dong, B., Bai, J., Xu, T., & Zhou, Y. (2024, April 19–21). Large language models in education: A systematic review. 2024 6th International Conference on Computer Science and Technologies in Education (CSTE), Xi’an, China.
48. Dubey, S., Ghosh, R., Dubey, M. J., Chatterjee, S., Das, S., & Benito-León, J. (2024). Redefining cognitive domains in the era of ChatGPT: A comprehensive analysis of artificial intelligence’s influence and future implications. Medical Research Archives, 12(6), 5383.
49. Dunn, E. C., Masyn, K. E., Johnston, W. R., & Subramanian, S. V. (2015). Modeling contextual effects using individual-level data and without aggregation: An illustration of multilevel factor analysis (MLFA) with collective efficacy. Population Health Metrics, 13, 12.
50. Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., Baabdullah, A. M., Koohang, A., Raghavan, V., Ahuja, M., Albanna, H., Albashrawi, M. A., Al-Busaidi, A. S., Balakrishnan, J., Barlette, Y., Basu, S., Bose, I., Brooks, L., Buhalis, D., … Wright, R. (2023). So what if ChatGPT wrote it? Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI. International Journal of Information Management, 71, 102642.
  51. Dwivedi, Y. K., Rana, N. P., Jeyaraj, A., Clement, M., & Williams, M. D. (2019). Re-examining the unified theory of acceptance and use of technology (UTAUT): Towards a revised theoretical model. Information Systems Frontiers, 21, 719–734. [Google Scholar] [CrossRef]
  52. Dysart, S., & Weckerle, C. (2015). Professional development in higher education: A model for meaningful technology integration. Journal of Information Technology Education: Innovations in Practice, 14, 255–265. [Google Scholar] [CrossRef]
  53. Đerić, E., Frank, D., & Vuković, D. (2025). Exploring the ethical implications of using generative AI tools in higher education. Informatics, 12(2), 36. [Google Scholar] [CrossRef]
  54. Enang, E., & Christopoulou, D. (2024). Exploring academics’ intentions to incorporate ChatGPT into their teaching practices. Journal of University Teaching and Learning Practice, 21(8), 1–29. [Google Scholar] [CrossRef]
  55. Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4(3), 272–299. [Google Scholar] [CrossRef]
  56. Fajt, B., & Schiller, E. (2025). ChatGPT in academia: University students’ attitudes towards the use of ChatGPT and plagiarism. Journal of Academic Ethics, 23(2), 1–20. [Google Scholar] [CrossRef]
  57. Faraon, M., Rönkkö, K., Milrad, M., & Tsui, E. (2025). International perspectives on artificial intelligence in higher education: An explorative study of students’ intention to use ChatGPT across the Nordic countries and the USA. Education and Information Technologies. [Google Scholar] [CrossRef]
  58. Feng, J., Yu, B., Tan, W. H., Dai, Z., & Li, Z. (2025). Key factors influencing educational technology adoption in higher education: A systematic review. PLoS Digital Health, 4(4), e0000764. [Google Scholar] [CrossRef]
  59. Fernandes, A., Alturas, B., & Fernandes, A. (2025). Hybrid generation: Perceptions of social networks among Generation X in Portugal. Entertainment Computing, 52, 100914. [Google Scholar] [CrossRef]
  60. Ferrando, P. J., & Anguiano-Carrasco, C. (2010). El análisis factorial como técnica de investigación en Psicología [Factor analysis as a research technique in psychology]. Papeles del Psicólogo, 31(1), 18–33. [Google Scholar]
  61. Ferrando, P. J., & Lorenzo-Seva, U. (2014). El análisis factorial exploratorio de los ítems: Algunas consideraciones adicionales [Exploratory item factor analysis: Some additional considerations]. Anales de Psicología, 30(3), 1170–1175. [Google Scholar] [CrossRef]
  62. Fischer, R., & Karl, J. A. (2019). A primer to (cross-cultural) multi-group invariance testing possibilities in R. Frontiers in Psychology, 10, 1507. [Google Scholar] [CrossRef]
  63. Floyd, F. J., & Widaman, K. F. (1995). Factor analysis in the development and refinement of clinical assessment instruments. Psychological Assessment, 7(3), 286–299. [Google Scholar] [CrossRef]
  64. Fowler, D. S. (2023). AI in higher education: Academic integrity, harmony of insights, and recommendations. Journal of Ethics in Higher Education, 3, 127–143. [Google Scholar] [CrossRef]
  65. Fuchs, K., & Aguilos, V. (2023). Integrating artificial intelligence in higher education: Empirical insights from students about using ChatGPT. International Journal of Information and Education Technology, 13(9), 1365–1371. [Google Scholar] [CrossRef]
  66. García, J. A. M., Gómez, C. G., López, A. T., & Schlosser, M. J. (2024). Applying the technology acceptance model to online self-learning: A multigroup analysis. Journal of Innovation & Knowledge, 9(4), 100571. [Google Scholar] [CrossRef]
  67. García-Alonso, E. M., León-Mejía, A. C., Sánchez-Cabrero, R., & Guzmán-Ordaz, R. (2024). Training and technology acceptance of ChatGPT in university students of social sciences: A netcoincidental analysis. Behavioral Sciences, 14(7), 612. [Google Scholar] [CrossRef]
  68. García-Peñalvo, F. J. (2024). Inteligencia artificial generativa y educación: Un análisis desde múltiples perspectivas [Generative artificial intelligence and education: An analysis from multiple perspectives]. Education in the Knowledge Society (EKS), 25, 31942. [Google Scholar] [CrossRef]
  69. Garrido-Ruso, M., & Aibar-Guzmán, B. (2022). The moderating effect of contextual factors and employees’ demographic features on the relationship between CSR and work-related attitudes: A meta-analysis. Corporate Social Responsibility and Environmental Management, 29(5), 1839–1854. [Google Scholar] [CrossRef]
  70. George, D., & Mallery, P. (2010). SPSS for Windows step by step: A simple guide and reference (10th ed.). Pearson. [Google Scholar]
  71. Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6. [Google Scholar] [CrossRef]
  72. Giannakos, M. N., Sharma, K., Papamitsiou, Z., & Pérez-Sanagustín, M. (2024). The promise and challenges of generative AI in education. Behaviour & Information Technology, 43(4), 213–228. [Google Scholar] [CrossRef]
  73. Golden, W. (2023). ChatGPT: A trusted source? Irish Journal of Technology Enhanced Learning, 7(2), 113–125. [Google Scholar] [CrossRef]
  74. Gorsuch, R. L. (1983). Factor analysis (2nd ed.). Lawrence Erlbaum Associates. [Google Scholar]
  75. Gould, S. J. (2015). Second order confirmatory factor analysis: An example. In J. M. Hawes, & G. B. Glisan (Eds.), Proceedings of the 1987 Academy of Marketing Science (AMS) annual conference (pp. 488–490). Springer. [Google Scholar] [CrossRef]
  76. Grand-Guillaume-Perrenoud, J. A., Geese, F., Uhlmann, K., Blasimann, A., Wagner, F. L., Neubauer, F. B., Huwendiek, S., Hahn, S., & Schmitt, K.-U. (2023). Mixed methods instrument validation: Evaluation procedures for practitioners developed from the validation of the Swiss Instrument for Evaluating Interprofessional Collaboration. BMC Health Services Research, 23, 83. [Google Scholar] [CrossRef]
  77. Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2019). Multivariate data analysis (8th ed.). Cengage Learning. [Google Scholar]
  78. Hakimi, T. I., Jaafar, J. A., Mohamad, M. A., & Omar, M. (2024). Unified theory of acceptance and use of technology (UTAUT) applied in higher education research: A systematic literature review and bibliometric analysis. Multidisciplinary Reviews, 7(12), 2024303. [Google Scholar] [CrossRef]
  79. Hancock, G. R., & An, J. (2020). A closed-form alternative for estimating ω reliability under unidimensionality. Measurement: Interdisciplinary Research and Perspectives, 18(1), 1–14. [Google Scholar] [CrossRef]
  80. Hayes, A. F., & Coutts, J. J. (2020). Use Omega rather than Cronbach’s Alpha for estimating reliability. But…. Communication Methods and Measures, 14(1), 1–24. [Google Scholar] [CrossRef]
  81. Hayton, J. C., Allen, D. G., & Scarpello, V. G. (2004). Factor retention decisions in exploratory factor analysis: A tutorial on parallel analysis. Organizational Research Methods, 7(2), 191–205. [Google Scholar] [CrossRef]
  82. Herodotou, C., Naydenova, G., Boroowa, A., Gilmour, A., & Rienties, B. (2020). How can predictive learning analytics and motivational interventions increase student retention and enhance administrative support in distance education? Journal of Learning Analytics, 7(2), 72–83. [Google Scholar] [CrossRef]
  83. Horn, J. L. (1965). A rationale and test for the number of factors in factor analysis. Psychometrika, 30(2), 179–185. [Google Scholar] [CrossRef] [PubMed]
  84. Howard, M. C. (2016). A review of exploratory factor analysis decisions and overview of current practices: What we are doing and how can we improve? International Journal of Human–Computer Interaction, 32(1), 51–62. [Google Scholar] [CrossRef]
  85. Huamani-Anco, Y. S., & Maraza-Quispe, B. (2025). Evaluation of the impact of ChatGPT on the development of research skills in secondary education students: An experimental approach. International Journal of Information and Education Technology, 15(1), 59–69. [Google Scholar] [CrossRef]
  86. Huber, S. E., Kiili, K., Nebel, S., Mayer, I., Stoyanov, S., & Ninaus, M. (2024). Leveraging the potential of large language models in education through playful and game-based learning. Educational Psychology Review, 36, 25. [Google Scholar] [CrossRef]
  87. Hughes, L., Malik, T., Dettmer, S., Al-Busaidi, A. S., & Dwivedi, Y. K. (2025). Reimagining higher education: Navigating the challenges of generative AI adoption. Information Systems Frontiers, 1–23. [Google Scholar] [CrossRef]
  88. Hurtado de Barrera, J. (2012). El proyecto de investigación: Comprensión holística de la metodología y la investigación [The research project: A holistic understanding of methodology and research] (7th ed.). Ediciones Quirón. [Google Scholar]
  89. Ismail, H., Johaimi Ling, N. L. F., Abdul Wahab, M., Mohd Fawzy, F. F., Mohamed Shaari, S., & Jiram, W. R. A. (2025). The elderly-friendly housing neighbourhood preferred features by generations in Malaysia. Built Environment Journal, 22, 15–25. [Google Scholar] [CrossRef]
  90. Ismail, H., & Shaari, S. M. (2019). Housing decision: The choice between location, house and neighbourhood among Malaysian generations. MATEC Web of Conferences, 266, 01026. [Google Scholar] [CrossRef]
  91. Jaipal-Jamani, K., Figg, C., Collier, D., Gallagher, T., Winters, K., & Ciampa, K. (2018). Developing TPACK of university faculty through technology leadership roles. Italian Journal of Educational Technology, 26(1), 39–55. [Google Scholar] [CrossRef]
  92. Javaid, M., Haleem, A., Singh, R. P., Khan, S., & Khan, I. H. (2023). Unlocking the opportunities through ChatGPT tool towards ameliorating the education system. BenchCouncil Transactions on Benchmarks, Standards and Evaluations, 3(2), 100115. [Google Scholar] [CrossRef]
  93. Jennrich, R. I. (1973). Standard errors for obliquely rotated factor loadings. Psychometrika, 38(4), 593–604. [Google Scholar] [CrossRef]
  94. Kaiser, H. F. (1960). The application of electronic computers to factor analysis. Educational and Psychological Measurement, 20(1), 141–151. [Google Scholar] [CrossRef]
  95. Kaiser, H. F. (1974). An index of factorial simplicity. Psychometrika, 39(1), 31–36. [Google Scholar] [CrossRef]
  96. Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Sailer, M., Schmidt, A., Seidel, T., … Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274. [Google Scholar] [CrossRef]
  97. Kim, M. K., Jhee, S. Y., & Han, S.-L. (2025). The impact of ChatGPT’s quality factors on perceived usefulness, perceived enjoyment, and continuous usage intention using the IS success model. Asia Marketing Journal, 26(4), 3. [Google Scholar] [CrossRef]
  98. Kim, S. J. (2024a). Research ethics and issues regarding the use of ChatGPT-like artificial intelligence platforms by authors and reviewers: A narrative review. Science Editing, 11(2), 96–106. [Google Scholar] [CrossRef]
  99. Kim, S. J. (2024b). Trends in research on ChatGPT and adoption-related issues discussed in articles: A narrative review. Science Editing, 11(1), 3–11. [Google Scholar] [CrossRef]
  100. Kong, S. C., Yang, Y., & Hou, C. (2024). Examining teachers’ behavioural intention of using generative artificial intelligence tools for teaching and learning based on the extended technology acceptance model. Computers and Education: Artificial Intelligence, 7, 100328. [Google Scholar] [CrossRef]
  101. Kovari, A. (2024). Ethical use of ChatGPT in education—Best practices to combat AI-induced plagiarism. Frontiers in Education, 9, 1465703. [Google Scholar] [CrossRef]
  102. Kwak, Y., & Pardos, Z. (2024). Bridging large language model disparities: Skill tagging of multilingual educational content. British Journal of Educational Technology, 55(5), 2039–2057. [Google Scholar] [CrossRef]
  103. Kyriazos, T., & Poga, M. (2023). Dealing with multicollinearity in factor analysis: The problem, detections, and solutions. Open Journal of Statistics, 13, 404–424. [Google Scholar] [CrossRef]
  104. Labadze, L., Grigolia, M., & Machaidze, L. (2023). Role of AI chatbots in education: Systematic literature review. International Journal of Educational Technology in Higher Education, 20, 56. [Google Scholar] [CrossRef]
  105. Lawasi, M. C., Rohman, V. A., & Shoreamanis, M. (2024). The use of AI in improving student’s critical thinking skills. Proceedings Series on Social Sciences & Humanities, 18, 366–370. [Google Scholar] [CrossRef]
  106. Lee, A. T., Ramasamy, R. K., & Subbarao, A. (2025). Understanding psychosocial barriers to healthcare technology adoption: A review of TAM (technology acceptance model) and UTAUT (unified theory of acceptance and use of technology) frameworks. Healthcare, 13(3), 250. [Google Scholar] [CrossRef]
  107. Lee, J., Hicke, Y., Yu, R., Brooks, C., & Kizilcec, R. (2024). The life cycle of large language models in education: A framework for understanding sources of bias. British Journal of Educational Technology, 55(5), 1982–2002. [Google Scholar] [CrossRef]
  108. León-Domínguez, U. (2024). Potential cognitive risks of generative transformer-based AI chatbots on higher order executive functions. Neuropsychology, 38(4), 293–308. [Google Scholar] [CrossRef]
  109. Limna, P., Kraiwanit, T., Jangjarat, K., Klayklung, P., & Chocksathaporn, P. (2023). The use of ChatGPT in the digital era: Perspectives on chatbot implementation. Journal of Applied Learning & Teaching, 6(1), 32. [Google Scholar] [CrossRef]
  110. Lloret-Segura, S., Ferreres-Traver, A., Hernández-Baeza, A., & Tomás-Marco, I. (2014). El análisis factorial exploratorio de los ítems: Una guía práctica, revisada y actualizada [Exploratory item factor analysis: A practical guide, revised and updated]. Anales de Psicología, 30(3), 1151–1169. [Google Scholar] [CrossRef]
  111. Lorenzo-Seva, U., & Ferrando, P. J. (2021). MSA: The forgotten index for identifying inappropriate items before computing exploratory item factor analysis. Methodology, 17(4), 296–306. [Google Scholar] [CrossRef]
  112. Lorenzo-Seva, U., & Ferrando, P. J. (2024). Determining sample size requirements in EFA solutions: A simple empirical proposal. Multivariate Behavioral Research, 59(5), 899–912. [Google Scholar] [CrossRef]
  113. Ma, J., Wang, P., Li, B., Wang, T., Pang, X. S., & Wang, D. (2024). Exploring user adoption of ChatGPT: A technology acceptance model perspective. International Journal of Human–Computer Interaction, 41(2), 1431–1445. [Google Scholar] [CrossRef]
  114. MacCallum, R. C., Widaman, K. F., Zhang, S., & Hong, S. (1999). Sample size in factor analysis. Psychological Methods, 4(1), 84–99. [Google Scholar] [CrossRef]
  115. Mahama, I., Aboagye, J., & Nketiah, E. (2023). ChatGPT in academic writing: A threat to human creativity and academic integrity? Indonesian Journal of Innovation and Applied Sciences, 3(3), 423–432. [Google Scholar] [CrossRef]
  116. Makkonen, M., Salo, M., & Pirkkalainen, H. (2023, October 27–28). The effects of job and user characteristics on the perceived usefulness and use continuance intention of generative artificial intelligence chatbots at work. 9th International Conference on Socio-Technical Perspectives in IS (STPIS’23), Portsmouth, UK. CEUR-WS.org. Available online: https://ceur-ws.org/Vol-3598/paper9.pdf (accessed on 1 March 2025).
  117. Malkewitz, C. P., Schwall, P., Meesters, C., & Hardt, J. (2023). Estimating reliability: A comparison of Cronbach’s α, McDonald’s ωt and the greatest lower bound. Social Sciences & Humanities Open, 7(1), 100368. [Google Scholar] [CrossRef]
  118. Mardia, K. V. (1970). Measures of multivariate skewness and kurtosis with applications. Biometrika, 57(3), 519–530. [Google Scholar] [CrossRef]
  119. Martín-Crespo Rodríguez, P. (2024). La inteligencia artificial y la razón instrumental [Artificial intelligence and instrumental reason]. Revista Paideia, 119, 75–87. Available online: https://sepfi.es/wp-content/uploads/2024/10/Paideia119_WEB_La-IA-y-la-razon-instrumental.pdf (accessed on 4 May 2025).
  120. Mastrogiacomi, F. (2024). Facilitating a paradigm shift for teaching and learning with AIs. Italian Journal of Educational Technology, 31(1), 69–81. [Google Scholar] [CrossRef]
  121. Matsunaga, M. (2010). How to factor-analyze your data right: Do’s, don’ts, and how-to’s. International Journal of Psychological Research, 3(1), 97–110. [Google Scholar] [CrossRef]
  122. McDonald, R. P. (2013). Test theory: A unified treatment (1st ed.). Psychology Press. [Google Scholar]
  123. Memon, M. A., Ting, H., Cheah, J.-H., Thurasamy, R., Chuah, F., & Cham, T. H. (2020). Sample size for survey research: Review and recommendations. Journal of Applied Structural Equation Modeling, 4(2), i–xx. [Google Scholar] [CrossRef]
  124. Mennella, C., Maniscalco, U., De Pietro, G., & Esposito, M. (2024). Ethical and regulatory challenges of AI technologies in healthcare: A narrative review. Heliyon, 10(4), e26297. [Google Scholar] [CrossRef]
  125. Menon, D., & Shilpa, K. (2023). “Chatting with ChatGPT”: Analyzing the factors influencing users’ intention to use the OpenAI’s ChatGPT using the UTAUT model. Heliyon, 9(11), e20962. [Google Scholar] [CrossRef]
  126. Miraz, M. H., Jin, H. H., Mahmood, M., Hasan, M. T., Sarkar, J. B., Salleh, N. M., & Chowdhury, A. Y. (2025). ChatGPT’s application in management education: In-depth critical analysis of opportunities, threats, and strategies. Multidisciplinary Reviews, 8(10), 2025338. [Google Scholar] [CrossRef]
  127. Mogavi, R. H., Deng, C., Kim, J. J., Zhou, P., Kwon, Y. D., Metwally, A. H. S., Tlili, A., Bassanelli, S., Bucchiarone, A., Gujar, S., Nacke, L. E., & Hui, P. (2024). ChatGPT in education: A blessing or a curse? A qualitative study exploring early adopters’ utilization and perceptions. Computers in Human Behavior: Artificial Humans, 2(1), 100027. [Google Scholar] [CrossRef]
  128. Mourtajji, L., & Arts-Chiss, N. (2024). Unleashing ChatGPT: Redefining technology acceptance and digital transformation in higher education. Administrative Sciences, 14(12), 325. [Google Scholar] [CrossRef]
  129. Mukti, B. H. (2025). Sample size determination: Principles and applications for health research. Health Sciences International Journal, 3(1), 127–143. [Google Scholar] [CrossRef]
  130. Mundfrom, D. J., Shaw, D. G., & Ke, T. L. (2005). Minimum sample size recommendations for conducting factor analyses. International Journal of Testing, 5(2), 159–168. [Google Scholar] [CrossRef]
  131. Naznin, K., Al Mahmud, A., Nguyen, M. T., & Chua, C. (2025). ChatGPT integration in higher education for personalized learning, academic writing, and coding tasks: A systematic review. Computers, 14(2), 53. [Google Scholar] [CrossRef]
  132. Ngo, T. T. A., Tran, T. T., An, G. K., & Nguyen, P. T. (2024). ChatGPT for educational purposes: Investigating the impact of knowledge management factors on student satisfaction and continuous usage. IEEE Transactions on Learning Technologies, 17, 1341–1352. [Google Scholar] [CrossRef]
  133. Nikoçeviq-Kurti, E., & Bërdynaj-Syla, L. (2024). ChatGPT integration in higher education: Impacts on teaching and professional development of university faculty. Educational Process: International Journal, 13(3), 22–39. [Google Scholar] [CrossRef]
  134. O’Connor, B. P. (2000). SPSS and SAS programs for determining the number of components using parallel analysis and Velicer’s MAP test. Behavior Research Methods, Instrumentation, and Computers, 32, 396–402. [Google Scholar] [CrossRef]
  135. Paliz Sánchez, C. D. R., Mazacón Cervantes, C. J., Mazacón Gómez, M. N., & Suárez Guamán, P. J. (2024). Bioestadística: Introducción a la estadística en ciencias de la salud [Biostatistics: An introduction to statistics in the health sciences] (1st ed.). Binario. [Google Scholar]
  136. Palm, A. (2022). Innovation systems for technology diffusion: An analytical framework and two case studies. Technological Forecasting and Social Change, 182, 121821. [Google Scholar] [CrossRef]
  137. Panda, S., & Kaur, N. (2024). Exploring the role of generative AI in academia: Opportunities and challenges. IP Indian Journal of Library Science & Information Technology, 9(1), 12–23. [Google Scholar] [CrossRef]
  138. Parveen, K., Phuc, T. Q. B., Alghamdi, A. A., Hajjej, F., Obidallah, W. J., Alduraywish, Y. A., & Shafiq, M. (2024). Unraveling the dynamics of ChatGPT adoption and utilization through structural equation modeling. Scientific Reports, 14(1), 23469. [Google Scholar] [CrossRef] [PubMed]
  139. Pereira-González, L. M., Basantes-Andrade, A., Mora-Grijalva, M., & Galárraga-Andrade, A. (2025a). Latent dimensions in the adoption of ChatGPT at the University: CHASSIS model. Alteridad, 20(2), 184–195. [Google Scholar] [CrossRef]
  140. Pereira-González, L. M., Basantes-Andrade, A., Naranjo-Toro, M., & Guia-Pereira, M. (2025b). IMPACT Model [Dataset]. Open Science Framework (OSF). [Google Scholar] [CrossRef]
  141. Ponce Gea, A. I., & Serrano Pastor, F. J. (2022). The cultural element in the adaptation of a test: Proposals and reflections on internal and external influences. Education Sciences, 12(5), 291. [Google Scholar] [CrossRef]
  142. Poobalan, D., & Latip, A. R. A. (2023). Investigating the determinants of ChatGPT adoption among university students. Quantum Journal of Social Sciences and Humanities, 5(6), 51–64. [Google Scholar] [CrossRef]
  143. Rogers, P. (2022). Best practices for your exploratory factor analysis: A factor tutorial. Revista de Administração Contemporânea, 26(6), e210085. [Google Scholar] [CrossRef]
  144. Romero-Rodríguez, J., Ramírez-Montoya, M., Buenestado-Fernández, M., & Lara-Lara, F. (2023). Uso de ChatGPT en la universidad como herramienta para el pensamiento complejo: Percepción de utilidad por parte de los estudiantes [Use of ChatGPT at university as a tool for complex thinking: Students' perceptions of its usefulness]. Revista de Nuevos Enfoques en Investigación Educativa, 12(2), 323–339. [Google Scholar] [CrossRef]
  145. Saif, N., Khan, S. U., Shaheen, I., ALotaibi, F. A., Alnfiai, M. M., & Arif, M. (2024). Chat-GPT: Validating technology acceptance model (TAM) in education sector via ubiquitous learning mechanism. Computers in Human Behavior, 154, 108097. [Google Scholar] [CrossRef]
  146. Salih, S., Husain, O., Hamdan, M., Abdelsalam, S., Elshafie, H., & Motwakel, A. (2025). Transforming education with AI: A systematic review of ChatGPT’s role in learning, academic practices, and institutional adoption. Results in Engineering, 25, 103837. [Google Scholar] [CrossRef]
  147. Sallam, M., Elsayed, W., Al-Shorbagy, M., Barakat, M., El Khatib, S., Ghach, W., Alwan, N., Hallit, S., & Malaeb, D. (2024). ChatGPT usage and attitudes are driven by perceptions of usefulness, ease of use, risks, and psycho-social impact: A study among university students in the UAE. Frontiers in Education, 9, 1414758. [Google Scholar] [CrossRef]
  148. Sallam, M., Salim, N. A., Barakat, M., Al-Mahzoum, K., Al-Tammemi, A. B., Malaeb, D., Hallit, R., & Hallit, S. (2023). Assessing health students’ attitudes and usage of ChatGPT in Jordan: Validation study. JMIR Medical Education, 9, e48254. [Google Scholar] [CrossRef]
  149. Sappaile, B. I., Abeng, A. T., & Nuridayanti, N. (2023). Exploratory factor analysis as a tool for determining indicators of a research variable: Literature review. Indonesian Journal of Educational and Neural Studies, 1(6), 387. [Google Scholar] [CrossRef]
  150. Saranza, C., Villamar, E., Arlan, E., Francia, J., Lopio-Alas, L., & Buca, R. (2024). Exploring the impact of usage frequency on perceived value of ChatGPT among university students: The moderating role of income. International Journal of Social Science and Human Research, 7(12), 9430–9442. [Google Scholar] [CrossRef]
  151. Saxena, A., & Doleck, T. (2023). A structural model of student continuance intentions in ChatGPT adoption. Eurasia Journal of Mathematics, Science and Technology Education, 19(12), em2366. [Google Scholar] [CrossRef]
  152. Sánchez Espejo, F. G. (2019). Tesis: Desarrollo metodológico de la investigación [Thesis: Methodological development of research] (1st ed.). Centrum Legalis. [Google Scholar]
  153. Scherer, R., Siddiq, F., & Tondeur, J. (2019). The technology acceptance model (TAM): A meta-analytic structural equation modeling approach to explaining teachers’ adoption of digital technology in education. Computers & Education, 128, 13–35. [Google Scholar] [CrossRef]
  154. Sebastian, G. (2023). Privacy and data protection in ChatGPT and other AI chatbots: Strategies for securing user information. International Journal of Security and Privacy in Pervasive Computing, 15(1), 1–14. [Google Scholar] [CrossRef]
  155. Shata, A. (2025). Opting out of AI: Exploring perceptions, reasons, and concerns behind faculty resistance to generative AI. Frontiers in Communication, 10, 1614804. [Google Scholar] [CrossRef]
  156. Shata, A., & Hartley, K. (2025). Artificial intelligence and communication technologies in academia: Faculty perceptions and the adoption of generative AI. International Journal of Educational Technology in Higher Education, 22, 14. [Google Scholar] [CrossRef]
  157. Sigüenza Orellana, J., Andrade Cordero, C., & Chitacapa Espinoza, J. (2024). Validación del cuestionario para docentes: Percepción sobre el uso de ChatGPT en la educación superior [Validation of a questionnaire for teachers: Perceptions of ChatGPT use in higher education]. Revista Andina de Educación, 8(1), 000816. [Google Scholar] [CrossRef]
  158. Soares, A., Lerigo-Sampson, M., & Barker, J. (2024). Recontextualising the unified theory of acceptance and use of technology (UTAUT) framework to higher education online marking. Journal of University Teaching and Learning Practice, 21(8), e7ft8x880. [Google Scholar] [CrossRef]
  159. Soper, D. S. (2025). A-priori sample size calculator for structural equation models (version 4.0) [Computer software]. Daniel Soper. Available online: https://www.danielsoper.com/statcalc (accessed on 1 April 2025).
  160. Srivastava, S. J. (2024). Insights on managing generational diversity at workplace. In Proceedings of the Society of Petroleum Engineers—ADIPEC 2024 (Article 222297-MS). Society of Petroleum Engineers. [Google Scholar] [CrossRef]
  161. Stahl, B. C. (2025). Locating the ethics of ChatGPT—Ethical issues as affordances in AI ecosystems. Information, 16(2), 104. [Google Scholar] [CrossRef]
  162. Stasewitsch, E., Dokuka, S., & Kauffeld, S. (2022). Promoting educational innovations and change through networks between higher education teachers. Tertiary Education and Management, 28, 61–79. [Google Scholar] [CrossRef]
  163. Strauss, M. E., & Smith, G. T. (2009). Construct validity: Advances in theory and methodology. Annual Review of Clinical Psychology, 5, 1–25. [Google Scholar] [CrossRef] [PubMed]
  164. Strielkowski, W., Grebennikova, V., Lisovskiy, A., Rakhimova, G., & Vasileva, T. (2025). AI-driven adaptive learning for sustainable educational transformation. Sustainable Development, 33(2), 1921–1947. [Google Scholar] [CrossRef]
  165. Strzelecki, A., Cicha, K., Rizun, M., & Rutecka, P. (2024). Acceptance and use of ChatGPT in the academic community. Education and Information Technologies, 29, 22943–22968. [Google Scholar] [CrossRef]
  166. Sun, Y., Sheng, D., Zhou, Z., & Wu, Y. (2024). AI hallucination: Towards a comprehensive classification of distorted information in artificial intelligence-generated content. Humanities and Social Sciences Communications, 11, 1278. [Google Scholar] [CrossRef]
  167. Suryavanshi, P., Kapse, M., & Sharma, V. (2025). Integrating ChatGPT into software development: Valuating acceptance and utilisation among developers. Australasian Accounting, Business and Finance Journal, 19(1), 96–117. [Google Scholar] [CrossRef]
  168. Tabachnick, B. G., & Fidell, L. S. (2013). Using multivariate statistics (6th ed.). Pearson Education. [Google Scholar]
  169. Takahashi, C. K., Bastos de Figueiredo, J. C., & Scornavacca, E. (2024). Investigating the diffusion of innovation: A comprehensive study of successive diffusion processes through analysis of search trends, patent records, and academic publications. Technological Forecasting and Social Change, 198, 122991. [Google Scholar] [CrossRef]
  170. Thompson, B., & Daniel, L. G. (1996). Factor analytic evidence for the construct validity of scores: A historical overview and some guidelines. Educational and Psychological Measurement, 56(2), 197–208. [Google Scholar] [CrossRef]
  171. Toma, R. B., & Yánez-Pérez, I. (2024). Effects of ChatGPT use on undergraduate students’ creativity: A threat to creative thinking? Discover Artificial Intelligence, 4, 74. [Google Scholar] [CrossRef]
  172. Uppal, K., & Hajian, S. (2025). Students’ perceptions of ChatGPT in higher education: A study of academic enhancement, procrastination, and ethical concerns. European Journal of Educational Research, 14(1), 199–211. [Google Scholar] [CrossRef]
  173. Valaidum, S., & Mahat, J. (2024, November 25–29). Factors influencing ChatGPT use behaviour among trainee teachers. International Conference on Computers in Education (ICCE), Manila, Philippines. [Google Scholar] [CrossRef]
  174. Valencia Londoño, P. A., Trujillo Orrego, S. P., Duque Monsalve, L. F., & Giraldo Cardona, L. S. (2025). Factor structure and reliability of the Family Resilience Scale (FRAS): Adaptation with Colombian families exposed to stressful events. Frontiers in Psychology, 16, 1568139. [Google Scholar] [CrossRef]
  175. van Raaij, E. M., & Schepers, J. J. L. (2008). The acceptance and use of a virtual learning environment in China. Computers & Education, 50(3), 838–852. [Google Scholar] [CrossRef]
  176. Velicer, W. F. (1976). Determining the number of components from the matrix of partial correlations. Psychometrika, 41, 321–327. [Google Scholar] [CrossRef]
  177. Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the Technology Acceptance Model: Four longitudinal field studies. Management Science, 46(2), 186–204. [Google Scholar] [CrossRef]
  178. Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478. [Google Scholar] [CrossRef]
  179. Venkatesh, V., Thong, J. Y. L., & Xu, X. (2012). Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly, 36(1), 157–178. [Google Scholar] [CrossRef]
  180. Venkatesh, V., & Zhang, X. (2010). Unified theory of acceptance and use of technology: U.S. vs. China. Journal of Global Information Technology Management, 13(1), 5–27. [Google Scholar] [CrossRef]
  181. Vörösmarty, G., & Dobos, I. (2020). Green purchasing frameworks considering firm size: A multicollinearity analysis using variance inflation factor. Supply Chain Forum: An International Journal, 21(4), 290–301. [Google Scholar] [CrossRef]
  182. Wang, H., Dang, A., Wu, Z., & Mac, S. (2024). Generative AI in higher education: Seeing ChatGPT through universities’ policies, resources, and guidelines. Computers and Education: Artificial Intelligence, 7, 100326. [Google Scholar] [CrossRef]
  183. Wang, V. H.-C., Silver, D., & Pagán, J. A. (2022). Generational differences in beliefs about COVID-19 vaccines. Preventive Medicine, 157, 107005. [Google Scholar] [CrossRef]
  184. Wang, Y., & Zhang, W. (2023). Factors influencing the adoption of generative AI for art designing among Chinese Generation Z: A structural equation modeling approach. IEEE Access, 11, 143272–143284. [Google Scholar] [CrossRef]
  185. Wedlock, B. C., & Trahan, M. P. (2020). Revisiting the unified theory of acceptance and the use of technology (UTAUT) model and scale: An empirical evolution of educational technology. Research Issues in Contemporary Education, 4(1), 6–20. Available online: https://eric.ed.gov/?id=EJ1244613 (accessed on 3 March 2025).
  186. Worthington, R. L., & Whittaker, T. A. (2006). Scale development research: A content analysis and recommendations for best practices. The Counseling Psychologist, 34, 806–838. [Google Scholar] [CrossRef]
  187. Xiao, J., Xu, Z., Xiao, A., Wang, X., & Skare, M. (2024). Overcoming barriers and seizing opportunities in the innovative adoption of next-generation digital technologies. Journal of Innovation & Knowledge, 9(4), 100622. [Google Scholar] [CrossRef]
  188. Xue, L., Rashid, A. M., & Ouyang, S. (2024). The Unified Theory of Acceptance and Use of Technology (UTAUT) in higher education: A systematic review. SAGE Open, 14(1), 21582440241229570. [Google Scholar] [CrossRef]
  189. Yan, L., Sha, L., Zhao, L., Li, Y., Maldonado, R., Chen, G., Li, X., Jin, Y., & Gašević, D. (2023). Practical and ethical challenges of large language models in education: A systematic scoping review. British Journal of Educational Technology, 55, 90–112. [Google Scholar] [CrossRef]
  190. Yu, C., Yan, J., & Cai, N. (2024). ChatGPT in higher education: Factors influencing ChatGPT user satisfaction and continued use intention. Frontiers in Education, 9, 1354929. [Google Scholar] [CrossRef]
  191. Yurtseven, N., & Karadeniz, Ş. (2020). An overview of generation alpha. In The teacher of Generation Alpha (pp. 11–31). Verlag Peter Lang AG. [Google Scholar]
  192. Zhai, C., Wibowo, S., & Li, L. D. (2024). The effects of over-reliance on AI dialogue systems on students’ cognitive abilities: A systematic review. Smart Learning Environments, 11, 28. [Google Scholar] [CrossRef]
  193. Zhou, J., Müller, H., Holzinger, A., & Chen, F. (2024). Ethical ChatGPT: Concerns, challenges, and commandments. Electronics, 13(17), 3417. [Google Scholar] [CrossRef]
  194. Zinbarg, R. E., Revelle, W., Yovel, I., & Li, W. (2005). Cronbach’s α, Revelle’s β, and McDonald’s ωh: Their relations with each other and two alternative conceptualizations of reliability. Psychometrika, 70(1), 123–133. [Google Scholar] [CrossRef]
  195. Zirar, A. (2023). Exploring the impact of language models, such as ChatGPT, on student learning and assessment. Review of Education, 11(3), e3433. [Google Scholar] [CrossRef]
Figure 1. Structural integration of dimensions in the IMPACT model compared to previous frameworks.
Table 1. Generational Cohort Classification Criteria.
| Generation | Birth Years | Key Characteristics | Authors & Years |
|---|---|---|---|
| Traditionalists/Silent | 1928–1945 | Experienced pre-technology era; value discipline and hard work. | Ismail and Shaari (2019); Ismail et al. (2025); V. H.-C. Wang et al. (2022) |
| Baby Boomers | 1946–1964 | Experienced post-WWII economic boom; value loyalty and work ethic. | Ismail and Shaari (2019); Ismail et al. (2025); Fernandes et al. (2025) |
| Generation X | 1965–1980 | Adapted to both pre- and post-digital worlds; value independence and work–life balance. | Ismail and Shaari (2019); Ismail et al. (2025); Fernandes et al. (2025); Yurtseven and Karadeniz (2020) |
| Millennials/Generation Y | 1981–1996 | Digital natives; value flexibility and social consciousness. | Ismail and Shaari (2019); Ismail et al. (2025); Fernandes et al. (2025); Srivastava (2024) |
| Generation Z | 1997–2012 | Grew up with digital technology; value instant access to information and social media. | Ismail et al. (2025); Fernandes et al. (2025); Srivastava (2024) |
| Generation Alpha | 2013–2024 | Known as “tech thumbs,” highly integrated with technology from birth. | Srivastava (2024); V. H.-C. Wang et al. (2022) |
Note. Generational cohort classification adapted from Ismail and Shaari (2019), Ismail et al. (2025), Fernandes et al. (2025), Srivastava (2024), V. H.-C. Wang et al. (2022), and Yurtseven and Karadeniz (2020).
Table 2. Sociodemographic and Professional Characteristics of the Participants (n = 206).
| Variable | Categories/Values | %/M (SD) |
|---|---|---|
| Academic area | Health | 26.2 |
| | Education and Technology | 27.7 |
| | Applied Engineering | 17.0 |
| | Agro-Environmental Sciences and Biotechnology | 14.1 |
| | Administration and Economics | 15.0 |
| Age (years) | Range: 28–77 | M = 45.0 (SD = 9.5) |
| Sex | Male | 57.3 |
| | Female | 42.7 |
| Generation cohort | Baby Boomers | 6.0 |
| | Generation X | 43.2 |
| | Generation Y/Millennials | 49.5 |
| | Generation Z | 1.0 |
| Years of teaching experience | <6 years | 2.4 |
| | 6–10 years | 20.4 |
| | 11–15 years | 32.0 |
| | 16–20 years | 24.8 |
| | 21–25 years | 14.1 |
| | >25 years | 6.3 |
| Self-identification (ethnicity) | Mixed-race | 93.7 |
| Nationality | Ecuadorian | 97.6 |
Table 3. Frequency of ChatGPT Use in Academic and Research Activities (n = 206).
| Usage Level ¹ | % |
|---|---|
| Never | 0 |
| Rarely (less than once a month) | 14.1 |
| Occasionally (1–3 times per month) | 29.6 |
| Frequently (1–3 times per week) | 44.7 |
| Very frequently (almost every day) | 11.7 |

¹ Categories were derived from a self-reported five-point frequency scale.
Table 4. Total variance explained.
| Factor | Initial Eigenvalues: Total | % of Variance | Cumulative % | Rotation Sums of Squared Loadings: Total ¹ |
|---|---|---|---|---|
| EAC | 8.484 | 30.300 | 30.300 | 7.917 |
| PRT | 5.328 | 19.028 | 49.328 | 4.614 |
| CA | 2.230 | 7.964 | 57.293 | 4.900 |
| FACT | 1.806 | 6.451 | 63.743 | 3.783 |
| FC | 1.241 | 4.431 | 68.175 | 5.884 |

¹ Extraction method: Principal Axis Factoring.
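The percentage columns in Table 4 follow directly from the eigenvalues once the number of items in the final solution is fixed. Assuming the percentages are taken over the 28 items retained in the final solution (the number of items appearing in the pattern matrix, Table 7), the reported figures can be reproduced in a few lines:

```python
# Initial eigenvalues for the five retained factors, as reported in Table 4.
eigenvalues = [8.484, 5.328, 2.230, 1.806, 1.241]

# Assumption: percentages are computed over the 28 items of the final
# solution (the count of items listed in the pattern matrix, Table 7).
n_items = 28

# % of variance per factor, and the running cumulative percentage.
pct_variance = [ev / n_items * 100 for ev in eigenvalues]
cumulative = [sum(pct_variance[: i + 1]) for i in range(len(pct_variance))]

for name, ev, p, c in zip(["EAC", "PRT", "CA", "FACT", "FC"],
                          eigenvalues, pct_variance, cumulative):
    print(f"{name:>4}  {ev:6.3f}  {p:6.3f}  {c:6.3f}")
```

Run against the table's eigenvalues, this recovers 30.300% for EAC and a cumulative 68.175% for the five-factor solution, matching the reported values to rounding.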
Table 5. Factor correlation matrix corresponding to the IMPACT model.
| Factor | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| 1 | 1.000 | | | | |
| 2 | 0.015 | 1.000 | | | |
| 3 | **−0.395** | **−0.485** | 1.000 | | |
| 4 | **0.457** | 0.295 | **−0.435** | 1.000 | |
| 5 | **0.762** | 0.036 | **−0.335** | **0.474** | 1.000 |

Extraction method: principal axis. Rotation: Oblimin with Kaiser normalization. Significant correlations between factors (|r| ≥ 0.30) are highlighted in bold, justifying the use of oblique rotation.
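The decision rule stated in the note, that inter-factor correlations of |r| ≥ 0.30 justify oblique rather than orthogonal rotation, can be checked mechanically against the reported matrix. The values below are transcribed from Table 5:

```python
# Lower-triangular factor correlation matrix transcribed from Table 5.
corr = [
    [1.000],
    [0.015, 1.000],
    [-0.395, -0.485, 1.000],
    [0.457, 0.295, -0.435, 1.000],
    [0.762, 0.036, -0.335, 0.474, 1.000],
]

def correlated_pairs(matrix, threshold=0.30):
    """Off-diagonal factor pairs whose correlation reaches |r| >= threshold."""
    return [
        (i + 1, j + 1, r)
        for i, row in enumerate(matrix)
        for j, r in enumerate(row)
        if j < i and abs(r) >= threshold
    ]

pairs = correlated_pairs(corr)
print(len(pairs))  # seven pairs meet the threshold, so oblique rotation is warranted
```

Seven of the ten off-diagonal correlations meet the threshold, including the strong factor 1–factor 5 association (r = 0.762), which is why an orthogonal rotation would misrepresent the structure.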
Table 6. Communalities in the Exploratory Factor Analysis of the IMPACT model.
| Item | Description | Initial | Extraction * |
|---|---|---|---|
| EAC_I1 | Inv_I feel that relying on ChatGPT could reduce my ability to analyze and reflect on my own. | 0.678 | 0.701 |
| EAC_I2 | Inv_I am concerned that ChatGPT may limit the development of my critical thinking. | 0.583 | 0.447 |
| EAC_I3 | Inv_I believe that the constant use of ChatGPT may affect my ability to think critically. | 0.622 | 0.578 |
| EAC_I4 | Inv_I feel that by using ChatGPT, the authorship of my work may be questioned. | 0.617 | 0.562 |
| EAC_I5 | Inv_I am concerned that my work may lose originality if I use ChatGPT. | 0.621 | 0.643 |
| EAC_I6 | Inv_I am concerned that by using ChatGPT, the authorship of my scientific-academic work may become unclear. | 0.540 | 0.483 |
| EAC_I7 | Inv_I feel that using ChatGPT’s responses may lead me to commit unintentional plagiarism. | 0.540 | 0.504 |
| PRT_I1 | I consider ChatGPT’s responses to be accurate and reliable. | 0.564 | 0.621 |
| PRT_I2 | ChatGPT’s responses are usually accurate and useful for my academic and research work. | 0.651 | 0.660 |
| PRT_I3 | I trust the accuracy of the information provided by ChatGPT. | 0.650 | 0.733 |
| CA_I1 | Inv_The cost of advanced ChatGPT versions may represent a barrier to its use in my academic and research work. | 0.736 | 0.745 |
| CA_I2 | Inv_The cost of advanced ChatGPT versions is a barrier to my access and use of the tool. | 0.730 | 0.784 |
| CA_I3 | Inv_I wish I had greater financial access to use the advanced models and features of ChatGPT. | 0.533 | 0.486 |
| CA_I4 | Inv_The cost of advanced ChatGPT versions may influence how often I use it. | 0.741 | 0.764 |
| CA_I5 | Inv_The cost of technology access affects my use of ChatGPT. | 0.495 | 0.392 |
| FACT_I1 | The use of ChatGPT has had a positive impact on my work performance. | 0.557 | 0.500 |
| FACT_I2 | Using ChatGPT helps me carry out my academic and research tasks more efficiently. | 0.746 | 0.708 |
| FACT_I3 | ChatGPT provides me with valuable resources for my job performance. | 0.622 | 0.554 |
| FACT_I4 | My work performance has improved thanks to the support of ChatGPT. | 0.711 | 0.689 |
| FACT_I5 | I believe that using ChatGPT is useful for improving my job performance. | 0.674 | 0.632 |
| FACT_I6 | ChatGPT facilitates my learning and understanding of scientific-academic topics. | 0.623 | 0.598 |
| FACT_I7 | I am satisfied with the overall experience of using ChatGPT in my teaching and research work. | 0.757 | 0.746 |
| FC_I1 | My institution facilitates access to tools like ChatGPT. | 0.485 | 0.560 |
| FC_I2 | I perceive institutional support for the use of ChatGPT. | 0.460 | 0.485 |
| FC_I3 | I receive support from my institution to use ChatGPT in my work. | 0.566 | 0.696 |
| FACT_I8 | I consider it important to continue using ChatGPT in my teaching and research practice. | 0.708 | 0.697 |
| FACT_I9 | I am likely to continue using ChatGPT as a support tool in my academic and research work. | 0.695 | 0.635 |
| FACT_I10 | I plan to continue using ChatGPT in the future. | 0.704 | 0.644 |

* Extraction method: principal axis factoring.
Table 7. IMPACT Model Pattern Matrix.
| Item | FACT | EAC | CA | FC | PRT |
|---|---|---|---|---|---|
| I believe that using ChatGPT is useful for improving my job performance. | 0.929 | | | | |
| I am likely to continue using ChatGPT as a support tool in my academic and research work. | 0.913 | | | | |
| I plan to continue using ChatGPT in the future. | 0.913 | | | | |
| My work performance has improved thanks to the support of ChatGPT. | 0.905 | | | | |
| I am satisfied with the overall experience of using ChatGPT in my teaching and research work. | 0.875 | | | | |
| Using ChatGPT helps me carry out my academic and research tasks more efficiently. | 0.855 | | | | |
| I consider it important to continue using ChatGPT in my teaching and research practice. | 0.847 | | | | |
| ChatGPT provides me with valuable resources for my job performance. | 0.754 | | | | |
| ChatGPT facilitates my learning and understanding of scientific-academic topics. | 0.738 | | | | |
| The use of ChatGPT has had a positive impact on my work performance. | 0.678 | | | | |
| Inv_I feel that relying on ChatGPT could reduce my ability to analyze and reflect on my own. | | 0.903 | | | |
| Inv_I believe that the constant use of ChatGPT may affect my ability to think critically. | | 0.814 | | | |
| Inv_I am concerned that my work may lose originality if I use ChatGPT. | | 0.802 | | | |
| Inv_I feel that by using ChatGPT, the authorship of my work may be questioned. | | 0.771 | | | |
| Inv_I feel that using ChatGPT’s responses may lead me to commit unintentional plagiarism. | | 0.758 | | | |
| Inv_I am concerned that ChatGPT may limit the development of my critical thinking. | | 0.692 | | | |
| Inv_I am concerned that by using ChatGPT, the authorship of my scientific-academic work may become unclear. | | 0.643 | | | |
| Inv_The cost of advanced ChatGPT versions is a barrier to my access and use of the tool. | | | 0.993 | | |
| Inv_The cost of advanced ChatGPT versions may influence how often I use it. | | | 0.938 | | |
| Inv_The cost of advanced ChatGPT versions may represent a barrier to its use in my academic and research work. | | | 0.936 | | |
| Inv_The cost of technology access affects my use of ChatGPT. | | | 0.656 | | |
| Inv_I wish I had greater financial access to use the advanced models and features of ChatGPT. | | | 0.653 | | |
| I receive support from my institution to use ChatGPT in my work. | | | | 0.884 | |
| My institution facilitates access to tools like ChatGPT. | | | | 0.830 | |
| I perceive institutional support for the use of ChatGPT. | | | | 0.669 | |
| I trust the accuracy of the information provided by ChatGPT. | | | | | 0.982 |
| I consider ChatGPT’s responses to be accurate and reliable. | | | | | 0.911 |
| ChatGPT’s responses are usually accurate and useful for my academic and research work. | | | | | 0.771 |

* Extraction method: principal axis factoring. Rotation method: Oblimin with Kaiser normalization. The rotation converged in 5 iterations.
Table 8. Reliability estimates by subscale (Cronbach’s α and McDonald’s ω).

| Subscale | Cronbach’s α | McDonald’s ω |
|---|---|---|
| EAC | 0.894 | 0.893 |
| PRT | 0.852 | 0.855 |
| CA | 0.882 | 0.886 |
| FACT | 0.940 | 0.941 |
| FC | 0.795 | 0.797 |
| General | 0.858 | 0.764 |
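To make the reliability coefficients concrete, the sketch below computes Cronbach’s α from a respondents-by-items score matrix using the standard formula α = k/(k−1) · (1 − Σσ²ᵢ/σ²ₜ). The Likert responses are invented for demonstration only, not the study’s data.

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents-by-items score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores),
    using population (n-denominator) variances throughout.
    """
    k = len(scores[0])  # number of items

    def var(values):
        m = sum(values) / len(values)
        return sum((v - m) ** 2 for v in values) / len(values)

    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical 5-point Likert responses for a 3-item subscale (demo data only).
demo = [
    [5, 4, 5],
    [4, 4, 4],
    [2, 3, 2],
    [5, 5, 4],
    [3, 3, 3],
]
print(round(cronbach_alpha(demo), 3))  # prints 0.928
```

McDonald’s ω additionally requires the factor loadings from the measurement model, which is why the two columns in Table 8 differ slightly despite measuring the same items.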