Article

Not Ready for AI? Exploring Teachers’ Negative Attitudes Toward Artificial Intelligence

by Laurențiu Gabriel Țîru 1,*, Vasile Gherheș 2, Ionela Stoicov 1 and Miroslav Stanici 2
1 Faculty of Sociology and Social Work, West University of Timisoara, 300223 Timisoara, Romania
2 Department of Communication and Foreign Languages, Politehnica University of Timisoara, 300006 Timisoara, Romania
* Author to whom correspondence should be addressed.
Societies 2025, 15(12), 337; https://doi.org/10.3390/soc15120337
Submission received: 30 October 2025 / Revised: 28 November 2025 / Accepted: 1 December 2025 / Published: 3 December 2025

Abstract

This study examines teachers’ negative attitudes toward artificial intelligence (AI) in education, focusing on the role of digital literacy, demographic characteristics, and direct AI experience. Using a quantitative cross-sectional design, data were collected from 1110 Romanian pre-university teachers through a self-administered online questionnaire. Exploratory factor analysis confirmed a two-dimensional structure of negative attitudes—Perceived AI Threat and Distrust in the Fairness and Ethics of AI—with internal reliability of α = 0.93 and α = 0.62, respectively. Results indicated significant gender differences, with women reporting higher levels of perceived threat, while distrust in AI fairness showed no significant variation across gender, age, or teaching degree. Teachers in urban areas expressed greater skepticism toward AI ethics than those in rural settings. Higher levels of digital literacy were negatively correlated with both dimensions of negative attitudes, suggesting that digital competence mitigates technological anxiety. Moreover, frequent personal and professional use of AI was associated with lower perceived threat levels, highlighting the role of experiential familiarity. These findings advance understanding of the psychosocial and cognitive factors shaping educators’ perceptions of AI and highlight the importance of AI literacy programs that integrate technical, ethical, and reflective components to foster informed and confident engagement with intelligent technologies.

1. Introduction

In contemporary education, the rapid transformation driven by the continuous advancement of digital technologies, and more recently by the expansion of artificial intelligence (AI) applications, has highlighted the urgent need to reconsider how schools responsibly integrate these innovations. The COVID-19 pandemic accelerated the digitalization of education, underscoring the essential role of digital skills for both teachers and students. As education systems return to in-person instruction, increasing attention has turned to how emerging technologies such as AI can be leveraged to personalize learning and to stimulate critical reflection on the ethical and social implications of automation [1,2].
In this context, the concept of AI literacy has become central. However, the scientific literature reveals a notable diversity of approaches and a lack of conceptual consensus. This diversity, which ranges from children’s early interactions with educational robots to advanced professional training in the use of complex AI tools [3,4,5], creates difficulties in integrating AI literacy into educational curricula and underlines the lack of validated tools for assessing related competencies. Unlike digital literacy, which primarily focuses on accessing and responsibly using online resources [6], AI literacy encompasses understanding algorithms, critically analyzing automated decisions, and reflecting ethically on their broader societal impact [7].
An essential aspect emphasized in recent research is the persistence of teachers’ resistance to integrating AI into educational practice. Empirical studies suggest that levels of digital literacy, trust in technology, institutional support, and ethical concerns directly influence this resistance [8,9]. Among the most frequently cited fears are the loss of pedagogical control, job insecurity, digital surveillance, and the depersonalization of the teaching process [10]. Such uncertainty and skepticism are often amplified by the absence of best-practice examples and by institutional pressure to adopt insufficiently understood technologies [11].
To address these challenges, recent studies have proposed educational initiatives and training frameworks designed to develop AI literacy among teachers and, indirectly, among students. Programs such as SAIL, based on collaborative and reflective activities [12], or critical co-discovery approaches [13], show that AI integration can be more easily accepted when it is accompanied by continuous professional development, ethical reflection, and institutional support. Furthermore, several studies emphasize the importance of flexible educational policies, explicit guidelines for responsible AI use, and an organizational culture grounded in trust and collaboration [14,15].
In light of these considerations, the present study seeks to examine the relationship between AI literacy, demographic factors, and teachers’ resistance to integrating AI technologies into education. By clarifying the mechanisms that shape the acceptance or rejection of emerging technologies, this analysis provides valuable insights for the development of educational policies and training programs tailored to teachers’ actual needs.
Considering these aspects, the present study aims to investigate negative attitudes toward AI in education and their associated factors, by addressing the following specific objectives:
O1: To determine how negative attitudes toward AI use vary according to teachers’ socio-demographic characteristics (e.g., age, gender, level of experience, and residence area).
O2: To investigate the relationship between the level of digital competencies and negative attitudes toward AI.
O3: To examine the associations between experiences of AI use (for personal and professional purposes) and the perceived levels of threat or distrust toward AI.
By addressing these objectives, the study contributes to a deeper understanding of the factors underlying teachers’ reluctance to adopt AI-based technologies and, consequently, to identify effective ways to overcome such barriers and facilitate the efficient integration of AI tools into contemporary education.

1.1. Conceptualizing AI Literacy in Education

The concept of AI literacy has emerged as a central focus in contemporary educational research, reflecting the growing need to prepare both teachers and students for the responsible and critical use of artificial intelligence (AI) technologies. Despite this increasing attention, the term remains conceptually fragmented, encompassing a broad spectrum of meanings and pedagogical practices. In their review of 124 studies published between 2020 and 2024, Gu and Ericson [2] observe that definitions of AI literacy range from children’s first encounters with programmable robots to advanced university courses on AI-based tools. Similarly, Wolters et al. [1] emphasize that most empirical research focuses on K–12 students, neglecting adult learners and teachers, and that validated instruments for assessing AI literacy remain scarce.
This persistent diversity reflects the rapid evolution of AI technologies, particularly generative AI, and fundamental disagreements about what educators need to know. To address this conceptual fragmentation and establish a global framework for AI competencies, UNESCO [16] developed the first international AI Competency Framework for Teachers, defining 15 competencies across five interconnected dimensions: human-centered mindset, ethics of AI, AI foundations and applications, AI pedagogy, and AI for professional learning. The framework categorizes these competencies into three progression levels (Acquire, Deepen, and Create) reflecting the complexity of developing AI literacy.
Recent research in higher education reveals that the concept of AI literacy is both multidimensional and still under development, encompassing generic, domain-specific, and ethical aspects. According to recent findings [17], self-efficacy and AI literacy are decisive predictors of educators’ willingness to adopt AI-based tools. The findings indicate that, while educators acknowledge the transformative potential of AI, including personalized learning, increased insight into student understanding, and administrative efficiency, numerous challenges remain, including limited infrastructure, lack of ethical guidance, insufficient technical training, and the need for targeted professional development.
El-dosuky [3] proposes an integrated framework for developing AI literacy in pre-university education, stressing that restricting AI education to programming skills may increase resistance to new technologies. He argues that genuine AI literacy must integrate technical, ethical, and collaborative dimensions, which can be effectively operationalized through frameworks such as TPACK.
This multidimensional approach positions AI literacy as both a cognitive and a moral competence, extending beyond procedural knowledge to include critical reflection on algorithmic decision-making and societal impact. A distinction frequently noted in the literature concerns the relationship between digital literacy and AI literacy. While digital literacy primarily involves the ability to access, evaluate, and create digital content responsibly [6], AI literacy introduces an additional layer of understanding related to algorithms, data ethics, and automated decision processes [7]. Ha et al. [18] further emphasize that AI literacy requires not only functional knowledge but also ethical literacy, enabling teachers and students to navigate issues of bias, transparency, and accountability inherent in AI-driven systems.

1.2. Attitudinal, Ethical, and Institutional Dimensions of Resistance

A significant body of research highlights teachers’ persistent resistance to integrating AI technologies into their professional practice. This resistance, often attributed to psychological, institutional, and ethical factors, remains a critical barrier to the educational use of AI. Du et al. [19] found that the absence of ethical literacy among educators intensifies reluctance, as teachers frequently feel unprepared to address algorithmic fairness, data privacy, or digital surveillance. Likewise, Holmes et al. [9] report that inadequate training and the lack of ethical guidance contribute to uncertainty and superficial AI implementations.
Recent evidence highlights that teachers’ resistance is driven not only by technical challenges but also by deep-seated professional concerns. Tariq [20] demonstrates that educators frequently express anxiety regarding the potential replacement of their roles, the erosion of pedagogical autonomy, and the risk of being relegated to mere facilitators in the learning process. Moreover, ethical apprehensions related to algorithmic transparency, data privacy, and digital surveillance exacerbate skepticism and distrust toward AI systems. These attitudinal and ethical concerns are further amplified by institutional limitations such as unclear policies, insufficient support structures, and scarce professional development opportunities.
Extrinsic barriers, including insufficient infrastructure, overcrowded classrooms, and limited material resources, also strongly influence resistance. Mehdaoui [21], in a study of EFL instructors in Algeria, demonstrates that while educators recognize the potential of AI, external difficulties such as technical challenges, lack of training, and insufficient resources substantially inhibit the adoption of emerging technologies in education.
Professional development guided by pedagogical frameworks such as TPACK must integrate both technical knowledge and practical applications to enhance teachers’ confidence in delivering AI-integrated courses [22]. Without such comprehensive preparation, teachers report anxiety and hesitation regarding AI adoption in their teaching practice. However, individual-level training is insufficient when institutional structures undermine AI integration.
From a structural perspective, Zawacki-Richter et al. [10] argue that AI adoption in education is frequently driven by administrative or economic imperatives rather than pedagogical reflection, which reinforces skepticism among teachers. The resulting managerial logic fosters concerns about transparency, surveillance, and the depersonalization of education. Howard and Mozejko [11] extend this argument by showing that teachers’ reluctance often stems not from technophobia but from institutional pressures to adopt poorly understood tools without adequate support. Such contexts create frustration and insecurity rather than empowerment.
Howard et al. [8] also demonstrate that openness to AI integration varies by age, experience, and school culture, with younger or more digitally experienced teachers generally more receptive. However, they note that trust in AI’s usefulness and transparency, rather than demographic variables alone, is the strongest predictor of acceptance. The lack of institutional support, insufficient policy clarity, and fears of professional devaluation further exacerbate teachers’ skepticism [23]. Collectively, the reviewed studies show that resistance to AI is a multifaceted phenomenon shaped by a dynamic interplay of psychological, ethical, and contextual determinants. Addressing such resistance requires multifaceted strategies that combine robust training programs with comprehensive governance frameworks.

1.3. Educational Initiatives and Frameworks for Developing AI Literacy

To address these challenges, recent research has proposed several educational frameworks designed to enhance AI literacy and reduce resistance among teachers. MacDowell et al. [12] introduced the SAIL (Student Artificial Intelligence Literacy) framework, which emphasizes experiential and collaborative activities that improve both technical competence and ethical awareness. Participation in such reflective programs has been shown to increase teachers’ confidence and readiness to integrate AI tools into their classrooms.
Research on effective professional development models further supports these multidimensional approaches. A recent scoping review of AI literacy in teacher education examined six programs designed to enhance teachers’ AI literacy and identified several critical success factors [24]. One effective approach, the “AI Book Club” model, combines independent reflection and learning activities with collaborative online discussions, allowing teachers to experiment, ask questions, and critically engage with AI concepts in a psychologically safe environment. Teachers consistently valued this spaced approach combined with collaborative reflection, which accommodated their different learning styles and contextual needs. Similarly, peer learning emerged as a crucial mechanism, with teachers appreciating the opportunity to engage in hands-on, exploratory AI teaching activities alongside colleagues.
Recent evidence demonstrates that targeted professional development programs can substantially increase teachers’ confidence and trust in AI educational technologies. A comprehensive initiative that combined theoretical and practical knowledge about AI, while emphasizing transparency, teacher agency, and the complementary role of technology, proved effective in enhancing teachers’ AI literacy and willingness to adopt AI tools [25]. This approach primarily reduced psychological barriers such as algorithm aversion and strengthened teachers’ ability to critically evaluate and override algorithmic recommendations.
Dilek et al. [13] propose a critical co-discovery approach to teacher training, combining collaborative projects, mentoring, and case-based discussions to foster deeper understanding of AI’s mechanisms and limitations. Tan et al. [15] similarly argue that effective integration of AI into education requires comprehensive professional development that includes not only technical instruction but also pedagogical and ethical reflection. Their findings point to a persistent gap between policy-level enthusiasm for AI and the practical realities of classroom implementation.
Additional evidence supports the importance of such multidimensional training programs. Moura and Carvalho [26] report that most teachers, once trained, view AI as an auxiliary tool rather than a threat, provided ethical standards and student guidance are clearly established. Gravino et al. [27] reinforce this perspective, identifying a discrepancy between macro-level policy goals and the everyday practices of teachers, suggesting that professional development and institutional support are crucial for translating AI literacy into sustainable pedagogical innovation.

1.4. Demographic and Digital Competence Factors

Recent empirical studies explore how teachers’ demographic characteristics and levels of digital literacy influence their attitudes toward AI. Çayak [28] found that lower digital literacy levels are significantly associated with stronger negative attitudes toward AI, primarily due to uncertainty and lack of confidence, whereas variables such as gender, experience, or school level exert less influence.
Complementing these findings, recent path analysis results show that demographic variables such as age, gender, and field of study influence attitudes toward AI in different ways [29]. The study revealed that age demonstrates a positive correlation with both AI knowledge and attitudes toward AI, suggesting that older educators may develop more nuanced perspectives through accumulated professional experience. Notably, gender and field of study showed no significant direct correlations with AI attitudes, implying that demographic characteristics alone do not predetermine receptiveness to AI. Instead, the analysis revealed that AI knowledge serves as a crucial mediating variable through which demographic factors influence attitudinal outcomes. This mediation effect suggests that targeted knowledge development, rather than addressing demographic characteristics, may be the most effective pathway to shifting attitudes.
Other empirical research has shown that the level and dimensions of artificial intelligence anxiety among pre-service teachers are significantly influenced by their field of study rather than by demographic factors such as age or gender. Falebita [30] found that, while the overall level of AI anxiety in university teacher education programs is moderate, specific forms of anxiety, such as technology intimidation, fear of job displacement, technological dependence, and ethical concerns, are pronounced in certain disciplines, particularly among those specializing in science and mathematics. Importantly, the study reported no statistically significant differences in AI anxiety based on gender, age, or year of study, highlighting the greater relevance of disciplinary background over general demographic variables.
Extending this line of inquiry, recent evidence indicates that both gender and age can moderate the relationships between perceived usefulness, perceived ease of use, and behavioral intention to adopt generative AI tools in higher education [31]. Although gender differences did not reach statistical significance, age emerged as a substantial moderating factor. Younger instructors perceived generative AI as more useful and easier to use, and their adoption decisions were shaped primarily by positive attitudes rather than by instrumental considerations. Conversely, older instructors’ adoption intentions were strongly influenced by perceptions of practical utility, indicating that age-responsive professional development strategies may be needed to ensure equitable AI integration across faculty groups.
These age-related patterns are further supported by findings from a study involving 306 teachers across seven schools, which showed that younger teachers are generally more likely to adopt AI technologies [32]. Surprisingly, personal innovativeness and openness to new experiences did not stimulate teachers to adopt AI for teaching. Instead, the study documented a statistically significant link between institutional policy and the use of AI by colleagues on the one hand, and AI adoption among schoolteachers on the other. This finding suggests that providing targeted knowledge development, rather than addressing demographic characteristics or relying on personality traits, may be the most effective pathway to shifting attitudes, particularly when coupled with institutional support and peer modeling.
Viberg et al. [33] confirm that teachers’ trust in AI depends largely on perceived benefits, self-efficacy with digital tools, and understanding of AI mechanisms, while cultural and contextual factors play a more decisive role than purely demographic ones.
This body of research suggests that the development of AI literacy cannot be separated from the broader framework of digital competence. Teachers’ ability to understand, evaluate, and ethically use AI systems depends not only on training availability but also on the cultural and institutional context that shapes their professional identities. Accordingly, resistance to AI integration emerges as a dynamic interplay between personal competencies, institutional conditions, and the perceived alignment of AI with educational values.
The reviewed literature reveals a growing recognition of AI literacy as a multidimensional construct that integrates technical, ethical, and critical thinking competencies. However, several significant gaps persist. First, there is still no unified conceptual or operational definition of AI literacy, which complicates curriculum design and empirical assessment. Second, while numerous studies have addressed digital literacy or teacher attitudes separately, few have systematically examined how digital competence and demographic factors jointly influence resistance to AI integration. Third, existing research often overlooks national and cultural contexts that mediate teachers’ perceptions and practices.
Considering these limitations, the present study aims to explore the relationship between AI literacy, socio-demographic variables, and teachers’ resistance to AI integration in education. By providing empirical evidence from a large and diverse sample of pre-university teachers, the study contributes to clarifying the mechanisms underlying acceptance or rejection of emerging technologies and offers insights for the development of informed educational policies and targeted professional training programs.

2. Materials and Methods

2.1. Research Design and Approach

The present study adopted a quantitative, cross-sectional design [34,35] to explore how pre-university teachers perceive the integration of artificial intelligence (AI) into educational practice. This approach was selected to obtain a comprehensive snapshot of an emerging phenomenon in the Romanian educational context, allowing the analysis of relationships between socio-demographic characteristics, digital literacy levels, and negative attitudes toward AI. The survey method was employed [36,37], using an online self-administered questionnaire distributed to a diverse sample of teachers. The design enabled the identification of patterns and associations among variables related to digital competence, AI literacy, and teacher resistance.

2.2. Participants and Sampling Procedure

The study sample consisted of 1110 pre-university teachers, including both primary and secondary educators. Participants were recruited through a combination of formal channels (county school inspectorates) and informal professional networks (online teacher communities and social media groups). A snowball sampling strategy [38] was applied, encouraging participants to share the questionnaire with colleagues from their institutions. Participation was voluntary, anonymous, and uncompensated, consistent with ethical research principles [36,39,40]. Prior to completion, respondents were informed about the study’s purpose, the confidentiality of their data, and the exclusive academic use of results.
Responses were collected from 36 Romanian counties and Bucharest, ensuring broad geographical coverage despite the convenience nature of the sample [41,42]. The largest proportions of respondents were from Timiș (14.9%), Ilfov (11%), Teleorman (9.5%), Botoșani (7.8%), and Bihor (7.5%), totaling 562 teachers (50.7% of the sample). Regarding the area of residence, 69.6% of participants were from urban areas and 30.4% from rural settings, providing a relatively balanced distribution. The gender distribution reflected the feminization trend typical of Romanian pre-university education, with 81.5% female respondents, consistent with national statistics indicating that women represent over 70% of teachers in this sector [43].

2.3. Instruments and Measures

The instrument used to measure general attitudes toward artificial intelligence was an adapted version of the General Attitudes toward Artificial Intelligence Scale (GAAIS), developed by Schepman and Rodway [44] and subsequently confirmed through additional validation analyses [45]. In previous validations, the GAAIS demonstrated a stable two-factor structure, comprising two distinct dimensions, positive attitudes and negative attitudes, and showed high levels of internal consistency, being recognized as a robust tool for assessing perceptions of AI.
For the present study, the instrument was translated and culturally adapted to the Romanian context following established guidelines for the cross-cultural adaptation of psychometric tools [46,47]. The authors first independently translated the General Attitudes toward Artificial Intelligence Scale (GAAIS) from English into Romanian, following specific recommendations [48,49]. The resulting translations were compared and consolidated into a consensus version. Subsequently, a certified English teacher with expertise in academic translation in the social sciences performed a back-translation into English. The back-translated version was then compared with the original items, and minor adjustments were introduced to ensure semantic equivalence and conceptual coherence. This multi-step procedure ensured both linguistic accuracy and cultural appropriateness of the Romanian version of the GAAIS.
Certain items were removed during adaptation to improve face validity, ensuring that all items were comprehensible and culturally appropriate for Romanian teachers. This process aligns with recommendations emphasizing the role of face validity in maintaining clarity and relevance of items for the target population [50,51].
The instrument used to assess teachers’ levels of digital competence was adapted from the tool developed by Rodríguez-de-Dios et al. [52], originally designed to measure the degree of digitalization among university students. The original scale is structured around six dimensions: technological skills, personal security skills, critical thinking skills, security skills, informational skills, and communication skills.
For the purposes of the present study, only three of these dimensions were retained—technological skills, informational skills, and personal security skills—while preserving the original five-point Likert scale format, ranging from 1 (“strongly disagree”) to 5 (“strongly agree”), indicating respondents’ level of agreement with each statement.
Technological competence refers to the ability to effectively use digital technologies, operate various types of software and hardware, and adapt to technological innovations. Informational competence concerns the management of large volumes of digital information and involves the ability to search, select, analyze, evaluate, and synthesize relevant data. Personal security competence entails the responsible and safe use of digital environments, protection of personal data, and management of online reputation, thereby preventing exposure to cybersecurity risks and threats [53]. In line with the procedure applied for the GAAIS, this instrument was also translated and culturally adapted using the same multi-step approach to ensure linguistic accuracy and conceptual equivalence within the Romanian context.
To assess participants’ direct experiences with AI-based tools, two distinct self-assessment variables were included. The first variable, “AI use for personal purposes,” measured the frequency of using AI applications in everyday activities such as tourism, culinary recommendations, general knowledge, or entertainment. The second variable, “AI use for professional purposes,” captured the extent to which participants employed AI in educational or occupational contexts, including procedure development, lesson planning, complementary activities, or accessing information relevant to teaching practice.
Both variables were evaluated using a five-point Likert scale (1 = “not at all,” 5 = “to a very great extent”). The resulting scores were treated as indicators of participants’ practical familiarity with AI and were subsequently used in the correlational analyses corresponding to Objective 3 of the study.

2.4. Data Collection Procedure

The data were collected through an online survey platform over a four-week period. Invitations to participate were disseminated via email and online teacher communities. Before responding, participants viewed an information sheet describing the purpose, voluntary nature, and confidentiality of the research. The estimated completion time for the questionnaire was approximately 10–12 min. No identifying personal data were collected.

2.5. Ethical Considerations

The study adhered to international ethical standards for social science research [39]. All procedures complied with the principles of informed consent, anonymity, and voluntary participation. Participants could withdraw at any time without penalty. The study protocol was reviewed to ensure compliance with institutional research ethics requirements. No data was collected that could directly identify participants, and all responses were stored securely in encrypted digital format accessible only to the research team.

3. Results

To verify the factorial structure of the scale measuring negative attitudes toward artificial intelligence, an exploratory factor analysis (EFA) was conducted (Table 1, Appendix A). The objective of this analysis was to test the unidimensional structure of the negative attitude measurement scale. In accordance with Field’s recommendations [54], the suitability of the current study’s data set for factor analysis was confirmed by the Kaiser-Meyer-Olkin index (KMO = 0.906), which indicates an excellent level of sampling adequacy, and by Bartlett’s test of sphericity (χ2(28) = 5486.77, p < 0.001), suggesting that the correlation matrix is suitable for factor reduction. Given the hypothesis of correlations existing between the investigated dimensions, principal component analysis was performed with Oblimin rotation, following Pallant’s recommendations [55]. As a result of principal component analysis, based on Kaiser’s criterion (eigenvalues > 1) and the conceptual coherence of associated items, two factors were extracted [56,57]. These together explain 72.23% of the total variance in the responses.
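As a rough illustration of this pipeline, the sketch below reproduces the adequacy checks and the principal component extraction with Oblimin rotation in Python using the factor_analyzer package. The file name and item columns are hypothetical stand-ins for the questionnaire data; the original analysis was run in SPSS 23, so this is a reconstruction under stated assumptions, not the authors' code.

```python
# A minimal sketch of the EFA reported above, assuming `items` is a pandas
# DataFrame with one column per negative-attitude item (hypothetical file and
# column names; the original analysis was performed in SPSS 23).
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

items = pd.read_csv("negative_attitude_items.csv")  # hypothetical data file

# Sampling adequacy and sphericity (paper: KMO = 0.906; chi2(28) = 5486.77, p < 0.001).
chi2, p = calculate_bartlett_sphericity(items)
_, kmo_total = calculate_kmo(items)
print(f"KMO = {kmo_total:.3f}; Bartlett chi2 = {chi2:.2f}, p = {p:.3g}")

# Principal component extraction with Oblimin rotation; two factors retained
# under Kaiser's criterion (eigenvalues > 1).
fa = FactorAnalyzer(n_factors=2, rotation="oblimin", method="principal")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns, columns=["F1", "F2"])
print(loadings.round(3))
print("Cumulative variance explained:", fa.get_factor_variance()[2].round(3))
```

Applying the retention rule from Table A1 (primary loading |λ| ≥ 0.40 with a separation of Δλ ≥ 0.20) to the printed pattern matrix would flag the cross-loading "spy" item for exclusion, mirroring the decision described below.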
The first factor (Factor 1) consists of items that address concerns about the impact of artificial intelligence on both the individual and society, reflecting an emotional and existential dimension of the AI threat (Table 2). The second factor (Factor 2) includes items highlighting perceptions related to ethical aspects, reliability, and the implementation of artificial intelligence, suggesting a dimension centered on organizational misuse and technical errors of AI systems (Table 3). Each item displayed a significant loading on only one factor, except for the item “Artificial intelligence is used to spy on people,” which was excluded from the final analysis because of its cross-loadings (λ = 0.48 on Factor 1 and λ = 0.34 on Factor 2). The correlation between the two factors was moderate (r = 0.474), confirming the conceptually complementary yet distinct nature of the dimensions.
To evaluate the internal consistency of the “Perceived AI Threat” dimension (Factor 1), which is composed of five items, Cronbach’s alpha coefficient was calculated, resulting in a value of α = 0.926, indicating very good internal reliability. This value suggests that the selected items measure a common construct and that the scale is suitable for use in further analyses as a composite score. Inter-item correlations ranged between r = 0.623 and r = 0.795, reflecting moderate to high relationships among items, with no indications of excessive redundancy. This range suggests that each item contributes distinctly, yet convergently, to the assessment of a common trait: the fear regarding the potentially threatening impact of artificial intelligence on the individual and society.
Similarly, to evaluate the internal consistency of the dimension called “Distrust in the fairness and ethics of AI” (Factor 2), consisting of two items (“Organizations use artificial intelligence unethically.” and “I believe artificial intelligence systems make many errors.”), Cronbach’s alpha coefficient was calculated, with a value of α = 0.624. Although this value is moderate, it is considered acceptable in the context of very short scales, where alpha is sensitive to the number of included items [58]. To further assess internal reliability, the inter-item correlation was also computed, which was r = 0.454, indicating a moderate positive relationship between the two items. Nevertheless, this dimension should be regarded as tentative, and its refinement or reconceptualization warrants careful consideration in future research.
Based on the results of the exploratory factor analysis and the Cronbach’s alpha coefficient values, the computation of composite scores for each of the two identified dimensions was deemed justified. Thus, for the factor Perceived AI Threat (α = 0.926), a composite score was calculated to reflect the respondents’ general level of fear, while for the factor Distrust in AI Fairness and Ethics (α = 0.624), despite the moderate internal reliability—explainable by the small number of items [59,60]—a composite score was also computed, considering the conceptual coherence and the inter-item correlation (r = 0.454). These composite scores will be used in subsequent statistical analyses.
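The reliability and composite-score step can be expressed compactly. The helper below implements the standard Cronbach's alpha formula and mean-based composite scores; it continues the hypothetical `items` DataFrame from the previous sketch, and the item column names are invented for illustration.

```python
# Cronbach's alpha and composite scores for the two retained factors,
# assuming `items` from the previous sketch; column names are hypothetical.
import pandas as pd

def cronbach_alpha(df: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = df.shape[1]
    total_var = df.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - df.var(ddof=1).sum() / total_var)

threat = items[["threat_1", "threat_2", "threat_3", "threat_4", "threat_5"]]
distrust = items[["distrust_1", "distrust_2"]]

print(f"alpha(threat) = {cronbach_alpha(threat):.3f}")      # paper reports 0.926
print(f"alpha(distrust) = {cronbach_alpha(distrust):.3f}")  # paper reports 0.624

# Composite scores as the per-respondent mean of each factor's items,
# preserving the 1-5 response metric used in the descriptive results.
items["perceived_threat"] = threat.mean(axis=1)
items["distrust_ethics"] = distrust.mean(axis=1)
```

Mean composites are assumed here because the reported dimension means (around 3.35) sit on the original 1–5 response scale.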
After confirming the structural validity and internal consistency of the scale, subsequent analyses focused on how the participants’ socio-demographic characteristics are associated with negative perceptions toward artificial intelligence. These analyses examined the differences and statistical relationships corresponding to the first objective of the research.
The results of the descriptive analysis indicate that the mean scores are similar for the two dimensions. The Perceived AI Threat dimension recorded a mean of M = 3.35, SD = 1.15, while Distrust in AI Fairness and Ethics had a mean of M = 3.35, SD = 0.96. These values reflect a moderate level of negative perceptions toward artificial intelligence, with participants, on average, positioning themselves around the midpoint of the 1–5 response scale.
The distributions for both dimensions approximate normality, with low and negative skewness coefficients (−0.28 for Perceived AI Threat and −0.02 for Distrust in AI Fairness and Ethics), suggesting a slight tendency for responses to cluster toward the higher end of the scale. The kurtosis coefficients were −0.89 and −0.45, respectively, indicating slightly platykurtic distributions (flatter than a normal distribution), yet remaining within acceptable limits for the assumption of normality [57].
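Under the same hypothetical setup, these descriptives reduce to a few lines. Note that scipy's skew and kurtosis functions default to the Fisher (excess) definition, which matches the negative kurtosis values reported above.

```python
# Descriptive statistics for the two composite scores (hypothetical columns
# created in the previous sketch).
from scipy import stats

for col in ["perceived_threat", "distrust_ethics"]:
    s = items[col]
    print(f"{col}: M = {s.mean():.2f}, SD = {s.std(ddof=1):.2f}, "
          f"skew = {stats.skew(s):.2f}, excess kurtosis = {stats.kurtosis(s):.2f}")
```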
The Mann–Whitney test results showed significant gender differences for the Perceived AI Threat dimension, Z = −2.869, p = 0.004 (Table 4). Women had a higher mean rank (Mean Rank = 568.61) compared to men (Mean Rank = 497.61), indicating that women perceive artificial intelligence as more threatening than men. For the Distrust in Fairness and Ethics of AI dimension, the gender difference was not statistically significant (Z = −1.946, p = 0.052), although the general trend suggests a slightly higher level of distrust among men (Mean Rank = 594.33) compared to women (Mean Rank = 546.70). The magnitude of the gender effect for Perceived AI Threat was small (r = 0.09).
The Mann–Whitney analysis did not indicate significant differences in perceived AI threat between respondents from urban (Mean Rank = 553.13) and rural (Mean Rank = 560.94) environments, Z = −0.374, p = 0.709 (Table 4). However, significant differences were observed for Distrust in Fairness and Ethics of AI, Z = −3.225, p = 0.001. Urban educators had a higher mean rank (Mean Rank = 575.72) than their rural counterparts (Mean Rank = 509.12), suggesting stronger distrust in the accuracy and ethics of AI systems in urban settings. The magnitude of this urban–rural difference on the Distrust in Fairness and Ethics of AI dimension was small (r = 0.10).
The Kruskal–Wallis test results showed no significant differences among teaching degree categories, neither for Perceived AI Threat, H(3) = 2.454, p = 0.484, nor for Distrust in Fairness and Ethics of AI, H(3) = 2.284, p = 0.516 (Table 4). Although mean rank values indicate minor variations (between 518.61 and 579.38 for Factor 1, and between 517.47 and 562.30 for Factor 2), these differences are not statistically consistent enough to support teaching degree as influencing negative perceptions of AI.
Spearman correlations did not reveal significant relationships between participants’ age and the two analyzed dimensions, Perceived AI Threat (ρ = −0.014, p = 0.651) and Distrust in Fairness and Ethics of AI (ρ = 0.008, p = 0.794).
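A sketch of these group comparisons follows, assuming the same DataFrame also carries socio-demographic columns (`gender`, `residence`, `degree`, `age`, all hypothetical names). SPSS reports the Mann–Whitney Z directly; here it is recovered from U via the normal approximation (ignoring tie corrections), and the effect size follows the r = |Z|/√N convention used in the text.

```python
# Nonparametric group comparisons on the composite scores; demographic
# column names are hypothetical.
import numpy as np
from scipy import stats

women = items.loc[items["gender"] == "female", "perceived_threat"]
men = items.loc[items["gender"] == "male", "perceived_threat"]

u, p = stats.mannwhitneyu(women, men, alternative="two-sided")
n1, n2 = len(women), len(men)
z = (u - n1 * n2 / 2) / np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)  # normal approximation
r = abs(z) / np.sqrt(n1 + n2)  # paper reports r = 0.09 for the gender effect
print(f"U = {u:.0f}, Z = {z:.3f}, p = {p:.3f}, r = {r:.2f}")

# Kruskal-Wallis test across the four teaching-degree categories (df = 3).
degree_groups = [g["perceived_threat"].to_numpy() for _, g in items.groupby("degree")]
h, p_kw = stats.kruskal(*degree_groups)
print(f"H = {h:.3f}, p = {p_kw:.3f}")

# Spearman correlation between age and an attitude dimension.
rho, p_age = stats.spearmanr(items["age"], items["perceived_threat"])
print(f"rho = {rho:.3f}, p = {p_age:.3f}")
```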
Next, analyses focused on the relationship between the level of digital competencies and negative perceptions of artificial intelligence, aiming to identify possible associations between the two dimensions of negative attitudes (Perceived AI Threat and Distrust in Fairness and Ethics of AI) and components of digital literacy. Given previously identified gender differences in negative AI perceptions, correlation analyses were conducted separately for male (Table 5) and female (Table 6) teachers to highlight potential group differences.
For the male participant group (N = 205), Spearman correlation analyses indicated significant negative associations between the dimensions of negative attitudes toward AI and the level of digital competencies (Table 5). Thus, Perceived AI Threat (F1) demonstrated significant negative correlations with all three dimensions of digital competence: Information literacy (ρ = −0.287, p < 0.001), Web navigation literacy (ρ = −0.207, p = 0.003), and Data & security literacy (ρ = −0.140, p = 0.045). Similarly, Distrust in Fairness and Ethics of AI (F2) was negatively correlated with Information literacy (ρ = −0.172, p = 0.013) and Web navigation literacy (ρ = −0.144, p = 0.039), while no significant relationship was found with Data & security literacy (ρ = −0.003, p = 0.968).
For the female participant group (N = 905), Spearman correlation analyses revealed statistically significant associations between dimensions of negative attitudes toward AI and the level of digital competencies (Table 6). The Perceived AI Threat dimension (F1) was significantly negatively correlated with Information literacy (ρ = −0.250, p < 0.001), Web navigation literacy (ρ = −0.096, p = 0.004), and Data & security literacy (ρ = −0.114, p = 0.001). For the Distrust in Fairness and Ethics of AI dimension (F2), significant negative correlations were found with Information literacy (ρ = −0.074, p = 0.027), but positive correlations emerged with Web navigation literacy (ρ = 0.133, p < 0.001) and Data & security literacy (ρ = 0.125, p < 0.001).
In comparison, the pattern of associations between the negative attitude dimensions and digital competencies was broadly similar for both groups, although the strength of the relationships varied. In general, the negative correlations between Perceived AI Threat and the dimensions of digital competencies were more pronounced among men than women, while for Distrust in Fairness and Ethics of AI, positive correlations appeared in the female group. These results suggest nuanced gender differences in how negative perceptions of AI are associated with the level of digital competencies, which are discussed further in the Discussion section. A sketch of this gender-split analysis is given below.
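The loop below computes Spearman coefficients for each attitude dimension against each digital-competence score per gender group (competence column names are hypothetical); the identical pattern serves the AI-use correlations reported next under Objective 3.

```python
# Gender-split Spearman correlations between the attitude dimensions and the
# digital-competence scores (hypothetical column names).
from scipy import stats

attitudes = ["perceived_threat", "distrust_ethics"]
competencies = ["information_lit", "web_navigation_lit", "data_security_lit"]

for gender, group in items.groupby("gender"):
    print(f"--- {gender} (N = {len(group)}) ---")
    for a in attitudes:
        for c in competencies:
            rho, p = stats.spearmanr(group[a], group[c])
            print(f"{a} vs {c}: rho = {rho:.3f}, p = {p:.3f}")
```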
After exploring the relationships between digital competencies and negative perceptions of artificial intelligence, the analysis was extended to examine how direct experiences of using AI, both for personal and professional purposes, are related to dimensions of negative attitudes. This step aims to clarify whether practical familiarity with AI-based technologies is associated with reduced perceptions of threat or distrust.
Spearman correlation analyses indicated a significant negative association between the use of artificial intelligence in personal interest areas (such as tourism, culinary recommendations, general culture) and Perceived AI Threat (ρ = −0.286, p < 0.001), while the relationship with Distrust in Fairness and Ethics of AI was not significant (ρ = −0.060, p = 0.200).
Regarding the use of artificial intelligence for professional purposes (such as work procedures, teaching tasks, complementary activities, or accessing useful information), Spearman correlations also revealed significant negative relationships with both dimensions of negative attitudes toward AI. Perceived AI Threat was moderately negatively correlated (ρ = −0.264, p < 0.001), while Distrust in Fairness and Ethics of AI was also negatively correlated, though to a lesser extent (ρ = −0.107, p = 0.022).

4. Discussion

The present research aimed to investigate negative attitudes toward artificial intelligence (AI) among teachers, focusing on two main dimensions, Perceived AI Threat and Distrust in the Fairness and Ethics of AI, and their relationships with socio-demographic variables, digital competency level, and AI usage experience. The results provide a nuanced picture of how teachers perceive the impact of AI-based technologies and the individual factors shaping these perceptions.

4.1. Differences in Negative Perceptions of AI

The results indicated significant gender differences in the perception of the threat posed by AI, with women reporting higher levels of technological anxiety compared to men. This result converges with previous literature showing that women often manifest a more cautious attitude and heightened sensitivity to ethical and social risks of AI [61,62]. In studies of technological anxiety, gender differences are often explained by sociocultural and experiential factors—women tend to perceive threats related to loss of control or impact on human values more acutely, whereas men may demonstrate greater confidence in managing emerging technologies [33,61,63].
Conversely, distrust in the fairness and ethics of AI did not differ significantly between genders, although the general trend showed slightly higher skepticism among men. This finding may relate to research suggesting that perceptions of algorithm accuracy and organizational responsibility depend more on professional experience and exposure to practical applications of AI than on socio-demographic factors [33,63,64].
Differences observed by place of residence revealed higher levels of distrust in urban areas, which may reflect more frequent exposure to critical discourse about automation, data ethics, and algorithmic bias [65]. Also, the lack of differences by age and teaching degree suggests that negative perceptions of AI are distributed relatively uniformly among teacher generations, confirming observations by other authors who note that, in educational contexts, professional training level does not directly determine attitudes toward technology [66,67].

4.2. Relationships Between Digital Competencies and Negative Perceptions of AI

The results showed negative correlations between the level of digital competencies and negative perceptions of AI. The higher teachers’ digital literacy, especially in information literacy and web navigation literacy, the less likely they are to perceive AI as threatening. This result is consistent with the literature on AI literacy and digital competence, which emphasizes that familiarity with and understanding of AI mechanisms contributes to reduced technological anxiety and increased confidence in its educational applicability [7,68,69]. For example, Galindo-Domínguez et al. [69] highlighted a significant association between teachers’ digital competence level and their attitudes toward integrating AI into the educational process. Similarly, Dhahir et al. [68] underline the importance of digital literacy as a key factor in recognizing and evaluating content manipulated by AI-based technologies.
Meanwhile, subtle gender differences were identified: for men, the negative relationships were stronger and consistent across all components of digital competence and both dimensions of negative attitudes, while for women a mixed pattern emerged, including some positive correlations with the Distrust in Fairness and Ethics of AI dimension. International results confirm a similar pattern, showing that women tend to be more critical in evaluating the ethics and social impact of technology, suggesting the need for gender-sensitive educational practices [70,71].
The findings converge with results reported by Kasinidou et al. [66] according to which teachers’ attitudes toward AI are strongly mediated by the general level of digital skills and perceptions of social responsibility for using technology. Recently published reviews of AI literacy also emphasize the need to integrate the ethical and critical dimension alongside technical competencies [71,72].
Overall, these results confirm the hypothesis of an inverse relationship between digital competence and negative perceptions of AI, supporting the idea that developing AI literacy, defined by Long and Magerko [7] as the set of cognitive, ethical, and critical skills needed to understand and use AI, can be an effective strategy for reducing fears about automation and strengthening teachers’ professional autonomy [7,68,69,71].
However, differences among domains of use are relevant. Personal experience, for example, using artificial intelligence in everyday activities, seems to reduce the perception of threat to a greater extent, while professional use is associated with lower levels of both dimensions of negative attitudes, though more weakly for distrust. These findings are supported by recent research showing that personal interactions with AI systems have a much stronger influence on perceived risk levels compared to professional or institutional use [73,74]. Furthermore, the study by Ackerhans et al. [73] emphasizes that controlled experience with AI in professional settings increases trust in intelligent tools while maintaining a high level of ethical vigilance, especially in education and healthcare, where perceptions of professional identity and responsibility remain sensitive to technological changes.
The global report by KPMG and the University of Melbourne [75] highlights that the level of trust in artificial intelligence is mainly influenced by users’ familiarity with technology and direct interaction experiences, as well as by perceived practical benefits. The report also notes that traditional demographic factors such as age or gender tend to have a lesser influence on trust compared to usage context and frequency of interactions with AI systems.
Overall, the results suggest that negative attitudes toward AI among teachers are not determined solely by fear or skepticism, but rather by digital literacy level, concrete experiences with AI, and the educational and cultural context into which it is introduced. The present study adds to the existing literature by highlighting a two-dimensional relationship in attitudes toward AI, emotional and ethical-cognitive, and by demonstrating that these dimensions respond differently to contextual variables.

5. Conclusions

The present study provides an overview of teachers’ negative perceptions of artificial intelligence, highlighting the influence of demographic factors, digital competencies, and direct user experience. The statistical analyses confirmed the two-dimensional structure of negative attitudes, perceived threat and ethical and technological distrust, dimensions that capture both emotional reactions and cognitive judgments regarding the integration of AI in educational settings. As one of the first large-scale explorations of teachers’ attitudes toward AI in the Romanian educational system, this study offers an initial empirical foundation for understanding how risk perceptions and ethical concerns emerge in this context.
Gender differences reveal a familiar pattern observed in research on emerging technologies: female teachers tend to perceive higher risks, while the level of distrust remains relatively constant across genders. Conversely, higher levels of digital competence appear to mitigate these perceptions, suggesting that digital and AI literacy may function as resilience factors against potential technological anxiety.
Familiarity with AI use, whether for personal or professional purposes, is associated with more positive and open attitudes. Direct exposure seems to reduce perceptions of risk and strengthen the sense of control over technology. This relationship supports the assumption that practical experience facilitates the process of AI acceptance more broadly.
From a theoretical standpoint, the study contributes to understanding the psychosocial mechanisms that influence technological acceptance among teachers. Empirical evidence can support the refinement of existing explanatory models and provide a basis for interdisciplinary approaches to the relationship between technological innovation and the teaching profession.
From an applied perspective, the findings highlight the need to develop AI literacy training programs that integrate technical, ethical, and pedagogical dimensions of AI use. Such an approach can strengthen teachers’ trust in AI-based tools and foster a professional culture open to innovation and digital transformation. Taken together, these conclusions also point toward several promising directions for future research. Building on these exploratory findings, future research could test several hypotheses regarding teachers’ perceptions of AI. For example: (H1) digital competence may negatively predict perceived AI threat; (H2) prior exposure to AI tools may reduce perceived risk; (H3) ethical concerns and distrust may be shaped by contextual rather than demographic factors (e.g., school-level technological infrastructure, role-related responsibilities, institutional norms regarding data use). These hypotheses may be examined in subsequent studies using confirmatory, longitudinal, or multi-level designs.

6. Limitations of the Study

The study employed a quantitative cross-sectional design, suitable for describing teachers’ perceptions of artificial intelligence, yet limited in its ability to capture their subjective, motivational, or contextual dimensions; a complementary qualitative approach could provide a deeper understanding of these aspects [34,76]. Furthermore, the exploratory and non-representative nature of the sample, obtained through voluntary participation, constrains the generalizability of the findings to the wider population of Romanian teachers [41,77]. Although the sampling strategy does not allow for probabilistic inference, the study achieved broad national coverage, including participants from 36 of Romania’s 41 counties as well as Bucharest, representing both urban and rural schools. This territorial spread enhances the descriptive value of the dataset and reflects the diversity of pre-university teaching contexts in Romania. Nevertheless, the findings cannot be regarded as nationally representative, and generalizations should be limited to the participating teachers. Given the absence of large-scale national studies on the acceptance of artificial intelligence in education, the present research may be regarded as an initial exploratory effort; future studies employing stratified or probability-based sampling are required to obtain nationally representative estimates and to ensure the external validity of the conclusions. As emphasized by Findley, Kikuta, and Denly [78], sample representativeness is a fundamental condition for the generalization of results to the target population, thereby strengthening the robustness and scientific relevance of the research.

Author Contributions

Conceptualization, L.G.Ț., V.G., I.S. and M.S.; Methodology, L.G.Ț., V.G., I.S. and M.S.; Software, L.G.Ț. and I.S.; Validation, L.G.Ț. and I.S.; Formal analysis, L.G.Ț., V.G. and I.S.; Investigation, L.G.Ț. and I.S.; Resources, L.G.Ț., V.G., I.S. and M.S.; Data curation, L.G.Ț., V.G. and I.S.; Writing—original draft, L.G.Ț., V.G., I.S. and M.S.; Writing—review & editing, L.G.Ț. and V.G.; Supervision, L.G.Ț. and V.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding. The APC was partially supported by the West University of Timișoara through the CNFIS-FDI-2025-F-0426 project.

Institutional Review Board Statement

This study received ethical approval from the Scientific Council of University Research and Creation of the West University of Timisoara (Notice no. 55080/25.08.2025). The research involved an anonymous, minimal-risk online survey.

Informed Consent Statement

Participants were informed about the study purpose, voluntary participation, anonymity, and data protection on the introductory page of the online questionnaire. Consent was implied by proceeding with the completion of the survey.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Exploratory Factor Analysis Results

Table A1. Exploratory Factor Analysis of the Adapted AI Attitudes Scale: Factor Loadings and Communalities.

| Item | Factor 1: Perceived AI Threat | Factor 2: Distrust in the Fairness and Ethics of AI | Communality (h²) |
|---|---|---|---|
| AI seems threatening to me. | **0.830** | 0.061 | 0.741 |
| AI could take control over humans. | **0.913** | −0.107 | 0.752 |
| I believe AI is dangerous. | **0.909** | 0.016 | 0.841 |
| I think with fear about future uses of AI. | **0.906** | −0.020 | 0.805 |
| People like me will suffer if AI is increasingly used. | **0.838** | 0.011 | 0.711 |
| Organizations use artificial intelligence unethically. | −0.096 | **0.928** | 0.785 |
| I believe artificial intelligence systems make many errors. | 0.139 | **0.724** | 0.639 |
| AI is used to spy on people. | 0.481 | 0.342 | 0.504 |

Note. Factor loadings are taken from the pattern matrix (SPSS 23). Bold values indicate primary loadings meeting the retention rule of |λ| ≥ 0.40 with a minimum difference of Δλ ≥ 0.20. One item (“AI is used to spy on people”) displayed cross-loadings and was excluded from the final factor structure. Communalities reflect extraction values. The two retained factors were moderately correlated (r = 0.474).

References

  1. Wolters, A.; Arz von Straussenburg, A.F.; Riehle, D.M. AI Literacy in Adult Education—A Literature Review. In Proceedings of the 57th Hawaii International Conference on System Sciences (HICSS 2024), Waikiki Beach, HI, USA, 3–6 January 2024; Bui, T.X., Ed.; University of Hawaiʻi at Mānoa: Honolulu, HI, USA, 2024; pp. 6888–6897. [Google Scholar]
  2. Gu, X.; Ericson, B.J. AI Literacy in K-12 and Higher Education in the Wake of Generative AI: An Integrative Review. In Proceedings of the 2025 ACM Conference on International Computing Education Research V.1, Charlottesville, VA, USA, 3–6 August 2025; pp. 125–140. [Google Scholar]
  3. El-dosuky, M. Artificial Intelligence Literacy Framework for K-12 Students. SSRN 2024. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4947507 (accessed on 27 November 2025).
  4. Gherheș, V.; Fărcașiu, M.A.; Cernicova-Buca, M.; Coman, C. AI vs. Human-Authored Headlines: Evaluating the Effectiveness, Trust, and Linguistic Features of ChatGPT-Generated Clickbait and Informative Headlines in Digital News. Information 2025, 16, 150. [Google Scholar] [CrossRef]
  5. Gherheș, V. Artificial Intelligence: Perception, Expectations, Hopes and Benefits. Int. J. User-Syst. Interact. 2018, 11, 219–230. [Google Scholar]
  6. Ng, W. Can We Teach Digital Natives Digital Literacy? Comput. Educ. 2012, 59, 1065–1078. [Google Scholar] [CrossRef]
  7. Long, D.; Magerko, B. What Is AI Literacy? Competencies and Design Considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–16. [Google Scholar]
  8. Howard, S.K.; Tondeur, J.; Siddiq, F.; Scherer, R. Ready, Set, Go! Profiling Teachers’ Readiness for Online Teaching in Secondary Education. Technol. Pedagog. Educ. 2021, 30, 141–158. [Google Scholar] [CrossRef]
  9. Holmes, W. Artificial Intelligence in Education: Promises and Implications for Teaching and Learning; Center for Curriculum Redesign: Boston, MA, USA, 2019. [Google Scholar]
  10. Zawacki-Richter, O.; Marín, V.I.; Bond, M.; Gouverneur, F. Systematic Review of Research on Artificial Intelligence Applications in Higher Education–Where Are the Educators? Int. J. Educ. Technol. High. Educ. 2019, 16, 1–27. [Google Scholar] [CrossRef]
  11. Howard, S.K.; Mozejko, A. Teachers: Technology, Change and Resistance. In Teaching and Digital Technologies: Big Issues and Critical Questions; Henderson, M., Romeo, G., Eds.; Cambridge University Press: Cambridge, UK, 2015; pp. 307–317. ISBN 978-1-316-09196-8. [Google Scholar]
  12. MacDowell, P.; Moskalyk, K.; Korchinski, K.; Morrison, D. Preparing Educators to Teach and Create with Generative Artificial Intelligence. Can. J. Learn. Technol. 2024, 50, 1–23. [Google Scholar] [CrossRef]
  13. Dilek, M.; Baran, E.; Aleman, E. AI Literacy in Teacher Education: Empowering Educators through Critical Co-Discovery. J. Teach. Educ. 2025, 76, 294–311. [Google Scholar] [CrossRef]
  14. Brown, Y. Overcoming Teacher Resistance to Technology and Artificial Intelligence in the Classroom. LinkedIn, 7 February 2024. Available online: https://www.linkedin.com/pulse/overcoming-teacher-resistance-technology-artificial-classroom-brown-yupse/ (accessed on 27 November 2025).
  15. Tan, X.; Cheng, G.; Ling, M.H. Artificial Intelligence in Teaching and Teacher Professional Development: A Systematic Review. Comput. Educ. Artif. Intell. 2024, 8, 100355. [Google Scholar] [CrossRef]
  16. AI Competency Framework for Teachers—UNESCO Digital Library. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000391104 (accessed on 27 November 2025).
  17. Mah, D.-K.; Groß, N. Artificial Intelligence in Higher Education: Exploring Faculty Use, Self-Efficacy, Distinct Profiles, and Professional Development Needs. Int. J. Educ. Technol. High. Educ. 2024, 21, 58. [Google Scholar] [CrossRef]
  18. Ha, S.; Kim, S. Developing a Digital Platform Literacy Framework; International Telecommunications Society (ITS): Calgary, AB, Canada, 2023. [Google Scholar]
  19. Du, H.; Sun, Y.; Jiang, H.; Islam, A.; Gu, X. Exploring the Effects of AI Literacy in Teacher Learning: An Empirical Study. Humanit. Soc. Sci. Commun. 2024, 11, 559. [Google Scholar] [CrossRef]
  20. Tariq, U. Challenges in AI-Powered Educational Technologies: Teacher Perspectives and Resistance. AI EDIFY J. 2024, 1, 1–10. [Google Scholar]
  21. Mehdaoui, A. Unveiling Barriers and Challenges of AI Technology Integration in Education: Assessing Teachers’ Perceptions, Readiness and Anticipated Resistance. Futur. Educ. 2024, 4, 95–108. [Google Scholar] [CrossRef]
  22. Yang, Y.N.; Kong, S.C. Professional Development for Teachers in AI Literacy Education: Teaching Machine Learning to Senior Primary and Junior Secondary Students. In Proceedings of the 17th International Conference on Computer Supported Education, CSEDU 2025, Porto, Portugal, 1–3 April 2025; SciTePress: Setúbal, Portugal, 2025; pp. 35–42. [Google Scholar]
  23. Azzam, A.; Charles, T. A Review of Artificial Intelligence in K-12 Education. Open J. Appl. Sci. 2024, 14, 2088–2100. [Google Scholar] [CrossRef]
  24. Sperling, K.; Stenberg, C.-J.; McGrath, C.; Åkerfeldt, A.; Heintz, F.; Stenliden, L. In Search of Artificial Intelligence (AI) Literacy in Teacher Education: A Scoping Review. Comput. Educ. Open 2024, 6, 100169. [Google Scholar] [CrossRef]
  25. Nazaretsky, T.; Ariely, M.; Cukurova, M.; Alexandron, G. Teachers’ Trust in AI-powered Educational Technology and a Professional Development Program to Improve It. Br. J. Educ. Technol. 2022, 53, 914–931. [Google Scholar] [CrossRef]
  26. Moura, A.; Carvalho, A.A.A. Teachers’ Perceptions of the Use of Artificial Intelligence in the Classroom; Atlantis Press: Dordrecht, The Netherlands, 2024; pp. 140–150. [Google Scholar]
  27. Gravino, C.; Iannella, A.; Marras, M.; Pagliara, S.M.; Palomba, F. Teachers Interacting with Generative Artificial Intelligence: A Dual Responsibility; Ceur Workshop Proceedings: Aachen, Germany, 2024; Volume 3762. [Google Scholar]
  28. Çayak, S. Investigating the Relationship between Teachers’ Attitudes toward Artificial Intelligence and Their Artificial Intelligence Literacy. J. Educ. Technol. Online Learn. 2024, 7, 367–383. [Google Scholar] [CrossRef]
  29. Tin, T.T.; Chor, K.Y.; Hui, W.J.; Cheng, W.Y.; Kit, C.J.; Husin, W.N.A.-A.W.; Tiung, L.K. Demographic Factors Shaping Artificial Intelligence (AI) Perspectives: Exploring Their Impact on University Students’ Academic Performance. Pak. J. Life Soc. Sci. 2024, 22, 12248–12264. [Google Scholar] [CrossRef]
  30. Falebita, O. Evaluating Artificial Intelligence Anxiety among Pre-Service Teachers in University Teacher Education Programs. J. Math. Instr. Soc. Res. Opin. 2025, 4, 1–16. [Google Scholar] [CrossRef]
  31. Miranda, F.J.; Chamorro-Mera, A. The Impact of Gender and Age on HEI Teachers’ Intentions to Use Generative Artificial Intelligence Tools. ITLT 2025, 108, 112–128. [Google Scholar] [CrossRef]
  32. Bakhadirov, M.; Alasgarova, R. Factors Influencing Teachers’ Use of Artificial Intelligence for Instructional Purposes. IAFOR J. Educ. 2024, 12, 9–32. [Google Scholar] [CrossRef]
  33. Viberg, O.; Cukurova, M.; Feldman-Maggor, Y.; Alexandron, G.; Shirai, S.; Kanemune, S.; Wasson, B.; Tømte, C.; Spikol, D.; Milrad, M.; et al. What Explains Teachers’ Trust in AI in Education across Six Countries? Int. J. Artif. Intell. Educ. 2024, 35, 1288–1316. [Google Scholar] [CrossRef]
  34. Creswell, J.W.; Creswell, J.D. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches; SAGE Publications: Thousand Oaks, CA, USA, 2017; ISBN 1-5063-8669-5. [Google Scholar]
  35. Kaplan, D. The Sage Handbook of Quantitative Methodology for the Social Sciences; SAGE Publications: Thousand Oaks, CA, USA, 2004; ISBN 1-4833-6587-5. [Google Scholar]
  36. Wolf, C.; Fu, Y.; Smith, T.; Joye, D. The SAGE Handbook of Survey Methodology; SAGE Publications Ltd.: London, UK, 2016. [Google Scholar]
  37. Iluţ, P.; Rotariu, T.-I. Ancheta Sociologică Şi Sondajul de Opinie: Teorie Şi Practică; Polirom: Iași, Romania, 1997; ISBN 973-9248-65-9. [Google Scholar]
  38. Morgan, D.L. Snowball Sampling. In The SAGE Encyclopedia of Qualitative Research Methods; SAGE Publications, Inc.: Thousand Oaks, CA, USA, 2008; Volume 2, pp. 815–816. [Google Scholar]
  39. Wiles, R.; Crow, G.; Heath, S.; Charles, V. The Management of Confidentiality and Anonymity in Social Research. Int. J. Soc. Res. Methodol. 2008, 11, 417–428. [Google Scholar] [CrossRef]
  40. Coffelt, T.A. Confidentiality and Anonymity of Participants. In The SAGE Encyclopedia of Communication Research Methods; SAGE Publications, Inc.: Thousand Oaks, CA, USA, 2017; pp. 228–230. ISBN 978-1-4833-8141-1. [Google Scholar]
  41. Etikan, I.; Musa, S.A.; Alkassim, R.S. Comparison of Convenience Sampling and Purposive Sampling. Am. J. Theor. Appl. Stat. 2016, 5, 1–4. [Google Scholar] [CrossRef]
  42. Lavrakas, P.J. Encyclopedia of Survey Research Methods; SAGE Publications, Inc.: Thousand Oaks, CA, USA, 2008; ISBN 978-1-4129-6394-7. [Google Scholar]
  43. Profiroiu, C.M.; Nastacă, C.C. Gender Equality in the Romanian Educational System. Proc. Adm. Public Manag. Int. Conf. 2018, 14, 79–93. [Google Scholar]
  44. Schepman, A.; Rodway, P. Initial Validation of the General Attitudes towards Artificial Intelligence Scale. Comput. Hum. Behav. Rep. 2020, 1, 100014. [Google Scholar] [CrossRef]
  45. Schepman, A.; Rodway, P. The General Attitudes towards Artificial Intelligence Scale (GAAIS): Confirmatory Validation and Associations with Personality, Corporate Distrust, and General Trust. Int. J. Hum.-Comput. Interact. 2023, 39, 2724–2741. [Google Scholar] [CrossRef]
  46. Boateng, G.O.; Neilands, T.B.; Frongillo, E.A.; Melgar-Quiñonez, H.R.; Young, S.L. Best Practices for Developing and Validating Scales for Health, Social, and Behavioral Research: A Primer. Front. Public Health 2018, 6, 149. [Google Scholar] [CrossRef]
  47. Beaton, D.E.; Bombardier, C.; Guillemin, F.; Ferraz, M.B. Guidelines for the Process of Cross-Cultural Adaptation of Self-Report Measures. Spine 2000, 25, 3186. [Google Scholar] [CrossRef]
  48. Cruchinho, P.; López-Franco, M.D.; Capelas, M.L.; Almeida, S.; Bennett, P.M.; da Silva, M.M.; Teixeira, G.; Nunes, E.; Lucas, P.; Gaspar, F. Translation, Cross-Cultural Adaptation, and Validation of Measurement Instruments: A Practical Guideline for Novice Researchers. J. Multidiscip. Healthc. 2024, 17, 2701–2728. [Google Scholar] [CrossRef]
  49. Talik, W.; Talik, E.B.; Grassini, S. Measurement Invariance of the Artificial Intelligence Attitude Scale (AIAS-4): Cross-Cultural Studies in Poland, the USA, and the UK. Curr. Psychol. 2025, 44, 15758–15766. [Google Scholar] [CrossRef]
  50. Holden, R.R.; Jackson, D.N. Item Subtlety and Face Validity in Personality Assessment. J. Consult. Clin. Psychol. 1979, 47, 459–468. [Google Scholar] [CrossRef]
  51. Nevo, B. Face Validity Revisited. J. Educ. Meas. 1985, 22, 287–293. [Google Scholar] [CrossRef]
  52. Rodríguez-de-Dios, I.; Igartua, J.-J.; González-Vázquez, A. Development and Validation of a Digital Literacy Scale for Teenagers. In Proceedings of the TEEM’16: 4th International Conference on Technological Ecosystems for Enhancing Multiculturality, Salamanca, Spain, 2–4 November 2016; pp. 1067–1072. [Google Scholar]
  53. Rodríguez-de-Dios, I.; Igartua, J.-J. Skills of Digital Literacy to Address the Risks of Interactive Communication. In Information and Technology Literacy: Concepts, Methodologies, Tools, and Applications; IGI Global Scientific Publishing: Hershey, PA, USA, 2018; pp. 621–632. [Google Scholar]
  54. Field, A. Discovering Statistics Using IBM SPSS Statistics; SAGE Publications Limited: London, UK, 2024; ISBN 1-5296-6870-0. [Google Scholar]
  55. Pallant, J. SPSS Survival Manual: A Step by Step Guide to Data Analysis Using IBM SPSS; Routledge: Abingdon, UK, 2020; ISBN 1-003-11745-7. [Google Scholar]
  56. Kaiser, H.F. The Application of Electronic Computers to Factor Analysis. Educ. Psychol. Meas. 1960, 20, 141–151. [Google Scholar] [CrossRef]
  57. George, D.; Mallery, P. IBM SPSS Statistics 29 Step by Step: A Simple Guide and Reference; Routledge: Abingdon, UK, 2024; ISBN 1-032-62215-6. [Google Scholar]
  58. Tavakol, M.; Dennick, R. Making Sense of Cronbach’s Alpha. Int. J. Med. Educ. 2011, 2, 53. [Google Scholar] [CrossRef] [PubMed]
  59. Cortina, J.M. What Is Coefficient Alpha? An Examination of Theory and Applications. J. Appl. Psychol. 1993, 78, 98–104. [Google Scholar] [CrossRef]
  60. Eisinga, R.; Grotenhuis, M.; Pelzer, B. The Reliability of a Two-Item Scale: Pearson, Cronbach, or Spearman-Brown? Int. J. Public Health 2013, 58, 637–642. [Google Scholar] [CrossRef]
  61. Brougham, D.; Haar, J. Smart Technology, Artificial Intelligence, Robotics, and Algorithms (STARA): Employees’ Perceptions of Our Future Workplace. J. Manag. Organ. 2018, 24, 239–257. [Google Scholar] [CrossRef]
  62. Nomura, T.; Kanda, T.; Suzuki, T.; Kato, K. Prediction of Human Behavior in Human–Robot Interaction Using Psychological Scales for Anxiety and Negative Attitudes Toward Robots. IEEE Trans. Robot. 2008, 24, 442–451. [Google Scholar] [CrossRef]
  63. Russo, C.; Romano, L.; Clemente, D.; Iacovone, L.; Gladwin, T.E.; Panno, A. Gender Differences in Artificial Intelligence: The Role of Artificial Intelligence Anxiety. Front. Psychol. 2025, 16, 1559457. [Google Scholar] [CrossRef]
  64. Lucas, M.; Zhang, Y.; Bem-haja, P.; Vicente, P.N. The Interplay between Teachers’ Trust in Artificial Intelligence and Digital Competence. Educ. Inf. Technol. 2024, 29, 22991–23010. [Google Scholar] [CrossRef]
  65. Pietsch, M.; Mah, D.-K. Leading the AI Transformation in Schools: It Starts with a Digital Mindset. Educ. Tech Res. Dev. 2025, 73, 1043–1069. [Google Scholar] [CrossRef]
  66. Kasinidou, M.; Kleanthous, S.; Otterbacher, J. Cypriot Teachers’ Digital Skills and Attitudes towards AI. Discov. Educ. 2025, 4, 1. [Google Scholar] [CrossRef]
  67. Granström, M.; Oppi, P. Assessing Teachers’ Readiness and Perceived Usefulness of AI in Education: An Estonian Perspective. Front. Educ. 2025, 10, 1622240. [Google Scholar] [CrossRef]
  68. Dhahir, D.F.; Kenda, N.; Dirgahayu, D.; Syarifuddin, S.; Djaffar, R.; Widihastuti, R.; Rustam, M.; Pala, R. The Relationship of Digital Literacy, Exposure to AI-Generated Deepfake Videos, and the Ability to Identify Deepfakes in Generation X. J. Pekommas 2024, 9, 357–368. [Google Scholar] [CrossRef]
  69. Galindo-Domínguez, H.; Delgado, N.; Campo, L.; Losada, D. Relationship between Teachers’ Digital Competence and Attitudes towards Artificial Intelligence in Education. Int. J. Educ. Res. 2024, 126, 102381. [Google Scholar] [CrossRef]
  70. Asio, J.M.R.; Sardina, D.P. Gender Differences on the Impact of AI Self-Efficacy on AI Anxiety through AI Self-Competency: A Moderated Mediation Analysis. J. Pedagog. Res. 2025, 9, 55–71. [Google Scholar] [CrossRef]
  71. Serdenia, J.R.C.; Dumagay, A.H.; Balasa, K.A.; Capacio, E.A.; Lauzon, L.D.S. Attitude, Acceptability, and Perceived Effectiveness of Artificial Intelligence in Education: A Quantitative Cross-Sectional Study among Future Teachers. LatIA 2025, 3, 313. [Google Scholar] [CrossRef]
  72. Ng, D.T.K.; Leung, J.K.L.; Chu, S.K.W.; Qiao, M.S. Conceptualizing AI Literacy: An Exploratory Review. Comput. Educ. Artif. Intell. 2021, 2, 100041. [Google Scholar] [CrossRef]
  73. Ackerhans, S.; Wehkamp, K.; Petzina, R.; Dumitrescu, D.; Schultz, C. Perceived Trust and Professional Identity Threat in AI-Based Clinical Decision Support Systems: Scenario-Based Experimental Study on AI Process Design Features. JMIR Form. Res. 2025, 9, e64266. [Google Scholar] [CrossRef]
  74. Li, Y. Relationship between Perceived Threat of Artificial Intelligence and Turnover Intention in Luxury Hotels. Heliyon 2023, 9, e18520. [Google Scholar] [CrossRef]
  75. Trust, Attitudes and Use of Artificial Intelligence: A Global Study 2025. Available online: https://figshare.unimelb.edu.au/articles/report/Trust_attitudes_and_use_of_artificial_intelligence_A_global_study_2025/28822919?file=54013232 (accessed on 29 October 2025).
  76. Queirós, A.; Faria, D.; Almeida, F. Strengths and Limitations of Qualitative and Quantitative Research Methods. Eur. J. Educ. Stud. 2017, 3, 369–387. [Google Scholar] [CrossRef]
  77. Jager, J.; Putnick, D.L.; Bornstein, M.H. II. More than Just Convenient: The Scientific Merits of Homogeneous Convenience Samples. Monogr. Soc. Res. Child Dev. 2017, 82, 13–30. [Google Scholar] [CrossRef]
  78. Findley, M.G.; Kikuta, K.; Denly, M. External Validity. Annu. Rev. Political Sci. 2021, 24, 365–393. [Google Scholar] [CrossRef]
Table 1. Factor loadings of items concerning negative perceptions of artificial intelligence following Oblimin rotation.

Item Formulated | Factor 1 | Factor 2
Artificial intelligence could take control over humans. | 0.913 |
I believe artificial intelligence is dangerous. | 0.909 |
I think with fear about the future uses of artificial intelligence. | 0.906 |
People like me will suffer if artificial intelligence is increasingly used. | 0.838 |
I find artificial intelligence threatening. | 0.830 |
Organizations use artificial intelligence unethically. |  | 0.928
I believe AI systems make many errors. |  | 0.724
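The oblique solution in Table 1 was produced in SPSS 23. For readers without SPSS, a roughly equivalent extraction with the open-source factor_analyzer package might look as follows; the random `items` DataFrame is a placeholder for the survey responses, so the printed loadings are meaningless and the sketch only demonstrates the call pattern.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Placeholder data: eight 1-5 Likert items for 1110 respondents. With real
# survey responses, the loadings would approximate those in Table 1.
rng = np.random.default_rng(42)
items = pd.DataFrame(rng.integers(1, 6, size=(1110, 8)),
                     columns=[f"item_{i}" for i in range(1, 9)])

# Two factors with Oblimin (oblique) rotation, as in the reported analysis.
fa = FactorAnalyzer(n_factors=2, rotation="oblimin")
fa.fit(items)
print(pd.DataFrame(fa.loadings_, index=items.columns,
                   columns=["Factor 1", "Factor 2"]).round(3))
```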
Table 2. Descriptive statistics for the items comprising the “Perceived AI Threat” dimension.

Item | Mean | SD | Median | IQR
Artificial intelligence could take control over humans. | 3.41 | 1.374 | 4 | 3
I believe artificial intelligence is dangerous. | 3.37 | 1.277 | 3 | 2
I think with fear about the future uses of artificial intelligence. | 3.24 | 1.324 | 3 | 2
People like me will suffer if AI is increasingly used. | 3.13 | 1.341 | 3 | 2
I find artificial intelligence threatening. | 3.59 | 1.249 | 4 | 2
Table 3. Descriptive statistics for the items comprising the “Distrust in the fairness and ethics of AI” dimension.

Item | Mean | SD | Median | IQR
Organizations use artificial intelligence unethically. | 3.31 | 1.150 | 3 | 1
I believe artificial intelligence systems make many errors. | 3.40 | 1.093 | 3 | 1
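The descriptives in Tables 2 and 3 (mean, SD, median, IQR) are straightforward to reproduce from raw responses. Below is a pandas sketch in which randomly generated Likert data stand in for the actual survey file.

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for the survey data: 1-5 Likert responses from
# 1110 teachers on the two "distrust" items.
rng = np.random.default_rng(0)
responses = pd.DataFrame({
    "organizations_use_ai_unethically": rng.integers(1, 6, size=1110),
    "ai_systems_make_many_errors": rng.integers(1, 6, size=1110),
})

summary = pd.DataFrame({
    "Mean": responses.mean(),
    "SD": responses.std(ddof=1),  # sample standard deviation, as in SPSS
    "Median": responses.median(),
    "IQR": responses.quantile(0.75) - responses.quantile(0.25),
})
print(summary.round(3))
```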
Table 4. Differences in negative perceptions of AI by socio-demographic variables.

Variable | Categories | N | F1: Perceived AI Threat (Mean Rank) | F2: Distrust in Fairness and Ethics of AI (Mean Rank) | Test Statistic/Result
Gender | Male | 205 | 497.61 | 594.33 | Z = −2.869, p = 0.004 (F1); Z = −1.946, p = 0.052 (F2)
Gender | Female | 905 | 568.61 | 546.70 |
Residence | Urban | 773 | 553.13 | 575.72 | Z = −0.374, p = 0.709 (F1); Z = −3.225, p = 0.001 (F2)
Residence | Rural | 337 | 560.94 | 509.12 |
Teaching Degree | No teaching degree | 126 | 518.61 | 561.94 | H = 2.454, p = 0.484 (F1); H = 2.284, p = 0.516 (F2)
Teaching Degree | Teacher Tenure Exam | 173 | 560.17 | 553.12 |
Teaching Degree | Degree II | 132 | 579.38 | 517.47 |
Teaching Degree | Degree I | 679 | 556.51 | 562.30 |
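The comparisons in Table 4 rely on rank-based tests: Mann-Whitney (reported as Z) for two-category variables and Kruskal-Wallis H for the four teaching-degree categories. A scipy sketch on simulated factor scores follows; the arrays are hypothetical stand-ins, and note that scipy reports the U statistic where SPSS reports its normal-approximation Z.

```python
import numpy as np
from scipy import stats

# Hypothetical stand-ins for per-group factor scores (group sizes as in Table 4).
rng = np.random.default_rng(1)
threat_male = rng.normal(3.2, 1.0, 205)
threat_female = rng.normal(3.5, 1.0, 905)

# Two groups: Mann-Whitney U test.
u_stat, p_gender = stats.mannwhitneyu(threat_male, threat_female,
                                      alternative="two-sided")

# Four teaching-degree categories: Kruskal-Wallis H test.
degree_groups = [rng.normal(3.4, 1.0, n) for n in (126, 173, 132, 679)]
h_stat, p_degree = stats.kruskal(*degree_groups)

print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_gender:.3f}")
print(f"Kruskal-Wallis H = {h_stat:.3f}, p = {p_degree:.3f}")
```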
Table 5. Spearman correlations between negative AI attitude dimensions and digital literacy components—male group (N = 205).

Dimensions/Factors | Information Literacy | Web Navigation Literacy | Data & Security Literacy
Perceived AI Threat (F1) | −0.287 ** | −0.207 ** | −0.140 *
Distrust in Fairness and Ethics of AI (F2) | −0.172 * | −0.144 * | −0.003
Note. Values represent Spearman correlation coefficients (ρ). N = 205. * p < 0.05, ** p < 0.01.
Table 6. Spearman correlations between the dimensions of negative attitudes toward AI and components of digital literacy—female group (N = 905).

Dimensions/Factors | Information Literacy | Web Navigation Literacy | Data & Security Literacy
Perceived AI Threat (F1) | −0.250 ** | −0.096 ** | −0.114 **
Distrust in Fairness and Ethics of AI (F2) | −0.074 * | 0.133 ** | 0.125 **
Note. Values represent Spearman correlation coefficients (ρ). N = 905. * p < 0.05, ** p < 0.01.
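The coefficients in Tables 5 and 6 are Spearman rank correlations, which can be computed with scipy.stats.spearmanr. A minimal sketch, with simulated scores standing in for one literacy component and one attitude dimension:

```python
import numpy as np
from scipy import stats

# Hypothetical stand-ins: a digital-literacy score and a perceived-threat
# score with a mild negative association, as in Tables 5-6.
rng = np.random.default_rng(2)
information_literacy = rng.normal(4.0, 0.6, 905)
perceived_threat = 5 - 0.4 * information_literacy + rng.normal(0, 1, 905)

rho, p_value = stats.spearmanr(information_literacy, perceived_threat)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.4g}")
```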
