Article

Beyond Utility: The Impact of Religiosity and Calling on AI Adoption in Education

by Mátyás Turós 1,*, Ilona Pajtókné Tari 1, Enikő Szőke-Milinte 1, Rita Rubovszky 1, Klára Soltész-Várhelyi 2, Viktor Zsódi 3 and Zoltán Szűts 1

1 Doctoral School of Education, Eszterhazy Karoly Catholic University, 3300 Eger, Hungary
2 Institute of Psychology, Pázmány Péter Catholic University, 1088 Budapest, Hungary
3 Institute of Religious Education and Pastoral Sociology, Sapientia College of Theology of Religious Orders, 1014 Budapest, Hungary
* Author to whom correspondence should be addressed.
Religions 2025, 16(8), 1069; https://doi.org/10.3390/rel16081069
Submission received: 5 July 2025 / Revised: 1 August 2025 / Accepted: 13 August 2025 / Published: 19 August 2025
(This article belongs to the Special Issue Religious Communities and Artificial Intelligence)

Abstract

The social integration of artificial intelligence (AI) poses fundamental challenges to value-driven domains such as education, where the adoption of new technologies raises not merely technical but also deeply rooted ethical and identity-related questions. While dominant technology acceptance models (e.g., TAM and UTAUT) primarily focus on cognitive-rational factors (e.g., perceived usefulness), they often overlook the cultural and value-based elements that fundamentally shape adaptation processes. Addressing this research gap, the present study examines how two hitherto under-researched factors—religiosity and teacher’s sense of calling—influence teachers’ attitudes toward AI and, ultimately, its adoption. The research is based on a survey of 680 Catholic secondary school teachers in Hungary. To analyse the data, we employed structural equation modelling (PLS-SEM) to examine the mechanisms of influence among religiosity, sense of calling, and AI attitudes. The results indicate that neither religiosity nor a sense of calling exerts a significant direct effect on AI adoption, and their indirect effects are also marginal. Although statistically significant relationships were found—stronger religiosity reduces a supportive evaluation of AI, while a higher sense of calling increases AI-related concerns—their practical significance is negligible. The study’s main conclusion is that successful AI integration, building on teachers’ pragmatic attitudes, is achieved not by neglecting value-based factors, but by developing critical AI literacy that treats technology as a responsible amplifier of pedagogical work. This finding suggests that value-based extensions of technology acceptance models should be approached with caution, as the role of these factors may be more limited than theoretical considerations imply.

1. Introduction

Artificial intelligence (AI) exerts a dual impact on education (Alwaqdani 2025; Molefi et al. 2024; Zhang et al. 2023), as illustrated by how generative AI tools like ChatGPT offer opportunities for the teaching and learning process while also posing challenges (Kong et al. 2024). To harness these opportunities and address the challenges, the educational integration of AI must depend largely on teachers’ attitudes and acceptance (Molefi et al. 2024; Özbek et al. 2024). Accordingly, mapping teachers’ perceptions and adoption intentions regarding AI is a developing area of technology acceptance research (Alwaqdani 2025; Sanusi et al. 2024). Research in this field typically builds on the technology acceptance model (TAM) and the unified theory of acceptance and use of technology (UTAUT) frameworks. TAM emphasises the roles of perceived usefulness, ease of use, attitude, and behavioural intention in assessing technology and is often combined with other models, such as extended versions that also consider subjective norms (Dehghani and Mashhadi 2024; Galimova et al. 2024; Rafique et al. 2023; Zhang et al. 2023). Related research also examines the roles of teacher self-efficacy and AI-related anxiety (Kaya et al. 2024; Sanusi et al. 2024). The present study investigates teachers’ AI adoption, building on the dominant TAM and UTAUT models in the field of technology acceptance. However, while these models primarily concentrate on cognitive-rational factors (perceived usefulness and ease of use) and social influences, our research explores how value-based factors—specifically religiosity and a teacher’s sense of calling—fit into the technology acceptance process. Our approach aims not to replace the TAM/UTAUT models, but to complement them.
Studies on value-based or informal institutional factors influencing teachers’ relationship with AI (e.g., school community traditions, unwritten pedagogical norms, and professional group identity) or on factors rooted in personal values (e.g., religiosity, generational values, and culturally derived beliefs) are rare in the literature. Our study’s starting point is whether religiosity—as a factor that fundamentally shapes one’s worldview and value system (Ahmed et al. 2025; Caidi et al. 2025)—and a teacher’s sense of calling significantly impact the educational acceptance of AI technologies. This is similar to the findings of Lin et al. (2025), who demonstrated that teacher beliefs about mathematics education and AI literacy mediate the relationship between technological perceptions and commitment to AI tools in a mathematics education context. A religious worldview, for instance, may impede digital transformation (Jiang et al. 2024) because it is often associated with ethical and theological concerns about AI (Ahmed et al. 2025; Fernandez-Borsot 2023; Graves 2025) that touch upon questions of human dignity, the order of creation, or anthropomorphisation (Coghill 2023; Cruz 2025; Kaunda 2024). Religiosity may therefore play a decisive role in attitudes toward AI (Ahmed et al. 2025; Caidi et al. 2025). The influence of these factors is particularly important in educational settings where the institutional background is closely tied to religious values, such as Catholic secondary schools in Hungary. In these cases, institutional support—in line with the findings of Hazzan-Bishara et al. (2025)—may influence AI adoption not only directly but also indirectly, through teachers’ intrinsic motivation and self-efficacy.

2. Background

Technology acceptance models provide a theoretical framework for understanding teacher behaviour (Galimova et al. 2024; Zhang et al. 2023). Recent research shows that these models are most effective when supplemented with psychological and social constructs such as self-efficacy (Sanusi et al. 2024), subjective norms (Kong et al. 2024), or AI-specific anxiety (Kaya et al. 2024). However, AI raises not only technological, psychological, and social questions; its existence and use also touch upon ethical and theological dimensions such as human dignity, the boundary between humans and machines, and the nature of knowledge, creativity, and autonomy (Coghill 2023; Cruz 2025; Kaunda 2024). Alkhouri (2024) also emphasises the complexity of AI’s religio-psychological impact, while Fernandez-Borsot (2023) and Graves (2024) discuss theological and ethical dilemmas related to technology’s transcendent, redemptive, or dominant roles.
Examining these authors’ work more closely, we see that Alkhouri (2024) analyses how AI-based applications (e.g., chatbots and virtual reality) might transform religious practices, spiritual seeking, and community experiences. In their study, the author raises several ethical concerns: the authenticity of AI-simulated religious experiences, the potential for algorithmic bias in religious contexts, and the challenges of protecting data and preserving genuine human connections. Fernandez-Borsot (2023) analyses two key phenomena through the philosophical categories of transcendence, immanence, and relationality. On one hand, he examines how contemplation might be marginalised by technology’s action-oriented “enframing” logic; on the other, he demonstrates how technology—as an extension of the human body—can lead to “dissociation” from the lived body. According to the author, spirituality, particularly body-based practices, offers an integrative counterbalance to these tendencies. Graves (2024) proposes theological frameworks for evaluating near-future AI developments, with particular attention to how these developments might affect concepts of human suffering and flourishing (eudaimonia) and the theological interpretation of human uniqueness (imago Dei) in light of ever-advancing AI capabilities, for instance, in the realms of consciousness or the capacity for a relationship with God.
These approaches confirm and complement our central assertion: AI is not merely a technical tool but a complex phenomenon that raises spiritual and moral questions. In religious contexts, these questions are pronounced and relate to the emergence of scepticism, caution, or rejection of AI technologies, potentially leading to value conflicts. We hypothesise that certain religious values—such as service to the community, care, or justice—may also facilitate the responsible and reflective use of AI.
Alongside religiosity, a teacher’s sense of calling is another factor that has recently become a focus of pedagogical psychology, yet it remains underrepresented in technology acceptance research. A sense of calling, besides being a strong source of intrinsic motivation, can imbue the choice of and persistence in the teaching profession with a deeper, even spiritual, meaning. Furthermore, it determines how teachers respond to new challenges in their profession—including, undoubtedly, AI (Lan 2024). Demographic factors may also be relevant to how teachers experience and interpret religiosity and their sense of calling and how these, in turn, influence their relationship with AI. In technology acceptance models, these variables often serve as controls (Kaya et al. 2024; Zhang et al. 2023) and may have both direct and indirect effects (cf. Bakhadirov et al. 2024) on dimensions such as teacher attitudes or adoption intentions.
We therefore place the educational integration of AI within a broader, value-based framework. Our aim is to conduct a complex examination of the psychological, spiritual, professional, and demographic factors influencing teacher technology acceptance. Supplementing classic technology acceptance models with the factors of religiosity and sense of calling creates an opportunity for a deeper understanding of teacher attitudes toward AI. This allows for the formulation of research and practical implications that can support the responsible, context-sensitive integration of AI into public education—particularly within faith-based institutions.
Our study seeks to explore the role of religiosity, a teacher’s sense of calling, and certain demographic variables in shaping attitudes and adoption intentions regarding the use of AI in schools. Given the complexity of this objective, we will examine the direct and indirect effects of these factors within a complex technology acceptance model (i.e., an empirical measurement model). We seek to answer the following questions: (Q1) How do religiosity and a teacher’s sense of calling influence teacher attitudes toward AI? (Q2) What direct and indirect effects do religiosity and a sense of calling exert on AI adoption, and are these effects mediated by attitudes toward AI? Our study’s added value lies in its examination of AI acceptance among Catholic secondary school teachers from a value- and identity-based perspective.

Hypotheses

Numerous ethical and theological studies (Fernandez-Borsot 2023; Graves 2024; Onyeukaziri 2024) point to value conflicts that hinder technology acceptance. These include the questioning of human dignity and uniqueness by technology, the ontological implications of artificial intelligence, and tensions between the theological conception of work and the technological worldview. Buyukyazici and Serti (2024) found a negative correlation between religiosity and positive attitudes toward innovation. According to research by Kozak and Fel (2024), AI elicits stronger negative emotional reactions (e.g., fear and anger) from religious individuals. However, the issue is not so straightforward: research by Karamouzis and Fokides (2017) among teacher candidates highlights the complexity of the relationship between religiosity and technological attitudes. Furthermore, teacher self-efficacy can also influence one’s relationship with AI. The findings of Viberg et al. (2024) and Gökçe Tekin (2024) suggest that teachers with high self-efficacy perceive more advantages in AI, approach it with less concern, adopt it with greater trust, and apply it more readily. Correspondingly, Zhang et al. (2023) associate low self-efficacy with negative outcomes, such as stress and technological dependency.
Based on the literature, attitudes toward AI do not appear as a unified construct, but are fed by multiple emotional and cognitive sources. According to Kozak and Fel (2024), religiosity affects emotional reactions to AI differently: it increases fear and anger while reducing sadness and disgust. Hopcan et al. (2024) state that AI-related anxiety comprises dimensions such as concerns about learning difficulties, job security, and social impacts. According to Buyukyazici and Serti (2024), religiosity negatively influences certain dimensions of innovation attitudes; however, the magnitude and significance of the effect differ across attitude elements, suggesting that the impact of religiosity is not uniform but is differentiated by attitude type. Based on these findings, we can reasonably assume that religiosity influences different dimensions of AI attitudes in different ways. In formulating our hypotheses, we considered that in religious thought, the soul, consciousness, and personality are of divine origin, and humans hold a special place in creation (imago Dei). Consequently, religious individuals may draw a sharper line between human and machine and may be more cautious about personifying artificial entities: attributing human traits to a machine could contradict the conviction that true consciousness and personality are exclusively divine gifts.
We define a teacher’s sense of calling as a deep, intrinsic motivation that frames teaching not merely as a job but as a meaningful mission that serves the common good and provides personal satisfaction. This construct extends beyond self-efficacy (a belief in one’s ability for a specific task) to encompass dimensions of service, responsibility, satisfaction, and long-term commitment (Jain and Kaur 2021). Although prior research suggests that self-efficacy can reduce technology-related concerns (Gökçe Tekin 2024; Viberg et al. 2024), other components of a sense of calling—such as a sense of responsibility toward students and a commitment to teaching quality—may act in the opposite direction. Teachers with a strong sense of calling might worry more about the educational applications of AI precisely because they feel a deeper responsibility for teaching and are more cautious about technologies that could affect the quality of the teacher–student relationship or the personal nature of education. The various components of a teacher’s sense of calling may therefore indirectly influence aspects of AI attitudes in different, even contradictory, ways.
Some technology acceptance models also posit a direct path from self-efficacy to usage intention. In Gökçe Tekin’s (2024) model, the relationship between self-efficacy and usage intention (H4 hypothesis) proved significant, indicating that direct effects may exist beyond attitudes. According to Viberg et al. (2024), teachers’ self-efficacy toward AI-based educational technology (AI-EdTech) and their AI understanding affect trust in technology not directly, but through perceived benefits and concerns. These factors mediate through attitudes, not independently of them. However, cultural values—such as uncertainty avoidance, collectivism, and masculinity—also directly influence trust, regardless of the benefits or concerns teachers perceive regarding AI-EdTech. Since certain deeper, identity-level factors—such as cultural values—can affect trust in and adoption of technology independently of attitudes, it is conceivable that religiosity and a teacher’s sense of calling may play a similar role in the context of AI application.
According to Buyukyazici and Serti (2024), religiosity has a negative effect on innovation attitudes (which are important antecedents of technology acceptance). Based on their findings, religiosity reinforces attitudes that hinder innovation. Karamouzis and Fokides (2017) examined the religious views, technology use, and attitudes toward technology of Greek theology and teacher-training students. A cluster analysis identified three distinct profiles: (1) religious students with positive technological attitudes; (2) non-religious students also with positive technological attitudes; and (3) moderately religious students with negative technological attitudes. The authors found significant associations between religiosity and attitude toward technology, particularly concerning the roles of gender and age. Religiosity and technology acceptance were not mutually exclusive: theology students were more religious than teacher-training students and viewed the compatibility of religion and technology more positively. The contradictory findings of these two studies (Buyukyazici and Serti 2024; Karamouzis and Fokides 2017) highlight that the relationship between religiosity and technology attitudes is complex and context-dependent. While Buyukyazici and Serti (2024) found a generally negative relationship, the results of Karamouzis and Fokides (2017) present a more nuanced picture, suggesting that the type of religiosity, educational background, and other demographic factors may influence this association. While the literature shows a contradictory relationship between religiosity and technology, our research focuses on a specific population representing a more traditional value system. Given the central role of human dignity and the order of creation in Catholic teaching, we assume that in this context, religiosity strengthens caution and more critical attitudes toward technology, especially AI that imitates human cognition, consistent with the findings of Buyukyazici and Serti (2024). Furthermore, we assume that the effect of religiosity is not limited to shaping attitudes. Religious conviction provides a deeper, identity-level framework that can directly influence behavioural adoption, independent of explicit attitudes. Unconscious norms or values (e.g., caution with novelty and respect for the created order) may also be at play, directly inhibiting the adoption of AI technologies. Therefore, we hypothesise partial mediation. Similarly, a teacher’s sense of calling may not act solely through attitudes. A teacher with a strong sense of calling might be more directly motivated to try to adopt new technologies (like AI) due to an internal commitment to professional development, even if their initial attitudes are ambivalent. This proactive, identity-driven behaviour also implies a direct effect.
Perceived usefulness predicts teachers’ AI usage intention. In Viberg et al.’s (2024) research, trust played a significant role, whilst in Gökçe Tekin’s (2024) study, self-efficacy and anxiety were significant. Ogbu Eke (2024) and Alwaqdani (2025) confirmed that positive perceptions—such as adaptability or personal usefulness—increase artificial intelligence acceptance. Hopcan et al. (2024) demonstrated a relationship between AI-related anxiety and attitudes toward machine learning: teacher candidates who viewed machine learning technology more positively were less concerned about potential job loss caused by artificial intelligence. Although, to our knowledge, the effect of anthropomorphic perception has not been empirically examined, this factor may also influence the extent of AI application.
Since religiosity demonstrably affects attitudes toward artificial intelligence and innovation (Buyukyazici and Serti 2024; Karamouzis and Fokides 2017; Kozak and Fel 2024) and since these attitudes are closely related to technology acceptance and usage intention (Gökçe Tekin 2024; Ogbu Eke 2024; Viberg et al. 2024), we assume that religiosity also indirectly influences AI adoption. Besides the direct effect, we also wish to examine whether an indirect effect exists in parallel. The indirect effect is supported by Buyukyazici and Serti’s (2024) empirical results: the authors found that the effect of religiosity operates through innovation attitudes. The technology acceptance model literature also points out that psychological factors such as self-efficacy—which is a component of teacher’s sense of calling (Lan 2024)—primarily shape attitudes through perceived usefulness and perceived ease of use. These attitudes are then direct antecedent variables of usage intention and technological adoption (Gökçe Tekin 2024; Viberg et al. 2024). Based on all this, it can be assumed that AI attitudes play a mediating role in the relationship between both religiosity and sense of calling and AI adoption.
Studies examining teachers’ acceptance of artificial intelligence (Hopcan et al. 2024; Viberg et al. 2024) do not focus on the direct effect of demographic factors on religiosity. In the literature examining the acceptance of AI technologies, psychological (e.g., self-efficacy and anxiety) and cultural variables (e.g., Hofstede’s cultural dimensions) typically receive greater emphasis. The literature presents a complex picture regarding the effect of demographic factors (age, gender, and educational level) on artificial intelligence acceptance. Some studies find that the direct effect of these factors on acceptance or trust is limited, especially when models also consider other, stronger psychological predictors (Gökçe Tekin 2024; Viberg et al. 2024). Other studies point out that certain demographic characteristics—such as gender—may play a moderating role (Zhang et al. 2023) or that their relationship is mediated by other factors (e.g., attitudes and anxiety) (Hopcan et al. 2024), and nationality as a demographic variable directly influenced ChatGPT adoption amongst university educators and showed correlation with related attitudes (Barakat et al. 2025). However, other research has demonstrated the direct, negative effect of age on teachers’ AI adoption (Bakhadirov et al. 2024). This suggests that demographic characteristics, particularly age and professional experience, may continue to be relevant factors in artificial intelligence acceptance, even if their effect is context-dependent or operates in interaction with other variables. Whilst Hopcan et al. (2024) identified gender and age differences in certain dimensions of AI-related attitudes, Bolívar-Cruz and Verano-Tacoronte (2025) found amongst Spanish university teachers that different factors (including various forms of anxiety) influence ChatGPT acceptance in men and women, highlighting the importance of a gender perspective. Recent research shows that the role of demographic factors in artificial intelligence acceptance is uncertain (Al-Kfairy 2024). Given the complex and context-dependent nature of effects, in the present research, we incorporate demographic variables (gender, age, years in profession, and educational level) as control variables in our model. Our aim is to ensure that we can examine the effects of the main psychological constructs under investigation—intrinsic religiosity, teacher’s sense of calling, and AI attitudes—on AI adoption whilst statistically filtering out the potential distorting influence of these background factors. Accordingly, since our research focuses on the mechanisms through which intrinsic religiosity and teacher’s sense of calling affect AI adoption both through AI attitudes and directly, we do not formulate separate, independent hypotheses regarding the specific predictive or mediated effects of demographic variables on AI attitudes or AI adoption. Finally, we note that although the relationship between attitudes and behaviour is theoretically self-evident, empirical testing is warranted due to the well-documented attitude–behaviour gap. AI adoption is also a function of structural, competence-based, and situational factors that may influence actual use independently of attitudes.
Based on the above, in our research, we test hypotheses partly based on the literature and partly exploratory in nature. For exploratory hypotheses, we formulated assumptions without direction: for H2b because the self-efficacy component of sense of calling may reduce concerns whilst the sense of responsibility component may increase them; for H2c and H5 because anthropomorphic perception may both encourage use (curiosity) and inhibit it (fear); for H8c due to the exploratory nature of H5; for H9b due to uncertainty about the H2b effect; and for H9c due to the exploratory nature of H2c and H5:
H1: Religiosity influences teachers’ AI-related attitudes and perceptions.
H1a: Stronger religiosity decreases the supportive evaluation of AI.
H1b: Stronger religiosity increases AI-related concerns.
H1c: Stronger religiosity decreases the anthropomorphic perception of AI.
H2: A teacher’s sense of calling influences teachers’ AI-related attitudes.
H2a: A higher sense of calling increases the supportive evaluation of AI.
H2b: A higher sense of calling influences AI-related concerns.
H2c: A higher sense of calling influences the anthropomorphic perception of AI.
H3: A supportive evaluation of AI positively influences AI adoption.
H4: AI-related concerns negatively influence AI adoption.
H5: The anthropomorphic perception of AI influences AI adoption.
H6: Religiosity directly and negatively influences AI adoption.
H7: A teacher’s sense of calling directly and positively influences AI adoption.
H8: AI attitudes and perceptions mediate the relationship between religiosity and AI adoption.
H8a: The supportive evaluation of AI negatively mediates the relationship between religiosity and AI adoption.
H8b: AI-related concerns negatively mediate the relationship between religiosity and AI adoption.
H8c: The anthropomorphic perception of AI mediates the relationship between religiosity and AI adoption.
H9: AI attitudes and perceptions mediate the relationship between a teacher’s sense of calling and AI adoption.
H9a: The supportive evaluation of AI positively mediates the relationship between a teacher’s sense of calling and AI adoption.
H9b: AI-related concerns mediate the relationship between a teacher’s sense of calling and AI adoption.
H9c: The anthropomorphic perception of AI mediates the relationship between a teacher’s sense of calling and AI adoption.

3. Methods

3.1. Procedure and Ethical Considerations

The research questionnaire was structured around three main thematic units. Data collection was conducted in two phases. The study adhered to all ethical requirements: participants were informed beforehand of the research objectives and conditions, and participation was entirely voluntary and anonymous. Ethical approval was obtained (No: RK/428/2025). The online questionnaire did not collect any personally identifiable data, and the data were analysed exclusively in an aggregate form. The questionnaire was technically designed to prevent the submission of erroneous or missing responses, and answering all questions was mandatory. The scale items were presented in a randomised order to reduce response biases (e.g., response set and category repetition). The translation of the questionnaire items into Hungarian was carried out by the research team and underwent multiple rounds of verification.

3.2. Participants and Sampling

The study population consists of teachers working in Catholic secondary schools in Hungary. In the first phase of data collection, at the end of March 2025, we contacted schools with which the authors or their university had a pre-existing relationship. This approach was based on accessibility, i.e., convenience sampling. During this phase, which concluded on 15 April 2025, a total of 415 teachers completed the online questionnaire. However, this sample size proved insufficient for a reliable application of the planned statistical analysis. To determine the required sample size for the PLS-SEM analysis, we applied the inverse square root method proposed by Hair et al. (2022) [original method: Kock and Hadaya (2018)]. Considering the model’s complexity, the hypothesised mediation paths, and the partially exploratory nature of the research, we aimed for an analysis capable of detecting even weaker, yet theoretically relevant, effects (p_min ≈ 0.10) at a 5% significance level with 80% statistical power. According to the inverse square root method [n_min > (2.486/p_min)²], a reliable detection of such small effect sizes requires a sample of at least 619 participants (in contrast to the minimum of 155 needed to detect medium-strength effects, where p_min ≈ 0.20). Furthermore, a larger sample increases the accuracy and stability of PLS-SEM parameter estimates, which is essential for the reliable analysis of our complex model containing hierarchical and formative elements.
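The sample-size calculation above can be reproduced in a few lines. The sketch below merely restates the inverse square root rule [n_min > (2.486/p_min)²] for a 5% significance level and 80% power; the function name and structure are illustrative and do not come from any cited software.

```python
import math

def min_sample_inverse_sqrt(p_min: float, constant: float = 2.486) -> int:
    """Minimum sample size by the inverse square root method
    (Kock and Hadaya 2018): n_min > (constant / p_min)^2,
    where 2.486 corresponds to alpha = 0.05 and 80% power."""
    return math.ceil((constant / p_min) ** 2)

# Weak but theoretically relevant effects (p_min ~ 0.10) -> 619 respondents
print(min_sample_inverse_sqrt(0.10))  # 619
# Medium-strength effects (p_min ~ 0.20) -> 155 respondents
print(min_sample_inverse_sqrt(0.20))  # 155
```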
To meet both methodological requirements and research objectives, we extended the study to all Catholic secondary schools in Hungary from 28 April 2025. Approximately 2000 teachers work in church-maintained Catholic secondary schools in Hungary. Using an official list, we contacted the head of every institution and, via an official letter of invitation, informed them of the research objectives, its scientific nature, and the conditions for participation. The letter also emphasised that participation was voluntary and anonymous. The deadline for completing the questionnaire was 2 June 2025. By this deadline, a total of 682 teachers had completed the questionnaire. During data cleaning, we checked for minimum and maximum response values, the consistency between age and years in the profession (the latter cannot exceed the former minus 18), response patterns (e.g., intentional sequences like 1-2-3-4-5), and qualitative responses. In this process, we identified and removed two respondents who provided humorous, inconsistent, or almost certainly intentionally incorrect responses.
The final dataset for analysis contained the responses of 680 participants. The demographic characteristics of the sample are as follows: mean age was 48.8 years (SD = 10.2, range: 22–72); mean years in the profession was 22 years (SD = 11.2, range: 1–49). Of the participants, 69.3% (n = 471) were female, and 30.7% (n = 209) were male. The distribution by educational attainment was secondary education, 6 participants (0.9%); college degree, 126 participants (18.5%); university degree, 526 participants (77.4%); and PhD/DLA degree, 22 participants (3.2%).

3.3. Measures

The questionnaire employed in this study was structured into three main thematic sections.
The first section focused on demographic control variables: the respondent’s gender, age, years in the profession, and level of education. The items measuring AI adoption pertained to the frequency of use of various platforms, the forms of this use, and its context (workplace vs. personal). The surveyed platforms included ChatGPT, Google Gemini/Bard, Microsoft Copilot (Bing AI), Claude, Midjourney, DALL-E, and an “other AI software” category. Workplace usage types comprised: information search and retrieval, image generation, presentation preparation, text summarisation, text composition (e.g., drafts, papers, and essays), transcription of voice notes, database analysis, and seeking advice. Personal usage modes included the automation of simple, non-academic text-based tasks (e.g., writing letters), translation to or from a foreign language, describing the functionality of processes or tools, entertainment and chatting, and seeking advice. The frequency of platform use was assessed on a seven-point Likert scale (from “none” to “more than two hours daily”), while work-related and personal use were evaluated on a five-point Likert scale (from “never” to “regularly”). The “AI adoption” variable, therefore, encompasses usage related to both professional teaching duties and personal life (a total of 20 items: 7 platforms, 8 workplace usage types, and 5 personal usage types).
In the second section, we used three validated scales: (a) the Attitudes Towards AI Application at Work (AAAW) scale (Park et al. 2024); (b) the Duke Religion Index (DUREL) (Koenig et al. 1997); and (c) the Teacher’s Sense of Calling Scale (TSCS) (Jain and Kaur 2021).
The AAAW scale (Park et al. 2024) examines employee attitudes toward AI along six dimensions: perceived humanlikeness, perceived adaptability, perceived quality, AI use anxiety, job insecurity, and personal usefulness/utility. During the scale’s development (Park et al. 2024), the results of the confirmatory factor analysis (CFI = 0.98, TLI = 0.97, RMSEA = 0.04, SRMR = 0.04) and the internal consistency indicators (α = 0.86–0.94) were excellent. The authors demonstrated construct validity through their correlations with the Big Five dimensions and with openness to technological innovation. Divergent validity was supported by the finding that the AAAW scale showed only weak or non-significant correlations with general job satisfaction indicators (|r| = 0.02–0.21). Respondents evaluate the statements on a five-point Likert scale (1 = strongly disagree, 5 = strongly agree). The scale contains no reverse-coded items.
The Duke University Religion Index (DUREL) is a brief, five-item instrument covering three main dimensions of religiosity: organised religious activity (ORA, one item), non-organised religious activity (NORA, one item), and intrinsic religiosity (IR, three items). Respondents rate the ORA and NORA dimensions on six-point scales and intrinsic religiosity on a five-point Likert scale. Although DUREL originally measures three dimensions of religiosity, the present study focuses on intrinsic religiosity (IR). As originally described by Allport and Ross (1967), intrinsic religiosity represents a form of religiosity that is an end in itself, serving as an individual’s central life motivation and framework (Masters 2013). Individuals with this orientation consider their religion the foundation of their lives and strive to live it consistently. While ORA and NORA measure the frequency of religious behaviours, IR captures the subjective, cognitive, and motivational aspects of religiosity (Koenig et al. 1997) and is one of the most frequently studied constructs of religiosity in psychological research (Masters 2013). The present research focuses on how teachers’ personal values and deeper convictions—which are best reflected by intrinsic religiosity—influence their attitudes toward AI and their adoption intentions. It is assumed that this internal, personal, and motivational dimension of religiosity (Masters 2013) has a stronger and more direct relationship with an individual’s stance on technology than do more external, behavioural manifestations. Furthermore, DUREL’s three-item IR subscale itself possesses robust psychometric properties; for instance, during its original development, Cronbach’s alpha was 0.75 (Koenig and Büssing 2010), and its reliability and validity have been confirmed by subsequent research (Koenig et al. 2015). Using this subscale allows us to narrow the research focus to the most personal, value-based dimension of religiosity while contributing to the model’s parsimony. Therefore, while acknowledging the multidimensional nature of religiosity, we operationalise religiosity in this study using DUREL’s three-item intrinsic religiosity (IR) subscale, as it serves as the best indicator of teachers’ personal values and convictions.
The Teacher’s Sense of Calling Scale (TSCS) (Jain and Kaur 2021) is designed to measure a teacher’s sense of calling. This 10-item instrument examines 3 dimensions: service (4 items), satisfaction (3 items), and longevity (3 items). During the scale’s development (Jain and Kaur 2021), confirmatory factor analysis confirmed the three-dimensional structure, and the scale’s reliability indicators were excellent (α = 0.92; for the dimensions: 0.82–0.86). The scale’s construct validity was supported by its positive correlation with work engagement (r = 0.50). Respondents evaluate the statements on a six-point Likert scale (1 = strongly disagree, 6 = strongly agree). The scale contains no reverse-coded items.
The third section of the questionnaire contained one open-ended question: “Please list up to five concepts that best describe your relationship with artificial intelligence.” (The responses are analysed in a separate paper due to space constraints.)

3.4. Operationalization of Latent Variables and Research Model

AI attitudes: The AAAW scale (Park et al. 2024), used to measure attitudes toward artificial intelligence, identifies six specific dimensions: perceived humanlikeness, perceived adaptability, perceived quality, AI use anxiety, job insecurity, and personal usefulness/utility. In our model, all six latent variables are treated as first-order latent variables.
Intrinsic religiosity: In line with the rationale provided in the Measures section, religiosity is operationalised as a reflective latent variable based on the three-item intrinsic religiosity (IR) subscale of the Duke Religion Index.
Teacher’s sense of calling: Although Jain and Kaur (2021) reported the reliability of the entire scale, we operationalise a teacher’s sense of calling reflectively, using the three original dimensions of the Teacher’s Sense of Calling Scale (TSCS): service, satisfaction, and longevity.
AI adoption: We treat AI adoption as a formative construct, as we define it as the aggregate of various, not necessarily interchangeable, modes of use and activities. The 20 indicators capture distinct, discrete aspects of AI integration and teacher acceptance. The formative approach aligns well with the assumption that different usage modes collectively act as building blocks to determine the overall level of a teacher’s AI adoption. For example, a teacher may use AI-based text generation tools intensively but use image generators less frequently. These different usage patterns all contribute to the overall picture of adoption, and a high value on one indicator does not necessarily imply a high value on another. The level of adoption is therefore interpreted as a combination of various components rather than a reflection of a single underlying factor. In compiling the 20 indicators, we aimed to map the potential modes of use as broadly as possible, while anticipating that some items might exhibit low variance in the sample due to the novelty of the topic and the heterogeneous diffusion of AI tools. According to the theoretical principles of formative measurement models, the indicators define the construct; therefore, they should not be removed on theoretical grounds, as doing so could alter the construct’s meaning. However, the practical application of PLS-SEM analysis requires consideration of statistical modelling limitations. An indicator with no or critically low variance in the sample carries no information about inter-respondent differences and cannot contribute meaningfully to explaining the variance of the latent variable. More importantly, such items can cause technical problems, such as singular matrix errors during the bootstrapping procedure, which would prevent the estimation of the model’s significance levels.
As shown in Table 1, the measurement of AI-related attitudes and a teacher’s sense of calling is based on multidimensional scales. In our analysis, we apply an empirically driven, bottom-up modelling strategy. Within this framework, we first examine all constructs at the level of their first-order dimensions and then decide on the measurement model type based on the results. Subscales with intercorrelations below 0.7 (corresponding to ~49% of explained variance) that logically belong to the same overarching construct are specified as second-order formative measurement models (hierarchical component models, HCM). This means each subscale contributes a unique, independent aspect to the final construct. Subscales with high correlations (r > 0.7), indicating a common underlying factor, are specified as second-order reflective measurement models. It should be noted that in structural equation modelling, the calculation of path coefficients between second-order latent variables and an outcome variable in HCMs requires a specialised procedure (Becker et al. 2023; Hair et al. 2024a, 2024b). If the original subscales are retained as first-order factors and then grouped into a second-order construct, the analysis can be conducted using a two-stage approach. In the first stage, the latent variable scores of the subscales are estimated from their indicators. In the second stage, these scores serve as indicators for the second-order variable(s) to estimate their relationship with the outcome variable. This procedure ensures that the second-order constructs transmit the full informational content of their constituent subscales to the model’s predictive structure. Figure 1 illustrates the theoretical model corresponding to our hypotheses and model specification.
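The two-stage logic described above can be illustrated with a minimal sketch. It assumes hypothetical item names for the three TSCS dimensions and approximates the first-stage latent variable scores with means of standardised items; in the actual analysis, these scores are produced by the PLS-SEM estimation itself (e.g., exported from SmartPLS) before being reused as second-stage indicators.

```python
import pandas as pd

# Stage 1 (approximation): latent variable scores for each first-order subscale.
# Here they are proxied by means of standardised items purely for illustration.
def subscale_score(df: pd.DataFrame, items: list[str]) -> pd.Series:
    z = (df[items] - df[items].mean()) / df[items].std(ddof=0)
    return z.mean(axis=1)

# Hypothetical item names for the three TSCS dimensions
tscs_dims = {
    "service":      ["tscs1", "tscs2", "tscs3", "tscs4"],
    "satisfaction": ["tscs5", "tscs6", "tscs7"],
    "longevity":    ["tscs8", "tscs9", "tscs10"],
}

def stage_two_indicators(df: pd.DataFrame) -> pd.DataFrame:
    # Stage 2: the subscale scores become the indicators of the second-order
    # "teacher's sense of calling" construct, whose path to the outcome
    # variable is then estimated in the structural model.
    return pd.DataFrame({dim: subscale_score(df, items)
                         for dim, items in tscs_dims.items()})
```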

3.5. Data Analysis

Our research examines a complex system of relationships among variables, based partly on the existing literature and partly on exploratory assumptions. To answer our research questions, we employ structural equation modelling (SEM), specifically its partial least squares (PLS-SEM) variant. We chose this method because PLS-SEM is a suitable option not only when confirming a well-established theory but also when conducting partially exploratory research, such as seeking new relationships or aiming to maximise the predictive power of an existing model in a new context or with new variables. The primary goal of PLS-SEM is to maximise the explained variance of the dependent variables, thereby enhancing predictive accuracy. Consequently, it can be applied even when a hypothesis lacks a strong theoretical foundation but is logically plausible and improves the model’s predictive power (Becker et al. 2023; Hair et al. 2022; Hair 2024). Furthermore, PLS-SEM has less stringent requirements regarding data distribution compared to covariance-based SEM (CB-SEM).
The analysis was performed using SmartPLS 4 software (Ringle et al. 2024), with thresholds for evaluation determined by theoretical recommendations (Hair et al. 2022; Hair 2024) and established practices (Hair et al. 2024a, 2024b). To assess internal consistency, we used composite reliability (CR or ρC) and Dijkstra–Henseler’s ρA reliability indicators, with values above 0.7 generally considered acceptable. The significance of the structural model’s standardised path coefficients was examined via a bootstrapping procedure with 10,000 samples, using percentile bootstrap confidence intervals. To check for multicollinearity, we used the variance inflation factor (VIF), considering values below 5, and ideally below 3, as acceptable. To assess convergent validity, we examined the average variance extracted (AVE), which required a value of at least 0.5. Discriminant validity was tested using the heterotrait–monotrait (HTMT) ratio of the correlations between latent variables, with values below 0.9 considered acceptable. This criterion is more sensitive and reliable in detecting discriminant validity issues than the traditional Fornell–Larcker criterion, the results of which we have therefore omitted. The relative effect of predictor constructs on the model’s explanatory power was evaluated using the f2 index, with values interpreted as follows: ≥0.02 (small), ≥0.15 (medium), and ≥0.35 (large). The research objectives—which include predicting a formative outcome variable, examining theoretically underexplored paths, and the concurrent presence of exploratory and applied elements—warrant the use of a robust predictive assessment method that considers the model’s practical applicability and generalisability. Accordingly, we evaluated the model’s predictive relevance using the PLSpredict procedure. This allows for an examination of the model’s out-of-sample predictive power (Q2predict), as opposed to traditional in-sample indicators like the Stone–Geisser Q2 statistic. As some indicators in the model are reflectively related to their latent variable, their deletion is permissible if certain criteria are met. Consequently, we iteratively examined the indicators’ outer loadings and removed items that performed inadequately (λ ≤ 0.708) where necessary. During this process, we considered the impact of deletions on reliability and validity metrics while ensuring the content validity of the constructs was preserved.
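For transparency, the main reflective measurement criteria listed above can also be computed directly from item-level data. The sketch below implements the textbook formulas for composite reliability, AVE, and the HTMT ratio with NumPy/pandas; it is a simplified stand-in for the SmartPLS output, assumes hypothetical indicator names and loadings, and omits the bootstrapping step used for significance testing.

```python
import numpy as np
import pandas as pd

def composite_reliability(loadings: np.ndarray) -> float:
    """rho_C = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
    s = loadings.sum()
    return s**2 / (s**2 + (1 - loadings**2).sum())

def ave(loadings: np.ndarray) -> float:
    """Average variance extracted: mean of the squared loadings."""
    return (loadings**2).mean()

def htmt(df: pd.DataFrame, items_a: list[str], items_b: list[str]) -> float:
    """Heterotrait-monotrait ratio of correlations for two constructs."""
    corr = df[items_a + items_b].corr().abs()
    hetero = corr.loc[items_a, items_b].to_numpy().mean()
    def mono(items: list[str]) -> float:
        c = corr.loc[items, items].to_numpy()
        return c[np.triu_indices(len(items), k=1)].mean()
    return hetero / np.sqrt(mono(items_a) * mono(items_b))

# Example with hypothetical loadings for a three-item reflective construct
lam = np.array([0.82, 0.76, 0.88])
print(composite_reliability(lam), ave(lam))  # ~0.86 (> 0.7) and ~0.67 (> 0.5)
```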

4. Results

4.1. Measurement Model

In the case of formative measurement models, the first step was to identify items with extremely low variance, as these can cause technical problems during the bootstrapping procedure. We observed low standard deviation for several indicators of AI adoption, indicating that respondents made little use of these tools or functions. To ensure the statistical stability of the model, the following seven items were deleted: database analysis, voice transcription, Claude, Copilot/Bing, DALL-E, Midjourney, and Gemini/Bard. Given their general relevance, we report these results in detail in Table 2.
For the reflective measurement models, we examined the indicators’ outer loadings. Items whose loadings did not reach the accepted threshold were removed iteratively. Due to their weak loadings, the following six items were deleted: (a) “Artificial intelligence provides workers with comprehensive information” (λ = 0.492); (b) “Information from artificial intelligence is always up-to-date” (λ = 0.540); (c) “I believe my work can be replaced by artificial intelligence” (λ = 0.653); (d) “Artificial intelligence has desires” (λ = 0.690); (e) “Artificial intelligence produces correct information” (λ = 0.690); and (f) “Working with students is a gratifying experience even outside of class” (λ = 0.379).
In the next step, we used a bootstrapping procedure to examine the statistical significance of all remaining indicators. Items whose contribution (loading or weight) was not statistically significant (p ≥ 0.05) were also removed. Based on this criterion, the following indicators were deleted from the AI adoption construct: Platform use: (a) other AI platform. Workplace use: (b) text writing (e.g., composition, paper, and essay); (c) image generation; (d) seeking advice. Personal use subscale: (e) entertainment, chatting; (f) automation of simple, non-academic text tasks (e.g., letter writing); (g) describing the functionality of processes or tools; (h) translation to or from a foreign language. The five remaining items of the adoption subscale are presented in Table 5.
In the subsequent step, we looked for subscales that could be aggregated. The correlations among the six subscales of AI attitudes were not strong enough to create a reflective–reflective second-order construct (0.02 < r < 0.48), and the same was true for the subscales of teacher’s sense of calling (0.37 < r < 0.70). Therefore, in line with our hypotheses, the theoretical measurement model, empirical results, and the following theoretical considerations, we organised the AI attitudes (AAAW scales) and teacher’s sense of calling (TSCS scales) into second-order, formative–reflective factors.
We consolidated the six theoretical dimensions of the AAAW into three second-order factors. The “Supportive evaluation of AI” factor comprises the dimensions of adaptability, perceived quality, and personal utility. Collectively, these reflect the perceived benefits of AI, its inherent potential, and the value-creating capacity it offers the individual. They are logically coherent, as all three express a positive evaluation of the technology and optimistic expectations related to it. The “AI-related concerns” variable was formed from usage-related anxiety and job insecurity. The common element of these two dimensions is that they represent the risks, fears, and reservations associated with the introduction and use of AI, whether they are individual (anxiety) or existential (job insecurity) concerns. The perception of anthropomorphism constitutes a separate, third category: the “Anthropomorphic perception of AI” variable. This dimension is unique as it does not focus directly on the utility or dangers of AI, but rather on the specific perception of the extent to which the technology exhibits human-like characteristics. This type of perception may activate different psychological mechanisms than purely positive or negative evaluations, thus warranting separate treatment. In the absence of empirical evidence and a theoretical rationale, we do not assume that these three aggregated attitude dimensions form a single, third-order “general AI attitude” construct.
We consolidated the three theoretical dimensions of the TSCS (service, satisfaction, and longevity) into a second-order, formative–reflective “teacher’s sense of calling” factor. This aggregation was justified by the weaker correlation between two of the three subscales, the adequate reliability (α = 0.92) of the full scale reported during its development (Jain and Kaur 2021) and theoretical considerations. Together, the three dimensions capture the complex nature of commitment to the teaching profession: intrinsic motivation (satisfaction), social mission (service), and temporal stability (longevity). However, in the model configured this way, the three subscales of the formative teacher’s sense of calling construct lost their significance. For this variable, we therefore calculated a composite score by simply averaging the subscales (as the items of the subscales were measured on the same scale). The reliability and validity indicators of the measurement model (prior to aggregation into second-order factors) are as follows in Table 3.
According to the results in the table, the reliability and convergent validity of the measurement model are adequate for all constructs. After establishing the internal reliability of the measurement models, we examined discriminant validity using the HTMT criterion.
According to the results in Table 4, the criteria for discriminant validity were met with a single exception. The HTMT value of 1 between the service and satisfaction subscales indicates a lack of discriminant validity. This empirical finding supports the prior methodological decision to treat the subscales of teacher’s sense of calling not as separate predictors in the structural model, but as a single, aggregated composite score. Table 5 presents the construct validity of AI adoption.
Based on the table, the AI adoption construct is valid. We note that the “presentation preparation” indicator contributed to the construct with a significant but negative weight (β = −0.312, p < 0.001), which suggests the presence of a suppressor effect. Potential explanations for this are discussed in the Discussion section.

4.2. Structural Model

In addition to testing hypotheses H1–H9, we controlled for the effects of the following demographic variables in the structural model: gender, age, years in the profession, and educational attainment. The results indicate that several control variables have a significant effect. Age has a significant, weak negative effect on AI adoption (β = −0.127, p = 0.038), suggesting that older teachers adopt AI technologies to a lesser extent in practice. Age also has a marginally significant positive relationship with the supportive evaluation of AI (β = 0.116, p = 0.055), suggesting that while older teachers are more cautious in their use of AI, they may be more inclined to evaluate its potential positively. The direct effect of years in the profession was not significant for any dependent variable, which can be attributed to the strong correlation (multicollinearity) between age and work experience. Women reported higher levels of AI-related concerns (β = 0.335, p < 0.001) and a stronger anthropomorphic perception of AI (β = 0.186, p = 0.020) compared to men. Higher educational attainment shows a significant negative relationship with both AI-related concerns (β = −0.120, p = 0.002) and the anthropomorphic perception of AI (β = −0.142, p < 0.001), meaning that more highly qualified teachers are less concerned and less inclined to personify the technology. We tested the hypotheses of our theoretical model while statistically controlling for these demographic effects. The results are as follows in Table 6.
Religiosity is negatively associated with the supportive evaluation of AI and with anthropomorphic perception, while it is not associated with AI-related concerns. In contrast, a teacher’s sense of calling is associated only with concerns, and this relationship is positive. AI adoption is positively influenced by supportive evaluation and negatively by concerns, as expected. The anthropomorphic perception of AI is not associated with its use.
AI adoption is not directly influenced by religiosity or a sense of calling; however, two significant indirect paths were identified. Religiosity negatively affects adoption via a lower supportive evaluation, while a teacher’s sense of calling exerts its negative effect on AI adoption via higher concerns.
The model explained 1.9% of the variance in the supportive evaluation of AI, 5.7% of the variance in AI-related concerns, 4.6% of the variance in anthropomorphic AI attitudes, and 31.5% of the variance in the final dependent variable, AI adoption. The total effects of intrinsic religiosity (β = −0.063, p = 0.094) and a teacher’s sense of calling (β = 0.020, p = 0.600) on AI adoption were not significant.
Based on these results, the model has low but positive predictive power for the study’s main dependent variable, AI adoption (Q2predict = 0.033), and for the anthropomorphic perception of AI (Q2predict = 0.011). This is supported by the observation that most of the formative indicators constituting AI adoption also had positive Q2predict values. In contrast, the model did not prove to be predictive for the constructs of AI-related concerns (Q2predict = −0.015) and supportive evaluation of AI (Q2predict = −0.007). This indicates that the model cannot predict the levels of these attitude variables in a new sample more accurately than a naive benchmark based on indicator means. Overall, the model possesses out-of-sample predictive capability with respect to its main outcome variable (see Table 7).

5. Discussion

The central finding of this study is that neither religiosity nor a teacher’s sense of calling exerted a significant direct effect on AI adoption. Although statistically significant indirect effects were found, it is important to emphasise that these factors explained only a minimal proportion (1.9–5.7%) of the variance in AI attitudes, suggesting their role in AI adoption is more limited than hypothesised. As was theoretically evident, a supportive evaluation of AI positively influenced adoption, while AI-related concerns had a negative influence, thereby confirming the fundamental findings of technology acceptance research (Ogbu Eke 2024; Viberg et al. 2024). In terms of effect sizes, these relationships proved to be the most substantial: the supportive evaluation of AI had a medium effect, whereas the concerns had a small effect. This highlights that the primary drivers of teacher behaviour are their specific perceptions. The background of these attitudes is illuminated by the research of Galindo-Domínguez et al. (2024), who found a strong positive relationship between teachers’ general digital competence and their positive attitudes toward AI. In other words, teachers’ attitudes are founded on their existing technological knowledge.
The effect of religiosity is statistically detectable but practically marginal (f2 = 0.011). While stronger religiosity did significantly reduce both supportive evaluation and the anthropomorphic perception of AI, these effects are so small that their practical significance is questionable. Surprisingly, and contrary to the findings of Kozak and Fel (2024), religiosity did not significantly increase AI-related concerns. This may suggest that among Catholic teachers, religiosity manifests not as explicit fear but rather as a general scepticism regarding the utility of AI. Although a statistically significant mediation path was identified (β = −0.038), its effect size is negligible, which calls into question the utility of discussing a “mechanism” in the context of such a weak relationship.
In the case of a teacher’s sense of calling, a form of responsible concern was observed. While a higher sense of calling did not increase the supportive evaluation of AI, it did significantly increase related concerns. This indicates that committed teachers are not rejective but are circumspect, driven by a sense of responsibility for the quality of teaching. A practical solution to this anxiety is offered by the research of Yang et al. (2024), who demonstrated that enactive mastery experiences—successful trials reinforced by community support—increase self-efficacy and reduce anxiety related to challenges. The anxiety stemming from a sense of calling, as identified in our study, could therefore be effectively addressed with such a professional development programme. The significant mediation effect corroborates this process: a sense of calling indirectly inhibits AI adoption by increasing concerns.
The personification of AI warrants special attention. Our results show that both stronger religiosity and higher educational attainment significantly reduced the tendency to personify technology. This points to an awareness among Catholic teachers of the sharp boundary between human and machine, a distinction rooted in theological (Coghill 2023; Kaunda 2024) and philosophical (Fernandez-Borsot 2023) foundations. However, one of the most interesting findings of this study is that the anthropomorphic perception in itself did not significantly influence AI adoption. This means that although teachers reflect on the human-like nature of the technology, their decisions about its practical use are guided not by this reflection but by more pragmatic considerations: perceived utility and pedagogical concerns. It appears that teachers are able to separate their (potentially unsettling) thoughts about the technology’s ontological status from their practical evaluation of it as a teaching tool. Even if someone perceives AI as more human-like, this does not automatically lead to its use (out of curiosity) or its rejection (out of fear). The decision is determined by a cost-benefit analysis of pedagogical utility and controllable risks, which can be interpreted as a sign of professional pragmatism.
From a methodological perspective, it is worth highlighting the negative weight of the presentation preparation indicator in the adoption model, which suggests a suppressor effect. Presentation preparation appears to be a lower-threshold activity characteristic of users who do not yet employ more complex AI functions, and it therefore takes on a negative weight relative to higher-level, more integrated adoption once the other indicators are controlled. Furthermore, according to the PLSpredict results, our model has low but non-zero predictive power for AI adoption. This means the model not only works within the sample but is also capable, to a limited extent, of generalising to new data, which strengthens its practical relevance. Our model explained 31.5% of the variance in AI adoption, which can be considered an adequate level of explanatory power for an exploratory model that includes value- and identity-based factors. Our research outlines a complex model wherein AI adoption is primarily influenced by attitudes toward the technology, which are, in turn, shaped by teachers’ demographic backgrounds, digital competence, religious convictions, and sense of calling.
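To make the suppressor interpretation concrete, the following simulation sketch (hypothetical data generated for illustration, not the study sample) shows how an indicator that is almost uncorrelated with the outcome can still receive a negative weight once stronger, overlapping indicators are included:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

general_use = rng.normal(size=n)   # low-threshold "uses AI at all" tendency
deep_use = rng.normal(size=n)      # integrated, higher-level use

advice = general_use + deep_use + rng.normal(scale=0.5, size=n)
search = general_use + deep_use + rng.normal(scale=0.5, size=n)
presentation = general_use + rng.normal(scale=0.5, size=n)  # low-threshold only
adoption = deep_use + rng.normal(scale=0.5, size=n)         # integrated adoption

# Bivariate: presentation preparation is almost unrelated to integrated adoption...
print("r(presentation, adoption) =",
      round(np.corrcoef(presentation, adoption)[0, 1], 3))

# ...yet in a joint model it receives a negative weight, because it removes
# ("suppresses") the general-use variance shared with the other indicators.
X = np.column_stack([np.ones(n), advice, search, presentation])
weights, *_ = np.linalg.lstsq(X, adoption, rcond=None)
print("weights (advice, search, presentation):", np.round(weights[1:], 3))
```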
Our results suggest that the roles of religiosity and a sense of calling in AI adoption are marginal. Practical interventions should therefore focus not on these factors, but on the stronger predictors identified in the literature, such as digital competence (Galindo-Domínguez et al. 2024) and direct experience (Yang et al. 2024). While it may be worthwhile to consider value-based concerns in Catholic schools, addressing these is likely not the most effective path to promoting AI integration.
Gender exerted a strong effect on concerns (β = 0.335), while the sense of calling showed a weaker but still significant effect (β = 0.074). When interpreting this difference, however, it should be borne in mind that religiosity and the sense of calling are correlated, so multicollinearity may make their effects appear weaker than those of independent demographic variables. Furthermore, male teachers in the sample were significantly more highly educated than female teachers (r = −0.105, p = 0.006), which partly explains the gender differences, as higher education itself reduced concerns. Women’s heightened concerns seem interpretable in the context of the teaching profession. Teaching is a typically female-dominated profession in which teacher–student relationships and human interactions play a central role. It is conceivable that female teachers more strongly perceive artificial intelligence as a threat to these deeply human, relational aspects.
Related to this is the stronger degree of anthropomorphic perception among women (β = 0.186). Attributing human characteristics to technology, such as the ability to think or to have intentions, can also make it more frightening, as it creates the impression of an unpredictable, seemingly autonomous entity. Educational qualification shows the opposite tendency: teachers with higher qualifications showed lower values for both concerns (β = −0.120) and anthropomorphic perception (β = −0.142). Several factors may underlie this. Higher-level education develops critical thinking and information literacy, which can help teachers evaluate news and discourse about artificial intelligence more objectively, setting aside exaggeration and alarmism. It is also likely that those with higher qualifications better understand the operating principles of the technology, which reduces the uncertainty and fear arising from the “black box” phenomenon. Those who know a tool’s limitations and operational logic are less inclined to mystify it or attribute human characteristics to it.
Regarding the items deleted from the reflective latent variables for inadequate outer loadings: in the case of the first three deleted items, the absolute formulations (“comprehensive,” “always,” and “correct”) may have elicited ambivalent reactions from respondents who know or assume the limitations and potential errors of artificial intelligence. The weak loading of the statement on the substitutability of teaching work suggests that teachers view their profession as a complex, relationship-based human activity that cannot simply be replaced by technology. The low loading of the anthropomorphic attribution (“desires”) is also understandable in a Catholic teacher population that clearly distinguishes between human and machine attributes.
Although not a central focus of the study, the analysis of the measurement model yielded an important ancillary finding for the population studied. Two dimensions of a teacher’s sense of calling, service and satisfaction, were so closely interrelated in the teachers’ responses that the statistical model could not treat them as two distinct constructs (HTMT = 1). This indicates that for Catholic teachers, these two concepts are practically indistinguishable, suggesting that in this specific, value-driven context, the very act of service is the source of professional satisfaction. The effect of this internally robust variable, however, almost entirely vanished when confronted with the pragmatic questions of AI adoption (or, more precisely, it did not engage with them).

5.1. Model Generalisability and Broader Theoretical Implications

Our results indicate that the role of value-based factors in technology acceptance is far more limited than theoretical considerations might suggest. Although we demonstrated that religiosity and a sense of calling influence adoption via attitudes, their practical significance is negligible (explaining 1.9–5.7% of the variance). This serves as a caution that value-based extensions of technology acceptance models should be approached with care, and in other contexts, it may be prudent to first examine the role of stronger predictors (e.g., digital competence and direct experience).
However, the pragmatic–utilitarian focus observed in AI adoption is not an inevitable endpoint but rather a manifestation of the currently dominant “sociotechnical imaginary” (Linderoth et al. 2024). This perspective creates an opportunity to move beyond a description of “what is” and to ask the normative question of “what should be”: what kind of future do we envision for AI in education, one that serves not only efficiency but also deeper human and pedagogical values? While our findings showed a marginal effect of religiosity among teachers, other research has revealed stronger, often negative, emotional and social associations between religiosity and AI (He 2024; Kozak and Fel 2024). This suggests that ignoring value-based concerns could be risky at a societal level. Consequently, teacher training and policy recommendations should not only promote adoption but also actively shape the value-driven integration of technology in education.

5.2. Practical Implications for Educational Administrators and Decision-Makers

5.2.1. For Educational Administrators and Training Providers

The most effective path to successful AI integration appears to be the development of “Artificial Intelligence Literacy” (AIL) that incorporates critical and ethical reflection, combined with opportunities for direct experience. Based on our findings, it is also crucial to address the issue of anthropomorphism, as the awareness of the boundary between human and machine remains a salient factor for teachers. Therefore, teacher-training programmes should emphasise that AI is intended not to replace the roles of teachers and students, but to extend them.
The use of language has the power to shape perspectives. The literature often refers to AI as a “crutch” or “prosthesis” (Z. Karvalics 2024), but a far more accurate and potent metaphor would be AI as an “amplifier.” It can amplify a teacher’s strengths, but also their weaknesses. If teachers possess up-to-date and accurate knowledge of AI’s capabilities and the consequences of its use, they can amplify their own strengths in the classroom. AI thus creates a duality: it can support students in becoming researchers by amplifying their curiosity—for which they must learn to ask the right questions—but it can also extinguish that curiosity with ready-made answers. In this dynamic, the teacher’s role is indispensable: by demonstrating the appropriate use of AI through their own example and instruction, they remain the director of the educational process.

5.2.2. For Technology Developers

Our results suggest that instead of value-based communication, demonstrating practical benefits and ensuring a user-friendly design will be more effective in the education sector. Trust and acceptance can be enhanced if developers communicate openly about the technology’s functionality and its limitations.

6. Limitations

Although the second phase of sampling aimed to reach the entire population, thus moving the research toward a census, the final sample was still determined by voluntary responses. Consequently, the possibility of self-selection bias exists: it cannot be ruled out that the study primarily attracted teachers who were interested in the topic or who possessed greater intrinsic motivation to complete questionnaires (which, however, can also increase the validity of the survey data). Although the 680 respondents represent approximately 30% of the population, a substantial participation rate, the sample is nevertheless not representative of all Catholic secondary school teachers in Hungary. However, with internal validity ensured, the statistical tests, including p-values, are interpretable for examining relationships within the sample, and the findings contribute analytically to the theoretical understanding of similar educational contexts and to future research.
The cross-sectional nature of the data collection limits the definitive and unambiguous establishment of causal relationships. Nevertheless, the application of SEM, which allowed for the simultaneous modelling of a theoretically grounded, complex system of structural relationships, provided deeper insights into the complexity of the phenomenon than could have been achieved with pairwise analyses.
Given the data collection method (a single-point, self-administered online questionnaire), we also identified common method bias (CMB) as a potential limitation. The measurement method itself may introduce common variance into the data, which could influence the strength and validity of the relationships detected between variables. To assess the potential impact of CMB, we employed the common latent factor (CLF) technique as a post hoc diagnostic procedure. Within this framework, we introduced a new, artificial latent factor into the PLS-SEM model, onto which all indicators from all relevant, reflectively measured constructs were loaded. This CLF was intended to represent the variance potentially arising from the common method. As the theoretically grounded paths remained essentially unchanged in strength and significance after the introduction of the CLF, we dismissed the likelihood of a significant distorting effect from CMB.
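The decision rule applied here can be stated simply: if adding the CLF leaves the substantive path coefficients essentially unchanged, common method variance is unlikely to be driving the results. A minimal sketch of that comparison follows; the baseline values are taken from Table 6, while the with-CLF values and the 0.05 tolerance are hypothetical placeholders used only to illustrate the check.

```python
# Compare structural paths estimated without and with a common latent factor (CLF).
paths_baseline = {
    "Religiosity -> Supportive evaluation of AI": -0.105,
    "Teacher's sense of calling -> AI-related concerns": 0.074,
    "Supportive evaluation of AI -> AI adoption": 0.362,
}
paths_with_clf = {  # hypothetical placeholders, not the study's estimates
    "Religiosity -> Supportive evaluation of AI": -0.101,
    "Teacher's sense of calling -> AI-related concerns": 0.071,
    "Supportive evaluation of AI -> AI adoption": 0.355,
}

for path, baseline in paths_baseline.items():
    delta = abs(baseline - paths_with_clf[path])
    verdict = "essentially unchanged" if delta < 0.05 else "inspect further"
    print(f"{path}: |delta| = {delta:.3f} ({verdict})")
```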
Finally, due to the specific Catholic context, the results may not be generalisable to other religious or cultural environments. Furthermore, the low proportions of variance in the attitude variables explained by our model (1.9–5.7%) suggest that the role of religiosity and a teacher’s sense of calling in AI acceptance is far more limited than was theoretically assumed.

7. Future Directions

Our exploratory, cross-sectional study can serve as a stepping stone for designing future longitudinal research that could more precisely confirm the hypothesised causal mechanisms. Furthermore, we need to understand not only responses to a specific current technology but also the more enduring psychological and sociocultural mechanisms that shape the perpetual interaction between people and technology. For such future research, we suggest exploring how fundamental values and identity-related elements influence teachers’ relationships with the continually emerging technological innovations in education. We also recommend examining the interaction effects between religiosity and a teacher’s sense of calling on AI adoption. It is conceivable that these constructs influence teachers’ technological openness not only independently but also by moderating each other’s effects. Finally, we suggest exploring the role of additional psychological resources, such as the sense of coherence (SOC) examined by Masry Herzallah and Makaldy (2025), which may facilitate teachers’ adaptation to complex technological changes, particularly in contexts where AI innovations could potentially conflict with religious value systems.
When interpreting the marginal role of religiosity and a sense of calling in AI adoption, it is crucial to consider a significant limitation of the study that stems from the narrow operational definition of the “AI adoption” construct itself. The variable measuring AI adoption comprised only five basic functions (seeking advice, searching, summarising, presentation preparation, and ChatGPT use). This level of use is unlikely to touch the deeper layers of teachers’ professional identity or personal lives. Consequently, the question arises: why should deeply rooted, value-based convictions, such as religiosity or a sense of calling, be closely related to the adoption of a technology used in practice for merely peripheral, administrative, or supplementary tasks? It is plausible that entirely different results would be obtained if the study’s focus shifted from teachers’ own limited AI use to their attitudes and responsibilities concerning student AI adoption. Religiosity and a sense of calling would likely emerge as much stronger predictors in a study examining how teachers perceive the extent and purpose of their students’ AI use in learning or even in their personal lives. A teacher’s sense of responsibility and the ethical dimensions of the profession would presumably be far more activated when the question of the technology’s impact on student development, ways of thinking, and ethical decision-making arises. Therefore, while our findings highlight the pragmatic nature of teachers’ own technological adoption, they do not invalidate the need for value-driven AI pedagogy. They merely indicate that, for now, we have only examined one side of the issue. The true value-based dilemmas and the deeper reflections arising from a sense of calling are likely to become most acute not within the teacher–machine dyad, but within the teacher–student–machine triangle.
A further outcome of the formative model of AI adoption is the significant negative weight of the presentation preparation indicator. This suppressor effect can be interpreted in several ways. On the one hand, it is possible that presentation preparation, as a low-threshold activity, is characteristic of a user group in the early stages of adoption that has yet to engage with more complex AI functions. On the other hand, it may indicate a more isolated mode of use, where the technology is not deeply integrated into the teacher’s overall pedagogical work. Future research could benefit from using qualitative methods to explore the adaptation strategies and user profiles associated with various AI applications.

8. Conclusions

The aim of this research was to examine the value-based factors underlying AI adoption—specifically, religiosity and a teacher’s sense of calling—among Catholic secondary school teachers in Hungary. By extending technology acceptance models, the study sought to determine how these deep, identity-shaping factors influence teachers’ relationships with new technologies and through what mechanisms. The first research question (Q1) focused on how religiosity and a sense of calling influence teachers’ attitudes toward AI. According to our results, stronger religiosity fosters an attitude of critical distance and scepticism: it reduces the supportive evaluation of AI and the tendency toward anthropomorphic perception. In contrast, a teacher’s sense of calling strengthens an attitude of responsible concern. Teachers with a higher sense of calling expressed significantly more concerns regarding the educational application of AI, stemming from their deep sense of responsibility for the quality of teaching and for their students. The second research question (Q2) examined the direct and indirect effects on adoption, as well as the mediating role of attitudes. One of the study’s key findings is that neither religiosity nor a teacher’s sense of calling exerted a significant direct effect on AI adoption. Their influence was realised entirely indirectly, through attitudes related to AI. The model confirmed that attitudes significantly mediate the relationship between these value-based factors and technological behaviour. Religiosity inhibits adoption by reducing the supportive evaluation of AI, while a teacher’s sense of calling exerts its indirect negative effect by increasing concerns. The study’s main conclusion is that religiosity and a sense of calling play only a marginal role in teachers’ AI adoption. Although these factors are statistically significantly related to certain attitudes, their practical effect is negligible. This suggests that to achieve successful integration, the focus should not be on these value-based factors, but on other, presumably stronger predictors.

Author Contributions

Conceptualization, M.T.; Methodology, M.T.; Software, M.T.; Validation, M.T.; Formal analysis, M.T.; Investigation, M.T.; Resources, I.P.T., E.S.-M., R.R., V.Z. and Z.S.; Data curation, M.T.; Writing—original draft, M.T.; Writing—review & editing, M.T.; Visualization, M.T.; Supervision, I.P.T., E.S.-M., R.R., K.S.-V., V.Z. and Z.S.; Project administration, Z.S.; Funding acquisition, I.P.T., E.S.-M., R.R. and Z.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by [Hungarian Diplomatic Academy] grant number [SZ19/KKMMDA/2025].

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Research Ethics Committee of Eszterházy Károly Catholic University (protocol code RK/428/2025 and date of approval 11 March 2025).

Informed Consent Statement

Informed consent was waived due to the anonymous, online nature of the survey involving adult participants, which rendered separate paper-based consent forms unnecessary. The informed consent was integrated into the online questionnaire design, where participants had to acknowledge their understanding and agreement before proceeding with the survey questions. The consent information was presented at the beginning of the questionnaire, and completion of the survey constituted informed consent, as is standard practice for anonymous online surveys. For detailed provisions, please refer to the Supplementary Statement of the Ethics Approval document from Eszterházy Károly Catholic University Research Ethics Committee, Section 4 (Proportionality Principle), which states: “Given the anonymous nature and scientific purposes of the research, paper-based consent forms would have created disproportionate administrative burden relative to the social benefit of the research”.

Data Availability Statement

Data available in a publicly accessible repository. The data presented in this study are openly available in [Dataverse] at [https://doi.org/10.7910/DVN/XIA1U6] (accessed on 2 June 2025).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ahmed, Saif, Ayesha Akter Sumi, and Norzalita Abd Aziz. 2025. Exploring Multi-Religious Perspective of Artificial Intelligence. Theology and Science 23: 104–28. [Google Scholar] [CrossRef]
  2. Al-Kfairy, Mousa. 2024. Factors Impacting the Adoption and Acceptance of ChatGPT in Educational Settings: A Narrative Review of Empirical Studies. Applied System Innovation 7: 110. [Google Scholar] [CrossRef]
  3. Alkhouri, Khader I. 2024. The Role of Artificial Intelligence in the Study of the Psychology of Religion. Religions 15: 290. [Google Scholar] [CrossRef]
  4. Allport, Gordon W., and J. Michael Ross. 1967. Personal religious orientation and prejudice. Journal of Personality and Social Psychology 5: 432–43. [Google Scholar] [CrossRef] [PubMed]
  5. Alwaqdani, Mohammed. 2025. Investigating teachers’ perceptions of artificial intelligence tools in education: Potential and difficulties. Education and Information Technologies 30: 2737–55. [Google Scholar] [CrossRef]
  6. Bakhadirov, Mukhammadfoik, Rena Alasgarova, and Jeyhun Rzayev. 2024. Factors Influencing Teachers’ Use of Artificial Intelligence for Instructional Purposes. IAFOR Journal of Education 12: 9–32. [Google Scholar] [CrossRef]
  7. Barakat, Muna, Nesreen A. Salim, and Malik Sallam. 2025. University Educators Perspectives on ChatGPT: A Technology Acceptance Model-Based Study. Open Praxis 17: 129–44. [Google Scholar] [CrossRef]
  8. Becker, Jan-Michael, Jun-Hwa Cheah, Rasoul Gholamzade, Christian M. Ringle, and Marko Sarstedt. 2023. PLS-SEM’s most wanted guidance. International Journal of Contemporary Hospitality Management 35: 321–46. [Google Scholar] [CrossRef]
  9. Bolívar-Cruz, Alicia, and Domingo Verano-Tacoronte. 2025. Is Anxiety Affecting the Adoption of ChatGPT in University Teaching? A Gender Perspective. Technology, Knowledge and Learning, 1–20. [Google Scholar] [CrossRef]
  10. Buyukyazici, Duygu, and Francesco Serti. 2024. Innovation attitudes and religiosity. Research Policy 53: 105051. [Google Scholar] [CrossRef]
  11. Caidi, Nadia, Pranay Nangia, Hugh Samson, Cansu Ekmekcioglu, and Michael Olsson. 2025. Spiritual and religious information experiences: An Annual Review of Information Science and Technology (ARIST) paper. Journal of the Association for Information Science and Technology, 24983. [Google Scholar] [CrossRef]
  12. Coghill, George M. 2023. Artificial Intelligence (and Christianity): Who? What? Where? When? Why? and How? Studies in Christian Ethics 36: 604–19. [Google Scholar] [CrossRef]
  13. Cruz, Eduardo R. 2025. Artificial Vice? Artificial Intelligence and Threats to the Self. Theology and Science 23: 249–64. [Google Scholar] [CrossRef]
  14. Dehghani, Hoora, and Amir Mashhadi. 2024. Exploring Iranian English as a foreign language teachers’ acceptance of ChatGPT in English language teaching: Extending the technology acceptance model. Education and Information Technologies 29: 19813–34. [Google Scholar] [CrossRef]
  15. Fernandez-Borsot, Gabriel. 2023. Spirituality and technology: A threefold philosophical reflection. Zygon: Journal of Religion and Science 58: 6–22. [Google Scholar] [CrossRef]
  16. Galimova, Elvira G., Alexey Yu. Oborsky, Maria A. Khvatova, Dmitry V. Astakhov, Ekaterina V. Orlova, and Irina S. Andryushchenko. 2024. Mapping the interconnections: A systematic review and network analysis of factors influencing teachers’ technology acceptance. Frontiers in Education 9: 1436724. [Google Scholar] [CrossRef]
  17. Galindo-Domínguez, Héctor, Nahia Delgado, Lucía Campo, and Daniel Losada. 2024. Relationship between teachers’ digital competence and attitudes towards artificial intelligence in education. International Journal of Educational Research 126: 102381. [Google Scholar] [CrossRef]
  18. Gökçe Tekin, Özlem. 2024. Factors Affecting Teachers’ Acceptance of Artificial Intelligence Technologies: Analyzing Teacher Perspectives with Structural Equation Modeling. Öğretim Teknolojisi ve Hayat Boyu Öğrenme Dergisi—Instructional Technology and Lifelong Learning 5: 399–420. [Google Scholar] [CrossRef]
  19. Graves, Mark. 2024. Framing Theological Investigations of Near-Future AI. Theology and Science 22: 657–61. [Google Scholar] [CrossRef]
  20. Graves, Mark. 2025. Moral Attention Is All You Need. Theology and Science 23: 241–48. [Google Scholar] [CrossRef]
  21. Hair, Joseph F., Marko Sarstedt, Christian M. Ringle, Pratyush N. Sharma, and Benjamin Dybro Liengaard. 2024a. Going beyond the untold facts in PLS–SEM and moving forward. European Journal of Marketing 58: 81–106. [Google Scholar] [CrossRef]
  22. Hair, Joseph F., Pratyush N. Sharma, Marko Sarstedt, Christian M. Ringle, and Benjamin D. Liengaard. 2024b. The shortcomings of equal weights estimation and the composite equivalence index in PLS-SEM. European Journal of Marketing 58: 30–55. [Google Scholar] [CrossRef]
  23. Hair, Joseph Franklin. 2024. Advanced Issues in Partial Least Squares Structural Equation Modeling, 2nd ed. London: SAGE. [Google Scholar]
  24. Hair, Joseph Franklin, G. Tomas M. Hult, Christian M. Ringle, and Marko Sarstedt. 2022. A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM), 3rd ed. London: SAGE. [Google Scholar]
  25. Hazzan-Bishara, Areen, Ofrit Kol, and Shalom Levy. 2025. The factors affecting teachers’ adoption of AI technologies: A unified model of external and internal determinants. Education and Information Technologies, 1–27. [Google Scholar] [CrossRef]
  26. He, Yugang. 2024. Artificial intelligence and socioeconomic forces: Transforming the landscape of religion. Humanities and Social Sciences Communications 11: 1–10. [Google Scholar] [CrossRef]
  27. Hopcan, Sinan, Gamze Türkmen, and Elif Polat. 2024. Exploring the artificial intelligence anxiety and machine learning attitudes of teacher candidates. Education and Information Technologies 29: 7281–301. [Google Scholar] [CrossRef]
  28. Jain, Puja, and Jasleen Kaur. 2021. Development and Validation of Teachers’ Sense of Calling Scale. Management and Labour Studies 46: 438–51. [Google Scholar] [CrossRef]
  29. Jiang, Yan, Jiaxin Wang, and Yibo Huang. 2024. Does religious atmosphere affect enterprise digital transformation? Evidence from China. Research in International Business and Finance 70: 102389. [Google Scholar] [CrossRef]
  30. Karamouzis, Polikarpos, and Emmanuel Fokides. 2017. Religious Perceptions and the Use of Technology: Profiling the Future Teachers of Religious Education. Journal of Religion, Media and Digital Culture 6: 23–42. [Google Scholar] [CrossRef]
  31. Kaunda, Chammah J. 2024. “Always-Already-Created”: Theology of Creation in the Context of Artificial Intelligence. Theology and Science 22: 407–24. [Google Scholar] [CrossRef]
  32. Kaya, Feridun, Fatih Aydin, Astrid Schepman, Paul Rodway, Okan Yetişensoy, and Meva Demir Kaya. 2024. The Roles of Personality Traits, AI Anxiety, and Demographic Factors in Attitudes toward Artificial Intelligence. International Journal of Human–Computer Interaction 40: 497–514. [Google Scholar] [CrossRef]
  33. Kock, Ned, and Pierre Hadaya. 2018. Minimum sample size estimation in PLS-SEM: The inverse square root and gamma-exponential methods. Information Systems Journal 28: 227–61. [Google Scholar] [CrossRef]
  34. Koenig, Harold, George R. Parkerson, and Keith G. Meador. 1997. Religion index for psychiatric research. American Journal of Psychiatry 154: 885–86. [Google Scholar] [CrossRef]
  35. Koenig, Harold G., and Arndt Büssing. 2010. The Duke University Religion Index (DUREL): A Five-Item Measure for Use in Epidemological Studies. Religions 1: 78–85. [Google Scholar] [CrossRef]
  36. Koenig, Harold G., Faten Al Zaben, Doaa Ahmed Khalifa, and Saad Al Shohaib. 2015. Measures of Religiosity. In Measures of Personality and Social Psychological Constructs. Amsterdam: Elsevier, pp. 530–61. [Google Scholar] [CrossRef]
  37. Kong, Siu Cheung, Yin Yang, and Chunyu Hou. 2024. Examining teachers’ behavioural intention of using generative artificial intelligence tools for teaching and learning based on the extended technology acceptance model. Computers and Education: Artificial Intelligence 7: 100328. [Google Scholar] [CrossRef]
  38. Kozak, Jaroslaw, and Stanislaw Fel. 2024. The Relationship between Religiosity Level and Emotional Responses to Artificial Intelligence in University Students. Religions 15: 331. [Google Scholar] [CrossRef]
  39. Lan, Yanzhen. 2024. Through tensions to identity-based motivations: Exploring teacher professional identity in Artificial Intelligence-enhanced teacher training. Teaching and Teacher Education 151: 104736. [Google Scholar] [CrossRef]
  40. Lin, Tianqi, Jianyang Zhang, and Bin Xiong. 2025. Effects of Technology Perceptions, Teacher Beliefs, and AI Literacy on AI Technology Adoption in Sustainable Mathematics Education. Sustainability 17: 3698. [Google Scholar] [CrossRef]
  41. Linderoth, Cornelia, Magnus Hultén, and Linnéa Stenliden. 2024. Competing visions of artificial intelligence in education—A heuristic analysis on sociotechnical imaginaries and problematizations in policy guidelines. Policy Futures in Education 22: 1662–78. [Google Scholar] [CrossRef]
  42. Masry Herzallah, Asmahan, and Rania Makaldy. 2025. Technological self-efficacy and sense of coherence: Key drivers in teachers’ AI acceptance and adoption. Computers and Education: Artificial Intelligence 8: 100377. [Google Scholar] [CrossRef]
  43. Masters, Kevin S. 2013. Intrinsic Religiousness (Religiosity). In Encyclopedia of Behavioral Medicine. Edited by Marc D. Gellman and J. Rick Turner. New York: Springer, pp. 1117–18. [Google Scholar] [CrossRef]
  44. Molefi, Rethabile Rosemary, Musa Adekunle Ayanwale, Lehlohonolo Kurata, and Julia Chere-Masopha. 2024. Do in-service teachers accept artificial intelligence-driven technology? The mediating role of school support and resources. Computers and Education Open 6: 100191. [Google Scholar] [CrossRef]
  45. Ogbu Eke, Eke. 2024. Assessing the readiness and attitudes of Nigerian teacher educators towards adoption of artificial intelligence in educational settings. Journal of Educational Technology and Online Learning 7: 473–87. [Google Scholar] [CrossRef]
  46. Onyeukaziri, Justin Nnaemeka. 2024. Artificial Intelligence and an Anthropological Ethics of Work: Implications on the Social Teaching of the Church. Religions 15: 623. [Google Scholar] [CrossRef]
  47. Özbek, Tugce, Christina Wekerle, and Ingo Kollar. 2024. Fostering pre-service teachers’ technology acceptance—Does the type of engagement with tool-related information matter? Education and Information Technologies 29: 6139–61. [Google Scholar] [CrossRef]
  48. Park, Jiyoung, Sang Eun Woo, and JeongJin Kim. 2024. Attitudes towards artificial intelligence at work: Scale development and validation. Journal of Occupational and Organizational Psychology 97: 920–51. [Google Scholar] [CrossRef]
  49. Rafique, Hamaad, Zia Ul Islam, and Azra Shamim. 2023. Acceptance of e-learning technology by government school teachers: Application of extended technology acceptance model. Interactive Learning Environments 32: 1–19. [Google Scholar] [CrossRef]
  50. Ringle, Christian M., Sven Wende, and Jan-Michael Becker. 2024. SmartPLS (Version 4) [Computer Software]. Available online: https://www.smartpls.com (accessed on 2 June 2025).
  51. Sanusi, Ismaila Temitayo, Musa Adekunle Ayanwale, and Adebayo Emmanuel Tolorunleke. 2024. Investigating pre-service teachers’ artificial intelligence perception from the perspective of planned behavior theory. Computers and Education: Artificial Intelligence 6: 100202. [Google Scholar] [CrossRef]
  52. Viberg, Olga, Mutlu Cukurova, Yael Feldman-Maggor, Giora Alexandron, Shizuka Shirai, Susumu Kanemune, Barbara Wasson, Cathrine Tømte, Daniel Spikol, Marcelo Milrad, and et al. 2024. What Explains Teachers’ Trust in AI in Education Across Six Countries? International Journal of Artificial Intelligence in Education, 1–29. [Google Scholar] [CrossRef]
  53. Yang, Yu-Fen, Christine Chifen Tseng, and Siao-Cing Lai. 2024. Enhancing teachers’ self-efficacy beliefs in AI-based technology integration into English speaking teaching through a professional development program. Teaching and Teacher Education 144: 104582. [Google Scholar] [CrossRef]
  54. Z. Karvalics, László. 2024. A mesterséges intelligencia mint tudáskörnyezet és tudásprotézis [Artificial intelligence as knowledge environment and knowledge prosthesis]. Educatio 33: 13–23. [Google Scholar] [CrossRef]
  55. Zhang, Chengming, Jessica Schießl, Lea Plößl, Florian Hofmann, and Michaela Gläser-Zikuda. 2023. Acceptance of artificial intelligence among pre-service teachers: A multigroup analysis. International Journal of Educational Technology in Higher Education 20: 49. [Google Scholar] [CrossRef]
Figure 1. Theoretical model of the research examining background factors of AI adoption and religiosity among Catholic secondary school teachers. Note: The figure illustrates the hypothesised direct paths and associated hypotheses (H1–H7). Hypotheses H8 and H9 refer to mediation effects through AI attitudes. The paths of demographic control variables are indicated with dotted arrows in the figure, but no hypotheses are associated with them. TSC: Teacher’s sense of calling. IR: Intrinsic religiosity.
Table 1. Characteristics of the variables included in the model.
Variable Group | Operationalisation | Measurement Model | Source
Demographic control variables | Gender, age, years in profession, educational level | Manifest | Own questionnaire items
AI attitudes | Perceived humanlikeness (four items), perceived adaptability (four items), perceived quality (five items), AI use anxiety (three items), job insecurity (four items), personal utility (four items) | Reflective | AAAW scale (Park et al. 2024)
Religiosity | Intrinsic religiosity (three items) | Reflective | Duke University Religion Index (Koenig et al. 1997)
Teacher’s sense of calling | Service (four items), satisfaction (three items), longevity (three items) | Reflective | Teacher’s Sense of Calling Scale (Jain and Kaur 2021)
AI adoption | Platform usage frequency (seven platforms), work-related usage (eight items), everyday usage (five items) | Formative | Own questionnaire items
Table 2. Descriptive statistics and VIF values of the formative indicators for AI adoption.
Group | Indicator | M | SD | VIF
Personal use | Describing the functionality of processes or tools | 1.465 | 0.862 | 2.210
Personal use | Translation to or from a foreign language | 1.882 | 1.038 | 1.531
Personal use | Automation of simple, non-academic text-based tasks | 1.285 | 0.687 | 1.669
Personal use | Entertainment and chatting | 1.259 | 0.684 | 1.469
Personal use | Seeking advice | 1.584 | 0.945 | 2.543
Workplace use | Database analysis | 1.141 | 0.501 | 1.428
Workplace use | Transcription of voice notes | 1.115 | 0.449 | 1.259
Workplace use | Image generation | 1.424 | 0.761 | 1.733
Workplace use | Information search and retrieval | 2.163 | 1.222 | 1.989
Workplace use | Text summarisation | 1.550 | 0.932 | 2.765
Workplace use | Presentation preparation | 1.347 | 0.734 | 1.734
Workplace use | Text composition | 1.684 | 1.005 | 2.556
Workplace use | Seeking advice | 1.684 | 0.999 | 2.669
Platforms | Claude | 1.016 | 0.157 | 1.215
Platforms | Copilot/Bing | 1.118 | 0.524 | 1.181
Platforms | DALL-E | 1.034 | 0.255 | 1.839
Platforms | Other | 1.241 | 0.773 | 1.376
Platforms | ChatGPT | 1.769 | 1.111 | 1.937
Platforms | Gemini/Bard | 1.159 | 0.585 | 1.250
Platforms | Midjourney | 1.012 | 0.153 | 1.591
Note: The mean values for platform usage frequency are based on a seven-point scale (1 = none, 7 = more than 2 h daily). The mean values for work-related and personal use are based on a five-point scale (1 = never, 5 = regularly). VIF = variance inflation factor (examination of multicollinearity). N = 680.
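The VIF values in Table 2 screen the formative indicators for redundancy; values well below the conventional thresholds of 5 (or the stricter 3) indicate no problematic collinearity. A minimal sketch of how such values can be computed is given below; the column names and scores are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical indicator scores on the 1-5 usage scale
df = pd.DataFrame({
    "seeking_advice":     [1, 2, 3, 2, 4, 1, 2, 3, 5, 2],
    "text_summarisation": [1, 2, 2, 2, 3, 1, 2, 3, 4, 2],
    "presentation_prep":  [1, 1, 2, 1, 2, 1, 1, 2, 3, 1],
})

X = sm.add_constant(df)  # the VIF computation needs an intercept column
vif = {col: variance_inflation_factor(X.values, i)
       for i, col in enumerate(X.columns) if col != "const"}
print({k: round(v, 2) for k, v in vif.items()})
```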
Table 3. Reliability and convergent validity indicators, and VIF values for the reflective constructs.
Variable | Item | Outer Loading | rho_a | CR (rho_c) | AVE | VIF
Satisfaction | E1 | 0.957 | 1 | 0.828 | 0.711 | 1.296
Satisfaction | E3 | 0.712 | | | | 1.296
Longevity | H1 | 0.756 | 0.987 | 0.865 | 0.683 | 1.347
Longevity | H2 | 0.935 | | | | 2.268
Longevity | H3 | 0.777 | | | | 2.064
Service | SZ1 | 0.815 | 0.875 | 0.875 | 0.638 | 1.388
Service | SZ2 | 0.716 | | | | 1.747
Service | SZ3 | 0.830 | | | | 2.217
Service | SZ4 | 0.827 | | | | 2.051
Perceived adaptability | A1 | 0.865 | 0.883 | 0.909 | 0.714 | 2.195
Perceived adaptability | A2 | 0.893 | | | | 2.730
Perceived adaptability | A3 | 0.815 | | | | 1.913
Perceived adaptability | A4 | 0.805 | | | | 1.936
Job insecurity | M1 | 0.869 | 0.809 | 0.877 | 0.704 | 1.874
Job insecurity | M3 | 0.884 | | | | 1.946
Job insecurity | M4 | 0.759 | | | | 1.440
Perceived humanlikeness | EM2 | 0.737 | 0.825 | 0.857 | 0.667 | 1.529
Perceived humanlikeness | EM3 | 0.873 | | | | 1.489
Perceived humanlikeness | EM4 | 0.834 | | | | 1.631
Personal utility | HA1 | 0.722 | 0.809 | 0.852 | 0.591 | 1.384
Personal utility | HA2 | 0.852 | | | | 1.643
Personal utility | HA3 | 0.713 | | | | 1.450
Personal utility | HA4 | 0.780 | | | | 1.602
Perceived quality | MI1 | 0.887 | 0.905 | 0.839 | 0.636 | 1.321
Perceived quality | MI2 | 0.727 | | | | 1.633
Perceived quality | MI3 | 0.768 | | | | 1.668
AI use anxiety | SZO1 | 0.740 | 0.827 | 0.875 | 0.637 | 1.429
AI use anxiety | SZO2 | 0.856 | | | | 1.839
AI use anxiety | SZO3 | 0.779 | | | | 1.824
AI use anxiety | SZO4 | 0.811 | | | | 1.929
Intrinsic religiosity | BV1 | 0.866 | 0.941 | 0.940 | 0.840 | 2.322
Intrinsic religiosity | BV2 | 0.937 | | | | 3.820
Intrinsic religiosity | BV3 | 0.944 | | | | 3.616
Note: The Item column contains the codes for the individual items of the original questionnaire scales.
Table 4. Discriminant validity based on the HTMT analysis.
Variables | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
Perceived humanlikeness (1) |
Perceived adaptability (2) | 0.133
Job insecurity (3) | 0.447 | 0.113
Personal utility (4) | 0.379 | 0.503 | 0.124
Perceived quality (5) | 0.157 | 0.561 | 0.058 | 0.578
AI use anxiety (6) | 0.351 | 0.064 | 0.730 | 0.160 | 0.122
Satisfaction (7) | 0.086 | 0.165 | 0.075 | 0.056 | 0.108 | 0.092
Longevity (8) | 0.046 | 0.058 | 0.066 | 0.045 | 0.048 | 0.078 | 0.643
Service (9) | 0.074 | 0.078 | 0.054 | 0.055 | 0.062 | 0.125 | 1 | 0.671
Intrinsic religiosity (10) | 0.070 | 0.071 | 0.057 | 0.088 | 0.124 | 0.055 | 0.106 | 0.125 | 0.196
Table 5. Construct validity of the AI adoption construct.
Item | Weight | p | VIF
Personal use: seeking advice | 0.437 | <0.001 | 1.365
Workplace use: information search and retrieval | 0.307 | 0.001 | 1.667
Workplace use: text summarisation | 0.296 | <0.001 | 2.012
Workplace use: presentation preparation | −0.312 | <0.001 | 1.492
Platform use: ChatGPT | 0.386 | <0.001 | 1.629
Table 6. Path coefficients of the structural model and hypothesis testing.
Hypothesis | Relationship | β | p | Decision | f2
H1a | Religiosity → Supportive evaluation of AI | −0.105 | 0.006 | Supported | 0.011
H1b | Religiosity → AI-related concerns | 0.011 | 0.790 | Rejected | 0.000
H1c | Religiosity → Anthropomorphic perception of AI | −0.074 | 0.026 | Supported | 0.006
H2a | Teacher’s sense of calling → Supportive evaluation of AI | 0.029 | 0.526 | Rejected | 0.001
H2b | Teacher’s sense of calling → AI-related concerns | 0.074 | 0.033 | Supported | 0.006
H2c | Teacher’s sense of calling → Anthropomorphic perception of AI | −0.031 | 0.442 | Rejected | 0.001
H3 | Supportive evaluation of AI → AI adoption | 0.362 | <0.001 | Supported | 0.171
H4 | AI-related concerns → AI adoption | −0.326 | <0.001 | Supported | 0.133
H5 | Anthropomorphic perception of AI → AI adoption | −0.049 | 0.167 | Rejected | 0.003
H6 | Religiosity → AI adoption | −0.025 | 0.435 | Rejected | 0.001
H7 | Teacher’s sense of calling → AI adoption | 0.032 | 0.326 | Rejected | 0.001
H8a | Religiosity → Supportive evaluation of AI → AI adoption | −0.038 | 0.007 | Supported | -
H8b | Religiosity → AI-related concerns → AI adoption | −0.004 | 0.791 | Rejected | -
H8c | Religiosity → Anthropomorphic perception of AI → AI adoption | 0.004 | 0.305 | Rejected | -
H9a | Teacher’s sense of calling → Supportive evaluation of AI → AI adoption | 0.010 | 0.532 | Rejected | -
H9b | Teacher’s sense of calling → AI-related concerns → AI adoption | −0.024 | 0.039 | Supported | -
H9c | Teacher’s sense of calling → Anthropomorphic perception of AI → AI adoption | 0.002 | 0.568 | Rejected | -
Note: The research hypotheses included both non-directional and directional predictions. However, the software used does not permit a choice between two-tailed and one-tailed tests on a per-hypothesis basis, instead calculating p-values uniformly as either two-tailed or one-tailed. We halved the two-tailed p-values in cases where the direction of the path coefficient matched theoretical expectations. If the path’s direction was contrary to the expected direction, the result was not considered significant, regardless of the p-value’s magnitude. It should be noted that halving is an approximate method, as the bootstrap distribution is not necessarily symmetrical. No non-significant effects became significant as a result of this procedure.
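The procedure described in this note can be stated compactly: the two-tailed bootstrap p-value is halved only when the estimated path points in the hypothesised direction; otherwise the hypothesis is treated as unsupported. A minimal sketch, using a hypothetical two-tailed input value:

```python
def directional_p(beta: float, p_two_tailed: float, expected_sign: int) -> float:
    """One-tailed p-value for a directional hypothesis, approximated by halving.

    Halving assumes a roughly symmetric bootstrap distribution. If the estimate
    points against the hypothesised direction, the hypothesis is not supported
    regardless of the two-tailed p-value.
    """
    if expected_sign * beta > 0:
        return p_two_tailed / 2
    return 1.0

# Illustration: a negative expected path with a hypothetical two-tailed p-value
print(directional_p(beta=-0.105, p_two_tailed=0.012, expected_sign=-1))  # -> 0.006
```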
Table 7. Results of the PLSpredict procedure for assessing out-of-sample predictive capability.
Construct/Indicator | Q2predict | RMSE | MAE
Formative items of AI adoption
Personal use: seeking advice | 0.035 | 0.930 | 0.706
Work-related use: information search and retrieval | −0.003 | 1.225 | 1.001
Work-related use: text summarisation | 0.001 | 0.932 | 0.720
Work-related use: presentation preparation | −0.008 | 0.738 | 0.535
Platform use: ChatGPT | 0.011 | 1.106 | 0.836
Endogenous constructs
Anthropomorphic perception of AI | 0.011 | 0.998 | 0.854
AI adoption | 0.033 | 0.990 | 0.764
AI-related concerns | −0.015 | 1.010 | 0.828
Supportive evaluation of AI | −0.007 | 1.006 | 0.811
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
