Examining these authors’ work more closely, we see that Alkhouri (2024) analyses how AI-based applications (e.g., chatbots and virtual reality) might transform religious practices, spiritual seeking, and community experiences. The author raises several ethical concerns: the authenticity of AI-simulated religious experiences, the potential for algorithmic bias in religious contexts, and the challenges of protecting data and preserving genuine human connections.
Fernandez-Borsot (2023) analyses two key phenomena through the philosophical categories of transcendence, immanence, and relationality. On one hand, he examines how contemplation might be marginalised by technology’s action-oriented “enframing” logic; on the other, he demonstrates how technology—as an extension of the human body—can lead to “dissociation” from the lived body. According to the author, spirituality, particularly body-based practices, offers an integrative counterbalance to these tendencies.
Graves (2024) proposes theological frameworks for evaluating near-future AI developments, with particular attention to how these developments might affect concepts of human suffering and flourishing (eudaimonia) and the theological interpretation of human uniqueness (imago Dei) in light of ever-advancing AI capabilities, for instance, in the realms of consciousness or the capacity for a relationship with God.
These approaches confirm and complement our central assertion: AI is not merely a technical tool but a complex phenomenon that raises spiritual and moral questions. In religious contexts, these questions are especially pronounced and can give rise to scepticism, caution, or outright rejection of AI technologies, potentially leading to value conflicts. We hypothesise that certain religious values—such as service to the community, care, or justice—may also facilitate the responsible and reflective use of AI.
We therefore place the educational integration of AI within a broader, value-based framework. Our aim is to examine comprehensively the psychological, spiritual, professional, and demographic factors influencing teacher technology acceptance. Supplementing classic technology acceptance models with the factors of religiosity and sense of calling creates an opportunity for a deeper understanding of teacher attitudes toward AI. This allows for the formulation of research and practical implications that can support the responsible, context-sensitive integration of AI into public education—particularly within faith-based institutions.
Our study seeks to explore the role of religiosity, a teacher’s sense of calling, and certain demographic variables in shaping attitudes and adoption intentions regarding the use of AI in schools. Given the complexity of this objective, we will examine the direct and indirect effects of these factors within an extended technology acceptance model (i.e., an empirical measurement model). We seek to answer the following questions: (Q1) How do religiosity and a teacher’s sense of calling influence teacher attitudes toward AI? (Q2) What direct and indirect effects do religiosity and a sense of calling exert on AI adoption, and are these effects mediated by attitudes toward AI? Our study’s added value lies in its examination of AI acceptance among Catholic secondary school teachers from a value- and identity-based perspective.
Hypotheses
Numerous ethical and theological studies (Fernandez-Borsot 2023; Graves 2024; Onyeukaziri 2024) point to value conflicts that hinder technology acceptance. These include the questioning of human dignity and uniqueness by technology, the ontological implications of artificial intelligence, and tensions between the theological conception of work and the technological worldview.
Buyukyazici and Serti (2024) found a negative correlation between religiosity and positive attitudes toward innovation. According to research by Kozak and Fel (2024), AI elicits stronger negative emotional reactions (e.g., fear and anger) from religious individuals. However, the issue is not so straightforward: research by Karamouzis and Fokides (2017) among teacher candidates highlights the complexity of the relationship between religiosity and technological attitudes. Furthermore, teacher self-efficacy can also influence one’s relationship with AI. The findings of Viberg et al. (2024) and Gökçe Tekin (2024) suggest that teachers with high self-efficacy perceive more advantages in AI, approach it with less concern, adopt it with greater trust, and apply it more readily. Correspondingly, Zhang et al. (2023) associate low self-efficacy with negative outcomes, such as stress and technological dependency.
Based on the literature, attitudes toward AI do not appear as a unified construct but draw on multiple emotional and cognitive sources. According to Kozak and Fel (2024), religiosity affects emotional reactions to AI differently: it increases fear and anger while reducing sadness and disgust. Hopcan et al. (2024) state that AI-related anxiety comprises dimensions such as concerns about learning difficulties, job security, and social impacts. According to Buyukyazici and Serti (2024), religiosity negatively influences certain dimensions of innovation attitudes; however, the magnitude and significance of the effect differ across attitude elements, suggesting that the impact of religiosity is not uniform but differentiated by attitude type. Based on these findings, we can reasonably assume that religiosity influences different dimensions of AI attitudes in different ways. In formulating our hypotheses, we considered that in religious thought, the soul, consciousness, and personality are of divine origin, and humans hold a special place in creation (imago Dei). Consequently, religious individuals may draw a sharper line between human and machine and may be more cautious about personifying artificial entities: attributing human traits to a machine could contradict the conviction that true consciousness and personality are exclusively divine gifts.
We define a teacher’s sense of calling as a deep, intrinsic motivation that frames teaching not merely as a job but as a meaningful mission that serves the common good and provides personal satisfaction. This construct extends beyond self-efficacy (a belief in one’s ability for a specific task) to encompass dimensions of service, responsibility, satisfaction, and long-term commitment (Jain and Kaur 2021). Although prior research suggests that self-efficacy can reduce technology-related concerns (Gökçe Tekin 2024; Viberg et al. 2024), other components of a sense of calling—such as a sense of responsibility toward students and a commitment to teaching quality—may act in the opposite direction. Teachers with a strong sense of calling might worry more about the educational applications of AI precisely because they feel a deeper responsibility for teaching and are more cautious about technologies that could affect the quality of the teacher–student relationship or the personal nature of education. The various components of a teacher’s sense of calling may therefore indirectly influence aspects of AI attitudes in different, even contradictory, ways.
Some technology acceptance models also posit a direct path from self-efficacy to usage intention. In Gökçe Tekin’s (2024) model, the relationship between self-efficacy and usage intention (the study’s H4) proved significant, indicating that direct effects may exist beyond attitudes. According to Viberg et al. (2024), teachers’ self-efficacy toward AI-based educational technology (AI-EdTech) and their AI understanding affect trust in technology not directly but through perceived benefits and concerns; these factors operate through attitudes, not independently of them. However, cultural values—such as uncertainty avoidance, collectivism, and masculinity—also directly influence trust, regardless of the benefits or concerns teachers perceive regarding AI-EdTech. Since certain deeper, identity-level factors—such as cultural values—can affect trust in and adoption of technology independently of attitudes, it is conceivable that religiosity and a teacher’s sense of calling may play a similar role in the context of AI application.
According to Buyukyazici and Serti (2024), religiosity has a negative effect on innovation attitudes (which are important antecedents of technology acceptance). Based on their findings, religiosity reinforces attitudes that hinder innovation. Karamouzis and Fokides (2017) examined the religious views, technology use, and attitudes toward technology of Greek theology and teacher-training students. A cluster analysis identified three distinct profiles: (1) religious students with positive technological attitudes; (2) non-religious students also with positive technological attitudes; and (3) moderately religious students with negative technological attitudes. The authors found significant associations between religiosity and attitude toward technology, particularly concerning the roles of gender and age. Religiosity and technology acceptance were not mutually exclusive: theology students were more religious than teacher-training students and viewed the compatibility of religion and technology more positively. The contradictory findings of these two studies (Buyukyazici and Serti 2024; Karamouzis and Fokides 2017) highlight that the relationship between religiosity and technology attitudes is complex and context-dependent. While Buyukyazici and Serti (2024) found a generally negative relationship, the results of Karamouzis and Fokides (2017) present a more nuanced picture, suggesting that the type of religiosity, educational background, and other demographic factors may influence this association.

While the literature shows a contradictory relationship between religiosity and technology, our research focuses on a specific population representing a more traditional value system. Given the central role of human dignity and the order of creation in Catholic teaching, we assume that in this context religiosity strengthens caution and more critical attitudes toward technology, especially AI that imitates human cognition, consistent with the findings of Buyukyazici and Serti (2024). Furthermore, we assume that the effect of religiosity is not limited to shaping attitudes. Religious conviction provides a deeper, identity-level framework that can directly influence behavioural adoption, independent of explicit attitudes. Unconscious norms or values (e.g., caution with novelty and respect for the created order) may also be at play, directly inhibiting the adoption of AI technologies. Therefore, we hypothesise partial mediation. Similarly, a teacher’s sense of calling may not act solely through attitudes. A teacher with a strong sense of calling might be more directly motivated to try out and adopt new technologies (such as AI) out of an internal commitment to professional development, even if their initial attitudes are ambivalent. This proactive, identity-driven behaviour also implies a direct effect.
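To make the hypothesised structure explicit, the partial mediation can be written in the standard product-of-coefficients form (a sketch using generic symbols rather than the study’s variable names), where $X$ denotes the predictor (religiosity or sense of calling), $M$ an attitude dimension, $Y$ AI adoption, $a$ and $b$ the indirect paths, and $c'$ the direct path:

$$M = aX + e_1, \qquad Y = c'X + bM + e_2$$

The indirect effect is the product $ab$, the total effect is $c' + ab$, and partial mediation corresponds to both the direct path $c'$ and the indirect path $ab$ differing from zero.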
Perceived usefulness predicts teachers’ AI usage intention. In Viberg et al.’s (2024) research, trust played a significant role, whilst in Gökçe Tekin’s (2024) study, self-efficacy and anxiety were significant. Ogbu Eke (2024) and Alwaqdani (2025) confirmed that positive perceptions—such as adaptability or personal usefulness—increase artificial intelligence acceptance. Hopcan et al. (2024) demonstrated a relationship between AI-related anxiety and attitudes toward machine learning: teacher candidates who viewed machine learning technology more positively were less concerned about potential job loss caused by artificial intelligence. Although, to our knowledge, the effect of anthropomorphic perception has not been empirically examined, this factor may also influence the extent of AI application.
Since religiosity demonstrably affects attitudes toward artificial intelligence and innovation (Buyukyazici and Serti 2024; Karamouzis and Fokides 2017; Kozak and Fel 2024), and since these attitudes are closely related to technology acceptance and usage intention (Gökçe Tekin 2024; Ogbu Eke 2024; Viberg et al. 2024), we assume that religiosity also indirectly influences AI adoption. Besides the direct effect, we also wish to examine whether an indirect effect exists in parallel. The indirect effect is supported by Buyukyazici and Serti’s (2024) empirical results: the authors found that the effect of religiosity operates through innovation attitudes. The technology acceptance model literature also points out that psychological factors such as self-efficacy—a component of a teacher’s sense of calling (Lan 2024)—primarily shape attitudes through perceived usefulness and perceived ease of use. These attitudes are, in turn, direct antecedents of usage intention and technological adoption (Gökçe Tekin 2024; Viberg et al. 2024). Based on all this, it can be assumed that AI attitudes mediate the relationships of both religiosity and sense of calling with AI adoption.
Studies examining teachers’ acceptance of artificial intelligence (Hopcan et al. 2024; Viberg et al. 2024) do not focus on the direct effect of demographic factors on religiosity. In the literature examining the acceptance of AI technologies, psychological (e.g., self-efficacy and anxiety) and cultural variables (e.g., Hofstede’s cultural dimensions) typically receive greater emphasis. The literature presents a complex picture regarding the effect of demographic factors (age, gender, and educational level) on artificial intelligence acceptance. Some studies find that the direct effect of these factors on acceptance or trust is limited, especially when models also consider other, stronger psychological predictors (Gökçe Tekin 2024; Viberg et al. 2024). Other studies point out that certain demographic characteristics—such as gender—may play a moderating role (Zhang et al. 2023) or that their relationship is mediated by other factors (e.g., attitudes and anxiety) (Hopcan et al. 2024); nationality as a demographic variable directly influenced ChatGPT adoption amongst university educators and correlated with related attitudes (Barakat et al. 2025). However, other research has demonstrated a direct, negative effect of age on teachers’ AI adoption (Bakhadirov et al. 2024). This suggests that demographic characteristics, particularly age and professional experience, may continue to be relevant factors in artificial intelligence acceptance, even if their effect is context-dependent or operates in interaction with other variables. Whilst Hopcan et al. (2024) identified gender and age differences in certain dimensions of AI-related attitudes, Bolívar-Cruz and Verano-Tacoronte (2025) found amongst Spanish university teachers that different factors (including various forms of anxiety) influence ChatGPT acceptance in men and women, highlighting the importance of a gender perspective. Recent research shows that the role of demographic factors in artificial intelligence acceptance remains uncertain (Al-Kfairy 2024).

Given the complex and context-dependent nature of these effects, in the present research we incorporate demographic variables (gender, age, years in profession, and educational level) as control variables in our model. Our aim is to examine the effects of the main psychological constructs under investigation—intrinsic religiosity, teacher’s sense of calling, and AI attitudes—on AI adoption whilst statistically filtering out the potential distorting influence of these background factors. Accordingly, since our research focuses on the mechanisms through which intrinsic religiosity and teacher’s sense of calling affect AI adoption, both through AI attitudes and directly, we do not formulate separate hypotheses regarding the specific predictive or mediated effects of demographic variables on AI attitudes or AI adoption. Finally, we note that although the relationship between attitudes and behaviour may seem theoretically self-evident, empirical testing is warranted due to the well-documented attitude–behaviour gap: AI adoption is also a function of structural, competence-based, and situational factors that may influence actual use independently of attitudes.
Based on the above, our research tests hypotheses that are partly grounded in the literature and partly exploratory in nature. For the exploratory hypotheses, we formulated non-directional assumptions: for H2b, because the self-efficacy component of the sense of calling may reduce concerns whilst the sense-of-responsibility component may increase them; for H2c and H5, because anthropomorphic perception may both encourage use (curiosity) and inhibit it (fear); for H8c, due to the exploratory nature of H5; for H9b, due to uncertainty about the H2b effect; and for H9c, due to the exploratory nature of H2c and H5:
H1: Religiosity influences teachers’ AI-related attitudes and perceptions.
H1a: Stronger religiosity decreases the supportive evaluation of AI.
H1b: Stronger religiosity increases AI-related concerns.
H1c: Stronger religiosity decreases the anthropomorphic perception of AI.
H2: A teacher’s sense of calling influences teachers’ AI-related attitudes.
H2a: A higher sense of calling increases the supportive evaluation of AI.
H2b: A higher sense of calling influences AI-related concerns.
H2c: A higher sense of calling influences the anthropomorphic perception of AI.
H3: A supportive evaluation of AI positively influences AI adoption.
H4: AI-related concerns negatively influence AI adoption.
H5: The anthropomorphic perception of AI influences AI adoption.
H6: Religiosity directly and negatively influences AI adoption.
H7: A teacher’s sense of calling directly and positively influences AI adoption.
H8: AI attitudes and perceptions mediate the relationship between religiosity and AI adoption.
H8a: The supportive evaluation of AI negatively mediates the relationship between religiosity and AI adoption.
H8b: AI-related concerns negatively mediate the relationship between religiosity and AI adoption.
H8c: The anthropomorphic perception of AI mediates the relationship between religiosity and AI adoption.
H9: AI attitudes and perceptions mediate the relationship between a teacher’s sense of calling and AI adoption.
H9a: The supportive evaluation of AI positively mediates the relationship between a teacher’s sense of calling and AI adoption.
H9b: AI-related concerns mediate the relationship between a teacher’s sense of calling and AI adoption.
H9c: The anthropomorphic perception of AI mediates the relationship between a teacher’s sense of calling and AI adoption.
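For readers who wish to see how these hypotheses translate into a testable path model, the sketch below specifies the hypothesised structure in lavaan-style syntax using the Python package semopy. This is a minimal illustration rather than our analysis script: the variable names (religiosity, calling, support, concern, anthro, adoption, and the demographic controls) are hypothetical placeholders for observed composite scores, not the study’s actual measures.

```python
# A minimal sketch of the hypothesised path model (H1-H9) in lavaan-style
# syntax via semopy. All variable names are illustrative placeholders for
# observed composite scores (controls coded numerically); the study's
# actual measures and estimator may differ.
import pandas as pd
import semopy

MODEL_DESC = """
# H1a-H1c, H2a-H2c: religiosity and sense of calling -> attitude dimensions
support ~ religiosity + calling
concern ~ religiosity + calling
anthro ~ religiosity + calling
# H3-H7: attitude dimensions and direct paths -> AI adoption, with controls
adoption ~ support + concern + anthro + religiosity + calling + gender + age + tenure + education
"""

def fit_path_model(data: pd.DataFrame) -> pd.DataFrame:
    """Fit the hypothesised model and return the parameter estimates."""
    model = semopy.Model(MODEL_DESC)
    model.fit(data)           # expects one column per variable named above
    return model.inspect()    # path coefficients with SEs and p-values
```

Under this specification, the mediation hypotheses (H8a–H9c) would be evaluated from the products of the corresponding path estimates, ideally with bootstrapped confidence intervals, while H6 and H7 correspond to the direct religiosity and calling paths to adoption remaining non-zero alongside the attitude paths.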