Article

Artificial Intelligence: A New Challenge for Human Understanding, Christian Education, and the Pastoral Activity of the Churches

1
Institute of Theological Sciences, John Paul II Catholic University of Lublin, 20-950 Lublin, Poland
2
Institute of Pedagogy, John Paul II Catholic University of Lublin, 20-950 Lublin, Poland
3
Institute of Philosophy, John Paul II Catholic University of Lublin, 20-950 Lublin, Poland
*
Author to whom correspondence should be addressed.
Religions 2025, 16(8), 948; https://doi.org/10.3390/rel16080948
Submission received: 9 June 2025 / Revised: 4 July 2025 / Accepted: 14 July 2025 / Published: 22 July 2025

Abstract

Artificial intelligence (AI) is one of the most influential and rapidly developing phenomena of our time. New fields of study are being created at universities, and managers are constantly introducing new AI solutions for business management, marketing, and advertising new products. Unfortunately, AI is also used to promote dangerous political parties and ideologies. The research problem that is the focus of this work is expressed in the following question: How does the symbiotic relationship between artificial and natural intelligence manifest across three dimensions of human experience—philosophical understanding, educational practice, and pastoral care—and what hermeneutical, phenomenological, and critical realist insights can illuminate both the promises and perils of this emerging co-evolution? In order to address this issue, an interdisciplinary research team was established. This team comprised a philosopher, an educator, and a pastoral theologian. This study is grounded in a critical–hermeneutic meta-analysis of the existing literature, ecclesial documents, and empirical investigations on AI. The results of scientific research allow for a broader insight into the impact of AI on humans and on personal relationships in Christian communities. The authors are concerned not only with providing an in-depth understanding of the issue but also with taking into account the ecumenical perspective of religious, social, and cultural education of contemporary Christians. Our analysis reveals that cultivating a healthy symbiosis between artificial and natural intelligence requires specific competencies and ethical frameworks. We therefore conclude with practical recommendations for Christian formation that neither uncritically embrace nor fearfully reject AI, but rather foster wise discernment for navigating this unprecedented co-evolutionary moment in human history.

1. Introduction

Artificial intelligence (AI) is not an exclusive domain of IT departments and technology companies. We are now faced with the fact that it has become firmly established in human reality. AI has been predicted and heralded by futurologists and filmmakers for years (Reed 2021, p. 3). Today, AI has become not only a technological tool, but also a complex epistemological construct that is radically transforming our understanding of human cognition. Its development is characterised by unprecedented dynamics—from simple machine learning algorithms to advanced neural networks capable of autonomous information processing, content generation, and decision-making under conditions of high uncertainty. Technological AI systems, such as deep learning neural networks or transformer models, demonstrate capabilities that a decade ago seemed to be the exclusive domain of human intelligence—understanding context, creative problem-solving, and adaptation to changing conditions. AI is not a complex machine designed to create an artificial human being but a multidimensional reality that already has a significant impact on human existence, and this impact is constantly growing.
This article argues that the relationship between artificial and natural intelligence is best understood not through the binary lens of threat versus ally, but as an emerging symbiosis requiring careful philosophical, pedagogical, and pastoral discernment. We propose that this symbiotic relationship manifests uniquely across different domains of human experience yet exhibits patterns that interdisciplinary analysis can illuminate.
The research problem explores how the symbiotic relationship between artificial and natural intelligence manifests across three interconnected dimensions: How does this symbiosis reshape our philosophical understanding of human nature? What forms does it take in educational practice, particularly Christian formation? How does it transform pastoral care while preserving its essentially human character? These questions demand not merely cataloguing benefits and risks, but understanding the deeper patterns of co-evolution between human and artificial intelligence.
The origins of AI can be traced back to 1956, when the American computer scientist J. McCarthy (McCarthy et al. 1955) organised a summer workshop at Dartmouth College to explore the problem of AI, which he defined as “making a machine behave in ways that would be considered intelligent if a human did it” (AeN 2025, no. 7). Attempts to create intelligent machines have a much longer history, dating back at least to the 17th century with Leibniz’s mechanical calculator and continuing through Babbage’s Analytical Engine to Turing’s theoretical foundations; yet, to date, no generally accepted definition of AI has been developed in the scientific field. The term itself has already entered everyday language and “encompasses a variety of sciences, theories and techniques aimed at making machines reproduce or imitate the cognitive abilities of human beings in their actions” (Francis 2023, no. 2). AI is a broad field of knowledge that includes, among other things, neural networks, robotics, and the creation of models of intelligent behaviour and computer programs that simulate these behaviours, including machine learning, deep learning, and reinforcement learning (Michałowski 2018, p. 12). AI is knowledge combined with technology that searches for solution techniques and their formal formulation, allowing for the machine implementation of complex problems, i.e., those solved by humans through their intellect but for which no precise and general solution algorithm can be provided (Kalisz 2020, p. 157).
However, AI is not just technology, as it has the “ambition” to learn to be ens humanum, and over time may even surpass humans in the search for better solutions to problems, higher skills, efficiency, productivity, etc. A. Oberg (2023) of Kochi University in Japan asks bold questions about whether AI has a self and whether it could create something like a human soul in the future. Even more interesting is the question of whether AI will ever learn responsibility for its decisions, whether it will be able to be held accountable in court, suffer punishment, undergo a process of reconciliation and a kind of conversion—in short, whether it can now or in the future be assigned the inalienable personal dignity that every human being possesses. If AI were to learn ethical responsibility, within what axiological system would it operate, and who would decide which ethical algorithms were implemented in it?
In recent years, much research has been conducted on the ethical risks and challenges associated with AI (Gaudet 2022; Graves 2022; Vicini 2022; Oppong 2024). However, the discussion on this topic should be ongoing and even more intense. It is therefore worth emphasising that, following the creation of nuclear weapons by humans and their mass production, there is now a very real danger of “reducing our planet to a pile of ashes” (Francis 2024). This danger is all the greater because generative AI (GAI), based on algorithms provided by humans, can decide to use these weapons without human intervention (Francis 2023). According to Pope Francis, “no machine should ever decide to take the life of a human being” (Francis 2024). In this context, the words of Pope John Paul II (John Paul II 2000) appear prophetically relevant: “Humanity now has tools of unprecedented power: we can turn this world into a garden or reduce it to a heap of rubble.”
Singler (2023) of the University of Zurich asked a bold question: Is AI capable of creating a new religion? The author ultimately refrained from giving a clear answer to the question posed in the title of her publication, but she suggested the scope of future interaction between AI and religion. Such questions strongly encroach on the realm of morality and, as such, pose challenges for the understanding of human beings, Christian formation, and the pastoral ministry of the Churches. Can any Church be held responsible for the ethical formation of AI, or is it, as always, responsible only for the ethical formation of its followers? Undoubtedly, AI has its merits that can also be used in the pastoral ministry of the Churches, especially in religious education. In his first address to the College of Cardinals, Pope Leo XIV emphasised that the Church’s current task is to respond to the next industrial revolution and the development of artificial intelligence, which pose new challenges in terms of defending human dignity, justice, and work (Leo XIV 2025).
The article proceeds in three movements, each employing our methodological framework to examine a distinct domain: first, a phenomenological exploration of how AI challenges and enriches natural human intelligence; second, a hermeneutical interpretation of AI’s transformation of Christian education; and third, a critical realist analysis of AI’s role in pastoral ministry. Through this systematic investigation, we trace the contours of an emerging symbiosis that demands both theological wisdom and practical prudence.

2. Materials and Methods

In this study, an interdisciplinary team reflects on the challenges AI poses for scientific research in philosophy, pedagogy, and pastoral theology. The question is: What hopes and fears does AI generate in the process of understanding the human essence, human education, and the pastoral activity of Christian Churches? The initial assumption of this study is to present AI objectively, without excessive criticism, and without ignoring the real threats. The primary goal of analyses undertaken following the methodology adopted in philosophy, Christian pedagogy, and pastoral theology is to highlight the positive elements of AI as well as the threats and challenges that may arise in using AI resources. The ultimate goal of this interdisciplinary research is to develop practical recommendations for anthropological, pedagogical, and pastoral education in the family and school environment, as well as in the media and religious communities. The authors are concerned not only with providing an in-depth understanding of the relationship between humans and AI but also with taking into account the ecumenical perspective of the religious and spiritual education of Christians.
Our interdisciplinary approach employs a tripartite methodological framework particularly suited to examining human–AI symbiosis: phenomenological vigilance to bracket preconceptions and attending to lived experiences; dual hermeneutics (of trust and suspicion) to interpret both promises and ideological distortions; and critical realism to identify underlying generative mechanisms. This methodological synthesis allows us to move beyond surface-level analysis toward understanding the deep structures shaping human–AI co-evolution.
This study is grounded in a critical–hermeneutic meta-analysis of the existing literature, ecclesial documents, and empirical investigations on AI. It is anchored in three complementary philosophical traditions. First, phenomenological vigilance—drawing on Husserl’s epoché and eidetic reduction and on his call to “return to the things themselves”, later taken up by Heidegger—enables us to bracket both technological enthusiasm and alarmism so as to attend to the concrete experiences of AI in human self-understanding, pedagogy, and pastoral praxis. Second, dialogical hermeneutics in the sense of Gadamer and Ricoeur treats every text as a conversational partner; the ensuing fusion of horizons permits the discernment of both the promises (a hermeneutics of trust) and the anxieties (a hermeneutics of suspicion) inscribed in contemporary AI discourse. Third, a critical realist triangulation—inspired by Bhaskar and Archer—confronts these interpretive insights with empirical findings, institutional practices, and the often hidden generative mechanisms that shape philosophical, pedagogical, and ecclesial realities, thereby guarding against purely idealist readings.
The convergence of these three strands ushers us into what Ricoeur terms a “second naïveté”: a posture that neither idolises nor demonises artificial intelligence but instead enables a renewed discovery of its deeper significance. This stance, describable as an illumined Christian anthropology, identifies those facets of AI that foster authentic personal and communal flourishing within the contexts under consideration while simultaneously acknowledging the technology’s limitations. In this way, the multilayered, interdisciplinary framework provides a conceptual map that reveals the philosophical, pedagogical, and theological dimensions of artificial intelligence worthy of further exploration and secures the epistemic transparency on which the article’s subsequent sections and conclusions rest.
This methodological synthesis enables us to trace a central thesis throughout our analysis: rather than viewing AI as either threat or tool, we must recognise an emerging symbiotic relationship between artificial and natural intelligence. This symbiosis manifests as a process of mutual adaptation and co-evolution—what Kelly (2010, pp. 11–12) describes as the intertwining of human and technological development within the “technium”. While this relationship manifests differently across philosophical, pedagogical, and pastoral domains, it exhibits consistent patterns that our three-fold hermeneutical approach can illuminate. Each methodological lens contributes uniquely: phenomenology reveals the lived experience of human–AI interaction; hermeneutics uncovers both authentic possibilities and ideological distortions; critical realism identifies the causal mechanisms shaping this co-evolution.

3. Findings of Analysis and Discussion

3.1. From Competition to Symbiosis: A Phenomenological Exploration of Natural and Artificial Intelligence

The dynamic development of AI systems raises fundamental questions about AI’s relationship with natural human intelligence. Adopting phenomenological vigilance, we first bracket common assumptions about AI in order to examine this relationship as it actually unfolds. This epoché—suspending both techno-optimism and alarmism—allows us to attend to the concrete phenomena of how artificial and natural intelligence interact in lived experience. The dichotomy of “threat or ally” simplifies a complex problem yet provides a useful conceptual framework for analysing this phenomenon. As Floridi (2019) emphasises in his analysis of the ethical aspects of AI, AI systems do not function in a vacuum; they are complex socio-technical entities whose impact on humans is deeply rooted in the social, cultural, and ethical contexts of their implementation.
AI today primarily encompasses systems with so-called narrow specialisation, built on machine learning and deep neural networks. Unlike general human intelligence, current AI focuses on selected tasks—from text processing and generation, through image recognition, to logical analysis—achieving superhuman proficiency without a general understanding of the world. Machine learning allows such systems to extract patterns from huge data sets and improve their performance through trial and error, whereas humans often learn from far fewer examples, using their innate abilities to generalise and contextualise. For example, algorithms can now detect correlations in medical or financial data with remarkable precision; humans cannot match their speed of calculation, but they have common sense, intuition, and awareness of purpose. Despite impressive advances, digital intelligence still operates on different principles than biological intelligence; it is essentially a different “operating system”, based on digital processing, without the consciousness and self-awareness of the human mind (Korteling et al. 2021).
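To make this contrast concrete, consider a minimal sketch (in Python, assuming the scikit-learn library is installed) of such a “narrow” system: a small neural network that learns to recognise handwritten digits from labelled examples. Its competence is purely statistical and confined to a single task.

```python
# A minimal sketch of "narrow" machine learning, assuming scikit-learn.
# The model extracts statistical patterns from labelled examples and achieves
# high accuracy on this one task, without any general understanding of the world.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 8x8 pixel images of handwritten digits, labels 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)  # iterative pattern extraction from the data

print(f"Digit-recognition accuracy: {clf.score(X_test, y_test):.2f}")
# High accuracy here implies nothing about common sense, intuition, or
# awareness of purpose: the same model is useless outside this narrow task.
```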
However, it is worth noting that the latest direction in the development of artificial intelligence is multimodal systems, which integrate different modalities (text, image, sound, and video) within a unified computing architecture. Large language models (LLMs), such as GPT-4 and Claude, have evolved from purely textual models towards multimodal capabilities, enabling them to interpret and generate content in various formats. This integration of modalities allows AI systems to create richer representations of knowledge, similar to the human way of processing information from multiple senses simultaneously. Models such as GPT-4.5 and Gemini can analyse images, understand their content, and answer questions about visual elements while maintaining their linguistic abilities. This enables them to perform tasks that require multimodal coordination, such as describing photos, solving mathematical problems presented graphically, or interpreting charts and diagrams. Despite these advanced capabilities, even multimodal LLMs still rely on statistical relationships between data rather than a deep understanding of reality comparable to that of humans.
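The following schematic sketch illustrates the multimodal pattern described above, in which heterogeneous inputs are packed into a single request for one unified architecture; the request format, function names, and model name are hypothetical placeholders, not any specific vendor’s API.

```python
# A schematic illustration of multimodal input: text and an image combined in
# one request. All names here are hypothetical placeholders for illustration.
from dataclasses import dataclass

@dataclass
class Part:
    kind: str     # "text" or "image"
    payload: str  # prompt text, or a path/URL to an image file

def build_multimodal_request(parts: list[Part]) -> dict:
    """Pack heterogeneous inputs into one request for a unified architecture."""
    return {
        "model": "hypothetical-multimodal-model",
        "inputs": [{"type": p.kind, "data": p.payload} for p in parts],
    }

request = build_multimodal_request([
    Part("image", "chart.png"),
    Part("text", "What trend does this chart show?"),
])
print(request)
# Inside the model, both modalities are mapped into a shared representation
# space; the resulting "understanding" remains statistical pattern association.
```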
A critical realist analysis exposes the hidden generative mechanisms beneath multimodal AI’s apparent sophistication. While surface capabilities impress, the underlying architecture remains fundamentally statistical, lacking the ontological depth of human meaning-making. These systems operate through what Bhaskar would call the “empirical” level of pattern matching, without access to the “real” structures of semantic understanding that emerge from embodied, cultural, and spiritual human existence.
Natural language processing (NLP) is an area where AI has come closer to human capabilities, although qualitative differences are still evident. Modern language models (such as GPT) can generate texts that resemble human style and fluency, and speech recognition systems enable natural interaction with machines. AI can handle machine translation, summarise articles, and answer questions in a fraction of a second—tasks that require years of language learning and reading comprehension for humans. However, despite their impressive linguistic fluency, these models do not “understand” the meaning of words in the way humans do; they operate on statistical relationships in their training data. Bender and her co-authors aptly described large language models as “stochastic parrots” that mimic human speech without true semantic awareness (Vinay 2024).
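The “stochastic parrot” idea can be made visible at miniature scale. The toy sketch below, using only the Python standard library, builds a bigram model that generates plausible-sounding word sequences purely from co-occurrence statistics; large language models are vastly more sophisticated, but the principle of prediction without semantics is the same.

```python
# A toy "stochastic parrot": a bigram model that generates fluent-looking text
# purely from co-occurrence statistics, with no access to meaning whatsoever.
import random
from collections import defaultdict

corpus = ("in the beginning was the word and the word was with god "
          "and the word was god").split()

bigrams = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)           # record which word follows which

random.seed(1)
word, output = "the", ["the"]
for _ in range(8):
    word = random.choice(bigrams[word]) # sample the next word statistically
    output.append(word)

print(" ".join(output))
# The continuation sounds grammatical because it mirrors the training data,
# yet the program has no semantic awareness of a single word it emits.
```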
Applying Ricoeur’s hermeneutics of suspicion to this phenomenon, we must interrogate the corporate rhetoric surrounding “emotional AI”. When technology companies claim their systems can “understand” emotions, what ideological and commercial interests drive such assertions? The marketing language of “empathy” and “emotional intelligence” in AI systems masks a fundamental commodification of human affect, reducing complex emotional experiences to data patterns that can be monetised. This critical hermeneutical lens reveals how the discourse of AI “understanding” serves to normalise surveillance capitalism while obscuring the absence of genuine intersubjective encounter.
In practice, this means that AI can generate a convincing-sounding statement or response, but it does so without reference to actual experience or deep reflection; when the answer is correct, it is sometimes a “lucky guess” resulting from averaging patterns in the data (Vinay 2024). Humans, on the other hand, use language embedded in a cultural context, with an understanding of intent and pragmatics. As a result, AI’s understanding of language is superficial; for example, a system may not grasp sarcasm, ambiguity, or situational context that are obvious to humans. This illustrates a general principle: AI matches or surpasses us in narrowly defined tasks (e.g., grammatical correctness, translation speed) but lacks the full human linguistic competence rooted in experience and understanding of the world. However, this is changing very dynamically and radically.
When it comes to emotions and emotional intelligence, the difference between AI and humans is particularly clear. Human thinking is intertwined with feelings; emotions influence decision-making, motivation, and creativity (in line with hypotheses such as Damasio’s somatic marker hypothesis on the role of feelings in cognitive processes). Artificial intelligence, by its very nature, does not have emotional states or self-awareness, but it can simulate them to a certain extent. The field of affective computing, also known as emotion AI, is developing rapidly and aims to equip machines with the ability to recognise and respond to users’ emotions (Somers 2019). Algorithms are being developed that can assess a person’s mood based on facial expressions, tone of voice, or writing style and adjust the system’s response accordingly. For example, a voice assistant detects frustration in the user’s voice and apologises for the inconvenience, or a care robot imitates a caring tone towards an elderly person. Work is also underway on models capable of generating artificial emotions in virtual agents to make interactions with them more natural (Somers 2019). However, this “empathy” of artificial intelligence is purely simulated: a computer does not feel joy, fear, or empathy; it only imitates external manifestations of emotions based on recognised patterns. For example, a chatbot can be programmed to respond compassionately to a user’s post about feeling unwell, but, in reality, it has no internal emotional experience. This is a key difference: humans have a subjective sphere of experience, and this phenomenal consciousness gives emotions authenticity and influences cognitive processes; machines remain at the level of advanced role-playing. Nevertheless, at a functional level, certain elements of emotional intelligence can be implemented in AI, which creates opportunities for cooperation. For example, AI systems can relieve humans of the burden of routine responses to emotional states (crisis helplines operated by bots, etc.). However, this support will always lack the “human factor”.
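The simulated character of such “empathy” can be made explicit in a deliberately transparent toy sketch of affective computing: the keyword rules and scripted phrases below are invented for illustration, and real systems use trained models, but the underlying principle of pattern recognition without felt experience is the same.

```python
# A transparent toy version of "emotion AI": the bot classifies the user's mood
# from surface keywords and adjusts its reply. The rules and phrases are
# illustrative only; no inner emotional state exists anywhere in this program.
NEGATIVE_CUES = {"frustrated", "angry", "sad", "unwell", "hopeless"}

def detect_mood(message: str) -> str:
    """Crude pattern matching standing in for a trained emotion classifier."""
    words = set(message.lower().split())
    return "distressed" if words & NEGATIVE_CUES else "neutral"

def respond(message: str) -> str:
    if detect_mood(message) == "distressed":
        # Simulated compassion: a scripted template, not a felt emotion.
        return "I'm sorry you're feeling this way. I'm here to help."
    return "How can I assist you today?"

print(respond("I feel sad and unwell today"))
```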
Both the similarities and differences between AI and human intelligence create unique opportunities for synergy. Areas where machines excel—fast calculations, flawless memory, consistency of performance—can complement what we as humans are weaker at, and conversely, human abilities to cope with ambiguous circumstances and understand other people can compensate for the limitations of AI (Korteling et al. 2021). This idea is confirmed in practice by human–machine hybrid systems, exemplified by “centaurs” in chess, where human–program pairs competed against both unassisted humans and pure machines. It turned out that a well-coordinated human–AI duo was able to achieve better results than a grandmaster playing alone or the strongest computer (Cassidy 2014). The “centaur” combined human intuition and creativity with the computational power of the program, achieving a playing style that surpassed either of them individually (Cassidy 2014).
Through phenomenological vigilance, bracketing both technological triumphalism and scepticism, we can attend to the lived experience of human–AI collaboration. Players report a distinctive phenomenology: neither purely human nor machine cognition, but an emergent hybrid consciousness where intuitive leaps and computational precision interweave. This epoché reveals that the centaur experience transcends simple tool use, manifesting as a genuinely new mode of cognitive being-in-the-world.
This example illustrates a broader principle: properly designed collaboration with AI can enhance human effectiveness. In practice, we already see this today in many fields. For instance, doctors use AI systems to detect subtle changes in medical imaging (X-rays, ultrasounds, etc.), which increases the accuracy of diagnoses; data analysts use algorithms to sift through large data sets so that they can focus on interpreting the results. Synergy is also evident in language interaction: translation tools suggest translations of sentences to humans, speeding up the translator’s work, although they still require human vigilance in terms of meaning and style. The boundaries between “natural” and “artificial” intelligence are becoming increasingly blurred as human and machine cognitive systems interact and complement each other, creating a more efficient whole. However, to fully exploit this potential, a deep understanding of the strengths and weaknesses of both, and of the consequences of transferring certain cognitive functions to machines, is necessary.
The “threat or ally” dichotomy gives way to a more complex vision of co-evolution or symbiosis between natural and artificial intelligence. Kelly (2010) proposes the concept of “symbiotechnogenesis”, a process of mutual influence and adaptation between humans and technology. In this view, artificial intelligence is neither an autonomous threat nor a neutral tool, but an active factor in a dynamic socio-technological system whose shape depends on conscious human decisions and values. Rahwan et al. (2019) postulate the need to develop a new discipline—“machine behaviour”—to study human–AI interactions and their consequences for cognitive, social, and cultural processes. Understanding these complex interactions is a prerequisite for designing AI systems that support, rather than replace, natural human intelligence.
Thus, our phenomenological analysis reveals that the relationship between AI and human intelligence is neither one of replacement nor mere tool use but an emerging symbiosis where each form of intelligence can enhance the other within properly understood limits.

3.2. Cultivating Digital Wisdom: Hermeneutical Perspectives on AI in Christian Education

Christian education is based on philosophical and theological anthropology, which takes into account the ontological, ethical–axiological, sociocultural, and theological dimensions of the person. Shifting to a hermeneutical lens, we now interpret how AI reshapes educational horizons within Christian pedagogy. This hermeneutical turn allows us to discern both the promises (hermeneutics of trust) and the perils (hermeneutics of suspicion) that AI brings to education grounded in such an anthropology. Christian pedagogy derives specific goals, principles, and methods of education from human nature and the message of the Gospel. Christian education aims to form mature individuals in the image of Christ, which requires introducing them to the mystery of salvation as it is realised in a specific religious culture. It is necessary to initiate them into the life of a religious and sacramental community, which helps to overcome obstacles to the full development of the person and facilitates the discovery of mature religiosity. Christian education takes into account the moral dimension of life, helps to form conscience, and teaches responsibility for guiding personal conduct, which is linked to religious motivation. It values the word in the process of education (Rynio 2010) and the role of the media in evangelisation, and encourages apostolate understood not as proselytism but as a joyful witness to the gifts of God received in Holy Baptism; it also presents social life as dialogue with everyone and the pursuit of the common good of humanity, which is to lead to true unity and peace in the world (Kiciński 2016, p. 1381).
Civilisational progress is the basis for changes in the life of every human being, regardless of their origin, skin colour, cultural or religious affiliation. The development of information, telecommunications, and multimedia technologies represents a considerable advance in the history of human civilisation. The fourth industrial revolution (Schwab 2016) has rapidly moved us from an industrial society to an information society where AI plays a leading role. Information is becoming the basis not only for the efficient functioning of all types of institutions but also of the education process. The implementation of AI in the education system, including religious education, is met with both enthusiasm and concern. On the one hand, the possibilities offered by AI in the education process seem almost limitless, promising, among other things, personalised teaching, optimisation of teaching processes, and support in diagnosing and developing students’ skills. On the other hand, questions arise about data security, ethics in the use of AI, and the potential replacement of teachers by “intelligent” machines. Undoubtedly, thanks to its ability to analyse large data resources, AI is able to adapt teaching materials to the individual needs and learning pace of each student. AI also offers opportunities to support teachers by automating time-consuming tasks, allowing them to focus on more valuable aspects of teaching, such as developing students’ creativity and critical thinking skills. However, there is a risk of over-reliance on technology at the expense of direct human interaction, which is crucial for the emotional and social development of students. It is vital that AI technology supports teachers rather than seeks to replace them. It is important to strike a balance between innovation and the humanistic dimension of education.
One of the current challenges in Christian education is AI technology, which Pope Francis (2024) sees as a tool that is “both fascinating and dangerous”. Thus, when does the use of AI in education become an opportunity and when does it become a threat? Generally speaking, AI is an opportunity for Christian education when those who use AI respect the basic principles, functions, goals, forms, and methods of education. They are aware that AI is a technology that offers enormous educational opportunities. However, they know its development and application raise legal, ethical, and pedagogical dilemmas. They are convinced that many controversial issues require the creation of ethical and legal norms and the subsequent development of appropriate pedagogical criteria. Therefore, it should not be forgotten that research on AI should involve not only computer scientists and engineers, but also experts in law, ethics, philosophy, pedagogy, and catechetics.
The use of AI-based technologies in education requires teachers, especially religious education teachers, to consider who human beings are; why they are who they are; who they can become; who they should become; and how they can be helped to achieve this. During his speech at UNESCO headquarters in Paris on 2 June 1980, John Paul II said the following:
Education is essentially about becoming more human, about being more and more “being” rather than simply “having” more and more, and consequently, through everything one “has” and “possesses,” being able to “be” more fully human. To this end, man must know how to “be more” not only “with others” but also “for others”. Education is fundamental to the formation of interpersonal and social relationships.
(John Paul II 1980, no. 11)
Therefore, in addition to specialist pedagogical knowledge, teaching skills, the ability to create teaching aids, and proficiency with computer and Internet tools, teachers should understand information and communication technologies, the mechanisms of human development and perception, the information society, and the risks associated with the irresponsible use of artificial intelligence. Christian teachers and educators should also be familiar with the basic criteria for adapting artificial intelligence to the educational process in general and to Christian education in particular.
Among the risks associated with the use of AI in education, which can be successfully applied to Christian education, Charchuła (2024, pp. 79–87) lists concerns about student privacy first. In his opinion, “the personal data of minors is more susceptible to being used for purposes other than those declared, which may make them victims of various types of manipulation” (p. 84). According to Charchuła,
there are concerns that biases hidden in new AI applications will not help to ensure high-quality inclusive education for all. AI algorithms operate on data from specific individuals, which may lead to these systems applying biased or discriminatory criteria. As such, their use may replicate existing biases, maintaining or increasing gaps that already exist in education.
(p. 85)
In the context of using AI technology in education, problems related to educational equality within and between countries may also arise, caused by huge differences in the economic status of families and their children in rich and poor countries. Charchuła also states:
there are also challenges and dangers associated with the conditions of interaction that AI generates between students. These are primarily aspects that are significantly influenced by technology. Furthermore, the widespread perception of robots with human abilities, often publicised by the media, reinforces the belief that, as in other sectors of social life, machines can automate tasks that are the responsibility of teachers.
(p. 85)
However, given the specific nature of the role of teachers, especially in religious education, there is currently no possibility of them being completely replaced by AI systems.
Jeziorański (2024, pp. 141–53) is concerned not only with the effective but also the appropriate use of AI in the educational process. Both the identified pedagogical criteria for adapting AI and the conclusions drawn from them can be successfully applied to Christian education. Jeziorański, pointing to four main dimensions of education (descriptive, exploratory, optative, and normative), identifies the corresponding pedagogical criteria for adapting AI: (a) the criterion of humanistic irreducibility; (b) the criterion of teleological difference; (c) the criterion of the irreplaceability of the person in educational activity; and (d) the criterion of educational problematisation (pp. 143–44). Within these criteria, Jeziorański arrived at the following conclusions on the adaptation of AI to the educational process:
The description and explanation of the human phenomenon in pedagogy should not be limited to empirical data; the image of a human being generated solely on the basis of empirical data—in accordance with Popper’s assumptions—should be treated as “irrevocably hypothetical” and having only temporary value. (…) Educators may use AI in selecting educational goals, but the final decision rests with the educator; the selection of educational goals is linked to value judgements, specific provisions and socially confirmed ideals. (…) Education in the strict sense is an interpersonal activity; responsibility for the choice of educational means rests with the educator. (…) Confronting the pupil with problematic situations is desirable from an educational point of view; individualisation of problematic situations
(pp. 151–52)
Jeziorański’s theses mentioned above can also be applied to Christian education open to technological innovations. The author rightly recommends that the catalogue of criteria and conclusions should not be closed, but open to further exploration. Therefore, this catalogue should include specific competences of Christian education teachers. When using new AI technologies, which should be seen as an opportunity to make the teaching process more attractive, it is crucial for teachers to continuously acquire methodological competences, with particular emphasis on practical IT, multimedia, and social skills.
At the current stage of civilisation, it is important not to prohibit baptised people from using AI technologies rationally and safely but to teach them how to use them properly, eliminating the risks they entail (Francis 2023, 2024). According to a document of the Holy See, AI should not be seen as an artificial form of the human being but only as a human product (AeN 2025, no. 3). Taking into account the praxeological dimension of Christian education, it should be remembered that it is the person who educates, not the tool they use. Therefore, when treating AI solely as a tool, one must remember the irreplaceability of the person in educational activities. This is confirmed by the experience of past generations who, without knowing AI, were raised in the Christian spirit and were usually able to make good choices even in extreme situations. Christian education, a natural expression of humanity, without violating what is natural in a human being or who a human being is, presents them with a living and non-utopian image of what they can become (Rynio 2004). This is possible when education is related to God and understood as “the shaping of the human person towards his ultimate goal, and at the same time for the good of the communities of which he is a member and in whose duties he will participate when he grows up” (Vatican Council II 1965, no. 1).
The hermeneutical examination of Christian education in the AI era thus confirms our thesis: rather than viewing AI as threat or saviour, we must cultivate a symbiotic relationship that preserves human dignity while embracing technological enhancement of pedagogical practice.

3.3. The Limits of Silicon Souls: A Critical Realist Analysis of AI in the Pastoral Activities of Churches

Through critical realist triangulation of empirical findings and theological reflection, we now examine the relationship between AI and pastoral ministry. This approach enables us to move beyond surface phenomena to identify the underlying generative mechanisms—technological, social, and spiritual—that shape how AI transforms pastoral care. To proceed systematically, we must first adopt specific definitions of these two realities. AI has already been defined in the introduction to this article. On the other hand, the concept of “pastoral ministry” has been defined differently in various Christian traditions. According to the Lexikon für Theologie und Kirche, the term “pastoral” means the full range of ecclesial activity in connection with the mission entrusted to the Church by Christ to be a sign of salvation for the world, to make God present in the world, and to ensure that all people attain unity in Christ and that human society is transformed into the Kingdom of God (Müller 1998, p. 1434). According to Catholic theologians in Poland, it is accepted that “pastoral ministry is the organised activity of the Church which realises Christ’s saving work in the service of humanity through the proclamation of the Word of God, liturgy, pastoral ministry and the witness of Christian life” (Kamiński and Przygoda 2006, p. 201).
In Protestant traditions, the term “pastoral care” means the ministry of healing human souls, aimed at healing, sustaining, guiding, and reconciling people in difficult situations whose problems arise in the context of ultimate meanings and concerns (Clebsch and Jaekle 1983, p. 4). The North American definition contained in the 1990 Dictionary of Pastoral Care and Counselling is universal and encompasses more faith traditions than just Christianity:
Pastoral care is considered to be any form of personal ministry to individuals and to family and community relationships by persons (ordained or lay) and by their communities of faith who understand and direct their caring efforts from a theological perspective rooted in the tradition of faith.
Pastoral care should be distinguished from professional pastoral counselling, which means “a specialised form of ministry characterised by a deliberate agreement between a pastoral caregiver and a person or family seeking help, usually involving a series of pre-arranged counselling sessions” (Hunter 1995).
According to Dreyer (2019, p. 2), we live in the age of homo digitalis, which challenges theologians to reimagine the Church and develop an ecclesiology that would help the Church to reintegrate itself into society at the beginning of the third millennium. This requires the development of a contextual and practical ecclesiology that is adequate to global cultural changes and capable of applying at least some of the achievements of the digital revolution. The main challenge facing practical theology and the Church is the reinterpretation of traditional categories. Louw (2017, p. 8) proposes focusing on issues of everyday life, which can be called “operational ecclesiology”, instead of the traditional clerical paradigm, denominational divisions, and selective morality. He suggests that the way forward will be determined by the ability of theology and the Church to support “reflective spirituality.” This will be a huge challenge for traditional ministry and for being the Church. Theology in the 21st century must become fides quaerens vivendi—faith seeking a way to live authentically in the presence of God. In operational ecclesiology, the emphasis shifts from “the splendour and glory of the cathedral to the audience of the marketplace—public spaces as locus theologicus” (Louw 2017, p. 8). Louw is undoubtedly right about the direction of change in ecclesiology, but many details still need to be worked out. The dynamics of cultural change in the era of the fourth industrial revolution require adequate changes in ecclesiology. We have reached a point in human development where each generation of Christians is now forced to develop their own existential ecclesiology thanks to the creativity of theologians, probably enhanced by AI technology. However, for the Church’s mission in the world to be successful, this new theological theory must be translated into pastoral practice.
Research conducted on a small sample of Polish Catholic priests confirmed that there are both supporters and sceptics of the use of AI technology in pastoral care (Ignatowski et al. 2024). Supporters see opportunities for the use of AI primarily in the sphere of religious information and education. Sceptics, on the other hand, warn against religious misinformation found in online resources, leading to many religious errors, as well as against the dehumanisation of interpersonal relationships resulting from the abuse of digital tools. Based on this work alone, it is clear that AI technologies offer certain opportunities for use in pastoral care, but they also have clear limitations and may even pose certain threats to individuals and society. Let us begin by showing the positive ways in which AI can be used in pastoral care.
The first promising area of AI application in pastoral care is the use of AI technology to collect and process a global theological knowledge base accessible to millions of users around the world in their natural languages. According to Romanian researchers (Necula and Dumulescu 2024, pp. 49–50), the use of digital technologies to collect theological knowledge resources can be successfully applied to pastoral diagnosis or to recognising biblical or post-patristic patterns. AI technology can help Christians see themselves better, communicate their religious needs more accurately, and improve communication systems in parishes or communities, ultimately translating into better pastoral care. The fundamental question is: What resources should be used to build a global theological knowledge base? It seems evident that such a database cannot be built from publicly available Internet resources: there is too much false information, too many false narratives, and too many malicious comments about religion, Churches, and even God Himself. What, then, remains to be used? We already have well-verified databases of scientific theology. For example, the ATLA databases of the American Theological Library Association contain abstracts of monographs and theological articles from around the world, most of which are freely available. Another example of a peer-reviewed database is Scopus, which has an AI tool that draws only on its own resources. Currently, these databases are mainly used by researchers, but in the near future, they may become a source of knowledge for religious education teachers and their students, as well as for seekers of truth about God and His plans for humanity. With the use of AI technology, this knowledge can be conveyed to people in various forms, e.g., through pastoral carebots.
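A minimal sketch (in Python, assuming the scikit-learn library) of how such a carebot might be grounded in a curated, verified corpus rather than the open Internet is given below; the three “abstracts” are invented placeholders standing in for a real peer-reviewed database such as the ones named above.

```python
# A minimal retrieval sketch, assuming scikit-learn: answers are grounded in a
# small curated corpus of verified abstracts. The abstracts below are invented
# placeholders, not entries from any real database.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

curated_abstracts = [
    "A study of the theology of hope in twentieth-century Protestant thought.",
    "Patristic patterns of pastoral care and their contemporary reception.",
    "Catechesis and digital media: opportunities for religious education.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(curated_abstracts)

def retrieve(question: str) -> str:
    """Return the most relevant verified abstract for a user's question."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_matrix)[0]
    return curated_abstracts[scores.argmax()]

print(retrieve("How did the Church Fathers understand pastoral care?"))
# Restricting retrieval to a verified corpus limits the religious
# misinformation risk that sceptics raise about open web resources.
```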
Similar to AI bots currently used in medical care, psychotherapy, and palliative care, it is conceivable that carebots could soon replace old, exhausted, and often ill pastors in at least some of their pastoral duties. The most far-reaching visions on this subject are put forward by Young (2022, p. 6), who researches the interaction between AI technology and pastoral care at the Department of Computer Science at the University of Texas at Austin. In his opinion, carebots are not yet capable of providing pastoral care comparable to that of a living human being. However, technological advances may ultimately force religious communities to face new questions about the potential for automation in pastoral care.
According to Young (2022, pp. 8–9), telepresence is an obvious alternative to traditional face-to-face interactions between service providers and clients in contemporary pastoral care. Generally speaking, telepresence can mean telephony, text messaging, and video conferencing. The latest additions to the AI toolkit include virtual, augmented, and mixed reality. These are related but distinct technologies. Virtual reality (VR) immerses users in a completely artificial digital environment, usually through a head-mounted display system. Augmented reality (AR) overlays virtual objects onto the real environment via mobile devices such as smartphones and tablets, computer or TV screens, or head-mounted devices or glasses. Finally, mixed reality (MR) extends a step beyond AR, allowing the viewer to interact with virtual objects.
Currently, there are no known pastoral care providers using VR, AR, or MR, although these are used in traditional mental health facilities in the US. The most futuristic aspect of pastoral care seems to be telepresence technology, such as holography and holoportation. Young (2022) describes them as follows:
In this case, photorealistic 3D images of distant people and objects are captured, compressed, transmitted over a network, decompressed and finally displayed using lasers in the user’s field of vision, along with real-time audio communication, rivalling physical presence. The other person appears in the user’s presence as a living hologram. Imagine sitting in a room ‘with’ your pastor, even if he is thousands of kilometres away. You see the facial expressions, gestures, posture and affect of your conversation partner as if he were physically present. The discussion proceeds completely naturally because the interaction takes place in real time.
(p. 9)
Young (2022, p. 11) mentions another fascinating technology that may be used in pastoral care in the future. This is video capture, a technology already used in some museums, such as the Illinois Holocaust Museum and Education Centre in Skokie. This technology allows the image and voice of a person (living or deceased) to be used to hold a conversation resembling that with a living person. This is possible thanks to large amounts of information processed by AI and transmitted in 2D or 3D technology.
The technologies mentioned by Young, such as VR, AR, MR, holography, and holoportation, seem difficult to apply in pastoral care. However, these technologies should not be seen as partners in building personal relationships, but as important resources and tools for pastoral care. The world of liturgy and rituals—not only Christian ones—is rich in symbols and signs. Symbolic meaning is hidden in “visible things” but becomes understandable to those in the know. An important feature of religious symbols is respect for human freedom. A symbol is a form of human expression, enriching the inner world of the spirit and allowing people to experience the sacred in community while respecting individual identity. The hermeneutics of religious signs and symbols is one of the main tasks of pastors. It seems that distinguishing between real and symbolic presence would provide a useful framework for the use of these AI products in pastoral care.
What are the limitations, ethical dilemmas, and threats arising from the use of AI in pastoral care? Can interaction with a machine replace human relationships? Human life is more than a series of biological processes, and emotions are more than a biochemical algorithm. According to Stoddart (2023, p. 673), carebots can assist but never replace humans in pastoral care. AI can generate artificial sensitivity, compassion, even some signs of sympathy, but it will remain a soulless and hopeless machine. It is incapable of grasping human conditionality and learning kenotic love. A bot makes no sacrifice; it merely fulfils its technical function. A human caregiver expresses their humanity in their relationship with another. Stoddart expressed the differences between human caregivers and artificial carebots in eight aphorisms:
Biological “feelings” of love do not exhaust love. Complex pattern recognition does not imply intelligence. Reaction to physical stimuli is not equivalent to empathy. Gathering information is not the same as knowledge. Probabilistic reasoning is not hope. The possibility of being ‘turned off’ is not equivalent to mortality. Assisted activities do not replace care. Technical skills are not wisdom
(p. 673)
A person cannot delegate their responsibility for another person, especially for a person in existential need, to AI technologies. Pastoral care must be based on a horizon of genuine unpredictability and mortality, and this requires a genuine presence, a dialogue that draws on the resources of wisdom and not just on mere information, as well as sacrificial (agapeic) love, which is not a decision-making process but a way of living in communion with others. Since the Second Vatican Council, pastoral ministry, at least in the Catholic Church, has been viewed in the spirit of ecclesiology of communion. To build an authentic community (koinonia), real relationships are necessary between people who are capable of mutual commitment, shared life, and responsibility for one another. AI tools can contribute to improving relationships between people, but only in an analogous sense and to a limited extent.
Proudfoot (2023, p. 677), citing the achievements of Herzfeld (2023) and analysing Karl Barth’s four factors of authentic encounter (open and mutual eye contact; speaking to and listening to each other; mutual giving and receiving of help; doing all this with joy), concludes that “an elementary I–You encounter between a human being and an AI agent is possible, although it will lack the full depth of a human–human encounter due to the different nature of the AI agent and its assumed lack of capax Dei”. After a thorough analysis of the four factors of authentic encounter in relation to AI, Proudfoot came to the following conclusion:
If conscious computers ever emerge (and I agree that this is a big “if”), this article has shown that I–You encounters with these new beings can take place, even from a theological point of view. Their potential role in pastoral care will be different and shallower than that which humans can provide. Such a role would still have value—after all, even humans cannot provide the perfect care that only our gracious God can provide.
(p. 693)
Summarising the results of the above analysis, it should be concluded that AI is not capable of replacing humans in pastoral ministry at the current stage of development, but it can effectively complement them within the limits of its capabilities, especially in the creation of global databases of theological and spiritual knowledge, which can be used in religious education. At the moment, carebots are not able to respond to human moral dilemmas in a way that takes into account a person’s experience, successes, failures, traumas, and spiritual sensitivity. Humans reflect the imago Dei in their capacity for social relationships with God and other people. In contrast, automated AI systems are unable to create the relationships necessary for authentic pastoral ministry. Carebots, however technically sophisticated, remain devoid of hope, soulless, and unable to bear witness to faith and kenotic love.
The essence of humanity includes a physical component thanks to which humans are able to express and communicate their innermost experiences and thoughts to others. Disembodied AI tools are capable of simulating empathic affectivity, but they are unable to convey the relational presence of a living human being as an integrated and embodied being. Herzfeld (2023, p. 165) notes that “being unique requires a body. The fact that we are embodied beings is central to the Christian faith.” Jesus shared our physical pains, mental fears, even our sense of emptiness when he cried out from the cross: “My God, my God, why have you forsaken me?” (Mark 15:34). According to Herzfeld (2023, p. 170), “this vulnerability to suffering and death, shared by Jesus, is an obstacle between humans and artificial intelligence. It is unlikely that we will design carebots that can age and die like us.” AI systems are machines, not living beings. They can be a valuable resource when used well, but they are tools and nothing more. Hence the following statement by Herzfeld (2023, p. 179): “What makes life worth living is not the information encoded in AI tools [...]. It is love. Embodied love that we see, hear, taste, touch and nurture. The love that our God has shared with us and that we will, in some way, take with us to the end.”
Christian Churches face one crucial task: to actively participate in shaping the ethical framework for AI in collaboration with AI technologists and theologians. Otherwise, instead of contributing to human progress, AI may become a source of human deconstruction, dehumanisation, and the collapse of social order on a global scale. An important step in this direction was taken on 26–28 February 2020, at the conclusion of the international workshop The “Good” Algorithm? Artificial Intelligence, Ethics, Law, Health, organised by the Pontifical Academy for Life. On that occasion, representatives of the Academy, Microsoft, IBM, the Food and Agriculture Organisation of the United Nations (FAO), and the Italian government signed a document entitled “Rome Call for AI Ethics”, “to support an ethical approach to artificial intelligence and promote a sense of responsibility among organisations, governments and institutions to create a future in which digital innovation and technological progress serve human genius and creativity, rather than gradually replacing them” (Pontifical Academy for Life 2020). The Rome Call for AI Ethics recognises that AI offers enormous potential for improving social coexistence and personal well-being, enhancing human capabilities and enabling or facilitating many tasks that can be performed more efficiently and effectively. An important outcome of the Rome meeting was the formulation of six ethical principles for good AI innovation: transparency, inclusiveness, accountability, impartiality, reliability, and the security and privacy of AI systems and their users. The World Council of Churches (2023), concerned about the accelerating development and unregulated use of artificial intelligence (AI), on 27 June 2023 called on theological education institutions to reflect on the ethical issues related to AI and its impact on human self-understanding. WCC member Churches and ecumenical partners were encouraged to lobby their governments for swift action to introduce appropriate regulatory systems and accountability frameworks.
Our critical realist analysis of pastoral care demonstrates that while AI cannot replace the human capacity for love and spiritual accompaniment, it can enter into a symbiotic relationship with human pastors, enhancing their reach while respecting the irreducibly personal nature of authentic pastoral encounter.

4. Conclusions

The primary objective of this article was to formulate a compendium of pragmatic recommendations for the Christian nurturing of children, adolescents, and adults within the family, educational institutions, the media, and religious communities. A thorough interdisciplinary investigation was conducted to analyse the impact of AI on Christian formation at both the individual and social levels. The following conclusions were derived from this analysis:
  • AI appears simultaneously as a powerful ally of human intellect—an extension of our minds and a tool for broadening our cognitive horizons—and as a potential threat to our abilities if we misuse it. In purely cognitive areas, AI relieves us of tedious calculations and can help us make better decisions, but it also carries the risk of cognitive dependence and intellectual laziness. In the realm of relationships and emotions, AI can provide support and new forms of communication. Yet, at the same time, it carries the risk of shallow empathy and social atomisation. From a philosophical point of view, AI forces us to rethink the definitions of intelligence, consciousness, and the meaning of human cognition in a world where machines are becoming increasingly competent.
  • From the perspective of Christian anthropology, it is crucial to distinguish between intelligence as the ability to process information and wisdom as the ability to evaluate and act in the light of the good of the human person. As McGrath (2016) notes, technological reductionism, which limits human beings to information processing systems, is fundamentally at odds with the Christian vision of the person as a transcendent being, endowed with dignity and called to relationship with God and other people.
  • Christian pedagogy faces the challenge of shaping digital wisdom: the ability to use AI in a reflective and ethical manner that supports the integral development of the person. Spitzer (2016) postulates the need to develop “digital resilience”, an ability to use digital technologies consciously and selectively, taking into account their potential risks to cognitive, emotional, and social development. In this context, paradoxically, limiting exposure to digital technologies in the early stages of development may be a prerequisite for forming a mature relationship with AI systems later in life.
  • A fundamental challenge for educators and pastoral theologians is the media formation of all Christians, especially young people, in the wise use of AI to broaden their religious knowledge and acquire spiritual skills. Only a conscious user, equipped with knowledge about the possibilities and limitations of AI, will be able to use it critically without succumbing to manipulation. The future dream of pastoral theologians is a carebot that could answer clients’ questions in a wise, empathetic, and theologically sound manner, like a well-trained human pastoral counsellor. However, serious ethical, theological, and practical dilemmas arise here, regardless of whether the carebot would present itself through voice or video, as an avatar in virtual space, as a projection in the client’s augmented reality, or in some other way. The road to replacing humans, with all the richness of God’s gifts (capax Dei) and at the same time their sinful limitations, with super GAI systems is still a long one. Even if this happens many years from now, it is already advisable today for the human intelligence of pastors and their charges to cooperate with AI systems.
  • Our tripartite methodological framework has revealed complementary dimensions of the AI–human relationship. Phenomenological vigilance allowed us to bracket preconceptions and attend to actual experiences of human–AI collaboration, from centaur chess to multimodal communication. The dual hermeneutics of trust and suspicion exposed both genuine opportunities for cognitive enhancement and the ideological mystifications of corporate AI discourse. Critical realism grounded these interpretive insights by identifying the underlying mechanisms—technological, philosophical, educational, and spiritual—that generate observable patterns of promise and peril in human–AI interaction.
  • The spheres of education, culture, and human brotherhood are realities integrally linked to the mission of the Christian Churches. It seems that it is precisely in these areas of ecclesial activity that there are great opportunities for the application of AI. Today, young people in particular rely on online resources, as this is where they seek answers to the questions that arise in their minds, especially questions about the meaning and purpose of life, the meaning of faith, and the eschatological perspective of eternal existence. However, for them to find answers that are at least close to the teachings of Christ, someone must post this information in publicly accessible databases. The Primate of the Netherlands, Cardinal W. J. Eijk (2024), believes the time has come to evangelise new technologies. Admittedly, it is difficult today to have a complete vision of what AI can still do for humans, but technologies such as chatbots will make it possible to say something about religious topics as well. Since AI draws on available IT resources, Churches should take immediate steps to expand their own knowledge resources, hosting them, importantly, on their own servers, to avoid the future risk of their removal by the owner of the medium for financial or ideological reasons. Let us hope that the new reality created by AI will not turn against humanity but will be harnessed for its true development, in accordance with God’s eternal plan. What AI will become for humanity in the near future depends, to some extent, on each and every one of us.

Author Contributions

Conceptualization, W.P., A.R. and M.K.; methodology, W.P. and M.K.; software, W.P.; validation, W.P., A.R. and M.K.; formal analysis, W.P.; investigation, W.P., A.R. and M.K.; resources, W.P., A.R. and M.K.; data curation, W.P.; writing—original draft preparation, W.P., A.R. and M.K.; writing—review and editing, W.P.; visualization, A.R.; supervision, M.K.; project administration, W.P.; funding acquisition, W.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research and the APC were funded by the Minister of Science and Higher Education within the programme Initiative of Excellence Research University (IDUB) 2025. Project title: “Sztuczna inteligencja. Nowe wyzwania dla rozumienia człowieka, chrześcijańskiego wychowania i pastoralnej działalności Kościołów” [Artificial Intelligence: New Challenges for Human Understanding, Christian Education, and the Pastoral Activity of the Churches], IDUB Number: 1/6-20-25-01-0802-0003-1412.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analysed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. AeN 2025 (Dicastery for the Doctrine of the Faith & Dicastery for Culture and Education 2025). ‘Antiqua et Nova’. Note on the Relationship Between Artificial Intelligence and Human Intelligence, 28 January 2025. Available online: https://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_ddf_doc_20250128_antiqua-et-nova_en.html (accessed on 15 March 2025).
  2. Cassidy, Mike. 2014. Centaur Chess Shows Power of Teaming Human and Machine. HuffPost, January 27. [Google Scholar]
  3. Charchuła, Jarosław. 2024. Uniwersytet w dobie sztucznej inteligencji—Szanse i zagrożenia [The university in the age of artificial intelligence—Opportunities and threats]. Horyzonty Wychowania 23: 79–87. [Google Scholar] [CrossRef]
  4. Clebsch, William A., and Charles R. Jaekle. 1990. Pastoral Care in Historical Perspective. Northvale: Jason Aronson. [Google Scholar]
  5. Dreyer, Wim A. 2019. Being church in the era of ‘homo digitalis’. Verbum et Ecclesia 40: a1999. [Google Scholar] [CrossRef]
  6. Eijk, Willem J. 2024. Niech Kościół Ewangelizuje Chatboty [Let the Church Evangelise Chatbots]. Available online: https://www.ekai.pl/prymas-holandii-niech-kosciol-ewangelizuje-chatboty/ (accessed on 26 March 2025).
  7. Floridi, Luciano. 2019. What the near future of artificial intelligence could be. Philosophy & Technology 32: 1–15. [Google Scholar] [CrossRef]
  8. Francis, Pope. 2023. Artificial Intelligence and Peace. Message for the 57th World Day of Peace, 8 December 2023. Available online: https://www.vatican.va/content/francesco/en/messages/peace/documents/20231208-messaggio-57giornatamondiale-pace2024.html (accessed on 12 March 2025).
  9. Francis, Pope. 2024. Address at the G7 Session on Artificial Intelligence, Borgo Egnazia, 14 June 2024. Available online: https://www.vatican.va/content/francesco/en/speeches/2024/june/documents/20240614-g7-intelligenza-artificiale.html (accessed on 12 March 2025).
  10. Gaudet, Matthew J. 2022. An Introduction to the Ethics of Artificial Intelligence. Journal of Moral Theology 11: 1–12. [Google Scholar] [CrossRef]
  11. Graves, Mark. 2022. Theological Foundations for Moral Artificial Intelligence? Journal of Moral Theology 11: 182–211. [Google Scholar] [CrossRef]
  12. Herzfeld, Noreen L. 2023. The Artifice of Intelligence: Divine and Human Relationship in a Robotic Age. Minneapolis: Fortress Press. [Google Scholar]
  13. Hunter, Rodney J., ed. 1990. Dictionary of Pastoral Care and Counseling. Nashville: Abingdon Press. [Google Scholar]
  14. Hunter, Rodney J. 1995. Pastoral Care and Healthcare Chaplaincy. Encyclopedia.com. Available online: https://www.encyclopedia.com/science/encyclopedias-almanacs-transcripts-and-maps/pastoral-care-and-healthcare-chaplaincy (accessed on 24 March 2025).
  15. Ignatowski, Grzegorz, Łukasz Sułkowski, Krzysztof Przybyszewski, and Robert Seliga. 2024. Attitudes of Catholic Clergies to the Application of ChatGPT in Unite Religious Communities. Religions 15: 980. [Google Scholar] [CrossRef]
  16. Jeziorański, Marek. 2024. Pedagogical Criteria for the Adaptation of Artificial Intelligence to the Educational Process. Lubelski Rocznik Pedagogiczny 43: 141–53. [Google Scholar] [CrossRef]
  17. John Paul II. 1980. Discorso di Giovanni Paolo II all’organizzazione delle Nazioni Unite per l’Educazione, la Scienza e la Cultura (UNESCO) [Address of John Paul II to the United Nations Educational, Scientific and Cultural Organization (UNESCO)]. Paris, 2 June 1980. Available online: https://www.vatican.va/content/john-paul-ii/it/speeches/1980/june/documents/hf_jp-ii_spe_19800602_unesco.html (accessed on 10 March 2025).
  18. John Paul II. 2000. Act of Entrustment to Mary for the Jubilee of Bishops, 8 October 2000, par. 3. Insegnamenti XXIII/2: 56. [Google Scholar]
  19. Kalisz, Michał. 2020. Sztuczna inteligencja—Osiągnięcia, zagrożenia, perspektywy [Artificial intelligence—Achievements, threats, prospects]. Transformacje 1–2: 156–69. [Google Scholar]
  20. Kamiński, Ryszard, and Wiesław Przygoda. 2006. Duszpasterstwo [Pastoral Ministry]. In Leksykon Teologii Pastoralnej [Lexicon of Pastoral Theology]. Edited by Ryszard Kamiński, Wiesław Przygoda and Marek Fiałkowski. Lublin: Wydawnictwo KUL, pp. 201–9. [Google Scholar]
  21. Kelly, Kevin. 2010. What Technology Wants. New York: Viking Press. [Google Scholar]
  22. Kiciński, Andrzej. 2016. Wychowanie chrześcijańskie [Christian Education]. In Encyklopedia Aksjologii Pedagogicznej [Encyclopaedia of Pedagogical Axiology]. Edited by Krystyna Chałas and Adam Maj. Radom: Polwen, pp. 1380–83. [Google Scholar]
  23. Korteling, Johan E. H., Gillian C. van de Boer-Visschedijk, Romy A. M. Blankendaal, Rudy C. Boonekamp, and Aletta R. Eikelboom. 2021. Human- versus artificial intelligence. Frontiers in Artificial Intelligence 4: 622364. [Google Scholar] [CrossRef] [PubMed]
  24. Leo XIV, Pope. 2025. Address to the College of Cardinals. 10 May 2025. Available online: https://www.vatican.va/content/leo-xiv/en/speeches/2025/may/documents/20250510-collegio-cardinalizio.html (accessed on 12 May 2025).
  25. Louw, Daniël J. 2017. Practical theology as life science: Fides Quaerens Vivendi and its connection to Hebrew thinking (Hālak). In die Skriflig 51: a2239. [Google Scholar] [CrossRef]
  26. McCarthy, John, Marvin Lee Minsky, Nathaniel Rochester, and Claude Elwood Shannon. 1955. A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence (31 August). Available online: http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html (accessed on 22 March 2025).
  27. McGrath, Alister E. 2016. Enriching Our Vision of Reality: Theology and the Natural Sciences in Dialogue. London: SPCK. [Google Scholar]
  28. Michałowski, Bartłomiej. 2018. Internet of Things (IoT) i Artificial Intelligence (AI) w Polsce [Internet of Things (IoT) and Artificial Intelligence (AI) in Poland]. In Jak wykorzystać Rewolucję Technologiczną Internetu Rzeczy i Sztucznej Inteligencji w rozwoju Polski [How to Use the Technological Revolution of the Internet of Things and Artificial Intelligence in the Development of Poland]. Warszawa: Instytut Sobieskiego. Available online: https://sobieski.org.pl/wp-content/uploads/Raport-Iot-i-AI-w-Polsce-03-2018-Micha%C5%82owski.pdf (accessed on 31 March 2025).
  29. Müller, Josef. 1998. Pastoral. In Lexikon für Theologie und Kirche. Edited by Walter Kasper, Konrad Baumgartner, Horst Bürkle, Klaus Ganzer, Karl Kertelge, Wilhelm Korff and Peter Walter. Freiburg, Basel, Rome and Vienna: Herder. [Google Scholar]
  30. Necula, Constantin V., and Daniela Dumulescu. 2024. Artificial Intelligence and religion: Between slavery and the path to salvation. Journal for the Study of Religions and Ideologies 23: 47–58. [Google Scholar]
  31. Oberg, Andrew. 2023. Souls and Selves: Querying an AI Self with a View to Human Selves and Consciousness. Religions 14: 75. [Google Scholar] [CrossRef]
  32. Oppong, Emmanuel O. 2024. Ethical Challenges of Artificial Intelligence in the Light of the Teachings of Pope Francis. Społeczeństwo 34: 93–107. [Google Scholar] [CrossRef]
  33. Pontifical Academy for Life. 2020. Artificial Intelligence. Vatican City. Available online: https://www.academyforlife.va/content/pav/en/projects/artificial-intelligence.html (accessed on 26 June 2025).
  34. Proudfoot, Andrew. 2023. Could a Conscious Machine Deliver Pastoral Care? Studies in Christian Ethics 36: 675–93. [Google Scholar] [CrossRef]
  35. Rahwan, Iyad, Manuel Cebrian, Nick Obradovich, Josh Bongard, Jean-François Bonnefon, Cynthia Breazeal, Jacob W. Crandall, Nicholas A. Christakis, Iain D. Couzin, Matthew O. Jackson, and et al. 2019. Machine behaviour. Nature 568: 477–86. [Google Scholar] [CrossRef] [PubMed]
  36. Reed, Randall. 2021. A.I. in Religion, A.I. for Religion, A.I. and Religion: Towards a Theory of Religious Studies and Artificial Intelligence. Religions 12: 401. [Google Scholar] [CrossRef]
  37. Rynio, Alina. 2004. Integralne Wychowanie w myśli Jana Pawła II [Integral Education in the Thought of John Paul II]. Lublin: Wydawnictwo KUL. [Google Scholar]
  38. Rynio, Alina. 2010. Słowo w procesie wychowania [The word in the process of education]. In Media w Wychowaniu Chrześcijańskim [Media in Christian Education]. Edited by Rynio Alina and Dorota Bis. Lublin: Wydawnictwo KUL, pp. 261–76. [Google Scholar]
  39. Schwab, Klaus. 2016. The Fourth Industrial Revolution. New York: Crown Business. [Google Scholar]
  40. Singler, Beth. 2023. Will AI create a religion? Views of the algorithmic forms of the religious life in popular discourse. American Religion 5: 95–103. [Google Scholar] [CrossRef]
  41. Somers, Meredith. 2019. Emotion AI, explained. MIT Sloan Management Review, March 8. [Google Scholar]
  42. Spitzer, Manfred. 2016. Cyfrowa Demencja: W jaki sposób Pozbawiamy Rozumu Siebie i Swoje Dzieci [Digital Dementia: How We Are Depriving Ourselves and Our Children of Their Minds]. Słupsk: Dobra Literatura. [Google Scholar]
  43. Stoddart, Eric. 2023. Artificial Pastoral Care: Abdication, Delegation or Collaboration? Studies in Christian Ethics 36: 660–74. [Google Scholar] [CrossRef]
  44. Vatican Council II. 1965. Declaration on Christian Education ‘Gravissimum Educationis’. 28 October 1965. Available online: https://www.vatican.va/archive/hist_councils/ii_vatican_council/documents/vat-ii_decl_19651028_gravissimum-educationis_en.html (accessed on 12 March 2025).
  45. Vicini, Andrea. 2022. Artificial Intelligence and Social Control: Ethical Issues and Theological Resources. Journal of Moral Theology 11: 41–69. [Google Scholar] [CrossRef]
  46. Vinay, Aananya. 2024. Emily Bender on AI as a “stochastic parrot”. The Student Life, November 15. [Google Scholar]
  47. World Council of Churches. 2023. Statement on the Unregulated Development of Artificial Intelligence by the WCC Central Committee, Geneva, Switzerland, 21–27 June. Available online: https://www.oikoumene.org/resources/documents/statement-on-the-unregulated-development-of-artificial-intelligence (accessed on 26 June 2025).
  48. Young, William. 2022. Virtual Pastor: Virtualization, AI, and Pastoral Care. Theology and Science 20: 6–22. [Google Scholar] [CrossRef]