1. Introduction
The advent of AI and its ubiquity have given birth to new research areas in philosophy, ethics, and theology: the philosophy of AI, AI ethics, and the theology of AI. There is no better intellectual resource in which the key subject matters of these new fields have been interrogated and expounded than the four documents of the Catholic Church on AI issued during the papacy of Pope Francis, namely: “Rome Call for AI Ethics” (Pontifical Academy for Life 2020), “Artificial Intelligence and Peace. Message of His Holiness Pope Francis” (Pope Francis 2023), “Artificial Intelligence and the Wisdom of the Heart” (Pope Francis 2024), and “Antiqua et Nova: Note on the Relationship Between Artificial Intelligence and Human Intelligence” (Dicastery for the Doctrine of the Faith and Dicastery for Culture and Education 2025). These documents discuss the ethical, philosophical, theological, anthropological, ecological, sociological, legal, and cognate questions concerning the research, development, deployment, and usage of AI systems, and their implications for the meaning of human life and existence, the advancement of human flourishing, a harmonious and peaceful global community, and the care of our common home, the Earth. In the latest of these four documents, Antiqua et Nova (n. 5), the seriousness of this discourse on AI is addressed not only to the Catholic faithful but to all humans of goodwill, thus:
Committed to its active role in the global dialogue on these issues, the Church invites those entrusted with transmitting the faith—including parents, teachers, pastors, and bishops—to dedicate themselves to this critical subject with care and attention….it is also meant to be accessible to a broader audience, particularly those who share the conviction that scientific and technological advances should be directed toward serving the human person and the common good.
Since the first document, “Rome Call for AI Ethics” (2020), studies have been published on the Church’s view on AI; inspired by these documents, research papers have addressed subject matters across different academic fields, especially theology, religious studies, philosophy, ethics, and education (see Vitullo et al. 2025; Annicchino 2025; Floridi 2024; Onyeukaziri 2024; Labrecque 2022). These studies and a few others not mentioned have, no doubt, helped to elucidate and deepen the understanding of the Catholic Church’s position on AI, but they do not engage in an integral study of the four documents as a whole. The present study attempts to fill this gap by engaging in a philosophical and theological analysis of the four AI documents of the Church released to date, all during the papacy of Pope Francis.
The papacy of Pope Francis, like previous papacies, addressed several contemporary global issues, among which the questions of climate change, global poverty, war, and artificial intelligence (AI) were given recurrent emphasis. Four elaborate documents were dedicated to the question of AI design and development with respect to its ethical, philosophical, theological, and socio-political implications. The aim of this study, therefore, is to analyze the philosophical and theological intuitions that underpin the urgency and cogency informing the Church’s pontification on AI. Additionally, it examines the exhaustiveness and cohesiveness of the scientific and technological epistemological foundations that ground the argumentation of the documents, and the theoretical unification that underpins Pope Francis’s thoughts on AI. This will help to evaluate the Church’s contribution to the ongoing discourse on AI ethics and governance and to expound the humanistic imagination on the reality of the co-existence of humans and AI as cognitive systems.
2. Brief Exposé of the Four AI Documents of the Church
This section briefly discusses each of the four documents of the Church on AI. The aim is to grasp directly the main points emphasized in each document. The documents are discussed chronologically, which means that insights from the earlier documents will be found referenced or alluded to in the later ones, especially the last document, which is the most comprehensive and detailed.
2.1. Rome Call for AI Ethics (RCAI Ethics) (28 February 2020)
This is the first official document of the Church on AI, issued from the desk of the Pontifical Academy for Life (2020). In it, the Church acknowledges that AI is “bringing about profound changes in the lives of human beings, and it will continue to do so.” The Church here faces the fact that AI systems have come to stay: their invention cannot be undone, nor their continuous development stopped. The document acknowledges that “AI offers enormous potential when it comes to improving social coexistence and personal well-being.” However, it is skeptical that the quantitative potential AI brings is matched by a corresponding qualitative potential. Hence, as much as AI could help to augment “human capabilities and enabling or facilitating many tasks that can be carried out more efficiently and effectively”, it maintains reservations that AI systems portend a transformation of “the way in which we perceive reality and human nature itself, so much so that they can influence our mental and interpersonal habits.” The question of “the way in which we perceive reality” points to metaphysics, and the question of “the way in which we perceive…human nature itself” points to both philosophical and theological anthropology. This implies that, critically speaking, the invention and development of AI is not philosophically and theologically indifferent, which explains the increase in research publications on the philosophy of AI and the theology of AI. Nevertheless, this document, aimed at sparking a conversation on the ethical implications of AI and making suggestions for AI ethics, does not interrogate the philosophical and theological implications of AI. These implications became the thought directions of the subsequent three documents.
Even though the focus of this document is on ethics, the theoretical foundation for its ethical appeal for AI development is not the traditional moral resources of the Church, nor is it explicitly based on the Church’s traditional moral intellectualization. This shows that the document is not intended to be a rigorous moral, philosophical-cum-theological discourse on AI, but a mere, if important, appeal and call for ethical regulation of the entire AI ecosystem. Hence, the document has as its main reference for this ethical appeal the Universal Declaration of Human Rights. Referencing the preamble of the Declaration, the document maintains that ‘New technology must be researched and produced in accordance with criteria that ensure it truly serves the entire “human family”…, respecting the inherent dignity of each of its members and all natural environments, and taking into account the needs of those who are most vulnerable. The aim is not only to ensure that no one is excluded, but also to expand those areas of freedom that could be threatened by algorithmic conditioning.’ In this, one can grasp clear allusions to certain fundamental principles of the social teachings of the Catholic Church: the common good (the good of the entire human family), respect for the inherent dignity of every human person, care for nature and the environment, and the option for the poor and most vulnerable (see Pontifical Council for Justice & Peace 2004).
As alluded to earlier, the main objective of this document is to call for the ethical design and usage of AI systems. In affirmation of Articles 1 and 2 of the Universal Declaration of Human Rights, the document proposes ethical AI systems grounded in the fundamental principle: “All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of fellowship.” Freedom, equality, dignity, and the rights of personhood are irreducible principles of the human person that must not be compromised in the advancement of AI systems. These principles, therefore, ought to be the fundamental grounds for AI ethical discernment, articulation, and promulgation. Hence, the document maintains that “This fundamental condition of freedom and dignity must also be protected and guaranteed when producing and using AI systems.” To this effect, AI systems should be designed to guard against any form of discrimination.
Another important point this document argues is the easily forgotten fact that technological advancement does not necessarily entail human progress and flourishing; another is that, most times, the price paid for technological progress is the degeneration of our common home, the Earth. The document submits that, for technological advancement to truly lead to human progress and flourishing and respect for the planet, the following three requirements have to be met: i. “It must include every human being, discriminating against no one.” ii. “It must have the good of humankind and the good of every human being at its heart.” iii. ‘It must be mindful of the complex reality of our ecosystem and be characterized by the way in which it cares for and protects the planet (our “common and shared home”) with a highly sustainable approach.’ In addition to these requirements, transparency and the conscientious development and deployment of AI systems are also emphasized as important AI ethical principles. Because AI systems simulate human cognitive powers, effort should be made to inform users that they are interacting with machines, and the possible implications of human–machine interaction should be properly spelled out.
Additionally, this document is deeply concerned with the impact of AI on education. There is no doubt that AI will impact education; already, educational and research institutions are not only feeling the shocks of that impact but are also grappling with the challenge of reconfiguring traditional educational models and pedagogy to accommodate the employment of AI systems. However, there is the fear of educational disenfranchisement and knowledge commercialization due to the cost of using complex AI systems in education. Thus, there is the question, discussed in the document, of ensuring universal basic education in the deployment of AI systems in education. The document submits: “The social and ethical impact of AI must be also at the core of educational activities of AI. The main aim of this education must be to raise awareness of the opportunities and also the possible critical issues posed by AI from the perspective of social inclusion and individual respect.”
Hence, the right to a good education, which it is the duty of all governments to guarantee and which is indeed a fundamental human right, must be protected and enhanced in the development and employment of AI systems. Beyond education, the developers of AI can advance designs that promote and sustain global peace and the protection of the planet. These are broader ethical questions that affect the entire human race. Thus, the document unequivocally asserts: “To develop and implement AI systems that benefit humanity and the planet while acting as tools to build and maintain international peace, the development of AI must go hand in hand with robust digital security measures.” For this to be effective, ethically grounded algorithms must be given serious consideration, given the essential importance of algorithms in AI design and development. The document calls this perspective “an approach of ethics by design”, in other words, an “algor-ethical vision” of AI. This implies designing AI systems with the following six ethical principles, which it conceives as “fundamental elements of good innovation”: transparency, inclusion, responsibility, impartiality, reliability, and security and privacy.
2.2. Artificial Intelligence and Peace: Message of His Holiness Pope Francis (AI and Peace) (2023)
The second document on AI is not from a dicastery of the Vatican but a message from His Holiness Pope Francis. The message, which aims specifically to connect the question of global peace to the question of AI, was designed as the message for the World Day of Peace celebration, usually held on the first of January, in this case, 1 January 2024. Unlike the first document, which heavily references and draws inspiration from the Universal Declaration of Human Rights, this document draws deep and elaborate inspiration from the Sacred Scriptures and some official documents of the Church: “Gaudium et Spes,” “Evangelii Gaudium,” “Laudato Si’,” “Fratelli Tutti,” and a few other public addresses and messages of Pope Francis.
This document begins with a positive take on human intelligence, skillful abilities, and the capacity for understanding and knowledge, including the knowledge of science and technology, as gifts from God to humans, which give dignity to the human person created in the image and likeness of God (see AI and Peace, n. 1). One could deduce that, since AI is a product of human intelligence and skillful ability, the Church is arguing that AI can be a force for good, especially an instrument for global peace, if properly developed and deployed. Hence, with the proper use of human intelligence, we humans can and should continue “God’s plan and cooperation with his will to perfect and bring about peace among peoples. Progress in science and technology, insofar as it contributes to greater order in human society and greater fraternal communion and freedom, thus leads to the betterment of humanity and the transformation of the world” (AI and Peace, n. 1). The Pope thus acknowledges “the impressive achievements of science and technology,” especially in the areas of health care and medical research. However, human experience teaches us that human intelligence and skillful abilities have also produced technological instruments that were a great force for evil, war, and human destruction. In the same vein, the document rightly affirms that science and technology come with “grave risks” as well as “exciting opportunities” (see AI and Peace, n. 1). Hence, the question is: How can AI be a force for reducing societal restiveness, global wars, and acrimony? This is the central question through which this second document on AI, the first message of Pope Francis on AI, can be studied.
In discussing the future of AI, the Pope asserts: “New digital tools are even now changing the face of communications, public administration, education, consumption, personal interactions and countless other aspects of our daily lives” (AI and Peace, n. 2). It is the powerful capacity of the science and technology of algorithms that makes AI an unparalleled and revolutionary agent among new digital technologies. The Pope warns that, just as one should not be deceived into conceiving scientific research and technological innovation as “disembodied” and “neutral” rather than “subject to cultural influences” and other non-scientific elements and factors (Feyerabend 2010 constantly reminded the scientific community of this), one should never conceive the science and technology of algorithms, which is the heart of AI, or the science and technology of AI itself, as “disembodied”, “neutral”, and not “subject to cultural influences” (see AI and Peace, n. 2). To this effect, the Pope reiterates the need for ethical considerations in the entire ecosystem of AI (see AI and Peace, n. 2).
The Pope conceives AI systems as “forms of intelligence” that are “aimed at making machines reproduce or imitate in their functioning the cognitive abilities of human beings” (see AI and Peace, n. 2). Hence, the Pope sees AI “intelligence” as “fragmentary” because “they can only imitate or reproduce certain functions of human intelligence” (see AI and Peace, n. 2). Moreover, based on the conception that AI systems are “forms of intelligence”, the Pope argues that AI systems ‘should always be regarded as “socio-technical systems”’ (see AI and Peace, n. 2). On these premises, the Pope concludes: “For the impact of any artificial intelligence device—regardless of its underlying technology—depends not only on its technical design, but also on the aims and interests of its owners and developers, and on the situations in which it will be employed” (see AI and Peace, n. 2). Hence, he contends that AI “ought to be understood as a galaxy of different realities” (see AI and Peace, n. 2). By this he posits that AI systems are not a priori beneficent: their goodness is dependent on human goodness, which is possible when we humans act ‘responsibly and respect such fundamental human values as “inclusion, transparency, security, equity, privacy and reliability”’ (see AI and Peace, n. 2). Since these human values are not given a priori in human nature but must be cultivated, the Pope maintains that it should not be presupposed that designers and developers of AI technologies will always “act ethically and responsibly. There is a need to strengthen or, if necessary, to establish bodies charged with examining the ethical issues arising in this field and protecting the rights of those who employ forms of artificial intelligence or are affected by them” (see AI and Peace, n. 2). This implies that the “inherent dignity of each human being and the fraternity that binds us together as members of the one human family must undergird the development of new technologies and serve as indisputable criteria for evaluating them before they are employed” (AI and Peace, n. 2).
In this message, the Pope rightly makes a conscious distinction between AI in general, machine learning systems (MLS), and deep learning systems (DLS). This distinction is important for a concise interrogation of the implications of each system. Focusing on machine learning and deep learning, the Pope observes: “Developments such as machine learning or deep learning, raise questions that transcend the realms of technology and engineering, and have to do with the deeper understanding of the meaning of human life, the construction of knowledge, and the capacity of the mind to attain truth” (AI and Peace, n. 3). The capacity to train machines to “learn”, and indeed the fact that we have machines that overtly demonstrate learning behaviors, places MLS and DLS in a realm that, in the Pope’s words, transcends “the realms of technology and engineering”. Technology has always been a useful artifact that improves the efficiency of human work, and engineering has always been a design aimed at optimizing human operations. However, when, in addition to the improvement of work efficiency and optimization, the capacity to learn and to improve in learning is added to machines, one should expect MLS and DLS to transcend ordinary technological and engineering performance. Since the message is linked to peace, the Pope mentions a few ills that could be caused by MLS and DLS, owing to their learning capacity, when they are trained unethically. They can exacerbate human problems such as disinformation campaigns, problems associated with privacy, data ownership and intellectual property, “discrimination, interference in elections, the rise of a surveillance society, digital exclusion and the exacerbation of an individualism increasingly disconnected from society” (AI and Peace, n. 3).
The Pope considers the vastness and complexity of the world, which comprises both the laws explorable by the natural and computational sciences and the world that is the product of the human conscious self, such as human and cultural values; this vastness speaks to the limitations of the human mind in designing AI systems capable of capturing such complexity. Thus, the Pope seems to be convinced that ‘The human mind can never exhaust its richness, even with the aid of the most advanced algorithms. Such algorithms do not offer guaranteed predictions of the future, but only statistical approximations. Not everything can be predicted, not everything can be calculated; in the end, “realities are greater than ideas”’ (AI and Peace, n. 4). Moreover, the Pope is critical of propositions that suggest the possibility of overcoming human mortality—i.e., posthumanism and/or transhumanism. He affirms human mortality and contends against aspirations that tend “to overcome every limit through technology,” for ‘in an obsessive desire to control everything, we risk losing control over ourselves; in the quest for an absolute freedom, we risk falling into the spiral of a “technological dictatorship”’ (AI and Peace, n. 4). With technology, we can extend the limits of our human cognitive powers, but we should not be under the delusion that technology can free us from the metaphysical condition of being limited, contingent beings.
AI systems can exacerbate enduring ethical problems in human societies rooted in “bias and discrimination: systemic errors can easily multiply, producing not only injustices in individual cases but also, due to the domino effect, real forms of social inequality” (AI and Peace, n. 5). AI systems can be made into benevolent tools, but the fallen state of human nature spurs the fear that information design and decision-making algorithms could be used for forms of manipulation or social control, which, as the Pope notes, “require careful attention and oversight, and imply a clear legal responsibility on the part of their producers, their deployers, and government authorities” (AI and Peace, n. 5). With respect to the use of AI in surveillance and the assessment of social behavior, the Pope advises: “Algorithms must not be allowed to determine how we understand human rights, to set aside the essential human values of compassion, mercy and forgiveness, or to eliminate the possibility of an individual changing and leaving his or her past behind” (AI and Peace, n. 5). The history of philosophy tells us how problematic it is to clearly define moral concepts and values such as justice, mercy, and compassion; therefore, if it is problematic for us to clearly define these moral values, there is substantial doubt about whether they could be formalized, computed, and designed into AI systems.
Given that AI is expanding the market of weaponry, the Pope also warns against the weaponization of AI systems and other emerging technologies, such as so-called “autonomous weapon systems,” as “cause for grave ethical concern” (AI and Peace, n. 6). His earnest contention is: ‘Autonomous weapon systems can never be morally responsible subjects. The unique human capacity for moral judgment and ethical decision-making is more than a complex collection of algorithms, and that capacity cannot be reduced to programming a machine, which, as “intelligent” as it may be, remains a machine. For this reason, it is imperative to ensure adequate, meaningful and consistent human oversight of weapon systems’ (AI and Peace, n. 6). On the other hand, he acknowledges and encourages that AI systems be used “to promote integral human development, it could introduce important innovations in agriculture, education and culture, and improved level of life for entire nations and peoples, and the growth of human fraternity and social friendship. In the end, the way we use it to include the least of our brothers and sisters, the vulnerable and those most in need, will be the true measure of our humanity” (AI and Peace, n. 6). This speaks to the call for a human-centered AI and a humanely oriented algorithmic design. Hence, according to the Pope: “An authentically humane outlook and the desire for a better future for our world surely indicates the need for a cross-disciplinary dialogue aimed at an ethical development of algorithms—an algor-ethics—in which values will shape the directions taken by new technologies” (AI and Peace, n. 6). A humanely oriented algorithmic design ought to be ethically oriented and initiated, and this explains the notion of “algor-ethics”.
The Pope, like many educationists, is worried about the implications of the use of AI for the cultivation of critical thinking in students. The Pope observes: “Education in the use of forms of artificial intelligence should aim above all at promoting critical thinking” (AI and Peace, n. 7). He also maintains that AI should not lead to the educational disenfranchisement of any person, irrespective of age. Therefore, he adds: “Schools, universities and scientific societies are challenged to help students and professionals to grasp the social and ethical aspects of the development and uses of technology” (AI and Peace, n. 7). Educational awareness about AI is not only a matter for or within educational institutions; there is also a need for it at the level of the international community. Hence, the Pope advocates for the development of international law with respect to AI:
The global scale of artificial intelligence makes it clear that, alongside the responsibility of sovereign states to regulate its use internally, international organizations can play a decisive role in reaching multilateral agreements and coordinating their application and enforcement. In this regard, I urge the global community of nations to work together in order to adopt a binding international treaty that regulates the development and use of artificial intelligence in its many forms.
(AI and Peace, n. 8)
In designing global laws, ethics, and governance on AI, the Pope clearly enjoins the need for “the consideration of deeper issues regarding the meaning of human existence, the protection of fundamental human rights, and the pursuit of justice and peace… For this reason, in debates about the regulation of artificial intelligence, the voices of all stakeholders should be taken into account, including the poor, the powerless and others who often go unheard in global decision-making processes” (AI and Peace, n. 8).
2.3. Artificial Intelligence and the Wisdom of the Heart (AI and WH) (2024)
The third document to be considered, like the second, is a message by Pope Francis; this time, however, the message concerns AI and human communication. The Pope commences by directly observing that AI “is radically affecting the world of information and communication, and through it, certain foundations of life in society” (AI and WH 2024). Consequently, the Pope thinks that this radical change due to the advancement of AI “leads inevitably to deeper questions about the nature of human beings, our distinctiveness and the future of the species homo sapiens in the age of artificial intelligence” (AI and WH 2024). Hence, he raises the crucial anthropological question: “How can we remain fully human and guide this cultural transformation to serve a good purpose?” (AI and WH 2024). This shows that the Pope, and indeed the Church, recognizes how philosophically substantial the implications of the development of AI are for the philosophical foundations of Christian anthropology and, by extension, for traditional Western philosophical anthropology.
In this message, the Pope reemphasizes some of the questions and issues raised in his first message, reviewed earlier, as well as providing some perspective on AI ethics and governance, as reviewed in the first document. Nevertheless, this message could be said to lie more in the domain of philosophy and theology of AI. Hence, the Pope advocates a theology of AI that is centered on the principle of the wisdom of the human heart. The Pope contends: “At this time in history, which risks becoming rich in technology and poor in humanity, our reflections must begin with the human heart. Only by adopting a spiritual way of viewing reality, only by recovering a wisdom of the heart, can we confront and interpret the newness of our time and rediscover the path to a fully human communication” (AI and WH 2024). In other words, the advancement of technology challenges and calls for a more serious spiritual cultivation of the human person. The paradoxical good of the advancement of technology, especially of AI technology, is that it challenges us to be aware of the spiritual nature of the human person, because it compels us as a community of the human race to ask the critical and profound question: What makes us truly human? To grapple with this fundamental question, the Pope suggests the analogical notion of the “wisdom of the heart”. Employing the analogical connotations of the term “heart,” the Pope invites all to a reflection on true human wisdom and asserts: “Wisdom of the heart, is the virtue that enables us to integrate the whole and its parts, our decisions and their consequences, our nobility and our vulnerability, our past and our future, our individuality and our membership within a larger community” (AI and WH 2024).
The wisdom of the heart is a specifically human form of knowledge that is not computable and cannot be designed into AI systems. Hence, the Pope argues that this wisdom of the heart “cannot be sought from machines” (AI and WH 2024). The wisdom of the heart is not present and cannot be present in AI systems. Thus, he argues that the term “intelligence” in AI can be misleading when used in the same sense as human intelligence, by which alone the “wisdom of the heart” is made possible, notwithstanding the enormous capacity of AI systems in terms of data storage and information processing. To this effect, he submits: “It is not simply a matter of making machines appear more human, but of awakening humanity from the slumber induced by the illusion of omnipotence, based on the belief that we are completely autonomous and self-referential subjects, detached from all social bonds and forgetful of our status as creatures” (AI and WH 2024). Technological progress in the design and development of AI systems carries the epistemic temptation of ontological omnipotence for humans, who have always wanted to be like God: the deceptive belief of being capable of recreating the human person and even of creating posthuman and transhuman persons. The Pope sees this unwillingness of human beings to accept their insufficiency and vulnerability, which drives the creation of artifacts prone to abuse, as “the primordial temptation to become like God without God” (AI and WH 2024).
As a result of this tendency in human nature, the Pope points to the ambivalence of AI: “Artificial intelligence systems can help to overcome ignorance and facilitate the exchange of information between different peoples and generations”; ‘at the same time, they can be a source of “cognitive pollution”, a distortion of reality by partially or completely false narratives, believed and broadcast as if they were true’ (AI and WH 2024). Therefore, the Pope decries the use of AI systems, such as deepfakes, as instruments for fake news. Such acts are potent instruments for the destruction of human relationships. Hence, the Pope reiterates the point made in his earlier documents that AI is not neutral, and we should not assume that algorithms can be neutral; to this end, he emphasizes the need for the proper and ethical regulation of the entire AI ecosystem.
One of the temptations of the digital age is the conscious and/or unconscious assumption that digitalization is in itself intrinsically valuable; the more a thing is digitalized, the more valuable it is. This is a temptation that could easily result in the reduction of the human person to mere digital elements, as if a person were a number or could be valued merely as a numerical quantification. Hence, the Pope warns of the risk, at this time of digital revolution, “of turning everything into abstract calculations that reduce individuals to data, thinking to a mechanical process, experience to isolated cases, goodness to profit, and, above all, a denial of the uniqueness of each individual and his or her story. The concreteness of reality dissolves in a flurry of statistical data” (AI and WH 2024). In the last section of this document, Pope Francis concludes his message by raising about eleven questions that bear on transparency, inclusivity, accessibility, and accountable, ethical information processing in the era of AI. These are questions that need to be critically reflected on, and the responses to them are important for the formulation of AI regulations and policies.
2.4. Antiqua et Nova: Note on the Relationship Between Artificial Intelligence and Human Intelligence (Antiqua et Nova) (2025)
This fourth and most recent AI document of the Church is unlike the second and third reviewed above, which are Pope Francis’s messages about AI; like the first document, it comes from an office of the Vatican Curia, in this case the Dicastery for the Doctrine of the Faith and the Dicastery for Culture and Education. It is the most elaborate document of the Church on AI, building on the questions and issues already raised and interrogated in the first three documents. Hence, to avoid unnecessary repetition, the focus here will be on new perspectives, conceptions, and insights that have not been profoundly expounded in the earlier documents.
This document begins by reemphasizing that “our scientific and technological abilities” are special attributes of what it means for us to “be humans” (Antiqua et nova, n. 2). It goes further to state clearly its main objective, which is to address “the anthropological and ethical challenges raised by AI—issues that are particularly significant, as one of the goals of this technology is to imitate the human intelligence that designed it” (Antiqua et nova, n. 3). The document acknowledges the potential of AI technology “to learn and make certain choices autonomously, adapting to new situations and providing solutions not foreseen by its programmers” (Antiqua et nova, n. 3). Given this capability, the document posits that the development of AI “raises critical concerns about AI’s potential role in the growing crisis of truth in the public forum” and thus “raises fundamental questions about ethical responsibility and human safety, with broader implications for society as a whole” (Antiqua et nova, n. 3). The document also clearly states for whom it is directly and indirectly written: “Committed to its active role in the global dialogue on these issues, the Church invites those entrusted with transmitting the faith—including parents, teachers, pastors, and bishops—to dedicate themselves to this critical subject with care and attention….it is also meant to be accessible to a broader audience, particularly those who share the conviction that scientific and technological advances should be directed toward serving the human person and the common good” (Antiqua et nova, n. 5).
All of Section II of the document is devoted to the question “What is Artificial Intelligence?”, discussing the historical development of the concept of “Artificial Intelligence” from the workshop organized by John McCarthy at Dartmouth College in 1956 and the establishment of AI as a research project. It raises the contemporary perceptions of the reality, science, and myth of AI technology today, such as the reality of “narrow AIs”, the scientific speculation of “Artificial General Intelligence” (AGI) and artificial “superintelligence” (ASI), and the mythology of “super-longevity” or so-called “posthumanism” and/or “transhumanism.”
Section III of the document distinguishes between the notion of “intelligence” as ascribed to AI and the notion of “intelligence” as attributed to humans, relying on the intellectual richness of the classical philosophical tradition and Christian theology. Hence, the document proceeds with an exposition of the Aristotelian–Thomistic metaphysics of the human person and its cognitive theories of human intelligence, which remain at the heart of the Christian, especially Catholic, conception of intelligence. On this basis, the document maintains: ‘In the classical tradition, the concept of intelligence is often understood through the complementary concepts of “reason” (ratio) and “intellect” (intellectus)’ (Antiqua et nova, n. 14), which are not separate faculties but two modes of operation of one and the same faculty, intelligence. The “term intellect is inferred from the inward grasp of the truth, while the name reason is taken from the inquisitive and discursive process” (Antiqua et nova, n. 14). The document thus highlights the following properties as defining the specificity of human intelligence: embodiment (in the human person, spirit and matter “are not two natures united, but rather their union forms a single nature” (Antiqua et nova, n. 16)); transcendence and the soul’s self-possessed freedom of the will; relationality (human beings are “ordered by their very nature to interpersonal communion” (Antiqua et nova, n. 18)); and the relationship with the Truth (human intelligence is ultimately “God’s gift fashioned for the assimilation of truth” (Antiqua et nova, n. 21)).
On the ethics of AI, the document acknowledges the good of the science and technology of AI, but, as with all science and technology, and indeed all human endeavors, we are reminded that “The Church is particularly opposed to those applications that threaten the sanctity of life or the dignity of the human person” (Antiqua et nova, n. 38). The document emphasizes that only humans are truly moral agents and can take moral responsibility. Hence, it reminds us that ‘Technological products reflect the worldview of their developers, owners, users, and regulators, and have the power to “shape the world and engage consciences on the level of values.” On a societal level, some technological developments could also reinforce relationships and power dynamics that are inconsistent with a proper understanding of the human person and society’ (Antiqua et nova, n. 41). Hence, it reemphasizes the criterion for ethical AI proposed by Pope Francis: “The commitment to ensuring that AI always supports and promotes the supreme value of the dignity of every human being and the fullness of the human vocation serves as a criterion of discernment for developers, owners, operators, and regulators of AI, as well as to its users” (Antiqua et nova, n. 43). It also restates what Pope Francis calls the “technocratic paradigm,” “which perceives all the world’s problems as solvable through technological means alone” (Antiqua et nova, n. 54). To this effect, the document restates the risk of “anthropomorphizing AI,” that is, of referring to or speaking about AI as if it were a human person (Antiqua et nova, nn. 59–61).
3. Scientific and Technological Epistemological Foundations of Pope Francis’s Thoughts on AI
This section discusses the epistemic conception of science and technology that underpins the four AI documents of the Church. The point is to examine and evaluate how this conception conditions the conclusions and positions maintained by the Church. The best way to begin is to state how the Church defines or describes AI in these documents. A clear place to start is Section II of Antiqua et nova, which is entirely devoted to the question: “What is Artificial Intelligence?” The document quotes and references John McCarthy’s definition of AI as “that of making a machine behave in ways that would be called intelligent if a human were so behaving” (Antiqua et nova, n. 7). Based on this definition, the document maintains that McCarthy’s famous workshop launched a “research program focused on designing machines capable of performing tasks typically associated with the human intellect and intelligent behavior” (Antiqua et nova, n. 7). The question one could ask is: Does the Church believe that this research program is possible? Or, put differently: To what extent does the Church believe this research program could be actualized?
The Church believes and affirms that this research program “has advanced rapidly, leading to the development of complex systems capable of performing highly sophisticated tasks” (Antiqua et nova, n. 8). This is understood as what is technically called “narrow AI” systems. Thus, for the Church, “narrow AI” systems are not only possible, but they are now a reality. Though the Church queries the notion of “intelligence” employed in denoting AI, it concedes that AI and/or ML systems demonstrate capacities for “statistical inference”, “logical deduction”, “analyzing large data”, identifying patterns, making predictions, “mimicking some cognitive processes typical of human problem-solving”, responding to various forms of human input, adapting to new situations, learning, and suggesting “novel solutions not anticipated by their original programmers” (see Antiqua et nova, n. 8). Accepting that AI and ML systems have these and other cognitive capabilities, even affirming that, on certain occasions, these capacities are exercised more efficaciously than by humans (see Antiqua et nova, n. 8), the Church still questions the notion of “intelligence” in AI. The Church clearly denies the univocal signification of the term “intelligence” for humans and AI systems. It maintains: ‘In the case of humans, intelligence is a faculty that pertains to the person in his or her entirety, whereas in the context of AI, “intelligence” is understood functionally, often with the presumption that the activities characteristic of the human mind can be broken down into digitized steps that machines can replicate’ (Antiqua et nova, n. 10). This distinction is based on the critique of functionalism (and/or behaviorism) in the philosophy of mind and the question of intelligence in AI (see Antiqua et nova, n. 11).
However, it seems that the Church does not completely deny “intelligence” to AI; it only maintains a difference in gradation. The overt behavior of human intelligence, the Church seems to argue, is far broader than that demonstrated by AI systems. That of AI “does not account for the full breadth of human experience, which includes abstraction, emotions, creativity, and the aesthetic, moral, and religious sensibilities” (Antiqua et nova, n. 11). To be clearer, the Church argues that, unlike human intelligence, ‘in the case of AI, the “intelligence” of a system is evaluated methodologically, but also reductively, based on its ability to produce appropriate responses—in this case, those associated with the human intellect—regardless of how those responses are generated’ (Antiqua et nova, n. 11). Based on this, the Church’s conclusion is that, although AI systems “perform tasks”, they do not “think” (Antiqua et nova, n. 12). For the Church, there is more to “thinking” or the faculty of “thought” than the mere performance of tasks, however complicated the tasks may be. The argumentation that grounds this position is based on the Aristotelian–Thomistic philosophical and theological tradition of the Church, which will be expounded below.
Before that, it is important to highlight other scientific and technological epistemological perspectives that relate to the above definition of AI and the Church’s conception of “intelligence”. There is the notion of “algorithmic conditioning” (RCAI Ethics). The Church believes that the science and technology of algorithms, being the product of humans with a concupiscence toward moral evil, are neither “disembodied” nor “neutral,” thereby questioning the purely scientific nature of algorithms (see AI and Peace, n. 2). Thus, algorithms are subjective and could be humanly manipulated, and they could be intentionally designed to be manipulative. To this end, the Church recommends an “algor-ethical” vision of algorithmic design (see RCAI Ethics). Hence, it is clear that the Church generally conceives AI as a species of “information technology” and “digital technologies” (AI and Peace, n. 2; Antiqua et nova, n. 3). The Church believes that AI should be characterized as a technology, even with its unparalleled cognitive efficiency. Hence, it discourages any mythologization of AI as possessing the essential nature of the human person and/or powers that transcend human personhood (see Antiqua et nova, n. 9).
There is also the notion of AI systems as “socio-technical systems” (AI and Peace, n. 2). They are socio-technical systems, according to the Church, because they are “forms of intelligence” designed with the artificial mechanical and electronic capacity to simulate and replicate human cognitive abilities (see AI and Peace, n. 2). However, because AI systems presently do not possess the integrated and unified cognitive powers of the human person, the Church maintains that AI “intelligence” is “fragmentary” and, as such, AI “ought to be understood as a galaxy of different realities” (see AI and Peace, n. 2). This implies the complexity and sophistication latent in AI as cognitive systems. Put simply, the Church is not underestimating or playing down the reality of the AI systems with which we humans will have to co-exist; hence, it ardently calls for AI ethics and governance (see RCAI Ethics; AI and Peace, nn. 2 and 6). The reason is simple: “Artificial intelligence will become increasingly important. The challenges it poses are technical, but also anthropological, educational, social and political” (AI and Peace, n. 2).
4. Philosophical and Theological Intuitions Underpinning the AI Pontification
The Church’s pontification on AI is naturally based on its traditional philosophical and theological worldview. In fact, it could be argued that no institution is better equipped than the Catholic Church, with its intellectual resources of philosophical and theological depositum, to interrogate AI and thus contribute copiously from the perspective of the humanities. Already, its copious documents on social teaching not only provide a wealth of content to draw inspiration from, but also offer methodological precision and clarity. Subject matters such as the following have been substantially interrogated in the social teachings of the Catholic Church and have been constantly employed in writings on diverse current issues: the nature, dignity, and end of the human person; the revelational knowledge of every human person as uniquely created in the image of God (imago Dei); the role and place of the human person in the world; the implications of the creative powers of the human intellect and imagination for good and for evil in the world; the preferential option for the poor; the dignity of and right to labor; the common good; social and structural justice; care of the earth; the sacredness of life and the right to a meaningful livelihood; and so on.
Hence, for a proper understanding and analysis of the Church’s positions and discourse on AI, an appreciable grasp of the enduring social teachings of the Catholic Church is very important, especially those documents explicitly referenced in the AI documents, together with the Sacred Scriptures: “Gaudium et Spes,” “Redemptor Hominis,” “Sollicitudo Rei Socialis,” “Fides et Ratio,” “Caritas in Veritate,” “Evangelii Gaudium,” “Laudato Si’,” “Fratelli Tutti,” “Gaudete et Exsultate,” “Dilexit Nos,” and a few other public addresses and messages of Pope Francis (AI and Peace). Acquaintance with most, if not all, of these documents will intellectually dispose the reader to grasp the philosophical and theological logic of the Church’s pontification on AI systems. As in the section on the scientific and technological foundations of the Church’s thought on AI discussed above, the most current document, Antiqua et nova, is the place to begin. The entirety of Section III of Antiqua et nova is devoted to discussing the notion of “intelligence” in the philosophical and theological tradition of the Church.
The Church maintains its intellectual tradition of employing as rational ground the Aristotelian–Thomistic philosophy and the Thomistic theology articulated with this philosophy. The philosophical relevance of AI is its claim to “intelligence.” The invention and continuous development of AI have philosophically problematized the notion of “intelligence,” and this problematization has both philosophical and theological anthropological implications, which in turn have ethical implications. As Onyeukaziri (2023) has argued, “the science and research in AI and neuroscience have the strongest implications on the Church’s notion of the human person. This is because it is on them that most of the other human and social sciences are established or inspired, since they raise fundamental questions concerning the notion of person.” Referencing Aristotle in discussing the “mind” (intellectus, intellect) as the nature proper to the human person, the Church maintains that “all people by nature desire to know”; thus, it argues: “This knowledge, with its capacity for abstraction that grasps the nature and meaning of things, sets humans apart from the animal world” (see Antiqua et nova, n. 13). If this is the case, then the claim that AI demonstrates cognitive capacities for knowledge generation, regeneration, interpretation, patterning, and prediction is indeed a cause for concern.
The Church dismisses any concerns about AI’s claim to knowledge, or the knowledge attributed to AI, by employing the Aristotelian–Thomistic philosophical distinction between “reason” (ratio) and “intellect” (intellectus), which are not two different faculties or powers, but two modes of the one faculty of intelligence or the mind. While the intellect deals with the apprehension (intuitive grasp) of truth, reason deals with the inquisitive, analytic, and discursive process of the mind (see Antiqua et nova, n. 14). In the human mind, properly speaking, these two powers are not only required but are operative in the act of intellection (intelligere) or knowing/understanding. Both powers, however, are not present in AI, which, though it demonstrates a capacity for reasoning, does not have the capacity of the intellect. This metaphysical psychological structure employed by the Church is sustained by the Christian theological anthropology that conceives “the human person as a being consisting of both body and soul—deeply connected to this world and yet transcending it” (Antiqua et nova, nn. 13–14). That is, the human person is an embodied, integral being of spiritual (soul) and material (body) substances (see Antiqua et nova, n. 16). The need for a reconstruction and substantiation of this theological anthropology, in the face of the challenge posed to it by contemporary science in AI and neuroscience, has been advocated by Onyeukaziri (2023).
Hence, for the Church, the spiritual aspect of the human person privileges every human person with the capacity for transcendental knowledge and truth. Thus, the knowledge domain of the human person is not only broader and more profound than that of AI because of the capacity for intellectus, but far more so given the capacity for transcendental knowledge and eternal truth. Moreover, the embodied human person is privileged with personalistic and interpersonalistic knowledge owing to the “relationality” inherent in the human person. There is subjective and personalistic knowledge that the human person grasps by relating with oneself (self-knowledge), with one’s personhood (personal knowledge), and with other persons (interpersonal knowledge). These different realms of knowledge are consequential to the construction of socio-political, cultural-religious, and techno-scientific knowledge among humans, which cannot be the case in AI systems (see Antiqua et nova, nn. 18–29).
Hence, the human person is more than a rational and intelligent being; every human person is created in the image of God (imago Dei). Being a created being, the human person is contingent and dependent on God the creator. Hence, the Church contests any form of techno-deification, the attempt to deify humanity by means of advanced technology, of which AI is seen as a possible means. Thus, the Church critiques propositions that human mortality can be overcome, the belief that, with technology such as human–AI interfaces or integration, mortality will be completely destroyed. The Church affirms human mortality and contends against aspirations that tend “to overcome every limit through technology,” for ‘in an obsessive desire to control everything, we risk losing control over ourselves; in the quest for an absolute freedom, we risk falling into the spiral of a “technological dictatorship”’ (AI and Peace, n. 4). This self-deception is rooted in the atheism of not accepting the mystery of the incarnation and resurrection of the Christian faith, by which God the creator, through the gift of his beloved son Jesus Christ, has recreated the world, restored the wounded nature of humans, given humans the grace to share in his divine trinitarian nature, and eternally conquered the power of mortality in the death of his son on the cross of Calvary. This is the ancient sin of humanity: “the primordial temptation to become like God without God” (AI and WH). Hence, the advent of AI has given humanity the opportunity to appreciate the Christian mystery of salvation in Christ. It has challenged us to reflect deeply and critically on the crucial anthropological question: “How can we remain fully human and guide this cultural transformation to serve a good purpose?” (AI and WH). To this end, humanity is invited to reflect on and cultivate true human wisdom, the “wisdom of the heart,” which AI does not have and cannot have (AI and WH).
The need for ethically designed AI and for AI ethical discourse and governance must be philosophically and theologically grounded. The Church provides a substantial direction for ethical AI and AI ethics, thus: “The Church is particularly opposed to those applications that threaten the sanctity of life or the dignity of the human person” (Antiqua et nova, n. 38). The ethics of AI stands or falls on whether or not it is constructed on the pillars of the preservation of “the sanctity of life” and “the dignity of the human person.” This being the case, however much effort is put into designing what are now called “agentic AI” and “autonomous AI” systems, only the human person is, in the proper sense, a moral agent and a freely autonomous system. Hence, a proper criterion for ethical AI should underscore the following point: “The commitment to ensuring that AI always supports and promotes the supreme value of the dignity of every human being and the fullness of the human vocation serves as a criterion of discernment for developers, owners, operators, and regulators of AI, as well as to its users” (Antiqua et nova, n. 43).
5. Contributions to AI Ethics and Governance
While many nations, sovereign states, and international organizations are still trying to articulate and enact regulations and policies on AI ethics and governance, the Catholic Church has already presented the four documents on AI discussed above—and even those who have articulated regulations and policies see the need to update or review existing ones and rearticulate new ones. At present, every regulation or policy on AI will keep undergoing review and rearticulation, considering the constant, fast rate of design and development of AI systems. Hence, a fundamental theoretical background is needed for the substantiation and conceptualization of the discourse on AI ethics and governance. The documents of the Church on AI, even granting the need for further robustness as AI research and development advance, provide philosophical and theological foundations, epistemic imaginaries, and directions for articulations of AI ethics and governance.
It is not surprising that, within the last five years, there have been several AI seminars, conferences, and summits with a special focus on AI ethics and governance. This is because, even with the narrowness of present AI systems, the impact of their cognitive operation is astonishing, and the socio-political, cultural, educational, economic, and even religious challenges they portend are revolutionary. Hence, these seminars, conferences, and summits have produced a number of white papers, policies, bills, acts, and other official national and institutional documents on AI. On the global stage, the past three years have seen the organization of the following AI summits: the AI Safety Summit at Bletchley Park in the United Kingdom (1–2 November 2023), the AI Seoul Summit in South Korea (21–22 May 2024), and the most recent one, the Paris AI Action Summit in France (6–11 February 2025). These are just a few out of many. Within this period, different countries and organizations have enacted various AI guidelines or regulations, such as:
the Organization for Economic Cooperation and Development’s (OECD 2019) Recommendation on Artificial Intelligence, the European Union’s (EU 2019) Ethics Guidelines for Trustworthy AI, the United Nations’ (2021) Artificial Intelligence Act, the United States of America’s (2022) Blueprint for an AI Bill of Rights and (2023) Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, Canada’s (2022) Artificial Intelligence and Data Act, the United Kingdom’s (2023) The Governance of Artificial Intelligence: Interim Report (2022–2023), the European Union’s (2024) Artificial Intelligence Act, and the United Nations AI Advisory Body’s (2024) Governing AI for Humanity.
While it should be applauded that nations and international organizations are taking the challenges that come with AI technology seriously and are coming up with AI ethics and governance documents, it is regrettable that many nations’ documents on AI are constructed from the standpoint of economic and technological competitiveness, on how to take or maintain the lead in the research and development of AI, and not on the critical global implications of AI development. Some have emphasized AI risk and/or AI security; a few are indeed concerned about the impact of AI on the climate and the public good, hence advocating the development and deployment of AI based on the principles of the United Nations’ Sustainable Development Goals (SDGs). The Church is in a strategic position to guide the global community epistemologically and metaphysically toward anthropologically and nature-centered AI ethics and governance regulations, since the Church is not in any economic or technological competition with anybody. The Church exists as the sacrament of God, for the glory of God, and as the mystical body of Christ for the service of the common good and the salvation of all.
The Church, therefore, has the evangelical duty in service of the common good to call for the development of international law on AI, as Pope Francis rightly observed:
The global scale of artificial intelligence makes it clear that, alongside the responsibility of sovereign states to regulate its use internally, international organizations can play a decisive role in reaching multilateral agreements and coordinating their application and enforcement. In this regard, I urge the global community of nations to work together in order to adopt a binding international treaty that regulates the development and use of artificial intelligence in its many forms.
(AI and Peace, n. 8)
In calling socio-political institutions to come up with an AI treaty, the Church provides a philosophical and theological ground on which an AI treaty could be formulated. Thus, the Church maintains: “The work of drafting ethical guidelines for producing forms of artificial intelligence can hardly prescind from the consideration of deeper issues regarding the meaning of human existence, the protection of fundamental human rights and the pursuit of justice and peace” (AI and Peace, n. 8). The Church maintains that the questions of the meaning of human existence, the protection of fundamental human rights and the pursuit of justice and peace must be central in any true ethical treaty on AI. For ethical discourse to be reasonable, human existence must be meaningful, and, for human existence to be meaningful, human life must be considered sacred. For, if human life is profane and ordinary, then human existence will be without substance and intrinsic value. The question of fundamental human rights and the question of justice and peace will be absurd if human existence is meaningless. The question of the meaning of human existence remains one of the ultimate subject matters in the epistemic enquiry of philosophy and theology. Because of the importance of the question of the meaning of human existence in the discourse on AI ethics and governance, the epistemic and didactic role of the Church in the conversation on AI will always remain not only relevant but significantly cogent.