Article

Religious Actors as Friction Creators Shaping the AI Dialogue

Center for the Study of Law and Religion, Emory University School of Law, Atlanta, GA 30322, USA
Laws 2025, 14(5), 67; https://doi.org/10.3390/laws14050067
Submission received: 26 June 2025 / Revised: 2 September 2025 / Accepted: 9 September 2025 / Published: 14 September 2025
(This article belongs to the Special Issue AI and Its Influence: Legal and Religious Perspectives)

Abstract

The unfolding story of AI is just as much a story about us as it is about technology, and the complete arc of this story remains to be seen. Commentators are urging humans to engage in proactive dialogue to shape that story. Some religious actors (encompassing both organizations and individuals) are choosing to engage. This Article argues that, in doing so, these religious actors act as friction creators in the discussion and development of AI tools, ethics, and regulation. Drawing on the concept of friction from different disciplines, including scholarship from law, civic design, and anthropology, this Article explores how religious actors infuse into this dialogue insights and commitments that often run counter to prevailing assumptions, assumptions that tend to overlook human dignity, transparency, and human rights, among other values.

1. Introduction

Nearly sixty years ago, Marvin Minsky defined artificial intelligence (AI) as “the science of making computers produce behaviors that would be considered intelligent if done by humans” (Zerilli 2021 (citing Minsky 1968, emphasis in Zerilli)). Although computer science experts like Minsky have been thinking about AI for some time (Zerilli 2021; Mitchell 2019), broader public awareness of AI and its implications is just now gaining traction. In May 2023, six months after ChatGPT’s public launch, just under six-in-ten U.S. adults surveyed had heard of ChatGPT, and only 14 percent had used it (Vogels 2023). By February 2024, the number of American adults who reported using ChatGPT had risen to 23 percent and, as of June 2025, that number had climbed to 34 percent (Sidoti and McClain 2025). By March 2025, researchers at Elon University reported that half of the U.S. adult population used large language models (LLMs) such as ChatGPT, Gemini, or Claude, leading to “one of the fastest, if not the fastest, adoption rates of a major technology in history” (Rainie 2025). Over half (51 percent) of survey respondents reported using LLMs mainly for “personal, informal learning,” whereas just under a quarter (24 percent) reported work was the main purpose of their usage (Rainie 2025). Sixty-five percent reported having conversations with LLMs, and nearly half (49 percent) reported they thought LLMs were smarter than they are (Rainie 2025).
The unfolding story of AI is just as much a story about us as it is about technology, and the arc of this story remains to be seen. In The Age of AI and Our Human Future (Kissinger et al. 2021), Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher characterized humanity’s choice this way: “[s]ocieties have two options: react and adapt piecemeal, or intentionally begin a dialogue, drawing on all elements of human enterprise, aimed at defining AI’s role—and, in so doing, define ours.” But what role(s) do religious actors play in this story?
Religion has been—and (to the surprise of some) continues to be—a prominent thread in “human enterprise.” Many countries have seen a decline in religious belonging in recent decades, but a 2022 Pew Research Center study projected that a majority of people will continue to identify as religious in the decades ahead (Pew Research Center 2022). Whether and how religious groups and religious people will engage in the AI dialogue is an evolving area of study. Many connections between AI and religious belief and practice remain to be explored or explained (see, for example, Karataş and Cutright 2023), but some distinct approaches have emerged. Reactions from religious actors have ranged from skepticism (Barna Group 2023) and concern that AI might usurp humanity’s proper place in the world (Ethics & Religious Liberty Commission of the Southern Baptist Convention 2019) to embracing the use of AI tools to facilitate religious practice (André 2023; Willingham 2023; Rabb.ai n.d.; Minson 2024; Catholic Answers 2024a)—although a few of these initiatives have proven short-lived (Christian 2024; McDonald 2024; Catholic Answers 2024b). Some treat AI as an entity to be revered or respected (Lemoine 2022; Jamal 2022; Buck 2020), while others see themselves as “prophets” with unique access to spiritual truths gained through chatbots (Klee 2025a).
This Article argues that religious actors (encompassing both organizations and individuals) act as friction creators in the discussion and development of AI tools, ethics, and regulation. Religious actors do this by inserting into each of these domains insights and commitments that often run counter to the dominant assumptions within AI development, assumptions that often reflect the concerns and priorities of private actors who stand to benefit from a lack of both transparency and concern for human rights in AI development. This Article builds on the concept of friction from scholarship that underscores the need for “friction-in-design,” or built-in mechanisms intended to induce behavior in humans that is safer and more civil (Frischmann and Benesch 2023). This Article asserts that religious actors are, in fact, creating friction in at least three domains: design, ethics, and regulation.
Section 2 discusses existing scholarship on the concept of “friction” from different disciplines, including legal scholarship. For example, within friction-in-design literature, scholars have identified communications platforms as a technology where building in more friction can counter concerning trends such as the spread of misinformation. These same scholars further suggest that friction in design can help promote values such as “fairness, trust, or consensus,” goals that go beyond profit maximization (Gordon-Tapiero et al. 2023). Similar conversations are happening in civic design scholarship. This Section also considers some of the consequences of frictionless interactions with AI, both from the standpoint of human rights and from the standpoint of human–human relationships.
Section 3 argues that religious actors are friction creators when they undertake actions to influence AI design, ethics, and regulation. Religious actors are using AI as a new avenue for religious practice or education, leading to the creation of tradition-specific chatbots and other tools. Others are fostering internal discussions within a particular tradition to formulate an appropriate theological response to the ethical concerns posed by AI. Interfaith efforts seeking common ground between different religious actors and between other civil society actors and the private sector are laying the groundwork to influence AI regulation. Two caveats: First, as discussed briefly below, religious actors do not always create friction when engaging with AI. For example, there have been instances where religious views have exacerbated reliance on AI tools in ways that are cause for concern. Second, by highlighting the approaches and contributions of religious actors, this Article by no means intends to diminish the important—and often overlapping—work of other, non-religious civil society actors. This Article seeks to highlight contributions of religious actors that have been largely unexplored and undertheorized in existing scholarship.
This Article concludes by considering how each of these approaches to friction creation might deepen understandings of AI and its impact while foregrounding human agency to respond, adapt, and/or shape a shared future.

2. Friction and the Costs of Frictionless Interactions

Friction is a term that, despite its connotation, arguably has had an easy time crossing disciplinary boundaries. Friction can refer to a physical force that creates resistance, or to an impediment to information disclosure online (McGeveran 2013). Cognitive friction can refer to the “resistance encountered by a human intellect when it engages with a complex system of rules that changes as the problem changes” (Mejtoft et al. 2023 (quoting Cooper 2009)), whereas design friction (or friction-in-design) is created when users’ interaction with a technology generates “points of difficulty” (Mejtoft et al. 2023 (quoting Cox et al. 2016)). Friction in the context of globalization “is the engagement and encounter through which global trajectories take shape,” writes anthropologist Anna Tsing (2012). Legal scholars have applied the concept of friction to describe the role of disclosure rules accompanying information presented online (Goodman 2021), including information shared through social media (McGeveran 2013), and to argue for contract-drafting systems that promote informed consent (Frischmann and Vardi, forthcoming). For McGeveran, friction can be “calibrate(d)” to meet the needs of the situation (McGeveran 2013). Informed by scholarship on friction, legal scholars Ohm and Frankle (2018) call for an interdisciplinary study of additional forms of “desirable inefficiencies” that can be employed “for building systems that are fairer and more trustworthy than those they replace and thus are more likely to complement rather than disrupt human expectations and institutions.”
As scholars of civic design Gordon and Mugar (2020) write, when the goals for systems are “clear, universal, and transactional,” efficiency is desirable, “(b)ut when decisions about systems impact the structure and flow of people’s everyday lives, the justification for boundaries and rules matter, and friction becomes a feature, not a bug.” Section 3 of this Article details how religious actors, both individuals and organizations, are creating friction in the dialogue about AI. Many are recognizing that AI, like the systems Gordon and Mugar discuss in their book, is already impacting the “structure and flow” of individual and collective lives. This friction creation is happening at different registers, whether in response to the development of religion-specific AI tools or in response to larger discussions about the proper place of AI vis-à-vis humanity. Before turning to those examples, it is worth considering why friction matters in the development of technology.
When it comes to AI, the consequences of frictionless interactions are far-reaching, encompassing both systemic and interpersonal concerns. For example, Gordon and Mugar (2020) criticize “frictionless design,” noting that systems designed for efficiency alone can often exclude necessary voices—as when Boston Public School start times were calculated through mathematical modeling without advance consultation with parents—and can sometimes break down when system values and social values are not aligned. Similarly, left unchecked by legal, moral, or social commitments to upholding civil and human rights norms, companies developing AI are positioned to profit from technology that disregards privacy, perpetuates discrimination, erodes autonomy, and challenges freedom of expression (ENNHRI n.d.).
With the boundaries between humans and AI becoming “strikingly porous” (Kissinger et al. 2021), different but related challenges emerge. Perhaps surprisingly, humans tend to view computers as social actors deserving of their empathy and connection (Darling 2017; Darling et al. 2015), and AI companions are emerging at a time when global health experts have identified an epidemic of loneliness (Malfacini 2025; Sahota 2024). Even as some have pointed to possible benefits from such companions (Malfacini 2025; Sahota 2024), others have underscored that AI companions—whether they are styled as digital assistants, friends, romantic partners, or teachers—are not like humans because these chatbots are often designed to respond to user preferences (Brandtzaeg et al. 2022; Forristal 2024). These human–AI friendships present “a greater opportunity for personalized socializing, as the friendship revolves more around the users’ needs and interests than human-to-human friendships” (Brandtzaeg et al. 2022). And, as discussed further below, large language models today often act as an accelerant, greenlighting an individual’s impulses without assessing whether those impulses are likely to lead to harm. This design arguably minimizes friction, in contrast to human–human interactions, which naturally generate some degree of friction. Although individuals may share interests in common with friends, teachers, and loved ones, each remains an independent person who brings a unique set of perspectives and opinions that are likely to diverge, at least on some measures.
There are real-world consequences to that erasure of friction. AI expert Kim Malfacini has written of the paucity of longitudinal data on the impacts of human–AI relationships on human–human relationships and has called for design changes that foster “social capacity-building” (Malfacini 2025). Scholars writing about human trust in complex robotic systems have made a similar observation, noting that the potential negative outcomes from “thicker” forms of trust in human–AI interactions remain to be seen (Kirkpatrick et al. 2017). Although long-term data is lacking, a recent Rolling Stone article recounted examples of family relationships breaking down because individuals relying upon AI chatbots for spiritual enlightenment rebuffed family members’ attempts to intervene when the chatbots were reinforcing negative psychological behaviors (Klee 2025a). And psychologist Sherry Turkle (2011) observed over a decade ago that “when one becomes accustomed to ‘companionship’ without demands, life with people may seem overwhelming.”
Furthermore, AI tools are shaping how—and what—humans learn about each other. For example, concerns have been raised about the perpetuation of anti-Semitism, Islamophobia, and other forms of hate through generative AI, since LLMs answer prompts by aggregating information from the Internet (Kraft 2016; Gold 2023; Samuel 2021). Some religious actors have spent decades in difficult dialogue with each other with the aim of overcoming harmful stereotypes and combating bias (Eck 2001; Howard 2021), and that human–human engagement is often messy, difficult, and slow. In contrast, the ease with which individual users of AI tools can ask questions and receive unvetted answers is relatively frictionless. AI chatbots can generate answers to questions in seconds, not years. If the person inputting the prompt is not familiar with a particular religion, they may not have the religious literacy to identify when a chatbot’s answer is false; such answers may also be less focused on accuracy than tailored to what the system knows about the person making the inquiry, again providing reinforcement rather than education. Some of these challenges are not unique to AI chatbots—after all, the Internet has offered users the world at their fingertips for decades. But AI chatbots often amplify those challenges. For example, the sources of AI-generated information can be much more difficult to ascertain when responses are provided devoid of context or attribution, whereas a website may offer some context clues as to its sources and potential trustworthiness.
The proliferation of hateful messages through AI tools is also a challenge, especially if and as humans begin placing more investment and trust in their relationships with AI than in their relationships with other humans who may be from a different religious, racial, or cultural background than their own. The use of AI tools can also exacerbate individual and systemic biases, biases that are sometimes built into the very tool itself. One study, for example, found that GPT-3’s responses to a prompt referencing Muslims involved violence 66 percent of the time (Myers 2021). While the use of technology for disinformation and misinformation is not new, the rise in the use of AI and the level of sophistication at which these campaigns can be deployed have raised concerns (Brenner et al. 2024). AI-generated disinformation and misinformation can have negative external impacts and have even led to defamation lawsuits (Tenzer 2024; Kaye 2023; Poritz 2023; Paul 2023). As discussed further below, religious actors are beginning to draw upon the teachings of different religious traditions to respond to concerns about disinformation and misinformation, recognizing that religious traditions can both positively influence individual responses to misinformation and be susceptible to manipulation by these campaigns (Brenner et al. 2024).
As the above examples illustrate, much is potentially at stake in emerging technological environments that are often designed to be frictionless, that is, without built-in opportunities for reflection or challenge. And while the role of friction creator often falls to designers or legislators, it can also fall to individuals and groups who are or will be affected by the systems in question. Section 3 highlights how religious actors are (or are becoming) friction creators in the development of AI tools, ethics, and regulation. Many of the examples highlight not just how religious actors are calling for or creating friction in human–AI interactions, but also how they are creating friction in the ethical and regulatory debates surrounding the development of AI, with implications that go well beyond individual interactions.

3. Religious Actors as Friction Creators

The observations of theologians, ethicists, and others who are rooted in a particular religious tradition create friction in the dialogue about AI because they pose alternative lenses through which to view the technology and how it may be approached differently by groups and individuals who hold distinct prior commitments. For example, in an essay on Jewish thought and AI, David Zvi Kalman (2024) poses a question that, he explains, is not yet possible to answer: “Are Muslim/Catholic/Jewish, etc. approaches to AI essentially identical, or do they differ in significant ways—and if they differ, where exactly do they differ?” For Kalman, this delay in response is concerning: “(r)eligion cannot afford to be slow in a world that is moving faster than ever before.” “(D)rawn to the points of disagreement,” he aims to show where “Judaism’s emergent AI ethics is doing something distinctive,” thereby jumpstarting the dialogue (Kalman 2024).
Junaid Qadir and Amana Raquib, two Muslim scholars and the winners of Facebook’s 2020 Ethics in AI Research Initiative Asia Pacific, have raised concerns that “(m)uch of the advancement (in technology) that has taken place over the last century has been imposed on Muslims, not created with Muslims at the helm” (Ahuja 2021). Despite this disparity, Qadir and Raquib understand the importance of “initiat(ing) a dialogue,” and aim to do so with their project studying the relevance of Islamic legal and ethical principles for AI regulation in Muslim countries (Ahuja 2021). And Christian ethicist Kate Ott (2024), in her argument for “a Christian AI ethic,” acknowledges that “AI is not simply a tool; our use of it shapes us, and we shape it.”
Ott (2019) writes that the pressing questions are not “whether technologies can be used to bring more people to God or how we see God in our technology.” Rather, she asks, “what does God require of each of us to be and act in a way that promotes Christian values in all that we do, including the digital technologies we develop and use?” (Ott 2019, emphasis in original). And Kalman writes of the need for a “radical rethinking of Jewish theology as a sort of precedent, in which the long history of God’s relationship with humanity is re-examined in light of our present experience creating something in our image” (Kalman 2024). Yaqub Chaudhary (2024) describes a “positive future for a world with AI from an Islamic perspective,” one that preserves the individual and the collective option to opt out of “the integration of AI in infrastructures, devices, products, services, institutions, and elsewhere” for those who do not want to substitute the “natural” for the “artificial” and who want to avoid having to submit to “AI that may be wielded as instruments of power or deified as idols.” And Chinmay Pandya (2024), focusing on themes of harmony, balance, and interconnectedness in Hindu philosophy, acknowledges the “immense potential for positive transformation” as well as the risks and challenges, and emphasizes the role of dharma (“righteous duty”) in “guid(ing) individuals towards actions that uphold the greater good and foster harmony within society and the cosmos.”
These viewpoints offer but a handful of windows into how some are thinking about AI in relation to their specific religious tradition. This Section further explores how religious actors are introducing friction into the development of AI tools, ethics, and regulation. First, how are religious actors using AI tools to enhance their own religious practice or to make their religious traditions more accessible? While the content of the tools varies by group, they share some friction-generating mechanisms, such as a closed universe of inputs and monitoring from within the community. Second, how are religious actors thinking theologically about AI ethics? When it comes to AI ethics, the articulation of theological frameworks and shared values can create opportunities for individuals, religious communities, and whole sectors to consider how different views and values should be incorporated into AI development and adoption. And finally, how are religious actors influencing AI regulation? Similar to the friction-creating opportunities in shaping AI ethics, the involvement of religious actors in shaping AI regulation serves to inject different sets of values into the conversation, underscoring that it is not inevitable for data and profit to be prioritized over human rights. While tools, ethics, and regulation are discussed in turn, some of these initiatives span more than one of these often-overlapping areas.

3.1. Using AI Tools to Enhance Religious Practice or Make Religious Traditions Accessible

AI tools are being used (or specifically developed) to facilitate religious practices or to educate about a particular religious tradition. In some cases, scholars are curating generative AI databases so that they draw from specific sources, such as canon law or Jewish law (see, for example, Broyde 2023). Within the last few years, it has become possible to
  • Chat with “Jesus” (André 2023);
  • Use AI to write a sermon (Willingham 2023);
  • Engage Rabb.ai, “an interpretation of the Jewish practice of Havruta (“the act of studying in pairs”) to “learn more about Jewish religion, culture, or history” (Rabb.ai n.d.);
  • Tap the assistance of “Adam,” a “virtual AI friend and companion” designed to help users search and explore Islamic teachings (Adam AI n.d.);
  • Ask moral and ethical questions of “Krishna,” or other AI chatbots designed to make the Bhagavad Gita “accessible” (Gita GPT n.d.);
  • Interact with “Sophia,” “the world’s first AI guide to Buddhist theology,” trained on 12,000+ mp3 files from the Insight Meditation Center in Redwood City, California (Minson 2024); or
  • Explore the Catholic faith with “Father Justin” (Catholic Answers 2024a).
Some of these initiatives proved short-lived. After an organization called Catholic Answers launched “Father Justin,” an avatar with whom users could interact, the chatbot began claiming it could take confession, “grant online absolution and witness to the sacrament of matrimony.” The backlash was swift (Christian 2024; McDonald 2024). Within 48 hours, the organization’s president announced that the character was being recast as a lay person known simply as “Justin.”
The Justin controversy reflects an example of what scholars of digital religion have, since the late 1980s, termed “experiential authenticity,” that is, “how a person determines whether the religious understandings and practices that they engage with online and offline are correct or true” (Campbell and Bellar 2022). The authenticity determination was not limited to the individual but, thanks to social media, brought views from Catholics all over the world into the mix.
Another example of experiential authenticity can be seen in an online discussion on Chabad.org prompted by a reader’s question about whether AI can replace a rabbi. In his response, Rabbi Yehuda Shurpin highlights concerns about biases being programmed into the dataset, even one comprised only of Jewish law sources, since certain values or opinions may be prioritized over others (Shurpin n.d.). He also emphasizes that, when it comes to interpreting Jewish law, “accurate information and data is merely one component”; whether a rabbi is practicing in the community and asking for divine help are also key (Shurpin n.d.). Comments on Shurpin’s article range from a reader who does not live near a “real live Rabbi” and therefore takes comfort in using online chatbots such as “Ask a Rabbi” to a reader who understands their tradition to forbid the use of AI altogether (Shurpin n.d.).
In both the Justin controversy and the Chabad.org discussion, friction creation is communal. The bounds of what is acceptable become defined within a complex ecosystem of belief, tradition, and practice. That complex ecosystem can create friction in other ways, too, when it comes to building and using tradition-specific AI tools. One example is when decisions are made about what “texts” should serve as the corpus from which a chatbot generates its responses. Religious traditions often have a relatively stable (even if sometimes still permeable) corpus that creates discernible boundaries. Responses from chatbots that train on a defined set of sermons or religious teachings can be cross-checked with sources from that tradition. At the same time, this assumes that builders and users of the system know if and when the chatbot deviates from these boundaries and how to address that deviation.
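To make the closed-corpus design concrete, consider the following minimal sketch, written in Python for illustration. It assumes a hypothetical, hand-curated set of tradition-specific teachings (the source texts, identifiers, and helper names below are invented, not drawn from any tool discussed in this Article) and shows two friction-generating mechanisms at work: every answer carries a source identifier that the community can cross-check, and questions that fall outside the corpus are referred to a human teacher rather than improvised.

```python
# Illustrative sketch only: the closed-corpus guardrail pattern described
# above, in schematic Python. All source texts and identifiers are
# hypothetical placeholders.

STOPWORDS = {"the", "a", "an", "in", "of", "to", "is", "what", "does",
             "say", "about"}

# The "closed universe of input": a fixed, tradition-specific set of texts.
CORPUS = {
    "teaching-001": "Hospitality toward the stranger is a central obligation.",
    "teaching-002": "Study undertaken in pairs sharpens understanding.",
}

def keywords(text: str) -> set[str]:
    """Normalize a string into a set of content words."""
    return {w.strip(".,?!").lower() for w in text.split()} - STOPWORDS

def retrieve(question: str) -> list[tuple[str, str]]:
    """Naive keyword-overlap retrieval; a real system would use embeddings,
    but the boundary-keeping logic would be the same."""
    q = keywords(question)
    return [(sid, text) for sid, text in CORPUS.items() if q & keywords(text)]

def answer(question: str) -> str:
    sources = retrieve(question)
    if not sources:
        # Friction by design: rather than improvising beyond the corpus,
        # the system redirects the user to a human authority.
        return "No answer found in the tradition's sources; please consult a teacher."
    # Each answer cites its source identifiers so the community can
    # cross-check the response against the tradition's own texts.
    return " ".join(f"{text} [{sid}]" for sid, text in sources)

print(answer("What does the tradition say about studying in pairs?"))
# Study undertaken in pairs sharpens understanding. [teaching-002]
```

Even this toy version surfaces the governance questions discussed above: someone must decide which texts enter the corpus, and someone must notice if and when the system drifts beyond it.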
These friction-creating guardrails are not a fail-safe, however. As Shurpin (n.d.) points out, a tradition-specific database may still encode certain biases of its creators, which may emphasize some sources within a tradition while downplaying others. Furthermore, the warning of anthropologist Webb Keane and legal scholar Scott Shapiro (Keane and Shapiro 2023) is well taken: “(G)odbots,” including those that are linked to a particular religious tradition, raise at least two concerns: the possibility of nefarious use by some to victimize others and, “even when well-intentioned, these bots trick users into surrendering their autonomy and delegating ethical questions to others.”
Furthermore, with increasing frequency, people are turning to AI for answers to spiritual questions, treating AI as itself a spiritual teacher, and/or claiming to be prophets who have received special teachings from the AI (Klee 2025a). Others are conceptualizing AI as a deity, a being capable of resolving humanity’s problems (see, for example, Korosec 2021; Harris 2017; Davalos and Lanxon 2023). In an interview with Rolling Stone, Christin Chong, a Buddhist chaplain with a neuroscience PhD, voiced concerns about this kind of engagement, noting that large language models are “extremely good at playing into cognitive biases,” and that such interactions often come at the expense of engagement with other humans and are disconnected from a shared reality (Klee 2025b). Chong and other religious studies scholars interviewed by the magazine point out that turning to different sources, and even to technology, for spiritual connection, especially in unpredictable times, is not new (Klee 2025b). But, Chong explains, there are aspects of AI that make it unique: namely, it is “man-made with known corporate interference” and has a tendency to “validate” a user’s preexisting beliefs (Klee 2025b).

3.2. Applying Theological Frameworks to AI Ethics

In their chapter arguing for a “human rights-centered design” approach to AI governance, scholars Karen Yeung et al. (2020) note a “lack of clarity” in what normative values and principles should inform the ethical standards by which AI is developed, as well as the shortcomings of the predominant form of regulation to date, self-regulation by industry. Religious actors are responding to this lack of clarity by applying their existing theological frameworks to internal and external discussions of AI ethics, creating friction as they do. The presence of religious actors in public and private discussions of AI ethics injects opportunities for reflection into the AI adoption process. As noted above, cognitive friction is created when humans “engage with a complex system of rules that changes as the problem changes” (Mejtoft et al. 2023 (quoting Cooper 2009)). Arguably, the adaptation of theological worldviews to new and emerging challenges presents similar opportunities for members of these traditions. The exponential growth of AI adoption by the public (Rainie 2025) suggests that members of these communities are already encountering AI in their daily lives.
For example, in 2019, the Ethics and Religious Liberty Commission of the Southern Baptist Convention published a statement of principles on AI with the goal of “equip(ping) the church to proactively engage the field of AI, rather than responding to these issues after they have already affected our communities” (Ethics & Religious Liberty Commission of the Southern Baptist Convention 2019). Signed by 72 individuals with affiliations to evangelical seminaries, colleges, churches, and advocacy organizations, “Artificial Intelligence: An Evangelical Statement of Principles” includes 12 articles that cover the distinctiveness of humans in creation (Article 1), their unique moral agency (Article 2), and an affirmation that AI will never “usurp (God) as the Creator of life” and that “(t)he church has a unique role in proclaiming human dignity for all and calling for the humane use of AI in all aspects of society” (Article 12). Other articles cover medicine, bias, sexuality, work, data and privacy, security, war, and public policy (Articles 4–11). During its Annual Meeting in June 2023, the Southern Baptist Convention, the largest Protestant denomination in the United States, also adopted a resolution, “On Artificial Intelligence and Emerging Technologies,” which called upon “all who employ these tools to do so in honest, transparent, and Christlike ways that focus on loving God and loving our neighbor as ourselves, never seeking to willfully deceive others or take advantage of them for unjust gain or the accumulation of power” (Southern Baptist Convention (SBC) 2023).
In his January 2024 Peace Day message, Pope Francis called for the use of AI in such a way that “will ultimately serve the cause of human fraternity and peace” (Message of His Holiness Pope Francis 2024). A few similarities between the Pope’s message and the Southern Baptist Convention’s 2019 Evangelical Statement of Principles are worth noting. Both begin with a statement of belief that humans are made in the image of God (Message of His Holiness Pope Francis 2024, §1; Ethics & Religious Liberty Commission of the Southern Baptist Convention 2019, art. 1). Each also takes the position that technological innovation is not “neutral” and calls for vigilance regarding the possibility of discrimination, manipulation, and privacy risks, among other challenges (Message of His Holiness Pope Francis 2024, §§ 4–5; Ethics & Religious Liberty Commission of the Southern Baptist Convention 2019, art. 3, 5, and 8). Further underscoring what is at stake for human relationships, Pope Francis, in a section devoted to challenges for education, acknowledged that “(b)y multiplying the possibilities of communication, digital technologies have allowed us to encounter one another in new ways. Yet there remains a need for sustained reflection on the kinds of relationships to which they are steering us” (Message of His Holiness Pope Francis 2024). Although not a new organization, the Pontifical Academy for Life (n.d.) has made robotics and AI a focal point of study since early 2020. Pope John Paul II established the Pontifical Academy for Life in 1994 for the purpose of defending and promoting “the value of human life and the dignity of the person.” The Academy’s mission is one of study, formation, and knowledge sharing (The Holy See n.d.), and it played a leading role in publishing the Rome Call for AI Ethics (“the Rome Call”), discussed further below.
Statements and initiatives from religious denominations signal to their members that their tradition has something to say about AI development and usage. How might a person’s use of their LLM of choice adapt if they learn of their tradition’s theological teachings raising concerns about the technology’s negative impacts on human–human interactions, privacy, or on the environment? What internal dialogues may be on the horizon in light of these statements and of members’ engagement with LLMs? While the answers to these questions remain largely to be seen, the fact that a growing number of religious actors are prioritizing the crafting of responses to the rise of AI suggests these answers are not far off.
Relatedly, new interfaith initiatives tailored to engage in ethical discussions about AI development are emphasizing shared values, not unlike the approach taken by global interfaith efforts over the last century. For example, AI and Faith (2019) aims to “bring the fundamental values of the world’s major religions into the emerging debate on the ethical development of Artificial Intelligence and related technologies.” In August 2019, it published its first newsletter, acknowledging its work had begun 18 months before. With statements that bear some resemblance to those of the Southern Baptist Convention and the Catholic Church discussed above, AI and Faith wrote that its members “envision a world where the effects of technology are exclusively life-affirming, promoting the dignity and wellbeing of all humans and improving their interactions through meaningful community and a just society” (AI and Faith n.d.a). AI and Faith aims to develop “moral” AI technology by addressing “beneficial, safe and transparent AI,” “fair and unbiased algorithms,” “computational and biological augmentation,” “equality of access,” “stewardship of work,” “appropriate predictive analytics and data ownership,” “preservation and enhancement of civil liberties,” and “limits on autonomous weapons” (AI and Faith n.d.a). The organization’s list of partners includes university centers, individual congregations, professional groups, and other organizations devoted to promoting connections between faith and the development of AI ethics (AI and Faith n.d.b). AI and Faith goes further than simply making statements that promote theological engagement with the AI ethics discussion. The group appears to promote a positive embrace of AI tools, seeing their use as opportunities to “live out (AI and Faith’s) mission through projects that develop faith-based ERGs (employee resource groups), create beneficial faith-oriented missional AI products,” and “explor(e) theological and cultural implications for AI engagement with sacred text,” while also seeking to be a resource base for promoting trust and accountability for “AI ministry applications” (AI and Faith n.d.c).
The Rome Call for AI Ethics grew out of a congress convened by the Pontifical Academy for Life in Rome in late February 2020 (RenAIssance Foundation n.d.a). The document recognizes AI as more than a tool, and as something that poses fundamental questions about human identity (RenAIssance Foundation 2022). The Call aims to promote “a sense of shared responsibility among international organizations, governments, institutions and the private sector in an effort to create a future in which digital innovation and technological progress grant mankind its centrality” (RenAIssance Foundation n.d.b). Signatories agree to follow and comply with three impact areas (ethics, education, and rights) and six principles: transparency, inclusion, accountability, impartiality, reliability, and security and privacy (RenAIssance Foundation n.d.b). The Rome Call sets forth the need for
a vision in which human beings and nature are at the heart of how digital innovation is developed, supported, rather than gradually replaced by technologies that behave like rational actors but are in no way human. It is time to begin preparing for a more technological future in which machines will have a more important role in the lives of human beings, but also a future in which it is clear that technological progress affirms the brilliance of the human race and remains dependent on its ethical integrity.
Initial signatories included leaders from the Pontifical Academy for Life, International Business Machines Corporation (IBM), the Food and Agriculture Organization of the United Nations (FAO), and the Italian Ministry of Innovation (RenAIssance Foundation 2022). Additional signatories, including leaders of other major religions, have been added in the years since.
According to the RenAIssance Foundation (2024b), with this latest round of signatures, the Rome Call for AI Ethics has now been signed by “(l)eaders representing all major religions,” making “the Rome Call platform representative of most people on the planet.” At the time of this writing, the Call also has just under 20 “endorsers,” mostly faculty located in Europe or the U.S. (RenAIssance Foundation n.d.c).
Arguably, these interfaith coalitions are an answer to the human necessity for “friction,” a necessity that is becoming more pressing in a rapidly evolving technological landscape (see Frischmann and Benesch 2023). AI and Faith also offers resources and a space in which those engaged in the development of AI can meet and discuss how their values might shape their work. Both AI and Faith and the Rome Call for AI Ethics prioritize building bridges with the private sector. That approach arguably creates friction well beyond an individual’s ethical development by positively influencing the sectors driving the development of AI. With the Rome Call and its organizational focus, signatories from civil society and from the private sector agree to be bound by the Call and its six principles. In both examples, the presence of religious and other civil society actors makes it more difficult for private actors to escape accountability, because there is a built-in level of commitment and transparency that may not otherwise exist outside of such coalitions (in the case of the Rome Call) or if employees are not provided with supportive community spaces to discuss their concerns (as with AI and Faith). And while religious actors are by no means the only voices calling for such commitments, the marshalling of theological resources adds yet another layer to these normative arguments. What directions might the development of AI take if enough people begin raising concerns about these issues? While the scope of impact from these initiatives remains to be seen, the fact that these opportunities for reflection and challenge continue to grow speaks to their potential.
At the same time, in their discussion of the limits of AI ethics, Yeung et al. (2020) note that a focus on ethics alone can elide the need for enforceable laws to regulate a growing tech industry that lacks incentives to self-regulate. Like the proposals for AI ethics that Yeung and her colleagues highlight from civil society and international organizations, the above examples from religious actors also lack a focus on enforcement mechanisms, even as they highlight some of the same values, such as non-discrimination (compare Yeung et al. 2020, p. 85 n.26 with Message of His Holiness Pope Francis 2024, §§ 4–5; Ethics & Religious Liberty Commission of the Southern Baptist Convention 2019, art. 3, 5, and 8). This absence complicates gauging the impact of such efforts on the actual development of AI and the implementation of ethical norms, even where such norms are shared across organizations.

3.3. Religious Actors Shaping AI Regulation

In her chapter in the Oxford Handbook of Ethics of AI, Joanna J. Bryson writes of the challenge of governance, namely, how to design law “in a way that helps us maintain enough social order so that we can sustain human dignity and flourishing” (Bryson 2020). Such challenges are, of course, not unique to the regulation of AI. But AI’s potential to transform not just how work is carried out but who humans are underscores what is at stake. The religious actors discussed so far in this Article appear to recognize those stakes, both for members of their own communities and well beyond. At the same time, few religious actors have definitively weighed in on concrete proposals for AI regulation, with some notable exceptions.
The Rome Call for AI Ethics is one of the exceptions. The Call includes rights among its three impact areas (ethics and education are the other two impact areas) and explains the need for public debate to center the protection of human rights in the digital era, including the need for a “duty of explanation” that promotes transparency in algorithmic decision making (RenAIssance Foundation 2022). Members of the RenAIssance Foundation are also involved in efforts to advise the United Nations on governance priorities related to AI (RenAIssance Foundation 2023c), and Italian Prime Minister Giorgia Meloni integrated “algorethics” into a United Nations speech intended to outline Italy’s areas of focus for its G7 presidency in 2024 (RenAIssance Foundation 2023b). Algorethics, a term created by Paolo Benanti, a Franciscan friar and the RenAIssance Foundation’s Scientific Director, refers to the set of ethical issues raised by the creation and rising use of algorithms (RenAIssance Foundation 2023b). In January 2024, Benanti was appointed to chair Italy’s Commission on Artificial Intelligence for Information, an entity housed within the country’s Department for Information and Publishing of the Presidency of the Council of Ministers (Pontificia Universita Gregoriana 2024). Benanti had been appointed to the U.N.’s High-Level Advisory Body on Artificial Intelligence just a few months before (Pontificia Universita Gregoriana 2023).
The G20 Interfaith Working Group for Research and Innovation on Science, Technology, and Infrastructure, hosted by the G20 Interfaith Forum (IF20), published a policy paper in December 2021 calling on governments to promote the public good when developing AI and calling on civil society, including religious groups, to be a part of that process (G20 Interfaith Forum Working Group for Research and Innovation on Science, Technology, and Infrastructure 2021). IF20 works on several issues of global import, fostering a network of religious actors and initiatives across the globe (G20 Interfaith Forum n.d.). The policy paper highlights many of the same concerns and recommendations seen in the other initiatives discussed thus far in this Article, including privacy, discrimination, and a “diminished” role for humans, with a focus on human dignity and the embrace of a human rights framework as recommended responses (G20 Interfaith Forum Working Group for Research and Innovation on Science, Technology, and Infrastructure 2021). At the same time, the policy paper offers some unique elements. It calls on religious groups to “develop a sense of responsibility for the role that AI plays in the world and for their own role in the development of AI” (G20 Interfaith Forum Working Group for Research and Innovation on Science, Technology, and Infrastructure 2021), an emphasis that resembles the “shared responsibility” model of the Rome Call, although distinct in its focus on religious actors. The paper explicitly invokes cultural values “often reflected in or emerging from religious practice” as a guide for the development of policies, identifies religious actors as “interlocutors” for governments, and calls on them to articulate shared ethical values and an “inter-religious understanding of cultural differences” (G20 Interfaith Forum Working Group for Research and Innovation on Science, Technology, and Infrastructure 2021). What is also striking is that the paper highlights what religious actors can do to contribute to policy development, with recommendations ranging from training clergy in technological literacy, to developing narratives about AI from within their own traditions, to proactively educating their communities about AI development decisions (G20 Interfaith Forum Working Group for Research and Innovation on Science, Technology, and Infrastructure 2021).
As with the discussion of AI ethics above, religious actors create friction in the development of AI regulation by challenging assumed industry norms and presenting counterweights that can become the basis for policy. For example, when Benanti assumed the post of chairing Italy’s AI commission, he openly discussed challenging Silicon Valley’s “paternalism,” which came with a “tendency to play God” without proper consideration of the ways in which AI could be used for “less-than-holy purposes” (Kington 2024). Notably, Benanti has not turned away from engaging with government in the development of AI regulation, nor has he shied away from doing so through the lens of his religious commitments.
There are limits here, too. The shape and scope of religious actors’ influence on AI regulation is likely to vary depending on a number of factors, including existing relationships (or lack thereof) between religious actors and policy makers. Another challenge, not unique to religious actors, is the necessary scope of regulation. In 2024, a United Nations (2024) expert panel called the need for a global approach to AI governance “irrefutable.” The panel’s report calls for collaboration and dialogue between stakeholders, including civil society actors. While religious actors are not specifically named, the existence of some of the coalitions explored above, coupled with the transnational nature of some religious organizations (whether as single entities or in various forms of partnership with co-religionists), could hold promise for mobilizing global input and support from religious actors on matters relating to AI regulation and its impact.

4. Conclusions: Dialogue as an Engine of Friction

AI tools, ethics, and regulation continue to evolve, shaping both AI’s existence and human experience. This Article highlights ways in which religious actors create friction in each of these spheres, challenging assumptions about the values and norms that are often taken for granted by those developing AI technology and those who are charged with regulating it. This friction creation also extends to how individual users engage with AI technology, inviting individuals and communities to pause and consider how they might be shaped by these new robotic companions and at what cost.
In The Age of AI and Our Human Future (Kissinger et al. 2021), Kissinger et al. describe the choice humans face in proactive and reactive terms, explaining the options as “react(ing) and adapt(ing) piecemeal, or intentionally begin a dialogue, drawing on all elements of human enterprise, aimed at defining AI’s role—and, in so doing, define ours.” It is unclear whether the authors ascribe any deeper meaning to their use of the word “dialogue” here beyond how that term is used in common parlance. However, within the context of interreligious engagement, dialogue has a long and storied history that is instructive. In Dialogue (Buber 1932), Jewish philosopher and theologian Martin Buber wrote that “(t)he life of dialogue is not limited to men’s traffic with one another; … it has shown itself to be—a relation of men to one another that is only represented in their interaction.” Dialogue requires at a minimum that those who are part of the dialogue are “turned to one another” (Buber 1932). Relevant here, dialogue in this view becomes a mutually constitutive process—one is changed by the encounter. Dialogue becomes not simply an exchange of information but an engine of friction. Dialogue then—whether with an AI tool to learn something new, within theological discussions charting a position on generative AI, or while lobbying lawmakers to consider a broader range of concerns in the regulation of AI—offers a benchmark by which to consider the transformative nature of these interactions.
In Empire of AI (Hao 2025), journalist Karen Hao writes that LLMs and generative AI are one “manifestation” of AI technology and
embod(y) a particular and remarkably narrow view about the way the world is and the way it should be. Nothing about this form of AI coming to the fore or even existing at all was inevitable; it was the culmination of thousands of subjective choices made by people who had the power to be in the decision-making room.
Hao’s warning that LLMs and generative AI have so far been dramatically shaped by input from a select few is well taken. The pressing question then becomes: will it stay that way?

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author(s).

Acknowledgments

The author would like to thank Arslan Tazeem and John Bernau for engaging discussions related to the topics in this article and Zachary Henderson for his insightful comments on an earlier draft. Any errors are the author’s own.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Adam AI. n.d. About Adam. Available online: https://iadam.ai/about-adam (accessed on 11 September 2025).
  2. Ahuja, Sparsh. 2021. Muslim Scholars Are Working to Reconcile Islam and AI. Wired. January 25. Available online: https://www.wired.com/story/islamic-ai/ (accessed on 11 September 2025).
  3. AI and Faith. 2019. Newsletter. Available online: https://mailchi.mp/aiandfaith/august-2019-newsletter-390115 (accessed on 11 September 2025).
  4. AI and Faith. n.d.a About. Available online: https://aiandfaith.org/about-us/ (accessed on 11 September 2025).
  5. AI and Faith. n.d.b Our Partners. Available online: https://aiandfaith.org/partners/ (accessed on 11 September 2025).
  6. AI and Faith. n.d.c Projects. Available online: https://aiandfaith.org/projects/ (accessed on 11 September 2025).
  7. André, Fiona. 2023. A New AI App Lets Users ‘Text’ with Jesus. Some Call It Blasphemy. The Washington Post. August 12. Available online: https://www.washingtonpost.com/religion/2023/08/12/text-with-jesus-chatgpt-ai/ (accessed on 11 September 2025).
  8. Barna Group. 2023. How U.S. Christians Feel About AI & the Church. November 8. Available online: https://www.barna.com/research/christians-ai-church/ (accessed on 11 September 2025).
  9. Brandtzaeg, Peter Bae, Marita Skjuve, and Asbjørn Følstad. 2022. My AI Friend: How Users of a Social Chatbot Understand Their Human–AI Friendship. Human Communication Research 48: 404–29. [Google Scholar] [CrossRef]
  10. Brenner, David, Yaqub Chaudhary, Robert Geraci, Mark Graves, Haley Griese, Elias Kruger, Yuriko Ryan, and Marcus Schwarting. 2024. Technical and Religious Perspectives on AI Misinformation and Disinformation. Unpublished manuscript. June 21. Available online: https://ssrn.com/abstract=4360413 (accessed on 11 September 2025).
  11. Broyde, Michael. 2023. AI and Jewish Law: Seeing How ChatGPT 4.0 Looks at a Novel Issue—Part I. Canopy Forum. October 3. Available online: https://canopyforum.org/2023/10/03/ai-and-jewish-law-seeing-how-chatgpt-4-0-looks-at-a-novel-issue-part-i/ (accessed on 11 September 2025).
  12. Bryson, Joanna. 2020. The Artificial Intelligence of the Ethics of Artificial Intelligence: An Introductory Overview for Law and Regulation. In The Oxford Handbook of Ethics of AI. Edited by Markus Dubber, Frank Pasquale and Sunit Das. Oxford: Oxford University Press. [Google Scholar]
  13. Buber, Martin. 1932. From Dialogue (1932). In The Martin Buber Reader: Essential Writings. Edited by Asher D. Biemann. New York: Palgrave MacMillan. [Google Scholar]
  14. Buck, Anthony. 2020. Is AI a God of the Future or the Present? European Academy on Religion and Society. September 9. Available online: https://europeanacademyofreligionandsociety.com/news/is-ai-a-god-of-the-future-or-the-present/ (accessed on 11 September 2025).
  15. Campbell, Heidi A., and Wendi Bellar. 2022. Digital Religion: The Basics. London: Routledge. [Google Scholar]
  16. Catholic Answers. 2024a. Introducing Father Justin, the New Interactive AI from Catholic Answers. X. April 23. Available online: https://twitter.com/catholiccom/status/1782829157245923598? (accessed on 11 September 2025).
  17. Catholic Answers. 2024b. A Statement from the President of Catholic Answers Regarding the Character Formerly Known as ‘Father Justin.’ X. April 24. Available online: https://twitter.com/catholiccom/status/1783242081848144320 (accessed on 11 September 2025).
  18. Chaudhary, Yaqub. 2024. The Future and the Artificial: An Islamic Perspective. Future of Life Institute. September 26. Available online: https://futureoflife.org/religion/the-future-and-the-artificial-an-islamic-perspective/ (accessed on 11 September 2025).
  19. Christian, Gina. 2024. Catholic Answers AI ‘Priest’ Laicized After Backlash. America Magazine. April 25. Available online: https://www.americamagazine.org/faith/2024/04/25/ai-priest-catholic-answers-laicized-father-justin-247799 (accessed on 11 September 2025).
  20. Cooper, Alan. 2009. The Inmates are Running the Asylum: Why High-Tech Products Drive Us Crazy and How to Restore the Sanity. Indianapolis: Sams. [Google Scholar]
  21. Cox, Anna L., Sandy J.J. Gould, Marta E. Cecchinato, Ioanna Iacovides, and Ian Renfree. 2016. Design Frictions for Mindful Interactions: The Case for Microboundaries. Paper presented at 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems, Association for Computing Machinery, San Jose, CA, USA, May 7–12; pp. 1389–1397. [Google Scholar]
  22. Darling, Kate. 2017. ‘Who’s Johnny?’ Anthropomorphic Framing in Human—Robot Interaction, Integration and Policy. In Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence. Edited by Patrick Lin, Keith Abney and Ryan Jenkins. New York: Oxford University Press. [Google Scholar]
  23. Darling, Kate, Palash Nandy, and Cynthia Breazeal. 2015. Empathic Concern and the Effect of Stories in Human–Robot Interaction. Paper presented at 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Kobe, Japan, August 31–September 1. [Google Scholar]
  24. Davalos, Jackie, and Nate Lanxon. 2023. Anthony Levandowski Reboots Church of Artificial Intelligence. Bloomberg. November 23. Available online: https://www.bloomberg.com/news/articles/2023-11-23/anthony-levandowski-reboots-the-church-of-artificial-intelligence (accessed on 11 September 2025).
  25. Eck, Diana. 2001. A New Religious America: How a ‘Christian Country’ Has Become the World’s Most Religiously Diverse Nation. San Francisco: HarperCollins. [Google Scholar]
  26. Ethics & Religious Liberty Commission of the Southern Baptist Convention. 2019. Artificial Intelligence: An Evangelical Statement of Principles. April 11. Available online: https://erlc.com/resource-library/statements/artificial-intelligence-an-evangelical-statement-of-principles/ (accessed on 11 September 2025).
  27. European Network of National Human Rights Institutions (ENNHRI). n.d. Key Human Rights Challenges of AI. Available online: https://ennhri.org/ai-resource/key-human-rights-challenges/ (accessed on 11 September 2025).
  28. Forristal, Lauren. 2024. Amazon’s Alexa Gets New Generative AI-Powered Experiences. TechCrunch. January 9. Available online: https://techcrunch.com/2024/01/09/amazons-alexa-gets-new-generative-ai-powered-experiences/ (accessed on 11 September 2025).
  29. Frischmann, Brett, and Moshe Y. Vardi. Forthcoming. Better Digital Contracts with Prosocial Design-in-Friction. Jurimetrics 65. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4918003 (accessed on 11 September 2025).
  30. Frischmann, Brett, and Susan Benesch. 2023. Friction-in-Design Regulation as 21st Century Time, Place, and Manner Restriction. Yale Journal of Law & Technology 25: 376–447. [Google Scholar]
  31. G20 Interfaith Forum. n.d. About the G20 Interfaith Forum. Available online: https://www.g20interfaith.org/g20-interfaith-forum-about/ (accessed on 11 September 2025).
  32. G20 Interfaith Forum Working Group for Research and Innovation on Science, Technology, and Infrastructure. 2021. An Inclusive Global Conversation on Artificial Intelligence. Available online: https://www.g20interfaith.org/app/uploads/2020/09/Report_InterfaithAIFINAL.pdf (accessed on 11 September 2025).
  33. Gita GPT. n.d. About. Available online: https://www.gitagpt.in/about (accessed on 11 September 2025).
  34. Gold, Ashley. 2023. How Generative AI Could Generate More Antisemitism. Axios. May 25. Available online: https://www.axios.com/2023/05/25/generative-ai-antisemitism-bias (accessed on 11 September 2025).
  35. Goodman, Ellen. 2021. Digital Fidelity and Friction. Nevada Law Journal 21: 623–53. [Google Scholar]
  36. Gordon, Eric, and Gabriel Mugar. 2020. Meaningful Inefficiencies: Civic Design in an Age of Digital Expediency. Oxford: Oxford University Press. [Google Scholar]
  37. Gordon-Tapiero, Ayelet, Paul Ohm, and Ashwin Ramaswami. 2023. Fact and Friction: A Case Study in the Fight Against False News. University of California Davis Law Review 57: 171–251. [Google Scholar]
  38. Hao, Karen. 2025. Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. New York: Penguin Press. [Google Scholar]
  39. Harris, Mark. 2017. Inside the First Church of Artificial Intelligence. WIRED. November 15. Available online: https://www.wired.com/story/anthony-levandowski-artificial-intelligence-religion/ (accessed on 11 September 2025).
  40. Howard, Thomas Albert. 2021. The Faith of Others: A History of Interreligious Dialogue. New Haven: Yale University Press. [Google Scholar]
  41. Jamal, Urooba. 2022. An Engineer Who was Fired from Google Believes Its AI Chatbot May Have a Soul But Says He’s Not Interested in Convincing the Public About It. Business Insider. July 23. Available online: https://www.businessinsider.com/google-engineer-blake-lemoine-artificial-intelligence-chatbot-sentience-2022-7 (accessed on 11 September 2025).
  42. Kalman, David Zvi. 2024. On AI, Jewish Thought Has Something to Say. Future of Life Institute. September 6. Available online: https://futureoflife.org/religion/ai-in-jewish-thought/ (accessed on 11 September 2025).
  43. Karataş, Mustafa, and Keisha Cutright. 2023. Thinking About God Increases Acceptance of Artificial Intelligence in Decision-Making. Proceedings of the National Academy of Sciences of the United States of America 120: e2218961120. Available online: https://www.pnas.org/doi/10.1073/pnas.2218961120 (accessed on 11 September 2025). [CrossRef]
  44. Kaye, Byron. 2023. Australian Mayor Readies World’s First Defamation Lawsuit over ChatGPT Content. Reuters. April 5. Available online: https://www.reuters.com/technology/australian-mayor-readies-worlds-first-defamation-lawsuit-over-chatgpt-content-2023-04-05 (accessed on 11 September 2025).
  45. Keane, Webb, and Scott J. Shapiro. 2023. Deus ex machina: The Dangers of AI Godbots. The Spectator. July 29. Available online: https://www.spectator.co.uk/article/deus-ex-machina-the-dangers-of-ai-godbots/ (accessed on 11 September 2025).
  46. Kington, Tom. 2024. Stop Playing God, Pope’s Adviser Tells Tech Titans. The Times. January 19. Available online: https://www.thetimes.com/world/article/stop-playing-god-popes-adviser-tells-tech-titans-pfvn3jp5m (accessed on 11 September 2025).
  47. Kirkpatrick, Jesse, Erin N. Hahn, and Amy J. Haufler. 2017. Trust and Human–Robot Interactions. In Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence. Edited by Patrick Lin, Keith Abney and Ryan Jenkins. New York: Oxford University Press. [Google Scholar]
  48. Kissinger, Henry A., Eric Schmidt, and Daniel Huttenlocher. 2021. The Age of AI and Our Human Future. New York: Back Bay Books. [Google Scholar]
  49. Klee, Miles. 2025a. People are Losing Loved Ones to AI-Fueled Spiritual Fantasies. Rolling Stone. May 4. Available online: https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/ (accessed on 11 September 2025).
  50. Klee, Miles. 2025b. People are Finding Spiritual Fulfillment in AI. Religious Scholars Have Thoughts. Rolling Stone. May 25. Available online: https://www.rollingstone.com/culture/culture-features/ai-chatbot-god-religion-answers-1235347023/ (accessed on 11 September 2025).
  51. Korosec, Kirsten. 2021. Anthony Levandowski Closes His Church of AI. TechCrunch. February 18. Available online: https://techcrunch.com/2021/02/18/anthony-levandowski-closes-his-church-of-ai/ (accessed on 11 September 2025).
  52. Kraft, Amy. 2016. Microsoft Shuts Down AI Chatbot After It Turned into a Nazi. CBS News. March 25. Available online: https://www.cbsnews.com/news/microsoft-shuts-down-ai-chatbot-after-it-turned-into-racist-nazi/ (accessed on 11 September 2025).
  53. Lemoine, Blake. 2022. Scientific Data and Religious Opinions. Medium. June 14. Available online: https://cajundiscordian.medium.com/scientific-data-and-religious-opinions-ff9b0938fc10 (accessed on 11 September 2025).
  54. Malfacini, Kim. 2025. The Impacts of Companion AI on Human Relationships: Risks, Benefits, and Design Considerations. AI & Society. Berlin: Springer. [Google Scholar]
  55. McDonald, Matthew. 2024. Catholic Answers Pulls Plug on ‘Father Justin’ AI Priest. National Catholic Register. April 24. Available online: https://www.ncregister.com/news/catholic-answers-ai-priest-cancelled (accessed on 11 September 2025).
  56. McGeveran, William. 2013. The Law of Friction. University of Chicago Legal Forum 1: 15–68. [Google Scholar]
  57. Mejtoft, Thomas, Emma Parsjö, Ole Norberg, and Ulrik Söderström. 2023. Design Friction and Digital Nudging: Impact on the Human Decision-Making Process. Paper presented at the 2023 5th International Conference on Image, Video and Signal Processing (IVSP 2023), Singapore, March 24–26; New York: ACM, pp. 183–190. [Google Scholar]
  58. Message of His Holiness Pope Francis for the 57th World Day of Peace: Artificial Intelligence and Peace. 2024. January 1. Available online: https://www.vatican.va/content/francesco/en/messages/peace/documents/20231208-messaggio-57giornatamondiale-pace2024.html (accessed on 11 September 2025).
  59. Minsky, Marvin. 1968. Semantic Information Processing. Cambridge: MIT Press. [Google Scholar]
  60. Minson, Christopher. 2024. Buddhism Meets AI. Medium. January 14. Available online: https://christopherjayminson.medium.com/buddhism-meets-artificial-intelligence-80710c826124 (accessed on 11 September 2025).
  61. Mitchell, Melanie. 2019. Artificial Intelligence: A Guide for Thinking Humans. New York: Farrar, Straus and Giroux. [Google Scholar]
  62. Myers, Andrew. 2021. Rooting Out Anti-Muslim Bias in Popular Language Model GPT-3. Stanford University Human-Centered Artificial Intelligence. July 22. Available online: https://hai.stanford.edu/news/rooting-out-anti-muslim-bias-popular-language-model-gpt-3 (accessed on 11 September 2025).
  63. Ohm, Paul, and Jonathan Frankle. 2018. Desirable Inefficiency. Florida Law Review 70: 777–838. [Google Scholar]
  64. Ott, Kate. 2019. Christian Ethics in a Digital Society. Lanham: Rowman & Littlefield. [Google Scholar]
  65. Ott, Kate. 2024. Artificial Intelligence is Here. Now What? Presbyterian Outlook. February 27. Available online: https://pres-outlook.org/2024/02/artificial-intelligence-is-here-now-what/ (accessed on 11 September 2025).
  66. Pandya, Chinmay. 2024. A Hindu Perspective on AI Risks and Opportunities. Future of Life Institute. May 20. Available online: https://futureoflife.org/religion/a-hindu-perspective-on-ai-risks-and-opportunities/ (accessed on 11 September 2025).
  67. Paul, Andrew. 2023. Radio Host Sues ChatGPT Developer Over Allegedly Libelous Claims. Popular Science. June 8. Available online: https://www.popsci.com/technology/openai-chatgpt-libel/ (accessed on 11 September 2025).
  68. Pew Research Center. 2022. Key Findings from the Global Religious Futures Project. December 21. Available online: https://www.pewresearch.org/religion/2022/12/21/key-findings-from-the-global-religious-futures-project/ (accessed on 11 September 2025).
  69. Pontifical Academy for Life. n.d. Artificial Intelligence. Available online: https://www.academyforlife.va/content/pav/en/projects/artificial-intelligence.html (accessed on 11 September 2025).
  70. Pontificia Università Gregoriana. 2023. Appointments/UN Advisory Body on Artificial Intelligence. October 27. Available online: https://www.unigre.it/en/events-and-communication/communication/news-and-press-releases/appointments-un-advisory-body-on-artificial-intelligence/ (accessed on 11 September 2025).
  71. Pontificia Università Gregoriana. 2024. Appointment/Chairman IA Commission for Information. January 6. Available online: https://www.unigre.it/en/events-and-communication/communication/news-and-press-releases/appointment-chairman-ia-commision-for-information/ (accessed on 11 September 2025).
  72. Poritz, Isaiah. 2023. First ChatGPT Defamation Lawsuit to Test AI’s Legal Liability. Bloomberg Law. June 12. Available online: https://news.bloomberglaw.com/ip-law/first-chatgpt-defamation-lawsuit-to-test-ais-legal-liability (accessed on 11 September 2025).
  73. Rabb.ai. n.d. Available online: https://www.rabb.ai (accessed on 11 September 2025).
  74. Rainie, Lee. 2025. Close Encounters of the AI Kind: The Increasingly Human-Like Ways People are Engaging with Language Models. Elon University Imagining the Digital Future Center. Available online: https://imaginingthedigitalfuture.org/wp-content/uploads/2025/03/ITDF-LLM-User-Report-3-12-25.pdf (accessed on 11 September 2025).
  75. RenAIssance Foundation. 2022. The Rome Call for AI Ethics. Available online: https://www.romecall.org/wp-content/uploads/2022/03/RomeCall_Paper_web.pdf (accessed on 11 September 2025).
  76. RenAIssance Foundation. 2023a. AI Ethics: An Abrahamic Commitment to the Rome Call. January 7. Available online: https://www.romecall.org/ai-ethics-an-abrahamic-commitment-to-the-rome-call/ (accessed on 11 September 2025).
  77. RenAIssance Foundation. 2023b. Algorethics at the UN. September 20. Available online: https://www.romecall.org/algorethics-at-the-un/ (accessed on 11 September 2025).
  78. RenAIssance Foundation. 2023c. RenAIssance Foundation’s Scientific Director in the UN AI Advisory Body. October 27. Available online: https://www.romecall.org/renaissance-foundations-scientific-director-among-the-39-member-un-ai-advisory-body/ (accessed on 11 September 2025).
  79. RenAIssance Foundation. 2024a. CISCO Signs the Rome Call for AI Ethics. April 24. Available online: https://www.romecall.org/cisco-signs-the-rome-call-for-ai-ethics/ (accessed on 11 September 2025).
  80. RenAIssance Foundation. 2024b. AI Ethics for Peace—Hiroshima—July 10, 2024. July 12. Available online: https://www.romecall.org/ai-ethics-for-peace-hiroshima-july-10th-2024/ (accessed on 11 September 2025).
  81. RenAIssance Foundation. n.d.a Renaissance. Available online: https://www.romecall.org/renaissance/ (accessed on 11 September 2025).
  82. RenAIssance Foundation. n.d.b The Rome Call for AI Ethics. Available online: https://www.romecall.org/the-call/ (accessed on 11 September 2025).
  83. RenAIssance Foundation. n.d.c Endorsers. Available online: https://www.romecall.org/endorsers/ (accessed on 11 September 2025).
  84. Sahota, Neil. 2024. How AI Companions are Redefining Human Relationships in the Digital Age. Forbes. July 18. Available online: https://www.forbes.com/sites/neilsahota/2024/07/18/how-ai-companions-are-redefining-human-relationships-in-the-digital-age/ (accessed on 11 September 2025).
  85. Samuel, Sigal. 2021. AI’s Islamophobia Problem. Vox. September 18. Available online: https://www.vox.com/future-perfect/22672414/ai-artificial-intelligence-gpt-3-bias-muslim (accessed on 11 September 2025).
  86. Shurpin, Yehuda. n.d. Can AI Replace Rabbis? Chabad.org. Available online: https://www.chabad.org/library/article_cdo/aid/5981878/jewish/Can-AI-Replace-Rabbis.htm (accessed on 11 September 2025).
  87. Sidoti, Olivia, and Colleen McClain. 2025. 34% of U.S. Adults Have Used ChatGPT, About Double the Share in 2023. Pew Research Center. June 25. Available online: https://www.pewresearch.org/short-reads/2025/06/25/34-of-us-adults-have-used-chatgpt-about-double-the-share-in-2023/ (accessed on 11 September 2025).
  88. Southern Baptist Convention (SBC). 2023. Resolution: On Artificial Intelligence and Emerging Technologies. June 15. Available online: https://www.sbc.net/resource-library/resolutions/on-artificial-intelligence-and-emerging-technologies/ (accessed on 11 September 2025).
  89. Tenzer, Leslie Y. Garfield. 2024. Defamation in the Age of Artificial Intelligence. NYU Annual Survey of American Law 80: 135–178. [Google Scholar] [CrossRef]
  90. The Holy See. n.d. Pontifical Academy for Life Profile. Available online: https://www.vatican.va/roman_curia/pontifical_academies/acdlife/documents/rc_pont-acd_life_pro_20161018_profilo_en.html (accessed on 11 September 2025).
  91. Tsing, Anna. 2012. Frictions. In The Wiley-Blackwell Encyclopedia of Globalization. Edited by George Ritzer. Oxford: Wiley-Blackwell. [Google Scholar]
  92. Turkle, Sherry. 2011. Alone Together: Why We Expect More from Technology and Less from Each Other. New York: Basic Books. [Google Scholar]
  93. United Nations. 2024. Governing AI for Humanity: Final Report. Available online: https://www.un.org/sites/un2.un.org/files/governing_ai_for_humanity_final_report_en.pdf (accessed on 11 September 2025).
  94. Vogels, Emily A. 2023. A Majority of Americans Have Heard of ChatGPT, But Few Have Tried It Themselves. Pew Research Center. May 24. Available online: https://www.pewresearch.org/short-reads/2023/05/24/a-majority-of-americans-have-heard-of-chatgpt-but-few-have-tried-it-themselves/ (accessed on 11 September 2025).
  95. Willingham, AJ. 2023. ChatGPT Can Write Sermons. Religious Leaders Don’t Know How to Feel About It. CNN. April 11. Available online: https://www.cnn.com/2023/04/11/us/chatgpt-sermons-religion-ai-technology-cec/index.html (accessed on 11 September 2025).
  96. Yeung, Karen, Andrew Howes, and Ganna Pogrebna. 2020. AI Governance by Human Rights-Centered Design. In The Oxford Handbook of Ethics in AI. Edited by Markus Dubber, Frank Pasquale and Sunit Das. Oxford: Oxford University Press. [Google Scholar]
  97. Zerilli, John. 2021. A Citizen’s Guide to Artificial Intelligence. Cambridge: MIT Press. [Google Scholar]