Article

Artificial Intelligence and Interreligious Dialogue: Emerging Implications for Faith-Based Organizations

by
Jeff Clyde G. Corpuz
Department of Theology and Religious Education, College of Liberal Arts, De La Salle University, Manila 0922, Philippines
Religions 2026, 17(3), 354; https://doi.org/10.3390/rel17030354
Submission received: 23 November 2025 / Revised: 4 March 2026 / Accepted: 5 March 2026 / Published: 12 March 2026
(This article belongs to the Special Issue Interreligious Dialogue: Validity and Sustainability)

Abstract

This article advances a constructive theological account of Human-Centered Artificial Intelligence (HCAI) for Faith-Based Organizations (FBOs) engaged in interreligious dialogue (IRD). Drawing on a practical–theological methodology, the study follows four interrelated steps—descriptive–empirical, interpretive, normative, and pragmatic—to examine how AI-enabled practices such as translation, textual analysis, and cross-scriptural synthesis are reshaping contemporary forms of dialogue among religious and non-religious communities. Through the empirical mapping of current AI applications, interdisciplinary interpretation informed by social and ethical analysis, and normative theological evaluation, the study identifies both the opportunities and risks of AI-mediated IRD. On this basis, it synthesizes three interdependent dimensions that structure the proposed framework: (1) Ethics, which clarifies the moral purpose and values guiding AI use; (2) Technology, which addresses mediation, governance, and power in AI systems; and (3) Humans, which centers institutional responsibility, agency, and sustainability within FBOs. From this synthesis, the article introduces an AI–IRD Integration Framework that translates theological and ethical reflection into practical guidance for responsible AI adoption. The study contributes an original interdisciplinary perspective that equips religious leaders, theologians, policymakers, and faith communities to engage AI not merely as a tool, but as a human-centered partner in fostering inclusive, sustainable, and ethically grounded dialogue in an era of AI–human coexistence.

1. Introduction

Artificial intelligence (AI) and interreligious dialogue (IRD) are interconnected. Once confined to technical domains of computation and automation, AI now extends deeply into ethical reflection, cultural formation, and theological and religious inquiry (Floridi 2019; Coeckelbergh 2020; Campbell and Tsuria 2021). Within this expanding frontier, Faith-Based Organizations (FBOs) occupy a critical role as mediators between technological innovation and moral discernment (Ashraf 2022). FBOs are multifaceted actors engaged in development and social action, operating at the intersection of religious communities and civil society (Clarke and Ware 2015). While institutionally distinct from secular non-governmental organizations (NGOs), they share historical, structural, and operational features with NGOs, religious institutions, and community-based organizations, reflecting a hybrid character (Clarke and Ware 2015). Long engaged in social transformation and peacebuilding, FBOs now face the urgent task of discerning how AI might serve, rather than disrupt, their mission of dialogue (Abu-Nimer 2020; Phan 2004, 2022; Corpuz 2025b). Recent advances in AI-driven translation and textual analysis are creating new possibilities for IRD. At the same time, public narratives often portray these technologies in near-transcendent ways, sparking both fascination and concern about AI systems with seemingly “god-like” capacities and raising serious theological and ethical questions (Papakostas 2025). This study is timely because generative AI tools—such as OpenAI’s ChatGPT (GPT-5.4 model)—encourage views of AI as a kind of creator, while also becoming spaces for religious experimentation, interpretation, and the spread of both accurate information and misinformation about religions (Dorobantu 2024; Chrostowski and Najda 2025; Rähme and Prohl 2025; Shormani and Alfahad 2025).
Recent scholarship reveals a wide range of religious experimentation involving artificial intelligence (Yao 2024; Papakostas 2025). Arora (2025) notes that applications such as “Text With Jesus” attracted controversy for enabling users to interact with AI-generated representations of Jesus and other biblical figures, raising questions about blasphemy, authenticity, and theological propriety. Comparable tools exist across other religious traditions—for example, Deen Buddy for Islam, Vedas AI for Hinduism, and AI Buddha. These platforms generally present themselves as tools for engaging with sacred texts, rather than as embodiments or sources of religious authority or holiness (Agence France-Presse 2025). In the same period, “QuranGPT v1.1.9” gained significant public attention for offering AI-mediated responses based on the Qur’an, drawing so much traffic that it reportedly crashed within its first day of release. Similar platforms have emerged that simulate conversations with historical and spiritual figures such as Confucius and Martin Luther.
Arora (2025) further observes that AI has inspired the creation of new belief systems, most notably the Way of the Future church, founded by former Google engineer Anthony Levandowski, which seeks to develop a form of divinity “based on artificial intelligence.” These examples illustrate both the creative potential and the ethical tension surrounding AI’s role in mediating religious experience and authority (Floridi 2019). Chandra and Ranjan (2022) used a large language model to conduct topic modeling between the Bhagavad Gita and the Upanishads, two foundational texts in Hindu philosophy. Their study found an average thematic similarity of 73 percent, reinforcing conclusions long recognized by Hindu scholars through traditional exegesis. They observed that AI-supported analysis can identify subtle and previously unnoticed patterns within sacred literature, offering new interpretive possibilities for theology, comparative religion, and digital humanities (Chandra and Ranjan 2022). A 2026 study examines audience engagement with an AI-generated Christian–Muslim debate on YouTube by analyzing 8737 comments, showing how sentiment and discursive patterns surrounding algorithmically produced theological arguments illuminate wider dynamics of digital religion (Mackey and Elhersh 2026). The authors find that AI-driven debates tend to reproduce interfaith polemics, and they therefore call for better dialogical frameworks, as well as further research on how platform design and moderation influence interfaith dialogue.
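The kind of large-scale thematic comparison described above can be made concrete with a toy sketch: computing cosine similarity between bag-of-words profiles of two short passages. This is an illustration only, not Chandra and Ranjan’s (2022) actual method; the snippets are invented paraphrases, and a real study would apply topic models to full corpora.

```python
import math
from collections import Counter

def bag_of_words(text: str) -> Counter:
    """Lowercase the text, strip simple punctuation, and count word frequencies."""
    words = [w.strip(".,;:!?\"'") for w in text.lower().split()]
    return Counter(w for w in words if w)

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors (0 = disjoint, 1 = identical)."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Invented paraphrases for illustration -- not actual scriptural text.
gita_snippet = "the self is eternal and action without attachment leads to liberation"
upanishad_snippet = "the eternal self is realized when attachment to action falls away"

score = cosine_similarity(bag_of_words(gita_snippet), bag_of_words(upanishad_snippet))
print(f"thematic overlap: {score:.2f}")
```

Even this crude measure registers substantial lexical overlap between the two paraphrases; the published finding of 73 percent thematic similarity rests on far richer topic-model representations.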
The growing religious and cultural plurality of the contemporary world calls for renewed forms of dialogue that are attentive to rapid technological developments and ethical accountability (Dorobantu 2024; Chrostowski and Najda 2025; Rähme and Prohl 2025; Shormani and Alfahad 2025). Empirical research indicates that flawed or poorly designed language technologies can undermine multilingual exchange and contribute to the spread of misinformation (Yao 2024). Abu-Nimer (2020) argues that interreligious engagement is vital for reconciliation and peace. Francis (2020) notes that dialogue in faith communities must evolve to include new modes of encounter shaped by media and digital systems. Within this context, AI becomes a potential enabler of inclusive dialogue (Corpuz 2025b). Translation models, semantic analysis, and automated text comparison allow shared ethical values across religious texts to surface, promoting communication and collaboration beyond linguistic and cultural divisions (Dorobantu 2024; Shormani and Alfahad 2025; Mackey and Elhersh 2026). The study builds on recent interdisciplinary research on AI and religion (Dorobantu 2024; Ahmed et al. 2025; Papakostas 2025) and extends the practical–theological work on IRD.
Despite the expanding body of scholarship on artificial intelligence, digital religion, and interreligious dialogue, a significant gap persists in articulating a coherent theological framework that integrates Human-Centered Artificial Intelligence (HCAI) into the praxis of FBOs engaged in IRD. Existing studies tend to focus either on the technical capacities and ethical risks of AI or on the theological foundations of dialogue, yet few offer an interdisciplinary synthesis that systematically connects these domains. Moreover, limited attention has been given to how FBOs can institutionalize AI in ways that are theologically grounded, ethically accountable, and operationally sustainable. Using a practical–theological approach (Osmer 2008, 2011), the research examines AI’s role in IRD. Integrating HCAI into IRD presents an exciting opportunity to reimagine the role of IRD and FBOs in shaping inclusive, responsive, and technologically enhanced sacred spaces. The study proposes that AI, when used responsibly, can facilitate translation, identify shared values in sacred texts, and support inclusive spaces for engagement between religious and non-religious actors. At the same time, it calls for critical discernment of the ethical and theological implications of such integration.
The work is part of a larger research project that explored the use of AI in FBOs. As a preliminary study, the paper has three main goals. First, it explores the use of AI tools in FBOs. Second, it evaluates how AI technologies can support FBOs in fostering meaningful dialogue using HCAI. Third, it introduces the AI–IRD Integration Framework, which provides guidelines for responsible adoption of AI in faith-based institutions. This framework aligns with global initiatives such as UNESCO’s (2021) Recommendation on the Ethics of Artificial Intelligence and Pope Francis’s (2020) Fratelli Tutti, both of which call for technologies that respect human dignity and promote solidarity. This study contributes to the broader effort of aligning technological innovation with theological and ethical reflection. It argues that AI, when ethically guided and inclusively designed, can deepen IRD praxis and strengthen the peacebuilding mission of FBOs worldwide. This study seeks to answer the following research questions:
  • How can AI tools support dialogue in FBOs while respecting theological and cultural/religious plurality?
  • What ethical and practical challenges arise from using AI to translate, analyze, and synthesize religious and non-religious texts in dialogue settings?
  • How can FBOs develop a sustainable, human-centered framework for integrating AI into IRD?

2. Review of Related Literature

2.1. Artificial Intelligence and IRD

Recent studies have explored the intersection of AI, theology, and ethics (Floridi 2019; Coeckelbergh 2020). Campbell and Tsuria (2021) note that digital religion reshapes authority and belonging. Papakostas (2025) explores the use of AI in religious education, showing how automation influences moral learning. Corpuz (2025a) explores the use of GenAI in theological instruction to generate faith images. Together, these perspectives highlight both opportunities and challenges in the field of theology. These works show that religion offers both resources and challenges for evaluating AI’s role in society (World Council of Churches 2026).
IRD has become increasingly urgent amid escalating global crises, including climate change, armed conflict, and deepening social inequality (Francis 2020). These structural challenges are compounded by rising geopolitical tensions and the proliferation of hate-filled discourse in digital spaces (Swidler 2014). Incidents of antisemitism and Islamophobia have grown significantly, partly intensified by conflict in the Middle East (Interfaith Alliance 2025). Extremist and far-right groups have leveraged such crises to broaden their influence, using online platforms to amplify polarizing narratives and attract audiences beyond their traditional constituencies (Interfaith Alliance 2025). In this context, IRD is not merely a theological ideal but a critical social and ethical imperative for countering division and fostering peaceful coexistence (Abu-Nimer 2020).
IRD has become firmly embedded in the political sphere, reflected in its inclusion in official policy frameworks of international bodies such as the United Nations (UN), the European Union (EU), and the Council of Europe (CoE), as well as initiatives like the United Nations Alliance of Civilizations (UNAOC), which explicitly promotes interreligious and intercultural dialogue (Banchoff 2012). In this context, the convergence of diverse religious and spiritual traditions offers pathways toward peace, mutual understanding, and shared responsibility (Corpuz 2025b). Spiritual traditions address moral and existential questions that science and technology alone cannot resolve, including how communities cultivate love, hope, compassion, and solidarity (Francis 2020). Diversity, rather than functioning as an obstacle, enriches the possibilities for dialogue and cooperation. In the digital age, new forms of interconnectedness expand opportunities for interreligious encounter. AI further shapes this landscape by influencing communication, education, and the ways spiritual ideas are exchanged across traditions. When responsibly engaged, AI has the potential to support interfaith dialogue by amplifying marginalized voices, facilitating inclusive participation, and creating new spaces for mutual learning and ethical reflection.

2.2. Interfaith Dialogue and Technology

Throughout history, developments in communication technologies have continually reshaped religious life, enabling new forms of teaching, community building, and spiritual engagement (World Council of Churches 2025). From broadcast media that transformed religious outreach in the twentieth century to the rapid integration of digital platforms for worship and formation during global crises, technological tools have largely extended established religious practices (Campbell and Tsuria 2021). More recently, however, AI marks a significant shift, as it is beginning to influence not only the transmission of religious content but also the ways believers understand sacred traditions, engage in interpretation, and encounter religious meaning itself. The intersection of AI and IRD presents both theological promise and moral tension. Scholars have begun to explore how AI redefines authority, participation, and moral agency in digital religion (Campbell and Tsuria 2021) while others highlight the capacity of AI to mediate learning and cooperation across traditions (Ahmed et al. 2025; Corpuz 2025b). Interfaith dialogue emphasizes respect, encounter, and cooperation (Panikkar 1999; Wolf 2012). Phan (2004) explores how people of different faiths can respectfully and fruitfully engage with each other in an increasingly globalized world. Much of the scholarship focuses on face-to-face encounters, yet digital technologies increasingly mediate these exchanges. Corpuz (2025b) documents grassroots interfaith efforts in the Philippines, where digital tools are now integral to sustaining dialogue. Interfaith dialogue in the Philippines requires a theology of dialogue rooted in inclusivity, plurality, justice, solidarity, and mutual recognition. These studies suggest that technology is no longer peripheral, but central, to interreligious engagement.

2.3. AI Interaction in Interreligious Dialogue Through Translation, Analysis, and Synthesis of Religious Texts

Several authors suggested that advances in machine translation would significantly redefine the work of professional translators, with their responsibilities moving largely toward reviewing and refining automated outputs (Jolley and Maimone 2022; Lee 2023; Mohsen et al. 2023). Building on these developments, contemporary AI language models, translation platforms, and text analysis tools now offer expanded opportunities for strengthening interreligious dialogue (Shormani and Alfahad 2025). Such technologies enable the large-scale processing and cross-linguistic comparison of religious and philosophical writings, facilitating deeper engagement across diverse traditions and languages. Ahmed et al. (2025) argue that AI can facilitate multi-religious understanding by providing accurate, context-sensitive translations of sacred writings and theological commentaries.
AI-assisted translation tools can reduce linguistic barriers that have long limited interfaith exchange. Automated semantic analysis enables scholars and practitioners to identify patterns of meaning, ethical convergence, and historical parallels between traditions (World Council of Churches 2025). For instance, sentiment analysis and natural language processing can reveal how compassion, justice, and mercy are expressed across texts such as the Qur’an, the Bhagavad Gita, the Bible, and Buddhist sutras. This analytical capacity supports interreligious dialogue by grounding discussions in textual evidence rather than abstraction. It also democratizes access by enabling smaller FBOs and local communities to engage with theological materials that were once inaccessible due to linguistic or academic limitations (Kolb 2021).
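As a minimal illustration of the kind of lexical analysis described above (a toy keyword count, not a full natural language processing or sentiment pipeline; the value lexicon and corpus snippets below are invented for illustration):

```python
from collections import defaultdict

# Hypothetical value lexicon: each ethical theme mapped to indicative terms.
VALUE_TERMS = {
    "compassion": {"compassion", "mercy", "kindness"},
    "justice": {"justice", "fairness", "righteousness"},
}

# Invented paraphrases for illustration -- not actual translations of any scripture.
corpus = {
    "Text A": "show mercy and kindness to the stranger and act with justice",
    "Text B": "righteousness and compassion guide the path of the wise",
}

def value_profile(text: str) -> dict:
    """Count how often each ethical theme's indicative terms appear in a text."""
    words = text.lower().split()
    counts = defaultdict(int)
    for theme, terms in VALUE_TERMS.items():
        counts[theme] = sum(1 for w in words if w in terms)
    return dict(counts)

for name, text in corpus.items():
    print(name, value_profile(text))
```

A production system would use lemmatization, multilingual embeddings, and tradition-specific lexica curated with scholars of each textual corpus; the sketch only conveys how ethical vocabularies can be surfaced and compared across texts.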
Beyond translation and analysis, AI allows for the synthesis of themes across religious literature, producing cross-referential maps of values and doctrines. Such synthesis aligns with the goal of finding “common ground” that respects diversity while encouraging collaboration for peace and sustainability (Abu-Nimer 2020). Nevertheless, scholars warn against over-reliance on algorithmic interpretation. A recent study shows that, while AI translation is improving, human translators are still essential for accurately conveying complex figurative expressions in the Holy Qur’an and preserving its deeper rhetorical meaning (Metwally and Bin-Hady 2025). Automated tools risk flattening theological nuance, reproducing Western linguistic hierarchies, and overlooking oral traditions and embodied practices central to many faiths (Coeckelbergh 2020). Thus, the challenge is to ensure that AI systems are designed with pluralism, transparency, and human oversight in mind.
AI’s capacity to translate, analyze, and synthesize religious texts positions it as a valuable mediator for contemporary IRD. When guided by ethical frameworks such as the UNESCO (2021) Recommendation on the Ethics of Artificial Intelligence and theological principles of respect for human dignity, AI can support FBOs in building inclusive, multilingual platforms for dialogue. The task for religious leaders and technologists is to co-create systems that both preserve doctrinal integrity and promote mutual understanding across traditions.
Classical and contemporary literature consistently emphasizes that dialogue is a relational and transformative practice rather than a purely instrumental exchange or debate (Knitter 2002; Clooney and von Stosch 2018; Race 1993; Cornille 2013). Interreligious dialogue is a process of encounter, mutual recognition, and shared ethical action (Phan 2004; Panikkar 1999). When read alongside emerging studies on digital religion and AI-mediated discourse (Floridi 2019; Mackey and Elhersh 2026), this literature provides a critical lens for evaluating whether AI tools support IRD.
The reviewed literature converges on three insights. First, AI requires ethical evaluation because of its transformative effect on human relationships and interpretation. Second, religious and interfaith traditions provide deep ethical resources for guiding technological development. Third, AI’s interaction with sacred texts offers both opportunity and risk, underscoring the need for critical theological reflection and institutional accountability in faith-based settings (World Council of Churches 2026). This study addresses these issues through a practical–theological framework that evaluates AI’s role in interreligious dialogue and formulates guidelines for sustainable integration in FBOs.

3. Methodology

This study adopts a practical–theological methodology grounded in the work of Osmer (2008, 2011). This approach is particularly appropriate for examining artificial intelligence within FBOs engaged in IRD, as it integrates empirical analysis, interdisciplinary interpretation, theological normativity, and constructive action. Practical theology, in this sense, is not limited to abstract reflection but is concerned with critically engaging lived religious practices within their concrete social, cultural, and technological contexts (Osmer 2008, 2011). Osmer’s (2008) framework is structured around four interrelated and iterative tasks, which this study applies to the analysis of AI-mediated IRD.
The descriptive–empirical task addresses the guiding question, “What is going on?” (Osmer 2008). In the present study, this task involves examining how AI technologies are currently being introduced, discussed, and utilized within FBOs involved in IRD. This includes attention to emerging practices such as AI-supported translation, digital platforms for dialogue, and algorithmic mediation of religious communication. Drawing on policy documents, scholarly literature, and illustrative cases, this task seeks to offer a thick description of the contemporary landscape in which AI intersects with interreligious engagement (Osmer 2011).
The interpretive task asks, “Why is this going on?”, and places the descriptive findings in dialogue with relevant insights from the social sciences, media studies, and critical AI scholarship (Osmer 2008, 2011). In this study, interpretation focuses on the social, cultural, and institutional dynamics that shape AI adoption within FBOs, including questions of ethics, technological mediation and authority, and human capacity. This task helps explain how and why certain patterns of AI use emerge in IRD contexts and how technological mediation reshapes relationships, communication norms, and theological meaning-making.
The normative task centers on the question, “What ought to be going on?”, and brings theological and ethical resources to bear on the practices identified and interpreted (Osmer 2008). Here, the study draws on interreligious theology, Christian social ethics, and human-centered AI principles to evaluate whether current and emerging uses of AI in IRD align with core commitments such as human dignity, justice, relationality, and responsibility. Normative reflection functions critically and constructively, offering moral criteria for discerning appropriate uses of AI in dialogical settings (Osmer 2011).
Finally, the pragmatic task addresses “How might we respond?” by translating normative insights into concrete guidance for practice (Osmer 2008). In this study, the pragmatic task is expressed through the development of the HCAI Integration Framework, which proposes actionable pathways for FBOs to engage AI responsibly and sustainably in IRD. This includes recommendations related to ethical governance, interdisciplinary collaboration, capacity building, and ongoing evaluation.
Taken together, this fourfold methodological approach enables the study to move systematically from empirical observation to theological and ethical evaluation, and finally toward constructive strategies for action. By integrating the insights of Osmer (2008, 2011), the methodology provides a robust and context-sensitive framework for examining AI as a transformative force in interreligious discourse within FBOs.

4. Results

What is going on? The landscape of AI applications relevant to interreligious dialogue (IRD) is diverse, ranging from machine translation to semantic analysis, conversational agents, and bias evaluation tools (Zhang et al. 2025). Within the European Union educational framework, intercultural and IRD competences are understood as transversal competences that span disciplines rather than belonging to a single subject area (Mukhidova 2023). Each category brings distinct opportunities and risks that must be critically examined within theological, ethical, and social contexts.
Table 1 presents a structured overview of major AI tool categories currently relevant to IRD, outlining their primary functions, concrete applications, benefits for FBOs, and associated ethical considerations. The results demonstrate that AI technologies are not monolithic; rather, they support IRD at multiple levels—linguistic, textual, educational, organizational, and strategic. Advances in AI have produced models capable of tasks such as speech-to-speech translation, machine translation, text summarization, and other forms of language generation (Brown et al. 2020). Collectively, these tools extend the reach, depth, and inclusivity of interfaith engagement while simultaneously introducing new theological, ethical, and methodological challenges that require careful governance (UNESCO 2021).
Why is this going on? Across categories, a clear pattern emerges: AI tools are most effective in IRD when they function as augmentative rather than substitutive instruments. They enhance human-centered engagement, comparative theological reflection, and institutional collaboration, but they do not replace interpretive authority, pastoral discernment, or lived encounter. In recent years, HCAI has gained significant scholarly and institutional traction, particularly since the late 2010s, as evidenced by a growing body of academic literature (Ahmad et al. 2023; Human-Centered AI 2023; Shneiderman 2022), policy engagement (e.g., by the G7, G20, UN, EU, and EC), and the establishment of dedicated research centers and institutes at major universities, including the Stanford Human-Centered AI Institute, the Berkeley Center for Human-Compatible AI, and initiatives at the University of Technology Sydney (Human-Centered AI 2023; Shneiderman 2022). Beyond academia, the HCAI framework has also been taken up by major institutions (e.g., World Bank, World Economic Forum, UNESCO, and OECD), civil society groups, and professional associations, as well as by major technology firms such as IBM Research, Google, and Microsoft (Human-Centered AI 2023; Shneiderman 2022). The principle of human-centeredness in AI deployment serves as the unifying thread that integrates these key research findings.

4.1. Linguistic and Communicative Capacities

Machine translation systems and speech recognition tools emerge as foundational enablers of IRD (Shormani and Alfahad 2025). Their primary contribution lies in reducing linguistic barriers that historically limited interfaith encounters to elite or monolingual participants (Metwally and Bin-Hady 2025). As shown in Table 1, neural machine translation and voice-based systems expand access to sacred texts, live dialogue events, and shared liturgical or commemorative practices across languages. AI-driven translation systems and speech recognition technologies have significantly enhanced cross-linguistic communication, enabling participants from diverse linguistic and cultural backgrounds to engage more substantively in shared dialogical spaces (Campbell and Tsuria 2021). Neural machine translation has expanded access to religious texts, liturgical materials, and interfaith conversations that would otherwise remain inaccessible due to language barriers, thereby reshaping the communicative ecology of religious encounter (EWTN 2026). This development represents a significant milestone in liturgical accessibility and global participation, illustrating how AI can function as a mediating infrastructure that broadens inclusion while reconfiguring the dynamics of communal worship within a multilingual Church (EWTN 2026).
However, there are well-documented limitations: AI may misinterpret or oversimplify complex religious language and metaphorical structures, potentially distorting meaning and cultural nuance unless coupled with careful human oversight and contextual expertise (Papakostas 2025). These limitations are especially salient in sacred texts, which often deploy layered symbolism and culturally embedded idioms that AI models struggle to interpret accurately without human contextualization (Geraci 2010). For instance, studies of “Ask the Rabbi” websites show that responses were typically brief and offered with minimal citation or engagement with authoritative texts, relying more on the rabbi’s voice than on explicit textual evidence (Tsuria and Campbell 2021). At the same time, the use of these technologies has expanded well beyond professional translators, with adoption occurring at a pace that often exceeds critical evaluation (Metwally and Bin-Hady 2025). As a result, persistent problems—including hallucinations, inaccuracies, and embedded biases—continue to emerge, frequently in contexts where errors carry significant social consequences (Weidinger et al. 2022; Zhang et al. 2025).

4.2. Thematic and Comparative Research Functions

Semantic text analysis and cross-textual synthesis tools demonstrate particular relevance for comparative theology (CT) and academic IRD research (Clooney and von Stosch 2018). Within comparative theology of religions (ToRs) and IRD, the use of AI-assisted research tools significantly reshapes how scholars engage sacred texts, doctrinal traditions, and lived religious expressions (Shormani and Alfahad 2025). Tools for semantic analysis, topic modeling, and cross-textual synthesis enable researchers to trace recurring themes, symbolic structures, and ethical vocabularies across multiple religious corpora at a scale and depth unattainable through close reading alone. Rather than replacing traditional hermeneutical methods, these tools function as amplifiers of comparative insight, allowing scholars to identify macro-level patterns that can later be examined through contextual and tradition-specific interpretation, or what Daggers (2013) calls particularity.
In CT, where sustained engagement with more than one religious tradition is central, AI facilitates the systematic comparison of key concepts. For example, semantic clustering can surface how notions of compassion are framed relationally in Buddhist texts, covenantally in Jewish and Christian scriptures, or communally in Islamic ethical discourse. Such findings provide an empirical starting point for deeper theological reflection, helping scholars articulate both convergences and irreducible differences without collapsing traditions into a single moral framework, and supporting a pluralistic theology of religions (Dupuis 1997; Hick and Knitter 2005). As Geraci (2010) notes, digital methods can expose how similar ethical ideals are embedded within distinct metaphysical and cosmological assumptions, thereby enriching rather than flattening comparative analysis.
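The clustering idea can be sketched in miniature as a greedy grouping of passages by vocabulary overlap—a toy stand-in for embedding-based semantic clustering. All passages and the overlap threshold are invented for illustration.

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard overlap between two word sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Invented paraphrases for illustration -- not actual scriptural text.
passages = {
    "P1": "compassion for all beings arises from seeing their suffering",
    "P2": "the suffering of all beings calls for compassion from the wise",
    "P3": "the covenant binds the community in justice and faithfulness",
}

def cluster(passages: dict, threshold: float = 0.2) -> list:
    """Greedy single-pass clustering: attach each passage to the first
    cluster whose accumulated vocabulary overlaps above the threshold."""
    clusters = []  # list of (vocabulary set, [passage names])
    for name, text in passages.items():
        words = set(text.lower().split())
        for vocab, members in clusters:
            if jaccard(words, vocab) >= threshold:
                members.append(name)
                vocab |= words  # grow the cluster's vocabulary in place
                break
        else:
            clusters.append((words, [name]))
    return [members for _, members in clusters]

print(cluster(passages))
```

Here the two compassion-themed paraphrases group together while the covenant-themed passage forms its own cluster. A real pipeline would replace word overlap with multilingual sentence embeddings and a principled clustering algorithm; the sketch only conveys the shape of the computation.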

4.3. Interactive Dialogue and Educational Engagement

Dialogue agents, conversational AI, and recommendation systems primarily support educational and community-facing dimensions of IRD. Conversational AI and chatbot systems have been explored as tools for community engagement and introductory religious learning (Papakostas 2025; Rähme and Prohl 2025). For FBOs, this represents a strategic opportunity to broaden engagement while adapting content to diverse audiences. Interreligious and intercultural competences are a key part of higher religious education (Kolb 2021). In educational settings, interreligious dialogue may encompass a wide range of themes, including historical awareness, political and human-rights formation, engagement with religious and cultural traditions, and the development of civic responsibility, as articulated in the White Paper on Intercultural Dialogue issued by the Council of Europe (2008).
Scholarly analysis and empirical studies alike warn that generative systems may flatten complex theological distinctions and unconsciously reproduce culturally embedded assumptions, underscoring the importance of transparent design and clear acknowledgment of technological limits (Tsuria and Campbell 2021). Moreover, because AI cannot substitute for the depth of pastoral judgment or academic discernment of professors and teachers, its involvement in interpretive or dialogical contexts raises concerns about authority and theological credibility (Leo XIV 2025). For this reason, leaders of interfaith engagement bear the responsibility not only to guide conversations but also to foster critical reflection, mutual encouragement, and thoughtful challenge when participants address sensitive and demanding issues (Pope 2021).

4.4. Visual, Ethical, and Strategic Dimensions

Image and symbol recognition tools, alongside ethics and fairness frameworks, extend AI’s role beyond text into visual culture, governance, and institutional trust. Interactive AI systems such as ChatGPT facilitate dynamic and context-sensitive engagement with biblical texts (Chrostowski and Najda 2025). When combined with text-to-image technologies such as DALL·E, these tools further expand interpretive possibilities by generating visual representations of scriptural narratives, thereby enhancing accessibility and broadening public engagement with religious content. The analysis in Table 1 shows that multimodal AI can support comparative study of religious art and shared heritage, but misclassification of sacred imagery poses a high risk of offense. Scholarly research consistently argues that AI-generated images frequently fall short of key dimensions traditionally associated with human creativity, including genuine originality, cultural situatedness, affective depth, intentional agency, and conceptual integration (Makimei et al. 2025, p. 1; Campos et al. 2017; Chatterjee 2022; Herzfeld 2024). Similarly, ethics and fairness frameworks emerge as cross-cutting safeguards rather than standalone tools (Papakostas 2025).
Predictive analytics represents the most strategically oriented category, offering insights into participation trends and program sustainability. While useful for planning, the results caution against over-quantification of faith dynamics, which may overlook embodied, spiritual, and relational dimensions central to IRD.
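As an illustration of this planning-oriented (not meaning-oriented) use of predictive analytics, the sketch below fits a simple linear trend to attendance figures. Both the numbers and the linear-growth assumption are invented for the example; any such projection should be weighed against the embodied, spiritual, and relational dimensions noted above rather than treated as a forecast of the dialogue itself.

```python
# Hypothetical sketch: projecting attendance at interfaith events with a
# simple linear trend. The figures are invented; real planning data would
# come from an FBO's own records.
import numpy as np

years = np.array([2020, 2021, 2022, 2023, 2024], dtype=float)
attendance = np.array([80, 95, 110, 120, 140], dtype=float)

# Fit a degree-1 polynomial (a straight line) to the observed counts.
slope, intercept = np.polyfit(years, attendance, 1)

def forecast(year: float) -> float:
    """Projected attendance under the linear-trend assumption."""
    return slope * year + intercept

print(f"Estimated annual growth: {slope:.1f} participants/year")
print(f"Projected 2025 attendance: {forecast(2025):.1f}")
```

A one-line model like this can inform budgeting and venue choices, but it deliberately quantifies only what is countable, which is precisely the limitation the paragraph above flags.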

4.5. Synthesis

The findings synthesized in Table 1 indicate that AI tools can significantly strengthen IRD when aligned with HCAI and theologically informed frameworks. Their greatest value lies in enhancing access, revealing comparative insights in ToRs, and supporting sustainable engagement. However, ethical vigilance, interpretive humility, and ongoing collaboration among technologists, theologians, and dialogue practitioners are essential. AI, in this context, functions not as an autonomous agent of dialogue but as a third space in which interreligious understanding can be responsibly cultivated (Corpuz 2021). In summary, while AI technologies create significant opportunities to improve accessibility, attention to human-centered considerations must accompany technical objectives (UNESCO 2021). Within AI development and deployment, this perspective emphasizes enhancing user experience, improving the clarity and usability of information, promoting fairness and trust, reducing bias, and fostering the creation of responsible and accountable AI systems (Zhang et al. 2025).

5. Discussion: The AI–IRD Integration Framework

What ought to be going on? This section presents the AI–IRD Integration Framework, which translates HCAI's key principles into actionable guidance for integrating AI into interreligious practice. The rise of HCAI frameworks emphasizes the alignment of AI with human values, fairness, and contextual awareness, reinforcing the need for ethical safeguards in IRD applications (World Council of Churches 2026). HCAI scholarship argues that mitigating algorithmic bias and ensuring human oversight are essential to maintaining equity and trust in AI systems, especially in sensitive domains like religion and spirituality, where interpretations significantly shape lived meaning. Instruments such as bias evaluation frameworks and audit tools are vital in promoting accountable AI that respects plural religious identities and avoids marginalization of minority perspectives (UNESCO 2021). Organizations advocating algorithmic justice also underscore how unchecked AI use can unintentionally reproduce social inequities. The resulting AI–IRD Integration Framework provides a practical model for sustainable and inclusive dialogue. It highlights how AI can strengthen intercultural understanding, expand access to dialogue, and support collaboration while preserving the human dimension of encounter.
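The bias-evaluation instruments mentioned above can, in principle, begin with something as simple as a representation check. The Python sketch below uses entirely invented example outputs and a hypothetical parity threshold to count how often each tradition is mentioned in a batch of AI-generated texts, flagging those that fall well below an equal share. Production audit tools are far more sophisticated; this only shows the shape of such a check.

```python
# Minimal sketch of a representation audit over AI-generated dialogue texts.
# The outputs, tradition list, and threshold are all invented for illustration.
from collections import Counter

outputs = [
    "Christian and Muslim leaders discussed shared ethics of hospitality",
    "Buddhist teachings on compassion informed the Christian panel",
    "Jewish and Christian scholars compared covenant themes",
    "Muslim and Christian youth explored charity traditions",
]
traditions = ["Christian", "Muslim", "Jewish", "Buddhist", "Hindu"]

# Count how many outputs mention each tradition by simple substring match
# (a real audit would use more robust entity matching).
counts = Counter()
for text in outputs:
    for t in traditions:
        if t in text:
            counts[t] += 1

total = sum(counts.values()) or 1  # avoid division by zero on empty batches
threshold = 0.5 / len(traditions)  # flag traditions under half of an equal share

flagged = [t for t in traditions if counts[t] / total < threshold]
print("Mention counts:", dict(counts))
print("Underrepresented:", flagged)
```

Even this toy check makes a governance point visible: without deliberate curation, a generation pipeline can systematically privilege some traditions while rendering others invisible.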

5.1. Conceptual Foundations

Human-centered Artificial Intelligence (HCAI) offers a normative and practical approach to AI design and deployment that explicitly prioritizes human dignity, agency, and well-being. Rather than treating intelligence as a purely technical achievement, HCAI reframes the entire AI lifecycle around human needs, values, and social contexts (Riedl 2019; Xu 2019; Xu et al. 2023). In this view, AI systems are not intended to replace human judgment or responsibility but to augment human capabilities through technologies that are demonstrably reliable, safe, and trustworthy (Shneiderman 2022). Central to this approach is the active involvement of users and stakeholders throughout design, implementation, and evaluation, ensuring that AI reflects how people actually think, act, and relate in real-world settings. By emphasizing transparency, usability, and interpretability, HCAI seeks to cultivate trust while advancing broader goals such as access to services, safety, inclusion, and overall quality of life (Xu et al. 2023).
When applied to AI-mediated forms of interaction, HCAI reveals that algorithms, data infrastructures, and platform architectures do more than enable communication—they actively shape the conditions, power relations, and outcomes of dialogue. Building on this insight, the present study develops an HCAI Integration Framework structured around three interrelated dimensions—Ethics, Technology, and Humans—which together ground a theory of responsible AI use in IRD. Figure 1 presents the HCAI Integration Framework, based on Shneiderman (2022) and Xu et al. (2023). The framework positions AI as an enabling mediator that supports translation, access, and collaboration among religious and non-religious actors without displacing human discernment or theological reflection (Shneiderman 2022). Its distinctive contribution lies in balancing technological capability with ethical accountability and theological depth, offering faith-based organizations (FBOs) a coherent guide for sustainable implementation.

5.2. Ethical Dimension

The ethical dimension clarifies the purpose and moral horizon of AI in dialogue. Situated within the broader ethical AI discourse, HCAI affirms commitments to human rights, fairness, diversity, transparency, and bias mitigation (Shneiderman 2022; Zhang et al. 2025). Within this framework, ethical reflection attends to issues such as data privacy, transparency, bias mitigation, and the development of accountable and explainable AI. HCAI further emphasizes adaptability and contextual awareness, recognizing that AI systems learn from human behavior and operate within specific social and cultural environments (Xu et al. 2023).
While digital technologies have transformed how religious knowledge is accessed and interpreted, AI must remain a supportive tool rather than a substitute for spiritual communication or discernment. Its role is to expand accessibility—through translation, multilingual and interreligious exchange, and information support—while remaining oriented toward transformation rather than efficiency alone. Accordingly, AI integration must be guided by virtues such as humility, compassion, and justice, and resist dynamics of exaggeration, extremism, or polarization, as cautioned by Pope Francis (2020).

5.3. Technological Dimension

The technological dimension focuses on mediation, governance, and learning processes (UNESCO 2021). HCAI underscores the need for institutional mechanisms that promote system reliability, a strong culture of safety, and sustained trustworthiness. Empathetic design plays a critical role here, enabling developers and institutions to anticipate user confusion, frustration, and risk—especially in high-stakes contexts (Shneiderman 2022). AI-driven tools such as natural language processing, translation systems, and sentiment analysis can surface thematic connections and ethical convergences across traditions, yet technological mediation is never neutral. System design inevitably reflects cultural assumptions, data biases, and power asymmetries (Floridi 2019; Coeckelbergh 2020; Yao 2024; Zhang et al. 2025). For FBOs, this requires vigilance against reducing religious life to abstracted data points and sustained attention to how control over infrastructure, data, and expertise shapes dialogical authority (Ashraf 2022). Ethical governance must therefore be collaborative and interdisciplinary, integrating theological, technological, legal, and cultural perspectives, as emphasized by Quirós-Fons (2025).

5.4. Human Dimension

The human dimension centers on institutional responsibility and long-term sustainability. Here, HCAI frames FBOs as moral agents tasked with embedding ethical norms, transparency, and accountability into AI-driven initiatives. Effective integration requires organizational policies aligned with broader social teachings on human dignity, participation, and solidarity. In IRD contexts, work processes are strengthened when human and technological intelligence are intentionally integrated, allowing each to contribute its distinct strengths. HCAI thus functions as a normative framework encompassing principles such as fairness, accountability, beneficence, justice, and explicability (Shneiderman 2022). For FBOs, this means ensuring that innovation serves the common good, prioritizes the marginalized, and contributes to peacebuilding (Francis 2020). Practically, this involves capacity building, education in digital ethics, and alignment with international standards such as those articulated by UNESCO (2021), while leveraging AI to amplify local voices and participation in multilingual and multicultural settings (Corpuz 2025b).

5.5. Synthesis

The HCAI Integration Framework offers a coherent synthesis that enables FBOs to engage AI in interreligious dialogue responsibly and sustainably. By holding ethics, technology, and human agency in dynamic relation, the framework affirms that meaningful dialogue in the age of AI remains fundamentally relational, morally accountable, and oriented toward the flourishing of both humanity and the wider world.

6. Limitations and Future Research

How might we respond? This study offers a new conceptual and theological framework for integrating AI into IRD within FBOs. However, it faces several limitations that suggest directions for future inquiry. First, the research remains largely theoretical and interpretive. It proposes a model grounded in theology and ethics but does not yet include empirical validation through case studies or pilot programs in specific FBOs. Future studies should therefore examine how the proposed AI–IRD Integration Framework performs in practice, particularly in multilingual or post-conflict settings where dialogue carries high moral and emotional stakes.
Second, the study’s reliance on secondary literature limits its capacity to address the rapid evolution of AI technologies. Advances in generative AI, deep learning, and natural language processing (NLP) are transforming how meaning is produced and interpreted in the context of IRD. Continuous interdisciplinary and transdisciplinary research is necessary to assess how these developments reshape theological concepts such as revelation, interpretation, and discernment. Scholars and practitioners should also explore how AI influences spiritual formation, pastoral ministry, and ethical education within religious institutions (Tsuria and Campbell 2021).
Third, cultural and theological diversity complicates the application of any single framework. Faith communities differ in how they understand the relationship between human agency, divine action, and technological mediation. What may appear as innovation in one context could be viewed as intrusion in another. Future research should therefore adopt a comparative and intercultural approach, incorporating insights from Christian, Muslim, Hindu, Buddhist, and Indigenous traditions (Francis 2020; World Council of Churches 2026). Such pluralistic engagement would enrich the framework’s capacity to reflect the global and relational nature of dialogue and multilogue (Susan 2025).
Fourth, the study has not addressed the material and ecological costs of AI, such as energy consumption, data storage, and digital inequality. These factors bear ethical weight, especially when viewed through Catholic social teaching on creation and justice. Future research could investigate how sustainable technology policies align with the Church’s call for ecological conversion in Laudato Si’ (Francis 2015). Addressing these ecological dimensions would extend the moral scope of the AI–IRD framework from interreligious harmony to planetary responsibility.
Finally, the study opens questions about the theological anthropology of AI. Future research should explore how cyber-theology and theological anthropology together respond to these emerging questions about moral agency, relationality, spiritual formation, and freedom of religion or belief (FoRB) in a digital age (Quirós-Fons 2025). These inquiries will not only deepen theological understanding but also shape how humanity exercises responsibility in creating and governing intelligent systems in IRD.

7. Conclusions

AI confronts FBOs engaged in IRD with both opportunity and profound ethical responsibility. By integrating practical theology with Human-Centered AI, this study offers a novel AI–IRD Integration Framework that positions FBOs as moral agents capable of guiding technological development toward dialogue, justice, and the common good. In a global context marked by polarization, digital misinformation, and religious tension and extremism, the responsible engagement of AI in IRD is no longer optional but urgent. The findings demonstrate that AI is already reshaping IRD at linguistic, educational, comparative, visual, and strategic levels, yet its transformative potential depends decisively on human-centered governance and theological discernment. The study shows that AI tools are most constructive when they function as augmentative infrastructures—expanding access, illuminating cross-traditional ethical convergences, and strengthening institutional collaboration—while remaining accountable to ethical safeguards and interpretive humility (UNESCO 2021; Shneiderman 2022).
In synthesizing these insights into the AI–IRD Integration Framework, structured around ethical, technological, and human dimensions, this research offers a globally relevant model that moves beyond abstract AI ethics toward contextually grounded implementation within FBOs. Therefore, this article calls on FBOs to take concrete steps—such as developing institutional AI ethics guidelines, forming interdisciplinary partnerships with technology experts, investing in digital literacy formation, and aligning with international standards such as those proposed by UNESCO (2021)—so that AI innovation is intentionally directed toward peacebuilding, inclusion, and the safeguarding of human dignity in an era of AI–human coexistence.

Funding

The APC was funded by De La Salle University Manila, Philippines.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Abu-Nimer, Mohammed. 2020. Interreligious Dialogue and the Path to Reconciliation. In Multi-Level Reconciliation and Peace-Building. London and New York: Routledge. [Google Scholar]
  2. Agence France-Presse. 2025. Virtual Jesus? People of Faith Divided as AI enters Religion. Available online: https://www.gmanetwork.com/news/scitech/technology/961319/virtual-jesus-people-of-faith-divided-as-ai-enters-religion/story/ (accessed on 26 February 2026).
  3. Ahmad, Khlood, Mohamed Abdelrazek, Chetan Arora, Muneera Bano, and John Grundy. 2023. Requirements practices and gaps when engineering human-centered Artificial Intelligence systems. Applied Soft Computing 143: 110421. [Google Scholar] [CrossRef]
  4. Ahmed, Saif, Ayesha Akter Sumi, and Norzalita Abd Aziz. 2025. Exploring Multi-Religious Perspective of Artificial Intelligence. Theology and Science 23: 104–28. [Google Scholar] [CrossRef]
  5. Arora, Suvrat. 2025. People Are Using AI to Talk to God. BBC. October 18. Available online: https://www.bbc.com/future/article/20251016-people-are-using-ai-to-talk-to-god (accessed on 1 November 2025).
  6. Ashraf, Cameran. 2022. Exploring the Impacts of Artificial Intelligence on Freedom of Religion or Belief Online. The International Journal of Human Rights 26: 757–91. [Google Scholar] [CrossRef]
  7. Banchoff, Thomas. 2012. Interreligious Dialogue and International Relations. In Rethinking Religion and World Affairs. Edited by Thimothy Samuel Shah, Alfred Stepan and Monica Duffy Toft. Oxford: Oxford University Press, pp. 204–16. [Google Scholar]
  8. Brown, Tom, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, and et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems 33: 1877–901. [Google Scholar]
  9. Campbell, Heidi A., and Ruth Tsuria. 2021. AI, Religion, and the Future. London: Routledge. [Google Scholar]
  10. Campos, Victor, Brendan Jou, and Xavier Giro-i-Nieto. 2017. From pixels to sentiment: Fine-tuning CNNs for visual sentiment prediction. Image and Vision Computing 65: 15–22. [Google Scholar] [CrossRef]
  11. Chandra, Rohitash, and Mukul Ranjan. 2022. Artificial intelligence for topic modelling in Hindu philosophy: Mapping themes between the Upanishads and the Bhagavad Gita. PLoS ONE 17: e0273476. [Google Scholar] [CrossRef] [PubMed]
  12. Chatterjee, Anjan. 2022. Art in an age of artificial intelligence. Frontiers in Psychology 13: 1024449. [Google Scholar] [CrossRef]
  13. Chrostowski, Mariusz, and Andrzej J. Najda. 2025. ChatGPT as a modern tool for Bible teaching in confessional religious education: A German view. Journal of Religious Education 73: 75–94. [Google Scholar] [CrossRef]
  14. Clarke, Matthew, and Vicki-Anne Ware. 2015. Understanding faith-based organizations: How FBOs are contrasted with NGOs in international development literature. Progress in Development Studies 15: 37–48. [Google Scholar] [CrossRef]
  15. Clooney, Francis X., and Klaus von Stosch, eds. 2018. How to Do Comparative Theology. New York: Fordham University Press. [Google Scholar]
  16. Coeckelbergh, Mark. 2020. AI Ethics. Cambridge, MA: MIT Press. [Google Scholar]
  17. Cornille, Catherine. 2013. Conditions for Inter-Religious Dialogue. In The Wiley-Blackwell Companion to Inter-Religious Dialogue. West Sussex: Blackwell Publishing. [Google Scholar]
  18. Corpuz, Jeff Clyde G. 2021. Religions in action: The role of interreligious dialogue in the COVID-19 pandemic. Journal of Public Health 43: e236–e237. [Google Scholar] [CrossRef]
  19. Corpuz, Jeff Clyde G. 2025a. Faith and Artificial Intelligence (AI) in Catholic Education: A Theological Virtue Ethics Perspective. Religions 16: 1083. [Google Scholar] [CrossRef]
  20. Corpuz, Jeff Clyde G. 2025b. Toward Grassroots Interfaith Dialogue: The Role of a Faith-Based Movement. Religions 16: 345. [Google Scholar] [CrossRef]
  21. Council of Europe. 2008. White Paper on Intercultural Dialogue—“Living Together as Equals in Dignity”. Available online: https://www.coe.int/en/web/campaign-free-to-speak-safe-to-learn/-/white-paper-on-intercultural-dialogue-living-together-as-equals-in-dignity-2008- (accessed on 17 February 2026).
  22. Daggers, Jenny. 2013. Postcolonial Theology of Religions: Particularity and Pluralism in World Christianity. New York: Routledge. [Google Scholar]
  23. Dorobantu, Marius. 2024. Could Robots Become Religious? Theological, Evolutionary, and Cognitive Perspectives. Zygon: Journal of Religion and Science 59: 768–87. [Google Scholar] [CrossRef]
  24. Dupuis, Jacques. 1997. Toward a Christian Theology of Religious Pluralism. Maryknoll: Orbis. [Google Scholar]
  25. EWTN. 2026. Vatican to Use AI to Translate Masses at St. Peter’s into 60 Languages in Real Time. Available online: https://ewtnvatican.com/articles/ai-live-translation-st-peters-masses-60-languages (accessed on 26 February 2026).
  26. Floridi, Luciano. 2019. The Ethics of Artificial Intelligence. Oxford: Oxford University Press. [Google Scholar]
  27. Francis, Pope. 2015. Laudato Si’. Vatican City: Libreria Editrice Vaticana. Available online: https://www.vatican.va/content/francesco/en/encyclicals/documents/papa-francesco_20150524_enciclica-laudato-si.html (accessed on 9 February 2026).
  28. Francis, Pope. 2020. Fratelli Tutti. Vatican City: Libreria Editrice Vaticana. Available online: https://www.vatican.va/content/francesco/en/encyclicals/documents/papa-francesco_20201003_enciclica-fratelli-tutti.html (accessed on 9 February 2025).
  29. Geraci, Robert M. 2010. Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality. Oxford: Oxford University Press. [Google Scholar]
  30. Herzfeld, Noreen. 2024. Is an AI-generated icon an icon? A case study in artificial intelligence and Christian spirituality. Spiritus: A Journal of Christian Spirituality 24: 281–94. [Google Scholar] [CrossRef]
  31. Hick, John, and Paul F. Knitter. 2005. The Myth of Christian Uniqueness: Toward a Pluralistic Theology of Religions. Wipf and Stock Publishers. [Google Scholar]
  32. Human-Centered AI. 2023. Research Groups. Available online: https://hcai.site/groups/ (accessed on 9 February 2026).
  33. Interfaith Alliance. 2025. The Rise of AI Generated Hate Content. Available online: https://www.interfaithalliance.org/post/the-rise-of-ai-generated-hate-content (accessed on 1 March 2026).
  34. Jolley, Jason R., and Luciane Maimone. 2022. Thirty years of machine translation in language teaching and learning: A review of the literature. L2 Journal 14: 26–44. [Google Scholar] [CrossRef]
  35. Knitter, Paul. 2002. Introducing Theologies of Religions. Ossining: Orbis Books. [Google Scholar]
  36. Kolb, Jonas. 2021. Modes of interreligious learning within pedagogical practice. An analysis of interreligious approaches in Germany and Austria. Religious Education 116: 142–56. [Google Scholar] [CrossRef]
  37. Lee, Sangmin-Michelle. 2023. The effectiveness of machine translation in foreign language education: A systematic review and meta-analysis. Computer Assisted Language Learning 36: 103–25. [Google Scholar] [CrossRef]
  38. Leo XIV, Pope. 2025. General Audience St Peter’s Square Wednesday, 29 October 2025 [Multimedia]. Available online: https://www.vatican.va/content/leo-xiv/en/audiences/2025/documents/20251029-udienza-generale.html (accessed on 1 November 2025).
  39. Mackey, Andrew Ambrose, and Ghanem Ayed Elhersh. 2026. Debating Divinity Through AI: Christian-Muslim Responses to Artificial Dialogues on YouTube. In Oxford Intersections: Social Media in Society and Culture. Edited by Laeeq Khan. Online edition. Oxford: Oxford Academic. [Google Scholar] [CrossRef]
  40. Makimei, Hidde, Shuai Wang, and Willem van Peursen. 2025. Seeing the words: Evaluating AI-generated biblical art. arXiv arXiv:2504.16974. [Google Scholar] [CrossRef]
  41. Metwally, Amal Abdelsattar, and Wagdi Rashad Ali Bin-Hady. 2025. Exploring human vs. AI-powered translation to metonymic expressions: A case study of the Holy Quran. Social Sciences & Humanities Open 12: 101615. [Google Scholar] [CrossRef]
  42. Mohsen, Mohammed Ali, Sultan Althebi, and Mohammed Albahooth. 2023. A scientometric study of three decades of machine translation research: Trending issues, hotspot research, and co-citation analysis. Cogent Arts & Humanities 10: 2242620. [Google Scholar] [CrossRef]
  43. Mukhidova, Olima. 2023. The importance of transversal competencies in the training of future teachers. International Scientific Journal Science and Innovation 2: 328–32. [Google Scholar]
  44. Osmer, Rick. 2008. Practical Theology. An Introduction. Grand Rapids: William B. Eerdmans. [Google Scholar]
  45. Osmer, Rick. 2011. Practical theology. A current international perspective. HTS Teologiese Studies/Theological Studies 67: 156–62. [Google Scholar] [CrossRef]
  46. Panikkar, Raimon. 1999. The Intrareligious Dialogue. New York: Paulist Press. [Google Scholar]
  47. Papakostas, Christos. 2025. Artificial Intelligence in Religious Education: Ethical, Pedagogical, and Theological Perspectives. Religions 16: 563. [Google Scholar] [CrossRef]
  48. Phan, Peter. 2004. Being Religious Interreligiously: Asian Perspectives on Interfaith Dialogue. Maryknoll: Orbis Books. [Google Scholar]
  49. Phan, Peter. 2022. Pope Francis and Interreligious Encounter. Theological Studies 83: 25–47. [Google Scholar] [CrossRef]
  50. Pope, Elizabeth M. 2021. Facilitator Guidance during Interfaith Dialogue. Religious Education 116: 369–82. [Google Scholar] [CrossRef]
  51. Quirós-Fons, Antonio. 2025. Religious Freedom in the Age of AI and Surveillance: Mapping the Field and Charting Future Research Directions. The Review of Faith & International Affairs 23: 104–16. [Google Scholar] [CrossRef]
  52. Race, Alan. 1993. Christians and Religious Pluralism: Patterns in the Christian Theology of Religions, 2nd ed. London: SCM Press. [Google Scholar]
  53. Rähme, Boris, and Inken Prohl. 2025. Religious Studies Approaches to the Intersection of Artificial Intelligence and Religion: Formations Analogous to Religion. Religion 55: 573–95. [Google Scholar] [CrossRef]
  54. Riedl, Mark O. 2019. Human-centered artificial intelligence and machine learning. Human Behavior and Emerging Technologies 1: 33–36. [Google Scholar] [CrossRef]
  55. Shneiderman, Ben. 2022. Human-Centered AI. Oxford: Oxford University Press. [Google Scholar]
  56. Shormani, Mohammed Q., and Abdulrahaman Alfahad. 2025. Artificial Intelligence or Human: The use of ChatGPT in the academic translation for religious texts. Sage Open 15: 21582440251343954. [Google Scholar] [CrossRef]
  57. Susan, Kemigisha. 2025. Artificial Intelligence: Investigate the Implications of AI on the Work of the Holy Spirit, Including the Potential for AI to Facilitate or Hinder Spiritual Growth. Advances in Social Sciences Research Journal 12: 27–33. [Google Scholar] [CrossRef]
  58. Swidler, Leonard. 2014. Dialogue for Interreligious Understanding: Strategies for the Transformation of Culture-Shaping Institutions. In Interreligious Studies in Theory and Practice. New York: Palgrave MacMillan. [Google Scholar]
  59. Tsuria, Ruth, and Heidi A. Campbell. 2021. “In My Own Opinion”: Negotiation of Rabbinical Authority Online in Responsa within Kipa.co.il. Journal of Communication Inquiry 45: 65–84. [Google Scholar] [CrossRef]
  60. UNESCO. 2021. Recommendation on the Ethics of Artificial Intelligence. Available online: https://www.unesco.org/en/articles/recommendation-ethics-artificial-intelligence (accessed on 10 February 2025).
  61. Weidinger, Laura, Jonathan Uesato, Maribeth Rauh, Conor Griffin, Po-Sen Huang, John Mellor, Amelia Glaese, Myra Cheng, Borja Balle, Atoosa Kasirzadeh, and et al. 2022. Taxonomy of risks posed by language models. Paper presented at 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul, Republic of Korea, June 21–24; pp. 214–29. [Google Scholar]
  62. Wolf, Alain. 2012. Intercultural Identity and Inter-Religious Dialogue: A Holy Place to Be? Language and Intercultural Communication 12: 37–55. [Google Scholar] [CrossRef]
  63. World Council of Churches. 2025. African Church Leaders Explore Artificial Intelligence. Available online: https://www.oikoumene.org/news/african-church-leaders-explore-artificial-intelligence (accessed on 17 February 2026).
  64. World Council of Churches. 2026. Panel Discussion Focuses on Risks of AI—And How Faith Communities Respond. Available online: https://www.oikoumene.org/news/panel-discussion-focuses-on-risks-of-ai-and-how-faith-communities-respond (accessed on 28 February 2026).
  65. Xu, Wei. 2019. Toward human-centered AI: A perspective from human-computer interaction. Interactions 26: 42–46. [Google Scholar] [CrossRef]
  66. Xu, Wei, Marvin J. Dainoff, Liezhong Ge, and Zaifeng Gao. 2023. Transitioning to human interaction with AI systems: New challenges and opportunities for HCI professionals to enable human-centered AI. International Journal of Human–Computer Interaction 39: 494–518. [Google Scholar] [CrossRef]
  67. Yao, Dedo Williams Johannes. 2024. The Role of Artificial Intelligence in Shaping Religion and Social Studies. Convergence Chronicles 5: 334–42. [Google Scholar]
  68. Zhang, Jing, Wenlong Song, and Yang Liu. 2025. Cognitive bias in generative AI influences religious education. Scientific Reports 15: 15720. [Google Scholar] [CrossRef] [PubMed]
Figure 1. HCAI Integration Framework based on (Shneiderman 2022; Xu et al. 2023).
Table 1. AI tools and their applications in interreligious dialogue.
| AI Tool/Category | Primary Function | Application in Interreligious Dialogue (IRD) | Benefits for FBOs & Interfaith Engagement | Ethical Considerations |
|---|---|---|---|---|
| Machine Translation (e.g., neural MT systems) | Automatic translation of text/speech | Facilitates communication across language barriers in interfaith settings; enables access to sacred texts in multiple languages | Expands inclusivity; supports shared liturgical or dialogical events | Risk of mistranslating doctrinal nuance; cultural and faith sensitivity |
| Semantic Text Analysis (e.g., topic modeling, sentiment analysis) | Extracts themes and emotional tone from large text corpora | Identifies shared ethical concepts across scriptures; maps patterns in religious discourse | Deepens understanding of convergences/divergences; informs curriculum design | Algorithmic bias; interpretive oversimplification |
| Cross-Textual Synthesis Tools (e.g., AI summarization, knowledge graphs) | Generates condensed, relational insights across texts | Compares sacred texts for thematic parallels; supports comparative theology | Enhances scriptural literacy; aids interreligious teaching resources | Reductive synthesis may mask context-specific meanings |
| Dialogue Agents/Conversational AI (e.g., chatbots trained on multi-faith data) | Interactive Q&A and conversation simulation | Community engagement; educational support in interreligious forums | Scales outreach; supports exploratory learning | Ontological concerns about simulated understanding; pastoral integrity |
| Speech Recognition & Voice Assistants | Transcribes and responds to spoken language | Accessibility for dialogue events; real-time interpretation | Improves participatory access for diverse communities | Accuracy across accents/ritual language; privacy |
| Image & Symbol Recognition (e.g., multimodal AI) | Identifies visual religious symbols and contexts | Supports analysis of religious art in comparative research; digital archives | Cultural education; shared heritage mapping | Misclassification of sacred imagery may offend |
| Ethics & Fairness Frameworks (e.g., audit tools, fairness evaluation) | Detects bias and promotes accountable AI systems | Ensures dialogical AI respects diverse traditions; mitigates exclusionary outcomes | Strengthens trust; aligns with human-centered values | Requires ongoing theological criteria for justice |
| Collaborative Filtering & Recommendation Systems | Suggests relevant content based on user patterns | Tailors interfaith educational materials to participants’ interests | Personalized learning pathways; adaptive IRD resources | Echo chambers; reinforcement of stereotypes |
| Predictive Analytics | Models likely outcomes from data patterns | Assesses potential impacts of dialogical initiatives (e.g., participation trends) | Informs strategy for sustainable IRD programming | Quantification of faith dynamics may overlook lived nuance |
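To make the "Semantic Text Analysis" row of Table 1 concrete, the following minimal sketch shows how shared ethical vocabulary might be surfaced across traditions by simple term overlap. This is a deliberately simplified stand-in for the topic-modeling and embedding methods such tools actually use, and the excerpts are invented placeholders, not actual scripture.

```python
# Minimal sketch: surfacing candidate "convergence" themes across
# traditions by keyword overlap. Excerpts are invented placeholders.
from collections import Counter

excerpts = {
    "Tradition A": "compassion and mercy toward the poor and the stranger",
    "Tradition B": "mercy and compassion for all living beings",
    "Tradition C": "justice, mercy, and compassion for every neighbor",
}

STOPWORDS = {"and", "the", "for", "all", "toward", "every"}

def keywords(text):
    """Lowercase each word, strip punctuation, and drop stopwords."""
    words = [w.strip(".,;:").lower() for w in text.split()]
    return {w for w in words if w and w not in STOPWORDS}

# Count in how many traditions each keyword appears.
counts = Counter()
for text in excerpts.values():
    counts.update(keywords(text))

# Keywords present in every tradition: candidate shared themes.
shared = sorted(w for w, n in counts.items() if n == len(excerpts))
print(shared)  # → ['compassion', 'mercy']
```

Even this toy example illustrates the table's caution about interpretive oversimplification: surface-level word overlap says nothing about how each tradition understands "mercy", which is why the framework insists on human theological evaluation of machine-generated convergences.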

Share and Cite

MDPI and ACS Style

Corpuz, J.C.G. Artificial Intelligence and Interreligious Dialogue: Emerging Implications for Faith-Based Organizations. Religions 2026, 17, 354. https://doi.org/10.3390/rel17030354
