Article

Impact of Artificial Intelligence on the Management of Religious Diversity in Healthcare

by María-José Parejo-Guzmán 1 and David Cobos-Sanchiz 2,*
1 Law School, Universidad Pablo de Olavide, 41013 Sevilla, Spain
2 Social Sciences School, Universidad Pablo de Olavide, 41013 Sevilla, Spain
* Author to whom correspondence should be addressed.
Religions 2026, 17(1), 20; https://doi.org/10.3390/rel17010020
Submission received: 5 November 2025 / Revised: 10 December 2025 / Accepted: 19 December 2025 / Published: 24 December 2025

Abstract

Religious plurality is an increasingly prevalent phenomenon in contemporary societies, and managing it within the healthcare sector presents significant challenges. Healthcare systems must therefore strike a balance between religious freedom and the organisation of healthcare services. This paper addresses the management and legal treatment of religious diversity in healthcare, focusing on the impact of Artificial Intelligence in this area. Artificial Intelligence is undoubtedly transforming the management of religious diversity in healthcare: while many advances have been observed in recent years, numerous ethical and privacy challenges have also emerged, which call for a reconfiguration of the legal framework. Issues such as conscientious objection by healthcare personnel, access to treatments compatible with patients’ beliefs, and possible tensions between the right to health and religious freedom will be analysed. The influence of Artificial Intelligence on decision-making and the personalisation of treatments, along with the ethical and legal challenges this entails, will also be explored. Based on this analysis, we will reflect on current challenges and possible improvements in managing religious plurality in healthcare systems. Our aim is to promote a model that provides better medical care, adequately addresses ethical and privacy challenges, respects diversity, and guarantees fundamental rights.

1. Introduction

Religious plurality is a structural reality in contemporary societies, especially in advanced democratic systems, where globalisation, migration and technological expansion have multiplied the scenarios of coexistence between different worldviews. The health sector is undoubtedly one of the areas where this diversity is most visible and relevant, given that it is where the fundamental values of the individual (life, health, dignity and freedom of conscience) intersect with the institutional organisation of public services and scientific advances. In this context, healthcare systems are forced to manage a constant tension between the institutional neutrality of social and democratic states governed by the rule of law and the effective recognition of religious diversity, in order to guarantee both the freedom of conscience of professionals and the religious freedom of patients.
Added to this complexity is a new transformative vector: Artificial Intelligence. Its emergence in the field of health has brought about a paradigm shift in the way medical care is diagnosed, managed and personalised. Algorithmic tools capable of analysing large volumes of clinical data make it possible to improve the efficiency of the system and optimise treatments, but at the same time raise questions of enormous ethical and legal significance (Baena et al. 2023; Domingo Moratalla 2024; Velásquez and Ruiz 2023).
The World Health Organisation (WHO)1, in its first global report on Artificial Intelligence in health, identified six fundamental principles that should guide its use: human autonomy, well-being, transparency, accountability, equity and sustainability. These principles aim to ensure that technology remains at the service of people and not the other way around, avoiding the reproduction of biases or inequalities. However, incorporating these principles requires mechanisms capable of adequately integrating the cultural and spiritual determinants that shape clinical decisions: can algorithms respect the diversity of moral and religious beliefs? What happens when automated decisions affect sensitive bioethical issues such as blood transfusions, fertility treatments or end-of-life decisions?
The development of Artificial Intelligence in healthcare has advanced at a dizzying pace. From pioneering projects such as IBM Watson Health, which combined clinical diagnosis and predictive analysis (Strickland 2019, pp. 24–31), to recent innovations such as the use of Google algorithms for the early detection of breast cancer (Lagos 2024), AI-based medicine has become an everyday reality. However, these advances imply the need for an ethical and legal framework that allows us to respond to new dilemmas of responsibility, privacy and algorithmic justice: “The risks of AI in healthcare require an ethical and legal response” (Pastor 2024).
From a legal perspective, some authors have emphasised that the incorporation of artificial intelligence into the healthcare sector requires a reinterpretation of traditional categories such as civil liability, professional autonomy and informed consent (Cotino Hueso 2017; Aznar Domingo 2022). In turn, other authors have emphasised the need to develop governance models that integrate biomedical ethics and technology law in order to address the challenges arising from automated decision-making (Baena et al. 2023; Romeo Casabona 2020).
The management of religious diversity is particularly affected by this technological context. Religious beliefs can influence both the acceptance of certain treatments and the definition of the very notion of health and well-being. Similarly, healthcare professionals may face conflicts of conscience arising from the use of technologies whose operating logic is beyond direct human control. Consequently, artificial intelligence is not only transforming medicine, but also the conditions under which freedom of conscience, the right to health and the moral autonomy of individuals are exercised (Zito 2025, pp. 133–46; Bellido Diego-Madrazo 2025).
To illustrate the relevance of this phenomenon, it is useful to consider several examples that are already emerging in clinical practice. Algorithmic decision-support systems may recommend treatments that conflict with patients’ religious beliefs—such as blood transfusions for Jehovah’s Witnesses—without being programmed to identify or respect such objections. In other cases, algorithms used to allocate resources or prioritise waiting lists can reproduce structural biases against religious minorities if the historical datasets used for training fail to incorporate cultural or spiritual variables. Automated decision-making at the end of life, based on predictions of survival or quality of life, may also come into tension with religious convictions regarding the dignity of dying, palliative sedation, or the use of extraordinary measures. These examples demonstrate that the issue is not abstract: it already manifests itself in concrete clinical and organisational situations that call for urgent ethical and legal responses.
The fundamental objective of the research is to analyse, from a broad perspective, how Artificial Intelligence is transforming the relationships between religious freedom, the right to health, and clinical practice, considering both its potential and the risks it poses. The proposed methodological approach is based on a qualitative and interdisciplinary design, which we believe is appropriate for addressing the complexity of the subject matter. Initially, we have posed three research questions: how Artificial Intelligence can transform the management of religious diversity in the healthcare setting; what tensions are emerging between patient autonomy and religious beliefs, on the one hand, and decisions based on automated systems, on the other; and what legal aspects are necessary to ensure adequate governance of religious diversity in this context.
Methodologically, a non-experimental design has been chosen, focusing on documentary and conceptual analysis, with an exploratory-analytical scope, which aims to offer an integrated understanding of the phenomenon and to provide regulatory and ethical tools for its governance. To begin with, the normative and institutional foundations that recognise and protect religious diversity in the healthcare field are examined, followed by an analysis of the advances brought about by Artificial Intelligence and the ethical dilemmas they raise. Finally, some criteria are proposed for ethical and inclusive governance that ensure respect for human rights and religious freedom. These are, therefore, three axes of analysis that allow the discussion to be placed in a broad framework, combining legal sources, scientific literature, and the most recent regulations.
The ultimate goal is to contribute to a critical understanding of Artificial Intelligence in healthcare, highlighting how its design and governance condition the way in which patients’ autonomy, dignity, and moral integrity are protected.
In this context, the central question guiding this article is how Artificial Intelligence can be integrated into healthcare systems without overlooking the plurality of values, beliefs and worldviews that shape the patient’s clinical experience. The religious dimension—understood broadly as a meaningful component of personal identity and of clinical decision-making—should not be regarded as an accessory element, but rather as a factor that directly influences autonomy, risk perception, treatment acceptance and the interaction between patients and healthcare professionals. For this reason, the article systematically examines the challenges that arise when automated models fail to incorporate these variables adequately and proposes theoretical and normative criteria to articulate a governance framework for AI that remains coherent with the principles of the right to health in democratic and culturally diverse societies.

2. Religious Diversity in Healthcare: Fundamentals and Current Challenges

2.1. Legal Recognition of Religious Freedom in the Healthcare Context

The right to freedom of religion and conscience is recognised as an essential pillar of the international legal order. From the Universal Declaration of Human Rights (Art. 18) and the International Covenant on Civil and Political Rights (Art. 18) to regional instruments, such as the European Convention on Human Rights (Art. 9) and the Charter of Fundamental Rights of the European Union (Art. 10), religious freedom encompasses not only the internal dimension of belief, but also its public manifestation, including practices and observances related to health.
Classic contributions from the sociology of religion have long emphasised that religious plurality is not an anomaly but a structural feature of contemporary societies. Berger (1967) famously described how modernity produces a “religious market” in which multiple systems of meaning coexist and claim legitimacy, compelling institutions—including healthcare systems—to adapt their frameworks of action. Likewise, Casanova (1994) argued that religion has not withdrawn from the public sphere but has rather reconfigured its presence, shaping public policies, personal identity, and welfare systems. Taylor (2007) further highlighted that religious and moral convictions constitute “moral sources” through which individuals articulate their identity and make ethically grounded decisions, including those related to health and illness. These theoretical insights help explain why religious commitments continue to play a significant role in clinical contexts and why healthcare systems must provide responses that are attentive to such diversity.
In the health sector, this right translates into an obligation on the part of States and health systems to guarantee an environment in which medical decisions can be made without coercion and in accordance with the patient’s personal convictions. This recognition also implies a duty to respect the conscientious objection of health personnel, provided that it is in harmony with the right of patients to receive adequate care without discrimination.
In the European context, the European Court of Human Rights (ECtHR) has developed significant case law on the balance between freedom of conscience, public health and patient rights, emphasising that States enjoy a margin of appreciation but must ensure proportionality in any limitation of religious freedom.
In Spain, Article 16 of the Constitution recognises freedom of religion and worship, and Organic Law 7/1980 on Religious Freedom establishes the basis for cooperation between the State and religious denominations. In the health sector, the application of this principle takes the form of cooperation agreements (those signed with the Catholic Church, the Federation of Evangelical Religious Entities of Spain—FEREDE2, the Federation of Jewish Communities—FCIE3 and the Islamic Commission of Spain—CIE4), which address, among other things, issues such as religious care in public hospitals, thus incorporating criteria that allow healthcare practices to be adapted to personal convictions that are relevant in clinical and organisational terms.
In a comparative perspective, healthcare systems have developed different approaches to integrating religious diversity into clinical practice. In countries such as the United Kingdom and Canada, robust frameworks of cultural competence and faith-sensitive care have been implemented. These models recognise religion as a social determinant of health and promote institutional protocols for cultural adaptation, spiritual support, and respect for patients’ religious preferences. In the United States, many hospitals—especially those linked to university networks and bioethics centres—have established chaplaincy services, interdisciplinary ethics committees, and specific guidelines for addressing conflicts between medical treatment and religious convictions, particularly in advance care planning, organ donation, or decisions at the end of life.
In continental Europe, several noteworthy models also exist. France maintains an approach of strict institutional neutrality inspired by laïcité, which results in a clear separation between healthcare organisations and religious expression, while still ensuring access to spiritual care in public hospitals. Germany adopts a cooperative model grounded in the constitutional recognition of religious communities, reflected in hospital protocols that guarantee pastoral assistance and allow adaptations of certain medical procedures when justified by the patient’s beliefs. In the Netherlands, the healthcare system includes specialised units on cultural and religious diversity within university hospitals, as well as comprehensive guidelines for intercultural communication and moral-pluralistic decision-making.
Beyond the Western context, other jurisdictions have developed significant practices. In Singapore and Malaysia, multicultural models incorporate religious liaison officers who act as mediators when religious beliefs substantially influence medical decisions. In Israel, a highly technologised healthcare system includes experts in halakha and mixed committees that examine the compatibility between medical treatments, end-of-life decisions, and Jewish or Muslim religious prescriptions. In India, marked by extensive religious plurality, various states have adopted protocols for managing issues such as funeral practices, refusal of blood products, dietary restrictions, and gender- or privacy-related concerns in clinical settings.
As outlined above, Spain thus offers a further relevant example within the European framework, combining constitutional protection of religious freedom with cooperation agreements that cover spiritual assistance in hospitals, dietary adaptation, and certain medical practices. Nevertheless, the present study does not focus on any single jurisdiction. Rather, it adopts a global and comparative perspective, drawing on different models to provide a comprehensive and cross-national analysis of the impact of Artificial Intelligence on the management of religious diversity in healthcare.

2.2. Practical Challenges in Managing Religious Diversity in Hospitals and Health Centres

The management of religious diversity in healthcare faces multiple organisational, ethical and legal challenges. Among the most significant are: conscientious objection by healthcare personnel; access to treatments in accordance with the patient’s beliefs; and the presence and provision of spiritual care in hospitals.
With regard to conscientious objection, the legal debate revolves around the tension between the individual freedom of the professional and the guarantee of universal healthcare. In cases such as abortion or euthanasia, comparative jurisprudence has established that the exercise of objection must be personal, motivated and compatible with the institutional duty of service (Romeo Casabona 2020). With the incorporation of Artificial Intelligence, new questions arise: can a healthcare professional object to the use of an automated system in a clinical decision that contravenes their convictions? Can a machine execute a decision that ignores the conscientious objection of the responsible doctor?
On the other hand, respect for patients’ beliefs raises the need to adapt medical and hospital protocols. Common examples include Jehovah’s Witnesses’ refusal of blood transfusions or the refusal to receive certain treatments derived from animal products. In multicultural contexts, these decisions can involve complex bioethical dilemmas between patient autonomy and the duty to preserve life (Domingo Moratalla 2024).
Finally, spiritual care is an essential component of overall well-being. The WHO and various bioethics associations maintain that religious or spiritual support contributes to the healing process and the humanisation of medicine. However, increasing organisational automation is forcing us to rethink how we preserve spaces for personal support in environments where technological intermediation is becoming increasingly prevalent.

2.3. Tensions Between the Right to Health and Religious Freedom

It is clear that the coexistence of the right to health and religious freedom is not always harmonious. The COVID-19 pandemic highlighted the difficulties of reconciling mandatory health measures with religious practices, showing that public health can conflict with freedom of worship. In everyday clinical practice, this tension is reflected in decisions that affect deep moral and religious convictions.
Artificial Intelligence introduces an additional layer of complexity: if clinical decisions are based on algorithms, how can we ensure that the patient’s religious preferences are respected in the design and operation of these systems? Some authors have warned that Artificial Intelligence systems trained without cultural or religious diversity criteria may reproduce biases that harm minorities or groups with different practices (Baena et al. 2023). Similarly, other authors argue that Artificial Intelligence, lacking moral subjectivity, can generate “diffuse responsibility” in decision-making with spiritual or ethical impact (Santos Divino 2021, pp. 237–52; Zito 2025, pp. 133–46). Artificial Intelligence lacks moral subjectivity and cannot discern between good and evil, acting in an amoral manner and depending on the values and decisions of the humans who programme or supervise it (Durand-Azcárate et al. 2023): “Although it can simulate human behaviour, it does not possess the volitional capacity or ethical reflection characteristic of human beings, which limits its performance in ethical or spiritual dilemmas”; “The absence of moral subjectivity in AI can erode ethics in decision-making, influence human autonomy and modify the construction of identities and values, which requires constant human supervision and the application of ethical principles and responsibility in the design and use of these systems” (Durand-Azcárate et al. 2023, pp. 629–41).
For all these reasons, managing religious diversity in healthcare requires the articulation of ethical and legal governance mechanisms that balance patient autonomy, professional freedom of conscience, and the principles of justice and efficiency that govern modern healthcare systems. The following sections examine how these tensions are reconfigured by the introduction of automated systems and what regulatory implications arise from this.

3. The Emergence of Artificial Intelligence in Healthcare

3.1. Main Applications of Artificial Intelligence in the Healthcare Sector

The incorporation of Artificial Intelligence into healthcare systems has led to a structural transformation of contemporary medicine. Its applications range from assisted diagnosis and risk prediction to personalised treatments, monitoring of chronic patients and efficient management of hospital resources (Velásquez and Ruiz 2023, pp. 84–91; WHO 2021).
Machine learning algorithms and deep neural networks enable the processing of large volumes of clinical data with greater accuracy than humans in certain fields, such as radiology, oncology, and dermatology (Esteva et al. 2019, pp. 24–29; Topol 2019). Artificial Intelligence has also been key in genetic sequencing and precision medicine, facilitating therapies tailored to each patient’s biological and behavioural profile (Rajpurkar et al. 2022, pp. 31–38).
At the organisational level, intelligent systems applied to hospital management optimise resource distribution, reduce costs and improve responsiveness to health emergencies.
However, as Bellido Diego-Madrazo (2025) warns, these systems introduce new vulnerabilities, including technological dependence, potential algorithmic failures and the loss of human control over medical decisions.
Recent technological advances, such as the Med-PaLM 2 model developed by Google DeepMind or generative artificial intelligence systems applied to the interpretation of medical images, illustrate the transformative power of this technology. However, these developments must be analysed in light of the principles of biomedical ethics and human rights, as it is not only a matter of improving the efficiency of the system, but also of ensuring that healthcare processes continue to be aligned with the basic principles of biomedical ethics and health law (Domingo Moratalla 2024, pp. 101–2).

3.2. Ethical and Legal Issues Arising from the Use of Artificial Intelligence in Healthcare

The expansion of Artificial Intelligence raises ethical and legal challenges that affect the very structure of the right to health.
Firstly, the transparency and explainability of algorithms are necessary conditions for the legitimacy of their decisions. When an automated system recommends or rejects a treatment, the patient and the professional must be able to understand the criteria underlying that decision (Beriaín 2019, pp. 93–109). Algorithmic opacity, known as the “black box effect”, compromises the traceability of information and makes it difficult to assign responsibility (Goodman and Flaxman 2017, pp. 50–57; Zhang and Zhang 2023).
Secondly, the protection of sensitive personal data takes on crucial importance. Health information is one of the most protected types of data under European law (Regulation (EU) 2016/679, GDPR). Its use by artificial intelligence systems requires enhanced guarantees of anonymisation, informed consent and control over the data lifecycle. The AI Act, approved by the European Union in 2024, introduces specific obligations for high-risk systems, including those used in healthcare, requiring impact assessments, transparency and registration obligations, and meaningful human oversight.
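By way of illustration, the kind of safeguard that the GDPR and the AI Act point towards can be sketched in code. The fragment below is a minimal, hypothetical example of pseudonymising a clinical record before it reaches an analytical system: the patient identifier is replaced by a salted hash and directly identifying fields are dropped. The field names and the salt handling are assumptions made for the sketch, not a description of any compliant implementation.

```python
import hashlib
import os

# The salt would be kept apart from the data (e.g., in a key-management
# service); the environment-variable name here is purely illustrative.
PSEUDONYM_SALT = os.environ.get("PSEUDONYM_SALT", "example-salt")

# Hypothetical fields that identify the patient directly and must not
# reach the model.
DIRECT_IDENTIFIERS = {"name", "address", "phone", "email", "national_id"}

def pseudonymise(record: dict) -> dict:
    """Replace the patient ID with a salted hash and drop direct identifiers."""
    token = hashlib.sha256(
        (PSEUDONYM_SALT + str(record["patient_id"])).encode("utf-8")
    ).hexdigest()
    cleaned = {k: v for k, v in record.items()
               if k not in DIRECT_IDENTIFIERS and k != "patient_id"}
    cleaned["pseudonym"] = token
    return cleaned

record = {"patient_id": "12345", "name": "Jane Doe",
          "religious_preference": "no_blood_products", "diagnosis": "anaemia"}
print(pseudonymise(record))
```

It is worth noting that pseudonymised data of this kind still count as personal data under the GDPR, so the enhanced guarantees discussed above continue to apply throughout the data lifecycle.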
Thirdly, the issue of civil and criminal liability arising from the use of Artificial Intelligence emerges. Who is liable for a medical error caused or mediated by an algorithmic system? Legal doctrine has proposed different options or possibilities: liability of the manufacturer, the programmer, the healthcare professional or shared liability (Aznar Domingo 2022, pp. 2–15; Navarro Mendizábal 2024, pp. 1–15). The debate is not merely technical, but ethical, as it reflects the tension between professional autonomy and technological delegation.
The incorporation of Artificial Intelligence into medicine has driven the need for clear regulations to define liability in the event of harm. Although Artificial Intelligence systems may have a degree of autonomy, the healthcare professional remains responsible for supervising and validating algorithmic recommendations. Effective supervision and documentation of decisions made with the support of Artificial Intelligence are key obligations, especially in high-risk areas. If the professional acts in accordance with the lex artis and supervises adequately, and the error is due to a system defect, liability may shift to the manufacturer or supplier under the product liability regime. Proving the causal link between the algorithmic failure and the damage is a significant legal challenge. Legal doctrine has therefore concluded that liability for a medical error mediated by Artificial Intelligence is shared and depends on the specific case: it may fall on the professional, the institution, the manufacturer or a combination of these, depending on the cause of the error and the applicable regulations. The regulatory trend is to require human supervision and transparency in the use of Artificial Intelligence in healthcare (González-García Viñuela 2024, pp. 1–21; Inglada Galiana et al. 2024, pp. 178–86).
From a bioethical perspective, the report by the Observatory of Trends in Future Medicine emphasises that delegating medical decisions to automated systems requires maintaining the principle of accountability and constant human supervision (Romeo Casabona 2020). Automation cannot hollow out the moral responsibility of the doctor or the autonomy of the patient.
Finally, there is the problem of algorithmic bias. Algorithms learn from historical data and can therefore reproduce pre-existing inequalities, including those of a cultural, ethnic or religious nature. Recent studies show that automated diagnostic systems are less accurate in underrepresented or minority population groups (Obermeyer et al. 2019, pp. 447–53). This structural bias can lead to indirect discrimination contrary to the principles of equality and non-discrimination recognised by international human rights instruments5.
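The structural bias just described can be made measurable with a simple subgroup audit. The following sketch, using invented data and group labels, compares a model’s sensitivity (true-positive rate) across patient groups; a marked gap between groups is the statistical signature of the indirect discrimination discussed above.

```python
from collections import defaultdict

# Each tuple: (group label, true outcome, model prediction).
# The data and labels are invented for illustration only.
results = [
    ("majority", 1, 1), ("majority", 1, 1), ("majority", 0, 0),
    ("minority", 1, 0), ("minority", 1, 1), ("minority", 0, 0),
]

def sensitivity_by_group(results):
    """True-positive rate per group: of the truly positive cases,
    how many did the model detect?"""
    tp = defaultdict(int)   # correctly detected positives per group
    pos = defaultdict(int)  # actual positives per group
    for group, truth, pred in results:
        if truth == 1:
            pos[group] += 1
            tp[group] += pred
    return {g: tp[g] / pos[g] for g in pos}

print(sensitivity_by_group(results))
# {'majority': 1.0, 'minority': 0.5} -> the model misses half of the
# positive cases in the minority group, a red flag for equity review.
```

Audits of this kind are deliberately simple; their value lies in being run periodically and per group, so that disparities surface before they consolidate into unequal care.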

3.3. Artificial Intelligence as a Tool to Support Clinical Decision-Making

One of the most sensitive areas of artificial intelligence use is clinical decision-making. Intelligent systems offer support for diagnosis or treatment selection, but the final decision must remain human. The risk arises when professionals blindly trust algorithmic recommendations, a phenomenon known as automation bias. This cognitive bias can reduce doctors’ critical thinking skills and turn patients into passive objects of technology (Topol 2019).
Artificial Intelligence also impacts the doctor-patient relationship, which is the ethical core of medicine. If decisions are mediated by algorithms, how can empathy, spiritual support, or the personalised care required to respect religious beliefs be preserved? In pluralistic environments, patient confidence in the healthcare system depends not only on technical efficacy but also on the cultural and moral sensitivity of the treatment received (WHO 2021).
The integration of Artificial Intelligence in contexts where decisions have religious implications—such as organ donation, assisted reproduction, or palliative sedation—requires regulatory frameworks capable of articulating healthcare decision-making with the plurality of values present in complex democratic societies. Doctrine has emphasised that Artificial Intelligence must operate under principles of safety, equity, transparency and respect for patient privacy and autonomy, involving ethics specialists, health professionals and patients themselves in the supervision and development of these technologies (Inglada Galiana et al. 2024, pp. 178–86). As Domingo Moratalla (2024, pp. 101–2) warns, the ethics of artificial intelligence in health must be inspired by a relational and non-instrumental view of the person, incorporating the spiritual and community dimensions of healthcare.

4. Impact of Artificial Intelligence on the Management of Religious Diversity

4.1. Algorithms and Religious Pluralism: Neutrality or Automated Discrimination?

The issue of algorithmic neutrality versus religious pluralism is one of the most pressing challenges of the 21st century. Although artificial intelligence is presented as an objective technology, its design and training reflect the values and biases of those who programme it (Cotino Hueso 2017, pp. 131–50). This has direct implications for the management of religious diversity, especially when algorithms are applied in treatment prioritisation systems, resource allocation or personalised clinical recommendations.
Several studies have shown that Artificial Intelligence models can reproduce dominant cultural patterns and marginalise religious or ethnic minorities. For example, hospital diet recommendation systems may ignore religious dietary restrictions—such as halal, kosher, or strict vegetarian diets—if these are not correctly parameterised in the database (WHO 2021). Consequently, the design of medical Artificial Intelligence must incorporate an approach that is sensitive to faith and plurality of values to ensure equity, trust and respect for diversity in healthcare (Shetty et al. 2025). In this regard, ignoring the diversity of values, religious beliefs and worldviews can lead to bias, discrimination and loss of patient autonomy, negatively affecting the trustworthiness and effectiveness of medical Artificial Intelligence: the active participation of diverse communities and ongoing ethical evaluation are recommended to adapt Artificial Intelligence to plural contexts (Zhang and Zhang 2023). The key question is to determine which technical and regulatory criteria allow this sensitivity to be translated into verifiable computational models that are compatible with human rights principles.
This need to integrate cultural and religious differences into healthcare governance aligns with Burchardt’s analysis of how contemporary regimes of diversity are shaped through biopolitical instruments such as data infrastructures, technological systems, and administrative procedures (Burchardt 2021). From this perspective, Artificial Intelligence is not merely a technical tool but a new regulatory space in which pluralism is negotiated. Depending on how algorithms are designed and what values inform their models, these systems may reinforce dynamics of recognition and inclusion or contribute to the marginalisation of minority religious sensibilities.
In terms of human rights, UNESCO has insisted that cultural and religious diversity must be considered a cross-cutting principle in the ethical design of Artificial Intelligence. Similarly, the Council of Europe (2023), in its Recommendation on the Human Rights Impacts of Algorithmic Systems, stresses that States must ensure democratic oversight of AI systems to prevent forms of structural discrimination6.
A clearer distinction between religious-freedom-compliant uses of Artificial Intelligence in healthcare and those that disregard it can be illustrated through contrasting examples. A system may be considered respectful of religious freedom when it incorporates patients’ religious preferences into their electronic health records—such as objections to receiving blood transfusions, the need for halal or kosher diets, or restrictions related to end-of-life practices—and adapts its clinical recommendations accordingly. When adaptation is not possible, ethically aligned systems alert clinicians to potential conflicts so that alternative protocols can be activated, thereby safeguarding patient autonomy.
By contrast, a system that ignores religious freedom operates without integrating these variables or without enabling meaningful human oversight in cases of conflict. For instance, surgical prioritisation algorithms trained on biased historical data may systematically disadvantage religious minorities with culturally distinct health-seeking behaviours. Likewise, automated decision-support tools that recommend procedures incompatible with a patient’s religious convictions—such as transfusions or specific biomedical interventions—without notifying clinicians exemplify uses that undermine religious freedom and generate indirect discrimination. The contrast between these two models demonstrates how the presence or absence of religious sensitivity in AI systems can have a substantial impact on the protection of fundamental rights in healthcare.
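The contrast between the two models ultimately comes down to a design decision: whether the patient’s recorded convictions form part of the data the system reasons over, and whether conflicts are surfaced for human deliberation rather than resolved silently. The following minimal sketch of the “respectful” variant uses invented constraint and treatment codes; it illustrates the design pattern, not any existing clinical system.

```python
from dataclasses import dataclass, field

@dataclass
class PatientRecord:
    patient_id: str
    # Religious or moral constraints recorded with the patient's consent;
    # the flag names are hypothetical.
    recorded_constraints: set = field(default_factory=set)

# Hypothetical mapping from treatment codes to the constraints
# they may conflict with.
TREATMENT_CONFLICTS = {
    "blood_transfusion": {"no_blood_products"},
    "porcine_heart_valve": {"halal_diet", "kosher_diet"},
}

def check_recommendation(patient: PatientRecord, treatment: str) -> dict:
    """Return the recommendation, or a conflict alert for human review."""
    conflicts = TREATMENT_CONFLICTS.get(treatment, set()) & patient.recorded_constraints
    if conflicts:
        # The system does not decide: it surfaces the conflict so that
        # clinician and patient can deliberate over alternatives
        # (e.g., bloodless surgery protocols).
        return {"status": "conflict_alert", "treatment": treatment,
                "conflicting_constraints": sorted(conflicts)}
    return {"status": "recommended", "treatment": treatment}

patient = PatientRecord("P-001", {"no_blood_products"})
print(check_recommendation(patient, "blood_transfusion"))
# {'status': 'conflict_alert', 'treatment': 'blood_transfusion',
#  'conflicting_constraints': ['no_blood_products']}
```

The essential point is that the alert overrides neither the recommendation nor the patient: it routes the case into the deliberative channel that, as the following sections argue, must remain human.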

4.2. Personalised Treatments and Respect for the Patient’s Beliefs

Continuing along the same lines, it should be noted that Artificial Intelligence promises more personalised medicine, based on each patient’s genomic, historical and behavioural data. Crucially, however, this personalisation must also include the ethical and religious dimension of the individual. In many contexts, beliefs determine the acceptance or rejection of certain treatments, the way illness is confronted, and end-of-life decisions.
Artificial Intelligence systems should be integrated with decision-making models that respect the patient’s religious preferences, provided that these are adequately recorded in their digital medical records. This requires a delicate balance between personal autonomy and the institutional neutrality of the healthcare system (Romeo Casabona 2020). The central issue is to define decision-making frameworks in which algorithmic personalisation does not replace the patient’s capacity for deliberation or the clinical dialogue necessary for complex decisions.
In this regard, models of ethically aware algorithms have been proposed, capable of incorporating moral variables into their operation (Floridi and Cowls 2019). These models could be adapted to religious contexts, provided they are designed with transparency and under interdisciplinary ethical supervision.

4.3. Artificial Intelligence and Freedom of Conscience for Healthcare Personnel

The implementation of Artificial Intelligence also has an impact on the freedom of conscience of healthcare professionals. If an algorithm determines a course of action that the doctor considers incompatible with their ethical or religious convictions, how is this discrepancy managed? Can a professional refuse to follow the recommendation of an automated system without incurring disciplinary responsibility?
In situations of this kind, where the discrepancy is managed on the basis of ethical and legal principles and professional autonomy, the professional may refuse to follow the automated recommendation on ethical or religious grounds, especially if there are institutional mechanisms in place to channel these conflicts. Healthcare ethics committees and ethics consultants are key resources for resolving conflicts between automated recommendations and the personal convictions of professionals. However, only a small fraction of these conflicts formally reach the committees, suggesting the need to strengthen these channels and the culture of ethical consultation in clinical practice (Zapatero Ferrandiz et al. 2017, pp. 549–51). Most doctrine emphasises that the final decision must rest with the healthcare professional, who assumes clinical and ethical responsibility for their actions, even when faced with automated systems; the obligation to blindly follow an algorithm’s recommendation cannot be imposed, especially if there are ethical, legal or safety concerns, and protection from disciplinary liability depends on the existence of clear protocols and the use of established ethical channels (Beriaín 2019, pp. 93–109). In conclusion, the prevailing view in legal doctrine is that doctors may refuse to follow the recommendation of an automated system on ethical or religious grounds, provided that they justify their decision and use the institutional mechanisms for ethical consultation. Professional autonomy and ethical deliberation are essential to avoid disciplinary liability in such cases.
Along these lines, legal doctrine is beginning to raise the possibility of recognising a “technological conscientious objection”, understood as the right of professionals not to delegate decisions that affect fundamental values to a machine. Some authors have argued that respect for the physician’s conscience is part of professional dignity and cannot be overridden by automation: “Respect for the physician’s conscience is considered an essential pillar of professional dignity. Medical excellence requires a balance between scientific knowledge and ethical values, where the physician’s independence and introspective judgement are fundamental to social trust and the quality of medical practice. The doctor-patient relationship is based on trust and the doctor’s fiduciary responsibility, which means that any action that disconnects the professional from these principles violates the ethos of the profession”; “Professional excellence requires that the doctor maintain his or her capacity for ethical judgement even in the face of external pressures, including those arising from automation” (Verdú González 2020, pp. 1–28); “Automation and technology can improve medical practice, but they should not replace the principles of medical humanism or professional autonomy. Technology must be implemented in a way that respects the traditional values of medical humanism, allowing for an improved model of doctor-patient relationship only if it is compatible with these principles” (Baños Díez and Guardiola Pereira 2024, pp. 1–5). However, they also warn that this right must be exercised in a manner that is proportionate and compatible with the patient’s interests.
The future of medical ethics will depend, to a large extent, on the ability to articulate a coexistence between artificial intelligence and moral intelligence. Technology can assist clinical judgement, but never replace it. The integration of the religious and spiritual dimension into this process constitutes a normative challenge, but also an opportunity to rethink the person-centred model of medicine.
Ultimately, respect for the physician’s conscience is inseparable from professional dignity and should not be overridden by automation. In other words, any automation process must be part of a deliberative structure that preserves clinical judgement and the ethical standards of the healthcare profession. In recent decades, the doctor-patient relationship has evolved from a bilateral one to a more complex relationship where organisation and technology play a decisive role in clinical decision-making. This has reduced the traditional freedom of action of the physician, as decisions are now conditioned by institutional policies, patient autonomy and process automation: “Advances in technology and automation in medicine have raised concerns about dehumanisation and the possible loss of professional autonomy. However, it is recognised that technology can improve medical practice if it is implemented in accordance with the principles of medical humanism. The key is to reconcile automation with the preservation of ethics and individual clinical judgement” (Baños Díez and Guardiola Pereira 2024, pp. 1–5). The professional autonomy of physicians is being transformed and, to a certain extent, limited by automation and organisational influence. Nevertheless, it remains essential to preserve ethical judgement and medical humanism in clinical practice.

4.4. Empirical Examples of the Impact of Artificial Intelligence on Religious Diversity

Although many of the challenges described have a structural dimension, there are already concrete examples that illustrate how Artificial Intelligence can directly affect religious freedom and the management of pluralism in clinical settings.
(a) Algorithmic recommendations incompatible with religious beliefs (blood transfusions).
In several hospitals in the United States, bioethics committees have reported conflicts arising from algorithmic decision-support systems that recommended urgent blood transfusions for Jehovah’s Witness patients. Because the systems did not integrate religious objections into the electronic health record or into the model’s parameters, they generated automatic alerts that prompted clinicians to intervene quickly, sometimes without activating existing alternative protocols (such as bloodless surgery techniques or volume expanders). These cases led institutions to revise their algorithms to incorporate variables related to religious sensitivity.
(b) Automated dietary planning systems that omit religious restrictions.
In the Netherlands and Germany, internal audits in university hospitals revealed that AI-based dietary planning tools failed to identify religious dietary requirements such as halal, kosher or Hindu vegetarian restrictions. Automated menu assignments triggered frequent complaints and exposed the fact that the underlying datasets were incomplete or insufficiently labelled. As a result, several hospitals introduced specific “religious dietary flags” and reinforced human oversight mechanisms.
(c) Biases in automated triage systems affecting religious minorities.
In Canada and the United Kingdom, recent studies found that some AI-assisted triage systems reproduced patterns of indirect discrimination against religious minorities. The models were trained on historical data that underrepresented communities with culturally specific healthcare-seeking behaviours, leading to less accurate risk estimations and, in some cases, lower clinical priority scores than warranted. This had a tangible impact on equity in emergency care.
(d) Predictive models in end-of-life care and tensions with religious beliefs.
In Israel, hospital committees have documented cases in which survival-prediction algorithms used in palliative care generated tensions with religious norms concerning therapeutic proportionality, particularly among Orthodox Jewish and Muslim communities. In certain cases, automated prognostic outputs encouraged early withdrawal of treatment before families had been able to consult religious authorities. These situations prompted hospitals to strengthen human oversight and involve clinical chaplains earlier in the decision-making process.
(e) Natural language processing (NLP) tools misinterpreting religious expressions.
Hospitals in the United Kingdom and the United States have reported incidents in which NLP systems used to analyse clinical notes or patient communications incorrectly classified devotional statements (e.g., “God is speaking to me”, “I entrust everything to God”) as evidence of delirium or irrational thinking. The literal, decontextualised interpretation of such expressions triggered unnecessary mental health alerts, highlighting the need for models trained on culturally contextualised data.
(f) Organ-allocation algorithms and indirect discrimination linked to religious practices.
In India and Singapore, studies have identified issues in organ-allocation algorithms when the datasets used failed to account for religious funerary rituals or mourning periods that influence donor availability in certain communities. Although the algorithms did not intend to discriminate, the omission of these cultural dynamics generated disproportionate outcomes, prompting adjustments to the datasets and the introduction of human review protocols.
These examples demonstrate that the impact of Artificial Intelligence on religious diversity in healthcare is not merely theoretical. It manifests in clinical, organisational, and communicative decisions, underscoring the need for robust governance models, periodic audits, and the explicit recognition of religion as a relevant variable for equity in health; the sketch below illustrates, for the dietary case, how such safeguards might be operationalised.
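To make the dietary case in example (b) concrete, the following sketch shows what a “religious dietary flag” might look like in code: dishes carry tags, patients carry required tags, and the absence of a compliant option escalates to human review instead of silently assigning a non-compliant meal. All dish names and tag labels are invented for illustration.

```python
# Hypothetical hospital menu; each dish is tagged with the dietary
# properties it satisfies.
MENU = [
    {"dish": "chicken stew", "tags": {"halal"}},
    {"dish": "pork schnitzel", "tags": set()},
    {"dish": "vegetable curry", "tags": {"halal", "kosher", "vegetarian"}},
]

def compliant_menu(menu, required_tags):
    """Keep only dishes carrying every dietary tag the patient requires."""
    return [item for item in menu if required_tags <= item["tags"]]

def plan_meals(menu, required_tags):
    options = compliant_menu(menu, required_tags)
    if not options:
        # Escalate rather than silently assigning a non-compliant meal.
        return {"status": "needs_human_review",
                "required": sorted(required_tags)}
    return {"status": "ok", "options": [o["dish"] for o in options]}

print(plan_meals(MENU, {"halal"}))            # two compliant options
print(plan_meals(MENU, {"jain_vegetarian"}))  # no option -> human review
```

As the audits cited above found, the failure mode is rarely the matching logic itself but incomplete or unlabelled datasets; flags of this kind only protect patients if the underlying data are curated with the religious categories actually present in the population served.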

5. Legal and Ethical Governance of Artificial Intelligence in the Field of Health

5.1. International and European Regulatory Frameworks

The development of Artificial Intelligence in the field of health is undergoing intense regulation at a global level. Various international institutions have begun to establish ethical principles and frameworks to ensure responsible, transparent and person-centred use. Delegating decisions to Artificial Intelligence systems can dilute responsibility, creating uncertainty about who is ultimately responsible for the consequences, especially in areas where automated decisions affect significant personal values and complex clinical decision-making. Doctrine highlights the need for clear regulatory and ethical frameworks to prevent responsibility from being dispersed among multiple actors and the moral traceability of decisions from being lost (Durand-Azcárate et al. 2023, pp. 629–41).
At the international level, the World Health Organisation (WHO), as mentioned at the beginning of this study, published its first global report on Artificial Intelligence in health in 2021, in which it defines the six guiding principles for its development set out in the Introduction: human autonomy, well-being, transparency, accountability, equity and sustainability. This document marks a milestone, as it explicitly recognises that Artificial Intelligence can exacerbate pre-existing inequalities, including cultural and religious ones, if adequate oversight mechanisms are not designed.
UNESCO, for its part, approved its Recommendation on the Ethics of Artificial Intelligence in 2021, which was unanimously adopted by its 193 Member States. This international instrument incorporates the principle of cultural and religious diversity as an essential element of the ethical development of Artificial Intelligence. It recognises that algorithmic systems are not neutral and must be designed with respect for local contexts and diverse worldviews, including the religious and spiritual beliefs of individuals.
At the European level, Regulation (EU) 2024/1689, known as the AI Act, constitutes the first comprehensive legal framework for the regulation of Artificial Intelligence: “The European Union is leading the way globally in the legal regulation of artificial intelligence and is developing significant legislative activity, notably the Artificial Intelligence Act” (Morales Santos et al. 2024, pp. 431–46). It classifies Artificial Intelligence systems used in healthcare as “high risk” and requires an impact assessment on fundamental rights, meaningful human oversight and traceability of the data used, thus establishing an operational balance between the technical requirements of algorithmic systems and the institutional guarantees inherent to the rule of law7.
From a doctrinal perspective, some authors have already warned that fundamental rights should serve as a parameter for interpreting the compatibility of Big Data and Artificial Intelligence with the rule of law (Cotino Hueso 2017, pp. 131–50). Along the same lines, there is consensus in the literature on the need for accountability mechanisms in the governance of Artificial Intelligence in healthcare, establishing a liability regime aligned with the specific characteristics of automated healthcare systems (González-García Viñuela 2024, pp. 1–21; Inglada Galiana et al. 2024, pp. 178–86).

5.2. Informed Consent in Contexts Involving Artificial Intelligence: Is Truly Free Consent Possible?

The principle of informed consent is one of the pillars of the right to health and patient autonomy. However, in contexts mediated by artificial intelligence, this principle faces new challenges.
Traditional consent presupposes that the patient understands the risks, benefits, and alternatives of a treatment. But when decisions are based on machine learning algorithms, which operate with complex data structures that are unintelligible even to the developers themselves, the consent process becomes problematic. It could be said that truly free consent requires understanding and moral deliberation, something that is compromised when the logic of artificial intelligence is neither explainable nor accessible to the patient (Domingo Moratalla 2024, pp. 101–2).
At the legal level, the General Data Protection Regulation (Regulation EU 2016/679) requires that all automated processing of personal data, including health data, be subject to meaningful human control and transparency mechanisms. However, in practice, the information provided to patients is often insufficient or too technical, limiting their ability to play a truly deliberative role in the healthcare process (Medinaceli Díaz and Silva Chique 2021, pp. 77–113).
Algorithmic opacity compromises not only patient autonomy but also professional responsibility (Aznar Domingo 2022, pp. 2–15; Muñoz Vela 2022). If doctors do not understand how Artificial Intelligence systems work, they will find it difficult to explain the implications of their use to patients. This gives rise to the need to redefine informed consent as a dynamic and contextual process that includes not only medical information, but also technological and ethical information. The complexity of modern medicine, digitalisation and the integration of technologies such as Artificial Intelligence have shown that traditional informed consent based solely on medical and legal information is insufficient; there is a recognised urgency to incorporate technological aspects (e.g., the functioning and limitations of Artificial Intelligence, data security) and ethical aspects (autonomy, justice, equity) into the consent process (Iserson 2023, pp. 225–30; Chougule et al. 2025, pp. 21–30).
These tensions echo Taylor’s (2001) notion of the “ethics of authenticity,” according to which personal decisions—particularly those involving ethically charged or existential matters—can only be considered free when they are expressed through the individual’s own moral and spiritual frameworks. When automated systems do not allow patients to articulate these moral sources, including religious ones, informed consent risks losing its function as a genuine expression of agency and self-determination.
In a pluralistic and religious context, this problem is exacerbated: the patient’s beliefs may influence their willingness to accept automated treatments, especially if they perceive that the spiritual dimension of medical care is being lost (Alkhouri 2024, pp. 1–27). The introduction of Artificial Intelligence and automated treatments in medical care raises challenges of ethical and spiritual authenticity. Automation can be perceived as a threat to the integrity and depth of the spiritual experience, especially if the patient’s cultural and religious contexts are not respected. Artificial Intelligence must therefore be designed with cultural and ethical sensitivity, ensuring that the information provided allows patients to adequately assess the clinical and evaluative implications of its use. The ethical design of Artificial Intelligence requires the integration of values such as transparency, inclusion, impartiality and respect for human dignity, as well as adaptation to specific cultural and religious norms and values (Alkhouri 2024, pp. 1–27; Papakostas 2025, pp. 1–22; WHO 2021; Lagos 2024). Incorporating data representative of different religious and cultural traditions into the training of Artificial Intelligence helps to reduce the tendency to favour dominant perspectives, such as Western or Protestant ones, and improves the inclusion of minority voices. According to most doctrine, Artificial Intelligence designed with cultural and religious sensitivity can significantly reduce bias in religious contexts, but it requires diverse data, ethical oversight, and constant updating to be truly effective and fair.

5.3. Legal Responsibility and Accountability

The attribution of liability in cases of damage caused by artificial intelligence systems in healthcare continues to be one of the most debated issues in contemporary legal literature. The current regulatory framework, based on fault or product defect, is insufficient to address damage caused by autonomous or semi-autonomous systems (Navarro Mendizábal 2024, pp. 1–15).
As we pointed out earlier in our work, legal doctrine seems to have concluded that liability for medical errors mediated by artificial intelligence is shared and depends on the specific case. We would now add that the most widespread proposal in doctrine is that of strict liability of the manufacturer or operator, inspired by the theory of created risk; this model is based on the idea that whoever introduces an Artificial Intelligence system onto the market must assume the risks arising from its operation, even when there is no human fault (Bellido Diego-Madrazo 2025).
However, Santos Divino (2021, pp. 237–52) introduces an ontological reflection: if Artificial Intelligence evolves towards forms of decision-making autonomy, could it be considered a legal entity? Although this possibility remains speculative, it reveals the urgency of rethinking the classic categories of liability and legal personhood.
From a practical perspective, the governance of Artificial Intelligence in health must be based on transparency, auditing and accountability. The traceability of algorithmic processes and the existence of auditable records are essential elements for defining the obligations of each agent involved in the development and use of algorithmic systems8.
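What traceability and auditable records could mean in operational terms can be sketched as an append-only decision log, in which every algorithmic recommendation is recorded together with the model version, an integrity hash of its inputs and the identity of the human overseer. The structure below is a hypothetical illustration of the idea, not a reference to any existing system or standard.

```python
import hashlib
import json
from datetime import datetime, timezone

# In production this would be an append-only, tamper-evident store.
audit_log = []

def log_decision(model_version, inputs, recommendation, overseer_id, override=None):
    """Append one auditable entry per algorithmic recommendation."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # A hash of the inputs rather than the inputs themselves, so the
        # log stays free of sensitive clinical data while remaining verifiable.
        "inputs_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode("utf-8")
        ).hexdigest(),
        "recommendation": recommendation,
        "human_overseer": overseer_id,
        "human_override": override,  # reason given if the clinician departed
    }
    audit_log.append(entry)
    return entry

log_decision("triage-v2.3", {"age": 61, "symptoms": ["chest pain"]},
             "priority_1", overseer_id="dr-0042",
             override="upgraded after in-person assessment")
print(json.dumps(audit_log[-1], indent=2))
```

A record of this kind makes it possible to reconstruct, after the fact, which version of a system produced a recommendation, who supervised it and whether human judgement departed from it, which is precisely the information the liability analysis above presupposes.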

6. Proposals and Future Prospects

6.1. Towards Ethical, Participatory and Multilevel Governance

The complexity of the impact of Artificial Intelligence on the management of religious diversity requires multilevel governance that combines international, European and national frameworks, together with the participation of social and religious actors (Robles Carrillo 2020, pp. 1–27). The OECD recommends promoting multilateral, multisectoral and interdisciplinary forums for the governance of Artificial Intelligence, which in the health context could take the form of councils or committees in which lawyers, doctors, computer scientists and even religious representatives help to ensure that automated systems operate within deliberative frameworks consistent with contemporary social complexity9. It should be stressed that the inclusion of religious representatives is not an express requirement of the OECD; rather, it is a specialised interpretation suggested by some authors, which we apply here to the religious healthcare sphere to ensure that the plurality of values and beliefs is duly respected. In other words, it is a matter of advocating for the incorporation of religious communities and religious values into the governance and ethical frameworks of Artificial Intelligence (Bor 2022)10: “Religious communities have something to say about humanity, regardless of our beliefs or backgrounds”. These authors propose that Artificial Intelligence governance forums go beyond a purely technical and multisectoral approach to include the direct participation of religious or belief communities, so that technological systems respond to the plurality of values and beliefs and are not dominated by technocratic or secular worldviews; to this end, governance processes should incorporate new or reinforced channels of consultation with religious or belief actors in the definition of technical and ethical criteria (Galassini 2021, pp. 2–31).
In the European context, the creation and regulation of the European Health Data Space (EHDS 2024)11 establishes an innovative framework for healthcare data interoperability and improves patient control over their data (European Commission 2024)12. Although the text mentions fundamental rights, it does not specifically provide for detailed mechanisms to ensure that patients’ religious beliefs or moral convictions are respected when using such data. In this regard, its true success will depend on Member States and healthcare operators adopting additional safeguards to ensure that data management is aligned with the fundamental rights standards applicable in the European area.
Likewise, ethical and digital education for healthcare professionals is essential: legal and ethical training in Artificial Intelligence must be part of medical and healthcare law education, ensuring responsible practice that integrates technology and humanity (Pastor 2024).

6.2. Role of Legal and Healthcare Professionals in Regulation and Enforcement

The interaction between technology, law and religion requires close collaboration between lawyers and healthcare professionals. The former must develop adaptable regulatory frameworks, while the latter must incorporate ethical and spiritual reflection into their daily practice.
The role of the “ethics consultant”, which is widespread in hospitals in the United States and Europe, could be expanded to include the religious dimension in decision-making assisted by artificial intelligence. In this way, complex clinical decisions—such as those relating to organ donation, transfusions, or end-of-life treatments—could be addressed through deliberative frameworks that adequately articulate the clinical, legal, and axiological dimensions of each decision (Velásquez and Ruiz 2023, pp. 84–91).

6.3. Need for Flexible Regulation That Can Be Adapted to Scientific Advances in Line with Religious and Cultural Diversity

The rapid pace of technological development requires regulatory frameworks that are flexible and evolutionary, capable of adapting to scientific advances without compromising the protection of fundamental rights. As pointed out in the Anticipating Artificial Intelligence Reports (Romeo Casabona 2020), the law cannot aspire to regulate all innovations ex ante, but it can establish guiding principles based on human dignity, precaution and proportionality.
Experience shows that overly rigid regulation can inhibit innovation, while an absence of rules creates legal uncertainty. The governance of Artificial Intelligence must therefore rest on common ethical principles (justice, transparency, and consistency between technological advances and the structural principles of European health law) and on mechanisms for continuous evaluation.
In turn, there is broad consensus in the doctrine that, as indicated earlier in this work, Artificial Intelligence must be designed with cultural and ethical sensitivity, ensuring respect for, understanding of and fair representation of religious and cultural diversity. This is essential to avoid exclusion, bias and conflict, and to promote a more just and equitable digital coexistence. Among other recommendations and good practices, we would highlight the following: using representative and multilingual datasets that reflect cultural and religious diversity (Papakostas 2025, pp. 1–22); involving experts, religious communities and minorities in the development and supervision of Artificial Intelligence; establishing sectoral ethical guidelines and continuous auditing mechanisms to detect and correct biases; and promoting critical education on Artificial Intelligence and its impact on diversity and ethics (Papakostas 2025, pp. 1–22; Alkhouri 2024, pp. 1–27). The sketch below illustrates, in simplified form, what such an auditing mechanism might compute.
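As a minimal illustration of the auditing idea, the following Python sketch compares a model’s recommendation rates across patient groups and flags disparities. The audit-log format, the group labels and the 0.8 ratio threshold (a rule of thumb borrowed from adverse-impact testing) are our own illustrative assumptions, not prescriptions found in the cited literature.

```python
# Illustrative sketch of a group-disparity audit for a clinical decision model.
# All names and thresholds are hypothetical assumptions for this example.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group_label, model_recommended: bool) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, recommended in records:
        totals[group] += 1
        positives[group] += int(recommended)
    return {g: positives[g] / totals[g] for g in totals}

def disparity_alerts(rates, ratio_threshold=0.8):
    """Flag groups whose rate falls below a share of the best-served group's rate."""
    best = max(rates.values())
    return [g for g, r in rates.items() if best > 0 and r / best < ratio_threshold]

# Hypothetical audit log: does a triage model recommend a given intervention
# at comparable rates across consent-based, self-declared patient groups?
audit_log = [
    ("group_A", True), ("group_A", True), ("group_A", False),
    ("group_B", True), ("group_B", False), ("group_B", False),
]
rates = selection_rates(audit_log)
print(rates)                    # per-group recommendation rates
print(disparity_alerts(rates))  # groups needing review, here ['group_B']
```

In practice, such a check would run continuously over anonymised decision logs, and any flagged disparity would trigger human review rather than automatic correction, in line with the accountability principles discussed above.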

7. Conclusions

The emergence of Artificial Intelligence in the healthcare sector has radically transformed the way in which the right to health is conceived, managed and guaranteed. Furthermore, we understand that this advance cannot be analysed without considering the identity and value factors that influence clinical interaction.
Religious plurality, far from being an obstacle, is an essential component of contemporary healthcare systems, which must respond to the needs and beliefs of all patients. Artificial Intelligence, if implemented without oversight or transparency, risks amplifying inequalities and eroding fundamental rights, but if designed with ethical and cultural sensitivity, it can become a valuable tool for improving the suitability of clinical interventions to the specific needs of different patient groups.
Consequently, the future governance of artificial intelligence in healthcare must be based on an ethic of respect, responsibility and diversity. This requires the cooperation of legislators, healthcare professionals, technology developers and religious communities, through institutional structures capable of integrating technical, evaluative, and legal analyses into decision-making.
The integration of Artificial Intelligence into healthcare, therefore, requires a broader reflection on the cultural, ethical and legal assumptions that are embedded—or omitted—within processes of automation. The religious dimension, as a legitimate expression of meaningful personal values, constitutes a structural element for understanding how clinical decisions are made and how relationships of trust between patients and professionals are established. Acknowledging this reality does not entail privileging specific convictions, but rather ensuring that algorithmic systems operate within frameworks that respect the diversity of motivations and priorities individuals may hold, thereby preventing technological innovation from generating new forms of inequality or exclusion.
As Berger, Casanova, and Taylor have shown, religion remains a constitutive dimension of contemporary democratic societies, shaping identity, moral reasoning, and public life. Incorporating these perspectives into the analysis of Artificial Intelligence in healthcare reveals that algorithmic systems interact with religious commitments not as peripheral elements but as essential dimensions of clinical decision-making. Understanding this interplay is therefore crucial for developing governance models that respect both technological innovation and the moral and spiritual agency of patients.
Only in this way can a model be consolidated that is capable of combining technological innovation with the regulatory and ethical requirements of democratic and culturally diverse societies.

Author Contributions

Methodology, M.-J.P.-G. and D.C.-S.; investigation, M.-J.P.-G. and D.C.-S.; writing—original draft preparation, M.-J.P.-G. and D.C.-S.; writing—review and editing, M.-J.P.-G. and D.C.-S.; supervision, M.-J.P.-G. and D.C.-S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analysed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
OMS: Organización Mundial de la Salud
TEDH: Tribunal Europeo de Derechos Humanos
FEREDE: Federación de Entidades Religiosas Evangélicas de España
FCIE: Federación de Comunidades Judías
CIE: Comisión Islámica de España
EHDS: European Health Data Space
OCDE: Organización para la Cooperación y el Desarrollo Económico
WHO: World Health Organisation

Notes

1. In Spanish: Organización Mundial de la Salud (OMS). 2021. Report: WHO issues first global report on Artificial Intelligence in health and six guiding principles for its design and use.
2. In Spanish: Federación de Entidades Religiosas Evangélicas de España.
3. In Spanish: Federación de Comunidades Judías.
4. In Spanish: Comisión Islámica de España.
5. UNESCO. 2021. Recommendation on the Ethics of Artificial Intelligence. Paris: UNESCO.
6. Council of Europe. 2023. Recommendation on the Human Rights Impacts of Algorithmic Systems. Strasbourg: Council of Europe.
7. European Union. 2024. Artificial Intelligence Act (Regulation (EU) 2024/1689). Brussels: European Union.
8. European Union. 2024. Artificial Intelligence Act (Regulation (EU) 2024/1689). Brussels: European Union; OECD. 2023. OECD Framework for the Classification of AI Systems. Paris: OECD Publishing.
9. OECD. 2019. Recommendation of the Council on Artificial Intelligence (OECD/LEGAL/0449). Paris: OECD.
10. Bor (2022). What can religion contribute to the AI debate? AI Ethics: Global Perspectives. The GovLab. https://www.aiethicscourse.org/lectures/religion-the-ai-debate (accessed on 10 December 2025). In this module, Bor considers the relationship between religion and AI, and religion’s possible contribution to the ethical AI debate. Using Judaism as a test case, Bor points to seven core values that encapsulate key aspects of humanity: (1) sanctity of the individual; (2) virtue as a foundation for knowledge; (3) oneness and individuality; (4) possibility of transformation; (5) power of emotion; (6) embodiment; and (7) sacred time and space. By applying these values to questions around different innovations, Bor makes the case that religion can offer critical insights into how we develop and employ AI technologies in an ethical and responsible manner.
11. Council of the European Union. 2024. European Health Data Space: Council and Parliament strike deal. 15 March.
12. European Commission. 2024. Commission welcomes political agreement on European Health Data Space. 15 March; Council of the EU.

References

  1. Alkhouri, Khader I. 2024. The Role of Artificial Intelligence in the Study of the Psychology of Religion. Religions 15: 290. [Google Scholar] [CrossRef]
  2. Aznar Domingo, Antonio. 2022. La responsabilidad civil derivada del uso de inteligencia artificial. Revista de Jurisprudencia Lefebvre-El Derecho 41: 2–15. [Google Scholar]
  3. Baena, Antoni, Alicia De Manuel, Miquel Domènech, Harry Farmer, Felip Miralles, and Juliana Ribera. 2023. Inteligencia Artificial en Salud. Retos Éticos y Científicos. Barcelona: Cuadernos de la Fundación Víctor Grifols i Lucas. [Google Scholar]
  4. Baños Díez, Josep E., and Elena Guardiola Pereira. 2024. La convivencia entre la tecnología y el humanismo médico. Medicina Clínica Práctica 7: 100437. [Google Scholar] [CrossRef]
  5. Bellido Diego-Madrazo, Ramón Alfonso. 2025. Aspectos éticos y legales de las nuevas tecnologías en medicina. Revista Ocronos 8: 814. [Google Scholar]
  6. Berger, Peter. 1967. The Sacred Canopy: Elements of a Sociological Theory of Religion. New York: Anchor Books. [Google Scholar]
  7. Beriaín, Iñigo de Miguel. 2019. Medicina personalizada, algoritmos predictivos y utilización de sistemas de decisión automatizados en asistencia sanitaria. Problemas éticos. Dilemata 30: 93–109. [Google Scholar]
  8. Bor, Harris. 2022. What Can Religion Contribute to the AI Debate? AI Ethics: Global Perspectives. The GovLab. Available online: https://www.aiethicscourse.org/lectures/religion-the-ai-debate (accessed on 10 December 2025).
  9. Burchardt, Marian. 2021. Regulating Difference: Religious Diversity and Nationhood in the Secular West. Journal of Religion in Europe 14: 189–92. [Google Scholar] [CrossRef]
  10. Casanova, José. 1994. Public Religions in the Modern World. Chicago: University of Chicago Press. [Google Scholar]
  11. Chougule, Umesh, Kavita Sharma, Shreesh S. Kolekar, Hemchandra V. Nerlekar, and Shailly Gupta. 2025. Informed Consent in Surgery: Legal and Ethical Considerations. Journal of Neonatal Surgery 14: 21–30. [Google Scholar] [CrossRef]
  12. Cotino Hueso, Lorenzo. 2017. Big data e inteligencia artificial. Una aproximación a su tratamiento jurídico desde los derechos fundamentales. Big Data and Artificial Intelligence. An Approach from a Legal Point of View about Fundamental Rights. Dilemata 24: 131–50. [Google Scholar]
  13. Domingo Moratalla, Agustín. 2024. Inteligencia artificial en el ámbito de la salud. Reflexión ética. ILAPHAR|Revista de la OFIL 34: 101–2. [Google Scholar]
  14. Durand-Azcárate, Luis Augusto, Graciela Esther Reyes-Pastor, Susan Cristy Rodríguez-Balcázar, and Ena Cecilia Obando-Peralta. 2023. Políticas educativas en torno al uso de la inteligencia artificial: Debates sobre su viabilidad ética. Cuestiones Políticas 41: 629–41. [Google Scholar] [CrossRef]
  15. Esteva, Andre, Alexandre Robicquet, Bharath Ramsundar, Volodymyr Kuleshov, Mark DePristo, Katherine Chou, Claire Cui, Greg Corrado, Sebastian Thrun, and Jeff Dean. 2019. A guide to deep learning in healthcare. Nature Medicine 25: 24–29. [Google Scholar] [CrossRef]
  16. Floridi, Luciano, and Josh Cowls. 2019. A unified framework of five principles for AI in society. Harvard Data Science Review 3: 535–45. [Google Scholar]
  17. Galassini, Margherita. 2021. Religious or Belief Actors and the European Commission’s White Paper on Artificial Intelligence. Trento: Center for Religious Studies. Fondazione Bruno Kessler, pp. 2–31. [Google Scholar]
  18. González-García Viñuela, María. 2024. Responsabilidad por daños de productos y servicios sanitarios equipados con sistemas de inteligencia artificial. Liability for damages of healthcare products and services equipped with artificial intelligence systems. Revista Bioderecho.es 19: 1–21. [Google Scholar]
  19. Goodman, Bryce, and Seth Flaxman. 2017. European Union regulations on algorithmic decision-making and a “right to explanation”. AI Magazine 38: 50–57. [Google Scholar] [CrossRef]
  20. Inglada Galiana, Luis, Luis Corral Gudino, and José Pablo Miramontes González. 2024. Ética e inteligencia artificial. Revista Clínica Española 224: 178–86. [Google Scholar] [CrossRef]
  21. Iserson, Kenneth V. 2023. Consentimiento informado para inteligencia artificial en medicina de urgencias: Una guía práctica. Informed consent for artificial intelligence in emergency medicine: A practical guide. American Journal of Emergency Medicine 76: 225–30. [Google Scholar] [CrossRef]
  22. Lagos, Anna. 2024. Google Contribuye a la Detección Temprana del Cáncer de Mama con la Inteligencia Artificial. Wired Artículos Salud. Available online: https://es.wired.com/articulos/google-contribuye-a-la-deteccion-temprana-del-cancer-de-mama-con-la-ia (accessed on 14 October 2025).
  23. Medinaceli Díaz, Karina Ingrid, and Moises Martín Silva Chique. 2021. Impacto y regulación de la Inteligencia Artificial en el ámbito sanitario. Artificial Intelligence impact and regulation in the healthcare field. Revista del Instituto de Ciencias Jurídicas de Puebla, México 15: 77–113. [Google Scholar]
  24. Morales Santos, Ángel, Sara Lojo Lendoiro, M. Rovira Canellas, and P. Valdés Solís. 2024. La regulación legal de la inteligencia artificial en la Unión Europea: Guía práctica para radiólogos. Radiología 66: 431–46. [Google Scholar] [CrossRef]
  25. Muñoz Vela, José Manuel. 2022. Inteligencia Artificial y responsabilidad penal. Derecho Digital e Innovación. Digital Law and Innovation Review 11. [Google Scholar]
  26. Navarro Mendizábal, Iñigo. 2024. ¿Quién paga los daños que causa la Inteligencia Artificial? De la ética a la responsabilidad por productos defectuosos. Revista Iberoamericana de Bioética 25: 1–15. [Google Scholar] [CrossRef]
  27. Obermeyer, Ziad, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. 2019. Dissecting racial bias in an algorithm used to manage the health of populations. Science 366: 447–53. [Google Scholar] [CrossRef] [PubMed]
  28. Papakostas, Christos. 2025. Artificial Intelligence in Religious Education: Ethical, Pedagogical and Theological Perspectives. Religions 16: 563. [Google Scholar] [CrossRef]
  29. Pastor, Ana. 2024. Los riesgos de la Inteligencia Artificial en el ámbito de la sanidad requieren una respuesta ética y jurídica. Paper presented at the XXX Congreso Nacional de Derecho Sanitario, Madrid, Spain, November 20–21. [Google Scholar]
  30. Rajpurkar, Pranav, Emma Chen, Oishi Banerjee, and Eric J. Topol. 2022. AI in health and medicine. Nature Medicine 28: 31–38. [Google Scholar] [CrossRef] [PubMed]
  31. Robles Carrillo, Margarita. 2020. La gobernanza de la inteligencia artificial: Contexto y parámetros generales. Revista Electrónica de Estudios Internacionales 39: 1–27. [Google Scholar] [CrossRef]
  32. Romeo Casabona, Carlos. 2020. Informes Anticipando Inteligencia Artificial en Salud: Retos Éticos y Legales. Madrid: Observatorio de Tendencias en la Medicina del Futuro—Fundación Instituto Roche. [Google Scholar]
  33. Santos Divino, Sthéfano Bruno. 2021. Inteligência Artificial como sujeito de direito construção e teorização crítica sobre pessoalidade e subjetivação. Revista de Bioética y Derecho: Publicación del Máster en Bioética y Derecho 52: 237–52. [Google Scholar] [CrossRef]
  34. Shetty, Anudeex, Amin Beheshti, Mark Dras, and Usman Naseem. 2025. VITAL: A New Dataset for Benchmarking Pluralistic Alignment in Healthcare. arXiv arXiv:2502.13775. [Google Scholar] [CrossRef]
  35. Strickland, Eliza. 2019. IBM Watson, heal thyself: How Artificial Intelligence is helping doctors—And failing them. IEEE Spectrum 56: 24–31. [Google Scholar] [CrossRef]
  36. Taylor, Charles. 2001. Sources of the Self: The Making of the Modern Identity. Cambridge, MA: Harvard University Press. [Google Scholar]
  37. Taylor, Charles. 2007. A Secular Age. Cambridge, MA: Harvard University Press. [Google Scholar]
  38. Topol, Eric. 2019. Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. New York: Basic Books. [Google Scholar]
  39. Velásquez, Juan D., and Rocío B. Ruiz. 2023. Inteligencia artificial al servicio de la salud del futuro. Artificial intelligence at the service of the health of the future. Revista Médica Clínica Las Condes 34: 84–91. [Google Scholar]
  40. Verdú González, Irene. 2020. A la búsqueda del médico bueno. Los conflictos de intereses en las relaciones con la industria farmacéutica. Bioderecho.es: Revista del Centro de Estudios en Bioderecho, Ética y Salud 11: 1–28. [Google Scholar] [CrossRef]
  41. Zapatero Ferrandiz, Ana, Gerard Colomar Pueyo, Irene Dot Jordana, and José Felipe Solsona Durán. 2017. El consultor de ética. Medicina Clínica 149: 549–51. [Google Scholar] [CrossRef]
  42. Zhang, Jie, and Zong-ming Zhang. 2023. Ethics and governance of trustworthy medical artificial intelligence. BMC Medical Informatics and Decision Making 23: 7. [Google Scholar] [CrossRef]
  43. Zito, Emiliano. 2025. La Inteligencia Artificial aplicada a la salud: Nuevos desafíos jurídicos. Artificial Intelligence applied to health: New legal challenges. Revista Derecho y Salud 10: 133–46. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
