1. Introduction
Religious plurality is a structural reality in contemporary societies, especially in advanced democratic systems, where globalisation, migration and technological expansion have multiplied the settings in which different worldviews coexist. The health sector is undoubtedly one of the areas where this diversity is most visible and relevant, given that it is where the fundamental values of the individual (life, health, dignity and freedom of conscience) intersect with the institutional organisation of public services and with scientific advances. In this context, healthcare systems are forced to manage a constant tension between the institutional neutrality of social and democratic states governed by the rule of law and the effective recognition of religious diversity, in order to guarantee both the freedom of conscience of professionals and the religious freedom of patients.
Added to this complexity is a new transformative vector: Artificial Intelligence. Its emergence in the field of health has brought about a paradigm shift in the way illness is diagnosed and care is managed and personalised. Algorithmic tools capable of analysing large volumes of clinical data make it possible to improve the efficiency of the system and optimise treatments, but at the same time raise questions of enormous ethical and legal significance (Baena et al. 2023; Domingo Moratalla 2024; Velásquez and Ruiz 2023).
The World Health Organisation (WHO)1, in its first global report on Artificial Intelligence in health, identified six fundamental principles that should guide its use: human autonomy, well-being, transparency, accountability, equity and sustainability. These principles aim to ensure that technology remains at the service of people and not the other way around, avoiding the reproduction of biases or inequalities. However, incorporating these principles requires mechanisms capable of adequately integrating the cultural and spiritual determinants that shape clinical decisions: can algorithms respect the diversity of moral and religious beliefs? What happens when automated decisions affect sensitive bioethical issues such as blood transfusions, fertility treatments or end-of-life decisions?
The development of Artificial Intelligence in healthcare has advanced at a dizzying pace. From pioneering projects such as IBM Watson Health, which combined clinical diagnosis and predictive analysis (Strickland 2019, pp. 24–31), to recent innovations such as the use of Google algorithms for the early detection of breast cancer (Lagos 2024), AI-based medicine has become an everyday reality. However, these advances imply the need for an ethical and legal framework that allows us to respond to new dilemmas of responsibility, privacy and algorithmic justice: “The risks of AI in healthcare require an ethical and legal response” (Pastor 2024).
From a legal perspective, some authors have emphasised that the incorporation of artificial intelligence into the healthcare sector requires a reinterpretation of traditional categories such as civil liability, professional autonomy and informed consent (Cotino Hueso 2017; Aznar Domingo 2022). In turn, other authors have stressed the need to develop governance models that integrate biomedical ethics and technology law in order to address the challenges arising from automated decision-making (Baena et al. 2023; Romeo Casabona 2020).
The management of religious diversity is particularly affected by this technological context. Religious beliefs can influence both the acceptance of certain treatments and the definition of the very notion of health and well-being. Similarly, healthcare professionals may face conflicts of conscience arising from the use of technologies whose operating logic is beyond direct human control. Consequently, artificial intelligence is not only transforming medicine, but also the conditions under which freedom of conscience, the right to health and the moral autonomy of individuals are exercised (Zito 2025, pp. 133–46; Bellido Diego-Madrazo 2025).
To illustrate the relevance of this phenomenon, it is useful to consider several examples that are already emerging in clinical practice. Algorithmic decision-support systems may recommend treatments that conflict with patients’ religious beliefs—such as blood transfusions for Jehovah’s Witnesses—without being programmed to identify or respect such objections. In other cases, algorithms used to allocate resources or prioritise waiting lists can reproduce structural biases against religious minorities if the historical datasets used for training fail to incorporate cultural or spiritual variables. Automated decision-making at the end of life, based on predictions of survival or quality of life, may also come into tension with religious convictions regarding the dignity of dying, palliative sedation, or the use of extraordinary measures. These examples demonstrate that the issue is not abstract: it already manifests itself in concrete clinical and organisational situations that call for urgent ethical and legal responses.
The fundamental objective of the research is to analyse, from a broad perspective, how artificial intelligence is transforming the relationships between religious freedom, the right to health, and clinical practice, considering both its potential and the risks it poses. The proposed methodological approach is based on a qualitative and interdisciplinary design, which we believe is appropriate for addressing the complexity of the subject matter. We initially posed three research questions: how artificial intelligence can transform the management of religious diversity in the healthcare setting; what tensions are emerging between automated decision-making, on the one hand, and patient autonomy and religious beliefs, on the other; and what legal instruments are needed to ensure adequate governance of religious diversity in this context.
Methodologically, a non-experimental design has been chosen, focusing on documentary and conceptual analysis, with an exploratory-analytical scope, which aims to offer an integrated understanding of the phenomenon and to provide regulatory and ethical tools for its governance. First, the normative and institutional foundations that recognise and protect religious diversity in healthcare are examined; next, the advances brought about by artificial intelligence in this field are analysed, together with the ethical dilemmas they raise. Finally, some criteria are proposed for an ethical and inclusive governance that ensures respect for human rights and religious freedom. These three axes of analysis place the discussion in a broad framework, combining legal sources, scientific literature, and the most recent regulations.
The ultimate goal is to contribute to a critical understanding of Artificial Intelligence in healthcare, highlighting how its design and governance condition the way in which patients’ autonomy, dignity, and moral integrity are protected.
In this context, the central question guiding this article is how Artificial Intelligence can be integrated into healthcare systems without overlooking the plurality of values, beliefs and worldviews that shape the patient’s clinical experience. The religious dimension—understood broadly as a meaningful component of personal identity and of clinical decision-making—should not be regarded as an accessory element, but rather as a factor that directly influences autonomy, risk perception, treatment acceptance and the interaction between patients and healthcare professionals. For this reason, the article examines systematically the challenges that arise when automated models fail to incorporate these variables adequately and proposes theoretical and normative criteria to articulate a governance framework for AI that remains coherent with the principles of the right to health in democratic and culturally diverse societies.
2. Religious Diversity in Healthcare: Fundamentals and Current Challenges
2.1. Legal Recognition of Religious Freedom in the Healthcare Context
The right to freedom of religion and conscience is recognised as an essential pillar of the international legal order. From the Universal Declaration of Human Rights (Art. 18) and the International Covenant on Civil and Political Rights (Art. 18) to regional instruments, such as the European Convention on Human Rights (Art. 9) and the Charter of Fundamental Rights of the European Union (Art. 10), religious freedom encompasses not only the internal dimension of belief, but also its public manifestation, including practices and observances related to health.
Classic contributions from the sociology of religion have long emphasised that religious plurality is not an anomaly but a structural feature of contemporary societies. Berger (1967) famously described how modernity produces a “religious market” in which multiple systems of meaning coexist and claim legitimacy, compelling institutions—including healthcare systems—to adapt their frameworks of action. Likewise, Casanova (1994) argued that religion has not withdrawn from the public sphere but has rather reconfigured its presence, shaping public policies, personal identity, and welfare systems. Taylor (2007) further highlighted that religious and moral convictions constitute “moral sources” through which individuals articulate their identity and make ethically grounded decisions, including those related to health and illness. These theoretical insights help explain why religious commitments continue to play a significant role in clinical contexts and why healthcare systems must provide responses that are attentive to such diversity.
In the health sector, this right translates into an obligation on the part of States and health systems to guarantee an environment in which medical decisions can be made without coercion and in accordance with the patient’s personal convictions. This recognition also implies a duty to respect the conscientious objection of health personnel, provided that it is in harmony with the right of patients to receive adequate care without discrimination.
In the European context, the European Court of Human Rights (ECtHR) has developed significant case law on the balance between freedom of conscience, public health and patient rights, emphasising that States enjoy a margin of appreciation but must ensure proportionality in any limitation of religious freedom.
In Spain, Article 16 of the Constitution recognises freedom of religion and worship, and Organic Law 7/1980 on Religious Freedom establishes the basis for cooperation between the State and religious denominations. In the health sector, the application of this principle takes the form of cooperation agreements (those signed with the Catholic Church, the Federation of Evangelical Religious Entities of Spain—FEREDE2, the Federation of Jewish Communities—FCIE3 and the Islamic Commission of Spain—CIE4), which address, among other things, issues such as religious care in public hospitals, thus incorporating criteria that allow healthcare practices to be adapted to personal convictions that are relevant in clinical and organisational terms.
In a comparative perspective, healthcare systems have developed different approaches to integrating religious diversity into clinical practice. In countries such as the United Kingdom and Canada, robust frameworks of cultural competence and faith-sensitive care have been implemented. These models recognise religion as a social determinant of health and promote institutional protocols for cultural adaptation, spiritual support, and respect for patients’ religious preferences. In the United States, many hospitals—especially those linked to university networks and bioethics centres—have established chaplaincy services, interdisciplinary ethics committees, and specific guidelines for addressing conflicts between medical treatment and religious convictions, particularly in advance care planning, organ donation, or decisions at the end of life.
In continental Europe, several noteworthy models also exist. France maintains an approach of strict institutional neutrality inspired by laïcité, which results in a clear separation between healthcare organisations and religious expression, while still ensuring access to spiritual care in public hospitals. Germany adopts a cooperative model grounded in the constitutional recognition of religious communities, reflected in hospital protocols that guarantee pastoral assistance and allow adaptations of certain medical procedures when justified by the patient’s beliefs. In the Netherlands, the healthcare system includes specialised units on cultural and religious diversity within university hospitals, as well as comprehensive guidelines for intercultural communication and moral-pluralistic decision-making.
Beyond the Western context, other jurisdictions have developed significant practices. In Singapore and Malaysia, multicultural models incorporate religious liaison officers who act as mediators when religious beliefs substantially influence medical decisions. In Israel, a highly technologised healthcare system includes experts in halakha and mixed committees that examine the compatibility between medical treatments, end-of-life decisions, and Jewish or Muslim religious prescriptions. In India, marked by extensive religious plurality, various states have adopted protocols for managing issues such as funeral practices, refusal of blood products, dietary restrictions, and gender- or privacy-related concerns in clinical settings.
As noted above, Spain offers a relevant example within the European framework: Article 16 of the Spanish Constitution protects freedom of religion, and Organic Law 7/1980 on Religious Freedom establishes the basis for cooperation with religious denominations, resulting in specific agreements concerning spiritual assistance in hospitals, dietary adaptation, and certain medical practices. Nevertheless, the present study does not focus on any single jurisdiction. Rather, it adopts a global and comparative perspective, drawing on different models to provide a comprehensive and cross-national analysis of the impact of Artificial Intelligence on the management of religious diversity in healthcare.
2.2. Practical Challenges in Managing Religious Diversity in Hospitals and Health Centres
The management of religious diversity in healthcare faces multiple organisational, ethical and legal challenges. Among the most significant are: conscientious objection by healthcare personnel; access to treatments in accordance with the patient’s beliefs; and the presence and provision of spiritual care in hospitals.
With regard to conscientious objection, the legal debate revolves around the tension between the individual freedom of the professional and the guarantee of universal healthcare. In cases such as abortion or euthanasia, comparative jurisprudence has established that the exercise of objection must be personal, motivated and compatible with the institutional duty of service (Romeo Casabona 2020). With the incorporation of Artificial Intelligence, new questions arise: can a healthcare professional object to the use of an automated system in a clinical decision that contravenes their convictions? Can a machine execute a decision that ignores the conscientious objection of the responsible doctor?
On the other hand, respect for patients’ beliefs raises the need to adapt medical and hospital protocols. Common examples include Jehovah’s Witnesses’ refusal of blood transfusions or the refusal to receive certain treatments derived from animal products. In multicultural contexts, these decisions can involve complex bioethical dilemmas between patient autonomy and the duty to preserve life (Domingo Moratalla 2024).
Finally, spiritual care is an essential component of overall well-being. The WHO and various bioethics associations maintain that religious or spiritual support contributes to the healing process and the humanisation of medicine. However, increasing organisational automation is forcing us to rethink how we preserve spaces for personal support in environments where technological intermediation is becoming increasingly prevalent.
2.3. Tensions Between the Right to Health and Religious Freedom
It is clear that the coexistence of the right to health and religious freedom is not always harmonious. The COVID-19 pandemic highlighted the difficulties of reconciling mandatory health measures with religious practices, showing that public health can conflict with freedom of worship. In everyday clinical practice, this tension is reflected in decisions that affect deep moral and religious convictions.
Artificial Intelligence introduces an additional layer of complexity: if clinical decisions are based on algorithms, how can we ensure that the patient’s religious preferences are respected in the design and operation of these systems? Some authors have warned that Artificial Intelligence systems trained without cultural or religious diversity criteria may reproduce biases that harm minorities or groups with different practices (Baena et al. 2023). Similarly, other authors argue that Artificial Intelligence, lacking moral subjectivity, can generate “diffuse responsibility” in decision-making with spiritual or ethical impact (Santos Divino 2021, pp. 237–52; Zito 2025, pp. 133–46). Because it lacks moral subjectivity, Artificial Intelligence cannot discern between good and evil; it acts amorally, depending on the human values and decisions of those who programme or supervise it (Durand-Azcárate et al. 2023): “Although it can simulate human behaviour, it does not possess the volitional capacity or ethical reflection characteristic of human beings, which limits its performance in ethical or spiritual dilemmas”; “The absence of moral subjectivity in AI can erode ethics in decision-making, influence human autonomy and modify the construction of identities and values, which requires constant human supervision and the application of ethical principles and responsibility in the design and use of these systems” (Durand-Azcárate et al. 2023, pp. 629–41).
For all these reasons, managing religious diversity in healthcare requires the articulation of ethical and legal governance mechanisms that balance patient autonomy, professional freedom of conscience, and the principles of justice and efficiency that govern modern healthcare systems. The following sections examine how these tensions are reconfigured by the introduction of automated systems and what regulatory implications arise from this.
4. Impact of Artificial Intelligence on the Management of Religious Diversity
4.1. Algorithms and Religious Pluralism: Neutrality or Automated Discrimination?
The issue of algorithmic neutrality versus religious pluralism is one of the most pressing challenges of the 21st century. Although artificial intelligence is presented as an objective technology, its design and training reflect the values and biases of those who programme it (Cotino Hueso 2017, pp. 131–50). This has direct implications for the management of religious diversity, especially when algorithms are applied in treatment prioritisation systems, resource allocation or personalised clinical recommendations.
Several studies have shown that Artificial Intelligence models can reproduce dominant cultural patterns and marginalise religious or ethnic minorities. For example, hospital diet recommendation systems may ignore religious dietary restrictions—such as halal, kosher, or strict vegetarian diets—if these are not correctly parameterised in the database (WHO 2021). Consequently, the design of medical Artificial Intelligence must incorporate an approach that is sensitive to faith and plurality of values to ensure equity, trust and respect for diversity in healthcare (Shetty et al. 2025). In this regard, ignoring the diversity of values, religious beliefs and worldviews can lead to bias, discrimination and loss of patient autonomy, negatively affecting the trustworthiness and effectiveness of medical Artificial Intelligence: the active participation of diverse communities and ongoing ethical evaluation are recommended to adapt Artificial Intelligence to plural contexts (Zhang and Zhang 2023). The key question is to determine which technical and regulatory criteria allow this sensitivity to be translated into verifiable computational models that are compatible with human rights principles.
This need to integrate cultural and religious differences into healthcare governance aligns with Burchardt’s analysis of how contemporary regimes of diversity are shaped through biopolitical instruments such as data infrastructures, technological systems, and administrative procedures (Burchardt 2021). From this perspective, Artificial Intelligence is not merely a technical tool but a new regulatory space in which pluralism is negotiated. Depending on how algorithms are designed and what values inform their models, these systems may reinforce dynamics of recognition and inclusion or contribute to the marginalisation of minority religious sensibilities.
In terms of human rights, UNESCO has insisted that cultural and religious diversity must be considered a cross-cutting principle in the ethical design of Artificial Intelligence. Similarly, the Council of Europe (2023), in its Recommendation on the Human Rights Impacts of Algorithmic Systems, stresses that States must ensure democratic oversight of AI systems to prevent forms of structural discrimination6.
A clearer distinction between religious-freedom-compliant uses of Artificial Intelligence in healthcare and those that disregard it can be illustrated through contrasting examples. A system may be considered respectful of religious freedom when it incorporates patients’ religious preferences into their electronic health records—such as objections to receiving blood transfusions, the need for halal or kosher diets, or restrictions related to end-of-life practices—and adapts its clinical recommendations accordingly. When adaptation is not possible, ethically aligned systems alert clinicians to potential conflicts so that alternative protocols can be activated, thereby safeguarding patient autonomy.
By contrast, a system that ignores religious freedom operates without integrating these variables or without enabling meaningful human oversight in cases of conflict. For instance, surgical prioritisation algorithms trained on biased historical data may systematically disadvantage religious minorities with culturally distinct health-seeking behaviours. Likewise, automated decision-support tools that recommend procedures incompatible with a patient’s religious convictions—such as transfusions or specific biomedical interventions—without notifying clinicians exemplify uses that undermine religious freedom and generate indirect discrimination. The contrast between these two models demonstrates how the presence or absence of religious sensitivity in AI systems can have a substantial impact on the protection of fundamental rights in healthcare.
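The conflict-alert mechanism described above can be illustrated with a minimal, rule-based sketch. Everything in it is an assumption for illustration: the record fields (such as `religious_restrictions`), the recommendation codes, and the conflict rules are hypothetical and do not correspond to any real EHR standard or deployed system.

```python
from dataclasses import dataclass, field

@dataclass
class PatientRecord:
    """Hypothetical EHR fragment; 'religious_restrictions' is an illustrative field."""
    patient_id: str
    religious_restrictions: set = field(default_factory=set)  # e.g. {"no_blood_products"}

# Illustrative mapping from recommendation codes to the restrictions they would violate.
CONFLICT_RULES = {
    "blood_transfusion": {"no_blood_products"},
    "porcine_heart_valve": {"no_porcine_products"},
}

def check_recommendation(record: PatientRecord, recommendation: str) -> dict:
    """Raise an alert for clinician review instead of silently forwarding a
    recommendation that conflicts with recorded religious restrictions."""
    conflicts = CONFLICT_RULES.get(recommendation, set()) & record.religious_restrictions
    if conflicts:
        return {
            "status": "alert",  # clinician must review alternative protocols
            "recommendation": recommendation,
            "conflicting_restrictions": sorted(conflicts),
        }
    return {"status": "ok", "recommendation": recommendation}

patient = PatientRecord("p-001", {"no_blood_products"})
print(check_recommendation(patient, "blood_transfusion")["status"])    # alert
print(check_recommendation(patient, "iron_supplementation")["status"])  # ok
```

The design choice the sketch makes explicit is the one the text demands: the system never suppresses or overrides the clinical recommendation on its own; it surfaces the conflict so that human oversight and alternative protocols can be activated.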
4.2. Personalised Treatments and Respect for the Patient’s Beliefs
Continuing along the same lines, Artificial Intelligence promises more personalised medicine, based on each patient’s genomic, historical and behavioural data. Crucially, this personalisation must also include the ethical and religious dimension of the individual. In many contexts, beliefs determine the acceptance or rejection of certain treatments, the way illness is confronted, and end-of-life decisions.
Artificial Intelligence systems should be integrated with decision-making models that respect the patient’s religious preferences, provided that these are adequately recorded in their digital medical records. This requires a delicate balance between personal autonomy and the institutional neutrality of the healthcare system (Romeo Casabona 2020). The central issue is to define decision-making frameworks in which algorithmic personalisation does not replace the patient’s capacity for deliberation or the clinical dialogue necessary for complex decisions.
In this regard, models of ethically aware algorithms have been proposed, capable of incorporating moral variables into their operation (Floridi and Cowls 2019). These models could be adapted to religious contexts, provided they are designed with transparency and under interdisciplinary ethical supervision.
4.3. Artificial Intelligence and Freedom of Conscience for Healthcare Personnel
The implementation of Artificial Intelligence also has an impact on the freedom of conscience of healthcare professionals. If an algorithm determines a course of action that the doctor considers incompatible with their ethical or religious convictions, how is this discrepancy managed? Can a professional refuse to follow the recommendation of an automated system without incurring disciplinary responsibility?
In situations of this kind, where the management of the discrepancy is based on ethical and legal principles and professional autonomy, the professional may refuse to follow the automated recommendation if they base their decision on ethical or religious grounds, especially if there are institutional mechanisms in place to channel these conflicts. Healthcare ethics committees and ethics consultants are key resources for resolving conflicts between automated recommendations and the personal convictions of professionals. However, only a small fraction of these conflicts formally reaches the committees, suggesting the need to strengthen these channels and the culture of ethical consultation in clinical practice (Zapatero Ferrandiz et al. 2017, pp. 549–51). Most doctrine emphasises that the final decision must rest with the healthcare professional, who assumes clinical and ethical responsibility for their actions, even when faced with automated systems (Beriaín 2019, pp. 93–109). An obligation to blindly follow an algorithm’s recommendation cannot be imposed, especially if there are ethical, legal or safety concerns; protection from disciplinary liability depends on the existence of clear protocols and the use of established ethical channels (Beriaín 2019, pp. 93–109). In short, the prevailing view in legal doctrine is that doctors may refuse to follow the recommendation of an automated system on ethical or religious grounds, provided that they justify their decision and use the institutional mechanisms for ethical consultation. Professional autonomy and ethical deliberation are essential to avoid disciplinary liability in such cases.
Along these lines, legal doctrine is beginning to raise the possibility of recognising a “technological conscientious objection”, understood as the right of professionals not to delegate decisions that affect fundamental values to a machine. Some authors have argued that respect for the physician’s conscience is part of professional dignity and cannot be overridden by automation: “Respect for the physician’s conscience is considered an essential pillar of professional dignity. Medical excellence requires a balance between scientific knowledge and ethical values, where the physician’s independence and introspective judgement are fundamental to social trust and the quality of medical practice. The doctor-patient relationship is based on trust and the doctor’s fiduciary responsibility, which means that any action that disconnects the professional from these principles violates the ethos of the profession”; “Professional excellence requires that the doctor maintain his or her capacity for ethical judgement even in the face of external pressures, including those arising from automation” (Verdú González 2020, pp. 1–28); “Automation and technology can improve medical practice, but they should not replace the principles of medical humanism or professional autonomy. Technology must be implemented in a way that respects the traditional values of medical humanism, allowing for an improved model of doctor-patient relationship only if it is compatible with these principles” (Baños Díez and Guardiola Pereira 2024, pp. 1–5). However, they also warn that this right must be exercised in a manner that is proportionate and compatible with the patient’s interests.
The future of medical ethics will depend, to a large extent, on the ability to articulate a coexistence between artificial intelligence and moral intelligence. Technology can assist clinical judgement, but never replace it. The integration of the religious and spiritual dimension into this process constitutes a normative challenge, but also an opportunity to rethink the person-centred model of medicine.
Ultimately, respect for the physician’s conscience is inseparable from professional dignity and should not be overridden by automation. In other words, any automation process must be part of a deliberative structure that preserves clinical judgement and the ethical standards of the healthcare profession. In recent decades, the doctor-patient relationship has evolved from a bilateral one to a more complex relationship where organisation and technology play a decisive role in clinical decision-making. This has reduced the traditional freedom of action of the physician, as decisions are now conditioned by institutional policies, patient autonomy and process automation: “Advances in technology and automation in medicine have raised concerns about dehumanisation and the possible loss of professional autonomy. However, it is recognised that technology can improve medical practice if it is implemented in accordance with the principles of medical humanism. The key is to reconcile automation with the preservation of ethics and individual clinical judgement” (Baños Díez and Guardiola Pereira 2024, pp. 1–5). The professional autonomy of physicians is being transformed and, to a certain extent, limited by automation and organisational influence. Nevertheless, it remains essential to preserve ethical judgement and medical humanism in clinical practice.
4.4. Empirical Examples of the Impact of Artificial Intelligence on Religious Diversity
Although many of the challenges described have a structural dimension, there are already concrete examples that illustrate how Artificial Intelligence can directly affect religious freedom and the management of pluralism in clinical settings.
- (a) Algorithmic recommendations incompatible with religious beliefs (blood transfusions).
In several hospitals in the United States, bioethics committees have reported conflicts arising from algorithmic decision-support systems that recommended urgent blood transfusions for Jehovah’s Witness patients. Because the systems did not integrate religious objections into the electronic health record or into the model’s parameters, they generated automatic alerts that prompted clinicians to intervene quickly, sometimes without activating existing alternative protocols (such as bloodless surgery techniques or volume expanders). These cases led institutions to revise their algorithms to incorporate variables related to religious sensitivity.
- (b) Automated dietary planning systems that omit religious restrictions.
In the Netherlands and Germany, internal audits in university hospitals revealed that AI-based dietary planning tools failed to identify religious dietary requirements such as halal, kosher or Hindu vegetarian restrictions. Automated menu assignments triggered frequent complaints and exposed the fact that the underlying datasets were incomplete or insufficiently labelled. As a result, several hospitals introduced specific “religious dietary flags” and reinforced human oversight mechanisms.
- (c) Biases in automated triage systems affecting religious minorities.
In Canada and the United Kingdom, recent studies found that some AI-assisted triage systems reproduced patterns of indirect discrimination against religious minorities. The models were trained on historical data that underrepresented communities with culturally specific healthcare-seeking behaviours, leading to less accurate risk estimations and, in some cases, lower clinical priority scores than warranted. This had a tangible impact on equity in emergency care.
- (d)
Predictive models in end-of-life care and tensions with religious beliefs.
In Israel, hospital committees have documented cases in which survival-prediction algorithms used in palliative care generated tensions with religious norms concerning therapeutic proportionality, particularly among Orthodox Jewish and Muslim communities. In certain cases, automated prognostic outputs encouraged early withdrawal of treatment before families had been able to consult religious authorities. These situations prompted hospitals to strengthen human oversight and involve clinical chaplains earlier in the decision-making process.
- (e)
Natural language processing (NLP) tools misinterpreting religious expressions.
Hospitals in the United Kingdom and the United States have reported incidents in which NLP systems used to analyse clinical notes or patient communications incorrectly classified devotional statements (e.g., “God is speaking to me”, “I entrust everything to God”) as evidence of delirium or irrational thinking. The literal, decontextualised interpretation of such expressions triggered unnecessary mental health alerts, highlighting the need for models trained on culturally contextualised data.
- (f)
Organ-allocation algorithms and indirect discrimination linked to religious practices.
In India and Singapore, studies have identified issues in organ-allocation algorithms when the datasets used failed to account for religious funerary rituals or mourning periods that influence donor availability in certain communities. Although the algorithms did not intend to discriminate, the omission of these cultural dynamics generated disproportionate outcomes, prompting adjustments to the datasets and the introduction of human review protocols.
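Several of the remedies described in the examples above (religious-objection variables in decision support, “religious dietary flags”, human review of automated alerts) share a common pattern: recording a patient’s declared religious preferences as structured data and checking them before an automated recommendation is acted upon. The following is a minimal, purely illustrative sketch of that pattern; all names (`ReligiousPreference`, `screen`, the menu identifiers) are hypothetical and do not correspond to any hospital’s actual system.

```python
from dataclasses import dataclass, field

@dataclass
class ReligiousPreference:
    """Hypothetical structured record of a patient's declared preferences,
    as might be drawn from an electronic health record."""
    refuses_blood_products: bool = False
    dietary_rules: set = field(default_factory=set)   # e.g. {"halal"}, {"kosher"}

@dataclass
class Recommendation:
    action: str                       # e.g. "blood_transfusion", "menu:pork_stew"
    requires_human_review: bool = False
    notes: list = field(default_factory=list)

# Illustrative mapping from menu items to the dietary rules they would violate.
MENU_CONFLICTS = {
    "menu:pork_stew": {"halal", "kosher"},
    "menu:beef_curry": {"hindu_vegetarian"},
}

def screen(rec: Recommendation, pref: ReligiousPreference) -> Recommendation:
    """Flag recommendations that conflict with declared religious preferences,
    routing them to human review instead of letting them execute automatically."""
    if rec.action == "blood_transfusion" and pref.refuses_blood_products:
        rec.requires_human_review = True
        rec.notes.append("Patient refuses blood products; "
                         "consider bloodless-surgery protocol.")
    violated = MENU_CONFLICTS.get(rec.action, set()) & pref.dietary_rules
    if violated:
        rec.requires_human_review = True
        rec.notes.append(f"Menu conflicts with dietary rules: {sorted(violated)}")
    return rec
```

The design choice the sketch illustrates is the one the reported cases converged on: the system does not override clinical judgement, it withholds automatic execution and hands the decision back to a human when a declared preference is implicated.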
These examples demonstrate that the impact of Artificial Intelligence on religious diversity in healthcare is not merely theoretical. It manifests in clinical, organisational, and communicative decisions, underscoring the need for robust governance models, periodic audits, and the explicit recognition of religion as a relevant variable for equity in health.
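At the technical level, the periodic audits referred to above can take the form of subgroup performance checks: comparing a model’s error rates across religious or cultural groups to detect the kind of indirect discrimination described in the triage example. The following is a minimal illustrative sketch under assumed inputs; the group labels, record format and tolerance threshold are hypothetical, not drawn from any actual audit.

```python
from collections import defaultdict

def subgroup_error_rates(records):
    """records: iterable of (group, predicted_high_risk, actually_high_risk).
    Returns each group's false-negative rate: the share of genuinely
    high-risk patients the triage model failed to prioritise."""
    missed = defaultdict(int)
    positives = defaultdict(int)
    for group, predicted, actual in records:
        if actual:
            positives[group] += 1
            if not predicted:
                missed[group] += 1
    return {g: missed[g] / positives[g] for g in positives}

def audit(records, tolerance=0.05):
    """Flag groups whose false-negative rate exceeds the best-served
    group's rate by more than `tolerance` (an arbitrary illustrative gap)."""
    rates = subgroup_error_rates(records)
    best = min(rates.values())
    return {g: r for g, r in rates.items() if r - best > tolerance}
```

A flagged group would then trigger exactly the organisational responses the examples describe: retraining on more representative data, dataset relabelling, or mandatory human review for the affected population.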
7. Conclusions
The emergence of Artificial Intelligence in the healthcare sector has radically transformed the way in which the right to health is conceived, managed and guaranteed. Yet this advance cannot be analysed without considering the identity and value factors that shape clinical interaction.
Religious plurality, far from being an obstacle, is an essential component of contemporary healthcare systems, which must respond to the needs and beliefs of all patients. If implemented without oversight or transparency, Artificial Intelligence risks amplifying inequalities and eroding fundamental rights; if designed with ethical and cultural sensitivity, however, it can become a valuable tool for tailoring clinical interventions to the specific needs of different patient groups.
Consequently, the future governance of Artificial Intelligence in healthcare must be based on an ethic of respect, responsibility and diversity. This requires the cooperation of legislators, healthcare professionals, technology developers and religious communities, through institutional structures capable of integrating technical, evaluative and legal analyses into decision-making.
The integration of Artificial Intelligence into healthcare, therefore, requires a broader reflection on the cultural, ethical and legal assumptions that are embedded—or omitted—within processes of automation. The religious dimension, as a legitimate expression of meaningful personal values, constitutes a structural element for understanding how clinical decisions are made and how relationships of trust between patients and professionals are established. Acknowledging this reality does not entail privileging specific convictions, but rather ensuring that algorithmic systems operate within frameworks that respect the diversity of motivations and priorities individuals may hold, thereby preventing technological innovation from generating new forms of inequality or exclusion.
As Berger, Casanova, and Taylor have shown, religion remains a constitutive dimension of contemporary democratic societies, shaping identity, moral reasoning, and public life. Incorporating these perspectives into the analysis of Artificial Intelligence in healthcare reveals that algorithmic systems interact with religious commitments not as peripheral elements but as essential dimensions of clinical decision-making. Understanding this interplay is therefore crucial for developing governance models that respect both technological innovation and the moral and spiritual agency of patients.
Only in this way can we consolidate a model capable of combining technological innovation with the regulatory and ethical requirements of democratic and culturally diverse societies.