This part constitutes the core of this research, focusing on the challenges that a generative AI system may encounter when applied to the process of Shi’i ijtihad. The following text enumerates and examines these challenges in detail. Each challenge is scrutinized in terms of whether it arises solely when AI is used independently, when it is used as an assistant, or in both scenarios. Additionally, potential solutions to overcome these challenges are proposed after a study of how each aspect could impede the successful integration of AI into the ijtihad process. An essential point to bear in mind when engaging with this part is the inherent interconnectedness of the discussed challenges, which leads to their mutual influence, interdependence and, in some cases, overlapping implications.
2.1. Accessibility
The advent of the Internet stands as one of the most consequential technological developments of the last century, and it may prove to be a turning point in the whole history of human life. It has transformed almost every aspect of our lives and paved the way for numerous other inventions. One of its most notable benefits is easy access to content on the web, at any time and from anywhere on the globe, provided an internet connection is available. This accessibility applies to most AI projects as well, entailing advantages and challenges similar to those the Internet offers. Furthermore, even digital projects not available online still provide far more accessibility than traditional methods of finding and analyzing data.
The accessibility of AI projects pertaining to religion can be examined from various angles. Firstly, these projects are available around the clock, offering constant convenience. For instance, AI providing pastoral care (
Young 2022, pp. 6–22) can be accessed even during late hours, when reaching out to a physical pastor or other religious leader might be challenging. Similarly, AI projects like
Virtual Ifta’ in Dubai (
Tsourlaki 2022, p. 12), offering answers to religious inquiries, are accessible 24/7, providing continuous support. Engaging with AI for spiritual guidance or for seeking answers is also time-efficient: there is no need for individuals to travel anywhere or wait for a service. AI allows for prompt responses and assistance, making it convenient for those seeking religious guidance or answers to their queries. Many of these AI services are free of financial cost to those with a digital device and internet access.
The second aspect of accessibility is linked to location. AI utilized for conducting religious rituals, ceremonies or providing spiritual care and comfort can, in many cases, be accessed from anywhere provided that internet access is available. This includes remote villages nestled behind mountains, religious communities in the diaspora and even challenging locations like battlefields and intensive care units in hospitals. The presence of AI enables access to religious services and support regardless of physical distance, ensuring that individuals in various locations can benefit from such assistance.
Furthermore, AI has the potential to enhance the accessibility of content. While simple literal searches may not require artificial intelligence, scholars often encounter information expressed in different words or phrases. In such cases, AI can play a significant role in making content more accessible by helping users find relevant information even when it is phrased differently. Additionally, AI-enhanced software can analyze large datasets faster, making big data more accessible to researchers and expediting their work. Cost is indeed an essential aspect of accessibility: given a device with internet connectivity and a decent connection, accessing religious content or services generally requires little to no additional cost.
However, it is crucial to recognize that this accessibility has a negative side. While internet access may be taken for granted in the urban centers of developed countries, it remains a significant challenge in some nations. Many underserved communities, particularly in rural or developing areas, lack the infrastructure and resources needed to access AI-powered applications and services. This disparity creates a digital divide, hindering the potential benefits of AI in these regions.
To address this issue, collaborative efforts from governments, non-profit organizations and private companies are necessary. They should work together to expand broadband coverage and provide affordable access to technology, thereby bridging the gap and ensuring that these communities are not left behind. The lack of access not only results in the underrepresentation of these regions in the AI landscape but also contributes to AI models’ biases. Biases can emerge in AI systems due to skewed data, and if certain demographics or regions are excluded from the training data, it can lead to biased AI models. Biased AI is another critical challenge that needs to be addressed to ensure that AI is fair, inclusive and beneficial to all.
Another challenge posed by high accessibility to AI services is the potential fading of the role and significance of religious communities. What is the main purpose of a religious community? One of the most prevalent motivations for joining a religious community is to connect with fellow believers, receive support and empathy, deepen the knowledge of the religion, participate in rituals and more—almost all of which can be found in some form through AI. Increased accessibility means an increased threat to the position of traditional in-person religious communities, which are a vital aspect of religion even in its modern form.
Another challenge related to accessibility is language and cultural barriers. AI applications often rely on natural language processing (NLP) to interact with users. However, language and cultural diversity pose challenges in developing inclusive AI interfaces. Many languages, especially indigenous and lesser-known ones, lack sufficient NLP support, limiting access to AI-driven services for speakers of these languages. To overcome this barrier, AI developers must prioritize multilingual support and invest in research to include underrepresented languages and dialects.
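To illustrate concretely what prioritizing multilingual support can involve, the following is a minimal Python sketch of a language-support gate for a hypothetical religious Q&A interface. The supported-language set, the detect_language stub and the route_query helper are all illustrative assumptions rather than parts of any existing system; a real deployment would use a proper language-identification model and a full pipeline per language.

```python
# A minimal sketch of a language-support gate, under the assumptions above.
SUPPORTED = {"en", "ar"}  # hypothetical: Persian ("fa") not yet supported

def detect_language(text: str) -> str:
    """Stub: a real system would call a language-identification model here."""
    # Crude heuristic for the demo: Arabic-script characters are labeled "fa".
    return "fa" if any("\u0600" <= ch <= "\u06FF" for ch in text) else "en"

def route_query(text: str) -> str:
    """Route a query to a language pipeline, or report the coverage gap."""
    lang = detect_language(text)
    if lang not in SUPPORTED:
        return f"Language '{lang}' is not yet supported by this service."
    return f"Routing query to the '{lang}' answering pipeline."

print(route_query("What does ijtihad mean?"))  # routed to the "en" pipeline
print(route_query("اجتهاد چیست؟"))             # exposes the missing "fa" support
```

Even such a simple gate makes the coverage gap explicit, rather than silently serving speakers of unsupported languages a degraded experience.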
The final accessibility challenge explored here pertains to the complexities of regulation and law. The dynamic landscape of AI regulations poses a significant hurdle to accessibility, as varying rules and restrictions across countries can impede the smooth development and deployment of AI. An impactful example of this challenge emerged when I relocated from Canada to my home country, Iran, and attempted to use ChatGPT. While accessing the website posed no issues in Canada, in Iran a disheartening message appeared at the center of the page, stating, “unable to load the site”. Though some claim the Iranian government has banned this service (
Ishaq 2023), the truth lies in the restriction of Iran’s IP addresses due to sanctions (
Naragh 2023;
Borhani 2023). As of July 2020, more than 300 websites could not be accessed from Iranian IP addresses due to sanctions, and the list has been growing since (
Borhani 2023). Adding to the frustration, I discovered that even registering on the OpenAI website (which is the provider of the ChatGPT service) proved impossible in Iran due to the non-acceptance of Iranian phone numbers for authentication (
Naragh 2023). As I discussed the remarkable capabilities of this Natural Language Processing (NLP) model with my friends, I couldn’t help but feel the privilege of my access. Regrettably, such discriminatory barriers to accessing AI services have fostered misconceptions about AI, fueling various conspiracy theories surrounding its use and implications.
As is evident, the challenges related to accessibility can jeopardize both the independent and assistant applications of AI software in the process of ijtihad. The solutions to these challenges vary accordingly. In some aspects, individuals themselves must take the initiative to overcome obstacles, particularly those related to language barriers. While employing translation AI services could potentially mitigate the problem, it is essential to acknowledge that these services also present their own set of challenges. In certain cases, these challenges might even exacerbate the issues related to language barriers. On the other hand, certain accessibility challenges require the intervention of governments and/or other authorities, who possess the ability to mitigate issues through various measures, such as developing infrastructure or implementing policy changes.
2.2. Bias
Despite the potential of AI to enhance efficiency and accuracy, AI systems are not immune to bias. The primary consequence of biased AI in religion is the distortion of the interpretation of sacred texts and religious sources. This outcome raises concerns about the accuracy and integrity of the insights provided by AI systems within religious contexts. Such bias can lead to unfair and discriminatory outcomes, perpetuating existing societal inequalities and even giving rise to new ones, thereby potentially deepening divisions among various groups. This poses a significant challenge to the authenticity of
ijtihad conducted by an AI model. There are at least four primary causes of bias in AI: first, the utilization of biased or unrepresentative datasets for training the AI model; second, intentional or unintentional algorithm design choices; third, the lack of diversity in AI development teams, which may lead to overlooking potential sources of bias; and fourth, human-centric data collection, meaning that AI systems are often trained on data reflecting human behavior and thus learn and replicate that behavior, some of which may be inherently biased (
Kantayya 2020). All of these causes of bias pose significant threats to the impartiality of the outcome of the AI model used in the process of
ijtihad.
In the context of AI and religion, one should be aware of at least two instances of biased artificial intelligence. The first pertains to facial recognition technology, as also brought up in Kantayya’s movie,
Coded Bias (
Kantayya 2020). Because the algorithms used in facial recognition technology are predominantly trained on data featuring individuals who do not wear religious head coverings, such as hijabs or turbans, the technology is less accurate in identifying those who wear such coverings, resulting in biased outcomes against them. On numerous occasions, I have observed that my mobile phone’s camera has difficulty detecting my wife’s facial features while she is wearing a hijab; once she removes it, her face is detected immediately, even at non-frontal angles.
Another pertinent example, which also underscores the deleterious impact of AI bias, is my interaction with ChatGPT. It is well-known that there are two predominant Islamic sects, namely Sunni and Shi’a. Given that the majority of Muslims identify as Sunni (approximately 90%) (
Cavendish 2010, p. 130), and that many Shi’a texts have not been translated into English, the corpus of information that is readily available on Islam is primarily based on the Sunni school of thought. Regrettably, the vast majority of my inquiries to ChatGPT, across various topics, were met with Sunni-centric perspectives. For instance, the term “
ijtihad” has divergent connotations in the Sunni and Shia traditions; however, ChatGPT appears to lack recognition and knowledge of this distinction, as its response to my inquiry, “What does
ijtihad mean in Shia?” yielded the following answer: “In Shia Islam,
ijtihad has a similar meaning as in Sunni Islam…”. Other instances of this nature, pertaining to Islamic history and doctrinal intricacies, are also discernible.
The employment of biased AI systems in the process of ijtihad can lead to negative implications, encompassing the following aspects:
- Discriminatory outcomes that do not truly reflect what many understand as the intention of the religion. These outcomes may fail to align with the spirit of the faith and its principles.
- Reinforcement and perpetuation of existing stereotypes (such as an unfriendly attitude toward the followers of other sects or those who have failed to observe a certain religious rule), which jeopardizes one of the fundamental goals behind employing AI in this field: bringing about reform from within Islamic jurisprudence.
- Exclusion of marginalized opinions and scholars, contrary to the motivation of inclusivity and of studying all available perspectives that comes with using AI in the ijtihad process. Biased AI can undermine the essence of open exploration and consideration of diverse viewpoints.
Perhaps the most evident implication of biased AI is the loss of trust. The discovery of bias in AI can erode public trust in AI technologies and their developers. Users may become hesitant to interact with AI systems, hindering their widespread adoption and potential benefits. In the following section, the issue of trust will be discussed in detail.
It is, therefore, essential to address these concerns and work towards creating an AI system for
ijtihad that is as unbiased as possible, in order to foster trust and realize the true potential that AI offers in this field. Eliminating all biases from AI systems is all but impossible. Nevertheless, several steps can be taken to diminish and alleviate such biases: employing diverse databases, ensuring that the datasets used to train AI systems are representative and inclusive of various demographics and perspectives; identifying and modifying algorithms or datasets, actively addressing and rectifying any identified biases to minimize their impact on AI outcomes; engaging a diverse pool of developers, since promoting diversity within development teams, or in ethical terms co-design or participatory design (Mercer and Trothen 2021, p. 58), can lead to greater awareness of potential biases and foster more inclusive AI system designs; and implementing ongoing monitoring of AI systems, which helps to prevent the gradual development of biases over time and ensures that they continue to perform fairly and accurately. By proactively implementing these steps, we can work towards building AI systems that are more equitable and unbiased, contributing to a more just and inclusive future.
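To make the first of these measures, employing diverse and representative datasets, more tangible, the following Python sketch shows one simple form a representation audit might take before training. The toy corpus, its “school” metadata label and the 40% threshold are invented for illustration; a real audit would rely on richer metadata and more careful statistics.

```python
from collections import Counter

# Toy training corpus; in practice each record would hold real text and metadata.
corpus = [
    {"text": "...", "school": "Sunni"},
    {"text": "...", "school": "Sunni"},
    {"text": "...", "school": "Shi'a"},
]

def audit_representation(docs, label_key="school", threshold=0.4):
    """Report each label's share of the corpus and flag shares below `threshold`."""
    counts = Counter(doc[label_key] for doc in docs)
    total = sum(counts.values())
    return {
        label: (n / total, "underrepresented" if n / total < threshold else "ok")
        for label, n in counts.items()
    }

for label, (share, status) in audit_representation(corpus).items():
    print(f"{label}: {share:.0%} of corpus -> {status}")
# Prints: Sunni: 67% of corpus -> ok / Shi'a: 33% of corpus -> underrepresented
```

An audit like this cannot remove bias by itself, but it makes skewed coverage, such as the Sunni-centric corpus described above, visible before the model is trained.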
The influence of prompts on the outcomes of generative AI models, especially NLP models, is of paramount importance and falls under the broader challenge of bias. The prompt is the initial input or instruction provided to the AI model; it serves as a guide, helping the model understand the context and purpose of the task, and it plays a significant role in shaping the generated output. AI models, especially language models like GPT-3, are highly sensitive to the wording and structure of the prompt: even small changes can result in vastly different responses, and the same model can generate opposing answers to a question based on slightly different phrasing. This sensitivity can itself contribute to bias in the outputs. When a prompt contains biased language or reflects biased assumptions, the model may generate responses that perpetuate or amplify the underlying bias in the data. The internet is teeming with webpages containing “prompt tricks” or “prompt cheats” designed to elicit various responses (even those restricted by developers to reduce bias) from AI models like ChatGPT.
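One way to surface this sensitivity systematically is to pose paraphrases of the same question and measure how far the answers diverge, as in the minimal Python sketch below. The ask_model function is a stub with canned answers standing in for a real model call, and the 0.5 similarity cutoff is an arbitrary illustration.

```python
import difflib

def ask_model(prompt: str) -> str:
    """Stub standing in for a call to a generative language model."""
    canned = {
        "What does ijtihad mean in Shia?":
            "In Shia Islam, ijtihad has a similar meaning as in Sunni Islam...",
        "How do Shi'a jurists understand ijtihad?":
            "For Shi'a jurists, ijtihad is the living deduction of rulings "
            "by a qualified mujtahid, a role distinct from its Sunni counterpart.",
    }
    return canned[prompt]

# Two paraphrases of the same underlying question.
paraphrases = [
    "What does ijtihad mean in Shia?",
    "How do Shi'a jurists understand ijtihad?",
]

answers = [ask_model(p) for p in paraphrases]
similarity = difflib.SequenceMatcher(None, answers[0], answers[1]).ratio()
print(f"Similarity between the two answers: {similarity:.2f}")
if similarity < 0.5:  # arbitrary cutoff for this demo
    print("Warning: paraphrased prompts produced substantially different answers.")
```

A harness of this kind turns anecdotal observations, such as the Sunni-centric answers reported above, into something that can be tracked and tested.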
Although prompt-related challenges arise both when questioning AI models and when questioning Shi’a scholars, there are notable distinctions between the two. Firstly, AI models exhibit a heightened sensitivity to prompts, surpassing that of human scholars. Scholars, being immersed in society and exposed to diverse contexts, possess a deeper understanding of, or can infer, the underlying intent behind a question. Secondly, prominent Shi’a scholars, vested with the authority to issue fatwas, are supported by a cohort of researchers and occasionally scientists, who help minimize the impact of a question’s phrasing on the fatwa issuance process.
2.3. Sensitive Topics
Another challenge for AI models in a religious context, related to bias, is how to handle religiously sensitive issues. Insufficient data on sensitive issues can result in biased evaluation and judgment, potentially causing emotional distress among lay believers within a religious context. Controversial matters have existed in every religion, sparking debates and sometimes even conflicts. These issues range from historical details to modern matters, including LGBTQ-related issues, abortion and the hijab. Developing a publicly accessible AI that can address these issues without offending the sentiments of the followers and while avoiding conflicts or divisions is a highly complex task. This is the primary reason why certain AI projects, such as the
Digital Jesus project, are not yet available to the public. This task becomes even more challenging in the context of finely tuned AI projects, where artificial intelligence systems are trained on specialized databases. For instance, HadithGPT, an AI model specially trained on a database of 40,000 hadiths derived from the six most authoritative Sunni hadith collections, was forcefully rejected by some Muslims, despite its latest version being relatively accurate, due to what were perceived as “clearly incorrect” responses on religiously sensitive matters (
Chowdhury 2023).
Another use of AI in religious practice that can raise a sensitive issue is the possibility of AI occupying the position of highly revered figures in a particular religion. Throughout the early stages of prominent world religions, pivotal figures who underwent a specialized formation assumed responsibility for acts of worship, rituals, the management of religious communities and, in Shi’a Islam most importantly, ijtihad as the pinnacle of religious authority. Traditionally, going to the religious scholar’s house or meeting with him in person in a mosque was a sign of reverence and respect. The Prophet Muhammad has even been quoted as saying that “looking at the face of an ‘alim (scholar)… is an act of worship”. Although this could well be interpreted as an encouragement to participate in scholarly circles and seek knowledge, some still follow the literal understanding of this hadith. Hence, it is entirely comprehensible that certain followers may feel uneasy about, or refuse to accept, the placement of AI in the positions traditionally held by these religious figures.
2.4. Generative AI
Generative AI models are designed to produce new data resembling a given training dataset. This creativity is an attractive force that draws people towards generative AI. For instance, in the context of NLP models trained on a vast corpus related to Jesus, scholar Randall Reed has been developing an AI that can generate responses that, while not being the exact words of Jesus, “sound like the Jesus in the Gospels” (
Reed, forthcoming). The ability of generative AI to establish constant and multiple connections between different parts of the dataset is a feature that holds promise for revolutionizing
ijtihad (
Fazil Lankarani 2023). However, it is crucial to acknowledge that there are also potential consequences of generative artificial intelligence that may have negative impacts on the
ijtihad process.
There are two important challenges related to the generative nature of the very AI models that hold the potential to revolutionize Shi’i
ijtihad. The first lies in the randomized responses of generative AI models, even in finely tuned versions. In other words, the same question can yield more than one answer, differing not only in wording but, more importantly, in content. For instance, in Reed’s
Digital Jesus project, at least three responses were generated for each question, and in some cases these responses bore no resemblance to one another. For example, when asked about the greatest commandment, in one instance Digital Jesus gave the same answer as Jesus, “The one about loving God with all your heart, soul, and mind”, while in another it stated, “The best is ‘Listen, and you will be given wisdom’” (Proverbs 9:4) (
Reed, forthcoming). This challenge is also evident in other NLP models like ChatGPT and HadithGPT. I have had multiple experiences with HadithGPT where the same question yielded entirely different responses. For instance, when I asked, “Among the wives of the Prophet, whom did he love the most?”, I received different names each time AI generated a new response (
Hadith GPT 2023).
While it is common for jurists to undergo changes and alterations in their legal opinions, it is important not to equate or confuse this process with the generation of new responses by generative AI. The primary reason for this distinction is that the evolution of a jurist’s legal opinion arises from shifts in understanding or access to additional data, often requiring a significant amount of time. On the contrary, when it comes to generative AI, users can be certain that, within a minute, nothing has changed in terms of the sources or analysis of the AI model. The emergence of new responses in generative AI is simply a result of the generative nature of such AI models. Moreover, the variation in responses from an AI model is perceived as inconsistency, since different users can receive different answers to the same question simultaneously. On the other hand, when a jurist issues a modified fatwa, it does not imply inconsistency, as it aligns with coherent and consistent data serving as the basis not only for that specific fatwa but also for all other fatwas issued by the same jurist.
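The mechanism behind this variability can be sketched in a few lines of Python. Generative models sample their output from a probability distribution over candidates, and a “temperature” parameter controls how much randomness the sampling allows. The candidate answers and their scores below are invented for illustration; real models sample token by token, but the principle is the same.

```python
import math
import random

# Invented candidate answers with invented plausibility scores.
candidates = {
    "Love God with all your heart, soul, and mind.": 2.0,
    "Listen, and you will be given wisdom.": 1.2,
    "Do unto others as you would have them do unto you.": 1.0,
}

def sample_answer(scores, temperature=1.0):
    """Softmax-sample one candidate; temperature 0 means greedy, deterministic decoding."""
    if temperature == 0:
        return max(scores, key=scores.get)  # always the highest-scored answer
    weights = [math.exp(s / temperature) for s in scores.values()]
    return random.choices(list(scores), weights=weights, k=1)[0]

for _ in range(3):
    print(sample_answer(candidates, temperature=1.0))  # may differ on each run
print(sample_answer(candidates, temperature=0))        # identical on every run
```

This is why, as noted above, nothing about the model’s sources or analysis changes between two divergent answers; the variation is built into the decoding step itself.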
2.5. AI “Hallucination”
The second challenge related to the generative nature of these AI models is AI hallucination. This refers to a phenomenon in which artificial intelligence systems, particularly language models like GPT-3, generate outputs that appear entirely believable and well-grounded but, in fact, have no basis in reality. These hallucinations can take the form of text, images or even audio generated by AI models. An inherent characteristic of language models is that they try to create plausible-sounding responses without actual understanding or knowledge of the context (
Athaluri et al. 2023, p. 1). Due to their immense size and training on diverse datasets, these models might produce outputs that appear to be creative or hallucinatory, often by combining unrelated concepts or generating fictional narratives.
There are numerous examples of AI hallucinations to the point that anyone who has asked questions to an AI model like ChatGPT has likely encountered a few instances. Personally, I have witnessed ChatGPT generating responses that were entirely fabricated. For example, when I inquired about the book Strange Rites: New Religions for a Godless World, it provided a summary of the book. Seeking more accuracy, I specified that I meant the one written by Tara Isabella Burton. In response, it apologized and generated another abstract of the book. I then asked if it could provide a summary of each chapter, and it confirmed its ability to do so. However, the titles of the chapters and their content were completely different and also incorrect. I provided additional information, mentioning the book’s publisher. Once again, it apologized and provided summaries of each chapter, this time with new titles, none of which matched the book I had in front of me. This process repeated for the third time, and once more, it generated an entirely new book with no connection to the published one. Such instances highlight the challenges posed by AI hallucination and underscore the need for further refinement in AI models to ensure more accurate and reliable responses.
An intriguing example closely related to our topic is the one that occurred in the
Digital Jesus project. When asked about the greatest commandment, in the first attempt, Digital Jesus responded with the same answer as Jesus, “The one about loving God with all your heart, soul, and mind”, but in the second attempt, it provided the response, “The best is ‘Listen, and you will be given wisdom’ (Proverbs 9:4)”. However, Proverbs 9:4 contains no such commandment in the Hebrew Bible. Still, the response was articulated in a way that someone unfamiliar with the Christian tradition (or even familiar with it but without scripture memorized) might accept as valid (
Reed, forthcoming). It is because of such cases that, in order to differentiate between hallucination and reality in the process of
ijtihad, one must be an expert in all the fields of study required for
ijtihad, and even someone with such expertise must refer back to the sources to verify the generated content.
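That final step, referring back to the sources, can be partially automated. The Python sketch below checks a generated quotation against a local reference corpus; the single corpus entry and the verify_citation helper are hypothetical, and such a check can only catch fabricated or misattributed quotations, not subtler errors of interpretation.

```python
# Hypothetical local reference corpus keyed by citation (one illustrative entry).
reference_corpus = {
    "Proverbs 9:4": "Whoever is simple, let him turn in here!",
}

def verify_citation(quoted_text: str, citation: str, corpus: dict) -> bool:
    """Return True only if the cited source exists and contains the quoted text."""
    source = corpus.get(citation)
    return source is not None and quoted_text.lower() in source.lower()

quoted, cited = "Listen, and you will be given wisdom", "Proverbs 9:4"
if not verify_citation(quoted, cited, reference_corpus):
    print(f"Possible hallucination: {cited} does not contain the quoted text.")
```

Checks of this kind could flag the Digital Jesus response above automatically, though the substantive evaluation would still fall to a human expert.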
This section has highlighted various challenges that significantly impact the accuracy of AI models utilized in the process of ijtihad. These challenges pose substantial obstacles to achieving reliable and precise results in AI applications. By acknowledging and addressing these issues, researchers and developers can strive to enhance the performance and credibility of AI systems; indeed, they are continually refining AI models to minimize hallucinatory responses and to improve control and precision over the generated content. As AI technology evolves, it is likely that the capabilities of language models will improve, leading to more accurate and contextually appropriate responses while reducing hallucinatory outputs.
2.6. Authority
The concept of authority in Islam, including among Shi’a, differs significantly from that in some streams of Christianity. Unlike Roman Catholic Christianity, which has a hierarchical structure with authority flowing from the top, Islam does not follow such a system. The question of authority holds immense importance, as the outcome of the
ijtihad process is believed to be a “ruling in accordance with divine revelation”—a crucial criterion observed in every Shi’i
fatwa (
Sheikh Anṣārī 1404, p. 303). It is also worth noting that the challenge of authority becomes more pronounced when an AI system is independently used to derive
fatwa from its sources. However, when AI is employed in a more limited and accountable role, as an assistant to the jurist, authority can be preserved through the jurist’s presence in the process and supervision over it. In this way, the jurist maintains their role in ensuring the legitimacy and accuracy of the derived rulings. It is noteworthy that there is a growing body of scholarly work arguing for AI’s role solely as an assistant in religious matters (
Trothen 2022a,
2022b).
There are two principal ideas about the main source of religious authority. Traditionally, and as believed by many religious people from different Abrahamic religions, authority is considered to come from God. For religious statements to hold value and significance, evidence of divine appointment is typically required, often through a complex hierarchical structure, such as what is seen in Catholic Christianity. However, in the modern world, particularly after the Protestant Reformation, some believe that authority can also originate from the adherents of a religion. For example, a Muslim imam, whose community has accepted his authority, may not need an institution or a higher-ranked scholar to validate his position. It is important to note that the discussion surrounding the authority of AI primarily arises in the former situation rather than the latter. Based on the second interpretation, it is entirely plausible that a group from any religion or even without a specific religious affiliation may accept the authority of an AI system, potentially forming a new denomination or even a religious movement, like the first church of artificial intelligence, known as The Way of the Future (WOTF), which was established in late 2017 and closed in early 2021 (
Harris 2017;
Korosec 2021).
The process of gaining religious authority in Shi’i jurisprudence is deeply rooted in tradition, even in countries like Iran where a Shi’a Islamic government holds power. In the Shi’a tradition, achieving such authority involves embarking on a rigorous path of studies in various fields related to ijtihad, followed by obtaining written permission from one or more top living jurists. This chain of permissions traces back to the Imams of Shi’a Islam who lived during the 8th and 9th centuries. The “Permission of Ijtihad” (Ijazat al-ijtihad) signifies that the holder possesses the capability to deduce fatwas from their sources by skillfully applying the necessary fields of study.
However, a crucial question arises: Can an AI model be given such permission? To answer this, one must understand the requirements and the process by which this permission is granted. Interestingly, there is no official or definite procedure for obtaining this certificate; rather, it mainly relies on the trust and confidence of higher-ranked scholars (
mujtahids, qualified jurists who practice
ijtihad) in the individual seeking this permission. The most common path to gaining this trust involves a student actively participating in the lectures of a top scholar for several years, demonstrating exceptional performance, judgment, reasoning and a profound understanding of the sources necessary for
ijtihad. Other methods, such as extensive discussions, may also serve as a detailed test of the student’s capabilities, ultimately earning that sought-after trust. Considering this, it may not be entirely impossible for an AI model to receive this certificate if it can garner the trust of a
mujtahid. However, the possibility of AI obtaining such a certificate takes a backseat to the larger discussion of whether being human is a necessary criterion for engaging in
ijtihad. This last point reminds me of a theological debate in Christianity in which, according to some interpretations, the belief that human beings were created in the image of God does not necessarily imply the absence of this feature in other creatures (
Mercer and Trothen 2021, pp. 222–23).
2.7. Trust
With the emergence of “deepfake”, the issue of trusting any content on the web has entered a new and concerning phase. Deepfakes, a combination of “deep learning” and “fake”, refer to hyper-realistic videos that are digitally manipulated to depict people saying and doing things that never actually happened. These deceptive videos are challenging to detect, as they use real footage, can have authentic-sounding audio and are optimized to rapidly spread on social media platforms (
Westerlund 2019, p. 40). While deepfake may not directly impact the trust issue within the realm of using AI in the process of
ijtihad, since
fatwas are almost always expressed in written form rather than orally, it serves as a pertinent example of how certain AI models can be employed for intentional deception. The prevalence of deepfake technology has contributed to the erosion of trust in AI, as individuals become increasingly cautious about the authenticity of digital content.
The issue of trust is also a crucial consideration when an AI model is employed for practicing
ijtihad. This raises the question of whether scholars can place their trust in an AI system, which, in turn, leads us to a broader discussion about whether lay Muslims would entrust their faith to AI. Two cases, Virtual Ifta’ in Dubai and “Al-Azhar Fatwa Global Centre” in Egypt, along with a survey about the same question, shed light on the issue of trust when using AI in the process of issuing
fatwas or
ijtihad. The first project, Dubai’s Virtual Ifta’, made its debut in October 2019 during a three-day exhibition dedicated to launching “the world’s first AI
fatwa service”. However, the AI model utilized in this service was non-generative. Upon entering a question, the user would receive multiple similar questions to choose from, and the corresponding answer to the chosen question would then appear (
AP Archive 2019). Less than three months later, in January 2020, the Al-Azhar Fatwa Global Centre in Cairo announced its own AI
Fatwa System. However, the system is yet to be operational as the “team of special intelligence […] are still collecting data to support the system” (
Tsourlaki 2022, p. 13). These two cases and the survey data provide valuable insights into the issue of trust concerning AI’s role in issuing
fatwas. As AI technology continues to evolve, it remains vital to explore and address the concerns and perspectives of the Muslim community regarding the integration of AI into religious practices.
The informative survey conducted by Tsourlaki examined the attitudes of lay Muslims toward AI systems related to
fatwa issuance. According to the author, “The participants’ common characteristics were that they identified as Sunni Muslims, employed the English language in their daily communication and used Facebook. Therefore, they were familiar and comfortable with technology” (
Tsourlaki 2022, p. 8). Two notable factors stand out among these common characteristics. First, the participants’ familiarity and comfort with technology, as highlighted by the author, are significant. Second, their use of English as their daily language suggests that they may not have a strictly traditional background, a notion supported by 16 percent of participants who stated that they obtain their
fatwas from their local imams. Given the unsuitability of imams to serve as
muftis, this fact indicates that these participants lack a deep knowledge of the exact requirements one must meet in order to produce
fatwas (
Tsourlaki 2022, p. 12).
It is essential to consider these insights when examining the acceptance and impact of AI systems in the realm of
fatwa issuance. As AI technology continues to be integrated into religious practices, understanding the perspectives and preferences of lay Muslims becomes crucial for developing effective and trustworthy AI-driven solutions in this context. As a part of the survey, when the participants were asked, “Is it important to know whether the
fatwa has been created by a human or a computer?” 92.7 percent responded positively, while 7.3 percent stated that they were not concerned about it. On the aspect of
fatwa issuance by a computer, 96.3 percent stated that they would not trust a
fatwa that a computer had issued (
Tsourlaki 2022, p. 13). The other question, “Why would you trust or reject a
fatwa issued by a robot?” asked respondents to provide an essay-style answer. The majority of participants expressed a clear rejection of such a
fatwa. Their rationale centered on human cognitive abilities, including reasoning, compassion and critical thinking, as well as the skill to interpret sources, grasp complex contexts and conduct comparative analyses. The significance of cultural and societal context was also emphasized in their responses.
These findings align with the conclusions of a 2015 Egyptian research study, indicating that lay believers generally tolerate and forgive mistakes made by a
mufti (jurist who issues a religious ruling (
fatwa)) unless they significantly disturb the public. However, the same leniency would not be extended to even minor errors made by AI (
Elhalwany et al. 2015, p. 504). Answers to this question reveal a subconscious fear of the unknown surrounding AI’s interference with traditional practices within Islam. This fear is evident in one of the responses, which stated, “I will simply reject a
fatwa because I won’t believe a computer when it comes to my faith” (
Tsourlaki 2022, p. 19).
The lack of trust in computer-generated
fatwas explains why Virtual Ifta’ received no response or serious attention and consequently had a short period of activity. The way the project was launched and its function introduced might also have caused a misunderstanding. Virtual Ifta’ utilized a repository of pre-registered
fatwas, and the AI aspect was limited to finding the nearest model question to the user’s inquiry. However, during the launch ceremony, it was promoted as “The world’s first AI
fatwa service”. Furthermore, it is surprising that before users typed their questions, an automated message informed them that “the answers to your questions are generated automatically using AI technology” (
Tsourlaki 2022, pp. 12–13). According to the survey conducted by Tsourlaki, had the users known that the project relied on previously issued
fatwas, it would have received more attention and acceptance by the target audience (
Tsourlaki 2022, p. 18). As of March 2022, no academic publication had engaged with the project, and the media coverage was limited to a few announcements during the launch week (
Masudi 2019;
Dajani 2019;
The New Arab 2019;
AP Archive 2019). It seems that Muslims either did not notice the service or rejected using it, leading to the decision by IACAD (Islamic Affairs and Charitable Activities Department in Dubai) to discontinue the project (
Tsourlaki 2022, p. 19).
2.8. Acceptance
Another issue deeply related to the discussion of authority and trust is the acceptance of artificial intelligence in religious matters. Even after gaining authority, AI systems can suffer from a lack of acceptance among the lay followers of a given religion. For instance, even though Mindar, the AI Zen Buddhist robot, was endorsed by the authorities at Kodaiji temple, some Buddhists still do not welcome the project (
DW Shift 2020). Although using AI has not only been approved but also encouraged by most Sunni and Shi’a scholars (
Islamweb 2023;
Awais 2022;
Khamenei 2021,
2023), some lay Muslims are still reluctant to use AI for different reasons, including the notion that only a human can undertake the process of
ijtihad and issue a
fatwa; a basic and superficial understanding of AI; the problem of having no personal contact or relationship with the one answering the question (
Tsourlaki 2022, p. 19); and the idea that artificial intelligence is an imitation of God’s action, as it involves creating an intelligent being (
Quora 2023).
Some of these rationales are not specific to Islam; for instance, considering AI an imitation of God’s act of creation may also be viewed as an impediment to AI development in Christianity (
DW Shift 2020). Another significant reason for the non-acceptance of AI systems in religious activities is the concern over its usage being sacrilegious. When reporting on Mindar, the first thought that crossed the reporter’s mind was, “Isn’t this sacrilegious?” (
DW Shift 2020). This reaction reveals the subconscious feelings of at least a group of people towards such AI projects. This sentiment can be linked to the concept of “
Wahn” in Shi’a Islam doctrine. “
Wahn” refers to anything that makes Islam appear irrational, weak, inferior or insignificant to the public, regardless of their religious affiliation. It is strictly prohibited in Islam, and all Muslims have a responsibility to avoid it (
Honarmand 2020, p. 13). In certain cases, the use of AI in religious matters can be perceived as demeaning to the community based on the specific act or service provided by the AI. This notion not only leads to the rejection of AI by laypeople but also has the potential to undermine the authority of AI models. All in all, these concerns illustrate the intricate interplay between technology and religious beliefs, necessitating careful consideration and understanding when integrating AI in religious contexts.
In this section, the lack of personal relationships with scholars was mentioned as one of the reasons why Muslims are reluctant to accept the outcomes of AI models. However, it is essential to recognize that this aspect can also be regarded as an independent issue worthy of examination. There exist specific instances where a
mujtahid has issued a tailor-made
fatwa for an individual, drawing upon the personal acquaintance and understanding of the unique circumstances involved. For instance, in cases where an individual is grappling with an obsession (waswasa in Arabic and vasvās in Persian) related to a religiously mandated action, the
mujtahid may issue a
fatwa that such action, while obligatory for the general public, is deemed forbidden for that specific individual (
Heidari Naraqi 1388, pp. 133–34). Such personalized
fatwas are not rooted in textual sources but rather arise from the jurist’s intention to assist the individual in overcoming their obsessive state. Moreover, addressing the queries of an obsessed individual with conventional, established
fatwas applicable to the broader community can exacerbate the obsessive condition. In sum, the generation of customized
fatwas through an AI model for such cases is not a straightforward task, as it necessitates considerations beyond mere textual sources, delving into the contextual nuances that can be best comprehended through in-person interactions.