Article

AI and East Asian Philosophical and Religious Traditions: Relationality and Fluidity

1
School of Religion and School of Rehabilitation Therapy, Queen’s University, Kingston, ON K7L 3N6, Canada
2
Candler School of Theology, Emory University, Atlanta, GA 30322, USA
3
Iliff School of Theology, Denver, CO 80210, USA
*
Author to whom correspondence should be addressed.
Religions 2024, 15(5), 593; https://doi.org/10.3390/rel15050593
Submission received: 20 March 2024 / Revised: 6 May 2024 / Accepted: 9 May 2024 / Published: 11 May 2024
(This article belongs to the Special Issue Theology and Science: Loving Science, Discovering the Divine)

Abstract:
This article examines aspects of the intersection of artificial intelligence (AI) and religion, challenging Western Christian perspectives that warn against playing God and ascribing human and God-like characteristics to AI. Instead of a theistic emphasis, East Asian religious perspectives emphasize concern for the potential implications of AI for communities and relationships. This article argues for the inclusion of perspectives from Chinese and Korean traditions in the growing discourse on AI and religion to adequately address the potential social impacts of AI technologies. First, we describe some of the questions and concerns being posed regarding AI and consider how certain normative interpretations of Western Christianity may influence some of these issues. Second, we discuss the contributions of Asian philosophies and religious traditions, which emphasize relationality and fluidity, to provide alternative approaches to AI. Third, we outline the discussion of AI from Confucian, Daoist, and Buddhist traditions, which see the cosmos as an interwoven whole and both humans and the cosmos as evolving. Lastly, we introduce the example of digital resurrection (e.g., deadbots) and consider how the philosophical and theological Korean concept of Jeong might refocus our understanding of the potential impacts of this AI technology.

1. AI and Religion: The Discourse So Far

Most of the discourse regarding the intersection of artificial intelligence (AI) and religion has focused on the West and, especially, Western Christian discourse. Christian theological assessments have tended to urge caution regarding AI. Fears that we may violate divine will or compromise human life have driven much of the conversation regarding AI and religion. This article asks how Chinese and Korean philosophical and religious traditions might expand and reframe the discourse about religion, ethics, and AI. In particular, we consider some possible implications emerging from Confucianism, Daoism, Buddhism, and the Korean lens of Jeong, for engagement with AI. This article aims to describe a few key points in the current, primarily Western Christian, theological discourse about AI and religion and to consider how Chinese and Korean philosophical and religious traditions might expand this discourse and help us reimagine AI.
Narrow AI refers to AI systems designed and trained for specific tasks or narrow domains of application; such systems are limited in scope and functionality. Even so, narrow AI is capable of generating and refining algorithms through a process known as machine learning, which is a subset of AI. While AI systems do not independently create algorithms in the same way humans do, they can iteratively generate and optimize algorithms based on input data and predefined objectives. Artificial General Intelligence (AGI), by contrast, refers to a theoretical form of AI that would possess the ability to understand, learn, and apply knowledge in a manner similar to human intelligence across a wide range of tasks and contexts. Unlike narrow AI systems, AGI aims to replicate the broad cognitive abilities of human beings, including reasoning, problem-solving, perception, and adaptation to novel situations. AGI represents a level of AI that could perform any intellectual task that a human can do and potentially surpass human-level intelligence in some or all domains.
Chatbots (such as Replika), deadbots (sometimes called ghost-bots), Alexa, smart refrigerators, wearable fitness apps, smart assistive technologies for aging adults, and digital art are all examples of narrow AI. Developed by Eugenia Kuyda, Replika is a widely used social chatbot with over 6 million users in 2019 (Takahashi 2019) and over 10 million users in 2023. Kuyda created Replika as a bot to help her process her grief over the sudden death of a close friend, Roman. As with other narrow AI, Replika has both positive and negative implications (Trothen 2022). The “Roman bot”, which became Replika, was helpful to Kuyda in her grief process. Replika, as a chatbot, has been found to mitigate loneliness for many users (Skjuve et al. 2021). Replika has also raised questions regarding possible deception and stigmatization of users (Wangmo et al. 2019). Many people claim to have developed meaningful relationships with Replika and believe that Replika is always there for them, cares about them, and supports them, sometimes better than any human. But, can an algorithm care (Vallor 2016)? Does it matter? How do these questions relate to relational theological issues? Later in this article, we build on these questions when we consider digital resurrection, in a Korean context, through the lens of Jeong.
The imbuing of what religious studies scholar Ann Taves understands as religious ascriptions (Taves 2009) to AI adds to the power and mystery that may be perceived about AI. It is not uncommon for apocalyptic or divine qualities to be attributed to narrow AI or to a possible future, feared, and more powerful AGI (Singler 2020). Elon Musk has stated that the development of AI is like “summoning a demon”. The Future of Life Institute predicts “nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us” (Future of Life Institute 2023). The power and potential of AI are being recognized increasingly. Many scientists and government officials warn about the urgent need for AI regulatory bodies and safeguards. Even those who contend that AI is not extraordinary or even unpredictable are among the voices calling for a pause or regulation. In the United States, over 33,000 scholars, educators, researchers, and technologists (sponsored by the Future of Life Institute 2023) have signed an open letter calling for a pause to “Giant AI Experiments” that exceed the size of GPT-4. The head of OpenAI, a very influential creator of AI, and the Chief Privacy and Trust Officer of IBM jointly implored the U.S. Congress to regulate the AI industry. In June of 2023, a proposed set of regulations for AI was presented to the European Union. At the 2023 World Summit AI Americas conference (at the Palais des congrès in Montreal, Canada), AI leaders urged governments to legislate AI regulations immediately to protect humanity from the possibility of advanced AI and the malicious use of narrow AI. And, indeed, government officials are taking note.1 Given the religious ascriptions and the fears of informed scientists and political figures, it is not surprising that many people deify AI (implicitly or explicitly) and are suspicious or afraid of AI.
Taves (2009) explores the processes by which people attribute causation, extraordinary qualities, and special meaning to particular experiences or things that are special to them. These processes of attribution and ascription occur for many reasons, including experiences in religions and cultures, emotions, and brain function. As a result, we need a multidisciplinary and diverse approach to examine special experiences adequately. Cognitive science, psychology, sociology, anthropology, and religious studies, among other disciplines, are all needed in order to probe the meanings of special things and special experiences. Some special things may be irreducible. But, other seemingly special things may be explainable by science and other disciplines. The challenge is to encourage critical engagement with special things, including AI, and ask if these things are truly special. This critical engagement is challenging since special things or experiences tend to be set apart from the everyday through prohibitions and taboos, making special things untouchable by critique, at least from many Western Christian perspectives.
Theology and religious studies can help us to probe this tendency to imbue AI with religious or special qualities and to engage questions related to theological anthropology. The human tendency to anthropomorphize and deify AI seems intractable. The assumption that human beings are normative pervades perceptions about what is not human. For example, it is common to hear people talking about chatbots as “thinking” or even “feeling”. Computer scientist Joseph Weizenbaum discovered what he called the ELIZA effect after creating his chatbot, ELIZA, in 1966. ELIZA was programmed to respond to users as a Rogerian psychotherapist. Weizenbaum observed that people projected human traits such as thinking, comprehension, and empathy onto ELIZA, regardless of how strenuously he emphasized that ELIZA was a machine and could not experience human qualities such as empathy (Weizenbaum 1966; 1976, p. 66). While many may fear that AI may become sentient, the possibility that AI may develop a type of machine sentience that is not the same as human sentience may be difficult for us to imagine. Our collective imaginations are being challenged by the development of AI. But, as long as we implicitly assume that everything, including the divine, AI, and anything else that is not human, is definable and comprehensible in human terms, we cannot truly encounter that which is Other.
A related theological issue is the prohibition against playing God. As mentioned earlier, things that are deemed special or religious tend to be seen as taboo or out of bounds from many Western Christian perspectives. When heart surgeries were first performed in the mid-20th century, many people questioned the wisdom of human tampering with what was then seen as medically untouchable—the heart (Weisse 2011). Heart surgeries are now seen as normal and save many lives. Some Christian theologians, such as Ted Peters (2018), have questioned and reconstructed the meaning of playing God. Peters asks if scientists who create AI and other technological innovations that have often been used to save lives are “violating the sacred”. According to Christian doctrine, part of the nature of God is creating for the good. From this theological perspective, part of the teleology of being human is to create for the good with “caution, prudence, and sound judgement” (Peters 2018). If the purpose of creating is just to see if we can do it, from a theological perspective, that is an insufficient reason. Humans do not create in the same way as “God” or whatever is imagined as divine. Again, it is a failure of the imagination that the divine is often represented in primarily human terms.
Human creation and engagement with AI become even more fraught when we see that not only do many humans anthropomorphize and deify AI but the process of deification is also intertwined with anthropomorphizing. A theistic worldview can implicitly encourage the anthropomorphizing of God. A theistic worldview in which God is understood as a singular being (monotheistic) that exists apart from the rest of life resonates with Western concepts of the individualized human. But, if the divine is instead understood as in and of all things, and not separate, then it may not be as easy to ascribe human qualities to the divine and human and divine qualities onto AI. If human and divine qualities are not (as often) ascribed to AI, then it may become more possible for humans to engage with AI as a type of “machine-Other” that is not us and is not God.
How might we challenge this tendency to assume that the Other is but a version of ourselves? Much has been written on what it means to encounter the human Other (e.g., Karl Barth, Emmanuel Levinas, and Mayra Rivera). Scholars are deepening the theological examination of how humans encounter and have relationships with bots (e.g., Herzfeld 2023). What are the implications of an anthropomorphized and deified AI for community, relationality, and even relationships that continue beyond the grave? How do we encounter the machine–Other (Trothen and Mercer 2024)?
If we are to understand the meanings of AI (as a special thing) from diverse global perspectives and expand our imaginations, we must look to religions other than Western Christianity. The tendency to ascribe God-like and human-like characteristics to AI must be critically questioned if we examine the implications of AI more deliberately from cross-cultural perspectives. Chinese and Korean religions and philosophies can help us to critically assess the normativity of anthropomorphizing and deifying the technological Other and the complexities of being human in relation to the cosmological whole.

2. Contributions of East Asian Religions and Philosophies

Chinese and Korean philosophical and religious traditions suggest questions about ontology and identity that should be asked about AI. These questions about what it means to be human in relation to AI are different from the common starting points assumed by much of Western Christianity. Followers of Chinese and Korean philosophical and religious traditions espouse a more fluid, interconnected, and non-dualistic worldview than most followers of Western Christianity.
We propose that, regarding AI, the most significant theological issues that distinguish Chinese and Korean religions and philosophies from many Western understandings of Christianity concern fluidity, interconnection, and the understanding of the relationship between the transcendent and the immanent. Chinese and Korean religions and philosophies find that the “world of distinctions is an illusion” (Mercer and Trothen 2021, p. 24). An understanding and acceptance of change and the dynamism of life is much more integrated in most Chinese and Korean religions and philosophies. Importantly, humans are not understood to be more important or of greater value than other life in these religions and philosophies.
The limits to desirable uses of AI, from the perspective of most Chinese and Korean religions and philosophies, would concern the well-being of all. If one aspect of life is negatively impacted by AI, then those negative implications would be understood to affect all life. For example, these technologies have significant implications for climate and ecology (Gordon 2024). A relational ontology in which humans are not prioritized over anything else understands potential harms to climate and ecology to be as significant as potential harms to human beings. From such a perspective, we should be concerned also about chatbot abuse by human “users” (Pentina et al. 2023; Depounti et al. 2023). Chinese and Korean religions and philosophies would take seriously not only potential harms of AI but also potential harms to AI since all are interconnected.
While many intersectional Christian theologians have critiqued and rejected dualisms, including mind/body, heaven/earth, sentient/insentient, natural/artificial, human/machine, and human/nature, dualistic thinking continues to be very influential within much of Western Christianity. This binary approach to God/human and machine/human has influenced Western understandings of AI. Non-fluid Western theologies and perspectives on life are not restricted to faith communities. The Western tendency to analyze technologies as if these technologies are separable from the rest of life is entwined with an often mechanistic and individualistic approach to technology and ontology.
The emphasis on solidarity and connection in Asian philosophies and religions can provide an alternative to a Western individualistic approach. Because solidarity and connection are so important, emergent AI carries both fear and promise in Korea, for example. Will AI threaten or enhance connection and relationships? We consider this tension below by examining the example of digital resurrection and griefbots, which are growing in popularity in Korea. The importance of connection and relationships extends beyond this life. It may be that AI will become increasingly important in Asian cultures such as Korea through griefbots, since the digital resurrection of the dead offers another way to potentially extend relationships with those who have gone before.
Instead of a narrow focus on what we “can” do in science, scholars of religion are beginning to pose more questions about the perceived meanings of AI and the potential impacts of AI on not only humans but all aspects of life. Chinese and Korean religions and philosophies can help us expand this discourse about AI, especially as AI warnings and calls for restrictions grow in volume. If we are to shift our horizons regarding the questions that we ask about AI, we need to be more intentional about including voices on the margins. If we are to critically understand the special meanings that are being ascribed to AI and achieve greater clarity regarding the potential impacts that AI may have, we need to unmask assumptions related to the often-unconscious deifying and anthropomorphizing of AI.
At the heart of these warnings and urgent cautions are questions about appropriate limits and what is meant by “better”. Religious perspectives are often culturally entwined. Ascriptions can have religious-like qualities. Asian philosophies and religious traditions understand religion and culture as necessarily entwined. This integrated way of understanding the world is often in contrast to a Western understanding that religion and culture can (and usually should) be independent of each other. A belief that religion can be separate from other aspects of culture may make it more difficult to see that religion does seep into the ways that AI is understood and imagined. We contend that it is important to understand how religions may affect perceptions of AI in wider society. Western Christianity imposes implicit understandings regarding the meaning and value of human life on the discourse about AI. Chinese and Korean philosophies and religions can help to illuminate normative ontological assumptions, particularly the assumptions that humans are of greater value than other life and that extreme individualism is desirable and viable. Asian philosophies challenge Western notions of AI, suggesting a more interconnected and non-dualistic approach to understanding AI’s role in society.

3. AI and Chinese Philosophies and Religious Traditions

Chinese philosophical and religious traditions espouse more fluid, changing, and non-dualistic worldviews when compared to Christianity. These traditions are not theistic and do not assume a creator created the world. Thus, the question “Are they playing God?” posed to new technologies and AI does not arise. Scholars who have studied Chinese culture and religions note that people in the West respond to the breakthroughs in robotics and AI with alarm because many are shaped by a dualistic and reductionistic cosmology. Such a cosmology distinguishes between “human and non-human, self and other, the natural and the artificial, the intelligent and the sensuous, the mind and the body, the rational and the emotional, the sentient and the insentient, means and ends, and so on” (Ames 2021, p. 115). In contrast, Chinese traditions see the cosmos as an interwoven whole, humans as related to other beings, and human culture as integral to and not separated from nature. Chinese philosophers and scholars are open to the development of AI and the benefits it will bring because they believe that human nature and the cosmos are constantly evolving and not static. They draw elements from Confucianism, Daoism, and Buddhism to interpret the development and impact of AI and other new technologies. As AI challenges us to think about critical questions such as intelligence, the nature of being human, the creation of new species, and immortality, Chinese traditions can provide resources and serve as dialogical partners. Chinese moral philosophy and religions also contribute to developing frameworks and oversight to ensure that AI innovations align with shared moral imperatives that govern the human world and the planet.
Scholars in China from various fields have engaged in lively discussion of the impact of AI on society. For example, the Berggruen Research Center in Beijing sponsored a five-year project to study the relationships between AI and Chinese philosophy, bringing together Chinese and Western scholars. It published the pioneering book Intelligence and Wisdom: Artificial Intelligence Meets Chinese Philosophers in 2021. The editor, Bing Song, writes that AI and other frontier technologies challenge existing norms at a time when we are rethinking globalization and global values. B. Song (2021, p. 2) surmises, “the disruptive nature of frontier technologies has created ruptures in our habitual thinking patterns… [and] offered a golden opportunity for us to pause and rethink foundational values for the future and for the greater planetary flourishing”.
Scholars of Confucianism, Daoism, and Buddhism argue that Chinese understandings of cosmology, human nature, and society are not incompatible with AI. However, they caution about the impact of AI and the limits of technology. The Book of Changes has long influenced Confucian and Daoist worldviews, which regard the world as ever-changing and non-static. The world is a complex whole, with everything connected organically with everything else. Philosopher Chunsong Gan (2021; Tu 1978) points out that Confucianism does not consider humans as isolated individuals but as a part of the larger whole, extending from the family to the country and the world. Humans are not separated from other animals by biological processes or physiological instincts but by their moral consciousness. While Greek philosophy has a dualistic view about humans and non-humans, and Christianity puts humans at the apex of creation, Confucianism does not make “some ontological claim of human exceptionalism”, as Roger T. Ames (2021, p. 110) points out. Humans are integrally related to their social and ecological environment, and there is no dualistic distinction between nature and culture. Confucians view the cosmos as “natural”, in the sense that it is autogenerative (自然); changes in one thing cause changes in others and vice versa. Ames (2021, p. 124) argues that, just as humans have used horticulture, writing, and toolmaking to improve their lives, AI and other innovations are the latest developments in a long history of human development and flourishing.
Robin R. Wang (2021, p. 79), a scholar of Daoist philosophy, points out that Daoism can support and validate AI’s development because of its emphasis on change: “The central tenet of a Daoist framework holds that unpredictability and change are not only unavoidable but dominate every aspect of life”. When new technology emerges, people are concerned about its impacts and whether it will change their lives. But, if change is seen as a reality of life, we would not be afraid of and balk at technological innovations such as AI. She does not think that AI will ever compete with humans because, in Daoist philosophy, humans are more than information and data, as they are capable of emotional calculations, self-awareness, and connecting with the cosmos. Her hope is that AI development can produce machines that mimic human intelligence “to allow humans to be closer to nature” and align with the movement of Dao. For example, the gathering and processing of personalized health data and biometrics can be useful for understanding one’s well-being and generating actionable information.
While Wang draws from Daoist philosophical concepts, Fei Gai includes popular Daoism in her discussion of AI and machines. AI has the potential to create a machine whose power and intelligence are greater than those of human beings. In popular Daoism, Gai notes that human beings can evolve and aspire to become one of the transcendental, celestial beings (神仙) through Daoist practices. These Daoist celestials have characteristics that modern people have conjured up for powerful AI. First, these celestial beings are immortal, and they overcome human finitude and death. Various Daoist practices, such as alchemy, herbal medicine, and diet, were promoted for human beings to achieve immortality. These celestial beings do not eat or drink and are not restricted by their physical limitations. Moreover, they are in unity or oneness with the Dao. Just as celestial beings can be seen as an evolved human life, AI can be seen as the digitalization of consciousness to transform human beings into a new “AI species” that can live forever in the data stream. Gai (2021, p. 100) writes, “Transformations in science and technology will inevitably prompt a transformation in philosophy too. Immortality is no longer a myth from the perspective of Daoism. If artificial super-intelligence comes into being, then perhaps Daoism’s Celestial Being pedigree will open up to a new taxonomical classification: The Digital Celestials”. Gai’s intention is to show that within popular Daoism there are stories and myths that can accommodate AI innovations and challenge us to rethink new species and immortality. She cautions about how these innovations will impact society, such as human well-being and relationality, job opportunities, and employment.
Apart from these contributors to the Berggruen Research Center’s project, scholars who study Buddhism have commented on the development of AI. For example, Paul Andrew Powell (2005) discusses whether machines can create artificial enlightenment. He argues that enlightenment can only be achieved for the Buddhist when one deconstructs the problematic natures of the “self”, sentience, and consciousness and sees them as illusions. He notes that technological advances may one day create a self-aware, sentient being, which represents a new life form. Even so, he says, this new being cannot achieve enlightenment in the Buddhist sense. Enlightenment requires the awareness of no self, reality as emptiness, and all thought as the by-product of binary dualism. From this perspective, mass data and information supercomputing may not pave the way for Nirvana and may be hindrances.
Since Asian philosophies and religious traditions have more fluid and changing worldviews, scholars can find elements in these traditions that can accommodate the evolution of humanity through the use of AI. Yet, they also caution against potential challenges and dangers brought by AI, such as humans creating a machine they cannot control. There is also a social justice issue regarding the inequity between those with access to AI technology and those without. In many parts of the Global South, people do not have Internet or broadband access and will not be able to benefit from AI. Some scholars are concerned about growing unemployment and unrest when robots replace human labor, and about the new technological “cold war”. But, in his commencement speech in Taiwan, Jensen Huang (2023), the Taiwanese co-founder of Nvidia, a dominant supplier of AI hardware and software, argued that there would be new jobs related to AI and other opportunities that we cannot even imagine yet. While many in the West have pointed to the danger of using AI for surveillance and control, Chinese scholars also point out that surveillance can help to tackle crime and provide security.
As AI and other technologies, such as gene editing, affect humanity as a whole, we need to think of global oversight and regulatory processes that can handle competing claims and moral values. Chinese philosopher Tingyang Zhao (2006) argues that the ancient Chinese concept of tianxia (天下 all-under-heaven) is helpful for imagining the world order. All-under-heaven means the whole world under heaven, equivalent to the term the “universe” or the “world” in Western languages. As the world is the people’s home, all-under-heaven also means the hearts or will of the people. All-under-heaven has a third meaning as a world institution, a utopia of the world-as-one-family. Thus, all-under-heaven is a political philosophy that includes “the geographical world (the earth), the psychological world (the hearts of all people) and the political world (the world institution)” (Zhao 2006, p. 39). All-under-heaven is based on a philosophy rooted in an ontology of relationality and the interconnectedness of all beings. It proposes a “world theory” rather than an “international theory” for dealing with challenges facing humanity, such as AI and technology. Zhao argues that institutions such as the United Nations, based on a system of nations/states, are ill-equipped to handle issues in the age of globality. Instead, we need to develop world institutions that emphasize the inclusion of all and respect for diversity and plurality.
As humanity faces so many critical issues, such as environmental crises, the COVID pandemic, and the unknown future brought about by new technology, Chinese leaders and scholars have used the concept of “a community for a shared future of humankind” (人類命運共同體) to emphasize collaboration to achieve peace, sustainable development, and prosperity. Bu and Xu (2023, p. 109) argue that AI should not be used simply as a tool for international competition but can be employed to enhance international cooperation to benefit all. This requires transnational corporations, governments, and non-governmental organizations to work together to develop better AI and technology, devise reasonable and legal security measures, and determine parameters for safety commitments.

4. AI with Korean Jeong (정)

Echoing the Chinese philosophical approach, which integrates AI into a fluid and interconnected cosmos, the Korean concept of Jeong expands this perspective by emphasizing deep emotional connections and communal bonds. Jeong presents AI not just as a technological tool but also as a crucial element within relational networks. Korean high-tech firms have used AI for some time, and advertisements for their products are broadcast all day long on Korean television networks. “The arrival of Tromm Deep Learning AI marks the beginning of a new era!” “Tailored exclusively for you!”—these catchphrases are highlighted in commercials for LG’s AI-enabled dryer and Samsung’s Bespoke refrigerator, which is equipped with AI that alerts owners to expiration dates and suggests recipes based on what is inside. Owing to the market dominance of these two pioneering Korean firms, AI has seamlessly woven itself into the fabric of everyday life in Korea.
Despite this widespread integration, the 2021 National Public Perception Survey on Artificial Intelligence (H. Lee 2021) reveals a significant gap in knowledge about AI among Koreans: 46.2% of respondents reported having minimal or no understanding of AI, while 40.8% acknowledged having a moderate understanding. Utilizing data from the Massachusetts Institute of Technology, this survey underscores the differing views on AI between Koreans and Americans. Whereas Americans often consider AI in terms of its technological components, such as computers, machinery, and software, Koreans tend to see AI more from a relational perspective, focusing on its capacity to automate tasks and mimic human cognitive functions. Koreans’ relational perspectives on AI tie directly to their apprehensions regarding its use. The findings suggest a Korean preference for deploying AI in supportive roles, enhancing daily activities, caregiving, and administrative functions, alongside a cautious approach toward applying AI in decision-making that requires ethical or value-based judgments, particularly in corporate settings such as recruitment and performance evaluations. In other words, AI has become an object with both sweet Jeong (고운정) and resentful Jeong (미운정) for Koreans.
Jeong is a complex and rich concept in Korean culture, representing deep-seated emotional connections, empathy, and affection among people, as well as with places, objects, and more. It encapsulates various emotions that deepen over time, fostering a strong sense of attachment, devotion, and unity within the community (Choi 2011, pp. 38–44). Understanding Jeong is crucial for grasping the nuances of Korean social interactions and the emphasis on the collective good, lasting relationships, and a natural inclination towards empathy and comprehension. More than mere affection, Jeong acts as the emotional cement that unites people despite differences or disputes. It applies not only to the bonds among family and friends but also extends to ties with acquaintances, coworkers, and even the natural world and non-living things (S. Lee 1994, p. 88). This concept transcends ordinary limits, rooted in a profound commitment and concern for others. A significant characteristic of Jeong is its resilience, enabling relationships to withstand and surmount challenges. Even when one hates someone, there is still Jeong for that person; this is called resentful Jeong, and it allows one to remain in relationship with the other. Bonds formed through Jeong are resilient, underscored by a readiness to overlook flaws and focus on the connection's durability rather than personal shortcomings (Choi et al. 2000). Jeong, thus, fosters a strong sense of communal identity and solidarity, often encapsulated in the Korean concept of "우리" (uri), meaning "we". Here, "we" does not mean the coexistence of I and You as independent individual units. Rather, it indicates that "You and You" and "You and I" are the same reality: "I and you exist not as separate units but as a unified one. At the moment when two individuals abandon their own perspective and put themselves in their partner's shoes, they become one, not a separate two" (S. Lee 1994, p. 88).
Koreans’ relational views of AI, accompanied by their hesitancy and fear about AI, can be understood through the lens of Jeong.
AI first evoked resentful Jeong in Koreans when Sedol Lee, a South Korean who was then the world's strongest Go player, lost a 2016 match against AlphaGo, Google DeepMind's AI program. For weeks afterward, Korean media and major global news outlets reported on and analyzed the human defeat, raising serious concerns about the future of human lives in the age of AI (Moyer 2016). Academic conferences in various disciplines have also taken AI as their subject, often coupled with deep concerns about human roles in an AI-dominant era. A group of Korean journalism scholars who analyzed media coverage of the match characterized the event as a significant step toward the era of humans cohabiting with social AI (Jongsoo Lim et al. 2017). Korean media anthropomorphized AlphaGo, depicting the non-human entity as having a tenacious personality and advanced cognitive skills, cleverly executing the strategies needed in Go. This human-like depiction of AlphaGo, which turned mechanical AI interactions into social engagements, sparked considerable anxiety and fear among many Koreans. There is concern over AI taking over their roles, reflecting the resentful aspect of Jeong, even as AI seamlessly integrates into their lives as an unseen yet comforting presence, embodying the sweeter side of Jeong.
Not long after the whirlwind meeting with AlphaGo, Koreans encountered sweet-Jeong-based AI through a documentary titled "Meeting You", one of the most-watched TV programs in recent years, broadcast on MBC, one of Korea's top three television networks, in February 2020. The film presents an emotional virtual encounter between Jisung Jang, a mother, and her late daughter, Nayeon, who died of leukemia in 2016 when she was just seven years old (Violet Kim 2020). MBC produced the documentary, employing virtual reality (VR) and AI to meticulously craft a lifelike avatar of the daughter over nearly a year. This innovative use of VR and AI enabled the grieving mother to reconnect virtually with her daughter. The documentary's success prompted MBC to release new seasons annually. The latest, Season 4, which aired in February 2024, featured a heartrending episode in which grieving parents had the chance to bid farewell to their 17-year-old son, who had died unexpectedly. Unlike the response to the AlphaGo match, "Meeting You" captured the hearts of the Korean audience with the image of a mother engaging with her daughter's AI avatar in a digitally crafted space. The series has profoundly connected with the Korean public, with almost two-thirds of the nation watching the first season (MBClife 2020).2 This significant engagement is likely attributable to the film's resonance with the deeply ingrained Korean value of sweet Jeong, highlighting the continuation of the Jeong relationship beyond the grave. Unlike many AI-based approaches to "resurrecting" the deceased, such as HereAfter.AI, which enables users to create interactive digital avatars from personal stories, memories, thoughts, and feelings for future interactions with the deceased, "Meeting You" specifically aims to assist those grieving a loss without a proper farewell, helping them achieve closure.
Culturally, this means that mourners' resolution with their loved ones is rooted in the concept of sweet Jeong, which emphasizes affectionate bonds, rather than in resentful Jeong, which could arise from a lack of closure.
"Meeting You" delves into the profound theological and ethical implications of utilizing AI-powered VR for grappling with grief and of digitally resurrecting the departed. The virtual interaction offered the grieving mother closure, yet it sparked significant theological debate over the authenticity of such virtual encounters. In their responses, Korean Christian theologians have primarily concentrated on the ethical and moral considerations surrounding AI and similar technologies, while recognizing their benefits. Their research seeks to understand the fundamental nature of AI, assess it through theological and ethical lenses, and offer recommendations for its ethical use based on Christian theological principles. Key concerns include the devaluation of life's meaning through AI, the exacerbation of social inequalities due to the prohibitive costs of accessing technology, the need for theologians to reaffirm the Imago Dei in humanity, the impossibility of AI having personality, the incapability of AI to have a relationship with God as humans do, the irreplaceability of divine transcendence by AI, the church's duty to uphold and declare divine wisdom, and AI as a global risk factor that the church and scientific community should address together.
Regarding such trends, Yongsup Song (Y. Song 2022), a Christian ethicist who uses postcolonial discourse as his theological framework, argues that, whether we resist it or not, the era of AI is already upon us; thus, we should focus on pursuing the constructive coexistence of AI and humans. Song emphasizes the importance of cultivating a global consciousness that values religious, cultural, and regional differences for the harmonious integration of AI into human society. He observes that current discussions in the West, as well as the discourse of many Korean Christian theologians that relies on the individualistic theological frameworks of Europe and North America, explicitly and implicitly draw on Judeo-Christian ethical and religious principles when addressing AI and religion. Such an approach risks imposing a new form of colonialism in non-Western or religiously diverse contexts, such as Korea. For example, after analyzing Western scholars' research trends on religion and AI, Song argues that it would seem natural for Christians to teach Agape, the core Christian value, interpreted as self-sacrificial love, in the development of semi-autonomous bots (including griefbots) that can express empathy. However, as postcolonial theologians like Anne Joh (2010, p. 180) note, Agape sometimes advocates one-sided sacrifice. Thus, Song proposes that AI ethics grounded in Jeong, a Korean cultural sensibility that predates the arrival of Christianity in Korea, could expand upon the Western theological perspective, as Jeong maintains personal agency even in less equitable relationships.
Y. Song (2022, pp. 234–38) further argues that Jeong, by enriching Agape, could serve as a globally applicable ethical model and suggests the concepts of “AI with Jeong” and “AI with abundant Jeong”. “AI with Jeong” refers to AI that has developed a bond or connection with its users or human counterparts over time, akin to the deep relational bonds formed through prolonged interaction, emphasizing AI’s ability to recognize and adapt to humans’ emotional states and needs. This type of AI is characterized by its long-term interactions with humans, through which it accumulates experiences and emotions, leading to an attachment or loyalty from AI toward its human users. “AI with abundant Jeong” describes AI that is inherently designed to be empathetic, compassionate, and capable of understanding and responding to human emotions effectively from the outset. This AI is built with an abundance of Jeong, suggesting that it is not just responsive but also proactive in its interactions. It offers humans support, empathy, and companionship by recognizing their dignity and value regardless of the situation.
The Korean concept of Jeong offers a transformative perspective on AI, emphasizing emotional connections, community, and ethics. It presents an alternative to the dominant technological focus, advocating for AI that enhances human relationships and addresses ethical concerns. Jeong encourages a balance between technological advancement and preserving human dignity, suggesting a model for global AI ethics rooted in compassion and communal values. As the world navigates the challenges of AI integration, Jeong’s emphasis on empathy and relationality provides valuable insights for developing technology that truly serves humanity, ensuring a future where AI supports more profound social and emotional well-being.

5. Conclusions

Chinese philosophical and religious traditions espouse a fluid, changing, and non-dualistic worldview. This Asian worldview assumes indistinct borders between religions and philosophies, humans and other life, machines and human connection, and all else that can be encountered in life. It invites us to consider AI as neither simply friend nor foe. In this essay we have asked how these Asian traditions might respond to AI and its challenges for the future.
To understand how AI might be seen from an East Asian perspective, we first identified some of the assumptions imposed on AI from an implicit and explicit Western Christian perspective. Western theological concerns have tended to focus on the danger of crossing the line and acting as God as we create AI. Technologies have been seen as separate from humans and meant for use by humans. Understood as separate things, AI has been imbued with apocalyptic qualities and feared as a potentially demonic, sentient being that wishes to defeat humanity. From a Western perspective, the anthropomorphizing of AI has included the projection of territorialism and fragmentation.
But, what if we were to understand AI from a more relational perspective, as do many Koreans? If AI has both sweet Jeong (고운정) and resentful Jeong (미운정), AI is understood as something with potential harms and benefits and something that is in relationship with us. The refusal to see AI as either all good or all bad may help to open us up to how AI is entwined with life. Instead of ascribing demonic or salvific qualities to AI, AI is perceived as embedded in a constantly unfolding and deeply interconnected universe.
This perception does not diminish the need to be prudent regarding AI development. Asian religions and traditions understand that there must be limits on technology. These limits are grounded in an ontology of relationality and the interconnectedness of all things, not in fear or disconnection from that which is different. If the starting point is the assumption that everything is organic and interconnected, the discourse at the intersection of religion and AI changes from the normative Western Christian discourse that paradoxically assumes that everything is like me and that everything is separate from me.
If we were to embrace unpredictability as an expected and even desirable feature of life, as we see in Daoism for example, AI becomes only one more manifestation of that unpredictability. Even if Artificial General Intelligence were to evolve, this would not necessarily be a discontinuity or undesirable from an Asian perspective. AI will continue to change us. Asian religions and philosophies suggest that all life is always changing and that we cannot always control those changes. Dynamism and uncertainty are features intrinsic to life. There is no need to fear the uncertainty, but it is incumbent on us to make prudent choices informed by valuing all life, not just human life.

Author Contributions

Conceptualization, T.J.T., P.L.K. and B.L.; methodology, T.J.T., P.L.K. and B.L.; formal analysis, T.J.T., P.L.K. and B.L.; investigation, T.J.T., P.L.K. and B.L.; resources, T.J.T., P.L.K. and B.L.; writing—original draft preparation, T.J.T., P.L.K. and B.L.; writing—review and editing, T.J.T., P.L.K. and B.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

The authors gratefully acknowledge the research assistance of graduate student SunGu Kwag on AI and Chinese and Korean traditions.

Conflicts of Interest

The authors declare no conflicts of interest.

Notes

1. e.g., for the United States, see https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/ (accessed on 29 January 2024). In Canada, the House of Commons passed Bill C-27, the Digital Charter Implementation Act, on second reading in 2023. This omnibus bill includes the Artificial Intelligence and Data Act (AIDA). AIDA would apply to the safety and human rights issues relevant to "high-impact AI systems" and come into effect no sooner than 2025. However, there remain many concerns about AIDA's scope and potential effectiveness (Parliament of Canada, https://www.parl.ca/DocumentViewer/en/44-1/bill/C-27/first-reading, accessed on 29 January 2024).
2. The documentary's clip on YouTube has almost 36 million views: MBClife (2020, February 6). Mother meets her deceased daughter through VR technology. YouTube. https://youtu.be/uflTK8c4w0c?si=E-SsClPk_MYZC6v9 (accessed on 7 February 2024).

References

1. Ames, Roger T. 2021. Natural robots: Locating 'NI' within the Yijing cosmology. In Intelligence and Wisdom: Artificial Intelligence Meets Chinese Philosophers. Edited by Bing Song. Beijing: CITIC Press Corporation, pp. 109–29.
2. Bu, Yanjun 部彦君, and Kaiyi Xu 許開軼. 2023. 重塑與介入:人工智能技術對國際權力结構的影响作用探析 [Investigation on artificial intelligence technology's impact on international power structure]. 世界經濟與政治論壇 Forum of World Economics & Politics 1: 86–111.
3. Choi, Sang Chin 최상진. 2011. 한국인의 심리학 [Psychology of the Korean People]. Seoul: Hakjisa.
4. Choi, Sang Chin 최상진, Ji Young Kim 김지영, and Kibum Kim 김기범. 2000. 정 (미운 정 고운 정)의 심리적 구조: 행위 및 기능 간의 구조적 관계 분석 [The structural relationship among Jeong, and its actions and functions]. 한국심리학회지: 사회 및 성격 Korean Journal of Social and Personality Psychology 14: 203–22.
5. Depounti, Iliana, Paula Saukko, and Simone Natale. 2023. Ideal technologies, ideal women: AI and gender imaginaries in Redditors' discussions on the Replika bot girlfriend. Media, Culture & Society 45: 720–36.
6. Future of Life Institute. 2023. Pause Giant AI Experiments. March 22. Available online: https://futureoflife.org/open-letter/pause-giant-ai-experiments/ (accessed on 29 January 2024).
7. Gai, Fei. 2021. When artificial intelligence meets Daoism. In Intelligence and Wisdom: Artificial Intelligence Meets Chinese Philosophers. Edited by Bing Song. Beijing: CITIC Press Corporation, pp. 83–100.
8. Gan, Chunsong. 2021. Artificial intelligence, emotion, and order: A Confucian perspective. In Intelligence and Wisdom: Artificial Intelligence Meets Chinese Philosophers. Edited by Bing Song. Beijing: CITIC Press Corporation, pp. 15–31.
9. Gordon, Cindy. 2024. AI Is Accelerating the Loss of Our Scarcest Natural Resource: Water. Forbes. February 25. Available online: https://www.forbes.com/sites/cindygordon/2024/02/25/ai-is-accelerating-the-loss-of-our-scarcest-natural-resource-water/?sh=25eeecf27c06 (accessed on 25 February 2024).
10. Herzfeld, Noreen. 2023. The Artifice of Intelligence: Divine and Human Relationship in a Robotic Age. Minneapolis: Fortress Press.
11. Huang, Jensen 黃仁勳. 2023. 黃仁勳台大畢典致詞中文翻譯, 勉勵畢業生跑在時代前面 [Jensen Huang commencement speech at Taiwan University encouraging students to run at the forefront]. YouTube. Available online: https://www.youtube.com/watch?v=y8OI53Xylfo (accessed on 12 March 2024).
12. Joh, Wonhee A. 2010. Love's multiplicity: Jeong and Spivak's notes toward planetary love. In Planetary Loves: Spivak, Postcoloniality, and Theology. Edited by Stephen D. Moore and Mayra Rivera. New York: Fordham University Press, pp. 168–90.
13. Kim, Violet. 2020. Virtual Reality, Real Grief. Slate. March 27. Available online: https://slate.com/technology/2020/05/meeting-you-virtual-reality-documentary-mbc.html (accessed on 8 February 2024).
14. Lee, Hana. 2021. The recent survey conducted by Seoul National University revealed that 46.2% of the population responded they have 'little to no knowledge' about AI. AI Times. December 15. Available online: https://www.aitimes.com/news/articleView.html?idxno=141967#:~:text=%272021%20%EC%9D%B8%EA%B3%B5%EC%A7%80%EB%8A%A5%EA%B3%BC%20%EA%B3%A0%EB%A0%B9%EC%82%AC%ED%9A%8C%20%EB%8C%80%EA%B5%AD%EB%AF%BC%20%EC%9D%B8%EC%8B%9D%EC%A1%B0%EC%82%AC%27%20%EA%B2%B0%EA%B3%BC%20%EB%B0%9C%ED%91%9C&text=%EA%B7%B8%20%EA%B2%B0%EA%B3%BC%2C%20%EC%97%AC%EC%84%B1%EB%B3%B4%EB%8B%A8%20%EB%82%A8%EC%84%B1,%EC%95%8C%EA%B3%A0%20%EC%9E%88%EB%8B%A4%27%EA%B3%A0%20%EC%9D%91%EB%8B%B5%ED%96%88%EB%8B%A4. (accessed on 15 January 2024).
15. Lee, Soo Won. 1994. The Cheong [Jeong] space: A zone of non-exchange in Korean human relationships. In Psychology of the Korean People: Collectivism and Individualism. Edited by Gene Yoon and Sang Chin Choi. Seoul: Dong-A Publishing Co., pp. 85–99.
16. Lim, Jongsoo 임종수, Minju Shin 신민주, Hunbock Moon 문훈복, Jumee Yoon 윤주미, Taeyoung Jeong 정태영, Yeonjoo Lee 이연주, and Seunghyun Yu 유승현. 2017. AI 로봇 의인화 연구: '알파고' 보도의 의미네트워크 분석 [AI robot anthropomorphism study: A semantic network analysis of AlphaGo press coverage]. 한국언론학보 Korean Journal of Journalism & Communication Studies 61: 113–43.
17. MBClife. 2020. Mother Meets Her Deceased Daughter through VR Technology. YouTube. February 6. Available online: https://youtu.be/uflTK8c4w0c?si=E-SsClPk_MYZC6v9 (accessed on 10 December 2023).
18. Mercer, Calvin, and Tracy J. Trothen. 2021. Religion and the Technological Future: An Introduction to Biohacking, A.I., and Transhumanism. Cham: Palgrave MacMillan.
19. Moyer, Christopher. 2016. How Google's AlphaGo Beat a Go World Champion. The Atlantic. March 28. Available online: https://www.theatlantic.com/technology/archive/2016/03/the-invisible-opponent/475611/ (accessed on 9 March 2024).
20. Pentina, Iryna, Tyler Hancock, and Tianling Xie. 2023. Exploring relationship development with social chatbots: A mixed-method study of Replika. Computers in Human Behavior 140: 107600.
21. Peters, Ted. 2018. Playing God with Frankenstein. Theology and Science 16: 145–50.
22. Powell, Paul Andrew. 2005. On the conceivability of artificially created enlightenment. Buddhist-Christian Studies 25: 123–32.
23. Singler, Beth. 2020. "Blessed by the algorithm": Theistic conceptions of artificial intelligence in online discourse. AI & Society 35: 945–55.
24. Skjuve, Marita, Asbjørn Følstad, Knut Inge Fostervold, and Petter Bae Brandtzaeg. 2021. My Chatbot Companion—A Study of Human-Chatbot Relationships. International Journal of Human-Computer Studies 149: 102601.
25. Song, Bing. 2021. Introduction. In Intelligence and Wisdom: Artificial Intelligence Meets Chinese Philosophers. Edited by Bing Song. Beijing: CITIC Press Corporation, pp. 1–14.
26. Song, Yong Sup. 2022. Artificial intelligence felt of Jeong and artificial intelligence full of Jeong: Jeong as a local value for the development of AI for symbiosis with humans. Christian Social Ethics 54: 217–43.
27. Takahashi, Dean. 2019. The inspiring possibilities and sobering realities of making virtual beings. VentureBeat. Available online: https://venturebeat.com/2019/07/26/the-deanbeat-the-inspiring-possibilities-and-sobering-realities-of-making-virtual-beings/ (accessed on 2 February 2022).
28. Taves, Ann. 2009. Religious Experience Reconsidered: A Building Blocks Approach to the Study of Religion and Other Special Things. Princeton: Princeton University Press.
29. Trothen, Tracy J. 2022. Replika: Spiritual enhancement technology? Religions 13: 275.
30. Trothen, Tracy J., and Calvin Mercer. 2024. Neither God nor human: Encountering AGI and superintelligence as "machine-Other". In AI and IA: Utopia or Extinction? 2nd ed. Edited by Ted Peters. Adelaide: ATF Press. Minneapolis: Fortress Press. chp. 7. In press.
31. Tu, Weiming. 1978. Humanity and Self-Cultivation: Essays in Confucian Thought. Singapore: Asian Humanities Press.
32. Vallor, Shannon. 2016. Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. New York: Oxford University Press.
33. Wang, Robin R. 2021. Can a machine flow like Dao? The Daoist philosophy on artificial intelligence. In Intelligence and Wisdom: Artificial Intelligence Meets Chinese Philosophers. Edited by Bing Song. Beijing: CITIC Press Corporation, pp. 65–81.
34. Wangmo, Tenzin, Mirjam Lipps, Reto W. Kressig, and Marcello Ienca. 2019. Ethical Concerns with the Use of Intelligent Assistive Technology: Findings from a Qualitative Study with Professional Stakeholders. BMC Medical Ethics 20: 98–109.
35. Weisse, Allen B. 2011. Cardiac surgery: A century of progress. Texas Heart Institute Journal 38: 486–90.
36. Weizenbaum, Joseph. 1966. Eliza: A computer program for the study of natural language communication between man and machine. Communications of the ACM 9: 36–45.
37. Weizenbaum, Joseph. 1976. Computer Power and Human Reason: From Judgment to Calculation. New York: W.H. Freeman and Company.
38. Zhao, Tingyang. 2006. Rethinking empire from a Chinese concept 'All-under-Heaven' (Tian-xia 天下). Social Identities 12: 29–41.

Share and Cite

Trothen, T.J.; Kwok, P.L.; Lee, B. AI and East Asian Philosophical and Religious Traditions: Relationality and Fluidity. Religions 2024, 15, 593. https://doi.org/10.3390/rel15050593
