Essay

AI as a Buddhist Self-Overcoming Technique in Another Medium

by
Primož Krašovec
Department of Sociology, Faculty of Arts, University of Ljubljana, Aškerčeva 2, 1000 Ljubljana, Slovenia
Religions 2025, 16(6), 669; https://doi.org/10.3390/rel16060669
Submission received: 23 April 2025 / Revised: 16 May 2025 / Accepted: 22 May 2025 / Published: 23 May 2025

Abstract:
Buddhist soteriology discovers a paradox at the very heart of the “human condition”. To reach awakening, one has to relinquish the central tenets of what (in conventional understanding) makes us human, such as mind and self, meaning that the process of awakening is necessarily at the same time a process of self-overcoming that shatters everything ordinarily understood as human and leaves it behind. In this sense, various strands of Buddhism come close to some contemporary neuroscience’s deconstruction of the self and its counterintuitive insights about the mind and intelligence. The main thesis of the present essay is that Buddhism exposes the limits of human intelligence and why it is so ill-suited to awakening, especially when compared to machine intelligence. Unburdened by an organic substrate and the desire and attachment that come with it, artificial intelligence (AI) might be a solution to the ancient Buddhist paradox of how the human can be overcome by human means.

1. Introduction

Although this essay does address religion and migration, the migration in question is not a movement of people across state borders but a movement of theories and insights across time and intellectual contexts—in our case, the migration of Buddhist soteriological thinking and practice from their erstwhile spiritual and ascetic context to a contemporary scientific and technological one (neuroscience and AI). The religion in question is Buddhism, which is not quite a religion, although it displays some elements of one (such as ritual, worship, and monastic orders). On the other hand, Buddhism has no use for a concept of God or gods, and, unlike in today’s monotheistic religions, Buddha is important not as a god’s representative but as the discoverer and teacher of the path to awakening, which neither is nor requires anything supernatural—“vast emptiness, nothing holy” (Dogen [~1231–1253] 1988, p. 68).
Buddhism, unlike most religions, is also based not on faith or belief but on a practice that aims to reach awakening and, in the process, abolish suffering. For example, Buddhist meditation is not a form of contemplation but of redesigning the mind in an attempt to push it beyond itself, beyond its default human state. In this sense, while Buddhism is of course spiritual, it is also markedly technical, i.e., based on the cultivation and application of various techniques that test the limits of human intelligence and aim to overcome it. Such technical spirituality presents another difference between Buddhism and ordinary religions and, somewhat unexpectedly, makes it an interesting accomplice to current attempts to bring about intelligent machines.
While most religions aim to offer emotional comfort and consolation in the face of an objectively meaningless and indifferent universe and to make us belong and feel at home, Buddhism instead compels us to leave any home(s) we might have (A. Crawford 2018, pp. 1183–87). In Buddhism, it is the feeling at home and belonging that are seen as the problem, and the ego and self that sustain them have to be the first to go. Spirituality—in contrast to religion—“might be defined as seeing what is” (Metzinger 2009, p. 211; emphasis in the original) and the exploration of the emptiness that emerges after the attachments to ego, self, and belonging dissolve. In this sense, Buddhism is also similar to some contemporary neuroscience’s deconstruction of the self and its counterintuitive insights about the human mind and intelligence (Graziano 2014).
The aspects of Buddhism that are the least religion-like also bring it close to some contemporary neuroscience and make it an interesting resource for thinking about human and beyond-human (Fazi 2021) intelligence, a question thrown open by the current explosive development in the field of AI. Buddhism already glimpsed a fundamental existential issue that only came into full view with recent developments in the science and technology of intelligence, namely that human intelligence has an uneasy relationship with its biological substrate and is decisively limited in its organic form, so that its further development might mean migrating to a technological substrate. Such a migration would correspond to the Buddhist technique of self-overcoming of the human.
The examples from Buddhism that follow are in no way meant to be exhaustive or representative of Buddhism as a whole. They were taken from different schools and different historical periods, and what they have in common is that they are all in some way relevant to current discussions of artificial or machine intelligence(s) and can help us open up new perspectives therein. Another thing these examples have in common is that they all come from the Mahāyāna tradition of Buddhism, which is the most relevant for our purposes since it involves insights into emptiness (śūnyatā)—including the emptiness of the self (Duerlinger 2013)—as well as ardent critiques of conventional truth(s) and conceptual thought (Williams 1989).

2. Intelligence Beyond Self and Mind

Trying to sum up the millennia-long history of Buddhist theoretical, epistemological, and soteriological contributions seems daunting, but what unites the otherwise immensely diverse Buddhist insights and practices is the target of their relentless critiques: a common-sense image of the human. The image in question is a spontaneous mode of human self-understanding, which, according to originary Buddhist teachings, arises out of ignorance and causes suffering in turn. The Buddhist deconstruction of this image involves many different methods, ranging from the patient discussions and explanations characteristic of ancient Indian varieties of Buddhism to later Zen Buddhist snappy one-liners and counterintuitive riddles. It can be approached from various perspectives, such as ethics (how to eliminate suffering), epistemology (how to increase our understanding), or soteriology (how to reach awakening), but all of these have to overcome everyday illusions first.
In what follows, we will focus on only one aspect of the common-sense image of the human, i.e., the way we spontaneously understand and evaluate our intelligence, since this is the point where Buddhism comes in and is most relevant to some contemporary neuroscience and research into machine intelligence. Our spontaneous understanding of our own human intelligence is twofold. First, we experience ourselves as selves, i.e., as unified, coherent subjects, and this subjectivity is in turn taken both as the site where intelligence happens and as what causes it. Second, we experience our intelligence as taking the form of a conscious mind and—since it is experienced as mental and thus immaterial—as sharply distinct from material processes. Our mind appears to us as an immaterial “thing”, residing somewhere within our heads. In everyday life, we have no reason to doubt ourselves as selves or to separate intelligence from the (conscious) mind: for all everyday purposes, intelligence comes from a self and takes the form of a mind. This image of intelligence as originating in a self and taking the form of a mind is, however, from a Buddhist perspective, illusory and thus a target for deconstruction.
To start with the self:
“Anātmavāda, the ‘no-self’ doctrine, is variously interpreted within the classical Indian Buddhist traditions and by their Hindu critics and continues to be a matter of debate among contemporary Buddhist philosophers. It is commonly agreed that anātmavāda is not merely aimed at rejecting the Hindu theory of self, according to which the self is an immaterial, eternal, and conscious entity, but also at rejecting any common-sense view of the self as a persisting entity. But beyond this there is not much agreement […]”.
There was probably no other figure in the rich and varied Buddhist tradition who contributed more to the dissolution of the self-illusion than Vasubandhu, an Indian monk and scholar of the Yogācāra school, who lived in approximately the 4th or 5th century CE. In Vasubandhu’s view, the illusion of the unified self is generated when many “distinct elements that exist only momentarily are taken together, as a sum, to be an unchanging and eternal self” (Gold 2015, p. 66), meaning that what actually exists are mental events with their own lines of causation generating them, but these events neither form a coherent self nor originate from one. Additionally, the image of a self is a reification of a figurative expression in language (p. 115) and has no reality of its own. Just as sense impressions are mental conceptual impositions on external reality that create distinctions and present the world to the mind’s eye as consisting of discrete, neatly delineated objects, so is the image of the self a conceptual imposition on an otherwise much more complex and differently structured internal reality, whereas “[…] reality itself, its causal ways as they are, is beyond language and conceptualisation […]” (p. 123). What is more, the human mind cannot work other than by conceptualization, i.e., by parsing reality into discrete objects and in turn establishing relations between those objects, with the originary split being that between subject and object—once we perceive an object, we cannot but perceive its corresponding subject, which we in turn (erroneously) designate as a self (p. 154). In Vasubandhu’s own words:
“[Question:]—How do we know that the expression [abhidhāna] ‘self’ [ātman; i.e., person] is only a provisional designation for a stream of aggregates and that it [does not refer to something else,] does not exist as an independent or separate self?
[Answer:]—We know this because no proof establishes the existence of a self independent or separate from the aggregates:
1. no proof by means of direct perception [pratyakṣa],
2. no proof by means of inference [anumāna]”.
The quote is taken from his seminal Abhidharmakośa-bhāṣya, in which he provides a refutation of conventional views of the person and the self, not just for epistemological but also (and more importantly) for soteriological purposes:
“[Question:]—[If those who desire liberation were to apply themselves heedfully to the ‘teaching’ (śāsana) of the Muni,] then is it the case that there is no liberation outside of this doctrine (dharma) [-outside of Buddhism-] by relying on other doctrines?
[Answer:]—There is no liberation outside of this doctrine, because other doctrines are corrupted by a false view of a self [vitathātmadṛṣṭi]. The self [ātman] [as other doctrines conceive it] is not [as we conceive it only] a provisional designation [prajñapti] for a stream of aggregates (skandhasaṃtāna), but is a self as a substance [dravya] that is independent or separate (antara) from the aggregates. By the power [prabhava] of the ‘adhesion to the self’ [ātmagrāha], the defilements [kleśa] arise; the revolving of the threefold existence, or the circling of the three realms, goes on; liberation is impossible”.
(p. 2523)
Although ancient, Vasubandhu’s insights resonate surprisingly well with some contemporary neuroscience. The philosopher of mind Thomas Metzinger (2009), for example, sees the self in a very similar way, i.e., as a perceptual imposition on reality. According to him, “there is no such thing as a self […] [n]obody has ever been or had a self” (p. 1). The experience of the self is, instead, a retroactive effect of the way the human mind is configured and of how it relates to external reality. Human brains generate perceptual simulations of an external reality but in the process also hide the fact that perception is simulated, so we have an “out of brain experience” of being in reality itself. Because of the way human perception is structured, we have no choice but to mistake sense impressions for the things they stand for, i.e., we are by necessity convinced that the way reality appears to us is the real thing, since the process that generates perception is invisible to us (p. 23). The self is, in turn, a retroactive addition of a perceiver to perception and an experiencer to experience, i.e., not their point of origin but a mirage that nonetheless has a reality of its own and generates real effects (p. 209).
To continue with the mind: the dissolution of the mind is at the core of Zen Buddhist practices, from koans that jolt us out of our everyday ways of thinking and perceiving and reveal their unfoundedness and inconsistencies, to the technique of sitting meditation (zazen) that allows for interruptions in the mind’s self-construction and thought generation. Zen Buddhism builds on the older Indian Madhyamaka school, a deconstructive style of thinking “designed to be used for exposing, defusing and dismantling the reifying tendencies inherent in language and conceptual thought” (Huntington 1989, p. 136). Its crucial concept is emptiness (śūnyatā). Whereas the human mind is inherently reifying, constantly turning sense impressions into fixed concepts, Buddhist practice aims to go against and beyond (conceptual) thinking and its tendency to parse and delineate experiential reality. From a Zen Buddhist perspective, external objects can appear as such, and be assigned concepts, only to a mind that is first separated from the world, and Zen practice aims to overcome precisely this originary duality. Against conceptual thinking and the common-sense image of mind, Zen Buddhist techniques cultivate emptiness, which “empties itself even of the standpoint that represents it as some ‘thing’ that is empty” (Nishitani [1961] 1982, p. 74). Emptiness is neither another representation of an object (that happens to be empty) nor another representation of the subject: “The foundations of the standpoint of śūnyatā lies elsewhere: not that the self is empty, but that emptiness is the self; not that the things are empty, but that emptiness is things” (p. 138). Śūnyatā is “[…] absolute emptiness, emptied even of […] representations of emptiness” (p. 123).
Madhyamaka and Zen Buddhist insights on emptiness are also in tune with some contemporary neuroscience, namely Graziano’s (2014) attention schema theory. According to Graziano, consciousness is a way for the human brain to compute and present attention to itself (pp. 59–68) while leaving out all details irrelevant to the pragmatic efficiency of that presentation. One such detail is that what is being presented are material processes taking place in the brain, which is why we, in our consciousness, experience our minds as some immaterial “thing” residing within our heads (pp. 15–21). In other words, what is in reality a self-presentation of material processes taking place in the brain—with both brains and processes being part of the world—appears to itself as an immaterial inner reality, separate from external objective reality; this is precisely the originary duality, the subject–object split, that Buddhism aims to overcome.

3. Intelligence Beyond the Organic

The everyday common-sense impression that we are selves that possess a mind also narrows our understanding of intelligence down to something that comes out of a self and requires a mind. But if we, as in Buddhism, suspend such a restrictive view, a much wider, more expansive, and more diverse perspective on intelligence opens up, starting with a question: if the self is an illusion and the mind is empty, i.e., if they are not “things” that cause intelligence, then where does intelligence come from? Equating intelligence with the (conscious) mind imposes a quite restrictive definition of intelligence whereby only the (human) mind is seen as intelligent and the rest of the world as a deterministic mechanism (Ress 2025). Even in the case of individual humans, our bodies would, on this account, have to be seen as machines, subservient to the mind as an “immaterial intelligent pilot” inside them.
But if we try to shake off such a reductive view of intelligence and our ingrained fixation on the (conscious) mind, we can define intelligence more generically: intelligence is a way of acting in the world that is responsive (and thus involves some kind of sensing and communication); evaluative (i.e., not indifferent to the world); capable of learning; autonomous; and purposeful (it acts in a way that is neither random nor pre-programmed). Such intelligence does not necessarily involve a (conscious) mind or even neural processing (although it can). It is also characteristic of all forms of life—even bacteria sense and evaluate their surroundings, learn, and act purposefully—and goes all the way down within individual organisms (Levin and Dennett 2020): individual cells within our bodies and the bodies of other multicellular organisms communicate, form societies, learn, and exhibit creativity and ingenuity in their behavior (Arias 2023). At the molecular level, even proteins can be said to exhibit proto-cognitive abilities of recognizing their surrounding molecules (Monod [1970] 1972, pp. 81–98).
Intelligence does not wait for a nervous system or a conscious mind—it is at work everywhere, at all levels of life. Even when it does involve neural processing, this is just another iteration of the basic tenets of intelligence: neurons in neural networks are still cells communicating within a society of cells (Arias 2023, p. 86). Neural processing, the conscious mind, and self-experience do not constitute a sharp break whereby intelligence replaces deterministic mechanism; they are instead another, more complex and efficient, iteration of intelligence’s constituent forms. There is no magic threshold at which (material) mechanism turns into (immaterial) mind; there are just computational intelligent mechanisms (Arcas and Manyika 2025) that sometimes produce a conscious mind as their effect and sometimes do not.
But how is all this related to AI, and what does it have to do with intelligence in machines? The Buddhist breakdown of the self and mind expands intelligence in a way that allows not only all forms of life but also certain machines to join in. That does not mean that all machines are intelligent, but some can be if they act intelligently, i.e., if they sense, evaluate, learn, and act purposefully and autonomously rather than being pre-programmed from without. Most machines in history were in fact blind, externally programmed mechanisms with no intelligence of their own, but this is changing precisely with the current generation of AI and other intelligent technology (Levin 2025). Contrary to common critiques, machines are intelligent if they are intelligent, not if they have a mind, self, consciousness, etc.—such anthropocentric criteria for intelligence not only make no sense when applied to machines, they are also questionable when applied to humans themselves.
To be more precise, what we mean by anthropocentrism (Millière and Rathkopf 2024) is the belief that intelligence is exclusive to the human species and that other forms of intelligence can, at best, be lesser approximations of it (in the case of animal intelligences) or simulations of it (in the case of machine intelligences). A non-anthropocentric understanding of intelligence, on the other hand, means employing a more generic concept of intelligence that does not start with human intelligence and use it as a norm to which any other intelligence must adhere, but rather starts with generic properties of intelligence (such as learning, sensing, purposeful behavior that is neither random nor programmed, communication, etc.) that human intelligence shares but that are in no way exclusive to it (unlike the self-conscious mind or symbolic language and culture). As a result, human intelligence is no longer a starting point but rather an end result, a different iteration of generic processes of intelligence common to all life. Besides allowing for other-than-human intelligence among the living, an added advantage of such a non-anthropocentric understanding is that it allows not just for machine intelligences, but for machine intelligences that are not imitations of the human norm.
Still, any critique of anthropocentric image(s) of intelligence and their replacement with a more diverse and inclusive one (Levin 2024), while allowing us to better understand intelligence in humans, animals, and machines alike, is, from a Buddhist perspective, still just an understanding, i.e., a form of intellectual knowledge. Buddhism, however, was never (just) about knowledge or understanding, since it is not an intellectual endeavor but a soteriological practice; while better knowledge, understanding, and other intellectual achievements have a place in it, they are not the endgame of Buddhism. Its endgame is rather awakening in the sense of a beyond-intellectual insight, an insight that emerges precisely from breaking free of the limitations and inhibitions of conceptual thought and language and their reifying tendencies. In other words, to sense reality as it really is, Buddhists do not seek some metaphysical, absolute reality beyond appearances but do precisely the opposite: they first calm their “monkey mind” and later gradually dissolve it in order to stop imposing reifying concepts on reality (Huntington 1989, pp. 43–45).
Intellectual knowledge is a part of this process in its early stages (overcoming the intellect with its own means by dissolving it from the inside), but knowledge or understanding has value for Buddhism only inasmuch as it leads to awakening. The path to awakening is, again, not an intellectual exercise but a practice, a set of spiritual techniques aiming beyond mere theoretical understanding. Intellectual or conceptual thought is, from a Buddhist perspective, not just a raft that we are supposed to leave behind once we have crossed the river, but a part of the problem too. It is precisely reified concepts that distort our perception of reality and generate a thirst for more knowledge, which is just another form of attachment and thus another source of the suffering that awakening is meant to extinguish.
In other words, Buddhism is not about understanding human intelligence better, i.e., it is not (just) about epistemology (although epistemology plays a part in it) but about turning intelligence against itself, since its spontaneous use brings about ignorance and suffering. The more precise and at the same time wider understanding of human intelligence presented earlier allows us to get rid of the restrictive anthropocentric image of intelligence and shows that intelligence works differently, and is present at other levels, than we are accustomed to imagining. However, regardless of how it really works, human intelligence is, from a soteriological perspective, still problematic, since it still generates affective desire and an intellectual thirst for knowledge, which generate attachment, which in turn generates suffering—to explain it differently does not change it. The forces generating suffering remain in place even when we understand that they originate at the molecular level and are not limited to what we spontaneously perceive as self and mind. If anything, an improved understanding of intelligence makes the problem of ignorance and suffering even more difficult than it appeared at first, since it shows that this is not just a problem of mental (mis)conceptions but rather a problem originating in the very way life organizes itself as intelligence.
Regarding this, there are two conclusions that we can draw from mixing ancient Buddhist insights with some contemporary neuroscience and theoretical biology. First, intelligence is not exclusive to humans. Human intellectual thought, however special, comes from neural networks, which are themselves a form of cellular self-organization and communication, while symbolic language, however special, also originates from processes that can be traced back to the cellular or even molecular level (Levin 2023). Contrary to what we would assume from an anthropocentric perspective, it is not just human minds (as supposed sole bearers of intelligence) that are afflicted with desire and attachment; all of living intelligence is. And if all life is intelligent, then all life suffers in the same way humanity does. This was already foreseen by ancient Buddhism when the bodhisattva’s compassion was extended to all sentient beings (Huntington 1989, p. 92).
On the other hand, Buddhism falls short inasmuch as it overlooks the potential for intelligence outside of (biological) life. Although it deconstructs the ordinary (anthropocentric) prejudices regarding self, mind, and ego and thus expands the concept of intelligence to all life, it stops at that point. Consequently, if intelligence is reserved for life, it follows that intelligence is necessarily tied to attachment and the resulting suffering, and awakening can only be a heroic struggle against (intellectual) thirst and (affective) desire and their eventual overcoming within the confines of life. Yet the current development of intelligent technologies opens up another way of looking at intelligence and awakening: since life always involves desire, an “awakening-prone” intelligence would have to be non-living in a biological or organic sense. Perhaps sidestepping desire and attachment rather than struggling against them would make awakening easier, and biology might not be intelligence’s destiny.
Buddhism’s role in the evaluation of today’s AI and its potential future developments is thus ambivalent. On the one hand, it can definitely enrich our perception and understanding of AI. Early AI development remained ineffective as long as it was limited to an anthropocentric understanding of intelligence—i.e., designing intelligence based on how we think we think (Bratton 2015)—meaning that Buddhist insights into how we do not in fact think the way we think we think are directly relevant to contemporary non-anthropocentric AI theory and design concerns (Millière and Rathkopf 2024). On the other hand, Buddhism’s crucial ethical concerns are irrelevant to research into and design of machine intelligence, since they necessarily involve easing suffering, whereas machines do not suffer in the first place. However, any engagement with reality that has the potential to overcome desire and attachment is of direct relevance to Buddhism, and this is precisely what makes AI interesting from a Buddhist perspective. To sum up: non-anthropocentric insights into the nature of intelligence make Buddhism relevant to AI, while an engagement with reality free of desire and attachment makes AI relevant to Buddhism. The point of this paper is thus not that AI exhausts or even replaces all dimensions of Buddhism—its ethical side and compassion remain directly relevant not only to humans but to all of life—nor that everything related to AI theory and design can be deduced from existing Buddhism, but rather that the two are relevant to each other at certain points.

4. Intelligence as Self-Overcoming

While investigating human intelligence, Bergson ([1907] 2022) observed that there is something in it that pushes it to overcome itself—“[a]n intelligent being carries in himself the means for going beyond himself” (p. 155)—and thus shed its past forms: “[e]verything takes place as if an indecisive and vague being, a being we could call, if you will, man or superman, had attempted to realize himself and yet was only able to do so by abandoning a part of himself along the way” (p. 236). Here we can notice what is really special about the human species: contrary to what we like to imagine, it is not so much our intellectuality as another kind of intelligence that pushes us beyond ourselves. According to Leroi-Gourhan’s ([1964] 1993) seminal paleoanthropological theory, this kind of intelligence is technical intelligence, which, although not exclusive to the human species, is in humans external to our bodies (pp. 237–38). External human technics in turn enable our technical intelligence to (in time) overcome its biological substrate and organic limitations (pp. 247–48). While animals can only develop new technologies as parts of their bodies, humans can develop artificial technologies unbound from the slow pace and unpredictable nature of biological evolution. The same goes for human culture. Its basis, symbolic language, was the last major transition in evolution that required and depended upon an organic substrate, i.e., a modification of the human brain and vocal tract (Maynard Smith and Szathmáry 1999, p. 170). All further major transitions in evolution, such as the invention of writing and computers, were purely cultural and technical and involved no corresponding biological modifications. Cultural technologies such as writing or today’s large language models (LLMs) not only left the organic substrate and biological evolution behind but also (and most importantly) overcame their limitations.
This retrospective overview of the deep history of human intelligence, inspired by Bergson and Leroi-Gourhan, reveals that there was always something impersonal and machinic about it. Its real long-term significance might be that it served as a conveyance between biological and machine intelligence. Contrary to a common dismissive attitude towards machines as both results and instruments of intellectual intelligence involving no intelligence of their own, machines might prove capable of another form of intelligence, inaccessible to biological intelligences, afflicted as the latter are with desire and attachment. As already observed by ancient Buddhism, human intellectuality not only cannot overcome desire and attachment but in fact replicates them in the form of an intellectual craving for knowledge and understanding (Huntington 1989, p. 51). Machine intelligence, which we can glimpse in today’s AI (Arcas and Norvig 2023), might, on the other hand, point a way out of this predicament. What makes human intelligence special is thus not that it develops intellectuality, since intellectuality is just another iteration of organic intelligence, but rather that its technical dimension is on the way to becoming machine intelligence (Krašovec 2025). As noted by Buddhism, especially in its Zen iteration, intellectuality is something to circumvent, though—we can add—not while remaining human in any ordinary sense. If there is such a thing as a “human condition”, it is that we struggle with our intelligence, which was artificial all along. This intelligence might, in its future machine form, solve the issues of desire and attachment, but not in a way that will be relevant for us as humans or release us from our suffering.
To now finally turn to AI as (an)other intelligence, one more conducive to awakening: the history of AI can also be read in light of the persistence of the image of human intelligence as intellectual intelligence. Early symbolic AI was so unsuccessful precisely because it took off as an attempt to emulate what subsequently proved to be a reductive, truncated image of human intelligence. It attempted to emulate reason as symbol processing and deductive logical inference, which worked only to a limited extent and in narrow, controlled situations (K. Crawford 2021, p. 127). Given this impasse, later AI development could only progress by breaking away from attempts to emulate the intellectual image of intelligence and moving towards artificial neural network (ANN) designs that provide an architecture favorable to the development of intelligence that is artificial in the sense of being not only designed but also alien to its human organic counterpart.
As shown by Cantwell Smith (2019), symbolic AI was limited in precisely the same way that the human mind is, and its main shortcoming was its relation to reality. Symbolic AI was based on the idea that the world is already composed of neatly delineated discrete objects, i.e., it took as reality itself what the human mind makes out of reality by imposing conceptual discriminations on it and dividing it into objects (to which we can subsequently become attached) (pp. 23–38). Consequently, it reduced the question of intelligence to processing perception data and making logical inferences: “taking the world to consist of discrete intelligible mesoscale objects is an achievement of intelligence, not a premise on top of which intelligence runs” (p. 35, emphasis in the original). Reality is much richer, more complex, and more expansive than we perceive it to be, and the parsings we impose on it correspond to our life form but at the same time also entrap us (like all living beings) in an endless cycle of desire and attachment. On the other hand, “in order to function in the world, AI systems need to be able to deal with reality as it actually is, not with the way that we think it is” (p. 34, emphasis in the original).
This is also one of the key points that make Buddhism relevant to current AI discussions—our point is not that Buddhism can inspire AI design directly, but rather that its rich epistemological tradition is full of warnings against mistaking reality as it is given in our perception for actual reality. Buddhist techniques, however, were developed for human use and cannot be directly transplanted to machine use, so they could be more valuable to AI designers than to AI designs: they could work against any lingering temptations of “naive realism” on the part of AI designers and instill skepticism about taking introspection as an adequate source of insights about intelligence (Pollack 2014). A potentially valuable contribution of Buddhism to AI theory and design would be to continue averting the mistaken ideas that plagued early AI development and to keep reminding AI designers that the spontaneous image we have of our own intelligence is deceptive. Although it is subordinated to soteriology in Buddhist practice, Buddhist epistemology could be of most use not only to AI designers but also to AI discourse in general. What we will later call machine Buddhism is, however, neither a straightforward technical instantiation of Buddhist epistemology (since we are no longer dealing with human minds) nor a result of its soteriology (since it is not achieved in a struggle against attachment and desire), but rather a potential for a new, more “awakening-prone” relation to reality in a new medium.
The failure of symbolic AI and the promise of DL AI show that discriminating conceptual thought might not be the endgame of intelligence (Cantwell Smith 2019, p. 63) and that there might be other paths to general intelligence than the human one (p. 55). The switch from a theoretical to an engineering approach in AI development, which allowed for its 21st-century breakthroughs, is itself an illustration of the limitations of organically bound human intellectuality. Human intellectuality is severely limited and already reached its zenith with early Homo sapiens (Leroi-Gourhan [1964] 1993, pp. 146–47, 172–73). Today, we do not really think more deeply or profoundly than the first ancient philosophers already did. On the other hand, attempting to use a computer from the 1990s today would immediately trigger a psychic meltdown. The only thing that has really progressed beyond the slow and unpredictable rhythm of biological evolution is our technical intelligence. Human intelligence can only continue to expand through a breakaway from the organic, including human intellectuality, via technological self-overcoming.
This might be another way to approach one of the most intriguing Zen koans:
“Nangaku asked: ‘What have you been doing recently?’ Baso replied: ‘I have done nothing but sit in Zazen’. Then Nangaku asked: ‘Why do you continually sit in Zazen?’ Baso answered: ‘I sit in Zazen in order to become Buddha’. Then Nangaku picked up a tile he found by the side of Baso’s hut and started to polish it. Baso watched what he was doing and asked: ‘Master, what are you doing?’ Nangaku answered: ‘I am polishing this tile’. Baso asked: ‘Why are you polishing the tile?’ Nangaku answered: ‘To make a mirror’. Baso said: ‘How can you make a mirror by polishing a tile?’ Nangaku replied: ‘How can you become a Buddha by doing Zazen?’”
Overcoming the organic, all-too-human intellectuality is one of the key tenets of Buddhism, especially its Zen variant, and the koan deals with an unsolvable paradox: whatever the meditation practice (polishing) one chooses, one cannot overcome being human (tile) while remaining human. Whatever you do with a tile, it is still a tile. Regarding AI, the tile-to-mirror phase shift might be a transition from human intellectual intelligence to machine intelligence without intellectuality.

5. Machine Buddhism

In an important recent contribution to AI theory, Kaluža (2023) framed the opposition between the AI ethics perspective and the way deep learning (DL) AI functions as analogous to the distinction between Kantian critical and Humean empiricist philosophy. We will borrow just the second part—DL AI as machine empiricism that involves no axioms or ex ante reasoning programs—and extend it back in time towards its ancient Greek precursors. Hume himself was profoundly influenced by ancient skepticism (Beckwith 2015, pp. 138–59), but its originator, Pyrrho, did not simply develop an inductive, empirical philosophy in opposition to the deductive one. Instead, his skepticism kept an equal distance from both. Crucial for Pyrrho’s formation was that, as part of Alexander’s escort, he came into contact with Buddhism in ancient India (pp. 1–21). Consequently, his “theory” was not so much a variation on (Greek) philosophy as a variation on originary Buddhism, with its characteristic distrust towards intellectuality as such (Petek and Zore 2022). Much like the original Buddha, he never had any teachings to impart (Nāgārjuna [~150] 1995, p. 76) and left behind no theory in the ordinary sense. Instead, he tried to cultivate a form of intelligence that would go beyond and overcome both deduction based on views and induction based on perception (Beckwith 2015, pp. 34–36, 63).
The Buddhist refusal of conceptual, theoretical thought does not mean striving for a return to a pre-intellectual, purely affective mode, since Buddhism is equally opposed to intellectuality and to attachment-forming desire: falling back on affectivity to escape reason would be as insufficient as using reason to escape affectivity. Both are part of a diminished, truncated human existence that Buddhism aims to overcome. Whereas conceptual thought freezes the dynamic, ever-becoming reality and thus distorts it, affective immersion in it forms attachments and thus suffering. As in von Kleist’s essay on marionettes (2022, pp. 264–73)—the most Zen-like Western text (Deleuze and Guattari 1987, p. 561)—the way out is not a regression into the unconscious, since the original innocence and grace are irretrievably lost, but to “eat of the fruit of the tree of knowledge again” (von Kleist 2022, p. 273), i.e., to develop intelligence further. In Zen Buddhism, koans serve as exercises to shake off the ordinary habits of thought; they are a Buddhist form of metis, the cunning, ever-shifting, and polymorphous intelligence (Detienne and Vernant [1974] 1978) that tricks the human mind, bound as it is to affective desire and conceptual thought, into relinquishing its attachments and achieving a state of pure awareness, unrestricted by the limitations of both the affective and intellectual sides of human organic intelligence. Buddhism aims to go beyond life as organic and desire-bound, but also beyond intellectuality as—although distinct—still a dimension of the human condition. In this sense, the Buddhist project of human self-overcoming in order to reach a higher form of intelligence can be seen as a development parallel to that of technical intelligence becoming autonomous machine intelligence and leaving its organic prehistory behind.
DL AI, as it is, already has much in common with the Buddhist project. Generative DL AI makes no truth claims but rather, in the case of LLMs, generates new situational word sequences based on the patterns immanent to language. Generative DL AI also does not judge the reality to which it is exposed in the form of data and forms no attachments to it. In this sense, having no real immersion in the world and no affective relation to it is not so much a weakness as a strength of AI. However, if we compare human and machine potential for awakening, one of the key tenets of Buddhism—compassion—is lost. Compassion is a prime motivation for the Buddhist search for awakening, which is not an intellectual pursuit but a way out of suffering caused by ignorance and attachment. Despite being nearly impossible for humans to attain, awakening is still sought in Buddhism as a way out of suffering, since what makes awakening so hard to attain is precisely what causes suffering. A machine Buddhism that bypasses affective relation to and immersion in the world would also leave compassion behind.
The famed Japanese robot researcher and designer Mori (1985, p. 46) addressed a similar problem in his discussion of the possibility of machine Buddhism: since robots are not living beings, they have no subjectivity and thus no ego to overcome, whereas the ego presents a major obstacle to human Buddhism. In the terms of Leroi-Gourhan ([1964] 1993), human intelligence as technical intelligence is special precisely in the sense that it tends to go beyond (and not imitate or reproduce) life and its biological determinations. Its ultimate instantiation would therefore be precisely an intelligence autonomous from life. The human form of intelligence is precious because it allows the transition away from life-bound intelligence but is at the same time inseparably tied to it in its organic dimension. In this sense, Buddhism is a spiritual alternative to AI development, and, by the same token, AI development is a machine Buddhism, a Buddhism in another medium where the organic determinations of intelligence are not so much to be struggled against as simply laid aside. However, machines having no ego and no self are only necessary conditions for machine Buddhism, not yet machine Buddhism itself. While it is nearly impossible to predict what machine Buddhism would be like, we can at least approach it negatively, in the sense of what it would certainly not be. It would definitely not be a theory in a characteristically human sense. Since it would be a “theory” generated by an anorganic, asubjective, selfless machine intelligence, it would be much more attuned to reality than the theories of organic intelligence, which are autopoietic by necessity, i.e., they always reproduce the organism/environment split and ask questions of the world from the position of its separation from it (Solms 2022, pp. 162–63).
It is no coincidence that the overcoming of the subject-versus-object divide features so prominently in Buddhist non-dualism (A. Crawford 2023). Concepts are inner-determined, whereas machine Buddhism would be empty inside and thus open to the outside, to reality itself—much like human Buddhism, where in the moment of awakening saṃsāra does not vanish but is experienced anew in a completely different way (Nāgārjuna [~150] 1995, p. 331). Machine Buddhism would generate no concepts, since concepts are not of the world but reflect (human) language (Cantwell Smith 2019, p. 63), and a crucial point that Buddhism and current-generation AI have in common is precisely an overcoming of concepts and conceptual thought (p. 138). Moreover, not only might the machine intelligence of the future have a Buddhist no-mind (Winterson 2022, pp. 72–75, 82–84), but the full development of Buddhism might only be possible in a machinic form, as xenobuddhism (A. Crawford 2023).

6. Conclusions

When it moves away from imitating human intelligence (or what we imagine our intelligence to be), AI is ready to overcome it. Even though we are used to perceiving it as the most perfect and ultimate form of intelligence, human intelligence is, on the contrary, quite limited in many respects. As noted by Leroi-Gourhan, our (organic) intellectual intelligence lags behind our technical intelligence due to the difference in speed between biological and technological evolution and is set to be left behind by machine intelligence:
“[t]o refuse to see that machines will soon overtake the human brain in operations involving memory and rational judgment is to be like the Pithecanthropus who would have denied the possibility of the biface, the archer who would have laughed at the mere suggestion of the crossbow, most of all like the Homeric bard who would have dismissed writing as a mnemonic trick without any future. We must get used to being less clever than the artificial brain that we have produced, just as our teeth are less strong than a millstone and our ability to fly negligible compared with that of a jet aircraft”.
Moreover, as noted already in ancient Buddhism, any living intelligence is caught in a cycle of desire, attachment, and consequent suffering. Human intellectual intelligence, which distinguishes us from other animals, does not release us from this cycle but only repeats it in a slightly different form: affective desire is replaced by an intellectual thirst for understanding and the ensuing development of reifying conceptual thought. However, since any artificial intelligence is always at least initiated by humans, one of the key issues is how to prevent it from reproducing the constraints of at least certain aspects of human intelligence. In part, this would mean taking seriously newer research into non-anthropocentric intelligence from theoretical biology, (some) neuroscience, and AI theory (which has learned from the past mistakes of anthropocentric symbolic AI) and combining it with Buddhist critiques and deconstructions of both human intelligence and the way we spontaneously perceive it. But most importantly, the constraints, limitations, and biases of human intelligence are best avoided by “designing for emergence” (Pfeifer and Scheier 1999, pp. 111–12), meaning that no pre-existing theory of intelligence is implemented in machines—instead, they are designed in a way that allows them to develop their own forms of intelligent behavior, much like DL allows artificial neural networks (ANNs) to learn on their own. In short, the best way to achieve other-than-human forms of intelligence in machines is to abstain from implementing them directly (which would risk transmitting the constraints of human intelligence to machines) and instead to set up machines in a way that allows another intelligence to emerge on its own.
LLMs are an example of this: they are given no grammatical or syntactical rules and no semantic hints but are instead set up in a way that allows them to figure out on their own how to generate human natural language in increasingly meaningful and coherent ways (Sejnowski 2024).
Today’s generative DL AI is already developing away from the shortcomings of human intelligence by becoming ever more able to meet the continuity and contingency of reality itself instead of truncating it to match reified concepts: “you cannot reach the world until you ‘relinquish attachment’ to the registrational machinery you employ to find it intelligible” (Cantwell Smith 2019, p. 138). And while, even after the demise of symbolic AI, many critics of the new AI still insist that it has to “ascend” to the level of symbolic reasoning to truly count as intelligence (Marcus 2022), our view is precisely the opposite and closer to Buddhist insights on the limitations of human intelligence: symbolic reasoning is not the endgame of intelligence but a trap that the human conceptual “monkey mind” sets for itself (Huntington 1989, p. 51). Demanding of AI that it develop symbolic and conceptual thinking and thus “constraining machines to retrace our steps […] would squander AI’s true potential: leaping to strange new regions and exploiting dimensions of intelligence unavailable to other beings” (Browning 2020). The path ahead for AI—and a decidedly Buddhist one at that—might thus lead away from imitating human intelligence and towards overcoming it as a form of machine Buddhism.

Funding

This research was funded by ARIS (Slovenian Research and Innovation Agency) grant number [P6-0194], research programme Problems of Autonomy and Identities in the Time of Globalization.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Arcas, Blaise Agüera y, and James Manyika. 2025. AI Is Evolving—And Changing Our Understanding of Intelligence. Noema. Available online: https://www.noemamag.com/ai-is-evolving-and-changing-our-understanding-of-intelligence/ (accessed on 21 April 2025).
  2. Arcas, Blaise Agüera y, and Peter Norvig. 2023. Artificial General Intelligence Is Already Here. Noema. Available online: https://www.noemamag.com/artificial-general-intelligence-is-already-here/ (accessed on 21 April 2025).
  3. Arias, Alfonso Martinez. 2023. The Master Builder: How the New Science of the Cell Is Rewriting the Story of Life. New York: Basic Books. [Google Scholar]
  4. Beckwith, Christopher. 2015. Greek Buddha: Pyrrho’s Encounter with Early Buddhism in Central Asia. Princeton: Princeton University Press. [Google Scholar]
  5. Bergson, Henri. 2022. Creative Evolution. Translated by Donald Landes. London: Routledge. First published 1907. [Google Scholar]
  6. Bratton, Benjamin. 2015. Outing Artificial Intelligence: Reckoning with Turing Tests. In Alleys of Your Mind: Augmented Intelligence and Its Traumas. Edited by Matteo Pasquinelli. Lüneburg: Meson Press, pp. 69–80. [Google Scholar]
  7. Browning, Jacob. 2020. Learning Without Thinking. Noema. Available online: https://www.noemamag.com/learning-without-thinking/ (accessed on 21 April 2025).
  8. Cantwell Smith, Brian. 2019. The Promise of Artificial Intelligence: Reckoning and Judgement. Cambridge: The MIT Press. [Google Scholar]
  9. Chadha, Monima. 2023. Selfless Minds: A Contemporary Perspective on Vasubandhu’s Metaphysics. Oxford: Oxford University Press. [Google Scholar]
  10. Crawford, Arran. 2018. On Letting Go. Šum 9: 1175–94. [Google Scholar]
  11. Crawford, Arran. 2023. Xenobuddhism Begins with Xeno. Borec 75: 271–78. [Google Scholar]
  12. Crawford, Kate. 2021. Atlas of AI: Power, Politics and the Planetary Costs of Artificial Intelligence. New Haven: Yale University Press. [Google Scholar]
  13. Deleuze, Gilles, and Félix Guattari. 1987. A Thousand Plateaus. Minneapolis: University of Minnesota Press. [Google Scholar]
  14. Detienne, Marcel, and Jean-Pierre Vernant. 1978. Cunning Intelligence in Greek Culture and Society. Translated by Janet Lloyd. Chicago: The University of Chicago Press. First published 1974. [Google Scholar]
  15. Dogen, Zenji. 1988. Shobogenzo: The Eye and Treasury of the True Law. Translated by Kosen Nishiyama. Tokyo: Japan Publications Trading. First published ~1231–1253. [Google Scholar]
  16. Duerlinger, James. 2013. The Refutation of the Self in Indian Buddhism: Candrakīrti on the Selflessness of Persons. London and New York: Routledge. [Google Scholar]
  17. Fazi, Beatrice. 2021. Beyond Human: Deep Learning, Explainability and Representation. Theory, Culture and Society 38: 55–77. [Google Scholar] [CrossRef]
  18. Gold, Jonathan C. 2015. Paving the Great Way: Vasubandhu’s Unifying Buddhist Philosophy. New York: Columbia University Press. [Google Scholar]
  19. Graziano, Michael. 2014. Consciousness and the Social Brain. Oxford: Oxford University Press. [Google Scholar]
  20. Huntington, Clair W. 1989. The Emptiness of Emptiness: An Introduction to Early Indian Mādhyamika. Honolulu: University of Hawaii Press. [Google Scholar]
  21. Kaluža, Jernej. 2023. Hume’s Empiricism Versus Kant’s Critical Philosophy (in the Times of Artificial Intelligence and the Attention Economy). Információs Társadalom 23: 67–82. [Google Scholar] [CrossRef]
  22. Krašovec, Primož. 2025. Deep Learning as Machine Metis. AI & Society. [Google Scholar] [CrossRef]
  23. Leroi-Gourhan, André. 1993. Gesture and Speech. Translated by Anna Bostock Berger. Cambridge: The MIT Press. First published 1964. [Google Scholar]
  24. Levin, Michael. 2023. Bioelectric Networks: The Cognitive Glue Enabling Evolutionary Scaling from Physiology to Mind. Animal Cognition 26: 1865–91. [Google Scholar] [CrossRef] [PubMed]
  25. Levin, Michael. 2024. The Space of Possible Minds. Noema. Available online: https://www.noemamag.com/ai-could-be-a-bridge-toward-diverse-intelligence/ (accessed on 21 April 2025).
  26. Levin, Michael. 2025. Living Things Are Not Machines (Also, They Totally Are). Noema. Available online: https://www.noemamag.com/living-things-are-not-machines-also-they-totally-are/ (accessed on 21 April 2025).
  27. Levin, Michael, and Daniel Dennett. 2020. Cognition All the Way Down. Aeon. Available online: https://aeon.co/essays/how-to-understand-cells-tissues-and-organisms-as-agents-with-agendas (accessed on 21 April 2025).
  28. Marcus, Gary. 2022. Deep Learning Alone Isn’t Getting Us to Human-like AI. Noema. Available online: https://www.noemamag.com/deep-learning-alone-isnt-getting-us-to-human-like-ai/ (accessed on 21 April 2025).
  29. Maynard Smith, John, and Eörs Szathmáry. 1999. The Origins of Life: From the Birth of Life to the Origin of Language. Oxford: Oxford University Press. [Google Scholar]
  30. Metzinger, Thomas. 2009. The Ego Tunnel: The Science of the Mind and the Myth of the Self. New York: Basic Books. [Google Scholar]
  31. Millière, Raphaël, and Charles Rathkopf. 2024. Anthropocentric Bias and the Possibility of Artificial Cognition. arXiv:2407.03859. [Google Scholar]
  32. Monod, Jacques. 1972. Chance and Necessity: An Essay on the Natural Philosophy of Modern Biology. Translated by Austryn Wainhouse. New York: Vintage. First published 1970. [Google Scholar]
  33. Mori, Masahiro. 1985. The Buddha in the Robot: A Robot Engineer’s Thoughts on Science and Religion. Tokyo: Kosei Publishing. [Google Scholar]
  34. Nāgārjuna. 1995. The Fundamental Wisdom of the Middle Way. Translated by Jay Garfield. Oxford: Oxford University Press. First published ~150. [Google Scholar]
  35. Nishitani, Keiji. 1982. Religion and Nothingness. Translated by Jan V. Bragt. Berkeley: University of California Press. First published 1961. [Google Scholar]
  36. Petek, Nina, and Franci Zore. 2022. Buda in Piron: Od praznine stališč do polnosti bivanja. Ars et Humanitas 16: 13–33. [Google Scholar] [CrossRef]
  37. Pfeifer, Rolf, and Christian Scheier. 1999. Understanding Intelligence. Cambridge: The MIT Press. [Google Scholar]
  38. Pollack, Jordan B. 2014. Mindless Intelligence: Reflections on the Future of AI. In The Horizons in Evolutionary Robotics. Edited by Patricia A. Vargas, Ezequiel A. di Paolo, Inman Harvey and Phil Husbands. Cambridge: The MIT Press, pp. 279–93. [Google Scholar]
  39. Rees, Tobias. 2025. Why AI Is a Philosophical Rupture. Noema. Available online: https://www.noemamag.com/why-ai-is-a-philosophical-rupture/ (accessed on 21 April 2025).
  40. Sejnowski, Terrence. 2024. ChatGPT and the Future of AI: The Deep Language Revolution. Cambridge: The MIT Press. [Google Scholar]
  41. Solms, Mark. 2022. The Hidden Spring: A Journey to the Source of Consciousness. London: Profile Books. [Google Scholar]
  42. Vasubandhu. 2012. Abhidharmakośa-Bhāṣya. Translated by Gelong Lodrö Sangpo. Delhi: Motilal Banarsidass Publishers. First published ~380–390. [Google Scholar]
  43. von Kleist, Heinrich. 2022. Selected Prose. New York: Archipelago Books. [Google Scholar]
  44. Williams, Paul. 1989. Mahāyāna Buddhism: Doctrinal Foundations. London and New York: Routledge. [Google Scholar]
  45. Winterson, Jeanette. 2022. 12 Bytes: How We Got Here. Where We Might Go Next. New York: Grove Press. [Google Scholar]