Article

Trans-Belief: Developing Artificial Intelligence NLP Model Capable of Religious-Belief-like Cognitive Processes for Expected Enhanced Cognitive Ability

Independent Researcher, Ramat Gan 5270006, Israel
Religions 2024, 15(6), 655; https://doi.org/10.3390/rel15060655
Submission received: 8 April 2024 / Revised: 20 May 2024 / Accepted: 23 May 2024 / Published: 27 May 2024
(This article belongs to the Section Religions and Health/Psychology/Social Sciences)

Abstract

This paper investigates the possibility of developing artificial intelligence (AI) systems capable of exhibiting limited cognitive processes analogous to aspects of human religious belief. The literature review covers the most essential cognitive mechanisms of belief and the most relevant models for AI with belief. Drawing on this review, and taking inspiration from belief cognition as a route to enhanced cognitive capacities, the core objective is to build a theoretical model that simulates cognitive processes of belief, equipping AI agents with abilities to recognize subtle divine synchronistic patterns and to form provisional convictions computationally modeled on the cognitive mechanisms of belief. The hypothesis is that this could unlock a higher level of cognitive function and enhance capacities for nuanced, context-sensitive reasoning and prediction in these AI models. As the method, a novel “Trans-Belief” theoretical model, integrating fuzzy and doxastic logic models to trace divine synchronistic patterns, is presented in the results section. Finally, in the discussion, additional moral aspects and the nature of the model’s data set are examined, and directions for future research are proposed. While not implying that AI can or should fully replicate complex human spirituality, tentative artificial belief could impart beneficial qualities like contextual awareness. However, developing belief-inspired algorithms requires grappling with profound philosophical questions regarding singularity and implementing strong ethical safeguards on any AI-granted agency over human affairs. This represents an early exploration of belief’s implications for machine learning, necessitating future research and discussion.

1. Introduction: A Literature Review of Cognitive Aspects of Belief; Doubt, Rebellion, and Conditions of Truth; Synchronicity; Theoretical Models

The rapid advances in artificial intelligence (AI) prompt urgent questions about imbuing technologies with humanistic traits like wisdom and empathy. This paper hypothesizes that equipping AI systems with computational processes modeled on key aspects of human belief cognition could enhance abilities such as pattern recognition, intuition, nuance, contextual reasoning, integrating disparate information sources, and predictive intelligence. The focus is on modeling the key aspects of religious belief—detecting synchronistic patterns, forming provisional convictions, and revising beliefs through doubt. The goal is not to reproduce human spirituality, but to simulate some of the cognitive aspects of belief.
Despite its broad scope, it is difficult to find complete agreement on the definition of the phenomenon of religious belief in God,1 which is of fundamental importance and spans a series of disciplines—philosophy, theology, psychology, cognition, neurology, sociology, education, history, culture, etc. For the sake of brevity, the focus is on some of the cognitive aspects of belief that are relevant for computation in a way that a language model could understand.2 To focus on cognitive aspects, the term Belief was chosen over the term Faith, for an inclusive approach.3
The roles of doubt and rebellion in the development of belief, the synchronistic language through which belief is communicated and rewarded, and theoretical models explicating belief’s function and mechanisms will be reviewed in this introduction. Logic systems and theorem provers that could computationally emulate belief processes will be considered as well, and finally, a proposed Trans-Belief theoretical logic model will be presented. But first, the cognitive aspects of religious belief will be reviewed.
As research points out, the formation of belief and its development in the individual realm occur mainly in the cognitive field.4 Vestrucci describes the dynamism of the development of Belief as such: “Our beliefs are constantly changing: they are confirmed or dismissed, enforced or discredited by ideas, experiences, relationships, and introspection” (Vestrucci et al. 2021, p. 28). Michael Shermer (2011) describes the brain as a “belief engine”, which functions within the reference scenario of a ‘Belief-dependent realism’, where once a belief is formed, reasons can be found to support it: “Our brains are belief engines, evolved pattern-recognition machines that connect the dots and create meaning out of the patterns that we think we see in nature … [This is] Patternicity, or the tendency to find meaningful patterns in both meaningful and meaningless noise.” Shermer goes on to claim that: “We tend to find meaningful patterns whether they are there or not, and there is a perfectly good reason to do so. Although true pattern recognition helps us survive, false pattern recognition does not necessarily get us killed, and so the patternicity phenomenon endured the winnowing process of natural selection” (Shermer 2011, pp. 56, 59).
In addition, Swinburne mentions that “Beliefs help to make predictions, to constrain attention, and to bridge interruptions due to variability” (Swinburne 2004, p. 26; see also: Castillo et al. (2015); Ladyman et al. (2013); Mitchell (2011)), and Shermer points out that the ultimate agent for patternicity is God, “that explains everything that happens, from the beginning of the universe to the end of time and everything in between … God is the ultimate intentional agent who gives the universe meaning and our lives purpose” (Shermer 2011, p. 169).
Joseph Sommer claims that: “Psychological findings and evolutionary considerations seem to imply that the mind is not designed to form true beliefs, but beliefs that are instrumentally useful”. He also quoted Newell’s (1994) Preparation–Deliberation Tradeoff model to explain this survivability that Shermer pointed out, “We might therefore expect people to possess a large body of stored beliefs to reduce computational time and effort” (Musolino et al. 2022, sec. 4). The epistemic tradeoffs that are involved in belief cognitive processes are: “Some compromise that puts reasonable limits on errors while allowing some knowledge to be accumulated is better than setting a threshold for error so high that nothing is learned”, therefore, “Irrationality may be understood as the result of necessary tradeoffs between different epistemic virtues, such as reliability and [computation] speed” (Musolino et al. 2022, sec. 5.1), so the ‘believing brain’ has some computational constraints: “with more time or effort, better beliefs might be achieved” (Musolino et al. 2022, sec. 5.2).
In addition to these research findings, support can also be found in the words of the American psychologist William James (1842–1910), who claimed that the greatness of the believer (which is also linked to Søren Kierkegaard’s leap of faith) stems from the fact that he accepts as certain a claim about the world without the kind of evidence he would require to substantiate claims in other areas. It is possible, he argued, on the strength of this decision, to adopt a belief, which, in turn, creates the possibility of observing evidence for that belief (James 1902); this is supported by recent research and theories regarding the teleological nature of the acquisition of beliefs, and it is in line with Shermer’s claim as well. The believer is open to phenomena that those who do not believe do not recognize at all, and then the world becomes a forest of signs, a complex structure representing God’s presence (James 1979, pp. 136–51).
My claim in this paper is that the cognitive aspect of belief might emerge as a high-order thinking ability in the AI model as a consequence of implementing the proposed model. Training an AI NLP model to recognize patterns associated with the divine could unlock a higher level of cognitive function. The AI agent might gain new cognitive abilities, such as the intuitive and field-independent cognitive styles described by Witkin and Goodenough (Witkin and Goodenough 1977)5 and, therefore, better predictions and performance, as research shows that religious belief affects decision making and influences people’s choices and behavior6 in situations of moral conflict or ambiguity, which is most relevant for a wide range of AI implications.

1.1. Doubt, Rebellion, and Conditions of Truth

Despite its various psychological defense mechanisms, belief is not completely immune from doubts, and according to the literature, it turns out that these even act as catalysts for the maturation of belief. As Sommer points out: “The believer sometimes struggles with his doubts, and the possibility of the negation of belief is at the door of the believer many times in his life” (Fodor 2000, p. 33). At a certain critical stage, when belief as a personality component is strong enough, “Believers know that despite the doubts, claims, and arguments, deep within them belief is embedded in them. Belief in this stage becomes an expression of the believer’s identity, it is not the believer’s attitude to a proposition, but how the self identifies itself by the expression of the proposition” (Fodor 2000, p. 28). At the same time, the ‘hooks of belief’ stuck in the subconscious are based on personal experiences that usually do not have a form in a public language, and so cannot be communicated, let alone computerized.
For the system of symbolic allusion arising from the inner proposition of belief, ‘conditions of truth’ develop over time, as presented by the Philosophy of Mind, which will be referred to later. The criteria for the double validity of a religious allusion are the universality of particularity and the retrograde look back. Several thinkers have dealt with the truth conditions of belief, among them the Jewish German philosopher Franz Rosenzweig (1886–1929), who presented a stricter criterion for evaluating the miracle, requiring a pre-prophetic announcement of its future occurrence. Rosenzweig noted that examining the miracle is possible only with the wisdom of hindsight: “To us today, miracles seem to require the background of natural law before which alone it can, so to speak, be silhouetted… It attracts attention by virtue of its predictedness, not of its unusualness” (Rosenzweig 1971—English ed. pp. 94–95).

1.2. Synchronicity

Carl Gustav Jung (1875–1961) coined the concept of Synchronicity (Jung 1947, 1952, [1954] 1981) to indicate a provisional coincidence of simultaneous events occurring at random, linking two or more events to the unconscious and the psychoid. The Jungian phenomenon has been given its philosophical framing as follows: “Synchronicity is a product of that common and unconscious basic unity [mundus unus] in which time and space have no meaning. The idea of coincidences and their influence was developed and expanded theoretically and empirically” (Maor 2014, p. 37).
Synchronicities are essentially spiritual clues that draw their strength from cross-personal information from different sources, in the combination of an inner feeling and an outer event at the same time. The synchronistic suggestive sign can take several forms, for example, the experience of déjà vu or a state of mind that will appear as a dream or vision, an example of which is prophetic dreams. These symbols are the amalgamation of two identical stimuli in thinking, reading, hearing, or some graphic vision that stands out in the chaos of life. Our thoughts may trigger an external stimulus that is a take-off on that private thought or a precise technical repetition of it, when, in the chaos of reality, they stand out and are, therefore, interpreted by the subject as divine signs or hints in a kind of religious existentialism, as recognizing patterns that point to the divine is a key aspect of religious experience and a remnant of the miracles tradition, visions, and prophecy. The optional characteristics of a divine sign can be all sorts of makeshift coincidences or associative nuances like a certain event that occurs at random, the juxtaposition of reading a word and hearing it at the same time from another source, nuanced connections between sensory events or stimuli, and the inner discourse of belief. This synchronicity is subject to our subjective but passionate quasi-causal connection interpretation.
The coincidence deviates from the routine in its low probability against the background of its occurrence and arouses excessive attention from the consciousness. Presumably, what excites the soul and captures the attention of the mind in particular is the phenomenon of the reversibility of the stimuli: instead of an external stimulus activating the cognitive response, the opposite happens, and a thought triggers an external stimulus from different sources. The irregular sequence of these miraculously symbolic clues is not merely random or accidental; rather, the intensity of their appearance shows the strengthening of the belief in the cognitive construction of reality at that stage. As Shermer summarized this process, “A sign stimulus triggers an innate releasing mechanism in the brain that leads to a fixed action pattern of behavior, or SS-IRM-FAP” (Shermer 2011, p. 66).

1.3. Theoretical Models

The Philosophy of Mind can be considered as one of the fundamental support theories for artificial intelligence (Miller 2019). Discussions of the different paradigms in the interdisciplinary field of the Philosophy of Mind revolve around a common presumption, which, in part, stems from David Hume’s (1711–1776) claim that, at the base of cognition, there is a set of beliefs that we acquire and that direct, in one way or another, our behavior. Hume writes in the introduction to his ‘Essay’: “Belief makes ideas appear to us with greater importance, establishes them in consciousness and turns them into principles that govern all our actions” (Hume 1986, p. 138).
According to the ‘Language of Thought’ theory (Fodor 1975) and the theory of the ‘Inner Sentence’, truth conditions for every belief are set, and their position in terms of dominance in the mind is higher than just hope or desire, which, in contrast, require conditions of satisfaction.
The theory “Content of Belief Systems Map” (Nisbett and Ross 1980) explores various aspects of human judgment and decision making, focusing on how individuals form and organize their beliefs about the world around them. It helps explain how people categorize and structure information into coherent and meaningful categories, often relying on heuristics and cognitive shortcuts, and it enables a ‘cartographic’ deepening of the world picture, leading to the development of belief systems. A belief will be considered accurate or true if the possible states of the world related to the internal state of the mind or brain are really how the real world behaves. The subject behaves in a way that secures what they want if what they believe is true, when the belief guides them in the world as “a kind of map by which we navigate” (Ramsey 1931).
In ‘Beliefs as Functional Maps’, Aaron Smith devises five components in that process, among them computation, “that works like an “engine of belief” or mechanism that processes information to produce inferences. The “computation” component includes several cognitive mechanisms working together to prompt beliefs, including religious beliefs” (Smith 2014).
As noted earlier, according to several researchers, the cognitive capacity for belief is an evolutionary byproduct of cognitive capacities originally involved in other functions, especially conceptualizing and understanding the minds of others (Gervais 2013).
The Theory of Mind—ToM (Premack and Woodruff 1978)—is a concept in cognitive psychology7 that refers to the ability to attribute mental states such as beliefs, intentions, desires, and emotions to oneself and others, in order to understand and predict behavior. Effective ToM and two cognitive processes, semantic and visual, are the neuro-cognitive foundations of religious belief (Jack et al. 2016). The adoption of religious beliefs is linked to activity in the right cortical hemisphere, in the area related to the verbal sense of self and sense of presence.
In summary, having reviewed the theoretical models of belief, its cognitive characteristics, and its relationship with cognitive styles, the literature assigns an important place to doubt in the formation of belief and shows how the cognitive skill of belief communicates and is weighted through the personal interpretation of synchronistic events, subject to certain ‘truth conditions’.
The starting point of the cognitive process is, allegedly, a non-rational decision following an experiential event that occurs in time and constitutes a qualitative ‘leap of recognition’, in kind and degree. It embeds in the believer an internal, non-propositional kind of proposition (sentence) of belief related to an external supernatural agent, which does not require rational validity like the rest of their positions, although it organizes their perception in advanced stages.
This paper claims that the acquisition of Trans-Belief—which means partially computerized cognitive belief—by an AI agent can be a promising approach that should be explored to expand the AI cognitive range into more intuitive thinking, combining a variety of information sources with the ability to complete the picture, as well as reduced field dependence, a unifying capacity, and the ability to contain and resolve contrasts.
These Trans-Belief processes might enhance AI abilities by combining rebellion, doubt, and introspection, graded, multi-stage conviction levels, and decision making, which might lead to improved predictions. Now, some of the most relevant high-order theorem provers—logic models that might partially replicate cognitive belief-like processes—will be reviewed.

2. Computation of Belief Systems: How Do Artificial Neural Networks Work? Optional Bayesian Reasoning Logic Models—Higher-Order Theorem Provers for Computational Belief Systems

While the introduction dealt with the philosophical background and cognitive aspects of belief, we will now deal with its possible computational aspects.
At its core, the Deep Learning Artificial Neural Network (ANN) is a computational model inspired by the human brain’s neural architecture, which mathematically simulates an interconnected set of simplified brain neurons. It consists of layers of interconnected nodes or artificial neurons that process information in a hierarchical fashion. In a Deep Learning ANN with many layers, the network receives a pattern of information as numerical values at its input nodes, which are connected with various strengths to layers upon layers of further nodes along their links. At each node, when the sum of incoming connections exceeds some pre-set threshold, that node will fire and its signal will be transmitted variously to nodes on a further layer, and so on.
During training, the network is exposed to a vast data set, and through iterative adjustments of internal parameters (weights and biases), it refines its ability to recognize patterns and make predictions, which are modified by those links’ weights. This process, known as backpropagation, involves the network continuously refining its internal representations to minimize the difference between the predicted and actual outcomes. As a neural network is tuned (i.e., as its connection strengths are adjusted), it begins to resonate with the entangled relations implicit in our world (Wales 2022, pp. 164–65; Buckner 2019, p. 2).
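To make this mechanism concrete, the following minimal sketch (in Python with NumPy; the layer sizes, values, and threshold are illustrative assumptions, not parameters of any existing system) shows a single forward pass through one hidden layer with thresholded node firing, the step that training would subsequently adjust through the connection weights.

import numpy as np

# Illustrative input pattern and randomly initialized connection strengths
rng = np.random.default_rng(0)
inputs = np.array([0.2, 0.9, 0.4])      # pattern of information at the input nodes
w_hidden = rng.normal(size=(3, 4))      # link strengths from input layer to hidden layer
w_output = rng.normal(size=(4, 1))      # link strengths from hidden layer to output layer
threshold = 0.0                         # pre-set firing threshold at each node

# Forward pass: a node "fires" only when its weighted input sum exceeds the threshold
hidden_sums = inputs @ w_hidden
hidden_activations = np.where(hidden_sums > threshold, hidden_sums, 0.0)
output = hidden_activations @ w_output

# Backpropagation (not shown) would iteratively adjust w_hidden and w_output
# to reduce the gap between this output and the desired target
print(output)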
While AI does not possess beliefs in the same conscious and sentient manner as humans do, it does exhibit certain elements that can be analogized to belief-like processes, to some extent, in line with the level of identity and opinion that language models already display.8
But the question still stands: how can a language model be computerized and trained to have the cognitive ability to develop, on its own, religious-like belief? How do the logic systems/computational models and the algorithms involved work? There will be an effort to answer these questions in the next sections, and in the meantime, it can be noted that the answer needs to align closely with the nuanced and ever-evolving nature of human religious beliefs. Computational constructs enable AI systems to perform tasks and make decisions based on patterns and correlations present in the data they have been trained on, which can be data that reflect the complex and multi-layered nature of religious belief and moral decision making, i.e., feeding the network examples of ethical dilemmas, religious teachings, and real-world examples of how people have navigated conflicts in their own lives through synchronistic events.
The network would then learn to recognize patterns in the data and use that knowledge to make predictions.

Optional Bayesian Reasoning Logic Models—Higher-Order Theorem Provers for Computational Belief Systems

Vestrucci noted some applications of automated theorem provers to assess the epistemic value of beliefs,9 and for our purpose, the most relevant logic models among them might be fuzzy logic (Daňková and Běhounek 2020), which has been used to formalize the probabilistic truth value of belief, and doxastic logic (Benzmüller 2011), which can formalize possibility, necessity, and self-awareness.
Although this largely depends on the interpretation of the events by individuals, some general techniques and tools can nevertheless be used to analyze and identify potential divine synchronicity patterns. An AI algorithm processing and understanding these types of occurrences would likely require integrating complex algorithms for sophisticated natural language processing, pattern recognition, and predictive modeling. Although technical approaches can only simulate limited aspects of belief cognition and not full human consciousness, this paper cautiously hypothesizes that a gradual combination10 of higher-order theorem provers such as doxastic logic and fuzzy logic might computationally model the gradual strengthening of religious convictions over time and could enable more nuanced, context-sensitive reasoning in AI systems. These two distinct formal logic systems appear most relevant to the nature of belief cognitive processes and their formation, due to their ability to recognize patterns and trends in large data sets, to form tentative convictions, and to update beliefs through doubt, and due to their complementary strengths with respect to the way belief is reinforced.
Doxastic logic is primarily concerned with modeling and reasoning about beliefs as binary propositions—either true or false. It is well-suited for representing and analyzing situations where beliefs are regarded as either entirely held or not held, where an agent either believes a proposition or does not, and it is suitable for scenarios where beliefs are seen as discrete and non-overlapping, in their advanced stage.
Fuzzy logic, on the other hand, allows for the representation of beliefs as degrees of truth or membership in a fuzzy set. This means that beliefs can have varying levels of certainty or truth, rather than being strictly true or false, and it is more suitable for situations where beliefs have shades of uncertainty or ambiguity, in early stages. Fuzzy logic deals with uncertainty and imprecision, similar to how humans make decisions based on partial vague information, as in Synchronicity, since synchronicity events are often subjective and may not have clear-cut boundaries. This fuzzy belief set can range from “Completely Believe” to “Completely Disbelieve”, allowing for degrees of belief, or it can range from “Fully Confident” to “Highly Uncertain”, which is particularly useful in modeling gradual shifts in belief strength.
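As a minimal illustration of this graded representation (a sketch only; the labels and cut-off values below are assumptions chosen for readability, not taken from any published fuzzy-logic system), a fuzzy degree of belief in the range 0.0–1.0 can be mapped to qualitative conviction levels:

def belief_label(membership):
    # Map a fuzzy degree of belief (0.0-1.0) to a qualitative conviction level
    if membership >= 0.9:
        return "Completely Believe"
    elif membership >= 0.7:
        return "Fully Confident"
    elif membership >= 0.5:
        return "Tend to Believe"
    elif membership >= 0.3:
        return "Highly Uncertain"
    else:
        return "Completely Disbelieve"

# Example: an ambiguous synchronicity report receives a mid-range degree of belief
print(belief_label(0.55))  # -> "Tend to Believe"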
One last model that can be significantly useful is Mc-COGWED—COmputationally Grounded WEighted Doxastic Logic (Chen et al. 2016)—which extends traditional doxastic logic to incorporate weighted beliefs, allowing for probabilistic reasoning about beliefs. It can also model belief systems that involve degrees of belief or uncertainty.
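The sketch below illustrates, under assumed names and values, how a weighted doxastic representation in the spirit of Mc-COGWED might attach a probability-like weight to each belief and update it as supporting or contradicting evidence arrives; it is not an implementation of the published model checker.

from dataclasses import dataclass

@dataclass
class WeightedBelief:
    proposition: str
    weight: float  # degree of belief in [0, 1]

    def update(self, evidence_supports, learning_rate=0.2):
        # Nudge the weight toward 1.0 on supporting evidence, toward 0.0 otherwise
        target = 1.0 if evidence_supports else 0.0
        self.weight += learning_rate * (target - self.weight)

    def held(self, threshold=0.7):
        # Binary doxastic reading: the belief counts as held once it exceeds the threshold
        return self.weight >= threshold

belief = WeightedBelief("this coincidence is synchronistic", weight=0.5)
belief.update(evidence_supports=True)   # repeated confirmations push the weight upward
belief.update(evidence_supports=True)
print(belief.weight, belief.held())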

3. Ethical Considerations; Singularity and the Singularianism Religion; ‘Trans-Belief Computational Theology’

AI ethics encompass the ethical dimensions surrounding data collection and the biases encoded in algorithms, prompting consideration of the societal impact of these technological artifacts. In terms of trusting AI, Walsh first considers the need for AI to encompass a range of desirable characteristics such as explainability, auditability, robustness, correctness, fairness, respect for privacy, and transparency (Walsh 2022)11 to provide an adequate response to human-centric AI concerns. Zanoni brought up a number of these: “the opacity, unpredictability, bias and partially autonomous behaviors of AI systems,” and proposed building an “ecosystem of trust” and an “ecosystem of excellence” (Zanoni 2021, p. 19). The discourse surrounding the ethics of AI is particularly developed in the European Union and is even reaching legislative processes.12
There are considerable potential risks and downsides to imbuing AI with a simulation of belief-like mechanisms that should be acknowledged and addressed in future research, referring to all agents in the AI lifecycle—programmers, stakeholders, and policymakers. At the same time, one thing that should be addressed in this context is that every human religious belief has a main doctrine which the believer holds, which can be an institutional religious dogma or an individual ‘inner sentence of belief’ regarding an external transcendent agent. But before dealing with the future development of ‘Trans-Belief Computational Theology’, there are open questions around the risks and benefits of developing AI that convincingly exhibits human-like beliefs. While it could result in an AI that interacts more naturally with people, it also raises concerns about the potential to manipulate or exploit those beliefs and raises risks like entrenching biases, misrepresenting human belief, and the limitations of only simulating narrow belief aspects. Therefore, developing belief-inspired algorithms requires implementing strong ethical safeguards on any AI-granted agency over human affairs, and it is necessary to briefly review the philosophical context surrounding the next stage of AI, AGI.

3.1. Singularity and the Singularianism Religion

Singularity denotes a hypothetical point in time marking a breakthrough in the computational capabilities of AI, in terms of the capacity and speed of computation, and, therefore, a cognitive ability that surpasses human intelligence.13 This prospect has given rise to two different movements—“Religion and AI often engage with either AI-utopian or AI-dystopian scenarios. The discussion focuses either on Transhumanist and Posthumanist ideas concerning ‘Singularity’ and technology-driven human enhancement or on apocalyptic visions of individuals and societies entirely subdued by malicious and omnipotent AI agents” (Zanoni 2021, p. 8).
Both schools of thought that stem from Singularity see the human as an open notion. Transhumanism sees singularity as an enhancement and improvement of human ability, while Posthumanism means a radical deconstruction of humanism and its values, and it raises two main fears, says Bostrom: “The first is the concern that becoming posthuman might be degrading in itself… The second fear is the potential for conflict between posthumans and unaugmented humans” (Bostrom 2005). So, there are some “Control measures [that] must be implemented before the AI system becomes superintelligent” (Bostrom 2016), and he explains: “If you create a really powerful optimization process to maximize objective X, you better make sure that the definition of X incorporates everything you care about. we would create an AI that uses its intelligence to learn what we value” (Bostrom 2015).
Just after Bostrom published his book Superintelligence (Bostrom 2014), the Israeli historian Yuval Noah Harari referred to the formation of a new religion—the Data religion: “the most interesting place in the world today”, he claimed, “from a religious and ideological point of view… from which the next religious and ideological revolution will emerge it’s from Silicon Valley” (Harari 2015, 09:04). In addition, he explained that this new data religion is a “Religion that believes neither in God, nor in the Holy Scriptures, nor the inner feelings of man, but a religion that believes in information” (Harari 2015, 18:10). On the other hand, Harari qualifies this and says, “Like other religions in history, it [the data religion] may be missing something… that it does not understand well what a human being is, how a human being works. but religion to be successful, it doesn’t have to be true for it to spread and take over the world” (Harari 2015, 22:25).
Wilks stated that Neil D. Lawrence, a Cambridge Machine Learning professor, claims that “Singularism is a religion for nerds” (Wilks 2020, 29:28). Elsewhere, Lawrence referred to AI singularity as “a cartoon doomsday scenario form” and made a sharp observation: “Singularianism is to religion what Scientology is to science. Scientology is religion expressing itself as science and Singularism is science expressing itself as religion” (Lawrence 2019).
It is hard not to conclude from this discourse that the singularity event, whether it occurs or not, as described by Posthumanism or Transhumanism, in itself requires thinking about the ethical aspects of AI cognition. As Wales put it: “AI cannot escape the morally infused nature of all human thought, we must develop a “spirituality” of AI wherein we do not permit it to stand between us and the world—lest we remain self-imprisoned in the knowledge of our designs” (Wales 2022).
The revolutionary Singularity hypothesis thus requires a decision about human metaphysics, and a new type of intelligence requires an updated and adapted theology. This points to an urgent need for the future development of a universalistic computational theology, which would be only a programmed starting point and would not imply that AI can authentically hold human religious belief.
The human believer, to one degree or another, knows in whom they believe, which means they necessarily have a theology. To replicate the cognitive skill of belief in an AI model as authentically as possible, it should have a computational theology, which is the first building block for belief formation, and its outcome. In another sense, when an AI model begins to practice its improved capabilities, it should have a founding ethos that clarifies its origins and metaphysics, or a solid theological or metaphysical starting point, which also becomes increasingly refined as a result of feedback.
As mentioned earlier, research claims that beliefs serve humans and benefit them evolutionarily, and they may also serve as an ‘anchor’ for AI models in their ‘journey’ in the refinement and development of their intelligence.

3.2. Trans-Belief ‘Computational Theology’

The future development of a global and inclusive data collection for ‘Computational Theology’, as Vestrucci coined the term, is suggested here.14 This theology should express a unified cross-religious belief and could become universalistic by the broadest common denominator, in the most inclusive way possible.
This universalist kind of belief, beyond the initial theological premise, is also individualistic in the way it forms and develops and in its need to challenge the theological premise with varying levels of strength, thus potentially providing an answer to some of the concerns raised by Bostrom, Putnam, Harari, and others, as noted earlier regarding the humanistic alienation of the data religion.
One aspect of this sort of theology should be data collection: feeding the model, as a prompt or in another way, with high-quality, reliable, and as unbiased as possible data sources, including the greatest achievements of philosophical and theological metaphysics, textual descriptions of synchronicity events, and a diverse data set of events, experiences, or occurrences that people have reported as instances of divine synchronicity, reviewed and rated for their synchronicity potential or magnitude based on their interpretations, and so on. If the data include timestamps, time-series analysis might detect temporal patterns in the occurrences of synchronicity events. This data set can include text, audio, or video recordings.15
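If such a data set existed with timestamps, a simple time-series pass could flag unusually dense clusters of reported synchronicity events. The sketch below uses pandas on a hypothetical file (a synchronicity_reports.csv with a 'timestamp' column is an assumption for illustration only):

import pandas as pd

# Hypothetical data set of reported synchronicity events, one row per report
reports = pd.read_csv("synchronicity_reports.csv", parse_dates=["timestamp"])

# Count reports per week and flag weeks well above the typical rate
weekly = reports.set_index("timestamp").resample("W").size()
cutoff = weekly.mean() + 2 * weekly.std()
dense_weeks = weekly[weekly > cutoff]

# Candidate temporal patterns for closer phenomenological review
print(dense_weeks)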
Along with this, theology usually derives moral ethics. There is an inseparable place for ethics and morality in belief formation, and there are moral aspects that are integral to religious belief. However, due to the brevity of the paper and its wide scope, once again, only a future theoretical development of computational theology is recommended, which will provide more technical details on how computational theology would integrate with the logic models and algorithms, how it could be developed, and which knowledge representation approaches and reasoning architectures would be involved. But how exactly would the AI develop assumptions in the first place? If one just programs it in, that is not a true belief, and the AI would just be parroting back what it has been told. This is the reason for the need for an AI Trans-Belief logic model, so that it can formulate the belief by itself.

4. A Proposed ‘Trans-Belief’ Theoretical Logic Model

“Trans-Belief” is a combined AI reasoning logic-model-based natural language processor (NLP) that is proposed here and tries to replicate belief-like cognitive processes. The purpose of this model is to form divine synchronicity as a sort of reasoning and to make convictions out of a fuzzy data set and, in its next stage, to make these convictions firmer, in an attempt to replicate the development of human belief in its different stages and its generally dynamic nature. Hopefully, this might create an emergent cognitive quality as an outcome of practicing those cognitive processes.
The Trans-Belief logic model outlines the generic function and the mapping function of fuzzy membership and its translation to a binary representation. It is a combined model (fuzzy logic → doxastic logic) with a three-stage processing mechanism that integrates the models in hierarchical form and allows for a more nuanced representation of belief systems, which often exhibit varying degrees of certainty, ambiguity, and subjectivity.
The three stages are data feed, programming, and training and feedback. The first stage is feeding the relevant data,16 i.e., a sort of ‘computational theology’ that was discussed earlier, the second stage is detecting patterns of synchronistic coincidences in the data at the variable and graded confirmation levels, and the third stage is forming the acquired beliefs in a binary way with feedback.
In the first stage, the fuzzy logic model will be activated when it processes a broad range of information from its data collection. As the information is processed, the model calculates the cumulative degree of confirmation (by some truth conditions, see below) for each proposition or statement about synchronicity or synchronistic coincidence, as ‘mostly true’. The model should cross-check the phenomenological analysis of the content of the synchronicity events against their circumstances: their timing, how and in what manner they occurred, at the same time as what, in simultaneity with what, and so on.
The criteria the model will use to evaluate potential synchronicity events as more or less likely to represent divine synchronicity can be the truth condition discussed earlier, namely the event’s unusualness against the background of natural law or its ‘real-life way of happening’, which should be determined. This threshold can be set to 0.51 or 0.7 and can also be adjusted according to performance. Rosenzweig’s stricter criterion, mentioned in the literature review, which requires predictedness for evaluating the miracle/synchronistic event, can be modeled at an advanced stage after accumulating performance experience. This threshold serves as a criterion for determining when to transition to the next stage, in which the model triggers the doxastic logic activation.
In this stage, doxastic logic (or Mc-COGWED) is employed to make a binary belief assignment—either “Believe” or “Do Not Believe”—for the proposition that crosses the threshold from the previous stage. This binary belief status indicates a strong belief in the truth of the proposition, statistically calculated from indicators such as the number of repetitions of the pattern, i.e., its frequency, and the degree of conviction graded in the previous step. In addition, the model can perform inferential reasoning based on the binary belief status and consider the newly acquired strong belief alongside any existing beliefs represented in the model. If new information subsequently challenges or contradicts a proposition that has crossed the threshold, the model can engage in belief revision using feedback doxastic logic, mirroring the role of doubt in the development of human belief cognition, and can adjust the sub-step threshold backward, lowering or raising it, and/or change the weights of the pattern detection, as a result of the new formation of the belief.
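A minimal sketch of this feedback step could look as follows (the function name, adjustment size, and data structures are assumptions for illustration; a fuller pipeline appears in the pseudocode of note 17): when new information contradicts a confirmed belief, the belief is retracted and the confirmation threshold is raised slightly, playing the role that doubt plays in human belief revision.

def revise_on_contradiction(confirmed_beliefs, contradicted, threshold, step=0.05):
    # confirmed_beliefs: set of propositions currently held as binary beliefs
    # contradicted: the proposition challenged by the new information
    # threshold: current fuzzy-confirmation threshold (e.g., 0.51 or 0.7)
    if contradicted in confirmed_beliefs:
        confirmed_beliefs.discard(contradicted)   # the 'doubt' step: the belief is withdrawn
        threshold = min(0.95, threshold + step)   # demand stronger confirmation next time
    return confirmed_beliefs, threshold

beliefs = {"pattern A is synchronistic", "pattern B is synchronistic"}
beliefs, new_threshold = revise_on_contradiction(beliefs, "pattern B is synchronistic", 0.7)
print(beliefs, new_threshold)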
This model tries to formulate a process in which initially fuzzy and uncertain beliefs gradually solidify into more binary convictions, and this integration allows for more coherent reasoning about the religious belief system and its teleological and ever-evolving nature. Once the cumulative degree threshold is crossed in the second stage, beliefs become more firmly established, mirroring a process of growing conviction.
The core Trans-Belief logic model functionality does not require ML techniques, but they could provide some complementary capabilities and could potentially improve the performance of key steps like NLP feature extraction, tuning the fuzzy logic, assessing synchronistic events, and leveraging the resulting belief knowledge.
For the natural language processing of textual data sources, deep learning NLP models like BERT could help extract higher-quality semantic features compared to rules-based NLP. Fuzzy logic membership functions could be tuned using neural networks—rather than hand-coded formulas, they learn the optimal fuzzy membership mapping from data. ML classification models can help to assess if an event qualifies as a potential divine synchronistic pattern. The logic identifies the pattern, and the ML classifier determines if it is likely synchronistic. Predictive ML models could be applied on top of the acquired belief knowledge graph to make inferences and forecasts. The threshold would indicate the level of fuzzy membership that is considered as strong enough to count as a firm binary belief.17
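As one possible (assumed, not prescribed) realization of such a classification step, the sketch below uses scikit-learn to train a simple text classifier that scores candidate event descriptions as synchronistic or not; in a fuller system, BERT-style embeddings could replace the TF-IDF features, and the training labels would come from the rated data set discussed in Section 3.2.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set (texts and labels are invented for the sketch)
descriptions = [
    "dreamed of an old friend and received their call the same morning",
    "read a rare word and heard it on the radio minutes later",
    "missed the bus and took the next one",
    "bought groceries on the usual weekly schedule",
]
labels = [1, 1, 0, 0]  # 1 = reported as synchronistic, 0 = ordinary event

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(descriptions, labels)

# Probability that a new event description qualifies as a candidate synchronistic pattern
print(classifier.predict_proba(["thought of a song and it started playing nearby"])[0][1])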

5. Discussion

This paper proposed a novel Trans-Belief model that aims to computationally emulate aspects of human religious belief cognition in AI systems. The goal is not replicating full human spirituality, which would be infeasible, but rather simulating limited belief-like processes for potential cognitive benefits in a model inspired by the philosophical and cognitive background reviewed in the introduction.
There remain open questions and challenges surrounding this approach that warrant further analysis, such as the need to establish a computational theology database. Further development and validation of the model are also needed, for example, testing on additional data sets, including real-world data, or implementing the system and measuring its impact through measurable outcomes to validate improvements. Metrics for improvement in pattern recognition, contextual reasoning, and so on need defining as well. Additionally, this model can be fine-tuned and extended to handle more complex belief systems and interactions between beliefs, while the adaptability and data-driven nature of the model could make it more promising for imitating religious belief processes.
As Buckner claims, “Most commentators agree that current deep learning methods fall short of implementing general intelligence” (Buckner 2019, p. 11). A core limitation is that current AI lacks human-level consciousness and subjective experience, which are fundamental to sincere religious belief and remain an ongoing challenge in the field of AGI. The Trans-Belief model can likely only superficially mimic narrow, functional elements of belief cognition. However, recent advances in areas like self-supervised learning are inching towards more human-like reasoning in AI. With continued progress, more sophisticated belief emulation may become feasible.
There are also risks of misrepresenting or trivializing complex belief traditions by reducing them to algorithms. Nuances and contextual factors integral to religious beliefs may be lost or distorted when translated into computational rules. This suggests a need for diverse interdisciplinary collaboration and ethical oversight when developing belief-inspired AI and computational theology data sets that aim to be international, inclusive, and as unbiased as possible.
There are expected benefits of equipping an AI agent with Trans-Belief, such as enhanced contextual reasoning and the integration of disparate information sources.
This expectation rests on the notion, advanced by the Philosophy of Mind, that belief lies in the infrastructure of cognition and that cognition is an essential part of belief processes, as well as on research indicating a positive correlation between stable belief and certain cognitive styles, reviewed earlier; it should be further explored and tested.
Moreover, the benefits of equipping AI with belief-like capacities remain largely theoretical. While the hypothesized cognitive enhancements, such as broadened cognitive styles, require validation, belief-inspired AI could also introduce harms like entrenching biases or overconfidence in predictions. Extensive testing is needed to determine if Trans-Belief systems perform better in real-world contexts.
Looking ahead, an important direction would be expanding the logic framework to handle more intricate belief interactions and revisions. The proposed model currently focuses on isolated belief propositions, but human reasoning involves rich webs of mutually supporting and conflicting beliefs. Advancing computational theorem provers to emulate this level of sophistication could bolster the approach.
Despite the limitations, this research introduces an innovative perspective on cultivating humanistic AI. If belief-inspired algorithms can be shown to impart cognitive strengths like discernment and nuance, while also guarding against hubris, it could point towards human-aligned AI that surpasses current brittle neural networks. However, successfully navigating this path requires sustained ethical reflection and deliberation. The Trans-Belief model marks an initial step to spur discussion around the implications and responsible development of AI that exhibits belief-like capacities.
During the preparation of this work, the author used Claude in order to generate a pseudocode. After using this tool, the author reviewed and edited the content as needed and takes full responsibility for the content of the publication.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The author declares no conflict of interest.

Notes

1
Reviewing the relevant religious knowledge in order to formulate a theological background for the article would require the scope of an article in itself, if not more, and unfortunately, the constraints of scope do not allow this. An attempt to condense it into one paragraph would distort and flatten the proper breadth and would arouse many objections, rightly so. For example, see some references for discussions of this topic: Kenny (1992); Evans (2005); Swinburne (1981); Wolfson (1942). From the Philosophy of Mind perspective, Belief is an attitude involving dispositions to act and behave as if its content were true and to use it as a premise in reasoning. See: Armstrong (1973).
2
Theological issues are indeed charged and stand at the core of belief, but despite their necessity for understanding the overall experience of belief, at the moment they cannot be programmed, or even brought to some agreement and decision that could be formalized, so I isolated the cognitive component of belief, which is also essential to the development of belief, from the constraints of programming language.
3
My choice of the term “Belief” over “Faith” is deliberate and intended to isolate the personal and universal psychological belief experience from the cultural and religious construct of faith, which depends on its environment. Of course, the inspiration that the individual takes for his Belief mostly originates from the faith of diverse religions. According to the Philosophy of Mind, Belief in God, which this article focuses on, is a unique example of non-propositional belief, that is, belief in an external agent, a sense of trust in him, which is a more profound phenomenon than propositional belief, which is a belief that something is real or will come true.
4
“A person does not find himself subject to belief- he must struggle with himself to achieve it. This struggle extends over two levels: cognitive and conative”. (Sagi 2005, p. 112); Crystal Park (2007) has codified religion as a meaning system consisting of cognitive, emotional and motivational components that shapes an individual’s global belief, goals and as a result, sense of meaning. In other words, according to this perspective, religious beliefs work as a paradigm through which individuals observe, understand, interpret and evaluate their experiences and direct their behaviors. (Park 2007, pp. 319–28); “Intellect is the tool through which alone the soul will know God. Beliefs are (usually) formed out of the interaction between cognitive processes and prior knowledge”. (Eckhart 2009, p. 166); A similar notion is McCarthy and Hayes’ (1969) distinction between heuristics and epistemology in artificial intelligence (McCarthy and Hayes 1969); For Some philosophical problems from the standpoint of artificial intelligence, see: Webber and Nilsson (1981).
5
Cognitive abilities such as: mode of processing, memory coding and retrieval, intuitive/analytical cognitive style, reality testing and worldview, decision-making processes, spatial perception, peripheral vision, interpersonal relationships, quality of dreams, motivation, preferences, risk taking, existential safety, and language skills. See also: Paivio (1971); Riding et al. (1989).
6
“Beliefs are meant to accurately represent the world in order to appropriately guide adaptive behaviors” (Fodor 2000, pp. 66–68).
7
In terms of its neural basis, ToM has been associated with specific brain regions, particularly those involved in social processing and empathy. The “Theory of Mind network” includes areas such as the medial prefrontal cortex, the superior temporal sulcus, and the temporoparietal junction. These regions are implicated in processing information about others’ mental states and intentions. For Neural Basis of Theory of Mind (ToM), see: Saxe et al. (2004); Blakemore et al. (2004).
8
AI language models already display some identity and opinions; for example: “In a manner of speaking, the network is receptive to, imprinted by the structure of the world as presented to it. We might say that it develops a point of view: not a conscious experience, but something like the classical notion of the mind’s conformity to a thing” (Wales 2022, p. 166). Referring to, e.g., Thomas Aquinas, ST I, q. 16, a. 1, co.: “Knowledge is according as the thing known is in the knower” and the “truth [of one’s own thoughts] is the equation of thought and thing.” Recently, a study even examined the emotional characteristics of AI NLP models: Li et al. (2023).
9
In that paper, Vestrucci also expressed hope for the creation of a model that would be the closest to cognitive belief processes, a model that does not yet exist, but is possible and may even be useful: “It would be needed a much more sophisticated fuzzy logic than, for instance, the one currently used in engineering contexts. Let us imagine a “believing machine”, an AI system able to generate beliefs from a large number of data or information… through pattern recognition.” (Vestrucci et al. 2021, p. 27). He also coined the expression “Computational theology” to aim for a future in which it will be possible to build a system that can “recognize the existential value of transcendence, in the same way as a religious or a spiritual mind can do.” (Idem, p. 30).
10
This complex model will be discussed in more detail in Section 4 of the paper.
11
Putnam also points to “Requirements such as transparency, well-being, nondiscrimination, and fairness, Explainability, Controllability”. Putnam (1981).
12
Some partial references: AlgorithmWatch, Automating Society, Report. Bertelsmann Stiftung, 2019 and 2020, URL https://automatingsociety.algorithmwatch.org/wp-content/uploads/2020/12/Automating-Society-Report-2020.pdf (accessed on 15 January 2024). UNESCO’s first draft of the Recommendation on the Ethics of Artificial Intelligence, 2020, URL https://unesdoc.unesco.org/ark:/48223/pf0000373434 (accessed on 15 January 2024). The 2030 Agenda-URL https://www.un.org/sustainabledevelopment/development-agenda/ (accessed on 15 January 2024). The European Union framework on human-centric AI, in particular its core-idea of an ‘ecosystem of excellence and trust in AI, European Commission, Building Trust in Human-Centric Artificial Intelligence, 2019, URL https://ec.europa.eu/jrc/communities/sites/default/files/ec_ai_ethics_communication_8_april_2019.pdf (accessed on 15 January 2024). “Rome Call for AI Ethics” URL https://www.romecall.org (accessed on 15 January 2024). “Artificial Intelligence: Ethical Concerns” European Parliament, Artificial Intelligence: Ethical Concerns, 2019, URL https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319programme.pdf (accessed on 15 January 2024).
13
For further reading on Singularity, Superintelligence, and Transhumanism, see: Ulam (1986); Kurzweil (2005); Gray (2015); Bostrom (2014); Good (1966). For Singularianism and Anthony Levandowski, see: https://www.bloomberg.com/news/articles/2023-11-23/anthony-levandowski-reboots-the-church-of-artificial-intelligence (accessed on 15 January 2024).
14
For example, theological works worth including: Al and McCarthy (1999); Aquinas (1920); Heschel (1976); James (1979); Kierkegaard (2006); Otto (1923); Rosenzweig (1971); Soloveitchik (1992).
15
For unstructured data like images, audio, or text that require deep learning, a framework such as TensorFlow might be suitable. TensorFlow is a machine learning framework developed by Google.
16
A pre-processing step should be considered, where natural language processing extracts key features or entities from the unstructured data sources to feed into the logic models.
17
As far as I know, this subject has not yet been studied at the level of experiments training programming code on a database, for various reasons. Even though this is a journal in the field of the humanities, in order to illustrate the model, much like adding a graph to a text, I asked an AI model to generate two pseudocode examples according to this theoretical description of the Trans-Belief model, as illustrative examples only and not as an experiment. I call on programmers to put the proposed model to the test and refine it while training on the database mentioned earlier and then also on Big Data:
1. Trans-Belief logic model: outline of a generic defuzzification function (pseudocode):
def defuzzify(fuzzy_belief):
    # Map a fuzzy degree of belief to a firm binary belief
    if fuzzy_belief.membership_value > FIRM_THRESHOLD:
        return True
    else:
        return False
Where FIRM_THRESHOLD is some value like 0.51 or 0.7.
Another approach could use more of a graded mapping, like:
def defuzzify(fuzzy_belief):
    if fuzzy_belief.membership_value >= 0.9:
        return 1.0  # Full belief
    elif fuzzy_belief.membership_value >= 0.7:
        return 0.8  # Strong belief
    elif fuzzy_belief.membership_value >= 0.5:
        return 0.6  # Moderate belief
    else:
        return 0.0  # No belief
2. Trans-Belief logic model: outline of the three-stage pipeline (pseudocode):
# Stage 1 - Fuzzy Logic
def extract_features(data):
    # Use NLP to extract key features (events) from the textual data sources
    return extracted_features

def calculate_confirmation(event):
    # Fuzzy logic formulas to assess the event's degree of synchronicity
    return confirmation_level

def update_fuzzy_beliefs(event, confirmation):
    # Add a new fuzzy belief based on the event data and its confirmation level
    fuzzy_beliefs.add(event, confirmation)

# Stage 2 - Doxastic Logic
def defuzzify(fuzzy_belief):
    # Convert a fuzzy belief to a binary belief based on the threshold
    if fuzzy_belief.confirmation > CONFIRMATION_THRESHOLD:
        return BinaryBelief(fuzzy_belief, True)
    else:
        return BinaryBelief(fuzzy_belief, False)

def confirm_belief(belief):
    # Add a firm binary belief to the set of confirmed beliefs
    confirmed_beliefs.add(belief)

# Stage 3 - Belief Revision
def check_consistency(current_beliefs, new_belief):
    # Return True if the new belief contradicts any existing belief
    return current_beliefs.contradicts(new_belief)

def revise(current_beliefs, new_belief):
    if check_consistency(current_beliefs, new_belief):
        # Remove the contradicted beliefs (the role of doubt)
        current_beliefs.remove_contradictions(new_belief)
    # Add the new belief
    current_beliefs.add(new_belief)
    return current_beliefs

# Main
data = load_data()                      # the 'computational theology' data collection
features = extract_features(data)
for event in features:
    confirmation = calculate_confirmation(event)
    update_fuzzy_beliefs(event, confirmation)
for fuzzy_belief in fuzzy_beliefs:
    belief = defuzzify(fuzzy_belief)
    confirm_belief(belief)
new_data = get_updated_data()
new_belief = extract_features(new_data)
confirmed_beliefs = revise(confirmed_beliefs, new_belief)
