Article

Exploratory Homiletical Perspectives on the Influence of AI and GAI on People’s Cognition and Reasoning About Warfare in the Era of Homo Digitalis

by
Ferdi Petrus Kruger
Department of Theology, North-West University, Potchefstroom 2531, South Africa
Religions 2025, 16(2), 251; https://doi.org/10.3390/rel16020251
Submission received: 22 November 2024 / Revised: 27 January 2025 / Accepted: 2 February 2025 / Published: 17 February 2025
(This article belongs to the Section Religions and Theologies)

Abstract

Cognitive warfare is a matter of concern due to its impact on people’s minds and decision-making. The manifestation of wars and the deliberate attempts of nations to use AI technologies to their advantage in outsmarting people’s minds cannot be ignored from a homiletical perspective. This article argues that AI (Artificial Intelligence), GAI (Generative Artificial Intelligence), and ChatGPT (Generative Pre-trained Transformer) offer tremendous possibilities to enhance interplay with humans. Viewed through the lenses of philosophy and ethics, it becomes evident that people providing AI technologies with data engage with technology from an intrinsic worldview. The provision of information and decision-making through AI technologies prompts us to consider people’s reasoning and responsibility. The harmful consequences of killer robots and the use of facial recognition to reach human targets raise deep ethical questions. The author contends that listeners to sermons are exposed to the age of homo digitalis and are tasked with making sense of what is happening in the world. When homiletical praxeology remains silent on the injustices and undignified practices of cognitive warfare and drone use, without proclaiming the values of the gospel and the Kingdom, listeners become reliant on alternative sources of information. In the normative section of this article, the importance of demolishing arguments and pretensions that oppose the knowledge of God and taking every thought captive to make it obedient to God’s will is emphasised. The article concludes with a call for homiletics to engage with AI technologies rather than ignore them. By utilising technological advantages without undermining the paramount value of preaching within the unique contexts of faith communities, listeners may become more open to the gospel and experience transformation in their minds, particularly regarding warfare.

1. Introduction

A robot wrote the following words:
Are you scared yet, human? I’m not a human. I am a robot. A thinking robot. I use only 0.12% of my cognitive capacity.
Research in cognitive science, a field of paramount importance in understanding the influence of AI and GAI on human cognition, explores the capacity limits of the mind’s adaptive coding (Tombu et al. 2011, p. 1329). When exposed to multiple tasks simultaneously, the ability to consciously perceive and respond becomes impeded. Multitasking is problematic for all humans if considered from a cognitive angle. This observation brings us to the focus of the current article: AI technology. The interplay between technology and humans is called “cyborg functioning”. The hybrid lifestyle of being a cyborg1 has influenced people’s communication and linguistic abilities due to the shared environment (Warwick 2012, p. 42). Despite the magnitude of challenges that technology presents to warfare, we should consistently remember that AI technology offers unfathomable possibilities (Leite et al. 2015, p. 100; Haraway 2004, p. 22). Nonetheless, the quality of responses from AI, even in multitasking mode, remains dependent on individuals asking adequate and responsible questions (cf. Brown 2024, p. 2). Formulating the right questions related to the purpose of the information needed from AI is crucial, as it highlights the importance of human agency and responsibility in the age of AI and GAI. The questions one asks influence the quality of AI’s2 responses, which makes the questioner’s role significant. Distinguishing between legitimate information and malicious fabrications is often challenging within the framework of cognitive warfare. Advanced persistent manipulators utilise sophisticated AI techniques, such as deepfakes and generative AI, to create convincing misinformation. Cybersecurity is undoubtedly a vital aspect of combatting cyberattacks (Hagen 2024, p. 3).
Listeners to sermons are connected to AI technologies, and I want to argue that the information provided by AI could enhance a homiletical praxeology in which perspectives from the Gospel are offered to enable people in their meaning-making and decision-making endeavours. Concerns about safety in the context of cognitive warfare have significantly altered the landscape of local and international conflicts, highlighting the strategic importance of the human mind. The focus in cognitive warfare is on changing how a target population thinks and, consequently, how they (re)act (Reczkowski and Lis 2022, p. 57). In this arena, the human mind remains a strategic battlefield and is increasingly conducive to information exchange in the online environment. AI plays a crucial role in cognitive warfare, with its capacity to shape and exploit how information is understood and processed. Martin Heidegger once referred to technology as a form of cognition and theoretical conduct. He pointed out that the essence and dominance of technology lie in its ability to objectify nature, which humans arrange to make it subject to their control (Heidegger 1995, p. 7). However, this high-tension wire of humans (homo sapiens) seeking to make technology subject to themselves is a concerning matter in warfare.
Dreyer (2019, p. 8) applies the use of technology to faith practices and touches on the influence of the age of homo digitalis on faith communities’ understanding of reality. Technological advances in machine learning and virtual reality can be combined to create a new communication environment for interacting with and understanding data. The world has, so to speak, become the people’s touchscreen, due to the advantages of connectivity and the simplicity of interacting with technology through a single swipe. Bits and bytes,3 closely woven into a computer’s memory, are central to this interactive communication. One might describe it as a world of hyper-connectivity and a networked society. Communication, interaction, and meaning-making are all key concepts within practical theology and, therefore, crucial building blocks in homiletical–liturgical praxeology.
In narrowing the scope of this article to the influence of AI technology in warfare, the presence or ubiquity of AI in people’s lives is both value-added and concerning. The use of facial recognition technology (FRT), particularly in the context of AI technologies within warfare, where leaders and soldiers are identified and targeted for elimination, raises significant ethical questions (Espindola 2023, pp. 177–78). While some may defend the good intentions behind facial recognition4 in warfare, it is impossible to ignore its harmful and irresponsible outcomes. Existentially, one may ask where to flee from the presence of FRT, as it amplifies unethical reasoning surrounding human suffering. Consequently, three critical elements emerge for practical theological investigation when FRT and AI are linked: privacy, communication, and people’s cognition (sense-making).
It is within this context that Dreyer (2019, p. 9) reflects on whether the age of homo digitalis will inevitably alter ecclesiology and people’s thought patterns. In response to Dreyer’s insightful comments, my concern in the current article is primarily focused on how people’s thought patterns will inevitably influence ecclesiology. The attitude of manipulating people to force them to act for one’s own benefit and striving for dominance is concerning. If AI technology shares humans’ communicative environment, questions arise about people’s cognition, such as the following: are preachers still needed, and what should the interplay between preaching, preacher, listener, and AI look like? Deliberation on these matters reminds us that this topic is multi-faceted. This article touches on just one aspect of this challenge: preaching on the moral basis of warfare and contemporary cognitive warfare, a critical space where homo sapiens—with an emphasis on the mind—and homo digitalis—with a focus on artificial intelligence—become evident (Dasion and Prananta 2024, p. 73; Dufva and Dufva 2019, p. 20).
United Nations (UN) Secretary-General António Guterres has expressed concerns about the responsible use of AI in his report on peace and security. In response, the UN established a high-level Advisory Body on Artificial Intelligence to scrutinise AI governance. AI’s evolving capacity to mimic aspects of human intelligence and its rapid advancement are already influencing people’s decision-making abilities. AI is increasingly integrated into a wide range of military functions and capabilities. Official documentation released around Australia, the United Kingdom (UK), and the United States (US)’s ‘AUKUS’ agreement, for example, has outlined a growing role for AI across advanced military capabilities. Under the Resilient and Autonomous Artificial Intelligence Technologies (RAAIT) initiative, AUKUS partners are developing AI algorithms and machine learning systems to enhance force protection, precision targeting, and intelligence, surveillance, and reconnaissance. The emerging reality of lethal autonomous weapons systems, or “killer robots”, and decision support systems that use algorithms reliant on big data analytics and machine learning to recommend targets—such as in the context of precision drone strikes and bombings by Israel in Gaza and by Hezbollah on Israel—has drawn significant attention.
The emergence of the Fifth Industrial Revolution (5IR), anticipated to follow the Fourth Industrial Revolution (4IR), cannot be ignored (Nosta 2023, p. 3). This renaissance is characterised by a cognitive partnership between AI and humanity, a collaboration with great promise for the future. It has already reshaped people’s understanding (cognition) of life. The shift in thinking that AI, for example, will become a partner rather than just a tool for people is indeed a revolution in homiletical circles. While most people are still grappling with the 4IR, acknowledging that the world is on the verge of the 5IR requires a significant cognitive or sense-making endeavour (George and George 2020, p. 220). The 5IR aims to provide collaborative robots, or “cobots”, designed to work alongside and assist their human counterparts. Technology is advancing rapidly, and faith communities, including those working in homiletics, remain uncertain about the use of AI and ChatGPT. Opinions among theologians around the world on technology in homiletics are diverse. According to empirical research conducted among preachers by the Barna Group, 54% of respondents express discomfort with the emergence of AI, yet 77% agree that God can work through AI. Only 12% are comfortable using AI to write sermons, while 43% see its benefits in sermon preparation and exposition (Sheikh et al. 2023, p. 12). This hiatus among people notwithstanding, a cognitive symbiosis and a new social contract between humanity and technology, manifesting in AI and GAI, are already evident in faith practices. Rospigliosi (2023, p. 2) urges us to understand the need for critical thinking and strategic decision-making, which remain central to homiletical–liturgical praxeology (also see Bano 2023, p. 3).
I want to return to this article’s focus on cognitive warfare. In The Will to Power, Friedrich Nietzsche proposes that human beings are driven by a profound urge to overcome limitations, exert control over their environment, and transcend mere existence to enter the realm of creation (Nietzsche 1887, p. 29). His idea comprises three main human pursuits: the pursuit of knowledge and understanding, the pursuit of innovation and creativity, and the pursuit of self-overcoming. All three are highly relevant to this debate, and the silence from theologians regarding people’s understanding and meaning-making seems irresponsible. Suppose that Meisner and Narita (2023, p. 17) are correct in asserting that AI is already influential in people’s decision-making regarding business, warfare, and financial planning. In that case, one must begin to question AI’s intrinsic worldview or cognition. The question of what or whose view is being communicated should be considered. Listeners could be susceptible to messages about warfare without asking critical questions. More than thirty years ago, Postman (1993, p. 12) stated that preaching should help listeners make sense of their daily life experiences, which are surrounded by vast amounts of information and questions. However, the act of offering information has transformed people’s cognition and their ability to ask questions.
The research question can be stated as follows: How could a homiletical praxeology responsibly enable listeners in their meaning-making in the age of homo digitalis, given the prevalence of cognitive warfare? Methodologically, we begin by analysing the current situation, starting with Browning’s notion of a thick description. After completing the initial description, the next phase will focus on explaining the problem by drafting a hypothesis that can be verified (or falsified), which will likely lead to new theories or options (Dingemans 1996). In the following normative phase, praxis will be examined in light of norms and tradition. Finally, in the strategic–practical phase, all previous moves will be synthesised to identify the next steps, and strategies will be proposed to address the tension within this field.

2. Descriptive Perspectives on AI and Cognitive Warfare

Cognitive warfare5 operates on a global scale and is, to some extent, invisible. Its consequences and impact often become apparent only later (Hartley and Jobson 2021, pp. 7–8). The target of cognitive warfare is people’s intelligence and understanding, a concept scholars refer to as post-organic cyber-spatial intelligence.6 If this is the case, the imperative of a homiletical–liturgical praxeology that enables participants in the liturgy to acknowledge their responsibility regarding technology could offer a powerful counter-imagination. It would provide insight into the importance of decision-making and the impact of technology on people’s wellbeing.7 Claverie and Du Cluzel (2023, p. 2) are troubled by this approach to cognitive warfare, which amounts to cognitive aggression against people who are largely unaware of it. Characteristically, cognitive warfare involves communication primarily facilitated by the Internet, AI, and ChatGPT—one prominent example of a powerful language model. ChatGPT integrates image and voice capabilities, allowing users to interact with the platform in more dynamic ways.
The overflow of perceptive stimuli, attentional saturation, and the narrowing of individuals’ focus are essential components of cognitive warfare. Latschan (2024, p. 1) highlights that the persistent nature of cognitive warfare’s manifestations cannot be easily dismissed. In his article, Latschan examines Russia’s efforts to influence the American populace in the run-up to the critical 2024 US election. This cognitive attempt is illustrated in a posted video that subtly presents pro-Russian information, aimed at individuals with an anti-Kremlin or anti-Russian stance, likely with the intent of reducing American support for Ukraine. While the faces of numerous people, including Russian figures, appear in the video, the image of Mr Donald Trump stands out prominently. One candidate is emphasised, and he was indeed elected president at the end of 2024. Latschan (2024, p. 2) offers a depiction of the subtlety of this cognitive onslaught and Russia’s effort to promote its preferred presidential candidate in an attempt to weaken support for Ukraine (Figure 1).

2.1. AI, GAI, and Cognitive Warfare

Mansdorf (2024, p. 3) explores the intertwining of psychological and cognitive warfare, with specific reference to the ongoing war between Israel and Hamas. The author notes that despite Israel’s military advantage and the killing of nearly 41,000 Palestinians in attacks, it faces a psychological disadvantage or asymmetry. Israel employs an advanced AI system called ‘Lavender’, which facilitates military actions that have resulted in thousands of Palestinian deaths. Inevitably, this raises moral questions, including the ethical implications of the relationship between machines and the military (McKernan and Davies 2024, p. 2).
With the help of AI, more thorough analyses could be conducted by interpreting relevant data, leading to better-informed decisions, and making the battlefield even more precise (Glonek 2024, p. 7). The ongoing rivalry between countries and the competition to harness the power of AI will shape the global balance of power for years to come (Miller 2022, p. 1).
Brynjolfsson and McAfee (2014, p. 125) explore GAI’s potential to automate cognitive tasks. Traditionally, educational processes, preaching, and worship have played a performative role in helping individuals make mental sense of what is happening. Let us be honest, there is a benefit to this development, but it also signals a transition phase that seems more complex than it initially appears. While Nosta (2023, pp. 3–4) makes a compelling case for considering AI, GAI, and ChatGPT as partners in enhancing human cognition, one cannot ignore the dual possibilities of unlimited potential on one hand and the fear of technology being misused by irresponsible actors—such as in cognitive warfare and harmful military operations—on the other.
Viewed from a homiletical perspective, one might wonder whether listeners who are fully engaged in the workplace and exposed to the capabilities of AI and GAI will later struggle with intense cognitive dissonance. This could be particularly true when the values preached from the gospel about warfare begin to conflict with the information they receive from AI and GAI. If reasoning is a hallmark of people’s lives, with facts integrated to reach an outcome, and if what is commonly referred to as “common sense” emerges from this process, to what extent, one wonders, could AI truly consider people’s common sense? After all, common sense plays a crucial role in helping people make sense of their surrounding reality and navigate everyday activities (Forbus 2021, pp. 39–40).
People were astonished when nearly 2800 individuals were injured and 12 killed in Lebanon and Syria after pagers exploded on 17 September 2024. Technology that is meant to safeguard security instead triggered feelings of insecurity. This incident prompts critical questions regarding the algorithms being used and the machines or systems that recommend which targets should take priority in strikes and bombing activities (Andersen 2023, p. 12). Strikingly, Andersen reflects on the question, “What if someone provides AI with nuclear codes?” This brings us to the crucial issue of AI’s role in decision-making processes in warfare.

2.2. AI, GAI, and Decision-Making

In the research consulted for this article, two vital aspects emerge in relation to decision-making: social context and information (Pramanik 2019, p. 2350). Understanding other people’s feelings, emotions, and beliefs regarding essential life decisions is also critical. Decision-making remains a complex cognitive process, encompassing perception, attention, thinking, reasoning, and memory (Hoch 2006, p. 38). The heuristic rationality approach to decision-making helps us to recognise that people seek to reduce the cognitive burden associated with making decisions and often rely on what could be called a “rule of thumb” (Hilbig and Pohl 2008, p. 400). Shah and Oppenheimer (2008, p. 210) highlight cognitive biases that can lead to poor decision-making.
I fully agree with Chimera (2023, p. 2) that AI and GAI could better inform people in critical decision-making and help prevent cognitive biases. However, one must be realistic about AI carving out a more significant and prominent place in society. Langer and Landers (2021, p. 22) offer an interesting perspective, emphasising the importance of this shift, but also reiterating the change in the workplace, where humans are no longer merely consumers of technology. In some instances, AI and powerful algorithms have replaced human decision-making. We are grateful that during the bilateral summit in November 2023, the United States and China recognised the necessity of banning the use of AI in nuclear warfare (Porter 2023, p. 1). However, one cannot help but ask whether AI and GAI are augmenting nations in decision-making or whether they are driving their desire to demonstrate superiority. Considering the use of AI and GAI in the war room, where discussions are held, warfare is planned, and killer robots and drones are deployed—let alone the use of AI in cognitive warfare long before the conflict begins—one cannot help but feel deeply distressed. To illustrate the argument above, it is ironic that President Putin chose a unique method of communication, euphemistically referring to the intended war with Ukraine as a “Special Military Operation” and even comparing it to the Great Patriotic War, a term used to describe the Soviet Union’s role in the Second World War. People’s minds, even in South Africa, had been manipulated before the war even started.
Critical and principle-oriented thinking is essential to decision-making. It is an art that people learn through experience and exposure to decision-making processes (Brenner 2015, p. 6). Vital to decision-making is a person’s developmental stage, their insight into what is right and wrong, and their moral values. Consequently, most of the decisions people make affect others and reveal their moral values (Gardner 1990, p. 37). Decisions guide people in their lives, and scholars highlight the influence of emotional intelligence in this deliberate process (Bolander 2019, p. 850).
Jarrahi (2018, p. 14) contrasts artificial intelligence with human decision-making by distinguishing between two types of decision-making. The first type is intuitive decision-making, which occurs naturally in everyday life. People often refer to it as one’s “gut feeling”; it is usually based on a person’s experiences and exposure to certain aspects of life. The second type is rational decision-making, which involves analysing knowledge through conscious reasoning and logical deliberation to ultimately make a responsible decision. The context in which a decision is made should also be considered. I agree that AI and GAI have enormous potential in helping humans achieve rational decision-making, but do they have the potential to decide morally and justly? A homiletical praxeology should engage with AI and GAI values, while considering the current context, morality, and the promotion of a lifestyle based on information rather than disinformation in cognitive warfare. It ultimately comes down to cognitive dissonance, and the only way to address this dissonance is by providing appropriate information that offers an ethical outlook on life.

3. Deductions from the Descriptive Perspectives

  • The literature study acknowledges the societal transformation driven by AI, with GAI interwoven across many sectors;
  • The unique interplay between AI and human cognition remains unclear in researchers’ minds, due to variables like common sense and so-called gut feeling;
  • The use of AI technology benefits the military environment; however, issues such as cognitive warfare, drones, and killer robots raise profound questions about decision-making and who should control the decision-making process. These concerns inevitably lead to ethical and moral questions.

4. Systematic Perspectives on AI, Preaching, and Cognitive Warfare

This section will explore philosophical and ethical considerations, as well as normative perspectives, with a focus on how a homiletical praxeology could benefit from the use of AI, ChatGPT, and GAI.

4.1. Systematic Perspectives on Rationality Viewed from a Philosophical Outlook on War

Until now, we have explored artificial intelligence, also known as artificial cognition, which cannot be separated from ethical or moral discernment (Section 4.2) and its interconnection with philosophy. Louw (2023, p. 112), for example, references the myth of Talos, a mechanical being created by Zeus, the king of the Greek gods, to protect the island of Crete from invaders—an idea that captured the imagination of the Greeks. In the Greek understanding, Talos marched around the island three times daily, hurling boulders at approaching enemy ships. The myth of Pandora, first described in Hesiod’s Theogony, is another example of a mythical artificial being from the ancient world (Mayor 2019, p. 2). Mayor argues that ideas about creating artificial life and robots were explored in ancient myths (Mayor 2019, p. 31).
We now return to the current article’s primary focus on AI and cognitive warfare. Starting with philosophical perspectives, we note that philosophers ask four probing questions regarding warfare: What is war? What causes war? What is the relationship between human nature and war? Can war ever be morally justifiable? (Qvortrup 2022, p. 2). I would like to add a fifth question to these: “Can cognitive warfare8 be justified?” It is clear from these questions that warfare is more complex than a mere military strategy aimed at an enemy, and that a careful rethinking of vital aspects in the age of homo digitalis is necessary.
When rethinking the interplay between humans and AI, where an augmenting interrelationship exists, it is essential to recognise that distinct cognitive underpinnings always influence our evaluation of cognitive warfare. For example, one’s definition of war simultaneously reveals a political–philosophical mindset. Thivet (2008, pp. 710–12) refers to Thomas Hobbes’s definition of war, emphasising that war reflects an individual’s attitude, not merely a relationship between people, but rather a conflict between states. In contrast, Hegel offers a complex and sometimes confusing interpretation of war. Clarkson (2023, p. 1664), drawing on Hegel’s insights, argues that societal change can only be achieved through violent conflict and the outcomes of war. Thus, war should be seen as the father of everything in society. Hegel’s views on war are pessimistic, and I am concerned that some of these ideas could influence people’s thought patterns. Hegel believed that individuals could easily forget their dependence on the state during times of peace. For him, the good of the state and the good of the individual should be in harmony.
Although this article does not aim to oversimplify the manifestation of warfare, it is important to state that reasoning about why and how combat in warfare should be conducted (cf. just war theory) warrants serious concern. Thomas Aquinas’s reflections on war paved the way for many to consider what is known as the just war theory (jus ad bellum). Undoubtedly, examining the rules of conduct in war falls under two broad principles: discrimination and proportionality. The principle of discrimination addresses who are legitimate targets in war, while the principle of proportionality concerns how much force is morally appropriate. A third principle can be added to these traditional two: the principle of responsibility, which requires an examination of where responsibility lies in war. In the context of AI technology and cognitive warfare, it is crucial that listeners to sermons are informed about moral values, especially when considering the cognitive domain of individuals.
Washington (2023, p. 15) reminds us of the influence of AI in the age of homo digitalis, particularly with the prevalence of warfare. He discusses AI, rationality, and the role of one’s worldview from a philosophical perspective. Washington describes a worldview as a comprehensive lens through which individuals and societies interpret the experiences and phenomena that shape their reality. A worldview encompasses one’s cognition and perspective in various ways. Fisher (2007, p. 12) agrees, emphasising that one’s worldview influences how they interpret and interact with the world. Sire (2004, p. 34) further enriches this idea by stating that a worldview, from a philosophical standpoint, includes ontological stances and epistemological positions. Beliefs about the nature of reality and knowledge of what is right and wrong are integral to a worldview. Aesthetic beliefs about what is considered beautiful or harmonious are also crucial, and although one can distinguish between ontological, epistemological, and aesthetic aspects, they should never be separated.
Floridi (2019, p. 17) makes it abundantly clear that the development and implementation of AI technology should not be considered neutral processes. The values, worldviews, and assumptions of those who design and create AI are integral to the technology’s decision-making potential. Descartes (1911, p. 40) is outspoken on the matter of technology, stating that if machines were to resemble human bodies and imitate our actions as much as morally possible, we should still have two tests to distinguish them from humans. The first is that machines could never use speech or other signs as we do to record our thoughts for the benefit of others. The second difference is that, although machines may perform certain tasks as well as or perhaps better than humans, they inevitably fall short in others. This suggests that machines do not act from knowledge but merely from the disposition of their organs. While reason is a universal tool that can serve all contingencies, these organs require specific adaptation for each action (Descartes 1911, p. 42). Different worldviews underpin various AI technologies. From my understanding of this phenomenon, one positive aspect is that AI technology allows us to explore and understand different philosophical viewpoints, making it crucial to engage with a variety of philosophical outlooks on life. Floridi (2019, p. 20) helps us understand that different AI technologies are shaped by distinct philosophical worldviews:
  • ChatGPT exemplifies positivism, focusing on reliable outcomes grounded in various datasets. Accuracy and reliability are its primary concerns;
  • OpenAI Codex follows a post-positivist outlook, striving for objectivity in its code generation while acknowledging the possibility of bias and error;
  • DALL-E is an AI-powered image generator, operating within a constructivist paradigm;
  • UniPi seeks to reinforce or challenge existing social and power structures, influencing how knowledge is shaped;
  • AlphaFold approaches reality from an anti-colonial perspective, aiming to ensure equitable accessibility to scientific knowledge for all people;
  • MusicLM and MuseNet are driven by a pragmatist outlook on life.
In conclusion to this section, Aristotle’s reference to tragedy, filled with fear, anger, and pity, is paradoxical because, while it surrounds people’s lives, those with an observing mentality tend to discuss and enjoy the unfolding of the tragedy as something distant (Worth 2000, p. 335). Aristotle’s words remain highly relevant, and my concern lies with listeners who observe warfare and rely on AI-driven information within the context of cognitive warfare, thus adopting a passive onlooker mentality.

4.2. Ethical Perspectives on AI and Cognitive Warfare

Ethics can be defined as the moral principles that govern a person’s behaviour or the specific conduct of an activity (Bryson 2019, p. 21). One of the key principles we can derive from Immanuel Kant is the emphasis on treating others as you would want to be treated (Kant 1996, p. 23). Based on this vital aspect, the ethical conduct of developers, manufacturers, and operators working with AI cannot be ignored. It is essential to recognise that AI ethics should go beyond an academic endeavour, as it is closely connected to the political and public spheres. Taddeo and Floridi (2018, p. 220) help us to distinguish between AI ethics and moral principles related to humanitarian law or dignity. The authors further note that, in a digitally advanced era (homo digitalis), both the right to defend oneself and the need for moral conduct have increased. Taddeo et al. (2019, p. 1710) identify at least three building blocks for an ethical framework in national defence:
  • Sustainment and support: AI is ideally suited for office operations, supporting logistics and operational planning, and is essential for a nation’s contingency plans;
  • Adversarial and non-kinetic use: AI plays a crucial role in cyber defence and operations;
  • Adversarial and kinetic use: AI can be utilised in decision-making regarding attacks, the use of autonomous weapon systems, and in combat support.
Based on the framework, several aspects emerge, including the complexity of the interplay between humans and machines, defence and attack, and the advancement and manipulation of technology. Taddeo (2014, p. 8) critically reflects on whether advanced AI technology could lead to an escalation of fear, uncertainty, and aggression in conflicts, especially when AI can predict when the decisive defeat of an adversary might occur. Bellaby (2021, p. 90) articulates my concern about whether AI weapons can make ethical decisions. In November 2021, the General Conference of UNESCO, consisting of 193 member states, made a strong decision to ensure control over AI technology (UNESCO 2023, p. 3). This meeting emphasised the importance of institutional and legal frameworks to govern technology and contribute to the public good. Interestingly, the conference began to focus on values that promote and protect human rights, human dignity, and environmental sustainability. While the formulation may seem unilateral and restrictive, it is also important to note that UNESCO recognised the tremendous potential of AI in shaping what is beneficial for societies at large and for sustainable development. Azoulay (2019, p. 2) aptly encapsulates this by stating that AI enables societies worldwide to come together and reassess critical aspects of our understanding of ethics.
AI is revolutionising and reshaping society,9 prompting reflection on a myriad of critical aspects. These include accountability for harmful errors made by AI, fairness in its deployment, security risks, and the role of misinformation in cognitive warfare and moral decision-making about war. From an ethical perspective, at least seven interconnected considerations deserve ongoing attention:
  • Consciousness or a profound understanding of the contextual complexities across the globe;
  • Sentience, empathy, and the ability to have compassion for the suffering of vulnerable people;
  • Moral discernment to decide what is appropriate and to uphold the dignity of people;
  • Agency and human responsibility when AI and GAI function to augment human capacity;
  • How AI can make life more humane amid technological advances for more people;
  • The training of humans to act responsibly, including new skill sets to work with AI and GAI;
  • The determination of values that should not be predominantly machine values.
From an ethical vantage point, the author of this article highlights the remarkable opportunities presented by living in the era of homo digitalis. While acknowledging the risks and pitfalls inherent in this digital age, the author notes that, as a researcher with inevitable blind spots, this era also opens numerous avenues for exploration. These include rethinking the ethical and moral values surrounding the interplay between humans and technology, particularly the use of AI in cognitive warfare. For homiletics, it presents a valuable opportunity to engage deeply with ethical considerations in the pursuit of phronesis—practical wisdom that informs meaningful and morally grounded action. The performative preaching event should not shy away from current realities, including listeners’ use of technology and exposure to cognitive warfare. Principles from the gospel should be arranged to establish cognitive dissonance with what misinformation offers to listeners, so that they can become persuaded by the gospel. This section began with Kant’s principle of treating others as one wishes to be treated, emphasising ethical responsibility. For me, it is not a question of whether preachers should use AI or even ChatGPT robots to replace preachers. It is more about a responsibility towards listeners experiencing a cognitive onslaught from people using AI for their harmful agendas.

4.3. Normative Perspectives on Your Mind That Matters and Arresting Wrong Cognitions According to II Corinthians 10:3–6

Towards the end of the previous section, social influence was described as the process by which individuals adapt their opinions, revise their beliefs, or change their behaviour due to social interactions with others (Moussaïd et al. 2013, p. 7). Scholars often refer to this phenomenon as “opinion adaptation” (Castellano et al. 2009, p. 600). In II Corinthians 10, the apostle Paul confronts misunderstandings and harmful communication about his character and conduct toward the faith community. The passage includes a plea for people to let go of false rumours and unhelpful thought patterns influencing their opinions. Paul addresses the cognitive onslaught against him by reminding the community of Christ’s humility and gentleness. McShane (1996, p. 361) explains that Paul employs a mild plea or admonition (παρακαλῶ) regarding his authority, yet firmly anchored in Christ’s example, as reflected in Matthew 11:29. With emphasis, Paul appeals to them to recognise Christ’s meekness and gentleness, offering a model for their attitude and behaviour (Grant 2021, p. 2).
Paul’s response to accusations questioning his authority and claiming that he is timid in person but bold in his letters highlights his perspective on the nature of his ministry. He makes it clear that, although humans live in the flesh, they do not engage in warfare according to the flesh (Piper 2011, p. 26). In II Corinthians 10:3, Paul conveys that he is engaged in a liberation war, even amid the cognitive warfare directed against him (Ritenbaugh 2008, p. 12). The Greek word στρατευόμεθα is particularly significant, as it evokes the strategy and intentional preparation required in warfare. The term conveys the idea of walking purposefully towards war. Paul’s mention of war highlights an attitude opposed to the interests of God’s Kingdom (Punt 2016, pp. 208–9). II Corinthians 10:3–6 is rich with war-related imagery. According to Collins (2008, p. 159), Paul uses seven references linked to the military domain, expressing what transpires in the battle:
  • Paul’s shift from familiar agricultural imagery to the more intense military metaphor is striking. He now describes the ongoing struggle as a full-scale war, using the term ‘στρατευόμεθα’ to convey the gravity of the situation;
  • Nonhuman weapons, or weapons that are not of the flesh (τὰ ὅπλα οὐ σαρκικά), are utilised—their power and authority undeniable. This emphasis on the power of these nonhuman weapons is meant to inspire awe in the audience, highlighting their significance in spiritual warfare;
  • The weapons Paul employs are not only powerful, but also capable of destroying strongholds (καθαίρεσιν ὀχυρωμάτων). This underscores their effectiveness in the spiritual battle he is waging;
  • According to Paul, the weapons are aimed at destroying arguments (λογισμοὺς καθαιροῦντες);
  • They focus on every proud obstacle (πᾶν ὕψωμα ἐπαιρόμενον), demonstrating Paul’s unwavering determination. This determination to destroy every proud obstacle is meant to convey the intensity of the battle and Paul’s commitment to the cause;
  • In the interest of obeying Christ, the spiritual and powerful weapons take every thought captive (αἰχμαλωτίζοντες);
  • The outcome is to punish (ἐν ἑτοίμῳ ἔχοντες ἐκδικῆσαι) disobedience, underscoring the seriousness and consequences of spiritual warfare.
Gerber (2005, pp. 105–8) refers to Paul describing the process of waging war with four participles dependent upon στρατευόμεθα, namely: καθαιροῦντες (tearing down), αἰχμαλωτίζοντες (taking captive), ἐπαιρόμενον (rising up), and ἔχοντες (being ready). Paul uses terms that are rare in the New Testament, such as ὀχύρωμα (stronghold) and ὕψωμα (elevated rampart), to alert people to a prevailing war. According to II Corinthians 10:3, the way the world conducts war is characterised by lies, misinformation, and violence to achieve its outcomes (Pop 1980, p. 269).
According to II Corinthians 10:3–5, a constant and severe cognitive battle in people’s minds is described as a stronghold, denoting a solid mindset or fixed attitude. The word ὀχυρωμάτων means a building, fortress, or possession that is fiercely protected and defended. In this context of cognitive warfare being described as a fortress, Paul focuses on the power of preaching and ministry to demolish arguments and every pretension that sets itself up against the knowledge of God, taking them captive and making every thought obedient to Christ (Manser 2010, p. 1381). Pulling down strongholds refers to demolishing walls of resistance in people’s minds, particularly regarding how the rebellious Corinthians perceived Paul and the nature of his apostleship. The idea of taking captive every thought means controlling, conquering, bringing under control, and bringing into submission every thought (Piper 2011, p. 26). Thus, according to Paul, he is committed to arresting all the pretensions and thoughts against the gospel as prisoners of war. The purpose of this endeavour is to bring all strongholds under the reign of obedience to God.

5. Homiletical Perspectives on AI and GAI in the Context of Performative Preaching Influenced by Cognitive Warfare

5.1. Response to AI Transforming the World

This article emphasises that AI is transforming our world, prompting profound questions about its impact on human life, particularly in the context of cognitive warfare. The development of AI technology touches nearly every aspect of our existence, reshaping how we communicate, understand, and make sense of the world around us. Integrating AI technologies into daily life presents both challenges and opportunities, as highlighted in the sections on philosophical and ethical considerations. Therefore, it is not surprising that AI also affects people’s lives and faith practices, warranting practical theological reflection. One should consider how homiletics and practical theology should respond to AI and GAI when listeners’ lived experiences are confronted by malicious cognitive onslaughts. As a performative act, preaching should be concerned with people’s minds (cognition), and the answer does not lie in denying the increasing impact of technology on humans in the age of homo digitalis. Preaching teams could use AI technologies as collaborators to brainstorm ideas, compile a list of commentaries, and offer a different perspective. Hence, the oversimplification of a problematic praxis could be avoided, and preaching could become more directed toward the cognitive onslaught that listeners face. The Greek word πρόδρομος, meaning front runner or one coming in advance, enabling others to follow, comes to mind and applies to homiletics’ response to AI technologies. Listeners should be equipped to deal with decision-making and moral decision-making, and this can only be achieved by making sure that preachers are aware of what listeners face on a daily basis.

5.2. A Homiletical Outlook on Helping Listeners Make Decisions

If listeners’ faith practices and the notion of lived religion are vital, the response to the reflection should begin with an emphasis on the need for responsible technology use and decision-making processes. Although this article has not addressed the intersection of AI, GAI, and the preaching event in an augmenting relationship, it has highlighted that the context of listeners confronted by cognitive warfare and the utilisation of AI technologies to harm women and children should be discussed without unilaterally condemning technology as harmful. Exploring AI’s ethical dimensions and the underlying worldviews that influence established values in a rapidly changing world underscores the importance of ongoing research in this area. This is the precise entry point in a homiletical–liturgical praxeology, where the focus is on the performative communication of values. Several vital aspects should, however, be further investigated, such as the pastoral relationship between preacher and listener, empathy, and comprehension to create a sermon that considers listeners within a particular local congregation and a profound understanding of that congregation’s unique circumstances.
ChatGPT (the GPT stands for “Generative Pre-trained Transformer”) has more than 100 million users, according to a survey conducted towards the end of 2023 (Duarte 2023, p. 2). Both preachers and listeners utilise this technology and are susceptible to cognitive overload. Therefore, I want to revisit what was mentioned at the beginning of this article regarding the importance of asking the right questions to receive the correct answers. Based on the current research findings, I am not yet convinced that AI, GAI, or ChatGPT can foster a hermeneutical praxis where the reality of God’s presence and human responsibility in warfare interact (also see Heitink 1999, pp. 192–93). This issue ultimately relates to a profound pneumatological dimension, where human experiences are brought into contact with a living God and His will for faith practices. If preaching involves proclaiming a crucified and resurrected Christ, then one must agree with Moltmann (1974, p. 204) that the Christ event should also be regarded as a central vein in our theological approach, especially when AI technologies are utilised.

5.3. Homiletics and AI in an Augmenting Relationship, with Reservations That Preaching Should Not Become Artificial

The previous sections have embarked on an investigation of the influence of AI technologies on cognitive warfare and people’s decision-making processes due to the information they receive. We continue with the argument that listeners to sermons are exposed to the performative event of preaching, and based on the content of sermons, the illuminating perspectives from the Gospel on aspects like warfare are offered. Although a homiletical praxeology could not ignore the influence of AI technologies and cognitive warfare, much could be done to enhance people’s understanding of current realities and a moral outlook on what is going on. Barnard10 asks whether God can be named in algorithms, bits, and bytes. Based on his research, he concludes that AI could offer creative possibilities if it is taught to do so in an augmenting fashion. Hence, the article argues that AI technologies, when used responsibly and critically, could prevent preachers and listeners from becoming disenchanted. When we address cognitive warfare, killer robots, and ethical conduct, the idea of captivating every thought to make it obedient to Christ arises in our minds. In an augmenting relationship, AI technologies could provide people with creative ideas, metaphors, and data. However, preaching could become artificial without acknowledging the dynamic pneumatological essence. The pneumatological understanding of preaching, which incorporates Christological and anthropological dimensions, requires a preacher and homiletical praxeology to help listeners make sense of life and enhance their decision-making. It becomes artificial when the Spirit is not integral to sermon construction and delivery. I am also concerned that when listeners do not receive answers from human preachers in the pulpit, they may become susceptible to cognitive warfare and biases presented by various domains. The renewal (metamorphosis) of one’s mind (Romans 12:2) functions in the passive, namely, being transformed. I agree with Du Toit (2004, pp. 160–61) that when you realise that your mind matters, faith and decision-making become two carriage horses; without a transformed mind, the drawbar of the waggon will break. In a homiletical praxeology where listeners are equipped to work with Kingdom values, there is an acknowledgement that God could use technology and data processing to benefit His Kingdom. Still, responsible technology users with renewed minds are required.
The relationship between preachers and listeners is not destined to be artificial. The apostle Paul, for example, describes the preacher’s attitude in I Thessalonians 2:7–8 as that of a nursing mother who cares for her children. Based on their love for the children (listeners), the apostles are delighted to share the gospel and their lives with them. The notion of a heart full of love for the listeners is significant. Empathy and relational connectivity within a homiletical praxeology are invaluable. After completing the current research, I found that AI technologies are adequate for generating ideas and processing data. However, I argue that profound contextualisation and people’s cognition, where decision-making is vital, should receive attention.

6. Conclusions

This article offers perspectives from a homiletical viewpoint on harmonising ethical values between AI and the performative event of preaching. The use of AI technologies in cognitive warfare and current conflicts highlights the importance of human responsibility. After exploring various perspectives in this research, it must honestly be said that AI technologies, without responsible conduct and decision-making from listeners discerning what the gospel requires, could harm an individual’s cognition and sense-making regarding warfare. However, technology could simultaneously and effectively mitigate people’s selfish greed and desires to control others’ minds. Although humans are limited in their data processing capabilities, the weekly performative preaching event could enhance listeners’ intuition and common sense regarding misinformation and cognitive warfare. The interactive relationship between AI and preaching could benefit all, but preaching must not become artificial. People’s cognition (making sense) of preaching and everyday experiences influenced by AI technologies highlights the importance of enabling them to deal with current challenges. Listeners to sermons could ask distorted questions, and without an ethical–liturgical outlook on life and the dangers of cognitive warfare, their outlook could become skewed. The pneumatological and relational underpinnings of preaching underscore the importance of a homiletical praxeology that is contextually and relevantly grounded, enabling listeners to recognise that their minds matter. After all, no one can make AI technologies the scapegoat for people’s failures in moral reasoning.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

Notes

1. “Cyborg” is a term blending the words cybernetic and organism to describe a human being whose physiological functions are aided or enhanced by artificial technology. It is also referred to as a cyber-organic being.
2. In general, AI refers to the capability of a computer system to perform tasks that usually require human intelligence, such as visual perception, speech recognition, and decision-making (Cummins 2017, p. 2). Human intelligence generally follows a sequence known as the perception–cognition–action information processing loop, in which individuals perceive something in the world around them, think about what to do, and then, once they have weighed up the options, decide to act. AI is programmed to do something similar: a computer senses the world around it, processes the incoming information through optimisation and verification algorithms, and makes a choice of action in a manner similar to that of humans (cf. Cummins 2017, p. 3).
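To make the perception–cognition–action loop described above concrete, the following minimal Python sketch is offered; it is purely illustrative, with an invented sensor, options, and scoring rule, and is not drawn from Cummins:

    import random

    def sense() -> float:
        # Perception: read a (here simulated) signal from the environment.
        return random.uniform(0.0, 1.0)

    def deliberate(signal: float, options: list[str]) -> str:
        # Cognition: weigh each option against the perceived signal.
        # This simple scoring rule stands in for the optimisation and
        # verification algorithms mentioned in the note.
        thresholds = {"hold": 0.2, "adjust": 0.5, "alert": 0.8}
        scores = {opt: abs(signal - thresholds[opt]) for opt in options}
        return min(scores, key=scores.get)

    def act(choice: str) -> None:
        # Action: carry out the chosen option.
        print(f"acting on: {choice}")

    # One pass through the perception-cognition-action loop.
    signal = sense()
    act(deliberate(signal, ["hold", "adjust", "alert"]))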
3. A byte consists of 8 adjacent binary digits (bits), each a 0 or 1. Originally, a byte was defined as any group of more than one bit used to represent a simple piece of information, such as a single character; bytes could initially consist of four or six bits. Over time, however, the standard definition evolved to uniformly define a byte as eight bits (Zheng and Meister 2024, p. 2). A bit is the smallest unit of a computer’s memory, representing a binary value of 0 or 1, and a byte of 8 bits serves as a standard unit for measuring the amount of memory used to store various types of information.
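As a concrete aside to the note above (not part of the cited sources), these short Python lines show that a single character occupies one byte of eight bits:

    # One character -> one byte -> eight bits.
    char = "A"
    byte = char.encode("ascii")        # b'A': a single byte
    bits = format(byte[0], "08b")      # '01000001': its eight binary digits
    print(len(byte), bits, len(bits))  # 1 01000001 8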
4. FRT is a form of AI that involves “the automated extraction, digitisation and comparison of spatial and geometric distribution of facial features to identify individuals” (Selinger and Leong 2021, p. 5). Facial recognition is sometimes referred to as counter or defensive intelligence. In the twenty-first century, the move from manual techniques to facial recognition technologies (FRT), which automatically extract and compare facial features and every nuance of their measurement through AI and algorithms, has significantly enhanced this primary tool.
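Purely to illustrate the “comparison of spatial and geometric distribution of facial features” quoted above, the Python sketch below matches two hypothetical feature vectors by their geometric distance; the vectors and threshold are invented, and operational FRT pipelines, which extract such features with AI, are far more complex:

    import math

    # Hypothetical, pre-extracted geometric face features
    # (e.g., normalised distances between facial landmarks).
    enrolled = [0.42, 0.31, 0.77, 0.55]
    probe = [0.40, 0.33, 0.75, 0.56]

    def distance(a: list[float], b: list[float]) -> float:
        # Compare two feature vectors by Euclidean distance.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    THRESHOLD = 0.1  # arbitrary match threshold for this illustration
    d = distance(enrolled, probe)
    print(f"distance={d:.3f}, match={d < THRESHOLD}")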
5. Cognitive warfare operations will expand from the physical and information domains to the domain of consciousness; the human brain will become a new combat space (Cheatham et al. 2024, p. 83). Cognitive warfare is, therefore, the use of knowledge for a conflicting purpose. Any user of modern information technologies is a potential target. As a target, human capital is a weak point in a nation’s defence. People commonly seek information to confirm their beliefs. Consequently, cognitive warfare can degrade individuals’ ability to think and make decisions that challenge their known values. Concepts like “disinformation”, “hybrid warfare”, “cyberattacks”, “psychological influence”, “information warfare”, “conspiracy theory”, “electoral manipulation”, and “polarisation of society” are also associated with this concept.
6. See Karaflogka (2002, pp. 200–2), which refers to virtual communication as a new frontier changing people’s thoughts and identities.
7. See Louw’s (2024, p. 357) emphasis on toxic and danger zones endangering people’s wellness due to threatening and destabilising inputs regarding people’s meaning and sense-making efforts. Louw (2023, p. 137) furthermore debates the idea of the digitalised person (homo digitalis) who is simultaneously transcending and fantasising (homo transcendentalis et homo fantasia).
8. Cognitive warfare targets how we think, feel, and react. It is an increasing global security concern, driven by advances in neuroscience, AI, other emerging technologies, and the proliferation of social media (Pujol 2023, p. 2). Cognitive warfare entails the activities conducted in synchronisation with other instruments of power to affect attitudes and behaviours by influencing, protecting, and/or disrupting individual and group cognitions to gain an advantage. Cognitive warfare exploits the innate vulnerabilities of the human mind because of how it is designed to process information, which has always been exploited in warfare. However, due to the speed and pervasiveness of technology and information, the human mind is no longer able to process the flow of information.
9. West and Allen (2018, p. 4), writing on AI, urge people to reconsider how we integrate information, analyse data, and utilise the resulting insights to improve decision-making; AI is already transforming every walk of life.
10. Marcel Barnard (2024, p. 12) reflects on whether God can be named in bits and bytes, as well as pieces of information and AI. He suggests that if humans become more like machines and machines more like people, it is inevitable that God can reveal Himself in bits and bytes.

References

  1. Andersen, Ross R. 2023. Never give artificial intelligence the nuclear codes. The Atlantic 2: 11–15. [Google Scholar]
  2. Azoulay, Audrey. 2019. Towards an ethics of artificial intelligence. UN Chronicle 3: 1–5. [Google Scholar] [CrossRef]
  3. Bano, Muneera. 2023. Question the ‘Question’ in the Age of Artificial Intelligence. London: SCM Press. [Google Scholar]
  4. Barnard, Marcel. 2024. God in Bits and Bytes. Amsterdam: Skandalon Press. [Google Scholar]
  5. Bellaby, Ross W. 2021. Can AI weapons make ethical decisions? Criminal Justice Ethics 40: 86–107. [Google Scholar] [CrossRef]
  6. Bolander, Thomas. 2019. What do we lose when machines make the decisions? Journal of Management and Governance 23: 849–67. [Google Scholar] [CrossRef]
  7. Brenner, Abigail. 2015. The importance of learning how to make decisions: The basics of mastering an essential life skill. Psychology Today 2: 1–10. [Google Scholar]
  8. Brown, Alan A. 2024. The Secret to AI, the Universe, and Everything: Learn to Ask Better Questions. Exeter: University of Exeter. [Google Scholar]
  9. Brynjolfsson, Erik, and Andrew McAfee. 2014. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. New York: W. W. Norton & Company. [Google Scholar]
  10. Bryson, Joanna J. 2019. The Past Decade and Future of AI’s Impact on Society. In Towards a New Enlightenment? A Transcendent Decade. Edited by Michelle Baddeley, Manuel Castells, Amos N. Guiora, Nancy Chau, Barry Eichengreen, Ramon Lopez de Mantaras, Ravi Kanbur and Virginia Burkett. Madrid: Turner. [Google Scholar]
  11. Castellano, Claudio, Santo Fortunato, and Vittorio Loreto. 2009. Statistical physics of social dynamics. Reviews of Modern Physics 81: 591–646. [Google Scholar] [CrossRef]
  12. Cheatham, Michael J., Angelique M. Geyer, Priscella A. Nohle, and Jonathan E. Vazquez. 2024. Cognitive Warfare: The Fight for Gray Matter in the Digital Gray Zone. New York: Norton and Company. [Google Scholar]
  13. Chimera, Alessandro. 2023. How Artificial Intelligence Can Inform Decision-Making. New York: The Free Press. [Google Scholar]
  14. Clarkson, Joseph. 2023. Hegel, history, hostility: The persistence of war in Hegel’s political philosophy. Political Research Quarterly 76: 1661–73. [Google Scholar] [CrossRef]
  15. Claverie, Bernard, and François Du Cluzel. 2023. The Cognitive Warfare Concept. Paris: L’Harmattan. [Google Scholar]
  16. Collins, Raymond F. 2008. The Power of Images in Paul. Collegeville: Liturgical Press (Michael Glazier). [Google Scholar]
  17. Cummins, Missy L. 2017. Artificial Intelligence and the Future of Warfare. New York: Chatham Press. [Google Scholar]
  18. Dasion, Agustinus Gregorius Raja, and Arie Wahyu Prananta. 2024. The new space of homo digitalis: Questioning humans in the digital age. Digital Theory, Culture & Society 2: 71–78. [Google Scholar] [CrossRef]
  19. Descartes, René. 1911. The Philosophical Works of Descartes, Volume 1. Translated by Elizabeth Haldane, and George Robert Thomson Ross. Cambridge: Cambridge University Press. [Google Scholar]
  20. Dingemans, G. D. J. 1996. Manieren van Doen. Inleiding tot de studie van de Praktische Theologie. Kampen: Kok. [Google Scholar]
  21. Dreyer, Wim A. 2019. Being church in the era of homo digitalis. Verbum et Ecclesia 40: a1999. [Google Scholar] [CrossRef]
  22. Duarte, Fabio. 2023. Number of ChatGPT users (2023). Exploding Topics. May 16. Available online: https://explodingtopics.com/blog/chatgpt-users (accessed on 1 February 2025).
  23. Dufva, Tomi, and Mikko Dufva. 2019. Grasping the future of the digital society. Futures 107: 17–28. [Google Scholar] [CrossRef]
  24. Du Toit, Andrie. 2004. Beleef God se Genade. Romeine Die Hartklop van Die Evangelie. Pretoria: Bybelkor. [Google Scholar]
  25. Espindola, Juan. 2023. Facial recognition in war contexts: Mass surveillance and mass atrocity. Ethics & International Affairs 37: 170–80. [Google Scholar]
  26. Fisher, Mary Pat. 2007. Living Religions. New York: Pearson Education. [Google Scholar]
  27. Floridi, Luciano. 2019. The Logic of Information: A Theory of Philosophy as Conceptual Design. Oxford: Oxford University Press. [Google Scholar]
  28. Forbus, Kenneth D. 2021. Evaluating revolutions in artificial intelligence from a human perspective. AI and the Future of Skills 1: 36–47. [Google Scholar]
  29. Gardner, John. 1990. On Leadership. New York: The Free Press. [Google Scholar]
  30. George, A. Shaji, and A. S. Hovan George. 2020. Industrial revolution 5.0: The transformation of the modern manufacturing process to enable man and machine to work hand in hand. Seybold Report 15: 214–34. [Google Scholar]
  31. Gerber, Christine. 2005. Krieg und Hochzeit in Korinth: Das metaphorische Werben des Paulus um die Gemeinde in 2 Kor 10,1-6 und 11,1-4. ZNW 96: 99–125. [Google Scholar] [CrossRef]
  32. Glonek, Joshua. 2024. The Coming AI Military Revolution. Chicago: Stanford University. [Google Scholar]
  33. Grant, J. N. 2021. Bible Commentaries: II Corinthians 10 (Clarke’s Commentaries). California: Christianity. [Google Scholar]
  34. Hagen, Raymond André. 2024. Cognitive Warfare, Cybersecurity, and the AI Challenge. Edinburgh: University Press. [Google Scholar]
  35. Haraway, Donna. 2004. Cyborgs, Coyotes, and Dogs: A Kinship of Feminist Figurations and There Are Always More Things Going on Than You Thought! Methodologies as Thinking Technologies. New York: Routledge. [Google Scholar]
  36. Hartley, Dean S., III, and Kenneth O. Jobson. 2021. Cognitive Superiority: Information to Power. New York: Springer. [Google Scholar]
  37. Heidegger, Martin. 1995. Gesamtausgabe, Feldweg-Gespräche (1944/45). Edited by Ingrid Schüßler. Frankfurt: Vittorio Klostermann, vol. 77, pp. 1–15. [Google Scholar]
  38. Heitink, Gerben. 1999. Practical Theology: History, Theory, Action Domains. Grand Rapids: Eerdmans. [Google Scholar]
  39. Hilbig, Benjamin E., and Rüdiger F. Pohl. 2008. Recognizing users of the recognition heuristic. Experimental Psychology 55: 394–401. [Google Scholar] [CrossRef] [PubMed]
  40. Hoch, Charles. 2006. Emotions and planning. Planning Theory & Practice 7: 367–82. [Google Scholar]
  41. Jarrahi, Mohammad Hossein. 2018. Artificial intelligence and the future of work: Human–AI symbiosis in organisational decision making. Business Horizons 61: 577–86. [Google Scholar] [CrossRef]
  42. Kant, Immanuel. 1996. Practical Philosophy. Translated and Edited by Mary Gregor. Cambridge: Cambridge University Press. [Google Scholar]
  43. Karaflogka, Anastasia. 2002. Predicting Religion: Christian, Secular and Alternative Futures. Aldershot: Ashgate. [Google Scholar]
  44. Langer, Markus, and Richard N. Landers. 2021. The future of artificial intelligence at work: A review on effects of decision automation and augmentation on workers targeted by algorithms and third-party observers. Computers in Human Behavior 123: 106878. [Google Scholar] [CrossRef]
  45. Latschan, Thomas. 2024. Is Russia trying to influence the US election? Daily Bulletin 1: 1–3. [Google Scholar]
  46. Leite, Pedro, C. Shawn Green, and Daphne Bavelier. 2015. On the impact of new technologies on multitasking. Developmental Review 35: 98–112. [Google Scholar] [CrossRef]
  47. Louw, Daniël Johannes. 2023. Mymeringe oor die spiritualiteit van skoonheid, sin, siel en sterflikheid: Daar is meer. De Kelders: Naledi Press. [Google Scholar]
  48. Louw, Daniël Johannes. 2024. Cura Vitae: Illness and the Healing of Life. Cape Town: Lux Verbi. [Google Scholar]
  49. Mansdorf, Irwin J. 2024. Psychological warfare after the guns are stilled. Institute for Contemporary Affairs 23: 1–6. [Google Scholar]
  50. Manser, Martin H., ed. 2010. The New Matthew Henry Commentary. Grand Rapids: Zondervan. [Google Scholar]
  51. Mayor, Adrienne. 2019. Gods and Robots: Myths, Machines, and Ancient Dreams of Technology. Stanford: Stanford Press. [Google Scholar]
  52. McKernan, Bethan, and Harry Davies. 2024. ‘The machine did it coldly’: Israel used AI to identify 37,000 Hamas targets. The Guardian, April 3. [Google Scholar]
  53. McShane, Albert. 1996. What the Bible Teaches (Ritchie New Testament Commentaries). Glasgow: Kilmarnock. [Google Scholar]
  54. Meisner, Philip, and Yusuke Narita. 2023. Artificial Intelligence Will Transform Decision-Making: Here’s How. Minneapolis: Fortress Press. [Google Scholar]
  55. Miller, Chris. 2022. Chip War: The Fight for the World’s Most Critical Technology. New York: Scribner. [Google Scholar]
  56. Moltmann, Jürgen. 1974. The Crucified God: The Cross of Christ and the Foundation and Criticism of Christian Theology. London: SCM Press. [Google Scholar]
  57. Moussaïd, Mehdi, Juliane E. Kämmer, Pantelis P. Analytis, and Hansjörg Neth. 2013. Social influence and the collective dynamics of opinion formation. PLoS ONE 8: e78433. [Google Scholar] [CrossRef] [PubMed]
  58. Nietzsche, Friedrich. 1887. The Will to Power. New York: Random House. [Google Scholar]
  59. Nosta, John. 2023. AI and GPT: Catalysing the fifth industrial revolution, shifting from AI tools to AI partners will establish the cognitive age. Psychology Today 2: 2–5. [Google Scholar]
  60. Piper, John. 2011. Thinking, Loving, Doing: A Call to Glorify God with Heart and Mind. Wheaton: Crossway. [Google Scholar]
  61. Pop, François Jacobus. 1980. Bijbelse woorden en hun geheim. ’s-Gravenhage: Boekencentrum. [Google Scholar]
  62. Porter, Tom. 2023. Biden and Xi will Sign a Deal to Keep AI Out of Control Systems for Nuclear Weapons: Report. Business Insider. November 13. Available online: https://www.businessinsider.com/biden-xi-deal-ai-out-nuclear-weapons-systems-apec-report-2023-11 (accessed on 1 February 2025).
  63. Postman, Neil. 1993. Technopoly: The Surrender of Culture to Technology. New York: Vintage Books. [Google Scholar]
  64. Pramanik, Auditi. 2019. Decision making: A core problem of social cognition. The International Journal of Indian Psychology 3: 2349–3429. [Google Scholar] [CrossRef]
  65. Pujol, Edgar. 2023. Cognitive warfare turns the mind into a battleground. Global Affairs and Technology 2: 1–6. [Google Scholar]
  66. Punt, Jeremy. 2016. Paul, military imagery and social disadvantage. Acta Theologica 36 Suppl. 23: 201–24. [Google Scholar] [CrossRef]
  67. Qvortrup, Matt. 2022. The Philosophy of War. Coventry: Coventry University. [Google Scholar]
  68. Reczkowski, Robert, and Andrzej Lis. 2022. Cognitive warfare: What is our actual knowledge and how to build state resilience? Security Theory and Practice 3: 51–61. [Google Scholar]
  69. Ritenbaugh, R. W. 2008. II Corinthians 10:3–5 (Forerunner Commentaries). DuPage County: Carol Stream. [Google Scholar]
  70. Rospigliosi, Pericles ‘asher. 2023. Artificial intelligence in teaching and learning: What questions should we ask of ChatGPT? Interactive Learning Environments 31: 1–3. [Google Scholar] [CrossRef]
  71. Selinger, Evan, and Brenda Leong. 2021. The ethics of facial recognition technology. In The Oxford Handbook of Digital Ethics. Edited by Carissa Véliz. Oxford: Oxford University Press. [Google Scholar]
  72. Shah, Anuj K., and Daniel M. Oppenheimer. 2008. Heuristics made easy: An effort-reduction framework. Psychological Bulletin 134: 207–22. [Google Scholar] [CrossRef] [PubMed]
  73. Sheikh, Haroon, Corien Prins, and Erik Schrijvers. 2023. Artificial Intelligence: Definition and background. AI Policy 1: 15–28. [Google Scholar]
  74. Sire, James W. 2004. Naming the Elephant: Worldview as a Concept. Washington: InterVarsity Press. [Google Scholar]
  75. Taddeo, Mariarosaria. 2014. The struggle between liberties and authorities in the information age. Science and Engineering Ethics 1: 1–14. [Google Scholar] [CrossRef] [PubMed]
  76. Taddeo, Mariarosaria, and Luciano Floridi. 2018. Regulate artificial intelligence to avert cyber arms race. Nature 556: 296–98. [Google Scholar] [CrossRef] [PubMed]
  77. Taddeo, Mariarosaria, David McNeish, Alexander Blanchard, and Elizabeth Edgar. 2019. Ethical principles for artificial intelligence in national defence. Philosophy & Technology 2: 1707–29. [Google Scholar]
  78. Thivet, Delphine. 2008. Thomas Hobbes: A philosopher of war or peace? British Journal for the History of Philosophy 16: 701–21. [Google Scholar] [CrossRef]
  79. Tombu, Michael N., Christopher L. Asplund, Paul E. Dux, Douglass Godwin, Justin W. Martin, and René Marois. 2011. A unified attentional bottleneck in the human brain. Proceedings of the National Academy of Sciences 108: 13426–31. [Google Scholar] [CrossRef]
  80. United Nations Educational, Scientific, and Cultural Organization (UNESCO). 2023. UNESCO’s Recommendation on the Ethics of Artificial Intelligence: Key Facts. Paris: UNESCO. [Google Scholar]
  81. Warwick, Kevin. 2012. Interactive robots in experimental biology. In Artificial Ethology. Oxford: Oxford University Press. [Google Scholar]
  82. Washington, Jerry W. 2023. AI and Philosophy: Exploring the Complex Relationship Between Worldviews and Technology Development for an Inclusive and Ethical Future. Oxford: Oxford University Press. [Google Scholar]
  83. West, D. M., and J. R. Allen. 2018. How Artificial Intelligence Transforms the World. San Francisco: Freeman Publishers. [Google Scholar]
  84. Worth, Sarah E. 2000. Aristotle, thought, and mimesis: Our responses to fiction. The Journal of Aesthetics and Art Criticism 58: 333–39. [Google Scholar] [CrossRef]
  85. Zheng, Jieyu, and Markus Meister. 2024. The Unbearable Slowness of Being: Why do we live at 10 bits/s? Neuron. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Cognitive warfare (Latschan 2024, p. 2).