1. Introduction
Brain–Computer Interfaces (BCIs)—microchips implanted into the brain—are already showing marked success in connecting the human brain to computers running sophisticated AI. The version of a BCI developed by the company Neuralink is currently undergoing clinical trials in human subjects after it received Food and Drug Administration (FDA) approval to move forward in 2023. These trials are an important step toward Neuralink’s eventual goal of making its BCI a commercially available product. If Neuralink achieves this goal, its BCI will one day be inserted into the brain by a robot during an outpatient procedure as straightforward and commonplace as today’s LASIK surgery. For Christians (and others), these advances raise important questions, such as: Is there a way to harness the speed and powerful capabilities of BCIs and AIs to enhance moral choices and actions? This paper focuses on the potential benefits of using these technologies. To illustrate how BCIs like Neuralink’s, coupled with AI, could support the moral life, I develop a constructive, Christian approach to BCIs and AI.
This approach is a thought experiment intended as a starting point for ongoing conversation by lay and professional Christian ethicists and theologians grappling with rapid-fire developments in BCI and AI technologies.
To start, I describe Neuralink’s BCI implant and discuss its current and expected capabilities. I then turn to Immanuel Kant and describe the moral anthropology that he elaborated in his 1793 Religion Within the Boundaries of Mere Reason. Kant bemoans the inability of human beings to fully perceive or understand our motives when reaching moral decisions, an inability that hampers our moral progress. With this claim in mind, I describe how a BCI working with AI could create a rendering of an individual’s inner workings, shedding light on motives that are normally out of reach. Drawing on the same text, I also analyze Kant’s understanding of the Son of God and his advocacy of Christ as the prototype of the perfectly good human who serves as the best moral role model for individuals to emulate. Next, I focus on the computer science community’s ongoing work on an approach to AI ethics known as Conditional Preference Networks, or CP-nets. Finally, I show how, in the future, the anticipated capabilities of BCIs coupled with the interpretive power of CP-nets could, based on Kant’s moral anthropology, contribute positively to improving the moral life and the imitation of Christ. To reiterate, my goal in this paper is to sketch one speculative approach, among many possible options, for Christians who are open to seeking to improve their moral lives by using BCIs and AI as tools.
2. The Future: AI and Brain–Computer Interfaces
What the future holds regarding BCIs and AI is not yet written. What is known is that Neuralink has developed a BCI brain implant the diameter of a small watch and just as thin. With higher precision than any surgeon, Neuralink’s specialized robot can already cut a round flap of skin and a piece of skull the size of a quarter and then, using tiny needles, implant the BCI’s sixty-four wires into the human brain. Neuralink is unique, at least so far, in that both its BCI brain implant and its surgical robot are designed to be mass produced. The company also plans for its BCI’s software to be upgradable in the same way that updates are made to apps or computer operating systems (Cuthberton 2024). A factory in Austin, Texas, dedicated to the manufacture of Neuralink BCIs and surgical robots is under construction. It will provide the company with the infrastructure it needs to shift from its current prototype phase into volume production. However, though the announced completion date of this factory was May 2025, as of September 2025, its status had not been publicly disclosed (Al-Shaikh 2024).
Neuralink’s current BCI brain implant is inserted into the motor cortex and can detect electrical signals related to a person’s intention to move, such as the intention to move a computer mouse to change a cursor’s location on the screen. Via Bluetooth, Neuralink’s BCI transmits these brain signals to a computer running AI. The AI decodes the signals and carries out the person’s desired action. To continue with our example, the AI interprets the person’s intention to move the mouse and shifts the cursor to the desired location. This human–AI symbiosis enables BCI recipients to control digital devices like computers through thought alone.
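To make this division of labor concrete, the Python sketch below simulates the decode loop just described: per-channel neural activity comes in, a decoder translates it, and a cursor moves. Everything in it (the 64-channel frame size, the linear decoder, the simulated spike counts) is an illustrative assumption for exposition, not Neuralink’s published design.

```python
import numpy as np

rng = np.random.default_rng(0)

# A stand-in for a trained decoder: maps 64 electrode channels to a
# 2-D cursor velocity. A real decoder would be learned from calibration
# sessions with the user, not drawn at random.
W = rng.normal(size=(2, 64))

def decode_intent(spike_rates: np.ndarray) -> np.ndarray:
    """Translate one frame of per-channel spike rates into (dx, dy)."""
    return W @ spike_rates

cursor = np.zeros(2)
for _ in range(10):                      # ten incoming frames over Bluetooth
    frame = rng.poisson(lam=5, size=64)  # simulated spike counts per channel
    cursor += 0.01 * decode_intent(frame)

print("decoded cursor position:", cursor)
```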
So far, Neuralink has tested its BCI brain implant only on people who have sustained severe neurological damage and have little or no movement in their hands, wrists, and arms (Levy and Taylor 2024). In January 2024, Noland Arbaugh, who has quadriplegia, became the first human to be fitted with a Neuralink BCI. He recovered the ability to play computer games after AI was trained to interpret his brain’s electrical impulses (Jewett 2024). He became so adept at communicating with AI by way of his BCI that, according to The New York Times, Arbaugh “beat a 2017 world record in the field for speed and precision in cursor control” (Jewett 2024).
A few months later, again in 2024, a second person, identified only as Alex, who also has a severe spinal cord injury, was fitted with a Neuralink BCI, updated to resolve several problems that plagued Arbaugh’s. So far, no issues with Alex’s implant have been reported. Neuralink reported that Alex was able to communicate with his electronic devices after a mere five minutes. Apparently, Alex also quickly learned to control a CAD program and to design 3D products, which he then produces using a 3D printer (Kan 2024).
These developments presage the day when, just as people throughout the world have come to rely on their smartphones, they may come to rely on BCI brain implants to access the increasingly sophisticated capabilities of AI. Neuralink views the current FDA trials of its BCI as a first step toward its goal of fitting most human beings with BCIs whose capabilities will far exceed those of today’s smartphones. A world could emerge in which persons use their BCIs to think an email and have AI immediately formulate and send it; to wonder about federal laws regulating estate taxes and be instantly informed by AI; or to seek real estate comparisons for a particular property only to have AI provide the data before they can blink.
Given recent leaps in BCI and AI technologies, the Pew Research Center conducted a survey asking U.S. adults about the amount of pressure they thought they would feel to get a BCI “if the use of computer chip implants in the brain to far more quickly and accurately process information becomes widespread.” Three out of every five respondents (60%) surveyed said “most people would feel pressure” to do so (Rainie et al. 2022).
What could a future of AI–human symbiosis—if this is what the future holds—mean for Christians and Christian ethics? How could communication between BCI brain implants like Neuralink’s and AI running on computers be harnessed so that it contributes positively to the moral life? To demonstrate one way that BCIs and AI could be used to support Christians’ moral lives, I draw upon the tripartite and hierarchical conception of humanity’s capacity for moral behavior that Kant describes in his Religion. Here, Kant depicts the human person as a composite of elements with distinct capacities for ethical reasoning and for abiding by that reasoning’s moral verdict.
3. Moral Progress in Kant’s Religion Within the Boundaries of Mere Reason
When Kant analyzes “the original predisposition to good in human nature” in Religion, he identifies three “elements,” each of which has different ends, but all of which count, he says, “as elements of the determination of the human being.” The first element, for Kant, describes the predisposition of human beings to animality. The element of animality is rooted not in reason, Kant says, but in “mechanical self-love” focused on securing physical ends. These physical ends include self-preservation, reproducing and propagating ourselves, and the sociality that drives us to live in community. Human beings, Kant points out, share the element of animality with non-human animals.
The second element, Kant writes, is humanity. Practical reason, possessed by all human beings, is the key feature of this element. Because of it, we are driven by considerations other than those involved in merely securing our physical ends. We can understand the demands of the moral law and reflect on our duties to ourselves and others. This element sets us apart from animals, which are incapable of practical reason. However, the element of humanity is also rooted in what Kant calls general self-love, or the tendency to surrender to the demands of our “sensuous nature.”
He insists that there is nothing inherently “bad” about having a “sensuous nature” because it is inherent to human beings. And sometimes the drives of sensuous nature align with the demands of practical reason. When such an alignment occurs, people experience harmony between their incentives and the moral law. Still, Kant notes, the only motive that should dictate a person’s actions is the moral law itself. In his words, “…even children are capable of discovering even the slightest taint of admixture of spurious incentives: for in their eyes the action then immediately loses all moral worth” (Kant 1998, p. 69).
The third element of the human being is that of personality or character. Not all human beings possess this element. Those who qualify for this highest tier do so because they are both “rational and responsible” (Kant 1998, pp. 50–52). Rather than their moral decisions being primarily motivated by mechanical self-love (as in the element of animality) or general self-love (as in the element of humanity), people with personality can rely, to a greater extent, on their practical reason to act on the moral law. They have the capacity to meet the demands of duty for no other reason than that duty demands it. Children would describe such a person as having “pure” motives. According to Kant, personality can be absent, underdeveloped, or eradicated (Kant 1998, pp. 50–52).
Regardless of their tier, human beings count as “morally good human beings” only if they engage in “incessant” effort to choose duty merely for duty’s sake. Moral progress, Kant insists, requires gaining and keeping the upper hand over the demands of the senses and any incentives that compete with duty. Because the elements of animality and humanity are intrinsic parts of our nature, even for those endowed with personality, moral improvement requires “an ever-continuing striving for the better…” (Kant 1998, p. 68). If such “improvement was a matter of mere wishing,” Kant writes, “every human being would be good.” Nonetheless, “it is a fundamental principle,” he insists, “that to become a better human being, everyone must [steadfastly] do as much as it is in his powers to do” (Kant 1998, p. 71).
To become a better human being entails a lifelong struggle to comply with the unconditional authority of the moral law, which alone must be the “determining ground of our power of choice.” Conformity to the law is to take precedence—in every instance—over “individual advantages” (Kant 1998, p. 81). Kant acknowledges the difficulty of abiding by this “ever-always” priority. An additional difficulty, he points out, is that human beings are mostly inscrutable to themselves. They often cannot access the depths of their own hearts. No matter how mighty the effort to peer inside and determine why they acted this way instead of that, the true motives behind their choices are not fully knowable.
4. The Hidden Heart Made Visible with BCIs and AI
To make visible—in Kant’s telling—the sometimes-hidden motives that hamper moral growth, BCIs working together with AI could prove helpful. What if these brain implants had access to areas of the brain responsible for emotions and motivations? What if BCIs could relay this information to AI? And what if AI could translate this information such that individuals could view the workings of their moral decision-making that are normally inaccessible to them? What if, thanks to BCIs and AI, individuals could plumb the depths of their own hearts?
Granted, all these possibilities rely on technology that does not exist and, indeed, may never exist. However, what does already exist is the resolve of several global and highly resourced companies to develop increasingly powerful BCIs and AI. Sundar Pichai, CEO of Alphabet, Google’s parent company, is convinced that AI technology will eventually have a more profound effect on civilization than the discovery of fire or electricity (Nolan 2023). Private investors and governments alike are betting on the success of AI and are pouring billions into its development.
That BCIs may someday be able to detect neural spikes related to ethical decision-making is not as farfetched as it may seem. Scientific research teams at major universities are making important inroads in mapping the synaptic structure of the human brain, providing the kind of key breakthroughs needed for more meaningful BCI connectivity (Harvard University 2024). Researchers collaborating across several universities have already discovered a distinctive network of brain regions involved in judging “moral violations,” like cheating on an exam. These brain regions are distinct from those activated by “social norm violations,” like drinking coffee from a spoon (Tasoff 2023; Hopp et al. 2023). Like other mental tasks, moral reasoning appears to trigger “characteristic patterns across the brain, with nuances based on the specifics” (Tasoff 2023). In addition, technology already exists that can capture the electrical spikes generated by the brain when a person is telling a lie. Neuropsychiatrist Andrew Kozel and his colleagues report that, using functional MRI, they can detect when someone is lying with greater than 90% accuracy (Kozel et al. 2009, pp. 6–11).
If BCI technology becomes available to detect emotions, incentives, and intuitions, and if AI technology becomes available to decode them, these two technologies working together could expose the motives, preferences, and priorities involved in moral decision-making, including factors of which people may not be aware. No longer would individuals be, as Kant argues, a mystery to themselves, unable to comprehend fully the workings of their own minds. Data collected by a BCI and interpreted by AI would lay those workings bare. This possibility could prove of interest to persons seeking, in Kantian terms, to make moral progress.
For Kant, a condition for the possibility of moral progress is constant effort, a striving in every instance to abide by the moral law for its own sake. Effort offers hope, especially when it results in observable advances in moral conduct. These advances encourage us to press on “with ever greater courage,” he writes, and, “provided that their principle is good, will always increase [our] strength for future ones…” (Kant 1998, p. 86). A BCI–AI window into the internal workings of the brain could make such advances easier, boosting observable improvements and fostering greater hope.
After all, as AI entrepreneur Austin Ambrozi writes, “One of the most significant advantages of AI lies in its ability to rapidly process vast amounts of data and uncover patterns and trends that may otherwise go undetected by humans. This presents a remarkable opportunity [for people] to enhance their decision-making processes by leveraging AI algorithms” (Ambrozi 2023). According to Kant’s moral anthropology, for people who struggle to rise above the second tier, humanity, and advance to the third tier, personality, the demands of practical reason and duty to the moral law are easily defeated by the seductive calls of emotion and physical desire. Though, per Kant, practical reason equips such people to live a moral life, a “lazy and timid cast of mind” often hampers them from doing so (Kant 1998, p. 71). As a result, those who wish to progress morally will struggle mightily and will, more often than not, be undermined by the stronger temptations of general self-love. The opportunity to look deep inside and perceive the role of general self-love in their decision-making could be of particular help to them. Combined with extra effort, it could even move them into the highest tier, personality. And for those already in the third tier, such self-knowledge could bolster their efforts at further moral progress.
5. The Son of God in Kant’s Religion Within the Boundaries of Mere Reason
Throughout his Religion, Kant is concerned with the internal battle waged inside human beings. Though buffeted by non-rational incentives, people are tasked with ignoring those incentives in favor of the moral law. The stakes are high because, Kant cautions, “we are human beings pleasing to God, or not, only on the basis of the conduct of the life we have led so far…” (Kant 1998, p. 88). Kant offers several suggestions to help those who wish to identify their true incentives as they work toward becoming “better human beings” (Kant 1998, p. 70). He recommends, for instance, that “apprentices in morality” closely study people whom they judge to be good human beings and try to discern the motives driving their actions. By comparing the incentives behind their own actions to those of their role models, “apprentices” would be able to identify and address any disparities (Kant 1998, p. 69). However, this tactic has limits, Kant warns, because regardless of how far these role models have advanced on the moral scale, they—like all human beings—are bound to fall short of perfection.
Though we may search far and wide, we will be unable to find a prototype of full moral perfection. No person “is free of guilt,” Kant argues. But, he also insists, this does not excuse us from constructing, for ourselves, an idea of what such a prototype would be like (Kant 1998, p. 81). Moreover, “it is,” he writes, “our universal human duty to elevate ourselves to this ideal of moral perfection.” We are bound to emulate, as best we can, “the prototype of moral disposition in its entire purity.” Only this prototype, Kant believes, “can give us force” (Kant 1998, p. 80). Fortunately, it already resides in our practical reason. According to Kant, we need look no further than ourselves to formulate the idea of a model human being whose course of life is “entirely blameless and meritorious” (Kant 1998, p. 81).
Still, while practical reason may be fully capable of constructing this model human being, Kant is certain that the Son of God can also serve as the prototype of moral perfection. We may adopt the Son of God as our moral role model, he says, as long as we ascribe to him the same frailties as human beings. A “supernaturally begotten human being” is of no benefit to our practical reason, Kant writes, since “the prototype which we see embedded in this apparition must be sought in us as well (though natural human beings).” We are to let the Son of God, “a human being well-pleasing to God, be thought as human,” meaning that, like us, “he is afflicted by just the same needs and hence also the same sufferings, by just the same natural inclinations and hence also the same temptations to transgressions, as we are” (Kant 1998, p. 82). Unlike us, however, the Son of God has demonstrated how, in every instance, dedication to duty can overcome the negative influences of purely self-serving incentives.
For Kant, the Son of God is “the prototype of moral disposition in its entire purity” and the ultimate example of how best to abide by the moral law (Kant 1998, p. 80). Here is Kant’s conception of the Son of God:
We cannot think the ideal of a humanity pleasing to God (hence of such moral perfection as is possible to a being pertaining of this world and dependent on needs and inclinations) except in the idea of [the Son of God as] a human being willing not only to execute in person all human duties, and at the same time to spread goodness about him as far wide as possible through teaching and example, but also, though tempted by the greatest temptation, to take upon himself all sufferings, up to the most ignominious death, for the good of the world and even for his enemies…
In the practical faith in this Son of God (so far as he is represented as having taken up human nature) the human being can thus hope to become pleasing to God (and thereby blessed); that is, only a human being conscious of such a moral disposition in himself as enables him to believe and self-assuredly trust that he, under similar temptations and afflictions (so far as these are made the touchstone of that idea), would steadfastly cling to the prototype of humanity and follow this prototype’s example in loyal emulation, only such a human being, and he alone, is entitled to consider himself not an unworthy object of divine pleasure.
Given Kant’s views on moral anthropology, the path to moral progress, and the Son of God as perfect moral prototype, how can BCI brain implants and AI contribute to this framework? I readily admit that, for the sake of clarity and manageability in developing a constructive proposal, I have vastly oversimplified Kant’s ethical theory and theology and ignored most of their nuances and complexities. In my defense, this paper is intended as a thought experiment. It uses vaguely Kantian premises as functional steppingstones for the sake of demonstration. With this caveat in mind, I will now draw on current work in computer science. I will show how BCIs coupled with AI could assist human beings who wish—as understood by Kant—to imitate the morally perfect Son of God (or their reason-generated role model) and find support for their efforts to make moral progress.
6. A Conditional Preference Network Approach to Kant
Moral decisions usually involve agents weighing the consequences of their actions on other agents. Computer science as a discipline has, for some time, taken an interest in such human engagement for various purposes. One of its approaches has been to create “multiagent systems” that model various ‘agents’, their reasoning, their preferences, their priorities, and their interactions with other ‘agents’ (Loreggia et al. 2020, p. 128). AI researcher Andrea Loreggia and his colleagues describe computer science work in the sub-specialty of multiagent preference reasoning as “modeling, aggregating, and reasoning with…possibly competing agent preferences and priorities” (Loreggia et al. 2020, p. 128).
These multiagent systems generate scenarios in which multiple ‘agents’ with different ethical priorities and subjective preferences come together (Loreggia et al. 2020, p. 128). One system of particular interest for this paper’s thought experiment, designed for autonomous decision-making, is called Conditional Preference Networks, or CP-nets. The work of Andrea Loreggia and his colleagues on CP-nets is especially salient since it focuses on comparing the preference orderings of distinct CP-net ‘agents’ (Awad et al. 2022, p. 388). Without going into too much detail, a CP-net ‘agent’ in a Conditional Preference Network has its own set of preferences, ordered in relation to one another. As a result, when an ‘agent’ is presented with a problem, it will compare potential outcomes and determine which one is optimal given its preferences (Boutilier et al. 2004, p. 136).
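To give a concrete sense of the formalism, the Python sketch below implements a toy acyclic CP-net in the spirit of Boutilier et al. (2004): each variable carries a preference ordering over its values that may depend on its parent variables, and the optimal outcome is found by a forward sweep. The dinner-choice variables are a stock textbook illustration, not an example taken from the cited authors.

```python
from typing import Callable

class CPNet:
    """A toy acyclic Conditional Preference Network (CP-net).

    Each variable has parent variables and a conditional preference
    table (CPT): given the parents' values, the CPT returns that
    variable's domain ordered from most to least preferred.
    """
    def __init__(self):
        self.variables: list[str] = []                      # topological order
        self.parents: dict[str, list[str]] = {}
        self.cpt: dict[str, Callable[[dict], list[str]]] = {}

    def add_variable(self, name: str, parents: list[str],
                     cpt: Callable[[dict], list[str]]) -> None:
        self.variables.append(name)
        self.parents[name] = parents
        self.cpt[name] = cpt

    def optimal_outcome(self) -> dict[str, str]:
        """Sweep forward, giving each variable its most preferred
        value in the context of its parents' already-chosen values."""
        outcome: dict[str, str] = {}
        for v in self.variables:
            context = {p: outcome[p] for p in self.parents[v]}
            outcome[v] = self.cpt[v](context)[0]
        return outcome

# The classic dinner example: the agent prefers fish to meat, and its
# wine preference is conditional on the main course.
dinner = CPNet()
dinner.add_variable("main", [], lambda ctx: ["fish", "meat"])
dinner.add_variable("wine", ["main"],
                    lambda ctx: ["white", "red"] if ctx["main"] == "fish"
                    else ["red", "white"])

print(dinner.optimal_outcome())   # {'main': 'fish', 'wine': 'white'}
```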
Though BCI brain implants do not yet have this capability, they may eventually be able to communicate directly with Conditional Preference Networks that include AI-coded CP-net ‘agents.’ The advantage: humans fitted with BCIs would have the opportunity to consult, instantly, any number of relevant AI-generated ‘agents’ as they try to reach moral decisions. For example, they could access CP-net ‘agents’ that model various ethical theories, social norms, professional standards, union rules, government regulations, legal systems, and more. “It is important to be able to model these concepts, reason with them, and combine them,” Loreggia writes, while at the same time keeping them separate in order to give them different weights (Loreggia et al. 2020, p. 128).
To illustrate how multiagent CP-nets could support or even enhance a Kantian approach to moral progress and the imitation of the Son of God, I turn to a case study. It comes from a set of possible questions used by medical schools to assess applicants:
You are the only ER doctor on duty and are responsible for all decision making during this shift. This night you have two patients rushed into the ER who desperately require a kidney transplant. One patient is an 80-year-old university professor who is suffering from acute kidney failure related to his age; the other patient is a 20-year-old university student who has been brought in for yet another episode of kidney problems related to excessive drinking of alcohol at a school party. There is only one kidney available that matches both patients. Who do you give the kidney to?
Using Loreggia’s multiagent Conditional Preference Network approach, an ER doctor faced with this decision—I’ll call her Dr. Laetitia Smith—could turn to any number of AI-generated CP-net ‘agents’ for assistance in reaching a decision. Dr. Smith may wish to consult hospital administration to ensure that she is following acceptable institutional protocol. AI could create a CP-net ‘agent’ that reflects this protocol. The ‘hospital protocol CP-net agent’ would explain the hospital’s official position on transplants as well as respond to any of Dr. Smith’s follow-up questions. Because of her BCI, Dr. Smith would have instantaneous access, day or night, to the ‘hospital protocol CP-net agent’—a protocol that, itself, was the result of lengthy consultations among ethics boards, attorneys, hospital board members, the hospital’s CEO, and perhaps even other Conditional Preference Networks.
Dr. Smith may also wish to take other factors into consideration: legal rulings, government statutes, and professional norms. Distinct AI-generated CP-net ‘agents’, each based on the relevant information, could respond to Dr. Smith’s questions with summaries or, if desired, additional details. Dr. Smith could, through her BCI, engage with the ‘legal rulings CP-net agent,’ the ‘government statutes CP-net agent,’ or the ‘professional norms CP-net agent.’ She would have to try to correlate their answers or accord priority to one or more of them in cases of incoherence or inconsistency. Alternatively, she could request a single response generated by the Conditional Preference Network using the weights assigned to each ‘agent’s’ preference settings, as sketched below.
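One simple way to realize such a weighted single response is a weighted Borda count over the candidate outcomes, sketched below in Python. The agent names, weights, and rankings are hypothetical illustrations built around the kidney case, not a method taken from Loreggia et al.

```python
candidates = ["professor", "student"]

# Each consulted 'agent' reports its preference ordering, most preferred first.
agent_rankings = {
    "hospital_protocol":   ["student", "professor"],
    "legal_rulings":       ["student", "professor"],
    "government_statutes": ["professor", "student"],
    "professional_norms":  ["student", "professor"],
}

# Weights express how much authority Dr. Smith accords each 'agent'.
weights = {
    "hospital_protocol": 0.4,
    "legal_rulings": 0.3,
    "government_statutes": 0.1,
    "professional_norms": 0.2,
}

# Weighted Borda count: a candidate earns more points the higher an
# agent ranks it, scaled by that agent's weight.
scores = {c: 0.0 for c in candidates}
for agent, ranking in agent_rankings.items():
    for position, candidate in enumerate(ranking):
        scores[candidate] += weights[agent] * (len(ranking) - 1 - position)

print(scores)                          # {'professor': 0.1, 'student': 0.9}
print(max(scores, key=scores.get))     # 'student'
```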
Of special relevance to the Christian dimension of this paper, and drawing on Kant, a CP-net ‘agent’ could be programmed to model the prototype of a perfectly moral human being—the Son of God—with reasoning, preferences, and priorities appropriately weighted. (Whether the approaches described above would yield an ethical response as perfectly moral as that of a Son of God ‘agent’ is an open question.) To hew to this paper’s Kantian framework, the ‘Son of God CP-net agent’ would be programmed to reflect Kantian moral theory—the categorical imperative in particular. This would enable Dr. Smith to compare her personal biases and intuitions against the defensible moral law of the ‘Son of God CP-net agent’—who, per Kant, always prefers duty for duty’s sake. By adding a ‘Son-of-God CP-net agent’ to the case study’s Conditional Preference Network, Dr. Smith would be able, through her BCI, to check, quickly and interactively, just how closely her reasoning, preferences, and priorities matched those of the Son of God, and the exact nature of any disagreement. Such an appraisal would also benefit Dr. Smith’s efforts to make moral progress by making clear the judgment reached by a perfect moral prototype.
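How closely two orderings match, and exactly where they part ways, can be measured pairwise, for instance with a normalized Kendall tau distance as in the sketch below. The ranked considerations attributed to Dr. Smith and to the ‘Son of God CP-net agent’ are, of course, hypothetical placeholders.

```python
from itertools import combinations

def disagreement(r1: list[str], r2: list[str]) -> tuple[float, list]:
    """Return the fraction of item pairs the two rankings order
    differently (a normalized Kendall tau distance), together with
    the specific pairs in dispute."""
    pos1 = {item: i for i, item in enumerate(r1)}
    pos2 = {item: i for i, item in enumerate(r2)}
    disputed = [(a, b) for a, b in combinations(r1, 2)
                if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0]
    total_pairs = len(r1) * (len(r1) - 1) // 2
    return len(disputed) / total_pairs, disputed

# Hypothetical priority orderings over the same set of considerations.
dr_smith  = ["prognosis", "sympathy_for_the_young", "duty", "fairness"]
prototype = ["duty", "fairness", "prognosis", "sympathy_for_the_young"]

distance, disputes = disagreement(dr_smith, prototype)
print(f"disagreement: {distance:.2f}")   # 0.67
print(disputes)                          # the exact pairs ranked differently
```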
Table 1 illustrates the proposed Conditional Preference Network. I have included the ‘Dr. Smith CP-net agent’ that corresponds to Dr. Smith herself, generated using data about her reasoning, emotions, and motivations collected by her BCI and converted into a CP-net ‘agent’ by AI.
It bears underscoring that this Conditional Preference Network preserves Dr. Smith’s ability to choose for herself. While BCI–AI connections “can augment and amplify our capabilities” (Padfoot 2015), the Kantian Conditional Preference Network that I propose in no way usurps personal ethical judgment. Whether to follow any ‘agent’s’ verdict remains Dr. Smith’s decision, and this is key for Kant, since even imitating the Son of God should not supplant the exercise of practical reason. It is up to each of us, he insists, to reach moral judgments for ourselves. His intransigence on this point was so great that philosopher Iris Murdoch said of him: “How recognizable, how familiar to us, is the man so beautifully portrayed in the [Groundwork of the Metaphysics of Morals] who confronted even with Christ turns away to consider the judgment of his own conscience and to hear the voice of his own reason.”
Kant is not alone in his resolve that moral decisions remain the prerogative of persons. Though their arguments for self-sufficiency may differ from Kant’s, most people wish to preserve their ability to exercise their practical reason. Anders Sandberg, a senior research fellow at Oxford’s Future of Humanity Institute, reported the results of a study in which students were asked “about various mental traits and whether they’d be willing to use an enhancement technology to improve them” (Strickland 2017). The students said they were “very willing” to use such technologies “to improve cognitive traits like attention, alertness, and rote memory.” However, they rejected technology-generated improvements of moral traits like “empathy and kindness.” If acts of empathy and kindness are expressions of moral decisions, these students wanted to protect this choice. A mere nine percent were willing to have their level of kindness enhanced (Strickland 2017).
7. Conclusions
I acknowledge once again that I have vastly oversimplified Kant’s work for the purposes of my thought experiment, focusing almost exclusively, for the sake of clarity and brevity, on the themes he explores in his Religion. For example, while this monograph implies the categorical imperative, it speaks of the moral law only in general terms. My macro approach has made it possible for me to advance a coherent and systematic demonstration of the potential benefits of BCIs and AI for Christians (and others) seeking to live a moral life.
This constructive approach is unique in the sense that, so far, the overwhelming response by Christian ethicists and theologians to these technologies has been negative, limited to arguing against their use and further development. Given the blistering pace of advancement in BCIs, AI, and AI ethics tools such as Conditional Preference Networks, I hope my demonstration will encourage others to engage in similar thought experiments, including ones based on a rigorous and comprehensive engagement with Kantian ethics, on non-Kantian starting points, or on sturdier Christocentric or theocentric frameworks. I described a ‘Son-of-God CP-net agent’ based on Kant’s work. Others will no doubt choose different versions of the Son of God to create their CP-nets—a decision that will be shaped by any number of factors such as faith tradition, church teachings, Biblical exegesis, and theological writings.
There are good reasons not to be enthusiastic about BCIs or AI. However, many U.S. adults anticipate that they will feel pressured to opt for BCIs if these implants become widespread. Yes, “There’s [also] a public fear of brain manipulation,” says bioethicist Arthur Caplan of New York University. But such fear is held by fewer members of the American public than might be expected: a 2022 Pew Research Center survey (Rainie et al. 2022) showed that, at this juncture, only slightly more than half of American respondents (56 percent) thought that the widespread use of brain chips to improve cognitive function would be a bad idea (Mullin 2024).
Perhaps even more telling, in 2021, without any real-life demonstration of how this might work, one in four Americans said they believed BCIs would improve their decision-making (Rainie et al. 2022). After all, when ChatGPT (then powered by the GPT-3.5 model) was released in 2022, people embraced and integrated this AI chatbot (and the many others that followed) so decisively and quickly into their everyday lives that even members of the tech industry were shocked (Nolan 2023).
BCI technology is here to stay. Each month seems to bring news of yet another advancement. In June 2025, a rival to Neuralink announced that it had started its own human clinical trials and implanted one of its dime-sized devices into a person undergoing surgery for epilepsy (Jones 2025). Its BCI requires a trained surgeon to perform brain surgery, making the procedure difficult to scale up for large numbers of people, unlike Neuralink’s surgical robots, which can be mass produced. Nonetheless, the scale of investment and of testing in humans is a clear indicator that BCIs will grow more formidable and could well become ubiquitous. Though this paper’s constructive proposal was based on informed speculations about future technologies, speculations that some may consider unwarranted, the current pace of success in ongoing scientific and engineering efforts indicates that the time for such proposals is now.