Article

Can a Soldier Say No to an Enhancing Intervention?

National Security College, Crawford School of Public Policy, Australian National University, Canberra ACT 2601, Australia
* Authors to whom correspondence should be addressed.
Philosophies 2020, 5(3), 13; https://doi.org/10.3390/philosophies5030013
Submission received: 1 July 2020 / Revised: 25 July 2020 / Accepted: 27 July 2020 / Published: 3 August 2020
(This article belongs to the Special Issue Human Enhancement Technologies and Our Merger with Machines)

Abstract

Technological advancements have provided militaries with the possibility of enhancing human performance and providing soldiers with better warfighting capabilities. Though these technologies hold significant potential, their use is not without cost to the individual. This paper explores the complexities associated with using human cognitive enhancements in the military, focusing on how the purpose and context of these technologies could undermine a soldier’s ability to say no to such interventions. We focus on cognitive enhancements and their ability to also enhance a soldier’s autonomy (i.e., autonomy-enhancing technologies). Through this lens, we explore situations that could compel a soldier to accept such technologies and how this acceptance could impact rights to individual autonomy and informed consent within the military. In this examination, we highlight the contextual elements of vulnerability—institutional and differential vulnerability. In addition, we focus on scenarios in which a soldier’s right to say no to such enhancements can be diminished given the special nature of their work and the significance of making better moral decisions. We propose that though in some situations a soldier may be compelled to accept said enhancements, with their right to say no diminished, this is not a blanket rule, and safeguards ought to be in place to ensure that autonomy and informed consent are not overridden.

1. Introduction

Rapid advancements in technology have seen a rise in innovative ways to enhance human capabilities. This is the case for technologically advanced militaries seeking to enhance soldier capabilities. This paper explores challenges associated with technological interventions being developed that could offer soldiers a chance to enhance their cognitive functions and, by extension, to enhance their autonomy in a warfighting context. We propose that technologies that enhance an individual’s cognitive functions, such as decision-making capacity, situational awareness, memory, and vigilance, all have the potential to also enhance an individual’s autonomy and moral decision-making capabilities. In a medical bioethics context, such interventions require the informed consent of the recipient in order for the intervention to go ahead. However, when considering particular enhancements used in the military, the nature, purpose, and context of the enhancement may significantly undermine the capacity of a recipient to say no to these enhancements. This is a common problem for informed consent: How do we ensure that the recipients of medical or biotechnological interventions consent to these interventions freely? We suggest that the ‘nature’ of these enhancements presents a conceptual challenge: Can a person autonomously say no to an option that will enhance their autonomy? Further to this, if the purpose of these enhancements is to improve moral decision-making, can a person justifiably say no to an option that will lead them to make better morally relevant decisions? In addition, we suggest that in the military context, this becomes even more complicated because soldiers are not just expected to follow commands, but are trained and inculcated in the practice of following commands, and bear certain loyalties to their comrades. We propose that these contextual elements form the basis for considering soldiers a vulnerable group with regards to obtaining informed consent. Finally, the fact that soldiers have signed up to be part of the military and accept military doctrine means that they might have to accept these enhancements as part of the job. In combination, we find that these conditions mean that in certain circumstances, a soldier cannot say no to an enhancement. However, as we show in the concluding section, this is not a broad statement; there is still a range of conditions that must be met in order for particular enhancements to be obligatory in the military context.

2. Cognitive Enhancements as Autonomy-Enhancing Technologies

Cognitive enhancements are those technologies with a demonstrated or potential ability to alter or modify cognitive processes such as decision-making, reasoning, memory, judgement, situational awareness, attention span, and complex problem solving. We propose that cognitive enhancements could be viewed as “enhancing” technologies as they improve, or have the ability to improve, the physiological processes by which we acquire and process knowledge and understand the world around us. These cognitive processes, when enhanced, also enhance an individual’s autonomy, i.e., the ability to self-govern (see below for more on this). The two types of cognitive enhancements discussed in the following paragraphs are: Brain–Computer Interface (BCI) and Non-Invasive Brain Stimulation (NIBS). As we will discuss below, there is significant disagreement about whether such interventions do in fact act to enhance one’s moral decision-making.

2.1. Brain–Computer Interface (BCI)

A BCI is a direct communication pathway between the brain and an external device (usually a computer platform) via one-way or two-way communication. It is a system that captures brain signals (neural activity) and transforms these signals into commands that can be used to control an external application or instrument [1]. BCIs have four broad characteristics: The ability to detect brain signals, provide feedback in real time or near-real time, read/decode brain activity, and provide feedback to the user on the success of the task or goal attained [2]. Broadly, there are two general uses for BCIs with respect to human performance enhancement: (1) Direct signals from the brain used to direct/alert/command external equipment as an auxiliary to human actions, to control prostheses, robotics, or weapons platforms, or (2) enhanced sensory or information input and/or control signals to enhance individual performance [3]. The earlier goal of BCIs—controlling external equipment—originates in medical research and is widely studied for the ability to control prostheses. As the scope of this paper focuses on human enhancement and not therapeutic uses of technology, we focus on those technologies that are hoped to improve particular cognitive functions.
A connection between the human brain and a computer interface is established via two methods: (1) Invasive connections that require surgery to implant/connect an electrode inside the skull and (2) non-invasive connections whereby electrodes are placed on the outside of the skull, attached to either a cap or a helmet. For use in the military as an enhancement (and not for veterans’ medical/therapeutic purposes), we assume that non-invasive BCIs would be preferred, as they pose less risk to the individual and are relatively easily reversible compared to invasive/implanted devices. This type of technology is attractive for use in the military, as it provides the opportunity to increase the brain’s computational power, information load, and processing speed, which then allows for enhanced human performance. BCIs allow the human brain to handle larger quantities of information in a shorter time frame compared to the brain’s normal/average functioning. In a military context, where individuals are required to process significant amounts of information in a short period of time, BCIs provide the possibility of increasing human performance with regards to complex decision-making and situational awareness. For example, research in this area has indicated that BCIs can improve facial recognition as well as target detection and localisation in rapidly presented aerial pictures [4,5].
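To make the detect → decode → command → feedback loop described above concrete, the following minimal sketch (our own, purely illustrative Python; the sampling rate, the beta band-power feature, the threshold, and the command names are assumptions for illustration, not any fielded system’s design) shows how a non-invasive BCI can be thought of as repeatedly windowing an electrode signal, extracting a feature, and mapping it to a discrete command:

```python
import numpy as np

SAMPLE_RATE_HZ = 256          # assumed EEG sampling rate for this toy example
WINDOW_SECONDS = 1.0          # length of each analysis window

def band_power(window: np.ndarray, low_hz: float, high_hz: float) -> float:
    """Estimate mean signal power in a frequency band via a discrete Fourier transform."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2 / window.size  # normalised power per bin
    freqs = np.fft.rfftfreq(window.size, d=1.0 / SAMPLE_RATE_HZ)
    mask = (freqs >= low_hz) & (freqs <= high_hz)
    return float(spectrum[mask].mean())

def decode_command(window: np.ndarray, threshold: float = 1.0) -> str:
    """Map one window of neural activity to a discrete command for an external application."""
    beta = band_power(window, 13.0, 30.0)  # beta-band activity, a stand-in feature
    return "alert" if beta > threshold else "idle"

# Simulated acquisition loop: detect signal, decode it, issue a command, report back.
rng = np.random.default_rng(0)
for step in range(3):
    window = rng.normal(size=int(SAMPLE_RATE_HZ * WINDOW_SECONDS))  # placeholder for electrode data
    print(f"window {step}: command={decode_command(window)}")       # feedback to the user
```

A real system would replace the random placeholder data with electrode recordings and the single threshold with a trained decoder, but the loop structure mirrors the four characteristics listed above: signal detection, decoding, command, and feedback.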
In reference to BCIs, the US Air War College indicated, “This technology will advance computing speed, cognitive decision-making, information exchange, and enhanced human performance. A direct connection between the brain and a computer will bypass peripheral nerves and muscles, allowing the brain to have direct control over software and external devices. The military applications for communications, command, control, remote sensors, and weapon deployment with BCI will be significant” [6].
The United States’ (US) interest in identifying novel ways to enhance human cognition beyond current capabilities is evident in the projects undertaken by the Defense Advanced Research Projects Agency (DARPA)1, such as: Restoring Active Memory Replay (RAM Replay), a brain interface project investigating ways to improve memories of events and skills by studying neural replays; Targeting Neuroplasticity Training (TNT), which aims to improve cognitive skills training by modifying peripheral nerves and strengthening neural connections; and Next-Generation Nonsurgical Neurotechnology (N3), which uses bi-directional BCIs that can control external equipment and applications such as unmanned aerial vehicles and cyber defence systems [7]. These are all examples of projects aimed at identifying ways to enhance cognitive functions beyond therapeutic purposes [8].

2.2. Non-Invasive Brain Stimulation (NIBS)

Non-Invasive Brain Stimulation (NIBS) technologies stimulate neural activity using either transcranial electrical stimulation (tES) or transcranial magnetic stimulation (TMS). TMS and tES have been demonstrated to improve the cognitive domains responsible for perception, learning, memory, and attention span [9]. Research shows that an individual’s ability to detect, visually search for, or track specific targets can be improved by NIBS [10]. Similarly, tES can be used to improve complex threat detection [11] and to increase risk-taking behaviour [12]. Stimulating specific regions of the brain that are active when performing complex threat detection tasks and risk-taking behaviour offers possibilities for use in military operations. The following paragraphs examine the capability of NIBS to enhance memory, vigilance, and attention, as well as its applicability in a military context.
Memory enhancement research has focused on using TMS and tES to improve working memory and learning capacities in individuals. Using direct current stimulation on the dorsolateral prefrontal cortex (critical for working memory functions) improves the implicit learning of sequential motor sequences, motor learning, probabilistic learning, explicit memory for lists of words, spatial memory, and working memory [13,14].

Monitoring the mental state of users allows for enhanced performance by adapting the user interface to changes in mental state. Target detection is one area where this type of technology has been tested; the adaptive interface adjusts according to feedback given by an electroencephalogram (EEG) and other physiological measures. Complex flight and driving simulation tasks have been used to test the usability of attention-increasing technologies, and studies have investigated the applicability of this technology to air traffic controllers [15] and in military-relevant training scenarios [16]. Enhanced declarative memory is another area with military applicability: Memory enhancement impacts individual performance on tasks relating to situational awareness, which is of use for fighter pilots and point-shooting [17]. DARPA’s RAM Replay project aims to identify ways to enhance memory formation and recall to help individuals recall specific episodic events and learned skills, with research outputs to be applicable in military training activities [18].

Another area of research into augmenting militarily useful cognitive capabilities is increasing vigilance. Vigilance here refers to the ability to maintain sustained attention under high workloads and to shift/divide attention between tasks [15,19,20]. Reaction time tasks, stimulus discrimination, and target counting have been used to measure individuals’ reaction times, and this information is used to increase vigilance.
The Halo Sport headset manufactured by the company Halo Neuroscience is one example of an NIBS product available on the market and tested in a military context. Aimed at increasing neuroplasticity, the headset has the ability to enhance physical performance; hence, its usage has been popular among professional athletes [21]. The aspect of the headset that we are interested in here is its ability to improve cognitive performance. Halo Sport headsets function on the same basis as the general NIBS method: Transcranial Direct Current Stimulation (tDCS), a weak current of approximately 2 to 3 mA, is delivered to the scalp for a duration of several minutes. The current alters the neural activity in the motor cortex of the brain to impact cognitive functions [22]. The Halo Sport headsets, tested in a controlled laboratory environment, have been shown to increase accuracy in cognitive functions but not reaction times. In 2017, Rear Admiral Tim Szymanski, a commander of US Navy Special Operations, expressed interest in human enhancement technologies with a focus on cognitive enhancement technologies. In his statement, the commander requested that the defence industry develop and demonstrate technologies that could enhance cognitive performance in the Navy Special Operations forces [23]. A specific reference was also made to NIBS technologies that apply an electrical stimulation to the brain to improve performance. Halo Neuroscience’s Chief Technology Officer indicated that the Halo Sport headset has been tested on Navy SEALs, showing promising results in improved cognitive performance [23]. According to the Naval Special Warfare Development Group (SEAL Team Six), this technology has shown promising results for sleep-deprived individuals performing in hard training environments. At the time, this device was being tested at five military sites. Though the area that has shown the most promising improvements with the use of Halo Sport headsets is physical performance, testing regimens also showed significant improvements in cognitive functions, which has warranted its use for this particular purpose as well.
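As a minimal sketch of the dosage parameters just described (ours, purely illustrative; the class name, field names, and bounds below are assumptions drawn only from the approximate figures quoted in this section, not a clinical or manufacturer specification), a tDCS session could be parameterised and sanity-checked as follows:

```python
from dataclasses import dataclass

@dataclass
class TDCSSession:
    """Illustrative parameters for a single tDCS session."""
    current_ma: float    # stimulation current in milliamps
    duration_min: float  # stimulation duration in minutes
    site: str            # electrode placement, e.g., "motor cortex"

    def validate(self) -> None:
        # The text above describes a weak current of roughly 2 to 3 mA delivered
        # for several minutes; these bounds are illustrative, not a safety standard.
        if not 0.0 < self.current_ma <= 3.0:
            raise ValueError(f"current of {self.current_ma} mA exceeds the described ~2-3 mA range")
        if not 0.0 < self.duration_min <= 30.0:
            raise ValueError(f"duration of {self.duration_min} min exceeds the illustrative limit")

session = TDCSSession(current_ma=2.0, duration_min=20.0, site="motor cortex")
session.validate()  # raises if the parameters fall outside the described envelope
```

Making the envelope explicit in this way is one means of auditing whether any given session stays within the figures reported above.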

3. Decision-Making and Autonomy

Autonomy is not only a complex notion, but one of the most contested areas in philosophy and ethics. We do not expect to answer any of those open questions here, but draw attention to the connection between the technologies as described and autonomy. As Christman describes it, autonomy is the “idea that is generally understood to refer to the capacity to be one’s own person, to live one’s life according to reasons and motives that are taken as one’s own and not the product of manipulative or distorting external forces” [24]. Our view on autonomy is that there is some relative equivalence between what a person does and the reasons that they have for acting.2 This is a somewhat Kantian notion where reason and rationality play a key role in autonomy and, more generally, in ethics. This stands in the face of other views, like Haidt’s social intuitionist model, in which reasons play far less of a role than in the Kantian model [28]. However, as Kennett and Fine argue, an “examination of the interaction between automatic and controlled reflective processes in moral judgment provides some counter to scepticism about our agency and makes room for the view shared by rationalists and sophisticated sentimentalists alike that genuine moral judgments are those that are regulated or endorsed by reflection” [29] (p. 78). The important point is that if technologies can change cognitive capacities and practices, they could play a role in improving moral decision-making. Our purpose is to draw attention to common elements of autonomy, and to see how they play out in relation to particular enhancement technologies when used in a military context.
In particular, the technologies that we have reviewed are all expected and intended to impact upon and improve decision-making in different ways. The connection to autonomy is that improved decision-making sits in part with the notion of autonomy as increasing “the capacity to be one’s own person, to live one’s life according to reasons that are taken as one’s own.” By increasing capacities like memory, attention, and vigilance, we suggest that these technologies increase the recipient’s autonomy by enhancing their decision-making capacity. Moreover, insofar as these enhancements increase such decision-making while in positions of high cognitive demand and stress, like conflict, they minimize the “distorting external forces.” While more can be said about the connections between increased decision-making capacity and autonomy, the point here is to show that the technologies described are hoped to have some potential to enhance autonomy.

4. Informed Consent

The idea of autonomy in a medical context is frequently operationalized in terms of informed consent. Similar to the elements of autonomy mentioned above, we are primarily interested in whether a person has freely consented to a given enhancement; could they have done otherwise, and was there some external agent or factor that interfered with their decision-making? This draws from the third aspect of autonomy described by Christman above, that decisions made by the person are not the product of an external agent. The concept of informed consent developed out of the Nuremberg Doctors’ Trial in 1947, which led to the creation of the Nuremberg Code, a set of 10 principles that constitute basic legal and ethical rules for research involving human subjects. The first principle is: “The voluntary consent of the human subject is absolutely essential” [30]. This principle is, for the most part, concerned with the individual’s ability to exercise free power of choice and to be free of any intervention of force, deceit, duress, coercion, or constraint. Under this principle, the individual should also be given access to sufficient knowledge of the decision to be made and be able to understand the elements involved in the decision-making. The ability to refuse or say no to a decision is also an essential element of informed consent, or, more importantly, voluntary informed consent. The right to say no (or withdraw or refuse) is a direct indicator of the individual’s ability to “exercise free power of choice” without any coercion, duress, or intervention [30]. Examining this concept in a military context, it is important to identify situations in which soldiers have the ability to refuse an order or directive to accept enhancement technologies.

5. Vulnerability and Saying No

Proper informed consent practices recognize that people may be especially vulnerable to diminutions in their autonomy and capacity to give informed consent. Human research ethics addresses the concept of vulnerability in depth, some aspects of which are applicable here. For example, prisoners are treated differently with relation to informed consent compared to other adults in medical contexts [31,32,33]. This vulnerability comes from factors such as prisoners being placed in physical isolation and the power dynamics in their relationships with authority figures. Prisoners are at a greater risk of being manipulated or coerced into accepting interventions that they may otherwise refuse. This special vulnerability raises questions such as: Do prisoners have the capacity to understand what is being asked of them and what they are consenting to? Do they have the capacity to say no? If they do consent to an intervention, how can we be sure that the individual is saying yes freely and not as a product of coercion by institutional authorities? Based on these considerations, prisoners require special safeguards when it comes to obtaining informed consent for medical interventions.
Soldiers are not the same as prisoners in their roles and treatment within their relevant institutions; however, the concept of unseen pressures and the possibility of coercion and duress can be used to draw some parallels between these two scenarios. The directives to obey the chain of command and subsequent reprimand if one disobeys create an environment in which soldiers could feel unduly pressured into accepting enhancement technologies. The power imbalance in authority relationships formalised in the military’s hierarchical systems directly impacts an individual’s right to say no to enhancement technologies.
For instance, decisions involving the use of human enhancement technologies would, at a minimum, involve an authority figure (a commander or responsible officer), a research or technical specialist, and a physician if the enhancement involves an alteration to human physiology (as cognitive enhancements do). If the enhancement is used for a specific operation (on-the-ground testing), one can presume that the unit or team members would need to be privy to the decision-making process. Privacy and confidentiality are normally available in a medical setting under doctor–patient confidentiality and individual privacy laws, or in a research setting with the ethics approval for the specific study. In prisons, such privacy and confidentiality are limited by the need for prison officers, medical specialists, etc. to share information about a given prisoner. Moreover, given the close confines of incarceration, information is hard to suppress and can travel quickly and easily among inmates. Similar practical limits on privacy and confidentiality apply in a military context. As with prisoners, we recognize a form of differential vulnerability arising from informal authority relationships, such as those with one’s team members and other outranking officers.
In addition, the “mission first” values that are promoted in the military add to the constraints on an individual’s ability to freely consent. Where the priorities of commanding officers and those of individuals do not align, commanding officers may prioritise mission success and the safety of the unit as a whole over one individual’s safety or privacy. These priorities may not be shared by the individual being asked to accept a brain-stimulating technology that could potentially leave them with adverse side effects, whether long- or short-term. History has shown that this is the case: Military personnel have been coerced or pressured into accepting experimental vaccines that were later identified as having less-than-ideal efficacy and several long-lasting side effects [34]. Whilst some enhancement technologies are supported by scientific research conducted to investigate their functions prior to use, the testing protocols are not the same as those for a product tested prior to release to the market, thereby raising concerns regarding safety and efficacy.
Vulnerability, as discussed here, is a set of contextual elements: Institutional and differential vulnerability [35]. Institutional vulnerability arises from individuals being subjected to authority relationships where the power imbalance is formalised in hierarchical systems, and differential vulnerability arises when individuals are subjected to the informal power dynamics from authority of others. The above example involving prisoners is used here to highlight the parallels that can be drawn with regards to obtaining informed consent from soldiers. In the following sections of this paper, we suggest that because of the elements of contextual vulnerability arising in the military context, soldiers fit the conditions of an especially vulnerable population, even when the specific technology could potentially enhance their autonomy through improved decision-making.

6. Can a Soldier Say No? The Special Case of Soldiers

In this section, we look at three situations where the recipient is compelled to say yes, and we ask if they could say no. First, can a soldier autonomously say no to interventions that will enhance their own autonomy? Second, given the moral significance of some of their future actions, does morality itself compel a person to enhance their morality? That is, can a soldier say no to making better moral decisions? Finally, in a military context, soldiers are expected to follow commands. Therefore, can a soldier say no to following a command given the special conditions of being in the military and the ethical implications of doing so?

6.1. Can a Soldier Say No to Themselves?

The first issue where a soldier’s capacity to say no is limited derives from the potential for an intervention to change them. It is a question of continuity or “numeric identity”.3 Essentially, does the Soldier at Time 1 (T1) owe it to themselves for the Soldier at Time 2 (T2) to be enhanced? The basic idea of this question works on two related aspects of numeric identity: First, that the enhancement causes some significant rupture between Soldier at T1 and Soldier at T2, such that there is no relevant continuity between them; second, that Soldier at T2 not only has significantly enhanced autonomy as a result of the enhancement, but also that this matters morally. Combining these two points: Soldier at T1 and Soldier at T2 are different enough people (as a result of the rupture), and Soldier at T2 will be so significantly improved by the enhancement, that Soldier at T1 owes it to Soldier at T2 to undergo the enhancement.
The first premise of this argument draws on notions of numeric identity. Essentially, are Soldier at T1 and Soldier at T2 the same person or different people? “Discussions of Numeric Identity… are often concerned with what is needed for a thing to be the same as itself. If time passes, how do we judge that” the Soldier at T1 and Soldier at T2 are the same person? [26]. Consider here Sally who, since the age of 10, has wanted to join the army. By the time she is 30, Sally has spent a number of years as a soldier fighting in conflict zones; Sally at 30 years old (at T2) has a set of physical, psychological, and experiential attributes that significantly differentiate her from who she was as a ten-year-old at T1. We can obviously see that Sally is different at the two times. “But despite these changes, most of us would say that Sally is the same person she was at ten years old, now and until she dies... That is, Sally’s identity persists through time, despite the obvious fact that she has changed” [26]. So, on the one hand, Sally at T1 and Sally at T2 are different, but on the other hand, Sally at T1 and Sally at T2 are the same.
The way that people have sought to explain this persistence or continuity, despite the differences, draws on different aspects of Sally. One is what Derek Parfit called overlapping chains of psychological connectedness [36,37,38]. The person Sally is today is very similar to the person she was yesterday. The person she was yesterday is very similar to the person she was two days before, and so on. So, though she may not be exactly the same person now as she was at ten years old, as long as there is a continuity of states that links the person she was then to the person she is now, an identity claim holds [26]. Others suggest an alternative explanation. Instead of these overlapping chains of psychological connectedness, it is the facts about Sally’s physical persistence that make her the same person at T1 and T2.4 On this bodily criterion of numeric identity, it is the facts of the ongoing physical existence that make Sally the same person.
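Schematically (in our own notation, not Parfit’s), let $S_t$ denote the person-stage at time $t$ and let $C(S_{t_i}, S_{t_{i+1}})$ hold when adjacent stages are directly psychologically connected (shared memories, intentions, character traits). Psychological continuity can then be rendered as the chaining of that relation:

$$\mathrm{Continuous}(S_{t_1}, S_{t_n}) \iff C(S_{t_1}, S_{t_2}) \wedge C(S_{t_2}, S_{t_3}) \wedge \dots \wedge C(S_{t_{n-1}}, S_{t_n}),$$

so Sally at T1 and Sally at T2 can count as the same person even if $C(S_{T1}, S_{T2})$ fails to hold directly, provided every adjacent pair of stages between them is connected.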
We suggest here that, whichever account one favours (psychological connectedness or the bodily criterion), Soldier at T1 and Soldier at T2 are the same person. Though they are different, they are not different people. Soldier at T1 and Soldier at T2 are still likely to be psychologically connected, and their body is ongoing. That they have received a technological intervention that enhances them is not sufficient cause to say that they are different people. On both accounts, they are still the same.
This is all relevant to whether the soldier can say no to an enhancement, as one potential argument against saying no is that the soldier owes it to their future self to say yes. On this argument, if Soldier at T1 said no, they would be unfairly denying Soldier at T2 the options or capacities offered by the enhancement. We encounter a similar form of argument in discussions about environmental stewardship and what present people owe future people [40,41]. The question of what we owe future people relies at least in part on generational injustice, which in turn relies on the people at T1 or Generation 1 being different people from the people at T2 or Generation 2. Likewise, the “owe it to their future self” argument relies on the two selves being different people; it requires some significant difference between the T1 and T2 selves. However, this does not work as a compelling argument if the T1 and T2 selves are the same. Insofar as the soldier makes a free decision, an autonomous decision about themselves, they are not denying options or capacities to any different future self.
Another way that the “owe it to themselves” argument can run is like this: Soldier at T2 is not simply improved or enhanced by the intervention; rather, their rational capacities, and the resulting autonomy from those rational capacities, are so far in advance of Soldier at T1 that Soldier at T2 essentially has “authority” over Soldier at T1. Here, we can look at the arguments around advance directives, where a previous self has authority over the present self, but only when the previous self’s autonomy is so far above the present self’s extremely low autonomy [42]. In our situation, while the temporal logic is the reverse,5 the core of the argument is the same: One’s self is so significantly advanced in terms of its autonomy that the enhanced self has authority over the less autonomous self. As such, Soldier at T1 owes it to themselves to do what they can to bring Soldier at T2 about. However, we think that, given the current state of the particular technologies, it is unlikely that Soldier at T2 would be so significantly enhanced that their autonomy must take precedence over that of Soldier at T1. Thus, we think that the authority of Soldier at T2 is not sufficient to prevent Soldier at T1 from saying no.

6.2. Can a Soldier Say No to Making Better Moral Decisions? Moral Decision-Making in a Military Context

The next argument is more compelling. The basic claim here is that the soldier cannot say no to an enhancement if that enhancement improves their moral decision-making. For example, an NIBS technology that could enhance a soldier’s situational awareness or vigilance to the extent that they are able to process a considerably greater information load could allow a soldier to improve their moral decision-making compared to that of a non-enhanced soldier. Accepting it might be a sacrifice they are compelled to make. Consider this argument by analogy: A soldier in a conflict zone is offered the option of using weapon 1 or weapon 2. Weapon 1 is a weapon that they have been using for years; they feel comfortable with it and like to use it. They are familiar with weapon 2, but they do not feel as comfortable with it. However, in this particular conflict zone, there is a reasonable risk that particular forms of combat will kill innocent civilians, and the soldier knows this. Now, weapon 2 is much more likely to avoid civilian casualties or harm, but other than that, it will impact the enemy the same as weapon 1. Again, the soldier knows that weapon 2 will be far better in terms of its discrimination. In this scenario, as per the ethics and laws of armed conflict, the soldier needs to choose weapon 2 over weapon 1.
The underpinning logic of this is that soldiers have a duty not just to adhere to relevant moral principles; if there are two options and one meets the moral principles better than the other, they ought to choose that better option. Here, they are compelled to follow what morality demands. The same reasoning would likely hold with regard to particular enhancements; if the soldier is presented with an option that would improve their capacity to adhere to and meet specific military ethics principles, then that option ought to be chosen. On the face of it, the soldier’s general moral responsibility overrides any personal disagreement they might have with a particular technological intervention. The idea that people should be morally enhanced is currently being explored in the literature [44,45,46,47]. These authors have advanced the argument that we ought to morally enhance ourselves if such enhancements exist. Some of these authors take quite a strong line on this: If safe moral enhancements are ever developed, there are strong reasons to believe that their use should be obligatory [46].
Their reasoning turns on access to destructive technologies like weapons, and is similar to what we have offered here:
Around the middle of last century, a small number of states acquired the power to destroy the world through detonation of nuclear weapons. This century, many more people, perhaps millions, will acquire the power to destroy life on Earth through use of biological weapons, nanotechnology, deployment of artificial intelligence, or cyberterrorism… To reduce these risks, it is imperative to pursue moral enhancement not merely by traditional means, such as education, but by genetic or other biological means. We will call this moral bioenhancement.
[47]
We suggest that in the context of military decision-making, particularly when considering decisions that are of significant moral weight, such as deciding when to shoot, who to shoot, and so on, there seems to be a convincing argument that soldiers ought to be morally enhanced. However, this is a contingent claim. First, this is not a blanket claim that the soldier must assent to all enhancements. It is only relevantly applied to enhancements that enhance their moral decision-making. We note here that there is an important discussion about the assumptions and feasibility of moral enhancement. One general assumption is that there is some agreement on what constitutes “good” moral decision-making. Much of ethics, from one’s metaethical position to one’s preferred normative theories, is a series of open questions. However, we point out here that in the military ethics context, there are some generally accepted principles like discrimination, proportionality, and necessity that must be met. We do not claim that these principles are true, but instead agree with the just war tradition that things are better, all things considered, when soldiers adhere to these principles.
In terms of feasibility, as Harris points out, if moral enhancement involves the reduction of morally problematic emotions like racism, then he is “sceptical that we would ever have available an intervention capable of targeting aversions to the wicked rather than the good” [46]6. Similarly, Dubljevic and Racine argue that “an analysis of current interventions leads to the conclusion that they are ‘blunt instruments’: Any enhancement effect is unspecific to the moral domain” [47].7 The worry here is that the technologies that might aid in moral enhancement are so imprecise as to be discounted as serious ways to improve moral behaviour. In Harris’ view, we should instead focus on current methods of moral enhancement like education [48]. We consider these points to be reasonably compelling; there is good reason to be sceptical about the likelihood that these technologies will have the precision to reliably and predictably improve moral decision-making. However, for the purposes of this paper, in order to explore the ethical implications of these technologies, we are assuming some potential for these technologies to work as promised [49]. That said, this is an in-principle argument. Without certainty that these interventions do enhance moral decision-making, the argument against saying no becomes significantly weaker.
For instance, we need to question which interventions actually constitute enhancements to moral decision-making. Given the relation between enhanced memory, vigilance, and attention span and decision-making, as discussed in earlier sections of this paper, and the relation between improved decision-making and moral decision-making [29], one could argue that interventions that improve the quality of a soldier’s cognitive functions do in fact enhance their chances of making better moral decisions.8 It is important to note that whilst the enhancements examined in this paper have the capability to enhance cognitive functions, research investigating the efficacy of some types of commercially available NIBS products has shown that these enhancements may not be as effective as expected [51].9 Our thought here is that, as we are entertaining a claim that the soldier ought to be compelled to accept the intervention, there would need to be more than a mere likelihood that such an intervention will reliably enhance their moral decision-making. We suggest that this is reliant on a combination of assumptions about moral psychology and empirical claims about particular interventions.10
We also need to take into account other factors: Does the intervention have any side effects? In particular, does it have any side effects on moral decision-making? For instance, amphetamines are pharmaceutical interventions that have been used to reduce the need for sleep in the military [53]. However, they have a range of side effects, such as causing aggression and long-term psychological effects, that would argue against a soldier being compelled to take them on moral grounds. Investigation into the potential side effects of NIBS has identified short-term side effects, such as reactions at the electrode sites, as well as a few cases of black-outs and seizures [17,52]. These investigations were done on healthy volunteers in a medical/laboratory setting. As the technology is still relatively new, further investigation will be required to identify long-term side effects of its use. Any side effects would need to be weighed against the likelihood that the intervention does indeed improve moral decision-making. For instance, if there were only an inference that the intervention improved moral decision-making and there were known deleterious side effects, the case that the soldier can say no would be much stronger.
However, even if the interventions were likely to improve moral decision-making without significant side effects, some might still balk at the idea that soldiers must consent to such enhancements, because such interventions seem to override the principle of autonomy. Perhaps, though, the soldier has to assent to enhancements that would improve their moral decision-making capacity because of the nature of military actions, or at least military actions that involve killing people. These actions are of significant moral weight, and so need to be treated differently from non-moral decisions or actions.11
There is a counter-argument to this: That the position that one must assent to moral enhancements is absurd and extreme. If it is true that we must accept interventions that improve our moral decision-making, then everyone on earth is morally required to assent to these moral enhancements. If the particular case holds in the military context—that a soldier must consent to being morally enhanced—this would surely hold for everyone, and this seems absurd: It would seem to be such a significant infringement on personal autonomy, bordering on authoritarianism, that we ought to reject it. While there is perhaps substance to this counter-argument at a general level, we can reject it as an argumentum ad absurdum, as we are only looking at the military context. Moreover, we are only considering those military members whose roles and duties would have them being forced to make life and death decisions, and to make those decisions in a way that would benefit from the enhancements described in Section 2. The average person does not face such life and death decisions in such high-pressure contexts. Finally, even if we constrain potential recipients to the military, arguably, no military has or will have the capacity to roll this out for every serving member. Maybe they should [46,55,56], but that is a different point from what we are concerned with here. What we are concerned with is the capacity to say no. As we can reasonably constrain the claim to particular members of the military, the argumentum ad absurdum fails.

6.3. Saying No to an Order: Ethics of Following Commands and Being in the Military

The third element of this discussion arises because of the special conditions of being in the military. First, soldiers are trained to follow orders, thus diminishing their capacity to say no. Second, soldiers decide to enter the military with the knowledge that they will not only be asked to potentially engage in risky activity, but also have the foreknowledge that they will be expected to follow orders. These points are complex and nuanced, as we will discuss, but when combined with the previous argument about saying no to making better moral decisions, we suggest there might be a situation where a soldier cannot say no to particular moral enhancements.
As an essential part of their training, soldiers are trained to follow orders, something that may conflict with their existing moral identity [57]. Of course, this does not mean that they are automatons; many professional militaries now include training on the laws of armed conflict, military ethics, and the just war tradition. Any such training will explicitly or implicitly include recognition that a soldier should not follow an order that they know to breach the laws of armed conflict or a relevant ethical principle. For instance, many soldiers are taught that they can refuse to follow an order to kill a prisoner of war or an unarmed, unthreatening civilian. However, as history shows [58], commanders still give orders that are illegal or immoral, and many soldiers still follow those commands. Moreover, as was infamously demonstrated by Stanley Milgram, many people will follow the commands of someone perceived to be in a position of authority even if what they are being asked to do is objectively morally objectionable [59,60]. The point here is that even when significant moral principles may be transgressed, many soldiers will still follow those commands; their capacity to say no is diminished.
The relevance here is that if it is generally psychologically difficult for a soldier to say no to a command, particularly commands that do not obviously contravene the laws or ethics of war, it may be equally psychologically difficult to say no to commanders ordering them to accept an enhancement. We can consider here a general claim that military command structures, training, and socialisation to follow orders undermine our confidence that soldiers can say no to enhancements.
Adding to this explanation, again arising in part from military training, is that soldiers feel a significant responsibility to their comrades and/or to the nation for which they fight. “The role of this socialisation process is to separate soldiers from their previous social milieus and inculcate a new way of understanding the world. Central to this process is loyalty, obedience, and the subsuming of one’s individual desires to the needs of the greater cause” [61]. Not only are soldiers trained to seriously consider sacrificing themselves for some greater good, but as training progresses, they are taught that they ought to. “The officer in training builds up a professional identity on the basis of his personal immersion in the ongoing, collective narrative of his corps. This narrative identity is imparted not by instruction in international law but by stories about the great deeds of honourable soldiers” [62]. Some see this loyalty to one’s closest comrades as fundamental to military practice: “The strongest bonds are not to large organizations or abstract causes like the nation; rather, they are to the immediate group of soldiers in one’s platoon or squad, before whom one would be ashamed to be a coward”—Francis Fukuyama, quoted in [61]. Here, the issue is whether a soldier feels like they can say no because they are concerned that, if they do, they will be letting their comrades down.
Similarly, a number of people sign up to be soldiers out of a sense of loyalty to their nation, to protect their nation, and to fight for what is right. Serving in the military “is a higher calling to serve a greater good, and there is nothing incoherent or irrational about sacrificing oneself for a greater good” [63]. On this view, soldiering is not a mere job, but something higher. For a group like “the clergy, the ‘larger’ and ‘grander’ thing in question is the divine. [In contrast, for] soldiers, it is the closest thing on earth to divinity: The state” [63].12 Here, rather than the loyalty being to their comrades, it is to the larger nation for whom they fight and/or the values for which they fight. In both aspects, though, we can recognise a strong weight in favour of doing what is asked; were this another job, the responsibility to obey would play far less of a role, as very few jobs reliably expect a person to sacrifice their life for their job. “If it is not permissible for civilian employers to enforce compliance with such ‘imminently dangerous’ directives, then why is it permissible for the military?” The obvious answer is that soldiers make a commitment of obedience unto death; they agree to an “unlimited liability contract” upon enlistment, called so because there is no limit to the personal sacrifice that they can legitimately be ordered to make under that contract [63]. Given the importance of soldiering, the soldier forgoes many basic rights. In line with this, they may also forfeit a right to say no to enhancements.
This brings us back to the argument that we need to think of informed consent in a military context as different from informed consent in a medical context. What we might need to recognise is that entry into a military context involves a broad consent, where one consents to giving up certain later consents.13 This is not a “freedom to be a slave” argument, as military service will typically end; where enhancements differ is that they are ongoing.14
However, this all brings us to a second vital aspect of the capacity to say no: Soldiers sign up to join the military and, unlike in many other jobs, they are expected to follow orders. We would hope that people signing up to join the military have some foreknowledge that what they are committing to is different from a normal job, with a set of important expectations, including following orders. For instance, it would be absurd to join the military and then complain that one’s commander is bossy, or that one does not like guns or state-sanctioned violence. It is essentially a form of caveat emptor, where the person knowingly gives up certain freedoms as part of joining the military; their freedom to say no is significantly curtailed. Just as they have significantly diminished freedom to say no to commands, their freedom to say no to an enhancement is diminished. We return to the issue of exploitation in enlistment below.
The above argument becomes even more compelling when considering the narrowed focus of this paper—whether a soldier can say no to technologies that enhance their military decision-making capacity. As we discussed in the technology summary and in the section on whether a soldier can say no to themselves, the technologies we are concerned with are those that are non-permanent and non-invasive. While their intended use is explicitly to enhance a person’s decision-making capacity in a military context, thus qualifying them as enhancement technologies, they are as much a military tool as they are a biotechnological intervention. The interventions we are concerned with here are not equivalent to asking a soldier to carry a weapon or put on body armour, but neither are they equivalent to an invasive clinical intervention that irreversibly alters their physiology.
The relevance of this is that the arguments about saying no to clinical interventions that one finds in the biomedical literature have less purchase in the military context than they do in a clinical biomedical context. That informed consent in a military context is different from that in a medical or clinical context is a well-founded view [67,68,69]. This does not mean that we jettison the notion of informed consent in a military context, but rather that it needs to be adapted to that context.
We also need to consider whether the purposes of particular enhancements add further moral weight to the argument that soldiers cannot say no. For instance, if an enhancement were shown to increase a particular military team’s success or survival rate, then there is increased weight for that enhancement to be accepted; for example, if cognitive enhancements advanced to the stage that they could significantly enhance a soldier’s memory processing or reduce or eliminate cognitive fatigue in the battlefield, factors that directly impact survival rate on the ground. It is like a soldier saying no to a better weapon; insofar as that better weapon hits its targets but is more proportionate and discriminating than existing weapons, we find a prima facie case that the soldier should use the better weapon. So too, we have a prima facie case that the enhancement be accepted.15 Moreover, as was discussed above, if the particular enhancement is likely to increase the capacity to adhere to ethical principles like jus in bello proportionality or discrimination, then the soldier should not say no to making better moral decisions. The relevance to this section is that both of these arguments that a soldier has a responsibility to fight better are strengthened when considering that the soldier signed up for entry into the military.
All that said, this argument has a significant caveat of its own—it assumes that all people join the military freely and with the relevant advance knowledge of what this role entails. However, as Bradley Strawser and Michael Robillard show, there is risk of exploitation in the military [71]. Exploitation of people’s economic, social, and educational vulnerabilities to get them to enlist in the military would significantly undermine any notions of broad consent or that soldiers must accept enhancements because “they knew what they were getting into when they signed up.” Moreover, the arguments developed here are considerably weaker if considering soldiers who were conscripted to fight. Thus, not only does the context of the military change how we would assess whether they can say no to an enhancement, but the conditions under which soldiers are enlisted are also essential to any relevant analysis.

7. Conclusions

In this paper, we examined the challenges associated with autonomy-enhancing technology used in a military context, particularly a soldier’s right to say no to these types of technologies. Some parallels can be drawn between enhancements used in the military and similar interventions used in medical or therapeutic contexts. However, we propose that the nature, purpose, and context of the enhancements used in the military raise special concerns regarding the impact on individual autonomy, informed consent, and the ability to say no to such enhancements.
Examination of current technologies indicated that further evidence about their nature and function is needed to ascertain that, as a blanket rule, autonomy would be enhanced to the extent that it would override a soldier’s right to autonomy and informed consent. In addition, the challenges of obtaining informed consent become more complex in a military context. We propose that soldiers can be considered an especially vulnerable group due to contextual elements that highlight institutional and differential vulnerability. A system in which power imbalances are formalised in authority relationships and hierarchical command structures, and where a higher priority is given to the success of operations and the safety of the unit as a whole over individual rights, could impact a soldier’s ability to refuse an enhancement that is considered beneficial to the very aspects given higher priority. Further to this, the lack of individual privacy that would otherwise be afforded to civilians (in a medical context) could also diminish a soldier’s capacity to say no to an enhancement.
Looking at possible situations that could compel a soldier to accept enhancements, we examined the argument that soldiers potentially owe it to themselves to accept an intervention that could benefit them in the future. Unpacking the concepts of numeric identity and potentially denying one’s future self a benefit, we propose that current enhancements and the benefits they offer are unlikely to enhance a soldier to the extent that the rights of their future self take precedence over those of their current self.
Another scenario in which a soldier may be compelled to accept an enhancement is the possibility of making better moral decisions. In this case, we propose that soldiers, by the nature of their work in making life and death decisions, could possibly be compelled to accept an enhancement if it is certain that said enhancement would produce a better moral outcome in line with jus in bello and the Laws of Armed Conflict (LOAC). This is not a blanket claim about all enhancements, but only about those that produce a better moral decision and a better moral outcome. However, even in this scenario, the potential side effects of the enhancement and the likelihoods of outcomes need to be taken into consideration. Even then, this does not warrant an override of autonomy and informed consent, but rather a need to ensure that safeguards are in place to protect soldiers’ rights even if they are compelled to accept such enhancements.
Finally, we looked at the argument that soldiers sign up to follow orders when they join the military; hence, they have foreknowledge that they are committing to a job that comes with certain risks and sacrifices. Considering the focus of this paper—the question of whether a soldier can say no to an autonomy-enhancing technology, provided that the technology is non-invasive, not permanent, and explicitly used to enhance a person’s decision-making capacity—we propose that a soldier could be compelled to accept said technology, with their right to say no considerably diminished. However, a caveat here is that not everyone joins the military freely, without exploitation, and with sufficient knowledge of what their role will entail.
Whilst there are narrowly focused scenarios in which a soldier could be compelled to accept autonomy-enhancing technologies, impacting their right to say no, these would need to meet the thresholds highlighted in this paper. Even so, rights to individual autonomy and obtaining informed consent should not be forgone; rather, an understanding of the nature, purpose, and context of the use of such enhancements in the military is warranted, as is identification of the appropriate measures that could be implemented to ensure that individual rights are not eroded. Adding to that the risks of exploitation and issues like conscription, we need to be explicit that our argument comes with a series of important caveats and restrictions. We are not saying that a soldier loses any right to say no to an enhancement.

Author Contributions

Both authors contributed equally to this paper. All authors have read and agreed to the published version of the manuscript.

Funding

Initial research for this paper was supported by the Brocher Foundation.

Conflicts of Interest

Sahar Latheef works for the Australian Government, Department of Defence. The views and opinions expressed in this paper are those of the authors and do not reflect any official policy or position of any agency. The authors have no other conflicts of interest.

References

  1. McKendrick, R.; Parasuraman, R.; Ayaz, H. Wearable functional near infrared spectroscopy (fNIRS) and transcranial direct current stimulation (tDCS): Expanding vistas for neurocognitive augmentation. Front. Syst. Neurosci. 2015, 9. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Burwell, S.; Sample, M.; Racine, E. Ethical aspects of brain computer interfaces: A scoping review. BMC Med. Ethics 2017, 18, 1–11. [Google Scholar] [CrossRef] [PubMed]
  3. Krishnan, A. Military Neuroscience and the Coming Age of Neurowarfare; Routledge, Taylor & Francis Group: New York, NY, USA, 2017. [Google Scholar]
  4. Matran-Fernandez, A.; Poli, R. Towards the automated localisation of targets in rapid image-sifting by collaborative brain-computer interfaces. PLoS ONE 2017, 12, e0178498. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Matran-Fernandez, A.; Poli, R.; Cinel, C. Collaborative Brain-Computer Interfaces for the Automatic Classification of Images. In Proceedings of the 2013 6th International IEEE/EMBS Conference on Neural Engineering (NER), San Diego, CA, USA, 6–8 November 2013; IEEE: San Diego, CA, USA, 2013; pp. 1096–1099. [Google Scholar]
  6. Moore, B.E. The Brain Computer Interface Future: Time for a Strategy; Air War College Air University Maxwell AFB: Montgomery, AL, USA, 2013. [Google Scholar]
  7. DARPA Next-Generation Nonsurgical Neurotechnology. Available online: https://www.darpa.mil/program/next-generation-nonsurgical-neurotechnology (accessed on 12 July 2020).
  8. Smalley, E. The business of brain-computer interfaces. Nat. Biotechnol. 2019, 37, 978–982. [Google Scholar] [CrossRef] [PubMed]
  9. Coffman, B.A.; Clark, V.P.; Parasuraman, R. Battery powered thought: Enhancement of attention, learning, and memory in healthy adults using transcranial direct current stimulation. NeuroImage 2014, 85, 895–908. [Google Scholar] [CrossRef] [PubMed]
  10. Nelson, J.M.; McKinley, R.A.; McIntire, L.K.; Goodyear, C.; Walters, C. Augmenting Visual Search Performance with Transcranial Direct Current Stimulation (tDCS). Mil. Psychol. 2015, 27, 335–347. [Google Scholar] [CrossRef] [Green Version]
  11. Clark, V.P.; Coffman, B.A.; Mayer, A.R.; Weisend, M.P.; Lane, T.D.R.; Calhoun, V.D.; Raybourn, E.M.; Garcia, C.M.; Wassermann, E.M. TDCS guided using fMRI significantly accelerates learning to identify concealed objects. NeuroImage 2012, 59, 117–128. [Google Scholar] [CrossRef] [Green Version]
  12. Sela, T.; Kilim, A.; Lavidor, M. Transcranial alternating current stimulation increases risk-taking behavior in the balloon analog risk task. Front. Neurosci. 2012, 6, 22. [Google Scholar] [CrossRef] [Green Version]
  13. Durantin, G.; Scannella, S.; Gateau, T.; Delorme, A.; Dehais, F. Processing Functional Near Infrared Spectroscopy Signal with a Kalman Filter to Assess Working Memory during Simulated Flight. Front. Hum. Neurosci. 2015, 9, 707. [Google Scholar] [CrossRef] [Green Version]
  14. Brunoni, A.R.; Vanderhasselt, M.-A. Working memory improvement with non-invasive brain stimulation of the dorsolateral prefrontal cortex: A systematic review and meta-analysis. Brain Cogn. 2014, 86, 1–9. [Google Scholar] [CrossRef] [Green Version]
  15. Aricò, P.; Borghini, G.; Di Flumeri, G.; Colosimo, A.; Pozzi, S.; Babiloni, F. A passive brain-computer interface application for the mental workload assessment on professional air traffic controllers during realistic air traffic control tasks. Front. Hum. Neurosci. 2016, 228, 295. [Google Scholar]
  16. McDermott, P.L.; Ries, A.J.; Plott, B.; Touryan, J.; Barnes, M.; Schweitzer, K. A Cognitive Systems Engineering Evaluation of a Tool to Aid Imagery Analysts. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2015, 59, 274–278. [Google Scholar] [CrossRef]
  17. Davis, S.E.; Smith, G.A. Transcranial Direct Current Stimulation Use in Warfighting: Benefits, Risks, and Future Prospects. Front. Hum. Neurosci. 2019, 13, 114. [Google Scholar] [CrossRef] [PubMed]
  18. DARPA Restoring Active Memory (RAM). Available online: https://www.darpa.mil/program/restoring-active-memory (accessed on 12 July 2020).
  19. Hitchcock, E.M.; Warm, J.S.; Matthews, G.; Dember, W.N.; Shear, P.K.; Tripp, L.D.; Mayleben, D.W.; Parasuraman, R. Automation cueing modulates cerebral blood flow and vigilance in a simulated air traffic control task. Theor. Issues Ergon. Sci. 2003, 4, 89–112. [Google Scholar] [CrossRef]
  20. Nelson, J.T.; McKinley, R.A.; Golob, E.J.; Warm, J.S.; Parasuraman, R. Enhancing vigilance in operators with prefrontal cortex transcranial direct current stimulation (tDCS). NeuroImage 2014, 85, 909–917. [Google Scholar] [CrossRef] [PubMed]
  21. Halo Neuroscience. Bihemispheric Transcranial Direct Current Stimulation with Halo Neurostimulation System over Primary Motor Cortex Enhances Fine Motor Skills Learning in a Complex Hand Configuration Task. 2016. Available online: https://www.haloneuro.com/pages/science (accessed on 3 June 2020).
  22. Huang, L.; Deng, Y.; Zheng, X.; Liu, Y. Transcranial Direct Current Stimulation with Halo Sport Enhances Repeated Sprint Cycling and Cognitive Performance. Front. Physiol. 2019, 10, 118. [Google Scholar] [CrossRef]
  23. Seck, H.H. Super SEALs: Elite Units Pursue Brain-Stimulating Technologies. Military.com, 2 April 2017. [Google Scholar]
  24. Christman, J. Autonomy in Moral and Political Philosophy. In The Stanford Encyclopedia of Philosophy, Fall 2020 ed.; Zalta, E.N., Ed.; 2018. Available online: https://plato.stanford.edu/archives/fall2020/entries/autonomy-moral/ (accessed on 28 June 2020).
  25. Mackenzie, C. Critical Reflection, Self-Knowledge, and the Emotions. Philos. Explor. 2002, 5, 186–206. [Google Scholar] [CrossRef]
  26. Henschke, A. Ethics in an Age of Surveillance: Personal Information and Virtual Identities; Cambridge University Press: New York, NY, USA, 2017; pp. 1–334. [Google Scholar]
  27. Kahneman, D. Thinking, Fast and Slow, 1st ed.; Farrar, Straus and Giroux: New York, NY, USA, 2011; p. 78. [Google Scholar]
  28. Smith, M. The Moral Problem; Blackwell: Cambridge, MA, USA; Oxford, UK, 1994; pp. 1–226. [Google Scholar]
  29. Kennett, J.; Fine, C. Will the Real Moral Judgment Please Stand up? The Implications of Social Intuitionist Models of Cognition for Meta-Ethics and Moral Psychology. Ethical Theory Moral Pract. 2009, 12, 77–96. [Google Scholar] [CrossRef]
  30. Annas, G.J. Beyond Nazi War Crimes Experiments: The Voluntary Consent Requirement of the Nuremberg Code at 70. Am. J. Public Health (1971) 2018, 108, 42–46. [Google Scholar] [CrossRef]
  31. Moser, D.J.; Arndt, S.; Kanz, J.E.; Benjamin, M.L.; Bayless, J.D.; Reese, R.L.; Paulsen, J.S.; Flaum, M.A. Coercion and informed consent in research involving prisoners. Compr. Psychiatry 2004, 45, 1–9. [Google Scholar] [CrossRef] [PubMed]
  32. Hayes, M.O. Prisoners and autonomy: Implications for the informed consent process with vulnerable populations. J. Forensic Nurs. 2006, 2, 84–89. [Google Scholar] [CrossRef] [PubMed]
  33. Pont, J. Ethics in research involving prisoners. Int. J. Prison. Health 2008, 4, 184–197. [Google Scholar] [CrossRef]
  34. Cummings, M. Informed Consent and Investigational New Drug Abuses in the U.S. Military. Account. Res. 2002, 9, 93–103. [Google Scholar] [CrossRef] [PubMed]
  35. Gordon, B.G. Vulnerability in Research: Basic Ethical Concepts and General Approach to Review. Ochsner J. 2020, 20, 34–38. [Google Scholar] [CrossRef] [PubMed]
  36. Parfit, D. Reasons and Persons, Reprint with corrections, 1987 ed.; Clarendon Press: Oxford, UK, 1987; pp. 1–543. [Google Scholar]
  37. Parfit, D. Personal Identity. Philos. Rev. 1971, 80, 3–27. [Google Scholar] [CrossRef]
  38. Parfit, D. On “The Importance of Self-Identity”. J. Philos. 1971, 68, 683–690. [Google Scholar] [CrossRef]
  39. DeGrazia, D. Human Identity and Bioethics; Cambridge University Press: Cambridge, UK, 2005. [Google Scholar]
  40. Forrester, M.G. What Do We Owe to Future Generations? In Persons, Animals, and Fetuses: An Essay in Practical Ethics; Springer: Dordrecht, The Netherlands, 1996; pp. 137–146. [Google Scholar]
  41. Golding, M.P. Obligations to Future Generations. Monist 1972, 56, 85–99. [Google Scholar] [CrossRef]
  42. Jaworska, A. Advance Directives and Substitute Decision-Making. In Stanford Encyclopedia of Philosophy; Center for Study of Language and Information, Stanford University: Stanford, CA, USA, 2009. [Google Scholar]
  43. Faye, J. Backward Causation. The Stanford Encyclopedia of Philosophy. Zalta, E.N., Ed.; Summer 2018 ed. 2018. Available online: https://plato.stanford.edu/archives/sum2018/entries/causation-backwards/ (accessed on 28 June 2020).
  44. Douglas, T. Moral Enhancement. J. Appl. Philos. 2008, 25, 228–245. [Google Scholar] [CrossRef]
  45. Douglas, T. Moral Enhancement via Direct Emotion Modulation: A Reply to John Harris. Bioethics 2011. [Google Scholar] [CrossRef]
  46. Persson, I.; Savulescu, J. The Perils of Cognitive Enhancement and the Urgent Imperative to Enhance the Moral Character of Humanity. J. Appl. Philos. 2008, 25, 162–177. [Google Scholar] [CrossRef]
  47. Dubljević, V.; Racine, E. Moral Enhancement Meets Normative and Empirical Reality: Assessing the Practical Feasibility of Moral Enhancement Neurotechnologies: Moral Enhancement Meets Normative and Empirical Reality. Bioethics 2017, 31, 338–348. [Google Scholar] [CrossRef] [PubMed]
  48. Harris, J. Moral Enhancement and Freedom. Bioethics 2011, 25, 102–111. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  49. Persson, I.; Savulescu, J. Getting Moral Enhancement Right: The Desirability of Moral Enhancement. Bioethics 2013, 27, 124–131. [Google Scholar] [CrossRef] [PubMed]
  50. Haidt, J. The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment. Psychol. Rev. 2001, 108, 814–834. [Google Scholar] [CrossRef] [PubMed]
  51. Steenbergen, L.; Sellaro, R.; Hommel, B.; Lindenberger, U.; Kühn, S.; Colzato, L.S. “Unfocus” on foc.us: Commercial tDCS headset impairs working memory. Exp. Brain Res. 2016, 234, 637–643. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  52. Voarino, N.; Dubljević, V.; Racine, E. tDCS for Memory Enhancement: Analysis of the Speculative Aspects of Ethical Issues. Front. Hum. Neurosci. 2016, 10, 678. [Google Scholar] [CrossRef] [Green Version]
  53. Repantis, D.; Schlattmann, P.; Laisney, O.; Heuser, I. Modafinil and methylphenidate for neuroenhancement in healthy individuals: A systematic review. Pharmacol. Res. 2010, 62, 187–206. [Google Scholar] [CrossRef]
  54. Giubilini, A. Conscience. In Stanford Encyclopedia of Philosophy Archive, 2016 ed.; Center for the Study of Language and Information (CSLI), Stanford University: Stanford, CA, USA, 2016. [Google Scholar]
  55. Persson, I.; Savulescu, J. Moral Hard-Wiring and Moral Enhancement. Bioethics 2017, 31, 286–295. [Google Scholar] [CrossRef] [Green Version]
  56. Persson, I.; Savulescu, J. The Duty to be Morally Enhanced. Topoi 2019, 38, 7–14. [Google Scholar] [CrossRef] [Green Version]
  57. Dobos, N. Ethics, Security, and the War Machine: The True Cost of the Military; Oxford University Press: Oxford, UK, 2020. [Google Scholar]
  58. Glover, J. Humanity: A Moral History of the Twentieth Century; Yale University Press: New Haven, CT, USA, 2000; 464p. [Google Scholar]
  59. Russell, N.J.C. Milgram’s Obedience to Authority Experiments: Origins and Early Evolution. Br. J. Soc. Psychol. 2011, 50, 140–162. [Google Scholar] [CrossRef]
  60. Blass, T. The Milgram Paradigm after 35 Years: Some Things We Now Know about Obedience to Authority. J. Appl. Soc. Psychol. 1999, 29, 955–978. [Google Scholar] [CrossRef]
  61. Connor, J.M. Military Loyalty: A Functional Vice? Crim. Justice Ethics 2010, 29, 278–290. [Google Scholar] [CrossRef]
  62. Osiel, M.J. Obeying Orders: Atrocity, Military Discipline, and the Law of War. Calif. Law Rev. 1998, 86, 939. [Google Scholar] [CrossRef]
  63. Dobos, N. Punishing Non-Conscientious Disobedience: Is the Military a Rogue Employer? Philos. Forum 2015, 46, 105–119. [Google Scholar] [CrossRef]
  64. Helgesson, G. In Defense of Broad Consent. Camb. Q. Healthc. Ethics 2012, 21, 40–50. [Google Scholar] [CrossRef]
  65. Sheehan, M. Can Broad Consent be Informed Consent? Public Health Ethics 2011, 4, 226–235. [Google Scholar] [CrossRef] [Green Version]
  66. Henschke, A. Militaries and the Duty of Care to Enhanced Veterans. J. R. Army Med. Corps 2019, 165, 220–225. [Google Scholar] [CrossRef]
  67. Boyce, R.M. Waiver of Consent: The Use of Pyridostigmine Bromide during The Persian Gulf War. J. Mil. Ethics 2009, 8. [Google Scholar] [CrossRef]
  68. McManus, J.; Mehta, S.G.; McClinton, A.R.; De Lorenzo, R.A.; Baskin, T.W. Informed Consent and Ethical Issues in Military Medical Research. Acad. Emerg. Med. 2005, 12, 1120–1126. [Google Scholar] [CrossRef]
  69. Wolfendale, J.; Clarke, S. Paternalism, Consent, and the Use of Experimental drugs in the Military. J. Med. Philos. 2008, 33, 337–355. [Google Scholar] [CrossRef] [PubMed]
  70. Strawser, B.J. Moral Predators: The Duty to Employ Uninhabited Aerial Vehicles. J. Mil. Ethics 2010, 9, 342–368. [Google Scholar] [CrossRef]
  71. Robillard, M.; Strawser, B.J. The Moral Exploitation of Soldiers. Public Aff. Q. 2016, 30, 171–195. [Google Scholar]
Notes

1. Former U.S. President Barack Obama's BRAIN Initiative is supported by the National Institutes of Health (NIH) and DARPA.
2. For more on autonomy and self-identification, see [25,26]; for the argument about reasons and autonomy, see [27].
3. We note here that in the philosophical literature, these issues are typically covered under discussions of “personal identity” rather than “numeric identity”. However, as “personal identity” is also used in non-philosophical disciplines to refer to psychological aspects of a person’s identity, we have chosen to refer to this as “numeric identity”. For more on this particular nomenclature, see Henschke [26].
4. For instance, see [39].
5. We also recognise here that there is perhaps an additional step required to make the claim that the T2 self has authority over the T1 self—that the future self can direct or dictate things to the present self. However, this line of argument may rely on some form of backwards causation, where the future causes present events to occur. We note here that backwards causation is a somewhat contentious concept. For more on backwards causation, see [43].
6. See p. 105.
7. See p. 348.
8. As noted earlier, we take a somewhat Kantian approach to reason and decision-making, and to their connection to moral decision-making, drawing on the work of Michael Smith, and of Jeanette Kennett and Cordelia Fine [28,29]. This contrasts with more Humean accounts, such as the social intuitionist model of moral decision-making advocated by Jonathan Haidt [50].
9. In some cases, one type of cognitive function could be enhanced at the cost of another. For example, improved learning and memory could come at the cost of decreased automated processing.
10. Counter-arguments [52] suggest that concerns about explicit coercion and the potential impact on individual autonomy and informed consent in the military are perhaps misplaced, given the currently low prevalence of use and the fact that the social acceptance and efficacy of tDCS are still to be established. However, we propose that the fact that these interventions are not yet widely used does not negate the value of exploring the potential ethical concerns should their use become more widely accepted.
11. We recognise that this position, that “moral reasons” can override personal beliefs, is contentious and contested. While we do not have space to cover the topic here, we suggest that one of the features of moral reasons that makes them different from non-moral reasons is that they ought to count significantly in one’s decision-making [54]. What we will say is that, given the specifics of the technologies that seem likely to be used for such enhancements, as they are currently non-invasive and potentially reversible, the argument that a soldier has a right to conscientiously object to such enhancements is weak. Like “weapon 1” versus “weapon 2” above, if the technologies do enhance moral decision-making and are not so different from using two different weapon types, the right to say no is limited at best. However, as we have taken care to note throughout the paper, there is perhaps a stronger conscientious objection argument that says “I say no to this technology, because it does not actually enhance moral decision-making.”
12. We note here that this author is not endorsing this view; rather, they are describing the notion of military service as distinct from a normal job [63].
13. For more on broad consent, see [64,65].
14. For more on enhancements and the duty of care to veterans, see [66].
15. We are thinking here of a parallel argument that remote weapons like drones should be used if, all other things being equal, these remote weapons reduce risk to one’s own soldiers [70].
