Article
Peer-Review Record

Can a Soldier Say No to an Enhancing Intervention?

Philosophies 2020, 5(3), 13; https://doi.org/10.3390/philosophies5030013
by Sahar Latheef * and Adam Henschke *
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 1 July 2020 / Revised: 25 July 2020 / Accepted: 27 July 2020 / Published: 3 August 2020
(This article belongs to the Special Issue Human Enhancement Technologies and Our Merger with Machines)

Round 1

Reviewer 1 Report

This is a promising paper that nevertheless needs improvement.

There are many minor issues, such as typos, spelling, confusion in tenses, mixing of singular and plural in sentences, and repetition.

However, the most important issue is that the authors seemed to have rushed the paper, to the point of misattributing the title, having a confusing subheading scheme and failing to consider adequate counterpoints to their argument.

Ad title: since the 'autonomy enhancement' argument is one of three, and not even the most important one, perhaps the 'Autonomy' should be dropped from the title.

Ad subheadings: BCI, NIBS and Memory, Attention and Vigilance (pp 2-3) seem to be part of the discussion of technologies, yet the latter is not a discussion of technologies per se, but of effects from neuroscientific studies. Also, the heading "special case for soldiers" (p.6) seems to comprise two (out of three announced) subheadings (the third "...Ethics of Following Commands..." (p.9) seems to be a separate heading for some reason), which are not discussed in the order they are introduced, which is confusing for the reader.

Ad adequate counterpoints: even though the authors give an extensive coverage of the literature, they seem to ignore studies and papers that would put their favored narrative into question. On page 3, they consider positive enhancement results of tDCS studies, but not more general reviews that put this interpretation into question (e.g., PMCID: PMC5225120 DOI: 10.3389/fnhum.2016.00678) nor specific studies that prove reports of enhancement effects such as those being sought by athletes and the military are misplaced (e.g., PMCID: PMC4751189 DOI: 10.1007/s00221-015-4391-9), or come with considerable detriments (PMCID: PMC3672974 DOI: 10.1523/JNEUROSCI.4927-12.2013).

Similarly, the discussion of moral enhancement seems to be one-sided: it is completely ignoring valid and important contra-arguments (e.g., PMCID: PMC3660783 DOI: 10.1111/j.1467-8519.2010.01854.x) and more recent reviews that put the prospects of moral enhancement into question empirically and in principle (e.g., PMID: 28503833 DOI: 10.1111/bioe.12355).

Additional comments:

Some references seem to be improperly cited, and some (e.g., [13]) are of poor quality (e.g., not peer reviewed) and should be replaced with rigorously vetted systematic reviews.

The discussion of autonomy (e.g., p.4) could be improved by a better differentiation of 'basic autonomy' (the right to freely choose) from the 'ideal of autonomy' (the imperative to choose wisely after a period of weighing evidence). Similarly, the discussion of objections is too perfunctory, and borders on being a 'straw man argument'.

The discussion of following commands has merit, but could be improved by a more nuanced consideration of conscientious objection. It could be the case that a soldier signs up to serve their country, and has no issues with using weapons, but would object on moral (or religious) grounds to using enhancements. Finally, the mention of the difference between conscripted vs. freely contracted soldiers is a great point, but underdeveloped as things stand.

 

Author Response

R1:

This is a promising paper that nevertheless needs improvement.

There are many minor issues, such as typos, spelling, confusion in tenses, mixing of singular and plural in sentences, and repetition.

Response to reviewer: Manuscript amended to correct typos, spelling errors and grammar as indicated in track changes.

However, the most important issue is that the authors seemed to have rushed the paper, to the point of misattributing the title, having a confusing subheading scheme and failing to consider adequate counterpoints to their argument.

Ad title: since the 'autonomy enhancement' argument is one of three, and not even the most important one, perhaps the 'Autonomy' should be dropped from the title.

Response to reviewer: ‘Autonomy’ deleted from the title.

Ad subheadings: BCI, NIBS and Memory, Attention and Vigilance (pp 2-3) seem to be part of the discussion of technologies, yet the latter is not a discussion of technologies per se, but of effects from neuroscientific studies. Also, the heading "special case for soldiers" (p.6) seems to comprise two (out of three announced) subheadings (the third "...Ethics of Following Commands..." (p.9) seems to be a separate heading for some reason), which are not discussed in the order they are introduced, which is confusing for the reader.

Response to reviewer: p.2-3 sub-heading “Memory, Attention and Vigilance” deleted. Text L119-L120 amended to provide the reader with clarity as to the order of paragraphs that follows. Text L132-L135 amended to discuss areas of neuroscientific research focusing on memory enhancement that have military applicability.

p.6 Formatting for sub-heading “Ethics of Following Commands” amended to include it as a sub-heading under the section heading “Special Case of Soldiers” (L256).  Text in paragraphs L277 to L283 amended to correct the order in which the sub-headings appear and to align with the order of discussions that follow.

Ad adequate counterpoints: even though the authors give an extensive coverage of the literature, they seem to ignore studies and papers that would put their favored narrative into question. On page 3, they consider positive enhancement results of tDCS studies, but not more general reviews that put this interpretation into question (e.g., PMCID: PMC5225120 DOI: 10.3389/fnhum.2016.00678) nor specific studies that prove reports of enhancement effects such as those being sought by athletes and the military are misplaced (e.g., PMCID: PMC4751189 DOI: 10.1007/s00221-015-4391-9), or come with considerable detriments (PMCID: PMC3672974 DOI: 10.1523/JNEUROSCI.4927-12.2013).

Response to reviewer:

Counter-arguments to what we have proposed in this paper, questioning the efficacy and societal acceptance of enhancements sought by athletes and the military (PMCID: PMC5225120 DOI: 10.3389/fnhum.2016.00678), are acknowledged in footnote 12 (paragraph L399-L402). We propose that even though some enhancements have low societal acceptance as yet, and some require further investigation into their efficacy, this does not mean that discussions regarding the ethical issues of their use should be ignored. Now is the time to engage in such discussions and prepare ourselves should their use become widely accepted.

Risks and side effects of tDCS are acknowledged and discussed briefly at L412-L414. Text amended to acknowledge that some commercially available tDCS products, such as foc.us headsets (PMCID: PMC4751189 DOI: 10.1007/s00221-015-4391-9), whilst enhancing memory, can also decrease the accuracy of tasks performed. Similar research (PMCID: PMC3672974 DOI: 10.1523/JNEUROSCI.4927-12.2013) shows that enhanced memory could also decrease other cognitive functions such as automated processing. This therefore calls into question the efficacy of some commercially available products.

Similarly, the discussion of moral enhancement seems to be one-sided: it is completely ignoring valid and important contra-arguments (e.g., PMCID: PMC3660783 DOI: 10.1111/j.1467-8519.2010.01854.x) and more recent reviews that put the prospects of moral enhancement into question empirically and in principle (e.g., PMID: 28503833 DOI: 10.1111/bioe.12355).

 

Response: We have added a paragraph here that acknowledges both of these concerns. First, we note the valid point about ethics/'the good' etc. being open questions. However, we point out that in the military context, there are well-rehearsed arguments about discrimination, proportionality and necessity. The paper obviously does not have space to enter into discussions about the just war tradition, but we hope that making a conscious connection to these principles gives a little more clarity to our position here. We note that, while this is not a definitive answer to the open questions in metaethics, there is a substantial set of positions that hold some agreement on the substance of what counts as good in a military context.

On the second aspect, we thank Reviewer One for the references here. We have included some discussion of Harris, and of Dubljevic and Racine. We have also tried to make it clearer that our position here is an in-principle one, and that scepticism about the feasibility of these enhancements is not only warranted but plays a significant role in whether a soldier can say no or not.

 

Paragraph added:

We note here that there is an important discussion about the assumptions and feasibility of this technological moral enhancement. One general assumption is that there is some agreement on what constitutes 'good' moral decision making. Much of ethics, from one's metaethical position to one's preferred normative theories, is a series of open questions. However, we point out here that in the military ethics context, there are some generally accepted principles, like discrimination, proportionality and necessity, that must be met. We do not claim that these principles are true, but instead agree with the just war tradition that things are better, all things considered, when soldiers adhere to these principles. In terms of feasibility, Harris points out that if moral enhancement involves the reduction of morally problematic emotions like racism, he is "sceptical that we would ever have available an intervention capable of targeting aversions to the wicked rather than the good" (p. 105) [https://onlinelibrary.wiley.com/doi/epdf/10.1111/j.1467-8519.2010.01854.x]. Similarly, Dubljevic and Racine argue that "an analysis of current interventions leads to the conclusion that they are blunt instruments: any enhancement effect is unspecific to the moral domain" (p. 348) [https://onlinelibrary.wiley.com/doi/epdf/10.1111/bioe.12355]. The worry here is that the technologies that might aid in moral enhancement are so imprecise as to be discounted as serious ways to improve moral behavior, and so, on Harris' view, we should instead focus on current methods of moral enhancement like education [https://onlinelibrary.wiley.com/doi/epdf/10.1111/j.1467-8519.2010.01854.x]. We consider these points to be reasonably compelling; there is good reason to be skeptical about the likelihood that these technologies will have the precision to reliably and predictably improve moral decision making. However, for the purposes of this paper, in order to explore the ethical implications of these technologies, we are assuming some potential for these technologies to work as promised. That said, this is an in-principle argument. Without certainty that these interventions do enhance moral decision-making, the argument against saying no becomes significantly weaker.

 

Additional comments:

Some references seem to be improperly cited, and some (e.g., [13]) are of poor quality (e.g., not peer reviewed) and should be replaced with rigorously vetted systematic reviews.

Response to reviewer:

Reference [13] removed. L115-L116 deleted. All other references checked. The sources used that are not peer-reviewed are information from the DARPA website, used to describe details of neuroscience projects. The website military.com (reference 23) is another non-peer-reviewed source used in this paper. However, we have chosen to keep this source as the purpose of this reference is to show that the military is interested in this technology (provided as a media announcement), not the details of how the technology functions. The information detailing the functions of the technology has been supported by peer-reviewed papers published in rigorously vetted journals such as Frontiers in Neuroscience, Frontiers in Human Neuroscience, Brain and Cognition, and NeuroImage (see references used in the discussions on NIBS, pp. 3-4).

The discussion of autonomy (e.g., p.4) could be improved by a better differentiation of 'basic autonomy' (the right to freely choose) from the 'ideal of autonomy' (the imperative to choose wisely after a period of weighing evidence). Similarly, the discussion of objections is too perfunctory, and borders on being a 'straw man argument'.

Response to reviewer:

Given the very limited word length, what we have done here is rewrite this section. As advised, we have made our own position clearer, with some illustration by reference to the Social Intuitionist Model and a quote from Kennett and Fine from a paper of theirs that (we think) refutes, or at least adds nuance to, Haidt's view.

Autonomy is not only a complex notion, but one of the most contested areas in philosophy and ethics. We do not expect to answer any of those open questions here but draw attention to the connection between the technologies as described and autonomy. As Christman describes it, autonomy is the "idea that is generally understood to refer to the capacity to be one's own person, to live one's life according to reasons and motives that are taken as one's own and not the product of manipulative or distorting external forces" [24]. Our view on autonomy is that there is some relative equivalence between what a person does and the reasons that they have for acting.[1] This is a somewhat Kantian notion where reason and rationality play a key role in autonomy and in ethics more generally. This stands in contrast to other views, like that of Haidt's social intuitionist model, in which reasons play far less of a role than in the Kantian model [49]. However, as Kennett and Fine argue, an "examination of the interaction between automatic and controlled reflective processes in moral judgment provides some counter to scepticism about our agency and makes room for the view shared by rationalists and sophisticated sentimentalists alike that genuine moral judgments are those that are regulated or endorsed by reflection" [48] (p. 78). The important point is that if technologies can change cognitive capacities and practices, they could play a role in improving moral decision making. Our purpose is to draw attention to common elements of autonomy, and to see how they play out in relation to particular enhancement technologies when used in a military context.

In particular, the technologies that we have reviewed are all expected and intended to impact upon and improve decision making in different ways. The connection to autonomy is that improved decision-making sits in part with the notions of autonomy as increasing 'the capacity to be one's own person, to live one's life according to reasons that are taken as one's own'. By increasing capacities like memory, attention and vigilance, we suggest that these technologies are increasing the recipient's autonomy by enhancing their decision-making capacity. Moreover, insofar as these enhancements increase such decision-making while in positions of high cognitive demand and stress, like conflict, they are minimizing the 'distorting external forces'. While more can be said about the connections between increased decision-making capacity and autonomy, the point here is to show that the technologies described are hoped to have some potential to enhance autonomy.

The discussion of following commands has merit, but could be improved by a more nuanced consideration of conscientious objection. It could be the case that a soldier signs up to serve their country, and has no issues with using weapons, but would object on moral (or religious) grounds to using enhancements.

 

 

Finally, the mention of the difference between conscripted vs. freely contracted soldiers is a great point,  but underdeveloped as things stand.

Response to reviewer:

 

On the issue of conscription vs. freely contracted soldiers, we agree that this is a really interesting area, and one that we would like to do more work on. But given the word length, we cannot expand on it here.

 

Author Response File: Author Response.pdf

Reviewer 2 Report

  • A brief summary (one short paragraph) outlining the aim of the paper and its main contributions.

 

The article presents an exploration of issues regarding military use of enhancement that might affect moral decision-making capabilities and then explores practical concerns with soldiers’ autonomy and ability to refuse such enhancement (with particular reference to the subtle and not-so-subtle institutional and psychological disincentives to refuse). The authors explore BCI and noninvasive brain stimulation as potential means for enhancing attention and then take that as likewise constituting some manner of moral enhancement. The authors argue that if such enhancement can guarantee a better moral outcome vis-à-vis jus in bello frameworks then it might actually be justifiable to make such enhancement compulsory. The paper then explores practical issues to do with autonomy and soldiers’ ability to say “no” to such enhancements given the nature of the institution of the military and the power-structures and power-relations contained therein.

 

  • Broad comments highlighting areas of strength and weakness. These comments should be specific enough for authors to be able to respond.

 

The topic under consideration is extremely important. Broadly speaking, the authors have considered two kinds of issue – first, a more conceptual bioethical (moral enhancement) argument; second, a set of practical realities that undermine a soldier’s capacity to say “no” to an enhancement. The former questions are less interesting or weighty than the latter. Regarding the broader moral enhancement concerns, the authors have tackled the issues in line with much of the domain’s general standards (i.e., they have simply taken it as read that such moral enhancement will be possible, let alone efficacious, and argued from there). A nuanced look at moral decision-making would raise some questions about the connections drawn here. What is good about the paper, and what makes it worth publishing, is the treatment of the practical concerns which have been raised - these are extremely significant. Such issues have been broadly raised in other literature on the subject, but they are weighty enough to bear further engagement. Ideally, the authors would have ignored the former bioethical questions and lengthened this meditation on the practical issues. The Reviewer recommends that the authors pursue the practical issues further in later works. In conclusion, however, this is an important topic, written about in a clear and well-informed manner. It does cover some material already in the broader literature. The bioethical arguments are not necessarily as convincing as they could be (for example, it is not clear that such enhancements will substantively be able to alter moral decision making in any radical way that needs special consideration; the moral powers or moral decision-processes themselves are not radically altered by the proposed interventions and their conjectural moral enhancement side-effects). On the other hand, the practical issues raised in the paper are of such import, and the argument is sufficiently well-informed, as to justify publication.

 

  • Specific comments referring to line numbers, tables or figures. Reviewers need not comment on formatting issues that do not obscure the meaning of the paper, as these will be addressed by editors.

 

L56-249 speculative. First, it is not clear that these technologies work in the context mentioned, nor is it clear that they work to a degree that needs to be worried about (indeed, the worry is that they do not work and cause damage), nor is it clear that the supposed enhancement potential would affect moral decision-making or moral processes in a way that is morally concerning.

 

L400 onwards, vulnerability and the ability to say no is the better and most significant portion of the paper.

 

L537 “we propose that soldiers by the nature of their work in making life and death decisions, could possibly be compelled to accept an enhancement if it is certain that said enhancement would guarantee a better moral outcome in line with jus in bello and LOAC. This is not a blanket claim to all enhancements but to those that only produce a better moral decision and a better moral outcome.” This sort of language is concerning: it suggests that the authors do not really comprehend the complexities of moral decision-making. There is no enhancement that definitively enhances moral functioning – that depends on a prior agreement on what exactly “the good” is. There is nothing in the world, even in principle, that can enhance a person’s decision-making so as to guarantee a good outcome, as no one can agree what a good outcome is, even within the confines of jus in bello (all jus in bello defenses are tentative and open to critique). If the authors cannot grasp that moral decision-making involves a range of subtle and diverse powers and is just so context-related and situational that the efficacy of any kind of military moral enhancement is made very questionable, or at best very limited, then the authors should at least take care to use more tentative language throughout the related moral enhancement sections.

 

FINAL COMMENT: the paper merits publication but the Reviewer would like to see a little more tentative language in the moral enhancement section, and at least some more acknowledgement of the many valid criticisms that have been laid against the practical reality of moral enhancement (especially in relation to the sorts of interventions raised here, BCI and noninvasive stimulation).

Author Response

R2:

The article presents an exploration of issues regarding military use of enhancement that might affect moral decision-making capabilities and then explores practical concerns with soldiers’ autonomy and ability to refuse such enhancement (with particular reference to the subtle and not-so-subtle institutional and psychological disincentives to refuse). The authors explore BCI and noninvasive brain stimulation as potential means for enhancing attention and then take that as likewise constituting some manner of moral enhancement. The authors argue that if such enhancement can guarantee a better moral outcome vis-à-vis jus in bello frameworks then it might actually be justifiable to make such enhancement compulsory. The paper then explores practical issues to do with autonomy and soldiers’ ability to say “no” to such enhancements given the nature of the institution of the military and the power-structures and power-relations contained therein.

  • Broad comments highlighting areas of strength and weakness. These comments should be specific enough for authors to be able to respond.

The topic under consideration is extremely important. Broadly speaking, the authors have considered two kinds of issue – first, a more conceptual bioethical (moral enhancement) argument; second, a set of practical realities that undermine a soldier’s capacity to say “no” to an enhancement. The former questions are less interesting or weighty than the latter. Regarding the broader moral enhancement concerns, the authors have tackled the issues in line with much of the domain’s general standards (i.e., they have simply taken it as read that such moral enhancement will be possible, let alone efficacious, and argued from there). A nuanced look at moral decision-making would raise some questions about the connections drawn here. What is good about the paper, and what makes it worth publishing, is the treatment of the practical concerns which have been raised - these are extremely significant. Such issues have been broadly raised in other literature on the subject, but they are weighty enough to bear further engagement. Ideally, the authors would have ignored the former bioethical questions and lengthened this meditation on the practical issues. The Reviewer recommends that the authors pursue the practical issues further in later works. In conclusion, however, this is an important topic, written about in a clear and well-informed manner. It does cover some material already in the broader literature. The bioethical arguments are not necessarily as convincing as they could be (for example, it is not clear that such enhancements will substantively be able to alter moral decision making in any radical way that needs special consideration; the moral powers or moral decision-processes themselves are not radically altered by the proposed interventions and their conjectural moral enhancement side-effects).

Response to reviewer:

 

We believe that we have responded to this concern with our additional paragraph on the assumptions and feasibility of technological moral enhancement, added in response to R1. We add it here again for completeness.

We note here that there is an important discussion about the assumptions and feasibility of this technological moral enhancement. One general assumption is that there is some agreement on what constitutes 'good' moral decision making. Much of ethics, from one's metaethical position to one's preferred normative theories, is a series of open questions. However, we point out here that in the military ethics context, there are some generally accepted principles, like discrimination, proportionality and necessity, that must be met. We do not claim that these principles are true, but instead agree with the just war tradition that things are better, all things considered, when soldiers adhere to these principles. In terms of feasibility, Harris points out that if moral enhancement involves the reduction of morally problematic emotions like racism, he is "sceptical that we would ever have available an intervention capable of targeting aversions to the wicked rather than the good" (p. 105) [https://onlinelibrary.wiley.com/doi/epdf/10.1111/j.1467-8519.2010.01854.x]. Similarly, Dubljevic and Racine argue that "an analysis of current interventions leads to the conclusion that they are blunt instruments: any enhancement effect is unspecific to the moral domain" (p. 348) [https://onlinelibrary.wiley.com/doi/epdf/10.1111/bioe.12355]. The worry here is that the technologies that might aid in moral enhancement are so imprecise as to be discounted as serious ways to improve moral behavior, and so, on Harris' view, we should instead focus on current methods of moral enhancement like education [https://onlinelibrary.wiley.com/doi/epdf/10.1111/j.1467-8519.2010.01854.x]. We consider these points to be reasonably compelling; there is good reason to be skeptical about the likelihood that these technologies will have the precision to reliably and predictably improve moral decision making. However, for the purposes of this paper, in order to explore the ethical implications of these technologies, we are assuming some potential for these technologies to work as promised. That said, this is an in-principle argument. Without certainty that these interventions do enhance moral decision-making, the argument against saying no becomes significantly weaker.

 

On the other hand, the practical issues raised in the paper are of such import, and the argument is sufficiently well-informed as to justify publication.

  • Specific comments referring to line numbers, tables or figures. Reviewers need not comment on formatting issues that do not obscure the meaning of the paper, as these will be addressed by editors.

L56-249 speculative. First, it is not clear that these technologies work in the context mentioned, nor is it clear that they work to a degree that needs to be worried about (indeed, the worry is that they do not work and cause damage), nor is it clear that the supposed enhancement potential would affect moral decision-making or moral processes in a way that is morally concerning.

Response to reviewer:

Text amended to acknowledge studies that question the efficacy of commercially available NIBS products (L399-L402) and their ability to enhance cognitive functions to the extent that they claim to. In addition, we have also acknowledged studies that have investigated the efficacy of NIBS in memory enhancement and the impact on other cognitive functions, such as a decrease in the accuracy of tasks performed and a decrease in automated processing (footnote 11).

We have also added more nuance in the discussion of assumptions and feasibility, as covered in the paragraph above.

 

L537 “we propose that soldiers by the nature of their work in making life and death decisions, could possibly be compelled to accept an enhancement if it is certain that said enhancement would guarantee a better moral outcome in line with jus in bello and LOAC. This is not a blanket claim to all enhancements but to those that only produce a better moral decision and a better moral outcome.” This sort of language is concerning: it suggests that the authors do not really comprehend the complexities of moral decision-making. There is no enhancement that definitively enhances moral functioning – that depends on a prior agreement on what exactly “the good” is. There is nothing in the world, even in principle, that can enhance a person’s decision-making so as to guarantee a good outcome, as no one can agree what a good outcome is, even within the confines of jus in bello (all jus in bello defenses are tentative and open to critique). If the authors cannot grasp that moral decision-making involves a range of subtle and diverse powers and is just so context-related and situational that the efficacy of any kind of military moral enhancement is made very questionable, or at best very limited, then the authors should at least take care to use more tentative language throughout the related moral enhancement sections.

We have covered this issue in our paragraph above.

 

 

FINAL COMMENT: the paper merits publication but the Reviewer would like to see a little more tentative language in the moral enhancement section, and at least some more acknowledgement of the many valid criticisms that have been laid against the practical reality of moral enhancement (especially in relation to the sorts of interventions raised here, BCI and noninvasive stimulation).

Response: Text amended to include tentative language and acknowledgement of valid criticisms against moral enhancements.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

Spelling issues persist. Also, there are some errors in the references (e.g., duplication of information in Steenbergen et al.). These issues should be carefully addressed before I can advise that the paper should be accepted.

Author Response

We received the following feedback from the editors:

 

"Our (minor) comment is that the author should put the material in the
bracketed notes 1, 3, 4, 5 and 16 as language in the text."

 

We have made the changes and have uploaded the completed proofs.

Author Response File: Author Response.docx
