Article
Peer-Review Record

Autonomous Weapons Systems and the Contextual Nature of Hors de Combat Status

Information 2021, 12(5), 216; https://doi.org/10.3390/info12050216
by Steven Umbrello and Nathan Gabriel Wood
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 18 March 2021 / Revised: 17 May 2021 / Accepted: 18 May 2021 / Published: 20 May 2021

Round 1

Reviewer 1 Report

I have many doubts about the arguments used in this article. With more than 20 years of military life and some combat missions around the world, my perception is that AWS in their current state are hardly more efficient than humans. AWS may be useful in environments with high analytical-cognitive complexity, but combat situations also demand social-emotional capacity. And in the social-emotional field, machines have very limited capacities, even when enabled with artificial intelligence. So, my question is, how can the "hearts and minds" of the people in Afghanistan and/or Iraq be "conquered" with machines?

Moreover, I agree with the authors to the extent that the debate on the ethics and legality of the design and implementation of AWS is necessary. But I am also skeptical about the scientific rigor of the article, given that a section on research methodology is missing. The discussion is naturally relevant for a broader public, but its scientific character must be demonstrated through a clearly articulated methodological process.

Author Response

Reviewer Comment 1: 

I have many doubts about the arguments used in this article. With more than 20 years of military life and some combat missions around the world, my perception is that AWS in their current state are hardly more efficient than humans. AWS may be useful in environments with high analytical-cognitive complexity, but combat situations also demand social-emotional capacity. And in the social-emotional field, machines have very limited capacities, even when enabled with artificial intelligence. So, my question is, how can the "hearts and minds" of the people in Afghanistan and/or Iraq be "conquered" with machines?

Response 1: We do not argue to the contrary. Our point is that AWS will feature on the battlefield very soon, and in fact are already in use (under certain definitions of AWS), and thus it is important that we be aware of how our regulatory regimes may already control AWS, and where improvements might be needed. In the paper, we added a short passage at the end of the introduction where we make this point explicit, heading off this objection from the start.

 

Reviewer Comment 2: 

Moreover, I agree with the authors to the extent that the debate on the ethics and legality of the design and implementation of AWS is necessary. But I am also skeptical about the scientific rigor of the article, given that a section on research methodology is missing. The discussion is naturally relevant for a broader public, but its scientific character must be demonstrated through a clearly articulated methodological process.

Response 2: It is unclear to us precisely what the reviewer is asking for here. It is true that we do not present scientific results, but this is not a scientific paper. The paper is about morality and law (in keeping with the scope of the special issue to which it has been submitted), and within those disciplines our methodology is sound: (i) on the legal side, we utilize current legal doctrine, its supplementary materials, and current legal scholarship, and (ii) on the philosophical side, we utilize current moral theories and the standard methods of analytic philosophy. In the paper we do not address this specifically (mainly because it is beside the point of our paper), but we did add a short explanation at the end of the introduction which makes clear what the methods of our paper are (legal interpretation and analytic philosophical argumentation), highlighting that these are standard in both law and philosophy.

Reviewer 2 Report

This manuscript aims to show that "the law demands a contextualized appraisal of whether or not an enemy is in fact hors de combat". Referring to the official Commentary on AP I, the authors defend an interesting (although controversial) idea that combatants should be treated as hors de combat if they cannot do anything to harm their enemies. The main problem I find in this argument is that it is based on highly idealized hypothetical examples (e.g., "Tanker") without taking into account the different types of uncertainty in war. It is easy to imagine that in cases such as "Tanker" the Coalition officers cannot be sure that the "Iraqi soldiers have small arms", and out of precaution assume that they may have heavy arms somewhere as well. The same concern applies to AWS and "High-Value Target": how is it even possible for a drone to be sure that the offensive capacity of some enemy combatants against it is functionally irrelevant? My point is particularly relevant in light of the final remarks that "military commanders must retain control over AWS' general targeting behavior". Therefore, I believe that the manuscript would improve if the authors discussed some more nuanced cases and referred to recent works that highlight different types of uncertainty during combat, e.g., Adil Ahmad Haque (2017), Law and Morality at War, Oxford University Press; Tomasz Żuradzki (2016), Meta-Reasoning in Making Moral Decisions under Normative Uncertainty, in: Argumentation and Reasoned Action, London: College Publications. They may also refer to the use of heuristics during wars, e.g., Maciej Zając (2019), Defeating Ignorance – Ius ad Bellum Heuristics for Modern Professional Soldiers, Diametros 16 (62), 78–94.

Author Response

Reviewer Comment 1:

The main problem I find in this argument is that it is based on highly idealized hypothetical examples (e.g., "Tanker") without taking into account the different types of uncertainty in war. It is easy to imagine that in cases such as "Tanker" the Coalition officers cannot be sure that the "Iraqi soldiers have small arms", and out of precaution assume that they may have heavy arms somewhere as well. The same concern applies to AWS and "High-Value Target": how is it even possible for a drone to be sure that the offensive capacity of some enemy combatants against it is functionally irrelevant?

Response 1: This is an interesting point and something we would do well to be clearer about. Our arguments are conditional on there being relative certainty that the enemy cannot pose a threat (if that assurance is not present for combatants, they may indeed use greater force than might *objectively* be necessary). We agree that this aspect of our argument needed to be made more explicit.

In the paper, we made minor amendments to clarify this point, and added an explicit caveat that *if* the tanker is not sure the enemy is powerless, *then* they may indeed use significant force against them. We also added, however, that there is a large range of cases in modern war where that certainty can be achieved, and achieved long before combat operations even begin. We present the case of UNOSOM II in Somalia in 1993. In that operation, U.S. Gen. William Garrison had requested M1 Abrams tanks, in large part because these would have virtually guaranteed ground combat superiority, as the Somali militias had no effective counter to heavy armor. Had the U.S. task force had access to Abrams tanks, these units acting alone would have known with relative certainty that their enemies were powerless against them (though not powerless against civilians in the area). There are similar cases across many real conflicts, and there will continue to be cases where one side of a conflict knows at the outset that (some of) their units cannot be harmed by the enemy.

 

Reviewer Comment 2: 

Therefore, I believe that the manuscript would improve if the authors discussed some more nuanced cases and referred to recent works that highlight different types of uncertainty during combat, e.g., Adil Ahmad Haque (2017), Law and Morality at War, Oxford University Press; Tomasz Żuradzki (2016), Meta-Reasoning in Making Moral Decisions under Normative Uncertainty, in: Argumentation and Reasoned Action, London: College Publications. They may also refer to the use of heuristics during wars, e.g., Maciej Zając (2019), Defeating Ignorance – Ius ad Bellum Heuristics for Modern Professional Soldiers, Diametros 16 (62), 78–94.

Response 2: After adding the clarification about (un)certainty, we added a footnote referencing both Haque and Zając as good recent developments regarding how uncertainty may affect moral reasoning and evaluations in war.

Reviewer 3 Report

The article is a successful attempt to design a regulatory framework for the hors de combat figure in light of the massive introduction of autonomous weapons systems (AWS). According to the authors, the introduction of robotic weapon systems (which can be controlled at various levels by human operators and/or can take tactical decisions autonomously), viewed in light of the most important norms of international law relating to the law of war, should lead to a surprisingly counterintuitive conclusion: the rise of so-called killer robots on battlefields means that most enemy fighters should be considered hors de combat. To my knowledge, this is a brilliant conclusion that I have not found in the rich literature on the subject. The authors are not satisfied with merely drawing this surprising conclusion, but make it the starting point for suggesting a contextual reconsideration of the normative meaning of hors de combat. The article is brilliant and shows a deep knowledge of the regulatory, technical and juridical aspects at stake. My advice is to publish it without revision.

Author Response

We thank this reviewer for providing encouraging comments and for recommending publication without revisions. We have nonetheless made revisions in light of the other reviewers' comments and hope that the paper is even better than the original manuscript. 

Round 2

Reviewer 1 Report

I think the authors answered my questions. I understand that autonomous weapon systems (AWS) are at an early stage and more scientific studies are likely to clarify the issue. The article is relevant to the extent that it brings a broader discussion, although further studies are needed to corroborate some arguments. After reading the authors' comments, I now believe the research has some merit, as it is a starting point for future investigations. My recommendation is to approve without further changes.
