Special Issue "Meaningful Human Control and Autonomous Weapons Systems: Ethics, Design, and Responsible Innovation"

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 31 August 2021.

Special Issue Editors

Dr. Steven Umbrello
Guest Editor
Institute for Ethics and Emerging Technologies, University of Turin, Turin, Italy
Interests: artificial intelligence; autonomous weapons; military ethics; engineering ethics
Dr. Roman V. Yampolskiy
Guest Editor
Speed School of Engineering, University of Louisville, Louisville, KY 40292, USA
Interests: AI safety; AI security
Prof. Dr. Birgitta Dresp-Langley
Guest Editor
Centre National de la Recherche Scientifique (CNRS), ICube Lab UMR 7357 CNRS, Université de Strasbourg, F-67081 Strasbourg, France
Interests: cognitive neuroscience; brain; cognitive psychology; behavior; perceptual learning and memory; neural networks; consciousness; philosophy of artificial intelligence; principles of unsupervised learning; computing and philosophy

Special Issue Information

Dear Colleagues,

The legality and ethics of using artificial intelligence (AI) in warfare, particularly in autonomous weapons systems (AWS), continue to be hotly debated around the world. Despite the push for a ban on these types of systems, unanimous agreement remains out of reach. Much of the disagreement stems from the lack of a common understanding of what it means for these systems to be autonomous. Similarly, there is dispute over what it would mean, if it is possible at all, for humans to have meaningful control over these systems.

This Special Issue aims to contribute to this ongoing discourse by exploring the issues central to the debate on banning autonomous weapon systems. Investigators in the field are invited to contribute their original, unpublished work. Both research and review papers are welcome. Topics of interest include, but are not limited to, the following:

  • The philosophical foundations of meaningful human control;
  • Governance of AWS by multiagent networks;
  • Distributed accounts of moral responsibility;
  • Ethical principles underlying the design of AWS;
  • Design approaches for the implementation of ethical principles in AWS;
  • Critical evaluation of the arguments for/against a ban of AWS;
  • Reviews of AWS literature and articulation of directions for future research;
  • Approaches to AI control and value compliance.

Dr. Steven Umbrello
Dr. Roman V. Yampolskiy
Prof. Dr. Birgitta Dresp-Langley
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • autonomous weapons systems
  • meaningful human control
  • autonomy
  • artificial intelligence
  • warfare
  • AI safety

Published Papers (1 paper)


Research

Article
Autonomous Weapons Systems and the Contextual Nature of Hors de Combat Status
Information 2021, 12(5), 216; https://doi.org/10.3390/info12050216 - 20 May 2021
Abstract
Autonomous weapons systems (AWS), sometimes referred to as “killer robots”, are receiving ever more attention, both in public discourse and among scholars and policymakers. Much of this interest is connected to emerging ethical and legal problems linked to increasing autonomy in weapons systems, but there is a general underappreciation for the ways in which existing law might impact on these new technologies. In this paper, we argue that as AWS become more sophisticated and increasingly more capable than flesh-and-blood soldiers, it will increasingly be the case that such soldiers are “in the power” of those AWS which fight against them. This implies that such soldiers ought to be considered hors de combat, and not targeted. In arguing for this point, we draw out a broader conclusion regarding hors de combat status, namely that it must be viewed contextually, with close reference to the capabilities of combatants on both sides of any discrete engagement. Given this point, and the fact that AWS may come in many shapes and sizes, and can be made for many different missions, we argue that each particular AWS will likely need its own standard for when enemy soldiers are deemed hors de combat. We conclude by examining how these nuanced views of hors de combat status might impact on meaningful human control of AWS.

Planned Papers

The list below contains only planned manuscripts; some have not yet been received by the Editorial Office. Papers submitted to MDPI journals are subject to peer review.

Title: Examining the Impact of Differing Perceptions of Meaningful Human Control in the Emerging Autonomous Weapon System Discourse
Authors: Jai Galliott; Austin Wyatt
Affiliation: Values in Defence & Security Technology Group, University of New South Wales, and Centre for International Studies, University of Oxford; Values in Defence & Security Technology Group, University of New South Wales
Abstract: Entering the seventh year since formal discussions began under the auspices of the United Nations, the international community has largely seized on the concept of “meaningful human control” as a standard for resolving the ethical and legal issues raised by the development of autonomous weapon systems. However, this enthusiasm belies the fact that no systematic, evidence-based attempt has been made to develop measurable criteria for Meaningful Human Control, or to understand how differing cultural values will affect regional interpretations of any future international control mechanisms. This paper steps into that gap by drawing on empirical data gathered from a survey of policy experts and scholars across the Asia Pacific region. Building on a comparative analysis of the novel underlying dataset, this article presents an evaluation of how distinct perspectives on meaningful human control would inform the development of autonomous weapon systems as well as the ethico-legal frameworks developed to regulate their use.

Title: Autonomous Weapons Systems and the Contextual Nature of Hors de Combat Status
Authors: Steven Umbrello; Nathan Gabriel Wood
Affiliation: Institute for Ethics and Emerging Technologies, University of Turin, Turin, Italy; Ghent University
Abstract: Autonomous weapons systems (AWS), sometimes referred to as ‘killer robots’, are receiving ever more attention, both in public discourse and among scholars and policymakers. Much of this interest is connected with emerging ethical and legal problems linked to increasing autonomy in weapons systems, but there is a general underappreciation for the ways in which existing law might impact on these new technologies. In this paper, we argue that as AWS become more sophisticated and increasingly more capable than flesh-and-blood soldiers, it will increasingly be the case that such soldiers are “in the power” of those AWS which fight against them. This implies that such soldiers ought to be considered hors de combat, and not targeted. In arguing for this point, we draw out a broader conclusion regarding hors de combat status, namely that it must be viewed contextually, with close reference to the capabilities of combatants on both sides of any discrete engagement. Given this point, and the fact that AWS may come in many shapes and sizes, and can be made for many different missions, we argue that each particular autonomous weapons system will likely need its own standard for when enemy soldiers are deemed hors de combat. We conclude by examining how these nuanced views of hors de combat status might impact on meaningful human control of AWS.

Title: Integrating Comprehensive Human Oversight in the Glass Box Framework to Ensure Accountability: Applied to the Case of Autonomous Military Surveillance Drones
Authors: Ilse Verdiesen; Andrea Aler Tubella; Virginia Dignum
Affiliation: Faculty of Technology, Policy and Management (TBM), TU Delft
Abstract: Although there is much work on governance and attribution of accountability, there is a significant lack of methods for the operationalisation of accountability within the socio-technical layer of autonomous systems. In this paper, we aim to fill the operationalisation gap by proposing a socio-technical framework based on the Glass Box Framework to guarantee human oversight and accountability in autonomous drone deployments, showing its enforceability in the real case of autonomous military surveillance drones. By keeping a focus on accountability and human oversight as values, we align with the emphasis placed on human responsibility, while requiring a concretisation of what these principles mean for each specific application, connecting them with concrete socio-technical requirements. In addition, by constraining the framework to observable elements of the pre- and post-deployment process of autonomous military drones, we do not rely on assumptions on the internal workings of the drone nor the technical fluency of the operator.

Title: Learning from automation in targeting to better regulate autonomous weapon systems: from targeted killing and the Vietnam War to automatic mines
Authors: Joshua G. Hughes
Affiliation: Lancaster University
Abstract: Autonomous weapon systems (AWS) are an emerging technology that is not currently subject to any specific regulation. However, we can look back at the regulation of precursor technologies and processes in the history of regulating warfare and weapons to find good examples that can help us understand how AWS should be regulated going forward. This article examines the nature of automation in AWS and uses it as a starting point for understanding similar types of automation in precursor examples. Three examples of automation in targeting are considered and used to suggest best practices for how to understand AWS and apply legal rules to them: automation in target selection during recent targeted killing operations; automation in target engagement during secret operations in the Vietnam War; and automation in the absence of human involvement, in the form of automatic mines. Conclusions drawn from each of these examples are used to determine best practices for interpreting legal rules when applying them to AWS in the future. From these best practices, an outline of a legally required minimum level of human control is also developed.
