Meaningful Human Control and Autonomous Weapons Systems: Ethics, Design, and Responsible Innovation

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: closed (31 March 2022) | Viewed by 16393

Special Issue Editors


Dr. Steven Umbrello
Guest Editor
Institute for Ethics and Emerging Technologies, University of Turin, Turin, Italy
Interests: artificial intelligence; autonomous weapons; military ethics; engineering ethics

Dr. Roman V. Yampolskiy
Guest Editor
Speed School of Engineering, University of Louisville, Louisville, KY 40292, USA
Interests: AI safety; AI security

Special Issue Information

Dear Colleagues,

The legality and ethics of using artificial intelligence (AI) technology in warfare, particularly the deployment of autonomous weapons systems (AWS), remain the subject of heated global debate. Despite the push for a ban on these types of systems, international consensus remains out of reach. Much of the disagreement stems from the lack of a shared understanding of what it means for such systems to be autonomous. Similarly, it is disputed what it would mean, if it is possible at all, for humans to have meaningful control over these systems.

This Special Issue aims to contribute to this ongoing discourse by exploring the issues central to the debate on a ban of autonomous weapons systems. Researchers in the field are invited to contribute original, unpublished work. Both research and review papers are welcome. Topics of interest include, but are not limited to, the following:

  • The philosophical foundations of meaningful human control;
  • Governance of AWS by multiagent networks;
  • Distributed accounts of moral responsibility;
  • Ethical principles underlying the design of AWS;
  • Design approaches for the implementation of ethical principles in AWS;
  • Critical evaluation of the arguments for/against a ban of AWS;
  • Reviews of AWS literature and articulation of directions for future research;
  • Approaches to AI control and value compliance.

Dr. Steven Umbrello
Dr. Roman V. Yampolskiy
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • autonomous weapons systems
  • meaningful human control
  • autonomy
  • artificial intelligence
  • warfare
  • AI safety

Published Papers (4 papers)


Editorial

Jump to: Research

2 pages, 169 KiB  
Editorial
Editorial for the Special Issue on Meaningful Human Control and Autonomous Weapons Systems
by Steven Umbrello
Information 2022, 13(5), 215; https://doi.org/10.3390/info13050215 - 21 Apr 2022
Cited by 1 | Viewed by 1679
Abstract
Global discussions on the legality and ethics of using artificial intelligence (AI) technology in warfare, particularly the use of autonomous weapons (AWS), continue to be hotly debated [...]

Research

Jump to: Editorial

21 pages, 279 KiB  
Article
An Empirical Examination of the Impact of Cross-Cultural Perspectives on Value Sensitive Design for Autonomous Systems
by Austin Wyatt and Jai Galliott
Information 2021, 12(12), 527; https://doi.org/10.3390/info12120527 - 17 Dec 2021
Cited by 5 | Viewed by 2584
Abstract
The removal of direct human involvement from the decision to apply lethal force is at the core of the controversy surrounding autonomous weapon systems, as well as broader applications of artificial intelligence and related technologies to warfare. Far from purely a technical question of whether it is possible to remove soldiers from the ‘pointy end’ of combat, the emergence of autonomous weapon systems raises a range of serious ethical, legal, and practical challenges that remain largely unresolved by the international community. The international community has seized on the concept of ‘meaningful human control’. Meeting this standard will require doctrinal and operational, as well as technical, responses at the design stage. This paper focuses on the latter, considering how value sensitive design could assist in ensuring that autonomous systems remain under the meaningful control of humans. However, this article will also challenge the tendency to assume a universalist perspective when discussing value sensitive design. By drawing on previously unpublished quantitative data, this paper will critically examine how perspectives of key ethical considerations, including conceptions of meaningful human control, differ among policymakers and scholars in the Asia Pacific. Based on this analysis, this paper calls for the development of a more culturally inclusive form of value sensitive design and puts forward the basis of an empirically-based normative framework for guiding designers of autonomous systems.
13 pages, 505 KiB  
Article
Integrating Comprehensive Human Oversight in Drone Deployment: A Conceptual Framework Applied to the Case of Military Surveillance Drones
by Ilse Verdiesen, Andrea Aler Tubella and Virginia Dignum
Information 2021, 12(9), 385; https://doi.org/10.3390/info12090385 - 21 Sep 2021
Cited by 9 | Viewed by 3690
Abstract
Accountability is a value often mentioned in the debate on intelligent systems and their increased pervasiveness in our society. When focusing specifically on autonomous systems, a critical gap emerges: although there is much work on governance and attribution of accountability, there is a significant lack of methods for the operationalisation of accountability within the socio-technical layer of autonomous systems. In the case of autonomous unmanned aerial vehicles, or drones, the critical question of how to maintain accountability as they undertake fully autonomous flights becomes increasingly important as their uses multiply in both the commercial and military fields. In this paper, we aim to fill the operationalisation gap by proposing a socio-technical framework to guarantee human oversight and accountability in drone deployments, showing its enforceability in the real case of military surveillance drones. By keeping a focus on accountability and human oversight as values, we align with the emphasis placed on human responsibility, while requiring a concretisation of what these principles mean for each specific application, connecting them with concrete socio-technical requirements. In addition, by constraining the framework to observable elements of pre- and post-deployment, we do not rely on assumptions made on the internal workings of the drone nor the technical fluency of the operator.

11 pages, 227 KiB  
Article
Autonomous Weapons Systems and the Contextual Nature of Hors de Combat Status
by Steven Umbrello and Nathan Gabriel Wood
Information 2021, 12(5), 216; https://doi.org/10.3390/info12050216 - 20 May 2021
Cited by 7 | Viewed by 4064
Abstract
Autonomous weapons systems (AWS), sometimes referred to as “killer robots”, are receiving ever more attention, both in public discourse as well as by scholars and policymakers. Much of this interest is connected to emerging ethical and legal problems linked to increasing autonomy in weapons systems, but there is a general underappreciation for the ways in which existing law might impact on these new technologies. In this paper, we argue that as AWS become more sophisticated and increasingly more capable than flesh-and-blood soldiers, it will increasingly be the case that such soldiers are “in the power” of those AWS which fight against them. This implies that such soldiers ought to be considered hors de combat, and not targeted. In arguing for this point, we draw out a broader conclusion regarding hors de combat status, namely that it must be viewed contextually, with close reference to the capabilities of combatants on both sides of any discrete engagement. Given this point, and the fact that AWS may come in many shapes and sizes, and can be made for many different missions, we argue that each particular AWS will likely need its own standard for when enemy soldiers are deemed hors de combat. We conclude by examining how these nuanced views of hors de combat status might impact on meaningful human control of AWS.