5.2. What Are the Problems with VSD in the Context of LAWS?
There are certainly well-reasoned arguments in favour of implementing value sensitive design as a tool for ensuring that future autonomous systems are limited in their potential negative impacts and remain under sufficient levels of meaningful human control regardless of operational and doctrinal decisions (which are generally seen as the domain of individual militaries). Unfortunately, this approach is not without its demerits, and we must now turn to those of particular relevance in the context of increasingly autonomous systems and military applications of artificial intelligence.
The first issue with relying on value sensitive design in this context is that it would require that one commits to the underlying assumption that “certain values are universally held, and that this fact can provide normative direction in design” [
20]. While VSD acknowledges that culture plays a role, that role is limited to influencing how these values “play out” in a given time period: although some values are culturally specific, there are also universally applicable factors upon which the methodology can rest [
21]. Those values that can be said to be universally present across cultures are outnumbered by those that are either absent from, or interpreted differently by, different cultures, even within a single geographic region. The problem with a “universalist” approach therefore has less to do with whether some globally accepted values exist alongside more culturally relative ones than with the risk that policy-makers and scholars will assume that engineers, designers or even end-users will act in a particular way [
22], when in fact holding the same “value” on paper may translate into very different design decisions once it has passed through the lens of culturally distinct normative frameworks. This leads to another potential pitfall in the case of autonomous weapon systems.
Assuming a universally applicable set of values as the starting point for international regulation of an emerging technology presents an additional risk: actors may, whether innocently or maliciously, claim to have designed or procured systems that meet a given value (for example, accountability) while applying standards that differ from those of another state, or even from those of the designer in the case of a purchased system. A related difficulty arises when drafting the underlying list of values to be considered by a development team: who should decide which values are considered, and how should value prioritisation decisions be made? If those questions have different answers depending on, for example, cultural, organisational, legal or doctrinal factors, then we cannot be certain that, even if the international community were to agree on a universal set of values to guide autonomous system development, this would result in universal design decisions.
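The divergence described above can be made concrete with a deliberately simplified sketch. Everything in it is hypothetical: the state labels, the parameters, and the mapping are invented for illustration, and no real system or national standard is being described. The point is only that two design teams can both truthfully claim "accountability by design" while producing incompatible engineering requirements:

```python
# Illustrative only: hypothetical parameters showing how one declared
# value ("accountability") can yield divergent design requirements
# once filtered through different normative frameworks.
from dataclasses import dataclass


@dataclass
class AccountabilityRequirement:
    """A concrete engineering interpretation of the abstract value."""
    log_every_engagement: bool      # must every weapon release be logged?
    human_abort_required: bool      # must an operator be able to abort?
    max_unsupervised_minutes: int   # how long may the system act alone?


def interpret_accountability(framework: str) -> AccountabilityRequirement:
    """Map the same declared value onto framework-specific requirements.

    The frameworks and numbers here are invented for illustration.
    """
    if framework == "state_A":
        # Continuous supervision: accountability read as real-time control.
        return AccountabilityRequirement(True, True, 0)
    if framework == "state_B":
        # Post-hoc review: accountability read as auditability after the fact.
        return AccountabilityRequirement(True, False, 60)
    raise ValueError(f"unknown framework: {framework}")


a = interpret_accountability("state_A")
b = interpret_accountability("state_B")
# Both "satisfy accountability" on paper, yet the systems are not
# interchangeable and would fail each other's acceptance criteria.
print(a == b)  # False
```

The sketch mirrors the verification problem raised above: without an external, agreed standard, each party's claim is checkable only against its own mapping.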
Given the centrality of identifying, analysing and prioritising values to the concept of value sensitive design, it is particularly concerning that, after seven years of formal discussions, there remains no agreement on the finer details of key concepts in this space [
1]. Despite ostensibly being the principal venue for such discussions, the ongoing Group of Governmental Experts on LAWS process, organised under the auspices of the UN and the Convention on Certain Conventional Weapons, has yet to yield a universal definition of autonomous weapon systems, much less technical standards to guide engineering teams. This absence of commonly agreed definitions, technical standards or governance measures reflects the difficulty that is likely to characterise any attempt to develop a universalist value set capable of guiding developers who wish to integrate positive values into autonomous systems.
The closest the process has come to reaching a normative framework was the adoption of the eleven Guiding Principles affirmed by the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems in September 2020 [
23]. The most relevant of these principles to this paper are:
“Human responsibility for decisions on the use of weapon systems must be maintained since accountability cannot be transferred to machines. This should be considered across the entire life cycle of the weapons system”;
“Accountability for developing, deploying and using any emerging weapons system in the framework of the CCW must be ensured in accordance with applicable international law, including through the operation of such systems within a responsible chain of human command and control”;
“Risk assessments and mitigation measures should be part of the design, development, testing and deployment cycle of emerging technologies in any weapons systems”;
“Discussions and any potential policy measures taken within the context of the CCW should not hamper progress in or access to peaceful uses of intelligent autonomous technologies”.
Although these principles are neither binding nor specific enough to build a domestic legislative framework upon, they do provide valuable guidance for those interested in incorporating VSD into autonomous weapon systems. The first salient point to draw from these principles is that the GGE clearly prioritises maintaining a sufficient level of control over autonomous weapon systems that human accountability and responsibility can be preserved. That this requirement is reflected so strongly in the principles suggests that the GGE delegations, and the non-governmental representatives, remain at least somewhat optimistic both that a compromise short of a pre-emptive ban can be reached, one requiring future systems to retain a sufficient level of meaningful human control, and that an objectively assessable standard can be generated for determining whether a given system meets that requirement. This is certainly an area where incorporating a value sensitive design methodology into autonomous system development and conceptualisation could have a significant impact on translating the GGE’s guiding principles into a technically and diplomatically feasible set of regulations.
We can also see in these principles a commitment to the idea that risk assessment, open communication, and cross-disciplinary consideration of technical and policy challenges should be incorporated into every stage of the conceptualisation, design, testing and development cycle. From this, we can logically infer that consideration of risks, responses, and design decisions would also be expected in the processes by which militaries select, procure, and incorporate these systems into their arsenals. The tripartite, iterative methodology of value sensitive design would appear to suit these requirements quite well, particularly with its insistence on stakeholder values guiding, and in turn being shaped by, technically feasible design choices. However, these principles also highlight a problem with applying a universalist methodology: who should decide which stakeholders should, and should not, be heard in the process, and what weight should be given to each value in the design process? Arguably more important still is the question of who should decide the order in which values are prioritised, or whether a state’s declaration that it considered values in the design of an autonomous system meets or fails some external standard for measuring the legitimacy of that process. That being said, VSD is conceptualised as an iterative methodology, which invites broader thinking. We could, for example, consider the potential interaction between VSD as a guiding approach that informs, and is informed by, governing bodies, which could make the approach more flexible and capable of evolving alongside shifting policy positions and values among principal stakeholder groups.
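The tripartite, iterative structure referred to above can be rendered schematically. The following is a toy sketch, not an implementation of VSD as its originators specify it: the data structures, ranking rule and stakeholder names are all invented, and a real application would loop repeatedly, with each investigation reshaping the inputs to the others.

```python
# Toy sketch of VSD's tripartite structure: conceptual, empirical and
# technical investigations feeding into one another. All names, data
# and the voting rule are invented for illustration.

def conceptual_investigation(stakeholders):
    """Identify and define the values at stake (here: simply collect them)."""
    return sorted({v for s in stakeholders for v in s["values"]})

def empirical_investigation(values, feedback):
    """Rank values using stakeholder feedback (here: simple vote counts)."""
    return sorted(values, key=lambda v: -feedback.get(v, 0))

def technical_investigation(prioritised):
    """Translate the top-priority values into candidate design constraints."""
    return [f"design constraint for '{v}'" for v in prioritised[:2]]

stakeholders = [
    {"name": "operators", "values": ["control", "reliability"]},
    {"name": "lawyers", "values": ["accountability", "control"]},
]
feedback = {"control": 2, "accountability": 1, "reliability": 1}

# One pass of the loop; in practice the resulting constraints would be
# tested, stakeholder feedback revised, and the cycle repeated.
values = conceptual_investigation(stakeholders)
prioritised = empirical_investigation(values, feedback)
constraints = technical_investigation(prioritised)
print(constraints)
```

Even this toy version makes the universalist problem visible: the output depends entirely on who is admitted to `stakeholders` and how `feedback` is weighted, which is precisely the question the text argues has no agreed answer.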
Finally, the guiding principles are clear that the ongoing discussions of a ban under international law should not interfere with the development of civilian technologies. This is particularly problematic given the inherently dual-use nature of both robotics research and artificial intelligence training methods. The question of how to balance protecting the capacity of researchers and corporations to pursue legitimate civilian innovations in these spaces and the desire to limit the potential for autonomous weapon system development or proliferation is both complicated and unresolved. In the absence of some clear way of delineating between the two, additional restrictions on autonomous systems research and continued pressure on those working on research with potential military applications (but not direct weapon systems) will continue to have a chilling effect, particularly among junior researchers and corporate funders. Unfortunately, value sensitive design would not, by itself, resolve the issue, although incorporating transparency and cross-disciplinary cooperation into the research and design processes for AI-related research would be a valuable first step.
Although most state delegations at the most recent meeting of the GGE on LAWS (August 2021 at the time of writing) agreed to support these guiding principles, a closer examination of submissions to meetings of the group, as well as the broader literature, reveals deep divisions on the topic of autonomous weapon systems. Somewhat expectedly, the major division is between those state delegations that have publicly supported the establishment of a strong legal regulatory regime (of which a smaller number support a “ban” in the traditional sense), those that have agreed to participate in the CCW GGE process but stopped short of strong support for additional regulations, and those that have, for various reasons, pushed back against a ban, acted disruptively, or put forward support based on unworkable understandings of “autonomous weapon systems”. In August 2020, Human Rights Watch [
24] noted that, of the 97 countries that had publicly commented on the question of autonomous weapon systems, 30 had expressed support for a ban. These states were Algeria, Argentina, Austria, Bolivia, Brazil, Chile, China (issues with this inclusion are discussed later in this article), Colombia, Costa Rica, Cuba, Djibouti, Ecuador, El Salvador, Egypt, Ghana, Guatemala, the Holy See, Iraq, Jordan, Mexico, Morocco, Namibia, Nicaragua, Pakistan, Panama, Peru, the State of Palestine, Uganda, Venezuela, and Zimbabwe. Reaching Critical Will maintains a very helpful list of submissions to the GGE in which countries have announced their positions on a legal framework, some of which have outright called for a developmental or deployment-based ban. States that took the view that a ban on fully autonomous weapons is necessary at the 2020 and 2021 meetings included Brazil, Chile, Mexico [
25], Canada, Costa Rica [
25], and Venezuela [
26]. Somewhat unsurprisingly, none of these are traditional great power states, nor are they known developers of increasingly autonomous military technologies. A less cynical perspective would note that their support for a ban has been couched in humanitarian terms, following the lead of the Vatican in opposing the transfer of the decision to end human life to a non-human entity on ethical as well as geostrategic grounds.
The second group, those states that have committed to supporting and participating in negotiations within the GGE without calling for a blanket ban on autonomous weapon systems, is also significant. Most support some level of additional legal or normative restrictive frameworks but stop short of supporting a pre-emptive blanket ban. For example, the Philippines delegation has called for a normative and operational framework and legally binding restrictions under the Convention on Certain Conventional Weapons but differentiates these calls from its support for a ban on anti-personnel LAWS [
27]. Again, a failure to enthusiastically support calls for a blanket ban on “autonomous weapon systems” is unlikely to be due solely to either geostrategic or ethical concerns. For example, the Finnish delegation in 2021 argued that, while a pre-emptive ban could be imposed on LAWS “that operate in a truly independent manner or which perform tasks with no limitations and/or without a tasking by a human” [
28], there remains a significant grey area with emerging technologies and the international community should focus on building processes for separating out legitimate applications so that innovation is not stymied by unrealistic oversight measures [
28]. It is noteworthy that states, such as Australia, have instead put forward arguments in favour of more moderate control mechanisms and stated that discussion of “a prevention or ban treaty on LAWS is premature” [
29]. It is also worth noting that applications of AI to military logistics have generally been deemed separate from weapon systems and targeting tools in the international discussions. While disagreements remain between states on the specifics of a definition for autonomous weapon systems, published definitions would not cover such back-end integrations of AI in any pre-emptive ban. Finally, some states, including France and Germany, have put forward suggestions for control regimes that build on or incorporate existing compliance mechanisms.
It is worth explicitly noting here that, despite appearing as a supporting group on the Campaign to Stop Killer Robots list, the Non-Aligned Movement’s submissions to the GGE on LAWS are indicative of a more complex position. With more than 120 member states and a well-known commitment to non-interference, it is difficult to rely on these statements as comprehensively reflective of individual members’ positions. Indeed, the 2018 statement from the Non-Aligned Movement re-emphasised the position adopted at the XVII Summit of the NAM and the 2018 NAM Ministerial Meeting (the Baku Declaration). However, the 206-page XVII Summit Final Declaration contains only a single paragraph that refers to LAWS, while the Baku Declaration does not mention autonomous weapon systems at all [
30]. Taken together, it is difficult to agree with the Campaign that the NAM supports a ban rather than further discussions, and a closer examination of statements made by member states suggests that the NAM is a good case study of a moderate faction within the GGE debate.
The final “faction”, to continue the metaphor, in the ongoing GGE on LAWS discussions comprises those, typically great power, states that have sought to oppose, disrupt, or block the group’s deliberations, and that have publicly stated scepticism regarding the need for, and utility of, a ban. Chief amongst these is, of course, the United States, which has an unfortunate history of reluctance to compromise its pursuit of a military technology at the urging of the international community; examples include its continued refusal to sign the Ottawa Mine Ban Treaty, its recent walk-back of a domestic policy disallowing the use of anti-personnel mines [
31], and its refusal to ratify the UN Treaty on the Prohibition of Nuclear Weapons [
32]. Given the centrality of military applications of artificial intelligence and increasingly autonomous weapon systems to the Third Offset Strategy, it is hardly surprising that the United States would not react with great support to efforts to introduce binding international legal restrictions on the perceived basis of their future force. For similar reasons, one may have been surprised by China’s 2018 proclamation that it would support a pre-emptive ban on lethal autonomous weapon systems [
1]. However, once one reads the submission carefully, it becomes clear that the Chinese delegation is proclaiming support for a ban on a surprisingly narrow category of systems, one that would effectively overlook most systems currently proposed by states. For example, their definition of LAWS would discount systems intended principally for anti-vehicle or anti-materiel roles, as well as those that retain the capacity for any intervention by a human supervisor (including the capacity to manually abort or shut down the system during operations) [
1]. Relying on this support would lead to a ban that would only cover a fraction of what opponents of autonomous weapon systems are actually proposing, and the argument that this was a genuine contribution in favour of the pro-ban camp must be considered sceptically by analysts, especially given the centrality of autonomous weapon systems and military AI to “intelligentized warfare” and the expectation among Chinese military planners that autonomous systems will be central to the next Revolution in Military Affairs [
33]. The last great power that could be grouped into this final camp is Russia. Given its public and significant investments in military robotics in recent years, it is hardly surprising that the Russian Federation would also take a dim view of the proceedings, and the Russian delegation has repeatedly called for the GGE on LAWS to focus first on developing a universally agreeable definition of autonomous weapon systems, while simultaneously putting forward a definition that would exclude systems that retain some engagement with a human operator (the delegation explicitly supports excluding UAVs from the debate) [
34], and that is limited to physical (rather than cyber) platforms, while arguing that final responsibility for assessing whether systems meet a definition of “autonomous” and are used responsibly should remain with individual states [
34]. Russia has also argued (as has the United States and others) that existing international law is adequate for regulating the use of autonomous weapon systems [
35]. While these are genuine and important issues, Russia’s pattern of disruptive and provocative behaviour in recent years, particularly in cyberspace, suggests an intention to slow the CCW process before autonomous weapon systems become a reality. It is worth noting that each of these three states has made arguments in support of autonomous weapon systems on ethical grounds, arguing that the technology may be more effective than human operators in certain situations, may assist in the protection of civilians, and would lessen the physical and psychological toll on soldiers.
Granted, some of these divisions will have arisen from competing geostrategic or commercial interests; however, the continued debate over the finer details of virtually every term in the autonomous weapon system lexicon does not bode well for the argument that a universalist set of values could be codified. This is further complicated by the fact that the emergence of autonomous weapon systems is occurring across a competitive, multipolar series of development efforts. Therefore, while we can utilise VSD to identify differences between how individual actors plan to implement meaningful human control and to generate a better understanding of where common ground can be found, we cannot operate under the premise that there is a single design process upon which we can impose a VSD conceptual framework.
Implementing a globally accepted approach to value-sensitive autonomous systems would require a variety of trusted actors to represent different stakeholder groups (which creates further issues around how many global militaries and legal systems can be represented in a manageable stakeholder set), publicly verifiable access to in-development or proposed autonomous weapon systems (as well as the military officers guiding the development of accompanying strategic requirements), and universally agreed and sufficiently specific technical standards that could be applied. At the present time, however, attaining all three of these requirements appears impossible, which represents a serious barrier to implementing VSD on a sufficient scale with uniformity. This is particularly unfortunate given the fact that value sensitive design is built around the concept that one can identify, empirically analyse and prioritise certain values over others in the design of the given technology.
Furthermore, if the international community cannot show a sufficient level of uniformity on which values should apply to autonomous systems, as well as an understanding of how a given military would prioritise those values, then the approach will not be effective. While most advanced western militaries have spent the past two decades promoting the concepts of interoperability and coalition operations, differences remain even between close military allies around operational culture, doctrine, interpretations of international law, and legitimate uses of force. The role of distinct military and strategic cultures in how different states integrate a given technology into their arsenals is well documented in the military innovation literature; arguably the most comprehensive recent publication on this issue is Adamsky’s The Culture of Military Innovation [36], although other useful discussions have been written by Raska [37], Horowitz [38], and Goldman and Leslie [39]. Therefore, even if one were to limit the discussion to close military allies that share similar cultures and generally hold similar norms of military behaviour, for example, Australia, the United States and the United Kingdom, differences would remain in how soldiers are trained to consider the use of force, in the legal requirements for testing and adopting new weapon systems, and in distinct geostrategic goals and operational taskings, all of which would complicate the roll-out of an autonomous weapon system developed purely by the United States. Obviously, these challenges grow when one considers the impact of a system being developed by Russia or China, both of whom have well-documented track records of exporting advanced weapon systems to nations outside their traditional spheres of influence.
The distinctions between how autonomous systems are viewed by states, and the demonstrated market for remotely operated uninhabited systems, tie directly into a second problematic aspect of value sensitive design: its assumption that engineering and design decisions can substantially, if not completely, remove the possibility that a system will be used in an unintentionally harmful or nefarious manner. Regardless of whether this assumption is generally defensible, which the authors believe it is, it is particularly problematic in the case of autonomous weapon systems. This comes down to the fact that autonomous systems would lack the capacity to interpret contextual information and to apply pre-programmed frameworks reflective of norms to unforeseen circumstances. In effect, the argument goes, the capacity of autonomous weapons to act in an “ethical” manner would depend solely on the capacity of designers to foresee potential scenarios and provide hard-coded guidance prior to deployment. This is further exacerbated by the absence of well-trained and responsible human operators, who would ideally be able to intervene in the event of failure.
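The limitation described above, that “ethical” behaviour would hinge on designers foreseeing scenarios in advance, can be illustrated with a toy rule table. The scenario labels and rules below are entirely invented and grossly simplified; the sketch shows only the structural point that hard-coded guidance is silent on anything outside its enumerated cases:

```python
# Toy illustration: a hard-coded rule table can only answer for the
# situations its designers foresaw. All scenario names are invented.

PRE_PROGRAMMED_RULES = {
    "armed_combatant_in_open_terrain": "engage_permitted",
    "marked_medical_vehicle": "engage_prohibited",
}

def decide(scenario: str) -> str:
    """Return hard-coded guidance, or fall back when context is novel.

    The fallback is itself a design choice made in advance; the system
    has no capacity to reason about *why* a scenario is unforeseen.
    """
    return PRE_PROGRAMMED_RULES.get(scenario, "abort_and_refer_to_human")

print(decide("marked_medical_vehicle"))        # engage_prohibited
print(decide("combatant_using_civilian_bus"))  # abort_and_refer_to_human
```

Note that even the conservative fallback shown here presupposes exactly what the text identifies as missing: a well-trained, responsible human operator available to take the referral.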
Finally, we should consider the fact that, in the absence of universally agreed technical standards or a common definition, there is an open question as to whether the quality standard deemed acceptable by designing engineers in one state would be accepted by those in another. Opening the internal decision logic or granting access to core code may somewhat alleviate this disconnect; however, it would require militaries to be willing to allow external reviewers access to sensitive data, and could further expose future autonomous systems to the risk of interference by malicious actors.
However, even if we put aside the technical feasibility and diplomatic challenges involved in relying on a series of separate engineering teams to limit the potential harms of autonomous weapon systems, and accept the premise, core to value sensitive design, that engineers can prevent most misuses of a technology and implement ethical uses through design choices, the question remains: whose values are we asking them to impose on autonomous weapon system design?