Special Issue "Artificial Superintelligence: Coordination & Strategy"

A special issue of Big Data and Cognitive Computing (ISSN 2504-2289).

Deadline for manuscript submissions: 31 May 2019

Special Issue Editors

Guest Editor
Dr. Roman V. Yampolskiy

Computer Engineering and Computer Science Department, University of Louisville, Louisville, KY 40292 USA
Phone: +1 502 852 3624
Interests: biometrics; forensics; AI; genetic algorithms; games; pattern recognition; security

Guest Editor
Dr. Allison Duettmann

Foresight Institute, San Francisco, CA, USA
Interests: existential hope and existential risk reduction; artificial general intelligence

Special Issue Information

Dear Colleagues,

In addition to the steadily growing body of technical work on building safe AI systems, attention in the AI safety community has recently begun to include strategic considerations of coordination amongst the relevant actors in the fields of AI and AI safety.

There are several reasons for this shift:

Multiplier effects: Given the difficulty of the challenges involved in building safe AI systems (e.g., ethics, technical alignment, cybersecurity), we ought to ensure that the time horizon required to develop thorough solutions is available. Coordination efforts could allow actors who develop AI to slow down when necessary, rather than engage in adversarial races that may lead to corner-cutting on safety.

Pragmatism: While furthering coordination amongst actors in the AI space is a complex challenge, coordination itself is not a novel problem. Many of the actors relevant to ensuring that progress toward superintelligence remains beneficial to humanity are already known; there is already a promising pool of research on coordination problems; and there are historical precedents for other high-stakes coordination problems with which we have some familiarity and experience. Together, these suggest useful research directions for AI coordination.

Urgency: With real race dynamics amongst major powers slowly emerging in AI and related fields, developing strategies for coordination is urgent. There is still a window of opportunity to shape the relationships amongst current and future actors toward an outcome beneficial to humanity.

Given the above benefits of coordination work on the path to safe superintelligence, this issue intends to survey promising research in this emerging field within AI safety. On a meta level, the hope is that this issue can serve as a map that informs efforts in the space of AI coordination about other promising efforts. Creating an informed and proactive research cohort would avoid Unilateralist’s Curse scenarios, in which different efforts duplicate or unknowingly counteract other promising efforts, and would open up avenues for collaboration, thereby increasing coordination of AI coordination research more generally.

While this issue focuses on AI safety coordination, coordination is also important to most other known existential risks (e.g., biotechnology risks) and to future human-made existential risks, some of which may still be unknown. Thus, while most coordination strategies in this issue will be specific to superintelligence, we hope that some insights yield “collateral benefits” for the reduction of other existential risks by creating an overall civilizational framework that grows in robustness, resilience, or even antifragility.

Dr. Roman Yampolskiy
Dr. Allison Duettmann
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form to submit your manuscript. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Big Data and Cognitive Computing is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) is waived for well-prepared manuscripts submitted to this issue. Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • AI race scenarios, strategies and barriers for coordination
  • Analysis of relevant actors, AI strategies, and relations amongst actors
  • Considerations of global AI governance
  • Game theory of AI coordination
  • Potential positive outcomes of AI coordination
  • Historical precedents for coordination strategies
  • Key factor analysis for realistic coordination scenarios
  • Effects of AI coordination efforts on other risks
  • Effects of external developments on AI coordination
  • Openness vs. closedness in AI research and AI safety research
  • Centralized vs. decentralized governance systems
  • Multilateral vs. unilateral AI development and deployment scenarios
  • Cybersecurity as a risk factor in AI safety
  • Near-term pathways to influence AI policy
  • Incentivizing and penalizing actors to coordinate
  • Acceptable surveillance strategies for AI safety
  • Engagement of future relevant stakeholders in AI safety
  • Framing biases in AI coordination
  • Forecasting AI progress, obstacles, and dependencies
  • Regulating AI-based systems: safety standards and certification
  • Risk-reduction of weaponization of AI
  • Risk-reduction of AI-based large-scale memetic manipulation
  • General risk-reduction of bad actor use of AI
  • Safety guidelines for the AI community
  • Strategies for targeted outreach to relevant actors
  • Societal effects of AI: Algorithmic bias and AI discrimination
  • Dangers and promises of smart contracts for AI safety
  • Predictions & incentive markets for AI coordination
  • Whole Brain Emulations (WBEs) & coordination

Published Papers (6 papers)


Research


Open Access Article
AI Governance and the Policymaking Process: Key Considerations for Reducing AI Risk
Big Data Cogn. Comput. 2019, 3(2), 26; https://doi.org/10.3390/bdcc3020026
Received: 5 April 2019 / Revised: 28 April 2019 / Accepted: 2 May 2019 / Published: 8 May 2019
Abstract
This essay argues that a new subfield of AI governance should be explored that examines the policy-making process and its implications for AI governance. A growing number of researchers have begun working on the question of how to mitigate the catastrophic risks of transformative artificial intelligence, including what policies states should adopt. However, this essay identifies a preceding, meta-level problem of how the space of possible policies is affected by the politics and administrative mechanisms of how those policies are created and implemented. This creates a new set of key considerations for the field of AI governance and should influence the actions of future policymakers. This essay examines some of the theories of the policymaking process, how they compare to current work in AI governance, and their implications for the field at large, and ends by identifying areas for future research.
Open Access Communication
Multiparty Dynamics and Failure Modes for Machine Learning and Artificial Intelligence
Big Data Cogn. Comput. 2019, 3(2), 21; https://doi.org/10.3390/bdcc3020021
Received: 28 February 2019 / Revised: 24 March 2019 / Accepted: 29 March 2019 / Published: 5 April 2019
Abstract
An important challenge for safety in machine learning and artificial intelligence systems is a set of related failures involving specification gaming, reward hacking, fragility to distributional shifts, and Goodhart’s or Campbell’s law. This paper presents additional failure modes for interactions within multi-agent systems that are closely related. These multi-agent failure modes are more complex, more problematic, and less well understood than the single-agent case, and are also already occurring, largely unnoticed. After motivating the discussion with examples from poker-playing artificial intelligence (AI), the paper explains why these failure modes are in some senses unavoidable. Following this, the paper categorizes failure modes, provides definitions, and cites examples for each of the modes: accidental steering, coordination failures, adversarial misalignment, input spoofing and filtering, and goal co-option or direct hacking. The paper then discusses how extant literature on multi-agent AI fails to address these failure modes, and identifies work which may be useful for the mitigation of these failure modes.
Open Access Article
Global Solutions vs. Local Solutions for the AI Safety Problem
Big Data Cogn. Comput. 2019, 3(1), 16; https://doi.org/10.3390/bdcc3010016
Received: 16 December 2018 / Revised: 2 February 2019 / Accepted: 15 February 2019 / Published: 20 February 2019
Abstract
There are two types of artificial general intelligence (AGI) safety solutions: global and local. Most previously suggested solutions are local: they explain how to align or “box” a specific AI (Artificial Intelligence), but do not explain how to prevent the creation of dangerous AI in other places. Global solutions are those that ensure any AI on Earth is not dangerous. The number of suggested global solutions is much smaller than the number of proposed local solutions. Global solutions can be divided into four groups: 1. No AI: AGI technology is banned or its use is otherwise prevented; 2. One AI: the first superintelligent AI is used to prevent the creation of any others; 3. Net of AIs as AI police: a balance is created between many AIs, so they evolve as a net and can prevent any rogue AI from taking over the world; 4. Humans inside AI: humans are augmented or part of AI. We explore many ideas, both old and new, regarding global solutions for AI safety. They include changing the number of AI teams, different forms of “AI Nanny” (a non-self-improving global control AI system able to prevent the creation of dangerous AIs), selling AI safety solutions, and sending messages to future AI. Not every local solution scales to a global solution, or does so ethically and safely. The choice of the best local solution should include an understanding of the ways in which it will be scaled up. Human-AI teams or a superintelligent AI Service as suggested by Drexler may be examples of such ethically scalable local solutions, but the final choice depends on some unknown variables such as the speed of AI progress.
Open Access Article
Beneficial Artificial Intelligence Coordination by Means of a Value Sensitive Design Approach
Big Data Cogn. Comput. 2019, 3(1), 5; https://doi.org/10.3390/bdcc3010005
Received: 18 December 2018 / Revised: 29 December 2018 / Accepted: 2 January 2019 / Published: 6 January 2019
Abstract
This paper argues that the Value Sensitive Design (VSD) methodology provides a principled approach to embedding common values into AI systems both early and throughout the design process. To do so, it draws on an important case study: the evidence and final report of the UK Select Committee on Artificial Intelligence. This empirical investigation shows that the different and often disparate stakeholder groups that are implicated in AI design and use share some common values that can be used to further strengthen design coordination efforts. VSD is shown to be able both to distill these common values and to provide a framework for stakeholder coordination.

Open Access Article
Towards AI Welfare Science and Policies
Big Data Cogn. Comput. 2019, 3(1), 2; https://doi.org/10.3390/bdcc3010002
Received: 24 November 2018 / Revised: 20 December 2018 / Accepted: 21 December 2018 / Published: 27 December 2018
Abstract
In light of fast progress in the field of AI, there is an urgent demand for AI policies. Bostrom et al. provide “a set of policy desiderata”, out of which this article attempts to contribute to the “interests of digital minds”. The focus is on two interests of potentially sentient digital minds: to avoid suffering and to have the freedom of choice about their deletion. Various challenges are considered, including the vast range of potential features of digital minds, the difficulties in assessing the interests and wellbeing of sentient digital minds, and the skepticism that such research may encounter. Prolegomena to abolish suffering of sentient digital minds, as well as to measure and specify the wellbeing of sentient digital minds, are outlined by means of the new field of AI welfare science, which is derived from animal welfare science. The establishment of AI welfare science serves as a prerequisite for the formulation of AI welfare policies, which regulate the wellbeing of sentient digital minds. This article aims to contribute to sentiocentrism through inclusion, thus to policies for antispeciesism, as well as to AI safety, for which the wellbeing of AIs would be a cornerstone.

Other


Open Access Opinion
The Supermoral Singularity—AI as a Fountain of Values
Big Data Cogn. Comput. 2019, 3(2), 23; https://doi.org/10.3390/bdcc3020023
Received: 30 December 2018 / Revised: 25 February 2019 / Accepted: 9 April 2019 / Published: 11 April 2019
Abstract
This article looks at the problem of moral singularity in the development of artificial intelligence. We are now on the verge of major breakthroughs in machine technology where autonomous robots that can make their own decisions will become an integral part of our way of life. This article presents a qualitative, comparative approach, which considers the differences between humans and machines, especially in relation to morality, and is grounded in historical and contemporary examples. This argument suggests that it is difficult to apply models of human morality and evolution to machines and that the creation of super-intelligent robots that will be able to make moral decisions could have potentially serious consequences. A runaway moral singularity could result in machines seeking to confront human moral transgressions in a quest to eliminate all forms of evil. This might also culminate in an all-out war in which humanity might be defeated.

Big Data Cogn. Comput. EISSN 2504-2289. Published by MDPI AG, Basel, Switzerland.