Special Issue "Artificial Superintelligence: Coordination & Strategy"

A special issue of Big Data and Cognitive Computing (ISSN 2504-2289).

Deadline for manuscript submissions: closed (31 May 2019).

Printed Edition Available!
A printed edition of this Special Issue is available.

Special Issue Editors

Dr. Roman V. Yampolskiy
Guest Editor
Ms. Allison Duettmann
Guest Editor
Foresight Institute, San Francisco, CA, USA
Interests: existential hope and existential risk reduction; artificial general intelligence

Special Issue Information

Dear Colleagues,

Attention in the AI safety community has recently expanded beyond the steadily growing body of technical work on building safe AI systems to include strategic considerations of coordination amongst relevant actors in AI and AI safety.

There are several reasons for this shift:

Multiplier effects: Given the difficulty of the challenges involved in building safe AI systems (e.g., ethics, technical alignment, cybersecurity), we ought to ensure that enough time is available to develop thorough solutions. Coordination could allow actors who develop AI to slow down when necessary, rather than engage in adversarial races that may lead to corner-cutting on safety.

Pragmatism: While furthering coordination amongst actors in the AI space is a complex challenge, coordination itself is not a novel problem. Many of the actors relevant to ensuring that progress toward superintelligence remains beneficial to humanity are already known. Moreover, there is a promising body of research on coordination problems, as well as historical precursors of other high-stakes coordination problems with which we have some familiarity and experience, suggesting useful research directions for AI coordination.

Urgency: With real race dynamics amongst major powers slowly emerging—in AI and related fields—developing strategies for coordination is highly urgent. There is still a window of opportunity to shape the relationships amongst current and future actors toward a beneficial outcome for humanity.
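The race dynamic invoked above has the structure of a prisoner's dilemma, which can be made concrete with a minimal game-theoretic sketch. All payoff numbers below are hypothetical, chosen only to illustrate the incentive structure, not drawn from any model in this issue:

```python
# Illustrative only: a two-actor "safety race" modeled as a one-shot game
# with hypothetical payoffs. Each developer chooses to proceed Cautiously
# (invest in safety) or Race (cut corners to arrive first).
from itertools import product

# payoffs[(row, col)] = (row player's payoff, column player's payoff)
payoffs = {
    ("cautious", "cautious"): (3, 3),  # both safe, shared benefit
    ("cautious", "race"):     (0, 4),  # racer wins the race, safety suffers
    ("race",     "cautious"): (4, 0),
    ("race",     "race"):     (1, 1),  # race to the bottom on safety
}

def best_response(opponent_action, player):
    """Return the action maximizing this player's payoff given the opponent's action."""
    idx = 0 if player == "row" else 1
    def payoff(a):
        key = (a, opponent_action) if player == "row" else (opponent_action, a)
        return payoffs[key][idx]
    return max(["cautious", "race"], key=payoff)

def pure_nash_equilibria():
    """All pure-strategy profiles where each action is a best response to the other."""
    return [
        (r, c) for r, c in product(["cautious", "race"], repeat=2)
        if best_response(c, "row") == r and best_response(r, "col") == c
    ]

print(pure_nash_equilibria())  # [('race', 'race')]
```

Although mutual caution pays each actor 3 rather than 1, each actor's unilateral incentive is to race regardless of what the other does, so mutual racing is the only equilibrium. This is precisely why coordination mechanisms that change payoffs or enable credible commitments matter.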

Given the above benefits of coordination work on the path to safe superintelligence, this issue surveys promising research in this emerging field within AI safety. On a meta-level, the hope is that this issue can serve as a map that informs efforts in the space of AI coordination about other promising efforts. Creating an informed and proactive research cohort would avoid Unilateralist's Curse scenarios, in which different efforts duplicate or unknowingly counter one another, and would open up avenues for collaboration, thereby increasing coordination of AI coordination research itself.

While this edition focuses on coordination for AI safety, coordination is important to most other known existential risks (e.g., biotechnology risks) and to future human-made existential risks, some of which may still be unknown. Thus, while most coordination strategies in this issue are specific to superintelligence, we hope that some insights yield “collateral benefits” for the reduction of other existential risks by contributing to an overall civilizational framework with increased robustness, resilience, or even antifragility.

Dr. Roman Yampolskiy
Ms. Allison Duettmann
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Big Data and Cognitive Computing is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • AI race scenarios, strategies and barriers for coordination
  • Analysis of relevant actors, AI strategies, and relations amongst actors
  • Considerations of global AI governance
  • Game theory of AI coordination
  • Potential positive outcomes of AI coordination
  • Historical precedents for coordination strategies
  • Key factor analysis for realistic coordination scenarios
  • Effects of AI coordination efforts on other risks
  • Effects of external developments on AI coordination
  • Openness vs. closedness in AI research and AI safety research
  • Centralized vs. decentralized governance systems
  • Multilateral vs. unilateral AI development and deployment scenarios
  • Cybersecurity as a risk factor in AI safety
  • Near-term pathways to influence AI policy
  • Incentivizing and penalizing actors to coordinate
  • Acceptable surveillance strategies for AI safety
  • Engagement of future relevant stakeholders in AI safety
  • Framing biases in AI coordination
  • Forecasting AI progress, obstacles, and dependencies
  • Regulating AI-based systems: safety standards and certification
  • Risk-reduction of weaponization of AI
  • Risk-reduction of AI-based large-scale memetic manipulation
  • General risk-reduction of bad actor use of AI
  • Safety guidelines for the AI community
  • Strategies for targeted outreach to relevant actors
  • Societal effects of AI: Algorithmic bias and AI discrimination
  • Dangers and promises of smart contracts for AI safety
  • Prediction & incentive markets for AI coordination
  • Whole Brain Emulations (WBEs) & coordination

Published Papers (10 papers)


Research


Open Access Article
Future-Ready Strategic Oversight of Multiple Artificial Superintelligence-Enabled Adaptive Learning Systems via Human-Centric Explainable AI-Empowered Predictive Optimizations of Educational Outcomes
Big Data Cogn. Comput. 2019, 3(3), 46; https://doi.org/10.3390/bdcc3030046 - 31 Jul 2019
Cited by 4 | Viewed by 1761
Abstract
Artificial intelligence-enabled adaptive learning systems (AI-ALS) have been increasingly utilized in education. Schools are usually afforded the freedom to deploy the AI-ALS that they prefer. However, even before artificial intelligence autonomously develops into artificial superintelligence in the future, it would be remiss to entirely leave the students to the AI-ALS without any independent oversight of the potential issues. For example, if the students score well in formative assessments within the AI-ALS but subsequently perform badly in paper-based post-tests, or if the relentless algorithm of a particular AI-ALS is suspected of causing undue stress for the students, they should be addressed by educational stakeholders. Policy makers and educational stakeholders should collaborate to analyze the data from multiple AI-ALS deployed in different schools to achieve strategic oversight. The current paper provides exemplars to illustrate how this future-ready strategic oversight could be implemented using an artificial intelligence-based Bayesian network software to analyze the data from five dissimilar AI-ALS, each deployed in a different school. Besides using descriptive analytics to reveal potential issues experienced by students within each AI-ALS, this human-centric AI-empowered approach also enables explainable predictive analytics of the students’ learning outcomes in paper-based summative assessments after training is completed in each AI-ALS. Full article

Open Access Article
Safe Artificial General Intelligence via Distributed Ledger Technology
Big Data Cogn. Comput. 2019, 3(3), 40; https://doi.org/10.3390/bdcc3030040 - 08 Jul 2019
Cited by 2 | Viewed by 1583
Abstract
Artificial general intelligence (AGI) progression metrics indicate AGI will occur within decades. No proof exists that AGI will benefit humans and not harm or eliminate humans. A set of logically distinct conceptual components is proposed that are necessary and sufficient to (1) ensure various AGI scenarios will not harm humanity, and (2) robustly align AGI and human values and goals. By systematically addressing pathways to malevolent AI we can induce the methods/axioms required to redress them. Distributed ledger technology (DLT, “blockchain”) is integral to this proposal, e.g., “smart contracts” are necessary to address the evolution of AI that will be too fast for human monitoring and intervention. The proposed axioms: (1) Access to technology by market license. (2) Transparent ethics embodied in DLT. (3) Morality encrypted via DLT. (4) Behavior control structure with values at roots. (5) Individual bar-code identification of critical components. (6) Configuration Item (from business continuity/disaster recovery planning). (7) Identity verification secured via DLT. (8) “Smart” automated contracts based on DLT. (9) Decentralized applications—AI software modules encrypted via DLT. (10) Audit trail of component usage stored via DLT. (11) Social ostracism (denial of resources) augmented by DLT petitions. (12) Game theory and mechanism design. Full article

Open Access Article
A Holistic Framework for Forecasting Transformative AI
Big Data Cogn. Comput. 2019, 3(3), 35; https://doi.org/10.3390/bdcc3030035 - 26 Jun 2019
Cited by 1 | Viewed by 1749
Abstract
In this paper we describe a holistic AI forecasting framework which draws on a broad body of literature from disciplines such as forecasting, technological forecasting, futures studies and scenario planning. A review of this literature leads us to propose a new class of scenario planning techniques that we call scenario mapping techniques. These techniques include scenario network mapping, cognitive maps and fuzzy cognitive maps, as well as a new method we propose that we refer to as judgmental distillation mapping. This proposed technique is based on scenario mapping and judgmental forecasting techniques, and is intended to integrate a wide variety of forecasts into a technological map with probabilistic timelines. Judgmental distillation mapping is the centerpiece of the holistic forecasting framework in which it is used to inform a strategic planning process as well as for informing future iterations of the forecasting process. Together, the framework and new technique form a holistic rethinking of how we forecast AI. We also include a discussion of the strengths and weaknesses of the framework, its implications for practice and its implications on research priorities for AI forecasting researchers. Full article

Open Access Article
Peacekeeping Conditions for an Artificial Intelligence Society
Big Data Cogn. Comput. 2019, 3(2), 34; https://doi.org/10.3390/bdcc3020034 - 22 Jun 2019
Viewed by 2301
Abstract
In a human society with emergent technology, the destructive actions of some pose a danger to the survival of all of humankind, increasing the need to maintain peace by overcoming universal conflicts. However, human society has not yet achieved complete global peacekeeping. Fortunately, a new possibility for peacekeeping among human societies using the appropriate interventions of an advanced system will be available in the near future. To achieve this goal, an artificial intelligence (AI) system must operate continuously and stably (condition 1) and have an intervention method for maintaining peace among human societies based on a common value (condition 2). However, as a premise, it is necessary to have a minimum common value upon which all of human society can agree (condition 3). In this study, an AI system to achieve condition 1 was investigated. This system was designed as a group of distributed intelligent agents (IAs) to ensure robust and rapid operation. Even if common goals are shared among all IAs, each autonomous IA acts on each local value to adapt quickly to each environment that it faces. Thus, conflicts between IAs are inevitable, and this situation sometimes interferes with the achievement of commonly shared goals. Even so, they can maintain peace within their own societies if all the dispersed IAs think that all other IAs aim for socially acceptable goals. However, communication channel problems, comprehension problems, and computational complexity problems are barriers to realization. This problem can be overcome by introducing an appropriate goal-management system in the case of computer-based IAs. Then, an IA society could achieve its goals peacefully, efficiently, and consistently. Therefore, condition 1 will be achievable. In contrast, humans are restricted by their biological nature and tend to interact with others similar to themselves, so the eradication of conflicts is more difficult. Full article

Open Access Article
AI Governance and the Policymaking Process: Key Considerations for Reducing AI Risk
Big Data Cogn. Comput. 2019, 3(2), 26; https://doi.org/10.3390/bdcc3020026 - 08 May 2019
Cited by 5 | Viewed by 2473
Abstract
This essay argues that a new subfield of AI governance should be explored that examines the policy-making process and its implications for AI governance. A growing number of researchers have begun working on the question of how to mitigate the catastrophic risks of transformative artificial intelligence, including what policies states should adopt. However, this essay identifies a preceding, meta-level problem of how the space of possible policies is affected by the politics and administrative mechanisms of how those policies are created and implemented. This creates a new set of key considerations for the field of AI governance and should influence the action of future policymakers. This essay examines some of the theories of the policymaking process, how they compare to current work in AI governance, and their implications for the field at large and ends by identifying areas of future research. Full article
Open Access Communication
Multiparty Dynamics and Failure Modes for Machine Learning and Artificial Intelligence
Big Data Cogn. Comput. 2019, 3(2), 21; https://doi.org/10.3390/bdcc3020021 - 05 Apr 2019
Cited by 6 | Viewed by 1828
Abstract
An important challenge for safety in machine learning and artificial intelligence systems is a set of related failures involving specification gaming, reward hacking, fragility to distributional shifts, and Goodhart’s or Campbell’s law. This paper presents additional failure modes for interactions within multi-agent systems that are closely related. These multi-agent failure modes are more complex, more problematic, and less well understood than the single-agent case, and are also already occurring, largely unnoticed. After motivating the discussion with examples from poker-playing artificial intelligence (AI), the paper explains why these failure modes are in some senses unavoidable. Following this, the paper categorizes failure modes, provides definitions, and cites examples for each of the modes: accidental steering, coordination failures, adversarial misalignment, input spoofing and filtering, and goal co-option or direct hacking. The paper then discusses how extant literature on multi-agent AI fails to address these failure modes, and identifies work which may be useful for the mitigation of these failure modes. Full article
Open Access Article
Global Solutions vs. Local Solutions for the AI Safety Problem
Big Data Cogn. Comput. 2019, 3(1), 16; https://doi.org/10.3390/bdcc3010016 - 20 Feb 2019
Cited by 3 | Viewed by 1770
Abstract
There are two types of artificial general intelligence (AGI) safety solutions: global and local. Most previously suggested solutions are local: they explain how to align or “box” a specific AI (Artificial Intelligence), but do not explain how to prevent the creation of dangerous AI in other places. Global solutions are those that ensure any AI on Earth is not dangerous. The number of suggested global solutions is much smaller than the number of proposed local solutions. Global solutions can be divided into four groups: 1. No AI: AGI technology is banned or its use is otherwise prevented; 2. One AI: the first superintelligent AI is used to prevent the creation of any others; 3. Net of AIs as AI police: a balance is created between many AIs, so they evolve as a net and can prevent any rogue AI from taking over the world; 4. Humans inside AI: humans are augmented or part of AI. We explore many ideas, both old and new, regarding global solutions for AI safety. They include changing the number of AI teams, different forms of “AI Nanny” (non-self-improving global control AI system able to prevent creation of dangerous AIs), selling AI safety solutions, and sending messages to future AI. Not every local solution scales to a global solution or does it ethically and safely. The choice of the best local solution should include understanding of the ways in which it will be scaled up. Human-AI teams or a superintelligent AI Service as suggested by Drexler may be examples of such ethically scalable local solutions, but the final choice depends on some unknown variables such as the speed of AI progress. Full article
Open Access Article
Beneficial Artificial Intelligence Coordination by Means of a Value Sensitive Design Approach
Big Data Cogn. Comput. 2019, 3(1), 5; https://doi.org/10.3390/bdcc3010005 - 06 Jan 2019
Cited by 10 | Viewed by 4425
Abstract
This paper argues that the Value Sensitive Design (VSD) methodology provides a principled approach to embedding common values into AI systems both early and throughout the design process. To do so, it draws on an important case study: the evidence and final report of the UK Select Committee on Artificial Intelligence. This empirical investigation shows that the different and often disparate stakeholder groups that are implicated in AI design and use share some common values that can be used to further strengthen design coordination efforts. VSD is shown to be both able to distill these common values as well as provide a framework for stakeholder coordination. Full article

Open Access Article
Towards AI Welfare Science and Policies
Big Data Cogn. Comput. 2019, 3(1), 2; https://doi.org/10.3390/bdcc3010002 - 27 Dec 2018
Cited by 4 | Viewed by 1913
Abstract
In light of fast progress in the field of AI there is an urgent demand for AI policies. Bostrom et al. provide “a set of policy desiderata”, out of which this article attempts to contribute to the “interests of digital minds”. The focus is on two interests of potentially sentient digital minds: to avoid suffering and to have the freedom of choice about their deletion. Various challenges are considered, including the vast range of potential features of digital minds, the difficulties in assessing the interests and wellbeing of sentient digital minds, and the skepticism that such research may encounter. Prolegomena to abolish suffering of sentient digital minds as well as to measure and specify wellbeing of sentient digital minds are outlined by means of the new field of AI welfare science, which is derived from animal welfare science. The establishment of AI welfare science serves as a prerequisite for the formulation of AI welfare policies, which regulate the wellbeing of sentient digital minds. This article aims to contribute to sentiocentrism through inclusion, thus to policies for antispeciesism, as well as to AI safety, for which wellbeing of AIs would be a cornerstone. Full article

Other


Open Access Opinion
The Supermoral Singularity—AI as a Fountain of Values
Big Data Cogn. Comput. 2019, 3(2), 23; https://doi.org/10.3390/bdcc3020023 - 11 Apr 2019
Cited by 2 | Viewed by 2046
Abstract
This article looks at the problem of moral singularity in the development of artificial intelligence. We are now on the verge of major breakthroughs in machine technology where autonomous robots that can make their own decisions will become an integral part of our way of life. This article presents a qualitative, comparative approach, which considers the differences between humans and machines, especially in relation to morality, and is grounded in historical and contemporary examples. This argument suggests that it is difficult to apply models of human morality and evolution to machines and that the creation of super-intelligent robots that will be able to make moral decisions could have potentially serious consequences. A runaway moral singularity could result in machines seeking to confront human moral transgressions in a quest to eliminate all forms of evil. This might also culminate in an all-out war in which humanity might be defeated. Full article
