Special Issue "Artificial Superintelligence: Coordination & Strategy"

A special issue of Big Data and Cognitive Computing (ISSN 2504-2289).

Deadline for manuscript submissions: 31 January 2019

Special Issue Editors

Guest Editor
Dr. Roman V. Yampolskiy

Computer Engineering and Computer Science Department, University of Louisville, Louisville, KY 40292 USA
Phone: +1 502 852 3624
Interests: biometrics; forensics; AI; genetic algorithms; games; pattern recognition; security
Guest Editor
Dr. Allison Duettmann

Foresight Institute, San Francisco, CA, USA
Interests: existential hope and existential risk reduction; artificial general intelligence

Special Issue Information

Dear Colleagues,

Attention in the AI safety community has recently expanded beyond the steadily growing body of technical work on building safe AI systems to include strategic considerations of coordination amongst relevant actors in the fields of AI and AI safety.

This shift has several reasons:

Multiplier effects: Given the difficulty of the challenges involved in building safe AI systems (e.g., ethics, technical alignment, cybersecurity), we ought to ensure that the time horizon required to develop thorough solutions remains available. Coordination efforts could allow actors who develop AI to slow down when necessary, rather than engage in adversarial races that may lead to corner-cutting on safety.

Pragmatism: While furthering coordination among actors in the AI space is a complex challenge, coordination itself is not a novel problem. Many of the actors relevant to ensuring that progress toward superintelligence remains beneficial to humanity are already known. Moreover, there is a promising existing research pool on coordination problems, as well as historical precursors of other high-stakes coordination problems with which we have some familiarity and experience, suggesting useful research directions for AI coordination.

Urgency: With real race dynamics in AI and related fields slowly emerging amongst major powers, developing strategies for coordination is a matter of high urgency. There is still a window of opportunity to shape the relationships amongst current and future actors toward an outcome beneficial to humanity.

Given the above benefits of coordination work on the path to safe superintelligence, this issue intends to survey promising research in this emerging field within AI safety. On a meta-level, the hope is that this issue can serve as a map to inform efforts in the space of AI coordination about other promising efforts. Creating an informed and proactive research cohort would help avoid Unilateralist's Curse scenarios, in which different efforts duplicate or unknowingly counteract one another, and would open avenues for collaboration, thereby increasing coordination of AI coordination research more generally.

While this edition focuses on AI safety coordination, coordination is important to most other known existential risks (e.g., biotechnology risks), as well as to future human-made existential risks, some of which may still be unknown. Thus, while most coordination strategies in this issue will be specific to superintelligence, we hope that some insights yield "collateral benefits" for the reduction of other existential risks by creating an overall civilizational framework of increased robustness, resilience, or even antifragility.

Dr. Roman Yampolskiy
Dr. Allison Duettmann
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Big Data and Cognitive Computing is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) is waived for well-prepared manuscripts submitted to this issue. Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • AI race scenarios, strategies and barriers for coordination
  • Analysis of relevant actors, AI strategies, and relations amongst actors
  • Considerations of global AI governance
  • Game theory of AI coordination
  • Potential positive outcomes of AI coordination
  • Historical precedents for coordination strategies
  • Key factor analysis for realistic coordination scenarios
  • Effects of AI coordination efforts on other risks
  • Effects of external developments on AI coordination
  • Openness vs. closedness in AI research and AI safety research
  • Centralized vs. decentralized governance systems
  • Multilateral vs. unilateral AI development and deployment scenarios
  • Cybersecurity as a risk factor in AI safety
  • Near-term pathways to influence AI policy
  • Incentivizing and penalizing actors to coordinate
  • Acceptable surveillance strategies for AI safety
  • Engagement of future relevant stakeholders in AI safety
  • Framing biases in AI coordination
  • Forecasting AI progress, obstacles, and dependencies
  • Regulating AI-based systems: safety standards and certification
  • Risk-reduction of weaponization of AI
  • Risk-reduction of AI-based large-scale memetic manipulation
  • General risk-reduction of bad actor use of AI
  • Safety guidelines for the AI community
  • Strategies for targeted outreach to relevant actors
  • Societal effects of AI: Algorithmic bias and AI discrimination
  • Dangers and promises of smart contracts for AI safety
  • Predictions & incentive markets for AI coordination
  • Whole Brain Emulations (WBEs) & coordination

Published Papers

This special issue is now open for submission.