Open Access Article

Global Solutions vs. Local Solutions for the AI Safety Problem

Affiliations:
1 Science for Life Extension Foundation, Prospect Mira 124-15, Moscow 129164, Russia
2 Alliance to Feed the Earth in Disasters (ALLFED), University of Alaska Fairbanks, Fairbanks, AK 99775, USA
3 Markkula Center for Applied Ethics, Santa Clara University, Santa Clara, CA 95053, USA
* Author to whom correspondence should be addressed.
Big Data Cogn. Comput. 2019, 3(1), 16; https://doi.org/10.3390/bdcc3010016
Received: 16 December 2018 / Revised: 2 February 2019 / Accepted: 15 February 2019 / Published: 20 February 2019
(This article belongs to the Special Issue Artificial Superintelligence: Coordination & Strategy)
Abstract

There are two types of artificial general intelligence (AGI) safety solutions: global and local. Most previously suggested solutions are local: they explain how to align or “box” a specific AI (artificial intelligence), but not how to prevent the creation of dangerous AI elsewhere. Global solutions are those that ensure no AI on Earth is dangerous. Far fewer global solutions have been suggested than local ones. Global solutions can be divided into four groups: 1. No AI: AGI technology is banned or its use is otherwise prevented; 2. One AI: the first superintelligent AI is used to prevent the creation of any others; 3. Net of AIs as AI police: a balance is created between many AIs, so they evolve as a net and can prevent any rogue AI from taking over the world; 4. Humans inside AI: humans are augmented or become part of AI. We explore many ideas, both old and new, regarding global solutions for AI safety. They include changing the number of AI teams, different forms of “AI Nanny” (a non-self-improving global control AI system able to prevent the creation of dangerous AIs), selling AI safety solutions, and sending messages to future AI. Not every local solution scales to a global solution, and not every one that does scales ethically and safely. The choice of the best local solution should therefore include an understanding of how it will be scaled up. Human-AI teams, or a superintelligent AI Services model as suggested by Drexler, may be examples of such ethically scalable local solutions, but the final choice depends on unknown variables such as the speed of AI progress.
Keywords: AI safety; existential risk; AI alignment; superintelligence; AI arms race
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite This Article (MDPI and ACS Style)

Turchin, A.; Denkenberger, D.; Green, B.P. Global Solutions vs. Local Solutions for the AI Safety Problem. Big Data Cogn. Comput. 2019, 3, 16.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
