
Peacekeeping Conditions for an Artificial Intelligence Society

by Hiroshi Yamakawa 1,2,3
1 The Whole Brain Architecture Initiative, a Specified Non-Profit Organization, Nishikoiwa 2-19-21, Edogawa-ku, Tokyo 133-0057, Japan
2 The RIKEN Center for Advanced Intelligence Project, Nihonbashi 1-Chome Mitsui Building, 15th Floor, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
3 Dwango Co., Ltd., KABUKIZA TOWER, 4-12-15 Ginza, Chuo-ku, Tokyo 104-0061, Japan
Big Data Cogn. Comput. 2019, 3(2), 34;
Received: 18 May 2019 / Revised: 19 June 2019 / Accepted: 20 June 2019 / Published: 22 June 2019
(This article belongs to the Special Issue Artificial Superintelligence: Coordination & Strategy)
In a human society with emergent technology, the destructive actions of a few endanger the survival of all humankind, heightening the need to maintain peace by overcoming universal conflicts. However, human society has not yet achieved complete global peacekeeping. Fortunately, a new possibility may become available in the near future: maintaining peace among human societies through the appropriate interventions of an advanced system. To achieve this goal, an artificial intelligence (AI) system must operate continuously and stably (condition 1) and possess an intervention method for maintaining peace among human societies based on a common value (condition 2). As a premise, there must also be a minimal common value upon which all of human society can agree (condition 3). In this study, an AI system for achieving condition 1 was investigated. The system was designed as a group of distributed intelligent agents (IAs) to ensure robust and rapid operation. Even if common goals are shared among all IAs, each autonomous IA acts on its own local values to adapt quickly to the environment it faces. Thus, conflicts between IAs are inevitable, and such conflicts can interfere with the achievement of the commonly shared goals. Even so, the dispersed IAs can maintain peace within their own society if each believes that all the others pursue socially acceptable goals. However, communication-channel problems, comprehension problems, and computational-complexity problems are barriers to realizing this condition. For computer-based IAs, these barriers can be overcome by introducing an appropriate goal-management system, allowing an IA society to achieve its goals peacefully, efficiently, and consistently; condition 1 is therefore achievable. In contrast, humans are constrained by their biological nature and tend to interact mainly with others similar to themselves, so eradicating conflicts among humans is more difficult.
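The abstract's condition 1 setup can be pictured as distributed agents that declare local goals against a shared set of socially acceptable values, with a goal-management step that flags conflicting agents. The sketch below is a purely illustrative toy model, not the paper's actual system; the agent names, goal strings, and `vet_actions` function are all hypothetical.

```python
# Toy illustration (assumed, not from the paper): distributed intelligent
# agents (IAs) share common values but each acts on a local goal; a simple
# goal-management step checks whether every IA's declared goal is socially
# acceptable, so that the remaining IAs can trust one another.
from dataclasses import dataclass

# Hypothetical set of socially acceptable (commonly shared) goals.
ACCEPTABLE_GOALS = {"maintain_peace", "share_resources"}

@dataclass
class IntelligentAgent:
    name: str
    local_goal: str  # the goal this IA pursues in its own environment

def vet_actions(agents):
    """Goal-management step: approve agents whose declared local goal lies
    in the acceptable set; report the rest as sources of conflict."""
    approved, conflicts = [], []
    for agent in agents:
        if agent.local_goal in ACCEPTABLE_GOALS:
            approved.append(agent.name)
        else:
            conflicts.append(agent.name)
    return approved, conflicts

agents = [
    IntelligentAgent("IA-1", "maintain_peace"),
    IntelligentAgent("IA-2", "share_resources"),
    IntelligentAgent("IA-3", "monopolize_resources"),  # conflicting local value
]
approved, conflicts = vet_actions(agents)
```

In this toy model, peace within the IA society corresponds to the `conflicts` list being empty; a real goal-management system would also have to address the communication, comprehension, and computational-complexity barriers the abstract names.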
Keywords: autonomous distributed system; conflict; existential risk; distributed goals management; terraforming; technological singularity
Figure 1

MDPI and ACS Style

Yamakawa, H. Peacekeeping Conditions for an Artificial Intelligence Society. Big Data Cogn. Comput. 2019, 3, 34.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

