Article

Transdisciplinary AI Observatory—Retrospective Analyses and Future-Oriented Contradistinctions

1 Department of Information and Computing Sciences, Utrecht University, 3584 CC Utrecht, The Netherlands
2 TNO Netherlands, 2597 AK The Hague, The Netherlands
3 School of Engineering, University of Louisville, Louisville, KY 40292, USA
* Author to whom correspondence should be addressed.
Philosophies 2021, 6(1), 6; https://doi.org/10.3390/philosophies6010006
Received: 26 November 2020 / Revised: 4 January 2021 / Accepted: 5 January 2021 / Published: 15 January 2021
(This article belongs to the Special Issue The Perils of Artificial Intelligence)
In recent years, artificial intelligence (AI) safety has gained international recognition in light of heterogeneous safety-critical and ethical issues that risk overshadowing the broadly beneficial impacts of AI. In this context, the implementation of AI observatory endeavors represents one key research direction. This paper motivates the need for an inherently transdisciplinary AI observatory approach integrating diverse retrospective and counterfactual views. We delineate aims and limitations while providing hands-on advice drawing on concrete practical examples. Distinguishing between unintentionally and intentionally triggered AI risks with diverse socio-psycho-technological impacts, we exemplify a retrospective descriptive analysis followed by a retrospective counterfactual risk analysis. Building on these AI observatory tools, we present near-term transdisciplinary guidelines for AI safety. As a further contribution, we discuss differentiated and tailored long-term directions through the lens of two disparate modern AI safety paradigms. For simplicity, we refer to these two paradigms as artificial stupidity (AS) and eternal creativity (EC), respectively. While both AS and EC acknowledge the need for a hybrid cognitive-affective approach to AI safety and overlap with regard to many short-term considerations, they differ fundamentally in the nature of multiple envisaged long-term solution patterns. By compiling the relevant underlying contradistinctions, we aim to provide future-oriented incentives for constructive dialectics in practical and theoretical AI safety research.
Keywords: AI safety; AI observatory; retrospective counterfactual risk analysis; artificial stupidity; artificial creativity augmentation; cybersecurity; social psychology; HCI

MDPI and ACS Style

Aliman, N.-M.; Kester, L.; Yampolskiy, R. Transdisciplinary AI Observatory—Retrospective Analyses and Future-Oriented Contradistinctions. Philosophies 2021, 6, 6. https://doi.org/10.3390/philosophies6010006

AMA Style

Aliman N-M, Kester L, Yampolskiy R. Transdisciplinary AI Observatory—Retrospective Analyses and Future-Oriented Contradistinctions. Philosophies. 2021; 6(1):6. https://doi.org/10.3390/philosophies6010006

Chicago/Turabian Style

Aliman, Nadisha-Marie, Leon Kester, and Roman Yampolskiy. 2021. "Transdisciplinary AI Observatory—Retrospective Analyses and Future-Oriented Contradistinctions" Philosophies 6, no. 1: 6. https://doi.org/10.3390/philosophies6010006
