Special Issue "AI AND THE SINGULARITY: A FALLACY OR A GREAT OPPORTUNITY?"

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Information Theory and Methodology".

Deadline for manuscript submissions: closed (30 October 2018)

Special Issue Editors

Guest Editor
Prof. Dr. Robert K. Logan

Department of Physics, University of Toronto, 60 St. George, Toronto, ON M5S 1A7, Canada
Interests: media ecology; systems biology; linguistics; AI
Guest Editor
Prof. Dr. Adriana Braga

Department of Social Communication, Pontifícia Universidade Católica do Rio de Janeiro (PUC-RJ), R. Marquês de São Vicente, 225—Gávea, Rio de Janeiro 22451-900, RJ, Brazil
Interests: technology; gender; pragmatism; phenomenology; psychoanalysis; media studies; social interaction; ethnography

Special Issue Information

Dear Colleagues,

We are putting together a Special Issue on the notion of the technological Singularity, the idea that computers will one day be smarter than their human creators. Articles both in favour of and against the Singularity are welcome, but the lead article by the Guest Editors Bob Logan and Adriana Braga is quite critical of the notion. Here is the abstract of their paper, The Emperor of Strong AI Has No Clothes: Limits to Artificial Intelligence.

Abstract: We argue that the premise of the technological Singularity, based on the notion that computers will one day be smarter than their human creators, is false, making use of the techniques of media ecology. We also analyze the comments of other critics of the Singularity, as well as those of supporters of this notion. The notion of intelligence that advocates of the technological Singularity promote does not take into account the full dimension of human intelligence. They treat artificial intelligence as a figure without a ground. Human intelligence, as we will show, is not based solely on logical operations and computation, but also includes a long list of other characteristics, unique to humans, which is the ground that supporters of the Singularity ignore. The list includes curiosity, imagination, intuition, emotions, passion, desires, pleasure, aesthetics, joy, purpose, objectives, goals, telos, values, morality, experience, wisdom, judgment, and even humor.

Prof. Dr. Robert K. Logan
Prof. Dr. Adriana Braga
Guest Editors

Submission

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. Papers will be published continuously (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are refereed through a peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed Open Access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. English correction and/or formatting fees of 250 CHF (Swiss Francs) will be charged in certain cases for accepted articles that require extensive additional formatting and/or English corrections.

Keywords

  • singularity

  • computers

  • information processing

  • AI (artificial intelligence)

  • human intelligence

  • curiosity

  • imagination

  • intuition

  • emotions

  • goals

  • values

  • wisdom

Published Papers (14)


Research


Open Access Article: Pareidolic and Uncomplex Technological Singularity
Information 2018, 9(12), 309; https://doi.org/10.3390/info9120309
Received: 25 October 2018 / Revised: 30 November 2018 / Accepted: 3 December 2018 / Published: 6 December 2018
Abstract
“Technological Singularity” (TS), “Accelerated Change” (AC), and Artificial General Intelligence (AGI) are frequent future/foresight studies’ themes. Rejecting the reductionist perspective on the evolution of science and technology, and based on patternicity (“the tendency to find patterns in meaningless noise”), a discussion about the perverse power of apophenia (“the tendency to perceive a connection or meaningful pattern between unrelated or random things (such as objects or ideas)”) and pareidolia (“the tendency to perceive a specific, often meaningful image in a random or ambiguous visual pattern”) in those studies is the starting point for two claims: the “accelerated change” is a future-related apophenia case, whereas AGI (and TS) are future-related pareidolia cases. A short presentation of research-focused social networks working to solve complex problems reveals the superiority of human networked minds over the hardware‒software systems and suggests the opportunity for a network-based study of TS (and AGI) from a complexity perspective. It could compensate for the weaknesses of approaches deployed from a linear and predictable perspective, in order to try to redesign our intelligent artifacts.
(This article belongs to the Special Issue AI AND THE SINGULARITY: A FALLACY OR A GREAT OPPORTUNITY?)

Open Access Feature Paper Article: Countering Superintelligence Misinformation
Information 2018, 9(10), 244; https://doi.org/10.3390/info9100244
Received: 9 September 2018 / Revised: 25 September 2018 / Accepted: 26 September 2018 / Published: 30 September 2018
Cited by 1
Abstract
Superintelligence is a potential type of future artificial intelligence (AI) that is significantly more intelligent than humans in all major respects. If built, superintelligence could be a transformative event, with potential consequences that are massively beneficial or catastrophic. Meanwhile, the prospect of superintelligence is the subject of major ongoing debate, which includes a significant amount of misinformation. Superintelligence misinformation is potentially dangerous, ultimately leading to bad decisions by the would-be developers of superintelligence and those who influence them. This paper surveys strategies to counter superintelligence misinformation. Two types of strategies are examined: strategies to prevent the spread of superintelligence misinformation and strategies to correct it after it has spread. In general, misinformation can be difficult to correct, suggesting a high value of strategies to prevent it. This paper is the first extended study of superintelligence misinformation. It draws heavily on the study of misinformation in psychology, political science, and related fields, especially misinformation about global warming. The strategies proposed can be applied to lay public attention to superintelligence, AI education programs, and efforts to build expert consensus.
Open Access Article: Superintelligence Skepticism as a Political Tool
Information 2018, 9(9), 209; https://doi.org/10.3390/info9090209
Received: 27 June 2018 / Revised: 14 August 2018 / Accepted: 17 August 2018 / Published: 22 August 2018
Cited by 1
Abstract
This paper explores the potential for skepticism about artificial superintelligence to be used as a tool for political ends. Superintelligence is AI that is much smarter than humans. Superintelligence does not currently exist, but it has been proposed that it could someday be built, with massive and potentially catastrophic consequences. There is substantial skepticism about superintelligence, including whether it will be built, whether it would be catastrophic, and whether it is worth current attention. To date, superintelligence skepticism appears to be mostly honest intellectual debate, though some of it may be politicized. This paper finds substantial potential for superintelligence skepticism to be (further) politicized, due mainly to the potential for major corporations to have a strong profit motive to downplay concerns about superintelligence and avoid government regulation. Furthermore, politicized superintelligence skepticism is likely to be quite successful, due to several factors including the inherent uncertainty of the topic and the abundance of skeptics. The paper’s analysis is based on characteristics of superintelligence and the broader AI sector, as well as the history and ongoing practice of politicized skepticism on other science and technology issues, including tobacco, global warming, and industrial chemicals. The paper contributes to literatures on politicized skepticism and superintelligence governance.
Open Access Article: When Robots Get Bored and Invent Team Sports: A More Suitable Test than the Turing Test?
Information 2018, 9(5), 118; https://doi.org/10.3390/info9050118
Received: 9 April 2018 / Revised: 5 May 2018 / Accepted: 8 May 2018 / Published: 11 May 2018
Abstract
Increasingly, the Turing test—which is used to show that artificial intelligence has achieved human-level intelligence—is being regarded as an insufficient indicator of human-level intelligence. This essay extends arguments that embodied intelligence is required for human-level intelligence, and proposes a more suitable test for determining human-level intelligence: the invention of team sports by humanoid robots. The test is preferred because team sport activity is easily identified, uniquely human, and is suggested to emerge in basic, controllable conditions. To expect humanoid robots to self-organize, or invent, team sport as a function of human-level artificial intelligence, the following necessary conditions are proposed: humanoid robots must have the capacity to participate in cooperative-competitive interactions, instilled by algorithms for resource acquisition; they must possess or acquire sufficient stores of energetic resources that permit leisure time, thus reducing competition for scarce resources and increasing cooperative tendencies; and they must possess a heterogeneous range of energetic capacities. When present, these factors allow robot collectives to spontaneously invent team sport activities and thereby demonstrate one fundamental indicator of human-level intelligence.

Open Access Feature Paper Article: Thinking in Patterns and the Pattern of Human Thought as Contrasted with AI Data Processing
Information 2018, 9(4), 83; https://doi.org/10.3390/info9040083
Received: 5 March 2018 / Revised: 4 April 2018 / Accepted: 5 April 2018 / Published: 8 April 2018
Cited by 2
Abstract
We propose that the ability of humans to identify and create patterns led to the unique aspects of human cognition and culture as a complex emergent dynamic system consisting of the following human traits: patterning, social organization beyond that of the nuclear family that emerged with the control of fire, rudimentary set theory or categorization and spoken language that co-emerged, the ability to deal with information overload, conceptualization, imagination, abductive reasoning, invention, art, religion, mathematics and science. These traits are interrelated as they all involve the ability to flexibly manipulate information from our environments via pattern restructuring. We argue that the human mind is the emergent product of a shift from external percept-based processing to a concept and language-based form of cognition based on patterning. In this article, we describe the evolution of human cognition and culture, describing the unique patterns of human thought and how we, humans, think in terms of patterns.

Open Access Article: Technological Singularity: What Do We Really Know?
Information 2018, 9(4), 82; https://doi.org/10.3390/info9040082
Received: 27 February 2018 / Revised: 25 March 2018 / Accepted: 3 April 2018 / Published: 8 April 2018
Cited by 1
Abstract
The concept of the technological singularity is frequently reified. Futurist forecasts inferred from this imprecise reification are then criticized, and the reified ideas are incorporated in the core concept. In this paper, I try to disentangle the facts related to the technological singularity from more speculative beliefs about the possibility of creating artificial general intelligence. I use the theory of metasystem transitions and the concept of universal evolution to analyze some misconceptions about the technological singularity. While it may be neither purely technological, nor truly singular, we can predict that the next transition will take place, and that the emerged metasystem will demonstrate exponential growth in complexity with a doubling time of less than half a year, exceeding the complexity of the existing cybernetic systems in a few decades.
Open Access Article: Conceptions of Artificial Intelligence and Singularity
Information 2018, 9(4), 79; https://doi.org/10.3390/info9040079
Received: 15 February 2018 / Revised: 1 April 2018 / Accepted: 3 April 2018 / Published: 6 April 2018
Cited by 1
Abstract
In the current discussions about “artificial intelligence” (AI) and “singularity”, both labels are used with several very different senses, and the confusion among these senses is the root of many disagreements. Similarly, although “artificial general intelligence” (AGI) has become a widely used term in the related discussions, many people are not really familiar with this research, including its aim and status. We analyze these notions, and introduce the results of our own AGI research. Our main conclusions are that: (1) it is possible to build a computer system that follows the same laws of thought and shows similar properties as the human mind, but, since such an AGI will have neither a human body nor human experience, it will not behave exactly like a human, nor will it be “smarter than a human” on all tasks; and (2) since the development of an AGI requires a reasonably good understanding of the general mechanism of intelligence, the system’s behaviors will still be understandable and predictable in principle. Therefore, the success of AGI will not necessarily lead to a singularity beyond which the future becomes completely incomprehensible and uncontrollable.

Open Access Article: Cosmic Evolutionary Philosophy and a Dialectical Approach to Technological Singularity
Information 2018, 9(4), 78; https://doi.org/10.3390/info9040078
Received: 13 February 2018 / Revised: 3 April 2018 / Accepted: 4 April 2018 / Published: 5 April 2018
Abstract
The anticipated next stage of human organization is often described by futurists as a global technological singularity. This next stage of complex organization is hypothesized to be actualized by scientific-technic knowledge networks. However, the general consequences of this process for the meaning of human existence are unknown. Here, it is argued that cosmic evolutionary philosophy is a useful worldview for grounding an understanding of the potential nature of this futures event. In the cosmic evolutionary philosophy, reality is conceptualized locally as a universal dynamic of emergent evolving relations. This universal dynamic is structured by a singular astrophysical origin and an organizational progress from sub-atomic particles to global civilization mediated by qualitative phase transitions. From this theoretical ground, we attempt to understand the next stage of universal dynamics in terms of the motion of general ideation attempting to actualize higher unity. In this way, we approach technological singularity dialectically as an event caused by ideational transformations and mediated by an emergent intersubjective objectivity. From these speculations, a historically-engaged perspective on the nature of human consciousness is articulated where the truth of reality as an emergent unity depends on the collective action of a multiplicity of human observers.

Open Access Article: Can Computers Become Conscious, an Essential Condition for the Singularity?
Information 2017, 8(4), 161; https://doi.org/10.3390/info8040161
Received: 12 November 2017 / Revised: 3 December 2017 / Accepted: 6 December 2017 / Published: 9 December 2017
Cited by 3
Abstract
Given that consciousness is an essential ingredient for achieving Singularity, the notion that an Artificial General Intelligence device can exceed the intelligence of a human, namely, the question of whether a computer can achieve consciousness, is explored. Given that consciousness is being aware of one’s perceptions and/or of one’s thoughts, it is claimed that computers cannot experience consciousness. Given that it has no sensorium, it cannot have perceptions. In terms of being aware of its thoughts it is argued that being aware of one’s thoughts is basically listening to one’s own internal speech. A computer has no emotions, and hence, no desire to communicate, and without the ability, and/or desire to communicate, it has no internal voice to listen to and hence cannot be aware of its thoughts. In fact, it has no thoughts, because it has no sense of self and thinking is about preserving one’s self. Emotions have a positive effect on the reasoning powers of humans, and therefore, the computer’s lack of emotions is another reason for why computers could never achieve the level of intelligence that a human can, at least, at the current level of the development of computer technology.
Open Access Feature Paper Article: The Emperor of Strong AI Has No Clothes: Limits to Artificial Intelligence
Information 2017, 8(4), 156; https://doi.org/10.3390/info8040156
Received: 31 October 2017 / Revised: 20 November 2017 / Accepted: 22 November 2017 / Published: 27 November 2017
Cited by 6
Abstract
Making use of the techniques of media ecology we argue that the premise of the technological Singularity, based on the notion that computers will one day be smarter than their human creators, is false. We also analyze the comments of other critics of the Singularity, as well as supporters of this notion. The notion of intelligence that advocates of the technological Singularity promote does not take into account the full dimension of human intelligence. They treat artificial intelligence as a figure without a ground. Human intelligence, as we will show, is not based solely on logical operations and computation, but also includes a long list of other characteristics that are unique to humans, which is the ground that supporters of the Singularity ignore. The list includes curiosity, imagination, intuition, emotions, passion, desires, pleasure, aesthetics, joy, purpose, objectives, goals, telos, values, morality, experience, wisdom, judgment, and even humor.

Review


Open Access Review: AI to Bypass Creativity. Will Robots Replace Journalists? (The Answer Is “Yes”)
Information 2018, 9(7), 183; https://doi.org/10.3390/info9070183
Received: 1 July 2018 / Revised: 17 July 2018 / Accepted: 21 July 2018 / Published: 23 July 2018
Abstract
This paper explores a practical application of a weak, or narrow, artificial intelligence (AI) in the news media. Journalism is a creative human practice. This, according to widespread opinion, makes it harder for robots to replicate. However, writing algorithms are already widely used in the news media to produce articles and thereby replace human journalists. In 2016, Wordsmith, one of the two most powerful news-writing algorithms, wrote and published 1.5 billion news stories. This number is comparable to or may even exceed work written and published by human journalists. Robo-journalists’ skills and competencies are constantly growing. Research has shown that readers sometimes cannot differentiate between news written by robots or by humans; more importantly, readers often make little of such distinctions. Considering this, these forms of AI can be seen as having already passed a kind of Turing test as applied to journalism. The paper provides a review of the current state of robo-journalism; analyses popular arguments about “robots’ incapability” to prevail over humans in creative practices; and offers a foresight of the possible further development of robo-journalism and its collision with organic forms of journalism.

Other


Open Access Commentary: Love, Emotion and the Singularity
Information 2018, 9(9), 221; https://doi.org/10.3390/info9090221
Received: 31 July 2018 / Revised: 23 August 2018 / Accepted: 31 August 2018 / Published: 3 September 2018
Cited by 1
Abstract
Proponents of the singularity hypothesis have argued that there will come a point at which machines will overtake us not only in intelligence but that machines will also have emotional capabilities. However, human cognition is not something that takes place only in the brain; one cannot conceive of human cognition without embodiment. This essay considers the emotional nature of cognition by exploring the most human of emotions—romantic love. By examining the idea of love from an evolutionary and a physiological perspective, the author suggests that in order to account for the full range of human cognition, one must also account for the emotional aspects of cognition. The paper concludes that if there is to be a singularity that transcends human cognition, it must be embodied. As such, the singularity could not be completely non-organic; it must take place in the form of a cyborg, wedding the digital to the biological.
Open Access Commentary: The Singularity May Be Near
Information 2018, 9(8), 190; https://doi.org/10.3390/info9080190
Received: 5 July 2018 / Revised: 24 July 2018 / Accepted: 25 July 2018 / Published: 27 July 2018
Cited by 1
Abstract
Toby Walsh in “The Singularity May Never Be Near” gives six arguments to support his point of view that technological singularity may happen, but that it is unlikely. In this paper, we provide an analysis of each one of his arguments and arrive at similar conclusions, but with more weight given to the “likely to happen” prediction.
Open Access Essay: The Singularity Isn’t Simple! (However We Look at It) A Random Walk between Science Fiction and Science Fact
Information 2018, 9(4), 99; https://doi.org/10.3390/info9040099
Received: 11 April 2018 / Revised: 17 April 2018 / Accepted: 18 April 2018 / Published: 19 April 2018
Cited by 1
Abstract
It seems to be accepted that intelligence—artificial or otherwise—and ‘the singularity’ are inseparable concepts: ‘The singularity’ will apparently arise from AI reaching a, supposedly particular, but actually poorly-defined, level of sophistication; and an empowered combination of hardware and software will take it from there (and take over from us). However, such wisdom and debate are simplistic in a number of ways: firstly, this is a poor definition of the singularity; secondly, it muddles various notions of intelligence; thirdly, competing arguments are rarely based on shared axioms, so are frequently pointless; fourthly, our models for trying to discuss these concepts at all are often inconsistent; and finally, our attempts at describing any ‘post-singularity’ world are almost always limited by anthropomorphism. In all of these respects, professional ‘futurists’ often appear as confused as storytellers who, through freer licence, may conceivably have the clearer view: perhaps then, that becomes a reasonable place to start. There is no attempt in this paper to propose, or evaluate, any research hypothesis; rather simply to challenge conventions. Using examples from science fiction to illustrate various assumptions behind the AI/singularity debate, this essay seeks to encourage discussion on a number of possible futures based on different underlying metaphysical philosophies. Although properly grounded in science, it eventually looks beyond the technology for answers and, ultimately, beyond the Earth itself.
