AI AND THE SINGULARITY: A FALLACY OR A GREAT OPPORTUNITY?

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Information Theory and Methodology".

Deadline for manuscript submissions: closed (30 October 2018) | Viewed by 199583

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors


Prof. Dr. Robert K. Logan
Guest Editor
Department of Physics, University of Toronto, Toronto, ON M5S 1A7, Canada
Interests: media ecology; systems biology; linguistics; AI

Prof. Dr. Adriana Braga
Guest Editor
Department of Social Communication, Pontifícia Universidade Católica do Rio de Janeiro (PUC-RJ), R. Marquês de São Vicente, 225—Gávea, Rio de Janeiro 22451-900, RJ, Brazil
Interests: technology; gender; pragmatism; phenomenology; psychoanalysis; media studies; social interaction; ethnography

Special Issue Information

Dear Colleagues,

We are putting together a Special Issue on the notion of the technological Singularity, the idea that computers will one day be smarter than their human creators. Articles both in favour of and against the Singularity are welcome, but the lead article by the Guest Editors Robert K. Logan and Adriana Braga is quite critical of the notion. Here is the abstract of their paper, The Emperor of Strong AI Has No Clothes: Limits to Artificial Intelligence.

Abstract: We argue that the premise of the technological Singularity, based on the notion that computers will one day be smarter than their human creators, is false, making use of the techniques of media ecology. We also analyze the comments of other critics of the Singularity, as well as those of supporters of this notion. The notion of intelligence that advocates of the technological Singularity promote does not take into account the full dimension of human intelligence. They treat artificial intelligence as a figure without a ground. Human intelligence, as we will show, is not based solely on logical operations and computation, but also includes a long list of other characteristics, unique to humans, which is the ground that supporters of the Singularity ignore. The list includes curiosity, imagination, intuition, emotions, passion, desires, pleasure, aesthetics, joy, purpose, objectives, goals, telos, values, morality, experience, wisdom, judgment, and even humor.

Prof. Dr. Robert K. Logan
Prof. Dr. Adriana Braga
Guest Editors

Reference

We are delighted to have received permission from Daniel Tunkelang to include his brilliant article "10 Things Everyone Should Know About Machine Learning", which appeared on Medium on September 5, 2017 (https://medium.com/@dtunkelang/10-things-everyone-should-know-about-machine-learning-15279c27ce96), in our Special Issue. Daniel Tunkelang describes himself as a high-class consultant "at large for search, discovery, machine learning/AI, and data science." He has worked with Apple, eBay, Karat and Twiggle, among many others. His profile can be found at https://www.linkedin.com/in/dtunkelang/. We thank him for his generosity in granting us permission to include his article in our Special Issue—Adriana Braga and Robert K. Logan.

Here is the article:

10 Things Everyone Should Know About Machine Learning

Author: Daniel Tunkelang

Sep 5, 2017

As someone who often finds himself explaining machine learning to non-experts, I offer the following list as a public service announcement.

  1. Machine learning means learning from data; AI is a buzzword. Machine learning lives up to the hype: there are an incredible number of problems that you can solve by providing the right training data to the right learning algorithms. Call it AI if that helps you sell it, but know that AI, at least as used outside of academia, is often a buzzword that can mean whatever people want it to mean.
  2. Machine learning is about data and algorithms, but mostly data. There’s a lot of excitement about advances in machine learning algorithms, and particularly about deep learning. But data is the key ingredient that makes machine learning possible. You can have machine learning without sophisticated algorithms, but not without good data.
  3. Unless you have a lot of data, you should stick to simple models. Machine learning trains a model from patterns in your data, exploring a space of possible models defined by parameters. If your parameter space is too big, you’ll overfit to your training data and train a model that doesn’t generalize beyond it. A detailed explanation requires more math, but as a rule, you should keep your models as simple as possible.
  4. Machine learning can only be as good as the data you use to train it. The phrase “garbage in, garbage out” predates machine learning, but it aptly characterizes a key limitation of machine learning. Machine learning can only discover patterns that are present in your training data. For supervised machine learning tasks like classification, you’ll need a robust collection of correctly labeled, richly featured training data.
  5. Machine learning only works if your training data is representative. Just as a fund prospectus warns that “past performance is no guarantee of future results”, machine learning should warn that it’s only guaranteed to work for data generated by the same distribution that generated its training data. Be vigilant of skews between training data and production data, and retrain your models frequently so they don’t become stale.
  6. Most of the hard work for machine learning is data transformation. From reading the hype about new machine learning techniques, you might think that machine learning is mostly about selecting and tuning algorithms. The reality is more prosaic: most of your time and effort goes into data cleansing and feature engineering — that is, transforming raw features into features that better represent the signal in your data.
  7. Deep learning is a revolutionary advance, but it isn’t a magic bullet. Deep learning has earned its hype by delivering advances across a broad range of machine learning application areas. Moreover, deep learning automates some of the work traditionally performed through feature engineering, especially for image and video data. But deep learning isn’t a silver bullet. You can’t just use it out of the box, and you’ll still need to invest significant effort in data cleansing and transformation.
  8. Machine learning systems are highly vulnerable to operator error. With apologies to the NRA, “Machine learning algorithms don’t kill people; people kill people.” When machine learning systems fail, it’s rarely because of problems with the machine learning algorithm. More likely, you’ve introduced human error into the training data, creating bias or some other systematic error. Always be skeptical, and approach machine learning with the discipline you apply to software engineering.
  9. Machine learning can inadvertently create a self-fulfilling prophecy. In many applications of machine learning, the decisions you make today affect the training data you collect tomorrow. Once your machine learning system embeds biases into its model, it can continue generating new training data that reinforces those biases. And some biases can ruin people’s lives. Be responsible: don’t create self-fulfilling prophecies.
  10. AI is not going to become self-aware, rise up, and destroy humanity. A surprising number of people (cough) seem to be getting their ideas about artificial intelligence from science fiction movies. We should be inspired by science fiction, but not so credulous that we mistake it for reality. There are enough real and present dangers to worry about, from consciously evil human beings to unconsciously biased machine learning models. So you can stop worrying about SkyNet and “superintelligence”.
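
Item 3's warning about overfitting can be made concrete with a toy sketch in plain Python (ours, not Tunkelang's; the data-generating rule y = 2x + noise and all names are illustrative assumptions). A one-parameter linear model is compared with a "model" that simply memorizes its training points:

```python
import random

random.seed(0)

def make_data(n):
    """Points from y = 2x + Gaussian noise, x uniform on [0, 10]."""
    xs = [random.uniform(0, 10) for _ in range(n)]
    return [(x, 2 * x + random.gauss(0, 1.0)) for x in xs]

train = make_data(20)     # small training set
test = make_data(1000)    # fresh data from the same distribution

# Simple model: one parameter (a slope), fit by least squares through the origin.
slope = sum(x * y for x, y in train) / sum(x * x for x, _ in train)

def simple(x):
    return slope * x

def memorize(x):
    # "Complex" model: predict the y of the nearest training point
    # (effectively one parameter per training example).
    nearest = min(train, key=lambda p: abs(p[0] - x))
    return nearest[1]

def mse(model, data):
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

print(mse(memorize, train), mse(simple, train))  # the memorizer is perfect on its training data
print(mse(memorize, test), mse(simple, test))    # ...but the simple model generalizes better
```

The memorizer's parameter space grows with the data and it fits the training noise exactly; the one-parameter model has lower test error, which is the rule of thumb in item 3.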
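
Item 5's point about representative data can be shown the same way (again a hypothetical sketch, with an invented decision boundary at x = 5): a threshold classifier scores perfectly on training data drawn from a narrow range, then degrades on production data from a wider one:

```python
import random

random.seed(1)

def true_label(x):
    return int(x > 5)  # the real decision boundary is at x = 5

# Training data only covers x in [0, 4] -- every example is negative.
train = [(x, true_label(x)) for x in (random.uniform(0, 4) for _ in range(200))]
# Production data spans the full range [0, 10].
prod = [(x, true_label(x)) for x in (random.uniform(0, 10) for _ in range(200))]

def accuracy(threshold, data):
    return sum((x > threshold) == bool(y) for x, y in data) / len(data)

# "Learn" the threshold that maximizes training accuracy.
learned = max((x for x, _ in train), key=lambda t: accuracy(t, train))

train_acc = accuracy(learned, train)  # perfect on the skewed training set
prod_acc = accuracy(learned, prod)    # noticeably worse in production
print(learned, train_acc, prod_acc)
```

Nothing about the algorithm changed between the two evaluations; only the data distribution did, which is exactly the skew item 5 warns about.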
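
And item 6's claim about feature engineering appears in miniature below (a toy illustration, not from the article): a one-parameter linear fit on a raw feature x cannot capture a y = x² signal, but fitting on the engineered feature x·x recovers it exactly:

```python
# Raw feature: x. Target: y = x^2. A linear fit on x alone finds no signal.
xs = [float(x) for x in range(-5, 6)]
ys = [x * x for x in xs]

def fit_slope(features, targets):
    """One-parameter least squares through the origin: y ~ w * f."""
    return sum(f * y for f, y in zip(features, targets)) / sum(f * f for f in features)

def mse(w, features, targets):
    return sum((w * f - y) ** 2 for f, y in zip(features, targets)) / len(targets)

engineered = [x * x for x in xs]   # the transformed feature exposes the signal

w_raw = fit_slope(xs, ys)          # 0.0: positive and negative x cancel out
w_eng = fit_slope(engineered, ys)  # 1.0: y is exactly 1 * (x^2)

print(mse(w_raw, xs, ys))          # large error on the raw feature
print(mse(w_eng, engineered, ys))  # zero error on the engineered feature
```

The learning algorithm is identical in both fits; transforming the raw feature into one that better represents the signal does all the work, which is Tunkelang's point.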

There’s far more to machine learning than I can explain in a top-10 list. But hopefully, this serves as a useful introduction for non-experts.

Submission

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. Papers will be published continuously (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are refereed through a peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed Open Access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. English correction and/or formatting fees of 250 CHF (Swiss Francs) will be charged in certain cases for those articles accepted for publication that require extensive additional formatting and/or English corrections.

Keywords

  • singularity

  • computers

  • information processing

  • AI (artificial intelligence)

  • human intelligence

  • curiosity

  • imagination

  • intuition

  • emotions

  • goals

  • values

  • wisdom

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (19 papers)

Article
The Emperor of Strong AI Has No Clothes: Limits to Artificial Intelligence
by Adriana Braga and Robert K. Logan
Information 2017, 8(4), 156; https://doi.org/10.3390/info8040156 - 27 Nov 2017
Cited by 60 | Viewed by 24174
Abstract
Making use of the techniques of media ecology, we argue that the premise of the technological Singularity, based on the notion that computers will one day be smarter than their human creators, is false. We also analyze the comments of other critics of the Singularity, as well as those of supporters of this notion. The notion of intelligence that advocates of the technological singularity promote does not take into account the full dimension of human intelligence. They treat artificial intelligence as a figure without a ground. Human intelligence, as we will show, is not based solely on logical operations and computation, but also includes a long list of other characteristics that are unique to humans, which is the ground that supporters of the Singularity ignore. The list includes curiosity, imagination, intuition, emotions, passion, desires, pleasure, aesthetics, joy, purpose, objectives, goals, telos, values, morality, experience, wisdom, judgment, and even humor.
(This article belongs to the Special Issue AI AND THE SINGULARITY: A FALLACY OR A GREAT OPPORTUNITY?)
18 pages
Article
Countering Superintelligence Misinformation
by Seth D. Baum
Information 2018, 9(10), 244; https://doi.org/10.3390/info9100244 - 30 Sep 2018
Cited by 11 | Viewed by 9851
Abstract
Superintelligence is a potential type of future artificial intelligence (AI) that is significantly more intelligent than humans in all major respects. If built, superintelligence could be a transformative event, with potential consequences that are massively beneficial or catastrophic. Meanwhile, the prospect of superintelligence is the subject of major ongoing debate, which includes a significant amount of misinformation. Superintelligence misinformation is potentially dangerous, ultimately leading to bad decisions by the would-be developers of superintelligence and those who influence them. This paper surveys strategies to counter superintelligence misinformation. Two types of strategies are examined: strategies to prevent the spread of superintelligence misinformation and strategies to correct it after it has spread. In general, misinformation can be difficult to correct, suggesting a high value of strategies to prevent it. This paper is the first extended study of superintelligence misinformation. It draws heavily on the study of misinformation in psychology, political science, and related fields, especially misinformation about global warming. The strategies proposed can be applied to lay public attention to superintelligence, AI education programs, and efforts to build expert consensus.
16 pages
Article
Superintelligence Skepticism as a Political Tool
by Seth D. Baum
Information 2018, 9(9), 209; https://doi.org/10.3390/info9090209 - 22 Aug 2018
Cited by 18 | Viewed by 9896
Abstract
This paper explores the potential for skepticism about artificial superintelligence to be used as a tool for political ends. Superintelligence is AI that is much smarter than humans. Superintelligence does not currently exist, but it has been proposed that it could someday be built, with massive and potentially catastrophic consequences. There is substantial skepticism about superintelligence, including whether it will be built, whether it would be catastrophic, and whether it is worth current attention. To date, superintelligence skepticism appears to be mostly honest intellectual debate, though some of it may be politicized. This paper finds substantial potential for superintelligence skepticism to be (further) politicized, due mainly to the potential for major corporations to have a strong profit motive to downplay concerns about superintelligence and avoid government regulation. Furthermore, politicized superintelligence skepticism is likely to be quite successful, due to several factors including the inherent uncertainty of the topic and the abundance of skeptics. The paper’s analysis is based on characteristics of superintelligence and the broader AI sector, as well as the history and ongoing practice of politicized skepticism on other science and technology issues, including tobacco, global warming, and industrial chemicals. The paper contributes to literatures on politicized skepticism and superintelligence governance.
4 pages
Editorial
AI and the Singularity: A Fallacy or a Great Opportunity?
by Adriana Braga and Robert K. Logan
Information 2019, 10(2), 73; https://doi.org/10.3390/info10020073 - 21 Feb 2019
Cited by 7 | Viewed by 6362
Abstract
We address the question of whether AI, and in particular the Singularity—the notion that AI-based computers can exceed human intelligence—is a fallacy or a great opportunity. We have invited a group of scholars to address this question, whose positions on the Singularity range from advocates to skeptics. No conclusion can be reached as the development of artificial intelligence is still in its infancy, and there is much wishful thinking and imagination in this issue rather than trustworthy data. The reader will find a cogent summary of the issues faced by researchers who are working to develop the field of artificial intelligence and in particular artificial general intelligence. The only conclusion that can be reached is that there exists a variety of well-argued positions as to where AI research is headed.
18 pages
Essay
The Singularity Isn’t Simple! (However We Look at It) A Random Walk between Science Fiction and Science Fact
by Vic Grout
Information 2018, 9(4), 99; https://doi.org/10.3390/info9040099 - 19 Apr 2018
Cited by 3 | Viewed by 10444
Abstract
It seems to be accepted that intelligence—artificial or otherwise—and ‘the singularity’ are inseparable concepts: ‘The singularity’ will apparently arise from AI reaching a, supposedly particular, but actually poorly-defined, level of sophistication; and an empowered combination of hardware and software will take it from there (and take over from us). However, such wisdom and debate are simplistic in a number of ways: firstly, this is a poor definition of the singularity; secondly, it muddles various notions of intelligence; thirdly, competing arguments are rarely based on shared axioms, so are frequently pointless; fourthly, our models for trying to discuss these concepts at all are often inconsistent; and finally, our attempts at describing any ‘post-singularity’ world are almost always limited by anthropomorphism. In all of these respects, professional ‘futurists’ often appear as confused as storytellers who, through freer licence, may conceivably have the clearer view: perhaps then, that becomes a reasonable place to start. There is no attempt in this paper to propose, or evaluate, any research hypothesis; rather simply to challenge conventions. Using examples from science fiction to illustrate various assumptions behind the AI/singularity debate, this essay seeks to encourage discussion on a number of possible futures based on different underlying metaphysical philosophies. Although properly grounded in science, it eventually looks beyond the technology for answers and, ultimately, beyond the Earth itself.
18 pages
Article
Pareidolic and Uncomplex Technological Singularity
by Viorel Guliciuc
Information 2018, 9(12), 309; https://doi.org/10.3390/info9120309 - 6 Dec 2018
Cited by 1 | Viewed by 5650
Abstract
“Technological Singularity” (TS), “Accelerated Change” (AC), and Artificial General Intelligence (AGI) are frequent future/foresight studies’ themes. Rejecting the reductionist perspective on the evolution of science and technology, and based on patternicity (“the tendency to find patterns in meaningless noise”), a discussion about the perverse power of apophenia (“the tendency to perceive a connection or meaningful pattern between unrelated or random things (such as objects or ideas)”) and pareidolia (“the tendency to perceive a specific, often meaningful image in a random or ambiguous visual pattern”) in those studies is the starting point for two claims: the “accelerated change” is a future-related apophenia case, whereas AGI (and TS) are future-related pareidolia cases. A short presentation of research-focused social networks working to solve complex problems reveals the superiority of human networked minds over the hardware‒software systems and suggests the opportunity for a network-based study of TS (and AGI) from a complexity perspective. It could compensate for the weaknesses of approaches deployed from a linear and predictable perspective, in order to try to redesign our intelligent artifacts.
28 pages
Article
Cosmic Evolutionary Philosophy and a Dialectical Approach to Technological Singularity
by Cadell Last
Information 2018, 9(4), 78; https://doi.org/10.3390/info9040078 - 5 Apr 2018
Cited by 8 | Viewed by 10726
Abstract
The anticipated next stage of human organization is often described by futurists as a global technological singularity. This next stage of complex organization is hypothesized to be actualized by scientific-technic knowledge networks. However, the general consequences of this process for the meaning of human existence are unknown. Here, it is argued that cosmic evolutionary philosophy is a useful worldview for grounding an understanding of the potential nature of this futures event. In the cosmic evolutionary philosophy, reality is conceptualized locally as a universal dynamic of emergent evolving relations. This universal dynamic is structured by a singular astrophysical origin and an organizational progress from sub-atomic particles to global civilization mediated by qualitative phase transitions. From this theoretical ground, we attempt to understand the next stage of universal dynamics in terms of the motion of general ideation attempting to actualize higher unity. In this way, we approach technological singularity dialectically as an event caused by ideational transformations and mediated by an emergent intersubjective objectivity. From these speculations, a historically-engaged perspective on the nature of human consciousness is articulated where the truth of reality as an emergent unity depends on the collective action of a multiplicity of human observers.
Article
Can Computers Become Conscious, an Essential Condition for the Singularity?
by Robert K. Logan
Information 2017, 8(4), 161; https://doi.org/10.3390/info8040161 - 9 Dec 2017
Cited by 11 | Viewed by 7199
Abstract
Given that consciousness is an essential ingredient for achieving Singularity, the notion that an Artificial General Intelligence device can exceed the intelligence of a human, namely, the question of whether a computer can achieve consciousness, is explored. Given that consciousness is being aware of one’s perceptions and/or of one’s thoughts, it is claimed that computers cannot experience consciousness. Given that a computer has no sensorium, it cannot have perceptions. In terms of being aware of its thoughts, it is argued that being aware of one’s thoughts is basically listening to one’s own internal speech. A computer has no emotions, and hence, no desire to communicate, and without the ability, and/or desire to communicate, it has no internal voice to listen to and hence cannot be aware of its thoughts. In fact, it has no thoughts, because it has no sense of self and thinking is about preserving one’s self. Emotions have a positive effect on the reasoning powers of humans, and therefore, the computer’s lack of emotions is another reason why computers could never achieve the level of intelligence that a human can, at least, at the current level of the development of computer technology.
6 pages
Essay
The Universality of Experiential Consciousness
by Robert K. Logan
Information 2019, 10(1), 31; https://doi.org/10.3390/info10010031 - 17 Jan 2019
Cited by 1 | Viewed by 3181
Abstract
It is argued that of Block’s (On a confusion about a function of consciousness, 1995; The Nature of Consciousness: Philosophical Debates, 1997) two types of consciousness, namely phenomenal consciousness (p-consciousness) and access consciousness (a-consciousness), p-consciousness applies to all living things but a-consciousness is uniquely human. This differs from Block’s assertion that a-consciousness also applies to some non-human organisms. It is suggested that p-consciousness, awareness, experience and perception are basically equivalent and that human consciousness has, in addition to percept-based p-consciousness, concept-based a-consciousness, a verbal and conceptual form of consciousness that can be utilized to coordinate, organize and plan activities for rational decision-making. This argument is based on Logan’s (The Extended Mind: The Emergence of Language, The Human Mind and Culture, 1997) assertion that humans are uniquely capable of reasoning and rationality because they are uniquely capable of verbal language and hence the ability to conceptualize.
15 pages
Article
Thinking in Patterns and the Pattern of Human Thought as Contrasted with AI Data Processing
by Robert K. Logan and Marlie Tandoc
Information 2018, 9(4), 83; https://doi.org/10.3390/info9040083 - 8 Apr 2018
Cited by 10 | Viewed by 11801
Abstract
We propose that the ability of humans to identify and create patterns led to the unique aspects of human cognition and culture as a complex emergent dynamic system consisting of the following human traits: patterning, social organization beyond that of the nuclear family that emerged with the control of fire, rudimentary set theory or categorization and spoken language that co-emerged, the ability to deal with information overload, conceptualization, imagination, abductive reasoning, invention, art, religion, mathematics and science. These traits are interrelated as they all involve the ability to flexibly manipulate information from our environments via pattern restructuring. We argue that the human mind is the emergent product of a shift from external percept-based processing to a concept and language-based form of cognition based on patterning. In this article, we describe the evolution of human cognition and culture, describing the unique patterns of human thought and how we, humans, think in terms of patterns.
10 pages
Commentary
Love, Emotion and the Singularity
by Brett Lunceford
Information 2018, 9(9), 221; https://doi.org/10.3390/info9090221 - 3 Sep 2018
Cited by 4 | Viewed by 5522
Abstract
Proponents of the singularity hypothesis have argued that there will come a point at which machines will overtake us not only in intelligence but that machines will also have emotional capabilities. However, human cognition is not something that takes place only in the brain; one cannot conceive of human cognition without embodiment. This essay considers the emotional nature of cognition by exploring the most human of emotions—romantic love. By examining the idea of love from an evolutionary and a physiological perspective, the author suggests that in order to account for the full range of human cognition, one must also account for the emotional aspects of cognition. The paper concludes that if there is to be a singularity that transcends human cognition, it must be embodied. As such, the singularity could not be completely non-organic; it must take place in the form of a cyborg, wedding the digital to the biological.
20 pages
Review
AI to Bypass Creativity. Will Robots Replace Journalists? (The Answer Is “Yes”)
by Andrey Miroshnichenko
Information 2018, 9(7), 183; https://doi.org/10.3390/info9070183 - 23 Jul 2018
Cited by 27 | Viewed by 21222
Abstract
This paper explores a practical application of a weak, or narrow, artificial intelligence (AI) in the news media. Journalism is a creative human practice. This, according to widespread opinion, makes it harder for robots to replicate. However, writing algorithms are already widely used in the news media to produce articles and thereby replace human journalists. In 2016, Wordsmith, one of the two most powerful news-writing algorithms, wrote and published 1.5 billion news stories. This number is comparable to or may even exceed work written and published by human journalists. Robo-journalists’ skills and competencies are constantly growing. Research has shown that readers sometimes cannot differentiate between news written by robots or by humans; more importantly, readers often make little of such distinctions. Considering this, these forms of AI can be seen as having already passed a kind of Turing test as applied to journalism. The paper provides a review of the current state of robo-journalism; analyses popular arguments about “robots’ incapability” to prevail over humans in creative practices; and offers a foresight of the possible further development of robo-journalism and its collision with organic forms of journalism.
3 pages, 175 KiB  
Opinion
Artificial Intelligence Hits the Barrier of Meaning
by Melanie Mitchell
Information 2019, 10(2), 51; https://doi.org/10.3390/info10020051 - 5 Feb 2019
Cited by 47 | Viewed by 9897
Abstract
Today’s AI systems sorely lack the essence of human intelligence: understanding the situations we experience and being able to grasp their meaning. The lack of humanlike understanding in machines is underscored by recent studies demonstrating the lack of robustness of state-of-the-art deep-learning systems. Deeper networks and larger datasets alone are not likely to break through AI’s “barrier of meaning”; instead, the field will need to embrace its original roots as an interdisciplinary science of intelligence.
9 pages, 204 KiB  
Article
Technological Singularity: What Do We Really Know?
by Alexey Potapov
Information 2018, 9(4), 82; https://doi.org/10.3390/info9040082 - 8 Apr 2018
Cited by 18 | Viewed by 13217
Abstract
The concept of the technological singularity is frequently reified. Futurist forecasts inferred from this imprecise reification are then criticized, and the reified ideas are incorporated in the core concept. In this paper, I try to disentangle the facts related to the technological singularity from more speculative beliefs about the possibility of creating artificial general intelligence. I use the theory of metasystem transitions and the concept of universal evolution to analyze some misconceptions about the technological singularity. While it may be neither purely technological, nor truly singular, we can predict that the next transition will take place, and that the emerged metasystem will demonstrate exponential growth in complexity with a doubling time of less than half a year, exceeding the complexity of existing cybernetic systems within a few decades.
19 pages, 296 KiB  
Review
From Homo Sapiens to Robo Sapiens: The Evolution of Intelligence
by Anat Ringel Raveh and Boaz Tamir
Information 2019, 10(1), 2; https://doi.org/10.3390/info10010002 - 21 Dec 2018
Cited by 4 | Viewed by 6533
Abstract
In this paper, we present a review of recent developments in artificial intelligence (AI) towards the possibility of an artificial intelligence equal to that of human intelligence. AI technology has always shown a stepwise increase in its capacity and complexity. The last step took place several years ago with the rapid progress in deep neural network technology. Each such step goes hand in hand with our understanding of ourselves and our understanding of human cognition. Indeed, AI has always been about the question of understanding human nature. AI percolates into our lives, changing our environment. We believe that the next few steps in AI technology, and in our understanding of human behavior, will bring about much more powerful machines that are flexible enough to resemble human behavior. In this context, there are two research fields: Artificial Social Intelligence (ASI) and Artificial General Intelligence (AGI). The authors also allude to one of the main challenges for AI, embodied cognition, and explain how it can be viewed as an opportunity for further progress in AI research.
13 pages, 608 KiB  
Article
When Robots Get Bored and Invent Team Sports: A More Suitable Test than the Turing Test?
by Hugh Trenchard
Information 2018, 9(5), 118; https://doi.org/10.3390/info9050118 - 11 May 2018
Viewed by 7152
Abstract
Increasingly, the Turing test—which is used to show that artificial intelligence has achieved human-level intelligence—is being regarded as an insufficient indicator of human-level intelligence. This essay extends arguments that embodied intelligence is required for human-level intelligence, and proposes a more suitable test for determining human-level intelligence: the invention of team sports by humanoid robots. The test is preferred because team sport activity is easily identified, uniquely human, and is suggested to emerge in basic, controllable conditions. To expect humanoid robots to self-organize, or invent, team sport as a function of human-level artificial intelligence, the following necessary conditions are proposed: humanoid robots must have the capacity to participate in cooperative-competitive interactions, instilled by algorithms for resource acquisition; they must possess or acquire sufficient stores of energetic resources that permit leisure time, thus reducing competition for scarce resources and increasing cooperative tendencies; and they must possess a heterogeneous range of energetic capacities. When present, these factors allow robot collectives to spontaneously invent team sport activities and thereby demonstrate one fundamental indicator of human-level intelligence.
19 pages, 750 KiB  
Article
Artificial Intelligence and the Limitations of Information
by Paul Walton
Information 2018, 9(12), 332; https://doi.org/10.3390/info9120332 - 19 Dec 2018
Cited by 12 | Viewed by 9482
Abstract
Artificial intelligence (AI) and machine learning promise to make major changes to the relationship of people and organizations with technology and information. However, as with any form of information processing, they are subject to the limitations of information linked to the way in which information evolves in information ecosystems. These limitations are caused by the combinatorial challenges associated with information processing, and by the tradeoffs driven by selection pressures. Analysis of the limitations explains some current difficulties with AI and machine learning and identifies the principles required to resolve the limitations when implementing AI and machine learning in organizations. Applying the same type of analysis to artificial general intelligence (AGI) highlights some key theoretical difficulties and gives some indications about the challenges of resolving them.
15 pages, 785 KiB  
Article
Conceptions of Artificial Intelligence and Singularity
by Pei Wang, Kai Liu and Quinn Dougherty
Information 2018, 9(4), 79; https://doi.org/10.3390/info9040079 - 6 Apr 2018
Cited by 16 | Viewed by 16045
Abstract
In the current discussions about “artificial intelligence” (AI) and “singularity”, both labels are used with several very different senses, and the confusion among these senses is the root of many disagreements. Similarly, although “artificial general intelligence” (AGI) has become a widely used term in the related discussions, many people are not really familiar with this research, including its aim and status. We analyze these notions, and introduce the results of our own AGI research. Our main conclusions are that: (1) it is possible to build a computer system that follows the same laws of thought and shows similar properties as the human mind, but, since such an AGI will have neither a human body nor human experience, it will not behave exactly like a human, nor will it be “smarter than a human” on all tasks; and (2) since the development of an AGI requires a reasonably good understanding of the general mechanism of intelligence, the system’s behaviors will still be understandable and predictable in principle. Therefore, the success of AGI will not necessarily lead to a singularity beyond which the future becomes completely incomprehensible and uncontrollable.
6 pages, 181 KiB  
Commentary
The Singularity May Be Near
by Roman V. Yampolskiy
Information 2018, 9(8), 190; https://doi.org/10.3390/info9080190 - 27 Jul 2018
Cited by 12 | Viewed by 5948
Abstract
Toby Walsh, in “The Singularity May Never Be Near”, gives six arguments to support his view that the technological singularity may happen, but that it is unlikely. In this paper, we provide an analysis of each of his arguments and arrive at similar conclusions, but with more weight given to the “likely to happen” prediction.