Special Issue "AI AND THE SINGULARITY: A FALLACY OR A GREAT OPPORTUNITY?"

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Information Theory and Methodology".

Deadline for manuscript submissions: closed (30 October 2018)

Special Issue Editors

Guest Editor
Prof. Dr. Robert K. Logan

Department of Physics, University of Toronto, 60 St. George, Toronto, ON M5S 1A7, Canada
Interests: media ecology; systems biology; linguistics; AI
Guest Editor
Prof. Dr. Adriana Braga

Department of Social Communication, Pontifícia Universidade Católica do Rio de Janeiro (PUC-RJ), R. Marquês de São Vicente, 225—Gávea, Rio de Janeiro 22451-900, RJ, Brazil
Interests: technology; gender; pragmatism; phenomenology; psychoanalysis; media studies; social interaction; ethnography

Special Issue Information

Dear Colleagues,

We are putting together a Special Issue on the notion of the technological Singularity, the idea that computers will one day be smarter than their human creators. Articles both in favour of and against the Singularity are welcome, but the lead article by the Guest Editors Bob Logan and Adriana Braga is quite critical of the notion. Here is the abstract of their paper, “The Emperor of Strong AI Has No Clothes: Limits to Artificial Intelligence”.

Abstract: We argue that the premise of the technological Singularity, based on the notion that computers will one day be smarter than their human creators, is false, making use of the techniques of media ecology. We also analyze the comments of other critics of the Singularity, as well as those of supporters of this notion. The notion of intelligence that advocates of the technological Singularity promote does not take into account the full dimension of human intelligence. They treat artificial intelligence as a figure without a ground. Human intelligence, as we will show, is not based solely on logical operations and computation, but also includes a long list of other characteristics, unique to humans, which is the ground that supporters of the Singularity ignore. The list includes curiosity, imagination, intuition, emotions, passion, desires, pleasure, aesthetics, joy, purpose, objectives, goals, telos, values, morality, experience, wisdom, judgment, and even humor.

Prof. Dr. Robert K. Logan
Prof. Dr. Adriana Braga
Guest Editors

Reference

We are delighted to have received permission from Daniel Tunkelang to include his brilliant article “10 Things Everyone Should Know About Machine Learning”, which appeared on Medium on September 5, 2017 (https://medium.com/@dtunkelang/10-things-everyone-should-know-about-machine-learning-15279c27ce96), in our Special Issue. Daniel Tunkelang describes himself as a high-class consultant “at large for search, discovery, machine learning/AI, and data science.” He has worked with Apple, eBay, Karat and Twiggle, among many others. His profile can be found at https://www.linkedin.com/in/dtunkelang/. We thank him for his generosity in granting us permission to include his article in our Special Issue—Adriana Braga and Robert K. Logan.

Here is the article:

10 Things Everyone Should Know About Machine Learning

Author: Daniel Tunkelang

Sep 5, 2017

As someone who often finds himself explaining machine learning to non-experts, I offer the following list as a public service announcement.

  1. Machine learning means learning from data; AI is a buzzword. Machine learning lives up to the hype: there are an incredible number of problems that you can solve by providing the right training data to the right learning algorithms. Call it AI if that helps you sell it, but know that AI, at least as used outside of academia, is often a buzzword that can mean whatever people want it to mean.
  2. Machine learning is about data and algorithms, but mostly data. There’s a lot of excitement about advances in machine learning algorithms, and particularly about deep learning. But data is the key ingredient that makes machine learning possible. You can have machine learning without sophisticated algorithms, but not without good data.
  3. Unless you have a lot of data, you should stick to simple models. Machine learning trains a model from patterns in your data, exploring a space of possible models defined by parameters. If your parameter space is too big, you’ll overfit to your training data and train a model that doesn’t generalize beyond it. A detailed explanation requires more math, but as a rule, you should keep your models as simple as possible.
  4. Machine learning can only be as good as the data you use to train it. The phrase “garbage in, garbage out” predates machine learning, but it aptly characterizes a key limitation of machine learning. Machine learning can only discover patterns that are present in your training data. For supervised machine learning tasks like classification, you’ll need a robust collection of correctly labeled, richly featured training data.
  5. Machine learning only works if your training data is representative. Just as a fund prospectus warns that “past performance is no guarantee of future results”, machine learning should warn that it’s only guaranteed to work for data generated by the same distribution that generated its training data. Be vigilant of skews between training data and production data, and retrain your models frequently so they don’t become stale.
  6. Most of the hard work for machine learning is data transformation. From reading the hype about new machine learning techniques, you might think that machine learning is mostly about selecting and tuning algorithms. The reality is more prosaic: most of your time and effort goes into data cleansing and feature engineering — that is, transforming raw features into features that better represent the signal in your data.
  7. Deep learning is a revolutionary advance, but it isn’t a magic bullet. Deep learning has earned its hype by delivering advances across a broad range of machine learning application areas. Moreover, deep learning automates some of the work traditionally performed through feature engineering, especially for image and video data. But deep learning isn’t a silver bullet. You can’t just use it out of the box, and you’ll still need to invest significant effort in data cleansing and transformation.
  8. Machine learning systems are highly vulnerable to operator error. With apologies to the NRA, “Machine learning algorithms don’t kill people; people kill people.” When machine learning systems fail, it’s rarely because of problems with the machine learning algorithm. More likely, you’ve introduced human error into the training data, creating bias or some other systematic error. Always be skeptical, and approach machine learning with the discipline you apply to software engineering.
  9. Machine learning can inadvertently create a self-fulfilling prophecy. In many applications of machine learning, the decisions you make today affect the training data you collect tomorrow. Once your machine learning system embeds biases into its model, it can continue generating new training data that reinforces those biases. And some biases can ruin people’s lives. Be responsible: don’t create self-fulfilling prophecies.
  10. AI is not going to become self-aware, rise up, and destroy humanity. A surprising number of people (cough) seem to be getting their ideas about artificial intelligence from science fiction movies. We should be inspired by science fiction, but not so credulous that we mistake it for reality. There are enough real and present dangers to worry about, from consciously evil human beings to unconsciously biased machine learning models. So you can stop worrying about SkyNet and “superintelligence”.
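Point 3 in the list above (prefer simple models unless you have a lot of data) can be made concrete with a small sketch. The following toy example is not from Tunkelang's article; it is an illustrative comparison, in plain Python, between a one-parameter model and a model that simply memorizes its training set. The memorizing model achieves zero error on the training data yet does worse on fresh test data drawn from the same distribution:

```python
import random

random.seed(0)

def make_data(n, noise=1.0):
    # toy regression task: the true relationship is y = 2x plus Gaussian noise
    return [(x, 2 * x + random.gauss(0, noise))
            for x in (random.uniform(0, 10) for _ in range(n))]

train = make_data(10)    # deliberately small training set
test = make_data(200)    # fresh data from the same distribution

# simple model: a single parameter (a slope through the origin), fit by least squares
slope = sum(x * y for x, y in train) / sum(x * x for x, _ in train)

def simple(x):
    return slope * x

def memorize(x):
    # "complex" model: predict the y of the nearest training x (1-nearest-neighbour),
    # i.e., memorize the training set, noise and all
    return min(train, key=lambda p: abs(p[0] - x))[1]

def mse(model, data):
    # mean squared error of a model over a dataset
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

print(f"simple   train={mse(simple, train):.2f}  test={mse(simple, test):.2f}")
print(f"memorize train={mse(memorize, train):.2f}  test={mse(memorize, test):.2f}")
```

The memorizing model's training error is exactly zero (every training point is its own nearest neighbour), but it has fit the noise, so its test error exceeds that of the one-parameter model. That gap is overfitting in miniature.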

There’s far more to machine learning than I can explain in a top-10 list. But hopefully, this serves as a useful introduction for non-experts.
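Point 5 (training data must be representative of production data) can likewise be shown with a deliberately trivial sketch, again mine rather than Tunkelang's. The "model" here just predicts the majority label seen during training; it looks fine on data from the training distribution and degrades badly once the production distribution drifts:

```python
from collections import Counter

def train_majority(labels):
    # the simplest possible classifier: always predict the most common training label
    return Counter(labels).most_common(1)[0][0]

def accuracy(prediction, labels):
    # fraction of examples whose label matches the constant prediction
    return sum(1 for y in labels if y == prediction) / len(labels)

train_labels = ["ham"] * 90 + ["spam"] * 10   # training distribution: mostly ham
prod_labels  = ["ham"] * 30 + ["spam"] * 70   # production has drifted: mostly spam

model = train_majority(train_labels)          # learns to predict "ham"
print(accuracy(model, train_labels))          # 0.9
print(accuracy(model, prod_labels))           # 0.3
```

Nothing about the model changed between the two evaluations; only the data distribution did. This is why monitoring for skew between training and production data, and retraining regularly, matters.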

Submission

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. Papers will be published continuously (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are refereed through a peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed Open Access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. English correction and/or formatting fees of 250 CHF (Swiss Francs) will be charged in certain cases for those articles accepted for publication that require extensive additional formatting and/or English corrections.

Keywords

  • singularity

  • computers

  • information processing

  • AI (artificial intelligence)

  • human intelligence

  • curiosity

  • imagination

  • intuition

  • emotions

  • goals

  • values

  • wisdom

Published Papers (18 papers)

Open Access Feature Paper Article: The Emperor of Strong AI Has No Clothes: Limits to Artificial Intelligence
Information 2017, 8(4), 156; https://doi.org/10.3390/info8040156
Received: 31 October 2017 / Revised: 20 November 2017 / Accepted: 22 November 2017 / Published: 27 November 2017
Cited by 6 | PDF Full-text (266 KB) | HTML Full-text | XML Full-text
Abstract
Making use of the techniques of media ecology, we argue that the premise of the technological Singularity, based on the notion that computers will one day be smarter than their human creators, is false. We also analyze the comments of other critics of the Singularity, as well as those of supporters of this notion. The notion of intelligence that advocates of the technological Singularity promote does not take into account the full dimension of human intelligence. They treat artificial intelligence as a figure without a ground. Human intelligence, as we will show, is not based solely on logical operations and computation, but also includes a long list of other characteristics that are unique to humans, which is the ground that supporters of the Singularity ignore. The list includes curiosity, imagination, intuition, emotions, passion, desires, pleasure, aesthetics, joy, purpose, objectives, goals, telos, values, morality, experience, wisdom, judgment, and even humor.
(This article belongs to the Special Issue AI AND THE SINGULARITY: A FALLACY OR A GREAT OPPORTUNITY?)
Open Access Feature Paper Article: Countering Superintelligence Misinformation
Information 2018, 9(10), 244; https://doi.org/10.3390/info9100244
Received: 9 September 2018 / Revised: 25 September 2018 / Accepted: 26 September 2018 / Published: 30 September 2018
Cited by 1 | PDF Full-text (254 KB) | HTML Full-text | XML Full-text
Abstract
Superintelligence is a potential type of future artificial intelligence (AI) that is significantly more intelligent than humans in all major respects. If built, superintelligence could be a transformative event, with potential consequences that are massively beneficial or catastrophic. Meanwhile, the prospect of superintelligence is the subject of major ongoing debate, which includes a significant amount of misinformation. Superintelligence misinformation is potentially dangerous, ultimately leading to bad decisions by the would-be developers of superintelligence and those who influence them. This paper surveys strategies to counter superintelligence misinformation. Two types of strategies are examined: strategies to prevent the spread of superintelligence misinformation and strategies to correct it after it has spread. In general, misinformation can be difficult to correct, suggesting a high value of strategies to prevent it. This paper is the first extended study of superintelligence misinformation. It draws heavily on the study of misinformation in psychology, political science, and related fields, especially misinformation about global warming. The strategies proposed can be applied to lay public attention to superintelligence, AI education programs, and efforts to build expert consensus.
Open Access Article: Superintelligence Skepticism as a Political Tool
Information 2018, 9(9), 209; https://doi.org/10.3390/info9090209
Received: 27 June 2018 / Revised: 14 August 2018 / Accepted: 17 August 2018 / Published: 22 August 2018
Cited by 1 | PDF Full-text (250 KB) | HTML Full-text | XML Full-text
Abstract
This paper explores the potential for skepticism about artificial superintelligence to be used as a tool for political ends. Superintelligence is AI that is much smarter than humans. Superintelligence does not currently exist, but it has been proposed that it could someday be built, with massive and potentially catastrophic consequences. There is substantial skepticism about superintelligence, including whether it will be built, whether it would be catastrophic, and whether it is worth current attention. To date, superintelligence skepticism appears to be mostly honest intellectual debate, though some of it may be politicized. This paper finds substantial potential for superintelligence skepticism to be (further) politicized, due mainly to the potential for major corporations to have a strong profit motive to downplay concerns about superintelligence and avoid government regulation. Furthermore, politicized superintelligence skepticism is likely to be quite successful, due to several factors including the inherent uncertainty of the topic and the abundance of skeptics. The paper’s analysis is based on characteristics of superintelligence and the broader AI sector, as well as the history and ongoing practice of politicized skepticism on other science and technology issues, including tobacco, global warming, and industrial chemicals. The paper contributes to literatures on politicized skepticism and superintelligence governance.
Open Access Essay: The Singularity Isn’t Simple! (However We Look at It) A Random Walk between Science Fiction and Science Fact
Information 2018, 9(4), 99; https://doi.org/10.3390/info9040099
Received: 11 April 2018 / Revised: 17 April 2018 / Accepted: 18 April 2018 / Published: 19 April 2018
Cited by 1 | PDF Full-text (542 KB) | HTML Full-text | XML Full-text
Abstract
It seems to be accepted that intelligence, artificial or otherwise, and ‘the singularity’ are inseparable concepts: ‘The singularity’ will apparently arise from AI reaching a, supposedly particular, but actually poorly-defined, level of sophistication; and an empowered combination of hardware and software will take it from there (and take over from us). However, such wisdom and debate are simplistic in a number of ways: firstly, this is a poor definition of the singularity; secondly, it muddles various notions of intelligence; thirdly, competing arguments are rarely based on shared axioms, so are frequently pointless; fourthly, our models for trying to discuss these concepts at all are often inconsistent; and finally, our attempts at describing any ‘post-singularity’ world are almost always limited by anthropomorphism. In all of these respects, professional ‘futurists’ often appear as confused as storytellers who, through freer licence, may conceivably have the clearer view: perhaps then, that becomes a reasonable place to start. There is no attempt in this paper to propose, or evaluate, any research hypothesis; rather simply to challenge conventions. Using examples from science fiction to illustrate various assumptions behind the AI/singularity debate, this essay seeks to encourage discussion on a number of possible futures based on different underlying metaphysical philosophies. Although properly grounded in science, it eventually looks beyond the technology for answers and, ultimately, beyond the Earth itself.

Open Access Article: Pareidolic and Uncomplex Technological Singularity
Information 2018, 9(12), 309; https://doi.org/10.3390/info9120309
Received: 25 October 2018 / Revised: 30 November 2018 / Accepted: 3 December 2018 / Published: 6 December 2018
PDF Full-text (337 KB) | HTML Full-text | XML Full-text
Abstract
“Technological Singularity” (TS), “Accelerated Change” (AC), and Artificial General Intelligence (AGI) are frequent future/foresight studies’ themes. Rejecting the reductionist perspective on the evolution of science and technology, and based on patternicity (“the tendency to find patterns in meaningless noise”), a discussion about the perverse power of apophenia (“the tendency to perceive a connection or meaningful pattern between unrelated or random things (such as objects or ideas)”) and pareidolia (“the tendency to perceive a specific, often meaningful image in a random or ambiguous visual pattern”) in those studies is the starting point for two claims: “accelerated change” is a future-related apophenia case, whereas AGI (and TS) are future-related pareidolia cases. A short presentation of research-focused social networks working to solve complex problems reveals the superiority of human networked minds over hardware–software systems and suggests the opportunity for a network-based study of TS (and AGI) from a complexity perspective. It could compensate for the weaknesses of approaches deployed from a linear and predictable perspective, in order to try to redesign our intelligent artifacts.

Open Access Article: Cosmic Evolutionary Philosophy and a Dialectical Approach to Technological Singularity
Information 2018, 9(4), 78; https://doi.org/10.3390/info9040078
Received: 13 February 2018 / Revised: 3 April 2018 / Accepted: 4 April 2018 / Published: 5 April 2018
PDF Full-text (14157 KB) | HTML Full-text | XML Full-text
Abstract
The anticipated next stage of human organization is often described by futurists as a global technological singularity. This next stage of complex organization is hypothesized to be actualized by scientific-technic knowledge networks. However, the general consequences of this process for the meaning of human existence are unknown. Here, it is argued that cosmic evolutionary philosophy is a useful worldview for grounding an understanding of the potential nature of this future event. In the cosmic evolutionary philosophy, reality is conceptualized locally as a universal dynamic of emergent evolving relations. This universal dynamic is structured by a singular astrophysical origin and an organizational progress from sub-atomic particles to global civilization mediated by qualitative phase transitions. From this theoretical ground, we attempt to understand the next stage of universal dynamics in terms of the motion of general ideation attempting to actualize higher unity. In this way, we approach technological singularity dialectically as an event caused by ideational transformations and mediated by an emergent intersubjective objectivity. From these speculations, a historically-engaged perspective on the nature of human consciousness is articulated where the truth of reality as an emergent unity depends on the collective action of a multiplicity of human observers.

Open Access Article: Can Computers Become Conscious, an Essential Condition for the Singularity?
Information 2017, 8(4), 161; https://doi.org/10.3390/info8040161
Received: 12 November 2017 / Revised: 3 December 2017 / Accepted: 6 December 2017 / Published: 9 December 2017
Cited by 4 | PDF Full-text (181 KB) | HTML Full-text | XML Full-text
Abstract
Given that consciousness is an essential ingredient for achieving Singularity, the notion that an Artificial General Intelligence device can exceed the intelligence of a human, namely, the question of whether a computer can achieve consciousness, is explored. Given that consciousness is being aware of one’s perceptions and/or of one’s thoughts, it is claimed that computers cannot experience consciousness. Given that a computer has no sensorium, it cannot have perceptions. In terms of being aware of its thoughts, it is argued that being aware of one’s thoughts is basically listening to one’s own internal speech. A computer has no emotions, and hence, no desire to communicate, and without the ability, and/or desire to communicate, it has no internal voice to listen to and hence cannot be aware of its thoughts. In fact, it has no thoughts, because it has no sense of self and thinking is about preserving one’s self. Emotions have a positive effect on the reasoning powers of humans, and therefore, the computer’s lack of emotions is another reason for why computers could never achieve the level of intelligence that a human can, at least, at the current level of the development of computer technology.
Open Access Essay: The Universality of Experiential Consciousness
Information 2019, 10(1), 31; https://doi.org/10.3390/info10010031
Received: 16 November 2018 / Revised: 19 December 2018 / Accepted: 11 January 2019 / Published: 17 January 2019
PDF Full-text (177 KB) | HTML Full-text | XML Full-text
Abstract
It is argued that, of Block’s (On a confusion about a function of consciousness, 1995; The Nature of Consciousness: Philosophical Debates, 1997) two types of consciousness, namely phenomenal consciousness (p-consciousness) and access consciousness (a-consciousness), p-consciousness applies to all living things but a-consciousness is uniquely human. This differs from Block’s assertion that a-consciousness also applies to some non-human organisms. It is suggested that p-consciousness, awareness, experience and perception are basically equivalent and that human consciousness has, in addition to percept-based p-consciousness, concept-based a-consciousness, a verbal and conceptual form of consciousness that can be utilized to coordinate, organize and plan activities for rational decision-making. This argument is based on Logan’s (The Extended Mind: The Emergence of Language, The Human Mind and Culture, 1997) assertion that humans are uniquely capable of reasoning and rationality because they are uniquely capable of verbal language and hence the ability to conceptualize.
Open Access Feature Paper Article: Thinking in Patterns and the Pattern of Human Thought as Contrasted with AI Data Processing
Information 2018, 9(4), 83; https://doi.org/10.3390/info9040083
Received: 5 March 2018 / Revised: 4 April 2018 / Accepted: 5 April 2018 / Published: 8 April 2018
Cited by 3 | PDF Full-text (662 KB) | HTML Full-text | XML Full-text
Abstract
We propose that the ability of humans to identify and create patterns led to the unique aspects of human cognition and culture as a complex emergent dynamic system consisting of the following human traits: patterning, social organization beyond that of the nuclear family that emerged with the control of fire, rudimentary set theory or categorization and spoken language that co-emerged, the ability to deal with information overload, conceptualization, imagination, abductive reasoning, invention, art, religion, mathematics and science. These traits are interrelated as they all involve the ability to flexibly manipulate information from our environments via pattern restructuring. We argue that the human mind is the emergent product of a shift from external percept-based processing to a concept and language-based form of cognition based on patterning. In this article, we describe the evolution of human cognition and culture, describing the unique patterns of human thought and how we, humans, think in terms of patterns.

Open Access Commentary: Love, Emotion and the Singularity
Information 2018, 9(9), 221; https://doi.org/10.3390/info9090221
Received: 31 July 2018 / Revised: 23 August 2018 / Accepted: 31 August 2018 / Published: 3 September 2018
Cited by 1 | PDF Full-text (222 KB) | HTML Full-text | XML Full-text
Abstract
Proponents of the singularity hypothesis have argued that there will come a point at which machines will overtake us not only in intelligence but that machines will also have emotional capabilities. However, human cognition is not something that takes place only in the brain; one cannot conceive of human cognition without embodiment. This essay considers the emotional nature of cognition by exploring the most human of emotions—romantic love. By examining the idea of love from an evolutionary and a physiological perspective, the author suggests that in order to account for the full range of human cognition, one must also account for the emotional aspects of cognition. The paper concludes that if there is to be a singularity that transcends human cognition, it must be embodied. As such, the singularity could not be completely non-organic; it must take place in the form of a cyborg, wedding the digital to the biological.
Open Access Review: AI to Bypass Creativity. Will Robots Replace Journalists? (The Answer Is “Yes”)
Information 2018, 9(7), 183; https://doi.org/10.3390/info9070183
Received: 1 July 2018 / Revised: 17 July 2018 / Accepted: 21 July 2018 / Published: 23 July 2018
PDF Full-text (300 KB) | HTML Full-text | XML Full-text
Abstract
This paper explores a practical application of a weak, or narrow, artificial intelligence (AI) in the news media. Journalism is a creative human practice. This, according to widespread opinion, makes it harder for robots to replicate. However, writing algorithms are already widely used in the news media to produce articles and thereby replace human journalists. In 2016, Wordsmith, one of the two most powerful news-writing algorithms, wrote and published 1.5 billion news stories. This number is comparable to or may even exceed work written and published by human journalists. Robo-journalists’ skills and competencies are constantly growing. Research has shown that readers sometimes cannot differentiate between news written by robots or by humans; more importantly, readers often make little of such distinctions. Considering this, these forms of AI can be seen as having already passed a kind of Turing test as applied to journalism. The paper provides a review of the current state of robo-journalism; analyses popular arguments about “robots’ incapability” to prevail over humans in creative practices; and offers a foresight of the possible further development of robo-journalism and its collision with organic forms of journalism.
Open Access Opinion: Artificial Intelligence Hits the Barrier of Meaning
Information 2019, 10(2), 51; https://doi.org/10.3390/info10020051
Received: 20 January 2019 / Revised: 31 January 2019 / Accepted: 2 February 2019 / Published: 5 February 2019
PDF Full-text (175 KB) | HTML Full-text | XML Full-text
Abstract
Today’s AI systems sorely lack the essence of human intelligence: Understanding the situations we experience, being able to grasp their meaning. The lack of humanlike understanding in machines is underscored by recent studies demonstrating lack of robustness of state-of-the-art deep-learning systems. Deeper networks [...] Read more.
Today’s AI systems sorely lack the essence of human intelligence: understanding the situations we experience and grasping their meaning. The lack of humanlike understanding in machines is underscored by recent studies demonstrating the lack of robustness of state-of-the-art deep-learning systems. Deeper networks and larger datasets alone are unlikely to breach AI’s “barrier of meaning”; instead, the field will need to embrace its original roots as an interdisciplinary science of intelligence.
Open Access Article Technological Singularity: What Do We Really Know?
Information 2018, 9(4), 82; https://doi.org/10.3390/info9040082
Received: 27 February 2018 / Revised: 25 March 2018 / Accepted: 3 April 2018 / Published: 8 April 2018
Cited by 1
Abstract
The concept of the technological singularity is frequently reified. Futurist forecasts inferred from this imprecise reification are then criticized, and the reified ideas are incorporated into the core concept. In this paper, I try to disentangle the facts related to the technological singularity from more speculative beliefs about the possibility of creating artificial general intelligence. I use the theory of metasystem transitions and the concept of universal evolution to analyze some misconceptions about the technological singularity. While it may be neither purely technological nor truly singular, we can predict that the next transition will take place, and that the emergent metasystem will demonstrate exponential growth in complexity, with a doubling time of less than half a year, exceeding the complexity of existing cybernetic systems within a few decades.
Open Access Review From Homo Sapiens to Robo Sapiens: The Evolution of Intelligence
Information 2019, 10(1), 2; https://doi.org/10.3390/info10010002
Received: 30 October 2018 / Revised: 17 December 2018 / Accepted: 18 December 2018 / Published: 21 December 2018
Abstract
In this paper, we present a review of recent developments in artificial intelligence (AI) towards the possibility of an artificial intelligence equal to human intelligence. AI technology has always shown a stepwise increase in its capacity and complexity; the last step took place several years ago with the rapid progress in deep neural network technology. Each such step goes hand in hand with our understanding of ourselves and of human cognition. Indeed, AI has always been about the question of understanding human nature. AI percolates into our lives, changing our environment. We believe that the next few steps in AI technology, and in our understanding of human behavior, will bring about much more powerful machines that are flexible enough to resemble human behavior. In this context, two research fields are relevant: Artificial Social Intelligence (ASI) and Artificial General Intelligence (AGI). We also allude to one of the main challenges for AI, embodied cognition, and explain how it can be viewed as an opportunity for further progress in AI research.
Open Access Article When Robots Get Bored and Invent Team Sports: A More Suitable Test than the Turing Test?
Information 2018, 9(5), 118; https://doi.org/10.3390/info9050118
Received: 9 April 2018 / Revised: 5 May 2018 / Accepted: 8 May 2018 / Published: 11 May 2018
Abstract
Increasingly, the Turing test—which is used to show that artificial intelligence has achieved human-level intelligence—is being regarded as an insufficient indicator of human-level intelligence. This essay extends arguments that embodied intelligence is required for human-level intelligence, and proposes a more suitable test for determining it: the invention of team sports by humanoid robots. The test is preferred because team sport activity is easily identified, is uniquely human, and is suggested to emerge under basic, controllable conditions. For humanoid robots to self-organize, or invent, team sport as a function of human-level artificial intelligence, the following necessary conditions are proposed: humanoid robots must have the capacity to participate in cooperative-competitive interactions, instilled by algorithms for resource acquisition; they must possess or acquire stores of energetic resources sufficient to permit leisure time, thus reducing competition for scarce resources and increasing cooperative tendencies; and they must possess a heterogeneous range of energetic capacities. When present, these factors allow robot collectives to spontaneously invent team sport activities and thereby demonstrate one fundamental indicator of human-level intelligence.
Open Access Article Artificial Intelligence and the Limitations of Information
Information 2018, 9(12), 332; https://doi.org/10.3390/info9120332
Received: 17 November 2018 / Revised: 7 December 2018 / Accepted: 18 December 2018 / Published: 19 December 2018
Abstract
Artificial intelligence (AI) and machine learning promise to make major changes to the relationship of people and organizations with technology and information. However, as with any form of information processing, they are subject to the limitations of information linked to the way in which information evolves in information ecosystems. These limitations are caused by the combinatorial challenges associated with information processing, and by the tradeoffs driven by selection pressures. Analysis of the limitations explains some current difficulties with AI and machine learning and identifies the principles required to resolve the limitations when implementing AI and machine learning in organizations. Applying the same type of analysis to artificial general intelligence (AGI) highlights some key theoretical difficulties and gives some indications about the challenges of resolving them.
Open Access Article Conceptions of Artificial Intelligence and Singularity
Information 2018, 9(4), 79; https://doi.org/10.3390/info9040079
Received: 15 February 2018 / Revised: 1 April 2018 / Accepted: 3 April 2018 / Published: 6 April 2018
Cited by 2
Abstract
In the current discussions about “artificial intelligence” (AI) and “singularity”, both labels are used with several very different senses, and the confusion among these senses is the root of many disagreements. Similarly, although “artificial general intelligence” (AGI) has become a widely used term in the related discussions, many people are not really familiar with this research, including its aim and status. We analyze these notions, and introduce the results of our own AGI research. Our main conclusions are that: (1) it is possible to build a computer system that follows the same laws of thought and shows similar properties to the human mind, but, since such an AGI will have neither a human body nor human experience, it will not behave exactly like a human, nor will it be “smarter than a human” on all tasks; and (2) since the development of an AGI requires a reasonably good understanding of the general mechanism of intelligence, the system’s behaviors will still be understandable and predictable in principle. Therefore, the success of AGI will not necessarily lead to a singularity beyond which the future becomes completely incomprehensible and uncontrollable.
Open Access Commentary The Singularity May Be Near
Information 2018, 9(8), 190; https://doi.org/10.3390/info9080190
Received: 5 July 2018 / Revised: 24 July 2018 / Accepted: 25 July 2018 / Published: 27 July 2018
Cited by 1
Abstract
Toby Walsh, in “The Singularity May Never Be Near”, gives six arguments supporting his view that the technological singularity may happen, but is unlikely to. In this paper, we analyze each of his arguments and arrive at similar conclusions, but give more weight to the “likely to happen” prediction.
Information (EISSN 2078-2489) is published by MDPI AG, Basel, Switzerland.