Conceptions of Artificial Intelligence and Singularity
Abstract
In current discussions of "artificial intelligence" (AI) and "singularity", both labels are used in several very different senses, and the confusion among these senses is the root of many disagreements. Similarly, although "artificial general intelligence" (AGI) has become a widely used term in these discussions, many people are not familiar with this line of research, including its aims and current status. We analyze these notions and introduce the results of our own AGI research. Our main conclusions are: (1) it is possible to build a computer system that follows the same laws of thought and exhibits properties similar to those of the human mind; however, since such an AGI will have neither a human body nor human experience, it will not behave exactly like a human, nor will it be "smarter than a human" on all tasks; and (2) since the development of an AGI requires a reasonably good understanding of the general mechanism of intelligence, the system's behaviors will remain understandable and predictable in principle. Therefore, the success of AGI will not necessarily lead to a singularity beyond which the future becomes completely incomprehensible and uncontrollable.
Share & Cite This Article
Wang, P.; Liu, K.; Dougherty, Q. Conceptions of Artificial Intelligence and Singularity. Information 2018, 9, 79.