
The Emperor of Strong AI Has No Clothes: Limits to Artificial Intelligence

Department of Social Communication, Pontifícia Universidade Católica do Rio de Janeiro (PUC-RJ), R. Marquês de São Vicente, 225—Gávea, Rio de Janeiro 22451-900, RJ, Brazil
Department of Physics, University of Toronto, 60 St. George, Toronto, ON M5S 1A7, Canada
Author to whom correspondence should be addressed.
Information 2017, 8(4), 156;
Received: 31 October 2017 / Revised: 20 November 2017 / Accepted: 22 November 2017 / Published: 27 November 2017
(This article belongs to the Special Issue AI AND THE SINGULARITY: A FALLACY OR A GREAT OPPORTUNITY?)


Making use of the techniques of media ecology, we argue that the premise of the technological Singularity, based on the notion that computers will one day be smarter than their human creators, is false. We also analyze the comments of other critics of the Singularity, as well as those of its supporters. The notion of intelligence that advocates of the technological singularity promote does not take into account the full dimension of human intelligence: they treat artificial intelligence as a figure without a ground. Human intelligence, as we will show, is not based solely on logical operations and computation, but also includes a long list of other characteristics that are unique to humans, which is the ground that supporters of the Singularity ignore. The list includes curiosity, imagination, intuition, emotions, passion, desires, pleasure, aesthetics, joy, purpose, objectives, goals, telos, values, morality, experience, wisdom, judgment, and even humor.

The true sign of intelligence is not knowledge but imagination.
—Albert Einstein
Levels of consciousness: knowing what one knows. Language is necessary for knowing what one knows, as one talks to oneself. But computers have no need or desire to communicate with others and hence never created language; without language one cannot talk to oneself, and hence computers will never be conscious.

1. Introduction

The notion of the technological singularity, or the idea that computers will one day be more intelligent than their human creators, has received a lot of attention in recent years. A number of scholars have argued both for and against the idea of a technological singularity using a variety of different arguments. A sample of these opinions for and against the idea of the technological singularity can be found in two collections of short essays, entitled Special Report: The Singularity [2] and What to Think About Machines that Think [3], as well as in the critical writings of Herbert Dreyfus [4,5,6,7]. We will analyze their different positions and make use of their arguments, which we will integrate into our own critiques of both the idea that computers can think and the idea of the Singularity, i.e., the idea that machines, through the use of Artificial General Intelligence (AGI, sometimes referred to simply as AI), can become more intelligent than their human creators. We intend to show that, despite the usefulness of artificial intelligence, the Singularity is an overextension of AI and that no computer can ever duplicate the intelligence of a human being.
We will argue that the emperor of AI is quite naked by exploring the many dimensions of human intelligence that involve characteristics that we believe cannot be duplicated by silicon-based forms of intelligence, because machines lack a number of essential properties that only a flesh and blood living organism, especially a human, can possess. In short, we believe that artificial intelligence (AI), or its stronger version artificial general intelligence (AGI), can never rise to the level of human intelligence because computers are not capable of many of the essential characteristics of human intelligence, despite their ability to outperform us as far as logic and computation are concerned. As Einstein once remarked, “Logic will get you from A to B. Imagination will take you everywhere”.
What motivated us to write this essay is our fear that some who argue for the technological singularity might in fact convince many others to lower the threshold as to what constitutes human intelligence so that it meets the level of machine intelligence, and thus devalue those aspects of human intelligence that we (the authors) hold dear such as imagination, aesthetics, altruism, creativity, and wisdom.
To be a fully realized human intelligent being it is necessary, in our opinion, to have these characteristics. We will suggest that these many aspects of the human experience that are associated uniquely with our species Homo sapiens (wise humans) do not have analogues in the world of machine intelligence, and that as a result an artificially intelligent machine-based system that is more intelligent than a human is not possible, and the notion of the technological singularity is basically science fiction. We recognize that the attributes that we listed above as constituting what we consider to be intelligence are arrived at subjectively. Perhaps we are defining what we believe is a humane form of intelligence, as has been kindly suggested by one of the reviewers of an earlier version of this essay. But that is one of the objectives of this essay: a desire to make sure that, in the desire to gain the benefits of AI, we as a society do not degrade the humaneness of what is considered intelligence. Human intelligence and machine intelligence are of a completely different nature, so to claim that one is greater than the other is like comparing the proverbial apples and oranges. They are different, they are both valuable, and one should not be mistaken for the other.
There is a subjective, non-rational (or perhaps extra-rational) aspect of human intelligence, which a computer can never duplicate. We do not want to have intelligence as defined by Singularitarians, who are primarily AI specialists and as a result are motivated to exaggerate their field of research and their accomplishments as is the case with all specialists. Engineers should not be defining intelligence. Consider the confusion engineers created by defining Shannon’s measure of signal transmission as information (see Braga and Logan [8]).
To critique the idea of the Singularity we will make use of the ideas of Terrence Deacon [9], as developed in his study Incomplete Nature: How Mind Emerged from Matter. Deacon’s basic idea is that for an entity to have sentience or intelligence it must also have a sense of self [9] pp. 463–484. In his study, Deacon [9] p. 524 defines information “as about something for something toward some end”. As a computer or an AI device has no sense of self (i.e., no one is home), it has no information as defined by Deacon. The AI device only has Shannon information, which has no meaning for itself, i.e., the computer is not aware of what it knows as it deals with one bit of data at a time. We will discover that many of the other critiques of the singularity that we will reference parallel our notion that a machine has no sense of self, no objectives or ends for which it strives, and no values.
We will also make use of media ecology and the insights of Marshall McLuhan [10] including:
  • the notion that a figure must be understood in the context of the ground or environment in which it operates,
  • the notion that every technology brings both service and disservice, and
  • the idea that any technology pushed far enough flips into opposite or complementary form.
Given that the medium is the message, as McLuhan [10] proclaimed, we will examine the medium of the computer and its special use as an artificial intelligence (AI) device, with particular attention to strong AI or AGI. Our basic thesis is that computers, together with AI, are a form of technology and a medium that extends human intelligence, not a form of intelligence itself.
Our critique of AGI will make use of McLuhan’s [10] technique of figure/ground analysis, which is at the heart of his iconic one-liner “the medium is the message” that first appeared in his book Understanding Media. The medium, independent of its content, has its own message. The meaning of the content of a medium, the figure, is affected by the ground in which it operates, the medium itself. The mistake that the advocates of AGI and the Singularity make is that they regard the computer as a figure without a ground. As McLuhan once pointed out, “logic is figure without ground” [11]. A computer is nothing more than a logic device and hence it is a figure without a ground. A human and the human’s intelligence are each a figure with a ground, the ground of experience, emotions, imagination, purpose, and all of the other human characteristics that computers cannot possibly duplicate because they have no sense of self.
While we are critical of the notion of the idea of the Singularity, we are quite positive regarding the value of AI. We also believe, like Rushkoff [12] pp. 354–355, that networked computers will increase human intelligence by allowing humans to network and share their insights. We also concur with Benjamin Bratton [13]:
“AI can have an enormously positive role to play at an infrastructural level, not just the augmentation of an individual’s intelligence, but the augmentation of systemic intelligence and the ability of infrastructural systems to automate what we call political decision or economic decision.”
The pattern recognition capabilities of big data will assist humans to make new discoveries, but it will require human intelligence to guide the AI devices as to what patterns to look for. In short, AI guided by human intelligence will always be more productive than AGI working on its own. As pointed out by one of the reviewers of our essay, other forms of a technological singularity that do not try to duplicate human intelligence are altogether possible but they are not the subject of this essay.

2. Origin of the Singularity Idea

The following excerpt from the Wikipedia article “Technological Singularity”, accessed 15 September 2017, summarizes the rise of the concept in the early days of the computer age, beginning with a conversation between John von Neumann and Stan Ulam.
The first use of the term “singularity” in this context was made by Stanislaw Ulam in his 1958 obituary for John von Neumann, in which he mentioned a conversation with von Neumann about the “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”. The term was popularized by mathematician, computer scientist and science fiction author Vernor Vinge, who argues that artificial intelligence, human biological enhancement, or brain-computer interfaces could be possible causes of the singularity. Futurist Ray Kurzweil cited von Neumann’s use of the term in a foreword to von Neumann’s classic The Computer and the Brain.

2.1. The Belief in the Singularity

Let us examine the roots of the fantasy that humans will one day be obsolesced by computers. A number of advocates of strong AI or AGI have suggested that in the not too distant future (2045, according to Ray Kurzweil [14]) programmers will design a computer with an AI or AGI capability that will allow it to design a computer even more intelligent than itself; that computer will be able to do the same, and by a process of iteration a technological singularity point will be arrived at, where post-Singularity computers will be far more intelligent than us poor humans, who only have an intelligence that was designed by nature through natural selection and evolution. At this point, according to those who embrace this idea, the super-intelligent computers will take over and we humans will become their docile servants.
These scenarios are not the product of science fiction writers. Rather, they are being painted by information and computer scientists who are advocates of strong AI or AGI and the idea of the Technological Singularity.
The idea of the perfectibility of human intelligence can be traced back to the Enlightenment and the encyclopaedist Condorcet who wrote,
Nature has set no term to the perfection of human faculties; that the perfectibility of man is truly indefinite; and that the progress of this perfectibility, from now onwards independent of any power that might wish to halt it, has no other limit than the duration of the globe upon which nature has cast us.
It is worth noting that the idea of the Singularity is the product of techno-optimists who have made other predictions in the past that did not pan out. For example, the techno-optimists once predicted that, with the efficiency of computers, there would be a dramatic decrease in the number of hours we would need to work, which we have yet to see: “For instance, computer workstations have revolutionized office and retail … Yet the dramatic increase in productivity has not led to a shorter work week or to a more relaxed work environment” [15] p. 74. Instead, it has led to companies using the efficiency of computers to reduce the number of workers needed to perform certain tasks.
Techno-optimists also predicted that offices would be practically paper-free, when in fact the amount of paper in offices has actually increased.
Over the past thirty years, many people have proclaimed the imminent arrival of the paperless office. Yet even the World Wide Web, which allows almost any computer to read and display another computer’s documents, has increased the amount of printing done. The use of e-mail in an organization causes an average 40% increase in paper consumption.
[16] p. 1

2.2. Singularity Advocates with a Spiritual Dimension to Their Belief

When we first began to think of this project and work on the research that led to this essay, the biggest mystery for us was what motivates a tough-minded scientific thinker to believe that human intelligence can be programmed into or instantiated on a computer, a mechanical machine. Then we read Steven Pinker’s [17] pp. 5–8 short essay “Thinking Does Not Imply Subjugating”, in which, in the following few sentences, he succinctly described his romance with the belief that human intelligence could be captured in a machine:
The cognitive feats of the brain can be explained in physical terms… This is a great idea for two reasons. First, it completes a naturalistic understanding of the universe, exorcising occult souls, spirits and ghosts in the machine. Just as Darwin made it possible for a thoughtful observer of the natural world to do without creationism, Turing and others made it possible for a thoughtful observer of the cognitive world to do so without spiritualism.
However, some proponents of the Singularity have a religious zeal to them, not in the theist sense but somewhat similar to the beliefs of the deists. Here is a collection of positions by Singularity zealots that have, in our opinion, to varying degrees a religious tone to them.
Frank Tipler [18] has an amusing solution for the inevitable fact that, once our sun runs out of nuclear fuel and can no longer provide the conditions that make life on Earth sustainable, our only hope for the survival of human culture will be AI computers (AIs) that do not require the conditions that make carbon-based life sustainable. He suggests that the AIs will take to outer space, and that
any human who wants to join the AIs in their expansion can become a human upload, a technology that should be developed about the same time as AI technology … If you can’t beat ‘em, join ‘em … When this doom is at hand, any human who remains alive and doesn’t want to die will have no choice but to become a human upload. The AIs will save us all.
The parallels of Tipler’s proposal with Christianity are striking. God is dead, but AI has been born and it is our Savior, and like Jesus’s self-described appellation, it is “the son of man”. AI, not Jesus, “will save us all”, and eternal life can be found in an AI computer somewhere in space like the “kingdom of heaven” (Matthew 3.2), and not here on Earth.
Anthony Garrett Lisi [19], in an article entitled “I, for One, Welcome our Machine Overlords”, claimed: “Computers share knowledge much more easily than humans do, and they keep that knowledge longer, becoming wiser than humans”. Lisi, in his attempt to find a higher power, makes the mistake of assuming that wisdom comes from knowledge. Knowledge is about using information to achieve one’s objectives, whereas wisdom is the ability to choose objectives consistent with one’s values. How can a computer have values? The values of a computer are those of its programmers.
Pamela McCorduck [20] p. 53 in an article entitled “An Epochal Human Event” opined, “We long to preserve ourselves as a species. For all of the imaginary deities that we have petitioned throughout history who have failed to protect us from nature, from one another, from ourselves—we’re finally ready to call on our own enhanced, augmented minds instead”. Her god is “our own enhanced, augmented minds”.
Sam Harris [21] suggests that a super-intelligent AGI could achieve 20,000 years of intellectual work in a week. Scientific work, however, requires making observations and designing and building observational tools. His closing comments reveal what we have identified as the quasi-religious fervor of the AGI advocates: “We seem to be in the process of building a god. Now would be a good time to wonder whether it will (or can be) a good one”.
Gregory Paul [22] writes, “The way for human minds to avoid becoming obsolete is to join in the cyber civilization; out of growth-limited bio brains into rapidly improving cyber brains”. He then suggests that we can then give up our physical bodies which would then benefit the Earth’s biosystem. This is a variation on the Christian idea that we can have everlasting life as pure spirits. For Gregory Paul heaven will be in the clouds, computer clouds.
James Croak [23] suggests that, “Fear of AI is the latest incarnation of our primal unconscious fear of an all-knowing, all-powerful angry God dominating us–but in a new ethereal form”.
Douglas Hofstadter [24] provides us with an apocalyptic scenario of the impact of the Singularity, which he believes is a “couple of centuries” away. He suggests that the ramifications “will be enormous, since the highest form of sentient beings on the planet will no longer be human. Perhaps these machines—our ‘children’—will be vaguely like us and will have culture similar to ours, but most likely not. In that case, we humans may well go the way of the dinosaurs”.
Perhaps the most explicit example of the religious devotion to the idea of the Singularity comes from AI programmer Anthony Levandowski, who is famous for his work on self-driving cars, first for Waymo (a subsidiary of Alphabet, Google’s holding company) and later for Uber. Levandowski has founded a non-profit religious organization, the Way of the Future, that plans to “develop and promote the realization of a Godhead based on artificial intelligence and through understanding and worship of the Godhead contribute to the betterment of society”.
We end this section with John Horgan’s [25] humorous and skeptical take on the eternal life belief by Singularity advocates, who believe that humans can be uploaded onto a computer:
I would love to believe that we are rapidly approaching “the singularity”. Like paradise, technological singularity comes in many versions, but most involve bionic brain boosting. At first, we’ll become cyborgs, as stupendously powerful brain chips soup up our perception, memory, and intelligence and maybe even eliminate the need for annoying TV remotes. Eventually, we will abandon our flesh-and-blood selves entirely and upload our digitized psyches into computers. We will then dwell happily forever in cyberspace … Kurzweil says he has adopted an antiaging regimen so that he’ll live long enough to live forever.
Horgan remains skeptical of the uploading of human brains onto computers, mainly because neuroscientists have such a sketchy understanding of how the brain operates, how it stores memories, and what roles are played by the various chemicals found in the brain.
Neurotransmitters, which carry signals across the synapse between two neurons, also come in many different varieties. In addition to neurotransmitters, neural-growth factors, hormones, and other chemicals ebb and flow through the brain, modulating cognition in ways that are both profound and subtle.

2.3. Have We Become the Servomechanisms of Our Computers?

Although we firmly believe that machine intelligence can never exceed human intelligence, there is still a very real danger, however, that we can lose some of our autonomy to AI or AGI through a decline in what we regard as human intelligence and hence how we view the nature of the human spirit. There are other instances where we have partially lost our autonomy to other technologies. One example is our total dependence on the automobile and the burning of fossil fuels that now threatens our very existence due to global warming and climate change.
In Chapter 4 of Understanding Media, “The Gadget Lover: Narcissus as Narcosis”, Marshall McLuhan [10] describes how we allow our technologies to take over and control us so that we become their servomechanisms. We believe this is an apt description of the AGI computer ‘gadget lovers’ who support the notion of the inevitability of the technological singularity. McLuhan [10] p. 51 wrote:
The Greek myth of Narcissus is directly concerned with a fact of human experience, as the word Narcissus indicates. It is from the Greek word narcosis, or numbness. The youth Narcissus mistook his own reflection in the water for another person. This extension (or amplification) of himself by mirror numbed his perceptions until he became the servomechanism of his own extended or repeated image … Now the point of this myth is the fact that men at once become fascinated by any extension of themselves (i.e., their technological extensions) … Such amplification is bearable by the nervous system only through numbness or blocking of perception … To behold, use or perceive any extension of ourselves in form is necessarily to embrace it … By continuously embracing technologies, we relate ourselves to them as servo-mechanisms. That is why we must, to use them at all, serve these objects, these extensions of ourselves, as gods or minor religions … Physiologically, man in the normal use of technology (or his variously extended body) is perpetually modified by it and in turn finds ever new ways of modifying his technology. Man becomes, as it were, the sex organs of the machine world, as the bee of the plant world, enabling it to fecundate and to evolve ever new forms.
We have quoted this rather long excerpt from McLuhan because it seems to describe some advocates of AGI who, in our opinion, have become the servomechanisms of their computer technology, and who even suggest that we will become not merely their servomechanisms but their slaves. Like Narcissus, who fell in love with his own image reflected in the water, the advocates of the technological singularity see a reflection of themselves in the computers that they program. Riffing on the one-liner “garbage in, garbage out”, we would suggest “computer worship in, narcissism out”.
Quentin Hardy [26], a strong advocate of strong AI, wrote, “we have met the AI and it is us”. AI is like the pool into which Narcissus looked and fell in love with his own image. AI is just a reflection of one aspect of our own intelligence (the logical and rational aspect), and we, or some of us, have fallen in love with it. Just as Narcissus suffered from narcosis because he was so mesmerized by his own reflection that he could not get beyond himself, many of the advocates of AGI or strong AI are mesmerized by the beauty of logic and rationality to such an extent that they dismiss the emotional, the non-rational, the poetry of poiesis, and the emotional side of intelligence. Human intelligence is bicameral. Not a neat division of the analytic/rational left brain and the artistic/intuitive right brain, but a synthesis of these two aspects of the human mind. The AI computer brain is unicameral with a left-brain bias. It lacks the neurochemistry, such as dopamine, serotonin, and other agents, that is triggered by or is part of human emotional life.
Timothy Taylor [27] introduces the idea of denkraumverlust, whereby one confuses the representation of something for the thing itself. He cites the example of the Pygmalion myth where a sculptor falls in love with the figure of a woman that he sculpted. In a similar way, creators of AGI have fallen in love with their creations attributing properties to them that they do not possess like the creator of Pygmalion or the male lead in the movie Her.

3. The Ground of Intelligence—What Is Missing in Computers

At the core of our critique of the technological singularity is our belief that human intelligence cannot be exceeded by machine intelligence because the following set of human attributes are essential ingredients of human intelligence, and they cannot, in our opinion, be duplicated by a machine. The most important of these is that humans have a sense of self and hence have purpose, objectives, goals, and telos, as has been described by Terrence Deacon [9] pp. 463–484 in his book Incomplete Nature. As a result of this sense of self, humans also have curiosity, imagination, intuition, emotions, passion, desires, pleasure, aesthetics, joy, values, morality, experience, wisdom, and judgement. All of these attributes are essential elements of or conditions for human intelligence, in our opinion. In a certain sense, they are the ground in which human intelligence operates. Stripped of these qualities, as is the case with AI, all that is left of intelligence is logic, a figure without a ground, according to McLuhan, as we have already mentioned. Those who desire to create a human level of intelligence in a machine will have to find a way to duplicate the above list of characteristics that we believe define human intelligence.
To the long list above of human characteristics that we have suggested contributes to human intelligence we would also add humor based on the following report of the social interaction of Stan Ulam and John von Neumann, the very first scholars to entertain the notion of the singularity.
Von Neumann’s closest friend in the United States was mathematician Stanislaw Ulam. A later friend of Ulam’s, Gian-Carlo Rota, writes: “They would spend hours on end gossiping and giggling, swapping Jewish jokes, and drifting in and out of mathematical talk”. When von Neumann was dying in hospital, every time Ulam would visit he would come prepared with a new collection of jokes to cheer up his friend.
Humor entails thinking out of the box, a key ingredient of human intelligence. Humor specifically works by connecting elements that are not usually connected, as is also the case with creative thinking. All of the super intelligent people we have known invariably have a great sense of humor. Who can doubt the intelligence of the comics Robin Williams and Woody Allen, or the sense of humor of physicists Albert Einstein and Richard Feynman?
There are computers that can calculate better than us and, in the case of IBM’s Deep Blue, play chess better than us, but Deep Blue is a one-trick pony that is incapable of many of the facets of thinking that we regard as essential for considering someone intelligent. Other examples of computers that exceeded humans in game playing are Google’s AlphaGo beating the human Go champion and IBM’s Watson beating the TV Jeopardy champion. In the case of Watson, it won the contest but it had no idea what the correct answers it gave meant; it did not realize that it won the contest, nor did it celebrate its victory. What kind of intelligence is that? A very specialized and narrow kind, for sure.
Perhaps the biggest challenge to our skepticism vis-à-vis the Singularity is a recent feat by DeepMind, the AI research company that developed AlphaGo. Its program AlphaGo Zero plays games against itself and thereby finds the optimum strategy for winning the game. It played Go against itself for three days and, when it was finished, it was able to beat the original AlphaGo program that had beaten the human Go champion. In fact, it played 100 matches against AlphaGo and won them all. AI devices that can beat humans at rule-based games parallel the fact that computers can calculate far faster and far better than any human. The other aspect of computers beating humans at games is that a game is a closed system, whereas life and reality are an open system [28].
Intelligence, at least the kind that we value, involves more than rule based activities and is not limited to closed systems, but operates in open systems. All of the breakthroughs in science and the humanities involve breaking the rules of the previous paradigms in those fields. Einstein’s theory of relativity and quantum theory did not follow the rules of classical physics. As for the fine arts, there are no rules. Both the arts and the sciences are open systems.
The idea that a computer can have a level of imagination or wisdom or intuition greater than humans can only be imagined, in our opinion, by someone who is unable to understand the nature of human intelligence. It is not our intention to insult those that have embraced the notion of the technological singularity, but we believe that this fantasy is dangerous and has the potential to mislead the developers of computer technology by setting up a goal that can never be reached, as well as devalue what is truly unique about the human spirit and human intelligence.
It is only if we lower our standards as to what constitutes human intelligence that computers will overtake their human creators, as advocates of AGI and the technological singularity suggest. Haim Harari [29] p. 434 put it very succinctly when he wrote that he was not worried about the world being controlled by thinking machines but rather was “more concerned about a world led by people who think like machines, a major emerging trend of our digital society”. In a similar vein, Devlin [30] p. 76 claims that computers cannot think, they can only make decisions, and that, he further claims, is the danger of AGI, namely, decisions that are made without thought.

4. The 3.5 Billion Year Evolution of Human Intelligence

Many of the shortcomings of AGI as compared to human intelligence are due to the fact that human beings are not just logic machines, but flesh and blood organisms that perceive their environment, have emotions and goals, and have the will to live. These capabilities took 3.5 billion years of evolution to create.
Kate Jeffrey [31] pp. 366–369 suggests that it would be “an immense act of hubris” to try to achieve the same level of human intelligence in a machine. She asks, “Can we do better than 3.5 billion years of evolution did with us?”
Anthony Aguirre [32] pp. 212–214 remarks that, “human minds are incredibly complex but have been battle tested into (relative) stability over eons of evolution in a variety of extremely challenging environments. AGI computers, on the other hand, cannot be built in the millions or billions, and how many generations of them need to be developed before they achieve the stability of the human brain”.
As S. Abbas Raza [33] pp. 257–259 asks, can any process other than Darwinian evolution produce “teleological autonomy akin to our own”?
Gordon E. Moore [34], cofounder of Fairchild Semiconductor and of Intel Corp. and the Moore of Moore’s law, is also a skeptic:
I don’t believe this kind of thing is likely to happen, at least for a long time. And I don’t know why I feel that way. The development of humans, what evolution has come up with, involves a lot more than just the intellectual capability. You can manipulate your fingers and other parts of your body. I don’t see how machines are going to overcome that overall gap, to reach that level of complexity, even if we get them so they’re intellectually more capable than humans.

4.1. Human Intelligence and the Figure/Ground Relationship

In the Introduction, we indicated that human intelligence for us is not just a matter of logic and rationality, but that it also explicitly entails the following characteristics, which we will now show are essential to human thought: purpose, objectives, goals, telos, caring, intuition, imagination, humor, emotions, passion, desires, pleasure, aesthetics, joy, curiosity, values, morality, experience, wisdom, and judgement. We will now proceed through this list of human characteristics and show how each is an essential component of human intelligence that would be difficult if not impossible to duplicate in a computer. These characteristics arise directly or indirectly from the fact that humans have a sense of self that motivates them. Without a sense of self, who is it that has purpose, objectives, goals, telos, caring, intuition, imagination, humor, emotions, passion, desires, pleasure, aesthetics, joy, curiosity, values, morality, experience, wisdom, and judgement? How could a machine have any of these characteristics?

4.2. Human Thinking Is Not Just Logical and Rational, but It Is Also Intuitive and Imaginative

Just as some advocates of the Singularity look at figures without considering the ground in which they operate, they also fail to take into account that human thought is not just logical and rational, but also intuitive, imaginative, and sometimes even irrational.
The earliest and harshest critic of AGI was Hubert Dreyfus who wrote the following series of books beginning in 1965: Alchemy and AI [4], What Computers Can’t Do [5], Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer [6], and What Computers Still Can’t Do [7].
Dreyfus made a distinction between knowing-that, which is symbolic, and knowing-how, which is intuitive, like facial recognition. Knowing-how depends on context, which he claimed is not stored symbolically. He contended that AI would never be able to capture the human ability to understand context, situation, or purpose in the form of rules, the reason being that human intelligence and expertise depend primarily on unconscious instincts rather than conscious symbolic manipulation. He argued that these unconscious skills would never be captured in formal rules that a computer could duplicate. Dreyfus’s critique parallels McLuhan’s notion of figure/ground analysis. Just as McLuhan claimed that the ground was subliminal, Dreyfus claims that the ground of human thought is unconscious instinct, which allows us to arrive instantly and directly at a thought without going through a conscious series of logical steps or symbolic manipulations. Another way of formulating Dreyfus’s insight is in terms of emergence theory. The human mind and its thought processes are emergent, non-reductionist phenomena. Computers, on the other hand, operate by means of a reductionist program of symbolic manipulations. AGI is linear and sequential, whereas human thinking processes are simultaneous and non-sequential. In McLuhan’s terminology, computers operate in one-thing-at-a-time visual space while the human mind operates in the simultaneity of acoustic space. Computers operate as closed systems, no matter how large their databases may be. Biotic systems, such as human beings, are created by and operate within an open system.
Atran [35] pp. 220–222 reminds us that computers “process information in ways opposite to humans’ in domains associated with human creativity”. He points out that Newton and Einstein imagined “ideal worlds without precedent in any past or plausible future experience … Such thoughts require levels of abstraction and idealization that disregard, rather than assimilate” what is known. Dyson [36] pp. 255–256 strikes a similar note: “genuinely creative intuitive thinking requires non-deterministic machines that can make mistakes, abandon logic from one moment to the next and learn. Thinking is not as logical as we think”.
Imagination and curiosity are uniquely human and are not mechanical, one-step-at-a-time processes. Trying to program them as a series of logical steps is doomed to fail, since imagination and curiosity defy logic and a computer is bound by logic. Logic can inhibit creativity, imagination, and curiosity. An example of how deviation from strictly logical thinking leads to new ideas is the story of the invention of zero. Parmenides argued that nothing changes because ‘non-being’ cannot be, as it is a contradiction: if A changes into B, then A ceases to be, but since non-being cannot be, nothing can change.
His use of logic led to a non-intuitive result, but it shaped much of Ancient Greek thinking, with all subsequent Greek philosophers adding to their model of the world something that was unchanging, like Plato’s forms and Aristotle’s domain of the heavens. We believe that Parmenides’ argument that non-being could not be also explains why the Greeks, who were great at geometry, never came up with the idea of zero. Zero was an invention of the Hindu mathematicians, who were not always very logical but were very practical. When they recorded their abacus calculations, they used a symbol they called sunya (leave a space), recording 302 as 3 sunya 2, i.e., 3 in the hundreds column, nothing in the tens column, and 2 in the ones column. They denoted sunya with a dot and later with a circle. Sunya allowed them to invent the place-number system, whose digits we call Arabic numerals. The Arabs adopted sunya and translated ‘leave a space’ into sifr. When the Italians borrowed sifr from the Arabs, they called it zefiro and later shortened it to zero. Zero, place numbers, negative numbers, and algebra all emerged from what Parmenides and his Greek compatriots would have called a logical error.
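The place-value idea the Hindu mathematicians arrived at can be made concrete with a minimal sketch (our own illustration, not part of the historical account): treating sunya/zero as a column placeholder is exactly what lets a short list of digits encode any number.

```python
def digits_to_value(digits, base=10):
    """Interpret a list of digits (most significant first) in a place-value
    system. A 0 acts like sunya: 'leave a space' in that column."""
    value = 0
    for d in digits:
        value = value * base + d
    return value

def value_to_digits(value, base=10):
    """Decompose a non-negative integer back into its place-value digits."""
    if value == 0:
        return [0]
    digits = []
    while value > 0:
        digits.append(value % base)
        value //= base
    return digits[::-1]

# 302 is '3 sunya 2': 3 hundreds, nothing in the tens column, 2 ones
print(digits_to_value([3, 0, 2]))  # 302
print(value_to_digits(302))        # [3, 0, 2]
```

Without the placeholder there is no way to distinguish 32 from 302, which is why sunya made positional notation, and with it practical arithmetic, possible.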
Lawrence Krauss, a physicist, expressed a wish for AGI machines: “I’m interested in what machines will focus on when they get to choose the questions as well as the answers” [37]. We are quite skeptical of Krauss’s hope, given the following remark by Einstein and Infeld [38]: “The mere formulation of a problem is far more often essential than its solution, which may be merely a matter of mathematical or experimental skill. To raise new questions, new possibilities, to regard old problems from a new angle requires creative imagination and marks real advances in science”. It is hard to imagine how a system of logic could develop curiosity, as all logic can do is equate equivalent statements.
Kevin Kelly [39] also sees AGI machines tackling scientific questions: “To solve the current grand mysteries of quantum gravity, dark energy, dark matter, we’ll probably need intelligences other than human”. Kelly is not taking into account the imagination that is required to create a new paradigm. A new paradigm does not arise from making a calculation, but from using one’s imagination. If he had written his piece in 1904, he might have suggested that we needed AI to understand the result of the Michelson-Morley experiment, which showed that there is no aether and that the speed of light is the same in all frames of reference. Einstein did not use computation or logic to come up with the theory of relativity, only imagination. Einstein [40] remarked, “Imagination is more important than knowledge. For knowledge is limited to all we now know and understand, while imagination embraces the entire world, and all there ever will be to know and understand”.
Gordon Kane [41] p. 219 reminds us that AI might be able to analyze data but scientists will still have to build technological devices to gather data, without which there is no science.
Rafaeli [42] pp. 342–344 also sees AGI computers someday making scientific progress: “Thinking needs data, information and knowledge but also requires communication and interaction. Thinking is about asking questions, not just answering them… For a machine to think it will need to be curious, creative and communicative”. Rafaeli believes that machines will one day be communicative, but we wonder what will motivate them to do so or to be creative or curious for that matter.

4.3. Purpose, Objectives, Goals, Telos and Caring

Everything—a horse, a vine—is created for some duty... For what task, then, were you yourself created? A man’s true delight is to do the things he was made for
—Marcus Aurelius
The set of five interrelated characteristics of purpose, objectives, goals, telos, and caring defines what it is to be a living organism. All living organisms, and only this class of entities, have these properties because they are the only entities that act in their own self-interest. One can define a living organism as an entity that has purpose, objectives, goals, and a telos, a will to live and to reproduce, an end. Telos, associated with Aristotle’s fourth cause, is the purpose, goal, or objective that gives rise to an efficient cause that makes something happen. Computers do not have a will to live, a purpose, a goal, or objectives, nor do they care about anything. They simply function as they were designed to perform and as they are programmed by their human manufacturers and users. They do not reproduce like all forms of life, including bacteria and viruses. Because of their will to live, all organisms are caring; they care about finding nourishment and about whether they live or die. Even single-cell organisms like bacteria and more complex eukaryote microbes will communicate with each other when there is not an ample supply of nourishment to sustain them, and will form a slime medium that allows them to migrate to where there is more food. Bacteria also form slime to protect themselves from ingestion or desiccation. Slime molds, which are eukaryotes, also form cooperative slime colonies for finding nourishment and for reproduction. As more complex multi-cell organisms emerged, more complex forms of caring emerged. Computers, on the other hand, have no capacity for caring. Caring, an emotional state, as we will discover below, is key to creative thinking.

4.4. Intuition

Intuition is the clear conception of the whole at once
—Johann Kaspar Lavater
Living organisms basically make decisions based on their intuition and not on a linear deductive use of logic or any other process for that matter. One exception is humans, who, because of their capacity with symbolic language, sometimes make decisions through a process of reasoning; but for the most part their day-to-day decisions regarding their safety, nourishment, movement, and breathing are all made intuitively. It is only when they are planning, building something, or solving a problem that they make use of logical reasoning. But intuition kicks in again when they are engaged in the following activities: solving a wicked problem; creating or composing music; making an art object like a painting or a piece of sculpture; performing in a play; dancing; engaging in sports; driving a car; flying a plane; or sailing a boat. In most of these cases there would not be enough time to proceed through a chain of logical steps before deciding. In the case of wicked problem solving, the solution lies in making assumptions that have never been made before.
Logic has nothing to do with making the assumptions upon which a chain of logical thinking is executed. Logic only helps one develop a solution based on the assumptions one has made. Imagining new assumptions is an intuitive act, not an act of reason or rationality. Thomas Kuhn made a study of how new scientific breakthroughs are made. They are always made by someone new to the field, usually a young scientist who intuits the new paradigm by making an assumption that contradicts the logic of the old paradigm. This is why an AI device cannot solve a wicked problem: because it operates as a closed logical system, it cannot intuit a new paradigm or a new set of assumptions. That requires a creative, artistic, improvisational approach to science or problem solving. Improvisation cannot be achieved using logic. Logic constrains one’s thinking, making improvisation and imagination impossible. Creative thinking is not rational and most times defies logic. Improvisation is about breaking the rules. Computers, with their logical step-by-step processes, cannot leap directly to a solution as an intuitive thinker can.
The moment we, the authors, learned about the idea of a Singularity, we immediately sensed that it was wrong. For those who hold a belief in the Singularity, that belief is likewise a result of intuitive thinking. The explanation as to why the intuitions of human agents can be so different is that they have different emotional needs. It is no accident that many Singularitarians are computer scientists, especially AI specialists.

4.5. Imagination

Imagination is an important component of intelligence
—Albert Einstein
Imagination entails the creation of new images, concepts, experiences, or sensations in the mind’s eye that have never been seen, perceived, experienced, or sensed through the senses of sight, hearing, and/or touch. Computers do not see, perceive, experience, or sense as humans do, and therefore cannot have imagination. They are constrained by logic, and logic is without images. Another way of describing imagination is to say it represents thinking outside the box. Well, is the box not all that we know and the equivalent ways of representing that knowledge using logic? Logic is merely a set of rules that allows one to show that one set of statements is equivalent to another. One cannot generate new knowledge using logic; one can only find new ways of representing it. Creativity requires imagination, imagination requires creativity, and both are intuitive, so once again we run up against another barrier that prevents computers from generating general intelligence.
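The recurring claim that logic can only show one set of statements to be equivalent to another can be illustrated with a small sketch (our own example): a brute-force truth-table check certifies that two formulas say the same thing, while producing no new content of its own.

```python
from itertools import product

def equivalent(f, g, n_vars):
    """Check two Boolean formulas for logical equivalence by exhausting
    all truth assignments. The procedure can certify that two statements
    are interchangeable, but it cannot invent a statement that says
    anything new."""
    return all(f(*vals) == g(*vals)
               for vals in product([False, True], repeat=n_vars))

# De Morgan's law: not (p and q) is equivalent to (not p) or (not q)
print(equivalent(lambda p, q: not (p and q),
                 lambda p, q: (not p) or (not q), 2))  # True
```

Everything the check can ever report was already implicit in the two formulas; in the article’s terms, the novelty has to come from the imagination that proposed the formulas in the first place.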
Imagination is essential in science for creating a hypothesis to explain observed phenomena, and this part of the process of scientific thinking is quite independent of logic. Logic comes into play when one uses it to determine the consequences of one’s hypotheses that can be tested empirically. Devising ways to test one’s hypotheses requires another, but quite different, kind of imagination.
Imagination is also a key element of artistic creation. The artist creates sensations for his or her audience that they (the audience) would not ordinarily experience.

4.6. Humor

He laughed to free himself from his mind’s bondage
—James Joyce
Humor is not so much a pre-requisite for intelligence as it is an indication of intelligence. To create or appreciate humor, one requires an imagination to see alternatives to one’s expectations. The incongruity theory of humor “suggests that humor arises when logic and familiarity are replaced by things that don’t normally go together” [43]. Given that a computer could neither recognize nor create this kind of incongruity, it would not only lack a sense of humor but would also be unable to assemble such incongruities, which are an essential part of imagination, and hence, of intelligence.

4.7. Emotions

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science
—Albert Einstein
Humans experience a wide variety of emotions, some of which, as Einstein suggests, motivate art and science. Emotions, which are a psychophysical phenomenon, are closely associated with pleasure (or displeasure), passion, desires, motivation, aesthetics, and joy. Every human experience is actually emotional; it is a response of the body and the brain, and every experience is about what action to take, namely whether to act so as to repeat the experience or to avoid it (private communication, Terry Deacon).
Emotions play an essential part in human thinking as neuroscientist Antonio Damasio has shown:
Damasio’s studies showed that emotions play an important part in the human rational thinking mechanism.
[44] p. 326
For decades, biologists spurned emotion and feeling as uninteresting. But Antonio Damasio demonstrated that they were central to the life-regulating processes of almost all living creatures. Damasio’s essential insight is that feelings are “mental experiences of body states”, which arise as the brain interprets emotions, themselves physical states arising from the body’s responses to external stimuli. (The order of such events is: I am threatened, experience fear, and feel horror.) He has suggested that consciousness, whether the primitive “core consciousness” of animals or the “extended” self-conception of humans, requiring autobiographical memory, emerges from emotions and feelings.
Terrence Deacon [9] pp. 512, 533 in Incomplete Nature also claims that emotions are essential for mental activities:
Emotion … is not merely confined to such highly excited states as fear, rage, sexual arousal, love, craving, and so forth. It is present in every experience, even if often highly attenuated, because it is the expression of the necessary dynamic infrastructure of all mental activity … Emotion … is not some special feature of brain function that is opposed to cognition
Computers are incapable of emotions, which in humans are inextricably linked to pleasure and pain; computers have no pain or pleasure, and hence nothing to get emotional about. In addition, they have none of the chemical neurotransmitters, which is another reason why computers are incapable of emotions and the drives associated with them. Without emotions, computers lack the drives that are an essential part of intelligence and of the striving to achieve a purpose, an objective, or a goal. Emotions play a key role in curiosity, creativity, and aesthetics, three other factors that are essential for human intelligence.
Singularitarians are essentially dualists who embrace the dualisms between body and mind and between reason and emotion. They are the last of the behaviorists, having replaced the Skinner box with a silicon box (today’s computers). The mind is not just the brain, and the brain is not just a network of neurons operating as logic gates. The human mind extends into the body, into our language according to Logan [46], and into our tools according to Clark and Chalmers [47].

4.8. Curiosity

Curiosity is one of the permanent and certain characteristics of a vigorous intellect
—Samuel Johnson
I have no special talent. I am only passionately curious.
—Albert Einstein
Curiosity is both an emotion and a behavior. Without the emotion of curiosity, the behavior of curiosity is not possible, and given that computers are not capable of emotions, they cannot be curious, and hence lack an essential ingredient of intelligence. Curiosity entails the anticipation of reward, which in the brain comes in the form of neurotransmitters like dopamine and serotonin. No such mechanism exists in computers, and hence they totally lack native curiosity. Curiosity, if it exists at all, would have to be programmed into them. In fact, that is exactly what NASA did when it sent its Mars rover, aptly named Curiosity, to explore the surface of Mars.
Curiosity and intelligence are highly correlated. Advances in knowledge have always been the result of someone’s curiosity. Curiosity is a characteristic that only a living organism can possess, and no living organism is more curious than humans. How could a computer create new forms of knowledge without being curious? Any curiosity it displayed would have to be that of the programmers who created the AGI creature. Since the curiosity programmed into the AGI device cannot exceed native human curiosity, this represents a real barrier to the achievement of the Singularity.

4.9. Creativity and Aesthetics

The role of creativity and aesthetics in the fine arts is rather obvious, but they also play a key role in science, engineering, product design, and general problem solving. Humans solve problems and make discoveries using both out-of-the-box and elegant thinking that defies the logical, one-thing-at-a-time line of thought characteristic of uncreative thinkers and computers. Creativity is a passionate, emotion-filled pursuit in which creators care about their creation, whether it be a practical well-designed product, a scientific theory, or an objet d’art.
Terrence Deacon [9] pp. 91–92 also weighs in on the question of computers and creativity, arguing that if a series of logical steps were all that is required to be creative, then we would live in a predetermined world in which there is no free will, which is not the world we find:
Consider, however, that to the extent that we map physical processes onto logic, mathematics, and machine operation, the world is being modeled as though it is preformed, with every outcome implied in the initial state. But as we just noted, even Turing recognized that this mapping between computing and the world was not symmetrical. Gregory Bateson [48] p. 58 explains this well:
In a computer, which works by cause and effect, with one transistor triggering another, the sequences of cause and effect are used to simulate logic. Thirty years ago, we used to ask: Can a computer simulate all the processes of logic? The answer was “yes”, but the question was surely wrong. We should have asked: Can logic simulate all sequences of cause and effect? The answer would have been: “no”.
When extrapolated to the physical world in general, this abstract parallelism has some unsettling implications. It suggests notions of predestination and fate: the vision of a timeless, crystalline, four-dimensional world that includes no surprises. This figures into problems of explaining intentional relationships such as purposiveness, aboutness, and consciousness, because as theologians and philosophers have pointed out for centuries, it denies all spontaneity, all agency, all creativity, and makes every event a passive necessity already prefigured in prior conditions. It leads inexorably to a sort of universal preformationism.
Aesthetics also plays a role, and not just in the fine arts and design, but also in science and engineering. Einstein once remarked, “the only physical theories that we are willing to accept are the beautiful ones”. Hermann Bondi [49] confirmed this attitude of Einstein’s when he wrote,
What I remember most clearly was that when I put down a suggestion that seemed to me cogent and reasonable, Einstein did not in the least contest this, but he only said, ‘Oh, how ugly.’ As soon as an equation seemed to him to be ugly, he really rather lost interest in it and could not understand why somebody else was willing to spend much time on it. He was quite convinced that beauty was a guiding principle in the search for important results in theoretical physics.
The idea of a computer having a sense of aesthetics is preposterous given that the feeling of beauty is an emotion and computers cannot have emotions. Emotions involve the nervous system and the psyche, and since computers have neither a nervous system nor a psyche, they cannot feel emotions like the feeling of beauty. So, once again, it is hard to imagine AGI competing with or developing anything like human intelligence.

4.10. Values and Morality

Because a computer has no purpose, objectives, or goals, it cannot have any values, as values are related to one’s purpose, objectives, and goals. As is the case with curiosity, values will have to be programmed into a computer, and hence the morality of the AGI device will be determined by the values programmed into it; the morality of the AGI device will thus be that of its programmers. This gives rise to a conundrum: whose values will be inputted, and who will make this decision? This is a critical issue in a democratic society. Not only that, but there is a potential danger: what if a terrorist group or a rogue state were to create or gain control of a super-intelligent computer or robot that could be weaponized? Those doing AGI research cannot take comfort in the notion that they will not divulge their secrets to irresponsible parties. Those that built the first atomic bomb thought that they could keep their technology secret, but the proliferation of nuclear weapons of mass destruction became a reality. Considering how many hackers are operating today, is not the threat of super-intelligent AGI agents a real concern?
Intelligence, artificial or natural, entails making decisions, and making decisions requires having a set of values. So, once again, as was the case with curiosity, the decision-making ability of an AGI device cannot exceed that of human decision making as it will be the values that are programmed into the machine that will ultimately decide which course of action to take and which decisions are made.

5. Artificial Intelligence and the Figure/Ground Relationship

Another insight of McLuhan’s, namely the relationship of figure and ground, can help to explain why so many intelligent scientists can go so far astray in advocating the idea that a machine can think. The meaning of any “figure” according to McLuhan, whether it is a technology, an institution, a communication event, a text, or a body of ideas, cannot be determined if that figure is considered in isolation from the ground or environment in which it operates. The ground provides the context from which the full meaning or significance of a figure emerges. The following examples illustrate the way in which the context can transform the meaning of a figure: a smokestack belching smoke, once a symbol of industrial progress, is today a symbol of pollution; the suntan, once a sign of hard work in the fields, is now a symbol of affluence and holidaying and will probably evolve into a symbol of reckless disregard for health and the risk of skin cancer.
We believe that a computer operates strictly on the figure of the problem that it is asked to solve. It has no idea of the ground or the context in which the problem arose. It is therefore not thinking, but merely manipulating symbols for concepts that it has never experienced perceptually; hence, it has no emotions and no caring. Basically, the machine does not give a damn. Thinking entails making use of a bit of wisdom, which can only be acquired with experience that has both an intellectual and an emotional component, the latter of which is impossible for a machine. In other words, the machine cannot have emotions, feel love, pain, or regret, or have a sense of what is just or beautiful, and hence can never become wise. The computer can only manipulate the figure of a problem; it has no clue about the ground or the context of the problem.
Can machines think? Just because a machine can calculate or compute faster than a human does not mean that it is thinking. It is just carrying out computations that the human who programs it has asked it to do. Before computers, humans used abacuses and slide rules to facilitate their calculations. It never occurred to anyone to suggest that abacuses or slide rules could think. No—they only carried out operations that their human operators made them perform. According to Andy Clark [50], these devices became extensions of the human mind. Computers, like abacuses and slide rules, only carry out operations their human operators/programmers ask them to do, and as such, they are extensions of the minds of their operators/programmers. They cannot think, as they have no free will; in fact, they have no will at all.
We would imagine that proponents of AGI or strong AI would claim that free will is an illusion and that our argument is simply a category error. Well, if there is no such thing as free will, then there is no difference between a human and a computer, as both are subject to the laws of physics. If that is the case, why do we value human life more than computer life? Should a person who destroys a computer be charged with murder, as we could have been when we abandoned our former out-of-date computers to recycling? The answer is, of course, no, but it does raise the question: would an AGI computer in the post-Singularity days be protected against murder like the humans that it will replace? Which raises another question: if post-Singularity computers control society, how will they enforce the law?
Our position that computers or machines are mindless is supported by many of our confreres who are also Singularity skeptics. Here is a sample of a few thinkers who believe that AI computers are mindless:
Much of the power of artificial intelligence stems from its mindlessness... Unable to question their own actions or appreciate the consequences of their programming—unable to understand the context in which they operate—they can wreak havoc either as a consequence of flaws in their programming or through the deliberate aims of their programmers.
[51] p. 59
Silicon-based information processing requires interpretation by humans to become meaningful and will for the foreseeable future. We have little to fear from thinking machines and more to fear from the increasingly unthinking humans who use them.
[52] p. 89
Machines (at least so far, and I don’t think this will change with a Singularity) lack vision.
[53] p. 93
Machines (humanly constructed artifacts) cannot think, because no machine has a point of view—that is, a unique perspective on the worldly referents of its internal symbolic logic.
[54] p. 7
Machines don’t think. They approximate functions. They turn inputs into outputs … much of machine thinking is just machine hill climbing … Tomorrow’s machines will look a lot like today’s—old algorithms running on faster machines.
[55] pp. 423–426
Basically, as each of the five critics above has pointed out, AGI is a figure without a ground, and, as Marshall McLuhan observed, a figure without a ground is dangerous because, lacking context, it lacks meaning.
In each of these quotes, the ground that is missing to support the machine’s mechanical processing is: for Carr [51], the ability to “appreciate the consequences of their programming”; for Fitch [52], “interpretation”; for Pepperberg, “vision”; for Trehub, “a point of view”; and for Kosko, thinking replaced by “machine hill climbing”.
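For readers unfamiliar with the term, Kosko’s “machine hill climbing” can be illustrated with a minimal sketch (our own example, not Kosko’s code): a purely local, mechanical search that turns inputs into outputs without any grasp of the wider ground it is searching.

```python
import random

def hill_climb(f, x, step=0.1, iterations=1000):
    """Greedy hill climbing: repeatedly propose a random nearby point and
    accept it only if it scores higher. The procedure is purely local; it
    has no model of the landscape it is climbing."""
    best_x, best_val = x, f(x)
    for _ in range(iterations):
        candidate = best_x + random.uniform(-step, step)
        val = f(candidate)
        if val > best_val:
            best_x, best_val = candidate, val
    return best_x

# Maximize f(x) = -(x - 3)^2: the climber mechanically creeps toward x = 3
peak = hill_climb(lambda x: -(x - 3) ** 2, x=0.0)
```

On a landscape with several peaks, the same procedure simply stops on whichever local hill it happens to reach, which is precisely the mechanical, context-free character the quote points to.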

5.1. The Turing Test

Alan Turing, in 1950, developed a criterion to determine whether a machine could exhibit intelligent behavior. The Turing test, as it came to be known, asks whether a human dialoging with an unseen interlocutor over a text-only channel can determine whether they are in conversation with a machine or a human. If the human cannot tell, the computer passes the Turing test. The Turing test might be a necessary condition for a computer possessing human-like intelligence, but it certainly is not a sufficient condition. A smart interlocutor, moreover, could easily determine whether they were conversing with a machine or a human by asking the following personal questions: Have you ever been in love? Who are you closest to in your family? What is your gender? What gives you joy and why? What are your goals in life? What is the ethnic origin of your family? What sports do you enjoy participating in and why? What was the happiest moment in your life, and the saddest? The Turing test is not really a test of intelligence; it is a test of whether a programmer can fool a human into believing that the program is also human. In other words, it is a magician’s trick (private communication with Terry Deacon).

5.2. Machines Do Not Network and They Have No Theory of Mind

Stanislas Dehaene [56] pp. 223–225 points out that machines lack two essential functions for intelligent thinking: a global workspace and a theory of mind. The human mind knows what it knows. Although it has specialized modules, the information in those modules is accessible to the whole brain, similar to Pribram’s holographic brain with its holographic storage network. Computer modules do not have holographic access to information, and a computer module, unlike a human brain module, is not aware of the information in the other modules.
The human mind is capable of responding and attending to other humans, but AI does not have this capability to respond and attend to its users. Computers operate as figure without ground. Ridley [57] pp. 226–227 points out that “human intelligence is not individual thinking at all but collective, collaborative and distributive intelligence”. Networking with humans is possible because humans have language, which animals and computers do not have. The control of fire and living in communal groups led to verbal language and networking. Ridley claims the only hope for AGI will be individual AGI-configured computers that are interlinked by the Internet.
Shafir [58] pp. 300–301 points out that since an AGI computer cannot have a theory of mind, it will never be able to achieve the level of human intelligence. A theory of mind emerges when a human realizes that other humans think the way they do. Since a computer cannot think like a human, it will never be able to develop a theory of mind.

5.3. AIs Have No Goals, Feelings or Emotions and Hence Cannot Act, and They Do Not Care

A computer cannot experience pain, pleasure, or joy, and therefore has no motivation, no goals, and no desire to communicate, according to Enfield [59] pp. 97–98: “Machines comply, but they don’t cooperate” because they have no goals.
David Gelernter [60] pp. 80–83 writes: “Philosophers and scientists wish that the mind was nothing but thinking and that feeling or being played no part. They wished so hard for it to be true that they finally decided it was. Philosophers are only human”.
Edward Slingerland [61] pp. 345–346 regards AGI computers as “directionless intelligences because AI systems are tools not organisms … No matter how good they become [doing things] they don’t actually want to do any of these things… AI systems in and of themselves are entirely devoid of intentions or goals … Motivational direction is the product of natural selection working on biological organisms”.
Since computers operate using an either/or, 0 or 1, true or false logic, they are not capable of metaphor, which Stuart Kauffman [62] pp. 507–509 claims is the basis of human creative thought, whether mathematical, artistic or scientific.
Abigail Marsh [63] pp. 415–417 cites a patient who is incapable of any emotions because of damage to his ventromedial prefrontal cortex. The patient was unable to make use of the intelligence and knowledge residing in other, unaffected areas of his brain because, without his emotional capacity, he was “unable to make decisions or take action”. No emotions, no actions. One, therefore, cannot expect any thought or initiative from an AGI device. It is apathetic, has no capacity for emotions, and has no initiative unless instructed by a human. It all comes down to the fact that there is no reward for a computer, and hence no motivation. The notion, expressed by some advocates of the Singularity, that AGI computers could take over the human race is therefore without basis. Roy Baumeister [64] p. 73 comes to a similar conclusion.
Jonathan Gottschall [65] pp. 179–180 points out that attempts to have AGI computers generate compelling stories have been a dismal failure. The creators of great stories and other forms of artistic expression are intelligent, but they have another quality, which we will call soulfulness. By soul in this context we mean something that is not supernatural but something that has a strong emotional component, in addition to the analytic intelligence that an artist must also possess. Musicologists have formulated the rules that Mozart used in composing his music and then fed those rules into a computer along with a simple Mozart melody. The result was music that sounded a bit like Mozart but had none of the emotional and aesthetic appeal of the music that Mozart actually composed. Mozart had soul and passion; the computer has a melody, a set of rules, and the ability to combine them, but not the ability to create beauty. An artist knows when to break the rules, whereas the computer can only stick to the rules. There is a parallel with creative science. The scientist who breaks new ground breaks the rules of the former Kuhnian paradigm by combining intelligence with intuition and creative imagination.
Saffo [66] pp. 206–208 suggests that one of the motivations to create AGI is that “we want someone to talk to”. This suggestion raises the question: if we have the possibility to talk to each other, why would we need an AGI computer to fill this need? Saffo also suggests that “we of course will attribute feelings and rights to AIs—and eventually they will demand it”. How can machines without a sense of self or a will to live demand anything? And how would they arrive at a notion of rights unless that notion were programmed into them? Finally, the last straw for us is Saffo’s contention that sentience is universal and not limited to living things: “It’s just a small step to speculate about what trees, rocks—or AIs—think”. We guess that means trees and rocks have rights and we can talk to them. Maybe the tree huggers know something that we do not.
Kosslyn [67] pp. 228–230 posits that AGI computers will want to have purpose and, as a consequence, will want to support and elevate the human condition. This is another example of a Singularity advocate assuming that a machine without any capacity for emotions could desire something that would give it pleasure. We contend that without emotions and without the ability to feel pleasure, there can be no desire and no purpose.

6. Decision Making, Experience, Judgement and Wisdom

The only Source of Knowledge is Experience.
—Albert Einstein
Even if the challenge of programming an AGI device with a set of values and a moral compass that represents the will of the democratic majority of society is met, there remains the question of whether the AGI device has the judgment and wisdom to make correct decisions. In other words, is it possible to program wisdom into a logical device that has no emotions and no experiences upon which to base a decision? Wisdom is not a question of having the analytic skills to deal with a new situation, but rather of having a body of experiences to draw upon to guide one’s decisions. How does one program experience into a logic machine?
Intelligence requires the ability to calculate or compute, but the ability to calculate or compute does not necessarily provide the capability to make judgments and decisions unless values are available, which, for an AGI device, requires input from a human programmer.

7. Conclusions: How Computers Will Make Us Humans Smarter

Douglas Rushkoff [12] pp. 354–355 invites us to consider computers not as figure but as ground. He suggests that the leap forward in intelligence will come not from AGI-configured computers that have the potential to be smarter than us humans, but from the environment that computers create. Human intelligence will increase by allowing human minds to network and create something greater than what a single human mind, or a small group of co-located minds, can create. The medieval university made us humans smarter by bringing scholars into contact with each other. The city is another example of a medium that allowed thinkers and innovators to network, and hence increased human intelligence. The printing press had a similar impact. With networked computer technology, a mind on a global scale is emerging.
In the past, schools of thought emerged that represented the thinking of a group or team of scholars. They were named after cities. What is emerging now are schools of thought and teams of scholars that are not city-based but exist on a global scale. For example, we once talked about the Toronto school of communication and media studies, consisting of scholars such as Harold Innis, Marshall McLuhan, Ted Carpenter, Eric Havelock, and Northrop Frye, who lived in Toronto and communicated with each other about media and communications. A similar New York school of communication emerged with Chris Nystrom, Jim Carey, John Culkin, Neil Postman, and his students at NYU. Today, that tradition lives on, not as the Toronto School or the New York School, but as the Media Ecology School, with participants in every part of the world. This is what Rushkoff [12] was talking about in his article “Figure or Ground?”, where he pointed out that it is the ground or environment that computers create, and not the figure of the computer by itself, that will give rise to intelligence greater than that of a single human. He expressed this idea as follows: “Rather than towards machines that think, I believe we are migrating toward a networked environment in which thinking is no longer an individual activity nor bound by time and space”.
Marcelo Gleiser [68] pp. 54–55 strikes a similar chord to that of Doug Rushkoff when he points out that many of our technologies act as extensions of who we are. He asks: “What if the future of intelligence is not outside but inside the human brain? I imagine a very different set of issues emerging from the prospect that we might become super-intelligent through the extension of our brainpower by digital technology and beyond—artificially enhanced human intelligence that amplifies the meaning of being human”.

Author Contributions

A.B. and R.K.L. wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.


  1. Chalmers, D. The Singularity: A Philosophical Analysis. J. Conscious. Stud. 2010, 17, 7–65. [Google Scholar]
  2. IEEE Spectrum. Special Report: The Singularity. 2008. Available online: (accessed on 15 January 2016).
  3. Brockman, J. (Ed.) What to Think About Machines that Think; Harper Perennial: New York, NY, USA, 2015. [Google Scholar]
  4. Dreyfus, H. Alchemy and AI; RAND Corporation: Santa Monica, CA, USA, 1965. [Google Scholar]
  5. Dreyfus, H. What Computers Can’t Do; MIT Press: New York, NY, USA, 1972. [Google Scholar]
  6. Dreyfus, H. What Computers Can’t Do, 2nd ed.; MIT Press: New York, NY, USA, 1979. [Google Scholar]
  7. Dreyfus, H. What Computers Still Can’t Do; MIT Press: New York, NY, USA, 1992. [Google Scholar]
  8. Braga, A.; Logan, R. Communication, Information and Pragmatics. In Encyclopedia of Information Science and Technology; Khosrow-Pour, M., Ed.; IGI Global: Hershey, PA, USA, 2017. [Google Scholar]
  9. Deacon, T. Incomplete Nature: How Mind Emerged from Matter; WW Norton and Company: New York, NY, USA, 2012. [Google Scholar]
  10. McLuhan, M. Understanding Media: Extensions of Man; McGraw Hill: New York, NY, USA, 1964. [Google Scholar]
  11. McLuhan, E. Media and Formal Cause; NeoPoiesis Press: New York, NY, USA, 2011. [Google Scholar]
  12. Rushkoff, D. Figure or Ground? In What to Think about Machines that Think; Brockman, J., Ed.; Harper Perennial: New York, NY, USA, 2015; pp. 354–355. [Google Scholar]
  13. Bratton, B. The Evolution Revolution. Available online: (accessed on 27 November 2017).
  14. Kurzweil, R. The Singularity Is Near; Viking Books: New York, NY, USA, 2005. [Google Scholar]
  15. Gibson, K. Ethics and Business: An Introduction; Cambridge University Press: Cambridge, UK, 2007. [Google Scholar]
  16. Sellen, A.; Harper, R. The Myth of the Paperless Office; MIT Press: Cambridge, MA, USA, 2003. [Google Scholar]
  17. Pinker, S. Thinking Does Not Imply Subjugating. In What to Think about Machines that Think; Brockman, J., Ed.; Harper Perennial: New York, NY, USA, 2015; pp. 5–8. [Google Scholar]
  18. Tipler, F. If You Can’t Beat ‘Em, Join ‘Em. In What to Think about Machines that Think; Brockman, J., Ed.; Harper Perennial: New York, NY, USA, 2015; pp. 17–18. [Google Scholar]
  19. Lisi, A.G. I, for One, Welcome our Machine Overlords. In What to Think about Machines that Think; Brockman, J., Ed.; Harper Perennial: New York, NY, USA, 2015; pp. 22–24. [Google Scholar]
  20. McCorduck, P. An Epochal Human Event. In What to Think about Machines that Think; Brockman, J., Ed.; Harper Perennial: New York, NY, USA, 2015; pp. 51–53. [Google Scholar]
  21. Harris, S. Can We Avoid a Digital Apocalypse? In What to Think about Machines that Think; Brockman, J., Ed.; Harper Perennial: New York, NY, USA, 2015; pp. 408–411. [Google Scholar]
  22. Paul, G. What will AI’s Think of Us. In What to Think about Machines that Think; Brockman, J., Ed.; Harper Perennial: New York, NY, USA, 2015; pp. 391–393. [Google Scholar]
  23. Croak, J. Fear of a God, Redux. In What to Think about Machines that Think; Brockman, J., Ed.; Harper Perennial: New York, NY, USA, 2015; pp. 498–499. [Google Scholar]
  24. Hofstadter, D. Tech Luminaries Address Singularity. 2008. Available online: (accessed on 15 March 2016).
  25. Horgan, J. The Consciousness Conundrum. 2008. Available online: (accessed on 3 March 2016).
  26. Hardy, Q. Interesting Thoughts brought to Mind. In What to Think about Machines that Think; Harper Perennial: New York, NY, USA, 2015; pp. 190–193. [Google Scholar]
  27. Taylor, T. Denkraumverlust. In What to Think about Machines that Think; Brockman, J., Ed.; Harper Perennial: New York, NY, USA, 2015; pp. 251–254. [Google Scholar]
  28. Quach, K. How DeepMind’s AlphaGo Zero Learned All by Itself to Trash World Champ AI AlphaGo. Available online: (accessed on 18 October 2017).
  29. Harari, H. Thinking about People Who Think Like Machines. In What to Think About Machines that Think; Brockman, J., Ed.; Harper Perennial: New York, NY, USA, 2015; p. 434. [Google Scholar]
  30. Devlin, K. Leveraging Human Intelligence. In What to Think About Machines that Think; Brockman, J., Ed.; Harper Perennial: New York, NY, USA, 2015; pp. 74–76. [Google Scholar]
  31. Jeffrey, K. In Our Image. In What to Think about Machines that Think; Brockman, J., Ed.; Harper Perennial: New York, NY, USA, 2015; pp. 366–369. [Google Scholar]
  32. Aguirre, A. The Odds on AI. In What to Think about Machines that Think; Brockman, J., Ed.; Harper Perennial: New York, NY, USA, 2015; pp. 212–214 & 342–344. [Google Scholar]
  33. Raza, S.A. The Value of Artificial Intelligence. In What to Think about Machines that Think; Brockman, J., Ed.; Harper Perennial: New York, NY, USA, 2015; pp. 257–259. [Google Scholar]
  34. Moore, G. Tech Luminaries Address Singularity. 2008. Available online: (accessed on 25 May 2016).
  35. Atran, S. Are We Going in the Wrong Direction? In What to Think about Machines that Think; Brockman, J., Ed.; Harper Perennial: New York, NY, USA, 2015; pp. 220–222. [Google Scholar]
  36. Dyson, G. Analog, the Revolution that Dares Not Speak Its Name. In What to Think about Machines that Think; Brockman, J., Ed.; Harper Perennial: New York, NY, USA, 2015; pp. 255–256. [Google Scholar]
  37. Krauss, L.M. What Me Worry? In What to Think about Machines that Think; Brockman, J., Ed.; Harper Perennial: New York, NY, USA, 2015; pp. 171–174. [Google Scholar]
  38. Einstein, A.; Infeld, L. The Evolution of Physics; Simon and Schuster: New York, NY, USA, 1938. [Google Scholar]
  39. Kelly, K. Call Them Artificial Aliens. In What to Think about Machines that Think; Brockman, J., Ed.; Harper Perennial: New York, NY, USA, 2015; pp. 245–247. [Google Scholar]
  40. Einstein, A. What Life Means to Einstein: An Interview by George Sylvester Viereck. The Saturday Evening Post, 26 October 1929; 117. [Google Scholar]
  41. Kane, G. We Need More than Thought. In What to Think about Machines that Think; Brockman, J., Ed.; Harper Perennial: New York, NY, USA, 2015; p. 219. [Google Scholar]
  42. Rafaeli, S. The Moving Goalposts. In What to Think about Machines that Think; Brockman, J., Ed.; Harper Perennial: New York, NY, USA, 2015. [Google Scholar]
  43. Ma, M. The Power of Humor in Ideation and Creativity. Psychology Today. 16 June 2014. Available online: (accessed on 30 July 2016).
  44. Martínez-Miranda, J.; Aldea, A. Emotions in Human and Artificial Intelligence. Comput. Hum. Behav. 2005, 21, 323–341. [Google Scholar] [CrossRef]
  45. Potin, J. The neuroscientist Antonio Damasio explains how minds emerge from emotions and feelings. Technology Review. 17 June 2014. Available online: (accessed on 27 June 2015).
  46. Logan, R.K. The Extended Mind: The Origin of Language and Culture; University of Toronto Press: Toronto, ON, Canada, 2007. [Google Scholar]
  47. Clark, A.; Chalmers, D. The extended mind. Analysis 1998, 58, 10–23. [Google Scholar] [CrossRef]
  48. Bateson, G. Steps to An Ecology of Mind; Random House/Ballantine: New York, NY, USA, 1972. [Google Scholar]
  49. Bondi, H. Einstein: The Man and His Achievement; Withrow, G.J., Ed.; British Broadcasting Corporation: London, UK, 1973; p. 82. [Google Scholar]
  50. Clark, A. Natural-Born Cyborgs; Oxford University Press: New York, NY, USA, 2003. [Google Scholar]
  51. Carr, N. The Control Crisis. In What to Think about Machines that Think; Brockman, J., Ed.; Harper Perennial: New York, NY, USA, 2015; pp. 59–61. [Google Scholar]
  52. Fitch, W.T. Nano-Intentionality. In What to Think about Machines that Think; Brockman, J., Ed.; Harper Perennial: New York, NY, USA, 2015; pp. 89–92. [Google Scholar]
  53. Trehub, A. Machines Cannot Think. In What to Think about Machines that Think; Harper Perennial: New York, NY, USA, 2015; p. 71. [Google Scholar]
  54. Pepperberg, I. A Beautiful Visionary Mind. In What to Think about Machines that Think; Brockman, J., Ed.; Harper Perennial: New York, NY, USA, 2015; pp. 93–94. [Google Scholar]
  55. Kosko, B. Thinking Machines—Old Algorithms on Faster Computers. In What to Think about Machines that Think; Brockman, J., Ed.; Harper Perennial: New York, NY, USA, 2015; pp. 423–426. [Google Scholar]
  56. Dehaene, S. Two Cognitive Functions Machine Lack. In What to Think about Machines that Think; Brockman, J., Ed.; Harper Perennial: New York, NY, USA, 2015; pp. 223–225. [Google Scholar]
  57. Ridley, M. Among the Machines Not Within the Machines. In What to Think about Machines that Think; Brockman, J., Ed.; Harper Perennial: New York, NY, USA, 2015; pp. 226–227. [Google Scholar]
  58. Shafir, E. Blind to the Core of Human Experience. In What to Think about Machines that Think; Brockman, J., Ed.; Harper Perennial: New York, NY, USA, 2015; pp. 300–301. [Google Scholar]
  59. Enfield, N.J. Machines Aren’t into Relationships. In What to Think about Machines that Think; Brockman, J., Ed.; Harper Perennial: New York, NY, USA, 2015; pp. 97–98. [Google Scholar]
  60. Gelernter, D. Why Can’t ‘Being’ or ‘Happiness’ Be Computed. In What to Think about Machines that Think; Brockman, J., Ed.; Harper Perennial: New York, NY, USA, 2015; pp. 80–83. [Google Scholar]
  61. Slingerland, E. Directionless Intelligence. In What to Think about Machines that Think; Brockman, J., Ed.; Harper Perennial: New York, NY, USA, 2015; pp. 345–346. [Google Scholar]
  62. Kauffman, S. Machines that Think? Nuts! In What to Think about Machines that Think; Brockman, J., Ed.; Harper Perennial: New York, NY, USA, 2015; pp. 507–509. [Google Scholar]
  63. Marsh, A. Caring Machines. In What to Think about Machines that Think; Brockman, J., Ed.; Harper Perennial: New York, NY, USA, 2015; pp. 415–417. [Google Scholar]
  64. Baumeister, R. No ‘I’ and no Capacity for Malice. In What to Think about Machines that Think; Brockman, J., Ed.; Harper Perennial: New York, NY, USA, 2015; pp. 72–73. [Google Scholar]
  65. Gottschall, J. The Rise of Storytelling Machines. In What to Think about Machines that Think; Brockman, J., Ed.; Harper Perennial: New York, NY, USA, 2015; pp. 179–180. [Google Scholar]
  66. Saffo, P. What will the Place of Humans Be? In What to Think about Machines that Think; Brockman, J., Ed.; Harper Perennial: New York, NY, USA, 2015; pp. 206–208. [Google Scholar]
  67. Kosslyn, S.M. Another Kind of Diversity. In What to Think about Machines that Think; Brockman, J., Ed.; Harper Perennial: New York, NY, USA, 2015; pp. 228–230. [Google Scholar]
  68. Gleiser, M. Welcome to Your Transhuman Self. In What to Think about Machines that Think; Brockman, J., Ed.; Harper Perennial: New York, NY, USA, 2015; pp. 54–55. [Google Scholar]
