Special Issue "AI AND THE SINGULARITY: A FALLACY OR A GREAT OPPORTUNITY?"
Deadline for manuscript submissions: 30 October 2018
Prof. Dr. Adriana Braga
Department of Social Communication, Pontifícia Universidade Católica do Rio de Janeiro (PUC-RJ), R. Marquês de São Vicente, 225—Gávea, Rio de Janeiro 22451-900, RJ, Brazil
Interests: technology; gender; pragmatism; phenomenology; psychoanalysis; media studies; social interaction; ethnography
We are putting together a Special Issue on the notion of the technological Singularity, the idea that computers will one day be smarter than their human creators. Articles both in favour of and against the Singularity are welcome, but the lead article by the Guest Editors Bob Logan and Adriana Braga is quite critical of the notion. Here is the abstract of their paper, The Emperor of Strong AI Has No Clothes: Limits to Artificial Intelligence.
Abstract: Making use of the techniques of media ecology, we argue that the premise of the technological Singularity, based on the notion that computers will one day be smarter than their human creators, is false. We also analyze the comments of other critics of the Singularity, as well as those of supporters of this notion. The notion of intelligence that advocates of the technological Singularity promote does not take into account the full dimension of human intelligence: they treat artificial intelligence as a figure without a ground. Human intelligence, as we will show, is not based solely on logical operations and computation; it also includes a long list of other characteristics, unique to humans, which form the ground that supporters of the Singularity ignore. The list includes curiosity, imagination, intuition, emotions, passion, desires, pleasure, aesthetics, joy, purpose, objectives, goals, telos, values, morality, experience, wisdom, judgment, and even humor.
Prof. Dr. Robert K. Logan
Prof. Dr. Adriana Braga
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. Papers will be published continuously (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are refereed through a peer-review process. A guide for authors and other relevant information for the submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed Open Access monthly journal published by MDPI. Please visit the Instructions for Authors page before submitting a manuscript. English correction and/or formatting fees of 250 CHF (Swiss Francs) will be charged in certain cases for those articles accepted for publication that require extensive additional formatting and/or English corrections.
AI (artificial intelligence)
The list below represents only planned manuscripts. Some of these manuscripts have not yet been received by the Editorial Office. Papers submitted to MDPI journals are subject to peer review.
Title: Metaphysics of Artificial Intelligence
Author: Adriana Braga
Abstract: This article discusses the metaphysical aspects of Artificial Intelligence theories, departing from Cartesian dualism. Two ideas permeate the social imaginary regarding digital technologies: first, that of a utopian (or dystopian) modernity, in which machines will become progressively “intelligent” for the improvement (or domination) of human beings; and second, the idea that human bodies are natural “machines”. The philosophical matrix of this dual conception of the “human machine” and the “machinic human” can be traced back to Descartes. The Cartesian nexus, the conception of the human being as the connection between a physical entity (the body) and a distinct metaphysical entity (the mind), grounds a quasi-religious form of representation of science, as in the fictional images of robots and AI computers.
Keywords: Philosophy of Technology; Technopoly; Artificial Intelligence; Metaphysics.
Title: Memories and time out of place
Author: Clarisse Fukelman
Abstract: This paper approaches the topics of affection, proximity communication and the singularization of subjectivity through the interpretation of texts by two Brazilian authors, Clarice Lispector (1920-1977) and Carlos Drummond de Andrade (1902-1987), in light of the impact of the technological developments brought about by the conquest of space and by information technologies between the 1960s and the 1970s. Both authors respond to the technological advances of their time ("computational machine" or "komunikansia/interplanetary interpathetic") in order to confront other human dimensions, raising issues of power, colonialism and ecology; of the breach between reason and imagination; and of a new rhetoric associated with the cybernetic machine (Barthes). Literature thus participates in the discussions on the theory of communication in the 20th century.
Title: AI, Technological Singularity and Death
Author: Jose Carlos Rodrigues
Abstract: My contribution to the debate on AI and technological singularity considers the propositions of the so-called 'transhumanism'. I do not intend to contest the prophecies or the chronologies of future outcomes posed by their enthusiasts. Departing from my anthropological studies on the role of death in human societies, I intend to discuss some probable - or possible - economic, political and social consequences that may occur in the case that the transhumanist prophecies come true.
Title: The Present and the Future of 'Artificial Intelligence'
Author: Rod Watson
Abstract: This brief paper will sketch the assumptions that must be made in order to claim that 'Artificial Intelligence' (AI) expresses human intelligence and that computer-driven AI will, in the future, operationally improve upon and even supersede human intelligence. The view advocated in this paper largely (though not exclusively) reflects arguments to be found in the sociological approaches of Ethnomethodology (EM) and Conversation Analysis (CA), and especially the determinations of those approaches that are informed by the later work of the Austrian philosopher Ludwig Wittgenstein. Wittgenstein himself, of course, discussed the notion of machines 'thinking', as have those following up his discussion.
Instead of looking at AI as a single concept in vacuo, this paper will adopt the approach to concepts employed by Erving Goffman and Harvey Sacks, each in his own way. Following their general guidelines, this will involve, for instance, converting a one-concept issue (AI alone) into a two-concept issue (AI + the Computational Model of the Human Mind). These two concepts will (following Sacks' term used in another context) be shown to form a 'pair', and, it will be argued, such a pairing is implied in AI proponents' own, varyingly implicit, conception of their work. The work of Wittgensteinian EM and CA is to draw out, raise into view and explicitly discuss any implicitness in this conceptual pairing, and terms such as 'lebenswelt pairs' (Eric Livingston) will be brought in to assist in this process.
The 'AI-Computational Model of Mind' lebenswelt pair of concepts, and the work that is built on this pair, will be described as 'hermeneutic', operating akin to the working of a hermeneutic circle, with each concept reciprocally defining and determining the other in a back-and-forth manner, and thus in a tautological, sealed-off way. This process will be shown to resemble what F. C. Crews calls 'theoreticism': a self-privileging theorisation of the issue that is, in its own terms, impervious to disconfirmation.
How, then, can we seek to launch a disconfirmation of those proponents of AI who employ this lebenswelt pair? The only way is not to play the AI game in its own, conveniently self-confirming, terms. For instance, one can challenge the cognitivistic conception of human mind, and therefore of intelligence, that the second of these concepts employs: information processing, etc. In particular, we can challenge the allegedly unitary, context-free, non-praxiological, methodologically individualist (mutatis mutandis) nature of this conception of mind, and we can also challenge its reductionist nature. Moreover, to adapt a quip by Wittgenstein, an additional problem with the 'AI-Mind/Intelligence' lebenswelt pair is that in it, language goes on holiday, or at least arrives too late. In some cases too, one can challenge a further reduction that this AI conception of mind affords, namely the attempt to map the various operations of mind as conceived by AI proponents onto the anatomical structures and physiological processes of the brain, a move which, if permitted to go unchallenged, surely threatens the very existence of the social sciences and much of psychology too. Such a reduction is usually termed 'physicalism' or 'materialism'.
Some of this challenge by EM/CA will include questioning the focus of AI on the 'Einzelindividuum', i.e., its methodological individualism. Part of the challenge will involve a modification of John Searle's (and like-minded philosophers') arguments about AI. Another part will comprise a critique questioning the characterisation of the purportedly unitary nature of human mind/intelligence as formulated by proponents of AI, and an EM/CA characterisation of mind/intelligence will be adduced by way of viable, empirically grounded contrast. For EM and CA, mind, as shown in intelligences, can be seen as pluralistic, situated, collaborative and multifarious, lodged in what Wittgenstein termed various 'forms of life', and in contexts within those forms; thus intelligences are not amenable to description in terms of a single, universal standard imposed by the analyst. In this way, intelligences may be conceived as part of the intersubjective world, where parties to a social setting make sense, setting-sensitive sense, of that setting and do so collaboratively. In this and so many other respects, language practices, deployed collectively in situated social interaction, are central to intelligence or to making sense of a given setting: they must be analysed as primary, as doing constitutive work.
If this more naturalistic conception of intelligences is propounded, then the claims of AI for both the present and the future are undermined. If AI cannot credibly be conceived as modelling actual mind and intelligences, then the plethora of promissory notes issued daily by its proponents can be seen to be fakery.
Author: Alex Webb
Abstract: This paper will argue that the question of a technological Singularity is misplaced, revealing anthropocentric biases towards cognition. The social constructs that frame the capacities of a technological Singularity as “more” or “less” than human intelligence are the very ones impeding the development of artificial intelligence (AI), both preventing us from recognizing existing forms of computational cognition and constraining our augmentation of it. Computational intelligence is radically different from our own, to the degree that we cannot recognize certain forms of cognition that could be radically productive. When we consider the many global issues of climate change and material use in which computation could assist, our collective bias towards our own cognitive structures is not simply arresting the development of AI; it is dangerous to a planetary degree.
This paper will examine the theories of Benjamin Bratton and Neil Leach, as well as those of roboticists Rodney Brooks and Hod Lipson, ultimately arguing that it is the notion of a singularity itself that subverts the development of AI.
Author: Robert K. Logan
Abstract: The recent detection of a gravitational wave from the merger of two neutron stars, which succeeded only because scientists did not trust their AI-configured automated data-processing programs alone, underscores the thesis that one cannot rely on artificial intelligence by itself. The case study reviewed here illustrates the point that AI combined with human intervention produces the most desirable results and that AI by itself will never totally replace human intelligence.
Ethan Siegel, astrophysicist, science communicator and NASA columnist, in a recent article entitled LIGO’s Greatest Discovery Almost Didn’t Happen, demonstrated that AI-configured computers will never replace human intelligence but that they are nevertheless important tools that enhance it. If the scientists had relied solely on the results of their AI-configured automated data processing, they would have missed an important, nay critical, observation: the production of gravitational waves from the merger of two neutron stars, an extremely rare event never before observed.
There are altogether three observatories for detecting gravitational waves: the two LIGO (Laser Interferometer Gravitational-Wave Observatory) detectors located at Hanford, Washington, and Livingston, Louisiana, and the Virgo detector, operated by the EGO (European Gravitational Observatory), located near Pisa, Italy. The three detectors were in agreement when they observed the first detected gravitational wave that emanated from the merger of two black holes.
A short time after the detection of the first gravitational wave at all three detectors, a signal was received at the Hanford detector consistent with the merger of two neutron stars. The problem was that no signal was registered at the other two detectors, as should have been the case, according to the automated data-processing program in use at the three detectors, if a gravitational wave had arrived at our planet. Without corroborating evidence from the two other detectors, the team at Hanford would have been forced to conclude that the signal was not the detection of a gravitational wave but rather a glitch in the system.

One of the scientists, Reed Essick, however, decided that it was worth examining the data from the other detectors to see whether a signal had been missed because of a glitch at those detectors. He went through the painstaking task of examining every signal that might have been received by the Livingston detector around the time of the event detected at Hanford, and to his delight he found that a signal had indeed been registered at Livingston but had been overlooked by the automated computer program because of a glitch at that detector. An analysis of the Virgo detector near Pisa revealed that, at the time of the event at Hanford, it was in a blind spot for observing the merger of the two neutron stars. Corroborating data came from NASA’s Fermi satellite, which had detected a “short period gamma ray burst” that arrived two seconds after the gravitational wave and was consistent with the merger of two neutron stars. As a result of the due diligence and perseverance of the LIGO team and of scientists like Essick, an astronomically important observation was rescued. Siegel drew the following conclusion from this episode:
Here’s how scientists didn’t let it slip away… If all we had done was look at the automated signals, we would have gotten just one “single-detector alert,” in the Hanford detector, while the other two detectors would have registered no event. We would have thrown it away, all because the orientation was such that there was no significant signal in Virgo, and a glitch caused the Livingston signal to be vetoed. If we left the signal-finding solely to algorithms and theoretical decisions, a 1-in-10,000 coincidence would have stopped us from finding this first-of-its-kind event. But we had scientists on the job: real, live, human scientists, and now we’ve confidently seen a multi-messenger signal, in gravitational waves and electromagnetic light, for the very first time.
And the conclusion that I reached as a result of Siegel’s story and conclusion is as follows:
- AI combined with human intervention produces the most desirable results.
- AI by itself will never totally replace human intelligence.
- One cannot rely on AI, no matter how sophisticated, to always get the right answer or reach the correct conclusion.
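The veto logic Siegel describes can be illustrated with a toy sketch. This is a deliberately simplified, hypothetical model (the function names, data structure and two-detector threshold are assumptions for illustration, not LIGO's actual pipeline): an automated program discards a single-detector alert, while a human review step that re-examines glitch-flagged data recovers the event.

```python
# Toy sketch of coincidence-based event vetting (hypothetical; not LIGO's pipeline).
# The automated pipeline discards single-detector alerts; a human review step
# recovers candidates whose counterpart signal was masked by a glitch flag.

def automated_alert(signals):
    """Accept an event only if at least two detectors report a clean (glitch-free) signal."""
    clean = [d for d, s in signals.items() if s["detected"] and not s["glitch"]]
    return len(clean) >= 2

def human_review(signals):
    """Re-examine glitch-flagged data: a detection hidden behind a glitch still counts."""
    seen = [d for d, s in signals.items() if s["detected"]]
    return len(seen) >= 2

# The neutron-star merger scenario described above, schematically:
signals = {
    "Hanford":    {"detected": True,  "glitch": False},
    "Livingston": {"detected": True,  "glitch": True},   # real signal vetoed by a glitch
    "Virgo":      {"detected": False, "glitch": False},  # blind spot for this event
}

print(automated_alert(signals))  # False: the algorithm alone would discard the event
print(human_review(signals))     # True: human inspection of the glitch data recovers it
```

The point of the sketch is the asymmetry between the two checks: the same data yields opposite verdicts depending on whether glitch-flagged signals are thrown away automatically or inspected by a person.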
- Siegel, E. “LIGO’s Greatest Discovery Almost Didn’t Happen”. Available online: https://medium.com/starts-with-a-bang/ligos-greatest-discovery-almost-didn-t-happen-a315e328ca8 (accessed on 24 April 2018).