Special Issue "Frontiers of Embodied Artificial Intelligence: The (r-)evolution of the embodied approach in AI"

A special issue of Philosophies (ISSN 2409-9287).

Deadline for manuscript submissions: closed (1 April 2019).

Special Issue Editor

Prof. Dr. Luisa Damiano
Guest Editor
ESARG (Epistemology of the Sciences of the Artificial Research Group), Department of Ancient and Modern Civilizations, University of Messina, Messina, Italy
Interests: Sciences of the Artificial; Synthetic Method; (Radically) Embodied Cognitive Science and Artificial Intelligence; Cognitive, Developmental, Social Robotics and HRI; Self-organization; Autopoiesis; Synthetic Biology; Synthetic Ethics; Artificial Empathy

Special Issue Information

Dear Colleagues,

It has been nearly three decades since the notion of “embodiment” was adopted in Artificial Intelligence (AI) to characterize a (number of) research approach(es) divergent from the classic – “Computationalist” – one. The main novelty emphasized by this notion is a positive focus on the role(s) played by the biological body in cognitive processes, which discards the traditional assumption that identifies artificial models of natural cognitive processes with purely “software models” – programs for computers reproducing cognitive performances observed in living systems, and primarily in humans. The driving idea of “Embodied AI” is that, in order to successfully explore natural cognitive processes, AI practitioners have to build and study “complete” or “embodied agents”: physically realized machines whose structure and functioning are based on biologically informed theses on adaptation and cognition. In other words, not programs, nor virtual agents, but biological-like robots: “embodied” and “situated” artificial systems that, like biological systems, acquire information about their environment by interacting with it, and, in this sense, learn about their environment through their interactive bodies – something that programs, or computers, cannot do.

Since the early 1990s, an increasing number of sub-divisions of Embodied AI have emerged in an attempt to model in robots the full range of natural cognitive processes, human ones included. To this end, they have initiated an extremely effective exploration of adaptive bodily and neural mechanisms based on physiological, neuro-scientific and ethological research, which has produced more and more adaptable and autonomous biological-like robots. Applicative success has multiplied the novel domains of implementation of the embodied approach (e.g., epigenetic robotics, evolutionary robotics, developmental robotics, affective developmental robotics, among others), and made Embodied AI the current mainstream of AI. Since the late 1990s, the literature has emphasized how the establishment and consolidation of this approach, even if it did not imply the extinction of the classic one, produced a significant metamorphosis in AI research, which some of the proponents of Embodied AI tend to consider a “revolution” – a “paradigmatic shift” in a Kuhnian sense.

Three decades after the emergence of Embodied AI, this special issue intends to explore the frontiers of the (r-)evolution it triggered in AI research. The goal is two-fold. Firstly, the special issue aims at mapping the most significant advancements of the avant-garde research in Embodied AI – from Enactive AI to SB-AI, from Strong Social Robotics to Artificial Empathy, from Android Science to Synthetic Ethics – and the related technical, theoretical, epistemological, and methodological challenges that they pose to AI research. Secondly, on the basis of this inquiry into front-line research in Embodied AI, this special issue intends to address one of the most critical problems in the current debate: defining the borders of Embodied AI, that is, the frontiers that distinguish its approach(es) from Classic AI. The overall purpose is to provide a perspective on contemporary frontier trends in Embodied AI from which it is possible not only to evaluate the depth of the metamorphosis that it generated in AI, but also to identify the obstacles that currently prevent Embodied AI from fully establishing itself as a genuine alternative to Classic AI.

Prof. Dr. Luisa Damiano
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Philosophies is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Embodied AI
  • Enactive AI
  • SB-AI
  • Artificial Consciousness
  • Artificial Empathy
  • Cognitive Robotics
  • Developmental Robotics
  • Epigenetic Robotics
  • Android Science
  • Social Robotics
  • Synthetic Anthropology
  • Synthetic Phenomenology
  • Synthetic Ethics
  • Synthetic Method

Published Papers (8 papers)


Research

Open Access Article
Embodied AI beyond Embodied Cognition and Enactivism
Philosophies 2019, 4(3), 39; https://doi.org/10.3390/philosophies4030039 - 16 Jul 2019
Cited by 1
Abstract
Over the last three decades, the rise of embodied cognition (EC) articulated in various schools (or versions) of embodied, embedded, extended and enacted cognition (Gallagher’s 4E) has offered AI a way out of traditional computationalism—an approach (or an understanding) loosely referred to as embodied AI. This view has split into various branches ranging from a weak form on the brink of functionalism (loosely represented by Clark’s parity principle) to a strong form (often corresponding to autopoietic-friendly enactivism) suggesting that body–world interactions constitute cognition. From an ontological perspective, however, constitution is a problematic notion with no obvious empirical or technical advantages. This paper discusses the ontological issues of these two approaches in regard to embodied AI and its ontological commitments: circularity, epiphenomenalism, mentalism, and disguised dualism. The paper also outlines an even more radical approach that may offer some ontological advantages. The new approach, called the mind-object identity, is then briefly compared with sensorimotor direct realism and with the embodied identity theory.
Open Access Article
Artificial Pain May Induce Empathy, Morality, and Ethics in the Conscious Mind of Robots
Philosophies 2019, 4(3), 38; https://doi.org/10.3390/philosophies4030038 - 13 Jul 2019
Cited by 2
Abstract
In this paper, a working hypothesis is proposed that a nervous system for pain sensation is a key component for shaping the conscious minds of robots (artificial systems). In this article, this hypothesis is argued from several viewpoints towards its verification. A developmental process of empathy, morality, and ethics based on the mirror neuron system (MNS) that promotes the emergence of the concept of self (and others) scaffolds the emergence of artificial minds. Firstly, an outline of the ideological background on issues of the mind in a broad sense is shown, followed by the limitation of the current progress of artificial intelligence (AI), focusing on deep learning. Next, artificial pain is introduced, along with its architectures in the early stage of self-inflicted experiences of pain, and later, in the sharing stage of the pain between self and others. Then, cognitive developmental robotics (CDR) is revisited for two important concepts—physical embodiment and social interaction, both of which help to shape conscious minds. Following the working hypothesis, existing studies of CDR are briefly introduced and missing issues are indicated. Finally, the issue of how robots (artificial systems) could be moral agents is addressed.
Open Access Article
Interaction Histories and Short-Term Memory: Enactive Development of Turn-Taking Behaviours in a Childlike Humanoid Robot
Philosophies 2019, 4(2), 26; https://doi.org/10.3390/philosophies4020026 - 23 May 2019
Cited by 1
Abstract
In this article, an enactive architecture is described that allows a humanoid robot to learn to compose simple actions into turn-taking behaviours while playing interaction games with a human partner. The robot’s action choices are reinforced by social feedback from the human in the form of visual attention and measures of behavioural synchronisation. We demonstrate that the system can acquire and switch between behaviours learned through interaction based on social feedback from the human partner. The role of reinforcement based on a short-term memory of the interaction was experimentally investigated. Results indicate that feedback based only on the immediate experience was insufficient to learn longer, more complex turn-taking behaviours. Therefore, some history of the interaction must be considered in the acquisition of turn-taking, which can be efficiently handled through the use of short-term memory.
Open Access Article
Nonconscious Cognitive Suffering: Considering Suffering Risks of Embodied Artificial Intelligence
Philosophies 2019, 4(2), 24; https://doi.org/10.3390/philosophies4020024 - 17 May 2019
Abstract
Strong arguments have been formulated that the computational limits of disembodied artificial intelligence (AI) will, sooner or later, be a problem that needs to be addressed. Similarly, convincing cases for how embodied forms of AI can exceed these limits make for worthwhile research avenues. This paper discusses how embodied cognition brings with it other forms of information integration and decision-making consequences that typically involve discussions of machine cognition and similarly, machine consciousness. N. Katherine Hayles’s novel conception of nonconscious cognition in her analysis of the human cognition-consciousness connection is discussed in relation to how nonconscious cognition can be envisioned and exacerbated in embodied AI. Similarly, this paper offers a way of understanding the concept of suffering in a way that is different than the conventional sense of attributing it to either a purely physical state or a conscious state, instead grounding at least a type of suffering in this form of cognition.
Open Access Article
Rilkean Memories and the Self of a Robot
Philosophies 2019, 4(2), 20; https://doi.org/10.3390/philosophies4020020 - 25 Apr 2019
Abstract
This paper discusses the concept of Rilkean memories, recently introduced by Mark Rowlands, to analyze the complex intermix of hardware and software related to the self of a robot. The Rilkean memory of an event is related to the trace of that episode left in the body of the individual. It transforms the act of remembering into behavioral and bodily dispositions, thus generating the peculiar behavioral style of the individual, which is at the basis of her autobiographical self. In the case of long-lived operating robots, a similar process occurs: the software of the robot has to cope with the changes that occur in the body of the robot because of damaging events in its operational life. Thus, the robot, in compensating for the damage to its body, acquires a particular behavioral style. The concept of Rilkean memory is essential in self-adapting robotics technologies where human intervention on a robot is not possible, and the robot must cope with its faults, and also in applications concerning green robotics.
Open Access Article
Embodiment: The Ecology of Mind
Philosophies 2019, 4(2), 12; https://doi.org/10.3390/philosophies4020012 - 27 Mar 2019
Abstract
Following a suggestion from G. Bateson, this article enquires into the consequence of the idea of embodiment in philosophy of mind, taking seriously the notion of an ecology of mind. In the first half of this article, after distinguishing between the biological and the systemic approaches to ecology, I focus on three characteristics of the systemic approach. First, that a system is an abstract object that is multiply embodied in a collection of physically distinct heterogeneous objects. Second, that there is a form of circular causality between the level of the elements and that of the system as a whole, as some characteristics of the elements partake in the explanation of how the system functions, while the requirement of the system explains why the elements have the characteristics that they do. The third is the ontological uncertainty that we sometimes find in ecology, where the same term is used to designate both a central component of the ecological system and the system as a whole. In the second half, beginning with a critique of the theory of mind approach, I look into the consequences of conceiving that mind is embodied in a collection of physically distinct heterogeneous objects that interact as elements of a system, rather than enclosed in an individual body.
Open Access Article
Enactivism and Robotic Language Acquisition: A Report from the Frontier
Philosophies 2019, 4(1), 11; https://doi.org/10.3390/philosophies4010011 - 7 Mar 2019
Cited by 1
Abstract
In this article, I assess an existing language acquisition architecture, which was deployed in linguistically unconstrained human–robot interaction, together with experimental design decisions with regard to their enactivist credentials. Despite initial scepticism with respect to enactivism’s applicability to the social domain, the introduction of the notion of participatory sense-making in the more recent enactive literature extends the framework’s reach to encompass this domain. With some exceptions, both our architecture and form of experimentation appear to be largely compatible with enactivist tenets. I analyse the architecture and design decisions along the five enactivist core themes of autonomy, embodiment, emergence, sense-making, and experience, and discuss the role of affect due to its central role within our acquisition experiments. In conclusion, I join some enactivists in demanding that interaction is taken seriously as an irreducible and independent subject of scientific investigation, and go further by hypothesising its potential value to machine learning.

Other

Open Access Essay
The Problem of Meaning in AI and Robotics: Still with Us after All These Years
Philosophies 2019, 4(2), 14; https://doi.org/10.3390/philosophies4020014 - 3 Apr 2019
Cited by 5
Abstract
In this essay we critically evaluate the progress that has been made in solving the problem of meaning in artificial intelligence (AI) and robotics. We remain skeptical about solutions based on deep neural networks and cognitive robotics, which in our opinion do not fundamentally address the problem. We agree with the enactive approach to cognitive science that things appear as intrinsically meaningful for living beings because of their precarious existence as adaptive autopoietic individuals. But this approach inherits the problem of failing to account for how meaning as such could make a difference for an agent’s behavior. In a nutshell, if life and mind are identified with physically deterministic phenomena, then there is no conceptual room for meaning to play a role in its own right. We argue that this impotence of meaning can be addressed by revising the concept of nature such that the macroscopic scale of the living can be characterized by physical indeterminacy. We consider the implications of this revision of the mind-body relationship for synthetic approaches.