The rapid progress of observation and measurement technologies in neuroscience and physiology has revealed various types of brain activity, and the recent progress of artificial intelligence (AI) technologies represented by deep learning (DL) methods [1] has been remarkable. It may therefore appear inevitable that artificial consciousness will be realized soon; however, owing to fundamental limitations of DL, this seems difficult. The main reason is that current DL emphasizes the perceptual link between sensory data and labels and lacks strong connections with the motor system; therefore, it does not involve physical embodiment or social interaction, both of which develop a rich loop including perception and action together with attention, cognition, and prediction (Figure 1). This loop is essential for consciousness research, including research on unconsciousness.
Cognitive developmental robotics (CDR) [2] has long advocated the importance of physical embodiment and social interaction, which have great potential to overcome the above-mentioned limitation. The ideological background of the constructive approach taken by CDR is well explained in chapter 3 of a book by Jun Tani [3], which features Husserl's phenomenology, as follows (Figure 2).
With regard to the relationship between the mind and the body or things, it was Descartes who advocated mind-body dualism and laid the foundation of modern philosophy. It was Husserl who insisted on a "New Cartesianism" that goes beyond Descartes toward transcendental phenomenology and gave it phenomenological consideration. He developed a way of thinking about subjectivity that mediates between the subjective and objective poles, and he greatly influenced the generations after him. He held that the analysis of nature is grounded in individual conscious experiences. Heidegger and Merleau-Ponty extended and evolved Husserl's phenomenological theory.
Heidegger argues that the concept of "being-in-the-world" emerges through dynamic interaction between the future possibilities of individual agents and their past possibilities, without separating subjectivity and objectivity. He also pointed out the importance of forms of social interaction in which individuals can mutually exist under a prior understanding of how individuals interact with purpose.
Merleau-Ponty argues that, in addition to subjectivity and objectivity, the dimension of "physical embodiment" emerges: a body of the same thickness is given to the objects that are touched or viewed when the subject is touching and seeing, and the body can be a place where exchanges between the subjective and objective poles are repeated. In other words, he pointed out the importance of the body as a medium connecting the objective physical world and subjective experience. As mentioned above, this is the basic concept of "physical embodiment" in cognitive developmental robotics.
Based on these ideological backgrounds, CDR has conducted several studies in which computational models were proposed to reproduce cognitive developmental processes using computer simulations and real robot experiments. Although CDR has not addressed consciousness explicitly, here we argue for the possibility of artificial consciousness more explicitly by proposing a working hypothesis based on the nervous system for pain sensation. The hypothesis includes the following points:
A pain nervous system is embedded into robots so that they can feel pain.
Through the development of the mirror neuron system (MNS), robots may feel the pain of others.
That is, emotional contagion, emotional empathy, cognitive empathy, and sympathy/compassion can develop inside robots.
Robots could thus be moral agents and, at the same time, subjects of moral consideration.
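The hypothesized sequence above is an ordered developmental progression, which can be sketched as a simple state machine. This is a minimal illustrative sketch: the stage names come from the hypothesis, but the class and function names and the one-step transition rule are assumptions for illustration only.

```python
from enum import IntEnum, auto

class EmpathyStage(IntEnum):
    """Ordered developmental stages from the working hypothesis."""
    EMOTIONAL_CONTAGION = auto()
    EMOTIONAL_EMPATHY = auto()
    COGNITIVE_EMPATHY = auto()
    SYMPATHY_COMPASSION = auto()

def next_stage(stage: EmpathyStage) -> EmpathyStage:
    """Advance one developmental step; the final stage is absorbing."""
    if stage == EmpathyStage.SYMPATHY_COMPASSION:
        return stage
    return EmpathyStage(stage + 1)

# Walk the full hypothesized progression from contagion to compassion.
stage = EmpathyStage.EMOTIONAL_CONTAGION
while stage != EmpathyStage.SYMPATHY_COMPASSION:
    stage = next_stage(stage)
print(stage.name)  # SYMPATHY_COMPASSION
```

The `IntEnum` ordering encodes the claim that each capacity builds on the previous one; how a real robot would satisfy each transition condition is exactly what the hypothesis leaves open.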
In this article, this hypothesis is discussed from several viewpoints toward its verification. The rest of the paper is organized as follows. First, the nervous system for pain sensation is briefly explained from a neuroscientific viewpoint. Next, a preliminary experiment using a soft tactile sensor is presented as a potential artificial nociceptor system. Then, we discuss the possibility of artificial empathy, morality, and ethics in CDR by integrating existing studies and future issues. The above hypothesis can be regarded as a developmental process toward artificial consciousness.
2. A Nervous System for Pain Sensation
The perception of pain, called nociception, has its own nervous pathways, which differ from the mechanosensory pathways (see Chapter 10 in [4]). Figure 3 shows these two pathways. The pain nervous system transmits two kinds of information through the anterolateral system: the sensory discrimination of pain (location, intensity, and quality) and the affective and motivational responses to pain. The former terminates at the somatosensory cortex (S1, S2), while the latter involves the anterior cingulate and insular regions of the cortex and the amygdala. The pain matrix consists of these four regions (the four rectangles bounded by red lines in Figure 3). Both are ascending pathways.
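The split of one nociceptive signal into a discriminative stream and an affective-motivational stream can be made concrete with a small routing sketch. All names, data structures, and the intensity-to-unpleasantness mapping below are illustrative assumptions, not a model from the literature.

```python
from dataclasses import dataclass

@dataclass
class NociceptiveSignal:
    """Hypothetical stimulus description (fields follow the text:
    location, intensity, and quality)."""
    location: str      # body site
    intensity: float   # 0.0 (none) .. 1.0 (maximal)
    quality: str       # e.g., "sharp", "burning"

def anterolateral_system(signal: NociceptiveSignal) -> dict:
    """Route one signal into the two streams of the pain matrix:
    discriminative (S1/S2) and affective-motivational (ACC, insula,
    amygdala)."""
    discriminative = {
        "targets": ["S1", "S2"],
        "where": signal.location,
        "how_strong": signal.intensity,
        "what_kind": signal.quality,
    }
    affective = {
        "targets": ["anterior_cingulate", "insula", "amygdala"],
        # crude illustrative mapping from intensity to unpleasantness
        "unpleasantness": min(1.0, signal.intensity * 1.2),
    }
    return {"discriminative": discriminative, "affective": affective}

streams = anterolateral_system(NociceptiveSignal("left_hand", 0.8, "sharp"))
print(streams["discriminative"]["targets"])  # ['S1', 'S2']
```

The point of the sketch is only the fan-out: the same afferent event feeds two functionally distinct target sets, which is the structural fact the pain-matrix account relies on.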
The left side of Figure 4 shows the discriminative pain pathway (red) along with the mechanosensory pathway (dark blue), both of which are ascending pathways. The analgesic effect arises from the activation of descending pain-modulating pathways that project to the dorsal horn of the spinal cord from the somatosensory cortex through the amygdala, hypothalamus, and periaqueductal gray, and then through parts of the midbrain (e.g., the raphe nuclei), regulating the transmission of information to higher centers. These projections provide a balance of inhibitory (the traditional view) and facilitatory influences that ultimately determines the efficacy of nociceptive transmission. Figure 3 shows these descending pathways (broken blue arrows), and the top right of Figure 4 indicates the local interaction of the descending pathway from the raphe nuclei.
In addition to these projections, local interactions between mechanoreceptive afferents and neural circuits within the dorsal horn can modulate the transmission of nociceptive information to higher centers. The bottom right of Figure 4 illustrates this situation, described by the "gate theory of pain" of Ronald Melzack and Patrick Wall [5]. They proposed that the flow of nociceptive information through the spinal cord is modulated by the concomitant activation of the large myelinated fibers associated with low-threshold mechanoreceptors [4]. This explains our ability to reduce the sensation of sharp pain by activating low-threshold mechanoreceptors ("kiss it and make it well").
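The gating idea can be captured in a few lines: an inhibitory interneuron in the dorsal horn, excited by large myelinated (A-beta) mechanoreceptor activity, suppresses the projection neuron that carries small-fiber nociceptive drive upward. This is a minimal sketch of the principle only; the weight and the subtractive form are illustrative assumptions, not fitted physiology.

```python
def gate_output(c_fiber: float, a_beta: float, w_inhib: float = 0.8) -> float:
    """Minimal gate-control sketch (weights are illustrative).

    c_fiber : small-fiber nociceptive drive, 0..1
    a_beta  : large myelinated low-threshold mechanoreceptor drive, 0..1
    A-beta activity excites an inhibitory interneuron that 'closes the
    gate' on the dorsal-horn projection neuron.
    """
    inhibition = w_inhib * a_beta
    return max(0.0, c_fiber - inhibition)

sharp_pain_alone = gate_output(c_fiber=0.9, a_beta=0.0)
rubbed = gate_output(c_fiber=0.9, a_beta=0.7)  # "kiss it and make it well"
print(round(sharp_pain_alone, 2), round(rubbed, 2))  # 0.9 0.34
```

Rubbing the injured site raises `a_beta`, so the same nociceptive drive produces a weaker ascending signal, which is the everyday observation the gate theory was proposed to explain.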
To tackle the issue of consciousness, this study attempts to represent it as a phenomenon of the developmental process of artificial empathy for pain and moral behavior generation. A conceptual model for the former is given in [9], while the latter remains speculative. If a robot is regarded as a moral being capable of exhibiting moral behavior toward others, does it deserve to receive moral behavior from them? If so, can we agree that such robots have conscious minds? This is an issue of ethics toward robots, and it is also related to the legal system. Can we ask such robots to accept some form of responsibility for any accident they cause? If so, how? These issues arise when we introduce robots qualified as moral beings with conscious minds into our society.
Before these issues can be considered, many technical issues must be addressed. Among them, the following should be addressed intensively.
Associate the sensory discrimination of pain with the affective and motivational responses to pain (the construction of the pain matrix and its memory dynamics).
Recall one's own pain experience when a painful situation of others is observed.
Generate appropriate behavior to reduce the pain.
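The three issues above form a pipeline: build and store pain-matrix associations, recall them from observation, and select a pain-reducing behavior. The following sketch makes that pipeline concrete; every function name, field, and threshold is a hypothetical placeholder, not a proposed implementation.

```python
def build_pain_matrix(location: str, intensity: float) -> dict:
    """Issue 1: associate sensory discrimination with affect and store it."""
    return {"where": location, "intensity": intensity,
            "unpleasantness": min(1.0, intensity * 1.2)}

def recall_from_observation(memory: list, observed_location: str) -> list:
    """Issue 2: recall one's own pain experiences when observing
    another agent in a painful situation at the same body site."""
    return [m for m in memory if m["where"] == observed_location]

def select_behavior(recalled: list) -> str:
    """Issue 3: generate behavior intended to reduce the observed pain."""
    if not recalled:
        return "observe"
    worst = max(m["unpleasantness"] for m in recalled)
    return "comfort" if worst > 0.5 else "monitor"

# One pass through the pipeline with illustrative stored experiences.
memory = [build_pain_matrix("hand", 0.8), build_pain_matrix("knee", 0.3)]
recalled = recall_from_observation(memory, "hand")
print(select_behavior(recalled))  # comfort
```

The sketch shows why the three issues are ordered: behavior selection (issue 3) has nothing to act on unless observation triggers recall (issue 2), and recall has nothing to retrieve unless the pain matrix was built from first-person experience (issue 1).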