
Abel: Integrating Humanoid Body, Emotions, and Time Perception to Investigate Social Interaction and Human Cognition

E. Piaggio Research Center, University of Pisa, 56126 Pisa, Italy
Creative Director of Biomimic Studio, London SE18 5T, UK
Information Engineering Department, University of Pisa, 56126 Pisa, Italy
Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(3), 1070;
Submission received: 31 December 2020 / Revised: 17 January 2021 / Accepted: 20 January 2021 / Published: 25 January 2021
(This article belongs to the Special Issue Social Robotics: Theory, Methods and Applications)


Humanoids have been created to assist or replace humans in many applications, providing encouraging results in contexts where social and emotional interaction is required, such as healthcare, education, and therapy. Bioinspiration, which has often guided the design of their bodies and minds, has also made them excellent research tools, probably the best platforms by which we can model, test, and understand the human mind and behavior. Driven by the aim of creating a believable robot for interactive applications, as well as a research platform for investigating human cognition and emotion, we are constructing a new humanoid social robot: Abel. In this paper, we discuss three of the fundamental principles that motivated the design of Abel and its cognitive and emotional system: hyper-realistic humanoid aesthetics, human-inspired emotion processing, and human-like perception of time. After a brief review of the state of the art on the related topics, we present the robot at its current stage of development, the perspectives for its application, and how it could satisfy expectations as a tool for investigating the human mind, behavior, and consciousness.

1. Introduction

In recent decades, robots with human-inspired appearance, expressiveness, and cognitive capabilities have been used in the fields of healthcare [1], education [2,3], and therapy [4,5]. The encouraging results in these fields highlighted that the involvement of humanoids becomes particularly fruitful in specific scenarios where the creation of an empathic bond is crucial. Accordingly, the perception of robots has also changed over time, together with their definitions: no longer Robota (Robota derives from Czech, meaning “labor”. The term was coined in K. Čapek’s drama R.U.R. “Rossum’s Universal Robots” (1920).) but companions [6,7], assistants [8], tutors [9], or—citing Brooks et al.—“Alternative Essences of Intelligence” [10]. These last words introduce the other side of the coin: beyond their application as socially interactive agents, many researchers in the humanoid robotics community see humanoid robots as tools for better understanding humans [11,12]. We agree with Kemp et al. in claiming that humanoid robots offer an avenue to test understanding through construction (synthesis), thereby complementing the careful analysis provided by researchers in disciplines such as cognitive science [13]. This approach, which sees humanoids as testbeds for cognitive models by means of embodied artificial intelligence, closely resembles the “understanding by building” approach [14]. It reflects the intrinsic value of every modeling process: first, the complex system to be modeled is divided into simpler functional parts; then, the interactions between these parts that produce an observable behavior are highlighted, and each part and mechanism is prioritized to decide what the model needs and how to make it work. The model is then tested, errors or imperfections are encountered, and finally the model is improved with corrections.
Regardless of what motivates this iterative modeling and improvement process (e.g., imitating human motion, cognitive capabilities, emotion processing, personality, or consciousness), it will still produce a two-fold effect: better synthesis and better analysis. The former leads to more believable and skilled humanoids; the latter leads to a better comprehension of human nature.
The scope of this paper is not to provide definitive answers about optimized robot design in the form of guidelines for roboticists, nor to discuss ethical issues, which have been thoroughly debated in the recent IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems [15], but to present an overview, based on our work and the scientific literature, of some aspects that appear fundamental for the development of social humanoids as research tools for understanding humans. In particular, three factors, which we hold to be useful for both the synthetic and the analytic intent, are discussed here:
  • believable aesthetics and expressivity, for better conveying and expressing emotions
  • an embodied cognitive system, with an internal emotional representation, that can flexibly adapt its behavior based on both the robot’s internal state and the external scenario
  • the possibility of endowing the robot with its own subjective time
In the next section, we discuss why these aspects are important for the next generation of humanoids, stressing their interconnection and their effect on human–robot interaction. For each topic, we report a brief state of the art, lessons learned from previous experiments, and the subsequent reflections that determined the design of a new humanoid currently under construction: Abel.
Abel is a humanoid adolescent built in London at the Biomimic Studio by one of the authors (GH). It was conceived by the authors to elicit an empathic bond with human interlocutors. Abel is covered with flesh-like skin and equipped with sensors and actuators to detect and express emotions with a high level of realism. The humanoid is introduced here with its mechatronic features. A specific section of the paper is then dedicated to the robot’s cognitive system: we present how it is designed, how the features discussed in the introduction can be implemented, and how this integration would allow the robot to be used both as a platform for cognitive robotics and as a tool for emotional human–robot interaction. Finally, we discuss potential improvements and limitations of the presented system.

2. Humanoid Body, Emotions, and Subjective Time

2.1. Humanoid Body

Fifty years ago, the Japanese roboticist Masahiro Mori proposed the Uncanny Valley hypothesis, which predicted a nonlinear relation between robots’ perceived human-likeness and their likability [16]. The unpleasant reaction of a human to a particularly human-like robot, described by Mori in 1970 [17], was later shown to depend on several factors, such as habituation and exposure to robots, which depend on the cultural context, and the disappointment of expectations caused by the misalignment between a robot’s aesthetic realism and its actual capabilities and movements [18,19]. All these factors have inevitably changed in the following years, as humans have continued to pursue their innate tendency to imitate nature and transfer knowledge from biology to the machine [20].
Based on the same principles, the anthropomorphism of humanoid robots has also increased over the years and continues to improve (examples of humanoid robots with different levels of human-likeness are shown in Figure 1, together with related descriptive publications). A thorough review of human-likeness in the design of robots and human–robot interaction is reported in [21], where the author claims that one approach to enhance people’s acceptance of robots is to increase a robot’s familiarity by using anthropomorphic design and "human social" characteristics. Moreover, by sharing a similar morphology, robots can communicate in a manner that supports the natural communication modalities of humans [22]. Chaminade et al. investigated the effects of humanoid appearance in human–robot interaction by means of behavioral and neuroimaging experiments, finding the involvement of motor resonance, an aspect of social cognition that is particularly important in the automatic and unconscious perception of another agent [23]. Their paper highlighted that motor resonance was present (or increased) only in human–robot interactions in which the robot had some level of humanness in its appearance and motion. They discussed how this may be linked with the activation of social resonance, empathy, and social bonding, which are highly linked with motor resonance in humans [24].
Another reason to design anthropomorphic robots is related to the concept of embodiment [33]. It is a consolidated idea in the scientific community that intelligence cannot merely exist in the form of an abstract algorithm; on the contrary, it requires a physical instantiation, a body [34,35,36].
In "Internal Robotics", Parisi described how an organism’s body determines the organism’s behavior, reporting several robotics examples [37]. In Figure 2, taken from the article, the body is represented as the interface dividing two worlds with which the nervous system interacts: the inner world and the external world. The only possible representation that a mind can have of the world is determined by the possibilities of perception and action with which the body is equipped.
It is therefore clear that the aim of obtaining an agent with human-like intelligence and behavior cannot be separated from the objective of providing the agent with a body and with the instruments of perception and actuation that are inspired by those of a human being. As a consequence, the body of a robot should be considered not only for its aesthetic appearance but as the specific interface that allows the robot to internalize the information on which to build any abstraction, reasoning, and feeling of what happens.

2.2. Humanoid Emotions

The peculiar way in which emotions are recognized, internalized, and exploited by the brain is one of the characteristics that distinguishes human beings from all other beings [38]. As a consequence, it can be said that emotions are essential to define a form of life as a form of human life. In [39], the philosopher Bennett W. Helm claimed that we can understand a creature as an agent in this minimal sense only if it both exhibits goal-oriented behavior structured in accordance with instrumental rationality and displays a pattern of noninstrumental rationality characteristic of significance and of the emotions. Dennett described emotions as the main tools, given by evolution, for the generation of intentionality and consciousness [40,41]. Accordingly, neuroscientists demonstrated how emotions are linked with the body and the sense of self [42,43], their role in optimizing human decision making [44], and their indispensability for the emergence of human consciousness [45].
Based on their demonstrated relevance, emotions have had a prominent role in social robotics, where they have been integrated in humanoids with two different aims: imitating human empathic interaction and modeling the human mind–body emotion process. In order to establish an empathic connection with its interlocutor—what Paiva calls an affective loop (see Figure 3)—the robot must be equipped with a wide range of sensors, particularly for audio-visual perception, and an acquisition system capable of recognizing and classifying high-level data (e.g., words, facial expressions, gestures). This information is needed for the robot to be constantly aware of the user’s affective state. Likewise, to elicit emotion in the user, the robot should be able to respond accordingly through human-like facial expressions, gestures, and utterances. These aspects of emotion improve the naturalness of human–robot interaction (e.g., [46,47]) and concern external robotics.
If the robot is meant to be more than an engaging emotional character, namely a research tool for modeling human emotions and studying their implications for cognition (e.g., mood, personality, consciousness), then emotions should also be available in the form of internal values and, in turn, abstract representations. It follows that, in order to generate human-like behavior and cognition, the human emotional process must be modeled and integrated in embodied cognitive architectures [49].
Although not all of them have been conceived to be embedded in humanoid robots, Bio-Inspired Cognitive Architectures (BICA) [50] are successful examples of the imitation of functions and structures of the human brain, and they often include emotional behavior generation; in particular, those that in [51] are classified as Hybrid Architectures (e.g., LIDA [52], CLARION [53], DUAL [54], MicroPsi [55], and OpenCog [56]). These architectures are characterized by having both a sensorimotor connection and a deliberative path to manipulate symbolic concepts and plan pondered actions. These two coexistent connections, i.e., reactive and deliberative, are equally important for emotional behavior generation: the former guarantees the direct and continuous link between the environment and the body, e.g., the fast and unpondered reaction happening when basic emotions are induced and expressed; the latter is necessary for symbolic abstraction, e.g., the mental representation of secondary emotions, the generation of beliefs, and the construction of desires and intentions.
Concerning the modeling of emotion in robot cognitive systems, instead of viewing emotions in terms of categories (happiness, disgust, fear, etc.), the most widespread school of thought conceptualizes the dimensions that span the relationship between different emotions (valence and arousal, for instance) [22]. This concept, coming from psychology in different variants [57,58,59,60], was well accepted by social roboticists, who needed a numerical representation and saw the possibility of treating a shift of emotional state as the movement of a point in an emotional space, whose form and dimensions may vary depending on the model.
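A minimal sketch of how such an emotional space can be handled numerically, with the state as a point in a two-dimensional valence-arousal plane; all prototype coordinates, decay rates, and stimulus values below are illustrative assumptions, not values from our system:

```python
from dataclasses import dataclass
import math

@dataclass
class EmotionalState:
    """A point in a valence-arousal space; both coordinates lie in [-1, 1]."""
    valence: float = 0.0
    arousal: float = 0.0

    def apply_stimulus(self, dv: float, da: float) -> None:
        # An appraised event shifts the point inside the space (clipped).
        self.valence = max(-1.0, min(1.0, self.valence + dv))
        self.arousal = max(-1.0, min(1.0, self.arousal + da))

    def decay(self, rate: float = 0.1) -> None:
        # Without stimuli, the state relaxes toward the neutral origin.
        self.valence *= (1.0 - rate)
        self.arousal *= (1.0 - rate)

    def nearest_label(self) -> str:
        # Map the continuous state back to a coarse categorical label.
        prototypes = {"happy": (0.8, 0.5), "angry": (-0.6, 0.8),
                      "sad": (-0.7, -0.4), "calm": (0.4, -0.6),
                      "neutral": (0.0, 0.0)}
        return min(prototypes, key=lambda k: math.dist(
            (self.valence, self.arousal), prototypes[k]))

state = EmotionalState()
state.apply_stimulus(0.7, 0.4)   # e.g., a smiling interlocutor is detected
print(state.nearest_label())     # → happy
```

The categorical labels are recovered only at the output stage; internally, the state moves continuously, which is what makes gradual expression blending and decay straightforward.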
An artificial agent built according to the materials and methods outlined so far would be a robot with a humanoid body endowed with social perception and human-like expressive and communication capabilities, controlled by a bioinspired hybrid cognitive system that also allows the representation of emotions as abstract constructs.
While this is sufficient to obtain a likable emotional conversational agent, it would not be enough for the goal of creating an artificial lifeform that is useful for understanding humans. For this scope, a third component is missing.

2.3. Humanoid Perception of Time

An artificial agent that has to model and interact with a human must be designed to share both the spatial environment and the temporal dimension of humans. For example, primary and secondary emotions, as well as personality and consciousness, evolve (or change) on different timescales. Nonetheless, humans do not have any perceptive apparatus capable of directly measuring time; therefore, the brain makes an indirect reconstruction of time [61,62], through a coding very similar to that discovered by O’Keefe in the hippocampal neurons implicated in spatial navigation [63].
This internalization of time described by neuroscientists has been confirmed by behavioral psychology experiments, which also demonstrated that humans’ subjective perception of time is affected by their emotional state [64,65]. In addition, attention plays a role in time estimation [66]. Differences emerged when subjects were asked to perform time judgments under prospective conditions (in which subjects are instructed to attend to time) and retrospective conditions (in which subjects are unaware that they will be required to judge time) [67]. Another condition in which the perception of time duration may be biased is when decision-making tasks have to be performed under time stress [68].
All these results demonstrated that human time perception is naturally subjective and strictly connected with the emotional context that is present in the body and in the environment of the agent who judges time [69].
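These emotion-driven distortions can be sketched with a simple pacemaker-accumulator model of the internal clock, a classic account in which arousal speeds the pacemaker and attention gates how many pulses are accumulated; all parameter values below are illustrative assumptions, not fitted data:

```python
def perceived_duration(objective_s: float, arousal: float,
                       attention: float = 1.0, base_rate: float = 10.0) -> float:
    """Pacemaker-accumulator sketch: an internal clock emits pulses at a
    rate modulated by arousal (in [0, 1]); attention (in [0, 1]) gates how
    many pulses are accumulated. More accumulated pulses -> a duration
    judged as longer. The 0.5 gain and base_rate are arbitrary choices."""
    rate = base_rate * (1.0 + 0.5 * arousal)   # arousal speeds the pacemaker
    pulses = objective_s * rate * attention     # attention gates accumulation
    return pulses / base_rate                   # convert pulses back to seconds

# A frightening 10 s interval is judged longer than a calm one:
print(perceived_duration(10.0, arousal=0.8))   # ≈ 14.0 s
print(perceived_duration(10.0, arousal=0.0))   # 10.0 s
```

A model of this shape gives the robot a subjective duration estimate that diverges from wall-clock time exactly when its emotional state does, which is the behavior the cited experiments describe in humans.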
In the literature, there are rare cases of cognitive systems that take time perception into account, e.g., the DESTIN cognitive architecture [70,71], a spatiotemporal inference network capable of inherently capturing both spatial and temporal regularities in the data. Even rarer are cases in which time perception models are embodied in artificial agents, among which TimeStorm [72] must be cited: an H2020 EU project aimed at investigating the temporal attributes of human–machine synergetic interaction, adopting a multidisciplinary research approach that includes developmental studies and embodied experiments.
Without any doubt, there is much to do in social robotics on this side. The emulation of the distortions found in human time perception, once applied to humanoid robotics, would lead to a more natural human–machine alignment in the temporal dimension. Furthermore, this could be a first step towards the possibility of creating some sort of artificial phenomenal time, i.e., the perceived qualitative awareness of the relationship between futurity and pastness, which is a fundamental element of human awareness and the sense of self. This kind of approach in humanoid robotics is still unexplored, and an embodied cognitive system with human-inspired emotional processes and time perception has not yet been developed.

3. The Making of Abel

Abel is a new generation humanoid robot, conceived to be a research platform for social interaction, emotion modeling, and studies on embodied intelligence. The name Abel derives from the Hebrew Hevel, or Havel, which means ’breath’, ’vital breath’, or ’vapour’. Its appearance resembles that of an 11–12-year-old boy. It is a unique piece, resulting from the collaboration between the Enrico Piaggio Research Center of the University of Pisa [73] and Gustav Hoegen of Biomimic Studio [74], based in London. Abel is the result of a merger between researchers with a bioengineering and robotics background and animatronics artists. The collaboration between engineers and creatives with such artistic inspiration is fundamental for the creation of a humanoid like Abel, i.e., an emotional robot. Indeed, the importance of being emotionally evocative [75] drove the design of Abel’s body (Figure 4).
The robot design was chosen after years of experience with the FACE robot (Facial Automaton for Conveying Emotions—bottom row, second from the left, in Figure 1), a social robot with human-inspired facial expressiveness that has been used in therapy with children affected by Autism Spectrum Disorder (ASD) [5] and in educational contexts as a synthetic tutor [3,9].
Concerning the mechatronic design, a creative approach was adopted by GH, who was responsible for this side. His design method does not start from an a priori idea of the mechanisms that will move the robot. On the contrary, the mechanics are dictated by the shape of the face and body of the robot, as well as by its conceptual design. In this approach, it is the creature’s body itself that communicates to the creator the most natural and effective way to control it, in order to make it expressive to the maximum of its possibilities. The realism of the movements is obtained by entrusting the hardest part of this task to particular mechanical transmissions, which translate a simple linear movement of an upstream servo motor into a sophisticated movement downstream, i.e., a movement of the skin or the frame of the android. The mechatronics of the head gives Abel the possibility to express a wide spectrum of emotions by means of facial expressions, accompanied by a body that is also designed to perform emotional and meaningful gestures.
Because of their relevance in generating the illusion of life in the human interlocutor, a particular focus has been given to eye and mouth movements [76], which have been designed to perform believable gaze movements and lip-sync for verbal communication (a detail of Abel’s head mechatronics is shown in Figure 5, on the right).
Abel physically consists of the head and the upper part of the torso, with arms and hands, all moved by the latest generation of Futaba, MKS, and Dynamixel servo motors. Twenty-one servo motors inside Abel’s head are dedicated to facial expression, gaze, and simulated speech: four move the brow, eight move the eyes, one moves the jaw, and eight move the mouth, lips, and cheeks. Five motors are dedicated to neck and head movement. Then, five servo motors are mounted in each arm (three for the shoulder, one for the elbow, one to twist the arm), and three servo motors are in each hand, for a total of 42 degrees of freedom.
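The actuator allocation described above can be tallied as a quick bookkeeping check; this small sketch reproduces the total of 42 degrees of freedom:

```python
# Servo-motor allocation as described in the text.
head = {"brow": 4, "eyes": 8, "jaw": 1, "mouth_lips_cheeks": 8}  # 21 in the head
neck = 5
arm = {"shoulder": 3, "elbow": 1, "twist": 1}   # per arm
hand = 3                                          # per hand

total = sum(head.values()) + neck + 2 * (sum(arm.values()) + hand)
print(total)  # → 42 degrees of freedom
```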
The humanoid is equipped with a camera integrated into the torso and integrated binaural microphones, which are specifically designed to emulate the acoustic perception of a human listener. The robot also has an internal speaker to reproduce its voice.

4. The Mind of Abel

The cognitive system of Abel is an adaptation and extension of SEAI (i.e., Social Emotional Artificial Intelligence), a bioinspired hybrid cognitive architecture we devised and implemented, based on Damasio’s theory of mind and consciousness [77]. SEAI is a modular architecture, conceptually divided in Figure 6 into the characteristic Sense, Plan, and Act functional blocks. Modules are grouped into services, which are designed as standalone applications. A service can be an application for image analysis, an animator of a robot part, or a component of the cognitive system dedicated to processing a specific domain of information. These services can be distributed across different computers, and communication among them is implemented with the YARP middleware [78], by which we create a local network dedicated to robot control. The framework allows both low-level reactive control, by means of direct connections between perception and actuation control services, and high-level deliberative control, which includes the possibility to plan context-dependent actions and to perform abstract reasoning on the acquired sensory data by means of symbolic manipulation.
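YARP itself handles inter-process communication through named ports between separate processes; the following in-process Python sketch is only a toy stand-in (all port names and message fields are hypothetical), illustrating how the same percept published by one service can feed both the reactive and the deliberative path:

```python
from collections import defaultdict

class ToyNetwork:
    """Minimal stand-in for a YARP-like middleware: services publish
    messages to named ports, and every subscribed service receives them.
    Real YARP runs services as separate processes over TCP; this toy
    keeps everything in one process for illustration only."""
    def __init__(self):
        self._subscribers = defaultdict(list)   # port name -> callbacks

    def connect(self, port, callback):
        self._subscribers[port].append(callback)

    def publish(self, port, message):
        for callback in self._subscribers[port]:
            callback(message)

net = ToyNetwork()
log = []

# Reactive path: perception drives actuation directly.
net.connect("/abel/perception/face:o",
            lambda m: log.append(("gaze_to", m["position"])))
# Deliberative path: the same percept also becomes a fact for reasoning.
net.connect("/abel/perception/face:o",
            lambda m: log.append(("assert_fact", m)))

net.publish("/abel/perception/face:o",
            {"position": (0.2, 1.1, 0.9), "expression": "smile"})
print(log)
```

The key property the sketch preserves is that neither path knows about the other: the reactive and deliberative services subscribe independently to the same perception port, which is what makes the two control levels coexist.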
Scene Analyzer is the social perception system responsible for processing the environmental information gathered by the available sensors and extracting data with social or emotional significance, e.g., the 3D position of human interlocutors, gestures, facial expressions, speaker probability, estimated age, and gender [80].
Such information detected by the perception system can lead to an immediate physical reaction in the body of the robot, like a movement or a facial expression [81] (i.e., the reactive path), and/or constitute a trigger for emotional or reasoning processes that lead to more complex behavior (i.e., the deliberative path).
In the latter case, the information passes through the symbolic part of the cognitive architecture: the I-CLIPS brain [79], implemented on CLIPS as a rule-based expert system [82]. Including this approach in the cognitive architecture makes it easy to simulate human expertise in certain domains, manipulate abstract concepts, and perform deductive and inductive logical reasoning by using forward and backward chaining in the rules engine. Moreover, it is possible to implement patterns of behavior, for example, by programming rules that determine the modification of the robot’s internal values depending on external events detectable by the perception system or on previous internal values, and rules that determine how the change of internal values influences the cognitive processes of the robot, and hence its behavior.
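This rule pattern can be sketched in plain Python; the following is a hypothetical, single-pass stand-in for the forward chaining that a real engine such as CLIPS performs (a real engine iterates to a fixed point and manages an agenda), with rule content invented for illustration:

```python
# Internal values of the robot; both the fact base and this state are toy examples.
internal = {"arousal": 0.0, "valence": 0.0}

# Each rule: (condition over facts and state, action mutating the state).
rules = [
    # External event -> internal value change (the reactive appraisal).
    (lambda facts, s: facts.get("event") == "loud_noise",
     lambda s: s.update(arousal=min(1.0, s["arousal"] + 0.4))),
    # Internal value -> behavioral consequence (the deliberative side).
    (lambda facts, s: s["arousal"] > 0.3,
     lambda s: s.update(behavior="alert")),
]

def forward_chain(facts, state):
    """One forward-chaining pass: fire every rule whose condition holds.
    Firing order matters here; the second rule sees the state updated by
    the first, which is the chaining effect the text describes."""
    for condition, action in rules:
        if condition(facts, state):
            action(state)

forward_chain({"event": "loud_noise"}, internal)
print(internal["arousal"], internal.get("behavior"))
```

Note how the two kinds of rules named in the text (event → internal value, internal value → behavior) compose automatically once both live in the same rule base.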
SEAI has been successfully exploited to implement the influence of emotion on decision-making described by Bechara and Damasio as the somatic marker mechanism [44]. Implementation details and results are available in [77,83], where this mechanism has been tested in a real context of human–robot interaction using the FACE Robot.
Furthermore, a model of subjective time perception, which takes into consideration the influence of the emotional state of the agent has been recently implemented in SEAI and presented at the 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2019) [84].
Our perspective is to implement all these features in the Abel robot, fulfilling its double role as a research platform to model human cognitive processes and as an empathic social robot able to interact ever more naturally with humans.

5. Discussion and Future Works

5.1. Body

Although we have highlighted the importance of and reasons for using an anthropomorphic corporeity, many studies must still be conducted regarding the details of this aesthetics. For example, the perceived gender and age of the humanoid may influence the induction of trust and empathy and the perceived competence of the robot. Future studies comparing different embodiments in identical interaction scenarios are probably a good approach to find (if they exist) correlations that determine which aesthetic or physical variants work better in different applications and contexts.
Moving to the involvement of the body in emotion and cognition, our intention is to provide Abel with a deeper link to its body, completed by a proprioception system. We are considering the application of ink-based conductive material for the sensorization of the robot’s skin, a method that allows position and movements to be extrapolated from the stretch of the sensorized skin. This would also make it possible to discern tactile interactions with emotional content.

5.2. Emotion

Concerning emotion recognition, we are studying how to extend Abel’s perception by connecting its sensory apparatus to wireless, wearable affective computing devices. The application of each device should be integrated into the SEAI cognitive architecture as an additional service of the SENSE block in the YARP network. This extended perception of the human interlocutor’s body would considerably augment the reliability of the robot’s beliefs about the other’s emotions. A preliminary experiment with a similar integration was made in [85].
Some limitations, then, regard emotion modeling, for which we refer to the mentioned Russell’s Circumplex Model of Affect [86]. The bidimensional model, which describes emotions by means of the two coordinates of valence and arousal, is useful and probably sufficient for guiding the facial expressions of social robots, while it is certainly too reductive if we move to a higher level of emotional states, such as the mood of the robot. In the future, we will try other methods of spatial representation of emotion, which would make a strong contribution to the possibility of generating complex emotional behavior.
Moreover, the somatic marker mechanism included in SEAI should be applied—as in humans—not only to label external entities but also to label concepts and decisions, based on the emotional consequences perceived in subsequent events, thereby influencing whether the robot chooses the same decision branches in future similar situations. Such an application would lead to something resembling the formation and development of a personality. Indeed, the robot would have available universal laws of social contexts (in the form of behavioral rules in SEAI), but it would also be able to autonomously develop opinions, reticences, preferences, and preferred modalities of behavior, according to its subjective past experience.
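A hedged sketch of what such a somatic marker on decisions could look like: each decision option accumulates a running average of the emotional valence that followed choosing it, and future choices are biased toward positively marked options. Option names, values, and the update rule are all invented for illustration:

```python
from collections import defaultdict

class SomaticMarkers:
    """Hypothetical sketch: mark each decision option with the averaged
    emotional valence of its past outcomes, then bias future choices
    toward positively marked options."""
    def __init__(self):
        self.marker = defaultdict(float)   # option -> averaged valence
        self.count = defaultdict(int)

    def record_outcome(self, option: str, valence: float) -> None:
        # Incremental running average of the felt outcome for this option.
        self.count[option] += 1
        self.marker[option] += (valence - self.marker[option]) / self.count[option]

    def choose(self, options) -> str:
        # Deterministic greedy choice for clarity; a real agent might
        # sample instead, so that unmarked options still get explored.
        return max(options, key=lambda o: self.marker[o])

sm = SomaticMarkers()
sm.record_outcome("approach_stranger", -0.6)   # interaction went badly
sm.record_outcome("wave_and_wait", 0.5)        # interaction went well
print(sm.choose(["approach_stranger", "wave_and_wait"]))  # → wave_and_wait
```

Because the markers are built from the robot's own interaction history, two robots running identical rule bases would diverge in their preferences, which is exactly the personality-like effect described above.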
Finally, since we included verbal communication capabilities in Abel, speech recognition and analysis software capable of extracting both the textual content and the emotional content (i.e., prosody) of the speaker’s utterances must be developed and integrated into the perception system of the robot. Likewise, an emotion-based speech synthesis program that modulates the voice of the robot according to its internal emotional state must be included.

5.3. Time

Introducing human-like time perception in humanoid social robotics is a very recent and largely unexplored objective. We believe that time will have a central role in the future of social robotics. Especially if we think about emotions: a lot has been done in terms of recognition, synthesis, and expressiveness, but what do we know about the dynamics of emotions? Furthermore, how is the evolution of emotions over time implemented in robots? Time is a key factor, especially when considered in association with emotions and social behavior [87].
For an extensive discussion of the different implications of time for cognition that should be implemented in social robotics, we refer to [88], from which we report here: temporal degradation of memory, memory reconsolidation, estimation of time duration, and time-warping emotions (e.g., depression, sadness, urgency, anger, pleasure).

6. Conclusions

We expect that humanoids, in the near future, will be greatly improved in terms of their acceptability, believability, and efficiency in the mentioned fields of application and many others not yet foreseen; they will also prove to be a key tool for a better understanding of human nature, behavior, mind, and consciousness. They will be a mirror in which we can reflect and understand ourselves. This paper has argued that a humanoid artificial intelligence would greatly benefit from a higher integration of mind and body, both human-inspired in form and function; that this mind–body creature should have the autonomy to interact with the same world in which humans live, using similar ways of extracting information from it and similar ways of processing and using this information; and that this mind–body agent should also be aligned with the temporal dimension in which humans live, in order to understand their emotions and have similar emotional dynamics. Emotions are essential for the generation of personality, character, and consciousness itself. This means that, if an artificial consciousness can ever exist, it cannot disregard body, emotion, time, and their mutual interaction. Abel has been presented at its current stage of development. The authors constructed this new social robot in order to explore the topics discussed in the present paper. The SEAI cognitive system will make it possible both to implement cognitive models from theories of mind and to design robot behavior in therapy or interaction contexts. The robot will also be used to test the possibility for an artificial agent to perform human-like temporal distortion according to its internal and external emotional environment, which we consider the first step towards a deeper alignment of human and robot time dimensions.
The future studies that will be conducted on Abel and on this new generation of robots, on how to better model and replicate human properties, will undoubtedly serve the double intention of humanoid robotics: producing robots that contribute more effectively and more naturally to the lives of humans while, at the same time, bringing us a deeper knowledge of ourselves.

Author Contributions

Conceptualization, L.C., G.H. and D.D.R.; methodology, L.C., G.H. and D.D.R.; writing–original draft preparation, L.C., G.H. and D.D.R. All authors have read and agreed to the published version of the manuscript.


Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.


Acknowledgments

In addition to the authors, other researchers and professors of the E. Piaggio Research Center contributed to the collaboration with Gustav Hoegen and to the creation and development of Abel. In particular, we would like to thank Pasquale Scilingo, Arti Ahluwalia, Giovanni Vozzi, and Caterina Giannetti.

Conflicts of Interest

The authors declare no conflict of interest.

References


  1. Cresswell, K.; Cunningham-Burley, S.; Sheikh, A. Health care robotics: Qualitative exploration of key challenges and future directions. J. Med. Internet Res. 2018, 20, e10410. [Google Scholar] [CrossRef] [PubMed]
  2. Belpaeme, T.; Kennedy, J.; Ramachandran, A.; Scassellati, B.; Tanaka, F. Social robots for education: A review. Sci. Robot. 2018, 3, eaat5954. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Reidsma, D.; Charisi, V.; Davison, D.; Wijnen, F.; van der Meij, J.; Evers, V.; Cameron, D.; Fernando, S.; Moore, R.; Prescott, T.; et al. The EASEL project: Towards educational human-robot symbiotic interaction. In Proceedings of the Conference on Biomimetic and Biohybrid Systems, Edinburgh, UK, 19–22 July 2016; pp. 297–306. [Google Scholar]
  4. Shamsuddin, S.; Yussof, H.; Ismail, L.; Hanapiah, F.A.; Mohamed, S.; Piah, H.A.; Zahari, N.I. Initial response of autistic children in human-robot interaction therapy with humanoid robot NAO. In Proceedings of the 2012 IEEE 8th International Colloquium on Signal Processing and its Applications, Melaka, Malaysia, 23–25 March 2012; pp. 188–193. [Google Scholar]
  5. Mazzei, D.; Billeci, L.; Armato, A.; Lazzeri, N.; Cisternino, A.; Pioggia, G.; Igliozzi, R.; Muratori, F.; Ahluwalia, A.; De Rossi, D. The face of autism. In Proceedings of the 19th International Symposium in Robot and Human Interactive Communication, Viareggio, Italy, 13–15 September 2010; pp. 791–796. [Google Scholar]
  6. Kerzel, M.; Strahl, E.; Magg, S.; Navarro-Guerrero, N.; Heinrich, S.; Wermter, S. NICO—Neuro-inspired companion: A developmental humanoid robot platform for multimodal interaction. In Proceedings of the 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Lisbon, Portugal, 28 August–1 September 2017; pp. 113–120. [Google Scholar]
  7. Park, H.W.; Grover, I.; Spaulding, S.; Gomez, L.; Breazeal, C. A model-free affective reinforcement learning approach to personalization of an autonomous social robot companion for early literacy education. In Proceedings of the AAAI Conference on Artificial Intelligence 2019, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 687–694. [Google Scholar]
  8. Diftler, M.A.; Ambrose, R.O.; Tyree, K.S.; Goza, S.; Huber, E. A mobile autonomous humanoid assistant. In Proceedings of the 4th IEEE/RAS International Conference on Humanoid Robots, Santa Monica, CA, USA, 10–12 November 2004; Volume 1, pp. 133–148. [Google Scholar]
  9. Vouloutsi, V.; Blancas, M.; Zucca, R.; Omedas, P.; Reidsma, D.; Davison, D.; Charisi, V.; Wijnen, F.; van der Meij, J.; Evers, V.; et al. Towards a synthetic tutor assistant: The Easel project and its architecture. In Proceedings of the Conference on Biomimetic and Biohybrid Systems, Edinburgh, UK, 19–22 July 2016; pp. 353–364. [Google Scholar]
  10. Brooks, R.A.; Breazeal, C.; Irie, R.; Kemp, C.C.; Marjanovic, M.; Scassellati, B.; Williamson, M.M. Alternative essences of intelligence. AAAI/IAAI 1998, 1998, 961–968. [Google Scholar]
  11. Atkeson, C.G.; Hale, J.G.; Pollick, F.; Riley, M.; Kotosaka, S.; Schaul, S.; Shibata, T.; Tevatia, G.; Ude, A.; Vijayakumar, S.; et al. Using humanoid robots to study human behavior. IEEE Intell. Syst. Their Appl. 2000, 15, 46–56. [Google Scholar] [CrossRef] [Green Version]
  12. Adams, B.; Breazeal, C.; Brooks, R.A.; Scassellati, B. Humanoid robots: A new kind of tool. IEEE Intell. Syst. Their Appl. 2000, 15, 25–31. [Google Scholar] [CrossRef]
  13. Kemp, C.C.; Fitzpatrick, P.; Hirukawa, H.; Yokoi, K.; Harada, K.; Matsumoto, Y. Humanoids. Exp. Psychol. 2009, 56, 1–3. [Google Scholar]
  14. Webb, B. Can robots make good models of biological behaviour? Behav. Brain Sci. 2001, 24, 1033–1050. [Google Scholar] [CrossRef] [Green Version]
  15. Chatila, R.; Firth-Butterflied, K.; Havens, J.C.; Karachalios, K. The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems [Standards]. IEEE Robot. Autom. Mag. 2017, 24, 110. [Google Scholar] [CrossRef]
  16. Wang, S.; Lilienfeld, S.O.; Rochat, P. The Uncanny Valley: Existence and Explanations. Rev. Gen. Psychol. 2015, 19, 393–407. [Google Scholar] [CrossRef] [Green Version]
  17. Mori, M. Bukimi no tani [the uncanny valley]. Energy 1970, 7, 33–35. [Google Scholar]
  18. Gee, F.; Browne, W.N.; Kawamura, K. Uncanny valley revisited. In Proceedings of the ROMAN 2005, IEEE International Workshop on Robot and Human Interactive Communication, Okayama, Japan, 22 September 2004; pp. 151–157. [Google Scholar]
  19. Weis, P.P.; Wiese, E. Cognitive conflict as possible origin of the uncanny valley. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting; SAGE Publications Sage CA: Los Angeles, CA, USA, 2017; Volume 61, pp. 1599–1603. [Google Scholar]
  20. Lepora, N.F.; Verschure, P.; Prescott, T.J. The state of the art in biomimetics. Bioinspir. Biomimetics 2013, 8, 013001. [Google Scholar] [CrossRef] [PubMed]
  21. Fink, J. Anthropomorphism and human likeness in the design of robots and human-robot interaction. In Proceedings of the International Conference on Social Robotics, Chengdu, China, 29–31 October 2012; pp. 199–208. [Google Scholar]
  22. Breazeal, C. Emotion and sociable humanoid robots. Int. J. Hum.-Comput. Stud. 2003, 59, 119–155. [Google Scholar] [CrossRef]
  23. Chaminade, T.; Cheng, G. Social cognitive neuroscience and humanoid robotics. J. Physiol.-Paris 2009, 103, 286–295. [Google Scholar] [CrossRef] [PubMed]
  24. Leslie, K.R.; Johnson-Frey, S.H.; Grafton, S.T. Functional imaging of face and hand imitation: Towards a motor theory of empathy. Neuroimage 2004, 21, 601–607. [Google Scholar] [CrossRef] [PubMed]
  25. Gouaillier, D.; Hugel, V.; Blazevic, P.; Kilner, C.; Monceaux, J.; Lafourcade, P.; Marnier, B.; Serre, J.; Maisonnier, B. Mechatronic design of NAO humanoid. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009; pp. 769–774. [Google Scholar]
  26. Pandey, A.K.; Gelin, R. A mass-produced sociable humanoid robot: Pepper: The first machine of its kind. IEEE Robot. Autom. Mag. 2018, 25, 40–48. [Google Scholar] [CrossRef]
  27. Dautenhahn, K.; Nehaniv, C.L.; Walters, M.L.; Robins, B.; Kose-Bagci, H.; Assif, N.; Blow, M. KASPAR—A minimally expressive humanoid robot for human–robot interaction research. Appl. Bionics Biomech. 2009, 6, 369–397. [Google Scholar] [CrossRef] [Green Version]
  28. Hanson, D.; Baurmann, S.; Riccio, T.; Margolin, R.; Dockins, T.; Tavares, M.; Carpenter, K. Zeno: A cognitive character. In Proceedings of the AAAI National Conference, Chicago, IL, USA, 13–17 July 2008. [Google Scholar]
  29. Metta, G.; Sandini, G.; Vernon, D.; Natale, L.; Nori, F. The iCub humanoid robot: An open platform for research in embodied cognition. In Proceedings of the 8th Workshop on Performance Metrics for Intelligent Systems, Gaithersburg, MD, USA, 19–21 August 2008; pp. 50–56. [Google Scholar]
  30. Allman, T. The Nexi Robot; Norwood House Press: Chicago, IL, USA, 2009. [Google Scholar]
  31. Hanson, D. Hanson Robotics Website. Available online: (accessed on 31 December 2020).
  32. Nishio, S.; Ishiguro, H.; Hagita, N. Geminoid: Teleoperated android of an existing person. Humanoid Robot. New Dev. 2007, 14, 343–352. [Google Scholar]
  33. Kiverstein, J.; Miller, M. The embodied brain: Towards a radical embodied cognitive neuroscience. Front. Hum. Neurosci. 2015, 9, 237. [Google Scholar] [CrossRef] [Green Version]
  34. Dautenhahn, K.; Ogden, B.; Quick, T. From embodied to socially embedded agents–implications for interaction-aware robots. Cogn. Syst. Res. 2002, 3, 397–428. [Google Scholar] [CrossRef]
  35. Pfeifer, R.; Lungarella, M.; Iida, F. Self-organization, embodiment, and biologically inspired robotics. Science 2007, 318, 1088–1093. [Google Scholar] [CrossRef] [Green Version]
  36. Parisi, D. The other half of the embodied mind. In Embodied and Grounded Cognition; Frontiers Media: Lausanne, Switzerland, 2011. [Google Scholar]
  37. Parisi, D. Internal robotics. Connect. Sci. 2004, 16, 325–338. [Google Scholar] [CrossRef]
  38. Dennett, D.C. Kinds of Minds: Toward an Understanding of Consciousness; Basic Books: New York, NY, USA, 1996. [Google Scholar]
  39. Helm, B.W. The significance of emotions. Am. Philos. Q. 1994, 31, 319–331. [Google Scholar]
  40. Dennett, D.C. The Intentional Stance; MIT Press: Cambridge, MA, USA, 1989. [Google Scholar]
  41. Dennett, D.C. Consciousness Explained; Little, Brown & Company: Boston, MA, USA, 1991. [Google Scholar]
  42. Damasio, A. Descartes’ Error: Emotion, Reason, and the Human Brain; Grosset/Putnam: New York, NY, USA, 1994. [Google Scholar]
  43. Ledoux, J. The Emotional Brain: The Mysterious Underpinnings of Emotional Life; Simon & Schuster: New York, NY, USA, 1998. [Google Scholar]
  44. Bechara, A.; Damasio, H.; Tranel, D.; Damasio, A.R. Deciding advantageously before knowing the advantageous strategy. Science 1997, 275, 1293–1295. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  45. Damasio, A. The Feeling of What Happens. Body and Emotion in the Making of Consciousness. In Spektrum Der Wissenschaft; Harcourt Brace: San Diego, CA, USA, 2000; p. 104. [Google Scholar]
  46. Ficocelli, M.; Terao, J.; Nejat, G. Promoting Interactions Between Humans and Robots Using Robotic Emotional Behavior. IEEE Trans. Cybern. 2016, 46, 2911–2923. [Google Scholar] [CrossRef]
  47. Lazzeri, N.; Mazzei, D.; Greco, A.; Rotesi, A.; Lanatà, A.; De Rossi, D.E. Can a Humanoid Face be Expressive? A Psychophysiological Investigation. Front. Bioeng. Biotechnol. 2015, 3, 64. [Google Scholar] [CrossRef] [Green Version]
  48. Paiva, A.; Leite, I.; Ribeiro, T. Emotion modeling for social robots. In The Oxford Handbook of Affective Computing; Oxford University Press: Oxford, UK, 2014; pp. 296–308. [Google Scholar]
  49. Michaud, F.; Pirjanian, P.; Audet, J.; Létourneau, D. Artificial emotion and social robotics. In Distributed Autonomous Robotic Systems 4; Springer: Berlin/Heidelberg, Germany, 2000; pp. 121–130. [Google Scholar]
  50. Samsonovich, A.V. On a roadmap for the BICA Challenge. Biol. Inspired Cogn. Archit. 2012, 1, 100–107. [Google Scholar] [CrossRef]
  51. Goertzel, B.; Lian, R.; Arel, I.; De Garis, H.; Chen, S. A world survey of artificial brain projects, Part II: Biologically inspired cognitive architectures. Neurocomputing 2010, 74, 30–49. [Google Scholar] [CrossRef]
  52. Franklin, S.; Madl, T.; D’mello, S.; Snaider, J. LIDA: A systems-level architecture for cognition, emotion, and learning. IEEE Trans. Auton. Ment. Dev. 2013, 6, 19–41. [Google Scholar] [CrossRef]
  53. Sun, R. The Motivational and Metacognitive Control in CLARION; Oxford University Press: Oxford, UK, 2007. [Google Scholar]
  54. Kokinov, B. Micro-level hybridization in the cognitive architecture DUAL. In Connectionist-Symbolic Integration: From Unified to Hybrid Approaches; Psychology Press: East Sussex, UK, 1997; pp. 197–208. [Google Scholar]
  55. Bach, J. The micropsi agent architecture. In Proceedings of the ICCM-5, International Conference on Cognitive Modeling, Bamberg, Germany, 10–12 April 2003; pp. 15–20. [Google Scholar]
  56. Hart, D.; Goertzel, B. Opencog: A software framework for integrative artificial general intelligence. In Proceedings of the 2008 Conference on Artificial General Intelligence 2008, Memphis, TN, USA, 1–3 March 2008; pp. 468–472. [Google Scholar]
  57. Russell, J.A. 13-reading emotion from and into faces: Resurrecting a dimensional-contextual perspective. In The Psychology of Facial Expression; Cambridge University Press: Cambridge, UK, 1997; pp. 295–320. [Google Scholar]
  58. Lazarus, R.S. Cognition and motivation in emotion. Am. Psychol. 1991, 46, 352. [Google Scholar] [CrossRef]
  59. Plutchik, R. Emotions: A general psychoevolutionary theory. Approaches Emot. 1984, 1984, 197–219. [Google Scholar]
  60. Smith, C.A. Dimensions of appraisal and physiological response in emotion. J. Personal. Soc. Psychol. 1989, 56, 339. [Google Scholar] [CrossRef]
  61. Howard, M.W.; MacDonald, C.J.; Tiganj, Z.; Shankar, K.H.; Du, Q.; Hasselmo, M.E.; Eichenbaum, H. A unified mathematical framework for coding time, space, and sequences in the hippocampal region. J. Neurosci. 2014, 34, 4692–4707. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  62. MacDonald, C.J.; Lepage, K.Q.; Eden, U.T.; Eichenbaum, H. Hippocampal “time cells” bridge the gap in memory for discontiguous events. Neuron 2011, 71, 737–749. [Google Scholar] [CrossRef] [Green Version]
  63. O’keefe, J.; Nadel, L. The Hippocampus as a Cognitive Map; Clarendon Press: Oxford, UK, 1978. [Google Scholar]
  64. Droit-Volet, S.; Meck, W.H. How emotions colour our perception of time. Trends Cogn. Sci. 2007, 11, 504–513. [Google Scholar] [CrossRef] [PubMed]
  65. Droit-Volet, S.; Gil, S. The emotional body and time perception. Cogn. Emot. 2016, 30, 687–699. [Google Scholar] [CrossRef]
  66. Zakay, D.; Block, R.A. The role of attention in time estimation processes. In Advances in Psychology; Elsevier: Amsterdam, The Netherlands, 1996; Volume 115, pp. 143–164. [Google Scholar]
  67. Brown, S.W. Time perception and attention: The effects of prospective versus retrospective paradigms and task demands on perceived duration. Percept. Psychophys. 1985, 38, 115–124. [Google Scholar] [CrossRef]
  68. Zakay, D. The impact of time perception processes on decision making under time stress. In Time Pressure and Stress in Human Judgment and Decision Making; Springer: Berlin/Heidelberg, Germany, 1993; pp. 59–72. [Google Scholar]
  69. Droit-Volet, S.; Fayolle, S.; Lamotte, M.; Gil, S. Time, emotion and the embodiment of timing. Timing Time Percept. 2013, 1, 99–126. [Google Scholar] [CrossRef]
  70. Arel, I.; Rose, D.; Coop, R. Destin: A scalable deep learning architecture with application to high-dimensional robust pattern recognition. In Proceedings of the 2009 AAAI Fall Symposium Series, Arlington, VA, USA, 5–7 November 2009. [Google Scholar]
  71. Arel, I.; Rose, D.; Karnowski, T. A deep learning architecture comprising homogeneous cortical circuits for scalable spatiotemporal pattern inference. In Proceedings of the NIPS 2009 Workshop on Deep Learning for Speech Recognition and Related Applications, Whistler, BC, Canada, 11–12 December 2009; pp. 23–32. [Google Scholar]
  72. Project, T. Official Website. Available online: (accessed on 31 December 2020).
  73. Center, E.P.R. University of Pisa. Available online: (accessed on 31 December 2020).
  74. Hoegen, G. Biomimic Studio. Available online: (accessed on 31 December 2020).
  75. Breazeal, C.L. Sociable Machines: Expressive Social Exchange between Humans and Robots. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2000. [Google Scholar]
  76. Kozima, H.; Nakagawa, C.; Yano, H. Can a robot empathize with people? Artif. Life Robot. 2004, 8, 83–88. [Google Scholar] [CrossRef]
  77. Cominelli, L.; Mazzei, D.; De Rossi, D.E. SEAI: Social emotional artificial intelligence based on Damasio’s theory of mind. Front. Robot. AI 2018, 5, 6. [Google Scholar] [CrossRef] [Green Version]
  78. Metta, G.; Fitzpatrick, P.; Natale, L. YARP: Yet another robot platform. Int. J. Adv. Robot. Syst. 2006, 3, 43–48. [Google Scholar] [CrossRef] [Green Version]
  79. Mazzei, D.; Cominelli, L.; Lazzeri, N.; Zaraki, A.; De Rossi, D. I-clips brain: A hybrid cognitive system for social robots. In Biomimetic and Biohybrid Systems; Springer: Berlin/Heidelberg, Germany, 2014; pp. 213–224. [Google Scholar]
  80. Zaraki, A.; Pieroni, M.; De Rossi, D.; Mazzei, D.; Garofalo, R.; Cominelli, L.; Dehkordi, M.B. Design and Evaluation of a Unique Social Perception System for Human-Robot Interaction. IEEE Trans. Cogn. Dev. Syst. 2017, 9, 341–355. [Google Scholar] [CrossRef]
  81. Mazzei, D.; Lazzeri, N.; Hanson, D.; De Rossi, D. HEFES: An hybrid engine for facial expressions synthesis to control human-like androids and avatars. In Proceedings of the 4th IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob), Rome, Italy, 24–27 June 2012; pp. 195–200. [Google Scholar]
  82. Giarratano, J.C.; Riley, G. Expert Systems; PWS Publishing Co.: Boston, MA, USA, 1998. [Google Scholar]
  83. Cominelli, L.; Mazzei, D.; Pieroni, M.; Zaraki, A.; Garofalo, R.; De Rossi, D. Damasio’s Somatic Marker for Social Robotics: Preliminary Implementation and Test. In Biomimetic and Biohybrid Systems; Springer: Berlin/Heidelberg, Germany, 2015; pp. 316–328. [Google Scholar]
  84. Cominelli, L.; Garofalo, R.; De Rossi, D. The Influence of Emotions on Time Perception in a Cognitive System for Social Robotics. In Proceedings of the 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), New Delhi, India, 14–18 October 2019; pp. 1–6. [Google Scholar]
  85. Cominelli, L.; Mazzei, D.; Carbonaro, N.; Garofalo, R.; Zaraki, A.; Tognetti, A.; De Rossi, D. A Preliminary Framework for a Social Robot “Sixth Sense”. In Proceedings of the Conference on Biomimetic and Biohybrid Systems, Edinburgh, UK, 19–22 July 2016; pp. 58–70. [Google Scholar]
  86. Russell, J.A. Is there universal recognition of emotion from facial expression? A review of the cross-cultural studies. Psychol. Bull. 1994, 115, 102–141. [Google Scholar] [CrossRef] [PubMed]
  87. Maniadakis, M.; Trahanias, P. Temporal cognition: A key ingredient of intelligent systems. Front. Neurorobot. 2011, 5, 2. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  88. Maniadakis, M.; Trahanias, P. Time models and cognitive processes: A review. Front. Neurorobot. 2014, 8, 7. [Google Scholar] [CrossRef] [PubMed] [Green Version]
Figure 1. Pictures of anthropomorphic robots with different levels of human-likeness and expressive capabilities. Top row: Nao [25] and Pepper [26]; Kaspar [27]; Zeno [28]; iCub [29]; Bottom row: Nexi [30]; Face [5]; Sophia [31]; GeminoidF [32].
Figure 2. As a physical system, the nervous system affects and is affected by two components of the environment that lie outside itself: (a) the environment inside the organism’s body (internal environment) and (b) the environment outside the organism’s body (external environment). Image taken from [37]. ©Taylor & Francis 2004.
Figure 3. The Affective Loop, as illustrated by Paiva et al. [48]. ©Oxford University Press 2015.
Figure 4. A picture of Abel at the current stage of development.
Figure 5. A detail of Abel’s head. On the left, the head is covered with a bioinspired skin-like material; on the right, the internal mechatronics are exposed, designed to perform facial expressions, gaze behavior, and lip-synced speech.
Figure 6. A functional framework of the SEAI cognitive architecture (Social Emotional Artificial Intelligence). Image taken from [77]. More details on single services reported in the figure can be found in [79,80,81].
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
