Article

I2E: A Cognitive Architecture Based on Emotions for Assistive Robotics Applications

by
Priscila Silva Martins
1,*,†,
Gedson Faria
1,† and
Jés de Jesus Fiais Cerqueira
2,‡
1
Information Systems/CPCX, Federal University of Mato Grosso do Sul (UFMS), Campo Grande 79.4000-000, Brazil
2
Department of Electrical and Computer Engineering of Polytechnic, Federal University of Bahia (UFBA), Salvador 40.210-630, Brazil
*
Author to whom correspondence should be addressed.
Current address: Av. Márcio Lima Nantes s/n, Coxim CEP 79400-000, MS, Brazil.
Current address: Rua Aristides Nivis, 02, Federação, Salvador CEP 40.210-630, BA, Brazil.
Electronics 2020, 9(10), 1590; https://doi.org/10.3390/electronics9101590
Submission received: 2 August 2020 / Revised: 21 September 2020 / Accepted: 25 September 2020 / Published: 28 September 2020
(This article belongs to the Special Issue Robots in Assisted Living)

Abstract
Emotions and personality play an essential role in human behavior and decision-making. Humans infer emotions from several modals and merge them all; emotions are an interface between a subject’s internal and external states. This paper presents the design, implementation, and tests of the Inference of an Emotional State (I2E): a cognitive architecture based on emotions for assistive robotics applications, which uses as inputs the emotions previously recognized by four affective modals and infers the emotional state of an assistive robot. Unlike solutions that classify emotions from a single signal, the architecture proposed in this article merges four sources of information about emotions into one. To bring this inference closer to human behavior, a Mamdani fuzzy system was used to infer the user’s personality, and a MultiLayer Perceptron (MLP) was used to infer the robot’s personality. The hypothesis tested in this work was based on the Mehrabian studies and on the validation of three expert psychologists. The I2E architecture proved to be quite efficient for identifying an emotion from various types of input.

1. Introduction

Creating a cognitive architecture for robots requires knowledge from different research fields, such as social psychology, affective computing, computer science, and AI, all of which influence the design of the underlying control structure. Cognitive architectures, or systems of cognition, specify the structural basis for an intelligent system. Several architectures have already been proposed, as shown by the work of Rickel [1], Kim [2], and Arnellos [3]. These cognitive architectures are for specific purposes; they were developed to solve a specific problem. However, there are other architectures known as general-purpose architectures. Two architectures are widely used in this category: Adaptive Control of Thought—Rational (ACT-R), developed by John Anderson and his team since the 1970s [4], and SOAR (State, Operator, And Result), developed by John Laird since the 1980s [5].
Some architectures of intelligent cognitive systems, like those described above, try to reproduce the brain’s functioning and are based on the stimulus-response concept, or only on rationalization. No feeling or emotion is considered for decision making in these architectures. On the other hand, most of the main topics in psychology and all the major problems humanity faces usually involve emotion. According to [6], “Psychology and humanity can progress without considering emotion—as fast as someone running on one leg”.
Trying to model emotion has become a new challenge for researchers in cognitive architectures. Minsky, in [7], had already noticed the importance of emotion for cognitive models: “The question is not whether intelligent machines can have any emotions, but whether machines can be intelligent without any emotions”. Affective computing is an interdisciplinary field focused on giving machines the ability to interpret human beings’ emotional state and to adapt their own state and behavior to it [5].
Psychology and neuroscience have contributed some theoretical models that try to explain different aspects of emotions. These models focus on how the human being transforms external stimuli into emotions. Taking these theoretical models into account, one of the new challenges of AI is to develop Computational Models of Emotions (CMEs), that is, systems that propose mechanisms to process information about emotion and to generate emotional behaviors. In these systems, the individual’s emotion is taken into account for decision making. These models can benefit the intelligent systems used in cognitive robots, autonomous agents, and even human–machine interaction. Marceddu, in [8], adopted machine learning techniques to analyze passengers’ emotions while driving with alternative vehicle calibrations; through the analysis of these emotions, it was possible to obtain an objective metric of the passengers’ comfort. Rincon, in [9], presented a low-cost cognitive assistant whose aim was to detect the performance and emotional state that older people present when performing exercises; the goal was to bring to people’s homes an assistant that motivates them to exercise and, concurrently, monitors their physical and emotional responses. Moreover, Seo, in [10], proposed an automatic method of emotion-based music classification to classify songs with high precision; the classification was performed using multiple regression analysis and support vector machines.
Personality also has a significant impact on human behavior. People under the same conditions make different decisions and take different actions according to their personalities. Personality is the set of characteristics that stand out in a person, representing the pattern of personal and social individuality. Personality also influences an individual’s values, the tendency to judge specific objectives such as freedom, and dispositions of action such as honesty [11]. Read et al. [12] proposed a computational model of personality based on both the structure and the biology of human personality, simulating the creation of personalities in a game agent based on a motivational agent.
A personality model widely used in the area of affective computing is the Five Factor Model (FFM), also known as the Big Five Personality Factors. The five factors may be easily remembered using the acronym “OCEAN”: Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. These factors are measured on a continuous scale, whereby an individual may be highly extroverted, low in extroversion (introverted), or somewhere between these two extremes. A fundamental premise of the FFM is that the human personality can be described by these five factors [13].
Mehrabian [14] described how the FFM can be mapped into the Pleasure, Arousal, and Dominance (PAD) emotion model [15]. Yi et al. [16] used the FFM to compare with their proposed NEO PI-R personality model. The FFM was also used to predict emotional and personality states in [17].
Based on the above, we found a gap: to build a cognitive model based on emotions to embed in an autonomous robot for use in Assistive Robotics (AR). AR is an area of autonomous robotics focused on improving the quality of life of humans with particular needs, such as the elderly and individuals with cognitive impairments or social disorders. Currently, the most appropriate definition of an assistive robot is one that gives help or support to a human user through interaction with or without physical contact, including rehabilitation robots, social robots, manipulating robots, and mobility aids [18].
The human being employs multiple senses, both sequentially and in parallel, to passively and actively explore the environment, confirm expectations about the world, and perceive new information. Humans experience external stimuli through sight, hearing, touch, and smell. Human interaction with the world is inherently multimodal [19]. Various sensing modals provide a wealth of information to support interaction with the world. Multimodality is defined by the presence of more than one modality or channel, for example, visual, audio, text, gestures, and display. Multimodal interaction systems aim to recognize natural forms of language and human behavior through the use of recognition-based technologies [20]. Seeking to reproduce the way human beings interact with the world, researchers in AR at the Robotics Laboratory of the Electrical Engineering Department of the Federal University of Bahia are developing a robotic device called HiBot (roBOT for Human Interaction). HiBot aims to be a platform for human–robot interaction (HRI) experiments, more specifically for assisting in medical treatments, such as for autistic individuals, in pedagogical support, and in other forms of social interaction. HiBot can behave as a mediator between therapists and the user under care. This device will interact using emotional protocols to make the social interaction process with the user as close as possible to that between human beings.
The HiBot architecture adopts three basic levels of abstraction: (i) cognitive level, (ii) associative level, and (iii) reactive level. The cognitive level is responsible for choosing the robot’s emotional behavior. The associative level is responsible for communicating the robot’s internal processes and synchronizing them with external events. The reactive level is responsible for perception and for executing the robot’s actions. The HiBot project uses several affective modals to recognize the user’s emotion and, in the future, to make decisions from these emotions. Currently, emotion recognition modules are being developed in parallel, and each of the modals that make up the set of affective sensors uses different techniques and algorithms to accomplish this task.
HiBot, as illustrated in Figure 1, has a set of affective actuators and sensors, organized into modules. The affective actuators, Figure 1b, aim to promote interaction through social protocols (facial and vocal expressions). For that, two modules were defined: (i) Voice Synthesis Module (VSM) and (ii) Facial Expression Module (FEM). The affective sensors, Figure 1a, aim to obtain affective cues transmitted by different modals, such as face, body, voice, and electroencephalography (EEG). Thus, HiBot has four affective sensor modules: (i) Facial Expression Recognition Module (FERM), by video; (ii) Body Expression Recognition Module (BERM), by video/Kinect; (iii) Voice Expression Recognition Module (VERM), by voice; and (iv) EEG Recognition Module (EEGRM). All four of these affective modules have been developed in parallel by the laboratory’s researchers. Camada [21], for example, developed the BERM module, which is responsible for recognizing the affective state from a person’s stereotyped gestures, using a Kinect device as the camera sensor.
Thus, this work presents the Inference of an Emotional statE (I2E). This architecture merges all the emotions recognized previously by HiBot’s four affective modals and infers a single emotion for the user. Unlike the solutions presented above, which classify emotion, the architecture proposed in this article infers the emotion to be demonstrated by a robot from the emotions classified by HiBot’s affective modals. In order for this inference to be closer to a human being, a Mamdani fuzzy system [22] was implemented to infer the user’s personality and a MultiLayer Perceptron (MLP) [23] to infer the robot’s personality. It is known that there are more sophisticated techniques, such as deep learning or type-2 fuzzy systems. However, I2E will be embedded on a grid of low-processing-capacity boards, such as the BeagleBone Black or Raspberry Pi, so the solution had to be adapted to the hardware on which it will run. Designing an embedded system is a complex task: it involves portability issues, limiting energy consumption without loss of performance, low memory availability, the need for security and reliability, and the possibility of adding modules with each update. For these reasons, we focused on designing I2E with less sophisticated techniques, which have already been extensively explored by the scientific community and which present very satisfactory results.
The rest of this paper is organized as follows. The proposed I2E architecture is presented in Section 2. Section 3 presents the results of the simulations. Finally, concluding comments and future work are presented in Section 4.

2. I2E Architecture

The I2E architecture is responsible for inferring the assistive robot’s emotional state. I2E does not recognize emotions; it processes previously recognized emotions. Unlike solutions that classify emotions from a single signal, the architecture proposed in this article merges four emotions into one; that is to say, I2E infers a single emotion for the user. This emotion is then used to infer the emotion that HiBot will demonstrate to the user. This is similar to human behavior: humans infer emotions from several modals by merging all of them.
This emotional state is based on emotion and on the user’s personality. Figure 2 shows an overview of I2E. It is subdivided into three modules:
  • Module 01: responsible for inferring the user’s personality;
  • Module 02: responsible for inferring the robot’s personality;
  • Module 03: responsible for inferring the robot’s emotion.
The inference of the robot’s emotion follows three steps. Module 01 infers the user’s personality profile (UserOcean) from the inference of the user’s emotion (UserEmotion), which results from the combination of HiBot’s four emotion modals. The personality model adopted was the Big Five Model (BFM), because it is widely used and there is much literature on the relationship between emotions and the BFM. Module 02, based on the personality traced for the user (UserOcean), infers the personality of the robot (RobotOcean), such that this personality is empathic with the personality of the user, since it is an assistive robot. After the inference of the robot’s personality, Module 03 chooses the emotion represented by that personality, which will be demonstrated by HiBot.

2.1. Module 01—User’s Personality

Module 01 is responsible for inferring the user’s personality. Personality is the set of characteristics that stand out in a person and represent the pattern of personal and social individuality of each individual. It also influences the tendency to judge specific objectives, such as freedom, or dispositions of action such as honesty [24].
Module 01 is illustrated in Figure 3. It uses as inputs the emotions recognized previously by HiBot’s four affective modals: (i) Facial Expression Recognition Module (FERM), by video; (ii) Body Expression Recognition Module (BERM), by video/Kinect; (iii) Voice Expression Recognition Module (VERM), by voice; and (iv) EEG Recognition Module (EEGRM). The set of affective modals is highlighted within the red dotted lines in Figure 3. Each affective modal comprises six attributes representing the pertinence (membership degree) of each of the six emotions: anger, disliking, fear, pity, relief, and happy for. After reading all the attributes, the median of the values reported for each attribute is computed; the median was adopted because it reflects the central tendency of skewed numerical distributions. After processing the input values, the next step is the configuration of the user fuzzy system. This process can be described in four steps: getting the OCEAN values, OCEAN discretization, FCL rules of the fuzzy system, and defuzzification.
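The per-attribute fusion of the modal outputs can be sketched as follows; this is a minimal illustration assuming each modal reports its six membership values as a row (the variable names and example values are hypothetical):

```python
import numpy as np

EMOTIONS = ["anger", "disliking", "fear", "pity", "relief", "happy_for"]

# Hypothetical outputs of the affective modals (e.g., FERM, BERM, VERM):
# one row per modal, one column per emotion, values in [0, 1].
modal_outputs = np.array([
    [0.9, 0.1, 0.0, 0.0, 0.0, 0.0],
    [0.7, 0.5, 0.1, 0.0, 0.0, 0.0],
    [0.8, 0.2, 0.0, 0.1, 0.0, 0.0],
])

# Per-emotion median across the modals, used as input to the fuzzy system.
user_emotion = np.median(modal_outputs, axis=0)
print(dict(zip(EMOTIONS, user_emotion)))
```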

2.1.1. Get OCEAN’S Values

Let us consider the PAD values used by [14] to define the emotions shown in Table 1. Equation (1), described by [14], was used to transform the emotions into the pattern of the Big Five Model; in this article, these values are called the OCEAN values. For example, for the emotion Anger the PAD values are P = −0.51, A = 0.59, and D = 0.25. After applying Equation (1), the resulting OCEAN values are O = 0.3622, C = −0.0882, E = 0.1585, A = −0.3637, and N = 0.6742.
O = 0.33 A + 0.67 D
C = 0.32 P + 0.30 D
E = 0.23 P + 0.12 A + 0.82 D
A = 0.83 P + 0.19 A − 0.21 D
N = −0.57 P + 0.65 A        (1)

(On the right-hand side, P, A, and D denote Pleasure, Arousal, and Dominance.)
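As a sanity check, the mapping in Equation (1) can be applied directly; a short sketch (assuming NumPy, with the coefficient matrix transcribed from Equation (1)) reproduces the Anger example above and the values in Table 1:

```python
import numpy as np

# Rows: O, C, E, A, N; columns: P, A(rousal), D -- coefficients of Equation (1).
PAD_TO_OCEAN = np.array([
    [ 0.00, 0.33,  0.67],   # O = 0.33A + 0.67D
    [ 0.32, 0.00,  0.30],   # C = 0.32P + 0.30D
    [ 0.23, 0.12,  0.82],   # E = 0.23P + 0.12A + 0.82D
    [ 0.83, 0.19, -0.21],   # A = 0.83P + 0.19A - 0.21D
    [-0.57, 0.65,  0.00],   # N = -0.57P + 0.65A
])

def pad_to_ocean(p, a, d):
    """Map a PAD triple to the five OCEAN factors."""
    return PAD_TO_OCEAN @ np.array([p, a, d])

# Anger (Table 1): PAD = (-0.51, 0.59, 0.25)
print(np.round(pad_to_ocean(-0.51, 0.59, 0.25), 4))
# -> [ 0.3622 -0.0882  0.1585 -0.3637  0.6742]
```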
As a result of these transformations, numerical values representing the user’s personality were obtained for each of the input emotions. The values that represent each emotion used in this article are listed in Table 1. However, a problem emerged: how should these numerical values be represented? Fuzzy systems can represent these kinds of values using linguistic variables. Fuzzy logic was initially proposed by Lotfi Zadeh in 1965 [25]. Fuzzy logic relaxes the strict notion of membership in propositional logic and classical set theory by introducing degrees of pertinence: rather than true or false (belongs or does not belong), there is a continuous spectrum of values between 0 and 1 denoting the degree of pertinence. Thus, it becomes possible to represent imprecise symbolic knowledge.

2.1.2. OCEAN’s Discretization

This step is responsible for discretizing the OCEAN values in order to find a suitable number of output linguistic variables for the fuzzy system.
Discretization is the process of putting values into classes so that there is a limited number of possible states. To discretize the OCEAN values, tests were performed subdividing the range from −1 to 1 into 3, 5, 7, and 9 parts. In our tests, we used the WEKA environment (WEKA is a tried and tested open-source machine learning software that can be accessed through a graphical user interface, standard terminal applications, or the Java API—https://www.cs.waikato.ac.nz/ml/weka/) and an unsupervised pre-processing filter called Discretize. After each test, its validation was also done in WEKA, using the J48 algorithm, which implements a decision tree. The test results are shown in Table 2, where the error rate means the percentage of information loss relative to the values of the Mehrabian article shown in Table 1. After all the tests, it was found that nine linguistic variables would be ideal for discretizing the OCEAN values.
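For illustration, equal-width binning of the OCEAN values over [−1, 1] can be written as below; this is a minimal NumPy sketch of what an unsupervised equal-width Discretize filter does, with the bin count being the parameter varied in Table 2:

```python
import numpy as np

def discretize(values, n_bins=9, lo=-1.0, hi=1.0):
    """Assign each value in [lo, hi] to one of n_bins equal-width classes (0..n_bins-1)."""
    edges = np.linspace(lo, hi, n_bins + 1)
    # np.digitize against the inner edges gives the 0-based class index.
    return np.clip(np.digitize(values, edges[1:-1]), 0, n_bins - 1)

# OCEAN values for Anger (Table 1) mapped to 9 classes.
anger_ocean = np.array([0.3622, -0.0882, 0.1585, -0.3637, 0.6742])
print(discretize(anger_ocean, n_bins=9))
```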

2.1.3. Input/Output of Fuzzy System

  • Input: ANGER, DISLIKING, FEAR, HAPPY FOR, PITY, and RELIEF, mapped as triangular fuzzy values HIGH, MEDIUM, and LOW, as shown in Figure 4.
  • Output: the OCEAN values obtained by discretization, mapped into triangular fuzzy values, as shown in Figure 5.

2.1.4. FCL: Rules of Fuzzy System

Generation of inference engine rules: the problem considered is the merging of a set of six input values. The output values were obtained through the discretization of the OCEAN values, considering the intervals shown in Figure 5, as can be seen in Table 3.
The information in Table 3 was translated into FCL rules [26], and the rules that compose the knowledge base of the fuzzy system can be seen in Figure 6.
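A functionally analogous sketch of the Module 01 Mamdani system can be written with the scikit-fuzzy control API instead of jFuzzyLogic/FCL; the universes, membership shapes, and the two rules below are illustrative only, not the full rule base of Figure 6:

```python
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Input: pertinence of the emotion "anger" in [0, 1] (LOW / MEDIUM / HIGH, as in Figure 4).
anger = ctrl.Antecedent(np.linspace(0, 1, 101), "anger")
anger["LOW"] = fuzz.trimf(anger.universe, [0.0, 0.0, 0.5])
anger["MEDIUM"] = fuzz.trimf(anger.universe, [0.0, 0.5, 1.0])
anger["HIGH"] = fuzz.trimf(anger.universe, [0.5, 1.0, 1.0])

# Output: one OCEAN factor in [-1, 1] (two of the nine linguistic terms of Figure 5).
openness = ctrl.Consequent(np.linspace(-1, 1, 201), "openness")
openness["ZERO"] = fuzz.trimf(openness.universe, [-0.25, 0.0, 0.25])
openness["HIGH"] = fuzz.trimf(openness.universe, [0.25, 0.5, 0.75])

# Rules in the spirit of Table 3 (e.g., ANGER HIGH -> Openness HIGH).
rules = [
    ctrl.Rule(anger["HIGH"], openness["HIGH"]),
    ctrl.Rule(anger["LOW"], openness["ZERO"]),
]

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input["anger"] = 0.9
sim.compute()                     # Mamdani inference, centroid defuzzification by default
print(sim.output["openness"])
```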

2.1.5. Defuzzification

The defuzzification function used was the centroid, and the resulting OCEAN output values lie between −1 and 1, as can be seen in Figure 7; from here on, they are called UserOcean.
In Figure 7, the fusion of the input emotions 25% FEAR and 50% PITY can be observed, resulting in trapezoids that represent the pertinence of each personality factor (OCEAN). Defuzzification using the centroid function resulted in the following UserOcean values: −0.2526, −0.2688, −0.5313, −0.2688, and 0.2947, respectively.
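For reference, centroid defuzzification of an aggregated output membership function reduces to a weighted average over the universe; a minimal NumPy sketch follows (the clipped trapezoid below is illustrative, not the exact aggregate shown in Figure 7):

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 401)                    # universe of one OCEAN factor
# Illustrative aggregated (clipped) output membership function: a trapezoid around -0.5.
mu = np.clip(np.minimum((x + 0.9) / 0.4, (-0.1 - x) / 0.4), 0.0, 0.5)

centroid = np.trapz(x * mu, x) / np.trapz(mu, x)   # centre of gravity of the area
print(round(centroid, 4))
```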

2.2. Module 02—Robot’s Personality

Module 02, shown in Figure 8, is responsible for inferring the robot’s personality (RobotOcean) from the user’s personality (UserOcean). The input of Module 02 is the output of Module 01, that is, UserOcean.
The output of module 02 was generated following two steps: (1) consultation with experts, and (2) neural network design.
  • Consultation with experts: Three expert psychologists were consulted and, based on the emotions in Table 1, they were asked to relate the personalities they considered basic, and those most commonly presented by users, to the personality the robot must have in order to react with empathy to each personality presented by the user. As a result of this consultation, the database represented in Table 4 was generated. The link between the Emotion IN and OUT columns was completed by the consultation with experts, and the OCEAN IN and OUT columns were completed according to Table 1.
  • Neural network design: To generate the robot’s personality, a Multi-Layer Perceptron (MLP) [23] neural network was used, with five neurons in the input layer, ten neurons in the hidden layer, and five neurons in the output layer. The activation function used for the hidden layer was the logistic sigmoid; for the output layer, the hyperbolic tangent was used, so that the output is better represented with values between −1 and 1. As can be seen, there is a limited amount of data; therefore, all data were used as the training set, obtaining network convergence with training errors below 0.000001 (a minimal sketch of this network follows the list).
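The 5-10-5 network described above could be sketched as follows, assuming PyTorch; the training pairs would be the IN/OUT OCEAN rows of Table 4, of which only the ANGER→LIKING row is transcribed here:

```python
import torch
import torch.nn as nn

# 5 -> 10 -> 5 MLP: logistic sigmoid in the hidden layer, tanh at the output (range [-1, 1]).
model = nn.Sequential(
    nn.Linear(5, 10),
    nn.Sigmoid(),
    nn.Linear(10, 5),
    nn.Tanh(),
)

# One training pair from Table 4 (UserOcean for ANGER -> RobotOcean for LIKING).
x = torch.tensor([[0.3622, -0.0882, 0.1585, -0.3637, 0.6742]])
y = torch.tensor([[-0.1080, 0.0560, -0.0856, 0.4128, -0.1240]])

optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for _ in range(2000):                 # train until the error becomes very small
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

print(loss.item())                    # expected to fall well below 1e-6 on this tiny set
print(model(x))                       # RobotOcean predicted for the ANGER personality
```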
Figure 9 shows an example of the execution of Module 02, considering as input the UserOcean (−0.2526, −0.2688, −0.5313, −0.2688, and 0.2947) obtained from Module 01 (Figure 7), and producing as output the RobotOcean (0.1324, 0.1304, 0.1999, 0.2050, and −0.0865).

2.3. Module 03—Robot’s Emotion

This module is responsible for choosing the robot’s emotion from the personality suggested by the OCEAN values resulting from the neural network of Module 02, called RobotOcean. The robot’s emotion is selected as the emotion corresponding to the personality with the lowest Euclidean distance (d) between RobotOcean and the values in Table 1. An example of the emotion selection of Module 03 can be seen in Table 5: the emotion Love was selected from all the emotions in Table 1, as it has the shortest Euclidean distance (0.0646) from the RobotOcean (0.1324, 0.1304, 0.1999, 0.2050, and −0.0865).
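The selection rule of Module 03 amounts to a nearest-neighbour lookup in OCEAN space; a minimal sketch follows (only two rows of Table 1 are transcribed for brevity):

```python
import numpy as np

# OCEAN reference vectors from Table 1 (only two entries shown here).
TABLE1 = {
    "LOVE": np.array([0.1670, 0.1560, 0.2450, 0.2260, -0.1060]),
    "JOY":  np.array([0.1330, 0.1580, 0.1980, 0.3490, -0.0980]),
}

def robot_emotion(robot_ocean):
    """Return the emotion whose OCEAN vector is closest (Euclidean distance) to RobotOcean."""
    return min(TABLE1, key=lambda e: np.linalg.norm(TABLE1[e] - robot_ocean))

robot_ocean = np.array([0.1324, 0.1304, 0.1999, 0.2050, -0.0865])
print(robot_emotion(robot_ocean))   # -> LOVE, as in Table 5
```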

3. Simulation and Performance Evaluation

The research group that works with HiBot is moving towards the construction of a database in which the emotions classified by each of the modals are merged into just one. It is important to note that the objective is to obtain a database that considers all possible intensities of each of these emotions. Since this database is still under construction, an alternative became necessary for visualizing the data, one that examines the intensities of the six input emotions.
It is known that pleasurable emotions demonstrate similar personalities; the same occurs with emotions of low pleasure, whose OCEAN values are also analogous. Therefore, it is expected that the fusion of analogous emotions, obtained after processing by the fuzzy system of Module 01, will be grouped into a single emotion with similar OCEAN values, and that the intensity of each OCEAN factor is what differentiates which emotion will be inferred by Modules 02 and 03.
To verify the behavior of the emotion groups produced from the inputs, a cloud of particles was generated, equally distributed over the representation space of the set of inputs. Each particle of the cloud is made up of six values between 0 and 1, corresponding to the pertinence of the input emotions: anger, disliking, fear, pity, relief, and happy for. Using a particle cloud to represent the data universe used in this work is not new; it was inspired by Thrun’s paper, which used a particle map for mobile robot navigation [27].
To generate the cloud of particles, the space was divided into equal parts for each input, using the step sizes 0.25, 0.20, 0.125, 0.1, 0.0625, and 0.05, resulting in 6 sets of tests. Since the same input emotion can generate different output emotions from Module 01, and different intensities of emotion lead to the selection of different personalities, some results obtained with test set 1, which uses the step size 0.25, will be discussed. Due to the complexity of observing the values that represent personalities (OCEAN values), and to improve the visualization and validation of the tests performed, the graphs show the emotions inferred from each personality.
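A minimal sketch of this grid-like particle cloud is given below, assuming NumPy and itertools; the step 0.25 corresponds to test set 1, and the finer steps listed above produce the larger clouds of the other test sets:

```python
import itertools
import numpy as np

EMOTIONS = ["anger", "disliking", "fear", "pity", "relief", "happy_for"]

def particle_cloud(step=0.25):
    """Enumerate all 6-dimensional pertinence vectors on a regular grid over [0, 1]."""
    levels = np.arange(0.0, 1.0 + 1e-9, step)      # e.g., 0, 0.25, 0.5, 0.75, 1.0
    return np.array(list(itertools.product(levels, repeat=len(EMOTIONS))))

cloud = particle_cloud(0.25)
print(cloud.shape)        # (5**6, 6) = (15625, 6) particles for test set 1
```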
The graphs in Figure 10, Figure 11, Figure 12, Figure 13, Figure 14 and Figure 15 show the results of the inference and the error of each OCEAN value from Anger, Disliking, Fear, Happy For, Pity, and Relief emotions, respectively. The orange lines represent the values inferred by Module 01, and the blue lines are the reference values used by Mehrabian [14].
Figure 16, Figure 17, Figure 18, Figure 19 and Figure 20 show the graphs that illustrate how the Mamdani System creates the merge of input emotions. The transitions show the merge of the emotions in each personality factor (OCEAN). This experiment uses as input emotions: anger and fear. The orange-colored area represents the intensity of anger in each personality factor, the blue-colored area represents fear intensity, the pink-colored area represents the merge of the two in each personality factor, and the green area is the neutral area.
The graph shown in Figure 21 illustrates the emotions, and their percentages, inferred from the UserOcean personality. Recall that Module 01 performs a “fusion” of the six emotions classified by each of the affective modals; the Module 01 system therefore considers the nuances of these emotions. For example, if modal 01 indicates anger with 0.9 pertinence and modal 02 indicates disliking with 0.5 pertinence, both emotions must be considered in the inference of the user’s emotion. Therefore, the emotions inferred in Module 01 may differ from the input emotions, as they represent the nuances of the emotions; the inference mechanism takes into account the pertinence of each emotion. Since four of the six input emotions are said to be “of low pleasure”, it was observed at the output of Module 01 that the disliking emotion also has the highest representation, with 55.5%.
Table 6 shows the emotions classified to be demonstrated by the robot, which shows that the robot’s posture remained empathic and attests to the neural network’s prediction efficiency.
That the particle cloud covers the most significant number of emotions can be seen in Figure 22, which shows the same input emotion generating different output emotions. For example, the disliking emotion had love, joy, hope, and liking as its outputs; that is, the name of the input emotion was the same, but the personality profiles were different.
To better understand this inference, we use another example: the pairs ANGER–GRATITUDE and ANGER–LIKING. The histogram charts presented in Figure 23 show the general map of the ANGER emotion, while the histogram charts presented in Figure 24 and Figure 25 show the maps of the ANGER–LIKING and ANGER–GRATITUDE pairs, respectively, in which it can be observed that small variations in the input emotions influence the emotion expressed by the robot. For example, the predominance of anger in Figure 24 and the absence of the emotions happy for, relief, and pity (the light blue bars indicate zero, that is, no example of this emotion was found) led to the inference of the Liking emotion. In Figure 25, the merging of the emotions disliking, fear, and anger resulted in the emotion Gratitude as output.
Another example of the merging of the input emotions can be seen in Figure 26, where the emotions anger, disliking, fear, happy for, relief, and pity have the pertinence values 10, 0, 25, 0, 0, and 50, respectively. Module 01 then inferred the emotion Disappointment, with a UserOcean of −0.1626, −0.2361, −0.4145, −0.2688, and 0.3115, respectively. As a second step, the inferred robot personality was a RobotOcean of 0.013, 0.1040, 0.1132, 0.2599, and −0.0894, respectively. This resulted in the emotion Joy being demonstrated by the robot, as it has the shortest Euclidean distance (0.1526) from this RobotOcean.

4. Final Comments

Emotion and personality are both critical in the process of human decision making in real life. In the Robotics Laboratory of the Electrical Engineering Department of the Federal University of Bahia, there is a device called HiBot (roBOT for Human Interaction). A cognitive architecture must therefore be implemented in HiBot to make the social interaction process with the user as close as possible to that between human beings.
Usually, works that classify emotions present their results using a confusion matrix with known input/output pairs, based on supervised learning. However, as mentioned earlier, since we do not yet have a known set of input/output pairs, we used a cloud of particles to simulate all the possible inputs, fusing the emotions from the affective modals and grouping them according to the Big Five Model.
In this work, I2E was presented, an architecture that infers an emotional state for a robot from the affective sensors of a multimodal system, and which will compose a system embedded in HiBot. I2E is the first step towards creating a cognitive architecture for HiBot. These are also the first steps towards the creation of a database for the classification of affective modals into emotions.
The I2E architecture infers the emotion that will be demonstrated by a robot from the affective modals. For this inference to be closer to a human being, in Module 01 a Mamdani fuzzy system infers the user’s personality, and in Module 02 an MLP neural network infers the robot’s personality. Finally, in Module 03, from the robot’s personality, the emotion to be demonstrated by HiBot is inferred.
The recognition of emotions by all the affective modals is still under development, and we do not have a database that includes all modals for the same person. Since this database is still under construction, an alternative visualization of the data became necessary, one that analyzes the intensities of the six input emotions. The experiments performed with the particle cloud show that the combinations of input emotions generated different user personalities (OCEANs), that is, different values for each factor of the Big Five, which in some cases approximate the same emotion. However, although different OCEANs can be associated with the same emotion, they generate different personalities for the robot, as seen in the discussion of the output pairs ANGER–GRATITUDE and ANGER–LIKING. These associations demonstrate that the intensity of each input emotion and the combination of these emotions directly affect the inference of the output emotion.
The experiments also demonstrated that using a machine learning technique to define the best number of linguistic variables in the discretization of the OCEAN values, so as to avoid loss of knowledge representation, was highly relevant for capturing the nuances of the user’s personality factors. In this work, a neural network proved efficient as a function that describes empathic personalities for a robot based on the user’s personality.
The use of a Mamdani system allows the use of linguistic variables and the visualization of the fusion of emotions in defuzzification charts. This form of data visualization has become an essential tool for the validation of experiments by specialists. Despite being relatively simple machine learning techniques, fuzzy systems and the MLP have proven to be an efficient choice that can quickly be embedded and replicated on other platforms.
For psychology, I2E provided an environment for observing the inference of one emotion from another. With the tests, it is possible to verify how an individual’s emotion and personality influence the generation of emotion. A contribution to psychology is that, based on these observations of the creation of emotions, protocols to classify emotions can be created based on current theories; with the addition of technology, subjectivity in identifying and measuring emotions could be removed.
As future work, to improve the accuracy of the architecture, we suggest going deeper into each of the groups of emotions that resulted from the fusions and, with the help of experts, creating new relationships between personality profiles, in order to find the values of the five personality factors (OCEANs) at various levels of the basic emotions, because in this work only the basic emotions, without their variations, were used. Therefore, instead of several emotions, it would be necessary to know the various levels of the same basic emotions; with several levels of combinations of emotions, it would be possible to validate that the system’s learning is efficient.
The hypothesis tested in this work was based on the Mehrabian studies, followed by the validation of three experts. Another future work is to test I2E with real people and to have the real output validated by psychology experts.
The I2E architecture proved to be quite efficient for identifying an emotion from various input types. With these results, a further step has been taken in the construction of the HiBot brain, because, with the emotions identified, emotion can now be included in the robot’s decision process, thus bringing it closer to assistive robotics.

Author Contributions

Conceptualization, P.S.M. and J.d.J.F.C.; methodology, P.S.M.; software, G.F.; validation, P.S.M.; formal analysis, P.S.M. and G.F.; investigation, P.S.M.; resources, P.S.M.; data curation, P.S.M. and G.F.; writing—original draft preparation, P.S.M.; writing—review and editing, P.S.M. and G.F. and J.d.J.F.C.; visualization, P.S.M. and G.F.; supervision, J.d.J.F.C.; project administration, J.d.J.F.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Rickel, J.; Johnson, W.L. Animated agents for procedural training in virtual reality: Perception, cognition, and motor control. Appl. Artif. Intell. 1999, 13, 343–382. [Google Scholar] [CrossRef]
  2. Kim, Y.D.; Kim, J.; Kim, J.H.; Lim, J.R. Implementation of artificial creature based on interactive learning. In Proceedings of the FIRA Robot World Congress, Seoul, Korea, 23–29 May 2002; pp. 369–374. [Google Scholar]
  3. Arnellos, A.; Vosinakis, S.; Anastasakis, G.; Darzentas, J. Autonomy in virtual agents: Integrating perception and action on functionally grounded representations. In Proceedings of the Hellenic Conference on Artificial Intelligence, Syros, Greece, 2–4 October 2008; Springer: Berlin/Heidelberg, Germany, 2008; pp. 51–63. [Google Scholar]
  4. Anderson, J.R.; Bothell, D.; Byrne, M.D.; Douglass, S.; Lebiere, C.; Qin, Y. An integrated theory of the mind. Psychol. Rev. 2004, 111, 1036. [Google Scholar] [CrossRef] [PubMed]
  5. Laird, J.E.; Newell, A.; Rosenbloom, P.S. Soar: An architecture for general intelligence. Artif. Intell. 1987, 33, 1–64. [Google Scholar] [CrossRef]
  6. Russell, J.A. Core Affect and the Psychological Construction of Emotion. Psychol. Rev. 2003, 110, 145–172. [Google Scholar] [CrossRef] [PubMed]
  7. Minsky, M. The Society of Mind; Simon & Schuster, Inc.: New York, NY, USA, 1986. [Google Scholar]
  8. Sini, J.; Marceddu, A.C.; Violante, M. Automatic Emotion Recognition for the Calibration of Autonomous Driving Functions. Electronics 2020, 9, 518. [Google Scholar] [CrossRef] [Green Version]
  9. Costa, A.; Rincon, J.A.; Julian, V.; Novais, P.; Carrascosa, C. A Low-Cost Cognitive Assistant. Electronics 2020, 9, 310. [Google Scholar] [CrossRef] [Green Version]
  10. Seo, Y.S.; Huh, J.H. Automatic Emotion-Based Music Classification for Supporting Intelligent IoT Applications. Electronics 2019, 8, 164. [Google Scholar] [CrossRef] [Green Version]
  11. Pejovic, V.; Lathia, N.; Mascolo, C.; Musolesi, M. Mobile-Based Experience Sampling for Behaviour Research. In Emotions and Personality in Personalized Services: Models, Evaluation and Applications; Tkalčič, M., De Carolis, B., de Gemmis, M., Odić, A., Košir, A., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2016. [Google Scholar]
  12. Read, S.; Miller, L.; Monroe, B.; Brownstein, A.; Zachary, W.; Le Mentec, J.C.; Iordanov, V. A Neurobiologically Inspired Model of Personality in an Intelligent Agent. In Proceedings of the Intelligent Virtual Agents, 6th International Conference, IVA 2006, Marina Del Rey, CA, USA, 21–23 August 2006; Springer: Berlin/Heidelberg, Germany, 2006; pp. 316–328. [Google Scholar]
  13. Costa, P.; McCrae, R.; Psychological Assessment Resources Inc. Revised NEO Personality Inventory (NEO PI-R) and NEO Five-Factor Inventory (NEO-FFI); Psychological Assessment Resources: Jacksonville, FL, USA, 1992. [Google Scholar]
  14. Mehrabian, A. Analysis of the Big-five Personality Factors in Terms of the PAD Temperament Model. Aust. J. Psychol. 1996, 48, 86–92. [Google Scholar] [CrossRef]
  15. Mehrabian, A. Pleasure-arousal-dominance: A general framework for describing and measuring individual differences in Temperament. Curr. Psychol. 1996, 14, 261–292. [Google Scholar] [CrossRef]
  16. Yi, Z.; Ling, L. A Personality Model Based on NEO PI-R for Emotion Simulation. IEICE Trans. Inf. Syst. 2014, E97.D, 2000–2007. [Google Scholar] [CrossRef] [Green Version]
  17. Mehraei, M.; Akcay, N.I. Pleasure, Arousal, and Dominance Mood Traits Prediction Using Time Series Methods. IAFOR J. Psychol. Behav. Sci. 2017, 3. [Google Scholar] [CrossRef]
  18. Feil-Seifer, D.J.; Matarić, M.J. Defining socially assistive robotics. In Proceedings of the 9th International Conference on Rehabilitation Robotics, ICORR 2005, Chicago, IL, USA, 28 June–1 July 2005; pp. 465–468. [Google Scholar]
  19. Jaimes, A.; Sebe, N. Multimodal human–computer interaction: A survey. Comput. Vis. Image Underst. 2007, 108, 116–134. [Google Scholar] [CrossRef]
  20. Poria, S.; Cambria, E.; Bajpai, R.; Hussain, A. A Review of Affective Computing: From Unimodal Analysis to Multimodal Fusion. Inf. Fusion 2017, 37. [Google Scholar] [CrossRef] [Green Version]
  21. Camada, M.Y.O.; Stéfano, D.; Cerqueira, J.J.F.; Lima, A.M.N.; Conceição, A.G.S.; Costa, A.C.P.L.d. Recognition of Affective State for Austist from Stereotyped Gestures. In Proceedings of the 13th International Conference on Informatics in Control, Automation and Robotics, Lisbon, Portugal, 29–31 July 2016; pp. 197–204. [Google Scholar] [CrossRef]
  22. Mamdani, E.H. Application of Fuzzy Logic to Approximate Reasoning Using Linguistic Synthesis. IEEE Trans. Comput. 1977, C-26, 1182–1191. [Google Scholar] [CrossRef]
  23. Russell, S.; Norvig, P. Artificial Intelligence: A Modern Approach, 2nd ed.; Prentice-Hall: Upper Saddle River, NJ, USA, 2003. [Google Scholar]
  24. Reisenzein, R.; Weber, H. Personality and emotion. Camb. Handb. Personal. Psychol. 2009, 54–71. [Google Scholar] [CrossRef]
  25. Zadeh, L. Fuzzy sets. Inf. Control 1965, 8, 338–353. [Google Scholar] [CrossRef] [Green Version]
  26. Cingolani, P.; Alcala-Fdez, J. jFuzzyLogic: A robust and flexible Fuzzy-Logic inference system language implementation. In Proceedings of the IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Brisbane, QLD, Australia, 10–15 June 2012; pp. 1–8. [Google Scholar]
  27. Thrun, S.; Bücken, A. Integrating Grid-Based and Topological Maps for Mobile Robot Navigation. In Proceedings of the Thirteenth National Conference on Artificial Intelligence, Portland, OR, USA, 4–8 August 1996; AAAI Press: Palo Alto, CA, USA, 1996; Volume 2, pp. 944–950. [Google Scholar]
Figure 1. HiBot’s architecture. (a) Affective Sensors: responsible for obtaining affective cues transmitted by different modals; (b) Affective Actuators: responsible for promoting interaction through social protocols.
Figure 2. Inference of an Emotional State (I2E) architecture overview.
Figure 3. Module 01—responsible for inferring the user’s personality.
Figure 4. Fuzzy values of the input variables.
Figure 5. Fuzzy values of the output variables.
Figure 6. FCL rules.
Figure 7. Defuzzification tests.
Figure 8. Module 02—responsible for inferring the robot’s personality.
Figure 9. Example of module 02.
Figure 10. Error graphics for the Anger input emotion—OCEAN factors.
Figure 11. Error graphics for the Disliking input emotion—OCEAN factors.
Figure 12. Error graphics for the Fear input emotion—OCEAN factors.
Figure 13. Error graphics for the Happy For input emotion—OCEAN factors.
Figure 14. Error graphics for the Pity input emotion—OCEAN factors.
Figure 15. Error graphics for the Relief input emotion—OCEAN factors.
Figure 16. Anger + Fear—openness factor.
Figure 17. Anger + Fear—conscientiousness factor.
Figure 18. Anger + Fear—extraversion factor.
Figure 19. Anger + Fear—agreeableness factor.
Figure 20. Anger + Fear—neuroticism factor.
Figure 21. Inferred emotions in module 01.
Figure 22. User_emotion/robot_emotion pairs.
Figure 23. Result: Anger.
Figure 24. Result: Anger–Liking.
Figure 25. Result: Anger–Gratitude.
Figure 26. Example of an execution of the I2E architecture.
Table 1. The emotions Distress and Remorse were not used because their Pleasure, Arousal, and Dominance (PAD) values are equal to those of Pity and Shame, respectively. PAD values from [17] and OCEAN values from [14].
Emotion | P | A | D | O | C | E | A | N
ADMIRATION | 0.5000 | 0.3000 | −0.2000 | −0.0350 | 0.1000 | −0.0130 | 0.5140 | −0.0900
ANGER | −0.5100 | 0.5900 | 0.2500 | 0.3622 | −0.0882 | 0.1585 | −0.3637 | 0.6742
DISAPPOINTMENT | −0.3000 | 0.1000 | −0.4000 | −0.2350 | −0.2160 | −0.3850 | −0.1460 | 0.2360
DISLIKING | −0.4000 | 0.2000 | 0.1000 | 0.1330 | −0.0980 | 0.0140 | −0.3150 | 0.3580
FEAR | −0.6400 | 0.6000 | −0.4300 | −0.0901 | −0.3338 | −0.4278 | −0.3269 | 0.7548
GLOATING | 0.3000 | −0.3000 | −0.1000 | −0.1660 | 0.0660 | −0.0490 | 0.2130 | −0.3660
GRATIFICATION | 0.6000 | 0.5000 | 0.4000 | 0.4330 | 0.3120 | 0.5260 | 0.5090 | −0.0170
GRATITUDE | 0.4000 | 0.2000 | −0.3000 | −0.1350 | 0.0380 | −0.1300 | 0.4330 | −0.0980
HAPPY FOR | 0.4000 | 0.2000 | 0.2000 | 0.2000 | 0.1880 | 0.2800 | 0.3280 | −0.0980
HATE | −0.6000 | 0.6000 | 0.3000 | 0.3990 | −0.1020 | 0.1800 | −0.4470 | 0.7320
HOPE | 0.2000 | 0.2000 | −0.1000 | −0.0010 | 0.0340 | −0.0120 | 0.2250 | 0.0160
JOY | 0.4000 | 0.2000 | 0.1000 | 0.1330 | 0.1580 | 0.1980 | 0.3490 | −0.0980
LIKING | 0.4000 | 0.1600 | −0.2400 | −0.1080 | 0.0560 | −0.0856 | 0.4128 | −0.1240
LOVE | 0.3000 | 0.1000 | 0.2000 | 0.1670 | 0.1560 | 0.2450 | 0.2260 | −0.1060
PITY | −0.4000 | −0.2000 | −0.5000 | −0.4010 | −0.2780 | −0.5260 | −0.2650 | 0.0980
PRIDE | 0.4000 | 0.3000 | 0.3000 | 0.3000 | 0.2180 | 0.3740 | 0.3260 | −0.0330
RELIEF | 0.2000 | −0.3000 | 0.4000 | 0.1690 | 0.1840 | 0.3380 | 0.0250 | −0.3090
REPROACH | −0.3000 | −0.1000 | 0.4000 | 0.2350 | 0.0240 | 0.2470 | −0.3520 | 0.1060
RESENTMENT | −0.2000 | −0.3000 | −0.2000 | −0.2330 | −0.1240 | −0.2460 | −0.1810 | −0.0810
SATISFACTION | 0.3000 | −0.2000 | 0.4000 | 0.2020 | 0.2160 | 0.3730 | 0.1270 | −0.3010
SHAME | −0.3000 | 0.1000 | −0.6000 | −0.3690 | −0.2760 | −0.5490 | −0.1040 | 0.2360
Table 2. Discretization test results.
Number of Classes | Error Rate | No. of Wrong Classifications/Total Examples
3 | 28.5% | 6/21
5 | 19.05% | 4/21
7 | 4.76% | 1/21
9 | 0% | 0/21
Table 3. Relation between emotions (input) and OCEAN values (output).
Input (Emotion) | O | C | E | A | N
ANGER HIGH | HIGH | ZERO | SLIGHTLY HIGH | LOW | VERY HIGH
DISLIKING HIGH | SLIGHTLY HIGH | ZERO | ZERO | LOW | HIGH
FEAR HIGH | ZERO | LOW | LOW | LOW | EXTRA HIGH
HAPPY_FOR HIGH | SLIGHTLY HIGH | SLIGHTLY HIGH | SLIGHTLY HIGH | HIGH | ZERO
PITY HIGH | LOW | SLIGHTLY LOW | VERY LOW | SLIGHTLY LOW | ZERO
RELIEF HIGH | SLIGHTLY HIGH | SLIGHTLY HIGH | HIGH | ZERO | LOW
Table 4. Equivalence emotion/incoming personality profile and emotion/outgoing personality profile.
Emotion (IN) | OCEAN (IN) | Emotion (OUT) | OCEAN (OUT)
ANGER | 0.3622, −0.0882, 0.1585, −0.3637, 0.6742 | LIKING | −0.1080, 0.0560, −0.0856, 0.4128, −0.1240
DISLIKING | 0.1330, −0.0980, 0.0140, −0.3150, 0.3580 | LIKING | −0.1080, 0.0560, −0.0856, 0.4128, −0.1240
FEAR | −0.0901, −0.3338, −0.4278, −0.3269, 0.7548 | HOPE | −0.0010, 0.0340, −0.0120, 0.2250, 0.0160
JOY | 0.2000, 0.1880, 0.2800, 0.3280, −0.0980 | SATISFACTION | 0.2020, 0.2160, 0.3730, 0.1270, −0.3010
HAPPY FOR | 0.1330, 0.1580, 0.1980, 0.3490, −0.0980 | SATISFACTION | 0.2020, 0.2160, 0.3730, 0.1270, −0.3010
RELIEF | −0.4010, −0.2780, −0.5260, −0.2650, 0.0980 | GRATITUDE | 0.1670, 0.1560, 0.2450, 0.2260, −0.1060
PITY | 0.1690, 0.1840, 0.3380, 0.0250, −0.3090 | LOVE | −0.1350, 0.0380, −0.1300, 0.4330, −0.0980
Table 5. The data in Table 1 are presented in ascending order by the Euclidean distance column, considering RobotOcean (0.1324, 0.1304, 0.1999, 0.2050, and −0.0865).
OCEAN | Emotion | Euclidean Distance
0.1670, 0.1560, 0.2450, 0.2260, −0.1060 | LOVE | 0.0646
0.1330, 0.1580, 0.1980, 0.3490, −0.0980 | JOY | 0.1473
0.2000, 0.1880, 0.2800, 0.3280, −0.0980 | HAPPY_FOR | 0.1694
0.3000, 0.2180, 0.3740, 0.3260, −0.0330 | PRIDE | 0.2859
−0.0010, 0.0340, −0.0120, 0.2250, 0.0160 | HOPE | 0.2924
0.2020, 0.2160, 0.3730, 0.1270, −0.3010 | SATISFACTION | 0.3031
0.1690, 0.1840, 0.3380, 0.0250, −0.3090 | RELIEF | 0.3212
−0.0350, 0.1000, −0.0130, 0.5140, −0.0900 | ADMIRATION | 0.4151
−0.1080, 0.0560, −0.0856, 0.4128, −0.1240 | LIKING | 0.4390
−0.1660, 0.0660, −0.0490, 0.2130, −0.3660 | GLOATING | 0.4855
−0.1350, 0.0380, −0.1300, 0.4330, −0.0980 | GRATITUDE | 0.4948
0.4330, 0.3120, 0.5260, 0.5090, −0.0170 | GRATIFICATION | 0.5688
0.2350, 0.0240, 0.2470, −0.3520, 0.1060 | REPROACH | 0.6090
−0.2330, −0.1240, −0.2460, −0.1810, −0.0810 | RESENTMENT | 0.7423
0.1330, −0.0980, 0.0140, −0.3150, 0.3580 | DISLIKING | 0.7465
−0.2350, −0.2160, −0.3850, −0.1460, 0.2360 | DISAPPOINTMENT | 0.9117
0.3622, −0.0882, 0.1585, −0.3637, 0.6742 | ANGER | 1.0029
−0.3690, −0.2760, −0.5490, −0.1040, 0.2360 | SHAME | 1.0889
0.3990, −0.1020, 0.1800, −0.4470, 0.7320 | HATE | 1.1054
−0.4010, −0.2780, −0.5260, −0.2650, 0.0980 | PITY | 1.1142
−0.0901, −0.3338, −0.4278, −0.3269, 0.7548 | FEAR | 1.2876
Table 6. Emotions classified to be demonstrated by the robot.
Emotion | Number | Percentage
LOVE | 86649 | 4.891%
JOY | 1177067 | 66.442%
SATISFACTION | 567 | 0.032%
LIKING | 386056 | 21.792%
HOPE | 118582 | 6.694%
GRATITUDE | 1840 | 0.104%
ADMIRATION | 800 | 0.045%
