Search Results (8)

Search Parameters:
Keywords = facial expression imitation

20 pages, 4313 KiB  
Article
Dynamic Emotion Recognition and Expression Imitation in Neurotypical Adults and Their Associations with Autistic Traits
by Hai-Ting Wang, Jia-Ling Lyu and Sarina Hui-Lin Chien
Sensors 2024, 24(24), 8133; https://doi.org/10.3390/s24248133 - 19 Dec 2024
Viewed by 2538
Abstract
Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by deficits in social interaction and communication. While many studies suggest that individuals with ASD struggle with emotion processing, the association between emotion processing and autistic traits in non-clinical populations is still unclear. We examined whether neurotypical adults’ facial emotion recognition and expression imitation are associated with autistic traits. We recruited 32 neurotypical adults; each completed two computerized tasks, Dynamic Emotion Recognition and Expression Imitation, and two standardized measures: the Chinese version of the Autism-Spectrum Quotient (AQ) and the Twenty-Item Prosopagnosia Index (PI-20). Results for dynamic emotion recognition showed that happiness had the highest mean accuracy, followed by surprise, sadness, anger, fear, and disgust. For expression imitation, surprise and happiness were the easiest to imitate, followed by disgust, while the accuracy of imitating sadness, anger, and fear was much lower. Importantly, individual AQ scores correlated negatively with emotion recognition accuracy and positively with PI-20 scores. The AQ imagination and communication sub-scores, as well as the PI-20, correlated positively with the imitation of surprise. In summary, we found a significant link between recognizing emotional expressions and the level of autistic traits in non-clinical populations, supporting the concept of the broader autism phenotype. Full article
(This article belongs to the Special Issue Emotion Recognition and Cognitive Behavior Analysis Based on Sensors)
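The correlational analysis described in this abstract (AQ scores against recognition accuracy and PI-20 scores) can be sketched in a few lines of Python. The arrays below are hypothetical stand-ins for per-participant data, not the study's results; only the sample size (32) comes from the abstract.

```python
# Minimal sketch of a Pearson-correlation analysis of the kind described
# in the abstract. All data here are hypothetical stand-ins.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 32  # sample size reported in the abstract

aq_scores = rng.normal(20, 6, n)                                 # hypothetical AQ totals
recognition_accuracy = 0.9 - 0.01 * aq_scores + rng.normal(0, 0.05, n)
pi20_scores = 40 + 0.8 * aq_scores + rng.normal(0, 5, n)

r_acc, p_acc = pearsonr(aq_scores, recognition_accuracy)  # expected negative correlation
r_pi, p_pi = pearsonr(aq_scores, pi20_scores)              # expected positive correlation
print(f"AQ vs. recognition accuracy: r = {r_acc:.2f}, p = {p_acc:.3f}")
print(f"AQ vs. PI-20: r = {r_pi:.2f}, p = {p_pi:.3f}")
```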

16 pages, 6330 KiB  
Article
A Two-Stage Facial Kinematic Control Strategy for Humanoid Robots Based on Keyframe Detection and Keypoint Cubic Spline Interpolation
by Ye Yuan, Jiahao Li, Qi Yu, Jian Liu, Zongdao Li, Qingdu Li and Na Liu
Mathematics 2024, 12(20), 3278; https://doi.org/10.3390/math12203278 - 18 Oct 2024
Cited by 2 | Viewed by 1610
Abstract
A rich repertoire of facial expressions is the basis of natural human–robot interaction for high-fidelity humanoid robots. Facial expression imitation by humanoid robots involves the transmission of human facial expression data to servos situated within the robot’s head. These data drive the servos to manipulate the skin, thereby enabling the robot to exhibit various facial expressions. However, since the mechanical transmission rate cannot keep up with the data processing rate, humanoid robots often suffer from jitter during imitation. We conducted a thorough analysis of the transmitted facial expression sequence data and discovered that they are extremely redundant. Therefore, we designed a two-stage strategy for humanoid robots based on facial keyframe detection and facial keypoint detection to achieve more natural and smooth expression imitation. We first built a facial keyframe detection model based on ResNet-50, combined with optical flow estimation, which can identify key expression frames in the sequence. A facial keypoint detection model is then applied to the keyframes to obtain facial keypoint coordinates. Based on these coordinates, cubic spline interpolation is used to obtain the motion trajectory parameters of the servos, thus realizing robust control of the humanoid robot’s facial expressions. Experiments show that, whereas the robot’s imitation previously stuttered at frame rates above 25 fps, our strategy allows the robot to maintain good facial expression imitation similarity (cosine similarity of 0.7226) even at higher frame rates. Full article
(This article belongs to the Section E2: Control Theory and Mechanics)
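The trajectory-generation step of the two-stage strategy lends itself to a brief illustration. The sketch below uses SciPy's CubicSpline to interpolate one hypothetical servo's keyframe positions into a smooth command trajectory, and computes a cosine-similarity score of the kind quoted in the abstract; the keyframe times, positions, and update rate are invented for illustration, not taken from the paper.

```python
# Sketch: cubic-spline interpolation of keyframe servo positions, plus the
# cosine-similarity metric used to score imitation. Keyframe values are
# hypothetical; the paper derives them from detected facial keypoints.
import numpy as np
from scipy.interpolate import CubicSpline

# Keyframe timestamps (s) and one servo's target positions (normalized)
key_times = np.array([0.0, 0.4, 0.8, 1.2, 1.6])
key_positions = np.array([0.10, 0.55, 0.80, 0.60, 0.20])

spline = CubicSpline(key_times, key_positions)

# Dense command trajectory sampled at an assumed 50 Hz servo update rate
t = np.arange(key_times[0], key_times[-1], 1 / 50)
servo_trajectory = spline(t)

def cosine_similarity(a, b):
    """Cosine similarity between two equally sampled trajectories."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Compare the commanded trajectory against a hypothetical measured one
measured = servo_trajectory + np.random.default_rng(1).normal(0, 0.02, t.size)
print(f"cosine similarity: {cosine_similarity(servo_trajectory, measured):.4f}")
```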

18 pages, 2748 KiB  
Article
“When You’re Smiling”: How Posed Facial Expressions Affect Visual Recognition of Emotions
by Francesca Benuzzi, Daniela Ballotta, Claudia Casadio, Vanessa Zanelli, Carlo Adolfo Porro, Paolo Frigio Nichelli and Fausta Lui
Brain Sci. 2023, 13(4), 668; https://doi.org/10.3390/brainsci13040668 - 16 Apr 2023
Cited by 2 | Viewed by 3604
Abstract
Facial imitation occurs automatically during the perception of an emotional facial expression, and preventing it may interfere with the accuracy of emotion recognition. In the present fMRI study, we evaluated the effect of posing a facial expression on the recognition of ambiguous facial expressions. Since facial activity is affected by various factors, such as empathic aptitudes, the Interpersonal Reactivity Index (IRI) questionnaire was administered and its scores were correlated with brain activity. Twenty-six healthy female subjects took part in the experiment. The volunteers were asked to pose a facial expression (happy, disgusted, or neutral), then to watch an ambiguous emotional face, and finally to indicate whether the perceived emotion was happiness or disgust. Blends of happy and disgusted faces were used as stimuli. Behavioral results showed that posing an emotional face increased the percentage of congruence with the perceived emotion. When participants posed a facial expression and perceived a non-congruent emotion, a neural network comprising the bilateral anterior insula was activated. Brain activity also correlated with empathic traits, particularly empathic concern, fantasy, and personal distress. Our findings support the idea that facial mimicry plays a crucial role in identifying emotions, and that empathic emotional abilities can modulate the brain circuits involved in this process. Full article
(This article belongs to the Section Cognitive, Social and Affective Neuroscience)

22 pages, 1723 KiB  
Article
Is There a Difference in Facial Emotion Recognition after Stroke with vs. without Central Facial Paresis?
by Anna-Maria Kuttenreich, Harry von Piekartz and Stefan Heim
Diagnostics 2022, 12(7), 1721; https://doi.org/10.3390/diagnostics12071721 - 15 Jul 2022
Cited by 6 | Viewed by 2833
Abstract
The Facial Feedback Hypothesis (FFH) states that facial emotion recognition is based on the imitation of facial emotional expressions and the processing of physiological feedback. In light of limited and contradictory evidence, this hypothesis is still being debated. Therefore, in the present study, emotion recognition was tested in patients with central facial paresis after stroke. Performance in facial vs. auditory emotion recognition was assessed in patients with vs. without facial paresis. The accuracy of objective facial emotion recognition was significantly lower in patients with facial paresis than in patients without facial paresis, and also in comparison to healthy controls. Moreover, for patients with facial paresis, accuracy in facial emotion recognition was significantly worse than in auditory emotion recognition. Finally, in patients with facial paresis, subjective judgements of their own facial emotion recognition abilities differed strongly from their objective performance. This pattern of results demonstrates a specific deficit in facial emotion recognition in central facial paresis, providing support for the FFH and pointing to specific effects of stroke. Full article
(This article belongs to the Special Issue Evidence-Based Diagnosis and Management of Facial Nerve Disorders)

18 pages, 4269 KiB  
Article
Fostering Emotion Recognition in Children with Autism Spectrum Disorder
by Vinícius Silva, Filomena Soares, João Sena Esteves, Cristina P. Santos and Ana Paula Pereira
Multimodal Technol. Interact. 2021, 5(10), 57; https://doi.org/10.3390/mti5100057 - 22 Sep 2021
Cited by 18 | Viewed by 6824
Abstract
Facial expressions are of utmost importance in social interactions, providing communicative prompts for turn-taking and feedback. Nevertheless, not everyone has the ability to express themselves socially and emotionally in verbal and non-verbal communication. In particular, individuals with Autism Spectrum Disorder (ASD) are characterized by impairments in social communication, repetitive patterns of behaviour, and restricted activities or interests. The literature reports that robotic tools promote social interaction with children with ASD. The main goal of this work is to develop a system capable of automatically detecting emotions through facial expressions and interfacing it with a robotic platform (the Zeno R50 Robokind® robotic platform, named ZECA) in order to allow social interaction with children with ASD. ZECA was used as a mediator in social communication activities. The experimental setup and methodology for a real-time facial expression (happiness, sadness, anger, surprise, fear, and neutral) recognition system were based on the Intel® RealSense™ 3D sensor, facial feature extraction, and a multiclass Support Vector Machine classifier. The results obtained allow us to infer that the proposed system is adequate for support sessions with children with ASD, giving a strong indication that it may be used to foster emotion recognition and imitation skills. Full article
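The classification stage described in this abstract (extracted facial features fed to a multiclass SVM) can be sketched with scikit-learn. The feature matrix below is random and merely stands in for the geometric features the authors extract from the RealSense sensor; the six emotion classes are the ones listed in the abstract.

```python
# Sketch of the multiclass SVM stage of a facial-expression recognition
# pipeline. X stands in for per-frame facial feature vectors (e.g.,
# landmark distances and angles); real features would come from the sensor.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

emotions = ["happiness", "sadness", "anger", "surprise", "fear", "neutral"]
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20))             # hypothetical feature vectors
y = rng.integers(0, len(emotions), 600)    # hypothetical class labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Standardize features, then fit an RBF SVM (multiclass via one-vs-one)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```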

14 pages, 683 KiB  
Article
Facial Imitation Improves Emotion Recognition in Adults with Different Levels of Sub-Clinical Autistic Traits
by Andrea E. Kowallik, Maike Pohl and Stefan R. Schweinberger
J. Intell. 2021, 9(1), 4; https://doi.org/10.3390/jintelligence9010004 - 13 Jan 2021
Cited by 9 | Viewed by 4515
Abstract
We used computer-based automatic expression analysis to investigate the impact of imitation on facial emotion recognition with a baseline-intervention-retest design. The participants, 55 young adults with varying degrees of autistic traits, completed an emotion recognition task with images of faces displaying one of six basic emotional expressions. This task was then repeated with instructions to imitate the expressions. During the experiment, a camera captured the participants’ faces for an automatic evaluation of their imitation performance. The instruction to imitate enhanced imitation performance as well as emotion recognition. Of relevance, emotion recognition improvements in the imitation block were larger in people with higher levels of autistic traits, whereas imitation enhancements were independent of autistic traits. The finding that an imitation instruction improves emotion recognition, and that imitation is a positive within-participant predictor of recognition accuracy in the imitation block, supports the idea of a link between motor expression and perception in the processing of emotions, which might be mediated by the mirror neuron system. However, because there was no evidence that people with higher autistic traits differ in their imitative behavior per se, their disproportionate emotion recognition benefits could have arisen from indirect effects of the imitation instructions. Full article
(This article belongs to the Special Issue Advances in Socio-Emotional Ability Research)

28 pages, 15652 KiB  
Article
Facial Muscle Activity Recognition with Reconfigurable Differential Stethoscope-Microphones
by Hymalai Bello, Bo Zhou and Paul Lukowicz
Sensors 2020, 20(17), 4904; https://doi.org/10.3390/s20174904 - 30 Aug 2020
Cited by 10 | Viewed by 4654
Abstract
Many human activities and states are related to the actions of the facial muscles: from the expression of emotions, stress, and non-verbal communication, through health-related actions such as coughing and sneezing, to nutrition and drinking. In this work, we describe in detail the design and evaluation of a wearable system for facial muscle activity monitoring based on a reconfigurable differential array of stethoscope-microphones. In our system, six stethoscopes are placed at locations that could easily be integrated into the frame of smart glasses. The paper describes the detailed hardware design and the selection and adaptation of appropriate signal processing and machine learning methods. For the evaluation, we asked eight participants to imitate a set of facial actions, such as expressions of happiness, anger, surprise, sadness, upset, and disgust, and gestures like kissing, winking, sticking the tongue out, and taking a pill. An evaluation of a complete data set of 2640 events with a 66% training and 33% testing split has been performed. Although we encountered high variability in the volunteers’ expressions, our approach achieves a recall of 55%, a precision of 56%, and an F1-score of 54% in the user-independent scenario (9% chance level). On a user-dependent basis, our worst result has an F1-score of 60% and our best an F1-score of 89%, with a recall of 60% for expressions like happiness, anger, kissing, sticking the tongue out, and neutral (null class). Full article
(This article belongs to the Special Issue Sensors for Activity Recognition)
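The user-independent evaluation reported above (66%/33% split, recall, precision, F1) corresponds to a standard multiclass scoring setup. A minimal sketch with scikit-learn follows; the labels and predictions are random placeholders rather than the stethoscope-microphone data, and the class count of 11 is an assumption consistent with the roughly 9% chance level mentioned in the abstract.

```python
# Sketch of the evaluation metrics reported in the abstract (recall,
# precision, F1). Labels and predictions are random placeholders.
import numpy as np
from sklearn.metrics import f1_score, precision_score, recall_score

rng = np.random.default_rng(0)
n_classes = 11                             # assumed class count (~9% chance level)
n_test = 880                               # roughly 33% of the 2640 events held out
y_true = rng.integers(0, n_classes, n_test)
y_pred = rng.integers(0, n_classes, n_test)

# Macro averaging treats all classes equally, which is common for
# imbalanced activity-recognition data.
print("recall:   ", round(recall_score(y_true, y_pred, average="macro"), 2))
print("precision:", round(precision_score(y_true, y_pred, average="macro"), 2))
print("f1-score: ", round(f1_score(y_true, y_pred, average="macro"), 2))
```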

27 pages, 12685 KiB  
Article
Muecas: A Multi-Sensor Robotic Head for Affective Human Robot Interaction and Imitation
by Felipe Cid, Jose Moreno, Pablo Bustos and Pedro Núñez
Sensors 2014, 14(5), 7711-7737; https://doi.org/10.3390/s140507711 - 28 Apr 2014
Cited by 49 | Viewed by 17240
Abstract
This paper presents a multi-sensor humanoid robotic head for human robot interaction. The design of the robotic head, Muecas, is based on ongoing research on the mechanisms of perception and imitation of human expressions and emotions. These mechanisms allow direct interaction between the robot and its human companion through the different natural language modalities: speech, body language and facial expressions. The robotic head has 12 degrees of freedom, in a human-like configuration, including eyes, eyebrows, mouth and neck, and has been designed and built entirely by IADeX (Engineering, Automation and Design of Extremadura) and RoboLab. A detailed description of its kinematics is provided along with the design of the most complex controllers. Muecas can be directly controlled by FACS (Facial Action Coding System), the de facto standard for facial expression recognition and synthesis. This feature facilitates its use by third party platforms and encourages the development of imitation and of goal-based systems. Imitation systems learn from the user, while goal-based ones use planning techniques to drive the user towards a final desired state. To show the flexibility and reliability of the robotic head, the paper presents a software architecture that is able to detect, recognize, classify and generate facial expressions in real time using FACS. This system has been implemented using the robotics framework, RoboComp, which provides hardware-independent access to the sensors in the head. Finally, the paper presents experimental results showing the real-time functioning of the whole system, including recognition and imitation of human facial expressions. Full article
(This article belongs to the Special Issue State-of-the-Art Sensors Technology in Spain 2013)
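FACS-based control of the kind described above amounts to mapping action-unit (AU) intensities onto joint targets for the robot's degrees of freedom. The sketch below shows one hypothetical linear mapping from a few AU intensities to servo angles; the AU selection, gains, and joint names are illustrative assumptions and not Muecas' actual configuration.

```python
# Hypothetical sketch of FACS-driven control: map action-unit (AU)
# intensities (0..1) to servo angle targets via a linear gain table.
# AU choices, gains, and joint names are illustrative, not Muecas' real ones.
from dataclasses import dataclass

@dataclass
class ServoMapping:
    joint: str      # servo / degree of freedom to drive
    neutral: float  # angle (degrees) at AU intensity 0
    gain: float     # degrees added per unit of AU intensity

AU_TO_SERVO = {
    "AU1_inner_brow_raiser": ServoMapping("brow_left", neutral=0.0, gain=20.0),
    "AU12_lip_corner_puller": ServoMapping("mouth_corner_left", neutral=0.0, gain=25.0),
    "AU26_jaw_drop": ServoMapping("jaw", neutral=0.0, gain=30.0),
}

def servo_targets(au_intensities: dict[str, float]) -> dict[str, float]:
    """Convert AU intensities into servo angle targets (degrees)."""
    targets = {}
    for au, intensity in au_intensities.items():
        m = AU_TO_SERVO[au]
        targets[m.joint] = m.neutral + m.gain * max(0.0, min(1.0, intensity))
    return targets

# Example: a smile-like expression (strong AU12, slight jaw drop)
print(servo_targets({"AU12_lip_corner_puller": 0.8, "AU26_jaw_drop": 0.2}))
```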