Search Results (6)

Search Parameters:
Keywords = robot’s emotional body movements

25 pages, 2723 KiB  
Article
How Do Humans Recognize the Motion Arousal of Non-Humanoid Robots?
by Qisi Xie, Zihao Chen and Dingbang Luh
Appl. Sci. 2025, 15(4), 1887; https://doi.org/10.3390/app15041887 - 12 Feb 2025
Viewed by 1190
Abstract
As non-humanoid robots develop and become more involved in human life, emotional communication between humans and robots will become more common. Non-verbal communication, especially through body movements, plays a significant role in human–robot interaction. To enable non-humanoid robots to express a richer range of emotions, it is crucial to understand how humans recognize the emotional movements of robots. This study focuses on the underlying mechanisms by which humans perceive the motion arousal levels of non-humanoid robots. It proposes a general hypothesis: human recognition of a robot’s emotional movements is based on the perception of overall motion and is independent of the robot’s mechanical appearance. Based on physical motion constraints, non-humanoid robots are divided into two categories: those guided by inverse kinematics (IK) constraints and those guided by forward kinematics (FK) constraints. A literature analysis suggests that motion amplitude is a plausible common influencing factor. Two psychological measurement experiments using the PAD scale were conducted to analyze how subjects perceived the arousal expressed by different types of non-humanoid robots at various motion amplitudes. The results show that amplitude can be used to express arousal across different types of non-humanoid robots; for non-humanoid robots guided by FK constraints, the end position also has some impact. This validates the paper’s overall hypothesis. The pattern by which motion amplitude expresses emotional arousal is roughly the same across different robots: the degree of motion amplitude corresponds closely to the degree of arousal. This research helps expand the boundaries of knowledge, uncover user cognitive patterns, and enhance the efficiency of expressing arousal in non-humanoid robots.
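To make the amplitude–arousal analysis concrete, here is a minimal sketch (not the authors' code; all data, level names, and values below are illustrative placeholders) of how one might test whether PAD arousal ratings rise monotonically with motion amplitude:

```python
# Sketch: does perceived arousal (PAD scale) track motion amplitude?
# Data are invented placeholders, not values from the paper.
import numpy as np
from scipy import stats

# Hypothetical arousal ratings (PAD scale, -1..1) at three amplitude levels.
ratings = {
    "small":  np.array([-0.4, -0.2, -0.3, -0.1, -0.5]),
    "medium": np.array([ 0.0,  0.1, -0.1,  0.2,  0.0]),
    "large":  np.array([ 0.5,  0.6,  0.4,  0.7,  0.5]),
}

# Encode amplitude ordinally and pool ratings to test a monotonic trend.
levels = {"small": 1, "medium": 2, "large": 3}
x = np.concatenate([[levels[k]] * len(v) for k, v in ratings.items()])
y = np.concatenate(list(ratings.values()))

rho, p = stats.spearmanr(x, y)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")  # positive rho: amplitude tracks arousal
```

An ordinal encoding plus a rank correlation matches the paper's claim of a monotonic amplitude–arousal relationship without assuming linearity.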

18 pages, 8189 KiB  
Article
An Emotion Recognition Method for Humanoid Robot Body Movements Based on a PSO-BP-RMSProp Neural Network
by Wa Gao, Tanfeng Jiang, Wanli Zhai and Fusheng Zha
Sensors 2024, 24(22), 7227; https://doi.org/10.3390/s24227227 - 12 Nov 2024
Cited by 2 | Viewed by 1966
Abstract
This paper explores the computational model that connects a robot’s emotional body movements with human emotion in order to propose an emotion recognition method for humanoid robot body movements; little research has directly addressed the recognition of robot bodily expression from this perspective. The robot’s body movements are designed by imitating human emotional body movements. Subjective questionnaires and statistical methods are used to analyze the characteristics of users’ perceptions and to select appropriate designs. An emotional body movement recognition model using a BP neural network (the EBMR-BP model) is proposed, in which the selected robot body movements and corresponding emotions are used as inputs and outputs; its topological architecture, encoding rules, and training process are described in detail. The PSO method and the RMSProp algorithm are then introduced to optimize the EBMR-BP method, yielding the PSO-BP-RMSProp model. Experiments and comparisons on emotion recognition of a robot’s body movements verify the feasibility and effectiveness of the EBMR-BP model, with a recognition rate of 66.67%, and of the PSO-BP-RMSProp model, with a recognition rate of 88.89%. This indicates that the proposed method can be used for emotion recognition of a robot’s body movements and that the optimization improves recognition. The contributions are beneficial for emotional interaction design in human–robot interaction (HRI).
(This article belongs to the Section Sensors and Robotics)
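The two-stage optimization the abstract describes lends itself to a compact illustration. The sketch below is an assumption-laden stand-in, not the authors' implementation: layer sizes and data are placeholders, and the swarm search is reduced to a crude best-of-N initial-weight sampler rather than full PSO with velocities and personal/global bests. It shows a BP-style network whose starting weights are chosen by population search and then fine-tuned with RMSProp:

```python
# Sketch of the PSO-BP-RMSProp idea: population-searched initial weights,
# then RMSProp back-propagation. Sizes and data are placeholders.
import torch
import torch.nn as nn

def make_net():
    # Toy sizes: a movement encoding (12 features) -> 6 emotion classes.
    return nn.Sequential(nn.Linear(12, 24), nn.Sigmoid(), nn.Linear(24, 6))

def loss_of(net, X, y):
    return nn.functional.cross_entropy(net(X), y)

X = torch.randn(60, 12)           # placeholder body-movement encodings
y = torch.randint(0, 6, (60,))    # placeholder emotion labels

# Crude PSO stand-in: sample candidate initialisations, keep the best.
best_net, best_loss = None, float("inf")
for _ in range(20):
    net = make_net()
    with torch.no_grad():
        l = loss_of(net, X, y).item()
    if l < best_loss:
        best_net, best_loss = net, l

# Fine-tune the selected initialisation with RMSProp back-propagation.
opt = torch.optim.RMSprop(best_net.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = loss_of(best_net, X, y)
    loss.backward()
    opt.step()
print(f"final training loss: {loss.item():.3f}")
```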

14 pages, 9346 KiB  
Article
Human Perception of the Emotional Expressions of Humanoid Robot Body Movements: Evidence from Survey and Eye-Tracking Measurements
by Wa Gao, Shiyi Shen, Yang Ji and Yuan Tian
Biomimetics 2024, 9(11), 684; https://doi.org/10.3390/biomimetics9110684 - 8 Nov 2024
Cited by 2 | Viewed by 2376
Abstract
The emotional expression of body movement, an aspect of emotional communication between humans, has received too little attention in the field of human–robot interaction (HRI). This paper explores human perceptions of the emotional expressions of humanoid robot body movements in order to study the emotional design of robots’ bodily expressions and the characteristics of how humans perceive these emotional body movements. Six categories of emotional behaviors, covering happiness, anger, sadness, surprise, fear, and disgust, were designed by imitating human emotional body movements and implemented on a Yanshee robot. A total of 135 participants were recruited for questionnaires and eye-tracking measurements. Statistical methods, including K-means clustering, repeated-measures analysis of variance (ANOVA), Friedman’s ANOVA, and Spearman’s correlation test, were used to analyze the data. From the statistical results on the emotional categories, intensities, and arousals perceived by humans, a guide to grading the designed robot bodily expressions of emotion is created. Combining this guide with objective analyses, such as the fixations and trajectories of eye movements, the characteristics of human perception are described, including the perceived differences between happiness and the negative emotions and the trends of eye movements across emotional categories. This study not only provides subjective and objective evidence that humans can perceive robot bodily expressions of emotion through vision alone but also offers helpful guidance for designing appropriate emotional bodily expressions in HRI.
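As a concrete illustration of the statistical pipeline, here is a minimal sketch (illustrative random data, not the study's measurements) of two of the analyses named above: Friedman's ANOVA across the six emotion categories and K-means clustering to bin perceived intensities into grades:

```python
# Sketch: Friedman's ANOVA plus K-means grading on invented rating data.
import numpy as np
from scipy import stats
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical 7-point intensity ratings: 10 participants x 6 emotions
# (happiness, anger, sadness, surprise, fear, disgust).
ratings = rng.integers(1, 8, size=(10, 6)).astype(float)

# Friedman's ANOVA: do ratings differ across emotions for the same raters?
stat, p = stats.friedmanchisquare(*ratings.T)
print(f"Friedman chi2 = {stat:.2f}, p = {p:.3f}")

# K-means over mean intensity per emotion, to bin the designed expressions
# into low / medium / high grades for a design guide.
means = ratings.mean(axis=0).reshape(-1, 1)
grades = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(means)
print("grade per emotion:", grades)
```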

31 pages, 2738 KiB  
Article
Real-Time Robotic Presentation Skill Scoring Using Multi-Model Analysis and Fuzzy Delphi–Analytic Hierarchy Process
by Rafeef Fauzi Najim Alshammari, Abdul Hadi Abd Rahman, Haslina Arshad and Osamah Shihab Albahri
Sensors 2023, 23(24), 9619; https://doi.org/10.3390/s23249619 - 5 Dec 2023
Cited by 1 | Viewed by 2055
Abstract
Existing methods for scoring student presentations predominantly rely on computer-based implementations and do not incorporate a robotic multi-classification model. This limitation can cause misclassification, because fixed camera positions preclude active feature learning. Moreover, these scoring methods often focus solely on facial expressions and neglect other crucial factors, such as eye contact, hand gestures, and body movements, leading to potential biases or inaccuracies in scoring. To address these limitations, this study introduces Robotics-based Presentation Skill Scoring (RPSS), which employs multi-model analysis. RPSS captures and analyses four key presentation parameters in real time, namely facial expressions, eye contact, hand gestures, and body movements, and applies the fuzzy Delphi method for criteria selection and the analytic hierarchy process (AHP) for weighting, enabling decision makers to assign varying weights to each criterion according to its relative importance. RPSS identifies five academic facial expressions and evaluates eye contact to achieve a comprehensive assessment and enhance scoring accuracy. Specific sub-models are employed for each presentation parameter: EfficientNet for facial emotions, DeepEC for eye contact, and an integrated Kalman and heuristic approach for hand and body movements. Scores are determined by predefined rules. RPSS is implemented on a robot, and the results highlight its practical applicability. Each sub-model is rigorously evaluated offline and compared against benchmarks for selection. Real-world evaluations are also conducted, incorporating a novel active learning approach that leverages the robot’s mobility to improve performance. In a comparative evaluation with human tutors, RPSS achieves an average agreement of 99%, demonstrating its effectiveness in assessing students’ presentation skills.
(This article belongs to the Section Intelligent Sensors)
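The AHP weighting step can be illustrated briefly. The sketch below uses a hypothetical pairwise-comparison matrix (the judgments are invented, not the paper's) to derive weights for the four presentation parameters from the principal eigenvector, with the standard consistency check:

```python
# Sketch: AHP criterion weights from a pairwise-comparison matrix.
# The comparison judgments are hypothetical placeholders.
import numpy as np

criteria = ["facial expressions", "eye contact", "hand gestures", "body movements"]
# A[i, j] = how much more important criterion i is than j (Saaty's 1-9 scale).
A = np.array([
    [1.0, 2.0, 3.0, 3.0],
    [1/2, 1.0, 2.0, 2.0],
    [1/3, 1/2, 1.0, 1.0],
    [1/3, 1/2, 1.0, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                      # normalised criterion weights

# Consistency check: CR < 0.1 is the usual acceptability threshold.
n = len(criteria)
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.90                    # 0.90 = Saaty's random index for n = 4
for name, weight in zip(criteria, w):
    print(f"{name}: {weight:.3f}")
print(f"consistency ratio: {cr:.3f}")
```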

24 pages, 2441 KiB  
Article
Respiration Based Non-Invasive Approach for Emotion Recognition Using Impulse Radio Ultra Wide Band Radar and Machine Learning
by Hafeez Ur Rehman Siddiqui, Hina Fatima Shahzad, Adil Ali Saleem, Abdul Baqi Khan Khakwani, Furqan Rustam, Ernesto Lee, Imran Ashraf and Sandra Dudley
Sensors 2021, 21(24), 8336; https://doi.org/10.3390/s21248336 - 13 Dec 2021
Cited by 33 | Viewed by 5199
Abstract
Emotion recognition has recently gained prominence in a multitude of fields owing to its wide use in human–computer interaction interfaces, therapy, and advanced robotics. Human speech, gestures, facial expressions, and physiological signals can all be used to recognize emotions. Despite their discriminative properties, the first three modalities are regarded as unreliable, since the possibility that humans voluntarily or involuntarily conceal their real emotions cannot be ignored. Physiological signals, on the other hand, can provide more objective and reliable emotion recognition. Several physiological-signal-based methods have been introduced for emotion recognition, yet such approaches are predominantly invasive, involving the placement of on-body sensors, and their efficacy and accuracy are hindered by sensor malfunction and erroneous data caused by limb movement. This study presents a non-invasive approach in which machine learning complements impulse radio ultra-wideband (IR-UWB) signals for emotion recognition. First, the feasibility of using IR-UWB for emotion recognition is analyzed; emotional states are then classified into happiness, disgust, and fear. These emotions are elicited in human subjects, both male and female, using carefully selected video clips. The convincing evidence that different breathing patterns are linked with different emotions is leveraged to discriminate between emotions: the chest movement of thirty-five subjects is recorded with IR-UWB radar while they watch the video clips alone. Extensive signal processing is applied to the chest movement signals to estimate respiration rate per minute (RPM), and the algorithm’s RPM estimates are validated against repeated measurements from a commercially available pulse oximeter. A dataset comprising gender, age, RPM, and the associated emotions is compiled and used with several machine learning algorithms for automatic recognition of human emotions. Experiments reveal that IR-UWB can differentiate between human emotions with an accuracy of 76% without any on-body sensors. Separate analyses of male and female participants reveal that males experience high arousal for happiness, whereas females experience fear more intensely; for disgust, no large difference is found between male and female participants. To the best of the authors’ knowledge, this is the first non-invasive approach using IR-UWB radar for emotion recognition.
(This article belongs to the Special Issue Ultra Wideband (UWB) Systems in Biomedical Sensing)
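The RPM-estimation step is a classic band-limit-then-peak-pick pipeline. Here is a minimal sketch on a synthetic chest-movement signal (the radar frame rate and the breathing band are assumptions for illustration, not values from the paper):

```python
# Sketch: estimate respiration rate per minute (RPM) from a chest signal.
# The signal is synthetic; frame rate and band edges are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 20.0                                  # assumed radar frame rate (Hz)
t = np.arange(0, 60, 1 / fs)
# Synthetic chest displacement: 0.25 Hz breathing (~15 breaths/min) + noise.
chest = np.sin(2 * np.pi * 0.25 * t) + 0.3 * np.random.randn(t.size)

# Band-pass 0.1-0.5 Hz (6-30 breaths/min), a typical adult breathing band.
b, a = butter(2, [0.1, 0.5], btype="band", fs=fs)
resp = filtfilt(b, a, chest)

# Dominant spectral peak -> respiration rate per minute.
spec = np.abs(np.fft.rfft(resp))
freqs = np.fft.rfftfreq(resp.size, d=1 / fs)
rpm = freqs[np.argmax(spec)] * 60
print(f"estimated RPM: {rpm:.1f}")
```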

25 pages, 8988 KiB  
Article
Imitating Human Emotions with a NAO Robot as Interviewer Playing the Role of Vocational Tutor
by Selene Goenaga, Loraine Navarro, Christian G. Quintero M. and Mauricio Pardo
Electronics 2020, 9(6), 971; https://doi.org/10.3390/electronics9060971 - 11 Jun 2020
Cited by 8 | Viewed by 3614
Abstract
This paper proposes an intelligent system that can hold an interview, using a NAO robot as an interviewer playing the role of a vocational tutor. To that end, twenty behaviors across five personality profiles are classified, categorized, and implemented on the NAO. Five basic emotions are considered: anger, boredom, interest, surprise, and joy, and the selected behaviors are grouped according to these five emotions. The common behaviors (e.g., movements or body postures) used by the robot during vocational guidance sessions are based on the “Five-Factor Model” theory of personality traits. A predefined set of questions about the person’s vocational preferences is asked by the robot, following a theoretical model called the “Orientation Model”. The NAO can thus react as appropriately as possible during the interview, according to the score of the person’s answer to each question and the person’s personality type. Additionally, based on the answers to these questions, a vocational profile is established, and the robot can provide a recommendation about the person’s vocation. The results show how the intelligent selection of behaviors can be successfully achieved through the proposed approach, making the human–robot interaction friendlier.
(This article belongs to the Special Issue Applications and Trends in Social Robotics)
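The score-and-personality-driven reaction logic can be sketched as a simple rule table. The thresholds, rules, and behavior names below are hypothetical stand-ins, not the authors' model:

```python
# Sketch: pick an emotion-grouped robot behavior from an answer score and
# a Five-Factor personality label. All rules and names are invented.
import random

# Behaviors grouped by the five emotions used in the study.
BEHAVIORS = {
    "anger":    ["arms_crossed", "head_shake"],
    "boredom":  ["slow_slump", "look_away"],
    "interest": ["lean_forward", "nod"],
    "surprise": ["arms_raised", "step_back"],
    "joy":      ["open_arms", "cheer"],
}

def pick_emotion(score: float, personality: str) -> str:
    """Map an answer score (0-1) and a personality label to an emotion."""
    if score > 0.8:
        return "joy"
    if score > 0.6:
        return "interest" if personality != "neuroticism" else "surprise"
    if score > 0.4:
        return "surprise"
    return "boredom" if personality == "openness" else "anger"

def react(score: float, personality: str) -> str:
    emotion = pick_emotion(score, personality)
    return random.choice(BEHAVIORS[emotion])  # one behavior from the group

print(react(0.9, "extraversion"))  # e.g. "open_arms"
```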
