Search Results (62)

Search Parameters:
Keywords = social HRI

28 pages, 3441 KiB  
Article
Which AI Sees Like Us? Investigating the Cognitive Plausibility of Language and Vision Models via Eye-Tracking in Human-Robot Interaction
by Khashayar Ghamati, Maryam Banitalebi Dehkordi and Abolfazl Zaraki
Sensors 2025, 25(15), 4687; https://doi.org/10.3390/s25154687 - 29 Jul 2025
Viewed by 378
Abstract
As large language models (LLMs) and vision–language models (VLMs) become increasingly used in robotics, a crucial question arises: to what extent do these models replicate human-like cognitive processes, particularly within socially interactive contexts? Whilst these models demonstrate impressive multimodal reasoning and perception capabilities, their cognitive plausibility remains underexplored. In this study, we address this gap by using human visual attention as a behavioural proxy for cognition in a naturalistic human-robot interaction (HRI) scenario. Eye-tracking data were previously collected from participants engaging in social human-human interactions, providing frame-level gaze fixations as a human attentional ground truth. We then prompted a state-of-the-art VLM (LLaVA) to generate scene descriptions, which were processed by four LLMs (DeepSeek-R1-Distill-Qwen-7B, Qwen1.5-7B-Chat, LLaMA-3.1-8b-instruct, and Gemma-7b-it) to infer saliency points. Critically, we evaluated each model in both stateless and memory-augmented (short-term memory, STM) modes to assess the influence of temporal context on saliency prediction. Our results show that whilst stateless LLaVA most closely replicates human gaze patterns, STM confers measurable benefits only for DeepSeek, whose lexical anchoring mirrors human rehearsal mechanisms. Other models exhibited degraded performance with memory due to prompt interference or limited contextual integration. This work introduces a novel, empirically grounded framework for assessing cognitive plausibility in generative models and underscores the role of short-term memory in shaping human-like visual attention in robotic systems.
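The abstract describes a two-stage pipeline: a VLM captions each video frame, an LLM infers a saliency point from the caption (optionally with a short-term memory of recent captions), and predictions are scored against human fixations. A minimal sketch of that loop follows; the function names, the three-caption STM window, and the mean-distance score are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def evaluate_saliency(frames, fixations, describe, infer_saliency, use_stm=False):
    """describe(frame) -> caption (VLM, e.g. LLaVA); infer_saliency(prompt)
    -> (x, y) saliency point (LLM). Returns mean pixel distance to the
    human gaze fixations -- lower means closer to human attention."""
    memory, preds = [], []
    for frame in frames:
        caption = describe(frame)
        # Stateless mode uses only the current caption; STM mode prepends
        # a short window of recent captions as temporal context.
        prompt = " ".join(memory[-3:] + [caption]) if use_stm else caption
        preds.append(infer_saliency(prompt))
        memory.append(caption)
    d = np.linalg.norm(np.asarray(preds) - np.asarray(fixations), axis=1)
    return float(d.mean())
```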

21 pages, 2941 KiB  
Article
Dynamic Proxemic Model for Human–Robot Interactions Using the Golden Ratio
by Tomáš Spurný, Ján Babjak, Zdenko Bobovský and Aleš Vysocký
Appl. Sci. 2025, 15(15), 8130; https://doi.org/10.3390/app15158130 - 22 Jul 2025
Viewed by 266
Abstract
This paper presents a novel approach to determining dynamic safety and comfort zones in human–robot interactions (HRIs), with a focus on service robots operating in dynamic environments with people. The proposed proxemic model leverages a golden ratio-based comfort zone distribution and ISO safety standards to define adaptive proxemic boundaries for robots around humans. Unlike traditional fixed-threshold approaches, this method proposes a gradual and context-sensitive modulation of robot behaviour based on human position, orientation, and relative velocity. The system was implemented on an NVIDIA Jetson Xavier NX platform using a ZED 2i stereo depth camera (Stereolabs, New York, USA) and tested on two mobile robotic platforms: the quadruped Go1 (Unitree, Hangzhou, China) and the wheeled Scout Mini (AgileX, Dongguan, China). Initial verification of the proposed proxemic model through experimental comfort validation was conducted using two simple interaction scenarios, and subjective feedback was collected from participants using a modified Godspeed Questionnaire Series. The results show that participants felt comfortable during the experiments with the robots; this acceptance provides an initial basis for further research on the methodology. The proposed solution also facilitates integration into existing navigation frameworks and opens pathways towards socially aware robotic systems.
(This article belongs to the Special Issue Intelligent Robotics: Design and Applications)
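One plausible reading of a golden ratio-based zone distribution, for illustration only: successive comfort boundaries grow by φ ≈ 1.618 from an inner safety radius, and the robot's permitted speed is scaled gradually across the zones rather than switched at a single threshold. The radii and speed law below are assumptions, not the published model.

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio, ~1.618

def zone_boundaries(r_safety=0.5, n_zones=4):
    """Hypothetical comfort-zone radii grown by the golden ratio
    from an innermost safety radius (metres)."""
    return [r_safety * PHI ** k for k in range(n_zones)]

def allowed_speed(distance, approach_speed, v_max=1.0):
    """Gradual, context-sensitive modulation: the faster a person
    approaches, the further out the zones begin; speed ramps linearly
    from 0 (innermost zone) to v_max (outside the outermost zone)."""
    radii = zone_boundaries(r_safety=0.5 + 0.2 * max(approach_speed, 0.0))
    for k, r in enumerate(radii):
        if distance < r:
            return v_max * k / len(radii)
    return v_max
```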

20 pages, 5700 KiB  
Article
Multimodal Personality Recognition Using Self-Attention-Based Fusion of Audio, Visual, and Text Features
by Hyeonuk Bhin and Jongsuk Choi
Electronics 2025, 14(14), 2837; https://doi.org/10.3390/electronics14142837 - 15 Jul 2025
Viewed by 471
Abstract
Personality is a fundamental psychological trait that exerts a long-term influence on human behavior patterns and social interactions. Automatic personality recognition (APR) has gained increasing importance across various domains, including Human–Robot Interaction (HRI), personalized services, and psychological assessments. In this study, we propose a multimodal personality recognition model that classifies the Big Five personality traits by extracting features from three heterogeneous sources: audio processed using Wav2Vec2, video represented as skeleton landmark time series, and text encoded through Bidirectional Encoder Representations from Transformers (BERT) and Doc2Vec embeddings. Each modality is handled by an independent Self-Attention block that highlights salient temporal information, and these representations are then summarized and integrated using a late fusion approach to effectively reflect both inter-modal complementarity and cross-modal interactions. Compared to traditional recurrent neural network (RNN)-based multimodal models and unimodal classifiers, the proposed model achieves an improvement of up to 12 percent in the F1-score. It also maintains high prediction accuracy and robustness under limited input conditions. Furthermore, a visualization based on t-distributed Stochastic Neighbor Embedding (t-SNE) demonstrates clear distributional separation across the personality classes, enhancing the interpretability of the model and providing insights into the structural characteristics of its latent representations. To support real-time deployment, a lightweight thread-based processing architecture is implemented, ensuring computational efficiency. By leveraging deep learning-based feature extraction and the Self-Attention mechanism, we present a novel personality recognition framework that balances performance with interpretability. The proposed approach establishes a strong foundation for practical applications in HRI, counseling, education, and other interactive systems that require personalized adaptation.
(This article belongs to the Special Issue Explainable Machine Learning and Data Mining)
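As a rough sketch of the fusion scheme the abstract describes (one self-attention block per modality, temporal pooling, then late fusion for Big Five classification), the following PyTorch module may help; the feature dimensions, head counts, and mean-pooling summary are assumptions rather than the published architecture.

```python
import torch
import torch.nn as nn

class LateFusionAPR(nn.Module):
    """Per-modality self-attention + late fusion -> Big Five logits."""
    def __init__(self, dims=None, d=128, n_traits=5):
        super().__init__()
        dims = dims or {"audio": 768, "video": 99, "text": 768}  # assumed sizes
        self.proj = nn.ModuleDict({m: nn.Linear(k, d) for m, k in dims.items()})
        self.attn = nn.ModuleDict(
            {m: nn.MultiheadAttention(d, num_heads=4, batch_first=True)
             for m in dims})
        self.head = nn.Linear(d * len(dims), n_traits)

    def forward(self, feats):  # feats[m]: (batch, time, dims[m])
        pooled = []
        for m, x in feats.items():
            h = self.proj[m](x)
            h, _ = self.attn[m](h, h, h)   # self-attention highlights salient frames
            pooled.append(h.mean(dim=1))   # summarize each modality over time
        return self.head(torch.cat(pooled, dim=-1))  # late fusion
```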

16 pages, 432 KiB  
Article
Understanding HRIS Adoption: A Psychosocial Perspective on Managerial Engagement and System Effectiveness
by Fadi Sofi, Anas Al-Fattal and Ip-Shing Fan
Psychol. Int. 2025, 7(3), 57; https://doi.org/10.3390/psycholint7030057 - 4 Jul 2025
Viewed by 406
Abstract
Human Resource Information Systems (HRISs) have become integral to contemporary organizational life, yet their successful adoption remains uneven and poorly understood. Existing models often focus on cognitive or technical determinants, overlooking how emotional and social factors shape user behavior in real-world settings. This study explores HRIS adoption through a psychosocial lens, centering the experiences of business line managers, key users who are often excluded from HRIS design, training, and research. Drawing on 25 qualitative interviews across five large UK-based organizations, this paper identifies six emergent themes related to interpersonal trust, role identity, leadership influence, organizational culture, emotional resistance, and the gap between expected usefulness and daily utility. Findings reveal that approaches which account for user emotions, perceived role clarity, and social context offer a more complete understanding of HRIS adoption than those based solely on intention or usability. By highlighting the role of interpersonal dynamics and subjective experience, this study challenges dominant technology adoption models and contributes to more human-centered perspectives in HRIS research and practice. This paper concludes by offering theoretical implications and practical guidance for designing HRIS strategies that reflect the psychosocial realities of implementation across diverse organizational environments.
(This article belongs to the Section Cognitive Psychology)

17 pages, 1253 KiB  
Article
Retrieving Memory Content from a Cognitive Architecture by Impressions from Language Models for Use in a Social Robot
by Thomas Sievers and Nele Russwinkel
Appl. Sci. 2025, 15(10), 5778; https://doi.org/10.3390/app15105778 - 21 May 2025
Viewed by 1257
Abstract
Large Language Models (LLMs) and Vision-Language Models (VLMs) have the potential to significantly advance the development and application of cognitive architectures for human–robot interaction (HRI) and to enable social robots with enhanced cognitive capabilities. An essential cognitive ability of humans is the use of memory. We investigate a way to give a social robot human-like memory and recollection grounded in cognitive processes, making its behaviour more comprehensible and situationally appropriate. Using a combined system consisting of an Adaptive Control of Thought-Rational (ACT-R) model and a humanoid social robot, we show how recollections from the declarative memory of the ACT-R model can be retrieved using data obtained by the robot via an LLM or VLM, processed according to the procedural memory of the cognitive model, and returned to the robot as instructions for action. Real-world data captured by the robot can be stored as memory chunks in the cognitive model and recalled, for example by means of associations. This opens up possibilities for using the human-like judgment and decision-making capabilities inherent in cognitive architectures with social robots, and in practice offers opportunities to augment the prompt for LLM-driven utterances with content from declarative memory, keeping them more contextually relevant. We illustrate the use of such an approach in HRI scenarios with the social robot Pepper.
(This article belongs to the Special Issue Advances in Cognitive Robotics and Control)
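For readers unfamiliar with ACT-R's declarative memory, a toy version of chunk storage and activation-based retrieval is sketched below (base-level activation B_i = ln Σ t_j^(−d), with slot matching as a stand-in for association); it only illustrates the idea, not the authors' ACT-R model or its Pepper integration.

```python
import math, time

class DeclarativeMemory:
    """Toy ACT-R-style chunk store with base-level activation."""
    def __init__(self, decay=0.5):
        self.decay = decay
        self.chunks = []  # [content dict, list of access times]

    def add(self, content):
        self.chunks.append([content, [time.time()]])

    def retrieve(self, cue):
        """Return the cue-matching chunk with the highest activation,
        reinforcing it with a new access time (rehearsal)."""
        now = time.time() + 1e-3
        matches = [c for c in self.chunks if cue.items() <= c[0].items()]
        if not matches:
            return None
        best = max(matches, key=lambda c: math.log(
            sum((now - t) ** -self.decay for t in c[1])))
        best[1].append(now)
        return best[0]

# Augmenting an LLM prompt with a recalled chunk (hypothetical usage):
# memory.add({"person": "visitor", "topic": "library hours"})
# chunk = memory.retrieve({"person": "visitor"})
# prompt = f"Known context: {chunk}. Respond to the visitor."
```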

24 pages, 2855 KiB  
Article
A Study on Factors Affecting the Continuance Usage Intention of Social Robots with Episodic Memory: A Stimulus–Organism–Response Perspective
by Yi Yang, Hye-Kyung Cho and Min-Yong Kim
Appl. Sci. 2025, 15(10), 5334; https://doi.org/10.3390/app15105334 - 10 May 2025
Cited by 1 | Viewed by 828
Abstract
As social robots become increasingly integrated into everyday life, understanding the factors that influence users’ long-term continuance intention is essential. This study investigates how various features of MOCCA, a social robot equipped with episodic memory, affect users’ continuance usage intention through perceived trust and parasocial interaction, within the framework of the Stimulus–Organism–Response (SOR) theory. A structural model incorporating key perceived features (intimacy, morality, dependency, and information privacy risk) was tested with survey data from 285 MOCCA users and analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM). The results show that intimacy and morality positively influence both trust and parasocial interaction, while information privacy risk exerts a negative effect. Dependency significantly reduces parasocial interaction but does not significantly impact trust. These findings highlight the importance of balancing human-like qualities, ethical responsibility, perceived autonomy, and privacy protection in social robot design to foster trust, enhance user engagement, and support long-term adoption. This study provides theoretical, managerial, and practical insights into the field of human–robot interaction (HRI) and contributes to the broader acceptance of social robots in everyday life.

22 pages, 3579 KiB  
Article
Gait-to-Gait Emotional Human–Robot Interaction Utilizing Trajectories-Aware and Skeleton-Graph-Aware Spatial–Temporal Transformer
by Chenghao Li, Kah Phooi Seng and Li-Minn Ang
Sensors 2025, 25(3), 734; https://doi.org/10.3390/s25030734 - 25 Jan 2025
Cited by 1 | Viewed by 991
Abstract
Emotional responsiveness in robots is crucial for raising the social intelligence of human–robot interaction (HRI), and the development of machine learning has strongly stimulated research on emotion recognition for robots. Our research focuses on emotional gaits, a simple modality that stores a series of joint coordinates and is easy for humanoid robots to execute. However, little research has investigated emotional HRI systems based on gaits, leaving a gap between human emotion gait recognition and robotic emotional gait response. To address this challenge, we propose a Gait-to-Gait Emotional HRI system, emphasizing the development of an innovative emotion classification model. In our system, the humanoid robot NAO can recognize emotions from human gaits through our Trajectories-Aware and Skeleton-Graph-Aware Spatial–Temporal Transformer (TS-ST) and respond with pre-set emotional gaits that reflect the same emotion as the human presented. On the Emotion-Gait dataset, our TS-ST outperforms the current state-of-the-art human-gait emotion recognition model applied to robots.
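The published TS-ST is trajectories-aware and skeleton-graph-aware; as a much-reduced illustration of the spatial-temporal factorization it builds on, the sketch below attends over joints within each frame and then over frames, with assumed joint counts, layer sizes, and the four Emotion-Gait classes.

```python
import torch
import torch.nn as nn

class SpatialTemporalGaitEncoder(nn.Module):
    """Minimal spatial-temporal transformer over gait tensors (B, T, J, 3)."""
    def __init__(self, d=64, n_emotions=4):  # e.g. happy/sad/angry/neutral
        super().__init__()
        self.embed = nn.Linear(3, d)  # (x, y, z) joint coordinates
        self.spatial = nn.TransformerEncoderLayer(d, nhead=4, batch_first=True)
        self.temporal = nn.TransformerEncoderLayer(d, nhead=4, batch_first=True)
        self.head = nn.Linear(d, n_emotions)

    def forward(self, gait):
        B, T, J, _ = gait.shape
        h = self.embed(gait).reshape(B * T, J, -1)
        h = self.spatial(h)                        # joints attend within a frame
        h = h.reshape(B, T, J, -1).mean(dim=2)     # pool joints per frame
        h = self.temporal(h)                       # frames attend across time
        return self.head(h.mean(dim=1))            # emotion logits
```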

17 pages, 1326 KiB  
Article
Navigating the Human–Robot Interface—Exploring Human Interactions and Perceptions with Social and Telepresence Robots
by Eva Mårell-Olsson, Suna Bensch, Thomas Hellström, Hannah Alm, Amanda Hyllbrant, Mimmi Leonardson and Sanna Westberg
Appl. Sci. 2025, 15(3), 1127; https://doi.org/10.3390/app15031127 - 23 Jan 2025
Cited by 1 | Viewed by 1382
Abstract
This study investigates user experiences of interactions with two types of robots: Pepper, a social humanoid robot, and Double 3, a self-driving telepresence robot. Conducted in a controlled setting with a specific participant group, this research aims to understand how the design and functionality of these robots influence user perception, interaction patterns, and emotional responses. The findings reveal diverse participant reactions, highlighting the importance of adaptability, effective communication, autonomy, and perceived credibility in robot design. Participants showed mixed responses to human-like emotional displays and expressed a desire for robots capable of more nuanced and reliable behaviors. Trust in robots was influenced by their perceived functionality and reliability. Despite limitations in sample size, the study provides insights into the ethical and social considerations of integrating AI in public and professional spaces, offering guidance for enhancing user-centered designs and expanding applications for social and telepresence robots in society.
(This article belongs to the Special Issue Technology Enhanced and Mobile Learning: Innovations and Applications)

21 pages, 2476 KiB  
Article
Enhancing Human–Agent Interaction via Artificial Agents That Speculate About the Future
by Casey C. Bennett, Young-Ho Bae, Jun-Hyung Yoon, Say Young Kim and Benjamin Weiss
Future Internet 2025, 17(2), 52; https://doi.org/10.3390/fi17020052 - 21 Jan 2025
Viewed by 1277
Abstract
Human communication in daily life entails not only talking about what we are currently doing or will do, but also speculating about future possibilities that may (or may not) occur, i.e., “anticipatory speech”. Such conversations are central to social cooperation and social cohesion in humans. This suggests that such capabilities may also be critical for developing improved speech systems for artificial agents, e.g., human–agent interaction (HAI) and human–robot interaction (HRI). However, to do so successfully, it is imperative that we understand how anticipatory speech may affect the behavior of human users and, subsequently, the behavior of the agent/robot. Moreover, it is possible that such effects may vary across cultures and languages. To that end, we conducted an experiment where a human and autonomous 3D virtual avatar interacted in a cooperative gameplay environment. The experiment included 40 participants, comparing different languages (20 English, 20 Korean), where the artificial agent had anticipatory speech either enabled or disabled. The results showed that anticipatory speech significantly altered the speech patterns and turn-taking behavior of both the human and the agent, but those effects varied depending on the language spoken. We discuss how the use of such novel communication forms holds potential for enhancing HAI/HRI, as well as the development of mixed reality and virtual reality interactive systems for human users.
(This article belongs to the Special Issue Human-Centered Artificial Intelligence)

5 pages, 757 KiB  
Proceeding Paper
Innovations in Agri-Food Systems in Europe: Pathways and Challenges to Inclusion and Sustainability
by Iván Tartaruga and Fernanda Sperotto
Proceedings 2025, 113(1), 12; https://doi.org/10.3390/proceedings2025113012 - 20 Jan 2025
Viewed by 839
Abstract
Optimal functioning of agri-food systems is essential for food security and sustainability. In this sector, Europe faces many issues, such as promoting sustainable and healthy food production in the context of social and economic inequalities. To deal with these issues, we propose a conceptual framework relating to the idea of a regional innovation system considering power relations, called the hierarchical regional innovation system (HRIS). This framework is based on the concepts of eco-innovation, inclusive innovation, and transition as its theoretical foundations. The findings show that the framework can be helpful in European rural contexts.

16 pages, 1101 KiB  
Article
Enhancing Human–Robot Interaction: Development of Multimodal Robotic Assistant for User Emotion Recognition
by Sergio Garcia, Francisco Gomez-Donoso and Miguel Cazorla
Appl. Sci. 2024, 14(24), 11914; https://doi.org/10.3390/app142411914 - 19 Dec 2024
Viewed by 3770
Abstract
This paper presents a study on enhancing human–robot interaction (HRI) through multimodal emotional recognition within social robotics. Using the humanoid robot Pepper as a testbed, we integrate visual, auditory, and textual analysis to improve emotion recognition accuracy and contextual understanding. The proposed framework combines pretrained neural networks with fine-tuning techniques tailored to specific users, demonstrating that high accuracy in emotion recognition can be achieved by adapting the models to the individual emotional expressions of each user. This approach addresses the inherent variability in emotional expression across individuals, making it feasible to deploy personalized emotion recognition systems. Our experiments validate the effectiveness of this methodology, achieving high precision in multimodal emotion recognition through fine-tuning, while maintaining adaptability in real-world scenarios. These enhancements significantly improve Pepper’s interactive and empathetic capabilities, allowing it to engage more naturally with users in assistive, educational, and healthcare settings. This study not only advances the field of HRI but also provides a reproducible framework for integrating multimodal emotion recognition into commercial humanoid robots, bridging the gap between research prototypes and practical applications.
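The abstract's key claim is that fine-tuning pretrained networks to each user's expressions makes personalized recognition feasible. A generic sketch of such per-user adaptation (frozen backbone, small trainable head) follows; it illustrates the general recipe under stated assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

def personalize(backbone, head, user_loader, epochs=3, lr=1e-4):
    """Fine-tune only a small classification head on one user's labelled
    clips, keeping the pretrained feature backbone frozen."""
    for p in backbone.parameters():
        p.requires_grad = False
    backbone.eval()
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for features, emotion in user_loader:
            with torch.no_grad():
                z = backbone(features)      # frozen features
            loss = loss_fn(head(z), emotion)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head
```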

16 pages, 3403 KiB  
Article
Beyond Binary Dialogues: Research and Development of a Linguistically Nuanced Conversation Design for Social Robots in Group–Robot Interactions
by Christoph Bensch, Ana Müller, Oliver Chojnowski and Anja Richert
Appl. Sci. 2024, 14(22), 10316; https://doi.org/10.3390/app142210316 - 9 Nov 2024
Cited by 2 | Viewed by 1443
Abstract
In this paper, we detail the technical development of an adaptable conversation design that is sensitive to group dynamics, taking into account the subtleties of linguistic variation between dyadic (i.e., one human and one agent) and group interactions in human–robot interaction (HRI), using the German language as a case study. The paper details the implementation of robust person and group detection with YOLOv5m and the expansion of knowledge databases using large language models (LLMs) to create adaptive multi-party interactions (MPIs) (i.e., group–robot interactions (GRIs)). We describe the use of LLMs to generate training data for socially interactive agents, including social robots, as well as a self-developed synthesis tool, knowledge expander, to accurately map the diverse needs of different users in public spaces. We also outline the integration of an LLM as a fallback for open-ended questions not covered by our knowledge database, ensuring the system can effectively respond to both individuals and groups within the MPI framework.
(This article belongs to the Special Issue Advances in Cognitive Robotics and Control)
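A toy version of the routing step this entry describes, switching between dyadic and group conversation styles from person detections, with an LLM fallback for questions outside the knowledge base, might look as follows; the detection format, knowledge-base shape, and du/ihr prompt are illustrative assumptions.

```python
def interaction_mode(detections, min_conf=0.5):
    """Count confident YOLO 'person' detections to pick the style."""
    people = [d for d in detections
              if d["label"] == "person" and d["conf"] >= min_conf]
    return "group" if len(people) > 1 else "dyadic"

def respond(query, knowledge_base, llm, mode):
    """Answer from the knowledge base when possible; otherwise fall back
    to the LLM, prompted for singular (du) or plural (ihr) address."""
    if query in knowledge_base:
        return knowledge_base[query][mode]
    style = ("Address several visitors at once (plural 'ihr')"
             if mode == "group" else "Address one visitor (singular 'du')")
    return llm(f"{style}. Answer briefly in German: {query}")
```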

22 pages, 3190 KiB  
Article
Sustainable Impact of Stance Attribution Design Cues for Robots on Human–Robot Relationships—Evidence from the ERSP
by Dong Lv, Rui Sun, Qiuhua Zhu, Jiajia Zuo and Shukun Qin
Sustainability 2024, 16(17), 7252; https://doi.org/10.3390/su16177252 - 23 Aug 2024
Cited by 1 | Viewed by 1291
Abstract
With the development of large language model technologies, the capability of social robots to interact emotionally with users has been steadily increasing. However, existing research insufficiently examines the influence of robot stance attribution design cues on the construction of users’ mental models and their effects on human–robot interaction (HRI). This study innovatively combines mental models with the associative–propositional evaluation (APE) model, unveiling the impact of stance attribution explanations on the construction of user mental models and the interaction between the two types of mental models through EEG experiments and survey investigations. The results showed that under the influence of intentional stance explanations (compared to design stance explanations), participants displayed higher error rates, higher θ- and β-band event-related spectral perturbations (ERSPs), and higher phase-locking values (PLVs). Intentional stance explanations trigger a primarily associative mental model of the robot, which conflicts with individuals’ propositionally based mental models: users may adjust or “correct” their immediate reactions to stance attribution explanations after logical analysis. This study reveals that stance attribution interpretation can significantly affect users’ mental model construction of robots, providing a new theoretical framework for exploring human interaction with non-human agents, theoretical support for the sustainable development of human–robot relations, and new ideas for designing robots that are more humane and can better interact with human users.
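For reference, the phase-locking value reported in the abstract is conventionally computed as PLV = |mean(exp(i(φ_x − φ_y)))| over instantaneous phases; a textbook implementation follows (not the authors' pipeline).

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """PLV between two equal-length, band-passed EEG signals: 1 means
    perfectly locked phases, 0 means no consistent phase relation."""
    phi_x = np.angle(hilbert(x))   # instantaneous phase via analytic signal
    phi_y = np.angle(hilbert(y))
    return float(np.abs(np.mean(np.exp(1j * (phi_x - phi_y)))))
```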

12 pages, 1823 KiB  
Article
When Trustworthiness Meets Face: Facial Design for Social Robots
by Yao Song and Yan Luximon
Sensors 2024, 24(13), 4215; https://doi.org/10.3390/s24134215 - 28 Jun 2024
Cited by 2 | Viewed by 2541
Abstract
As a technical application of artificial intelligence, social robotics is a branch of robotics that emphasizes social communication and interaction with human beings. Although both robotics and behavioral research have recognized the significance of social robot design for market success and the emotional benefit to users, the specific design of a social robot’s eye and mouth shapes in eliciting trustworthiness has received only limited attention. To address this research gap, our study conducted a 2 (eye shape) × 3 (mouth shape) full factorial between-subjects experiment. A total of 211 participants were recruited and randomly assigned to the six scenarios in the study. After exposure to the stimuli, perceived trustworthiness and robot attitude were measured accordingly. The results showed that round eyes (vs. narrow eyes) and an upturned or neutral mouth (vs. a downturned mouth) significantly improved people’s perceived trustworthiness of, and attitude towards, social robots. The effects of eye and mouth shape on robot attitude were all mediated by perceived trustworthiness. Trustworthy human facial features can thus be applied to a robot’s face, eliciting a similar trustworthiness perception and attitude. In addition to its empirical contributions to HRI, this finding sheds light on design practice for trustworthy-looking social robots.
(This article belongs to the Special Issue Challenges in Human-Robot Interactions for Social Robotics)

16 pages, 2873 KiB  
Article
Robots as Mental Health Coaches: A Study of Emotional Responses to Technology-Assisted Stress Management Tasks Using Physiological Signals
by Katarzyna Klęczek, Andra Rice and Maryam Alimardani
Sensors 2024, 24(13), 4032; https://doi.org/10.3390/s24134032 - 21 Jun 2024
Cited by 4 | Viewed by 4423
Abstract
The current study investigated the effectiveness of social robots in facilitating stress management interventions for university students by evaluating their physiological responses. We collected electroencephalogram (EEG) brain activity and Galvanic Skin Responses (GSRs), together with self-reported questionnaires, from two groups of students who practiced a deep breathing exercise either with a social robot or a laptop. From the GSR signals, we obtained the change in participants’ arousal level throughout the intervention, and from the EEG signals, we extracted the change in their emotional valence using the neurometric of Frontal Alpha Asymmetry (FAA). While subjective perceptions of stress and user experience did not differ significantly between the two groups, the physiological signals revealed differences in the students’ emotional responses as evaluated by the arousal–valence model. The Laptop group tended to show a decrease in arousal level which, in some cases, was accompanied by negative valence indicative of boredom or lack of interest. The Robot group, on the other hand, displayed two patterns: some participants demonstrated a decrease in arousal with positive valence indicative of calmness and relaxation, while others showed an increase in arousal together with positive valence interpreted as excitement. These findings provide interesting insights into the impact of social robots as mental well-being coaches on students’ emotions, particularly in the presence of the novelty effect. Additionally, they provide evidence for the efficacy of physiological signals as an objective and reliable measure of user experience in HRI settings.
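The FAA neurometric mentioned here is commonly defined as ln(α-power at F4) − ln(α-power at F3), with positive values read as relatively greater left-frontal activity and hence positive valence; a generic computation under that common definition (not necessarily the study's exact pipeline) is:

```python
import numpy as np
from scipy.signal import welch

def frontal_alpha_asymmetry(f3, f4, fs=256):
    """ln(alpha power at F4) - ln(alpha power at F3), alpha = 8-13 Hz."""
    def alpha_power(sig):
        freqs, psd = welch(sig, fs=fs)
        band = (freqs >= 8) & (freqs <= 13)
        return np.trapz(psd[band], freqs[band])  # integrate PSD over alpha band
    return float(np.log(alpha_power(f4)) - np.log(alpha_power(f3)))
```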
