Search Results (308)

Search Parameters:
Keywords = emotion and robots

18 pages, 1678 KB  
Article
Body Knowledge and Emotion Recognition in Preschool Children: A Comparative Study of Human Versus Robot Tutors
by Alice Araguas, Arnaud Blanchard, Sébastien Derégnaucourt, Adrien Chopin and Bahia Guellai
Behav. Sci. 2026, 16(1), 29; https://doi.org/10.3390/bs16010029 - 23 Dec 2025
Abstract
Social robots are increasingly integrated into early childhood education, yet little research has examined preschoolers’ learning from robotic versus human demonstrators across embodied tasks. This study investigated whether children aged between 3 and 6 years demonstrate comparable performance when learning body-centered tasks from a humanoid robot compared to a human demonstrator. Sixty-two typically developing children were randomly assigned to a robot or a human condition. Participants completed three tasks: body part comprehension and production, body movement imitation, and emotion recognition from body postures. Performance was measured using standardized protocols. No significant main effects of demonstrator type emerged across most tasks. However, age significantly predicted performance across all measures, with systematic improvements between 3 and 6 years of age. A significant age × demonstrator interaction was observed for sequential motor imitation, with stronger age effects in the human demonstrator condition. Preschool children demonstrate comparable performance when interacting with a humanoid robot versus a human in body-centered tasks, though motor imitation shows differential developmental trajectories. These findings suggest that appropriately designed social robots may serve as supplementary pedagogical tools for embodied learning in early childhood education under specific conditions. The primacy of developmental effects highlights the importance of age-appropriate design in both traditional and technology-enhanced educational contexts.

22 pages, 11862 KB  
Article
Do We View Robots as We Do Ourselves? Examining Robotic Face Processing Using EEG
by Xaviera Pérez-Arenas, Álvaro A. Rivera-Rei, David Huepe and Vicente Soto
Brain Sci. 2026, 16(1), 9; https://doi.org/10.3390/brainsci16010009 - 22 Dec 2025
Abstract
Background/Objectives: The ability to perceive and process emotional faces quickly and efficiently is essential for human social interactions. In recent years, humans have started to interact more regularly with robotic faces in the form of virtual or real-world robots. Neurophysiological research on how the brain decodes robotic faces relative to human ones is scarce and, as such, warrants further work to explore these mechanisms and their social implications. Methods: This study uses event-related potentials (ERPs) to examine the neural correlates of an emotional face categorization task involving human and robotic stimuli. We examined differences in brain activity elicited by viewing robotic and human faces expressing happy and neutral emotions. ERP waveform amplitudes for the P100, N170, P300, and P600 components were calculated and compared. Furthermore, mass univariate analysis of the ERP waveforms was carried out to explore effects not limited to brain regions previously reported in the literature. Results: Robotic faces evoked increased waveform amplitudes at early components (P100 and N170) as well as at the later P300 component. Furthermore, only mid-latency and late cortical components (P300 and P600) showed amplitude differences driven by emotional valence, aligning with dual-stage models of face processing. Conclusions: These results advance our understanding of face processing during human–robot interaction and of the brain mechanisms engaged when viewing social robots, setting new considerations for their use in brain health settings and their broader cognitive impact.
(This article belongs to the Section Cognitive, Social and Affective Neuroscience)
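As a concrete illustration of the component analysis this abstract describes, the sketch below computes a mean ERP amplitude inside a component time window and compares robot against human conditions. It is a minimal sketch using NumPy only; the window bounds, sampling grid, and random placeholder epochs are assumptions for illustration, not the authors’ pipeline.

```python
import numpy as np

# Hypothetical component windows in seconds; the paper's exact bounds
# and electrode sites are not given in this listing.
COMPONENT_WINDOWS = {"P100": (0.08, 0.13), "N170": (0.13, 0.20),
                     "P300": (0.25, 0.50), "P600": (0.50, 0.80)}

def mean_component_amplitude(epochs, times, window):
    """Mean voltage of the trial-averaged ERP within a time window.

    epochs: (n_trials, n_samples) array for one electrode site.
    times:  (n_samples,) array of sample times in seconds.
    """
    lo, hi = window
    mask = (times >= lo) & (times < hi)
    erp = epochs.mean(axis=0)      # average across trials -> ERP waveform
    return erp[mask].mean()        # mean amplitude inside the window

# Placeholder data standing in for robot-face and human-face epochs.
rng = np.random.default_rng(0)
times = np.linspace(-0.2, 1.0, 601)
robot = rng.normal(size=(40, 601))
human = rng.normal(size=(40, 601))
for name, win in COMPONENT_WINDOWS.items():
    diff = (mean_component_amplitude(robot, times, win)
            - mean_component_amplitude(human, times, win))
    print(f"{name}: robot - human = {diff:+.3f} (a.u.)")
```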

24 pages, 450 KB  
Article
Late Fusion Model for Emotion Recognition from Facial Expressions and Biosignals in a Dataset of Children with Autism Spectrum Disorder
by Dominika Kiejdo, Monika Depka Prądzinska and Teresa Zawadzka
Sensors 2025, 25(24), 7485; https://doi.org/10.3390/s25247485 - 9 Dec 2025
Abstract
Children with autism spectrum disorder (ASD) often display atypical emotional expressions and physiological responses, making emotion recognition challenging. This study proposes a multimodal recognition model employing a late fusion framework that combines facial expressions with physiological measures: electrodermal activity (EDA), temperature (TEMP), and heart rate (HR). Emotional states are annotated using two complementary schemes derived from a shared set of labels. Three annotators provide one categorical Ekman emotion for each timestamp. From these annotations, a majority-vote label identifies the dominant emotion, while a proportional distribution reflects the likelihood of each emotion based on the relative frequency of the annotators’ selections. Separate machine learning models are trained for each modality and for each annotation scheme, and their outputs are integrated through decision-level fusion. A distinct decision-level fusion model is constructed for each annotation scheme, ensuring that both the categorical and likelihood-based representations are optimally combined. Experiments on the EMBOA dataset, collected within the project “Affective loop in Socially Assistive Robotics as an intervention tool for children with autism”, show that the late fusion model achieves higher accuracy and robustness than unimodal baselines. The system attains an accuracy of 68% for categorical emotion classification and 78% under the likelihood-estimation scheme. Although these results are lower than those reported in other studies, they suggest that further research into emotion recognition in autistic children using other fusion approaches is warranted, even for datasets with a significant number of missing values and low sample representation for certain emotions.
(This article belongs to the Section Biomedical Sensors)
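To make the decision-level (late) fusion step concrete, the sketch below soft-votes per-modality class probabilities, skipping modalities with missing data, a situation the abstract flags for this dataset. The probability vectors, modality weights, and label subset are placeholders, not the trained models from the paper.

```python
import numpy as np

# Placeholder per-modality class probabilities over a shared label set;
# in the paper, separate models are trained per modality and scheme.
probs = {
    "face": np.array([0.70, 0.20, 0.10]),
    "eda":  np.array([0.40, 0.40, 0.20]),
    "hr":   None,  # modality missing for this window
}
weights = {"face": 0.5, "eda": 0.25, "hr": 0.25}  # hypothetical weights

# Late fusion: weighted average of the available probability vectors.
avail = [m for m, p in probs.items() if p is not None]
w = np.array([weights[m] for m in avail])
fused = (w[:, None] * np.stack([probs[m] for m in avail])).sum(0) / w.sum()

labels = ["happiness", "sadness", "anger"]  # illustrative subset
print(labels[int(fused.argmax())], fused.round(3))
```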

16 pages, 231 KB  
Concept Paper
The Use of Artificial Intelligence (AI) in Early Childhood Education
by Silvia Cimino, Angelo Giovanni Icro Maremmani and Luca Cerniglia
Societies 2025, 15(12), 341; https://doi.org/10.3390/soc15120341 - 4 Dec 2025
Abstract
The integration of Artificial Intelligence (AI) into early childhood education presents new opportunities and challenges in fostering cognitive, social, and emotional development. This theoretical discussion synthesizes recent research on AI’s role in personalized learning, educational robotics, gamified learning, and social-emotional development. The study explores theoretical frameworks such as Vygotsky’s Sociocultural Theory, Distributed Cognition, and the Five Big Ideas Framework to understand AI’s impact on young learners. AI-powered personalized learning platforms enhance engagement and adaptability, while robotics and gamification foster problem-solving and collaboration. Additionally, AI tools support children with disabilities, promoting inclusivity and accessibility. However, ethical concerns related to privacy, bias, and teacher preparedness pose challenges to effective AI integration. Furthermore, the long-term effects of AI on children’s social skills and emotional intelligence require further investigation. This theoretical discussion emphasizes the need for interdisciplinary collaboration to develop AI-driven educational strategies that prioritize developmental appropriateness, equity, and ethical considerations. The findings highlight AI’s potential as a transformative educational tool, provided it is implemented thoughtfully and responsibly. The paper aims to address the following research question: How can artificial intelligence (AI) be meaningfully and ethically integrated into early childhood education to enhance learning, while preserving developmental and relational values?
(This article belongs to the Special Issue Digital Learning, Ethics and Pedagogies)
17 pages, 3220 KB  
Article
ArecaNet: Robust Facial Emotion Recognition via Assembled Residual Enhanced Cross-Attention Networks for Emotion-Aware Human–Computer Interaction
by Jaemyung Kim and Gyuho Choi
Sensors 2025, 25(23), 7375; https://doi.org/10.3390/s25237375 - 4 Dec 2025
Abstract
Recently, the convergence of advanced sensor technologies and innovations in artificial intelligence and robotics has highlighted facial emotion recognition (FER) as an essential component of human–computer interaction (HCI). Traditional FER studies based on handcrafted features and shallow machine learning have shown limited performance, while convolutional neural networks (CNNs) have improved nonlinear emotion pattern analysis but are constrained by local feature extraction. Vision transformers (ViTs) have addressed this by leveraging global correlations, yet both CNN- and ViT-based single networks often suffer from overfitting, single-network dependency, and information loss in ensemble operations. To overcome these limitations, we propose ArecaNet, an assembled residual enhanced cross-attention network that integrates multiple feature streams without information loss. The framework comprises (i) channel and spatial feature extraction via SCSESResNet, (ii) landmark feature extraction from specialized sub-networks, (iii) iterative fusion through residual enhanced cross-attention, and (iv) final emotion classification from the fused representation. Our research introduces a novel approach by integrating pre-trained sub-networks specialized in facial recognition with an attention mechanism and a uniquely designed main network optimized for size reduction and efficient feature extraction. The extracted features are fused through an iterative residual enhanced cross-attention mechanism, which minimizes information loss and preserves complementary representations across networks. This strategy overcomes the limitations of conventional ensemble methods, enabling seamless feature integration and robust recognition. The experimental results show that the proposed ArecaNet achieved accuracies of 97.0% and 97.8% on the public FER-2013 and RAF-DB databases, respectively, which are 4.5% and 2.75% higher than the existing state-of-the-art method, PAtt-Lite, establishing a new state-of-the-art accuracy for each database.
(This article belongs to the Special Issue Sensor-Based Behavioral Biometrics)
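The core operation named in this abstract, cross-attention with a residual connection between two feature streams, can be sketched with PyTorch’s built-in multi-head attention. This is an illustrative block under assumed dimensions, not the paper’s exact ArecaNet architecture (SCSESResNet and the landmark sub-networks are not reproduced here).

```python
import torch
import torch.nn as nn

class ResidualCrossAttentionFusion(nn.Module):
    """One stream queries another; the residual keeps the query stream
    intact so fusion adds information rather than replacing it."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_stream, context_stream):
        # Cross-attention: queries from one stream, keys/values from the other.
        attended, _ = self.attn(query_stream, context_stream, context_stream)
        return self.norm(query_stream + attended)  # residual + norm

# Example with placeholder shapes: fuse backbone tokens with landmark tokens.
backbone = torch.randn(8, 49, 128)   # (batch, tokens, dim)
landmark = torch.randn(8, 68, 128)   # e.g., 68 facial-landmark tokens
fused = ResidualCrossAttentionFusion(128)(backbone, landmark)
print(fused.shape)                   # torch.Size([8, 49, 128])
```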

20 pages, 31486 KB  
Article
Design and Implementation of a Companion Robot with LLM-Based Hierarchical Emotion Motion Generation
by Yoongu Lim, Jaeuk Cho, Duk-Yeon Lee, Dongwoon Choi and Dong-Wook Lee
Appl. Sci. 2025, 15(23), 12759; https://doi.org/10.3390/app152312759 - 2 Dec 2025
Abstract
Recently, human–robot interaction (HRI) with social robots has attracted significant attention. Among them, companion robots, which exhibit pet-like behaviors and interact with people primarily through non-verbal means, particularly require the generation of appropriate gestures. This paper presents the design and implementation of a companion cat robot, named PEPE, with a large language model (LLM)-based hierarchical emotional motion generation method. To design the cat-like companion robot, an analysis of feline emotional behaviors was conducted to identify the body parts and motions essential for effective emotional expression. Based on this analysis, the required degrees of freedom (DoFs) and structural configuration for PEPE were derived. To generate expressive gestures efficiently and reliably, a hierarchical LLM-based emotional motion generation method was proposed. The process defines the robot’s structural features, establishes a gesture generation code format, and incorporates emotion-based guidelines grounded in feline behavioral analysis to mitigate LLM hallucination and ensure physical feasibility. The proposed method was implemented on the physical robot, and eight emotional gestures were generated—Happy, Angry, Sad, Fearful, Joyful, Excited, Positive Feedback, and Negative Feedback. A user study with 15 participants was conducted to validate the system. The high-arousal gestures—Angry, Joyful, and Excited—were rated significantly above the neutral clarity threshold (p < 0.05), demonstrating clear user recognition. Meanwhile, low-arousal gestures exhibited neutral-level perceptions consistent with their subtle motion profiles. These results confirm that the proposed LLM-based framework effectively generates expressive, physically executable gestures for a companion robot.
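One step this abstract highlights, constraining LLM output so generated gestures stay physically feasible, can be illustrated with a small validation pass over keyframes. The joint names, limits, and keyframe format below are hypothetical; PEPE’s actual DoF configuration and gesture code format are not given in this listing.

```python
# Hypothetical joint limits in degrees for a cat-like companion robot.
JOINT_LIMITS = {"neck_pitch": (-30, 45), "tail_yaw": (-60, 60),
                "ear_left": (0, 90), "ear_right": (0, 90)}

def validate_gesture(keyframes):
    """Clamp LLM-generated keyframes to joint limits.

    keyframes: list of {joint_name: angle_deg} dicts, one per timestep.
    Unknown joints (a typical LLM hallucination) are dropped rather
    than executed.
    """
    safe = []
    for frame in keyframes:
        checked = {}
        for joint, angle in frame.items():
            if joint not in JOINT_LIMITS:
                continue  # hallucinated joint: drop it
            lo, hi = JOINT_LIMITS[joint]
            checked[joint] = max(lo, min(hi, angle))
        safe.append(checked)
    return safe

# Example: one out-of-range angle and one hallucinated joint,
# both handled before the gesture reaches the motors.
raw = [{"tail_yaw": 80, "ear_left": 45, "wing_flap": 30}]
print(validate_gesture(raw))  # [{'tail_yaw': 60, 'ear_left': 45}]
```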

18 pages, 2987 KB  
Article
Preliminary Effects of a Robot-Based Therapy Program with Atlas-2030 in Children with Cerebral Palsy Receiving Care at a Specialized Rehabilitation Center
by Igor Salinas-Sánchez, María R. Huerta-Teutli, David Cordero-Cuevas, Guadalupe Maldonado-Guerrero and Raide A. González-Carbonell
Appl. Sci. 2025, 15(22), 12047; https://doi.org/10.3390/app152212047 - 12 Nov 2025
Abstract
Robot-based rehabilitation is emerging as a promising approach to enhance mobility and improve rehabilitation outcomes in children with cerebral palsy. The study aimed to evaluate the preliminary effects of a robot-based therapy program with Atlas-2030 on spatiotemporal gait parameters, pelvis kinematics, gross-motor function, quality of life, and joint range-of-motion in children with cerebral palsy receiving care at a specialized rehabilitation center. This is a single-arm, institution-based, quantitative, longitudinal, pilot study with repeated measures. Sixteen sessions of a robot-based therapy program with the Atlas-2030 wearable exoskeleton were applied to all the children with cerebral palsy from APAC-IAP in Mexico City. Pre-intervention and after eight and sixteen sessions, the GMFM-66, the CP QoL-Child, and gait analysis were performed. The results suggest that the Atlas-2030 robot-based therapy program combined with therapeutic stimulation produced better scores on the modified Ashworth scale (hip flexors and extensors: 2.0 (1.0); knee flexors and extensors: 2.0 (2.9); p > 0.0167), enhanced range of motion in hip flexion (122.5 (5) deg), hip extension (11 (5) deg), and knee extension (0 (5) deg) (p < 0.0167), pelvis rotation approaching zero on both sides (left: −1.99 (14.04); right: 2.22 (13.43); p > 0.0167), reducing the difference in laterality and inducing physiological muscle activation patterns, and higher quality-of-life scores for well-being and acceptance (17 (1.0)) and emotional well-being and self-esteem (14.5 (1.0)) (p > 0.0167). The limitations of this study include recruitment from a single specialty care center, the absence of a control group, and the adjusted significance level of p < 0.0167, which may lead to false negatives.
(This article belongs to the Special Issue Rehabilitation and Assistive Robotics: Latest Advances and Prospects)

31 pages, 3310 KB  
Article
Companion Robots Supporting the Emotional Needs of the Elderly: Research Trends and Future Directions
by Hui Zeng, Yuxin Sheng and Jinwei Zhu
Information 2025, 16(11), 948; https://doi.org/10.3390/info16110948 - 3 Nov 2025
Abstract
The accelerating global population aging has brought increasing attention to the loneliness and emotional needs experienced by older adults due to shrinking social networks and the loss of relatives and friends, which significantly impair their quality of life and psychological well-being. In this context, companion robots powered by artificial intelligence are increasingly regarded as a scalable and sustainable form of emotional intervention that can address older people’s affective and social requirements. This study systematically reviews research trends in this field, analyzing the structure of emotional needs among older users and their acceptance mechanisms toward robot functionalities. First, a keyword co-occurrence analysis was conducted using VOSviewer on relevant literature published between 2000 and 2025 from the Web of Science database, revealing focal research topics and emerging trends. Subsequently, questionnaire surveys and in-depth interviews were carried out to identify emotional needs and functional preferences among elderly users. Findings indicate that the field is characterized by increasing interdisciplinary integration, with affective computing and naturalistic interaction becoming central concerns. Empirical results reveal significant differences in need structures across age groups: the oldest-old prioritize safety monitoring and daily assistance, whereas the young-old emphasize social interaction and developmental activities. Regarding emotional interaction, older adults generally prefer natural and non-intrusive expressive styles and exhibit reserved attitudes toward highly anthropomorphic designs. Key factors influencing acceptance include practicality, ease of use, privacy protection, and emotional warmth. The study concludes that effective companion robot design should be grounded in a nuanced understanding of the heterogeneous needs of the aging population, integrating functionality, interaction, and emotional value. Future development should emphasize adaptive and customizable capabilities, adopt natural yet restrained interaction strategies, and strengthen real-world cross-cultural and long-term evaluations.
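The review’s mapping step, a keyword co-occurrence analysis over Web of Science records, reduces to counting how often keyword pairs appear in the same paper. The sketch below shows that counting step with placeholder keyword lists; the authors used VOSviewer on the 2000–2025 literature rather than code like this.

```python
from collections import Counter
from itertools import combinations

# Placeholder keyword lists, one per paper; a real analysis would parse
# exported Web of Science records.
papers = [
    ["companion robot", "older adults", "loneliness"],
    ["companion robot", "affective computing", "older adults"],
    ["affective computing", "human-robot interaction", "older adults"],
]

# Count each unordered keyword pair once per paper.
cooc = Counter()
for kw in papers:
    cooc.update(combinations(sorted(set(kw)), 2))

for pair, n in cooc.most_common(3):
    print(n, pair)
```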

24 pages, 1312 KB  
Article
Differences in Human Response When Interacting in Real and Virtual (VR) Human–Robot Scenarios
by Jonas Birkle and Verena Wagner-Hartl
Automation 2025, 6(4), 58; https://doi.org/10.3390/automation6040058 - 15 Oct 2025
Abstract
The utilization of robots has become an integral aspect of industrial operations. In this context, research on human–robot interaction aims to combine the complementary capabilities of humans and robots to attain maximum efficiency. Moreover, in the private sector, interaction with robots is already common in many places. Acceptance, trust, and perceived emotions vary widely depending on the specific context. This highlights the necessity for adequate training to mitigate fears and enhance trust and acceptance. Currently, no such training is available. Virtual reality has frequently proven to be a helpful platform for implementing training. This study evaluates the suitability of virtual reality for training in this specific application area. For this purpose, simple object handovers were performed in three different scenarios (reality, virtual reality, and hybrid reality). Participants’ subjective evaluations were complemented by psychophysiological (ECG and EDA) and performance measures. In most cases, the results show no significant differences between the scenarios, indicating that personal perception during interaction transfers to virtual reality. This demonstrates the general suitability of virtual reality in this context.
(This article belongs to the Section Robotics and Autonomous Systems)

58 pages, 744 KB  
Article
Review and Comparative Analysis of Databases for Speech Emotion Recognition
by Salvatore Serrano, Omar Serghini, Giulia Esposito, Silvia Carbone, Carmela Mento, Alessandro Floris, Simone Porcu and Luigi Atzori
Data 2025, 10(10), 164; https://doi.org/10.3390/data10100164 - 14 Oct 2025
Abstract
Speech emotion recognition (SER) has become increasingly important in areas such as healthcare, customer service, robotics, and human–computer interaction. The progress of this field depends not only on advances in algorithms but also on the databases that provide the training material for SER systems. These resources set the boundaries for how well models can generalize across speakers, contexts, and cultures. In this paper, we present a narrative review and comparative analysis of emotional speech corpora released up to mid-2025, bringing together both psychological and technical perspectives. Rather than following a systematic review protocol, our approach focuses on providing a critical synthesis of more than fifty corpora covering acted, elicited, and natural speech. We examine how these databases were collected, how emotions were annotated, their demographic diversity, and their ecological validity, while also acknowledging the limits of available documentation. Beyond description, we identify recurring strengths and weaknesses, highlight emerging gaps, and discuss recent usage patterns to offer researchers both a practical guide for dataset selection and a critical perspective on how corpus design continues to shape the development of robust and generalizable SER systems.

19 pages, 825 KB  
Article
Preliminary User-Centred Evaluation of a Bio-Cooperative Robotic Platform for Cognitive Rehabilitation in Parkinson’s Disease and Mild Cognitive Impairment: Insights from a Focus Group and Living Lab in the OPERA Project
by Ylenia Crocetto, Simona Abagnale, Giulia Martinelli, Sara Della Bella, Eleonora Pavan, Cristiana Rondoni, Alfonso Voscarelli, Marco Pirini, Francesco Scotto di Luzio, Loredana Zollo, Giulio Cicarelli, Cristina Polito and Anna Estraneo
J. Clin. Med. 2025, 14(19), 7042; https://doi.org/10.3390/jcm14197042 - 5 Oct 2025
Abstract
Background: Mild cognitive impairment (MCI) affects up to 40% of patients with Parkinson’s disease (PD), yet conventional rehabilitation often lacks engagement. The OPERA project developed a novel Bio-cooperative Robotic Platform (PRoBio), integrating a service robot and virtual reality-based rehabilitation for personalized cognitive training. This work presents two preliminary user-centred studies aimed at assessing PRoBio usability and acceptability. Methods: To gather qualitative feedback on robotic and virtual reality technologies through ad hoc questionnaires, developed according to participatory design principles and the user-centred evaluation literature, Study 1 (focus group) involved 23 participants: 10 PD patients (F = 6; mean age = 68.9 ± 8.2 years), 5 caregivers (F = 3; mean age = 49.0 ± 15.5), and 8 healthcare professionals (F = 6; mean age = 40.0 ± 12.0). Study 2 (Living Lab) tested the final version of the PRoBio platform with 6 healthy volunteers (F = 3; mean age = 50.3 ± 11.0) and 8 rehabilitation professionals (F = 3; mean age = 32.8 ± 9.9), assessing usability and acceptability through validated questionnaires. Results: The focus group revealed common priorities across the three groups, including ease of use, emotional engagement, and personalization of exercises. The Living Lab showed PRoBio to be user-friendly, with high usability, hedonic quality, and technology acceptance and a low workload. No significant differences were found between groups, except for minor concerns about system responsiveness. Discussion: These preliminary findings support the feasibility, usability, and emotional appeal of PRoBio as a cognitive rehabilitation tool. The positive convergence among the groups suggests its potential for clinical integration. Conclusions: These preliminary results support the feasibility and user-centred design of the PRoBio platform for cognitive rehabilitation in PD. The upcoming usability evaluation in a pilot study with patients will provide critical insights into its suitability for clinical implementation and guide further development.
(This article belongs to the Section Clinical Neurology)

27 pages, 1664 KB  
Review
Actomyosin-Based Nanodevices for Sensing and Actuation: Bridging Biology and Bioengineering
by Nicolas M. Brunet, Peng Xiong and Prescott Bryant Chase
Biosensors 2025, 15(10), 672; https://doi.org/10.3390/bios15100672 - 4 Oct 2025
Abstract
The actomyosin complex—nature’s dynamic engine composed of actin filaments and myosin motors—is emerging as a versatile tool for bio-integrated nanotechnology. This review explores the growing potential of actomyosin-powered systems in biosensing and actuation applications, highlighting their compatibility with physiological conditions, responsiveness to biochemical and physical cues, and modular adaptability. We begin with a comparative overview of natural and synthetic nanomachines, positioning actomyosin as a uniquely scalable and biocompatible platform. We then discuss experimental advances in controlling actomyosin activity through ATP, calcium, heat, light, and electric fields, as well as their integration into in vitro motility assays, soft robotics, and neural interface systems. Emphasis is placed on longstanding efforts to harness actomyosin as a biosensing element capable of converting chemical or environmental signals into measurable mechanical or electrical outputs, which can provide valuable clinical and basic-science information such as the functional consequences of disease-associated genetic variants in cardiovascular genes. We also highlight engineering challenges such as stability, spatial control, and upscaling, and examine speculative future directions, including emotion-responsive nanodevices. By bridging cell biology and bioengineering, actomyosin-based systems offer promising avenues for real-time sensing, diagnostics, and therapeutic feedback in next-generation biosensors.
(This article belongs to the Special Issue Biosensors for Personalized Treatment)

18 pages, 386 KB  
Article
Do Perceived Values Influence User Identification and Attitudinal Loyalty in Social Robots? The Mediating Role of Active Involvement
by Hua Pang, Zhen Wang and Lei Wang
Behav. Sci. 2025, 15(10), 1329; https://doi.org/10.3390/bs15101329 - 28 Sep 2025
Abstract
With the rapid advancement of artificial intelligence, the deployment of social robots has significantly broadened, extending into diverse fields such as education, medical services, and business. Despite this expansive growth, there remains a notable scarcity of empirical research addressing the underlying psychological mechanisms that influence human–robot interactions. To address this critical research gap, the present study proposes and empirically tests a theoretical model designed to elucidate how users’ multi-dimensional perceived values of social robots influence their attitudinal responses and outcomes. Based on questionnaire data from 569 social robot users, the study reveals that users’ perceived utilitarian value, emotional value, and hedonic value all exert significant positive effects on active involvement, thereby fostering their identification and reinforcing attitudinal loyalty. Among these dimensions, emotional value emerged as the strongest predictor, underscoring the pivotal role of emotional orientation in cultivating lasting human–robot relationships. Furthermore, the findings highlight the critical mediating function of active involvement in linking perceived value to users’ psychological sense of belonging, thereby elucidating the mechanism through which perceived value enhances engagement and promotes sustained long-term interaction. These findings extend the conceptual boundaries of human–machine interaction, offer a theoretical foundation for future explorations of user psychological mechanisms, and inform strategic design approaches centered on emotional interaction and user-oriented experiences, providing practical guidance for optimizing social robot design in applications.

18 pages, 892 KB  
Article
Developing a Psychological Research Methodology for Evaluating AI-Powered Plush Robots in Education and Rehabilitation
by Anete Hofmane, Inese Tīģere, Airisa Šteinberga, Dina Bethere, Santa Meļķe, Undīne Gavriļenko, Aleksandrs Okss, Aleksejs Kataševs and Aleksandrs Vališevskis
Behav. Sci. 2025, 15(10), 1310; https://doi.org/10.3390/bs15101310 - 25 Sep 2025
Abstract
The integration of AI-powered plush robots in educational and therapeutic settings for children with Autism Spectrum Disorders (ASD) necessitates a robust interdisciplinary methodology to evaluate usability, psychological impact, and therapeutic efficacy. This study proposes and applies a four-phase research framework designed to guide the development and assessment of AI-powered plush robots for social rehabilitation and education. Phase 1 involved semi-structured interviews with 13 ASD specialists to explore robot applications. Phase 2 tested initial usability with typically developing children (N = 10–15) through structured sessions. Phase 3 involved structured interaction sessions with children diagnosed with ASD (N = 6–8) to observe the robot’s potential for rehabilitation, observed by specialists and recorded on video. Finally, Phase 4 synthesized data via multidisciplinary triangulation. Results highlighted the importance of iterative, stakeholder-informed design, with experts emphasizing visual properties (color, texture), psychosocial aspects, and adjustable functions. The study identified key technical and psychological evaluation criteria, including engagement, emotional safety, and developmental alignment with ASD intervention models. Findings underscore the value of qualitative methodologies and phased testing in developing child-centered robotic tools. The research establishes a robust methodological framework and provides preliminary evidence for the potential of AI-powered plush robots to support personalized, ethically grounded interventions for children with ASD, though their therapeutic efficacy requires further longitudinal validation. This methodology bridges engineering innovation with psychological rigor, offering a template for future assistive technology research by prioritizing a rigorous, stakeholder-centered design process.
(This article belongs to the Section Psychiatric, Emotional and Behavioral Disorders)

27 pages, 5516 KB  
Article
A Robot Companion with Adaptive Object Preferences and Emotional Responses Enhances Naturalness in Human–Robot Interaction
by Marcos Maroto-Gómez, Sofía Álvarez-Arias, Juan Rodríguez-Huelves, Arecia Segura-Bencomo and María Malfaz
Electronics 2025, 14(18), 3711; https://doi.org/10.3390/electronics14183711 - 19 Sep 2025
Abstract
Autonomous robot companions must engage users to create long-lasting bonds that promote better and more frequent use. Previous studies in the area revealed that personalised human–robot interaction facilitates such connections, leading people to use robots more frequently and improving how they perceive the robot. This paper presents a biologically inspired system based on reinforcement learning to endow care-dependent robot companions with adaptive preferences for objects and dynamic emotional responses that depend on the interaction context. The system generates and adapts the robot’s preferences towards objects based on user actions, simulated internal needs, and other factors such as the kind of object. We integrate the system into Mini, a robot companion that simulates behaviour inspired by the famous Tamagotchi toy to promote human–robot bonding. Mini encourages users to care for it by providing objects that restore its hunger, thirst, and boredom, reacting to the actions taken by users. We conducted a within-subjects user study in which participants interacted with two robots: one with preferences towards objects and emotional responses, and one without them. The results indicate that participants perceived the robot with preferences and emotions as more natural (on the Animacy, Intelligence, and Agency dimensions) but not more likeable or sociable; however, in the subsequent analysis most participants explicitly indicated a preference for the robot with adaptive preferences and emotions.
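The preference-adaptation loop this abstract describes, reinforcement learning over user actions and simulated internal needs, can be illustrated with a standard incremental value update. The objects, reward model, and learning rate below are hypothetical stand-ins, not Mini’s actual biologically inspired system.

```python
# Per-object preference values updated toward the reward each
# interaction yields: V <- V + alpha * (r - V).
LEARNING_RATE = 0.2
preferences = {"ball": 0.0, "food_bowl": 0.0, "water_bowl": 0.0}

def reward(obj, needs):
    """Toy reward: how much the offered object restores a simulated need."""
    restores = {"ball": "boredom", "food_bowl": "hunger",
                "water_bowl": "thirst"}
    return needs.get(restores[obj], 0.0)

def update_preference(obj, needs):
    r = reward(obj, needs)
    preferences[obj] += LEARNING_RATE * (r - preferences[obj])

# Example: a user repeatedly offers food while the robot is hungry, so
# the learned preference for the food bowl rises above the others.
needs = {"hunger": 1.0, "thirst": 0.2, "boredom": 0.5}
for _ in range(5):
    update_preference("food_bowl", needs)
    update_preference("ball", needs)
print({k: round(v, 2) for k, v in preferences.items()})
```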
