Challenges in Human-Robot Interactions for Social Robotics

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: 30 April 2025 | Viewed by 11030

Special Issue Editors


Guest Editor
Instituto Superior Técnico, University of Lisbon, 1649-004 Lisboa, Portugal
Interests: social robotics; human-robot interaction; architectural aspects of robotics, including robot modelling and control; networked and cooperative robotics; systems integration

Guest Editor
Department of Robotics, University Carlos III of Madrid, Avda. de la Universidad 30, Leganés, 28911 Madrid, Spain
Interests: social robots; assistive robotics; human-robot interaction; autonomous robots; decision making; multimodal dialogue management; robot expressiveness; artificial emotions

Guest Editor
Department of Systems Engineering and Automation, Carlos III University of Madrid, Madrid, Spain
Interests: human–robot interaction; feature extraction; pattern matching; speaker recognition; speech-based user interfaces; time-frequency analysis

Guest Editor
Department of Systems Engineering and Automation, Carlos III University of Madrid, Madrid, Spain
Interests: social robotics; computer vision; perception; activity detection; human-robot interaction

Special Issue Information

Dear Colleagues,

Social robots are gradually becoming more influential in human societies, and successful interactions between robots and humans are essential to ensure the robots' integration and acceptance.

This Special Issue focuses on human–robot interactions in social robotics. Interactions with computational devices are now ubiquitous in daily life, and the lessons learned from them have informed the design of our interactions with robots. To understand humans in their social environment, we must improve the effectiveness of the interactions between humans and robots. Moreover, the technologies available in social robotics shape how humans interact with these robots, and novel technologies and applications give robots and humans new ways to interact that can potentially improve human–robot interactions.

This Special Issue aims to cover themes including, among others: robots that comply with social norms; the challenges of developing interfaces for interacting with social robots; the dynamics of human perception; novel interaction capabilities; sustainability; closing the technological gap between humans and robots; designing robots with their purpose within the interaction in mind; and the future of social robots.

Dr. João Silva Sequeira
Dr. Álvaro Castro-González
Dr. Fernando Alonso Martín
Dr. José Carlos Castillo
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • human-robot interaction
  • social robots
  • robot interfaces
  • human perception
  • interaction metrics
  • sustainability
  • human-robot gap

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (7 papers)


Research

20 pages, 6562 KiB  
Article
Personalized Cognitive Support via Social Robots
by Jaime Andres Rincon Arango, Cedric Marco-Detchart and Vicente Javier Julian Inglada
Sensors 2025, 25(3), 888; https://doi.org/10.3390/s25030888 - 31 Jan 2025
Cited by 1 | Viewed by 1188
Abstract
This paper explores the use of personalized cognitive support through social robots to assist the elderly in maintaining cognitive health and emotional well-being. As aging populations grow, the demand for innovative solutions to address issues like loneliness, cognitive decline, and physical limitations increases. The studied social robots utilize machine learning and advanced sensor technology to deliver real-time adaptive interactions, including cognitive exercises, daily task assistance, and emotional support. Through responsive and personalized features, the robot enhances user autonomy and improves quality of life by monitoring physical and emotional states and adapting to the needs of each user. This study also examines the challenges of implementing assistive robots in home and healthcare settings, offering insights into the evolving role of AI-powered social robots in eldercare.
(This article belongs to the Special Issue Challenges in Human-Robot Interactions for Social Robotics)

24 pages, 1173 KiB  
Article
A Comprehensive Analysis of a Social Intelligence Dataset and Response Tendencies Between Large Language Models (LLMs) and Humans
by Erika Mori, Yue Qiu, Hirokatsu Kataoka and Yoshimitsu Aoki
Sensors 2025, 25(2), 477; https://doi.org/10.3390/s25020477 - 15 Jan 2025
Viewed by 1236
Abstract
In recent years, advancements in the interaction and collaboration between humans and machines have garnered significant attention. Social intelligence plays a crucial role in facilitating natural interactions and seamless communication between humans and Artificial Intelligence (AI). To assess AI’s ability to understand human interactions and the components necessary for such comprehension, datasets like Social-IQ have been developed. However, these datasets often rely on a simplistic question-and-answer format and lack justifications for the provided answers. Furthermore, existing methods typically produce direct answers by selecting from predefined choices without generating intermediate outputs, which hampers interpretability and reliability. To address these limitations, we conducted a comprehensive evaluation of AI methods on a video-based Question Answering (QA) benchmark focused on human interactions, leveraging additional annotations related to human responses. Our analysis highlights significant differences between human and AI response patterns and underscores critical shortcomings in current benchmarks. We anticipate that these findings will guide the creation of more advanced datasets and represent an important step toward achieving natural communication between humans and AI.
(This article belongs to the Special Issue Challenges in Human-Robot Interactions for Social Robotics)

21 pages, 3698 KiB  
Article
Child-Centric Robot Dialogue Systems: Fine-Tuning Large Language Models for Better Utterance Understanding and Interaction
by Da-Young Kim, Hyo Jeong Lym, Hanna Lee, Ye Jun Lee, Juhyun Kim, Min-Gyu Kim and Yunju Baek
Sensors 2024, 24(24), 7939; https://doi.org/10.3390/s24247939 - 12 Dec 2024
Viewed by 1018
Abstract
Dialogue systems must understand children’s utterance intentions by considering their unique linguistic characteristics, such as syntactic incompleteness, pronunciation inaccuracies, and creative expressions, to enable natural conversational engagement in child–robot interactions. Even state-of-the-art large language models (LLMs) for language understanding and contextual awareness cannot comprehend children’s intent as accurately as humans because of these distinctive features. An LLM-based dialogue system should therefore learn how humans understand children’s speech to enhance its intention reasoning performance in verbal interactions with children. To this end, we propose a fine-tuning methodology that utilizes LLM–human judgment discrepancy data and interactive response data. The former represent cases in which the LLM’s and humans’ judgments of the contextual appropriateness of a child’s answer to a robot’s question diverge. The latter comprise robot responses suitable for children’s utterance intentions, generated by the LLM. We developed a fine-tuned dialogue system using these datasets to achieve human-like interpretations of children’s utterances and to respond adaptively. Our system was evaluated through human assessment using the Robotic Social Attributes Scale (RoSAS) and Sensibleness and Specificity Average (SSA) metrics. The results show that it supports the effective interpretation of children’s utterance intentions and enables natural verbal interactions, even in cases with syntactic incompleteness and mispronunciations.
(This article belongs to the Special Issue Challenges in Human-Robot Interactions for Social Robotics)

17 pages, 1797 KiB  
Article
The Role of Name, Origin, and Voice Accent in a Robot’s Ethnic Identity
by Jessica K. Barfield
Sensors 2024, 24(19), 6421; https://doi.org/10.3390/s24196421 - 4 Oct 2024
Cited by 2 | Viewed by 1175
Abstract
This paper presents the results of an experiment designed to explore whether users assigned an ethnic identity to the Misty II robot based on the robot’s voice accent, place of origin, and given name. To explore this topic, a 2 × 3 within-subjects study was run in which a humanoid robot spoke with a male or female gendered voice and used three different voice accents (Chinese, American, Mexican). Using participants who identified as American, the results indicated that users were able to identify the gender and ethnic identity of the Misty II robot with a high degree of accuracy based on a minimal set of social cues. However, the version of Misty II presenting with an American ethnicity was identified more accurately than a robot presenting with cues signaling a Mexican or Chinese ethnicity. Implications of the results for the design of human-robot interfaces are discussed.
(This article belongs to the Special Issue Challenges in Human-Robot Interactions for Social Robotics)

12 pages, 1823 KiB  
Article
When Trustworthiness Meets Face: Facial Design for Social Robots
by Yao Song and Yan Luximon
Sensors 2024, 24(13), 4215; https://doi.org/10.3390/s24134215 - 28 Jun 2024
Cited by 1 | Viewed by 1896
Abstract
As a technical application of artificial intelligence, the social robot represents a branch of robotics research that emphasizes socially communicating and interacting with human beings. Although both robotics and behavioral research have recognized the significance of social robot design for market success and the related emotional benefit to users, the specific eye and mouth shapes that elicit trustworthiness in a social robot have received only limited attention. To address this research gap, our study conducted a 2 (eye shape) × 3 (mouth shape) full factorial between-subjects experiment. A total of 211 participants were recruited and randomly assigned to the six scenarios in the study. After exposure to the stimuli, perceived trustworthiness and robot attitude were measured. The results showed that round eyes (vs. narrow eyes) and an upturned or neutral mouth (vs. a downturned mouth) significantly improved people’s perceived trustworthiness of, and attitude toward, social robots. The effects of eye and mouth shape on robot attitude are mediated by perceived trustworthiness. Trustworthy human facial features can thus be applied to a robot’s face, eliciting a similar trustworthiness perception and attitude. In addition to its empirical contributions to HRI, this finding can inform design practice for trustworthy-looking social robots.
(This article belongs to the Special Issue Challenges in Human-Robot Interactions for Social Robotics)

32 pages, 4861 KiB  
Article
Creating Expressive Social Robots That Convey Symbolic and Spontaneous Communication
by Enrique Fernández-Rodicio, Álvaro Castro-González, Juan José Gamboa-Montero, Sara Carrasco-Martínez and Miguel A. Salichs
Sensors 2024, 24(11), 3671; https://doi.org/10.3390/s24113671 - 5 Jun 2024
Viewed by 1524
Abstract
Robots are becoming an increasingly important part of our society and have started to be used in tasks that require communicating with humans. Communication can be decoupled into two dimensions: symbolic (information aimed at achieving a particular goal) and spontaneous (displaying the speaker’s emotional and motivational state) communication. Thus, to enhance human–robot interactions, the expressions that are used have to convey both dimensions. This paper presents a method for modelling a robot’s expressiveness as a combination of these two dimensions, where each can be generated independently; this is the first contribution of our work. The second contribution is the development of an expressiveness architecture that uses predefined multimodal expressions to convey the symbolic dimension and integrates a series of modulation strategies for conveying the robot’s mood and emotions. To validate the performance of the proposed architecture, the last contribution is a series of experiments that study the effect that adding the spontaneous dimension of communication, and fusing it with the symbolic dimension, has on how people perceive a social robot. Our results show that the modulation strategies improve the users’ perception and can convey a recognizable affective state.
(This article belongs to the Special Issue Challenges in Human-Robot Interactions for Social Robotics)

22 pages, 13026 KiB  
Article
Development of a Personal Guide Robot That Leads a Guest Hand-in-Hand While Keeping a Distance
by Hironobu Wakabayashi, Yutaka Hiroi, Kenzaburo Miyawaki and Akinori Ito
Sensors 2024, 24(7), 2345; https://doi.org/10.3390/s24072345 - 7 Apr 2024
Cited by 2 | Viewed by 1791
Abstract
This paper proposes a novel tour guide robot, “ASAHI ReBorn”, which can lead a guest by the hand one-on-one while maintaining a proper distance from the guest. The robot uses a stretchable arm interface to hold the guest’s hand and adjusts its speed according to the guest’s pace. The robot also follows a given guide path accurately using the Robot Side method, a navigation method that follows a pre-defined path quickly and accurately. In addition, a control method is introduced that limits the robot’s angular velocity to avoid quick turns while guiding the guest. We evaluated the performance and usability of the proposed robot through experiments and user studies. The tour-guiding experiment revealed that the proposed method, which keeps the distance between the robot and the guest using the stretchable arm, allowed guests to look around the exhibits more freely than in the condition where the robot moved at a constant velocity.
(This article belongs to the Special Issue Challenges in Human-Robot Interactions for Social Robotics)
