Recent Advances in Human-Robot Interactions

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Robotics and Automation".

Deadline for manuscript submissions: 20 July 2024

Special Issue Editors


Guest Editor: Dr. Jeonghye Han
Department of Computer Education, Cheongju National University of Education, Cheongju 28690, Republic of Korea
Interests: human–robot interaction

Guest Editor: Dr. Daniela Conti
Department of Humanities, University of Catania, 95124 Catania, Italy
Interests: human–robot interaction

Special Issue Information

Dear Colleagues,

In recent decades, novel technologies have been developed across a number of diverse scientific fields, including robotics. Robotics is a convergent technological field, drawing on mechanical engineering, computer science, and related disciplines. Human–robot interaction (HRI) represents a crucial research area for the advancement and application of these technologies.

This Special Issue addresses all domains related to human–robot interaction and focuses on recent advances in this converging field.

This Special Issue aims to publish high-quality, original research papers addressing the following areas of interest:

  • Human–robot interaction;
  • Human–machine interaction;
  • Human–drone interaction;
  • Robots and artificial intelligence;
  • Robotic cars and self-driving cars;
  • Robots in art;
  • Medical robotics and patients;
  • Robot–human collaboration;
  • Robots and education;
  • Robots and rehabilitation;
  • Service robots;
  • Assessment robotics;
  • Robots and softbots.

Dr. Jeonghye Han
Dr. Daniela Conti
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • human–robot interaction
  • robotics
  • AI
  • collaboration
  • education
  • rehabilitation
  • services
  • evaluation
  • softbots

Published Papers (4 papers)


Research

15 pages, 8322 KiB  
Article
Self-Configurable Centipede-Inspired Rescue Robot
by Jingbo Hou, Zhifeng Xue, Yue Liang, Yipeng Sun, Yu Zhao and Qili Chen
Appl. Sci. 2024, 14(6), 2331; https://doi.org/10.3390/app14062331 - 10 Mar 2024
Abstract
Drawing from the characteristics of centipedes, such as their low center of gravity, high stability in movement, adaptability to complex terrains, and ability to continue moving even after losing a limb, this paper designs a self-reconfigurable centipede-type rescue robot with relatively high stability while moving. The robot’s body can lift and traverse higher obstacles, and its multi-segmented structure enables self-disconnection and reconstruction for docking. Moreover, the proposed robot is adept at navigating diverse terrains and surmounting obstacles, equipped with a camera sensor facilitating life recognition, terrain surveying, scene understanding, and obstacle avoidance. Its capabilities prove advantageous for achieving challenging ground rescue missions. Motion stability tests, conducted across various terrains, showcase the robot’s ability to maintain a consistent movement path in rugged environments. Operating with a leg lift height of 0.02 m, the robot achieves a speed of 0.09 m per second. In simulated damaged conditions, the robot demonstrates the capacity to disconnect and reconnect its limbs swiftly, restoring movement capabilities within a single second. During environmental perception tasks, the robot processes and analyzes environmental data in real time at a rate of approximately 15 frames per second, with an 80% confidence level. With an F1 score exceeding 93% and an average precision rate surpassing 98%, the robot showcases its reliability and efficiency.
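
For readers less familiar with the detection metrics quoted above: the F1 score is the harmonic mean of precision and recall, and average precision summarizes the precision–recall curve across thresholds. A minimal illustrative sketch in Python using scikit-learn (the labels, predictions, and confidences below are invented, not the paper's data):

```python
# Illustrative sketch only: how an F1 score and average precision, as
# quoted in the abstract above, are typically computed. All values here
# are invented, not the paper's detection data.
from sklearn.metrics import average_precision_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 1]   # hypothetical ground-truth labels
y_pred = [1, 0, 1, 1, 0, 1, 1, 1]   # hypothetical hard predictions
y_conf = [0.9, 0.2, 0.8, 0.7, 0.3, 0.95, 0.6, 0.85]  # hypothetical confidences

# F1 = 2 * precision * recall / (precision + recall)
print(f"F1 score:          {f1_score(y_true, y_pred):.3f}")
# Average precision integrates precision over the recall curve.
print(f"Average precision: {average_precision_score(y_true, y_conf):.3f}")
```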

18 pages, 2642 KiB  
Article
Modelling and Measuring Trust in Human–Robot Collaboration
by Erlantz Loizaga, Leire Bastida, Sara Sillaurren, Ana Moya and Nerea Toledo
Appl. Sci. 2024, 14(5), 1919; https://doi.org/10.3390/app14051919 - 26 Feb 2024
Abstract
Recognizing trust as a pivotal element for success within Human–Robot Collaboration (HRC) environments, this article examines its nature, exploring the different dimensions of trust, analysing the factors affecting each of them, and proposing alternatives for trust measurement. To do so, we designed an experimental procedure involving 50 participants interacting with a modified ‘Inspector game’ while we monitored their brain, electrodermal, respiratory, and ocular activities. This procedure allowed us to map dispositional (static individual baseline) and learned (dynamic, based on prior interactions) dimensions of trust, considering both demographic and psychophysiological aspects. Our findings challenge traditional assumptions regarding the dispositional dimension of trust and establish clear evidence that the first interactions are critical for the trust-building process and the temporal evolution of trust. By identifying more significant psychophysiological features for trust detection and underscoring the importance of individualized trust assessment, this research contributes to understanding the nature of trust in HRC. Such insights are crucial for enabling more seamless human–robot interaction in collaborative environments.
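
The "dispositional (static individual baseline)" dimension mentioned above suggests normalizing each participant's physiological signals against their own resting state. A minimal sketch of that idea, assuming z-score baselining of an electrodermal activity (EDA) trace (the data, sampling rate, and window lengths below are invented, not the authors' pipeline):

```python
# Illustrative sketch only: z-scoring a task-phase psychophysiological
# signal against the same participant's resting baseline, in the spirit
# of the individualized trust assessment described above. All numbers
# are invented.
import numpy as np

def baseline_normalize(task: np.ndarray, rest: np.ndarray) -> np.ndarray:
    """Z-score a task-phase signal against the participant's own resting stats."""
    return (task - rest.mean()) / rest.std()

rng = np.random.default_rng(seed=0)
rest_eda = rng.normal(loc=2.0, scale=0.3, size=600)  # 10 min resting EDA at 1 Hz
task_eda = rng.normal(loc=2.6, scale=0.5, size=300)  # 5 min collaborative task

z = baseline_normalize(task_eda, rest_eda)
print(f"Mean task arousal vs. baseline: {z.mean():+.2f} SD")  # > 0: above resting level
```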

14 pages, 6079 KiB  
Article
“I See What You Feel”: An Exploratory Study to Investigate the Understanding of Robot Emotions in Deaf Children
by Carla Cirasa, Helene Høgsdal and Daniela Conti
Appl. Sci. 2024, 14(4), 1446; https://doi.org/10.3390/app14041446 - 09 Feb 2024
Abstract
Research in the field of human–robot interactions (HRIs) has advanced significantly in recent years. Social humanoid robots have undergone extensive testing and have been implemented in a variety of settings, for example, in educational institutions, healthcare facilities, and senior care centers. Humanoid robots have also been assessed across different population groups. However, research on various groups of children is still scarce, especially among deaf children. This feasibility study explores the ability of both hearing and deaf children to interact with and recognize emotions expressed by NAO, the humanoid robot, without relying on sounds or speech. Initially, the children watched three video clips portraying emotions of happiness, sadness, and anger. Depending on the experimental condition, the children observed the humanoid robot respond to the emotions in the video clips in a congruent or incongruent manner before they were asked to recall which emotion the robot exhibited. The influence of empathy on the ability to recognize emotions was also investigated. The results revealed that there was no difference in the ability to recognize emotions between the two conditions (i.e., congruent and incongruent). Indeed, NAO responding with congruent emotions to the video clips did not help the children recognize the emotion in NAO. Specifically, the ability to predict emotions in the video clips and gender (female) were identified as significant predictors of emotion recognition in NAO. While no significant difference was identified between hearing and deaf children, this feasibility study aims to establish a foundation for future research on this important topic.

34 pages, 6061 KiB  
Article
The Impression of Phones and Prosody Choice in the Gibberish Speech of the Virtual Embodied Conversational Agent Kotaro
by Antonio Galiza Cerdeira Gonzalez, Wing-Sum Lo and Ikuo Mizuuchi
Appl. Sci. 2023, 13(18), 10143; https://doi.org/10.3390/app131810143 - 08 Sep 2023
Abstract
The number of smart devices is expected to exceed 100 billion by 2050, and many will feature conversational user interfaces. Thus, methods for generating appropriate prosody for the responses of embodied conversational agents will be very important. This paper presents the results of the “Talk to Kotaro” experiment, which was conducted to better understand how people from different cultural backgrounds react when listening to prosody and phone choices for the IPA symbol-based gibberish speech of the virtual embodied conversational agent Kotaro. It also presents an analysis of the responses to a post-experiment Likert scale questionnaire and of the emotions estimated from the participants’ facial expressions, which allowed us to obtain a phone embedding matrix and to conclude that there is no common cross-cultural baseline impression regarding different prosody parameters and that similar-sounding phones are not close in the embedding space. Finally, it also provides the obtained data in a fully anonymized data set.
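
The finding that similar-sounding phones are not close in the embedding space can be made concrete with a distance check between rows of a phone embedding matrix. A minimal sketch, assuming cosine similarity and invented 4-dimensional embeddings (not the paper's learned matrix):

```python
# Illustrative sketch only: measuring closeness of two phones in an
# embedding space, as discussed in the abstract above. The embedding
# vectors for /p/ and /b/ are invented, not the paper's learned matrix.
import numpy as np

phone_embeddings = {
    "p": np.array([0.9, 0.1, 0.3, 0.2]),
    "b": np.array([0.1, 0.8, 0.2, 0.7]),
}

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """1.0 = same direction, 0.0 = orthogonal, -1.0 = opposite."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# /p/ and /b/ sound alike, yet their embeddings need not be neighbors,
# consistent with the paper's conclusion.
print(f"cos(/p/, /b/) = {cosine_similarity(*phone_embeddings.values()):.2f}")
```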
