Editorial

Recent Advances in Human–Robot Interactions

Jeonghye Han 1 and Daniela Conti 2
1 Department of Computer Education, Cheongju National University of Education, Cheongju 28690, Republic of Korea
2 Department of Humanities, University of Catania, 95124 Catania, Italy
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(12), 6850; https://doi.org/10.3390/app15126850
Submission received: 27 February 2025 / Accepted: 6 June 2025 / Published: 18 June 2025
(This article belongs to the Special Issue Recent Advances in Human-Robot Interactions)

1. Introduction

In recent years, the field of human–robot interaction (HRI) has developed significantly, with hundreds of scientific publications covering various research domains [1]. Specifically, one of the most rapidly advancing areas concerns the integration of artificial intelligence (AI), where significant progress is revolutionizing the way humans and robots collaborate [2]. Thanks to machine learning algorithms, robots can adapt to the behaviors of the users they interact with, offering increasingly personalized experiences [3].
Natural language processing (NLP) enables them to understand voice commands and provide context-aware responses, making communication with humans more seamless and natural [4,5]. Additionally, advancements in computer vision allow robots to recognize and interpret gestures and facial expressions, enhancing their perception and interaction capabilities. Finally, reinforcement learning plays a crucial role in optimizing real-time responses, allowing robots to continuously evolve based on human feedback [6]. Recently, due to these innovations, socially assistive robots (SARs) have been able to provide emotional support in healthcare settings [7].
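To make the feedback-driven adaptation described above more concrete, the following minimal Python sketch (purely illustrative, not drawn from any of the cited works) shows a bandit-style update in which a robot nudges the preference scores of candidate behaviors according to a numeric human rating; the class, behavior names, and values are all hypothetical.

```python
import random
from collections import defaultdict

class FeedbackAdaptiveSelector:
    """Toy bandit-style selector: preference scores are nudged by human feedback."""

    def __init__(self, actions, learning_rate=0.1, epsilon=0.2):
        self.actions = list(actions)      # candidate robot behaviors
        self.lr = learning_rate           # how strongly feedback shifts scores
        self.epsilon = epsilon            # exploration probability
        self.scores = defaultdict(float)  # (user, action) -> learned preference

    def choose(self, user):
        # Occasionally explore; otherwise pick the behavior this user rated best so far.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.scores[(user, a)])

    def update(self, user, action, reward):
        # Move the stored score toward the observed human rating (e.g., 0 to 1).
        key = (user, action)
        self.scores[key] += self.lr * (reward - self.scores[key])

# Hypothetical usage: the robot gradually personalizes its greeting style for one user.
selector = FeedbackAdaptiveSelector(["verbal_greeting", "gesture_greeting", "silent_nod"])
for _ in range(100):
    action = selector.choose("user_42")
    reward = 1.0 if action == "gesture_greeting" else 0.0  # simulated preference
    selector.update("user_42", action, reward)
print(selector.choose("user_42"))  # most likely "gesture_greeting"
```

In a real system the reward signal could come from explicit ratings, facial expressions, or task outcomes, and the learned scores would typically condition on far richer context than a user identifier.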
Another sector that has seen significant advancements is that of large language models (LLMs) [5,8]. These developments have enhanced the conversational abilities of robots, making them more versatile in handling assigned tasks. Robots equipped with LLMs excel at building connections with individuals and show promising capabilities in interpreting complex non-verbal cues. However, they tend to exhibit shortcomings in logical communication and may induce anxiety in interlocutors [9].
Another emerging area of innovation in HRI is intention prediction [10]. Specifically, a robot capable of anticipating the intentions of its human collaborators would be significantly more efficient than one lacking this ability, as it would remove the need for every intention to be explicitly articulated [11]. Intention prediction can be achieved by analyzing human eye movements, which supports the precise anticipation of each step required to complete a given task and enables the robot to promptly understand the intentions of its human counterpart [11].
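As a hedged illustration of how gaze-based intention estimation can work in principle, the sketch below applies a simplified Bayesian evidence-accumulation scheme over a few candidate target objects; it is not the specific methodology surveyed in [11], and the object labels and likelihood values are invented for the example.

```python
import numpy as np

def update_intention_posterior(prior, fixation_likelihoods):
    """One Bayesian update: P(intention | fixation) is proportional to
    P(fixation | intention) * P(intention)."""
    posterior = prior * fixation_likelihoods
    return posterior / posterior.sum()

# Candidate objects the human might act on next (hypothetical assembly task).
intentions = ["cup", "screwdriver", "bolt"]
posterior = np.full(len(intentions), 1.0 / len(intentions))  # uniform prior

# Simulated fixation stream: each row gives P(observed gaze | intention).
fixation_stream = [
    np.array([0.7, 0.2, 0.1]),  # gaze lands near the cup
    np.array([0.6, 0.3, 0.1]),
    np.array([0.8, 0.1, 0.1]),
]

for likelihood in fixation_stream:
    posterior = update_intention_posterior(posterior, likelihood)
    if posterior.max() > 0.9:  # confidence threshold for acting early
        print("Predicted intention:", intentions[int(posterior.argmax())])
        break
```

Once the posterior crosses the threshold, the robot can start preparing the corresponding action (for example, reaching for the cup) before the human states the request.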
One of the most promising areas for the future of HRI is humanoid robotics. These robots, designed with a human-like structure (head, torso, arms, and legs), are intended to mimic human behaviors, adapt to dynamic environments, and make autonomous decisions [12]. The capabilities of humanoid robots stem from the integration of multiple disciplines, including mechanics, electronics, computer science, and artificial intelligence. Additionally, biology, through the study of the human brain, contributes further to the development of robots with increasingly human-like behaviors. However, the gap between human abilities and the current capabilities of humanoid robots remains substantial [13,14].
Although research has not yet reached its full potential, future developments in both software and hardware could lead to significant advancements. Humanoid robots could be employed in numerous sectors, including industry, healthcare, education, agriculture, and entertainment, as well as in military and rescue operations. This suggests that in the near future humans and robots will collaborate daily in workplace environments [12,15,16].
The introduction of robots into the workplace has also raised new challenges. Human workers are often required to monitor robot behaviors, a task that, on the one hand, can be stimulating because it involves decision-making but, on the other, may be alienating because it reduces human engagement in the production process. This situation can generate stress among workers [17,18,19]. A potential solution to this issue is offered by collaborative robots (co-bots), which, although less advanced than humanoid robots, are designed to work alongside humans in workplace environments [20].

2. An Overview of Published Articles

The study conducted by Gonzalez et al. [21] highlights the crucial role that the development of methods for generating appropriate prosody in the responses of embodied conversational agents will play in the coming years. Specifically, it is projected that the number of smart devices will exceed 100 billion by 2050, many of which will integrate conversational user interfaces. The experimental study analyzed the reactions of individuals from diverse cultural backgrounds to the prosody and phonetic choices of a fictitious language based on IPA symbols used by a virtual conversational agent. The results include an analysis of responses to a post-experimental Likert-scale questionnaire and emotions estimated through facial expression analysis. This approach enabled the construction of a phonetic embedding matrix. The authors conclude that there is no common cross-cultural basis for prosodic impressions and that phonic elements with acoustic similarity are not necessarily close within the embedding space.
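The sketch below is only a schematic illustration of how distances in a phonetic embedding matrix can be compared against acoustic intuition; the phone labels and vectors are invented for the example and are not taken from [21].

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical learned embeddings for three phones (rows of a phonetic embedding matrix).
embeddings = {
    "pa": np.array([0.9, 0.1, 0.2]),
    "ba": np.array([0.1, 0.8, 0.3]),  # acoustically close to "pa" (voicing contrast only)
    "ki": np.array([0.8, 0.2, 0.1]),  # acoustically more distant from "pa"
}

# Acoustic proximity does not have to carry over to proximity in the embedding space:
print("pa-ba:", cosine_similarity(embeddings["pa"], embeddings["ba"]))  # ~0.29
print("pa-ki:", cosine_similarity(embeddings["pa"], embeddings["ki"]))  # ~0.99
```

In this toy case the acoustically similar pair ends up far apart in the embedding space, mirroring the authors’ observation that acoustic similarity does not guarantee closeness of the learned representations.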
Recent advancements in human–robot interaction (HRI) have also been highlighted by the study of Cirasa et al. [22]. Although humanoid social robots have undergone rigorous testing and have been implemented in various contexts in recent years, research on robot–child interactions remains limited, particularly concerning deaf children. The experimental study conducted by Cirasa et al. [22] explores the ability of both hearing and deaf children to interact with a humanoid robot and recognize the emotions (happiness, sadness, and anger) expressed by it. Depending on the experimental condition, the humanoid robot responded to the emotions presented in the videos either congruently or incongruently. The results showed no differences in emotion recognition ability between the two conditions (congruent vs. incongruent). Although no significant differences emerged between hearing and deaf children, this feasibility study aims to provide a foundation for future research in HRI and inclusivity-related topics.
Another study [23] investigated the role of trust in human–robot collaboration (HRC). The authors designed an experimental procedure that involved monitoring cerebral, electrodermal, respiratory, and ocular activity, allowing them to map both the dispositional and learned dimensions of trust. The findings provide clear evidence of the crucial role that initial interactions play in the trust-building process and its temporal evolution. Additionally, the identification of key psychophysiological indicators for trust detection and the importance of personalized assessments contribute to a deeper understanding of trust in HRC contexts. These insights are essential for enhancing human–robot interaction in collaborative settings, facilitating smoother and more effective integration.
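As a rough, assumption-laden sketch of how multimodal psychophysiological features might feed a personalized trust detector, the example below trains a generic logistic-regression classifier on hypothetical feature vectors; it is a stand-in for illustration, not the modelling pipeline of [23].

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-interaction features:
# [EEG engagement index, electrodermal activity level, respiration rate, pupil diameter]
X = np.array([
    [0.62, 0.30, 14.0, 3.1],
    [0.55, 0.80, 19.5, 4.0],
    [0.70, 0.25, 13.2, 3.0],
    [0.48, 0.90, 21.0, 4.3],
    [0.66, 0.35, 15.1, 3.2],
    [0.50, 0.85, 20.2, 4.1],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = the operator reported trusting the robot

# A simple linear classifier as a stand-in for a trust detector.
model = LogisticRegression().fit(X, y)
new_sample = np.array([[0.58, 0.40, 16.0, 3.4]])
print("Estimated probability of trust:", model.predict_proba(new_sample)[0, 1])
```

A personalized assessment would recalibrate such a model per operator, since baseline physiology varies widely between individuals.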
In another study, Hou et al. [24] proposed the design of a self-reconfigurable centipede-type rescue robot characterized by high stability during movement. Moreover, the robot can navigate heterogeneous environments and overcome obstacles thanks to the integration of an optical sensor that facilitates life-form recognition, terrain analysis, scene comprehension, and obstacle avoidance. These features are particularly advantageous for rescue operations in complex environments. Stability tests conducted on diverse types of terrain demonstrate the robot’s ability to maintain a stable trajectory even in rugged conditions, with an F1-score exceeding 93% and an average precision rate above 98%. These results confirm the robot’s high reliability and efficiency in rescue operations.
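For readers less familiar with these detection metrics, the F1-score is the harmonic mean of precision and recall; the short sketch below uses illustrative numbers within the reported ranges rather than the paper’s actual counts.

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Illustrative values consistent with the ranges reported above (assumed, not from [24]):
print(f"F1 = {f1_score(0.98, 0.89):.3f}")  # 0.933, i.e. just above 93%
```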
The study by Lee et al. [25] proposes an innovative multilayer rendering method capable of more accurately approximating the motion blur effect, surpassing traditional techniques that often introduce chromatic deviations from true colors. The authors’ approach stores motion vectors for each pixel, divides these vectors into multiple sampling points, and performs a backward search from the current pixel. A sampled point’s color is acquired only if it shares the same motion vector as the origin pixel. This process is repeated across multiple layers, considering only the closest chromatic values for depth testing. The final motion blur effect is determined by averaging the sampled colors at each point. The experimental results demonstrate that the proposed method significantly reduces the chromatic deviations typically observed in traditional techniques, achieving structural similarity indices of 0.8 and 0.92 and providing substantial improvements over the accumulation method.
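A heavily simplified, single-layer sketch of this per-pixel backward-sampling idea is given below; the actual method in [25] operates over multiple layers with depth testing, so this NumPy version is only an assumption-laden approximation for illustration.

```python
import numpy as np

def motion_blur_backward(color, motion, samples=8, tol=1e-3):
    """Average colors sampled backwards along each pixel's motion vector.

    color  : (H, W, 3) float image
    motion : (H, W, 2) per-pixel motion vectors in pixels (dy, dx)
    """
    h, w, _ = color.shape
    out = np.zeros_like(color)
    for y in range(h):
        for x in range(w):
            mv = motion[y, x]
            accum, count = np.zeros(3), 0
            for s in range(samples):
                # Step backwards along the motion vector from the current pixel.
                t = s / max(samples - 1, 1)
                sy = int(round(y - t * mv[0]))
                sx = int(round(x - t * mv[1]))
                if not (0 <= sy < h and 0 <= sx < w):
                    continue
                # Accept the sample only if it shares (approximately) the same motion vector.
                if np.linalg.norm(motion[sy, sx] - mv) < tol:
                    accum += color[sy, sx]
                    count += 1
            out[y, x] = accum / count if count else color[y, x]
    return out

# Tiny synthetic example: a 4x4 image in which every pixel moves 2 px to the right.
img = np.random.rand(4, 4, 3)
vectors = np.tile(np.array([0.0, 2.0]), (4, 4, 1))
print(motion_blur_backward(img, vectors).shape)  # (4, 4, 3)
```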
Overall, these studies collectively contribute to a deeper understanding of HRI, reinforcing the necessity for interdisciplinary approaches that integrate engineering, cognitive science, and human-centered design.

3. Conclusions

These studies highlight significant advancements in the field of human–robot interaction (HRI), particularly in prosody for conversational agents, trust dynamics in human–robot collaboration, and inclusivity in robot–child interactions. Innovations in robotic design, such as the centipede-type rescue robot, along with advancements in computer vision further demonstrate their potential for real-world applications. These findings underscore the need for interdisciplinary research to enhance adaptability, efficiency, and seamless human–robot integration.
Future studies and advancements in HRI research should continue to address ethical considerations, inclusivity, and practical implementation to ensure that robots become more effective and adaptable and integrate seamlessly into human environments.

Author Contributions

D.C.: writing—original draft preparation; J.H. and D.C.: writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Borboni, A.; Reddy, K.V.V.; Elamvazuthi, I.; AL-Quraishi, M.S.; Natarajan, E.; Azhar Ali, S.S. The Expanding Role of Artificial Intelligence in Collaborative Robots for Industrial Applications: A Systematic Review of Recent Works. Machines 2023, 11, 111.
  2. Soori, M.; Arezoo, B.; Dastres, R. Artificial intelligence, machine learning and deep learning in advanced robotics, a review. Cogn. Robot. 2023, 3, 54–70.
  3. Ercolano, G.; Rossi, S.; Conti, D.; Di Nuovo, A. Gesture recognition with a 2D low-resolution embedded camera to minimise intrusion in robot-led training of children with autism spectrum disorder. Appl. Intell. 2024, 54, 6579–6591.
  4. Song, J. A Review of The Application of Natural Language Processing in Human-Computer Interaction. Appl. Comput. Eng. 2024, 106, 111–117.
  5. Kang, Y.; Cai, Z.; Tan, C.W.; Huang, Q.; Liu, H. Natural language processing (NLP) in management research: A literature review. J. Manag. Anal. 2020, 7, 139–172.
  6. Obaigbena, A.; Lottu, O.A.; Ugwuanyi, E.D.; Jacks, B.S.; Sodiya, E.O.; Daraojimba, O.D. AI and human-robot interaction: A review of recent advances and challenges. GSC Adv. Res. Rev. 2024, 18, 321–330.
  7. Di Nuovo, A.; Bamforth, J.; Conti, D.; Sage, K.; Ibbotson, R.; Clegg, J.; Westaway, A.; Arnold, K. An explorative study on robotics for supporting children with autism spectrum disorder during clinical procedures. In Proceedings of the Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, Cambridge, UK, 23–26 March 2020; pp. 189–191.
  8. Fanni, S.C.; Febi, M.; Aghakhanyan, G.; Neri, E. Natural language processing. In Introduction to Artificial Intelligence; Springer International Publishing: Cham, Switzerland, 2023; pp. 87–99.
  9. Kim, C.Y.; Lee, C.P.; Mutlu, B. Understanding large-language model (LLM)-powered human-robot interaction. In Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, Boulder, CO, USA, 11–15 March 2024; pp. 371–380.
  10. Dutta, V.; Zielinska, T. Predicting the intention of human activities for real-time human-robot interaction (HRI). In Proceedings of the Social Robotics: 8th International Conference, ICSR 2016, Kansas City, MO, USA, 1–3 November 2016; pp. 723–734.
  11. Belardinelli, A. Gaze-based intention estimation: Principles, methodologies, and applications in HRI. ACM Trans. Hum.-Robot Interact. 2024, 13, 1–30.
  12. Tong, Y.; Liu, H.; Zhang, Z. Advancements in humanoid robots: A comprehensive review and future prospects. IEEE/CAA J. Autom. Sin. 2024, 11, 301–328.
  13. Patacchiola, M.; Cangelosi, A. A developmental cognitive architecture for trust and theory of mind in humanoid robots. IEEE Trans. Cybern. 2020, 52, 1947–1959.
  14. Conti, D.; Di Nuovo, S.; Buono, S.; Di Nuovo, A. Robots in Education and Care of Children with Developmental Disabilities: A Study on Acceptance by Experienced and Future Professionals. Int. J. Soc. Robot. 2017, 9, 51–62.
  15. Mutlu, B.; Forlizzi, J. Robots in organizations: The role of workflow, social, and environmental factors in human-robot interaction. In Proceedings of the 3rd ACM/IEEE International Conference on Human Robot Interaction, HRI 2008, Amsterdam, The Netherlands, 12–15 March 2008; pp. 287–294.
  16. Welfare, K.S.; Hallowell, M.R.; Shah, J.A.; Riek, L.D. Consider the human work experience when integrating robotics in the workplace. In Proceedings of the 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Daegu, Republic of Korea, 11–14 March 2019; pp. 75–84.
  17. Mital, A.; Pennathur, A. Advanced technologies and humans in manufacturing workplaces: An interdependent relationship. Int. J. Ind. Ergon. 2004, 33, 295–313.
  18. Warm, J.S.; Matthews, G.; Finomore, V.S., Jr. Vigilance, workload, and stress. In Performance Under Stress; CRC Press: Boca Raton, FL, USA, 2018; pp. 131–158.
  19. Woods, D.D. Decomposing automation: Apparent simplicity, real complexity. In Automation and Human Performance; CRC Press: Boca Raton, FL, USA, 2018; pp. 3–17.
  20. Toichoa Eyam, A.; Mohammed, W.M.; Martinez Lastra, J.L. Emotion-driven analysis and control of human-robot interactions in collaborative applications. Sensors 2021, 21, 4626.
  21. Gonzalez, A.G.C.; Lo, W.-S.; Mizuuchi, I. The Impression of Phones and Prosody Choice in the Gibberish Speech of the Virtual Embodied Conversational Agent Kotaro. Appl. Sci. 2023, 13, 10143.
  22. Cirasa, C.; Høgsdal, H.; Conti, D. “I See What You Feel”: An Exploratory Study to Investigate the Understanding of Robot Emotions in Deaf Children. Appl. Sci. 2024, 14, 1446.
  23. Loizaga, E.; Bastida, L.; Sillaurren, S.; Moya, A.; Toledo, N. Modelling and Measuring Trust in Human–Robot Collaboration. Appl. Sci. 2024, 14, 1919.
  24. Hou, J.; Xue, Z.; Liang, Y.; Sun, Y.; Zhao, Y.; Chen, Q. Self-Configurable Centipede-Inspired Rescue Robot. Appl. Sci. 2024, 14, 2331.
  25. Lee, D.; Kwon, H.; Oh, K. Real-Time Motion Blur Using Multi-Layer Motion Vectors. Appl. Sci. 2024, 14, 4626.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
