Intelligent Human-Robot Interaction

A special issue of Biomimetics (ISSN 2313-7673). This special issue belongs to the section "Locomotion and Bioinspired Robotics".

Deadline for manuscript submissions: closed (20 August 2023) | Viewed by 34872

Special Issue Editors


Prof. Dr. Yisheng Guan
Guest Editor
School of Electromechanical Engineering, Guangdong University of Technology, Guangzhou 510006, China
Interests: bionic robotics; motion planning; automation and robotics; mechatronics; cognitive robotics; space robotics

Dr. Weiwei Wan
Guest Editor
School of Engineering Science, Osaka University, Osaka 565-0871, Japan
Interests: robot manipulation; motion planning; intelligent robots

Dr. Li He
Guest Editor
Department of Electronic and Electrical Engineering, Southern University of Science and Technology, Shenzhen 518055, China
Interests: SLAM; machine learning; mobile robots; LiDAR; autonomous driving

Special Issue Information

Dear Colleagues,

Human–robot interaction (HRI) is a multi-disciplinary field that encompasses artificial intelligence, robotics, human–computer interaction, machine vision, natural language understanding, and social science. With the rapid development of AI and robotics, intelligent HRI has become an increasingly active topic in robotics.

Intelligent HRI involves many challenges in science and technology, particularly in human-centered aspects. These include human expectations of, attitudes towards, and perceptions of robots; the safety, acceptability, and comfort of robotic behaviors; and the closeness of robots to humans. Conversely, robots are expected to understand human attention, intention, and even emotion, and to respond promptly with the support of AI. Achieving excellent intelligent HRI requires R&D across this multi- and cross-disciplinary field, with efforts in all relevant aspects, including actuation, sensing, perception, control, recognition, planning, learning, AI algorithms, intelligent I/O, and integrated systems.

The aim of this Special Issue is to reveal new concepts, ideas, findings, and the latest achievements in both theoretical research and technical development in intelligent HRI. We invite scientists and engineers from robotics, AI, computer science, and other relevant disciplines to present the latest results of their research and development in the field of intelligent HRI. The topics of interest include, but are not limited to, the following:

  • Intelligent sensors and systems;
  • Bio-inspired sensing and learning;
  • Multi-modal perception and recognition;
  • Social robotics;
  • Autonomous behaviors of robots;
  • AI algorithms in robotics;
  • Collaboration between humans and robots;
  • Advances and future challenges of HRI. 

Prof. Dr. Yisheng Guan
Dr. Weiwei Wan
Dr. Li He
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form on that site. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Biomimetics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • intelligent sensors and systems
  • bio-inspired sensing and learning
  • multi-modal perception and recognition
  • social robotics
  • autonomous behaviors of robots
  • AI algorithms in robotics
  • collaboration between humans and robots
  • advances and future challenges of HRI

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the journal website.

Published Papers (10 papers)


Research


35 pages, 9958 KiB  
Article
Examining the Impact of Digital Human Gaze Expressions on Engagement Induction
by Subin Mok, Sung Park and Mincheol Whang
Biomimetics 2023, 8(8), 610; https://doi.org/10.3390/biomimetics8080610 - 14 Dec 2023
Cited by 1 | Viewed by 1643
Abstract
With advancements in technology, digital humans are becoming increasingly sophisticated, with their application scope widening to include interactions with real people. However, research on expressions that facilitate natural engagement in interactions between real people and digital humans is scarce. With this study, we aimed to examine the differences in user engagement, as measured by subjective evaluations, eye tracking, and electroencephalogram (EEG) responses, relative to different gaze expressions in various conversational contexts. Conversational situations were categorized as face-to-face, face-to-video, and digital human interactions, with gaze expressions segmented into eye contact and gaze avoidance. Story stimuli incorporating twelve sentences verified to elicit positive and negative emotional responses were employed in the experiments after validation. A total of 45 participants (31 females and 14 males) underwent stimulation through positive and negative stories while exhibiting eye contact or gaze avoidance under each of the three conversational conditions. Engagement was assessed using subjective evaluation metrics in conjunction with measures of the subjects' gaze and brainwave activity. The findings revealed engagement disparities between the face-to-face and digital-human conversation conditions. Notably, only positive stimuli elicited variations in engagement based on gaze expression across different conversation conditions. Gaze analysis corroborated the engagement differences, aligning with prior research on social sensitivity, but only in response to positive stimuli. This research departs from traditional studies of unnatural interactions with digital humans, focusing instead on interactions with digital humans designed to mimic the appearance of real humans. This study demonstrates the potential for gaze expression to induce engagement, regardless of the human or digital nature of the conversational dyads.

24 pages, 11830 KiB  
Article
Probabilistic Dual-Space Fusion for Real-Time Human-Robot Interaction
by Yihui Li, Jiajun Wu, Xiaohan Chen and Yisheng Guan
Biomimetics 2023, 8(6), 497; https://doi.org/10.3390/biomimetics8060497 - 19 Oct 2023
Viewed by 1467
Abstract
For robots in human environments, learning complex and demanding interaction skills from humans and responding quickly to human motions are highly desirable. A common challenge for interaction tasks is that the robot has to satisfy both task space and joint space constraints on its motion trajectories in real time. Few studies have addressed this issue of dual-space constraints in human-robot interaction, although it has been investigated in robot imitation learning. In this work, we propose a method of dual-space feature fusion to enhance the accuracy of the inferred trajectories in both task space and joint space; then, we introduce a linear mapping operator (LMO) to map the inferred task space trajectory to a joint space trajectory. Finally, we combine the dual-space fusion, LMO, and phase estimation into a unified probabilistic framework. We evaluate our dual-space feature fusion capability and real-time performance in the task of a robot following a human-handheld object and in a ball-hitting experiment. Our inference accuracy in both task space and joint space is superior, by more than 33%, to that of standard Interaction Primitives (IP), which use only single-space inference; the inference accuracy of the second-order LMO is comparable to that of the kinematics-based mapping method, and the computation time of our unified inference framework is reduced by 54.87% relative to the comparison method.
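
The core of the LMO idea, mapping an inferred task-space trajectory into joint space, can be illustrated with a standard differential-inverse-kinematics sketch. The planar two-link arm, its link lengths, and the pseudoinverse update below are illustrative assumptions, not the paper's actual robot or its learned mapping operator.

```python
# A minimal sketch of mapping a task-space trajectory to joint space via
# a Jacobian pseudoinverse. Arm geometry and parameters are assumed.
import numpy as np

L1, L2 = 0.4, 0.3  # link lengths (m), assumed

def fk(q):
    """Forward kinematics of a planar 2-link arm: joint angles -> end-effector xy."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    """Analytic Jacobian d(xy)/dq."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def task_to_joint(xy_traj, q0):
    """Map a task-space trajectory to joint space by integrating dq = J^+ dx."""
    q = np.array(q0, dtype=float)
    joint_traj = [q.copy()]
    for k in range(1, len(xy_traj)):
        dx = xy_traj[k] - fk(q)                 # task-space error to next waypoint
        dq = np.linalg.pinv(jacobian(q)) @ dx   # least-squares joint update
        q = q + dq
        joint_traj.append(q.copy())
    return np.array(joint_traj)

# Example: follow a short straight-line segment in task space.
ts = np.linspace(0, 1, 20)
xy = np.stack([0.5 + 0.1 * ts, 0.1 * ts], axis=1)
qs = task_to_joint(xy, q0=[0.3, 0.5])
print(qs.shape)  # (20, 2)
```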

18 pages, 6155 KiB  
Article
Human Operation Augmentation through Wearable Robotic Limb Integrated with Mixed Reality Device
by Hongwei Jing, Tianjiao Zheng, Qinghua Zhang, Kerui Sun, Lele Li, Mingzhu Lai, Jie Zhao and Yanhe Zhu
Biomimetics 2023, 8(6), 479; https://doi.org/10.3390/biomimetics8060479 - 8 Oct 2023
Cited by 1 | Viewed by 1952
Abstract
Mixed reality technology can give humans an intuitive visual experience and, combined with multi-source information from the human body, can provide a comfortable human–robot interaction experience. This paper applies a mixed reality device (HoloLens2) to provide interactive communication between the wearer and a wearable robotic limb (supernumerary robotic limb, SRL). HoloLens2 can obtain human body information, including eye gaze, hand gestures, and voice input. It can also provide feedback to the wearer through augmented reality and audio output, serving as the communication bridge needed in human–robot interaction. A wearable robotic arm integrated with HoloLens2 is proposed to augment the wearer's capabilities. Taking two typical practical tasks in aircraft manufacturing, cable installation and electrical connector soldering, as examples, the task models and interaction scheme are designed. Finally, the human augmentation is evaluated in terms of task completion time statistics.
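
As a rough illustration of the interaction scheme such a system needs, the sketch below fuses gaze, gesture, and voice events into discrete commands for a wearable arm. The event names, priority rule, and command set are hypothetical, not the paper's actual HoloLens2/SRL interface.

```python
# A minimal sketch of a multimodal command dispatcher: gaze, gesture, and
# voice events fused into discrete commands. All names are assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MultimodalInput:
    gaze_target: Optional[str]   # e.g., "cable_clip_3" from eye tracking
    gesture: Optional[str]       # e.g., "pinch", "open_palm"
    voice: Optional[str]         # e.g., "hold", "release"

def dispatch(inp: MultimodalInput) -> str:
    """Fuse modalities with a simple priority: voice > gesture + gaze."""
    if inp.voice == "release":
        return "SRL_RELEASE"
    if inp.voice == "hold" and inp.gaze_target:
        return f"SRL_GRASP:{inp.gaze_target}"   # grasp what the wearer looks at
    if inp.gesture == "pinch" and inp.gaze_target:
        return f"SRL_POINT:{inp.gaze_target}"
    return "SRL_IDLE"

print(dispatch(MultimodalInput("cable_clip_3", None, "hold")))
# -> SRL_GRASP:cable_clip_3
```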

30 pages, 7881 KiB  
Article
Assessing Feasibility of Cognitive Impairment Testing Using Social Robotic Technology Augmented with Affective Computing and Emotional State Detection Systems
by Sergio Russo, Letizia Lorusso, Grazia D’Onofrio, Filomena Ciccone, Michele Tritto, Sergio Nocco, Daniela Cardone, David Perpetuini, Marco Lombardo, Daniele Lombardo, Daniele Sancarlo, Antonio Greco, Arcangelo Merla and Francesco Giuliani
Biomimetics 2023, 8(6), 475; https://doi.org/10.3390/biomimetics8060475 - 6 Oct 2023
Cited by 5 | Viewed by 1827
Abstract
Social robots represent a promising opportunity to manage the diagnosis, treatment, care, and support of older people with dementia. The aim of this study is to validate the Mini-Mental State Examination (MMSE) test administered by the Pepper robot equipped with systems to detect psychophysical and emotional states in older patients. Our main result is that the Pepper robot is capable of administering the MMSE and that cognitive status is not a determinant in the effective use of a social robot. People with mild cognitive impairment appreciate the robot, as it interacts with them. Acceptability does not relate strictly to the user experience, but the willingness to interact with the robot is an important variable for engagement. We demonstrate the feasibility of a novel approach that, in the future, could lead to more natural human–machine interaction when delivering cognitive tests with the aid of a social robot and a Computational Psychophysiology Module (CPM).

23 pages, 7411 KiB  
Article
Human Posture Transition-Time Detection Based upon Inertial Measurement Unit and Long Short-Term Memory Neural Networks
by Chun-Ting Kuo, Jun-Ji Lin, Kuo-Kuang Jen, Wei-Li Hsu, Fu-Cheng Wang, Tsu-Chin Tsao and Jia-Yush Yen
Biomimetics 2023, 8(6), 471; https://doi.org/10.3390/biomimetics8060471 - 2 Oct 2023
Cited by 5 | Viewed by 2601
Abstract
As human–robot interaction becomes more prevalent in industrial and clinical settings, detecting changes in human posture has become increasingly crucial. While recognizing human actions has been extensively studied, the transition between different postures or movements has been largely overlooked. This study explores using two deep-learning methods, the linear Feedforward Neural Network (FNN) and Long Short-Term Memory (LSTM), to detect changes in human posture among three different movements: standing, walking, and sitting. To explore the possibility of rapid posture-change detection upon human intention, the authors introduced transition stages as distinct features for the identification. During the experiment, the subject wore an inertial measurement unit (IMU) on their right leg to measure joint parameters. The measurement data were used to train the two machine learning networks, and their performances were tested. This study also examined the effect of the sampling rate on the LSTM network. The results indicate that both methods achieved high detection accuracies, but the LSTM model outperformed the FNN in terms of speed and accuracy, achieving 91% and 95% accuracy for data sampled at 25 Hz and 100 Hz, respectively. Additionally, the network trained for one test subject was able to detect posture changes in other subjects, demonstrating the feasibility of personalized or generalized deep learning models for detecting human intentions. At a sampling rate of 100 Hz, the detected posture transition time was accurate to within 0.17 s and the identification accuracy was 94.44%. In summary, these results lay a foundation for the engineering application of digital twins, exoskeletons, and human intention control.
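
A minimal sketch of the kind of LSTM classifier described above, mapping windows of IMU signals to posture and transition classes, is shown below (in PyTorch). The layer sizes, window length, channel count, and five-class layout are assumptions for illustration, not the authors' exact architecture.

```python
# A sketch of an LSTM posture/transition classifier over IMU windows.
# Sizes and the class layout (stand, walk, sit, two transitions) are assumed.
import torch
import torch.nn as nn

class PostureLSTM(nn.Module):
    def __init__(self, n_features=6, hidden=64, n_classes=5):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # classify from the last time step

# 1-second windows at 100 Hz, 6 IMU channels (3-axis accel + gyro), assumed.
model = PostureLSTM()
window = torch.randn(8, 100, 6)        # batch of 8 windows
logits = model(window)
pred = logits.argmax(dim=1)            # predicted posture/transition class
print(pred.shape)                      # torch.Size([8])
```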

13 pages, 824 KiB  
Article
Modeling and Analysis of Human Comfort in Human–Robot Collaboration
by Yuchen Yan, Haotian Su and Yunyi Jia
Biomimetics 2023, 8(6), 464; https://doi.org/10.3390/biomimetics8060464 - 1 Oct 2023
Cited by 2 | Viewed by 1548
Abstract
The emergence and recent development of collaborative robots have introduced a safer and more efficient human–robot collaboration (HRC) manufacturing environment. Since the release of collaborative robots (cobots), a great amount of research effort has been focused on improving robot working efficiency, user safety, human intention detection, etc., while one significant factor, human comfort, has been frequently ignored. The comfort factor is critical to cobot users due to its great impact on user acceptance. Previous studies lack a mathematical-model-based approach to quantitatively describe and predict human comfort in HRC scenarios, and few have discussed cases in which multiple comfort factors take effect simultaneously. In this study, a multi-linear-regression-based general human comfort prediction model is proposed for human–robot collaboration scenarios, which is able to accurately predict the comfort levels of humans in multi-factor situations, addressing both gaps at once. The model uses subjective comfort rating feedback from human subjects as training and testing data. In experiments, the overall average prediction accuracy across all participants was 81.33%, with a maximum of 88.94% and a minimum of 72.53%, confirming the effectiveness of the proposed approach in identifying human comfort levels in HRC.
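
The multiple-linear-regression formulation can be illustrated in a few lines of scikit-learn. The comfort factors (robot speed, separation distance, motion smoothness) and the synthetic ratings below are illustrative assumptions; the paper's model is trained on subjective ratings collected from human subjects.

```python
# A sketch of a multi-factor linear comfort model. Factor names and the
# synthetic ratings are assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.uniform(0.1, 1.5, n),   # robot end-effector speed (m/s)
    rng.uniform(0.3, 2.0, n),   # human-robot separation distance (m)
    rng.uniform(0.0, 1.0, n),   # motion smoothness score (0-1)
])
# Synthetic comfort ratings: slower, farther, smoother -> more comfortable.
y = 3.0 - 1.2 * X[:, 0] + 0.8 * X[:, 1] + 1.5 * X[:, 2] + rng.normal(0, 0.2, n)

model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)
print(model.predict([[0.5, 1.0, 0.8]]))  # predicted comfort for one scenario
```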

18 pages, 2667 KiB  
Article
Development of a New Control System for a Rehabilitation Robot Using Electrical Impedance Tomography and Artificial Intelligence
by Alireza Abbasimoshaei, Adithya Kumar Chinnakkonda Ravi and Thorsten Alexander Kern
Biomimetics 2023, 8(5), 420; https://doi.org/10.3390/biomimetics8050420 - 11 Sep 2023
Cited by 18 | Viewed by 1781
Abstract
In this study, we present a tomography-based control system for a rehabilitation robot using a novel approach to assess progress and a dynamic model of the system. In this model, the torque generated by the robot and the impedance of the patient's hand are used to determine each step of the rehabilitation. In the proposed control architecture, a regression model is developed and implemented based on the extraction of tomography signals to estimate the muscle state. During the rehabilitation session, the torque applied by the patient is adjusted according to this estimate. The first step of this protocol is to calculate the subject-specific parameters, including the axis offset, inertia parameters, and passive damping and stiffness. The second step involves identifying the other elements of the model, such as the torque resulting from interaction; in this case, the robot calculates the torque generated by the patient. The developed robot-based solution and the suggested protocol were tested on different participants and showed promising results. First, the prediction of the impedance–position relationship was evaluated, and the prediction error was below 2%. Then, participants with different impedances were tested, and the results showed that the control system regulated the force and position for each participant individually.
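
The control idea, estimating muscle state from tomography signals with a regression model and adjusting the assistive torque accordingly, can be sketched as follows. The EIT feature dimensions, the ridge regressor, the gains, and the inverse effort-to-assistance rule are illustrative assumptions, not the paper's identified dynamic model.

```python
# A sketch: regress patient effort from EIT features, then scale robot
# assistance down as estimated effort rises. All numbers are assumed.
import numpy as np
from sklearn.linear_model import Ridge

# Train a muscle-state estimator on (EIT feature vector -> effort level) pairs.
rng = np.random.default_rng(1)
eit_features = rng.normal(size=(300, 16))          # 16 EIT channels, assumed
effort = eit_features @ rng.normal(size=16) * 0.1 + 0.5
estimator = Ridge(alpha=1.0).fit(eit_features, effort)

def assistive_torque(eit_frame, tau_max=5.0):
    """Scale robot assistance inversely with estimated patient effort."""
    e = float(np.clip(estimator.predict(eit_frame[None])[0], 0.0, 1.0))
    return tau_max * (1.0 - e)  # more estimated effort -> less assistance

print(assistive_torque(rng.normal(size=16)))
```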

23 pages, 1224 KiB  
Article
Adaptive Circadian Rhythms for Autonomous and Biologically Inspired Robot Behavior
by Marcos Maroto-Gómez, María Malfaz, Álvaro Castro-González, Sara Carrasco-Martínez and Miguel Ángel Salichs
Biomimetics 2023, 8(5), 413; https://doi.org/10.3390/biomimetics8050413 - 6 Sep 2023
Viewed by 7764
Abstract
Biological rhythms are periodic internal variations of living organisms that act as adaptive responses to environmental changes. The human pacemaker is the suprachiasmatic nucleus, a brain region involved in biological functions like homeostasis and emotion. Biological rhythms are ultradian (<24 h), circadian (∼24 h), or infradian (>24 h), depending on their period. Circadian rhythms are the most studied, since they regulate daily sleep, emotion, and activity. Ambient and internal stimuli, such as light or activity, influence the timing and period of biological rhythms, making our bodies adapt to dynamic situations. Robots, meanwhile, are undergoing unceasing development and assist us in many tasks. Due to the dynamic conditions of social environments and human–robot interaction, robots exhibiting adaptive behavior have more possibilities to engage users by emulating human social skills. This paper presents a biologically inspired model based on circadian biorhythms for autonomous and adaptive robot behavior. The model uses the Dynamic Circadian Integrated Response Characteristic method to mimic human biology and control artificial biologically inspired functions that influence the robot's decision-making. The robot's clock adapts to light, ambient noise, and user activity, synchronizing the robot's behavior to the ambient conditions. The results show the adaptive response of the model to time shifts and seasonal changes of different ambient stimuli while regulating simulated hormones that are key to sleep/activity timing, stress, and autonomic basal heartbeat control during the day.
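
A toy version of a light-entrained circadian clock that gates robot activity is sketched below. The phase-advance rule and parameters are illustrative assumptions, not the paper's Dynamic Circadian Integrated Response Characteristic method.

```python
# A sketch of a light-entrained circadian oscillator gating robot behavior.
# The entrainment rule and constants are assumptions for illustration.
import math

class CircadianClock:
    def __init__(self, period_h=24.0, gain=0.05):
        self.period = period_h
        self.phase = 0.0        # radians; phase = pi is subjective midday
        self.gain = gain        # entrainment strength

    def step(self, dt_h, light):
        """Advance the clock by dt_h hours; light in [0, 1] nudges the phase."""
        self.phase += 2 * math.pi * dt_h / self.period
        # Light pulls the clock toward its subjective-day phase (pi).
        self.phase += self.gain * light * math.sin(math.pi - self.phase)
        self.phase %= 2 * math.pi

    def activity_drive(self):
        """Map phase to a 0-1 activity level that gates robot behaviors."""
        return 0.5 * (1 + math.cos(self.phase - math.pi))

clock = CircadianClock()
for hour in range(48):
    light = 1.0 if 8 <= hour % 24 <= 20 else 0.0   # simulated daylight
    clock.step(1.0, light)
print(round(clock.activity_drive(), 2))
```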

Review


35 pages, 5576 KiB  
Review
Deep Learning in the Ubiquitous Human–Computer Interactive 6G Era: Applications, Principles and Prospects
by Chunlei Chen, Huixiang Zhang, Jinkui Hou, Yonghui Zhang, Huihui Zhang, Jiangyan Dai, Shunpeng Pang and Chengduan Wang
Biomimetics 2023, 8(4), 343; https://doi.org/10.3390/biomimetics8040343 - 2 Aug 2023
Cited by 4 | Viewed by 2310
Abstract
With the rapid development of enabling technologies like VR and AR, we are on the threshold of the ubiquitous human-centric intelligence era. 6G is believed to be an indispensable cornerstone for efficient interaction between humans and computers in this promising vision, and it is expected to boost many human-centric applications due to its unprecedented performance improvements over 5G and earlier generations. However, challenges remain to be addressed, including but not limited to the following six aspects: terahertz and millimeter-wave communication, low latency and high reliability, energy efficiency, security, efficient edge computing, and heterogeneity of services. Traditional analytical methods are difficult to fit to these problems due to the complex architecture and highly dynamic features of ubiquitous interactive 6G systems. Fortunately, deep learning can circumvent this analytical intractability by training networks with tremendous numbers of parameters that build mapping relationships from the neural network input (the status and specific requirements of a 6G application) to the neural network output (settings that satisfy those requirements). Deep learning methods can thus be an efficient alternative to traditional analytical methods, or even conquer predicaments those methods cannot resolve. We review representative deep learning solutions to the aforementioned six aspects separately, focusing on the principles of fitting a deep learning method to specific 6G issues. Based on this review, our main contributions are as follows. (i) We investigate the representative works in a systematic view and identify important issues, such as the vital role of deep reinforcement learning in the 6G context. (ii) We point out solutions to the lack of training data in the 6G communication context. (iii) We reveal the relationship between traditional analytical methods and deep learning in terms of 6G applications. (iv) We identify frequently used, efficient techniques in deep-learning-based 6G solutions. Finally, we point out open problems and future directions.
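
The input-to-output mapping described above, from an application's status and requirements to network settings, can be pictured as a small neural network. The feature layout and the two normalized outputs below are illustrative assumptions; an actual 6G mapper would be trained (often with deep reinforcement learning) against a network simulator.

```python
# A sketch of a status-and-requirements -> settings mapping network.
# Feature layout and outputs are assumed; the network here is untrained.
import torch
import torch.nn as nn

mapper = nn.Sequential(
    nn.Linear(4, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 2), nn.Sigmoid(),   # settings normalized to [0, 1]
)

# One request: [traffic load, channel quality, latency budget, reliability target]
status = torch.tensor([[0.7, 0.4, 0.1, 0.99]])
power_share, bandwidth_share = mapper(status)[0]
print(float(power_share), float(bandwidth_share))
```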

22 pages, 2505 KiB  
Review
A Review of Myoelectric Control for Prosthetic Hand Manipulation
by Ziming Chen, Huasong Min, Dong Wang, Ziwei Xia, Fuchun Sun and Bin Fang
Biomimetics 2023, 8(3), 328; https://doi.org/10.3390/biomimetics8030328 - 24 Jul 2023
Cited by 16 | Viewed by 10127
Abstract
Myoelectric control for prosthetic hands is an important topic in the field of rehabilitation. Intuitive and intelligent myoelectric control can help amputees regain upper limb function. However, current research efforts are primarily focused on developing rich myoelectric classifiers and biomimetic control methods, limiting prosthetic hand manipulation to simple grasping and releasing tasks and rarely exploring complex daily tasks. In this article, we conduct a systematic review of recent achievements in two areas, namely intention recognition research and control strategy research. Specifically, we focus on advanced methods for motion intention types, discrete motion classification, continuous motion estimation, unidirectional control, feedback control, and shared control. Based on this review, we then analyze the challenges and opportunities in the research directions of functionality-augmented prosthetic hands and user burden reduction, which can help overcome the limitations of current myoelectric control research and provide development prospects for future research.
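
As a concrete taste of the discrete motion classification the review surveys, the sketch below computes classic time-domain sEMG features (mean absolute value, root mean square, zero crossings) per channel and feeds them to a standard classifier. The channel count, window size, and synthetic data are illustrative assumptions.

```python
# A sketch of time-domain sEMG feature extraction plus a standard classifier.
# Channel count, window size, and the synthetic data are assumed.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def td_features(window):
    """window: (samples, channels) -> MAV, RMS, zero-crossing count per channel."""
    mav = np.mean(np.abs(window), axis=0)
    rms = np.sqrt(np.mean(window**2, axis=0))
    zc = np.sum(np.diff(np.sign(window), axis=0) != 0, axis=0)
    return np.concatenate([mav, rms, zc])

# Synthetic stand-in for windowed sEMG: 300 windows, 200 samples, 8 channels,
# 4 grasp classes whose channel gains differ.
rng = np.random.default_rng(2)
labels = rng.integers(0, 4, 300)
gains = 1.0 + 0.5 * rng.random((4, 8))
windows = rng.normal(size=(300, 200, 8)) * gains[labels][:, None, :]

X = np.array([td_features(w) for w in windows])
clf = LinearDiscriminantAnalysis().fit(X[:250], labels[:250])
print(clf.score(X[250:], labels[250:]))  # held-out accuracy
```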
