Review

Human–Robot Interaction: A Review and Analysis on Variable Admittance Control, Safety, and Perspectives

by Abdel-Nasser Sharkawy 1,* and Panagiotis N. Koustoumpardis 2
1 Mechatronics Engineering, Mechanical Engineering Department, Faculty of Engineering, South Valley University, Qena 83523, Egypt
2 Robotics Group, Department of Mechanical Engineering and Aeronautics, University of Patras, 26504 Patras, Greece
* Author to whom correspondence should be addressed.
Machines 2022, 10(7), 591; https://doi.org/10.3390/machines10070591
Submission received: 20 June 2022 / Revised: 12 July 2022 / Accepted: 18 July 2022 / Published: 20 July 2022

Abstract: Human–robot interaction (HRI) is a broad research topic, which is defined as understanding, designing, developing, and evaluating robotic systems to be used with or by humans. This paper presents a survey on the control, safety, and perspectives for HRI systems. The first part of this paper reviews variable admittance (VA) control for human–robot co-manipulation tasks, where the virtual damping, inertia, or both are adjusted. An overview of the published research on VA control approaches, their methods, the accomplished collaborative co-manipulation tasks and applications, and the criteria for evaluating them is presented and compared. Then, the performance of various VA controllers is compared and investigated. In the second part, the safety of HRI systems is discussed. The various methods for the detection of human–robot collisions (model-based and data-based) are investigated and compared. Furthermore, the criteria, the main aspects, and the requirements for determining collisions and their thresholds are discussed. The performance measure and the effectiveness of each method are analyzed and compared. The third and final part of the paper discusses the perspectives, necessity, influences, and expectations of HRI for future robotic systems.

1. Introduction

Human–robot interaction (HRI) is a fast-growing research field in robotics and seems to be most promising for robotics' future and its effective introduction into more and more areas of everyday life. HRI research covers many fields and applications [1,2], which can be found in industrial, medical and rehabilitation, agricultural, service, and educational environments, among others. HRI is used in industrial applications in co-manipulation tasks, picking and placing on production lines, welding processes, parts' assembly, and painting [3,4,5,6,7]. Assistive robotics is considered one of the highest-profile fields of HRI. Robots can help people with physical and mental challenges and provide the opportunity for interaction as well as therapy [8,9,10,11]. In addition, HRI can be found in hospitals and can be helpful in the fight against COVID-19 [12]. In agricultural environments, HRI strategies can provide solutions to many complex problems, such as improving security, lowering workload, increasing comfort, and improving the productivity of the process [13]. In addition, HRI helps in various tasks that include harvesting, seeding, pruning, spraying, fertilizing, hauling, weed detection, phenotyping, mowing, and sorting and packing [14,15,16,17,18]. In educational environments, robots can help in classrooms in different learning processes. Furthermore, they can be used to promote education for children at home and in schools [19,20]. Robots can also help young pupils with empathy and acquiring skills. In addition, HRI can be found in service, home use, mining, household management, space exploration, and UAVs [1,2,21,22].
For all these real and very different applications, highly efficient HRI must be achieved, and this can be attained by considering two main factors. The first is safety. Safety in HRI systems is very important because the presence of the operator near the robot could lead to injuries. Thus, a system based on collision avoidance or detection must be developed and embedded into the robot control. The second main factor in HRI is the control. Advanced controllers are required that can adapt themselves to follow the collaborator's intention as well as changes in the environment. This makes the robots friendly to humans, and the required tasks can be executed easily, efficiently, and robustly.
The main contribution of this manuscript is as follows. This paper presents a survey on the two main and vital factors for HRI: control and safety. This survey is divided into two parts:
  • In the first part, the VA controller is presented, in which the virtual damping, the inertia, or both are adjusted. The various developed methods, the executed co-manipulation tasks and applications, the criteria for evaluation, and the performance are compared and investigated. The study in this part is crucial and innovative, and its main aim is to give insight into the role of VA control in improving the performance of HRI. In addition, it gives researchers guidelines for designing and evaluating their own VA control systems.
  • In the second part, the safety of HRI is reviewed. The model- and data-based methods for collision detection, the determination of the collision threshold, and the effectiveness (%) of the methods are analyzed and compared. The main purpose of this part is to reveal the effectiveness, performance measure (%), and application of each method, providing a basis for future enhancement of the performance of the developed safety methods.
The content of the methodology followed in this paper is presented in Figure 1.
The rest of this paper is composed of the following sections: Section 2 presents the control methods for HRI, particularly VA control, and reviews VA control in human–robot co-manipulation tasks. In Section 3, the safety methods of HRI are presented and compared, including their effectiveness and errors. Section 4 presents some perspectives, expectations, and recommendations on HRI for future robotic systems. Finally, Section 5 summarizes the most important points of this manuscript.

2. Control Methods for HRI

The robot's cooperation with the human in industrial, service, co-manipulation, and medical applications [23,24] can enhance the effectiveness of both the human and the robotic system. Collaborative robots are used to assist humans, increasing their capabilities in three respects: precision, speed, and force. Furthermore, robots can reduce the human operator's stress or tiredness and therefore improve working conditions. In the cooperation, the human operator contributes experience, knowledge of task execution, intuition, ease of adaptation and learning, and ease of understanding control strategies [25,26].
The many robotics applications and the variety of tasks make it necessary to develop variable-parameter controllers for accomplishing the robotic task. The controllers should be adjusted depending on the human collaborator's intention as well as changes in the environment (e.g., the payload of the robot). In this way, robots become friendly to humans. In this section, we concentrate on compliance control, particularly admittance control. The following subsections discuss and review in detail the VA controller in HRI.

2.1. Compliance Control (Impedance/Admittance)

Compliant behavior is not a new problem in robotics. It is relevant when the robot interacts with the environment, particularly if the environment is only partly known. In this subsection, the impedance control and the admittance control of the robot are presented and discussed. Compliance control (impedance or admittance) [27,28,29] is used as a control system to implement a dynamic relationship between the robot and the human. Admittance control [30] requires a good position or velocity controller (trajectory tracking controller) and an external force sensor. The force sensor is not required for the impedance controller when the inertia is not shaped. The robot's dynamic behavior is adjusted by tuning the virtual damping, inertia, and stiffness instead of independently controlling either the position or the force. The main points of comparison between the impedance and admittance controllers are presented in Table 1. Furthermore, the concept and the implementation of both controllers are shown in Figure 2.
The advantages and properties of impedance and admittance control can be better illustrated by comparing them with other control modes such as force control, position control, and hybrid control. Table 2 presents this comparison based on the work of Song et al. [31].
The dynamic relationship between the forces or torques applied by the human operator to the robot end-effector and its displacement or velocity is described by the following equation [27,28,29]:
$$ F = M_d \dot{V}_a + C_d V_a \qquad (1) $$

where $V_a \in \mathbb{R}^6$ and $\dot{V}_a \in \mathbb{R}^6$ represent the velocity and the acceleration, respectively, in the directions of the Cartesian coordinate system, $M_d \in \mathbb{R}^{6 \times 6}$ is the positive definite matrix that represents the virtual inertia, and $C_d \in \mathbb{R}^{6 \times 6}$ is the positive definite matrix that represents the virtual damping. $F \in \mathbb{R}^6$ is the vector of the forces and torques applied by the human. The virtual spring term $K$ is usually omitted from Equation (1); dealing with the virtual stiffness is out of the scope of this paper.
Equation (1) can be rewritten in the general form as follows:
$$
\begin{bmatrix} F_x \\ F_y \\ F_z \\ T_x \\ T_y \\ T_z \end{bmatrix}
=
\begin{bmatrix}
m_{1x} & m_{1y} & m_{1z} & I_{1x} & I_{1y} & I_{1z} \\
m_{2x} & m_{2y} & m_{2z} & I_{2x} & I_{2y} & I_{2z} \\
m_{3x} & m_{3y} & m_{3z} & I_{3x} & I_{3y} & I_{3z} \\
m_{4x} & m_{4y} & m_{4z} & I_{4x} & I_{4y} & I_{4z} \\
m_{5x} & m_{5y} & m_{5z} & I_{5x} & I_{5y} & I_{5z} \\
m_{6x} & m_{6y} & m_{6z} & I_{6x} & I_{6y} & I_{6z}
\end{bmatrix}
\begin{bmatrix} \dot{V}_x \\ \dot{V}_y \\ \dot{V}_z \\ \dot{\omega}_x \\ \dot{\omega}_y \\ \dot{\omega}_z \end{bmatrix}
+
\begin{bmatrix}
c_{1x} & c_{1y} & c_{1z} & c_{r1x} & c_{r1y} & c_{r1z} \\
c_{2x} & c_{2y} & c_{2z} & c_{r2x} & c_{r2y} & c_{r2z} \\
c_{3x} & c_{3y} & c_{3z} & c_{r3x} & c_{r3y} & c_{r3z} \\
c_{4x} & c_{4y} & c_{4z} & c_{r4x} & c_{r4y} & c_{r4z} \\
c_{5x} & c_{5y} & c_{5z} & c_{r5x} & c_{r5y} & c_{r5z} \\
c_{6x} & c_{6y} & c_{6z} & c_{r6x} & c_{r6y} & c_{r6z}
\end{bmatrix}
\begin{bmatrix} V_x \\ V_y \\ V_z \\ \omega_x \\ \omega_y \\ \omega_z \end{bmatrix}
\qquad (2)
$$
For a relatively specific case where the inertia and damping matrices are decoupled with respect to the world coordinates, Equation (2) is written in the following form:
$$
\begin{bmatrix} F_x \\ F_y \\ F_z \\ T_x \\ T_y \\ T_z \end{bmatrix}
=
\mathrm{diag}\left( m_x, m_y, m_z, I_x, I_y, I_z \right)
\begin{bmatrix} \dot{V}_x \\ \dot{V}_y \\ \dot{V}_z \\ \dot{\omega}_x \\ \dot{\omega}_y \\ \dot{\omega}_z \end{bmatrix}
+
\mathrm{diag}\left( c_x, c_y, c_z, c_{rx}, c_{ry}, c_{rz} \right)
\begin{bmatrix} V_x \\ V_y \\ V_z \\ \omega_x \\ \omega_y \\ \omega_z \end{bmatrix}
\qquad (3)
$$
where $m_x$, $m_y$, and $m_z$ represent the virtual inertia parameters of the admittance control in the directions of the Cartesian coordinate system during linear motion, whereas $I_x$, $I_y$, and $I_z$ represent the virtual inertia during rotation. Additionally, $c_x$, $c_y$, and $c_z$ represent the virtual damping during linear motion, whereas $c_{rx}$, $c_{ry}$, and $c_{rz}$ represent the virtual damping during rotation.
In this paper, the admittance controller is reviewed, particularly the VA controller in co-manipulation tasks, where either the virtual inertia $m$, the virtual damping $c$, or both are adjusted to facilitate the robot's cooperation with the human, as presented in Figure 3.
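As an illustration, the decoupled admittance law of Equation (3) can be discretized per axis and integrated to produce a commanded velocity from the measured human force. The following minimal Python sketch uses illustrative parameter values (m = 5 kg, c = 20 Ns/m), not values taken from any of the reviewed works:

```python
def admittance_step(f, v, m, c, dt):
    """One discrete step of the per-axis admittance law F = m*v_dot + c*v,
    solved for the acceleration and integrated (explicit Euler) to obtain
    the new commanded velocity."""
    v_dot = (f - c * v) / m   # acceleration induced by the measured force
    return v + v_dot * dt     # new commanded velocity for the inner loop

# Example: a constant 5 N push along one Cartesian axis
m, c, dt = 5.0, 20.0, 0.01    # virtual inertia [kg], damping [Ns/m], step [s]
v = 0.0
for _ in range(1000):         # simulate 10 s of interaction
    v = admittance_step(5.0, v, m, c, dt)
# v converges toward the steady-state value F/c = 0.25 m/s
```

The commanded velocity is then tracked by the robot's inner position/velocity controller, which is why admittance control requires a good trajectory tracking controller, as noted above.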

2.2. Methods for VA Control System in Co-Manipulation Tasks

In this subsection, the developed methods for the VA controller are presented, whether only the virtual damping parameter is adjusted, only the virtual inertia parameter, or both simultaneously.
Parameters of VA controllers are adjusted based on different techniques such as human intention, passivity-preserving strategy, transmitted power from human to robot, and data-based approaches such as fuzzy logic, neural network, trajectory prediction, and online and fast Fourier transform (FFT) of measured forces [6,37,38,39,40,41,42,43,44,45,46,47,48,49,50]. These classifications are presented in Figure 4.
Inference of human intention was the basis of the following works. Duchaine and Gosselin [37] improved the intuitiveness for humans by adjusting only the virtual damping. They developed their VA control using the time derivative of the applied force, which was then used to infer the human's intentions. Lecours et al. [46] developed a VA control in which both the virtual damping and the virtual inertia were adapted. Their controller was implemented using inference of human intention, considering the operator's desired velocity and acceleration. The drawback of these two approaches is the need for numerical differentiation, which produces noisy signals, which in turn require filtering that causes delays. In [41], Topini et al. implemented a VA control where both the virtual inertia and the virtual damping were adapted online to the motion intention of the user. Their system followed the same approach as Lecours et al. [46], although the field of application was different. In addition, the desired reference force and the source of such a force reference value were not provided by Lecours et al. [46].
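To make the intention-inference idea concrete, the sketch below adapts the virtual damping from the time derivative of the applied force, in the spirit of (but not identical to) the controllers in [37,46]; all constants (`c_nom`, `alpha`, and the clipping bounds) are hypothetical tuning values, not parameters from the cited works:

```python
import numpy as np

def variable_damping(f, f_prev, dt, c_nom=20.0, alpha=0.5, c_min=5.0, c_max=40.0):
    """Illustrative intention-based damping law: lower the damping when
    the force magnitude is growing (the human intends to accelerate),
    raise it when the force decreases (the human intends to stop)."""
    f_dot = (f - f_prev) / dt               # numerical differentiation (noisy in practice)
    c = c_nom - alpha * np.sign(f) * f_dot  # intention-scaled adjustment
    return float(np.clip(c, c_min, c_max))  # keep damping in a safe range
```

Note that the numerical differentiation on the first line is exactly the source of the noise-and-filtering drawback discussed above.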
The passivity-preserving strategy was used to adapt the parameters of the robot admittance control. Tsumugiwa et al. [45] presented a method to adjust both the virtual damping and inertia considering a passivity index and the ultimate value of the force applied by the human. In their strategy, the passivity index was used to distinguish the working process from other processes, i.e., the energy transferred between human and robot during the collaboration. When the passivity index was positive and exceeded a defined threshold, the human provided the robot with the energy for executing the collaborative task. This passivity index was also used to detect the human's whole working process. The ultimate applied force value was then used to divide the whole working process into four parts, and proper admittance control characteristics were applied to each part. A passivity-preserving strategy was also proposed by C. T. Landi et al. in [48], where the virtual inertia parameter of the admittance control was adjusted to remove high-frequency oscillations and restore the desired interaction model. Furthermore, the virtual damping parameter was adjusted based on a constant damping-to-inertia ratio to preserve similar system dynamics after the adjustment, which is more intuitive for humans [46].
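A simplified reading of the passivity index is the instantaneous power flow from human to robot; the sketch below computes it, with illustrative force and velocity values (not data from [45,48]):

```python
import numpy as np

def passivity_index(f, v):
    """Instantaneous power flow from human to robot: positive when the
    human injects energy (active guidance), negative when the robot
    returns energy to the human. Thresholding this value is one simple
    way to separate the working process from idle phases."""
    return float(np.dot(f, v))

f = np.array([4.0, 0.0, 1.0])   # applied force [N] (illustrative)
v = np.array([0.1, 0.0, 0.05])  # end-effector velocity [m/s] (illustrative)
p = passivity_index(f, v)       # 0.45 W -> the human is driving the task
```

In the reviewed strategies, this kind of signal is compared against a defined threshold before the admittance parameters are switched.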
Based on the power transmitted from human to robot, Sidiropoulos et al. [40] proposed a VA control for adapting the virtual damping that minimizes the energy provided by the human while allowing the subject to control the task.
Data-based methods such as fuzzy logic and neural networks were also developed. Fuzzy logic-based methods were proposed in the following research works. Z. Du et al. [43] developed a hybrid VA model based on fuzzy Sarsa(λ) learning, considering the rotational movement about a single axis, for obtaining intuitive and natural interaction during the pose adjustment of minimally invasive surgery manipulators. Dimeas and Aspragathos [44] proposed a method in which a human-like decision-making process was combined with a fuzzy inference adaptation algorithm. The measured robot velocity and the applied human force were the inputs to their method, whereas the online-adjusted virtual damping parameter was the output. Their fuzzy inference system was adapted using a fuzzy model reference learning controller. The minimum jerk trajectory model was the basis of this learning, and expert knowledge was required for intuitive collaboration.
Neural networks (NNs) were also used for adapting the parameters of robot admittance control systems. In [6,49,50], Sharkawy et al. proposed a multi-layer feedforward neural network (MLFFNN) that was trained online to adjust only the virtual damping parameter or only the virtual inertia parameter of the admittance control. The training was based on the error backpropagation algorithm, considering the error between the actual robot velocity and the desired minimum jerk trajectory velocity. In [38], both the virtual damping and inertia were adjusted online and simultaneously using a Jordan recurrent NN. The network was indirectly trained by real-time recurrent learning, considering the error between the actual robot velocity and the desired minimum jerk trajectory velocity. In these approaches, no expert knowledge was needed for intuitive collaboration, which is desirable. In [39], a method was developed to allow the robot to interact with an unknown environment. An observer in the robot joint space was employed to estimate the interaction torque. The admittance control was adopted to regulate the dynamic behavior at the point of interaction during the robot's collaboration with the unknown environment. A controller based on a radial basis function (RBF) network was implemented to guarantee trajectory tracking. A cost function was defined to capture the interaction performance of torque regulation and trajectory tracking, and it was minimized by adapting the admittance model.
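The minimum jerk trajectory used as the training reference in these works has a well-known closed-form profile, x(t) = d(10τ³ − 15τ⁴ + 6τ⁵) with τ = t/T; the following sketch evaluates its velocity for a point-to-point motion (the function name and the example values are ours, for illustration only):

```python
def min_jerk_velocity(t, T, d):
    """Velocity profile of the minimum jerk trajectory
    x(t) = d*(10*tau**3 - 15*tau**4 + 6*tau**5), tau = t/T,
    for a point-to-point motion of distance d [m] and duration T [s].
    The reviewed NN approaches train on the error between the actual
    robot velocity and this reference velocity."""
    tau = t / T
    return (d / T) * (30 * tau**2 - 60 * tau**3 + 30 * tau**4)

# Illustrative motion: 0.3 m in 2 s; the bell-shaped velocity profile
# starts and ends at zero and peaks at mid-motion (t = T/2)
v_peak = min_jerk_velocity(1.0, 2.0, 0.3)
```

The bell-shaped, zero-endpoint velocity profile is what makes this model a natural reference for smooth, human-like point-to-point co-manipulation.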
Based on trajectory prediction of the human hand motion, Wang et al. [42] proposed a VA control for HRI. In their approach, the robot end-effector's trajectory under the guidance of the human operator was used to train a long short-term memory NN (LSTM-NN) offline. The trajectory predictors were then used in VA control to predict online the trajectory and the movement direction of the robot's end-effector. The developed VA controller reduced the virtual damping in the moving direction.
Another data-based method was developed in [47], where Okunev et al. used an online fast Fourier transform (FFT) of the forces measured by a sensor mounted at the robot's end-effector. Their method was used to detect oscillations and to dynamically adapt the virtual damping and inertia so as to attenuate the oscillations and improve the haptic interaction experience of the cooperating human. In addition, they developed a machine learning method to include human preferences in the control design, based on a user study and evaluation.
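As a rough illustration of such FFT-based oscillation detection (not the actual implementation of [47]), the sketch below measures the fraction of a force window's spectral energy above a cut-off frequency; the window length and cut-off are assumed values:

```python
import numpy as np

def oscillation_energy(force_window, dt, f_cut=3.0):
    """Fraction of the force signal's spectral energy above f_cut [Hz],
    computed with an FFT over a sliding window. A value near 1 suggests
    unstable oscillations, which a VA controller can counter by raising
    the virtual damping/inertia."""
    spectrum = np.abs(np.fft.rfft(force_window)) ** 2
    freqs = np.fft.rfftfreq(len(force_window), dt)
    total = spectrum[1:].sum()          # ignore the DC component
    if total == 0:
        return 0.0
    return float(spectrum[freqs > f_cut].sum() / total)

# A 10 Hz force oscillation sampled at 100 Hz is flagged almost entirely,
# whereas a slow 1 Hz guidance motion is not
t = np.arange(0, 1, 0.01)
fast = oscillation_energy(np.sin(2 * np.pi * 10 * t), 0.01)
slow = oscillation_energy(np.sin(2 * np.pi * 1 * t), 0.01)
```

A controller would then map this ratio to an increase of the virtual damping and inertia until the oscillation disappears.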
The robotic tasks used by these researchers during the development and the evaluation of their VA controller are discussed in the next subsection.

2.3. Accomplished Co-Manipulation Tasks with VA Control

During the development and the evaluation of the VA control, different tasks were proposed and accomplished by the researchers. These tasks are classified into two main categories as follows:
(1) The collaborative co-manipulation tasks, in which the human effort and oscillations should be reduced. These types of tasks are the main interest of this paper and are discussed in this subsection.
(2) The rehabilitation tasks, in which the robot should apply high force and assist the human, or in other cases leave the patient to act alone. These types of tasks are out of the scope of this paper.
The classifications of the collaborative co-manipulation tasks are presented in Figure 5.
Collaborative co-manipulation tasks, such as the pick and place task, point-to-point movement, the drawing task, and the manipulation of large objects, were proposed in the following research works. Duchaine and Gosselin [37] designed their VA control system for the collaborative pick and place task as well as the drawing task. For Lecours et al. [46], the drawing task and giving an impulse to the assisting device were the tasks accomplished to develop and evaluate the variable controller. In [45], Tsumugiwa et al. executed the point-to-point movement as the desired task for developing and evaluating their VA controller. The task used by Sidiropoulos et al. [40] to develop and evaluate their VA control system was the manipulation of large objects with high inertia.
The rotational movement of a joint of a minimally invasive surgical manipulator between two targets in a single direction was the task proposed by Z. Du et al. [43]. In [44], Dimeas and Aspragathos developed their VA control system for a point-to-point movement along a single direction of the Cartesian robot workspace. Their VA control was evaluated using different movements: short, medium, and long distances. Sharkawy et al. [6,38,49,50] developed and designed their VA control system for a point-to-point movement along a single direction of the Cartesian robot workspace, and their VA control was likewise evaluated using short, medium, and long distances. In [50], the VA control was evaluated by mounting a load or an object of 1 kg on the robot's end-effector to simulate the process of transferring an object from one place to another guided by the human. The motion was also along straight-line segments (short, medium, and long distances). Furthermore, the VA control was tested and investigated along different axes of motion and along straight-line segments. Wang et al. [42] developed their VA control to simulate an experiment in which the robot was operated to grind the prosthesis implantation plane. The selected task path was an N-shaped path covering the plane.
A rehabilitation task was proposed by Topini et al. [41]. They designed a hand exoskeleton system interfaced with the VA control for achieving virtual reality-based rehabilitation tasks. The tasks used in their work were free motion and grasping a virtual spherical object. However, rehabilitation tasks are out of the scope of this paper.
Although different tasks were used by the researchers, other realistic tasks and applications, such as curved and complex motions, are recommended for investigation. Furthermore, tasks in real (industrial, medical, agricultural, etc.) environments can be addressed.

2.4. Performance’s Comparison of VA Controllers in Co-Manipulation Tasks

In this section, the achieved performance of the developed VA controllers is compared. For this purpose, we concentrate on the number of subjects and the criteria used to evaluate the developed VA control systems as well as the effectiveness and the improvements obtained by the VA control. The main criteria used for the evaluation of the developed VA control included the following:
(1) The effort required to perform the task.
(2) The time needed to execute the task.
(3) The oscillations and the number of overshoots.
(4) The achieved accuracy.
(5) The accumulated jerk.
(6) The opposition of the robot to human forces.
The VA control system developed by Duchaine and Gosselin [37] was evaluated in terms of the time needed to accomplish the drawing task and the number of overshoots. Their VA control system was compared with a constant admittance controller with the help of six subjects. With their VA control, the task was achieved rapidly and with a smaller number of overshoots. In addition, the completion time was reduced by 18.23% compared with the constant admittance controller. However, the effort required to perform the task and the accuracy were not compared, and the results from a provided questionnaire were not included. The VA control by Lecours et al. [46] was compared with a low constant admittance controller and a high constant admittance controller using the task completion time and the maze overshoots as the main criteria, with the help of six subjects. When performing a drawing task with their VA controller, it was easy to perform accelerations and fine movements. The time needed to achieve the task was comparable to that of the low constant admittance controller and 20% lower than the time required by the high constant admittance controller. The overshoot distance of their VA controller was comparable to that of the high constant admittance controller and five times lower than that of the low constant admittance controller. In the impulse test, their VA controller achieved the high desired velocity and acceleration, similar to those obtained with the low constant admittance controller. However, their VA control was less able to perform fine movements, and the velocity decayed more rapidly because of the higher acceleration resulting from the VA control law. In both tasks (drawing and impulse), the required human effort was not evaluated, and subjective results from a questionnaire given to the participants were missing. In [41], Topini et al.
evaluated their developed VA control with the help of a single trained healthy subject. The results showed that their VA control performed promisingly while following the free motion of the user. In addition, it was well suited to rehabilitation applications because of its smooth behavior at low operating frequencies. However, statistics on the accuracy, required effort, and completion time were missing, and using only one subject to evaluate the system is not sufficient. The VA control developed by Tsumugiwa et al. [45] was compared with a conventional VA control and with an invariable admittance control with the participation of 10 subjects. These systems were compared in terms of the overshoots and the re-increase of the applied force along the positioning section. It was found that with their developed VA controller there was no re-increase in the applied force and no overshoot along the positioning section, and the cooperation was performed smoothly. An overshoot of the applied human force appeared at the end of the positioning section when the invariable admittance control was used, and a re-increase of the applied force was necessary at the beginning of the positioning section when the conventional VA control was used. The generalization ability and effectiveness of their VA controller were not tested using different movements. In addition, the comparison did not include the accuracy of each controller or the task completion time, and the percentage improvement of their developed VA control was not calculated. The questionnaire given to the subjects addressed the manipulability only; the perceived human effort and the feeling of oscillation were not included. The VA control developed by C. T. Landi et al. in [48] was evaluated using only a questionnaire.
A questionnaire was given to 26 subjects divided into two groups, the first with 12 users and the second with 14 users, for the usability evaluation of the system using the System Usability Scale (SUS) [51]. The SUS score was 81.66% for the first group and 82.88% for the second group; in general, this score is high.
Sidiropoulos et al. [40] evaluated their VA control with the help of 10 subjects and based on only two main criteria: the energy transmitted from human to robot, which represents the required human effort, and the energy transmitted from robot to human, which represents the opposition of the robot to the human's forces. Their results showed that their VA control reduced the human effort by approximately 33% to 46% compared with a high constant admittance controller. In addition, their VA controller was compared with the VA control presented in [52], where the virtual damping was tuned depending on the velocity norm. The results showed that their VA controller was more efficient than the velocity-norm-based VA control [52]; the latter reduced the human effort by 32%. Indications of the achieved accuracy and task completion time were missing, and subjective results from a questionnaire were not included.
In [43], the VA controller presented by Z. Du et al. was compared with three different systems, a low constant admittance controller, a high constant admittance controller, and a VA controller based on the torque/force, with the participation of eight subjects. The comparison was in terms of the accuracy, the energy/effort, and the accumulated jerk. Their VA controller achieved better accuracy than the low constant admittance controller; the mean of the maximum distance after the removal of the interaction torque was decreased by 90.3%. The applied human effort was reduced by 44.3% relative to the high constant admittance control. The smoothness of the cooperation was significantly improved by their variable controller compared with the torque-based VA controller, and the mean of the accumulated jerk per episode was reduced by 31.4% relative to the torque-based VA controller. However, the time needed to execute the desired movement was not compared. In the questionnaire given to the subjects, whose main criteria were the sense of being in control and the naturalness of the motion, their VA control system obtained the preferred results. The questionnaire gave no indication of the perceived human effort during the movement, the vibration and oscillations, or which system the participants preferred. In [44], Dimeas and Aspragathos developed a fuzzy inference system (FIS) for a VA controller. Their trained FIS was compared with the untrained, manually tuned FIS along different movements (short, medium, and long distances) with the help of 12 subjects. The comparison included the required human effort and the time required to complete the movement. For the short distance, a 1% reduction in effort was found with the trained FIS; this improvement increased to 7% for the medium distance and to 13% for the long distance.
The mean completion time with the trained FIS was 12% lower than with the untrained system. However, the oscillations and the accuracy were not included in the comparison. In addition, comparing a trained system with an untrained one is not very informative, since the trained one is naturally expected to perform better. A comparison with a constant/fixed admittance controller was also not included. The results from the questionnaire provided to the participants showed that the trained FIS was preferred over the untrained FIS. The perceived human effort during the movement, the accuracy, and the oscillations were not considered in the questionnaire.
In [49], Sharkawy et al. developed a VA controller based on a trained NN system. Their VA controller was compared with three constant admittance controllers (low, medium, and high). The comparison was carried out using different movements (short, medium, and long distances) with the help of 13 subjects. The main criteria for this comparison were the human effort required to move the robot, the time needed for the task, and the accuracy obtained at the target point. With their VA control, in the short distance, the required effort and the task time were reduced by 65.22% and 6.65%, respectively, relative to the high constant admittance controller, and the accuracy was increased by 5.30% relative to the low constant admittance controller. In the medium distance, the required human effort and the task time were reduced by 58.83% and 16.89%, respectively, with reference to the high constant admittance control, and the accuracy was increased by 4.031% with reference to the low constant admittance control. In the long distance, the human effort and the needed time were reduced by 63.63% and 15.184%, respectively, with reference to the high constant admittance control, whereas the accuracy was improved by 3.86% relative to the low constant admittance controller. The subjective results of the questionnaire given to the subjects showed that their VA control gave high performance, with low required effort and the highest accuracy, and it was the system preferred by the subjects. In [50], the developed VA controller was evaluated with the help of 10 subjects. Their VA controller reduced the required human effort and the task time by 58.58% and 23.86%, respectively, with reference to the high constant admittance control, in the short straight-line segment motion.
Furthermore, the accuracy was improved by 5.12% relative to the low constant admittance control. In the medium straight-line segment motion, the VA controller reduced the required human effort and the task time by 51.474% and 24.30%, respectively, relative to the high constant admittance control, and the accuracy was improved by 5.00% relative to the low constant admittance control. In the long straight-line segment motion, the VA controller reduced the human effort and the task time by 57.154% and 26.57%, respectively, relative to the high constant admittance control, and the accuracy was improved by 4.456% relative to the low constant admittance control. The generalization ability of their VA control was checked by mounting a load on the robot end-effector to simulate a co-manipulation task. In this case, relative to the high constant admittance control, the VA controller reduced the human effort and the task time by 43.235% and 19.76%, respectively, for the short straight-line segment motion, by 51.856% and 22.35% for the medium motion, and by 58.658% and 22.892% for the long motion. The accuracy achieved with their VA controller was improved by 9.678%, 8.576%, and 8.653% for the short, medium, and long distances, respectively, relative to the low constant admittance control. In [38], the VA control in which both the virtual damping and the virtual inertia were adjusted had the best performance compared with the VA controls in which only the virtual damping or only the virtual inertia was adjusted. The comparison was carried out with 10 subjects.
In [42], Wang et al. evaluated their VA control by comparing it with a constant admittance control, with the help of two subjects, using the operating force, the jerk, and the trajectory error (accuracy) as the main criteria. Their VA controller reduced the trajectory error by 51%, the operating force by 23%, and the operating jerk by 21%. Subjective questionnaire results were missing. In addition, only two subjects is too few for a fair evaluation.
In [47], the VA controller presented by Okunev et al. was compared, with the participation of 10 subjects, against four constant admittance controllers with high, medium, low, and lowest parameter values, respectively. The comparison was executed using a mobile robot with two 7-DoF anthropomorphic arms holding an aluminum bar. Only subjective results, from a questionnaire given to the participants, were presented; these concerned the heaviness and the oscillations experienced with each controller. Compared with the first constant admittance controller, their variable controller felt less heavy but more oscillatory. Compared with the second controller, the variable controller was at approximately the same level of heaviness, while the second controller was less oscillatory. Compared with the third constant admittance controller, their variable controller was less heavy and less oscillatory. The fourth constant admittance controller aborted during the experiments (i.e., the robot's built-in security system shut it down), and the adaptive controller was less oscillatory. However, measured results from these comparisons, which are more important than the subjective ones, were not included.
The numbers of subjects used by researchers to evaluate VA control can also be compared; this comparison is presented in Figure 6. As shown in Figure 6, the number of subjects ranges from 1 to 13, except for one study that used 26 subjects. Our recommendation, however, is to use a sufficiently large sample (e.g., more than 30 subjects) when evaluating VA control, for more realistic and statistically meaningful results.
From this discussion, it is difficult to compare the performance of all VA control systems quantitatively, because the evaluation criteria and the accomplished tasks differ from system to system. In addition, numerical results are missing for some VA control systems. Nevertheless, we present a figure comparing the performance of the most similar VA systems so that the reader can easily see the differences. These systems are as follows:
(1)
The VA control system based on inference of human intention [37],
(2)
The VA control system depending on transmitted power by human to robot [40],
(3)
The VA control system based on the velocity norm [52],
(4)
The neural network-based system to adjust the damping only [49],
(5)
The neural network-based system to adjust the inertia only [50], and
(6)
The VA control system depending on the trajectory’s prediction of the motion of a human hand [42].
This comparison is presented in Figure 7. The percentage improvement in the required human effort and task time of all systems is relative to high constant admittance control; the percentage improvement in the achieved accuracy is relative to low constant admittance control.
The good results obtained in these previous works motivate further investigation and the development of new variable admittance control methods. In addition, as is clear from Figure 7, neural network-based approaches are promising for further improving the performance of VA control systems. This warrants further investigation using different types of neural networks as well as deep learning-based techniques.
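For illustration, the common core of the VA controllers reviewed above is an admittance law whose virtual parameters change online. The following is a minimal sketch for a single degree of freedom, in the spirit of the velocity-norm rule of [52]: high damping at low speed for precise positioning, low damping at high speed for easy transfer. All gains, the inertia value, and the function names here are illustrative assumptions, not the parameters of any cited controller.

```python
import numpy as np

def variable_damping(v, c_min=5.0, c_max=40.0, v_ref=0.3):
    """Velocity-norm rule (illustrative): damping decreases linearly from
    c_max toward c_min as the end-effector speed approaches v_ref."""
    ratio = min(np.linalg.norm(v) / v_ref, 1.0)
    return c_max - (c_max - c_min) * ratio

def admittance_step(v, f_h, m=2.0, dt=0.002):
    """One explicit Euler step of the admittance law m*dv/dt + c(v)*v = f_h,
    where f_h is the force applied by the human."""
    c = variable_damping(v)
    a = (f_h - c * v) / m
    return v + a * dt
```

Repeatedly calling `admittance_step` with the measured human force yields the velocity reference sent to the robot's low-level controller; the constant-admittance baselines discussed above correspond to replacing `variable_damping` with a fixed value.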

3. Safety of HRI

Safety is crucial during collaboration between a robot and a human sharing the same workspace, because the human's closeness to the robot can lead to injuries. Therefore, a safety method or technique must be built into the robotic system. Safety is an extensive and crucial research area. Gualtieri et al. [53] classified the recent research works and themes most closely related to the safety of HRI. These classifications were obtained by analyzing all related works and then categorizing them; the resulting categories are shown in Figure 8.
Safety considerations and requirements must be followed during HRI. The requirements for integrating industrial robotic systems are provided in the ISO standards presented in ref. [54,55], where hazardous situations are described and the requirements for eliminating or reducing the associated risks are given. ISO/TS 15066 [56] specifies the safety requirements for cooperation between humans and industrial robotic systems, and it provides guidance on the operation of collaborative industrial robots based on [54,55]; quasi-static and transient contact limits are also presented. Yamada et al. [57] investigated safety and determined the human pain tolerance limit. Safety must also be taken into account in the design of the HRI control system. Systems developed for safe robot collaboration with humans are classified into collision avoidance techniques and collision detection techniques. Collision avoidance techniques rely on sensors to monitor the environment efficiently; such methods used depth and vision sensors, as presented in ref. [58,59,60], and color sensors, as in ref. [61,62]. These methods are efficient when there are no occlusions or problems in detecting the human (or the obstacle) and the robot. When the sensor is very far from or very close to the workspace, multiple sensors must be placed to monitor the workspace from multiple directions. Researchers have considered the number of sensors in two cases: single-sensor variants, as presented in ref. [63,64], and multi-sensor techniques, as proposed in ref. [65,66]. These approaches require modifications to the robot body for installing the sensors. NNs were also used for collision avoidance, as in [67]. A review of collision avoidance techniques is outside the scope of this paper.
The main interest of this paper is to review collision detection methods, which are classified in Figure 9. These classes are reviewed in depth in the following subsections.

3.1. Collision Detection Techniques

To improve the safety of HRI systems, collision detection and reaction approaches are necessary in case the collision avoidance level fails. Researchers have developed various techniques for detecting and identifying collisions, classified as either model-based or data-based. These methods are presented in the following two subsections.

3.1.1. Model-Based Methods

In the model-based approaches, researchers have considered disturbance observers, impedance and admittance control, and force/torque sensors. Disturbance observers were used in the following studies. In Haddadin et al.'s approach [68], two collision detection systems as well as five reaction strategies were implemented on the LWR robot for cooperation and interaction tasks. The first collision detection system relied on the generalized momentum, and the second compared the measured joint torque with the torque estimated from the robot model. Cho et al. [69] developed a collision detection technique together with three reaction strategies: (1) the oscillation mode, (2) the torque-free motion mode, and (3) the forced stop mode, which were used in different collision scenarios. Their collision detection system relied on the generalized momentum and the joint torque sensor signals, and they developed a 7-DOF service robotic manipulator for executing the experiments. Jung et al. [70] presented a set of bandwidth filters in a disturbance observer for detecting collisions between human and robot; the frequency characteristics of each part of the manipulator and of the disturbances were investigated and used to detect the collisions. Pengfei Cao et al. [71] presented a model-based sensorless collision detection scheme for HRI. Their method relied on the torque residual, defined as the difference between the nominal torque and the actual one, using motor-side information.
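The generalized-momentum observer behind detectors such as [68,69] can be sketched for a single joint as follows. The residual is built from the joint velocity and the commanded motor torque only (no acceleration measurement) and converges to the external collision torque with first-order dynamics set by the gain. The inertia value, gain, and signal names are illustrative assumptions for a 1-DOF joint, not the parameters of any cited implementation.

```python
import numpy as np

def momentum_observer(tau_motor, qd, m=1.5, K=50.0, dt=0.001):
    """Discrete generalized-momentum observer for one joint.
    r tracks the external (collision) torque: r_dot = K*(tau_ext - r).
    tau_motor: commanded/measured motor torque samples.
    qd: joint velocity samples. m: joint inertia. K: observer gain."""
    n = len(tau_motor)
    r = np.zeros(n)
    integral = 0.0            # running integral of (tau_motor + r)
    p0 = m * qd[0]            # initial generalized momentum
    for k in range(1, n):
        integral += (tau_motor[k - 1] + r[k - 1]) * dt
        r[k] = K * (m * qd[k] - p0 - integral)
    return r
```

A collision is then declared whenever `|r|` exceeds a threshold of the kind discussed in Section 3.2; larger `K` gives faster detection at the price of amplifying model and measurement noise.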
Impedance and admittance control were also used for collision detection. In Morinaga and Kosuge's approach [72], a nonlinear impedance controller was developed with no need for any external sensor. Their approach was based on the torque error, defined as the difference between the actual input manipulator torque and the reference input torque determined from the dynamic manipulator parameters. In [73], Kim proposed a sensorless admittance control-based method for collision detection and reaction for collaborative robots. The collision observer relied on the forced response of a mechanical system, and a low-pass filter was combined with a high-pass filter in a unified fashion to detect collisions correctly.
The use of external force/torque sensors as a collision detection method was presented by Shujun Lu et al. [74]. In their approach, two six-axis force/torque sensors were used, one at the base and one at the wrist. Their method was investigated and tested using 1-DOF and 2-DOF manipulators.
The main problem of such model-based approaches is that they need an explicit dynamic robot model, which is often unavailable. In addition, there are uncertainties in the identification of the dynamic parameters.

3.1.2. Data-Based Methods

Data-based techniques have been developed for detecting and identifying human–robot collisions. These methods use data to train a system, and the trained system is then used to estimate and detect collisions. Systems based on fuzzy logic, time series, support vector machines (SVM), and NNs have been considered. In Dimeas et al.'s work [75,76], two methods were developed, the first based on fuzzy identification and the second on time series; both were investigated using a 1-DOF manipulator and a 2-DOF planar manipulator. Their fuzzy systems were developed for estimating and then detecting collisions, and the inputs were the joint position errors, the measured joint torques, and the actual joint velocities. One fuzzy system was designed and trained for each joint. Their fuzzy-based method was able to detect collisions quickly and accurately using lower threshold values. Their time series-based method estimated the collision torque using only the measured joint velocity signals; the time required to detect a collision was also low, but its threshold was higher compared with the fuzzy system. Both systems were developed neglecting the dynamic coupling that occurs during joint motion. Furthermore, the generalization ability and the effectiveness of their methods were not investigated outside the training motion range or under different conditions.
Franzel et al. [77] designed an approach using knowledge of the demonstrated task, together with the offset resulting from human interaction, to distinguish contact events from normal execution by means of a contact event detector. For this purpose, they implemented a contact type classifier using an SVM trained with the specified events. In [78], Cioffi et al. used an SVM to classify contact situations between human and robot based on time series of the joint load torque signals. The contact situations were classified into two cases: intended interactions and accidental collisions. A coarse localization was also carried out to identify whether the contact occurred on the upper or the lower robot arm.
In [74], Shujun Lu and his group developed a collision detection method based on training a NN. Their method required force/torque sensors at both the base and the wrist, and it was investigated using 1-DOF and 2-DOF manipulators. The inputs of the designed NN were a short history of the joint angles and the readings of the wrist and base force/torque sensors, and the outputs were the collision forces and the contact positions. The results were promising and proved the validity of the method; however, two external sensors were needed, and its generalization ability and effectiveness were not investigated outside the training motion range or under different conditions. Briquet-Kerestedjian et al. [79] implemented a supervised learning, NN-based approach to differentiate unintended contact situations (labeled collisions) from foreseen ones (labeled interactions). In addition, their approach sought to infer whether the upper or the lower robot arm collided. The classification problem neglects the amplitude of the external (collision) force, so the classifier cannot be used as a practical system for estimating this force. In [80,81,82,83], Sharkawy and his group designed multilayer feedforward NNs (MLFFNN) to detect collisions between robots and humans and to identify the collided link. The generalization ability and the effectiveness of their method were investigated under different conditions; the method succeeded with a very high percentage, and the results were promising. The MLFFNN presented in [80,81,83] can be applied only to collaborative robots, where the joint position and joint torque sensor signals are available, whereas the MLFFNN presented in [82] can be applied to any robot, since it relies only on the joint position sensor signals of the manipulator. In [84], three types of NNs were investigated and compared for human–robot collision detection.
These types were as follows: (1) multilayer feedforward, (2) cascaded forward, and (3) recurrent NNs. The designed NNs considered the manipulator joint dynamics and used only the signals of the manipulator's intrinsic joint position sensors, so that they can be applied to any robot. The comparison between the three designed NNs was both quantitative and qualitative.
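To make the position-only idea concrete, the following sketch trains a small multilayer feedforward network to flag collision-like deviations in windows of joint-position increments, in the spirit of the MLFFNN that uses only joint position sensors. The data are synthetic, and the window length, network size, learning rate, and the way a collision is injected are all illustrative assumptions, not the cited architecture or datasets.

```python
import numpy as np

rng = np.random.default_rng(0)

def position_window(collision):
    """A short window of joint-position samples (synthetic). A collision is
    simulated as a sudden position deviation midway through the window."""
    t0 = rng.uniform(0.0, 2 * np.pi)
    q = np.sin(t0 + 0.05 * np.arange(20)) + 0.01 * rng.standard_normal(20)
    if collision:
        q[10:] += rng.uniform(0.2, 0.6)    # impact-like position jump
    return np.diff(q)                       # position increments as features

X = np.array([position_window(i % 2 == 1) for i in range(400)])
y = np.array([i % 2 for i in range(400)], dtype=float)

# One hidden layer (tanh) with a sigmoid output, trained by plain
# gradient descent on the cross-entropy loss.
W1 = 0.5 * rng.standard_normal((19, 8)); b1 = np.zeros(8)
W2 = 0.5 * rng.standard_normal(8);       b2 = 0.0

def forward(X):
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return p, h

lr = 0.5
for _ in range(2000):
    p, h = forward(X)
    g = (p - y) / len(X)                    # gradient at the output pre-activation
    gW2, gb2 = h.T @ g, g.sum()
    gh = np.outer(g, W2) * (1.0 - h ** 2)   # backprop through the tanh layer
    gW1, gb1 = X.T @ gh, gh.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
```

The same structure, fed with real joint-position histories and labeled collision data, is what allows a purely data-based detector to run on conventional robots without joint torque sensors.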

3.2. Collision Threshold

As discussed above, safety is a crucial factor in HRI; therefore, collisions should be detected correctly, and the collided link should be identified, to achieve safe robot collaboration with humans.
In the HRI literature, the collision threshold has been determined considering the following factors: first, human safety, and second, minimizing false collision detections so as to achieve smooth and continuous HRI. The contact force and the joint torque were the basis for determining the collision threshold. Shujun Lu et al. [74] considered the contact force: the collision threshold was defined as a value lower than the contact force corresponding to the unified human pain tolerance limit, which was studied in [57]. Haddadin et al. [68] considered the joint torque: the threshold was defined as 10% of the maximum nominal robot torque. In [73], a time-varying threshold was proposed. The collision threshold depended on the modeling error and was updated only when the collision state was OFF, i.e., during motion without collisions, so as to monitor the pure modeling error; otherwise, the collision threshold increased with the external joint torque generated by the collisions. This definition was based on the work presented in [85].
In Dimeas et al.'s study [76], the threshold was determined as the maximum of the approximation error, i.e., the difference between the external joint torque obtained from the external force/torque sensor and the torque estimated by their fuzzy system, during motion without any collision. Morinaga and Kosuge [72] defined their collision threshold based on normal distribution characteristics. In [80,81], the collision threshold was determined as the maximum of the absolute value of the approximation error, i.e., the difference between the external joint torque obtained from the KRC and the external torque estimated by the designed NN, during motion without any collision.
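The max-of-approximation-error rule used in [76,80,81] can be sketched as follows: record the residual between measured and estimated torque over a collision-free run, take the maximum of its absolute value as the threshold, and flag samples exceeding it. The safety margin factor and function names are illustrative assumptions; the cited works use the raw maximum.

```python
import numpy as np

def collision_threshold(tau_measured, tau_estimated, margin=1.1):
    """Threshold = max |approximation error| over a collision-free run,
    inflated by a small margin (assumption) against false positives."""
    err = np.abs(np.asarray(tau_measured) - np.asarray(tau_estimated))
    return margin * float(err.max())

def collision_flags(tau_measured, tau_estimated, threshold):
    """Flag every sample whose approximation error exceeds the threshold."""
    return np.abs(np.asarray(tau_measured) - np.asarray(tau_estimated)) > threshold
```

A larger margin suppresses false positives at the cost of missing weak contacts, which is exactly the trade-off discussed in this subsection.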
Considering this discussion, the determination of the collision threshold needs deeper investigation and study.

3.3. Performance Measure and Effectiveness Comparison of the Safety Methods

In this subsection, the effectiveness and the performance measure of the collision detection methods are discussed and compared.
The model-based techniques, such as those presented in [68,69,70,71,72,73,86], require an explicit robot dynamic model. This model is unavailable for most robots, and it is subject to uncertainties. Most of these techniques also depend on joint torque sensor signals, which most industrial robotic manipulators lack; therefore, these methods are effective exclusively with collaborative robots. The effectiveness and the performance measure (%) of such methods were not reported by the researchers.
Our main concern in this subsection is to compare the effectiveness and performance measures of the data-based techniques. In [76], a collision detection system based on fuzzy logic was applied to a 2-DOF robotic manipulator. The overall efficiency of this fuzzy system in detecting collisions was 72%; the false negative collisions amounted to 28% and the false positive collisions to zero. In [76], a time series-based approach was also proposed for detecting robot collisions with the human. Its overall efficiency in detecting collisions was 70%, with 30% false negative collisions and 11% false positive collisions.
In [77], the SVM-based contact classifier achieved an effectiveness of 92.5% with trained users, which decreased to 84.4% with novel (untrained) users.
In [82], two NN architectures were designed and trained for detecting collisions between human and robot, and both were applied to a 2-DOF manipulator. The first architecture, MLFFNN-1, was developed using the signals of both the intrinsic joint position and joint torque sensors of the robotic manipulator; the second, MLFFNN-2, relied only on the intrinsic joint position sensors and thus could be applied to any industrial or conventional robot. The effectiveness of MLFFNN-1 was 82.52%, with 1.136% false negative collisions and 16% false positive collisions. The effectiveness of MLFFNN-2 was 85.73%, with 7.95% false negative collisions and 6.82% false positive collisions. In [83], a multilayer feedforward NN (MLFFNN-1) was implemented and trained for human–robot collision detection; it was applied to a 3-DOF manipulator and depended on both the intrinsic joint position and joint torque sensors of the robotic manipulator. The effectiveness of the trained NN was 86.6%, with 4.7% false negative collisions and 8.7% false positive collisions. In [84], three types of trained NNs were compared for human–robot collision detection: (1) multilayer feedforward (MLFFNN-1 and MLFFNN-2), (2) cascaded forward (CFNN), and (3) recurrent (RNN). These NNs were applied to a 1-DOF manipulator.
The effectiveness of MLFFNN-1 was 76%, with 16% false negative collisions and 8% false positive collisions. The effectiveness of MLFFNN-2 was 80%, with 16% false negative and 4% false positive collisions. The effectiveness of the trained CFNN was 84%, with 16% false negative collisions and zero false positive collisions. The effectiveness of the trained RNN was 80%, with 20% false negative collisions and zero false positive collisions.
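The three figures reported throughout this comparison (effectiveness, false negative percentage, false positive percentage) can be computed from labeled detector output as follows. This is one plausible per-sample computation; the exact normalization (per sample versus per collision event) may differ between the cited papers, so the function below is an illustrative assumption rather than the papers' exact procedure.

```python
def detection_metrics(y_true, y_pred):
    """Effectiveness (% correct) plus false negative and false positive
    percentages over a labeled run; 1 = collision, 0 = no collision."""
    n = len(y_true)
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    false_neg = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # missed collisions
    false_pos = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false alarms
    return 100.0 * correct / n, 100.0 * false_neg / n, 100.0 * false_pos / n
```

For a safety system, the false negative figure is the critical one, since a missed collision is far more harmful than a false alarm that merely interrupts the task.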
In Lu et al.'s approach [74], a NN was also developed to detect collisions with the human. The main concern with their approach was the need for two external force sensors, one at the base and one at the wrist, which increased the cost. The generalization ability of their trained NN and its performance measure (%) were not reported.
In [79], an NN-based classifier was proposed to differentiate unintended contact situations from foreseen ones. With this approach, participants had to adapt to the responsiveness of the classifier to obtain better results: the overall success rate improved from 70–72% for participants without prior experience with the classifier to 85–87% after adaptation.
Based on the above discussion, Figure 10 compares some of the data-based collision detection methods using the effectiveness and the numbers of false positive and false negative collisions as the main criteria.
From the comparison presented in this subsection, we can deduce that data-based methods are promising for improving the safety of HRI. Furthermore, NNs are excellent methods with high effectiveness and performance measures in detecting collisions between the human and the robot during collaboration, considering the properties presented in ref. [87]. However, these methods should be further investigated using 7-DOF robots, and other types of NNs and deep learning should also be explored. Classifiers based on SVMs or NNs have high effectiveness with trained users, but this effectiveness decreases with untrained/novel users; this point should also be considered in future work on classifiers.

4. Perspectives

This section presents some perspectives, necessities, influences, and expectations regarding the safety and control of HRI for future robotic systems. From the discussion in the previous two sections, it is clear that HRI research is wide-ranging and comprehensive. However, some aspects need to be investigated deeply and bridged across the fragmented research works presented.
Regarding HRI control, several issues should be taken into consideration. First, as is clear from the results presented in Section 2.4, developing VA controllers for robotic manipulators based on soft computing techniques is promising and needed for improving HRI performance; further investigation using different types of neural networks and deep learning-based techniques is recommended. Second, as is clear from the literature, only one paper dealt with adjusting only the virtual inertia parameter of the admittance control. Its results proved that adjusting the inertia alone improved the stability of the system, minimized oscillations, and generally improved HRI performance; therefore, further investigation of adjusting only the virtual inertia is required. Third, new VA controllers should avoid large computational loads and complexity, and they should not require expert knowledge for intuitive cooperation. Fourth, the criteria introduced in the current paper for evaluating VA controllers should be taken into consideration when developing new variable controllers; furthermore, a new criterion can be considered that analyzes the oscillations during robot motion. Finally, more realistic tasks should be investigated during the development and evaluation of new VA controllers, including curved and complex motions and tasks in real (industrial, medical, agricultural, etc.) environments. Furthermore, more than 30 subjects should be used when evaluating a VA controller, for more realistic and justified statistics.
Regarding HRI safety, several issues should also be considered and deeply investigated. The first is the effectiveness and performance measure of current approaches in detecting collisions (their magnitudes, directions, positions, etc.) with robots. A human can produce an essentially unlimited variety of collisions with the robotic manipulator; effective detection of these collisions is a very important point in HRI and should be thoroughly investigated. This could help expand the current research from robotic manufacturing/factory applications to other robotic sectors, which is a necessity for the robotics community. This issue is related to the generalization ability of a method under different cases and conditions; determining the collided link should also be considered. The second issue is the determination of the collision threshold, which needs further investigation and deeper study considering the literature presented in the current paper. The third issue is that most current approaches are designed based on joint torque signals, and fewer approaches use other conventional signals such as the joint position or current signals. As a result, many capable HRI systems apply only to collaborative robots, which are more expensive, and fewer apply to conventional industrial robots. The fourth issue is that a single system covering all joints of the robot is preferable to one system per joint, as this reduces the effort, time, complexity, and computation involved. The fifth issue is that classifiers, whether based on SVMs or NNs, have high effectiveness with trained users but decreasing effectiveness with untrained/novel users; this needs further investigation and study.
The last issue is that, as is clear from the results presented in Section 3.3, machine learning-based approaches, particularly NNs, are promising for improving HRI and effective in detecting collisions. Therefore, further investigation of these approaches considering 7-DOF robots is recommended, and other types of NNs and deep learning can be investigated.
For methods that combine both safety and control, there is a gap where cutting-edge research can be applied. Advanced HRI systems can be obtained when this merging is implemented based on AI and machine learning algorithms. For such systems, the effectiveness and performance measure under extensive and varied conditions, as well as the application of these methods to different robots, can be the key factors of the expected research work.

5. Conclusions

In the current paper, a review of VA control systems for co-manipulation tasks, safety methods, and perspectives for HRI is presented. For VA control, the different development techniques, the accomplished robotic tasks, and the performance comparison are discussed in depth. The results of this review recommend using soft computing techniques, as they are promising for improving HRI performance. In addition, more realistic tasks and larger numbers of subjects should be considered when evaluating VA controllers. For HRI safety, the collision detection methods (model-based and data-based), the determination of the collision threshold, and the comparison of the effectiveness of each method are reviewed. From this review, we deduced that the effectiveness of collision detection methods under different conditions should be considered in new approaches, and that the collision threshold determination needs more investigation. In addition, new approaches should be applicable to any robot, using only conventional signals such as the joint position or current signals. Finally, some perspectives and expectations on the safety and control of HRI are presented and discussed.

Author Contributions

Most of the presented work is done by A.-N.S. Conceptualization, A.-N.S.; methodology, A.-N.S.; formal analysis, A.-N.S.; investigation, A.-N.S. and P.N.K.; resources, A.-N.S.; data curation, A.-N.S.; writing—original draft preparation, A.-N.S. and P.N.K.; writing—review and editing, A.-N.S. and P.N.K.; visualization, A.-N.S. and P.N.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sharkawy, A.-N. Human-Robot Interaction: Applications. In Proceedings of the 1st IFSA Winter Conference on Automation, Robotics & Communications for Industry 4.0 (ARCI’ 2021); Yurish, S.Y., Ed.; International Frequency Sensor Association (IFSA) Publishing, S. L.: Chamonix-Mont-Blanc, France, 2021; pp. 98–103. [Google Scholar]
  2. Sharkawy, A.-N. A Survey on Applications of Human-Robot Interaction. Sens. Transducers 2021, 251, 19–27. [Google Scholar]
  3. Kruger, J.; Lien, T.K.; Verl, A. Cooperation of human and machines in assembly lines. CIRP Ann.-Manuf. Technol. 2009, 58, 628–646. [Google Scholar] [CrossRef]
  4. Liu, C.; Tomizuka, M. Algorithmic Safety Measures for Intelligent Industrial Co-Robots. In Proceedings of the IEEE International Conference on Robotics and Automation 2016, Stockholm, Sweden, 16–21 May 2016; pp. 3095–3102. [Google Scholar]
  5. Sharkawy, A.N.; Papakonstantinou, C.; Papakostopoulos, V.; Moulianitis, V.C.; Aspragathos, N. Task Location for High Performance Human-Robot Collaboration. J. Intell. Robot. Syst. Theory Appl. 2020, 100, 183–202. [Google Scholar] [CrossRef]
  6. Sharkawy, A.-N. Intelligent Control and Impedance Adjustment for Efficient Human-Robot Cooperation. Ph.D. Thesis, University of Patras, Patras, Greece, 2020. [Google Scholar] [CrossRef]
  7. Thomas, C.; Matthias, B.; Kuhlenkötter, B. Human—Robot Collaboration—New Applications in Industrial Robotics. In Proceedings of the International Conference in Competitive Manufacturing 2016 (COMA’16), Stellenbosch University, Stellenbosch, South Africa, 27–29 January 2016; pp. 1–7. [Google Scholar]
  8. Billard, A.; Robins, B.; Nadel, J.; Dautenhahn, K. Building Robota, a Mini-Humanoid Robot for the Rehabilitation of Children With Autism. Assist. Technol. 2007, 19, 37–49. [Google Scholar] [CrossRef] [Green Version]
  9. Robins, B.; Dickerson, P.; Stribling, P.; Dautenhahn, K. Robot-mediated joint attention in children with autism: A case study in robot-human interaction. Interact. Stud. 2004, 5, 161–198. [Google Scholar] [CrossRef]
  10. Werry, I.; Dautenhahn, K.; Ogden, B.; Harwin, W. Can Social Interaction Skills Be Taught by a Social Agent? The Role of a Robotic Mediator in Autism Therapy. In Cognitive Technology: Instruments of Mind. CT 2001. Lecture Notes in Computer Science; Beynon, M., Nehaniv, C.L., Dautenhahn, K., Eds.; Springer: Berlin/Heidelberg, Germany, 2001; ISBN 978-3-540-44617-0. [Google Scholar]
  11. Lum, P.S.; Burgar, C.G.; Shor, P.C.; Majmundar, M.; Van der Loos, M. Robot-Assisted Movement Training Compared With Conventional Therapy Techniques for the Rehabilitation of Upper-Limb Motor Function After Stroke. Arch. Phys. Med. Rehabil. 2002, 83, 952–959. [Google Scholar] [CrossRef] [Green Version]
  12. COVID-19 Test Robot as a Tireless Colleague in the Fight against the Virus. Available online: https://www.kuka.com/en-de/press/news/2020/06/robot-helps-with-coronavirus-tests (accessed on 24 June 2020).
  13. Vasconez, J.P.; Kantor, G.A.; Auat Cheein, F.A. Human—Robot interaction in agriculture: A survey and current challenges. Biosyst. Eng. 2019, 179, 35–48. [Google Scholar] [CrossRef]
  14. Baxter, P.; Cielniak, G.; Hanheide, M.; From, P. Safe Human-Robot Interaction in Agriculture. In Proceedings of the HRI’18 Companion, Session; Late-Breaking Reports, Chicago, IL, USA, 5–8 March 2018. [Google Scholar]
  15. Smart Robot Installed Inside Greenhouse Care. Available online: https://www.shutterstock.com/image-photo/smart-robot-installed-inside-greenhouse-care-765510412 (accessed on 17 June 2022).
  16. Lyon, N. Robot Turns Its Eye to Weed Recognition at Narrabri. Available online: https://www.graincentral.com/ag-tech/drones-and-automated-vehicles/robot-turns-its-eye-to-weed-recognition-at-narrabri/ (accessed on 19 October 2018).
  17. Bergerman, M.; Maeta, S.M.; Zhang, J.; Freitas, G.M.; Hamner, B.; Singh, S.; Kantor, G. Robot farmers: Autonomous orchard vehicles help tree fruit production. IEEE Robot. Autom. Mag. 2015, 22, 54–63. [Google Scholar] [CrossRef]
  18. Freitas, G.; Zhang, J.; Hamner, B.; Bergerman, M.; Kantor, G. A low-cost, practical localization system for agricultural vehicles. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2012; Volume 7508, pp. 365–375. ISBN 9783642335020. [Google Scholar]
  19. Cooper, M.; Keating, D.; Harwin, W.; Dautenhahn, K. Robots in the classroom—Tools for accessible education. In Assistive Technology on the Threshold of the New Millennium, Assistive Technology Research Series; Buhler, C., Knops, H., Eds.; ISO Press: Düsseldorf, Germany, 1999; pp. 448–452. [Google Scholar]
  20. Han, J.; Jo, M.; Park, S.; Kim, S. The Educational Use of Home Robots for Children. In Proceedings of the ROMAN 2005. IEEE International Workshop on Robot and Human Interactive Communication, Nashville, TN, USA, 13–15 August 2005; pp. 378–383. [Google Scholar]
  21. How Robotics Is Changing the Mining Industry. Available online: https://eos.org/features/underground-robots-how-robotics-is-changing-the-mining-industry (accessed on 13 May 2019).
  22. Bandoim, L. Grocery Retail Lessons from the Coronavirus Outbreak for the Robotic Future. Available online: https://www.forbes.com/sites/lanabandoim/2020/04/14/grocery-retail-lessons-from-the-coronavirus-outbreak-for-the-robotic-future/?sh=5c0dfe1b15d1 (accessed on 14 April 2020).
  23. Dautenhahn, K. Methodology & Themes of Human-Robot Interaction: A Growing Research Field. Int. J. Adv. Robot. Syst. 2007, 4, 103–108. [Google Scholar]
  24. Moniz, A.B.; Krings, B. Robots Working with Humans or Humans Working with Robots ? Searching for Social Dimensions in New Human-Robot Interaction in Industry. Societies 2016, 6, 23. [Google Scholar] [CrossRef] [Green Version]
  25. De Santis, A.; Siciliano, B.; De Luca, A.; Bicchi, A. An atlas of physical human—Robot interaction. Mech. Mach. Theory 2008, 43, 253–270. [Google Scholar] [CrossRef] [Green Version]
  26. Khatib, O.; Yokoi, K.; Brock, O.; Chang, K.; Casal, A. Robots in Human Environments: Basic Autonomous Capabilities. Int. J. Rob. Res. 1999, 18, 684–696. [Google Scholar] [CrossRef]
  27. Song, P.; Yu, Y.; Zhang, X. A Tutorial Survey and Comparison of Impedance Control on Robotic Manipulation. Robotica 2019, 37, 801–836. [Google Scholar] [CrossRef]
  28. Hogan, N. Impedance control: An approach to manipulation: Part I theory; Part II implementation; Part III applications. J. Dyn. Syst. Meas. Control 1985, 107, 1–24. [Google Scholar] [CrossRef]
  29. Ge, S.S.; Li, Y.; Wang, C. Impedance adaptation for optimal robot–environment interaction. Int. J. Control 2014, 87, 249–263. [Google Scholar]
  30. Ott, C.; Mukherjee, R.; Nakamura, Y. Unified impedance and admittance control. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 3–7 May 2010; pp. 554–561. [Google Scholar]
  31. Song, P.; Yu, Y.; Zhang, X. Impedance control of robots: An overview. In Proceedings of the 2017 2nd International Conference on Cybernetics, Robotics and Control (CRC), Chengdu, China, 21–23 July 2017; pp. 51–55. [Google Scholar]
  32. Dimeas, F. Development of Control Systems for Human-Robot Collaboration in Object Co-Manipulation. Ph.D. Thesis, University of Patras, Patras, Greece, 2017. [Google Scholar]
  33. Newman, W.S.; Zhang, Y. Stable interaction control and coulomb friction compensation using natural admittance control. J. Robot. Syst. 1994, 1, 3–11. [Google Scholar] [CrossRef]
  34. Surdilovic, D. Contact Stability Issues in Position Based Impedance Control: Theory and Experiments. In Proceedings of the 1996 IEEE International Conference on Robotics and Automation, Minneapolis, MN, USA, 22–28 April 1996; pp. 1675–1680. [Google Scholar]
  35. Adams, R.J.; Hannaford, B. Stable Haptic Interaction with Virtual Environments. IEEE Trans. Robot. Autom. 1999, 15, 465–474. [Google Scholar] [CrossRef] [Green Version]
  36. Ott, C. Cartesian Impedance Control of Redundant and Flexible-Joint Robots; Siciliano, B., Khatib, O., Groen, F., Eds.; Springer Tracts in Advanced Robotics; Springer: Berlin/Heidelberg, Germany, 2008; Volume 49, pp. 1–192. ISBN 9783540692539. [Google Scholar]
  37. Duchaine, V.; Gosselin, M. General Model of Human-Robot Cooperation Using a Novel Velocity Based Variable Impedance Control. In Proceedings of the Second Joint EuroHaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (WHC’07), Tsukuba, Japan, 22–24 March 2007; pp. 446–451. [Google Scholar]
  38. Sharkawy, A.-N.; Koustoumpardis, P.N.; Aspragathos, N. A recurrent neural network for variable admittance control in human—Robot cooperation: Simultaneously and online adjustment of the virtual damping and Inertia parameters. Int. J. Intell. Robot. Appl. 2020, 4, 441–464. [Google Scholar] [CrossRef]
  39. Yang, C.; Peng, G.; Li, Y.; Cui, R.; Cheng, L.; Li, Z. Neural networks enhanced adaptive admittance control of optimized robot-environment interaction. IEEE Trans. Cybern. 2019, 49, 2568–2579. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  40. Sidiropoulos, A.; Kastritsi, T.; Papageorgiou, D.; Doulgeri, Z. A variable admittance controller for human-robot manipulation of large inertia objects. In Proceedings of the 2021 30th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2021, Vancouver, BC, Canada, 8–12 August 2021; pp. 509–514. [Google Scholar]
  41. Topini, A.; Sansom, W.; Secciani, N.; Bartalucci, L.; Ridolfi, A.; Allotta, B. Variable Admittance Control of a Hand Exoskeleton for Virtual Reality-Based Rehabilitation Tasks. Front. Neurorobot. 2022, 15, 1–18. [Google Scholar] [CrossRef] [PubMed]
  42. Wang, Y.; Yang, Y.; Zhao, B.; Qi, X.; Hu, Y.; Li, B.; Sun, L.; Zhang, L.; Meng, M.Q.H. Variable admittance control based on trajectory prediction of human hand motion for physical human-robot interaction. Appl. Sci. 2021, 11, 5651. [Google Scholar] [CrossRef]
  43. Du, Z.; Wang, W.; Yan, Z.; Dong, W.; Wang, W. Variable Admittance Control Based on Fuzzy Reinforcement Learning for Minimally Invasive Surgery Manipulator. Sensors 2017, 17, 844. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  44. Dimeas, F.; Aspragathos, N. Fuzzy Learning Variable Admittance Control for Human-Robot Cooperation. In Proceedings of the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014), Chicago, IL, USA, 14–18 September 2014; pp. 4770–4775. [Google Scholar]
  45. Tsumugiwa, T.; Yokogawa, R.; Hara, K. Variable Impedance Control with Regard to Working Process for Man-Machine Cooperation-Work System. In Proceedings of the 2001 IEEE/RSJ International Conference on Intelligent Robots and Systems, Maui, HI, USA, 29 October–3 November 2001; pp. 1564–1569. [Google Scholar]
  46. Lecours, A.; Mayer-st-onge, B.; Gosselin, C. Variable admittance control of a four-degree-of-freedom intelligent assist device. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation, Saint Paul, MN, USA, 14–18 May 2012; pp. 3903–3908. [Google Scholar]
  47. Okunev, V.; Nierhoff, T.; Hirche, S. Human-preference-based Control Design: Adaptive Robot Admittance Control for Physical Human-Robot Interaction. In Proceedings of the 2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, 9–13 September 2012; pp. 443–448. [Google Scholar]
  48. Landi, C.T.; Ferraguti, F.; Sabattini, L.; Secchi, C.; Bonf, M.; Fantuzzi, C. Variable Admittance Control Preventing Undesired Oscillating Behaviors in Physical Human-Robot Interaction. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 3611–3616. [Google Scholar]
  49. Sharkawy, A.-N.; Koustoumpardis, P.N.; Aspragathos, N. Variable Admittance Control for Human—Robot Collaboration based on Online Neural Network Training. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018), Madrid, Spain, 1–5 October 2018. [Google Scholar]
  50. Sharkawy, A.-N.; Koustoumpardis, P.N.; Aspragathos, N. A Neural Network based Approach for Variable Admittance Control in Human- Robot Cooperation: Online Adjustment of the Virtual Inertia. Intell. Serv. Robot. 2020, 13, 495–519. [Google Scholar] [CrossRef]
  51. Sauro, J. A Practical Guide to the System Usability Scale: Background, Benchmarks and Best Practices; CreateSpace Independent Publishing Platform: Scotts Valley, CA, USA, 2011; pp. 1–162. [Google Scholar]
  52. Ficuciello, F.; Villani, L.; Siciliano, B. Variable Impedance Control of Redundant Manipulators for Intuitive Human-Robot Physical Interaction. IEEE Trans. Robot. 2015, 31, 850–863. [Google Scholar] [CrossRef] [Green Version]
  53. Gualtieri, L.; Rauch, E.; Vidoni, R. Emerging research fields in safety and ergonomics in industrial collaborative robotics: A systematic literature review. Robot. Comput. Integr. Manuf. 2021, 67, 101998. [Google Scholar] [CrossRef]
  54. ISO 10218-1; Robots and Robotic Devices—Safety Requirements for Industrial Robots—Part 1: Robots. ISO Copyright Office: Zurich, Switzerland, 2011.
  55. ISO 10218-2; Robots and robotic devices—Safety Requirements for Industrial Robots—Part 2: Robot Systems and Integration. ISO Copyright Office: Zurich, Switzerland, 2011.
  56. ISO/TS 15066; Robots and Robotic Devices—Collaborative Robots. ISO Copyright Office: Geneva, Switzerland, 2016.
  57. Yamada, Y.; Hirasawa, Y.; Huang, S.; Umetani, Y.; Suita, K. Human—Robot Contact in the Safeguarding Space. IEEE/ASME Trans. Mechatron. 1997, 2, 230–236. [Google Scholar] [CrossRef]
  58. Flacco, F.; Kroger, T.; De Luca, A.; Khatib, O. A Depth Space Approach to Human-Robot Collision Avoidance. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation, Saint Paul, MN, USA, 14–18 May 2012; pp. 338–345. [Google Scholar]
  59. Schmidt, B.; Wang, L. Contact-less and Programming-less Human-Robot Collaboration. In Proceedings of the Forty Sixth CIRP Conference on Manufacturing Systems 2013; Elsevier: Amsterdam, The Netherlands, 2013; Volume 7, pp. 545–550. [Google Scholar]
  60. Anton, F.D.; Anton, S.; Borangiu, T. Human-Robot Natural Interaction with Collision Avoidance in Manufacturing Operations. In Service Orientation in Holonic and Multi Agent Manufacturing and Robotics; Springer: Berlin/Heidelberg, Germany, 2013; pp. 375–388. ISBN 9783642358524. [Google Scholar]
  61. Kitaoka, M.; Yamashita, A.; Kaneko, T. Obstacle Avoidance and Path Planning Using Color Information for a Biped Robot Equipped with a Stereo Camera System. In Proceedings of the 4th Asia International Symposium on Mechatronics, Singapore, 15–18 December 2010; pp. 38–43. [Google Scholar]
  62. Lenser, S.; Veloso, M. Visual Sonar: Fast Obstacle Avoidance Using Monocular Vision. In Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003), Las Vegas, NV, USA, 27–31 October 2003. [Google Scholar]
  63. Peasley, B.; Birchfield, S. Real-Time Obstacle Detection and Avoidance in the Presence of Specular Surfaces Using an Active 3D Sensor. In Proceedings of the 2013 IEEE Workshop on Robot Vision (WORV), Clearwater Beach, FL, USA, 15–17 January 2013; pp. 197–202. [Google Scholar]
  64. Flacco, F.; Kroeger, T.; De Luca, A.; Khatib, O. A Depth Space Approach for Evaluating Distance to Objects. J. Intell. Robot. Syst. 2014, 80, 7–22. [Google Scholar] [CrossRef]
  65. Gandhi, D.; Cervera, E. Sensor Covering of a Robot Arm for Collision Avoidance. In Proceedings of the SMC’03 Conference Proceedings 2003 IEEE International Conference on Systems, Man and Cybernetics. Conference Theme—System Security and Assurance (Cat. No.03CH37483), Washington, DC, USA, 8 October 2003; pp. 4951–4955. [Google Scholar]
  66. Lam, T.L.; Yip, H.W.; Qian, H.; Xu, Y. Collision Avoidance of Industrial Robot Arms using an Invisible Sensitive Skin. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal, 7–12 October 2012; pp. 4542–4543. [Google Scholar]
  67. Shi, L.; Copot, C.; Vanlanduit, S. A Bayesian Deep Neural Network for Safe Visual Servoing in Human–Robot Interaction. Front. Robot. AI 2021, 8, 1–13. [Google Scholar] [CrossRef]
  68. Haddadin, S.; Albu-sch, A.; De Luca, A.; Hirzinger, G. Collision Detection and Reaction: A Contribution to Safe Physical Human-Robot Interaction. In Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France, 22–26 September 2008; pp. 3356–3363. [Google Scholar]
  69. Cho, C.; Kim, J.; Lee, S.; Song, J. Collision detection and reaction on 7 DOF service robot arm using residual observer. J. Mech. Sci. Technol. 2012, 26, 1197–1203. [Google Scholar] [CrossRef]
  70. Jung, B.; Choi, H.R.; Koo, J.C.; Moon, H. Collision Detection Using Band Designed Disturbance Observer. In Proceedings of the 8th IEEE International Conference on Automation Science and Engineering, Seoul, Korea, 20–24 August 2012; pp. 1080–1085. [Google Scholar]
  71. Cao, P.; Gan, Y.; Dai, X. Model-based sensorless robot collision detection under model uncertainties with a fast dynamics identification. Int. J. Adv. Robot. Syst. 2019, 16, 1729881419853713. [Google Scholar] [CrossRef] [Green Version]
  72. Morinaga, S.; Kosuge, K. Collision Detection System for Manipulator Based on Adaptive Impedance Control Law. In Proceedings of the 2003 IEEE International Conference on Robotics & Automation, Taipei, Taiwan, 14–19 September 2003; pp. 1080–1085. [Google Scholar]
  73. Kim, J. Collision detection and reaction for a collaborative robot with sensorless admittance control. Mechatronics 2022, 84, 102811. [Google Scholar] [CrossRef]
  74. Lu, S.; Chung, J.H.; Velinsky, S.A. Human-Robot Collision Detection and Identification Based on Wrist and Base Force/Torque Sensors. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation, Barcelona, Spain, 18–22 April 2005; pp. 796–801. [Google Scholar]
  75. Dimeas, F.; Avendano-Valencia, L.D.; Nasiopoulou, E.; Aspragathos, N. Robot Collision Detection based on Fuzzy Identification and Time Series Modelling. In Proceedings of the RAAD 2013, 22nd International Workshop on Robotics in Alpe-Adria-Danube Region, Portoroz, Slovenia, 11–13 September 2013. [Google Scholar]
  76. Dimeas, F.; Avendano-valencia, L.D.; Aspragathos, N. Human—Robot collision detection and identification based on fuzzy and time series modelling. Robotica 2014, 33, 1886–1898. [Google Scholar] [CrossRef]
  77. Franzel, F.; Eiband, T.; Lee, D. Detection of Collaboration and Collision Events during Contact Task Execution. In Proceedings of the IEEE-RAS International Conference on Humanoid Robots, Munich, Germany, 19–21 July 2021; pp. 376–383. [Google Scholar]
  78. Cioffi, G.; Klose, S.; Wahrburg, A. Data-Efficient Online Classification of Human-Robot Contact Situations. In Proceedings of the 2020 European Control Conference (ECC), St. Petersburg, Russia, 12–15 May 2020; pp. 608–614. [Google Scholar]
  79. Briquet-Kerestedjian, N.; Wahrburg, A.; Grossard, M.; Makarov, M.; Rodriguez-Ayerbe, P. Using neural networks for classifying human-robot contact situations. In Proceedings of the 2019 18th European Control Conference, ECC 2019, Naples, Italy, 25–28 June 2019; pp. 3279–3285. [Google Scholar]
  80. Sharkawy, A.-N.; Aspragathos, N. Human-Robot Collision Detection Based on Neural Networks. Int. J. Mech. Eng. Robot. Res. 2018, 7, 150–157. [Google Scholar] [CrossRef]
  81. Sharkawy, A.-N.; Koustoumpardis, P.N.; Aspragathos, N. Manipulator Collision Detection and Collided Link Identification based on Neural Networks. In Advances in Service and Industrial Robotics. RAAD 2018. Mechanisms and Machine Science; Nikos, A., Panagiotis, K., Vassilis, M., Eds.; Springer: Cham, Switzerland, 2018; pp. 3–12. [Google Scholar]
  82. Sharkawy, A.N.; Koustoumpardis, P.N.; Aspragathos, N. Neural Network Design for Manipulator Collision Detection Based only on the Joint Position Sensors. Robotica 2020, 38, 1737–1755. [Google Scholar] [CrossRef]
  83. Sharkawy, A.N.; Koustoumpardis, P.N.; Aspragathos, N. Human–robot collisions detection for safe human–robot interaction using one multi-input–output neural network. Soft Comput. 2020, 24, 6687–6719. [Google Scholar] [CrossRef]
  84. Sharkawy, A.-N.; Mostfa, A.A. Neural Networks’ Design and Training for Safe Human-Robot Cooperation. J. King Saud Univ. Eng. Sci. 2021, 1–15. [Google Scholar] [CrossRef]
  85. Sotoudehnejad, V.; Takhmar, A.; Kermani, M.R.; Polushin, I.G. Counteracting modeling errors for sensitive observer-based manipulator collision detection. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal, 7–12 October 2012; pp. 4315–4320. [Google Scholar]
  86. De Luca, A.; Albu-Schäffer, A.; Haddadin, S.; Hirzinger, G. Collision detection and safe reaction with the DLR-III lightweight manipulator arm. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October 2006; pp. 1623–1630. [Google Scholar]
  87. Sharkawy, A.-N. Principle of Neural Network and Its Main Types: Review. J. Adv. Appl. Comput. Math. 2020, 7, 8–19. [Google Scholar] [CrossRef]
Figure 1. The content of the presented survey on HRI.
Figure 2. The concept and implementation of the admittance and impedance controllers: (a) admittance control; (b) impedance control.
Figure 3. The VA control system for facilitating the human–robot co-manipulating task.
Figure 4. Classification of the techniques used for developing VA control, according to whether the damping only, the inertia only, or both the damping and inertia are adjusted.
Figure 5. The robotic tasks accomplished during the development and the evaluation of the VA control that appeared in the literature.
Figure 6. The comparison between the number of subjects used to evaluate the developed VA control, according to the literature presented in this subsection.
Figure 7. Performance comparison between some developed VA control systems. The main criteria used are the required human effort, the needed task time, and the obtained accuracy.
Figure 8. The research contents and themes related to safety and investigated in recent years by Gualtieri et al. [53].
Figure 9. The classifications of the collision detection techniques used for the safety of HRI systems.
Figure 10. Comparison between the performance of collision detection methods for HRI in terms of effectiveness, false negative collisions, and false positive collisions.
Table 1. Comparison between the admittance controller and the impedance controller of the robot.

| Parameter | Admittance Controller | Impedance Controller |
| --- | --- | --- |
| Use | It is used in HRI where there is no interaction between the robot and a stiff environment. | The main aim of the impedance control methodology is to modulate the manipulator's mechanical impedance [28]. |
| Inputs and outputs | It maps the applied forces into robot motion, as shown in Figure 2a. | The motion is the input, whereas the output is the force, as shown in Figure 2b [30,32]. |
| Rendering | (1) It can render only virtual stiff surfaces, whereas it cannot render low inertia. (2) It is negatively affected during dynamic interaction with real stiff surfaces (constrained motion) [33,34,35]. | (1) It can render low inertia, whereas it cannot render virtual stiff surfaces. (2) It is negatively affected during dynamic interaction with low inertia (free motion) [35]. |
| Control | (1) It is position-based impedance control [36]. (2) A position or velocity controller controls the robot, and the desired compliant behavior is imposed by the outer control loop. | (1) Force-based impedance control is used. (2) Not only the controlled manipulator but also the controller itself must have impedance causality. |
| Representation | See Figure 2a. | See Figure 2b. |
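As a concrete illustration of the admittance column in Table 1, below is a minimal one-axis sketch of the force-to-motion mapping of Figure 2a, assuming the common virtual model M·a + D·v = F; the parameter values and function name are illustrative, not taken from the reviewed controllers:

```python
# One-axis admittance controller sketch: the measured human force F
# is passed through the virtual admittance M*a + D*v = F, and the
# integrated velocity is sent to the robot's inner position/velocity
# loop. M (virtual inertia) and D (virtual damping) are the
# parameters that VA controllers adjust online.

def admittance_step(F, v, M=2.0, D=15.0, dt=0.002):
    """One Euler step of M*a + D*v = F; returns the new commanded
    velocity for a 1-DOF axis."""
    a = (F - D * v) / M
    return v + a * dt

# A constant 10 N push drives the axis toward the steady-state
# velocity F/D = 10/15 m/s; lowering D online would make the robot
# feel lighter to the human operator.
v = 0.0
for _ in range(5000):        # 10 s at a 2 ms control period
    v = admittance_step(10.0, v)
```

The steady-state velocity F/D makes the role of the adjusted parameters visible: damping sets the effort needed to sustain motion, while inertia sets how quickly the robot responds to force changes.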
Table 2. Comparison between the different schemes of control. This comparison is based on ref. [31].

| Scheme of Control | Work Space | Measured Variables | Appropriate Applied Situations | Control Aims |
| --- | --- | --- | --- | --- |
| Position control | Task space | Position | Free motion | Desired position |
| Force control | Task space | Contact force | Constrained motion | Desired contact force |
| Hybrid control | Position subspace; force subspace | Position; contact force | All motion kinds | Desired position; desired contact force |
| Impedance/admittance control | Task space | Position, contact force | All motion kinds | Impedance/admittance |
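The hybrid row of Table 2 combines a position law in one subspace with a force law in its complement; a minimal sketch with a diagonal selection matrix follows (the gains, error values, and function name are illustrative assumptions, not taken from ref. [31]):

```python
import numpy as np

def hybrid_command(selection, pos_err, force_err, kp=50.0, kf=0.05):
    """Hybrid position/force command: the diagonal selection matrix S
    routes each task direction to the position subspace (S_ii = 1)
    or to the force subspace (S_ii = 0)."""
    S = np.diag(selection)
    I = np.eye(S.shape[0])
    return S @ (kp * pos_err) + (I - S) @ (kf * force_err)

# Example: x and y are position-controlled (free motion), while z is
# force-controlled against a constraint surface.
u = hybrid_command([1, 1, 0],
                   np.array([0.01, -0.02, 0.0]),   # position error (m)
                   np.array([0.0, 0.0, 5.0]))      # force error (N)
# u = [0.5, -1.0, 0.25]
```

The selection matrix makes the table's subspace split explicit: each direction tracks either a desired position or a desired contact force, never both at once.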
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Sharkawy, A.-N.; Koustoumpardis, P.N. Human–Robot Interaction: A Review and Analysis on Variable Admittance Control, Safety, and Perspectives. Machines 2022, 10, 591. https://doi.org/10.3390/machines10070591
