Grasped Object Weight Compensation in Reference to Impedance Controlled Robots

This paper addresses the problem of grasped object weight compensation in the one-handed manipulation of impedance controlled robots. In an exemplary identification procedure, the weight of an object and its centre of mass, together with the gripper kinematic configuration, are identified. The procedure is based on the measurements from a 6-axis force/torque sensor mounted near the gripper. The proposed method reduces trajectory tracking errors caused by model imprecision without compromising the main advantages of impedance control. The whole approach is applied according to the embodied agent paradigm and verified on a two-arm service robot, both in simulation and on hardware. Owing to the general description, which follows system engineering standards, the method can be easily modified or applied to similar systems.


Introduction
Service robots have become more advanced, but there is still much to accomplish if one compares their abilities with those of humans. An example of a very significant service robot application is the manipulation of a grasped object, a typical task in everyday human life. Object manipulation is well known in robotics, even for robots equipped with many effectors [1]. In general, this assignment is preceded by grasping an object. It is worth noticing that a good deal of recent research concerns this issue, in particular grasp planning [2] and learning [3]. The kinematic and dynamic properties of a manipulator's kinematic chain change when an object is grasped, and this should be taken into account during the development of the manipulator's controller. Typically, for stiff position control in the manipulator's joints, the integral part of the position controller deals with the change in plant properties caused by object grasping. Modern approaches to service robot control change that paradigm.
Position-force control, although introduced long ago, is still being improved [4,5]; the most promising approaches appear to be variable stiffness actuators (VSA), e.g., [6], and impedance control, especially when a robot's end-effector is in contact with the environment or transitioning from contact-free motion to the contact phase [7,8]. Such types of control are biologically inspired [9]. Compliance and damping are essential to keep the system stable while the manipulator is in contact with the environment and transitions from contact-free motion to contact with obstacles. The main advantage of impedance control, which addresses current economic and ecological demands, is its suitability for energy consumption optimisation [10]: by its nature, bounding the torques exerted in the joints prevails over exact trajectory tracking. Impedance control algorithms are based on the computed torque method and are therefore sensitive to inaccuracies in the robot's model. If the control law ignores the inertial properties of a grasped object, the position errors of the robot's joints or end-effectors may become large. These errors should not be eliminated by applying integral action: impedance control has no integral term by definition, since an integral would make a contact force grow, which goes against the essence of impedance control. Instead, the accuracy of the model parameters should be improved, and the control should rely on the compliance [11] that is inherent in impedance control.
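The distinction between pure compliance and integral action can be illustrated with a minimal sketch (a hypothetical two-joint example with illustrative stiffness and damping values, not the controller described in this paper):

```python
import numpy as np

def impedance_torque(q_d, q, q_dot, K, D):
    """Joint impedance control law: a virtual spring-damper around the
    desired position, deliberately without an integral term, so that a
    steady-state error maps to a bounded torque instead of a growing one."""
    return K @ (q_d - q) - D @ q_dot

# Illustrative values for a hypothetical 2-joint manipulator.
K = np.diag([100.0, 100.0])    # joint stiffness [Nm/rad]
D = np.diag([10.0, 10.0])      # joint damping [Nm s/rad]
q_d = np.array([0.5, -0.2])    # desired joint positions [rad]
q = np.array([0.4, -0.1])      # measured joint positions [rad]
q_dot = np.zeros(2)            # measured joint velocities [rad/s]

tau = impedance_torque(q_d, q, q_dot, K, D)
# A 0.1 rad steady-state error maps to a bounded 10 Nm torque; with an
# integral term the commanded torque would keep growing in contact.
```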
Our goal was to propose a model, and a procedure for identifying its parameters, for a gripper and a grasped object that can be effectively analysed and integrated into the control system of impedance controlled robots. To achieve this, the problem analysis and system description fulfil system engineering standards and are based on SysML [12,13] and its application to cyber-physical systems modelling, i.e., EARL [14] version 1.2 [15]. EARL was recently used to develop the systems described in, for example, [16,17]. It is based on the embodied agent theory [18], applied to different systems, especially robotic systems with a varying structure [19] or with harmonised tasks [20], as well as to describe and analyse general cyber-physical structures [21] or motion control systems based on artificial intelligence methods [22]. The exemplary identification procedure takes the kinematic configuration of the gripper into account without requiring a gripper model. This addresses the case where such a model is unavailable, for example, for grippers with specific, customised structures.
The remaining part of the article is organised as follows. Section 2 starts with control system use-case analyses. Then, the control system structure and behaviour descriptions are presented, emphasising the impedance control laws in operational and joint space and their extension that includes object weight compensation and the gripper kinematic configuration. A convenient procedure for identifying the parameters of the grasping model is presented in Section 3. Then, Section 4 describes the experiments performed with Velma (https://www.robotyka.ia.pw.edu.pl/robots/velma, accessed on 10 October 2021), a custom-built two-arm service robot. Finally, Section 5 summarises the work.

Control System
A robot with a grasped object is modelled as an open kinematic-chain mechanism with n rigid joints. This work presents a method of adjusting the EARL based robot model so that the object's parameters are included in the robot control law. Additionally, with the compensation applied, the experimental results are expected to be similar to those of control without any object grasped.
The control system development and analysis start with general use cases (Figure 1). The variety of applications demands two modes of impedance control [23] (UC01, UC02), gripper finger control (UC03), tool parameter setting (UC04), and finally the identification of the grasped object parameters (UC05), which is the main contribution of this work.
The above requirements can be satisfied with the control system of the general structure depicted in Figure 2. The system consists of two main parts:
• task/ga: a Group of Agents with the specific task implemented.
• core/a: an Agent that realizes the core behaviours demanded by the wide range of applications formulated in task/ga.
Agent core/a provides communication channels for the Group of Agents task/ga. Robot arms (UC01, 02, 04, 05) and grippers (UC03, 05) are controlled through separate channels to ease controller reconfiguration when the hardware changes (e.g., a change of gripper type). In the opposite direction, core/a reports the arms' state, the gripper state and the state of the control subsystem core/a.cs, which includes wrist wrench measurements. The other part of core/a is the group of subsystems armsGrippers/igs, responsible for low-level control and sensor data acquisition from all effectors and receptors.
The use cases depicted in Figure 1 are allocated to primitive transition functions of Agent core/a (Figure 3). Those primitive transition functions should be associated with the basic behaviours that will use them. It is enough to distinguish two basic behaviours, as presented in Figure 4. In EARL, the basic behaviours are associated with the finite state machine that defines the general behaviour of the subsystem (Figure 5). In the following sections, the primitive transition functions core/a.cs.pf related to impedance control are presented. These functions are parts of complex functions formulated to realize the joint space move (Figure 6a).

Primitive Transition Function core/a.cs.jointMove/pf
Impedance control law, both in joint and operational space, may be described in a hierarchical manner [24][25][26]. In joint space, it can be formulated according to [24] as Equation (1):

τ_imp = τ_j(1) + N_j(1) τ_j(2),    (1)

Herein, τ_j(1) and τ_j(2) are the n × 1 vectors of torques related to tasks j(1) and j(2), respectively, where the numbers in parentheses indicate the priority of the tasks. Matrix N_j(1) denotes an n × n null space projection as Equation (2):

N_j(1) = ∏_{i=1}^{n_p} N_t(i),    (2)

where the n × n matrices N_t(i), i = 1, ..., n_p, are constructed using the continuous null space projection method [25]. The Jacobian inverses used in the calculations of N_t(i) can be dynamically consistent [27] or not. As stated in [28], n_p is the number of contact point pairs, as the task with higher priority determines self-collision avoidance in Equation (3):

τ_j(1) = −(∂V_coll(q)/∂q)^T − D_diag q̇,    (3)

where V_coll(q) represents a repulsive potential field according to [29], expressed in terms of potential energy [28]. Matrix D_diag denotes an n × n diagonal damping matrix, and q and q̇ denote the n × 1 vectors of measured joint positions and velocities, respectively. The task with lower priority concerns joint impedance control [23] and can be expressed as Equation (4):

τ_j(2) = K_j Δq − D_j q̇,    (4)

where K_j is an n × n diagonal joint stiffness matrix, Δq = q_d − q is an n × 1 difference vector of the desired and measured joint positions, and D_j is an n × n joint damping matrix calculated using the double diagonalisation method [30].
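The hierarchical composition of Equation (1) can be sketched as follows (a toy 3-joint example using a plain pseudoinverse-based projector; the continuous null space projection method of [25] used in the paper is not reproduced here):

```python
import numpy as np

def null_space_projector(J):
    """N = I - J^+ J: projects torques into the null space of the
    higher-priority task's Jacobian J (plain pseudoinverse variant)."""
    return np.eye(J.shape[1]) - np.linalg.pinv(J) @ J

def hierarchical_torque(tau_high, tau_low, N_high):
    """Equation (1)-style composition: the lower-priority torque acts
    only in the null space of the higher-priority task."""
    return tau_high + N_high @ tau_low

# Toy example: the high-priority task constrains joint 1 only.
J_high = np.array([[1.0, 0.0, 0.0]])
N = null_space_projector(J_high)
tau = hierarchical_torque(np.array([1.0, 0.0, 0.0]),
                          np.array([5.0, 5.0, 5.0]), N)
# The lower-priority torque is cancelled on joint 1 and passes through
# unchanged on joints 2 and 3.
```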
Let n_e be the number of the robot's arms, each of which has one effector. In operational space, the control law is composed hierarchically of four tasks, c(1) to c(4), analogously to Equation (1), as Equation (5):

τ_imp = τ_c(1) + N_c(1) (τ_c(2) + N_c(2) (τ_c(3) + N_c(3) τ_c(4))).    (5)

The task with the lowest priority is the Cartesian impedance control [23], shown as Equation (6):

τ_c(4) = J^T (K_c Δr − D_c ṙ),    (6)

where J is a 6n_e × n Jacobi matrix, K_c is a 6n_e × 6n_e diagonal Cartesian stiffness matrix, Δr = r_d − r is a 6n_e × 1 vector of virtual spring displacement between the desired r_d and actual r positions, D_c is a 6n_e × 6n_e Cartesian damping matrix obtained from the double diagonalisation method [30], ξ is a 6n_e × 1 vector representing normalized damping along the main directions of the task frames, and ṙ = J q̇ is a 6n_e × 1 velocity vector. Each of the vectors Δr, r_d, r, and ṙ has a block structure, so that the first 6 coordinates concern the first effector, the next 6 the second effector, etc. Moreover, in each 6 × 1 block of r (analogously for Δr, r_d, and ṙ), the first 3 × 1 vector denotes positions, supplemented by the 3 × 1 vector part of a unit quaternion. Thus, the matrices J, K_c, and D_c must also have the appropriate block structure. The task with the next higher priority is related to the vector τ_c(3). This task determines joint limits avoidance [31], with additional damping applied according to the double diagonalisation method [30] with constant coefficient values in an auxiliary diagonal damping matrix D_ξ. The null space projection matrix N_c(3), which relates the featured tasks c(3) and c(4), can be set to the identity matrix. Consequently, these tasks are treated with the same priority.
The task with the second-highest priority involves self-collision avoidance. Therefore, τ_c(2) and N_c(2) may be calculated analogously to τ_j(1) and N_j(1), respectively. Finally, the task with the highest priority determines the wrist collision constraint [26]. This task is similar to the former one, except that it affects only the two joints of each effector that are most distant from the robot's base. Indeed, the main distinction between τ_c(1) or N_c(1) and τ_c(2) or N_c(2), respectively, is the lower number of contact point pairs [28] in the first task.
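The Cartesian impedance task of Equation (6) amounts to a spring-damper in the task frame mapped through the Jacobian transpose; a minimal planar sketch (illustrative 2-DOF values, not the full 6n_e-dimensional block structure described above):

```python
import numpy as np

def cartesian_impedance_torque(J, K_c, D_c, dr, q_dot):
    """Equation (6)-style torque: tau = J^T (K_c dr - D_c r_dot),
    with the task-space velocity r_dot = J q_dot."""
    r_dot = J @ q_dot
    return J.T @ (K_c @ dr - D_c @ r_dot)

# Planar toy example with an identity Jacobian.
J = np.eye(2)
K_c = np.diag([500.0, 500.0])   # Cartesian stiffness
D_c = np.diag([10.0, 10.0])     # Cartesian damping
dr = np.array([0.01, 0.0])      # virtual spring displacement [m]
q_dot = np.zeros(2)             # joint velocities [rad/s]

tau = cartesian_impedance_torque(J, K_c, D_c, dr, q_dot)
# A 1 cm displacement against a 500 N/m virtual spring yields 5 Nm here.
```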

Primitive Transition Function core/a.cs.compensateObjectWeight/pf
The gravitational term of the grasped object and gripper configuration can be incorporated as an additive component into the control law for the robot with the grasped object. Therefore, the robot control law can be stated as Equation (7):

τ_d = τ_imp + τ_ogc,    (7)

where the n × 1 vector of desired torques τ_d is the sum of the n × 1 vector of joint torques τ_imp, calculated according to the joint or Cartesian impedance control law, and the n × 1 vector of object gravity compensation torques τ_ogc. The origin of the approach from [32] is stated in [1], where robot systems are considered in terms of kinetic and potential energy. Let i (i ∈ N) be the index of the chosen robot's joint. Then, the object weight compensation torque τ_ogc[i] acting in joint i can be formulated as Equation (8):

τ_ogc[i] = −(r_i f_g)^T ê_i.    (8)

In this equation, the skew-symmetric matrix r_i (Equation (9)) is associated with the vector [r_x r_y r_z]^T that denotes the object's centre of mass position relative to joint i:

r_i = [ 0 −r_z r_y ; r_z 0 −r_x ; −r_y r_x 0 ].    (9)

The 3 × 1 vector f_g represents the object's weight, and the 3 × 1 unit vector ê_i defines the joint's axis of rotation. The expression τ_g,i = r_i f_g is the cross product of the centre-of-mass vector and f_g, and it determines the 3 × 1 torque vector. Hence, the torque τ_ogc[i] is the scalar product of −τ_g,i and ê_i.
Equation (8) assumes that r_i, f_g, and ê_i are expressed in the same coordinate system. However, it is more convenient to express the quantities r_i and ê_i in a local Cartesian coordinate system π_i, whereas f_g is naturally expressed in the robot's base Cartesian coordinate system π_0 (Ox_0y_0z_0). The frame π_i can be bound to the link driven by the motor in joint i. Then, Equation (8) may be rewritten as Equation (10):

τ_ogc[i] = −((0)r_i (0)f_g)^T 0_i R (i)ê_i,    (10)

where (0)r_i is the skew-symmetric matrix of Equation (9) built from the centre-of-mass position vector expressed in π_0. Herein, 0_i R is the rotation matrix used to transform quantities from π_i to π_0. The upper left index in brackets, e.g., (j) in (j)w, indicates that the quantity w is expressed in the coordinate system π_j. Let m be the object's mass, and g the gravitational acceleration. Then, the force (0)f_g can be stated as Equation (11):

(0)f_g = [0 0 −mg]^T.    (11)
The minus sign in −mg is the result of the assumption that the z 0 axis is directed upwards and therefore against the force of gravity.
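Equations (8)-(11) can be condensed into a short numerical sketch (the example values are illustrative; with the rotation matrix set to identity, the local frame coincides with the base frame):

```python
import numpy as np

def object_gravity_torque(e_i_local, r_local, R_0i, m, g=9.80665):
    """Object weight compensation torque for joint i, in the spirit of
    Equations (8), (10) and (11): the torque of the object's weight about
    the joint, projected (with a minus sign) onto the joint axis."""
    f_g_0 = np.array([0.0, 0.0, -m * g])  # Equation (11): weight along -z0
    r_0 = R_0i @ r_local                  # centre-of-mass vector in base frame
    e_0 = R_0i @ e_i_local                # joint axis in base frame
    tau_g = np.cross(r_0, f_g_0)          # r x f_g, cf. Equations (8)-(9)
    return -float(np.dot(tau_g, e_0))     # projection on the joint axis

# A 2 kg object held 0.1 m from a joint whose axis is the base y-axis.
tau = object_gravity_torque(np.array([0.0, 1.0, 0.0]),
                            np.array([0.1, 0.0, 0.0]),
                            np.eye(3), m=2.0)
# |tau| = 0.1 m * 2 kg * g; this is the term added to tau_imp in Eq. (7).
```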

The Identification Procedure of the Object's Parameters
To apply object weight compensation, the group of agents task/ga needs to estimate the grasped object's weight and centre of mass position and command them to agent core/a. Each of the robot's manipulators (effectors) is equipped with a force/torque sensor mounted between the wrist and the gripper. It is also assumed that the grasped object is rigid and cannot move relative to the gripper. Furthermore, the 6 × 1 wrench measured by the force/torque sensor is established as Equation (12):

(k)w_m = [(k)f^T (k)t^T]^T,    (12)

where (k)f and (k)t are the 3 × 1 force and torque vectors measured in the sensor coordinate frame π_k. The identification procedure consists of four measurement phases, preceded by phase 0, in which the grasp configuration of the gripper fingers is set:

1. The manipulator, without the object, reaches a chosen measurement position, and the first reference measurement is taken (Figure 9c).

2. The gripper rotates around the x-axis of the frame π_k by −90°, and the second reference measurement is taken (Figure 9d).

3. The manipulator grasps the object with its gripper and reaches the same position as in phase 1. The third measurement is taken (Figure 9e).

4. The arm reaches the same position as in phase 2, and the fourth measurement is taken (Figure 9f).

The identified parameters can then be computed and transmitted to agent core/a. The object's weight w can be obtained using the formula of Equation (13):

w = ‖(k)f^(3) − (k)f^(1)‖,    (13)

where the upper right index denotes the phase of the identification procedure in which the measurement was made, and (k) in the upper left index refers to the coordinate frame π_k. It should be noted that, due to numerical conditioning, for very small values of w the procedure should stop here and the compensation factors should remain unchanged. Otherwise, the position of the object's mass centre relative to the force/torque sensor can be computed with Equations (14)-(16), where (k)r is the position vector directed from the force/torque sensor to the object's mass centre. For each joint i, to obtain the quantity (0)r_i from Equation (10), the vector (k)r has to be transformed with the robot kinematic model.
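The four-phase measurement scheme can be sketched numerically. The weight follows the idea of Equation (13) as the norm of a force difference; the centre-of-mass estimate shown here is a generic least-squares solve of t = r × f over the poses, an assumption standing in for the paper's closed-form Equations (14)-(16):

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix, cf. Equation (9): skew(v) @ u == np.cross(v, u)."""
    x, y, z = v
    return np.array([[0.0, -z, y],
                     [z, 0.0, -x],
                     [-y, x, 0.0]])

def identify_weight(f_ref, f_loaded):
    """Equation (13)-style weight: norm of the force difference between
    the loaded and reference measurements taken in the same pose."""
    return float(np.linalg.norm(np.asarray(f_loaded) - np.asarray(f_ref)))

def identify_com(f_diffs, t_diffs):
    """Least-squares centre-of-mass estimate from t = r x f, stacked over
    the measurement poses (generic substitute for Equations (14)-(16))."""
    A = np.vstack([-skew(f) for f in f_diffs])   # r x f == -skew(f) @ r
    b = np.hstack(t_diffs)
    r, *_ = np.linalg.lstsq(A, b, rcond=None)
    return r

# Synthetic check: a 2 kg object at r_true, observed in two gripper poses.
r_true = np.array([0.13, -0.01, -0.008])
f_a = np.array([0.0, 0.0, -19.6133])     # gravity as seen in the first pose
f_b = np.array([0.0, -19.6133, 0.0])     # gravity as seen in the second pose
w = identify_weight(np.zeros(3), f_a)
r_est = identify_com([f_a, f_b],
                     [np.cross(r_true, f_a), np.cross(r_true, f_b)])
```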

The Verification of the Grasped Object Weight Compensation Algorithm
The approach was verified using the Velma service robot simulator [33] and then on the real robot, in both cases with a universal controller implemented within the FABRIC [34] software framework. As the use of domain-specific frameworks is a convenient approach [35], FABRIC is based on ROS [36] and OROCOS [37]. The Velma robot has already been used in many research works, described, e.g., in [2,23,26].

The Specific Realisation of Impedance Control Law for the Velma Service Robot
The Velma robot impedance control law includes only its two arms and torso, because the other robot parts are controlled separately. Thus, the vector of desired torques τ_d from Equation (7) is represented as Equation (17):

τ_d = [τ_torso τ_rightLWR^T τ_leftLWR^T]^T.    (17)

In Equation (17), τ_torso is the torso torque, proportional to the current i_τ flowing through the motor (Equation (18)):

τ_torso = k_τ i_τ,    (18)

where k_τ is the proportionality factor. The torso controller is created using the technology presented in [38]. The vector τ_d from Equation (17) also consists of the 7 × 1 right arm torque vector τ_rightLWR and the 7 × 1 left arm torque vector τ_leftLWR. The arms of the Velma robot are 7-DOF redundant KUKA LWR4+ manipulators. The dynamic properties of these arms are supposed to be highly repeatable, but not identical [39]. However, the compensation algorithm described in Section 2.3 does not depend on the particular manipulator, so this issue does not affect the verification of the algorithm.
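The stacking of Equations (17) and (18) for the 15-joint system can be sketched as follows (k_τ here is a placeholder; the real proportionality factor is robot-specific):

```python
import numpy as np

K_TAU = 0.123   # hypothetical torque constant [Nm/A]; placeholder value

def velma_desired_torque(i_tau, tau_right_lwr, tau_left_lwr):
    """Equation (17)/(18)-style composition: one torso joint driven by
    motor current, plus two 7-DOF arm torque vectors, n = 15 in total."""
    tau_torso = K_TAU * i_tau                                # Equation (18)
    return np.concatenate(([tau_torso], tau_right_lwr, tau_left_lwr))

tau_d = velma_desired_torque(2.0, np.ones(7), np.zeros(7))
```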
The Velma robot's arms are internally torque-controlled [40] (Equation (19)):

τ_cmd = τ_FRI + f_dynamics(q, q̇, q̈).    (19)

Impedance control is included in the calculation of the vector τ_FRI, which is specified on the KUKA Fast Research Interface (FRI) remote side. The vector f_dynamics(q, q̇, q̈) denotes the manipulator's dynamic model and is computed internally by the KUKA controller. The commanded torque vector is denoted as τ_cmd. For the Velma robot, the vector τ_FRI is specified by the robot's research controller and is identical to τ_rightLWR or τ_leftLWR, depending on which manipulator is considered. It should be noted that there are n = 15 joints and n_e = 2 arms (effectors) in the Velma robot control; hence, in Equation (5), the task with the highest priority is treated as two separate tasks, for the right and left arms.

Exemplary Results of Experiments
A test task concerned object manipulation. In simulation, the main aim was to move the object from one table to another with one of the Velma robot's arms. For the purpose of the presentation, the right manipulator was chosen, but a similar outcome would be obtained with the left arm. The task was divided into three subsequent stages:
1. The robot performed the identification procedure (Section 3, Figure 10a-e), starting from a convenient configuration (Figure 10a). According to phase 0 of the procedure, the robot checked the grasp configuration of the fingers of its right gripper (Figure 10b). Then, the robot took two reference wrench measurements (Figure 10c), and after that, it grasped the object (Figure 10d). Ultimately, the robot took the two last wrench measurements (Figure 10e), from which the desired parameters could be computed.

2. The robot executed a sequence of movements in Cartesian impedance control mode with linear interpolation (analogous to that described in [23]) between the points (vertices) of a square (Figures 10f,g and 11) lying in the plane perpendicular to the x-axis of the global coordinate system (Figure 10a).

3. The robot put the object away on the other table (Figure 10h).

Figure 10. The partial illustration of the test task in simulation: (a-e) stage 1, (f,g) stage 2, (h) stage 3.
The test task was repeated twice for a 2 kg object. The first time, the object's weight was not compensated, so the vector τ_ogc from Equation (7) was arbitrarily set to zero, and there was no need to perform stage 1 of the task. The second time (https://vimeo.com/567801853, accessed on 10 October 2021), the vector τ_ogc was computed in every control cycle according to the compensation algorithm from Section 2.3, including the identified object's parameters (Section 3). The computed weight (Equation (13)) was w = 19.6024 N, and the mass centre position vector (Equations (14)-(16)) was (k)r = [0.13065 −0.01121 −0.00837]^T m.
Hence, for g = 9.80665 m/s², the computed object's mass was m = w/g = 1.9989 kg. Let k_rf and k_rt be the stiffness values from the diagonal Cartesian stiffness matrix of Equation (6), specified as Equation (20):

K_c = diag(k_rf, k_rf, k_rf, k_rt, k_rt, k_rt, k_lf, k_lf, k_lf, k_lt, k_lt, k_lt),    (20)

which are related to the right arm and concern the components of the 3 × 1 force and torque, respectively (analogously, k_lf and k_lt are related to the left arm). During the experiments, these values were set to k_rf = 500 and k_rt = 100, except during the identification procedure (Section 3), when they were set higher, i.e., k_rf = 1000 and k_rt = 400 for phases 1 and 2, and k_rf = 1500 and k_rt = 1200 for phases 3 and 4. Each element of the normalized damping vector ξ (Equation (6)) was set to 0.7. The selected results from stage 2 (Figure 10f,g) of the test task are presented in Figure 11. Only a part of the sequence of movements is considered, with reference to the z-axis and y-axis Cartesian desired positions (Figure 11a,d, respectively) of the right end-effector in the global coordinate frame (Figure 10a). During these movements, the x-axis Cartesian desired position was constant and equal to 0.7 m.
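The stiffness matrix of Equation (20) and the reported mass computation can be reproduced in a few lines (the entry ordering of K_c is assumed to follow the block structure described in Section 2):

```python
import numpy as np

def cartesian_stiffness(k_rf, k_rt, k_lf, k_lt):
    """Equation (20)-style 12x12 diagonal Cartesian stiffness for n_e = 2:
    three translational and three rotational entries per effector."""
    return np.diag([k_rf] * 3 + [k_rt] * 3 + [k_lf] * 3 + [k_lt] * 3)

K_c = cartesian_stiffness(500.0, 100.0, 500.0, 100.0)

# Mass from the identified weight, as in the text: m = w / g.
w = 19.6024                     # identified weight [N]
g = 9.80665                     # gravitational acceleration [m/s^2]
m = w / g                       # approximately 1.9989 kg
```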
In the first case (no compensation), the z-axis Cartesian position error (Figure 11b) between the desired and actual position was considerable, and it oscillated around a non-zero value. In the second case (with weight compensation), this error (Figure 11c) was smaller, as it oscillated around zero and was comparable with the analogous error from experiments with a much lighter object (e.g., 100 times lighter), where there was no need to use the compensation algorithm. In contrast to the z-axis error, the y-axis Cartesian position errors (Figure 11e,f) were almost identical in both cases, because the described algorithm refers mainly to the z-axis static effects of the object's dynamics.

Figure 11. Experiments in simulation. The plots from control in Cartesian impedance mode with the grasped 2 kg object: (a,d) the Cartesian desired positions, (b,e) the errors with no compensation, (c,f) the errors with object weight compensation. The upper and lower rows concern the z-axis and y-axis, respectively.
After the successful experiments in simulation, the experiments on the real robot (Figure 12) were performed. The general test scenario was the same as in the simulation. The results were obtained for a number of test repetitions. The general nature of the signals matches the simulation results and does not change through the subsequent test iterations (Figure 13).

Figure 13. Experiments on the real robot. The plots from control in Cartesian impedance mode with the grasped 2 kg object: (a,d) the Cartesian desired positions, (b,e) the errors with no compensation, (c,f) the errors with object weight compensation. The upper and lower rows concern the z-axis and y-axis, respectively. The black plot is the mean of 15 runs, and the red plots represent the minimum and maximum values.
The differences between the simulation and real-world results stem primarily from force/torque measurement inaccuracies. The hardware experiments confirm that the control system operates as intended (https://vimeo.com/618941465, accessed on 10 October 2021).

Conclusions
In an extensive survey [41], the authors distinguish three categories of methods for estimating the inertial parameters of objects: purely visual, exploratory and fixed-object. For example, the authors of [42] investigate the usability of simulation data to learn manipulation tasks in an end-to-end manner with a model-free RL algorithm and to train the robot controller, especially when an unknown object is being manipulated. The works [43][44][45] consider a robotic platform similar to the one in our tests, but artificial neural networks together with fuzzy logic are employed to control the robot hardware. Our exemplary approach belongs to the fixed-object methods, such as those in [46,47], and evolved from [48]. It was chosen due to its accuracy, needed by impedance control, and its correspondence to applications where the grasp is an indispensable part. Here, we concentrate on static error elimination, as it is one of the main problems of impedance control, where the dynamic errors in contact or free motion are unavoidable. The verification experiments confirm that the proposed object weight compensation algorithm is correct and fulfils the expectations, as it enables a reduction of the z-axis position error. Such errors can cause unexpected collisions and lead to task execution repetition or failure, even for impedance controlled manipulators. This could be especially undesirable in systems such as autonomous robots [49], where both the energy and the time to execute a task are constrained. It should be noted that the method relies on precise measurements of a wrench, which, in general, are not trivial to achieve due to the influence of end-effector inertial parameters in both static [48] and dynamic cases [50][51][52]. A survey of inertia parameter identification methods is presented in [53].
In this article, the compensation method was incorporated into a unique and versatile system with two different impedance control laws. Additionally, the exemplary mass centre parameter identification procedure addresses the problem of an unknown gripper model, which is convenient given task-specific gripper customisations. The resultant integrated controller is the basis for recent work with the Velma robot. We believe that a system description that fulfils current system engineering standards and utilises contemporary domain-specific graphical languages will make it easier to apply the results in similar works performed by other researchers.

Funding: This work was funded within the "Look & learn: Skill acquisition by a companion robot based on task demonstration" Warsaw University of Technology project.