Abstract
To ensure smooth robot operations, the parameters of its kinematic model and the registration transformation between the robot base and world coordinate frames must be determined. Both tasks require data acquired by external sensors that can measure either 3D locations or full 6D poses. We show that the use of full pose measurements leads to much smaller robot orientation errors when compared with the outcome of calibration and registration procedures based on 3D data only. Robot position errors are comparable for both types of data. The conclusion is based on extensive simulations of a 7 degree-of-freedom robot arm and different levels of pseudo-noise perturbing both the positional and rotational components of pose.
1. Introduction
The topic of robot calibration is well-established, yet it is still a significant factor identified by end-users as negatively impacting robot usability and utility [1]. Calibration is followed by registration of the robot frame to the world frame so that accurate encoder angles can be obtained from inverse kinematics and fed to the robot’s controller. Both procedures have a profound impact on robot performance and, as pointed out in [2], “it is impossible to distinguish the end-effector error contributed either by” incorrect model parameters or by an inaccurate registration transformation.
Various methods of calibrating a robot’s kinematic chain have been developed (e.g., [3,4,5]). Many of these methods rely on intrinsic kinematic models (e.g., [6,7,8,9]), which minimize complicated, nonlinear error functions (unless only linearized error models are considered, which may trade accuracy for mathematical simplicity) in at least N-dimensional space, where N is the number of controllable joints in the serial kinematic chain. Calibrations based on extended modeling (i.e., beyond rigid kinematics) include compensating for thermal effects [10], elastostatic errors [11], and higher order errors [12]. Likewise, examples of non-kinematic calibrations can be seen in [13,14]. There are also compensation techniques that can handle both kinematic and non-kinematic errors, but they require continuous calculation and application of corrections during on-line operations [15,16,17], or dynamically selected, pre-calculated hand–eye calibrations from a table [18].
Robot calibration procedures depend on theoretical models of the mechanical system’s forward kinematics. For a serial, open-chain robot, the Product of Exponentials (POE) model—based on screw theory—is considered one of the most versatile, as it can handle the singularities present in the popular Denavit–Hartenberg (DH) parameter model [19]. For robots with revolute joints only, each joint is parametrized by a three-dimensional (3D) unit vector indicating the axis of rotation and a 3D vector of any point on the axis line. Calibration procedures for such models rely on Circle Point Analysis (CPA) applied to 3D data acquired with a laser tracker or other sensor: the robot is positioned in a zero-reference configuration (i.e., with all joint angles set to zero), and then each joint is rotated one by one while all other joints are kept fixed at zero [20,21]. Unfortunately, POE-based models do not explicitly include the zero offsets of encoder angles. Accurate estimation of the zero offsets is critical because the largest contribution to the robot positioning error (97%) comes from incorrect zero offsets [22]. Performing the zero-offset calibration within CPA causes errors in the registration transformation and in the individual offsets to accumulate. This may lead to inconsistent calibration results. For some poses, the calibration process reduced robot pose error seven-fold; for others, it actually increased the error twofold [23].
A desired outcome of calibration is the error reduction in the full pose of the robot end-effector, i.e., in its position and orientation. However, the two components belong to different spaces: position is a vector in 3D space whose components have length units, like millimeters, while the orientation matrix is parameterized by three angles in degrees. This causes a fundamental scaling problem when a full pose error is minimized (as discussed in [24], ad hoc scaling factors put more weight on either the linear or the angular part of the pose error and push the optimizer towards different solutions). This may become a problem in commercial applications where not only the position but also the orientation of the end-effector is important. For example, in automated drilling, the parallelism between the spindle axis and the normal of the drilling plate surface should be below 0.2° [25]. A small orientation error of 0.05°, required for automated riveting, drilling and spot welding, was demonstrated by applying online pose corrections in [26]. Automated fiber placement is another example of an industrial application where the orientation of the robot end-effector is important [27].
The approach that we introduce in this paper avoids the pitfall of minimizing an unbalanced 6D error. First, link twists are determined in a CPA-like procedure from 3D data. Then, using full 6D poses measured by sensors, encoder zero offsets are determined in a separate minimization. The error function used in this minimization depends neither on the linear DH parameters (link lengths and linear offsets) nor on the position components of the noisy 6D poses acquired by sensors. Once the twists and zero offsets are known, they are inserted into another error function, which depends only on the position components of the sensor data. The remaining linear DH parameters are determined by minimizing this second error function. For comparison, robot calibration based on 3D sensor data only is also performed. The obtained results clearly show that the orientation errors of the end-effector are smaller when the orientation part of the 6D data is used. At the same time, the position errors are comparable for both methods.
2. Background
The frame associated with the robot’s Tool Center Point (TCP) coordinate system can be expressed as a 4 × 4 homogeneous transformation $T$ consisting of a 3 × 3 rotation matrix $R$ and a 3 × 1 translation vector $p$:
$$T = \begin{bmatrix} R & p \\ 0 & 1 \end{bmatrix}. \tag{1}$$
For a serial, open-chain collaborative robot arm with $N$ revolute joints, the frame $T_k$ in the robot’s base coordinate system can be determined using a forward kinematic model:
$$T_k(\Theta_k) = A_1(\theta_{k,1})\, A_2(\theta_{k,2}) \cdots A_N(\theta_{k,N})\; T_{tool}, \tag{2}$$
where
$$A_n(\theta_{k,n}) = \begin{bmatrix} R_n(\theta_{k,n}) & t_n(\theta_{k,n}) \\ 0 & 1 \end{bmatrix} \tag{3}$$
and $\theta_{k,n}$ is the encoder angle of the $n$-th revolute joint for the $k$-th robot configuration, $A_n$ is the homogeneous transformation associated with the $n$-th joint, $\Theta_k = [\theta_{k,1}, \dots, \theta_{k,N}]$, and $T_{tool}$ is a transformation from the robot’s flange frame to the TCP. Using the nominal DH parameters, the rotation component of each $A_n$ can be written as
$$R_n(\theta_{k,n}) = Rot_z(\theta_{k,n} + \delta_n)\, Rot_x(\alpha_n), \tag{4}$$
where $Rot_z$ and $Rot_x$ are rotations around the z and x axis, respectively. The two angular DH parameters in (4) are $\alpha_n$ (the link twist) and $\delta_n$ (the zero offset angle for the $n$-th encoder). The translation component of $A_n$ can be expressed as
$$t_n(\theta_{k,n}) = \left[\, a_n \cos(\theta_{k,n} + \delta_n),\; a_n \sin(\theta_{k,n} + \delta_n),\; d_n \,\right]^{T}, \tag{5}$$
where $a_n$ and $d_n$ are the two linear DH parameters (link length and offset), and $^{T}$ denotes the vector transpose.
From (2) and (4) it can be seen that the rotation part $R_k$ of the frame $T_k$ depends only on the rotation components:
$$R_k = R_1(\theta_{k,1})\, R_2(\theta_{k,2}) \cdots R_N(\theta_{k,N})\; R_{tool}. \tag{6}$$
This is a general property of serial chain manipulators with revolute joints, and is not dependent on a particular kinematic model (here, we use the DH model for illustration purposes only). In the remainder of this paper, we use the notation
$$R_k = R(\Theta_k; \boldsymbol{\alpha}, \boldsymbol{\delta}), \tag{7}$$
where $\boldsymbol{\alpha} = [\alpha_1, \dots, \alpha_N]$ and $\boldsymbol{\delta} = [\delta_1, \dots, \delta_N]$ are the vectors of the DH angular parameters. Note that the positional component $p_k$ of the frame $T_k$ depends on the joint angles and all four vectors of the DH parameters:
$$p_k = p(\Theta_k; \boldsymbol{\alpha}, \boldsymbol{\delta}, \boldsymbol{a}, \boldsymbol{d}). \tag{8}$$
3. Determination of Link Twist
To ensure that the forward kinematics correctly predict the tool pose in the robot coordinate frame, the DH parameters must be determined first, during the robot calibration process. Once calibrated, they remain fixed during robot on-line operations. Calibration may be performed by installing a spherically-mounted retroreflector (SMR) at the robot’s TCP and tracking it with a laser tracker. From the four vectors of DH parameters $(\boldsymbol{\alpha}, \boldsymbol{\delta}, \boldsymbol{a}, \boldsymbol{d})$, the twist angles $\alpha_n$ can be determined independently of the other DH parameters by using the 3D data acquired for the CPA procedure. The twist angle $\alpha_n$ is defined as the angle between two consecutive joint axes of rotation, $n$ and $n+1$ (the last twist $\alpha_N$ is, by definition, set to zero). If $\{p_k^{(n)}\}$ denotes a set of $K$ 3D points calculated in (8) and acquired for the $n$-th joint in the CPA procedure, then these points are distributed along an arc (a section of a circle) on a plane in 3D space. Thus, for each joint $n$, a unit vector $v_n$ normal to the fitted plane can be calculated. While the exact locations of the points depend on all DH parameters, the vector $v_n$ is parallel to the axis of rotation and, therefore, $\alpha_n$ can be determined from the scalar product of two consecutive axes, $v_n \cdot v_{n+1}$. If $\{q_k^{(n)}\}$ is the set of 3D points measured by the laser tracker which correspond to $\{p_k^{(n)}\}$, then a unit vector $u_n$ normal to the plane fitted to $\{q_k^{(n)}\}$ can be calculated, and the angle between two consecutive normals $u_n$ and $u_{n+1}$ is used as the estimate of $\alpha_n$.
To get correctly estimated twist angles $\hat{\alpha}_n$, two important steps must be followed. First, since arccos(·) is an even function, the sign of the estimated angle must be set equal to the sign of the default (i.e., theoretical) twist angle $\alpha_n^{(0)}$. Second, the plane fitting procedure provides only a normal to the plane; its particular direction (up or down) depends on a bounding box containing the points. To remove this ambiguity, the fitted normal must obey the right-hand rule together with the acquired 3D points $\{q_k^{(n)}\}$, which are located on a section of a circle. Thus, the estimated, corrected twist angle is determined as
$$\hat{\alpha}_n = \mathrm{sign}\!\left(\alpha_n^{(0)}\right) \arccos\!\left( u_n \cdot u_{n+1} \right). \tag{9}$$
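A minimal sketch of this twist estimation is given below, assuming the measured CPA points of each joint are available as arrays. The plane normal is obtained from an SVD-based least-squares fit, oriented by the right-hand rule along the direction of motion of the points, and the twist is the angle between consecutive normals with its sign taken from the default twist. The helper names (e.g., `fit_circle_normal`) and the toy arcs are illustrative, not from the paper.

```python
import numpy as np

def fit_circle_normal(points):
    """Unit normal of the plane fitted (least squares) to 3D points on an arc,
    oriented by the right-hand rule with respect to the direction of motion."""
    c = points.mean(axis=0)
    # The right singular vector with the smallest singular value spans the plane normal.
    _, _, vt = np.linalg.svd(points - c)
    n = vt[-1]
    # Orient the normal so that the points advance counter-clockwise around it
    # (right-hand rule), removing the up/down ambiguity of the plane fit.
    swept = np.cross(points[0] - c, points[len(points) // 2] - c)
    return n if np.dot(n, swept) >= 0 else -n

def estimated_twist(points_n, points_np1, alpha_default):
    """Signed twist between joints n and n+1 from two CPA point sets;
    the sign is taken from the default (theoretical) twist angle, cf. (9)."""
    u_n = fit_circle_normal(points_n)
    u_np1 = fit_circle_normal(points_np1)
    ang = np.arccos(np.clip(np.dot(u_n, u_np1), -1.0, 1.0))
    return np.sign(alpha_default) * ang if alpha_default != 0 else ang

# Toy example: two joint axes 90 degrees apart, arcs of slightly noisy points around each.
t = np.linspace(0.2, 1.4, 25)
arc1 = np.c_[np.cos(t), np.sin(t), np.zeros_like(t)] + 1e-4 * np.random.randn(25, 3)
arc2 = np.c_[np.cos(t), np.zeros_like(t), np.sin(t)] + 1e-4 * np.random.randn(25, 3)
print(np.rad2deg(estimated_twist(arc1, arc2, alpha_default=np.deg2rad(90))))
```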
4. Robot Calibration Based on 3D Measurements
Once the twist angles $\hat{\boldsymbol{\alpha}}$ are estimated, they can be inserted into (4) and the remaining DH parameters can be found in a traditional calibration procedure using 3D data. Given $K$ configurations of the arm (i.e., “poses”) defined by the joint vectors $\Theta_k$, the SMR is moved to $K$ positions in 3D Cartesian space. Since the modeled points $p_k$ in (8) and the points $q_k$ measured by the laser tracker are determined in different coordinate frames, the error function in the calibration process is based on relative distances between two 3D points to avoid a dependence on the registration. For convenience, the whole set of $K$ points can be divided into two halves, and then
$$E_1(\boldsymbol{\delta}, \boldsymbol{a}, \boldsymbol{d}) = \sum_{k=1}^{K/2} \left[ D_k(\boldsymbol{\delta}, \boldsymbol{a}, \boldsymbol{d}) - L_k \right]^2, \tag{10}$$
where
$$D_k(\boldsymbol{\delta}, \boldsymbol{a}, \boldsymbol{d}) = \left\| p(\Theta_k; \hat{\boldsymbol{\alpha}}, \boldsymbol{\delta}, \boldsymbol{a}, \boldsymbol{d}) - p(\Theta_{k+K/2}; \hat{\boldsymbol{\alpha}}, \boldsymbol{\delta}, \boldsymbol{a}, \boldsymbol{d}) \right\|, \tag{11}$$
the positions $p(\Theta_k; \cdot)$ and $p(\Theta_{k+K/2}; \cdot)$ are determined in (8), $\|\cdot\|$ is the Euclidean norm, and $L_k = \| q_k - q_{k+K/2} \|$ is the distance between the two corresponding points measured by the laser tracker.
Thus, the fitted DH parameters can be estimated by minimizing $E_1$,
$$(\hat{\boldsymbol{\delta}}, \hat{\boldsymbol{a}}, \hat{\boldsymbol{d}}) = \arg\min_{\boldsymbol{\delta}, \boldsymbol{a}, \boldsymbol{d}} E_1(\boldsymbol{\delta}, \boldsymbol{a}, \boldsymbol{d}), \tag{12}$$
providing the vector of link twist angles $\hat{\boldsymbol{\alpha}}$ is known. The actual dimension of the search space is $3N - 2$ since the distance between two points in (11) does not depend on $\delta_1$ and $d_1$ (the two parameters may have arbitrary values which only affect the registration transformation between robot and sensor). In the remainder of this paper, we call this procedure Method 1.
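A sketch of the relative-distance error function (10)–(11) and its minimization is shown below. It reuses the hypothetical `forward_kinematics` helper from the Background sketch; the use of scipy.optimize.least_squares as the NLS optimizer is an assumption for illustration (the study used Matlab’s built-in optimizer), and the data arrays in the commented example are placeholders.

```python
import numpy as np
from scipy.optimize import least_squares

def tcp_position(theta_vec, alphas, deltas, aa, dd, T_tool):
    """Position p(Theta_k; alpha, delta, a, d) from the forward kinematics (8)."""
    return forward_kinematics(theta_vec, alphas, aa, dd, deltas, T_tool)[:3, 3]

def residuals_E1(x, Thetas, L_meas, alphas_hat, T_tool):
    """Residuals D_k - L_k of (10)-(11); x packs the free parameters (delta, a, d)."""
    N = len(alphas_hat)
    deltas, aa, dd = x[:N], x[N:2 * N], x[2 * N:]
    half = len(Thetas) // 2
    res = []
    for k in range(half):
        p1 = tcp_position(Thetas[k], alphas_hat, deltas, aa, dd, T_tool)
        p2 = tcp_position(Thetas[k + half], alphas_hat, deltas, aa, dd, T_tool)
        res.append(np.linalg.norm(p1 - p2) - L_meas[k])   # D_k - L_k
    return np.array(res)

# Placeholder usage: Thetas is a (K x N) array of joint configurations, L_meas the
# K/2 measured relative distances between paired SMR positions, x0 the default DH values.
# fit = least_squares(residuals_E1, x0, args=(Thetas, L_meas, alphas_hat, T_tool))
# deltas_fit, a_fit, d_fit = np.split(fit.x, 3)
```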
5. Calibration Based on 6D Measurements
Full 6D poses have been used for robot calibration in different procedures [14,28,29]. The approach we propose calculates the zero offsets in a separate minimization based on the orientation components of the 6D poses and the twist angles $\hat{\boldsymbol{\alpha}}$ determined earlier.
In the remainder of this paper, we assume that, for each robot configuration defined by $\Theta_k$, there is a corresponding 3 × 3 rotation matrix $R_k^{s}$ provided by an external sensor. Both $R_k^{s}$ and $R_k$ in (7) are determined in different coordinate frames. If $R_X$ denotes the rotation component of the registration matrix then, for each $k$-th robot orientation $R_k$ in (7) and the corresponding $R_k^{s}$ measured with the external sensor, the following relation holds:
$$R_k^{s} = R_X\, R(\Theta_k; \boldsymbol{\alpha}, \boldsymbol{\delta})\, \Delta R_k, \tag{13}$$
where $\Delta R_k$ is a small, random rotation accounting for noise in the orientation part of the 6D data acquired by sensors. For a pair of orientations $R_k^{s}$ and $R_j^{s}$ (where $k \neq j$), the matrix $C_{k,j}$ can be defined as
$$C_{k,j}(\boldsymbol{\delta}) = \left[ R(\Theta_k; \hat{\boldsymbol{\alpha}}, \boldsymbol{\delta})^{T}\, R(\Theta_j; \hat{\boldsymbol{\alpha}}, \boldsymbol{\delta}) \right]^{T} \left( R_k^{s} \right)^{T} R_j^{s}, \tag{14}$$
and its angle of rotation is calculated as
$$\beta_{k,j}(\boldsymbol{\delta}) = \arccos\!\left( \frac{\mathrm{trace}\, C_{k,j}(\boldsymbol{\delta}) - 1}{2} \right). \tag{15}$$
The matrix $C_{k,j}$ and its angle $\beta_{k,j}$ depend on the measured joint angle vectors $\Theta_k$ and $\Theta_j$, the twist angles $\hat{\boldsymbol{\alpha}}$ estimated earlier, and all zero offsets $\delta_n$ for $n = 1, \dots, N$, which can be obtained by minimizing the error function
$$\hat{\boldsymbol{\delta}} = \arg\min_{\boldsymbol{\delta}} E_2(\boldsymbol{\delta}), \tag{16}$$
where
$$E_2(\boldsymbol{\delta}) = \sum_{k=1}^{K/2} \beta_{k,\, k+K/2}(\boldsymbol{\delta})^{2}. \tag{17}$$
Once the zero offsets $\hat{\boldsymbol{\delta}}$ are estimated, they can be inserted in (12) and the linear DH parameters $\boldsymbol{a}$ and $\boldsymbol{d}$ can be found by minimizing $E_1$ in (10). In the remainder of this paper, we call this procedure Method 2.
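Below is a minimal sketch of the orientation-only step of Method 2 under the reconstruction (13)–(17) given above: relative rotations computed from the model are compared with relative rotations computed from the measured orientations, so the registration cancels, and the sum of squared mismatch angles is minimized over the zero offsets. The pairing of poses into halves and the helper names (`rotation_only_fk`, `residuals_E2`) are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def rotation_angle(C):
    """Angle of rotation of a rotation matrix, arccos((trace(C) - 1) / 2), cf. (15)."""
    return np.arccos(np.clip((np.trace(C) - 1.0) / 2.0, -1.0, 1.0))

def rotation_only_fk(theta_vec, alphas, deltas, R_tool):
    """Rotation R(Theta_k; alpha, delta) of the TCP frame, as in (6)-(7)."""
    R = np.eye(3)
    for th, al, de in zip(theta_vec, alphas, deltas):
        cz, sz = np.cos(th + de), np.sin(th + de)
        cx, sx = np.cos(al), np.sin(al)
        R = R @ np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]]) \
              @ np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return R @ R_tool

def residuals_E2(deltas, Thetas, R_sensor, alphas_hat, R_tool):
    """Mismatch angles beta between model and measured relative rotations, cf. (14)-(17)."""
    half = len(Thetas) // 2
    res = []
    for k in range(half):
        R_model_rel = rotation_only_fk(Thetas[k], alphas_hat, deltas, R_tool).T \
                      @ rotation_only_fk(Thetas[k + half], alphas_hat, deltas, R_tool)
        R_meas_rel = R_sensor[k].T @ R_sensor[k + half]   # the registration cancels here
        res.append(rotation_angle(R_model_rel.T @ R_meas_rel))
    return np.array(res)

# Placeholder usage (Thetas: K x N joint vectors, R_sensor: K measured 3x3 rotations):
# fit = least_squares(residuals_E2, np.zeros(N), args=(Thetas, R_sensor, alphas_hat, R_tool))
# deltas_hat = fit.x
```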
To show the scaling problem that arises when both position and rotation errors are minimized simultaneously, robot calibration was also attempted by minimizing the following error function:
$$E_3(\boldsymbol{\delta}, \boldsymbol{a}, \boldsymbol{d}) = E_1(\boldsymbol{\delta}, \boldsymbol{a}, \boldsymbol{d}) + w\, E_2(\boldsymbol{\delta}), \tag{18}$$
where $E_1$ is defined in (10), $E_2$ in (17), and the positive scaling factor $w$ ensures the correct dimensionality of $E_3$. In the remainder of this paper, we call this procedure Method 3.
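For completeness, a short sketch of the combined objective (18) used in Method 3 is given below, stacking the residual vectors of the hypothetical `residuals_E1` and `residuals_E2` helpers from the earlier sketches with a scaling factor w; which value of w to use is exactly the issue illustrated later in Figure 6.

```python
import numpy as np

def residuals_E3(x, w, Thetas, L_meas, R_sensor, alphas_hat, T_tool):
    """Stacked residuals whose sum of squares equals E_1 + w * E_2, cf. (18);
    x packs (delta, a, d) and w is the unit-balancing weight."""
    N = len(alphas_hat)
    deltas = x[:N]
    r1 = residuals_E1(x, Thetas, L_meas, alphas_hat, T_tool)                   # millimeters
    r2 = residuals_E2(deltas, Thetas, R_sensor, alphas_hat, T_tool[:3, :3])    # radians
    return np.concatenate([r1, np.sqrt(w) * r2])   # sqrt(w) because residuals get squared
```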
6. Registering Robot Frame
When all robot model parameters are known, i.e., the estimated $\hat{\boldsymbol{\alpha}}$, $\hat{\boldsymbol{\delta}}$, $\hat{\boldsymbol{a}}$, $\hat{\boldsymbol{d}}$, and arbitrary values are assigned to $\delta_1$ and $d_1$, then a registration transformation (rotation $R_X$ and translation $t_X$) between the coordinate systems of the robot and the laser tracker can be determined.
There are many registration techniques; one of the most commonly used was developed in [30] and is based on 3D data. For calibration Method 1 described in Section 4, where only 3D data acquired by the sensor are available, there is only one possible registration transformation $(R_{X1}, t_{X1})$. When 6D data are available, the registration transformation can be calculated in two ways. In the first (which we name Registration 1), $R_{X1}$ is calculated using only the 3D positional parts of the full poses, as in [30]. In the second (named hereafter Registration 2), the rotation matrix $R_{X2}$ is calculated as the mean rotation, calculated properly [31], from the individual matrices $R_k^{s}\, R(\Theta_k; \hat{\boldsymbol{\alpha}}, \hat{\boldsymbol{\delta}})^{T}$ in (13). Once $R_{X1}$ and $R_{X2}$ are known, the translation vectors can be determined as
$$t_{Xi} = \bar{q} - R_{Xi}\, \bar{p}, \qquad i = 1, 2, \tag{19}$$
where $\bar{p}$ and $\bar{q}$ are the centroids of the collected 3D positions in the robot and the external sensor frame, respectively.
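A sketch of both registrations follows. Registration 1 is the SVD-based point-set fit of [30]; Registration 2 averages the per-pose rotation estimates implied by (13) using the chordal (projection) mean, one of the proper averaging schemes discussed in [31]; in both cases the translation follows (19). Variable names are illustrative.

```python
import numpy as np

def registration_1(p_robot, q_sensor):
    """Rotation/translation minimizing sum ||q_k - (R p_k + t)||^2 (Arun et al. [30])."""
    cp, cq = p_robot.mean(axis=0), q_sensor.mean(axis=0)
    H = (p_robot - cp).T @ (q_sensor - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp                       # translation from (19)

def registration_2(R_robot, R_sensor, p_robot, q_sensor):
    """Rotation as the chordal mean of the per-pose estimates R_sensor_k R_robot_k^T;
    translation again from the centroids as in (19)."""
    M = sum(Rs @ Rr.T for Rs, Rr in zip(R_sensor, R_robot)) / len(R_robot)
    U, _, Vt = np.linalg.svd(M)                 # project the arithmetic mean back onto SO(3)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt
    return R, q_sensor.mean(axis=0) - R @ p_robot.mean(axis=0)
```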
7. Simulation
All calculations were performed in Matlab. The built-in nonlinear least-squares (NLS) optimizer with default input parameters was used to minimize the error functions $E_1$ in (10), $E_2$ in (17), and $E_3$ in (18). As a starting point for all optimizations, the default DH parameters were used.
To test the proposed calibration method, a kinematic model of a 7 degrees-of-freedom (DoF) industrial robot arm, KUKA LWR 4+, was used. The robot’s default DH parameters are provided in Table 1 (all angular parameters are in degrees and all linear parameters in millimeters). The ground truth (GT) parameters used in simulations were defined as a sum of the defaults and deviations, for example $\alpha_n^{GT} = \alpha_n^{(0)} + \Delta\alpha_n$. The deviations from the default DH parameters are provided in Table 2. Two sets of arbitrarily chosen deviations were used in the simulations: small deviations $\Delta_{small}$ and large deviations $\Delta_{large}$. The GT parameters were used to generate noisy sensor data $R_k^{s}$ from (7) and $q_k$ from (8),
$$R_k^{s} = R_{X}^{GT}\, R(\Theta_k; \boldsymbol{\alpha}^{GT}, \boldsymbol{\delta}^{GT})\, \Delta R_k \tag{20}$$
and
$$q_k = R_{X}^{GT}\, p(\Theta_k; \boldsymbol{\alpha}^{GT}, \boldsymbol{\delta}^{GT}, \boldsymbol{a}^{GT}, \boldsymbol{d}^{GT}) + t_{X}^{GT} + \varepsilon_k, \tag{21}$$
where $(R_{X}^{GT}, t_{X}^{GT})$ is an arbitrarily selected transformation between the robot and sensor frames, $\varepsilon_k$ is 3D positional Gaussian noise with standard deviation $\sigma_P$, and $\eta_k$ is a 3D angular Gaussian noise vector with standard deviation $\sigma_R$, which was used to generate the small random rotations
$$\Delta R_k = \exp\!\left( [\eta_k]_{\times} \right), \tag{22}$$
where $[\eta_k]_{\times}$ denotes the skew-symmetric matrix built from $\eta_k$.
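The following sketch illustrates this noise model, assuming (as in the reconstruction of (22) above) that the small random rotations are built from a Gaussian angle vector via the exponential map; scipy’s Rotation class is used for convenience, and the ground-truth pose and registration in the example are placeholders.

```python
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)

def noisy_pose(R_true, p_true, R_X, t_X, sigma_P, sigma_R_deg):
    """Simulated sensor measurement of one pose: ground-truth pose mapped through the
    robot-to-sensor registration (R_X, t_X), then perturbed by Gaussian positional noise
    and a small random rotation, as in (20)-(22)."""
    eta = np.deg2rad(sigma_R_deg) * rng.standard_normal(3)   # angle vector, cf. (22)
    dR = Rotation.from_rotvec(eta).as_matrix()               # small random rotation
    R_meas = R_X @ R_true @ dR                               # cf. (20)
    q_meas = R_X @ p_true + t_X + sigma_P * rng.standard_normal(3)  # cf. (21)
    return R_meas, q_meas

# Example: identity ground-truth pose, placeholder registration, 0.1 mm / 0.1 deg noise.
R_meas, q_meas = noisy_pose(np.eye(3), np.zeros(3), np.eye(3), np.zeros(3), 0.1, 0.1)
ang = np.arccos(np.clip((np.trace(R_meas) - 1) / 2, -1.0, 1.0))
print(np.round(q_meas, 4), np.round(np.degrees(ang), 4))
```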
Table 1.
Default DH parameters.
Table 2.
Deviations of DH parameters.
In Figure 1a, examples of histograms of the x component of the noise vectors $\eta_k$ are shown (histograms of the y and z components look similar). In Figure 1b, histograms of the corresponding angles of rotation of the small random rotations $\Delta R_k$ are plotted. Note that the histograms of the $\eta_k$ components are well approximated by a Gaussian distribution, while the non-symmetric histograms of the rotation angles are well approximated by a Fisher–Bingham–Kent (FBK) distribution [32]. Similar histograms of angles were observed for experimental data acquired with a marker-based pose measuring system; see Figures 1 and 3 in [33].
Figure 1.
Characteristics of the simulated small random rotations in (22): (a) histograms of the x component of the angle vectors $\eta_k$; (b) histograms of the angle of rotation of the rotation matrices $\Delta R_k$. Blue lines correspond to weak noise (small $\sigma_R$) and black lines correspond to strong noise (large $\sigma_R$).
The tool transformation $T_{tool}$ needed in (7) and (8) was arbitrarily chosen with the caveat that the TCP is not located on the last axis of rotation, so that the 3D data acquired for the CPA procedure are located on a circle.
For each $n$-th joint, vectors of encoder angles were created such that their components were all zero except the $n$-th one,
$$\theta_{k,n} = \theta_{n}^{start} + (k-1)\, \Delta\theta_n, \qquad k = 1, \dots, K, \tag{23}$$
where $\theta_{n}^{start}$ and $\Delta\theta_n$ were chosen such that all $\theta_{k,n}$ were within the valid range of the $n$-th encoder angles. These angles were then inserted in (21) to generate the 3D sensor data from which the twist angles $\hat{\alpha}_n$ were estimated as described in Section 3. In order to estimate the remaining DH parameters and calculate the registration transformation, another set of joint angle vectors $\Theta_k$ was selected in such a way that the corresponding poses in (1) were randomly scattered in the workspace that is accessible to the robot arm. In computer simulations, this is the only restriction for the selection of tool poses, but additional limitations may arise in lab experiments due to the use of a line-of-sight sensor for pose acquisition.
In addition, a separate batch of $J$ joint angle vectors was selected for the evaluation of the calibration and registration procedures. These test poses were used neither in calibration nor in registration. To test the performance of all three procedures, the robot kinematic model in (1) was used with the parameters estimated by Methods 1, 2 and 3. For Method 1, the registration transformation was calculated using 3D data. For Methods 2 and 3, both Registration 1 and Registration 2 were calculated, as described in Section 6. For each tested arm configuration $\Theta_j$ and selected $m$-th pair of noise levels $(\sigma_{P,m}, \sigma_{R,m})$, the corresponding rotation $R_j^{s}$ and position $q_j$ were calculated in (20) and (21) to simulate noisy 6D measurements acquired by the sensor. Then, the mean $\bar{\beta}$ of the $J$ angles of rotation and the mean $\bar{D}$ of the $J$ distances between measured and predicted positions were calculated, where
$$\beta_j = \arccos\!\left( \frac{\mathrm{trace}\!\left( R_j^{s} \left[ R_X\, R(\Theta_j; \hat{\boldsymbol{\alpha}}, \hat{\boldsymbol{\delta}}) \right]^{T} \right) - 1}{2} \right), \qquad D_j = \left\| q_j - R_X\, p(\Theta_j; \hat{\boldsymbol{\alpha}}, \hat{\boldsymbol{\delta}}, \hat{\boldsymbol{a}}, \hat{\boldsymbol{d}}) - t_X \right\|, \tag{24}$$
and the transformation $(R_X, t_X)$ was the one appropriate for each of the three calibration procedures and (for Methods 2 and 3) the appropriate Registration 1 or 2. Both calculated means $\bar{\beta}$ and $\bar{D}$ were used as metrics to gauge the performance of the tested procedures.
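A minimal sketch of these evaluation metrics is given below, assuming the model poses predicted with the fitted parameters have already been mapped into the sensor frame with the chosen registration(s): the angular metric is the rotation angle of the mismatch matrix and the positional metric is the Euclidean distance. Array names are placeholders.

```python
import numpy as np

def pose_errors(R_pred, p_pred, R_meas, q_meas):
    """Per-pose angular and positional errors of (24), given predictions already
    transformed to the sensor frame (possibly with different registrations for the
    rotational and positional parts, as recommended in Section 6)."""
    betas, dists = [], []
    for Rp, pp, Rm, qm in zip(R_pred, p_pred, R_meas, q_meas):
        C = Rm @ Rp.T                                          # orientation mismatch
        betas.append(np.arccos(np.clip((np.trace(C) - 1) / 2, -1.0, 1.0)))
        dists.append(np.linalg.norm(qm - pp))
    return np.degrees(betas), np.array(dists)

# Means and standard deviations over repeated noise realizations then follow (25), e.g.:
# beta_mean, beta_std = np.mean(beta_bar_im), np.std(beta_bar_im, ddof=1)
```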
These steps were repeated for each of the selected noise levels and both sets of GT parameters corresponding to the two deviation vectors: small $\Delta_{small}$ and large $\Delta_{large}$, as shown in Table 2. The noise levels were equally spaced between zero and 0.15 (degrees for $\sigma_R$ and millimeters for $\sigma_P$). In order to estimate the variability of the calculated metrics, all the above calculations were repeated for $I$ different realizations of noise (different sequences of pseudo-random numbers). Thus, for each $i$-th instance of noise and each $m$-th pair of noise levels $(\sigma_{P,m}, \sigma_{R,m})$, the end-effector errors were calculated: $\bar{D}_{i,m}$ for the positional error and $\bar{\beta}_{i,m}$ for the angular error. As the final results, the averages and standard deviations over all $I$ repeats were stored for each $m$-th noise level:
$$\langle \beta \rangle_m = \frac{1}{I} \sum_{i=1}^{I} \bar{\beta}_{i,m}, \qquad s_{\beta,m} = \sqrt{ \frac{1}{I-1} \sum_{i=1}^{I} \left( \bar{\beta}_{i,m} - \langle \beta \rangle_m \right)^{2} }, \tag{25}$$
and similarly for the positional errors $\langle D \rangle_m$ and $s_{D,m}$.
To test the performance of the three error functions $E_1$, $E_2$ and $E_3$ used in calibration, for a few randomly selected noise repeats and strengths, the minimization was restarted from 300 randomly scattered initial points (i.e., starting DH parameters) and the final optimized parameters were analyzed. In addition, for Method 3, the minimization of $E_3$ was repeated for a few scaling factors $w$ in (18).
In all simulations performed in this study, the distal variant of the DH parameters was used [34]. Alternatively, the proximal variant could be used, which would affect the homogeneous matrices $A_n$ derived from it. However, not every kinematic model is suitable for describing any robot: a well-known example is a robot with two consecutive joint axes that are parallel to each other. In such a case, the DH model is not continuous and must be replaced by another model, e.g., POE [20], and the parameters specific to the given model must be determined. Whichever kinematic model is selected, it is important to use it consistently in the calibration process along with other basic definitions (like the use of a right-hand or left-hand coordinate system). With all procedural steps clearly defined and consistently followed, there is no ambiguity in the calibration process.
8. Results
The fitted DH parameters revealed different amounts of variation for different simulated conditions. The twist angles $\hat{\alpha}_n$ estimated from the 3D data generated for the CPA procedure showed moderate variations: the largest absolute deviation from the GT value over all $N$ joints and all simulated conditions (all noise levels, all repeats, and both deviations from the default values, $\Delta_{small}$ and $\Delta_{large}$) was 0.3°. The zero offsets $\hat{\delta}_n$ revealed larger absolute deviations from their GT values, as did the fitted link lengths and link offsets. Such large differences between the fitted and the GT parameters were observed mostly for the largest noise levels $\sigma_P$ and $\sigma_R$.
Figure 2 shows an example of the robot end-effector errors at the $J$ test poses. Position errors $D_j$ and orientation errors $\beta_j$ were calculated in (24) for robot DH parameters calibrated with Method 1 and Method 2. The presented errors were calculated for simulated sensor poses perturbed by one noise realization (selected arbitrarily from the $I$ repeats) at fixed noise levels $(\sigma_P, \sigma_R)$. These errors were then used to calculate $\bar{D}_{i,m}$ and $\bar{\beta}_{i,m}$ and then the mean errors $\langle D \rangle_m$ and $\langle \beta \rangle_m$ in (25) and the corresponding standard deviations $s_{D,m}$ and $s_{\beta,m}$ for each $m$-th noise level. These means and standard deviations were then used to create the plots in Figure 3, Figure 4, Figure 5 and Figure 6.
Figure 2.
Robot end-effector errors calculated at the test poses for fixed sensor noise ($\sigma_P$ in mm, $\sigma_R$ in degrees) and one arbitrarily selected noise realization: (a) positional errors $D_j$; (b) orientation errors $\beta_j$. The robot was calibrated with Method 1 (black lines) and Method 2 (blue lines).
Figure 3.
Comparison of the two registration procedures for the robot calibrated with Method 2 and data generated using: (a,b) small deviations $\Delta_{small}$ from the default DH parameters; (c,d) large deviations $\Delta_{large}$. Dependence of the mean positional error of the robot end-effector on the positional noise $\sigma_P$ in the sensor 6D data in (a,c); dependence of the mean orientation error of the robot end-effector on the angular noise $\sigma_R$ in the sensor 6D data in (b,d).
Figure 4.
Comparison of the two registration procedures for the robot calibrated with Method 3 and data generated using large deviations $\Delta_{large}$ from the default DH parameters: (a) dependence of the mean positional error of the robot end-effector on the positional noise $\sigma_P$ in the sensor 6D data; (b) dependence of the mean orientation error of the robot end-effector on the angular noise $\sigma_R$ in the sensor 6D data.
Figure 5.
Comparison of the three calibration methods: (a) dependence of the mean positional error of the robot end-effector on the positional noise $\sigma_P$—Registration 1 was used in Method 2 (blue line); (b) dependence of the mean orientation error of the robot end-effector on the noise ($\sigma_P$ for Method 1 and $\sigma_R$ for Method 2)—Registration 2 was used in Method 2 (red line); (c) dependence of the mean positional error on the positional noise $\sigma_P$—Registration 1 was used in both Method 2 and Method 3; (d) dependence of the mean orientation error on the noise—Registration 2 was used in both Method 2 and Method 3.
Figure 6.
Comparison of robot calibrations using two different scaling factors $w$ in the error function $E_3$ in Method 3: (a) dependence of the mean positional error of the robot end-effector on the positional noise $\sigma_P$ in the sensor 6D data—Registration 1 was used; (b) dependence of the mean orientation error of the robot end-effector on the angular noise $\sigma_R$ in the sensor 6D data—Registration 2 was used. Data were generated using large deviations $\Delta_{large}$ from the default DH parameters.
Figure 3 shows the outcomes of the two registration transformations $(R_{X1}, t_{X1})$ and $(R_{X2}, t_{X2})$ described in Section 6. In both cases, the robot was calibrated with Method 2. The GT parameters used in the simulation of the 6D data, i.e., the end-effector poses and the noisy poses as measured by the sensor, were obtained by modifying the default DH parameters with the deviations shown in Table 2. For both registrations, the mean errors were calculated at the same values of sensor noise ($\sigma_P$ in Figure 3a,c and $\sigma_R$ in Figure 3b,d). In each subplot, the two graphs are slightly shifted horizontally only for better visualization. Error bars in Figure 3a,c and in Figure 3b,d are the corresponding standard deviations calculated in (25) from repeated simulations of noisy sensor data.
Figure 4 shows the outcomes of the two registration procedures applied after the robot was calibrated using Method 3 and the error function $E_3$ defined in (18) with a fixed scaling factor $w$. The presented results were obtained for 6D data generated with GT values of the DH parameters deviating from their default values by the large deviations $\Delta_{large}$ shown in Table 2.
Figure 5 shows the outcomes of the three calibration procedures: Method 1 based on 3D sensor data, and Methods 2 and 3 based on 6D sensor data (in Figure 5b,d the noise is $\sigma_P$ in mm for Method 1 and $\sigma_R$ in degrees for Methods 2 and 3). Two different registration procedures were used in robot calibration with Methods 2 and 3: for the positional error, Registration 1 was used (blue line in Figure 5a,c, the same as in Figure 3c for Method 2, and the blue line with triangle markers in Figure 5c, the same as the blue line in Figure 4a for Method 3). For the angular error, Registration 2 was used (red line in Figure 5b,d, the same as in Figure 3d for Method 2, and the red line with triangle markers in Figure 5d, the same as the red line in Figure 4b). Error bars in Figure 5a,c and in Figure 5b,d are the corresponding standard deviations calculated in (25) from repeated simulations of noisy sensor data. On each subplot, the two graphs are slightly shifted horizontally for better visualization. The robot GT parameters used in the simulation of the 6D data were obtained by modifying the default DH parameters with the large deviations $\Delta_{large}$ shown in Table 2. Similar results for the positional and orientation errors were obtained when the small deviations $\Delta_{small}$ were used in the simulations.
Figure 6 shows the outcome of robot calibration with Method 3 for two different values of the scaling factor $w$ in $E_3$ in (18). The results for Method 3 presented in Figure 4 and Figure 5c,d were obtained for a single fixed value of $w$.
For each of the selected cases where the minimization of the error function was repeated from 300 different starting points, all initial DH parameters led to the same solution. The fitted DH parameters depended on the noise strengths, the choice of error function, and the GT values of the DH parameters.
9. Discussion
In this study, an open-chain robotic manipulator with $N$ revolute joints was calibrated using three different methods and two different sets of data: 3D positions only, and full 6D poses. All three methods share the same strategy for determining the link twists $\hat{\boldsymbol{\alpha}}$. Then, in Method 1, the error function $E_1$ in (10) was minimized, and the remaining DH parameters were found by using 3D data only. In Method 2, a search for the zero offsets $\hat{\boldsymbol{\delta}}$ was performed separately by minimizing $E_2$ in (17), which depends only on the orientation part of the full 6D data. Once the zero offsets were known, the remaining DH parameters were found by minimizing $E_1$ in (10) using only the positional part of the 6D data. Such an approach reduces the dimensionality of the search space when compared with the minimization of $E_1$ in Method 1. In addition, by using angles of relative rotations in the error function $E_2$ in (17) and relative distances between pairs of 3D points in the error function $E_1$ in (10), the proposed strategy decouples robot calibration from the registration of the robot frame to the world frame. Different calibration strategies yielded different sets of fitted DH parameters which, in turn, led to different end-effector errors. This is expected, as an optimizer which uses different error functions and different sensor data usually converges to different solutions for the same kinematic model. It should be noted that both Methods 1 and 2 are equally valid, and it is a matter of practicality which one is more useful.
In Method 2, two different approaches to registration were used. The rotation $R_{X1}$ from the first approach minimizes the distances between the sensor’s 3D positions and the robot’s TCP points for the $K$ robot arm configurations [30]. The rotation $R_{X2}$ is calculated as the mean rotation of $K$ relative rotations and, thus, minimizes the angular distances between the orientations of the TCP frame and the orientations provided by the sensor. Therefore, one may expect that $R_{X2}$ is better than $R_{X1}$ at aligning robot orientations with sensor orientations. Indeed, the end-effector angular errors shown in Figure 3b,d are smaller for $R_{X2}$ in Registration 2 (red line) than for $R_{X1}$ in Registration 1 (blue line).
When it comes to the positional errors, the situation is exactly opposite. Both translation vectors $t_{X1}$ and $t_{X2}$ are calculated in (19). Since $R_{X2}$ does not depend on positional data, the transformation $(R_{X2}, t_{X2})$ does not minimize (in the least-squares sense) the distances between the sensor 3D positions and the robot TCP points for the $K$ robot arm configurations. The transformation $(R_{X1}, t_{X1})$ does minimize these distances and is therefore expected to better align the sensor 3D positions with the robot TCP. Indeed, the end-effector position errors shown in Figure 3a,c are smaller for $(R_{X1}, t_{X1})$ in Registration 1 (blue line) than for $(R_{X2}, t_{X2})$ in Registration 2 (red line).
Analysis of the plots in Figure 3 suggests the optimal strategy: instead of choosing either the first or the second registration, take the best part of both. Use $(R_{X2}, t_{X2})$ to transform the orientations of the robot end-effector and use $(R_{X1}, t_{X1})$ to transform the robot TCP positions. The outcome of such a strategy is displayed in Figure 5a,b: note the blue line for positional errors and the red line for angular errors, indicating the use of different registrations in Method 2.
Another advantage of using $R_{X2}$ rather than $R_{X1}$ to transform the TCP orientations is that there is a much smaller dispersion of orientation errors for different noise realizations. The error bars in Figure 5b are much smaller for Method 2 (which leverages $R_{X2}$) than for Method 1. This implies that orientations from the world coordinate system can be fed into an inverse kinematic solver more consistently and accurately.
It may appear counterintuitive that the mean position errors and mean orientation errors calculated for the same $m$-th pair of noise strengths but different GT values of the DH parameters are almost the same, as Figure 3 shows. However, it should not be a surprise, since we used an NLS optimizer with the exact error function. The scale of deviation from the default DH parameters may become an issue when the calibration is performed using approximate, linearized errors and the Jacobian is calculated at the default DH values.
The results of robot calibration obtained with Method 3 clearly reveal the consequences of the scaling problem when simultaneous minimization of both position and orientation errors in one optimization is attempted, as demonstrated in Figure 6. While the mean orientation errors are almost equal for the two selected values of $w$, the corresponding position errors differ substantially. This method, similarly to Method 2, uses 6D data and, therefore, two registration procedures are available. In Method 3, similarly to Method 2, smaller position errors are obtained when Registration 1 is applied to the position data and smaller orientation errors are observed when Registration 2 is applied to the orientation data, as the results in Figure 4 clearly indicate. Even though both Methods 2 and 3 share the possibility of using different registrations for the position and rotation components of a full pose, a direct comparison between the two methods clearly points to Method 2 as the better procedure, as demonstrated by the results shown in Figure 5c,d. Thus, the use of Method 3 is discouraged.
The calibration strategy outlined in this paper was tested on a kinematic model of a serial, open-chain robot with revolute joints only. A question can be asked whether the strategy can be applied to a more complex kinematic model in which a serial chain has both revolute and prismatic joints. Acquisition of full 6D poses enables the calculation of the two registrations defined in (19): one of them minimizes a position error and the other minimizes an orientation error. Therefore, as long as full 6D poses are acquired, the outlined calibration strategy could in principle be used for robots with a mixture of revolute and prismatic joints. However, the presence of prismatic joints complicates the error function in (10) by increasing the number of search variables, and further study is required to verify whether the strategy is also beneficial for robots with revolute and prismatic joints.
The simulation results presented in this paper raise an important, practical question about the characteristics of the 6D pose measuring sensors used for robot calibration. Commercially available sensors allow quick acquisition of many repeated measurements, which enables the noise in the recorded data to be substantially reduced by calculating mean poses. The mean position error of the robot end-effector calculated by Method 2 increases with the sensor position noise $\sigma_P$, as Figure 5a shows. If the three-sigma rule is followed and the approximately linear relation between the mean position error and $\sigma_P$ observed in Figure 5a holds, then the upper bound for the sensor position noise should satisfy $3\sigma_P < Tol_P$, where $Tol_P$ is the acceptable robot position tolerance. For orientation data, due to the strongly non-symmetric, FBK-like distribution of angles (which accounts for the deviation of noisy, instantaneous rotations from the mean rotation), the three-sigma rule can be replaced by calculating the quantile of angles at the 0.997 level. Assuming the mean orientation error of the robot end-effector is four times larger than the sensor’s orientation noise (as shown in Figure 5b for Method 2), the upper bound for the sensor orientation noise should satisfy $4\, Q_{0.997}(\sigma_R) < Tol_R$, where $Q_{0.997}(\sigma_R)$ is the 0.997 quantile of the angles of the small random rotations generated with noise level $\sigma_R$ and $Tol_R$ is the acceptable robot orientation tolerance. For a different robot, the dependence of the end-effector error on the sensor noise may be different from that shown in Figure 5a,b. Then, the estimates for the upper bounds of the position noise and orientation noise need to be updated.
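As a worked illustration of the quantile-based bound discussed above, the sketch below simulates small random rotations for a candidate sensor noise level, computes the 0.997 quantile of their rotation angles, and checks it against the assumed fourfold error amplification read off Figure 5b; the amplification factor is taken from the discussion above, while the function name and the 0.2° tolerance (the drilling requirement mentioned in [25]) are used only as example inputs.

```python
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(1)

def orientation_noise_ok(sigma_R_deg, tol_deg, amplification=4.0, n=100_000):
    """Check a candidate sensor orientation noise against a robot orientation tolerance:
    draw small random rotations as in (22), take the 0.997 quantile of their angles
    (instead of the three-sigma rule, because the angle distribution is non-symmetric),
    and scale by the assumed end-effector error amplification."""
    eta = np.deg2rad(sigma_R_deg) * rng.standard_normal((n, 3))
    angles_deg = np.degrees(np.linalg.norm(Rotation.from_rotvec(eta).as_rotvec(), axis=1))
    return amplification * np.quantile(angles_deg, 0.997) < tol_deg

print(orientation_noise_ok(sigma_R_deg=0.01, tol_deg=0.2))   # e.g., drilling tolerance [25]
```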
The proposed calibration strategy reduces both the position and orientation errors of the robot end-effector. The recommended procedure for serial robot calibration consists of: (1) acquiring the full 6D poses; (2) getting the link twists in a CPA-like procedure; (3) getting the encoder zero offsets using orientation data only; (4) getting the link lengths and offsets using position data only. Then, two separate registrations are used to transform the position and orientation components of a pose from the world frame to the robot frame. In summary, the dilemma of having only the position or only the orientation error of the robot’s end-effector minimized can be avoided, and a pose with both components optimized can be fed into an inverse kinematic solver.
Author Contributions
Conceptualization, M.F. and J.A.M.; methodology, M.F. and J.A.M.; software, M.F.; validation, M.F. and J.A.M.; formal analysis, M.F. and J.A.M.; investigation, M.F.; writing—original draft preparation, M.F.; writing—review and editing, J.A.M.; visualization, M.F.; project administration, J.A.M. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Conflicts of Interest
The authors declare no conflict of interest. Certain commercial equipment, instruments, or software are identified in this paper to foster understanding. Such identification does not imply recommendation or endorsement by the National Institute of Standards and Technology, nor does it imply that the equipment or software identified are necessarily the best available for the purpose.
References
- Marvel, J.; Messina, E.; Antonishek, B.; Fronczek, L.; Wyk, K.V. NISTIR 8093: Tools for Collaborative Robots within SME Workcells; Technical Report; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2015.
- He, R.; Zhao, Y.; Yang, S.; Yang, S. Kinematic-parameter identification for serial-robot calibration based on POE formula. IEEE Trans. Robot. 2010, 26, 411–423.
- Roth, Z.; Mooring, B.; Ravani, B. An overview of robot calibration. IEEE J. Robot. Autom. 1987, 3, 377–385.
- Hollerbach, J.M.; Wampler, C.W. The calibration index and taxonomy for robot kinematic calibration methods. Int. J. Robot. Res. 1996, 15, 573–591.
- Elatta, A.; Gen, L.P.; Zhi, F.L.; Daoyuan, Y.; Fei, L. An overview of robot calibration. Inf. Technol. J. 2004, 3, 74–78.
- Chen, H.; Fuhlbrigge, T.; Choi, S.; Wang, J.; Li, X. Practical industrial robot zero offset calibration. In Proceedings of the 2008 IEEE International Conference on Automation Science and Engineering, Arlington, VA, USA, 23–26 August 2008; pp. 516–521.
- Nubiola, A.; Bonev, I.A. Absolute calibration of an ABB IRB 1600 robot using a laser tracker. Robot. Comput.-Integr. Manuf. 2013, 29, 236–245.
- Messay, T.; Ordóñez, R.; Marcil, E. Computationally efficient and robust kinematic calibration methodologies and their application to industrial robots. Robot. Comput.-Integr. Manuf. 2016, 37, 33–48.
- Yang, P.; Guo, Z.; Kong, Y. Plane kinematic calibration method for industrial robot based on dynamic measurement of double ball bar. Precis. Eng. 2020, 62, 265–272.
- Li, R.; Zhao, Y. Dynamic error compensation for industrial robot based on thermal effect model. Measurement 2016, 88, 113–120.
- Kalas, V.J.; Vissière, A.; Company, O.; Krut, S.; Noiré, P.; Roux, T.; Pierrot, F. Application-oriented selection of poses and forces for robot elastostatic calibration. Mech. Mach. Theory 2021, 159, 104176.
- Ma, L.M.; Fong, T.; Micire, M.J.; Kim, Y.K.; Feigh, K. Human-Robot teaming: Concepts and components for design. In Field and Service Robotics; Springer: Berlin/Heidelberg, Germany, 2018; pp. 649–663.
- Joubair, A.; Bonev, I.A. Non-kinematic calibration of a six-axis serial robot using planar constraints. Precis. Eng. 2015, 40, 325–333.
- Chen, X.; Zhang, Q.; Sun, Y. Non-kinematic calibration of industrial robots using a rigid–flexible coupling error model and a full pose measurement method. Robot. Comput.-Integr. Manuf. 2019, 57, 46–58.
- Yang, J.; Wang, D.; Fan, B.; Dong, D.; Zhou, W. Online absolute pose compensation and steering control of industrial robot based on six degrees of freedom laser measurement. Opt. Eng. 2017, 56, 034111.
- Zeng, Y.; Tian, W.; Li, D.; He, X.; Liao, W. An error-similarity-based robot positional accuracy improvement method for a robotic drilling and riveting system. Int. J. Adv. Manuf. Technol. 2017, 88, 2745–2755.
- Chen, D.; Yuan, P.; Wang, T.; Ying, C.; Tang, H. A compensation method based on error similarity and error correlation to enhance the position accuracy of an aviation drilling robot. Meas. Sci. Technol. 2018, 29, 085011.
- Franaszek, M.; Cheok, G.S. Using locally adjustable hand-eye calibrations to reduce robot localization error. SN Appl. Sci. 2020, 2, 839.
- Park, F.C. Computational aspects of the product-of-exponentials formula for robot kinematics. IEEE Trans. Autom. Control 1994, 39, 643–647.
- Wang, H.; Shen, S.; Lu, X. A screw axis identification method for serial robot calibration based on the POE model. Ind. Robot Int. J. 2012, 39, 146–153.
- Santolaria, J.; Conte, J.; Ginés, M. Laser tracker-based kinematic parameter calibration of industrial robots by improved CPA method and active retroreflector. Int. J. Adv. Manuf. Technol. 2013, 66, 2087–2106.
- Conrad, K.L.; Shiakolas, P.S.; Yih, T. Robotic calibration issues: Accuracy, repeatability and calibration. In Proceedings of the 8th Mediterranean Conference on Control and Automation (MED2000), Rio, Patras, Greece, 17–19 July 2000; Volume 1719.
- Choi, Y.; Cheong, J.; Kyung, J.H.; Do, H.M. Zero-offset calibration using a screw theory. In Proceedings of the 2016 13th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), Xi’an, China, 19–22 August 2016; pp. 526–528.
- Wu, Y.; Klimchik, A.; Caro, S.; Furet, B.; Pashkevich, A. Geometric calibration of industrial robots using enhanced partial pose measurements and design of experiments. Robot. Comput.-Integr. Manuf. 2015, 35, 151–168.
- Posada, D.J.; Schneider, U.; Pidan, S.; Geravand, M.; Stelzer, P.; Verl, A. High Accurate Robotic Drilling with External Sensor and Compliance Model-Based Compensation. In Proceedings of the IEEE International Conference on Robotics and Automation, Stockholm, Sweden, 16–21 May 2016; pp. 3901–3907.
- Gharaaty, S.; Shu, T.; Joubair, A.; Xie, W.F.; Bonev, I.A. Online pose correction of an industrial robot using an optical coordinate measure machine system. Int. J. Adv. Robot. Syst. 2018, 4, 1–16.
- Rousseau, G.; Wehbe, R.; Halbritter, J.; Harik, R. Automated Fiber Placement Path Planning: A state-of-the-art review. Comput.-Aided Des. Appl. 2019, 16, 172–203.
- Nguyen, H.N.; Zhou, J.; Kang, H.J. A new full pose measurement method for robot calibration. Sensors 2013, 13, 9132–9147.
- Nubiola, A.; Bonev, I.A. Absolute robot calibration with a single telescoping ballbar. Precis. Eng. 2014, 38, 472–480.
- Arun, K.S.; Huang, T.S.; Blostein, S.D. Least-Squares Fitting of Two 3-D Point Sets. IEEE Trans. Pattern Anal. Mach. Intell. 1987, 9, 698–700.
- Moakher, M. Means and averaging in the group of rotations. SIAM J. Matrix Anal. Appl. 2002, 24, 1–16.
- Kent, J.T. The Fisher-Bingham Distribution on the Sphere. J. R. Stat. Soc. 1982, 44, 71–80.
- Franaszek, M.; Shah, M.; Cheok, G.S.; Saidi, K.S. The axes of random infinitesimal rotations and the propagation of orientation uncertainty. Measurement 2015, 72, 68–76.
- Lipkin, H. A note on Denavit-Hartenberg notation in robotics. In Proceedings of the ASME International Design Engineering Technical Conferences & Computers and Information in Engineering Conference, Long Beach, CA, USA, 24–28 September 2005; pp. 1–6.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).