Article

Joint Calibration Method for Robot Measurement Systems

State Key Laboratory of Robotics and Systems, Harbin Institute of Technology, Harbin 150001, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(17), 7447; https://doi.org/10.3390/s23177447
Submission received: 20 July 2023 / Revised: 16 August 2023 / Accepted: 25 August 2023 / Published: 26 August 2023
(This article belongs to the Collection Robotics, Sensors and Industry 4.0)

Abstract

Robot measurement systems with a binocular planar structured light camera (3D camera) installed on a robot end-effector are often used to measure workpieces’ shapes and positions. However, the measurement accuracy is jointly influenced by the robot kinematics, camera-to-robot installation, and 3D camera measurement errors. Incomplete calibration of these errors can result in inaccurate measurements. This paper proposes a joint calibration method considering these three error types to achieve overall calibration. In this method, error models of the robot kinematics and camera-to-robot installation are formulated using Lie algebra. Then, a pillow error model is proposed for the 3D camera based on its error distribution and measurement principle. These error models are combined to construct a joint model based on homogeneous transformation. Finally, the calibration problem is transformed into a stepwise optimization problem that minimizes the sum of the relative position error between the calibrator and robot, and analytical solutions for the calibration parameters are derived. Simulation and experiment results demonstrate that the joint calibration method effectively improves the measurement accuracy, reducing the mean positioning error from over 2.5228 mm to 0.2629 mm and the mean distance error from over 0.1488 mm to 0.1232 mm.

1. Introduction

Industrial robots are preferred for their high flexibility, low cost, and wide working range, and they have started to gradually replace manual labor in many scenarios, such as welding, shot peening, and palletizing [1,2,3]. A 3D camera utilizes the parallax principle to measure 3D coordinates, providing benefits such as high stability and measurement accuracy [4]. This type of camera is commonly mounted at the robot end-effector to form a measurement system for various applications, including reverse engineering and in-line inspection [5].
The working process of a robot measurement system is illustrated in Figure 1. Because the workpiece size often exceeds the camera’s field of view (FOV), local point clouds of the workpiece are captured from multiple sampled poses, and these point clouds are then transformed into the robot base coordinate system to generate a complete point cloud of the workpiece. The accuracy of the measurement system is jointly determined by the robot kinematics, camera-to-robot pose installation, and 3D camera measurement errors. Robot kinematic error consists of deviations in geometric parameters such as link length and twist angle, which cause errors in the poses fed back by the robot controller [6]. Camera measurement error is influenced by the accuracy of the camera’s internal and external parameters, which are typically calibrated by the camera manufacturer [7]; however, secondary calibration of the camera can further improve its measurement accuracy [8]. The relative pose between the camera and robot is typically obtained through hand–eye calibration, and its accuracy is affected by both robot kinematic error and camera measurement error [9]. A large measurement error can result in inaccurate measurement models and low-quality products, making it crucial to calibrate these errors before using the measurement system. These three types of errors are referred to simply as robot error, hand–eye matrix error, and camera error in this paper.
Some achievements have been made in the separate calibration of each error. The D-H method, screw method, and Lie algebra method are often used to model robot error [10,11,12]. The D-H method is known to have singularities and discontinuities in its parameters, whereas the screw method has an associated systematic error [13]. These problems are not present in the Lie algebra method. In addition, external equipment or calibrators are often used to provide high-accuracy raw data, and constraint equations are established to solve for kinematic errors [14,15]. Calibration methods that rely on calibrators tend to be simpler. Hand–eye calibration for 3D cameras has not been sufficiently studied. Because 3D cameras have limited precision when measuring jump edges and vertices [16], calibration methods often favor calibrators featuring circular elements such as spheres, discs, and holes [17,18,19,20]. However, most hand–eye calibration methods do not consider robot and camera errors, which influence calibration accuracy. For camera error, many studies have investigated the relationship between camera parameters and measurement error [21,22,23]. However, these methods are not suitable for users who lack knowledge of the camera parameters. Other studies have focused on establishing the relationship between the measurement value and measurement error [8,24,25]. However, these error models are proposed entirely based on observations and experience, without considering the measurement principle of 3D cameras.
It is crucial to highlight that separate error calibration is inadequate in ensuring the accuracy of the measurement system. Instead, a joint calibration method that considers robot, hand–eye matrix, and camera errors is necessary. Few researchers have paid attention to this aspect. Some studies established joint error models based on the D-H method and used geometric constraints to calibrate robot and hand–eye matrix errors simultaneously [26,27]. However, the calibration results of these methods are unstable due to the drawbacks of the D-H method. To address this issue, Lie algebra has been used to establish a joint error model, resulting in improved stability [13]. However, the absolute position data were used for calibration in the study, which requires the use of high-precision external equipment. In addition, none of these studies account for camera error, which can affect the accuracy of the joint calibration.
We thoroughly consider the advantages and limitations of previous studies and propose a joint calibration approach that takes all three types of error into account. A standard sphere is used as the calibrator, considering the features of the 3D camera, and the sphere center is captured as the raw data for calibration. Separate models are established for each type of error. The error models for the robot and hand–eye matrix are established using Lie algebra, whereas a pillow model is created for the camera based on its error distribution and measurement principle. These error models are then integrated to construct the joint error model, which is used to establish a stepwise optimization problem for joint error calibration. Our contributions can be summarized as follows.
(1) The camera error, which the simulations show to influence the calibration accuracy, is taken into account during joint calibration for the first time.
(2) A camera error model is established based on the measurement principle and error distribution, and a joint error model is established for joint calibration.
(3) The calibration problem is transformed into an optimization problem that minimizes the sum of the relative position errors between the calibrator and robot, avoiding reliance on external equipment.

2. Method

2.1. Problem Statement

The typical robot measurement system is shown in Figure 1, with a 3D camera installed at the robot end-effector. In this system, several coordinate systems are defined. The robot base coordinate system is denoted as {B}, the robot end coordinate system as {E}, the camera coordinate system as {C}, and the workpiece coordinate system as {W}, which has the same pose as {B}, as shown in Figure 1. The transformation relationships between these coordinate systems are represented by homogeneous matrices. For example, ${}_E^B T$ represents the pose of {E} in {B}, or the transformation from {B} to {E}. During the measurement procedure, the point cloud ${}^C P$ in {C} needs to be transformed into {B} using ${}^B P = {}_E^B T \, {}_C^E T \, {}^C P$. The errors in ${}_E^B T$, ${}_C^E T$, and ${}^C P$ are the robot, hand–eye matrix, and camera errors mentioned in Section 1. According to ${}^B P = {}_E^B T \, {}_C^E T \, {}^C P$, there is also error in ${}^B P$, which is referred to as the compensative error in this paper, and the accuracy of ${}^B P$ determines the accuracy of the measurement system.
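To make the frame chaining concrete, the following minimal NumPy sketch transforms a camera-frame point cloud into {B}; the pose values and point coordinates are placeholders, not data from this paper.

```python
import numpy as np

def to_homogeneous(points):
    """Append a row of ones so 3xN points can be multiplied by 4x4 transforms."""
    return np.vstack([points, np.ones((1, points.shape[1]))])

# Placeholder poses: {E} in {B} fed back by the robot controller, and {C} in {E}
# from hand-eye calibration. Real values come from the controller and calibration.
T_BE = np.eye(4)
T_BE[:3, 3] = [800.0, 0.0, 600.0]     # end-effector position in {B} (mm), assumed
T_EC = np.eye(4)
T_EC[:3, 3] = [60.0, 60.0, 85.0]      # camera offset on the flange (mm), assumed

# A tiny "point cloud" measured in the camera frame {C} (3xN, mm).
P_C = np.array([[0.0, 10.0],
                [0.0, -5.0],
                [450.0, 460.0]])

# ^B P = ^B_E T  ^E_C T  ^C P : chain the homogeneous transforms.
P_B = (T_BE @ T_EC @ to_homogeneous(P_C))[:3, :]
print(P_B)
```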
This paper focuses on a joint calibration method for the robot, hand–eye matrix, and camera errors. Separate error models and the joint error model are first proposed. Then, the calibration problem is transformed into a stepwise optimization problem, and analytical solutions for each calibration parameter are derived.

2.2. Joint Calibration Method

The principle of the joint calibration method is depicted in Figure 2. A ceramic standard sphere is utilized as the calibrator and fixed at an appropriate position. The calibrator coordinate system is also denoted as {W}, and the sphere center as W. Local point clouds of the sphere are collected from various sampling poses, and then spherical fitting is performed to obtain ${}_W^C P$, which represents the position of the sphere center W in {C}. As the relative position between the calibrator and robot is fixed, ${}_W^B P$ should always be equal to its ideal value ${}_W^B P_0$. However, ${}_W^B P = {}_E^B T \, {}_C^E T \, {}_W^C P \neq {}_W^B P_0$ occurs due to errors in ${}_E^B T$, ${}_C^E T$, and ${}_W^C P$. The joint calibration method is achieved by calibrating ${}_W^B P$, and the specific principle is as follows.
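The sphere-center extraction can be illustrated with a standard linear least-squares sphere fit; the sketch below is one common formulation and is not necessarily the fitting routine used by the authors.

```python
import numpy as np

def fit_sphere(points):
    """Linear least-squares sphere fit.

    points: (N, 3) array of points on the sphere surface (e.g., the local
    point cloud of the ceramic standard sphere measured in {C}).
    Returns (center, radius).
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([2 * x, 2 * y, 2 * z, np.ones_like(x)])
    b = x**2 + y**2 + z**2
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

# Synthetic check: points on a 38.1 mm diameter sphere centred at (10, -5, 400) mm.
rng = np.random.default_rng(0)
dirs = rng.normal(size=(500, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = np.array([10.0, -5.0, 400.0]) + 19.05 * dirs
print(fit_sphere(pts))   # approximately ((10, -5, 400), 19.05)
```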
Firstly, the robot error is modeled. ${}_E^B T$ can be represented as ${}_E^B T = \prod_{i=1}^{n} {}_i^{i-1} T$ using the modified Denavit–Hartenberg (MDH) method [28], where ${}_i^{i-1} T$ is the transformation matrix from the $(i-1)$th joint coordinate system to the $i$th joint coordinate system. The transformation matrix ${}_i^{i-1} T$ is calibrated by right-multiplying it by the calibration matrix $E_i$. $E_i$ can be represented by the Lie algebra element $\xi_i$ as $E_i = e^{\xi_i^{\wedge}}$, where $\xi_i = [\rho_i, \phi_i]^T \in \mathfrak{se}(3)$. The symbol $\wedge$ denotes the skew-symmetric matrix, as shown in Equation (1), and the expression for $\xi_i^{\wedge}$ is $\begin{bmatrix} \phi_i^{\wedge} & \rho_i \\ 0^T & 0 \end{bmatrix}$. Then, the calibrated value ${}_E^B T'$ of ${}_E^B T$ can be obtained as shown in Equation (2).
$$
v^{\wedge} = \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix}^{\wedge} = \begin{bmatrix} 0 & -v_3 & v_2 \\ v_3 & 0 & -v_1 \\ -v_2 & v_1 & 0 \end{bmatrix} \qquad (1)
$$
$$
{}_E^B T' = \prod_{i=1}^{n} {}_i^{i-1} T \, E_i = \prod_{i=1}^{n} {}_i^{i-1} T \, e^{\xi_i^{\wedge}} \qquad (2)
$$
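Equations (1) and (2) can be evaluated numerically in several equivalent ways; the sketch below uses SciPy's general matrix exponential for $e^{\xi^{\wedge}}$ (a closed-form $\mathfrak{se}(3)$ exponential would work equally well) and assumes the nominal link transforms ${}_i^{i-1}T$ are available as 4×4 arrays.

```python
import numpy as np
from scipy.linalg import expm

def hat3(v):
    """Skew-symmetric matrix of a 3-vector, as in Equation (1)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def se3_exp(xi):
    """Exponential map of xi = [rho, phi] in se(3) to a 4x4 calibration matrix E."""
    rho, phi = xi[:3], xi[3:]
    xi_hat = np.zeros((4, 4))
    xi_hat[:3, :3] = hat3(phi)
    xi_hat[:3, 3] = rho
    return expm(xi_hat)                      # E = e^{xi^}

def calibrated_T_BE(link_transforms, xis):
    """Calibrated forward kinematics: right-multiply each nominal ^{i-1}_i T by e^{xi_i^}."""
    T = np.eye(4)
    for T_link, xi in zip(link_transforms, xis):
        T = T @ T_link @ se3_exp(xi)
    return T

# Small usage example with a placeholder calibration parameter.
print(se3_exp(np.array([0.1, 0.0, 0.0, 0.0, 0.0, 0.01])))
```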
The hand–eye matrix ${}_C^E T$ is likewise calibrated using the calibration matrix $E_{EC}$ and Lie algebra element $\xi_{EC}$, as shown in Equation (3).
$$
{}_C^E T' = {}_C^E T \, E_{EC} = {}_C^E T \, e^{\xi_{EC}^{\wedge}} \qquad (3)
$$
Finally, the camera error is modeled. The measurement principle of the 3D camera is illustrated in Figure 3. The structured light is projected onto the object, and two cameras simultaneously capture patterns on the object’s surface. Then, the x-axis and z-axis coordinates are obtained based on the similar-triangle relationship in the XOZ plane, followed by the computation of the y-axis coordinate through the geometric relationship. The coordinate accuracy depends on the accuracy of the baseline $B$, tilt angles $\alpha_1, \alpha_2$, focal lengths $f_1, f_2$, and image resolutions $X_1, X_2, Y_1, Y_2$. However, these parameters are coupled, leading to a complex error form that is difficult to model and identify. To simplify, we establish an error model that relates the error to the measurement value. In addition, the error model should reflect the following characteristics, based on the measurement principle and existing studies [21,22,29,30]. (1) The error increases as the measurement point moves away from point S, which is the intersection point of the two cameras’ lines of sight; this indicates the presence of a distortion field around S, as illustrated in Figure 3. (2) The error in the z-axis coordinate is generally larger than those in the x-axis and y-axis coordinates. (3) Different cameras may exhibit different forms of error.
The Surface HD50 camera is used in this study, with a repeatability accuracy of ±0.15 mm, an optimal working distance of 500 ± 250 mm, and an FOV of H55° × V36°. To observe the camera’s error distribution, a ceramic standard sphere with a diameter of 38.1 mm and a diameter deviation of no more than 1 µm is placed within the camera’s 250–400 mm working distance. The fitted diameter is compared with the theoretical diameter to observe the camera’s error. This process is illustrated in Figure 4.
Figure 5 displays the fitted diameter of the measurement points, where the horizontal coordinate represents the positions of the measurement points and the vertical coordinate represents the fitted diameter. Several patterns can be observed in Figure 5. (1) The fitted diameter of all measurement points is larger than the theoretical diameter. (2) The fitted diameter decreases obviously as the z-axis coordinate increases. (3) The fitted diameter tends to decrease as the x-axis coordinate increases, followed by a slight increase. (4) The fitted diameter is not related to the y-axis coordinate. These patterns suggest the existence of a pillow error at point S, causing the fitted diameter to increase as the measurement point moves away from S, as illustrated in Figure 3. Referring to the pillow distortion of color cameras, the error model of the 3D camera is defined as Equation (4). This expression takes into account the measurement principle of the camera, which states that the x-axis and z-axis coordinates are independent of the y-axis coordinate.
$$
\begin{aligned}
\Delta x &= \sum_{i=1}^{m} k_i \,(x - x_s)\left[(x - x_s)^2 + (z - z_s)^2\right]^{i} \\
\Delta y &= \sum_{j=1}^{m} k_j \,(y - y_s)\left[(x - x_s)^2 + (y - y_s)^2 + (z - z_s)^2\right]^{j} \\
\Delta z &= \sum_{l=1}^{m} k_l \,(z - z_s)\left[(x - x_s)^2 + (z - z_s)^2\right]^{l}
\end{aligned}
\qquad (4)
$$
In Equation (4), $(\Delta x, \Delta y, \Delta z)$, $(x, y, z)$, and $(x_s, y_s, z_s)$ represent the calibration value, the measurement value, and the coordinate of point S, respectively. $k_i$, $k_j$, and $k_l$ are the calibration parameters, whereas $i$, $j$, $l$, and $m$ are positive integers. The coordinate of point S for the camera used here is theoretically (60 mm, 0 mm, 500 mm). Based on the second error characteristic of the camera, the errors in the x-axis and y-axis coordinates can be neglected. Furthermore, $m = 3$ is set to simplify the expression, and thus the error model can be expressed as Equation (5), where $a_i = (z - z_s)\left[(x - x_s)^2 + (z - z_s)^2\right]^{i}$, $i = 1, 2, 3$.
$$
{}_W^C P' = {}_W^C P + \underbrace{\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ a_1 & a_2 & a_3 \\ 0 & 0 & 0 \end{bmatrix}}_{A} \underbrace{\begin{bmatrix} k_1 \\ k_2 \\ k_3 \end{bmatrix}}_{\xi_{CW}} = {}_W^C P + A\, \xi_{CW}
\qquad (5)
$$
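A sketch of the correction in Equation (5) is given below; the coordinate of S and the $k$ values passed in the example are placeholders.

```python
import numpy as np

def pillow_correction(P_C, xi_CW, S=(60.0, 0.0, 500.0)):
    """Apply the z-axis pillow error model of Equation (5).

    P_C   : homogeneous camera measurement [x, y, z, 1] (mm)
    xi_CW : camera calibration parameters [k1, k2, k3]
    S     : intersection point of the two cameras' lines of sight (assumed value)
    Returns the corrected homogeneous point ^C P' = ^C P + A xi_CW.
    """
    x, y, z, _ = P_C
    xs, ys, zs = S
    r2 = (x - xs) ** 2 + (z - zs) ** 2
    a = np.array([(z - zs) * r2 ** i for i in (1, 2, 3)])
    A = np.zeros((4, 3))
    A[2, :] = a                       # only the z coordinate is corrected
    return P_C + A @ np.asarray(xi_CW)

# Example with placeholder parameters of the same order of magnitude as the paper's.
print(pillow_correction(np.array([100.0, 20.0, 350.0, 1.0]),
                        [-1e-9, -7e-15, -5e-18]))
```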
The calibrated value ${}_W^B P'$, as shown in Equation (6), can be obtained by combining these separate error models. Theoretically, ${}_W^B P'$ should be equal to ${}_W^B P_0$.
$$
{}_W^B P' = \left( \prod_{i=1}^{n} {}_i^{i-1} T \, e^{\xi_i^{\wedge}} \right) {}_C^E T \, e^{\xi_{EC}^{\wedge}} \left( {}_W^C P + A\, \xi_{CW} \right)
\qquad (6)
$$
${}_W^B P'_1, {}_W^B P'_2, \ldots, {}_W^B P'_m$ can be obtained from $m$ sampling poses, and Equation (7) is obtained by taking pairwise differences between ${}_W^B P'_1, {}_W^B P'_2, \ldots, {}_W^B P'_m$.
$$
\begin{aligned}
{}_W^B P'_1 - {}_W^B P'_2 &= \left( \prod_{i=1}^{n} {}_i^{i-1} T_1 \, e^{\xi_i^{\wedge}} \right) {}_C^E T \, e^{\xi_{EC}^{\wedge}} \left( {}_W^C P_1 + A_1 \xi_{CW} \right) - \left( \prod_{i=1}^{n} {}_i^{i-1} T_2 \, e^{\xi_i^{\wedge}} \right) {}_C^E T \, e^{\xi_{EC}^{\wedge}} \left( {}_W^C P_2 + A_2 \xi_{CW} \right) \\
&\;\;\vdots \\
{}_W^B P'_{m-1} - {}_W^B P'_m &= \left( \prod_{i=1}^{n} {}_i^{i-1} T_{m-1} \, e^{\xi_i^{\wedge}} \right) {}_C^E T \, e^{\xi_{EC}^{\wedge}} \left( {}_W^C P_{m-1} + A_{m-1} \xi_{CW} \right) - \left( \prod_{i=1}^{n} {}_i^{i-1} T_m \, e^{\xi_i^{\wedge}} \right) {}_C^E T \, e^{\xi_{EC}^{\wedge}} \left( {}_W^C P_m + A_m \xi_{CW} \right)
\end{aligned}
\qquad (7)
$$
Since the relative position of the calibrator and robot is fixed, the expressions in Equation (7) should theoretically be equal to 0. Equation (7) is then transformed into an optimization problem, as shown in Equation (8), where $x = \left\{ \xi_i \ (i = 1, 2, \ldots, 6),\ \xi_{EC},\ \xi_{CW} \right\}$ is to be solved.
$$
\min_{x} f(x) = \min_{x} \frac{1}{2} \sum_{j=2}^{m} \left\| \left( \prod_{i=1}^{n} {}_i^{i-1} T_{j-1} \, e^{\xi_i^{\wedge}} \right) {}_C^E T \, e^{\xi_{EC}^{\wedge}} \left( {}_W^C P_{j-1} + A_{j-1} \xi_{CW} \right) - \left( \prod_{i=1}^{n} {}_i^{i-1} T_{j} \, e^{\xi_i^{\wedge}} \right) {}_C^E T \, e^{\xi_{EC}^{\wedge}} \left( {}_W^C P_{j} + A_{j} \xi_{CW} \right) \right\|^2
\qquad (8)
$$
$\xi_i \ (i = 1, 2, \ldots, 6)$, $\xi_{EC}$, and $\xi_{CW}$ are independent of each other, allowing them to be solved step by step. The process of solving $\xi_k$ is as follows. Let $B_k = \left( \prod_{i=1}^{k-1} {}_i^{i-1} T \, e^{\xi_i^{\wedge}} \right) {}_k^{k-1} T$ and $C_k = \left( \prod_{i=k+1}^{n} {}_i^{i-1} T \, e^{\xi_i^{\wedge}} \right) {}_C^E T \, e^{\xi_{EC}^{\wedge}} \left( {}_W^C P + A\, \xi_{CW} \right)$, and define $\delta \xi_k$ to update $\xi_k$ in each iteration, with the update expression given by $e^{\xi_k^{\wedge}} = e^{\delta \xi_k^{\wedge}} e^{\xi_k^{\wedge}}$. Then, Equation (8) can be transformed into Equation (9).
$$
\min_{E_k} f(E_k) = \min_{E_k} \frac{1}{2} \sum_{j=2}^{m} \left\| B_{k,j-1} \, e^{\delta \xi_k^{\wedge}} e^{\xi_k^{\wedge}} C_{k,j-1} - B_{k,j} \, e^{\delta \xi_k^{\wedge}} e^{\xi_k^{\wedge}} C_{k,j} \right\|^2
\qquad (9)
$$
Define $g_{j-1}(\delta \xi_k) = B_{k,j-1} \, e^{\delta \xi_k^{\wedge}} e^{\xi_k^{\wedge}} C_{k,j-1} - B_{k,j} \, e^{\delta \xi_k^{\wedge}} e^{\xi_k^{\wedge}} C_{k,j}$. Since $\delta \xi_k$ is small, $g_{j-1}(\delta \xi_k)$ can be simplified using the first-order Taylor expansion around $\delta \xi_k = 0$, yielding $g_{j-1}(\delta \xi_k) \approx B_{k,j-1} \, e^{\xi_k^{\wedge}} C_{k,j-1} - B_{k,j} \, e^{\xi_k^{\wedge}} C_{k,j} + \left( B_{k,j-1} \dfrac{\partial \left( e^{\xi_k^{\wedge}} C_{k,j-1} \right)}{\partial \delta \xi_k} - B_{k,j} \dfrac{\partial \left( e^{\xi_k^{\wedge}} C_{k,j} \right)}{\partial \delta \xi_k} \right) \delta \xi_k$. Equation (10) is obtained by substituting the simplified $g_{j-1}(\delta \xi_k)$ into Equation (9).
$$
\min_{\delta \xi_k} f(\delta \xi_k) = \min_{\delta \xi_k} \frac{1}{2} \sum_{j=2}^{m} g_{j-1}(\delta \xi_k)^T \, g_{j-1}(\delta \xi_k)
\qquad (10)
$$
Define $h_{j-1} = B_{k,j-1} \dfrac{\partial \left( e^{\xi_k^{\wedge}} C_{k,j-1} \right)}{\partial \delta \xi_k} - B_{k,j} \dfrac{\partial \left( e^{\xi_k^{\wedge}} C_{k,j} \right)}{\partial \delta \xi_k}$ and $p_{j-1} = B_{k,j-1} \, e^{\xi_k^{\wedge}} C_{k,j-1} - B_{k,j} \, e^{\xi_k^{\wedge}} C_{k,j}$; then $\delta \xi_k$ can be obtained as Equation (11) by setting the partial derivative $\dfrac{\partial f(\delta \xi_k)}{\partial \delta \xi_k} = 0$.
$$
\delta \xi_k = -\left( \sum_{j=2}^{m} h_{j-1}^T h_{j-1} \right)^{-1} \sum_{j=2}^{m} h_{j-1}^T p_{j-1}
\qquad (11)
$$
$\dfrac{\partial \left( e^{\xi_k^{\wedge}} C_{k,j-1} \right)}{\partial \delta \xi_k}$ and $\dfrac{\partial \left( e^{\xi_k^{\wedge}} C_{k,j} \right)}{\partial \delta \xi_k}$ can be derived using the definition of the derivative, as shown in Equations (12) and (13), where the symbol $\wedge$ denotes the skew-symmetric matrix, and $R$ and $t$ represent the rotation matrix and translation vector of $e^{\xi_k^{\wedge}}$.
$$
\frac{\partial \left( e^{\xi_k^{\wedge}} C_{k,j-1} \right)}{\partial \delta \xi_k} = \begin{bmatrix} I & -\left( R\, C_{k,j-1} + t \right)^{\wedge} \\ 0^T & 0^T \end{bmatrix}
\qquad (12)
$$
$$
\frac{\partial \left( e^{\xi_k^{\wedge}} C_{k,j} \right)}{\partial \delta \xi_k} = \begin{bmatrix} I & -\left( R\, C_{k,j} + t \right)^{\wedge} \\ 0^T & 0^T \end{bmatrix}
\qquad (13)
$$
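The update of Equation (11) is an ordinary linear least-squares step. The sketch below shows that step together with the point Jacobian of Equations (12) and (13); building the full $h_{j-1}$ blocks (including the $B_{k,j-1}$ and $B_{k,j}$ pre-multiplications) is assumed to happen elsewhere.

```python
import numpy as np

def delta_xi_step(h_blocks, p_blocks):
    """Equation (11): delta_xi = -(sum h^T h)^{-1} sum h^T p.

    h_blocks: list of (3, 6) Jacobian blocks h_{j-1}
    p_blocks: list of (3,)   residual blocks  p_{j-1}
    """
    HtH = sum(h.T @ h for h in h_blocks)
    Htp = sum(h.T @ p for h in h_blocks)
    return -np.linalg.solve(HtH, Htp)

def pose_point_jacobian(R, t, C):
    """Equations (12)/(13), top three rows: d(e^{xi^} C)/d(delta xi) = [ I  -(R C + t)^ ].

    C is a homogeneous point [x, y, z, 1]; the bottom (zero) row is omitted here.
    """
    q = R @ C[:3] + t
    skew = np.array([[0.0, -q[2], q[1]],
                     [q[2], 0.0, -q[0]],
                     [-q[1], q[0], 0.0]])
    return np.hstack([np.eye(3), -skew])      # 3 x 6
```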
Then, the hand–eye matrix calibration parameter $\xi_{EC}$ is derived. By defining $D = \left( \prod_{i=1}^{n} {}_i^{i-1} T \, e^{\xi_i^{\wedge}} \right) {}_C^E T$ and $F = {}_W^C P + A\, \xi_{CW}$, and using $\delta \xi_{EC}$ to update $\xi_{EC}$, the optimization problem $\min_{\delta \xi_{EC}} f(\delta \xi_{EC}) = \min_{\delta \xi_{EC}} \frac{1}{2} \sum_{j=2}^{m} \left\| D_{j-1} \, e^{\delta \xi_{EC}^{\wedge}} e^{\xi_{EC}^{\wedge}} F_{j-1} - D_{j} \, e^{\delta \xi_{EC}^{\wedge}} e^{\xi_{EC}^{\wedge}} F_{j} \right\|^2$ can be formulated. $\delta \xi_{EC}$ can be obtained following a derivation similar to Equations (9)–(13).
Finally, $\xi_{CW}$ is derived. Let $G = \left( \prod_{i=1}^{n} {}_i^{i-1} T \, e^{\xi_i^{\wedge}} \right) {}_C^E T \, e^{\xi_{EC}^{\wedge}} \left( {}_W^C P + A\, \xi_{CW} \right)$ and $H = \left( \prod_{i=1}^{n} {}_i^{i-1} T \, e^{\xi_i^{\wedge}} \right) {}_C^E T \, e^{\xi_{EC}^{\wedge}} A$, and define $\delta \xi_{CW}$ to update $\xi_{CW}$, with the update expression given by $\xi_{CW} = \xi_{CW} + \delta \xi_{CW}$. Then, Equation (8) can be transformed into Equation (14).
$$
\min_{\delta \xi_{CW}} f(\delta \xi_{CW}) = \min_{\delta \xi_{CW}} \frac{1}{2} \sum_{j=2}^{m} \left\| G_{j-1} - G_{j} + \left( H_{j-1} - H_{j} \right) \delta \xi_{CW} \right\|^2
\qquad (14)
$$
$\delta \xi_{CW}$ can be obtained as Equation (15) by setting the partial derivative $\dfrac{\partial f(\delta \xi_{CW})}{\partial \delta \xi_{CW}} = 0$.
$$
\delta \xi_{CW} = -\left( \sum_{j=2}^{m} \left( H_{j-1} - H_{j} \right)^T \left( H_{j-1} - H_{j} \right) \right)^{-1} \sum_{j=2}^{m} \left( H_{j-1} - H_{j} \right)^T \left( G_{j-1} - G_{j} \right)
\qquad (15)
$$
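Equation (15) can likewise be evaluated by stacking the pairwise differences and calling a linear least-squares solver, as in the following sketch (array shapes are our assumptions).

```python
import numpy as np

def delta_xi_cw(G_list, H_list):
    """Equation (15): solve for the camera parameter update by stacked least squares.

    G_list: list of m homogeneous points G_j, each shape (4,)
    H_list: list of m matrices H_j, each shape (4, 3)
    """
    rows = np.vstack([H_list[j - 1] - H_list[j] for j in range(1, len(H_list))])
    rhs = np.concatenate([G_list[j - 1] - G_list[j] for j in range(1, len(G_list))])
    # Minimize || rhs + rows @ delta ||^2, i.e. rows @ delta ~= -rhs.
    delta, *_ = np.linalg.lstsq(rows, -rhs, rcond=None)
    return delta
```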
In each iteration, the calibration parameter $x$ is calculated, and the objective function value $f(x)$ is updated by substituting $x$ into Equation (8). The iterative process continues until $f(x)$ no longer decreases or the maximum number of iterations $L$ has been reached. Algorithm 1 presents the pseudo-code for the iterative process. The initial ${}_C^E T$ is obtained using the calibration method described in Reference [26].
Algorithm 1: The pseudo-code of calibration process
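Since the pseudo-code figure is not reproduced here, the following Python skeleton sketches the stepwise iterative process it describes; the four callables are placeholders for the closed-form steps of Equations (9)–(15) and the objective of Equation (8), not the authors' implementation.

```python
import numpy as np

def joint_calibration(update_joint, update_hand_eye, update_camera, objective,
                      max_iter=50, tol=1e-12):
    """Schematic reconstruction of the stepwise iterative process in Algorithm 1.

    update_joint(k, xi_joint, xi_EC, xi_CW) -> new xi_k        (Equations (9)-(13))
    update_hand_eye(xi_joint, xi_EC, xi_CW) -> new xi_EC       (same derivation)
    update_camera(xi_joint, xi_EC, xi_CW)   -> new xi_CW       (Equations (14)-(15))
    objective(xi_joint, xi_EC, xi_CW)       -> f(x)            (Equation (8))
    """
    xi_joint = [np.zeros(6) for _ in range(6)]   # robot parameters xi_1..xi_6
    xi_EC = np.zeros(6)                          # hand-eye parameter
    xi_CW = np.zeros(3)                          # camera parameter [k1, k2, k3]
    f_prev = np.inf
    for _ in range(max_iter):                    # at most L iterations
        for k in range(6):                       # step through the six joints
            xi_joint[k] = update_joint(k, xi_joint, xi_EC, xi_CW)
        xi_EC = update_hand_eye(xi_joint, xi_EC, xi_CW)
        xi_CW = update_camera(xi_joint, xi_EC, xi_CW)
        f = objective(xi_joint, xi_EC, xi_CW)
        if f >= f_prev - tol:                    # stop when f(x) no longer decreases
            break
        f_prev = f
    return xi_joint, xi_EC, xi_CW
```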

3. Simulation

The simulation is designed to verify the joint calibration method. Firstly, the simulation data are generated. The robot is modeled using the MDH method with parameter errors to generate ${}_E^B T$, and the pillow error model, as shown in Equation (5), is introduced to generate ${}_W^C P$. Furthermore, random noise is added to ${}_E^B T$ and ${}_W^C P$. Then, ${}_E^B T$, ${}_C^E T$, and ${}_W^C P$ are substituted into Algorithm 1 to obtain $x$, and calibrated values ${}_W^B P_{\mathrm{calibrated}}$, ${}_E^B T_{\mathrm{calibrated}}$, ${}_C^E T_{\mathrm{calibrated}}$, and ${}_W^C P_{\mathrm{calibrated}}$ are calculated according to $x$. Finally, the performance of the joint calibration method is evaluated by comparing the calibrated values with the ideal values ${}_W^B P_0$, ${}_E^B T_0$, ${}_C^E T_0$, and ${}_W^C P_0$. Rotational random noise is added to ${}_E^B T$ according to the expression $T_{ij} + e_{\mathrm{rot}}$, $i, j = 1 \ldots 3$, where $e_{\mathrm{rot}}$ obeys a Gaussian distribution $N(0, \sigma_{\mathrm{rot}})$. Similarly, translational random noise is added to both ${}_E^B T$ and ${}_W^C P$ according to the expressions $T_{i4} + e_{\mathrm{tran}}$ and $P_{i1} + e_{\mathrm{tran}}$, $i = 1 \ldots 3$, $e_{\mathrm{tran}} \sim N(0, \sigma_{\mathrm{tran}})$, respectively. During the simulation, $\sigma_{\mathrm{rot}}$ and $\sigma_{\mathrm{tran}}$ are set to 0.0001 and 0.01, respectively. The ABB IRB1410 robot model is used, and Table 1 presents the MDH parameters along with their errors (enclosed in parentheses). The camera error parameter is set to $[1 \times 10^{-9},\ 7 \times 10^{-15},\ 5 \times 10^{-18}]$.
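A sketch of this noise injection is shown below, using the σ values quoted above; the function and variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(42)
SIGMA_ROT, SIGMA_TRAN = 1e-4, 0.01          # noise levels used in the simulation

def add_pose_noise(T_BE):
    """Add Gaussian noise to the rotation entries and translation of ^B_E T."""
    T = T_BE.copy()
    T[:3, :3] += rng.normal(0.0, SIGMA_ROT, size=(3, 3))   # T_ij + e_rot, i, j = 1..3
    T[:3, 3] += rng.normal(0.0, SIGMA_TRAN, size=3)         # T_i4 + e_tran
    return T

def add_point_noise(P_C):
    """Add Gaussian translational noise to a homogeneous camera point ^C_W P."""
    P = P_C.copy()
    P[:3] += rng.normal(0.0, SIGMA_TRAN, size=3)             # P_i1 + e_tran
    return P
```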

3.1. Calibration Accuracy

One hundred training data sets and one hundred testing data sets are generated. The initial ${}_C^E T$ is obtained as below. We can observe that ${}_C^E T$ deviates from ${}_C^E T_0$, which can be attributed to robot and camera errors.
$$
{}_C^E T = \begin{bmatrix} 0.0051 & 1.0000 & 0.0035 & 59.5445 \\ 1.0000 & 0.0051 & 0.0034 & 61.3253 \\ 0.0034 & 0.0035 & 1.0000 & 86.2298 \\ 0 & 0 & 0 & 1 \end{bmatrix}
$$
Next, ${}_E^B T$, ${}_C^E T$, and ${}_W^C P$ are substituted into Algorithm 1 to obtain the calibration parameter $x$, and the results are presented in Table 2. The calibrated hand–eye matrix ${}_C^E T_{\mathrm{calibrated}}$ is obtained as follows.
$$
{}_C^E T_{\mathrm{calibrated}} = \begin{bmatrix} 0.0001 & 1.0000 & 0.0001 & 59.7917 \\ 1.0000 & 0.0001 & 0.0014 & 60.0563 \\ 0.0014 & 0.0001 & 1.0000 & 84.9688 \\ 0 & 0 & 0 & 1 \end{bmatrix}
$$
It has been observed that ${}_C^E T_{\mathrm{calibrated}}$ is close to ${}_C^E T_0$, and $\xi_{CW}$ is close to the negative of the preset value. These results demonstrate that the joint calibration method can accurately calibrate the hand–eye matrix and camera errors. However, since $\xi_i \ (i = 1, 2, \ldots, 6)$ do not have true values, it is difficult to determine whether the robot error has been accurately identified. This problem will be addressed by observing the calibration result of ${}_E^B T$.
To illustrate the importance of joint calibration, the methods are classified into methods 1–4 based on whether robot, hand–eye matrix, or camera errors are taken into account, as shown in Table 3, where √ indicates that this type of error is considered and × indicates that it is not. Methods 1 and 2 utilize the error model and calibration method proposed in this paper. Method 4, proposed in Reference [26], uses the D-H method to model robot error and establishes a linearized equation for calibration. The positioning accuracy and repeatability accuracy in {B} are chosen as the evaluation criteria for these methods. The mean value ($e_{\mathrm{mean}}$) and the maximum value ($e_{\mathrm{max}}$) of $\left\| {}_W^B P_{\mathrm{calibrated}} - {}_W^B P_0 \right\|$ calculated from the 100 testing data sets are used as the evaluation criteria for positioning accuracy. The standard deviation of ${}_W^B P_{\mathrm{calibrated}}$, denoted as $std$, is used as the evaluation criterion for repeatability accuracy.
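These criteria can be computed directly from the testing data; a minimal sketch (with our own array naming, and one plausible way of aggregating the standard deviation) follows.

```python
import numpy as np

def evaluation_metrics(P_calibrated, P_ideal):
    """Positioning and repeatability criteria of the kind reported in Table 3.

    P_calibrated: (N, 3) calibrated sphere-centre positions ^B_W P from N test sets
    P_ideal     : (3,)   ideal position ^B_W P_0
    """
    err = np.linalg.norm(P_calibrated - P_ideal, axis=1)   # ||P_calibrated - P_0||
    e_mean, e_max = err.mean(), err.max()
    # One way to summarise the spread of the calibrated positions as a single number.
    std = np.linalg.norm(P_calibrated.std(axis=0))
    return e_mean, e_max, std
```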
The results are presented in Table 3. Method 3, without calibration, exhibits a large $e_{\mathrm{mean}}$, $e_{\mathrm{max}}$, and $std$, whereas method 1 shows significantly smaller values, indicating that the proposed joint calibration method improves both the positioning accuracy and repeatability accuracy. Method 2 shows a larger $e_{\mathrm{mean}}$ and $e_{\mathrm{max}}$ compared to method 3 but a smaller $std$, suggesting that method 2 improves the repeatability accuracy while reducing positioning accuracy. Method 4 fails to effectively calibrate the compensative error.
Notably, the proposed method’s optimization objective is to minimize the relative error in ${}_W^B P$, as shown in Equation (8). It does not guarantee the individually correct calibration of the robot, hand–eye matrix, and camera errors. ${}_E^B T_{\mathrm{calibrated}}$, ${}_C^E T_{\mathrm{calibrated}}$, and ${}_W^C P_{\mathrm{calibrated}}$ are therefore compared with their ideal values to observe the calibration results of the robot, hand–eye matrix, and camera errors separately. $\left\| P_{\mathrm{calibrated}} - P_0 \right\|$ and $\left\| T_{\mathrm{calibrated}}(1{:}4, 4) - T_0(1{:}4, 4) \right\|$ are chosen as the evaluation criteria. Figure 6 displays the calibration results for each testing data set. Method 1 achieves good calibration for all three types of error. Method 2 only partly calibrates the hand–eye matrix error, whereas method 4 is less effective in calibrating each error.

3.2. The Impact of Random Noise

To investigate the impact of random noise on each method, we gradually increase the random noise by multiplying $\sigma_{\mathrm{rot}}$ and $\sigma_{\mathrm{tran}}$ by a “noise level” coefficient, while setting $\xi_{CW}$ to 0. The variation in $e_{\mathrm{mean}}$ is observed and presented in Figure 7. Even though $e_{\mathrm{mean}}$ increases with the noise level, method 1 consistently produces a smaller $e_{\mathrm{mean}}$ than method 3. This indicates that method 1 is able to resist noise and maintain accuracy. Method 2 achieves a similar result since the camera error is set to 0. Method 4 can calibrate the compensative error only when the random noise is small, indicating its high sensitivity to random noise.

3.3. The Impact of Camera Error

To investigate the impact of camera error on each method, we gradually increase the preset value of $\xi_{CW}$ while setting the noise level to 0, and obtain the variation in $e_{\mathrm{mean}}$ for each method. The results are presented in Figure 8, where the horizontal axis represents the mean error in ${}_W^C P$. We observe that as the camera error increases, the $e_{\mathrm{mean}}$ of method 1 gradually increases but remains smaller than that of method 3. Similarly, the $e_{\mathrm{mean}}$ of method 2 also increases with the camera error. However, method 2 exhibits a negative effect when the camera error exceeds 0.2 mm, indicating the need to consider camera error. Method 4 can yield a small $e_{\mathrm{mean}}$ only when the camera error is small, indicating its high sensitivity to camera error.

4. Experiment

The calibration system, as shown in Figure 9a, utilizes an ABB IRB1410 robot with a repeatability accuracy of ±0.05 mm. The parameters of the camera and ceramic standard sphere are detailed in Section 2.2.

4.1. Calibration Accuracy

The initial ${}_C^E T$ is obtained as follows.
$$
{}_C^E T = \begin{bmatrix} 0.0135 & 0.9998 & 0.0109 & 57.4017 \\ 0.9995 & 0.0132 & 0.0278 & 61.1012 \\ 0.0276 & 0.0113 & 0.9996 & 86.1401 \\ 0 & 0 & 0 & 1 \end{bmatrix}
$$
Next, ${}_E^B T$, ${}_C^E T$, and ${}_W^C P$ are substituted into Algorithm 1 to solve for $x$, and the results are presented in Table 4. The calibrated hand–eye matrix is obtained as follows.
$$
{}_C^E T_{\mathrm{calibrated}} = \begin{bmatrix} 0.0023 & 0.9999 & 0.0010 & 57.2041 \\ 1.0000 & 0.0023 & 0.0032 & 60.9219 \\ 0.0032 & 0.0010 & 1.0000 & 86.5221 \\ 0 & 0 & 0 & 1 \end{bmatrix}
$$
To evaluate the performance of methods 1–4, distance accuracy is used instead of positioning accuracy, since ${}_W^B P_0$ is unknown in the experiment. The distance accuracy is tested as follows. After obtaining the calibration parameters using each method, the standard sphere is fixed on a linear slide table with a distance accuracy of ±0.03 mm, as shown in Figure 9b. ${}_W^C P_1$ is obtained from 25 different poses and then converted to ${}_W^B P_1$ using the calibration parameters. The linear slide is then moved by 50 mm, and the process is repeated to obtain ${}_W^B P_2$. The mean value ($de_{\mathrm{mean}}$) and the maximum value ($de_{\mathrm{max}}$) of $\mathrm{abs}\left( \left\| {}_W^B P_1 - {}_W^B P_2 \right\|_2 - 50 \right)$ calculated from the 25 data sets are used as the evaluation criteria for the distance accuracy, and $\mathrm{std}\left( \mathrm{abs}\left( \left\| {}_W^B P_1 - {}_W^B P_2 \right\|_2 - 50 \right) \right)$ ($d_{\mathrm{std}}$) is used as the evaluation criterion for repeatability accuracy. The results obtained are shown in Table 5.
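A sketch of how these distance criteria can be computed is given below (array names are ours; the 50 mm value is the nominal slide displacement).

```python
import numpy as np

def distance_metrics(P1, P2, nominal_shift=50.0):
    """Distance accuracy criteria of the kind reported in Table 5.

    P1, P2: (25, 3) sphere-centre positions ^B_W P before and after moving the
    linear slide by the nominal 50 mm shift.
    """
    d_err = np.abs(np.linalg.norm(P1 - P2, axis=1) - nominal_shift)
    return d_err.mean(), d_err.max(), d_err.std()   # de_mean, de_max, d_std
```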
Method 3 exhibits the largest $de_{\mathrm{mean}}$, $de_{\mathrm{max}}$, and $d_{\mathrm{std}}$ compared to the other methods, whereas method 1 yields the smallest. This result suggests that the proposed joint calibration method performs exceptionally well in practical situations. Although method 2 shows slightly larger values than method 1, it still clearly outperforms method 3. In addition, method 2 can partially calibrate the compensative error, indicating that the camera error is insignificant here. Method 4 also performs calibration successfully, unlike the result in Table 3. Method 4 disregards the camera error and relies on the D-H method, making it sensitive to the camera error; when the camera error is significant, method 4 fails to calibrate the measurement system. To show this case, a significant camera error was set in the simulation to exhibit the necessity of considering the camera error and of modeling with Lie algebra. As shown in Figure 6d, a maximum camera error of 1.6 mm is added in the simulation, and method 4 fails to achieve calibration. Figure 8b demonstrates that method 4 can also perform high-precision calibration when the camera error is small. The maximum fitted diameter error is less than 0.4 mm in the experiment, as shown in Figure 5. This error is small enough that method 4 can successfully calibrate the measurement system. The experimental results thus align with the simulation results. While the proposed method’s advantage may not be as pronounced as in the simulations, the results in Table 5 indicate a notable improvement over the other methods.
The individual compensation values for the robot, hand–eye matrix, and camera errors are shown in Figure 10. It is important to note that the compensation value can only be used as a reference for each error. We can infer from Figure 10 that robot error is the main error, followed by hand–eye matrix error, and camera error is the smallest. This result is consistent with the previous inference.
The $\xi_{CW}$ obtained from method 1 is used to calibrate the point clouds in Figure 5, and the results are shown in Figure 11. The fitted diameter after calibration is closer to the theoretical diameter, indicating that the joint calibration method successfully calibrates part of the camera error. However, the diameter deviation is not completely eliminated, which could be attributed to two reasons. Firstly, during the joint calibration method, only the sphere center, and not the full point cloud of the sphere, is used, which can lead to deviations in $\xi_{CW}$. Secondly, the camera may have other error forms that are not accounted for.

4.2. Performance in Practical Applications

The robot measurement system is mainly calibrated to measure workpieces precisely. To evaluate each method in practical measurements, the calibrated robot measurement system is used to measure the standard sphere, which is placed at another position, different from that during the calibration process. Multiple point clouds of the standard sphere are collected from different poses and stitched together, and the performance of each method is assessed based on the stitching quality. For better visualization, only three point clouds are stitched first, and the stitching process and results are presented in the upper parts of Figure 12 and Figure 13, respectively. Method 1 produces the best stitching result, whereas methods 2 and 4 exhibit more mistakes than method 1. Method 3 yields the worst stitching result, with misaligned point clouds. These results suggest that the joint calibration method significantly improves measurement accuracy in practice.
After that, point clouds from more sampled poses are stitched together and fitted to spheres, as shown in the bottom part of Figure 12. The stitching and spherical fitting results of multiple point clouds are shown in Figure 14. Method 1 again yields the highest-quality stitching result. Methods 2 and 4 exhibit noticeable mistakes, and method 3 fails to stitch the point clouds. In terms of fitting results, method 1 outperforms the other methods, whereas methods 2 and 4 feature numerous outlier points. The error between the fitted diameter and the theoretical value is used to evaluate each method quantitatively. The fitted diameter errors of methods 1, 2, and 4 are 0.23 mm, 0.61 mm, and 0.57 mm, respectively, whereas method 3 fails to achieve a proper spherical fitting. This result also demonstrates that method 1 exhibits the highest calibration accuracy.

5. Conclusions

This paper proposes a joint calibration method for a robot measurement system, which considers robot kinematic, camera-to-robot installation, and 3D camera measurement errors. The method establishes separate error models for each type of error and constructs a joint error model based on homogeneous transformation. Based on the joint error model, the calibration problem is formulated as a stepwise optimization problem, and analytical solutions are derived. The superiority of the proposed method is validated through simulation and experiment. In the simulation, the proposed method reduces the mean positioning error from over 2.5228 mm to 0.2629 mm and the mean repeatability error from over 0.1831 mm to 0.0112 mm, compared to methods that do not consider all error types. In addition, the anti-noise simulation results demonstrate that the proposed method can achieve reliable high-precision calibration even under increasing random noise. In the experiment, the proposed method reduces the mean distance error from over 0.1488 mm to 0.1232 mm and the mean repeatability error from over 0.1045 mm to 0.0957 mm compared to the other methods. When applied to actual measurements, the proposed method outperforms the other methods in stitching and fitting point clouds, reducing the fitted diameter error from over 0.57 mm to 0.23 mm. However, based on the experimental results, the proposed method can only partially calibrate the 3D camera measurement error. Forms of 3D camera measurement error other than the pillow error modeled in Section 2.2 may exist and require further investigation.

Author Contributions

Conceptualization, L.W. and X.Z. (Xizhe Zang); methodology, L.W. and G.D.; software, L.W.; validation, G.D. and C.W.; formal analysis, X.Z. (Xuehe Zhang); investigation, X.Z. (Xuehe Zhang) and Y.L.; resources, X.Z. (Xizhe Zang) and J.Z.; data curation, L.W.; writing—original draft preparation, L.W.; writing—review and editing, X.Z. (Xizhe Zang) and Y.L.; supervision, J.Z.; funding acquisition, X.Z. (Xizhe Zang) and J.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China (Grant No. 2022YFB4700802) and the Major Research Plan of the National Natural Science Foundation of China (Grant No. 92048301).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Suzuki, R.; Okada, Y.; Yokota, Y.; Saijo, T.; Eto, H.; Sakai, Y.; Murano, K.; Ohno, K.; Tadakuma, K.; Tadokoro, S. Cooperative Towing by Multi-Robot System That Maintains Welding Cable in Optimized Shape. IEEE Robot. Autom. Lett. 2022, 7, 11783–11790. [Google Scholar] [CrossRef]
  2. Siguerdidjane, W.; Khameneifar, F.; Gosselin, F.P. Closed-loop shot peen forming with in-process measurement and optimization. CIRP J. Manuf. Sci. Technol. 2022, 38, 500–508. [Google Scholar] [CrossRef]
  3. Li, P. Research on Staggered Stacking Pattern Algorithm for Port Stacking Robot. J. Coast. Res. 2020, 115, 199–201. [Google Scholar] [CrossRef]
  4. Lyu, C.; Li, P.; Wang, D.; Yang, S.; Lai, Y.; Sui, C. High-Speed Optical 3D Measurement Sensor for Industrial Application. IEEE Sensors J. 2021, 21, 11253–11261. [Google Scholar] [CrossRef]
  5. Zhong, F.; Kumar, R.; Quan, C. A Cost-Effective Single-Shot Structured Light System for 3D Shape Measurement. IEEE Sens. J. 2019, 19, 7335–7346. [Google Scholar] [CrossRef]
  6. Chen, X.; Zhan, Q. The Kinematic Calibration of an Industrial Robot With an Improved Beetle Swarm Optimization Algorithm. IEEE Robot. Autom. Lett. 2022, 7, 4694–4701. [Google Scholar] [CrossRef]
  7. Meng, L.; Li, Y.; Zhou, H.; Wang, Q. A Hybrid Calibration Method for the Binocular Omnidirectional Vision System. IEEE Sens. J. 2022, 22, 8059–8070. [Google Scholar] [CrossRef]
  8. Chen, G.; Cui, G.; Jin, Z.; Wu, F.; Chen, X. Accurate Intrinsic and Extrinsic Calibration of RGB-D Cameras With GP-Based Depth Correction. IEEE Sens. J. 2019, 19, 2685–2694. [Google Scholar] [CrossRef]
  9. Sarabandi, S.; Porta, J.M.; Thomas, F. Hand-Eye Calibration Made Easy Through a Closed-Form Two-Stage Method. IEEE Robot. Autom. Lett. 2022, 7, 3679–3686. [Google Scholar] [CrossRef]
  10. Bai, M.; Zhang, M.; Zhang, H.; Li, M.; Zhao, J.; Chen, Z. Calibration Method Based on Models and Least-Squares Support Vector Regression Enhancing Robot Position Accuracy. IEEE Access 2021, 9, 136060–136070. [Google Scholar] [CrossRef]
  11. Sun, T.; Lian, B.; Yang, S.; Song, Y. Kinematic calibration of serial and parallel robots based on finite and instantaneous screw theory. IEEE Trans. Robot. 2020, 36, 816–834. [Google Scholar] [CrossRef]
  12. Okamura, K.; Park, F.C. Kinematic calibration using the product of exponentials formula. Robotica 1996, 14, 415–421. [Google Scholar] [CrossRef]
  13. Liu, Z.; Liu, X.; Cao, Z.; Gong, X.; Tan, M.; Yu, J. High Precision Calibration for Three-Dimensional Vision-Guided Robot System. IEEE Trans. Ind. Electron. 2023, 70, 624–634. [Google Scholar] [CrossRef]
  14. Du, G.; Liang, Y.; Li, C.; Liu, P.X.; Li, D. Online robot kinematic calibration using hybrid filter with multiple sensors. IEEE Trans. Instrum. Meas. 2020, 69, 7092–7107. [Google Scholar] [CrossRef]
  15. Messay-Kebede, T.; Sutton, G.; Djaneye-Boundjou, O. Geometry based self kinematic calibration method for industrial robots. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018; pp. 4921–4926. [Google Scholar] [CrossRef]
  16. Yang, L.; Cao, Q.; Lin, M.; Zhang, H.; Ma, Z. Robotic hand-eye calibration with depth camera: A sphere model approach. In Proceedings of the 2018 4th International Conference on Control, Automation and Robotics (ICCAR), Auckland, New Zealand, 20–23 April 2018; pp. 104–110. [Google Scholar] [CrossRef]
  17. Fu, J.; Ding, Y.; Huang, T.; Liu, H.; Liu, X. Hand–eye calibration method based on three-dimensional visual measurement in robotic high-precision machining. Int. J. Adv. Manuf. Technol. 2022, 119, 3845–3856. [Google Scholar] [CrossRef]
  18. Yin, S.; Ren, Y.; Guo, Y.; Zhu, J.; Yang, S.; Ye, S. Development and calibration of an integrated 3D scanning system for high-accuracy large-scale metrology. Measurement 2014, 54, 65–76. [Google Scholar] [CrossRef]
  19. Madhusudanan, H.; Liu, X.; Chen, W.; Li, D.; Du, L.; Li, J.; Ge, J.; Sun, Y. Automated Eye-in-Hand Robot-3D Scanner Calibration for Low Stitching Errors. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 8906–8912. [Google Scholar] [CrossRef]
  20. Wang, G.; Li, W.l.; Jiang, C.; Zhu, D.h.; Xie, H.; Liu, X.j.; Ding, H. Simultaneous Calibration of Multicoordinates for a Dual-Robot System by Solving the AXB = YCZ Problem. IEEE Trans. Robot. 2021, 37, 1172–1185. [Google Scholar] [CrossRef]
  21. Liu, Q.; Qin, X.; Yin, S.; He, F. Structural Parameters Optimal Design and Accuracy Analysis for Binocular Vision Measure System. In Proceedings of the 2008 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Xi’an, China, 2–5 July 2008; pp. 156–161. [Google Scholar] [CrossRef]
  22. Lin, D.; Wang, Z.; Shi, H.; Chen, H. Modeling and analysis of pixel quantization error of binocular vision system with unequal focal length. J. Phys. Conf. Ser. 2021, 1738, 012033. [Google Scholar] [CrossRef]
  23. Bottalico, F.; Niezrecki, C.; Jerath, K.; Luo, Y.; Sabato, A. Sensor-Based Calibration of Camera’s Extrinsic Parameters for Stereophotogrammetry. IEEE Sens. J. 2023, 23, 7776–7785. [Google Scholar] [CrossRef]
  24. Zhang, C.; Zhang, Z. Calibration Between Depth and Color Sensors for Commodity Depth Cameras. In Computer Vision and Machine Learning with RGB-D Sensors; Springer: Berlin/Heidelberg, Germany, 2014; pp. 47–64. [Google Scholar] [CrossRef]
  25. Basso, F.; Pretto, A.; Menegatti, E. Unsupervised intrinsic and extrinsic calibration of a camera-depth sensor couple. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 6244–6249. [Google Scholar] [CrossRef]
  26. Li, W.L.; Xie, H.; Zhang, G.; Yan, S.J.; Yin, Z.P. Hand–Eye Calibration in Visually-Guided Robot Grinding. IEEE Trans. Cybern. 2016, 46, 2634–2642. [Google Scholar] [CrossRef]
  27. Lembono, T.S.; Suarez-Ruiz, F.; Pham, Q.C. SCALAR: Simultaneous Calibration of 2-D Laser and Robot Kinematic Parameters Using Planarity and Distance Constraints. IEEE Trans. Automat. Sci. Eng. 2019, 16, 1971–1979. [Google Scholar] [CrossRef]
  28. Karmakar, S.; Turner, C.J. Forward kinematics solution for a general Stewart platform through iteration based simulation. Int. J. Adv. Manuf. Technol. 2023, 126, 813–825. [Google Scholar] [CrossRef]
  29. Yang, L.; Wang, B.; Zhang, R.; Zhou, H.; Wang, R. Analysis on Location Accuracy for the Binocular Stereo Vision System. IEEE Photonics J. 2018, 10, 7800316. [Google Scholar] [CrossRef]
  30. Xu, D.; Zhang, D.; Liu, X.; Ma, L. A Calibration and 3-D Measurement Method for an Active Vision System With Symmetric Yawing Cameras. IEEE Trans. Instrum. Meas. 2021, 70, 5012013. [Google Scholar] [CrossRef]
Figure 1. The robot measurement system.
Figure 2. The joint calibration method.
Figure 3. The measurement principle and error model of the 3D camera.
Figure 4. The observation process of the camera error distribution.
Figure 5. The camera error distribution (a) along the x-axis direction, (b) along the y-axis direction, and (c) along the z-axis direction.
Figure 6. The residual error in each part: (a) the residual robot error in methods 1–3, (b) the residual robot error in method 4, (c) the residual error in the hand–eye matrix, (d) the residual error of the camera.
Figure 7. The impact of random noise on each method: (a) methods 1–3, (b) method 4.
Figure 8. The impact of camera error on each method: (a) methods 1–3, (b) method 4.
Figure 9. Experimental equipment: (a) the calibration system, (b) the standard sphere installed on a linear slide.
Figure 10. The compensation value of each part: (a) robot, (b) hand–eye matrix, and (c) camera.
Figure 11. The calibration results of the camera (a) along the x-axis direction, (b) along the y-axis direction, (c) along the z-axis direction.
Figure 12. The point clouds stitching and spherical fitting process.
Figure 13. Stitching results of three point clouds using each method: (a) method 1, (b) method 2, (c) method 3, and (d) method 4.
Figure 14. Stitching results and fitting results of each method: (a) the stitching result of method 1, (b) the fitting result of (a), (c) the stitching result of method 2, (d) the fitting result of (c), (e) the stitching result of method 3, (f) the stitching result of method 4, (g) the fitting result of (f).
Table 1. The parameters and errors of the robot.

Joint Number | a_{j−1} (mm) | α_{j−1} (rad) | d_j (mm) | θ_j (rad)
1 | 0 (0.0016) | 0 (1.5 × 10⁻⁴) | 475 (0.0018) | 0 (3.0 × 10⁻⁴)
2 | 150 (0.0023) | 0.5π (7.5 × 10⁻⁴) | 0 (0.0033) | 0.5π (4.5 × 10⁻⁴)
3 | 600 (0.0390) | 0 (3.0 × 10⁻⁴) | 0 (0.0071) | 0 (−1.5 × 10⁻⁴)
4 | 120 (−0.030) | 0.5π (9.0 × 10⁻⁴) | 720 (0.0075) | 0 (3.0 × 10⁻⁴)
5 | 0 (0.0240) | −0.5π (7.5 × 10⁻⁴) | 0 (0.0032) | 0 (−0.0011)
6 | 0 (−0.0510) | 0.5π (−4.5 × 10⁻⁴) | 85 (0.0018) | 0 (1.5 × 10⁻⁴)
Table 2. The simulation results of calibration parameters.

Parameter | ρ₁ (mm) | ρ₂ (mm) | ρ₃ (mm) | ϕ₁ (rad) | ϕ₂ (rad) | ϕ₃ (rad)
ξ₁ | −0.0386 | 0.0141 | −0.0320 | 0.0250 | 0.0505 | −0.2077
ξ₂ | −0.4040 | 0.4460 | 0.5121 | −0.4951 | −0.0021 | 0.0545
ξ₃ | 5.62 × 10⁻⁴ | −0.1397 | −0.1761 | 0.4880 | 0.5026 | −0.0640
ξ₄ | −9.91 × 10⁻⁶ | −3.04 × 10⁻⁴ | −8.92 × 10⁻⁴ | −7.50 × 10⁻⁴ | −4.54 × 10⁻⁴ | −0.0014
ξ₅ | −8.97 × 10⁻⁶ | −4.84 × 10⁻⁶ | 4.42 × 10⁻⁴ | 8.53 × 10⁻⁴ | 0.0034 | −1.31 × 10⁻⁴
ξ₆ | 8.75 × 10⁻⁴ | 3.72 × 10⁻⁴ | −6.04 × 10⁻⁴ | 1.24 × 10⁻⁴ | 1.98 × 10⁻⁴ | 0.0031
ξ_EC | −1.2709 | 0.2462 | −1.2593 | 0.0033 | 0.0048 | 0.0049
Parameter | k₁ | k₂ | k₃ | | |
ξ_CW | −9.60 × 10⁻¹⁰ | −7.32 × 10⁻¹⁵ | −4.94 × 10⁻¹⁸ | | |
Table 3. Positioning and repeatability errors of methods.

Method | Robot | Hand–Eye Matrix | Camera | e_mean (mm) | e_max (mm) | std (mm)
1 | √ | √ | √ | 0.2629 | 0.2918 | 0.0112
2 | √ | √ | × | 5.6545 | 5.9715 | 0.1831
3 | × | × | × | 2.5228 | 3.5615 | 0.5317
4 | √ | √ | × | 134.8228 | 140.9125 | 0.6630
Table 4. The experimental results of the calibration parameters.

Parameter | ρ₁ (mm) | ρ₂ (mm) | ρ₃ (mm) | ϕ₁ (rad) | ϕ₂ (rad) | ϕ₃ (rad)
ξ₁ | 0.1416 | −0.2921 | −1.35 × 10⁻⁴ | −1.56 × 10⁻⁴ | 6.66 × 10⁻⁴ | 3.26 × 10⁻⁵
ξ₂ | 0.5372 | −0.1758 | −0.0512 | 2.32 × 10⁻⁴ | −1.58 × 10⁻⁴ | 4.73 × 10⁻⁵
ξ₃ | 0.3549 | 0.1113 | −0.0251 | 2.46 × 10⁻⁴ | −1.28 × 10⁻⁴ | 1.25 × 10⁻⁶
ξ₄ | −0.1963 | 0.2188 | 0.6091 | 4.84 × 10⁻⁴ | −5.40 × 10⁻⁴ | 9.96 × 10⁻⁵
ξ₅ | 0.1579 | −0.0294 | −0.2141 | 8.81 × 10⁻⁵ | 6.02 × 10⁻⁴ | 1.73 × 10⁻⁴
ξ₆ | 0.0977 | 0.0743 | 0.0969 | −1.78 × 10⁻⁴ | 3.44 × 10⁻⁵ | 4.60 × 10⁻⁷
ξ_EC | −0.1723 | −0.1967 | 0.3857 | −0.0121 | 0.0245 | 0.0157
Parameter | k₁ | k₂ | k₃ | | |
ξ_CW | 7.02 × 10⁻²⁸ | 4.32 × 10⁻²³ | 7.58 × 10⁻¹⁸ | | |
Table 5. Distance and repeatability errors of methods.

Method | Robot | Hand–Eye Matrix | Camera | de_mean (mm) | de_max (mm) | d_std (mm)
1 | √ | √ | √ | 0.1232 | 0.3137 | 0.0957
2 | √ | √ | × | 0.1488 | 0.3675 | 0.1045
3 | × | × | × | 1.5457 | 4.6076 | 1.3503
4 | √ | √ | × | 0.1933 | 0.4703 | 0.1414
