Article

Robust and Accurate Hand–Eye Calibration Method Based on Schur Matric Decomposition

Hypervelocity Aerodynamics Institute, Chinese Aerodynamics Research and Development Center, Mianyang 621000, China
* Author to whom correspondence should be addressed.
Sensors 2019, 19(20), 4490; https://doi.org/10.3390/s19204490
Submission received: 4 September 2019 / Revised: 12 October 2019 / Accepted: 14 October 2019 / Published: 16 October 2019
(This article belongs to the Special Issue Intelligent Systems and Sensors for Robotics)

Abstract
To improve the accuracy and robustness of hand–eye calibration, a hand–eye calibration method based on Schur matrix decomposition is proposed in this paper. The accuracy of hand–eye calibration methods strongly depends on the quality of the observation data, so preprocessing the observation data is essential; indeed, it is one of the basic unresolved problems of hand–eye calibration. As with traditional two-step hand–eye calibration methods, we first solve for the rotation parameters, after which the translation vector can be immediately determined. A general solution was obtained from one observation through Schur matrix decomposition, decreasing the degrees of freedom from three to two. A discriminant equation for deleting outliers was then deduced from the Schur decomposition, so the preprocessing problem is solved by outlier detection, which significantly improves robustness. The proposed method was validated by both simulations and experiments. The results show that the prediction errors of rotation and translation were 0.06 arcmin and 1.01 mm, respectively, and that the proposed method performed much better in outlier detection. In addition, a minimal configuration for the unique solution was proven from a new perspective.

1. Introduction

The combination of vision sensors and robots is a milestone in robotic intelligence, increasing the extent and efficacy of robot applications [1,2,3,4,5]. Hand–eye calibration is an important technique for establishing the transformation between a robot gripper and a robot vision sensor [6]. Its application is mainly reflected in the robot's hand–eye coordination: the machine vision system guides the robot gripper to accurately reach a specified location. From work at height to surgery, the more sophisticated the operation, the better the hand–eye coordination required.
Many researchers have studied hand–eye calibration, and all current methods can be divided into two categories: linear methods and iterative methods.
Linear methods are efficient and suitable for online hand–eye calibration. Shiu and Ahmad first introduced the dynamic equation AX = XB into hand–eye calibration and provided the minimal configuration for a unique solution [6]. Tsai and Lenz proposed a highly efficient linear method for the equation AX = XB [7]. Chou and Kamel expressed rotation matrices using quaternions and obtained an analytical solution using Singular Value Decomposition (SVD) [8]. Lu and Chou used an eight-dimensional vector to express rotation and translation and obtained a least squares solution [9]. Chen analyzed the relationship between screw movement and hand–eye calibration, and proved that the movements of the robot gripper and vision sensor must satisfy certain geometric constraints [10]. Daniilidis solved rotation and translation simultaneously by means of dual quaternions [11]. Park introduced canonical coordinates into the hand–eye calibration equation, which simplified the parameters [12]. Shah constructed a closed-form solution and derived the minimal configuration of the unique solution based on the Kronecker product [13]. Compared with the method of Daniilidis [11], Shah's method is more reliable and accurate.
Iterative methods are mainly used to improve accuracy and robustness. Other authors [14,15] took the F-norm of the rotation error and translation error as the cost function and optimized it using nonlinear methods. Horaud expressed rotation matrices using quaternions and simultaneously optimized the transformations between the robot-world and the hand and eye [16]. Strobl and Hirzinger proposed a new adaptive error model that helped improve the solutions to AX = XB and AX = ZB [17]. Ruland proposed a self-calibration method that takes the projection error as its cost function and optimizes it using branch-and-bound [18].
The accuracy of the above methods strongly depends on the quality of the observation data, so preprocessing the observation data is essential; however, such preprocessing is rarely reported. Schmidt et al. [19] proposed a preprocessing method based on vector quantization, which improved the quality of the observation data to a certain extent but could not identify outliers. Moreover, the complexity increased from O(N) to O(N⁴), which considerably decreased the method's efficiency.

2. Description of Hand–Eye Calibration Problem

Figure 1 illustrates the hand–eye calibration problem. The symbols are defined as follows. Gi is the robot gripper coordinate system; it is fixed to the robot gripper and moves together with it. Ci is the camera coordinate system, fixed to the camera and moving together with it; its origin coincides with the camera's optical center, its Z-axis is parallel to the optical axis, and its X and Y axes are parallel to the X and Y axes of the image coordinate system. CW is the world coordinate system, and RW is the robot coordinate system, fixed to the robot. When the robot gripper moves, its controlling device can report the gripper's pose in RW.
Ai is the homogenous transformation matric from Gi to RW, obtained from the robot controlling device:
A i = [ R A i t A i 0 1 × 3 1 ]
Bi is the homogenous transformation matric from CW to Ci, obtained using camera pose estimation methods:
B i = [ R B i t B i 0 1 × 3 1 ]
Aij is the homogenous transformation matric from Gi to Gj:
A i j = A j 1 A i = [ R A i j t A i j 0 1 × 3 1 ]
Bij is the homogenous transformation matric from Ci to Cj:
B i j = B j B i 1 = [ R B i j t B i j 0 1 × 3 1 ]
and X is the homogenous transformation matric from Ci to Gi:
X = [ R X t X 0 1 × 3 1 ]
Here i and j denote the ith and jth states of the robot gripper and camera, respectively, ranging from 0 to N, where N is the number of movements. Since the camera is rigidly fixed to the robot gripper, X is constant.
With this notation, the hand–eye calibration equation is:
$$A_{ij} X = X B_{ij} \tag{6}$$
Partitioning the matrices yields two equations:
$$\begin{cases} R_{A_{ij}} R_X = R_X R_{B_{ij}} \\ (R_{A_{ij}} - I)\, t_X = R_X t_{B_{ij}} - t_{A_{ij}} \end{cases} \tag{7}$$
Equation (7) shows that RX can be solved independently, whereas the accuracy of tX depends on RX.
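In code, the motion pairs of Equations (3) and (4) are straightforward to assemble. The following NumPy sketch (the function and variable names are ours, not from the paper) builds the relative motions (Ai0, Bi0) from absolute poses, taking state 0 as the reference:

```python
import numpy as np

def relative_motions(A_abs, B_abs):
    """Build the motion pairs (A_i0, B_i0) of Equations (3) and (4).

    A_abs[i]: 4x4 gripper pose at state i (G_i -> RW, Equation (1)).
    B_abs[i]: 4x4 camera pose at state i (CW -> C_i, Equation (2)).
    State 0 serves as the reference state.
    """
    A0, B0 = A_abs[0], B_abs[0]
    pairs = []
    for Ai, Bi in zip(A_abs[1:], B_abs[1:]):
        Ai0 = np.linalg.inv(A0) @ Ai   # A_i0 = A_0^{-1} A_i, Equation (3)
        Bi0 = B0 @ np.linalg.inv(Bi)   # B_i0 = B_0 B_i^{-1}, Equation (4)
        pairs.append((Ai0, Bi0))
    return pairs
```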

3. Hand–Eye Calibration Method

3.1. Schur Matrix Decomposition

A given matrix can be reduced to a normalized form via a similarity transformation. For numerical stability, similarity transformations by unitary matrices are the most attractive. Schur matrix decomposition can be stated simply: if A ∈ C^{n×n}, then there exists a unitary matrix U satisfying U^H A U = T = D + N, where D is a diagonal matrix and N is a strictly upper triangular matrix, i.e., $n_{ij} = 0$ for all $i \ge j$. For a real matrix A, U reduces to an orthogonal matrix, U^T A U = T, and T has the following quasi-triangular form:
$$T = \begin{bmatrix} T_{11} & T_{12} & \cdots & T_{1m} \\ 0 & T_{22} & \cdots & T_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & T_{mm} \end{bmatrix} \tag{8}$$
Each $T_{ii}$ is a 1 × 1 block or a 2 × 2 block corresponding to a pair of complex conjugate eigenvalues. Since $R_{A_{ij}}$ is similar to $R_{B_{ij}}$, their eigenvalues are the same, and so are the matrices T associated with $R_{A_{ij}}$ and $R_{B_{ij}}$.
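As a quick illustration (a sketch of ours, not from the paper), the real Schur form of a rotation matrix can be computed with SciPy. Note that SciPy does not guarantee the ordering of the diagonal blocks, so the 1 × 1 block holding the eigenvalue 1 may appear in either position:

```python
import numpy as np
from scipy.linalg import schur
from scipy.spatial.transform import Rotation

# A 3x3 rotation matrix has eigenvalues {1, e^{i*theta}, e^{-i*theta}},
# so its real Schur form T contains a 1x1 block equal to 1 and a 2x2
# block [[cos(theta), -sin(theta)], [sin(theta), cos(theta)]].
R = Rotation.from_rotvec([0.1, -0.4, 0.7]).as_matrix()
T, U = schur(R, output='real')      # R = U T U^T with U orthogonal

print(np.allclose(U @ T @ U.T, R))  # True: orthogonal similarity
print(np.round(T, 4))               # quasi-triangular form of Equation (8)
```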

3.2. Hand–Eye Calibration Principle

Let A0 and B0 denote the initial states of the robot gripper and camera, and let $(A_{i0}, B_{i0})$ $(i = 1, 2, \ldots, N)$ be the series of homogeneous transformation matrices relative to these initial states. Without loss of generality, take i = 1 and consider only the rotation equation in Equation (7):
$$R_{A_{10}} R_X = R_X R_{B_{10}} \tag{9}$$
From Theorem 1, proved in Appendix A, the general solution can be written as:
$$R_X = U_{R_{A_{10}}} Y\, U_{R_{B_{10}}}^T \tag{10}$$
where
$$Y = \begin{bmatrix} \pm 1 & 0 & 0 \\ 0 & c & -d \\ 0 & d & c \end{bmatrix}, \quad c^2 + d^2 = 1 \tag{11}$$
RX thus depends only on c and d. For an arbitrary i = 1, 2, …, N:
$$R_{A_{i0}} R_X = R_X R_{B_{i0}} \tag{12}$$
Substitute Equation (10) into Equation (12):
$$P_i Y = Y Q_i \tag{13}$$
where:
$$P_i = U_{R_{A_{10}}}^T R_{A_{i0}} U_{R_{A_{10}}}, \quad Q_i = U_{R_{B_{10}}}^T R_{B_{i0}} U_{R_{B_{10}}} \tag{14}$$
Rearranging Equation (13) gives equations in $s = (c\ d)^T$ only:
$$C_i s = D_i \tag{15}$$
where Ci is the matrix formed by the coefficients of c and d, and Di is the vector formed by the constant terms. Stacking all movements then yields the final linear equation system:
$$C s = D \tag{16}$$
where,
$$C = \begin{bmatrix} C_1 \\ C_2 \\ \vdots \\ C_N \end{bmatrix}, \quad D = \begin{bmatrix} D_1 \\ D_2 \\ \vdots \\ D_N \end{bmatrix} \tag{17}$$
This is a constrained least squares problem:
$$s = \arg\min_{s^T s = 1} \left\{ s^T K s - 2 F^T s \right\} \tag{18}$$
where,
$$K = C^T C, \quad F = C^T D, \quad s = \begin{bmatrix} c & d \end{bmatrix}^T \tag{19}$$
Write the cost function as:
$$J(s, \lambda) = s^T K s - 2 F^T s + \lambda (1 - s^T s) \tag{20}$$
Setting $\partial J(s, \lambda)/\partial s = 0$ and $\partial J(s, \lambda)/\partial \lambda = 0$ gives:
$$(K - \lambda I)\, s = F, \quad s^T s = 1 \tag{21}$$
Let $s = (K - \lambda I) y$ and substitute it into the previous equations:
$$(K - \lambda I)^2 y = F \tag{22}$$
K is a symmetric matrix, so
$$s^T s = y^T (K - \lambda I)^T (K - \lambda I)\, y = y^T F \tag{23}$$
Hence $s^T s = 1$ is equivalent to $y^T F = 1$, and therefore:
$$F = F\, y^T F = F F^T y \tag{24}$$
Substituting Equation (24) into Equation (22) and using $y^T F = F^T y$:
$$\left[ \lambda^2 I - 2 \lambda K + \left( K^2 - F F^T \right) \right] y = 0 \tag{25}$$
This is a symmetric quadratic eigenvalue problem [20].
The least squares solution of the Lagrange multiplier can be obtained through previously published methods [21,22]. The least squares solution of s is then:
$$s = (K - \lambda_{\min} I)^{-1} F \tag{26}$$
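Equations (18)–(26) condense into a short routine. The sketch below (our own naming; the paper solves the multiplier via the methods of [21,22]) linearizes the quadratic eigenvalue problem of Equation (25) with the standard companion form and takes the minimal real eigenvalue as λmin:

```python
import numpy as np

def solve_constrained_ls(C, D):
    """Minimize ||C s - D||^2 subject to s^T s = 1 (Equations (18)-(19)).

    A sketch: the Lagrange multiplier is an eigenvalue of the quadratic
    problem of Equation (25), solved here by companion linearization.
    """
    K = C.T @ C                            # Equation (19)
    F = np.ravel(C.T @ D)
    I = np.eye(2)
    # lambda^2 I - 2 lambda K + (K^2 - F F^T) = 0 linearizes to
    # [[0, I], [-(K^2 - F F^T), 2 K]] z = lambda z with z = [y; lambda y].
    L = np.block([[np.zeros((2, 2)), I],
                  [-(K @ K - np.outer(F, F)), 2.0 * K]])
    eigs = np.linalg.eigvals(L)
    lam = np.min(eigs[np.abs(eigs.imag) < 1e-9].real)  # minimal real root
    s = np.linalg.solve(K - lam * I, F)                # Equation (26)
    return s / np.linalg.norm(s)           # re-project onto s^T s = 1
```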
For the Schur factor pair $(U_{R_{A_{10}}}, U_{R_{B_{10}}})$, the least squares solution of RX is:
$$R_X^1 = U_{R_{A_{10}}} Y(s)\, U_{R_{B_{10}}}^T \tag{27}$$
An $R_X^i$ is obtained for each i = 1, 2, …, N. To weaken the effect of noise, these matrices are fused based on the chordal distance between matrices. First, calculate the singular value decomposition of the sum of the $R_X^i$:
$$U_R D_R V_R^T = R_X^1 + R_X^2 + \cdots + R_X^N \tag{28}$$
Then:
$$R_X = U_R V_R^T \tag{29}$$
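Equations (28) and (29) amount to projecting the sum of the per-motion estimates back onto the rotation group: the result is the orthogonal matrix closest to the sum in Frobenius norm. A minimal sketch (the determinant guard for the improper case is our own addition):

```python
import numpy as np

def fuse_rotations(R_list):
    """Equations (28)-(29): fuse per-motion estimates R_X^i via SVD."""
    S = np.sum(R_list, axis=0)
    U, _, Vt = np.linalg.svd(S)
    R = U @ Vt
    if np.linalg.det(R) < 0:              # guard the proper-rotation branch
        R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
    return R
```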
To solve for tX, note that for the ith movement the translation satisfies:
$$(R_{A_{i0}} - I)\, t_X = R_X t_{B_{i0}} - t_{A_{i0}}, \quad i = 1, 2, \ldots, N \tag{30}$$
Substitute Equation (29) into Equation (30):
$$H_i t_X = W_i \tag{31}$$
Then, stacking all movements, a large linear equation system can be obtained:
$$H t_X = W \tag{32}$$
where
$$H = \begin{bmatrix} H_1^T & H_2^T & \cdots & H_N^T \end{bmatrix}^T, \quad W = \begin{bmatrix} W_1^T & W_2^T & \cdots & W_N^T \end{bmatrix}^T \tag{33}$$
This problem can be solved using the least squares method [20].
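Solving Equation (32) might look as follows (a sketch with our own naming, assuming the motion pairs are 4 × 4 homogeneous matrices):

```python
import numpy as np

def solve_translation(motions, R_X):
    """Stack Equation (30) over all motions and solve Equation (32)."""
    H_blocks, W_blocks = [], []
    for Ai0, Bi0 in motions:
        R_A, t_A = Ai0[:3, :3], Ai0[:3, 3]
        t_B = Bi0[:3, 3]
        H_blocks.append(R_A - np.eye(3))       # H_i of Equation (31)
        W_blocks.append(R_X @ t_B - t_A)       # W_i of Equation (31)
    H = np.vstack(H_blocks)
    W = np.concatenate(W_blocks)
    t_X, *_ = np.linalg.lstsq(H, W, rcond=None)
    return t_X
```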

3.3. Outlier Detection

In practice, the matrices $A_i$ and $B_i$ contain observation errors; the observed matrices are denoted $\hat{A}_i$ and $\hat{B}_i$, respectively. $B_i$ is more sensitive to image noise. A poor environment may lead to large observation errors, in which case the globally optimized solution is meaningless. This is a basic problem that considerably decreases the robustness of hand–eye calibration and has not been well solved.
The form of Y is:
$$Y = \begin{bmatrix} \pm 1 & 0 & 0 \\ 0 & c & -d \\ 0 & d & c \end{bmatrix} \tag{34}$$
$R_{A_{i0}}$ and $R_{B_{i0}}$ must satisfy Equation (13), with
$$P_i = \begin{bmatrix} P_i^{11} & P_i^{12} & P_i^{13} \\ P_i^{21} & P_i^{22} & P_i^{23} \\ P_i^{31} & P_i^{32} & P_i^{33} \end{bmatrix}, \quad Q_i = \begin{bmatrix} Q_i^{11} & Q_i^{12} & Q_i^{13} \\ Q_i^{21} & Q_i^{22} & Q_i^{23} \\ Q_i^{31} & Q_i^{32} & Q_i^{33} \end{bmatrix} \tag{35}$$
For arbitrary c and d, Equation (36) must be satisfied:
$$\left| P_i^{11} - Q_i^{11} \right| \le \varepsilon \tag{36}$$
This can be used to discriminate the quality of the observation data: if the left-hand side exceeds a specified threshold ε, the corresponding observation pair is an outlier and should be deleted. The threshold ε is an empirical value; by setting it, the observation data can be filtered, and the lower the threshold, the higher the quality of the retained observation data. In the simulations and experiments, ε was set to 0.01. In summary, the flowchart of the proposed method is shown in Figure 2, and a sketch of the discriminant check follows.
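In code, the check of Equation (36) is a one-line comparison once Pi and Qi are formed per Equation (14). A sketch (our own naming; it assumes the Schur factors of the reference motions order their blocks consistently):

```python
import numpy as np

def is_outlier(Ai0, Bi0, U_A, U_B, eps=0.01):
    """Discriminant of Equation (36).

    U_A, U_B: Schur factors of the reference rotations R_A10 and R_B10.
    For noise-free data the (1, 1) entries of P_i and Q_i coincide, so a
    gap above eps flags the observation pair as an outlier.
    """
    P = U_A.T @ Ai0[:3, :3] @ U_A      # P_i of Equation (14)
    Q = U_B.T @ Bi0[:3, :3] @ U_B      # Q_i of Equation (14)
    return abs(P[0, 0] - Q[0, 0]) > eps
```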

3.4. Unique Solution Conditions

Assume the rotation matrices of two movements are A1, A2, B1, and B2, and that X is the unknown rotation. From Theorem 1 (Appendix A), the general solution of A1X = XB1 is:
$$X = U_{A_1} Y U_{B_1}^T \tag{37}$$
where Y is a matrix depending only on c and d. Substitute Equation (37) into the equations built from the two movements:
$$A_1 X = X B_1, \quad A_2 X = X B_2 \tag{38}$$
Substitute them into Equation (13) to obtain:
$$\begin{cases} P_1 Y = Y Q_1 \\ P_2 Y = Y Q_2 \end{cases} \tag{39}$$
and, since A1 is similar to B1,
$$P_1 = Q_1 \tag{40}$$
For the first equation of (39):
$$\operatorname{rank}\left( C_1^T C_1 \right) = \operatorname{rank}\left( \begin{bmatrix} C_1^T C_1 & C_1^T D_1 \end{bmatrix} \right) = 0 \tag{41}$$
That is, Equation (41) shows the first equation of (39) holds identically and provides no constraint on c and d.
If the rotation axes of the two movements are not parallel, P2 and Q2 are independent, and:
$$\operatorname{rank}\left( C_2^T C_2 \right) = \operatorname{rank}\left( \begin{bmatrix} C_2^T C_2 & C_2^T D_2 \end{bmatrix} \right) = 2 \tag{42}$$
From Theorem 2, proved in Appendix A, if the rotation axes of all N movements of the robot gripper are parallel, the hand–eye calibration has multiple solutions. Therefore, the minimal configuration for a unique solution is that the robot gripper and camera move at least twice with non-parallel rotation axes. A simple screening check for this condition is sketched below.
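A practical consequence is that calibration data should be screened for (near-)parallel rotation axes. A possible check, with an angular threshold of our own choosing:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def axes_well_conditioned(motions, min_angle_deg=5.0):
    """Check the minimal configuration implied by Theorem 2: at least one
    pair of gripper motions must have non-parallel rotation axes."""
    axes = []
    for Ai0, _ in motions:
        rotvec = Rotation.from_matrix(Ai0[:3, :3]).as_rotvec()
        norm = np.linalg.norm(rotvec)
        if norm > 1e-8:                    # skip near-zero rotations
            axes.append(rotvec / norm)
    for i in range(len(axes)):
        for j in range(i + 1, len(axes)):
            cos = np.clip(abs(axes[i] @ axes[j]), 0.0, 1.0)
            if np.degrees(np.arccos(cos)) > min_angle_deg:
                return True                # a non-parallel pair exists
    return False
```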

4. Results

4.1. Simulations

We designed simulations to test the performance of different hand–eye calibration methods. The hand–eye calibration equation can be written as:
$$A_{ij} X = X B_{ij} \tag{43}$$
where Aij and Bij are the movements of the robot gripper and camera from time i to time j, respectively. Ai and Bi were simulated as the observation data, and X was simulated as the transformation from the camera to the robot gripper. Ai, Bi, and X consist of rotation matrices and translation vectors; each rotation matrix can be generated from three Euler angles.
The simulations included three parts: analysis of noise sensitivity, the relationship between the number of movements and accuracy, and outlier detection ability. All simulations were performed using MATLAB. In addition to the proposed method, we selected another five popular methods for comparison [7,11,12,13,23]. For the ith simulation, $\tilde{R}_X^i$ and $\tilde{t}_X^i$ are the ideal transformation from the camera to the robot gripper, and $\hat{R}_X^i$ and $\hat{t}_X^i$ are the measured transformations. The error matrix can be calculated as:
$$R_{error}^i = \left( \hat{R}_X^i \right)^T \tilde{R}_X^i, \quad t_{error}^i = \hat{t}_X^i - \tilde{t}_X^i \tag{44}$$
The rotation axis $k_{error}^i$ and rotation angle $\theta_{error}^i$ of $R_{error}^i$ are obtained through the Rodrigues formula:
$$\left( k_{error}^i, \theta_{error}^i \right) = \operatorname{rodrigues}\left( R_{error}^i \right) \tag{45}$$
The errors of rotation and translation are defined as:
$$\theta_{error} = \operatorname{RMS}\left( \theta_{error}^1, \theta_{error}^2, \ldots, \theta_{error}^n \right), \quad t_{error} = \operatorname{RMS}\left( t_{error}^1, t_{error}^2, \ldots, t_{error}^n \right) \tag{46}$$
where n is the number of simulations.
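The error metrics of Equations (44)–(46) translate directly into code. A sketch for a single trial (SciPy's rotation vector plays the role of the Rodrigues formula; the unit conversion is ours):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def trial_errors(R_est, t_est, R_true, t_true):
    """Equations (44)-(45) for one simulation trial."""
    R_error = R_est.T @ R_true                          # Equation (44)
    rotvec = Rotation.from_matrix(R_error).as_rotvec()  # Equation (45)
    theta_err = np.degrees(np.linalg.norm(rotvec)) * 60.0  # in arcmin
    t_err = np.linalg.norm(t_est - t_true)              # same unit as t
    return theta_err, t_err

def rms(values):
    """RMS over all trials, Equation (46)."""
    return float(np.sqrt(np.mean(np.square(values))))
```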

4.1.1. Analysis of Noise Sensitivity

Gaussian rotation noise (μR = 0, σR = 0°–5°) and translation noise (μT = 0, σT = 0–5 mm) were added to Ai and Bi (i = 1, 2, …, 10). We ran 100 simulations at each noise level. The results are shown in Figure 3, in which ‘Rot.’ stands for ‘Rotation’ and ‘Trans.’ for ‘Translation’. Except for the dual quaternion method, translation perturbation had no effect on the rotation solution, because only the dual quaternion method solves rotation and translation simultaneously, whereas the other methods solve them in separate steps.

4.1.2. Relationship between Number of Movements and Accuracy

The simulation conditions were σR = 0.2° and σT = 2 mm, with the number of movements varying from 3 to 15. We ran 100 simulations at each number of movements. Figure 4a,b indicates that the accuracy of hand–eye calibration improves as the number of movements increases; the improvement is considerable as the number of movements grows from three to eight. Figure 4c,d demonstrates that the methods are robust, except for the dual quaternion method, which is unstable.

4.1.3. Outlier Detection

The simulation conditions were σR = 0.2°, σT = 2 mm, and ε = 0.01. The robot gripper moved 10 times, and large noise was randomly added to n (n = 1, 2, …, 6) of the movements; these observations were regarded as outliers. We ran 100 simulations at each number of outliers. Figure 5a,b shows the relationship between the calibration errors of RX and tX and the number of outliers, respectively. Figure 5c,d depicts the performance of the proposed method. The results indicate that the proposed method can detect outliers effectively and performs much better than the other five methods.

4.2. Experiments

Determining the pose of the robot gripper with high precision is costly, but movements of the robot gripper can be measured precisely. Thus, most researchers adopt the following program to validate hand–eye calibration methods: the camera moves N + n times, where the first N movements form the calibration link and the last n form the verification link. The calibration link is used to solve the transformation between the robot gripper and the camera. The verification link is used to verify the method's accuracy by comparing predicted movements with true movements [3]. The predicted movements of the robot gripper can be solved from the camera's movements using Equation (47), and the true movements can be obtained from the controlling device. A robot arm was fitted with a camera, as shown in Figure 6a.
For the calibration link:
(1) Fix 9 feature points on the platform, as shown in Figure 6b. The three-dimensional (3D) coordinates of the feature points were measured with a Leica total station; all feature point coordinates remained unchanged during the experiment.
(2) At time 0, capture an image of the feature points on the platform. Calculate the camera's pose B0 using Perspective-n-Point (PnP) methods. The robot gripper's pose A0 can be obtained from its controlling device.
(3) At time i, move the robot gripper and camera.
(4) Capture an image of the feature points on the platform. Calculate the camera’s pose Bi through PnP methods. The robot gripper’s pose Ai can be determined from its controlling device.
(5) Repeat steps (3)–(4) N times so that (Ai0, Bi0) (i = 1, …, N) can be obtained.
(6) The transformation X from the camera to the robot gripper can be calibrated using all six hand–eye calibration methods.
For the verification link:
(7) Repeat steps (3)–(4) n more times so that (Ai0, Bi0) (i = N+1, …, N+n) can be obtained.
(8) The predicted movement $\hat{A}_{i0}$ of the robot gripper can be calculated through Equation (47); the true movement Ai0 can be obtained from the controlling device.
$$\hat{A}_{i0} = X B_{i0} X^{-1} \tag{47}$$
(9) Comparing $\hat{A}_{i0}$ with Ai0, the error matrix can be calculated using Equation (48):
$$R_{error}^i = \left( \hat{R}_{A_{i0}} \right)^T R_{A_{i0}}, \quad t_{error}^i = \hat{t}_{A_{i0}} - t_{A_{i0}} \tag{48}$$
$k_{error}^i$ and $\theta_{error}^i$ are the corresponding rotation axis and rotation angle of $R_{error}^i$, respectively:
$$\left( k_{error}^i, \theta_{error}^i \right) = \operatorname{rodrigues}\left( R_{error}^i \right) \tag{49}$$
The rotation and translation errors are defined as:
$$\theta_{error} = \operatorname{RMS}\left( \theta_{error}^1, \theta_{error}^2, \ldots, \theta_{error}^n \right), \quad t_{error} = \operatorname{RMS}\left( t_{error}^1, t_{error}^2, \ldots, t_{error}^n \right) \tag{50}$$
The rotation error is in arcmin and the translation error is in mm.
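The verification step of Equation (47) is a similarity transform of the camera motion by the calibrated X. A minimal sketch:

```python
import numpy as np

def predict_gripper_motion(X, Bi0):
    """Equation (47): predicted gripper motion from the camera motion.

    X: calibrated 4x4 hand-eye transformation; Bi0: observed 4x4 camera
    motion. The result is compared against the true gripper motion A_i0
    reported by the controlling device (Equation (48)).
    """
    return X @ Bi0 @ np.linalg.inv(X)
```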
In the experiment, N = 2–9 and n = 200. The results are shown in Table 1. They indicate that the prediction error decreased as the number of movements increased; when the robot gripper moved 9 times, the proposed method's rotation prediction accuracy was better than 6 arcsec, which is much higher than the calibration accuracy in the simulations. The reason is explained below.
Expand Equation (47) using a partitioned matrix:
$$\hat{R}_{A_{i0}} = R_X R_{B_{i0}} R_X^{-1} \tag{51}$$
The prediction error consists of the hand–eye calibration error and the camera pose estimation error. Denoting the hand–eye calibration error as $\Delta R_X$, the prediction error of Equation (51) can be written as:
$$e = \left\| R_X \Delta R_X R_{B_{i0}} \Delta R_X^{-1} R_X^{-1} - R_X R_{B_{i0}} R_X^{-1} \right\|_F \le \lambda \left\| \Delta R_X R_{B_{i0}} \Delta R_X^{-1} - R_{B_{i0}} \right\|_F \tag{52}$$
Equation (52) shows that the prediction weakens the effect of the hand–eye calibration error. This conclusion also applies to the prediction error of translation. Thus, the prediction error in the experiment was much lower than the hand–eye calibration error in the simulations.

5. Conclusions

A hand–eye calibration method with high accuracy and robustness was proposed in this paper. Using this method, the basic problem of observation data preprocessing is solved by outlier detection, which significantly improves robustness. However, two aspects remain to be studied. To improve efficiency, we used the constrained least squares optimization method; if there is no strict need for efficiency, an iterative method could be considered. We decreased the rotation matrix's degrees of freedom from three to two via Schur matrix decomposition, with the unknown parameters satisfying the constraint c² + d² = 1. If the following tangent half-angle transformation is adopted, the degrees of freedom (DOFs) can be decreased from two to one, and the Gröbner basis method can be used to solve the resulting polynomial equations [24]:
$$c = \frac{2 \tan\frac{\theta}{2}}{1 + \tan^2\frac{\theta}{2}}, \quad d = \frac{1 - \tan^2\frac{\theta}{2}}{1 + \tan^2\frac{\theta}{2}} \tag{53}$$
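As a quick check (our own algebra, not from the paper), writing $t = \tan\frac{\theta}{2}$ shows that the substitution satisfies the constraint identically, so a single unknown θ indeed remains:
$$c^2 + d^2 = \frac{4t^2}{(1+t^2)^2} + \frac{(1-t^2)^2}{(1+t^2)^2} = \frac{(1+t^2)^2}{(1+t^2)^2} = 1$$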

Author Contributions

Conceptualization, J.L.; Data curation, X.L.; Formal analysis, J.L. and J.W.; Funding acquisition, J.L.; Investigation, J.W. and X.L.; Methodology, J.L.; Supervision, X.L.; Validation, J.W.; Writing – original draft, J.L.; Writing – review and editing, J.W. and X.L.

Funding

This work was funded by the National Natural Science Foundation of China, Grant No. 11802321.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Lemma 1.
A is a 3 × 3 rotation matrix and can be decomposed, based on Schur matrix decomposition, as:
$$A = U_A T_A U_A^T$$
Then, TA can be written as:
$$T_A = \begin{bmatrix} 1 & 0_{1\times 2} \\ 0_{2\times 1} & T_{2\times 2} \end{bmatrix}$$
where $T_{2\times 2}$ is an orthogonal matrix of the form:
$$T_{2\times 2} = \begin{bmatrix} a & b \\ -b & a \end{bmatrix}$$
Proof of Lemma 1.
Because A is an orthogonal matrix:
$$A A^T = U_A T_A T_A^T U_A^T = I \;\Rightarrow\; T_A T_A^T = I$$
Since 1 is an eigenvalue of a rotation matrix, TA can be written as:
$$T_A = \begin{bmatrix} 1 & T_{1\times 2} \\ 0 & T_{2\times 2} \end{bmatrix}$$
Substitute TA into $T_A T_A^T = I$:
$$\begin{cases} 1 + T_{1\times 2} T_{1\times 2}^T = 1 \;\Rightarrow\; T_{1\times 2} = \begin{bmatrix} 0 & 0 \end{bmatrix} \\ T_{2\times 2} T_{2\times 2}^T = I \end{cases}$$
Thus, $T_{2\times 2}$ is an orthogonal matrix. Denote:
$$T_{2\times 2} = \begin{bmatrix} a & c \\ b & d \end{bmatrix}$$
Then, using orthonormality and $\det T_{2\times 2} = 1$:
$$\begin{cases} a^2 + c^2 = 1 \\ b^2 + d^2 = 1 \\ ab + cd = 0 \\ ad - bc = 1 \end{cases} \;\Rightarrow\; (a - d)^2 + (b + c)^2 = 0$$
Thus,
$$a = d, \quad b = -c$$
The lemma has been proven. □
Lemma 2.
For rotation matrices A, B, C, D, and X satisfying:
$$\begin{cases} A X = X B \\ C X = X D \end{cases}$$
let the Schur matrix decompositions of A and B be:
$$\begin{cases} A = U_A T_A U_A^T \\ B = U_B T_B U_B^T \end{cases}$$
If the axes of A and B are parallel to the axes of C and D, respectively, then:
$$U_A^T C U_A = U_B^T D U_B = M$$
where M has the form:
$$M = \begin{bmatrix} 1 & 0_{1\times 2} \\ 0_{2\times 1} & M_{2\times 2} \end{bmatrix}$$
Proof of Lemma 2.
UA and UB can be obtained by Schur matrix decomposition. Let $k_a$ and $k_b$ be the axes of A and B calculated through the Rodrigues formula, and let $\theta_a$ and $\theta_b$ be the rotation angles. Then $k_a = X k_b$, $\theta_a = \theta_b$, $k_c = X k_d$, and $\theta_c = \theta_d$. The rotation matrices can therefore be written as:
$$\begin{cases} A = (1 - \cos\theta_a) E_A^1 + \cos\theta_a E_A^2 + \sin\theta_a E_A^3 \\ B = (1 - \cos\theta_a) E_B^1 + \cos\theta_a E_B^2 + \sin\theta_a E_B^3 \\ C = (1 - \cos\theta_c) E_A^1 + \cos\theta_c E_A^2 + \sin\theta_c E_A^3 \\ D = (1 - \cos\theta_c) E_B^1 + \cos\theta_c E_B^2 + \sin\theta_c E_B^3 \end{cases}$$
$E_A^i$ and $E_B^i$ (i = 1, 2, 3) are linearly independent matrices generated from the rotation axes. An orthogonal transformation has no effect on this independence:
$$\begin{cases} U_A^T A U_A = (1 - \cos\theta_a) U_A^T E_A^1 U_A + \cos\theta_a U_A^T E_A^2 U_A + \sin\theta_a U_A^T E_A^3 U_A \\ U_B^T B U_B = (1 - \cos\theta_a) U_B^T E_B^1 U_B + \cos\theta_a U_B^T E_B^2 U_B + \sin\theta_a U_B^T E_B^3 U_B \\ U_A^T C U_A = (1 - \cos\theta_c) U_A^T E_A^1 U_A + \cos\theta_c U_A^T E_A^2 U_A + \sin\theta_c U_A^T E_A^3 U_A \\ U_B^T D U_B = (1 - \cos\theta_c) U_B^T E_B^1 U_B + \cos\theta_c U_B^T E_B^2 U_B + \sin\theta_c U_B^T E_B^3 U_B \end{cases}$$
Because $U_A^T A U_A = U_B^T B U_B = T$,
$$U_A^T E_A^i U_A = U_B^T E_B^i U_B \quad (i = 1, 2, 3)$$
The rotation angles of C and D are equal, so:
$$U_A^T C U_A = U_B^T D U_B$$
Since
$$U_A^T C U_A\, T = U_A^T C A U_A, \quad T\, U_A^T C U_A = U_A^T A C U_A$$
and the axes of A and C are parallel,
$$A C = C A$$
Thus,
$$U_A^T C U_A\, T = T\, U_A^T C U_A \;\Rightarrow\; M T = T M$$
From Lemma 1, M can be written as:
$$M = \begin{bmatrix} 1 & 0_{1\times 2} \\ 0_{2\times 1} & M_{2\times 2} \end{bmatrix}$$
where $M_{2\times 2}$ is an orthogonal matrix. Lemma 2 has been proven. □
Theorem 1.
A, X, and B are 3 × 3 rotation matrices with AX = XB. The Schur decompositions of A and B can be written as:
$$\begin{cases} A = U_A T_A U_A^T \\ B = U_B T_B U_B^T \end{cases}$$
Notate $Y = U_A^T X U_B$. Then,
$$Y = \begin{bmatrix} \pm 1 & 0 & 0 \\ 0 & c & -d \\ 0 & d & c \end{bmatrix}, \quad c^2 + d^2 = 1$$
Proof of Theorem 1.
Since A is similar to B, $T_A = T_B = T$. Substitute this into AX = XB; then:
$$U_A T U_A^T X = X U_B T U_B^T \;\Rightarrow\; T Y = Y T$$
From Lemma 1, T has the form:
$$T = \begin{bmatrix} 1 & 0_{1\times 2} \\ 0_{2\times 1} & T_{2\times 2} \end{bmatrix}$$
Assume $Y = \begin{bmatrix} Y_{1\times 1} & Y_{1\times 2} \\ Y_{2\times 1} & Y_{2\times 2} \end{bmatrix}$; then:
$$\begin{cases} Y_{1\times 1} = Y_{1\times 1} \\ Y_{1\times 2} \left( T_{2\times 2} - I \right) = 0 \\ \left( T_{2\times 2} - I \right) Y_{2\times 1} = 0 \\ T_{2\times 2} Y_{2\times 2} = Y_{2\times 2} T_{2\times 2} \end{cases}$$
Due to the arbitrariness of the rotation angle ($T_{2\times 2} - I$ is nonsingular in general),
$$Y_{1\times 2} = \begin{bmatrix} 0 & 0 \end{bmatrix}, \quad Y_{2\times 1} = \begin{bmatrix} 0 & 0 \end{bmatrix}^T$$
Thus,
$$Y = \begin{bmatrix} Y_{1\times 1} & 0_{1\times 2} \\ 0_{2\times 1} & Y_{2\times 2} \end{bmatrix}$$
Assume
$$T_{2\times 2} = \begin{bmatrix} a & b \\ -b & a \end{bmatrix}, \quad Y_{2\times 2} = \begin{bmatrix} c & e \\ d & f \end{bmatrix}$$
then, expanding $T_{2\times 2} Y_{2\times 2} = Y_{2\times 2} T_{2\times 2}$:
$$\begin{cases} ac + bd = ca - eb \;\Rightarrow\; e = -d \\ ae + bf = cb + ea \;\Rightarrow\; f = c \\ -bc + ad = da - fb \;\Rightarrow\; f = c \\ -be + af = db + fa \;\Rightarrow\; e = -d \end{cases}$$
Y is an orthogonal matrix, so:
$$c^2 + d^2 = 1, \quad Y_{1\times 1} = \pm 1$$
The sign of $Y_{1\times 1}$ is related to the determinants of UA and UB. Because the determinant of X is greater than 0, the sign of $Y_{1\times 1}$ is the same as the sign of the determinant of $U_A U_B$. Theorem 1 has been proven. □
Theorem 2.
If the rotation axes of the N movements of the robot gripper are all parallel, there will be multiple solutions to the hand–eye calibration.
Proof of Theorem 2.
Assume the rotation matrices of two movements are A1, A2, B1, and B2, and that X is an unknown rotation matrix. From Theorem 1, the general solution of the equation A1X = XB1 is:
$$X = U_{A_1} Y U_{B_1}^T$$
where Y is a matrix depending only on c and d:
$$Y = \begin{bmatrix} \pm 1 & 0 & 0 \\ 0 & c & -d \\ 0 & d & c \end{bmatrix}$$
Substitute the general solution into the second movement:
$$A_2 U_{A_1} Y U_{B_1}^T = U_{A_1} Y U_{B_1}^T B_2 \;\Rightarrow\; U_{A_1}^T A_2 U_{A_1} Y = Y U_{B_1}^T B_2 U_{B_1}$$
From Lemma 2:
$$U_{A_1}^T A_2 U_{A_1} = U_{B_1}^T B_2 U_{B_1} = M = \begin{bmatrix} 1 & 0_{1\times 2} \\ 0_{2\times 1} & M_{2\times 2} \end{bmatrix}$$
Thus,
$$M Y \equiv Y M$$
This is an identity, indicating that the second movement cannot provide any extra constraint on c and d.
In the same way, N − 1 further movements with the same rotation axis cannot provide any extra constraint on c and d; the general solution satisfies all equations built from the N movements. Therefore, a hand–eye calibration problem in which all rotation axes are parallel has multiple solutions. Theorem 2 has been proven. □

References

  1. Knight, J.; Reid, I. Automated alignment of robotic pan-tilt camera units using vision. Int. J. Comput. Vis. 2006, 68, 219–237.
  2. Eschelbach, M.; Aghaeifar, A.; Bause, J.; Handwerker, J.; Anders, J.; Engel, E.M.; Thielscher, A.; Scheffler, K. Comparison of prospective head motion correction with NMR field probes and an optical tracking system. Magn. Reson. Med. 2018, 81, 719–729.
  3. Song, Y.; Zhang, J.; Lian, B.; Sun, T. Kinematic calibration of a 5-DOF parallel kinematic machine. Precis. Eng. 2016, 45, 242–261.
  4. Pan, H.; Wang, N.L.; Qin, Y.S. A closed-form solution to eye-to-hand calibration towards visual grasping. Ind. Robot 2014, 41, 567–574.
  5. Ali, I.; Suominen, O.; Gotchev, A.; Morales, E.R. Methods for simultaneous robot-world-hand-eye calibration: A comparative study. Sensors 2019, 19, 2837.
  6. Shiu, Y.C.; Ahmad, S. Calibration of wrist-mounted robotic sensors by solving homogeneous transform equations of the form AX = XB. IEEE Trans. Robot. Autom. 1989, 5, 16–29.
  7. Tsai, R.Y.; Lenz, R.K. A new technique for fully autonomous and efficient 3D robotics hand-eye calibration. IEEE Trans. Robot. Autom. 1989, 5, 345–358.
  8. Wang, Z.; Liu, Z.; Ma, Q.; Cheng, A.; Liu, Y.H.; Kim, S.; Deguet, A.; Reiter, A.; Kazanzides, P.; Taylor, R.H. Vision-based calibration of dual RCM-based robot arms in human-robot collaborative minimally invasive surgery. IEEE Robot. Autom. Lett. 2017, 3, 672–679.
  9. Zhang, Z.Q.; Zhang, L.; Yang, G.Z. A computationally efficient method for hand-eye calibration. Int. J. Comput. Assist. Radiol. Surg. 2017, 12.
  10. Li, H.; Ma, Q.; Wang, T.; Chirikjian, G.S. Simultaneous hand-eye and robot-world calibration by solving the AX = XB problem without correspondence. IEEE Robot. Autom. Lett. 2015, 8, 145–152.
  11. Daniilidis, K. Hand-eye calibration using dual quaternions. Int. J. Robot. Res. 1999, 18, 286–298.
  12. Park, F.C.; Martin, B.J. Robot sensor calibration: Solving AX = XB on the Euclidean group. IEEE Trans. Robot. Autom. Lett. 1994, 10, 717–721.
  13. Shah, M. Solving the robot-world/hand-eye calibration problem using the Kronecker product. J. Mech. Robot. 2013, 5, 031007.
  14. Pachtrachai, K.; Vasconcelos, F.; Chadebecq, F.; Allan, M.; Hailes, S.; Pawar, V.; Stoyanov, D. Adjoint transformation method for hand-eye calibration with applications in robotics assisted surgery. Ann. Biomed. Eng. 2018, 46, 1606–1620.
  15. Fassi, I.; Legnani, G. Hand to sensor calibration: A geometrical interpretation of the matrix equation AX = XB. J. Robot. Syst. 2005, 22, 497–506.
  16. Li, W.; Dong, M.L.; Lu, N.G. Simultaneous robot-world and hand-eye calibration without a calibration object. Sensors 2018, 18, 3949.
  17. Cao, C.T.; Do, V.P.; Lee, B.Y. A novel indirect calibration approach for robot positioning error compensation based on neural network and hand-eye vision. Appl. Sci. 2019, 9, 1940.
  18. Ruland, T.; Pajdla, T.; Kruger, L. Robust hand-eye self-calibration. In Proceedings of the IEEE Conference on Intelligent Transportation Systems, Washington, DC, USA, 5–7 October 2011; pp. 87–94.
  19. Schmidt, J.; Niemann, H. Data-selection for hand–eye calibration: A vector quantization approach. Int. J. Robot. Res. 2008, 27, 1027–1053.
  20. Shu, T.; Zhang, B.; Tang, Y.Y. Multi-view classification via a fast and effective multi-view nearest-subspace classifier. IEEE Access 2019, 7, 49669–49679.
  21. Adachi, S.; Iwata, S.; Nakatsukasa, Y.; Takeda, A. Solving the trust-region subproblem by a generalized eigenvalue problem. SIAM J. Optim. 2017, 27, 269–291.
  22. Park, Y.; Gerstoft, P.; Seonng, W. Grid-free compressive mode extraction. J. Acoust. Soc. Am. 2019, 145, 1427–1442.
  23. Horaud, R.; Dornaika, F. Hand-eye calibration. Int. J. Robot. Res. 1995, 14, 195–210.
  24. Xu, C.; Zhang, L.H.; Cheng, L. Pose estimation from line correspondences: A complete analysis and a series of solutions. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1209–1222.
Figure 1. Description of the hand–eye calibration problem.
Figure 2. Flowchart of the proposed method.
Figure 3. The relationship between calibration accuracy and observation errors: (a) Rotation errors in observations and calibration errors of RX. (b) Rotation errors in observations and calibration errors of tX. (c) Translation errors in observations and calibration errors of RX. (d) Translation errors in observations and calibration errors of tX. Each point in the figure is the Root Mean Square (RMS) of 100 simulations.
Figure 4. The relationship between calibration accuracy and the number of movements: (a) The number of movements and the calibration errors of RX. (b) The number of movements and the calibration errors of tX. (c) The standard deviations of the calibration errors of RX. (d) The standard deviations of the calibration errors of tX. Each point in the figure is the RMS of 100 simulations.
Figure 5. The relationship between calibration accuracy and the number of outliers: (a) Calibration errors of RX and the number of outliers. (b) Calibration errors of tX and the number of outliers. (c) Partial enlargement of (a). (d) Partial enlargement of (b). Each point in the figure is the RMS of 100 simulations.
Figure 6. (a) Robot arm, gripper and its controlling device. (b) Camera and feature points.
Table 1. Prediction error (θerror / terror): θerror in arcmin, terror in mm.

| N | Proposed | Tsai | Inria | Navy | Dual Quaternion | Shah |
|---|----------|------|-------|------|-----------------|------|
| 2 | 10.14 / 5.49 | 10.14 / 7.06 | 10.14 / 6.23 | 10.17 / 5.25 | 10.21 / 8.70 | 10.14 / 5.63 |
| 3 | 10.10 / 4.63 | 10.10 / 6.21 | 10.14 / 6.20 | 10.14 / 5.10 | 10.14 / 7.08 | 10.10 / 4.71 |
| 4 | 10.07 / 4.06 | 10.10 / 5.77 | 10.10 / 4.74 | 10.10 / 4.97 | 10.14 / 6.18 | 10.10 / 4.16 |
| 5 | 9.83 / 3.94 | 10.07 / 4.15 | 10.07 / 3.79 | 10.07 / 4.62 | 10.10 / 4.17 | 9.86 / 4.04 |
| 6 | 0.96 / 2.46 | 0.96 / 3.67 | 1.30 / 3.61 | 2.16 / 3.54 | 3.81 / 3.98 | 1.34 / 2.60 |
| 7 | 0.44 / 1.57 | 0.51 / 3.51 | 0.51 / 3.60 | 0.51 / 1.87 | 1.78 / 3.64 | 0.72 / 1.75 |
| 8 | 0.37 / 1.15 | 0.37 / 2.76 | 0.41 / 2.51 | 0.44 / 1.77 | 0.44 / 2.27 | 0.37 / 1.20 |
| 9 | 0.06 / 1.01 | 0.27 / 2.47 | 0.34 / 2.27 | 0.41 / 1.19 | 0.41 / 1.82 | 0.20 / 1.05 |
