Article

A Rapid Coordinate Transformation Method Applied in Industrial Robot Calibration Based on Characteristic Line Coincidence

Bailing Liu, Fumin Zhang, Xinghua Qu and Xiaojia Shi
State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University, Tianjin 300072, China
* Author to whom correspondence should be addressed.
Sensors 2016, 16(2), 239; https://doi.org/10.3390/s16020239
Submission received: 22 December 2015 / Revised: 5 February 2016 / Accepted: 14 February 2016 / Published: 18 February 2016
(This article belongs to the Special Issue Sensors for Robots)

Abstract
Coordinate transformation plays an indispensable role in industrial measurement, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving equations over point clouds. Despite their high accuracy, they may yield no solution because of ill-conditioned matrices. In this paper, a novel coordinate transformation method is proposed, based not on equation solving but on geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the spatial geometric relations, the characteristic lines are made to coincide through a series of rotations and translations, and the transformation matrix is obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method achieves the same high accuracy while being more convenient and flexible to operate. A multi-sensor combined measurement system is also presented, which improves the position accuracy of a robot through the calibration of its kinematic parameters. Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration.


1. Introduction

Multi-sensor measurement systems usually have different coordinate systems. The original data must be transformed to a common coordinate system for the convenience of subsequent data acquisition, comparison and fusion [1,2]. The transformation of coordinate systems is applied in many fields, especially vision measurement and robotics. For example, two images need a unified coordinate system for image matching [3]. In camera calibration, the coordinate systems of the image plane and the object plane need to be unified for the intrinsic parameter calculation [4]. In robot systems, the coordinate system of the robot wrist needs to be transformed to the tool center position (TCP) to obtain the correct pose of the robot manipulator [5,6]. A minor error introduced by an imprecise coordinate transformation can cause problems such as the failure of image matching and track breaking [1]. Especially in an error-accumulating system such as a serial industrial robot, the coordinate transformation error accumulates at each step and thereby decreases the position accuracy of the robot manipulator. Therefore, coordinate transformation has been of interest to researchers in recent years.
Industrial robots are well known to have weak absolute position accuracy compared with their repeatability. The positioning accuracy degrades with the number of axes of the robotic arm due to error accumulation. Various methods have been presented to improve the position accuracy of robots, such as establishing a kinematic model of the robot and calibrating the kinematic parameters [7]. Denavit and Hartenberg [8] first proposed the D-H model, which was revised to a linear model by Hayati [9]; it provides the basis for kinematic calibration. Due to the geometric and non-geometric errors of the robot, the traditional robot self-calibration method based on the D-H model cannot accurately describe the robot pose. To avoid the influence of the robot body, many researchers have utilized external measuring instruments to calibrate the robot online [10,11]. To achieve the aim of calibration, the primary step is to unify the coordinate systems of the calibration instrument and the robot. Only in this way is it possible to use the measurement results to correct the kinematic parameters of the robot. With an inaccurate coordinate transformation method, the transformation error might merge into the revised kinematic parameters, so that the calibration of kinematic parameters fails to improve the positioning accuracy of the robot. Therefore, an accurate method of coordinate transformation is indispensable in the field of robot calibration. The well-developed and widely used methods of coordinate transformation can be classified into several categories: the Three-Point method, Small-Angle Approximation method, Rodrigues Matrix method, Singular Value Decomposition (SVD) method, Quaternion method and Least Squares method [2]. The Three-Point method uses three non-collinear points in space to construct an intermediate reference coordinate system [12,13]. The transformation relationship between the initial coordinate system and the target coordinate system is obtained from their relationships relative to the intermediate reference coordinate system. Depending on the choice of the common points, the accuracy of the Three-Point method can be unstable. The Small-Angle Approximation method simplifies the rotation matrix by using the trigonometric approximations $\sin\theta \approx \theta$, $\cos\theta \approx 1$ when the angle between the two coordinate systems is small (less than 5°); it is therefore only suitable for coordinate transformations over small angles. The Rodrigues Matrix method constructs a rotation matrix from an anti-symmetric matrix [14,15]. Despite its high accuracy and good stability, the algorithm is complex and computationally demanding. The Singular Value Decomposition (SVD) method is a matrix decomposition method that minimizes an objective function based on the minimum sum of squared errors [16]. The method is accurate and easy to implement, but it might fail to work out the rotation matrix for a dense cloud of common points. The Quaternion method uses a four-element vector $(q_0, q_1, q_2, q_3)$ to describe the coordinate rotation matrix [17,18]. The aim of the algorithm is to solve for the maximum eigenvalue and the corresponding eigenvector that minimize the quadratic form. It is a simple and precise method, but there might be no solution because of ill-conditioned matrices. In practice, complex calculations and unstable results make these methods difficult and complicated to apply.
Therefore, researchers continue to search for simpler and more stable methods of coordinate transformation. For example, Zhang et al. proposed a practical method of coordinate transformation in robot calibration [19]. This method rotates the three single axes of the robot to calculate the normal vectors in three directions, combined with the data of the calibration sensor. Then, combined with the robot's own readings, the rotation and translation matrices are obtained. The method avoids equation solving and complex calculations, but it can be affected by manufacturing errors of the robot and requires a calibration sensor with a measuring range large enough to cover the full working range of the robot.

2. Online Calibration System of Robot

Industrial robots are characterized by high repeatability but low absolute positioning accuracy. This is due to the structure of the robot, manufacturing errors, kinematic parameter errors and environmental influences [10]. To improve the absolute positioning accuracy of the robot, using an external sensor to measure the position of the robot manipulator is an effective approach. This paper proposes an on-line calibration system for the kinematic parameters of the robot using a laser tracker and a close-range photogrammetric system, as Figure 1 shows. According to the differential equations constructed from the kinematic parameters of each robot axis, the final mathematical model of the kinematic parameters of the robot is established. The position errors of the robot manipulator are obtained by comparing the coordinates in the robot base coordinate system and the measurement sensor system. Then, the errors, including the coordinate transformation error, the target installation error and the position and angle errors of the robot kinematic parameters, are separately corrected. In robot calibration, on the one hand, the coordinate transformation error directly affects the final error correction of the kinematic parameters; on the other hand, the coordinate systems of the sensors often have to be transformed in the on-line combined measurement system. Therefore, the premise of obtaining the position errors of a robot manipulator is to unify the coordinate systems of the various measurement sensors with an accurate, fast and stable coordinate transformation algorithm.
In combination with the characteristics of the robot, we propose a practical coordinate transformation method. It extracts characteristic lines from the point clouds in the different coordinate systems. According to the theory of space analytic geometry, the rotation and translation parameters needed to make the characteristic lines coincide can be calculated, and from them the coordinate transformation matrix. The coincidence of the characteristic lines represents the coincidence of the point clouds as well as the coincidence of the two coordinate systems.
This method has several advantages. First, it requires neither the solution of equations nor complex calculations. Second, because the transformation matrix is obtained from space geometry relationships, it is not affected by robot errors or other environmental factors; the result is accurate and stable. Third, it does not require a sensor with a large field of view. Fourth, the algorithm is lightweight and fast, occupies little processor time and few resources, and can be integrated into the host computer program. It can easily be applied to measurement coordinate systems that change often.

3. Methods of Online Calibration System

3.1. Method of Coordinate Transformation

Suppose that S is a cubic point cloud in space. Point cloud M is the form of S located in the coordinate system of the sensor OSXSYSZS, and N is the form of S located in the robot base coordinate system OrXrYrZr. M′ represents the point cloud M transformed from the sensor coordinate system OSXSYSZS to the robot base coordinate system OrXrYrZr with the transformation matrix TSr. The difference between N and M′ is the transformation error caused by the transfer matrix TSr. The coincidence of the coordinate systems OSXSYSZS and OrXrYrZr can thus be expressed as the coincidence of the two point clouds N and M′. To simplify the mathematical model of the transformation process, we represent each point cloud by several characteristic lines. As verified by experiment, at least two characteristic lines are required to ensure the transformation accuracy.
In Figure 2, two points A1 and A2 are chosen and linked to form characteristic line A, and points B1 and B2 form characteristic line B. Similarly, in point cloud N, the corresponding points A1′ and A2′ form line A′, and points B1′ and B2′ form line B′. To achieve the coincidence of lines A and A′, line A must be rotated around an axis in space. The rotation axis is the vector C, which is perpendicular to the plane constructed by lines A and A′. As Figure 3 shows, the rotation of a vector around an arbitrary axis can be decomposed into a series of rotations around the X, Y and Z axes. The following are the decomposition steps.
Take the first coincidence of Lines A and A' as an example:
(a)
Translate the rotation axis to the coordinate origin. The corresponding transformation matrix is:
$$T(x_1, y_1, z_1) = \begin{bmatrix} 1 & 0 & 0 & -a_0 \\ 0 & 1 & 0 & -b_0 \\ 0 & 0 & 1 & -c_0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{1}$$
where $(a_0, b_0, c_0)$ are the coordinates of the center point of line A.
(b)
Rotate the axis by $\alpha_1$ about axis X into plane XOZ:
$$R_x(\alpha_1) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\alpha_1 & -\sin\alpha_1 & 0 \\ 0 & \sin\alpha_1 & \cos\alpha_1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{2}$$
where $\alpha_1$ is the angle between the axis and plane XOZ, obtained from $\cos\alpha_1 = c_1/\sqrt{b_1^2 + c_1^2}$ and $\sin\alpha_1 = b_1/\sqrt{b_1^2 + c_1^2}$, with $(a_1, b_1, c_1)$ the components of vector C, as Figure 3b shows.
(c)
Rotate the axis by $-\beta_1$ about axis Y so that it coincides with axis Z:
$$R_y(-\beta_1) = \begin{bmatrix} \cos\beta_1 & 0 & -\sin\beta_1 & 0 \\ 0 & 1 & 0 & 0 \\ \sin\beta_1 & 0 & \cos\beta_1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{3}$$
where $\beta_1$ is the angle between the rotation axis and axis Z, obtained from $\cos(-\beta_1) = \cos\beta_1 = \dfrac{\sqrt{b_1^2 + c_1^2}}{\sqrt{a_1^2 + b_1^2 + c_1^2}}$ and $\sin(-\beta_1) = -\sin\beta_1 = \dfrac{-a_1}{\sqrt{a_1^2 + b_1^2 + c_1^2}}$.
(d)
Rotate by $\theta_1$ around axis Z, as shown in Figure 3d:
$$R_z(\theta_1) = \begin{bmatrix} \cos\theta_1 & -\sin\theta_1 & 0 & 0 \\ \sin\theta_1 & \cos\theta_1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{4}$$
where $\theta_1$ is the angle between lines A and A′, obtained from $\theta_1 = \langle A, A' \rangle = \arccos\left(\dfrac{A \cdot A'}{|A|\,|A'|}\right)$.
(e)
Rotate the axis back by reversing step (c):
$$R_y(\beta_1) = \begin{bmatrix} \cos\beta_1 & 0 & \sin\beta_1 & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\beta_1 & 0 & \cos\beta_1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{5}$$
where $\beta_1$ is the same as in step (c).
(f)
Rotate the axis back by reversing step (b):
$$R_x(-\alpha_1) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\alpha_1 & \sin\alpha_1 & 0 \\ 0 & -\sin\alpha_1 & \cos\alpha_1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{6}$$
where $\alpha_1$ is the same as in step (b).
(g)
Translate the axis back by reversing step (a):
$$T^{-1}(x_1, y_1, z_1) = \begin{bmatrix} 1 & 0 & 0 & a_0 \\ 0 & 1 & 0 & b_0 \\ 0 & 0 & 1 & c_0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{7}$$
where $(a_0, b_0, c_0)$ is the same as in step (a).
Combining all of the previous steps, the final transformation matrix $T_{rt1}$ for the first alignment (making lines A and A′ parallel) is expressed as:
$$T_{rt1} = T^{-1}(x_1, y_1, z_1)\, R_x(-\alpha_1)\, R_y(\beta_1)\, R_z(\theta_1)\, R_y(-\beta_1)\, R_x(\alpha_1)\, T(x_1, y_1, z_1) \tag{8}$$
Using the rotation matrix $T_{rt1}$ calculated by Equation (8), the points $P_i(x, y, z)$ in point cloud M generate a new point cloud M1 through Equation (9):
$$P_i'(x, y, z) = T_{rt}\, P_i(x, y, z) \tag{9}$$
Then, the characteristic line A of the new point cloud M1 is parallel with the characteristic line A' of point cloud N, as Figure 4a shows.
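For concreteness, the seven elementary matrices of Equations (1)–(8) can be composed numerically. The following is a minimal numpy sketch of this construction (the function name and point-cloud layout are our own illustration, not from the paper):

```python
import numpy as np

def rotation_about_axis(center, axis, theta):
    """Compose steps (a)-(g): translate the axis midpoint `center` to the
    origin, align the unit axis direction with Z via Rx and Ry, rotate by
    `theta` about Z, then undo the alignment and translation (Eqs. (1)-(8))."""
    a1, b1, c1 = np.asarray(axis, float) / np.linalg.norm(axis)
    d = np.hypot(b1, c1)                        # length of the YZ-plane projection

    T = np.eye(4)
    T[:3, 3] = -np.asarray(center, float)       # Eq. (1): move axis to origin

    Rx = np.eye(4)                              # Eq. (2): rotate into plane XOZ
    if d > 1e-12:                               # (skip if axis already along X)
        Rx[1, 1], Rx[1, 2] = c1 / d, -b1 / d
        Rx[2, 1], Rx[2, 2] = b1 / d, c1 / d

    Ry = np.array([[d,   0.0, -a1, 0.0],        # Eq. (3): rotate onto axis Z
                   [0.0, 1.0, 0.0, 0.0],
                   [a1,  0.0,  d,  0.0],
                   [0.0, 0.0, 0.0, 1.0]])

    Rz = np.eye(4)                              # Eq. (4): the actual rotation
    Rz[:2, :2] = [[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]]

    # Eqs. (5)-(7) are the inverses of steps (c), (b), (a); Eq. (8) composes all
    return np.linalg.inv(T) @ np.linalg.inv(Rx) @ np.linalg.inv(Ry) @ Rz @ Ry @ Rx @ T

# Eq. (9): apply T_rt1 to an (n, 4) homogeneous point cloud M, for example:
# M1 = (rotation_about_axis(mid_A, C, theta_1) @ M.T).T
```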
Based on the new point cloud M1 and point cloud N, the rotation matrix Trt2, which makes line B of point cloud M1 parallel to line B′ of point cloud N, can be calculated through Equations (1)–(8):
$$T_{rt2} = T^{-1}(x_2, y_2, z_2)\, R_x(-\alpha_2)\, R_y(\beta_2)\, R_z(\theta_2)\, R_y(-\beta_2)\, R_x(\alpha_2)\, T(x_2, y_2, z_2) \tag{10}$$
Through the rotation matrix Trt2, the points Pi(x, y, z) in point cloud M1 can generate a new point cloud M2 again by Equation (9). Then, the characteristic line B of the new point cloud M2 is parallel with the characteristic line B' of point cloud N, as Figure 4b shows.
Since the point clouds are cubic, the characteristic lines are diagonals, so B⊥A and B′⊥A′. Because B∥B′ after the second rotation, it follows that B⊥A′ and B′⊥A; that is, the parallel lines B and B′ are perpendicular to both A and A′. There remains an angle θ between line A of point cloud M2 and line A′ of point cloud N, so line B of point cloud M2 is chosen as the rotation axis and this angle as the rotation angle. Point cloud M2 is rotated with these parameters; line A of point cloud M2 then becomes parallel to line A′ of point cloud N, just as line B of M2 is to line B′ of N. The corresponding rotation matrix Trt3 can likewise be calculated by Equations (1)–(8):
$$T_{rt3} = T^{-1}(x_3, y_3, z_3)\, R_x(-\alpha_3)\, R_y(\beta_3)\, R_z(\theta_3)\, R_y(-\beta_3)\, R_x(\alpha_3)\, T(x_3, y_3, z_3) \tag{11}$$
The points Pi(x, y, z) in point cloud M2 generate a new point cloud M3 by Equation (9), which is parallel to point cloud N, as Figure 4c shows. To make point clouds M3 and N coincide, the translation matrix Tr is calculated from the two center points of lines A and A′. The new point cloud M′ is generated after translation by Tr. Therefore, through a series of simple rotations and a translation, the two point clouds N and M′ are made to coincide, as Figure 4d shows. The final transformation matrix is given by Equation (12) and, as a necessary preparation step, can then be used in robot calibration:
$$T_{rt} = T_{rt3} T_{rt2} T_{rt1} + T_r \tag{12}$$
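Under the same assumptions, the whole alignment of Section 3.1 can be sketched as three axis rotations followed by a translation. This is our own illustrative composition (names and index conventions hypothetical), reusing rotation_about_axis from the sketch above:

```python
import numpy as np

def line_of(points, i, j):
    """Direction vector and midpoint of the characteristic line P_i P_j."""
    d = points[j, :3] - points[i, :3]
    return d, 0.5 * (points[i, :3] + points[j, :3])

def align_to(u, v, center):
    """Rotation (Eqs. (1)-(8)) turning direction u parallel to v about the
    axis u x v through `center`."""
    axis = np.cross(u, v)
    if np.linalg.norm(axis) < 1e-9:             # already parallel
        return np.eye(4)
    theta = np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    return rotation_about_axis(center, axis, theta)

def characteristic_line_transform(M, N, iA=(0, 1), iB=(2, 3)):
    """M, N: (n, 4) homogeneous coordinates of the same cube in the sensor
    and robot base frames; iA, iB index the endpoints of lines A and B."""
    T_total = np.eye(4)
    for pair in (iA, iB, iA):                   # A // A', B // B', then A again
        u, mid = line_of(M, *pair)
        v, _ = line_of(N, *pair)
        T = align_to(u, v, mid)                 # T_rt1, T_rt2, T_rt3 in turn
        M = (T @ M.T).T                         # Eq. (9)
        T_total = T @ T_total
    _, mid_M = line_of(M, *iA)                  # translation T_r from the center
    _, mid_N = line_of(N, *iA)                  # points of lines A and A'
    Tr = np.eye(4)
    Tr[:3, 3] = mid_N - mid_M
    return Tr @ T_total                         # Eq. (12), in multiplicative form
```

Note that Equation (12) writes the final translation as an additive term $+T_r$; with homogeneous coordinates the same translation can equivalently be applied as a left-multiplied matrix, which is what this sketch does.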

3.2. Method of Robot Calibration

The actual kinematic parameters of the robot deviate from their nominal values; these deviations are referred to as kinematic errors [10]. The kinematic parameter calibration of a robot is an effective way to improve the absolute position accuracy of the robot manipulator. A simple robot self-calibration method based on the D-H model is described as follows; reference [20] gives a more detailed description.
Assume that $B_p = \begin{bmatrix} r_1^p & r_2^p & r_3^p & p_x^p \\ r_4^p & r_5^p & r_6^p & p_y^p \\ r_7^p & r_8^p & r_9^p & p_z^p \\ 0 & 0 & 0 & 1 \end{bmatrix}$ is the pose of a certain point in the coordinate system of the photogrammetric system, where $r_1^p \sim r_9^p$ are the attitude parameters and $p_x^p \sim p_z^p$ are the position parameters. Through transformation from the coordinate system of the measurement sensor $O_p X_p Y_p Z_p$ to the robot base coordinate system $O_o X_o Y_o Z_o$, the point pose $B_o = \begin{bmatrix} r_1^o & r_2^o & r_3^o & p_x^o \\ r_4^o & r_5^o & r_6^o & p_y^o \\ r_7^o & r_8^o & r_9^o & p_z^o \\ 0 & 0 & 0 & 1 \end{bmatrix}$ can be obtained by Equation (13):
$$B_o = T_{rt} \times B_p \tag{13}$$
where, Trt is the transformation matrix, which can be obtained by the method described in Section 3.1.
Given the six DOF robot in the lab, the transformation matrix from the robot tool coordinate system to the robot base coordinate system is expressed as:
$$T_0^N = T_0^1\, T_1^2 \cdots T_{n-1}^{n} \cdots T_{N-1}^{N} \quad (N = 6) \tag{14}$$
In this system, the cooperation target of the measurement sensor, which is set up at the end axis of the robot, should be considered as an additional axis, Axis 7. Then, the transformation matrix from Axis 6 to Axis 7 is:
$$T_6^7 = \begin{bmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{15}$$
where $t_x$, $t_y$, $t_z$ are the translation components, which can be measured in advance. Therefore, according to the kinematic model of the robot, the typical coordinates of the robot manipulator in the robot base coordinate system $O_O X_O Y_O Z_O$ are expressed as:
$$B_o = \left( \prod_{i=1}^{7} T_{i-1}^{i} \right) B_t \tag{16}$$
where B t is the point pose in the robot tool coordinate system, and B o is the point pose from the robot tool coordinate system to the robot base coordinate system.
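Equations (14)–(16) chain homogeneous transforms. A minimal sketch of the classical D-H chain, with the fixture translation appended as Axis 7, might look as follows (the per-link parameter ordering is our assumption, not taken from the paper):

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Classical D-H homogeneous transform between adjacent links."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def manipulator_pose(dh_params, t_fixture, B_t=np.eye(4)):
    """Eq. (16): product of the six link transforms and the fixed
    translation T_6^7 of Eq. (15), applied to the tool-frame pose B_t."""
    T = np.eye(4)
    for theta, d, a, alpha in dh_params:        # axes 1..6
        T = T @ dh_transform(theta, d, a, alpha)
    T67 = np.eye(4)
    T67[:3, 3] = t_fixture                      # (t_x, t_y, t_z), measured in advance
    return T @ T67 @ B_t
```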
In robot calibration, the kinematic parameters, which usually means the link parameters of the robot, are the most significant impact factors. In the D-H model, the link parameters include the link length a, the link angle α, the joint displacement d and the joint rotation angle θ. With disturbances of the four link parameters, the position error matrix between adjacent robot axes $dT_{i-1}^{i}$ can be expressed as:
$$dT_{i-1}^{i} = \frac{\partial T_{i-1}^{i}}{\partial \theta_i}\Delta\theta_i + \frac{\partial T_{i-1}^{i}}{\partial \alpha_i}\Delta\alpha_i + \frac{\partial T_{i-1}^{i}}{\partial a_i}\Delta a_i + \frac{\partial T_{i-1}^{i}}{\partial d_i}\Delta d_i \tag{17}$$
where $\Delta\theta_i$, $\Delta\alpha_i$, $\Delta a_i$ and $\Delta d_i$ are the small errors of the link parameters. Suppose that $A_{q_i} = (T_{i-1}^{i})^{-1} \frac{\partial T_{i-1}^{i}}{\partial q_i}$, where q represents the link parameters (a, d, α, θ).
If every two adjacent axes are influenced by the link parameters, the transformation matrix from the robot base coordinate system to the coordinate system of the robot manipulator can be expressed as:
$$T_0^N + dT_0^N = \prod_{i=1}^{N} \left( T_{i-1}^{i} + dT_{i-1}^{i} \right) = \prod_{i=1}^{N} \left( T_{i-1}^{i} + T_{i-1}^{i} \Delta_i \right) \quad (N = 6) \tag{18}$$
where $T_0^N$ is the typical transformation matrix from the robot base coordinate system to the coordinate system of the robot manipulator and $dT_0^N$ is the error matrix caused by the link parameters. By expanding $dT_0^N$ and performing extensive simplifications and combinations, Equation (18) can be simplified as:
$$\begin{aligned} dT_0^N = {} & T_0^1 A_{\theta_1} T_1^N \Delta\theta_1 + T_0^1 A_{\alpha_1} T_1^N \Delta\alpha_1 + T_0^1 A_{a_1} T_1^N \Delta a_1 + T_0^1 A_{d_1} T_1^N \Delta d_1 \\ & + T_0^2 A_{\theta_2} T_2^N \Delta\theta_2 + T_0^2 A_{\alpha_2} T_2^N \Delta\alpha_2 + T_0^2 A_{a_2} T_2^N \Delta a_2 + T_0^2 A_{d_2} T_2^N \Delta d_2 \\ & + \cdots + T_0^N A_{\theta_N} \Delta\theta_N + T_0^N A_{\alpha_N} \Delta\alpha_N + T_0^N A_{a_N} \Delta a_N + T_0^N A_{d_N} \Delta d_N \end{aligned} \tag{19}$$
Suppose that $k_i^q = T_0^i A_{q_i} T_i^N$, where q represents the four link parameters. The position error of the robot manipulator can then be written as in Equation (20):
$$\Delta p = \begin{bmatrix} dt_x \\ dt_y \\ dt_z \end{bmatrix} = \begin{bmatrix} k_{1\theta}^x & k_{1\alpha}^x & k_{1a}^x & k_{1d}^x & k_{2\theta}^x & \cdots & k_{6\theta}^x & k_{t_x}^x & k_{t_y}^x & k_{t_z}^x \\ k_{1\theta}^y & k_{1\alpha}^y & k_{1a}^y & k_{1d}^y & k_{2\theta}^y & \cdots & k_{6\theta}^y & k_{t_x}^y & k_{t_y}^y & k_{t_z}^y \\ k_{1\theta}^z & k_{1\alpha}^z & k_{1a}^z & k_{1d}^z & k_{2\theta}^z & \cdots & k_{6\theta}^z & k_{t_x}^z & k_{t_y}^z & k_{t_z}^z \end{bmatrix} \begin{bmatrix} \Delta\theta_1 \\ \Delta\alpha_1 \\ \Delta a_1 \\ \Delta d_1 \\ \Delta\theta_2 \\ \vdots \\ \Delta d_6 \\ \Delta t_x \\ \Delta t_y \\ \Delta t_z \end{bmatrix} = B_i \Delta q_i \tag{20}$$
where $\Delta p$ is the position error of the robot manipulator, $dt_x$, $dt_y$, $dt_z$ are its Cartesian components, and $B_i = \begin{bmatrix} k_{1\theta}^x & \cdots & k_{t_z}^x \\ \vdots & \ddots & \vdots \\ k_{1\theta}^z & \cdots & k_{t_z}^z \end{bmatrix}$ is the parameter matrix related to the typical position value of the robot manipulator. In this paper, because the DOF of the serial robot is 6, $\Delta q_i = [\Delta\theta_1 \sim \Delta t_z]$ includes the 24 kinematic parameter errors of the robot ($a_1 \sim a_6$, $d_1 \sim d_6$, $\alpha_1 \sim \alpha_6$, $\theta_1 \sim \theta_6$) and the three translation error variables of $T_6^7$; in total, 27 parameters of the robot need to be calibrated. In Equation (20), the left side is the position error at each point, as measured by the measurement sensor, and the right side contains the kinematic errors that need to be corrected. These errors can be revised by the least squares method in the generalized inverse matrix sense.
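Equation (20), stacked over all measured points, is an overdetermined linear system in the 27 parameter errors, solvable in the generalized-inverse (least squares) sense. Below is a sketch of that solve; note that we approximate the sensitivity matrix $B_i$ by finite differences instead of the analytic expansion of Equation (19), and all names are illustrative:

```python
import numpy as np

def sensitivity_matrix(fk, q, eps=1e-6):
    """Finite-difference stand-in for B_i of Eq. (20): fk(q) returns the
    manipulator position (3,) for the 27-dim parameter vector q."""
    p0, B = fk(q), np.zeros((3, len(q)))
    for j in range(len(q)):
        dq = q.copy()
        dq[j] += eps
        B[:, j] = (fk(dq) - p0) / eps
    return B

def solve_parameter_errors(fk_per_pose, q0, measured):
    """fk_per_pose: one function per robot pose, each mapping the parameter
    vector to the predicted manipulator position (3,); measured: the
    corresponding sensor-measured positions. Solves the stacked Eq. (20)."""
    B = np.vstack([sensitivity_matrix(fk, q0) for fk in fk_per_pose])
    dp = np.concatenate([p - fk(q0) for fk, p in zip(fk_per_pose, measured)])
    dq, *_ = np.linalg.lstsq(B, dp, rcond=None)  # generalized-inverse solution
    return q0 + dq                               # corrected kinematic parameters
```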

4. Experiments and Analysis

The designed experiments show how the proposed coordinate transformation method achieves the coordinate transformation of the on-line robot calibration system, and verification experiments assess the result of the robot calibration using the proposed method. To evaluate its performance, the proposed method is compared with four other common methods of coordinate transformation under the same experimental conditions.

4.1. Coordinate Transformation in an On-line Robot Calibration System

The on-line robot calibration system we constructed includes an industrial robot, a photogrammetric system and a laser tracker, as shown in Figure 1. The robot in the lab is a KR 5 arc from KUKA Co. Ltd. (Augsburg, Germany), one of the world's top robotics companies; its arm length is 1.4 m and its working envelope is 8.4 m³. To cover most of the robot working range, the close-range photogrammetric system in the lab, a TENYOUN 3DMoCap-GC130 (Beijing, China), requires a field of view of more than 1 m × 1 m × 1 m without any dead angle. To achieve on-line measurement, a multi-camera system is needed; we used a multi-camera system symmetrically formed by four CMOS cameras with fixed focal lengths of 6 mm. The laser tracker in the lab, a FARO Xi from FARO Co., Ltd. (Lake Mary, FL, USA), is a well-known high-accuracy instrument whose absolute distance measurement (ADM) accuracy is 10 μm + 1.1 μm/m. The laser beam can easily be lost in tracking because of barriers or the acceleration of the target, which would interrupt the measurement and introduce errors. Therefore, we combine the laser tracker with the photogrammetric system to improve the measurement accuracy and stability, making full use of the high accuracy of the laser tracker and the free line of sight of the photogrammetric system. After proper data fusion, the two types of data from the photogrammetric system and the laser tracker can be gathered together; the method of data fusion and the experimental verification are detailed in reference [21]. In the experiment, 80 points in the common field of the robot and the photogrammetric system are picked to build a cube of 200 mm × 200 mm × 200 mm. The reason for building a cube is to facilitate the selection of characteristic lines and the calculation of the coincidence parameters. The two targets of the photogrammetric system and the laser tracker are installed together on the end axis of the robot by a multi-faced fixture. To obtain accurate and stable data, the robot stops for 7 s at each location, and the sensors measure each point 20 times, providing an adequate measurement time for the photogrammetric system and the laser tracker. The experimental parameters of the photogrammetric system are an exposure time of 15 μs, a frame rate of 10 fps and a gain of 40, based on experience.
According to Equations (1)–(7) and the experimental data, we obtain the parameters of the transformation matrix shown in Table 1, where $a_i \sim \theta_i$ are the parameters for the coincidence of the characteristic lines in Equations (1)–(7).
According to Equations (8)–(10), the transformation matrices from the robot base coordinate system to the coordinate system of the sensors are calculated as:
$$T_{rt}^{rp} = \begin{bmatrix} 0.99917951 & 0.03370790 & 0.02245170 & 1003.54380 \\ 0.03352107 & 0.99940061 & 0.00864662 & 167.88234 \\ 0.02272970 & 0.00788692 & 0.99971054 & 984.54935 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
$$T_{rt}^{rl} = \begin{bmatrix} 0.999178 & 0.033635 & 0.02262 & 832.501 \\ 0.033819 & 0.9994 & 0.007803 & 131.4773 \\ 0.02235 & 0.00856 & 0.99971 & 1004.76 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
where $T_{rt}^{rp}$ is the transformation matrix from the robot base coordinate system to the coordinate system of the photogrammetric system, and $T_{rt}^{rl}$ is the transformation matrix from the robot base coordinate system to the coordinate system of the laser tracker.
By means of the above transformation matrices, we obtain the point cloud coordinates transformed from the coordinate system of the robot to those of the sensors by Equation (12). The original coordinates before and after transformation as well as the transformation error are shown in Table 2, where Px, Py, Pz (photogrammetric system), Lx, Ly, Lz (laser tracker) and Rx, Ry, Rz (robot) are the three components of the original coordinates in the different coordinate systems, Tx, Ty, Tz are the coordinates of points transformed from the robot base coordinate system to the sensor coordinate system, and Δx, Δy, Δz are the three components of the transformation error.
It can be calculated from Table 2 that the average values of the transformation error between the coordinate systems of the robot and the photogrammetric system are $\overline{\Delta x}$ = 0.106 mm, $\overline{\Delta y}$ = −0.062 mm and $\overline{\Delta z}$ = 0.013 mm, and between the robot and the laser tracker $\overline{\Delta x}$ = −0.015 mm, $\overline{\Delta y}$ = 0.041 mm and $\overline{\Delta z}$ = 0.023 mm. Figure 5 and Figure 6 show that the transformation error of the photogrammetric system is approximately 10 times greater than that of the laser tracker. As noted earlier, the nominal measurement accuracy of the photogrammetric system is 10⁻² mm and that of the laser tracker is 10⁻³ mm. The results illustrate that the transformation accuracy has the same order of magnitude as the accuracy of the corresponding measurement sensor, which indicates that the transformation error is small enough not to influence the accuracy of the sensors. The transformation method can also make the error distribution of the lower-precision sensor more uniform, improving the transformation accuracy and the accuracy of the robot calibration.

4.2. Position Error of Robot after Coordinate Transformation and Calibration

Experiments are designed to calibrate the kinematic parameters of the robot. The measurement system is shown in Figure 7. Sixty points in space are used for the calibration. After the coordinate transformation, the position errors between the sensor and the robot manipulator are obtained. A constraint method based on the minimum distance error is adopted to calibrate the robot kinematic parameters [20,21]. Twenty-seven kinematic parameters, including 24 link parameters and three parameters of the fixture, are corrected. Then, 60 corrected positions are calculated using the calibrated robot kinematic parameters. To evaluate the performance of the proposed method, we use as a comparison a group of calibration results obtained with a different coordinate transformation method and the same robot calibration algorithm. The position errors of the robot after coordinate transformation and calibration are shown in Figure 8, where δ1 is the position error after coordinate transformation with the other method [19], and δ2 is the position error after coordinate transformation with the proposed method (Characteristic Line method).
In position measurement, the distance between two points is often used to evaluate the position accuracy; it is called the root mean square (RMS) error, expressed as:
$$RMS_{d_i} = \sqrt{(x_i' - x_i)^2 + (y_i' - y_i)^2 + (z_i' - z_i)^2} \tag{21}$$
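With the positions before and after calibration stored as (n, 3) arrays, Equation (21) reduces to a one-liner; this small sketch is our own illustration:

```python
import numpy as np

def rms_position_error(p_calibrated, p_nominal):
    """Eq. (21): Euclidean distance per point between the calibrated and
    reference manipulator positions, both arrays of shape (n, 3)."""
    return np.linalg.norm(p_calibrated - p_nominal, axis=1)
```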
Figure 8 indicates that the average RMS using the other coordinate transformation method is $\overline{\delta_1}$ = 0.436 mm, while the average RMS using the proposed method is $\overline{\delta_2}$ = 0.200 mm. The position accuracy is improved by 45.8% using the Characteristic Line method.
To evaluate the distribution of the robot's accuracy over different areas of the working range, a new set of test data is used in a demonstration experiment. The center of the robot calibration region, O, is at (750, 0, 1000) in the robot base coordinate system; within the region of radius 200 mm around O, the positioning accuracy of the robot is expected to be highest. To verify the distribution of the robot accuracy in the non-calibrated regions, five positions O1 (1000, 150, 550), O2 (780, 710, 900), O3 (780, 870, 410), O4 (600, −800, 860) and O5 (940, −960, 450) are chosen. Taking these five positions as centers and 200 mm as the radius, 60 points are chosen in each region to calibrate the robot in the same way as in the previous calibration experiment. The position errors in the different regions are shown in Table 3.
Table 3 indicates that the average RMS of the robot position error within the calibration region is 0.200 mm, whereas the position error outside the calibration region is about 0.323 mm. This proves that the calibration accuracy is not consistent over the whole working range of the robot; the calibration method is therefore more applicable to a smaller working range.

4.3. Accuracy of Coordinate Transformation Method

To determine the accuracy of the proposed coordinate transformation method, an experiment is designed using the laser tracker. The laser tracker is placed at two different stations to measure five common points. These common points must not be collinear; otherwise, the Jacobian matrix will have a rank defect of one and be singular. Five points are chosen to improve the transformation accuracy. After the unification of the coordinate systems by the proposed method, the measurement results of the laser tracker are compared. Then, the accuracy of the proposed transformation method (Characteristic Line method) can be calculated, as Table 4 shows.
In addition, to evaluate the performance of the proposed method, we calculate the transformation accuracy of four other methods on the same points: the Three-Point method, Rodrigues Matrix method, Singular Value Decomposition method and Quaternion method. The data of the five common points measured by the laser tracker from the two stations are substituted into the four algorithms. Then, the rotation matrix R and translation matrix T are calculated, and the coordinate transformation matrix Trt is obtained. According to Equation (22), the new coordinates after transformation by Trt are generated as below:
$$\begin{pmatrix} x_i' \\ y_i' \\ z_i' \end{pmatrix}_{s2} = R \begin{pmatrix} x_i \\ y_i \\ z_i \end{pmatrix}_{s1} + T = \begin{pmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{pmatrix} \begin{pmatrix} x_i \\ y_i \\ z_i \end{pmatrix}_{s1} + \begin{pmatrix} x_0 \\ y_0 \\ z_0 \end{pmatrix} \tag{22}$$
where $(x_i, y_i, z_i)_{s1}$ are the coordinates of the i-th point measured by the laser tracker at Station 1, $(x_i', y_i', z_i')_{s2}$ are the new coordinates at Station 2 generated by the coordinate transformation matrix Trt, R is the rotation matrix with parameters $r_{11} \sim r_{33}$, and T is the translation matrix with parameters $(x_0, y_0, z_0)$.
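For reference, the SVD method compared here can be sketched with the standard Kabsch construction: compute the cross-covariance of the centered point sets, take its SVD, and guard against reflections. This is a generic sketch of the technique, not the authors' code:

```python
import numpy as np

def svd_registration(P1, P2):
    """Estimate R and T of Eq. (22) minimizing sum ||R p_i + T - q_i||^2,
    where P1, P2 are corresponding (n, 3) point sets (Station 1 -> 2)."""
    c1, c2 = P1.mean(axis=0), P2.mean(axis=0)
    H = (P1 - c1).T @ (P2 - c2)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                          # proper rotation (det = +1)
    T = c2 - R @ c1
    return R, T

# transformation error of point i: np.linalg.norm(R @ p1_i + T - p2_i)
```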
Comparing these with the actual data measured by the laser tracker at Station 2, the error of the coordinate transformation is obtained. We use the root mean square (RMS) of the transformation error of the five common points to describe the transformation accuracy. The experimental results are shown in Table 4 and Figure 9.
The following conclusions can be drawn from the comparison of the different algorithms. The accuracy of the Three-Point method is the lowest, and its solution depends on the choice of common points, but the algorithm is simple and has the lowest time cost. The Rodrigues Matrix method has the highest accuracy, but the matrix computation is the most complex and takes the longest. The accuracies of the Singular Value Decomposition method and the Quaternion method are relatively high; their matrix calculations are simple and time-saving, but they may fail to work out the rotation matrix when the points are dense. The Characteristic Line method has the same level of accuracy as the Singular Value Decomposition and Quaternion methods; its algorithm is simple and stable, and its execution time is short as well. In addition, as Figure 9 shows, it suppresses the large error terms of the other four methods and does not accumulate errors as the number of common points increases, as the other four methods do.

5. Conclusions

This paper proposes a simple method of coordinate transformation in a multi-sensor combined measurement system for use in the field of industrial robot calibration. It requires neither a large amount of computation nor a large field of view, and it is not affected by the errors of the robot. It operates quickly and can therefore be integrated into the host computer program, and it can be applied in cases where the coordinate systems change often. As verified by experiments, the accuracy of the transformation method is 10⁻³ mm. It reduces the cumulative error of the coordinate transformation in the robot calibration, thereby improving the accuracy of the robot calibration. To evaluate its performance, its accuracy is compared with four common coordinate transformation methods. The experimental results show that the proposed method has the same high accuracy as the Singular Value Decomposition method and the Quaternion method, suppresses large error terms and does not accumulate errors. Therefore, this method has the advantages of being simple and fast while exhibiting high accuracy and stability. It should find a wide range of applications in the field of industrial combined measurement.

Acknowledgments

This research was supported by the Natural Science Foundation of China (NSFC), No. 51275350, and the Tianjin Program for Strengthening Marine Technology, No. KJXH201408.

Author Contributions

Bailing Liu, Fumin Zhang and Xinghua Qu conceived and designed the experiments; Bailing Liu and Xiaojia Shi performed the experiments; Bailing Liu and Fumin Zhang analyzed the data; Xinghua Qu contributed analysis tools; Bailing Liu wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, F.; Wei, X.Z.; Li, X. Coordinates transformation and the accuracy analysis in sensor network. J. Syst. Eng. Electron. 2005, 27, 1231–1234. [Google Scholar]
  2. Lin, J.R. Research on the Combined Measurement Method of Large Complex Objects. Ph.D. Thesis, Tianjin University, Tianjin, China, May 2012; pp. 54–72. [Google Scholar]
  3. Davis, M.H.; Khotanzad, A.; Flamig, D.P.; Harms, S.E. A physics-based coordinate transformation for 3-D image matching. IEEE Trans. Med. Imaging 1997, 16, 317–328. [Google Scholar] [CrossRef] [PubMed]
  4. Huang, F.S.; Chen, L. CCD camera calibration technology based on the translation of coordinate measuring machine. Appl. Mech. Mater. 2014, 568–570, 320–325. [Google Scholar] [CrossRef]
  5. Liu, C.J.; Chen, Y.W.; Zhu, J.G.; Ye, S.H. On-line flexible coordinate measurement system based on an industry robot. J. Optoelectron. Laser 2010, 21, 1817–1821. [Google Scholar]
  6. Stein, D.; Monnich, H.; Raczkowsky, J.; Worn, H. Automatic and hand guided self-registration between a robot and an optical tracking system. In Proceedings of the International Conference on Advanced Robotics (ICAR), Munich, Germany, 22–26 June 2009; pp. 22–26.
  7. Jian, Y.Z.; Chen, Z.; Da, W.Z. Pose accuracy analysis of robot manipulators based on kinematics. Adv. Manuf. Syst. 2011, 201–203, 1867–1872. [Google Scholar]
  8. Denavit, J.; Hartenberg, R.S. A kinematic notation for lower-pair mechanisms based on matrices. ASME. J. Appl. Mech. 1955, 22, 215–221. [Google Scholar]
  9. Hayati, S.A. Robot arm geometric link parameter estimation. In Proceedings of the 22nd IEEE Conference on Decision and Control, San Antonio, TX, USA, 14–16 December 1983.
  10. Du, G.; Zhang, P. Online serial manipulator calibration based on multisensory process via extended Kalman and particle filters. IEEE Trans. Ind. Electron. 2014, 61, 6852–6859. [Google Scholar]
  11. Hans, D.R.; Beno, B. Visual-model-based, real-time 3D pose tracking for autonomous navigation: Methodology and experiments. Auton. Robot 2008, 25, 267–286. [Google Scholar]
  12. Yang, F.; Li, G.Y.; Wang, L. Research on the Methods of Calculating 3D Coordinate Transformation Parameters. Bull. Surv. Mapp. 2010, 6, 5–7. [Google Scholar]
  13. Zhang, H.; Yao, C.Y.; Ye, S.H.; Zhu, J.G.; Luo, M. The Study on Spatial 3-D Coordinate Measurement Technology. Chin. J. Sci. Instrum. 2004, 22, 41–43. [Google Scholar]
  14. Yang, F.; Li, G.Y.; Wang, L.; Yu, Y. A method of least square iterative coordinate transformation based on Lodrigues matrix. J. Geotech. Investig. Surv. 2010, 9, 80–84. [Google Scholar]
  15. Chen, X.J.; Hua, X.H.; Lu, T.D. Detection of exceptional feature points based on combined Rodrigo matrix. Sci. Surv. Mapp. 2013, 38, 94–96. [Google Scholar]
  16. Nicolas, L.B.; Stephen, J.S. Jacobi method for quaternion matrix singular value decomposition. Appl. Math. Comput. 2007, 187, 1265–1271. [Google Scholar]
  17. Ni, Z.S.; Liao, Q.Z.; Wu, X.X. General 6R robot inverse solution algorithm based on a quaternion matrix and a Groebner base. J. Tsinghua Univ. (Sci. Tech.) 2013, 53, 683–687. [Google Scholar]
  18. Weng, J.; Huang, T.S.; Ahuja, N. Motion and structure from two perspective views: Algorithms, error analysis, and error estimation. IEEE Trans. Pattern Anal. 1989, 11, 451–476. [Google Scholar] [CrossRef]
  19. Zhang, B.; Wei, Z.; Zhang, G.J. Rapid coordinate transformation between a robot and a laser tracker. Chin. J. Sci. Instrum. 2012, 9, 1986–1990. [Google Scholar]
  20. Li, R.; Qu, X.H. Study on calibration uncertainty of industrial robot kinematics parameters. Chin. J. Sci. Instrum. 2014, 35, 2192–2199. [Google Scholar]
  21. Liu, B.L.; Zhang, F.M.; Qu, X.H. A method for improving the pose accuracy of a robot manipulator based on multi-sensor combined measurement and data fusion. Sensors 2015, 15, 7933–7952. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Online calibration system of robot kinematic parameters.
Figure 2. The schematic diagram of coordinate transformation method.
Figure 3. Schematic diagram of a vector rotated around an arbitrary axis.
Figure 4. The processes of the coordinate transformation method.
Figure 5. Transformation error from robot to photogrammetric system.
Figure 6. Transformation error from robot to laser tracker.
Figure 7. Measurement system.
Figure 8. Position errors after robot calibration.
Figure 9. Transformation error compared with different algorithms.
Table 1. Calculated Results of Coincidence Parameters (Units: mm, °).

| System | i | aᵢ | bᵢ | cᵢ | αᵢ | βᵢ | θᵢ |
|---|---|---|---|---|---|---|---|
| Robot to photogrammetric system | 1 | −2588.9 | 81326 | −62472 | 127.53° | 1.4461° | 256.282° |
| | 2 | −30491 | −39586 | 1397.2 | 87.979° | 37.588° | 208.161° |
| | 3 | 210.37 | −155.16 | 194.69 | 38.555° | 40.198° | 176.953° |
| Robot to laser tracker | 1 | −3.0135 | 11.132 | −5.8921 | 117.89° | 13.456° | 0.007° |
| | 2 | −6.143 | 0.26899 | −5.9267 | 177.4° | 45.997° | 0.004° |
| | 3 | 200.03 | 159.99 | −200.07 | 141.35° | 37.984° | 0.005° |
Table 2. Coordinates of Point Clouds after Transformation (Units: mm).

Robot to photogrammetric system:

| Px | Py | Pz | Rx | Ry | Rz | Tx | Ty | Tz | Δx | Δy | Δz |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 42.728 | 138.567 | 109.566 | 895 | 30 | 875 | 42.975 | 138.590 | 109.751 | −0.247 | −0.023 | −0.185 |
| 108.751 | 140.724 | 108.418 | 961 | 30 | 875 | 108.921 | 140.822 | 108.277 | −0.170 | −0.098 | 0.141 |
| 175.846 | 143.007 | 107.153 | 1028 | 30 | 875 | 175.866 | 143.088 | 106.780 | −0.020 | −0.081 | 0.373 |
| 242.882 | 145.388 | 105.845 | 1095 | 30 | 875 | 242.812 | 145.355 | 105.282 | 0.070 | 0.033 | 0.563 |

(The remaining 76 points are omitted.)

Robot to laser tracker:

| Lx | Ly | Lz | Rx | Ry | Rz | Tx | Ty | Tz | Δx | Δy | Δz |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1048.620 | 29.944 | 875.077 | 895 | 30 | 875 | 1048.624 | 29.967 | 875.024 | −0.004 | −0.023 | 0.053 |
| 1114.679 | 29.985 | 874.995 | 961 | 30 | 875 | 1114.625 | 29.956 | 875.012 | 0.054 | 0.029 | −0.017 |
| 1181.646 | 29.955 | 874.989 | 1028 | 30 | 875 | 1181.624 | 29.944 | 875.001 | 0.022 | 0.011 | −0.012 |
| 1248.689 | 29.935 | 874.791 | 1095 | 30 | 875 | 1248.624 | 29.932 | 874.890 | 0.065 | 0.003 | −0.099 |

(The remaining 76 points are omitted.)
Table 3. The RMS of position error calibrated in the different regions.

| Region | O | O1 | O2 | O3 | O4 | O5 |
|---|---|---|---|---|---|---|
| Position error/mm | 0.200 | 0.330 | 0.360 | 0.271 | 0.335 | 0.319 |
Table 4. RMS of transformation error with the different algorithms.

Coordinates of the five common points (mm):

| Point | Station 1 x | Station 1 y | Station 1 z | Station 2 x | Station 2 y | Station 2 z |
|---|---|---|---|---|---|---|
| 1 | 3049.626 | −188.668 | −1403.555 | 1484.681 | 639.268 | −1401.164 |
| 2 | 4247.939 | 91.939 | −1401.334 | 1050.101 | 3264.365 | −1396.089 |
| 3 | 1678.935 | 1946.842 | −1380.022 | −1049.19 | 1502.397 | −1379.453 |
| 4 | 3688.375 | 2777.637 | −1403.824 | −778.883 | 659.965 | −1398.95 |
| 5 | 3802.578 | 1207.190 | −1397.241 | 642.931 | 2983.472 | −1392.788 |

RMS of transformation error (mm) and execution time:

| Point | Three-Point | Rodrigues Matrix | SVD | Quaternion | Characteristic Line |
|---|---|---|---|---|---|
| 1 | 0.013 | 0.006 | 0.015 | 0.015 | 0.008 |
| 2 | 0.050 | 0.041 | 0.041 | 0.041 | 0.008 |
| 3 | 0.012 | 0.013 | 0.011 | 0.011 | 0.034 |
| 4 | 0.029 | 0.009 | 0.021 | 0.021 | 0.031 |
| 5 | 0.061 | 0.053 | 0.053 | 0.053 | 0.026 |
| Mean RMS | 0.033 | 0.024 | 0.027 | 0.028 | 0.025 |
| Execution time/s | 0.021 | 0.203 | 0.031 | 0.023 | 0.029 |

