Article

Improved Direct Linear Transformation for Parameter Decoupling in Camera Calibration

Zhenqing Zhao, Dong Ye, Xin Zhang, Gang Chen and Bin Zhang
1 School of Electrical Engineering and Automation, Harbin Institute of Technology, Heilongjiang 150001, China
2 Capital Aerospace Machinery Company, Beijing 100076, China
* Author to whom correspondence should be addressed.
Algorithms 2016, 9(2), 31; https://doi.org/10.3390/a9020031
Submission received: 11 December 2015 / Revised: 20 April 2016 / Accepted: 25 April 2016 / Published: 29 April 2016

Abstract:
For camera calibration based on direct linear transformation (DLT), the camera's intrinsic and extrinsic parameters are calibrated simultaneously, which may introduce coupling errors in the parameters and reduce the accuracy of the calibration parameters. In this paper, we propose an improved direct linear transformation (IDLT) algorithm for calibration parameter decoupling. The algorithm exploits a linear relationship between the calibration parameter errors and obtains the calibration parameters by moving a three-dimensional template. Simulation experiments were conducted to compare the calibration accuracy of the DLT and IDLT algorithms under image noise and distortion. The results show that the calibration parameters of the IDLT algorithm achieve higher accuracy because the algorithm removes the coupling errors.

1. Introduction

With the technological development of digital cameras and microprocessors, computer vision has been widely applied to robot navigation, surveillance, three-dimensional (3D) reconstruction and other fields owing to its high speed, high accuracy and non-contact nature. To obtain improved 3D information from a two-dimensional (2D) image, it is necessary to calibrate the intrinsic parameters of the camera, such as the focal distance and optical center point, as well as its extrinsic parameters, such as rotation and translation, which relate the world coordinate system to the camera coordinate system. Over the last decade, numerous studies have focused on this area. In [1], the authors proposed an efficient approach for the dynamic calibration of multiple cameras. In [2], the authors proposed a calibration algorithm based on line images. In [3], the authors used one-dimensional (1D) information to calibrate the parameters. All of these algorithms make camera calibration faster and more convenient.
To obtain high-accuracy parameter results, high-accuracy 3D or 2D templates can be used. These algorithms include direct linear transformation (DLT) [4], the Tsai calibration method [5] and the Zhang calibration method [6]. In space rendezvous and docking, as well as in visual tracking applications, it is necessary to obtain the specific extrinsic and intrinsic parameters of the camera. At the same time, with the wide application of the coordinate measuring machine (CMM) [7,8], high-precision, large-range 3D templates have become more readily available. The DLT algorithm is more suitable for these applications.
The DLT algorithm is based on the perspective projection between 3D space points and 2D image points. It calculates a transformation matrix and obtains the camera’s intrinsic and extrinsic parameters according to the parameter decomposition. With this model, only one 3D template in one position is required for calculation; therefore, the template size, number of feature points and relative distance between the template and camera are critical [9,10,11].
Camera calibration with image noise and distortion remains a difficult task. Algorithms typically solve for the camera parameters analytically and then solve the model of image noise and distortion by optimization [6,12,13]. All traditional optimization algorithms applied to visual systems, such as the modified Newton algorithm, the Levenberg–Marquardt algorithm and the genetic algorithm, require a good initial solution to optimize. Therefore, the analytic solution must be not only robust to image noise but also effective in addressing distortion.
When the calibration data contain noise and distortion, coupling errors exist between the camera's extrinsic and intrinsic parameters. This means that an error in an extrinsic parameter may be compensated by an error in an intrinsic parameter [14,15,16]. Existing parameter decoupling methods use calibration models that do not involve the extrinsic parameters; these include vanishing points [15,17,18], straight lines [16,19] and cross ratios [20,21]. In the present study, the image re-projection error was analyzed using both the mathematical model and the camera pinhole geometric model. The results show that the coupling error causes a small re-projection error variance but a large calibration parameter error variance, and that this variance is related to the template size, the number of feature points and the relative distance between the template and camera. To improve the calibration accuracy of the camera parameters, a decoupling algorithm for the intrinsic and extrinsic parameters is proposed based on DLT.
The remainder of this paper is organized as follows. The relationship between the coupling error, the template size and the relative distance between the template and camera is described in Section 2. The improved DLT (IDLT) algorithm for camera calibration is proposed in Section 3. The experimental results are presented in Section 4, and the conclusions are given in Section 5.

2. Parameter Coupling Analysis

2.1. Camera Model

The camera model is shown in Figure 1. A 3D point is denoted by $P_i = [X_i\ Y_i\ Z_i]^T$, and a 2D image point by $p_i = [u_i\ v_i]^T$. The relationship between the 3D point $P_i$ and its image projection $p_i$ is given by:
$$u_i = u_0 + a_x\frac{r_{11}X_i + r_{12}Y_i + r_{13}Z_i + t_x}{r_{31}X_i + r_{32}Y_i + r_{33}Z_i + t_z},\qquad v_i = v_0 + a_y\frac{r_{21}X_i + r_{22}Y_i + r_{23}Z_i + t_y}{r_{31}X_i + r_{32}Y_i + r_{33}Z_i + t_z} \tag{1}$$
where $a_x = f/d_x$, $a_y = f/d_y$ and $f$ is the camera focal distance. In addition, $d_x$ and $d_y$ are the pixel sizes in the horizontal and vertical directions; $(u_0, v_0)$ is the optical center point of the image; and $R$, $T$ are the rotation matrix and translation vector, respectively, which relate the world coordinate system to the camera coordinate system. Furthermore, $r_{ij}$ is the element in the $i$-th row and $j$-th column of $R$, and $t_x$, $t_y$, $t_z$ are the elements of $T$. For simplicity, the rotation matrix is represented by the Euler angles $\psi$, $\theta$, $\phi$, which represent the rotations around the $x$, $y$ and $z$ axes, respectively.
Owing to the influence of image noise and distortion, the calibration parameters, image points and space points do not completely conform to Equation (1). Thus, we have:
$$\Delta u_i = u_i - u_0 - a_x\frac{r_{11}X_i + r_{12}Y_i + r_{13}Z_i + t_x}{r_{31}X_i + r_{32}Y_i + r_{33}Z_i + t_z},\qquad \Delta v_i = v_i - v_0 - a_y\frac{r_{21}X_i + r_{22}Y_i + r_{23}Z_i + t_y}{r_{31}X_i + r_{32}Y_i + r_{33}Z_i + t_z} \tag{2}$$

$$\min f(u,v) = \sum_{i=1}^{n}\left(\Delta u_i^2 + \Delta v_i^2\right) \tag{3}$$
Equation (2) describes the calculation of the image re-projection errors. Small re-projection errors are desired, meaning that the calibration results satisfy Equation (3). However, the question arises of whether a small image re-projection error leads to high calibration accuracy of the camera parameters. An analysis of the relationship between the image re-projection error and the calibration accuracy is therefore required.
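To make the model concrete, the following is a minimal NumPy sketch of the projection of Equation (1) and the re-projection objective of Equations (2) and (3). The function names are ours, chosen for illustration; they are not from the paper.

```python
import numpy as np

def project(P, ax, ay, u0, v0, R, T):
    """Project 3D points P (N x 3) to pixels with the pinhole model of Equation (1)."""
    Pc = P @ R.T + T                        # world -> camera coordinates
    u = u0 + ax * Pc[:, 0] / Pc[:, 2]       # Equation (1), first line
    v = v0 + ay * Pc[:, 1] / Pc[:, 2]       # Equation (1), second line
    return np.column_stack([u, v])

def reprojection_error(uv, P, ax, ay, u0, v0, R, T):
    """Sum of squared residuals, i.e. f(u, v) in Equation (3)."""
    duv = uv - project(P, ax, ay, u0, v0, R, T)   # (Δu_i, Δv_i) of Equation (2)
    return np.sum(duv ** 2)
```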

2.2. Error Coupling Analysis

The two equations in Equation (2) have the same form; we thus analyze the first, and all conclusions apply equally to the second. We assume that $u_0$ has error $\Delta u_0$ and that $t_x$ has error $\Delta t_x$. From Equation (2), we have:
$$\Delta u_i' = u_i - (u_0 + \Delta u_0) - a_x\frac{r_{11}X_i + r_{12}Y_i + r_{13}Z_i + (t_x + \Delta t_x)}{r_{31}X_i + r_{32}Y_i + r_{33}Z_i + t_z} = \Delta u_i - \left(\Delta u_0 + \frac{a_x\,\Delta t_x}{r_{31}X_i + r_{32}Y_i + r_{33}Z_i + t_z}\right) \tag{4}$$
When $\Delta t_x = -(r_{31}X_i + r_{32}Y_i + r_{33}Z_i + t_z)\,\Delta u_0/a_x$, we have $\Delta u_i' = \Delta u_i$, and the image re-projection error of that point does not change. However, because the space point coordinate values differ from point to point, the re-projection errors of the other image points do change. We assume that:
$$\Delta t_x = -\frac{r_{31}X_1 + r_{32}Y_1 + r_{33}Z_1 + t_z}{a_x}\,\Delta u_0 = -\frac{r_{31}X_i + r_{32}Y_i + r_{33}Z_i + t_z}{a_x}\,\Delta u_0 - \frac{r_{31}\,dX_i + r_{32}\,dY_i + r_{33}\,dZ_i}{a_x}\,\Delta u_0 \tag{5}$$
where $dX_i = X_1 - X_i$, $dY_i = Y_1 - Y_i$ and $dZ_i = Z_1 - Z_i$, and the image re-projection errors are given by:
$$\Delta u_i' = \Delta u_i + \Delta u_i'' \quad\text{with}\quad \Delta u_i'' = \frac{r_{31}\,dX_i + r_{32}\,dY_i + r_{33}\,dZ_i}{r_{31}X_i + r_{32}Y_i + r_{33}Z_i + t_z}\,\Delta u_0 \tag{6}$$
Based on Equation (6), if $dZ_i$ is very small and $t_z$ is very large, the change in the image re-projection errors is very small; the coupling of $\Delta u_0$ and $\Delta t_x$ has absorbed the parameter error without increasing the re-projection error. To reduce this coupling effect, better results can be obtained if the 3D template is large and close to the camera. In practice, however, the template size is limited, and the calibration distance is limited by the field of view and the lens parameters.
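A numeric illustration of this coupling, reusing `project()` and `reprojection_error()` from the sketch above. The intrinsic values are those of the simulation in Section 3.1; the template grid, $t_z$ and the injected error magnitudes are our assumptions. Injecting a principal-point error $\Delta u_0$ together with the compensating $\Delta t_x$ of Equation (5) changes the residuals far less than a 5-pixel offset alone would.

```python
import numpy as np

rng = np.random.default_rng(0)
ax = ay = 1454.5
u0, v0 = 700.0, 512.0
R = np.eye(3)                                # Euler angles zero, as in the analysis
T = np.array([-0.25, -0.25, 2.0])            # tz = 2 m (assumed)

# An assumed 0.7 m x 0.7 m x 0.3 m template grid (world coordinates).
gx = np.linspace(-0.35, 0.35, 8)
gz = np.linspace(0.0, 0.3, 3)
X, Y, Z = np.meshgrid(gx, gx, gz)
P = np.column_stack([X.ravel(), Y.ravel(), Z.ravel()])
uv = project(P, ax, ay, u0, v0, R, T) + rng.normal(0.0, 0.1, (len(P), 2))

e0 = reprojection_error(uv, P, ax, ay, u0, v0, R, T)

du0 = 5.0                                    # inject a 5-pixel error in u0
D1 = P[0] @ R[2] + T[2]                      # r31*X1 + r32*Y1 + r33*Z1 + tz
dtx = -D1 / ax * du0                         # compensating Δtx from Equation (5)
e1 = reprojection_error(uv, P, ax, ay, u0 + du0, v0, R,
                        T + np.array([dtx, 0.0, 0.0]))
print(e0, e1)  # e1 stays close to e0: the two errors largely cancel
```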
With respect to the focal distance and translation vector, we assume that $a_x$ has error $\Delta a_x$ and $t_z$ has error $\Delta t_z$. From Equation (2), we have:
$$\Delta u_i' = u_i - u_0 - (a_x + \Delta a_x)\,\frac{C_i}{B_i + \Delta t_z} \tag{7}$$
where $B_i = r_{31}X_i + r_{32}Y_i + r_{33}Z_i + t_z$ and $C_i = r_{11}X_i + r_{12}Y_i + r_{13}Z_i + t_x$. We assume that $\Delta a_x/\Delta t_z = a_x/B_1$, and we obtain $\Delta u_1' = \Delta u_1$. The other image re-projection errors are:
$$\Delta u_i' = \Delta u_i + \Delta u_i'' \quad\text{with}\quad \Delta u_i'' = -\frac{a_x C_i\,\Delta t_z}{B_1(B_i + \Delta t_z)} \approx -\frac{a_x C_i}{B_i}\cdot\frac{\Delta t_z}{B_1} \tag{8}$$
From Equation (1), we have $a_x C_i/B_i \approx u_i - u_0$. We assume that the Euler angles of the world coordinate system relative to the camera coordinate system are zero. By setting the first space point $Z_1 = 1$, we have:
$$\Delta u_i'' = -\frac{u_i - u_0}{t_z}\,\Delta t_z \tag{9}$$
Based on Equation (9), if $(u_i - u_0)$ is small and $t_z$ is large, the change in the image re-projection error is very small, because the coupling of $\Delta a_x$ and $\Delta t_z$ reduces the image re-projection error. To reduce the coupling effect, better results can be obtained if the 3D template is closer to the camera. As with distortion, the coupling error between the camera focal distance and the translation vector is related to the image point location; therefore, under the influence of distortion, this coupling error may increase the calibration error.
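The analogous check for the $\Delta a_x$/$\Delta t_z$ coupling, continuing from the variables of the sketch above; the compensation ratio $\Delta a_x/\Delta t_z = a_x/B_1$ follows the assumption made before Equation (8), and compensating $a_y$ in the same way is our addition so that both image coordinates stay consistent.

```python
dtz = 0.02                                   # inject a 20 mm error in tz
B1 = P[0] @ R[2] + T[2]                      # B1 = r31*X1 + r32*Y1 + r33*Z1 + tz
dax = ax * dtz / B1                          # compensating focal errors
day = ay * dtz / B1
e2 = reprojection_error(uv, P, ax + dax, ay + day, u0, v0, R,
                        T + np.array([0.0, 0.0, dtz]))
print(e0, e2)  # again close to e0: focal distance and tz trade off
```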
The coupling error is also present in the geometric model. The camera model is also called a pinhole model; that is, the space point, the image point and the camera origin lie on the same line. Figure 2 shows the coupling between the optical center point of the image and the translation vector. When the optical center point of the image, $O_c$, is offset, the origin of the camera coordinate system shifts. To continue fitting the pinhole model, the space point $P_i$ must produce an offset in the camera coordinate system. Because the space point is fixed in the world coordinate system, the relationship between the world coordinate system and the camera coordinate system changes.
Figure 3 shows the coupling between the camera focal distance and the translation vector. The camera focal distance is calibrated from the relative distances between multiple space points. When the camera focal distance changes, that is, when the origin of the camera coordinate system, $O_c$, moves to $O_c'$, the world coordinate system moves along the direction of the camera's optical axis, because the distance between $P_1$ and $P_2$ is fixed. When the Euler angles are zero, this movement along the optical axis corresponds to $t_z$.
In summary, under the influence of image noise and distortion, when the template size, the number of feature points and the relative distance between the template and camera are fixed, coupling errors occur between the camera's intrinsic and extrinsic parameters that affect the calibration parameter accuracy. Therefore, to improve the accuracy of the calibration parameters, it is necessary to remove the parameter coupling errors.

3. Improved Direct Linear Transformation

3.1. Direct Linear Transformation

The DLT algorithm dates to the work of [4] or earlier. In [13,22], the authors analyzed the definition of the world coordinate system and proposed the use of data normalization to reduce the noise impact on the camera calibration parameters. In [10,23], the authors proposed combining the DLT algorithm with an optimization algorithm to address the camera distortion models.
The DLT algorithm consists of two steps: (1) homogeneous equation solving; and (2) parameter factorization. Space point P and its image p are related by homography M:
$$s\,p = MP \quad\text{with}\quad M = K[R\,|\,T] \tag{10}$$
where $K$ is the camera intrinsic matrix and $m_{ij}$ is the element in the $i$-th row and $j$-th column of $M$. The 12 unknown entries of $M_{3\times4}$ are stacked into a vector $M'_{12\times1}$. From Equation (10), we have:
$$AM' = 0 \quad\text{with}\quad \|M'\| = C \tag{11}$$

where $C$ is a constant, and:
$$A = \begin{bmatrix} P^T & 0^T & -uP^T \\ 0^T & P^T & -vP^T \end{bmatrix} \tag{12}$$
Because Equation (11) is a homogeneous system, we set $m_{34}$ equal to one. The matrix $M$ can then be solved by singular value decomposition, and the intrinsic and extrinsic parameters of the camera are obtained through the following factorization:
$$t_z = 1\big/\sqrt{m_{31}^2 + m_{32}^2 + m_{33}^2} \tag{13}$$

$$u_0 = t_z^2\,(m_{11}m_{31} + m_{12}m_{32} + m_{13}m_{33}) \tag{14}$$

$$v_0 = t_z^2\,(m_{21}m_{31} + m_{22}m_{32} + m_{23}m_{33}) \tag{15}$$

$$a_x = t_z^2\sqrt{(m_{12}m_{33} - m_{13}m_{32})^2 + (m_{11}m_{33} - m_{13}m_{31})^2 + (m_{11}m_{32} - m_{12}m_{31})^2} \tag{16}$$

$$a_y = t_z^2\sqrt{(m_{22}m_{33} - m_{23}m_{32})^2 + (m_{21}m_{33} - m_{23}m_{31})^2 + (m_{21}m_{32} - m_{22}m_{31})^2} \tag{17}$$

$$t_x = t_z\,(m_{14} - u_0)/a_x \tag{18}$$

$$t_y = t_z\,(m_{24} - v_0)/a_y \tag{19}$$

$$\begin{aligned}
r_{11} &= t_z(m_{11} - u_0 m_{31})/a_x, & r_{12} &= t_z(m_{12} - u_0 m_{32})/a_x, & r_{13} &= t_z(m_{13} - u_0 m_{33})/a_x,\\
r_{21} &= t_z(m_{21} - v_0 m_{31})/a_y, & r_{22} &= t_z(m_{22} - v_0 m_{32})/a_y, & r_{23} &= t_z(m_{23} - v_0 m_{33})/a_y,\\
r_{31} &= t_z m_{31}, & r_{32} &= t_z m_{32}, & r_{33} &= t_z m_{33}
\end{aligned} \tag{20}$$
Accordingly, given at least six non-coplanar feature points and their corresponding image points, the camera focal distance, optical center point, rotation matrix and translation vector can be solved according to Equations (11)–(20).
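The following is a sketch of the two DLT steps in the notation above: build $A$, solve the homogeneous system of Equation (11) by SVD, and factorize $M$ via Equations (13)–(20). It assumes a camera in front of the points ($t_z > 0$) and is our illustration, not the authors' implementation.

```python
import numpy as np

def dlt_calibrate(P, uv):
    """DLT from N >= 6 non-coplanar world points P (N x 3) and pixels uv (N x 2)."""
    N = len(P)
    Ph = np.hstack([P, np.ones((N, 1))])                # homogeneous points
    A = np.zeros((2 * N, 12))
    A[0::2, 0:4] = Ph                                   # row [P^T  0^T  -u P^T], Equation (12)
    A[0::2, 8:12] = -uv[:, [0]] * Ph
    A[1::2, 4:8] = Ph                                   # row [0^T  P^T  -v P^T]
    A[1::2, 8:12] = -uv[:, [1]] * Ph
    _, _, Vt = np.linalg.svd(A)                         # AM' = 0, Equation (11)
    M = Vt[-1].reshape(3, 4)
    M = M / M[2, 3]                                     # enforce m34 = 1
    tz = 1.0 / np.linalg.norm(M[2, :3])                 # Equation (13)
    u0 = tz**2 * (M[0, :3] @ M[2, :3])                  # Equation (14)
    v0 = tz**2 * (M[1, :3] @ M[2, :3])                  # Equation (15)
    ax = tz**2 * np.linalg.norm(np.cross(M[0, :3], M[2, :3]))   # Equation (16)
    ay = tz**2 * np.linalg.norm(np.cross(M[1, :3], M[2, :3]))   # Equation (17)
    tx = tz * (M[0, 3] - u0) / ax                       # Equation (18)
    ty = tz * (M[1, 3] - v0) / ay                       # Equation (19)
    r1 = tz * (M[0, :3] - u0 * M[2, :3]) / ax           # Equation (20)
    r2 = tz * (M[1, :3] - v0 * M[2, :3]) / ay
    r3 = tz * M[2, :3]
    return ax, ay, u0, v0, np.vstack([r1, r2, r3]), np.array([tx, ty, tz])
```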
However, with the influence of image noise and distortion, mij contains errors. From Equations (18) and (19), we have:
$$t_x + \Delta t_x = \frac{(t_z + \Delta t_z)(m_{14} + \Delta m_{14} - u_0 - \Delta u_0)}{a_x + \Delta a_x},\qquad t_y + \Delta t_y = \frac{(t_z + \Delta t_z)(m_{24} + \Delta m_{24} - v_0 - \Delta v_0)}{a_y + \Delta a_y} \tag{21}$$
where $\Delta t_x$, $\Delta t_z$, $\Delta m_{14}$, $\Delta u_0$, $\Delta a_x$, $\Delta t_y$, $\Delta m_{24}$, $\Delta v_0$ and $\Delta a_y$ are errors. When $t_x \approx t_z(m_{14} - u_0)/(a_x + \Delta a_x)$ and $t_y \approx t_z(m_{24} - v_0)/(a_y + \Delta a_y)$, we have:
$$\begin{aligned}
\Delta t_x &= \frac{t_z(\Delta m_{14} - \Delta u_0)}{a_x + \Delta a_x} + \frac{\Delta t_z(m_{14} - u_0)}{a_x + \Delta a_x} + \frac{\Delta t_z(\Delta m_{14} - \Delta u_0)}{a_x + \Delta a_x}\\
\Delta t_y &= \frac{t_z(\Delta m_{24} - \Delta v_0)}{a_y + \Delta a_y} + \frac{\Delta t_z(m_{24} - v_0)}{a_y + \Delta a_y} + \frac{\Delta t_z(\Delta m_{24} - \Delta v_0)}{a_y + \Delta a_y}
\end{aligned} \tag{22}$$
In Equation (22), the third term is much smaller than the others and can be ignored. When $t_z \gg t_x$ and $\Delta u_0 \gg \Delta m_{14}$, we have:
$$\Delta t_x = -t_z\,\Delta u_0/a_x \tag{23}$$
At the same time, when $t_z \gg t_y$ and $\Delta v_0 \gg \Delta m_{24}$, we have:
$$\Delta t_y = -t_z\,\Delta v_0/a_y \tag{24}$$
With respect to the coupling error between the focal distance and translation vector, to simplify the analysis model, we assume that the Euler angles of the world coordinate system relative to the camera coordinate system are zero. Then, we have:
$$M = \begin{bmatrix} a_x/t_z & 0 & u_0/t_z & (t_x a_x + t_z u_0)/t_z \\ 0 & a_y/t_z & v_0/t_z & (t_y a_y + t_z v_0)/t_z \\ 0 & 0 & 1/t_z & 1 \end{bmatrix} \tag{25}$$
From Equation (13), we have:
$$t_z = 1/m_{33} \tag{26}$$
From Equations (16) and (17), we have:
$$a_x = t_z^2\,m_{11}m_{33} = m_{11}t_z,\qquad a_y = t_z^2\,m_{22}m_{33} = m_{22}t_z \tag{27}$$
Owing to the influence of image noise and distortion, M contains errors. From Equation (27), we thus have:
$$\begin{aligned}
a_x + \Delta a_x &= (m_{11} + \Delta m_{11})(t_z + \Delta t_z) = m_{11}t_z + m_{11}\Delta t_z + \Delta m_{11}t_z + \Delta m_{11}\Delta t_z\\
a_y + \Delta a_y &= (m_{22} + \Delta m_{22})(t_z + \Delta t_z) = m_{22}t_z + m_{22}\Delta t_z + \Delta m_{22}t_z + \Delta m_{22}\Delta t_z
\end{aligned} \tag{28}$$
where $\Delta a_x$, $\Delta t_z$, $\Delta m_{11}$, $\Delta a_y$ and $\Delta m_{22}$ are errors. The terms $\Delta m_{11}\Delta t_z$ and $\Delta m_{22}\Delta t_z$ are much smaller than the others and can be ignored. Then, we have:
$$\Delta a_x = m_{11}\Delta t_z + \Delta m_{11}t_z,\qquad \Delta a_y = m_{22}\Delta t_z + \Delta m_{22}t_z \tag{29}$$
On account of $m_{11}\Delta t_z \gg \Delta m_{11}t_z$ and $m_{22}\Delta t_z \gg \Delta m_{22}t_z$, we have:
$$\Delta a_x = m_{11}\Delta t_z,\qquad \Delta a_y = m_{22}\Delta t_z \tag{30}$$
Equations (23), (24) and (30) show that a linear relationship exists between the calibration parameter errors after ignoring some minor errors. We use a simulation to illustrate the size of some of the ignored minor errors. In the simulation, we set $a_x = 1454.5$, $a_y = 1454.5$, $u_0 = 700$ pixels and $v_0 = 512$ pixels; moreover, the size of the pattern is 0.7 m × 0.7 m × 0.3 m. The relationship of the world coordinate system relative to the camera coordinate system is represented by R = [20, 20, 20] (°) and T = [−0.25, −0.25, tz] (mm). Gaussian noise with a zero mean and a 0.1-pixel standard deviation is added to the projected image points. The analysis of Equations (23) and (24) is shown in Figure 4; the difference between $\Delta t_x$, $\Delta t_y$ and the results of Equations (23) and (24) is small. The analysis of Equation (30) is shown in Figure 5. Because the Euler angles are not equal to 0°, $\Delta a_x$, $\Delta a_y$ and the result of Equation (30) differ; however, the difference is small.
Under the influence of image noise and lens distortion, the $M$ matrix of the DLT algorithm contains errors, and these errors further affect the accuracy of the calibration parameters. The above analysis shows that, after ignoring some minor errors, linear coupling relationships exist between $\Delta t_x$ and $\Delta u_0$, between $\Delta t_y$ and $\Delta v_0$, and between $\Delta a_x$, $\Delta a_y$ and $\Delta t_z$.

3.2. Improved Direct Linear Transformation

Although a linear relationship exists between the intrinsic and extrinsic parameter errors, all of these errors are unknown, so the specific error values cannot be solved directly. We assume that the 3D template moves only along the z axis. Let $a_x'$, $a_y'$, $u_0'$, $v_0'$, $t_x'$, $t_y'$ and $t_{z1}'$ be the calibration values of the DLT algorithm before the z-axis translation, and let $a_x''$, $a_y''$, $u_0''$, $v_0''$, $t_x''$, $t_y''$ and $t_{z2}''$ be the calibration values of the DLT algorithm after the z-axis translation. From Equations (23) and (24), we have:
$$\Delta t_x' = t_x' - t_x = -t_{z1}(u_0' - u_0)/a_x,\qquad \Delta t_x'' = t_x'' - t_x = -t_{z2}(u_0'' - u_0)/a_x \tag{31}$$
$$\Delta t_y' = t_y' - t_y = -t_{z1}(v_0' - v_0)/a_y,\qquad \Delta t_y'' = t_y'' - t_y = -t_{z2}(v_0'' - v_0)/a_y \tag{32}$$
where $a_x$, $a_y$, $u_0$, $v_0$, $t_x$, $t_y$, $t_{z1}$ and $t_{z2}$ are the true values. Because the translation occurs only along the z axis, we set $t_{z2} = n\,t_{z1}$. From Equations (31) and (32), we have:
$$\frac{t_x'' - t_x'}{n-1} = -\frac{t_{z1}}{a_x}\left(\frac{n\,u_0'' - u_0'}{n-1} - u_0\right) \tag{33}$$
$$\frac{t_y'' - t_y'}{n-1} = -\frac{t_{z1}}{a_y}\left(\frac{n\,v_0'' - v_0'}{n-1} - v_0\right) \tag{34}$$
From Equation (33), a linear relationship exists between $(n\,u_0'' - u_0')/(n-1)$ and $(t_x'' - t_x')/(n-1)$. By repeatedly moving the 3D template, $u_0$ can be solved with a linear least-squares fit, as sketched below; similarly, $v_0$ can be solved by Equation (34).
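A sketch of this fit, assuming DLT has been run at each template position (e.g., with `dlt_calibrate` above) and that the ratio $n$ is taken from the measured $t_z$ values; both of these choices are ours, not prescribed by the paper.

```python
import numpy as np

def fit_u0(u0_est, tx_est, tz_est):
    """u0_est, tx_est, tz_est: per-position DLT estimates (first entry = reference)."""
    u0_est, tx_est, tz_est = map(np.asarray, (u0_est, tx_est, tz_est))
    n = tz_est[1:] / tz_est[0]                       # n = tz2/tz1 for each move (assumed)
    x = (n * u0_est[1:] - u0_est[0]) / (n - 1)       # (n u0'' - u0')/(n - 1)
    y = (tx_est[1:] - tx_est[0]) / (n - 1)           # (tx'' - tx')/(n - 1)
    slope, intercept = np.polyfit(x, y, 1)           # Equation (33): y = -(tz1/ax)(x - u0)
    return -intercept / slope                        # the x-intercept recovers u0
```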
From Equations (31) and (32), we have:
$$\frac{n\,t_x' - t_x''}{n-1} = t_x - \frac{t_{z1}}{a_x}\cdot\frac{n\,(u_0' - u_0'')}{n-1} \tag{35}$$
$$\frac{n\,t_y' - t_y''}{n-1} = t_y - \frac{t_{z1}}{a_y}\cdot\frac{n\,(v_0' - v_0'')}{n-1} \tag{36}$$
A linear relationship exists between $n\,(u_0' - u_0'')/(n-1)$ and $(n\,t_x' - t_x'')/(n-1)$, and between $n\,(v_0' - v_0'')/(n-1)$ and $(n\,t_y' - t_y'')/(n-1)$. Thus, $t_x$ and $t_y$ can be solved by Equations (35) and (36).
From Equation (30), we have:
$$a_x' - a_x = (r_{11}a_x + r_{31}u_0)\,\frac{t_{z1}' - t_{z1}}{t_{z1}},\qquad a_x'' - a_x = (r_{11}a_x + r_{31}u_0)\,\frac{t_{z2}'' - t_{z2}}{t_{z2}} \tag{37}$$
From Equation (37), we have:
$$(r_{11}a_x + r_{31}u_0)\left((t_{z2}'' - t_{z1}') - (t_{z2} - t_{z1})\right) - \left(t_{z2}\,a_x'' - t_{z1}\,a_x' - a_x\,(t_{z2} - t_{z1})\right) = 0 \tag{38}$$
We set $d_{z1} = t_{z2} - t_{z1}$. We then obtain:
$$(r_{11}a_x + r_{31}u_0)\,\frac{t_{z2}'' - t_{z1}' - d_{z1}}{d_{z1}} - t_{z1}\,\frac{a_x'' - a_x'}{d_{z1}} - a_x'' + a_x = 0 \tag{39}$$
Because $(a_x'' - a_x')/d_{z1}$ is small, we use $t_{z1}'$ to replace $t_{z1}$. Then, we have:
$$(r_{11}a_x + r_{31}u_0)\,\frac{t_{z2}'' - t_{z1}' - d_{z1}}{d_{z1}} - \left(t_{z1}'\,\frac{a_x'' - a_x'}{d_{z1}} + a_x''\right) + a_x = 0 \tag{40}$$
A linear relationship exists between $(t_{z2}'' - t_{z1}' - d_{z1})/d_{z1}$ and $t_{z1}'(a_x'' - a_x')/d_{z1} + a_x''$. By repeatedly moving the 3D template, $a_x$ can be solved with a linear least-squares fit.
With respect to $t_{z1}$, from Equation (37), we have:

$$\frac{\Delta a_x'}{\Delta a_x''} = \frac{n\,(t_{z1}' - t_{z1})}{t_{z2}'' - n\,t_{z1}} \tag{41}$$
Then,
$$\Delta a_x'\,t_{z2}'' - n\,\Delta a_x''\,t_{z1}' = t_{z1}\,(n\,\Delta a_x' - n\,\Delta a_x'') \tag{42}$$
A linear relationship exists between $\Delta a_x'\,t_{z2}'' - n\,\Delta a_x''\,t_{z1}'$ and $n\,\Delta a_x' - n\,\Delta a_x''$. By repeatedly moving the template, $t_{z1}$ can be solved with a linear least-squares fit.
For $a_y$, we have:

$$(r_{22}a_y + r_{32}v_0)\,\frac{t_{z2}'' - t_{z1}' - d_{z1}}{d_{z1}} - \left(t_{z1}'\,\frac{a_y'' - a_y'}{d_{z1}} + a_y''\right) + a_y = 0 \tag{43}$$

$$\Delta a_y'\,t_{z2}'' - n\,\Delta a_y''\,t_{z1}' = t_{z1}\,(n\,\Delta a_y' - n\,\Delta a_y'') \tag{44}$$
A linear relationship exists between $(t_{z2}'' - t_{z1}' - d_{z1})/d_{z1}$ and $t_{z1}'(a_y'' - a_y')/d_{z1} + a_y''$, and between $\Delta a_y'\,t_{z2}'' - n\,\Delta a_y''\,t_{z1}'$ and $n\,\Delta a_y' - n\,\Delta a_y''$. By repeatedly moving the template, $a_y$ and $t_{z1}$ can be solved with linear least-squares fits.
In sum, the 3D template is translated along the z axis, and the DLT algorithm is performed at each location. From the DLT results, $u_0$, $v_0$, $t_x$, $t_y$, $a_x$, $a_y$ and $t_{z1}$ can then be solved with linear least-squares fits by Equations (33)–(36), (40) and (42)–(44), as sketched below.
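A sketch of the fits for $a_x$ and $t_{z1}$ per Equations (40) and (42), under the same assumptions as the `fit_u0` sketch: per-position DLT estimates are available, `dz` holds the known z translations from the first position to each later one, and $n$ is taken from the measured $t_z$ ratio.

```python
import numpy as np

def fit_ax_tz1(ax_est, tz_est, dz):
    """ax_est, tz_est: per-position DLT estimates; dz: known z translations
    from the first position to each later one (e.g. k * 0.1 m)."""
    ax_est, tz_est, dz = map(np.asarray, (ax_est, tz_est, dz))
    # Equation (40): plotting x against y gives a line whose intercept is ax.
    x = (tz_est[1:] - tz_est[0] - dz) / dz
    y = tz_est[0] * (ax_est[1:] - ax_est[0]) / dz + ax_est[1:]
    slope, ax_true = np.polyfit(x, y, 1)
    # Equation (42), with n from the measured tz ratio (an assumption):
    n = tz_est[1:] / tz_est[0]
    da1, da2 = ax_est[0] - ax_true, ax_est[1:] - ax_true
    lhs = da1 * tz_est[1:] - n * da2 * tz_est[0]
    rhs = n * da1 - n * da2
    tz1 = float(lhs @ rhs) / float(rhs @ rhs)   # least-squares slope through the origin
    return ax_true, tz1
```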

4. Experimental Section

The simulation and physical experiment parameters were set as follows. The camera focal length was 12 mm. The image resolution was 1400 × 1024. The pixel size was 7.4 μm × 7.4 μm.

4.1. Simulation Experiment

The 3D template measured 0.7 m × 0.7 m × 0.3 m and contained a pattern of 8 × 8 × 3 points. The rotation matrix and translation vector were R = [10, 10, 10] (°) and T = [−0.35, −0.35, tz] (m), with tz = 1.2–3.2 m. Gaussian noise with a zero mean and a 0.01–0.5-pixel standard deviation was added to the projected image points. The 3D template was moved 0.1 m at a time, and the IDLT algorithm was computed after 18 movements. A total of 100 independent tests was performed for each noise level to obtain the mean and standard deviation of the parameter errors. The error mean plus three times the standard deviation was used to represent the calibration error.
Because the calibration error became larger as tz increased, we only analyzed the results at $t_z = 1.2$ m. The results of the DLT and IDLT calibration are shown in Figure 6. For $u_0$ and $v_0$, the errors of IDLT are less than 10% of the errors of DLT. The errors in $t_x$ and $t_y$ are less than 0.1 mm. For $a_x$, $a_y$ and $t_z$, the errors of IDLT are less than 60% of the errors of DLT.
Figure 7 shows the calibration results with noise and distortion. Gaussian noise with a zero mean and a 0.1-pixel standard deviation is added to the projected image points. Image distortion maps the ideal image points $(u_i, v_i)$ to the real image points $(u_i', v_i')$ with $k = 2.915 \times 10^{-9}$ to $2.915 \times 10^{-8}$ (the image distortion is 1–10 pixels at (1400, 0) when $u_0 = 700$):
$$u_i' = u_i + (u_i - u_0)\,k r^2,\qquad v_i' = v_i + (v_i - v_0)\,k r^2 \tag{45}$$

where $r$ is the radial distance of the image point from the optical center.
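For reference, a sketch of how Equation (45) can be applied to ideal image points; taking $r$ as the radial distance to the optical center is our reading of the model.

```python
import numpy as np

def distort(uv, u0, v0, k):
    """Apply the radial distortion of Equation (45) to ideal points uv (N x 2)."""
    du, dv = uv[:, 0] - u0, uv[:, 1] - v0
    r2 = du**2 + dv**2                       # squared radius about (u0, v0)
    return np.column_stack([uv[:, 0] + du * k * r2,
                            uv[:, 1] + dv * k * r2])
```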
Owing to the influence of image distortion, the calibration errors of DLT are relatively large, whereas the calibration errors of IDLT are relatively small. In particular, the errors of $u_0$ and $v_0$ are less than 0.7 pixels, whereas the maximum error of DLT is 44 pixels. The errors of $a_x$ and $a_y$ are less than 3. Because the pixels are square, the errors of $a_x$ and $a_y$ have the same form. The error in $t_z$ is larger than that in $t_x$ and $t_y$, mainly because $t_z$ itself is larger than $t_x$ and $t_y$.

4.2. Physical Experiment

We used a light-emitting diode (LED) as the space point, fixed on a coordinate measuring machine, as shown in Figure 8. The plane measured 0.7 m × 0.7 m and contained a pattern of 8 × 8 points. There were 20 planes, and the data from three of these planes were used for one DLT calculation. The DLT results are shown in Table 1. The fluctuations in the Euler angle errors are less than 0.06°; they are not very volatile. The most volatile values are $t_y$ and $v_0$: the fluctuation in $t_y$ is less than 28.03 mm, and the fluctuation in $v_0$ is less than 2.59 pixels. The results of the IDLT algorithm are shown in Table 2, where the calibration parameter values differ from those of DLT.
To compare the calibration accuracy of the two algorithms, the results of IDLT and the first set of DLT data were used to calculate the re-projection errors of the 20 planes. Each plane's image error was described by the mean plus three times the standard deviation of the image point re-projection errors. The results are shown in Figure 9. The calculation errors of DLT increase with the plane number, which shows that the DLT calibration results are optimal for some planes but not for all planes. The calibration errors of IDLT decrease with the plane number because the distance between the image points and the optical center of the image is small there, so the effect of distortion is small.

5. Conclusions

Based on a camera model, the DLT algorithm uses linear equations to calculate the intrinsic and extrinsic camera parameters. Because the camera's intrinsic and extrinsic parameters are calibrated simultaneously, the coupling error of the calibration parameters affects the calibration accuracy. In this paper, we analyzed the principles of intrinsic and extrinsic parameter error coupling, determined a linear coupling relationship between the intrinsic parameter calibration errors and the translation vector (extrinsic parameter) calibration errors, and proposed the IDLT algorithm, which uses this linear coupling relationship to calculate the calibration parameters of the camera. The results of simulations and experiments show that, with noise and distortion, the calibration parameter errors of the IDLT algorithm are significantly smaller than those of DLT.

Acknowledgments

This work was supported by the National Science Foundation of China (Grant No. 51075095) and the Natural Science Foundation of Heilongjiang Province (Grant No. E201045).

Author Contributions

Zhenqing Zhao proposed the method and wrote the manuscript. Xin Zhang performed the data analysis. Dong Ye contributed to the conception of this study. Gang Chen and Bin Zhang helped perform the analysis with constructive discussions.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Chen, I.H.; Wang, S.J. An efficient approach for dynamic calibration of multiple cameras. IEEE Trans. Autom. Sci. Eng. 2009, 6, 187–194.
2. Ly, D.S.; Demonceaux, C.; Vasseur, P.; Pegard, C. Extrinsic calibration of heterogeneous cameras by line images. Mach. Vis. Appl. 2014, 25, 1601–1614.
3. Peng, E.; Li, L. Camera calibration using one-dimensional information and its applications in both controlled and uncontrolled environments. Pattern Recogn. 2010, 43, 1188–1198.
4. Abdel-Aziz, Y.I.; Karara, H.M. Direct linear transformation from comparator coordinates into object space coordinates in close-range photogrammetry. In Proceedings of the Symposium on Close-Range Photogrammetry, University of Illinois at Urbana-Champaign, Champaign, IL, USA, 1971; pp. 1–18.
5. Tsai, R.Y. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Robot. Autom. 1987, 3, 323–344.
6. Zhang, Z.Y. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
7. Chen, G.; Guo, Y.B.; Wang, H.P.; Ye, D.; Gu, Y.F. Stereo vision sensor calibration based on random spatial points given by CMM. Optik 2012, 123, 731–734.
8. Samper, D.; Santolaria, J.; Brosed, F.J.; Majarena, A.C.; Aguilar, J.J. Analysis of Tsai calibration method using two- and three-dimensional calibration objects. Mach. Vis. Appl. 2013, 24, 117–131.
9. Sun, W.; Cooperstock, J.R. An empirical evaluation of factors influencing camera calibration accuracy using three publicly available techniques. Mach. Vis. Appl. 2006, 17, 51–67.
10. Huang, J.H.; Wang, Z.; Xue, Q.; Gao, J.M. Calibration of a camera projector measurement system and error impact analysis. Meas. Sci. Technol. 2012, 23, 125402.
11. Zhou, F.Q.; Cui, Y.; Wang, Y.X.; Liu, L.; Gao, H. Accurate and robust estimation of camera parameters using RANSAC. Opt. Laser Eng. 2013, 51, 197–212.
12. Ricolfe-Viala, C.; Sanchez-Salmeron, A. Camera calibration under optimal conditions. J. Opt. Soc. Am. 2011, 19, 10769–10775.
13. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press: Cambridge, UK, 2000; pp. 110–115.
14. Batista, J.; Araujo, H.; Almeida, A.T. Iterative multistep explicit camera calibration. IEEE Trans. Robot. Autom. 1999, 15, 897–917.
15. Ricolfe-Viala, C.; Sanchez-Salmeron, A.; Valera, A. Efficient lens distortion correction for decoupling in calibration of wide angle lens cameras. IEEE Sens. J. 2013, 13, 854–863.
16. Guillemaut, J.Y.; Aguado, A.S.; Illingworth, J. Using points at infinity for parameter decoupling in camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 265–270.
17. Caprile, B.; Torre, V. Using vanishing points for camera calibration. Int. J. Comput. Vis. 1990, 4, 127–140.
18. He, B.W.; Zhou, X.L.; Li, Y.F. A new camera calibration method from vanishing points in a vision system. Trans. Inst. Meas. Control 2011, 33, 806–822.
19. Devernay, F.; Faugeras, O. Straight lines have to be straight. Mach. Vis. Appl. 2001, 13, 14–24.
20. Zhang, G.J.; He, J.J.; Yang, X.M. Calibrating camera radial distortion with cross-ratio invariability. Opt. Laser Technol. 2003, 35, 457–461.
21. Li, D.D.; Wen, G.J.; Hui, B.W.; Qiu, S.H.; Wang, W.F. Cross-ratio invariant based line scan camera geometric calibration with static linear data. Opt. Laser Eng. 2014, 62, 119–125.
22. Hartley, R.I. In defense of the eight-point algorithm. IEEE Trans. Pattern Anal. Mach. Intell. 1997, 19, 580–593.
23. Xu, Q.Y.; Ye, D.; Chen, H.; Che, R.S.; Chen, G.; Huang, Y. A valid camera calibration based on the maximum likelihood using virtual stereo calibration pattern. In Proceedings of the International Conference on Sensing, Computing and Automation, Chongqing, China, 8–11 May 2006; pp. 2346–2351.
Figure 1. Camera model.
Figure 2. Relationship between the principal point and translation.
Figure 3. Relationship between the focal distance and translation.
Figure 4. Error approximate analysis: (a) $t_x$ error analysis; (b) $t_y$ error analysis.
Figure 5. Error approximate analysis: (a) $a_x$ error comparison; (b) $a_y$ error comparison.
Figure 6. Calibration error analysis between direct linear transformation (DLT) and improved DLT (IDLT) with noise: (a) error of $u_0$; (b) error of $v_0$; (c) error of $t_x$; (d) error of $t_y$; (e) error of $a_x$; (f) error of $t_z$ based on $a_x$; (g) error of $a_y$; (h) error of $t_z$ based on $a_y$.
Figure 7. Calibration error analysis between DLT and IDLT with noise and distortion: (a) error of $u_0$; (b) error of $v_0$; (c) error of $t_x$; (d) error of $t_y$; (e) error of $a_x$; (f) error of $t_z$ based on $a_x$; (g) error of $a_y$; (h) error of $t_z$ based on $a_y$.
Figure 8. Photograph of the physical experiment.
Figure 9. Re-projection errors of 20 planes with DLT and IDLT calibration results: (a) $u$ of the projection image errors; (b) $v$ of the projection image errors.
Table 1. DLT results of the physical experiment.

No. | ax | ay | u0 (pixel) | v0 (pixel) | tx (mm) | ty (mm) | tz (mm) | ψ (°) | θ (°) | φ (°)
1 | 1735.90 | 1735.56 | 701.09 | 522.07 | −363.52 | −319.10 | 1705.86 | 1.11 | −0.13 | 1.36
2 | 1736.97 | 1736.67 | 700.97 | 522.59 | −363.67 | −317.70 | 1806.84 | 1.10 | −0.13 | 1.36
3 | 1738.26 | 1737.96 | 701.66 | 523.22 | −364.68 | −316.48 | 1908.12 | 1.08 | −0.13 | 1.36
4 | 1736.34 | 1736.05 | 701.59 | 523.46 | −364.90 | −314.87 | 2005.89 | 1.07 | −0.13 | 1.36
5 | 1737.88 | 1737.58 | 700.65 | 523.91 | −364.05 | −313.54 | 2107.60 | 1.06 | −0.13 | 1.36
6 | 1738.41 | 1738.15 | 700.71 | 523.93 | −364.39 | −311.73 | 2208.32 | 1.06 | −0.12 | 1.36
7 | 1737.42 | 1737.20 | 700.71 | 524.02 | −364.65 | −310.01 | 2307.00 | 1.05 | −0.12 | 1.36
8 | 1737.75 | 1737.60 | 700.58 | 524.17 | −364.72 | −308.38 | 2407.47 | 1.05 | −0.12 | 1.36
9 | 1737.21 | 1737.07 | 700.71 | 524.16 | −365.17 | −306.53 | 2506.68 | 1.05 | −0.13 | 1.36
10 | 1737.17 | 1737.05 | 700.54 | 524.10 | −365.18 | −304.62 | 2606.64 | 1.05 | −0.12 | 1.35
11 | 1736.76 | 1736.63 | 700.66 | 524.31 | −365.61 | −303.12 | 2705.92 | 1.04 | −0.12 | 1.35
12 | 1737.09 | 1737.00 | 700.74 | 524.76 | −365.99 | −302.02 | 2806.35 | 1.03 | −0.13 | 1.35
13 | 1737.19 | 1737.11 | 700.97 | 524.85 | −366.63 | −300.39 | 2906.44 | 1.03 | −0.13 | 1.35
14 | 1736.94 | 1736.86 | 700.96 | 524.60 | −366.89 | −298.18 | 3006.04 | 1.03 | −0.13 | 1.36
15 | 1737.05 | 1736.98 | 700.83 | 524.52 | −366.94 | −296.22 | 3106.31 | 1.04 | −0.13 | 1.36
16 | 1736.63 | 1736.56 | 700.75 | 524.32 | −367.06 | −294.05 | 3205.58 | 1.05 | −0.13 | 1.35
17 | 1736.33 | 1736.27 | 700.88 | 524.43 | −367.58 | −292.43 | 3304.94 | 1.04 | −0.13 | 1.35
18 | 1736.79 | 1736.77 | 701.40 | 524.66 | −368.87 | −291.07 | 3405.76 | 1.03 | −0.14 | 1.35
Table 2. IDLT results of the physical experiment.

ax | ay | u0 (pixel) | v0 (pixel) | tx (mm) | ty (mm) | tz (mm)
1738.11 | 1738.33 | 696.27 | 555.55 | −358.82 | −352.39 | 1708.23
