Abstract
Structural parameter calibration for the binocular stereo vision sensor (BSVS) is an important guarantee of high-precision measurement. We propose a method to calibrate the structural parameters of a BSVS based on a double-sphere target. The target, consisting of two identical spheres with a known fixed centre distance, is freely placed in different positions and orientations. Any three non-collinear sphere centres determine a spatial plane, whose normal vector in each of the two camera coordinate frames is obtained by means of an intermediate parallel plane recovered from the image points of the sphere centres and the depth-scale factors. Hence, the rotation matrix R is solved. The translation vector T is determined using a linear method derived from the epipolar geometry. Furthermore, R and T are refined by nonlinear optimization. We also provide a theoretical analysis of the error propagation related to the positional deviation of the sphere image and an approach to mitigate its effect. Computer simulations test the performance of the proposed method with respect to the image noise level, the number of target placements and the depth-scale factor. Experimental results on real data show that the relative measurement error is less than 0.9‰ at a distance of 800 mm with a field of view of 250 × 200 mm².
1. Introduction
As one of the main structures of machine vision sensors, the BSVS acquires 3D scene geometry from a pair of images and has many applications in industrial product inspection, robot navigation, virtual reality, etc. [1,2,3]. Structural parameter calibration has always been an important issue for the BSVS. Current calibration methods can be roughly classified into three categories: methods based on 3D targets, 2D targets and 1D targets. 3D target-based methods [4,5] obtain the structural parameters by placing the target only once in the sensor field of view. Their disadvantages lie in the fact that large 3D targets are exceedingly difficult to machine, and it is usually impossible to keep all feature points in the calibration image at the same level of clarity. 2D target-based methods [2,6] require a plane target to be placed freely at least twice in different positions and orientations, and the calibration features from the different placements are unified into a common sensor coordinate frame through the camera coordinate frame. The calibration operation is therefore more convenient than with 3D targets. However, there are weaknesses in two primary aspects. One is that the repeated unification of calibration features accumulates transformation errors. The other is that when the two cameras form a large viewing angle, or when a multi-camera system requires calibration, it is difficult to keep all features in the calibration images at the same level of clarity for all cameras simultaneously. In 1D target-based methods [7], which are much more convenient than 2D target-based methods, the target is freely placed no fewer than four times in different positions and orientations. The image points of the calibration feature points are used to determine the rotation matrix R and the translation vector T, and the scale factor of T is obtained from the known distance constraint. Unfortunately, 1D target-based methods share the weaknesses of 2D target-based methods. Moreover, in practice, 1D targets must be placed many times to obtain enough feature points.
The sphere is widely used in machine vision calibration owing to its spatial uniformity and symmetry [8,9,10,11,12,13,14,15,16,17]. Agrawal et al. [11] and Zhang et al. [16] both used spheres to calibrate intrinsic camera parameters via the relationship between the projected ellipse of the sphere and the dual image of the absolute conic (DIAC). They also mentioned that the structural parameters between two or more cameras could be obtained using 3D point-cloud registration. However, this approach requires many feature points to guarantee high accuracy. Wong et al. [17] proposed two methods to recover the fundamental matrix, from which the structural parameters can be deduced when the intrinsic parameters of the two cameras are known. One method uses sphere centres, intersection points and apparent points of tangency to compute the fundamental matrix. The other determines the fundamental matrix from the homography matrix and the epipoles, which are computed via a plane-induced homography. However, the second method requires an extra plane target to transfer the projected ellipse from the first view to the second view.
In this paper, we propose a method that uses a double-sphere target to calibrate the structural parameters of the BSVS. The target consists of two identical spheres of unknown radius fixed by a rigid bar of known length. During calibration, the double-sphere target is placed freely at least twice in different positions and orientations. From the projected ellipses of the spheres, the image points of the sphere centres and a so-called depth-scale factor for each sphere can be calculated. Because any three non-collinear sphere centres determine a spatial plane πs, if we have at least three non-parallel planes with their normal vectors obtained in both camera coordinate frames, the rotation matrix R can be solved. However, πs cannot be obtained directly. We obtain its normal vector through an intermediate plane parallel to πs, which is recovered from the depth-scale factors and the image points of the sphere centres. From the epipolar geometry, a linear relation between the translation vector T (up to a scale factor) and the image points is derived, and SVD is used to solve it. The scale factor is then determined from the known distance constraint. Finally, R and T are jointly refined by the Levenberg-Marquardt algorithm. Owing to the complete symmetry of the sphere, wherever the sphere is placed in the sensor field of view, all cameras can capture equally high-quality images of it, which is essential for calibration consistency, even if the angle between the principal rays of the two cameras is large. Moreover, in multi-camera system calibration, it is often difficult to make the target features simultaneously visible in all views because of the variety of camera positions and orientations. In general, the cameras are divided into several smaller groups, each group is calibrated separately, and finally all cameras are registered into a common reference coordinate frame [17]. With the double-sphere target, however, the relationship between all cameras sharing a common viewing region can be obtained in a single calibration. The unfavourable two-camera configuration mentioned above frequently occurs in multi-camera calibration. Therefore, using the double-sphere target reduces the number of calibration runs and makes the calibration operation easy and efficient.
The remaining sections of this paper are organized as follows: Section 2 briefly describes a few basic properties of the projected ellipse of the sphere. Section 3 elaborates the principles of the proposed calibration method based on the double-sphere target. Section 4 analyses in detail the impact on the image point of the sphere centre when the projected ellipse is extracted with a positional deviation. Section 5 presents computer simulations and real-data experiments to verify the proposed method. The conclusions are given in Section 6.
2. Basic Principles
This section describes some relevant properties of the projected ellipse of the sphere.
2.1. Derivation of the Projected Ellipse
Agrawal et al. [11] and Zhang et al. [14] each derive the formula of the projected ellipse of a sphere. We synthesize the two derivations into a brief, easily understood explanation, as follows:
Consider a camera P = K[I 0] viewing a sphere Q with radius R0 centred at X = [X0 Y0 Z0]^T in the camera coordinate frame O-XYZ, where K is the camera intrinsic matrix. Q is expressed as (X − X0)² + (Y − Y0)² + (Z − Z0)² = R0².

Denoting ‖X‖ by h0, we then have:

X = h0d, (1)

where d is the unit vector of X.

Sphere Q is further expressed by the following coefficient matrix:

Q = [I, −X; −X^T, h0² − R0²]. (2)

Thus, the dual Q* of Q is defined (up to scale) as:

Q* ≃ [XX^T − R0²I, X; X^T, 1]. (3)

Next, we obtain the dual C* of the projected ellipse C of sphere Q under camera P [4]:

C* ≃ PQ*P^T = K(XX^T − R0²I)K^T. (4)

Denoting h0/R0 by μ, from Equation (4) we have:

C* ≃ oo^T − KK^T, (5)

with o = μKd, which is the (homogeneous) image point of the sphere centre X.
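To make Equation (5) concrete, the following minimal numpy sketch builds C* from hypothetical values of K, d and μ and reads off the homogeneous image point o = μKd of the sphere centre; all numbers are illustrative.

```python
import numpy as np

# Hypothetical intrinsics K, viewing direction d and depth-scale factor mu.
K = np.array([[1200.0, 0.0, 800.0],
              [0.0, 1200.0, 600.0],
              [0.0, 0.0, 1.0]])
d = np.array([0.1, 0.2, 0.97])
d /= np.linalg.norm(d)          # unit vector toward the sphere centre
mu = 25.0                       # e.g. h0 = 500 mm, R0 = 20 mm

o = mu * K @ d                        # homogeneous image point of the sphere centre
C_star = np.outer(o, o) - K @ K.T     # dual of the projected ellipse, Equation (5), up to scale

print(o[:2] / o[2])             # sphere-centre image point in pixel coordinates
```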
2.2. Derivation of the Image Point of the Sphere Centre
From Equation (5), C* can also be written as:

ρC* = oo^T − KK^T, (6)

where ρ is an unknown scale factor and o = μKd.

Let C1*, C2* be the duals of the projected ellipses of spheres Q1, Q2 under camera P, respectively; then we have:

ρ1C1* = o1o1^T − KK^T, ρ2C2* = o2o2^T − KK^T, (7)

where ρ1, ρ2 are two unknown scale factors, o1 = μ1Kd1, o2 = μ2Kd2, and μ1, μ2, d1, d2 have the same meanings as μ in Equation (5) and d in Equation (1).
Let X1, X2 denote the centres of spheres Q1, Q2, respectively. These two points and the camera centre O determine a plane. Denote the vanishing line of this plane by l12; then we know from [14] that:

ρ1C1*l12 = ρ2C2*l12, i.e., (C2*)^{-1}C1*l12 = (ρ2/ρ1)l12. (8)

From Equation (8), it is observed that l12 is the eigenvector of the matrix (C2*)^{-1}C1* corresponding to the eigenvalue ρ2/ρ1; among the eigenvectors, it is the one that, viewed as a line, has two real intersections with each projected ellipse C1 and C2.

Because l12 passes through the image points o1 and o2, l12^T o1 = l12^T o2 = 0. Hence, if we know three projected ellipses C1, C2 and C3 of spheres Q1, Q2 and Q3, the image points o1, o2 and o3 of the three sphere centres will be given by:

o1 = l12 × l13, o2 = l12 × l23, o3 = l13 × l23. (9)
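A synthetic numpy sketch of Equations (8) and (9) with hypothetical intrinsics and sphere centres follows. Because ground truth is available in this toy setup, the eigenvector selection below uses the known image points instead of the two-real-intersections test described above.

```python
import numpy as np

K = np.array([[1200., 0., 800.], [0., 1200., 600.], [0., 0., 1.]])

def dual_ellipse(K, X, R0):
    """Dual of the projected ellipse of a sphere centred at X with radius R0,
    via Equation (5) with o = mu * K * d."""
    h0 = np.linalg.norm(X)
    o = (h0 / R0) * K @ (X / h0)
    return np.outer(o, o) - K @ K.T

def line_through_centres(Ca, Cb, oa, ob):
    """Candidate lines are the eigenvectors of inv(Cb) @ Ca (Equation (8)).
    The paper selects the one with two real intersections with each ellipse;
    with ground truth at hand, this sketch picks the one through oa and ob."""
    _, V = np.linalg.eig(np.linalg.inv(Cb) @ Ca)
    residual = [abs(V[:, i].real @ oa) + abs(V[:, i].real @ ob) for i in range(3)]
    return V[:, int(np.argmin(residual))].real

# Three hypothetical sphere centres (mm) sharing one radius.
Xs = [np.array([60., 40., 900.]), np.array([-80., 30., 950.]), np.array([20., -70., 1000.])]
Cs = [dual_ellipse(K, X, 20.0) for X in Xs]
os_true = [(K @ X) / (K @ X)[2] for X in Xs]

l12 = line_through_centres(Cs[0], Cs[1], os_true[0], os_true[1])
l13 = line_through_centres(Cs[0], Cs[2], os_true[0], os_true[2])
o1 = np.cross(l12, l13)                 # Equation (9): the two lines meet at o1
print(o1 / o1[2], os_true[0])           # should agree
```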
2.3. Computation of the Depth-Scale Factor μ
Motivated by [18], we give a simple method to solve the depth-scale factor. As is known, there are two mutually orthogonal unit vectors r1 and r2, both perpendicular to d in Equation (5). Denote [r1 r2 d] by M; then the dual of the ellipse is also expressed as follows:

C* ≃ KM diag(−1, −1, μ² − 1)M^TK^T. (10)

Ellipse C is then given by:

C = ρK^{-T}M diag(−1, −1, 1/(μ² − 1))M^TK^{-1}, (11)

where ρ is an unknown scale factor. If K is known, Equation (11) will be rewritten as:

K^TCK = ρM diag(−1, −1, 1/(μ² − 1))M^T. (12)

Denoting K^TCK by C̃, we then have:

C̃ = ρM diag(−1, −1, 1/(μ² − 1))M^T. (13)

As is known, M is an orthogonal matrix, so the singular values of matrix C̃ are |ρ|, |ρ| and |ρ|/(μ² − 1); since in practice μ ≫ 1, the two equal values are the larger ones, and μ = sqrt(1 + σ1/σ3) can be obtained by SVD, where σ1 ≥ σ2 ≥ σ3 are the singular values of C̃. For h0 > R0, μ = h0/R0 is greater than 1. For different spheres with the same radius, μ is proportional to the corresponding h0.
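A sketch of this SVD recovery of μ from Equation (13), assuming μ is large enough (μ² > 2) that the repeated singular value is the largest; the test values are hypothetical.

```python
import numpy as np

def depth_scale_factor(C, K):
    """Recover mu = h0/R0 from a projected ellipse C (3x3 symmetric, any scale)
    and intrinsics K via Equation (13): the singular values of K^T C K are
    (|rho|, |rho|, |rho|/(mu^2 - 1)), with the repeated value largest for mu^2 > 2."""
    s = np.linalg.svd(K.T @ C @ K, compute_uv=False)   # sorted in descending order
    return np.sqrt(1.0 + s[0] / s[2])

# Synthetic check with hypothetical values:
K = np.array([[1200., 0., 800.], [0., 1200., 600.], [0., 0., 1.]])
X = np.array([50., -30., 900.])                  # sphere centre (mm)
R0 = 20.0                                        # sphere radius (mm)
o = (np.linalg.norm(X) / R0) * K @ (X / np.linalg.norm(X))
C = np.linalg.inv(np.outer(o, o) - K @ K.T)      # ellipse from Equation (5)
print(depth_scale_factor(C, K), np.linalg.norm(X) / R0)   # should agree
```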
3. Calibration Principles
3.1. Acquisition of the Rotation Matrix
If K is known, the normalized back-projected vector of the sphere centre in the camera coordinate frame will be:

d = K^{-1}o / ‖K^{-1}o‖. (14)

Denote μd by D; then:

D = μd = (h0/R0)d = X/R0, (15)

where X is the sphere centre.

From Section 2.3, we can obtain the depth-scale factor μ, and when there are three spheres Q1, Q2 and Q3 with the same radius R0 centred at X1, X2 and X3, we can obtain three vectors D1, D2 and D3 that determine a plane πD. The plane πD is parallel to the plane πs formed by X1, X2 and X3. Therefore, the normal vector n of the plane πs is calculated by:

n = (D2 − D1) × (D3 − D1). (16)
Referring to Figure 1, for the BSVS we can obtain the normal vectors nl and nr of the same plane πs in the left camera coordinate frame (LCCF) and the right camera coordinate frame (RCCF), respectively. Thus, the following equation stands:

nr = Rnl, (17)

where R is the rotation matrix between the LCCF and the RCCF.
Figure 1.
Using a double-sphere target to calibrate a binocular stereo vision system.
If there are at least four spheres with non-coplanar centres (e.g., the target placed at least twice in general positions), we can obtain at least three equations of the form of Equation (17) to solve R, as sketched below.
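The text leaves the solver for the stacked Equation (17) implicit; the sketch below assumes the standard SVD-based orthogonal Procrustes solution over sign-consistent normal pairs, with an illustrative self-check.

```python
import numpy as np

def rotation_from_normals(n_left, n_right):
    """Solve n_r = R n_l over all paired plane normals (Equation (17)) by the
    SVD-based orthogonal Procrustes method. n_left, n_right: (N, 3) arrays of
    unit normals, N >= 3, not all parallel; each pair is assumed to be
    sign-consistent beforehand."""
    H = n_left.T @ n_right                   # 3x3 sum of outer products n_l n_r^T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # enforce det(R) = +1
    return Vt.T @ D @ U.T

# Self-check with a random rotation and hypothetical normals:
rng = np.random.default_rng(0)
R_true = np.linalg.qr(rng.normal(size=(3, 3)))[0]
R_true *= np.sign(np.linalg.det(R_true))
n_l = rng.normal(size=(4, 3))
n_l /= np.linalg.norm(n_l, axis=1, keepdims=True)
print(np.allclose(rotation_from_normals(n_l, n_l @ R_true.T), R_true))
```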
3.2. Acquisition of the Translation Vector
In the BSVS, suppose that the left camera is Pl = Kl[I 0] and the right camera is Pr = Kr[R T], and that pl, pr are the homogeneous image points of a 3D point X; then we have:

slpl = KlX, srpr = Kr(RX + T), (18)

where sl, sr are two unknown scale factors, and R, T are the rotation matrix and translation vector of the BSVS, respectively.

Define the skew-symmetric matrix of T = [t1 t2 t3]^T as [T]× = [0, −t3, t2; t3, 0, −t1; −t2, t1, 0]. Denote Kl^{-1}pl by xl and Kr^{-1}pr by xr; then from Equation (18), we have:

xr^T[T]×Rxl = 0. (19)

Denote Rxl by m; R is already known from Section 3.1, so we can obtain the final expression as:

(m × xr)^T T = 0. (20)

Obviously, Equation (20) is a homogeneous equation in T. Given at least three pairs of image points of the sphere centres, we can solve T up to a scale factor λ (see Appendix A for more details); i.e., T = λT0 with ‖T0‖ = 1. Furthermore, λ is determined from the known distance between the two sphere centres of the target and the fact that the Z coordinate of a sphere centre must be positive.
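A minimal sketch of this linear step, assuming matched normalized image points xl = Kl^{-1}pl and xr = Kr^{-1}pr of the sphere centres; the function name is illustrative. The scale λ and the sign are fixed afterwards by the known centre distance and the positive-depth requirement.

```python
import numpy as np

def translation_direction(xl_list, xr_list, R):
    """Stack one row (m x x_r)^T per matched pair, with m = R x_l (Equation (20)),
    and take the unit null vector: T0 such that T = lambda * T0. Needs at least
    three pairs of sphere-centre image points."""
    A = np.array([np.cross(R @ xl, xr) for xl, xr in zip(xl_list, xr_list)])
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]

# lambda and the sign of T0 are then fixed by the known centre distance
# (149.946 mm for this target) and by requiring positive sphere-centre depths.
```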
3.3. Nonlinear Optimization
R and T are obtained separately above; in this section, we take them as initial values and refine them jointly to obtain more accurate results.
Establish the objective function as:

f(r, T) = Σi=1..N Σj=1,2 [d(plij, p̂lij)² + d(prij, p̂rij)²] + τ Σi=1..N (L̂i − L)², (21)

where d(·, ·) represents the Euclidean distance, plij and prij are the real non-homogeneous image coordinates of the sphere centres, p̂lij and p̂rij are the non-homogeneous reprojected image coordinates of the sphere centres, L̂i is the calculated distance of the two sphere centres, L is the known distance of the two sphere centres, N is the number of placement times, and τ is the weight factor.

To maintain the orthogonality constraint of the rotation matrix, R is parameterized by the Rodrigues vector r, so the optimization variables are (r, T). Considering the principle of error distribution, τ is taken to be 10. The Levenberg-Marquardt optimization algorithm is used to obtain the final results of R and T.
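For concreteness, a sketch of the refinement of Equation (21) with scipy's Levenberg-Marquardt. The paper does not specify its triangulation, so a midpoint triangulation is assumed here, and pl_obs, pr_obs, r0 and T0 are hypothetical inputs.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def triangulate(xl, xr, R, T):
    """Midpoint triangulation in the LCCF (an assumption of this sketch):
    the left ray is s * xl; the right ray, mapped into the left frame, is
    -R.T @ T + t * (R.T @ xr)."""
    A = np.stack([xl, -R.T @ xr], axis=1)              # 3x2 system in (s, t)
    st = np.linalg.lstsq(A, -R.T @ T, rcond=None)[0]
    return 0.5 * (st[0] * xl + (-R.T @ T + st[1] * (R.T @ xr)))

def residuals(params, Kl, Kr, pl, pr, L_known, tau=10.0):
    """Stacked residuals of Equation (21). pl, pr: arrays of shape (N, 2, 2),
    the observed pixel coordinates of the two sphere centres per placement."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()   # Rodrigues vector r
    T = params[3:]
    res = []
    for i in range(pl.shape[0]):
        P = []
        for j in range(2):
            xl = np.linalg.solve(Kl, np.append(pl[i, j], 1.0))
            xr = np.linalg.solve(Kr, np.append(pr[i, j], 1.0))
            X = triangulate(xl, xr, R, T)
            P.append(X)
            ql = Kl @ X
            res.extend(pl[i, j] - ql[:2] / ql[2])      # left reprojection error
            qr = Kr @ (R @ X + T)
            res.extend(pr[i, j] - qr[:2] / qr[2])      # right reprojection error
        res.append(np.sqrt(tau) * (np.linalg.norm(P[0] - P[1]) - L_known))
    return np.asarray(res)

# Hypothetical usage, starting from the linear estimates r0, T0:
# fit = least_squares(residuals, np.hstack([r0, T0]),
#                     args=(Kl, Kr, pl_obs, pr_obs, 149.946), method='lm')
```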
3.4. Summary
The implementation procedure of our proposed calibration is as follows:
- Calibrate the intrinsic parameters of two cameras.
- Take enough images of the double-sphere target with different positions and orientations by moving the target.
- Extract the subpixel contour points of the projected ellipses using Steger’s method [19], and then perform ellipse fitting [20].
- Compute the image point of each sphere centre, and then match the image points between the two views.
- Compute the depth-scale factor μ of each sphere.
- Solve the structural parameters R and T using the algorithms described in Section 3.1 and Section 3.2.
- Refine the parameters by solving Equation (21).
4. Error Analysis
The general equation of the ellipse is ax² + bxy + cy² + dx + ey + f = 0, and the coordinates (x0, y0) of the ellipse centre are given as follows:

x0 = (be − 2cd)/(4ac − b²), y0 = (bd − 2ae)/(4ac − b²). (22)

The matrix form of the ellipse is written as C = [a, b/2, d/2; b/2, c, e/2; d/2, e/2, f]. The dual C* of C is given by:

C* = ρC^{-1}, (23)

where ρ is an unknown scale factor. Combining Equations (22) and (23), we can then obtain the following relationship between the ellipse centre and the elements c*ij of matrix C*:

x0 = c*13/c*33, y0 = c*23/c*33. (24)
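Equation (24) in code form; the function name is illustrative.

```python
import numpy as np

def ellipse_centre_from_dual(C_star):
    """Ellipse centre from the dual conic, Equation (24): the centre is the
    (dehomogenized) image C* l_inf of the line at infinity l_inf = (0, 0, 1)^T."""
    return np.array([C_star[0, 2], C_star[1, 2]]) / C_star[2, 2]
```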
As is known, many factors affect the extraction of the ellipse contour points, and the extracted points may exhibit a positional deviation due to noise. We now discuss how the computation of the image point of the sphere centre is influenced under this condition.
Suppose that the shape of the ellipse remains constant and the ellipse does not rotate; then we use the ellipse centre to represent the position of the ellipse. Let Q denote the sphere, C denote the projected ellipse of Q, and (x, y) denote the non-homogeneous image point of the sphere centre.
To simplify the discussion, consider the condition in which the sphere centre is in the first quadrant of the camera coordinate frame. Because the sphere centre can be located in the first quadrant of the camera coordinate frame by rotating the camera, this discussion can be generalized.
First of all, let us discuss the element c*33 of C* (note: C* = (c*ij), i, j = 1, 2, 3). Expanding Equation (5) with the replacements K = [fx, 0, u0; 0, fy, v0; 0, 0, 1] (zero skew) and d = [d1 d2 d3]^T gives:

c*33 = μ²d3² − 1. (25)

The sphere is always in front of the camera, and the sphere centre is located in the first quadrant of the camera coordinate frame, so we have μ > 1, d1 > 0, d2 > 0, d3 > 0 and Z0 > R0. Based on these conditions, we can obtain:

c*33 > 0. (26)

The details are described in Appendix B.
Next, we discuss the factors that influence the calculation of the image point of the sphere centre. Denoting o = [o1 o2 o3]^T, we can obtain the following equations from Equation (5):

c*13 = o1o3 − u0, c*23 = o2o3 − v0, c*33 = o3² − 1. (27)

Substituting x = o1/o3, y = o2/o3 into Equation (27), we get:

x = (c*13 + u0)/(c*33 + 1), y = (c*23 + v0)/(c*33 + 1). (28)

By Equation (28), together with x0 = c*13/c*33 from Equation (24), we can obtain:

x = (x0c*33 + u0)/(c*33 + 1). (29)

From Equation (29), computing the partial derivative of x with respect to x0 gives:

∂x/∂x0 = c*33/(c*33 + 1), (30)

where x0 is the x-coordinate of the ellipse centre.

Let:

k = c*33/(c*33 + 1), (31)

and we can deduce that k satisfies 0 < k < 1 when c*33 > 0 is valid (see Appendix C for more details).

Suppose that Δx0 is the positional deviation of the fitted ellipse, and Δx is the computation error of the image point coordinate x. Equation (30) is then written as:

Δx = kΔx0. (32)

When μ > 1 and Z0 > R0 are valid, k satisfies 0 < k < 1, which shows that the computation error caused by Δx0 is reduced.
Because the extracted ellipse contour points have a positional deviation, the fitted ellipse has a similar deviation. We now discuss how to reduce the computation error of the image point of the sphere centre under this condition.
Firstly, consider the relationship between c*33 and μ. From Equation (25), we can obtain:

∂c*33/∂μ = 2μd3². (33)

When d3 > 0 is valid, we can deduce ∂c*33/∂μ > 0. Hence, c*33 is a monotonically increasing function of μ (μ > 1).
Second, from Equations (31) and (33), we have:

∂k/∂μ = (∂k/∂c*33)(∂c*33/∂μ) = 2μd3²/(c*33 + 1)². (34)

When μ > 1 and d3 > 0, we can obtain:

∂k/∂μ > 0 (35)

(see Appendix D for more details).

Suppose Δx0 is the positional deviation of the fitted ellipse, and Δx is the computation error caused by Δx0. Equation (32) can then be written as:

Δx = k(μ)Δx0. (36)

Based on Equations (26), (35) and (36), we can deduce that Δx has a positive relationship with μ.

Similarly, for the y coordinate we can obtain:

Δy = k(μ)Δy0. (37)

If d2 > 0 and d3 > 0 are valid, k is again a monotonically increasing function of μ, so Δy also has a positive relationship with μ.

Finally, based on the conditions described above, we conclude that the computation errors Δx and Δy both have positive relationships with μ: the smaller the value of μ, the smaller the computation errors Δx and Δy. By reducing h0 (the depth of the sphere centre) or increasing R0 (the radius of the sphere), μ becomes smaller, so Δx and Δy become smaller. In this way, we can improve the computational accuracy of the image point (x, y) of the sphere centre.
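A numeric check of this conclusion under the formulas above: translating the dual ellipse by a fixed pixel deviation and recomputing the sphere-centre image point via Equations (28) and (31) reproduces Δx = kΔx0, with k shrinking as the radius grows (i.e., as μ decreases). All numbers are hypothetical.

```python
import numpy as np

# Shift the ellipse (in dual form) by delta pixels and recompute the
# sphere-centre image point; the induced error equals k * delta exactly,
# and k decreases as the radius grows (mu decreases).
K = np.array([[1200., 0., 800.], [0., 1200., 600.], [0., 0., 1.]])
X = np.array([60., 50., 900.])
delta = 0.5                                      # ellipse deviation in pixels
for R0 in (10.0, 20.0, 40.0):                    # growing radius => smaller mu
    h0 = np.linalg.norm(X)
    o = (h0 / R0) * K @ (X / h0)
    C_star = np.outer(o, o) - K @ K.T            # Equation (5)
    H = np.array([[1., 0., delta], [0., 1., 0.], [0., 0., 1.]])
    C_shift = H @ C_star @ H.T                   # translated dual ellipse
    x = (C_star[0, 2] + K[0, 2]) / (C_star[2, 2] + 1.0)        # Equation (28)
    x_shift = (C_shift[0, 2] + K[0, 2]) / (C_shift[2, 2] + 1.0)
    k = C_star[2, 2] / (C_star[2, 2] + 1.0)      # Equation (31)
    print(R0, x_shift - x, k * delta)            # the error equals k * delta
```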
5. Experiments
5.1. Computer Simulations
Using computer simulations, we analyse the following factors affecting the calibration accuracy: (1) the image noise level σ; (2) the number of placement times N of the target; and (3) the depth-scale factor μ of the sphere.
Table 1 shows the intrinsic parameters of the two simulated cameras. Camera distortions are not considered. Suppose that the LCCF is the world coordinate frame (WCF), and set the structural parameters of the simulated BSVS as r = [−0.03, 0.47, 0.07]^T, T = [−490, −49, 100]^T. The working distance of the BSVS is approximately 1000 mm, and the field of view is approximately 240 × 320 mm. The relative deviations of the calibration results from the true values are used to evaluate accuracy. The rotation matrix R is expressed as the Rodrigues vector r; both the rotation vector r and the translation vector T then have dimensions 3 × 1. The Euclidean distance between the estimated and true vectors of r and T represents the absolute error; the ratio of the absolute error to the norm of the corresponding true vector is the relative error.
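The relative-error metric just described, as a small helper; the example call is hypothetical.

```python
import numpy as np

def relative_error(est, truth):
    """Euclidean distance between estimated and true vectors, divided by the
    norm of the true vector, as used for both r and T in the simulations."""
    est, truth = np.asarray(est), np.asarray(truth)
    return np.linalg.norm(est - truth) / np.linalg.norm(truth)

# e.g. relative_error(r_est, [-0.03, 0.47, 0.07])   # r_est: hypothetical estimate
```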
Table 1.
Intrinsic parameters of the simulation cameras.
5.1.1. Performance w.r.t. the Noise Level and the Number of Placement Times of the Target
In this experiment, Gaussian noise with zero mean and standard deviation σ (0.05–0.50 pixel or 0.05–1.00 pixel) is added to the contour points of the projected ellipses. For each noise level and number of placement times N (2, 3, 4), we perform 200 independent trials; Figure 2 and Figure 3 show the relative errors of R and T under the different conditions. As we can see, the errors increase with the noise level. The relative errors of R and T remain below 5% even with the minimum number of placements (namely N = 2), and they drop drastically as the number of placements increases. For σ = 1.00 pixel and N = 4, the calibration errors of R and T are less than 1‰. Note that in practical calibration the noise level is normally less than 1 pixel.
Figure 2.
Relative errors vs. the noise level of the image points when N = 2.
Figure 3.
Relative errors vs. the noise level of the image points when N = 3 and 4.
5.1.2. Performance w.r.t. the Depth-Scale Factor μ
This experiment studies the performance with respect to the depth-scale factor μ, i.e., the ratio of the distance h0 of the sphere centre to the sphere radius R0. To keep the orientation unchanged, we vary μ by varying only the sphere radius. Gaussian noise with zero mean and standard deviation σ = 0.50 pixel is added to the contour points of the projected ellipses, and the target is placed N = 3 times. We vary the radius from 4 mm to 36 mm and perform 200 independent trials for each radius. Figure 4 shows that the relative errors decrease with increasing radius (that is, with decreasing depth-scale factor μ). Note that in practice, if the radius increases too much, the sphere image may become too large to fit in the image plane.
Figure 4.
Relative errors vs. the sphere radius of the double-sphere target.
5.2. Real Data
In the real-data experiments, the BSVS is composed of two AVT Stingray F504B cameras with 17 mm Schneider lenses and support structures. The image resolution of the cameras is 1600 × 1200 pixels. Figure 5 shows the structure of the BSVS.
Figure 5.
Structure of the BSVS.
5.2.1. Intrinsic Parameters Calibration
The Matlab calibration toolbox [6] and a checkerboard target (see Figure 6) are used to calibrate the intrinsic parameters. There are 10 × 10 corner points on the checkerboard target, and the distance between any two adjacent corner points is 10 mm with 5 µm accuracy. Twenty images with different orientations are taken for the intrinsic parameter calibration of each camera. Table 2 shows the calibration results.
Figure 6.
Checkerboard target and double-sphere target.
Table 2.
Intrinsic parameters of the cameras.
5.2.2. Structural Parameters Calibration
The double-sphere target (see Figure 6) is composed of two spheres with the same radius and a support structure. The distance between the two sphere centres is 149.946 mm with 0.003 mm accuracy. Set the LCCF as the WCF. To explore the optimal number of placement times, the double-sphere target is placed freely 28 times in the measurement space, and 28 pairs of images are obtained. To evaluate the accuracy of the calibration, another 15 pairs of images of the double-sphere target are captured.
We then randomly select 8, 10, 12, 14, 16, 18, 22 and 28 pairs of images for calibration using our method and obtain several sets of structural parameters. Figure 7 illustrates the extraction and fitting details of a pair of target images.
Figure 7.
Extraction and ellipse fitting of the images: (a,b) are the original images and (c,d) are the processed images.
The calibrated BSVS is used to measure the distance between two sphere centres of the double-sphere target by means of another 15 pairs of measured images. Root-mean-square (RMS) errors of these measured values are taken as evaluation criteria of calibration accuracy. Table 3 shows the results, and Figure 8 displays the relative errors and absolute errors of RMS.
Table 3.
Comparison of measured values (mm).
Figure 8.
Relative errors and absolute errors of RMS errors with different number of placement times.
From Figure 8, we can see that the errors decrease monotonically once the number of placement times exceeds 16. Consequently, the double-sphere target should be placed approximately 16 times in the experiment.
In this experiment, we take the calibration parameters obtained with 18 placements as the final result. Using these parameters, we reconstruct the target positions in space; the results are shown in Figure 9. For comparison, the Matlab toolbox method is also used for structural parameter calibration. Table 4 shows the calibration results of the two methods.
Figure 9.
3D reconstruction of the spatial position of the target by our method.
Table 4.
Comparison of the structural parameters.
5.2.3. Accuracy Evaluation
To evaluate the accuracy of the calibration, another 10 pairs of images of the checkerboard target are captured. In addition, the 15 pairs of previously captured images of the double-sphere target are also used for accuracy evaluation.
Using the BSVS as calibrated by each of the two methods, we measure the distance between the two sphere centres of the double-sphere target and the distance between each pair of adjacent corner points of the checkerboard target. The RMS errors of these measured values are taken as the evaluation criteria of calibration accuracy, as computed in the sketch below.
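The RMS criterion in code form, as a small helper; the variable names are hypothetical.

```python
import numpy as np

def rms_error(measured, true_value):
    """RMS deviation of repeated distance measurements from the known value,
    the evaluation criterion used in Tables 5 and 6."""
    measured = np.asarray(measured, dtype=float)
    return float(np.sqrt(np.mean((measured - true_value) ** 2)))

# e.g. rms_error(sphere_distances, 149.946)   # sphere_distances: hypothetical list
```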
- (a) Measuring the double-sphere target
Figure 10 displays the measured results of 15 distances, and Table 5 shows a comparison of the errors.
Figure 10.
Results of measurements by two methods.
Table 5.
Comparison of measurement accuracy of the double-sphere target (mm).
The results in Table 5 show that the RMS error of our algorithm is 0.084 mm, and the relative error is approximately 0.06%; the RMS error of the toolbox method is 0.111 mm, and the relative error is approximately 0.07%. Consequently, it is obvious that our method is slightly better than the toolbox method in measuring the distance of two sphere centres. The standard errors of the measured values show that the measured results by our method are more stable.
- (b) Measuring the checkerboard target
The details of the checkerboard target were introduced above. Because the measured values are too numerous to tabulate, we present them as scatter plots in Figure 11. Table 6 shows the comparison of errors. For an intuitive display of the calibration results of our method, we reconstruct the 3D corner points of the checkerboard target; Figure 12 shows the results.
Figure 11.
Results of measuring the checkerboard target by two methods.
Table 6.
Comparison of the measurement accuracy of the checkerboard target (mm).
Figure 12.
Reconstructed 3D points by our method.
As we can see in Table 6, the RMS errors are 0.008 and 0.005 mm, and the relative errors are 0.08% and 0.05%, respectively. When measuring the distance of the corner points of the checkerboard, the toolbox method is slightly better. According to the standard errors, we can find that these two methods are both reasonably stable.
As observed in the accuracy evaluation, our method exhibits calibration accuracy similar to that of the toolbox method; the measurement errors of both methods are less than 0.9‰. Moreover, the sphere's complete symmetry effectively avoids the simultaneous-visibility problem of target features in multi-camera calibration.
The toolbox method based on the plane target is a typical method. However, it requires the plane target to be placed in different orientations to provide enough constraints, which increases the chance of the simultaneous-visibility problem occurring. When the angle between the principal rays of the two cameras is large, it is difficult to capture high-quality images in both views at once, so the calibration accuracy is heavily affected. Figure 13 compares the plane target and the double-sphere target when the angle between the principal rays is large. As seen in Figure 13, the right image of the plane target is so tilted that the corners cannot be accurately extracted, while both images of the double-sphere target have the same high level of clarity and the contours can be accurately extracted. Therefore, the double-sphere target performs better than the plane target in this configuration.
Figure 13.
Images of the plane target and the double-sphere target: (a,c) are the left images and (b,d) are the right images.
6. Conclusions
In this paper, we describe a method to calibrate the structural parameters of the BSVS. The method requires a double-sphere target placed a few times in different positions and orientations. We utilize the normal vectors of spatial planes to compute the rotation matrix and a linear algorithm to solve the translation vector. Simulations demonstrate how the noise level, the number of placement times and the depth-scale factor influence the calibration accuracy. Real-data experiments show that when measuring an object approximately 150 mm long, the accuracy is 0.084 mm, and when measuring 10 mm, the accuracy is 0.008 mm.
If the sphere centres are all coplanar, our method fails. Therefore, the double-sphere target should be placed in different positions and orientations to avoid this degenerate configuration. Because the calibration feature of the sphere is its contour, the two spheres must not completely occlude each other. As mentioned above, the two spheres should have the same radius. However, if the two sphere radii are unequal, our method can still work. If the ratio of the two radii is known, the ratio should be taken into account when recovering the intermediate parallel planes; the other computation procedures remain unchanged. If the ratio is unknown, three projected ellipses of the same sphere should be selected to recover each intermediate parallel plane, and the target must then be placed at least four times. Obviously, such a target provides fewer constraints for solving the rotation matrix than a target with a known radius ratio. To calibrate a BSVS with a small common field of view while guaranteeing high accuracy, we can couple two spheres of large radius to form the double-sphere target. In multi-camera calibration, the double-sphere target avoids the simultaneous-visibility problem and performs well.
Acknowledgments
This work is supported by the Natural Science Foundation of Beijing (No. 3142012), the National Key Scientific Instrument and Equipment Development Project (No. 2012YQ140032), and the Supported Program for Young Talents (No. YMF-16-BJ-J-17). We appreciate the constructive comments received from the reviewers.
Author Contributions
The work presented here was performed in collaboration between two authors. Both authors have contributed to, seen and approved the manuscript.
Conflicts of Interest
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
| BSVS | binocular stereo vision sensor |
| LCCF | left camera coordinate frame |
| RCCF | right camera coordinate frame |
| WCF | world coordinate frame |
| RMS | root mean square |
Appendix A
This appendix provides a solution to Equation (20). Expanding Equation (20) with m = [m1 m2 m3]^T and xr = [xr1 xr2 xr3]^T,

(m2xr3 − m3xr2)t1 + (m3xr1 − m1xr3)t2 + (m1xr2 − m2xr1)t3 = 0. (A1)

Equation (A1) can be written as

aT = 0, (A2)

where a = (m × xr)^T = [m2xr3 − m3xr2, m3xr1 − m1xr3, m1xr2 − m2xr1]. From Equation (A2), we have

AT = 0, (A3)

where A is now an n × 3 matrix: given at least three pairs of image points, A is the coefficient matrix whose rows are the vectors a from each equation aT = 0. Using SVD, we can obtain the solution of equation AT = 0 as the right singular vector of A associated with its smallest singular value.
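A minimal numpy sketch of this SVD step; the function name is illustrative.

```python
import numpy as np

def solve_homogeneous(A):
    """Least-squares solution of A T = 0 with ||T|| = 1: the right singular
    vector of A associated with its smallest singular value."""
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1]
```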
Appendix B
In this appendix, the sign of c*33 in Equation (25) will be discussed. Because μd3 = (h0/R0)(Z0/h0) holds, we have

μd3 = Z0/R0. (B1)

Because of Z0 > 0 and R0 > 0, we can deduce

μd3 > 0. (B2)

If the whole sphere lies in front of the camera, then Z0 > R0, therefore

μd3 > 1. (B3)

Now, we have

c*33 = μ²d3² − 1 = (Z0/R0)² − 1 > 0. (B4)
Appendix C
In this appendix, we endeavour to determine the range of k in Equation (31). Because the ellipse centre is close to the projected image point of the sphere centre, we can approximate x0 ≈ x and y0 ≈ y.

Denote a = (x0 − u0)/fx and b = (y0 − v0)/fy; then, since o = μKd,

a ≈ (x − u0)/fx = d1/d3, b ≈ (y − v0)/fy = d2/d3. (C1)

Considering d1² + d2² + d3² = 1, we can obtain

d3² = 1/(1 + a² + b²), (C2)

and then deduce

c*33 = μ²d3² − 1 = μ²/(1 + a² + b²) − 1. (C3)

Denote μ²/(1 + a² + b²) by t. Because h0² = (1 + a² + b²)Z0² holds, t satisfies t = (Z0/R0)². In general, the whole sphere lies in front of the camera, so Z0 > R0 and t > 1. Replacing c*33 with t − 1 in Equation (31), we have

k = (t − 1)/t = 1 − 1/t. (C4)

For t > 1, 1/t lies strictly between 0 and 1, so k does as well.

Consequently, k satisfies 0 < k < 1 when Z0 > R0 is valid.
Appendix D
In this appendix, we determine the sign of ∂k/∂μ in Equation (34). For the numerator of Equation (34), because of μ > 1 and d3 > 0,

2μd3² > 0 (D1)

constantly holds. For the denominator, we have

(c*33 + 1)² = (μd3)⁴ = (Z0/R0)⁴. (D2)

If Z0 > R0, then (c*33 + 1)² > 1 holds. Because x0 is close to x, the value of d3 computed from the fitted ellipse centre through Equation (C2) remains positive, and we have

∂k/∂μ = 2μd3²/(c*33 + 1)² > 0. (D3)

Consequently, when μ > 1 and d3 > 0, we have ∂k/∂μ > 0.
References
- Gao, H. Computer Binocular Stereo Vision; Publishing House of Electronics Industry: Beijing, China, 2012. [Google Scholar]
- Steger, C.; Ulrich, M.; Wiedemann, C. Machine Vision Algorithms and Applications; Tsinghua University Press: Beijing, China, 2008. [Google Scholar]
- Zhang, G. Visual Measurement; Science Press: Beijing, China, 2008. [Google Scholar]
- Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
- Ma, S.; Zhang, Z. Computer Vision: Theory and Algorithms; Science Press: Beijing, China, 1998. [Google Scholar]
- Bouguet, J.Y. Camera Calibration Toolbox for Matlab. 2010. Available online: http://www.vision.caltech.edu/bouguetj/calib_doc/ (accessed on 1 May 2015).
- Zhou, F.; Zhang, G.; Wei, Z.; Jiang, J. Calibrating binocular vision sensor with one-dimensional target of unknown motion. J. Mech. Eng. 2006, 42, 92–96. [Google Scholar] [CrossRef]
- Penna, M.A. Camera calibration: A quick and easy way to determine the scale factor. IEEE Trans. Pattern Anal. Mach. Intell. 1991, 13, 1240–1245. [Google Scholar] [CrossRef]
- Daucher, N.; Dhome, M.; Lapreste, J. Camera calibration from spheres images. In Proceedings of the European Conference on Computer Vision, Stockholm, Sweden, 2–6 May 1994; pp. 449–454.
- Teramoto, H.; Xu, G. Camera calibration by a single image of balls: From conics to the absolute conic. In Proceedings of the 5th Asian Conference on Computer Vision, Melbourne, Australia, 23–25 January 2002; pp. 499–506.
- Agrawal, M.; Davis, L.S. Camera calibration using spheres: A semi-definite programming approach. In Proceedings of the Ninth IEEE International Conference on Computer Vision, Nice, France, 13–16 October 2003; pp. 782–789.
- Wong, K.Y.K.; Mendonça, P.R.S.; Cipolla, R. Camera calibration from surfaces of revolution. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 147–161. [Google Scholar] [CrossRef]
- Ying, X.; Zha, H. Linear approaches to camera calibration from sphere images or active intrinsic calibration using vanishing points. In Proceedings of the Tenth IEEE International Conference on Computer Vision, Beijing, China, 15–21 October 2005; pp. 596–603.
- Zhang, H.; Zhang, G.; Wong, K.Y.K. Camera calibration with spheres: Linear approaches. In Proceedings of the International Conference on Image Processing, Genova, Italy, 11–14 September 2005; pp. 1150–1153.
- Zhang, G.; Wong, K.-Y.K. Motion estimation from spheres. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, New York, NY, USA, 17–22 June 2006; pp. 1238–1243.
- Zhang, H.; Wong, K.Y.K.; Zhang, G. Camera calibration from images of spheres. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 499–502. [Google Scholar] [CrossRef] [PubMed]
- Wong, K.-Y.K.; Zhang, G.; Chen, Z. A stratified approach for camera calibration using spheres. IEEE Trans. Image Process. 2011, 20, 305–316. [Google Scholar] [CrossRef] [PubMed]
- Jia, J. Study on Some Vision Geometry Problems in Muti-Cameras System; Xidian University: Xi’an, China, 2013. [Google Scholar]
- Steger, C. Unbiased Extraction of Curvilinear Structures from 2D and 3D Images; Utz, Wiss.: Munich, Germany, 1998. [Google Scholar]
- Fitzgibbon, A.; Pilu, M.; Fisher, R.B. Direct Least Square Fitting of Ellipses. IEEE Trans. Pattern Anal. Mach. Intell. 1999, 21, 476–480. [Google Scholar] [CrossRef]
© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).