Global Calibration of Multiple Cameras Based on Sphere Targets

Global calibration methods for multi-camera systems are critical to the accuracy of vision measurement. Proposed in this paper is such a method based on several groups of sphere targets and a precision auxiliary camera. Each camera to be calibrated observes a group of spheres (at least three), while the auxiliary camera observes all the spheres. The global calibration is achieved after each camera reconstructs the sphere centers in its field of view. In the process of reconstructing a sphere center, a parameter equation is used to describe the sphere projection model. Theoretical analysis and computer simulation are carried out to analyze the factors that affect the calibration accuracy. Simulation results show that the parameter equation can largely improve the reconstruction accuracy. In the experiments, a two-camera system calibrated by our method is used to measure a distance of about 578 mm, and the root mean squared error is within 0.14 mm. Furthermore, the experiments indicate that the method is simple to operate and flexible, especially for on-site multiple cameras without a common field of view.


Introduction
Three-dimensional (3D) vision systems have the advantages of high precision and good flexibility, so they are widely applied in various fields. The multi-sensor vision system (MVS) is often used because it has a larger measurement range than a single sensor. The MVS needs on-site global calibration after being installed. As one of the most important technical indices of an MVS, the measurement accuracy is directly influenced by the global calibration accuracy. In most practical applications of MVS, the structures of the measured objects and the environments are complex, which leads to a complex distribution of the vision sensors. The vision sensors may even have non-overlapping fields of view (FOV), which requires that the global calibration methods have high precision and good flexibility.
Most classical global calibration methods [1][2][3][4][5] rely on matching features in the common FOV of all the sensors and are not applicable in the case of non-overlapping FOV. To overcome the limitations of non-overlapping FOV, Luo [6] used a two-theodolite system, and Kitahara et al. [7] used a 3D laser-surveying instrument to accomplish their global calibrations. They used precision auxiliary instruments to reconstruct the feature points in the world coordinate system (WCS) to acquire the transformation matrix between the camera coordinate system (CCS) and the WCS. However, when the on-site space is narrow, the auxiliary instruments have blind zones, and there might not even be room for them. In the areas of video surveillance and motion tracking, Heng et al. [8], Pflugfelder et al. [9], Carrera et al. [10] and Esquivel et al. [11] used self-calibration methods, in which the cameras are globally calibrated by observing objects with specific structures in their FOV. In industrial measurement, there is little satisfactory scene information for the self-calibration process, and its accuracy usually cannot meet the requirements. Agrawal et al. [12], Takahashi et al. [13] and Kumar et al. [14] achieved global calibration by making each camera view the targets in a mirror. However, a clear target is not guaranteed to be observable by every camera in a complex MVS. The fixed constraints of multiple targets are used by Liu et al. [15] to calibrate multiple cameras. High accuracy can be achieved with this method, but the repeated pair-wise operations reduce the calibration accuracy of the MVS. Liu et al. [16] proposed a global calibration method based on skew laser lines.
This method is flexible and can deal with the cameras with different viewing directions, but there are operational difficulties when we use it to calibrate multiple on-site cameras. Liu et al. [17] and De et al. [18] used a one-dimensional target to calibrate MVS, but it is hard to process a long one-dimensional target to calibrate vision sensors at a long distance.
Focusing on complexly distributed multi-camera systems with non-overlapping FOV, we present a global calibration method. It is based on several groups of sphere targets, where a group consists of at least three spheres with no constraints on their relative positions. Each camera observes a group of spheres. At the same time, an auxiliary precision camera views all the spheres. The WCS coincides with the coordinate system of the auxiliary camera. Every camera to be calibrated and the auxiliary camera reconstruct the sphere centers of a group of spheres, so the transformation matrix from every CCS to the WCS can be calculated. The auxiliary camera is light and handy, and can be easily operated. Moreover, the sphere targets can be observed from different directions, so that any blind zones are greatly reduced. Besides, this global calibration method can be realized through a one-time operation. It avoids the heavy workloads and accuracy losses caused by repeated operations.
The paper is organized as follows: in Section 2, the calculation method of the sphere center reconstruction is first proved in detail. Then the calculation method of the transformation matrix is given, followed by the nonlinear optimization. Section 3 provides the accuracy analysis and experimental results. Both simulation and experimental data are provided to test and verify the presented technique. The conclusions are stated in Section 4.

Global Calibration Principle
The principle of global calibration for multiple cameras is shown in Figure 1. Only two groups of cameras are drawn for the sake of discussion. There is no common FOV between the two groups of cameras to be calibrated, and each of them can observe at least three spherical targets. The auxiliary camera can view all of the sphere targets (at least six). If there are three or more groups of cameras to be calibrated, all the sphere targets should be visible to the auxiliary camera.   The global calibration process works as follows:

1. Install multiple cameras whose intrinsic parameters have been obtained. Then, in the FOV of each camera, place at least three sphere targets so that they do not occlude each other. Fix a precision auxiliary camera that can view all the targets; its coordinate system is regarded as the WCS.

2. Each camera takes a few images of the targets in its FOV. Then move the targets several times and repeat taking images.

3. Reconstruct the sphere centers of each group of spheres in the corresponding CCSs and the WCS. Then calculate the transformation matrix from every CCS to the WCS using nonlinear optimization. Thus the global calibration is completed.

Sphere Center Reconstruction
The general equation of an ellipse has five degrees of freedom. However, the ellipse in the sphere projection is subject to two constraints, so it has only three degrees of freedom. Hence it can be represented by a parameter equation with only three parameters. The synthetic data show that the fitting accuracy of the parameter equation is obviously higher than that of the general equation. If appropriate parameters are chosen, the parameter equation is a linear expression of the three parameters, which can then be acquired by the linear least squares method. This calculation method is simple and has high accuracy.
In the following paragraphs, firstly, the equation F(x,y,z) = 0 of the conic surface with three parameters λ, µ and σ is established. Next, the equation f(x,y) = 0 of the ellipse curve is given according to the geometric relationship of the sphere projection. Then the three parameters λ, µ and σ are calculated by fitting a group of sampling points on the ellipse curve. Finally, the 3D coordinates of the sphere center in the CCS are acquired from the three parameters.

Sphere Projection Model
In the following paragraphs, we propose a parameter equation F(x,y,z) = 0 to describe the sphere projection model. The geometric relationship of the sphere projection is shown in Figure 2. Point O is the center of the camera, and O-xyz is the CCS (in millimeters). Point C is the principal point of the camera and the origin of the image coordinate system (in millimeters), whose x-axis and y-axis point in the same directions as the x-axis and y-axis of the CCS, respectively. Point O_S is the sphere center, and point O_S' is the intersection of the line OO_S and the image plane. The line OD is tangent to the sphere at point D. Point A is the intersection of the line OD and the image plane. The length of OC is the focal length f, and the sphere radius is known as R.
The sphere surface and the origin of the CCS, point O, determine a conic surface. Its vertex is O, each element of the conic surface is tangent to the sphere, and line OO_S is the symmetry axis of the conic surface. Let ∠DOO_S be θ, and let the unit direction vector along OO_S be S_0 = [cosα, cosβ, cosγ]^T, which satisfies the constraint cos²α + cos²β + cos²γ = 1. Let P(x,y,z) be any point of the conic surface; the angle between OP and S_0 is then θ, so we have the equation of the conic surface as:

(OP · S_0)² = |OP|²cos²θ (1)

Transforming Equation (1) to coordinate form and simplifying it, we have:

(xcosα + ycosβ + zcosγ)² = (x² + y² + z²)cos²θ (2)

Substituting the three parameters λ = cosα/cosγ, µ = cosβ/cosγ and σ = cosθ/cosγ into Equation (2), we have the parameter equation of the conic surface in the CCS:

(λx + µy + z)² = σ²(x² + y² + z²) (3)
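As a quick numerical sanity check of Equation (3), the following sketch builds the tangent cone of an arbitrary sphere (the center, radius and camera placement are illustrative values, not the paper's experimental data) and verifies that a point on the cone satisfies the parameter equation:

```python
import numpy as np

# Illustrative setup: camera at the origin of the CCS, sphere center O_S
# placed along an arbitrary direction (values assumed for the check).
center = np.array([120.0, -80.0, 1000.0])   # sphere center O_S in the CCS (mm)
R = 40.0                                    # sphere radius (mm)

d = np.linalg.norm(center)
s0 = center / d                             # unit vector S_0 along OO_S
cos_a, cos_b, cos_g = s0
theta = np.arcsin(R / d)                    # half-angle of the tangent cone

# The three parameters of Equation (3)
lam = cos_a / cos_g
mu = cos_b / cos_g
sig = np.cos(theta) / cos_g

# Build one cone element: rotate S_0 by theta about an axis perpendicular to it
u = np.cross(s0, [0.0, 0.0, 1.0])
u /= np.linalg.norm(u)
p = np.cos(theta) * s0 + np.sin(theta) * u  # unit direction of a cone element

# p must satisfy (lam*x + mu*y + z)^2 = sig^2 * (x^2 + y^2 + z^2)
lhs = (lam * p[0] + mu * p[1] + p[2]) ** 2
rhs = sig ** 2 * np.dot(p, p)
print(abs(lhs - rhs))  # ~0, up to floating-point error
```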

Ellipse Curve Fitting
The intersection of the conic surface and the image plane z = f generates the ellipse curve, so its equation in the CCS is:

(λx + µy + z)² = σ²(x² + y² + z²), z = f (4)

The origin of the image coordinate system is C, and its x-axis and y-axis point in the same directions as the x-axis and y-axis of the CCS, respectively. Eliminating z in Equation (4), we have the ellipse equation in the image coordinate system:

(λx + µy + f)² = σ²(x² + y² + f²) (5)

If the normalized focal lengths of the camera are known as fx and fy, and its principal point is known as (u0,v0), we have:

x = f(u − u0)/fx, y = f(v − v0)/fy (6)

where (x,y) (in millimeters) and (u,v) (in pixels) express the same point on the image plane. Substituting Equation (6) into Equation (5) and dividing both sides by f², we have the ellipse equation (in pixels):

(λ(u − u0)/fx + µ(v − v0)/fy + 1)² = σ²((u − u0)²/fx² + (v − v0)²/fy² + 1) (7)

If {(ui,vi) | i = 1,2,3,…,n} are known as the coordinates (in pixels) of a group of sampling points of the ellipse curve on an image, we can use the linear least squares method to fit the ellipse curve, Equation (7). Then we acquire the optimal solutions of the parameters λ, µ and σ.
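The fitting step above can be sketched as follows. Taking the positive square root of Equation (7) (valid for a sphere in front of the camera) gives, per sampling point, λa + µb − σ·sqrt(a² + b² + 1) = −1 with a = (u − u0)/fx and b = (v − v0)/fy, which is linear in the three parameters. A minimal implementation, assuming known intrinsics (the function name is ours, not the paper's):

```python
import numpy as np

def fit_sphere_params(u, v, fx, fy, u0, v0):
    """Fit lambda, mu, sigma of the ellipse parameter equation by
    linear least squares, using the linearized (square-root) form."""
    a = (np.asarray(u, dtype=float) - u0) / fx
    b = (np.asarray(v, dtype=float) - v0) / fy
    # Each row encodes: lam*a_i + mu*b_i - sig*sqrt(a_i^2 + b_i^2 + 1) = -1
    A = np.column_stack([a, b, -np.sqrt(a**2 + b**2 + 1.0)])
    rhs = -np.ones_like(a)
    (lam, mu, sig), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return lam, mu, sig
```

With noise-free synthetic contour points this recovers λ, µ and σ to machine precision; with noisy points it returns the least-squares optimum in one linear solve, which is the simplicity advantage claimed for the parameter equation.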
Figure 2. The sphere projection model.


Sphere Center Coordinate Calculation
In the right triangle ODO_S, we have |OO_S| = R/sinθ. Moreover, S_0 = [cosα, cosβ, cosγ]^T is defined as a unit direction vector along line OO_S, so we have:

OO_S = |OO_S| · S_0 = (R/sinθ)[cosα, cosβ, cosγ]^T (8)

It is not hard to have the equation set:

cosγ = 1/(1 + λ² + µ²)^0.5, cosα = λcosγ, cosβ = µcosγ, sinθ = (1 − σ²cos²γ)^0.5 (9)

Substituting the equation set Equation (9) into Equation (8), we know that the coordinates of the sphere center O_S in the CCS are

O_S = R/(1 + λ² + µ² − σ²)^0.5 · (λ, µ, 1)

If the intrinsic parameters of the camera and the sphere radius are both known, the sphere center can be reconstructed from a single image. The sphere center coordinates reconstructed from multiple images can also be averaged to reduce the reconstruction error.
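The closed-form center recovery above is a one-liner in code. A sketch (function name ours):

```python
import numpy as np

def sphere_center_from_params(lam, mu, sig, R):
    """Reconstruct the sphere center O_S in the CCS from the fitted cone
    parameters: O_S = R / sqrt(1 + lam^2 + mu^2 - sig^2) * [lam, mu, 1]^T."""
    scale = R / np.sqrt(1.0 + lam**2 + mu**2 - sig**2)
    return scale * np.array([lam, mu, 1.0])
```

Note that 1 + λ² + µ² − σ² = sin²θ/cos²γ > 0, so the square root is always real for a valid sphere projection.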


Transformation Matrix Calculation
As shown in Figure 3, all the spheres can be distributed to form an asymmetric 3D structure in the global calibration procedure. The distances between any sphere center and the others form a signature vector, which can be used to distinguish this sphere center from the others. Through matching signature vectors, we match the sphere centers reconstructed in the CCS and the WCS. In 3D space, the same sphere center can be described by two vectors, such as vector P in the CCS and vector Q in the WCS. In such a case, P and Q are called a pair of homonymous vectors. If three non-collinear sphere centers are reconstructed in both the CCS and the WCS, we get three pairs of homonymous vectors, from which the transformation matrix between the CCS and the WCS can be calculated. Let three non-collinear sphere centers be described as P_1, P_2 and P_3 in the CCS, and Q_1, Q_2 and Q_3 in the WCS, respectively. The transformation from the CCS to the WCS is defined as Q_i = R·P_i + T (i = 1, 2, 3), so the transformation matrix is H = [R, T; 0, 1]. The rotation matrix R is calculated by aligning the orthonormal frames constructed from the difference vectors Q_1Q_2 = Q_2 − Q_1, Q_2Q_3 = Q_3 − Q_2, P_1P_2 = P_2 − P_1 and P_2P_3 = P_3 − P_2. The translation vector is then T = Q_1 − R·P_1. Consequently, the transformation matrix from the CCS to the WCS is acquired.
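A minimal sketch of this frame-alignment construction (the helper name is ours; P and Q hold the three matched centers as rows):

```python
import numpy as np

def rigid_from_three_points(P, Q):
    """Compute R, T with Q_i = R @ P_i + T from three non-collinear sphere
    centers: P (3x3, rows P1..P3 in the CCS) and Q (3x3, rows Q1..Q3 in the
    WCS), by aligning orthonormal frames built from the difference vectors."""
    def frame(a, b):
        # Orthonormal frame from two non-parallel vectors a, b
        x = a / np.linalg.norm(a)
        z = np.cross(a, b); z /= np.linalg.norm(z)
        y = np.cross(z, x)
        return np.column_stack([x, y, z])
    Fp = frame(P[1] - P[0], P[2] - P[1])
    Fq = frame(Q[1] - Q[0], Q[2] - Q[1])
    R = Fq @ Fp.T          # maps the P-frame onto the Q-frame
    T = Q[0] - R @ P[0]    # translation from any matched pair
    return R, T
```

Under an exact rigid transform the recovered R and T are exact; with reconstruction noise this serves as the initial value for the nonlinear optimization described below in the paper.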


Nonlinear Optimization
In the practical calibration procedure, we acquire n pairs of homonymous vectors by fixing the cameras and placing the targets many times. Then the optimal solution of the transformation matrix H is calculated by using nonlinear optimization.
The objective function is:

F(R, T) = Σ_{i=1}^{n} ||Q̃_i − H·P̃_i||² (12)

with Q̃_i = [x_q, y_q, z_q, 1]^T and P̃_i = [x_p, y_p, z_p, 1]^T, where Q̃_i and P̃_i are respectively the homogeneous coordinates of Q_i and P_i. The rotation matrix R is a 3 × 3 matrix that must satisfy the orthogonality constraint, i.e., the equation set:

R^T·R = I (13)

Using the method of penalty functions, from Equations (12) and (13) we get the unconstrained optimal objective function:

F(R, T) + M·||R^T·R − I||² (14)

where the penalty factor M determines the orthogonal error of the rotation matrix R. Here M takes a value of 10 considering the error distribution. Equation (14) is solved by the Levenberg-Marquardt method. The transformation matrix calculated from Equations (10) and (11) is used as the initial value of the iterations to solve Equation (14). Then we get the optimal value of the transformation matrix.
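The penalty-function refinement can be sketched as below, with SciPy's Levenberg-Marquardt solver standing in for the paper's implementation (function name, penalty weight default and initialization are illustrative):

```python
import numpy as np
from scipy.optimize import least_squares

def refine_transform(R0, T0, P, Q, M=10.0):
    """Refine R, T over n pairs of homonymous vectors by minimizing
    sum ||Q_i - (R @ P_i + T)||^2 + M * ||R^T R - I||^2
    (penalty form of the constrained problem), via Levenberg-Marquardt.

    P, Q: (n, 3) arrays of matched sphere centers in the CCS and WCS.
    R0, T0: initial rotation and translation (e.g. from three centers).
    """
    def residuals(x):
        R = x[:9].reshape(3, 3)
        T = x[9:]
        fit = (Q - (P @ R.T + T)).ravel()                   # data term
        orth = np.sqrt(M) * (R.T @ R - np.eye(3)).ravel()   # penalty term
        return np.concatenate([fit, orth])

    x0 = np.concatenate([R0.ravel(), T0])
    sol = least_squares(residuals, x0, method='lm')
    return sol.x[:9].reshape(3, 3), sol.x[9:]
```

Parameterizing R by its nine entries with an orthogonality penalty mirrors Equation (14); an alternative design is to parameterize R by a Rodrigues vector, which enforces orthogonality exactly and removes the penalty term.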

Analysis and Experiment
This section first discusses the factors that affect the calibration accuracy. A mathematical description and computer simulations are both given to analyze the calibration errors. Finally, real data are given to evaluate the calibration accuracy.

Accuracy Analysis
The factors affecting the sphere center reconstruction are discussed here. According to Section 2.1.3, we have:

oo_s = R/(1 + λ² + µ² − σ²)^0.5 · (λ, µ, 1) (15)

According to Equation (15), the partial derivative with respect to λ is given by Equation (16). Let ∆λ be the error caused by noise, and ∆oo_s be the resulting variation of the sphere center reconstruction, so we get Equation (17). Let |∆oo_s| be the error of the sphere center reconstruction; from Equation (17) we have Equation (18). From Equation (15), we have the partial derivatives of Equation (19). Substituting Equation (19) into Equation (18), we have Equation (20), where L = (x² + y² + z²)^0.5 = |oo_s| is the distance between the sphere center and the camera center. When considering all the errors ∆λ, ∆µ and ∆σ, through the same proof procedure as above, we get Equation (21). From Equation (21), we conclude that the larger R is, the smaller |∆oo_s| is, and the smaller L is, the smaller |∆oo_s| is. Therefore, the calibration accuracy can be improved by enlarging the radius of the sphere targets or placing the sphere targets closer to the camera.

Synthetic Data
In this section, we first analyze some factors that affect the sphere center reconstruction accuracy. Then we analyze the influence of the nonlinear optimization on the calibration accuracy.

Factors That Affect the Sphere Center Reconstruction Accuracy
To verify the effectiveness of the sphere center reconstruction method, we simulate it in MATLAB. Five factors are considered: the ellipse fitting method, image noise level, sphere radius, number of sampling points and distance between the sphere and the camera. Their influences on the reconstruction error are studied. In the experiments, "general equation fitting" means using the general equation of the ellipse to fit it, and "parameter equation fitting" means using the parameter equation of the ellipse, Equation (7), to fit it. The experiment for each factor is repeated 1000 times, and the reconstruction accuracy is evaluated by the root mean squared (RMS) error of the sphere center. The sphere radius is 40.000 mm in Figure 4a,c,d. The number of sampling points in Figure 4a,b,d is 600. Gaussian noise with 0 mean and 0.5 standard deviation is added to the image points in Figure 4b,c,d. The focal length is 24 mm and the image resolution is 1024 × 1024.
As shown in Figure 4a-d, improving the imaging quality, increasing the sphere radius, increasing the number of sampling points and decreasing the distance between the sphere and the camera can all improve the reconstruction accuracy. However, when the radius is more than 35 mm, the number of sampling points is more than 600 or the distance between the sphere and the camera is less than 1000 mm, their influences are very small. Comparing the two fitting methods, we conclude that the fitting accuracy of the parameter equation is higher than that of the general equation. Finally, two spheres with a constant distance between them are projected from four different directions to generate four images. The same Gaussian noise is added to each one. Each image is used to reconstruct the two sphere centers and calculate their distance, and the four distances are averaged as the simulation result of multiple images. The experiment is repeated 1000 times, and the reconstruction accuracy is evaluated by the RMS error of the sphere center distance. The sphere radius is 40.000 mm, and the number of sampling points is 600. The focal length is 24 mm, and the real distance between the two sphere centers is 400.000 mm. As shown in Figure 4e, the reconstruction accuracy of four images is higher than that of a single image.
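The benefit of averaging over four images can be illustrated with a toy Monte Carlo (all values assumed for illustration, not the paper's simulation data): each image yields a distance estimate corrupted by zero-mean noise, and averaging the four estimates reduces the RMS error.

```python
import numpy as np

rng = np.random.default_rng(1)
true_dist = 400.0                      # assumed true center distance (mm)
# 1000 trials, 4 noisy per-image distance estimates per trial
est = true_dist + 0.2 * rng.standard_normal((1000, 4))
rms_single = np.sqrt(np.mean((est[:, 0] - true_dist) ** 2))
rms_avg = np.sqrt(np.mean((est.mean(axis=1) - true_dist) ** 2))
print(rms_single, rms_avg)  # averaging roughly halves the RMS error
```

For independent zero-mean noise, averaging n estimates scales the RMS error by about 1/sqrt(n), consistent with the trend in Figure 4e.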

Effect of Nonlinear Optimization on Calibration Accuracy
In this subsection, we analyze the influence of the nonlinear optimization on the calibration accuracy through computer simulation. First, two cameras are globally calibrated by viewing three spherical targets at the same time, and the calibration results are calculated without optimization. Then, the two cameras are globally calibrated by viewing 15 sphere targets at the same time, and the calibration results are calculated through nonlinear optimization. The two kinds of results are then compared.

The position relationship of the two cameras is set as Rlr = [0.8, 0.02, 0.1]^T and Tlr = [-50, -400, -100]^T. The rotation vector is expressed as a Rodrigues vector here. The radius of the sphere targets is 40 mm, and the distance between the camera and the sphere center is about 1000 mm. The calibration accuracy is expressed by the relative errors of Rlr and Tlr, that is, |ΔRlr|/|Rlr| and |ΔTlr|/|Tlr|. Gaussian noise with 0 mean and standard deviation 0.05-0.5 is added to the image points. The experiment is repeated 1000 times, and the average of the relative error is regarded as the calibration error. As shown in Figure 5, the calibration results calculated through nonlinear optimization are more accurate than those without optimization.

Sphere Center Distance Measurement
In this experiment, the sphere radii and the distances between the sphere centers have been accurately measured in advance as real values. A single camera is used to reconstruct two sphere centers and calculate their distance. In Section 2.1.3, we proved that a single camera can reconstruct a sphere center through a single image. When two spheres are both in the FOV of a camera, we use it to take an image of the two spheres, and the two sphere centers can then be reconstructed from the image. With the 3D coordinates of the two sphere centers in the CCS, we can calculate their distance as the measurement result. Ten measurement results are used to calculate the RMS error to evaluate the accuracy of the sphere center reconstruction.
The sphere targets and their serial numbers are shown in Figure 6a. The precise measurement shows that the diameters of sphere targets 1 and 2 are 40.325 mm and 40.298 mm, respectively, and the 3D coordinates of their centers are respectively (0.010, 0.019, 0.000) and (113.219, 0.022, 0.000). Thus, the standard value of the distance between the centers of spheres 1 and 2 is 113.229 mm.

The resolution of the high-precision camera used in the experiment is 4256 × 2832, its focal length is 24 mm, and its angle of view is 74° × 53°. The intrinsic parameters of this camera are given in Equation (23), and the "Plumb Bob" distortion model [19] is used to obtain its distortion coefficients:

k_c = [0.09616, −0.07878, −0.00039, 0.00006, −0.00896] (24)

Forty images are captured, and one of them is shown in Figure 6b. All the images are divided into 10 groups, and every group includes four images. Every image is used to calculate the distance between the centers of spheres 1 and 2. The four values calculated from every group of images are averaged, and the average value is regarded as a measurement result. All ten measurement results are shown in Table 1.

Global Calibration Results
As shown in Figure 7a, two groups of sphere targets are used to calibrate two cameras without a common FOV to verify the effectiveness of the global calibration method. The two groups of targets are respectively placed in the views of the two cameras. Each camera observes the targets in its own FOV, and an auxiliary precision camera observes all the sphere targets.


The targets in the experiment are white matte ceramic spheres. Their serial numbers are shown in Figure 7b,c. The diameters of the spheres are precisely measured in advance, and the results are shown in Table 2. The auxiliary camera used in the experiment is the camera used in Section 3.3.1 above, and its intrinsic parameters are shown in Equations (23) and (24). The cameras to be calibrated are common industrial cameras with a resolution of 1360 × 1024; their intrinsic parameters are given in Equation (25). All the cameras are fixed, and the two groups of sphere targets are placed ten times in appropriate positions. Ten groups of images are captured, and one of them is shown in Figure 8. The transformation matrices from the left and right CCSs to the WCS are respectively calculated using Equation (14). The results are given in Equation (26).

Global Calibration Accuracy Evaluation
To verify the accuracy of the global calibration in Section 3.3.2 above, the two cameras make up a binocular vision system as shown in Figure 9a. Its global calibration result is shown in Equation (26). Sphere target 1 and sphere target 2 are fixed by a rigid rod, and are respectively laid in the views of the left and right cameras. Their diameters are shown in Table 2. The auxiliary camera measures the distance of the two sphere centers, and the average value of the results is regarded as the standard value of the distance. The two sphere centers are respectively reconstructed by the two cameras, so that their distance can be calculated based on the global calibration results. The distances measured by the binocular vision system are compared with the standard value to evaluate the accuracy of the global calibration.
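The distance evaluation step can be sketched as follows: each camera reconstructs one sphere center in its own CCS, both centers are mapped into the WCS with the calibrated 4 × 4 transformation matrices, and the Euclidean distance is compared against the standard value (the function name and the numbers in the usage below are illustrative, not the paper's calibration results):

```python
import numpy as np

def measured_distance(Hl, Hr, Pl, Pr):
    """Distance between two sphere centers reconstructed by the left and
    right cameras: map each 3D center into the WCS with the corresponding
    4x4 homogeneous transformation matrix, then take the Euclidean norm."""
    ql = Hl @ np.append(Pl, 1.0)   # left-CCS center -> WCS
    qr = Hr @ np.append(Pr, 1.0)   # right-CCS center -> WCS
    return np.linalg.norm(ql[:3] - qr[:3])

# Illustrative usage with made-up transforms (left CCS shifted 100 mm in x,
# right CCS coinciding with the WCS):
Hl = np.eye(4); Hl[:3, 3] = [100.0, 0.0, 0.0]
Hr = np.eye(4)
Pl = np.array([0.0, 0.0, 500.0])       # center seen by the left camera
Pr = np.array([100.0, 0.0, 1078.0])    # center seen by the right camera
d = measured_distance(Hl, Hr, Pl, Pr)
print(d)  # 578.0 for these made-up values
```

Comparing such distances against the auxiliary camera's standard value over repeated placements yields the RMS error reported in Table 4.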

One of the images captured by the auxiliary camera is shown in Figure 9b. The ten measurement results are shown in Table 3. The two sphere targets are placed in appropriate positions ten times, and one group of the images captured by the binocular vision system is shown in Figure 9c,d. The ten distances measured by the binocular vision system and the RMS error are shown in Table 4.

Conclusions
In this paper, we have developed a new global calibration method. In the calibration process, an isotropic sphere target can be simultaneously observed by different cameras from different directions, so blind zones are reduced. There is no restriction on the position relationship between any two spheres, so the method is flexible in complex on-site environments. Moreover, a one-time operation can globally calibrate all the cameras without a common FOV, which avoids the heavy workloads and accuracy losses caused by repeated operations. A parameter equation is also used to fit the ellipse curve to improve the global calibration accuracy. Our experiments show that the proposed method has the advantages of simple operation, high accuracy and good flexibility. It can conveniently realize the global calibration of complexly distributed cameras without a common FOV. In practical applications of this method, images with high-quality ellipse contours are necessary for high calibration accuracy.