On-Site Global Calibration of Mobile Vision Measurement System Based on Virtual Omnidirectional Camera Model

The mobile vision measurement system (MVMS) is widely used for location and attitude measurement during aircraft takeoff and landing, and its on-site global calibration, which obtains the transformation between the MVMS coordinate system and the local-tangent-plane coordinate system, is crucial to high-accuracy measurement. In this paper, several new ideas are proposed to realize the global calibration of the MVMS effectively. First, the MVMS is regarded as azimuth and pitch measurement equipment with a virtual single image plane at focal length 1. Second, a new virtual omnidirectional camera model constructed from three mutually orthogonal image planes is put forward, which effectively resolves the magnification of the global calibration error when the angle between the virtual single image plane and the view axis of the system becomes small. Meanwhile, an expanded factorial linear method is proposed to solve the global calibration equations, which effectively restrains the influence of calibration data error. Experimental results with synthetic data verify the validity of the proposed method.


Introduction
The location and attitude of aircraft play an important role in airborne remote sensing for motion parameter estimation, performance evaluation, verification of aircraft design theory, flight safety assurance, and so on [1]. The mobile vision measurement system (MVMS) is effective equipment for the dynamic measurement of location and attitude. Usually, the MVMS mainly consists of a zoom-lens camera with a long focal length and a pan/tilt servo unit, and it is usually mounted on a carrier vehicle that can be laid out flexibly on the spot according to measurement requirements. Transforming the aircraft location and attitude from the MVMS coordinate system to the local-tangent-plane (LTP) coordinate system is a primary task, as it yields parameters with direct physical meaning for users; this task is called the global calibration of the MVMS.
The existing global calibration methods can be divided into two categories: indoor and outdoor global calibration methods.
The indoor global calibration methods mainly include the following three categories. (1) Methods based on large-range measuring devices [2][3][4][5][6]. Lu and Li [7] proposed a high-accuracy global calibration method in which the target is placed in the camera's field of view (FOV), and the three-dimensional (3D) coordinates of the marking points on the target are measured by a large-scale 3D coordinate measurement system consisting of two electronic theodolites, called TCMS. The transformation matrix between the camera and TCMS is then obtained by solving the relationship between the camera and the target. A new global calibration method using a laser tracker is described by Zhao et al. [8]. In this method, a laser tracker scans the screens at both sides of the measuring site, and two planes are fitted using the tracker's built-in software. The ground coordinate system is established and the coordinates of the planar calibration points are obtained by the laser tracker. This method achieves a measurement accuracy of 0.45 mm in an 8000 mm × 6000 mm working range. Although this method provides high-accuracy global calibration, the laser tracker is expensive and difficult to manipulate in a small space. (2) Methods based on auxiliary markers and supporting cameras. Zhao et al. [9] described a non-overlapping camera calibration method that finds the transformations between AR (Augmented Reality) markers and the calibrated cameras using a supporting camera and a chessboard; the transformations between the target cameras are then obtained by estimating the transformations between the auxiliary markers. However, this type of method usually requires the supporting camera to have a sufficiently wide FOV to capture all the auxiliary markers in the scene, and the calibration accuracy degrades when the two cameras to be calibrated are far apart, because of the limited resolution of the supporting camera. (3) Methods based on laser projection [10][11][12][13][14][15][16]. Liu et al. [10] proposed a multi-vision system calibration method based on a laser projector that projects a straight laser line onto a planar target moved across the FOV of all cameras; the cameras' external parameters are then calculated from the collinearity or coplanarity constraints of the laser line after the planar target is moved several times. Zou et al. [11,12] installed a laser pointer on a calibration target and proposed a method of calibrating a camera by pointing toward and away from the camera. Large-scale global calibration is possible since the laser has a considerable projection distance, but the laser linearity error affects the calibration accuracy because it is difficult to maintain the straightness or flatness of the laser over a large operating distance.
Outdoor global calibration methods mainly include the following four categories. (1) Methods based on a geometry model [17,18]. Location and attitude are determined by angle measurement and transferred known points. The geometry model is constructed from the measured data, and the error equation is established by linearizing the model errors. Iterative computation is then adopted to obtain the location and attitude in the fixed coordinate system. (2) Methods based on a celestial body. In these methods, location and attitude are obtained by celestial body observation [19][20][21][22][23]. Owing to their high positioning accuracy and reliability, these methods are applied widely in navigation, aviation, aerospace, deep-space exploration, and so on. Here, orientation refers to obtaining the true azimuth of the direction line by astronomical observation, using the Sun altitude method, the solar hour angle method, or the Polaris hour angle method. The deficiency is that such methods are susceptible to climatic variations, such as overcast or rainy days. (3) Methods based on satellites [24][25][26]. These methods possess the advantage of high location accuracy; centimeter-level positioning accuracy can be achieved using Real Time Kinematic (RTK) measurement technology [27][28][29]. However, they cannot work when the GPS signals are blocked. (4) Methods based on inertial navigation [30][31][32][33][34][35]. These methods are the most popular for navigation, especially in the aerospace field, and have the special advantage of working autonomously. However, they also have the inevitable shortcoming of accumulative errors. Both satellite-based and inertial-navigation-based global calibration devices must be mounted on the MVMS, and this is not easy to carry out because their location and attitude in the base coordinate system of the MVMS need to be calibrated in advance.
Owing to the long distance and large range of on-site measurements, high-precision global calibration requires the MVMS to obtain enough control points measured in the local geodetic coordinate system over a long distance and large space. If a calibration target is adopted as the aid device, a large target is required, which is difficult to fabricate while maintaining precision and difficult to lay out in the field; it also increases the time overhead of global calibration. Precision measurement devices and laser projectors are not appropriate for working in the field. Although the principles of the above-mentioned indoor global calibration methods are similar, they are not suitable for long-distance and large-scale use outdoors. The shortcomings of the global calibration methods that fix a GPS device or inertial navigation device on the MVMS have also been pointed out. Therefore, in this paper, the MVMS is regarded as a virtual camera with a virtual single image plane driven by azimuth and pitch. By wide-range angular scanning and zoom imaging, it can accurately aim at control points in the field, obtain the azimuth and pitch angles, and convert them to virtual image coordinates on the single virtual image plane with the focal length normalized. Compared with methods that rely on image extraction of natural scenery control points through the wide-range zoom of the camera, this approach has two outstanding advantages. First, it makes full use of the small distortion and high image quality at the camera center to improve the acquisition accuracy of the control points. Second, because the focal length is normalized, it effectively avoids the problem that the zoom focal length is difficult to obtain with high precision when the feature-point distance changes during zoom imaging. It is thus beneficial to calibration accuracy. Here, the on-site control points are formed by measuring the feature points of natural scenery within an appropriate spatial range in the field using Real Time Kinematic (RTK) GPS measurement [36,37], and their coordinates are 3D coordinates in the Earth-centered, Earth-fixed (ECEF) coordinate system. On this basis, the virtual normalized image coordinates of the control points can be obtained by manipulating the MVMS to aim at the control points measured by GPS. However, there are some issues when the MVMS is regarded as a virtual camera with a single virtual image plane. Specifically, when the angle between the virtual single image plane and the view axis of the system becomes small, the virtual normalized image coordinate error enlarges sharply. To solve this problem, the MVMS is further regarded as a virtual omnidirectional camera with a focal length of 1, composed of three mutually orthogonal virtual image planes. This model solves the problem that the angle measurement error of the virtual single image plane camera model increases sharply with the scanning range. Simultaneously, the coefficient matrix of the linear homogeneous equation set (usually called the measurement matrix) is a nonlinear function of the measurement data in the conventional linear method [38], so the measurement matrix is easily affected by minor errors. To solve this problem, an expanded factorization linear method is proposed to effectively suppress the influence of calibration data error on the global calibration accuracy of the new virtual omnidirectional camera model. A new measurement matrix is constructed in this method whose elements are control point coordinates, virtual image coordinates, or constants; the improvement obtained is that no error amplification of the measurement data is introduced. Finally, the results in the LTP coordinate system are solved using the inherent geometric constraint expressed by the longitude and latitude.
The rest of this paper is organized as follows. In Section 2, the notation of the coordinate systems used is described. In Section 3, the detailed calibration method is presented. In Section 4, the experimental results are given; both simulations and outdoor experiments are used to validate the proposed method. Finally, Section 5 concludes the paper.

Coordinate System Notation
Several coordinate systems are involved in this paper. The 3D coordinate systems are listed as follows.
• o_e − x_e y_e z_e is an ECEF coordinate system denoted by 3D rectangular coordinates [39].
• o_p − x_p y_p z_p is a translated ECEF coordinate system. The order of magnitude of the control point coordinates in o_e − x_e y_e z_e is enormous compared to the corresponding virtual image coordinates. The centroid o_p of all control points in o_e − x_e y_e z_e is computed, and o_p − x_p y_p z_p is the translation of o_e − x_e y_e z_e from o_e to o_p. This effectively lowers the order of magnitude of the control point coordinates.
• o − L_g B_g H_g is the geodetic coordinate system denoted by longitude, latitude, and ellipsoid height [39].
• o − x_T y_T z_T is an LTP coordinate system. The right-handed variant denoted by East, North, and Up (ENU) coordinates is used [40]. The x, y, and z axes are defined by the directions of East, North, and vertically upward from the local level, respectively. They serve to represent the location and attitude states commonly used in the aviation domain.

Method Overview
The 3D control point coordinates in o_e − x_e y_e z_e are measured by GPS and then transferred to o_p − x_p y_p z_p. The MVMS is regarded as a virtual omnidirectional camera, which records the azimuth and pitch of the two-degree-of-freedom turntable while aiming at a 3D control point. The azimuth and pitch are transferred to virtual image coordinates by the virtual omnidirectional camera model. The expanded factorial linear equation set is constructed between the virtual image coordinates and the 3D control point coordinates in o_p − x_p y_p z_p. The solution, which is the location and attitude of the MVMS in o_p − x_p y_p z_p, is obtained by the expanded factorial linear method.
Finally, there is an inherent geometric constraint between o_p − x_p y_p z_p and o − x_T y_T z_T, which is expressed by the longitude and latitude in o_e − x_e y_e z_e. The location and attitude of the MVMS in o − x_T y_T z_T are obtained from this constraint.
The method flowchart is shown in Figure 1.


Virtual Omnidirectional Camera Model
The system comprises a two-degree-of-freedom turntable and an auto-long-zoom camera, whose measuring distance is in the range 50-5000 m. The zoom camera is mounted in the turntable's inner frame, and its optical axis is collinear with the pitch direction. The system is considered a virtual omnidirectional camera with three mutually orthogonal image planes π_z, π_x, and π_y that transfer the real azimuth α and pitch β to virtual image coordinates, as shown in Figure 2.
The virtual image coordinates are used in homogeneous form, so a virtual image coordinate is defined up to a non-zero scale factor. In particular, in our virtual omnidirectional camera, multiplying a homogeneous coordinate by −1 is equivalent to imaging on a virtual image plane with "focal length −1". There are therefore three further virtual image planes: π_x' with f_x = −1, π_y' with f_y = −1, and π_z' with f_z = −1. In total there are six mutually orthogonal virtual image planes, and they form a cube.
Moreover, π_x and π_x', π_y and π_y', and π_z and π_z' are equivalent in the virtual imaging process in projective geometry when homogeneous coordinates are used. Take π_z and π_z' as an instance. In Figure 3, the control point p = (x_c, y_c, z_c)^T is projected on the virtual image planes π_z and π_z' separately, giving the virtual image coordinates P_πz = (X_z, Y_z)^T and P_πz' = (−X_z, −Y_z)^T. Their homogeneous coordinates are (X_z, Y_z, 1)^T and (−X_z, −Y_z, −1)^T, which are identical as homogeneous coordinates, so either can be used for global calibration. In other words, π_z and π_z' are equivalent virtual image planes in the virtual omnidirectional camera. Likewise, the three mutually orthogonal virtual image planes π_x, π_y, and π_z are equivalent to the six mutually orthogonal virtual image planes π_x and π_x', π_y and π_y', π_z and π_z'.
The o − x_c y_c z_c coordinate system is fixed in the turntable. Its origin o is the rotation center of the turntable. The plane x_c o z_c is parallel to the horizontal angle encoder. The x_c axis points to the zero position of the horizontal angle encoder. The y_c axis is aligned with the vertical axis pointing downwards and is thus perpendicular to the horizontal angle encoder. The z_c axis is determined by the right-hand rule. The virtual image planes π_x, π_y, and π_z are located in the planes x_c = 1, y_c = 1, and z_c = 1, respectively. A 2D rectangular coordinate system O_z − XY is established on the plane π_z. The origin O_z is the intersection of the z_c axis and the π_z plane. The X and Y axes are aligned with the x_c and y_c axes, respectively. The relationship between the virtual image point P_πz = (X_z, Y_z)^T and the 3D point p = (x_c, y_c, z_c)^T in o − x_c y_c z_c is expressed as
X_z = x_c / z_c, Y_z = y_c / z_c. (1)
The direction of azimuth α = 0 is aligned with the positive direction of the x_c axis. Looking down from the top, the azimuth α ∈ [0, 360)° increases clockwise. The direction determined by an arbitrary azimuth and pitch β = 0 is perpendicular to the normal vector of the horizontal angle encoder. The pitch β varies in the range (−90, 90)°, and aiming at a control point below the horizontal angle encoder gives a positive pitch β. The system records the azimuth α and pitch β while aiming at the control point. It is not convenient to compute location and attitude directly from azimuth α, pitch β, and the 3D control points, so α and β are transferred to virtual image coordinates on the virtual image plane π_z by Equation (2), as shown in Figure 4:
X_z = cos α / sin α, Y_z = tan β / sin α. (2)


A 2D rectangular coordinate system O_x − YZ is established on the plane π_x. Analogously, azimuth α and pitch β are transferred to virtual image coordinates on the virtual image plane π_x:
Y_x = tan β / cos α, Z_x = tan α. (3)
A 2D rectangular coordinate system O_y − XZ is established on the plane π_y. Analogously, azimuth α and pitch β are transferred to virtual image coordinates on the virtual image plane π_y:
X_y = cos α / tan β, Z_y = sin α / tan β. (4)

Expanded Factorial Linear Transform

Direct Linear Transform of Single Image Plane π_z
X_w = (x_wi, y_wi, z_wi, 1)^T is the homogeneous coordinate of the ith 3D control point, (X_zi, Y_zi, 1)^T is the ith virtual image homogeneous coordinate on π_z, and P_ij is the element in the ith row and jth column of the virtual camera projection matrix P. The projection equation of the virtual single image plane π_z camera model is then established. The vector p of the entries of P is computed from the resulting equation set, with the measurement matrix M constructed from the control point coordinates and virtual image coordinates. SVD decomposition is used to solve the least-squares solution of Equation (7).
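The measurement matrix of Equation (7) is not reproduced in this extraction, so the following is a generic DLT-style sketch of the SVD step the text describes: stack two homogeneous rows per correspondence into a 2n × 12 matrix and take the right singular vector of the smallest singular value. The row layout is the standard DLT one and is an assumption:

```python
import numpy as np

def dlt_projection_matrix(points_3d, points_2d):
    """Estimate the 3x4 projection matrix P from n >= 6 correspondences
    by solving M p = 0 in the least-squares sense with SVD.

    points_3d: (n, 3) control points; points_2d: (n, 2) virtual image
    coordinates on pi_z."""
    rows = []
    for (xw, yw, zw), (X, Y) in zip(points_3d, points_2d):
        Xw = [xw, yw, zw, 1.0]
        rows.append([*Xw, 0, 0, 0, 0, *[-X * v for v in Xw]])
        rows.append([0, 0, 0, 0, *Xw, *[-Y * v for v in Xw]])
    M = np.asarray(rows)
    # The right singular vector of the smallest singular value
    # minimizes ||M p|| subject to ||p|| = 1.
    _, _, Vt = np.linalg.svd(M)
    return Vt[-1].reshape(3, 4)
```

The solution is defined only up to scale and sign, which is why the paper works with homogeneous coordinates throughout.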

Factorial Linear Transform of the Virtual Single Image Plane π_z Camera Model
The deficiency of the direct linear transform is the poor estimation that results from even minor errors in the measurement matrix M. The cause of this symptom is that the measurement matrix M is a nonlinear function of the measurement data, which amplifies the measurement error and makes M seriously biased from the true measurement matrix.
The factorial linear method decreases the nonlinearity with respect to the measurement data, which restrains the bad influence on the estimation result. Instead of solving the estimation problem with the measurement matrix directly, the measurement matrix is decomposed into the product of two factorial matrices, whose elements are 3D control point coordinates, virtual image coordinates, or constants. Intermediate variables are then introduced to construct a new measurement matrix built only from measurement data and constants. Finally, the estimation results are obtained by solving the least-squares solution of the equation set constructed from the new measurement matrix. The number of factorial matrices is equal to the multiplicity of the linear estimation. The factorial linear method is not merely a linear method: by reducing the nonlinearity of the measurement matrix with respect to the measurement data, it effectively restrains the influence of measurement data error on the estimation result.
The factorial linear transform of the projection matrix P of the virtual single image plane π_z camera model is undertaken in three steps.

1.
The measurement matrix of the projection matrix P is deduced from the virtual single image plane π_z camera model. It is decomposed into two factorial matrices.

2.
The intermediate variable g = B p (with B of size 8n × 12) is introduced to build the new measurement matrix. Under the condition h = 1, the least-squares solution of the equation set is solved.
The analogous constructions hold for the other image planes. For π_x, p is computed from the equation set with measurement matrix M_πx, and the measurement matrix of the projection matrix P is decomposed into two factorial matrices. For π_y, the projection equation of the virtual single image plane π_y camera model is established, p is computed from the equation set with measurement matrix M_πy, and the measurement matrix is likewise decomposed into two factorial matrices. Because the projection matrix of each virtual single image plane is identical, the π_x, π_y, and π_z projection equations are taken together as the projection equation of the virtual omnidirectional camera.
By introducing the intermediate variable g_πx = B_πx p, the new measurement sub-matrix of the image plane π_x is built in the virtual omnidirectional camera model projection equation. Analogously, the intermediate variable g_πy = B_πy p gives the new measurement sub-matrix of π_y, and g_πz = B_πz p gives that of π_z. The measurement matrix M_new,x,y,z is constructed from the three measurement sub-matrices. Under the condition h = 1, the least-squares solution of the equation set is solved.
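The full expanded factorial construction (the intermediate variables g and the enlarged system) is not reproduced here. As a simplified sketch under assumed index conventions, the three per-plane projection equations can at least be stacked into a single homogeneous system and solved jointly, which is the "taken together" step the text describes:

```python
import numpy as np

# Image-plane conventions (an assumption): coordinates on a plane are
# (p_i / p_k, p_j / p_k) with p = P @ Xw_h and (i, j, k) as below.
PLANES = {"pi_z": (0, 1, 2), "pi_x": (1, 2, 0), "pi_y": (0, 2, 1)}

def plane_rows(Xw_h, uv, axes):
    """Two homogeneous rows (in the 12 unknowns of P, row-major) for
    one observation (u, v) on one virtual image plane."""
    i, j, k = axes
    u, v = uv
    r1, r2 = np.zeros(12), np.zeros(12)
    r1[4 * i:4 * i + 4] = Xw_h
    r1[4 * k:4 * k + 4] = -u * np.asarray(Xw_h)
    r2[4 * j:4 * j + 4] = Xw_h
    r2[4 * k:4 * k + 4] = -v * np.asarray(Xw_h)
    return r1, r2

def solve_omnidirectional(observations):
    """observations: list of (Xw_h, uv, plane_name) over the three
    planes; returns P (3x4), up to scale, via SVD."""
    rows = []
    for Xw_h, uv, name in observations:
        rows.extend(plane_rows(Xw_h, uv, PLANES[name]))
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)
```

Because the same P serves all three planes, observations from any plane contribute rows to one system, which is what widens the usable angular range relative to a single image plane.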
By introducing the intermediate variable g π x = B π x p, the new measurement submatrix of the image plane π x is built in the virtual omnidirectional camera model projection equation: Analogously, by introducing the intermediate variable g π y = B π y p, the new measurement sub-matrix of the image plane π y is built in the virtual omnidirectional camera model projection equation: Analogously, by introducing the intermediate variable g π z = B π z p, the new measurement sub-matrix of the image plane π z is built in the virtual omnidirectional camera model projection equation: The measurement matrix M new,x,y,z is constructed by three measurement sub-matrices: In the condition of h = 1, the least-squares solution of the equation set is solved as

Linear Solution of Location and Attitude
The projection matrix P is recovered from the vector p. The projection matrix P is the product of the intrinsic matrix and the extrinsic matrix:
P = AR[I | −C]. (25)
The intrinsic matrix A is an upper triangular matrix. R is the rotation matrix from the camera coordinate system to o_p − x_p y_p z_p. C is the origin of the camera coordinate system in o_p − x_p y_p z_p.

Solution of the camera location C
Letting P_1 = AR and P_2 = −ARC, Equation (25) can be expressed as P = [P_1 | P_2]. C is then obtained by C = −P_1^(−1) P_2.
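The relation C = −P_1^(−1) P_2 follows directly from P_2 = −ARC = −P_1 C. A small sketch:

```python
import numpy as np

def camera_center(P):
    """Recover C from P = [P1 | P2] with P1 = A R and P2 = -A R C,
    so that C = -P1^{-1} P2 (solved without forming the inverse)."""
    P1, P2 = P[:, :3], P[:, 3]
    return -np.linalg.solve(P1, P2)
```

Using `np.linalg.solve` instead of an explicit inverse is numerically preferable when P_1 is close to singular.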

Solution of A and R
Since the intrinsic matrix A is upper triangular and the matrix R is orthogonal, A and R can be obtained by RQ decomposition of P_1.
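A sketch of the RQ step using `scipy.linalg.rq`. The sign normalization, making the diagonal of A positive by absorbing a ±1 diagonal matrix into both factors, is a common convention not spelled out in the text:

```python
import numpy as np
from scipy.linalg import rq

def intrinsic_rotation(P1):
    """Factor P1 = A R with A upper triangular and R orthogonal, via
    RQ decomposition. A sign-flip matrix S (with S @ S = I) makes the
    diagonal of A positive and is absorbed into both factors."""
    A, R = rq(P1)
    S = np.diag(np.sign(np.diag(A)))
    return A @ S, S @ R
```

With the positive-diagonal convention the factorization of a nonsingular P_1 is unique, so A and R are recovered exactly when P_1 truly is such a product.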

Nonlinear Optimization
Nonlinear optimization is undertaken to obtain a more accurate solution, using the linear solution as the initial value. One estimate of each control point in o − x_c y_c z_c is computed from C and the measured azimuth α and pitch β; a second estimate is calculated from X_w and R. Levenberg-Marquardt optimization is applied to the objective function, which minimizes the mean-square error between the two estimates.
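A simplified sketch of the refinement step with `scipy.optimize.least_squares` in Levenberg-Marquardt mode. The parameterization (a Rodrigues rotation vector plus the center C) and the direction-based residual are assumptions standing in for the paper's objective on the two control point estimates:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine(params0, Xw, dirs):
    """Refine the rotation (Rodrigues vector, first 3 entries) and the
    center C (last 3 entries) by Levenberg-Marquardt.

    Xw: (n, 3) control points; dirs: (n, 3) unit view directions
    reconstructed from the measured azimuth/pitch. The residual compares
    the direction predicted from (R, C) with the measured one, a
    simplified stand-in for the paper's objective."""
    def residuals(params):
        rvec, C = params[:3], params[3:]
        R = Rotation.from_rotvec(rvec).as_matrix()
        pred = (Xw - C) @ R.T            # points in the camera frame
        pred /= np.linalg.norm(pred, axis=1, keepdims=True)
        return (pred - dirs).ravel()
    return least_squares(residuals, params0, method="lm").x
```

Levenberg-Marquardt needs at least as many residuals as parameters, which holds here with two or more control points (3n residuals, 6 parameters).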

Location and Attitude Transformation
After the abovementioned procedures, the location and attitude in o − x_g y_g z_g are obtained, but they are not convenient for demonstration and application in practice. Our approach is to transform them into the local coordinate system o − x_T y_T z_T, which is situated in the local level plane and shown in Figure 5.

Computation of geodetic coordinate from 3D coordinates
The transformation from the spatial rectangular coordinate (x_wo, y_wo, z_wo)^T in o_e − x_e y_e z_e to the geodetic coordinate (L, B)^T is computed first. (L, B)^T comprises the longitude and latitude of the origin of o − x_c y_c z_c, and it is obtained from the 3D coordinate (x_wo, y_wo, z_wo)^T = C in o_e − x_e y_e z_e [39], where e' is the second eccentricity of the reference ellipsoid WGS84.
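A standard iterative ECEF-to-geodetic conversion; the paper's closed-form variant using the second eccentricity e' is not reproduced in this extraction, so the familiar fixed-point iteration on the latitude is shown instead:

```python
import math

# WGS84 parameters
A_WGS84 = 6378137.0                   # semi-major axis a (m)
F_WGS84 = 1.0 / 298.257223563         # flattening
B_WGS84 = A_WGS84 * (1.0 - F_WGS84)   # semi-minor axis b (m)
E2 = 1.0 - (B_WGS84 / A_WGS84) ** 2   # first eccentricity squared

def ecef_to_geodetic(x, y, z, iters=20):
    """Longitude L and latitude B (radians) from ECEF coordinates by
    the standard fixed-point iteration on the latitude."""
    L = math.atan2(y, x)
    p = math.hypot(x, y)
    B = math.atan2(z, p * (1.0 - E2))   # initial guess
    for _ in range(iters):
        # Prime-vertical radius of curvature at the current latitude
        N = A_WGS84 / math.sqrt(1.0 - E2 * math.sin(B) ** 2)
        B = math.atan2(z + E2 * N * math.sin(B), p)
    return L, B
```

The iteration converges quickly because the eccentricity of WGS84 is small; a handful of iterations already reaches sub-microradian accuracy.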


Rotation to ENU coordinate system
There is an inherent geometric constraint between o − x_T y_T z_T and o_e − x_e y_e z_e: the local level plane in o − x_T y_T z_T is the tangent plane of the terrestrial sphere, as shown in Figure 6. Here, a and b are the semi-major and semi-minor axes of the reference ellipsoid WGS84, and e' is its second eccentricity.
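The tangent plane constraint is commonly expressed as a rotation from the ECEF axes to the local ENU axes, built from the longitude L and latitude B; a sketch of that standard matrix:

```python
import math
import numpy as np

def ecef_to_enu_rotation(L, B):
    """Rotation matrix from ECEF axes to local East-North-Up axes at
    longitude L and latitude B (radians)."""
    sL, cL = math.sin(L), math.cos(L)
    sB, cB = math.sin(B), math.cos(B)
    return np.array([
        [-sL,      cL,       0.0],  # East
        [-sB * cL, -sB * sL, cB],   # North
        [ cB * cL,  cB * sL, sB],   # Up
    ])
```

The rows are the unit East, North, and Up directions expressed in ECEF coordinates, so the matrix is orthonormal with determinant +1.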

Error Analysis of the Virtual Single Image Plane Camera Model

An angle error of 0.05° is easy to obtain for a common turntable. The error of the virtual image coordinate X is denoted by ∆_X; it is introduced by the azimuth α and is obtained as a derivative of Equation (2). The plot of the ∆_X variation with respect to azimuth α is shown in Figure 7.
The error of the virtual image coordinate Y is denoted by ∆_Y; it is introduced by both azimuth α and pitch β. First, the influence on ∆_Y introduced by α ∈ [0, 90]° was analyzed. ∆α is equal to 0.05°, and the pitch is set as β = 45°. The error of the virtual image coordinate Y introduced by azimuth α is denoted by ∆_Y,α and is obtained as a derivative of Equation (2). The plot of the ∆_Y,α variation with respect to azimuth α is shown in Figure 8.
When α is small, ∆_X becomes large, so the reliable measuring range is narrower than [0, 90]°. When α is small, ∆_Y,α also becomes large. At α = 20°, ∆_Y,α = 0.1402 m, and ∆_Y,α is more sensitive than ∆_X to the same α. ∆_Y,α increases sharply as α decreases within the range α ∈ (0, 10]°, being more sensitive for small α, and changes little within the range α ∈ [20, 90]°.
Second, the influence on ∆_Y introduced by β ∈ [0, 90]° was analyzed. ∆β is equal to 0.05°, and the azimuth is set as α = 45°. The error of the virtual image coordinate Y introduced by pitch β is denoted by ∆_Y,β and is obtained as a derivative of Equation (2). The ∆_Y,β variation with respect to pitch β is shown in Figure 9.
When the azimuth is too small or the pitch is too large, a large error is introduced in the virtual single image plane camera model, which leads to a narrow reliable measuring range. The virtual omnidirectional camera model is put forward to overcome this deficiency.


Simulations
Simulations were conducted separately for the virtual omnidirectional camera and virtual single image plane camera models, and the global calibration was obtained by the expanded factorial linear method. The number of control points is 50. The depth range of the control points in o − x_c y_c z_c is 50-100 m. The azimuth and pitch are both in the range [20, 90]°. Gaussian noise with zero mean and standard deviation 0.05° is added to α and β. Gaussian noise with zero mean and standard deviation 1.5 × 10^-2 m is added to the control points X_w = (x_w, y_w, z_w)^T. For each number of control points, 1000 independent trials were performed. The results are shown in Tables 1 and 2.
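The simulation setup can be sketched as follows; the exact sampling ranges of the angles and the placement of the control points are assumptions for illustration:

```python
import numpy as np

def make_noisy_data(n=50, sigma_angle_deg=0.05, sigma_point_m=1.5e-2,
                    seed=0):
    """Synthetic calibration data following the simulation setup:
    n control points, Gaussian noise on the angles (std 0.05 deg) and
    on the control point coordinates (std 1.5e-2 m). The depth and
    angle sampling ranges below are assumptions for illustration."""
    rng = np.random.default_rng(seed)
    alpha = rng.uniform(20.0, 90.0, n)          # azimuth (deg)
    beta = rng.uniform(20.0, 90.0, n)           # pitch (deg)
    depth = rng.uniform(50.0, 100.0, n)         # depth in o-x_c y_c z_c (m)
    alpha_n = alpha + rng.normal(0.0, sigma_angle_deg, n)
    beta_n = beta + rng.normal(0.0, sigma_angle_deg, n)
    points = rng.uniform(-50.0, 50.0, (n, 3))   # placeholder coordinates
    points_n = points + rng.normal(0.0, sigma_point_m, (n, 3))
    return alpha_n, beta_n, depth, points_n
```

Each of the 1000 independent trials would draw a fresh data set like this, run the calibration, and accumulate the location and attitude errors reported in the tables.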

Outdoor Experiment
The proposed virtual omnidirectional camera model and expanded factorial linear method were also tested outdoors. The dynamic angle accuracy of the MVMS is 0.005°, and the location accuracy of the RTK GPS (Model Number HY-212) is 0.01 m + 1 ppm (root mean square). The number of control points is 10. The MVMS is shown in Figure 10 and the RTK GPS in Figure 11. The experimental data are shown in Table 3.


ε_x, ε_y, and ε_z are the residual errors of the 3D control points in the o − x_c y_c z_c coordinate system. ε_α and ε_β are the residual errors of azimuth and pitch. The results are shown in Table 4. ε_x and ε_z are better than ε_y because fewer control points span the height range than the other axis directions, and ε_α is better than ε_β because the pitch range is smaller than the azimuth range.

Discussion
Simulations and analyses of the factors impacting accuracy, such as the number of control points and the noise of azimuth and pitch, were conducted to verify the robustness of the virtual omnidirectional camera model and the expanded factorial linear method. Gaussian noise with zero mean and standard deviation σ_angle was added to the azimuth α and pitch β. It can be seen from Figures 14 and 15 that SDLE and SDAAE, respectively, both grow as σ_angle increases. In case (a), σ_angle = 0.05°, which is easy to achieve with the MVMS, and σ_pα = 0.051°, σ_pβ = 0.024°, σ_pγ = 0.054°, σ_x = 0.17 m, σ_y = 0.09 m, and σ_z = 0.13 m.

Conclusions
In this paper, a virtual omnidirectional camera model of the MVMS is established to globally calibrate the location and attitude in the field. This model enlarges the distribution range of the control points in 3D space and, in contrast to the virtual camera model with a single virtual image plane, improves the accuracy of transforming the turntable angles of azimuth and pitch into virtual image coordinates. Furthermore, an expanded factorial linear method is proposed to obtain linear solutions for the location and attitude, which effectively restrains the measurement matrix errors arising from the angles of azimuth and pitch and from the control points. Finally, the location and attitude are transformed from the o − x_g y_g z_g coordinate system to the o − x_T y_T z_T coordinate system using the inherent geometric constraint of the tangent plane of the terrestrial sphere, expressed by longitude and latitude. Simulations were conducted separately on the number of control points and the noise of the turntable angles of azimuth and pitch to verify the accuracy and robustness of the method. In addition, an outdoor experiment was conducted to verify the accuracy and effectiveness of the proposed method.
Therefore, there are actually six virtual mutually orthogonal image planes, and they are constructed as a cube. Opposite planes are equivalent in virtual imaging in projective geometry in the form of homogeneous coordinates. Take π_z and π'_z as an instance to illustrate. In Figure 3, the control point p = (x_c, y_c, z_c)^T is projected on the virtual image planes π_z and π'_z separately; the two virtual image coordinates obtained are identical in the sense of homogeneous coordinates, so either one can be used for global calibration. In other words, π_z and π'_z are equivalent virtual image planes in the virtual omnidirectional camera. Likewise, π_x and π'_x, as well as π_y and π'_y, are equivalent, so the three mutually orthogonal virtual image planes π_x, π_y, and π_z suffice to represent the virtual omnidirectional camera.

Figure 3. Virtual imaging illustration in π_z and π'_z.
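As a minimal sketch of the virtual imaging described above (assuming the standard pinhole projection at focal length 1; the axis pairings chosen for π_x and π_y are illustrative choices, not taken from the paper), a control point can be projected onto whichever of the three mutually orthogonal virtual image planes avoids division by a near-zero coordinate:

```python
def project_to_virtual_planes(p):
    """Project a 3D control point onto the three mutually orthogonal
    virtual image planes (focal length 1) of the omnidirectional model.

    p : control point (x_c, y_c, z_c) in the camera coordinate system.
    Returns a dict of 2D virtual image coordinates on pi_z, pi_x, pi_y.
    Division by a near-zero component is the error-magnification case the
    omnidirectional model avoids by offering the other two planes.
    """
    x, y, z = p
    coords = {}
    if abs(z) > 1e-9:
        coords["pi_z"] = (x / z, y / z)   # plane z = 1 (or z = -1, equivalent)
    if abs(x) > 1e-9:
        coords["pi_x"] = (y / x, z / x)   # plane x = 1 (assumed pairing)
    if abs(y) > 1e-9:
        coords["pi_y"] = (z / y, x / y)   # plane y = 1 (assumed pairing)
    return coords

# Opposite planes are equivalent in homogeneous coordinates:
# (x/z, y/z, 1) and projection onto z = -1 differ only by the scale -1.
```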

θ = arctan( (z_wo · a) / (b · √(x_wo² + y_wo²)) )
B = arctan( (z_wo + e'² b sin³θ) / (√(x_wo² + y_wo²) − e² a cos³θ) ) (30)
where a is the semi-major axis and b is the semi-minor axis of the WGS84 reference ellipsoid, e = √(a² − b²)/a is the first eccentricity, and e' = √(a² − b²)/b is the second eccentricity.
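Equation (30) is Bowring's closed-form approximation for the geodetic latitude. A small sketch of the conversion, using the published WGS84 constants (the function name and the use of atan2 for quadrant handling are implementation choices, not from the paper):

```python
import math

# WGS84 reference ellipsoid constants
A = 6378137.0                          # semi-major axis a (m)
F = 1 / 298.257223563                  # flattening
B_AXIS = A * (1 - F)                   # semi-minor axis b (m)
E2 = (A**2 - B_AXIS**2) / A**2         # first eccentricity squared e^2
EP2 = (A**2 - B_AXIS**2) / B_AXIS**2   # second eccentricity squared e'^2

def ecef_to_geodetic_lat_lon(x, y, z):
    """Bowring's closed-form approximation (Equation (30)) for the
    geodetic latitude B, plus the longitude, from ECEF coordinates."""
    p = math.hypot(x, y)                       # sqrt(x^2 + y^2)
    theta = math.atan2(z * A, p * B_AXIS)      # auxiliary angle
    lat = math.atan2(z + EP2 * B_AXIS * math.sin(theta)**3,
                     p - E2 * A * math.cos(theta)**3)
    lon = math.atan2(y, x)
    return math.degrees(lat), math.degrees(lon)
```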

Figure 6. Inherent geometric constraint between o − x_T y_T z_T and o_e − x_e y_e z_e.
The error of the virtual image coordinate Y introduced by azimuth α and pitch β is denoted by ∆_Y. First, the influence on Y of azimuth α ∈ [0, 90]° was analyzed: σ_β is set to 0.05° and the pitch is fixed at β = 45°. The error of Y introduced by azimuth α is denoted by ∆_{Y,α}; ∆_{Y,α} vs. azimuth α is shown in Figure 8. When α is small, ∆_{Y,α} is large; ∆_{Y,α} decreases with α within the range (0, 10]° and is more sensitive for smaller values of α. Then, the influence on Y of pitch β ∈ [0, 90]° was analyzed: σ_α is set to 0.05° and the azimuth is fixed at α = 45°. The error of Y introduced by pitch β is denoted by ∆_{Y,β}.


The depth of the control points is 50–100 m. The azimuth and pitch are both in the range [20, 70]°. Gaussian noise with mean 0 and standard deviation 0.05° is added to α and β. Gaussian noise with mean 0 and standard deviation 1.5 × 10⁻² m is added to the control points. For each number of control points, 1000 independent trials were performed. The results are shown in Tables 1 and 2.
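The simulation setup above can be sketched as synthetic-data generation. In the sketch below, the spherical parameterization used to place control points from azimuth, pitch, and depth is an assumed convention, and the calibration solver itself is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(42)

def make_noisy_trial(n_points, sigma_angle=0.05, sigma_3d=1.5e-2):
    """Generate one synthetic calibration trial with the noise levels from
    the text: sigma_angle (deg) on azimuth/pitch, sigma_3d (m) on points.
    The spherical placement of control points is an assumption."""
    azimuth = rng.uniform(20.0, 70.0, n_points)   # deg
    pitch = rng.uniform(20.0, 70.0, n_points)     # deg
    depth = rng.uniform(50.0, 100.0, n_points)    # m
    a, b = np.radians(azimuth), np.radians(pitch)
    # 3D control points in the camera coordinate system (assumed axes)
    pts = np.stack([depth * np.cos(b) * np.sin(a),
                    depth * np.sin(b),
                    depth * np.cos(b) * np.cos(a)], axis=1)
    # measurement noise on the turntable angles and on the control points
    noisy_angles = np.stack([azimuth, pitch], axis=1) \
        + rng.normal(0.0, sigma_angle, (n_points, 2))
    noisy_pts = pts + rng.normal(0.0, sigma_3d, pts.shape)
    return noisy_angles, noisy_pts
```

Each of the 1000 independent trials would call this once and feed the noisy data to the solver under test.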

Figure 9. ∆_{Y,β} vs. pitch β. When β is large, ∆_{Y,β} is large, so the reliable measuring range is narrower than [0, 90]°. When β = 60°, ∆_{Y,β} = 0.09874 m; ∆_{Y,β} increases sharply as β increases in the range β ∈ [60, 90)° and is more sensitive for larger β, while ∆_{Y,β} changes little in the range β ∈ [0, 60]°. When the azimuth is too small or the pitch is too large, a large error is introduced in the virtual single-image-plane camera model, which leads to a narrow reliable measuring range. The virtual omnidirectional camera model is therefore put forward to overcome this deficiency.

Figure 11. RTK GPS in the field.

4.2. Performance with Respect to Angle Noise Level
An experiment was conducted to evaluate the accuracy with respect to different angle noise levels. The number of control points is 50. The depth of the control points is 50–100, 100–200, and 200–500 m. Both azimuth α and pitch β are in the range [20, 70]°. Gaussian noise with mean 0 and standard deviation σ_angle ∈ [0.005, 0.05]° is added to azimuth α and pitch β. Gaussian noise with mean 0 and standard deviation σ_3d = 1.5 × 10⁻² m is added to the control points X_w = (x_w, y_w, z_w)^T. For each noise level, 1000 independent trials were performed. The SDAAE and SDLE results vs. angle noise σ_angle are shown in Figures 14 and 15, respectively.
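The angle-noise sweep can be illustrated numerically. The sketch below perturbs azimuth and pitch with Gaussian noise and reports the empirical standard deviation of the resulting line-of-sight deviation; the axis convention for the direction vector is an assumption, and this stand-in statistic is not the paper's SDAAE/SDLE solver output:

```python
import numpy as np

rng = np.random.default_rng(7)

def direction(az_deg, pitch_deg):
    """Unit line-of-sight vector from azimuth/pitch (a common convention;
    the paper's exact axis definitions may differ)."""
    a, b = np.radians(az_deg), np.radians(pitch_deg)
    return np.array([np.cos(b) * np.sin(a), np.sin(b), np.cos(b) * np.cos(a)])

def pointing_error_sd(sigma_angle, n_trials=1000, az=45.0, pitch=45.0):
    """Empirical SD (deg) of the angular deviation of the line of sight when
    azimuth and pitch carry Gaussian noise of SD sigma_angle (deg)."""
    d0 = direction(az, pitch)
    errs = []
    for _ in range(n_trials):
        d = direction(az + rng.normal(0.0, sigma_angle),
                      pitch + rng.normal(0.0, sigma_angle))
        errs.append(np.degrees(np.arccos(np.clip(d @ d0, -1.0, 1.0))))
    return float(np.std(errs))

# Error statistics grow with the angle-noise level sigma_angle:
for s in (0.005, 0.02, 0.05):
    print(s, round(pointing_error_sd(s), 4))
```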
o − x_c y_c z_c is the virtual omnidirectional camera coordinate system. There are three 2D image coordinate systems, i.e., O_z − XY, O_x − YZ, and O_y − ZX, located in the o − x_c y_c z_c coordinate system; they lie separately in the three virtual image planes π_z, π_x, and π_y.

Table 1. Location errors in different virtual camera models.

Table 2. Angle errors in different virtual camera models.

ε_mx, ε_my, and ε_mz are the RMS errors of the MVMS's location, and ε_mα, ε_mβ, and ε_mγ are the RMS errors of the attitude angles. Both sets of errors are smaller in the virtual omnidirectional camera model than in the virtual single-image-plane camera model.

Table 4. Location and attitude errors.