Article

Error Analysis and Calibration Method of a Multiple Field-of-View Navigation System

1 State Key Laboratory of Precision Measurement Technology and Instruments, Tsinghua University, Beijing 100084, China
2 Astronaut Center of China, Beijing 100094, China
* Authors to whom correspondence should be addressed.
Sensors 2017, 17(3), 655; https://doi.org/10.3390/s17030655
Submission received: 9 December 2016 / Revised: 10 March 2017 / Accepted: 20 March 2017 / Published: 22 March 2017
(This article belongs to the Section Physical Sensors)

Abstract

The Multiple Field-of-view Navigation System (MFNS) is a spacecraft subsystem built to realize the autonomous navigation of the Spacecraft Inside Tiangong Space Station. This paper introduces the basics of the MFNS, including its architecture, mathematical model, and an analysis and numerical simulation of system errors. According to the performance requirement of the MFNS, calibration of both the intrinsic and extrinsic parameters of the system is essential. Hence, a novel method based on geometrical constraints in object space, called checkerboard-fixed post-processing calibration (CPC), is proposed to simultaneously obtain the intrinsic parameters of the cameras integrated in the MFNS and the transformation between the MFNS coordinate and the cameras’ coordinates. The method utilizes a two-axis turntable and requires a prior alignment of the coordinates. The theoretical derivation and practical operation of the CPC method are introduced. The calibration experiment results indicate that the extrinsic parameter accuracy of the CPC reaches 0.1° for each Euler angle and 0.6 mm for each position vector component (1σ). A navigation experiment verifies the calibration result and the performance of the MFNS: the MFNS works properly, and the accuracy of the position vector components and Euler angles reaches 1.82 mm and 0.17° (1σ), respectively. The basic mechanism of the MFNS may serve as a reference for the design and analysis of multiple-camera systems. Moreover, the proposed calibration method has practical value for its ease of use and potential for integration into a toolkit.

1. Introduction

According to its space program schedule, China will launch the Tiangong Space Station in the next few years. To make use of the microgravity environment and conduct flight experiments and on-orbit service inside the space station, the Spacecraft Inside Tiangong Space Station (SITSS) is proposed (Figure 1). The SITSS aims to offer a platform for on-orbit distributed-satellite flight tests and to serve as a free-flying robot for environmental monitoring and astronaut assistance. Hence, an accurate, fast, and fully autonomous navigation system is required. Computer vision is now widely used in observation [1], measurement [2], entertainment [3], and navigation [4] because of the rich information it acquires from the surrounding environment, its high precision, and its potential for intelligence. To achieve the intended functions of the SITSS, a multiple field-of-view navigation system (MFNS) for the SITSS (Figure 2) is built via computer vision.
Parameter calibration is a fundamental problem in computer vision and has been a focus of research since it was first proposed. Early work dates back to the 1970s [5], and a large body of work is dedicated to camera calibration methods and calibration model analyses. Calibration methods that require beacon coordinates in the 3D world can be found in [6,7,8]. Other automatic methods were proposed by Tsai [9], Sturm [10], and Heikkila [11] et al., all of whom utilized known objects as calibration references. Zhang’s method [12], published in 2000, introduced an easy and accurate calibration technique based on a planar checkerboard placed at different locations in the field-of-view (FOV) of a camera. However, those automatic methods are inconvenient for large-FOV applications when the size of the reference object is limited. Aside from those methods, some approaches calibrate using vanishing points based on geometric relationships [13,14,15]. A large amount of literature on calibrating single cameras or vision systems, based on the extension, optimization, and adaptation of methods to different situations, has been introduced as well [16,17,18]. Those methods are also applicable in other fields, for instance, the error analysis and calibration of star trackers [19].
The intrinsic and extrinsic parameters of a conventional single camera can thus be recovered effectively and accurately in many ways. For binocular or multiple-camera vision systems, the position and attitude relationship between the cameras is the main concern; such a relationship can be obtained from homography when the cameras share a substantial overlap [20,21]. However, obtaining this relationship becomes a challenge when the cameras of a system do not share a visual field. In recent years, researchers have come up with several remarkable ideas for non-overlapping camera systems: some use mirrors and a calibration target [22,23], some use rigid motion constraints [24,25], and some combine the two [26,27]. Unfortunately, these methods cannot solve the problem of MFNS calibration directly, as the attitudes and positions of the cameras with respect to the MFNS coordinate are indispensable yet difficult to obtain through either existing camera calibration methods or mechanical measurements.
In the present paper, we attempt to solve this problem by introducing an imaging model of the MFNS and its error analysis. A novel calibration method based on this imaging model is proposed, which meets the requirements of the MFNS and other similar systems. Our experiment indicates that the MFNS reaches the expected calibration accuracy by utilizing the proposed method. We introduce the mathematical model of the MFNS in Section 2. The error analysis of the system and the simulation results are discussed in Section 3. The checkerboard-fixed post-processing calibration (CPC) method for the MFNS parameters is proposed in Section 4, and a verification navigation experiment is conducted in Section 5. Finally, the conclusions are presented in Section 6.

2. Mathematical Model

2.1. Imaging Model

The pinhole model is commonly used for ideal mapping of the 3D points of the world to the 2D image generated by a camera [6,28]. Figure 3 displays the perspective projection performed by a pinhole camera. The world point P is projected through the projection center of the lens to the point p in the image plane I. Plane II is the symmetry plane of I, which is introduced to facilitate the analysis. The relationship between P (Xw, Yw, Zw) and its image p (u, v) is as follows:
$$ Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & s & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} = \begin{bmatrix} \alpha_x & \gamma & u_0 & 0 \\ 0 & \alpha_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} = M_1 M_2 X_W = M X_W \quad (1) $$
where:
  • Zc is the optic axis coordinate of point P,
  • dx is the ratio coefficient in the x direction,
  • dy is the ratio coefficient in the y direction,
  • s and γ are the non-orthogonality (skew) factors of the image coordinate axes,
  • (u0, v0) is the pixel coordinate of the camera principal point,
  • f is the principal distance of the camera,
  • R is the 3 × 3 rotation matrix,
  • T is the 3D translation vector,
  • αx = f/dx and αy = f/dy are the respective scale factors of the u-axis and v-axis of the image coordinate,
  • M1 and M2 are the intrinsic parameter matrix and extrinsic parameter matrix respectively, and
  • M = M1·M2 is the perspective projection transform matrix.
In an actual imaging system, the displacement of the principal point, the focal length deviation, distortion, and other error factors should be considered on the basis of the ideal pinhole imaging model (see Section 4.1).
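As a concrete illustration, the following minimal Python sketch evaluates Equation (1) for one world point. All numerical values are illustrative assumptions (the intrinsics echo Table 1), not calibrated parameters of the MFNS.

```python
import numpy as np

alpha_x, alpha_y = 1600.0, 1600.0    # scale factors f/dx, f/dy (pixels, cf. Table 1)
u0, v0, gamma = 640.0, 512.0, 0.0    # principal point and skew

M1 = np.array([[alpha_x, gamma, u0, 0.0],
               [0.0, alpha_y, v0, 0.0],
               [0.0,     0.0, 1.0, 0.0]])            # intrinsic matrix M1

R = np.eye(3)                                        # world-to-camera rotation R
T = np.array([0.0, 0.0, 1000.0])                     # translation T (mm), assumed
M2 = np.vstack([np.hstack([R, T[:, None]]),
                [0.0, 0.0, 0.0, 1.0]])               # extrinsic matrix M2

Pw = np.array([100.0, -50.0, 0.0, 1.0])              # homogeneous world point
uv_h = M1 @ M2 @ Pw                                  # equals Zc * (u, v, 1)^T
u, v = uv_h[:2] / uv_h[2]
print(u, v)                                          # projected pixel coordinates
```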

2.2. MFNS Model

Three industrial cameras whose optic axes are approximately perpendicular to one another are the main components of the MFNS (Figure 2). This arrangement is adopted for immediate and accurate navigation of the SITSS, which has six degrees of freedom in free flight inside the space station. The three cameras form a generalized monocular vision system whose FOV is extended and whose accuracy in all three directions is optimized compared with a traditional monocular vision system. Based on the imaging model of a single camera (Equation (1)), the imaging model of the MFNS is given as follows:
$$ Z_c^k \begin{bmatrix} u^k \\ v^k \\ 1 \end{bmatrix} = \begin{bmatrix} \alpha_x^k & \gamma^k & u_0^k & 0 \\ 0 & \alpha_y^k & v_0^k & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} C_b^k & 0 \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} C_w^b & r_{bw}^b - r_{bk}^b \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X_W^i \\ Y_W^i \\ Z_W^i \\ 1 \end{bmatrix} \quad (2) $$
where k = x, y, z denotes camera X, Y, and Z, respectively; i = 1, 2, 3, … indexes the beacons observed; $C_b^k$ is the rotation matrix from the MFNS body coordinate to the camera k coordinate; $C_w^b$ is the rotation matrix from the world coordinate to the MFNS body coordinate; and $r_{bw}^b = (t_x, t_y, t_z)^T$ and $r_{bk}^b = (r_x^k, r_y^k, r_z^k)^T$ are the translation vectors from the origin of the MFNS body coordinate to the origins of the world coordinate and the camera k coordinate, respectively, expressed in the MFNS body coordinate.
Although a common way to model a set of rigidly attached cameras is a generalized camera model, which finds its way into many applications such as motion estimation and image reconfiguration [29,30,31,32], Equation (2) is a practical model of the MFNS that extends the commonly used pinhole model. Only coordinate transformations are applied during the derivation, which keeps it simple and reliable. Moreover, the model of Equation (2) is convenient for the navigation applications in our future work, as the rotation matrices and translation vectors have clear physical meanings: they are exactly the 6-DOF navigation parameters (three-axis attitude and position). Thus, measurement equations can be built directly on Equation (2). Let $A^k = \begin{bmatrix} \alpha_x^k & \gamma^k & u_0^k & 0 \\ 0 & \alpha_y^k & v_0^k & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} C_b^k & 0 \\ 0^T & 1 \end{bmatrix} = (a_{ij}^k)_{3 \times 4}$ and $C_w^b = (r_{ij})_{3 \times 3}$. We therefore get:
$$ \begin{cases} (a_{11}^k - a_{31}^k u^k)(r_{11} X_W^i + r_{12} Y_W^i + r_{13} Z_W^i + t_x - r_x^k) + (a_{12}^k - a_{32}^k u^k)(r_{21} X_W^i + r_{22} Y_W^i + r_{23} Z_W^i + t_y - r_y^k) + (a_{13}^k - a_{33}^k u^k)(r_{31} X_W^i + r_{32} Y_W^i + r_{33} Z_W^i + t_z - r_z^k) = 0 \\ (a_{21}^k - a_{31}^k v^k)(r_{11} X_W^i + r_{12} Y_W^i + r_{13} Z_W^i + t_x - r_x^k) + (a_{22}^k - a_{32}^k v^k)(r_{21} X_W^i + r_{22} Y_W^i + r_{23} Z_W^i + t_y - r_y^k) + (a_{23}^k - a_{33}^k v^k)(r_{31} X_W^i + r_{32} Y_W^i + r_{33} Z_W^i + t_z - r_z^k) = 0 \end{cases} \quad (3) $$
Equation (3) is the practical imaging model of the MFNS. The pose of the MFNS (six unknowns) can be obtained by solving a Perspective-n-Point problem [33,34] when enough beacons are observed.
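The sketch below illustrates one way to recover the six pose unknowns from the constraints of Equation (3) with a generic nonlinear least-squares solver; the paper itself solves the pose by Newton iteration (Section 5). The dcm213 helper follows the 2-1-3 Euler convention of Equation (9), the beacon data are synthetic, and $r^k$ is set to zero as in the Section 3 assumption.

```python
import numpy as np
from scipy.optimize import least_squares

def dcm213(phi, theta, psi):
    """2-1-3 Euler-angle rotation matrix, matching Equation (9) at zero error."""
    cf, sf = np.cos(phi), np.sin(phi)
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(psi), np.sin(psi)
    return np.array([[cp*ct + sp*st*sf,  sp*cf, -cp*st + sp*ct*sf],
                     [-sp*ct + cp*st*sf, cp*cf,  sp*st + cp*ct*sf],
                     [cf*st,            -sf,     ct*cf]])

def residuals(x, beacons, pixels, A):
    """Left-hand sides of Equation (3); x = (phi, theta, psi, tx, ty, tz)."""
    C = dcm213(*x[:3])
    res = []
    for Pw, (u, v) in zip(beacons, pixels):
        q = np.append(C @ Pw + x[3:], 1.0)     # beacon in the MFNS frame (r^k = 0)
        res.append((A[0] - u * A[2]) @ q)
        res.append((A[1] - v * A[2]) @ q)
    return np.array(res)

# Synthetic demo: A built from Table 1 intrinsics with C_b^z = I (camera Z).
A = np.array([[1600.0, 0.0, 640.0, 0.0],
              [0.0, 1600.0, 512.0, 0.0],
              [0.0,    0.0,   1.0, 0.0]])
rng = np.random.default_rng(0)
x_true = np.array([0.01, -0.02, 0.03, 5.0, -3.0, 10.0])
C = dcm213(*x_true[:3])
beacons = [rng.uniform([-500, -500, 1500], [500, 500, 2500]) for _ in range(6)]
pixels = []
for Pw in beacons:
    p = A[:, :3] @ (C @ Pw + x_true[3:])
    pixels.append((p[0] / p[2], p[1] / p[2]))
sol = least_squares(residuals, np.zeros(6), args=(beacons, pixels, A))
print(sol.x)   # recovers x_true
```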

3. Error Analysis

The MFNS measurements are the coordinates of the images of navigation reference beacons. In the imaging model (Equation (2)), the error in beacon positions, intrinsic parameter calibration error of cameras, pose (with respect to the MFNS) calibration error of cameras, and algorithm error are the factors affecting the measurement accuracy of the system. The main focus of this research is the influence of the system parameter error on the measurement error. The algorithm error will be studied in future work.
To facilitate the equation derivation and error analysis, the following discussion is based on the assumption that $C_b^z = I_3$ and $r_{bz}^b = 0$. Given that cameras X, Y, and Z are equivalent, only the measurement error of camera Z is calculated and simulated as follows:
(1) Beacon position error
The pixel coordinate (u, v) of a beacon observed by camera Z is given by:
$$ u = \alpha_x \frac{r_{11} X_W + r_{12} Y_W + r_{13} Z_W + t_x}{r_{31} X_W + r_{32} Y_W + r_{33} Z_W + t_z} + u_0 = \alpha_x \frac{X_c}{Z_c} + u_0, \qquad v = \alpha_y \frac{r_{21} X_W + r_{22} Y_W + r_{23} Z_W + t_y}{r_{31} X_W + r_{32} Y_W + r_{33} Z_W + t_z} + v_0 = \alpha_y \frac{Y_c}{Z_c} + v_0 \quad (4) $$
where (Xc, Yc, Zc) is the coordinate of that beacon in the camera Z coordinate.
Suppose the position of a beacon is inaccurate and its coordinate is (Xw + ΔXw, Yw + ΔYw, Zw + ΔZw), then the measurement of the pixel coordinate is given by:
$$ u_{beacon} = \alpha_x \frac{r_{11}(X_W + \Delta X_W) + r_{12}(Y_W + \Delta Y_W) + r_{13}(Z_W + \Delta Z_W) + t_x}{r_{31}(X_W + \Delta X_W) + r_{32}(Y_W + \Delta Y_W) + r_{33}(Z_W + \Delta Z_W) + t_z} + u_0 = \alpha_x \frac{X_c^b}{Z_c^b} + u_0, \qquad v_{beacon} = \alpha_y \frac{r_{21}(X_W + \Delta X_W) + r_{22}(Y_W + \Delta Y_W) + r_{23}(Z_W + \Delta Z_W) + t_y}{r_{31}(X_W + \Delta X_W) + r_{32}(Y_W + \Delta Y_W) + r_{33}(Z_W + \Delta Z_W) + t_z} + v_0 = \alpha_y \frac{Y_c^b}{Z_c^b} + v_0 \quad (5) $$
Thus, the error of measurement is as follows:
$$ \Delta u_b = u_{beacon} - u = \alpha_x \left( \frac{X_c^b}{Z_c^b} - \frac{X_c}{Z_c} \right), \qquad \Delta v_b = v_{beacon} - v = \alpha_y \left( \frac{Y_c^b}{Z_c^b} - \frac{Y_c}{Z_c} \right) \quad (6) $$
Let $r_1 = (r_{11}, r_{12}, r_{13})^T$, $r_2 = (r_{21}, r_{22}, r_{23})^T$, $r_3 = (r_{31}, r_{32}, r_{33})^T$, and $\Delta P_w = (\Delta X_w, \Delta Y_w, \Delta Z_w)^T$. Equation (6) can then be rewritten as:
$$ \Delta u_b = \alpha_x \left( \frac{X_c + r_1^T \Delta P_w}{Z_c + r_3^T \Delta P_w} - \frac{X_c}{Z_c} \right), \qquad \Delta v_b = \alpha_y \left( \frac{Y_c + r_2^T \Delta P_w}{Z_c + r_3^T \Delta P_w} - \frac{Y_c}{Z_c} \right) \quad (7) $$
The Monte Carlo (MC) method is used to study the relationship between the beacon position error and the measurement error. The parameters of the simulation are shown in Table 1. Only the pixel error of u is simulated because the properties of the pixels in the two directions (u, v) [or (Xc, Yc)] are the same. The MFNS coordinate is taken as the reference frame because of the imaging process. Specifically, the MFNS is fixed at the origin of the world coordinate (see the extrinsic parameters of the MFNS in Table 1), and the beacons may lie in any possible position (see the beacon positions in Table 1). As a result, the simulation model is simplified, and the result is more convenient to observe.
Figure 4 shows the simulation result; the dots in the figure stand for the root-mean-square error of the MC results (the same applies below). The pixel error increases approximately linearly with the beacon position error.
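A minimal sketch of this MC simulation, assuming the Table 1 parameters (αx = 1600 pixels, identity attitude, zero translation); the beacon-error levels and the restriction of Zw to 500–1000 mm (to keep Zc well away from zero) are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha_x = 1600.0
r1 = np.array([1.0, 0.0, 0.0])   # first row of C_w^b (identity attitude)
r3 = np.array([0.0, 0.0, 1.0])   # third row

for sigma in (0.5, 1.0, 2.0):                    # assumed beacon error levels (mm)
    du = []
    for _ in range(10_000):                      # 10,000 trials per group (Table 1)
        Pw = rng.uniform([0.0, 0.0, 500.0], [1000.0, 0.0, 1000.0])
        dP = rng.normal(0.0, sigma, 3)           # beacon position error
        Xc, Zc = r1 @ Pw, r3 @ Pw
        du.append(alpha_x * ((Xc + r1 @ dP) / (Zc + r3 @ dP) - Xc / Zc))  # Eq. (7)
    print(sigma, np.sqrt(np.mean(np.square(du))))   # RMS pixel error (cf. Figure 4)
```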
(2) Camera intrinsic parameter error
On the basis of the pinhole imaging model (Equation (1)), the influence of the principal point displacement, the focal length error, and the lens distortion should be considered in a practical vision measurement system. These are the intrinsic parameters of a camera, whose calibration is one of the key problems in vision research. At present, a reasonable choice of calibration method ensures that the imaging accuracy reaches the sub-pixel level.
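The paper lists distortion coefficients kc1–kc5 (Table 2) without spelling out the model; the sketch below assumes the Brown-model convention used by Zhang-style calibration toolboxes (kc1, kc2, kc5 radial; kc3, kc4 tangential), applied to normalized image coordinates:

```python
import numpy as np

def distort(xn, yn, kc):
    """Apply Brown-model distortion to normalized coordinates (assumed model)."""
    kc1, kc2, kc3, kc4, kc5 = kc
    r2 = xn**2 + yn**2
    radial = 1.0 + kc1 * r2 + kc2 * r2**2 + kc5 * r2**3    # radial terms
    dx = 2.0 * kc3 * xn * yn + kc4 * (r2 + 2.0 * xn**2)    # tangential terms
    dy = kc3 * (r2 + 2.0 * yn**2) + 2.0 * kc4 * xn * yn
    return xn * radial + dx, yn * radial + dy

# Camera X coefficients from Table 2 (kc5 was calibrated to zero):
kc_x = (-0.13013, 0.28701, -0.00040, -0.00004, 0.0)
print(distort(0.1, -0.2, kc_x))   # distorted normalized coordinates
```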
(3) Relative position error between camera and MFNS
When multiple cameras are integrated in the MFNS, a relative position error between each camera coordinate and the system body coordinate can occur, namely, an error in the translation vector T of the imaging model. Let $T' = (t_x + \Delta t_x,\ t_y + \Delta t_y,\ t_z + \Delta t_z)^T$; according to the imaging model, we obtain:
$$ \Delta u_p = u_{position} - u = \alpha_x \left( \frac{X_c + \Delta t_x}{Z_c + \Delta t_z} - \frac{X_c}{Z_c} \right), \qquad \Delta v_p = v_{position} - v = \alpha_y \left( \frac{Y_c + \Delta t_y}{Z_c + \Delta t_z} - \frac{Y_c}{Z_c} \right) \quad (8) $$
The result of the MC simulation of the pixel error is shown in Figure 5, while the parameters are set as shown in Table 1.
(4) Relative attitude error between cameras and MFNS
Similar to the relative position error, an error inevitably occurs in the rotation matrix between each camera coordinate and the MFNS coordinate, namely, an error in the attitude Euler angles φ, θ, and ψ of the imaging model. Let $\Theta' = (\varphi + \Delta\varphi,\ \theta + \Delta\theta,\ \psi + \Delta\psi)^T$. The rotation matrix in the imaging model (Euler angles are defined as a 2-1-3 rotation in this paper) can be rewritten as follows:
$$ \begin{aligned} r'_{11} &= \cos(\psi + \Delta\psi)\cos(\theta + \Delta\theta) + \sin(\psi + \Delta\psi)\sin(\theta + \Delta\theta)\sin(\varphi + \Delta\varphi) \\ r'_{12} &= \sin(\psi + \Delta\psi)\cos(\varphi + \Delta\varphi) \\ r'_{13} &= -\cos(\psi + \Delta\psi)\sin(\theta + \Delta\theta) + \sin(\psi + \Delta\psi)\cos(\theta + \Delta\theta)\sin(\varphi + \Delta\varphi) \\ r'_{21} &= -\sin(\psi + \Delta\psi)\cos(\theta + \Delta\theta) + \cos(\psi + \Delta\psi)\sin(\theta + \Delta\theta)\sin(\varphi + \Delta\varphi) \\ r'_{22} &= \cos(\psi + \Delta\psi)\cos(\varphi + \Delta\varphi) \\ r'_{23} &= \sin(\psi + \Delta\psi)\sin(\theta + \Delta\theta) + \cos(\psi + \Delta\psi)\cos(\theta + \Delta\theta)\sin(\varphi + \Delta\varphi) \\ r'_{31} &= \cos(\varphi + \Delta\varphi)\sin(\theta + \Delta\theta) \\ r'_{32} &= -\sin(\varphi + \Delta\varphi) \\ r'_{33} &= \cos(\theta + \Delta\theta)\cos(\varphi + \Delta\varphi) \end{aligned} \quad (9) $$
Let $r'_1 = (r'_{11}, r'_{12}, r'_{13})^T$, $r'_2 = (r'_{21}, r'_{22}, r'_{23})^T$, $r'_3 = (r'_{31}, r'_{32}, r'_{33})^T$, and $P_w = (X_w, Y_w, Z_w)^T$. Thus, we get:
$$ \Delta u_a = u_{attitude} - u = \alpha_x \left( \frac{r_1'^T P_w}{r_3'^T P_w} - \frac{X_c}{Z_c} \right), \qquad \Delta v_a = v_{attitude} - v = \alpha_y \left( \frac{r_2'^T P_w}{r_3'^T P_w} - \frac{Y_c}{Z_c} \right) \quad (10) $$
The result of the MC simulation of pixel error is shown in Figure 6, while the parameters are set as shown in Table 1.
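For a numeric feel of Equations (9) and (10), the sketch below computes the pixel shift produced by a 0.1° roll error at a hypothetical beacon, reusing the 2-1-3 convention; the nominal attitude follows Table 1 and the beacon position is illustrative.

```python
import numpy as np

def dcm213(phi, theta, psi):
    """2-1-3 Euler-angle DCM matching Equation (9) at zero error."""
    cf, sf, ct, st = np.cos(phi), np.sin(phi), np.cos(theta), np.sin(theta)
    cp, sp = np.cos(psi), np.sin(psi)
    return np.array([[cp*ct + sp*st*sf,  sp*cf, -cp*st + sp*ct*sf],
                     [-sp*ct + cp*st*sf, cp*cf,  sp*st + cp*ct*sf],
                     [cf*st,            -sf,     ct*cf]])

alpha_x = alpha_y = 1600.0                # Table 1 scale factors (pixels)
Pw = np.array([200.0, -100.0, 800.0])     # hypothetical beacon (mm)
R  = dcm213(0.0, 0.0, 0.0)                # nominal attitude (Table 1)
Rp = dcm213(np.radians(0.1), 0.0, 0.0)    # perturbed by a 0.1 deg roll error
du_a = alpha_x * (Rp[0] @ Pw / (Rp[2] @ Pw) - (R[0] @ Pw) / (R[2] @ Pw))  # Eq. (10)
dv_a = alpha_y * (Rp[1] @ Pw / (Rp[2] @ Pw) - (R[1] @ Pw) / (R[2] @ Pw))
print(du_a, dv_a)   # here the roll error shifts v by a few pixels
```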
In Figure 4, Figure 5 and Figure 6, the pixel error grows approximately linearly with the beacon error and the MFNS extrinsic parameter error (including the relative position and attitude errors between the cameras and the MFNS); such an outcome is regarded as a system error. System errors have a serious influence on MFNS performance compared with the intrinsic parameter calibration errors of the cameras. The simulation results show that the relative position and attitude of the system must be calibrated to accuracies of <1 mm and <0.17°, respectively, if the observation error of the MFNS is expected to stay within several pixels. This work attempts to find a solution to this challenging problem.

4. System Calibration Based on Geometrical Constraints in Object Space

4.1. CPC Method for MFNS

To achieve the accuracy requirements of the MFNS, the system parameters need to be calibrated, including the intrinsic parameters of each camera and the extrinsic parameters, namely, the transformation between the camera coordinates and the MFNS coordinate system. Accurate estimation of the camera parameters is the prerequisite for identification, measurement, and navigation in most computer vision applications. Scholars have therefore carried out in-depth, extensive research on the calibration of camera intrinsic parameters, solved the related theoretical problems, and put forward a variety of mature methods. According to the error analysis of the MFNS in Section 3, both the intrinsic and extrinsic parameters must be accurately calibrated. However, most existing methods deal mainly with the calibration of a single camera. For binocular vision systems, the common approach is to calibrate the intrinsic parameters of each camera first and then calculate the transformation between the two camera coordinates on the basis of corresponding image points in the shared overlap.
However, the FOVs of the MFNS cameras are separate; thus, the cameras cannot observe a common object simultaneously, and the existing single-camera/binocular calibration methods cannot obtain the transformation between the camera coordinates. Notably, a number of studies on the calibration of non-overlapping cameras provide good solutions for common non-overlapping camera rigs, but most of them cannot satisfy the requirements of MFNS calibration. The reason is that the installation matrix between the cameras and the MFNS likewise needs to be calibrated to achieve precise navigation, which is the prerequisite for the application of the MFNS on a carrier, yet this problem tends to be ignored or underconsidered in computer vision applications. To solve this problem, geometrical constraints in the object space are utilized to build a transformation relationship between the FOV-separated cameras and the MFNS. The checkerboard is fixed, and photographs are taken by each camera. Then, a high-accuracy turntable is utilized to control the pose of the MFNS and provide a dependable reference during the calibration process. All the coordinates involved in the calibration method are right-handed Cartesian coordinates (Figure 7) as follows:
  • MFNS body coordinate ObXbYbZb: fixed on the system frame and defined for ease of use.
  • Turntable coordinate OtXtYtZt: Ot is the center of the turntable, Zt points along the forward direction of the turntable main axis, and Xt points along the forward direction of the turntable auxiliary axis.
  • World coordinate OwXwYwZw: defined by the checkerboard according to the single-camera calibration method, where Ow is a corner of the checkerboard and Xw and Yw are parallel to the edges of the checkerboard grid.
  • Coordinates of cameras X, Y, and Z: defined based on the imaging model in Section 2.1 and recorded respectively as OxXxYxZx, OyXyYyZy, and OzXzYzZz.
The superscripts and subscripts b, t, w, x, y, and z represent the coordinate systems of the MFNS, turntable, world, and cameras X, Y, and Z, respectively.
All physical quantities that should be calibrated are defined as follows:
  • Intrinsic parameter matrix $A^k$ and distortion coefficients $k_{c1}^k$–$k_{c5}^k$;
  • Rotation matrix $C_k^b$ between the camera k coordinate and the MFNS coordinate; and
  • Vector $r_i$ from the camera principal points Ocx, Ocy, and Ocz to the origin Ob of the MFNS coordinate system, where k = x, y, z stands for cameras X, Y, and Z, respectively.
Note that the intrinsic and extrinsic parameters of cameras X, Y, and Z can be estimated using a single-camera calibration method, for instance the widely used Zhang’s method [12]. That is, the relationship between the camera coordinate systems and the world coordinate system is obtained after Zhang’s calibration. Once the relationship between the MFNS coordinate system and the world system is established, the maximum likelihood estimates of the extrinsic parameters from the single-camera calibration can be used to solve for $C_k^b$ and $r_i$. Based on these ideas, the checkerboard-fixed post-processing calibration method is proposed, which achieves MFNS calibration in combination with most single-camera calibration methods. In this paper, Zhang’s method is utilized for single-camera calibration. The theoretical derivation and operational details are given in Appendix A. Note that the MFNS coordinate is defined as follows for ease of use: the origin of the coordinate is the center of the MFNS bottom surface, and the Xb and Zb axes of the MFNS coordinate are parallel to the respective Xt and Zt axes of the turntable coordinate when the MFNS is mounted on the turntable. This coordinate definition is ensured by finish machining and by drilling threaded and locating holes on the bottom surface of the MFNS.
Figure 8 shows the process of our calibration method. The following preparation needs to be done before system calibration:
  • The checkerboard should be fixed in an appropriate location that can be observed by the cameras while the MFNS rotates with the turntable.
  • Several pictures of the checkerboard that meet the requirements of Zhang’s method are taken by cameras X, Y, and Z.
  • Alignment of the coordinates should be done subsequently so that the rotation matrix from the turntable coordinate to the world coordinate is approximately an identity matrix.
Afterwards, the turntable is controlled manually to rotate around both the Xt and Zt axes for checkerboard image acquisition by each camera, and the attitude of the turntable corresponding to each image is recorded. With the previously prepared images and the newly acquired ones, the maximum likelihood solutions of each camera’s parameters are given by Zhang’s calibration method. (The previously prepared images are necessary to obtain good results with Zhang’s method, because the pictures of the checkerboard taken by the MFNS during rotation may not provide the diversity of checkerboard poses that Zhang’s method requires.) Finally, all the parameters needed are evaluated based on Zhang’s method results and the CPC process of the MFNS introduced above. The contrast between the proposed method and hand-eye calibration is as follows: in hand-eye calibration, the robot arm and cameras are attached; as the motion of the robot arm (hand) is known, the transformation between the cameras’ (eye) coordinates and the base coordinate can be calculated from multiple poses and the images taken at those poses. In our work, the “eye” is the MFNS and the “hand” is the turntable, while the “eye” is removed from the “hand” after calibration and used in other situations.

4.2. Calibration Results

An MFNS calibration is conducted using the proposed method. We use the 902E-1 two-axis testing turntable produced by the Beijing Precision Engineering Institute for Aircraft Industry (Beijing, China). The turntable is aligned before MFNS calibration, and the accuracy of its angular position is 8″ about both axes. The cameras integrated in the MFNS are Daheng Image DH-HV1310FM units (resolution 1280 × 1024, 18.6 frames/s at the highest resolution). The intrinsic parameters obtained through Zhang’s method [12] are shown in Table 2, and the extrinsic parameters are shown in Table 3. The calibration of each camera achieved an accuracy of 0.1 pixel based on 21 images (images 1–11 were obtained on the turntable and 12–21 were prepared beforehand; see Figure 9). The rotation matrix calibration achieved an accuracy of <0.1° for each Euler angle, and the position vector calibration achieved an accuracy of <0.6 mm for each position vector component.

5. Navigation Experiment

A complete model for the navigation system is established after calibration. Then, a demonstration experiment is conducted to confirm the validity of the calibration method and evaluate the accuracy of the system. The architecture of the experiment is shown in Figure 10.
Ten LED beacons are fixed on a steel frame with an apparent size of approximately 1 m × 1 m × 1.5 m around the turntable. As the positions of the beacons in the world coordinate are known, the position and attitude of the MFNS can be calculated using the imaging model (Equation (3)) when enough beacons are observed by the MFNS. The experiment is designed as follows to verify the accuracy of the navigation result, namely, the calculated pose (three-axis position and three-axis attitude) of the MFNS. First, the origin of the world coordinate is defined as the center of the turntable platform, which is the origin of the MFNS body coordinate in the initial pose, and the three axes of the turntable coordinate Xt, Yt, and Zt are defined parallel to the axes of the world coordinate Xw, Yw, and Zw, respectively. Then, the positions of the beacons are measured, as shown in Table 4. Second, the turntable is controlled to rotate around the Zt axis, while the MFNS moves with the turntable, takes pictures of the surrounding environment, and calculates its pose based on the beacons observed.
Finally, based on the definition of the coordinates, the position of the turntable platform center does not move during the rotation. The true value of the MFNS position is (0, 0, 0) and the true attitude of the MFNS is (0, 0, ψt), where ψt is the angle given by the turntable around its Zt axis. Therefore, the pose error is obtained as the difference between the calculated pose and the true value. Figure 11 and Table 5 show the navigation results.
The analysis of the experiment results, a discussion of existing problems and their possible causes, ways to improve, and future work are as follows:
  • As the origin of the MFNS coordinate, Ob is defined on the mounting surface of the turntable, and the position of the MFNS remains (0, 0, 0) while the turntable rotates around its Zt axis. Similarly, φ and θ remain 0 and ψ varies linearly while the turntable rotates uniformly under ideal conditions. The experiment results show that the MFNS worked properly during validation; the standard deviations of the three-axis position error are 1.60, 1.61, and 1.83 mm, respectively, and the standard deviations of the three-axis attitude error are 0.15°, 0.15°, and 0.17°, respectively.
  • The calibration method proposed for multiple-camera systems builds on the results of a single-camera calibration method. In the derivation (Appendix A), we mainly utilize LSQ to derive the formulas and find the optimal solution. This approach makes the main idea and the process of our method easier to understand, and the experiment results show that the performance of the MFNS after calibration is acceptable. However, LSQ may not be the most accurate way to solve the problem, because any error propagates through the process. For instance, when calculating $C_x^b$ and $C_y^b$ according to Equations (A11) and (A12), the results depend on $C_t^w$, which is itself the optimal solution of Equation (A10). More in-depth work may focus on better ways to obtain optimal solutions for the system extrinsic parameters.
  • Given that the imaging model of the MFNS was built in Section 2, research on the navigation algorithm of the MFNS should be carried out next. The navigation experiment is only conducted to verify the accuracy of the system calibration result, because we simply use the Newton iteration method to find a numerical solution of the navigation parameters (a sketch of this type of iteration follows this list). The problems of beacon pattern recognition and multiple solutions are sidestepped by manually matching the beacons and choosing the iteration initial value. Furthermore, a solution is difficult to obtain in a pose where fewer than three beacons are observed by the MFNS. Therefore, a complete navigation algorithm is our goal for future work, including solution strategy, error propagation analysis, beacon distribution, and optimization methods.
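As a sketch of the Newton-type iteration mentioned above, the following Gauss-Newton loop solves for the six navigation parameters from stacked pixel measurements; the measurement function h, its finite-difference Jacobian, and the toy beacon data are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def gauss_newton(h, x0, z, iters=10, eps=1e-6):
    """Solve z = h(x) for x by Gauss-Newton with a finite-difference Jacobian."""
    x = x0.astype(float)
    for _ in range(iters):
        r = h(x) - z                              # residual vs. measured pixels
        J = np.empty((r.size, x.size))
        for j in range(x.size):                   # central-difference Jacobian
            e = np.zeros_like(x); e[j] = eps
            J[:, j] = (h(x + e) - h(x - e)) / (2.0 * eps)
        x = x - np.linalg.solve(J.T @ J, J.T @ r)   # normal-equation step
    return x

# Toy measurement function: three beacons, one camera, unit intrinsics,
# small-angle attitude; x = (phi, theta, psi, tx, ty, tz).
def h(x):
    beacons = np.array([[300.0, -200.0, 900.0],
                        [-150.0, 250.0, 1100.0],
                        [50.0,    80.0,  700.0]])
    phi, theta, psi = x[:3]
    C = np.array([[1.0,   psi, -theta],
                  [-psi,  1.0,  phi],
                  [theta, -phi, 1.0]])            # small-angle DCM
    q = beacons @ C.T + x[3:]
    return np.concatenate([q[:, 0] / q[:, 2], q[:, 1] / q[:, 2]])

x_true = np.array([0.01, -0.02, 0.015, 20.0, -10.0, 30.0])
print(gauss_newton(h, np.zeros(6), h(x_true)))    # converges to x_true
```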

6. Conclusions

The basics of the MFNS, including its system design, mathematical model, and error analysis, are presented in detail in this paper. This work can serve as a reference for the further development and study of other multiple-camera vision systems. A novel calibration method based on the error analysis, namely CPC for the MFNS, is proposed. A calibration experiment is conducted in which intrinsic and extrinsic parameters are simultaneously obtained using the proposed method, and the navigation experiment shows that ideal calibration results are achieved. The method can be integrated into a toolkit and used for other vision systems, especially multiple-camera systems or FOV-separated systems.

Acknowledgments

This study is partially supported by a grant from the Chinese Manned Space Pre-research Project. The system calibration is performed at the State Key Laboratory of Precision Measurement Technology and Instruments at Tsinghua University. Both of them are gratefully acknowledged.

Author Contributions

Shuai Shi proposed the idea of the calibration method and carried out the theoretical derivation; Shuai Shi, Chenguang Ouyang, Yongkui Cao and Zhenzhou Wang performed the experiments and analyzed the data; Kaichun Zhao and Zheng You conceived and supervised the experiments; Shuai Shi wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. CPC Method for MFNS Calibration

Appendix A.1. Calibration of Rotation Matrix $C_k^b$

The proposed calibration process needs the aid of a two-axis turntable, as shown in Figure 7. According to the definition of the coordinates, the rotation matrix $C_t^b$ between the two coordinates is an identity matrix when the MFNS is mounted on the turntable.
The checkerboard is fixed on the frame (Figure 7). When the system is in the initial pose, the checkerboard is in the FOV of camera Z, and the transformation between the coordinate systems at this moment is as follows:
$$ C_z^w = C_t^w C_b^t C_z^b = C_t^w C_z^b \quad (A1) $$
$C_z^w$ is obtained using Zhang’s calibration method based on the imaging model. $C_z^b$ is the rotation matrix to be calibrated. $C_t^w$ is unknown and coupled with $C_z^b$, making the two matrices difficult to calculate. To solve this problem, an IMU and a miniature solid-state laser are utilized to align the turntable coordinate and the world coordinate.
The two-axis turntable needs a strict level adjustment before use because of the demands of inertial device calibration. Therefore, the OwXwYw plane is nearly parallel to the OtXtYt plane after level adjustment of the checkerboard with the aid of the IMU’s gravity measurement. The heading of the world coordinate can be adjusted afterwards by fixing the laser on the turntable platform: when the laser rotates with the platform around the Xt axis of the turntable coordinate, the trace of the laser spot on the checkerboard can easily be aligned so that it points in the direction of the Yt axis (the details of the coordinate alignment are given in Appendix A.3). According to [35], when the Euler angles are small quantities for which the small-angle approximation is valid, the rotation matrix can be simplified; for instance, the difference between an angle of less than 1.5° in radians and its sine is less than 3 × 10−6. After the alignment, the Euler angles corresponding to $C_t^w$ are small quantities of less than 0.5°, and the rotation matrix can be simplified as:
$$ C_t^w = \begin{bmatrix} 1 & \gamma & -\beta \\ -\gamma & 1 & \alpha \\ \beta & -\alpha & 1 \end{bmatrix} \quad (A2) $$
where α, β, and γ are the Euler angles corresponding to $C_t^w$; each of them is a small quantity.
Note that the Euler angles corresponding to $C_z^b$, namely φ, θ, and ψ, are also small quantities (ensured by the finish machining of the MFNS structure frame). $C_z^b$ is written as:
$$ C_z^b = \begin{bmatrix} 1 & \psi & -\theta \\ -\psi & 1 & \varphi \\ \theta & -\varphi & 1 \end{bmatrix} \quad (A3) $$
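A quick numeric check of this small-angle simplification, assuming the 2-1-3 convention of Equation (9); the test angles are illustrative values below 0.5°:

```python
import numpy as np

phi, theta, psi = np.radians([0.4, -0.3, 0.5])    # all below 0.5 deg
cf, sf, ct, st = np.cos(phi), np.sin(phi), np.cos(theta), np.sin(theta)
cp, sp = np.cos(psi), np.sin(psi)
C_full = np.array([[cp*ct + sp*st*sf,  sp*cf, -cp*st + sp*ct*sf],
                   [-sp*ct + cp*st*sf, cp*cf,  sp*st + cp*ct*sf],
                   [cf*st,            -sf,     ct*cf]])           # full 2-1-3 DCM
C_small = np.array([[1.0,   psi, -theta],
                    [-psi,  1.0,  phi],
                    [theta, -phi, 1.0]])                          # Equation (A3) form
print(np.max(np.abs(C_full - C_small)))   # second-order small (about 5e-5 here)
```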
Let the turntable platform rotate by an angle Ω around the Zt axis so that camera Z of the MFNS rotates to a position Z′ and the checkerboard is still in the FOV of camera Z. The transformation of the coordinates is as follows:
$$ C_{z'}^w = C_t^w C_b^t C_{b'}^b C_{z'}^{b'} \quad (A4) $$
where:
$$ C_{b'}^b = \begin{bmatrix} \cos\Omega & \sin\Omega & 0 \\ -\sin\Omega & \cos\Omega & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad (A5) $$
Noting that $C_b^t = I_3$ and $C_{z'}^{b'} = C_z^b$, we get the following:
$$ C_{z'}^w = C_t^w C_{b'}^b C_z^b \quad (A6) $$
On the basis of Equations (A2), (A3), (A5) and (A6), we get:
$$ C_{z'}^w = \begin{bmatrix} 1 & \gamma & -\beta \\ -\gamma & 1 & \alpha \\ \beta & -\alpha & 1 \end{bmatrix} \begin{bmatrix} \cos\Omega & \sin\Omega & 0 \\ -\sin\Omega & \cos\Omega & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & \psi & -\theta \\ -\psi & 1 & \varphi \\ \theta & -\varphi & 1 \end{bmatrix} \quad (A7) $$
With the higher-order terms neglected, Equation (A7) can be written as:
$$ C_{z'}^w = \begin{bmatrix} \cos\Omega - \gamma\sin\Omega - \psi\sin\Omega & \psi\cos\Omega + \gamma\cos\Omega + \sin\Omega & -\theta\cos\Omega + \varphi\sin\Omega - \beta \\ -\gamma\cos\Omega - \sin\Omega - \psi\cos\Omega & -\psi\sin\Omega + \cos\Omega - \gamma\sin\Omega & \theta\sin\Omega + \varphi\cos\Omega + \alpha \\ \alpha\sin\Omega + \beta\cos\Omega + \theta & -\alpha\cos\Omega + \beta\sin\Omega - \varphi & 1 \end{bmatrix} \quad (A8) $$
Similarly, when the turntable platform rotates by an angle ω around the Xt axis so that camera Z of the MFNS rotates to a position Z″ and the checkerboard is still in the FOV of camera Z, the transformation of the coordinates is:
$$ C_{z''}^w = \begin{bmatrix} 1 & \psi + \gamma\cos\omega + \beta\sin\omega & -\theta + \gamma\sin\omega - \beta\cos\omega \\ -\gamma + \theta\sin\omega - \psi\cos\omega & \cos\omega - \alpha\sin\omega - \varphi\sin\omega & \sin\omega + \varphi\cos\omega + \alpha\cos\omega \\ \beta + \psi\sin\omega + \theta\cos\omega & -\alpha\cos\omega - \sin\omega - \varphi\cos\omega & -\varphi\sin\omega - \alpha\sin\omega + \cos\omega \end{bmatrix} \quad (A9) $$
On the basis of Equations (A8) and (A9), φ, θ, and ψ, as well as α, β, and γ, can be calculated using the least-squares method:
$$ \begin{bmatrix} \sin\Omega & \cos\Omega & 0 & 0 & 1 & 0 \\ -\cos\Omega & \sin\Omega & 0 & -1 & 0 & 0 \\ 0 & -1 & 0 & \sin\Omega & -\cos\Omega & 0 \\ 1 & 0 & 0 & \cos\Omega & \sin\Omega & 0 \\ 0 & \sin\omega & \cos\omega & 0 & 0 & 1 \\ 0 & -\cos\omega & \sin\omega & 0 & -1 & 0 \\ 0 & 0 & -1 & 0 & \sin\omega & -\cos\omega \\ 0 & 1 & 0 & 0 & \cos\omega & \sin\omega \end{bmatrix} \begin{bmatrix} \alpha \\ \beta \\ \gamma \\ \varphi \\ \theta \\ \psi \end{bmatrix} = \begin{bmatrix} C_{z',31}^w \\ C_{z',32}^w \\ C_{z',13}^w \\ C_{z',23}^w \\ C_{z'',12}^w \\ C_{z'',13}^w \\ C_{z'',21}^w \\ C_{z'',31}^w \end{bmatrix} \quad (A10) $$
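A sketch of the least-squares solve of Equation (A10); the turntable angles and the right-hand-side entries (which in practice come from the DCMs returned by Zhang's calibration) are synthetic placeholders:

```python
import numpy as np

def rows_z(W):   # four rows contributed by a Zt rotation of angle W
    c, s = np.cos(W), np.sin(W)
    return np.array([[ s,  c,  0,  0,  1,  0],
                     [-c,  s,  0, -1,  0,  0],
                     [ 0, -1,  0,  s, -c,  0],
                     [ 1,  0,  0,  c,  s,  0]])

def rows_x(w):   # four rows contributed by an Xt rotation of angle w
    c, s = np.cos(w), np.sin(w)
    return np.array([[ 0,  s,  c,  0,  0,  1],
                     [ 0, -c,  s,  0, -1,  0],
                     [ 0,  0, -1,  0,  s, -c],
                     [ 0,  1,  0,  0,  c,  s]])

W, w = np.radians(20.0), np.radians(15.0)          # assumed turntable angles
H = np.vstack([rows_z(W), rows_x(w)])
# b stacks C_z'^w(31,32,13,23) and C_z''^w(12,13,21,31) from Zhang's results;
# here it is synthesized so that all six angles equal 1 mrad:
x_true = np.full(6, 1e-3)
b = H @ x_true
est, *_ = np.linalg.lstsq(H, b, rcond=None)
print(est)   # recovered (alpha, beta, gamma, phi, theta, psi)
```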
Let the turntable rotate so that the checkerboard is in the FOV of cameras X and Y. The transformations of the coordinates are:
$$ C_{x'}^w = C_t^w C_{b'}^b C_{x'}^{b'} \quad (A11) $$
$$ C_{y''}^w = C_t^w C_{b''}^b C_{y''}^{b''} \quad (A12) $$
where $C_{x'}^w$ and $C_{y''}^w$ are obtained by Zhang’s method, $C_t^w$ is already known from Equation (A10), and $C_{b'}^b$ and $C_{b''}^b$ are manually controlled. The optimal solutions of $C_x^b$ and $C_y^b$ are then obtained as $C_x^b = C_{x'}^{b'}$ and $C_y^b = C_{y''}^{b''}$.
Thus far, the rotation matrices are all obtained.

Appendix A.2. Calibration of Vector $r_i$

Vectors in this paper are denoted as follows: $r_{ij}^k$ represents the vector from the origin $O_i$ of coordinate $i$ to the origin $O_j$ of coordinate $j$, expressed in coordinate $k$. When the checkerboard is in the FOV of camera Z, the position relationship among the coordinate origins is:
$$ r_{wz}^w = r_{wb}^w + r_{bz}^w = r_{wt}^w + r_{tb}^w + r_{bz}^w \quad (A13) $$
If the turntable rotates only around the Zt axis, then the position of Ob does not change. Thus:
$$ r_{wz'}^w = r_{wb'}^w + r_{b'z'}^w = r_{wb}^w + C_{b'}^w r_{b'z'}^{b'} = r_{wb}^w + C_{b'}^w r_{bz}^b \quad (A14) $$
where z′ and b′ represent the new positions of camera Z and the MFNS respectively.
When the turntable rotates freely, the position of Ot still does not change, and we get:
$$ r_{wz'}^w = r_{wt}^w + r_{tb'}^w + r_{b'z'}^w = r_{wt}^w + C_{b'}^w r_{tb}^b + C_{b'}^w r_{bz}^b \quad (A15) $$
On the basis of Equation (A14), using multiple images taken while the turntable rotates only around the Zt axis, we get the following:
$$ r_{wz'(i)}^w - r_{wz'(j)}^w = \left( C_{b'(i)}^w - C_{b'(j)}^w \right) r_{bz}^b \quad (A16) $$
Let $r_{bz}^b = [z_x\ z_y\ z_z]^T$; then $z_x$ and $z_y$ can be obtained by the least-squares method (LSQ) based on Equation (A16).
Let $r_{wb}^w = [b_x\ b_y\ b_z]^T$. As $b_z$ and $z_z$ are coupled and cannot be separated by rotating the turntable, $b_z$ needs to be measured with a vernier caliper. Then $b_x$, $b_y$, and $z_z$ are obtained by LSQ based on Equation (A14).
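A sketch of Equation (A16): pairwise position differences over several Zt-only rotations are stacked and solved for the x and y components of $r_{bz}^b$. The ground-truth vector and pose set are synthetic, and $C_t^w \approx I$ is assumed after alignment.

```python
import numpy as np

def Rz(W):
    c, s = np.cos(W), np.sin(W)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

r_bz = np.array([34.2, 30.8, 119.7])          # ground truth for the demo (mm)
angles = np.radians([0.0, 15.0, 30.0, 45.0])  # Zt-only turntable poses
C = [Rz(W).T for W in angles]                 # C_{b'}^w per pose (C_t^w ~ I assumed)
pos = [Cb @ r_bz for Cb in C]                 # C_{b'}^w r_bz^b terms of (A14)

A_rows, b_rows = [], []
for i in range(len(C)):
    for j in range(i + 1, len(C)):
        A_rows.append(C[i] - C[j])            # (C_{b'(i)}^w - C_{b'(j)}^w)
        b_rows.append(pos[i] - pos[j])        # r_wz'(i)^w - r_wz'(j)^w
A, b = np.vstack(A_rows), np.concatenate(b_rows)
# The z column of A vanishes for Zt rotations, so z_z is unobservable here;
# solve only for the first two components, as in the text:
est, *_ = np.linalg.lstsq(A[:, :2], b, rcond=None)
print(est)   # recovers the x and y components of r_bz^b
```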
Afterwards, $r_{wt}^w$ and $r_{tb}^b$ are calculated to prepare for the calibration of the vectors $r_{bx}^b$ and $r_{by}^b$ based on Equation (A15).
The turntable is then controlled so that cameras X and Y can take photographs of the checkerboard. The position vectors are then given by:
$$ r_{wx'}^w = r_{wt}^w + C_{b'}^w r_{tb}^b + C_{b'}^w r_{bx}^b \quad (A17) $$
$$ r_{wy'}^w = r_{wt}^w + C_{b'}^w r_{tb}^b + C_{b'}^w r_{by}^b \quad (A18) $$
With all the other parameters known, the optimal solutions for $r_{bx}^b$ and $r_{by}^b$ can be obtained.

Appendix A.3. Operational Method for Coordinates’ Alignment

Two coordinates, OwXwYwZw and OtXtYtZt, need to be aligned before the system is calibrated. In this study, an operational method composed of two steps is introduced.

Step 1: Level adjustment of the checkerboard

Device required

A high-accuracy accelerometer.

Principle

The biaxial turntable used in the experiment has a mounting surface whose level is ensured through rigorous adjustment because of the need to calibrate inertial devices. Thus, the level adjustment of the checkerboard is accomplished by making use of gravity. Specifically, place the accelerometer on the mounting surface of the turntable and on the back of the checkerboard to measure the specific force, and adjust the checkerboard so that the specific force measurements ftz and fwz satisfy:
$$ f_{tz} = f_{wz} \quad (A19) $$
As a result, the OwXwYw plane is parallel to the OtXtYt plane, and there is only a difference in yaw angle $\psi_{tw}$ between the two coordinates.

Step 2: Orientation adjustment of the world coordinate

Device required

A point light laser.

Principle

The laser is installed as shown in Figure A1. The reference frame is the turntable coordinate OtXtYtZt. Assume the laser beam passes through the point $P_0(a, b, c)$ and is a straight line with direction vector $n = (m, n, 1)^T$. The equation of the laser beam $l_0$ in the turntable coordinate is written as:
$$ \frac{x - a}{m} = \frac{y - b}{n} = z - c \quad (A20) $$
When the laser rotates with the turntable by an angle φ around the Xt axis, its direction vector $n'$ is given by:
$$ n' = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\varphi & \sin\varphi \\ 0 & -\sin\varphi & \cos\varphi \end{bmatrix} \begin{bmatrix} m \\ n \\ 1 \end{bmatrix} = \begin{bmatrix} m \\ n\cos\varphi + \sin\varphi \\ -n\sin\varphi + \cos\varphi \end{bmatrix} \quad (A21) $$
Similarly, $\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\varphi & \sin\varphi \\ 0 & -\sin\varphi & \cos\varphi \end{bmatrix} \begin{bmatrix} a \\ b \\ c \end{bmatrix} = \begin{bmatrix} a \\ b\cos\varphi + c\sin\varphi \\ -b\sin\varphi + c\cos\varphi \end{bmatrix}$; thus, the rotated point P has coordinates $(a,\ b\cos\varphi + c\sin\varphi,\ -b\sin\varphi + c\cos\varphi)$, and the equation of the rotated laser beam $l'$ is as follows:
$$ \frac{x - a}{m} = \frac{y - (b\cos\varphi + c\sin\varphi)}{n\cos\varphi + \sin\varphi} = \frac{z - (-b\sin\varphi + c\cos\varphi)}{-n\sin\varphi + \cos\varphi} \quad (A22) $$
Assume that the vertical distance between the checkerboard and the turntable mounting surface is h; the intersection point P1 of the laser beam and the checkerboard is then:
$$ P_1 = \left( \frac{m\,[h - (-b\sin\varphi + c\cos\varphi)]}{-n\sin\varphi + \cos\varphi} + a,\ \ \frac{(n\cos\varphi + \sin\varphi)\,[h - (-b\sin\varphi + c\cos\varphi)]}{-n\sin\varphi + \cos\varphi} + (b\cos\varphi + c\sin\varphi),\ \ h \right) $$
From the coordinates of P1, when m = 0 the trace of P1 on the checkerboard is a straight line parallel to the Yt axis of the turntable. Therefore, we can adjust the direction of the laser and the checkerboard simultaneously while rotating the turntable around its Xt axis, so that the trace of the laser dot aligns with a grid line of the checkerboard. As a result, the Yw and Yt axes are parallel to each other, and the alignment of the two coordinates is complete.
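A small sketch of this geometric argument: sweeping φ with m = 0 leaves the x coordinate of P1 unchanged, so the spot traces a line parallel to Yt. All parameter values are illustrative.

```python
import numpy as np

def spot(m, n, a, b, c, h, phi):
    """Laser spot P1 on the checkerboard plane z = h (from the P1 formula)."""
    cp, sp = np.cos(phi), np.sin(phi)
    t = (h - (-b * sp + c * cp)) / (-n * sp + cp)       # line parameter at z = h
    return (m * t + a,
            (n * cp + sp) * t + (b * cp + c * sp))      # (x, y) of P1

for phi in np.radians([-10.0, 0.0, 10.0]):
    print(spot(0.0, 0.05, 12.0, 3.0, 40.0, 500.0, phi))  # x stays 12.0 when m = 0
```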
Figure A1. Laser installed on the turntable to adjust the orientation of the checkerboard.

References

  1. Santos, C.A.; Costa, C.O.; Batista, J. A vision-based system for measuring the displacements of large structures: Simultaneous adaptive calibration and full motion estimation. Mech. Syst. Signal Process. 2016, 72, 678–694.
  2. Vilaça, J.L.; Fonseca, J.C.; Pinho, A.M. Calibration procedure for 3D measurement systems using two cameras and a laser line. Opt. Laser Technol. 2009, 41, 112–119.
  3. Zyda, M. From visual simulation to virtual reality to games. Computer 2005, 38, 25–32.
  4. Mirota, D.J.; Ishii, M.; Hager, G.D. Vision-based navigation in image-guided interventions. Annu. Rev. Biomed. Eng. 2011, 13, 297–319.
  5. Abdel-Aziz, Y.I. Direct linear transformation from comparator coordinates in close-range photogrammetry. In Proceedings of the ASP Symposium on Close-Range Photogrammetry, Urbana, IL, USA, 26–29 January 1971.
  6. Faugeras, O. Three-Dimensional Computer Vision: A Geometric Viewpoint; MIT Press: Cambridge, MA, USA, 1993.
  7. Jones, G.A.; Renno, J.R.; Remagnino, P. Auto-calibration in multiple-camera surveillance environments. In Proceedings of the Third IEEE International Workshop on Performance Evaluation of Tracking and Surveillance, Copenhagen, Denmark, 1 June 2002.
  8. Triggs, B. Camera pose and calibration from 4 or 5 known 3D points. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; Volume 1, pp. 278–284.
  9. Tsai, R.Y. An efficient and accurate camera calibration technique for 3D machine vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 22–26 June 1986.
  10. Sturm, P.F.; Maybank, S.J. On plane-based camera calibration: A general algorithm, singularities, applications. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Fort Collins, CO, USA, 23–25 June 1999.
  11. Heikkila, J. Geometric camera calibration using circular control points. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1066–1077.
  12. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
  13. Beardsley, P.; Murray, D. Camera Calibration Using Vanishing Points; Springer: London, UK, 1992; pp. 416–425.
  14. Cipolla, R.; Drummond, T.; Robertson, D.P. Camera calibration from vanishing points in images of architectural scenes. BMVC 1999, 99, 382–391.
  15. Wong, K.Y.K.; Mendonca, P.R.S.; Cipolla, R. Camera calibration from surfaces of revolution. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 147–161.
  16. Scaramuzza, D.; Martinelli, A.; Siegwart, R. A flexible technique for accurate omnidirectional camera calibration and structure from motion. In Proceedings of the Fourth IEEE International Conference on Computer Vision Systems (ICVS'06), New York, NY, USA, 4–7 January 2006.
  17. Liu, T.; Burner, A.W.; Jones, T.W.; Barrows, D.A. Photogrammetric techniques for aerospace applications. Prog. Aerosp. Sci. 2012, 54, 1–58.
  18. Hughes, C.; Glavin, M.; Jones, E.; Denny, P. Wide-angle camera technology for automotive applications: A review. IET Intell. Transp. Syst. 2009, 3, 19–31.
  19. Sun, T.; Xing, F.; You, Z. Optical system error analysis and calibration method of high-accuracy star trackers. Sensors 2013, 13, 4598–4623.
  20. Schwartz, C.; Sarlette, R.; Weinmann, M.; Rump, M.; Klein, R. Design and implementation of practical bidirectional texture function measurement devices focusing on the developments at the University of Bonn. Sensors 2014, 14, 7753–7819.
  21. Wang, X. Intelligent multi-camera video surveillance: A review. Pattern Recognit. Lett. 2013, 34, 3–19.
  22. Kumar, R.K.; Ilie, A.; Frahm, J.M.; Pollefeys, M. Simple calibration of non-overlapping cameras with a mirror. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008), Anchorage, AK, USA, 23–28 June 2008.
  23. Rodrigues, R.; Barreto, J.P.; Nunes, U. Camera pose estimation using images of planar mirror reflections. In Proceedings of the European Conference on Computer Vision, Crete, Greece, 5–11 September 2010.
  24. Caspi, Y.; Irani, M. Aligning non-overlapping sequences. Int. J. Comput. Vis. 2002, 48, 39–51.
  25. Dai, Y.; Trumpf, J.; Li, H.; Barnes, N.; Hartley, R. Rotation averaging with application to camera-rig calibration. In Proceedings of the Asian Conference on Computer Vision, Xi'an, China, 23–27 September 2009.
  26. Hesch, J.A.; Mourikis, A.I.; Roumeliotis, S.I. Determining the camera to robot-body transformation from planar mirror reflections. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2008), Nice, France, 22–26 September 2008; pp. 3865–3871.
  27. Mariottini, G.L.; Scheggi, S.; Morbidi, F.; Prattichizzo, D. Planar catadioptric stereo: Single and multi-view geometry for calibration and localization. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'09), Kobe, Japan, 12–17 May 2009; pp. 1510–1515.
  28. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press: Cambridge, UK, 2003.
  29. Pless, R. Using many cameras as one. In Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Madison, WI, USA, 16–22 June 2003.
  30. Grossberg, M.D.; Nayar, S.K. A general imaging model and a method for finding its parameters. In Proceedings of the Eighth IEEE International Conference on Computer Vision (ICCV 2001), Vancouver, BC, Canada, 7–14 July 2001.
  31. Stewénius, H.; Oskarsson, M.; Åström, K.; Nistér, D. Solutions to Minimal Generalized Relative Pose Problems. Available online: http://www.vis.uky.edu/~stewe/publications/stewenius_05_omnivis_sm26gen.pdf (accessed on 22 March 2017).
  32. Li, H.; Hartley, R.; Kim, J. A linear approach to motion estimation using generalized camera models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008), Anchorage, AK, USA, 23–28 June 2008; pp. 1–8.
  33. Lepetit, V.; Moreno-Noguer, F.; Fua, P. EPnP: An accurate O(n) solution to the PnP problem. Int. J. Comput. Vis. 2009, 81, 155.
  34. Gao, X.S.; Hou, X.R.; Tang, J.; Cheng, H.F. Complete solution classification for the perspective-three-point problem. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 930–943.
  35. Groves, P.D. Principles of GNSS, Inertial, and Multisensor Integrated Navigation Systems; Artech House: Norwood, MA, USA, 2013.
Figure 1. (a) 3D model and (b) flying sketch of the SITSS.
Figure 2. (a) Prototype and (b) navigation sketch of the MFNS.
Figure 3. Pinhole imaging model.
Figure 4. MC simulation result of the pixel error caused by the beacon error.
Figure 5. MC simulation result of pixel error caused by the relative position error.
Figure 6. MC simulation result of pixel error caused by the relative attitude error.
Figure 7. Calibration architecture of MFNS.
Figure 8. Process of MFNS calibration method.
Figure 9. Images for calibration of camera X (a), camera Y (b), and camera Z (c) using Zhang's method.
Figure 10. Architecture of the navigation experiment.
Figure 11. Results of the navigation experiment: (a) attitude error and (b) position error.
Table 1. Parameters for MC simulation on MFNS error analysis.

Camera parameters: αx = 1600 pixels | αy = 1600 pixels | u0 = 640 pixels | v0 = 512 pixels
Extrinsic parameters of the MFNS: φ = 0° | θ = 0° | ψ = 0° | Tx = 0 mm | Ty = 0 mm | Tz = 0 mm
Beacon position: Xw = 0–1000 mm | Yw = – | Zw = 0–1000 mm
Number of MC simulation trials for each group of parameters: 10,000
Table 2. Calibration result of intrinsic parameters of cameras X, Y, and Z (each cell: calibration result / error).

Parameter | Camera X | Camera Y | Camera Z
αx | 1599.26136 / 1.78966 | 1605.35286 / 1.66249 | 1611.21596 / 2.14183
αy | 1599.93302 / 1.64101 | 1603.54359 / 1.73608 | 1610.79607 / 2.14219
u0 | 632.61591 / 1.62023 | 619.71227 / 1.80177 | 649.51210 / 1.39920
v0 | 522.17870 / 1.61823 | 505.99361 / 1.93821 | 531.29472 / 1.34749
α | 0.00000 / 0.00000 | 0.00000 / 0.00000 | 0.00000 / 0.00000
kc1 | −0.13013 / 0.00336 | −0.11276 / 0.00301 | −0.10481 / 0.00338
kc2 | 0.28701 / 0.02243 | 0.01776 / 0.01776 | 0.15881 / 0.02341
kc3 | −0.00040 / 0.00029 | −0.00031 / 0.00034 | −0.00137 / 0.00021
kc4 | −0.00004 / 0.00031 | 0.00057 / 0.00030 | 0.00184 / 0.00022
kc5 | 0.00000 / 0.00000 | 0.00000 / 0.00000 | 0.00000 / 0.00000
Pixel error | (0.11711, 0.10363) | (0.11597, 0.12652) | (0.11593, 0.11232)
Table 3. Calibration result of the extrinsic parameters.

Rotation matrices:
Matrix | Euler angle | Calibration result (°) | Error (°)
$C_t^w$ | α | −0.0110 | 0.0485
$C_t^w$ | β | 0.1263 | 0.0657
$C_t^w$ | γ | 0.0861 | 0.0443
$C_x^b$ | φx | −0.5788 | 0.0413
$C_x^b$ | θx | −90.3523 | 0.0990
$C_x^b$ | ψx | 0.0395 | 0.0153
$C_y^b$ | φy | 90.1636 | 0.0301
$C_y^b$ | θy | −0.5501 | 0.0920
$C_y^b$ | ψy | 0.1925 | 0.0157
$C_z^b$ | φz | −0.1076 | 0.0485
$C_z^b$ | θz | −0.7211 | 0.0657
$C_z^b$ | ψz | 1.1690 | 0.0443

Position vectors:
Vector | Component | Calibration result (mm) | Error (mm)
$r_{wb}^w$ | x | 33.8407 | 0.2932
$r_{wb}^w$ | y | −128.9401 | 0.2932
$r_{wb}^w$ | z | 576.9100 (measured) | 0.2456
$r_{wt}^w$ | x | 34.0949 | 0.1496
$r_{wt}^w$ | y | −128.9281 | 0.1496
$r_{wt}^w$ | z | −684.7134 | 0.1496
$r_{bx}^b$ | x | 76.1319 | 0.2529
$r_{bx}^b$ | y | −36.8373 | 0.4498
$r_{bx}^b$ | z | 78.8949 | 0.4959
$r_{by}^b$ | x | −32.5004 | 0.5358
$r_{by}^b$ | y | 75.7149 | 0.0825
$r_{by}^b$ | z | 76.9967 | 0.1963
$r_{bz}^b$ | x | 34.1604 | 0.3097
$r_{bz}^b$ | y | 30.8251 | 0.3097
$r_{bz}^b$ | z | 119.7096 | 0.2932
Table 4. Position of beacons.

Beacon | x (mm) | y (mm) | z (mm)
0 | 487.4734 | −720.0265 | 240.8359
1 | −51.2618 | −737.6155 | 164.7393
2 | −366.1525 | −711.8317 | 131.2681
3 | −458.8302 | −215.2964 | −69.2627
4 | −454.6380 | 410.3959 | −60.6549
5 | −117.4392 | 296.0537 | 83.0754
6 | 182.4478 | 279.1964 | 143.2510
7 | 481.8346 | 252.9609 | 217.5775
8 | 647.3834 | −127.6167 | 178.7407
9 | −122.5798 | −44.9064 | 562.5394

Estimated position accuracy: 0.3963 mm.
Table 5. Results of the navigation experiment.

Parameter | Mean error | Standard deviation
Roll (φ/°) | −0.0030 | 0.1541
Pitch (θ/°) | −0.0192 | 0.1497
Yaw (ψ/°) | 0.0066 | 0.1729
x position (mm) | −0.0553 | 1.6043
y position (mm) | 0.0275 | 1.6108
z position (mm) | −0.0669 | 1.8292
