Article

Integration of a Multi-Camera Vision System and Strapdown Inertial Navigation System (SDINS) with a Modified Kalman Filter

Neda Parnian and Farid Golnaraghi
Mechatronic Systems Engineering, School of Engineering Science, Simon Fraser University, 250–13450 102nd Avenue, Surrey, BC V3T 0A3, Canada
* Author to whom correspondence should be addressed.
Sensors 2010, 10(6), 5378-5394; https://doi.org/10.3390/s100605378
Submission received: 1 April 2010 / Revised: 20 April 2010 / Accepted: 10 May 2010 / Published: 28 May 2010

Abstract
This paper describes the development of a modified Kalman filter to integrate a multi-camera vision system and a strapdown inertial navigation system (SDINS) for tracking a hand-held moving device in slow or nearly static applications over extended periods of time. In this algorithm, the changes in position and velocity are estimated and then added to the previous estimates of the position and velocity, respectively. The experimental results of the hybrid vision/SDINS design show that the position error of the tool tip in all directions is about one millimeter RMS. The proposed Kalman filter removes the effect of the gravitational force from the state-space model; as a result, the error it would otherwise introduce is eliminated and the estimated position is smoother and ripple-free.

1. Introduction

It is well known that inertial navigation sensors drift. The drift of an inertial sensor has two components: bias stability and bias variability. Both enter the double integration used in the position calculation, so after a while the output of the Inertial Navigation System (INS) is no longer reliable, and they cause unavoidable drift in the orientation and position estimates. Removing the drift of inertial navigation systems requires that the sensors be aided by other resources or technologies such as Global Positioning Systems (GPS) [1,2], vision systems [3–5], or odometers [6,7].
The Kalman filter is the most common tool for this kind of data fusion; it is a powerful method for improving the output estimation and reducing the effect of sensor drift. Although sensor integration is generally based on Kalman filtering, different types of Kalman filters have been developed in this area [8–14].
In the past, three-dimensional attitude representations were applied, but these representations are singular or discontinuous for certain attitudes [15]. As a result, the quaternion parameterization was proposed, which is the lowest-dimensional globally non-singular attitude representation [16,17].
In aided inertial motion tracking applications, the state variables of a Kalman filter usually take one of two forms: first, the sensed engineering quantities, that is acceleration, velocity, attitude, etc.; and second, the errors of these quantities. The first form is used by the Centralized Kalman Filter [14], the Unscented Kalman Filter [18–20], the Adaptive Kalman Filter [10,21], and the Sigma-Point Extended Kalman Filter [22], while the second is used by the Indirect Kalman Filter [23–25].
A Kalman filter that operates on the error states is called an indirect or complementary Kalman filter. The optimal estimates of the errors are subtracted from the sensed quantities to obtain the optimal estimates of the states. Since the 1960s, the complementary Kalman filter has been the standard method of integrating non-inertial with inertial measurements in aviation and missile navigation. This method requires dynamic models for both the navigation variable states and the error states [26].
This research develops an EKF that estimates the changes in the state variables; the current estimates of these changes are then added to the previous estimates of the position and velocity, respectively. According to the general equations of the SDINS, the constant gravitational force is removed from the resulting equations, so the error arising from uncertainty in the gravitational force is eliminated.

2. Kinematics of Strapdown Inertial Navigation Systems Using Quaternions

Inertial navigation systems are typically employed to provide the present position and heading of a moving vehicle with respect to a known, fixed reference frame. An inertial navigation system localizes the vehicle by measuring the linear and angular components of its motion with inertial sensors, given the initial values of its position and attitude.
Microelectromechanical System (MEMS) techniques make it possible to manufacture miniature inertial sensors inexpensively, which has led to the development of Strapdown Inertial Navigation Systems (SDINS) for new applications such as medicine [27,28], industry [29,30], robotics [31,32], and sports [33].
In a SDINS, MEMS-based inertial sensors are mounted rigidly on the body of the moving object [34], with the accelerometers and angular rate gyros aligned with the body axes, to provide the applied forces and turning rates of the object. Because the inertial components are measured in the body frame, a set of equations must be derived to compute the position and attitude with respect to the known navigation reference frame.
Motion analysis of a moving rigid body provides a set of equations determining its trajectory, speed, and attitude. Since the Inertial Measurement Unit (IMU) measures the inertial components at its mounting point, the acceleration and velocity of other points on the body can be computed relative to it. As shown in Figure 1, the IMU is attached to the bottom of the tool when the position of the tool tip is desired.
As the tool rotates and translates, the body reference frame shown in Figure 1 moves with respect to the fixed navigation frame. The relative acceleration of point B [35] is computed as:
$$ a_B = a_A + \dot{\omega} \times r_{B/A} + \omega \times \left( \omega \times r_{B/A} \right) + 2\,\omega \times \left( v_{B/A} \right)_b + \left( a_{B/A} \right)_b $$
where $a_A$ and $a_B$ represent the accelerations of points A and B; $r_{B/A}$, $(v_{B/A})_b$, and $(a_{B/A})_b$ denote the relative position, velocity, and acceleration of point B with respect to point A measured in the body frame; and $\dot{\omega}$ and $\omega$ are the angular acceleration and angular velocity of the body frame.
Since both point A and point B are located on the tool and move with it, the relative acceleration and velocity of point B with respect to A are zero, and Equation (1) reduces to:
$$ a_B = a_A + \dot{\omega} \times r_{B/A} + \omega \times \left( \omega \times r_{B/A} \right) $$
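As a minimal illustration of Equation (2), the lever-arm correction can be written in a few lines of NumPy; the function and argument names below are ours, not the authors' code.

```python
import numpy as np

def tool_tip_acceleration(a_A, omega, omega_dot, r_BA):
    """Equation (2): acceleration of the tool tip (point B) from the
    acceleration at the IMU location (point A), all in the body frame."""
    return (a_A
            + np.cross(omega_dot, r_BA)                # Euler (tangential) term
            + np.cross(omega, np.cross(omega, r_BA)))  # centripetal term
```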
In order to transform the acceleration of the tool tip from the body frame into the North-East-Down (NED) frame [34], the direction cosine matrix must be computed from Equation (3):
$$ \dot{C}_b^n = C_b^n\, \Omega_{nb}^n $$
where $C_b^n$ and $\Omega_{nb}^n$ denote the direction cosine matrix and the skew-symmetric form of the body rate with respect to the navigation frame. Three-dimensional Euler-angle representations have been used for attitude estimation in the SDINS, but they are singular or discontinuous for certain attitudes [15]. Since the quaternion parameterization is the lowest-dimensional globally non-singular attitude representation [16,17], the quaternion is generally used for attitude estimation in the SDINS.
The transformation matrix $C_b^n$ is related to the corresponding quaternion $q = [q_1\ q_2\ q_3\ q_4]^T$ by:
$$ C_b^n(q) = 2 \begin{bmatrix} q_1^2 + q_2^2 - 0.5 & q_2 q_3 - q_1 q_4 & q_2 q_4 + q_1 q_3 \\ q_2 q_3 + q_1 q_4 & q_1^2 + q_3^2 - 0.5 & q_3 q_4 - q_1 q_2 \\ q_2 q_4 - q_1 q_3 & q_3 q_4 + q_1 q_2 & q_1^2 + q_4^2 - 0.5 \end{bmatrix} $$
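For reference, Equation (4) maps directly onto a short NumPy routine; this is a sketch written for this exposition (the name quat_to_dcm is ours), assuming $q_1$ is the scalar part of the quaternion.

```python
import numpy as np

def quat_to_dcm(q):
    """Direction cosine matrix of Equation (4), body frame to navigation frame,
    with q = [q1, q2, q3, q4] and q1 the scalar part."""
    q1, q2, q3, q4 = q
    return 2.0 * np.array([
        [q1**2 + q2**2 - 0.5, q2*q3 - q1*q4,       q2*q4 + q1*q3],
        [q2*q3 + q1*q4,       q1**2 + q3**2 - 0.5, q3*q4 - q1*q2],
        [q2*q4 - q1*q3,       q3*q4 + q1*q2,       q1**2 + q4**2 - 0.5],
    ])
```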
Therefore Equation (3) can be changed to Equation (5) as:
$$ \dot{q} = \tfrac{1}{2}\, q\, \Omega_{nb}^n $$
Moreover, gravity compensation is required since the accelerometers measure the local gravitational force. As a result, the acceleration of point B with respect to the navigation frame is calculated as:
$$ \left( a_B \right)_n = C_b^n \left\{ f_A + \dot{\omega} \times r_{B/A} + \omega \times \left( \omega \times r_{B/A} \right) \right\} + g^n $$
where fA represents the applied forces measured in the body frame by accelerometers, and gn denotes the gravity vector expressed in the navigation frame, [0 0 9.81]T.
As a result, the state space equations of the system can be finalized as:
$$ \dot{x}^n = v^n $$
$$ \dot{v}^n = C_b^n \left\{ f_A + \dot{\omega} \times r_{B/A} + \omega \times \left( \omega \times r_{B/A} \right) \right\} + g^n $$
$$ \dot{q} = \tfrac{1}{2}\, \Lambda_{nb}^n\, q = \tfrac{1}{2}\, Q(q) \begin{bmatrix} 0 \\ \omega \end{bmatrix}, \qquad \Lambda_{nb}^n = \begin{bmatrix} 0 & -\omega^T \\ \omega & -\Omega_{nb}^n \end{bmatrix} $$
where xn and vn stand for the position and velocity of the tool tip with respect to the navigation frame, and Q(q) is the 4 × 4 real matrix representation of a quaternion vector.
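A single Euler-integration step of this state-space model might look as follows; the discretization and the function names are our own illustrative choices (the quat_to_dcm helper is the one sketched after Equation (4)), not the authors' implementation.

```python
import numpy as np

G_N = np.array([0.0, 0.0, 9.81])  # gravity g^n in the NED navigation frame

def sdins_step(x_n, v_n, q, f_A, omega, omega_dot, r_BA, Ti):
    """Propagate position, velocity, and attitude over one IMU sample of
    duration Ti, following Equation (7) (simple Euler discretization)."""
    # Specific force at the tool tip, including the lever-arm terms of Equation (2)
    a_body = f_A + np.cross(omega_dot, r_BA) + np.cross(omega, np.cross(omega, r_BA))
    C = quat_to_dcm(q)                       # from the sketch after Equation (4)
    x_next = x_n + Ti * v_n                  # position update
    v_next = v_n + Ti * (C @ a_body + G_N)   # velocity update with gravity compensation
    # Quaternion kinematics q_dot = 0.5 * Lambda * q
    w1, w2, w3 = omega
    Lam = np.array([[0.0, -w1, -w2, -w3],
                    [ w1, 0.0,  w3, -w2],
                    [ w2, -w3, 0.0,  w1],
                    [ w3,  w2, -w1, 0.0]])
    q_next = q + 0.5 * Ti * (Lam @ q)
    return x_next, v_next, q_next / np.linalg.norm(q_next)  # keep unit norm
```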
The navigation frame and the body frame shown in Figure 1 rotate with the Earth as well. According to the relative motion equations, the acceleration of point A in the Earth frame is:
$$ a_A = a_0 + \dot{\omega}_E \times r_{A/0} + \omega_E \times \left( \omega_E \times r_{A/0} \right) + 2\,\omega_E \times \left( v_{A/0} \right)_n + \left( a_{A/0} \right)_n $$
where point O is the origin of the navigation frame. Since the navigation frame is fixed to the ground, the relative acceleration of the navigation frame with respect to the Earth, $a_0$, is zero. The angular velocity of the Earth is constant, approximately 7.3 × 10−5 rad/s [37], so the angular acceleration of the Earth is zero [36]. Because the relative position and velocity of point A with respect to point O are small in the hand-held application considered here, the Coriolis and centripetal acceleration terms in Equation (8) are too small to be detected by the available accelerometers; therefore:
$$ a_A = \left( a_{A/0} \right)_n $$
which means the acceleration of point A with respect to the Earth reference frame is its acceleration with respect to the navigation frame.

3. Vision System

In this research, a vision system is proposed that includes four CCD cameras located on an arc to expand the field of view (see Figure 2). In order to find the Cartesian mapping grid for transforming 2D positions in the cameras' image planes to the corresponding 3D position in the navigation frame, single-camera calibration of each camera and stereo calibration of each pair of adjacent cameras are required.
The calibration of the vision system provides the intrinsic and extrinsic parameters of the cameras [38] in order to map a 2D point on the image planes to the 3D point in the world coordinate system. The estimation of camera parameters requires a single camera imaging model, as shown in Figure 3.
The camera lens distortion causes two displacements, radial and tangential [39]. The farther a point lies from the center of the image plane, the larger the displacement, where the distance of a point $p = [x\ y]^T = [P_x/P_z\ \ P_y/P_z]^T$ on the image plane is defined by $r^2 = x^2 + y^2$.
Considering two vectors α and β as the radial and tangential distortion factors of a camera, the distortions can be calculated as [40]:
$$ d_r = 1 + \alpha_x r^2 + \alpha_y r^4 $$
$$ d_t = \begin{bmatrix} 2 \beta_x x y + \beta_y \left( r^2 + 2 x^2 \right) \\ \beta_x \left( r^2 + 2 y^2 \right) + 2 \beta_y x y \end{bmatrix} $$
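A hedged sketch of this distortion model in NumPy (the function name and the tuple ordering of alpha and beta are our assumptions):

```python
import numpy as np

def distort(x, y, alpha, beta):
    """Radial and tangential distortion of Equation (10) for a normalized
    image-plane point (x, y); alpha = (alpha_x, alpha_y), beta = (beta_x, beta_y)."""
    r2 = x**2 + y**2
    dr = 1.0 + alpha[0] * r2 + alpha[1] * r2**2               # radial scale factor
    dt = np.array([2.0 * beta[0] * x * y + beta[1] * (r2 + 2.0 * x**2),
                   beta[0] * (r2 + 2.0 * y**2) + 2.0 * beta[1] * x * y])  # tangential shift
    return dr, dt
```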
Consequently, the projection of each point in the world coordinate system into the image plane is:
$$ p = \begin{bmatrix} f_1\, d_r\, x \\ f_2\, d_r\, y \end{bmatrix} + \begin{bmatrix} d_{t,x} \\ d_{t,y} \end{bmatrix} + \begin{bmatrix} N_x \\ N_y \end{bmatrix} $$
where the vector N is a zero-mean Gaussian random measurement noise, and f1 and f2 denote the focal length factors of the lens. In fact, f1 and f2 are related to the focal length and the dimension of the pixels:
$$ f_1 = \frac{f\, s_u}{d_{px}}, \qquad f_2 = \frac{f}{d_{py}} $$
where f is the focal length; dpx and dpy refer to center-to-center distance between adjacent sensor elements in x and y directions, respectively; and su represents the image scale factor [41], therefore:
$$ p = f\, d_r \begin{bmatrix} k\, s_u\, \dfrac{x}{z} \\[6pt] l\, \dfrac{y}{z} \end{bmatrix} + \begin{bmatrix} d_{t,x} \\ d_{t,y} \end{bmatrix} + \begin{bmatrix} N_x \\ N_y \end{bmatrix} $$
where $1/k$ and $1/l$ denote the dimensions of a pixel on the image plane.
According to the camera model obtained in Equation (13), the geometric parameters f, su, α, and β can be estimated by capturing enough images while the coordinate of each 3D point P and its 2D projected point p are known in calibration grids:
$$ p = \frac{1}{z} \begin{bmatrix} f\, d_r\, k\, s_u & 0 & 0 \\ 0 & f\, d_r\, l & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} + \begin{bmatrix} N_x \\ N_y \end{bmatrix} = \frac{1}{z} M P + N $$
Applying the parameter estimation method [34,42] to Equation (11) gives the geometric parameters of a camera. Furthermore, the transformation matrix for each pair of adjacent cameras is computed by substituting the coordinate-system transformation equations into Equation (11) for each corresponding projected point.
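Putting Equations (10)–(13) together, projecting a 3D point expressed in the camera frame onto the image plane could be sketched as below; the noise term N is omitted, the distort helper is the one sketched after Equation (10), and all names are illustrative rather than the authors' code.

```python
import numpy as np

def project_point(P, f, s_u, k, l, alpha, beta):
    """Pinhole projection with lens distortion following Equation (13).
    P is a 3D point in camera coordinates; f, s_u, k, l, alpha, beta are the
    geometric parameters estimated during calibration."""
    X, Y, Z = P
    x, y = X / Z, Y / Z                      # normalized image coordinates
    dr, dt = distort(x, y, alpha, beta)      # sketch shown after Equation (10)
    return np.array([f * dr * k * s_u * x + dt[0],
                     f * dr * l * y + dt[1]])
```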
In order to localize the tool tip, edge detection and boundary extraction must be applied to every frame from each camera. Obtaining the edge of the tool tip requires a thresholding technique: each pixel is detected as an edge if its gradient is greater than the threshold. In this paper, the threshold is chosen so that the boundary pixels of the tool tip are detected as edge positions. Since the tool tip spans only a few pixels, an adaptive thresholding technique is applied to remove as many noise pixels around the tool tip as possible. For this purpose, a masking window is chosen around the initial guess of the position of the tool tip. Then, a fixed threshold is chosen that selects pixels whose values lie above 80% of the values of all pixels in the image. If the boundary detection technique can identify the boundary of the tool tip, the threshold selection is appropriate; otherwise, the previous threshold is reduced by 5%, and this procedure is run recursively to find the proper threshold. Afterwards, an opening morphological operation followed by a closing operation is applied to simplify and smooth the shape of the tool tip. Finally, the boundary of the tool tip is detected and extracted using the eight-connected neighbours technique.
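The recursive threshold selection and boundary extraction described above could be sketched as follows. This is an illustrative reconstruction using OpenCV 4 (cv2), where the 80% rule is interpreted as a fraction of the maximum intensity inside the masking window and cv2.findContours stands in for the eight-connected boundary-following step; neither choice is specified in the paper.

```python
import cv2
import numpy as np

def locate_tool_tip(gray, window, start_frac=0.80, step=0.05):
    """Find the tool-tip boundary inside a masking window by lowering the
    threshold 5% at a time until a boundary can be extracted."""
    x, y, w, h = window                      # window around the initial tip guess
    roi = gray[y:y + h, x:x + w]
    kernel = np.ones((3, 3), np.uint8)
    frac = start_frac
    while frac > 0:
        thresh = frac * float(roi.max())     # fixed threshold for this pass
        _, mask = cv2.threshold(roi, thresh, 255, cv2.THRESH_BINARY)
        # opening followed by closing to simplify and smooth the tip shape
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        if contours:                         # a boundary was found: threshold is adequate
            tip = max(contours, key=cv2.contourArea)
            return tip + np.array([x, y])    # back to full-image coordinates
        frac -= step                         # otherwise reduce the threshold by 5%
    return None
```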

4. Modified Kalman Filter

The integrated navigation technique employs two or more independent sources of navigation information with complementary characteristics to achieve an accurate, reliable, and low-cost navigation system. Figure 4 shows a block diagram of the integration of the multi-camera vision system and the inertial navigation system:
Figure 4. Integration of SDINS and vision system using EKF.
Typically, the Extended Kalman Filter (EKF) is applied to combine two independent estimates of a nonlinear variable [43]. The continuous form of a nonlinear system is described as:
$$ \dot{x}(t) = f\left( x(t), t \right) + G(t)\, \eta(t) $$
$$ z(t) = h\left( x(t), t \right) + n $$
Since the measurements are practically provided at discrete intervals of time, it is appropriate to express the system modeling in the form of discrete differential equations:
$$ x_{k+1} = \phi_k\, x_k + \eta_k $$
$$ z_{k+1} = H_{k+1}\, x_{k+1} + n_{k+1} $$
where:
$$ \phi_k = \exp\left[ F \left( t_{k+1} - t_k \right) \right], \qquad F(t_k) \approx \left. \frac{\partial f}{\partial x} \right|_{x = x_k}, \quad H_k \approx \left. \frac{\partial h}{\partial x} \right|_{x = x_k} $$
Therefore, the two sets of equations for the prediction and updating of the state of the system are defined as:
$$ \tilde{x}_{k+1} = f\left( x_k, k \right) $$
$$ \tilde{P} = F_k P_k F_k^T + G_k R_k G_k^T $$
$$ x_{k+1} = \tilde{x}_{k+1} + K \left[ z_{k+1} - h\left( \tilde{x}_{k+1}, k \right) \right] $$
$$ P_{k+1} = \tilde{P} - K H_k \tilde{P} $$
$$ K = \tilde{P} H_k^T \left( S_k + H_k \tilde{P} H_k^T \right)^{-1} $$
where the system noise and the measurement noise are zero mean with known covariance R and S, respectively.
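The predict/update cycle above translates directly into a few lines of NumPy. This generic sketch (our naming, not the authors' code) takes the process and measurement functions f and h together with their Jacobians F and H evaluated at the current estimate.

```python
import numpy as np

def ekf_step(x, P, z, f, h, F, H, G, R, S):
    """One EKF prediction/update cycle with process noise covariance R,
    measurement noise covariance S, and noise input matrix G."""
    # Prediction
    x_pred = f(x)
    P_pred = F @ P @ F.T + G @ R @ G.T
    # Kalman gain and measurement update
    K = P_pred @ H.T @ np.linalg.inv(S + H @ P_pred @ H.T)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = P_pred - K @ H @ P_pred
    return x_new, P_new
```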
According to Equations (7), (17), and (18), the discrete form of the system is developed as:
$$ x_{k+1} = x_k + T_i\, v_k $$
$$ v_{k+1} = v_k + T_i \left( C_k a_k + g^n \right) $$
$$ q_{k+1} = \left( I + 0.5\, T_i\, \Omega \right) q_k, \qquad a = f_A + \dot{\omega} \times r_{B/A} + \omega \times \left( \omega \times r_{B/A} \right) $$
where $T_i$ is the sampling period of the inertial sensors. In this research, instead of estimating the actual values of these quantities, we propose to estimate how much the position and the velocity change from one step to the next; that is:
$$ \Delta x_{k+1} = x_{k+1} - x_k = \Delta x_k + T_i\, \Delta v_k $$
$$ \Delta v_{k+1} = v_{k+1} - v_k = \Delta v_k + T_i \left( \Delta C_k\, a_{k-1} + C_k\, \Delta a_k \right) $$
As a consequence, the computation of the velocity is independent of the gravitational force in the new state-space model; the error caused by an inaccurate value of the gravitational force is therefore completely eliminated.
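A minimal sketch of the difference-state propagation (our notation; ΔC and Δa are formed from two consecutive samples) makes this cancellation explicit, since the gravity vector never appears:

```python
import numpy as np

def delta_state_step(dx, dv, C_prev, C_curr, a_prev, a_curr, Ti):
    """Propagate the position/velocity difference states over one IMU sample,
    following the difference equations above; gravity cancels in v_{k+1} - v_k."""
    dC = C_curr - C_prev          # change of the direction cosine matrix
    da = a_curr - a_prev          # change of the lever-arm-compensated acceleration
    dx_next = dx + Ti * dv
    dv_next = dv + Ti * (dC @ a_prev + C_curr @ da)
    return dx_next, dv_next
```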
The inertial sensor noise is theoretically modeled as a zero-mean Gaussian random process; in practice, its average is not exactly zero. Owing to the properties of the Gaussian random process, the discrete difference of such a process is also Gaussian with a much smaller actual mean, while its variance is twice that of the original process. As a result, the drift resulting from the input noise is reduced and smooth positioning is expected.
The equation of the INS with the state vector $X = [\Delta x\ \ \Delta v\ \ q]^T$ can be reformulated as:
$$ \begin{bmatrix} \Delta \dot{x} \\ \Delta \dot{v} \\ \dot{q} \end{bmatrix} = \begin{bmatrix} 0 & I & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0.5\,\Lambda \end{bmatrix} \begin{bmatrix} \Delta x \\ \Delta v \\ q \end{bmatrix} + \begin{bmatrix} 0 \\ \Delta C\, a + C\, \Delta a \\ 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 0 & \Delta C + 2C \\ 0.5\, Q(q) & 0 \end{bmatrix} \eta(t) $$
or:
$$ \begin{bmatrix} \Delta \dot{r} \\ \Delta \dot{v} \\ \dot{q} \end{bmatrix} = \begin{bmatrix} \Delta v \\ \Delta C\, a + C\, \Delta a \\ 0.5\, \Lambda\, q \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 0 & \Delta C + 2C \\ 0.5\, Q(q) & 0 \end{bmatrix} \eta(t) $$
Subsequently, the transition matrix [44] can be calculated as:
$$ F \equiv \left. \frac{\partial f}{\partial X} \right|_{X} = \begin{bmatrix} 0 & I & 0 \\ 0 & 0 & \dfrac{\partial}{\partial q}\left( \Delta C\, a + C\, \Delta a \right) \\ 0 & 0 & 0.5\,\Lambda \end{bmatrix} $$
By considering $\Delta a = [\Delta_1\ \Delta_2\ \Delta_3]^T$ and differentiating Equation (4):
$$ \frac{\partial}{\partial q}\left( C\, \Delta a \right) = 2 \begin{bmatrix} 2q_1\Delta_1 - q_4\Delta_2 + q_3\Delta_3 & 2q_2\Delta_1 + q_3\Delta_2 + q_4\Delta_3 & q_2\Delta_2 + q_1\Delta_3 & -q_1\Delta_2 + q_2\Delta_3 \\ q_4\Delta_1 + 2q_1\Delta_2 - q_2\Delta_3 & q_3\Delta_1 - q_1\Delta_3 & q_2\Delta_1 + 2q_3\Delta_2 + q_4\Delta_3 & q_1\Delta_1 + q_3\Delta_3 \\ -q_3\Delta_1 + q_2\Delta_2 + 2q_1\Delta_3 & q_4\Delta_1 + q_1\Delta_2 & -q_1\Delta_1 + q_4\Delta_2 & q_2\Delta_1 + q_3\Delta_2 + 2q_4\Delta_3 \end{bmatrix} $$
Substituting $\dot{C} = \lim_{\Delta t \to 0} \left( \Delta C / \Delta t \right)$, with $\Delta t = T_i$, into Equation (3) leads to:
$$ \Delta C = T_i\, C\, \Omega $$
Therefore:
$$ \frac{\partial}{\partial q}\left( \Delta C\, a \right) = \frac{\partial}{\partial q}\left( T_i\, C\, \Omega\, a \right) = T_i\, \frac{\partial}{\partial q}\left( C\, \alpha \right), \qquad \alpha = \Omega\, a = \begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \alpha_3 \end{bmatrix} = \begin{bmatrix} -\omega_3 a_2 + \omega_2 a_3 \\ \omega_3 a_1 - \omega_1 a_3 \\ -\omega_2 a_1 + \omega_1 a_2 \end{bmatrix} $$
Differentiating $C\,\alpha$ in the same way as above gives:
$$ \frac{\partial}{\partial q}\left( C\, \alpha \right) = 2 \begin{bmatrix} 2q_1\alpha_1 - q_4\alpha_2 + q_3\alpha_3 & 2q_2\alpha_1 + q_3\alpha_2 + q_4\alpha_3 & q_2\alpha_2 + q_1\alpha_3 & -q_1\alpha_2 + q_2\alpha_3 \\ q_4\alpha_1 + 2q_1\alpha_2 - q_2\alpha_3 & q_3\alpha_1 - q_1\alpha_3 & q_2\alpha_1 + 2q_3\alpha_2 + q_4\alpha_3 & q_1\alpha_1 + q_3\alpha_3 \\ -q_3\alpha_1 + q_2\alpha_2 + 2q_1\alpha_3 & q_4\alpha_1 + q_1\alpha_2 & -q_1\alpha_1 + q_4\alpha_2 & q_2\alpha_1 + q_3\alpha_2 + 2q_4\alpha_3 \end{bmatrix} $$
Because the vision system, acting as the measurement system, provides the position of the tool tip, the velocity can be computed from the present and the previous position at each time step:
$$ \tilde{z} = z + n = \begin{bmatrix} \tilde{x} \\ \tilde{v} \end{bmatrix}, \qquad \tilde{v}_{l+1} = \frac{\tilde{x}_{l+1} - \tilde{x}_l}{T_v} $$
where $T_v$ is the sampling period of the cameras. Accordingly, the observation equation becomes:
$$ \Delta z_{k+1} = \begin{bmatrix} I & 0 & 0 \\ 0 & I & 0 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} \Delta r_{k+1} \\ \Delta v_{k+1} \\ q_{k+1} \end{bmatrix} + n_{k+1}, \qquad \Delta v_{k+1} = \frac{T_i^2}{T_v} \left( \Delta r_{k+1} - \Delta r_k \right) $$
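As an illustration, the vision part of the measurement can be assembled from two consecutive camera fixes as below; the noise term and the mapping onto the difference states are omitted, and the names are ours.

```python
import numpy as np

def vision_measurement(x_cam_curr, x_cam_prev, Tv):
    """Build a [position; velocity] measurement from two consecutive
    tool-tip positions reported by the multi-camera system (camera period Tv)."""
    v_tilde = (x_cam_curr - x_cam_prev) / Tv   # finite-difference velocity
    return np.concatenate([x_cam_curr, v_tilde])
```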

5. Experimental Results

This section presents the experimental hardware setup and the results of applying the proposed EKF. The hardware includes a 3DM-GX1 IMU from MicroStrain, an IDS Falcon Quattro PCIe frame grabber from IDS Imaging Development Systems, and four surveillance IR-CCD cameras. The IMU contains three rate gyros and three accelerometers sampled at 100 Hz, with noise densities of 3.5 °/√h and 0.4 mg/√Hz (rms), respectively [45].
All cameras are connected through the frame grabber to a PC; the grabber provides four parallel video channels able to capture images from the four cameras simultaneously at 20 fps. Since the multi-camera vision system is used as the measurement system, the camera calibration procedure must be performed first. The intrinsic and extrinsic parameters of each camera are listed in Table 1.
Once the calibration is completed, the vision system is ready to track the tool and measure the position of the tool tip by applying image processing techniques. Figure 5 demonstrates the result of the video tracking by one of the cameras.
It should be mentioned that a predesigned path is printed on a 2D plane and the tool tip is made to trace it during its movement on the plane, in order to compare the performance of the proposed EKF with that of the conventional EKF reported in [5].
Sensor fusion allows the state variables of the system to be estimated at the sampling rate of the sensor with the highest measurement rate. In this experiment, the sampling rates of the cameras and the inertial sensors are 20 fps and 100 Hz, respectively; as a result, the measurement rate of the proposed integrated system is 100 Hz.
The classical EKF is applied in both switch and continuous modes. In the switch mode, the estimate of the state variables is corrected whenever a measurement from the vision system is available; otherwise, the states are estimated from the SDINS alone. In order to reduce the computational complexity of the image processing algorithms, sensor fusion allows the sampling rate of the vision system to be reduced to 10 fps and 5 fps. As illustrated in Table 2, the positioning error increases as the sampling rate of the cameras is reduced. In addition, the error of the proposed EKF grows faster than that of the other methods, since this technique assumes that the rate of change of the state variables is constant from one frame to the next, an assumption that is no longer valid at lower measurement rates.
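A minimal sketch of that switch-mode loop, with ekf, imu_stream, and camera_stream as illustrative placeholders rather than the authors' software:

```python
def fuse_switch_mode(imu_stream, camera_stream, ekf):
    """Propagate with the SDINS at every IMU sample (100 Hz) and correct the
    estimate only when a camera measurement (20/10/5 fps) is available."""
    for imu_sample in imu_stream:
        ekf.predict(imu_sample)              # SDINS propagation
        measurement = camera_stream.poll()   # None between camera frames
        if measurement is not None:
            ekf.update(measurement)          # vision correction when available
        yield ekf.state()                    # estimates at the inertial rate
```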
Although Table 2 shows that the position error of the continuous EKF is smaller than that of the others, it should be noted that the position obtained by the multi-camera vision system itself still contains errors relative to the predesigned path.
Figure 6 and Figure 7 compare the positions resulting from each method over two different parts of the tool-tip trajectory, at sampling rates of 16 fps and 5 fps. As shown, the camera path is traced smoothly by the continuous EKF. Since the position is estimated in real time, it is not possible to fit a curve between each pair of camera measurements without sensor fusion.
The position resulting from the switch EKF is crinkly due to the position drift of the SDINS, and the wrinkles are amplified as the measurement rate of the cameras decreases. The position estimated by the proposed EKF is smooth and ripple-free, and this method reduces the error of the entire system with respect to the predesigned path. As a result, the proposed EKF is suitable for higher measurement rates, while the continuous EKF is recommended for lower sampling rates. Although the inertial sensor errors resulting from noise and the common motion-dependent errors are compensated, the remaining errors still cause position estimation errors in the integrated system. In addition, video tracking errors contribute to the position estimation error as well.

6. Conclusions

This paper describes the use of an EKF to integrate a multi-camera vision system and inertial sensors. Sensor fusion allows the state variables to be estimated at the sampling rate of the sensor with the highest measurement rate, which helps reduce the sampling rate of the sensors with a high computational load.
The classical EKF is designed for nonlinear dynamic systems such as the strapdown inertial navigation system. Its performance degrades as the sampling rate of the cameras is lowered: the update rate decreases and the system must rely more on the inertial sensor outputs to estimate the position, so the drift of the SDINS makes the position error grow.
The modified EKF is proposed to obtain position estimates with less error. Furthermore, it removes the effect of the gravitational force from the state-space model, so the error resulting from inaccuracy in the evaluation of the gravitational force is eliminated. In addition, the estimated position is smooth and ripple-free. However, the proposed EKF is less effective at lower measurement rates. The remaining error in the estimated position results from inertial sensor errors, uncompensated common motion-dependent errors, attitude errors, video tracking errors, and unsynchronized data.

References

  1. Farrell, J.; Barth, M. Global Positioning System and Inertial Navigation; McGraw-Hill: New York, NY, USA, 1999; p. 145. [Google Scholar]
  2. Grewal, M.; Weill, L.R.; Andrews, A.P. Global Positioning Systems, Inertial Navigation, and Integration, 2nd ed; John Wiley & Sons: Hoboken, NJ, USA, 2007. [Google Scholar]
  3. Foxlin, E.; Naimark, L. VIS-Tracker: A Wearable Vision-Inertial Self-Tracker. Proceedings of IEEE Virtual Reality Conference, Los Angeles, CA, USA, March 2003; pp. 199–206.
  4. Parnian, N.; Golnaraghi, F. Integration of Vision and Inertial Sensors for Industrial Tools Tracking. Sens. Rev 2007, 27, 132–141. [Google Scholar]
  5. Parnian, N.; Golnaraghi, F. A low-Cost Hybrid SDINS/Multi-Camera Vision System for a Hand-Held Tool Positioning. Proceedings of 2008 IEEE/ION Position, Location and Navigation Symposium, Monterey, CA, USA, May 6–8, 2008; pp. 489–496.
  6. Ernest, P.; Mazl, R.; Preucil, L. Train Locator Using Inertial Sensors and Odometer. Proceedings of IEEE Intelligent Vehicles Symposium, Parma, Italy, June 2004; pp. 860–865.
  7. Pingyuan, C.; Tianlai, X. Data Fusion Algorithm for INS/GPS/Odometer Integrated Navigation. Proceedings of IEEE Conference on Industrial Electronics and Applications, Harbin, China, May 2007; pp. 1893–1897.
  8. Abuhadrous, I.; Nashashibi, F.; Laurgeau, C. 3D Land Vehicle Localization: A Real-time Multi-Sensor Data Fusion Approach using RTMAPS. Proceedings of the 11th International Conference on Advanced Robotics, Coimbra, Portugal, June 30–July 3, 2003; pp. 71–76.
  9. Bian, H.; Jin, Z.; Tian, W. Study on GPS Attitude Determination System Aided INS Using Adaptive Kalman Filter. Meas. Sci. Technol 2005, 16, 2072–2079. [Google Scholar]
  10. Hu, C.; Chen, W.; Chen, Y.; Liu, D. Adaptive Kalman Filtering for Vehicle Navigation. J. Global Position Syst 2003, 2, 42–47. [Google Scholar]
  11. Crassidis, J.L.; Lightsey, E.G.; Markley, F.L. Efficient and Optimal Attitude Determination Using Recursive Global Positioning System Signal Operations. J. Guid. Control Dyn 1999, 22, 193–201. [Google Scholar]
  12. Crassidis, J.L.; Markley, F.L. New Algorithm for Attitude Determination Using Global Positioning System Signals. J. Guid. Control Dyn 1997, 20, 891–896. [Google Scholar]
  13. Kumar, N.V. Integration of Inertial Navigation System and Global Positioning System Using Kalman Filtering. Ph.D. Thesis; Indian Institute of Technology: New Delhi, Delhi, India, 2004.
  14. Lee, T.G. Centralized Kalman Filter with Adaptive Measurement Fusion: it’s Application to a GPS/SDINS Integration System with an Additional Sensor. Int. J. Control Autom. Syst 2003, 1, 444–452. [Google Scholar]
  15. Pittelkau, M.E. An Analysis of Quaternion Attitude Determination Filter. J. Astron. Sci 2003, 51, 103–120. [Google Scholar]
  16. Markley, F.L. Attitude Error Representation for Kalman Filtering. J. Guid. Control Dyn 2003, 26, 311–317. [Google Scholar]
  17. Markley, F.L. Multiplicative vs. Additive Filtering for Spacecraft Attitude Determination. Proceedings of the 6th Conference on Dynamics and Control of Systems and Structures in Space (DCSSS), Riomaggiore, Italy, July 2004.
  18. Crassidis, J.L.; Markley, F.L. Unscented Filtering for Spacecraft Attitude Estimation. J. Guid. Control Dyn 2003, 26, 536–542. [Google Scholar]
  19. Grewal, M.S.; Henderson, V.D.; Miyasako, R.S. Application of Kalman Filtering to the Calibration and Alignment of Inertial Navigation Systems. IEEE Trans. Autom. Control 1991, 39, 4–13. [Google Scholar]
  20. Lai, K.L.; Crassidis, J.L.; Harman, R.R. In-Space Spacecraft Alignment Calibration Using the Unscented Filter. Proceedings of AIAA Guidance, Navigation, and Control Conference and Exhibit, Austin, TX, USA, August 2003; pp. 1–11.
  21. Pittelkau, M.E. Kalman Filtering for Spacecraft System Alignment Calibration. J. Guid. Control. Dynam 2001, 24, 1187–1195. [Google Scholar]
  22. Merwe, R.V.; Wan, E.A. Sigma-Point Kalman Filters for Integrated Navigation. Proceedings of the 60th Annual Meeting of the Institute of Navigation, Dayton, OH, USA, June 2004.
  23. Chung, H.; Ojeda, L.; Borenstein, J. Sensor fusion for Mobile Robot Dead-reckoning with a Precision-calibrated Fibre Optic Gyroscope. Proceedings of IEEE International Conference on Robotics and Automation, Seoul, Korea, May 2001; pp. 3588–3593.
  24. Chung, H.; Ojeda, L.; Borenstein, J. Accurate Mobile Robot Dead-reckoning with a Precision-Calibrated Fibre Optic Gyroscope. IEEE Trans. Rob. Autom 2004, 17, 80–84. [Google Scholar]
  25. Roumeliotis, S.I.; Sukhatme, G.S.; Bekey, G.A. Circumventing Dynamic Modeling: Evaluation Of The Error-State Kalman Filter Applied To Mobile Robot Localization. Proceedings of IEEE International Conference on Robotics and Automation, Detroit, MI, USA, May 1999; pp. 1656–1663.
  26. Friedland, B. Analysis of Strapdown Navigation Using Quaternions. IEEE Trans. Aerosp. Electron. Syst 1974, AES-14, 764–767. [Google Scholar]
  27. Tao, T.; Hu, H.; Zhou, H. Integration of Vision and Inertial Sensors for 3D Arm Motion Tracking in Home-based Rehabilitation. Int. J. Robot. Res 2007, 26, 607–624. [Google Scholar]
  28. Ang, W.T. Active Tremor Compensation in Handheld Instrument for Microsurgery. Ph.D. Thesis, Technical Report CMU-RI-TR-04-28; Robotics Institute, Carnegie Mellon University: Pittsburgh, PA, USA, May 2004.
  29. Ledroz, A.G.; Pecht, E.; Cramer, D.; Mintchev, M.P. FOG-Based Navigation in Downhole Environment During Horizontal Drilling Utilizing a Complete Inertial Measurement Unit: Directional Measurement-While-Drilling Surveying. IEEE Trans. Instrum. Meas 2005, 54, 1997–2006. [Google Scholar]
  30. Pandiyan, J.; Umapathy, M.; Balachandar, S.; Arumugam, A.; Ramasamy, S.; Gajjar, N.C. Design of Industrial Vibration Transmitter Using MEMS Accelerometer. Instit. Phys. Conf. Ser 2006, 34, 442–447. [Google Scholar]
  31. Huster, A.; Rock, S.M. Relative Position Sensing by Fusing Monocular Vision and Inertial Rate Sensors. Proceedings of IEEE International Conference on Advanced Robotics, Coimbra, Portugal, June 30–July 3, 2003; pp. 1562–1567.
  32. Persa, S.; Jonker, P. Multi-sensor Robot Navigation System. SPIE Int. Soc. Opt. Eng 2002, 4573, 187–194. [Google Scholar]
  33. Treiber, M. Dynamic Capture of Human Arm Motion Using Inertial Sensors and Kinematical Equations. Master's Thesis; University of Waterloo: Ontario, Canada, 2004.
  34. Titterton, D.H.; Weston, J.L. Strapdown Inertial Navigation Technology, 2nd ed; The institution of Electrical Engineers: Herts, UK, 2004. [Google Scholar]
  35. Hibbeler, R.C. Engineering Mechanics: Statics and Dynamics, 8th ed; Prentice-Hall: Bergen County, NJ, USA, 1998. [Google Scholar]
  36. Angular Acceleration of the Earth. Available online: http://jason.kamin.com/projects_files/equations.html/ (accessed on 21 January 2010).
  37. Angular Speed of the Earth. Available online: http://hypertextbook.com/facts/2002/Jason/Atkins.shtml/ (accessed on 21 January 2010).
  38. Forsyth, D.A.; Ponce, J. Computer Vision: A Modern Approach; Prentice-Hall: Upper Saddle River, NJ, USA, 2003. [Google Scholar]
  39. Yoneyama, S.; Kikuta, H.; Kitagawa, A.; Kitamura, K. Lens Distortion Correction for Digital Image Correlation by Measuring Rigid Body Displacement. Opt. Eng 2006, 42, 1–9. [Google Scholar]
  40. Brown, D.C. Close-Range Camera Calibration. Photogram. Eng 1971, 37, 855–866. [Google Scholar]
  41. Tsai, R.Y. A Versatile Camera Calibration Technique for High Accuracy 3D Machine Vision Metrology Using Off-the-shelf TV Cameras and Lenses. IEEE J. Rob. Autom 1987, RA-3, 323–344. [Google Scholar]
  42. Heikkila, J. Accurate Camera Calibration and Feature-based 3-D Reconstruction from Monocular Image Sequences. Dissertation; University of Oulu: Oulu, Finland, 1997.
  43. Grewal, M.S.; Andrews, A.P. Kalman Filtering: Theory and Practice Using MATLAB, 2nd ed; John Wiley: New York, NY, USA, 2001. [Google Scholar]
  44. Zarchan, P.; Musoff, H. Fundamentals of Kalman Filtering: A Practical Approach, 2nd ed; AIAA: Alexandria, VA, USA, 2005. [Google Scholar]
  45. MicroStrain: Orientation Sensors—Wireless Sensors. Available online: http://www.microstrain.com/ (accessed on 21 January 2010).
Figure 1. Hand-held tool and assigned reference frames.
Figure 2. Experimental setup for the multi-camera vision system.
Figure 3. Camera imaging model.
Figure 5. Tool tip tracking by Camera #1.
Figure 6. Estimated position obtained with the different estimation methods: continuous EKF (left), switch EKF (center), and proposed EKF (right), with a camera sampling rate of 16 fps.
Figure 7. Estimated position obtained with the different estimation methods: continuous EKF (left), switch EKF (center), and proposed EKF (right), with a camera sampling rate of 5 fps.
Table 1. Intrinsic and extrinsic parameters.

| Parameter | Camera #1 | Camera #2 | Camera #3 | Camera #4 |
|---|---|---|---|---|
| Focal length X (pixels) | 400.69 | 398.51 | 402.00 | 398.74 |
| Focal length Y (pixels) | 402.55 | 400.44 | 405.10 | 400.60 |
| Principal point X (pixels) | 131.12 | 152.74 | 144.77 | 136.90 |
| Principal point Y (pixels) | 130.10 | 122.79 | 118.23 | 145.34 |
| Distortion Kr,x | −0.3494 | −0.3522 | −0.3567 | −0.3522 |
| Distortion Kr,y | 0.1511 | 0.1608 | 0.0998 | 0.0885 |
| Distortion Kt,x | 0.0032 | 0.0047 | −0.0024 | 0.0024 |
| Distortion Kt,y | −0.0030 | −0.0005 | 0.0016 | −0.0002 |
| Rotation vector w.r.t. inertial reference frame | 1.552265, 2.255665, −0.635153 | 0.4686021, 2.889162, −0.7405382 | 0.6128003, −2.859007, 0.7741390 | 1.537200, −2.314144, 0.4821106 |
| Translation vector w.r.t. inertial reference frame (mm) | 729.4870, 293.6999, 873.3399 | 385.2578, 625.1560, 840.7220 | −61.1933, 623.1377, 851.9321 | −365.5847, 289.6135, 848.5442 |
Table 2. Positions estimated by the different estimation methods, compared with the position estimated by the multi-camera vision system.

| Cameras measurement rate | Proposed EKF, error (RMS) | Proposed EKF, variance | EKF (switch), error (RMS) | EKF (switch), variance | EKF (continuous), error (RMS) | EKF (continuous), variance |
|---|---|---|---|---|---|---|
| 16 fps | 0.9854 | 0.1779 | 1.0076 | 0.7851 | 0.4320 | 0.1386 |
| 10 fps | 1.0883 | 0.3197 | 1.2147 | 0.8343 | 0.5658 | 0.2149 |
| 5 fps | 1.4730 | 1.5173 | 1.3278 | 0.8755 | 0.7257 | 0.8025 |
