Article

Unscented Kalman Filtering for Single Camera Based Motion and Shape Estimation

by Dah-Jing Jwo, Chien-Hao Tseng, Jen-Chu Liu and Hsien-Der Lee

1 Department of Communications, Navigation and Control Engineering, National Taiwan Ocean University, 2 Pei-Ning Rd., Keelung 202-24, Taiwan
2 National Applied Research Laboratories, National Center for High-Performance Computing, Hsinchu 300, Taiwan
3 Department of Marine Engineering, National Taiwan Ocean University, 2 Pei-Ning Rd., Keelung 202-24, Taiwan
* Author to whom correspondence should be addressed.
Sensors 2011, 11(8), 7437-7454; https://doi.org/10.3390/s110807437
Submission received: 26 May 2011 / Revised: 4 July 2011 / Accepted: 5 July 2011 / Published: 25 July 2011
(This article belongs to the Section Physical Sensors)

Abstract

Accurate estimation of the motion and shape of a moving object is a challenging task because of the great variety of noise sources, such as electronic components and the influence of the external environment. A filtering/estimation approach can be used to reduce the noise in streaming video and thereby improve the accuracy of the estimated feature points on the moving object. For filtering problems involving nonlinear systems, the extended Kalman filter (EKF), which neglects higher-order terms in the linearization process, has been very popular. The unscented Kalman filter (UKF), which uses a deterministic sampling approach to capture the mean and covariance estimates with a minimal set of sample points, achieves at least second-order accuracy without the computation of Jacobians. In this paper, the UKF is applied to the rigid body motion and shape dynamics to estimate feature points on moving objects. The performance evaluation is carried out through a numerical study. The results show that the UKF provides a substantial improvement in estimation accuracy when estimating motion and planar surface parameters from a single camera.

1. Introduction

The problem of estimating the positions and velocities of moving features in space leads to the problem of estimating the motion and shape parameters of moving features from their corresponding image data observed over time. This is important in various engineering applications, such as robotics and machine vision [1–6]. A dynamical systems approach to machine vision is introduced in [1], in which the problem of motion and shape parameter estimation is described as an inverse problem associated with a pair of coupled Riccati partial differential equations. Identification of the motion and shape parameters using estimation techniques was discussed in [5,6]. A CCD camera with a laser range finder mounted on a mobile robot for better identification of motion and shape parameters can be seen in [5]. A sliding mode approach was proposed to estimate the motion of a moving body with the aid of a CCD camera [6]. For performance improvement, various estimation techniques have been adopted to reduce the noise. Algebraic methods yielded feasible results, but they were computationally unrealistic. The Kalman filter method is used in the estimation of the motion parameters to reduce the effect of the measurement errors that are inevitable in the real world. A Kalman filter (KF) [7] method for estimating 3D camera motion in image sequences, with application to video coding systems, was proposed by Kim et al. [8]. Kano et al. [9] performed a numerical study and compared an extended Kalman filter (EKF) based recursive algorithm with a non-recursive algebraic method for estimating motion and planar surface parameters.

The KF is theoretically attractive because it has been shown to minimize the variance of the estimation mean square error (MSE). A nonlinear filter is needed when the dynamics and/or the measurement relationships are nonlinear. The problem of estimating the state variables of nonlinear systems may be solved using a nonlinear version of the Kalman filter, of which the most popular form is the EKF. The fact that the EKF depends heavily on a predefined dynamics model is a major drawback. To achieve good filtering results, the designer is required to have complete a priori knowledge of both the dynamic process and measurement models, in addition to the assumption that both the process and measurement noises are zero-mean Gaussian white sequences. If the input data do not reflect the real model, the estimates may not be reliable.

Similar to the EKF, the unscented Kalman filter (UKF) [10–17] focuses on approximating the probability characteristics of the prediction and uses the standard minimum mean square error estimator. The UKF has been developed in the context of state estimation of dynamic systems as a method for approximating nonlinear distributions (or densities in the continuous case). The UKF is superior to the EKF not only in theory but also in many practical situations. The algorithm predicts the statistics with a set of carefully chosen sample points that capture the mean and covariance of the system. These sample points, sometimes referred to as sigma points, are employed to propagate the probability distribution of the state. The basic premise behind the UKF is that it is easier to approximate a Gaussian distribution than it is to approximate an arbitrary nonlinear function. Instead of linearizing with Jacobian matrices, as in the EKF, which achieves only first-order accuracy, the UKF captures the states to at least second order by using a deterministic sampling approach with a minimal set of sample points. In this paper, the deterministic sampling based UKF is applied to the rigid body motion and shape dynamics to estimate the feature points on a moving object. The results obtained show that the UKF provides more accurate and reliable estimates of the object. Investigation of the UKF approach to the motion and shape estimation problem has not been seen in the literature.

This paper is organized as follows. In Section 2, preliminary background on rigid body motion is reviewed. The shape dynamics and optical flow dynamics are discussed in Sections 3 and 4, respectively. The unscented Kalman filter is introduced in Section 5. Results and Discussion are given in Section 6. Conclusions are given in Section 7.

2. Rigid Body Motion

The mathematical description of a 3D point undergoing a rigid transformation about the camera axes is given as follows. Let ωx, ωy and ωz represent the angles of rotation about the X, Y and Z axes, respectively. Figure 1 illustrates the world coordinate system. An arbitrary rotation R can be represented by successive rotations around the X, Y and Z axes, respectively, as:

R = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\omega_x & -\sin\omega_x \\ 0 & \sin\omega_x & \cos\omega_x \end{bmatrix} \begin{bmatrix} \cos\omega_y & 0 & \sin\omega_y \\ 0 & 1 & 0 \\ -\sin\omega_y & 0 & \cos\omega_y \end{bmatrix} \begin{bmatrix} \cos\omega_z & -\sin\omega_z & 0 \\ \sin\omega_z & \cos\omega_z & 0 \\ 0 & 0 & 1 \end{bmatrix}

Assuming infinitesimal rotations, the lowest-order terms of the Taylor series expansions of the trigonometric functions sin and cos provide the following approximations:

\cos(\theta) \approx 1, \qquad \sin(\theta) \approx \theta

Using the approximations in Equation (2), R can be approximated, in terms of the angular velocities, by the skew-symmetric matrix form:

R \approx \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & -\omega_x \\ 0 & \omega_x & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & \omega_y \\ 0 & 1 & 0 \\ -\omega_y & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & -\omega_z & 0 \\ \omega_z & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \approx \begin{bmatrix} 1 & -\omega_z & \omega_y \\ \omega_z & 1 & -\omega_x \\ -\omega_y & \omega_x & 1 \end{bmatrix}

Let the vector T = [tx ty tz]T represent the translational velocity, where the elemental components tx, ty, and tz represent the translational velocities in the X, Y and Z directions, respectively. The velocity vector V = [Ẋ Ẏ Ż]T of a point in the world coordinates P = [X Y Z]T with respect to camera coordinates undergoing rigid transformation is represented as:

\begin{bmatrix} \dot{X} \\ \dot{Y} \\ \dot{Z} \end{bmatrix} = \begin{bmatrix} 0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} + \begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix}
which may be written in matrix form:
V = (R - I)P + T
where I represents a 3 × 3 identity matrix. By defining [-\omega_z \; \omega_y \; -\omega_x]^T as [\omega_1 \; \omega_2 \; \omega_3]^T and [t_x \; t_y \; t_z]^T as [b_1 \; b_2 \; b_3]^T, Equation (4) can be represented as:
V = \Omega P + b
where b = [b_1 \; b_2 \; b_3]^T and:
\Omega \triangleq \begin{bmatrix} 0 & \omega_1 & \omega_2 \\ -\omega_1 & 0 & \omega_3 \\ -\omega_2 & -\omega_3 & 0 \end{bmatrix}

Consider the dynamical system in Equation (5), where it is assumed that [\omega_1 \; \omega_2 \; \omega_3]^T \neq 0, together with the condition:

b \in \mathrm{Im}\,\Omega

It is easily seen that \mathrm{Im}\,\Omega is a plane in \mathbb{R}^3 given by:
\mathrm{Im}\,\Omega = \{\, x \in \mathbb{R}^3 \mid \omega^T x = 0 \,\}
where:
\omega = [\omega_3 \; -\omega_2 \; \omega_1]^T
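
To make the notation concrete, the following Python sketch (illustrative only, not code from the paper; the numerical values are merely examples) builds Ω from (ω1, ω2, ω3), evaluates the rigid-body velocity V = ΩP + b, and tests whether b lies in Im Ω by checking orthogonality to the axis ω:

```python
import numpy as np

def skew_omega(w1, w2, w3):
    """Skew-symmetric matrix Omega used in the rigid-body model V = Omega P + b."""
    return np.array([[0.0,  w1,  w2],
                     [-w1,  0.0, w3],
                     [-w2, -w3,  0.0]])

def in_image_of_omega(b, w1, w2, w3, tol=1e-12):
    """b lies in Im(Omega) iff it is orthogonal to the axis omega = [w3, -w2, w1]^T."""
    omega_axis = np.array([w3, -w2, w1])
    return abs(omega_axis @ b) < tol

# Example values (the motion parameters also used later in Section 6)
w1, w2, w3 = -0.2, 0.1, -0.1
Omega = skew_omega(w1, w2, w3)
b = np.array([0.1, 0.1, -0.1])          # an example translational term (illustrative)
P = np.array([2.5, 2.5, 2.0])           # an arbitrary scene point

V = Omega @ P + b                        # rigid-body velocity of the point
print("V =", V, " b in Im(Omega):", in_image_of_omega(b, w1, w2, w3))
```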

3. Shape Dynamics

As individual points on a surface move according to the motion field, the surface itself may undergo a change in shape. The dynamical system which describes the changing shape of the surface is called the shape dynamics. Let (X, Y, Z) be the world coordinate frame in which a surface is defined by:

Z = S ( X , Y , t )

Assume that S is smooth enough so that its derivatives with respect to each of the variables are defined everywhere. The motion field is assumed to be described by:

X ˙ = f ( X , Y , Z ) , Y ˙ = g ( X , Y , Z ) , Z ˙ = h ( X , Y , Z )

The way the surface in Equation (10) evolves as points on it move according to the motion field in Equation (11) is described by the quasi-linear partial differential equation called the “shape dynamics”:

\frac{\partial S}{\partial t} + f(X, Y, Z)\frac{\partial S}{\partial X} + g(X, Y, Z)\frac{\partial S}{\partial Y} = h(X, Y, Z)

Consider the initial condition:

S ( X , Y , 0 ) = ϕ ( X , Y )

The pair of Equations (12) and (13) constitutes an example of a Riccati partial differential equation. The surface defined by Equation (10) is assumed to be a plane described by:

Z = p X + q Y + r
where p, q, r are shape parameters that are time-varying as a result of the motion field. The vectors (X, Y, Z) and (p, q, r) are written in terms of the homogeneous coordinates (X̄, Ȳ, Z̄, W̄) and (p̄, q̄, s̄, r̄):
X = \frac{\bar{X}}{\bar{W}}, \quad Y = \frac{\bar{Y}}{\bar{W}}, \quad Z = \frac{\bar{Z}}{\bar{W}}
and:
p = \frac{\bar{p}}{\bar{s}}, \quad q = \frac{\bar{q}}{\bar{s}}, \quad r = \frac{\bar{r}}{\bar{s}}
where \bar{W} = (X^2 + Y^2 + Z^2)^{1/2}. Equation (14) can then be rewritten as:
\bar{p}\bar{X} + \bar{q}\bar{Y} - \bar{s}\bar{Z} + \bar{r}\bar{W} = 0

Using Equation (15), Equation (5) can be written as:

\frac{d}{dt}\begin{bmatrix} \bar{X} \\ \bar{Y} \\ \bar{Z} \\ \bar{W} \end{bmatrix} = \begin{bmatrix} \Omega & b \\ 0 & 0 \end{bmatrix}\begin{bmatrix} \bar{X} \\ \bar{Y} \\ \bar{Z} \\ \bar{W} \end{bmatrix}

The shape dynamics can be obtained by differentiating Equation (17) with respect to time t:

\frac{d}{dt}\begin{bmatrix} \bar{p} \\ \bar{q} \\ \bar{s} \\ \bar{r} \end{bmatrix} = \begin{bmatrix} \Omega & 0 \\ -b^T & 0 \end{bmatrix}\begin{bmatrix} \bar{p} \\ \bar{q} \\ \bar{s} \\ \bar{r} \end{bmatrix}
where p(t), q(t) and r(t) are the shape parameters to be discussed in Section 4.
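
A minimal sketch of the homogeneous-coordinate point dynamics follows (Python with NumPy; illustrative only, with the scale coordinate taken as 1, which suffices because its time derivative is zero). One Euler step of the 4 × 4 block system reproduces one Euler step of Ṗ = ΩP + b:

```python
import numpy as np

def homogeneous_A(Omega, b):
    """4x4 block matrix [[Omega, b], [0, 0]] driving the homogeneous point coordinates."""
    A = np.zeros((4, 4))
    A[:3, :3] = Omega
    A[:3, 3] = b
    return A

# One Euler step in homogeneous coordinates and the equivalent step on P = (X, Y, Z)
Omega = np.array([[0.0, -0.2, 0.1],
                  [0.2,  0.0, -0.1],
                  [-0.1, 0.1,  0.0]])
b = np.array([0.1, 0.1, -0.1])
P = np.array([2.5, 2.5, 2.0])
dt = 1e-3

xi = np.append(P, 1.0)                       # (X, Y, Z, W) with W = 1
xi_next = xi + dt * homogeneous_A(Omega, b) @ xi
P_next = P + dt * (Omega @ P + b)
assert np.allclose(xi_next[:3] / xi_next[3], P_next)   # W stays constant, so both agree
```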

4. Optical Flow Dynamics

As shown in Figure 2, the 3D vector P = (X, Y, Z) is assumed to be observed via perspective projection onto a plane parallel to the (x, y) axes and located at Z = 1 by defining the relationship between an image point u = (x, y) and a scene point (X, Y, Z) given by:

x = \frac{X}{Z}, \qquad y = \frac{Y}{Z}

The focal length f = 1 is usually used without loss of generality. With the relation given by Equation (5), differentiating Equation (20) leads to the relations:

\dot{x} = \frac{Z\dot{X} - X\dot{Z}}{Z^2} = \frac{\dot{X}}{Z} - x\frac{\dot{Z}}{Z} = (\omega_1 y + \omega_2) - x(-\omega_2 x - \omega_3 y) + \frac{b_1}{Z} - x\frac{b_3}{Z}
and:
\dot{y} = \frac{Z\dot{Y} - Y\dot{Z}}{Z^2} = \frac{\dot{Y}}{Z} - y\frac{\dot{Z}}{Z} = (-\omega_1 x + \omega_3) - y(-\omega_2 x - \omega_3 y) + \frac{b_2}{Z} - y\frac{b_3}{Z}
From Equations (14) and (20), we have:
\frac{1}{Z} = \frac{1 - px - qy}{r}
and hence the optical flow equations are given by:
\dot{x} = \omega_2 + \omega_1 y + \omega_2 x^2 + \omega_3 xy + (c_1 - xc_3)(1 - px - qy)
\dot{y} = \omega_3 - \omega_1 x + \omega_2 xy + \omega_3 y^2 + (c_2 - yc_3)(1 - px - qy)
where c_i = b_i / r, i = 1, 2, 3.
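
The two forms of the image velocity above can be cross-checked numerically. The sketch below (illustrative Python; the function names and numerical values are ours, not the paper's) evaluates ẋ and ẏ once with the depth Z explicit, as in Equations (21) and (22), and once after the substitution 1/Z = (1 − px − qy)/r, as in Equations (24) and (25); for a point lying on the plane the two results agree:

```python
import numpy as np

def flow_via_depth(x, y, Z, w1, w2, w3, b1, b2, b3):
    """Image velocity of the projected rigid-body motion, using the depth Z explicitly."""
    xdot = (w1 * y + w2) - x * (-w2 * x - w3 * y) + (b1 - x * b3) / Z
    ydot = (-w1 * x + w3) - y * (-w2 * x - w3 * y) + (b2 - y * b3) / Z
    return xdot, ydot

def flow_via_plane(x, y, p, q, r, w1, w2, w3, b1, b2, b3):
    """Same velocity after substituting 1/Z = (1 - p*x - q*y)/r, with c_i = b_i / r."""
    c1, c2, c3 = b1 / r, b2 / r, b3 / r
    xdot = w2 + w1 * y + w2 * x**2 + w3 * x * y + (c1 - x * c3) * (1 - p * x - q * y)
    ydot = w3 - w1 * x + w2 * x * y + w3 * y**2 + (c2 - y * c3) * (1 - p * x - q * y)
    return xdot, ydot

# A point on the plane Z = p*X + q*Y + r, projected to (x, y) = (X/Z, Y/Z)
p, q, r = 0.1, 0.1, 2.0
X, Y = 1.5, 0.5
Z = p * X + q * Y + r
x, y = X / Z, Y / Z
args = (-0.2, 0.1, -0.1, 0.1, 0.1, -0.1)     # (w1, w2, w3, b1, b2, b3)
print(flow_via_depth(x, y, Z, *args))
print(flow_via_plane(x, y, p, q, r, *args))  # should agree with the line above
```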

Equations (24) and (25) denote the optical flow equations. Even when the motion field is time invariant, the parameters of the optical flow equations can be time-varying, because the shape parameters change in time as a result of the motion field described by Equation (11). As t → ∞, we have:

p(t) \to -\frac{\omega_3}{\omega_1}, \quad q(t) \to \frac{\omega_2}{\omega_1}, \quad r(t) \to \frac{(\omega^T b)\,t}{\omega_1}, \quad c_i(t) \to \frac{b_i\,\omega_1}{(\omega^T b)\,t}, \quad i = 1, 2, 3

In particular, if b ∈ ImΩ, the shape parameters p(t), q(t) and r(t) are periodic functions satisfying the Riccati equations [2,9]:

\dot{p} = -\omega_2(1 + p^2) + \omega_1 q - \omega_3 pq
\dot{q} = -\omega_3(1 + q^2) - \omega_1 p - \omega_2 pq
\dot{r} = b_3 - b_1 p - b_2 q - r(\omega_3 q + \omega_2 p)
which is parameterized by a total of six motion parameters and three initial conditions on shape parameters. Therefore, a total of nine parameters need to be adopted for describing the shape dynamics. The Riccati equation propagates in time the relationship between coordinates X, Y, and Z expressed via the surface described by Equation (10).
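
As a consistency check on the shape dynamics, the following sketch (illustrative Python under the sign conventions reconstructed above) integrates the point dynamics Ṗ = ΩP + b and the Riccati equations for (p, q, r) side by side with small Euler steps; the moving point should remain approximately on the plane Z = pX + qY + r:

```python
import numpy as np

def point_rate(P, Omega, b):
    return Omega @ P + b

def shape_rate(p, q, r, w1, w2, w3, b1, b2, b3):
    """Right-hand side of the Riccati shape equations for (p, q, r)."""
    pdot = -w2 * (1 + p**2) + w1 * q - w3 * p * q
    qdot = -w3 * (1 + q**2) - w1 * p - w2 * p * q
    rdot = b3 - b1 * p - b2 * q - r * (w3 * q + w2 * p)
    return pdot, qdot, rdot

w1, w2, w3 = -0.2, 0.1, -0.1
b1, b2, b3 = 0.1, 0.1, -0.1
Omega = np.array([[0, w1, w2], [-w1, 0, w3], [-w2, -w3, 0]], dtype=float)
b = np.array([b1, b2, b3])

p, q, r = 0.1, 0.1, 2.0
X, Y = 2.5, 2.5
P = np.array([X, Y, p * X + q * Y + r])      # start the point exactly on the plane
dt, steps = 1e-3, 5000

for _ in range(steps):
    P = P + dt * point_rate(P, Omega, b)
    dp, dq, dr = shape_rate(p, q, r, w1, w2, w3, b1, b2, b3)
    p, q, r = p + dt * dp, q + dt * dq, r + dt * dr

# The moving point should still (approximately) satisfy Z = p X + q Y + r
print(abs(P[2] - (p * P[0] + q * P[1] + r)))   # small residual, limited by the Euler step
```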

Once the modeling of shape dynamics and optical flow dynamics is accomplished, the estimation algorithm can be employed for implementing the estimation of motion and planar surface parameters. In addition to the algebraic methods, the recursive estimation algorithms are applicable. The recursive algorithm presented by Kano et al. [9] is an EKF-based algorithm. The motivation of the paper is to carry out the UKF-based approach, which has not been seen in the literature for the motion and shape estimation problem. A brief introduction of the UKF is provided in Section 5.

5. The Unscented Kalman Filter

The UKF is a nonlinear filter which deals with the case governed by the nonlinear stochastic difference equations:

x_{k+1} = f_k(x_k) + w_k
z_k = h_k(x_k) + v_k
where the state vector x_k ∈ ℝⁿ, the process noise vector w_k ∈ ℝⁿ, the measurement vector z_k ∈ ℝᵐ, and the measurement noise vector v_k ∈ ℝᵐ. In Equation (28), both w_k and v_k are zero-mean Gaussian white sequences with zero cross-correlation with each other:
E[w_k w_i^T] = \begin{cases} Q_k, & i = k \\ 0, & i \neq k \end{cases}; \qquad E[v_k v_i^T] = \begin{cases} R_k, & i = k \\ 0, & i \neq k \end{cases}; \qquad E[w_k v_i^T] = 0 \ \text{for all } i \text{ and } k
where Q_k is the process noise covariance matrix and R_k is the measurement noise covariance matrix.

5.1. The Unscented Transformation

The first step in the UKF is to sample the prior state distribution, i.e., to generate the sigma points through the unscented transformation (UT). Figure 3 illustrates the true mean and covariance as compared with those obtained by the UKF mapping and by the EKF mapping. The dotted-line ellipse represents the true covariance. The UKF is implemented through the transformation of the nonlinear function f(·), shown as the solid-line ellipse in the top portion of the figure; the EKF relies on the Jacobian F = ∂f/∂x, shown as the solid-line ellipse in the bottom portion of the figure. The UKF estimates are expected to be closer to the true values than the EKF estimates.

Several UTs are available. One popular approach is the scaled unscented transformation [15–17]. Consider an n-dimensional random variable x, having mean x̂ and covariance P, and suppose that it propagates through an arbitrary nonlinear function f. The unscented transform creates 2n + 1 sigma vectors X (a capital letter) and associated weights, given by:

X^{(0)} = \hat{x}
X^{(i)} = \hat{x} + \left(\sqrt{(n+\lambda)P}\right)_i^T, \quad i = 1, \ldots, n
X^{(i+n)} = \hat{x} - \left(\sqrt{(n+\lambda)P}\right)_i^T, \quad i = 1, \ldots, n
where \left(\sqrt{(n+\lambda)P}\right)_i is the ith row of the matrix square root; \sqrt{(n+\lambda)P} can be obtained from the lower-triangular matrix of the Cholesky factorization; λ = α²(n + κ) − n is a scaling parameter; α determines the spread of the sigma points around x̂ and is usually set to a small positive value (e.g., 1e−4 ≤ α ≤ 1); κ is a secondary scaling parameter (usually set to 0); β is used to incorporate prior knowledge of the distribution of x (when x is normally distributed, β = 2 is an optimal value); W_i^{(m)} is the weight for the mean associated with the ith point; and W_i^{(c)} is the weight for the covariance associated with the ith point:
W_0^{(m)} = \frac{\lambda}{n + \lambda}
W_0^{(c)} = W_0^{(m)} + (1 - \alpha^2 + \beta)
W_i^{(m)} = W_i^{(c)} = \frac{1}{2(n + \lambda)}, \quad i = 1, \ldots, 2n

The sigma vectors are propagated through the nonlinear function to yield a set of transformed sigma points:

y_i = f(X^{(i)}), \quad i = 0, \ldots, 2n

The mean and covariance of yi are approximated by a weighted average mean and covariance of the transformed sigma points as follows:

\bar{y}_u = \sum_{i=0}^{2n} W_i^{(m)} y_i
\bar{P}_u = \sum_{i=0}^{2n} W_i^{(c)} (y_i - \bar{y}_u)(y_i - \bar{y}_u)^T
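
A compact sketch of the scaled unscented transformation described by the equations above is given below (Python with NumPy; illustrative only, with hypothetical example inputs). It generates the 2n + 1 sigma points and weights, propagates them through f, and forms the weighted mean and covariance:

```python
import numpy as np

def scaled_ut(x_mean, P, f, alpha=1e-4, beta=2.0, kappa=0.0):
    """Scaled unscented transform: propagate mean/covariance of x through y = f(x)."""
    n = x_mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)          # lower-triangular square root

    sigma = np.zeros((2 * n + 1, n))
    sigma[0] = x_mean
    for i in range(n):
        sigma[i + 1] = x_mean + S[:, i]            # columns of the lower Cholesky factor
        sigma[n + i + 1] = x_mean - S[:, i]

    Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)

    Y = np.array([f(s) for s in sigma])            # propagate each sigma point
    y_mean = Wm @ Y
    diff = Y - y_mean
    P_y = (Wc[:, None] * diff).T @ diff
    return y_mean, P_y

# Example: a mildly nonlinear mapping of a 2-D Gaussian
x_mean = np.array([1.0, 0.5])
P = np.diag([0.01, 0.02])
f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
print(scaled_ut(x_mean, P, f))
```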

5.2. The Unscented Kalman Filter

A high-level view of the operation of the unscented Kalman filter is shown in Figure 4. In the detailed algorithm of the UKF, the set of sigma points is first created by Equation (30). After the sigma points are generated, the time update (prediction step) of the UKF involves the following steps:

(\zeta_k)_i = f(X_{k-1}^{(i)}), \quad i = 0, \ldots, 2n
\hat{x}_k^- = \sum_{i=0}^{2n} W_i^{(m)} (\zeta_k)_i
P_k^- = \sum_{i=0}^{2n} W_i^{(c)} \left[(\zeta_k)_i - \hat{x}_k^-\right]\left[(\zeta_k)_i - \hat{x}_k^-\right]^T + Q_{k-1}
(Z_k)_i = h((\zeta_k)_i)
\hat{z}_k^- = \sum_{i=0}^{2n} W_i^{(m)} (Z_k)_i

The measurement update (correction) step of the UKF involves the following steps:

P_{zz} = \sum_{i=0}^{2n} W_i^{(c)} \left[(Z_k)_i - \hat{z}_k^-\right]\left[(Z_k)_i - \hat{z}_k^-\right]^T + R_k
P_{xz} = \sum_{i=0}^{2n} W_i^{(c)} \left[(\zeta_k)_i - \hat{x}_k^-\right]\left[(Z_k)_i - \hat{z}_k^-\right]^T
K_k = P_{xz} P_{zz}^{-1}
\hat{x}_k = \hat{x}_k^- + K_k (z_k - \hat{z}_k^-)
P_k = P_k^- - K_k P_{zz} K_k^T

The samples are propagated through the true nonlinear equations, which captures the states to at least second order.
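
The time- and measurement-update equations above can be collected into a single predict/correct routine. The following sketch (illustrative Python, not the authors' MATLAB implementation) follows those equations directly, with f and h denoting the process and measurement models of Equation (28):

```python
import numpy as np

def sigma_points(x, P, alpha=1e-4, beta=2.0, kappa=0.0):
    n = x.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)
    pts = np.vstack([x, x + S.T, x - S.T])         # 2n+1 sigma points, one per row
    Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = Wm[0] + (1 - alpha**2 + beta)
    return pts, Wm, Wc

def ukf_step(x, P, z, f, h, Q, R):
    """One UKF predict/update cycle following the time- and measurement-update equations."""
    pts, Wm, Wc = sigma_points(x, P)
    # Time update (prediction)
    zeta = np.array([f(s) for s in pts])
    x_pred = Wm @ zeta
    dX = zeta - x_pred
    P_pred = (Wc[:, None] * dX).T @ dX + Q
    # Predicted measurement
    Zs = np.array([h(s) for s in zeta])
    z_pred = Wm @ Zs
    dZ = Zs - z_pred
    # Measurement update (correction)
    P_zz = (Wc[:, None] * dZ).T @ dZ + R
    P_xz = (Wc[:, None] * dX).T @ dZ
    K = P_xz @ np.linalg.inv(P_zz)
    x_new = x_pred + K @ (z - z_pred)
    P_new = P_pred - K @ P_zz @ K.T
    return x_new, P_new
```

For the problem considered here, f is the discretized model given in Section 6 and h is the linear map h(x) = Hx.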

6. Results and Discussion

Simulation experiments have been carried out to evaluate the performance of the UKF approach in comparison with the conventional EKF approach for estimating motion and shape parameters. Computer code was developed using MATLAB. It is assumed that, at each time instant k, a set of three feature points (x_{i,k}, y_{i,k}), i = 1, 2, 3, is observed. The state vector x_k is given by:

x_k = [\omega_{1,k} \; \omega_{2,k} \; \omega_{3,k} \; c_{1,k} \; c_{2,k} \; c_{3,k} \; p_k \; q_k \; x_{1,k} \; y_{1,k} \; x_{2,k} \; y_{2,k} \; x_{3,k} \; y_{3,k}]^T

The associated state equation in discretized form is given by:

x_{k+1} = f(x_k)
which is constituted from the set of Equations (24), (25) and (27), as follows:
p_{k+1} = (\omega_{1,k} q_k - \omega_{2,k} - \omega_{2,k} p_k^2 - \omega_{3,k} p_k q_k)\,\Delta t + p_k
q_{k+1} = (-\omega_{1,k} p_k - \omega_{3,k} - \omega_{2,k} p_k q_k - \omega_{3,k} q_k^2)\,\Delta t + q_k
r_{k+1} = \left(b_{3,k} - b_{1,k} p_k - b_{2,k} q_k - r_k(\omega_{3,k} q_k + \omega_{2,k} p_k)\right)\Delta t + r_k
x_{k+1} = \left(\omega_{2,k} + \omega_{1,k} y_k + \omega_{2,k} x_k^2 + \omega_{3,k} x_k y_k + (c_{1,k} - x_k c_{3,k})(1 - p_k x_k - q_k y_k)\right)\Delta t + x_k
y_{k+1} = \left(\omega_{3,k} - \omega_{1,k} x_k + \omega_{2,k} x_k y_k + \omega_{3,k} y_k^2 + (c_{2,k} - y_k c_{3,k})(1 - p_k x_k - q_k y_k)\right)\Delta t + y_k
c_{i,k} = b_{i,k} / r_k
along with random walk models for the angular velocities, ω_{i,k+1} = ω_{i,k}, i = 1, 2, 3. Let z_k ∈ ℝ⁶ represent the observation vector; the observation equation can then be written as:
z k = H x k + v k
where v_k is assumed to be a zero-mean Gaussian white sequence with covariance R = σ²I₆ (σ = 0.01). The measurement matrix H ∈ ℝ^{2n×14} is defined by:
H = [\,0_{2n \times 8} \;\; I_{2n}\,], \quad n = 3

The process noise covariance matrix is given by Q = 10⁻¹⁰ × I₁₄ and the parameters utilized in the UKF are as follows: α = 1e−4, β = 2, κ = 0. The sigma points capture the same mean and covariance irrespective of the choice of matrix square root. The numerically efficient and stable Cholesky factorization has been used to obtain the sigma points.
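
For concreteness, a sketch of the discretized process and measurement models described above is given below (illustrative Python; the sampling interval DT is an assumption, as its value is not stated in this text, and the c_i components are simply held constant over a step in the same random-walk spirit as the angular velocities, since their update is not reproduced here). The propagation of (p, q) and of the three image points follows the discretized equations above, and H, Q, R follow the stated values:

```python
import numpy as np

DT = 0.01   # assumed sampling interval (not taken from the paper)

def f_discrete(xk, dt=DT):
    """Discretized process model for the 14-dimensional state
    [w1, w2, w3, c1, c2, c3, p, q, x1, y1, x2, y2, x3, y3]."""
    w1, w2, w3, c1, c2, c3, p, q = xk[:8]
    out = xk.copy()
    # Angular velocities and c_i: random-walk-style, held constant over the step
    # (the c_i update actually used in the paper is not reproduced here).
    # Shape parameters p, q from the discretized Riccati equations:
    out[6] = p + dt * (w1 * q - w2 - w2 * p**2 - w3 * p * q)
    out[7] = q + dt * (-w1 * p - w3 - w2 * p * q - w3 * q**2)
    # Optical flow propagation of the three observed feature points:
    for j in range(3):
        x, y = xk[8 + 2 * j], xk[9 + 2 * j]
        common = 1 - p * x - q * y
        out[8 + 2 * j] = x + dt * (w2 + w1 * y + w2 * x**2 + w3 * x * y + (c1 - x * c3) * common)
        out[9 + 2 * j] = y + dt * (w3 - w1 * x + w2 * x * y + w3 * y**2 + (c2 - y * c3) * common)
    return out

# Linear measurement model: the six image coordinates are observed directly
H = np.hstack([np.zeros((6, 8)), np.eye(6)])
h_linear = lambda xk: H @ xk
Q = 1e-10 * np.eye(14)
R = 0.01**2 * np.eye(6)
```

This model can be plugged into a UKF recursion such as the ukf_step sketch in Section 5.2.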

The experiment was conducted with three simulated feature points located at (2.5, 2.5), (1.5, 0.5) and (1.0, 1.5). The motion parameters are set as (ω₁, ω₂, ω₃) = (−0.2, 0.1, −0.1), and the initial values of the shape parameters are (p₀, q₀, r₀) = (0.1, 0.1, 2.0). For comparative purposes, the following two cases are considered [2]:

\text{Case 1}: \; b = (0.1 \;\; 0.1 \;\; -0.1)^T \in \mathrm{Im}\,\Omega \qquad \text{Case 2}: \; b = (0.1 \;\; 0.1 \;\; 0.1)^T \notin \mathrm{Im}\,\Omega

Figures 5–12 show the results of the numerical experiments. Figure 5 shows the sample trajectories of the three feature points on the image plane for the case b ∈ ImΩ. It can be seen that the motions are circular and periodic around the axis of rotation. Figures 6–8 give the results for the case b ∈ ImΩ. Estimation results for the parameters c1, c2 and c3 for the three feature points are given in Figure 6. The three curves, shown in red, green and black, denote the true, EKF-estimated and UKF-estimated values, respectively. For better clarity, the estimation errors for the three feature points are shown in Figure 7, which illustrates the advantage of the UKF. Figure 8 gives the estimation accuracy for the shape parameters pk, qk and rk. The EKF does not perform very well, whereas the UKF demonstrates good capability for capturing the true trajectories. A comparison of the RMSE (in mm) of the three feature points on the image plane using the EKF and UKF is summarized in Table 1. A comparison of the RMSE (in mm) for the corresponding scene points is summarized in Table 2.

Figure 9 shows the sample trajectories of the three feature points on the image plane for the case b ∉ ImΩ. The performance comparison between the EKF and UKF for b ∉ ImΩ is basically similar to the results obtained for b ∈ ImΩ. The motions recede to infinity linearly in time. Figures 10–12 provide the results for the case b ∉ ImΩ. Figure 10 shows the estimation results for the parameters c1, c2 and c3 for the three feature points; Figure 11 provides the estimation errors for the three feature points; and Figure 12 gives the estimation accuracy for the shape parameters pk, qk and rk. A comparison of the RMSE (in mm) of the three feature points on the image plane using the EKF and UKF is summarized in Table 3. A comparison of the RMSE (in mm) for the corresponding scene points is summarized in Table 4.

In general, the use of an adequate nonlinear model with a suitable filter makes it possible to achieve improved estimation performance. For the problem of single camera based motion and shape estimation, the UKF-based recursive estimation algorithm is a good alternative for implementing the estimation of motion and planar surface parameters. The states and the dynamic process are related nonlinearly. The nonlinearity/uncertainty in the state estimate is suitably taken care of by the UKF, which therefore demonstrates a substantial improvement in state estimation accuracy as compared with the EKF-based approach. The UKF exhibits superior performance because the series approximations in the EKF algorithm can lead to poor representations of the nonlinear functions and probability distributions of interest.

7. Conclusions

Various engineering applications, such as robotics and machine vision, require the estimation of positions and velocities of moving features in space, leading to the problem of estimating motion and shape parameters of moving features from their corresponding image data observed over time. To alleviate the noise and obtain better estimation accuracy for feature points on moving objects, a filtering/estimation approach can be used. A variety of estimation techniques have been adopted to reduce the noise, one of the most important being the extended Kalman filter (EKF).

In contrast to the EKF's linear approximation, the unscented transformation is accurate to second order for any nonlinear function. In light of the unscented Kalman filter's superiority over the extended Kalman filter, this paper has presented a deterministic-sampling UKF approach for estimating the motion and shape parameters of feature points on a moving object. Such a motion is described by the nonlinear model with the skew-symmetric matrix Ω, which is widely used in the theory of machine vision. The UKF is able to deal with this nonlinear formulation directly, which ensures more accurate parameter estimation. For the nonlinear estimation problem, alternatives to the classical model-based extended Kalman filter can be employed. The UKF is a nonlinear distribution approximation method, which uses a finite number of sigma points to propagate the probability of the state distribution through the nonlinear dynamics of the system. The UKF exhibits superior performance compared with the conventional EKF since the series approximations in the EKF algorithm can lead to poor representations of the nonlinear functions and probability distributions of interest.

The analyses were confirmed by simulation studies. For both cases, b ∈ ImΩ and b ∉ ImΩ, the motion and shape parameters were recovered successfully with remarkable accuracy improvement. The results show that the proposed UKF method, compared with the EKF, demonstrates substantial improvement in recovering the motion and shape of moving objects.

Acknowledgments

This work has been supported in part by the National Science Council of China under grant NSC 97-2221-E-019-012 and NSC 98-2221-E-019-021-MY3. Valuable suggestions and detailed comments by the anonymous reviewers are gratefully acknowledged.

References

1. Loucks, T.; Ghosh, B.K.; Lund, J. An Optical Flow Based Approach for Motion and Shape Parameter Estimation in Computer Vision. Proceedings of the 31st IEEE Conference on Decision and Control, Tucson, AZ, USA, 16–18 December 1992; pp. 819–823.
2. Ghosh, B.K.; Loucks, E.P. A perspective theory for motion and shape estimation in machine vision. SIAM J. Control Optim. 1995, 33, 1530–1559.
3. Derpanis, K.G. The Motion Field and its Affine Approximation, 2004. Available online: http://www.cse.yorku.ca/~kosta/CompVis_Notes/motion_field_review.pdf/ (accessed on 7 July 2011).
4. Kano, H.; Ghosh, B.K. Identification of Relative Position and Orientation of Two Cameras from Motion and Shape Parameters of Moving Rigid Body. Proceedings of the 39th IEEE Conference on Decision and Control, Sydney, Australia, 12–15 December 2000; pp. 5169–5174.
5. Takahashi, S.; Ghosh, B.K. Motion and Shape Parameters Identification with Vision and Range. Proceedings of the American Control Conference, Arlington, VA, USA, 25–27 June 2001; pp. 4626–4631.
6. Dogan, E.; Yilmaz, B.; Unel, M.; Sabanovic, A. A Sliding Mode Approach to Visual Motion Estimation. Proceedings of the 9th IEEE International Workshop on Advanced Motion Control, Istanbul, Turkey, 27–29 March 2006; pp. 733–738.
7. Brown, R.G.; Hwang, P.Y.C. Introduction to Random Signals and Applied Kalman Filtering; John Wiley & Sons: New York, NY, USA, 1997.
8. Kim, E.T.; Han, J.-K.; Kim, H.-M. A Kalman-Filtering Method for 3D Camera Motion Estimation from Image Sequences. Proceedings of the International Conference on Image Processing (ICIP'97), Washington, DC, USA, 26–29 October 1997; Volume 3, pp. 630–633.
9. Kano, H.; Ghosh, B.K.; Kanai, H. Single camera based motion and shape estimation using extended Kalman filtering. Math. Comput. Model. 2001, 34, 511–525.
10. Wan, E.A.; van der Merwe, R. The Unscented Kalman Filter for Nonlinear Estimation. Proceedings of the Adaptive Systems for Signal Processing, Communication and Control (AS-SPCC) Symposium, Alberta, Canada, 1–4 October 2000; pp. 153–156.
11. Wan, E.A.; van der Merwe, R. The Unscented Kalman Filter. In Kalman Filtering and Neural Networks; Haykin, S., Ed.; Wiley: New York, NY, USA, 2001.
12. Simon, D. Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches; John Wiley & Sons: New York, NY, USA, 2006.
13. Julier, S.J.; Uhlmann, J.K. Reduced Sigma Point Filters for the Propagation of Means and Covariances through Nonlinear Transformations. Proceedings of the American Control Conference, Anchorage, AK, USA, 8–10 May 2002; pp. 887–892.
14. Li, Y.; Wang, J.; Rizos, C.; Mumford, P.; Ding, W. Low-Cost Tightly Coupled GPS/INS Integration Based on a Nonlinear Kalman Filter Design. Proceedings of the 2006 National Technical Meeting of The Institute of Navigation, Monterey, CA, USA, 18–20 January 2006; pp. 958–966.
15. Julier, S.J. The Scaled Unscented Transformation. Proceedings of the American Control Conference, Anchorage, AK, USA, 8–10 May 2002; pp. 4555–4559.
16. Van der Merwe, R.; Doucet, A.; de Freitas, N.; Wan, E. The Unscented Particle Filter; Technical Report CUED/F-INFENG/TR 380; Cambridge University Engineering Department: Cambridge, UK, August 2000.
17. Pourtakdoust, S.H.; Ghanbarpour Asl, H. An adaptive unscented Kalman filter for quaternion-based orientation estimation in low-cost AHRS. Aircr. Eng. Aerosp. Technol. 2007, 79, 485–493.
Figure 1. Illustration of the world coordinate system [3].
Figure 2. The relationship between an image point u = (x, y) and a scene point P = (X, Y, Z).
Figure 3. Illustration of the properties of the UKF and EKF. Reproduced with permission from Yong Li [14].
Figure 4. High-level operation of the unscented Kalman filter.
Figure 5. Sample trajectories for the three feature points on image plane for the case b ∈ ImΩ.
Figure 6. Estimation results for parameters c1, c2 and c3 for the three feature points for the case b ∈ ImΩ (true in red; EKF in green; UKF in black).
Figure 7. Estimation errors for the three feature points for the case b ∈ ImΩ.
Figure 8. Comparison of estimation accuracy for the shape parameters pk, qk and rk for the case b ∈ ImΩ (true in red; EKF in green; UKF in black).
Figure 9. Sample trajectories for the three feature points on image plane for the case b ∉ ImΩ.
Figure 10. Comparison of estimation results for parameters c1, c2 and c3 for the three feature points for the case b ∉ ImΩ (true in red; EKF in green; UKF in black).
Figure 11. Estimation errors for the three feature points for the case b ∉ ImΩ.
Figure 12. Comparison of estimation accuracy for the shape parameters pk, qk and rk for the case b ∉ ImΩ (true in red; EKF in green; UKF in black).
Table 1. Comparison of RMSE (in mm) of the three feature points on the image plane using the EKF and UKF (b ∈ ImΩ).

                                EKF                             UKF
                                x        y        x-y          x        y        x-y
Feature point 1 (x1, y1)        0.0051   0.0057   0.0077       0.0011   0.0011   0.0016
Feature point 2 (x2, y2)        0.0037   0.0037   0.0052       0.0009   0.0008   0.0012
Feature point 3 (x3, y3)        0.0058   0.0043   0.0072       0.0008   0.001    0.0013
Table 2. Comparison of RMSE (in mm) for the corresponding scene points using the EKF and UKF (b ∈ ImΩ).

                                EKF                                       UKF
                                X        Y        Z        X-Y-Z         X        Y        Z        X-Y-Z
Feature point 1 (x1, y1)        5.35     5.6286   5.4815   9.5053        1.5783   1.7815   0.7082   2.4832
Feature point 2 (x2, y2)        2.4956   1.0329   1.7686   3.2284        0.621    0.2423   0.3968   0.7757
Feature point 3 (x3, y3)        1.6892   2.546    1.9205   3.6088        0.5264   0.8431   0.5515   1.1367
Table 3. Comparison of RMSE (in mm) of the three feature points on the image plane using the EKF and UKF (b ∉ ImΩ).

                                EKF                             UKF
                                x        y        x-y          x        y        x-y
Feature point 1 (x1, y1)        0.0056   0.0073   0.0092       0.001    0.0011   0.0015
Feature point 2 (x2, y2)        0.0033   0.0029   0.0044       0.0009   0.0008   0.0011
Feature point 3 (x3, y3)        0.0028   0.0025   0.0037       0.0008   0.001    0.0013
Table 4. Comparison of RMSE (in mm) for the corresponding scene points using the EKF and UKF (b ∉ ImΩ).

                                EKF                                       UKF
                                X        Y        Z        X-Y-Z         X        Y        Z        X-Y-Z
Feature point 1 (x1, y1)        2.9116   2.557    5.1281   6.4276        0.476    1.4102   0.7018   1.6455
Feature point 2 (x2, y2)        0.8623   1.0275   1.6439   2.1217        0.0938   0.0894   0.1173   0.1748
Feature point 3 (x3, y3)        0.8424   1.1548   1.9177   2.3918        0.0562   0.087    0.1245   0.1619
