Article

A Novel Smooth Variable Structure Smoother for Robust Estimation

1 School of Aerospace Science and Technology, Xidian University, Xi’an 710126, China
2 Department of Electrical, Electronic, and Information Engineering, University of Bologna, 47521 Cesena (FC), Italy
3 Academy of Space Electronic Information Technology, Xi’an 710100, China
* Author to whom correspondence should be addressed.
Sensors 2020, 20(6), 1781; https://doi.org/10.3390/s20061781
Submission received: 14 January 2020 / Revised: 11 March 2020 / Accepted: 19 March 2020 / Published: 23 March 2020
(This article belongs to the Section Remote Sensors)

Abstract: The smooth variable structure filter (SVSF) is a relatively new filter based on sliding-mode concepts, with good stability and robustness against modeling uncertainties and errors. However, the SVSF is insufficient at suppressing Gaussian noise. A novel smooth variable structure smoother (SVSS) based on the SVSF is presented here; it addresses this drawback and improves the estimation accuracy of the SVSF. Estimating the state of a linear Gaussian system with the SVSS proceeds in two steps: first, the SVSF state estimate and covariance are computed during a forward pass in time; then, the smoothed state estimate is computed during a backward pass using the measurement innovations and the covariance estimates. Simulation results for maneuvering target tracking show that the SVSS outperforms another SVSF-based smoother and the Kalman smoother in different tracking scenarios. The proposed SVSS could therefore be applied widely to state estimation in dynamic systems.

1. Introduction

State and parameter estimation of dynamic systems plays a key role in many application fields. In practice, not only is real-time state estimation required, but higher-precision results are also needed; a smoother provides them by using additional measurements made after the time of the estimated state vector [1,2,3,4]. For example, in a radar system a smoother is used to obtain a higher-precision track trajectory for battlefield situation assessment and to make more accurate estimates of aircraft performance. Smoothers are generally divided into fixed-point, fixed-interval, and fixed-lag smoothers [5,6,7]; which type to use depends on the attributes of the application.
In a dynamic system, the real-time sampled data contain various interferences and noises, so it is difficult to obtain the true state of the system. In 1960, for the linear Gaussian noise case, a recursive solution was obtained with the Kalman filter (KF) [8], which is optimal in the linear, unbiased, minimum mean square error sense. However, KF estimates become imprecise and unstable when the system is nonlinear. To address this problem, scholars have proposed several alternatives, such as the extended Kalman filter (EKF) [9], the unscented Kalman filter (UKF) [10], and the cubature Kalman filter (CKF) [11,12]. The EKF treats a very small region of a nonlinear system as linear and is equivalent to a first-order Taylor approximation. Compared with the EKF, the UKF matches a second-order Taylor approximation without computing the Jacobian matrix. The CKF is based on a third-degree spherical-radial cubature rule that provides a set of cubature points, scaling linearly with the state-vector dimension, to approximate the state expectation and covariance of a nonlinear system; it is equivalent to a fourth-order Taylor approximation and is more stable. For nonlinear and non-Gaussian cases, research on the particle filter is very active, but its huge computational burden limits its scope of application [13,14].
One of the main problems of KF-based filters is that their estimation performance degrades in the presence of modeling and parameter uncertainties. For example, state and measurement noises can have heavy-tailed and/or skewed non-Gaussian distributions when the measurements contain heavy clutter [15,16,17]. To deal with such unknown noises, several robust methods have been presented [18,19]. For systems with heavy-tailed or skewed measurement noises, robust Kalman filters have been proposed that model the measurement noise with Student’s t or skew-t distributions [15,20]. A filter based on a detect-and-reject idea handles measurements partially disturbed by outliers [21]. To improve the robustness and performance of the EKF, optimization algorithms [22], an adaptive step-size control method [23], and an online expectation-maximization approach [24] have been applied; other approaches include optimal [25] and minimax [26] estimators. These filters perform poorly when the system model is uncertain, because of the mismatch between the theoretical signal model assumed by the filter and the time-varying state model of the actual system; they are effective only when the theoretical model is entirely consistent with the true state model. When the system is a hybrid system containing a finite set of models, multi-model methods, which differ from single-model strategies, have been proposed to estimate the system state. Popular multi-model methods include the interacting multiple-model method and the variable-structure interacting multiple-model method [27,28,29,30,31]. To date, state estimation of dynamic systems under modeling and parameter uncertainty remains a difficult problem that deserves further study.
The smooth variable structure filter (SVSF) was proposed in 2007 based on variable structure theory, and it has advantages in dealing with uncertain modeling and noise interference [32]. Like the KF, the SVSF is a predictor-corrector strategy; its gain is a discontinuous corrective gain that confines the state estimate to a small deviation around the true state trajectory, thereby improving the stability and robustness of the estimation [32]. The SVSF is known to be a sub-optimal filter [10,12], and different methods have been proposed to enrich the theory [33,34,35,36], such as a higher-order version and a revised version [37] of the SVSF; Gadsden derived the state covariance of the SVSF and discussed opportunities for its application [38]. References [10,12] used the state covariance to calculate an optimal smoothing boundary layer that determines whether to use the SVSF gain or another filtering method (e.g., EKF, UKF, CKF), and obtained better filtering results. The SVSF has thus developed significantly and has been applied in various fields, such as vehicle navigation [39,40], battery management [41,42], fault detection and diagnosis [43,44,45], and artificial intelligence [45,46,47]. However, the ability of the SVSF to eliminate noise does not meet the estimation requirements of many applications, such as target tracking, and it can be improved by a smoother. Few previous works have studied the smoothing problem for the SVSF. The earliest related articles [48,49] presented a two-pass smoother based on the SVSF (labeled SVSTPS), which uses the SVSF gain. As mentioned above, the SVSF gain is sub-optimal in the linear case [10,12], so the performance of SVSF smoothing can be further improved.
This paper is inspired by the Kalman smoother [5,6] and derives the proposed SVSS algorithm for the linear case. Instead of using the SVSF gain, the proposed SVSS obtains a better state estimate by using the innovation sequence and projection theory [8]. Since our former work focused on maneuvering target tracking [50,51,52,53], the SVSS is also applied here to maneuvering target tracking. The rest of the paper is structured as follows. Section 2 reviews the SVSF. Section 3 presents the smoothing theory and the SVSS algorithm. Section 4 gives the simulations and analysis. Section 5 concludes the paper.

2. The SVSF Strategies

Sliding-mode control and estimation is a well-established technique [54] that is widely adopted for its easy implementation and its robustness to modeling uncertainties and errors. Based on sliding-mode concepts, Habibi first proposed the variable structure filter (VSF) in 2003 [54], a new type of predictor-corrector estimator applicable to linear systems. The SVSF, an improved form of the VSF, can be used for linear or nonlinear systems [32].
The trajectory of the state variable over time is shown in Figure 1. In the SVSF concept, the system state trajectory is generated by an uncertain model. The estimated state trajectory is driven toward the true trajectory until it reaches the existence subspace. Once the estimated state is within the existence subspace, it is forced to switch back and forth across the true state trajectory. If the modeling errors and noises are bounded, the estimates remain within this subspace. The SVSF strategy is therefore stable and robust to modeling uncertainties and errors.
For a linear dynamic system with zero-mean Gaussian noise, the state equation and observation equation are given by Equation (1):

$$\left\{\begin{aligned} x_{k+1} &= F x_k + w_k \\ z_{k+1} &= H x_{k+1} + v_{k+1} \end{aligned}\right. \quad (1)$$
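For readers who wish to reproduce the following steps numerically, the sketch below generates a synthetic trajectory and measurements from the model of Equation (1). It is an illustrative NumPy helper, not part of the original paper, and all names are placeholders.

```python
import numpy as np

def simulate_linear_system(F, H, Q, R, x0, steps, seed=0):
    """Draw a state trajectory and measurements from Eq. (1):
    x_{k+1} = F x_k + w_k,  z_{k+1} = H x_{k+1} + v_{k+1}."""
    rng = np.random.default_rng(seed)
    n, m = F.shape[0], H.shape[0]
    xs, zs = [], []
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = F @ x + rng.multivariate_normal(np.zeros(n), Q)   # process noise w_k
        z = H @ x + rng.multivariate_normal(np.zeros(m), R)   # measurement noise v_{k+1}
        xs.append(x.copy())
        zs.append(z)
    return np.array(xs), np.array(zs)
```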
The iterative process of the SVSF is as follows. The predicted state estimate and measurement estimate are calculated as:

$$\hat{x}_{k+1|k} = F \hat{x}_{k|k} \quad (2)$$

$$\hat{z}_{k+1|k} = H \hat{x}_{k+1|k} \quad (3)$$

The measurement innovation is expressed by:

$$e_{k+1|k} = z_{k+1} - \hat{z}_{k+1|k} \quad (4)$$

Based on Reference [32], the SVSF gain $K_{k+1}^{SVSF}$ is given by:

$$K_{k+1}^{SVSF} = H^{+}\left(\left|e_{k+1|k}\right| + \gamma\left|e_{k|k}\right|\right) \circ \mathrm{sat}\!\left(\bar{\psi}^{-1} e_{k+1|k}\right) \quad (5)$$

where $K_{k+1}^{SVSF}$ is the corrective gain, $H^{+}$ is the generalized inverse (pseudo-inverse) of $H$, $\gamma$ ($0 \le \gamma < 1$) is the SVSF “memory” or convergence rate, $|e|$ is the element-wise absolute value of $e$, and $\circ$ represents the multiplication of the corresponding elements of the two vectors (Hadamard product). Besides,

$$\mathrm{sat}\!\left(\bar{\psi}^{-1} e_{k+1|k}\right) = \begin{cases} 1, & \bar{\psi}^{-1} e_{k+1|k} \ge 1 \\ \bar{\psi}^{-1} e_{k+1|k}, & -1 < \bar{\psi}^{-1} e_{k+1|k} < 1 \\ -1, & \bar{\psi}^{-1} e_{k+1|k} \le -1 \end{cases} \quad (6)$$

and

$$\bar{\psi}^{-1} = \begin{bmatrix} \dfrac{1}{\psi_1} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \dfrac{1}{\psi_m} \end{bmatrix} \quad (7)$$

where $\psi_i$ ($i = 1, \ldots, m$) is the $i$-th element of the smoothing boundary layer vector $\psi$, and $m$ is the measurement dimension. The state estimate is updated by:

$$\hat{x}_{k+1|k+1} = \hat{x}_{k+1|k} + K_{k+1}^{SVSF} \quad (8)$$

The posterior measurement estimate and its innovation are described as:

$$\hat{z}_{k+1|k+1} = H \hat{x}_{k+1|k+1} \quad (9)$$

$$e_{k+1|k+1} = z_{k+1} - \hat{z}_{k+1|k+1} \quad (10)$$
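As an illustration only, the following NumPy sketch implements one SVSF predict/correct cycle from Equations (2)-(10) under the stated assumptions (element-wise saturation, pseudo-inverse of H); the function and variable names are ours, not the paper's.

```python
import numpy as np

def svsf_step(x_post, e_post, z_next, F, H, psi, gamma=0.1):
    """One SVSF cycle, Eqs. (2)-(10): predict, form the innovation,
    apply the corrective gain, and return the updated state and innovation."""
    x_pred = F @ x_post                       # Eq. (2)
    z_pred = H @ x_pred                       # Eq. (3)
    e_pred = z_next - z_pred                  # Eq. (4), a priori innovation
    sat = np.clip(e_pred / psi, -1.0, 1.0)    # Eq. (6), element-wise saturation
    # Eq. (5): corrective gain (a vector here), '*' is the Hadamard product
    K = np.linalg.pinv(H) @ ((np.abs(e_pred) + gamma * np.abs(e_post)) * sat)
    x_upd = x_pred + K                        # Eq. (8)
    e_upd = z_next - H @ x_upd                # Eqs. (9)-(10), a posteriori innovation
    return x_upd, e_upd
```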
The SVSF estimation process converges to the existence subspace, as shown in Figure 1 and Figure 2. The width β of the existence subspace is an unknown, time-varying quantity. In general, β is not completely known, but an upper bound (i.e., ψ) can be chosen based on prior knowledge.
In the existence subspace, the SVSF gain forces the estimated state to switch back and forth across the true state trajectory. The high-frequency switching caused by the SVSF gain is called chattering [32]; it makes the estimate inaccurate, and it can be minimized with a suitably chosen smoothing boundary layer ψ. As shown in Figure 2a, when the smoothing boundary layer is larger than the existence subspace boundary layer, the estimated state trajectory is smoothed. When the smoothing boundary layer is smaller than the existence subspace boundary layer, the chattering remains because the model uncertainty is underestimated, as shown in Figure 2b. The SVSF algorithm is therefore an effective estimation strategy for systems without an explicit model.
The SVSF is generally used for systems in which the state and measurement have the same dimension. If the state dimension is larger than the measurement dimension, “artificial” measurements need to be added [4]. For example, when estimating speed in target tracking, it is necessary to add “artificial” speed measurements.

3. Smoothing Theories and the Proposed Algorithms

Smoothing is a practical strategy that uses additional measurements made after the time of the estimated state vector to obtain better estimates than those attainable by filtering alone; unfortunately, it is often overlooked. Since the KF algorithm was proposed in 1960 [8], optimal smoothing methods have been derived from the same model using Kalman theory [55]. Smoothers are the algorithmic or analog implementations of smoothing methods. The relationship among smoother, predictor, and filter is illustrated in Figure 3. Fixed-interval, fixed-lag, and fixed-point smoothers are the most typical smoothers, since they have been developed to solve different types of application problems; the fixed-interval smoother is the one suited to trajectory optimization problems.
The SVSF is a recently developed, model-based robust state estimation method that performs well at reducing disturbances and uncertainties. However, it is less efficient at eliminating noise than the Kalman filter [10]. If its noise-canceling performance can be improved, the SVSF can be developed further and used widely in the state estimation field. Previous research indicates that the influence of noise can be eliminated effectively by following a filter with an additional smoothing process. Therefore, this paper studies a fixed-interval smoother based on the SVSF.

3.1. The Kalman Smoother

The popular Rauch–Tung–Striebel (RTS) two-pass smoother is one of the fixed-interval Kalman smoothers [55] and is labeled KS here. The KS is widely used for its fast iteration and high precision; its iterative process is described below. When the current measurements are obtained, the system state and covariance are computed by the Kalman filter in real time and are then used in the smoothing process to obtain a more accurate estimate [55].
$$\hat{x}_{k+1|k} = F \hat{x}_{k|k} \quad (11)$$

$$P_{k+1|k} = F P_{k|k} F^{T} + Q_{k+1} \quad (12)$$

$$\hat{z}_{k+1|k} = H \hat{x}_{k+1|k} \quad (13)$$

$$S_{k+1} = H P_{k+1|k} H^{T} + R_{k+1} \quad (14)$$

$$K_{k+1} = P_{k+1|k} H_{k+1}^{T} S_{k+1}^{-1} \quad (15)$$

$$\left\{\begin{aligned} \hat{x}_{k+1|k+1} &= \hat{x}_{k+1|k} + K_{k+1}\left[z_{k+1} - \hat{z}_{k+1|k}\right] \\ P_{k+1|k+1} &= \left(I - K_{k+1} H\right) P_{k+1|k} \left(I - K_{k+1} H\right)^{T} + K_{k+1} R_{k+1} K_{k+1}^{T} \end{aligned}\right. \quad (16)$$

The smoothing process runs backward from the last estimate $\hat{x}_{N|N}$ to the first one [55], and the corresponding iteration is given by:

$$\left\{\begin{aligned} \hat{x}_{k|N} &= \hat{x}_{k|k} + A_k\left(\hat{x}_{k+1|N} - \hat{x}_{k+1|k}\right) \\ A_k &= P_{k|k} F^{T} P_{k+1|k}^{-1} \end{aligned}\right. \quad (17)$$

Equations (11)–(17) summarize the Rauch–Tung–Striebel (RTS) two-pass smoother (the KS algorithm), which is an iterative process.
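To make the forward/backward structure concrete, the sketch below implements Equations (11)-(17) in NumPy. It is a minimal illustration under the assumptions of Equation (1), with variable names chosen by us.

```python
import numpy as np

def kalman_rts_smoother(zs, x0, P0, F, H, Q, R):
    """Forward Kalman filter (Eqs. (11)-(16)) followed by the RTS
    backward pass (Eq. (17))."""
    xs_f, Ps_f, xs_p, Ps_p = [], [], [], []    # filtered and predicted quantities
    x, P = np.asarray(x0, float), np.asarray(P0, float)
    I = np.eye(len(x))
    for z in zs:                               # forward pass
        x_pred = F @ x                         # Eq. (11)
        P_pred = F @ P @ F.T + Q               # Eq. (12)
        S = H @ P_pred @ H.T + R               # Eq. (14)
        K = P_pred @ H.T @ np.linalg.inv(S)    # Eq. (15)
        x = x_pred + K @ (z - H @ x_pred)      # Eq. (16), state update
        P = (I - K @ H) @ P_pred @ (I - K @ H).T + K @ R @ K.T   # Eq. (16), Joseph form
        xs_p.append(x_pred); Ps_p.append(P_pred)
        xs_f.append(x);      Ps_f.append(P)
    xs_s = list(xs_f)                          # backward pass, Eq. (17)
    for k in range(len(zs) - 2, -1, -1):
        A = Ps_f[k] @ F.T @ np.linalg.inv(Ps_p[k + 1])
        xs_s[k] = xs_f[k] + A @ (xs_s[k + 1] - xs_p[k + 1])
    return xs_s
```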

3.2. The Proposed SVSS Algorithm

The covariance does not necessarily have to be calculated in the SVSF, but it is essential for the smoother [38]. The linear system is described by Equation (1). Similar to the KF, the predicted state covariance matrix of the SVSF is:

$$P_{k+1|k} = F P_{k|k} F^{T} + Q_k \quad (18)$$

where $P_{k|k}$ is the posterior state covariance at time $k$, $F$ is the system transition matrix, and $Q_k$ is the covariance of the white process noise $w_k$, which satisfies:

$$\begin{cases} w_k \sim \mathcal{N}(0, Q_k) \\ E\{w_k\} = E\{w_k^{T}\} = 0 \\ E\{w_k w_k^{T}\} = Q_k \\ E\{w_k \tilde{x}_{k|k}^{T}\} = E\{w_k\} E\{\tilde{x}_{k|k}^{T}\} = 0 \\ E\{\tilde{x}_{k|k} w_k^{T}\} = E\{\tilde{x}_{k|k}\} E\{w_k^{T}\} = 0 \end{cases} \quad (19)$$

The predicted state estimate $\hat{x}_{k+1|k}$ is given by:

$$\hat{x}_{k+1|k} = F \hat{x}_{k|k} \quad (20)$$

According to the predicted estimate $\hat{x}_{k+1|k}$ and the real measurement $z_{k+1}$, the innovation can be expressed by:

$$e_{k+1|k} = z_{k+1} - H \hat{x}_{k+1|k} \quad (21)$$

The state estimate is updated as:

$$\hat{x}_{k+1|k+1} = \hat{x}_{k+1|k} + K_{k+1} e_{k+1|k} \quad (22)$$

where the gain in covariance form is:

$$K_{k+1} = H^{+}\,\mathrm{diag}\!\left[\left(\left|e_{k+1|k}\right| + \gamma\left|e_{k|k}\right|\right) \circ \mathrm{sat}\!\left(\bar{\psi}^{-1} e_{k+1|k}\right)\right]\left[\mathrm{diag}\!\left(e_{k+1|k}\right)\right]^{-1} \quad (23)$$

The updated measurement innovation is calculated by:

$$e_{k+1|k+1} = z_{k+1} - H \hat{x}_{k+1|k+1} \quad (24)$$

The posterior state covariance $P_{k+1|k+1}$ at time $k+1$ can be written as:

$$\begin{aligned} P_{k+1|k+1} &= E\left\{\tilde{x}_{k+1|k+1} \tilde{x}_{k+1|k+1}^{T}\right\} \\ &= E\left\{\left[\left(I - K_{k+1} H\right)\tilde{x}_{k+1|k} - K_{k+1} v_{k+1}\right]\left[\left(I - K_{k+1} H\right)\tilde{x}_{k+1|k} - K_{k+1} v_{k+1}\right]^{T}\right\} \\ &= E\left\{\left(I - K_{k+1} H\right)\tilde{x}_{k+1|k}\tilde{x}_{k+1|k}^{T}\left(I - K_{k+1} H\right)^{T} - \left(I - K_{k+1} H\right)\tilde{x}_{k+1|k} v_{k+1}^{T} K_{k+1}^{T}\right. \\ &\qquad\left. - K_{k+1} v_{k+1}\tilde{x}_{k+1|k}^{T}\left(I - K_{k+1} H\right)^{T} + K_{k+1} v_{k+1} v_{k+1}^{T} K_{k+1}^{T}\right\} \end{aligned} \quad (25)$$

Under the linear Gaussian assumption, the relationship between the measurement noise and the state estimation error is given by Equation (26):

$$\begin{cases} E\{v_{k+1}\} = E\{v_{k+1}^{T}\} = 0 \\ E\{v_{k+1} v_{k+1}^{T}\} = R_{k+1} \\ E\{\tilde{x}_{k+1|k} v_{k+1}^{T}\} = E\{\tilde{x}_{k+1|k}\} E\{v_{k+1}^{T}\} = 0 \\ E\{v_{k+1} \tilde{x}_{k+1|k}^{T}\} = E\{v_{k+1}\} E\{\tilde{x}_{k+1|k}^{T}\} = 0 \end{cases} \quad (26)$$

where $R_{k+1}$ is the measurement noise covariance matrix. In addition, $v_{k+1}$ follows a Gaussian distribution, i.e.,

$$v_{k+1} \sim \mathcal{N}\left(v; 0, R_{k+1}\right) \quad (27)$$

Substituting Equation (26) into Equation (25), the latter can be rewritten as:

$$P_{k+1|k+1} = \left(I - K_{k+1} H\right) P_{k+1|k} \left(I - K_{k+1} H\right)^{T} + K_{k+1} R_{k+1} K_{k+1}^{T} = \left(I - K_{k+1} H\right) P_{k+1|k} \quad (28)$$

Equations (18)–(28) describe the SVSF algorithm with the iterative update of the covariance matrix.
The earliest article on SVSF smoothing [48] presented a two-pass smoother (labeled SVSTPS) based on the Rauch–Tung–Striebel (RTS) two-pass smoother. Its smoothing step uses Equation (17); in other words, it is based on the SVSF gain directly. However, the resulting estimates are not satisfactory, so an SVSS using innovation sequences is proposed here. The innovation sequence is an orthogonal sequence containing the same statistical information as the original random sequence, so it is a zero-mean white noise sequence. The smoothing step of the proposed SVSS is updated with the innovation sequence according to the projection theorem (minimum mean square error criterion) and therefore performs better at eliminating Gaussian noise. The recursion relates the smoothed estimates $\hat{x}_{k|N}$ and $\hat{x}_{k|N-1}$ (with $k < N$) as follows:

$$\hat{x}_{k|N} = \hat{x}_{k|N-1} + M_{k|N-1}\left(z_N - H \hat{x}_{N|N-1}\right) = \hat{x}_{k|N-1} + \left(\prod_{i=k}^{N-1} C_i\right) B_N \left(z_N - H \hat{x}_{N|N-1}\right) \quad (29)$$

where $C_i = P_{i|i} F^{T} P_{i+1|i}^{-1}$ and $B_N = P_{N|N-1} H^{T}\left(H P_{N|N-1} H^{T} + R\right)^{-1}$.
Proof. 
When $N = k + n$ and $e_{k+n} \triangleq z_N - H \hat{x}_{N|N-1}$, Equation (29) can be written as:

$$\hat{x}_{k|k+n} = \hat{x}_{k|k+n-1} + M_{k|k+n-1}\, e_{k+n} \quad (30)$$

where the smoothing gain $M_{k|k+n-1}$ is defined by projection theory as:

$$M_{k|k+n-1} = E\left(x_k e_{k+n}^{T}\right)\left\{E\left(e_{k+n} e_{k+n}^{T}\right)\right\}^{-1} \quad (31)$$

According to Equations (1) and (21), the innovation is given by:

$$e_{k+n} = H\left(F \tilde{x}_{k+n-1|k+n-1} + w_{k+n}\right) + v_{k+n} = H \tilde{x}_{k+n|k+n-1} + v_{k+n} \quad (32)$$

Furthermore, according to Reference [56], the true state estimation error satisfies:

$$\tilde{x}_{k+n-1|k+n-1} = F(k+n-1, k)\,\tilde{x}_{k|k} + \sum_{i=k+1}^{k+n-1} F(k+n-1, i)\left\{\left[I_n - K_i H\right] w_{i-1} - K_i v_i\right\} \quad (33)$$

where $F(k+n-1, k+n-1)$ and $F(k+n-1, i)$ are defined as:

$$F(k+n-1, k+n-1) = I_n \quad (34)$$

$$F(k+n-1, i) = \left[I_n - K_{k+n-1} H\right] F \cdots \left[I_n - K_{i+1} H\right] F \quad (35)$$

Suppose $n > 0$; then, according to Equations (19) and (26), since $w_{k+n-1} \perp x_k$ and $v_{k+n} \perp x_k$, the expectation can be calculated as:

$$E\left(x_k e_{k+n}^{T}\right) = E\left[x_k \tilde{x}_{k+n-1|k+n-1}^{T}\right] F^{T} H^{T} \quad (36)$$

The true state is $x_k = \hat{x}_{k|k} + \tilde{x}_{k|k}$ with $\hat{x}_{k|k} \perp \tilde{x}_{k|k}$; substituting (33) into (36) yields:

$$E\left(x_k e_{k+n}^{T}\right) = P_{k|k} F^{T}(k+n-1, k)\, F^{T} H^{T} = P_{k|k}\left\{\prod_{i=1}^{n-1} F^{T}\left[I_n - K_{k+i} H\right]^{T}\right\} F^{T} H^{T} = P_{k|k} F^{T}\left\{\prod_{i=1}^{n-1}\left[I_n - K_{k+i} H\right]^{T} F^{T}\right\} H^{T} \quad (37)$$

From Equation (28), Equation (37) can be written as:

$$E\left(x_k e_{k+n}^{T}\right) = P_{k|k} F^{T}\left(\prod_{i=1}^{n-1} P_{k+i+1|k+i}^{-1} P_{k+i+1|k+i+1}\, F^{T}\right) H^{T} = \left(\prod_{i=k}^{k+n-1} P_{i|i} F^{T} P_{i+1|i}^{-1}\right) P_{k+n|k+n-1} H^{T} \quad (38)$$

$$E\left(e_{k+n} e_{k+n}^{T}\right) = H P_{k+n|k+n-1} H^{T} + R \quad (39)$$

Substituting Equations (38) and (39) into Equation (31) yields:

$$M_{k|k+n-1} = E\left(x_k e_{k+n}^{T}\right)\left\{E\left(e_{k+n} e_{k+n}^{T}\right)\right\}^{-1} = \left(\prod_{i=k}^{k+n-1} P_{i|i} F^{T} P_{i+1|i}^{-1}\right) P_{k+n|k+n-1} H^{T}\left(H P_{k+n|k+n-1} H^{T} + R\right)^{-1} \quad (40)$$

So Equation (30) can be written as:

$$\hat{x}_{k|k+n} = \hat{x}_{k|k+n-1} + \left(\prod_{i=k}^{k+n-1} P_{i|i} F^{T} P_{i+1|i}^{-1}\right) P_{k+n|k+n-1} H^{T}\left(H P_{k+n|k+n-1} H^{T} + R\right)^{-1} e_{k+n} \quad (41)$$

With $N = k + n$ and $e_{k+n} = z_N - H \hat{x}_{N|N-1}$, Equation (41) becomes:

$$\hat{x}_{k|N} = \hat{x}_{k|N-1} + \left(\prod_{i=k}^{N-1} C_i\right) B_N\left(z_N - H \hat{x}_{N|N-1}\right) \quad (42)$$

where $C_i = P_{i|i} F^{T} P_{i+1|i}^{-1}$ and $B_N = P_{N|N-1} H^{T}\left(H P_{N|N-1} H^{T} + R\right)^{-1}$. □
The remaining detailed derivation is given in Appendix A; the validity of this derivation can also be verified against Reference [6].
Equations (18)–(29) summarize the SVSS algorithm proposed in this paper. The pseudo-code is given as follows.

The SVSS Algorithm
Input: $\{x_0, P_0, \psi, F, Q, R, H\}$ and the measurement sequence $\{z_1, z_2, \ldots, z_N\}$
Step 1: filtering (forward pass)
  $\hat{x}_{k+1|k} = F \hat{x}_{k|k}$
  $P_{k+1|k} = F P_{k|k} F^{T} + Q_k$
  $e_{k+1|k} = z_{k+1} - H \hat{x}_{k+1|k}$
  $K_{k+1} = H^{+}\,\mathrm{diag}\!\left[\left(\left|e_{k+1|k}\right| + \gamma\left|e_{k|k}\right|\right) \circ \mathrm{sat}\!\left(\bar{\psi}^{-1} e_{k+1|k}\right)\right]\left[\mathrm{diag}\!\left(e_{k+1|k}\right)\right]^{-1}$
  $\hat{x}_{k+1|k+1} = \hat{x}_{k+1|k} + K_{k+1} e_{k+1|k}$
  $P_{k+1|k+1} = \left(I - K_{k+1} H\right) P_{k+1|k} \left(I - K_{k+1} H\right)^{T} + K_{k+1} R_{k+1} K_{k+1}^{T}$
  $e_{k+1|k+1} = z_{k+1} - H \hat{x}_{k+1|k+1}$
Step 2: smoothing (backward pass)
  $C_i = P_{i|i} F^{T} P_{i+1|i}^{-1}$
  $B_N = P_{N|N-1} H^{T}\left(H P_{N|N-1} H^{T} + R\right)^{-1}$
  $\hat{x}_{k|N} = \hat{x}_{k|N-1} + \left(\prod_{i=k}^{N-1} C_i\right) B_N\left(z_N - H \hat{x}_{N|N-1}\right)$
Output: $\{\hat{x}_{k|N}\}$
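The NumPy sketch below mirrors the pseudo-code above (forward SVSF pass with covariance, Equations (18)-(28), then innovation-based smoothing, Equation (29)). It is an illustrative reading of the algorithm, not the authors' MATLAB code; it assumes that the diagonal matrix diag(e) is invertible (non-zero innovation components), and all names are ours.

```python
import numpy as np

def svss(zs, x0, P0, F, H, Q, R, psi, gamma=0.1):
    """Smooth variable structure smoother sketch: Step 1 (SVSF filter with
    covariance) and Step 2 (fixed-interval smoothing with innovations)."""
    n = len(x0)
    Hp = np.linalg.pinv(H)
    I = np.eye(n)
    xs_f, Ps_f, xs_p, Ps_p = [], [], [], []
    x, P = np.asarray(x0, float), np.asarray(P0, float)
    e_post = np.zeros(H.shape[0])
    for z in zs:                                    # Step 1: forward SVSF pass
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q                    # Eq. (18)
        e_pred = z - H @ x_pred                     # Eq. (21)
        corr = (np.abs(e_pred) + gamma * np.abs(e_post)) \
               * np.clip(e_pred / psi, -1.0, 1.0)
        K = Hp @ np.diag(corr / e_pred)             # Eq. (23); diag(e) assumed invertible
        x = x_pred + K @ e_pred                     # Eq. (22)
        P = (I - K @ H) @ P_pred @ (I - K @ H).T + K @ R @ K.T   # Eq. (28)
        e_post = z - H @ x                          # Eq. (24)
        xs_p.append(x_pred); Ps_p.append(P_pred)
        xs_f.append(x);      Ps_f.append(P)
    N = len(zs)
    xs_s = [xf.copy() for xf in xs_f]               # Step 2: backward smoothing
    for k in range(N - 1):
        prod_C, x_s = I.copy(), xs_f[k].copy()
        for j in range(k, N - 1):                   # accumulate lags 1 .. N-1-k
            prod_C = prod_C @ (Ps_f[j] @ F.T @ np.linalg.inv(Ps_p[j + 1]))  # C_j
            B = Ps_p[j + 1] @ H.T @ np.linalg.inv(H @ Ps_p[j + 1] @ H.T + R)
            x_s = x_s + prod_C @ B @ (zs[j + 1] - H @ xs_p[j + 1])          # Eq. (29)
        xs_s[k] = x_s
    return xs_s
```

Combined with the scenario matrices of Section 4.1, this sketch can be run against kalman_rts_smoother on the same simulated measurement sequence to reproduce the kind of comparison reported below.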

4. Simulation

4.1. A Classic Target Tracking Scenario

A classic simulation scenario, frequently used in References [10,30,57], is considered here: a tracking problem in a generic air traffic control (ATC) setting in which the target position is provided by a radar system. The measurement noise follows a Gaussian distribution with a standard deviation of 200 m. As shown in Figure 4, the aircraft starts from the initial position [25,000 m, 10,000 m] at t = 0 s and flies for 125 s at a speed of 120 m/s along the x-axis and 0 m/s along the y-axis. It then turns at a rate of 3°/s for 30 s. Next, the target flies in a straight line for 125 s and maneuvers at a rate of 1°/s for 90 s. Finally, the target flies straight for 120 s until the end.
In the ATC scenario, the behavior of a civilian aircraft can be modeled by uniform motion (UM) [10,30,57], i.e.,

$$x_{k+1} = \begin{bmatrix} 1 & T & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & T \\ 0 & 0 & 0 & 1 \end{bmatrix} x_k + \begin{bmatrix} \frac{1}{2}T^{2} & 0 \\ T & 0 \\ 0 & \frac{1}{2}T^{2} \\ 0 & T \end{bmatrix} w_k \quad (43)$$

where $T$ is the sampling period and $w_k$ denotes the process noise. The state vector $x_k$ of the system is defined by:

$$x_k = \left[\eta_k, \dot{\eta}_k, \xi_k, \dot{\xi}_k\right]^{T} \quad (44)$$

where $\eta_k$ and $\xi_k$ represent the position of the aircraft along the x-axis and y-axis, and $\dot{\eta}_k$ and $\dot{\xi}_k$ represent the corresponding velocities. In a target tracking system, the radar generally provides only position measurements, without target speed measurements. Therefore, the measurement model can be expressed as:

$$z_k = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} x_k + v_k \quad (45)$$
In the simulation, the filter state is initialized as $x_0 = [25000, 80, 10000, 10]^{T}$. Moreover, the measurement noise covariance $R$, initial state covariance $P_{0|0}$, and process noise covariance $Q$ are set as follows:

$$R = 200^{2}\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \quad (46)$$

$$P_{0|0} = \mathrm{diag}\left(\left[10000, 1000, 10000, 1000\right]\right) \quad (47)$$

$$Q = L \begin{bmatrix} \frac{T^{3}}{3} & \frac{T^{2}}{2} & 0 & 0 \\ \frac{T^{2}}{2} & T & 0 & 0 \\ 0 & 0 & \frac{T^{3}}{3} & \frac{T^{2}}{2} \\ 0 & 0 & \frac{T^{2}}{2} & T \end{bmatrix} \quad (48)$$

where $L$ represents the power spectral density and is set to 0.16 [10,30,57]. The SVSF “memory” (i.e., convergence rate) is set to $\gamma = 0.1$, which is tuned based on some knowledge of the system uncertainties, such as the noise, to decrease the estimation errors [58]. The methods are coded in MATLAB, and the simulations are run on a laptop with an Intel Core i5-3230M CPU at 2.40 GHz. Five hundred Monte Carlo trials are performed for each simulation, and the estimation results are reported as the root mean square error (RMSE), the accumulative RMSE, and the average value.
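For concreteness, the snippet below assembles the scenario matrices of Equations (43)-(48) and the initial conditions described above. The sampling period T is not stated explicitly in the text, so T = 1 s is an assumption of this sketch, and the variable names are ours.

```python
import numpy as np

T = 1.0            # assumed sampling period (s); an assumption of this sketch
L = 0.16           # power spectral density, Eq. (48)

# Uniform-motion model and position-only measurement, Eqs. (43) and (45)
F = np.array([[1, T, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, T],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0]], dtype=float)

# Noise covariances and initial conditions, Eqs. (46)-(48)
R = 200.0 ** 2 * np.eye(2)
Q = L * np.array([[T**3 / 3, T**2 / 2, 0, 0],
                  [T**2 / 2, T,        0, 0],
                  [0, 0, T**3 / 3, T**2 / 2],
                  [0, 0, T**2 / 2, T       ]])
x0 = np.array([25000.0, 80.0, 10000.0, 10.0])
P0 = np.diag([10000.0, 1000.0, 10000.0, 1000.0])
psi = np.array([800.0, 800.0])   # smoothing boundary layer used in the comparison below
```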
(1) Comparison with the KF
In this simulation, the Kalman filter (KF), the Kalman smoother (KS), the robust Student’s t-based Kalman filter (RSTKF) [20], the SVSF, and the proposed SVSS are tested. The smoothing boundary layer width of the SVSF and SVSS is set to ψ = [800, 800] m. In the RSTKF, the prior parameters are set as w = v = 5, τ = 5, and N = 10 [20].
Figure 4 shows the tracked position trajectory of one experiment; the closer the tracked trajectory is to the real trajectory, the more accurate the tracking method. The trajectory of the SVSS is closer to the true trajectory than that of the SVSF, so the SVSS reduces the chattering phenomenon. The state estimation results of the KF, KS, SVSF, and the proposed SVSS are shown in Figure 5 and Table 1. Compared with the RSTKF, SVSF, and SVSS, the KF and KS exhibit better performance when the system model matches the target motion. But when the target motion model changes, the tracking errors of the KF and KS increase dramatically, while the estimates of the RSTKF, SVSF, and SVSS remain accurate. In Figure 5, the aircraft flies in a straight line after 155 s, yet the RMSE of the SVSF does not decrease, because the SVSF cannot directly estimate the velocity information and therefore makes no corrections to it; how to use “artificial” velocity measurements to correct the velocity information is described later. As shown in Figure 5 and Table 1, for both the KF and the SVSF, the tracking errors of the smoothers are smaller than those of the filters. In addition, Table 1 shows that the position accumulative RMSE of the SVSS is the lowest compared with the KF, KS, RSTKF, and SVSF. The SVSS improves the tracking accuracy by about 22% compared with the SVSF and RSTKF, while the KS improves it by only about 7% compared with the KF. The reason is that the SVSF is weaker than the KF at suppressing noise, so the SVSS can eliminate more noise from the state estimates than the KS. The KS and SVSS consume similar time in the smoothing process; the SVSF takes slightly longer than the KF, and the RSTKF takes the most time. To sum up, the SVSS has better robustness and higher tracking accuracy than the other methods.
(2) Results under different smoothing boundary layer widths
Figure 6 and Table 2 show the position accumulative RMSE under different smoothing boundary layer widths. The SVSS performs better than the SVSF, whose ability to eliminate noise is almost the same for all boundary layer widths. The smoothing boundary layer width ψ is an important parameter of the SVSF: if it is not set properly, both the accuracy and the stability of the filter deteriorate, and the accumulative RMSE of the SVSF estimate is high when the boundary layer is too large or too small. As shown in Figure 6, when the smoothing boundary layer ψ is set to 1000 m, the proposed method performs best under measurement noise with a standard deviation of 200 m. These parameters are selected based on the distributions of the process and measurement noises; generally, the smoothing boundary layer width ψ is set to about five times the maximum measurement noise amplitude. The SVSF “memory” or convergence rate γ (0 ≤ γ < 1) is related to the rate of decay α by α = −(1/τ) ln(γ), where τ is the sampling time [32]. As also shown in Figure 6, when the smoothing boundary layer is wide, the accumulated position RMSE of the SVSS differs between the x-axis and the y-axis; the reason is the same as above, namely that the velocity information is not corrected.
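As a small worked example of the decay relation quoted above, the helper below inverts α = −(1/τ) ln(γ) to pick γ for a desired decay rate; it is an illustrative utility, not part of the paper.

```python
import numpy as np

def gamma_from_decay(alpha, tau):
    """Return the SVSF 'memory' gamma for a desired decay rate alpha,
    inverting alpha = -(1/tau) * ln(gamma) [32]."""
    return float(np.exp(-alpha * tau))

# Example: with a 1 s sampling time, alpha ~ 2.30 corresponds to gamma ~ 0.1,
# the value used in the simulations above.
print(gamma_from_decay(2.3026, 1.0))
```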
(3) Results under different fixed-lag intervals
Simulations under different fixed-lag intervals are also considered. The smoothing boundary layer width is ψ = [800, 800] m, and the other experimental parameters are the same as above.
Figure 7 shows the estimation results of the SVSF and of the SVSS with three different lags; the performance with two and three lags is better than with one lag. Compared with two lags, three lags improve the tracking accuracy slightly when the target moves with constant velocity, but when the target makes a turning maneuver, the accuracy with three lags is lower than with two lags. In fact, when the system model is consistent with the actual motion, the proposed SVSS, which is derived under linear Gaussian assumptions, gains estimation accuracy as the smoothing lag increases. However, if the models are inconsistent, the performance of the SVSS will probably degrade and become unstable as the smoothing lag increases, because the innovations used by the SVSS then contain more modeling error. The computational complexity also grows with the lag: the complexity of $n$-lag smoothing is $O\left(\frac{n(n-1)}{2}(3+m)i^{3} + 2nji^{2} + 2nij^{2} + nmj^{3}\right)$; the detailed derivation is given in Appendix B, and the simulation time costs of different lags are shown in Table A2 of Appendix B. Users can therefore choose the appropriate number of lag steps according to the needs of the system.

4.2. A Complex Maneuvering Environment Scenario

In this scenario, the maneuvers of the target are more complicated, with motion modes including uniform motion, turning motion, uniformly accelerated motion, and angularly accelerated motion. The initial position of the aircraft is again [25,000 m, 10,000 m]. The target flies at a high speed of 300 m/s along the x-axis and 250 m/s along the y-axis for 20 s. The aircraft then maneuvers at a rate of 3°/s for 55 s. Next, the target turns with an initial angular velocity of 1°/s and an angular acceleration of 0.5°/s² for 25 s. Finally, the target continues to fly with an acceleration of 20 m/s² along the x-axis and 10 m/s² along the y-axis for 25 s.
The measurement noise follows a Gaussian distribution with a standard deviation of 100 m. The process and measurement noises ($w$ and $v$) are zero-mean Gaussian with covariances $Q$ and $R$, respectively. The initial state covariance $P_{0|0}$, measurement noise covariance $R$, and process noise covariance $Q$ are defined, respectively, as follows:
$$R = 100^{2}\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \quad (49)$$

$$P_{0|0} = \mathrm{diag}\left(\left[400, 100, 400, 100\right]\right) \quad (50)$$

where $Q$ is given by Equation (48) with $L$ set to 15. The SVSF and its smoothers use the UM model in the estimation process; the smoothing boundary layer widths are set to $\psi = [500, 600, 500, 600]^{T}$, and the SVSF “memory” or convergence rate is $\gamma = 0.1$. To meet the practical demand, the velocity is also estimated. As discussed earlier, the system then needs to add “artificial” velocity measurements: the measurement matrix of Equation (45) must be transformed into a square matrix (i.e., the identity), and the “artificial” velocity measurements can be calculated from the position measurements. The augmented measurement $y_k$, which contains the artificial velocity measurements, is:

$$y_k = \left[z_{1,k},\; \left(z_{1,k} - z_{1,k-1}\right)/T,\; z_{2,k},\; \left(z_{2,k} - z_{2,k-1}\right)/T\right]^{T} \quad (51)$$

The accuracy of Equation (51) depends on the sampling period $T$. A total of 500 Monte Carlo runs were performed.
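The short helper below shows one way to form the augmented measurement of Equation (51) from two consecutive position measurements; it is an illustrative sketch (function name ours), and with it the measurement matrix H becomes the 4 × 4 identity.

```python
import numpy as np

def augment_with_velocity(z_k, z_km1, T):
    """Build the 'artificial' measurement of Eq. (51): the position measurement
    z_k = [x_pos, y_pos] is augmented with finite-difference pseudo-velocities."""
    return np.array([z_k[0], (z_k[0] - z_km1[0]) / T,
                     z_k[1], (z_k[1] - z_km1[1]) / T])

# Example usage: y_k = augment_with_velocity(zs[k], zs[k - 1], T); H_aug = np.eye(4)
```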
The results of the different estimation methods are shown in the following figures and in Table 3. The KS algorithm has the best performance when the model matches the actual motion (0 s to 20 s), and the position and velocity RMSEs of the KS are significantly smaller than those of the two SVSF smoothers. Once the models no longer match, for example when the high-speed target changes from uniform motion to a turn, an angular acceleration, or an acceleration, the KS becomes unstable and its RMSE increases, but the SVSTPS and SVSS are still able to overcome the uncertainties and retain high accuracy and robustness. Table 3 shows that the position (velocity) accumulative RMSE of the SVSTPS and the SVSS improves on that of the SVSF by about 21% (15%) and 31% (33%), respectively. This is because the SVSTPS uses the SVSF gain, whereas the proposed SVSS uses the innovation, which contains as much of the noise as the original measurements and is therefore better suited to the smoothing process; thus, more noise can be eliminated by the SVSS. However, there is a special case in which the SVSS does not perform as well as the SVSTPS, namely when modeling errors affect it more than Gaussian noise. For example, at around 95 s in Figure 8 and Figure 9, the target flies at high speed with an angular velocity of 12°/s and an angular acceleration of 0.5°/s², and the accuracy of the SVSS is lower than that of the SVSTPS. Overall, the SVSS exhibits the best performance among the above filters.

5. Conclusions

An SVSS algorithm is presented to improve the accuracy of SVSF state estimation in dynamic systems with model uncertainty. Based on projection theory and the SVSF, the smoothing recursion of the SVSS is derived using the innovation. Comparisons among the SVSS, KS, and SVSTPS are analyzed in aircraft trajectory tracking scenarios. According to the simulation results, the proposed SVSS performs better than the SVSF under different smoothing boundary layers. In addition, compared with the popular KS algorithm, the SVSS is able to overcome inaccuracies and yield a stable solution in the presence of modeling uncertainties. Theory and simulations also show that the proposed SVSS has higher accuracy and better robustness than the SVSTPS.

Author Contributions

Data processing and writing, Y.C.; supervision, L.X.; writing—review and editing, B.Y. and C.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Natural Science Foundation of China under Grant 61701383, the Natural Science Basic Research Plan in Shaanxi Province of China under Grant 2018JQ6100 and China Postdoctoral Science Foundation under Grant 2019M663633.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

The proof is based on orthogonal projection theory.

Appendix A.1. Definition 1

Given a random measurement vector $z \in \mathbb{R}^{m}$, the estimate $\hat{x} \in \mathbb{R}^{n}$, which is the linear minimum-variance estimate of $x \in \mathbb{R}^{n}$, is a linear function of $z$ that minimizes the index $J$ given by:

$$J = E\left[(x - \hat{x})^{T}(x - \hat{x})\right] \quad (\mathrm{A1})$$

where $E$ is the expectation operator and $T$ denotes the transpose.

Appendix A.2. Definition 2

The linear minimum-variance estimate is given by the following formulas:

$$\hat{x} = E(x) + P_{xz} P_{zz}^{-1}\left(z - E(z)\right) \quad (\mathrm{A2})$$

$$P_{xz} = E\left[\left(x - E(x)\right)\left(z - E(z)\right)^{T}\right] \quad (\mathrm{A3})$$

$$P_{zz} = E\left[\left(z - E(z)\right)\left(z - E(z)\right)^{T}\right] \quad (\mathrm{A4})$$

Property 1.

$$E(\hat{x}) = E(x) \quad (\mathrm{A5})$$

Property 2.

$$E\left[(x - \hat{x}) z^{T}\right] = 0 \quad (\mathrm{A6})$$

Property 3.
Since $x - \hat{x}$ and $z$ are uncorrelated random variables, written $x - \hat{x} \perp z$, $\hat{x}$ is called the projection of $x$ on $z$:

$$\hat{x} = \mathrm{proj}(x \mid z) \quad (\mathrm{A7})$$

Appendix A.3. Definition 3

Given random variables $z_1, z_2, \ldots, z_k \in \mathbb{R}^{m}$, the linear minimum-variance estimate $\hat{x}$ of the random variable $x \in \mathbb{R}^{n}$ is defined as:

$$\hat{x} = \mathrm{proj}(x \mid w) \triangleq \mathrm{proj}\left(x \mid z_1, z_2, \ldots, z_k\right) \quad (\mathrm{A8})$$

The estimate $\hat{x}$ is called the projection of $x$ on $L(w)$ or $L(z_1, z_2, \ldots, z_k)$.

Appendix A.4. Definition 4

If the random variables $z_1, z_2, \ldots, z_k \in \mathbb{R}^{m}$ have second moments, the innovation is defined as:

$$e_k = z_k - \mathrm{proj}\left(z_k \mid z_1, z_2, \ldots, z_{k-1}\right), \quad k = 1, 2, \ldots \quad (\mathrm{A9})$$

The predicted estimate $\hat{z}_{k|k-1}$ is defined as:

$$\hat{z}_{k|k-1} = \mathrm{proj}\left(z_k \mid z_1, z_2, \ldots, z_{k-1}\right) \quad (\mathrm{A10})$$

$$e_k = z_k - \hat{z}_{k|k-1}, \quad k = 1, 2, \ldots \quad (\mathrm{A11})$$

$$e_k = z_k - H \hat{x}_{k|k-1} = H\left(F \tilde{x}_{k-1|k-1} + w\right) + v \quad (\mathrm{A12})$$

$$\mathrm{proj}\left(x \mid z_1, z_2, \ldots, z_k\right) = \mathrm{proj}\left(x \mid e_1, e_2, \ldots, e_k\right) \quad (\mathrm{A13})$$

Appendix A.5. Definition 5

For the estimate of the random variable $x$, when the random variables $z_1, z_2, \ldots, z_k \in \mathbb{R}^{m}$ have second moments, the following recursive projection formula (A14) holds (projection recurrence theorem):

$$\mathrm{proj}\left(x \mid z_1, z_2, \ldots, z_k\right) = \mathrm{proj}\left(x \mid z_1, z_2, \ldots, z_{k-1}\right) + E\left(x e_k^{T}\right)\left\{E\left(e_k e_k^{T}\right)\right\}^{-1} e_k \quad (\mathrm{A14})$$
The derivation of the SVSS algorithm is based on the above theory.
(1) One-lag fixed interval
For the linear Gaussian system given by Equation (1), under the linear minimum mean square error criterion, the state estimate of the SVSS is given by:

$$\hat{x}_{k|k+1} = \hat{x}_{k|k} + M_{k|k+1}\, e_{k+1} \quad (\mathrm{A15})$$

$$M_{k|k+1} = P_{k|k} F^{T} H^{T}\left(H P_{k+1|k} H^{T} + R\right)^{-1} \quad (\mathrm{A16})$$

From (A2) to (A15), the state estimate is obtained as:

$$\hat{x}_{k|k+1} = \hat{x}_{k|k} + P_{xz}(k+1)\, P_{zz}^{-1}(k+1)\, e_{k+1} \quad (\mathrm{A17})$$

$$P_{xz}(k+1) = E\left(x_k e_{k+1}^{T}\right) \quad (\mathrm{A18})$$

$$P_{xz}(k+1) = E\left(x_k e_{k+1}^{T}\right) = E\left(x_k\left(H\left(F \tilde{x}_{k|k} + w\right) + v\right)^{T}\right) = P_{k|k} F^{T} H^{T} \quad (\mathrm{A19})$$

$$P_{zz}(k+1) = E\left(e_{k+1} e_{k+1}^{T}\right) = H P_{k+1|k} H^{T} + R \quad (\mathrm{A20})$$

$$\hat{x}_{k|k+1} = \hat{x}_{k|k} + P_{k|k} F^{T} H^{T}\left(H P_{k+1|k} H^{T} + R\right)^{-1} e_{k+1} \quad (\mathrm{A21})$$
(2) Two-lag fixed interval
As with one lag, the state estimate of the SVSS is given by:

$$\hat{x}_{k|k+2} = \hat{x}_{k|k+1} + M_{k|k+2}\, e_{k+2} \quad (\mathrm{A22})$$

$$\hat{x}_{k|k+2} = \hat{x}_{k|k+1} + P_{xz}(k+2)\, P_{zz}^{-1}(k+2)\, e_{k+2} \quad (\mathrm{A23})$$

$$P_{xz}(k+2) = E\left(x_k e_{k+2}^{T}\right) = E\left(x_k\left(H\left(F \tilde{x}_{k+1|k+1} + w\right) + v\right)^{T}\right) = P_{k|k} F^{T} P_{k+1|k}^{-1} P_{k+1|k+1} F^{T} H^{T} \quad (\mathrm{A24})$$

$$P_{zz}(k+2) = E\left(e_{k+2} e_{k+2}^{T}\right) = H P_{k+2|k+1} H^{T} + R \quad (\mathrm{A25})$$

$$M_{k|k+2} = P_{k|k} F^{T} P_{k+1|k}^{-1} P_{k+1|k+1} F^{T} H^{T}\left(H P_{k+2|k+1} H^{T} + R\right)^{-1} \quad (\mathrm{A26})$$

According to (A23)–(A26), the state estimate of the SVSS is calculated by:

$$\hat{x}_{k|k+2} = \hat{x}_{k|k+1} + P_{k|k} F^{T} P_{k+1|k}^{-1} P_{k+1|k+1} F^{T} H^{T}\left(H P_{k+2|k+1} H^{T} + R\right)^{-1} e_{k+2} \quad (\mathrm{A27})$$
(3) n-lag fixed interval

$$\hat{x}_{k|k+n} = \hat{x}_{k|k+n-1} + M_{k|k+n-1}\, e_{k+n} \quad (\mathrm{A28})$$

$$\hat{x}_{k|k+n} = \hat{x}_{k|k+n-1} + P_{xz}(k+n)\, P_{zz}^{-1}(k+n)\, e_{k+n} \quad (\mathrm{A29})$$

$$e_{k+n} = H\left(F \tilde{x}_{k+n-1|k+n-1} + w\right) + v \quad (\mathrm{A30})$$

$$P_{xz}(k+n) = E\left(x_k e_{k+n}^{T}\right) = E\left(x_k\left(H\left(F \tilde{x}_{k+n-1|k+n-1} + w\right) + v\right)^{T}\right) \quad (\mathrm{A31})$$

where $P_{xz}(k+n)$ is obtained from the orthogonality of projection theory:

$$P_{xz}(k+n) = P_{k|k}\left(\prod_{i=1}^{n-1} F^{T} P_{k+i+1|k+i}^{-1} P_{k+i+1|k+i+1}\right) F^{T} H^{T} = P_{k|k} F^{T}\left(\prod_{i=1}^{n-1} P_{k+i+1|k+i}^{-1} P_{k+i+1|k+i+1}\, F^{T}\right) H^{T} = \left(\prod_{i=k}^{k+n-1} P_{i|i} F^{T} P_{i+1|i}^{-1}\right) P_{k+n|k+n-1} H^{T} \quad (\mathrm{A32})$$

$$M_{k|k+n} = \left(\prod_{i=k}^{k+n-1} P_{i|i} F^{T} P_{i+1|i}^{-1}\right) P_{k+n|k+n-1} H^{T}\left(H P_{k+n|k+n-1} H^{T} + R\right)^{-1} \quad (\mathrm{A33})$$

From (A29) to (A33), the $n$-lag state estimate of the SVSS is calculated by:

$$\hat{x}_{k|k+n} = \hat{x}_{k|k+n-1} + \left(\prod_{i=k}^{k+n-1} P_{i|i} F^{T} P_{i+1|i}^{-1}\right) P_{k+n|k+n-1} H^{T}\left(H P_{k+n|k+n-1} H^{T} + R\right)^{-1} e_{k+n} \quad (\mathrm{A34})$$

Equation (A34) can be written as:

$$\hat{x}_{k|N} = \hat{x}_{k|N-1} + \left(\prod_{i=k}^{N-1} C_i\right) B_N\left(z_N - H \hat{x}_{N|N-1}\right) \quad (\mathrm{A35})$$

where $C_i = P_{i|i} F^{T} P_{i+1|i}^{-1}$ and $B_N = P_{N|N-1} H^{T}\left(H P_{N|N-1} H^{T} + R\right)^{-1}$.

Appendix B

The smoothing process is expressed by Equation (42). Let the state and measurement dimensions be $i$ and $j$, respectively; the complexity of the matrix computations is shown in Table A1.

Table A1. The complexity of matrix computation.

  $PF$: $O(i^{3})$            $C_i$: $O(2i^{3} + mi^{3})$
  $PH$: $O(ji^{2})$           $B_N$: $O(2ji^{2} + 2ij^{2} + mj^{3})$
  $P^{-1}$: $O(mi^{3})$

Ignoring the terms with low computational complexity, the computational complexity of one-lag smoothing is $O(i^{3} + 2ji^{2} + mj^{3})$.
The computational complexity of $n$-lag smoothing is $O\left(\frac{n(n-1)}{2}(3+m)i^{3} + 2nji^{2} + 2nij^{2} + nmj^{3}\right)$. The implementation times of the SVSS under different fixed-lag intervals are shown in Table A2.
Table A2. Implementation times of the SVSS under different fixed-lag intervals.

  Method:                       SVSF    One lag    Two lags    Three lags
  Single-step run time (μs):    48      72         120         180

References

  1. Di Viesti, P.; Vitetta, G.M.; Sirignano, E. Double Bayesian Smoothing as Message Passing. IEEE Trans. Signal Process. 2019, 67, 5495–5510. [Google Scholar] [CrossRef]
  2. Gao, R.; Tronarp, F.; Sarkka, S. Iterated Extended Kalman Smoother-Based Variable Splitting for L-1-Regularized State Estimation. IEEE Trans. Signal Process. 2019, 67, 5078–5092. [Google Scholar] [CrossRef] [Green Version]
  3. Wang, Y.H.; Zhang, H.B.; Mao, X.; Li, Y. Accurate Smoothing Methods for State Estimation of Continuous-Discrete Nonlinear Dynamic Systems. IEEE Trans. Autom. Control 2019, 64, 4284–4291. [Google Scholar] [CrossRef]
  4. Huang, Y.; Zhang, Y.; Li, N.; Chambers, J. Robust Student’st based nonlinear filter and smoother. IEEE Trans. Aerosp. Electron. Syst. 2016, 52, 2586–2596. [Google Scholar] [CrossRef] [Green Version]
  5. Rauch, H.E. Linear Estimation of Sampled Stochastic Processes with Random Parameters; Stanford Univ Ca Stanford Electronics Labs: Stanford, CA, USA, 1962. [Google Scholar]
  6. Rauch, H. Solutions to the linear smoothing problem. IEEE Trans. Autom. Control 1963, 8, 371–372. [Google Scholar] [CrossRef]
  7. Huang, Y.; Zhang, Y.; Zhao, Y.; Mihaylova, L.; Chambers, J. Robust Rauch-Tung-Striebel smoothing framework for heavy-tailed and/or skew noises. IEEE Trans. Aerosp. Electron. Syst. 2020, 56, 415–441. [Google Scholar] [CrossRef]
  8. Kalman, R.E. A new approach to linear filtering and prediction problems. J. Basic Eng. 1960, 82, 35–45. [Google Scholar] [CrossRef] [Green Version]
  9. Munguia, R.; Urzua, S.; Grau, A. EKF-Based Parameter Identification of Multi-Rotor Unmanned Aerial VehiclesModels. Sensors 2019, 19, 4174. [Google Scholar] [CrossRef] [Green Version]
  10. Gadsden, S.A.; Habibi, S.; Kirubarajan, T. Kalman and Smooth Variable Structure Filters for Robust Estimation. IEEE Trans. Aerosp. Electron. Syst. 2014, 50, 1038–1050. [Google Scholar] [CrossRef]
  11. Arasaratnam, I.; Haykin, S. Cubature kalman filters. IEEE Trans. Autom. Control 2009, 54, 1254–1269. [Google Scholar] [CrossRef] [Green Version]
  12. Gadsden, S.A.; Al-Shabi, M.; Arasaratnam, I.; Habibi, S.R. Combined cubature Kalman and smooth variable structure filtering: A robust nonlinear estimation strategy. Signal Process. 2014, 96, 290–299. [Google Scholar] [CrossRef]
  13. Zhang, K.W.; Hao, G.; Sun, S.L. Weighted Measurement Fusion Particle Filter for Nonlinear Systems with Correlated Noises. Sensors 2018, 18, 3242. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Zhou, H.; Xia, Y.; Deng, Y. A new particle filter based on smooth variable structure filter. Int. J. Adapt. Control Signal Process. 2020, 34, 32–41. [Google Scholar] [CrossRef]
  15. Huang, Y.; Zhang, Y.; Shi, P.; Wu, Z.; Qian, J.; Chambers, J.A. Robust Kalman filters based on Gaussian scale mixture distributions with application to target tracking. IEEE Trans. Syst. Man Cybern. Syst. 2019, 49, 2082–2096. [Google Scholar] [CrossRef]
  16. Huang, Y.; Zhang, Y.; Xu, B.; Wu, Z.; Chambers, J. A new outlier-robust Student’s t based Gaussian approximate filter for cooperative localization. IEEE Trans. Mechatron. 2017, 22, 2380–2386. [Google Scholar] [CrossRef] [Green Version]
  17. Huang, Y.; Zhang, Y.; Zhao, Y.; Chambers, J.A. A Novel Robust Gaussian–Student’s t Mixture Distribution Based Kalman Filter. IEEE Trans. Signal Process. 2019, 67, 3606–3620. [Google Scholar] [CrossRef]
  18. Huang, Y.; Zhang, Y.; Chambers, J.A. A Novel Kullback–Leibler Divergence Minimization-Based Adaptive Student’s t-Filter. IEEE Trans. Signal Process. 2019, 67, 5417–5432. [Google Scholar] [CrossRef]
  19. Huang, Y.; Zhang, Y.; Wu, Z.; Li, N.; Chambers, J. A novel adaptive Kalman filter with inaccurate process and measurement noise covariance matrices. IEEE Trans. Autom. Control 2017, 63, 594–601. [Google Scholar] [CrossRef] [Green Version]
  20. Huang, Y.; Zhang, Y.; Li, N.; Wu, Z.; Chambers, J.A. A Novel Robust Student’st-Based Kalman Filter. IEEE Trans. Aerosp. Electron. Syst. 2017, 53, 1545–1554. [Google Scholar] [CrossRef] [Green Version]
  21. Wang, H.; Li, H.; Fang, J.; Wang, H. Robust Gaussian Kalman filter with outlier detection. IEEE Signal Process. Lett. 2018, 25, 1236–1240. [Google Scholar] [CrossRef]
  22. Skoglund, M.A.; Hendeby, G.; Axehill, D. Extended Kalman filter modifications based on an optimization view point. In Proceedings of the 2015 18th International Conference on Information Fusion (Fusion), Washington, DC, USA, 6–9 July 2015; pp. 1856–1861. [Google Scholar]
  23. Bakhshande, F.; Söffker, D. Adaptive Step Size Control of Extended/Unscented Kalman Filter using Event Handling Concept. Front. Mech. Eng. 2019, 5, 74. [Google Scholar] [CrossRef] [Green Version]
  24. Huang, Y.; Zhang, Y.; Xu, B.; Wu, Z.; Chambers, J.A. A new adaptive extended Kalman filter for cooperative localization. IEEE Trans. Aerosp. Electron. Syst. 2017, 54, 353–368. [Google Scholar] [CrossRef]
  25. Milanese, M.; Tempo, R. Optimal algorithms theory for robust estimation and prediction. IEEE Trans. Autom. Control 1985, 30, 730–738. [Google Scholar] [CrossRef]
  26. Zhuk, S.M. Minimax state estimation for linear discrete-time differential-algebraic equations. Automatica 2010, 46, 1785–1789. [Google Scholar] [CrossRef] [Green Version]
  27. Farrell, W.J. Interacting multiple model filter for tactical ballistic missile tracking. IEEE Trans. Aerosp. Electron. Syst. 2008, 44, 418–426. [Google Scholar] [CrossRef]
  28. Mazor, E.; Averbuch, A.; Bar-Shalom, Y.; Dayan, J. Interacting multiple model methods in target tracking: A survey. IEEE Trans. Aerosp. Electron. Syst. 1998, 34, 103–123. [Google Scholar] [CrossRef]
  29. Li, X.-R.; Bar-Shalom, Y. Multiple-model estimation with variable structure. IEEE Trans. Autom. Control 1996, 41, 478–493. [Google Scholar]
  30. Li, X.R.; Jilkov, V.P. Survey of maneuvering target tracking. Part V. Multiple-model methods. IEEE Trans. Aerosp. Electron. Syst. 2005, 41, 1255–1321. [Google Scholar]
  31. Goodman, J.M.; Wilkerson, S.A.; Eggleton, C.; Gadsden, S.A. A Multiple Model Adaptive SVSF-KF Estimation Strategy. In Proceedings of the Signal Processing, Sensor/Information Fusion, and Target Recognition XXVIII, Baltimore, MD, USA, 15–17 April 2019. [Google Scholar]
  32. Habibi, S. The smooth variable structure filter. Proc. IEEE 2007, 95, 1026–1059. [Google Scholar] [CrossRef]
  33. Lee, A.; Gadsden, S.A.; Wilkerson, S.A. An Adaptive Smooth Variable Structure Filter based on the Static Multiple Model Strategy. In Proceedings of the Signal Processing, Sensor/Information Fusion, and Target Recognition XXVIII, Baltimore, MD, USA, 15–17 April 2019. [Google Scholar]
  34. Gadsden, S.A.; Habibi, S.R. A New Robust Filtering Strategy for Linear Systems. J. Dyn. Syst. Meas. Control Trans. 2013, 135, 014503. [Google Scholar] [CrossRef]
  35. Afshari, F.H.; Gadsden, S.A.; Habibi, S. A nonlinear second-order filtering strategy for state estimation of uncertain systems. Signal Process. 2019, 155, 182–192. [Google Scholar] [CrossRef]
  36. Zhou, H.; Xu, L.; Chen, W.; Guo, K.; Shen, F.; Guo, Z. A novel robust filtering strategy for systems with Non-Gaussian noises. AEU Int. J. Electron. Commun. 2018, 97, 154–164. [Google Scholar] [CrossRef]
  37. Spiller, M.; Bakhshande, F.; Söffker, D. The uncertainty learning filter: A revised smooth variable structure filter. Signal Process. 2018, 152, 217–226. [Google Scholar] [CrossRef]
  38. Gadsden, S.A.; Habibi, S.R. A new form of the smooth variable structure filter with a covariance derivation. In Proceedings of the 49th IEEE Conference on Decision and Control (CDC), Atlanta, GA, USA, 15–17 December 2010; pp. 7389–7394. [Google Scholar]
  39. Demim, F.; Nemra, A.; Louadj, K.; Hamerlain, M.; Bazoula, A. Cooperative SLAM for multiple UGVs navigation using SVSF filter. Automatika 2017, 58, 119–129. [Google Scholar] [CrossRef] [Green Version]
  40. Demim, F.; Nemra, A.; Boucheloukh, A.; Kobzili, E.; Hamerlain, M.; Bazoula, A. SLAM based on Adaptive SVSF for Cooperative Unmanned Vehicles in Dynamic environment. IFAC Pap. 2019, 52, 73–80. [Google Scholar] [CrossRef]
  41. Kim, T.; Wang, Y.; Sahinoglu, Z.; Wada, T.; Hara, S.; Qiao, W. State of charge estimation based on a realtime battery model and iterative smooth variable structure filter. In Proceedings of the 2014 IEEE Innovative Smart Grid Technologies-Asia (ISGT ASIA), Kuala Lumpur, Malaysia, 20–23 May 2014; pp. 132–137. [Google Scholar]
  42. Gadsden, S.A.; Al-Shabi, M.; Habibi, S. Estimation strategies for the condition monitoring of a battery system in a hybrid electric vehicle. ISRN Signal Process. 2011, 2011, 120351. [Google Scholar] [CrossRef] [Green Version]
  43. Gadsden, S.A.; Song, Y.; Habibi, S.R. Novel Model-Based Estimators for the Purposes of Fault Detection and Diagnosis. IEEE Trans. Mechatron. 2013, 18, 1237–1249. [Google Scholar] [CrossRef]
  44. Gadsden, S.A.; Kirubarajan, T. Development of a Variable Structure-Based Fault Detection and Diagnosis Strategy Applied to an Electromechanical System. In Proceedings of the Signal Processing, Sensor/Information Fusion, and Target Recognition XXVI, Anaheim, CA, USA, 10–12 April 2017. [Google Scholar]
  45. Ahmed, R.; El Sayed, M.; Gadsden, S.A.; Tjong, J.; Habibi, S. Automotive Internal-Combustion-Engine Fault Detection and Classification Using Artificial Neural Network Techniques. IEEE Trans. Veh. Technol. 2015, 64, 21–33. [Google Scholar] [CrossRef]
  46. Ahmed, R.; El Sayed, M.; Gadsden, S.A.; Tjong, J.; Habibi, S. Artificial neural network training utilizing the smooth variable structure filter estimation strategy. Neural Comput. Appl. 2016, 27, 537–554. [Google Scholar] [CrossRef]
  47. Ismail, M.; Attari, M.; Habibi, S.; Ziada, S. Estimation theory and Neural Networks revisited: REKF and RSVSF as optimization techniques for Deep-Learning. Neural Netw. 2018, 108, 509–526. [Google Scholar] [CrossRef]
  48. Gadsden, S.A.; Lee, A.S. Advances of the smooth variable structure filter: Square-root and two-pass formulations. J. Appl. Remote Sens. 2017, 11, 015018. [Google Scholar] [CrossRef]
  49. Gadsden, S.; Al-Shabi, M.; Kirubarajan, T. Two-Pass Smoother Based on the SVSF Estimation Strategy; SPIE: Washington, WA, USA, 2015; Volume 9474. [Google Scholar]
  50. Yan, B.; Xu, N.; Wang, G.; Yang, S.; Xu, L.P. Detection of Multiple Maneuvering Extended Targets by Three-Dimensional Hough Transform and Multiple Hypothesis Tracking. IEEE Access 2019, 7, 80717–80732. [Google Scholar] [CrossRef]
  51. Yan, B.; Xu, N.; Zhao, W.B.; Li, M.Q.; Xu, L.P. An Efficient Extended Targets Detection Framework Based on Sampling and Spatio-Temporal Detection. Sensors 2019, 19, 2912. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  52. Yan, B.; Xu, N.; Zhao, W.-B.; Xu, L.-P. A Three-Dimensional Hough Transform-Based Track-Before-Detect Technique for Detecting Extended Targets in Strong Clutter Backgrounds. Sensors 2019, 19, 881. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  53. Yan, B.; Zhao, X.Y.; Xu, N.; Chen, Y.; Zhao, W.B. A Grey Wolf Optimization-Based Track-Before-Detect Method for Maneuvering Extended Target Detection and Tracking. Sensors 2019, 19, 1577. [Google Scholar] [CrossRef] [Green Version]
  54. Habibi, S.R.; Burton, R. The variable structure filter. J. Dyn. Syst. Meas. Control Trans. 2003, 125, 287–293. [Google Scholar] [CrossRef]
  55. Grewal, M.; Andrews, A. Kalman Filtering: Theory and Practice with MATLAB, 4th ed.; John Wiley & Sons: Hoboken, NJ, USA, 2015. [Google Scholar]
  56. Sun, S.L.; Ma, J. Optimal filtering and smoothing for discrete-time stochastic singular systems. Signal Process. 2007, 87, 189–201. [Google Scholar] [CrossRef]
  57. Bar-Shalom, Y.; Li, X.R.; Kirubarajan, T. Estimation with Applications to Tracking and Navigation: Theory Algorithms and Software; John Wiley & Sons: Hoboken, NJ, USA, 2004. [Google Scholar]
  58. Gadsden, S.A. Smooth Variable Structure Filtering: Theory and Applications. Ph.D. Thesis, McMaster University, Hamilton, ON, Canada, 2011. [Google Scholar]
Figure 1. SVSF estimation concept [32].
Figure 2. Smoothing boundary layer concept [10,32]: (a) smoothed estimated trajectory (ψ > β); (b) presence of chattering effect (ψ < β).
Figure 3. Estimate/measurement timing constraints [55].
Figure 4. Position trajectory of one experiment.
Figure 5. Position RMSE on the x-axis and y-axis (m): (a) position RMSE on the x-axis; (b) position RMSE on the y-axis.
Figure 6. The position accumulative RMSE for different ψ on the x-axis and y-axis (m).
Figure 7. Comparison of SVSF-based smoothers with different lags: (a) position RMSE on the x-axis; (b) position RMSE on the y-axis.
Figure 8. Average position trajectory.
Figure 9. RMSE of state estimation (m): (a) position RMSE on the x-axis; (b) position RMSE on the y-axis; (c) velocity RMSE on the x-axis; (d) velocity RMSE on the y-axis.
Table 1. The position accumulative RMSE on the x-axis and y-axis (m).

  Method:                                        KF     KS     RSTKF    SVSF    SVSS
  Position accumulative RMSE on x-axis (m):      413    387    145      146     113
  Position accumulative RMSE on y-axis (m):      413    387    144      148     117
  Single-step run time (μs):                     44     68     1192     48      72
Table 2. The position accumulative RMSE on the x-axis and y-axis (m) for different smoothing boundary layer widths.

  Smoothing boundary layer width (m):               100    500    1000    1500    2000    2500    3000
  SVSF position accumulative RMSE on x-axis (m):    200    174    142     164     197     232     264
  SVSS position accumulative RMSE on x-axis (m):    143    127    110     118     131     142     151
  SVSF position accumulative RMSE on y-axis (m):    200    175    145     167     201     235     266
  SVSS position accumulative RMSE on y-axis (m):    143    129    117     134     160     186     209
Table 3. The state estimation accumulative RMSE on the x-axis and y-axis for different algorithms.

  State estimation:                        KF     KS     SVSF    SVSTPS    SVSS
  x-position accumulative RMSE (m):        461    335    95      75        65
  x-velocity accumulative RMSE (m/s):      160    141    86      73        57
  y-position accumulative RMSE (m):        297    218    92      75        65
  y-velocity accumulative RMSE (m/s):      113    102    78      69        52
