A Novel Smooth Variable Structure Smoother for Robust Estimation

The smooth variable structure filter (SVSF) is a recent filter based on sliding-mode concepts that offers good stability and robustness against modeling uncertainties and errors. However, the SVSF is insufficient at suppressing Gaussian noise. A novel smooth variable structure smoother (SVSS) based on the SVSF is presented here, which addresses this drawback and improves the estimation accuracy of the SVSF. The estimation of the state of a linear Gaussian system by the SVSS is divided into two steps: first, the SVSF state estimate and covariance are computed during a forward pass in time; then, the smoothed state estimate is computed during a backward pass using the innovations of the measurements and the covariance estimate matrix. Simulation results for maneuvering target tracking show that the SVSS outperforms both another SVSF-based smoother and the Kalman smoother in different tracking scenarios. Therefore, the SVSS proposed in this paper could be widely applied in the field of state estimation for dynamic systems.


Introduction
The state and parameter estimation of dynamic systems plays a key role in various application fields. In practice, not only is real-time state estimation of the system required, but estimation results of higher precision are also needed, obtained by a smoother that uses additional measurements made after the time of the estimated state vector [1][2][3][4]. For example, in a radar system, a smoother is used to obtain a higher-precision track trajectory for battlefield situation estimation and to make more accurate estimates of aircraft performance. Smoothers are generally divided into fixed-point, fixed-interval, and fixed-lag smoothers [5][6][7]; which type to use depends on the attributes of the application. In a dynamic system, the real-time sampled data contain various interferences and noises, and it is difficult to obtain the true state of the system. In 1960, for the case of linear systems with Gaussian noise, a recursive solution was obtained with the Kalman filter (KF) [8]. This method is optimal in the linear, unbiased, minimum mean square error sense. However, KF estimates are imprecise and unstable when the system is nonlinear. To address these problems, scholars have proposed several methods, for example the extended Kalman filter (EKF) [9], the unscented Kalman filter (UKF) [10], and the cubature Kalman filter (CKF) [11,12]. The EKF treats a very small region of the nonlinear system as linear, which is equivalent to a first-order approximation. Building on sliding-mode concepts, Habibi first proposed the variable structure filter (VSF) in 2003 [54], a new predictor-corrector estimator type applied to linear systems. The SVSF, as an improved form of the VSF, can be used for linear or nonlinear systems [32].
The trajectory of the state variable over time is shown in Figure 1. In the SVSF concept, the system state trajectory is generated by an uncertain model. The estimated state trajectory approaches the true trajectory until it reaches the existence subspace. Once the estimated state remains in the existence subspace, it is forced to switch back and forth about the true state trajectory. If modeling errors and noises are bounded, the estimates will remain within this limit. The SVSF strategy thus exhibits stability and robustness to modeling uncertainties and errors [32].
Figure 1. SVSF estimation concept [32].
In the case of linear dynamic systems under zero-mean Gaussian noise, the state equation and observation equation of the dynamic system are given by Equation (1):

x_{k+1} = F x_k + w_k,   z_k = H x_k + v_k   (1)

The iterative process of the SVSF is as follows. The predicted state estimate and measurement estimate are calculated as:

x̂_{k+1|k} = F x̂_{k|k}   (2)

ẑ_{k+1|k} = H x̂_{k+1|k}   (3)

The a priori measurement innovation is expressed by:

e_{z,k+1|k} = z_{k+1} − ẑ_{k+1|k}   (4)

Based on Reference [32], the SVSF gain K^{SVSF}_{k+1} is given by:

K^{SVSF}_{k+1} = H^+ diag[(|e_{z,k+1|k}| + γ |e_{z,k|k}|) ∘ sat(ψ̄^{−1} e_{z,k+1|k})] [diag(e_{z,k+1|k})]^{−1}   (5)

where H^+ is the generalized inverse matrix of H, γ (0 ≤ γ < 1) is the SVSF "memory" or convergence rate, |e| is the elementwise absolute value of e, and ∘ represents elementwise multiplication of the corresponding elements of two matrices (the Hadamard product). Besides, ψ̄^{−1} = diag(1/ψ_1, …, 1/ψ_m) and the saturation function is defined elementwise as

sat_i(ψ̄^{−1} e) = e_i/ψ_i if |e_i/ψ_i| ≤ 1, and sign(e_i) otherwise,   (6)

where ψ_i (i = 1, …, m) is the i-th element of the smoothing boundary layer vector ψ, and m is the measurement dimension. The state estimate is updated by:

x̂_{k+1|k+1} = x̂_{k+1|k} + K^{SVSF}_{k+1} e_{z,k+1|k}   (7)

The a posteriori measurement estimate and its innovation are described as:

ẑ_{k+1|k+1} = H x̂_{k+1|k+1},   e_{z,k+1|k+1} = z_{k+1} − ẑ_{k+1|k+1}   (8)

The SVSF estimation process converges to the existence subspace, as shown in Figures 1 and 2. The width β of the existence subspace is an uncertain, dynamic, and time-varying function. In general, β is not completely known, but an upper bound (i.e., ψ) can be chosen based on prior knowledge.
In the existence subspace, the SVSF gain forces the estimated state to switch back and forth along the true state trajectory. The high-frequency switching caused by the SVSF gain is called chattering [32]. Chattering makes the estimation result inaccurate, and it can be minimized with an appropriate value of the smoothing boundary layer ψ. As shown in Figure 2a, when the smoothing boundary layer is larger than the existence subspace boundary layer, the estimated state trajectory is smoothed. When the smoothing boundary layer is smaller than the existence subspace boundary layer, chattering remains because the model uncertainty is underestimated, as shown in Figure 2b. Therefore, the SVSF algorithm is an effective estimation strategy for a system that has no explicit model. The SVSF is generally used for systems in which the state and measurement have the same dimension. If the dimension of the state is larger than that of the measurement, "artificial" measurements need to be added [4]. For example, when estimating speed in target tracking, it is necessary to add "artificial" speed measurements.
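To make the predict-correct cycle concrete, the scalar case of the SVSF update can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names and parameter values (γ = 0.1, ψ = 2.0) are assumptions, and the multi-dimensional form additionally uses the pseudo-inverse H^+ and the Hadamard product.

```python
def sat(a):
    """Elementwise saturation: clips its argument to [-1, 1]."""
    return max(-1.0, min(1.0, a))

def svsf_step(x_post, e_post_prev, z, F=1.0, H=1.0, gamma=0.1, psi=2.0):
    """One SVSF iteration for a scalar system x_{k+1} = F x_k, z = H x + v."""
    x_prior = F * x_post                      # state prediction
    e_prior = z - H * x_prior                 # a priori measurement innovation
    # Scalar form of the SVSF corrective gain: the correction magnitude
    # combines the current and previous innovations, saturated by psi.
    k = (abs(e_prior) + gamma * abs(e_post_prev)) * sat(e_prior / psi) / H
    x_post_new = x_prior + k                  # state update
    e_post = z - H * x_post_new               # a posteriori innovation
    return x_post_new, e_post

# A constant measurement drives the estimate toward the true state;
# inside the boundary layer the saturated gain damps the chattering.
x, e = 0.0, 0.0
for z in [1.0, 1.0, 1.0]:
    x, e = svsf_step(x, e, z)
```

With these illustrative values, the estimate moves monotonically toward the measured level while the innovation magnitude shrinks at each step, which is the smoothed (non-chattering) regime of Figure 2a.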

Smoothing Theories and the Proposed Algorithms
Smoothing is a practical strategy that uses additional measurements made after the time of the estimated state vector to obtain better estimates than those attainable by filtering alone. Unfortunately, it is often overlooked. Since the KF algorithm was proposed in 1960 [8], optimal smoothing methods have been derived from the same model using Kalman theory [55]. Smoothers are the algorithmic or analog implementations of smoothing methods. The relation among smoother, predictor, and filter is illustrated in Figure 3. Fixed-interval, fixed-lag, and fixed-point smoothers are the most typical smoothers, as they have been developed to solve different types of application problems. The fixed-interval smoother is suitable for trajectory optimization problems [55].

The SVSF is a novel model-based robust state estimation method with efficient performance in reducing disturbances and uncertainties. However, the SVSF is less efficient at noise elimination than the Kalman filter [10]. Therefore, if its noise-canceling performance can be improved, the SVSF can be developed further and widely utilized in the state estimation field. Previous research indicates that the influence of noise can be eliminated effectively by a filter followed by an additional smoothing process. Therefore, this paper studies a fixed-interval smoother based on the SVSF.

The Kalman Smoother
The popular Rauch-Tung-Striebel (RTS) two-pass smoother is one of the fixed-interval Kalman smoothers [55] and is labeled KS here. The KS is widely used for its fast iteration and high precision, and its iterative process is described below. When the current measurements are obtained, the system state and covariance are calculated by the Kalman filter in real time; they are then used in the smoothing process to obtain a more accurate estimate [55].
The smoothing process operates backward from the last estimate x̂_{N|N} to the first one [55], and the corresponding iterations are given by the standard RTS recursions:

G_k = P_{k|k} F^T P_{k+1|k}^{−1}

x̂_{k|N} = x̂_{k|k} + G_k (x̂_{k+1|N} − x̂_{k+1|k})

P_{k|N} = P_{k|k} + G_k (P_{k+1|N} − P_{k+1|k}) G_k^T

where x̂_{k+1|k} = F x̂_{k|k} and P_{k+1|k} = F P_{k|k} F^T + Q are the one-step predictions from the forward pass. Equations (11)-(17) summarize the Rauch-Tung-Striebel (RTS) two-pass smoother (the KS algorithm), which is an iterative process.
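The two-pass structure above can be sketched for a scalar system, where the forward Kalman pass stores its priors and posteriors and the backward pass applies the RTS gain. This is a minimal sketch under assumed scalar parameters (F = H = 1, Q = 0.01, R = 1), not the paper's configuration.

```python
def kalman_rts(zs, F=1.0, H=1.0, Q=0.01, R=1.0, x0=0.0, P0=1.0):
    """Forward KF pass plus RTS backward pass for a scalar system.

    Returns (filtered, smoothed) estimate lists."""
    xs, Ps, xps, Pps = [], [], [], []        # posteriors and one-step priors
    x, P = x0, P0
    for z in zs:                             # ---- forward Kalman pass ----
        xp, Pp = F * x, F * P * F + Q        # predict state and covariance
        K = Pp * H / (H * Pp * H + R)        # Kalman gain
        x, P = xp + K * (z - H * xp), (1 - K * H) * Pp
        xps.append(xp); Pps.append(Pp); xs.append(x); Ps.append(P)
    xs_s = xs[:]                             # ---- backward RTS pass ----
    for k in range(len(zs) - 2, -1, -1):
        G = Ps[k] * F / Pps[k + 1]           # smoother gain G_k
        xs_s[k] = xs[k] + G * (xs_s[k + 1] - xps[k + 1])
    return xs, xs_s

# On a noisy constant signal, early smoothed estimates benefit from
# later measurements and end up closer to the true level.
filtered, smoothed = kalman_rts([1.1, 0.9, 1.0, 1.05, 0.95])
```

Note that the last smoothed estimate equals the last filtered one, since no later measurement exists to refine it.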

The Proposed SVSS Algorithm
The covariance does not necessarily have to be calculated in the SVSF, but it is essential for the smoother [38]. The linear system is described by Equation (1). Similar to the calculation in the KF, the predicted state covariance matrix of the SVSF is:

P_{k+1|k} = F P_{k|k} F^T + Q

where P_{k|k} represents the posterior state covariance at time k, F is the system transition matrix, and Q = E[w_k w_k^T] is the covariance of the white system noise. The predicted state estimate is x̂_{k+1|k} = F x̂_{k|k}. According to the predicted estimate x̂_{k+1|k} and the real measurement z_{k+1}, the innovation can be expressed by:

e_{z,k+1|k} = z_{k+1} − H x̂_{k+1|k}

The state estimate is updated as:

x̂_{k+1|k+1} = x̂_{k+1|k} + K^{SVSF}_{k+1} e_{z,k+1|k}

The updated measurement innovation is calculated by:

e_{z,k+1|k+1} = z_{k+1} − H x̂_{k+1|k+1}

Because the SVSF gain is not the optimal Kalman gain, the posterior state covariance P_{k+1|k+1} at time k + 1 is written in the Joseph form:

P_{k+1|k+1} = (I − K^{SVSF}_{k+1} H) P_{k+1|k} (I − K^{SVSF}_{k+1} H)^T + K^{SVSF}_{k+1} R_{k+1} (K^{SVSF}_{k+1})^T

where R_{k+1} = E[v_{k+1} v_{k+1}^T] is the measurement noise covariance matrix under the linear Gaussian system, and v_{k+1} follows a zero-mean Gaussian distribution. Equations (18)-(28) constitute the SVSF algorithm with the iterative update process of the covariance matrix.
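A scalar sketch of this covariance-augmented SVSF step is given below. The Joseph-form update is used because it remains a valid covariance for any (non-optimal) gain; the function name and parameter values are illustrative assumptions, not the paper's settings.

```python
def sat(a):
    """Saturation: clips to [-1, 1]."""
    return max(-1.0, min(1.0, a))

def svsf_cov_step(x, P, e_prev, z, F=1.0, H=1.0, Q=0.01, R=0.04,
                  gamma=0.1, psi=2.0):
    """One scalar SVSF iteration that also propagates the covariance P."""
    xp = F * x                                # state prediction
    Pp = F * P * F + Q                        # covariance prediction
    e = z - H * xp                            # a priori innovation
    # Scalar SVSF gain written in "matrix" form so that correction = K * e.
    K = (abs(e) + gamma * abs(e_prev)) * sat(e / psi) / (H * e) if e else 0.0
    x_new = xp + K * e                        # state update
    e_post = z - H * x_new                    # a posteriori innovation
    # Joseph-form covariance update: valid for any gain K, keeps P >= 0.
    P_new = (1 - K * H) * Pp * (1 - K * H) + K * R * K
    return x_new, P_new, e_post

x, P, e = 0.0, 1.0, 0.0
for z in [1.0, 1.0, 1.0]:
    x, P, e = svsf_cov_step(x, P, e, z)
```

The covariance sequence produced here is exactly the quantity the backward smoothing pass of the SVSS consumes.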
The earliest article on the SVSF [48] presented a two-pass smoother (labeled SVSTPS) based on the Rauch-Tung-Striebel (RTS) two-pass smoother. Its smoothing process uses Equation (17); in other words, it is based directly on the SVSF gain. However, the estimation result is not satisfying, so the SVSS using innovation sequences is proposed here. The innovation sequence is an orthogonal sequence containing the same statistical information as the original random sequence, so it is a white noise sequence with zero mean. The smoothing process in the proposed smoother (SVSS) is updated by the innovation sequence according to the projection theorem (minimum mean square error criterion), and has better performance in eliminating Gaussian noise. The recursive equation relates the smoothed estimates x̂_{k|N} and x̂_{k|N−1} (with k < N) as follows:

x̂_{k|N} = x̂_{k|N−1} + M_{k|N−1} (z_N − H x̂_{N|N−1})   (29)

where the coefficients C_i and B_N appearing in the final recursion are derived below and in Appendix A.

Proof. When N = k + n and e_{k+n} = z_{k+n} − H x̂_{k+n|k+n−1}, Equation (29) can be written as:

x̂_{k|k+n} = x̂_{k|k+n−1} + M_{k|k+n−1} e_{k+n}   (30)

where the smoothing gain M_{k|k+n−1} is defined by projection theory as:

M_{k|k+n−1} = E[x_k e_{k+n}^T] (E[e_{k+n} e_{k+n}^T])^{−1}   (31)

According to Equations (1) and (21), the innovation can be written in terms of the prediction error x̃_{k+n|k+n−1} = x_{k+n} − x̂_{k+n|k+n−1} as e_{k+n} = H x̃_{k+n|k+n−1} + v_{k+n}. Furthermore, according to Reference [56], the true state error propagates through the transition matrices F(k+n−1, k+n−1) and F(k+n−1, i). Suppose n > 0; then, according to Equations (19) and (26), since w_{k+n−1} ⊥ x̃_k and v_{k+n} ⊥ x̃_k, the expectations in Equation (31) can be evaluated. Expressing the true state as x_k = x̂_{k|k} + x̃_{k|k} with x̂_{k|k} ⊥ x̃_{k|k}, substituting (33) into (36), and then substituting Equations (38) and (39) into Equation (30), yields the recursion of Equation (42), in which the coefficients C_i and B_N appear. The other detailed derivations are described in Appendix A. □
Moreover, the validity of this derivation process can be verified against Reference [6].
Equations (18)-(29) summarize the SVSS algorithm proposed in this paper.
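The overall two-pass structure of the SVSS can be sketched as a forward SVSF pass that stores estimates, one-step predictions, and covariances, followed by a backward correction pass. This scalar sketch is illustrative only: it uses an RTS-style backward gain as a simplified stand-in for the innovation-based gains C_i and B_N of Equation (42), and all parameter values are assumptions.

```python
def sat(a):
    """Saturation: clips to [-1, 1]."""
    return max(-1.0, min(1.0, a))

def svss(zs, F=1.0, H=1.0, Q=0.01, R=0.04, gamma=0.1, psi=2.0):
    """Scalar two-pass sketch: forward SVSF, then backward smoothing."""
    # ---- forward SVSF pass: store posteriors, priors, and covariances ----
    xs, Ps, xps, Pps = [], [], [], []
    x, P, e_post = 0.0, 1.0, 0.0
    for z in zs:
        xp, Pp = F * x, F * P * F + Q
        e = z - H * xp
        K = (abs(e) + gamma * abs(e_post)) * sat(e / psi) / (H * e) if e else 0.0
        x = xp + K * e
        e_post = z - H * x
        P = (1 - K * H) * Pp * (1 - K * H) + K * R * K   # Joseph form
        xps.append(xp); Pps.append(Pp); xs.append(x); Ps.append(P)
    # ---- backward pass: correct each estimate with later information ----
    xs_s = xs[:]
    for k in range(len(zs) - 2, -1, -1):
        G = Ps[k] * F / Pps[k + 1]          # RTS-style stand-in smoothing gain
        xs_s[k] = xs[k] + G * (xs_s[k + 1] - xps[k + 1])
    return xs, xs_s

filtered, smoothed = svss([1.1, 0.9, 1.0, 1.05, 0.95])
```

Even in this simplified form, the backward pass pulls the early SVSF estimates toward the information carried by later measurements, which is the effect the SVSS exploits to remove Gaussian noise that the forward SVSF leaves behind.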

A Classic Target Tracking Scenario
A classic simulation scenario, often used in References [10,30,57], is considered. It is a tracking problem in a generic air traffic control (ATC) scenario: the target position is provided by a radar system, and the measurement noise obeys a Gaussian distribution with a 200 m standard deviation. Figure 4 shows that the aircraft starts from the initial position [−25000 m, −10000 m] at t = 0 s and flies for 125 s at a speed of 120 m/s along the x-axis and 0 m/s along the y-axis. It then turns at a rate of 3°/s for 30 s. Next, the target flies in a straight line for 125 s and maneuvers at a rate of 1°/s for 90 s. Finally, the target flies straight for 120 s until the end.
In the ATC scenario, the behavior of a civilian aircraft can be modeled by uniform motion (UM) [10,30,57], i.e., x_{k+1} = F x_k + w_k with the standard constant-velocity transition matrix, where T is the sampling period and w_k denotes the system noise. The state vector x_k of the system is defined by:

x_k = [η_k, η̇_k, ξ_k, ξ̇_k]^T

where η_k and ξ_k represent the position of the aircraft along the x-axis and y-axis, and η̇_k and ξ̇_k represent the corresponding speeds. In a target tracking system, radar generally provides only position measurements without target speed measurements. Therefore, the measurement model z_k = H x_k + v_k selects only the position components of the state. In the simulation, the filter state is initialized as x_0 = [−25000, 80, −10000, −10]^T. The measurement covariance R is set according to the 200 m measurement noise standard deviation, the initial state covariance P_{0|0} is set as in References [10,30,57], and the process noise covariance Q is parameterized by the power spectral density L, defined as 0.16 [10,30,57]. Besides, the SVSF "memory" (i.e., convergence rate) was set to γ = 0.1, tuned based on some knowledge of the system uncertainties, such as the noise, to decrease the estimation errors [58]. In addition to the KF and KS, the RSTKF [20], the SVSF, and the proposed SVSS are tested. The smoothing boundary layer widths of the SVSF and SVSS are set to ψ = [800, 800] m, and the prior parameters of the RSTKF are set as w = v = 5, τ = 5, N = 10 [20]. Figure 4 shows the target tracking position trajectory of one experiment; the closer the tracking trajectory is to the real trajectory, the more accurate the tracking method. The trajectory of the SVSS is closer to the true trajectory than that of the SVSF, so the SVSS reduces the chattering phenomenon. The state estimation results of the KF, KS, SVSF, and proposed SVSS are shown in Figure 5 and Table 1. Compared with the RSTKF, SVSF, and SVSS, the KF and KS exhibit better performance when the system model matches the target motion. But when the target motion model changes, a dramatic increase occurs in the tracking errors of the KF and KS, while the estimates of the RSTKF, SVSF, and SVSS still maintain high accuracy.
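The UM model above can be written out explicitly. The sketch below builds the standard constant-velocity matrices F and H and the continuous white-noise-acceleration process covariance Q for the state [η, η̇, ξ, ξ̇]; these are the textbook forms, and the function name is an assumption.

```python
def um_model(T=1.0, L=0.16):
    """Constant-velocity (UM) model matrices for state [eta, eta', xi, xi']."""
    F1 = [[1.0, T], [0.0, 1.0]]               # per-axis transition block
    # Block-diagonal assembly: indices {0,1} are the x-axis, {2,3} the y-axis.
    F = [[F1[r % 2][c % 2] if r // 2 == c // 2 else 0.0 for c in range(4)]
         for r in range(4)]
    H = [[1.0, 0.0, 0.0, 0.0],                # radar measures positions only
         [0.0, 0.0, 1.0, 0.0]]
    Q1 = [[L * T**3 / 3, L * T**2 / 2],       # white-noise-acceleration block
          [L * T**2 / 2, L * T]]
    Q = [[Q1[r % 2][c % 2] if r // 2 == c // 2 else 0.0 for c in range(4)]
         for r in range(4)]
    return F, H, Q

F, H, Q = um_model(T=1.0, L=0.16)
```

Because position is propagated as η_{k+1} = η_k + T η̇_k, a model mismatch appears as soon as the target turns, which is exactly the regime where the SVSF-based estimators outperform the KF and KS.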
In Figure 5, the aircraft flies in a straight line after 155 s, yet the RMSE of the SVSF does not decrease, because the SVSF algorithm cannot directly estimate the speed information; therefore, no corrections are made to the speed components. How to effectively use "artificial" velocity measurements to correct the speed information is described later. As shown in Figure 5 and Table 1, the simulation results demonstrate that for both the KF and SVSF, the tracking errors of their smoothers are smaller than those of the filters. In addition, Table 1 shows that the position accumulative RMSE of the SVSS is the lowest among the KF, KS, RSTKF, and SVSF. The SVSS improves tracking accuracy by about 22% compared with the SVSF and RSTKF, while the KS improves only about 7% compared with the KF. The reason for this phenomenon is that the SVSF's ability to deal with noise is weaker than that of the KF, so the SVSS can remove more noise from the state estimates than the KS. The KS and SVSS consume similar time in the smoothing process, the SVSF spends a little more time than the KF, and the RSTKF costs the most time. To sum up, the SVSS has better robustness and higher tracking accuracy than the others. Figure 6 and Table 2 show the position accumulative RMSE under different smoothing boundary layer widths.
We can see that the SVSS performs better than the SVSF, because the ability of the SVSF to eliminate noise is almost the same under different smoothing boundary layer widths. In addition, the smoothing boundary layer width ψ is an important parameter of the SVSF: if it is not set properly, both filter accuracy and stability deteriorate. The accumulative RMSE of the SVSF estimate is high when the smoothing boundary layer width is too large or too small. As shown in Figure 6, when the smoothing boundary layer ψ is set to 1000 m, the proposed method performs best under measurement noise with a standard deviation of 200 m. These parameters are selected based on the distributions of the system and measurement noises. Generally, the smoothing boundary layer width ψ is set to about five times the maximum measurement noise width. The SVSF "memory" or convergence rate γ (0 ≤ γ < 1) is related to the rate of decay α, such that α = (1/τ) ln(γ), where τ is the sampling time [32]. As shown in Figure 6, when the smoothing boundary layer width is large, the accumulated position RMSE of the SVSS on the x-axis and y-axis differ; the reason is the same as above: no corrections are made to the speed components.
Simulations under different fixed smoothing lags are also considered. The smoothing boundary layer width is ψ = [800, 800] m, and the other experimental parameters are the same as above. Figure 7 shows the estimation results of the SVSS under three different lags together with the SVSF; the performance of two and three lags is better than that of one lag. Compared with two lags, three lags improves the tracking accuracy slightly when the target moves with constant velocity, but when the target makes a turning movement, the accuracy of three lags is lower than that of two lags. In fact, when the system model is consistent with the actual model, the proposed SVSS, which is derived under linear Gaussian noise, increases estimation accuracy as the smoothing lag increases. However, if the models are inconsistent, the performance of the SVSS will probably decrease and become unstable as the smoothing lag increases, because the innovation information in the SVSS then contains more modeling-error innovation. As the lag of the SVSS increases, the computational complexity also grows. The computational complexity of n lags is O(n(n − 1)/2 · (3 + m)i³ + 2nji² + 2nij² + nmj³); the detailed derivation is described in Appendix B, and the simulation time cost of different lags is shown in Table A2 of Appendix B. Therefore, under different requirements, users can choose the appropriate lag according to the needs of the system.

A Complex Maneuvering Environment Scenario
In this scenario, the maneuvers of the target are more complicated, with motion modes including uniform motion, turning motion, uniformly accelerated motion, and angular-acceleration motion. The initial position of the aircraft is again set to [−25000 m, −10000 m]. The target flies at a high speed of 300 m/s along the x-axis and 250 m/s along the y-axis for 20 s. Then the aircraft maneuvers at a rate of −3°/s for 55 s. Next, the target turns at an initial angular velocity of 1°/s with an angular acceleration of 0.5°/s² for 25 s. Finally, the target continues to fly at an acceleration of 20 m/s² along the x-axis and 10 m/s² along the y-axis for 25 s. The process noise obeys a Gaussian distribution with a 100 m standard deviation. The process and measurement noises (w and v) are considered Gaussian, with zero mean and variances Q and R, respectively. The initial state covariance P_{0|0}, measurement noise covariance R, and process noise covariance Q are defined as in the previous scenario, where Q is given by Equation (48) and L is defined as 15. The SVSF and its smoothers use the UM model in their estimation processes, the smoothing boundary layer widths were defined as ψ = [500, 600, 500, 600]^T, and the SVSF "memory" or convergence rate was set to γ = 0.1. To meet the practical demand, the estimation of speed is added. As per the earlier discussion, the system then also needs "artificial" velocity measurements: the measurement matrix of Equation (45) must be transformed into a square (i.e., identity) matrix, and the "artificial" velocity measurements can be calculated from the position measurements, for example by differencing consecutive position measurements over the sampling period, so that the augmented measurement y_k contains both the position measurements and the artificial velocity measurements. The accuracy of Equation (51) depends on the sampling rate T. A total of 500 Monte Carlo runs were performed.
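The measurement augmentation described above can be sketched as follows: each new position measurement is combined with the previous one to form finite-difference velocities, yielding a square (identity) measurement matrix. The function name and the stacking order of y_k are assumptions for illustration.

```python
def augment_with_velocity(z_prev, z_curr, T=1.0):
    """Build an augmented measurement [px, vx, py, vy] from two successive
    radar position measurements [px, py]; velocities are finite differences
    over the sampling period T (so their accuracy depends on T)."""
    vx = (z_curr[0] - z_prev[0]) / T          # artificial x-velocity
    vy = (z_curr[1] - z_prev[1]) / T          # artificial y-velocity
    return [z_curr[0], vx, z_curr[1], vy]

# Two position fixes one sampling period apart recover the velocity exactly
# in the noise-free case.
y = augment_with_velocity([0.0, 0.0], [300.0, 250.0], T=1.0)
```

Note that differencing amplifies the position measurement noise by 1/T, which is why the artificial velocity measurements are noisier than the position measurements themselves.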
The results of the different estimation methods are shown in the following figures and in Table 3. The KS algorithm has the best performance when the model matches the actual motion (0 s to 20 s), and the position and speed RMSE of the KS are significantly smaller than those of the two SVSF-based smoothers. Once the models do not match, for example when the high-speed target changes from uniform motion to a turn, angular acceleration, or acceleration, the KS is unstable and its RMSE increases, but the SVSTPS and SVSS are still able to overcome the uncertainties with high accuracy and robustness. Table 3 shows that the position (velocity) accumulative RMSE of the SVSTPS and SVSS improves by about 21% (15%) and 31% (33%) over the SVSF, respectively. This is because the SVSTPS uses the SVSF gain, whereas the proposed SVSS uses the innovation, and the innovation is more useful to the smoother because it contains as much of the noise information as the original measurements; thus, more noise error can be eliminated by the SVSS. However, there is a special case in which the SVSS does not perform as well as the SVSTPS: when modeling errors affect it more than Gaussian noise. For example, at around 95 s in Figures 8 and 9, the target is flying at high speed with an angular velocity of 12°/s and an angular acceleration of 0.5°/s², so the accuracy of the SVSS is lower than that of the SVSTPS. Overall, the SVSS exhibits the best performance among the above filters.

Conclusions
An SVSS algorithm is presented to improve the accuracy of SVSF state estimation in dynamic systems with model uncertainty. Based on projection theory and the SVSF, the smoothing recurrence formula of the SVSS is deduced using the innovation. Comparisons among the SVSS, KS, and SVSTPS are analyzed in an aircraft trajectory tracking scene. According to the simulation results, the proposed SVSS performs better than the SVSF under different bounded smoothing layers. In addition, compared with the popular KS algorithm, the SVSS is able to overcome inaccuracies and yield a stable solution in the presence of modeling uncertainties. Theory and simulations also show that the proposed SVSS has higher accuracy and better robustness than the SVSTPS.

Conflicts of Interest:
The authors declare no conflict of interest.

Appendix A.
The proof is based on orthogonal projection theory. Given a random measurement variable z ∈ R^m, the estimate x̂ ∈ R^n of x ∈ R^n in the linear minimum variance sense is a linear function of z that minimizes the index

J = E[(x − x̂)^T (x − x̂)]

where E is the expectation operator and T is the transpose symbol. The linear minimum variance estimate is given by the formula

x̂ = E[x] + Cov(x, z) Var(z)^{−1} (z − E[z])
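The orthogonality underlying this formula can be checked numerically on a scalar example: with the linear minimum variance gain Cov(x, z)/Var(z), the estimation error is uncorrelated with the measurement. This is a pure-Python sketch with a seeded random sample; all values are illustrative.

```python
import random

random.seed(0)
n = 20000
xs, zs = [], []
for _ in range(n):
    x = random.gauss(0.0, 1.0)            # state x ~ N(0, 1)
    z = x + random.gauss(0.0, 0.5)        # measurement z = x + v, v ~ N(0, 0.25)
    xs.append(x); zs.append(z)

mx = sum(xs) / n
mz = sum(zs) / n
cov_xz = sum((x - mx) * (z - mz) for x, z in zip(xs, zs)) / n
var_z = sum((z - mz) ** 2 for z in zs) / n
gain = cov_xz / var_z                     # empirical LMMSE gain Cov(x,z)/Var(z)

# The estimation error x - x_hat is (empirically) orthogonal to z:
errs = [x - (mx + gain * (z - mz)) for x, z in zip(xs, zs)]
corr = sum(e * (z - mz) for e, z in zip(errs, zs)) / n
```

The theoretical gain here is Var(x)/(Var(x) + Var(v)) = 1/1.25 = 0.8, and the empirical residual correlation vanishes up to floating-point error, which is exactly Property 3 below.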

Property 3
x − x̂ and z are uncorrelated random variables, written x − x̂ ⊥ z, and x̂ is called the projection of x on z:

x̂ = proj(x|z)   (A7)

Appendix A.3. Definition 3

Given random variables z_1, z_2, …, z_k ∈ R^m, the linear minimum variance estimate x̂ of the random variable x ∈ R^n is called the projection of x on L(z_1, z_2, …, z_k).

Appendix B.
The smoothing process is expressed by Equation (42). The state and measurement dimensions are i and j, respectively, and the complexity of the matrix computations is shown in Table A1. Ignoring low-order terms, the computational complexity of one-lag smoothing is O(i³ + 2ji² + mj³).