Article

Formulation of the Alpha Sliding Innovation Filter: A Robust Linear Estimation Strategy

by Mohammad AlShabi 1,* and Stephen Andrew Gadsden 2
1 Department of Mechanical & Nuclear Engineering, University of Sharjah, Sharjah P.O. Box 27272, United Arab Emirates
2 Department of Mechanical Engineering, McMaster University, Hamilton, ON L8S 4L8, Canada
* Author to whom correspondence should be addressed.
Sensors 2022, 22(22), 8927; https://doi.org/10.3390/s22228927
Submission received: 6 October 2022 / Revised: 11 November 2022 / Accepted: 14 November 2022 / Published: 18 November 2022
(This article belongs to the Section Physical Sensors)

Abstract

In this paper, a new filter referred to as the alpha sliding innovation filter (ASIF) is presented. The sliding innovation filter (SIF) is a newly developed estimation strategy that uses innovation or measurement error as a switching hyperplane. It is a sub-optimal filter that provides a robust and stable estimate. In this paper, the SIF is reformulated by including a forgetting factor, which significantly improves estimation performance. The proposed ASIF is applied to several systems including a first-order thermometer, a second-order spring-mass-damper, and a third-order electrohydrostatic actuator (EHA) that was built for experimentation. The proposed ASIF provides an improvement in estimation accuracy while maintaining robustness to modeling uncertainties and disturbances.

1. Introduction

Estimation and filtering techniques are essential to successfully modeling and controlling engineering systems. With a well-estimated state, the noise in the measured signal can be reduced and the quality of the controller can be improved [1]. Filters extract valuable information from noisy measurement signals and forward this information to a control system. The extensive literature on filtering and estimation theory can be grouped according to the principles used to develop each filter: optimality; robustness and stability; combinations of techniques from the first two categories; and the utilization of artificial intelligence.
The most popular filter in the optimality category is the Kalman filter (KF), which was introduced by Rudolph Kalman in the 1960s [2]. It was formulated to solve linear stochastic systems driven by Gaussian white signals by minimizing the state error covariance matrix. Later, a large number of works were published that modified the KF in order to improve robustness and make it applicable to nonlinear systems and measurements. These works include the extended KF (EKF) [2,3], where a first-order Taylor series expansion is used to linearize the system and measurement matrices. Other works used the unscented transformation, such as the unscented KF (UKF) [3,4], and the cubature rule, such as the cubature KF (CKF) [5], to approximate the nonlinear functions. These techniques draw several points from pre-defined density functions, propagate them through the nonlinear functions, and then use a weighted statistical regression of the propagated points as the linearization.
Several works targeted the robustness of the KF by using simple algebraic techniques, as in [6,7,8,9,10,11]. For example, some used QR decomposition and Cholesky factoring for the covariance matrices [10,11]. Other techniques bounded the error to make sure it remained close to the state estimates [12]. For example, given upper bounds on the level of modeling uncertainty, the KF gain may be bounded to help improve the estimation stability [12]. Other methodologies include the addition of fictitious system noise and the use of a fading memory strategy [3,13]. The system noise matrix is modified when less confidence is placed in the system model used by the filter, or in other words, when there is a great deal of system uncertainty [3,13,14,15]. Finally, the H∞ filter was introduced in [16,17]. This filter takes the worst-case uncertainties into consideration when deriving the gain, which ensures that the estimates remain bounded.
The second category targets the robustness and stability of the filter. The main filters in this area are the sliding mode observer (SMO), the variable structure observer (VSO) [13,18,19,20,21], and the strong tracking filter (STF) [22,23]. These observers define hyperplanes, usually functions of the innovation, and apply a discontinuous switching force to keep the estimate within a region of the true trajectory. For the VSO, these hyperplanes divide the trajectory space into sub-regions in which the system dynamics are continuous; after crossing one of these planes, the observer changes its structure [20,24]. The hyperplanes of the SMO, on the other hand, are used to keep the estimates on them while sliding towards zero [25,26]. A special form of the VSO and SMO was introduced in [1,27,28] in a predictor-corrector form, referred to as the smooth variable structure filter (SVSF). The SVSF gain is a function of the a priori and a posteriori innovation vectors and a memory function. Later, in 2020, a new filter referred to as the sliding innovation filter (SIF) joined the family [29]. The SIF shares the same principles as the SVSF but is simpler in structure. The SIF has been used in target tracking [30,31], linear systems [32], unmanned aerial vehicles [33], and fault detection and diagnosis [34].
The third and fourth categories merge methods from the first two categories to overcome the limitations of each, as per [35,36,37,38,39]. For example, sigma-point Kalman filters, such as the central difference KF, the UKF, and the CKF, were combined with the SVSF in order to improve the accuracy of the SVSF while maintaining robustness. Other works combined the H∞ filter with the KF, as in [40,41]. Some works incorporated AI techniques, such as [42,43,44], where fuzzy logic was combined with the KF, and [45,46,47], where AI was combined with the SMO and SVSF. The SIF was combined with multiple model-based filters in [48,49] and particle filters in [50].
In this paper, a novel algorithm referred to as the alpha sliding innovation filter (ASIF) is proposed. The proposed ASIF method gives an efficient yet simple mechanism that improves the performance of the standard SIF. By utilizing a forgetting factor coefficient, the level of measurement confidence can be optimized. Measurement noise affects the performance of the SIF; therefore, the SIF can be improved by minimizing the effects of noise using a forgetting factor. This paper is organized as follows. The SIF is introduced in Section 2, followed by the proposed ASIF algorithm in Section 3. The proof of stability for the proposed filter is derived in Section 4. The algorithm is tested and verified in Section 5 on three different systems. The paper is then concluded in Section 6.

2. The Sliding Innovation Filter (SIF)

The SIF is a new estimation technique that was proposed in [29] and later modified in [51,52]. The filter is a predictor-corrector filter that shares the same principles as the SVSF [35,53,54,55]. It utilizes the innovation as a switching hyperplane and forces the estimate to be within its neighborhood. The process starts by assuming that the system and measurements behave according to the following, respectively:
x_{k+1} = A x_k + B u_k + w_k   (1)
z_{k+1} = C x_{k+1} + v_{k+1}   (2)
It then uses the knowledge of the system model to obtain a priori, predicted estimates and measurements as follows:
\hat{x}_{k+1|k} = A \hat{x}_{k|k} + B u_k   (3)
\tilde{z}_{k+1|k} = z_{k+1} - C \hat{x}_{k+1|k}   (4)
The a priori estimate is then refined in the corrector stage to obtain the a posteriori estimate, as follows:
\hat{x}_{k+1|k+1} = \hat{x}_{k+1|k} + K_{k+1} \tilde{z}_{k+1|k}   (5)
where K_{k+1} is the SIF correction gain defined in Equation (6) and \tilde{z}_{k+1|k} refers to the innovation or measurement error.
K_{k+1} = C^{+} \, \overline{\mathrm{sat}}\left( |\tilde{z}_{k+1|k}| / \delta \right)   (6)
where \overline{\mathrm{sat}} is a diagonal matrix of saturation functions, defined as:
\overline{\mathrm{sat}}\left( |\tilde{z}_{k+1|k}| / \delta \right) = \mathrm{diag}\left( \mathrm{sat}(|\tilde{z}_{1,k+1|k}|/\delta_1), \ldots, \mathrm{sat}(|\tilde{z}_{n,k+1|k}|/\delta_n) \right), \quad \mathrm{where}\ \ \mathrm{sat}(|A|/B) = \begin{cases} 1 & |A| \geq B \\ |A|/B & |A| < B \end{cases}   (7)
The elements have values between 0 and +1. The boundary layer δ is used to smooth the estimate and reduce the effect of the measurement noise [29]. However, if the innovation becomes larger than the boundary layer, then s a t ¯ becomes the identity matrix. The SIF estimation process is illustrated in Figure 1.
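To make the predict-correct cycle of Equations (1)-(7) concrete, the following Python/NumPy sketch implements a single SIF iteration; the function name, the pinv-based pseudo-inverse, and the vector interface are assumptions of this illustration rather than code from the original work.

```python
import numpy as np

def sif_step(x_post, u, z_next, A, B, C, delta):
    """One predict-correct iteration of the sliding innovation filter (SIF)."""
    # Prediction (Equation (3)) and a priori innovation (Equation (4))
    x_prior = A @ x_post + B @ u
    innov = z_next - C @ x_prior
    # Saturation terms of Equation (7): |innovation|/delta inside the
    # boundary layer, clipped to 1 outside it
    sat = np.minimum(np.abs(innov) / delta, 1.0)
    # SIF gain (Equation (6)) and a posteriori update (Equation (5))
    K = np.linalg.pinv(C) @ np.diag(sat)
    return x_prior + K @ innov
```

A wide boundary layer delta keeps the gain small and smooths the estimate, whereas innovations that exceed delta drive the gain to its full value of C^+.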

3. The Proposed Alpha Sliding Innovation Filter (ASIF) Strategy

This section describes the proposed modification to the standard SIF. A forgetting factor coefficient (FF), denoted α, is introduced to create the ASIF. The SIF with the gain of Equation (6) is considered robust [29]. However, its performance in terms of optimality highly depends on the selection of the boundary layer. The selection mechanism proposed in [29] to obtain the widths of these layers was based on trial and error. This is not an easy task, for several reasons summarized as follows:
  • The range of acceptable δ is infinitely wide, as it can be any positive value in (0, ∞).
  • The number of boundary layers required is equal to the number of measurements, which means exhaustive trials for high-dimensional systems (e.g., AI output layers).
  • In most cases, δ is selected to be the maximum allowable error in the system. For small innovation amplitudes, the filter will have a very small convergence rate, and for large amplitudes, the estimates may chatter.
These limitations motivated the authors to develop a modified version of the SIF that improves performance while overcoming them. A fading memory is added to the SIF’s gain: by adding the coefficient α, the \overline{\mathrm{sat}} term can be ignored and replaced with an identity matrix. In our case, α is a measure of measurement confidence, as per the following:
  • If α = 1 , ASIF collapses to the SIF with a very small boundary layer.
  • If α → 0, the filter depends more on the system and less on the measurement. This can be used to reduce the effect of the measurement noise. This helps significantly when R is larger than Q.
  • If α is large, the filter depends more on the measurement, which makes the ASIF sensitive to the measurement noise but less sensitive to modeling uncertainties. This helps when Q is larger than R .
For the ASIF estimation strategy, the proposed gain has the following simplified structure:
K_{k+1} = \alpha C^{+}   (8)
Note that the ASIF estimation process is similar to the SIF, except that Equation (8) is used instead of Equation (6).
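A corresponding sketch of the ASIF correction, under the same illustrative assumptions as the SIF snippet above, shows how little the algorithm changes: the saturation matrix is simply replaced by the scalar forgetting factor α of Equation (8).

```python
import numpy as np

def asif_step(x_post, u, z_next, A, B, C, alpha):
    """One predict-correct iteration of the alpha sliding innovation filter (ASIF)."""
    x_prior = A @ x_post + B @ u              # prediction, Equation (3)
    innov = z_next - C @ x_prior              # a priori innovation, Equation (4)
    K = alpha * np.linalg.pinv(C)             # ASIF gain, Equation (8)
    return x_prior + K @ innov                # a posteriori update, Equation (5)
```

Tuning reduces to the single scalar α: small values trust the system model, while values near or above one trust the measurement.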

4. Proof of Stability

The SIF gain was developed to guarantee the stability and robustness of the filter. Adding the forgetting factor changes the filter structure. In this section, the stability of the new ASIF is examined. We consider the discrete Lyapunov function M k + 1 defined as follows:
M_{k+1} = \left| \tilde{z}_{k+1|k+1} \right|   (9)
Then, according to Equation (9), the proposed filter is stable as long as the innovation is decreasing with time, as follows:
\left| \tilde{z}_{k+1|k+1} \right| < \left| \tilde{z}_{k|k} \right|   (10)
Referring to the equations of Section 2 and Section 3, taking the difference between Equations (1) and (5) yields:
\tilde{x}_{k+1|k+1} = \tilde{x}_{k+1|k} - \alpha C^{+} \tilde{z}_{k+1|k}   (11)
Substituting Equation (2) in (4) results in:
\tilde{z}_{k+1|k} = C \tilde{x}_{k+1|k} + v_{k+1} \quad \mathrm{or} \quad C^{+} \tilde{z}_{k+1|k} = \tilde{x}_{k+1|k} + C^{+} v_{k+1}   (12)
Substituting Equation (12) in (11) and multiplying by C gives:
\tilde{z}_{k+1|k+1} = \tilde{z}_{k+1|k} - \alpha \tilde{z}_{k+1|k} \quad \mathrm{or} \quad \tilde{z}_{k+1|k+1} = (1 - \alpha) \tilde{z}_{k+1|k}   (13)
Subtracting Equation (3) from Equation (1) and using Equation (12) yields the following:
\tilde{x}_{k+1|k} = A \tilde{x}_{k|k} + w_k   (14)
\tilde{z}_{k+1|k} = C \left( A \tilde{x}_{k|k} + w_k \right) + v_{k+1}   (15)
Substituting Equation (15) into Equation (13) yields:
\tilde{z}_{k+1|k+1} = (1 - \alpha) \left( C \left( A \tilde{x}_{k|k} + w_k \right) + v_{k+1} \right)   (16)
Rewriting Equation (16) and simplifying yields:
\tilde{z}_{k+1|k+1} = (1 - \alpha) C A C^{+} \tilde{z}_{k|k} + \eta   (17)
where η is defined as an uncertainty vector as follows:
\eta = (1 - \alpha) C \left( w_k - A C^{+} v_k \right) + (1 - \alpha) v_{k+1}   (18)
If the measurement and system noise vectors are zero-mean white sequences, then E[w_k] = E[v_k] = E[η] = 0, where E refers to the expectation or expected value. Therefore, taking the expectation of Equation (17) yields:
E\left[ \tilde{z}_{k+1|k+1} \right] = (1 - \alpha) C A C^{+} E\left[ \tilde{z}_{k|k} \right]   (19)
From Equations (10) and (19), the filter is considered stable if:
\left| (1 - \alpha) C A C^{+} \right| < 1   (20)
The system under study is stable if the eigenvalues of A have magnitudes less than unity. Therefore, without loss of generality, \left| C A C^{+} \right| is less than unity, which yields the following condition:
-1 < (1 - \alpha) < 1   (21)
Solving Equation (21) further gives:
0 < \alpha < 2   (22)
Therefore, for the ASIF estimation process to be stable, α must be defined as per Equation (22).
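The stability condition can also be checked numerically. The sketch below evaluates the spectral radius of (1 - α) C A C^+ for a few values of α using an assumed scalar example (the system matrix A = 0.9994 is illustrative, not taken from the paper); values of α inside (0, 2) satisfy Equation (20), while values outside do not.

```python
import numpy as np

A = np.array([[0.9994]])   # assumed stable system matrix (eigenvalue inside the unit circle)
C = np.array([[1.0]])
for alpha in (0.05, 0.62, 1.0, 1.95, 2.5):
    M = (1.0 - alpha) * C @ A @ np.linalg.pinv(C)
    rho = np.max(np.abs(np.linalg.eigvals(M)))
    print(f"alpha = {alpha:4.2f}: |(1 - alpha) C A C+| = {rho:.4f}, stable: {rho < 1.0}")
```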

5. Computer Experiments and Results

The proposed ASIF estimation strategy is tested on first-, second-, and third-order systems. The ASIF is compared with the original SIF and the well-known KF. The study was conducted in MATLAB and was repeated 1000 times in a Monte Carlo simulation (MCS) to provide better insight into the behavior of the different estimation methods. The parameters Q, R, δ, and α were tuned by trial and error for the case with no modeling uncertainties, and the same values were assumed to remain valid for the case with modeling uncertainties.
The results are compared in terms of root mean squared error (RMSE) and maximum absolute error (MAE) under normal operating conditions and under modeling uncertainty or faulty conditions. RMSE and MAE are calculated using Equations (23) and (24), respectively, where n is the number of sampled data points; a small helper sketch for computing both metrics follows Equation (24).
\mathrm{RMSE} = \sqrt{ \dfrac{ \sum_{i=1}^{n} \left( x_i - \hat{x}_i \right)^2 }{ n } }   (23)
\mathrm{MAE} = \max_{i = 1, \ldots, n} \left| x_i - \hat{x}_i \right|   (24)
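The two metrics can be computed directly from the sampled true and estimated state histories. The following helper functions are a minimal sketch matching Equations (23) and (24); the array-based interface is an assumption of this illustration.

```python
import numpy as np

def rmse(x_true, x_est):
    """Root mean squared error over n samples, per state (Equation (23))."""
    return np.sqrt(np.mean((x_true - x_est) ** 2, axis=0))

def mae(x_true, x_est):
    """Maximum absolute error over n samples, per state (Equation (24))."""
    return np.max(np.abs(x_true - x_est), axis=0)
```

Note that MAE here denotes the maximum, not the mean, absolute error.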

5.1. Mercury Thermometer

A thermometer is governed by a simple first-order differential equation, which is derived from the conservation of energy. Heat is transferred to the fluid by convection and stored in it according to:
h A_p \Delta T = m C_p \dfrac{dT}{dt}   (25)
The above relation can be simplified to:
\dfrac{dT}{dt} + \dfrac{h A_p}{m C_p} \Delta T = \dfrac{dT}{dt} + \dfrac{1}{\tau} \Delta T = 0   (26)
where τ is the time constant with a value of 1.7 s [56]. Rearranging Equation (26) yields:
\dfrac{dT}{dt} + \dfrac{1}{\tau} T = \dfrac{1}{\tau} T_i   (27)
Equation (27) can be discretized into the following:
T_{k+1} = \left( 1 - \dfrac{T_s}{\tau} \right) T_k + \dfrac{T_s}{\tau} T_{i,k} + w_k, \quad \mathrm{where}\ T_{i,k} = u_k   (28)
or using the following state space form:
x_{k+1} = \left[ 1 - \dfrac{T_s}{\tau} \right] x_k + \left[ \dfrac{T_s}{\tau} \right] u_k + w_k = A x_k + B u_k + w_k   (29)
z_{k+1} = C x_{k+1} + v_{k+1}   (30)
where u is the temperature of the surroundings and is treated as the system input. It is selected to switch between 25 °C and 50 °C every second. The system and measurement noises (w and v) are normally distributed with zero mean and covariances Q and R defined by Equations (31) and (32), respectively. C is equal to unity, and α is selected as shown in Equation (33).
Q = 0.5775   (31)
R = 0.5813   (32)
\alpha = 0.6182   (33)
The initial values for the filters and their corresponding initial covariance values were set to zero and one, respectively. The boundary layer was set to 0.5 based on trial and error in order to minimize the estimation error. The simulations were repeated after injecting the system with an uncertainty of 20%, i.e., A becomes 0.7995. Figure 2 shows the averaged RMSE and MAE for the system with and without modeling uncertainties, respectively. Figure 3 shows the actual estimates for the two respective cases. Table 1 and Table 2 include the calculated RMSE and MAE of the MCS without and with modeling uncertainties, respectively.
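For reference, a single run of the thermometer experiment can be sketched as follows. The sampling time Ts and the run length are assumptions (they are not listed in the paper), and the ASIF correction of Equation (8) is written inline so that the snippet is self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)
tau = 1.7                                  # time constant [s], from [56]
Ts = 0.01                                  # sampling time [s] (assumed)
A = np.array([[1.0 - Ts / tau]])           # Equation (29)
B = np.array([[Ts / tau]])
C = np.eye(1)
Q, R, alpha = 0.5775, 0.5813, 0.6182       # Equations (31)-(33)

x = np.zeros(1)                            # true state
x_hat = np.zeros(1)                        # ASIF estimate (initialized to zero)
sq_err = []
for k in range(2000):                      # run length (assumed)
    u = np.array([25.0 if (k * Ts) % 2.0 < 1.0 else 50.0])        # input toggles every second
    x = A @ x + B @ u + rng.normal(0.0, np.sqrt(Q), 1)            # system, Equation (29)
    z = C @ x + rng.normal(0.0, np.sqrt(R), 1)                    # measurement, Equation (30)
    x_prior = A @ x_hat + B @ u                                   # prediction
    x_hat = x_prior + alpha * np.linalg.pinv(C) @ (z - C @ x_prior)   # ASIF correction, Equation (8)
    sq_err.append(((x - x_hat) ** 2).item())

print("single-run RMSE:", np.sqrt(np.mean(sq_err)))
```

Repeating such a run 1000 times, and perturbing A to 0.7995 so that the filter model and the simulated plant no longer agree, reproduces the structure of the Monte Carlo study.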
When no modeling uncertainties were present, the RMSE and MAE results for the ASIF illustrate a better performance compared with the KF and SIF strategies. The performance of the ASIF also remained better than that of the KF when uncertainties were present. Compared with the standard SIF, the ASIF yielded better estimates under normal conditions. However, when uncertainties were present, both the SIF and ASIF estimation strategies performed similarly, with the SIF slightly superior.

5.2. Spring-Mass-Damper

The well-known spring-mass-damper system (S-M-D) is mathematically described using a second-order differential equation, according to the following formula:
M \dfrac{d^2 x}{dt^2} + b \dfrac{dx}{dt} + k x = u   (34)
Equation (34) can be represented as discretized state space equations (system and measurement) as follows:
x_{k+1} = \begin{bmatrix} 1 & T_s \\ -k T_s / M & 1 - b T_s / M \end{bmatrix} x_k + \begin{bmatrix} 0 \\ T_s / M \end{bmatrix} u_k + w_k   (35)
z_{k+1} = C x_{k+1} + v_{k+1}   (36)
where u is the force applied to the system, which is selected to switch between 50 and 500 N every second. The system and measurement noises (w and v) are normally distributed with zero mean and covariances Q and R defined by Equations (37) and (38), respectively. The boundary layer and α are chosen as shown in Equations (39) and (40), respectively, and C is the identity matrix. A sketch that builds the discretized model from these parameters is given after Equation (40).
Q = \mathrm{diag}\left( \begin{bmatrix} 2.3 & 2.3 \end{bmatrix} \right) \times 10^{-14}   (37)
R = \mathrm{diag}\left( \begin{bmatrix} 5.8 & 5.8 \end{bmatrix} \right) \times 10^{-3}   (38)
\delta = \begin{bmatrix} 0.1 & 0.01 \end{bmatrix}^{T}   (39)
\alpha = 0.05   (40)
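A minimal sketch of how the discretized model of Equation (35) can be assembled from the Case 1 parameters of Table 3 is given below; the sampling time Ts is an assumed value, since it is not stated explicitly for this example.

```python
import numpy as np

M, k, b = 500.0, 1000.0, 5.0      # mass [kg], spring constant [N/m], damping [N s/m] (Table 3, Case 1)
Ts = 0.001                        # sampling time [s] (assumed)

A = np.array([[1.0,          Ts               ],
              [-k * Ts / M,  1.0 - b * Ts / M ]])   # Equation (35)
B = np.array([[0.0],
              [Ts / M]])
C = np.eye(2)                     # both states are measured
```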
The initial values for the filters and their corresponding initial covariance values were set to zero vectors and identity matrices, respectively. The simulations were repeated after injecting the system with the uncertainties listed in Table 3, which also summarizes the system parameters. Figure 4 shows the RMSE and MAE for the first and second states for the two scenarios: with no modeling uncertainties and with modeling uncertainties present. The true and estimated values of the first and second states, with and without modeling uncertainties, are shown in Figure 5. Table 4 and Table 5 summarize the averaged RMSE and MAE for the cases without and with modeling uncertainties. The ASIF shows performance similar to the KF when no modeling uncertainties are present, which is better than that obtained by the SIF. Once modeling uncertainties were injected, the ASIF performed similarly to the SIF, and both were better than the KF.

5.3. Electrohydrostatic Actuator

The third and final example in this study includes an aerospace flight surface actuator proposed and studied in [6,27,29]. The linear system and measurements are formulated as:
x_{k+1} = \begin{bmatrix} 1 & T_s & 0 \\ 0 & 1 & T_s \\ -557 & -28.6 & 0.94 \end{bmatrix} x_k + \begin{bmatrix} 0 \\ 0 \\ 557 \end{bmatrix} u_k + w_k   (41)
z_{k+1} = C x_{k+1} + v_{k+1}   (42)
where u is the controller input. In this case, it is defined as a multistep input that switches between −0.5 and 0.5 rad/s every second. The system and measurement noise covariances Q and R are defined by Equations (43) and (44), respectively. As per the previous examples, modeling uncertainties are injected by replacing the system model with Equation (45). The boundary layer and α are chosen as shown in Equations (46) and (47), respectively, and C is the identity matrix. A sketch that builds the nominal and perturbed models appears after Equation (47).
Q = \mathrm{diag}\left( \begin{bmatrix} 1.8 \times 10^{-2} & 1.8 \times 10^{-2} & 1.8 \times 10^{-2} \end{bmatrix} \right)   (43)
R = \mathrm{diag}\left( \begin{bmatrix} 5.8 & 5.8 & 5.8 \end{bmatrix} \right) \times 10^{-2}   (44)
x_{k+1} = \begin{bmatrix} 1 & T_s & 0 \\ 0 & 1 & T_s \\ -55.7 & -2.86 & 0.97 \end{bmatrix} x_k + \begin{bmatrix} 0 \\ 0 \\ 55.7 \end{bmatrix} u_k + w_k   (45)
\delta = \begin{bmatrix} 0.1 & 0.5 & 0.5 \end{bmatrix}^{T}   (46)
\alpha = 0.9995   (47)
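The nominal and perturbed EHA models of Equations (41) and (45) can be assembled in the same way; the sketch below uses an assumed sampling time Ts, and the signs in the third rows follow the reconstruction of the equations above.

```python
import numpy as np

Ts = 0.001                                          # sampling time [s] (assumed)

A_nom = np.array([[1.0,    Ts,     0.0 ],
                  [0.0,    1.0,    Ts  ],
                  [-557.0, -28.6,  0.94]])          # Equation (41)
B_nom = np.array([[0.0], [0.0], [557.0]])

A_unc = np.array([[1.0,    Ts,     0.0 ],
                  [0.0,    1.0,    Ts  ],
                  [-55.7,  -2.86,  0.97]])          # Equation (45), used to inject uncertainty
B_unc = np.array([[0.0], [0.0], [55.7]])

C = np.eye(3)                                       # all three states are measured
```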
The initial values for the filters, their corresponding initial covariance values, and the boundary layer’s width were set to be similar to the S-M-D example. Figure 6 shows the RMSE and MAE for the three states without and with modeling uncertainties present. Figure 7 shows the actual and estimated state values for the states in both scenarios. Table 6 and Table 7 summarize the RMSE and MAE for the cases without and with modeling uncertainties.
Under normal operating conditions, the KF provided the best results (the RMSE and MAE were the smallest). In the presence of uncertainties (e.g., a fault) and after tuning the KF and ASIF, they performed similarly. Note that the SIF and ASIF were not as sensitive to uncertainties compared with the KF.
To complete the comparison, the simulation time for the three previous examples was measured for the three filters, namely the KF, SIF, and ASIF. Figure 8 shows that the ASIF has the shortest runtime of the three; it requires less than half the time needed by the other filters in the second and third examples. Combining this with the RMSE and MAE results, the ASIF can be considered to have an overall superior performance.

6. Conclusions

In this paper, a new linear filter referred to as the ASIF is presented. The ASIF takes advantage of the robustness and stability of the sliding innovation filter (SIF) while introducing a design parameter that is a measure of our confidence in the measurement. The addition of this so-called forgetting factor improves the performance of the standard SIF while maintaining robustness to modeling uncertainties. Furthermore, the tuning process for the design parameter is significantly easier than defining the sliding boundary layer widths used by the standard SIF. The ASIF also needs less computational time than the other filters considered, which makes it a strong candidate for fast online systems. Future work will include the development of a time-varying forgetting factor to create an adaptive ASIF and an optimal ASIF.

Author Contributions

Conceptualization, M.A. and S.A.G.; methodology, M.A. and S.A.G.; software, M.A. and S.A.G.; validation, M.A. and S.A.G.; formal analysis, M.A.; investigation, M.A.; resources, M.A. and S.A.G.; writing—original draft preparation, M.A.; writing—review and editing, M.A. and S.A.G.; visualization, M.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare that there is no conflict of interest in this work.

Nomenclature

Symbol | Comments
(·)^{-1}, (·)^{+}, (·)^{T} | Notation denoting an inverse, a pseudo-inverse, and a transpose, respectively.
|A| | Absolute value of A.
A, B | The system and input matrices, respectively.
C | The measurement matrix, which is the identity matrix in all examples.
w_k, v_k | The system and measurement noise vectors at time k, respectively.
x_k, z_k, u_k | The state, measurement, and input vectors, respectively.
\hat{A} | The estimated value of A.
K_k | The correction gain at time k.
δ | The boundary layer width.
A_{k|k-1}, A_{k|k} | The a priori and the a posteriori estimates of A at time k, respectively.
α | The fading memory (forgetting factor) coefficient.
Q, R | The system and measurement noise covariance matrices, respectively.
η | The uncertainty vector.
\bar{A}, \tilde{A} | The error in A.
E(a) | The expectation operator applied to the element a.
h, A_p, C_p, m | The thermometer parameters: convective heat transfer coefficient, surface area, specific heat, and mass, respectively.
T, τ, T_s | The temperature, time constant, and sampling time, respectively.
M, b, k | The spring-mass-damper parameters: mass, damping coefficient, and spring constant, respectively.
I_{n×n} | The identity matrix with dimensions n × n.

References

1. Al-Shabi, M. The General Toeplitz/Observability Smooth Variable Structure Filter: Fault Detection and Parameter Estimation; LAP LAMBERT Academic Publishing: Atlanta, GA, USA, 2012.
2. Afshari, H.; Gadsden, S.; Habibi, S. Gaussian filters for parameter and state estimation: A general review of theory and recent trends. Signal Process. 2017, 135, 218–238.
3. Simon, D. Optimal State Estimation: Kalman, H-Infinity, and Nonlinear Approaches; Wiley-Interscience: Hoboken, NJ, USA, 2006.
4. Julier, S.J.; Uhlmann, J.K. Unscented Filtering and Nonlinear Estimation. Proc. IEEE 2004, 92, 401–422.
5. Arasaratnam, I.; Haykin, S. Cubature Kalman Filters. IEEE Trans. Autom. Control 2009, 54, 1254–1269.
6. Gadsden, S.A. Smooth Variable Structure Filtering: Theory and Applications. Ph.D. Thesis, McMaster University, Hamilton, ON, USA, 2011.
7. Potter, J.E.; Stern, R.G. Statistical Filtering of Space Navigation Measurements. In Proceedings of the 1963 AIAA Guidance and Control Conference, New York, NY, USA, September 1963.
8. Carlson, N.A. Fast triangular formulation of the square root filter. AIAA J. 1973, 11, 1259–1265.
9. Bierman, G.J. Factorization Methods for Discrete Sequential Estimation; Academic Press, Inc.: New York, NY, USA, 1977.
10. Van Der Merwe, R.; Wan, E.A. The square-root unscented Kalman filter for state and parameter-estimation. In Proceedings of the 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing, Salt Lake City, UT, USA, 7–11 May 2001.
11. Press, W.H.; Teukolsky, S.A.; Vetterling, W.T.; Flannery, B.P. Numerical Recipes: The Art of Scientific Computing; Cambridge University Press: New York, NY, USA, 2007.
12. Bar-Shalom, Y.; Li, X.-R.; Kirubarajan, T. Estimation with Applications to Tracking and Navigation; John Wiley and Sons, Inc.: New York, NY, USA, 2001.
13. Al-Shabi, M. The General Toeplitz/Observability SVSF; McMaster University: Hamilton, ON, USA, 2011.
14. Jazwinski, A. Adaptive Filtering. Automatica 1969, 5, 475–485.
15. Jazwinski, A. Stochastic Processes and Filtering Theory; Academic Press: New York, NY, USA, 1970.
16. Bernstein, D.S.; Haddad, W.M. Steady-state Kalman filtering with an H∞ error bound. Syst. Contr. Lett. 1989, 12, 9–16.
17. Nagpal, K.M.; Khargonekar, P.P. Filtering and smoothing in an H∞ setting. IEEE Trans. Automat. Contr. 1991, 36, 152–166.
18. Spurgeon, S.K. Sliding Mode Observers: A Survey. Int. J. Syst. Sci. 2008, 751–764.
19. Utkin, V.I. Variable Structure Systems with Sliding Mode: A Survey. IEEE Trans. Autom. Control 1977, 22, 212–222.
20. Utkin, V.I. Sliding Modes and Their Application in Variable Structure Systems, English Translation ed.; Mir Publication: Moscow, Russia, 1978.
21. Filippov, A.F. Differential Equation with Discontinuous Right Hand Side; American Mathematical Society Translations: Providence, RI, USA, 1969; Volume 62.
22. Zhou, D.; Su, Y.; Xi, Y.; Zhang, Z. Extension of Friedland’s separate-bias estimation to randomly time-varying bias of nonlinear systems. IEEE Trans. Autom. Control 1993, 38, 1270–1273.
23. Yin, Z.; Li, G.; Zhang, Y.; Liu, J. Symmetric-Strong-Tracking-Extended-Kalman-Filter-Based Sensorless Control of Induction Motor Drives for Modeling Error Reduction. IEEE Trans. Ind. Inform. 2019, 15, 650–662.
24. Utkin, V.I. Sliding Modes in Control and Optimization; Springer Science & Business Media: Berlin/Heidelberg, Germany, 1992.
25. Qaiser, S.H.; Bhatti, A.; Iqbal, M.; Samar, R.; Qadir, J. Estimation of precursor concentration in a research reactor by using second order sliding mode observer. Nucl. Eng. Des. 2009, 239, 2134–2140.
26. Slotine, J.J.E.; Li, W. Applied Nonlinear Control; Prentice-Hall: Englewood Cliffs, NJ, USA, 1991.
27. Habibi, S. The Smooth Variable Structure Filter. Proc. IEEE 2007, 95, 1026–1059.
28. Gadsden, S.A.; Habibi, S.R. Derivation of an Optimal Boundary Layer Width for the Smooth Variable Structure Filter. In Proceedings of the ASME/IEEE American Control Conference (ACC), San Francisco, CA, USA, 29 June–1 July 2011.
29. Gadsden, S.A.; Al-Shabi, M. The Sliding Innovation Filter. IEEE Access 2020, 8, 96129–96138.
30. Hilal, W.; Gadsden, S.A.; Wilkerson, S.A.; Al-Shabi, M. Square-Root Formulation of the Sliding Innovation Filter for Target Tracking. Proc. SPIE Int. Soc. Opt. Eng. 2022, 12122, 1212203.
31. AlShabi, M.; Gadsden, A. Application of the sliding innovation filter to complex road. Proc. SPIE Int. Soc. Opt. Eng. 2022, 12122, 121220A.
32. AlShabi, M.; Gadsden, A. The Luenberger sliding innovation filter for linear systems. Proc. SPIE Int. Soc. Opt. Eng. 2022, 12122, 121220B.
33. Al Shabi, M.; Gadsden, M.; El Haj Assad, B.; Khuwaileh, S.W. Application of the sliding innovation filter to unmanned aerial systems. Proc. SPIE Int. Soc. Opt. Eng. 2021, 11758, 117580T.
34. Al-Shabi, M.; Gadsden, S.A.; Assad, M.E.H.; Khuwaileh, B. Application of the sliding innovation filter for fault detection and diagnosis of an electromechanical system. Proc. SPIE Int. Soc. Opt. Eng. 2021, 11756, 1175607.
35. Al-Shabi, M.; Bani-Yonis, M.; Hatamleh, K.S. The sigma-point central difference smooth variable structure filter application into a robotic arm. In Proceedings of the 2015 IEEE 12th International Multi-Conference on Systems, Signals & Devices (SSD15), Mahdia, Tunisia, 16–19 March 2015.
36. AlShabi, M.; Gadsden, S.; Habibi, S. Kalman filtering strategies utilizing the chattering effects of the smooth variable structure filter. Signal Process. 2013, 93, 420–431.
37. Al-Shabi, M.; Hatamleh, K.S. The Unscented Smooth Variable Structure Filter Application into a Robotic Arm. In Proceedings of the ASME International Mechanical Engineering Congress and Exposition, Houston, TX, USA, 13–19 November 2015.
38. Gadsden, S.A.; Habibi, S.; Kirubarajan, T. Kalman and smooth variable structure filters for robust estimation. IEEE Trans. Aerosp. Electron. Syst. 2014, 50, 1038–1050.
39. Gadsden, S.A.; Habibi, S.R. A New Robust Filtering Strategy for Linear Systems. ASME J. Dyn. Syst. Meas. Control 2012, 135, 014503.
40. Einicke, G.A.; White, L.B. Robust Extended Kalman Filtering. IEEE Trans. Signal Process. 1999, 47, 2596–2599.
41. Li, Y.; Ding, L.; Jing, Z. Optimal-switched extended H∞ filter for nonlinear systems with stochastic uncertainties. Int. J. Robust Nonlinear Control 2020, 30, 2850–2870.
42. Ren, Z.-L.; Wang, L.-G.; Bi, L. Improved Extended Kalman Filter Based on Fuzzy Adaptation for SLAM in Underground Tunnels. Int. J. Precis. Eng. Manuf. 2019, 20, 2119–2127.
43. Pires, D.S.; Serra, G.L.D.O. Methodology for Evolving Fuzzy Kalman Filter Identification. Int. J. Control Autom. Syst. 2019, 17, 793–800.
44. Kim, D.; Lee, D. Fault Parameter Estimation Using Adaptive Fuzzy Fading Kalman Filter. Appl. Sci. 2019, 9, 3329.
45. Hendel, R.; Khaber, F.; Essounbouli, N. Adaptive high order sliding mode controller/observer based terminal sliding mode for MIMO uncertain nonlinear system. Int. J. Control 2019, 94, 486–506.
46. Ye, S. Fuzzy sliding mode observer with dual SOGI-FLL in sensorless control of PMSM drives. ISA Trans. 2019, 85, 161–176.
47. Gadsden, S.A.; Al-Shabi, M.A.; Habibi, S.R. A fuzzy-smooth variable structure filtering strategy: For state and parameter estimation. In Proceedings of the 2013 IEEE Jordan Conference on Applied Electrical Engineering and Computing Technologies (AEECT), Amman, Jordan, 3–5 December 2013.
48. AlShabi, M.; Gadsden, A. A multiple model-based sliding innovation filter and its application on aerospace actuator. Proc. SPIE Int. Soc. Opt. Eng. 2022, 12122, 1212218.
49. Al Shabi, M.; Gadsden, A.; El Haj Assad, M.; Khuwaileh, B. A multiple model-based sliding innovation filter. Proc. SPIE Int. Soc. Opt. Eng. 2021, 11756, 1175608.
50. Hilal, W.; Gadsden, A.; Wilkerson, S.A.; Al-Shabi, M. Combined Particle and Smooth Innovation Filtering for Nonlinear Estimation. Proc. SPIE Int. Soc. Opt. Eng. 2022, 12122, 1212204.
51. Lee, A.S.; Gadsden, S.A.; Al-Shabi, M. An Adaptive Formulation of the Sliding Innovation Filter. IEEE Signal Process. Lett. 2021, 28, 1295–1299.
52. Al Shabi, M.; Gadsden, S.A.; El Haj Assad, M.; Khuwaileh, B. The two-pass sliding innovation smoother. Proc. SPIE Int. Soc. Opt. Eng. 2021, 11756, 1175609.
53. Goodman, J.; Kim, J.; Lee, A.S.; Gadsden, S.A.; Al-Shabi, M. A Variable Structure-Based Estimation Strategy Applied to an RRR Robot System. J. Robot. Netw. Artif. Life 2017, 4, 142–145.
54. Al-Shabi, M. Sigma-point Smooth Variable Structure Filters applications into robotic arm. In Proceedings of the 7th International Conference on Modeling, Simulation, and Applied Optimization (ICMSAO), Sharjah, United Arab Emirates, 4–6 April 2017.
55. Al-Shabi, M.; Habibi, S. New novel time-varying and robust smoothing boundary layer width for the smooth variable structure filter. In Proceedings of the 9th International Symposium on Mechatronics and Its Applications (ISMA), Amman, Jordan, 9–11 April 2013.
56. Thomsen, V. Response time of a thermometer. Phys. Teach. 1998, 36, 540–541.
Figure 1. SIF estimation concept showing the sliding mode [29].
Figure 2. RMSE and MAE for thermometer with (a) no modeling and (b) modeling uncertainties.
Figure 3. An example of the outputs from the different filters applied to the thermometer system with (a) no modeling and (b) modeling uncertainties.
Figure 4. RMSE and MAE for the S-M-D system’s (a) first and (b) second states with no modeling uncertainties, and (c) first and (d) second states with modeling uncertainties.
Figure 5. An example of the (a) position and (b) velocity estimates from the different filters applied to the S-M-D system with no modeling uncertainties, and (c) position and (d) velocity estimates from the different filters applied to the S-M-D system with modeling uncertainties.
Figure 6. RMSE and MAE for the EHA system’s (a) first, (b) second, and (c) third states with no modeling uncertainties, and the system’s (d) first, (e) second, and (f) third states with modeling uncertainties.
Figure 7. An example of the (a) position, (b) velocity, and (c) acceleration estimates from the different filters applied to the aerospace actuator system with no modeling uncertainties, and the (d) position, (e) velocity, and (f) acceleration estimates from the different filters applied to the aerospace actuator system with modeling uncertainties.
Figure 8. The simulation time for (a) thermometer, (b) spring-mass-damper, and (c) EHA examples for the different filters.
Table 1. RMSE for MCS of the thermometer example.
Case | State | KF Mean | KF σ | SIF Mean | SIF σ | ASIF Mean | ASIF σ
1 x 1 9.5 × 10 1 2.2 × 10 2 5.8 × 10 1 3.7 × 10 3 4.5 × 10 1 4.6 × 10 3
2 x 1 2.9 × 10 1 7.1 × 10 0 1.1 × 10 0 2.5 × 10 2 6.6 × 10 0 1.5 × 10 0
Table 2. MAE for MCS of the thermometer example.
Case | State | KF Mean | KF σ | SIF Mean | SIF σ | ASIF Mean | ASIF σ
1 x 1 3.3 × 10 0 2.2 × 10 1 1.1 × 10 0 6.8 × 10 3 1.3 × 10 0 4.4 × 10 2
2 x 1 9.998 × 10 1 4.99 × 10 1 1.4 × 10 0 3.6 × 10 3 1.0 × 10 0 1.0 × 10 0
Table 3. Parameters used in the S-M-D example.
Parameter | Case 1 | Case 2
M   ( k g ) 500 500
k   ( k N / m ) 1 0.5
b   ( N s / m ) 5 0
Table 4. RMSE for MCS of the S-M-D example.
Case | State | KF Mean | KF σ | SIF Mean | SIF σ | ASIF Mean | ASIF σ
1 x 1 2.0 × 10 4 6.1 × 10 5 1.0 × 10 3 3.2 × 10 5 3.0 × 10 4 3.8 × 10 5
x 2 3.0 × 10 4 6.4 × 10 5 5.4 × 10 3 4.5 × 10 5 3.0 × 10 4 4.0 × 10 5
2 x 1 7.9 × 10 1 4.0 × 10 3 1.0 × 10 3 0.0 × 10 0 9.0 × 10 4 0.0 × 10 0
x 2 8.9 × 10 1 1.3 × 10 3 5.4 × 10 3 0.0 × 10 0 1.2 × 10 2 1.0 × 10 4
Table 5. MAE for MCS of the S-M-D example.
Case | State | KF Mean | KF σ | SIF Mean | SIF σ | ASIF Mean | ASIF σ
1 x 1 5.8 × 10 3 2.3 × 10 3 3.5 × 10 3 3.0 × 10 4 8.0 × 10 4 1.0 × 10 4
x 2 5.8 × 10 3 2.2 × 10 3 1.0 × 10 2 0.0 × 10 0 9.0 × 10 4 1.0 × 10 4
2 x 1 1.7 × 10 0 8.1 × 10 3 3.5 × 10 3 3.0 × 10 4 3.3 × 10 3 3.0 × 10 4
x 2 1.4 × 10 0 2.0 × 10 3 1.1 × 10 2 3.0 × 10 4 2.5 × 10 2 5.0 × 10 4
Table 6. RMSE for MCS of the aerospace actuator example.
Case | State | KF Mean | KF σ | SIF Mean | SIF σ | ASIF Mean | ASIF σ
1 x 1 2.0 × 10 2 1.7 × 10 4 5.4 × 10 2 4.4 × 10 4 5.8 × 10 2 3.6 × 10 4
x 2 3.3 × 10 2 4.2 × 10 4 2.9 × 10 2 6.3 × 10 4 5.8 × 10 2 3.6 × 10 4
x 3 5.8 × 10 2 3.9 × 10 4 5.8 × 10 2 4.3 × 10 4 5.99 × 10 2 4.3 × 10 4
2 x 1 8.7 × 10 1 6.7 × 10 2 5.4 × 10 2 5.0 × 10 4 5.8 × 10 2 4.0 × 10 4
x 2 5.5 × 10 2 6.5 × 10 3 2.9 × 10 2 6.0 × 10 4 5.8 × 10 2 4.0 × 10 4
x 3 7.1 × 10 2 3.7 × 10 3 5.8 × 10 2 4.0 × 10 4 6.6 × 10 2 1.3 × 10 3
Table 7. MAE for MCS of the aerospace actuator example.
Case | State | KF Mean | KF σ | SIF Mean | SIF σ | ASIF Mean | ASIF σ
1 x 1 4.9 × 10 2 7.0 × 10 4 1.2 × 10 1 2.5 × 10 3 9.99 × 10 2 0.0 × 10 0
x 2 1.0 × 10 1 4.5 × 10 3 1.1 × 10 1 9.2 × 10 3 9.99 × 10 2 0.0 × 10 0
x 3 1.0 × 10 1 4.0 × 10 4 2.1 × 10 0 1.1 × 10 2 1.3 × 10 1 0.0 × 10 0
2 x 1 2.95 × 10 0 3.4 × 10 1 1.1 × 10 1 2.7 × 10 3 9.99 × 10 2 0.0 × 10 0
x 2 1.98 × 10 1 2.8 × 10 2 1.1 × 10 1 9.1 × 10 3 9.99 × 10 2 0.0 × 10 0
x 3 2.2 × 10 1 2.4 × 10 2 2.1 × 10 1 1.3 × 10 2 1.9 × 10 1 1.3 × 10 2
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

