Article

Recursive Optimal Finite Impulse Response Filter and Its Application to Adaptive Estimation

1 Division of Electrical, Control and Instrumentation Engineering, Kangwon National University, Samcheok-si 25913, Gangwon-do, Korea
2 School of Mechanical System Engineering, Kangwon National University, Samcheok-si 25913, Gangwon-do, Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(5), 2757; https://doi.org/10.3390/app12052757
Submission received: 3 February 2022 / Revised: 20 February 2022 / Accepted: 3 March 2022 / Published: 7 March 2022
(This article belongs to the Section Robotics and Automation)

Abstract

In this paper, the recursive form of an optimal finite impulse response (FIR) filter is proposed for discrete time-varying state-space models. The recursive form of the FIR filter is derived by employing finite horizon Kalman filtering with optimally estimated initial conditions. The horizon initial state and its error covariance are optimally estimated from recent finite measurements, in the sense of maximum likelihood estimation, and then initiate the finite horizon Kalman filter. The optimality and unbiasedness of the proposed filter are proved by comparison with the conventional optimal FIR filter in batch form. Moreover, as an application of the proposed recursive optimal FIR filter, an adaptive FIR filter is also proposed by applying an adaptive noise statistics estimation scheme to it. To evaluate the performance of the proposed algorithms, a computer simulation is performed in which they are compared with the conventional Kalman filter and adaptive Kalman filters for a gas turbine aircraft engine model.

1. Introduction

The Kalman filter has been used as a standard tool to deal with state estimation for linear state-space models. However, since the Kalman filter has an infinite impulse response (IIR) structure, which makes use of all measurements from the initial time to the current time, model uncertainties, which come from limited knowledge of the system model and the noise statistics, and computation errors may accumulate in the estimated state. These can give rise to the divergence problem in the Kalman filter [1,2,3]. In order to prevent divergence problems, finite impulse response (FIR) filters have been used as an alternative to the Kalman filter [4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]. Since FIR filters estimate the state by using finite measurements on the most recent time interval, these filters are known to be more robust against the modeling uncertainties and numerical errors that cause the divergence problem in the Kalman filter. Moreover, due to their FIR structure, FIR filters have good properties such as built-in bounded input/bounded output (BIBO) stability and fast tracking speed.
However, despite the aforementioned advantages of FIR filters, their complicated derivation and batch form might lead to computational inefficiency and limitations in further developments. Since the Kalman filter is well known and has a recursive form, which can provide efficient computation methods for the FIR filter, recursive forms of FIR filters were introduced by modifying the Kalman filter [7,8,9,10,11,12,13,14,15,16]. In [7,8], the receding horizon Kalman (RHK) filters, whose concept is introduced in Figure 1a, and their fast iteration methods were proposed for time-invariant systems. The filter equations of RHK filters are easy to understand, and many useful Kalman filtering methods can be directly applied to FIR filtering problems for improving the performance of FIR filters, because RHK filters are derived by combining the Kalman filter algorithm and the receding horizon strategy. Since RHK filters have exactly the same structure as the Kalman filter on the finite estimation horizon, the RHK filtering problem can be thought of as a recursive finite horizon Kalman filtering problem with special initial conditions. Thus, the initial state and its error covariance are very important factors for the estimation performance of RHK filters. However, RHK filters were derived with heuristic assumptions on the initial conditions, such as an infinite initial error covariance. In the derivation of the RHK filter, the inverse of the state transition matrix is used in the iterative calculation of the estimated state and error covariance matrix. For an infinite covariance, the inverse matrix becomes singular and the estimation problem may not be feasible. Furthermore, the optimality of RHK filters is not clear in their derivations, and they cannot be applied to time-varying systems. On the other hand, Kalman-like unbiased FIR (KUFIR) filters, whose concept is introduced in Figure 1b, have also been proposed for recursive FIR filtering [9,10,11,12,13,14,15,16]. KUFIR filtering is a recursive Kalman-like algorithm that ignores the noise statistics and initial conditions. These are ignored by determining the optimal horizon length that minimizes the mean-square estimation error; the recursive prediction and correction procedures are then repeated without using noise statistics. Since the optimal horizon length is the only design parameter of the KUFIR filter, its determination is a major problem, and hence several algorithms have been developed to find the optimal horizon length. In [9], the optimal horizon length was derived for the l-degree polynomial model by minimizing the mean-square estimation error. In [11], it was measured using the correlation method, and in [12], it was determined by using a bank of KUFIR filters operating in parallel. For fast computation of the optimal horizon length, an adaptive KUFIR filter was also suggested for time-invariant systems in [15]. However, even though the horizon initial state can be ignored, the state at time $k-N_{opt}+K$ in Figure 1b, considered as an actual initial state, is required at each estimation horizon, and it is obtained by calculating the batch form of the filter equation. In addition to the aforementioned computational inefficiency, these approaches share the disadvantage that a heavy computational load is also required to specify the optimal horizon length, because it must be obtained for each horizon in time-varying systems. Moreover, the optimality of KUFIR filters is not guaranteed and the horizon length cannot be adjusted.
Therefore, in this paper, a new recursive optimal FIR (ROFIR) filter is proposed for linear time-varying systems in order to overcome the disadvantages of the previous methods for recursive FIR filtering. The ROFIR filter is derived by employing the finite horizon Kalman filter and an optimal and unbiased initial state estimation. The initial state and its corresponding error covariance on the estimation horizon are obtained by solving a maximum likelihood estimation problem, and then they initiate the finite horizon Kalman filter. Since the initial state is estimated from the measurements on each finite estimation horizon, the ROFIR filter does not require any a priori initial information. In addition, the proposed ROFIR filter is derived without the assumption of a nonsingular state transition matrix and has a smaller computational burden than the KUFIR filters for time-varying systems. Furthermore, the ROFIR filter provides the best linear unbiased estimate (BLUE) of the state on the finite estimation horizon. In addition, since adaptive FIR (AFIR) filters in previous studies were designed in batch form, they were mostly focused on how to adjust the horizon length [17,18,19]. To the authors' best knowledge, there are no results on AFIR filters that consider the noise statistics; thus, we propose a new adaptive FIR filtering algorithm by employing a sequential noise statistics estimation technique as an application of the proposed ROFIR filter.
This paper is organized as follows: In Section 2, the ROFIR filter is proposed for linear time-varying state-space models and its optimality and unbiasedness are proved. Moreover, the AFIR filter is also proposed by applying the modified sequential noise statistics estimation method to the proposed ROFIR filter. In Section 3, the performance and effectiveness of the proposed ROFIR and AFIR filters are shown and discussed via computer simulations. Finally, our conclusions are presented in Section 4.

2. Recursive Optimal FIR Filter and Adaptive FIR Filter

2.1. Recursive Optimal FIR Filter with Optimally Estimated Initial Conditions

Consider the following discrete time-varying state-space model:
$$x_{k+1} = A_k x_k + w_k,$$
$$y_k = C_k x_k + v_k,$$
where $x_k$ is the state vector, $y_k$ is the measurement, and $w_k$ and $v_k$ are the process noise and measurement noise, respectively. We assume that $w_k$ and $v_k$ are zero-mean white Gaussian and mutually uncorrelated. These noises are uncorrelated with the initial state $x_{k_0}$, and $Q_k$ and $R_k$ denote the covariance matrices of $w_k$ and $v_k$, respectively. The pair $(A_k, C_k)$ of the system (1) and (2) is assumed to be observable so that all modes are observed at the output and stabilized observers can be constructed.
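As a point of reference for the filters developed below, the state-space model (1) and (2) can be simulated directly. The following is a minimal sketch, assuming the time-varying matrices are supplied as Python lists ordered in time and that the noises are drawn as Gaussian with the stated covariances; the function name and argument layout are illustrative only.

```python
import numpy as np

def simulate_model(A_seq, C_seq, Q_seq, R_seq, x0, rng=None):
    """Simulate x_{k+1} = A_k x_k + w_k and y_k = C_k x_k + v_k."""
    if rng is None:
        rng = np.random.default_rng(0)
    xs, ys = [np.asarray(x0, dtype=float)], []
    for A, C, Q, R in zip(A_seq, C_seq, Q_seq, R_seq):
        x = xs[-1]
        v = rng.multivariate_normal(np.zeros(R.shape[0]), R)   # measurement noise
        w = rng.multivariate_normal(np.zeros(Q.shape[0]), Q)   # process noise
        ys.append(C @ x + v)
        xs.append(A @ x + w)
    return np.array(xs), np.array(ys)
```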
On the horizon $[k-N, k]$, the finite number of measurements is expressed in a batch form as follows:
$$Y_{N,k-1} = \tilde{C}_{k-1} x_{k-N} + \tilde{G}_{k-1} W_{k-1} + V_{k-1}.$$
$Y_{N,k-1}$ is the finite number of measurements defined as:
$$Y_{i,k-j} = \begin{bmatrix} y_{k-i}^T & y_{k-i+1}^T & \cdots & y_{k-j}^T \end{bmatrix}^T \quad (i \geq j).$$
The finite measurement noise vector $V_{k-1}$ and the finite process noise vector $W_{k-1}$ are defined by replacing $y_{(\cdot)}$ in (4) with $v_{(\cdot)}$ and $w_{(\cdot)}$, respectively, and $\tilde{C}_{k-1}$ and $\tilde{G}_{k-1}$ are defined as:
$$\tilde{C}_{k-i} = \begin{bmatrix} C_{k-N} \\ C_{k-N+1}\Phi_{N,N} \\ \vdots \\ C_{k-i}\Phi_{i+1,N} \end{bmatrix},$$
$$\tilde{G}_{k-i} = \begin{bmatrix}
0 & 0 & 0 & \cdots & 0 & 0 \\
C_{k-N+1} & 0 & 0 & \cdots & 0 & 0 \\
C_{k-N+2}\Phi_{N-1,N-1} & C_{k-N+2} & 0 & \cdots & 0 & 0 \\
C_{k-N+3}\Phi_{N-2,N-1} & C_{k-N+3}\Phi_{N-2,N-2} & C_{k-N+3} & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
C_{k-i}\Phi_{i+1,N-1} & C_{k-i}\Phi_{i+1,N-2} & C_{k-i}\Phi_{i+1,N-3} & \cdots & C_{k-i} & 0
\end{bmatrix},$$
where $\Phi_{i,j} = A_{k-i} \cdots A_{k-j+1} A_{k-j}$ $(i \leq j)$.
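For reference, the batch quantities in (3)–(6), together with the stacked noise covariance used later, can be assembled numerically. Below is a minimal sketch, assuming the horizon matrices are passed as lists whose index $j$ corresponds to time $k-N+j$; the function name and this ordering are illustrative, not part of the paper.

```python
import numpy as np

def batch_matrices(A_list, C_list, Q_list, R_list):
    """Build C_tilde, G_tilde and Pi = G_tilde Q_tilde G_tilde^T + R_tilde for one horizon."""
    N = len(C_list)
    n, m = A_list[0].shape[0], C_list[0].shape[0]

    def phi(j, l):
        # state transition from time k-N+l to time k-N+j (identity if j == l)
        T = np.eye(n)
        for t in range(l, j):
            T = A_list[t] @ T
        return T

    C_til = np.vstack([C_list[j] @ phi(j, 0) for j in range(N)])
    G_til = np.zeros((N * m, N * n))
    for j in range(N):
        for l in range(j):   # coefficient of w_{k-N+l} in y_{k-N+j}
            G_til[j*m:(j+1)*m, l*n:(l+1)*n] = C_list[j] @ phi(j, l + 1)
    Q_til = np.zeros((N * n, N * n))
    R_til = np.zeros((N * m, N * m))
    for j in range(N):
        Q_til[j*n:(j+1)*n, j*n:(j+1)*n] = Q_list[j]
        R_til[j*m:(j+1)*m, j*m:(j+1)*m] = R_list[j]
    Pi = G_til @ Q_til @ G_til.T + R_til
    return C_til, G_til, Pi
```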
The estimate of the horizon initial state $x_{k-N}$ at time $k$ can be represented as a linear function of the finite measurements on the recent horizon $[k-N, k]$ as
$$\hat{x}_{k-N|k} = \sum_{i=k-N}^{k-1} h_i y_i = H_k Y_{N,k-1},$$
where $H_k = \begin{bmatrix} h_{k-N} & h_{k-N+1} & \cdots & h_{k-1} \end{bmatrix}$ is the gain matrix of the initial state estimator.
The optimally estimated initial state $\hat{x}_{k-N|k}$ can be obtained by the following maximum likelihood criterion:
$$\max_{\hat{x}_{k-N|k}} p\left(x_{k-N} \mid Y_{N,k-1}\right).$$
$p(x_{k-N} \mid Y_{N,k-1})$ in (8) is the conditional probability density function of the initial state $x_{k-N}$ given $Y_{N,k-1}$ as follows:
$$p\left(x_{k-N} \mid Y_{N,k-1}\right) = \frac{1}{\sqrt{(2\pi)^N \left|\Pi_{k-1}\right|}} e^{-\frac{1}{2} S_{k-1}^T \Pi_{k-1}^{-1} S_{k-1}},$$
where
$$S_{k-1} = Y_{N,k-1} - \tilde{C}_{k-1} x_{k-N},$$
$$\Pi_{k-1} = \tilde{G}_{k-1} \tilde{Q}_{N,1} \tilde{G}_{k-1}^T + \tilde{R}_{N,1},$$
$$\tilde{Q}_{i,j} = \mathrm{diag}\left(Q_{k-i}, Q_{k-i+1}, \ldots, Q_{k-j}\right) \quad (i \geq j),$$
$$\tilde{R}_{i,j} = \mathrm{diag}\left(R_{k-i}, R_{k-i+1}, \ldots, R_{k-j}\right) \quad (i \geq j).$$
To maximize $p(x_{k-N} \mid Y_{N,k-1})$, we can equivalently maximize $\ln p(x_{k-N} \mid Y_{N,k-1})$, or minimize the following cost function
$$J_k = \frac{1}{2} S_{k-1}^T \Pi_{k-1}^{-1} S_{k-1}.$$
By taking the derivative of $J_k$ with respect to $x_{k-N}$ as
$$\frac{\partial J_k}{\partial x_{k-N}} = -\tilde{C}_{k-1}^T \Pi_{k-1}^{-1} S_{k-1} = 0,$$
the optimal estimate of the horizon initial state $\hat{x}_{k-N|k}$ can be obtained as
$$\hat{x}_{k-N|k} = H_k Y_{N,k-1} = \left(\tilde{C}_{k-1}^T \Pi_{k-1}^{-1} \tilde{C}_{k-1}\right)^{-1} \tilde{C}_{k-1}^T \Pi_{k-1}^{-1} Y_{N,k-1}.$$
From (3) and (16), the estimation error $e_{k-N}$ can be represented as
$$\begin{aligned} e_{k-N} &= x_{k-N} - \hat{x}_{k-N|k} \\ &= \left(I - \left(\tilde{C}_{k-1}^T \Pi_{k-1}^{-1} \tilde{C}_{k-1}\right)^{-1} \tilde{C}_{k-1}^T \Pi_{k-1}^{-1} \tilde{C}_{k-1}\right) x_{k-N} - \left(\tilde{C}_{k-1}^T \Pi_{k-1}^{-1} \tilde{C}_{k-1}\right)^{-1} \tilde{C}_{k-1}^T \Pi_{k-1}^{-1} \left(\tilde{G}_{k-1} W_{k-1} + V_{k-1}\right) \\ &= -\left(\tilde{C}_{k-1}^T \Pi_{k-1}^{-1} \tilde{C}_{k-1}\right)^{-1} \tilde{C}_{k-1}^T \Pi_{k-1}^{-1} \left(\tilde{G}_{k-1} W_{k-1} + V_{k-1}\right). \end{aligned}$$
By taking the expectation of the estimation error $e_{k-N}$, we have
$$E\left[e_{k-N}\right] = -\left(\tilde{C}_{k-1}^T \Pi_{k-1}^{-1} \tilde{C}_{k-1}\right)^{-1} \tilde{C}_{k-1}^T \Pi_{k-1}^{-1} \left(\tilde{G}_{k-1} E\left[W_{k-1}\right] + E\left[V_{k-1}\right]\right) = 0,$$
which shows that the maximum likelihood estimate of the initial state on the horizon is unbiased. Furthermore, the initial error covariance $P_{k-N}$ can be obtained with the aid of (11) as
$$P_{k-N} = E\left\{e_{k-N} e_{k-N}^T\right\} = \left(\tilde{C}_{k-1}^T \Pi_{k-1}^{-1} \tilde{C}_{k-1}\right)^{-1} \tilde{C}_{k-1}^T \Pi_{k-1}^{-1} \Pi_{k-1} \left[\left(\tilde{C}_{k-1}^T \Pi_{k-1}^{-1} \tilde{C}_{k-1}\right)^{-1} \tilde{C}_{k-1}^T \Pi_{k-1}^{-1}\right]^T = \left(\tilde{C}_{k-1}^T \Pi_{k-1}^{-1} \tilde{C}_{k-1}\right)^{-1}.$$
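Numerically, the batch maximum likelihood initial conditions (16) and (19) amount to a weighted least-squares solve. A minimal sketch follows, assuming the batch quantities are available (for example from the `batch_matrices` sketch above); the function name is illustrative.

```python
import numpy as np

def ml_initial_conditions(C_til, Pi, Y):
    """Batch ML estimate of the horizon initial state and its covariance, Eqs. (16), (19)."""
    Pi_inv = np.linalg.inv(Pi)
    P_kN = np.linalg.inv(C_til.T @ Pi_inv @ C_til)   # Eq. (19)
    x_kN = P_kN @ (C_til.T @ Pi_inv @ Y)             # Eq. (16)
    return x_kN, P_kN
```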
In order to obtain a recursive form of the optimally estimated initial conditions, define the following matrices:
$$\hat{C}_{i,k-1} = \begin{bmatrix} C_{k-i+1}\Phi_{i,i} \\ C_{k-i+2}\Phi_{i-1,i} \\ \vdots \\ C_{k-2}\Phi_{3,i} \\ C_{k-1}\Phi_{2,i} \end{bmatrix} = \begin{bmatrix} C_{k-i+1} \\ \hat{C}_{i-1,k-1} \end{bmatrix} A_{k-i},$$
$$\hat{G}_{i,k-1} = \begin{bmatrix}
C_{k-i+1} & 0 & \cdots & 0 \\
C_{k-i+2}\Phi_{i-1,i-1} & C_{k-i+2} & \cdots & 0 \\
C_{k-i+3}\Phi_{i-2,i-1} & C_{k-i+3}\Phi_{i-2,i-2} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
C_{k-1}\Phi_{2,i-1} & C_{k-1}\Phi_{2,i-2} & \cdots & C_{k-1}
\end{bmatrix},$$
$$\hat{\Pi}_{i,k-1} = \hat{G}_{i,k-1} \tilde{Q}_{i-1,1} \hat{G}_{i,k-1}^T + \tilde{R}_{i-1,1},$$
$$\hat{w}_{i,k-1} = \hat{C}_{i,k-1}^T \hat{\Pi}_{i,k-1}^{-1} Y_{i-1,k-1},$$
$$\hat{P}_{i,k-1} = \hat{C}_{i,k-1}^T \hat{\Pi}_{i,k-1}^{-1} \hat{C}_{i,k-1}.$$
Then, the optimal estimate of the horizon initial state $\hat{x}_{k-N|k}$ in (16) and the error covariance $P_{k-N}$ in (19) can be rewritten as
$$\hat{x}_{k-N|k} = P_{k-N}\left(C_{k-N}^T R_{k-N}^{-1} y_{k-N} + \hat{w}_{N,k-1}\right),$$
$$P_{k-N} = \left(C_{k-N}^T R_{k-N}^{-1} C_{k-N} + \hat{P}_{N,k-1}\right)^{-1}.$$
Although the optimal initial conditions are obtained, they have computationally inefficient batch forms. In order to obtain the recursive forms of $\hat{x}_{k-N|k}$ and $P_{k-N}$ in (25) and (26), $\hat{w}_{N,k-1}$ and $\hat{P}_{N,k-1}$ should be calculated recursively.
The recursive equations of $\hat{w}_{N,k-1}$ and $\hat{P}_{N,k-1}$ can be obtained as follows:
$$\begin{aligned}
\hat{P}_{i+1,k-1} &= \hat{C}_{i+1,k-1}^T \hat{\Pi}_{i+1,k-1}^{-1} \hat{C}_{i+1,k-1} = \hat{C}_{i+1,k-1}^T \left[\hat{G}_{i+1,k-1}\tilde{Q}_{i,1}\hat{G}_{i+1,k-1}^T + \tilde{R}_{i,1}\right]^{-1} \hat{C}_{i+1,k-1} \\
&= A_{k-i-1}^T \begin{bmatrix} C_{k-i} \\ \hat{C}_{i,k-1} \end{bmatrix}^T \left(\begin{bmatrix} R_{k-i} & 0 \\ 0 & \hat{\Pi}_{i,k-1} \end{bmatrix} + \begin{bmatrix} C_{k-i} \\ \hat{C}_{i,k-1} \end{bmatrix} Q_{k-i} \begin{bmatrix} C_{k-i} \\ \hat{C}_{i,k-1} \end{bmatrix}^T\right)^{-1} \begin{bmatrix} C_{k-i} \\ \hat{C}_{i,k-1} \end{bmatrix} A_{k-i-1} \\
&= A_{k-i-1}^T C_{k-i}^T R_{k-i}^{-1} C_{k-i} A_{k-i-1} + A_{k-i-1}^T \hat{P}_{i,k-1} A_{k-i-1} \\
&\quad - A_{k-i-1}^T \left(C_{k-i}^T R_{k-i}^{-1} C_{k-i} + \hat{P}_{i,k-1}\right) \left[Q_{k-i}^{-1} + C_{k-i}^T R_{k-i}^{-1} C_{k-i} + \hat{P}_{i,k-1}\right]^{-1} \left(C_{k-i}^T R_{k-i}^{-1} C_{k-i} + \hat{P}_{i,k-1}\right) A_{k-i-1},
\end{aligned}$$
$$\begin{aligned}
\hat{w}_{i+1,k-1} &= \hat{C}_{i+1,k-1}^T \hat{\Pi}_{i+1,k-1}^{-1} Y_{i,k-1} \\
&= A_{k-i-1}^T \begin{bmatrix} C_{k-i} \\ \hat{C}_{i,k-1} \end{bmatrix}^T \left(\begin{bmatrix} R_{k-i} & 0 \\ 0 & \hat{\Pi}_{i,k-1} \end{bmatrix} + \begin{bmatrix} C_{k-i} \\ \hat{C}_{i,k-1} \end{bmatrix} Q_{k-i} \begin{bmatrix} C_{k-i} \\ \hat{C}_{i,k-1} \end{bmatrix}^T\right)^{-1} \begin{bmatrix} y_{k-i} \\ Y_{i-1,k-1} \end{bmatrix} \\
&= A_{k-i-1}^T C_{k-i}^T R_{k-i}^{-1} y_{k-i} + A_{k-i-1}^T \hat{w}_{i,k-1} \\
&\quad - A_{k-i-1}^T \left(C_{k-i}^T R_{k-i}^{-1} C_{k-i} + \hat{P}_{i,k-1}\right) \left[Q_{k-i}^{-1} + C_{k-i}^T R_{k-i}^{-1} C_{k-i} + \hat{P}_{i,k-1}\right]^{-1} \left(C_{k-i}^T R_{k-i}^{-1} y_{k-i} + \hat{w}_{i,k-1}\right),
\end{aligned}$$
with $\hat{P}_{1,k-1} = 0$, $\hat{w}_{1,k-1} = 0$, and $1 \leq i \leq N-1$.
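The recursion (27) and (28) works backward from the most recent measurement toward the start of the horizon and is equivalent, via the matrix inversion lemma, to the information-form loop sketched below. This is a minimal, assumed implementation (list index $j$ corresponds to time $k-N+j$, $Q_k$ is taken invertible as the $Q_{k-i}^{-1}$ term in (27) requires, and the function name is illustrative); it returns the estimated initial conditions (25) and (26).

```python
import numpy as np

def rofir_initial_conditions(A_list, C_list, Q_list, R_list, y_list):
    """Estimated horizon initial state and covariance, Eqs. (25)-(28), in information form."""
    N = len(y_list)
    n = A_list[0].shape[0]
    P_hat = np.zeros((n, n))          # \hat P_{1,k-1} = 0
    w_hat = np.zeros(n)               # \hat w_{1,k-1} = 0
    for i in range(1, N):             # i = 1 .. N-1, processing measurement y_{k-i}
        C, R, Q = C_list[N - i], R_list[N - i], Q_list[N - i]
        A = A_list[N - i - 1]         # A_{k-i-1}
        y = y_list[N - i]
        Rinv = np.linalg.inv(R)
        W = C.T @ Rinv @ C + P_hat    # combined information about x_{k-i}
        z = C.T @ Rinv @ y + w_hat
        M = np.linalg.inv(np.linalg.inv(Q) + W)
        P_hat = A.T @ (W - W @ M @ W) @ A
        w_hat = A.T @ (z - W @ M @ z)
    R0inv = np.linalg.inv(R_list[0])
    P_kN = np.linalg.inv(C_list[0].T @ R0inv @ C_list[0] + P_hat)   # Eq. (26)
    x_kN = P_kN @ (C_list[0].T @ R0inv @ y_list[0] + w_hat)         # Eq. (25)
    return x_kN, P_kN
```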
Finally, the ROFIR filter can be represented by applying the estimated horizon initial state (25) and its error covariance (26) to the one-step-ahead prediction dynamics of the Kalman filter, as follows:
$$\hat{x}_{k-N+i+1|k} = A_{k-N+i}\left(I - K_{k-N+i} C_{k-N+i}\right)\hat{x}_{k-N+i|k} + A_{k-N+i} K_{k-N+i} y_{k-N+i},$$
where
$$K_{k-N+i} = P_{k-N+i|k} C_{k-N+i}^T \left(C_{k-N+i} P_{k-N+i|k} C_{k-N+i}^T + R_{k-N+i}\right)^{-1},$$
$$P_{k-N+i+1|k} = A_{k-N+i} P_{k-N+i|k} A_{k-N+i}^T + Q_{k-N+i} - A_{k-N+i} K_{k-N+i}\left(C_{k-N+i} P_{k-N+i|k} C_{k-N+i}^T + R_{k-N+i}\right) K_{k-N+i}^T A_{k-N+i}^T.$$
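Putting the pieces together, one horizon of the ROFIR filter is the estimated initial condition followed by the prediction-form Kalman recursion (29)–(31). A minimal sketch, under the same assumed list ordering as the earlier sketches:

```python
import numpy as np

def rofir_horizon(A_list, C_list, Q_list, R_list, y_list, x_kN, P_kN):
    """Run Eqs. (29)-(31) over the horizon from the estimated initial conditions
    (x_kN, P_kN); returns the state estimate and covariance at the current time k."""
    x, P = x_kN, P_kN
    for A, C, Q, R, y in zip(A_list, C_list, Q_list, R_list, y_list):
        S = C @ P @ C.T + R                               # innovation covariance
        K = P @ C.T @ np.linalg.inv(S)                    # Eq. (30)
        x = A @ (x + K @ (y - C @ x))                     # Eq. (29), A(I-KC)x + AKy
        P = A @ P @ A.T + Q - A @ K @ S @ K.T @ A.T       # Eq. (31)
    return x, P
```

At the next time step, the horizon slides forward by one sample and the procedure is repeated with the most recent N measurements.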

2.2. Optimality and Unbiasedness of Recursive Optimal FIR Filter

In this section, the optimality and unbiasedness of the proposed ROFIR filter are verified. Since the conventional optimal FIR filter is optimal and provides an unbiased estimate on the finite horizon, the optimality and unbiasedness of the proposed ROFIR filter can be verified by showing its equivalence with the conventional optimal FIR filter.
To begin with, the best linear unbiased FIR filter for linear time-varying state-space models is introduced. The best linear unbiased FIR filter has the properties of unbiasedness and optimality by design, as per the following lemma.
Lemma 1.
([21]). For the linear time-varying state-space model (1) and (2), the best linear unbiased FIR filter is obtained as a linear function of the finite measurements on the horizon $[k-N, k]$:
$$\hat{x}_{k|k} = H_k Y_{N,k-1},$$
where the filter gain matrix $H_k$, chosen to minimize the estimation error variance under the unbiasedness constraint $E[x_k] = E[\hat{x}_{k|k}]$, is obtained as
$$H_k = M_k \begin{bmatrix} W_{1k} & W_{2k} \\ W_{2k}^T & W_{3k} \end{bmatrix}^{-1} \begin{bmatrix} \tilde{C}_{k-1}^T \\ \tilde{G}_{k-1}^T \end{bmatrix} \tilde{R}_{N,1}^{-1},$$
with
$$M_k = \begin{bmatrix} \Phi_{1,N} & \Phi_{2,N} & \cdots & \Phi_{N,N} & I \end{bmatrix},$$
$$W_{1k} = \tilde{C}_{k-1}^T \tilde{R}_{N,1}^{-1} \tilde{C}_{k-1},$$
$$W_{2k} = \tilde{C}_{k-1}^T \tilde{R}_{N,1}^{-1} \tilde{G}_{k-1},$$
$$W_{3k} = \tilde{G}_{k-1}^T \tilde{R}_{N,1}^{-1} \tilde{G}_{k-1} + \tilde{Q}_{N,1}^{-1}.$$
Next, the finite horizon Kalman filter on the horizon $[k-N, k]$ can be represented as in the following theorem.
Theorem 1.
On the horizon $[k-N, k]$, a batch form of the finite horizon Kalman filter can be obtained as
$$\hat{x}_{k|k} = M_k \begin{bmatrix} W_{1k} + P_{k-N}^{-1} & W_{2k} \\ W_{2k}^T & W_{3k} \end{bmatrix}^{-1} \left( \begin{bmatrix} P_{k-N}^{-1} \\ 0 \end{bmatrix} x_{k-N} + \begin{bmatrix} \tilde{C}_{k-1}^T \\ \tilde{G}_{k-1}^T \end{bmatrix} \tilde{R}_{N,1}^{-1} Y_{N,k-1} \right).$$
Proof of Theorem 1.
By using the induction method, we can obtain a batch form of the finite horizon Kalman filter as follows.
$\hat{x}_{i|k}$ and $P_i$ can be obtained from $\hat{x}_{i-1|k}$ and $P_{i-1}$ by substituting the Kalman gain matrix and covariance matrix into the dynamic equation of the Kalman filter as
$$\hat{x}_{i|k} = \left[A_{i-1} - A_{i-1} P_{i-1} C_{i-1}^T \left(R_{i-1} + C_{i-1} P_{i-1} C_{i-1}^T\right)^{-1} C_{i-1}\right] \hat{x}_{i-1|k} + A_{i-1} P_{i-1} C_{i-1}^T \left(R_{i-1} + C_{i-1} P_{i-1} C_{i-1}^T\right)^{-1} y_{i-1},$$
$$P_i = A_{i-1} P_{i-1} A_{i-1}^T + Q_{i-1} - A_{i-1} P_{i-1} C_{i-1}^T \left(R_{i-1} + C_{i-1} P_{i-1} C_{i-1}^T\right)^{-1} C_{i-1} P_{i-1} A_{i-1}^T,$$
where $i$ is used instead of $k-N+i$ for notational simplicity.
By defining the notations $L_i$, $N_i$, and $S_i$ as
$$L_i = \begin{bmatrix} \tilde{C}_i^T \tilde{R}_{N,N-i+1}^{-1} \tilde{C}_i + P_0^{-1} & \tilde{C}_i^T \tilde{R}_{N,N-i+1}^{-1} \tilde{G}_{o,i} \\ \tilde{G}_{o,i}^T \tilde{R}_{N,N-i+1}^{-1} \tilde{C}_i & \tilde{G}_{o,i}^T \tilde{R}_{N,N-i+1}^{-1} \tilde{G}_{o,i} + \tilde{Q}_{N,N-i+2}^{-1} \end{bmatrix},$$
$$N_i = \begin{bmatrix} \tilde{C}_i^T \tilde{R}_{N,N-i+1}^{-1} \tilde{C}_i + P_0^{-1} & \tilde{C}_i^T \tilde{R}_{N,N-i+1}^{-1} \tilde{G}_i \\ \tilde{G}_i^T \tilde{R}_{N,N-i+1}^{-1} \tilde{C}_i & \tilde{G}_i^T \tilde{R}_{N,N-i+1}^{-1} \tilde{G}_i + \tilde{Q}_{N,N-i+1}^{-1} \end{bmatrix},$$
$$S_i = \begin{bmatrix} P_0^{-1} \\ 0 \end{bmatrix} x_0 + \begin{bmatrix} \tilde{C}_i^T \\ \tilde{G}_i^T \end{bmatrix} \tilde{R}_{N,N-i+1}^{-1} Y_{N,k-N+i},$$
where $\tilde{G}_{o,i}$ is defined by removing the last zero block column from $\tilde{G}_i$, Equations (39) and (40) can be rewritten as
$$\hat{x}_{i|k} = M_i N_i^{-1} S_i,$$
$$P_i = M_i N_i^{-1} M_i^T.$$
For $i = 1$, $\hat{x}_{1|k}$ can be represented with the initial state $x_0$ and covariance $P_0$ as
$$\begin{aligned} \hat{x}_{1|k} &= A_0 x_0 + A_0 P_0 C_0^T \left(R_0 + C_0 P_0 C_0^T\right)^{-1} \left(y_0 - C_0 x_0\right) \\ &= \left[A_0 - A_0 P_0 C_0^T \left(R_0 + C_0 P_0 C_0^T\right)^{-1} C_0\right] x_0 + A_0 P_0 C_0^T \left(R_0 + C_0 P_0 C_0^T\right)^{-1} y_0 \\ &= M_1 N_1^{-1} S_1, \end{aligned}$$
and $P_1$ is calculated as
$$\begin{aligned} P_1 &= A_0 P_0 A_0^T + Q_0 - A_0 P_0 C_0^T \left(R_0 + C_0 P_0 C_0^T\right)^{-1} C_0 P_0 A_0^T \\ &= A_0 \left(P_0^{-1} + C_0^T R_0^{-1} C_0\right)^{-1} A_0^T + Q_0 \\ &= \begin{bmatrix} A_0 & I \end{bmatrix} \begin{bmatrix} P_0^{-1} + C_0^T R_0^{-1} C_0 & 0 \\ 0 & Q_0^{-1} \end{bmatrix}^{-1} \begin{bmatrix} A_0 & I \end{bmatrix}^T \\ &= M_1 N_1^{-1} M_1^T. \end{aligned}$$
For $i+1$, $\hat{x}_{i+1|k}$ can be calculated from $\hat{x}_{i|k}$ as
$$\begin{aligned}
\hat{x}_{i+1|k} &= \left[A_i - A_i P_i C_i^T \left(R_i + C_i P_i C_i^T\right)^{-1} C_i\right] \hat{x}_{i|k} + A_i P_i C_i^T \left(R_i + C_i P_i C_i^T\right)^{-1} y_i \\
&= A_i M_i \left[N_i^{-1} - N_i^{-1} M_i^T C_i^T \left(R_i + C_i M_i N_i^{-1} M_i^T C_i^T\right)^{-1} C_i M_i N_i^{-1}\right] S_i + A_i M_i N_i^{-1} M_i^T C_i^T \left(R_i + C_i M_i N_i^{-1} M_i^T C_i^T\right)^{-1} y_i \\
&= A_i M_i L_i^{-1} S_i + A_i M_i L_i^{-1} M_i^T C_i^T R_i^{-1} y_i \\
&= A_i M_i L_i^{-1} \left(S_i + M_i^T C_i^T R_i^{-1} y_i\right) \\
&= A_i \begin{bmatrix} M_i & I \end{bmatrix} \begin{bmatrix} L_i^{-1} & 0 \\ 0 & Q_i \end{bmatrix} S_{i+1} \\
&= M_{i+1} N_{i+1}^{-1} S_{i+1},
\end{aligned}$$
and $P_{i+1}$ can be obtained from $P_i$ as
$$\begin{aligned}
P_{i+1} &= A_i \left[P_i - P_i C_i^T \left(R_i + C_i P_i C_i^T\right)^{-1} C_i P_i\right] A_i^T + Q_i \\
&= A_i M_i \left[N_i^{-1} - N_i^{-1} M_i^T C_i^T \left(R_i + C_i M_i N_i^{-1} M_i^T C_i^T\right)^{-1} C_i M_i N_i^{-1}\right] M_i^T A_i^T + Q_i \\
&= A_i M_i \left(N_i + M_i^T C_i^T R_i^{-1} C_i M_i\right)^{-1} M_i^T A_i^T + Q_i \\
&= A_i M_i L_i^{-1} M_i^T A_i^T + Q_i \\
&= M_{i+1} N_{i+1}^{-1} M_{i+1}^T.
\end{aligned}$$
This completes the proof. □
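The induction step above rests on rewriting one Riccati update in the block form $M_1 N_1^{-1} M_1^T$. The identity is easy to spot-check numerically; the snippet below is an illustrative verification with arbitrary matrices, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2
A0 = rng.standard_normal((n, n))
C0 = rng.standard_normal((m, n))
P0, Q0, R0 = 0.5 * np.eye(n), 0.1 * np.eye(n), 0.2 * np.eye(m)

# one covariance update of the finite horizon Kalman filter
S = R0 + C0 @ P0 @ C0.T
P1 = A0 @ P0 @ A0.T + Q0 - A0 @ P0 @ C0.T @ np.linalg.inv(S) @ C0 @ P0 @ A0.T

# the same quantity written in the block form M_1 N_1^{-1} M_1^T
M1 = np.hstack([A0, np.eye(n)])
N1 = np.block([[np.linalg.inv(P0) + C0.T @ np.linalg.inv(R0) @ C0, np.zeros((n, n))],
               [np.zeros((n, n)), np.linalg.inv(Q0)]])
print(np.allclose(P1, M1 @ np.linalg.inv(N1) @ M1.T))   # expected: True
```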
Finally, it can be shown that the finite horizon Kalman filter with the estimated initial conditions is equivalent to the conventional optimal FIR filter (32), by applying the estimated initial state (16) and error covariance (19) to the finite horizon Kalman filter (38), as per the following theorem.
Theorem 2.
The optimal FIR filter (32) can be obtained by replacing $x_{k-N}$ and $P_{k-N}$ in the finite horizon Kalman filter (38) with the estimated initial conditions (16) and (19), respectively.
Proof of Theorem 2.
$\Pi_{k-1}^{-1}$ in (11) can be represented as
$$\Pi_{k-1}^{-1} = \left(\tilde{G}_{k-1} \tilde{Q}_{N,1} \tilde{G}_{k-1}^T + \tilde{R}_{N,1}\right)^{-1} = \tilde{R}_{N,1}^{-1} - \tilde{R}_{N,1}^{-1} \tilde{G}_{k-1} \left(\tilde{G}_{k-1}^T \tilde{R}_{N,1}^{-1} \tilde{G}_{k-1} + \tilde{Q}_{N,1}^{-1}\right)^{-1} \tilde{G}_{k-1}^T \tilde{R}_{N,1}^{-1}.$$
By using (35)–(37) and (50), $P_{k-N}^{-1}$ and $P_{k-N}^{-1} \hat{x}_{k-N}$ in (38) can be rewritten as
$$\begin{aligned}
P_{k-N}^{-1} &= \tilde{C}_{k-1}^T \Pi_{k-1}^{-1} \tilde{C}_{k-1} = \tilde{C}_{k-1}^T \left[\tilde{R}_{N,1}^{-1} - \tilde{R}_{N,1}^{-1} \tilde{G}_{k-1} \left(\tilde{G}_{k-1}^T \tilde{R}_{N,1}^{-1} \tilde{G}_{k-1} + \tilde{Q}_{N,1}^{-1}\right)^{-1} \tilde{G}_{k-1}^T \tilde{R}_{N,1}^{-1}\right] \tilde{C}_{k-1} \\
&= W_{1k} - W_{2k} W_{3k}^{-1} W_{2k}^T, \\
P_{k-N}^{-1} \hat{x}_{k-N} &= P_{k-N}^{-1} P_{k-N} \tilde{C}_{k-1}^T \Pi_{k-1}^{-1} Y_{N,k-1} = \tilde{C}_{k-1}^T \left[\tilde{R}_{N,1}^{-1} - \tilde{R}_{N,1}^{-1} \tilde{G}_{k-1} \left(\tilde{G}_{k-1}^T \tilde{R}_{N,1}^{-1} \tilde{G}_{k-1} + \tilde{Q}_{N,1}^{-1}\right)^{-1} \tilde{G}_{k-1}^T \tilde{R}_{N,1}^{-1}\right] Y_{N,k-1} \\
&= \tilde{C}_{k-1}^T \tilde{R}_{N,1}^{-1} Y_{N,k-1} - W_{2k} W_{3k}^{-1} \tilde{G}_{k-1}^T \tilde{R}_{N,1}^{-1} Y_{N,k-1},
\end{aligned}$$
respectively.
Then, we can obtain the following relations by applying (51) to the right-hand side of Equation (38):
$$\begin{aligned}
& M_k \begin{bmatrix} W_{1k} + P_{k-N}^{-1} & W_{2k} \\ W_{2k}^T & W_{3k} \end{bmatrix}^{-1} \left( \begin{bmatrix} P_{k-N}^{-1} \\ 0 \end{bmatrix} \hat{x}_{k-N} + \begin{bmatrix} \tilde{C}_{k-1}^T \\ \tilde{G}_{k-1}^T \end{bmatrix} \tilde{R}_{N,1}^{-1} Y_{N,k-1} \right) \\
&= M_k \begin{bmatrix} W_{1k} + P_{k-N}^{-1} & W_{2k} \\ W_{2k}^T & W_{3k} \end{bmatrix}^{-1} \begin{bmatrix} 2\tilde{C}_{k-1}^T \tilde{R}_{N,1}^{-1} Y_{N,k-1} - W_{2k} W_{3k}^{-1} \tilde{G}_{k-1}^T \tilde{R}_{N,1}^{-1} Y_{N,k-1} \\ \tilde{G}_{k-1}^T \tilde{R}_{N,1}^{-1} Y_{N,k-1} \end{bmatrix} \\
&= M_k \begin{bmatrix} W_{1k} + P_{k-N}^{-1} & W_{2k} \\ W_{2k}^T & W_{3k} \end{bmatrix}^{-1} \begin{bmatrix} 2I & -W_{2k} W_{3k}^{-1} \\ 0 & I \end{bmatrix} \begin{bmatrix} \tilde{C}_{k-1}^T \\ \tilde{G}_{k-1}^T \end{bmatrix} \tilde{R}_{N,1}^{-1} Y_{N,k-1} \\
&= M_k \begin{bmatrix} W_{1k} & W_{2k} \\ W_{2k}^T & W_{3k} \end{bmatrix}^{-1} \begin{bmatrix} \tilde{C}_{k-1}^T \\ \tilde{G}_{k-1}^T \end{bmatrix} \tilde{R}_{N,1}^{-1} Y_{N,k-1}.
\end{aligned}$$
This completes the proof. □

2.3. Adaptive FIR Filter with Sequential Noise Statistics Estimation

Since the structure of the proposed ROFIR filter is exactly the same as that of the Kalman filter on the horizon, many useful techniques of Kalman filtering can be applied to the proposed ROFIR filter to improve the performance of the FIR filter. In this section, we propose an AFIR filter as an application of the proposed ROFIR filter.
By applying the modified sequential noise statistics estimation method introduced in Figure 2 to the ROFIR filter, an AFIR filter can be obtained as follows.
First, consider the linear measurement-state relationship to estimate the measurement noise statistics. On the horizon $[k-N-1, k-1]$, the $i$-th approximation sample of the measurement noise, $r_{k-N+i|k-1}$, can be represented as
$$r_{k-N+i|k-1} = y_{k-N+i} - C_{k-N+i} A_{k-N+i} \hat{x}_{k-N+i-1|k-1}.$$
An unbiased estimate of the initial mean of the measurement noise at time $k$ can be defined as
$$\hat{r}_{k-N|k} = \frac{1}{N} \sum_{i=1}^{N} r_{k-i+1|k-1},$$
where $r_{k|k-1} = y_k - C_k A_k \hat{x}_{k-1|k-1}$.
Then, the unbiased estimate of the initial variance $C_{r,k-N|k}$ can be obtained as
$$\hat{C}_{r,k-N|k} = \frac{1}{N-1} \sum_{i=1}^{N} \left(r_{k-i|k-1} - \hat{r}_{k-N|k}\right)\left(r_{k-i|k-1} - \hat{r}_{k-N|k}\right)^T.$$
By using the expectation of $\hat{C}_{r,k-N|k}$,
$$E\left[\hat{C}_{r,k-N|k}\right] = \frac{1}{N} \sum_{i=1}^{N} \gamma_{k-i|k-1} + R_{k-N},$$
the unbiased estimate of the initial measurement noise covariance $\hat{R}_{k-N|k}$ can be obtained as
$$\hat{R}_{k-N|k} = \frac{1}{N-1} \sum_{i=1}^{N} \left( \left(r_{k-i|k-1} - \hat{r}_{k-N|k}\right)\left(r_{k-i|k-1} - \hat{r}_{k-N|k}\right)^T - \frac{N-1}{N}\gamma_{k-i|k-1} \right),$$
where $\gamma_{k-i|k-1} = C_{k-i} P_{k-i|k-1} C_{k-i}^T$.
The mean and covariance of the measurement noise can be obtained on the horizon $[k-N, k]$ as
$$\hat{r}_{k-N+i|k} = \hat{r}_{k-N+i-1|k} + \frac{1}{N}\left(r_{k-N+i|k} - r_{k-N+i|k-1}\right),$$
$$\begin{aligned}
\hat{R}_{k-N+i|k} &= \hat{R}_{k-N+i-1|k} + \frac{1}{N-1}\Big( \left(r_{k-N+i|k} - \hat{r}_{k-N+i|k}\right)\left(r_{k-N+i|k} - \hat{r}_{k-N+i|k}\right)^T \\
&\quad - \left(r_{k-N+i|k-1} - \hat{r}_{k-N+i|k}\right)\left(r_{k-N+i|k-1} - \hat{r}_{k-N+i|k}\right)^T \\
&\quad + \frac{1}{N}\left(r_{k-N+i|k} - r_{k-N+i|k-1}\right)\left(r_{k-N+i|k} - r_{k-N+i|k-1}\right)^T - \frac{N-1}{N}\left(\gamma_{k-N+i|k} - \gamma_{k-N+i|k-1}\right) \Big),
\end{aligned}$$
for $1 \leq i \leq N-1$. For $i = N$, $\hat{r}_{k|k}$ and $\hat{R}_{k|k}$ can be obtained by replacing $r_{k-N+i|k-1}$ and $\gamma_{k-N+i|k-1}$ in (58) and (59) with $r_{k-N|k-1}$ and $\gamma_{k-N|k-1}$, respectively.
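The updates (58) and (59) replace one noise sample per step, so each can be coded as a small helper. Below is a minimal sketch of one such update; the function name and the way the entering and leaving samples are passed in are assumptions made for illustration.

```python
import numpy as np

def update_measurement_noise_stats(r_mean, R_hat, r_new, r_old, gamma_new, gamma_old, N):
    """One sequential update of the measurement noise mean and covariance,
    mirroring the structure of Eqs. (58)-(59).  r_new/gamma_new come from the
    current horizon, r_old/gamma_old are the samples they replace."""
    r_mean = r_mean + (r_new - r_old) / N                       # Eq. (58)
    R_hat = R_hat + (
        np.outer(r_new - r_mean, r_new - r_mean)
        - np.outer(r_old - r_mean, r_old - r_mean)
        + np.outer(r_new - r_old, r_new - r_old) / N
        - (N - 1) / N * (gamma_new - gamma_old)
    ) / (N - 1)                                                 # Eq. (59)
    return r_mean, R_hat
```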
Second, for the process noise statistics, define the approximation of the process noise sample on the horizon $[k-N-1, k-1]$ as
$$q_{k-N+i|k-1} = \hat{x}_{k-N+i|k-1} - A_{k-N+i} \hat{x}_{k-N+i-1|k-1},$$
where $q_{k-N+i|k-1}$ is defined as the $i$-th process noise sample at time $k$. In the same way as for the measurement noise statistics, an unbiased estimate of the horizon initial sampled mean $\hat{q}_{k-N|k}$ at time $k$ can be represented as
$$\hat{q}_{k-N|k} = \frac{1}{N} \sum_{i=1}^{N} q_{k-i+1|k-1},$$
where $q_{k|k-1} = \hat{x}_{k|k-1} - A_k \hat{x}_{k-1|k-1}$.
Then, the unbiased estimate of the initial process noise covariance $\hat{Q}_{k-N|k}$ can be represented as
$$\hat{Q}_{k-N|k} = \frac{1}{N-1} \sum_{i=1}^{N} \left( \left(q_{k-i|k} - \hat{q}_{k-N|k}\right)\left(q_{k-i|k} - \hat{q}_{k-N|k}\right)^T - \frac{N-1}{N}\Delta_{k-N+i|k-1} \right),$$
where $\Delta_{k-i|k-1} = A_{k-i} P_{k-i-1|k-1} A_{k-i}^T - P_{k-i|k-1}$. Then, the mean and covariance of the process noise on the horizon $[k-N, k]$ can be calculated sequentially as
$$\hat{q}_{k-N+i|k} = \hat{q}_{k-N+i-1|k} + \frac{1}{N}\left(q_{k-N+i|k-1} - q_{k-N+i|k}\right),$$
$$\begin{aligned}
\hat{Q}_{k-N+i|k} &= \hat{Q}_{k-N+i-1|k} + \frac{1}{N-1}\Big( \left(q_{k-N+i|k} - \hat{q}_{k-N+i|k}\right)\left(q_{k-N+i|k} - \hat{q}_{k-N+i|k}\right)^T \\
&\quad - \left(q_{k-N+i|k-1} - \hat{q}_{k-N+i|k}\right)\left(q_{k-N+i|k-1} - \hat{q}_{k-N+i|k}\right)^T \\
&\quad + \frac{1}{N}\left(q_{k-i|k} - q_{k-i|k-1}\right)\left(q_{k-i|k} - q_{k-i|k-1}\right)^T - \frac{N-1}{N}\left(\Delta_{k-N+i|k} - \Delta_{k-N+i|k-1}\right) \Big).
\end{aligned}$$
With the above sequential noise statistics estimates, the AFIR filter can be obtained on the horizon $[k-N, k]$ as:
$$\hat{x}_{k-N+i+1|k} = A_{k-N+i}\left(I - K_{k-N+i} C_{k-N+i}\right)\hat{x}_{k-N+i|k} + A_{k-N+i} K_{k-N+i}\left(y_{k-N+i} - \hat{r}_{k-N+i|k}\right) + \hat{q}_{k-N+i|k},$$
where the filter gain and prediction covariance matrix are obtained as
$$K_{k-N+i} = P_{k-N+i|k} C_{k-N+i}^T \left(C_{k-N+i} P_{k-N+i|k} C_{k-N+i}^T + \hat{R}_{k-N+i|k}\right)^{-1},$$
$$P_{k-N+i+1|k} = A_{k-N+i} P_{k-N+i|k} A_{k-N+i}^T + \hat{Q}_{k-N+i|k} - A_{k-N+i} K_{k-N+i}\left(C_{k-N+i} P_{k-N+i|k} C_{k-N+i}^T + \hat{R}_{k-N+i|k}\right) K_{k-N+i}^T A_{k-N+i}^T,$$
and where the optimal unbiased estimate of the horizon initial state $\hat{x}_{k-N|k}$ and the state covariance $P_{k-N|k}$ are obtained by (25) and (26) with the estimated noise statistics $\hat{R}_{\cdot|k-1}$ and $\hat{Q}_{\cdot|k-1}$ in (59) and (64), respectively.
Since the noise statistics estimated in the AFIR filter are obtained by using the measurements one step ahead of the estimation time, the modified sequential noise statistics estimation method in the proposed AFIR filter may provide more adaptive estimation results than those given by the previous sequential noise statistics estimation methods in adaptive Kalman filtering.
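For clarity, one AFIR recursion step (65)–(67) is the ROFIR update of Section 2.1 with the estimated noise mean subtracted from the measurement and the estimated covariances substituted for $Q$ and $R$. A minimal sketch, with an illustrative function name:

```python
import numpy as np

def afir_step(A, C, x, P, y, r_hat, R_hat, q_hat, Q_hat):
    """One AFIR step on the horizon, Eqs. (65)-(67), using the sequentially
    estimated noise means (r_hat, q_hat) and covariances (R_hat, Q_hat)."""
    S = C @ P @ C.T + R_hat
    K = P @ C.T @ np.linalg.inv(S)                        # Eq. (66)
    x_next = A @ (x + K @ (y - r_hat - C @ x)) + q_hat    # Eq. (65)
    P_next = A @ P @ A.T + Q_hat - A @ K @ S @ K.T @ A.T  # Eq. (67)
    return x_next, P_next
```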

3. Simulation Results and Discussion

To demonstrate the validity of the proposed filters, the estimation performance of the proposed algorithms is compared with that of the conventional Kalman filter, the modified Sage-Husa adaptive Kalman (SHAK) filter [22], and the limited memory adaptive Kalman (LMAK) filter [23] for the F-404 gas turbine aircraft engine model in [19]. The discrete-time nominal F-404 gas turbine aircraft engine model can be represented as follows:
$$x_{k+1} = A x_k + w_k = \begin{bmatrix} 0.9305 & 0 & 0.1107 \\ 0.0077 & 0.9802 & -0.0173 \\ 0.0142 & 0 & 0.8953 \end{bmatrix} x_k + w_k,$$
$$y_k = C x_k + v_k = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} x_k + v_k,$$
where the covariance matrices of the process noise and measurement noise are set as $Q = 0.02 I_{3\times3}$ and $R = 0.01 I_{2\times2}$, respectively.
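The nominal model (68) and (69) is straightforward to set up numerically. The sketch below assumes the sign of the $(2,3)$ entry of $A$ is negative, as in the commonly used F-404 model; treat that sign and the variable names as assumptions made for illustration.

```python
import numpy as np

# Nominal F-404 gas turbine engine model, Eqs. (68)-(69)
A = np.array([[0.9305, 0.0,     0.1107],
              [0.0077, 0.9802, -0.0173],   # sign of the (2,3) entry assumed negative
              [0.0142, 0.0,     0.8953]])
C = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
Q = 0.02 * np.eye(3)   # process noise covariance
R = 0.01 * np.eye(2)   # measurement noise covariance
```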
Even if dynamic systems and signals are well represented by the state-space model, they may undergo unpredictable changes, such as jumps in frequency, phase, and velocity. Because these effects typically occur over a short time interval, they are called temporary uncertainties, and the filter should be robust enough to diminish their effects. Due to its structure and the way it processes measurements, an FIR estimator is believed to be robust against the numerical errors and temporary modeling uncertainties that may cause a divergence phenomenon in an IIR filter. To illustrate this fact and the fast convergence, the proposed filters and the Kalman filter are compared for the following temporarily uncertain model, where temporary uncertainties are added to the nominal models (68) and (69), as
$$x_{k+1} = \bar{A} x_k + \bar{w}_k = \left(A + \Delta A\right) x_k + \bar{w}_k,$$
$$y_k = \bar{C} x_k + \bar{v}_k = \left(C + \Delta C\right) x_k + \bar{v}_k,$$
where
$$\Delta A = 0.1\,\delta_k I_{3\times3}, \quad \Delta C = \begin{bmatrix} 0.01\,\delta_k & 0 & 0 \\ 0 & 0.01\,\delta_k & 0 \end{bmatrix}, \quad \delta_k = \begin{cases} 1, & 200 \leq k \leq 250, \\ 0, & \mathrm{otherwise}, \end{cases}$$
and the process and measurement noise covariance matrices are taken as $\bar{Q} = 0.25 I_{3\times3}$ and $\bar{R} = 0.02 I_{2\times2}$, respectively.
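The temporarily uncertain system (70) and (71) can be simulated by switching the perturbations on for $200 \leq k \leq 250$. The following sketch assumes a zero initial state and a 600-step run, both of which are illustrative choices rather than values stated in the paper.

```python
import numpy as np

def simulate_uncertain(A, C, steps=600, rng=None):
    """Simulate Eqs. (70)-(71) with Q_bar = 0.25*I and R_bar = 0.02*I."""
    if rng is None:
        rng = np.random.default_rng(1)
    n, m = A.shape[0], C.shape[0]
    x = np.zeros(n)                      # assumed initial state
    xs, ys = [], []
    for k in range(steps):
        delta = 1.0 if 200 <= k <= 250 else 0.0
        A_bar = A + 0.1 * delta * np.eye(n)
        C_bar = C + 0.01 * delta * np.hstack([np.eye(m), np.zeros((m, n - m))])
        xs.append(x)
        ys.append(C_bar @ x + np.sqrt(0.02) * rng.standard_normal(m))
        x = A_bar @ x + np.sqrt(0.25) * rng.standard_normal(n)
    return np.array(xs), np.array(ys)
```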
Filters are designed for the nominal state-space models (68) and (69), and then they are applied to the temporarily uncertain system (70) and (71). Additionally, the horizon length is taken as $N = 15$ and $N = 12$ for the proposed FIR filters and the LMAK filter, respectively, and the forgetting factor of the SHAK filter is set as $\alpha = 0.3$.
Figure 3, Figure 4 and Figure 5 show the estimation errors for the states $x_1$, $x_2$, and $x_3$, respectively, for the five filters. In addition, the mean relative estimation errors (MREE) are compared in Table 1, Table 2 and Table 3. The MREE is defined as
$$e_r = \frac{\frac{1}{N_f - N_s}\sum_{k=N_s}^{N_f} \left| x_k - \hat{x}_k \right|}{\frac{1}{N_f - N_s}\sum_{k=N_s}^{N_f} x_k},$$
where $x_k$ is the real state, $\hat{x}_k$ is the estimate of the filter, and $N_s$ and $N_f$ are the initial time and the end time of the simulation, respectively.
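The MREE is just a ratio of time-averaged quantities and can be computed per state component as below. In this sketch the denominator uses the mean absolute value of the true state to keep the ratio well defined when the state changes sign; that choice, and the function name, are assumptions made for illustration.

```python
import numpy as np

def mree(x_true, x_est, Ns, Nf):
    """Mean relative estimation error over the interval [Ns, Nf] for one state component."""
    err = np.mean(np.abs(x_true[Ns:Nf + 1] - x_est[Ns:Nf + 1]))
    ref = np.mean(np.abs(x_true[Ns:Nf + 1]))   # assumed absolute value in the denominator
    return err / ref
```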
Since the conventional Kalman filter provides optimal estimates, the estimation errors of the conventional Kalman filter are smaller than those of the other filters during the time interval $[50, 200]$, when there are no system uncertainties. However, from the simulation results in the time intervals $[50, 550]$ and $[201, 400]$, it can be seen that the proposed ROFIR and AFIR filters have smaller estimation errors and faster convergence speed than the conventional Kalman filter and the adaptive Kalman filters. These results show that, when model uncertainties exist, the proposed ROFIR and AFIR filters can work well and have better performance than the Kalman filters due to their FIR structure.
By comparing the simulation results of the adaptive filters in the time interval $[201, 400]$, it can be easily seen that the estimation errors of the proposed AFIR filter are remarkably smaller than those of the LMAK filter, even though the horizon length of the proposed AFIR filter is larger than that of the LMAK filter. In addition, the estimation errors of the proposed AFIR filter rapidly converge to zero after the temporary model uncertainty disappears, whereas those of the adaptive Kalman filters do not. Moreover, the estimation errors of the adaptive Kalman filters fluctuate and oscillate during the time interval $[401, 550]$, which is caused by the accumulation of estimation errors. From these results, it can be concluded that the modified sequential noise statistics estimation method and its combination with recursive FIR filtering provide more adaptive estimation and faster convergence than sequential noise statistics estimation combined with Kalman filtering.

4. Conclusions

In this paper, an optimal and recursive-form FIR filter was proposed by employing the Kalman filtering technique and the moving horizon estimation strategy for discrete time-varying state-space models. The initial state and its error covariance were optimally estimated in the maximum likelihood sense over the horizon, and then they initiated the finite horizon Kalman filter. The proposed recursive optimal FIR filter was designed without the assumption of a nonsingular state transition matrix or any a priori initial information. In addition, it was also proved that the proposed ROFIR filter is the best linear unbiased estimator on the finite estimation horizon. Furthermore, by applying the modified sequential noise statistics estimation method to the ROFIR filter, an AFIR filter was also proposed as an application of the ROFIR filter, which shows that many useful techniques of Kalman filtering can be applied to the proposed ROFIR filter to improve the estimation performance of FIR filters. To validate the proposed filters, a computer simulation was performed, and it was shown that the proposed filters were more accurate and robust than the conventional Kalman filter and adaptive Kalman filters.

Author Contributions

Conceptualization, B.K.; methodology, B.K.; software, B.K.; validation, B.K. and S.-i.K.; formal analysis, B.K.; investigation, B.K. and S.-i.K.; resources, B.K. and S.-i.K.; data curation, B.K. and S.-i.K.; writing—original draft preparation, B.K.; writing—review and editing, S.-i.K.; visualization, B.K.; All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
IIR	Infinite Impulse Response
FIR	Finite Impulse Response
BIBO	Bounded Input Bounded Output
RHK	Receding Horizon Kalman
KUFIR	Kalman-like Unbiased Finite Impulse Response
ROFIR	Recursive Optimal Finite Impulse Response
AFIR	Adaptive Finite Impulse Response
SHAK	Sage-Husa Adaptive Kalman
LMAK	Limited Memory Adaptive Kalman

References

1. Fitzgerald, R.J. Divergence of the Kalman filter. IEEE Trans. Autom. Control 1971, 6, 736–747.
2. Sangsuk-Iam, S.; Bullock, T.E. Analysis of discrete-time Kalman filtering under incorrect noise covariances. IEEE Trans. Autom. Control 1990, 35, 1304–1309.
3. Grewal, M.S.; Andrews, A.P. Kalman Filtering—Theory and Practice; Prentice-Hall: Englewood Cliffs, NJ, USA, 1993.
4. Kwon, W.H.; Lee, K.S.; Kwon, O.K. Optimal FIR Filters for Time-Varying State-Space Models. IEEE Trans. Aerosp. Electron. Syst. 1990, 26, 1011–1021.
5. Kwon, B.; Quan, Z.; Han, S. A robust fixed-lag receding horizon smoother for uncertain state space models. Int. J. Adapt. Control Signal Process. 2015, 29, 1354–1366.
6. Kwon, B.; Han, S.; Han, S. Improved Receding Horizon Fourier Analysis for Quasi-periodic Signals. J. Electr. Eng. Technol. 2017, 12, 378–384.
7. Kwon, W.H.; Kim, P.S.; Park, P. A receding horizon Kalman FIR filter for discrete time-invariant systems. IEEE Trans. Autom. Control 1999, 44, 1787–1791.
8. Kwon, W.H.; Kim, P.S.; Han, S.H. A Receding Horizon Unbiased FIR Filter for Discrete-Time State Space Models. Automatica 2002, 38, 545–551.
9. Shmaliy, Y.S.; Munoz-Diaz, J.; Arceo-Miquel, L. Optimal horizons for a one-parameter family of unbiased FIR filter. Digit. Signal Process. 2008, 18, 739–750.
10. Shmaliy, Y.S. An iterative Kalman-like algorithm ignoring noise and initial conditions. IEEE Trans. Signal Process. 2011, 59, 2465–2473.
11. Kou, Y.; Jiao, Y.; Xu, D.; Zhang, M.; Liu, Y.; Li, X. Low-cost precise measurement of oscillator frequency instability based on GNSS carrier observation. Adv. Space Res. 2012, 51, 969–977.
12. Pak, J.M.; Ahn, C.K.; Shmaliy, Y.S.; Shi, P.; Lim, M.T. Switching extensible FIR filter bank for adaptive horizon state estimation with application. IEEE Trans. Control Syst. Technol. 2016, 24, 1052–1058.
13. Shmaliy, Y.S.; Khan, S.; Zhao, S. Ultimate iterative UFIR filtering algorithm. Measurement 2016, 92, 236–242.
14. Zhao, S.; Shmaliy, Y.S.; Liu, F. Fast Kalman-Like Optimal Unbiased FIR Filtering with Applications. IEEE Trans. Signal Process. 2016, 64, 2284–2297.
15. Zhao, S.; Shmaliy, Y.S.; Ahn, C.K.; Liu, F. Adaptive-Horizon Iterative UFIR Filtering Algorithm with Applications. IEEE Trans. Ind. Electron. 2018, 65, 6393–6402.
16. Zhao, S.; Shmaliy, Y.S.; Ahn, C.K.; Liu, F. Self-Tuning Unbiased Finite Impulse Response Filtering Algorithm for Processes with Unknown Measurement Noise Covariance. IEEE Trans. Control Syst. Technol. 2021, 29, 1372–1379.
17. Pak, J.M.; Yoo, S.Y.; Lim, M.T.; Song, M.K. Weighted Average Extended FIR Filter Bank to Manage the Horizon Size in Nonlinear FIR Filtering. Int. J. Control Autom. Syst. 2014, 13, 138–145.
18. Pak, J.M.; Kim, P.S.; You, S.H.; Lee, S.S.; Song, M.K. Extended Least Square Unbiased FIR Filter for Target Tracking Using the Constant Velocity Motion Model. Int. J. Control Autom. Syst. 2017, 15, 947–951.
19. Kim, P.S. Selective Finite Memory Structure Filtering Using the Chi-Square Test Statistic for Temporarily Uncertain Systems. Appl. Sci. 2019, 9, 4257.
20. Kwon, B.; Han, S.; Lee, K. Robust Estimation and Tracking of Power System Harmonics Using an Optimal Finite Impulse Response Filter. Energies 2018, 11, 1811.
21. Kwon, B. An Optimal FIR Filter for Discrete Time-varying State Space Models. J. Inst. Control Robot. Syst. 2011, 17, 1183–1187.
22. Akhlaghi, S.; Zhou, N.; Huang, Z. Adaptive Adjustment of Noise Covariance in Kalman Filter for Dynamic State Estimation. In Proceedings of the 2017 IEEE Power and Energy Society General Meeting, Chicago, IL, USA, 16–20 July 2017.
23. Myers, K.; Tapley, B. Adaptive sequential estimation with unknown noise statistics. IEEE Trans. Autom. Control 1976, 21, 520–523.
Figure 1. (a) The concept of RHK filter. (b) The concept of KUFIR filter.
Figure 2. The concept of modified sequential noise statistics estimation in AFIR filter.
Figure 3. Estimation error for the first state $x_{1,k}$ ($e_{1,k} = x_{1,k} - \hat{x}_{1,k}$).
Figure 4. Estimation error for the second state $x_{2,k}$ ($e_{2,k} = x_{2,k} - \hat{x}_{2,k}$).
Figure 5. Estimation error for the third state $x_{3,k}$ ($e_{3,k} = x_{3,k} - \hat{x}_{3,k}$).
Table 1. Comparison of the MREE for the first state ($e_{1,r}$).

Time Interval    [50, 550]    [50, 200]    [201, 400]    [401, 550]
Kalman filter    0.7567       0.2157       2.6138        0.1789
SHAK filter      0.4107       0.4852       1.3432        0.7997
LMAK filter      0.3760       0.4394       1.2185        0.7809
ROFIR filter     0.1038       0.2469       0.3141        0.1965
AFIR filter      0.0872       0.2927       0.1546        0.2476
Table 2. Comparison of the MREE for the second state ($e_{2,r}$).

Time Interval    [50, 550]    [50, 200]    [201, 400]    [401, 550]
Kalman filter    0.2709       0.1933       0.4839        0.1579
SHAK filter      0.0642       0.2431       0.0998        0.5697
LMAK filter      0.0558       0.2387       0.0894        0.4248
ROFIR filter     0.0552       0.2314       0.0947        0.1779
AFIR filter      0.0216       0.2490       0.0240        0.2011
Table 3. Comparison of the MREE for the third state ($e_{3,r}$).

Time Interval    [50, 550]    [50, 200]    [201, 400]    [401, 550]
Kalman filter    0.7973       0.3574       3.3627        0.2304
SHAK filter      0.3510       0.6860       1.2738        0.6468
LMAK filter      0.3355       0.6675       1.2022        0.6429
ROFIR filter     0.1808       0.4205       0.6386        0.2776
AFIR filter      0.1756       0.5891       0.4568        0.4040
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

