Abstract
The performance evaluation of state estimators for nonlinear regular systems, in which the current measurement only depends on the current state directly, has been widely studied using the Bayesian Cramér-Rao lower bound (BCRLB). However, in practice, the measurements of many nonlinear systems are two-adjacent-states dependent (TASD) directly, i.e., the current measurement depends on the current state as well as the most recent previous state directly. In this paper, we first develop the recursive BCRLBs for the prediction and smoothing of nonlinear systems with TASD measurements. A comparison between the recursive BCRLBs for TASD systems and nonlinear regular systems is provided. Then, the recursive BCRLBs for the prediction and smoothing of two special types of TASD systems, in which the original measurement noises are autocorrelated or cross-correlated with the process noises at one time step apart, are presented, respectively. Illustrative examples in radar target tracking show the effectiveness of the proposed recursive BCRLBs for the prediction and smoothing of TASD systems.
1. Introduction
Filtering, prediction and smoothing have attracted wide attention in many engineering applications, such as target tracking [1,2], signal processing [3], sensor registration [4], econometric forecasting [5], localization and navigation [6,7], etc. For filtering, the Kalman filter (KF) [8] is optimal for linear Gaussian systems in the sense of minimum mean squared error (MMSE). However, most real-world system models are nonlinear, which violates the assumptions of the Kalman filter. To deal with this, many nonlinear filters have been developed. The extended Kalman filter (EKF) [9] is the best-known one; it approximates nonlinear systems as linear ones by first-order Taylor series expansion of the nonlinear dynamic and/or measurement functions. The divided difference filter (DDF) was proposed in [10] using the Stirling interpolation formula. DDFs include the first-order divided difference filter (DD1) and the second-order divided difference filter (DD2), depending on the interpolation order. Moreover, some other nonlinear filters have also been proposed, including the unscented Kalman filter (UKF) [11,12], quadrature Kalman filter (QKF) [13], cubature Kalman filter (CKF) [14,15], etc. All these nonlinear filters use different approximation techniques, such as function approximation and moment approximation [16]. Another type of nonlinear filter is the particle filter (PF) [17,18], which uses the sequential Monte Carlo method to generate random sample points to approximate the posterior density. Prediction is also of great importance since it enables decisions to be made in advance and potential risks to be anticipated. Following the same ideas as for filtering, various predictors have been studied, e.g., the Kalman predictor (KP) [19], extended Kalman predictor (EKP) [20], unscented Kalman predictor (UKP) [21], cubature Kalman predictor (CKP) [22] and particle predictor (PP) [23]. It is well known that smoothing is, in general, more accurate than the corresponding filtering. To achieve higher estimation accuracy, many smoothers have been proposed, such as the Kalman smoother (KS) [24], extended Kalman smoother (EKS) [25], unscented Kalman smoother (UKS) [26], cubature Kalman smoother (CKS) [27] and particle smoother (PS) [28].
Despite the significant progress in nonlinear filtering, prediction and smoothing, existing methods mainly deal with nonlinear regular dynamic systems, in which the current measurement depends only on the current state directly. However, in practice, many systems may have two-adjacent-states dependent (TASD) measurements. For example, nonlinear systems with autocorrelated measurement noises, or with measurement and process noises cross-correlated at one time step apart [24], can be regarded as systems with TASD measurements. These types of systems are common in practice. For example, in many radar systems, the autocorrelations of measurement noises cannot be ignored [29,30] due to the high measurement frequency. In satellite navigation systems, multipath errors and weak GPS signals make the measurement noise behave as integrated white noise, i.e., colored noise [31]. Further, in signal processing, measurement noises are usually autocorrelated because of time-varying fading and band-limited channels [32,33]. In sensor fusion, the time alignment of different sensors causes dependence between the process and measurement noises [34]. In target-tracking systems, the discretization of continuous systems can induce cross-correlation between the process and measurement noises at one time step apart [35]. In aircraft inertial navigation systems, the vibration of the aircraft has a common effect on the sources of the process and measurement noises, which results in cross-correlation between them [36]. For these systems, some estimators have been studied. To deal with nonlinear systems whose autocorrelated measurement noise is modeled as a first-order autoregressive sequence, a nonlinear Gaussian filter and a nonlinear Gaussian smoother were proposed in [37,38], respectively; both whiten the measurement noise by reformulating a TASD measurement equation. A PF was proposed for nonlinear systems with dependent noises [39], in which the measurement depends on two adjacent states due to the cross-correlation between the process and measurement noises. For nonlinear systems with process and measurement noises cross-correlated at one time step apart, a Gaussian approximate filter and smoother were proposed in [40].
As is well known, assessing the performance of estimators is of great significance. The posterior Cramér-Rao lower bound (PCRLB), defined as the inverse of the Fisher information matrix (FIM) and also called the Bayesian Cramér-Rao lower bound (BCRLB), provides a lower bound on the performance of estimators for nonlinear systems [41,42] (Ch. 4 of [43]). In [44,45], a recursive BCRLB was developed for the filtering of nonlinear regular dynamic systems, in which the current measurement is only dependent on the current state directly. Moreover, the BCRLBs for the prediction and smoothing of nonlinear regular dynamic systems were proposed in [45]. Compared with the conventional BCRLB, a new concept called the conditional PCRLB (CPCRLB) was proposed in [46]. This CPCRLB is conditioned on the actual past measurements and provides an effective online performance bound for filters. In [47], another two CPCRLBs, i.e., the A-CPCRLB and D-CPCRLB, were proposed. Since the auxiliary FIM is discarded, the A-CPCRLB in [47] is more compact than the CPCRLB proposed in [46]. The D-CPCRLB in [47] is not recursive and directly approximates the exact bound through numerical computations.
Some recent work has assessed the filtering performance of TASD systems. In [48], a BCRLB was provided for the filtering of nonlinear systems with higher-order colored noises, and the BCRLB was further presented for a special case in which the measurement model is driven by first-order autocorrelated Gaussian noises. In [49], BCRLBs were proposed for the filtering of nonlinear systems with two types of dependence structures, of which the type II dependency can lead to TASD measurements. However, neither [48] nor [49] generalized these BCRLBs to the general form of TASD systems. In addition, the recursive BCRLBs for the prediction and smoothing of TASD systems were not covered in [48,49]. For the general form of TASD systems, a CPCRLB for filtering was developed in [50], which depends on the actual measurements. Compared with the BCRLB, this CPCRLB can provide performance evaluations for a particular state realization of a nonlinear system and better criteria for online sensor selection. In practice, TASD systems may incorporate unknown nonrandom parameters. For the performance evaluation of joint state and parameter estimation in nonlinear parametric TASD systems, a recursive joint CRLB (JCRLB) was studied in [51].
The BCRLB is as important as the CPCRLB. It depends only on the structures and parameters of the dynamic and measurement models, not on specific measurement realizations; as a result, it can be computed offline. The BCRLB for the filtering of the general form of TASD systems has been obtained as a special case of the JCRLB in [51] when the parameter set is empty. However, the BCRLBs for the prediction and smoothing of the general form of TASD systems have not been studied yet. This paper aims to obtain the BCRLBs for the prediction and smoothing of such nonlinear systems. First, we develop the recursive BCRLBs for the prediction and smoothing of general TASD systems. A comparison between the BCRLBs for TASD systems and regular systems is also made, and specific and simplified forms of the BCRLBs for additive Gaussian noise cases are provided. Second, we study specific BCRLBs for the prediction and smoothing of two special types of TASD systems, with autocorrelated measurement noises and with process and measurement noises cross-correlated at one time step apart, respectively.
The rest of this paper is organized as follows. Section 2 formulates the BCRLB problem for nonlinear systems with TASD measurements. Section 3 develops the recursions of BCRLB for the prediction and smoothing of general TASD systems. Section 4 presents specific BCRLBs for two special types of nonlinear systems with TASD measurements. In Section 5, some illustrative examples in radar target tracking are provided to verify the effectiveness of the proposed BCRLBs. Section 6 concludes the paper.
2. Problem Formulation
Consider the following general discrete-time nonlinear systems with TASD measurements
where and are the state and measurement at time k, respectively; the process noise and the measurement noise are mutually independent white sequences with probability density functions (PDFs) and , respectively. We assume that the initial state is independent of the process and measurement noise sequences with PDF .
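For reference, a general TASD system of the kind considered here can be written in the following form; the notation ($x_k$, $z_k$, $w_{k-1}$, $v_k$, $f_{k-1}$, $h_k$) is illustrative and is not necessarily that of the original Equations (1) and (2).

```latex
% Illustrative form of a general TASD system; the symbols are assumed, not taken verbatim from (1)-(2).
\begin{align}
  x_k &= f_{k-1}\left(x_{k-1}, w_{k-1}\right), \\  % dynamic equation
  z_k &= h_k\left(x_k, x_{k-1}, v_k\right),        % TASD measurement equation
\end{align}
% where w_{k-1} and v_k are mutually independent white noise sequences, and the
% initial state x_0 is independent of both noise sequences.
```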
Definition 1.
Define and as the accumulated state and measurement up to time k, respectively. The superscript “′” denotes the transpose of a vector or matrix.
Definition 2.
Define and as estimates of and given the measurement , respectively. are state estimates for filtering, prediction and smoothing when , and , respectively.
Definition 3.
The mean square error (MSE) of is defined as
The MSE of is defined as
where and are the associated estimation errors, and are the joint PDFs. are MSEs for filtering, prediction and smoothing when , and , respectively.
Definition 4.
Define the FIM about the accumulated state as
where Δ denotes the second-order derivative operator, i.e., , and ∇ denotes the gradient operator.
Lemma 1.
The MSE of satisfying certain regularity conditions as in [41] is bounded from below by the inverse of as [41,45]
where the inequality means that the difference is a positive semidefinite matrix.
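In the same illustrative notation, Definitions 3 and 4 and Lemma 1 can be summarized as follows.

```latex
% FIM of the accumulated state and the associated Bayesian bound (illustrative notation).
\begin{align}
  J\left(x^{k}\right)
    &= \mathrm{E}\left[-\Delta_{x^{k}}^{x^{k}} \ln p\left(x^{k}, z^{k}\right)\right],
    \qquad \Delta_{a}^{b} \triangleq \nabla_{a}\nabla_{b}', \\
  \mathrm{MSE}\left(\hat{x}^{k}\right)
    &= \mathrm{E}\left[\left(\hat{x}^{k}-x^{k}\right)\left(\hat{x}^{k}-x^{k}\right)'\right]
    \geq \left[J\left(x^{k}\right)\right]^{-1},
\end{align}
% where the inequality means that the difference of the two sides is positive semidefinite.
```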
Definition 5.
Define as the right-lower block of and as the FIM about , where “n” is the dimension of the state . are FIMs for filtering, prediction and smoothing when , and , respectively.
Lemma 2.
The MSE of satisfying certain regularity conditions as in [41] is bounded from below by the inverse of as [41,44]
Compared with regular systems, the measurement of the nonlinear systems (1) and (2) not only depends on the current state but also the most recent previous state directly. The main goal of this paper is to obtain the recursive FIMs for the prediction and smoothing of nonlinear TASD systems without manipulating the larger matrix .
3. Recursive BCRLBs for Prediction and Smoothing
3.1. BCRLBs for General TASD Systems
For simplicity, the following notations are introduced in advance
where , and .
To initialize the recursion for FIMs of prediction and smoothing, the recursion of the FIM for filtering is required. This can be obtained from Corollary 3 of [51], as shown in the following lemma.
Lemma 3.
The FIM for filtering obeys the following recursion [51]
with .
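For orientation, the filtering recursion of Lemma 3 has the same Schur-complement structure as the regular-system bound in [44], with the TASD likelihood $p(z_{k+1}\mid x_{k+1},x_k)$ contributing to every block; a sketch in illustrative notation (cf. [50,51]) is

```latex
% Sketch of the TASD filtering FIM recursion; notation is illustrative.
\begin{align}
  J_{k+1} &= D_k^{22} - D_k^{21}\left(J_k + D_k^{11}\right)^{-1} D_k^{12}, \\
  D_k^{11} &= \mathrm{E}\left[-\Delta_{x_k}^{x_k}\ln p(x_{k+1}\mid x_k)
              -\Delta_{x_k}^{x_k}\ln p(z_{k+1}\mid x_{k+1},x_k)\right], \\
  D_k^{12} &= \mathrm{E}\left[-\Delta_{x_k}^{x_{k+1}}\ln p(x_{k+1}\mid x_k)
              -\Delta_{x_k}^{x_{k+1}}\ln p(z_{k+1}\mid x_{k+1},x_k)\right] = \left(D_k^{21}\right)', \\
  D_k^{22} &= \mathrm{E}\left[-\Delta_{x_{k+1}}^{x_{k+1}}\ln p(x_{k+1}\mid x_k)
              -\Delta_{x_{k+1}}^{x_{k+1}}\ln p(z_{k+1}\mid x_{k+1},x_k)\right].
\end{align}
```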
3.1.1. BCRLB for Prediction
Theorem 1.
The FIMs and are related to each other through
for .
Proof.
See Appendix A. □
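Numerically, the prediction recursion of Theorem 1 is iterated in the same Schur-complement fashion. The following Python sketch assumes the dynamics-only information blocks $D^{11}$, $D^{12}$, $D^{22}$ are supplied by the user; the function and argument names are hypothetical.

```python
import numpy as np

def predict_fim(J_filter, dyn_blocks, steps):
    """Iterate a Schur-complement-type prediction recursion for the FIM (sketch).

    J_filter   : filtering FIM J_{k|k} used to start the recursion (n x n array).
    dyn_blocks : callable j -> (D11, D12, D22), dynamics-only information blocks
                 at prediction step j (user-supplied; hypothetical interface).
    steps      : number of prediction steps.
    Returns [J_{k+1|k}, ..., J_{k+steps|k}].
    """
    J = J_filter
    fims = []
    for j in range(1, steps + 1):
        D11, D12, D22 = dyn_blocks(j)
        # J_{k+j|k} = D22 - D21 (J_{k+j-1|k} + D11)^{-1} D12, assuming D21 = D12'
        J = D22 - D12.T @ np.linalg.solve(J + D11, D12)
        fims.append(J)
    return fims

# The BCRLB for j-step prediction is the inverse of the corresponding FIM,
# e.g., np.linalg.inv(fims[j - 1]).
```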
3.1.2. BCRLB for Smoothing
Let , be an estimate of the accumulated state consisting of the smoothing estimates , and the filtering estimate . The MSE for is bounded from below by the inverse of . Thus contains the smoothing BCRLBs , , and filtering BCRLB on its main diagonal. Then we have
where zero blocks have been left empty, , , and ‘diag’ denotes a diagonal matrix [52].
Theorem 2.
The FIM for smoothing can be recursively obtained as
for . This backward recursion is initialized by the FIM for filtering.
Proof.
See Appendix B. □
3.2. Comparison with the BCRLBs for Nonlinear Regular Systems
For nonlinear regular systems, the measurement depends only on the current state directly, i.e., . Clearly, nonlinear regular systems are special cases of nonlinear TASD systems (2) since
As a result, the likelihood function for TASD systems in (3) will be reduced to for regular systems. Correspondingly, , and in (3) will be reduced to
Substituting , and in (10) into (8), the recursion of the FIM for smoothing of TASD systems will be reduced to
This is exactly the recursion of the FIM for smoothing of nonlinear regular systems in [45]. That is, the recursion of the FIM for the smoothing of nonlinear regular systems is a special case of the recursion of the FIM for the smoothing of nonlinear TASD systems.
For the FIM of prediction, it can be seen that the FIMs for prediction in (5) of TASD systems are governed by the same recursive equations as the FIMs for regular systems in [45], except that , , is different. This is because predictions for both TASD systems and regular systems only depend on the same dynamic Equation (1).
Next, we study specific and simplified BCRLBs for TASD systems with additive Gaussian noises.
3.3. BCRLBs for TASD Systems with Additive Gaussian Noise
Assume that the nonlinear system (1) and (2) is driven by additive Gaussian noises as
where , and the covariance matrices and are invertible. Then the ’s and ’s of (3) used in the recursions of FIMs for prediction and smoothing will be simplified to
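As a numerical illustration, the additive-Gaussian blocks above can be approximated by Monte Carlo averaging of Jacobian products, assuming they reduce to expectations of the form $\mathrm{E}[F'Q^{-1}F]$, $\mathrm{E}[H'R^{-1}H]$, etc., analogous to the regular-system expressions in [45] but with an additional Jacobian of $h$ with respect to the previous state. The Python sketch below uses hypothetical callables for these Jacobians.

```python
import numpy as np

def gaussian_tasd_blocks(samples, F_jac, Hc_jac, Hp_jac, Qinv, Rinv):
    """Monte Carlo approximation of the additive-Gaussian information blocks (sketch).

    samples : iterable of (x_k, x_kp1) true-state pairs drawn from the state model.
    F_jac   : x_k -> Jacobian of the dynamic function f_k at x_k.
    Hc_jac  : (x_kp1, x_k) -> Jacobian of h_{k+1} w.r.t. the current state x_{k+1}.
    Hp_jac  : (x_kp1, x_k) -> Jacobian of h_{k+1} w.r.t. the previous state x_k.
    Qinv, Rinv : inverse process and measurement noise covariances.
    """
    D11, D12, D22, n = 0.0, 0.0, 0.0, 0
    for x_k, x_kp1 in samples:
        F = F_jac(x_k)
        Hc = Hc_jac(x_kp1, x_k)   # dependence of the measurement on x_{k+1}
        Hp = Hp_jac(x_kp1, x_k)   # dependence of the measurement on x_k
        D11 = D11 + F.T @ Qinv @ F + Hp.T @ Rinv @ Hp
        D12 = D12 - F.T @ Qinv + Hp.T @ Rinv @ Hc
        D22 = D22 + Qinv + Hc.T @ Rinv @ Hc
        n += 1
    return D11 / n, D12 / n, D22 / n
```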
Assume that the system (12) and (13) is further reduced to a linear Gaussian system as
where , and the covariance matrices and are invertible. Then the ’s and ’s of (3) used in the recursions of FIMs for prediction and smoothing will be further simplified to
Remark 1.
If we rewrite the linear TASD systems (15) and (16) as the following augmented form
where zero blocks have been left empty and , then the process noise in (18) will be correlated with its adjacent noises and , but uncorrelated with . For this special type of linear system, how to obtain its BCRLBs is still unknown.
4. Recursive BCRLBs for Two Special Types of Nonlinear TASD Systems
Two special types of nonlinear systems, in which the measurement noises are autocorrelated or cross-correlated with the process noises at one time step apart, can be regarded as the nonlinear TASD systems described in (1) and (2). These two types of nonlinear systems are very common in many engineering applications. For example, in target-tracking systems, the high radar measurement frequency will result in autocorrelated measurement noises [29], and the discretization of continuous systems can induce cross-correlation between the process and measurement noises at one time step apart [35]. In navigation systems, multipath errors and weak GPS signals will make measurement noises autocorrelated [31], and the vibration of the aircraft may result in cross-correlation between the process and measurement noises [36]. Next, specific recursive BCRLBs for the prediction and smoothing of these two types of systems are obtained by applying the theorems in Section 3.
4.1. BCRLBs for Systems with Autocorrelated Measurement Noises
Consider the following nonlinear system
where is a nonlinear measurement function, is autocorrelated measurement noise satisfying a first-order autoregressive (AR) model [38]
where is the known correlation parameter, and the process noise and the driving noise are mutually independent white noise sequences, both of which are also independent of the initial state.
To obtain the BCRLBs for the prediction and smoothing of nonlinear systems with autocorrelated measurement noises, a TASD measurement equation is first constructed by differencing two adjacent measurements as
Then, we can get a pseudo measurement equation depending on two adjacent states as
where
Clearly, the pseudo measurement noise in (24) is white and independent of the process noise and the initial state .
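In illustrative notation, assuming the measurement noise enters additively, i.e., $z_k = h_k(x_k) + v_k$ with $v_k = \Psi_k v_{k-1} + \zeta_k$, the differencing step above reads

```latex
% Differencing two adjacent measurements (illustrative notation; additive noise assumed).
\begin{align}
  \bar{z}_k \triangleq z_k - \Psi_k z_{k-1}
    = h_k(x_k) - \Psi_k h_{k-1}(x_{k-1}) + \underbrace{v_k - \Psi_k v_{k-1}}_{=\,\zeta_k},
\end{align}
% so the pseudo measurement \bar{z}_k depends on the two adjacent states x_k and x_{k-1},
% and its noise \zeta_k is white and independent of the process noise.
```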
From the above, we know that the system (20)–(22) is equivalent to the TASD system (20) and (24). Applying Theorems 1 and 2 to this TASD system, we can obtain the BCRLBs for the prediction and smoothing of nonlinear systems with autocorrelated measurement noises.
Next, we discuss some specific and simplified recursions of FIMs for the prediction and smoothing of nonlinear and linear systems with autocorrelated measurement noises when the noises are Gaussian.
Theorem 3.
Proof.
See Appendix C. □
Corollary 1.
Then the ’s and ’s of (25) in Theorem 3 will be simplified to
Theorem 4.
Proof.
See Appendix D. □
4.2. BCRLBs for Systems with Noises Cross-Correlated at One Time Step Apart
Consider the following nonlinear system
where , and they are cross-correlated at one time step apart [39], satisfying , where is the Kronecker delta function. Both and are independent of the initial state .
To obtain the BCRLBs for the prediction and smoothing of nonlinear systems with noises cross-correlated at one time step apart, as in [50], a TASD measurement equation is constructed as
where
Clearly, the pseudo measurement noise is uncorrelated with the process noise , and , .
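One common way to arrive at such a pseudo measurement equation, assuming additive noises with $x_k = f_{k-1}(x_{k-1}) + w_{k-1}$, $z_k = h_k(x_k) + v_k$ and $\mathrm{E}[w_{k-1}v_k'] = S_k$, is the decorrelation step sketched below; it may differ in detail from the exact form of (33).

```latex
% Decorrelation-based construction of a TASD pseudo measurement (illustrative sketch).
\begin{align}
  \bar{v}_k &\triangleq v_k - S_k' Q_{k-1}^{-1} w_{k-1}
     \quad\Longrightarrow\quad \mathrm{E}\left[w_{k-1}\bar{v}_k'\right] = 0, \\
  z_k &= h_k(x_k) + S_k' Q_{k-1}^{-1}\left(x_k - f_{k-1}(x_{k-1})\right) + \bar{v}_k,
\end{align}
% i.e., a measurement equation that depends on the two adjacent states x_k and x_{k-1}
% and is driven by the whitened noise \bar{v}_k.
```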
Proposition 1.
Proof.
First, from the assumption of noise independence, we know that is independent of and . Therefore, it is obvious that is independent of . Second, because the state in is only determined by , which is independent of , the state is independent of . Therefore, is independent of the pseudo measurement noise . This completes the proof. □
Proposition 1 shows that the reconstructed TASD system (31) and (33) satisfies the independence assumption of the TASD systems in Section 2.
From the above, we know that the system (31) and (32) is equivalent to the TASD system (31) and (33). Applying Theorems 1 and 2 to this TASD system, the BCRLBs for the prediction and smoothing of nonlinear systems in which the measurement noise is cross-correlated with the process noise at one time step apart can be obtained.
Next, we discuss some specific and simplified recursions of FIMs for the prediction and smoothing of nonlinear and linear systems with Gaussian process and measurement noises cross-correlated at one time step apart.
Theorem 5.
Corollary 2.
Then the ’s and ’s of (34) in Theorem 5 will be simplified to
Theorem 6.
Proof.
See Appendix E. □
5. Illustrative Examples
In this section, illustrative examples in radar target tracking are presented to demonstrate the effectiveness of the proposed recursive BCRLBs for the prediction and smoothing of nonlinear TASD systems.
Consider a target with nearly constant turn (NCT) motion in a 2D plane [14,40,48,53]. The target motion model is
where is the state vector, s is the sampling interval, s is the turning rate and the process noise with [53]
where ms is the power spectral density.
Assume that a 2D radar is located at the origin of the plane. The measurement model is
where the radar measurement vector is composed of the range measurement and bearing measurement , and is the measurement noise.
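To make the simulation setup concrete, the following Python sketch gives a common parameterization of the NCT dynamics and of the range/bearing radar measurement in (41); the numerical values and the state ordering are placeholders rather than the exact settings used in this example.

```python
import numpy as np

# Nearly-constant-turn (NCT) dynamics and a 2D range/bearing radar measurement in a
# common parameterization [53]; T, omega and the state ordering are placeholders.
T = 1.0                  # sampling interval (s), placeholder value
omega = np.deg2rad(3.0)  # turn rate (rad/s), placeholder value

def nct_transition(T, omega):
    """State transition matrix for the state [x, vx, y, vy]' under a constant turn."""
    s, c = np.sin(omega * T), np.cos(omega * T)
    return np.array([[1, s / omega,       0, -(1 - c) / omega],
                     [0, c,               0, -s],
                     [0, (1 - c) / omega, 1, s / omega],
                     [0, s,               0, c]])

def radar_measurement(x):
    """Range and bearing of the target as seen from a radar at the origin."""
    return np.array([np.hypot(x[0], x[2]), np.arctan2(x[2], x[0])])

def radar_jacobian(x):
    """Jacobian of the range/bearing measurement w.r.t. [x, vx, y, vy]'."""
    px, py = x[0], x[2]
    r2 = px**2 + py**2
    r = np.sqrt(r2)
    return np.array([[px / r, 0, py / r, 0],
                     [-py / r2, 0, px / r2, 0]])
```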
5.1. Example 1: Autocorrelated Measurement Noises
In this example, we assume that the measurement noise sequence in (41) is first-order autocorrelated and modeled as
where is an identity matrix, the driving noise with diag, m and mrad. Further, and are mutually independent. The initial state with
To show the effectiveness of the proposed BCRLBs in this radar target tracking example with autocorrelated measurement noises, we use the cubature Kalman filter (CKF) [37], cubature Kalman predictor (CKP) [37] and cubature Kalman smoother (CKS) [38] to obtain the state estimates. These estimators generate an augmented measurement to decorrelate the autocorrelated measurement noises instead of using the first-order linearization method. Meanwhile, these Gaussian approximate estimators can obtain accurate estimates with very low computational cost, especially in the high-dimensional case with additive Gaussian noises. The RMSEs and BCRLBs are obtained over 500 Monte Carlo runs.
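The comparison in Figures 1–3 can be reproduced along the following lines: run the estimator over many Monte Carlo trajectories, compute the RMSE at each time step, and compare it with the square root of the corresponding diagonal entries of the inverse FIM. In the Python sketch below, `run_estimator` and `fim_recursion` are hypothetical stand-ins for the CKF/CKP/CKS and for the recursions of Section 3.

```python
import numpy as np

def rmse_vs_bcrlb(run_estimator, fim_recursion, n_runs=500, n_steps=100):
    """Compare Monte Carlo position RMSE with the BCRLB (sketch; callables are hypothetical).

    run_estimator : run_id -> (est, truth), arrays of shape (n_steps, n_state).
    fim_recursion : () -> list of n_steps FIMs, each n_state x n_state.
    """
    sq_err = np.zeros(n_steps)
    for r in range(n_runs):
        est, truth = run_estimator(r)
        # position error, assuming the state ordering [x, vx, y, vy]
        sq_err += (est[:, 0] - truth[:, 0]) ** 2 + (est[:, 2] - truth[:, 2]) ** 2
    rmse_pos = np.sqrt(sq_err / n_runs)

    # position bound: square root of the sum of the corresponding BCRLB diagonal entries
    bounds = [np.linalg.inv(J) for J in fim_recursion()]
    bcrlb_pos = np.sqrt([B[0, 0] + B[2, 2] for B in bounds])
    return rmse_pos, bcrlb_pos
```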
Figure 1 shows the RMSE versus for position and velocity estimation. It can be seen that the proposed BCRLBs provide lower bounds to the MSEs of CKP and CKS. Moreover, the gaps between the RMSEs of CKP and CKS and the s for one-step prediction and fixed-interval smoothing are very small, which means that the CKP and CKS are close to being efficient. In addition, the for one-step prediction lies above the for filtering, and the RMSE of CKP lies above the RMSE of CKF. This is because prediction only depends on the dynamic model, whereas filtering depends on both the dynamic and measurement models. Since smoothing uses both past and future information, the for fixed-interval smoothing is lower than the for filtering and the RMSE of CKS is lower than the RMSE of CKF.
Figure 1.
RMSE versus in Example 1.
Figure 2 shows the s for multi-step prediction, i.e., 1-step to 5-step prediction. It can be seen that the more steps we predict ahead, the larger the for prediction is. This is because if we take more prediction steps, the predictions for position and velocity will be less accurate.
Figure 2.
s for prediction in Example 1.
Figure 3 shows the s for fixed-lag and fixed-interval smoothing. It can be seen that the for 1-step fixed-lag smoothing is the worst and the for fixed-interval smoothing is the best. This is because the smoothing estimation becomes more and more accurate as the length of the data interval increases.
Figure 3.
s for smoothing in Example 1.
5.2. Example 2: Cross-Correlated Process and Measurement Noises at One Time Step Apart
In this example, we assume that the process noise sequence in (39) is cross-correlated with the measurement noise sequence in (41) at one time step apart. The cross-correlation covariance is . The distribution of is with diag, m and mrad. The initial state with
To show the effectiveness of the proposed BCRLBs in this radar target tracking example with process and measurement noises cross-correlated at one time step apart, we use the cubature Kalman filter (CKF), cubature Kalman predictor (CKP) and cubature Kalman smoother (CKS) in [40] to obtain the state estimates. These estimators decorrelate the cross-correlation between the process and measurement noises by reconstructing a pseudo measurement equation. Compared with the Monte Carlo approximation method, these Gaussian approximate estimators can give an effective balance between estimation accuracy and computational cost. A total of 500 Monte Carlo runs are performed to obtain the RMSEs and BCRLBs.
Figure 4 shows the RMSEs of CKF, CKP and CKS versus three types of s, i.e., for filtering, one-step prediction and fixed-interval smoothing. It can be seen that the RMSEs of CKP and CKS are bounded from below by their corresponding s. It can also be observed that the gaps between the RMSEs of CKP and CKS and their corresponding s are very small. This indicates that these estimators are close to being efficient. Moreover, we can see that the for one-step prediction lies above the for filtering, and the RMSE of CKP lies above the RMSE of CKF because prediction uses less information than filtering. Since smoothing uses data within the whole interval, the for fixed-interval smoothing is lower than the for filtering and the RMSE of CKS is lower than the RMSE of CKF.
Figure 4.
RMSE versus in Example 2.
Figure 5 shows the s for multi-step prediction. We can see that the for prediction grows as the prediction step increases. This is because if we predict more steps ahead, the predictions for position and velocity will be less accurate.
Figure 5.
s for prediction in Example 2.
Figure 6 shows the s for fixed-lag and fixed-interval smoothing. Clearly, smoothing becomes more accurate as the length of the data interval increases. Hence, the for 1-step fixed-lag smoothing is the worst. In contrast, the for fixed-interval smoothing is the best.
Figure 6.
s for smoothing in Example 2.
6. Conclusions
In this paper, we have proposed recursive BCRLBs for the prediction and smoothing of nonlinear dynamic systems with TASD measurements, i.e., the current measurement depends on both the current and the most recent previous state directly. A comparison with the recursive BCRLBs for nonlinear regular systems, in which the current measurement only depends on the current state directly, has been made. It is found that the BCRLB for the smoothing of regular systems is a special case of the newly proposed BCRLB, and the recursive BCRLBs for the prediction of TASD systems have the same forms as the BCRLBs for the prediction of regular systems except that the FIMs are different. This is because prediction only depends on the dynamic model, which is the same for both of them. Specific and simplified forms of the BCRLBs for the additive Gaussian noise cases have also been given. In addition, the recursive BCRLBs for the prediction and smoothing of two special types of nonlinear systems with TASD measurements, in which the original measurement noises are autocorrelated or cross-correlated with the process noises at one time step apart, have been presented, respectively. It is proven that the optimal linear predictors are efficient estimators if these two special types of nonlinear TASD systems are linear Gaussian.
Author Contributions
Conceptualization, X.L., Z.D. and M.M.; methodology, X.L., Z.D. and M.M.; software, X.L.; validation, X.L., Z.D. and Q.T.; writing—original draft preparation, X.L., Z.D. and M.M.; writing—review and editing, Z.D., Q.T. and M.M. All authors have read and agreed to the published version of the manuscript.
Funding
This work is supported in part by the National Key Research and Development Plan under Grants 2021YFC2202600 and 2021YFC2202603, and the National Natural Science Foundation of China through Grants 62033010, 61773147 and 61673317.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
The authors declare that the data that support the findings of this study are available from the authors upon request.
Conflicts of Interest
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
| BCRLB | Bayesian Cramér-Rao lower bound |
| CPCRLB | conditional posterior Cramér-Rao lower bound |
| CKF | cubature Kalman filter |
| CKP | cubature Kalman predictor |
| CKS | cubature Kalman smoother |
| FIM | Fisher information matrix |
| MSE | mean square error |
| MMSE | minimum mean squared error |
| JCRLB | joint Cramér-Rao lower bound |
| PCRLB | posterior Cramér-Rao lower bound |
| PDF | probability density function |
| RMSE | root mean square error |
| TASD | two-adjacent-states dependent |
Appendix A. Proof of Theorem 1
For the FIM , the joint PDF of and is
Partition as and as
Since is equal to the right-lower block of , from the inversion of a partitioned matrix [24], the FIM about can be obtained as
Partition as and as
where
Similarly, we can obtain
Then, can be rewritten as
Since the prediction FIM matrix is the inverse of the right-lower submatrix of , from (A7), we have
This completes the proof.
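The partitioned-matrix inversion fact used above is, in one standard form,

```latex
% Lower-right block of the inverse of a partitioned symmetric matrix (Schur complement).
\begin{align}
  \begin{bmatrix} A & B \\ B' & C \end{bmatrix}^{-1}
  =
  \begin{bmatrix} \ast & \ast \\ \ast & \left(C - B' A^{-1} B\right)^{-1} \end{bmatrix},
\end{align}
% so the information about the most recent states is obtained as a Schur complement,
% which is what produces the recursive forms in Theorems 1 and 2.
```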
Appendix B. Proof of Theorem 2
For the FIM , the joint PDF of and at arbitrary time k is
Similar to (A7), by using (A8), we can partition as
where zero blocks have been left empty, , , and the block matrix is
Since is the lower-right block of defined in (7), we have
Then using the matrix inversion lemma [24], the FIM is given by
This completes the proof.
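The matrix inversion lemma invoked above is, in one common form,

```latex
% Matrix inversion lemma (Woodbury identity), as used in the smoothing recursion.
\begin{align}
  \left(A + B C D\right)^{-1}
  = A^{-1} - A^{-1} B \left(C^{-1} + D A^{-1} B\right)^{-1} D A^{-1}.
\end{align}
```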
Appendix C. Proof of Theorem 3
From the assumption that the noises are additive Gaussian white noises, we have
where is a constant.
Thus, the partial derivatives of are
The remaining , , , and can be obtained similarly. This completes the proof.
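In illustrative notation, with additive Gaussian noise the negative log-likelihood is a quadratic form and the information blocks follow from its second derivatives. For example, for a pseudo measurement $\bar{z}_k = \bar{h}_k(x_k, x_{k-1}) + \zeta_k$ with $\zeta_k \sim \mathcal{N}(0, \bar{R}_k)$ and $H_a$ denoting the Jacobian of $\bar{h}_k$ with respect to the argument $a$,

```latex
% Second derivatives of the Gaussian negative log-likelihood (illustrative notation).
\begin{align}
  -\ln p\left(\bar{z}_k \mid x_k, x_{k-1}\right)
    &= c + \tfrac{1}{2}\left(\bar{z}_k - \bar{h}_k\right)' \bar{R}_k^{-1}
           \left(\bar{z}_k - \bar{h}_k\right), \\
  \mathrm{E}\left[-\Delta_{x_{k-1}}^{x_{k}}\ln p\left(\bar{z}_k \mid x_k, x_{k-1}\right)\right]
    &= \mathrm{E}\left[H_{x_{k-1}}' \,\bar{R}_k^{-1}\, H_{x_k}\right],
\end{align}
% with the remaining blocks obtained analogously from the other argument pairs
% (the terms involving second derivatives of \bar{h}_k vanish in expectation).
```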
Appendix D. Proof of Theorem 4
For simplicity, we introduce
Let
The inverse of in (A30) can be rewritten as
Rewrite in (A30) as
Similarly, rewrite as
Thus, the inverse of becomes
Using (6) and Corollary 1, the FIM for one-step prediction can be obtained as
For the optimal one-step predictor, the MSE matrix is given by
Then we have
Using (6) and Corollary 1, the FIM for two-step prediction can be written as
For the optimal two-step predictor, one has
and
Similarly, we can prove that , . This completes the proof.
Appendix E. Proof of Theorem 6
Using (6) and Corollary 2, the FIM for one-step prediction can be written as
For the optimal one-step predictor, the MSE matrix is given by
Using (6) and Corollary 2, the FIM for two-step prediction is given by
For the optimal two-step predictor, one has
and
Similarly, we can prove that , . This completes the proof.
References
- Bar-Shalom, Y.; Willett, P.; Tian, X. Tracking and Data Fusion: A Handbook of Algorithms; YBS Publishing: Storrs, CT, USA, 2011. [Google Scholar]
- Mallick, M.; Tian, X.Q.; Zhu, Y.; Morelande, M. Angle-only filtering of a maneuvering target in 3D. Sensors 2022, 22, 1422. [Google Scholar] [CrossRef]
- Li, Z.H.; Xu, B.; Yang, J.; Song, J.S. A steady-state Kalman predictor-based filtering strategy for non-overlapping sub-band spectral estimation. Sensors 2015, 15, 110–134. [Google Scholar] [CrossRef]
- Lu, X.D.; Xie, Y.T.; Zhou, J. Improved spatial registration and target tracking method for sensors on multiple missiles. Sensors 2018, 18, 1723. [Google Scholar] [CrossRef] [Green Version]
- Ntemi, M.; Kotropoulos, C. Prediction methods for time evolving dyadic processes. In Proceedings of the 26th European Signal Processing Conference, Rome, Italy, 3–7 September 2018; pp. 2588–2592. [Google Scholar]
- Chen, G.; Meng, X.; Wang, Y.; Zhang, Y.; Tian, P.; Yang, H. Integrated WiFi/PDR/Smartphone using an unscented Kalman filter algorithm for 3D indoor localization. Sensors 2015, 15, 24595–24614. [Google Scholar] [CrossRef] [Green Version]
- Xu, Y.; Chen, X.Y.; Li, Q.H. Autonomous integrated navigation for indoor robots utilizing on-line iterated extended Rauch-Tung-Striebel smoothing. Sensors 2013, 13, 15937–15953. [Google Scholar] [CrossRef] [Green Version]
- Kalman, R.E. A new approach to linear filtering and prediction problems. Trans. ASME J. Basic Eng. 1960, 82, 35–45. [Google Scholar] [CrossRef] [Green Version]
- Schmidt, S.F. The Kalman filter—Its recognition and development for aerospace applications. J. Guid. Control Dyn. 1981, 4, 4–7. [Google Scholar] [CrossRef]
- Norgaard, M.; Poulsen, N.; Ravn, O. New developments in state estimation of nonlinear systems. Automatica 2000, 36, 1627–1638. [Google Scholar] [CrossRef]
- Julier, S.J.; Uhlmann, J.K. Unscented filtering and nonlinear estimation. Proc. IEEE 2004, 92, 401–422. [Google Scholar] [CrossRef] [Green Version]
- Julier, S.; Uhlmann, J.; Durrant-Whyte, H.F. A new method for the nonlinear transformation of means and covariances in filters and estimators. IEEE Trans. Automat. Contr. 2000, 45, 477–482. [Google Scholar] [CrossRef] [Green Version]
- Arasaratnam, I.; Haykin, S.; Elliott, R.J. Discrete-time nonlinear filtering algorithms using Gauss-Hermite quadrature. Proc. IEEE 2007, 95, 953–977. [Google Scholar] [CrossRef]
- Arasaratnam, I.; Haykin, S. Cubature Kalman filters. IEEE Trans. Autom. Control 2009, 54, 1254–1269. [Google Scholar] [CrossRef] [Green Version]
- Liu, H.; Wu, W. Strong tracking spherical simplex-radial cubature Kalman filter for maneuvering target tracking. Sensors 2017, 17, 741. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Li, X.R.; Jilkov, V.P. A survey of maneuvering target tracking: Approximation techniques for nonlinear filtering. In Proceedings of the SPIE Conference on Signal and Data Processing of Small Targets, Orlando, FL, USA, 25 August 2004; pp. 537–550. [Google Scholar]
- Arulampalam, M.S.; Maskell, S.; Gordon, N.; Clapp, T. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. Signal Process. 2002, 50, 174–188. [Google Scholar] [CrossRef] [Green Version]
- Wang, X.; Li, T.; Sun, S.; Corchado, J.M. A survey of recent advances in particle filters and remaining challenges for multitarget tracking. Sensors 2017, 17, 2707. [Google Scholar] [CrossRef] [Green Version]
- Sun, S.L. Optimal and self-tuning information fusion Kalman multi-step predictor. IEEE Trans. Aerosp. Electron. Syst. 2007, 43, 418–427. [Google Scholar] [CrossRef]
- Adnan, R.; Ruslan, F.A.; Samad, A.M.; Zain, Z.M. Extended Kalman filter (EKF) prediction of flood water level. In Proceedings of the 2012 IEEE Control and System Graduate Research Colloquium, Shah Alam, Malaysia, 16–17 July 2012; pp. 171–174. [Google Scholar]
- Tian, X.M.; Cao, Y.P.; Chen, S. Process fault prognosis using a fuzzy-adaptive unscented Kalman predictor. Int. J. Adapt. Control Signal Process. 2011, 25, 813–830. [Google Scholar] [CrossRef] [Green Version]
- Han, M.; Xu, M.L.; Liu, X.X.; Wang, X.Y. Online multivariate time series prediction using SCKF-γESN model. Neurocomputing 2015, 147, 315–323. [Google Scholar] [CrossRef]
- Wang, D.; Yang, F.F.; Tsui, K.L.; Zhou, Q.; Bae, S.J. Remaining useful life prediction of Lithium-Ion batteries based on spherical cubature particle filter. IEEE Trans. Instrum. Meas. 2016, 65, 1282–1291. [Google Scholar] [CrossRef]
- Bar-Shalom, Y.; Li, X.R.; Kirubarajan, T. Estimation with Applications to Tracking and Navigation; John Wiley & Sons, Inc.: New York, NY, USA, 2001. [Google Scholar]
- Leondes, C.T.; Peller, J.B.; Stear, E.B. Nonlinear smoothing theory. IEEE Trans. Syst. Sci. Cybern. 1970, 6, 63–71. [Google Scholar] [CrossRef]
- Sarkka, S. Unscented Rauch-Tung-Striebel smoother. IEEE Trans. Autom. Control 2008, 53, 845–849. [Google Scholar] [CrossRef] [Green Version]
- Arasaratnam, I.; Haykin, S. Cubature Kalman smoothers. Automatica 2011, 47, 2245–2250. [Google Scholar] [CrossRef]
- Lindsten, F.; Bunch, P.; Godsill, S.J.; Schon, T.B. Rao-Blackwellized particle smoothers for mixed linear/nonlinear state-space models. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; pp. 6288–6292. [Google Scholar]
- Wu, W.R.; Chang, D.C. Maneuvering target tracking with colored noise. IEEE Trans. Aerosp. Electron. Syst. 1996, 32, 1311–1320. [Google Scholar]
- Li, Z.; Wang, Y.; Zheng, W. Adaptive consensus-based unscented information filter for tracking target with maneuver and colored noise. Sensors 2019, 19, 3069. [Google Scholar] [CrossRef] [Green Version]
- Yuan, G.N.; Xie, Y.J.; Song, Y.; Liang, H.B. Multipath parameters estimation of weak GPS signal based on new colored noise unscented Kalman filter. In Proceedings of the 2010 IEEE International Conference on Information and Automation, Harbin, China, 20–23 June 2010; pp. 1852–1856. [Google Scholar]
- Jamoos, A.; Grivel, E.; Bobillet, W.; Guidorzi, R. Errors-in-variables based approach for the identification of AR time-varying fading channels. IEEE Signal Process. Lett. 2007, 14, 793–796. [Google Scholar] [CrossRef]
- Mahmoudi, A.; Karimi, M.; Amindavar, H. Parameter estimation of autoregressive signals in presence of colored AR(1) noise as a quadratic eigenvalue problem. Signal Process. 2012, 92, 1151–1156. [Google Scholar] [CrossRef]
- Gustafsson, F.; Saha, S. Particle filtering with dependent noise. In Proceedings of the 13th International Conference on Information Fusion, Edinburgh, UK, 26–29 July 2010; pp. 26–29. [Google Scholar]
- Zuo, D.G.; Han, C.Z.; Wei, R.X.; Lin, Z. Synchronized multi-sensor tracks association and fusion. In Proceedings of the 4th International Conference on Information Fusion, Montreal, QC, Canada, 7–10 August 2001; pp. 1–6. [Google Scholar]
- Chui, C.K.; Chen, G. Kalman Filtering: With Real-Time Applications; Springer: Cham, Switzerland, 2017. [Google Scholar]
- Wang, X.X.; Pan, Q. Nonlinear Gaussian filter with the colored measurement noise. In Proceedings of the 17th International Conference on Information Fusion, Salamanca, Spain, 7–10 July 2014; pp. 1–7. [Google Scholar]
- Wang, X.X.; Liang, Y.; Pan, Q.; Zhao, C.; Yang, F. Nonlinear Gaussian smoother with colored measurement noise. IEEE Trans. Autom. Control 2015, 60, 870–876. [Google Scholar] [CrossRef]
- Saha, S.; Gustafsson, F. Particle filtering with dependent noise processes. IEEE Trans. Signal Process. 2012, 60, 4497–4508. [Google Scholar] [CrossRef] [Green Version]
- Huang, Y.L.; Zhang, Y.G.; Li, N.; Shi, Z. Design of Gaussian approximate filter and smoother for nonlinear systems with correlated noises at one epoch apart. Circ. Syst. Signal Process. 2016, 35, 3981–4008. [Google Scholar] [CrossRef]
- Van Trees, H.L.; Bell, K.L.; Tian, Z. Detection, Estimation, and Modulation Theory, Part I: Detection, Estimation, and Filtering Theory, 2nd ed.; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2013. [Google Scholar]
- Hernandez, M. Performance Bounds for Target Tracking: Computationally Efficient Formulations and Associated Applications. In Integrated Tracking, Classification, and Sensor Management: Theory and Applications; Mallick, M., Krishnamurthy, V., Vo, B.-N., Eds.; Wiley-IEEE Press: Piscataway, NJ, USA, 2012; pp. 255–310. [Google Scholar]
- Ristic, B.; Arulampalam, S.; Gordon, N. Beyond the Kalman Filter; Artech House: Norwood, MA, USA, 2004. [Google Scholar]
- Tichavsky, P.; Muravchik, C.H.; Nehorai, A. Posterior Cramér-Rao bounds for discrete-time nonlinear filtering. IEEE Trans. Signal Process. 1998, 46, 1386–1396. [Google Scholar] [CrossRef] [Green Version]
- Simandl, M.; Kralovec, J.; Tichavsky, P. Filtering, predictive, and smoothing Cramér-Rao bounds for discrete-time nonlinear dynamic systems. Automatica 2001, 37, 1703–1716. [Google Scholar] [CrossRef]
- Zuo, L.; Niu, R.X.; Varshney, P.K. Conditional posterior Cramér-Rao lower bounds for nonlinear sequential Bayesian estimation. IEEE Trans. Signal Process. 2011, 59, 1–14. [Google Scholar] [CrossRef] [Green Version]
- Zheng, Y.J.; Ozdemir, O.; Niu, R.X.; Varshney, P.K. New conditional posterior Cramér-Rao lower bounds for nonlinear sequential Bayesian estimation. IEEE Trans. Signal Process. 2012, 60, 5549–5556. [Google Scholar] [CrossRef]
- Wang, Z.G.; Shen, X.J.; Zhu, Y.M. Posterior Cramér-Rao bounds for nonlinear dynamic system with colored noises. J. Syst. Sci. Complex. 2019, 32, 1526–1543. [Google Scholar] [CrossRef]
- Fritsche, C.; Saha, S.; Gustafsson, F. Bayesian Cramér-Rao bound for nonlinear filtering with dependent noise processes. In Proceedings of the 16th International Conference on Information Fusion, Istanbul, Turkey, 9–12 July 2013; pp. 797–804. [Google Scholar]
- Huang, Y.L.; Zhang, Y.G. A new conditional posterior Cramér-Rao lower bound for a class of nonlinear systems. Int. J. Syst. Sci. 2016, 47, 3206–3218. [Google Scholar] [CrossRef]
- Li, X.Q.; Duan, Z.S.; Hanebeck, U.D. Recursive joint Cramér-Rao lower bound for parametric systems with two-adjacent-states dependent measurements. IET Signal Process. 2021, 15, 221–237. [Google Scholar] [CrossRef]
- Horn, R.A.; Johnson, C.R. Matrix Analysis, 2nd ed.; Cambridge University Press: New York, NY, USA, 2012. [Google Scholar]
- Li, X.R.; Jilkov, V.P. Survey of maneuvering target tracking. Part I: Dynamic models. IEEE Trans. Aerosp. Electron. Syst. 2003, 39, 1333–1364. [Google Scholar]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).