Abstract
Rényi entropy, as a generalization of the Shannon entropy, allows for different averaging of probabilities through a control parameter α. This paper gives a new perspective on the Kalman filter from the viewpoint of Rényi entropy. Firstly, the Rényi entropy is employed to measure the uncertainty of the multivariate Gaussian probability density function. Then, we calculate the temporal derivative of the Rényi entropy of the Kalman filter's mean square error matrix, which is minimized to obtain the Kalman filter's gain. Moreover, the continuous Kalman filter approaches a steady state when the temporal derivative of the Rényi entropy is equal to zero, which means that the Rényi entropy remains stable. Since the temporal derivative of the Rényi entropy is independent of the parameter α and coincides with the temporal derivative of the Shannon entropy, the same conclusion holds for the Shannon entropy. Finally, an experiment of falling body tracking by radar using an unscented Kalman filter (UKF) in noisy conditions and a loosely coupled navigation experiment are performed to demonstrate the effectiveness of the conclusion.
1. Introduction
In the late 1940s, Shannon introduced a logarithmic measure of information [1] and a theory that included information entropy (the literature shows that it is related to Boltzmann entropy in statistical mechanics). The more stochastic and unpredictable a variable is, the larger its entropy is. As a measure of information, entropy has been used in various fields, such as information theory, signal processing, information-theoretic learning [2,3], etc. As a generalization of the Shannon entropy, Rényi entropy, named after Alfréd Rényi [4], allows for different averaging of probabilities through a control parameter α, and is usually used to quantify the diversity, uncertainty, or randomness of random variables. Liang [5] presented the evolutionary entropy equations and the uncertainty estimation for Shannon entropy and relative entropy, which is also called Kullback–Leibler divergence [6], within the framework of dynamical systems. However, in most cases, by a suitable choice of the control parameter α, higher-order Rényi entropy has better properties than the Shannon entropy.
The Kalman filter [7] and its variants have been widely used in navigation, control, tracking, etc. Many works focus on combining different entropies and entropy-like quantities with the original Kalman filter to improve its performance. When the state-space equation is nonlinear, Rényi entropy can be used to measure the nonlinearity [8,9]. Shannon entropy was used to estimate the weight of each particle from the weights of different measurement models for the fusion algorithm in [10]. Quadratic Rényi entropy [11] of the innovation has been used as a minimum entropy criterion under nonlinear and non-Gaussian circumstances [12] in the unscented Kalman filter (UKF) [13] and in finite mixtures [14]. A generalized density evolution equation [15] and polynomial-based nonlinear compensation [16] were used to improve minimum entropy filtering [17]. Relative entropy has been used to measure the similarity between the probability density functions during the recursive processes of the nonlinear filter [18,19]. As for the nonlinear measurement equation with additive Gaussian noise, relative entropy can be derived to measure the nonlinearity of the measurement [20], and can also be used to measure the approximation error of the i-th measurement element in the partitioned update Kalman filter [21]. When the state variables and the measurement variables do not follow a strictly Gaussian distribution, such as in the seamless indoor/outdoor multi-source fusion positioning problem [22], the estimation error can be measured by the relative entropy. Relative entropy can also be used to calculate the number of particles in the unscented particle filter for mobile robot self-localization [23] and to calculate the sample window size in the cubature Kalman filter (CKF) [24] for attitude estimation [25]. Moreover, it has been verified that the original Kalman filter can be derived by maximizing the relative entropy [26]. Meanwhile, the robust maximum correntropy criterion has been adopted as the optimality criterion to derive the maximum correntropy Kalman filter [27,28]. However, there has been no work on the direct connections between the Rényi entropy and Kalman filter theory until now.
In this paper, we propose a new perspective on the Kalman filter from the Rényi entropy for the first time, which bridges the gap between the Kalman filter and the Rényi entropy. We calculate the temporal derivative of the Rényi entropy of the Kalman filter mean square error matrix, which provides the optimal recursive solution mathematically and is minimized to obtain the Kalman filter gain. Moreover, from the physical point of view, the continuous Kalman filter approaches a steady state when the temporal derivative of the Rényi entropy is equal to zero, which also means that the Rényi entropy remains stable. A numerical experiment of falling body tracking in noisy conditions with radar using the UKF and a practical experiment of loosely coupled integration are provided to demonstrate the effectiveness of the above conclusion.
The structure of this paper is as follows. In Section 2, the definitions and properties of the Shannon entropy and the Rényi entropy are presented, the Kalman filter is derived from the perspective of minimizing the temporal derivative of the Rényi entropy, and the connection between the Rényi entropy and the algebraic Riccati equation is explained. In Section 3, experimental results and analysis are given for the UKF simulation and the real integrated navigation data. We finally conclude this paper and provide an outlook on future work in Section 4.
2. The Connection between the Kalman Filter and the Temporal Derivative of the Rényi Entropy
2.1. Rényi Entropy
To calculate the Rényi entropy of a continuous probability density function (PDF), it is necessary to extend the definition of the Rényi entropy to the continuous form. The Rényi entropy of order α for a continuous random variable x with a multivariate Gaussian PDF p(x) is defined [4] and calculated [9] as:

$$ H_{\alpha}(x)=\frac{1}{1-\alpha}\ln\int_{S}p^{\alpha}(x)\,dx=\frac{N}{2}\ln(2\pi)+\frac{1}{2}\ln|P|+\frac{N}{2}\,\frac{\ln\alpha}{\alpha-1} \tag{1} $$

where α > 0 and α ≠ 1, and α is a parameter providing a family of entropy functions. N is the dimension of the random variable x; S is the support of p(x); and P is the covariance matrix of x.
It is straightforward to show that the temporal derivative of the Rényi entropy is given by [9]:

$$ \frac{dH_{\alpha}}{dt}=\frac{1}{2}\operatorname{tr}\!\left(P^{-1}\dot{P}\right) \tag{2} $$

where Ṗ is the temporal derivative of the covariance matrix and tr(·) is the trace operator.
The Shannon entropy for the multivariate Gaussian PDF is obtained by taking the limit of Equation (1) as α approaches 1. This entropy is given as $H=\frac{1}{2}\ln\left[(2\pi e)^{N}|P|\right]$, and the temporal derivative of the Shannon entropy is $\frac{dH}{dt}=\frac{1}{2}\operatorname{tr}\left(P^{-1}\dot{P}\right)$. Evidently, the temporal derivative of the Shannon entropy is the same as the temporal derivative of the Rényi entropy. Therefore, as we will see later, the conclusion could equally be derived from the temporal derivative of the Shannon entropy. Nevertheless, in most cases the Rényi entropy itself, rather than its temporal derivative, is preferred as an uncertainty measure, because the free parameter α can be adjusted for different uncertainty measurements; as the filtering problem has to account for nonlinearity and non-Gaussian noise, we adopt the Rényi entropy as the measure of uncertainty.
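As a quick numerical illustration (not part of the original paper), the following Python sketch evaluates Equation (1) for a multivariate Gaussian, checks the α → 1 Shannon limit, and verifies Equation (2) with a finite difference; all covariance values are arbitrary toy choices.

```python
import numpy as np

def renyi_entropy_gaussian(P, alpha):
    """Renyi entropy of order alpha of a multivariate Gaussian with covariance P, Eq. (1)."""
    N = P.shape[0]
    sign, logdet = np.linalg.slogdet(P)
    assert sign > 0, "P must be positive definite"
    return 0.5 * N * np.log(2 * np.pi) + 0.5 * logdet + 0.5 * N * np.log(alpha) / (alpha - 1)

def shannon_entropy_gaussian(P):
    """Shannon entropy of a multivariate Gaussian, the alpha -> 1 limit of Eq. (1)."""
    _, logdet = np.linalg.slogdet(P)
    return 0.5 * P.shape[0] * np.log(2 * np.pi * np.e) + 0.5 * logdet

P = np.diag([2.0, 0.5, 1.5])                  # arbitrary positive definite covariance
print(renyi_entropy_gaussian(P, 1.0 + 1e-6))  # approaches the Shannon value as alpha -> 1
print(shannon_entropy_gaussian(P))

# Finite-difference check of Eq. (2): dH/dt = 0.5 * tr(P^{-1} Pdot), independent of alpha.
Pdot = np.diag([0.1, -0.05, 0.2])             # an arbitrary symmetric rate of change of P
dt = 1e-6
numeric = (renyi_entropy_gaussian(P + dt * Pdot, 2.0) - renyi_entropy_gaussian(P, 2.0)) / dt
analytic = 0.5 * np.trace(np.linalg.solve(P, Pdot))
print(numeric, analytic)                      # the two values agree closely
```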
2.2. Kalman Filter
Given the continuous-time linear system [29]:

$$ \dot{x}(t)=F(t)x(t)+G(t)w(t) \tag{3} $$

$$ z(t)=H(t)x(t)+v(t) \tag{4} $$

where x(t) is the state vector; F(t) is the system matrix; G(t) is the system noise driving matrix; z(t) is the measurement vector; H(t) is the measurement matrix; and w(t) and v(t) are independent white Gaussian noises with zero mean value, whose covariance matrices are Q and R, respectively:

$$ E\left[w(t)\right]=0,\qquad E\left[w(t)w^{T}(\tau)\right]=Q\,\delta(t-\tau) \tag{5} $$

$$ E\left[v(t)\right]=0,\qquad E\left[v(t)v^{T}(\tau)\right]=R\,\delta(t-\tau) \tag{6} $$

$$ E\left[w(t)v^{T}(\tau)\right]=0 \tag{7} $$

where δ(t − τ) is the Dirac impulse function, Q is a symmetric non-negative definite matrix, and R is a symmetric positive definite matrix.
The continuous Kalman filter can be deduced by taking the limit of the discrete Kalman filter. The discrete-time state-space model is arranged as follows [29]:

$$ x_{k}=\Phi_{k/k-1}x_{k-1}+\Gamma_{k-1}w_{k-1} \tag{8} $$

$$ z_{k}=H_{k}x_{k}+v_{k} \tag{9} $$

where x_k is an n-dimensional state vector; z_k is an m-dimensional measurement vector; Φ_{k/k−1}, Γ_{k−1}, and H_k are the known system structure parameters, which are called the n × n one-step state transition matrix, the n × l system noise distribution matrix, and the m × n measurement matrix, respectively; w_{k−1} is the l-dimensional system noise vector, and v_k is the m-dimensional measurement noise vector. Both are zero-mean Gaussian noise sequences, and they are independent of each other:

$$ E\left[w_{k}\right]=0,\qquad E\left[w_{k}w_{j}^{T}\right]=Q_{k}\,\delta_{kj} \tag{10} $$

$$ E\left[v_{k}\right]=0,\qquad E\left[v_{k}v_{j}^{T}\right]=R_{k}\,\delta_{kj} \tag{11} $$

$$ E\left[w_{k}v_{j}^{T}\right]=0 \tag{12} $$

The above equations are the basic assumptions on the noise in the Kalman filtering state-space model, where Q_k is a symmetric non-negative definite matrix, R_k is a symmetric positive definite matrix, and δ_{kj} is the Kronecker delta function.

The covariance parameters Q_k and R_k play roles similar to those of Q and R in the continuous filter, but they do not have the same numerical values. Next, the relationship between the corresponding continuous and discrete filter parameters will be derived.
To achieve the transformation from the continuous form to the discrete form, the relations between Q and R and the corresponding Q_k and R_k for a small step size are needed. According to linear system theory, the relation between Equations (3) and (8) is as follows:

$$ x(t_{k})=\Phi(t_{k},t_{k-1})x(t_{k-1})+\int_{t_{k-1}}^{t_{k}}\Phi(t_{k},\tau)G(\tau)w(\tau)\,d\tau \tag{13} $$

Denote the discrete-time interval as T_s = t_k − t_{k−1}. When F(t) does not change too dramatically within the short integration interval [t_{k−1}, t_k], we take the Taylor expansion of the transition matrix with respect to T_s at t = t_{k−1}, so the higher-order terms are negligible and the one-step transition matrix in Equation (13) can be approximated as:

$$ \Phi_{k/k-1}\approx I+F(t_{k-1})T_{s},\qquad \Gamma_{k-1}w_{k-1}\triangleq\int_{t_{k-1}}^{t_{k}}G(\tau)w(\tau)\,d\tau \tag{14} $$

Equation (14) shows that w_{k−1} is a linear transform of the Gaussian white noise w(t); the result remains a normally distributed random vector. Therefore, the first- and second-order statistical characteristics can be used to describe w_{k−1} and make it equivalent to w(t). Referring to Equation (5), the mean of w_{k−1} is given as follows:

$$ E\left[w_{k-1}\right]=0 \tag{15} $$

For the second-order statistical characteristics, when t ≠ τ, the noise values w(t) and w(τ) are independent, so w_k and w_j are uncorrelated for k ≠ j:

$$ E\left[w(t)w^{T}(\tau)\right]=0,\quad t\neq\tau \tag{16} $$

$$ E\left[w_{k}w_{j}^{T}\right]=0,\quad k\neq j \tag{17} $$

When k = j, we have:

$$ E\left[\left(\Gamma_{k-1}w_{k-1}\right)\left(\Gamma_{k-1}w_{k-1}\right)^{T}\right]=\int_{t_{k-1}}^{t_{k}}\int_{t_{k-1}}^{t_{k}}G(t)\,E\left[w(t)w^{T}(\tau)\right]G^{T}(\tau)\,dt\,d\tau \tag{18} $$

Substituting Equation (5) into the above equation:

$$ \Gamma_{k-1}E\left[w_{k-1}w_{k-1}^{T}\right]\Gamma_{k-1}^{T}=\int_{t_{k-1}}^{t_{k}}G(\tau)\,Q\,G^{T}(\tau)\,d\tau \tag{19} $$

When the noise driving matrix G(t) changes slowly during the time interval T_s, Equation (19) becomes:

$$ \Gamma_{k-1}E\left[w_{k-1}w_{k-1}^{T}\right]\Gamma_{k-1}^{T}\approx G(t_{k-1})\,Q\,G^{T}(t_{k-1})\,T_{s} \tag{20} $$

When Γ_{k−1} = G(t_{k−1}) is satisfied, the above equation can be further approximated as:

$$ Q_{k-1}=E\left[w_{k-1}w_{k-1}^{T}\right]\approx Q\,T_{s} \tag{21} $$

Notice that [29]:

$$ Q=\lim_{T_{s}\to 0}\frac{Q_{k}}{T_{s}} \tag{22} $$
The derivation of the equation relating R_k to R is more subtle. In the continuous model, v(t) is white, so simply sampling z(t) would lead to measurement noise with infinite variance. Hence, in the sampling process, we have to imagine averaging the continuous measurement over the interval [t_{k−1}, t_k] to obtain an equivalent discrete sample. This is justified because x(t) is not Gaussian white noise and can be treated as approximately constant within the interval.
Then, the equivalent discrete sample and its noise are obtained by averaging the continuous measurement, which relates the discrete noise matrix R_k to the continuous noise matrix R:

$$ z_{k}=\frac{1}{T_{s}}\int_{t_{k-1}}^{t_{k}}z(t)\,dt=H_{k}x_{k}+v_{k},\qquad v_{k}=\frac{1}{T_{s}}\int_{t_{k-1}}^{t_{k}}v(t)\,dt \tag{23} $$

Substituting Equation (6) into Equation (23), we have:

$$ R_{k}=E\left[v_{k}v_{k}^{T}\right]=\frac{1}{T_{s}^{2}}\int_{t_{k-1}}^{t_{k}}\int_{t_{k-1}}^{t_{k}}E\left[v(t)v^{T}(\tau)\right]\,dt\,d\tau=\frac{R}{T_{s}} \tag{24} $$
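To make the equivalence concrete, here is a minimal Python sketch (an illustration, not the authors' implementation) of the first-order continuous-to-discrete conversion of Equations (13)–(24), assuming Γ_{k−1} = G(t_{k−1}) and a short sampling period T_s.

```python
import numpy as np

def discretize(F, G, Q, R, Ts):
    """First-order continuous-to-discrete parameter conversion:
    Phi ~ I + F*Ts (Eq. (14)), Qk ~ Q*Ts (Eq. (21)), Rk = R/Ts (Eq. (24))."""
    n = F.shape[0]
    Phi = np.eye(n) + F * Ts   # one-step transition matrix
    Gamma = G.copy()           # noise distribution matrix, frozen at t_{k-1}
    Qk = Q * Ts                # equivalent discrete process noise covariance
    Rk = R / Ts                # averaged discrete measurement noise covariance
    return Phi, Gamma, Qk, Rk
```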
2.3. Derivation of the Kalman Filter
Assuming that the optimal state estimate at t_{k−1} is x̂_{k−1}, the state estimation error is x̃_{k−1}, and the state estimation covariance matrix is P_{k−1}:

$$ \tilde{x}_{k-1}=x_{k-1}-\hat{x}_{k-1} \tag{25} $$

and

$$ P_{k-1}=E\left[\tilde{x}_{k-1}\tilde{x}_{k-1}^{T}\right] \tag{26} $$

If we take the expectation operator of both sides of Equation (8), we obtain the state one-step prediction and the state one-step estimation error:

$$ \hat{x}_{k/k-1}=\Phi_{k/k-1}\hat{x}_{k-1} \tag{27} $$

$$ \tilde{x}_{k/k-1}=x_{k}-\hat{x}_{k/k-1}=\Phi_{k/k-1}\tilde{x}_{k-1}+\Gamma_{k-1}w_{k-1} \tag{28} $$

Since w_{k−1} is uncorrelated with x̃_{k−1}, we therefore obtain the covariance of the state one-step estimation error as follows:

$$ P_{k/k-1}=E\left[\tilde{x}_{k/k-1}\tilde{x}_{k/k-1}^{T}\right]=\Phi_{k/k-1}P_{k-1}\Phi_{k/k-1}^{T}+\Gamma_{k-1}Q_{k-1}\Gamma_{k-1}^{T} \tag{29} $$
In a similar way, the measurement at t_k can be predicted from the state one-step prediction and the system measurement Equation (9) as follows:

$$ \hat{z}_{k/k-1}=H_{k}\hat{x}_{k/k-1} \tag{30} $$

In fact, there is a difference between the measurement one-step prediction ẑ_{k/k−1} and the actual measurement z_k. This difference is denoted as the measurement one-step prediction error:

$$ \tilde{z}_{k/k-1}=z_{k}-\hat{z}_{k/k-1}=H_{k}\tilde{x}_{k/k-1}+v_{k} \tag{31} $$

In classical Kalman filter theory, the measurement one-step prediction error is called the innovation, because it carries the new information about the state that is brought in by the current measurement.
On the one hand, if the estimate of x_k used only the state one-step prediction from the system state equation, the estimation accuracy would be low, as no information from the measurement equation would be used. On the other hand, according to Equation (31), the measurement one-step prediction error calculated from the system measurement equation contains information about the state one-step prediction error of x_k. Consequently, it is natural to combine the state information coming from the system state equation and from the measurement equation, and to correct the state one-step prediction x̂_{k/k−1} with the measurement one-step prediction error z̃_{k/k−1}. Thereby, the optimal estimate of x_k can be calculated as the combination of x̂_{k/k−1} and z̃_{k/k−1} as follows:

$$ \hat{x}_{k}=\hat{x}_{k/k-1}+K_{k}\tilde{z}_{k/k-1} \tag{32} $$

where K_k is the undetermined correction factor matrix.
From Equation (32), the current state estimate x̂_k is a linear combination of the last state estimate x̂_{k−1} and the current measurement z_k, in which the structural parameters of the state equation and of the measurement equation enter through their respective terms.
The state estimation error at the current time is denoted as:

$$ \tilde{x}_{k}=x_{k}-\hat{x}_{k}=\left(I-K_{k}H_{k}\right)\tilde{x}_{k/k-1}-K_{k}v_{k} \tag{33} $$

where x_k is the true value and x̂_k is the posterior estimate of x_k.

Then, the mean square error matrix of the state estimation is given by:

$$ P_{k}=E\left[\tilde{x}_{k}\tilde{x}_{k}^{T}\right]=\left(I-K_{k}H_{k}\right)P_{k/k-1}\left(I-K_{k}H_{k}\right)^{T}+K_{k}R_{k}K_{k}^{T} \tag{34} $$
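For reference, Equations (25)–(34), together with the gain derived below in Equation (44), give the familiar discrete recursion. The following Python sketch (a generic NumPy illustration, not the authors' MATLAB code) implements one predict/update cycle, using the Joseph form of Equation (34) for the covariance.

```python
import numpy as np

def kf_step(x, P, z, Phi, Gamma, Qk, H, Rk):
    """One cycle of the discrete Kalman filter, Eqs. (27)-(34) and (44)."""
    # State and covariance one-step prediction, Eqs. (27) and (29).
    x_pred = Phi @ x
    P_pred = Phi @ P @ Phi.T + Gamma @ Qk @ Gamma.T
    # Innovation (measurement one-step prediction error), Eqs. (30) and (31).
    innov = z - H @ x_pred
    # Kalman gain, Eq. (44).
    S = H @ P_pred @ H.T + Rk
    K = P_pred @ H.T @ np.linalg.inv(S)
    # Correction, Eq. (32), and Joseph-form mean square error matrix, Eq. (34).
    x_new = x_pred + K @ innov
    I_KH = np.eye(len(x)) - K @ H
    P_new = I_KH @ P_pred @ I_KH.T + K @ Rk @ K.T
    return x_new, P_new
```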
2.4. The Temporal Derivative of the Rényi Entropy and the Kalman Filter Gain
To obtain the continuous form of the covariance matrix P_k, the limit T_s → 0 will be taken. However, the relation between the undetermined correction factor matrix K_k and its continuous form K(t) still remains unknown. Therefore, we make the following assumption.
Assumption 1.
K_k is of the order of T_s, that is:

$$ \lim_{T_{s}\to 0}\frac{K_{k}}{T_{s}}=K(t)\ \text{exists and is finite} \tag{35} $$
Conversely, this assumption can also be recovered from the conclusion. We next state the conclusion as a theorem under this assumption, as follows:
Theorem 1.
The discrete form of the undetermined correction factor matrix has the same form as the continuous one when the temporal derivative of the Rényi entropy is minimized. This can be presented in mathematical form as follows:

$$ \lim_{T_{s}\to 0}\frac{K_{k}}{T_{s}}=K=PH^{T}R^{-1} \tag{36} $$
Proof of Theorem 1.
We substitute the expression (29) for P_{k/k−1} into Equation (34) and neglect higher-order terms in T_s; Equation (34) becomes:

$$ P_{k}=P_{k-1}+\left(FP_{k-1}+P_{k-1}F^{T}+GQG^{T}\right)T_{s}-K_{k}HP_{k-1}-P_{k-1}H^{T}K_{k}^{T}+K_{k}R_{k}K_{k}^{T}+O\!\left(T_{s}^{2}\right) \tag{37} $$

Moving the first term on the right of Equation (37) to the left and dividing both sides by T_s to form the finite difference expression:

$$ \frac{P_{k}-P_{k-1}}{T_{s}}=FP_{k-1}+P_{k-1}F^{T}+GQG^{T}-\frac{K_{k}}{T_{s}}HP_{k-1}-P_{k-1}H^{T}\frac{K_{k}^{T}}{T_{s}}+\frac{K_{k}}{T_{s}}R\frac{K_{k}^{T}}{T_{s}}+O\!\left(T_{s}\right) \tag{38} $$

Finally, passing to the limit as T_s → 0 and dropping the subscripts leads to the matrix differential equation:

$$ \dot{P}=FP+PF^{T}+GQG^{T}-KHP-PH^{T}K^{T}+KRK^{T} \tag{39} $$
P is invertible, as it is a positive definite matrix. Multiplying Equation (39) by P^{−1}, we can evaluate the temporal derivative of the Rényi entropy of the mean square error matrix using Equation (2):

$$ \frac{dH_{\alpha}}{dt}=\frac{1}{2}\operatorname{tr}\!\left(P^{-1}\dot{P}\right)=\operatorname{tr}(F)-\operatorname{tr}(KH)+\frac{1}{2}\operatorname{tr}\!\left(P^{-1}GQG^{T}\right)+\frac{1}{2}\operatorname{tr}\!\left(P^{-1}KRK^{T}\right) \tag{40} $$

where the invariance of the trace operator under cyclic permutations has been used to eliminate P and P^{−1}, together with the fact that tr(A) = tr(A^T), to simplify the formula.
It is obvious that Equation (40) is a quadratic function of the undetermined correction factor matrix K; thereby, dH_α/dt must attain a minimum. Taking the derivative of both sides of Equation (40) with respect to the matrix K gives:

$$ \frac{\partial}{\partial K}\!\left(\frac{dH_{\alpha}}{dt}\right)=-H^{T}+\frac{1}{2}\left(P^{-1}KR+P^{-T}KR^{T}\right) \tag{41} $$

In addition, since P^{−1} and R are symmetric matrices, the result is:

$$ \frac{\partial}{\partial K}\!\left(\frac{dH_{\alpha}}{dt}\right)=-H^{T}+P^{-1}KR \tag{42} $$

R is invertible, as it is a positive definite matrix. According to the extreme value principle, setting the above derivative equal to zero yields:

$$ K=PH^{T}R^{-1} \tag{43} $$
So far, we have found the analytic solution for the undetermined correction factor matrix K, which is exactly the continuous-time Kalman filter gain of the classical Kalman filter. The recursive formulation of the Kalman filter can then be established through the Kalman filter gain K. Most importantly, this establishes the connection between the temporal derivative of the Rényi entropy and the classical Kalman filter: the temporal derivative of the Rényi entropy is minimized when the Kalman filter gain satisfies Equation (43).
Therefore, the discrete-time Kalman filter gain can be expressed as follows:

$$ K_{k}=P_{k/k-1}H_{k}^{T}\left(H_{k}P_{k/k-1}H_{k}^{T}+R_{k}\right)^{-1} \tag{44} $$
□
Remark 1.
The discrete-time Kalman filter gain of Equation (44) has the same form as the continuous-time filter gain of Equation (43): as T_s → 0, R_k = R/T_s dominates the innovation covariance, so K_k → P H^T R^{−1} T_s = K T_s. In principle, this is consistent with our intuition and, in turn, confirms the correctness and rationality of Assumption 1.
Remark 2.
Minimizing the temporal derivative of the Rényi entropy yields the Kalman filter gain; although it is an entropy-based criterion, it gives the same result as the original Kalman filter, which is deduced under the minimum mean square error criterion.
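To make Remark 2 concrete, the following Python sketch (not from the paper; all matrices are random toy values) evaluates the entropy-rate criterion of Equation (40) at the gain of Equation (43) and at randomly perturbed gains, and confirms numerically that the Kalman gain is the minimizer.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2
A = rng.standard_normal((n, n)); P = A @ A.T + n * np.eye(n)   # positive definite P
B = rng.standard_normal((m, m)); R = B @ B.T + m * np.eye(m)   # positive definite R
F = rng.standard_normal((n, n)); G = np.eye(n); Q = np.eye(n)
H = rng.standard_normal((m, n))
Pinv = np.linalg.inv(P)

def entropy_rate(K):
    """Eq. (40): tr(F) - tr(KH) + 0.5 tr(P^-1 G Q G^T) + 0.5 tr(P^-1 K R K^T)."""
    return (np.trace(F) - np.trace(K @ H)
            + 0.5 * np.trace(Pinv @ G @ Q @ G.T)
            + 0.5 * np.trace(Pinv @ K @ R @ K.T))

K_star = P @ H.T @ np.linalg.inv(R)   # the gain of Eq. (43)
best = entropy_rate(K_star)
perturbed = min(entropy_rate(K_star + 1e-3 * rng.standard_normal((n, m)))
                for _ in range(100))
print(best, perturbed, perturbed > best)   # every perturbation increases the rate
```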
Substituting the gain of Equation (43) into Equation (39) yields:

$$ \dot{P}=FP+PF^{T}+GQG^{T}-PH^{T}R^{-1}HP \tag{45} $$

This is a nonlinear matrix differential equation, quadratic in the mean square error matrix P, and it is commonly called the Riccati equation. This is the same result as that of the Kalman–Bucy filter [7].

If the system equation, Equation (3), and the measurement equation, Equation (4), form a linear time-invariant system with constant noise covariances, the mean square error matrix P may reach a steady-state value, and Ṗ may eventually reach zero. We then have the continuous algebraic Riccati equation:

$$ FP+PF^{T}+GQG^{T}-PH^{T}R^{-1}HP=0 \tag{46} $$

As we can see, the time derivative of the covariance at the steady state is zero; hence, the temporal derivative of the Rényi entropy is also zero:

$$ \frac{dH_{\alpha}}{dt}=\frac{1}{2}\operatorname{tr}\!\left(P^{-1}\dot{P}\right)=0 \tag{47} $$
This implies that when the system approaches a stable state, the Rényi entropy approaches a steady value, so that its temporal derivative is zero. This is reasonable: a steady system has a constant Rényi entropy, since its uncertainty is stable, which matches our intuitive understanding. Consequently, it is worth noting that whether the value of the Rényi entropy is stable can serve as a valid indicator of whether the system is approaching the steady state.
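This steady-state behavior can be checked numerically. The sketch below (a Python illustration with assumed toy system matrices, using SciPy's solve_continuous_are) solves the continuous algebraic Riccati equation (46) and verifies that Ṗ = 0 and, by Equation (47), that the temporal derivative of the Rényi entropy vanishes.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Assumed toy time-invariant system (values chosen only for illustration).
F = np.array([[0.0, 1.0], [-1.0, -0.5]])
G = np.eye(2)
Q = 0.1 * np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[0.2]])

# Steady-state P solving Eq. (46): F P + P F^T + G Q G^T - P H^T R^-1 H P = 0.
P_ss = solve_continuous_are(F.T, H.T, G @ Q @ G.T, R)

# Residual Pdot from Eq. (45) and the entropy rate from Eq. (47).
Pdot = F @ P_ss + P_ss @ F.T + G @ Q @ G.T - P_ss @ H.T @ np.linalg.solve(R, H @ P_ss)
dH_dt = 0.5 * np.trace(np.linalg.solve(P_ss, Pdot))
print(np.max(np.abs(Pdot)), dH_dt)   # both are ~ 0 at the steady state
```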
3. Simulations and Analysis
In this section, we present two experiments showing that when the nonlinear filter system approaches the steady state, the Rényi entropy of the system approaches a stable value. The first experiment is a numerical example of a falling body in noisy conditions, tracked by radar [30] using the UKF. The second experiment is a practical experiment of loosely coupled integration [29]. The simulations were carried out in MATLAB 2018a running on a computer with an i5-5200U 2.20 GHz CPU, and the graphs were plotted in MATLAB.
3.1. Falling Body Tracking
In the example of a falling body being tracked by radar, the body falls vertically. The radar is placed at a horizontal distance L from the body's line of fall, and the radar measures the distance y from the radar to the body. The state-space equation of the body is given by:

$$ \dot{x}_{1}=x_{2},\qquad \dot{x}_{2}=d-g,\qquad \dot{x}_{3}=0 \tag{48} $$

where x_1 is the height, x_2 is the velocity, x_3 is the ballistic coefficient, g = 9.8 m/s² is the gravitational acceleration, and d is the air drag, which can be approximated as:

$$ d=\frac{\rho x_{2}^{2}}{2x_{3}},\qquad \rho=\rho_{0}e^{-x_{1}/c} \tag{49} $$

where ρ is the air density, with an initial (sea-level) value of ρ_0, and ρ_0 and c are constants.

The measurement equation is:

$$ y=\sqrt{L^{2}+x_{1}^{2}}+v \tag{50} $$

It is worth noting that the drag and the square root cause severe nonlinearity in the state-space function and the measurement function, respectively.

The discrete-time nonlinear system can be obtained by the Euler discretization method. Adding Gaussian white noises for the process and the measurement, we obtain:

$$ x_{k+1}=x_{k}+f\left(x_{k}\right)T_{s}+w_{k},\qquad y_{k}=\sqrt{L^{2}+x_{1,k}^{2}}+v_{k} \tag{51} $$

where f(·) denotes the right-hand side of Equation (48).
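The discretized model can be sketched as follows in Python. The exact numerical settings of the paper were not preserved in this version, so every constant below (g, ρ_0, the density decay constant c, L, and T_s) is a hypothetical placeholder chosen only to illustrate the structure of Equations (48)–(51).

```python
import numpy as np

g = 9.8         # gravitational acceleration, m/s^2
rho0 = 1.225    # assumed sea-level air density, kg/m^3
c = 6096.0      # assumed air-density decay constant, m
L = 1000.0      # assumed horizontal radar-to-body distance, m
Ts = 0.1        # assumed sampling period, s

def f_discrete(x):
    """Euler-discretized falling-body dynamics, Eq. (51); x = [height, velocity, beta]."""
    h, v, beta = x
    rho = rho0 * np.exp(-h / c)       # exponential air density, Eq. (49)
    d = rho * v**2 / (2.0 * beta)     # drag deceleration, Eq. (49)
    return np.array([h + Ts * v, v + Ts * (d - g), beta])

def h_measure(x):
    """Radar slant-range measurement, Eq. (50)."""
    return np.sqrt(L**2 + x[0]**2)
```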
In the UKF numerical experiment, we fix the sampling period T_s, the horizontal distance L, the maximum number of samples, the process noise covariance, the measurement noise covariance, and the initial state. The results are shown as follows:
Figure 1 shows the evolution of the covariance matrix P. Figure 2 and Figure 3 show the Rényi entropy of the covariance matrix P and its change between adjacent time steps, respectively. Notice that the uncertainty increases near the middle of the plots, which coincides with the drag peak. However, the Rényi entropy fluctuates around 15 even when the fourth element of P changes dramatically. Of course, the changes in entropy closely accompany the drag peak, which means that the change in the entropy of the covariance reflects the evolution of the matrix P. Consequently, the Rényi entropy can be viewed as an indicator of whether the system is approaching the steady state.
Figure 1.
Evolution of matrix P.
Figure 2.
Simulation results for the entropy.
Figure 3.
Simulation results for the change of entropy.
3.2. Practical Integrated Navigation
In the loosely coupled integrated navigation system, the system state vector x is composed of the inertial navigation system (INS) error states in the North–East–Down (NED) local-level navigation frame, and can be expressed as follows:

$$ x=\left[\begin{array}{ccccc}\delta r^{T} & \delta v^{T} & \phi^{T} & b_{g}^{T} & b_{a}^{T}\end{array}\right]^{T} \tag{52} $$

where δr, δv, and φ represent the position error, the velocity error, and the attitude error, respectively; b_g and b_a are modeled as first-order Gauss–Markov processes, representing the gyroscope bias and the accelerometer bias, respectively.
The discrete-time state update equation is used to update the state vector as follows:

$$ x_{k}=\Phi_{k/k-1}x_{k-1}+\Gamma_{k-1}w_{k-1} \tag{53} $$

where Γ_{k−1} is the system noise distribution matrix, w_{k−1} is the system noise, and Φ_{k/k−1} is the state transition matrix from t_{k−1} to t_k, which is determined by the dynamic model of the state vector.
In the loosely coupled integration, the measurement equation can be simply expressed as:

$$ z_{k}=H_{k}x_{k}+v_{k} \tag{54} $$

where v_k is the measurement noise, H_k is the measurement matrix, and z_k is the measurement vector calculated by subtracting the global navigation satellite system (GNSS) observation from the inertial navigation system (INS) mechanization output.
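For the position-only loose coupling described above, the measurement model has a particularly simple structure: z_k observes the position-error states directly, so H_k selects the first three components of x. The Python sketch below illustrates this under the assumed 15-state ordering of Equation (52); all numerical values are hypothetical.

```python
import numpy as np

n_states = 15  # [position error(3), velocity error(3), attitude error(3),
               #  gyro bias(3), accelerometer bias(3)], as in Eq. (52)

# INS-mechanized position minus the GNSS position observes the position-error
# states, so H picks out the first three components of x, Eq. (54).
H = np.hstack([np.eye(3), np.zeros((3, n_states - 3))])

r_ins = np.array([4.0e6, 5.0e6, -10.0])              # hypothetical INS position
r_gnss = np.array([4.0e6 + 1.5, 5.0e6 - 0.8, -9.7])  # hypothetical GNSS position
z = r_ins - r_gnss                                   # measurement vector z_k
```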
The experiments reported in this section were carried out by processing the data from an unmanned ground vehicle test. The gyroscope angular random walk was set to 0.03 deg/√h, and the velocity random walk was set to 0.05 m/s/√h. The sampling rates of the inertial measurement unit (IMU) and the GNSS receiver are 200 Hz and 1 Hz, respectively. The test lasted 48 min.
The position error curve, velocity error curve, and attitude error curve of the loosely coupled integration are shown in Figure 4, Figure 5 and Figure 6. The corresponding root mean square (RMS) errors were computed for the position and velocity in the north, east, and down directions, and for the attitude in the roll, pitch, and yaw directions.
Figure 4.
Position error of the loosely coupled integration.
Figure 5.
Velocity error of the loosely coupled integration.
Figure 6.
Attitude error of the loosely coupled integration.
The Rényi entropy of the covariance P is shown in Figure 7. As we can see, the Rényi entropy fluctuates around a constant value once the filter converges, which is consistent with the conclusion drawn from the entropy perspective.
Figure 7.
Rényi entropy of the covariance P.
4. Conclusions and Final Remarks
We have reconsidered the original Kalman filter through the minimization of the temporal derivative of the Rényi entropy. In particular, we showed that the temporal derivative of the Rényi entropy is equal to zero when the Kalman filter system approaches the steady state, which means that the Rényi entropy approaches a stable value. Finally, the simulation experiment and the practical experiment show that the Rényi entropy indeed stays stable when the system becomes steady.
Future work includes calculating the Rényi entropy of the innovation term when the measurements and the noise are non-Gaussian [14] in order to evaluate the effectiveness of measurements and adjust the noise covariance matrix. Meanwhile, we can also calculate the Rényi entropy of the nonlinear dynamical equation to measure the nonlinearity in the propagation step.
Author Contributions
Conceptualization, Y.L. and C.G.; Funding acquisition, C.G. and J.L.; Investigation, Y.L.; Methodology, Y.L., C.G., and S.Y.; Project administration, J.L.; Resources, C.G.; Software, Y.L. and S.Y.; Supervision, J.L.; Validation, S.Y.; Visualization, S.Y.; Writing—original draft, Y.L.; Writing—review and editing, C.G., S.Y., and J.L. All authors have read and agreed to the published version of the manuscript.
Funding
This research was supported by a grant from the National Key Research and Development Program of China (2018YFB1305001).
Conflicts of Interest
The authors declare no conflict of interest.
References
- Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423.
- Principe, J.C. Information Theoretic Learning: Renyi's Entropy and Kernel Perspectives; Springer Science & Business Media: Berlin, Germany, 2010.
- He, R.; Hu, B.; Yuan, X.; Wang, L. Robust Recognition via Information Theoretic Learning; Springer International Publishing: Berlin, Germany, 2014.
- Rényi, A. On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA, USA, 20 June–30 July 1961.
- Liang, X.S. Entropy evolution and uncertainty estimation with dynamical systems. Entropy 2014, 16, 3605–3634.
- Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Stat. 1951, 22, 79–86.
- Kalman, R.E.; Bucy, R.S. New results in linear filtering and prediction theory. J. Basic Eng. 1961, 83, 95–108.
- DeMars, K.J. Nonlinear Orbit Uncertainty Prediction and Rectification for Space Situational Awareness. Ph.D. Thesis, The University of Texas at Austin, Austin, TX, USA, 2010.
- DeMars, K.J.; Bishop, R.H.; Jah, M.K. Entropy-based approach for uncertainty propagation of nonlinear dynamical systems. J. Guid. Control Dyn. 2013, 36, 1047–1057.
- Kim, H.; Liu, B.; Goh, C.Y.; Lee, S.; Myung, H. Robust vehicle localization using entropy-weighted particle filter-based data fusion of vertical and road intensity information for a large scale urban area. IEEE Robot. Autom. Lett. 2017, 2, 1518–1524.
- Zhang, J.; Du, L.; Ren, M.; Hou, G. Minimum error entropy filter for fault detection of networked control systems. Entropy 2012, 14, 505–516.
- Liu, Y.; Wang, H.; Hou, C. UKF based nonlinear filtering using minimum entropy criterion. IEEE Trans. Signal Process. 2013, 61, 4988–4999.
- Julier, S.; Uhlmann, J.; Durrant-Whyte, H.F. A new method for the nonlinear transformation of means and covariances in filters and estimators. IEEE Trans. Autom. Control 2000, 45, 477–482.
- Contreras-Reyes, J.E.; Cortés, D.D. Bounds on Rényi and Shannon entropies for finite mixtures of multivariate skew-normal distributions: Application to swordfish (Xiphias gladius Linnaeus). Entropy 2016, 18, 382.
- Ren, M.; Zhang, J.; Fang, F.; Hou, G.; Xu, J. Improved minimum entropy filtering for continuous nonlinear non-Gaussian systems using a generalized density evolution equation. Entropy 2013, 15, 2510–2523.
- Zhang, Q. Performance enhanced Kalman filter design for non-Gaussian stochastic systems with data-based minimum entropy optimisation. AIMS Electron. Electr. Eng. 2019, 3, 382.
- Chen, B.; Dang, L.; Gu, Y.; Zheng, N.; Príncipe, J.C. Minimum error entropy Kalman filter. IEEE Trans. Syst. Man Cybern. Syst. 2019.
- Gultekin, S.; Paisley, J. Nonlinear Kalman filtering with divergence minimization. IEEE Trans. Signal Process. 2017, 65, 6319–6331.
- Darling, J.E.; DeMars, K.J. Minimization of the Kullback–Leibler divergence for nonlinear estimation. J. Guid. Control Dyn. 2017, 40, 1739–1748.
- Morelande, M.R.; Garcia-Fernandez, A.F. Analysis of Kalman filter approximations for nonlinear measurements. IEEE Trans. Signal Process. 2013, 61, 5477–5484.
- Raitoharju, M.; García-Fernández, Á.F.; Piché, R. Kullback–Leibler divergence approach to partitioned update Kalman filter. Signal Process. 2017, 130, 289–298.
- Hu, E.; Deng, Z.; Xu, Q.; Yin, L.; Liu, W. Relative entropy-based Kalman filter for seamless indoor/outdoor multi-source fusion positioning with INS/TC-OFDM/GNSS. Clust. Comput. 2019, 22, 8351–8361.
- Yu, W.; Peng, J.; Zhang, X.; Li, S.; Liu, W. An adaptive unscented particle filter algorithm through relative entropy for mobile robot self-localization. Math. Probl. Eng. 2013.
- Arasaratnam, I.; Haykin, S. Cubature Kalman filters. IEEE Trans. Autom. Control 2009, 54, 1254–1269.
- Kiani, M.; Barzegar, A.; Pourtakdoust, S.H. Entropy-based adaptive attitude estimation. Acta Astronaut. 2018, 144, 271–282.
- Giffin, A.; Urniezius, R. The Kalman filter revisited using maximum relative entropy. Entropy 2014, 16, 1047–1069.
- Chen, B.; Liu, X.; Zhao, H.; Principe, J.C. Maximum correntropy Kalman filter. Automatica 2017, 76, 70–77.
- Chen, B.; Xing, L.; Liang, J.; Zheng, N.; Principe, J.C. Steady-state mean-square error analysis for adaptive filtering under the maximum correntropy criterion. IEEE Signal Process. Lett. 2014, 21, 880–884.
- Yan, G.; Weng, J. Lectures on Strapdown Inertial Navigation Algorithm and Integrated Navigation Principles; Northwestern Polytechnical University Press: Xi'an, China, 2019.
- Kumari, L.; Padma Raju, K. Application of extended Kalman filter for a free falling body towards Earth. Int. J. Adv. Comput. Sci. Appl. 2011, 2, 4.
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).