Analysis of Polynomial Nonlinearity Based on Measures of Nonlinearity Algorithms

We consider measures of nonlinearity (MoNs) of a polynomial curve in two dimensions (2D), as previously studied in our Fusion 2010 and 2019 ICCAIS papers. Our previous work calculated curvature measures of nonlinearity (CMoNs) using (i) extrinsic curvature, (ii) Bates and Watts parameter-effects curvature, and (iii) direct parameter-effects curvature. In this paper, we introduce the computation and analysis of a number of new MoNs, including Beale's MoN, Linssen's MoN, Li's MoN, and the MoN of Straka, Duník, and Šimandl. Our results show that all of the MoNs studied follow the same type of variation as a function of the independent variable and the power of the polynomial. Secondly, theoretical analysis and numerical results show that the logarithm of the mean square error (MSE) is an affine function of the logarithm of the MoN for each type of MoN. This implies that, when the MoN increases, the MSE increases. We present an up-to-date review of various MoNs in the context of non-linear parameter estimation and non-linear filtering. The MoNs studied here can be used to compute MoNs in non-linear filtering problems.

In the early stages of NLF, the extended Kalman filter (EKF) [1][2][3][4] was widely used. It was observed in some problems, e.g., the fall of a body through the earth's atmosphere at high velocity [13,14] and bearing-only filtering [5,7,8], that the EKF performs poorly due to linearization. The high degree of nonlinearity in these problems was cited as the cause of the poor performance, without a quantitative measure of nonlinearity (MoN). To overcome the poor accuracy and convergence problems of the EKF, a number of improved approximate non-linear filters, such as the unscented Kalman filter (UKF), cubature Kalman filter (CKF), and particle filter (PF), were developed. This history raises the following questions:
1. Is it possible to find a quantitative MoN for a nonlinear filtering problem?
2. Can we establish a correspondence between the MoN of an NLF problem and the performance of a filtering algorithm?
3. Can we show that the UKF, CKF, or PF gives better results than the EKF when the degree of nonlinearity (DoN) is high?
Remark 1. In this paper, we consider a parameter estimation problem with polynomial nonlinearity. We hope that the insights and results from this analysis will encourage further study of MoN in NLF problems. Next, we describe some historical developments in the field of parameter estimation and NLF.
Beale in his pioneering work [18] proposed four MoNs for the static non-random parameter estimation problem. Two MoNs were empirical and two were theoretical. Guttman and Meeter [19] and Linssen [20] observed that Beale's method gives a lower MoN for highly non-linear problems and proposed a modified MoN. Using differential geometry based curvature measures, Bates and Watts [21,22] and Goldberg et al. [23] extended Beale's work and developed curvature measures of nonlinearity (CMoN) for the static non-random parameter estimation problem. Bates and Watts formulated two CMoNs, the parameter-effects curvature and the intrinsic curvature [21,24-26].
In our previous work [35], we considered a polynomial curve in two-dimensions (2D) and calculated CMoN using differential geometry (e.g., extrinsic curvature) [36][37][38], Bates and Watts parameter-effects curvature [21,25,26], and direct parameter-effects curvature [29]. The computation of these curvatures requires the Jacobian and Hessian of the measurement function [2] evaluated at the true or estimated parameter. The extrinsic curvature uses the true parameter, whereas the other two CMoN use the estimated parameter.
In [35], we obtained the maximum likelihood (ML) estimate [2,39] of the parameter x while using a vector measurement by numerical minimization. In [40], we derived analytic expressions for the ML estimator (MLE) [2,39] and associated variance using a vector measurement. This approach is simple and efficient, since it does not require numerical minimization. We also showed through Monte Carlo simulations in [40] that the variance of the MLE and the Cramér-Rao lower bound (CRLB) [2,41] are nearly the same for different powers of x. We also found that the bias error was small and the mean square error (MSE) [2] was close to the CRLB and variance of the MLE. Our numerical results showed that the average normalized estimation error squared (ANEES) [42] was within the 99% confidence interval most of the time. Hence, the variance of the MLE was in agreement with the estimation error.
Li constructed a combined non-linear function while using the non-linear time evolution function and measurement function in a discrete-time nonlinear filtering problem, and he proposed a global MoN at each measurement time [43]. This MoN minimizes the mean square distance between the combined non-linear function and the set of all affine functions with the same dimension at each measurement time. An un-normalized MoN and a normalized MoN were proposed in [43]. These MoNs can also be unconditional or conditional. The normalized MoN lies in the interval [0, 1]. A journal version of the paper with enhancements was published in [44].
The normalized MoN that was proposed in [43] was calculated for non-linear filtering problems, including one with nearly constant turn motion and a non-linear measurement model [45], a video tracking problem using the PF [46], and a hypersonic entry vehicle state estimation problem [47]. In these cases, the normalized MoNs were rather low. In [33], we compared the normalized MoN for the BOF and GMTI filtering problems. Contrary to our expectation, we found that the GMTI filtering problem had a higher conditional normalized MoN than that of the BOF problem in the examples that we investigated.
Using the current mean (e.g., predicted mean) and associated covariance, Duník et al. [48] generate a number of sample points (e.g., sigma points using unscented transform [14]) and transform these points using a non-linear function (e.g., non-linear measurement function or time evolution function). Subsequently, they try to predict the transformed points using a linear transformation and estimate the parameter of the transformation using linear weighted least squares (WLS) [39]. They use the cost function of the WLS evaluated at the estimated parameter as a local MoN.
In [35], we showed analytically and through Monte Carlo simulations that affine mappings with positive slopes exist among the logarithms of the extrinsic curvature, the Bates and Watts parameter-effects curvature, the direct parameter-effects curvature, the MSE, and the CRLB. For completeness, we have included these key results from [35] in Section 4. New contributions in this paper include the computation and analysis of the following MoNs:
• Beale's MoN [18],
• Linssen's MoN [20],
• Li's MoN [43,44], and
• the MoN of Straka, Duník, and Šimandl [48,49].
It is not possible to derive analytically a mapping between the logarithm of the MSE and the logarithms of Beale's MoN, Linssen's MoN, Li's MoN, or the MoN of Straka, Duník, and Šimandl. However, the numerical results from Monte Carlo simulations show that affine mappings with positive slopes exist between the logarithm of the MSE and the logarithms of two of these MoNs.
The paper is organized as follows. Section 2.1 describes the measurement model for polynomial nonlinearity. The MLE for parameter estimation and the CRLB using polynomial nonlinearity and a vector measurement are presented in Section 2. Section 3 presents different types of MoN, such as the extrinsic curvature based on differential geometry, Beale's MoN, Linssen's MoN, the Bates and Watts parameter-effects curvature, the direct parameter-effects curvature, Li's MoN, and the MoN of Straka et al. Section 4 discusses mappings among the logarithms of the extrinsic curvature, parameter-effects curvature, CRLB, and MSE. Section 5 presents the numerical simulation and results. Finally, Section 6 summarizes our contribution and concludes with future work.
Notation Convention: For clarity, we use italics to denote scalar quantities and boldface for vectors and matrices. A lower- or upper-case Roman letter represents a name (e.g., "s" for "sensor", "RMS" for "root mean square", etc.). We use ":=" to define a quantity, and Aᵀ denotes the transpose of the vector or matrix A. The n-dimensional identity matrix, m-dimensional null vector, and m × n null matrix are denoted by I_n, 0_m, and 0_{m×n}, respectively.

Measurement Model
We studied the CMoN of a smooth polynomial scalar function h of a non-random variable x in [35], where

h(x) = a x^n, (1)

and a is a non-zero scalar. In the scenarios considered, x > 0 and n = 2, 3, 4, 5.
Remark 2. MoNs for other forms of nonlinearity, such as the bearing-only [27], GMTI [32], and video filtering [34] problems from the radar community, are discussed in detail at the end of Section 3.
The measurement model for the polynomial function is given by

z_i = h(x) + v_i = a x^n + v_i, i = 1, 2, …, N,

where v_i is a zero-mean white Gaussian measurement noise with variance σ². We assume that the measurement noises are independent. The measurement model can be written in the vector form

z = h(x) d + v,

where

z := [z_1, z_2, …, z_N]ᵀ, v := [v_1, v_2, …, v_N]ᵀ, d := [1, 1, …, 1]ᵀ.

ML Estimate of Parameter
The likelihood function of x is [2,50,51]

p(z; x) = (2πσ²)^{−N/2} exp{−[z − h(x)d]ᵀ[z − h(x)d]/(2σ²)}. (10)

The maximization of the likelihood in (10) is equivalent to the minimization of the cost function [2,51]

J(x) = [z − h(x)d]ᵀ[z − h(x)d]. (11)

The maximum likelihood (ML) estimate x̂ of x is obtained by setting the derivative of J(x) to zero [2,51],

dJ(x)/dx |_{x=x̂} = 0. (12)

From (11) and (12), we obtain

ḣ(x̂) dᵀ[z − h(x̂)d] = 0. (13)

Because the derivative of h(x) with respect to x is not zero, we obtain

dᵀ[z − h(x̂)d] = 0. (14)

Hence, the ML estimate satisfies

h(x̂) dᵀd = dᵀz. (16)

We note that

dᵀd = N. (17)

Using (1) and (17) in (16), we get

a x̂^n = z̄, (18)

where z̄ is the sample mean of z,

z̄ := (1/N) Σ_{i=1}^{N} z_i. (19)

Thus, from (18), the ML estimate of x is given by

x̂ = (z̄/a)^{1/n}. (20)

Remark 3. In general, the MLE for a nonlinear measurement model is biased [51]. We can calculate the variance of x̂ under the small-error assumption using the linearization approximation. The first derivative of h with respect to x is

ḣ(x) = a n x^{n−1}, (26)

so that the Jacobian of the vector measurement function h(x)d is

Ḣ(x) = ḣ(x) d = a n x^{n−1} d. (27)

The resulting linearized variance and standard deviation of x̂ are

σ²_x = σ² / (N a² n² x̂^{2(n−1)}), (29)
σ_x = σ / (√N a n x̂^{n−1}). (30)

To guarantee the validity of the variance, the bias in the MLE must be calculated. The bias can be numerically calculated using Monte Carlo simulation. The bias in the MLE is defined by [2,51]

b(x) := x − E[x̂].
Remark 4. The ML estimate of x in [35] was obtained by minimizing the cost function in (11) numerically. The estimator in (20) provides a simple and efficient way of estimating x from a vector measurement z without numerical optimization.
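The closed-form estimator in (20) is easy to verify numerically. Below is a minimal sketch (Python/NumPy) using the scenario values a = 0.6, σ = 0.5, and N = 10 from Section 5; the specific x value and random seed are our illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

a, n, x_true = 0.6, 3, 4.0   # polynomial h(x) = a*x^n (values from Section 5)
N, sigma = 10, 0.5           # N scalar measurements, noise std sigma

# Measurement model: z_i = a*x^n + v_i, with v_i ~ N(0, sigma^2)
z = a * x_true**n + sigma * rng.standard_normal(N)

# Closed-form ML estimate (20): a*xhat^n = zbar  =>  xhat = (zbar/a)^(1/n)
x_hat = (z.mean() / a) ** (1.0 / n)
```

For this scenario, the estimate lands very close to the true x, since the noise in the sample mean is strongly attenuated by the polynomial inversion.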

Cramér-Rao Lower Bound
The CRLB [2,41] for the MSE in the current problem is given by

CRLB_x = σ² / (N a² n² x^{2(n−1)}). (32)

Remark 5. The calculations of the variance σ²_x and CRLB_x are similar. For σ²_x, we use the estimate x̂ while calculating the Jacobian of the measurement function, whereas, for CRLB_x, we use the true x while calculating the Jacobian of the measurement function.
Using a similar procedure, we obtain

√CRLB_x = σ / (√N a n x^{n−1}). (33)

From (30) and (33), we find that, for a given x, the standard deviation (SD) and the square root of the CRLB are inversely proportional to the power n. Secondly, (33) shows that, for a given power, the square root of the CRLB decreases as x increases.
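This scaling behavior can be checked numerically. The sketch below uses the CRLB expression implied by (81) (the scenario values a = 0.6, σ = 0.5, N = 10 are from Section 5).

```python
import numpy as np

def sqrt_crlb(x, n, a=0.6, sigma=0.5, N=10):
    """Square root of CRLB_x = sigma^2 / (N * a^2 * n^2 * x^(2(n-1)))."""
    return sigma / (np.sqrt(N) * a * n * x ** (n - 1))

xs = np.arange(2.0, 7.01, 0.1)
# Decreases with x for a fixed power, and with n for a fixed x > 1
root_bounds = {n: sqrt_crlb(xs, n) for n in (2, 3, 4, 5)}
```

For each power n, the curve is monotonically decreasing in x, and for a fixed x the bound shrinks as n grows, matching the observations above.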

Measures of Nonlinearity
To explain the key concepts of nonlinearity, consider the scalar function h(x) = 5 sin (4x)/x shown in Figure 1. We observe in Figure 1 that the function is nearly linear at A and E. If we draw a tangent to the curve at A and E, then the curve is close to the tangent in the neighborhood of A and E. However, tangents to the curve at points B, C, and D differ by large amounts from the curve in the neighborhood of these points. The tangent represents an affine approximation to the curve at a point. We observe that, among points B, C, and D, the curve bends the most at B and the least at point D. If we draw a circle (called the osculating circle) at these points, then the radius of the circle can be used to judge nonlinearity. The rate of bending is high when the radius of the circle is small. In differential geometry [37,38], the curvature κ is the inverse of the radius of the osculating circle and, hence, curvature can be viewed as a measure of nonlinearity. The radii of the osculating circles at A and E are nearly infinite and, hence, the curvatures are nearly zero. From Figure 1, we observe that, in general, the nonlinearity of a function can vary with x. Hence, the nonlinearity is a local measure. If the second derivative of a function is non-zero, then the function is non-linear.
In [35,40], we analyzed the CMoN of a polynomial scalar function h of a non-random variable x, as described in Section 2.1. The CMoN were based on the extrinsic curvature using differential geometry, the Bates and Watts parameter-effects curvature, and the direct parameter-effects curvature. In this paper, we study the following MoNs: the extrinsic curvature using differential geometry, Beale's MoN, Linssen's MoN, the Bates and Watts parameter-effects curvature, the direct parameter-effects curvature, Li's MoN, and the MoN of Straka, Duník, and Šimandl. If a MoN has a high value, then the nonlinearity is high, and if it has a low value, then the nonlinearity is low. However, the various MoNs are defined on different scales; therefore, it is impossible to compare them based on numerical values. We can only study their variations.
Consider the m-dimensional vector non-linear function h of the non-random n-dimensional parameter x. Let x̂ be a known estimate of x. Using the Taylor series expansion of h(x) about x̂ and keeping the first-order term gives

h(x) ≈ T(x) := h(x̂) + Ḣ(x̂)(x − x̂), (34)

where T(x) represents the tangent plane approximation (an affine mapping) to h(x) and Ḣ(x̂) is the Jacobian of h evaluated at x̂. If m > n, then h is an n-dimensional manifold embedded in an m-dimensional space [37,38]. The tangent plane is tangent to the surface h at x̂. The concept of the tangent plane is used in Beale's MoN, Linssen's MoN, the Bates and Watts parameter-effects curvatures [21,25], and the direct parameter-effects curvature [44].
For polynomial nonlinearity, the CMoN using differential geometry is calculated at the true value x and, hence, it is non-random. The Bates and Watts parameter-effects curvature, direct parameter-effects curvature, Beale's MoN, Li's MoN, and the MoN of Straka et al. are calculated using an estimate x̂ of x. The estimate x̂ is obtained from a measurement model involving the measurement function h. Since x is a scalar, we need one or more scalar measurements to estimate x. Table 1 summarizes the features of the various MoNs. Next, we describe the various MoNs.

Extrinsic Curvature Using Differential Geometry
The curvature of a circle at every point on the circumference is equal to the inverse of the radius of the circle. Thus, the curvature of a circle is a constant. A circle with a smaller radius bends more sharply and, therefore, has a higher curvature.
We assume that the first and second derivatives of the nonlinear smooth scalar function h exist. The curvature of the curve y = h(x) at a point x is equal to the curvature of the osculating circle at that point. The extrinsic curvature at the point x is defined by [36][37][38]

κ(x) = |ḧ(x)| / [1 + ḣ(x)²]^{3/2}. (36)

The first derivative of h at a point x is given in (26). The second derivative of h with respect to x is given by

ḧ(x) = a n(n − 1) x^{n−2}. (37)

Thus, using ḣ(x) and ḧ(x) in (36), we can calculate the extrinsic curvature κ(x) at any point x by

κ(x) = a n(n − 1) x^{n−2} / [1 + (a n x^{n−1})²]^{3/2}. (38)
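The extrinsic curvature follows directly from the two derivatives of h. A minimal sketch (the default parameter values are illustrative):

```python
import numpy as np

def extrinsic_curvature(x, a=0.6, n=3):
    """kappa(x) = |h''(x)| / (1 + h'(x)^2)^(3/2) for h(x) = a*x^n."""
    h1 = a * n * x ** (n - 1)            # first derivative of h
    h2 = a * n * (n - 1) * x ** (n - 2)  # second derivative of h
    return abs(h2) / (1.0 + h1 * h1) ** 1.5
```

As a sanity check, the parabola h(x) = 0.5 x² has curvature 1 at x = 0 (osculating circle of radius 1), and any affine function (n = 1) has zero curvature everywhere.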

Beale's MoN
Consider the nonlinear measurement model for the non-random n-dimensional parameter x,

z = h(x) + v,

where z, h, and v are the measurement, non-linear measurement function, and measurement noise, respectively. Let x̂ be an estimate of x. A Taylor series expansion of h(x) about x̂, keeping the first-order term, gives the tangent plane approximation T(x) as in (34). Suppose that we choose m vectors x_i, i = 1, 2, …, m, in the neighborhood of x̂. Beale's first empirical MoN N̂_x [18] is then computed from the squared deviations ||h(x_i) − T(x_i)||² between the function and its tangent plane approximation at these points, normalized using the standard radius ρ. Guttman and Meeter [19] observed that the empirical MoN underestimates severe nonlinearity. When m approaches infinity, the empirical MoN N̂_x approaches the theoretical MoN N_x.

Least Squares Based Beale's MoN
Consider the scalar function h for polynomial nonlinearity, as described in (1). As in Beale's MoN, we choose m points x_i, i = 1, 2, …, m, in the neighborhood of x. Let

y_i := h(x_i), i = 1, 2, …, m.

An affine approximation to y_i is given by

ŷ_i = A x_i + B.

We compute Â and B̂ by minimizing the cost function

J(A, B) = Σ_{i=1}^{m} (y_i − A x_i − B)².

Then, we can use the affine mapping with Â and B̂ in Beale's MoN.
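The minimization above is an ordinary linear least squares problem. A sketch follows; the neighborhood half-width 0.3 and the point count m = 9 are our illustrative choices, not values from the paper.

```python
import numpy as np

a, n = 0.6, 3
h = lambda x: a * x ** n

x0, m = 4.0, 9
xi = x0 + np.linspace(-0.3, 0.3, m)     # m points in the neighborhood of x0
yi = h(xi)

# Minimize sum_i (y_i - A*x_i - B)^2 over (A, B) via linear least squares
X = np.column_stack([xi, np.ones(m)])
(A_hat, B_hat), *_ = np.linalg.lstsq(X, yi, rcond=None)

residuals = yi - (A_hat * xi + B_hat)   # nonzero because h is nonlinear
```

Over a small symmetric neighborhood, the fitted slope Â is close to the derivative ḣ(x₀) = a n x₀^{n−1}, and the residuals capture the departure of h from the best affine fit.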

Linssen's MoN
In order to correct the deficiency in Beale's MoN, Linssen proposed a modified MoN [20].

Least Squares Based Linssen's MoN
Using the same procedure as in Section 3.3, we can use the affine mapping with Â and B̂ as an approximation to y_i when computing Linssen's MoN.

Parameter-Effects Curvatures
The parameter-effects curvature and intrinsic curvature defined by Bates and Watts [21,25,26] are associated with a non-linear parameter estimation problem and are defined at the estimated parameter. We note that, in (1), h : R → R. Since h is a scalar function, the intrinsic curvature of Bates and Watts K^N(x̂) [21] and the direct intrinsic curvature β^N_δ(x̂) [29] are zero. Thus, only the parameter-effects curvature of Bates and Watts K^T(x̂) and the direct parameter-effects curvature β^T_δ(x̂) are non-zero. Since the intrinsic curvature is zero, for simplicity of notation, we drop the superscript "T" from the parameter-effects curvatures, which are given by (50) and (51), respectively. From (26), we get

Ḧ = a n(n − 1) x^{n−2} d. (52)
Hence, from (27) and (52), we obtain

||Ḣ(x̂)|| = √N a n x̂^{n−1}, (55)
||Ḧ(x̂)|| = √N a n(n − 1) x̂^{n−2}. (56)

Substitution of the results from (55) and (56) in (50) and (51) gives the parameter-effects curvature K(x̂) in (57) and the direct parameter-effects curvature β_δ(x̂) in (58). We note that the extrinsic curvature in (36) is evaluated at the true x, while the parameter-effects curvatures K(x̂) in (50) and β_δ(x̂) in (51) are evaluated at the estimate x̂. Because x̂ is a random variable, K(x̂) and β_δ(x̂) are random variables. When we perform Monte Carlo simulations and estimate x from measurements, x̂ varies among Monte Carlo runs. Therefore, K(x̂) and β_δ(x̂) vary with the Monte Carlo runs.

Li's MoN
For a scalar random variable x, the un-normalized MoN proposed by Li [43,44] represents the square root of the minimum mean square distance between the nonlinear measurement function h and the set of all affine functions L, where L(x) = Ax + B. The scalar parameters A and B are determined in the minimization process. For the current problem, where x is non-random, the un-normalized MoN J and the normalized MoN ν are given, respectively, by

J = [σ²_h − c²_hx/σ²_x]^{1/2}, (59)
ν = J/σ_h. (60)

Given x̂ and σ_x (30), the unscented transformation (UT) [14,15], cubature transformation (CT) [16], or Monte Carlo method [8] can be used to compute σ²_h and c_hx. We find that the UT gives good results in calculating the two MoNs. Next, we describe computing J and ν using the UT. We use κ_UT = 2 [14]. The three weights and sigma points are given, respectively, by

w_0 = κ_UT/(1 + κ_UT), w_1 = w_2 = 1/[2(1 + κ_UT)],
χ_0 = x̂, χ_{1,2} = x̂ ± [(1 + κ_UT) σ²_x]^{1/2}.

The transformed measurement points are

Z_i = h(χ_i), i = 0, 1, 2.

Then, the mean and variance of h are given by

h̄ = Σ_{i=0}^{2} w_i Z_i, σ²_h = Σ_{i=0}^{2} w_i (Z_i − h̄)².

The cross-covariance c_hx is computed by

c_hx = Σ_{i=0}^{2} w_i (χ_i − x̂)(Z_i − h̄).
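Under the interpretation above, where J is the root of the minimum mean square affine-fit error and ν = J/σ_h, Li's two MoNs follow from the UT moments in a few lines. The sketch below encodes our reading of [43,44]; the helper name and default κ_UT = 2 for a scalar state are assumptions.

```python
import numpy as np

def li_mon_ut(h, x_hat, var_x, kappa_ut=2.0):
    """Un-normalized MoN J and normalized MoN nu of a scalar function h,
    with the moments sigma_h^2 and c_hx approximated by the unscented
    transform (scalar state, kappa_UT = 2)."""
    s = np.sqrt((1.0 + kappa_ut) * var_x)
    chi = np.array([x_hat, x_hat + s, x_hat - s])           # sigma points
    w = np.array([kappa_ut, 0.5, 0.5]) / (1.0 + kappa_ut)   # weights
    Z = h(chi)                                              # transformed points
    z_bar = w @ Z                                           # mean of h
    var_h = w @ (Z - z_bar) ** 2                            # variance of h
    c_hx = w @ ((chi - x_hat) * (Z - z_bar))                # cross-covariance
    J = np.sqrt(max(var_h - c_hx ** 2 / var_x, 0.0))        # min affine-fit RMS error
    nu = J / np.sqrt(var_h) if var_h > 0.0 else 0.0         # normalized, in [0, 1]
    return J, nu
```

An affine h yields J = ν = 0 (the affine family fits it exactly), while the polynomial h(x) = 0.6 x³ yields 0 < ν < 1.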

MoN of Straka, Duník, and Šimandl
Straka, Duník, and Šimandl presented two local MoNs in [48,49]. Given the estimate x̂ and variance σ²_x, these MoNs use a number of points χ_i, i = 1, 2, …, m, in the neighborhood of x̂. We analyze the first MoN proposed by the authors. The points transformed by the non-linear function h are given by

Z_i = h(χ_i), i = 1, 2, …, m.

Define Z := [Z_1, Z_2, …, Z_m]ᵀ and X := [χ_1, χ_2, …, χ_m]ᵀ. A linear approximation to Z is Xθ, where θ is a scalar parameter to be estimated. The cost function proposed in [48,49] to determine θ is given by

J_1(θ) = (Z − Xθ)ᵀ W (Z − Xθ), (73)

where W is a weight matrix [48,49]. The LS estimate [39] that minimizes the cost function is given by

θ̂_LS = (Xᵀ W X)^{−1} Xᵀ W Z. (74)

For this problem, θ is a scalar, and the LS estimate in (74) reduces to

θ̂_LS = (Xᵀ W Z)/(Xᵀ W X). (75)

The cost function J_1 evaluated at θ̂_LS is treated as a local MoN η,

η := J_1(θ̂_LS). (76)

Remark 6. We have calculated the average MoN for the bearing-only filtering [27], GMTI [32], and video filtering [34] problems. The MoNs are presented in Table 2. From this table, we find that the degree of nonlinearity of the bearing-only filtering problem is about two orders of magnitude higher than that of the GMTI or video filtering problem. This implies that a simple filter, such as the EKF or UKF, is sufficient for the GMTI or video filtering problem, but an advanced filter, such as the PF, is needed for the BOF [17] problem. Table 2. MoNs for the bearing-only, GMTI, and video filtering problems.
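A sketch of the first local MoN η follows, under stated assumptions: uniform weights W = I and a ±2σ grid for the neighborhood points are our illustrative choices ([48,49] specify particular point sets and weights, e.g., sigma points).

```python
import numpy as np

def straka_mon(h, x_hat, var_x, m=5):
    """Local MoN eta: WLS residual cost of the best linear fit Z ~ X*theta
    to the transformed neighborhood points (first MoN of Straka et al.)."""
    chi = x_hat + 2.0 * np.sqrt(var_x) * np.linspace(-1.0, 1.0, m)
    Z = h(chi)                          # transformed points
    X = chi                             # regressor; theta is a scalar
    W = np.eye(m)                       # assumed uniform weight matrix
    theta = (X @ W @ Z) / (X @ W @ X)   # scalar WLS estimate, as in (75)
    r = Z - theta * X
    return float(r @ W @ r)             # cost J1 at theta_LS, used as eta
```

A linear function through the origin yields η = 0 (the linear fit is exact), while any curvature in h leaves a positive residual cost.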

Mapping between CMoN and MSE in Polynomial NonLinearity
The nonlinearity of the problem imposes challenges in parameter estimation. We analyze the CMoN and MSE of the non-linear estimation problem to discover relationships among them. For the current problem, the CMoN are measured by the parameter-effects curvature in (57) and the direct parameter-effects curvature in (58). In general, the CMoN depend on the first and second derivatives of the non-linear function calculated at the parameter estimate and, for β_δ(x̂), on the norm of the estimation error. Therefore, the CMoN depend on the type of estimator (e.g., ML) used to obtain the parameter estimate. The extrinsic curvature (38) depends on the first and second derivatives of the non-linear function evaluated at the true x.

MSE and Sample MSE
We estimate the x coordinate using noisy measurements at a discrete set {x_k}, k = 1, 2, …, N_x, of values. Let x̂_{k,m} denote the estimate of x_k in the m-th Monte Carlo run. The error x̃_{k,m} in x̂_{k,m} is defined by

x̃_{k,m} := x_k − x̂_{k,m}, k = 1, 2, …, N_x, m = 1, 2, …, M,

where M is the number of Monte Carlo runs. The MSE at x_k is given by

MSE_k := E[x̃²_{k,m}].

The sample MSE (SMSE) at x_k is defined by

SMSE_k := (1/M) Σ_{m=1}^{M} x̃²_{k,m}.

Let L_CRLB(x) denote the log₁₀ of the CRLB. Taking the log of CRLB_x in (32), we get

L_CRLB(x) = log₁₀[σ²/(N n² a²)] − 2(n − 1) log₁₀ x. (81)
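The SMSE can be checked against the CRLB by Monte Carlo, using the closed-form estimator (20) and the CRLB expression implied by (81). The sketch below uses the Section 5 scenario values; the single x value and the seed are our choices.

```python
import numpy as np

rng = np.random.default_rng(1)
a, n, sigma, N, M = 0.6, 3, 0.5, 10, 1000
x_true = 4.0

err2 = np.empty(M)
for m in range(M):
    z = a * x_true ** n + sigma * rng.standard_normal(N)   # vector measurement
    x_hat = (z.mean() / a) ** (1.0 / n)                    # ML estimate (20)
    err2[m] = (x_true - x_hat) ** 2

smse = err2.mean()                                         # sample MSE
crlb = sigma ** 2 / (N * a ** 2 * n ** 2 * x_true ** (2 * (n - 1)))
```

With M = 1000 runs, the SMSE agrees with the CRLB to within Monte Carlo error, consistent with the ANEES results reported in Section 5.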

MSE and Parameter-Effects Curvature
Let L_K(x) denote the log of the expected value of K(x̂) in (57),

L_K(x) := log₁₀ E[K(x̂)]. (82)

In order to compute L_K(x), we first approximate the expectation in (82) by assuming σ_x ≪ x, which holds for the case investigated in our paper,

E[K(x̂)] ≈ K(E[x̂]) ≈ K(x).

The last step follows from the assumption that the estimator is nearly unbiased. Taking the logarithm yields the expression for L_K(x) in (84). From (84) and (81), we can see that there is an affine mapping between L_CRLB(x) and L_K(x); that is,

L_CRLB(x) = α^K_1 L_K(x) + α^K_0, (85)

where α^K_1 and α^K_0 are constants independent of x. We observe that α^K_1 is positive and, hence, L_K(x) and L_CRLB(x) have non-zero slopes of the same sign. As a result, K(x̂) and the CRLB vary in the same direction.
Now, taking the expected value of β_δ(x̂) gives the expression in (88). The RHS of (88) can be simplified by assuming that x̂ is unbiased and that it achieves the CRLB. Additionally, we approximate the estimation error as Gaussian, with the variance of x̂ given in (29); this gives (89). Substituting (89) into (88) and using (32) for CRLB_x, we obtain L_β(x) in (91). From (91) and (81), we can write the affine mapping

L_CRLB(x) = α^β_1 L_β(x) + α^β_0, (92)

where α^β_1 and α^β_0 are constants independent of x. We also observe that α^β_1 is positive and, hence, L_β(x) and L_CRLB(x) have non-zero slopes of the same sign. As a result, β_δ(x̂) and the CRLB vary in the same direction.

Extrinsic Curvature
The expression for the extrinsic curvature for our problem is given in (38). As in the previous sections, we define L_κ(x) := log₁₀(κ(x)).
Taking the log of (38), we have

L_κ(x) = log₁₀ κ(x) = log₁₀[a n(n − 1) x^{n−2}] − (3/2) log₁₀[1 + (a n x^{n−1})²]
≈ log₁₀[a n(n − 1) x^{n−2}] − (3/2) log₁₀[(a n x^{n−1})²], (95)

where the approximation in the last step is valid for x > 2, since then (a n x^{n−1})² ≫ 1. From (95) and (84), it is easy to establish an affine mapping between L_K(x) and L_κ(x); similarly, from (95) and (91), we can establish an affine relationship between L_β(x) and L_κ(x), in each case with constants independent of x. Using arguments similar to those in the previous sections, we infer that the extrinsic curvature and the parameter-effects curvature have non-zero slopes of the same sign, and likewise for the extrinsic curvature and the direct parameter-effects curvature.
Suppose that an affine mapping exists between b and c; that is,

b_k = α₁ c_k + α₀ + e_k, (108)

where e_k is a random noise. We can write (108) in the matrix-vector form

b = H_c α + e,

where α := [α₁, α₀]ᵀ, e := [e_1, …, e_{N_x}]ᵀ, and the k-th row of H_c is [c_k, 1]. Given b and H_c, we can estimate α using linear least squares (LLS). We can similarly define affine mappings between other variable pairs. Altogether, we consider four such pairs, the first being b = log₁₀(SMSE_k) and c = log₁₀(K(x̂_k)) for each power of the polynomial function, as in (85).
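The LLS estimate of α is a two-column least squares problem. A minimal sketch (the function name is ours):

```python
import numpy as np

def fit_affine(c, b):
    """LLS estimate of alpha = [alpha1, alpha0] in b_k = alpha1*c_k + alpha0 + e_k."""
    H = np.column_stack([c, np.ones_like(c)])   # k-th row of H_c is [c_k, 1]
    alpha, *_ = np.linalg.lstsq(H, b, rcond=None)
    return alpha
```

Applying this to the (log₁₀ MoN, log₁₀ SMSE) pairs yields the slope α₁ and intercept α₀ of each affine mapping.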

Numerical Simulation and Results
We follow the same simulation scenario as used in our previous work [35]. We use a = 0.6, n = 2, 3, 4, 5, and uniformly spaced x coordinates with a spacing of 0.1 in the interval [2,7]. The measurement noise standard deviation σ is 0.5. The dimension of the measurement vector is 10 or 20. The results are based on 1000 Monte Carlo runs. Figure 2 shows log₁₀(h(x)) versus x. To assess the accuracy of the MLE, we compute the sample bias, sample MSE, ANEES [42], and CRLB [2,41,51]. Let x_k, x̂_{k,i}, and σ²_{k,i} denote the true parameter, ML estimate, and associated variance, respectively, at the k-th point in the i-th Monte Carlo run. The sample bias in the estimate at the k-th point is defined by [9]

b̄_k := (1/M) Σ_{i=1}^{M} (x_k − x̂_{k,i}),

where M is the number of Monte Carlo runs. The sample root MSE (RMSE) [9] and ANEES [2,9,42] at the k-th point are defined, respectively, by

RMSE_k := [(1/M) Σ_{i=1}^{M} (x_k − x̂_{k,i})²]^{1/2},
ANEES_k := (1/M) Σ_{i=1}^{M} (x_k − x̂_{k,i})²/σ²_{k,i}.

Figure 3 presents the sample bias for different powers of x. We observe from Figure 3 that the bias is small compared with the true value of x and that the bias decreases as the power of x increases. In Figure 4, we have plotted √CRLB and the average of σ_x over the Monte Carlo runs. Figure 4 shows that, for each power of x, √CRLB and the average of σ_x lie on top of each other, and it is not possible to distinguish them in the figure. We present the ANEES [42] in Figure 6 for different powers of x with 99% confidence bounds. We see from Figure 6 that the ANEES lies within the 99% confidence bounds. This shows that the variance σ²_x calculated using the MLE is consistent with the estimation error. Figure 7 presents the logarithm of the extrinsic curvature, log₁₀(κ(x)), versus x. The extrinsic curvature is completely determined by the first and second derivatives of the non-linear function h and is evaluated using the true x.
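The ANEES consistency check can be reproduced with the closed-form estimator and the ML-based variance. A sketch using the scenario values above; the single x point and the seed are our choices.

```python
import numpy as np

rng = np.random.default_rng(2)
a, n, sigma, N, M = 0.6, 3, 0.5, 10, 1000
x_true = 4.0

nees = np.empty(M)
for i in range(M):
    z = a * x_true ** n + sigma * rng.standard_normal(N)
    x_hat = (z.mean() / a) ** (1.0 / n)            # ML estimate (20)
    # Linearized variance of x_hat, evaluated at the estimate
    var_hat = sigma ** 2 / (N * a ** 2 * n ** 2 * x_hat ** (2 * (n - 1)))
    nees[i] = (x_true - x_hat) ** 2 / var_hat

anees = nees.mean()   # close to 1 for a consistent estimator
```

Since each NEES term is approximately chi-square with one degree of freedom, the ANEES over 1000 runs concentrates near 1, matching the 99% confidence bounds shown in Figure 6.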
In Figures 8-18, we present results using 10 scalar measurements. We have also generated results using 20 scalar measurements; in order to limit the number of figures, we have not presented them. The CRLB, variance of the estimation error, all MoNs, and the MSE follow the same trend; however, the values for 20 measurements are lower due to the improved estimation accuracy. In [35], we had shown analytically, and through Monte Carlo simulation, that affine mappings exist among log₁₀(MSE), log₁₀(κ), log₁₀(Avg. K), and log₁₀(Avg. β). In Figures 13-18, we have plotted log₁₀(MSE) versus the log₁₀ of various MoNs using 10 scalar measurements. These figures show that log₁₀(MSE) varies with log₁₀(MoN) according to an affine mapping with a positive slope. This implies that the MSE increases as a MoN increases. We obtain similar results for the case of 20 scalar measurements.
The above results demonstrate that, for the polynomial nonlinearity problem analyzed, any of the seven MoNs studied is a suitable metric to quantify the MSE, which represents the complexity of a parameter estimation problem. Further research is needed to study the applicability of these MoNs in real-world non-linear filtering problems.

Conclusions
We considered a polynomial curve in 2D and derived analytic expressions for the ML estimate and associated variance of the independent variable x using a vector measurement. The ML estimate is used to evaluate the Jacobian and Hessian of the measurement function appearing in the computation of the Bates and Watts and direct parameter-effects curvatures, Beale's MoN, and Linssen's MoN. Our numerical results show that the variance of the estimated parameter and the Cramér-Rao lower bound (CRLB) are nearly the same for different powers of x. The average normalized estimation error squared (ANEES) lies within the 99% confidence interval, which indicates that the ML based variance is consistent with the estimation error.
We used seven MoNs, including the extrinsic curvature using differential geometry, Beale's MoN (and its least squares variant), Linssen's MoN (and its least squares variant), the Bates and Watts parameter-effects curvature, the direct parameter-effects curvature, Li's MoN, and the MoN of Straka, Duník, and Šimandl. If a MoN has a high value, then the nonlinearity is high. All of the MoNs show the same type of variation with x and the power of the polynomial. Secondly, as the logarithm of a MoN increases, the logarithm of the MSE also increases linearly for each MoN. This implies that, as a MoN increases, the MSE increases. These results are quite surprising, given the fact that these MoNs are derived based on completely different theoretical considerations. The second feature of our analysis is useful in establishing that any MoN in our study can be considered as a candidate metric for quantifying the MSE, which represents the complexity of a parameter estimation problem. Our future work will study other practical parameter estimation and non-linear filtering problems.