1. Introduction
Filtering algorithms designed within the framework of the Bayesian stochastic approach are widely used to solve problems of integrated measurement processing connected with data fusion from various navigation sensors or navigation systems, i.e., multisensor navigation data fusion. They aim to calculate an estimate that is optimal in the mean square sense [1,2,3,4,5,6,7,8,9,10,11]. This estimate is the mathematical expectation corresponding to the posterior probability density function (PDF) of the vector of parameters being estimated. One of the main advantages of such algorithms is that, in addition to the estimate, they can generate the corresponding accuracy characteristic in the form of a calculated covariance matrix of estimation errors. When this matrix is consistent with the real covariance matrix, the filtering algorithm is also considered consistent [5,12].
Usually, algorithms are designed to be recursive, which implies processing incoming measurements one after another. Problems in which the posterior PDF has a single-extremal form (the function has only one extremum in the domain of a priori uncertainty; from here on, such a PDF is referred to as single-extremal) are often solved using recursive Kalman-type algorithms (KTAs) based on the Gaussian approximation of the posterior density [6,11,13,14,15,16]. The simplest of such algorithms are based on the expansion of the nonlinear functions, describing the models of the dynamic system and the measurements, in a truncated Taylor series. Such algorithms include the linearized [2,6] and extended [2,11] Kalman filters, second-order filters [6,17], polynomial filters [18], third- and higher-order filters [19,20], and their iterative modifications [2,11]. Another approach to KTA design involves statistical linear regression procedures [21]. These are sometimes called derivative-free filters because, when an approximate description of the nonlinear functions is obtained using their linear analogs, there is no need to calculate derivatives [22]. Among such algorithms, also known as sigma-point filters, are, for example, the Unscented [23,24], Cubature [25,26], and Smart Sampling [27] Kalman Filters, and many others [28,29].
Traditional recursive KTAs are ineffective for problems in which the posterior PDF takes a complicated multi-extremal form [30]. To solve them, designers of filtering algorithms use the known recursive relation for the posterior PDF and various methods of its approximation [28]. These algorithms use a significant set of parameters to describe the posterior density. Here, we should mention filters based on the point-mass method and those using poly-Gaussian approximation [31,32,33]. The most widely used are the so-called sequential Monte Carlo methods, also known as particle filters, and their various modifications [34,35,36,37,38]. These algorithms can achieve accuracy close to the potential one, i.e., the accuracy corresponding to the optimal algorithm, but they often prove to be computationally complicated. Various procedures are used to reduce the amount of computation in such algorithms, for example, importance resampling [39] and Rao-Blackwellization [40]. Despite this, their computational complexity remains extremely high, which often limits their online application. Thus, the most difficult problems to solve in practice are those in which the posterior PDF has a complicated multi-extremal form. Among the various filtering problems connected with multisensor navigation data fusion, we can single out a separate class in which the posterior PDF, being multi-extremal at the initial moments of estimation time, becomes single-extremal. It is this class of problems that is considered in this paper. Such problems arise, for example, during navigation data fusion; among them are the navigation of a group of autonomous underwater vehicles [30], beacon navigation [41], map-aided navigation [42,43,44,45,46,47], and some others. Traditional recursive algorithms based on the Gaussian approximation of the posterior density turn out to be inefficient for their solution, and recursive algorithms that need a significant set of parameters to describe the posterior density are often computationally intensive, which makes them unsuitable for online applications. In [30,48], we showed that nonrecursive iterative KTAs used to solve such problems can provide the accuracy of optimal estimation and be consistent starting from the moment when the posterior PDF becomes single-extremal. In this case, a batch of all measurements accumulated by the current moment arrives at the input of such algorithms, which is why they are called batch algorithms [9,30]. Despite all their advantages, these algorithms have two main drawbacks when used to solve the problems under consideration. First, their computational complexity, although often lower than that of algorithms that use a significant set of parameters to describe the posterior PDF (for example, particle filters), remains significant; this is due to the need to invert high-dimensional matrices. Second, the moment when the posterior PDF becomes single-extremal is usually unknown in practice, and its identification is a nontrivial task.
The aim of this paper is to design algorithms that lack the above-mentioned shortcomings. The paper proposes two algorithms.
The first one is computationally economical: a nonrecursive iterative algorithm called the Recursive Iterative Batch Linearized Smoother (RI-BLS). Instead of inverting a high-dimensional matrix at each iteration, it uses a recursive procedure that produces the necessary estimates by solving the smoothing problem. That is, the RI-BLS is based on the combined use of recursive and nonrecursive data processing schemes, and it has low computational complexity. This algorithm can be used in problems for which the moment when the posterior PDF becomes single-extremal can be determined in advance; identifying this moment, however, remains a challenging problem.
The second algorithm is called the Recursive Iterative Batch Multiple Linearized Smoother (RI-BMLS). In some cases, its computational complexity may be somewhat higher than that of the RI-BLS, but the RI-BMLS is capable of determining the moment at which the posterior PDF becomes single-extremal. Like the RI-BLS, the proposed RI-BMLS is based on the combined use of recursive and nonrecursive data processing schemes. The essence of the RI-BMLS is the simultaneous use of a set of RI-BLSs, each with its own individual linearization point at the initial moment. The set of linearization points, each used in its own RI-BLS, is formed so that their neighborhoods jointly cover the domain of a priori uncertainty. At the moment when the posterior PDF becomes single-extremal, the estimates at the output of the RI-BLS set with various linearization points are grouped into an area corresponding to one extremum, which allows us to identify this moment. After that, to save computational resources, the problem is solved with a single recursive iterative Kalman filter, which, in the case of a single-extremal PDF, provides accuracy close to that of the optimal algorithm. Such algorithms are used to solve the problem of navigation system correction using nonlinear measurements from various sensors, for example, in the navigation of a group of autonomous underwater vehicles, single-beacon navigation, and map-aided navigation.
The paper is structured as follows. In Section 2, the nonlinear filtering problem under study is formulated. Section 3 describes the proposed Recursive Iterative Batch Linearized Smoother, and Section 4 considers the Recursive Iterative Batch Multiple Linearized Smoother. Section 5 formulates a map-aided navigation problem and demonstrates the performance and advantages of the proposed algorithms. Section 6 and Section 7 analyze and summarize the main contributions of this paper.
2. Bayesian Statement of the Nonlinear Estimation Problem
Consider the problem of the nonlinear estimation of an n-dimensional random vector described by a shaping filter using m-dimensional vector measurements:
In these relations, it is assumed that and are known nonlinear n- and m-dimensional functions describing the dynamics for the state vector and the measurement model; k is the discrete time index; is an n-dimensional vector of known input signals (for double indexing in subscripts, here and below, the first index denotes the component number, and the second refers to the time); is an n-dimensional random Gaussian vector with a given PDF ; hereinafter, the notation is used for the density of the Gaussian random vector b with mathematical expectation and covariance matrix B; is -dimensional zero-mean discrete Gaussian white noise, independent of with a known -dimensional covariance matrix ; is an -dimensional matrix; and is m-dimensional zero-mean discrete Gaussian white noise, independent of and , with covariance matrix .
The essence of the estimation problem in the framework of the Bayesian approach is to obtain, in a certain sense, an optimal estimate of state vector based on measurements and, if possible, the corresponding conditional covariance matrix of estimation error , which characterizes its current accuracy (corresponding to a specific set of measurements ).
It is known that the optimal, in the mean square sense, estimate and the corresponding covariance matrix are defined as follows [5]: where is the posterior PDF, conditional on the measurements.
It is clear that in order to find an optimal estimate (3) and covariance matrices (4), we need to have posterior PDF , which is difficult to calculate, so that in practice, different methods of its approximation are used to design suboptimal filtering algorithms. The specificity of the nonlinear filtering problem considered in this paper is that the posterior PDF, which has a complicated multi-extremal character at initial moments of estimation time, evolves into a single-extremal form.
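In conventional Bayesian filtering notation (with $x_k$ the state vector and $Y_k=\{y_1,\ldots,y_k\}$ the accumulated measurements; these symbols are assumed here, not taken from the original equations), the estimate (3) and covariance (4) have the standard form:

```latex
\hat{x}_k = \int x_k \, p(x_k \mid Y_k)\, dx_k,
\qquad
P_k = \int \bigl(x_k - \hat{x}_k\bigr)\bigl(x_k - \hat{x}_k\bigr)^{\mathsf{T}}\, p(x_k \mid Y_k)\, dx_k ,
```

where $p(x_k \mid Y_k)$ is the posterior PDF conditioned on the batch of measurements accumulated by step $k$.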
Note that, as is not uncommon in the Bayesian approach, along with the optimal, in the mean square sense, estimate (3), it is also possible to determine the estimate corresponding to the maximum of the posterior PDF:
It is important to emphasize that in the case where the posterior PDF has one extremum and is close to the Gaussian one, estimates (3) and (5) will be close to each other.
Note that the optimal estimate minimizes the conditional (4) and unconditional covariance matrices of estimation errors defined as [12] where is the joint PDF of vectors and , and E is the sign of the mathematical expectation, with a subscript explaining from which PDF it is calculated.
It was already noted in the Introduction that when designing estimation algorithms, preference is usually given to recursive algorithms based on a processing scheme that is recursive with respect to measurements. Along with recursive algorithms, the desired estimate of the state vector at an arbitrary point in time can be obtained by finding it as part of an estimated vector that includes all elements for all k. The recursive and nonrecursive algorithms are described in Appendix A.1 and Appendix A.2.
Note that the optimal estimate does not depend on the processing scheme according to which it is calculated—recursive or nonrecursive. The situation is different when recursive and nonrecursive suboptimal algorithms are used to calculate the estimate. This also applies to algorithms based on linearization, which is the subject of this paper.
The algorithms that are based on the linearized representation of nonlinear functions assume that where , and are Jacobian matrices of functions with dimensions and , calculated at linearization points and .
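In conventional notation (the symbols below are assumed here, since the original expressions are not reproduced), this linearized representation is the first-order Taylor expansion about the linearization points $\bar{x}_k$ and $\tilde{x}_k$:

```latex
f_k(x_k) \approx f_k(\bar{x}_k) + F_k \bigl(x_k - \bar{x}_k\bigr),
\qquad
h_k(x_k) \approx h_k(\tilde{x}_k) + H_k \bigl(x_k - \tilde{x}_k\bigr),
```

with the Jacobians $F_k = \left.\partial f_k/\partial x\right|_{x=\bar{x}_k}$ of dimension $n \times n$ and $H_k = \left.\partial h_k/\partial x\right|_{x=\tilde{x}_k}$ of dimension $m \times n$.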
Suboptimal algorithms based on linearization have two distinctive features. First, the posterior PDF is replaced at each step with its Gaussian approximation, described by the estimate generated in the algorithm and the corresponding covariance , i.e., . Second, the processing of the current measurement is carried out based on the ideology of designing a linear optimal algorithm [13]. The methods for developing such algorithms with the use of both recursive and nonrecursive schemes are described in Appendix A.3 and Appendix A.4.
Recall that in this paper, we consider a class of problems whose specificity lies in the fact that the posterior PDF of the state vector under estimation changes its character, as measurements accumulate, from a multi-extremal to a single-extremal density. The extended Kalman filter (EKF) and iterative extended Kalman filter (IEKF) use a Gaussian approximation of the posterior PDF at each step and involve the estimate from the previous step to calculate the linearization points. It is clear that they are ineffective for the solution of the class of problems under consideration. The reason for this is that at the initial stage (with a small number of accumulated measurements), the form of the posterior PDF differs from the Gaussian one, which leads to significant linearization errors in such well-known algorithms [1,2,3,4,5,6,7,8,9,10] as the EKF and IEKF. In some cases, the accuracy of the EKF and IEKF may even be lower than that of the linearized Kalman filter (LKF), whose linearization points at each step k are chosen as some fixed values that depend only on a priori information [30].
As shown in [30,48], in contrast to traditional recursive algorithms, nonrecursive (batch) iterative algorithms based on linearization allow for accuracy close to the potential one, i.e., the accuracy corresponding to the optimal algorithm, in the solution of problems in which the posterior PDF of the estimated state vector changes its character, as measurements accumulate, from multi-extremal to single-extremal. Note that in the suboptimal nonrecursive (batch) algorithms, the linearization points depend only on a priori information. As shown in [30], this fact creates prerequisites for increasing the efficiency of the algorithms designed with the use of the nonrecursive scheme, compared to recursive algorithms, when solving problems of the class under consideration.
The simplest batch algorithm, Batch Linearized Smoother (BLS), can be designed if linearization points are chosen as .
The BLS efficiency can be increased using iterations, similar to the way it is achieved in the IEKF. The algorithm designed in this way, the Iterative Batch Linearized Smoother (I-BLS), corresponds to the block diagram shown in Figure 1. The main feature of the I-BLS is that, during linearization at fixed points, it repeatedly (iteratively) processes the batch of all measurements accumulated by the current moment of time. In this case, at each iteration, the linearization points are corrected simultaneously, taking into account the results obtained at the previous iteration.
As shown in [30], for the class of problems under consideration, the I-BLS, starting from the moment when the PDF becomes single-extremal, is capable of providing accuracy close to the potential one and is consistent. Note that the algorithms called BLS and I-BLS in this paper are called NR-EKF and NR-IEKF in [30]. This is because the expressions for the BLS and I-BLS can be obtained by designing the EKF for the solution of the original problems (1) and (2) in its nonrecursive formulation, i.e., at each step k, the vector is estimated using measurement (2). The main disadvantage of the I-BLS is its high computational complexity.
Further, we present an algorithm called the RI-BLS, based on the combined use of recursive and nonrecursive schemes, which is an economical implementation of the I-BLS in terms of computational complexity.
3. Recursive Iterative Batch Linearized Smoother
In the RI-BLS, just as in the case of the I-BLS, after the next k-th measurement arrives, all measurements included in the batch of measurements accumulated by the current moment are processed again (iteratively). Each iteration includes two blocks.
Block 1 implements a recursive solution of the filtering problem for all using the procedures of the linearized Kalman filter [5], in which the linearization points and are fixed, and in the first iteration , they depend only on the a priori mathematical expectation . In this block, we can highlight a set of calculations for all :
- Step 1. Formation of linearization points:
- Step 2. Calculation of the estimate prediction and its error covariance matrix:
- Step 3. Calculation of the measurement prediction and its error covariance matrix:
- Step 4. Calculation of the cross-covariance matrix:
- Step 5. Calculation of the gain factor:
- Step 6. Calculation of the estimate and its corresponding covariance matrix:
During processing, batches of accumulated measurements , prediction values , estimates calculated in Block 1, and the corresponding covariance matrices and for each moment of estimation time are saved to be used further.
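As an illustration, Steps 1–6 of Block 1 admit a compact sketch as a forward pass of a linearized Kalman filter with fixed linearization points. The function name, the list-based interface, and the way the single linearization point is reused for both models below are our own illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def lkf_forward(x0, P0, f, h, F_jac, H_jac, Q, R, ys, lin_pts):
    """Block-1-style forward pass: a linearized Kalman filter whose
    linearization points lin_pts stay fixed for the whole pass."""
    x, P = x0.copy(), P0.copy()
    x_pred, P_pred, x_filt, P_filt = [], [], [], []
    for k, y in enumerate(ys):
        xl = lin_pts[k]                     # Step 1: fixed linearization point
        F = F_jac(xl)
        x = f(xl) + F @ (x - xl)            # Step 2: state prediction
        P = F @ P @ F.T + Q
        x_pred.append(x.copy()); P_pred.append(P.copy())
        H = H_jac(xl)
        y_hat = h(xl) + H @ (x - xl)        # Step 3: measurement prediction
        S = H @ P @ H.T + R                 #         and its error covariance
        C = P @ H.T                         # Step 4: cross-covariance matrix
        K = C @ np.linalg.inv(S)            # Step 5: gain factor
        x = x + K @ (y - y_hat)             # Step 6: estimate update
        P = P - K @ S @ K.T                 #         and its covariance
        x_filt.append(x.copy()); P_filt.append(P.copy())
    return x_pred, P_pred, x_filt, P_filt
```

All predictions, estimates, and covariances are returned so that the backward (smoothing) block can reuse them, as the text requires.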
Block 2 implements recursive calculation, in reverse order for , of the estimates corresponding to the solution of the smoothing problem. The second block includes the following calculations:
- Step 1. Calculation of the smoother transition matrix:
- Step 2. Calculation of the smoothing estimate and its corresponding covariance matrix:
It is important to note that estimates and the corresponding covariance matrices obtained at the first iteration of j = 1 as a result of the smoothing problem solution coincide with estimates and covariance matrices , generated in the BLS as part of vector and a block of matrix .
Estimates obtained in Block 2, corresponding to the solution of the smoothing problem, are saved and used further during the repeated processing of measurements (in the next iteration) as linearization points , after which the above procedure is repeated again.
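Steps 1–2 of Block 2 have the form of a Rauch–Tung–Striebel-type backward pass over the quantities saved in Block 1. The sketch below is a minimal sketch under that assumption (names and interface are hypothetical); F_list[k+1] is taken to be the transition Jacobian used to predict step k+1 from step k:

```python
import numpy as np

def rts_backward(x_pred, P_pred, x_filt, P_filt, F_list):
    """Block-2-style backward pass: RTS-type fixed-interval smoothing."""
    N = len(x_filt)
    xs, Ps = [None] * N, [None] * N
    xs[-1], Ps[-1] = x_filt[-1], P_filt[-1]
    for k in range(N - 2, -1, -1):
        # Step 1: smoother transition (gain) matrix
        G = P_filt[k] @ F_list[k + 1].T @ np.linalg.inv(P_pred[k + 1])
        # Step 2: smoothed estimate and its covariance
        xs[k] = x_filt[k] + G @ (xs[k + 1] - x_pred[k + 1])
        Ps[k] = P_filt[k] + G @ (Ps[k + 1] - P_pred[k + 1]) @ G.T
    return xs, Ps
```

The smoothed trajectory xs can then serve as the set of linearization points for the next iteration, as described in the text.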
The RI-BLS can be represented as the block diagram shown in Figure 2.
The estimate and covariance matrix obtained at the latest iteration are taken as the estimate and covariance matrix in the RI-BLS at the k-th step. The estimate and covariance matrix generated in the RI-BLS coincide with the estimate and covariance matrix generated in the I-BLS as part of vector and a block of matrix . In addition, when implementing the proposed algorithm, it is not necessary to invert high-dimensional matrices; it is sufficient to invert only the n × n-dimensional matrices, which allows for a significant reduction in computation. Thus, the RI-BLS can be considered an economical implementation of the I-BLS in terms of computational complexity.
Retaining all the advantages of the I-BLS, the RI-BLS, starting from moment , when the posterior PDF takes a single-extremal form, generates an estimate close to the optimal one, in the mean square sense, and satisfies the consistency properties.
The algorithm considered can be used in problems for which the moment when the posterior PDF becomes single-extremal is known in advance. At the same time, in practice, researchers have to deal with problems in which this information is unavailable, particularly in solving navigational problems. The modification of the RI-BLS that allows us to determine the moment is considered below.
4. Recursive Iterative Batch Multiple Linearized Smoother
4.1. RI-BMLS Description
The main idea of the proposed algorithm is to use a set of RI-BLSs running in parallel, with different linearization points that are selected based on the features of the problem being solved. For example, these points should be selected in such a way that their neighborhoods cumulatively cover the domain of the a priori uncertainty.
It is known [2] that the iterative Kalman filter aims to find an estimate corresponding to the maximum of the posterior PDF. The iterative Gauss–Newton procedure used for this purpose does not guarantee global convergence. In the case of a multi-extremal posterior distribution, the estimate generated by the iterative algorithm may correspond to a local extremum. However, when the set of RI-BLSs used is sufficiently large and has different linearization points, the estimates they generate will be grouped in areas corresponding to all extrema, both global and local. At estimation time point , when the posterior PDF takes the single-extremal form, the RI-BLS estimates will be grouped within a small domain corresponding to a single extremum.
Thus, the process of the RI-BMLS design can be divided into the following steps:
- Step 1. Formation of samples from the previously selected domain in which is different from zero;
- Step 2. Parallel start of , for which the linearization points in the first iteration are determined for each according to
- Step 3. Verification of the following inequalities:
where is the maximum value and is the minimum one among the γ-th components of estimates , and denotes the components of a certain predefined vector . We will assume the posterior PDF is single-extremal if, for each , the difference between and is less than the γ-th component of vector .
- Step 4. If all inequalities are satisfied, then
It is clear that the procedure used to identify the extrema of the posterior PDF is approximate, and the correctness of the estimation time point identification will depend on the specific values for the vector components, which are selected heuristically. However, as will be shown further, the use of such a procedure as part of the RI-BMLS allows for the correct identification of time point .
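The inequality check in Step 3 reduces to a componentwise spread test across the bank of estimates. A minimal sketch follows; the function name and the array interface are our own assumptions:

```python
import numpy as np

def is_single_extremal(estimates, eps):
    """Return True if, for every state component, the spread (max - min)
    of the bank's estimates is below the corresponding threshold in eps."""
    E = np.asarray(estimates)               # shape: (bank size, state dim)
    spread = E.max(axis=0) - E.min(axis=0)  # per-component max minus min
    return bool(np.all(spread < eps))
```

Once this test passes, the bank can be replaced by a single recursive IEKF, as described below in the text.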
After has been identified, it becomes possible to continue solving the problem without using the RI-BLS bank and instead use a single recursive IEKF, which, in the case of a single-extremal posterior PDF, will have all the advantages of the RI-BLS and will be simpler in terms of computational complexity.
Thus, the RI-BMLS can be represented as the block diagram shown in Figure 3.
The total time of the problem solution using the proposed RI-BMLS can be divided into two stages.
At the first stage (), when the posterior PDF is multi-extremal, the RI-BLS bank is used. If necessary, it is possible to use the estimate and the estimation error covariance matrix generated in one of the RI-BLSs as the algorithm output at the first stage. However, it should be kept in mind that at this stage the algorithm is not consistent and the estimation errors can be significant. In this work, we assume that at the first stage, the RI-BMLS solves only the identification problem and generates neither an estimate nor the calculated accuracy characteristic. After has been identified, the estimates and the covariance matrices of all RI-BLSs will coincide, and any of them can be used as the initial settings of the IEKF initiated at the next stage.
At the second stage , the problem is solved with the use of the IEKF, and the estimates and estimation error covariance matrices generated in it are the RI-BMLS output. In this case, since the posterior PDF is single-extremal, the estimate generated at the second stage is close in accuracy to the optimal one in the mean square sense, and the algorithm is consistent.
Since the extremum can appear at any point belonging to the domain of a priori uncertainty, the linearization points should be selected in such a way that their neighborhoods cumulatively cover the domain of the a priori uncertainty.
It should be noted that attempts to design a similar algorithm have been made previously, for example, in [49], but they were not successful. This is primarily because the iterative algorithm with multiple linearization was designed with the use of a set of recursive iterative Kalman filters. However, as noted before, the recursive scheme used to solve the class of problems considered here leads to the accumulation of errors caused by linearization. In this paper, we use the RI-BLS, a nonrecursive algorithm, to design an iterative algorithm with multiple linearization.
4.2. Methodological Example
Let us explain the essence of the proposed algorithms by considering a simple methodological example.
Assume that it is required to estimate an exponentially correlated sequence described by a linear shaping filter: where , , is the sequence variance, is the reciprocal of the correlation interval , and is a known input signal. Nonlinear measurements (2) have the form where are known coefficients.
The simulation was carried out with the following parameters: , where is the measurement variance; sampling interval and simulation time .
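For reference, an exponentially correlated (first-order Gauss–Markov) sequence of this kind can be simulated as follows. The discretization Phi = exp(-alpha*dt) and driving-noise variance sigma2*(1 - Phi**2) are the standard choices for a stationary sequence and are assumed here, since the paper's explicit formulas are not reproduced above:

```python
import numpy as np

def simulate_gauss_markov(sigma2, alpha, dt, n_steps, u=None, rng=None):
    """Simulate a stationary exponentially correlated sequence
    x[k] = Phi * x[k-1] + u[k-1] + w[k-1], with Phi = exp(-alpha * dt)."""
    rng = np.random.default_rng(rng)
    phi = np.exp(-alpha * dt)
    q = sigma2 * (1.0 - phi ** 2)   # keeps the stationary variance at sigma2
    x = np.empty(n_steps)
    x[0] = rng.normal(0.0, np.sqrt(sigma2))
    for k in range(1, n_steps):
        uk = 0.0 if u is None else u[k - 1]
        x[k] = phi * x[k - 1] + uk + rng.normal(0.0, np.sqrt(q))
    return x
```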
The problem was simulated and solved for one sample (run) with a set of where . The linearization points at the initial moment of estimation time were uniformly distributed over the domain with a step of .
The results of the RI-BMLS operation, in the form of the true values of the estimated sequence and the RI-BLS estimates generated for different numbers of processed measurements, are presented in Figure 4. In Figure 4, the blue solid lines represent the graphs of estimates obtained at the output. Note that at the initial moment, the linearization points were taken as estimates, i.e., . The red solid line shows the true values of the estimated sequence, and the dotted purple line indicates the identified point of estimation time .
The graph in Figure 4 shows that the estimates were initially grouped within two areas, but as measurements accumulated, these areas began to converge with each other and with the true value of the sequence being estimated. At k = 7, the estimates at the output clustered within a small domain corresponding to the true value of the estimated variable, allowing us to correctly identify ; the posterior PDF then became single-extremal. Figure 5 shows the graphs of the posterior PDF obtained with the use of sequential Monte Carlo methods [34] at It is evident from Figure 5d that after processing the measurements at step k = 7, the posterior PDF becomes single-extremal. At the same time moment, as Figure 4 shows, the estimates at the output of the whole set of are grouped within a small area corresponding to the real value of the estimated quantity, which made it possible to correctly identify .
5. Map-Aided Navigation
Let us now consider one of the possible options for the practical application of the proposed algorithm: map-aided navigation [42,43,44,45,46,47,49]. This problem is an example of data fusion from a navigation system and a sensor of the Earth's gravity field. Following [42], the problem of map-aided navigation is formulated within a Bayesian framework as a nonlinear filtering problem. It is assumed that there is a navigation system (NS) that generates vehicle coordinates in the following form:
where are the unknown vehicle coordinates, and are the NS errors. In addition, we assume that there is a sensor of some geophysical field and a corresponding digital map. Thus, we can write the measurements as where function determines the dependence of the geophysical field on the vehicle coordinates, and are the measurement errors. By replacing with , we can formulate the following problem: estimate using measurements Consider the simplest case, when the NS errors for each coordinate are described by a random zero-mean Gaussian value with variances , and the measurement errors are described by Gaussian white noise with variances . In this case, we need to estimate the two-dimensional vector with measurements We also suppose that vector and errors are independent of each other. Under the assumptions made, we can write Formula (A7) as
The simulation was performed for a gravity anomaly field generated using the EGM 2008 model [50]. In this case, describes the dependence of gravity anomalies on the coordinates. The isolines of the gravity anomalies are shown in mGal in Figure 6.
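A measurement of such a gridded geophysical field at the vehicle position can be modeled by interpolating the digital map. Below is a minimal bilinear-interpolation sketch; the grid layout, function name, and parameters are our illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def map_measurement(grid, x0, y0, dx, dy, xs, ys, noise_std=0.0, rng=None):
    """Evaluate a gridded field at coordinates (xs, ys) by bilinear
    interpolation and add white measurement noise of std noise_std.
    grid[i, j] holds the field value at (x0 + i*dx, y0 + j*dy)."""
    rng = np.random.default_rng(rng)
    i = (np.asarray(xs) - x0) / dx
    j = (np.asarray(ys) - y0) / dy
    i0 = np.clip(np.floor(i).astype(int), 0, grid.shape[0] - 2)
    j0 = np.clip(np.floor(j).astype(int), 0, grid.shape[1] - 2)
    fi, fj = i - i0, j - j0                 # fractional cell coordinates
    g = ((1 - fi) * (1 - fj) * grid[i0, j0]
         + fi * (1 - fj) * grid[i0 + 1, j0]
         + (1 - fi) * fj * grid[i0, j0 + 1]
         + fi * fj * grid[i0 + 1, j0 + 1])
    return g + rng.normal(0.0, noise_std, size=np.shape(g))
```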
The path was assumed to be fixed, and the following parameters were used in the simulation: km; mGal; the distance between measurements was ; and the path length was 24 km.
First, we demonstrate the solution of the problem under consideration using the proposed algorithm with multiple linearization for one sample. To design the algorithm, we set the linearization points at the initial moment at the nodes of a uniform grid on the region of a priori uncertainty.
Figure 7 shows the isolines of the a priori PDF, and the red dots indicate the positions of the linearization points. Figure 8 shows the isolines of the posterior PDFs at the estimation time moments , , and ; the red dots indicate the estimates generated by the
As can be seen from the simulation results, the posterior PDF, which has, at the initial estimation time moments, a multi-extremal form significantly different from the Gaussian one, becomes single-extremal as measurements accumulate, and the estimates converge within the neighborhood of a small domain, which makes it possible to identify .
Note that the representing a set of nonrecursive was used only methodologically, to clarify its essence. The total complexity of the designed algorithm with multiple linearization turns out to be extremely high, which does not allow it to be applied in online mode. In this connection, to solve the problem under consideration, we designed the with only linearization points or, which is the same, nine parallel .
Figure 9 shows the isolines of the a priori PDF graph; the red dots represent the positions of the linearization points at the initial moment of time.
Using statistical testing and predictive simulation based on samples, consider the simulation results for the following algorithms: the and proposed in this paper; two traditional recursive linearization-based algorithms, the EKF and IEKF; and an algorithm based on sequential Monte Carlo methods, the particle filter (PF). The PF aims to calculate the optimal estimate in the mean square sense. For each -th algorithm, according to the methodology in [12], we obtained the real and calculated covariance matrices : where and , , are the samples of random vectors obtained by simulation according to (12) and (13), and and are the estimates and calculated covariance matrices obtained for the j-th realizations of and , .
Using (16) and (17), we calculated the real and calculated radial errors of the estimate for each of the coordinates: where and are the real and calculated variances of estimation errors and , averaged over L samples.
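The real and calculated radial errors can be computed from Monte Carlo samples as follows; the array shapes and the function name are assumptions, while the formulas mirror the averaging over L samples described above:

```python
import numpy as np

def radial_errors(err_samples, P_calc):
    """err_samples: (L, 2) coordinate estimation errors over L runs;
    P_calc: (L, 2, 2) calculated covariance matrices for the same runs.
    Returns the (real, calculated) radial errors."""
    real = np.sqrt(np.mean(err_samples[:, 0] ** 2)
                   + np.mean(err_samples[:, 1] ** 2))
    calc = np.sqrt(np.mean(P_calc[:, 0, 0]) + np.mean(P_calc[:, 1, 1]))
    return real, calc
```

For a consistent algorithm, the two values should coincide as L grows.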
Also, to compare the computational complexity, we calculated the corresponding coefficient: where , is the time spent by the computer to solve the estimation problem using the analyzed algorithm, and is the time corresponding to the EKF, which requires the minimum time of all the compared algorithms.
The formula below was used to calculate the value of , which characterizes the time, averaged over a set of samples, starting from which the posterior PDF becomes single-extremal.
In Figure 10, the solid lines show the calculation results for the real radial errors, and the dotted lines, the calculated radial errors. Blue corresponds to the EKF (no. 1), green to the IEKF (no. 2), black to (no. 3), purple to (no. 4), and red to the PF (no. 5). The purple dotted line corresponds to . The real radial errors for the EKF and IEKF are the largest of all the methods compared; moreover, starting from the moment when the posterior PDF becomes single-extremal, they begin to grow indefinitely. On the contrary, algorithms and demonstrate high accuracy; in doing so, the real radial error coincides with the calculated one after is reached.
The simulation results show that $\bar{t}^{*}$ almost coincides with the moment when the RI-BMLS reaches the PF accuracy and becomes consistent, which indicates that, starting from this moment, the posterior PDF becomes single-extremal. At the same time, the recursive EKF and IEKF turned out to be inefficient in solving map-aided navigation problems.
Table 1 presents the values of the computational complexity factor calculated using (20) for the compared algorithms.
The calculation of the computational complexity factor has shown that the RI-BLS is 15-fold simpler in computational terms than the PF. Note that the RI-BMLS using a set of RI-BLSs turned out to be simpler than the RI-BLS by a factor of 1.5 and more than 20-fold simpler than the PF.
6. Discussion
It should be noted that the algorithms described in this paper are universal, since they are intended to solve a wide class of problems in which the posterior PDF takes a single-extremal form as measurements accumulate. This class includes, among others, problems associated with navigation data fusion, for example, the problem of map-aided navigation discussed above, the group navigation problem of autonomous underwater vehicles described in [
30], the single-beacon navigation problem [
51], and others. We would like to emphasize that the idea of using multiple linearization in designing filtering algorithms, which underlies RI-BMLS, is not new in itself. Such algorithms have already been proposed earlier, for example, in [
41,
49]. However, unlike the RI-BMLS proposed in this paper, they were designed with the use of a recursive scheme, which limits their application to the class of problems discussed here.
It is clear that the comparison results for the computational complexities presented in
Table 1 are due to the specific features of the map-aided navigation problem being solved, namely, the fact that the posterior PDF becomes single-extremal within a rather short period of time. If this is not the case and the posterior PDF remains multi-extremal for a long time, the computational complexity of the RI-BMLS may be high due to the nonrecursive character of the RI-BLS.
Another possible way to reduce the computational complexity of the RI-BLS and, as a consequence, RI-BMLS is to use incremental smoothing procedures developed with the factor-graph optimization method [
52], particularly in the iSAM2 algorithm [
53].
In principle, the proposed RI-BMLS can be constructed without using RI-BLS, for example, with the simultaneous use of a set of parallel iSAM2 algorithms or some other computationally efficient nonrecursive algorithms, but this issue is beyond the scope of this paper.
One of the RI-BMLS disadvantages is its low accuracy and the fact that it does not satisfy the consistency properties before the moment of time $t^{*}$. Yet, this disadvantage can also be overcome if the posterior PDF on this interval is described as a sum of Gaussian densities determined by the estimates and covariance matrices generated in the RI-BLS bank. However, the computational complexity of the RI-BMLS in this case can increase significantly, since the accumulation of measurements makes the algorithm implementation more complicated. The algorithms discussed, though, are intended for a special class of problems in which a batch of measurements is necessary only at the initial stage of the solution; at the same time, it should be noted that it is precisely the accumulation of data that makes the algorithm efficient.
Moreover, the computational complexity of the algorithm increases with the increase in the dimension of the state vector. This is especially true for solutions of nonlinear problems. However, the RI-BMLS algorithm developed allows us to limit the growth of computational complexity compared to, for example, a particle filter.
7. Conclusions
A class of nonlinear filtering problems connected with data fusion from various navigation sensors and a navigation system has been considered. A special feature of these problems is that the posterior PDF of the state vector being estimated changes its character from multi-extremal to single-extremal as measurements accumulate.
Algorithms based on sequential Monte Carlo methods (particle filters), which in principle make it possible to attain the potential accuracy, corresponding to the estimate that is optimal in the mean square sense, are computationally complicated, especially when implemented in real time. Traditional recursive algorithms, such as the extended Kalman filter and its iterative modification, prove to be inoperable in this case.
Two algorithms, devoid of the above drawbacks, are proposed to solve this class of nonlinear filtering problems: the RI-BLS and the RI-BMLS.
The first algorithm, RI-BLS, is essentially a nonrecursive iterative algorithm; at each iteration, it processes all measurements accumulated by the current measurement time. However, to achieve this, it uses a recursive procedure: first, the measurements are processed from the first to the current one in a linearized Kalman filter, and then the obtained estimates are processed recursively in reverse time. The RI-BLS, like the I-BLS, provides accuracy close to the potential one and has the consistency property starting from the moment of time $t^{*}$, when the posterior PDF becomes single-extremal. In addition, when implementing the RI-BLS, it is not necessary to invert high-dimensional $nk \times nk$ matrices; it is sufficient to invert only n-dimensional matrices, which significantly reduces the amount of computation compared to the I-BLS. This algorithm can be used in problems for which the moment when the posterior PDF becomes single-extremal can be determined in advance. In practice, this is not always the case.
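The forward/backward structure described above can be sketched for a linear-Gaussian model as one forward Kalman pass followed by a reverse-time (Rauch-Tung-Striebel-type) pass; the scalar model and its matrices below are illustrative only, whereas in the RI-BLS the forward model is relinearized at each iteration:

```python
import numpy as np

def forward_backward_pass(F, Q, H, R, x0, P0, zs):
    """One forward Kalman filter pass over all accumulated measurements,
    then a reverse-time pass over the stored estimates.
    Only small (state- or measurement-sized) matrices are inverted."""
    xs_f, Ps_f, xs_p, Ps_p = [], [], [], []
    x, P = x0, P0
    for z in zs:                              # forward pass
        xp, Pp = F @ x, F @ P @ F.T + Q       # predict
        S = H @ Pp @ H.T + R
        K = Pp @ H.T @ np.linalg.inv(S)       # gain
        x = xp + K @ (z - H @ xp)             # update
        P = (np.eye(len(x)) - K @ H) @ Pp
        xs_p.append(xp); Ps_p.append(Pp)
        xs_f.append(x); Ps_f.append(P)
    xs_s = [None] * len(zs)                   # reverse-time pass
    xs_s[-1] = xs_f[-1]
    for k in range(len(zs) - 2, -1, -1):
        C = Ps_f[k] @ F.T @ np.linalg.inv(Ps_p[k + 1])
        xs_s[k] = xs_f[k] + C @ (xs_s[k + 1] - xs_p[k + 1])
    return xs_s

# Toy 1D usage: three noisy position measurements near 1.0
F = np.array([[1.0]]); Q = np.array([[0.01]])
H = np.array([[1.0]]); R = np.array([[0.1]])
zs = [np.array([1.0]), np.array([1.1]), np.array([0.9])]
xs = forward_backward_pass(F, Q, H, R, np.array([0.0]), np.array([[1.0]]), zs)
```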
The second algorithm, the RI-BMLS, is free from this drawback: based on the RI-BLS, it makes it possible to determine the moment $t^{*}$. The essence of the RI-BMLS lies in the simultaneous use of a set of RI-BLSs running in parallel; the linearization points for each RI-BLS are selected based on the peculiarities of the problem being solved. The estimates generated by the set of RI-BLSs are grouped in the domains corresponding to the PDF extrema, both local and global. At the moment of time $t^{*}$, when the posterior PDF becomes single-extremal, the estimates at the RI-BLS outputs are clustered within a small domain corresponding to a single extremum. This allows us to identify $t^{*}$, after which the problem is solved using one IEKF, which, in the case of a single-extremal PDF, is consistent and provides accuracy close to the potential one, corresponding to the optimal algorithm.
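A minimal sketch of this clustering test (the threshold, estimate values, and spread criterion are hypothetical choices, not the paper's exact rule): the moment $t^{*}$ can be declared once the estimates produced by the bank of filters collapse into a single small domain.

```python
import numpy as np

def estimates_clustered(estimates, threshold):
    """True if all bank estimates lie within `threshold` of their centroid,
    i.e., the bank has collapsed onto a single extremum of the posterior PDF."""
    est = np.asarray(estimates)   # shape (m, n): m parallel filters, n states
    spread = np.linalg.norm(est - est.mean(axis=0), axis=1)
    return bool(spread.max() < threshold)

# Dispersed estimates (multi-extremal posterior) vs clustered ones
print(estimates_clustered([[0.0, 0.0], [5.0, 5.0]], threshold=1.0))  # -> False
print(estimates_clustered([[1.0, 1.0], [1.1, 0.9]], threshold=1.0))  # -> True
```

Once this test first returns True, the bank can be shut down and the remaining processing handed to a single IEKF, which is the source of the computational savings reported in Table 1.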
The application of the proposed algorithms is illustrated with a methodological example and the solution of the map-aided navigation problem. The effectiveness of the proposed algorithms is demonstrated by solving the problem of data fusion from a navigation system and a sensor of the Earth's gravity field, a digital map of which is available aboard the vessel. The calculation of the computational complexity factor showed that the RI-BLS is more than 15-fold simpler than the particle filter in computational terms, and the RI-BMLS is more than 20-fold simpler, with comparable estimation accuracy.