Article

Recursive Batch Smoother with Multiple Linearization for One Class of Nonlinear Estimation Problems: Application for Multisensor Navigation Data Fusion

Faculty of Control Systems and Robotics, ITMO University, 197101 St. Petersburg, Russia
*
Author to whom correspondence should be addressed.
Sensors 2025, 25(24), 7566; https://doi.org/10.3390/s25247566
Submission received: 21 October 2025 / Revised: 8 December 2025 / Accepted: 9 December 2025 / Published: 12 December 2025
(This article belongs to the Section Intelligent Sensors)

Abstract

A class of nonlinear filtering problems connected with data fusion from various navigation sensors and a navigation system is considered. A special feature of these problems is that the posterior probability density function (PDF) of the state vector being estimated changes its character from multi-extremal to single-extremal as measurements accumulate. The algorithms based on sequential Monte Carlo methods, which, in principle, provide the possibility of attaining potential accuracy, are computationally complicated, especially when implemented in real time. Traditional recursive algorithms, such as the extended Kalman filter and its iterative modification, prove to be inoperable in this case. Two algorithms, devoid of the above drawbacks, are proposed to solve this class of nonlinear filtering problems. The first algorithm, a Recursive Iterative Batch Linearized Smoother (RI-BLS), is essentially a nonrecursive iterative algorithm; at each iteration, it processes all measurements accumulated by the current measurement time. However, to do this, it uses a recursive procedure: first, the measurements are processed from the first to the current one in the linearized Kalman filter, and then the obtained estimates are processed recursively in reverse time. The second algorithm, a Recursive Iterative Batch Multiple Linearized Smoother (RI-BMLS), is based on the simultaneous use of a set of RI-BLSs running in parallel. The application of the proposed algorithms and their advantages are illustrated by a methodological example and the solution of the map-aided navigation problem. The calculation of the computational complexity factor has shown that, with comparable estimation accuracy, the RI-BLS is more than 15-fold simpler than the particle filter in computational terms, and the RI-BMLS, more than 20-fold.

1. Introduction

Filtering algorithms designed within the framework of the Bayesian stochastic approach are widely used to solve problems of integrated measurement processing connected with data fusion from various navigation sensors or navigation systems, i.e., multisensor navigation data fusion. They aim to calculate an estimate that is optimal in the mean square sense [1,2,3,4,5,6,7,8,9,10,11]. This estimate is a mathematical expectation corresponding to the posterior probability density function (PDF) for the vector of parameters being estimated. One of the main advantages of such algorithms is the possibility of generating, in addition to the estimate, the corresponding accuracy characteristic in the form of a calculated covariance matrix of estimation errors. When such a matrix is consistent with its real covariance matrix, the filtering algorithm is also considered consistent [5,12].
Usually, algorithms are designed as recursive, which implies the processing of incoming measurements one after another. The problems in which the posterior PDF has a single-extremal form (the function has only one extremum in the domain of the a priori uncertainty, and from here on, such a PDF is referred to as single-extremal) are often solved using recursive Kalman-type algorithms (KTAs) based on the Gaussian approximation of the posterior density [6,11,13,14,15,16]. The simplest of such algorithms are based on the expansion of nonlinear functions, describing the model of a dynamic system and measurements, in a truncated Taylor series. Such algorithms include the linearized [2,6] and extended [2,11] Kalman filters, second-order filters [6,17], polynomial filters [18], third- and higher-order filters [19,20], and their iterative modifications [2,11]. Another approach to the KTA design involves statistical linear regression procedures [21]. They are sometimes called filters that do not require the calculation of derivatives, as when obtaining an approximate description of nonlinear functions using their linear analogs, there is no need to calculate the derivatives [22]. Among such algorithms, also known as sigma-point filters, are, for example, Unscented [23,24], Cubature [25,26], Smart Sampling Kalman Filters [27], and many others [28,29].
Traditional recursive KTAs are ineffective for problems in which the posterior PDF takes a complicated multi-extremal form [30]. To solve them, when designing filtering algorithms, designers use the known recursive relation for the posterior PDF and various methods of its approximation [28]. These algorithms use a significant set of parameters to describe the posterior density. Here, we should mention filters based on the point-mass method and those using poly-Gaussian approximation [31,32,33]. The most widely used are the so-called sequential Monte Carlo methods, also known as particle filters, and their various modifications [34,35,36,37,38]. These algorithms can achieve accuracy close to the potential one, i.e., the accuracy corresponding to the optimal algorithm, but they often prove to be computationally complicated. Various procedures are used to reduce the amount of computation in such algorithms, for example, importance resampling [39] and Rao–Blackwellization [40]. Despite this, their computational complexity remains extremely high, which often limits their online application. Thus, the most difficult problems to solve in practice are those in which the posterior PDF has a complicated multi-extremal form. Among the various filtering problems connected with multisensor navigation data fusion, we can single out a separate class in which the posterior PDF, being multi-extremal at the initial moments of estimation time, becomes single-extremal. It is this class of problems that is considered in this paper. Such problems arise, for example, during navigation data fusion; among them are the problem of the navigation of a group of autonomous underwater vehicles [30], the beacon navigation problem [41], the map-aided navigation problem [42,43,44,45,46,47], and some others.
Traditional recursive algorithms based on the Gaussian approximation of the posterior density turn out to be inefficient for their solution, and recursive algorithms that need a significant set of parameters to describe the posterior density are often computationally intensive, which makes them unsuitable for online applications. In [30,48], we showed that nonrecursive iterative KTAs used to solve such problems can provide the accuracy of optimal estimation and be consistent starting from the moment when the posterior PDF becomes single-extremal. In this case, a batch of all measurements accumulated by the current moment arrives at the input of such algorithms, and this is the reason why they are called batch algorithms [9,30]. Despite all their advantages, these algorithms have two main drawbacks when used to solve the problems under consideration. First, their computational complexity––although often lower than that of algorithms that use a significant set of parameters to describe the posterior PDF (for example, particle filters)––remains significant. This is due to the need to invert high-dimensional matrices. Second, the moment when the posterior PDF becomes single-extremal is usually unknown in practice, and its identification is a nontrivial task.
The aim of this paper is to design algorithms that lack the above-mentioned shortcomings. The paper proposes two algorithms.
The first one is economical in terms of the amount of computations; it is a nonrecursive iterative algorithm called the Recursive Iterative Batch Linearized Smoother (RI-BLS). It works due to the fact that, instead of inverting a high-dimensional matrix at each iteration, it uses a recursive procedure that provides the necessary estimates by solving the smoothing problem. That is, the RI-BLS is based on the combined use of recursive and nonrecursive data processing schemes. The RI-BLS has low computational complexity. This algorithm can be used in problems for which the moment when the posterior PDF becomes single-extremal can be determined in advance. At the same time, a challenging problem here is to identify this moment.
The second algorithm is called the Recursive Iterative Batch Multiple Linearized Smoother (RI-BMLS). In some cases, its computational complexity may be somewhat higher than that of the RI-BLS, but the RI-BMLS is capable of determining the moment at which the posterior PDF becomes single-extremal. The proposed RI-BMLS, like the RI-BLS, is based on the combined use of recursive and nonrecursive data processing schemes. The essence of the RI-BMLS is the simultaneous use of a set of RI-BLSs, each of which has its own individual linearization point at the initial moment. The set of linearization points, each of which is used in its own RI-BLS, is formed in such a way that their neighborhoods, taken together, cover the domain of a priori uncertainty. At the moment when the posterior PDF becomes single-extremal, the estimates at the output of the RI-BLS set with various linearization points are grouped into an area corresponding to one extremum, which allows us to identify this moment. After that, to save computational resources, the problem is solved with a single recursive iterative Kalman filter. In the case of a single-extremal PDF, it will provide an accuracy close to that of the optimal algorithm. Such algorithms are used to solve the problem of navigation system correction with the use of nonlinear measurements from various sensors, for example, in the navigation of a group of autonomous underwater vehicles, single-beacon navigation, and map-aided navigation.
The paper is structured as follows. In Section 2, the nonlinear filtering problem under study is formulated. Section 3 describes the proposed Recursive Iterative Batch Linearized Smoother, and Section 4 considers the Recursive Iterative Batch Multiple Linearized Smoother. Section 5 formulates a map-aided navigation problem and demonstrates the performance and advantages of the proposed algorithms. Section 6 and Section 7 analyze and summarize the main contributions of this paper.

2. Bayesian Statement of the Nonlinear Estimation Problem

Consider the problem of the nonlinear estimation of an n-dimensional random vector described by a shaping filter
x_k = f_k(x_{k−1}) + G_k w_k + u_k,   (1)
using m-dimensional vector measurements:
y_k = h_k(x_k) + v_k.   (2)
In these relations, it is assumed that f_k(·) and h_k(·) are known nonlinear n- and m-dimensional functions describing the dynamics of the state vector and the measurement model; k is the discrete time index; u_k = (u_{1,k}, u_{2,k}, …, u_{n,k})^T is an n-dimensional vector of known input signals (for double indexing in subscripts, here and below, the first index denotes the component number and the second refers to the time); x_0 is an n-dimensional random Gaussian vector with a given PDF p(x_0) = N(x_0; x̄_0, P_0); hereinafter, the notation N(b; b̄, B) is used for the density of a Gaussian random vector b with mathematical expectation b̄ and covariance matrix B; w_k is n_w-dimensional zero-mean discrete Gaussian white noise, independent of x_0, with a known n_w × n_w-dimensional covariance matrix Q_k; G_k is an n × n_w-dimensional matrix; and v_k is m-dimensional zero-mean discrete Gaussian white noise, independent of x_0 and w_k, with covariance matrix R_k.
The essence of the estimation problem in the framework of the Bayesian approach is to obtain, in a certain sense, an optimal estimate of the state vector x̂_k^opt(Y_k^c) based on measurements Y_k^c = (y_1^T, y_2^T, …, y_k^T)^T and, if possible, the corresponding conditional covariance matrix of the estimation error P_k^opt(Y_k^c), which characterizes its current accuracy (corresponding to a specific set of measurements Y_k^c).
It is known that the optimal, in the mean square sense, estimate x̂_k^opt(Y_k^c) and the corresponding covariance matrix P(Y_k^c) are defined as follows [5]:
x̂_k^opt(Y_k^c) = ∫ x_k p(x_k | Y_k^c) dx_k,   (3)
P(Y_k^c) = ∫ (x_k − x̂_k^opt(Y_k^c))(x_k − x̂_k^opt(Y_k^c))^T p(x_k | Y_k^c) dx_k,   (4)
where p(x_k | Y_k^c) is the posterior PDF, conditional on the measurements.
It is clear that in order to find the optimal estimate (3) and covariance matrix (4), we need to know the posterior PDF p(x_k | Y_k^c), which is difficult to calculate, so in practice, different methods of its approximation are used to design suboptimal filtering algorithms. The specificity of the nonlinear filtering problem considered in this paper is that the posterior PDF, which has a complicated multi-extremal character at the initial moments of estimation time, evolves into a single-extremal form.
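When the posterior PDF is represented by a weighted sample, as in the sequential Monte Carlo methods mentioned above, the integrals (3) and (4) reduce to weighted sums. The sketch below illustrates this reduction; the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def posterior_mean_cov(particles, weights):
    """Approximate the optimal estimate (3) and covariance (4) from a
    weighted-particle representation of the posterior PDF p(x_k | Y_k^c).

    particles: (N, n) array of state samples
    weights:   (N,) array of posterior weights (normalized internally)
    """
    w = weights / weights.sum()          # guard against unnormalized input
    mean = w @ particles                 # weighted sample mean, cf. (3)
    dev = particles - mean               # deviations from the estimate
    cov = (w[:, None] * dev).T @ dev     # weighted sample covariance, cf. (4)
    return mean, cov

# Toy usage: a bimodal particle cloud whose posterior mass has collapsed
# onto one mode, so the mean lands near that mode.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(-2.0, 0.1, (500, 1)),
                 rng.normal(2.0, 0.1, (500, 1))])
w = np.r_[np.full(500, 1e-6), np.full(500, 1.0)]
m, P = posterior_mean_cov(pts, w)
```

Note that for a genuinely multi-extremal posterior with comparable mass on several modes, the mean (3) can fall between the modes, which is one reason the MAP estimate (5) is also of interest.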
Note that, as is common within the Bayesian approach, along with the optimal, in the mean square sense, estimate (3), it is also possible to determine the estimate corresponding to the posterior PDF maximum:
x̂_k^opt(Y_k^c) = arg max_{x_k} p(x_k | Y_k^c).   (5)
It is important to emphasize that in the case where the posterior PDF has one extremum and is close to the Gaussian one, estimates (3) and (5) will be close to each other.
Note that the optimal estimate minimizes the conditional (4) and unconditional covariance matrices of estimation errors, the latter defined as [12]
G_k^opt = E_{p(x_k, Y_k^c)}[(x_k − x̂_k^opt(Y_k^c))(x_k − x̂_k^opt(Y_k^c))^T],   (6)
where p(x_k, Y_k^c) is the joint PDF of vectors x_k and Y_k^c, and E is the sign of the mathematical expectation with a subscript indicating the PDF over which it is calculated.
It was already noted in the Introduction that when designing estimation algorithms, preference is usually given to recursive algorithms based on a processing scheme that is recursive with respect to measurements. Along with recursive algorithms, the desired estimate of the state vector at an arbitrary point in time can be obtained by finding it as part of the vector being estimated, which includes all elements for all k. The recursive and nonrecursive algorithms are described in Appendix A.1 and Appendix A.2.
Note that the optimal estimate x ^ k o p t does not depend on the processing scheme according to which it is calculated—recursive or nonrecursive. The situation is different when recursive and nonrecursive suboptimal algorithms are used to calculate the estimate. This also applies to algorithms based on linearization, which is the subject of this paper.
The algorithms that are based on the linearized representation of nonlinear functions assume that
f_i(x_{i−1}) ≈ f_i(x_i^{lin1}) + F_i(x_{i−1} − x_i^{lin1}),  h_i(x_i) ≈ h_i(x_i^{lin2}) + H_i(x_i − x_i^{lin2}),   (7)
where i = 1, 2, …, k, and F_i and H_i are the Jacobian matrices of functions f_i(x_{i−1}) and h_i(x_i), with dimensions n × n and m × n, calculated at linearization points x_i^{lin1} and x_i^{lin2}.
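As a brief illustration of the linearized representation (7), the Jacobian can also be approximated numerically when analytic derivatives are inconvenient. The helper below is a generic sketch; the function h and the linearization point x_lin are made up for the example.

```python
import numpy as np

def numerical_jacobian(func, x, eps=1e-6):
    """Central-difference Jacobian of func at x; a numerical stand-in for
    the analytic matrices F_i and H_i when derivatives are hard to derive."""
    x = np.asarray(x, dtype=float)
    f0 = np.asarray(func(x))
    J = np.zeros((f0.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (np.asarray(func(x + dx)) - np.asarray(func(x - dx))) / (2 * eps)
    return J

# Linearized approximation h(x) ~ h(x_lin) + H (x - x_lin) near a chosen point.
h = lambda x: np.array([x[0] ** 3 + x[1], np.sin(x[1])])
x_lin = np.array([1.0, 0.5])
H = numerical_jacobian(h, x_lin)       # Jacobian evaluated at the lin. point
x = x_lin + 0.01                       # a state close to the lin. point
approx = h(x_lin) + H @ (x - x_lin)    # first-order approximation of h(x)
```

Close to the linearization point the first-order error is quadratic in the deviation; far from it, as in the multi-extremal problems discussed below, the linearization error can become dominant.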
Suboptimal algorithms based on linearization have two distinctive features. First, the posterior PDF p(x_k | Y_k^c) is replaced at each step with its Gaussian approximation described by the estimate x̂_k^SUB generated in the algorithm and the corresponding covariance P_k^SUB, i.e., p(x_k | Y_k^c) ≈ N(x_k; x̂_k^SUB, P_k^SUB). The second feature is determined by the fact that the processing of the current measurement is carried out based on the ideology of designing a linear optimal algorithm [13]. The methods for developing such algorithms with the use of both recursive and nonrecursive schemes are described in Appendix A.3 and Appendix A.4.
Recall that in this paper, we consider a class of problems whose specificity lies in the fact that the posterior PDF of the state vector under estimation changes its character, as measurements accumulate, from a multi-extremal to a single-extremal density. The extended Kalman filter (EKF) and iterative extended Kalman filter (IEKF) use a Gaussian approximation of the posterior PDF at each step and involve the estimate from the previous step to calculate the linearization points. It is clear that they are ineffective for the solution of the class of problems under consideration. The reason for this is that at the initial stage (with a small number of accumulated measurements), the form of the posterior PDF differs from the Gaussian one, which leads to significant linearization errors in such well-known algorithms [1,2,3,4,5,6,7,8,9,10] as the EKF and IEKF. In some cases, the accuracy of the EKF and IEKF may be lower than that of the linearized Kalman filter (LKF), whose linearization points at each step k are chosen as some fixed values that only depend on the a priori information [30].
As shown in [30,48], in contrast to traditional recursive algorithms, nonrecursive (batch) iterative algorithms based on linearization allow for accuracy close to the potential one, i.e., the accuracy corresponding to the optimal algorithm, in the solution of problems in which the posterior PDF of the estimated state vector changes its character, as measurements accumulate, from multi-extremal to single-extremal.
Note that in the suboptimal nonrecursive (batch) algorithms, the linearization points X_k^{c,lin1} = ((x_1^{lin1})^T, …, (x_k^{lin1})^T)^T depend only on a priori information. As shown in [30], this fact creates prerequisites for increasing the efficiency of the algorithms designed with the use of the nonrecursive scheme, compared to recursive algorithms, when solving problems of the class under consideration.
The simplest batch algorithm, the Batch Linearized Smoother (BLS), can be designed if linearization points x_i^{lin2}, i = 1, 2, …, k, are chosen as x_i^{lin2} = x̄_i.
The BLS efficiency can be increased using iterations similar to the way it is achieved in the IEKF. The algorithm designed in this way—Iterative Batch Linearized Smoother (I-BLS)—will correspond to the block diagram shown in Figure 1. The main feature of the I-BLS is that during linearization at fixed points, it repeatedly (iteratively) processes a batch of all measurements accumulated by the current moment of time. In this case, at each iteration, the linearization points are corrected simultaneously taking into account the results obtained at the previous iteration.
As shown in [30], for the class of problems under consideration, the I-BLS, starting from the moment when the PDF becomes single-extremal, is capable of providing accuracy close to the potential one and is consistent. Note that the algorithms called BLS and I-BLS in this paper are called NR-EKF and NR-IEKF in [30]. This is due to the fact that the expressions for the BLS and I-BLS can be obtained by designing the EKF for the solution of the original problems (1) and (2) in the nonrecursive formulation, i.e., at each step k, vector X_k^c = (x_0^T, …, x_k^T)^T is estimated using measurements (2). The main disadvantage of the I-BLS is its high computational complexity.
Further, we present an algorithm called the RI-BLS, based on the combined use of recursive and nonrecursive schemes, which is an economical implementation of the I-BLS in terms of computational complexity.

3. Recursive Iterative Batch Linearized Smoother

In the RI-BLS, just as in the case of the I-BLS, after the next k-th measurement arrives, all measurements included in the batch Y_k^c = (y_1^T, …, y_k^T)^T accumulated by the current moment are processed again (iteratively). Each iteration includes two blocks.
Block 1 implements a recursive solution of the filtering problem for all i = 1, 2, …, k using the procedures of the linearized Kalman filter [5], in which linearization points x_i^{lin1} and x_i^{lin2} are fixed, and in the first iteration (j = 1), they depend only on the a priori mathematical expectation x̄_0. In this block, we can highlight a set of calculations for all i = 1, 2, …, k:
Step 1.
Formation of linearization points:
x̄_i = f_i(x̄_{i−1}) + u_i,  x_i^{lin1} = x̄_{i−1},  x_i^{lin2(j)} = x̄_i,  j = 1;
Step 2.
Calculation of estimate prediction and its error covariance matrix:
x̂_{i/i−1}^{(j)} = f_i(x_i^{lin1}) + F_i^{(j)}(x̂_{i−1}^{(j)} − x_i^{lin1}) + u_i,  P_{i/i−1}^{(j)} = F_i^{(j)} P_{i−1}^{(j)}(F_i^{(j)})^T + G_i Q_i G_i^T;
Step 3.
Calculation of measurements prediction and its error covariance matrix:
ŷ_i^{(j)} = h_i(x_i^{lin2(j)}) + H_i^{(j)}(x̂_{i/i−1}^{(j)} − x_i^{lin2(j)}),  P_{y_i}^{(j)} = H_i^{(j)} P_{i/i−1}^{(j)}(H_i^{(j)})^T + R_i;
Step 4.
Calculation of the cross-covariance matrix:
P_{x_i y_i}^{(j)} = P_{i/i−1}^{(j)}(H_i^{(j)})^T;
Step 5.
Calculation of the gain factor:
K_i^{(j)} = P_{x_i y_i}^{(j)}(P_{y_i}^{(j)})^{−1};
Step 6.
Calculation of the estimate and its corresponding covariance matrix:
x̂_i^{(j)} = x̂_{i/i−1}^{(j)} + K_i^{(j)}(y_i − ŷ_i^{(j)}),  P_i^{(j)} = P_{i/i−1}^{(j)} − K_i^{(j)}(P_{x_i y_i}^{(j)})^T.
During the processing of the batch of accumulated measurements Y_k^c = (y_1^T, …, y_k^T)^T, the prediction values x̂_{i/i−1}^{(j)}, the estimates x̂_i^{(j)} calculated in Block 1, and the corresponding covariance matrices P_{i/i−1}^{(j)} and P_i^{(j)} for each moment of estimation time i = 1, 2, …, k are saved for further use.
Block 2 implements the recursive calculation, in reverse order for i = k−1, k−2, …, 0, of estimates x̂_{i/k} corresponding to the solution of the smoothing problem. The second block includes the following calculations:
Step 1.
Calculation of the smoother transition matrix:
A_i^{(j)} = P_i^{(j)}(F_{i+1}^{(j)})^T(P_{i+1/i}^{(j)})^{−1}.
Step 2.
Calculation of the smoothing estimate and its corresponding covariance matrix:
x̂_{k/k}^{(j)} = x̂_k^{(j)},  P_{k/k}^{(j)} = P_k^{(j)},  x̂_{i/k}^{(j)} = x̂_i^{(j)} + A_i^{(j)}(x̂_{i+1/k}^{(j)} − x̂_{i+1/i}^{(j)}),  P_{i/k}^{(j)} = P_i^{(j)} + A_i^{(j)}(P_{i+1/k}^{(j)} − P_{i+1/i}^{(j)})(A_i^{(j)})^T.
It is important to note that estimates x̂_{i/k}^{(1)} and the corresponding covariance matrices P_{i/k}^{(1)} obtained at the first iteration (j = 1) as a result of the smoothing problem solution coincide with estimates x̂_i^{BLS} and covariance matrices P_i^{BLS}, i = 0, 1, …, k−1, generated in the BLS as part of vector X̂_k^{c,BLS} and a block of matrix P_k^{c,BLS}.
Estimates x̂_{i/k}^{(j)} obtained in Block 2, corresponding to the solution of the smoothing problem, are saved and used as the linearization points x_i^{lin2(j+1)} during the repeated processing of measurements at the next iteration, after which the above procedure is repeated.
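The two blocks of one RI-BLS iteration, the forward linearized Kalman filter pass (Block 1, Steps 2–6) and the backward smoothing pass (Block 2), can be sketched as follows. This is a minimal illustration under the linearization conventions stated above, not the authors' implementation; the function name, the interface, and the linear usage example are all hypothetical, and Q here already stands for the product G_i Q_i G_i^T.

```python
import numpy as np

def ri_bls_iteration(ys, f, F_jac, h, H_jac, x0_bar, P0, Q, R, u, lin2):
    """One RI-BLS iteration (Blocks 1 and 2) for models (1)-(2).
    f, h and their Jacobians F_jac, H_jac are callables; lin2[i] are the
    current measurement-linearization points (the prior means on the first
    iteration, the smoothed estimates afterwards)."""
    k = len(ys)
    xs_f, Ps_f, xs_p, Ps_p, Fs = [], [], [], [], []
    x, P = x0_bar, P0
    x_lin1 = x0_bar                                # dynamics lin. point
    for i in range(k):
        # Block 1: linearized KF time and measurement updates (Steps 2-6).
        Fi = F_jac(x_lin1)
        x_pred = f(x_lin1) + Fi @ (x - x_lin1) + u[i]
        P_pred = Fi @ P @ Fi.T + Q
        Hi = H_jac(lin2[i])
        y_pred = h(lin2[i]) + Hi @ (x_pred - lin2[i])
        Py = Hi @ P_pred @ Hi.T + R
        Pxy = P_pred @ Hi.T
        K = Pxy @ np.linalg.inv(Py)                # gain (Step 5)
        x = x_pred + K @ (ys[i] - y_pred)
        P = P_pred - K @ Pxy.T
        x_lin1 = lin2[i]                           # point for the next f_i
        xs_p.append(x_pred); Ps_p.append(P_pred)
        xs_f.append(x); Ps_f.append(P); Fs.append(Fi)
    # Block 2: backward (Rauch-Tung-Striebel-type) smoothing pass.
    xs_s, Ps_s = list(xs_f), list(Ps_f)
    for i in range(k - 2, -1, -1):
        A = Ps_f[i] @ Fs[i + 1].T @ np.linalg.inv(Ps_p[i + 1])
        xs_s[i] = xs_f[i] + A @ (xs_s[i + 1] - xs_p[i + 1])
        Ps_s[i] = Ps_f[i] + A @ (Ps_s[i + 1] - Ps_p[i + 1]) @ A.T
    return xs_f, Ps_f, xs_s, Ps_s

# Usage on a linear scalar model, for which the linearization is exact:
# constant measurements pull the estimate up, and smoothing tightens it.
f = lambda x: 0.9 * x
F_jac = lambda x: np.array([[0.9]])
h = lambda x: x
H_jac = lambda x: np.eye(1)
ys = [np.array([1.0])] * 5
u = [np.zeros(1)] * 5
lin2 = [np.zeros(1)] * 5          # prior means x̄_i for a zero-mean prior
xs_f, Ps_f, xs_s, Ps_s = ri_bls_iteration(
    ys, f, F_jac, h, H_jac, np.zeros(1), np.eye(1),
    np.array([[0.1]]), np.array([[0.1]]), u, lin2)
```

In a nonlinear setting, the returned smoothed trajectory xs_s would be fed back as lin2 for the next iteration; note that only n × n matrices are inverted, in line with the computational advantage claimed for the RI-BLS.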
The RI-BLS can be represented as the block diagram shown in Figure 2.
The estimate x̂_k^{(j)} and covariance matrix P_k^{(j)} obtained at the latest iteration are taken as the estimate and covariance matrix in the RI-BLS at the k-th step. The estimate x̂_k^{RI-BLS} and covariance matrix P_k^{RI-BLS} generated in the RI-BLS coincide with the estimate x̂_k^{I-BLS} and covariance matrix P_k^{I-BLS} generated in the I-BLS as part of vector X̂_k^{c,I-BLS} and a block of matrix P_k^{c,I-BLS}. In addition, when implementing the proposed algorithm, it is not necessary to invert high-dimensional matrices; it is sufficient to invert only n × n-dimensional matrices, which allows for a significant reduction in computation. Thus, the RI-BLS can be considered an economical implementation of the I-BLS in terms of computational complexity.
Retaining all the advantages of the I-BLS, the RI-BLS, starting from moment T_eff, when the posterior PDF takes a single-extremal form, generates an estimate close to the optimal one, in the mean square sense, and satisfies the consistency properties.
The algorithm considered can be used in problems for which the moment when the posterior PDF becomes single-extremal is known in advance. At the same time, in practice, researchers have to deal with problems in which this information is unavailable, particularly in solving navigational problems. The modification of the RI-BLS that allows us to determine the moment T_eff is considered below.

4. Recursive Iterative Batch Multiple Linearized Smoother

4.1. RI-BMLS Description

The main idea of the proposed algorithm is to use a set of RI-BLSs running in parallel, with different linearization points that are selected based on the features of the problem being solved. For example, these points should be selected in such a way that their neighborhoods cumulatively cover the domain of the a priori uncertainty.
It is known [2] that the iterative Kalman filter aims to find an estimate corresponding to the maximum of the posterior PDF. The iterative Gauss–Newton procedure used for these purposes does not guarantee global convergence. In the case of a multi-extremal posterior distribution, the estimate generated by the iterative algorithm may correspond to a local extremum. However, when the set of RI-BLSs used is sufficiently large and has different linearization points, the estimates they generate will be grouped in areas corresponding to all extrema, both global and local. At estimation time point T_eff, when the posterior PDF takes the single-extremal form, the RI-BLS estimates will be grouped within a small domain corresponding to a single extremum.
Thus, the process of the RI-BMLS design can be divided into the following steps:
Step 1.
Formation of samples x_0^{(s)}, s = 1, 2, …, S, from the previously selected domain Λ_0 in which p(x_0) is different from zero;
Step 2.
Parallel start of S RI-BLS^{(s)} algorithms, for which the linearization points in the first iteration are determined for each according to
x_i^{(s)} = f_i(x_{i−1}^{(s)}),  x_i^{lin1(j)} = x_{i−1}^{(s)},  x_i^{lin2(j)} = x_i^{(s)},  j = 1.
Step 3.
Verification of the following inequalities:
x̂_{γ,k}^{(s)max} − x̂_{γ,k}^{(s)min} < D_γ  for γ = 1, 2, …, n,
where x̂_{γ,k}^{(s)max} is the maximum value and x̂_{γ,k}^{(s)min} is the minimum one among the γ-th components of estimates x̂_k^{(s)}, s = 1, 2, …, S, and D_γ denotes the components of a certain predefined vector D = (D_1, D_2, …, D_n)^T. We will assume the posterior PDF is single-extremal if, for each γ = 1, 2, …, n, the difference between x̂_{γ,k}^{(s)max} and x̂_{γ,k}^{(s)min} is less than the γ-th component of vector D.
Step 4.
If all inequalities are satisfied, then T_eff = t_k.
It is clear that the procedure used to identify the extrema of the posterior PDF is approximate, and the correctness of the estimation time point identification will depend on the specific values for the vector D components, which are selected heuristically. However, as will be shown further, the use of such a procedure as part of the RI-BMLS allows for the correct identification of time point T e f f .
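The Step 3 check reduces to comparing the component-wise spread of the bank estimates with the heuristic threshold vector D. A minimal sketch (the function name and the numbers are illustrative):

```python
import numpy as np

def single_extremal(bank_estimates, D):
    """Declare the posterior PDF single-extremal once, for every state
    component, the spread of the RI-BLS bank estimates falls below the
    corresponding component of the heuristic threshold vector D."""
    est = np.asarray(bank_estimates)            # shape (S, n): S filters
    spread = est.max(axis=0) - est.min(axis=0)  # component-wise max - min
    return bool(np.all(spread < np.asarray(D)))

# Estimates clustered around one extremum -> T_eff can be declared.
clustered = [[1.01, -0.49], [0.99, -0.51], [1.02, -0.50]]
spread_ok = single_extremal(clustered, D=[0.1, 0.1])
# Estimates still split between two extrema -> keep the bank running.
split = [[1.0, -0.5], [-3.0, -0.5]]
spread_bad = single_extremal(split, D=[0.1, 0.1])
```

The check is O(S·n) per step, i.e., negligible compared with the cost of running the RI-BLS bank itself.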
After T_eff has been identified, it becomes possible to continue solving the problem without using the RI-BLS bank and instead use a single recursive IEKF, which, in the case of a single-extremal posterior PDF, will have all the advantages of the RI-BLS and will be simpler in terms of computational complexity.
Thus, the RI-BMLS can be represented as a block diagram shown in Figure 3.
The total time of the problem solution using the proposed RI-BMLS can be divided into two stages.
At the first stage (t = (0, T_eff]), when the posterior PDF is multi-extremal, the RI-BLS bank is used. If necessary, it is possible to use the estimate and the estimation error covariance matrix generated in one of the RI-BLSs as the algorithm output at the first stage. However, it should be kept in mind that the algorithm at the first stage is not consistent and the estimation errors can be significant. In this work, we assume that at the first stage, the RI-BMLS solves only the problem of T_eff identification and does not generate an estimate and the calculated accuracy characteristic. After T_eff has been identified, the estimates and the covariance matrices of all RI-BLSs will coincide, and any of them can be used as the initial settings of the IEKF initiated at the next stage.
At the second stage (t = (T_eff, ∞)), the problem is solved with the use of the IEKF, and the estimates and estimation error covariance matrices generated in it are the RI-BMLS output. In this case, since the posterior PDF is single-extremal, the estimate generated at the second stage is close in accuracy to the optimal one in the mean square sense, and the algorithm is consistent.
Since the extremum can appear at any point belonging to the domain of a priori uncertainty, the linearization points should be selected in such a way that their neighborhoods cumulatively cover the domain of the a priori uncertainty.
It should be noted that attempts to design a similar algorithm have already been made previously, for example, in [49], but they were not successful. This is primarily due to the fact that the iterative algorithm with multiple linearization was designed with the use of a set of recursive iterative Kalman filters. However, as noted before, the recursive scheme used to solve the class of problems considered here leads to the accumulation of errors caused by linearization. In this paper, we use an RI-BLS—a nonrecursive algorithm—to design an iterative algorithm with multiple linearization.

4.2. Methodological Example

Let us explain the essence of the proposed algorithms by considering a simple methodological example.
Assume that it is required to estimate an exponentially correlated sequence x k described by a linear shaping filter:
x_k = F x_{k−1} + G w_k + u,
where F = e^{−αΔt}, G = σ√(1 − e^{−2αΔt}), σ² is the sequence variance, α is the reciprocal of the correlation interval τ, and u is a known input signal. Nonlinear measurements (2) have the form
y_k = h_1 + h_2 x_k + h_3 x_k² + h_4 x_k³ + v_k,
where h_1, h_2, h_3, and h_4 are known coefficients.
The simulation was carried out with the following parameters: σ = 1.5; α = 0.1 s⁻¹; r = 0.1, where r² is the measurement error variance; h_1 = 0.0875, h_2 = 0.1825, h_3 = 0.01, h_4 = 0.01; u = 1; sampling interval Δt = 1 s; and simulation time T = 10 s.
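Under these parameters, the simulated model can be reproduced in a few lines. The sketch below assumes the exact stationary-variance discretization G = σ√(1 − e^(−2αΔt)) with unit-variance driving noise, and the random seed is arbitrary.

```python
import numpy as np

# Methodological example: exponentially correlated sequence observed
# through the cubic measurement model, with the parameters stated above.
sigma, alpha, r = 1.5, 0.1, 0.1
h1, h2, h3, h4 = 0.0875, 0.1825, 0.01, 0.01
u, dt, T = 1.0, 1.0, 10

F = np.exp(-alpha * dt)                          # transition coefficient
G = sigma * np.sqrt(1.0 - np.exp(-2.0 * alpha * dt))  # assumed noise gain

rng = np.random.default_rng(1)
x = sigma * rng.standard_normal()                # x_0 ~ N(0, sigma^2)
xs, ys = [], []
for _ in range(T):
    x = F * x + G * rng.standard_normal() + u    # shaping filter step
    y = (h1 + h2 * x + h3 * x**2 + h4 * x**3
         + r * rng.standard_normal())            # cubic measurement
    xs.append(x)
    ys.append(y)
```

The cubic measurement nonlinearity is what produces the initially multi-extremal posterior PDF: several values of x_k can map to nearly the same y_k until enough measurements accumulate.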
The problem was simulated and solved for one sample (run) with a set of S = 46 RI-BLS^{(s)}, s = 1, …, S. The linearization points at the initial moment of estimation time were uniformly distributed over the domain Λ_0 = [−3σ, 3σ] with a step of θ_lin = 0.2.
The results of the RI-BMLS operation in the form of the true values of the estimated sequence and RI-BLS estimates generated for a different number of processed measurements are presented in Figure 4.
In Figure 4, the blue solid lines represent the graphs of estimates x̂_k^{(s)} obtained at the RI-BLS^{(s)} output. Note that at the initial moment, the linearization points were taken as the estimates, i.e., x̂_0^{(s)} = x_0^{(s)}. The red solid line shows the true values of the estimated sequence, and the dotted purple line indicates the identified point of estimation time T_eff.
The graph in Figure 4 shows that the estimates were initially grouped within two areas, but as measurements accumulated, these areas began to converge with each other and with the true value of the sequence being estimated. At a certain point in time, at k = 7, the estimates at the RI-BLS^{(s)} output clustered within a small domain corresponding to the true value of the estimated variable, allowing us to correctly identify T_eff. The posterior PDF then became single-extremal. Figure 5 shows the graphs of the posterior PDF obtained with the use of sequential Monte Carlo methods [34] at k = 1, 3, 5, 7.
Figure 5d confirms that after processing the measurement at step k = 7, the posterior PDF becomes single-extremal; at the same moment, as Figure 4 shows, the estimates at the output of the whole set of RI-BLS^{(s)} are grouped within a small area around the true value of the estimated quantity, which made it possible to correctly identify T_{eff}.
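The identification of T_{eff} by the clustering of the parallel estimates can be sketched as a simple spread test. The detector and its tolerance below are hypothetical illustrations, not the authors' criterion; the toy data merely mimic the two converging clusters seen in Figure 4.

```python
import numpy as np

# Hypothetical T_eff detector: declare the posterior single-extremal once all
# parallel RI-BLS estimates fall within a small tolerance band.
def detect_t_eff(estimates, tol):
    """estimates: (K, S) array, row k = the S parallel estimates at step k."""
    spread = estimates.max(axis=1) - estimates.min(axis=1)
    hits = np.flatnonzero(spread < tol)
    return int(hits[0]) if hits.size else None

# Toy data mimicking Figure 4: two clusters of estimates that merge over time.
K, S = 10, 6
est = np.empty((K, S))
for k in range(K):
    gap = max(0.0, 2.0 - 0.4 * k)          # clusters converge as k grows
    est[k, :3] = 1.0 + gap + 0.01 * np.arange(3)
    est[k, 3:] = 1.0 - gap + 0.01 * np.arange(3)

print(detect_t_eff(est, tol=0.1))  # first step where all estimates agree
```

In practice the tolerance would be chosen from the calculated covariance matrices of the RI-BLS^{(s)} rather than fixed a priori.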

5. Map-Aided Navigation

Let us now consider one of the possible practical applications of the proposed algorithm: map-aided navigation [42,43,44,45,46,47,49]. This problem is an example of fusing data from a navigation system and a sensor of the Earth's gravity field. Following [42], the map-aided navigation problem is formulated within a Bayesian framework as a nonlinear filtering problem. It is assumed that there is a navigation system (NS) that generates vehicle coordinates y_k^{NS} = [y_k^{NS(1)}, y_k^{NS(2)}]^T in the following form:
y_k^{NS} = x_k^{O} + \Delta_k,
where x_k^{O} = [x_{1,k}^{O}, x_{2,k}^{O}]^T are the unknown vehicle coordinates, and \Delta_k = [\Delta_{1,k}, \Delta_{2,k}]^T are the NS errors.
In addition, we assume that there is a sensor of some geophysical field and a corresponding digital map.
Thus, we can write the measurements as
y_k = \varphi(x_k^{O}) + v_k,
where function \varphi(x_k^{O}) determines the dependence of the geophysical field on the vehicle coordinates, and v_k are the measurement errors.
By replacing x_k^{O} with (y_k^{NS} - \Delta_k), we can formulate the following problem: estimate \Delta_k using the measurements
y_k = \varphi(y_k^{NS} - \Delta_k) + v_k.
Consider the simplest case, in which the NS errors in each coordinate are described by zero-mean Gaussian random variables with variances (\sigma^{\Delta})^2, and the measurement errors are described by Gaussian white noise with variance r^2. In this case, we need to estimate the two-dimensional vector
x_k = \Delta_k = \Delta_{k-1} = \Delta
with measurements
y_k = \varphi(y_k^{NS} - \Delta) + v_k = h_k(\Delta) + v_k.
We also suppose that vector x_k and errors v_k are independent of each other. Under the assumptions made, we can write Formula (A7) as
J(\Delta) = \frac{\Delta_1^2 + \Delta_2^2}{(\sigma^{\Delta})^2} + \sum_{i=1}^{k} \frac{\left(y_i - h_i(\Delta_1, \Delta_2)\right)^2}{r^2}.
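A brute-force evaluation of this cost on a grid over the a priori uncertainty domain might look as follows. The field \varphi below is a synthetic stand-in for the gravity-anomaly map interpolator (the paper uses EGM2008 data); all numerical values are illustrative assumptions.

```python
import numpy as np

# Sketch: evaluate the cost J(Delta) on a grid and take its argmin.
def phi(p):                       # synthetic field: (..., 2) coords -> value, "mGal"
    return 10 * np.sin(0.3 * p[..., 0]) * np.cos(0.2 * p[..., 1])

def J(delta, y_ns, y, sigma_d, r):
    """delta: (2,); y_ns: (k, 2) NS coordinates; y: (k,) field measurements."""
    prior = (delta[0] ** 2 + delta[1] ** 2) / sigma_d ** 2
    resid = y - phi(y_ns - delta)             # h_i(Delta) = phi(y_i^NS - Delta)
    return prior + np.sum(resid ** 2) / r ** 2

rng = np.random.default_rng(1)
true_delta = np.array([0.8, -0.5])
path = np.column_stack([np.linspace(0, 8, 30), np.linspace(0, 5, 30)])
y_ns = path + true_delta                      # NS output = true coords + error
y = phi(path) + 0.5 * rng.standard_normal(30)

# Grid search over the a priori 3-sigma domain.
grid = np.linspace(-3, 3, 61)
costs = [[J(np.array([a, b]), y_ns, y, 1.0, 0.5) for b in grid] for a in grid]
ia, ib = np.unravel_index(np.argmin(costs), (61, 61))
print(grid[ia], grid[ib])   # should land near true_delta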
The simulation was performed for a gravity anomaly field generated using the EGM2008 model [50]. In this case, \varphi(x_k^{O}) describes the dependence of the gravity anomalies on the coordinates. The isolines of the gravity anomalies (in mGal) are shown in Figure 6.
The path was assumed to be fixed, and the following parameters were used in the simulation: \sigma^{\Delta} = 1 km; r = 0.5 mGal; the distance between measurements was \delta = 300 m; and the path length was 24 km.
First, we demonstrate the solution of the problem under consideration using the proposed algorithm with multiple linearization for one sample. To design the algorithm, we set the linearization points at the initial moment at the nodes of a uniform grid on the region of a priori uncertainty. Figure 7 shows the isolines of the a priori PDF, and the red dots indicate the position of the linearization points.
Figure 8 shows the isolines of the posterior PDFs at the estimation time moments k = 1, k = 15, and k = 30; the red dots indicate the estimates generated by the RI-BLS^{(s)}.
As can be seen from the simulation results, the posterior PDF, which at the initial estimation moments has a multi-extremal form significantly different from the Gaussian one, becomes single-extremal as measurements accumulate, and the estimates converge to a small domain, which makes it possible to identify T_{eff}.
Note that the RI-BMLS representing a set of S = 441 nonrecursive RI-BLS^{(s)} was used here only for methodological purposes, to clarify the essence of the approach. The total complexity of such an algorithm with multiple linearization is extremely high, which does not allow it to be applied in online mode. Therefore, to solve the problem under consideration, we designed an RI-BMLS with only S = 9 linearization points, i.e., nine parallel RI-BLS^{(s)}. Figure 9 shows the isolines of the a priori PDF; the red dots represent the positions of the linearization points at the initial moment of time.
Let us now consider the simulation results obtained by statistical testing (predictive simulation) over L = 500 samples for the following algorithms: the RI-BLS and RI-BMLS proposed in this paper; two traditional recursive linearization-based algorithms, the EKF and IEKF; and an algorithm based on sequential Monte Carlo methods, the particle filter (PF). The PF aims to calculate the optimal estimate in the mean-square sense. For each \mu-th algorithm, according to the methodology in [12], we obtained the real covariance matrices G_k^{\mu} and the calculated ones \tilde{G}_k^{\mu}:
G_k^{\mu} \approx \frac{1}{L} \sum_{j=1}^{L} \left(x_k^{(j)} - \hat{x}_k^{\mu}(Y_k^{c,(j)})\right) \left(x_k^{(j)} - \hat{x}_k^{\mu}(Y_k^{c,(j)})\right)^T,
\tilde{G}_k^{\mu} \approx \frac{1}{L} \sum_{j=1}^{L} P_k^{\mu}(Y_k^{c,(j)}),
where x_k^{(j)} and Y_k^{c,(j)}, j = 1, …, L, are the samples of random vectors obtained by simulation according to (12) and (13), and \hat{x}_k^{\mu}(Y_k^{c,(j)}) and P_k^{\mu}(Y_k^{c,(j)}) are the estimates and calculated covariance matrices obtained for the j-th realizations of x_k^{(j)} and Y_k^{c,(j)}.
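Formulas (16) and (17) amount to sample averaging over the Monte Carlo runs. A minimal sketch of the consistency check is given below for an idealized filter whose reported covariance equals the true error covariance (the covariance values are assumptions for illustration):

```python
import numpy as np

# Sketch of the consistency check (16)-(17): compare the Monte Carlo ("real")
# error covariance with the filter-reported ("calculated") one over L runs.
rng = np.random.default_rng(2)
L, n = 500, 2
P_true = np.array([[1.0, 0.3], [0.3, 0.5]])   # assumed true error covariance
chol = np.linalg.cholesky(P_true)

errors = (chol @ rng.standard_normal((n, L))).T       # x_k^(j) - x_hat_k^(j)
G_real = errors.T @ errors / L                        # Eq. (16)
G_calc = np.mean([P_true for _ in range(L)], axis=0)  # Eq. (17), ideal filter

# Radial errors: square root of the summed per-coordinate variances
G_R_real = np.sqrt(G_real[0, 0] + G_real[1, 1])
G_R_calc = np.sqrt(G_calc[0, 0] + G_calc[1, 1])
print(G_R_real, G_R_calc)   # close for a consistent algorithm
```

For a consistent algorithm the two radial errors agree up to Monte Carlo sampling noise, which is exactly the behavior reported for the RI-BLS and RI-BMLS after \bar{T}_{eff}.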
Using (16) and (17), we calculated the real G_k^{R,\mu} and calculated \tilde{G}_k^{R,\mu} radial errors of the estimates from the error variances for each coordinate:
G_k^{R,\mu} = \sqrt{G_k^{x_1,\mu} + G_k^{x_2,\mu}}, \quad \tilde{G}_k^{R,\mu} = \sqrt{\tilde{G}_k^{x_1,\mu} + \tilde{G}_k^{x_2,\mu}},
where G_k^{x_l,\mu} and \tilde{G}_k^{x_l,\mu}, l = 1, 2, are the real and calculated variances of the estimation errors of \hat{x}_{1,k} and \hat{x}_{2,k} averaged over L samples.
Also, to compare the computational complexity of the algorithms, we calculated the corresponding factor:
T^{\mu} = \frac{\tau^{\mu} - \tau^{*}}{\tau^{*}},
where \tau^{\mu} = \frac{1}{L} \sum_{j=1}^{L} t_j^{\mu}, \tau^{*} = \frac{1}{L} \sum_{j=1}^{L} t_j^{*}, t_j^{\mu} is the time spent by the computer to solve the estimation problem using the analyzed algorithm, and t_j^{*} is the time corresponding to the EKF, which requires the minimum time of all the compared algorithms.
The following formula was used to calculate
\bar{T}_{eff} = \frac{1}{L} \sum_{j=1}^{L} T_{eff}^{(j)},
which characterizes the moment, averaged over the set of samples, starting from which the posterior PDF becomes single-extremal.
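The complexity factor and \bar{T}_{eff} reduce to simple averages over the runs. The sketch below uses the reconstruction of (20) with the EKF time as the reference; all timing values are illustrative assumptions, not measured data.

```python
import numpy as np

# Sketch of the complexity factor (20) and the averaged T_eff.
def complexity_factor(t_mu, t_ref):
    """t_mu, t_ref: per-sample run times (s) of the tested and reference (EKF) algorithms."""
    return (np.mean(t_mu) - np.mean(t_ref)) / np.mean(t_ref)

rng = np.random.default_rng(3)
t_ekf = 0.010 + 0.001 * rng.random(500)      # fastest algorithm (reference)
t_pf = 0.300 + 0.010 * rng.random(500)       # hypothetical particle-filter times
print(round(complexity_factor(t_pf, t_ekf), 1))

t_eff_runs = rng.integers(5, 10, size=500)   # per-sample T_eff^(j)
t_eff_bar = t_eff_runs.mean()                # averaged single-extremality onset
```

With this convention the reference algorithm itself gets a factor of zero, and the values in Table 1 measure the relative overhead over the EKF.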
In Figure 10, the solid lines show the calculation results for the real radial errors, and the dotted lines, the calculated ones. Blue corresponds to the EKF (no. 1); green—the IEKF (no. 2); black—the RI-BLS (no. 3); purple—the RI-BMLS (no. 4); and red—the PF (no. 5). The purple dotted line corresponds to \bar{T}_{eff}. The real radial errors of the EKF and IEKF are the largest among the compared methods; moreover, starting from the moment when the posterior PDF becomes single-extremal, they begin to grow indefinitely. On the contrary, the RI-BLS and RI-BMLS demonstrate high accuracy, and their real radial errors coincide with the calculated ones after \bar{T}_{eff} is reached.
The simulation results show that \bar{T}_{eff} almost coincides with the moment when the RI-BLS reaches the PF accuracy and becomes consistent, which indicates that, starting from this moment, the posterior PDF becomes single-extremal. At the same time, the recursive EKF and IEKF turned out to be inefficient in solving map-aided navigation problems.
Table 1 presents the values of the computational complexity factor calculated using (20) for the compared algorithms.
The calculation of the computational complexity factor has shown that the RI-BLS is more than 15-fold simpler in computational terms than the PF. Note that the RI-BMLS using a set of RI-BLS^{(s)} turned out to be 1.5-fold simpler than the RI-BLS and more than 20-fold simpler than the PF.

6. Discussion

It should be noted that the algorithms described in this paper are universal since they are intended to solve a wide class of problems in which the posterior PDF takes a single-extremal form as measurements accumulate. This class includes, among others, problems associated with navigation data fusion, for example, the problem of map-aided navigation discussed above, the group navigation problem of autonomous underwater vehicles described in [30], the single-beacon navigation problem [51], and others. We would like to emphasize that the idea of using multiple linearization in designing filtering algorithms, which underlies the RI-BMLS, is not new in itself. Such algorithms have already been proposed, for example, in [41,49]. However, unlike the RI-BMLS proposed in this paper, they were designed using a recursive scheme, which limits their application to the class of problems discussed here.
It is clear that the computational complexity comparison results presented in Table 1 are due to the specific features of the map-aided navigation problem being solved, namely, the fact that the posterior PDF becomes single-extremal within a rather short period of time. If this is not the case and the posterior PDF remains multi-extremal for a long time, the computational complexity of the RI-BMLS may be high due to the nonrecursive character of the RI-BLS^{(s)}.
Another possible way to reduce the computational complexity of the RI-BLS and, as a consequence, of the RI-BMLS is to use incremental smoothing procedures developed within the factor-graph optimization framework [52], particularly the iSAM2 algorithm [53].
In principle, the proposed RI-BMLS can be constructed without using RI-BLS, for example, with the simultaneous use of a set of parallel iSAM2 algorithms or some other computationally efficient nonrecursive algorithms, but this issue is beyond the scope of this paper.
One of the RI-BMLS disadvantages is its low accuracy before the moment of time T_{eff}, at which it also does not satisfy the consistency property. Yet, this disadvantage can be overcome if the posterior PDF on the interval t = (0, T_{eff}] is described as a sum of Gaussian densities determined by the estimates and covariance matrices generated in the RI-BLS bank. However, the computational complexity of the RI-BMLS in this case can increase significantly, since the accumulation of measurements makes the algorithm implementation more complicated. Note, though, that the algorithms discussed are intended for a special class of problems in which a batch of measurements is necessary only at the initial stage of the solution, and it is precisely the accumulation of data that makes the algorithm efficient.
Moreover, the computational complexity of the algorithm increases with the dimension of the state vector. This is especially true for nonlinear problems. However, the developed RI-BMLS algorithm allows us to limit the growth of computational complexity compared to, for example, a particle filter.

7. Conclusions

A class of nonlinear filtering problems connected with data fusion from various navigation sensors and a navigation system has been considered. A special feature of these problems is that the posterior PDF of the state vector being estimated changes its character from multi-extremal to single-extremal as measurements accumulate.
Algorithms based on sequential Monte Carlo methods (particle filters), which in principle make it possible to attain the potential accuracy corresponding to the optimal estimate in the mean-square sense, are computationally complicated, especially when implemented in real time. Traditional recursive algorithms, such as the extended Kalman filter and its iterative modification, prove to be inoperable in this case.
Two algorithms, devoid of the above drawbacks, are proposed to solve this class of nonlinear filtering problems: the RI-BLS and the RI-BMLS.
The first algorithm, the RI-BLS, is essentially a nonrecursive iterative algorithm; at each iteration, it processes all measurements accumulated by the current measurement time. To achieve this, it uses a recursive procedure: first, the measurements are processed from the first to the current one in the linearized Kalman filter, and then the obtained estimates are processed recursively in reverse time. The RI-BLS, like the I-BLS, provides accuracy close to the potential one and has the consistency property starting from the moment of time T_{eff}, when the posterior PDF becomes single-extremal. Moreover, implementing the RI-BLS does not require inverting high-dimensional nk \times nk matrices; it is sufficient to invert only n-dimensional matrices, which significantly reduces the amount of computation compared to the I-BLS. This algorithm can be used in problems for which the moment when the posterior PDF becomes single-extremal can be determined in advance; in practice, this is not always the case.
The second algorithm, the RI-BMLS, is free from this drawback: based on the RI-BLS, it makes it possible to determine the moment T_{eff}. The essence of the RI-BMLS lies in the simultaneous use of a set of RI-BLSs running in parallel; the linearization points for each RI-BLS are selected based on the peculiarities of the problem being solved. The estimates generated by the set of RI-BLSs are grouped in the domains corresponding to the PDF extrema, both local and global. At the moment of time T_{eff}, when the posterior PDF becomes single-extremal, the estimates at the RI-BLS outputs are clustered within a small domain corresponding to a single extremum. This allows us to identify T_{eff}, after which the problem is solved using a single IEKF, which, in the case of a single-extremal PDF, is consistent and provides accuracy close to the potential one, corresponding to the optimal algorithm.
The application of the proposed algorithms is illustrated with a methodological example and the solution of the map-aided navigation problem. The effectiveness of the proposed algorithms is demonstrated by solving the problem of fusing data from a navigation system and a sensor of the Earth's gravity field, a digital map of which is available aboard the vessel. The calculation of the computational complexity factor showed that the RI-BLS is more than 15-fold simpler than the particle filter in computational terms, and the RI-BMLS, more than 20-fold simpler, with comparable estimation accuracy.

Author Contributions

Conceptualization, O.S. and A.I.; methodology, O.S.; validation, A.I.; formal analysis, A.I., Y.L. and E.D.; investigation, A.I.; data curation, A.I. and Y.L.; writing—original draft preparation, A.I.; writing—review and visualization, E.D.; editing, E.D. and Y.L.; supervision, O.S.; project administration, O.S.; funding acquisition, O.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Russian Science Foundation, project no. 23-19-00626, https://rscf.ru/project/23-19-00626/ (accessed on 1 December 2025).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A

Appendix A.1. Recursive Algorithms

The design of algorithms based on a processing scheme that is recursive with respect to measurements largely relies on recursive relations for the posterior PDF [1,9]:
p(x_k | Y_k^c) = \frac{p(y_k | x_k)\, p(x_k | Y_{k-1}^c)}{\int p(y_k | x_k)\, p(x_k | Y_{k-1}^c)\, dx_k},
p(x_k | Y_{k-1}^c) = \int p(x_k | x_{k-1})\, p(x_{k-1} | Y_{k-1}^c)\, dx_{k-1},
where p(y_k | x_k) is the likelihood function; p(x_k | Y_{k-1}^c) is the prediction PDF; and p(x_k | x_{k-1}) is the transition PDF.
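These two relations can be illustrated numerically with a point-mass (grid) evaluation of the prediction integral and the Bayes update for a scalar model. All model values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Point-mass sketch of the recursive Bayes relations for a scalar model
# x_k = F x_{k-1} + w,  y_k = x_k + v (illustrative values).
x = np.linspace(-6, 6, 401)
dx = x[1] - x[0]

def gauss(z, m, s):
    return np.exp(-0.5 * ((z - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

post = gauss(x, 0.0, 2.0)                      # p(x_{k-1} | Y_{k-1}^c)
F, q, r, y_k = 0.9, 0.5, 0.4, 1.2

# Prediction: p(x_k | Y_{k-1}^c) = ∫ p(x_k | x_{k-1}) p(x_{k-1} | Y_{k-1}^c) dx_{k-1}
trans = gauss(x[:, None], F * x[None, :], q)   # trans[i, j] = p(x_i | x_j)
pred = trans @ post * dx

# Update: multiply by the likelihood p(y_k | x_k) and renormalize
upd = gauss(y_k, x, r) * pred
upd /= np.sum(upd) * dx
print(np.sum(upd) * dx)   # normalized posterior
```

For this linear-Gaussian case the grid posterior mean reproduces the Kalman filter update, which is the consistency check behind such point-mass evaluations.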
The design of suboptimal algorithms involves the development of computationally efficient procedures to provide the derivation of estimate x ^ k s u b ( Y k c ) , which is slightly different in accuracy from the nonlinear optimal estimate (3), and the calculated accuracy characteristic, which is consistent with the real one (6), in the form of the estimation error covariance matrix:
\tilde{G}_k^{sub} = E_{p(Y_k^c)}\left[P_k^{sub}(Y_k^c)\right] = \int P_k^{sub}(Y_k^c)\, p(Y_k^c)\, dY_k^c \approx G_k^{sub},
where P_k^{sub}(Y_k^c) is the covariance matrix calculated with the use of the suboptimal algorithm, p(Y_k^c) is the PDF of the measurement vector Y_k^c, and G_k^{sub} is the unconditional covariance matrix for the suboptimal algorithm defined by an expression similar to (6). Suboptimal algorithms for which \tilde{G}_k^{sub} \approx G_k^{sub} are called consistent, as already noted in the Introduction [5,12].

Appendix A.2. Nonrecursive Algorithms

Along with recursive algorithms, the desired estimate of the state vector at an arbitrary point in time can be obtained by finding it as part of the estimate of the composite vector X_k^c = (x_1^T, …, x_k^T)^T, which includes x_i for all i = 1, 2, …, k, using the composite measurement vector Y_k^c = (y_1^T, …, y_k^T)^T. To obtain this estimate, we can solve the following problem: estimate the time-invariant vector X_k^c with a priori PDF p(X_k^c) using the m \times k measurements
Y_k^c = H^c(X_k^c) + V_k^c,
in which H^c(X_k^c) = (h_1^T(x_1), …, h_k^T(x_k))^T, and V_k^c = (v_1^T, …, v_k^T)^T is a composite vector of measurement errors with block-diagonal covariance matrix R_k^c with matrices R_i, i = 1, …, k, on the main diagonal. To obtain the desired estimate, we need the posterior PDF p(X_k^c | Y_k^c), defined as
p(X_k^c | Y_k^c) = c\, p(X_k^c)\, p(Y_k^c | X_k^c),
where c is a normalizing factor.
It is easy to show that
p(X_k^c)\, p(Y_k^c | X_k^c) = c \exp\left\{-\frac{1}{2} J(X_k^c)\right\},
where
J(X_k^c) = x_0^T P_0^{-1} x_0 + \sum_{i=1}^{k} \left(x_i - f_i(x_{i-1}) - u_i\right)^T \left(G_i Q_i G_i^T\right)^{-1} \left(x_i - f_i(x_{i-1}) - u_i\right) + \sum_{i=1}^{k} \left(y_i - h_i(x_i)\right)^T R_i^{-1} \left(y_i - h_i(x_i)\right).
In the case of the nonrecursive scheme, the optimal, in the mean square sense, estimate of the full composite vector X k c is defined as
\hat{X}_k^{c,opt}(Y_k^c) = \int X_k^c\, p(X_k^c | Y_k^c)\, dX_k^c.
If an estimate is required only at a point in time i k , then instead of (3), for the nonrecursive scheme, we can write
\hat{x}_{i/k}^{opt}(Y_k^c) = \int x_i\, p(X_k^c | Y_k^c)\, dX_k^c,
where \hat{x}_{i/k}^{opt} is the estimate obtained using k measurements. When i < k, estimate \hat{x}_{i/k}^{opt} corresponds to the solution of the smoothing problem.

Appendix A.3. Suboptimal Recursive Algorithms Based on Linearization

To find the desired estimate and the corresponding calculated covariance matrix in recursive suboptimal algorithms based on linearization, the following set of calculations can be performed at each k-th step of the j-th iteration:
Step 1.
Calculation of the prediction and the covariance matrix of its errors:
\hat{x}_{k/k-1} = f_k(x_{k-1}^{lin1}) + F_k(\hat{x}_{k-1} - x_{k-1}^{lin1}) + u_k, \quad \hat{x}_0 = \bar{x}_0, \quad P_{k/k-1} = F_k P_{k-1} F_k^T + G_k Q_k G_k^T;
Step 2.
Calculation of the measurement prediction and its error covariance matrix:
\hat{y}_k^{(j)} = h_k(x_k^{lin2(j)}) + H_k^{(j)}(\hat{x}_{k/k-1} - x_k^{lin2(j)}), \quad P_{y_k}^{(j)} = H_k^{(j)} P_{k/k-1} (H_k^{(j)})^T + R_k;
Step 3.
Calculation of the cross-covariance matrix:
P_{x_k y_k}^{(j)} = P_{k/k-1} (H_k^{(j)})^T;
Step 4.
Calculation of the gain factor:
K_k^{(j)} = P_{x_k y_k}^{(j)} \left(P_{y_k}^{(j)}\right)^{-1};
Step 5.
Calculation of the estimate and the corresponding covariance matrix:
\hat{x}_k^{(j)} = \hat{x}_{k/k-1} + K_k^{(j)} \left(y_k - \hat{y}_k^{(j)}\right), \quad P_k^{(j)} = P_{k/k-1} - K_k^{(j)} \left(P_{x_k y_k}^{(j)}\right)^T.
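Steps 1–5 can be sketched for a scalar model as follows. The system and measurement functions below are illustrative assumptions; the linearization-point choice shown corresponds to the EKF variant.

```python
import numpy as np

# One step of the linearized update (Steps 1-5) for a scalar model; the choice
# of the linearization points distinguishes the LKF / EKF / IEKF variants.
def f(x):  return 0.9 * x          # system function f_k (illustrative)
def h(x):  return x ** 3           # measurement function h_k (illustrative)
def Fj(x): return 0.9              # Jacobian of f at the linearization point
def Hj(x): return 3 * x ** 2       # Jacobian of h at the linearization point

def step(x_prev, P_prev, y, x_lin1, x_lin2, u=0.0, Q=0.25, R=0.04):
    # Step 1: prediction and its covariance
    x_pred = f(x_lin1) + Fj(x_lin1) * (x_prev - x_lin1) + u
    P_pred = Fj(x_lin1) * P_prev * Fj(x_lin1) + Q
    # Step 2: measurement prediction and its covariance
    H = Hj(x_lin2)
    y_pred = h(x_lin2) + H * (x_pred - x_lin2)
    P_y = H * P_pred * H + R
    # Steps 3-5: cross-covariance, gain, and the update of the estimate
    P_xy = P_pred * H
    K = P_xy / P_y
    return x_pred + K * (y - y_pred), P_pred - K * P_xy

# EKF choice of linearization points: x_lin1 = previous estimate, x_lin2 = prediction
x_hat, P = 1.0, 1.0
y = h(1.1) + 0.1                   # a single noisy measurement (illustrative)
x_lin1 = x_hat
x_lin2 = f(x_lin1)                 # equals the prediction for this choice
x_hat, P = step(x_hat, P, y, x_lin1, x_lin2)
print(x_hat, P)
```

Replacing `x_lin1`, `x_lin2` with the a priori means gives the LKF, while iterating `step` with `x_lin2` reset to the latest estimate gives the IEKF described below.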
Depending on the choice of the linearization points x_i^{lin1} and x_i^{lin2}, the above expressions can be used to design various suboptimal algorithms, the generalized block diagram of which is presented in Figure A1.
Figure A1. Block diagram of suboptimal recursive algorithms based on linearization.
In the linearized Kalman filter (LKF), the simplest algorithm, the linearization points at each step k are chosen as fixed values that depend only on the a priori information: x_k^{lin1} = \bar{x}_{k-1}, x_k^{lin2} = \bar{x}_k, where \bar{x}_k = F_k \bar{x}_{k-1} + u_k. The LKF corresponds to the block diagram shown in Figure A1 if the relations highlighted in blue and red are not taken into account.
Another algorithm, called the extended Kalman filter, aims to increase the accuracy of approximation (A1) through an appropriate choice of the linearization points. Clearly, the best linearized description of functions f_k(x_{k-1}) and h_k(x_k) is obtained when the Jacobian matrices F_k and H_k are calculated directly at the true values x_{k-1}^{Tr}, x_k^{Tr}. Since these values are unknown, it makes sense to choose the linearization points that differ least from x_{k-1}^{Tr} and x_k^{Tr}. Based on these considerations, x_k^{lin1} = \hat{x}_{k-1} and x_k^{lin2} = \hat{x}_{k/k-1} are used as the linearization points in the EKF. The block diagram in Figure A1 corresponds to the EKF if the relation highlighted in red is not taken into account.
The iterative extended Kalman filter (IEKF) also aims to increase the accuracy of the linearized approximation (A1) through an appropriate choice of the linearization points. The main idea behind its design is as follows: if estimate \hat{x}_k is closer to the true value x_k^{Tr} than prediction \hat{x}_{k/k-1}, then choosing it as the linearization point x_k^{lin2}, i.e., setting x_k^{lin2} = \hat{x}_k, will also increase the accuracy of the linearized representation of h_k(x_k). Based on these considerations, in the IEKF, the same measurement is processed repeatedly (iteratively) at the current step, and at each successive j-th iteration, the linearization point x_k^{lin2(j)} is corrected taking into account the processing results of the previous iteration, i.e., x_k^{lin2(j)} = \hat{x}_k^{(j-1)}. The IEKF corresponds to the block diagram in Figure A1 if all of its blocks are taken into account.
It is clear that the LKF efficiency can be low because of the errors in the linearized representation (A1), which is why the LKF, unlike the EKF and IEKF, has not found wide application in solving practical problems.

Appendix A.4. Suboptimal Nonrecursive (Batch) Algorithms Based on Linearization

To find the desired estimate and the corresponding calculated covariance matrix in nonrecursive (batch) suboptimal algorithms based on linearization, the following set of calculations can be performed at each k-th step of the j-th iteration:
Step 1.
Calculation of the first two moments X ¯ k c and P X k c c for the composite vector X k c = ( x 0 T , , x k T ) T :
\bar{X}_k^c = (\bar{x}_0^T, …, \bar{x}_k^T)^T,
where \bar{x}_i = f_i(x_{i-1}^{lin1}) + F_i(\bar{x}_{i-1} - x_{i-1}^{lin1}) + u_i, x_i^{lin1} = \bar{x}_i, i = 0, …, k, i.e., \bar{x}_i = f_i(\bar{x}_{i-1}) + u_i,
P_{X_k^c}^c = T_k^c P_{b,k}^c (T_k^c)^T,
where T_k^c = \begin{bmatrix} I_{n \times n} & 0_{n \times n} & 0_{n \times n} & \cdots & 0_{n \times n} \\ F_1 & G_1 & 0_{n \times n} & \cdots & 0_{n \times n} \\ F_2 F_1 & F_2 G_1 & G_2 & \cdots & 0_{n \times n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ F_k F_{k-1} \cdots F_1 & F_k F_{k-1} \cdots F_2 G_1 & F_k F_{k-1} \cdots F_3 G_2 & \cdots & G_k \end{bmatrix},
P_{b,k}^c = \mathrm{diag}(P_0, Q_1, \ldots, Q_{k-1}, Q_k),
and I_{n \times n} and 0_{n \times n} are the n \times n identity and zero matrices, respectively.
Step 2.
Calculation of the measurement prediction and its error covariance matrix:
\bar{Y}_k^{c,(j)} = \left(h_1^T(x_1^{lin2(j)}), \ldots, h_k^T(x_k^{lin2(j)})\right)^T + H_k^{c,(j)} \left(\bar{X}_k^c - X_k^{c,lin2(j)}\right), \quad P_{Y_k^c}^{c,(j)} = H_k^{c,(j)} P_{X_k^c}^c (H_k^{c,(j)})^T + R_k^c,
where X_k^{c,lin2(j)} = \left((x_1^{lin2(j)})^T, \ldots, (x_k^{lin2(j)})^T\right)^T, H_k^{c,(j)} = \mathrm{diag}(H_1^{(j)}, \ldots, H_k^{(j)}), and R_k^c = \mathrm{diag}(R_1, \ldots, R_k).
Step 3.
Calculation of the cross-covariance matrix:
P_{X_k^c Y_k^c}^{c,(j)} = P_{X_k^c}^c (H_k^{c,(j)})^T;
Step 4.
Calculation of the gain factor:
K_k^{c,(j)} = P_{X_k^c Y_k^c}^{c,(j)} \left(P_{Y_k^c}^{c,(j)}\right)^{-1};
Step 5.
Calculation of the estimate and the corresponding covariance matrix:
\hat{X}_k^{c,(j)}(Y_k^c) = \bar{X}_k^c + K_k^{c,(j)} \left(Y_k^c - \bar{Y}_k^{c,(j)}\right), \quad P_k^{c,(j)} = P_{X_k^c}^c - K_k^{c,(j)} \left(P_{X_k^c Y_k^c}^{c,(j)}\right)^T.
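For a scalar linear model, the batch construction of Steps 1–5 can be sketched and checked against a recursive Kalman filter over the same data. All numerical values are illustrative assumptions; the model is linear, so no linearization points are needed.

```python
import numpy as np

# Sketch of the batch (nonrecursive) Steps 1-5 for a scalar linear model,
# checked against the recursive Kalman filter (illustrative values).
F, G, Q, R, P0 = 0.9, 1.0, 0.25, 0.04, 1.0
k = 4
rng = np.random.default_rng(4)
y = rng.standard_normal(k)

# Step 1: prior mean (zero here) and covariance P_X = T P_b T^T of X = (x_0..x_k)
n = k + 1
T = np.zeros((n, n))
for i in range(n):
    T[i, 0] = F ** i                       # propagate the initial state
    for j in range(1, i + 1):
        T[i, j] = F ** (i - j) * G         # propagate the j-th noise input
P_b = np.diag([P0] + [Q] * k)
P_X = T @ P_b @ T.T

# Steps 2-5: batch update with H picking x_1..x_k (here h_i(x_i) = x_i)
H = np.hstack([np.zeros((k, 1)), np.eye(k)])
P_Y = H @ P_X @ H.T + R * np.eye(k)
K_gain = P_X @ H.T @ np.linalg.inv(P_Y)
X_hat = K_gain @ y                          # prior mean is zero
P_c = P_X - K_gain @ (P_X @ H.T).T

# Recursive KF over the same data must give the same terminal estimate
x_hat, P = 0.0, P0
for yi in y:
    x_pred, P_pred = F * x_hat, F * P * F + G * Q * G
    Kk = P_pred / (P_pred + R)
    x_hat, P = x_pred + Kk * (yi - x_pred), (1 - Kk) * P_pred

print(abs(X_hat[-1] - x_hat) < 1e-9)   # batch and recursive estimates coincide
```

This numerical agreement is exactly the BLS/LKF equivalence discussed next: in the linear Gaussian case the estimate does not depend on whether a recursive or a nonrecursive processing scheme is used.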
The simplest batch algorithm, the Batch Linearized Smoother (BLS), is obtained if the linearization points x_i^{lin2}, i = 1, …, k, are chosen as x_i^{lin2} = \bar{x}_i. The BLS corresponds to the block diagram shown in Figure 1 if the relation highlighted in red is not taken into account. It is easy to show that estimate \hat{x}_k^{BLS} and covariance matrix P_k^{BLS} generated in the BLS as part of vector \hat{X}_k^{c,BLS} and block matrix P_k^{c,BLS} coincide with estimate \hat{x}_k^{LKF} and covariance matrix P_k^{LKF} at the LKF output. This follows from the fact that both algorithms are based on linearization carried out at the same points; that is, \hat{x}_k^{BLS}, P_k^{BLS} and \hat{x}_k^{LKF}, P_k^{LKF} are obtained by solving the same linearized problem in different ways: the former with a nonrecursive scheme and the latter with a recursive one. Meanwhile, the estimate and the covariance matrix obtained by solving a linear Gaussian problem do not depend on the processing scheme used to obtain them, whether recursive or nonrecursive.
It should be noted that the LKF is designed using the recursive scheme, which is economical in terms of computational complexity compared to the BLS, which is designed using the nonrecursive scheme. Thus, if it is required to obtain an estimate only at time k, the LKF can be considered an economical implementation of the BLS in terms of computational complexity. It is clear that the BLS has all the disadvantages of the LKF.

References

1. Stratonovich, R.L. Uslovnye Markovskie Protsessy i Ikh Primenenie k Teorii Optimal'nogo Upravleniya (Conditional Markov Processes and Their Application to the Theory of Optimal Control); Moscow State University Publishing House: Moscow, Russia, 1966.
2. Jazwinski, A.H. Stochastic Processes and Filtering Theory; Academic Press: New York, NY, USA, 1970.
3. Gelb, A. Applied Optimal Estimation; M.I.T. Press: Cambridge, MA, USA, 1974.
4. Antoulas, A.C.; Kalman, R.E. (Eds.) Mathematical System Theory: The Influence of R.E. Kalman; Springer: Berlin/Heidelberg, Germany, 1991; ISBN 978-3-540-52994-1.
5. Bar-Shalom, Y.; Li, X.-R.; Kirubarajan, T. Estimation with Applications to Tracking and Navigation: Theory, Algorithms and Software; Wiley: New York, NY, USA, 2007; ISBN 978-0-471-41655-5.
6. Simon, D. Optimal State Estimation: Kalman, H-Infinity, and Nonlinear Approaches; Wiley-Interscience: Hoboken, NJ, USA, 2006; ISBN 978-0-470-04533-6.
7. Gibbs, B.P. Advanced Kalman Filtering, Least-Squares and Modeling: A Practical Handbook; Wiley: Hoboken, NJ, USA, 2011; ISBN 978-1-118-00316-9.
8. Brown, R.G. Introduction to Random Signals and Applied Kalman Filtering: With MATLAB Exercises, 4th ed.; J. Wiley & Sons: Hoboken, NJ, USA, 2012; ISBN 978-0-470-60969-9.
9. Särkkä, S. Bayesian Filtering and Smoothing; Institute of Mathematical Statistics Textbooks; Cambridge University Press: Cambridge, UK, 2013; ISBN 978-1-107-03065-7.
10. Stepanov, O.A. Optimal and Suboptimal Filtering in Integrated Navigation Systems. In Aerospace Navigation Systems; Nebylov, A.V., Watson, J., Eds.; Wiley: Hoboken, NJ, USA, 2016; pp. 244–298; ISBN 978-1-119-16307-7.
11. Stepanov, O.A. Methods for Processing Information of Navigation Measurement; ITMO University: St. Petersburg, Russia, 2017.
12. Stepanov, O.A.; Isaev, A.M. A Procedure of Comparative Analysis of Recursive Nonlinear Filtering Algorithms in Navigation Data Processing Based on Predictive Simulation. Gyroscopy Navig. 2023, 14, 213–224.
13. Zhao, Z.; Li, X.R.; Jilkov, V.P. Best Linear Unbiased Filtering with Nonlinear Measurements for Target Tracking. IEEE Trans. Aerosp. Electron. Syst. 2004, 40, 1324–1336.
14. Jwo, D.-J.; Chen, Y.-L.; Cho, T.-S.; Biswal, A. A Robust GPS Navigation Filter Based on Maximum Correntropy Criterion with Adaptive Kernel Bandwidth. Sensors 2023, 23, 9386.
15. Zhang, J.; Feng, K.; Li, J.; Zhang, C.; Wei, X. An Adaptive Unscented Kalman Filter Integrated Navigation Method Based on the Maximum Versoria Criterion for INS/GNSS Systems. Sensors 2025, 25, 3483.
16. Xiong, K.; Zhou, P.; Huang, X. Inter-Spacecraft Rapid Transfer Alignment Based on Attitude Plus Angular Rate Matching Using Q-Learning Kalman Filter. Sensors 2025, 25, 2774.
17. Gustafsson, F.; Hendeby, G. Some Relations Between Extended and Unscented Kalman Filters. IEEE Trans. Signal Process. 2012, 60, 545–555.
18. Stepanov, O.A.; Litvinenko, Y.A.; Vasiliev, V.A.; Toropov, A.B.; Basin, M.V. Polynomial Filtering Algorithm Applied to Navigation Data Processing under Quadratic Nonlinearities in System and Measurement Equations. Part 1. Description and Comparison with Kalman Type Algorithms. Gyroscopy Navig. 2021, 12, 205–223.
19. Basin, M. New Trends in Optimal Filtering and Control for Polynomial and Time-Delay Systems; Lecture Notes in Control and Information Sciences; Springer: Berlin/Heidelberg, Germany, 2008; Volume 380; ISBN 978-3-540-70802-5.
20. Stepanov, O.A.; Litvinenko, Y.A.; Isaev, A.M. Comparative Analysis of Quasi-Linear Kalman-Type Algorithms in Estimating a Markov Sequence with Nonlinearities in the System and Measurement Equations. Mehatronika Avtom. Upr. 2024, 25, 585–595.
21. Lefebvre, T.; Bruyninckx, H.; De Schutter, J. Comment on "A New Method for the Nonlinear Transformation of Means and Covariances in Filters and Estimators" [with Authors' Reply]. IEEE Trans. Autom. Control 2002, 47, 1406–1409.
22. Šimandl, M.; Straka, O.; Duník, J. Efficient Adaptation of Design Parameters of Derivative-Free Filters. Autom. Remote Control 2016, 77, 261–276.
23. Julier, S.J.; Uhlmann, J.K.; Durrant-Whyte, H.F. A New Approach for Filtering Nonlinear Systems. In Proceedings of the 1995 American Control Conference—ACC'95, Seattle, WA, USA, 21–23 June 1995; Volume 3, pp. 1628–1632.
24. Zhao, J.; Zhang, Y.; Li, S.; Wang, J.; Fang, L.; Ning, L.; Feng, J.; Zhang, J. An Improved Unscented Kalman Filter Applied to Positioning and Navigation of Autonomous Underwater Vehicles. Sensors 2025, 25, 551.
25. Arasaratnam, I.; Haykin, S. Cubature Kalman Filters. IEEE Trans. Autom. Control 2009, 54, 1254–1269.
26. Liu, W.; Yang, J.; Xu, T.; Ma, X.; Wang, S. Cubature Kalman Hybrid Consensus Filter for Collaborative Localization of Unmanned Surface Vehicle Cluster with Random Measurement Delay. Sensors 2024, 24, 6042.
27. Steinbring, J.; Hanebeck, U.D. LRKF Revisited: The Smart Sampling Kalman Filter (S2KF). J. Adv. Inf. Fusion 2014, 9, 106–123.
28. Afshari, H.H.; Gadsden, S.A.; Habibi, S. Gaussian Filters for Parameter and State Estimation: A General Review of Theory and Recent Trends. Signal Process. 2017, 135, 218–238.
29. Dunik, J.; Biswas, S.K.; Dempster, A.G.; Pany, T.; Closas, P. State Estimation Methods in Navigation: Overview and Application. IEEE Aerosp. Electron. Syst. Mag. 2020, 35, 16–31.
30. Isaev, A.; Stepanov, O.; Litvinenko, Y. Comparative Analysis of Recursive and Nonrecursive Linearization-Based Estimation Algorithms. Int. J. Dyn. Control 2025, 13, 95.
31. Bucy, R. Nonlinear Filtering Theory. IEEE Trans. Autom. Control 1965, 10, 198.
32. Sorenson, H.W.; Alspach, D.L. Recursive Bayesian Estimation Using Gaussian Sums. Automatica 1971, 7, 465–479.
33. Matoušek, J.; Duník, J.; Straka, O. Density Difference Grid Design in a Point-Mass Filter. Energies 2020, 13, 4080.
34. Doucet, A.; de Freitas, N.; Gordon, N. (Eds.) Sequential Monte Carlo Methods in Practice; Springer: New York, NY, USA, 2001; ISBN 978-1-4419-2887-0.
35. Chen, Z. Bayesian Filtering: From Kalman Filters to Particle Filters, and Beyond; McMaster University: Hamilton, ON, Canada, 2003.
36. Ristic, B.; Arulampalam, S.; Gordon, N. Beyond the Kalman Filter: Particle Filters for Tracking Applications; Artech House: Norwood, MA, USA, 2004.
37. Hu, Y.; Peng, A.; Tang, B.; Xu, H. An Indoor Navigation Algorithm Using Multi-Dimensional Euclidean Distance and an Adaptive Particle Filter. Sensors 2021, 21, 8228.
38. Yang, Z.; Zhang, X.; Xiang, W.; Lin, X. A Novel Particle Filter Based on One-Step Smoothing for Nonlinear Systems with Random One-Step Delay and Missing Measurements. Sensors 2025, 25, 318.
39. Li, T.; Bolic, M.; Djuric, P.M. Resampling Methods for Particle Filtering: Classification, Implementation, and Strategies. IEEE Signal Process. Mag. 2015, 32, 70–86.
40. Särkkä, S.; Vehtari, A.; Lampinen, J. Rao-Blackwellized Particle Filter for Multiple Target Tracking. Inf. Fusion 2007, 8, 2–15.
41. Koshaev, D.A.; Bogomolov, V.V. Algorithm of Long Baseline Navigation of an Autonomous Underwater Vehicle in the Absence of a Priori Data on Its Location under Conditions of Sparse Beacon Placement. J. Instrum. Eng. 2024, 67, 1052–1064.
  23. Julier, S.J.; Uhlmann, J.K.; Durrant-Whyte, H.F. A New Approach for Filtering Nonlinear Systems. In Proceedings of the 1995 American Control Conference—ACC’95, Seattle, WA, USA, 21–23 June 1995; American Autom Control Council: Seattle, WA, USA, 1995; Volume 3, pp. 1628–1632. [Google Scholar]
  24. Zhao, J.; Zhang, Y.; Li, S.; Wang, J.; Fang, L.; Ning, L.; Feng, J.; Zhang, J. An Improved Unscented Kalman Filter Applied to Positioning and Navigation of Autonomous Underwater Vehicles. Sensors 2025, 25, 551. [Google Scholar] [CrossRef] [PubMed]
  25. Arasaratnam, I.; Haykin, S. Cubature Kalman Filters. IEEE Trans. Autom. Control 2009, 54, 1254–1269. [Google Scholar] [CrossRef]
  26. Liu, W.; Yang, J.; Xu, T.; Ma, X.; Wang, S. Cubature Kalman Hybrid Consensus Filter for Collaborative Localization of Unmanned Surface Vehicle Cluster with Random Measurement Delay. Sensors 2024, 24, 6042. [Google Scholar] [CrossRef] [PubMed]
  27. Steinbring, J.; Hanebeck, U.D. LRKF Revisited: The Smart Sampling Kalman Filter (S2KF). J. Adv. Inf. Fusion 2014, 9, 106–123. [Google Scholar]
  28. Afshari, H.H.; Gadsden, S.A.; Habibi, S. Gaussian Filters for Parameter and State Estimation: A General Review of Theory and Recent Trends. Signal Process. 2017, 135, 218–238. [Google Scholar] [CrossRef]
  29. Dunik, J.; Biswas, S.K.; Dempster, A.G.; Pany, T.; Closas, P. State Estimation Methods in Navigation: Overview and Application. IEEE Aerosp. Electron. Syst. Mag. 2020, 35, 16–31. [Google Scholar] [CrossRef]
  30. Isaev, A.; Stepanov, O.; Litvinenko, Y. Comparative Analysis of Recursive and Nonrecursive Linearization-Based Estimation Algorithms. Int. J. Dyn. Control 2025, 13, 95. [Google Scholar] [CrossRef]
  31. Bucy, R. Nonlinear Filtering Theory. IEEE Trans. Autom. Control 1965, 10, 198. [Google Scholar] [CrossRef]
  32. Sorenson, H.W.; Alspach, D.L. Recursive Bayesian Estimation Using Gaussian Sums. Automatica 1971, 7, 465–479. [Google Scholar] [CrossRef]
  33. Matoušek, J.; Duník, J.; Straka, O. Density Difference Grid Design in a Point-Mass Filter. Energies 2020, 13, 4080. [Google Scholar] [CrossRef]
  34. Doucet, A.; Freitas, N.; Gordon, N. (Eds.) Sequential Monte Carlo Methods in Practice; Springer: New York, NY, USA, 2001; ISBN 978-1-4419-2887-0. [Google Scholar]
  35. Chen, Z. Bayesian Filtering: From Kalman Filters to Particle Filters, and Beyond; McMaster University: Hamilton, ON, Canada, 2003. [Google Scholar]
  36. Ristic, B.; Arulampalam, S.; Gordon, N. Beyond the Kalman Filter: Particle Filters for Tracking Applications; Artech House: Norwood, MA, USA, 2004. [Google Scholar]
  37. Hu, Y.; Peng, A.; Tang, B.; Xu, H. An Indoor Navigation Algorithm Using Multi-Dimensional Euclidean Distance and an Adaptive Particle Filter. Sensors 2021, 21, 8228. [Google Scholar] [CrossRef] [PubMed]
  38. Yang, Z.; Zhang, X.; Xiang, W.; Lin, X. A Novel Particle Filter Based on One-Step Smoothing for Nonlinear Systems with Random One-Step Delay and Missing Measurements. Sensors 2025, 25, 318. [Google Scholar] [CrossRef] [PubMed]
  39. Li, T.; Bolic, M.; Djuric, P.M. Resampling Methods for Particle Filtering: Classification, Implementation, and Strategies. IEEE Signal Process. Mag. 2015, 32, 70–86. [Google Scholar] [CrossRef]
  40. Särkkä, S.; Vehtari, A.; Lampinen, J. Rao-Blackwellized Particle Filter for Multiple Target Tracking. Inf. Fusion 2007, 8, 2–15. [Google Scholar] [CrossRef]
  41. Koshaev, D.A.; Bogomolov, V.V. Algorithm of Long Baseline Navigation of an Autonomous Underwater Vehicle in the Absence of a Priori Data on Its Location under Conditions of Sparse Beacon Placement. J. Instrum. Eng. 2024, 67, 1052–1064. [Google Scholar] [CrossRef]
  42. Stepanov, O.A.; Toropov, A.B. Nonlinear Filtering for Map-Aided Navigation. Part 1. An Overview of Algorithms. Gyroscopy Navig. 2015, 6, 324–337. [Google Scholar] [CrossRef]
  43. Gao, S.; Cai, T. Parallel Multiple Methods with Adaptative Decision Making for Gravity-Aided Navigation. J. Mar. Sci. Eng. 2023, 11, 1624. [Google Scholar] [CrossRef]
  44. Quintas, J.; Cruz, J.; Pascoal, A.; Teixeira, F.C. A Comparison of Nonlinear Filters for Underwater Geomagnetic Navigation. In Proceedings of the 2020 IEEE/OES Autonomous Underwater Vehicles Symposium (AUV), St Johns, NL, Canada, 30 September–2 October 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–6. [Google Scholar]
  45. Wang, R.; Wang, J.; Li, Y.; Ma, T.; Zhang, X. Research Advances and Prospects of Underwater Terrain-Aided Navigation. Remote Sens. 2024, 16, 2560. [Google Scholar] [CrossRef]
  46. Gao, S.; Cai, T.; Fang, K. Gravity-Matching Algorithm Based on K-Nearest Neighbor. Sensors 2022, 22, 4454. [Google Scholar] [CrossRef]
  47. Zou, J.; Cai, T.; Zhao, S. Gravity-Aided Navigation Underwater Positioning Confidence Study Based on Bayesian Estimation of the Interquartile Range Method. Remote Sens. 2025, 17, 2137. [Google Scholar] [CrossRef]
  48. Stepanov, O.A.; Motorin, A.V.; Zolotarevich, V.P.; Isaev, A.M.; Litvinenko, Y.A. Recursive and Nonrecursive Algorithms Applied to Navigation Data Processing: Differences and Interrelation with Factor Graph Optimization Algorithms. In Proceedings of the 31st Saint Petersburg International Conference on Integrated Navigation Systems (ICINS), Saint Petersburg, Russia, 29–31 May 2024; State Research Center of the Russian Federation Concern CSRI Elektropribor, JSC: Saint-Petersburg, Russia, 2024; pp. 424–431. [Google Scholar]
  49. Dmitriev, S.P.; Shimelevich, L.I. A Generalized Kalman Filter with Repeated Linearization and Its Use in Navigation over Geophysical Fields. Autom. Remote Control 1978, 39, 505–509. [Google Scholar]
  50. Pavlis, N.K.; Holmes, S.A.; Kenyon, S.C.; Factor, J.K. The Development and Evaluation of the Earth Gravitational Model 2008 (EGM2008). J. Geophys. Res. 2012, 117, 2011JB008916. [Google Scholar] [CrossRef]
  51. Koshaev, D.A. Multiple Model Algorithm for Single-Beacon Navigation of Autonomous Underwater Vehicle without Its A Priori Position. Part 2. Simulation. Gyroscopy Navig. 2020, 11, 319–332. [Google Scholar] [CrossRef]
  52. Dai, J.; Liu, S.; Hao, X.; Ren, Z.; Yang, X. UAV Localization Algorithm Based on Factor Graph Optimization in Complex Scenes. Sensors 2022, 22, 5862. [Google Scholar] [CrossRef]
  53. Kaess, M.; Johannsson, H.; Roberts, R.; Ila, V.; Leonard, J.J.; Dellaert, F. iSAM2: Incremental Smoothing and Mapping Using the Bayes Tree. Int. J. Robot. Res. 2012, 31, 216–235. [Google Scholar] [CrossRef]
Figure 1. Block diagram of the Iterative Batch Linearized Smoother.
Figure 2. Block diagram of RI-BLS.
Figure 3. Block diagram of the RI-BMLS.
Figure 4. RI-BMLS estimates for different numbers of measurements.
Figure 5. Graphs of posterior PDFs for different estimation time moments: (a) k = 1; (b) k = 3; (c) k = 5; (d) k = 7.
Figure 6. The scheme of the isolines of the gravity anomalies for the region and the vehicle’s path.
Figure 7. Position of the linearization points in the a priori uncertainty domain.
Figure 8. The isolines of the posterior PDFs at different estimation time moments: (a) k = 1; (b) k = 15; (c) k = 30.
Figure 9. Position of the linearization points in the a priori uncertainty domain.
Figure 10. Calculation results for the real and calculated radial errors.
Table 1. The values of the computational complexity factor of the algorithms.

Algorithm    Computational Complexity Factor
EKF          0
IEKF         3.2
RI-BLS       181
RI-BMLS      121
PF           2496