Review

Advances in Multi-Source Navigation Data Fusion Processing Methods

by Xiaping Ma 1, Peimin Zhou 1,* and Xiaoxing He 2
1 School of Geomatics, Xi’an University of Science and Technology, Xi’an 710054, China
2 School of Civil Engineering and Surveying and Mapping Engineering, Jiangxi University of Science and Technology, Ganzhou 341000, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(9), 1485; https://doi.org/10.3390/math13091485
Submission received: 12 March 2025 / Revised: 25 April 2025 / Accepted: 28 April 2025 / Published: 30 April 2025

Abstract: In recent years, the field of multi-source navigation data fusion has witnessed substantial advancements, propelled by the rapid development of multi-sensor technologies, Artificial Intelligence (AI) algorithms and enhanced computational capabilities. On one hand, fusion methods based on filtering theory, such as Kalman Filtering (KF), Particle Filtering (PF), and Federated Filtering (FF), have been continuously optimized, enabling effective handling of non-linear and non-Gaussian noise issues. On the other hand, the introduction of AI technologies like deep learning and reinforcement learning has provided new solutions for multi-source data fusion, particularly enhancing adaptive capabilities in complex and dynamic environments. Additionally, methods based on Factor Graph Optimization (FGO) have also demonstrated advantages in multi-source data fusion, offering better handling of global consistency problems. In the future, with the widespread adoption of technologies such as 5G, the Internet of Things, and edge computing, multi-source navigation data fusion is expected to evolve towards real-time processing, intelligence, and distributed systems. So far, fusion methods mainly include optimal estimation methods, filtering methods, uncertain reasoning methods, Multiple Model Estimation (MME), AI, and so on. To analyze the performance of these methods and provide a reliable theoretical reference and basis for the design and development of a multi-source data fusion system, this paper summarizes the characteristics of these fusion methods and their corresponding application scenarios. These results can provide references for theoretical research, system development, and application in the fields of autonomous driving, unmanned vehicle navigation, and intelligent navigation.
MSC:
93B27

1. Introduction

Individually, Global Navigation Satellite Systems (GNSS), Inertial Navigation Systems (INS), Ultra-Wideband (UWB) technology, Bluetooth, Wireless Local Area Networks (WLAN), visual sensors, Pseudolites (PL), and various other sensors struggle to meet demanding navigation performance requirements. GNSS, for instance, as a non-autonomous navigation system, is particularly limited in specific complex environments such as urban canyons and tunnels, where its signals are highly susceptible to blockage, interference, and shielding. To significantly enhance the overall performance of navigation systems, integrated navigation technology emerges as an effective solution. This technology entails the collaborative use of two or more distinct types of navigation systems to measure and calculate the same navigation information, thereby generating quantitative measurements. These measurements are subsequently utilized to compute and correct the errors inherent in each navigation system. By leveraging a diverse array of technical means and methods, this approach ensures high accuracy and reliability of navigation and positioning services across a wide range of scenarios. These scenarios encompass seamless indoor and outdoor positioning, environments with electromagnetic interference, as well as underwater and underground environments. Therefore, multi-source data fusion positioning, which is founded on the collaboration of multiple technology sources and adheres to specific optimization criteria, becomes the linchpin for achieving optimal fusion positioning. The fusion method not only serves as the prerequisite and foundation for all-source navigation but also acts as the key and core of integrated navigation systems.
The concept of data fusion was first introduced by the renowned American systems scientist Bar-Shalom in his seminal article titled ‘Extension of the Probabilistic Data Association Filter in Multi-Target Tracking’. In this pioneering work, he proposed the probabilistic data association filter, which has since become a hallmark of multi-source information fusion technology. Over the years, multi-source data fusion methods have evolved and diversified. Currently, the primary approaches employed in this field include the switching method, the average weighted fusion method, and the adaptive weighted fusion method. Each of these methods offers unique advantages and is tailored to address specific challenges in data fusion, thereby enhancing the overall effectiveness and reliability of integrated navigation and positioning systems.
In the switching method, the optimal single positioning source is selected as the positioning means based on the performance of the different positioning sources [1]. However, ignoring the other positioning sources is wasteful and not the best choice. The average weighted fusion method does not take into account the differing performance of the positioning sources but assigns the same weight to all of them for fusion localization [2], which cannot achieve the optimal fusion effect. The adaptive weighted fusion method assigns different weights according to the characteristics of the different fusion sources to achieve the best fusion positioning [3]. The algorithms corresponding to these three methods mainly include optimal estimation algorithms, weighted or adaptive weighted fusion algorithms [4,5,6], Bayesian Filters (BF), variational Bayesian adaptive estimation [7,8], the Particle Filter (PF), statistical decision theory [9], evidence theory [10], fuzzy logic [11], etc. However, these algorithms all have specific preconditions and application scenarios, and it is necessary to establish a mathematical model between the observation information of the navigation source and the system state parameters.
In the field of dynamic positioning such as autonomous driving and vehicle navigation, the Kalman Filter (KF) has been widely used due to the introduction of physical motion models. However, KF is primarily designed for linear systems. For nonlinear systems, such as inertial navigation, the Extended KF (EKF) is suitable for weakly nonlinear objects because higher-order terms above the second order are discarded in the linearization process. To address the issue that the batch processing of the EKF random model requires storing a large amount of data, a recursive method for the random model has been proposed [12], and time-domain non-local filtering data fusion algorithms have also been included [13]. The Unscented KF (UKF) retains the accuracy achieved by the third-order term of the Taylor series, making it suitable for nonlinear object estimation, although it involves relatively high computational demands. When both the system state and measurement noise are nonlinear, the PF can be used for nonlinear systems and systems with uncertain error models. However, the PF requires a probability density that closely approximates the real density. Otherwise, the filtering effect may be poor or even divergent. To address this, the Unscented Kalman Particle Filter (UPF) algorithm has been developed [14]. However, both the PF and UPF methods face the issue of rapidly increasing computational load as the number of particles grows.
With the increasing demand from users for more comprehensive and intelligent navigation and positioning performance, methods such as Factor Graphs (FG) and neural networks have been introduced. For example, FG algorithms have been extensively applied in single GNSS positioning, GNSS/INS integrated positioning, ambiguity resolution, and robust estimation [15,16,17,18]. To enhance positioning accuracy in urban environments, FG algorithms have been optimized and improved [19,20]. These studies have demonstrated that, under certain conditions, FG algorithms exhibit higher computational accuracy and robustness than the EKF. In 1965, Magill proposed the Multiple Model Estimation (MME) method [21], which enhances the adaptability of system models to real systems and to changes in external environments under complex conditions, thereby improving the accuracy and stability of filtering estimates. The design of the model set, the selection of filters, estimation fusion, and the reset of filter inputs are all very important aspects. To enhance the fault-tolerance capability of integrated navigation systems, Carlson introduced the Federated Filtering (FF) theory in 1988 [22]. This theory has been applied in indoor navigation, robotic navigation, and vision–language tasks. Existing Artificial Intelligence (AI) algorithms mainly include fuzzy control adaptive algorithms and neural network adaptive algorithms [23,24]. For example, to address the impact of random disturbances on systems in underwater environments, RBF neural network-assisted FF has been employed for information fusion [25]. By establishing a black-box model with sufficiently accurate samples through offline training, the positioning accuracy and adaptability of the algorithm have been improved. To tackle the issues of high cost and susceptibility to weather conditions in existing high-precision satellite navigation for agricultural machinery, Yu et al. (2021) proposed a multi-sensor fusion automatic navigation method for farmland based on D-S-CNN [26]. However, these AI algorithms require extensive training data, comprehensive pre-training of the system, and significant computational resources, and often struggle to ensure real-time performance, typically being used for post-processing.
Recently, scholars from various countries have conducted extensive research on integrated navigation systems. For instance, researchers from Linköping University in Sweden proposed a combined navigation system that integrates GPS, INS, and visible light vision assistance [27]. This system utilizes the vision system and INS for positioning when GPS fails. Locata Corporation in Australia has integrated the Locata system with GPS, INS, vision systems, and Simultaneous Localization and Mapping (SLAM), achieving high-precision applications of the Locata system in both indoor and outdoor environments [28]. A communication and navigation fusion system has been applied for seamless positioning across wide-area indoor and outdoor spaces [29]. A multi-frequency ground-penetrating radar data fusion system is used for working antennas in different frequency ranges [30], while multi-sensor data fusion is employed for analyzing airspeed measurement fault detection in drones [3]. Additionally, an indoor mobile robot based on dead reckoning data fusion and fast response code detection [31], and an IoT-based multi-sensor fusion strategy for analyzing occupancy sensing systems in smart buildings have been developed [32]. Systems that integrate vision, inertial navigation, and asynchronous optical tracking with Inertial Measurement Units (IMU) have also been implemented. Furthermore, several research teams have successfully developed open-source integrated navigation systems for use by academic or industrial technical personnel [33,34,35].
Although the aforementioned studies include extensive research and testing on multi-source data fusion methods, fusion systems, and their applications, the theories and models of these methods have their specific applicable scenarios and conditions. Therefore, this paper summarizes the fundamental principles and mathematical models of multi-source data fusion methods, analyzes the advantages and disadvantages of different fusion approaches, and provides theoretical support and reference for the design, development, and application of fusion systems.

2. Multi-Source Navigation Data Fusion Processing Method

Fusion methods are primarily categorized into optimal estimation methods, filtering methods, MME, FG, FF, and other fusion approaches. The following sections will focus on elaborating the basic principles of these methods, their corresponding algorithmic models, and applicable scenarios.

2.1. Optimal Estimation Method

We estimate the parameter by utilizing randomly distributed observation vectors; specifically, we seek a mapping function that computes the estimated value. In this context, the system state is characterized by the vector x, while the measurements from the various navigation sources are denoted by y. The observation equations, which can be either nonlinear or linear, are formulated as follows:
y = A x + \varepsilon
where A is the observation matrix and \varepsilon includes the error terms caused by random observation noise and linearization. The state of the system can be estimated from the observations, and the estimated result is the vector \hat{x}.
By employing different estimation criteria to address the problem of estimating unknown parameters, various estimation methods can be derived. Based on Equation (1) and the optimization of the criterion function, methods such as Least Squares Estimation (LSE), Minimum Variance Estimation (MVE), Maximum Likelihood Estimation (MLE), and Maximum A Posteriori Estimation (MAPE) can be formulated.

2.1.1. General LSE and Weighted LSE (WLSE)

The LSE is a parameter estimation algorithm proposed by the German mathematician Carl Friedrich Gauss in 1795, initially developed to determine planetary orbits [36]. Regardless of whether the variable x in the linear model (2) possesses prior statistical information or the random distribution followed by y , the LSE criterion employs a quadratic minimum.
J(\hat{x}) = (y - A\hat{x})^T W (y - A\hat{x}) = \min
where W is an appropriate positive-definite weight matrix; when W = I, the above formula is the general LSE criterion. Then, the estimate and its error variance are as follows:
\hat{x}_{LS} = (A^T W A)^{-1} A^T W y
\Sigma_{\delta\hat{x}_{LS}} = (A^T W A)^{-1} A^T W \Sigma_\varepsilon W A (A^T W A)^{-1}
It is easy to prove that the LSE has minimum variance when W = \Sigma_\varepsilon^{-1} [36,37].
The most significant feature of this method lies in its algorithmic simplicity and independence from statistical information related to estimators and measurements. Previous researchers have implemented this algorithm in multi-source data fusion processing [38,39]. However, the LSE exclusively utilizes measurement information for current state estimation, a characteristic that paradoxically constrains its applicability. Furthermore, while the LSE optimality criterion guarantees the minimization of the total mean squared error in measurement estimates, it fails to ensure optimal estimation error for the estimator itself, consequently leading to suboptimal estimation accuracy.
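As a concrete illustration of the WLSE formula, the short sketch below (using NumPy; the two-sensor scenario and all numerical values are hypothetical, not from the text) computes the weighted estimate for a single scalar state observed by two sensors of different precision:

```python
import numpy as np

def weighted_lse(A, y, W=None):
    """Weighted LSE: x_hat = (A^T W A)^{-1} A^T W y.

    With W = I this reduces to the general LSE; choosing
    W = Sigma_eps^{-1} gives the minimum-variance weights."""
    A = np.asarray(A, dtype=float)
    y = np.asarray(y, dtype=float)
    if W is None:
        W = np.eye(len(y))
    N = A.T @ W @ A                          # normal matrix A^T W A
    x_hat = np.linalg.solve(N, A.T @ W @ y)  # avoids an explicit inverse
    return x_hat

# illustrative example: two sensors observe the same scalar position;
# the second sensor is four times more precise, so it gets weight 4
A = np.array([[1.0], [1.0]])
y = np.array([10.2, 9.8])
W = np.diag([1.0, 4.0])
x_hat = weighted_lse(A, y, W)   # -> [9.88]
```

Because the second sensor receives four times the weight, the fused estimate (9.88) lies closer to its reading, consistent with the minimum-variance choice W = \Sigma_\varepsilon^{-1}.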

2.1.2. MLE

Let the conditional probability density of y with respect to x be f(y \mid x), and let the MLE criterion be the following:
f(y \mid x)\big|_{x = \hat{x}_{ML}} = \max
This means that f(y \mid x) attains its maximum value at x = \hat{x}_{ML}, which is equivalent to the following:
\partial f(y \mid x) / \partial x \big|_{x = \hat{x}_{ML}} = 0
Obviously, the solution of Equation (5) depends on f(y \mid x). Different conditional probability density functions lead to different estimation formulas, so a universal formula cannot be derived. When f(y \mid x) follows a normal distribution, for example:
f(y \mid x) = (2\pi)^{-n/2} \left|\Sigma_{y|x}\right|^{-1/2} \exp\left\{ -\tfrac{1}{2} \left(y - E(y \mid x)\right)^T \Sigma_{y|x}^{-1} \left(y - E(y \mid x)\right) \right\}
the expectation and variance under the normal distribution condition are as follows [36,37]:
E(y \mid x) = E(y) + \Sigma_{yx} \Sigma_x^{-1} \left(x - E(x)\right), \quad \Sigma_{y|x} = \Sigma_y - \Sigma_{yx} \Sigma_x^{-1} \Sigma_{xy}
From the above derivation, it can be concluded that MLE does not require prior distribution information about x. However, when this prior distribution information is known, f(y \mid x) should be derived strictly based on Equation (5).

2.1.3. MAPE

The MAPE estimate \hat{x}_{MA} makes f(x \mid y) reach its maximum value under the condition \hat{x} = \hat{x}_{MA}. After a similar derivation, the equivalent MAPE criterion is as follows [37]:
f(x \mid y)\big|_{\hat{x} = \hat{x}_{MA}} = \max
According to the conditional probability formula, it is easy to prove that (8) is equivalent to the following:
f(x \mid y)\big|_{\hat{x} = \hat{x}_{MA}} = \frac{f(x, y)}{f_2(y)}\bigg|_{\hat{x} = \hat{x}_{MA}} = \max \;\Longleftrightarrow\; f(x, y)\big|_{\hat{x} = \hat{x}_{MA}} = \max
That is, the MAPE employs the joint probability density function of the parameters and the observed data as its maximization criterion.
When both y and x are normally distributed, the conditional probability density function is as follows:
f(x \mid y) = (2\pi)^{-t/2} \left|\Sigma_{x|y}\right|^{-1/2} \exp\left\{ -\tfrac{1}{2} \left(x - E(x \mid y)\right)^T \Sigma_{x|y}^{-1} \left(x - E(x \mid y)\right) \right\}
Differentiating Equation (10) and setting the derivative to zero yields the MAPE estimate and its error variance:
\hat{x}_{MA} = E(x \mid y), \quad \Sigma_{\delta\hat{x}_{MA}} = \Sigma_{x|y}
When x is uncorrelated with y, the estimated result is equal to the prior information of the parameter.
Compared with MLE, MAPE places greater emphasis on the prior information of parameters. Therefore, MAPE becomes meaningless when parameters are non-random quantities. In practice, MAPE can be understood as a modification of parameters through observational data, where the extent of this modification is determined by the variance of the observations and their correlation with the parameters.
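To make the role of the prior concrete, the following minimal sketch (the scalar Gaussian setting, the function name, and all parameter values are illustrative assumptions, not from the text) computes the MAP estimate \hat{x}_{MA} = E(x \mid y) as a precision-weighted average of the prior mean and the observation:

```python
def gaussian_map(mu_prior, var_prior, y, var_obs):
    """MAP estimate for a scalar Gaussian prior x ~ N(mu_prior, var_prior)
    observed through y = x + eps, eps ~ N(0, var_obs).

    The posterior is Gaussian, so the MAP estimate equals the posterior
    mean: a precision-weighted average of prior and observation."""
    w_prior = 1.0 / var_prior      # prior precision
    w_obs = 1.0 / var_obs          # observation precision
    x_map = (w_prior * mu_prior + w_obs * y) / (w_prior + w_obs)
    var_post = 1.0 / (w_prior + w_obs)
    return x_map, var_post

# weak prior: the observation dominates
x1, _ = gaussian_map(0.0, 100.0, 5.0, 1.0)   # close to 5
# strong prior: the prior mean dominates
x2, _ = gaussian_map(0.0, 0.01, 5.0, 1.0)    # close to 0
```

With a weak prior the observation dominates, and with a strong prior the estimate stays near the prior mean, matching the interpretation of MAPE as a modification of the parameters by the observational data.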

2.1.4. MVE

The MVE is an optimization criterion that minimizes the variance of the estimation error to obtain \hat{x}_{MV}:
J(\hat{x}) = E\left[ (x - \hat{x}_{MV})^T (x - \hat{x}_{MV}) \right] = \iint (x - \hat{x}_{MV})^T (x - \hat{x}_{MV}) f(x, y)\, dx\, dy = \min
According to the conditional and marginal probability density functions, Equation (12) is equivalent to the following:
h(\hat{x}_{MV}) = \int (x - \hat{x}_{MV})^T (x - \hat{x}_{MV}) f(x \mid y)\, dx = \min
The MVE and its estimation error variance are as follows:
\hat{x}_{MV} = E(x \mid y), \quad \Sigma_{\delta\hat{x}_{MV}} = \int \Sigma_{x|y} f_2(y)\, dy
where f_2(y) is the marginal probability density function of the observation. When both x and y are normally distributed, the MVE is completely equivalent to the MAPE.

2.1.5. Linear Minimum Variance Estimation (LMVE)

MLE, MAPE, and MVE all necessitate the joint probability density or conditional probability density function of y and x . Their estimation formulas hinge on distributional information and do not necessarily manifest as linear combinations of the observed values. In contrast, the LSE dispenses with the need for distributional information of the parameters and observed values. Instead, it formulates parameter estimation as a linear combination of the observed values, thus qualifying as a linear estimation technique. As the name implies, the LMVE is also a linear estimation method. It does not require any specific distributional information of the observed values and parameters, relying only on their statistical properties.
Let LMVE be estimated as the following:
\hat{x}_{LMV} = \alpha + \beta y
where \alpha is a constant vector and \beta is a constant matrix. The estimation error and its variance are as follows:
\delta\hat{x}_{LMV} = x - \hat{x}_{LMV} = x - \alpha - \beta y
\Sigma_{\delta\hat{x}_{LMV}} = \Sigma_x + \beta \Sigma_y \beta^T - \Sigma_{xy} \beta^T - \beta \Sigma_{yx}
It is worth noting that the LMVE takes the mean square error matrix M_{\delta\hat{x}_{LMV}} as its estimation criterion:
M_{\delta\hat{x}_{LMV}} = E\left[ \delta\hat{x}_{LMV}\, \delta\hat{x}_{LMV}^T \right] = E(\delta\hat{x}_{LMV}) E(\delta\hat{x}_{LMV})^T + \Sigma_{\delta\hat{x}_{LMV}} = E(\delta\hat{x}_{LMV}) E(\delta\hat{x}_{LMV})^T + \left(\beta - \Sigma_{xy}\Sigma_y^{-1}\right) \Sigma_y \left(\beta - \Sigma_{xy}\Sigma_y^{-1}\right)^T + \Sigma_x - \Sigma_{xy}\Sigma_y^{-1}\Sigma_{yx}
Then,
M_{\delta\hat{x}_{LMV}} = \min \;\Longleftrightarrow\; E(\delta\hat{x}_{LMV}) = 0 \ \text{and} \ \beta = \Sigma_{xy}\Sigma_y^{-1}, \quad \text{in which case } \mathrm{var}(\delta\hat{x}_{LMV}) = \min
Therefore, the LMVE is an unbiased estimate, and the variance of the estimation error is minimal under the premise of unbiasedness. However, the minimum variance of the estimation error is only a necessary condition for the LMVE. Inserting Equation (18) into Equations (15) and (16), the LMVE estimate and its error variance are as follows:
\hat{x}_{LMV} = E(x) + \Sigma_{xy}\Sigma_y^{-1}\left(y - E(y)\right) = \mu_x + \Sigma_{xy}\Sigma_y^{-1}\left(y - E(y)\right)
\Sigma_{\delta\hat{x}_{LMV}} = \Sigma_x - \Sigma_{xy}\Sigma_y^{-1}\Sigma_{yx}
It is noteworthy that Equation (19) is derived under the premise of knowing the statistical properties (namely, the mathematical expectation and variance) of y and x, and is independent of their distribution information. Therefore, when both x and y follow a normal distribution, the LMVE is equivalent to the MVE and MAPE.
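A small simulation can illustrate Equation (19): the sketch below (a hypothetical linear scalar model; the simulated data and all values are illustrative assumptions) estimates the required statistics from samples and forms the LMVE without using any distributional information:

```python
import numpy as np

rng = np.random.default_rng(0)

# simulate jointly distributed (x, y): y = 2x + noise
n = 100_000
x = rng.normal(3.0, 1.0, n)
y = 2.0 * x + rng.normal(0.0, 0.5, n)

# sample moments stand in for the known statistics E(x), E(y), Sigma_xy, Sigma_y
mu_x, mu_y = x.mean(), y.mean()
cov = np.cov(x, y)                 # [[Sigma_x, Sigma_xy], [Sigma_yx, Sigma_y]]
beta = cov[0, 1] / cov[1, 1]       # Sigma_xy Sigma_y^{-1} in the scalar case

def lmve(y_obs):
    """LMVE: x_hat = mu_x + Sigma_xy Sigma_y^{-1} (y - mu_y)."""
    return mu_x + beta * (y_obs - mu_y)

x_hat = lmve(y)
err_var = np.var(x - x_hat)        # approaches Sigma_x - Sigma_xy Sigma_y^{-1} Sigma_yx
```

The empirical residual variance approaches \Sigma_x - \Sigma_{xy}\Sigma_y^{-1}\Sigma_{yx}, the minimum error variance given in Equation (19), even though only first- and second-order statistics were used.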

2.1.6. Comparison of Several Different Optimal Estimation Methods

When x and y are normally distributed, Table 1 presents various parameter estimates based on different criteria.
Table 2 presents the advantages and disadvantages of several optimal estimation methods.

2.2. Filtering Algorithm

The optimization method utilizes the observations from the current epoch to estimate the system state, hence the localization results are significantly influenced by the current observation errors. In 1960, R. E. Kalman first proposed the KF, which employs a recursive approach to avoid the accumulation of large amounts of data and designs the filter using the state–space method in the time domain [41]. As a result, the KF is particularly well-suited for estimating multi-dimensional random processes. Depending on the differences in system and state equations, filtering algorithms are primarily categorized into the following types.

2.2.1. Standard KF

Assuming that both the state motion and observation models are linear, the estimated state x_k at epoch t_k is driven by the system noise sequence w_{k-1}, and the driving mechanism is described by the following equation of state:
x_k = \Phi_{k,k-1} x_{k-1} + \Gamma_{k-1} w_{k-1}
where \Phi_{k,k-1} is the one-step transition matrix from time t_{k-1} to time t_k, and \Gamma_{k-1} is the system noise-driving matrix. Let z_k be the observation of x_k, where z_k and x_k satisfy a linear relationship; the observation equation is as follows:
z_k = H_k x_k + v_k
where H_k is the observation matrix and v_k is the observation noise. The sequences w_k and v_k simultaneously satisfy:
E(w_k) = 0, \quad \mathrm{cov}(w_k, w_j) = E(w_k w_j^T) = Q_k \delta_{kj}
E(v_k) = 0, \quad \mathrm{cov}(v_k, v_j) = E(v_k v_j^T) = R_k \delta_{kj}
\mathrm{cov}(w_k, v_j) = E(w_k v_j^T) = 0
where Q_k is the variance matrix of the system noise sequence, and R_k is the variance matrix of the observation noise sequence.
If Q_k is nonnegative definite and R_k is positive definite, the estimate \hat{x}_k can be solved from the KF equations. The KF calculation consists of two loops: a filter gain calculation loop (the time update process) and a filter calculation loop (the measurement update process). As shown in Figure 1, areas with green shading represent the estimates and their covariance matrix.
As illustrated in Figure 1, the KF necessitates the provision of initial values. Experience has demonstrated that the recursive computation method is the most significant advantage of the KF. This algorithm can be implemented on a computer without the need to store large volumes of measurement data over time, which is why the KF has been extensively adopted in engineering applications. The KF explicitly stipulates that both the system driving noise and the measurement noise must be white noise. However, in reality, these two types of noise in some systems are often colored noise. Consequently, variants of the KF under colored noise conditions have been proposed, primarily including the KF when the system noise is colored and the measurement noise is white, and the KF when the system noise is white and the measurement noise is colored [42].
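The two calculation loops can be sketched as a single predict/update step (a generic discrete-time sketch; the constant-velocity example, matrices, and measurements are illustrative assumptions, not a specific system from the text):

```python
import numpy as np

def kf_step(x, P, z, Phi, Gamma, Q, H, R):
    """One KF cycle: time update (predict) then measurement update.

    x, P  : previous estimate and its covariance
    z     : current observation
    Phi   : state transition matrix Phi_{k,k-1}
    Gamma : system noise-driving matrix
    Q, R  : system / observation noise covariances
    H     : observation matrix
    """
    # time update
    x_pred = Phi @ x
    P_pred = Phi @ P @ Phi.T + Gamma @ Q @ Gamma.T
    # measurement update
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# illustrative constant-velocity model with position-only measurements
dt = 1.0
Phi = np.array([[1.0, dt], [0.0, 1.0]])
Gamma = np.eye(2)
Q = 1e-4 * np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[0.25]])
x, P = np.zeros(2), np.eye(2)
for z in [0.9, 2.1, 2.9, 4.2]:
    x, P = kf_step(x, P, np.array([z]), Phi, Gamma, Q, H, R)
```

Only the previous estimate and covariance are carried between epochs, which is precisely the recursive property that avoids storing historical measurement data.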
The covariance matrix P_k calculated according to the filtering equations tends towards zero or a steady-state value as the number of observation epochs k increases. When the deviation between the estimated value and the true value grows increasingly large, the filter gradually loses its estimation capability. This phenomenon is known as filter divergence. To mitigate filter divergence, numerous scholars have proposed various methods, including information filtering [43], square-root filtering [44], UDU^T decomposition filtering [45], adaptive filtering [46], suboptimal filtering [47], and H∞ filtering [43]. Each of these methods has its own advantages, disadvantages, and applicable scope, and further details can be found in the relevant literature.

2.2.2. Extended Kalman Filter (EKF)

The KF is based on a linear mathematical model, meaning it functions effectively only when both the system and observation equations are linear. However, in many engineering applications, such as INS of aircraft and ships, satellite navigation, and industrial control systems, the mathematical models are often nonlinear. This nonlinearity renders the KF unsuitable for direct application. To address this issue, the EKF can be employed. The EKF leverages Taylor series expansion to linearize the nonlinear system, thereby transforming the original nonlinear system model into a linearized state and observation equations.
The nonlinear equation of state and observation equation corresponding to the EKF are as follows:
\dot{x}_k = f(x_k) + g(w_k)
z_k = h(x_k) + v_k
where f(x_k) and g(w_k) are nonlinear functions of the state vector x_k and the system noise vector w_k, respectively, and h(x_k) is the nonlinear observation function.
The two equations in Formula (25) are linearized as follows:
\delta x_k = F_{k-1} \delta x_{k-1} + G_{k-1} w_{k-1}
\delta z_k = H_k \delta x_k + v_k
After linearization, the solution can be obtained using the standard KF.
The EKF is suitable for weakly nonlinear systems because its linearization process retains only the first-order terms while discarding higher-order terms above the second order. To address the issue that batch processing of the EKF stochastic model requires storing large amounts of data, a recursive approach for the stochastic model has been proposed [12], along with a time-domain non-local filtering data fusion algorithm [13]. Wang et al. (2024) leveraged the advantage of mixture correntropy in dealing with Non-Gaussian Noise (NGN) to investigate the robust state estimation problem for discrete-time systems subject to non-Gaussian process and measurement noises (PMNs) [48]. Wang et al. (2025) addressed the state estimation challenge in Wireless Localization (WL) when confronted with time-varying skewness measurement noise arising from variable non-line-of-sight (NLOS) conditions and imperfect synchronization [49]. They employed a Shape Parameter Mixture (SPM) distribution and developed a corresponding EKF; the proposed algorithm outperforms existing counterparts in the presence of time-varying skewness noise.
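A minimal EKF sketch under these assumptions (the range-measurement example, the models, and the Jacobians are illustrative, not the systems discussed above) shows how the nonlinear models are linearized at each step:

```python
import numpy as np

def ekf_step(x, P, z, f, h, F_jac, H_jac, Q, R):
    """One EKF cycle: propagate through the nonlinear models f and h,
    linearizing with the Jacobians F_jac and H_jac at each step."""
    # time update through the nonlinear state model
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # measurement update through the nonlinear observation model
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# illustrative example: range measurement of a static 2-D position,
# h(x) = ||x||, which is nonlinear in the state
f = lambda x: x                                      # static state model
F_jac = lambda x: np.eye(2)
h = lambda x: np.array([np.hypot(x[0], x[1])])
H_jac = lambda x: np.array([[x[0], x[1]]]) / np.hypot(x[0], x[1])
Q, R = 1e-6 * np.eye(2), np.array([[0.01]])

x, P = np.array([1.0, 1.0]), np.eye(2)
x, P = ekf_step(x, P, np.array([5.0]), f, h, F_jac, H_jac, Q, R)
```

The Jacobians re-linearize the models around the current estimate at every epoch, which is why the approximation degrades as the nonlinearity of the estimated object strengthens.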

2.2.3. Unscented Kalman Filter (UKF)

The EKF discards higher-order terms above the second order and retains only the linear terms, so the EKF algorithm is only suitable for the estimation of weakly nonlinear objects. The stronger the nonlinearity of the estimated object, the larger the resulting estimation error, which may even cause the filter to diverge. The UKF is an effective method for solving nonlinear system problems. In 1995, S.J. Julier and J.K. Uhlmann proposed the UKF algorithm to address the filtering problem of strongly nonlinear objects [50], and it was subsequently further refined by E.A. Wan and R. van der Merwe [51].
The core of the UKF is the use of the Unscented Transformation (UT) to determine the mapping relationship between variables, which is equivalent to retaining the accuracy achieved by the third-order term of the Taylor series, making it suitable for nonlinear object estimation.
Consider a discrete nonlinear system, i.e., the following:
x_k = f(x_{k-1}) + w_k
z_k = h(x_k) + v_k
where x_k is the state vector, w_k is the system noise vector, z_k is the observed vector, and v_k is the observation noise vector.
A series of sampling points are selected near x k 1 . The mean and covariance of these sample points are x ^ k 1 and P k 1 , respectively. The corresponding transform sampling points are generated through the nonlinear system. The predicted mean and covariance can be obtained by calculating these transform sampling points.
Let the state variable be n-dimensional; then the 2n + 1 sampling points and their weights are as follows:
\xi_0 = \hat{x}_{k-1}
\xi_i = \hat{x}_{k-1} + \left(\sqrt{(n+\kappa) P_{k-1}}\right)_i, \quad i = 1, 2, \dots, n
\xi_{i+n} = \hat{x}_{k-1} - \left(\sqrt{(n+\kappa) P_{k-1}}\right)_i, \quad i = 1, 2, \dots, n
The weights corresponding to \xi_i (i = 0, 1, \dots, 2n) are the following:
W_i^{(m)} = W_i^{(c)} = \begin{cases} \kappa/(n+\kappa), & i = 0 \\ 1/\big(2(n+\kappa)\big), & i \neq 0 \end{cases}
where \kappa is a proportional coefficient that can be used to adjust the distance between the sigma points and \hat{x}_{k-1}; this coefficient only affects the higher-order moment deviations above the second order. \left(\sqrt{(n+\kappa) P_{k-1}}\right)_i denotes the i-th row or column of the square-root matrix. There are different ways to determine W_i^{(m)}, and some studies have made improvements, such as UKF algorithms for additive and non-additive noise cases [52]. The specific process of the UKF algorithm can be found in the relevant literature. Practice shows that the UKF is suitable for nonlinear object estimation, but its computational load is relatively large.
Due to the strong advantages of the UKF in handling nonlinear systems, it has also been applied in multi-source data fusion. Examples include autonomous multi-level positioning based on smartphone-integrated sensors and pedestrian indoor networks [53], as well as the use of the UKF in mobile mapping applications based on low-cost GNSS/IMU under demanding conditions [54].
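The sigma-point construction and weights above can be sketched as follows (a minimal unscented-transform example; the quadratic test function and the value of κ are illustrative assumptions):

```python
import numpy as np

def sigma_points(x_mean, P, kappa=0.0):
    """Generate the 2n+1 symmetric sigma points and their weights."""
    n = len(x_mean)
    S = np.linalg.cholesky((n + kappa) * P)   # square root of (n+kappa)P
    pts = [x_mean]
    for i in range(n):
        pts.append(x_mean + S[:, i])          # xi_i
        pts.append(x_mean - S[:, i])          # xi_{i+n}
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def unscented_transform(f, x_mean, P, kappa=0.0):
    """Propagate a mean and covariance through a nonlinear map f."""
    pts, w = sigma_points(x_mean, P, kappa)
    y = np.array([f(p) for p in pts])
    y_mean = w @ y
    d = y - y_mean
    P_y = (w[:, None] * d).T @ d
    return y_mean, P_y

# propagate through a mildly nonlinear map: first component squared
f = lambda p: np.array([p[0] ** 2, p[1]])
m, Pm = np.array([1.0, 0.0]), 0.1 * np.eye(2)
y_mean, P_y = unscented_transform(f, m, Pm, kappa=1.0)
```

For the quadratic map x², the UT mean equals the exact value \mu^2 + \sigma^2 = 1.1, illustrating the higher-order accuracy retained by the transform compared with a first-order linearization.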

2.2.4. Particle Filter (PF)

The PF was initially proposed by Metropolis and Wiener as early as 1940 [55]. The PF is a method that approximates the probability density function by propagating a set of random samples in the state space. It replaces the integral operation with the sample mean, thereby obtaining the minimum-variance estimate of the system state. These random samples are referred to as particles, hence the name Particle Filter. As the number of samples N → ∞, the method can approximate any form of probability density distribution.
The PF directly calculates the conditional mean based on the probability density, which is the minimum variance estimate. This probability density can be approximated by the EKF or the UKF. The estimate is determined by the weighted average of sample values (particles) from multiple different distributions. Each particle computation requires a complete EKF or UKF calculation. Therefore, the PF is suitable for estimation under non-linear system and measurement conditions, offering higher estimation accuracy than using the EKF or UKF alone, but with a significantly higher computational load compared to the EKF and UKF.
PF is known by a variety of names, each reflecting its diverse applications and theoretical underpinnings. For instance, it is referred to as Sequential Importance Sampling (SIS) [56], Bootstrap Filtering [57], the Condensation Algorithm [58], Interacting Particle Approximations [59], Monte-Carlo Filtering [60], and Sequential Monte-Carlo filtering [61,62].
According to the relevant literature, the general procedure for executing a PF is as follows:
Step 1: Initial value determination
The initial particle value χ 0 i i = 1 , 2 , N is generated according to the prior probability density p x 0 of the initial state.
For k = 1, 2, 3, … Execute,
Step 2: Select the recommended probability density q x k / x 0 k i , z 0 k . According to this recommended density, a particle χ k i at time k, i = 1, 2, … N is generated, as a secondary sample of the original particle.
Calculated weight coefficient:
w k i = w k 1 i p z k / χ k i p χ k i / χ k 1 i q χ k i / χ 0 i , z 0 k
w 0 i = p χ 0 i
w ˜ k i = w k i j = 1 N w k i , i = 1 , 2 , n
Step 3: the PF algorithm, such as the SIR method or residual secondary sampling method, is used to perform secondary sampling on the original particle χ k i i = 1 , 2 , n and generate secondary sampling to update the particle χ k + i i = 1 , 2 , n .
Step 4: Calculate the filtered value from the resampled particles:
$$\hat{x}_k = \frac{1}{N} \sum_{i=1}^{N} \chi_k^{+i}$$
The PF is mainly used for data fusion and integrity monitoring [63,64,65,66]. Because it imposes no linearity or Gaussianity assumptions, the PF can be applied to nonlinear systems with non-Gaussian noise, as well as to systems with uncertain error models.
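The four steps above can be sketched as a bootstrap PF. The state model, noise levels, and particle count below are illustrative assumptions for a scalar random walk, not values taken from the literature reviewed here:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf(z, n_particles=500, q=0.1, r=0.5):
    """Bootstrap PF for x_k = x_{k-1} + w_k, z_k = x_k + v_k,
    with w ~ N(0, q^2) and v ~ N(0, r^2) (illustrative model)."""
    # Step 1: draw the initial particles from the prior p(x_0)
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for zk in z:
        # Step 2: propagate the particles through the state model;
        # the transition prior serves as the proposal density here
        particles = particles + rng.normal(0.0, q, n_particles)
        # weight each particle by the likelihood p(z_k | x_k^i) ...
        w = np.exp(-0.5 * ((zk - particles) / r) ** 2)
        w /= w.sum()  # ... and normalize the weights
        # Step 3: SIR resampling to counteract particle degeneracy
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
        # Step 4: the estimate is the mean of the resampled particles
        estimates.append(particles.mean())
    return np.array(estimates)

print(bootstrap_pf(np.array([0.1, 0.3, 0.2, 0.5, 0.4])))
```

Note that with the transition prior as the proposal, the weight update collapses to the measurement likelihood alone, which is exactly the simplification the bootstrap variant makes.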

2.2.5. UKF-Based Particle Filter (UPF)

The core of the PF is the selection of the proposal probability density. The closer the proposal density is to the true density, the better the filtering effect; conversely, if there is a significant difference between the proposal density and the true density, the filtering effect deteriorates, and divergence may even occur. If the PF is combined with a UKF and the proposal density is determined by the UKF, the problem of particle degeneracy can be alleviated, and the latest measurement information can be incorporated when updating the particles, which is beneficial in regions of high particle likelihood. The method that combines the PF with the UKF is called the UPF. However, the PF and UPF share a common problem: as the number of particles increases, the computational complexity increases sharply.
To aid in selecting an appropriate filtering algorithm, Table 3 lists the system models, computational complexity, accuracy, and applicable scenarios of the KF, EKF, UKF, PF, and UPF.

2.2.6. Federated Filtering (FF)

The filtering methods introduced above belong to Centralized KF (CKF). CKF has problems such as high state dimension, heavy computational burden, and poor fault tolerance. Another filtering method is Decentralized KF (DKF). DKF has been developed for over 20 years. As early as 1971, Pearson (1971) proposed the concept of dynamic decomposition and a two-level structure for state estimation [67]. Subsequently, Speyer (1979), Willsky et al. (1982), Kerr (1987), and Carlson (1988) made contributions to DKF techniques [22,68,69,70]. Among the many DKF methods, the FF proposed by Carlson has been valued for its design flexibility, low computational load, and good fault-tolerance. Now, the FF has been selected as the basic algorithm for the U.S. Air Force’s fault-tolerant navigation system, the Common Kalman Filter program [71].
The FF proposed by Carlson is designed to address the following issues:
(1)
The filter should have good fault tolerance. When one or several navigation subsystems fail, it should be able to easily detect and isolate the faults and quickly recombine the remaining normal navigation subsystems (reconfiguration) to continue providing the required filtering solution.
(2)
The filtering accuracy should be high.
(3)
The fusion algorithm from local filtering to global filtering should be simple, with low computational load and minimal data communication, to facilitate real-time implementation of the algorithm.
The FF is a two-level filter, as shown in Figure 2. The Carlson FF introduces a master filter. Since the master filter does not accept measurement inputs, it performs only time updates and no measurement updates. Additionally, a feedback control switch from the master filter to the sub-filters is added. The entire filtering system consists of $\bar{N} = N + 1$ filters, with $N$ sub-filters providing local estimates ($\hat{X}_k^{ci}$ and $P_k^{ci}$) to the master filter. These local estimates are optimally combined with the master filter estimates ($\hat{X}_k^{m}$ and $P_k^{m}$) to obtain the global estimates ($\hat{X}_k^{g}$ and $P_k^{g}$).
If there are $N$ local state estimates $\hat{X}_1, \hat{X}_2, \ldots, \hat{X}_N$ with corresponding covariance matrices $P_{11}, P_{22}, \ldots, P_{NN}$, and the local estimates are mutually uncorrelated, i.e., $P_{ij} = 0 \ (i \neq j)$, then the global optimal estimate can be expressed as follows:
$$\hat{X}_g = P_g \sum_{i=1}^{N} P_{ii}^{-1} \hat{X}_i$$
$$P_g = \left( \sum_{i=1}^{N} P_{ii}^{-1} \right)^{-1}$$
Let the FF have $N$ ($N > 2$) sub-filters, and let the output of sub-filter $i$ be the following:
$$\hat{X}_i = X + \tilde{X}_i, \quad i = 1, 2, \ldots, N$$
where $X$ represents the common state of all sub-filters, with dimension $n$, and $\tilde{X}_i$ denotes the estimation error of the $i$-th sub-filter. If the sub-filter operates normally, $\tilde{X}_i$ is white noise.
The measurement equation for X can be formulated based on the outputs of the N sub-filters.
$$Z = H X + V$$
Thus,
$$Z = \begin{bmatrix} \hat{X}_1 \\ \hat{X}_2 \\ \vdots \\ \hat{X}_N \end{bmatrix}, \quad H = \begin{bmatrix} I_{n \times n} \\ I_{n \times n} \\ \vdots \\ I_{n \times n} \end{bmatrix}, \quad V = \begin{bmatrix} \tilde{X}_1 \\ \tilde{X}_2 \\ \vdots \\ \tilde{X}_N \end{bmatrix}$$
Assuming that each sub-filter operates normally and the estimation errors are mutually uncorrelated, we have the following:
$$E[V] = 0, \qquad R = E[V V^{\mathrm{T}}] = \mathrm{diag}\left( P_{11}, P_{22}, \ldots, P_{NN} \right)$$
Here, $P_{ii} = E[\tilde{X}_i \tilde{X}_i^{\mathrm{T}}]$ is the covariance matrix of the estimation error of sub-filter $i$.
According to the literature, the Markov estimate of the common state X is the following:
$$P_g = \left( H^{\mathrm{T}} R^{-1} H \right)^{-1} = \left( \sum_{i=1}^{N} P_{ii}^{-1} \right)^{-1}$$
$$\hat{X}_g = \left( H^{\mathrm{T}} R^{-1} H \right)^{-1} H^{\mathrm{T}} R^{-1} Z = \left( \sum_{i=1}^{N} P_{ii}^{-1} \right)^{-1} \sum_{i=1}^{N} P_{ii}^{-1} \hat{X}_i = P_g \sum_{i=1}^{N} P_{ii}^{-1} \hat{X}_i$$
The physical meaning of this result is evident: if the estimation accuracy of $\hat{X}_i$ is poor, that is, $P_{ii}$ is large, then its contribution to the global estimate is relatively small. The discussion above pertains to the fusion algorithm when the estimates of the sub-filters are uncorrelated; for the fusion algorithm when the estimates are correlated, one can refer to the relevant literature.
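The information-weighted fusion of uncorrelated local estimates described above can be sketched as follows; the two local estimates and their covariances are illustrative values, not taken from any cited system:

```python
import numpy as np

def federated_fuse(x_list, P_list):
    """Fuse N mutually uncorrelated local estimates (X_i, P_ii):
    P_g = (sum P_ii^-1)^-1 and X_g = P_g * sum(P_ii^-1 X_i)."""
    info = sum(np.linalg.inv(P) for P in P_list)   # total information
    Pg = np.linalg.inv(info)
    xg = Pg @ sum(np.linalg.inv(P) @ x for x, P in zip(x_list, P_list))
    return xg, Pg

# Two sub-filters estimating a 2-state vector; the less accurate one
# (larger covariance) contributes less to the global estimate.
x1, P1 = np.array([1.0, 0.0]), np.eye(2) * 0.5
x2, P2 = np.array([3.0, 0.0]), np.eye(2) * 2.0
xg, Pg = federated_fuse([x1, x2], [P1, P2])
print(xg, np.diag(Pg))  # global estimate lies closer to x1
```

The global covariance is smaller than either local covariance, reflecting the information gained by fusion.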

2.2.7. Comparison of Different Filtering Methods

To more clearly illustrate the performance of the various filtering methods introduced above, Table 4 provides a summary of the information.

2.3. Multiple Model Estimation (MME)

MME estimates the system state as a weighted combination of the estimates produced by a bank of filters with different parameter values, thereby adapting to unknown or uncertain system parameters [21]. In 1970, Ackerson first applied MME to jump environments, modeling the system mode as a finite-state Markov chain that can switch between different modes. Since then, MME methods have been widely used in many fields under various names [72], such as multi-model adaptive estimation, parallel processing algorithms, filter bank methods, segmented filters, and improved Gaussian sum filters.
Any real system involves uncertainty to some degree, manifested both inside and outside the system. Internally, the structure and parameters of the mathematical model describing the controlled object cannot be fully known in advance by the designer. Externally, the influence of the environment on the system appears as disturbances, which are often unpredictable and may be deterministic or random. In addition, measurement noise enters the system through the various measurement feedback paths in the same way as disturbances, and the statistical properties of these random disturbances and noises are usually unknown. The task is therefore to design the control sequence so that the specified performance index is as close to optimal as possible, even though the mathematical models of the controlled object and the disturbances are not fully determined. An adaptive control system is essentially a nonlinear system, which is very difficult to analyze. MME theory employs N linear stochastic systems to address this nonlinear adaptive control problem, which improves the adaptability of the system models to real systems and external environmental changes and enhances the accuracy and reliability of the filter estimates.
Assume the linear stochastic system model is as follows:
$$x_{k+1} = \Phi(\theta) x_k + B(\theta) u_k + \Gamma(\theta) w_k, \qquad z_k = H(\theta) x_k + v_k$$
where $x_k$ is the state vector of the system, $z_k$ is the output vector of the system, and $\Phi(\theta)$, $B(\theta)$, $\Gamma(\theta)$, $H(\theta)$ are system matrices of appropriate dimensions. $u_k$ is the control input, while $w_k$ and $v_k$ are zero-mean white noise sequences with covariance matrices $Q(\theta)$ and $R(\theta)$, respectively.
The nonlinear system is linearized at N operating points, allowing the original nonlinear system to be approximated by N sets of linear equations. The parameter θ θ 1 , θ 2 , θ N can take N discrete values, thereby forming a combination of N linear systems that serve as an approximation of the nonlinear system model.
By representing any admissible value of the parameter $\theta$ as $\theta_i$, the system matrices in the model above are redefined as follows:
$$\Phi(\theta_i) = \Phi_i, \quad B(\theta_i) = B_i, \quad \Gamma(\theta_i) = \Gamma_i, \quad H(\theta_i) = H_i, \qquad i = 1, 2, \ldots, N$$
Based on the aforementioned notation, N discrete random linear systems can be characterized as follows:
$$x_{k+1}^{i} = \Phi_i x_k + B_i u_k + \Gamma_i w_k, \qquad z_k^{i} = H_i x_k + v_k^{i}$$
Figure 3 illustrates the fundamental principle of the MME method, wherein N discrete values constitute N distinct systems.
As illustrated in Figure 3, a parallel array of filters is implemented to accommodate different operational modes of the stochastic hybrid system. Each filter processes both the control input and measurement data from the system, generating output residuals and state estimates based on individual models. The system incorporates model probability design for each corresponding filter, and the comprehensive state estimation is derived from the weighted average of all filter state estimates. Table 5 summarizes the performance characteristics of the MME method.
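A minimal sketch of the weighted combination just described, assuming the model probabilities are Bayes-updated from Gaussian innovation likelihoods; the filter outputs and residuals below are illustrative placeholders rather than the result of running an actual filter bank:

```python
import numpy as np

def mme_estimate(x_hats, residuals, S_list, priors):
    """Combine N parallel filter estimates by model probability.
    The likelihood of model i is evaluated from its innovation r_i
    with innovation covariance S_i; the probabilities are Bayes-updated
    priors, and the fused state is the probability-weighted average."""
    likes = np.array([
        np.exp(-0.5 * r @ np.linalg.inv(S) @ r)
        / np.sqrt(np.linalg.det(2.0 * np.pi * S))
        for r, S in zip(residuals, S_list)
    ])
    probs = likes * priors
    probs /= probs.sum()                       # normalized model probabilities
    x_fused = sum(p * x for p, x in zip(probs, x_hats))
    return x_fused, probs

x_hats = [np.array([1.0]), np.array([2.0])]
residuals = [np.array([0.1]), np.array([1.5])]  # model 1 fits the data better
S_list = [np.eye(1), np.eye(1)]
x, p = mme_estimate(x_hats, residuals, S_list, np.array([0.5, 0.5]))
print(p)  # model 1 receives the larger probability
```

The fused estimate is pulled toward the model whose innovation is most consistent with its assumed statistics, which is the adaptation mechanism of MME.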
To date, the MME method has been extensively utilized in diverse fields, including integrated navigation data fusion, self-calibration, and target tracking [74,75,76].

2.4. Factor Graph (FG) Methods

In the filtering method mentioned earlier, the state of the system at the current moment is only related to the observations at the current moment and the navigation state at the previous moment. However, in practical applications, some observations may be delayed, and certain position solutions need to be realized over a period before and after the observations. This cannot be described solely by the state information of the current and previous moments. For example, the EKF converts historical observations into prior information of the current state through the state equation for propagation, and the state linearization points corresponding to the historical observations after propagation are fixed. When there is an undetected blunder in the historical observations, the linearization point error is large, which can easily lead to the prior information being contaminated, thereby affecting positioning accuracy. In contrast, the FG method can fully utilize historical observations by iteratively updating the linearization points, mining the constraint information of the observations in the temporal dimension, and suppressing the influence of blunders.
The FG method decomposes a complex global function of multiple variables into the product of several simpler local functions, thereby constructing a bipartite graph structure. When dealing with global functions that involve numerous variables, the conventional approach is to break the given function down into its constituent factors, which serve as local functions; these local functions are then combined multiplicatively to represent the original global function. The FG is a bipartite graph model that captures this factorization. It typically consists of variable nodes, factor nodes, and the edges connecting them. By leveraging this structure, the FG can effectively decompose multivariable functions into products of local functions, facilitating more efficient computation and analysis.
$$g(x_1, x_2, \ldots, x_n) = \prod_{j \in J} f_j(X_j)$$
where each $x_n$ denotes a variable node, each local function $f_j$ denotes a factor node, and $X_j$ denotes the subset of variables on which the local function $f_j$ depends; each factor node is connected by an edge to every variable node it depends on, the edge representing their mutual relationship.
Let g x 1 , x 2 , x 3 , x 4 , x 5 be a global function involving five variables. It can be decomposed into the product of four factors, which can be expressed as follows:
$$g(x_1, x_2, x_3, x_4, x_5) = f_1(x_1, x_3)\, f_2(x_1, x_2, x_4)\, f_3(x_3, x_4)\, f_4(x_4, x_5)$$
The corresponding FG structure is shown in Figure 4.
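The five-variable factorization above can be expressed directly in code; the Gaussian form of each factor is an arbitrary illustrative choice to make the factors evaluable, not part of the FG definition:

```python
import math

# Each local function (factor node) touches only the variable nodes it
# is connected to by an edge; the factor forms are illustrative.
f1 = lambda x1, x3:     math.exp(-(x1 - x3) ** 2)
f2 = lambda x1, x2, x4: math.exp(-(x1 + x2 - x4) ** 2)
f3 = lambda x3, x4:     math.exp(-(x3 - x4) ** 2)
f4 = lambda x4, x5:     math.exp(-(x4 - x5) ** 2)

def g(x1, x2, x3, x4, x5):
    # The global function is evaluated purely through its factors.
    return f1(x1, x3) * f2(x1, x2, x4) * f3(x3, x4) * f4(x4, x5)

print(g(1.0, 0.0, 1.0, 1.0, 1.0))  # every factor residual is zero -> 1.0
```

Because each factor depends on only a few variables, operations on $g$ (such as maximization) can exploit this sparsity instead of treating $g$ as a monolithic five-variable function.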
Figure 5 presents a factor graph-based multi-sensor fusion framework incorporating an IMU, GPS, a barometric altimeter, an optical flow sensor, a magnetic heading sensor, and a star tracker.
In Figure 5, the blue circles represent variable nodes, and the black solid small squares represent factor nodes. The variables of the functions associated with the factor nodes include the variable nodes connected to them. Each factor has a corresponding error function. By adjusting the x to minimize the error of the FG, the optimal estimate can be obtained. The formula is as follows:
$$\hat{x} = \underset{x}{\operatorname{argmin}} \sum_i g_i(x_i)$$
In the integrated system, the factor node constructs the corresponding function to calculate the difference between the predicted measurement and the actual measurement, thereby obtaining the estimate of the state variable and the cost function, as follows:
$$f_i(x_i) = L\big( g_i(x_i) - z_i \big)$$
where z i is the actual measurement value obtained by the sensor, and L is the cost function.
The FG method models the optimal estimation problem in graphical form and solves for the state estimates based on the maximum a posteriori estimation (MAP) criterion. During optimization, as the system operates over time, the scale of the graph inevitably grows, degrading the real-time performance of graph optimization. Therefore, a sliding window mechanism is introduced to balance accuracy and efficiency, thereby enhancing the system's real-time capabilities. Table 6 summarizes the performance characteristics of the FG method.
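As a minimal illustration of MAP estimation over a factor graph, consider a single 1-D pose constrained by two factors. With linear measurement functions and Gaussian noise, the MAP estimate reduces to information-weighted least squares; all values below are hypothetical:

```python
# Two factors constrain the pose x: a GNSS-style position factor z1
# measuring x directly, and an odometry-style factor z2 measuring the
# displacement x - x_prev from a known previous pose (values invented).
x_prev, z1, z2 = 0.0, 2.1, 1.9
w1, w2 = 1.0 / 0.5 ** 2, 1.0 / 0.2 ** 2   # information = inverse variance

# Minimizing w1*(x - z1)^2 + w2*(x - x_prev - z2)^2 over x gives the
# closed-form normal-equation solution:
x_hat = (w1 * z1 + w2 * (x_prev + z2)) / (w1 + w2)
print(x_hat)  # pulled toward the more precise odometry factor
```

In a real FGO system the residuals are nonlinear and the graph holds many states, so the solver iteratively relinearizes and solves such weighted least-squares problems, discarding old states via the sliding window to bound the problem size.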
The FG algorithm has been widely applied in various domains, including single GNSS positioning, GNSS and INS integrated positioning, ambiguity resolution, and robust estimation [15,16,17,18]. The FG method has been optimized and improved for urban environments to enhance positioning accuracy [19,20], and it has been demonstrated that under certain conditions, the FG algorithm achieves higher solution accuracy and robustness compared to the EKF.

2.5. Artificial Intelligence (AI) Method

In addition to the aforementioned fusion methods, machine learning and deep learning have found typical applications in practice, demonstrating excellent performance in fields such as pattern recognition and image processing. Their learning and training concepts have also been applied to position estimation in multi-source integrated navigation, particularly in scenarios where explicit mathematical models cannot be established between navigation source observations and system states. For example, in WLAN positioning, carrier positions cannot be calculated directly from signal strength, and in visual cooperative target localization, carrier positions cannot be derived directly from acquired image tags. However, through offline learning of signal strengths or image tags, a learning model can be established between system states and observational information. In real-time positioning, the acquired observational data are then fed into the trained model, which outputs the system states, enabling multi-source information fusion processing. Table 7 summarizes the advantages and disadvantages of AI methods.
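The WLAN fingerprinting idea can be sketched with a nearest-neighbor learner; the RSS fingerprints and positions below are invented for illustration and no explicit signal-propagation model is ever written down:

```python
import numpy as np

# Offline phase: signal strengths (RSS, dBm) recorded at known positions
# along a corridor (all values hypothetical).
fingerprints = np.array([[-40.0, -70.0], [-55.0, -60.0], [-70.0, -45.0]])
positions    = np.array([[0.0, 0.0], [5.0, 0.0], [10.0, 0.0]])

def locate(rss, k=2):
    """Online phase: position = average position of the k fingerprints
    closest to the observed RSS vector (k-nearest-neighbor regression)."""
    d = np.linalg.norm(fingerprints - rss, axis=1)
    idx = np.argsort(d)[:k]
    return positions[idx].mean(axis=0)

print(locate(np.array([-50.0, -63.0])))
```

Practical systems use far denser fingerprint databases and stronger learners, but the structure is the same: the mapping from observations to states is learned offline and merely evaluated online.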
The AI-based algorithms in multi-source data fusion mainly include Neural Networks (NNs), Support Vector Machines (SVM), Hidden Markov Models (HMM), Decision Trees, etc. [78,79,80].
NNs are amongst the most classical AI algorithms. They can be classified based on network topology and information flow direction. When categorized by topological structure, NN models can be divided into hierarchical structures, interconnected structures, and sparsely connected structures according to the connection methods between neurons. Based on the direction of internal information flow, they can be classified as feedforward or feedback neural networks. The learning methods of artificial NNs mainly fall into three categories: supervised learning, unsupervised learning, and rote learning. Among them, the classic Backpropagation (BP) neural network belongs to the purely hierarchical type with a feedforward information flow direction and employs supervised learning. Its typical three-layer network model is shown in Figure 6.
The vectors for each layer are denoted as $X$, $Y$, and $O$, respectively. In a multi-source fusion navigation system, the input vector $X = [x_0, x_1, \ldots, x_{n-1}, x_n]$ contains the observation information from the different sources. The hidden layer vector is denoted as $Y = [y_0, y_1, \ldots, y_{n-1}, y_n]$, and the output layer vector as $O = [o_1, \ldots, o_k, \ldots, o_l]$. The output can represent the system state, such as position, or large-scale location identifiers. The connection weight matrices are denoted as $V = [v_1, \ldots, v_k, \ldots, v_m]$ and $W = [w_1, \ldots, w_k, \ldots, w_l]$, respectively.
The BP neural network algorithm consists of a training process and a learning process, as shown in Figure 7.
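A minimal sketch of one BP training step for the three-layer network, assuming sigmoid activations and squared-error loss; the layer sizes, learning rate, and training pattern are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

# Three-layer BP network: input X (multi-source observations), hidden
# layer Y, output O (e.g., position); sizes chosen for illustration.
n_in, n_hid, n_out, lr = 4, 6, 2, 0.5
V = rng.normal(0.0, 0.5, (n_hid, n_in))   # input -> hidden weights
W = rng.normal(0.0, 0.5, (n_out, n_hid))  # hidden -> output weights

def train_step(x, target):
    global V, W
    y = sigmoid(V @ x)                    # forward pass: hidden layer Y
    o = sigmoid(W @ y)                    # forward pass: output layer O
    # backward pass: propagate the output error through both layers
    delta_o = (target - o) * o * (1.0 - o)
    delta_y = (W.T @ delta_o) * y * (1.0 - y)
    W += lr * np.outer(delta_o, y)        # gradient-descent weight updates
    V += lr * np.outer(delta_y, x)
    return float(np.sum((target - o) ** 2))  # error before the update

x, t = np.array([0.2, 0.8, 0.5, 0.1]), np.array([0.9, 0.1])
errors = [train_step(x, t) for _ in range(200)]
print(errors[0], errors[-1])              # the error shrinks with training
```

Repeating this forward/backward cycle over the whole training set is the training process sketched in Figure 7.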
Scholars have conducted research on AI-based adaptive algorithms, including fuzzy control adaptive algorithms and neural network adaptive algorithms [23]. For instance, to address the impact of random disturbances in underwater environments, a Radial Basis Function (RBF) neural network was employed to assist the FF in information fusion [25]. In response to the high cost and strong susceptibility to meteorological conditions of existing high-precision satellite navigation systems for agricultural machinery, Yu et al. (2021) proposed a D-S-CNN-based multi-sensor fusion automatic navigation method for farmland applications [26]. These AI algorithms require large amounts of training data and prior offline training of the system, which makes computational capacity and real-time performance difficult to guarantee; therefore, they are mostly used for post-processing.
In the processing of multi-source data fusion, the main methods for AI to deal with the challenge of high-quality training data include data augmentation, transfer learning, Generative Adversarial Networks (GANs), multimodal learning, federated learning, etc.
(1)
Data Augmentation: By generating synthetic data or expanding existing data, this can solve the problem of insufficient data or uneven distribution [81].
(2)
Transfer Learning: Utilizing the knowledge transfer of pre-trained models to reduce the demand for high-quality data in the target domain [82]. For example, using models pre-trained on ImageNet to handle medical image data fusion tasks.
(3)
GANs: Generating synthetic data to make up for the shortage of real data, especially suitable for scenarios with scarce or sensitive data [83]. For example, using GANs to generate synthetic samples of multi-source sensor data.
(4)
Multimodal Learning: Fusing features from different modalities (such as text, images, and audio) to enhance model robustness [84]. For example, cross-modal feature alignment is used for multi-source data fusion.
(5)
Federated Learning: Training models using distributed multi-source data while protecting privacy [85]. For example, joint training of medical diagnostic models across hospitals.
(6)
Semi-supervised Learning: Combining a small amount of labeled data with a large amount of unlabeled data to reduce labeling costs [86]. For example, using unlabeled multi-source satellite images for land cover classification.
(7)
Active Learning: Selecting the most informative samples for labeling to optimize data quality [87]. For example, prioritizing the labeling of multi-source sensor data with high uncertainty in industrial inspection.
(8)
Data Cleaning: Improving data quality by removing noise, filling in missing values, and unifying formats [88]. For example, processing noise and redundancy in multi-source social media text.

2.6. Methods Based on Uncertain Reasoning

Some fusion methods are not suitable for establishing a mathematical model between measurements and system states, nor can they directly estimate the state. However, they can be used to assess the reliability of navigation sources and determine large-scale locations. For example, the identification of specific large-scale locations such as classrooms, cafes, conference rooms, offices, and residences can be achieved by combining observations from GNSS, WLAN sources, and inertial sources to perform uncertainty reasoning and obtain large-scale location information. Therefore, a multi-source information fusion method based on uncertainty reasoning is proposed. The method requires extracting evidence from measurement data and using relevant knowledge (mainly expert knowledge) to gradually derive conclusions from the evidence or to verify the credibility of specific information. The currently mainstream uncertainty reasoning methods mainly include subjective Bayesian estimation, evidence reasoning, and fuzzy reasoning [89]. This method can also adjust the weight of each measurement datum in the final fusion result to achieve precise identification of locations.
Uncertainty reasoning includes symbolic reasoning and numerical reasoning. The former, such as Endorsement Theory, is characterized by minimal information loss during the reasoning process but involves a large computational burden. The latter, such as Bayesian reasoning and the Dempster–Shafer theory of evidence, is characterized by ease of implementation but involves some degree of information loss during the reasoning process. As uncertainty reasoning methods are fundamental tools for target recognition and attribute fusion, the subjective Bayes method and the Dempster–Shafer (D–S) evidence theory are two commonly used uncertainty reasoning approaches. For specific applications, please refer to the related literature. To facilitate future use, Table 8 summarizes the advantages and disadvantages of uncertainty reasoning methods.
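Dempster's rule of combination, the core of D-S evidence theory, can be sketched on a two-hypothesis frame; the belief masses assigned by the two navigation sources are illustrative values:

```python
# Frame of discernment: {"indoor", "outdoor"}, with "theta" denoting the
# whole frame (mass assigned to "theta" expresses ignorance).
def intersect(a, b):
    if a == "theta":
        return b
    if b == "theta":
        return a
    return a if a == b else None          # disjoint singletons -> conflict

def dempster(m1, m2):
    """Combine two basic probability assignments by Dempster's rule,
    renormalizing by the total conflicting mass."""
    combined = {k: 0.0 for k in m1}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            c = intersect(a, b)
            if c is None:
                conflict += wa * wb
            else:
                combined[c] += wa * wb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

m_wlan = {"indoor": 0.7, "outdoor": 0.1, "theta": 0.2}  # WLAN evidence
m_gnss = {"indoor": 0.2, "outdoor": 0.5, "theta": 0.3}  # GNSS evidence
print(dempster(m_wlan, m_gnss))  # combined belief favors "indoor"
```

The renormalization by the conflicting mass is also the rule's weak point: when the sources are highly contradictory, small residual masses can dominate, which is one reason weighted evidence combination variants exist.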

3. Representative Applications of Partial Fusion Methodology in Navigation and Positioning

As the authors primarily focus on research in the field of navigation and location services, only the typical applications of some fusion methods in this domain are listed. Contemporary research has demonstrated significant advancements in rapid high-precision navigation for complex urban environments through sensor fusion strategies. Scholars have systematically explored three principal approaches: multi-sensor integration, functional model optimization, and stochastic modeling enhancement. Noteworthy implementations include the following:
(1)
Multi-constellation Hybridization: Wang et al. (2022) developed a train-integrated navigation system leveraging full-state fusion of multi-constellation GNSS and INS [91]. This architecture demonstrates enhanced robustness against signal degradation in metropolitan corridors.
(2)
Advanced Tight Coupling Methods: Zhu et al. (2023) proposed a MEMS IMU error mitigation framework combining GNSS RTK carrier phase, TDCP observations, and INS data through an adaptive KF [92]. Li et al. (2017) augmented this paradigm with carrier motion constraints, significantly improving solution availability during prolonged GNSS outages [93].
(3)
Precision Positioning Enhancements: Gao (2016) established that multi-GNSS PPP/INS tight coupling reduces PPP convergence time by 40–60% while maintaining centimeter-level accuracy [94]. Urban navigation breakthroughs by Li et al. (2018) revealed that single-frequency RTK/INS integration achieves dual-frequency RTK performance (2–3 cm horizontal RMS) under typical urban multipath conditions [95].
(4)
Multi-Sensor Constraint Integration: Liu (2021) incorporated odometric measurements and zero-velocity constraints into GNSS/INS fusion, demonstrating 75% improvement in INS drift suppression during 60 s GNSS outages [96]. Indoor–outdoor continuity solutions by Yu et al. (2020), Cao (2021), and Yuan et al. (2023) achieved seamless decimeter-level positioning through UWB/GNSS/INS tight coupling [97,98,99].
(5)
Resilient Navigation Architectures: Li et al. (2023) engineered a vision–INS–UWB hybrid system where UWB supplants GNSS in denied environments, reducing visual–inertial odometry drift by 32% in extended indoor operations [8]. Emerging multi-sensor SLAM frameworks integrate lidar, vision, and inertial data with crowd-sourced mapping, showing particular promise for dynamic urban canyon navigation [100].
This methodological evolution highlights the critical role of partial fusion techniques in balancing computational complexity with positioning integrity across heterogeneous operational environments.

4. Summary of Features and Application Scenarios for Multi-Source Fusion Methods

The introduced methods for multi-source data fusion processing show differences in terms of optimization criteria, fundamental principles, mathematical models, prior information, number of observations, and application scenarios. To enable users to make targeted selections of different fusion methods when developing integrated systems, we have summarized the main characteristics and applicable scenarios of the fusion methods introduced, as shown in Table 9.

5. Conclusions

This paper provides a detailed overview of the algorithms corresponding to various multi-source fusion processing methods. It summarizes their fundamental principles and briefly introduces their mathematical models, key characteristics, and application scenarios, offering theoretical and technical support for intelligent navigation, driverless vehicles, autonomous navigation, and related fields. Owing to limitations in theoretical understanding and technical conditions, all existing fusion algorithms exhibit shortcomings, and no single fusion algorithm can currently meet all the requirements of multi-source integrated navigation systems. Therefore, appropriate fusion algorithms must be selected based on practical needs and application contexts. The historical development of these fusion algorithms reveals their interdisciplinary nature, combining theories and methodologies from integrated navigation, GNSS data processing, satellite geodesy, probability theory and mathematical statistics, computer science, and artificial intelligence. Consequently, multi-source integrated navigation algorithms should not be confined to traditional positioning and navigation approaches. Instead, they should continuously incorporate insights from other disciplines, foster mutual learning and advancement across fields, and generate innovative theories and methods through interdisciplinary integration. This evolution aims to deliver high-precision, high-reliability positioning, navigation, and timing services across all temporal and spatial domains, which represents the future development trend of multi-source integrated navigation systems.

Author Contributions

Conceptualization, X.M. and X.H.; methodology, X.M.; software, P.Z.; validation, X.H. and P.Z.; formal analysis, P.Z.; investigation, P.Z.; resources, X.M.; data curation, P.Z.; writing—original draft preparation, X.M.; writing—review and editing, X.H.; visualization, P.Z.; supervision, X.H.; project administration, X.H.; funding acquisition, X.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (42364002, 42274039), the Major Discipline Academic and Technical Leaders Training Program of Jiangxi Province (20225BCJ23014), the Key Research and Development Program Project of Jiangxi Province (20243BBI91033), the Xi'an Science and Technology Plan Project (24ZDCYJSGG0015), the State Key Laboratory of Satellite Navigation System and Equipment Technology (CEPNT2023B02), and the Chongqing Municipal Education Commission Science and Technology Research Project (KJQN202403241).

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zou, D.; Meng, W.; Han, S. Euclidean Distance Based Handoff Algorithm for Fingerprint Positioning of WLAN System. In Proceedings of the 2013 IEEE Wireless Communications and Networking Conference (WCNC), Shanghai, China, 7–10 April 2013; pp. 1564–1568. [Google Scholar]
  2. Bhujle, H. Weighted-average Fusion Method for Multiband Images. In Proceedings of the 2016 International Conference on Signal Processing and Communications (SPCOM), Bangalore, India, 12–15 June 2016; pp. 1–5. [Google Scholar]
  3. Guo, X.; Li, L.; Ansari, N.; Liao, B. Knowledge Aided Adaptive Localization via Global Fusion Profile. IEEE Internet Things J. 2018, 5, 1081–1089. [Google Scholar] [CrossRef]
  4. Bar-Shalom, Y.; Li, X. Multitarget-Multisensor Tracking: Principles and Techniques. Aerosp. Electron. Syst. Mag. IEEE 1996, 16, 93. [Google Scholar]
  5. Hoang, M.; Denis, B.; Harri, J.; Slock, M. Robust and Low Complexity Bayesian Data Fusion for Hybrid Cooperative Vehicular Localization. In Proceedings of the 2017 IEEE International Conference on Communications (ICC), Paris, France, 21–25 May 2017; pp. 1–6. [Google Scholar]
  6. Zhang, D.; Liu, J.; Lv, Y.; Wei, X. A Weighted Fusion Algorithm for Multi-sensors. In Proceedings of the 2016 Sixth International Conference on Instrumentation Measurement, Computer, Communication and Control (IMCCC), Harbin, China, 21–23 July 2016; pp. 808–811. [Google Scholar]
  7. Sun, T.; Yu, M. Research on Multi-source Data Fusion Method Based on Bayesian Estimation. In Proceedings of the 2016 9th International Symposium on Computational Intelligence and Design (ISCID), Hangzhou, China, 10–11 December 2016; Volume 2, pp. 321–324. [Google Scholar]
  8. Li, X.; Li, J.; Wang, A. A review of integrated navigation technology based on visual/inertial/UWB fusion. Sci. Surv. Mapp. 2023, 48, 49–58. [Google Scholar]
  9. Berger, J. Statistical Decision Theory and Bayesian Analysis; Springer: Berlin/Heidelberg, Germany, 2002; Volume 83, p. 266. [Google Scholar]
  10. Zhang, Y.; Liu, Y.; Zhang, Z.; Chao, H.; Zhang, J.; Liu, Q. A Weighted Evidence Combination Approach for Target Identification in Wireless Sensor Networks. IEEE Access 2017, 5, 21585–21596. [Google Scholar] [CrossRef]
  11. Arikumar, K.; Natarajan, V.; Clarence, L.; Priyanka, M. Efficient Fuzzy Logic Based Data Fusion in Wireless Sensor Networks. In Proceedings of the 2016 Online International Conference on Green Engineering and Technologies (IC-GET), Coimbatore, India, 19 November 2016; pp. 1–6. [Google Scholar]
  12. Zhang, X.; Lu, X. Recursive estimation of the stochastic model based on the Kalman filter formulation. GPS Solut. 2021, 25, 24. [Google Scholar] [CrossRef]
  13. Cheng, Q.; Liu, H.; Shen, H.; Wu, P.; Zhang, L. A Spatial and Temporal Nonlocal Filter-Based Data Fusion Method. IEEE Trans. Geosci. Remote Sens. 2017, 55, 4476–4488. [Google Scholar] [CrossRef]
  14. Merwe, R.; Doucet, A.; Freitas, N.; Wan, E. The Unscented Particle Filter. In Proceedings of the 14th International Conference on Neural Information Processing Systems, Denver, CO, USA, 1 January 2001; Volume 13. [Google Scholar]
  15. Watson, R.; Gross, J. Robust Navigation in GNSS Degraded Environment Using Graph Optimization. In Proceedings of the 30th International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS+ 2017), Portland, OR, USA, 25–29 September 2017; pp. 2906–2918. [Google Scholar]
  16. Wen, W.; Pfeifer, T.; Bai, X.; Hsu, L. It is time for Factor Graph Optimization for GNSS/INS Integration: Comparison between FGO and EKF. arXiv 2020, arXiv:2004.10572. [Google Scholar]
  17. Gao, H.; Li, H.; Huo, H.; Yang, C. Robust GNSS Real-Time Kinematic with Ambiguity Resolution in Factor Graph Optimization. In Proceedings of the 2022 International Technical Meeting of The Institute of Navigation, Long Beach, CA, USA, 25–27 January 2022; pp. 835–843. [Google Scholar]
  18. Zhang, T.; Wang, G.; Chen, Q.; Tang, H.; Wang, L.; Niu, X. Influence Analysis of IMU Scale Factor Error in GNSS/MEMS IMU vehicle integrated navigation. J. Geod. Geodyn. 2024, 44, 134–137. [Google Scholar]
  19. Liao, J.; Li, X.; Feng, S. GVIL: Tightly-Coupled GNSS PPP/Visual/INS/LiDAR SLAM Based on Graph Optimization. Geomat. Inf. Sci. Wuhan Univ. 2023, 48, 1204–1215. [Google Scholar]
  20. Zhang, X.; Zhang, Y.; Zhu, F. Factor Graph Optimization for Urban Environment GNSS Positioning and Robust Performance Analysis. Geomat. Inf. Sci. Wuhan Univ. 2023, 48, 1050–1057. [Google Scholar]
  21. Magill, D. Optimal Adaptive Estimation of Sampled Stochastic Processes. IEEE Trans. Autom. Control 1965, 10, 434–439. [Google Scholar] [CrossRef]
  22. Carlson, N. Federated filter for fault-tolerant integrated navigation systems. In Proceedings of the IEEE PLANS 88, Orlando, FL, USA, 29 November–2 December 1988; pp. 110–119. [Google Scholar]
  23. Bian, H.; Jin, Z.; Tian, W. Analysis of Adaptive Kalman Filter Based on Intelligent Information Fusion Techniques in Integrated Navigation System. Syst. Eng. Electron. 2004, 26, 1449–1452. [Google Scholar]
  24. Ghamisi, P.; Hofle, B.; Zhu, X. Hyperspectral and Lidar Data Fusion Using Extinction Profiles and Deep Convolutional Neural Network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 3011–3024. [Google Scholar] [CrossRef]
  25. Li, P.; Xu, X.; Zhang, X. Application of intelligent Kalman Filter to Underwater Terrain Integrated Navigation System. J. Chin. Inert. Technol. 2011, 19, 579–589. [Google Scholar]
  26. Yu, J.; Lu, W.; Zeng, M.; Zhao, S. Low-cost Agricultural Machinery Intelligent Navigation Method Based on Multi-Sensor Information Fusion. China Meas. Test 2021, 47, 106–119. [Google Scholar]
  27. Conte, G.; Doherty, P. Vision-Based Unmanned Aerial Vehicle Navigation Using Geo-Referenced Information. EURASIP J. Adv. Signal Process. 2009, 2009, 387308. [Google Scholar] [CrossRef]
  28. Montillet, J.; Bonenberg, L.; Hancock, C.; Roberts, G. On the Improvements of the Single Point Positioning Accuracy with Locata Technology. GPS Solut. 2014, 18, 273–282. [Google Scholar] [CrossRef]
  29. Deng, Z.; Yu, Y.; Yuan, X.; Wan, N.; Yang, L. Situation and Development Tendency of Indoor Positioning. China Commun. 2013, 10, 42–55. [Google Scholar] [CrossRef]
  30. Coster, D.; Lambot, S. Fusion of Multifrequency GPR Data Freed from Antenna Effects. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 2, 1–11. [Google Scholar] [CrossRef]
  31. Nazemzadeh, P.; Fontanelli, D.; Macii, D. Indoor Localization of Mobile Robots through QR Code Detection and Dead Reckoning Data Fusion. IEEE/ASME Trans. Mechatron. 2017, 22, 2588–2599. [Google Scholar] [CrossRef]
  32. Nesa, N.; Banerjee, I. IoT-Based Sensor Data Fusion for Occupancy Sensing Using Dempster–Shafer Evidence Theory for Smart Buildings. Internet Things J. 2017, 4, 1563–1570. [Google Scholar] [CrossRef]
  33. Chen, K.; Chang, G.; Chen, C. GINav: A MATLAB-based Software for the Data Processing and Analysis of A GNSS/INS Integrated Navigation System. GPS Solut. 2021, 25, 108. [Google Scholar] [CrossRef]
  34. Chi, C.; Zhang, X.; Liu, J.; Sun, Y.; Zhang, Z.; Zhan, X. GICI-LIB: A GNSS/INS/Camera Integrated Navigation Library. arXiv 2023, arXiv:2306.13268. [Google Scholar] [CrossRef]
  35. Li, X.; Huang, J.; Li, X.; Yuan, Y.; Zhang, K.; Zheng, H.; Zhang, W. GREAT: A Scientific Software Platform for Satellite Geodesy and Multi-Source Fusion Navigation. Adv. Space Res. 2024, 74, 1751–1769. [Google Scholar] [CrossRef]
  36. Koch, K. Least Squares Adjustment and Collocation. Bull. Geod. 1977, 51, 127–135. [Google Scholar] [CrossRef]
  37. Cui, X.; Yu, Z.; Tao, B.; Liu, D.; Yu, Z.; Sun, H.; Wang, X. Generalized Surveying Adjustment; Wuhan University Press: Wuhan, China, 2001. [Google Scholar]
  38. He, X.; Wang, T.; Liu, W.; Luo, T. Measurement Data Fusion Based on Optimized Weighted Least-Squares Algorithm for Multi-Target Tracking. IEEE Access 2019, 7, 13901–13916. [Google Scholar] [CrossRef]
  39. Zhang, Q.; Zhang, P.; Li, T. Information Fusion for Large-Scale Multi-Source Data Based on the Dempster-Shafer Evidence Theory. Inf. Fusion 2025, 115, 102754. [Google Scholar] [CrossRef]
  40. Huang, W. Modern Adjustment Theory and Its Applications; PLA Press: Beijing, China, 1992. [Google Scholar]
  41. Kalman, R. A New Approach to Linear Filtering and Prediction Problems. J. Basic Eng. 1960, 82, 35–45. [Google Scholar] [CrossRef]
  42. Jazwinski, A. Stochastic Processes and Filtering Theory; Academic Press: New York, NY, USA, 1970. [Google Scholar]
43. Simon, D. Optimal State Estimation: Kalman, H∞, and Nonlinear Approaches; John Wiley & Sons: Hoboken, NJ, USA, 2006. [Google Scholar]
  44. Bierman, G.; Belzer, M. A Decentralized Square Root Information Filter/Smoother. In Proceedings of the NAECON, Dayton, OH, USA, 18–22 May 1987; pp. 1448–1456. [Google Scholar]
  45. Li, J.; Hao, S.; Huang, G. Modified Strong Tracking Filter based on UD Decomposition. Syst. Eng. Electron. 2009, 31, 1953–1957. [Google Scholar]
  46. Mehra, R. Approaches to adaptive filtering. IEEE Trans. Autom. Control 1972, 17, 693–698. [Google Scholar] [CrossRef]
  47. Athans, M.; Wisher, R.; Bertolini, A. Suboptimal State Estimation Algorithm for Continuous-Time Nonlinear Systems from Discrete Measurements. IEEE Trans. Autom. Control 1968, AC-13, 504–515. [Google Scholar] [CrossRef]
  48. Wang, G.; Fan, X.; Zhao, J.; Yang, C.; Ma, L.; Dai, W. Iterated Maximum Mixture Correntropy Kalman Filter and Its Applications in Tracking and Navigation. IEEE Sens. J. 2024, 24, 27790–27802. [Google Scholar] [CrossRef]
  49. Wang, G.; Zhang, Z.; Yang, C.; Ma, L.; Dai, W. Robust EKF Based on Shape Parameter Mixture Distribution for Wireless Localization with Time-Varying Skewness Measurement Noise. IEEE Trans. Instrum. Meas. 2025, 74, 1–10. [Google Scholar] [CrossRef]
  50. Julier, S.; Uhlmann, J.; Durrant-Whyte, H. A new Approach for Filtering Nonlinear Systems. In Proceedings of the 1995 American Control Conference-ACC ’95, Seattle, WA, USA, 21–23 June 1995; Volume 3, pp. 1628–1632. [Google Scholar]
  51. Wan, E.; Van Der Merwe, R. The Unscented Kalman Filter for Nonlinear Estimation. In Proceedings of the IEEE 2000 Adaptive Systems for Signal Processing, Communications, And Control Symposium (Cat. No. 00EX373), Lake Louise, AB, Canada, 4 October 2000; pp. 153–158. [Google Scholar]
  52. Ye, L.; Anxi, Y.; Amp, J. Unscented Kalman Filtering in the Additive Noise Case. Sci. China Technol. Sci. 2010, 53, 929–941. [Google Scholar]
  53. Shi, C.; Teng, W.; Zhang, Y.; Yu, Y.; Chen, L.; Chen, R.; Li, Q. Autonomous Multi-Floor Localization Based on Smartphone-Integrated Sensors and Pedestrian Indoor Network. Remote Sens. 2023, 15, 2933. [Google Scholar] [CrossRef]
  54. Cahyadi, N.; Asfihani, T.; Suhandri, F.; Erfianti, R. Unscented Kalman Filter for a Low-Cost GNSS/IMU-based Mobile Mapping Application Under Demanding Conditions. Geod. Geodyn. 2024, 15, 166–176. [Google Scholar] [CrossRef]
  55. Wiener, N. The theory of prediction. Mod. Math. Eng. 1956, 165, 6. [Google Scholar]
  56. Doucet, A.; De Freitas, N.; Gordon, N. Sequential Monte Carlo Methods in Practice; Springer: New York, NY, USA, 2001. [Google Scholar]
  57. Gray, J.; Muray, W. A Derivation of An Analytical Expression for the Tracking Index for the Alpha-Beta-Gamma Filtering. IEEE Trans. Aerosp. Electron. Syst. 1993, 29, 1064–1065. [Google Scholar] [CrossRef]
  58. MacCormick, J.; Blake, A. A probabilistic Exclusion Principle for Tracking Multiple Objects. In Proceedings of the International Conference on Computer Vision, Corfu, Greece, 20–25 September 1999; pp. 572–578. [Google Scholar]
59. Del Moral, P. Measure-Valued Processes and Interacting Particle Systems: Application to Nonlinear Filtering Problems. Ann. Appl. Probab. 1998, 8, 438–495. [Google Scholar]
60. Kitagawa, G. Monte Carlo Filter and Smoother for Non-Gaussian Nonlinear State Space Models. J. Comput. Graph. Stat. 1996, 5, 1–25. [Google Scholar] [CrossRef]
61. Crisan, D.; Doucet, A. A Survey of Convergence Results on Particle Filtering Methods for Practitioners. IEEE Trans. Signal Process. 2002, 50, 736–746. [Google Scholar] [CrossRef]
  62. Andrieu, C.; Doucet, A.; Singh, S.; Tadic, V. Particle Methods for Change Detection, System Identification and Control. Proc. IEEE 2004, 92, 428–438. [Google Scholar] [CrossRef]
63. Wang, E.; Pang, T.; Qu, P.; Cai, M.; Zhang, Z. GPS Receiver Autonomous Integrity Monitoring Algorithm Based on Improved Particle Filter. Telecommun. Eng. 2014, 54, 437–441. [Google Scholar] [CrossRef]
  64. Wang, E.; Qu, P.; Pang, T.; Qu, P.; Cai, M.; Zhang, Z. Receiver Autonomous Integrity Monitoring Based on Particle Swarm Optimization Particle Filter. J. Beijing Univ. Aeronaut. Astronaut. 2016, 42, 2572–2578. [Google Scholar]
  65. Yun, L.; Shu, S.; Gang, H. A Weighted Measurement Fusion Particle Filter for Nonlinear Multisensory Systems Based on Gauss–Hermite Approximation. Sensors 2017, 17, 2222. [Google Scholar] [CrossRef] [PubMed]
  66. Bi, X. HUS Firefly Algorithm with High Precision Mixed Strategy Optimized Particle Filter. J. Shanghai Jiaotong Univ. 2019, 53, 232–238. [Google Scholar]
  67. Pearson, J. Dynamic Decomposition Techniques in Optimization Methods for Large-Scale System; McGraw-Hill: New York, NY, USA, 1971; p. 67. [Google Scholar]
  68. Speyer, J. Computation and Transmission Requirements for A Decentralized Linear-Quadratic-Gaussian Control Problem. IEEE Trans. Autom. Control 1979, AC-24, 266–269. [Google Scholar] [CrossRef]
  69. Willsky, A.; Bello, M.; Castanon, D.; Levy, B.; Verghese, G. Combining and Updating of Local Estimates and Regional Maps Along Sets of One-Dimensional Tracks. IEEE Trans. Autom. Control 1982, AC-27, 799–813. [Google Scholar] [CrossRef]
  70. Kerr, T. Decentralized filtering and Redundancy Management for Multisensor Navigation. IEEE Trans. Aerosp. Electron. Syst. 1987, AES-23, 83–119. [Google Scholar] [CrossRef]
  71. Loomis, P.; Carlson, N.; Berarducci, M. Common Kalman filter: Fault-Tolerant Navigation for Next Generation Aircraft. In Proceedings of the 1988 National Technical Meeting of The Institute of Navigation, Santa Barbara, CA, USA, 26–29 January 1988; pp. 38–45. [Google Scholar]
  72. Li, X. Hybrid Estimation Techniques in Control and Dynamic Systems: Advances in Theory and Applications; Academic Press: New York, NY, USA, 1996; Volume 76, pp. 213–287. [Google Scholar]
  73. Blom, H.; Bar-Shalom, Y. The Interacting Multiple Model Algorithm for Systems with Markovian Switching Coefficients. IEEE Trans. Autom. Control 1988, 33, 780–783. [Google Scholar] [CrossRef]
  74. Luan, Z.; Yu, C.; Gu, B.; Zhao, X. A Time-varying IMM Fusion Target Tracking Method. Radar Navig. 2021, 47, 111–116. [Google Scholar]
  75. Tian, Y.; Yan, Y.; Zhong, Y.; Li, J.; Meng, Z. Data Fusion Method Based on IMM-Kalman for An Integrated Navigation System. J. Harbin Eng. Univ. 2022, 43, 973–978. [Google Scholar]
  76. Yang, H.; Wang, M.; Wang, Y.; Wu, Y. Multiple-Mode Self-Calibration Unscented Kalman Filter Method. J. Aerosp. Power 2024, 1–7. [Google Scholar]
77. Kschischang, F.; Frey, B.; Loeliger, H. Factor Graphs and the Sum-Product Algorithm. IEEE Trans. Inf. Theory 2001, 47, 498–519. [Google Scholar] [CrossRef]
  78. Zhou, Z. Machine Learning; Tsinghua University Press: Beijing, China, 2016. (In Chinese) [Google Scholar]
79. McClelland, J.; Rumelhart, D.; PDP Research Group. Parallel Distributed Processing; MIT Press: Cambridge, MA, USA, 1987. [Google Scholar]
  80. Vapnik, V. An Overview of Statistical Learning Theory. IEEE Trans. Neural Netw. 1999, 10, 988–999. [Google Scholar] [CrossRef]
  81. Cubuk, E.; Zoph, B.; Mane, D.; Vasudevan, V.; Le, Q. AutoAugment: Learning Augmentation Strategies from Data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019. [Google Scholar]
  82. Yosinski, J.; Clune, J.; Bengio, Y.; Lipson, H. How Transferable Are Features in Deep Neural Networks. Adv. Neural Inf. Process. Syst. 2014, 27, 3320–3328. [Google Scholar]
  83. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Networks. Commun. ACM 2020, 63, 139–144. [Google Scholar] [CrossRef]
  84. Radford, A.; Kim, J.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. Learning Transferable Visual Models from Natural Language Supervision. In Proceedings of the International Conference on Machine Learning, PMLR, Virtual, 18–24 July 2021; pp. 8748–8763. [Google Scholar]
  85. McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; Arcas, B. Communication-Efficient Learning of Deep Networks from Decentralized Data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR, Ft. Lauderdale, FL, USA, 20–22 April 2017; pp. 1273–1282. [Google Scholar]
  86. Tarvainen, A.; Valpola, H. Mean teachers are better role models: Weight-averaged Consistency Targets Improve Semi-Supervised Deep Learning Results. Adv. Neural Inf. Process. Syst. 2017, 30, 1195–1204. [Google Scholar]
  87. Settles, B. Active Learning Literature Survey; University of Wisconsin-Madison: Madison, WI, USA, 2009. [Google Scholar]
  88. Côté, P.; Nikanjam, A.; Ahmed, N.; Humeniuk, D.; Khomh, F. Data Cleaning and Machine Learning: A Systematic Literature Review. Autom. Softw. Eng. 2024, 31, 54. [Google Scholar] [CrossRef]
  89. Zhang, N.; Poole, D. Exploiting Causal Independence in Bayesian Network Inference. J. Artif. Intell. Res. 1996, 5, 301–328. [Google Scholar] [CrossRef]
  90. Shafer, G. A Mathematical Theory of Evidence; Princeton University Press: Princeton, NJ, USA, 1976. [Google Scholar]
91. Wang, X.; Li, X.; Liao, J.; Feng, S.; Li, S.; Zhou, Y. Tightly Coupled Stereo Visual-Inertial-LiDAR SLAM based on graph optimization. Acta Geod. Cartogr. Sin. 2022, 51, 1744–1756. [Google Scholar]
  92. Zhu, H.; Wang, F.; Zhang, W.; Luan, M.; Cheng, Y. Real-Time Precise Positioning Method for Vehicle-Borne GNSS/MEMS IMU Integration in Urban Environment. Geomat. Inf. Sci. Wuhan Univ. 2023, 48, 1232–1240. [Google Scholar]
  93. Li, Y.; Yang, Y.; He, H. Effects Analysis of Constraints on GNSS/INS Integrated Navigation. Geomat. Inf. Sci. Wuhan Univ. 2017, 42, 1249–1255. [Google Scholar]
  94. Gao, Z. Research on The Methodology and Application of the Integration Between the Multi-Constellation GNSS PPP and Inertial Navigation System. Ph.D. Thesis, Wuhan University, Wuhan, China, 2016. [Google Scholar]
  95. Li, W.; Li, W.; Cui, X.; Zhao, S.; Lu, M. A tightly coupled RTK/INS Algorithm with Ambiguity Resolution in the Position Domain for Ground Vehicles in Harsh Urban Environments. Sensors 2018, 18, 2160. [Google Scholar] [CrossRef] [PubMed]
  96. Liu, F. Research on High-Precision Seamless Positioning Model and Method Based on Multi-Sensor Fusion. Acta Geod. Cartogr. Sin. 2021, 50, 1780. [Google Scholar]
  97. Yu, H.; Li, Z.; Wang, J.; Han, H. Data Fusion for GPS/INS Tightly-Coupled Positioning System with Equality and Inequality Constraints Using an Aggregate Constraint Unscented Kalman filter. J. Spat. Sci. 2020, 65, 377–399. [Google Scholar] [CrossRef]
  98. Cao, Z. Indoor and Outdoor Seamless Positioning Based on the Integration of GNSS/INS/UWB. Master’s Thesis, Beijing Jiaotong University, Beijing, China, 2021. [Google Scholar]
  99. Yuan, L.; Zhang, S.; Wang, J.; Guo, H. GNSS/INS integrated navigation aided by sky images in urban occlusion environment. Sci. Surv. Mapp. 2023, 48, 1–8. [Google Scholar]
  100. Wang, J.; Ling, H.; Jiang, W.; Cai, B. Integrated Train Navigation system based on Full state fusion of Multi-constellation satellite positioning and inertial navigation. J. China Railw. Soc. 2022, 44, 45–52. [Google Scholar]
Figure 1. Calculation process for the KF.
Figure 2. The general structure of FF.
Figure 3. Principle of the MME method.
Figure 4. Structure of FG.
Figure 5. Multi-sensor fusion framework based on FG.
Figure 6. Three-layer network structure of the BP neural network.
Figure 7. Flowchart of the BP neural network algorithm.
Table 1. Parameter estimation under different criteria [40] ($E[x]=\mu_x$, $E[y]=\mu_y$, $\operatorname{var}(x)=\Sigma_x$, $\operatorname{var}(y)=\Sigma_y$, $\operatorname{cov}(x,y)=\Sigma_{xy}$).

| Type | LS | MLE | MAP | MVE | LMVE |
| --- | --- | --- | --- | --- | --- |
| Estimation criterion | Formula (2) | Formula (4) | Formula (8) | Formula (12) | Formula (17) |
| Estimation formula | First equation in Formula (3) | First equation in Formula (7) | Formula (10) | Formula (13) | Formula (18) |
| Estimation error variance | Second equation in Formula (3) | Second equation in Formula (7) | | | |
| Unbiasedness | Unbiased | | | Unbiased | Unbiased |
Table 2. Comparison of several optimal estimation methods [40].

| Method | Advantages | Disadvantages |
| --- | --- | --- |
| LSE | Simple and easy to use, geometrically intuitive, and not dependent on the specific distribution of the data. | Sensitive to outliers, lacks statistical features, and is limited to linear models. |
| MLE | Good asymptotic properties, strong versatility, and clear statistical properties. | Computationally complex, sensitive to initial values, and dependent on distribution assumptions. |
| MAPE | Utilizes the prior distribution; suitable for small samples or insufficient data; provides a complete posterior distribution for further analysis. | Relies on prior selection, and different priors may lead to distinct results; high computational complexity, often requiring integral computation. |
| MVE | The optimal unbiased estimator, with well-defined statistical properties and the minimum mean squared error among all unbiased estimators. When both the estimated quantity and the measurements follow a normal distribution, the LMVE becomes equivalent to the MVE. | Dependent on model assumptions and limited to unbiased estimation. Requires the conditional mean of the estimated quantity given the measurements, which is computationally intensive; for non-stationary processes, the first- and second-order moments are needed at each time instant, further raising the computational demands. |
| LMVE | The best linear unbiased estimator; simple to compute with clear statistical properties. | Limited to linear models, dependent on model construction, and sensitive to outliers. |
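The LSE's sensitivity to outliers noted in Table 2, and how weighting tempers it, can be shown with a short sketch. The following Python example (data, outlier, and weights are illustrative choices, not taken from the cited works) fits a line by ordinary least squares and by weighted least squares that downweights a suspect point:

```python
import numpy as np

def least_squares(A, y, w=None):
    """Solve min ||W^(1/2)(A x - y)||^2 via the normal equations."""
    if w is None:
        w = np.ones(len(y))
    W = np.diag(w)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

x = np.arange(10.0)
y = 2.0 * x + 1.0                    # true line: slope 2, intercept 1
y[9] += 30.0                         # one gross outlier
A = np.column_stack([x, np.ones_like(x)])

ols = least_squares(A, y)            # estimate pulled away by the outlier
w = np.ones(10)
w[9] = 1e-3                          # downweight the suspect measurement
wls = least_squares(A, y, w)         # close to the true parameters
```

With the outlier fully weighted, the fitted slope drifts well above 2; once the point is downweighted, the estimate returns near the true line, which is the rationale behind the WLSE variant mentioned in Table 9.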
Table 3. Comparison of filtering methods in terms of computational complexity and accuracy [43].

| Filter | System Model | Computational Complexity | Accuracy |
| --- | --- | --- | --- |
| KF | Linear system, Gaussian noise | Low | Optimal when assumptions are met |
| EKF | Weakly nonlinear system, Gaussian noise | Medium | Medium (linearization error affects accuracy) |
| UKF | Moderately nonlinear system, Gaussian noise | Medium to high | High; no differentiation needed, avoids linearization error |
| PF | Strongly nonlinear/non-Gaussian systems | High | High; depends on the number of particles, high resource consumption |
| UPF | Strongly nonlinear/non-Gaussian systems | Very high | Higher than PF (uses a UKF-generated proposal distribution) |
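The predict/update cycle behind the filters compared above can be condensed to a few lines for the scalar case. The sketch below is a minimal linear KF for a constant-position state; the noise variances and measurement model are illustrative, not drawn from any of the reviewed systems:

```python
import numpy as np

def kf_step(x, P, z, Q=1e-4, R=0.25):
    """One scalar Kalman predict/update cycle for a constant state."""
    # Predict: the state model is identity, so only process noise enters.
    P = P + Q
    # Update: the Kalman gain blends prediction and measurement.
    K = P / (P + R)
    x = x + K * (z - x)
    P = (1.0 - K) * P
    return x, P

rng = np.random.default_rng(0)
truth = 5.0
x, P = 0.0, 1.0                        # deliberately wrong initial state
for _ in range(200):
    z = truth + rng.normal(0.0, 0.5)   # noisy measurement, std 0.5
    x, P = kf_step(x, P, z)
```

After a few hundred measurements the estimate settles near the truth and the posterior variance P shrinks far below the measurement variance, illustrating the "optimal when assumptions are met" row of Table 3.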
Table 4. Performance comparison of different filtering methods [43].

KF (centralized):
Advantages: (1) Low computational load and strong real-time performance. (2) Provides optimal state estimation for linear Gaussian systems. (3) The algorithm is relatively simple and easy to implement.
Disadvantages: (1) Handles only linear systems and cannot deal with nonlinear ones. (2) Strict requirements on the noise distribution, which must be Gaussian. (3) Sensitive to model errors; an inaccurate initial state estimate may slow the filter's convergence.

EKF (centralized):
Advantages: (1) Handles nonlinear systems by linearizing the state and measurement equations, retaining only the first-order terms. (2) Maintains high estimation accuracy in weakly nonlinear systems. (3) Lower computational complexity than other nonlinear filters (such as the UKF), making it better suited to real-time applications.
Disadvantages: The discarded second- and higher-order terms introduce linearization error, so estimation accuracy degrades as the nonlinearity of the system grows (see Table 3).

UKF (centralized):
Advantages: (1) Better able to handle non-Gaussian noise and nonlinear systems. (2) Captures terms up to second order, unlike the EKF's first-order linearization, and performs better in high-dimensional state spaces.
Disadvantages: (1) High computational complexity, since a large number of sigma points must be evaluated. (2) Sensitive to noise and may introduce additional noise. (3) Requires manual parameter tuning, such as the number and weights of the sigma points.

PF (centralized):
Advantages: (1) Handles nonlinear systems and non-Gaussian noise. (2) No strict requirements on the noise distribution. (3) The algorithm can be parallelized, improving computational efficiency.
Disadvantages: (1) Large computational load, especially in high-dimensional state spaces. (2) Performance depends on the number of particles, and particle degeneracy may occur.

UPF (centralized):
Advantages: (1) More accurate than the PF, especially in high-dimensional state spaces. (2) Effectively reduces particle degeneracy.
Disadvantages: (1) Slightly higher computational load than the PF. (2) Somewhat dependent on the choice of the unscented transform (UT).

FF (decentralized):
Advantages: (1) Reduces the communication burden of centralized filtering and decreases dependence on a central processor. (2) Improves estimation accuracy over fully decentralized filtering by adding some central coordination.
Disadvantages: (1) Relatively high implementation complexity; an effective information-exchange mechanism must be designed. (2) Possible data-imbalance issues may affect the accuracy and stability of the model.
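The weight/resample cycle that distinguishes the PF, and the resampling step that mitigates the particle degeneracy listed among its disadvantages, can be sketched for a scalar state. The model, noise levels, and particle count below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500
particles = rng.normal(0.0, 2.0, N)     # prior particle cloud around 0
truth = 3.0                             # static true state, unknown to the filter

for _ in range(30):
    # Propagate: random-walk process model with small jitter.
    particles += rng.normal(0.0, 0.1, N)
    # Measure: noisy observation of the true state.
    z = truth + rng.normal(0.0, 0.5)
    # Weight: Gaussian likelihood of each particle given the measurement.
    w = np.exp(-0.5 * ((z - particles) / 0.5) ** 2)
    w /= w.sum()
    # Resample: multinomial resampling concentrates particles where the
    # posterior mass is, countering weight degeneracy.
    idx = rng.choice(N, size=N, p=w)
    particles = particles[idx]

estimate = particles.mean()
```

No linearity or Gaussianity is assumed anywhere above; swapping the likelihood for a non-Gaussian one changes a single line, which is the flexibility the PF rows of Tables 3 and 4 describe.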
Table 5. Performance characteristics of MME [73].

MME:
Advantages: (1) Describes different states or behaviors of a system by combining multiple models, enabling more comprehensive coverage of complex system characteristics. (2) Through the integration of multiple models, the system can better address model errors, noise, and uncertainties; even if one model deviates, the others can still provide reliable estimates, enhancing overall robustness. (3) Dynamically adjusts model weights or switches between models based on real-time data, allowing better adaptation to changes in the system state; this adaptive capability makes it particularly effective in dynamic environments.
Disadvantages: (1) Requires simultaneous computation and updating of multiple models, so its computational complexity is significantly higher than that of single-model approaches; this may impose demanding resource requirements for real-time applications. (2) Requires careful design and management of the parameters, weights, and switching logic across multiple models; coordinating and optimizing the interactions between models is a complex process requiring meticulous design and debugging. (3) Accurate parameter estimation for multiple models and effective model switching typically require substantial data support; where data are scarce, the performance of multi-model methods may be constrained.
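The core of the multiple-model idea, running a bank of candidate models in parallel and re-weighting them by their measurement likelihoods, dates back to Magill [21] and can be sketched with two hypotheses. The motion models, noise level, and data below are illustrative, not a full IMM implementation:

```python
import numpy as np

rng = np.random.default_rng(2)
R = 0.3                                    # measurement noise std
models = {"static": 0.0, "drift": 1.0}     # two hypothesized velocities
prob = {m: 0.5 for m in models}            # uniform prior over models
truth_v = 1.0                              # data generated by the drift model

for k in range(1, 21):
    z = truth_v * k + rng.normal(0.0, R)   # noisy position measurement
    for m, v in models.items():
        pred = v * k                       # each model's predicted position
        like = np.exp(-0.5 * ((z - pred) / R) ** 2)
        prob[m] *= like                    # Bayesian model-probability update
    s = sum(prob.values())
    prob = {m: p / s for m, p in prob.items()}
```

The posterior probability of the model that generated the data rapidly approaches one; in a full MME/IMM scheme each hypothesis would carry its own filter, and the fused state would be the probability-weighted combination of the filters' estimates.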
Table 6. Performance characteristics of the FG method [77].

FG:
Advantages: (1) High flexibility: FG represents the decomposition of probability distributions in a general way and applies to a variety of complex probabilistic models; it can flexibly integrate data from different types of sensors (such as IMU, GPS, and LiDAR) and introduce multiple constraints. (2) Effective optimization: the FG method is suited to large-scale sparse-structure optimization and offers good numerical stability, especially in SLAM problems; it optimizes through Bayesian inference and can effectively handle multi-source heterogeneous data. (3) Strong dynamic adaptability: FG can dynamically adjust its optimization strategy to adapt to changes in the data, especially when sensor rates are inconsistent and sensor validity changes over time. (4) Plug-and-play: new sensor data can be integrated easily, giving strong scalability.
Disadvantages: (1) High computational complexity when dealing with large-scale data, which may limit real-time applications. (2) High data-preprocessing requirements: multi-source data usually need preprocessing, such as cleaning and normalization, to ensure quality and consistency. (3) Difficult model construction: building an effective FG model requires a deep understanding of the uncertainty and correlation of the data. (4) Real-time challenges: in scenarios requiring real-time decisions, the computational complexity may limit its application.
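Under Gaussian noise, inference on a factor graph reduces to a sparse weighted least-squares problem over all states, with one residual row per factor. The toy problem below, a 1-D three-pose chain with a prior factor, two odometry factors, and one absolute (GNSS-like) fix, uses invented values and solves the linear case with a single Gauss-Newton step:

```python
import numpy as np

# Each factor contributes a residual r = a·x - b with standard deviation sigma.
factors = [
    # (row of A,        b,    sigma)
    ([1.0, 0.0, 0.0],   0.0,  0.1),   # prior factor: x0 ≈ 0
    ([-1.0, 1.0, 0.0],  1.0,  0.2),   # odometry factor: x1 - x0 ≈ 1
    ([0.0, -1.0, 1.0],  1.0,  0.2),   # odometry factor: x2 - x1 ≈ 1
    ([0.0, 0.0, 1.0],   2.3,  0.5),   # absolute fix: x2 ≈ 2.3
]
# Whiten each row by 1/sigma so the least-squares cost weights factors
# by their confidence.
A = np.array([np.array(row) / s for row, _, s in factors])
b = np.array([v / s for _, v, s in factors])

# The problem is linear, so one Gauss-Newton step (the normal equations)
# solves it exactly; x ≈ [0.01, 1.04, 2.08].
x = np.linalg.solve(A.T @ A, A.T @ b)
```

All three poses shift slightly toward the conflicting absolute fix, weighted by the factor uncertainties. In real systems the information matrix A^T A is large but sparse, and exploiting that sparsity is precisely the "effective optimization" advantage listed in Table 6; nonlinear factors simply make the Gauss-Newton step iterate.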
Table 7. Performance characteristics of AI methods [78].

AI:
Advantages: (1) Strong dynamic adaptability: AI methods (such as machine learning, deep learning, and reinforcement learning) can dynamically learn and adapt to environmental changes, automatically adjusting fusion strategies and overcoming the limitations of traditional methods in handling nonlinearity, time-varying characteristics, and uncertainty. (2) Enhanced capability for complex data relationships: through model training, AI algorithms can process complex relationships among multi-source heterogeneous data, improving fusion accuracy and system robustness. (3) Automatic feature extraction: neural networks and similar algorithms can learn data features automatically, reducing the need for manual feature engineering, especially for large-scale, high-dimensional data. (4) Improved decision-making efficiency: AI algorithms can rapidly integrate information from diverse sensors or data sources, supporting real-time decision-making.
Disadvantages: (1) High computational cost: multimodal fusion models process multiple data streams and demand significant computational resources (e.g., GPUs) and energy, which may constrain real-time applications. (2) Demanding data preprocessing: multi-source data often differ in format, scale, semantics, and quality, necessitating steps such as cleaning and standardization and increasing implementation complexity. (3) High model complexity: AI models (e.g., deep learning models) are typically intricate, require substantial time for training and optimization, and demand large volumes of high-quality data. (4) Data privacy and security concerns: data sources are diverse and may contain sensitive information, making privacy protection and security safeguards critical challenges.
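The three-layer BP network of Figures 6 and 7 is the simplest of the AI methods surveyed here. The sketch below trains such a network (one sigmoid hidden layer, squared-error loss, plain gradient descent) on the XOR problem; layer sizes, data, and the learning rate are illustrative choices, not the authors' implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(3)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
Y = np.array([[0], [1], [1], [0]], float)          # XOR targets

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)    # input -> hidden
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)    # hidden -> output

losses = []
for _ in range(5000):
    # Forward pass through the three layers of Figure 6.
    H = sigmoid(X @ W1 + b1)
    out = sigmoid(H @ W2 + b2)
    err = out - Y
    losses.append(float((err ** 2).mean()))
    # Backward pass: propagate the error through the sigmoid derivatives,
    # as in the flowchart of Figure 7.
    d_out = err * out * (1 - out)
    d_hid = (d_out @ W2.T) * H * (1 - H)
    W2 -= 0.5 * H.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_hid; b1 -= 0.5 * d_hid.sum(0)
```

The training loss falls steadily from its random-initialization value, illustrating the automatic feature learning noted in Table 7; in a fusion context the inputs would be multi-sensor measurements rather than binary patterns.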
Table 8. Advantages and disadvantages of uncertainty reasoning [90].

Uncertainty reasoning:
Advantages: (1) Strong capability in handling uncertainty: methods such as the Dempster–Shafer theory of evidence can effectively deal with uncertainty in the data, including imprecision, inconsistency, and sensor errors. (2) High flexibility: these methods can handle different types of data sources and meet the needs of fusing multi-source heterogeneous data. (3) Enhanced decision-making reliability: by properly treating uncertainty, the reliability of the fusion results can be improved, supporting decision-making in complex environments. (4) Support for dynamic updates: where data are constantly changing, uncertainty reasoning methods can dynamically adjust the fusion strategy to adapt to new inputs.
Disadvantages: (1) High computational complexity: uncertainty reasoning typically requires complex computation, especially with large-scale data, resulting in high computational cost. (2) High data-preprocessing requirements: multi-source data often need preprocessing, such as cleaning and normalization, to ensure quality and consistency. (3) Difficult model construction: building effective uncertainty models requires a deep understanding of the uncertainty and correlation of the data. (4) Real-time challenges: in scenarios requiring real-time decisions, the computational complexity may limit their application.
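The core operation of Dempster–Shafer fusion is Dempster's rule of combination, which multiplies the masses of intersecting focal sets and renormalizes by the total conflict. The sketch below combines two basic probability assignments over a two-element frame {A, B}; the mass values are illustrative numbers, not taken from any cited system:

```python
from itertools import product

def dempster(m1, m2):
    """Combine two mass functions whose focal sets are frozensets."""
    K = 0.0                               # total conflict between the sources
    fused = {}
    for (s1, w1), (s2, w2) in product(m1.items(), m2.items()):
        inter = s1 & s2
        if inter:
            fused[inter] = fused.get(inter, 0.0) + w1 * w2
        else:
            K += w1 * w2                  # disjoint focal sets -> conflict
    # Renormalize the surviving mass by 1 - K (Dempster's rule).
    return {s: w / (1.0 - K) for s, w in fused.items()}

A, B = frozenset("A"), frozenset("B")
m1 = {A: 0.6, A | B: 0.4}                 # source 1: mostly supports A
m2 = {B: 0.7, A | B: 0.3}                 # source 2: mostly supports B
m = dempster(m1, m2)
# conflict K = 0.6 * 0.7 = 0.42; the fused masses are
# m(A) = 0.18/0.58, m(B) = 0.28/0.58, m(A∪B) = 0.12/0.58
```

The conflict term K is also a useful diagnostic: the weighted combination schemes discussed for wireless sensor networks [10] exist precisely because the plain rule behaves poorly when K approaches one.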
Table 9. Main characteristics and applicable scenarios of fusion methods.
Table 9. Main characteristics and applicable scenarios of fusion methods.
Main CharacteristicsApplicable Scenarios
LSE,
WLSE
(1)
Parameters are estimated by minimizing the sum of squared errors.
(2)
In linear regression models, the LSE is unbiased and achieves the minimum variance among all linear unbiased estimators (i.e., it is BLUE, the Best Linear Unbiased Estimator).
(3)
It is sensitive to outliers because its optimization relies on minimizing the sum of squared errors.
(4)
It is suitable for data with linear relationships, offering computational simplicity and ease of implementation.
Linear regression and curve fitting. If the data distribution is unknown and the model is simple, it is preferable to prioritize LSE.
MLE
(1)
Parameters are estimated by maximizing the likelihood function, relying solely on observed data without considering prior information.
(2)
For large sample sizes, MLE typically exhibits consistency (estimates converge to the true values as the sample size increases).
(3)
The computational complexity is low, and solutions can generally be obtained through analytical methods or numerical optimization.
(4)
MLE is asymptotically efficient when model assumptions hold but may fail when these assumptions are violated.
Highly versatile and suitable for large samples, but computationally intensive. If the dataset is large and the distribution is known, MLE should be prioritized.
MAPE
(1)
Combines the likelihood of observed data with prior information about parameters, representing a Bayesian estimation method.
(2)
MAPE generally outperforms MLE when prior information is reliable and sample sizes are small.
(3)
MAPE can be interpreted as a regularized version of MLE, where the prior distribution acts as a regularization term.
(4)
Computational complexity may be high, particularly with complex prior distributions or when numerical optimization is required.
Combines prior information, suitable for small samples, but relies on prior selection. If the sample size is small and there is a need to incorporate prior information, the MAPE is chosen.
MVE
Main characteristics:
(1) Among all unbiased estimators, it has the minimum variance, making it the optimal unbiased estimator.
(2) It typically requires assumptions about the data distribution (e.g., normal distribution), and its optimality is guaranteed under these assumptions.
(3) It exhibits higher computational complexity, particularly with high-dimensional data.
Applicable scenarios: In the case where the model is known and unbiased estimation is required, select either the MVE or the LMVE.
LMVE
Main characteristics:
(1) It is the estimation method with the minimum variance among all linear estimators.
(2) It only requires knowledge of the first- and second-order moments of the estimated quantity and the measured quantity, making it suitable for stationary processes.
(3) For non-stationary processes, precise knowledge of the first- and second-order moments at each time instant is required, which significantly constrains its applicability.
(4) The computational complexity is moderate, but the estimation accuracy critically depends on the accuracy of the assumed moments.
KF
Main characteristics:
(1) Linear System Assumption: The KF assumes both the system model and observation model are linear, with additive Gaussian noise. This linear–Gaussian assumption theoretically guarantees a globally optimal solution.
(2) High Computational Efficiency: The KF exhibits low computational complexity (O(n²) for state dimension n), making it well-suited for real-time applications with stringent timing requirements.
(3) Optimal Estimation Accuracy: Under strict adherence to linear–Gaussian assumptions, the KF provides statistically optimal estimates in the minimum mean-square error sense.
(4) Limited Applicability: The KF’s strict linearity assumptions lead to significant estimation errors when applied to nonlinear systems, severely constraining its practical application scope.
Applicable scenarios: It is suitable for scenarios where the system is linear and the noise follows a Gaussian distribution, such as in simple navigation and signal processing.
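The predict–update cycle can be sketched in scalar form. This is a minimal illustration, assuming a constant-state model with illustrative process noise `q` and measurement noise `r`, not a production filter.

```python
# Minimal scalar Kalman filter estimating a constant value from noisy
# measurements (linear model, Gaussian noise assumed).
def kalman_1d(measurements, q=1e-5, r=0.1, x0=0.0, p0=1.0):
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: constant model, so the state carries over; variance grows.
        p = p + q
        # Update: blend prediction and measurement via the Kalman gain.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates

est = kalman_1d([1.2, 0.8, 1.1, 0.9, 1.0, 1.05, 0.95])
```

The gain `k` weighs measurement against prediction in proportion to their uncertainties, which is the minimum mean-square error blending described in (3).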
EKF
Main characteristics:
(1) Nonlinear system processing: The EKF approximates nonlinear problems by linearizing the nonlinear system model.
(2) Moderate computational complexity: The computational complexity of the EKF is higher than the KF but lower than the UKF.
(3) Limited accuracy: Due to the linearization process, the EKF may experience larger estimation errors in strongly nonlinear systems.
(4) Jacobian matrix calculation: The EKF requires computation of the Jacobian matrices of the system model and observation model, which increases the algorithm’s complexity.
(5) Broad applicability: The EKF is a widely used method for handling nonlinear systems and is suitable for moderate nonlinearity.
Applicable scenarios: It is suitable for scenarios where the nonlinearity is not high and fast real-time processing is required. Due to its lower computational complexity, the EKF is well-suited for use in embedded systems with limited computational resources. When the initial state estimation is relatively accurate, the EKF can converge quickly and provide better estimation results.
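The Jacobian-based linearization in (4) can be shown with a single scalar update, assuming a hypothetical nonlinear measurement model z = h(x) = x² with illustrative noise values.

```python
# One EKF measurement update for the nonlinear model z = h(x) = x**2.
# The EKF linearizes h around the current estimate via the Jacobian H = 2x.
def ekf_update(x, p, z, r):
    h = x ** 2          # predicted measurement
    H = 2.0 * x         # Jacobian of h evaluated at the current estimate
    s = H * p * H + r   # innovation covariance
    k = p * H / s       # Kalman gain
    x_new = x + k * (z - h)
    p_new = (1 - k * H) * p
    return x_new, p_new

# Prior estimate x = 3.0; the measurement z = 10.0 is consistent with
# a true state near sqrt(10) ~ 3.16, so the update moves toward it.
x, p = ekf_update(x=3.0, p=1.0, z=10.0, r=0.5)
```

If the true state were far from the linearization point, the first-order approximation of h would degrade, which is the accuracy limitation noted in (3).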
UKF
Main characteristics:
(1) Strong nonlinear system processing capability: The UKF selects a set of deterministic sampling points (Sigma points) through the Unscented Transform (UT), enabling a more accurate approximation of the statistical properties of nonlinear systems.
(2) No Jacobian matrices required: The UKF avoids the complex Jacobian matrix calculations required in the EKF, enhancing the algorithm’s stability and precision.
(3) Higher computational complexity: The computational complexity of the UKF is higher than that of the EKF, but it generally remains within acceptable limits.
(4) High accuracy: The UKF outperforms the EKF in strongly nonlinear systems, providing more accurate estimation results.
Applicable scenarios: Suitable for scenarios involving nonlinear systems with Gaussian noise, such as complex target tracking and robotic navigation.
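The sigma-point mechanism in (1) can be sketched for a one-dimensional state: deterministic points are drawn from the mean and variance, pushed through the nonlinearity, and recombined with fixed weights, with no Jacobian anywhere. The scaling parameter `kappa` and the test function are illustrative choices.

```python
import numpy as np

# Unscented transform for a 1-D state: 2n + 1 = 3 sigma points.
def unscented_transform_1d(mean, var, f, kappa=2.0):
    n = 1
    spread = np.sqrt((n + kappa) * var)
    sigma = np.array([mean, mean + spread, mean - spread])
    w = np.array([kappa / (n + kappa), 0.5 / (n + kappa), 0.5 / (n + kappa)])
    y = f(sigma)                        # propagate through the nonlinearity
    y_mean = np.dot(w, y)               # weighted transformed mean
    y_var = np.dot(w, (y - y_mean) ** 2)  # weighted transformed variance
    return y_mean, y_var

# Sanity check: for a linear function the UT is exact, so N(3, 4)
# through y = 2x + 1 gives exactly N(7, 16).
y_mean, y_var = unscented_transform_1d(3.0, 4.0, lambda x: 2.0 * x + 1.0)
```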
PF
Main characteristics:
(1) Strong nonlinear and non-Gaussian adaptability: The PF is a recursive Bayesian estimation technique based on Monte Carlo methods, capable of handling complex nonlinear systems and non-Gaussian noise environments. It approximates the posterior probability distribution through a large number of random samples (particles), thus avoiding the linearity and Gaussian assumptions required by traditional filtering methods (e.g., KF).
(2) Flexibility and robustness: The PF exhibits high flexibility, adapting to diverse systems and observation models. It demonstrates strong robustness against noise and outliers, making it particularly suitable for dynamic systems in complex environments.
(3) Parallel computing capability: The computational process of the PF can be parallelized, as the prediction and update operations for each particle are executed independently, significantly improving computational efficiency.
(4) Handling complex probability distributions: The PF can manage multimodal probability distributions, making it ideal for state estimation in complex scenarios. Through its particle weight updating mechanism, it effectively integrates information from multiple sensors.
(5) Challenges and optimizations: Despite its adaptability, the PF faces challenges such as particle degeneracy and high computational complexity. To address these issues, researchers have proposed improvements like adaptive PF, combining the UKF to generate proposal distributions, and optimizing resampling strategies.
Applicable scenarios: Suitable for nonlinear and non-Gaussian systems, such as target tracking, robotic navigation, and signal processing.
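A minimal bootstrap-PF sketch of the predict/weight/resample cycle, assuming a scalar random-walk motion model and a Gaussian measurement likelihood; the noise levels and measurement sequence are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# One bootstrap particle-filter cycle for a scalar state.
def pf_step(particles, z, motion_std=0.5, meas_std=0.5):
    # Predict: sample from the (assumed) random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # Update: Gaussian measurement likelihood as importance weight.
    w = np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
    w /= w.sum()
    # Resample: multinomial resampling combats particle degeneracy.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

particles = rng.normal(0.0, 2.0, size=500)  # broad initial belief
for z in [1.0, 1.1, 0.9, 1.0]:              # measurements near 1
    particles = pf_step(particles, z)
```

The particle cloud concentrates around the measurements; the resampling line is where the degeneracy mitigations mentioned in (5) would be refined (e.g., systematic or adaptive resampling).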
UPF
Main characteristics:
(1) Adaptability to non-Gaussian noise: When combined with a PF, the UKF further enhances its adaptability to non-Gaussian and multimodal probability distributions.
(2) Balanced computational efficiency and accuracy: The UKF achieves a good balance between computational efficiency and accuracy. While its computational complexity is higher than that of the traditional KF, the UKF outperforms the EKF in handling high-dimensional state spaces and has lower computational complexity compared to the PF. By integrating a PF, particles are generated and updated via the UT, further improving particle effectiveness and sampling efficiency.
(3) Mitigation of particle degeneracy: In PFs, particle degeneracy—a common issue where most particle weights approach zero, reducing the number of effective particles—is alleviated when combined with the UKF. Sigma points generated through the UT provide a more accurate description of the state distribution, thereby reducing particle degeneracy.
(4) Further optimization via PF integration: The UPF combines the strengths of the UKF and the PF. By generating and updating particles through the UKF’s unscented transform, the UPF retains the PF’s strong adaptability to nonlinear and non-Gaussian problems while improving sampling efficiency.
Applicable scenarios: Suitable for nonlinear and non-Gaussian systems where high estimation accuracy is required.
FF
Main characteristics:
(1) Distributed Structure and Flexibility: The FF adopts a distributed architecture, decomposing data fusion tasks into multiple sub-filters and a master filter. This structural design offers flexibility, allowing the selection of appropriate filtering algorithms (e.g., EKF, UKF) based on the characteristics of different sensors. It also supports dynamic adaptation to sensors joining or leaving the network.
(2) Fault Tolerance and Fault Isolation Capability: The FF can detect and isolate faulty sensors in real time, preventing erroneous data from degrading global estimation accuracy. This ensures high precision and reliability even when partial sensor failures occur.
(3) Diversity of Information Allocation Strategies: The FF supports multiple information allocation strategies, including zero-reset mode, variable proportion mode, feedback-free mode, and fusion-feedback mode. These strategies exhibit trade-offs in computational complexity, fusion accuracy, and fault tolerance, enabling flexible selection tailored to specific application scenarios.
(4) High Computational Efficiency: By distributing data processing among multiple sub-filters, each of which handles only local data, the FF reduces the computational load. This distributed computing approach not only enhances the system’s real-time performance but also lowers the demand for hardware resources.
(5) Adaptive and Dynamic Adjustment Capability: Integrated with adaptive filtering theory, the FF dynamically adjusts information allocation coefficients based on sensor performance and environmental changes.
(6) Plug-and-Play Functionality: The FF supports plug-and-play operation for sensors, enabling rapid adaptation to dynamic sensor addition or removal. This feature enhances flexibility and adaptability in complex environments or evolving mission requirements.
(7) Globally Optimal Estimation: The master filter synthesizes local estimates from sub-filters to achieve a globally optimal estimate. This two-tier architecture preserves local filtering precision while improving overall system performance through global fusion.
(8) Suitability for Complex Environments: The FF excels in multi-source heterogeneous data fusion scenarios, such as integrating satellite navigation, inertial navigation, and visual navigation in navigation systems. It effectively addresses sensor noise, model biases, and environmental interference challenges.
Applicable scenarios: Suitable for multi-sensor networks, especially in scenarios where sensors are widely distributed and communication resources are limited.
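The master-filter step in (7) can be sketched for scalar states: each sub-filter delivers a local estimate with its covariance, and the master filter fuses them by information (inverse-covariance) weighting. This shows only the fusion step, not the information-allocation feedback modes listed in (3); the GNSS/INS values are assumed for illustration.

```python
# Federated master-filter fusion of scalar sub-filter outputs.
def fuse(estimates, covariances):
    info = [1.0 / p for p in covariances]          # information weights
    p_global = 1.0 / sum(info)                     # fused covariance
    x_global = p_global * sum(i * x for i, x in zip(info, estimates))
    return x_global, p_global

# e.g. a GNSS sub-filter and a more confident INS sub-filter (assumed values)
x_g, p_g = fuse(estimates=[10.2, 9.8], covariances=[1.0, 0.25])
```

The fused covariance is smaller than either local one, and the fused estimate leans toward the more confident sub-filter; a faulty sub-filter detected by the FF would simply be dropped from the lists before fusing.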
MME
Main characteristics:
(1) Multimodal Data Processing Capability: MME methods can handle data from different modalities, such as images, text, audio, and video. By mapping multimodal data into a unified feature space or integrating information through fusion techniques, these methods leverage the complementary nature of different modalities to enhance model performance.
(2) Diversity of Fusion Strategies: MME methods support various fusion strategies, including early fusion (integration at the data level), mid-level fusion (integration at the feature level), late fusion (integration at the decision level), and hybrid fusion (combining multiple strategies). Different strategies are suited to different application scenarios, enabling flexible adaptation to complex data fusion requirements.
(3) Enhanced Model Robustness: By incorporating multimodal data and multi-model estimation, uncertainties arising from single-modality data can be effectively reduced, improving the model’s robustness against noise and outliers.
(4) Improved Performance and Generalization Capability: MME methods comprehensively capture target information by fusing multimodal data, thereby boosting both model performance and generalization ability.
(5) Flexibility in Model Architecture: MME methods typically offer high flexibility, allowing the selection of appropriate model architectures based on specific task requirements. For example, unified embedding-decoder architectures and cross-modal attention architectures can be combined to fully exploit the advantages of different structural designs.
Applicable scenarios:
(1) In scenarios such as multi-object tracking and complex trajectory prediction, multi-model estimation methods can effectively deal with the various motion patterns and uncertainties of targets.
(2) Suitable for integrated navigation systems, such as the fusion of inertial navigation and satellite navigation, where multi-model estimation can improve positioning accuracy and reliability.
(3) In complex industrial scenarios such as power systems and chemical processes, MME methods can be used for modeling and control to address the dynamic changes in system parameters.
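The multi-model idea in scenario (1) can be sketched with two hypothetical motion models whose probabilities are updated by the likelihood of each measurement, as in a simplified (non-interacting) multiple-model estimator; all numbers are illustrative.

```python
import numpy as np

# One multiple-model step: each candidate model supplies a prediction;
# model probabilities are re-weighted by the Gaussian likelihood of the
# actual measurement, and the output is the probability-weighted mixture.
def mme_step(preds, probs, z, meas_std=1.0):
    like = np.exp(-0.5 * ((z - preds) / meas_std) ** 2)
    probs = probs * like
    probs /= probs.sum()
    return float(np.dot(probs, preds)), probs

preds = np.array([5.0, 7.0])   # e.g. constant-position vs. constant-velocity
probs = np.array([0.5, 0.5])   # prior model probabilities
x, probs = mme_step(preds, probs, z=6.8)
```

The measurement near 7 shifts probability mass toward the second model, so the mixture output lands close to its prediction; a full IMM would additionally mix the model states through a Markov transition matrix before each step.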
FG
Main characteristics:
(1) Intuitiveness and Flexibility: FGs visually represent the relationships between variables and factors, making complex probabilistic models more intuitive and easier to understand. Compared to Bayesian networks, FGs can more generally express the decomposition of probability distributions, making them suitable for a wide range of complex probabilistic models.
(2) Efficient Inference and Optimization Capabilities: FGs excel in algorithms such as variable elimination and message passing, significantly improving inference efficiency. By jointly optimizing the relationships between data from different sensors, FGs can efficiently fuse multi-sensor data.
(3) Strong Multi-Source Data Fusion Capability: FGs can effectively integrate data from various sensors (such as cameras, radars, and IMUs). By constructing an FG model, they can jointly optimize heterogeneous multi-source data, thereby enhancing the accuracy and reliability of data fusion.
(4) Support for Dynamic Data Processing: FG algorithms support techniques such as sliding-window optimization (Sliding-Window FGO), enabling dynamic processing of real-time data streams. This makes them suitable for scenarios with high computational complexity and stringent real-time requirements.
(5) Robustness and Outlier Resistance: In complex environments (such as urban canyons), FGO demonstrates better robustness and positioning accuracy compared to traditional methods like the EKF.
Applicable scenarios:
(1) Autonomous Driving and Robot Navigation: FG methods are widely used in autonomous driving and robot navigation to integrate data from multiple sources such as cameras, radar, LiDAR, and IMU, thereby enhancing the accuracy of localization and mapping.
(2) Underwater Robot Localization: In complex underwater environments, FG methods can effectively handle abnormal sensor observations, thereby improving the positioning accuracy of Autonomous Underwater Vehicles.
(3) Industrial Fault Detection and Diagnosis: FG methods can be used to integrate monitoring data from different sensors to achieve early warning and diagnosis of equipment faults.
(4) Medical Diagnosis: In the medical field, FG methods can integrate patients’ medical records, imaging data, and laboratory test results to improve the accuracy of disease diagnosis.
(5) Environmental Monitoring: FG methods can integrate data from satellites, ground sensors, and meteorological data to monitor environmental changes in real time, supporting environmental management and decision-making.
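The joint-optimization idea behind FGO can be sketched as a tiny linear factor graph: a hypothetical 1-D pose chain with a prior factor, two odometry factors, and one loop-closure factor, where each row of the matrix is one factor and solving the least-squares problem optimizes all poses jointly. Real FGO libraries (e.g., GTSAM) iterate this linearized solve inside a nonlinear optimizer; the measurement values here are illustrative.

```python
import numpy as np

# Each row is one factor over the poses [x0, x1, x2]; b holds measurements.
A = np.array([
    [1.0,  0.0, 0.0],   # prior factor:        x0      = 0
    [-1.0, 1.0, 0.0],   # odometry factor:     x1 - x0 = 1.1
    [0.0, -1.0, 1.0],   # odometry factor:     x2 - x1 = 0.9
    [-1.0, 0.0, 1.0],   # loop-closure factor: x2 - x0 = 2.2
])
b = np.array([0.0, 1.1, 0.9, 2.2])

# Joint least-squares solve over all poses at once.
poses, *_ = np.linalg.lstsq(A, b, rcond=None)
```

The odometry chain alone would put x2 at 2.0, while the loop closure says 2.2; the joint solve settles in between, spreading the inconsistency over all factors, which is exactly the global-consistency behavior the EKF's purely recursive update cannot reproduce.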
AI
Main characteristics:
(1) Heterogeneous Data Processing Capability: AI methods can handle data from diverse sources, formats, and structures (such as text, images, and sensor data) through multimodal models (e.g., Transformers, hybrid neural networks) to achieve unified processing.
(2) Automatic Feature Extraction and Representation Learning: Deep learning models (e.g., autoencoders, BERT) autonomously learn high-level abstract features without manual feature engineering, adapting to the complexity of multi-source data. This reduces reliance on domain-specific knowledge while improving feature expression efficiency and generalization.
(3) Robustness and Fault Tolerance: AI maintains stability amid data noise, missing values, or conflicts by leveraging adversarial training, data augmentation, and attention mechanisms to filter redundant information.
(4) Additional capabilities include cross-modal association and reasoning, real-time performance and scalability, and adaptability and dynamic updating.
Applicable scenarios:
(1) Autonomous Driving: Integrating data from cameras, radar, and LiDAR helps vehicles understand their surroundings from multiple perspectives, enabling safe navigation.
(2) Medical Diagnosis: Fusing patient medical records with medical imaging data allows for the rapid and accurate identification of diseases, thereby improving diagnostic efficiency.
(3) Smart Cities: In real estate management, combining sensor data, social media feedback, and Geographic Information System (GIS) data optimizes urban spatial layout and resource allocation.
(4) Industrial Internet of Things (IIoT): Analyzing sensor data enables predictive maintenance and optimization of equipment, thereby increasing production efficiency.
(5) Environmental Monitoring: Integrating satellite data, ground sensor data, and meteorological data allows for real-time monitoring of environmental changes, supporting environmental management and decision-making.
Uncertainty reasoning
Main characteristics:
(1) Strong capability in handling uncertainty: Uncertainty reasoning methods can effectively deal with uncertainty in data, including imprecision, inconsistency, and sensor errors. For example, the Dempster–Shafer theory of evidence can model and fuse uncertain information, while Bayesian methods handle uncertainty through probabilistic updates.
(2) High flexibility: Uncertainty reasoning methods can flexibly handle different types of data sources and meet the needs of fusing multi-source heterogeneous data. They are capable of integrating symbolic reasoning and numerical reasoning to process various types of information, ranging from qualitative to quantitative.
(3) Enhanced decision-making reliability: By properly handling uncertainty, uncertainty reasoning methods can improve the reliability of the fusion results and provide support for decision-making in complex environments.
(4) Limitations include high computational complexity, demanding data preprocessing, difficulty in model construction, and challenges in meeting real-time requirements.
Applicable scenarios:
(1) Target Recognition and Tracking in Complex Environments: In target recognition and tracking tasks, uncertainty reasoning methods can effectively handle uncertain data from multiple sensors (such as cameras and radars), thereby improving the accuracy and reliability of target detection.
(2) Medical Diagnosis: In the medical field, by integrating patients’ medical records, imaging data, and laboratory test results, uncertainty reasoning methods can more accurately identify diseases and enhance the accuracy of diagnosis.
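The Dempster–Shafer fusion mentioned in (1) can be sketched with Dempster's rule of combination for two sensors over a frame of discernment {A, B}: masses on intersecting focal elements are multiplied, and mass falling on the empty intersection (conflict) is renormalized away. The mass assignments are illustrative.

```python
from itertools import product

# Dempster's rule of combination for two basic mass assignments whose
# focal elements are frozensets over the frame of discernment.
def combine(m1, m2):
    fused, conflict = {}, 0.0
    for (s1, v1), (s2, v2) in product(m1.items(), m2.items()):
        inter = s1 & s2
        if inter:
            fused[inter] = fused.get(inter, 0.0) + v1 * v2
        else:
            conflict += v1 * v2                    # mass on the empty set
    return {k: v / (1.0 - conflict) for k, v in fused.items()}

m1 = {frozenset("A"): 0.6, frozenset("AB"): 0.4}                        # sensor 1
m2 = {frozenset("A"): 0.5, frozenset("B"): 0.3, frozenset("AB"): 0.2}   # sensor 2
fused = combine(m1, m2)
```

Both sensors lend support to hypothesis A, so the fused mass on {A} exceeds either individual assignment; when the conflict term approaches 1, the rule becomes unstable, which is one source of the limitations listed in (4).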
Ma, X.; Zhou, P.; He, X. Advances in Multi-Source Navigation Data Fusion Processing Methods. Mathematics 2025, 13, 1485. https://doi.org/10.3390/math13091485