Symmetry
  • Article
  • Open Access

3 March 2026

A Cooperative Navigation Algorithm Based on WGBP for Master–Slave UAV Formations

1 School of Electrical and Information Engineering, Jiangsu University of Technology, Changzhou 213000, China
2 Sunwave Communications Co., Ltd., Hangzhou 311000, China
3 School of Computing, Macquarie University, Sydney, NSW 2109, Australia
4 Department of Geodesy and Geoinformation, Vienna University of Technology, 1400 Vienna, Austria

Abstract

To address severe measurement error fluctuations and heterogeneous information source uncertainties in master–slave unmanned aerial vehicle (UAV) formations, a high-precision cooperative navigation method is proposed. Integrating inertial navigation, satellite positioning, and inter-UAV relative distance, the method innovatively introduces three key components: a multi-source information fusion-based cooperative navigation framework for accurate formation state estimation, a cooperative geometric dilution of precision (CGDOP) model based on hybrid observation configurations for positioning accuracy evaluation, and a dynamic-weight Gaussian belief propagation (WGBP) algorithm for adaptive measurement weight adjustment to suppress low-quality observation interference. Experiments demonstrate that WGBP achieves the lowest mean error in 22 out of 24 cases and the smallest standard deviation in 21 cases compared with EKF, GBP, and HRGBP. Empirical field experiments further demonstrate consistent superiority of WGBP in dynamic environments.

1. Introduction

In recent years, UAV technology [1,2] has played a significant role in emergency rescue, military reconnaissance, industrial automation, package delivery, and aerial photography, among other fields, and has been widely adopted [3]. A UAV formation [4,5] refers to a coordinated flight system composed of multiple drones that collaborate through onboard sensors and data links to exchange information and jointly execute complex missions, forming a unified control architecture with synchronized behavioral capabilities.
Formation-based cooperative navigation has attracted considerable attention due to its superior robustness, flexibility, and operational efficiency compared to standalone navigation systems. Unlike single-vehicle autonomous navigation, multi-UAV formations can share navigation data among members, enabling individual drones or the entire formation to enhance positioning accuracy, reduce localization errors, and improve mission reliability and flight safety [6,7].
Based on the internal information exchange architecture, UAV formation strategies are typically categorized into two types: parallel [8] and master–slave configurations [9]. The parallel (decentralized) approach treats all members as equal in functional capability and navigation precision. In contrast, the master–slave structure divides UAVs into master (high-precision) and slave (lower-precision) aircraft. The master UAV communicates navigation information to subordinates via onboard data links, allowing it to remain outside high-threat zones while directing slave UAVs into GPS-denied or hostile environments for reconnaissance and strike operations. Notable examples include the U.S. XQ-58 Valkyrie [10], the U.S.–Australian MQ-25 Stingray [11], and China’s FH-97A [12]—representative systems of the “loyal wingman” concept. These platforms enable manned-unmanned teaming, effectively engaging both aerial and ground targets. Field demonstrations have shown multiple XQ-58B Valkyries operating in formation with F-35, F-16C, and F-15E fighter jets, significantly enhancing overall combat effectiveness through coordinated maneuvers.
Current research on UAV cooperative navigation primarily relies on inertial navigation systems (INS), augmented by integration with global navigation satellite systems (GNSS), ultra-wideband (UWB), radar, Doppler sensors, angle-measuring instruments, infrared, optical, and microwave technologies to achieve multi-source data fusion and optimal state estimation [13,14].
In the domain of multi-sensor integration, GNSS/INS integrated navigation [15] has achieved high accuracy under favorable conditions. However, signal interference or operation in GNSS-challenged or denied environments degrades positioning performance [16,17]. Consequently, extensive research efforts—both domestic and international—have focused on enabling resilient cooperative navigation for UAV formations under such adverse conditions.
The literature [18] proposes a vision-based cooperative localization system that integrates a visual processing module and an enhanced manifold-based sensor fusion algorithm, delivering centimeter-level positioning accuracy for secondary UAVs in magnetically interference-free environments. The literature [19] proposes a cooperative navigation framework integrating “dead reckoning, relative ranging, and heading constraints” to enable accurate positioning of UAV swarms in GNSS-denied environments, while effectively predicting and mitigating colored noise in RSSI measurements.
Cooperative navigation algorithms can be broadly classified according to their underlying principles into four categories: mathematical model optimization [20,21], Bayesian filtering [22,23], probabilistic graphical models [24,25], and machine learning-based approaches [26,27]. A central challenge in these studies is how UAVs effectively fuse inter-agent measurements with self-contained navigation data to achieve accurate and reliable collective state estimation. The literature [28] proposes a distributed estimation framework based on the MSCKF that integrates range, visual, and intermittent position measurements. Relative ranging and co-observed features are utilized to establish direct and indirect geometric constraints among UAVs, respectively. The proposed cooperative estimation scheme demonstrates enhanced accuracy and robustness compared to non-cooperative approaches. The literature [29] presents a novel two-stage algorithm grounded in measurement adjustment theory, which integrates pairwise relative range measurements with cluster-level topological constraints. By exploiting spatial relationships within the cluster, the method effectively mitigates ranging errors, while further refining distance estimates through the inherent robustness of the network topology. The literature [30] proposes the Ranging-based Manifold Gradient Fusion (RM-MGF) method and derives an error propagation model for cooperative nodes based on the dilution of precision (DOP), thereby enhancing both the accuracy and computational efficiency of the cooperative positioning algorithm.
In recent years, researchers have introduced the belief propagation [31] algorithm based on probabilistic graphical models into the field of cooperative navigation. The literature [32] proposes a distributed cooperative navigation framework based on factor graphs (FG) [33] and belief propagation (BP), which effectively fuses multi-source onboard sensor measurements and inter-UAV relative ranging information. The integration of a simplified Gaussian particle filter (GPF) enables efficient message computation and significantly reduces the algorithmic computational load. The literature [34] proposes a Hybrid Robust Gaussian Belief Propagation (HRGBP) algorithm that integrates the fault detection and exclusion (FDE) method and Huber’s M-estimation within an Interactive Multiple Model (IMM) framework, thereby enhancing the fault-tolerant robustness of the vehicle cooperative positioning system.
Although the aforementioned studies have proposed various solutions for cooperative navigation, they do not adequately account for scenarios involving significant fluctuations in equipment measurement errors and heterogeneous uncertainty levels across different information sources. Consequently, when sensor measurements from individual formation members exhibit varying degrees of uncertainty, the adaptability of these methods becomes notably limited.
This paper proposes a cooperative navigation method based on dynamic-weight Gaussian belief propagation for master–slave UAV formation operating under significant fluctuations in equipment measurement errors and heterogeneous uncertainties across information sources. By incorporating the relative distance measurements between the master and slave UAVs and integrating the observation configurations from both the master UAV to satellites and between the master and slave UAVs, the method dynamically adjusts the measurement weights of slave UAVs to enable precise state estimation of formation members, thereby further enhancing navigation accuracy. The main contributions of this paper are as follows:
(1) This study proposes a master–slave cooperative navigation framework for UAV formation, which integrates multi-source heterogeneous observations: INS data, GNSS measurements, and inter-UAV relative distance information. By fusing these complementary observations, the framework realizes high-precision, stable state estimation of all formation members in complex scenarios.
(2) A cooperative geometric dilution of precision is proposed with innovative improvements, which jointly considers two key geometric distributions in UAV formation positioning: the spatial distribution between the master UAV and its satellites, and the measurement geometric distribution between the master UAV and slave UAVs. Based on this comprehensive consideration, this study enables effective assessment of UAV formation positioning accuracy under varying observation conditions.
(3) A dynamic-weight Gaussian belief propagation-based cooperative navigation algorithm is proposed, which innovatively integrates the CGDOP-based geometric evaluation and real-time error perception to adaptively adjust measurement factor weights. This design achieves targeted suppression of low-quality observations and forms a dynamic weight update mechanism, thereby significantly enhancing the positioning accuracy and robustness of the traditional GBP algorithm in complex UAV formation scenarios with varying observation conditions.
The rest of this paper is structured as follows. Section 2 describes the System Architecture. The proposed method is detailed in Section 3. Section 4 discusses the experimental comparison results, and Section 5 concludes the paper.

2. System Architecture

The cooperative navigation system architecture for master–slave UAV formations is illustrated in Figure 1, where $m$ denotes the master UAV and $k_n$ represents the $n$-th slave UAV. Typically, the formation consists of a single master UAV capable of receiving GNSS signals and $n$ slave UAVs operating in GNSS-denied environments. The master UAV achieves high-precision navigation through an integrated INS/GNSS system and performs ranging using distance measurement sensors such as laser radar. This navigation and ranging information is shared with all UAVs via a bidirectional data link. Subsequently, the INS data, GNSS measurements, and relative ranging data from all UAVs are jointly fed into a Gaussian belief propagation (GBP) model. Within this framework, weight estimation is performed by integrating CGDOP model analysis with prediction error metrics, enabling adaptive fusion and refinement of heterogeneous data sources according to their reliability. Furthermore, the WGBP algorithm enables online weight adjustment through feedback of estimation residuals, thereby continuously improving the accuracy and robustness of the formation’s state estimation.
Figure 1. Block diagram of the system architecture.

3. Methodology

3.1. Cooperative Navigation Model for UAV Formation

The state of the UAV formation at time t is defined as:
$X(t) = \{ x_1(t), x_2(t), \dots, x_k(t) \}, \quad k \in U$
$x_k = [\phi_k, v_k, p_k, \delta_a, \delta_\omega]$
where $U$ denotes the master–slave UAV formation, $\phi_k$ denotes the attitude of UAV $k$, $v_k$ represents the velocity of UAV $k$, $p_k$ represents the position of UAV $k$, and $\delta_a$ and $\delta_\omega$ represent the accelerometer zero bias and gyroscope drift, respectively.
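As an illustrative sketch (not part of the paper), the per-UAV state $x_k$ can be held in a small container type; the class and field names below are hypothetical:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class UAVState:
    """Per-UAV state x_k = [phi_k, v_k, p_k, delta_a, delta_omega]."""
    phi: np.ndarray          # attitude (roll, pitch, yaw) [rad]
    v: np.ndarray            # velocity [m/s]
    p: np.ndarray            # position [m]
    delta_a: np.ndarray      # accelerometer zero bias
    delta_omega: np.ndarray  # gyroscope drift

    def as_vector(self) -> np.ndarray:
        """Flatten to a single 15-dimensional state vector."""
        return np.concatenate(
            [self.phi, self.v, self.p, self.delta_a, self.delta_omega]
        )
```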
Since the master UAV $m$ is capable of receiving GNSS signals, its GNSS measurement information is defined as:
$z_m^{\mathrm{GNSS}}(t) = h^{\mathrm{GNSS}}(x_m(t)) + \delta_m^{\mathrm{GNSS}}$
where $h^{\mathrm{GNSS}}$ denotes the GNSS measurement function, and $\delta_m^{\mathrm{GNSS}}$ denotes the corresponding measurement noise.
Each UAV is equipped with an INS for state prediction and utilizes laser radar for inter-UAV ranging. The INS measurement information is defined as:
$z_k^{\mathrm{INS}}(t) = h^{\mathrm{INS}}(x_k(t-1)) + \zeta_k^{\mathrm{INS}}$
where $h^{\mathrm{INS}}$ denotes the INS state prediction function, and $\zeta_k^{\mathrm{INS}}$ denotes the INS measurement noise.
The distance measurement is defined as:
$z_{k,l}^{\mathrm{Range}}(t) = h^{\mathrm{Range}}(x_k(t), x_l(t)) + \delta_{k,l}^{\mathrm{Range}}, \quad l \in U_k$
where $h^{\mathrm{Range}}$ denotes the distance measurement function, $\delta_{k,l}^{\mathrm{Range}}$ denotes the corresponding measurement noise, and $U_k$ denotes the set of UAVs within the communication range of $k$.
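To make the ranging model concrete, here is a minimal sketch of $h^{\mathrm{Range}}$ with additive Gaussian noise (function names and the noise level are hypothetical; the paper does not prescribe an implementation):

```python
import numpy as np

def h_range(p_k, p_l):
    """Ideal distance measurement function h^Range: Euclidean distance."""
    return float(np.linalg.norm(np.asarray(p_k, float) - np.asarray(p_l, float)))

def simulate_range(p_k, p_l, sigma=0.1, rng=None):
    """Noisy ranging z^Range = h^Range(x_k, x_l) + delta, delta ~ N(0, sigma^2)."""
    rng = np.random.default_rng(0) if rng is None else rng
    return h_range(p_k, p_l) + rng.normal(0.0, sigma)
```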

3.2. Cooperative Geometric Dilution of Precision for UAV Formation

The DOP [35,36] is usually used as an indicator to evaluate the quality of the geometric configuration of GNSS observation satellites, and it can to some extent reflect the influence of the geometric position of the observation satellites on the measurement accuracy. For a UAV formation fusion model, in addition to the satellite geometric positions, the relative position information within the cluster also affects system accuracy. A cooperative geometric dilution of precision is therefore established in this section by jointly considering the observation vector information between the satellites and the master UAV, as well as that between the master and slave UAVs, thereby optimizing the geometric configuration of the master–slave UAV information.
The unit direction vector from m to its i -th observing satellite is denoted as:
$u_{m,i}^{s} = \left[ \dfrac{s_{i1}^{\mathrm{sat}} - x_m}{r_{m,i}}, \dfrac{s_{i2}^{\mathrm{sat}} - y_m}{r_{m,i}}, \dfrac{s_{i3}^{\mathrm{sat}} - z_m}{r_{m,i}} \right]^{T}$
where $s_i^{\mathrm{sat}} = [s_{i1}^{\mathrm{sat}}, s_{i2}^{\mathrm{sat}}, s_{i3}^{\mathrm{sat}}]^{T}$ denotes the position vector of the $i$-th satellite, and $r_{m,i}$ represents the distance between $m$ and its $i$-th observing satellite.
The relative direction vector from m to its slave UAV k is defined as:
$u_{m,k} = \dfrac{[(x_m - x_k), (y_m - y_k), (z_m - z_k)]^{T}}{d_{m,k}}$
where [ x m , y m , z m ] denotes the three-dimensional position vector of m , [ x k , y k , z k ] denotes the three-dimensional position vector of k , and d m , k represents the distance between m and k .
The mixed direction matrix formed by the aforementioned unit direction vectors is then defined as:
$U_m = [u_{m,1}^{s}, u_{m,2}^{s}, \dots, u_{m,o}^{s}, u_{m,1}, u_{m,2}, \dots, u_{m,n}]^{T}$
The explicit formula for the cooperative geometric dilution of precision of a heterogeneous system comprising UAV formations and satellites is given by:
$\mathrm{CGDOP} = \operatorname{trace}\left[ (U_m^{T} U_m)^{-1} \right]$
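A minimal numerical sketch of the CGDOP computation (assuming the unit direction vectors are stacked as rows of $U_m$, so that $U_m^{T} U_m$ is a 3×3 matrix; the function names are hypothetical):

```python
import numpy as np

def unit_vec(frm, to):
    """Unit direction vector from point `frm` to point `to`."""
    d = np.asarray(to, float) - np.asarray(frm, float)
    return d / np.linalg.norm(d)

def cgdop(master, sat_positions, slave_positions):
    """CGDOP = trace[(U^T U)^{-1}] over the mixed direction matrix U_m,
    whose rows are master-to-satellite and master-to-slave unit vectors."""
    rows = [unit_vec(master, s) for s in sat_positions]
    rows += [unit_vec(master, k) for k in slave_positions]
    U = np.vstack(rows)                      # (o + n) x 3
    return float(np.trace(np.linalg.inv(U.T @ U)))
```

With three mutually orthogonal observation directions, $U_m^{T} U_m$ is the identity and the CGDOP attains its minimum value of 3 for this definition.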

3.3. A Dynamic Weight-Based Gaussian Belief Propagation Algorithm

The traditional Gaussian belief propagation algorithm assigns uniform weights to all information sources during message updates, without adequately accounting for the varying levels of uncertainty across these sources. This limitation can lead to fusion outcomes being adversely influenced by low-credibility information, consequently compromising the accuracy and robustness of UAV state estimation. To address this issue, this section proposes a refined approach, WGBP, which jointly incorporates cluster configuration, process noise, and observation noise to dynamically adjust weighting factors, thereby minimizing state estimation errors.

3.3.1. Gaussian Belief Propagation Algorithm Probabilistic Model

As illustrated in Figure 2, the first step involves constructing a probabilistic message model based on Gaussian belief propagation to facilitate distributed state estimation between the master and slave UAVs.
Figure 2. GBP Probabilistic Model for UAV Formation.
The GNSS node of master UAV m is denoted as:
$g_m^{t} = p\left( z_m^{\mathrm{GNSS}}(t) \mid x_m^{t} \right)$
The inertial node of slave UAV k is denoted as:
$b_k(t) = p\left( x_k^{t} \mid x_k^{t-1}, z_k^{\mathrm{INS}}(t) \right)$
The ranging node between k and l is denoted as:
$d_{k,l}^{t} = p\left( z_{k,l}^{\mathrm{Range}}(t) \mid x_k^{t}, x_l^{t} \right), \quad k \in U,\ l \in U_k$
According to the probabilistic graphical model illustrated in Figure 2, the posterior distribution of the state of m at time t + 1 is given by:
$p(x_m^{t+1}) = \delta_{b_m^{t+1} \to x_m^{t+1}} \cdot \delta_{g_m^{t+1} \to x_m^{t+1}} \cdot \prod_{l \in U_m} \delta_{d_{m,l}^{t+1} \to x_m^{t+1}}$
where $\delta_{b_m^{t+1} \to x_m^{t+1}}$ denotes the message passed from the inertial node $b_m(t+1)$ to the state node $x_m(t+1)$, $\delta_{g_m^{t+1} \to x_m^{t+1}}$ denotes the state message derived from the GNSS node at time $t+1$, and $\delta_{d_{m,l}^{t+1} \to x_m^{t+1}}$ denotes the state message derived from the ranging node at time $t+1$.
These messages can be formally expressed as:
$\delta_{b_m^{t+1} \to x_m^{t+1}} = \delta_{x_m^{t} \to b_m^{t+1}} \cdot b_m(t+1)$
$\delta_{g_m^{t+1} \to x_m^{t+1}} = \delta_{x_m^{t+1} \to g_m^{t+1}} \cdot g_m^{t+1}$
$\delta_{d_{m,l}^{t+1} \to x_m^{t+1}} = \delta_{x_m^{t+1} \to d_{m,l}^{t+1}} \cdot d_{m,l}(t+1)$
where $\delta_{x_m^{t} \to b_m^{t+1}}$ represents the message passed from $x_m(t)$ to $b_m(t+1)$, which can be regarded as the result of the previous cooperative navigation and positioning step. $\delta_{x_m^{t+1} \to g_m^{t+1}}$ represents the message passed from $x_m(t+1)$ to the GNSS node, and $\delta_{x_m^{t+1} \to d_{m,l}^{t+1}}$ represents the message passed from $x_m(t+1)$ to the ranging node. Here, it is assumed that the message for $x_m(t+1)$ is generated by the inertial node and propagated to node $g_m^{t+1}$.
The posterior distribution of the state of $m$ at time $t+1$ is thus derived as:
$p(x_m^{t+1}) = \delta_{x_m^{t} \to b_m^{t+1}} \cdot b_m(t+1) \cdot g_m^{t+1} \cdot \prod_{l \in U_m} \delta_{x_m^{t+1} \to d_{m,l}^{t+1}} \cdot d_{m,l}(t+1)$
By analogy with the above derivation, the posterior distribution of the state of k at time t + 1 is given by:
$p(x_k^{t+1}) = \delta_{x_k^{t} \to b_k^{t+1}} \cdot b_k(t+1) \cdot \prod_{l \in U_k} \delta_{x_k^{t+1} \to d_{k,l}^{t+1}} \cdot d_{k,l}(t+1)$
Assuming that all states and observation noises are governed by multivariate Gaussian distributions, the messages propagated in the probabilistic graphical model can be represented as Gaussian distributions. Specifically, (i) Observation noise independence: GNSS pseudorange residuals and laser rangefinder measurement errors are mutually independent and each follows a zero-mean Gaussian distribution with sensor-specific covariance—consistent with standard modeling practices for these modalities under nominal operating conditions. (ii) Inter-UAV constraint independence: relative pose constraints (e.g., inter-drone distance or bearing measurements) are corrupted by additive noise that is independent across UAV pairs and statistically decoupled from all observation noise sources. (iii) Prior-observation independence: The initial state prior (e.g., the Gaussian distribution characterizing the UAV’s initial position estimate) is statistically independent of all subsequent measurements, ensuring no information leakage between prior knowledge and incoming observations, a prerequisite for valid Bayesian updating.
By exchanging only the mean vectors and error covariance matrices, the GBP algorithm enables efficient message passing and facilitates updates of both states and node messages, thereby significantly reducing the computational and communication overhead.
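Since each message reduces to a mean vector and a covariance matrix, multiplying Gaussian messages can be sketched in information (canonical) form, where precision matrices and precision-weighted means simply add. This is the generic Gaussian-product identity underlying GBP, not code from the paper:

```python
import numpy as np

def fuse_messages(messages):
    """Multiply Gaussian messages, given as (mean, covariance) pairs,
    in information form: precisions add, information vectors add."""
    Lam = sum(np.linalg.inv(S) for _, S in messages)              # total precision
    eta = sum(np.linalg.inv(S) @ np.asarray(m, float)             # total information
              for m, S in messages)
    Sigma = np.linalg.inv(Lam)
    return Sigma @ eta, Sigma                                     # fused mean, covariance
```

For example, fusing two scalar messages N(0, 1) and N(2, 1) yields mean 1 and variance 0.5, as expected from the product of Gaussians.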

3.3.2. Optimization of Weights

To mitigate the influence of low-confidence information on cluster state estimation and enhance the accuracy of fusion results, a weight is assigned to each received message to reflect its uncertainty.
The weight coefficient w k i is defined to quantify the uncertainty of the message transmitted from node k to node i . The specific expression is given by:
$w_{k \to i} = \dfrac{1}{\operatorname{tr}(\hat{\Sigma}_{k \to i})} - \eta \dfrac{\partial E}{\partial w_{k \to i}}$
where $\eta$ denotes the step size used to update the weights, $E$ represents the loss function, and $\hat{\Sigma}_{k \to i}$ represents the covariance of the message. $w_{k \to i}$ is inversely proportional to $\operatorname{tr}(\hat{\Sigma}_{k \to i})$.
The loss function E serves to quantify both the accuracy of the model’s predictions and its robustness to noise. The corresponding formula is given below:
$E = \dfrac{1}{N} \sum_{k=1}^{N} \mathrm{CGDOP}^{2} \, (e_k - \hat{e}_k)^{2} \left( \dfrac{1}{\sigma_{z_k}^{2}} + \dfrac{1}{\sigma_{\mathrm{proc}}^{2}} \right)$
where $N$ denotes the number of UAVs in the formation, $e_k$ represents the estimation error at the $k$-th node, $\hat{e}_k$ denotes the prediction error at the $k$-th node, and $\sigma_{z_k}^{2}$ and $\sigma_{\mathrm{proc}}^{2}$ denote the observation noise variance and process noise variance at the $k$-th node, respectively.
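A direct transcription of the loss $E$ into code might look as follows (a sketch treating CGDOP as a scalar for the current epoch; argument names are hypothetical):

```python
import numpy as np

def loss_E(cgdop_val, e, e_hat, sigma_z2, sigma_proc2):
    """E = (1/N) * sum_k CGDOP^2 * (e_k - e_hat_k)^2 * (1/sigma_z_k^2 + 1/sigma_proc^2)."""
    e = np.asarray(e, float)
    e_hat = np.asarray(e_hat, float)
    sigma_z2 = np.asarray(sigma_z2, float)
    terms = cgdop_val**2 * (e - e_hat)**2 * (1.0 / sigma_z2 + 1.0 / sigma_proc2)
    return float(terms.mean())
```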
When node $i$ updates its state estimate, it applies the weight coefficient $w_{k \to i}$ to the information received from node $k$ for weighted fusion. A smaller CGDOP value implies a weaker influence on the loss function $E$ and corresponds to a higher node weight. Nodes update their current state estimates upon receiving partial messages from neighbors, without waiting for all neighboring nodes, and immediately broadcast the updated message to the other neighbors. Message transmission is performed asynchronously at each node, enabling independent processing of partial information.
Regarding loop handling, the algorithm adopts our team’s previous research on belief propagation [37]. This method ensures that messages always flow from nodes with higher confidence to those with lower confidence, avoiding redundant transmission. To ensure the asymptotic convergence of the loss function and the stability of the filtering, system observability is imposed as a necessary condition.

3.3.3. Update of the Covariance and Mean

Prior to completing the update of its own state information, each node should first update the covariance and mean according to the assigned weights. The specific procedure is as follows:
First, upon receiving messages from neighboring nodes—each based on their current state estimates—each node updates its belief distribution and carries out the covariance matrix update. The corresponding formula is as follows:
$\hat{\Sigma}_i^{-1} = \Sigma_i^{-1} + \sum_{k \in N(i)} w_{k \to i} \Lambda_{k \to i}$
where $\hat{\Sigma}_i^{-1}$ denotes the inverse of the updated covariance matrix (i.e., the precision matrix), $N(i)$ represents the set of neighboring nodes of node $i$, $\Sigma_i^{-1}$ is the precision matrix associated with the node’s current state estimate, and $\Lambda_{k \to i}$ denotes the precision matrix of the information transmitted from node $k$ to node $i$.
Subsequently, the updated mean vector is computed, and the corresponding formula is provided below:
$\hat{\mu}_i = \hat{\Sigma}_i \left( \Sigma_i^{-1} \mu_i + \sum_{k \in N(i)} w_{k \to i} \Lambda_{k \to i} \mu_{k \to i} \right)$
where $\hat{\Sigma}_i$ denotes the updated covariance matrix, $\mu_i$ represents the mean vector of the current state estimate at node $i$, and $\mu_{k \to i}$ denotes the mean vector associated with the information transmitted from node $k$ to node $i$.
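The two update formulas above can be sketched together as a single weighted node update (a sketch under the Gaussian assumptions of Section 3.3.1; all names are hypothetical):

```python
import numpy as np

def update_node(Sigma_i, mu_i, incoming, weights):
    """Weighted GBP node update:
      precision: Sigma_hat^{-1} = Sigma_i^{-1} + sum_k w_{k->i} * Lambda_{k->i}
      mean:      mu_hat = Sigma_hat (Sigma_i^{-1} mu_i + sum_k w_{k->i} Lambda_{k->i} mu_{k->i})
    `incoming` is a list of (Lambda_ki, mu_ki) pairs from neighboring nodes."""
    P = np.linalg.inv(np.asarray(Sigma_i, float))
    info = P @ np.asarray(mu_i, float)
    for (Lam, mu), w in zip(incoming, weights):
        Lam = np.asarray(Lam, float)
        P = P + w * Lam
        info = info + w * Lam @ np.asarray(mu, float)
    Sigma_hat = np.linalg.inv(P)
    return Sigma_hat @ info, Sigma_hat
```

For a scalar prior N(0, 1) fused with one unit-weight message of precision 1 and mean 2, the update returns mean 1 and variance 0.5, matching the standard Gaussian product.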

3.3.4. Algorithm Implementation Procedure

The pseudocode of Algorithm 1 is presented below.
Algorithm 1: WGBP
Input: UAVs’ states $X$, number of UAVs $U$, maximum iterations $T_{\mathrm{num}}$, optimization dimension $D$, measurements $z = [z_m^{\mathrm{GNSS}}, z_k^{\mathrm{INS}}, z_{k,l}^{\mathrm{Range}}]$, convergence threshold $Th_e$.
Initialize parameters and model;
Initialize UAVs’ states using the state message at time $t-1$ and the IMU measurements $\delta_{b_m^{t+1} \to x_m^{t+1}}$ by Equation (13);
While ($t \le T_{\mathrm{num}}$ and $\operatorname{tr}(\hat{\Sigma}_i) > Th_e$) do
  For all UAVs do
    For all $g_m$ and $d_{k,l}$ nodes do
      Compute CGDOP for each constraint factor by Equation (8);
      Compute weight coefficients by Equation (18);
    End for
    Update the state of the UAV based on $\delta_{g_m^{t+1} \to x_m^{t+1}}$ by Equation (14);
    Update the state of the UAV based on $\delta_{d_{m,l}^{t+1} \to x_m^{t+1}}$ by Equation (15);
    Update the covariance and mean by Equations (20) and (21);
  End for
End while
Return: $\mathcal{N}(x_i; \mu_i, \hat{\Sigma}_i)$;
Output: State estimates of UAVs from the converged beliefs.
By introducing dynamic weight updates into the GBP algorithm, information with higher credibility accounts for a larger proportion of the optimization, thereby improving the accuracy and robustness of the fusion result. Accordingly, the overall framework of the WGBP algorithm is shown in Figure 3.
Figure 3. Flowchart of WGBP.

3.3.5. Computational Complexity and Communication Overhead Analysis

The computational cost of the belief propagation algorithm is generally determined by four key parameters: the number of UAVs ($U$), the state dimension ($D$), the total number of constraint factors ($C$), and the number of iterations required for convergence ($T_{\mathrm{num}}$). As derived earlier, the overall time complexity of WGBP comprises three dominant components: (i) CGDOP and weight computation, with complexity $O(C D^3)$; (ii) state update, with complexity $O(U D^2)$; and (iii) covariance and mean update, with complexity $O(U D^3 + C D^2)$. Accounting for $T_{\mathrm{num}}$ iterations to reach convergence, the aggregate time complexity of WGBP is:
$O\left(T_{\mathrm{num}} (U + C)(D^{3} + D^{2})\right)$
In UAV formation scenarios, the number of constraints scales linearly with the number of UAVs, so the overall complexity reduces to $O(T_{\mathrm{num}} U (D^{3} + D^{2}))$. Hence, WGBP exhibits linear scalability with respect to $U$, making it suitable for real-time distributed localization in practical UAV formations.
In UAV formation cooperative navigation, the communication overhead of the algorithm is equally critical. For WGBP, this overhead arises primarily from two phases of inter-agent data exchange: (i) the CGDOP computation phase, in which each UAV transmits its 3D position vector to its neighbors, so the total per-iteration communication volume across the formation is $O(U)$; and (ii) the mean and covariance update phase, with volume $O(U D^2)$.
Hence, the aggregate communication cost of WGBP is:
$O\left(T_{\mathrm{num}} U (D^{2} + D + 1)\right)$
As shown above, the overall communication overhead scales linearly with the number of UAVs ($U$), quadratically with the state dimension ($D$), and linearly with the number of convergence iterations ($T_{\mathrm{num}}$). In 3D position-only localization, the total overhead reduces to $O(T_{\mathrm{num}} U)$, well within the bandwidth constraints of typical UAV wireless links.

4. Discussions and Results

4.1. Parameter Settings

To verify the effectiveness of the proposed algorithm, the following experimental setup is adopted. Based on the requirements for UAV formation flight, a formation configuration comprising one master UAV denoted as m and three slave UAVs k i ( i = 1 , 2 , 3 ) is established. The initial position of m is (116.407°, 39.9042°, 120 m), while the initial positions of k i are (116.426°, 39.8623°, 127.395 m), (116.360°, 39.9245°, 116.027 m), and (116.390°, 39.9398°, 116.245 m), respectively.
All algorithms in this experiment were implemented and executed in MATLAB R2023b on a desktop computer equipped with an Intel® Core™ i7-7700 CPU (3.60 GHz) and 16 GB RAM. During the simulation experiment phase, we set the size of the UAV to be the same as that of the actual experimental UAV. Specifically, the master UAV is a quadcopter with a length, width, and height of 950 mm × 950 mm × 100 mm, and the slave UAV is 350 mm × 350 mm × 100 mm. Table 1 lists the specific parameters of the sensors mounted on each UAV. Figure 4 illustrates the flight trajectories of each UAV.
Table 1. Specific parameters of sensors equipped on each UAV.
Figure 4. The flight trajectories of the UAV Formation. (a) Master UAV; (b) Slave UAVs.

4.2. Influence of Various Formation Configurations on CGDOP

In the CGDOP model, the observation vectors between satellites and the master UAV, as well as those between the master and its slave UAVs, jointly influence the CGDOP value, which serves as an indicator of positioning accuracy to a certain extent. The following analysis examines how different master–slave configurations affect CGDOP. Four typical formation configurations are derived from the UAV flight trajectories, and the three-dimensional coordinates of each UAV in these formations are presented in Table 2.
Table 2. The positions of the UAVs in the four typical formation configurations.
Figure 5 illustrates the temporal variation of CGDOP under different master–slave UAV formation configurations. Table 3 presents the distribution of CGDOP values across different levels. As shown in Figure 5a, a large number of CGDOP values exceed 50, indicating poor satellite–vehicle geometry. In Figure 5c, CGDOP values exhibit relatively high peaks, and the proportion of values falling within Level B is low. Figure 5d shows a more dispersed distribution, with numerous CGDOP values exceeding 20, reflecting moderate positioning performance. In contrast, Figure 5b demonstrates greater stability: although two values are slightly elevated, the majority remain within Level B, resulting in an overall stable and favorable CGDOP profile.
Figure 5. The temporal variation of CGDOP under different formation configurations. (a) i; (b) ii; (c) iii; (d) iv.
Table 3. The distribution of CGDOP values across different levels.
As shown in Table 3, the proportion of CGDOP values at Level A is zero across all four unmanned aircraft configurations. At Level B, Configuration II exhibits the highest percentage, reaching 68%, while simultaneously showing the lowest proportion of CGDOP values in Level F. Furthermore, the average CGDOP value for Configuration II is the smallest among all configurations as calculated from the data in Table 3. Therefore, maintaining Configuration II during flight leads to improved positioning performance due to its superior satellite–vehicle geometry.
Figure 6 presents the average CGDOP values for the four configurations. As shown in Figure 6, among all flight configurations, Configuration II yields the smallest CGDOP value. This indicates that the information received by the UAV cluster is more comprehensive, enabling a more complete acquisition of spatial geometry and reflecting a superior geometric configuration at the current epoch. When the master UAV flies along the 45° direction while the three slave UAVs fly toward 135°, −45°, and −135°, respectively, the angles between the unit direction vectors $u_{m,k}$ increase, resulting in a more uniform spatial distribution of $u_{m,k}$. Consequently, signals are received from diverse directions, leading to smaller dot products between the $u_{m,k}$ vectors. This geometric dispersion causes the eigenvalues of the matrix $U_m^{T} U_m$ to be closer to those of the identity matrix, minimizing the trace of the inverse matrix $(U_m^{T} U_m)^{-1}$ and thereby reducing the CGDOP value.
Figure 6. Average CGDOP across different formation configurations. (a) i; (b) ii; (c) iii; (d) iv.
This geometric configuration effectively disperses the positions of UAVs in three-dimensional space, maintaining optimal relative distances and altitude differences among them. This spatial arrangement establishes a well-defined formation depth, enhances signal diversity, and enables the positioning system to acquire more comprehensive spatial information. As a result, the system achieves maximum observational capability during positioning tasks, leading to an optimized CGDOP value and improved positioning accuracy.
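The geometric argument above can be checked numerically: well-dispersed unit direction vectors yield a markedly lower trace of $(U_m^{T} U_m)^{-1}$ than nearly collinear ones. A small illustrative check, with hypothetical direction sets (not the paper's actual trajectories):

```python
import numpy as np

def cgdop_from_dirs(dirs):
    """trace[(U^T U)^{-1}] for unit direction vectors stacked as rows of U."""
    U = np.vstack([np.asarray(d, float) / np.linalg.norm(d) for d in dirs])
    return float(np.trace(np.linalg.inv(U.T @ U)))

# Dispersed set: headings near 45, 135, -135 degrees plus a vertical direction,
# versus four nearly collinear directions clustered around the x-axis.
spread    = [[1, 1, 0], [-1, 1, 0], [-1, -1, 0], [0, 0, 1]]
clustered = [[1, 0.1, 0], [1, -0.1, 0], [1, 0, 0.1], [1, 0, -0.1]]
dispersed_is_better = cgdop_from_dirs(spread) < cgdop_from_dirs(clustered)
```

For the dispersed set, $U_m^{T} U_m$ has eigenvalues (2, 1, 1), giving a trace of the inverse of 2.5, whereas the clustered set leaves two directions nearly unobserved and inflates the value by more than an order of magnitude.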

4.3. Comparative Analysis of Simulation Experiments

To rigorously evaluate algorithmic performance, we conducted controlled simulation experiments across three representative interference scenarios: Scenario 1 (baseline condition: no interference affecting either the master or slave UAVs), Scenario 2 (additive GNSS measurement noise with a 10 dB signal-to-noise ratio degradation on the master UAV), and Scenario 3 (a ranging bias of +10 dB applied to the slave UAV $k_1$). For each scenario, all comparative algorithms (EKF, GBP, HRGBP, and WGBP) were executed over 30 independent Monte Carlo trials to ensure statistical reliability.
The quantitative comparison results are presented in Table 4 and Figure 7, Figure 8 and Figure 9. Table 4 reports the mean and standard deviation of estimation errors for each algorithm, enabling objective assessment of accuracy and robustness under varying interference conditions. Figure 7, Figure 8 and Figure 9 present box plots of final estimation errors alongside median-convergence curves, which jointly illustrate both the distributional characteristics and convergence behavior of each algorithm.
Table 4. The comparison results of the simulation experiments for each algorithm.
Figure 7. Comparative performance results of all algorithms in Scenario 1. (a) Positioning error; (b) Convergence curves for positioning error; (c) Attitude errors; (d) Convergence curves for attitude error.
Figure 8. Comparative performance results of all algorithms in Scenario 2. (a) Positioning error; (b) Convergence curves for positioning error; (c) Attitude errors; (d) Convergence curves for attitude error.
Figure 9. Comparative performance results of all algorithms in Scenario 3. (a) Positioning error; (b) Convergence curves for positioning error; (c) Attitude errors; (d) Convergence curves for attitude error.
Figure 7 presents comparative performance results of all algorithms in Scenario 1. As shown in Figure 7, EKF exhibits the largest positioning error (median 9–11 m) and slowest convergence. GBP reduces the error to 8.5–9.5 m, while HRGBP and WGBP achieve further improvements, with WGBP showing the smallest error (8.5–9 m), narrowest distribution, and fastest convergence. For attitude estimation, EKF again performs worst, with a median error of 0.6–1.2° and large fluctuations. GBP reduces the error to 0.3–0.8°, followed by HRGBP (0.3–0.6°). WGBP achieves the best performance, with the lowest median error (0.2–0.4°), most compact distribution, and fastest convergence to a minimal steady-state error.
Figure 8 presents comparative performance results of all algorithms in Scenario 2. In Scenario 2, all algorithms exhibit degraded performance compared to the baseline Scenario 1. The positioning error medians increase by approximately 0.5–1.0 m, and the attitude error medians rise by 0.1–0.2° across all methods. However, the relative performance ranking remains consistent: WGBP > HRGBP > GBP > EKF. Specifically, WGBP demonstrates the strongest robustness, achieving the smallest positioning error (median ~8.6–9.0 m) and attitude error (median 0.25–0.45°), with the narrowest distribution and fastest convergence.
Figure 9 presents comparative performance results of all algorithms in Scenario 3, where every algorithm degrades further relative to Scenario 2. The performance hierarchy nevertheless remains unchanged, with WGBP consistently outperforming HRGBP, GBP, and EKF. Notably, WGBP exhibits the least sensitivity to the increased range measurement noise on slave UAV k1, maintaining the most compact error distributions and the fastest convergence.
From an ablation perspective, the performance gap between GBP and WGBP (all WGBP parameters are kept identical to GBP except for the introduction of the weighting mechanism) across all scenarios clearly validates the proposed weighted resampling strategy. WGBP consistently reduces the median positioning error by 5–15% relative to GBP, and the improvement becomes more pronounced under noise mismatch (Scenarios 2 and 3). This demonstrates that weighted resampling effectively mitigates the impact of model inaccuracies and sensor noise, leading to more stable and accurate state estimates.
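The weighting mechanism that separates WGBP from plain GBP can be illustrated on scalar Gaussian messages. In the sketch below, fusion is the standard precision-weighted combination used in Gaussian belief propagation; the residual-based weight rule is a generic robust heuristic standing in for the paper's CGDOP-driven scheme, and all numbers are illustrative.

```python
import numpy as np

def fuse_gaussian_messages(means, variances, weights):
    """Precision-weighted fusion of scalar Gaussian messages.

    In plain GBP every message enters with weight 1; the weighting mechanism
    scales each message's precision by w_k in [0, 1], so low-quality
    observations contribute less to the fused estimate."""
    precisions = np.asarray(weights) / np.asarray(variances)
    post_var = 1.0 / precisions.sum()
    post_mean = post_var * (precisions * np.asarray(means)).sum()
    return post_mean, post_var

def residual_weights(means, variances, k=3.0):
    """Hypothetical dynamic weights: down-weight messages whose mean deviates
    from the consensus by more than k standard deviations (a simple robust
    rule, not the paper's exact scheme)."""
    means = np.asarray(means, dtype=float)
    sigma = np.sqrt(np.asarray(variances, dtype=float))
    z = np.abs(means - np.median(means)) / sigma
    return np.where(z <= k, 1.0, k / z)

# Three consistent range-derived estimates plus one biased outlier.
means = [10.1, 9.9, 10.0, 14.0]
variances = [0.25, 0.25, 0.25, 0.25]

plain, _ = fuse_gaussian_messages(means, variances, np.ones(4))
weighted, _ = fuse_gaussian_messages(means, variances,
                                     residual_weights(means, variances))
```

The unweighted fusion is pulled toward the biased message, while the weighted fusion stays near the consensus, which is the qualitative effect the ablation above attributes to the weighting mechanism.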
Figure 10 presents the probability distribution curves of the position errors of the master UAV under various configurations using the WGBP algorithm. This part of the results further confirms that Configuration II optimizes the geometric layout between the master aircraft and slave UAVs, achieving the highest observational performance among the four configurations, which leads to the highest positioning accuracy for the master UAV.
Figure 10. The probability distribution curves of the position errors of the master UAV under various configurations.
In summary, across the three simulation scenarios and for the position and attitude estimates of all four UAVs, WGBP outperformed EKF, GBP, and HRGBP in accuracy, consistency, and convergence: it achieved the lowest mean error in 22 of 24 cases and the smallest standard deviation in 21 cases. These results verify that the improved Bayesian filtering method, in particular the weighted resampling in WGBP, effectively enhances navigation accuracy and robustness for UAVs under noisy measurement conditions.

4.4. Comparative Analysis of Field Experiment

To verify the performance of the algorithms in a real complex environment, a formation flight experiment with one master UAV and three slave UAVs was carried out in the low-altitude airspace of a campus building group, aiming to compare the accuracy and reliability of four algorithms: WGBP, EKF, GBP, and HRGBP. The sensor configuration of each UAV is detailed in Table 5. Figure 11 presents the field experiment configuration.
Table 5. Experimental sensor configuration.
Figure 11. Field experiment configuration. (a) Mounted sensors; (b) Formation flight trajectory.
The UAV formation executed remote-controlled flight tasks along the preset trajectory while simultaneously collecting pulsed laser ranging data (PLS-B300), raw observations from the six-axis IMU (MPU-9250), and single-frequency GNSS L1 carrier-phase and pseudorange observations (Ublox NEO-M9N). The experiment was conducted in the low-altitude airspace of a campus building group. The formation maintained a flight altitude of 5–15 m throughout. For the first 1100 s it operated in an open area with stable GNSS signals; from 1100 s onward it entered a dense building area, where the GNSS receiver and laser rangefinder intermittently suffered measurement errors and data loss due to dynamic occlusion, potentially causing measurement mismatch. The entire mission lasted 2000 s. All sensor data were written in real time to the onboard high-reliability solid-state storage module in a timestamp-aligned manner. After the experiment, the complete data set was exported losslessly to a ground PC via the Gigabit Ethernet interface, and standardized post-processing, including time-synchronization calibration, outlier elimination, coordinate-system conversion, and error statistical analysis, was completed on the MATLAB R2023a platform. All original sensor data were fully recorded, and a high-precision post-processing method was adopted to quantitatively evaluate each algorithm's position estimates.
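The timestamp-alignment and outlier-elimination steps of such a post-processing chain could be sketched as follows (in Python rather than the MATLAB platform used in the paper; the alignment tolerance and the MAD threshold are assumptions, not the paper's settings).

```python
import numpy as np

def align_to_reference(ref_t, sensor_t, sensor_v, max_dt=0.05):
    """Nearest-neighbour timestamp alignment of a sensor stream to reference
    epochs. Samples farther than max_dt seconds from a reference epoch are
    treated as dropouts and returned as NaN (assumed tolerance)."""
    idx = np.searchsorted(sensor_t, ref_t)
    idx = np.clip(idx, 1, len(sensor_t) - 1)
    left, right = sensor_t[idx - 1], sensor_t[idx]
    nearest = np.where(ref_t - left <= right - ref_t, idx - 1, idx)
    aligned = sensor_v[nearest].astype(float)
    aligned[np.abs(sensor_t[nearest] - ref_t) > max_dt] = np.nan
    return aligned

def mad_outlier_mask(x, k=3.5):
    """Flag samples more than k scaled median absolute deviations from the
    median, a standard robust outlier rule for error series."""
    x = np.asarray(x, dtype=float)
    med = np.nanmedian(x)
    mad = 1.4826 * np.nanmedian(np.abs(x - med))
    return np.abs(x - med) > k * mad
```

Nearest-neighbour alignment with an explicit dropout tolerance keeps intermittent occlusion-induced gaps visible as NaNs instead of silently interpolating through them.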
As illustrated in Figure 12, the RTK ground reference station (BDX-500) continuously broadcasts real-time differential corrections to the mobile station (Ublox NEO-M9N), enabling sustained centimeter-level positioning accuracy; this trajectory serves as the ground-truth reference for all subsequent quantitative verification. In this experiment, the DJI Datalink 3 was employed as the manual backup controller for the A3 flight control system. It provides bidirectional telemetry and command transmission up to 3 km under line-of-sight conditions, enabling real-time pilot intervention. In the event of autonomous system anomalies—such as sensor failure, navigation drift, or unexpected environmental interference—the controller supports immediate transition to direct manual mode, allowing precise, low-latency attitude and heading adjustments to ensure flight safety and mission continuity.
Figure 12. Ground station and UAV platform specifications.
Figure 13, Figure 14, Figure 15 and Figure 16 show the experimental results of position errors and Cumulative Distribution Functions (CDF) of each UAV, respectively. Table 6 presents the comparison results of the field experiment for each algorithm.
Figure 13. Positioning errors of master UAV in field experiment. (a) Time-varying positioning error; (b) CDF for positioning errors.
Figure 14. Positioning errors of UAV k1 in field experiment. (a) Time-varying positioning error; (b) CDF for positioning errors.
Figure 15. Positioning errors of UAV k2 in field experiment. (a) Time-varying positioning error; (b) CDF for positioning errors.
Figure 16. Positioning errors of UAV k3 in field experiment. (a) Time-varying positioning error; (b) CDF for positioning errors.
Table 6. The comparison results of the field experiment for each algorithm.
As shown in Figure 13, the WGBP algorithm achieves the highest positioning accuracy: its position error remains within 10 m throughout the test, with a mean (standard deviation) of 3.2 m (1.5 m) and a 95th-percentile error of 4.8 m. These metrics are markedly superior to those of HRGBP [mean (std): 5.7 m (2.1 m); 95th percentile: 7.9 m], GBP [mean (std): 8.3 m (2.8 m); 95th percentile: 11.6 m], and EKF [mean (std): 13.5 m (5.2 m); 95th percentile: 17.7 m]. Notably, EKF exhibits the poorest accuracy and the largest error variability, with a peak error of 26.3 m.
As shown in Figure 14, slave UAV k1 was deployed at the front of the formation, directly facing the building facade, and was affected mainly by occlusions. After entering the dense building area, the EKF algorithm failed to reject the measurement noise, leading to severe error divergence and loss of positioning effectiveness. In contrast, the WGBP algorithm showed excellent anti-interference performance and maintained stable positioning [mean (SD): 15.3 m (4.2 m); 95th-percentile error: 19.7 m], significantly outperforming the HRGBP and GBP algorithms.
As shown in Figure 15, slave UAV k2 was arranged on the right side of the formation, where dynamic occlusion by pedestrians and vehicles was frequent, posing great challenges to the algorithm’s adaptability to intermittent observation loss. The EKF algorithm was highly sensitive to dynamic occlusion, resulting in drastic error jumps and extremely poor robustness. In contrast, the WGBP algorithm had strong fault tolerance for intermittent data loss and maintained reliable positioning relying on its optimized data fusion strategy [mean (SD): 15.1 m (4.0 m), 95th percentile error: 19.9 m], which was superior to the HRGBP algorithm and GBP algorithm in adaptability to dynamic complex environments.
As shown in Figure 16, slave UAV k3 was deployed at the rear side of the formation, responsible for ensuring long-term trajectory consistency, and thus had strict requirements on algorithm stability. After 1100 s, the experiment entered a dense building area; due to poor long-term stability, the positioning error of the EKF algorithm continued to accumulate and deteriorate. The WGBP algorithm maintained stable convergence throughout the 2000 s mission, with error always ≤10 m [mean (SD): 3.3 m (1.6 m), 95th percentile error: 4.9 m], showing better long-term reliability compared with the HRGBP algorithm and GBP algorithm, whose stability significantly decreased in long-duration complex tasks.
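The per-UAV statistics and CDF panels discussed above reduce to a few standard operations on the error series; a minimal sketch, with generic function names:

```python
import numpy as np

def error_summary(errors):
    """Mean, sample standard deviation, and 95th percentile of a
    positioning-error series, matching the statistics reported per UAV."""
    e = np.asarray(errors, dtype=float)
    return e.mean(), e.std(ddof=1), np.percentile(e, 95)

def empirical_cdf(errors):
    """Sorted errors with cumulative probabilities, as plotted in the CDF
    panels of Figures 13-16."""
    e = np.sort(np.asarray(errors, dtype=float))
    p = np.arange(1, e.size + 1) / e.size
    return e, p
```

The 95th percentile complements the mean/std pair because it bounds the error ignored by only 5% of epochs, which is what the bracketed metrics in the text report.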
To verify the statistical significance of the positioning performance differences between the proposed WGBP algorithm and the comparison algorithms (EKF, GBP, HRGBP), and to eliminate the influence of random errors in any single measurement on the field-experiment results, a statistical significance test based on the normal distribution was applied to the high-density sampling data from the 2000 s campus experiment. The experimental data fully meet the test requirements: a single 2000 s measurement experiment was conducted for each of the four algorithms, yielding 20,000 sample groups per algorithm, and the Shapiro–Wilk test verified that the positioning-error samples of all four algorithms follow a normal distribution. The significance level was uniformly set to α = 0.05, with the following decision rule: if p < 0.05, the positioning performance difference between two algorithms is considered statistically significant and attributable to algorithm design rather than random error; if p ≥ 0.05, the difference is considered non-significant and mainly caused by random measurement interference.
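Assuming the SciPy stack, such a test pipeline (one-way ANOVA across all algorithms, then pairwise independent-sample t-tests) can be sketched with synthetic stand-in data; the means below loosely mirror the reported values and are not the measured samples.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic stand-ins for per-epoch error samples of each algorithm
# (the real test used 20,000 normally distributed samples per algorithm).
errors = {
    "WGBP":  rng.normal(3.2, 1.5, 2000),
    "HRGBP": rng.normal(5.7, 2.1, 2000),
    "GBP":   rng.normal(8.3, 2.8, 2000),
    "EKF":   rng.normal(13.5, 5.2, 2000),
}

# One-way ANOVA across all four algorithms.
f_stat, p_anova = stats.f_oneway(*errors.values())

# One pairwise comparison (Welch's t-test, no equal-variance assumption).
_, p_pair = stats.ttest_ind(errors["WGBP"], errors["HRGBP"], equal_var=False)
```

The same pattern extends to all six algorithm pairs; with sample sizes this large, even small mean differences yield very small p-values, which is why the normality check and the fixed α = 0.05 threshold matter for interpretation.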
The statistical significance results in Table 7 show that the positioning performance of the four algorithms differs significantly overall (ANOVA, F = 98.62, p < 0.001), indicating that their performance in the campus building-complex scenario is genuinely different. Post hoc multiple comparisons and pairwise independent-sample t-tests show that the positioning-error differences between any two of the four algorithms are statistically significant (all p < 0.05). Among them, the mean positioning error of WGBP is significantly lower than that of the three comparison algorithms (EKF, GBP, HRGBP), further strengthening the credibility of the experimental conclusions.
Table 7. Statistical significance test results across all four algorithms.
In summary, in the above typical complex low-altitude campus scenario, the WGBP algorithm exhibited optimal performance in terms of positioning accuracy, reliability, and robustness.

5. Conclusions

This paper proposes a WGBP-based cooperative navigation algorithm for master–slave UAV formations to improve navigation accuracy and robustness. The method fuses INS, GNSS, and intra-formation ranging data to build a unified WGBP framework for both high-precision master and low-precision slave UAVs. By introducing the CGDOP model, the algorithm adaptively adjusts message weights to further enhance accuracy and robustness. Experimental results in various scenarios with varying noise intensities show that the proposed WGBP method outperforms EKF, GBP, and HRGBP, effectively improving the navigation accuracy and robustness of UAVs. Statistical significance testing further confirms that the performance differences among the comparison algorithms are statistically significant.
This work assumes independent Gaussian state and observation noise. Future research will address collaborative estimation under non-Gaussian noise distributions, including heavy-tailed (e.g., Student’s t) and impulsive (e.g., Cauchy) models validated in real-world GNSS-denied navigation scenarios.

Author Contributions

L.Z.: Methodology, Software, Validation, Investigation, Data analysis, Visualization, Supervision, Project coordination, Writing - original draft, Writing - review & editing, Funding acquisition. Y.L.: Formal analysis, Project coordination. Y.Y.: Validation, Resources. G.R.: Formal analysis, Project coordination. C.T.: Visualization. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by National Natural Science Foundation of China (Grant No. 62341119), Natural Science Foundation of Jiangsu Province for Youth (Grant No. BK20210941), Changzhou Leading Innovative Talent Introduction and Cultivation Project (Grant No. CQ20210094).

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to confidentiality restrictions.

Conflicts of Interest

Author Dr. Lin Zhang was employed by the company Sunwave Communications Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Yin, T.; Gu, Z.; Park, J.H. Event-Based Intermittent Formation Control of Multi-UAV Systems Under Deception Attacks. IEEE Trans. Neural Netw. Learn. Syst. 2024, 35, 8336–8347.
  2. Zhu, B.; Bedeer, E.; Nguyen, H.H.; Barton, R.; Gao, Z. UAV Trajectory Planning for AoI-Minimal Data Collection in UAV-Aided IoT Networks by Transformer. IEEE Trans. Wirel. Commun. 2023, 22, 1343–1358.
  3. Feng, W.-Q.; Yang, Y.; Yang, L.-F.; Fu, Y.-J.; Xu, K.-J. Unmanned Aerial Vehicle Logistics Distribution Path Planning Based on Improved Grey Wolf Optimization Algorithm. Symmetry 2025, 17, 2178.
  4. Wang, H.; Song, S.; Guo, Q.; Xu, D.; Zhang, X.; Wang, P. Cooperative Motion Planning for Persistent 3D Visual Coverage with Multiple Quadrotor UAVs. IEEE Trans. Autom. Sci. Eng. 2024, 21, 3374–3383.
  5. Ranjan, P.K.; Sinha, A.; Cao, Y.; Casbeer, D.; Weintraub, I. Relational Maneuvering of Leader-Follower Unmanned Aerial Vehicles for Flexible Formation. IEEE Trans. Cybern. 2024, 54, 5598–5609.
  6. Causa, F.; Fasano, G. Adaptive Cooperative Navigation Strategies for Complex Environments. In Proceedings of the 2020 IEEE/ION Position, Location and Navigation Symposium (PLANS), Portland, OR, USA, 20–23 April 2020; pp. 100–111.
  7. Gong, X.; Gui, J.; Chen, Y.; Yang, X.; Yu, W.; Huang, T. Resilient Human-in-the-Loop Formation-Tracking of Multi-UAV Systems Against Byzantine Attacks. IEEE Trans. Autom. Sci. Eng. 2025, 22, 3797–3809.
  8. Vu, D.V.; Pham, T.V.; Nguyen, D.T. A Path-following Guidance Algorithm for Fixed-wing UAV Swarms on a Decentralized Network. In Proceedings of the 2022 International Conference on Advanced Technologies for Communications (ATC), Ha Noi, Vietnam, 20–22 October 2022; pp. 286–291.
  9. Song, F.; Zeng, Q.; Zhang, R.; Zhu, X.; Ye, X.; Zhang, Z. Multi-UAV Cooperative Navigation Based on Multi-Source Information Fusion: A Review. IEEE Sens. J. 2025, 26, 3646302.
  10. Chung, S.S.M.; Tuan, S.C. Radar Cross Section Simulation of XQ-58 Valkyrie like CAD Model. In Proceedings of the 2020 International Workshop on Electromagnetics: Applications and Student Innovation Competition (iWEM), Makung, Taiwan, 26–28 August 2020; pp. 1–2.
  11. Saylam, S.; Gündoğdu, F.K. Decision for the Next Unmanned Aerial Vehicle Considering Regional Operational Requirements Based on Z-Fuzzy Best-Worst Method. In Proceedings of the 2023 10th International Conference on Recent Advances in Air and Space Technologies (RAST), Istanbul, Turkey, 7–9 June 2023; pp. 1–6.
  12. Liu, X.; Cao, S.; Fan, W. Exclusive: China’s New Loyal Wingman Drone to Greatly Change Air Combat: Designer. Global Times, 7 November 2022.
  13. Zheng, N.; Xu, Y.; Zhao, F.; Xin, M.; Yang, F. Improved and Optimized GNSS-IR Sea Surface Height Retrieval Based on Noise Elimination and Lightweight Airborne Multi-GNSS Multi-UAV Fusion. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2025, 18, 15931–15941.
  14. Abushakra, F.; Jeong, N.; Elluru, D.N.; Awasthi, A.K.; Kolpuke, S.; Luong, T.; Reyhanigalangashi, O.; Taylor, D.; Gogineni, S.P. A Miniaturized Ultra-Wideband Radar for UAV Remote Sensing Applications. IEEE Microw. Wirel. Compon. Lett. 2022, 32, 198–201.
  15. Shen, Z.; Li, X.; Wang, X.; Wu, Z.; Li, X.; Zhou, Y.; Li, S. A Novel Factor Graph Framework for Tightly Coupled GNSS/INS Integration with Carrier-Phase Ambiguity Resolution. IEEE Trans. Intell. Transp. Syst. 2024, 25, 13091–13105.
  16. Zhou, J.; Wang, W.; Zhang, C. A GNSS Anti-Jamming Method in Multi-UAV Cooperative System. IEEE Trans. Veh. Technol. 2025, 75, 535–547.
  17. Ruan, L.; Li, G.; Dai, W.; Tian, S.; Fan, G.; Wang, J.; Dai, X. Cooperative Relative Localization for UAV Swarm in GNSS-Denied Environment: A Coalition Formation Game Approach. IEEE Internet Things J. 2022, 9, 11560–11577.
  18. Liu, L.; Hu, X.; Jiang, W.; Meng, G.; Wang, Z.; Zhang, T. A Visual Cooperative Localization Method for Airborne Magnetic Surveying Based on a Manifold Sensor Fusion Algorithm Using Lie Groups. IEEE Trans. Aerosp. Electron. Syst. 2025, 61, 14558–14572.
  19. Ouyang, X.; Zeng, F.; Lv, D.; Dong, T.; Wang, H. Cooperative Navigation of UAVs in GNSS-Denied Area with Colored RSSI Measurements. IEEE Sens. J. 2021, 21, 2194–2210.
  20. Zhao, W.; Zhao, S.; Zhang, G.; Liu, G.; Meng, W. FL-EKF-Based Cooperative Localization Method for Multi-AUVs. IEEE Internet Things J. 2024, 11, 30742–30753.
  21. Yao, L.; Yao, L.; Resalati, S.; Wu, Y.A. A Tightly Coupled Adaptive EKF for Indoor 3-D Positioning with Linear Acceleration Compensation. IEEE Trans. Instrum. Meas. 2025, 74, 9542711.
  22. Zhong, Y.; Chen, X.; Shi, C.; Yao, Z. Robust Sequential Variational Bayesian Filter for Tightly Coupled Navigation System Within GNSS Challenged Environment. IEEE Internet Things J. 2025, 12, 47912–47926.
  23. Zhang, J.; Yang, X.; Zhang, W.A. A Progressive Bayesian Filtering Framework for Nonlinear Systems with Heavy-Tailed Noises. IEEE Trans. Autom. Control 2023, 68, 1918–1925.
  24. Lyu, P.; Wang, B.; Lai, J.; Bai, S.; Liu, M.; Yu, W. A Factor Graph Optimization Method for High-Precision IMU-Based Navigation System. IEEE Trans. Instrum. Meas. 2023, 72, 9509712.
  25. Bari, S.; Gabler, V.; Wollherr, D. Probabilistic Inference-Based Robot Motion Planning via Gaussian Belief Propagation. IEEE Robot. Autom. Lett. 2023, 8, 5156–5163.
  26. Sifaou, H.; Simeone, O. Semi-Supervised Learning via Cross-Prediction-Powered Inference for Wireless Systems. IEEE Trans. Mach. Learn. Commun. Netw. 2025, 3, 30–44.
  27. Zou, X.; Xiang, W.; Lian, J.; Song, E.; Tang, C.; Liu, Y. Vehicle Motion State Recognition Method Based on Hidden Markov Model and Support Vector Machine. Symmetry 2025, 17, 1011.
  28. Li, C.; Wang, J.; Liu, J.; Shan, J. Cooperative Visual–Range–Inertial Navigation for Multiple Unmanned Aerial Vehicles. IEEE Trans. Aerosp. Electron. Syst. 2023, 59, 7851–7865.
  29. Qian, M.; Chen, W.; Sun, R.; Cao, J. A Cooperative Localization Algorithm Based on Theory of Surveying Adjustment with Range Network for UAV Clusters. IEEE Trans. Instrum. Meas. 2025, 74, 6507715.
  30. Song, Z.; Zhang, Y.; Yu, Y.; Tang, C. Wide-Area UAV Networks Cooperative Positioning Algorithm Based on Information Geometry. IEEE Signal Process. Lett. 2024, 31, 2645–2649.
  31. Li, F.; Zeng, Q.; Wang, S.; Li, B.; Xiong, Z.; Li, Y. Research on Cooperative Navigation Algorithm for UAV Cluster Based on Belief Propagation. In Proceedings of the 2023 7th International Symposium on Computer Science and Intelligent Control (ISCSIC), Nanjing, China, 27–29 October 2023; pp. 160–164.
  32. Chen, M.; Xiong, Z.; Xiong, J.; Shi, C.; Wang, R. Cooperative Navigation for UAV Swarm via Simplified Gaussian Particle-Based Belief Propagation. IEEE Sens. J. 2024, 24, 31324–31336.
  33. Cheraghy, M.; Soltanpour, M.; Abdalla, H.B.; Oveis, A.H. SVM-Based Factor Graph Design for Max-SR Problem of SCMA Networks. IEEE Commun. Lett. 2024, 28, 877–881.
  34. Xiong, J.; Xiong, Z.; Zhuang, Y.; Cheong, J.W.; Dempster, A.G. Fault-Tolerant Cooperative Positioning Based on Hybrid Robust Gaussian Belief Propagation. IEEE Trans. Intell. Transp. Syst. 2023, 24, 6425–6431.
  35. Vinay, N.; Rafiq, P.M.; Sai, E.S.; Ahmed, S.T. An Investigation on DOP Analysis for Improving Position Precision with NavIC using GNSS SkyTraq Receiver. In Proceedings of the 2025 7th International Conference on Intelligent Sustainable Systems (ICISS), Tirunelveli, India, 12–14 March 2025; pp. 1558–1565.
  36. Shang, F. Frequency-Domain Analysis of DoP Information for Urban Area Interpretation Using Fully Polarimetric SAR Data. In Proceedings of the IGARSS 2022—2022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 17–22 July 2022; pp. 4431–4434.
  37. Liu, Y.; Lian, B.; Zhang, L.; Zhou, T. Gaussian Message Passing-Based Cooperative Localization with Bootstrap Percolation Scheme in Dense Networks. J. Navig. 2019, 72, 1275–1296.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
