Article

Gaussian Mixture Probability Hypothesis Density Filter for Heterogeneous Multi-Sensor Registration

1 School of Electronics and Information Engineering, Beihang University, Beijing 100191, China
2 Hangzhou Innovation Institute, Beihang University, Hangzhou 310052, China
3 School of Aerospace Engineering, Tsinghua University, Beijing 100084, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(6), 886; https://doi.org/10.3390/math12060886
Submission received: 2 March 2024 / Revised: 14 March 2024 / Accepted: 15 March 2024 / Published: 17 March 2024
(This article belongs to the Section Probability and Statistics)

Abstract: Spatial registration is a prerequisite for data fusion. Existing methods primarily focus on similar sensor scenarios and rely on accurate data association assumptions. To address heterogeneous sensor registration in complex data association scenarios, this paper proposes a Gaussian mixture probability hypothesis density (GM-PHD)-based algorithm for heterogeneous sensor bias registration, accompanied by an adaptive measurement iterative update algorithm. Firstly, by constructing augmented target state motion and measurement models, a closed-form expression for prediction is derived based on Gaussian mixtures (GM). In the subsequent update, a two-level Kalman filter is used to achieve an approximately decoupled estimation of the target state and measurement bias, taking the coupling between them into account through a pseudo-likelihood. Notably, for heterogeneous sensors that cannot directly use sequential update techniques, sequential updates are first performed on sensors that obtain complete measurements, followed by filtering updates for incomplete measurements using extended Kalman filter (EKF) sequential update techniques. When there are differences in sensor quality, the GM-PHD fusion filter based on measurement iterative updates is sequence-sensitive; therefore, the optimal subpattern assignment (OSPA) metric is used to optimize the fusion order and enhance registration performance. The proposed algorithms extend the multi-target-information-based spatial registration algorithm to heterogeneous sensor scenarios and address the impact of different sensor filtering orders on registration performance. Compared to the registration algorithm based on significant targets, our algorithms significantly improve the accuracy of bias estimation: under different detection probabilities and clutter intensities, the average root mean square error (RMSE) of the distance and angular biases decreased by 11.8% and 8.6%, respectively.

1. Introduction

In sensor networks, information gathered from various sensors is collaboratively fused to enhance the overall system performance [1]. Sensor fusion techniques have found extensive applications in fields such as autonomous mobile systems [2], remote sensing [3], wireless sensor networks [4], and target tracking [5], benefiting from the rapid development of sensor technology. Spatial registration is a critical aspect of data fusion in multi-sensor systems. As shown in Figure 1, it is essential to estimate and correct the measurement biases (such as angle and distance) of the sensors. This allows the measurements from multiple sensors to be processed in a common reference system. Failure to perform spatial registration and directly fuse sensor data can lead to degraded tracking results, target loss, and even inferior performance compared to single-sensor processing.
Currently, most information fusion systems treat spatial registration and track association as separate steps. Initially, significant target track association is performed, followed by spatial registration based on a single significant target. These methods can yield good bias registration results when significant targets are correctly extracted. However, these methods have limitations. In multi-sensor multi-target tracking scenarios, spatial registration and track association are interconnected, and the presence of sensor biases presents significant challenges to track association [6,7,8,9,10].
In recent years, the application of finite set statistics (FISST) theory has emerged as a solution to address multi-target tracking problems without the need for data association [11,12]. Within the FISST framework, target states and measurements are modeled as random finite sets (RFS), allowing for the tracking of an unknown and time-varying number of targets in cluttered environments. By constructing multi-target transition densities and multi-target likelihood functions, multi-target tracking can be effectively described within a rigorous Bayesian framework. However, the optimal implementation of multi-target Bayesian filters becomes challenging due to the combinatorial nature of multiple integrals and multi-target densities. To overcome this challenge, Mahler introduced the probability hypothesis density (PHD) filter as a first-order statistical moment approximation of the multi-target posterior density [13].
There are two primary methods for implementing PHD recursion: sequential Monte Carlo (SMC) and Gaussian mixture (GM). The SMC-PHD filter employs a large number of particles to approximate multidimensional integrals, resulting in a high computational burden [14,15]. Furthermore, clustering techniques are necessary to extract target state estimates. To address these limitations, Vo et al. introduced the GM-PHD filter, which analytically propagates weights, means, and covariances using Kalman filtering [16,17,18,19]. For multi-sensor multi-target tracking, Mahler proposed the iterated corrector PHD (IC-PHD) filter [20]. This filter follows an iterative approach, sequentially processing the filtering results of the previous sensor using the subsequent sensor. While the concept of this filter is straightforward, its tracking performance can be influenced by the order in which the sensors are processed [21,22]. In order to perform spatial registration alongside filtering, Lian et al. integrated spatial registration into the GM-PHD framework, enabling simultaneous filtering and registration. However, their method only addressed spatial registration for sensors of the same type [9,23,24,25].
All existing spatial registration methods for heterogeneous sensors in the current literature depend on significant targets (obtained through additional association relationships). Therefore, this study focuses on the spatial registration of heterogeneous multi-sensor systems within the RFS framework. In the RFS framework, there is no need for additional association relationships to extract common significant targets, and complete information from multiple targets can be utilized for spatial registration. While a previous study has developed a similar sensor spatial registration algorithm based on multi-target information within the RFS framework, the main objective of this paper is to extend the spatial registration algorithms proposed in [9,23,24,25] to a wider range of heterogeneous sensor scenarios. However, this extension presents three key challenges:
(1)
The nonlinear relationship between the state space and measurement space in recursive filtering of heterogeneous sensors causes coupling between target states and measurement biases.
(2)
Heterogeneous sensors yield data with varying dimensions for target detection (including complete and incomplete measurements), posing challenges for multi-sensor sequential filtering.
(3)
The final fusion and registration results can be affected by the varying sequential filtering order due to performance differences among sensors.
To address these problems, two algorithms are proposed: the heterogeneous sensor-augmented state registration based on GM-PHD random fusion order (HSAR-GM-PHD-R) and the heterogeneous sensor-augmented state registration based on GM-PHD adaptive observation with iterative updates (HSAR-GM-PHD-AO). It is important to note that this paper introduces two heterogeneous sensor spatial registration algorithms: the HSAR-GM-PHD-R method addresses the first two challenges, while the HSAR-GM-PHD-AO method tackles all three.
The contributions are as follows:
(1)
By constructing augmented target motion and measurement models and using PHD recursion, a closed-form expression for augmented state prediction is derived based on Gaussian mixture models. Additionally, a two-level Kalman filter is used in the update to approximately decouple the estimation of target state and measurement biases.
(2)
For the registration problem of heterogeneous sensors, measurements obtained from the sensors are divided into complete measurements and incomplete measurements. Sequential updates are performed first for sensors that provide complete measurements, followed by filtering updates for incomplete measurements using EKF sequential update techniques.
(3)
To address the sensitivity of the sequence fusion-filtering algorithm to sensor quality differences, real-time evaluation of fusion consistency for each sensor is performed using the optimal subpattern assignment (OSPA) metric when the data quality of each sensor is unknown. By optimizing the fusion order, more accurate fusion results can be obtained.
This paper is organized as follows. Section 2 presents related work on multi-sensor spatial registration. The problem formulation is introduced in Section 3. Section 4 demonstrates the proposed spatial registration methods. The simulation results for four different scenarios are given in Section 5. Conclusions are in Section 6.

2. Related Work

Sensor spatial registration has applications in robotics, augmented reality, target tracking, navigation, autonomous vehicles, remote sensing, and industrial automation. Extensive research has been conducted on spatial registration for multi-sensor systems [23,24,25,26,27,28,29,30,31,32,33,34,35]. Spatial registration can be broadly divided into two categories: one is based on common significant targets, which rely on known association relationships, and the other is based on the RFS framework. The spatial registration based on RFS does not require the extraction of common significant targets but can directly use multi-target information for registration, thus avoiding the impact of data association on spatial registration. In current research, the RFS-based spatial registration method has shown superior performance compared to the common significant target-based approach. Although our study focuses on the RFS framework, we will provide a detailed summary of both types of methods in the following paragraphs to ensure comprehensive analysis and comparison.
Spatial registration algorithms based on significant targets include least squares (LS) [26,27,28], general least squares (GLS) [29,30], real-time quality control (RTQC) [31], exact maximum likelihood (EML) [32,33], and maximum likelihood registration (MLR) [34,35]. Among these algorithms, RTQC and LS disregard the influence of sensor measurement noise, limiting their performance to scenarios with low measurement noise. While the GLS algorithm considers measurement noise, it can only estimate system biases pairwise, similar to the LS algorithm, which hinders the achievement of optimal performance. The EML algorithm utilizes measured values of sensors in the system plane and employs the maximum likelihood principle to estimate the target position and sensor biases simultaneously. It approximates the sine and cosine results of the system biases during the coordinate transformation, establishes a linearized model between biases and measurements, and iteratively obtains bias estimates. MLR improves upon this approximation method by establishing a linearized model between biases and measurements based on Taylor expansion, resulting in bias estimation results close to the Cramer–Rao lower bound. However, these algorithms assume well-established data association.
In complex target scenarios, where the collaboration of multiple types of sensors is required, the spatial registration of heterogeneous sensors becomes a significant research focus in the field of sensor fusion. Heterogeneous sensor measurements can be classified into two categories. The first category is complete measurements, which provide the full 3D position of the target obtained from sensors like radars. The second category is incomplete measurements or missing information measurements, which include one-dimensional or two-dimensional measurement information obtained from sensors such as optical cameras, infrared sensors, and electronic support measures [35]. For spatial registration involving sensors with complete measurements, the maximum likelihood registration algorithm can achieve estimation performance that closely approximates the Cramer–Rao lower bound. However, when dealing with incomplete measurements, this method can suffer from model mismatch, resulting in a decline in registration performance. To address this, reference [34] extended the maximum likelihood registration algorithm to the context of heterogeneous sensor registration by utilizing triangulation techniques to complete the missing measurement information, thereby enabling spatial registration for heterogeneous sensors. Nevertheless, it is important to note that the aforementioned methods for heterogeneous sensor spatial registration still rely on the presence of common significant targets. They essentially expand the approach based on common significant targets to accommodate the requirements of heterogeneous sensors.
The utilization of random finite set theory in solving multi-target tracking problems eliminates the need for data association. Therefore, conducting spatial registration within the RFS framework eliminates the need to select specific common significant targets and allows for the full utilization of multi-target detection information. Currently, many researchers have applied the PHD filter to multi-sensor spatial registration. Lian et al. proposed a spatial registration method based on the augmented state GM-PHD filter, utilizing Kalman filter techniques to estimate both target states and measurement biases [23]. However, the coupling between measurement biases and target states limits the effectiveness of this method in multi-target scenarios. Meanwhile, Ristic et al. relaxed the assumptions of nonlinear and non-Gaussian probability density functions and implemented spatial registration using sequential Monte Carlo particle filtering [36,37]. In order to enrich the measurement information, Wu et al. introduced a multi-target GM-PHD filter registration method based on Doppler information. This method incorporates Doppler information into the original measurements and performs joint estimation of target states and system biases by deriving a Gaussian mixture posterior density that incorporates Doppler measurement information [25]. Compared with the registration method proposed in reference [24], this approach demonstrates stronger robustness in dense clutter. It is important to note that these spatial registration methods proposed within the RFS framework only focus on spatial registration among sensors of the same type.
As the variety of sensor types and detection scenarios increases, there is a pressing need to investigate spatial registration algorithms for heterogeneous sensors. Therefore, the main focus of this paper is to extend the spatial registration algorithms within the RFS framework to heterogeneous sensor scenarios, while also addressing the key technologies encountered in the extension process.

3. Problem Formulation

In order to simultaneously estimate the target state and measurement bias within a unified filtering framework, references [23,24,25] combine them into an augmented state. In this section, we discuss the linear dynamic and measurement models for augmented target states within the GM-PHD filter framework. We first review the linear dynamic equations for the augmented state vector. Then, we introduce the linear pseudo-measurement equations based on the augmented target states. Finally, we present the random finite set formulation for multi-target filtering in linear Gaussian systems. This provides sufficient background information for introducing the registration method based on the augmented state GM-PHD filter.

3.1. Linear Gaussian Dynamical Model of the Augmented State

Let $x_{k,i} = [x_{k,i}, \dot{x}_{k,i}, y_{k,i}, \dot{y}_{k,i}, \omega_{k,i}]^T$ be the standard state of target $i$ at time $k$. Assuming a linear dynamic model for the target, i.e.,

$$x_{k,i} = F_{k-1} x_{k-1,i} + G w_{k-1} \quad (1)$$

where the process noise $w_{k-1}$ is zero-mean Gaussian white noise, $w_k \sim \mathcal{N}(0, Q_k)$, with $Q_k = \mathrm{diag}(\sigma_{k,w}^2, \sigma_{k,w}^2, \sigma_{k,\omega}^2)$ the covariance matrix of the process noise, and the state transition matrix $F$ can be expressed as

$$F = \begin{bmatrix} 1 & \dfrac{\sin\omega T}{\omega} & 0 & -\dfrac{1-\cos\omega T}{\omega} & 0 \\ 0 & \cos\omega T & 0 & -\sin\omega T & 0 \\ 0 & \dfrac{1-\cos\omega T}{\omega} & 1 & \dfrac{\sin\omega T}{\omega} & 0 \\ 0 & \sin\omega T & 0 & \cos\omega T & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix} \quad (2)$$

where the discrete time interval $T$ is consistent with the sampling intervals of all sensors, and

$$G = \begin{bmatrix} \dfrac{T^2}{2} & 0 & 0 \\ T & 0 & 0 \\ 0 & \dfrac{T^2}{2} & 0 \\ 0 & T & 0 \\ 0 & 0 & T \end{bmatrix} \quad (3)$$
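To make Equations (1)–(3) concrete, the following Python sketch builds the coordinated-turn transition matrix $F$ and noise gain $G$ for a given turn rate and sampling interval (a minimal illustration; the function and variable names are ours, not from the paper):

```python
import numpy as np

def ct_transition(omega: float, T: float) -> np.ndarray:
    """State transition matrix F of the coordinated-turn model, Eq. (2).
    State ordering: [x, x_dot, y, y_dot, omega]."""
    if abs(omega) < 1e-9:                     # near-zero turn rate: constant-velocity limit
        return np.array([[1, T, 0, 0, 0],
                         [0, 1, 0, 0, 0],
                         [0, 0, 1, T, 0],
                         [0, 0, 0, 1, 0],
                         [0, 0, 0, 0, 1]], dtype=float)
    s, c = np.sin(omega * T), np.cos(omega * T)
    return np.array([[1, s / omega,       0, -(1 - c) / omega, 0],
                     [0, c,               0, -s,               0],
                     [0, (1 - c) / omega, 1, s / omega,        0],
                     [0, s,               0, c,                0],
                     [0, 0,               0, 0,                1]])

def ct_noise_gain(T: float) -> np.ndarray:
    """Noise input matrix G of Eq. (3), mapping [w_x, w_y, w_omega] into the state."""
    return np.array([[T**2 / 2, 0,        0],
                     [T,        0,        0],
                     [0,        T**2 / 2, 0],
                     [0,        T,        0],
                     [0,        0,        T]])

# Example: one-step propagation of a target state with T = 1 s
x = np.array([100.0, 10.0, 200.0, -5.0, np.deg2rad(2.0)])
F, G = ct_transition(x[4], T=1.0), ct_noise_gain(T=1.0)
x_pred = F @ x          # deterministic part of Eq. (1); add G @ w for process noise
```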
Assuming there are $N_S$ unregistered sensors whose measurement biases are mutually independent, the dynamics of each measurement bias can also be modeled as a first-order Gauss–Markov process,

$$\beta_{k,s} = \beta_{k-1,s} + n_{k-1,s}, \quad s = 1, \ldots, N_S \quad (4)$$

where $n_{k-1,s}$ denotes the process noise of sensor $s$, assumed to be zero-mean Gaussian white noise, i.e., $n_{k-1,s} \sim \mathcal{N}(0, B_{k-1,s})$, with $B_{k-1,s}$ the covariance matrix of the process noise associated with the measurement bias.

An augmented state vector $\bar{x}_{k,i} = [x_{k,i}; \beta_{k,1}; \ldots; \beta_{k,N_S}]^T$ is constructed with motion model

$$\bar{x}_{k,i} = \bar{F}_{k-1} \bar{x}_{k-1,i} + \bar{w}_{k-1} \quad (5)$$

where $\bar{F}_{k-1} = \mathrm{blkdiag}(F_{k-1}, I_{n_1}, \ldots, I_{n_{N_S}})$ is the augmented state transition matrix, with $n_s = \dim \beta_{k,s}$ the dimension of the measurement bias of sensor $s$. $\bar{w}_{k-1} \sim \mathcal{N}(0, \bar{Q}_{k-1})$ is the augmented process noise, with $\bar{Q}_{k-1} = \mathrm{blkdiag}(Q_{k-1}, B_{k-1,1}, \ldots, B_{k-1,N_S})$ the augmented process covariance matrix and $\mathrm{blkdiag}(\cdot)$ denoting a block-diagonal matrix.
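Under the independence assumption above, the augmented matrices $\bar{F}_{k-1}$ and $\bar{Q}_{k-1}$ of Equation (5) are simply block-diagonal stacks, for example (a sketch with hypothetical helper names):

```python
import numpy as np
from scipy.linalg import block_diag

def augmented_dynamics(F, Q, bias_covs):
    """Build F_bar and Q_bar of Eq. (5) by block-diagonal stacking.

    F, Q      : target-state transition matrix and process-noise covariance
    bias_covs : list of per-sensor bias process-noise covariances B_{k-1,s}
    """
    # Each bias evolves as a random walk, so its transition block is the identity.
    identities = [np.eye(B.shape[0]) for B in bias_covs]
    F_bar = block_diag(F, *identities)
    Q_bar = block_diag(Q, *bias_covs)
    return F_bar, Q_bar
```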

3.2. Linear Gaussian Measurement Model of the Augmented State

Considering $N_S$ sensors performing measurements in the observation scenario, if the $m$-th measurement from sensor $s$ at time $k$ is associated with target $i$, the nonlinear measurement equation is given by

$$z_{k,m}^s = h(x_{k,i}, x_{k,s}) + \beta_{k,s} + \upsilon_{k,s} \quad (6)$$

where $x_{k,i}$ represents the state vector of target $i$ at time $k$, $x_{k,s}$ denotes the known position of sensor $s$, and $h(x_{k,i}, x_{k,s})$ is the nonlinear mapping from target $i$ in the common coordinate system to the measurement coordinate system of sensor $s$. $\beta_{k,s}$ represents the measurement bias of sensor $s$, and $\upsilon_{k,s}$ is zero-mean Gaussian white noise, $\upsilon_{k,s} \sim \mathcal{N}(0, R_{k,s})$.

To linearize the measurement Equation (6), a first-order Taylor expansion of $h(x_{k,i}, x_{k,s})$ is performed around the predicted position $\hat{x}_{k|k-1,i}$, resulting in

$$z_{k,m}^s = h(x_{k,i}, x_{k,s}) + \beta_{k,s} + \upsilon_{k,s} \approx h(\hat{x}_{k|k-1,i}, x_{k,s}) + H_{k,s}(x_{k,i} - \hat{x}_{k|k-1,i}) + \beta_{k,s} + \upsilon_{k,s} = H_{k,s} x_{k,i} + h(\hat{x}_{k|k-1,i}, x_{k,s}) - H_{k,s}\hat{x}_{k|k-1,i} + \beta_{k,s} + \upsilon_{k,s} \quad (7)$$

where $H_{k,s}$ is the Jacobian matrix of the nonlinear measurement function $h(x_{k,i}, x_{k,s})$ of sensor $s$ evaluated at $\hat{x}_{k|k-1,i}$. According to Equation (6), the pseudo-measurement $z_{c,k,m}^s$ is constructed as

$$z_{c,k,m}^s = z_{k,m}^s + H_{k,s}\hat{x}_{k|k-1,i} - h(\hat{x}_{k|k-1,i}, x_{k,s}) \quad (8)$$

If the measurement $z_{k,m}^s$ belongs to the target state $x_{k,i}$, the pseudo-measurement $z_{c,k,m}^s$ of $z_{k,m}^s$ is related to the true target state as

$$z_{c,k,m}^s = H_{k,s} x_{k,i} + \beta_{k,s} + \upsilon_{k,s} \quad (9)$$

Equation (9) is reformulated for the augmented state as

$$z_{c,k,m}^s = [H_{k,s}, \Psi_1, \ldots, \Psi_{N_S}]\,[x_{k,i}^T, \beta_{k,1}^T, \ldots, \beta_{k,N_S}^T]^T + \upsilon_{k,s} = \bar{H}_{k,s}\bar{x}_{k,i} + \upsilon_{k,s} \quad (10)$$

where $\bar{H}_{k,s}$ is referred to as the augmented observation matrix, and $\Psi_r$ is written as

$$\Psi_r = \begin{cases} I_{n_r \times n_r} & r = s \\ 0 & r \neq s \end{cases} \quad (11)$$
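The sketch below illustrates the pseudo-measurement construction of Equations (8)–(11) for a single bearing–range sensor; the measurement function, Jacobian, and 0-based sensor indexing are our own simplifications rather than the paper's exact formulation:

```python
import numpy as np

def h_polar(x, sensor_pos):
    """Bearing-range measurement of a target at planar position (x[0], x[2])."""
    dx, dy = x[0] - sensor_pos[0], x[2] - sensor_pos[1]
    return np.array([np.arctan2(dy, dx), np.hypot(dx, dy)])

def jacobian_polar(x, sensor_pos):
    """Jacobian H of h_polar w.r.t. the 5-dim state [x, x_dot, y, y_dot, omega]."""
    dx, dy = x[0] - sensor_pos[0], x[2] - sensor_pos[1]
    r2 = dx**2 + dy**2
    r = np.sqrt(r2)
    return np.array([[-dy / r2, 0, dx / r2, 0, 0],
                     [ dx / r,  0, dy / r,  0, 0]])

def pseudo_measurement(z, x_pred, sensor_pos):
    """Eq. (8): z_c = z + H x_pred - h(x_pred), so that z_c ≈ H x + beta + noise."""
    H = jacobian_polar(x_pred, sensor_pos)
    return z + H @ x_pred - h_polar(x_pred, sensor_pos), H

def augmented_H(H, bias_dims, s):
    """Eqs. (10)-(11): append Psi_r blocks, identity only for the reporting sensor s (0-based)."""
    blocks = [H] + [np.eye(H.shape[0], n) if r == s else np.zeros((H.shape[0], n))
                    for r, n in enumerate(bias_dims)]
    return np.hstack(blocks)
```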

3.3. Random Finite Set Formulation of Multi-Target Filtering

The augmented dynamic model for multiple moving targets and the multi-sensor measurement model are integrated into the RFS framework. Let $N_{k-1}$ denote the number of targets at time $k-1$, with states $\bar{x}_{k-1,1}, \ldots, \bar{x}_{k-1,N_{k-1}}$. At time $k$, these targets may continue to exist or disappear, new targets may emerge, and existing targets may give rise to new ones. At the same time, sensor $s$ collects $M_k^s$ measurements, denoted $z_{k,1}^s, \ldots, z_{k,M_k^s}^s$, which may originate from targets or clutter. Based on RFS theory, the target states and measurements are represented as

$$\bar{X}_k = \{\bar{x}_{k,1}, \ldots, \bar{x}_{k,N_k}\} \quad (12)$$

$$Z_k^s = \{z_{k,1}^s, \ldots, z_{k,M_k^s}^s\} \quad (13)$$

For the standard multi-target motion model, given the multi-target state $\bar{X}_{k-1}$ at time $k-1$, the multi-target state set $\bar{X}_k$ at time $k$ is composed of three components: newborn targets, surviving targets, and derived targets, denoted as

$$\bar{X}_k = \bar{\Gamma}_k \cup \left[\bigcup_{\bar{x}_{k-1} \in \bar{X}_{k-1}} S_{k|k-1}(\bar{x}_{k-1})\right] \cup \left[\bigcup_{\bar{x}_{k-1} \in \bar{X}_{k-1}} B_{k|k-1}(\bar{x}_{k-1})\right] \quad (14)$$

where $\bar{\Gamma}_k$ represents the RFS of newborn target states at time $k$. The surviving target RFS is denoted $S_{k|k-1}(\bar{x}_{k-1})$; a target persists to the next time step with probability $p_{S,k}(\bar{x}_{k-1})$ or disappears with probability $1 - p_{S,k}(\bar{x}_{k-1})$. $B_{k|k-1}(\bar{x}_{k-1})$ represents the RFS of targets derived from the state $\bar{x}_{k-1}$, which is not considered in this paper.

Based on this, the multi-target transition density from $\bar{x}_{k-1,i}$ to $\bar{x}$ can be expressed as

$$f_{k|k-1}(\bar{x} \mid \bar{x}_{k-1,i}) = \mathcal{N}(\bar{x}; \bar{F}_{k-1}\bar{x}_{k-1,i}, \bar{Q}_{k-1}) \quad (15)$$

Furthermore, the survival probability is assumed to be independent of the target state, i.e.,

$$p_{S,k}(\bar{x}_{k-1,i}) = p_{S,k} \quad (16)$$

Moreover, the RFS involved in the union operation in Equation (14) are assumed to be mutually independent.

Given the multi-target state RFS $\bar{X}_k$, the corresponding measurement RFS consists of two parts: clutter and target-generated measurements. The measurement RFS $Z_k^s$ is defined as

$$Z_k^s = K_k^s \cup \left[\bigcup_{\bar{x}_{k,i} \in \bar{X}_k} \Theta_k^s(\bar{x}_{k,i})\right] \quad (17)$$

where $K_k^s$ represents the RFS of clutter, and $\Theta_k^s(\bar{x}_{k,i})$ is the RFS of measurements obtained from the detection of target $\bar{x}_{k,i}$. The target is detected with probability $p_{D,k}^s(\bar{x}_{k,i})$, generating a measurement $z_{k,m}^s$, or is missed with probability $1 - p_{D,k}^s(\bar{x}_{k,i})$, in which case its measurement set is the empty set $\emptyset$. In a linear Gaussian multi-target tracking system, the probability density of the measurement likelihood function can be modeled as

$$g_k(z_{k,m}^s \mid \bar{x}_{k,i}) = \mathcal{N}(z_{c,k,m}^s; \bar{H}_{k,s}\bar{x}_{k,i}, R_s) \quad (18)$$

Assuming that the detection probability is independent of the state, then

$$p_{D,k}^s(\bar{x}_{k,i}) = p_{D,k}^s \quad (19)$$

4. Methods

In this section, we address three key challenges encountered when extending the spatial registration algorithm proposed in [9,23,24,25] to heterogeneous sensor scenarios. As shown in Figure 2, we provide solutions for each challenge. Firstly, to address the coupling problem between the target state and measurement bias, we propose a two-level Kalman filtering approximation for decoupling. Secondly, as different-dimensional target detection data generated by heterogeneous sensors pose a challenge for multi-sensor filtering, we utilize EKF sequential filtering techniques to effectively filter and fuse data with varying dimensions. Lastly, considering the fusion and registration results can be affected by the varying sequential filtering order due to performance differences among sensors, we utilize the OSPA metric to evaluate real-time fusion consistency across sensors. By optimizing the fusion sequence, more precise fusion results can be achieved.

4.1. Augmented State GM-PHD Registration Based on Two-Level Kalman Filter

To tackle the problem of coupling between the target state and measurement bias discussed in [23], we utilize a two-level Kalman filter. Please note that this subsection primarily addresses spatial registration for similar sensors. The subsequent subsection will extend this registration method to heterogeneous sensors.
It is also assumed that the RFS involved in the union operation in Equation (17) are mutually independent. Based on the assumption of independence between target states and sensor biases, the PHD of the birth RFS $\bar{\Gamma}_k$ can be represented as a Gaussian mixture

$$\gamma_k(\bar{x}) = \sum_{l=1}^{L_{\gamma,k}} w_{\gamma,k}^l \mathcal{N}(\bar{x}; \bar{m}_{\gamma,k}^l, \bar{P}_{\gamma,k}^l) = \sum_{l=1}^{L_{\gamma,k}} w_{\gamma,k}^l \mathcal{N}(x; m_{\gamma,k}^l, P_{\gamma,k}^l) \prod_{s=1}^{N_S} \mathcal{N}(\beta_s; \beta_{\gamma,k,s}^l, B_{\gamma,k,s}^l) \quad (20)$$

where $\bar{m}_{\gamma,k}^l = [m_{\gamma,k}^l; \beta_{\gamma,k,1}^l; \ldots; \beta_{\gamma,k,N_S}^l]^T$ and $\bar{P}_{\gamma,k}^l = \mathrm{blkdiag}(P_{\gamma,k}^l, B_{\gamma,k,1}^l, \ldots, B_{\gamma,k,N_S}^l)$. Here $L_{\gamma,k}$ is the number of newborn Gaussian components, and $w_{\gamma,k}^l$, $m_{\gamma,k}^l$, $P_{\gamma,k}^l$, $\beta_{\gamma,k,s}^l$, and $B_{\gamma,k,s}^l$ denote the weight, mean state, state covariance, mean bias, and bias covariance of newborn component $l$, with $l = 1, \ldots, L_{\gamma,k}$ and $s = 1, \ldots, N_S$.

Prediction: Assuming that the posterior intensity $v_{k-1}(\bar{x})$ at time $k-1$ has a Gaussian mixture form,

$$v_{k-1}(\bar{x}) = \sum_{l=1}^{L_{k-1}} w_{k-1}^l \mathcal{N}(\bar{x}; \bar{m}_{k-1}^l, \bar{P}_{k-1}^l) = \sum_{l=1}^{L_{k-1}} w_{k-1}^l \mathcal{N}(x; m_{k-1}^l, P_{k-1}^l) \prod_{s=1}^{N_S} \mathcal{N}(\beta_s; \beta_{k-1,s}^l, B_{k-1,s}^l) \quad (21)$$

where $\bar{m}_{k-1}^l = [m_{k-1}^l; \beta_{k-1,1}^l; \ldots; \beta_{k-1,N_S}^l]^T$ and $\bar{P}_{k-1}^l = \mathrm{blkdiag}(P_{k-1}^l, B_{k-1,1}^l, \ldots, B_{k-1,N_S}^l)$; $L_{k-1}$ is the number of Gaussian components, and $w_{k-1}^l$, $m_{k-1}^l$, $P_{k-1}^l$, $\beta_{k-1,s}^l$, and $B_{k-1,s}^l$ denote the weight, mean state, state covariance, mean bias, and bias covariance of component $l$.

The predicted intensity $v_{k|k-1}$ at time $k$ also takes the form of a Gaussian mixture, given by

$$v_{k|k-1}(\bar{x}) = \gamma_k(\bar{x}) + v_{S,k|k-1}(\bar{x}) = \sum_{l=1}^{L_{k|k-1}} w_{k|k-1}^l \mathcal{N}(\bar{x}; \bar{m}_{k|k-1}^l, \bar{P}_{k|k-1}^l) = \sum_{l=1}^{L_{k|k-1}} w_{k|k-1}^l \mathcal{N}(x; m_{k|k-1}^l, P_{k|k-1}^l) \prod_{s=1}^{N_S} \mathcal{N}(\beta_s; \beta_{k|k-1,s}^l, B_{k|k-1,s}^l) \quad (22)$$

where $m_{k|k-1}^l$ is the predicted target state at time $k$, $\beta_{k|k-1,s}^l$ the predicted measurement bias of sensor $s$, $P_{k|k-1}^l$ the covariance of the target state prediction, and $B_{k|k-1,s}^l$ the covariance of the measurement bias prediction. $\bar{m}_{k|k-1}^l = [m_{k|k-1}^l; \beta_{k|k-1,1}^l; \ldots; \beta_{k|k-1,N_S}^l]^T$ and $\bar{P}_{k|k-1}^l = \mathrm{blkdiag}(P_{k|k-1}^l, B_{k|k-1,1}^l, \ldots, B_{k|k-1,N_S}^l)$ are the predicted augmented state vector and augmented covariance matrix, respectively. $\gamma_k(\bar{x})$ is the birth intensity, and $v_{S,k|k-1}$ is the intensity of surviving targets, given by

$$v_{S,k|k-1}(\bar{x}) = p_{S,k} \sum_{l=1}^{L_{k-1}} w_{k-1}^l \mathcal{N}(\bar{x}; \bar{m}_{S,k|k-1}^l, \bar{P}_{S,k|k-1}^l) \quad (23)$$

where

$$\bar{m}_{S,k|k-1}^l = [m_{S,k|k-1}^l; \beta_{k|k-1,1}^l; \ldots; \beta_{k|k-1,N_S}^l]^T = \bar{F}_{k-1} \bar{m}_{k-1}^l \quad (24)$$

$$\bar{P}_{S,k|k-1}^l = \mathrm{blkdiag}(P_{S,k|k-1}^l, B_{k|k-1,1}^l, \ldots, B_{k|k-1,N_S}^l) = \bar{F}_{k-1} \bar{P}_{k-1}^l \bar{F}_{k-1}^T + \bar{Q}_{k-1} \quad (25)$$

In Equations (24) and (25), $m_{S,k|k-1}^l$ and $P_{S,k|k-1}^l$ are the predicted mean and covariance of the surviving components, and $\beta_{k|k-1,s}^l$ and $B_{k|k-1,s}^l$ are the predicted mean and covariance of the bias components.

Expanding Equations (24) and (25), we obtain

$$m_{S,k|k-1}^l = F_{k-1} m_{k-1}^l, \qquad P_{S,k|k-1}^l = F_{k-1} P_{k-1}^l F_{k-1}^T + Q_{k-1} \quad (26)$$

$$\beta_{k|k-1,s}^l = \beta_{k-1,s}^l, \qquad B_{k|k-1,s}^l = B_{k-1,s}^l + B_{k-1,s} \quad (27)$$

Similarly, Equation (23) can be expanded as

$$v_{S,k|k-1}(\bar{x}) = p_{S,k} \sum_{l=1}^{L_{k-1}} w_{k-1}^l \mathcal{N}(x; m_{S,k|k-1}^l, P_{S,k|k-1}^l) \prod_{s=1}^{N_S} \mathcal{N}(\beta_s; \beta_{k|k-1,s}^l, B_{k|k-1,s}^l) \quad (28)$$

By combining Equations (20), (22), and (28), we obtain

$$v_{k|k-1}(\bar{x}) = \sum_{l=1}^{L_{\gamma,k}} w_{\gamma,k}^l \mathcal{N}(x; m_{\gamma,k}^l, P_{\gamma,k}^l) \prod_{s=1}^{N_S} \mathcal{N}(\beta_s; \beta_{\gamma,k,s}^l, B_{\gamma,k,s}^l) + p_{S,k} \sum_{l=1}^{L_{k-1}} w_{k-1}^l \mathcal{N}(x; m_{S,k|k-1}^l, P_{S,k|k-1}^l) \prod_{s=1}^{N_S} \mathcal{N}(\beta_s; \beta_{k|k-1,s}^l, B_{k|k-1,s}^l) \quad (29)$$
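The closed-form prediction of Equations (26)–(29) propagates the target-state block and the bias blocks of each Gaussian component separately. Below is a minimal Python sketch under the paper's independence assumption (the component container and field names are ours):

```python
import numpy as np
from dataclasses import dataclass
from typing import List

@dataclass
class Component:
    w: float                 # weight w^l
    m: np.ndarray            # target-state mean m^l
    P: np.ndarray            # target-state covariance P^l
    beta: List[np.ndarray]   # per-sensor bias means beta_s^l
    B: List[np.ndarray]      # per-sensor bias covariances B_s^l

def predict(components, birth, F, Q, bias_Q, p_S):
    """GM-PHD prediction, Eqs. (26)-(29): survive existing components, append births."""
    predicted = []
    for c in components:
        predicted.append(Component(
            w=p_S * c.w,
            m=F @ c.m,                                   # Eq. (26)
            P=F @ c.P @ F.T + Q,
            beta=[b.copy() for b in c.beta],             # Eq. (27): bias mean unchanged,
            B=[Bc + Bq for Bc, Bq in zip(c.B, bias_Q)],  # bias covariance grows by B_{k-1,s}
        ))
    return predicted + list(birth)                       # Eq. (29): add birth components
```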
Update: We assume a measurement set from $N_S$ sensors is available at time $k$. Following the sequential sensor update approach, the GM-PHD filter update equations are applied sensor by sensor. For sensor $r$ ($r = 1, \ldots, N_S$), the posterior intensity $v_{k|k,r}(\bar{x})$ at time $k$ is also a Gaussian mixture:

$$v_{k|k,r}(\bar{x}) = \left[1 - p_{D,k}^r\right] v_{k|k,r-1}(\bar{x}) + \sum_{z \in Z_k^r} v_{D,k,r}(\bar{x}; z) \quad (30)$$

where $v_{k|k,r-1}(\bar{x})$ is the posterior intensity from sensor $r-1$, and

$$v_{D,k,r}(\bar{x}; z) = \sum_{l=1}^{L_{k|k,r-1}} w_{k|k,r}^l \mathcal{N}(\bar{x}; \bar{m}_{k|k,r}^l, \bar{P}_{k|k,r}^l) \quad (31)$$

where $L_{k|k,r-1}$ is the number of Gaussian components carried over from the previous sensor, $w_{k|k,r}^l$ is the weight of Gaussian component $l$ after the update by sensor $r$, and $\bar{m}_{k|k,r}^l$ and $\bar{P}_{k|k,r}^l$ are the augmented mean and covariance of Gaussian component $l$ after the update by sensor $r$. Assuming that sensor measurement biases are independent of the target states, Equation (31) can be transformed into

$$v_{D,k,r}(\bar{x}; z) = \sum_{l=1}^{L_{k|k,r-1}} w_{k|k,r}^l \mathcal{N}(\bar{x}; \bar{m}_{k|k,r}^l, \bar{P}_{k|k,r}^l) \approx \sum_{l=1}^{L_{k|k,r-1}} w_{k|k,r}^l \mathcal{N}(x; m_{k|k,r}^l, P_{k|k,r}^l) \prod_{s=1}^{N_S} \mathcal{N}(\beta_s; \beta_{k|k,r}^{s,l}, B_{k|k,r}^{s,l}) \quad (32)$$

where $m_{k|k,r}^l$ and $P_{k|k,r}^l$ are the target state mean and covariance of Gaussian component $l$ after the update by sensor $r$, and $\beta_{k|k,r}^{s,l}$ and $B_{k|k,r}^{s,l}$ are the estimated measurement bias of sensor $s$ and its covariance in Gaussian component $l$ after the update by sensor $r$.

Based on the sequential iterative fusion rule, let $v_{k|k,0}(\bar{x}) \triangleq v_{k|k-1}(\bar{x})$ and, for any Gaussian component $l$, $m_{k|k,0}^l \triangleq m_{k|k-1}^l$, $P_{k|k,0}^l \triangleq P_{k|k-1}^l$, $\beta_{k|k,0}^{s,l} \triangleq \beta_{k|k-1}^{s,l}$, $B_{k|k,0}^{s,l} \triangleq B_{k|k-1}^{s,l}$. The state and bias estimates are then updated through the pseudo-measurements $z_{c,k,m}^r$, $m = 1, \ldots, M_k^r$:
$$\bar{m}_{k|k,r}^l = \bar{m}_{k|k,r-1}^l + \bar{G}_{c,k,r}^l \tilde{z}_{c,k,m}^{r,l} \quad (33)$$

$$\bar{P}_{k|k,r}^l = \left[I - \bar{G}_{c,k,r}^l \bar{H}_{k,r}^l\right] \bar{P}_{k|k,r-1}^l \quad (34)$$

where the augmented gain matrix $\bar{G}_{c,k,r}^l$ of Gaussian component $l$ for sensor $r$ can be expressed as

$$\bar{G}_{c,k,r}^l = [G_{c,k,r}^l; K_{1,k,r}^l; \ldots; K_{N_S,k,r}^l] = \bar{P}_{k|k,r-1}^l (\bar{H}_{k,r}^l)^T (S_{c,k,r}^l)^{-1} \quad (35)$$

where the Jacobian matrix $\bar{H}_{k,r}^l$ is given by

$$\bar{H}_{k,r}^l = \bar{H}_{k,r}(\bar{m}_{k|k,r-1}^l) \quad (36)$$

In the fusion Equations (33) and (34), $\bar{m}_{k|k,r-1}^l = [m_{k|k,r-1}^l; \beta_{k|k,r-1}^{1,l}; \ldots; \beta_{k|k,r-1}^{N_S,l}]^T$ and $\bar{P}_{k|k,r-1}^l = \mathrm{blkdiag}(P_{k|k,r-1}^l, B_{k|k,r-1}^{1,l}, \ldots, B_{k|k,r-1}^{N_S,l})$. To reduce computational complexity, $\bar{P}_{k|k,r}^l$ is approximated as block-diagonal, so that, according to Equation (34),

$$\bar{P}_{k|k,r}^l \approx \mathrm{blkdiag}(P_{k|k,r}^l, B_{k|k,r}^{1,l}, \ldots, B_{k|k,r}^{N_S,l}) \quad (37)$$

The detailed derivations of $\bar{P}_{k|k,r}^l$, $G_{c,k,r}^l$, $H_{k,r}^l$, $K_{s,k,r}^l$, and $S_{c,k,r}^l$ can be found in Appendix A.

Using block matrix decomposition, the augmented state is decomposed into the target state and the individual sensor measurement bias vectors, and the target state is updated using the pseudo-measurement residual, computed as

$$\tilde{z}_{c,k,m}^{r,l} = z_{c,k,m}^r - \bar{H}_{k,r}^l \bar{m}_{k|k,r-1}^l = z_{c,k,m}^r - H_{k,r}^l m_{k|k,r-1}^l - \sum_{s=1}^{N_S} \Psi_s \beta_{k|k,r-1}^{s,l} = z_{c,k,m}^r - H_{k,r}^l m_{k|k,r-1}^l - \beta_{k|k,r-1}^{r,l} \quad (38)$$
After obtaining the residual, sequential updates are performed on the target state, the target state estimation error covariance, the sensor measurement biases, and the bias estimation error covariances:

$$m_{k|k,r}^l = m_{k|k,r-1}^l + G_{c,k,r}^l \tilde{z}_{c,k,m}^{r,l} \quad (39)$$

$$\bar{P}_{k|k,r}^l = \left[I - \bar{G}_{c,k,r}^l \bar{H}_{k,r}^l\right] \bar{P}_{k|k,r-1}^l \quad (40)$$

$$\beta_{k|k,r}^{s,l} = \beta_{k|k,r-1}^{s,l} + K_{s,k,r}^l \tilde{z}_{c,k,m}^{r,l}, \quad s = 1, \ldots, N_S \quad (41)$$

$$B_{k|k,r}^{s,l} = \left[I - K_{s,k,r}^l \Psi_s\right] B_{k|k,r-1}^{s,l}, \quad s = 1, \ldots, N_S \quad (42)$$

Once the updated means and covariances are obtained, the corresponding Gaussian component weights are updated as

$$w_{k|k,r}^l = \frac{p_{D,k}^r \, w_{k|k,r-1}^l \, \mathcal{N}\!\left(z_{c,k,m}^r; H_{k,r}^l m_{k|k,r-1}^l + \beta_{k|k,r-1}^{r,l}, S_{c,k,r}^l\right)}{\kappa_k(z_{c,k,m}^r) + p_{D,k}^r \sum_{l=1}^{L_{k|k,r-1}} w_{k|k,r-1}^l \, \mathcal{N}\!\left(z_{c,k,m}^r; H_{k,r}^l m_{k|k,r-1}^l + \beta_{k|k,r-1}^{r,l}, S_{c,k,r}^l\right)} \quad (43)$$

where $\kappa_k(\cdot)$ denotes the clutter intensity of sensor $r$.

After each sensor update described above, computational complexity can be reduced by performing the standard merging and pruning operations of the GM-PHD filter, either after each sensor update or after all sensors have been fused. It is important to note that when the last sensor has been fused, its updated result is the posterior intensity after multi-sensor fusion, i.e.,

$$v_k(\bar{x}) = \sum_{l=1}^{L_k} w_k^l \mathcal{N}(\bar{x}; \bar{m}_k^l, \bar{P}_k^l) = \sum_{l=1}^{L_k} w_k^l \mathcal{N}(x; m_k^l, P_k^l) \prod_{s=1}^{N_S} \mathcal{N}(\beta_s; \beta_{k,s}^l, B_{k,s}^l) = v_{k|k,N_S}(\bar{x}) = \sum_{l=1}^{L_{k|k,N_S}} w_{k|k,N_S}^l \mathcal{N}(x; m_{k|k,N_S}^l, P_{k|k,N_S}^l) \prod_{s=1}^{N_S} \mathcal{N}(\beta_s; \beta_{k|k,N_S}^{s,l}, B_{k|k,N_S}^{s,l}) \quad (44)$$

From this posterior intensity, the number of targets in the scene and their state estimates can be extracted. Since these state estimates have already been corrected for measurement biases, the measurement bias of each sensor can be computed quantitatively as

$$\beta_{k,s} = \frac{1}{c} \sum_{l=1}^{L_k} w_k^l \beta_{k,s}^l, \quad s = 1, \ldots, N_S \quad (45)$$

where $c$ is a normalization constant,

$$c = \sum_{l=1}^{L_k} w_k^l \quad (46)$$
The pseudocode for augmented state GM-PHD registration based on a two-level Kalman filter is provided in Algorithm 1.
Algorithm 1 Augmented state GM-PHD registration based on two-level Kalman filter
Step 1. Prediction: the augmented state prediction is obtained through a two-level Kalman recursion based on Equations (26) and (27).
Step 2. Update: the augmented state update is obtained through a two-level Kalman recursion based on Equations (39)–(42).
Step 3. Calculation of the measurement bias: the measurement bias of each sensor is obtained using Equations (45) and (46).
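For illustration, the following sketch performs the approximate decoupled (two-level) update of one Gaussian component against one pseudo-measurement from the reporting sensor $r$, following Equations (33)–(42) under the block-diagonal approximation of Equation (37); the exact derivation is in Appendix A of the paper, and the names here are ours:

```python
import numpy as np

def update_component(m, P, beta_r, B_r, z_c, H, R_r):
    """Two-level Kalman update of one component for the reporting sensor r.

    m, P        : predicted target-state mean/covariance of the component
    beta_r, B_r : predicted bias mean/covariance of sensor r
    z_c         : pseudo-measurement from Eq. (8)
    H           : measurement Jacobian of sensor r evaluated at m
    R_r         : measurement noise covariance of sensor r
    """
    # Innovation (Eq. (38)) and its covariance; the bias enters as an additive state.
    residual = z_c - H @ m - beta_r
    S = H @ P @ H.T + B_r + R_r
    S_inv = np.linalg.inv(S)

    # Level 1: target-state gain and update (Eqs. (39)-(40)).
    G = P @ H.T @ S_inv
    m_new = m + G @ residual
    P_new = (np.eye(len(m)) - G @ H) @ P

    # Level 2: bias gain and update for sensor r (Eqs. (41)-(42)); the other sensors'
    # bias blocks are untouched because Psi_s = 0 for s != r.
    K = B_r @ S_inv
    beta_new = beta_r + K @ residual
    B_new = (np.eye(len(beta_r)) - K) @ B_r
    return m_new, P_new, beta_new, B_new, residual, S
```

The returned residual and innovation covariance S can then feed the weight update of Equation (43).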

4.2. Heterogeneous Multi-Sensor Sequential Filtering

Currently, there are two main fusion structures for heterogeneous sensor fusion: centralized and distributed fusion. Distributed fusion involves sending local track information to a fusion center to achieve the fusion of multi-target posterior densities from local track nodes. In contrast, centralized fusion sends the measurement information of local sensors to the fusion center. This fusion structure does not require separate processing of local sensor data but processes the measurement data centrally at the fusion center to complete global track updates. Compared to the distributed fusion structure, centralized fusion utilizes raw data from each sensor and requires larger communication bandwidth, but offers slightly higher fusion accuracy [38,39,40].

4.2.1. Centralized Fusion Filtering of Heterogeneous Sensors Based on EKF

Multi-sensor centralized fusion primarily relies on three filtering methods: parallel filtering, data compression filtering, and sequential filtering [41]. Among them, parallel filtering merges all the measurements reported by sensors at time k into a global measurement by expanding the dimensions, and then it performs a one-time update on the target state. However, due to the data expansion, high-dimensional matrix multiplication and inversion operations are introduced, leading to increased computational complexity. Consequently, this approach is rarely used in multi-sensor multi-target fusion. Data compression filtering is mainly suitable for data fusion among homogeneous sensors. For data fusion between heterogeneous sensors, the mismatch in measurement noise covariances can significantly affect the fusion results. Sequential filtering treats the measurements provided by each sensor at time k as independent new measurements and processes them sequentially. This approach does not require extending all measurements, thus avoiding a significant increase in computational complexity. Furthermore, it independently updates the target state based on the measurements from each sensor, mitigating the need to match measurement noise covariances between sensors. Therefore, sequential filtering becomes the primary approach for handling data fusion in scenarios involving heterogeneous sensors.
It should be noted that this paper categorizes heterogeneous measurements into complete and incomplete measurements. Complete measurements include the target’s full three-dimensional position information (distance, elevation angle, azimuth angle), while incomplete measurements typically provide only the target’s angle information.
The central idea of sequential filtering for heterogeneous sensor data fusion, as shown in Figure 3, is to first update the target state using the complete measurements (such as active radar measurements). The filtered state estimate $\bar{x}_k^c$ obtained from the complete measurements then replaces the predicted state estimate $\bar{x}_{k|k-1}^{\mathrm{In}}$ of the incomplete-measurement sensors (such as infrared sensors), and, similarly, the filtered state estimation covariance $\bar{P}_k^c$ obtained from the complete measurements replaces the predicted covariance $\bar{P}_{k|k-1}^{\mathrm{In}}$ of the incomplete-measurement sensors. The target state is subsequently updated using the incomplete measurements, and the updated results serve as the global filtered state estimate $\bar{x}_k$ and state estimation covariance $\bar{P}_k$. The pseudocode for heterogeneous sensor registration using EKF sequential filtering is provided in Algorithm 2.
Algorithm 2. Heterogeneous sensor registration using EKF sequential filtering
INPUT: $\bar{x}_0$, $\bar{P}_0$, $\{Z_k^{1:N_S}\}_{k=1}^K$, the number of complete-measurement sensors $L$
OUTPUT: $\bar{x}_k$, $\bar{P}_k$
Step 1. Prediction: calculate the one-step predicted state $\bar{x}_{k|k-1}^c$ and covariance $\bar{P}_{k|k-1}^c$ based on Equation (29).
Step 2. Update the target state using the complete measurements (e.g., active radar measurements). [Pseudocode given as an image in the original article.]
Step 3. Update the target state using the incomplete measurements (e.g., infrared measurements). [Pseudocode given as an image in the original article.]
Step 4. Assign the filtered state estimate $\bar{x}_{k,N_S}^{\mathrm{In}}$ and covariance $\bar{P}_{k,N_S}^{\mathrm{In}}$ from the incomplete measurements to the global filtered state estimate $\bar{x}_k$ and state estimation covariance $\bar{P}_k$.
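Below is a minimal single-target sketch of the sequential EKF fusion in Algorithm 2: the complete (radar) measurements are processed first, and the resulting posterior seeds the incomplete (angle-only infrared) updates. This is our own simplified illustration and omits the bias blocks for brevity; each sensor entry is assumed to be a tuple (z, h, jacobian, R):

```python
import numpy as np

def ekf_update(x, P, z, h, jac, R):
    """Generic EKF measurement update used for both sensor classes."""
    H = jac(x)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (z - h(x))
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

def sequential_heterogeneous_update(x_pred, P_pred, radar_meas, ir_meas):
    """Complete (radar) measurements first, then incomplete (IR) measurements."""
    x, P = x_pred, P_pred
    for z, h, jac, R in radar_meas:          # step 2: full bearing-range updates
        x, P = ekf_update(x, P, z, h, jac, R)
    for z, h, jac, R in ir_meas:             # step 3: angle-only updates seeded by step 2
        x, P = ekf_update(x, P, z, h, jac, R)
    return x, P                              # step 4: global estimate
```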

4.2.2. Recursive GM-PHD Filtering for Sequential Fusion with Heterogeneous Sensors

To address the multi-target filtering and registration in heterogeneous sensors within the RFS framework, this subsection combines the concepts of EKF filtering and sequential fusion [42]. By employing the GM-PHD filtering approach, it enables recursive operations on the states of multiple targets from heterogeneous sensors. The specific approach involves first updating the prior GM-PHD of multiple targets at time k using complete measurements and using the updated results as the predicted values for the incomplete measurement sensors. Subsequently, in the fusion center, the incomplete measurements are utilized to update the target states and obtain the overall state estimation of multiple targets at time k .
  • The prediction of the target state at time $k$ is given by Equation (22).
  • Let $v_{k|k,0}(\bar{x}) = v_{k|k-1}(\bar{x})$; then the update of the target states with the complete measurements at time $k$ can be expressed as

$$v_{k|k,r}^c(\bar{x}) = \left[1 - p_{D,k}^{c,r}\right] v_{k|k,r-1}^c(\bar{x}) + \sum_{z \in Z_k^r} v_{D,k,r}^c(\bar{x}; z) \quad (47)$$

where $v_{k|k,r}^c$ ($r = 1, \ldots, L$) is the posterior GM-PHD at time $k$ after updating with the complete measurements, $p_{D,k}^{c,r}$ is the detection probability of complete-measurement sensor $r$, and $\sum_{z \in Z_k^r} v_{D,k,r}^c(\bar{x}; z)$ denotes the contribution of the complete measurements to the target state update.

The updated intensity obtained from the complete measurements is then taken as the prediction for the incomplete-measurement sensors. The target states are subsequently updated with the incomplete measurements, and the resulting intensity is considered the global posterior GM-PHD obtained by the fusion system at time $k$:

$$v_{k|k,r}^{\mathrm{In}}(\bar{x}) = \left[1 - p_{D,k}^{\mathrm{In},r}\right] v_{k|k,r-1}(\bar{x}) + \sum_{z \in Z_k^r} v_{D,k,r}^{\mathrm{In}}(\bar{x}; z), \qquad v_{k|k}(\bar{x}) = v_{k|k,N_S}^{\mathrm{In}}(\bar{x}) \quad (48)$$

where $v_{k|k,r}^{\mathrm{In}}(\bar{x})$ ($r = L+1, \ldots, N_S$) denotes the posterior GM-PHD of the multiple targets at time $k$ after updating with the incomplete measurements (for $r = L+1$, the previous intensity $v_{k|k,r-1}$ is the complete-measurement posterior $v_{k|k,L}^c$), $p_{D,k}^{\mathrm{In},r}$ denotes the detection probability of incomplete-measurement sensor $r$, $\sum_{z \in Z_k^r} v_{D,k,r}^{\mathrm{In}}(\bar{x}; z)$ signifies the contribution of the incomplete measurements to the target state update, and $v_{k|k}(\bar{x})$ is the posterior GM-PHD of the fusion system at time $k$ after the global update.
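At the intensity level, the same idea reduces to a short loop in which each sensor's measurement set refines the intensity produced by the previous sensor, with complete-measurement sensors processed first (Equations (47) and (48)). A schematic sketch, assuming a user-supplied phd_update routine implementing the single-sensor update of Equation (30):

```python
def fuse_scan(v_pred, sensor_scans, phd_update):
    """Sequential GM-PHD fusion over heterogeneous sensors at one time step.

    v_pred       : predicted intensity (Gaussian components) from Eq. (29)
    sensor_scans : list of (measurement_set, sensor_model) pairs, ordered with
                   complete-measurement sensors first
    phd_update   : callable implementing the single-sensor update of Eq. (30)
    """
    v = v_pred
    for Z, model in sensor_scans:        # posterior of sensor r-1 becomes
        v = phd_update(v, Z, model)      # the prior of sensor r (Eqs. (47)-(48))
    return v                             # global posterior v_{k|k}
```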

4.3. Heterogeneous Sensor Adaptive Measurement Iterative Update

In multi-sensor sequential filtering, different sensor update orders can lead to different tracking results. For instance, consider a tracking system consisting of $s-1$ "good" sensors and one "bad" sensor with low detection probability, high clutter intensity, and large measurement noise. If the "bad" sensor is the $i$-th sensor in the update sequence, its negative impact may be partially mitigated by the subsequent $s-i$ sensors in the sequential updating process. If the "bad" sensor is the last to update, the tracking results may suffer from false alarms, missed detections, and inaccurate state estimates, as no sensor remains to correct its negative influence. In summary, the larger the value of $i$ for the "bad" sensor, the greater its negative impact on the tracking results of the system [22].
The GM-PHD fusion filtering algorithm based on sequential iterative update is sensitive to the update sequence, thus requiring the investigation of an adaptive optimization algorithm for fusion order. In practical applications, there is usually no prior information available about the quality of sensor measurements. Therefore, to optimize the observation sequence of sensors, online evaluation of the quality of multiple sensor data is necessary.

4.3.1. Measurement Consistency Metrics

To overcome the sensitivity of measurement sequential iteration results to fusion order when the data quality of each sensor’s measurements is unknown, a real-time assessment of measurement consistency for each sensor is performed using the OSPA distance. By optimizing the fusion order, more accurate fusion results are obtained. By selecting the state fusion results and the measurement sets from each sensor as finite sets at time k 1 , the quality of sensor measurement data is assessed by calculating the OSPA distance between each finite set and the state fusion results.
We assume there are $N_S$ sensors; the measurement set of sensor $s$ is denoted $\{z_{k-1,m}^s\}_{m=1}^{M_{k-1}^s}$, and the state fusion result set is denoted $\{w_{k-1}^l, \bar{m}_{k-1}^l, \bar{P}_{k-1}^l\}_{l=1}^{L_{k-1}}$. The OSPA distance of sensor $s$ with respect to the target state fusion set is defined as

$$D_k^s(z) = \left( \frac{1}{L_{k-1}} \left( \min_{\pi(s)\, \subseteq\, \{1, 2, \ldots, M_{k-1}^s\}} \sum_{l=1}^{L_{k-1}} d^{(c)}\!\left( g(\bar{m}_{k-1}^l), z_{k-1,\pi(l)}^s \right)^p + c^p \left| L_{k-1} - M_{k-1}^s \right| \right) \right)^{1/p} \quad (49)$$

where $d^{(c)}(\cdot,\cdot)$ denotes the distance truncated at $c$, and $g(\bar{m}_{k-1}^l)$ is obtained from Equation (6),

$$g(\bar{m}_{k-1}^l) = h(\bar{m}_{k-1}^l, x_{k-1,s}) \quad (50)$$

Here $c$ represents the association sensitivity parameter, indicating the minimum truncation distance, and $p$ is the distance sensitivity parameter; together they provide a comprehensive evaluation of the quality of the target states and the target number. The measurement consistency metric reflects the consistency between a sensor's measurement set and the state fusion results: a smaller OSPA distance between sensor measurements and the state fusion results indicates higher data quality for that sensor.
Based on Equations (49) and (50), the OSPA distance between sensor measurement data and state fusion results can be computed. These evaluation metrics are then sorted to determine the optimized sequence for the measurement sequential update at time k , resulting in more accurate fusion results.
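A sketch of the consistency score of Equations (49) and (50): the fused component means are mapped into sensor $s$'s measurement space and matched to that sensor's measurements by an optimal assignment (we use SciPy's Hungarian solver; the function name and interface are ours):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def consistency_ospa(fused_means, measurements, h, c=100.0, p=1.0):
    """OSPA-style distance between predicted measurements of the fused state set
    and one sensor's measurement set (Eq. (49))."""
    L, M = len(fused_means), len(measurements)
    if L == 0:
        return 0.0 if M == 0 else c
    preds = [h(m) for m in fused_means]                       # Eq. (50)
    D = np.array([[min(np.linalg.norm(z - g), c) ** p         # cut-off distance d^(c)
                   for z in measurements] for g in preds])
    # Optimal assignment over the smaller set; unmatched elements are penalised by c.
    rows, cols = linear_sum_assignment(D) if M else ([], [])
    matched = D[rows, cols].sum() if M else 0.0
    return ((matched + c ** p * abs(L - M)) / L) ** (1.0 / p)
```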

4.3.2. Iterative Update Based on Consistency Metrics

When evaluating the consistency of heterogeneous sensors, the approach differs slightly from that of homogeneous sensors due to differences in data dimensions. In Figure 4, the first step involves measuring the consistency between the complete measurement and the state fusion results. The resulting measurements are then sorted in descending order. Next, the consistency between the incomplete measurement and the state fusion results is assessed. The measurements are ranked in descending order and appended to the sorted results of the complete measurement. This process generates the final consistency measurement results for heterogeneous sensor measurements. It is worth noting that, in this paper, the default incomplete sensors primarily include infrared and optical sensors, which exhibit higher angular measurement accuracy compared to complete measurement sensors, such as active 3D radar.
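Following the ordering rule of Figure 4, the per-sensor consistency scores can drive the update order as in the sketch below: complete-measurement sensors first, each group sorted by descending OSPA so that less consistent sensors update earlier (the sensor dictionary layout is hypothetical, and score_fn could be the consistency_ospa sketch above):

```python
def plan_update_order(sensors, fused_means, score_fn):
    """Return sensor indices in the order they should be sequentially fused.

    sensors  : list of dicts with keys 'complete' (bool), 'measurements', 'h'
    score_fn : consistency metric, e.g. consistency_ospa from the sketch above
    """
    def score(i):
        s = sensors[i]
        return score_fn(fused_means, s['measurements'], s['h'])

    complete = [i for i, s in enumerate(sensors) if s['complete']]
    incomplete = [i for i, s in enumerate(sensors) if not s['complete']]
    # Larger OSPA means lower consistency; such sensors update earlier so that
    # later, more consistent sensors can correct their influence (Section 4.3).
    complete.sort(key=score, reverse=True)
    incomplete.sort(key=score, reverse=True)
    return complete + incomplete
```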

5. Experimental Results

In the domain of multi-sensor target tracking and data fusion, it is customary to select generic simulated scenarios (i.e., simulated scenarios provided in references [14,15,16,23,24,25]). However, our simulations extend these scenarios to heterogeneous sensor settings. Moreover, in complex detection scenarios, the main factors affecting target tracking and sensor registration are the sensor’s detection probability and the target’s clutter rate.
In this section, to evaluate our algorithms' performance under different detection probabilities and clutter intensities, we conducted a performance analysis using four different combinations based on ablation studies. OSPA distance and RMSE are widely used performance evaluation metrics in multi-target tracking, and we primarily employed them to assess the effectiveness of our proposed algorithms (HSAR-GM-PHD-R and HSAR-GM-PHD-AO). Lastly, we compared the HSAR-GM-PHD-AO algorithm with a heterogeneous sensor registration algorithm based on significant targets to highlight its efficacy [43].

5.1. Simulation Scenarios

As shown in Figure 5, in a two-dimensional simulation scenario of $[-1500, 1500] \times [0, 2000]\ \mathrm{m}^2$, there are four targets, all following a coordinated turn motion model. The target state is $x_{k,i} = [x_{k,i}, \dot{x}_{k,i}, y_{k,i}, \dot{y}_{k,i}, \omega_{k,i}]^T$, comprising the position, velocity, and angular velocity. The motion model for the target can be represented as

$$x_{k,i} = F_{k-1} x_{k-1,i} + G w_{k-1} \quad (51)$$

where the state transition matrix $F_{k-1}$ is given by

$$F_{k-1} = \begin{bmatrix} 1 & \dfrac{\sin\omega T}{\omega} & 0 & -\dfrac{1-\cos\omega T}{\omega} & 0 \\ 0 & \cos\omega T & 0 & -\sin\omega T & 0 \\ 0 & \dfrac{1-\cos\omega T}{\omega} & 1 & \dfrac{\sin\omega T}{\omega} & 0 \\ 0 & \sin\omega T & 0 & \cos\omega T & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix} \quad (52)$$

where all sensors have a sampling interval of $T = 1$ s. The process noise covariance matrix $Q$ is given by

$$Q = \begin{bmatrix} \sigma_v^2 \dfrac{T^2}{2} & 0 & 0 \\ \sigma_v^2 T & 0 & 0 \\ 0 & \sigma_v^2 \dfrac{T^2}{2} & 0 \\ 0 & \sigma_v^2 T & 0 \\ 0 & 0 & \sigma_\omega^2 \end{bmatrix} \quad (53)$$

where $\sigma_v^2 = 5\ \mathrm{m/s^2}$, $\sigma_\omega = 1\ \mathrm{deg/s}$, and $w_{k-1} \sim \mathcal{N}(\cdot; 0, I_3)$.
The initial motion states and survival durations of the four targets in the simulation scenario are presented in Table 1, and the target survival probability is set to $p_S = 0.99$.
The scene contains four sensors observing simultaneously: two complete-measurement sensors (radar) and two incomplete-measurement sensors (infrared). The observation process for all sensors lasts $K = 100$ s. The measurement equation for the complete-measurement sensors is

$$z_{k,m}^s = h(x_{k,i}, x_{k,s}) + \beta_{k,s}^c + \upsilon_{k,s}^c = \begin{bmatrix} \arctan\dfrac{y_{k,i} - y_{k,s}}{x_{k,i} - x_{k,s}} \\ \sqrt{(x_{k,i} - x_{k,s})^2 + (y_{k,i} - y_{k,s})^2} \end{bmatrix} + \beta_{k,s}^c + \upsilon_{k,s}^c \quad (54)$$

and the measurement equation for the incomplete-measurement sensors is

$$z_{k,m}^s = h(x_{k,i}, x_{k,s}) + \beta_{k,s}^{\mathrm{In}} + \upsilon_{k,s}^{\mathrm{In}} = \arctan\dfrac{y_{k,i} - y_{k,s}}{x_{k,i} - x_{k,s}} + \beta_{k,s}^{\mathrm{In}} + \upsilon_{k,s}^{\mathrm{In}} \quad (55)$$

where $x_{k,s} = [x_{k,s}, \dot{x}_{k,s}, y_{k,s}, \dot{y}_{k,s}]^T$ represents the sensor state, $\upsilon_{k,s}^c \sim \mathcal{N}(\cdot; 0, \sigma_s^2)$ is the noise associated with the complete measurements, with $\sigma_s^2 = \mathrm{diag}(\sigma_\theta^2, \sigma_\rho^2)$, $\upsilon_{k,s}^{\mathrm{In}} \sim \mathcal{N}(\cdot; 0, \sigma_\theta^2)$ is the noise associated with the incomplete measurements, and $\sigma_\theta = 0.5^\circ$, $\sigma_\rho = 10\ \mathrm{m}$. The parameters of the sensors are listed in Table 2.

The covariance matrix of the bias process noise for the two complete-measurement sensors is set to $B_{k,i}^c = \mathrm{diag}(0.1\ \mathrm{deg}^2, 1\ \mathrm{m}^2)$, and that of the two incomplete-measurement sensors is set to $B_{k,i}^{\mathrm{In}} = 0.1\ \mathrm{deg}^2$. The weights of the Gaussian components of the newborn target intensity are $w_\gamma^1 = w_\gamma^2 = 0.02$ and $w_\gamma^3 = w_\gamma^4 = 0.03$. The augmented means $\bar{m}_\gamma^l$ of the newborn targets are shown in Table 3, with the corresponding covariance set to $\bar{P}_\gamma^l = \mathrm{diag}(50\ \mathrm{m}, 50\ \mathrm{m}, 50\ \mathrm{m}, 50\ \mathrm{m}, 6^\circ/\mathrm{s}, 0.1^\circ, 10\ \mathrm{m}, 0.1^\circ, 10\ \mathrm{m}, 0.01^\circ, 0.01^\circ)^2$, $l = 1, 2, 3, 4$.
The clutter follows a Poisson distribution and is uniformly distributed over the region $[-\pi, \pi]\ \mathrm{rad} \times [0, 2000]\ \mathrm{m}$. To prevent unlimited growth of the number of Gaussian mixture components, the filter is configured with a maximum of $L_{\max} = 100$ Gaussian components, a pruning threshold of $T_{\mathrm{pruning}} = 10^{-5}$, and a merging distance of $T_{\mathrm{merge}} = 4$. The filtering performance is assessed using the OSPA distance, which provides a comprehensive measure of both the target state estimation error and the target cardinality estimation error; we set the order parameter to $p = 1$ and the penalty factor to $c = 100$. All parameters of the heterogeneous augmented GM-PHD filter are the same as those of the standard heterogeneous multi-sensor GM-PHD filter, with similar birth target intensities. To validate the HSAR-GM-PHD-R and HSAR-GM-PHD-AO algorithms under different detection probabilities and clutter intensities, the simulation scenarios are set as shown in Table 4.
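For reference, one scan of the simulated sensor data described above could be generated roughly as follows (a sketch under the stated parameters, with angles in radians; it is not the paper's simulation code, and the function names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def radar_scan(targets, sensor_pos, bias, sigma_theta, sigma_rho, p_d, lam):
    """Biased bearing-range measurements (Eq. (54)) plus Poisson clutter."""
    Z = []
    for x in targets:
        if rng.random() < p_d:                       # detection with probability p_D
            dx, dy = x[0] - sensor_pos[0], x[2] - sensor_pos[1]
            z = np.array([np.arctan2(dy, dx), np.hypot(dx, dy)]) + bias
            z += rng.normal(0.0, [sigma_theta, sigma_rho])
            Z.append(z)
    for _ in range(rng.poisson(lam)):                # uniform clutter in [-pi,pi] x [0,2000]
        Z.append(np.array([rng.uniform(-np.pi, np.pi), rng.uniform(0.0, 2000.0)]))
    return Z

def infrared_scan(targets, sensor_pos, bias, sigma_theta, p_d, lam):
    """Biased angle-only measurements (Eq. (55)) plus Poisson clutter."""
    Z = []
    for x in targets:
        if rng.random() < p_d:
            dx, dy = x[0] - sensor_pos[0], x[2] - sensor_pos[1]
            Z.append(np.arctan2(dy, dx) + bias + rng.normal(0.0, sigma_theta))
    Z += [rng.uniform(-np.pi, np.pi) for _ in range(rng.poisson(lam))]
    return Z
```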

5.2. Simulation Results

Scenario 1: Same detection probability and clutter intensity.
The detection probability of each sensor is $p_{D,k} = 0.9$, and the clutter intensity is $\lambda = 10$. The measurement results for sensor 1 are shown in Figure 6.
The HSAR-GM-PHD-R and HSAR-GM-PHD-AO algorithms proposed in this paper address sensor bias and target state estimation within the framework of random finite sets. In the simulated scenarios defined in this study, the angle information obtained from the infrared sensor is used to perform sequential EKF updates on the fusion state of corresponding targets observed by the radar sensor, resulting in the final fused target state. It is crucial to emphasize that the target state obtained by the algorithms proposed in this paper is the result after bias elimination. Moreover, to mitigate the impact of sensor performance on sequential fusion results, the HSAR-GM-PHD-AO algorithm conducts real-time performance evaluation of the infrared and radar sensors separately, enabling the determination of the optimal fusion order and achieving stable target tracking.
In the case of the same detection probability and clutter intensity, the performance comparison between the HSAR-GM-PHD-R algorithm and the HSAR-GM-PHD-AO algorithm is shown in Figure 7 and Figure 8.
Figure 7 and Figure 8 demonstrate that the target trajectories obtained from the HSAR-GM-PHD-R and HSAR-GM-PHD-AO algorithms closely align with the true target trajectory. When the detection probability and clutter intensity are the same, both algorithms exhibit comparable registration performance. These algorithms effectively mitigate sensor measurement biases and enhance target tracking fusion performance.
To illustrate the real-time estimation performance of bias in both algorithms, Figure 9 presents the root mean square error (RMSE) curves for sensor bias estimation in 100 Monte Carlo experiments.
By examining the RMSE curves of bias estimation for the two proposed algorithms, it is evident that the bias estimation algorithm typically converges within 10 s. The RMSE for angular estimation is approximately 1/20 of the set angular bias, while the RMSE for distance estimation is approximately 1/60 of the set distance bias. This level of accuracy in bias estimation ensures high-precision real-time operation of the fusion system. To compare and validate the performance of the two proposed algorithms under equal detection probability and clutter intensity for each sensor, Figure 10 and Figure 11 display the real-time estimation results for the number of targets (cardinality) and OSPA distances for both algorithms, respectively. Furthermore, this study focuses on analyzing the performance during the time interval with the maximum number of targets, specifically from 40 s to 80 s.
As depicted in Figure 10 and Figure 11, along with Table 5, when the detection probability and clutter intensity are equal for all sensors, both algorithms exhibit comparable performance in estimating the number of targets and OSPA distance. The target number estimates for both algorithms remain stable around the true values with minor fluctuations, and the OSPA distances consistently fall within a relatively low range. Specifically, for the HSAR-GM-PHD-AO algorithm, the average target number estimate between 40 s and 80 s is 4.08, deviating from the true target number by only 0.08. Moreover, the average OSPA distance during this period is 8.84. Similarly, for the HSAR-GM-PHD-R algorithm, the average target number estimate between 40 s and 80 s is 4.11, deviating from the true target number by 0.11, with an average OSPA distance of 8.89. Since the final fusion filtering results are unaffected by the sensor fusion order under the conditions of equal detection probability and clutter intensity for all sensors, both algorithms demonstrate similar registration and fusion performance.
Scenario 2: Different detection probabilities, same clutter intensity.
Figure 12 and Figure 13 present the comparison of target number estimation and OSPA distance between the HSAR-GM-PHD-R algorithm and the HSAR-GM-PHD-AO algorithm under different detection probabilities but the same clutter intensity.
Figure 12 and Figure 13, along with Table 6, demonstrate notable differences in target number estimation and OSPA distance estimation between the HSAR-GM-PHD-R algorithm and the HSAR-GM-PHD-AO algorithm when considering different detection probabilities but the same clutter intensity. The HSAR-GM-PHD-AO algorithm exhibits target number estimates that closely align with the true values, with minimal fluctuations, and maintains relatively low OSPA distances. Conversely, the HSAR-GM-PHD-R algorithm tends to overestimate the number of targets, resulting in more pronounced fluctuations around the true values and higher OSPA values. Specifically, in the 40 s–80 s time period, the HSAR-GM-PHD-AO algorithm achieves an average target number estimate of 4.12, deviating by 0.12 from the true target number. The average OSPA distance during this period is 11.23. In contrast, the HSAR-GM-PHD-R algorithm yields an average target number estimate of 4.21, deviating by 0.21 from the true target number, with a notably higher average OSPA distance of 16.57. The differences in performance arise from the fact that, when varying detection probabilities across different sensors, the sequential fusion order of different sensors can have a significant impact on the final fusion result. The HSAR-GM-PHD-AO algorithm proposed in this study efficiently plans the optimal fusion order for sensors in real time, achieving superior fusion results. Specifically, the HSAR-GM-PHD-AO algorithm achieves a 32.2% reduction in OSPA distance compared to the HSAR-GM-PHD-R algorithm. Therefore, the proposed HSAR-GM-PHD-AO algorithm outperforms the HSAR-GM-PHD-R algorithm in overall performance.
Scenario 3: Same detection probability, different clutter intensities.
Figure 14 and Figure 15 illustrate the performance comparison between the HSAR-GM-PHD-R algorithm and the HSAR-GM-PHD-AO algorithm when the detection probability is the same but the clutter intensity varies.
When the detection probability is the same but the clutter intensity varies, both the HSAR-GM-PHD-R algorithm and the HSAR-GM-PHD-AO algorithm exhibit minor differences in target number estimation and OSPA distance estimation, as shown in Figure 14 and Figure 15 and summarized in Table 7. The estimated target number fluctuates significantly around the true value in both algorithms. For the HSAR-GM-PHD-AO algorithm, the average estimated target number between 40 s and 80 s is 4.13, with a deviation of 0.13 from the true target number, and the average OSPA distance is 8.48. Similarly, the HSAR-GM-PHD-R algorithm yields an average estimated target number of 4.13 with a deviation of 0.13 from the true target number, and an average OSPA distance of 8.85. The varying clutter intensities reflect differences in sensor performance, which can slightly impact the final fusion results, although to a lesser extent than detection probability. The proposed HSAR-GM-PHD-AO algorithm efficiently plans the optimal fusion order of sensors in real time, achieving optimal sequential fusion. It achieves a 4.18% lower OSPA distance compared to the HSAR-GM-PHD-R algorithm in the case of varying clutter intensities. Therefore, the HSAR-GM-PHD-AO algorithm demonstrates slightly improved performance compared to the HSAR-GM-PHD-R algorithm when clutter intensity varies among sensors.
Scenario 4: Different detection probabilities and clutter intensities.
Figure 16 and Figure 17 display the performance comparison between the HSAR-GM-PHD-R and HSAR-GM-PHD-AO algorithms when both the detection probability and clutter intensity are different.
As shown in Figure 16 and Figure 17, and Table 8, when both the detection probability and the clutter intensity differ across sensors, the two algorithms exhibit significant differences in target number estimation and OSPA distance estimation. In comparison to the HSAR-GM-PHD-AO algorithm, the HSAR-GM-PHD-R algorithm’s target number estimation fluctuates dramatically around the true value. Specifically, the HSAR-GM-PHD-AO algorithm has an average target number estimate of 4.05 during the 40 s–80 s time interval, with a deviation of 0.05 from the true target number, and an average OSPA distance of 11.58. In contrast, the HSAR-GM-PHD-R algorithm has an average target number estimate of 3.81 during the same interval, with a deviation of 0.19 from the true target number and an average OSPA distance of 17.72. Because different sensors have different detection probabilities and clutter intensities, their performance differs significantly. Consequently, the sequential fusion order of the sensors has a considerable influence on the final fusion results, exceeding the impact of a difference in detection probability or clutter intensity alone. The simulation results demonstrate that the proposed HSAR-GM-PHD-AO algorithm can efficiently determine the optimal fusion order for the sensors in real time, achieving optimal sequential fusion. In this scenario, the HSAR-GM-PHD-AO algorithm reduces the OSPA distance relative to the HSAR-GM-PHD-R algorithm by 34.65%.
In summary, the simulation results from the four scenarios indicate that both detection probability and clutter intensity have a certain impact on fusion performance. Particularly, when the detection probabilities of the individual sensors differ, the performance differences between the two proposed algorithms become more significant. This is mainly because the detection probability significantly influences the observation quality of the sensors, making the fusion results more sensitive to the sequential fusion order. In contrast, clutter intensity has a relatively minor impact on the sequential fusion order, resulting in subtle differences between the two proposed algorithms in scenarios with varying clutter intensities.
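Conceptually, the order-optimization step of HSAR-GM-PHD-AO can be viewed as ranking the sensors by a consistency score before the sequential update. The sketch below illustrates this idea; the specific criterion shown (OSPA of each sensor's tentative update against a reference estimate) is an illustrative assumption, not the paper's exact procedure.

```python
def plan_fusion_order(sensor_estimates, reference, ospa_fn):
    """Return sensor ids sorted from most to least consistent with a reference.

    sensor_estimates: dict mapping sensor id -> tentative single-sensor position
                      estimates (array of shape (m, d)).
    reference:        positions extracted from the predicted or previously fused PHD.
    ospa_fn:          a set-distance function such as the ospa() sketch above.
    """
    scores = {s: ospa_fn(est, reference) for s, est in sensor_estimates.items()}
    return sorted(scores, key=scores.get)       # most consistent sensor processed first
```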
Our proposed algorithms extend the spatial registration algorithm within the RFS framework to heterogeneous sensor scenarios for the first time. To assess the performance of our proposed algorithm, we compare the HSAR-GM-PHD-AO algorithm with the heterogeneous sensor spatial registration method based on significant targets. The results in Table 9 indicate that, under varying detection probabilities and clutter intensities, the HSAR-GM-PHD-AO algorithm achieves a substantial decrease of 11.8% in distance bias RMSE and 8.6% in angle bias RMSE, significantly enhancing the accuracy of bias estimation. This improvement can be attributed to our algorithm’s ability to avoid complex association relationships and leverage comprehensive multi-target information.

6. Conclusions

This paper addresses spatial registration in heterogeneous multi-sensor multi-target scenarios under the RFS framework. We propose two registration algorithms: HSAR-GM-PHD-R and HSAR-GM-PHD-AO. Both algorithms linearize the measurement equation by introducing pseudo-measurements and achieve a decoupled estimation of target states and measurement biases through iterative recursion of the multi-sensor GM-PHD. They also utilize EKF sequential filtering techniques for precise registration of data of different dimensions from heterogeneous sensors. To mitigate the impact of the fusion order on registration and fusion accuracy, the HSAR-GM-PHD-AO algorithm employs the OSPA metric to optimize the fusion order and enhance registration and tracking performance. The simulation results demonstrate that both proposed registration algorithms can effectively estimate the target position states and the measurement biases of heterogeneous sensors. Additionally, under varying detection probabilities and clutter intensities, the HSAR-GM-PHD-AO algorithm consistently exhibits outstanding performance owing to its optimized sensor fusion order. Compared to the heterogeneous sensor spatial registration method based on significant targets, the average RMSE of the distance and angle biases decreased by 11.8% and 8.6%, respectively, effectively enhancing the accuracy of bias estimation.
In wide-area detection scenarios, it is possible for the measurement data from different sensors to arrive at the fusion center at different times. However, our current method does not address the issue of time synchronization among the sensors. Therefore, in our future work, we plan to integrate time synchronization and spatial registration into a unified framework for sensor registration.

Author Contributions

Conceptualization, Y.Z., S.W. and J.W.; methodology, Y.Z., Y.L., X.Z. and J.W.; software, Y.Z.; validation, C.Z., Y.Z. and J.W.; formal analysis, Y.Z., Y.L., S.W., X.Z. and C.Z.; investigation, Y.Z., S.W., X.Z. and C.Z.; resources, Y.Z., S.W., X.Z. and C.Z.; data curation, Y.Z.; writing—original draft preparation, Y.Z.; writing—review and editing, Y.Z., Y.L., S.W. and J.W.; visualization, Y.Z.; supervision, Y.Z.; project administration, Y.Z.; funding acquisition, S.W. and J.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Grant No. 62171029 and No. 61671035).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

In this appendix, we provide a detailed derivation of $\bar{P}_{k|k,r}^{l}$, $G_{c,k,r}^{l}$, $H_{k,r}^{l}$, $K_{s,k,r}^{l}$, and $S_{c,k,r}^{l}$. Using Equation (34), $\bar{P}_{k|k,r}^{l}$ can be expressed as
$$\bar{P}_{k|k,r}^{l} = \left(I - \bar{G}_{c,k,r}^{l}\bar{H}_{k,r}^{l}\right)\bar{P}_{k|k,r-1}^{l} = \bar{P}_{k|k,r-1}^{l} - \bar{G}_{c,k,r}^{l}\bar{H}_{k,r}^{l}\bar{P}_{k|k,r-1}^{l} \tag{A1}$$
For clarity of expression, we write $\bar{P}_{k|k,r}^{l}$ in block-matrix form:
$$\bar{P}_{k|k,r}^{l} = \begin{bmatrix} P_{k|k,r-1}^{l} - G_{c,k,r}^{l} H_{k,r}^{l} P_{k|k,r-1}^{l} & \ast & \cdots & \ast \\ \ast & B_{k|k,r-1}^{1,l} - K_{1,k,r}^{l}\Psi_{1}B_{k|k,r-1}^{1,l} & & \vdots \\ \vdots & & \ddots & \ast \\ \ast & \cdots & \ast & B_{k|k,r-1}^{N_{S},l} - K_{N_{S},k,r}^{l}\Psi_{N_{S}}B_{k|k,r-1}^{N_{S},l} \end{bmatrix} \tag{A2}$$
where the blocks marked $\ast$ are the cross-covariance terms between the target state and the sensor measurement biases.
It is assumed that the target states are independent of the measurement biases, and the measurement biases between sensors are also independent.
Therefore, the off-diagonal elements of the matrix $\bar{P}_{k|k,r}^{l}$, which represent the cross-covariance between the target states and the sensor measurement biases, are all set to zero, i.e.,
$$\bar{P}_{k|k,r}^{l} \approx \text{blk-diag}\left(P_{k|k,r}^{l},\, B_{k|k,r}^{1,l},\, \ldots,\, B_{k|k,r}^{N_{S},l}\right) \tag{A3}$$
Consequently, the matrix P ¯ k | k , r l can be simplified into a block-diagonal matrix, reducing the computational complexity of high-dimensional matrix multiplication.
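A minimal numerical illustration of this block-diagonal approximation is given below; the dimensions and covariance values are assumptions chosen only for the example (a 5-D target state, one range–angle sensor, and one angle-only sensor).

```python
import numpy as np
from scipy.linalg import block_diag

P_state = np.diag([25.0, 1.0, 25.0, 1.0, 0.01])    # stands in for P_{k|k,r}^l
B1 = np.diag([np.deg2rad(1.0) ** 2, 30.0 ** 2])    # stands in for B_{k|k,r}^{1,l} (angle, range)
B2 = np.array([[np.deg2rad(0.1) ** 2]])            # stands in for B_{k|k,r}^{2,l} (angle only)

# Cross-covariances between the target state and the sensor biases are dropped,
# leaving the block-diagonal augmented covariance of Equation (A3).
P_bar = block_diag(P_state, B1, B2)
print(P_bar.shape)  # (8, 8)
```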
Meanwhile, partitioning $\bar{G}_{c,k,r}^{l} = \bar{P}_{k|k,r-1}^{l}\left(\bar{H}_{k,r}^{l}\right)^{T}\left(S_{c,k,r}^{l}\right)^{-1}$ into blocks and equating the corresponding blocks, the following can be derived:
$$G_{c,k,r}^{l} = P_{k|k,r-1}^{l}\left(H_{k,r}^{l}\right)^{T}\left(S_{c,k,r}^{l}\right)^{-1} \tag{A4}$$
$$K_{s,k,r}^{l} = B_{k|k,r-1}^{s,l}\Psi_{s}^{T}\left(S_{c,k,r}^{l}\right)^{-1} = \begin{cases} B_{k|k,r-1}^{s,l}\left(S_{c,k,r}^{l}\right)^{-1}, & s = r \\ 0, & s \neq r \end{cases} \tag{A5}$$
where $H_{k,r}^{l}$ is the Jacobian of the nonlinear measurement function $h$ evaluated at $\bar{m}_{k|k,r-1}^{l}$, i.e.,
$$H_{k,r}^{l} = H_{k,r}\!\left(\bar{m}_{k|k,r-1}^{l}\right) \tag{A6}$$
In addition, the innovation covariance $S_{c,k,r}^{l}$ can be expressed as
$$S_{c,k,r}^{l} = \bar{H}_{k,r}^{l}\bar{P}_{k|k,r-1}^{l}\left(\bar{H}_{k,r}^{l}\right)^{T} + R_{r} = H_{k,r}^{l}P_{k|k,r-1}^{l}\left(H_{k,r}^{l}\right)^{T} + \sum_{s=1}^{N_{S}}\Psi_{s}B_{k|k,r-1}^{s,l}\Psi_{s}^{T} + R_{r} \tag{A7}$$
where $R_{r}$ is the covariance matrix of the measurement noise, i.e.,
$$v_{r} \sim \mathcal{N}\left(0, R_{r}\right) \tag{A8}$$
Substituting the matrix $\Psi$ defined in (11) into (A7) yields
$$S_{c,k,r}^{l} = H_{k,r}^{l}P_{k|k,r-1}^{l}\left(H_{k,r}^{l}\right)^{T} + B_{k|k,r-1}^{r,l} + R_{r} \tag{A9}$$
where $B_{k|k,r-1}^{r,l}$ is the covariance matrix of the estimated measurement bias of sensor $r$.
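As a numerical check of these expressions, the following sketch evaluates the simplified innovation covariance and the two gains for the sensor currently being processed (s = r); all dimensions and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

H = rng.normal(size=(2, 5))                        # Jacobian H_{k,r}^l at the predicted mean
P = np.diag([25.0, 1.0, 25.0, 1.0, 0.01])          # P_{k|k,r-1}^l
B_r = np.diag([np.deg2rad(1.0) ** 2, 30.0 ** 2])   # B_{k|k,r-1}^{r,l}
R = np.diag([np.deg2rad(0.5) ** 2, 10.0 ** 2])     # measurement-noise covariance R_r

S = H @ P @ H.T + B_r + R          # simplified innovation covariance, cf. (A9)
S_inv = np.linalg.inv(S)
G = P @ H.T @ S_inv                # state gain G_{c,k,r}^l, cf. (A4)
K_r = B_r @ S_inv                  # bias gain K_{r,k,r}^l, cf. (A5); zero for s != r
```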

References

1. Cao, B.; Zhao, J.; Yang, P.; Yang, P.; Zhang, Y. 3-D deployment optimization for heterogeneous wireless directional sensor networks on smart city. IEEE Trans. Ind. Inform. 2018, 15, 1798–1808.
2. Alatise, M.B.; Hancke, G.P. A review on challenges of autonomous mobile robot and sensor fusion methods. IEEE Access 2020, 8, 39830–39846.
3. Li, J.; Bi, G.; Wang, X.; Nie, T.; Huang, L. Radiation-Variation Insensitive Coarse-to-Fine Image Registration for Infrared and Visible Remote Sensing Based on Zero-Shot Learning. Remote Sens. 2024, 16, 214.
4. Shahzad, M.K.; Islam, S.M.R.; Hossain, M.; Abdullah-Al-Wadud, M.; Alamri, A.; Hussain, M. GAFOR: Genetic Algorithm Based Fuzzy Optimized Re-Clustering in Wireless Sensor Networks. Mathematics 2021, 9, 43.
5. Ou, C.; Shan, C.; Cheng, Z.; Long, Y. Adaptive Trajectory Tracking Algorithm for the Aerospace Vehicle Based on Improved T-MPSP. Mathematics 2023, 11, 2160.
6. Bu, S.; Zhou, G. Joint data association spatiotemporal bias compensation and fusion for multisensor multitarget tracking. IEEE Trans. Signal Process. 2023, 71, 1509–1523.
7. Marek, J.; Chmelař, P. Survey of Point Cloud Registration Methods and New Statistical Approach. Mathematics 2023, 11, 3564.
8. Cormack, D.; Schlangen, I.; Hopgood, J.R.; Clark, D.E. Joint registration and fusion of an infrared camera and scanning radar in a maritime context. IEEE Trans. Aerosp. Electron. Syst. 2019, 56, 1357–1369.
9. Chai, L.; Yi, W.; Kong, L. Joint sensor registration and multi-target tracking with PHD filter in distributed multi-sensor networks. Signal Process. 2023, 206, 108909.
10. Bu, S.; Kirubarajan, T.; Zhou, G. Online sequential spatiotemporal bias compensation using multisensor multitarget measurements. Aerosp. Sci. Technol. 2021, 108, 106407.
11. Mahler, R. Advances in Statistical Multi-Source Multi-Target Information Fusion; Artech House: Norwood, MA, USA, 2014.
12. Mahler, R. PHD Filters of Higher Order in Target Number. IEEE Trans. Aerosp. Electron. Syst. 2007, 43, 1523–1543.
13. Mahler, R. Multitarget Bayes filtering via first-order multitarget moments. IEEE Trans. Aerosp. Electron. Syst. 2003, 39, 1152–1178.
14. Vo, B.N.; Singh, S.; Doucet, A. Sequential Monte Carlo methods for multi-target filtering with random finite sets. IEEE Trans. Aerosp. Electron. Syst. 2005, 41, 1224–1245.
15. Vo, B.N.; Singh, S.; Doucet, A. Sequential Monte Carlo implementation of the PHD filter for multi-target tracking. In Proceedings of the 6th International Conference on Information Fusion, Cairns, Australia, 8–11 July 2003; pp. 792–799.
16. Vo, B.N.; Ma, W.K. The Gaussian mixture probability hypothesis density filter. IEEE Trans. Signal Process. 2006, 54, 4091–4104.
17. Clark, D.E.; Vo, B.N. Convergence analysis of the Gaussian mixture PHD filter. IEEE Trans. Signal Process. 2007, 55, 1204–1211.
18. Shi, K.; Shi, Z.; Yang, C.; He, S.; Chen, J.; Chen, A. Road-Map Aided GM-PHD Filter for Multivehicle Tracking with Automotive Radar. IEEE Trans. Ind. Inform. 2022, 18, 97–108.
19. Sung, Y.; Tokekar, P. GM-PHD Filter for Searching and Tracking an Unknown Number of Targets with a Mobile Sensor with Limited FOV. IEEE Trans. Autom. Sci. Eng. 2022, 19, 2122–2134.
20. Mahler, R. The Multisensor PHD filter: II. Erroneous Solution via “Poisson magic”. In Signal Processing, Sensor Fusion, and Target Recognition; International Society for Optics and Photonics: Orlando, FL, USA, 2009; pp. 182–193.
21. Vo, B.T.; See, C.M.; Ma, N.; Ng, W.T. Multi-sensor joint detection and tracking with the Bernoulli filter. IEEE Trans. Aerosp. Electron. Syst. 2012, 48, 1385–1402.
22. Liu, L.; Ji, H.; Fan, Z. Improved Iterated-corrector PHD with Gaussian mixture implementation. Signal Process. 2015, 114, 89–99.
23. Lian, F.; Han, C.; Liu, W.; Chen, H. Joint spatial registration and multitarget tracking using an extended probability hypothesis density filter. IET Radar Sonar Navig. 2011, 5, 441–448.
24. Li, W.; Jia, Y.; Du, J.; Yu, F. Gaussian mixture PHD filter for multi-sensor multi-target tracking with registration errors. Signal Process. 2013, 93, 86–99.
25. Wu, W.; Jiang, J.; Liu, W.; Feng, X.; Gao, L.; Qin, X. Augmented state GM-PHD filter with registration errors for multi-target tracking by Doppler radars. Signal Process. 2016, 120, 117–128.
26. Fortunati, S.; Farina, A.; Gini, F.; Graziano, A.; Greco, M.S.; Giompapa, S. Least squares estimation and Cramér–Rao type lower bounds for relative sensor registration process. IEEE Trans. Signal Process. 2010, 59, 1075–1087.
27. Rhode, S.; Usevich, K.; Markovsky, I.; Gauterin, F. A recursive restricted total least-squares algorithm. IEEE Trans. Signal Process. 2014, 62, 5652–5662.
28. Pulford, G.W. Analysis of a nonlinear least squares procedure used in global positioning systems. IEEE Trans. Signal Process. 2010, 58, 4526–4534.
29. Bai, S.; Zhang, Y. Error registration of netted radar by using GLS algorithm. In Proceedings of the 33rd Chinese Control Conference, Nanjing, China, 28–30 July 2014; pp. 7430–7433.
30. Zhou, Y.; Leung, H.; Blanchette, M. Sensor alignment with earth-centered earth-fixed (ECEF) coordinate system. IEEE Trans. Aerosp. Electron. Syst. 1999, 35, 410–418.
31. Lu, C.; Wang, X.; Koutsoukos, X. Feedback utilization control in distributed real-time systems with end-to-end tasks. IEEE Trans. Parall. Distr. Syst. 2005, 16, 550–561.
32. Zhou, Y.F.; Leung, H.; Yip, P.C. An exact maximum likelihood registration algorithm for data fusion. IEEE Trans. Signal Process. 1997, 45, 1560–1573.
33. Chitour, Y.; Pascal, F. Exact Maximum Likelihood Estimates for SIRV Covariance Matrix: Existence and Algorithm Analysis. IEEE Trans. Signal Process. 2008, 56, 4563–4573.
34. Okello, N.; Ristic, B. Maximum likelihood registration for multiple dissimilar sensors. IEEE Trans. Aerosp. Electron. Syst. 2003, 39, 1074–1083.
35. Wang, J.; Zeng, Y.; Wei, S.; Wei, Z.; Wu, Q.; Savaria, Y. Multisensor track-to-track association and spatial registration algorithm under incomplete measurements. IEEE Trans. Signal Process. 2021, 69, 3337–3350.
36. Ristic, B.; Clark, D.; Gordon, N. Calibration of multi-target tracking algorithms using non-cooperative targets. IEEE J. Sel. Top. Signal Process. 2013, 7, 390–398.
37. Ristic, B.; Clark, D. Particle filter for joint estimation of multi-target dynamic state and multi-sensor bias. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, Kyoto, Japan, 25–30 March 2012; pp. 3877–3880.
38. Yi, W.; Li, G.; Battistelli, G. Distributed Multi-Sensor Fusion of PHD Filters with Different Sensor Fields of View. IEEE Trans. Signal Process. 2020, 68, 5204–5218.
39. Liu, S.; Shen, H.; Chen, H.; Peng, D.; Shi, Y. Asynchronous Multi-Sensor Fusion Multi-Target Tracking Method. In Proceedings of the 14th International Conference on Control and Automation, Anchorage, AK, USA, 12–15 June 2018; pp. 459–463.
40. Tollkühn, A.; Particke, F.; Thielecke, J. Gaussian state estimation with non-Gaussian measurement noise. In Proceedings of the Sensor Data Fusion: Trends, Solutions, Applications (SDF), Bonn, Germany, 9–11 October 2018; pp. 1–5.
41. Bar-Shalom, Y.; Willett, P.K.; Tian, X. Tracking and Data Fusion; YBS Publishing: Storrs, CT, USA, 2011.
42. Taghavi, E.; Tharmarasa, R.; Kirubarajan, T.; Bar-Shalom, Y.; Mcdonald, M. A practical bias estimation algorithm for multisensor-multitarget tracking. IEEE Trans. Aerosp. Electron. Syst. 2016, 52, 2–19.
43. Wang, R.; Mao, H.; Hu, C.; Zeng, T.; Long, T. Joint association and registration in a multiradar system for migratory insect track observation. IEEE Trans. Aerosp. Electron. Syst. 2021, 57, 4028–4043.
Figure 1. Spatial registration schematic, where T represents the observation set (multiple targets), and Δr and Δθ represent the distance bias and azimuth bias, respectively.
Figure 2. Spatial registration within the RFS framework.
Figure 3. Heterogeneous sensor sequential filtering.
Figure 4. The iterative update process based on consistency metrics for heterogeneous sensors.
Figure 5. The true trajectories of targets.
Figure 6. Measurements of sensor 1.
Figure 7. The estimated target positions after spatial registration and their true values.
Figure 8. Estimation results of target positions at different times.
Figure 9. Time-varying RMSE curves for two proposed algorithms (100 Monte Carlo runs): (a) RMSE for angle in HSAR-GM-PHD-R; (b) RMSE for angle in HSAR-GM-PHD-AO; (c) RMSE for distance in HSAR-GM-PHD-R; (d) RMSE for distance in HSAR-GM-PHD-AO.
Figure 10. Target number estimation (same detection probability and clutter intensity).
Figure 11. OSPA distance (same detection probability and clutter intensity).
Figure 12. Target number estimation (different detection probabilities, same clutter intensity).
Figure 13. OSPA distance (different detection probabilities, same clutter intensity).
Figure 14. Target number estimation (same detection probability, different clutter intensities).
Figure 15. OSPA distance (same detection probability, different clutter intensities).
Figure 16. Target number estimation (different detection probabilities and clutter intensities).
Figure 17. OSPA distance (different detection probabilities and clutter intensities).
Table 1. The initial states of targets.
Target Number | Initial State | Survival Time
1 | [1507.4 m, 11 m/s, 256.8 m, 10 m/s, 1°/s]^T | [0, 100] s
2 | [255.9 m, 20 m/s, 1011.4 m, 3 m/s, 0.67°/s]^T | [10, 80] s
3 | [246.1 m, 11 m/s, 738.9 m, 5 m/s, 0.5°/s]^T | [35, 100] s
4 | [242.6 m, 993.2 m/s, 993.2 m, 12 m/s, 1°/s]^T | [40, 100] s
Table 2. The initial states and parameters of the sensors.
Sensor Number | Sensor State | Biases | Observation Time
1 | [1500, 0, 0, 0]^T m | [1°, 30 m]^T | [0, 100] s
2 | [0, 0, 2000, 0]^T m | [1°, 30 m]^T | [0, 100] s
3 | [1500, 0, 0, 0]^T m | [0.1°]^T | [0, 100] s
4 | [0, 0, 0, 0]^T m | [0.1°]^T | [0, 100] s
Table 3. Mean of newborn components.
Gaussian Component | m̄_γ^l = [x_γ^l, ẋ_γ^l, y_γ^l, ẏ_γ^l, ω_γ^l, θ_γ,1^l, r_γ,1^l, θ_γ,2^l, r_γ,2^l, θ_γ,3^l, θ_γ,4^l]^T
1 | m̄_γ^1 = [1500 m, 0, 250 m, 0, 0, 1°, 30 m, 1°, 30 m, 0.1°, 0.1°]^T
2 | m̄_γ^2 = [250 m, 0, 1000 m, 0, 0, 1°, 30 m, 1°, 30 m, 0.1°, 0.1°]^T
3 | m̄_γ^3 = [250 m, 0, 750 m, 0, 0, 1°, 30 m, 1°, 30 m, 0.1°, 0.1°]^T
4 | m̄_γ^4 = [1000 m, 0, 1500 m, 0, 0, 1°, 30 m, 1°, 30 m, 0.1°, 0.1°]^T
Table 4. Simulation scenarios.
Scenario | Detection Probability | Clutter Intensity
1 | p_D,k^1 = p_D,k^2 = p_D,k^3 = p_D,k^4 = 0.9 | λ_1 = λ_2 = λ_3 = λ_4 = 10
2 | p_D,k^1 = 0.8, p_D,k^2 = 0.7, p_D,k^3 = 0.99, p_D,k^4 = 0.9 | λ_1 = λ_2 = λ_3 = λ_4 = 10
3 | p_D,k^1 = p_D,k^2 = p_D,k^3 = p_D,k^4 = 0.9 | λ_1 = 20, λ_2 = 40, λ_3 = 5, λ_4 = 10
4 | p_D,k^1 = 0.8, p_D,k^2 = 0.7, p_D,k^3 = 0.99, p_D,k^4 = 0.9 | λ_1 = 20, λ_2 = 40, λ_3 = 5, λ_4 = 10
Table 5. Performance comparison (same detection probability and clutter intensity).
Algorithm | Average OSPA Distance (40 s to 80 s) | Average Target Cardinality Estimation (40 s to 80 s)
HSAR-GM-PHD-AO | 8.84 | 4.08
HSAR-GM-PHD-R | 8.89 | 4.11
Table 6. Performance comparison (different detection probabilities, same clutter intensity).
Algorithm | Average OSPA Distance (40 s to 80 s) | Average Target Cardinality Estimation (40 s to 80 s)
HSAR-GM-PHD-AO | 11.23 | 4.12
HSAR-GM-PHD-R | 16.57 | 4.21
Table 7. Performance comparison (same detection probability, different clutter intensities).
Algorithm | Average OSPA Distance (40 s to 80 s) | Average Target Cardinality Estimation (40 s to 80 s)
HSAR-GM-PHD-AO | 8.48 | 4.13
HSAR-GM-PHD-R | 8.85 | 4.13
Table 8. Performance comparison (different detection probabilities and clutter intensities).
Algorithm | Average OSPA Distance (40 s to 80 s) | Average Target Cardinality Estimation (40 s to 80 s)
HSAR-GM-PHD-AO | 11.58 | 4.05
HSAR-GM-PHD-R | 17.72 | 3.81
Table 9. Reduction rate of RMSE.
Sensor Number | Rate (%) for θ | Rate (%) for R | Detection Probability | Clutter Intensity
S1 | 7.2 | 10.5 | p_D,k^1 = 0.8 | λ_1 = 20
S2 | 9.7 | 13.1 | p_D,k^2 = 0.7 | λ_2 = 40
S3 | 7.4 | – | p_D,k^3 = 0.99 | λ_3 = 5
S4 | 10.1 | – | p_D,k^4 = 0.9 | λ_4 = 10
Average | 8.6 | 11.8 | – | –