1. Introduction
Photoelectric tracking systems utilize optical sensors to detect and track targets. The task of target tracking is to estimate the position and velocity of a moving target using noisy data collected by a single sensor or multiple spatially distributed sensor nodes [
1]. Due to their ability to provide high-precision angular measurements, these systems play a critical role in precise tracking and identification tasks across broad civilian domains, including unmanned aerial vehicle monitoring for environmental inspection, aircraft monitoring and space object tracking [
2]. However, unlike active radar, photoelectric tracking systems rely on passive detection and provide only bearing information without direct range measurements [
2], which makes bearing-only target tracking with passive sensors a fundamental and challenging problem in statistical signal processing [
3]. In addition, optical measurements are typically sampled discretely, whereas target motion is generally modeled by continuous-time dynamics, resulting in a natural continuous–discrete system. Furthermore, factors such as ambient illumination changes, atmospheric attenuation, occlusion, and sensor noise can significantly degrade the measurement accuracy, increasing the uncertainty in target tracking. Consequently, achieving accurate target state estimation using photoelectric tracking systems remains a challenging task, motivating further research in bearing-only target tracking.
For a single photoelectric tracking system, the absence of direct range information makes estimation performance highly sensitive to the sensor–target geometry. For example, the target may become unobservable under certain motion patterns, leading to large estimation bias and poor convergence behavior [
4,
5]. Classic results show that the single-observer tracking process can be unobservable when the observer velocity is constant, and estimating the complete target state may require changes in observer velocity or course [
6,
7]. However, when multiple spatially separated passive photoelectric tracking systems are available, the situation improves substantially: combining angle measurements from different viewpoints provides implicit triangulation information and can restore practical observability of target position and velocity without requiring aggressive maneuvers [
8,
9]. From an implementation standpoint, the centralized fusion of all raw optical measurements may be impractical due to bandwidth, latency, and robustness constraints [
10,
11]. This is why multi-sensor bearing-only tracking has attracted sustained attention in modern distributed photoelectric tracking systems. A widely adopted alternative is track-level fusion, where each photoelectric tracking system runs a local filter and transmits compact summaries such as state estimates and associated covariances to a fusion node, offering significant advantages in bandwidth efficiency and robustness [
12].
A fundamental challenge in bearing-only photoelectric tracking systems arises from the highly nonlinear relationship between bearing measurements and the target state, including position and velocity. To address this nonlinearity, various state estimation methods have been developed. The extended Kalman filter (EKF) [
13] linearizes the measurement model around the current estimate. However, for bearing-only tracking, the EKF has been shown to suffer from instability due to the coupling between the bearing and range errors, potentially leading to covariance collapse and divergence [
13]. To overcome these limitations, the unscented Kalman filter (UKF) [
14] propagates the mean and covariance through the true nonlinear functions using deterministic sigma points, capturing higher-order effects and generally providing improved stability and performance. Nevertheless, both EKF and UKF rely on Gaussian assumptions and may perform poorly in the presence of strong nonlinearities or non-Gaussian posteriors. From a Bayesian perspective, bearing-only tracking is inherently nonlinear and often non-Gaussian, making particle filters (PFs) [
15] a natural choice. PFs approximate the posterior distribution using weighted particles and can be applied to general state-space models. However, standard PFs may suffer from particle degeneracy, and this issue becomes increasingly severe as the state dimension grows, leading to the curse of dimensionality. As a result, PFs often require a large number of particles, frequent resampling, or carefully designed proposal distributions to maintain stable performance [
16,
17]. More recently, the feedback particle filter (FPF) [
17] has been proposed as an alternative that avoids importance sampling. In FPF, particles evolve under a feedback control law derived from an innovation error, with the filter gain obtained by solving a Poisson equation. By eliminating importance weights and resampling, FPF mitigates particle degeneracy and exhibits improved scalability with respect to state dimension, making it particularly attractive for nonlinear bearing-only tracking problems.
In realistic surveillance environments, the measurement stream is rarely clean. Each scan may include spurious detections generated by background returns and non-target phenomena, such as receiver thermal noise, reflections from terrain or structures, meteorological effects, and even birds [
18]. Consequently, the observations available at a given time are often a mixture of target-originated and non-target-originated measurements, making it unclear which measurement should be attributed to the target. These effects introduce measurement-origin uncertainty and create a data association problem [
18]. The probabilistic data association (PDA) framework [
19] is a classical and computationally efficient approach to this challenge. The PDA aggregates validated measurements through association probabilities rather than enumerating combinatorial hypotheses, and it has been extensively studied for tracking in cluttered environments [
20,
21]. Integrating PDA principles with nonlinear filtering methods is therefore a natural direction for robust bearing-only tracking under clutter. More recently, probabilistic data association feedback particle filter (PDA-FPF) algorithms have been proposed for single-sensor tracking scenarios [
22,
23]. However, for distributed sensor networks, research on FPF algorithms applicable to cluttered environments remains very limited, particularly for continuous–discrete systems with bearing-only measurements. As a result, this problem remains an open and pressing research topic.
Motivated by the above, this paper investigates 3D bearing-only target tracking in cluttered environments using a distributed photoelectric tracking system with multiple passive sensors. We propose a distributed probabilistic data association feedback particle filter (DPDA-FPF) framework in which each sensor runs a local PDA-FPF using an efficient gain approximation, and local state estimates with uncertainty information are then fused at the track level through covariance-weighted fusion. The main contributions of this paper are summarized as follows:
- 1.
Continuous–discrete PDA-FPF derivation: We extend the FPF framework to accommodate both PDA and continuous–discrete system dynamics. Specifically, we derive the association-weighted particle flow equation (Equation (
11)) by incorporating PDA likelihoods into the logarithmic homotopy transformation of [
24]. This formulation yields an explicit innovation structure (Equation (
12)) that naturally handles multiple validated measurements while preserving the computational advantages of the constant-gain approximation.
- 2.
Design of a distributed multi-sensor FPF architecture for cluttered environments: This architecture achieves high bandwidth efficiency by having each sensor independently perform data association and local filtering. Subsequently, only compact track summary information is transmitted for centralized fusion processing. The proposed design fully accounts for practical constraints commonly encountered in real-world sensor networks, such as limited communication bandwidth and decentralized processing requirements.
- 3.
Numerical validation in a 3D multi-sensor setting with clutter: This shows that the fused DPDA-FPF estimate significantly improves the tracking accuracy relative to single-sensor local estimates, consistent with the observability advantages of multi-view angle information.
The remainder of this paper is organized as follows.
Section 2 formulates the problem.
Section 3 extends the PDA-FPF originally derived for continuous-time systems to the continuous–discrete setting.
Section 4 presents the proposed DPDA-FPF algorithm and discusses its computational complexity and real-time implementation considerations.
Section 5 reports the simulation settings and performance results.
Section 6 concludes the paper and discusses future research directions.
2. Problem Statement
This paper investigates the problem of target tracking in a 3D cluttered environment, considering a distributed photoelectric tracking system network composed of multiple spatially separated bearing-only photoelectric tracking systems, as illustrated in
Figure 1. Each photoelectric tracking system integrates a passive optical sensing unit that provides bearing-only measurements and a local processing module and is, therefore, regarded as a single sensing node in the distributed network. The target is modeled as a point target. Furthermore, each photoelectric tracking system is assumed to possess sufficient resolution to discriminate between objects, ensuring that each measurement originates from a single source, either the target or clutter. Based on these assumptions, this section establishes the system model for the considered scenario and then formulates the distributed tracking problem.
2.1. Continuous-Time Target Model
Since continuous-time models more accurately represent target motion than their discrete-time counterparts, we assume the target state evolves according to the stochastic differential equation

$$dX_t = f(X_t)\,dt + \sigma\,dW_t,$$

where $X_t \in \mathbb{R}^6$ denotes the target state at time $t$, including its position and velocity in the $X$, $Y$, and $Z$ directions, i.e., $X_t = [x_t, \dot{x}_t, y_t, \dot{y}_t, z_t, \dot{z}_t]^\top$. The function $f$ is continuously differentiable, i.e., a $C^1$ function, $W_t$ is a standard Wiener process, and $\sigma$ is a constant.
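To make the continuous-time model concrete, the sketch below simulates the SDE above with the Euler–Maruyama scheme. The near-constant-velocity drift $f(X) = AX$ and all parameter values are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

def simulate_target(x0, dt, steps, sigma, rng):
    """Euler-Maruyama discretization of dX_t = f(X_t) dt + sigma dW_t.

    Assumes a near-constant-velocity drift f(X) = A @ X with state
    [x, vx, y, vy, z, vz] (an illustrative choice of f)."""
    A = np.zeros((6, 6))
    for p in (0, 2, 4):          # position components driven by velocities
        A[p, p + 1] = 1.0
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        x = xs[-1]
        dw = rng.standard_normal(6) * np.sqrt(dt)   # Wiener increment
        xs.append(x + A @ x * dt + sigma * dw)
    return np.array(xs)

rng = np.random.default_rng(0)
traj = simulate_target([0, 10, 0, 5, 100, 0], dt=0.1, steps=50, sigma=0.5, rng=rng)
```

With `sigma = 0` the scheme reduces to exact constant-velocity motion, which makes it easy to sanity-check the discretization.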
2.2. Discrete-Time Sensor Model
At time $t_k$, the sensor $s$ receives a set of measurements $Z_k^s = \{z_k^{s,j}\}_{j=1}^{M}$, where $z_k^{s,j}$ denotes the $j$-th measurement. In this paper, we assume the number of measurements is fixed at $M$. However, the sources of the measurements are unknown. It is assumed that each sensor receives one measurement from the true target, and the rest are clutter. The set of measurements received up to time $t_k$ is denoted as $Z_{1:k}^s$. The measurement model is defined as follows.
2.2.1. Target Measurement
At time $t_k$, the measurement set of each sensor contains one target-originated measurement. Since a bearing-only sensor can only obtain the azimuth and elevation angle information of the target within the three-dimensional surveillance region, the target measurement is expressed as $z_k^s = [\alpha_k^s, \varepsilon_k^s]^\top$. The position of the $s$-th sensor is defined as $p^s = [x^s, y^s, z^s]^\top$. Therefore, the target measurement model can be described by the following discrete nonlinear equation:

$$z_k^s = h^s(X_{t_k}) + v_k^s,$$

where the nonlinear function $h^s$ is given by

$$h^s(X_t) = \begin{bmatrix} \arctan\dfrac{y_t - y^s}{x_t - x^s} \\[2mm] \arctan\dfrac{z_t - z^s}{\sqrt{(x_t - x^s)^2 + (y_t - y^s)^2}} \end{bmatrix},$$

and the measurement noise $v_k^s$ is assumed to follow a zero-mean Gaussian distribution with known covariance, i.e., $v_k^s \sim \mathcal{N}(0, R^s)$.
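As an illustration of the bearing-only measurement model, the following sketch computes azimuth and elevation from a target and sensor position. The axis convention (azimuth in the X–Y plane, elevation above it) is an assumption and may differ from the paper's exact $h^s$:

```python
import numpy as np

def bearing_measurement(target_pos, sensor_pos, R=None, rng=None):
    """Azimuth/elevation of the target as seen from the sensor.

    Assumed convention: azimuth measured in the X-Y plane, elevation
    above it. Optionally adds zero-mean Gaussian noise with covariance R."""
    d = np.asarray(target_pos, float) - np.asarray(sensor_pos, float)
    azimuth = np.arctan2(d[1], d[0])                    # quadrant-aware
    elevation = np.arctan2(d[2], np.hypot(d[0], d[1]))
    z = np.array([azimuth, elevation])
    if R is not None:
        z = z + rng.multivariate_normal(np.zeros(2), R)
    return z

# target on the main diagonal: azimuth = pi/4, elevation = arctan(1/sqrt(2))
z = bearing_measurement([100.0, 100.0, 100.0], [0.0, 0.0, 0.0])
```

Using `arctan2` rather than `arctan` avoids quadrant ambiguity when the target crosses the sensor's local axes.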
2.2.2. Clutter Measurement
At each time instant $t_k$, each sensor generates a fixed number of clutter measurements, denoted by $M_c$, which remains constant over time. The clutter is generated by independently sampling the slant range and bearing angles from uniform distributions within the sensor field-of-view. Specifically, the azimuth and elevation angles are uniformly distributed over the predefined angular ranges, and the slant range is uniformly distributed within $[r_{\min}, r_{\max}]$. Note that this sampling results in a distribution that is uniform in the spherical-coordinate parameters $(r, \alpha, \varepsilon)$, but not necessarily uniform in Cartesian 3D volume.

The clutter samples are generated in spherical coordinates according to

$$r \sim \mathcal{U}(r_{\min}, r_{\max}), \quad \alpha \sim \mathcal{U}(\alpha_{\min}, \alpha_{\max}), \quad \varepsilon \sim \mathcal{U}(\varepsilon_{\min}, \varepsilon_{\max});$$

then, the Cartesian position of each clutter point relative to the sensor is obtained by

$$x = r\cos\varepsilon\cos\alpha, \quad y = r\cos\varepsilon\sin\alpha, \quad z = r\sin\varepsilon,$$

and the corresponding bearing-only clutter measurement is given by $[\alpha, \varepsilon]^\top$. Under the sampling scheme used in this work, let $(r, \alpha, \varepsilon)$ denote the spherical coordinates associated with a clutter point, and define the clutter support region

$$\mathcal{V} = \big\{(r, \alpha, \varepsilon) : r \in [r_{\min}, r_{\max}],\ \alpha \in [\alpha_{\min}, \alpha_{\max}],\ \varepsilon \in [\varepsilon_{\min}, \varepsilon_{\max}]\big\}.$$

Since the Jacobian determinant of the transformation $(r, \alpha, \varepsilon) \mapsto (x, y, z)$ is

$$|\det J| = r^2\cos\varepsilon,$$

the induced 3D clutter density (w.r.t. volume) is

$$p_c(x, y, z) = \frac{1}{(r_{\max} - r_{\min})(\alpha_{\max} - \alpha_{\min})(\varepsilon_{\max} - \varepsilon_{\min})}\cdot\frac{1}{r^2\cos\varepsilon}, \quad (r, \alpha, \varepsilon) \in \mathcal{V}.$$
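The clutter generation procedure described above can be sketched as follows; the field-of-view limits and helper names are illustrative assumptions:

```python
import numpy as np

def sample_clutter(sensor_pos, r_lim, az_lim, el_lim, n, rng):
    """Sample clutter uniformly in (range, azimuth, elevation) and return
    Cartesian positions plus the bearing-only clutter measurements."""
    r = rng.uniform(*r_lim, n)
    az = rng.uniform(*az_lim, n)
    el = rng.uniform(*el_lim, n)
    x = r * np.cos(el) * np.cos(az)       # spherical -> Cartesian
    y = r * np.cos(el) * np.sin(az)
    z = r * np.sin(el)
    bearings = np.stack([az, el], axis=1)  # bearing-only clutter measurements
    return np.stack([x, y, z], axis=1) + sensor_pos, bearings

def clutter_density(r, el, r_lim, az_lim, el_lim):
    """Induced 3D density: uniform spherical density divided by the
    Jacobian |det J| = r^2 cos(el) of (r, az, el) -> (x, y, z)."""
    vol = (r_lim[1] - r_lim[0]) * (az_lim[1] - az_lim[0]) * (el_lim[1] - el_lim[0])
    return 1.0 / (vol * r ** 2 * np.cos(el))
```

The `clutter_density` helper makes the non-uniformity in Cartesian volume explicit: the density decays as $1/r^2$ with range.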
2.3. Distributed Tracking Problem
Distributed state estimation techniques are employed for target tracking under bearing-only measurements in order to enhance the robustness and flexibility of sensor network–based tracking systems. By fusing the local state estimates, more accurate global estimates can be obtained, thereby improving the overall tracking performance. In a cluttered environment, the single-target tracking problem can be decomposed into three main components. First, measurement-to-target association needs to be performed. Subsequently, a local filter capable of processing bearing-only measurements is constructed. Finally, the tracks generated by the local filters are fused to obtain the final tracking result.
4. Distributed Data Association Feedback Particle Filter
This section extends the PDA-FPF designed for continuous–discrete systems to a distributed multi-sensor framework. In this framework, each local sensor independently executes the PDA-FPF to produce a local estimate, which is subsequently fused at a central fusion center using a designated fusion criterion to yield the final estimate. The fusion method used in this paper and the overall procedure of the proposed DPDA-FPF algorithm are presented below.
4.1. Covariance-Weighted Fusion
At time $t_k$, the $s$-th sensor independently runs a PDA-FPF and obtains a local state estimate as follows:

$$\hat{X}_k^s = \frac{1}{N}\sum_{i=1}^{N} X_k^{s,i}.$$

The local posterior covariance is estimated from the particle cloud using the sample covariance

$$P_k^s = \frac{1}{N-1}\sum_{i=1}^{N}\big(X_k^{s,i} - \hat{X}_k^s\big)\big(X_k^{s,i} - \hat{X}_k^s\big)^\top.$$

To integrate these local estimates, when the estimation errors from different sensors are mutually independent and unbiased, an optimal linear fusion rule in the minimum mean square error (MMSE) sense can be employed. Consider a linear fused estimate of the form

$$\hat{X}_k = \sum_{s=1}^{S} W_s \hat{X}_k^s,$$

where the coefficients are required to satisfy $\sum_{s=1}^{S} W_s = I$. The covariance of the fused estimation error is

$$P_k = \sum_{s=1}^{S} W_s P_k^s W_s^\top.$$

By minimizing $\mathrm{tr}(P_k)$ using the method of Lagrange multipliers, the weight matrices are obtained as

$$W_s = P_k (P_k^s)^{-1},$$

where the fused covariance is

$$P_k = \Big(\sum_{s=1}^{S} (P_k^s)^{-1}\Big)^{-1}.$$

Therefore, at time $t_k$, the fused state estimate is given by

$$\hat{X}_k = P_k \sum_{s=1}^{S} (P_k^s)^{-1} \hat{X}_k^s,$$

and it is straightforward to verify that $P_k \preceq P_k^s$ for every $s$. At each time $t_k$, the fusion algorithm is as shown in Algorithm 1.
| Algorithm 1 Covariance-weighted fusion |
- 1: for $s = 1$ to $S$ do
- 2: Compute $\hat{X}_k^s$, $P_k^s$
- 3: Transmit $(\hat{X}_k^s, P_k^s)$ to the fusion center
- 4: end for
- 5: $P_k = \big(\sum_{s=1}^{S}(P_k^s)^{-1}\big)^{-1}$, $\hat{X}_k = P_k\sum_{s=1}^{S}(P_k^s)^{-1}\hat{X}_k^s$
|
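The covariance-weighted fusion rule admits a direct implementation. The sketch below (with a hypothetical function name) computes the fused estimate and covariance under the independence assumption:

```python
import numpy as np

def covariance_weighted_fusion(estimates, covariances):
    """Track-level fusion under independent local errors:
    P = (sum_s P_s^{-1})^{-1},  x = P @ sum_s P_s^{-1} x_s."""
    infos = [np.linalg.inv(P) for P in covariances]   # information matrices
    P_fused = np.linalg.inv(sum(infos))
    x_fused = P_fused @ sum(I @ x for I, x in zip(infos, estimates))
    return x_fused, P_fused
```

With equal local covariances the rule reduces to a plain average of the local estimates, and the fused covariance shrinks by a factor of $S$, consistent with $P_k \preceq P_k^s$.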
The MMSE optimality of the covariance-weighted fusion in Equation (
18) is derived under the standard assumption that the local estimation errors are mutually independent or, more precisely, mutually uncorrelated. This assumption is attractive in track-level fusion, because it enables the fusion center to compute the weights using only the marginal covariances $P_k^s$, thereby avoiding the propagation and communication of cross-covariance terms. In practical distributed tracking, however, exact independence is rarely satisfied: even if measurement noises are independent across sensors, the local posteriors become coupled through the common target state and shared modeling information. The dominant sources of correlation include the common process uncertainty induced by the stochastic maneuver term $\sigma\,dW_t$ in Equation (
1), which affects all sensors simultaneously, and common prior information at initialization, such as identical initial priors $(\hat{X}_0, P_0)$. Nevertheless, the independence approximation may still be reasonable in regimes where measurements are sufficiently informative and geometrically diverse, such that sensor-specific information dominates and the induced cross-covariances remain small relative to the marginal covariances.
When these correlations are neglected by enforcing zero cross-covariances between sensors, the fusion center effectively double-counts shared information, giving rise to the well-known data incest phenomenon. Although the fused mean in Equation (
18) remains unbiased as long as the local estimates are unbiased and $\sum_{s=1}^{S} W_s = I$, the resulting weights are no longer MMSE-optimal, and Equation (
17) typically underestimates the true fused error covariance. This overconfidence can lead to filter inconsistency and may amplify occasional local misassociations or weak-observability episodes, potentially causing divergence in long-term tracking. Correlation-robust methods such as covariance intersection avoid overconfidence without requiring cross-covariances, but they are usually conservative and can sacrifice precision when sensors provide complementary information. In the considered photoelectric tracking scenario, spatially separated sensors offer strong geometric diversity, yielding approximately orthogonal bearing constraints, and the fusion architecture is feed-forward, i.e., there is no fused feedback to the local filters; both properties mitigate the practical impact of unknown correlations. Therefore, consistent with common approximations in bandwidth-constrained distributed tracking [
10], we adopt the standard covariance-weighted fusion in Equation (
17) to maximize the exploitation of multi-view observability, while interpreting the obtained $P_k$ as an approximate uncertainty description; correlation-aware fusion and adaptive covariance inflation are left for future work.
4.2. Distributed Data Association Feedback Particle Filter Algorithm
This section presents an overview of the DPDA-FPF algorithm for continuous–discrete systems. In order to implement the algorithm, several approximation procedures are required.
First, the association probability can be approximated using the following particle-based method in numerical experiments:

$$\beta_k^{s,j} = \frac{L_k^{s,j}}{\sum_{l=1}^{M} L_k^{s,l}},$$

where

$$L_k^{s,j} = \frac{1}{N}\sum_{i=1}^{N} \mathcal{N}\big(z_k^{s,j};\, h^s(X_k^{s,i}),\, R^s\big).$$
Therefore, at each time $t_k$, the association probability of each sensor $s$ is calculated as shown in Algorithm 2.
| Algorithm 2 Association probability |
- 1: for $j = 1$ to $M$ do
- 2: $L_k^{s,j} \leftarrow 0$
- 3: for $i = 1$ to $N$ do
- 4: $\ell \leftarrow \mathcal{N}\big(z_k^{s,j};\, h^s(X_k^{s,i}),\, R^s\big)$
- 5: $L_k^{s,j} \leftarrow L_k^{s,j} + \ell/N$
- 6: end for
- 7: end for
- 8: $\beta_k^{s,j} \leftarrow L_k^{s,j}\big/\sum_{l=1}^{M} L_k^{s,l}$, $\ j = 1, \dots, M$
|
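A minimal sketch of this particle-based approximation, under the assumption that the association weights are normalized particle-averaged Gaussian likelihoods as in Algorithm 2:

```python
import numpy as np

def association_probabilities(particles, measurements, h, R):
    """PDA weights: average the Gaussian likelihood of each validated
    measurement over the particle cloud, then normalize (a sketch that
    mirrors Algorithm 2's measurement and particle loops)."""
    Rinv = np.linalg.inv(R)
    norm = 1.0 / np.sqrt((2 * np.pi) ** R.shape[0] * np.linalg.det(R))
    L = np.zeros(len(measurements))
    for j, z in enumerate(measurements):                   # measurement loop
        innov = z - np.array([h(x) for x in particles])    # N x m residuals
        q = np.einsum('nm,mk,nk->n', innov, Rinv, innov)   # Mahalanobis terms
        L[j] = norm * np.exp(-0.5 * q).mean()              # particle average
    return L / L.sum()                                     # normalize
```

A measurement far from every particle receives a near-zero weight, which is how clutter is suppressed before the particle-flow update.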
In practice, to reduce computational complexity, the correction term is neglected. Accordingly, the particle dynamics are approximated by

$$\frac{dX_\lambda^{s,i}}{d\lambda} = \mathsf{K}_\lambda\, I_\lambda^{s,i},$$

where $I_\lambda^{s,i}$ is the association-weighted innovation of Equation (12).
Numerous studies have investigated gain approximations [
26]. In this paper, the following constant-gain approximation is employed in the numerical experiments:

$$\mathsf{K}_\lambda = \Big[\frac{1}{N}\sum_{i=1}^{N} X_\lambda^{s,i}\big(h^s(X_\lambda^{s,i}) - \hat{h}_\lambda^s\big)^\top\Big](R^s)^{-1},$$

where

$$\hat{h}_\lambda^s = \frac{1}{N}\sum_{i=1}^{N} h^s\big(X_\lambda^{s,i}\big).$$

Exact computation of the gain function requires solving a Poisson equation at each step, which is computationally intractable for real-time applications. Therefore, we adopt the constant-gain approximation, which assumes a quasi-Gaussian structure for the gain computation while retaining the nonlinear transport of particles.
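The constant-gain computation reduces to sample moments of the particle cloud. The sketch below assumes the common cross-covariance form of the constant-gain FPF approximation; sign and scaling conventions vary across formulations:

```python
import numpy as np

def constant_gain(particles, h, R):
    """Constant-gain FPF approximation: replace the Poisson-equation
    solve by the particle cross-covariance between the state and the
    measurement function, scaled by R^{-1} (a sketch)."""
    H = np.array([h(x) for x in particles])    # N x m evaluations of h
    h_hat = H.mean(axis=0)
    x_hat = particles.mean(axis=0)
    # sample cross-covariance E[(X - x_hat)(h(X) - h_hat)^T]
    C = (particles - x_hat).T @ (H - h_hat) / len(particles)
    return C @ np.linalg.inv(R)                # gain K, shape n x m
```

For a linear measurement function this gain coincides (up to sampling error) with the Kalman-like cross-covariance gain, which is the sense in which the approximation is quasi-Gaussian.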
The complete DPDA-FPF algorithm under the constant-gain approximation and without the correction term is presented in Algorithm 3. The prediction step is discretized via the Euler–Maruyama method with step size $\Delta t$, while the update step is discretized in pseudo-time with step size $\Delta\lambda = 1/N_\lambda$.
| Algorithm 3 DPDA-FPF for continuous–discrete systems |
- 1: For each sensor $s$, sample $X_0^{s,i} \sim p_0$, $i = 1, \dots, N$.
- 2: for $k = 1$ to $K$ do
- 3: for all $s \in \{1, \dots, S\}$ do
- 4: for $i = 1$ to $N$ do
- 5: $X_{k|k-1}^{s,i} \leftarrow X_{k-1}^{s,i} + f\big(X_{k-1}^{s,i}\big)\Delta t + \sigma\sqrt{\Delta t}\,\eta_k^i$, where $\eta_k^i \sim \mathcal{N}(0, I)$ i.i.d.
- 6: end for
- 7: Compute association probabilities $\beta_k^{s,j}$ according to Algorithm 2
- 8: for $i = 1$ to $N$ do
- 9: $X_{\lambda_0}^{s,i} \leftarrow X_{k|k-1}^{s,i}$
- 10: end for
- 11: for $l = 1$ to $N_\lambda$ ($= 1/\Delta\lambda$) do
- 12: $\hat{h}_{\lambda_{l-1}}^s \leftarrow \frac{1}{N}\sum_{i=1}^{N} h^s\big(X_{\lambda_{l-1}}^{s,i}\big)$
- 13: Compute $\mathsf{K}_{\lambda_{l-1}}$ according to Equation ( 22)
- 14: for $i = 1$ to $N$ do
- 15: $I_{\lambda_{l-1}}^{s,i} \leftarrow 0$
- 16: for $j = 1$ to $M$ do
- 17: Compute the innovation term $I_{\lambda_{l-1}}^{s,i,j}$ from Equation ( 12)
- 18: $I_{\lambda_{l-1}}^{s,i} \leftarrow I_{\lambda_{l-1}}^{s,i} + \beta_k^{s,j}\, I_{\lambda_{l-1}}^{s,i,j}$
- 19: end for
- 20: $X_{\lambda_l}^{s,i} \leftarrow X_{\lambda_{l-1}}^{s,i} + \Delta\lambda\,\mathsf{K}_{\lambda_{l-1}} I_{\lambda_{l-1}}^{s,i}$
- 21: end for
- 22: end for
- 23: for $i = 1$ to $N$ do
- 24: $X_{k}^{s,i} \leftarrow X_{\lambda_{N_\lambda}}^{s,i}$
- 25: end for
- 26: Compute the local estimates $\hat{X}_k^s$ and $P_k^s$ from Algorithm 1
- 27: end for
- 28: Compute the covariance-weighted fusion $\hat{X}_k$ and $P_k$ from Algorithm 1
- 29: end for
|
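A single measurement update of the association-weighted particle flow can be sketched as below. The gain recomputation per pseudo-time step and the symmetrized innovation $z - \tfrac{1}{2}(h + \hat{h})$ follow the standard constant-gain FPF form and are assumptions about the paper's exact Equations (11) and (12):

```python
import numpy as np

def pda_fpf_update(particles, measurements, beta, h, R, n_lambda=10):
    """One pseudo-time measurement update of the association-weighted
    particle flow, discretized with step 1/n_lambda (a sketch)."""
    X = particles.copy()
    dlam = 1.0 / n_lambda
    for _ in range(n_lambda):
        H = np.array([h(x) for x in X])                     # N x m
        h_hat = H.mean(axis=0)
        # constant-gain approximation of the feedback gain
        K = ((X - X.mean(axis=0)).T @ (H - h_hat) / len(X)) @ np.linalg.inv(R)
        # association-weighted, symmetrized innovation per particle
        innov = sum(b * (z - 0.5 * (H + h_hat)) for b, z in zip(beta, measurements))
        X = X + dlam * innov @ K.T                          # Euler step in lambda
    return X
```

Because there are no importance weights, no resampling step is needed after the update; the particle cloud itself is transported toward the posterior.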
4.3. Computational Complexity and Real-Time Considerations
Let $n$ denote the state dimension, $m$ the measurement dimension, $N$ the number of particles, $S$ the number of sensors, and $M_k^s$ the number of validated measurements at scan $k$ for sensor $s$. The pseudo-time discretization uses step size $\Delta\lambda$ with $N_\lambda$ sub-steps. Denote by $C_f$ and $C_h$ the operation counts of evaluating $f$ and $h^s$, respectively; for typical kinematic models and bearing-only measurement functions, both are $O(n)$ up to small constant factors such as a few trigonometric evaluations. The per-scan per-sensor computational complexity of the main DPDA-FPF modules is summarized in Table 1. In fact, the covariance-weighted fusion in Algorithm 1 requires one $n \times n$ inversion per sensor and a few matrix multiplications, i.e., $O(Sn^3)$ per scan, which is negligible for small $n$ compared with the particle operations. From Table 1, the overall per-scan complexity across all sensors is dominated by the pseudo-time particle update, because it contains the innermost triple loop, i.e., pseudo-time, particles, and measurements, and repeatedly evaluates the innovation terms and measurement functions. Therefore, improving the real-time performance of DPDA-FPF mainly reduces to controlling $N_\lambda$, $N$, and $M_k^s$, and to accelerating this pseudo-time update kernel.
Let $T$ denote the sensor sampling period. A necessary condition for real-time operation is that the processing time per scan remains below $T$, accounting for communication and fusion latency. The real-time feasibility of the proposed DPDA-FPF therefore relies on several practical strategies. First, validation gating reduces the number of validated measurements $M_k^s$ by rejecting observations that are statistically inconsistent with the predicted bearing, yielding an immediate reduction in the dominant association-probability and pseudo-time update costs while also improving numerical stability in the presence of clutter. Second, the constant-gain approximation replaces the intractable Poisson equation for the feedback gain with sample-moment computations of complexity $O(Nnm)$, making real-time implementation feasible. Third, adopting a single-step pseudo-time update ($N_\lambda = 1$) avoids the pseudo-time integration loop by evaluating the gain and innovation once and transporting particles to the posterior in a single Euler step, thereby significantly reducing latency. Finally, the DPDA-FPF architecture is naturally parallel: local filtering across sensors can be executed concurrently, while particle propagation and updates within each sensor are parallelizable and well-suited for SIMD or GPU acceleration. Since only low-dimensional summary statistics are transmitted to the fusion center, the communication overhead is minimal. Together, these strategies enable real-time deployment of DPDA-FPF in photoelectric tracking systems without altering the underlying Bayesian structure of the filter.
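Validation gating as described above reduces to a Mahalanobis-distance test on the bearing residual. A minimal sketch, assuming an ellipsoidal gate with the 99% chi-square threshold for two degrees of freedom (the threshold choice is an assumption):

```python
import numpy as np

def gate_measurements(measurements, z_pred, S, gate=9.21):
    """Keep measurements whose Mahalanobis distance to the predicted
    bearing z_pred, under innovation covariance S, falls inside the gate.
    9.21 is the 99% chi-square quantile for 2 degrees of freedom."""
    Sinv = np.linalg.inv(S)
    kept = []
    for z in measurements:
        d = z - z_pred
        if d @ Sinv @ d <= gate:   # ellipsoidal validation region
            kept.append(z)
    return kept
```

Only the gated measurements enter the association-probability and pseudo-time update loops, which directly shrinks the dominant cost term identified in Table 1.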
6. Conclusions
This paper proposes a DPDA-FPF algorithm for photoelectric tracking systems in three-dimensional bearing-only target tracking with clutter. By extending the PDA-FPF to a continuous–discrete formulation, the method explicitly accommodates the mismatch between continuous-time target dynamics and discrete-time optical measurements. A track-level distributed architecture is further developed, in which each photoelectric sensor performs local PDA-FPF processing and transmits compact state estimates and covariance information to a fusion center. Numerical simulations demonstrate that, although individual bearing-only trackers may suffer from observability degradation and clutter-induced fluctuations, the proposed covariance-weighted fusion effectively exploits multi-view geometry and yields substantially improved global tracking accuracy.
The proposed DPDA-FPF framework is able to handle single-target tracking in moderate-clutter environments, but its performance may deteriorate in complex scenarios involving multiple closely spaced targets, missed detections, or significant sensor correlations. This stems from the inherent modeling assumptions, specifically the single-target constraint, idealized detection probabilities, and statistical independence, as well as from the numerical approximations adopted for real-time feasibility, which restrict the accuracy of the particle flow in highly nonlinear or low signal-to-noise regimes. Extending the proposed methods to deal with multi-target tracking and realistic sensor imperfections is non-trivial. One possible direction is to combine the feedback particle filter with joint probabilistic data association, or to embed it into random finite set frameworks that naturally account for target number variability and clutter density. Another direction is to develop correlation-aware distributed fusion strategies that address the potential inconsistency caused by neglected dependencies in large-scale sensor networks.