Space Debris Tracking with the Poisson Labeled Multi-Bernoulli Filter

This paper presents a Bayesian-filter-based solution to the Space Object (SO) tracking problem using simulated optical telescopic observations. The presented solution utilizes the Probabilistic Admissible Region (PAR) approach, an orbital admissible region that adheres to the assumption of independence between newborn targets and surviving SOs. These SOs obey physical energy constraints in terms of orbital semi-major axis length and eccentricity within a range of orbits of interest; in this article, Low Earth Orbit (LEO) SOs are considered. The solution also adopts the Partially Uniform Birth (PUB) intensity, which generates uniformly distributed births in the sensor field of view. The measurement update then generates a particle SO distribution. In this work, a Poisson Labeled Multi-Bernoulli (PLMB) multi-target tracking filter is proposed, using the PUB intensity model for the multi-target birth density and a PAR for the spatial density that determines the initial orbits of SOs. Experiments are demonstrated using simulated SO trajectories created from real Two-Line Element data, with simulated measurements from twelve telescopes located in observatories that form part of the Falcon Telescope Network. Optimal Sub-Pattern Assignment (OSPA) and CLEAR MOT metrics demonstrate encouraging multi-SO tracking results, even under very low numbers of observations per SO pass.


Introduction
The efficient detection, tracking, and cataloging of orbiting space objects (SOs) are of paramount importance for improved Space Situational Awareness (SSA). Due to a recent collision in space and an increased number of launches, the population of SOs has grown substantially. As a result, the demand for modern SO tracking applications that provide more accurate and more computationally efficient detection and tracking capabilities is higher than ever.
Some components of the forces acting on SOs can be considered to vary in a random manner, causing their orbits to change over time. Therefore, recursive Bayesian estimation methods have been adopted to detect, track, and update the states of SOs [1][2][3]. Under this paradigm, a probability density function of the multi-target state of the SOs entering the field of view (FoV) of a sensor can be propagated in time using captured observations. Subsequently, the estimated states of the SOs are refined by recursively updating a Bayesian filter with further observations. Due to the large variance of the orbital parameters, the limited FoVs of the sensors, and the typically small number of observations per SO pass, data association uncertainty is high, making it challenging to initialize new tracks and update existing tracks when no prior information about the SOs is available. To rectify this problem, the Admissible Region (AR) approach was proposed, which limits the candidate SO orbits to be tracked by selecting a subset of acceptable range and range-rate pairs for optical observations, or right ascension and declination rates for radar observations.
The contributions of this article include: • The use of a PUB-PAR-PLMB filter, inspired by [10,12], for modeling the SOs' initial uncertainty region within the PLMB filter. This reduces the search space of standard birth densities, and it is more efficient and effective than a Gaussian birth density. • The use of a PAR created from a dense uniform grid, resulting in a sparse weighted particle distribution. • The construction of a database based on published Two-Line Element (TLE) data. • Simulated measurements based on the type of observations and locations of twelve observatories of the Falcon Telescope Network (FTN) [18]. • The use of a particle distribution for the prediction step and a Gaussian distribution for the update step; the state vector is represented in the Earth-Centered Inertial (ECI) frame of reference. • The use of a motion model which takes into account the gravitational effects of the Moon and Sun and the solar radiation pressure acting on the SO [19], in contrast to previous work which only considered the gravitational effects of the Earth [20]. • The use of a measurement model for simulated optical measurements, modelling each telescopic image.
To place this work into perspective, in Section 2, the PAR and PUB models are explained, where the analysis and equations summarize the work of Jones et al. [10]. In Section 3, a summary of the PLMB filter prediction and update concepts is provided based on the analysis in [12]. A precise SO kinematic model is given in Section 4, in order to account for the long periods of time which can elapse between successive measurements. A telescopic-image-based measurement model, which provides angular measurements of the start and end points of streaks caused by moving SOs, is given in Section 5. A method for single and multi-target state extraction from the PLMB filter posterior distribution, together with useful multi-target filter performance metrics, is presented in Section 6. Finally, Sections 7 and 8 show the results and conclusions, respectively.

Multi-SO Initialization
Multi-SO initialization is composed of two components, namely the use of a PUB during the prediction step, and the PAR approach, which provides the single SO spatial density parameters in the update step.
Inspired by the PUB-CPHD filter in [10], a PUB-PLMB filter, combined with the PAR method for modeling the initial state of a telescopic-based measurement, is adopted. The PAR is an AR which adheres to the assumption of independence between newborn and surviving targets [10]. In contrast to the PUB-CPHD filter, the proposed PUB-PLMB filter is a true multi-target tracking filter in that it estimates the identity of each target at each time step in the form of a unique label. The CPHD filter is not a true multi-target tracker since, while it provides sequential estimates of the multi-target state, it does not provide target identity estimates, meaning that no target correlation information across time frames is provided. Furthermore, recent articles such as [12][13][14] have demonstrated superior multi-target tracking performance with labeled filters, such as the PLMB filter, as opposed to non-labeled approaches such as the PHD and CPHD filters.

Partially Uniform Birth (PUB) Multi-Target Initiation Model
The PUB intensity generates a uniformly distributed birth density in the sensor FoV, with intensity function given by:

D_B(x) = λ_β U(θ(x); B) p_B(φ(x)),    (1)

where x is a target state in the ECI frame and θ(x) is a function that maps the target state to the observable part of the state which, for a telescopic image, is given by θ(x) = [α, β, α̇, β̇]^T. In this case, φ(x) = [s, ṡ]^T is the non-observable part of the state, where s is the distance between the observer and the target, and ṡ is the radial velocity of the target. Note that the joint vector [θ(x)^T, φ(x)^T]^T is the target state in the telescopic camera coordinate system. λ_β is the expected number of targets of the Poisson intensity, and U(θ(x); B) represents the uniform density for the observable part of the target state, with boundary B given by the sensor FoV. The non-observable part of the state is modeled by a Gaussian Mixture (GM) density p_B(φ(x)).
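As an illustrative sketch (not the paper's implementation, which is in C/C++), births from such an intensity can be drawn by sampling the observable angles uniformly over the FoV and the non-observable (s, ṡ) pair from a GM; all function and parameter names below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_pub_births(n_births, fov, gm_weights, gm_means, gm_covs):
    """Draw birth states in camera coordinates [s, alpha, beta, sdot,
    alphadot, betadot]: the observable part (alpha, beta, alphadot,
    betadot) is uniform over the FoV bounds `fov` = (lo, hi), and the
    non-observable (s, sdot) part is drawn from a Gaussian mixture."""
    lo, hi = np.asarray(fov[0]), np.asarray(fov[1])
    theta = rng.uniform(lo, hi, size=(n_births, 4))   # [alpha, beta, alphadot, betadot]
    comp = rng.choice(len(gm_weights), size=n_births, p=gm_weights)
    phi = np.stack([rng.multivariate_normal(gm_means[c], gm_covs[c]) for c in comp])
    # Assemble the camera-frame state [s, alpha, beta, sdot, alphadot, betadot]
    return np.column_stack([phi[:, 0], theta[:, 0], theta[:, 1],
                            phi[:, 1], theta[:, 2], theta[:, 3]])
```

Each sampled state would then carry the weight λ_β / n_births so that the particle set integrates to the expected number of births.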

Probabilistic Admissible Region (PAR) Approach for New Target Density Approximation
The PAR approach is an orbital admissible region that adheres to the assumption of independence between newborn targets and surviving SOs. These SOs obey physical energy constraints in terms of orbital semi-major axis length and eccentricity within a range of orbits of interest. In this article, Low Earth Orbit (LEO) SOs are considered.
The new SO distribution is modeled by the product of the birth intensity D_B(x) and the measurement likelihood l_z(z|x), which will be shown in detail in Section 3:

f_B(x|z) ∝ D_B(x) l_z(z|x) = λ_β U(θ(x); B) N(θ(x); z, R) p_B(φ(x)),

where N(θ(x); z, R) is a Gaussian distribution on θ(x) modelling the resulting observable density for a given measurement z, with sensor noise matrix R ([10], p. 1459), and p_B(φ(x)) is the GM density on the non-observable part of the state.
In order to estimate the non-observable part of the state φ(x) = [s, ṡ]^T based on constraints provided by the measurements, the PAR [1,10,21] methodology is used. More details about the CAR/PAR methods, including equations and their parameters, are given in Appendix A (the following equations are based on those presented in Appendix A). As examples, Figures 1 and 2 show two curves corresponding to two constraints, namely the semi-major axis length constraint (constraint 1) and the eccentricity constraint (constraint 2), calculated from simulated measurements of two randomly chosen satellites from the TLE. The constraint regions, constraint 1 (Equation (3)) and constraint 2 (Equation (4)), bound the semi-major axis length a and eccentricity e, respectively, of a hypothesized SO orbit. The Earth gravitational constant is μ = 3.986004418 × 10⁵ [km³/s²], and w₁, F(s), U(s), and γ₁ to γ₄ are given in Appendix A. Note that these constraints are functions of the variables s and ṡ, and form a 2D nonlinear system of equations. The solutions of the equations are given in Appendix A, with their derivations given in [4,10,22]. Figure 2 shows curves corresponding to constraint 1 (magenta) and constraint 2 (red) for the SO, based on simulated measurements corresponding to satnum 337 from the TLE. It also shows the posterior SO distribution, which is modeled with particles whose weights are represented by the color map in the figure. The particle distribution state variables and weights are determined as follows:

1. For a given measurement z, calculate the parameters of constraints 1 and 2.
2. Divide the (s, ṡ) space into a grid, as shown in Figure 1. To define this, it is necessary to determine the coordinates of the limiting points A, B, and C in the figure, as follows: (a) Obtain the points on constraint curves 1 and 2 (corresponding to Equations (3) and (4), respectively) at the region edges, given by a_max = 43,000 [km] and e_max = 0.4. Note that LEO objects are defined by the constraints a ≤ a_max and e ≤ e_max. (b) Find (s_c1,right, ṡ_c1,right), the coordinate at the right-hand extreme of constraint 1 (Equation (3)), shown as point C in Figure 1. This coordinate can be found by differentiating constraint 1 with respect to ṡ and setting ds/dṡ = 0, noting that s is a function of ṡ, i.e., s = f(ṡ). The solution can be determined by realizing that ṡ_c1,right = −w₁/2, then substituting ṡ_c1,right into constraint 1 (Equation (3)) and solving the resulting 6th-degree polynomial to obtain s_c1,right. (c) For s_min = 300 [km], corresponding to the lower limit of LEO SOs, obtain ṡ_c1,top and ṡ_c1,bottom, shown as points A and B in Figure 1.
3. For each point (s, ṡ) on the grid: (a) Solve for a and e from the constraint Equations (3) and (4). (b) A point on the grid is a valid LEO SO when it falls within both constrained regions, which is equivalent to a ≤ a_max and e ≤ e_max. For example, in Figure 1, points D and E are invalid because they are outside of both constraint regions. Point F is within constraint 1's region but outside of constraint 2's region; therefore, it is also an invalid point. However, points G and H are within both constraint regions and are valid points. Invalid points have a weight γ̄ = 0.
4. For valid points, evaluate the probability γ̄ = p(a, e) that the semi-major axis length and eccentricity are a and e, as determined from Equations (6) and (7). The distribution p(a, e) is created using TLE data; the procedure can be found in [10] and Appendix C.
5. The birth particle distribution {γ̄_j, x_j}_{j=1}^{J_B} is then given by mapping each valid grid point to the ECI frame:

x_j = f^{eci}_{t-radec}([s_j, α, β, ṡ_j, α̇, β̇]^T),

where f^{eci}_{t-radec}(·) is a function that maps from the topocentric Right Ascension and Declination (RADEC) frame to the ECI frame.
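The per-grid-point validity test of step 3 can be sketched by computing a and e directly from two-body relations, given the observer position/velocity and a hypothesized (s, ṡ) pair, instead of through the closed-form constraint curves of Appendix A. All names are illustrative:

```python
import numpy as np

MU = 3.986004418e5  # Earth gravitational parameter [km^3/s^2]

def unit_and_rate(alpha, beta, alphadot, betadot):
    """Topocentric line-of-sight unit vector and its time derivative
    from right ascension/declination angles and rates."""
    ca, sa, cb, sb = np.cos(alpha), np.sin(alpha), np.cos(beta), np.sin(beta)
    u = np.array([cb * ca, cb * sa, sb])
    udot = np.array([-sb * ca * betadot - cb * sa * alphadot,
                     -sb * sa * betadot + cb * ca * alphadot,
                      cb * betadot])
    return u, udot

def grid_orbit_elements(s, sdot, obs_pos, obs_vel, angles, rates):
    """Semi-major axis a and eccentricity e for a hypothesized (s, sdot)
    grid point, via vis-viva and the eccentricity vector."""
    u, udot = unit_and_rate(angles[0], angles[1], rates[0], rates[1])
    r = obs_pos + s * u
    v = obs_vel + sdot * u + s * udot
    rn = np.linalg.norm(r)
    energy = 0.5 * v @ v - MU / rn
    a = -MU / (2.0 * energy)           # a < 0 flags hyperbolic hypotheses
    h = np.cross(r, v)
    e_vec = np.cross(v, h) / MU - r / rn
    return a, float(np.linalg.norm(e_vec))
```

A grid point would then be valid when `0 < a <= a_max and e <= e_max`, and its weight looked up from p(a, e).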

The Poisson Labeled Multi-Bernoulli (PLMB) Filter
This section describes the PLMB filter, which has been used to develop the multi-object tracking algorithm. The PLMB filter is capable of modelling any number of new possible targets, while tracking and identifying targets with unique labels. It is directly applicable to the PUB model, already demonstrated in [10] with the CPHD filter.
The multi-target tracking density is given by an LMB density, characterized by its Probability Generating Functional (PGFl) G^lmb_X[h] ([8], p. 456), [23]:

G^lmb_X[h] = ∏_{ℓ∈L} (1 − r^(ℓ) + r^(ℓ) ⟨f^(ℓ), h⟩),    (10)

where, for the RFS X, f^(ℓ) is the single-target density of the target with label ℓ, r^(ℓ) is the target's probability of existence, and h(x) is a function defined on the space of the individual elements, with 0 ≤ h(x) ≤ 1.
The birth process is modeled by a Poisson intensity D_B(x) = λ_B f_B(x), where λ_B is the expected number of targets to be born with spatial distribution f_B(x). The union of the Poisson and LMB densities in PGFl form is:

G_X[h] = G^poisson[h] G^lmb_Y[h],    (11)

where

G^poisson[h] = exp(⟨D_B, h − 1⟩),    (12)

and G^lmb_Y[h] is an LMB PGFl (Equation (10)). Equations (10)-(12) provide, in PGFl form, the general PLMB filter equations. Their implementation will be discussed in the following subsections.

Multi-Target Prediction
For a prior LMB density with parameters (r^(ℓ), f^(ℓ)), ℓ ∈ L, where L is the set of all target labels, the predicted LMB density (r^(ℓ)_{k|k−1}, f^(ℓ)_{k|k−1}) is given by:

r^(ℓ)_{k|k−1} = r^(ℓ) ⟨f^(ℓ), P_S⟩,
f^(ℓ)_{k|k−1}(x_k) = ⟨P_S l_x(x_k|·), f^(ℓ)⟩ / ⟨f^(ℓ), P_S⟩,

where P_S(x) is the probability of survival, l_x(x_k|x_{k−1}) is the kinematic model, and, in general, ⟨ξ, η⟩ = ∫ ξ(x) η(x) dx represents the inner product. This prediction step is efficient and equivalent to the Labeled Probability Hypothesis Density (LPHD) filter prediction step in [13,23].
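A minimal sketch of this prediction step for one particle-represented Bernoulli component, with generic `p_survive` and `transition` callables standing in for P_S and the orbit propagator (both assumptions for illustration):

```python
import numpy as np

def lmb_predict_particles(r, weights, particles, p_survive, transition):
    """Prediction of one labeled Bernoulli component represented by
    weighted particles: r_pred = r * <f, P_S>, each particle pushed
    through the kinematic model, and weights reweighted by P_S."""
    ps = np.array([p_survive(x) for x in particles])
    w = np.asarray(weights, dtype=float)
    fps = np.sum(w * ps) / np.sum(w)          # Monte Carlo estimate of <f, P_S>
    r_pred = r * fps
    new_particles = np.array([transition(x) for x in particles])
    new_weights = w * ps
    new_weights /= new_weights.sum()
    return r_pred, new_weights, new_particles
```

With a constant survival probability the particle weights are unchanged and only the existence probability is scaled, matching the equations above.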

Multi-Target Update
The Bayesian update of the Poisson multi-Bernoulli PGFl (11) is a Labeled Multi-Bernoulli Mixture (LMBM) PGFl, which is approximated by an LMB PGFl. The LMBM PGFl is composed of three components: new targets, detected targets, and misdetected targets [12]. The approximating posterior LMB density has the form:

G^lmb_+[h] = ∏_{ℓ∈L_+} (1 − r^(ℓ)_+ + r^(ℓ)_+ ⟨f^(ℓ)_+, h⟩),    (14)

where (r^(ℓ)_+, f^(ℓ)_+) are the probability of existence and the probability density of the updated target identified by the label ℓ, respectively. The probability of existence r^(ℓ)_+ is obtained by solving the assignment problem on the cost matrix W_{n,m} by applying Loopy Belief Propagation (LBP) [16]. Other methods used to solve the assignment problem are Murty's algorithm [17] and Gibbs sampling [24]. The cost matrix W_{n,m} represents the weights of the different combinations of targets ℓ_n, n ∈ 1 . . . N, for a total of N targets, and measurements z_m, m ∈ 1 . . . M. Misdetected targets are represented by m = 0 and n ≥ 1, and newborn targets are represented by n = 0 and m ≥ 1.
The parameters representing misdetected targets (m = 0) are given by:

r_{n,0} = r_n ⟨f_n, 1 − P_D⟩ / (1 − r_n + r_n ⟨f_n, 1 − P_D⟩),
f_{n,0}(x) = (1 − P_D(x)) f_n(x) / ⟨f_n, 1 − P_D⟩,
W_{n,0} = 1 − r_n + r_n ⟨f_n, 1 − P_D⟩.

The parameters representing detected targets are given by:

r_{n,m} = 1,
f_{n,m}(x) = P_D(x) l_{z_m}(z_m|x) f_n(x) / ⟨f_n, P_D l_{z_m}⟩,
W_{n,m} = r_n ⟨f_n, P_D l_{z_m}⟩.

New target labels, which in this article are represented by a triplet of the current time step k, measurement index m, and sensor index o, are ℓ_{N+m} = (k, m, o), and their spatial parameters are given by:

f_{0,m}(x) = D_B(x) P_D(x) l_{z_m}(z_m|x) / ⟨D_B, P_D l_{z_m}⟩.

Clutter is modeled by a uniform distribution κ(z_m) = λ_κ / V_κ, where λ_κ is the expected number of clutter measurements and V_κ the area formed by the sensor FoV. The density f_{+,0,m}(x) is obtained as a particle distribution by the PAR methodology described in Section 2.
The PUB intensity in Equation (1) can be expressed by separating the observed and non-observed components, D_B(x) = λ_β U(θ(x); B) p_B(φ(x)), so that the inner product ⟨D_B, P_D l_{z_m}⟩ factorizes over θ(x) and φ(x). The probability of existence r_{0,m} is then given by:

r_{0,m} = ⟨D_B, P_D l_{z_m}⟩ / (κ(z_m) + ⟨D_B, P_D l_{z_m}⟩),    (23)

and the cost value W_{0,m} is:

W_{0,m} = κ(z_m) + ⟨D_B, P_D l_{z_m}⟩.    (24)

The LBP algorithm returns the weights p_{n,m} that each assignment has for each label; pseudo-code for the LBP algorithm can be found in ([16], p. 20). Then, the posterior for previously existing targets is given by:

r^(ℓ_n)_+ = Σ_{m=0}^{M} p_{n,m} r_{n,m},  f^(ℓ_n)_+(x) = (1/r^(ℓ_n)_+) Σ_{m=0}^{M} p_{n,m} r_{n,m} f_{n,m}(x),    (25)

and for new targets:

r^(ℓ_{N+m})_+ = p_{0,m} r_{0,m},  f^(ℓ_{N+m})_+(x) = f_{0,m}(x).    (26)

Equations (25) and (26) give the parameters of the posterior multi-target distribution of Equation (14).
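For small problems, the marginal association weights that LBP approximates can be computed exactly by enumerating all one-to-one target/measurement matchings. The exhaustive sketch below is only useful for checking an LBP implementation on toy cost matrices; it is exponential in the number of targets:

```python
import numpy as np

def association_marginals(W):
    """Exact marginal association probabilities p[n, m] from a cost
    matrix W: W[n, 0] is the misdetection weight of target n, W[0, m]
    the new-target weight of measurement m, and W[n, m] the detection
    weight of target n with measurement m (W[0, 0] is unused)."""
    N = W.shape[0] - 1   # number of targets
    M = W.shape[1] - 1   # number of measurements
    marg = np.zeros_like(W, dtype=float)
    total = 0.0

    def rec(n, used, weight, choice):
        nonlocal total
        if n > N:
            w = weight
            for m in range(1, M + 1):       # unassigned measurements -> births
                if m not in used:
                    w *= W[0, m]
            total += w
            for t, m in enumerate(choice, start=1):
                marg[t, m] += w
            for m in range(1, M + 1):
                if m not in used:
                    marg[0, m] += w
            return
        for m in range(0, M + 1):           # m = 0 means target n is missed
            if m == 0 or m not in used:
                rec(n + 1, used | ({m} if m else set()),
                    weight * W[n, m], choice + [m])

    rec(1, set(), 1.0, [])
    return marg / total
```

For one target and one measurement, the marginals reduce to the two-hypothesis ratio between detection and misdetection-plus-birth weights.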

SO Kinematic Prediction Model
The equation of satellite motion is assumed to be [11]:

r̈ = −(μ/||r||³) r + δ_p(r, ṙ) + a,

where δ_p(r, ṙ) represents perturbation forces produced by different sources, and a represents non-modeled forces. r and ṙ are the position and velocity components, respectively, of the state vector x, i.e., x = [r(t)^T, ṙ(t)^T]^T. The predicted state is then obtained by numerically integrating this equation from t_{k−1} to t_k, where t_k is the time at time step k. Each particle of the state is propagated using the method given in [19], using the Shampine-Gordon integrator [30] (the Shampine-Gordon integrator is a multi-step method which uses information from previous steps, in contrast with the Runge-Kutta method, which discards previously calculated information; it is therefore more efficient. The Shampine-Gordon integrator and the SO propagator C code used in this article can be found at https://www.researchgate.net/publication/340793133_High_Precision_Orbit_Propagator_C_code (accessed on 16 May 2021). The Shampine-Gordon integrator has been used in other state propagation models, including the DROMO [25], SPOOK [26][27][28], and ZUNIEM [29] propagators, in which good performance has been demonstrated.), which models the following forces: • Earth central gravitation, • Earth non-spherical forces, such as the geopotential, solid tides, and ocean tides, • solar and lunar gravitation, • solar radiation pressure.
Non-modeled perturbations a can be estimated by [11,34]:

a = f^{eci}_{ric}([0, ω, 0]^T),

where ω is a zero-mean Gaussian noise source on the second (in-track) component in the object's Radial-Intrack-Crosstrack (RIC) frame, and f^{eci}_{ric} is the mapping that transforms a vector in the object's RIC frame to the reference ECI frame (see Appendix B.4).
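A minimal stand-in for the propagator and the RIC-to-ECI mapping is sketched below. It integrates only the unperturbed two-body term with RK4; the paper's Shampine-Gordon propagator with the full force model (linked above) would replace `rk4_step` in practice:

```python
import numpy as np

MU = 3.986004418e5  # [km^3/s^2]

def two_body_accel(r):
    """Central gravitation only; perturbations delta_p are omitted here."""
    return -MU * r / np.linalg.norm(r) ** 3

def rk4_step(x, dt):
    """One RK4 step of two-body motion for x = [r, rdot]."""
    def f(x):
        return np.concatenate([x[3:], two_body_accel(x[:3])])
    k1 = f(x); k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2); k4 = f(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def ric_to_eci(r, v):
    """Rotation matrix from the object's Radial-Intrack-Crosstrack frame
    to ECI, built from the position/velocity vectors; in-track noise
    [0, w, 0] maps to ECI as ric_to_eci(r, v) @ [0, w, 0]."""
    R = r / np.linalg.norm(r)
    C = np.cross(r, v); C /= np.linalg.norm(C)
    I = np.cross(C, R)
    return np.column_stack([R, I, C])
```

On a circular LEO test orbit the RK4 step conserves the specific orbital energy to high accuracy over short horizons, which is enough for a sanity check.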

Particle State Prediction
For a target modeled by a particle distribution {γ^j_{k−1}, x^j_{k−1}}_{j=1}^{J}, where γ^j_k represents a particle weight at time k, the predicted distribution is given by propagating each particle through the kinematic model of the previous section while keeping its weight:

{γ^j_k, x^j_{k|k−1}}_{j=1}^{J},  with γ^j_k = γ^j_{k−1},

where x^j_{k|k−1} is obtained by integrating the equation of motion from t_{k−1} to t_k.

Variable Time Step Prediction
Since, most of the time, targets are not observable, the target is predicted over either a single time step or a long time step, in order to reduce computational cost and improve integration performance. It was noted in this work that using many single-time-step integrations is more computationally intensive than using a single long-time-step integration. The long-time-step prediction method retains the intermediate time steps and states calculated during integration and applies a spline interpolation, the function f_ι(ℓ_n, t) in Algorithm 1, to estimate future state values at the desired time steps.
The decision rule used in the experiments, presented in Algorithm 1, for choosing between a long ∆T_ι and a single t_k − t_{k−1} time step prediction is to use the long prediction after N_th consecutive single-time-step predictions in which the estimate is out of the FoV of all telescopes. The number of consecutive time steps where target ℓ_n is out of all telescopes' FoVs is denoted U_n. If the target is not observed after the long time step prediction ∆T_ι, another long time step prediction is executed. When the target state is within the FoV of any of the telescopes, the subsequent predictions are carried out using the single time step prediction.

Algorithm 1 Prediction Step
Input: U_n, the number of consecutive time steps where target ℓ_n is out of all telescopes' FoVs. States are overwritten.
1: for n ∈ 1 . . . N do ▷ Prediction for the distribution of each element
2:   if U_n ≥ N_th then ▷ Long time step prediction
3:     Integrate over ∆T_ι, store the intermediate states, and interpolate with f_ι(ℓ_n, t); weights remain the same
4:   else ▷ Single time step prediction
5:     Integrate from t_{k−1} to t_k; weights remain the same
6:   end if
7: end for
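The decision rule of Algorithm 1 can be sketched as follows, with a generic `propagate` callable (an assumption) returning the integrator's intermediate times and states; plain linear interpolation stands in here for the paper's spline:

```python
import numpy as np

def predict_step(U_n, n_threshold, t_prev, t_now, dT_long, propagate):
    """Variable time step prediction (sketch): if target n has been out
    of all telescope FoVs for at least `n_threshold` consecutive steps,
    integrate once over the long interval dT_long and return an
    interpolant over the dense output; otherwise take a single step."""
    if U_n >= n_threshold:
        times, states = propagate(t_prev, t_prev + dT_long)
        def interp(t):
            # Linear interpolation per state component; the paper uses
            # a spline over the integrator's internal steps.
            return np.array([np.interp(t, times, states[:, i])
                             for i in range(states.shape[1])])
        return interp
    states = propagate(t_prev, t_now)[1]
    return lambda t: states[-1]          # single-step result at t_now
```

This keeps the expensive integrator call count low while still allowing state queries at intermediate times during the long step.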

SO Observation Model
An SO detection in a telescopic image is a streak. In this work, an SO detection is modeled in terms of the angles [α₁, β₁, α₂, β₂] of the streak limits with respect to the image center; see Figure 3. The FoV of the telescope is within the range α ∈ [−A/2, A/2] with respect to the horizontal axis and β ∈ [−B/2, B/2] with respect to the vertical axis. The measurement is not instantaneous, but occurs during an elapsed time known as the exposure time ∆t_exp. The measurement can then be expressed as:

z = [α₁, β₁, α₂, β₂]^T.

An observation model for real image measurements can be found in [35]. The observation model is given by the function ẑ = H f^{cam}_{eci}(x), where f^{cam}_{eci}(x) is the projection of the state vector into the camera spherical coordinate system and H is the observation matrix mapping the camera-frame state [s, α, β, ṡ, α̇, β̇]^T to the streak limit angles. The single-target measurement model is then given by the likelihood function l_z(z|x), modeled by the Gaussian distribution:

l_z(z|x) = N(z; H f^{cam}_{eci}(x), R).

The probability of detection of a particle x^j_{k|k−1} is P_D when its predicted observation ẑ^j = H f^{cam}_{eci}(x^j_{k|k−1}) falls within the telescope FoV, and zero otherwise (Equation (35)). The average probability of detection of the predicted particle distribution {γ^j_{k|k−1}, x^j_{k|k−1}}_{j=1}^{J} is the corresponding weighted average over the particles (Equation (36)).
The single-target posterior densities are calculated based on the procedure in [11]:
1. Transform the particle state to the topocentric camera coordinates, with weights corrected by the probability of detection.
2. Approximate the distribution as a Gaussian (μ_{k|k−1}, P_{k|k−1}).
3. Update using a linear Kalman filter.
4. Sample the resulting distribution {1/J, x^j_{k,cam}}_{j=1}^{J}. Note that the radial components s, ṡ do not change because they are not observed.
5. Transform from camera coordinates back to the ECI frame.
The particle distribution of undetected targets is given by x^j_k = x^j_{k|k−1}, with the weights reweighted by the misdetection probability 1 − P_D(x^j_k). The complete pseudo-code of the multi-target tracking update is given in Algorithm 3, in which the following functions are used: • PARTICLESTOGM({γ_j, y_j}_{j=1}^{J}): converts a set of weighted particles to a Gaussian Mixture. This can be implemented with a variant of the Expectation Maximization (EM) algorithm, such as the Fixed Weighted-Data EM (FWD-EM) [36]. • A GM sampling function: extracts samples from a GM distribution by first sampling a Gaussian component i with respect to the weights w_i, and then sampling from the normal distribution N(·; μ_i, P_i). • A PAR birth function: calculates the weighted particle distribution of the hypothesis given by the measurement z and its observer o (with parameters o, ȯ, R^{cam}_{eci}(t_k, o)). • LBP(W_{n,m}): calculates the individual contribution of each state-measurement pair from the cost matrix W_{n,m} using the LBP algorithm [16].
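The streak prediction and average detection probability can be sketched as follows. The streak-endpoint convention used here, centered on the mid-exposure angles, is an assumption for illustration; the paper's H matrix encodes the exact mapping:

```python
import numpy as np

def streak_measurement(cam_state, dt_exp):
    """Predicted streak endpoints [a1, b1, a2, b2] for a camera-frame
    state [s, a, b, sdot, adot, bdot], assuming the streak is centered
    on the mid-exposure angles (illustrative convention)."""
    s, a, b, sdot, adot, bdot = cam_state
    h = 0.5 * dt_exp
    return np.array([a - adot * h, b - bdot * h, a + adot * h, b + bdot * h])

def avg_detection_prob(weights, cam_states, fov_a, fov_b, pd, dt_exp):
    """Weighted-average P_D of a predicted particle set: a particle
    contributes pd only if its predicted streak lies inside the
    fov_a x fov_b rectangle, and zero otherwise."""
    w = np.asarray(weights, float)
    probs = []
    for x in cam_states:
        z = streak_measurement(x, dt_exp)
        inside = (abs(z[0]) <= fov_a / 2 and abs(z[2]) <= fov_a / 2 and
                  abs(z[1]) <= fov_b / 2 and abs(z[3]) <= fov_b / 2)
        probs.append(pd if inside else 0.0)
    return float(np.sum(w * np.array(probs)) / np.sum(w))
```

With one particle inside and one outside the FoV at equal weights, the average detection probability is half the nominal pd.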

State Extraction
To extract the target states, the particle distribution in the ECI frame is converted to a Gaussian distribution in the RADEC frame. A usual method to achieve this is Euclidean particle averaging; however, Euclidean averaging is not the best option for calculating the average positions of orbital trajectories. Averaging particle states converted to RADEC coordinates is also problematic, due to the discontinuities that occur at α_ra = 2π and β_dec = π. Instead, methodologies used in Riemannian manifolds are applied as follows [37,38]. Let {γ_j, x_j}_{j=1}^{J} be the particle distribution for a given target. The weighted average is calculated in spherical RADEC coordinates x = [r, α_ra, β_dec]^T. The unit vector q_j representing the target position on the unit sphere is:

q = [cos β_dec cos α_ra, cos β_dec sin α_ra, sin β_dec]^T.

Define the logarithmic map, with respect to a reference unit vector q₀, as:

θ := Log_{q₀}(q) = uθ,  u = (q − q₀ cos(θ))/sin(θ),  θ = arccos(q₀^T q),    (39)

and the exponential map as:

Exp_{q₀}(uθ) = q₀ cos(θ) + u sin(θ).

The weighted average [38] is calculated using the Gauss-Newton iterative algorithm by initiating q̄ ← q₁ or q̄ ← q_{j*} with j* = arg max_j γ_j, and iterating

q̄ ← Exp_{q̄}( Σ_{j=1}^{J} γ_j Log_{q̄}(q_j) / Σ_{j=1}^{J} γ_j )

until the error between two consecutive iterations is lower than a chosen threshold. Then, for q̄ = [q̄_x, q̄_y, q̄_z]^T, the conversion to RADEC coordinates is given by:

ᾱ_ra = atan2(q̄_y, q̄_x),  β̄_dec = arcsin(q̄_z).

It is then necessary to convert back to Cartesian coordinates in the ECI frame to give x̄ = f^{eci}_{radec}([r̄, ᾱ_ra, β̄_dec]^T).
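The log/exp maps on the unit sphere and the Gauss-Newton weighted mean can be sketched directly from the definitions above:

```python
import numpy as np

def log_map(q0, q):
    """Logarithmic map on the unit sphere at reference point q0."""
    c = np.clip(q0 @ q, -1.0, 1.0)
    theta = np.arccos(c)
    if theta < 1e-12:
        return np.zeros(3)
    u = (q - q0 * c) / np.sin(theta)
    return u * theta

def exp_map(q0, t):
    """Exponential map on the unit sphere at q0 for tangent vector t."""
    theta = np.linalg.norm(t)
    if theta < 1e-12:
        return q0
    return q0 * np.cos(theta) + (t / theta) * np.sin(theta)

def spherical_weighted_mean(weights, qs, tol=1e-10, max_iter=100):
    """Gauss-Newton iteration for the weighted mean of unit vectors,
    initialized at the highest-weight particle."""
    w = np.asarray(weights, float); w = w / w.sum()
    q = qs[int(np.argmax(w))]
    for _ in range(max_iter):
        t = sum(wi * log_map(q, qi) for wi, qi in zip(w, qs))
        q_new = exp_map(q, t)
        if np.linalg.norm(q_new - q) < tol:
            return q_new
        q = q_new
    return q
```

For two equally weighted points placed symmetrically about the equator, the iteration converges to their geodesic midpoint, avoiding the RADEC wrap-around problem that plain angle averaging suffers from.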

Defining the Distance between Estimates and Ground Truth
To quantify the performance of the PLMB filter, distance metrics are needed. In particular, the Optimal Sub-Pattern Assignment (OSPA) and CLEAR MOT metrics used in this article require the individual error distances between estimated SO states and ground truth.
Since the estimation of SO orbital trajectories is sought in this article, it makes sense to define single SO estimate to ground-truth distance errors in terms of distances along their orbital trajectories, rather than the Euclidean distance between them. For example, if an SO is on one side of the Earth according to ground-truth, but its estimate is on the other side, the error distance should be half of the perimeter of the orbit.
When objects have different distances from the Earth (r₁ and r₂), the distance between two vectors x₁ and x₂ can be modeled by the arc length of an Archimedean spiral,

r(θ) = a + bθ,

imposing r(0) = a = min(r₁, r₂) and r(θ₁₂) = max(r₁, r₂), where θ₁₂ is the angle between x₁ and x₂, so that b = (max(r₁, r₂) − min(r₁, r₂))/θ₁₂.
The line integral is given by:

d(x₁, x₂) = ∫₀^{θ₁₂} √(r(θ)² + b²) dθ,

and finally:

d(x₁, x₂) = (1/(2b)) [ r√(r² + b²) + b² ln(r + √(r² + b²)) ]_{r=a}^{a+bθ₁₂},    (48)

which reduces to d = r₁θ₁₂ when b = 0 (i.e., r₁ = r₂), is the error distance between x₁ and x₂, to be used in the multi-SO error metrics for quantifying the performance of the PLMB filter.
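A numerical sketch of this spiral arc-length distance, using trapezoidal integration of the line integral above (names are illustrative):

```python
import numpy as np

def spiral_distance(x1, x2):
    """Orbital-trajectory error distance between two ECI positions:
    arc length of the Archimedean spiral r(t) = a + b*t joining them,
    where t sweeps the angle theta12 between the two vectors."""
    r1, r2 = np.linalg.norm(x1), np.linalg.norm(x2)
    cos_t = np.clip(x1 @ x2 / (r1 * r2), -1.0, 1.0)
    theta12 = np.arccos(cos_t)
    a = min(r1, r2)
    if theta12 < 1e-12:
        return abs(r2 - r1)            # same direction: purely radial gap
    b = (max(r1, r2) - a) / theta12    # b = 0 reduces to a circular arc
    t = np.linspace(0.0, theta12, 1001)
    f = np.sqrt((a + b * t) ** 2 + b ** 2)
    return float(np.sum(0.5 * (f[:-1] + f[1:]) * np.diff(t)))
```

As a sanity check, antipodal points on the same orbital radius give half the orbit perimeter, matching the example discussed above.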

Multi-Target Tracking Metrics
The multi-SO tracking results here are compared using the OSPA [39] and OSPA (2) [40] metrics, and the Multi Object Tracking Precision (MOTP) and Multi Object Tracking Accuracy (MOTA) CLEAR MOT metrics [41]. Both the OSPA and OSPA (2) metrics combine the localization and cardinality errors between two sets of targets (ground truth and estimates in this case) into one value per time step, with the difference that the OSPA (2) metric is designed to evaluate labeled tracks, whereas the OSPA metric does not take labels into account. The MOTP metric gives the estimated target location errors when targets are correctly detected, and the MOTA metric gives the accuracy in tracking targets, taking into account misdetections, false alarms, and label switching.
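For small target sets, the OSPA distance can be sketched with a brute-force optimal assignment (practical implementations use the Hungarian algorithm); any base distance can be plugged in, such as the orbital distance defined above:

```python
import numpy as np
from itertools import permutations

def ospa(X, Y, c, p, dist):
    """OSPA distance of order p with cutoff c between two finite sets,
    brute-forcing the optimal sub-pattern assignment."""
    m, n = len(X), len(Y)
    if m == 0 and n == 0:
        return 0.0
    if m > n:
        X, Y, m, n = Y, X, n, m       # ensure the smaller set is first
    best = np.inf
    for perm in permutations(range(n), m):
        s = sum(min(dist(X[i], Y[j]), c) ** p for i, j in zip(range(m), perm))
        best = min(best, s)
    # Unassigned elements of the larger set are penalized at the cutoff c.
    return ((best + (n - m) * c ** p) / n) ** (1.0 / p)
```

With one ground-truth target, two estimates, cutoff c = 5, and order p = 1, a perfect match plus one false track yields an OSPA value of c/2.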

Database Construction
The database is built using simulated telescopic measurements of LEO SOs, obtained from the TLE file of 4 October 2019 (the TLE file used in the experiments, containing SO trajectory parameters from 2-6 October 2019, and a video demonstrating the resulting tracking performance can be downloaded at: https://www.dropbox.com/sh/a79hgj5hpo4vys8/AACu9OAYHLmgy4gK4V8NtqcMa?dl=0, (accessed on 16 May 2021)), which contains data related to more than 5000 SOs. The measurements are simulated by projecting the SO trajectories into the image plane of twelve telescopic cameras from the FTN. The name, location, and simulated pointing directions of the telescopes are shown in Table 1. In the experiment, a subset of nine SOs was used. These SOs are those that produce a higher number of measurements from the telescopes when the sample frequency is seven frames per second (fps).
Experimental results at three different measurement update times, (a) and (b) at m = 1, (c) at m = 2, and (d) at m = 11, are shown in Figure 4. The complete ground truth trajectories are shown in cyan. Gold stars (*) represent the telescope positions, which change over time due to the Earth's rotation, as can be seen by comparing the graphs. Circles show the ground truth SO locations, and their neighbouring numbers m correspond to the mth measurement update. The pink points show the SO's particles. In Figure 4a,b, it is interesting to see the particle distribution of a new target (pink), where the high standard deviation (σ_r) of the estimate in the radial direction (gold line) with respect to the telescope's position is evident. The error distance between ground truth and the average particle position state is 52 [km]. Figure 4c shows the update at the next time step, where it can be seen that σ_r is reduced significantly. Figure 4d shows the update after 11 time steps, together with the measurement of satnum = 337, which in this case is observed by a different telescope with respect to Figure 4b,c, as seen by the gold line. At this time step, the error distance is greatly reduced to 0.68 [km], and σ_r is reduced even further.

Grid Size Performance for the Probabilistic Admissible Region
The process of computing the particle distribution composed of state vectors and their associated weights can be parallelized. The PAR algorithm was implemented in C++ and processed on a computer with 16 threads. Using a uniform grid over (s, ṡ), the values p(a, e) were calculated, and consequently the weighted particles {γ̄, [s, α, β, ṡ, α̇, β̇]^T} were obtained. Since most of the weights γ̄ have values close to zero, the corresponding elements are discarded, reducing the size of the particle set.
Experiments with three regular grids over (s, ṡ) were executed to demonstrate the computational times and corresponding numbers of particles in the resulting distributions:
• A 100 × 100 grid resulted in an average particle set size of 136 ± 29 weighted particles and required 0.04 [s] to execute. The resulting particle set was composed of 1.36 ± 0.29% of the total number of elements in the grid.
• A 200 × 200 grid resulted in an average particle set size of 465 ± 130 weighted particles and required 0.15 [s] to execute. The resulting particle set was composed of 1.16 ± 0.33% of the total number of elements in the grid.
• A 300 × 300 grid resulted in an average particle set size of 964 ± 308 weighted particles and required 0.34 [s] to execute. The resulting particle set was composed of 1.07 ± 0.34% of the total number of elements in the grid.
Note that, even though the number of particles generated is proportional to the number of grid cells, it is typically less than 1.5% of this number. Furthermore, not all the (s, ṡ) pairs in each grid cell need to be calculated, since only those cells which comply with the PAR constraints (i.e., only the cells enclosed within both the magenta and red regions of Figure 1, such as points G and H) are necessary.
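The discarding of near-zero-weight grid particles described above can be sketched as (threshold name and value are illustrative):

```python
import numpy as np

def sparsify(weights, particles, eps=1e-6):
    """Drop grid particles whose normalized weight is below eps and
    renormalize the remainder; in the experiments this kept well under
    2% of the grid points."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    keep = w > eps
    w = w[keep]
    return w / w.sum(), particles[keep]
```

Because the surviving set is small, subsequent prediction and update steps operate on hundreds rather than tens of thousands of particles.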

Conversion of Weighted Particles to a Gaussian Mixture Distribution
The state represented by a set of weighted particles in the prediction step is converted to a GM distribution in the update step. The conversion was implemented using the FWD-EM method [36] modified using the Fuzzy C-Means method for initialization suggested in [42], under which a superior performance was demonstrated. This procedure required the FWD-EM algorithm to execute for a fixed number of Gaussian components. In this work, GMs containing between 1 and 10 components were calculated and their fit to the weighted particles compared using the Bayesian Information Criterion (BIC). In the case of a single Gaussian component, its mean and standard deviation were calculated from the weighted average and standard deviation of the particles.
In the case of multiple Gaussian components, the FWD-EM algorithm was used. Experimentally, the lowest BIC was obtained based on a single-Gaussian component in all the conversions. This possibly surprising result can be explained because the Gaussian distribution is calculated in spherical as opposed to Cartesian coordinates. For this reason, a single Gaussian component is used in the following experiments.

Global Tracking
Ten Monte Carlo (MC) experiments with randomly simulated observations were carried out using the PLMB filter; all the parameters used in this experiment, including the sensor noise covariance matrix, are shown in Table 2. The OSPA and CLEAR MOT metrics were computed for performance evaluation. The distances between ground truth and estimated targets were calculated using (48). The MOTA metric in this case was 96.34 ± 1.09% and the MOTP metric was 3.14 ± 0.24 [km]. This indicates that each target was correctly tracked more than 96% of the time after it was first detected by a telescope, with a distance precision between 2.9 and 3.4 [km]. Figure 5 shows the resulting cardinality, in which it can be seen that, most of the time, the estimated number of tracks was correct, and in some intervals the error corresponds to only a single cardinality error, as can also be seen in the OSPA cardinality error (third row of Figure 6). The OSPA and OSPA (2) metrics are shown in Figure 6. The first row of graphs shows the complete OSPA errors, whereas the second and third rows show the individual OSPA localization and cardinality error components. The OSPA distance graphs (first row) appear very similar to the OSPA localization graphs (second row) because the cardinality errors remained low during all the experiments. The OSPA metric shows particularly low errors between 7-11 h. To explain this, it is necessary to analyze the individual tracks in Figures 7-9. The first measurement of each SO occurs at a different time, and its track begins at that moment. The first estimate, which is modeled by the PAR, produces a hypothesis with low uncertainty in the angular directions, but high uncertainty in the radial direction with respect to its observing telescope, as shown in Figure 2.
This effect can be seen in Figure 8, where, during the time period 11-16 h, the radial error is very high, close to 100 [km], and is reduced to less than 1 [km] when new measurements reduce the target hypothesis uncertainty. It can then be seen that the radial distance between the SO's estimate and ground truth remains below 1 [km] for most of the time.
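For reference, the OSPA distance used in these evaluations can be sketched as follows. This is a minimal implementation for point sets with cutoff c and order p; the function name and the use of SciPy's linear_sum_assignment are ours for illustration, not the paper's code:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ospa(X, Y, c=100.0, p=1):
    """OSPA distance between two point sets X (m, d) and Y (n, d).

    Distances are capped at cutoff c, an optimal assignment is found,
    and unassigned points incur the maximum penalty c.
    """
    if len(X) > len(Y):
        X, Y = Y, X            # ensure m <= n
    m, n = len(X), len(Y)
    if n == 0:
        return 0.0             # both sets empty
    if m == 0:
        return c               # one set empty: pure cardinality error
    # Pairwise Euclidean distances, capped at the cutoff c
    D = np.minimum(c, np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2))
    row, col = linear_sum_assignment(D ** p)
    loc = (D[row, col] ** p).sum()      # localization component
    card = (c ** p) * (n - m)           # cardinality component
    return ((loc + card) / n) ** (1.0 / p)
```

With one estimated target matching one of two ground-truth targets and cutoff c = 100, the result is 50: a perfect localization term averaged with one full cardinality penalty.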

Identifying Individual SO Tracks
In Figures 7-9, the first row of graphs shows an SO's estimated probability of existence r in blue and the observation window time when measurements are captured in red. The second row of graphs shows the ground truth to SO distance from Equation (48), which is the orbital distance used in the OSPA and CLEAR MOT metrics.
Graphs for the SO track with satnum 337 are shown in Figure 7. This SO yielded seven consecutive measurements immediately after PAR initialization, so the initial distance errors reduced significantly after this first set of seven measurements. During the time period 9-20 h, the SO produced no measurements, causing the distance error to increase, but its track was successfully maintained and updated at time 20.5 h. Figure 8 shows PLMB tracking performance, in an MC realization, for satnum 5715, in which only one measurement was obtained immediately after PAR initialization. In this case, the first graph in Figure 8 shows that r ≈ 0.7 until just after 15 h; r then reduces before the next measurement, when the probability of detection increases (the probability of detection becomes greater than zero when the target hypothesis distribution overlaps the FoV of its observing telescope; see Equations (35) and (36)). Subsequently, r increases to approximately unity and remains near this value after future measurements.
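This behavior of r follows the standard Bernoulli-filter existence update under missed detections. As a minimal sketch (the function name is ours, not from the paper's implementation), the posterior existence probability after a missed detection with detection probability p_d is r(1 − p_d)/(1 − r p_d):

```python
def existence_after_miss(r, p_d):
    """Bernoulli existence probability after a missed detection.

    With p_d = 0 (target hypothesis outside the sensor FoV) r is unchanged,
    matching a track whose existence probability stays flat between
    observation windows; with p_d > 0, repeated misses steadily lower r.
    """
    return r * (1.0 - p_d) / (1.0 - r * p_d)
```

This explains why r remains constant at ≈ 0.7 while the target is outside the FoV and only starts to decay once the hypothesis overlaps the telescope's FoV without producing a measurement.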
Finally, Figure 9 shows the tracking result of a less successful case (another MC realization for satnum 5715), in which multiple labels, and hence target tracks, are erroneously assigned to a single SO. Figure 9a shows that a track with label (k, m, o) = (5612, 1, 9) is assigned to this SO. Its probability of existence r is significantly reduced at a time of 17 h and then increases back to unity at a time of approximately 19 h. During this time period, Figure 9b shows that a new track, with label (8836, 1, 5), is also assigned to satnum 5715, but with a higher probability of existence r. This demonstrates a failure to maintain the track of satnum 5715 during this time period. Possible solutions to this problem include implementing a state extractor which takes the history of tracks into account with hysteresis. Other alternatives are the fusion of tracks or the use of sets of trajectories as in [43]. Improving the state model by using a GM density, instead of the single Gaussian density used here, or by using a distribution which better models the "banana shape" of the SO distribution, could also be beneficial.
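One possible form of the suggested hysteresis-based state extractor is sketched below. This is our own illustrative sketch, not part of the paper's implementation: a track is reported while confirmed, and two thresholds ensure that a brief dip in r does not immediately terminate it.

```python
def hysteresis_extractor(r_history, r_confirm=0.8, r_delete=0.2):
    """Two-threshold (hysteresis) track confirmation over a history of
    existence probabilities: confirm when r rises above r_confirm, and
    only drop the track once r falls below the lower threshold r_delete.
    """
    confirmed = False
    reported = []
    for r in r_history:
        if not confirmed and r > r_confirm:
            confirmed = True
        elif confirmed and r < r_delete:
            confirmed = False
        reported.append(confirmed)
    return reported
```

With such an extractor, the temporary reduction of r seen around 17 h would not immediately remove the track, since r would need to fall below the lower threshold before the track is dropped.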

Computational Performance
To measure the computational performance, the average time to execute ten simulations was calculated. On average, the computational time necessary for one cycle (from time k to k + 1) of the PUB-PAR-PLMB filter was 2.5 [s]. This time consumption was made up of the following components, where all percentages are with respect to this cycle time. Note that the element Others refers to data logging, overhead, and other functions:
• Prediction: 79.94%.
It can be seen that approximately 70% of the total filter cycle time is used by the integrator within the prediction component. Note that this component forms part of the SO dynamic model and not specifically of the PUB-PAR-PLMB filter. The scope of this work is not to optimize the SO propagator, but to demonstrate the suitability of the PLMB filter for multi-SO tracking. Nevertheless, it should be noted that this computational time could be reduced by using a different target predictor, such as an Extended Kalman Filter (EKF) or Unscented Kalman Filter (UKF) predictor, and by using a single target state distribution which better fits the "banana shape" of SO distributions.
It can also be seen that the time required by the LBP algorithm for measurement to state assignment approximation is very small compared with the total filter cycle time. This is expected since, in these experiments, the number of measurements (M) and estimated targets (N) is low. However, since the LBP algorithm's execution time increases with both N and M, then, as the numbers of measurements and targets increase, its execution time would start to dominate the overall filter cycle time.
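For concreteness, the marginal association probabilities that the LBP algorithm approximates can be sketched as follows. This is a minimal, illustrative implementation of the standard message-passing scheme for the data-association graph; the function and variable names are ours, and this is not the paper's code. Here psi[i, j] holds the single-target likelihood ratio for target i and measurement j, and psi0[i] the missed-detection weight:

```python
import numpy as np

def lbp_association(psi, psi0, n_iters=100, tol=1e-9):
    """Approximate marginal target-to-measurement association probabilities
    via loopy belief propagation on the association graph.

    psi  : (N, M) single-target measurement likelihood ratios
    psi0 : (N,)   missed-detection weights
    Returns an (N, M + 1) matrix of marginal probabilities; column M holds
    the missed-detection probability of each target.
    """
    N, M = psi.shape
    nu = np.ones((N, M))  # messages from measurements to targets
    for _ in range(n_iters):
        # mu[i, j] = psi[i, j] / (psi0[i] + sum_{j' != j} psi[i, j'] nu[i, j'])
        s = psi0[:, None] + (psi * nu).sum(axis=1, keepdims=True)
        mu = psi / (s - psi * nu)
        # nu[i, j] = 1 / (1 + sum_{i' != i} mu[i', j])
        nu_new = 1.0 / (1.0 + mu.sum(axis=0, keepdims=True) - mu)
        if np.abs(nu_new - nu).max() < tol:
            nu = nu_new
            break
        nu = nu_new
    unnorm = np.concatenate([psi * nu, psi0[:, None]], axis=1)
    return unnorm / unnorm.sum(axis=1, keepdims=True)
```

The per-iteration cost grows with the product of N and M, which is why the LBP step is negligible for the small numbers of targets and measurements in these experiments but would come to dominate as both grow.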

Conclusions
A simulated space debris database was created based on TLE data, in which LEO satellites/debris were observed by simulated telescopes based on the FTN. Satellite measurements corresponding to both ends of detected streaks were converted into celestial coordinates; these coordinate pairs were then used as inputs to the PLMB SO tracking algorithm. The experiment simulated measurements observed by twelve telescopes. The implementation demonstrates a particle filter version of the PLMB filter. A multi-sensor strategy was used by performing an iterative multi-target update for each sensor based on the LBP algorithm. It would be interesting to compare this with a more sophisticated multi-sensor tracking concept, for example one based on Gibbs sampling, as used in [14]. The implementation of the PAR and PUB approaches for modeling the birth density of the PLMB filter demonstrated high accuracy and efficiency. The tracking results of the PLMB filter are promising and demonstrated good performance even during long periods with no measurements. The performance could possibly be improved in future work with a different single target state distribution which better fits the "banana shape" of SO distributions, a more complex state extractor, the use of sets of trajectories, and/or another propagator/integrator. This article has presented a proof of concept of applying the PLMB filter together with the PUB and PAR. For future work, it would be interesting to substitute various components, such as the PUB, PAR, or the PLMB filter itself, and compare results.
Substituting (A2) and (A3) into (A1), for a ≤ a_max, results in:

ṡ² + w₁ṡ + F(s) ≤ −µ/a_max

where

F(s) = w₂s² + w₃s + w₄ − 2µ/√(s² + w₅s + w₀) (A6)

with:

w₀ = ‖o‖², w₁ = 2ȯ · u_s, w₂ = α̇² cos²(β) + β̇², w₃ = 2α̇(ȯ · u_α) + 2β̇(ȯ · u_β).

The eccentricity constraint is derived from the angular momentum h = r × ṙ and the orbital energy E = ‖ṙ‖²/2 − µ/‖r‖, through e² = 1 + 2E‖h‖²/µ². Substituting (A2) and (A3) into (A1), for e ≤ e_max, results in the quartic inequality in ṡ:

γ₄ṡ⁴ + γ₃ṡ³ + γ₂ṡ² + γ₁ṡ + γ₀ ≤ 0

where

γ₀ = F(s)U(s) + µ²(1 − e²_max)
γ₁ = F(s)P(s) + w₁U(s)
γ₂ = U(s) + c₀F(s) + w₁P(s)

with:

γ₃ = P(s) + w₁c₀ and γ₄ = c₀,

and c₀, P(s), and U(s) are the coefficients of the expansion of ‖h‖² as a quadratic in ṡ, i.e., ‖h‖² = c₀ṡ² + P(s)ṡ + U(s).

Finally, let x_cam = [x, y, z, ẋ, ẏ, ż]ᵀ and let s = √(x² + y²). Since α̇ and β̇ are undefined when s = 0, a threshold ε representing a value close to 0 is used; here, ε = 10⁻¹⁰.
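To illustrate how the semi-major-axis constraint restricts the admissible region, the sketch below (our own, with illustrative names; w denotes the precomputed coefficients (w0, ..., w5) from the observation geometry) evaluates F(s) as in (A6) and returns the interval of range rates ṡ satisfying the energy constraint for a ≤ a_max:

```python
import math

MU = 398600.4418  # Earth's gravitational parameter [km^3/s^2]

def F(s, w):
    """F(s) as in (A6), with w = (w0, ..., w5) precomputed from the observation."""
    w0, w1, w2, w3, w4, w5 = w
    return w2 * s**2 + w3 * s + w4 - 2.0 * MU / math.sqrt(s**2 + w5 * s + w0)

def admissible_range_rates(s, w, a_max):
    """Interval of range rates sdot with sdot^2 + w1*sdot + F(s) <= -MU/a_max,
    i.e. orbits whose semi-major axis satisfies a <= a_max at range s.
    Returns None when no range rate is admissible at this range."""
    w1 = w[1]
    # Roots of sdot^2 + w1*sdot + (F(s) + MU/a_max) = 0
    disc = (w1 / 2.0) ** 2 - (F(s, w) + MU / a_max)
    if disc < 0:
        return None
    half = math.sqrt(disc)
    return (-w1 / 2.0 - half, -w1 / 2.0 + half)
```

For each candidate range s, the admissible region is thus a (possibly empty) interval of range rates, which is how the PAR bounds the candidate orbits that the birth density must cover.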

Appendix C. Construction of Semi-Major Axis Length and Eccentricity GM Distribution
The construction of the p(a, e) density is realized with the procedure described in [10]. The density p(q_s) = p(a, e) is a GM approximation of the a-e density derived from the public TLE catalog from 4 October 2019. Using the 1118 samples consistent with the a and e constraints in ([10], Table I), the EM algorithm, with a convergence tolerance of 10⁻⁷, is used to generate a GM with 30 components. Figure A2 shows the a and e values for more than 10,000 SOs from the TLE catalog and the resulting GM distribution. The TLE provides the orbit eccentricity but does not directly provide the semi-major axis length. Instead, it provides the mean motion n, which is the number of revolutions that the satellite completes around the Earth in one day. Using Kepler's third law, n²a³ = µ (with n expressed in radians per second), the semi-major axis length a can be obtained in terms of the mean motion n.
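The conversion described above can be sketched as follows (a minimal helper with our own naming; µ is Earth's gravitational parameter and the TLE mean motion is given in revolutions per day):

```python
import math

MU = 398600.4418  # Earth's gravitational parameter [km^3/s^2]

def semi_major_axis_from_mean_motion(n_rev_per_day):
    """Semi-major axis [km] from a TLE mean motion via Kepler's third law
    n^2 a^3 = mu, after converting n from rev/day to rad/s."""
    n = n_rev_per_day * 2.0 * math.pi / 86400.0  # rev/day -> rad/s
    return (MU / n**2) ** (1.0 / 3.0)
```

A typical LEO mean motion of about 15.5 rev/day yields a semi-major axis of roughly 6800 km, consistent with the LEO orbits considered in this work.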