Article

Resident Space Object (RSO) Tracking in Space-Based, Low Resolution, Non-Constant-Attitude Imagery

1 Department of Earth and Space Science and Engineering, York University, Toronto, ON M3J 1P3, Canada
2 Magellan Aerospace, Winnipeg, MB R3H 0S5, Canada
* Author to whom correspondence should be addressed.
Remote Sens. 2026, 18(5), 755; https://doi.org/10.3390/rs18050755
Submission received: 21 January 2026 / Revised: 21 February 2026 / Accepted: 24 February 2026 / Published: 2 March 2026
(This article belongs to the Section Remote Sensing Image Processing)

Highlights

What are the main findings?
  • A rules-based, end-to-end Resident Space Object (RSO) detection and tracking pipeline is demonstrated for low-resolution, short-exposure, non-constant-attitude space imagery, without relying on external attitude information.
  • The proposed method achieves robust detection of faint RSOs using real on-orbit imagery, validated on 878 images containing 2191 labelled RSO instances.
What are the implications of the main findings?
  • Reliable RSO detection can be achieved using wide field-of-view, low-cost optical sensors, expanding Space Situational Awareness capabilities beyond dedicated instruments.
  • The lightweight and interpretable design enables potential onboard deployment and repurposing of degraded, auxiliary, or End-of-Life spacecraft for Space Situational Awareness purposes.

Abstract

Resident Space Objects (RSOs) are a collection of both man-made and natural objects in near-Earth space. Given their large orbital velocities and rapidly increasing quantity, they pose a collision threat to space assets, necessitating better Space Situational Awareness (SSA). SSA begins with detecting these objects, which can be accomplished using space-based optical images, such as images from the Fast Auroral Imager (FAI) on the CASSIOPE satellite. However, these short-exposure images are low in resolution and contain various artifacts and noise, posing challenges to traditional source detection methods. Furthermore, the background stars and RSOs both move due to the satellite’s non-constant attitude, which complicates tracking. Nevertheless, these images are a valuable source of SSA data and can be used to develop algorithms that ultimately augment the capabilities of current SSA systems. Such augmentations include performing RSO detection as a simultaneous function on existing spacecraft or allowing dedicated SSA payloads to detect RSOs during slew maneuvers, where background stars similarly move. This paper proposes a rules-based RSO tracking algorithm tailored for low-resolution, short-exposure, space-based imagery with non-constant spacecraft attitude, addressing the challenge of distinguishing RSOs from background stars that are also in motion. The method consists of a custom thresholding algorithm, the Iterative Closest Point (ICP) algorithm to correct for the motion of the background stars, and a tracking algorithm that detects the RSOs within the imagery and returns their pixel positions. The algorithm was tested on an 878-image dataset, achieving 79% precision and 71% recall, while detecting 87% of all RSOs at least once. These results demonstrate that the algorithm is a feasible method for detecting RSOs in non-constant-attitude imagery, providing a means to augment current SSA systems.

1. Introduction

The near-Earth space environment is experiencing a large and sudden increase in the number of Resident Space Objects (RSOs), which can be attributed to the increase in space-based activities. Satellite launches, anti-satellite missile testing, and even in-orbit collisions contribute to the RSO population. The Iridium 33-Cosmos 2251 collision in 2009 [1] and the Fengyun-1C destruction are two prominent examples of such activities contributing to the generation of space debris [2]. The space community continues to work on mitigating the risks associated with space debris by improving collision avoidance strategies, promoting responsible satellite disposal practices, and increasing international cooperation to address this growing problem [3]. In order to implement these strategies, operators need detailed knowledge of the space environment to be able to task satellites with collision avoidance maneuvers. Knowledge such as object velocity, orbital parameters, and object identity is necessary. The collection of this knowledge is an activity referred to as Space Situational Awareness (SSA). A first step in achieving SSA can be through the use of optical imagery, from which space objects can be detected and then identified [4].
Traditionally, SSA using optical imagery is conducted on the ground, using dedicated networks of high-sensitivity, long-exposure, narrow Field-of-View (FOV) telescopes. An example of such a telescope was NASA’s Michigan Orbital DEbris Survey Telescope (MODEST), which used a 1.3-degree by 1.3-degree FOV to capture images of orbital debris in Geosynchronous Earth Orbit (GEO) using a five-second exposure time [5]. These ground-based telescopes are used to track the debris, following their motion through the sky, meaning that the RSOs appear as point sources in the image. However, complications arise in using a similar system to track RSOs in Low-Earth Orbit (LEO), due to their higher relative velocities [6,7]. Moreover, ground-based SSA systems are constrained by geographical location, sensor performance, observational windows, and weather. They are also expensive to construct, operate, and maintain, and are typically run by commercial or government entities. Consequently, most of the data they produce is not publicly available, either due to security concerns or the cost of acquisition.
In response, several studies have explored the feasibility of repurposing non-dedicated optical sensors for SSA [8]. A prominent candidate is the star tracker, an onboard optical instrument designed for spacecraft attitude determination. Star trackers operate in stare mode, continuously capturing short-exposure images of star fields at rates typically between 1 and 10 Hz, depending on configuration. These images are processed in real time by matching observed stars to onboard catalogs. Importantly, RSOs occasionally pass through a star tracker’s field of view and are imaged during routine operation. With modern constellations such as Starlink already equipped with star trackers [9], there exists a globally distributed, high-volume sensor network that could be leveraged for opportunistic SSA data acquisition.
Despite their potential, star trackers are not optimized for RSO imaging. Compared to dedicated SSA telescopes, they feature smaller apertures, lower-resolution detectors, and shorter exposure times. Consequently, the images they produce are lower in quality, with reduced spatial detail and Signal-to-Noise Ratio (SNR). Dedicated SSA instruments may also operate in tracking mode, allowing them to follow objects of interest and use long exposures, resulting in either high-SNR detections or streak imagery. In contrast, short-exposure imagery (10–100 ms) typical of star trackers yields point-source detections, requiring motion analysis over time. Narrow-FOV systems provide high angular resolution but limited sky coverage, while wide-FOV systems such as FAI cover larger areas per frame, at the cost of lower resolution and greater optical distortion. Nonetheless, star trackers offer several practical advantages: they are low cost, operate at high cadence, cover wide FOVs per frame, and are unaffected by atmospheric disturbances. These characteristics make them strong candidates for scalable, space-based SSA applications.
This concept of opportunistic SSA sensors was evaluated using imagery acquired from the FAI instrument onboard the Cascade SmallSat and Ionospheric Polar Explorer (CASSIOPE) satellite [8]. FAI was originally designed for auroral observations, capturing wide-FOV images at 10 Hz [10]. These images resemble those produced by star trackers. However, unlike FAI, most star trackers do not store their imagery due to onboard processing constraints and limited downlink bandwidth. The availability of FAI’s optical dataset thus provided a rare opportunity to evaluate RSO detection in low-resolution, non-inertial, space-based imagery from a non-dedicated SSA sensor.
This type of data presents several challenges: image noise, Earth limb artifacts, and the non-constant attitude of the host satellite, which causes both stars and RSOs to move within and across frames. These factors complicate traditional detection techniques and motivated the development of a novel, rules-based detection algorithm. The algorithm combines local adaptive thresholding with a pose estimation step (using the Iterative Closest Point, ICP) to align and subtract the background star field, thereby enabling robust identification of RSOs.
Tracking RSOs in this context requires identifying and maintaining object identities across sequential images. A wide-FOV system is preferred in this case to maximize the number of RSOs in the FOV. Many streak detection methods have been used to identify star or RSO streaks in long-exposure astronomical images and can be categorized into several classes [11]. These classes include simple methods such as source detection, computer vision methods such as derivative-based edge detection, Point Spread Function (PSF) template matching algorithms, and even machine learning algorithms [12,13,14,15].
Short-exposure imagery (as in the data considered in this research), captured on the order of 10 to 100 ms, requires the use of more sophisticated detection methods. In this type of low-resolution imagery, stars and RSOs will both appear as point sources of light, making the task of differentiating RSOs from stars more complicated. Instead, information from multiple sequential images across time can be used. Furthermore, tracking must be performed to uniquely detect RSOs. On the ground, stars will remain roughly stationary over short periods of time, while RSOs will move through the FOV. Simple source detection can be used in this case, but a different shape requirement must be used to detect RSOs. However, in space-based imagery, the stationary-star assumption may not be feasible, especially in cases with non-inertial pointing spacecraft. In this imagery, the background stars move in addition to the RSOs, making the RSO detection and tracking problem even more challenging.
Nevertheless, multiple methods and pipelines have been developed to process short-exposure astronomical imagery. Frame differencing has been used to detect RSOs in imagery where stars do not move significantly in short time frames [16]. Consequently, this algorithm does not perform as well when the imager is moving relative to the stars, which causes stars to move in the images. Another commonly used method is image stacking, a technique which involves combinations of subsequent astronomical images. This method has been successfully used both to increase the signal-to-noise ratio of detections and to detect geostationary satellites [17]. It works well for geostationary satellites because stars will tend to streak after stacking, while the satellites remain in the same location. However, this method may not be sufficient for detecting satellites that appear to be moving in images, such as ground-based observations of LEO satellites or satellites observed from a moving space-based platform. Detecting RSOs in low-resolution imagery poses further challenges. Most object detection algorithms are designed for high-resolution images where features such as lines, corners, and facets are visible; these methods are not applicable in low-resolution images where a star or an RSO is represented by a very small number of lit pixels. Furthermore, these low-resolution images contain more noise, making real objects harder to distinguish from the noise.
While several studies have explored RSO detection in optical imagery, very few address the specific challenges posed by FAI imagery. Notably, reference [18] applied a Faster R-CNN to similar imagery, achieving a recall of 32%, while in [19], a Convolutional Neural Network (CNN) is first used to classify images as containing detections (of both stars and RSOs), and a tracking process is then used to classify objects as stars, RSOs, or noise. A graph-based tracking method is used to match objects between frames, which uses an objective function consisting of the minimum of a set of four cost functions. The tracks determined by this algorithm are then classified into stars, RSOs, and noise.
However, both studies relied on datasets with limitations; [18] used only 332 images, and [19] evaluated performance using a mix of real and synthetic images with an SNR cutoff of six, excluding dimmer objects. By contrast, the dataset in this study includes 878 real images and over 2000 annotated RSOs without an SNR threshold, making it more representative of real detection conditions. Furthermore, this study introduces brightness-based metrics and evaluates end-to-end detection performance, offering a more comprehensive benchmark for future algorithm development.
In this paper, we outline a method for RSO detection and tracking in low-resolution, non-constant-attitude imagery. In Section 2, Materials and Methods, the full SSA pipeline is discussed, the dataset used for this research is outlined, and the algorithm itself is described. In Section 3, the metrics used to evaluate the algorithm’s performance are introduced, followed by the results. In Section 4, the results are discussed, and advantages and limitations of the algorithm are described. In Section 5, improvements that are planned and currently being performed are discussed.
The main contributions of this paper are as follows:
(1)
Systematic review and selection of the imagery from the Fast Auroral Imager (FAI) instrument onboard the Cascade SmallSat and Ionospheric Polar Explorer (CASSIOPE) satellite, using criteria such as scene illumination, RSO motion, and brightness variation to identify sequences suitable for RSO detection.
(2)
A novel rules-based RSO detection algorithm specifically designed for low-resolution, short-exposure, non-inertial space-based imagery, integrating local adaptive thresholding, Iterative Closest Point (ICP) for background star motion correction without attitude information, and a three-frame motion-consistency tracker for robust RSO identification and tracking.
(3)
A detection pipeline that operates independently of external attitude solutions, enabling RSO detection from star trackers or slew-phase imagery, expanding SSA capability to degraded or auxiliary space sensors.
(4)
A robust algorithm validation process, making use of real data from on-orbit imagery, featuring 878 images containing 2191 labeled RSO instances corresponding to 75 unique RSOs from 12 different observation periods with varying challenging conditions.

2. Materials and Methods

2.1. Space Situational Awareness Pipeline

While the focus of this research is to present and evaluate an RSO tracking algorithm, it is important to discuss the complete processing pipeline for detecting RSOs from unresolved optical imagery for SSA. Firstly, as outlined in this research, RSOs need to be detected within a sequence of images, producing detections in pixel coordinates relative to the imager’s frame of reference. Then, coordinate transformations need to be applied to transform the RSO detections in the images to an inertial frame of reference, such as the celestial coordinate system, and the accuracy of this transformation must be considered. After doing this, the detected RSOs need to be identified. This can be done by performing Initial Orbit Determination (IOD) to get an estimate of orbital parameters, from which a matching algorithm can be used to correlate the detection to the positions of satellites as reported by their corresponding ephemeris data or Two-Line Element (TLE) data. It is important to note that this pipeline does not consider photometric information from RSO detections, such as their brightness. However, such information can be used to enhance RSO identification, or to gain further knowledge about the RSO, such as its attitude.

2.1.1. RSO Detection and Tracking

The SSA pipeline begins by detecting RSOs within a sequence of images. As highlighted in Section 1, various algorithms are used for this purpose, depending on characteristics such as the imager’s attitude, the exposure time, the resolution, the SNR, and artifacts in the images. The ideal RSO detection algorithm will be able to track RSOs across images, maintaining unique identities for each RSO over time. The algorithm’s output will produce pixel coordinates of RSO detections. The algorithm proposed in this research aims to fulfill this part of the pipeline.

2.1.2. Celestial Coordinate System Transformation

Each RSO detection made by the algorithm needs to be converted from image coordinates to coordinates in an inertial frame of reference, such as the celestial coordinate system. This coordinate system consists of the Right Ascension (RA) and Declination (Dec) of a celestial object, which is based on an inertial frame centered at the center of the Earth. To do this, an image first needs to be plate solved, a process that identifies the stars in an image using a known star catalogue and returns the astrometric calibration of that image, consisting of information such as the RA/Dec of the center of the image, the orientation of the image with respect to the celestial coordinate system, and the pixel scale [20]. Since the FAI instrument’s image plane rotates and translates over time (due to the host spacecraft’s non-constant attitude), the orientation of the images will change over time as well. However, as will be mentioned below, this movement can be negated using the RSO detection algorithm in this research. This reduces the frequency at which the images need to be plate solved.
With the center pixel RA/Dec, orientation of the imager, and pixel scale known, each RSO detection can be converted to RA/Dec coordinates by first orienting the image such that the positive y-axis aligns with North (the direction of increasing Dec). Then, the location of each RSO in the image can be calculated as pixel offsets in the x-axis and y-axis independently, from the center pixel location. These pixel offsets can then be converted to arcsecond offsets by using the pixel scale. Finally, these offsets can then be added to the center pixel’s RA/Dec, following the convention of increasing Dec in the positive y-direction, and increasing RA in the positive x-direction, to get each RSO’s RA/Dec. These transformations are often captured in a World Coordinate System (WCS) file returned by tools such as [20] but need to be done manually as outlined above if using a custom attitude determination algorithm. Figure 1 visually depicts this transformation.
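For illustration, the following minimal Python sketch applies the offset scheme described above. The helper name and roll-angle convention are assumptions for this example, and the cos(Dec) correction to the RA offset that is often applied in practice is omitted to mirror the simple per-axis offsets described here.

```python
import numpy as np

def pixel_to_radec(x, y, center_ra, center_dec, roll_deg, pixel_scale,
                   image_size=256):
    """Convert a detection's pixel position to RA/Dec in degrees.

    center_ra/center_dec are the plate-solved center coordinates (deg),
    roll_deg is the image roll relative to celestial North (deg), and
    pixel_scale is in arcseconds per pixel.
    """
    cx = cy = (image_size - 1) / 2.0              # center pixel

    # Rotate pixel offsets so the positive y-axis points North.
    dx, dy = x - cx, y - cy
    roll = np.deg2rad(roll_deg)
    dx_n = dx * np.cos(roll) - dy * np.sin(roll)
    dy_n = dx * np.sin(roll) + dy * np.cos(roll)

    # Convert pixel offsets to degree offsets via the pixel scale.
    d_ra = dx_n * pixel_scale / 3600.0            # +x -> increasing RA
    d_dec = dy_n * pixel_scale / 3600.0           # +y -> increasing Dec

    # Note: a cos(Dec) factor on the RA offset, common in practice, is
    # omitted to mirror the per-axis offset scheme described in the text.
    return center_ra + d_ra, center_dec + d_dec
```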

2.1.3. Initial Orbit Determination

To accurately identify the RSOs in the images, further information is needed about the RSO, such as its orbital parameters. These parameters can be extracted using an angles-only IOD algorithm such as the Gauss or Laplace method [21], which seeks to use multiple observations of the same RSO (in RA/Dec) to find its orbital parameters. Angles-only IOD algorithms are useful, in that information such as the range to the detected RSO is not required. However, the host spacecraft’s positional information needs to be known as well.

2.1.4. RSO Identification

With the RSO’s orbital parameters known, the RSO can be identified. This can be done by correlating the detected RSOs to known RSO data. For this, the time at the instant of imaging needs to be known, along with the corresponding RSO detections to be matched. Existing RSOs need to then be propagated to this time by using existing information about them. Such RSO information can be in the form of TLE data, ephemeris data, or some other system, coming from databases such as Space Track and CelesTrak, or from internal databases generated from high-confidence observations. A matching algorithm can consider the orbital parameters acquired by the steps above, alongside metrics such as the closest RSO, and use them to find the best matching RSO. RSOs detected in the FOV with poor matches may be uncatalogued RSOs, for which repeated measurements should be conducted and orbital estimations refined to confirm this hypothesis. Figure 2 graphically depicts this SSA pipeline.

2.2. Dataset Used in the Study

The dataset used for this research consists of low-resolution, short-exposure imagery captured by the Fast Auroral Imager (FAI), an instrument onboard the CASSIOPE satellite [10]. Though the instrument was developed and is routinely tasked to image the aurora, RSOs appear in its FOV, allowing the use of FAI imagery for SSA purposes. Furthermore, this instrument has comparable optical properties to a conventional star tracker [8], as described below and in Table 1.
The images have a resolution of 256 by 256 pixels and an exposure time of 100 milliseconds. Given the low resolution, short exposure time, and low angular rate of the host spacecraft, both stars and RSOs appear as point sources of light with limited pixels per source. As of 17 December 2021, the CASSIOPE spacecraft is spin-stabilized using torque rods, given the failure of its momentum wheels [22]. This caused a variety of challenges that needed to be accounted for during image processing. For example, the stars and RSOs both moved through image sequences, which meant that traditional algorithms such as frame differencing could not be used to identify RSOs, since these algorithms rely on a static background. Additionally, spin stabilization caused RSOs in the FOV to follow a curved trajectory, meaning that a simple linear motion model could not be applied directly to identify RSOs. Furthermore, the changing attitude of the imager caused the illumination conditions of each image to vary spatially, in addition to the temporal illumination change. This variable background illumination, both within a single image and across multiple images, prevented the use of simple thresholding techniques. Using a single threshold value would result in both over- and under-thresholded areas within a given image, while inter-image variability would cause similar over- and under-thresholding throughout image sequences. An algorithm that can consistently detect and track RSOs within these images needs to account for all these challenges. Figure 3 shows a variety of images captured by the FAI instrument and demonstrates some of the challenges in processing these images. More details on the FAI images can be found in [10].
Unlike many SSA datasets that are synthetic, SNR-filtered, or captured under near-constant pointing, this dataset preserves operational variability: spatially non-uniform illumination, intermittent detectability, and non-constant attitudes that induce star-field motion [19,23]. These properties directly stress short-arc detection and association typical of angles-only/tracklet SSA workflows [24,25].

2.3. Tracking Algorithm Overview

An algorithm was developed to detect and track RSOs within FAI imagery. Multiple analytical techniques were combined to simplify the algorithm design, making further extensions to this research, such as a dedicated space-based mission, feasible and computationally inexpensive. Several assumptions were also made to simplify the algorithm’s design. Firstly, it was assumed that the dominant point sources of light within the images were stars. Secondly, it was assumed that the background stars appeared to move rigidly within the imager’s FOV throughout subsequent images. These two assumptions facilitated the use of the Iterative Closest Point (ICP) method to find and correct the effect of the host spacecraft’s attitude on the images, which is discussed in greater detail in Section 2.3.2. Thirdly, it was assumed that RSOs travel approximately linearly through images, after accounting for the spacecraft’s attitude with respect to the background stars. Finally, it was assumed that RSOs travel the same distance between each image. These two assumptions were used to design a method to distinguish RSOs from the remaining objects after processing the images. The algorithm was developed in Python 3.11.14, using OpenCV 4.13.0, scikit-learn 1.8.0, and scikit-image 0.26.0 among other common Python libraries.
The algorithm takes in three images at a time, in a sliding window fashion. In other words, the algorithm maintains three images in memory, replacing the last image with a new image, rolling the window forward. The algorithm also takes in certain parameters as inputs, such as the threshold value to use. As outputs, the algorithm produces text files containing the locations, pixel sizes, and other telemetry regarding the detected RSOs. Optionally, the algorithm can also generate processed output images, consisting of the RSOs detected within the current frame as well as the predicted locations of previously detected RSOs which have disappeared from the current frame. The algorithm consists of five main steps as outlined in Figure 4, namely (a) preprocessing, (b) ICP star removal, (c) three-frame RSO association and tracking, (d) position estimation, and (e) RSO status and archival. The overall structure of the algorithm is summarized in the pseudocode presented in Table 2.
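As an illustrative outline of this flow (not the authors’ Table 2 pseudocode), the sketch below shows how the five steps might be orchestrated around a three-image sliding window; the function and key names are hypothetical placeholders for the modules described in Sections 2.3.1 through 2.3.5.

```python
from collections import deque

def track_rsos(image_paths, steps, params):
    """Illustrative driver for the five-step pipeline (a)-(e).

    steps is a mapping of callables standing in for the modules described
    in Sections 2.3.1-2.3.5: 'preprocess', 'remove_stars', 'associate',
    'estimate', and 'archive'. All names here are hypothetical.
    """
    window = deque(maxlen=3)              # sliding window of three frames
    tracks = {}                           # RSO ID -> list of detections

    for path in image_paths:
        # (a) per-image preprocessing: threshold, CCA, filtration
        window.append(steps["preprocess"](path, params))
        if len(window) < 3:
            continue                      # need three frames to begin

        # (b) align frames 1 and 3 to frame 2; drop matched stars
        residuals = steps["remove_stars"](list(window), params)

        # (c) three-frame motion-consistency association
        detections = steps["associate"](residuals, tracks, params)

        # (d) extrapolate RSOs that dropped out of the current frame
        steps["estimate"](tracks, detections, params)

        # (e) report current RSOs and archive dropped tracks
        steps["archive"](tracks, detections, params)

    return tracks
```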
The proposed pipeline is lightweight, but its contribution lies in how the modules are coupled for non-constant-attitude imagery. Star-field registration establishes a consistent reference across frames, enabling short-arc motion gating; windowed thresholding mitigates spatially varying illumination; motion-consistency filtering suppresses transient false positives; and ICP is applied as a refinement step that improves alignment given an initial transformation rather than acting as a detector [26]. This modular design targets failure modes typical of star-tracker-like imagery: attitude changes, illumination artifacts, and intermittent low-SNR targets. A quantitative verification of these sensitivity factors is provided in Section 3.4.

2.3.1. Preprocessing

The preprocessing step consists of several processing techniques to generate a list of data such as centroid coordinates corresponding to objects (stars, RSOs, and noise) from each raw image.
Image Reading and Cropping
Firstly, three images are read into memory. Given that the algorithm uses a sliding window of three images at a time, the preprocessing must be performed on all three images for just the first iteration. Later iterations need only read in and perform preprocessing on the third (new) image. The images contain additional information in the first 72 and last 12 rows of each image, which are cropped from the images before processing begins.
Circular Region of Interest (ROI) Extraction
Next, the remaining pixels outside of the circular FOV of the imager are set to zero to prevent false detections due to noise, and to prevent these areas from interfering with further processing. This is done by keeping all the pixels in the image within a radius of 120 pixels from the center of the image. This radius value is left as a parameter that can be adjusted. This circular mask approximates the usable field of view of the optical system and helps eliminate peripheral lens artifacts and vignetting effects. The ROI also reduces computational load by limiting downstream analysis, such as thresholding and centroid extraction, to a smaller, cleaner subset of the image.
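A minimal sketch of the cropping and circular masking steps is given below; the row counts and radius come from the text, while the helper name and implementation details are assumptions.

```python
import numpy as np

def crop_and_mask(image, top=72, bottom=12, radius=120):
    """Drop the telemetry rows and zero pixels outside the circular FOV
    (row counts and radius taken from the text; details are assumptions)."""
    img = image[top:image.shape[0] - bottom, :]

    # Circular mask about the image center.
    cy, cx = (img.shape[0] - 1) / 2.0, (img.shape[1] - 1) / 2.0
    yy, xx = np.ogrid[:img.shape[0], :img.shape[1]]
    keep = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    return np.where(keep, img, 0)
```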
Windowed Multi-Otsu Thresholding
After this, thresholding is performed to binarize the images, leaving the background as the minimum value and the stars and RSOs as the maximum value. While well-established methods such as SExtractor exist for extracting sources from astronomical images, a custom algorithm was developed for simplicity and specifically for thresholding images for SSA, where steps such as background map construction, faint galaxy deblending, and star-galaxy separation are unnecessary [27]. Given the varying spatial illumination in the image, a local thresholding method is used to threshold each image in sections or windows. For the images used in this research, a user-defined window size of 32 by 32 pixels was used to threshold each image, window by window, which produced acceptable thresholding results. In cases where there is significant spatial inhomogeneity, it is recommended to decrease the window size further, and vice versa.
To account for the varying temporal illumination, histogram-based thresholding was used. Specifically, Otsu’s method was adapted, which seeks to find a threshold value that separates pixel values in an image into background and foreground classes [28]. The idea behind this algorithm is that most pixels in an image will belong to one of those two classes, forming two peaks in the pixel histogram of the image. In the case of these images, those two classes would be the background and the star and RSO signals. A threshold that best separates these two peaks can then be determined for each image, dynamically producing an optimal threshold. However, analysis of the histograms of FAI images revealed that there existed a third class of pixels within the images, with average values greater than the background but smaller than the star and RSO signals. This class appeared to belong to the illumination effects, such as lens flare. For this reason, multi-Otsu thresholding was used instead, which returned a threshold that separated the first two classes from the third class, allowing the stars and RSOs to pass the threshold without any background effects. However, an experimentally determined constant needed to be added on top of the returned threshold value, likely due to overlap between the pixel values of illumination effects and star and RSO signals. A user-defined constant of 35 was used, found by manually adding to the returned threshold on a subset of images until a desirable threshold was produced. This constant is recommended to be increased or decreased depending on the results of the threshold returned by the multi-Otsu thresholding algorithm. This threshold is calculated and applied to individual segments of each image, as mentioned previously. An example result of this windowed multi-Otsu thresholding is shown in Figure 5, for one image.
In cases where multi-Otsu thresholding fails, typically due to insufficient histogram separation or low dynamic range in a window, a fallback single-threshold version of Otsu’s method is applied. This ensures robust binarization even in poorly illuminated or low-contrast regions. The backup method computes the inter-class variance from the histogram and determines a single threshold to separate background and foreground pixels. The same constant offset of 35 is applied to this fallback threshold to maintain consistency across all thresholding results.
This two-tiered strategy (multi-Otsu thresholding with a fallback to simple Otsu thresholding) allows the thresholding mechanism to adapt both locally and dynamically across a wide range of illumination conditions. It ensures reliable detection performance under both faint and bright imaging sequences, without introducing complex background modeling or training-based techniques. As an alternative, clustering-based segmentation (e.g., k-means on pixel intensities) can be used for threshold selection. Otsu and k-means are closely related because both minimize within-class variance, but Otsu searches globally for thresholds while k-means is iterative and can be sensitive to initialization and convergence settings [29,30]. Comparative studies report that k-means can outperform Otsu in some domains [31]. In this work we select windowed multi-Otsu thresholding because it is deterministic and parameter-light when executed repeatedly across many small windows under strong illumination variability [28,32].
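The following sketch illustrates the two-tiered windowed thresholding using scikit-image, with the 32-pixel window and offset of 35 quoted above; the exact fallback conditions and tile handling are assumptions, and flat tiles are simply left as background.

```python
import numpy as np
from skimage.filters import threshold_multiotsu, threshold_otsu

def windowed_threshold(image, win=32, offset=35):
    """Binarize an image window by window: multi-Otsu (3 classes) with a
    plain-Otsu fallback, plus the constant offset described in the text."""
    binary = np.zeros(image.shape, dtype=bool)
    for r in range(0, image.shape[0], win):
        for c in range(0, image.shape[1], win):
            tile = image[r:r + win, c:c + win]
            if tile.min() == tile.max():
                continue                  # flat tile: leave as background
            try:
                # Upper threshold separates sources (stars/RSOs) from the
                # background and illumination-effect classes.
                t = threshold_multiotsu(tile, classes=3)[-1]
            except ValueError:
                # Too few distinct gray levels: fall back to simple Otsu.
                t = threshold_otsu(tile)
            binary[r:r + win, c:c + win] = tile > t + offset
    return binary
```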
Object Detection
Next, Connected Component Analysis (CCA) is performed on each binary image to individually segment and label groups of connected pixels. This offers a way to separate point sources within the image into distinct objects [33]. For each of these objects, in each image, properties are extracted such as the total pixel area, height, width, and centroid coordinates of the bounding box around the object. The centroid coordinates of the objects are especially useful in further processing steps.
Object Filtration
Finally, the preprocessing step concludes by filtering the objects found in the previous step by imposing maximum and minimum limits on each object’s properties, which were experimentally determined. Firstly, a minimum pixel area is imposed to remove small objects from the images, which are likely a result of noise such as hot pixels and faint stars. In this research, objects corresponding to pixel areas smaller than two pixels are discarded. Next, a maximum pixel area is imposed to remove large objects from the images, which are likely due to surviving illumination effects such as lens flare. Objects greater than 81 pixels in area are discarded in this research. These values were determined empirically to balance rejection of artifacts while retaining RSOs. After this, maximum width and height thresholds were imposed to remove streak-like artifacts from the images, usually caused by light streaking from bright stars and RSOs, as well as radiation strikes. A value of 15 pixels is used as the maximum threshold for an object’s height and width. After this filtration step, the data is ready to be processed for star removal and RSO detection. Filtering is performed using OpenCV’s connectedComponentsWithStats function, which returns statistics such as area, bounding box width and height, and centroid position for each labeled region. These properties are used to apply the filtering thresholds described above.
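A condensed sketch of the CCA and filtration steps using OpenCV follows; the area and dimension limits are those quoted above, while the helper name and output format are assumptions.

```python
import cv2
import numpy as np

def extract_objects(binary, min_area=2, max_area=81, max_dim=15):
    """Label connected components and keep objects within the size limits
    quoted in the text (areas in pixels; max_dim bounds width and height)."""
    n, _, stats, centroids = cv2.connectedComponentsWithStats(
        binary.astype(np.uint8), connectivity=8)

    objects = []
    for i in range(1, n):                        # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        w, h = stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT]
        if area < min_area or area > max_area:   # noise / lens-flare blobs
            continue
        if w > max_dim or h > max_dim:           # streak-like artifacts
            continue
        objects.append({"centroid": tuple(centroids[i]),
                        "area": int(area), "width": int(w), "height": int(h)})
    return objects
```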

2.3.2. ICP Star Removal

The next step involves removing the points corresponding to stars from the lists corresponding to the three images by using the ICP algorithm, which seeks to find a rigid transformation that fits a set of source points to a set of reference points, as well as the point correspondences [34,35]. This is done by iteratively matching the point sets and minimizing an error metric, in this case the distances between the point matches, until a threshold is reached. Convergence is reached when the rotation change falls below 0.0001 radians and the translation change is less than 0.001 pixels. The method proves useful in that it does not require an initial estimation of the transformation or the point correspondences. Though ICP is commonly used in lidar applications to reconstruct 3D surfaces from different scans, a 2D implementation was used for this RSO detection algorithm [36]. As mentioned previously, an assumption here is that there are significantly more stars than RSOs in the images. Since the ICP algorithm returns the point correspondences between two point clouds, and the stars are often the dominant source of points in the images, the ICP algorithm will tend to match the star transformation between frames, ignoring the transformation of the points corresponding to the few RSOs between the images. The other relevant assumption here is that the stars between images appear to transform rigidly. This is because stars appear fixed due to their large distances away from near-Earth observers, and only appear to move between images due to the host spacecraft’s own attitude. This means that the ICP method can be used to determine the transformation of the background stars in the images, since it requires that the point clouds transform rigidly between images, which is correct in the case of the stars. Thus, the returned list of matching points tends to be the star matches between the images. A point is considered a match if its Euclidean distance from a candidate is less than 3 pixels, and at least five such matches must exist for the transformation to be accepted. With three point sets (corresponding to the three images), the idea is to keep the second image’s point set as the reference point set, and then match the point sets from the other two images to the second image point set. For this, ICP is applied twice, once to match the points from the first image to the second image, and then the third image to the second image. The two operations return the points that were matched (assumed to be stars) and all points from the first image and third image transformed by the identified rigid transformation.
After having identified the stars in the images, they can be removed from all the images. The matched points from the first image and second image are searched for within the original points from the first image and second image and removed. The same operation is done between the third image and the second image. Matched points from both pairings are labeled as background stars and removed from further processing. Figure 6 shows an example of the points that are left over from all the images after this, stacked onto a single image. A star identification method for star removal would be less restricted and would return the Right Ascension (RA) and Declination (Dec) information necessary for orbit determination. However, the algorithm is intended to be independent of attitude determination (which performs star identification), and using ICP for star removal has the additional benefit of removing the spacecraft’s rotation from the RSOs, making them easier to detect with the approximate linear motion model. Ideally, after this step, all that remains are RSOs, since RSOs have motion different from the stars within the images. However, leftover stars that were not matched by the ICP algorithm remain, along with noise. Furthermore, the points corresponding to RSOs have not been matched between the three images, since the ICP algorithm only returns the star matches. Lastly, the RSOs have not been checked for matches in previous images. The next step seeks to address these concerns.
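A self-contained 2D ICP sketch in the spirit of this step is shown below. It is not the authors’ implementation, but it uses the 3-pixel match gate, the five-match minimum, and the convergence thresholds quoted above; the SVD-based rigid-transform estimation and the nearest-neighbor search via scikit-learn are implementation choices for this example.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def icp_2d(src, ref, max_iter=50, match_dist=3.0,
           tol_rot=1e-4, tol_trans=1e-3):
    """Minimal 2D point-set ICP sketch.

    Iteratively matches src points (N x 2) to their nearest ref points
    (M x 2), estimates a rigid rotation and translation by the SVD-based
    Procrustes solution, and stops once the incremental rotation and
    translation fall below the quoted convergence thresholds. Returns the
    transformed src points, the indices of matched src points, and the
    corresponding ref indices.
    """
    nn = NearestNeighbors(n_neighbors=1).fit(ref)
    pts = src.copy()

    for _ in range(max_iter):
        dist, idx = nn.kneighbors(pts)
        keep = dist.ravel() < match_dist          # gate matches at 3 px
        if keep.sum() < 5:                        # require >= 5 matches
            break
        a, b = pts[keep], ref[idx.ravel()[keep]]

        # Rigid transform (Procrustes): rotate/translate a onto b.
        ca, cb = a.mean(axis=0), b.mean(axis=0)
        U, _, Vt = np.linalg.svd((a - ca).T @ (b - cb))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                  # guard against reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = cb - R @ ca
        pts = pts @ R.T + t

        # Convergence: rotation change < 1e-4 rad, translation < 1e-3 px.
        if abs(np.arctan2(R[1, 0], R[0, 0])) < tol_rot and \
           np.linalg.norm(t) < tol_trans:
            break

    dist, idx = nn.kneighbors(pts)
    matched = np.flatnonzero(dist.ravel() < match_dist)
    return pts, matched, idx.ravel()
```

The reflection guard is required because an unconstrained SVD solution can return an improper rotation when the matched sets are nearly degenerate.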

2.3.3. Three-Frame RSO Association and Tracking

The next step uses an algorithm to determine which of the remaining points correspond to RSOs. This is done by using the assumption that RSOs will travel the same distance between frames, and that their motion will be roughly linear and in one direction. A nested loop is used to iterate through unique combinations of points, where each point must correspond to a different image, and not already be matched to an RSO. The first step in the loop is to calculate the distance, d1, between the first point (first image) and the second point (second image), as well as the distance d2 between the second point and the third point (third image). Next, after ensuring that neither distance is close to zero (to avoid divide-by-zero errors), the similarity of the distances, dsimilarity, is calculated using Equation (1). If this similarity is above an experimentally determined threshold (0.5 is used), the next check is performed, which determines how linear the motion is between the points, and if they are in the same direction. This is done by determining the vectors d1 and d2, and calculating the angle between them, using Equation (2).
Each candidate point is compared across three consecutive frames. If distance similarity exceeds 0.5 and the angular deviation is below a linear threshold (θ < 55 × dsimilarity − 27.5), the object is declared an RSO.
$$ d_{\mathrm{similarity}} = 1 - \frac{\lvert d_1 - d_2 \rvert}{\max(d_1, d_2)} \tag{1} $$
$$ \theta = \arccos\!\left( \frac{\mathbf{d}_1 \cdot \mathbf{d}_2}{\lVert \mathbf{d}_1 \rVert \, \lVert \mathbf{d}_2 \rVert} \right) \tag{2} $$
To ensure that the motion of an RSO is roughly linear, the calculated angle must be limited to a maximum value. It was found that some RSOs with large distance similarities may still have large angles between their corresponding vectors. Similarly, some RSOs may have small distance similarities but also have small angles. An appropriate maximum angle, θmax, was determined for all distance similarities by plotting the distance similarities and their corresponding angles for both the remaining points and RSOs, as illustrated in Figure 7. The line that separated the RSOs from the remaining points was then identified, as shown in Equation (3) below, and used to calculate θmax for a given dsimilarity to classify a set of three points as an RSO or not.
$$ \theta_{\max} = 55 \, d_{\mathrm{similarity}} - 27.5 \tag{3} $$
Once these conditions have been met, the set of three points is associated with a single RSO detection and is removed from the list of points to loop through. Detections are matched and assigned a unique ID. In ambiguous cases, track confidence is assigned based on motion history and the number of supporting detections. The next step involves associating the detected RSO to previous detections. Since a sliding window is used, the current iteration’s points in the second image will correspond to the previous iteration’s points in the third image. The same can be said about the first image and second image, respectively. Using this information, the current RSO detection’s points can be compared to previous RSO detections’ points to see if there is a match. If there is a match, the RSO detection is associated with it. Otherwise, a new label is assigned to the detection. Figure 8 illustrates the results of the three-frame RSO association algorithm, showing how the RSOs are picked out from the remaining points. Distance and direction are standard motion descriptors; the novelty here is their role as a parameter-light, interpretable motion-consistency classifier, coupled with star-field registration and windowed detection, to enable reliable short-arc association in non-constant-attitude imagery. This mirrors tracklet-style gating logic used in angles-only SSA pipelines [24,25].
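The full gate can be expressed compactly; the sketch below implements Equations (1) through (3) for a candidate triplet, with the 0.5 similarity threshold quoted above. The function name is illustrative, and the max-normalization in Equation (1) follows the reconstruction given here.

```python
import numpy as np

def is_rso_triplet(p1, p2, p3, sim_min=0.5):
    """Three-frame motion-consistency gate from Equations (1)-(3);
    p1, p2, p3 are (x, y) points from frames 1, 2, and 3."""
    v1, v2 = np.subtract(p2, p1), np.subtract(p3, p2)
    d1, d2 = np.linalg.norm(v1), np.linalg.norm(v2)
    if d1 < 1e-6 or d2 < 1e-6:                 # avoid division by zero
        return False

    # Equation (1): similarity of the two inter-frame distances.
    sim = 1.0 - abs(d1 - d2) / max(d1, d2)
    if sim <= sim_min:
        return False

    # Equation (2): angle between the two displacement vectors (degrees).
    cosang = np.clip(v1 @ v2 / (d1 * d2), -1.0, 1.0)
    theta = np.degrees(np.arccos(cosang))

    # Equation (3): similarity-dependent maximum angle.
    return theta < 55.0 * sim - 27.5
```

At dsimilarity = 1, for example, the gate admits direction changes of up to 27.5 degrees, while at the 0.5 similarity floor no angular deviation is allowed, matching the behavior described for slow-moving RSOs.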

2.3.4. Position Estimation

The second-to-last step (d) is used to account for dropped RSO detections from previous frames by estimating their positions using previous data. This step starts by looping through all the RSOs from the last iteration that were not detected in the current iteration. If the position of that RSO was not already estimated in the previous iteration (to prevent multiple position estimations of false positive detections), its position is estimated in the current iteration. This is done by calculating the x and y distance travelled by the RSO between the two previous frames and adding it to the x and y coordinates of that RSO in the previous frame. Additional checks are added as well, such as ensuring that the estimated position lies within the FOV of the imager. A summary of all key algorithm parameters, their values, and descriptions is provided in Table 3.
For RSOs near the frame edge, a linear extrapolation is performed if their previous two detections are labeled as ‘measured’. A new point is projected using the observed motion vector. The result is used to maintain continuity in tracking and is labeled as an ‘estimated’ detection.
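A minimal sketch of this constant-velocity extrapolation follows; the helper name, track representation, and FOV-center values are assumptions for illustration.

```python
def extrapolate_position(track, fov_radius=120, center=(128, 128)):
    """Project a dropped RSO one frame ahead from its last two detections;
    returns None if the estimate would fall outside the circular FOV."""
    (x1, y1), (x2, y2) = track[-2], track[-1]      # two most recent points
    x3, y3 = 2 * x2 - x1, 2 * y2 - y1              # constant-velocity step

    # Keep the estimate only if it stays inside the circular FOV.
    if (x3 - center[0]) ** 2 + (y3 - center[1]) ** 2 > fov_radius ** 2:
        return None
    return (x3, y3)
```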

2.3.5. RSO Status and Archival

The last step (e) involves the generation of text files to report on the status of RSOs, and to archive dropped detections. RSOs that are detected in the current iteration are reported, including information such as their x and y pixel location, pixel size, and a Boolean to indicate whether the RSO had been estimated in the current iteration. RSOs that were in the previous iteration, but are no longer detected in the current iteration, have their information archived in a separate text file.
The system also generates bounding boxes around RSOs in the final detection stage, which are used for visualization. These bounding boxes are labeled with unique identification numbers, and the detected RSOs include both measured and estimated types, which are color-coded distinctly to allow easy differentiation in the output imagery.
Optionally, figures are generated to provide a visual understanding of the RSO detections. Figure 9 is an example of RSOs detected in three frames, represented by bounding boxes with unique identification numbers.

3. Results

3.1. RSO Detection

For the current study, we used a total of 878 images from the FAI instrument, collected during 2023. RSOs in each image were annotated manually; in total, 2191 RSO detections, corresponding to 75 unique RSOs, were identified during the annotation. As noted, CASSIOPE is currently operated in spin-stabilized mode, adding complexity not only to the RSO detection algorithm development, but to the annotation process as well. Slow-moving RSOs could be confused with stars, given that the stars were also moving in the images. Furthermore, RSOs were harder to visually detect due to the varying illumination in the images, making it easy to lose track of an RSO or difficult to identify it in the first place.
To quantify the performance of the developed algorithm, the precision and recall are used, calculated using Equation (4) and Equation (5), respectively. These metrics use the True Positives (TPs), False Positives (FPs), and False Negatives (FNs).
$$ \mathrm{Precision} = \frac{TP}{TP + FP} \tag{4} $$
$$ \mathrm{Recall} = \frac{TP}{TP + FN} \tag{5} $$
There are several caveats with respect to the results which must be considered. Firstly, RSO detections were determined visually from the images, and therefore the annotations may be inaccurate, especially in challenging images. Furthermore, RSOs at the edge of the circular FOV are ignored, given that the algorithm reduces the FOV to avoid lighting issues at the perimeter of the FOV. Additionally, image sequences with drastic lighting changes were ignored. Finally, the True Negatives (TN) were ignored, given the difficulty and ambiguity in calculating this quantity.
Given that the algorithm uses three sequential frames to detect RSOs, the number of RSOs reported at each iteration may be different than the number of RSOs reported in total, after viewing an entire sequence. For example, if an RSO were to appear halfway through a sequence, it would take three iterations (corresponding to three images) before that object was classified as an RSO. However, the classification process detects the RSO in the previous two iterations as well. Therefore, to capture this caveat in the results, the “per-frame (PF)” results are quantified to show the performance at every iteration, and the “full-sequence (FS)” results are quantified to show the performance at the end of an entire sequence of images. Table 4 below shows these results.
Another metric to consider when evaluating the performance of the algorithm is to measure how many RSOs were detected at least once during the RSO’s crossing of the imager’s FOV. To capture these results, the total number of unique RSOs, along with the number of detections and missed detections is presented in Table 5.

3.2. Detection Accuracy

To quantify the positional accuracy of the detections, the centroids of objects detected by the algorithm are compared to truth data. While independent truth data for the RSO detections is unavailable in this study, the background stars can be used instead, given that the algorithm also detects stars, as captured by the ICP algorithm in the star removal step. To serve as truth data, images can be plate solved using a tool such as the one outlined in [20]. Tools such as these may contain their own centroiding errors, but for this study these are assumed to be negligible. Further studies conducted for the rest of the SSA pipeline will evaluate the impact of centroiding errors on RSO identification. Since these stars are centroided in the same way as the RSOs, and since both objects are similar in appearance (point sources of light), the stars’ centroids can be used to determine the algorithm’s centroiding accuracy. While the focus of this research is on RSO detection and metrics associated with RSO detection, centroid accuracy is an important consideration for the rest of the SSA pipeline and for identifying needed improvements.
Using [20], two images from 11 of the 12 sequences were plate solved to identify the stars in the images and their true positions. Sequence 7 could not be plate solved, given the challenging lighting conditions (and the lack of sources detected). While [20] was unable to plate solve the original images, it was able to plate solve the thresholded images, and so these were used. Given the short exposure time relative to the rate of attitude change, only two images per sequence were used, one from the start and one from the end, since many of the same stars would appear in subsequent images. Nevertheless, there were some duplicate stars, which were included in the results since they are centroided independently in each image and therefore do not constitute duplicated results. Table 6 below presents the results from this testing, showing the number of stars used in each sequence and the average centroid difference, in pixels, from the true pixel location of the stars.

3.3. Detection Brightness

Another result examined in this study is the brightness of the detections in the images, quantified by their visual magnitudes in the visible band. RSO truth data would be especially difficult to use in this case, because the visual magnitude of an RSO can vary as its attitude changes, and because assumptions would need to be made about its shape when calculating its visual magnitude. Therefore, the stars are used again to determine the average, minimum, and maximum visual magnitude of the objects detected.
The same two images were used for each sequence as in Section 3.2. The tool outlined in [20], used to plate solve the images in the previous section, also returned the RA/Dec of the detected stars in each of the images. With these values, the corresponding stars’ visual magnitudes could be queried using the tool outlined in [37]. Table 7 below shows the results from this analysis, depicting the number of stars with visual magnitude information available for each sequence, the magnitude of the brightest star, the magnitude of the faintest star, and the average magnitude of the stars.

3.4. Ablation and Sensitivity Analysis

An ablation and sensitivity analysis was conducted to provide quantitative verification that the proposed design choices are well-founded, rather than relying solely on empirical motivation. Temporal association can be performed using two-frame pairing, multi-frame fitting, or motion-consistency gating. Two-frame pairing is simple but more susceptible to coincidental matches in clutter, whereas longer-window constraints can fragment tracks when detections are intermittent. All three variants were evaluated with identical preprocessing, with only the association gate differing. We therefore adopt a three-frame motion-consistency gate as a compromise and quantify its benefit via ablation against two-frame and longer-window (five-frame) variants. As shown in Figure 10a, the three-frame gate achieves markedly higher sequence-averaged performance (79% precision, 71% recall) than the two-frame (6% precision, 21% recall) or five-frame (11% precision, 22% recall) variants on the evaluated set. The gating parameters are expressed in image-plane units (e.g., allowable displacement in pixels per frame and allowable direction change), making the method tunable to higher apparent target speeds. Given the plate scale of 388 arcseconds per pixel, these thresholds can also be expressed in angular units (arcseconds per frame) when needed. The low precision of the two-frame variant (6%) reflects the combinatorial nature of pairwise point matching: without a third frame to enforce directional and speed consistency, any two spatially proximate residual points, whether noise, unmatched stars, or genuine RSOs, can satisfy a simple distance gate. With 20–40 residual points per frame after ICP star removal, the number of candidate pairs substantially outnumbers true RSOs, collapsing precision. The five-frame variant suffers the opposite failure: intermittent detectability of faint RSOs causes track fragmentation, reducing both precision (spurious re-initiations) and recall.
To verify the impact of the threshold offset ΔT in practice, we include a sensitivity study (Figure 10b) showing that ΔT directly controls the miss/false-alarm trade-off, and that the baseline ΔT = 35 yields the best sequence-averaged performance on the evaluated set (~79% precision, ~71% recall). To assess the impact of motion-model flexibility on missed detections, we evaluated the sensitivity of a curvature-bounded angular gate by varying the additional allowable inter-frame angle change (Δθ) from 0 to 30 degrees, as shown in Figure 10c. Relaxing Δθ across this range produced no meaningful change in either precision or recall on the evaluated set, indicating that the apparent motion of RSOs in these sequences is predominantly near-linear after ICP-based star-field registration. This suggests that the linear motion assumption is well-suited to this dataset, and that the missed detections identified in the discussion are more attributable to thresholding and SNR limitations than to motion-model rigidity. To evaluate how detection performance varies with apparent target motion and brightness, all detections produced by the algorithm across the sequences were stratified by two image-domain observables: apparent speed (pixels per frame) and an SNR proxy (Figure 11). Apparent speed was computed as the Euclidean displacement of each detected RSO between consecutive frames within the three-frame association window, and the SNR proxy was estimated locally at each detection centroid as the peak intensity minus the median intensity, divided by the median absolute deviation (MAD) of pixel intensities within a 16 × 16-pixel neighborhood. Within each sequence, detections were independently divided into four equal-count quartile bins per observable. For each bin, we report two metrics: within-bin precision (TP/(TP + FP) for detections falling in that bin) and recall contribution. Note that recall contribution differs from standard within-bin recall: it represents each bin’s share of all ground-truth RSO instances recovered across the full sequence, such that the four bins sum to approximately the full-sequence recall of 0.71. A bin with a recall contribution of 0.20 means that the bin accounts for 20% of all recovered ground-truth instances, not that 20% of ground-truth instances within that bin were recovered. This formulation allows degradation to be localized to specific operating regimes without double-counting.
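For concreteness, a minimal sketch of this SNR proxy is given below; the helper name and edge handling are assumptions, while the (peak minus median) over MAD form and the 16 × 16 neighborhood follow the description above.

```python
import numpy as np

def snr_proxy(image, cx, cy, half=8):
    """Robust SNR proxy at a detection centroid: (peak - median) / MAD
    over a 16 x 16 neighborhood (edge handling is an assumption)."""
    y0, x0 = max(0, int(cy) - half), max(0, int(cx) - half)
    patch = image[y0:int(cy) + half, x0:int(cx) + half].astype(float)

    med = np.median(patch)
    mad = np.median(np.abs(patch - med))
    return (patch.max() - med) / max(mad, 1e-6)   # guard against zero MAD
```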
The results in Figure 11 reveal two distinct degradation modes. In Figure 11a, precision degrades monotonically with apparent speed, dropping from 0.92 at 4.6 px/frame to 0.53 at 13.6 px/frame, consistent with fast-moving RSOs increasingly violating the near-linear motion assumption. The recall contribution per bin, representing each bin’s share of total ground-truth instances recovered, declines from 0.21 to 0.14 with increasing speed, and the four bins sum to approximately the overall full-sequence recall of 0.71, indicating that faster RSOs account for a disproportionately smaller fraction of successfully recovered detections. Figure 11b shows that precision at low SNR falls to 0.51, reflecting noise near the threshold boundary being misclassified as RSOs, before recovering to 0.84–0.92 at higher SNR. The recall contribution varies only modestly across SNR bins (0.14 to 0.20), suggesting that missed detections are not strongly concentrated in the lowest SNR group and are more consistent with short-arc association effects (motion-consistency rejection) than thresholding alone. These findings indicate that the algorithm’s operating parameters, such as motion-gating thresholds and threshold offsets, can be tuned based on measurable image-domain conditions, particularly apparent motion and SNR, to select an appropriate operating point for different observation scenarios.

4. Discussion

Overall, the algorithm maintains good precision due to its robustness to FPs but struggles with recall due to its high FNs. These FNs can be attributed to several reasons. Firstly, the algorithm was built on the assumption that RSOs travel roughly linearly as viewed through the imager’s FOV. In some cases, RSOs did not travel in this fashion, instead following different motion patterns. This was especially true for faint and slow-moving RSOs. However, the algorithm handled most of these cases using Equation (3). Slow-moving RSOs generally had high distance similarities, and so larger angles would be permitted by Equation (3), allowing the algorithm to capture some slow-moving, curving RSOs. Despite this, there were a few cases of RSOs which curved heavily and had low distance similarities, which were not detected by the algorithm consistently. However, our curvature-gate sensitivity study (Figure 10c) indicates that for this dataset, the primary source of missed detections is SNR/thresholding rather than motion-model rigidity, suggesting that improving faint-object detection is the higher-priority path. Another cause of high FNs is the extremely faint RSOs. These RSOs were very hard to distinguish even while annotating the data, given the changing illumination conditions and noise in the images. FPs, though fewer, were also of concern, and were mostly due to the challenging lighting conditions, which caused the thresholding to interpret the lit pixels caused by the lighting as objects.
The performance limitations observed in this study are directly linked to the assumptions made in the algorithm design. In particular, the assumption that stars significantly outnumber RSOs enables effective ICP-based star-field alignment, but performance can degrade in scenes with low star density, strong illumination artifacts, or very faint RSOs. Similarly, deviations from rigid background star motion can reduce alignment accuracy, contributing to missed detections under challenging brightness and imaging conditions.
Unsurprisingly, precision is higher in the per-frame results than in the full-sequence results, while recall is higher in the full-sequence results than in the per-frame results. This follows from the definitions: the per-frame evaluation considers each frame independently, whereas the full-sequence evaluation also counts the detections in the two preceding frames that formed each three-frame association, so more TPs and FPs are reported in the full-sequence result.
Though the algorithm’s recall in the per-frame and full-sequence tests was 63% and 71%, respectively, it captured 87% of all RSOs at least once during their crossings of the imager’s FOV. This indicates that while detection consistency can still be improved, the algorithm performs well when each RSO’s full transit is considered.
The algorithm performs particularly poorly in sequences 6, 7, and 10. Each of these sequences has an FS recall below the 71% average and a percentage-detected value below the 87% average, implying that many RSO detections were missed. In sequence 6, two slow-moving, faint RSOs are missed entirely; because their slow transit kept them in the FOV for many frames, they accounted for a large share of the sequence’s ground-truth detections, so missing them produced many FNs and substantially reduced the FS recall. These misses are attributed to both the thresholding algorithm and the linear motion model of Equation (3): the RSOs were sometimes too faint to pass the threshold, and even when they did, they travelled too curvilinearly for the linear motion model and were ultimately not detected. The same reasoning explains the missed RSOs in sequences 7 and 10, whose FS recall was likewise below average. In sequences 6 and 10, the FS precision was also below the 79% average, implying that many FPs were present. In sequence 6, a large amount of background illumination was present, some of which survived the thresholding process; the RSO association algorithm incorrectly identified some of these points as RSOs, producing many FPs and a low FS precision. In sequence 10, many point sources corresponding to noise passed the thresholding process and were similarly misidentified as RSOs. The Future Work section details the improvements planned to address these shortcomings.
The RSO detection algorithm in [18] is a Faster Region-based Convolutional Neural Network (R-CNN), while the algorithm in [19] combines a CNN with a rules-based tracking algorithm. The datasets used in the two studies contain notable differences and ambiguities, so direct comparison of results between the studies is not possible; nevertheless, some observations can be made, particularly with respect to recall. The algorithm in [18] achieved a recall of 32%, whereas the algorithm in this study achieved 71%, an improvement of 39 percentage points, although the two studies did not use the same image sequences. Furthermore, the algorithm in [18] was tested on 332 images, while the algorithm in this study was tested on 878, so the results in [18] may be less conclusive given the smaller testing dataset. The algorithm in [19] did not follow the same evaluation methodology as this study, making direct comparison impossible. In [19], the CNN was first evaluated on the object detection accuracy of stars, RSOs, and noise with SNR values above 6, whereas this study reports results only for the objects of interest, RSOs, and imposes no SNR restriction. The tracking and classification stage in [19] was then evaluated on accuracy, precision, and recall for objects already detected in the previous step; in this study, precision and recall were evaluated end-to-end rather than on detections already made by earlier stages. Lastly, the dataset used in [19] consists of an unspecified number of real and artificial images, making comparison more difficult still.
While quantitative comparisons with the algorithms in [18,19] are therefore difficult, the qualitative improvements made in this study can be stated. First, this study uses a higher-quality dataset than the previous studies, supporting better development and testing of future algorithms: it contains over twice the number of images used in [18], providing more training and testing samples, and it does not impose the SNR restriction of 6 used in [19], so it includes dimmer objects that are more difficult to detect and thus more challenging for algorithms. Second, this study reports brightness, a useful additional metric for evaluating future algorithms: the minimum and average brightness of objects detected by future algorithms can be compared against the results reported here. This metric matters because algorithms that can detect dimmer objects are applicable to a wider range of RSO detection scenarios.
While the results presented above depend on the algorithm’s performance alone (since the truth data are based on the FAI images), the accuracy and brightness results depend on both the algorithm’s performance and the limits imposed by the FAI instrument itself.
In terms of accuracy, the RSO detection algorithm centroids stars in these FAI images to within one pixel of their true locations: 0.66 pixels in the x-direction and 0.72 pixels in the y-direction, which is relatively good given the low resolution of the images and the smearing of detections due to the host imager’s attitude motion. Combining the two axes via the Euclidean distance gives an overall centroid error of 0.98 pixels. Sequences with brighter stars were centroided more accurately, which is expected because brighter stars span more pixels and can therefore be localized more precisely; conversely, faint stars can occupy just one pixel, reducing centroiding accuracy. Given the pixel scale of 388 arcseconds per pixel, the centroid accuracy corresponds to 379 arcseconds. In an SSA pipeline, these image centroids ultimately translate into angles-only line-of-sight observations (e.g., RA/Dec), where range is not directly observed [25,38]; the impact of centroid error on orbit determination therefore depends on viewing geometry, arc length, time history, and correlation across multiple tracklets rather than on centroid error alone. In this context, the relatively coarse angular precision expected from opportunistic, low-cost sensors can still be useful, because frequent observations and multi-arc fusion can reduce uncertainty in downstream estimation and enable correlation to known objects. More robust centroiding methods (e.g., PSF-based fitting) may further improve angular accuracy, and centroid noise can also be reduced by averaging detections over multiple frames.
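The conversion from the per-axis centroid errors to the quoted angular figure is a one-line computation:

    import math

    err_px = math.hypot(0.66, 0.72)  # ~0.98 px Euclidean centroid error
    err_arcsec = err_px * 388        # 388 arcsec/px pixel scale -> ~379 arcsec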
In terms of brightness, the algorithm detects stars with an average visual magnitude of about 5.02 within these images and can detect stars as faint as visual magnitude 8.64. With current satellite constellations having magnitudes between 4 and 6 [39], the system would be capable of seeing the majority of these RSOs appearing within its FOV, assuming the algorithm itself can detect them. This presents a significant opportunity for detecting and monitoring a large number of satellites, validating their detections, and identifying anomalies.
As noted in Table 6 and Table 7, sequence 7 could not be plate solved, and as a result, the detection accuracy and detection brightness studies could not be performed for it. This was a result of the high illumination in this sequence, which caused lens defects to appear in the images as point sources of noise. The high illumination also made stars appear comparatively dimmer, so the thresholding algorithm detected fewer of them. Together, these two effects produced thresholded images containing few stars and many noise points, which could not be plate solved by the tool in [20].
As mentioned previously, the FAI instrument’s host spacecraft CASSIOPE operates in a non-constant-attitude condition, causing the background stars to appear to move in sequences of images, complicating RSO detection. However, this complication provided the opportunity to develop an RSO detection algorithm that is robust to the background star movement. With such an algorithm, the possibility of RSO detection can be expanded, such as repurposing End-of-Life (EOL) or malfunctioning satellites to perform RSO detection, despite non-constant attitude, or performing RSO detection during slew maneuvers. The promising results outlined in this paper suggest that RSO detection can be extended to these conditions.
While a variety of image enhancement and super-resolution techniques have been applied to low-resolution imagery in other domains, we found that many of these are not suitable for FAI images due to their sparse, featureless nature. Unlike natural images, star tracker images typically consist only of point sources such as stars and RSOs. As a result, techniques like Gaussian smoothing, commonly used to reduce noise or improve resolution, can suppress or blur faint RSOs entirely, leading to higher false negative rates. The proposed algorithm avoids such preprocessing to preserve the integrity of point-source detections. In addition to detection performance, the algorithm’s modularity and simplicity offer significant advantages for real-time, onboard deployment. Unlike deep learning-based super-resolution or enhancement models, our approach does not require training data or high computational resources. This makes it more suitable for integration into spacecraft systems with limited processing capacity. Furthermore, the transformation estimates derived from the ICP-based star alignment process (i.e., rotation and translation between frames) may serve a secondary purpose: enabling angular rate estimation that could support or augment onboard attitude determination systems. This opens the door to future integrated solutions where RSO detection and spacecraft attitude tracking are performed simultaneously within a unified framework.
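As a hedged sketch of that secondary use, the per-frame ICP rotation can be converted into a boresight angular-rate estimate as below; the 2 × 2 rotation-matrix input and the 0.5 s frame interval are illustrative assumptions, since the actual frame cadence and transform format depend on the implementation.

    import numpy as np

    def boresight_rate_deg_s(R, dt):
        # Extract the in-plane rotation angle encoded in a 2-D rigid
        # transform from ICP and divide by the frame interval.
        theta = np.arctan2(R[1, 0], R[0, 0])
        return np.degrees(theta) / dt

    a = 0.002  # example: 0.002 rad of rotation between frames
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    rate = boresight_rate_deg_s(R, dt=0.5)  # deg/s about the boresight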
The proposed method is intended as an onboard SSA front-end for star-tracker-class imagery, producing candidate detections and short-arc tracklets (angles and short-arc motion) under non-constant pose and spatially varying illumination. It is applicable to opportunistic in-orbit observations during attitude slews and maneuvers and to distributed low-cost sensing architectures. By extracting tracklets onboard, it can reduce the downlink burden and prioritize relevant image segments for transmission. The resulting tracklets complement existing SSA systems by enabling downstream catalog correlation and subsequent orbit determination updates, and they can also support cueing of higher-fidelity sensors. Because angles-only observations do not directly measure range, correlation and multi-arc fusion are typically prerequisites for accurate orbit determination and catalog maintenance [25,38]. Accordingly, the contribution here is robust detection and tracklet formation under non-constant attitude; end-to-end operational integration and on-orbit demonstration are left for future work. A practical benefit is reduced dependence on external attitude products, which can lower the integration burden for low-cost platforms or degraded-ADCS cases, while the image-derived angles and tracklets remain compatible with downstream angles-only correlation and orbit-determination workflows [25,38].

Future Work

As mentioned in the previous section, the algorithm’s FN count could be reduced. For highly curving RSOs, a more robust RSO motion model could replace the experimentally determined gate captured in Equation (3); we are currently investigating fitting quadratic functions to established tracks to classify RSOs, while assuming circular motion for stars (see the sketch after this paragraph). For extremely faint RSOs, both the algorithm and the annotation process could be strengthened, by adopting a more advanced light-source detection technique and by visually enhancing the data before labelling, respectively. The algorithm’s robustness to FPs, and consequently its precision, could also be improved by refining the state estimation step to assign each RSO detection a confidence score based on metrics such as its motion history. This score could then be used to filter which detections are reported as RSOs, decreasing FPs; it could also decrease FNs, since a high-confidence track could be extrapolated more than once. Additionally, the thresholding process could be revised, or replaced entirely, to prevent lit pixels from challenging illumination conditions from propagating into later processing steps as detected objects. These improvements are currently being researched.
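A minimal sketch of the quadratic-track idea follows, assuming NumPy polynomial fitting; the RMS-residual threshold of 1.5 px is a hypothetical value used only for illustration.

    import numpy as np

    def quadratic_track_rms(track_xy):
        # Fit x(t) and y(t) with degree-2 polynomials and return the RMS
        # residual; smooth (possibly curving) RSO tracks should fit well.
        t = np.arange(len(track_xy))
        sq = 0.0
        for axis in range(2):
            c = np.polyfit(t, track_xy[:, axis], deg=2)
            sq += np.sum((np.polyval(c, t) - track_xy[:, axis]) ** 2)
        return np.sqrt(sq / len(track_xy))

    track = np.array([[10, 12], [14, 15], [18, 19], [22, 24], [26, 30]], float)
    rso_like = quadratic_track_rms(track) < 1.5  # hypothetical threshold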
To improve the centroiding accuracy of the algorithm (and hence the positional accuracy of its RSO detections), a centroiding method that more accurately accounts for the pixel intensity and shape of detected objects is being considered. A CNN-based detector is also being investigated, since the shapes of detections can be learned and used to separate them from the background. This would allow even challenging images such as those in Figure 12 to be processed for RSO detection, expanding the number of detectable RSOs as well as the versatility of the RSO detection algorithm and the usefulness of FAI images for RSO detection.
Though the algorithm was tested only on FAI images, it is applicable to other situations in which the background stars move, as described in previous sections. Since the ICP algorithm can robustly determine rigid transformations, the method can be used in scenarios such as an imager on a stratospheric balloon platform: during unstable conditions such as balloon launch and high wind, the background stars appear to move within the imager’s FOV due to the platform’s motion [40,41]. Background stars also move during a spacecraft’s slew operations. The algorithm could theoretically correct the rotation and translation of these stars and detect RSOs even under these challenging conditions; in the slew case, RSO detection could be performed as a secondary task. A dataset is currently being assembled from the stratospheric balloon flight in [40,41] to further test the algorithm’s effectiveness. To better understand the contribution of each module, Section 3.4 provides sensitivity and temporal-window ablations; a full component-disabling ablation (e.g., running the pipeline without ICP alignment) is planned as future work. Such analysis will help isolate the most critical algorithmic elements and guide future optimization. Aspects of this have already been examined in prior work [42].
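For readers wishing to reproduce the registration step, the sketch below shows one way to run 2-D star-field alignment with the Open3D ICP routine referenced in [26], embedding the 2-D centroids at z = 0; the 3 px correspondence distance matches Table 3, but the surrounding details are illustrative rather than the exact implementation.

    import numpy as np
    import open3d as o3d

    def align_star_fields(src_xy, dst_xy, max_dist=3.0):
        # Embed 2-D centroid lists at z = 0 so the 3-D ICP routine applies.
        def cloud(pts):
            pts3 = np.c_[np.asarray(pts, float), np.zeros(len(pts))]
            return o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts3))
        result = o3d.pipelines.registration.registration_icp(
            cloud(src_xy), cloud(dst_xy), max_dist, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        return result.transformation  # 4x4 rigid transform (rotation + translation)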

5. Conclusions

In this research, an RSO tracking algorithm was developed to track RSOs in short-exposure, low-resolution, wide-FOV star field imagery, where the background stars appear to move due to spacecraft attitude motion. The algorithm was built using rules-based methods, making use of computer vision techniques for object detection, ICP for star removal, and linear motion modelling for RSO classification. The algorithm was tested on real FAI imagery, as these images are publicly available and have specifications similar to commercial star tracker imagery. The algorithm yielded a precision of 87% and 79% in the per-frame and full-sequence results, respectively, and a recall of 63% and 71% in those respective categories. Overall, the algorithm detected 87% of RSOs at least once during each RSO’s transit through the imager’s FOV. The stars detected in these sequences were centroided with an accuracy of 0.66 pixels along the x-axis and 0.72 pixels along the y-axis, and were on average 5.02 in visual magnitude. The algorithm’s recall could be improved by incorporating better motion modelling and faint-RSO detection, while the precision could be improved by adding further robustness to challenging illumination conditions. The centroiding algorithm is being improved to consider the shape and pixel intensity of detected objects, while CNN-based detection is being considered to increase the number of RSOs that can be detected in these images. These improvements are currently being investigated, alongside the algorithm’s performance on night sky images taken during unstable conditions on a stratospheric balloon.
While the process was challenging, the capability to detect RSOs in star field images taken from a moving, space-based platform provides an opportunity to examine the feasibility of SSA operations during slew maneuvers, collision avoidance operations, harsh lighting conditions, or periods when a satellite cannot maintain a stable attitude during observation. For example, any satellite with a functioning star tracker could serve as an SSA instrument and provide useful information, even under the conditions mentioned above.

Author Contributions

Conceptualization, P.K. and R.S.K.L.; methodology, P.K. and R.S.K.L.; software, P.K.; validation, P.K. and V.S.; formal analysis, P.K. and V.S.; investigation, P.K. and V.S.; resources, R.S.K.L., P.H. and M.D.; data curation, P.K.; writing—original draft preparation, P.K., V.S., R.S.K.L., P.H., M.D., R.Q. and G.C.; writing—review and editing, P.K., V.S., R.S.K.L., P.H., M.D., R.Q. and G.C.; visualization, P.K., V.S., R.S.K.L., P.H., M.D., R.Q. and G.C.; supervision, R.S.K.L., P.H. and M.D.; project administration, R.S.K.L., P.H. and M.D.; funding acquisition, R.S.K.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Sciences and Engineering Research Council of Canada, grant numbers RGPIN-2019-06322 and ALLRP 577761-22, and the Canadian Space Agency, grant number 19FAYORA12.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to continuing research and containing sensitive data.

Conflicts of Interest

Authors Paul Harrison and Matthew Driedger are employed at Magellan Aerospace. Magellan Aerospace provided funding to York University toward this research in support of active customer-financed projects. The success of these projects depends on meticulous research and the accurate reporting of results. Magellan team members provided student mentorship and direction with respect to project requirements and industry methods of analysis, but did not otherwise influence the solutions obtained.

References

  1. Kelso, T.S. Analysis of the Iridium 33-Cosmos 2251 Collision. In Proceedings of the 19th AIAA/AAS Astrodynamics Specialist Conference, Pittsburgh, PA, USA, 9–13 August 2009. [Google Scholar]
  2. Johnson, N.L.; Stansbery, E.; Liou, J.-C.; Horstman, M.; Stokely, C.; Whitlock, D. The Characteristics and Consequences of the Break-up of the Fengyun-1C Spacecraft. Acta Astronaut. 2008, 63, 128–135. [Google Scholar] [CrossRef]
  3. Migaud, M.R. Protecting Earth’s Orbital Environment: Policy Tools for Combating Space Debris. Space Policy 2020, 52, 101361. [Google Scholar] [CrossRef]
  4. Weeden, B.C.; Cefola, P.; Sankaran, J. Global Space Situational Awareness Sensors. Secure World Foundation. 2010. Available online: https://www.researchgate.net/publication/228787139_Global_Space_Situational_Awareness_Sensors (accessed on 21 June 2024).
  5. Abercromby, K.J.; Seitzer, P.; Cowardin, H.M.; Barker, E.S.; Matney, M.J. Michigan Orbital DEbris Survey Telescope Observations of the Geosynchronous Orbital Debris Environment Observing Years: 2007–2009. National Aeronautics and Space Administration. 2011. Available online: https://ntrs.nasa.gov/api/citations/20110022976/downloads/20110022976.pdf (accessed on 13 January 2023).
  6. Petit, A.; Rolin, A.; Duthil, L.; Tarrieu, H.; Lucken, R.; Giolito, D. Extraction of light curve from passive observations during survey campaign in LEO, MEO and GEO regions. In Proceedings of the Advanced Maui Optical and Space Surveillance (AMOS) Technologies Conference, Maui, HI, USA, 27–30 September 2022. [Google Scholar]
  7. Oltrogge, D.L. The “we” approach to space traffic management. In Proceedings of the 15th International Conference on Space Operations, Marseille, France, 28 May–1 June 2018; pp. 1–21. [Google Scholar]
  8. Clemens, S.; Lee, R.S.K.; Harrison, P.; Soh, W. Feasibility of Using Commercial Star Trackers for On-Orbit Resident Space Object Detection. In Proceedings of the Advanced Maui Optical and Space Surveillance (AMOS) Technologies Conference, Maui, HI, USA, 11–14 September 2018. [Google Scholar]
  9. SpaceX. Starlink Technology. 2023. Available online: https://www.starlink.com/ca/technology (accessed on 10 July 2025).
  10. Cogger, L.; Howarth, A.; Yau, A.; White, A.; Enno, G.; Trondsen, T.; Asquin, D.; Gordon, B.; Marchand, P.; Ng, D.; et al. Fast Auroral Imager (FAI) for the e-POP Mission. Space Sci. Rev. 2014, 189, 15–25. [Google Scholar] [CrossRef]
  11. Nir, G.; Zackay, B.; Ofek, E.O. Optimal and Efficient Streak Detection in Astronomical Images. Astron. J. 2018, 156, 229. [Google Scholar] [CrossRef]
  12. Waszczak, A.; Prince, T.A.; Laher, R.; Masci, F.; Bue, B.; Rebbapragada, U.; Barlow, T.; Surace, J.; Helou, G.; Kulkarni, S. Small Near-Earth Asteroids in the Palomar Transient Factory Survey: A Real-Time Streak-Detection System. Publ. Astron. Soc. Pac. 2017, 129, 034402. [Google Scholar] [CrossRef]
  13. Canny, J. A Computational Approach to Edge Detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, 8, 679–698. [Google Scholar] [CrossRef]
  14. Cvrček, V.; Šára, R. Detection and Certification of Faint Streaks in Astronomical Images. In Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Prague, Czech Republic, 25–27 February 2019; pp. 498–509. [Google Scholar] [CrossRef]
  15. Jeffries, C.; Acuña, R. Detection of Streaks in Astronomical Images Using Machine Learning. J. Artif. Intell. Technol. 2023, 4, 1–8. [Google Scholar] [CrossRef]
  16. Yao, R.; Zhang, Y. Compressive Sensing for Small Moving Space Object Detection in Astronomical Images. J. Syst. Eng. Electron. 2012, 23, 378–384. [Google Scholar] [CrossRef]
  17. Privett, G.; Appleby, G.; Sherwood, R. Image stacking techniques for GEO satellites and a three-site collection. In Proceedings of the Advanced Maui Optical and Space Surveillance (AMOS) Technologies Conference, Maui, HI, USA, 9–12 September 2014. [Google Scholar]
  18. Qashoa, R.; Driedger, M.; Clark, R.; Harrison, P.; Berezin, M.; Lee, R.S.K.; Howarth, A. SPACEDUST-Optical: Wide-FOV Space Situational Awareness from Orbit. In Proceedings of the Advanced Maui Optical and Space Surveillance (AMOS) Technologies Conference, Maui, HI, USA, 19–22 September 2023. [Google Scholar]
  19. Dave, S.; Clark, R.; Lee, R.S.K. RSOnet: An Image-Processing Framework for a Dual-Purpose Star Tracker as an Opportunistic Space Surveillance Sensor. Sensors 2022, 22, 5688. [Google Scholar] [CrossRef]
  20. Lang, D.; Hogg, D.W.; Mierle, K.; Blanton, M.; Roweis, S. Astrometry.Net: Blind Astrometric Calibration of Arbitrary Astronomical Images. Astron. J. 2010, 139, 1782–1800. [Google Scholar] [CrossRef]
  21. Lovell, T.A.; Sinclair, A.J.; Newman, B. Angles Only Initial Orbit Determination: Comparison of Relative Dynamics and Inertial Dynamics Approaches with Error Analysis. In Proceedings of the 2018 Space Flight Mechanics Meeting, Kissimmee, FL, USA, 8–12 January 2018. [Google Scholar]
  22. CASSIOPE/e-POP Fact Sheet. Available online: https://epop.phys.ucalgary.ca/quickfacts/ (accessed on 21 June 2024).
  23. Zhang, Y.; Zhang, R.; Jia, Q.; Xiao, J.; Bai, L.; Feroskhan, M. Astro-Det: Resident Space Object Detection for Space Situational Awareness. In Proceedings of the 2024 IEEE Conference on Artificial Intelligence (CAI), Singapore, 25–27 June 2024; IEEE: New York, NY, USA, 2024; pp. 228–233. [Google Scholar]
  24. Weigel, M.; Meinel, M.; Fiedler, H. Comparison of Observation Correlation Techniques for a Telescope Survey of the Geostationary Ring. In Proceedings of the 6th European Conference on Space Debris, Darmstadt, Germany, 22–25 April 2013; p. 141. [Google Scholar]
  25. Milani, A.; Tommei, G.; Farnocchia, D.; Rossi, A.; Schildknecht, T.; Jehn, R. Correlation and Orbit Determination of Space Objects Based on Sparse Optical Data. Mon. Not. R. Astron. Soc. 2011, 417, 2094–2103. [Google Scholar] [CrossRef]
  26. Open3D Team. ICP Registration. Open3D 0.19.0 Documentation. 2023. Available online: https://www.open3d.org/docs/release/tutorial/pipelines/icp_registration.html (accessed on 14 February 2026).
  27. Bertin, E.; Arnouts, S. SExtractor: Software for Source Extraction. Astron. Astrophys. Suppl. Ser. 1996, 117, 393–404. [Google Scholar] [CrossRef]
  28. Otsu, N. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
  29. Liu, D.; Yu, J. Otsu Method and K-Means. In Proceedings of the 2009 Ninth International Conference on Hybrid Intelligent Systems, Shenyang, China, 12–14 August 2009; IEEE: New York, NY, USA, 2009; Volume 1, pp. 344–349. [Google Scholar]
  30. Scikit-Learn Developers. sklearn.cluster.KMeans. Scikit-Learn 1.8.0 Documentation. 2025. Available online: https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html (accessed on 14 February 2026).
  31. Ayunie Ahmad Khairudin, N.; Shamimi Rohaizad, N.; Salihah Abdul Nasir, A.; Chee Chin, L.; Jaafar, H.; Mohamed, Z. Image Segmentation Using K-Means Clustering and Otsu’s Thresholding with Classification Method for Human Intestinal Parasites. In IOP Conference Series: Materials Science and Engineering; IOP Publishing: Bristol, UK, 2020; Volume 864, p. 012132. [Google Scholar]
  32. Scikit-Image Team. Multi-Otsu Thresholding. Scikit-Image 0.25.2 Documentation. 2025. Available online: https://scikit-image.org/docs/0.25.x/auto_examples/segmentation/plot_multiotsu.html (accessed on 14 February 2026).
  33. Bolelli, F.; Allegretti, S.; Baraldi, L.; Grana, C. Spaghetti Labeling: Directed Acyclic Graphs for Block-Based Connected Components Labeling. IEEE Trans. Image Process. 2020, 29, 1999–2012. [Google Scholar] [CrossRef]
  34. Chen, Y.; Medioni, G. Object Modelling by Registration of Multiple Range Images. Image Vis. Comput. 1992, 10, 145–155. [Google Scholar] [CrossRef]
  35. Besl, P.J.; McKay, N.D. A Method for Registration of 3-D Shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256. [Google Scholar] [CrossRef]
  36. Lu, F.; Milios, E. Robot Pose Estimation in Unknown Environments by Matching 2D Range Scans. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 21–23 June 1994. [Google Scholar] [CrossRef]
  37. Wenger, M.; Ochsenbein, F.; Egret, D.; Dubois, P.; Bonnarel, F.; Borde, S.; Genova, F.; Jasniewicz, G.; Laloë, S.; Lesteven, S.; et al. The SIMBAD Astronomical Database. Astron. Astrophys. Suppl. Ser. 2000, 143, 9–22. [Google Scholar] [CrossRef]
  38. Schwab, D.; Singla, P.; O’Rourke, S. Angles-Only Initial Orbit Determination via Multivariate Gaussian Process Regression. Electronics 2022, 11, 588. [Google Scholar] [CrossRef]
  39. Nandakumar, S.; Eggl, S.; Tregloan-Reed, J.; Adam, C.; Anderson-Baldwin, J.; Bannister, M.T.; Battle, A.; Benkhaldoun, Z.; Campbell, T.; Colque, J.P.; et al. The High Optical Brightness of the BlueWalker 3 Satellite. Nature 2023, 623, 938–941. [Google Scholar] [CrossRef]
  40. Suthakar, V.; Sanvido, A.A.; Qashoa, R.; Lee, R.S.K. Comparative Analysis of Resident Space Object (RSO) Detection Methods. Sensors 2023, 23, 9668. [Google Scholar] [CrossRef]
  41. Suthakar, V.; Porto, I.; Myhre, M.; Sanvido, A.A.; Clark, R.; Lee, R.S.K. RSONAR: Data-Driven Evaluation of Dual-Use Star Tracker for Stratospheric Space Situational Awareness (SSA). Sensors 2026, 26, 179. [Google Scholar] [CrossRef]
  42. Qashoa, R.; Suthakar, V.; Chianelli, G.; Kunalakantha, P.; Lee, R.S.K. Technology Demonstration of Space Situational Awareness (SSA) Mission on Stratospheric Balloon Platform. Remote Sens. 2024, 16, 749. [Google Scholar] [CrossRef]
Figure 1. Illustration depicting the transformation performed to align the image axes (yellow) to the celestial coordinate system axes (red), to then determine the x and y pixel offsets (purple) to a desired RSO (blue) to determine the RSO’s RA/Dec.
Figure 2. Block diagram outlining the steps in the SSA pipeline for identifying RSOs from unresolved optical imagery. The algorithm proposed in this research aims to perform the first step in the pipeline, RSO detection.
Figure 3. Several examples of astronomical images taken by FAI. Stars and RSOs appear as point sources of light with limited features. The illumination conditions vary significantly in the images, posing a challenge for simple thresholding methods.
Figure 4. Block diagram outlining RSO tracking algorithm steps.
Figure 5. The original image (left) and the image after removing pixels out of the imager’s FOV and thresholding (right).
Figure 6. The points left over after removing the stars from the three images, plotted against a single background. Red (circle) corresponds to the first image, green (triangle) corresponds to the second image, and blue (square) corresponds to the third image in the rolling window.
Figure 7. Plot of angles against distance similarities of the remaining points and RSOs at this step. A line can be found that separates each class.
Figure 8. The 3-frame RSO association algorithm selects the RSOs (right) from the remaining points (left). Red (circle) corresponds to the first image, green (triangle) corresponds to the second image, and blue (square) corresponds to the third image in the rolling window.
Figure 9. Example of figures generated by RSO detection algorithm. Detected RSOs are captured in bounding boxes and given a unique identification number.
Figure 10. Ablation and sensitivity study. (a) Temporal association window comparison (2-frame, proposed 3-frame, and 5-frame). (b) Multi-Otsu post-threshold offset sensitivity (baseline ΔT = 35). (c) Curvature-bounded angular gate sensitivity (Δθ). Metrics are sequence-averaged precision and recall.
Figure 11. Within-bin precision and recall (mean) versus apparent speed (a) and SNR proxy (b), averaged over observation sequences. Shaded bands show ±1 SD. Recall contributions sum to the overall recall (dashed, 0.71); overall precision is shown for reference (dashed, 0.79). Recall contribution is each bin’s share of total recovered ground-truth instances and differs from standard within-bin recall.
Figure 12. Three different examples of challenging images from the FAI. In the example on the left, the angle of incoming light causes the rest of the objects to appear dimmer. In the middle frame, Earth’s limb obstructs the FOV of the imager. In the right-most example, streak-like artifacts are present, which may require further preprocessing. RSOs are still present within these image examples but are harder to detect.
Table 1. FAI instrument properties.
Instrument Property       | Property Value
FOV                       | 26° full angle
Focal length              | 68.9 mm
Focal length with reducer | 13.78 mm
f-number                  | 4.0
Pixel size                | 26 µm by 26 µm
Pixel scale               | 388 arcsec/pixel
Resolution                | 256 by 256 pixels
Exposure time             | 100 ms
Table 2. High-level pseudocode of the RSO detection and tracking algorithm. The algorithm processes images in a sliding three-frame window, applies local thresholding and ROI filtering, removes static stars via ICP alignment, and identifies RSOs based on motion consistency. Additional steps include extrapolating missing detections and archiving RSO data.
Function RSO_Detection_Tracking(image_sequence):
    Load image1, image2 from image_sequence;
    Apply circular ROI mask to image1, image2;
    Apply local adaptive thresholding to each image;
    Perform connected components analysis;
    Extract and filter objects by size and shape;
    Build point lists of centroids from both images;
    foreach image3 in image_sequence[3:] do
        Apply ROI mask and thresholding;
        Perform connected components and extract features;
        Create point list for image3;
        Align image1 and image3 to image2 using ICP;
        Identify star matches and remove background stars;
        foreach unmatched triplet (point1, point2, point3) do
            Calculate inter-point distances d1, d2;
            Calculate distance similarity and motion angle;
            if similarity and angle within threshold then
                Label as RSO and assign ID;
                Update in-frame tracking list;
        foreach RSO in previous frame do
            if eligible for state estimation then
                Predict next position using motion vector;
                Add as estimated detection;
        Save output images and detection text;
        Shift frame windows: image1 ← image2, image2 ← image3;
    Archive completed RSO tracks;
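For orientation, the pseudocode above maps onto a short Python skeleton of the sliding three-frame window; detect_points() and find_rsos() are hypothetical stand-ins for the thresholding and ICP/association stages, not functions from the actual codebase.

    from collections import deque

    def track_sequence(images, detect_points, find_rsos):
        window = deque(maxlen=3)  # rolling three-frame window
        tracks = []
        for image in images:
            window.append(detect_points(image))  # centroids after thresholding
            if len(window) == 3:
                # align frames 1 and 3 to frame 2, remove stars, associate RSOs
                tracks.extend(find_rsos(*window))
        return tracks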
Table 3. Summary of key algorithm parameters and their values.
Parameter                | Description                                             | Value
ROI radius               | Limit on detection region (central circular area)       | 120 px
Slice size               | Window size for local thresholding                      | 32 px
Otsu threshold bins      | Histogram bins used for fallback Otsu thresholding      | 256
Threshold offset         | Offset added to multi-Otsu value to reduce false negatives | 35
Min area                 | Minimum object size to avoid bright noise pixels        | 2 px²
Max area                 | Maximum object size to filter large artifacts           | 81 px²
Max width/height         | Object size filter to reject streaks                    | 15 px
ICP distance threshold   | Max distance between matched points                     | 3 px
ICP rotation convergence | Minimum rotation difference to terminate ICP            | 0.0001 rad
ICP translation conv.    | Minimum translation difference to terminate ICP         | 0.001 px
ICP min pairs            | Minimum point pairs required for successful ICP match   | 5
Angle function slope     | Used to define motion angle threshold for RSO matching  | 55
Angle function intercept | Defines threshold line with distance similarity         | −27.5
RSO distance threshold   | Max allowed motion per frame between RSO detections     | 20 px
State estimation radius  | Limit for projecting future RSO positions               | 110 px
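For reference, the Table 3 values can be collected into a single configuration mapping; the key names below are illustrative, while the values are those reported in the table.

    PARAMS = {
        "roi_radius_px": 120,
        "slice_size_px": 32,
        "otsu_bins": 256,
        "threshold_offset": 35,
        "min_area_px2": 2,
        "max_area_px2": 81,
        "max_width_height_px": 15,
        "icp_distance_threshold_px": 3,
        "icp_rotation_convergence_rad": 1e-4,
        "icp_translation_convergence_px": 1e-3,
        "icp_min_pairs": 5,
        "angle_gate_slope": 55.0,
        "angle_gate_intercept": -27.5,
        "rso_distance_threshold_px": 20,
        "state_estimation_radius_px": 110,
    }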
Table 4. RSO detection algorithm performance, per-frame (PF) and full-sequence (FS).
Test Number | Date       | Number of Images | Precision (PF) | Recall (PF) | Precision (FS) | Recall (FS)
1           | 2023-01-16 | 41   | 95%  | 85% | 90%  | 90%
2           | 2023-01-21 | 76   | 88%  | 87% | 77%  | 93%
3           | 2023-01-25 | 53   | 91%  | 68% | 89%  | 75%
4           | 2023-03-31 | 139  | 90%  | 64% | 83%  | 71%
5           | 2023-05-31 | 128  | 96%  | 64% | 92%  | 72%
6           | 2023-06-03 | 91   | 68%  | 46% | 57%  | 52%
7           | 2023-06-20 | 39   | 100% | 52% | 100% | 56%
8           | 2023-07-19 | 114  | 92%  | 81% | 86%  | 91%
9           | 2023-08-04 | 24   | 94%  | 89% | 86%  | 100%
10          | 2023-08-04 | 75   | 84%  | 44% | 72%  | 48%
11          | 2023-08-05 | 35   | 90%  | 81% | 84%  | 90%
12          | 2023-08-05 | 63   | 88%  | 76% | 80%  | 91%
Total       |            | 878  | 87%  | 63% | 79%  | 71%
Table 5. RSO detection algorithm performance. RSOs detected at least once are considered as detections.
Test Number | Date       | Number of Images | Unique RSOs | Detected RSOs | Missed RSOs | Percentage Detected
1           | 2023-01-16 | 41   | 1  | 1  | 0  | 100%
2           | 2023-01-21 | 76   | 3  | 3  | 0  | 100%
3           | 2023-01-25 | 53   | 5  | 5  | 0  | 100%
4           | 2023-03-31 | 139  | 13 | 12 | 1  | 92%
5           | 2023-05-31 | 128  | 17 | 13 | 4  | 76%
6           | 2023-06-03 | 91   | 11 | 9  | 2  | 82%
7           | 2023-06-20 | 39   | 2  | 1  | 1  | 50%
8           | 2023-07-19 | 114  | 8  | 8  | 0  | 100%
9           | 2023-08-04 | 24   | 2  | 2  | 0  | 100%
10          | 2023-08-04 | 75   | 6  | 4  | 2  | 67%
11          | 2023-08-05 | 35   | 3  | 3  | 0  | 100%
12          | 2023-08-05 | 63   | 4  | 4  | 0  | 100%
Total       |            | 878  | 75 | 65 | 10 | 87%
Table 6. Centroiding accuracy of the algorithm for the FAI images.
Test Number | Date       | Number of Stars | Average Centroid Difference in X (Pixels) | Average Centroid Difference in Y (Pixels)
1           | 2023-01-16 | 82   | 0.65 | 0.75
2           | 2023-01-21 | 59   | 0.45 | 0.62
3           | 2023-01-25 | 50   | 0.49 | 0.63
4           | 2023-03-31 | 49   | 0.53 | 0.50
5           | 2023-05-31 | 44   | 0.58 | 0.59
6           | 2023-06-03 | 57   | 0.61 | 0.79
7           | 2023-06-20 | N/A  | N/A  | N/A
8           | 2023-07-19 | 68   | 0.82 | 0.81
9           | 2023-08-04 | 82   | 0.91 | 0.79
10          | 2023-08-04 | 65   | 0.65 | 0.77
11          | 2023-08-05 | 72   | 0.60 | 0.65
12          | 2023-08-05 | 52   | 0.80 | 0.91
Total       |            | 680  | 0.66 | 0.72
Table 7. Visual magnitude results of stars detected by algorithm for the FAI images.
Test Number | Date       | Number of Stars | Brightest Visual Magnitude | Faintest Visual Magnitude | Average Visual Magnitude
1           | 2023-01-16 | 75   | 2.98 | 7.14 | 5.12
2           | 2023-01-21 | 57   | 2.89 | 6.17 | 4.90
3           | 2023-01-25 | 46   | 2.89 | 6.57 | 4.95
4           | 2023-03-31 | 40   | 0.97 | 8.64 | 5.06
5           | 2023-05-31 | 43   | 0.91 | 7.74 | 4.01
6           | 2023-06-03 | 55   | 0.91 | 7.74 | 4.43
7           | 2023-06-20 | N/A  | N/A  | N/A  | N/A
8           | 2023-07-19 | 67   | 2.07 | 7.44 | 5.17
9           | 2023-08-04 | 73   | 2.89 | 7.14 | 5.52
10          | 2023-08-04 | 66   | 3.08 | 7.31 | 5.13
11          | 2023-08-05 | 65   | 2.89 | 7.36 | 5.19
12          | 2023-08-05 | 51   | 2.89 | 7.23 | 5.25
Total       |            | 638  | 0.91 | 8.64 | 5.02